Building an open-source AI platform for next-generation AI hardware, reducing ML training costs by 30%.
TL;DR: We are building an open-source AI platform for non-NVIDIA GPUs. Today, we are launching one of the pieces: a seamless UI to spin up a TPU cluster of any size, plus an out-of-the-box notebook to fine-tune Llama 3.1 models. Try us at felafax.ai or check out our GitHub!
Hi everyone, we're Nikhil and Nithin, twin brothers behind Felafax AI. Before this, we spent half a decade at Google and Meta building AI infrastructure. Drawing on our experience, we are creating an ML stack from the ground up. Our goal is to deliver high performance and provide an easy workflow for training models on non-NVIDIA hardware like TPU, AWS Trainium, AMD GPU, and Intel GPU.
Today, we're launching a cloud layer that makes it easy to spin up AI training clusters of any size, from 8 TPU cores to 2048 cores.
In the coming weeks, we will also launch our open-source AI platform built on top of JAX and OpenXLA (an alternative to NVIDIA's CUDA stack). We will support AI training across a variety of non-NVIDIA hardware (Google TPU, AWS Trainium, AMD and Intel GPU) and offer the same performance as NVIDIA at 30% lower cost. Follow us on Twitter, LinkedIn and GitHub for updates!
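To give a flavor of why JAX and OpenXLA enable this hardware portability, here is a minimal sketch of a JIT-compiled training step. This is a toy linear-regression example we wrote for illustration, not Felafax code: JAX traces the Python function once and XLA compiles it for whichever backend is present (CPU, GPU, or TPU), so the same step runs unchanged across hardware.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Mean-squared error of a linear model.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # compiled via XLA for whatever accelerator JAX detects
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, x, y)
    # Apply one SGD update to every leaf of the params pytree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Synthetic data: y = x @ [1, -2, 0.5] + 3
key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (64, 3))
y = x @ jnp.array([1.0, -2.0, 0.5]) + 3.0

params = {"w": jnp.zeros(3), "b": jnp.array(0.0)}
for _ in range(300):
    params = train_step(params, x, y)
```

Nothing in the step references a device: swapping a TPU pod for a GPU box changes only which XLA backend compiles `train_step`, which is the property an out-of-the-box fine-tuning notebook relies on.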