TL;DR:
We're building reconfigurable chips for AI that are up to 27.6x more efficient and performant than NVIDIA's H100 GPU. This could save data centers hundreds of millions to billions of dollars in annual energy costs.
Meet the Team
Hello! We're Elias and Prithvi from Exa. We're developing reconfigurable chips for AI that are up to 27.6x* more efficient and performant than NVIDIA's H100 GPU.
*: Read our litepaper!
CEO, Elias Almqvist (right): Self-taught engineer who also studied computer science and computer engineering at Chalmers University of Technology (he dropped out and founded Exa, btw). He previously worked in embedded software and on various aerospace projects during university.
CTO, Prithvi Raj (left): Holds an MEng from the world-leading Computational Stats & ML Lab at Cambridge, where he fell in love with scientific machine learning, a field that demands bespoke neural network architectures and extreme hardware efficiency. He also interned at Microsoft as a software engineer.
The problems!
The AI industry faces critical challenges threatening its sustainable growth:
- Unsustainable Energy Consumption: Modern GPUs draw 600-1000 W each, creating massive scaling issues for data centers. Large operators face energy costs in the hundreds of millions to potentially billions of dollars each year (see the back-of-envelope sketch after this list). GPU power draw has been increasing with each new release, while compute per unit area has stayed roughly flat for the past 5 years.
- Exponential Compute Demand: As AI advances, demand for computational power is growing rapidly. Unchecked, this trend could lead to an energy crisis, impeding AI progress and costing data centers billions of dollars.
- Hardware Limitations: Current fixed architectures constrain AI innovation. They lack the versatility to efficiently support the diverse AI architectures and custom neural network designs that are crucial for solving real-world problems.
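
To make the "hundreds of millions" figure concrete, here's a rough back-of-envelope sketch. Only the 600-1000 W per-GPU range comes from above; the fleet size, PUE, and electricity price are illustrative assumptions, so treat the result as an order-of-magnitude estimate, not a measurement:

```python
# Back-of-envelope annual energy cost for a large GPU fleet.
# Only the per-GPU power draw (within the 600-1000 W range above) comes from
# the text; every other number is an illustrative assumption.

gpus = 500_000            # assumed fleet size for a large AI operator
watts_per_gpu = 700       # assumed draw, within the 600-1000 W range cited above
pue = 1.2                 # assumed power usage effectiveness (cooling, networking, ...)
usd_per_kwh = 0.08        # assumed industrial electricity price
hours_per_year = 24 * 365

facility_kw = gpus * watts_per_gpu * pue / 1000
annual_cost_usd = facility_kw * hours_per_year * usd_per_kwh

print(f"Facility power: {facility_kw / 1000:,.0f} MW")          # ~420 MW
print(f"Annual energy cost: ${annual_cost_usd / 1e6:,.0f}M")    # ~$294M/year
# Under these assumptions a single large fleet already lands in the
# "hundreds of millions of dollars per year" range.
```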
The solution.
Exa's polymorphic computing technology addresses these challenges:
- Reconfigures for each AI model architecture, maximizing efficiency and versatility
- Supports diverse approaches, from transformers and GPTs to novel AI architectures (e.g., the new Kolmogorov-Arnold Networks (KANs))
- Early simulations indicate potential efficiency gains of up to 27.6x over the H100 GPU
This technology could save data centers hundreds of millions to billions of dollars in annual energy costs, significantly reducing both operational expenses and environmental impact.
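
Continuing the sketch from the problems section, here's what a hypothetical 27.6x efficiency gain would mean for that baseline, assuming the gain translates one-to-one into lower energy use (an assumption for illustration, not a result from our litepaper):

```python
# Hypothetical savings if the same workload ran at a k-fold efficiency gain.
# Assumes the gain maps directly to proportionally lower energy use and reuses
# the ~$294M/year baseline from the earlier sketch (itself built on assumptions).

baseline_annual_cost_usd = 294e6   # USD/year, from the back-of-envelope sketch above
efficiency_gain = 27.6             # the "up to" figure claimed above

new_cost = baseline_annual_cost_usd / efficiency_gain
savings = baseline_annual_cost_usd - new_cost

print(f"New annual energy cost: ${new_cost / 1e6:.0f}M")   # ~$11M/year
print(f"Annual savings: ${savings / 1e6:.0f}M")            # ~$283M/year
```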
For a somewhat deeper technical dive, refer to our litepaper!
Asks :)
- Read our litepaper! All feedback welcome!
- Introduce us to anyone in the scientific machine learning space or doing AI research, particularly those with very “cursed” model architectures.
- Get us in contact with any data center, AI research organization, or GPU cloud provider (e.g., AWS, OpenAI, Anthropic, DeepMind, Lambda).
- Give us intros to semiconductor industry professionals, particularly those interested in bringing chip manufacturing back to the US!
Feel free to reach us at founders@exalaboratories.com; we would love to hear your feedback and answer your questions!