A replacement for standard LLMs for coding tasks - eliminate hallucinations with a marketplace of expert LLMs trained on hundreds of technical topics.
Hey everyone! We’re Sam and Josh, and we’re building a platform for better LLMs for code and technical tasks.
Lune AI is a platform for finding and training Lunes: expert LLMs for any technical coding topic. We outperform standalone foundation models on accuracy by 37%.
Here’s a demo:
LLMs hallucinate on material outside their training data, which often leads to inaccurate answers to technical questions about code. Whether it's working with a new library or keeping up with ever-changing software documentation, every developer has wasted time on hallucinated LLM outputs.
On Lune Web, users can browse a marketplace of community-trained Lunes covering hundreds of the most popular developer libraries and GitHub repositories. Similar in concept to OpenAI's GPT Store, users who train popular Lunes receive monthly payouts through a revenue-share program.
When interacting with Lunes on the web, users can also receive payouts for leaving feedback on Lune performance and contributing their own expertise to specific Lunes. This direct, community-driven feedback loop continuously improves the accuracy of our models.
In addition to Lune Web, we offer API access to all public Lunes and to Tycho, our proprietary context-switching model. Given a user query, Tycho automatically determines the right Lunes to retrieve information from in order to provide the most accurate answer.
Our API is hot-swappable with OpenAI's, which means users can plug Tycho, with a personal API key, into any IDE, tool, or client that supports OpenAI-compatible APIs, such as Cursor, Continue, and other AI client SDKs.
Project keys let you integrate our API into backend applications and workflows where accuracy and minimal hallucination on technical topics are essential, for example generating compilable front-end code with context from various pip packages or libraries.
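As a rough sketch of what that integration can look like, here is the OpenAI Python SDK pointed at our API instead of OpenAI's (the base URL and model name below are illustrative placeholders rather than documented values):

```python
# Minimal sketch of an OpenAI-compatible integration with Lune.
# The base URL and model id below are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_LUNE_API_KEY",         # personal key for IDEs/tools, project key for backends
    base_url="https://api.lune.dev/v1",  # placeholder endpoint for the Lune API
)

response = client.chat.completions.create(
    model="tycho",  # placeholder id; Tycho routes the query to the relevant Lunes
    messages=[
        {
            "role": "user",
            "content": "Generate a minimal FastAPI endpoint that streams a file upload to S3.",
        }
    ],
)

print(response.choices[0].message.content)
```

Because the request shape matches OpenAI's chat completions API, moving an existing OpenAI-based workflow over is just a matter of changing the base URL and key.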
We are brothers who have been working together our whole lives, starting out with Lego projects. We both dropped out of Harvard because we believe there is a very near future where AI + coding leads to 10x better productivity for the world. We want to make that a reality.
If you code and have experienced LLM hallucinations, or hate reading through dense documentation and GitHub repos, give Lune AI a try!
If you are building a product that uses LLMs for generating code or other technical tasks and want a hot-swappable, more accurate alternative, reach out to me at sam@lune.dev