PhD in ML for 3D Vision + Robotics, Imperial College London. Love to dive deep into engineering rabbit holes 🕳️🐇
TL;DR Unify dynamically routes each prompt to the best LLM and provider so you can balance quality, speed, and cost with ease ✨
🔀 Try out our router in the browser
💸 Sign up now and get $50 free credits!
🧑‍💻 Get a Demo
💥 Problem
A new LLM emerges almost every week, and every LLM application has its own requirements for output quality, cost, and speed. It’s also unclear which models deliver the highest quality on your specific task. The result is lots of manual signups, testing different models, bespoke benchmarking, and so on. It’s overwhelming, and the final solution is almost always sub-optimal.
Many people give up and just use the largest models for everything. However, you really don't need GPT-4 to summarize simple documents. Llama 8B is more than capable, and is 15 times faster and 100 times cheaper. LLM apps are currently much slower and more expensive than they need to be 💵🔥⏱️, and they also often underperform on output quality 📉 because a poorly suited model (Claude Opus vs. GPT-4 vs. Gemini, etc.) was chosen for your specific prompts.
🤩 Not Anymore
Unify automatically routes each prompt to the most suitable model based on your preferences for quality, speed, and cost. Simply tune these three dials and let Unify do all the hard work.
Your "easy" prompts will go to the fastest and cheapest models, and only the "hard" prompts will go to the most appropriate heavy lifters, like GPT-4o, Opus, and Gemini. You focus on building your LLM app, and we'll focus on providing the best models, with the fastest providers, at the lowest cost ⚡
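The easy/hard split described above can be sketched as a toy difficulty-based router. This is a conceptual illustration only: the model names, thresholds, and difficulty heuristic are hypothetical, and Unify's actual router is far more sophisticated than a hand-written scoring rule.

```python
# Conceptual sketch of prompt routing (NOT Unify's implementation).
# Score each prompt's difficulty, then dispatch to a model tier:
# cheap and fast for easy prompts, heavy lifters only when needed.

def estimate_difficulty(prompt: str) -> float:
    """Crude difficulty proxy: longer or reasoning-heavy prompts score higher."""
    length_score = min(len(prompt) / 2000, 1.0)
    keyword_score = 0.5 if any(
        k in prompt.lower() for k in ("prove", "analyze", "multi-step")
    ) else 0.0
    return min(length_score + keyword_score, 1.0)

def route(prompt: str) -> str:
    """Map estimated difficulty to an (illustrative) model name."""
    difficulty = estimate_difficulty(prompt)
    if difficulty < 0.3:
        return "llama-3-8b"    # fast and cheap: summaries, simple Q&A
    elif difficulty < 0.7:
        return "mixtral-8x7b"  # mid-tier: moderate reasoning
    return "gpt-4o"            # heavy lifter: hard prompts only

print(route("Summarize this paragraph."))  # easy prompt → small model
```

A real router replaces the heuristic with a learned predictor of each model's output quality per prompt, which is what lets the quality/speed/cost dials work without hand-tuned rules.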
Watch our explainer video to learn more about the solution at a high level.
What Unify Offers ✨
Getting Started 🛠️
Sign up, select your router, then pip install unifyai, and you're ready to go!
from unify import Unify

# Replace "UNIFY_KEY" with the API key from your Unify console.
unify = Unify(
    api_key="UNIFY_KEY",
    endpoint="router@q:1",  # router endpoint; parameters tune the quality/speed/cost trade-off
)

response = unify.generate(user_prompt="Hello there")
print(response)
🙏 Our Ask
Give Unify a try, and let us know what you think! Sign up, pip install unifyai, take our router for a spin, and check out our product walkthrough. If you're excited about Unify, tell a friend about it 😊 If you don’t think it’s useful, please tell us. We’d love to know!