Unify

Take Back Control of Your LLM ✨

LLMs run riot in production. Get back in the driving seat. Build your own evals, iterate quickly, and go from prototype to production in no time! Try Now! https://console.unify.ai/ ✨

Jobs at Unify

Unify
Founded: 2022
Team Size: 10
Location: London, United Kingdom
Group Partner: Jared Friedman

Active Founders

Daniel Lenton, Founder

PhD in ML for 3D Vision + Robotics, Imperial College London. Love to dive deep into engineering rabbit holes 🕳️🐇


Company Launches

TL;DR Unify dynamically routes each prompt to the best LLM and provider so you can balance quality, speed, and cost with ease ✨

🔀 Try out our router in the browser

💸 Sign up now and get $50 free credits!

🧑‍💻 Get a Demo

💥Problem
A new LLM emerges almost every week, and every LLM application has its own requirements for output quality, cost, and speed. It's also unclear which models provide the highest quality on your specific task. The result is a grind of manual signups, ad-hoc testing across models, and bespoke benchmarking. It's overwhelming, and the final choice is almost always sub-optimal.

Many people give up and just use the largest models for everything. But you really don't need GPT-4 to summarize simple documents: Llama 8B is more than capable, roughly 15 times faster, and 100 times cheaper. LLM apps are currently much slower and more expensive than they need to be 💵🔥⏱️, and they often underperform on output quality because of poorly selected models 📉 (Claude Opus vs. GPT-4 vs. Gemini, etc.) for the prompts at hand.

🤩 Not Anymore

Unify automatically routes each prompt to the most suitable model based on your preferences for quality, speed, and cost. Simply tune these three dials and let Unify do all the hard work.

Your "easy" prompts will go to the fastest and cheapest models, and only the "hard" prompts will go to the most appropriate heavy lifters, like GPT-4o, Opus, and Gemini. You focus on building your LLM app, and we'll focus on providing the best models, with the fastest providers, at the lowest cost ⚡
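To make the routing idea concrete, here is a minimal sketch of preference-weighted routing. This is our own illustration, not Unify's implementation, and the benchmark numbers are made up: each candidate endpoint gets a score combining quality, cost, and latency, weighted by the three dials, and the top scorer receives the prompt.

```python
# Hypothetical sketch of preference-weighted routing (not Unify's code).
# Benchmark figures below are illustrative placeholders.
candidates = {
    "gpt-4o":      {"quality": 0.95, "cost": 5.00, "latency": 0.9},
    "claude-opus": {"quality": 0.94, "cost": 15.0, "latency": 1.2},
    "llama-3-8b":  {"quality": 0.78, "cost": 0.05, "latency": 0.2},
}

def route(candidates, q_weight, cost_weight, latency_weight):
    """Return the endpoint that maximizes quality minus weighted cost and latency."""
    def score(stats):
        return (q_weight * stats["quality"]
                - cost_weight * stats["cost"]
                - latency_weight * stats["latency"])
    return max(candidates, key=lambda name: score(candidates[name]))

# Quality-only routing picks the strongest model:
print(route(candidates, q_weight=1.0, cost_weight=0.0, latency_weight=0.0))  # gpt-4o
# Weighting cost heavily sends the prompt to the cheap model instead:
print(route(candidates, q_weight=1.0, cost_weight=1.0, latency_weight=0.0))  # llama-3-8b
```

Turning up the cost dial flips the decision from the frontier model to the small one, which is exactly the "easy prompts go cheap, hard prompts go heavy" behavior described above.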

Watch our explainer video to learn more about the solution at a high level.

What Unify Offers ✨

  • ⚙️ Control: Choose which models and providers to route to and adjust three sliders: quality, cost, and latency.
  • 📈 Self Improvement: Unify automatically improves your LLM application over time as new models and providers emerge.
  • 📊 Observability: Compare all models and providers and see which are best for your needs.
  • ⚖️ Impartiality: Unify treats all models and providers equally, ensuring unbiased quality, cost, and speed benchmarks.
  • 🔑 Convenience: With a single API key, access all models and providers behind a single endpoint, queryable individually or via the router.
  • 🧑‍💻 Focus: Don't stress about updating models and providers; Unify handles it for you so you can focus on building great LLM products.

Getting Started 🛠️

Sign up, select your router, then pip install unifyai and you're ready to go!

from unify import Unify

unify = Unify(
    api_key="UNIFY_KEY",     # replace with your Unify API key
    endpoint="router@q:1",   # router tuned for quality
)
response = unify.generate(user_prompt="Hello there")
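The endpoint argument above follows a "name@config" pattern ("router@q:1"). As a hedged illustration only — the separator and key names here are our assumption, not Unify's documented grammar — such a string can be split into a target and its settings:

```python
def parse_endpoint(endpoint: str):
    """Split a 'name@config' endpoint string, e.g. 'router@q:1'.

    Assumes (hypothetically) that multiple dials would be pipe-separated,
    e.g. 'router@q:1|c:0.5'. Check the official docs for the real grammar.
    """
    name, _, config = endpoint.partition("@")
    settings = dict(kv.split(":", 1) for kv in config.split("|")) if config else {}
    return name, settings

print(parse_endpoint("router@q:1"))  # ('router', {'q': '1'})
```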

🙏 Our Ask

Give Unify a try and let us know what you think! Sign up, pip install unifyai, take our router for a spin, and check out our product walkthrough. If you're excited about Unify, tell a friend 😊 And if you don't find it useful, please tell us that too. We'd love to know!