TL;DR Openlayer provides a simpler workflow for AI evals and testing – connect your GitHub repo, define must-pass tests, and every commit triggers those tests automatically. The same tests also run on top of your live data and send alerts if they start failing. For AI developers, it's a way to run continuous evaluations with minimal effort. Get started here.
—
Hey guys! We’re Gabe, Rishab, and Vikas, the founders of Openlayer. We met at Apple, where we noticed that every AI/ML team inevitably ran into the same problem: their models did not perform as well in the real world as expected. We left to solve this problem by building a best-in-class process for evaluating and improving models.
🤯 The problem
Most of us get how crucial AI evals are now. The thing is, almost all the eval platforms we’ve seen are clunky – there’s too much manual setup and adaptation needed, which breaks developers’ workflows.
✅ Solution
We’re releasing a radically simpler workflow.
All you need to do is connect your GitHub repo to Openlayer and define must-pass tests for your AI system. Once integrated, every commit triggers these tests automatically on Openlayer, ensuring continuous evaluation without extra effort.
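To make the idea of a "must-pass test" concrete, here is a minimal sketch of the kind of check a team might gate commits on: an aggregate accuracy threshold over a validation set. This is purely illustrative – the file name, record fields, and threshold are assumptions, and it is not Openlayer's actual test format.

```python
# Illustrative sketch of a "must-pass" eval (not Openlayer's actual test format):
# load a validation set of eval records and fail the check if aggregate
# accuracy drops below a threshold.
import json
import sys

ACCURACY_THRESHOLD = 0.90  # assumed threshold; tune per project


def accuracy(rows):
    """Fraction of rows where the model's output matches the expected label."""
    if not rows:
        return 0.0
    correct = sum(1 for r in rows if r["output"] == r["expected"])
    return correct / len(rows)


if __name__ == "__main__":
    # "validation.jsonl" is a hypothetical file of {"output": ..., "expected": ...} records
    with open("validation.jsonl") as f:
        rows = [json.loads(line) for line in f]

    acc = accuracy(rows)
    print(f"accuracy = {acc:.3f} (threshold = {ACCURACY_THRESHOLD})")
    sys.exit(0 if acc >= ACCURACY_THRESHOLD else 1)  # non-zero exit marks the commit as failing
```

The point of wiring tests like this to commits is that a regression is caught at the exact change that introduced it, rather than after deployment.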
We offer 100+ tests (and are always adding more), and you can define custom tests of your own. We’re language-agnostic, and you can customize the workflow using our CLI and REST API. We also offer template repositories for common use cases to get you started quickly.
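As a rough sketch of what REST-based customization could look like, the snippet below polls test results for a given commit. The endpoint path, response shape, and environment variable names here are assumptions for illustration, not Openlayer's documented API – consult the actual API reference for the real calls.

```python
# Hypothetical sketch of checking a commit's test results over a REST API.
# Endpoint, payload shape, and env var names are assumptions, not the real API.
import os

import requests

API_URL = os.environ.get("OPENLAYER_API_URL", "https://api.openlayer.com")  # assumed base URL
API_KEY = os.environ["OPENLAYER_API_KEY"]  # assumed env var name


def get_test_results(project_id: str, commit_sha: str) -> dict:
    """Fetch test results for a commit (illustrative endpoint)."""
    resp = requests.get(
        f"{API_URL}/v1/projects/{project_id}/commits/{commit_sha}/tests",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    results = get_test_results("my-project", "abc123")  # hypothetical project ID and SHA
    failed = [t for t in results.get("tests", []) if t.get("status") == "failed"]
    print(f"{len(failed)} failing test(s)")
```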
You can leverage the same setup to monitor your live AI systems after you deploy them. It’s just a matter of setting some variables, and your Openlayer tests will run on top of your live data and send alerts if they start failing.
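To give a feel for what monitoring live traffic involves, here is a minimal sketch of recording production request/response pairs so the same tests can run over them. The toggle variable, function name, and local log file are all assumptions for illustration; in practice the data would stream to Openlayer rather than to a local file.

```python
# Minimal sketch of recording live inference traffic for monitoring.
# Variable and function names here are assumptions, not Openlayer's SDK.
import json
import os
import time

MONITORING_ENABLED = os.environ.get("OPENLAYER_MONITORING", "false") == "true"  # assumed toggle


def record_inference(inputs: dict, output: str, latency_ms: float) -> None:
    """Append one live request/response pair to a local log (stand-in for streaming to a monitoring backend)."""
    if not MONITORING_ENABLED:
        return
    with open("live_requests.jsonl", "a") as f:
        f.write(json.dumps({
            "timestamp": time.time(),
            "inputs": inputs,
            "output": output,
            "latency_ms": latency_ms,
        }) + "\n")


# Usage inside your serving code:
# start = time.perf_counter()
# output = my_model(inputs)                      # your existing inference call
# record_inference(inputs, output, (time.perf_counter() - start) * 1000)
```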
🙏 Asks
We love feedback! Sign up for free and let us know your thoughts. Book a call with us for a demo, personalized onboarding, feedback, or just to chat. Feel free to reach out to founders@openlayer.com for anything else.