
The fastest way to ship airtight AI

Openlayer helps you test and monitor high-quality AI systems.

Jobs at Openlayer

San Francisco, CA, US / Remote (US) · $140K - $200K · 3+ years
San Francisco, CA, US / Remote (US) · $50K - $70K · 3+ years
San Francisco, CA, US / Remote (US) · $70K - $120K · 3+ years
San Francisco, CA, US / Remote (US) · $90K - $140K · 3+ years
San Francisco, CA, US / New York, NY, US · $130K - $180K · 3+ years
Openlayer
Founded: 2021
Team Size: 6
Location: San Francisco

Active Founders

Gabriel Bayomi Kalejaiye, Founder

I’m a Brazilian engineer and one of the co-founders of Unbox. Previously, I was a Machine Learning Engineer at Apple, where I worked in several different areas of Siri, combining multimodal signals for natural language understanding. Before that, I completed my master's in Computer Science at Carnegie Mellon, where I had the opportunity to work on Amazon’s Alexa Prize project. I love the combination of artificial intelligence applications and computational social science!

Vikas Nair, Founder

Co-founder of Openlayer (S21). Previously built AI/ML-powered experiences at Apple.

Rishab Ramanathan, Founder

Co-founder of Unbox. Ex-Apple, Yale grad interested in making artificial general intelligence a reality.

Company Launches

TL;DR: Openlayer provides a simpler workflow for AI evals and testing: connect your GitHub repo, define must-pass tests, and every commit triggers those tests automatically. The same tests also run on top of your live data and send alerts if they start failing. For AI developers, this is the ideal way to run continuous evaluations without much effort. Get started here.

Hey guys! We’re Gabe, Rishab, and Vikas, the founders of Openlayer. We met at Apple, where we noticed that every AI/ML team inevitably ran into the same problem: their models did not perform as well in the real world as expected. We left to solve this problem by building a best-in-class process for evaluating and improving models.

🤯 The problem

Most of us get how crucial AI evals are now. The thing is, almost all the eval platforms we’ve seen are clunky: there’s too much manual setup and adaptation needed, which breaks developers’ workflows.

✅ Solution

We’re releasing a radically simpler workflow.

All you need to do is connect your GitHub repo to Openlayer and define must-pass tests for your AI system. Once integrated, every commit triggers these tests automatically on Openlayer, ensuring continuous evaluation without extra effort.
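
To make the idea concrete, here is a minimal sketch of what a must-pass test reduces to: a thresholded check over your system's outputs that a commit must satisfy. The function names and data below are illustrative assumptions, not Openlayer's actual API:

```python
# Minimal sketch of a "must-pass test": a thresholded check over outputs.
# Everything here is hypothetical, not Openlayer's real API.

def exact_match_rate(outputs: list[str], expected: list[str]) -> float:
    """Fraction of outputs that exactly match the expected answers."""
    return sum(o == e for o, e in zip(outputs, expected)) / len(expected)

def must_pass_accuracy(outputs: list[str], expected: list[str],
                       threshold: float = 0.9) -> bool:
    """A commit passes only if exact-match accuracy clears the threshold."""
    return exact_match_rate(outputs, expected) >= threshold

if __name__ == "__main__":
    outputs = ["4", "Paris", "blue"]   # what the AI system produced
    expected = ["4", "Paris", "red"]   # the golden answers
    assert must_pass_accuracy(outputs, expected, threshold=0.6), \
        "must-pass test failed; block the commit"
```

In a CI-style setup like the one described above, a failing check of this kind is what would mark the commit red on the platform.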

We offer 100+ tests (and are always adding more), and you can define custom tests of your own. The workflow is language-agnostic, and you can customize it using our CLI and REST API. We also offer template repositories for common use cases to get you started quickly.
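
As a rough illustration of what scripting against a REST API like this could look like, here is a hedged sketch. The base URL, endpoint path, payload fields, and auth header are all assumptions for illustration; the real contract lives in Openlayer's API docs:

```python
# Hypothetical sketch of kicking off a test run via a REST API.
# Endpoint, payload, and auth scheme are assumed, not documented behavior.
import os
import requests

API_BASE = "https://api.openlayer.com/v1"        # assumed base URL
API_KEY = os.environ["OPENLAYER_API_KEY"]        # assumed auth scheme

resp = requests.post(
    f"{API_BASE}/projects/my-project/test-runs",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"commit": "abc1234", "suite": "must-pass"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g., a run id plus per-test pass/fail statuses
```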

You can leverage the same setup to monitor your live AI systems after you deploy them. It’s just a matter of setting some variables, and your Openlayer tests will run on top of your live data and send alerts if they start failing.
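
In spirit, the monitoring side amounts to logging each live request/response pair so the same tests can evaluate production traffic. The sketch below uses a stand-in client with hypothetical names, not Openlayer's SDK:

```python
# Illustrative monitoring hookup: ship each live request/response row so
# tests can run on production traffic. Class and method names are made up.
import os
import time

class MonitoringClient:
    """Stand-in for a platform client that publishes live rows."""

    def __init__(self, api_key: str, pipeline_id: str) -> None:
        self.api_key = api_key
        self.pipeline_id = pipeline_id

    def log(self, inputs: dict, output: str, latency_ms: float) -> None:
        # A real integration would POST this row to the platform; printing
        # it here just shows the shape of the data the tests would see.
        print({"pipeline": self.pipeline_id, "inputs": inputs,
               "output": output, "latency_ms": round(latency_ms, 1)})

client = MonitoringClient(api_key=os.environ.get("OPENLAYER_API_KEY", ""),
                          pipeline_id="prod-chatbot")

start = time.perf_counter()
output = "Hello! How can I help?"   # your model/LLM call would go here
client.log({"question": "Hi"}, output, (time.perf_counter() - start) * 1e3)
```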

🙏 Asks

We love feedback! Sign up for free and let us know your thoughts! Book a call with us for a demo, personalized onboarding, feedback, or just to chat. Feel free to reach out to founders@openlayer.com for anything else.