OpenPipe is an SDK that abstracts away the work of fine-tuning custom models. We capture your existing provider's prompt-completion pairs in the background and use them to create a new model that is faster, cheaper, and often more accurate than the original.
Repeat founder and former engineer at Google and YC. Led the Startup School team and built products that increased YC applications by over 40%.
I'm a co-founder at OpenPipe, a platform for turning your slow and expensive prompts into cheap fine-tuned models. My co-founder and I wrote the first web agent that ran on GPT-4 (Taxy.AI), and I've been fine-tuning models since 2021.
Hi there! We’re Kyle and David, and we’re building OpenPipe. OpenPipe lets you capture your existing prompts and completions, and then use them to fine-tune a model specific to your use-case. Here’s how the process works:
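To make the capture step concrete, here is a minimal sketch of the idea, using hypothetical names and a stub in place of a real provider call. The actual SDK wraps your existing client and records calls automatically; this only illustrates the mechanism of logging each prompt-completion pair as a JSONL row while passing the response through unchanged.

```python
import json
from pathlib import Path

# Illustrative only: where captured pairs accumulate. The real SDK
# handles storage for you; this path is a stand-in.
LOG_PATH = Path("captured_pairs.jsonl")

def call_with_capture(chat_fn, messages, **kwargs):
    """Call the underlying chat-completion function and log the
    prompt-completion pair as one JSONL row, returning the reply."""
    completion = chat_fn(messages, **kwargs)
    row = {"messages": messages, "completion": completion, "params": kwargs}
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(row) + "\n")
    return completion

# A stub standing in for the real provider call.
def fake_chat(messages, **kwargs):
    return {"role": "assistant", "content": "needs: stovetop"}

reply = call_with_capture(
    fake_chat,
    [{"role": "user", "content": "Which appliance does sautéing mushrooms need?"}],
    model="gpt-4",
)
print(reply["content"])
```

Because the wrapper returns the completion untouched, capture can run in the background of normal traffic without changing application behavior.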
Before working on OpenPipe, we each ran into limitations of GPT-3.5 and GPT-4:
We’ve spoken with many other companies, and these issues are common. Cost and latency are two of the biggest factors blocking production deployment of LLM-backed functionality.
Small models fine-tuned for a specific prompt can excel at many tasks. They’re particularly good at data extraction and classification, even on tasks that require significant world knowledge.
For example, in one project we built to classify recipes, our model was able to determine that a recipe that calls for sautéed mushrooms needs a stovetop, despite not being explicitly trained on that connection. It outperformed GPT-3.5 in classification accuracy and reached 95% of GPT-4’s performance.
And not only does our fine-tuned model outperform GPT-3.5, it costs 50X less to run!
We’ve built infrastructure to make fine-tuning your own model extremely easy. The process works like this:
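As a sketch of what that infrastructure does with your data, the snippet below converts captured prompt-completion rows into the chat-format JSONL that fine-tuning APIs generally expect, where each training example is the original prompt messages with the assistant reply appended. The helper name and the sample data are illustrative, not part of our API.

```python
import json

def to_training_rows(captured):
    """Turn captured prompt-completion pairs into chat-format
    training examples: prompt messages plus the assistant reply."""
    return [
        {"messages": pair["messages"] + [pair["completion"]]}
        for pair in captured
    ]

# Hypothetical captured pair, e.g. from the recipe-classification project.
captured = [
    {
        "messages": [{"role": "user", "content": "Classify: mushroom risotto"}],
        "completion": {"role": "assistant", "content": "stovetop"},
    }
]

training = to_training_rows(captured)
# One JSON object per line, ready to upload as a fine-tuning dataset.
jsonl = "\n".join(json.dumps(row) for row in training)
print(jsonl)
```

The point of the platform is that you never write this glue yourself: the pairs you were already sending to your provider become the training set.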
You can reach us at founders@openpipe.ai. We’d love to help out if we can!