Boundary makes tools for building the most reliable AI pipelines. We take a code-first approach: everything lives in your code and runs locally (completely open-source!). Instead of OpenAI function calling and JSON.parse, our approach lets you write shorter prompts that lead to more accurate results. No fine-tuning. No modifying the model. That means less time prompt engineering to get the same quality of results. Recently, we showed that our method makes GPT-3.5, GPT-4o-mini, and Claude Haiku perform at the same level as GPT-4o does without it (and of course GPT-4o gets better with us as well). Read more here: https://www.boundaryml.com/blog/sota-function-calling

Boundary is composed of three main components:

1. BAML - a simple programming language for getting structured outputs from LLMs.
2. BAML Playground - the first-ever VSCode playground for LLMs, which helps developers test and iterate on their AI functions.
3. Boundary Studio - an analytics dashboard to trace, label, and measure performance.

Boundary's mission is to make AI app development seamless, supporting the entire development journey, from initial prompt iteration to measuring performance post-deployment.
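For the curious, here's roughly what the function-calling-plus-JSON.parse workflow we're replacing looks like (illustrative Python using the OpenAI SDK; the `extract_resume` schema and prompt are just an example, not from our docs):

```python
# The conventional approach: raw OpenAI function calling plus json.loads,
# where one malformed or truncated response breaks your pipeline.
import json
from openai import OpenAI

client = OpenAI()

schema = {
    "name": "extract_resume",
    "description": "Extract structured fields from a resume.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "skills": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["name", "skills"],
    },
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Extract fields from: Jane Doe, Python, Rust"}],
    tools=[{"type": "function", "function": schema}],
    tool_choice={"type": "function", "function": {"name": "extract_resume"}},
)

# json.loads raises the moment the model emits invalid JSON -- exactly the
# failure mode that structured-output tooling like BAML is designed to avoid.
args = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
print(args["name"], args["skills"])
```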
Vaibhav is a software engineer with over 7 years of experience building innovative products. At Microsoft, he worked on real-time 3D reconstruction for HoloLens, gaining expertise in computer vision and 3D graphics. At Google, he led performance optimization on ARCore and Face ID, significantly improving latency and quality. Now he's bringing that same experience to improving the quality and speed of generative AI technology. To talk anything computer vision, AI, or performance, reach out!
Aaron has worked for the better part of a decade scaling out distributed systems at AWS and launching full-stack, consumer-facing products at Prime Video. He is now passionate about building scalable ML infrastructure and the best developer experience at Gloo.
tl;dr: LLMs are awesome, but they have no knowledge of your data, and when you give it to them, they're prone to making stuff up. We help developers build a searchable knowledge base that LLMs can understand, and validate that their responses are legit.
Hey everyone! We’re Aaron and Vaibhav from Gloo and we’re on a mission to help LLMs understand your data and get them to stop hallucinating all over the place.
Building a knowledge graph isn't as easy as calling the OpenAI embeddings API, storing the results in a vector DB, and calling it a day. Developers have to choose from a myriad of parameters: chunking algorithm, embedding type (which can even depend on the content), vector DB, whether to fine-tune or not, how to securely store the data, and the list goes on and on.
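For reference, the "naive" version of that pipeline looks roughly like this (illustrative Python: fixed-size chunking, the OpenAI embeddings API, and an in-memory index standing in for a vector DB; every one of those choices is a knob you end up tuning):

```python
# A deliberately naive retrieval pipeline: fixed-size chunks, one embedding model,
# an in-memory "vector DB", and cosine-similarity search.
import numpy as np
from openai import OpenAI

client = OpenAI()

def chunk(text: str, size: int = 500) -> list[str]:
    # Naive fixed-size chunking; real systems chunk by structure (headings, sentences).
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

documents = ["...your docs here..."]
chunks = [c for doc in documents for c in chunk(doc)]
index = embed(chunks)  # this array is the entire "vector DB"

def search(query: str, k: int = 3) -> list[str]:
    q = embed([query])[0]
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

print(search("How do I reset my password?"))
```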
Once you feed everything into the LLM, you're only halfway there: getting rid of hallucinations is tricky, and confidently incorrect answers to customer questions will give you headaches in the long term.

On top of that, you still have your actual customer problems to fix: getting the customer data, building flows on top of the LLM, writing the right prompts, getting more customers.
Gloo is the managed solution for building your knowledge graph, built for LLM context windows from the ground up. We stand up a search API for your data that supports both keyword and semantic search, and we smartly index and compute embeddings for you based on the content type. Once that's set up, you can call our _Check-GPT_ API to check LLM answers against your knowledge base to ensure trustworthiness.
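To give a feel for how this slots into an app, here's a rough sketch of the flow (the endpoint paths, field names, and `GLOO_API_KEY` variable below are placeholders for illustration; see our docs for the real API):

```python
# Hypothetical sketch: retrieve context from the managed search API, generate an
# answer with an LLM, then verify that answer against the knowledge base.
import os
import requests

BASE = "https://api.gloo.chat"  # placeholder base URL
HEADERS = {"Authorization": f"Bearer {os.environ['GLOO_API_KEY']}"}  # placeholder auth

# 1. Hybrid (keyword + semantic) search over your indexed data.
hits = requests.post(
    f"{BASE}/search",
    headers=HEADERS,
    json={"query": "What is your refund policy?", "top_k": 5},
).json()

# 2. Answer the question with your LLM of choice, using the hits as context
#    (generation step omitted here).
answer = "Refunds are available within 30 days of purchase."

# 3. Ask the Check-GPT endpoint whether the answer is actually supported
#    by the knowledge base before showing it to a customer.
check = requests.post(
    f"{BASE}/check-gpt",
    headers=HEADERS,
    json={"question": "What is your refund policy?", "answer": answer},
).json()

if not check.get("supported", False):
    answer = "I'm not sure, let me connect you with a human."
print(answer)
```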
We built our search with a security-first approach. Your data is always server-side encrypted, and never actually stored in third-party vector DBs. We also support new transformations that search engines couldn't do before, like generating document summaries (powered by LLMs!), so you don't have to spend precious time computing them at query time.
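To show what that saves you at query time, here's roughly what precomputing a summary at index time looks like (illustrative code using the OpenAI SDK; Gloo handles this server-side, and the model choice is just an example):

```python
# Illustrative only: precompute an LLM-generated summary for each document at
# indexing time, so query-time requests read a cached summary instead of
# paying for a fresh LLM call.
from openai import OpenAI

client = OpenAI()

def summarize(doc: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the document in 3 sentences."},
            {"role": "user", "content": doc},
        ],
    )
    return resp.choices[0].message.content

# At index time: store the summary alongside the raw text and its embedding.
document = "...long support article..."
record = {"text": document, "summary": summarize(document)}

# At query time: no extra LLM call is needed, just read the cached summary.
print(record["summary"])
```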
All of your data is viewable in your own personal dashboard, where you can track embedding jobs, play around with different search parameters, and view search query performance and analytics.
Reach out to us at founders@gloo.chat