Superpowered AI makes it easy to build production-ready LLM applications with access to external knowledge. Our API lets you connect external knowledge sources (regulatory archives, for example) to LLMs. We leverage proprietary RAG technology to dramatically improve retrieval performance and reliability for a wide variety of use cases. If you have an LLM-based application in mind, please reach out! We've helped companies build real-world applications, ranging from internal productivity tools to complete external products.
System Architect at Superpowered.ai. Professional interests: AI systems, software architecture, Python, IaC. Personal interests: biological systems, skiing, mountain biking, and just about anything outside.
Dillon Martin is the co-CEO of Superpowered AI, a startup that provides a developer API for building sophisticated LLM-based applications. Previously, he worked as a Financial Advisor at Bank of America Merrill Lynch before co-founding a quantitative hedge fund with the same team that later founded Superpowered AI. When Dillon is not conversing with AI models, he can be found skating the streets of Salt Lake City.
Nick McCormick is the lead developer at Superpowered AI. He studied engineering at the University of Tennessee. Prior to starting Superpowered, he worked as a developer at a quant hedge fund, where he built the backtesting system. Outside of work, Nick's main passions are mountain biking and running, and he frequently competes in races.
Zach is the co-CEO of Superpowered AI. Prior to starting Superpowered, he was the Chief Investment Officer at a small quant hedge fund that he ran with his future Superpowered co-founders, where he put his Applied Mathematics degree to good use. When he’s not working, Zach enjoys spending time in the mountains, usually either skiing or biking.
We’re excited to announce the release of the SuperStack alongside our new Chat endpoint. Now you can easily deploy conversational LLM applications with knowledge retrieval built in. Our SuperStack suite of technologies directly targets common RAG failure modes, like hallucinations caused by out-of-context search results. Dive in and test it in our Playground, try the Python SDK, or clone these examples.
Many uses for LLMs, including most customer support and employee productivity applications, require effectively connecting LLMs to external knowledge sources. Doing this well is very hard. Current retrieval-augmented generation (RAG) methods simply run an off-the-shelf information retrieval step and stuff the results into an LLM prompt. This works for simple demos, but it usually isn’t reliable enough for real-world production applications.
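To make that failure mode concrete, here is a minimal sketch of the naive pipeline described above: embed the document chunks, take the top-k nearest ones, and paste them verbatim into a prompt. The embed() stand-in, the chunking, and the prompt template are all illustrative assumptions, not Superpowered's pipeline.

```python
# Minimal "naive RAG" sketch: retrieve the top-k chunks by cosine similarity,
# then stuff them into a prompt. embed() is a deterministic stand-in for a real
# embedding model so the example runs end to end without any API keys.
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def top_k_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: float(embed(c) @ q), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    context = "\n\n".join(chunks)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our API rate limit is 100 requests per minute.",
    "Support is available 24/7 via chat.",
]
print(build_prompt("How fast are refunds?", top_k_chunks("How fast are refunds?", docs, k=2)))
```

Because each chunk is retrieved independently and pasted in as-is, the model often sees fragments stripped of their surrounding context, which is exactly the kind of failure the rest of this post is about.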
Superpowered AI offers a simple API that lets you connect external data sources (product documentation, for example) to LLMs. We leverage proprietary RAG technology we’ve developed (we call it the SuperStack) to dramatically improve performance and reliability for a wide variety of use cases.
Our solution is end-to-end, so you don’t have to worry about stringing together different APIs for different parts of the retrieval and generation pipeline. Here are some key features:
The SuperStack has three components that directly tackle the problems with standard RAG pipelines:
AutoQuery → Convert user inputs into well-formed search queries for better retrieval results.
Relevant Segment Extraction (RSE) → Dynamically group clusters of relevant results into longer sections of contiguous text to provide better context to the LLM. This is especially useful for more complex questions, where the answer isn’t contained in a single sentence or paragraph (a rough sketch of this grouping idea follows the list below).
AutoContext → Automatically inject descriptive context into text chunks and their embeddings, so that each chunk carries its full surrounding context, reducing the likelihood of poor search results and hallucinations.
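To illustrate the RSE grouping idea mentioned above, here is a conceptual sketch (not Superpowered's production algorithm): given relevance scores for chunks in document order, merge runs of high-scoring, adjacent chunks, tolerating small gaps, into longer contiguous segments so the LLM sees whole passages rather than isolated fragments.

```python
# Conceptual sketch of RSE-style grouping (illustrative only, not the actual
# SuperStack implementation): merge adjacent high-relevance chunks, allowing
# small gaps, into contiguous segments of the original document.
def group_relevant_segments(scores, threshold=0.5, max_gap=1):
    """scores: relevance scores for chunks in document order.
    Returns (start, end) index pairs, end exclusive, one per segment."""
    segments, start, gap = [], None, 0
    for i, s in enumerate(scores):
        if s >= threshold:
            if start is None:
                start = i                 # open a new segment
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:             # too many low-relevance chunks: close the segment
                segments.append((start, i - gap + 1))
                start, gap = None, 0
    if start is not None:                 # close a segment that runs to the end
        segments.append((start, len(scores) - gap))
    return segments

# Chunks 2-5 form one relevant cluster despite the dip at chunk 4.
chunk_scores = [0.1, 0.2, 0.8, 0.9, 0.3, 0.7, 0.1, 0.1]
print(group_relevant_segments(chunk_scores))  # -> [(2, 6)]
```

The returned index ranges can then be mapped back to the original document and passed to the LLM as single, contiguous passages instead of a handful of disconnected chunks.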
Given that LLM applications often involve conversational interactions, we recently launched our Chat endpoint to make it easy to configure and deploy chat applications that utilize our knowledge retrieval pipeline. We currently support GPT-3.5-Turbo and GPT-4, with more models coming soon.
https://youtu.be/3bnS3ppoRtM
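As a rough illustration of how a chat application might call a hosted endpoint like this, here is a sketch using plain HTTP. The base URL, path, payload fields, and auth header below are assumptions made for the example, not Superpowered's documented API; the Playground and SDK docs have the real interface.

```python
# Illustrative only: the endpoint path, payload fields, and auth scheme are
# assumed for this sketch and are not taken from Superpowered's API docs.
import os
import requests

API_BASE = "https://api.superpowered.ai"   # assumed base URL
headers = {"Authorization": f"Bearer {os.environ['SUPERPOWERED_API_KEY']}"}

payload = {
    "knowledge_base_ids": ["kb_123"],      # hypothetical knowledge base ID
    "model": "gpt-4",                      # or "gpt-3.5-turbo"
    "input": "What does our refund policy say about international orders?",
}

response = requests.post(f"{API_BASE}/chat", json=payload, headers=headers, timeout=30)
response.raise_for_status()
print(response.json())
```

A real conversational deployment would typically also track a thread or session identifier so that follow-up questions retain the conversation's context.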
For companies that don’t have the resources or expertise to build their own LLM-based solutions, we’re here to help. Whether you're looking to enhance internal productivity or launch innovative new products with LLMs, we will work with you to bring your vision to life. Our team is dedicated to helping businesses of all sizes leverage the potential of AI to drive efficiency and customer love.