Laminar is an open-source platform that provides observability, text analytics, evals, and prompt chain management for AI agents.
Co-founder and CEO @ Laminar (lmnr.ai). Previously, I interned at Palantir, where I built a semantic search package that now powers many internal AI teams and worked on a resource allocation engine on the core infrastructure team. I also interned at Bloomberg, where I scaled a market tick processing pipeline by 10x, to 10M ticks/s.
Co-founder and CTO at Laminar (lmnr.ai). Previously, I worked at Amazon for 2 years, building and scaling critical payments infrastructure. Before that, I spent a year creating ML infrastructure for a drug discovery biotech startup in Korea.
tldr: Laminar is an open-source developer platform that provides full instrumentation of LLM applications and combines trace data with event-based analytics.
—
Hey everyone, we’re Robert, Din, and Temirlan. Previously, we built infrastructure at Palantir, Amazon, and Bloomberg — now, we’re building an open-source platform to help developers understand how their LLM apps perform in production.
LLMs are stochastic, and designing robust software around them (e.g., RAG pipelines, agents) is an iterative process. A great observability platform not only facilitates this process but actively makes it more productive. Hence, many AI developers adopt observability tools early on.
Laminar goes beyond single-LLM-call tracing: it provides tools to instrument your entire app and a powerful UI for full trace visualization, trace search, and session grouping.
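To make this concrete, here is a minimal sketch of what whole-app instrumentation can look like. Everything in it is illustrative: the `lmnr` package name, `Laminar.initialize`, and the `observe` decorator are assumptions about the SDK surface based on the description above, not a verbatim API reference.

```python
# Hedged sketch: `lmnr`, `Laminar.initialize`, and `observe` are assumed
# names based on this post's description, not a confirmed API.
from lmnr import Laminar, observe

Laminar.initialize(project_api_key="<YOUR_PROJECT_API_KEY>")

@observe()  # assumed: records this function as a span in the current trace
def retrieve_context(query: str) -> list[str]:
    # Placeholder retrieval step; a real app would query a vector store here.
    return [f"doc about {query}"]

@observe()  # assumed: nested calls appear as child spans, forming a full trace
def answer(query: str) -> str:
    context = retrieve_context(query)
    # Placeholder for the LLM call; an observability SDK would typically
    # capture the prompt, completion, and latency on this span as well.
    return f"answer to {query!r} using {len(context)} docs"

print(answer("How do I group traces into sessions?"))
```

With this style of instrumentation, every function in the request path shows up as a span, so the UI can render the whole request as one trace rather than as isolated LLM calls.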
LLM apps produce traces, which are essentially very rich text. Traditional event-based analytics tools are not built for extracting metrics from this kind of data.
Currently, AI devs spend a good chunk of their time manually inspecting traces to understand the usage patterns of their LLM apps. As they scale, manual inspection is no longer feasible.
Laminar tackles this problem by using other LLMs to process these rich text outputs in the background. With Laminar, developers can define custom events, such as “USER SENTIMENT,” to collect user sentiment and track it as a metric at scale as they deploy their LLM apps to production.
Each event is linked to the trace that produced it, so developers can understand when and why certain semantic events happened. This gives them a deeper understanding of the performance and usage of their LLM apps.
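To sketch how this event flow might look from the developer's side, here is a hedged example under the same assumptions as the previous snippet. `Laminar.event` is a hypothetical helper name; the pattern it illustrates (attach rich text to a named event and let a background LLM turn it into a tracked metric linked to the trace) is what the post describes.

```python
# Hedged sketch: `Laminar.event` is a hypothetical helper illustrating the
# custom-event pattern described above; it is not a confirmed API.
from lmnr import Laminar, observe

Laminar.initialize(project_api_key="<YOUR_PROJECT_API_KEY>")

@observe()
def handle_user_message(message: str) -> str:
    reply = f"echo: {message}"  # placeholder app logic
    # Hypothetical: send the raw text to a named event. A background LLM
    # evaluates it (e.g., classifies sentiment), and the resulting event
    # is stored linked to the trace produced by this function call.
    Laminar.event("USER SENTIMENT", message)
    return reply

handle_user_message("This assistant saved me an hour, thanks!")
```

Each emitted event would then show up in analytics with a link back to the exact trace, so a spike in negative sentiment can be traced to the calls that caused it.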