Langfuse

Open source LLM engineering platform

Traces, evals, prompt management and metrics to debug and improve your LLM application. Onboard via https://langfuse.com

Langfuse helps you build and improve LLM applications across the entire lifecycle:
- Develop: Observability, Langfuse UI & Prompt Management
- Monitor: Traces, Analytics, Metrics & Evaluations
- Test: Experiments, Releases & Datasets

We are hiring: https://langfuse.com/careers

Jobs at Langfuse

Berlin, BE, DE · €70K - €130K · 0.10% - 0.35% equity · 1+ years experience
Berlin, BE, DE · €70K - €130K · 0.10% - 0.35% equity · 3+ years experience
Berlin, BE, DE · €70K - €130K · 0.10% - 0.35% equity · 3+ years experience
Berlin, BE, DE · €70K - €130K · 0.10% - 0.35% equity · 3+ years experience
Remote · €70K - €130K · 0.10% - 0.35% equity · 3+ years experience
Langfuse
Founded: 2022
Team Size: 7
Location: Berlin, Germany
Group Partner: Gustaf Alstromer

Active Founders

Marc Klingen, Founder

Marc has diverse experience across product, sales, business intelligence, and full-stack engineering at companies ranging from large (Google, DHL) to early-stage startups. He graduated within the top 1% of his Master's in Management and Computer Science at the Technical University of Munich. Besides that, he loves to hack on personal projects and connect with other builders (something he does not get to do much right now).

Maximilian Deichmann, Founder

Max built trading systems at Trade Republic, a €5bn European fintech. He knows the ins and outs of building reliable, scalable systems that handle customers' most critical business processes. While Max started out studying management in undergrad, he quickly found his love for computer science and transitioned into engineering, self-taught and with a graduate degree.

Clemens Rawert, Founder

Before starting Langfuse, Clemens worked with the founder-CEOs of Scalable Capital, a German fintech unicorn, through a unicorn fundraising round and an acquisition, and helped scale the organization from 100 to 400 employees. On another note, he studied Economic History, dropped out of a PhD at Oxford, and was a competitive wine taster.

Company Launches

Langfuse is the open-source LLM engineering platform designed to help teams build production-grade LLM applications faster. We started building Langfuse in SF during Y Combinator's W23 batch - just as GPT-4 was first released.
Today, Langfuse is used by tens of thousands of developers and is one of the most popular LLMOps platforms globally.

🤯 The Problem

Building production-grade LLM applications is challenging because of the probabilistic nature of LLMs and the multiple layers of scaffolding required to get complex workflows into production.
Developers need to debug their applications because of increasingly complex abstractions like chains, agents with tools, and advanced prompts. Understanding how an application executes and identifying the root causes of problems can be arduous. Additionally, monitoring costs and latencies is crucial: LLMs can incur high inference costs and take time to respond to users, making it important to track model usage and costs across applications.

Assessing the quality of LLM outputs also poses challenges. Outputs may be inaccurate, unhelpful, poorly formatted, or hallucinated, complicating the process of ensuring reliability and accuracy. Quickly identifying and debugging issues in complex LLM applications is essential but often difficult. Furthermore, building high-quality datasets for fine-tuning and testing requires capturing the full context of LLM executions.

✅ Our Solution

https://www.youtube.com/watch?v=2E8iTvGo9Hs 

Langfuse addresses these challenges by providing an open-source platform to debug and improve LLM applications.
Langfuse captures the full context of your application, tracing the complete execution flow—including API calls, retrieved context, prompts, parallelism, and more. By enabling hierarchical representations through nested traces, Langfuse helps you understand complex logic built around LLM calls. Langfuse also offers full multi-modal support, including audio, images, and attachments.
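
To make this concrete, here is a minimal sketch of nested tracing with the Python @observe() decorator (the function names and the stubbed retrieval step are illustrative, not part of the Langfuse API):

from langfuse.decorators import observe

@observe()  # traced as a nested span inside the caller's trace
def retrieve_context(question: str) -> str:
    # stand-in for a vector-store lookup or external API call
    return "Langfuse is an open-source LLM engineering platform."

@observe()  # the outermost call becomes the root of the trace
def answer(question: str) -> str:
    context = retrieve_context(question)
    # an LLM call would go here; both steps show up as one nested trace
    return f"Answer based on: {context}"

answer("What is Langfuse?")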

Langfuse measures cost and latency, breaking down metrics by user, session, feature, model, and prompt version, allowing for detailed analysis. To assess output quality, Langfuse facilitates the collection of user feedback, performs automated LLM-as-a-judge evaluations, and supports manual data labeling within the platform. It also offers prompt management features, allowing you to version and deploy prompts, experiment with new prompt ideas, and systematically track their success.
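
In practice, the per-user and per-session breakdown works by attaching identifiers to a trace, and user feedback can be recorded as a score. A sketch, assuming the v2 Python SDK (the IDs and the score name are illustrative):

from langfuse import Langfuse
from langfuse.decorators import observe, langfuse_context

langfuse = Langfuse()  # reads the LANGFUSE_* environment variables

@observe()
def handle_request(user_id: str, session_id: str) -> str:
    # attach identifiers so cost and latency can be sliced per user and session
    langfuse_context.update_current_trace(user_id=user_id, session_id=session_id)
    return "..."  # your application logic

handle_request("user-123", "session-456")

# record user feedback as a score on a trace (the trace ID here is a placeholder)
langfuse.score(trace_id="<trace-id>", name="user-feedback", value=1, comment="Helpful answer")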

For testing and experimentation, Langfuse supports versioning your application and running tests against curated datasets of expected inputs and outputs. This provides quantitative insight into the impact of changes, helping you understand and improve your LLM applications more effectively.
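
Here is a sketch of what a dataset run can look like with the Python SDK (the dataset name, item, and application stub are illustrative):

from langfuse import Langfuse

langfuse = Langfuse()

def my_app(question):  # stand-in for the application under test
    return "Paris"

# create a small test dataset
langfuse.create_dataset(name="capital-cities")
langfuse.create_dataset_item(
    dataset_name="capital-cities",
    input={"question": "What is the capital of France?"},
    expected_output="Paris",
)

# run the app against each item and link the resulting traces to a named run
dataset = langfuse.get_dataset("capital-cities")
for item in dataset.items:
    with item.observe(run_name="v1-prompt") as trace_id:
        output = my_app(item.input["question"])
        # score the run so versions can be compared quantitatively
        langfuse.score(trace_id=trace_id, name="exact_match", value=int(output == item.expected_output))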

🎬 Getting Started (Tracing OpenAI with Langfuse):

Below is a brief example highlighting how you can integrate with Langfuse. You can also try out Langfuse through our interactive live demo or our walkthrough video.

(Not using OpenAI? Langfuse works with any model or framework through our Python decorator and JS/TS SDK. Langfuse also natively integrates with popular frameworks such as LangChain, LlamaIndex, LiteLLM, and more.)
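
For a model without a native integration, a custom call can be logged as a generation via the decorator, with the model name and token usage reported manually. A sketch (the model name and usage numbers are placeholders):

from langfuse.decorators import observe, langfuse_context

@observe(as_type="generation")  # log this function as an LLM generation
def call_my_model(prompt: str) -> str:
    completion = "..."  # call any model or provider here
    # report model metadata and token usage, since there is no automatic capture
    langfuse_context.update_current_observation(
        model="my-custom-model",
        usage={"input": 10, "output": 25},
    )
    return completion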

Step 1: Create a New Project in Langfuse

  1. Sign up for Langfuse Cloud or self-host Langfuse OSS.
  2. Create a new project within Langfuse.
  3. Generate API credentials via the project settings.

Step 2: Log Your First LLM Call to Langfuse

The @observe() decorator makes it easy to trace any Python LLM application. In this quickstart, we use the Langfuse OpenAI integration to automatically capture model parameters along with cost and token usage.

%pip install langfuse openai

import os

# Get keys for your project from the project settings page https://cloud.langfuse.com
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region

from langfuse.openai import openai # drop-in replacement that traces all OpenAI calls
from langfuse.decorators import observe

@observe() # Langfuse decorator
def story():
    return openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
          {"role": "system", "content": "You are a great storyteller."},
          {"role": "user", "content": "Once upon a time in a galaxy far, far away..."}
        ],
    ).choices[0].message.content
 
@observe()
def main():
    return story()
 
main()

Step 3: See your Traces in Langfuse

Log into the Langfuse UI to view the created trace. You can now take it further by managing your prompts through Langfuse or by starting to test and evaluate your LLM executions.
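
As an example of the prompt management piece, a versioned prompt can be fetched and compiled at runtime. A sketch (the prompt name and variable are illustrative and assume a prompt created in the Langfuse UI):

from langfuse import Langfuse

langfuse = Langfuse()

# fetch the current production version of a managed prompt
prompt = langfuse.get_prompt("storyteller-system-prompt")

# fill in the variables defined in the prompt template
system_message = prompt.compile(topic="a galaxy far, far away")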

See this example trace in the Langfuse UI: https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/fac231bc-90ee-490a-aa32-78c4269474e3


Other Company Launches

🪢 Langfuse - Open-source product analytics for LLM apps

Track quality, cost and latency of your LLM apps
