Parea

Test, evaluate & observe your LLM applications

Parea AI is the essential developer platform for debugging and monitoring every stage of LLM application development. We provide testing, evaluation, and monitoring in one unified platform. These capabilities help developers get visibility into LLM responses, quickly test and optimize prompts, and ensure customers get the best user experience.
Founded: 2023
Team Size: 2
Location: New York
Group Partner: Dalton Caldwell

Active Founders

Joel Alexander, Founder

Senior Software Engineer at Lyft. J.P. Morgan TMT Investment Banking. BA @ Columbia University. MSc @ New York University.
Joschka Braun, Founder

Published my first paper in the highest-ranked number theory journal at age 17. Continued to pursue math until my M.Sc. from NYU as a Fulbright scholar. Noticed AI has more real-world applications than math and started researching applications of deep learning to medical imaging at Covera Health. Built semantic search systems and automatic code generation software at Jina AI.
Company Launches

tl;dr: Parea AI automates the creation of evals for your AI products. We do this by bootstrapping an evaluation function from human annotations, allowing you to automagically turn “vibe checks” into scalable and reliable evaluations aligned with human judgment.

😩 The Problem

Free-form text can typically only be evaluated by having humans review outputs or by using LLMs as judges. The former is laborious, slow, and expensive, while the latter often fails to evaluate outputs correctly. For LLM evaluations to work properly, they must themselves be prompt engineered; i.e., they require their own optimization process.

🚀 The Solution

The best LLM evals are adapted to your particular business use case & data. We've developed a method for uploading human annotations (via CSV or using our Annotation Queue) and bootstrapping an evaluation to mimic the annotations. To create a human-aligned eval, you need as few as 20 sample annotations. Using your new LLM eval is as easy as copying the code into your codebase or using it directly via Parea's API. Check out our docs to see the complete workflow.
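The bootstrapping step can be illustrated with a minimal sketch (this is not Parea's actual implementation; the CSV columns and the `bootstrap_judge_prompt` helper below are hypothetical): a handful of human-labeled examples become few-shot demonstrations in an LLM-judge prompt, so the judge mimics the annotators' judgment.

```python
import csv
import io


def bootstrap_judge_prompt(annotations, n_shots=20):
    """Build a few-shot LLM-judge prompt from human annotations.

    Each annotation is a dict with 'input', 'output', and a human
    'score' (e.g. 1 = acceptable, 0 = not). The first n_shots
    examples become few-shot demonstrations for the judge.
    """
    shots = annotations[:n_shots]
    lines = [
        "You are grading LLM outputs. Score 1 if acceptable, 0 otherwise.",
        "Follow the judgment pattern of these human-labeled examples:",
        "",
    ]
    for ex in shots:
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}")
        lines.append(f"Score: {ex['score']}")
        lines.append("")
    lines.append("Now grade the next case the same way.")
    return "\n".join(lines)


# Annotations exported as CSV (assumed columns: input,output,score).
csv_text = "input,output,score\nWhat is 2+2?,4,1\nWhat is 2+2?,5,0\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
prompt = bootstrap_judge_prompt(rows)
```

The resulting `prompt` would then be sent, together with each new input/output pair, to an LLM judge; the few-shot examples are what align the judge's scores with human judgment.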
