tl;dr: Weave uses AI to understand software engineering work and give leaders X-ray vision into their teams.
The Problem
Engineering leaders are flying blind. They can't dive in everywhere, so they rely on gut feel or shoddy metrics to understand what's going on and what needs fixing.
Engineering is unique in that there are no good metrics to solve this problem. That's why we built Weave.
The Solution
Weave uses AI to measure engineering work. We run LLMs plus our own models on every PR and review, analyzing both output and quality, then summarize the data and insights in dashboards.
We've built a custom machine learning model trained on an expert-labeled dataset of PRs. The dataset lets us answer the question: "how long would this PR take an expert engineer?" That enables us to calculate the metric most companies care about most: how much actual work is getting done.
This isn't a lines-of-code counter; it's an actual estimate of the key metric: "How long would it take an experienced engineer to make this change?"
We can also tell you, for example, how much time each engineer is spending on code review and how useful their reviews are.
And we classify PRs into new features, bug fixes, or "keeping the lights on", so we can tell you how much of your engineering bandwidth is going to each bucket.
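To make the bucketing concrete, here is a minimal sketch of the idea. Weave's real classifier runs LLMs and custom models on the full PR; the keyword rules, bucket names, and function names below are purely illustrative assumptions.

```python
from collections import Counter

# Illustrative keyword rules only -- Weave's actual classifier is
# model-based; these keywords are assumptions for the sketch.
BUCKETS = {
    "bug fix": ("fix", "bug", "hotfix", "regression"),
    "keeping the lights on": ("bump", "upgrade", "chore", "deps", "ci"),
}

def classify_pr(title: str) -> str:
    """Assign a PR title to a bucket; anything unmatched counts as a new feature."""
    lowered = title.lower()
    for bucket, keywords in BUCKETS.items():
        if any(keyword in lowered for keyword in keywords):
            return bucket
    return "new feature"

def bandwidth_split(titles: list[str]) -> dict[str, float]:
    """Share of PRs per bucket, a crude stand-in for engineering bandwidth."""
    counts = Counter(classify_pr(title) for title in titles)
    total = sum(counts.values())
    return {bucket: count / total for bucket, count in counts.items()}
```

For example, `bandwidth_split(["Fix login crash", "Add billing page", "Bump deps"])` splits bandwidth evenly across the three buckets.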
Our Story
Adam’s background is in operations and sales. He led organizations of 100+ people and created an internal tool to measure performance and help individuals identify their weak spots. This is common practice in revenue teams, and he wanted to bring it to engineering.
Andrew was employee #1 at Causal. He saw firsthand how subjective engineering management is and how hard it is to scale a high-performing engineering team.
We met at Causal, where Adam was hired to run the sales and customer success function. We got to talking about the big difference between how our two departments worked, and the rest is history.
Early Use Cases
Our Ask