
Hey everyone! I’m Anand, founder of Scope.
Problem
More and more products are now used through Claude Code, Codex, Cursor, and similar agents. Once your product is accessible to an agent, the agent decides which product to use, how to set it up, and whether to keep using it afterward. Most companies still can’t see that process clearly.
That means they often don’t know when an agent is choosing them over a competitor, where it’s getting stuck, or which part of the product or docs is causing the problem.
Solution
That’s what Scope is for.
We run real workflows across agents and show teams how those agents actually behave with their product.
A simple example: a company thinks its product works well with agents, but on a real task the agent picks a competitor, gets stuck during auth, or goes down the wrong path because the docs are unclear. Scope shows the trace, where it failed, and what to change.
We’re already working with Blaxel and a decacorn.
Demo video:
https://youtu.be/OqtaJbjMbiQ
Why I’m working on this
Before Scope, I did interpretability research on closed-source models at Princeton, then worked as an ML engineer on GEO / AEO. I kept seeing the same thing: these systems were starting to shape how products get discovered and used, but for the companies building those products, the process was still mostly a black box. I wanted to build something that makes that behavior visible and useful to the teams trying to fix it.
Ask