
Datadog for Voice AI

Roark monitors, evaluates, and simulates real customer calls for Voice AI developers. Instead of spending hours manually calling their agent to test changes, teams can instantly simulate real scenarios and catch failures before customers do. Within just 10 days of launching, we signed 8 paying customers, reached $60k in ARR, and began processing over 10,000 calls per week. Our team has been building together for over a decade. James previously built AngelList's portfolio infrastructure, supporting its growth from $10B to $124B in assets under management, while Daniel was a founding engineer at Akiflow (YC S20), scaling it to $1.5M ARR with 10,000+ customers. Voice AI is the future of business calls, but today, teams either fly blind or build in-house solutions because no off-the-shelf tooling exists. Roark solves this, bringing the first real testing and analytics platform to the space.
Roark
Founded: 2025
Team Size: 2
Status: Active
Location: San Francisco
Group Partner: Gustaf Alstromer
Active Founders

James Zammit, Founder & CEO

James Zammit (CEO) is an infrastructure and AI engineer with 10+ years of experience. Previously a Senior Engineer at AngelList, he built core portfolio infrastructure as the company grew from $10B to $124B in assets under management, and led the development of Relay, an AI-powered portfolio manager processing thousands of financial documents each month. He also co-founded a startup showcased at Google I/O 2016 in partnership with Firebase, and received scholarships from Meta and Apple.

Daniel Gauci, Founder

Daniel Gauci (CTO) is a software engineer with 10+ years of experience. Previously a Senior Engineer at Akiflow (YC S20), he helped scale the company to $1.5M ARR and over 10,000 customers. Daniel spent seven years at Casumo, leading the development of a mobile app used by millions of players, significantly contributing to the company’s growth to over $50M ARR.
Company Launches
Roark - The Observability & Testing Platform for Voice AI

TL;DR

Roark is an observability and testing platform for Voice AI that shows you whether your agent meets its goals, tracks how customers feel, and lets you replay real calls on your latest changes.

If you’re building voice AI agents and want a faster, smarter way to test and improve them, we’d love to connect! Email james@roark.ai or book a time here.

(Replay real calls without picking up the phone.)

The Problem: Testing Voice AI Is Painfully Manual

Once a voice agent is live, teams have no easy way to test updates. Every time you tweak a prompt or logic, you have to manually call the bot, hoping to catch issues before customers do.

  • Does the agent follow the right flow? You don’t know unless you re-run conversations by hand.
  • Did a change break something? You won’t find out until users complain.
  • How do customers actually experience the bot? Traditional testing tools only analyze text transcripts, missing tone, hesitation, or frustration.

Voice AI teams, especially in healthcare, legal, and customer support, need real-world validation for every change they ship. But existing testing tools rely on scripted test cases that don't reflect real interactions, leading to blind spots and regressions.

The Solution

Roark lets you replay real production calls against your newest AI logic, so you can test changes before they go live. No more manually dialing your bot or relying on outdated scripted tests - get real-world validation instantly.

How It Works:

  1. Capture real-world calls: Automatically ingest production conversations from your existing voice AI setup (integrates seamlessly with VAPI, Retell, or custom APIs).
  2. Replay calls on your updated agent: Our system re-runs the same user inputs, sentiment, and tone against your latest agent, cloning the original caller’s voice for more realistic testing.
  3. Evaluate goal completion: Define key objectives (e.g., “Did the agent confirm insurance?”) and automatically flag failures or missteps.
  4. Monitor sentiment & vocal cues: Detect frustration, long pauses, sighs, and hesitation - signals that text-based evaluations miss.
  5. Track performance with reports & dashboards: Visualize conversation flows, track drop-offs, and measure key metrics with Mixpanel-style analytics.
  6. Get real-time alerts: Set up custom monitoring for compliance violations, negative sentiment spikes, or repeated failures.
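
To make the workflow above concrete, here is a minimal sketch of what steps 2 and 3 (replaying a captured call and evaluating goal completion) could look like in code. Roark's actual API is not documented in this post, so every endpoint, field name, and the ROARK_API_KEY variable below are hypothetical, for illustration only.

```typescript
// Hypothetical sketch of the replay-and-evaluate loop (steps 2-3 above).
// Roark's real API is not documented in this post: all endpoints,
// fields, and types here are assumptions made for illustration only.

type GoalResult = { goal: string; passed: boolean; note?: string };

const API = "https://api.roark.ai/v1"; // hypothetical base URL
const headers = {
  Authorization: `Bearer ${process.env.ROARK_API_KEY}`, // hypothetical auth
  "Content-Type": "application/json",
};

async function replayAndEvaluate(callId: string, agentVersion: string) {
  // Step 2: replay a captured production call against the updated agent,
  // cloning the original caller's voice for more realistic testing.
  const replayRes = await fetch(`${API}/replays`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      sourceCallId: callId, // the real production call to replay
      agentVersion,         // the new prompt/logic under test
      cloneCallerVoice: true,
    }),
  });
  const { replayId } = (await replayRes.json()) as { replayId: string };

  // Step 3: evaluate goal completion on the replayed conversation.
  const evalRes = await fetch(`${API}/replays/${replayId}/evaluate`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      goals: ["Agent confirmed insurance", "Agent booked an appointment"],
    }),
  });
  const results = (await evalRes.json()) as { goals: GoalResult[] };

  // Flag failures so a CI job (or a human) can block the rollout.
  for (const g of results.goals) {
    console.log(`${g.passed ? "PASS" : "FAIL"} - ${g.goal}`);
  }
  return results.goals.every((g) => g.passed);
}

// Example: gate a deploy on a single replayed call.
replayAndEvaluate("call_123", "v42").then((ok) => process.exit(ok ? 0 : 1));
```

The same pattern could gate a CI pipeline: replay a batch of recent production calls on every agent change and fail the build if any goal regresses.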

Roark gives AI teams the same confidence in testing, iteration, and monitoring that software engineers have had for years with modern dev tools.

Check out our demo below!

https://youtu.be/eu8mo28LsTc?feature=shared

Why We Built Roark

We first ran into this problem while building a voice agent for a dental clinic. Patients kept reporting issues: the agent got stuck in loops, failed to confirm insurance, or gave irrelevant responses. But the only way to test fixes was to call the bot ourselves or read through hundreds of transcripts, hoping to spot patterns. It was frustrating, slow, and unreliable.

After talking to other teams working on Voice AI, we realized this problem was universal - everyone was struggling to validate their AI’s performance efficiently. That’s when we decided to build Roark.

Team

We’re engineers who have built and scaled complex systems at high-growth companies:

James Zammit (CEO) – Infra and AI engineer with 10+ years of experience. Previously at AngelList, where he worked on core infrastructure as the company scaled from $10B to $124B in assets under management and led the development of Relay, an AI-powered portfolio manager. Co-founded three startups, one of which partnered with Firebase and was showcased at Google I/O 2016.

Daniel Gauci (CTO) – Software engineer with 10+ years of experience. Previously at Akiflow (YC S20) as part of the mobile development team, helping the company reach $1.5M ARR and 10,000+ customers. Spent 7 years at Casumo, leading the development of the mobile app used by millions of players and helping the company reach $50M+ ARR.

Try it out!

If your team is tired of manually testing voice AI updates and wants a faster, more reliable way to validate changes, email us at founders@roark.ai or book a demo here - we’d love for you to try out Roark.
