đź“Ť Location: Palo Alto, CA (In-person Mon/Tues/Thurs/Fri, WFH Weds)
💰 Compensation: $150K–$200K + 0.50%–0.75% equity
About Distro
Distro is the AI co-pilot for inside sales and counter reps at industrial distributors. We’re building workflow software that combines verticalized LLMs, search, and quoting logic to help reps quote faster, convert more, and capture more margin per order.
We’re building for the people who keep America’s industrial supply chains running. Our customers sell everything from HVAC systems to plumbing components to electrical parts. They work at the counter, on the phone, and in the field — and they’re moving fast.
We’re a well-funded, Y Combinator-backed startup with strong early traction and growing customer demand. Our goal is to modernize a massive industry that has historically been underserved by software.
The Role
We’re hiring a Senior AI/ML Engineer to own our AI infrastructure end-to-end. You’ll take over AI/ML work directly from the CEO and be responsible for designing, building, and deploying core features powered by foundation models.
Distro is deeply AI-native: our product relies on language models to retrieve documents, extract product specs, generate quote language, and drive semantic search. You’ll bring these experiences to life — combining product thinking with hands-on engineering to ship fast, reliable, production-grade systems.
What You’ll Do
- Build and ship AI-powered features using foundation models (OpenAI, Anthropic, etc.)
- Own the AI stack — from prompt engineering to vector search to model serving
- Implement RAG pipelines, chat-agent workflows, and quoting/document workflows
- Optimize for performance and latency in customer-facing UX
- Work closely with the CEO (our current AI/ML lead) to transition ownership and go even faster
- Evaluate, test, and iterate on LLMs, prompts, and product experiences
- Set technical direction for how we productionize and scale our AI infra
What We’re Looking For
- Strong engineering fundamentals and hands-on experience building with LLMs
- Experience with foundation models, prompt design, embeddings, and vector DBs
- You’ve shipped AI features to real users — not just notebooks or demos
- Bias toward action and speed — you move quickly and sweat the details later
- Comfort owning the stack end-to-end, including deployment and monitoring
- Builder mindset — you work well in ambiguity and don’t need a spec to get started
- Must be in-person 4 days per week in Palo Alto
Must Have (Professional Experience)
- Agentic chat + retrieval frameworks (e.g. LangGraph, LangChain, LlamaIndex)
- Vector databases (e.g. MongoDB Vector Store, Pinecone, Weaviate)
- Model deployment + fine-tuning (AWS SageMaker or similar)
- Expertise in production-quality Python and/or TypeScript
Nice to Have
- Strong product instincts or UX sensibility
- Exposure to vertical SaaS or AI products built for non-technical users
- Background in document parsing or quoting/pricing problems
Why Join
You’ll be joining Distro at a formative moment — post-Seed, with real customers, real revenue, and a huge market in need of transformation. This is a chance to own the AI stack at a company where AI isn’t just a feature — it is the product.
What Makes Distro Different
- Real AI, real users — Our product solves high-stakes, real-world problems for reps in the field.
- No AI theater — Everything we build gets shipped, used, and improved based on feedback.
- In-person speed — Fast feedback loops, high context, zero Zoom lag.
- Design + product-led — You’ll work side-by-side with great design and engineering talent.
- Massive vertical — Our customers are industrial distributors — a $7T global ecosystem underserved by modern software.
If you're excited to own the AI stack at a fast-moving, design-forward vertical AI company — let’s talk.
🧠 How We Work
- This is a hybrid role (four days per week in office) based in Palo Alto
- We work in office Monday, Tuesday, Thursday, and Friday
- Wednesdays are remote
- Every engineer at Distro is an individual contributor writing production code
- We prioritize speed, accountability, and strong cross-functional collaboration