LiteLLM

Call every LLM API like it's OpenAI [100+ LLMs]

Founding Backend Engineer

$140K - $200K / 0.10% - 0.35%
Location: San Francisco, CA, US / Remote (US)
Job Type: Full-time
Experience: 1+ years
Krrish Dholakia
Founder

About the role

TLDR

LiteLLM is an open-source LLM Gateway with 18K+ stars on GitHub, trusted by companies like Rocket Money, Samsara, Lemonade, and Adobe. We're growing quickly and looking for a founding full-stack engineer to help scale the platform. We're based in San Francisco.

What is LiteLLM

LiteLLM provides an open-source Python SDK and a Python FastAPI server that let you call 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
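To make the unified format concrete, here is a minimal usage sketch of the SDK; the model strings are illustrative and assume you have the relevant provider credentials configured.

```python
# Minimal sketch of the unified interface -- model strings are illustrative.
from litellm import completion

messages = [{"role": "user", "content": "Hello, how are you?"}]

# The same OpenAI-style call works across providers; only the model string changes.
openai_response = completion(model="gpt-4o", messages=messages)
claude_response = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
bedrock_response = completion(model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0", messages=messages)

# Responses come back in the OpenAI response format.
print(openai_response.choices[0].message.content)
```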

We have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and Technical Documentation.

Why do companies use LiteLLM Enterprise

Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features like Prometheus metrics (production monitoring), or need to give LLM access to a large number of people via SSO (single sign-on) or JWT (JSON Web Tokens).

What you will be working on

Your main responsibility will be to make sure that LiteLLM unifies the format for calling LLM APIs under the OpenAI spec. This involves writing transformations that map an API request from the OpenAI spec into each LLM provider's native format.
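As a rough illustration (not the actual code in the repo), a transformation targeting Anthropic's /v1/messages API might look like the simplified sketch below; the real adapters handle streaming, tool calls, and many provider-specific quirks.

```python
# Hypothetical, simplified sketch of an OpenAI -> Anthropic request transformation.
# The real transformations in the LiteLLM repo cover many more parameters
# (streaming, tool calls, response formats, provider-specific quirks, ...).
def transform_openai_to_anthropic(openai_request: dict) -> dict:
    # Anthropic's /v1/messages API keeps the system prompt outside the messages list.
    system_parts = [m["content"] for m in openai_request["messages"] if m["role"] == "system"]
    chat_messages = [m for m in openai_request["messages"] if m["role"] != "system"]

    anthropic_request = {
        "model": openai_request["model"],
        "messages": chat_messages,
        # max_tokens is required by Anthropic but optional in the OpenAI spec.
        "max_tokens": openai_request.get("max_tokens", 1024),
    }
    if system_parts:
        anthropic_request["system"] = "\n".join(system_parts)
    if "temperature" in openai_request:
        anthropic_request["temperature"] = openai_request["temperature"]
    return anthropic_request
```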

You will be working closely with our CEO and CTO in this role.

Example projects you will pick up on joining:

  • Migrate key systems from httpx to aiohttp to get 10x higher throughput (see the sketch after this list)
  • Add support for the Anthropic and Bedrock Anthropic thinking param in LiteLLM
  • Handle LLM provider-specific quirks, like OpenAI o1 not supporting streaming
  • Ensure LiteLLM can compute aggregate spend once there are 1M+ logs in the database
  • Add cost tracking and logging for the Anthropic /v1/messages API
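
For the first item, here is a rough, hypothetical sketch of the kind of aiohttp call that would replace an httpx one; the URL, payload, and helper name are placeholders, not LiteLLM internals.

```python
# Rough sketch of the kind of change involved in an httpx -> aiohttp migration
# (URL, payload, and function name are placeholders, not LiteLLM internals).
import asyncio
import aiohttp

async def post_chat_completion(url: str, payload: dict, api_key: str) -> dict:
    # Reusing a ClientSession keeps connections pooled across requests.
    async with aiohttp.ClientSession() as session:
        async with session.post(
            url,
            json=payload,
            headers={"Authorization": f"Bearer {api_key}"},
        ) as resp:
            resp.raise_for_status()
            return await resp.json()

if __name__ == "__main__":
    result = asyncio.run(
        post_chat_completion(
            "https://api.openai.com/v1/chat/completions",
            {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]},
            "sk-...",
        )
    )
    print(result)
```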

What is our tech stack

Backend built with Python and FastAPI. Frontend in JS/TS. Redis, Postgres, S3, GCS storage, Datadog, Slack API.

Who we are looking for

  • You have 1-2 years of experience in backend or full-stack development and have worked on production-grade, critical systems
  • You are excited about open source software. You want to talk to our users to understand why they use us and what they require.
  • You have played a key role in maintaining and scaling reliable, high-performance infrastructure.
  • You are a hard worker and thrive in a small, accountable team
  • You are eager to take responsibility for shaping the infrastructure of a rapidly growing development tool.

About the interview

Stage 1: Solve one GitHub issue on LiteLLM

  • Allows you to get familiar with LiteLLM
  • Allows us to evaluate your proficiency

Stage 2: Two-week work sprint

  • You will work with our team for two weeks; this gives both sides data on what working together would be like. We aim to release 5 improvements per day to the open-source repo

Stage 3: Decision + offer

About LiteLLM

LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs in the OpenAI format (Bedrock, Azure, OpenAI, VertexAI, Cohere), and is used by companies like Rocket Money, Adobe, Twilio, and Siemens.

LiteLLM
Founded: 2023
Team Size: 2
Status: Active
Founders
Krrish Dholakia, Founder
Ishaan Jaffer, Founder