Call every LLM API like it's OpenAI [100+ LLMs]
LiteLLM is an open-source LLM Gateway with 18K+ stars on GitHub, trusted by companies like Rocket Money, Samsara, Lemonade, and Adobe. We’re growing quickly and are looking for a founding full-stack engineer to help scale the platform. We’re based in San Francisco.
LiteLLM provides an open-source Python SDK and a Python FastAPI server that let you call 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
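To make the unified interface concrete, here is a minimal sketch (not taken from the posting) of calling two different providers through the same OpenAI-style completion call; the model names and API-key setup are illustrative.

```python
# Minimal sketch: one OpenAI-style call shape for multiple providers.
# Assumes OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment;
# model names are illustrative.
from litellm import completion

messages = [{"role": "user", "content": "Summarize what an LLM gateway does."}]

# Same arguments, different providers -- LiteLLM routes on the model string.
openai_response = completion(model="gpt-4o-mini", messages=messages)
claude_response = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

# Responses come back in the OpenAI response format regardless of provider.
print(openai_response.choices[0].message.content)
print(claude_response.choices[0].message.content)
```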
We have raised a $1.6M seed round from Y Combinator, Gravity Fund, and Pioneer Fund. You can find more information on our website, GitHub, and technical documentation.
Why do companies use LiteLLM Enterprise
Companies adopt LiteLLM Enterprise once they put LiteLLM into production and need enterprise features: Prometheus metrics for production monitoring, and the ability to give a large number of people LLM access through SSO (single sign-on) or JWT (JSON Web Token) authentication.
Your main responsibility will be to ensure that LiteLLM unifies the format for calling LLM APIs under the OpenAI spec. In practice, this means writing transformations that map an API request from the OpenAI spec to each LLM provider’s native format (see the sketch below).
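As a rough illustration of that kind of work (a sketch, not LiteLLM’s actual internals), a transformation for Anthropic might pull the system message out of the OpenAI-style messages list and move request fields into Anthropic’s Messages API shape; the helper name and field handling below are hypothetical.

```python
# Hypothetical sketch of an OpenAI -> Anthropic request transformation.
# This is not LiteLLM's real transformation code; it only illustrates the
# kind of mapping the role involves.
from typing import Any

def transform_openai_to_anthropic(openai_request: dict[str, Any]) -> dict[str, Any]:
    """Map an OpenAI-spec chat request to Anthropic's Messages API shape."""
    messages = openai_request.get("messages", [])

    # Anthropic takes the system prompt as a top-level field, not a message role.
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat_messages = [m for m in messages if m["role"] != "system"]

    anthropic_request: dict[str, Any] = {
        "model": openai_request["model"],
        "messages": chat_messages,
        # Anthropic requires max_tokens; fall back to a default if the caller omitted it.
        "max_tokens": openai_request.get("max_tokens", 1024),
    }
    if system_parts:
        anthropic_request["system"] = "\n".join(system_parts)
    if "temperature" in openai_request:
        anthropic_request["temperature"] = openai_request["temperature"]
    return anthropic_request
```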
You will be working closely with our CEO and CTO in this role.
Example projects you will pick up on joining:
`thinking` param on LiteLLM (see the sketch below)
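For a sense of what that project could mean in practice, here is a hedged sketch of the user-facing call it might enable; the model name, the param shape (modeled on Anthropic’s extended-thinking request format), and the passthrough behavior are assumptions, not a spec from the posting.

```python
# Hypothetical sketch of what `thinking` support could look like for a user.
# Model name and the exact param shape are assumptions (modeled on Anthropic's
# extended-thinking request field), not confirmed LiteLLM behavior.
from litellm import completion

response = completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=[{"role": "user", "content": "Plan a 3-step migration to Postgres."}],
    # The project: accept this OpenAI-style extra param and translate it into
    # the provider's native extended-thinking request field.
    thinking={"type": "enabled", "budget_tokens": 1024},
)
print(response.choices[0].message.content)
```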
What is our tech stack
Backend built on Python and FastAPI. Frontend on JS/TS. Redis, Postgres, S3, GCS storage, Datadog, and the Slack API.
Who we are looking for
Stage 1: Solve 1 GitHub issue on LiteLLM
Stage 2: 2-week work sprint
Stage 3: Decision + offer
LiteLLM (https://github.com/BerriAI/litellm) is a Python SDK and Proxy Server (LLM Gateway) for calling 100+ LLM APIs in the OpenAI format [Bedrock, Azure, OpenAI, VertexAI, Cohere], and is used by companies like Rocket Money, Adobe, Twilio, and Siemens.