BricksAI helps enterprises develop LLM apps more securely
Tl;dr: BricksAI helps enterprises develop LLM apps more securely. We do this through an access manager that can set a spend limit, rate limit, and expiration date on individual API keys.
The Problem: OpenAI does not provide enterprise-level security features
“You wouldn’t believe how many organizations are building OpenAI wrappers.”
- A machine learning director at a unicorn startup.
OpenAI does not offer enterprise-level security features. Many companies have to build additional security controls on top of OpenAI's APIs before they can use them in production.
“We simply assign each team an OpenAI key”
- A senior engineer working at a public fintech company.
In practice, building OpenAI applications often means developers share OpenAI API keys with each other to move quickly.
There are several risks and drawbacks associated with this approach:
- A shared key has no per-team spend limit, so one runaway script can burn the whole budget.
- There is no per-key rate limit, so one team's traffic spike can throttle everyone else.
- Keys never expire on their own, and if a shared key leaks, revoking it breaks every team using it.
Azure OpenAI Service fills some of these gaps, but it only runs on Azure; for competitive reasons, teams on GCP or AWS cannot use it.
The Solution: A self-hosted OpenAI access manager
BricksAI sits between your applications and different LLMs, checking every request. We ensure each call is authorized and stays within its traffic and cost limits.
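To make the idea concrete, here is a minimal sketch of the kind of per-key checks such a gateway performs before forwarding a request. This is an illustrative simulation, not BricksAI's actual implementation; the `ApiKey` class and its fields are hypothetical names:

```python
import time

class ApiKey:
    """Hypothetical per-key state a gateway tracks: spend, traffic, expiry."""

    def __init__(self, spend_limit_usd, rate_limit_per_min, expires_at):
        self.spend_limit_usd = spend_limit_usd        # max total spend allowed
        self.rate_limit_per_min = rate_limit_per_min  # max requests per minute
        self.expires_at = expires_at                  # unix timestamp
        self.spent_usd = 0.0                          # spend recorded so far
        self.request_times = []                       # timestamps of recent requests

    def check(self, now=None):
        """Return (allowed, reason); called before proxying each request."""
        now = time.time() if now is None else now
        if now >= self.expires_at:
            return False, "key expired"
        if self.spent_usd >= self.spend_limit_usd:
            return False, "spend limit reached"
        # Sliding window: keep only requests from the last 60 seconds.
        self.request_times = [t for t in self.request_times if now - t < 60]
        if len(self.request_times) >= self.rate_limit_per_min:
            return False, "rate limit exceeded"
        self.request_times.append(now)
        return True, "ok"

# A key with a $10 spend limit, 2 requests/min, expiring in an hour:
key = ApiKey(spend_limit_usd=10.0, rate_limit_per_min=2,
             expires_at=time.time() + 3600)
print(key.check())  # allowed
print(key.check())  # allowed
print(key.check())  # blocked: rate limit exceeded
```

A real gateway would persist this state and meter actual token costs per request, but the decision logic at the front door is essentially this.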
You can create a custom API key with a rate limit, spend limit, and expiration date, either programmatically or through our UI:
Then use the API key to access an LLM like you normally would. When a key reaches one of its limits (e.g. it has expired), our gateway blocks your requests:
In addition, our enterprise offering includes:
Who are we?
Spike (on the right) was a senior software engineer at Unity for three years. He worked on an internal API gateway used by hundreds of developers.
Donovan (on the left) was a software engineer at Credit Suisse, building internal tools dealing with financial instruments used by institutional investors.
While working on our previous AI-powered Figma-to-code idea, we realized that building applications directly on OpenAI credentials is easy to get started with but poses huge security risks. After repeatedly hearing news about leaked OpenAI keys, we were inspired to create a solution that makes OpenAI development safer for enterprises.
Interested in learning more?