
GradientJ

Platform to build large language model applications

GradientJ helps teams deploy large language models at scale. Companies use us to build GPT-4-powered APIs that process millions of job applications per year. Our app provides tools to build and compare prompts, track live performance, and continuously improve models from human feedback, with automated model comparison across LLM providers and open-source alternatives.

Founded: 2021
Team Size: 2
Location: Austin, TX
Group Partner: Nicolas Dessaigne

Active Founders

Oscar A. Martinez, Founder

Economist turned data scientist. Building tools to make natural language processing accessible to anyone. Give me a shout if you want to talk about using reinforcement learning or NLP in your product.


Company Launches

What is it?

GradientJ is a web application to build, compare, and deploy prompts for large language models like GPT-3.

Is this for me?

GradientJ is for:

  • Product teams just starting to think about using something like GPT-3 in their product and trying to figure out how to get started.
  • Teams that are already using LLMs to delight their customers and are trying to level up their workflow with faster iteration, better organization, easy cost and performance comparisons, and an on-ramp to ~the mythical land of fine-tuning~.

What’s the Problem?

Today, the road from an exciting ChatGPT prototype to a production-ready GPT-3 deployment is unclear and littered with pitfalls.

Even for the teams that do get version 1 out the door, the workflow often ends up as a stitched-together hodgepodge of makeshift processes and tools.

LLM deployment is a leading cause of “am I doing this right?”-itis. Symptoms include:

  • A text document with examples copy-pasted into the OpenAI playground
  • A prompt comparison process that consists of an eyeball test followed by “looks better-ish”
  • Hours spent tweaking a prompt on one example only to find, in the end, it breaks on all the others
  • Nightmares of users getting nonsensical or offensive output from a hallucinating LLM

How do we solve it?

We made a prompt engineering interface that:

  • Automatically suggests starting points or improvements to your prompts.
  • Tracks prompt versions and benchmark examples so you can always compare and deploy the best iteration.
  • Deploys to your own API endpoint with automatic cost and performance tracking in one click (sketched below).
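
For a concrete sense of what that last step looks like from the caller's side, here is a minimal Python sketch of hitting a deployed prompt endpoint. The URL, authentication header, and request/response fields are illustrative assumptions, not GradientJ's actual API.

```python
# Hypothetical sketch: calling a deployed prompt endpoint from your app.
# The URL, header, and field names below are illustrative assumptions,
# not the documented GradientJ API.
import requests

ENDPOINT = "https://api.gradientj.example/v1/prompts/my-prompt/run"  # hypothetical URL
API_KEY = "YOUR_API_KEY"


def run_prompt(variables: dict) -> str:
    """Send template variables to the deployed prompt and return the completion."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"variables": variables},  # assumed request shape
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]  # assumed response shape


if __name__ == "__main__":
    print(run_prompt({"job_description": "Senior data engineer, remote"}))
```

In a setup like this sketch, the deployed endpoint hides the underlying provider and prompt version, so swapping in a better prompt iteration or a cheaper model would not require changes to the calling code.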

The Ask:

  • Let us help: If you’re trying to use LLMs like GPT-3 in your application and want to discuss how to get version 1 out the door FAST, we’d love to talk.
  • Try it out: If you’re already using LLMs in your application, we’d love for you to give GradientJ a try.
  • Spread the Word: If anyone you know is using or thinking of using LLMs, send them our way; we want to help turn their ideas into reality.

Drop your email here: https://forms.gle/23MosusmACYE83bq9, email us at founders@gradientj.com, or jump right in and book a time to talk!