The leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside the IDE
amplified.dev @ Continue. Previously Founding PM → Group PM @ Rasa, adaptive computation @ UMich
Coding @ Continue. Previously math and physics @ MIT ('23), first hire @ Mayan (W21), building mission control software @ NASA Jet Propulsion Laboratory
Hi all! We’re Nate and Ty, co-founders of Continue.
Stop copy/pasting code from ChatGPT and take control of your development data 💪
Continue is the open-source autopilot for software development, built to be deeply customizable and to continuously learn from development data.
Our GitHub is https://github.com/continuedev/continue. You can watch a demo of Continue and download the extension at https://continue.dev.
A growing number of developers are replacing Google + Stack Overflow with Large Language Models (LLMs) as their primary way of getting help, much as developers previously replaced reference manuals with Google + Stack Overflow.
Unfortunately, existing LLM developer tools are cumbersome black boxes. Developers are stuck copy/pasting from ChatGPT and guessing what context Copilot uses to make a suggestion. As we use these products, we expose our code and give implicit feedback that is used to improve their LLMs, yet we don’t benefit from this data nor get to keep it.
The solution is to give developers what they need: transparency, hackability, and control. Every one of us should be able to reason about what’s going on, tinker, and have control over our own development data. This is why we created Continue.
At its most basic, Continue removes the need for copy/pasting from ChatGPT—instead, you collect context by highlighting and then ask questions in the sidebar or have an edit streamed directly to your editor.
But Continue also provides powerful tools for managing context. For example, type ‘@issue’ to quickly reference a GitHub issue as you are prompting the LLM, ‘@README.md’ to reference that file, or ‘@google’ to include the results of a Google search.
And there’s a ton of room for further customization. Today, you can write your own slash commands and context providers to tailor Continue to your workflow.
Continue works with any LLM, including local models using ggml or open-source models hosted on your own cloud infrastructure, allowing you to remain 100% private. While OpenAI and Anthropic perform best today, we are excited to support the progress of open-source as it catches up (https://continue.dev/docs/model-setup/select-model).
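To make the local-model setup concrete, here is a rough sketch of what a config.py pointing Continue at a ggml server running on your own machine could look like. The import paths, class names, and parameters below are assumptions for illustration only; the model-setup docs linked above describe the actual API.

```python
# Illustrative sketch only: import paths, class names, and parameters are
# assumptions, not the exact Continue API (see the model-setup docs above).
from continuedev.core.config import ContinueConfig
from continuedev.core.models import Models
from continuedev.libs.llm.ggml import GGML

config = ContinueConfig(
    models=Models(
        # Requests go to a ggml server you run yourself (e.g. via llama.cpp),
        # so prompts and code never leave your machine.
        default=GGML(server_url="http://localhost:8000", max_context_length=2048),
    )
)
```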
When you use Continue, you automatically collect data on how you build software. By default, this development data is saved to .continue/dev_data on your local machine. When combined with the code that you ultimately commit, it can be used to improve the LLM used by your team.
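For a concrete sense of what gets collected, here is a minimal Python sketch that counts the events logged in a local dev_data folder. It assumes the folder sits under your home directory and contains JSON-lines files with one event per line; the filenames and fields are illustrative assumptions rather than a documented schema.

```python
# Minimal sketch, assuming dev_data lives under your home directory as a
# folder of JSON-lines files (one JSON object per line). Filenames and
# fields are illustrative assumptions, not a documented schema.
from pathlib import Path

DEV_DATA_DIR = Path.home() / ".continue" / "dev_data"

def count_events(dev_data_dir: Path = DEV_DATA_DIR) -> dict[str, int]:
    """Count how many events of each type have been logged locally."""
    counts: dict[str, int] = {}
    for path in sorted(dev_data_dir.glob("*.jsonl")):
        with path.open() as f:
            counts[path.stem] = sum(1 for line in f if line.strip())
    return counts

if __name__ == "__main__":
    events = count_events()
    if not events:
        print(f"No development data found in {DEV_DATA_DIR}")
    for name, n in events.items():
        print(f"{name}: {n} records")
```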
You can read more about how development data is generated as a byproduct of LLM-aided development and why we believe that you should start collecting it now: https://blog.continue.dev/its-time-to-collect-data-on-how-you-build-software/
Continue has an Apache 2.0 license. We plan to make money by offering organizations a paid development data engine—a continuous feedback loop that ensures their LLMs always have fresh information and generate code in the organization’s preferred style.
We started working on Continue because LLMs rapidly got better and quickly became an important part of our development workflows. In 2021, Ty was helping the research team at Rasa use GPT-3 to create user simulators and Codex to generate Actions code for Rasa Open Source—projects that did not result in shipped products because LLMs were not good enough yet.
This changed in 2022. During the day, Nate wrote code at NASA, unable to use LLMs due to privacy concerns. But at night, the difference was stark as GitHub Copilot and ChatGPT completely changed how we built side projects.
We believe that the role of LLMs in software development is only beginning. In order to reach their full potential, developers need to be able to tinker and unleash their creativity. Every developer must be able to try new things and push what's possible using LLMs without having to ask for permission.