Preparing the world for AGI
Vectorview works with the big AI labs to evaluate the capabilities of their foundation models and LLM agents. Our evaluation framework is used to easily create, run, and score custom evaluation tasks. This is the key in creating safe, robust and reliable models. For example, we enable foundation model companies to audit dangerous capabilities in the next generation of models.
We’re creating the new standard for how to evaluate the agentic capabilities of LLMs. Foundation models will become as ubiquitous as javascript and our evaluations will be seen every time a company publishes a new model.
Our passion for AI safety is based on our belief that AI will transform our world for the better—but this won’t necessarily happen by default.
We’re at an early stage in our journey and becoming a part of the team now is a once-in-a-lifetime opportunity to create something new that will have a massive impact on the future of AI.