Foundation models that can explain their reasoning and are easy to align
At Guide Labs, we build interpretable foundation models that can reliably explain their reasoning and are easy to align.
Current transformer-based large language models (LLMs) and diffusion generative models are largely inscrutable and do not provide reliable explanations for their outputs. In fields such as medicine, lending, and drug discovery, an answer alone is not enough; domain experts also need to know why the model arrived at its output.
We’ve developed interpretable foundation models that can explain their reasoning and are easy to align.
These models:
Using all these explanations, we can:
We are interpretability researchers and engineers who have been responsible for major advances in the field, and we have set out to rethink how machine learning models, AI agents, and systems more broadly are developed.
Please reach out to us at info@guidelabs.ai.