Interpretable AI systems
At Guide Labs, we are building AI systems that are engineered, from the start, to be modular, easily audited, steered, and understood by humans. Conventional model development approaches optimize for narrow performance measures and defer key interpretability (and reliability) concerns until after the model has been pre-trained. This approach is fundamentally broken. Guide Labs reimagines the entire model development process: the training dataset, the model's architecture, the loss function, the pre-training and post-training algorithms, all the way to how users interact with the model.
Team
We are interpretability researchers and engineers who have been responsible for major advances in the field. We have set out to rethink the way machine learning models, AI agents, and AI systems more broadly are developed.
A note about experience
We do not care about formal credentials. If you share our vision, can demonstrate that you would be effective in this role, and would like to get involved, please reach out. If some of the points below don't line up perfectly with your profile or past experience, we still encourage you to apply!
About the Role
As a founding research engineer, you'll work closely with the team on cutting-edge model building, research, infrastructure, and tooling towards creating interpretable models and AI systems.
Responsibilities
Design and implement machine learning algorithms to enhance interpretability.
Work on distributed training systems to scale our models to billions of parameters, optimizing for performance and efficiency across multi-GPU and multi-node setups while handling large-scale datasets.
Run large-scale experiments to test the reliability of our training pipelines.
Engineer efficient data pipelines to process large datasets for model training.
Implement an SDK and API for our interpretable models.
Implement a fine-tuning API for our interpretable models.
Build and maintain infrastructure for serving interpretable models.
You are
Very comfortable in Python, and able to pick up other languages, especially systems languages, as needed.
Deeply experienced with PyTorch/JAX and their associated ecosystems.
Passionate about good engineering practices.
Comfortable with AWS/GCP/Azure, or able to pick them up as needed.
Experienced with HPC systems.
Examples of projects you’ll work on in your first 90 days
Implement a package for reliable distributed pre-training and fine-tuning of our interpretable models.
Contribute to empirical research and development for assessing the reliability of our models compared to contemporary alternatives.
Implement an SDK and/or API for interacting with our models.