tl;dr: Deasie ensures that only relevant, high-quality, and safe data is fed into language models.
Hi! We’re Reece, Leo, and Mikko and we’re excited to launch Deasie.
⚠️ Problem
For the first time, companies are turning towards their masses of unstructured data (e.g., documents, reports, emails) for a range of Generative AI use cases. Today, most companies are unable to answer the critical questions needed to ensure language models are trained and deployed reliably: Does this data contain sensitive information? Is this the most relevant data for this problem? Are there inconsistencies in the data that could skew my results?
💡 Solution
Deasie is a platform that runs automated compliance checks (e.g., PII and proprietary data) and quality checks (e.g., irrelevant, outdated, or inconsistent information) on unstructured data, so companies can reliably govern which data is used for which language model use case.
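To make this concrete, here is a minimal sketch of the kind of checks such a platform automates. Everything below (function names, regex patterns, the staleness rule) is an illustrative assumption, not Deasie's actual implementation.

```python
import re
from datetime import datetime, timezone

# Illustrative patterns only; a production system would use far more robust PII detectors.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_document(text: str, last_modified: datetime, max_age_days: int = 365) -> dict:
    """Flag potential compliance and quality issues in a single document."""
    pii_hits = {name: pattern.findall(text) for name, pattern in PII_PATTERNS.items()}
    age_days = (datetime.now(timezone.utc) - last_modified).days
    return {
        "contains_pii": any(hits for hits in pii_hits.values()),
        "pii_hits": pii_hits,
        "stale": age_days > max_age_days,  # untimely data can skew downstream results
    }

report = check_document(
    "Contact jane.doe@example.com about the 2019 audit.",
    last_modified=datetime(2019, 6, 1, tzinfo=timezone.utc),
)
print(report)  # flags both a PII hit and a stale document
```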
👨👦👦 The Team
Founders Leo, Mikko & Reece previously built McKinsey's award-winning product that used AI to enhance data quality in the enterprise, have built ML applications across QuantumBlack, MIT, Amazon, and Mercedes, and have now set out to unlock the power of unstructured data for the upcoming wave of language model applications.
👋 Ask
Many companies are now delving into the world of LLMs. If you’ve encountered challenges identifying and using high-quality input data (e.g., data that is relevant, timely, accurate, and consistent), we’d love to hear from you! Please reach out to us at leonard@deasie.com