Solving critical GPU performance, reliability, capacity and cost problems
Hey, we’re Neel and Niranjan from Cedana.
Losing work because of infra problems is painful. Imagine a long-running compute job where the instance fails 20 hours in, or a job that finishes only for a misconfigured pipeline to force you to restart it from scratch.
Burning cash is stressful. Poor utilization makes your inference jobs cost more, and if you're managing a cluster of thousands of GPUs, it leaves money on the table even as demand skyrockets.
Cold start times impact your customer satisfaction and their reliance on your solution.
Limited GPU access makes it difficult to innovate, and finding GPUs can become a full-time job of constantly identifying, evaluating, and adapting to different vendors.
Cedana is real-time migration for compute. We automatically schedule and move workloads across instances and vendors without interrupting progress or breaking anything.
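Cedana handles this transparently at the systems level, but the underlying idea of moving a workload without losing progress can be illustrated with plain checkpoint/restore. This is a toy sketch, not Cedana's actual API: a job periodically saves its state to durable storage so that a replacement instance can resume where the old one left off. The checkpoint path and helper names here are hypothetical.

```python
# Toy illustration of checkpoint/restore: a job saves state each step so a
# new instance can resume after a failure instead of restarting from scratch.
# (Not Cedana's API; Cedana does this transparently without code changes.)
import os
import pickle
import tempfile

# Hypothetical checkpoint location for this sketch.
CHECKPOINT = os.path.join(tempfile.gettempdir(), "job.ckpt")

def save_checkpoint(state):
    # Write to a temp file, then rename: os.replace is atomic, so a crash
    # mid-save never leaves a corrupted checkpoint behind.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def load_checkpoint():
    # Resume from the last checkpoint if one exists; otherwise start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "total": 0}

def run_job(steps=10, fail_at=None):
    # fail_at simulates an instance dying partway through the job.
    state = load_checkpoint()
    while state["step"] < steps:
        if fail_at is not None and state["step"] == fail_at:
            raise RuntimeError("instance failed")
        state["total"] += state["step"]  # stand-in for real work
        state["step"] += 1
        save_checkpoint(state)
    return state["total"]
```

Running the job, "killing" it at step 7, and running it again on a fresh "instance" picks up at step 7 rather than step 0. Real live migration goes much further, capturing full process and GPU state without the application cooperating, but the contract is the same: progress survives the move.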
This delivers several critical benefits: no lost work when instances fail, higher utilization, faster cold starts, and freedom from any single GPU vendor. OpenAI, Meta, Microsoft, and Databricks employ some of these methods internally, and we're bringing them to everyone.
Cedana is available as an open-source package and as a managed service.
Here’s a 1m30s demo video.
Our team has built real-world robotics and large-scale computer vision systems at places including 6 River Systems/Shopify and MIT. We’ve led the development, commercialization, and scaling of NLP for clinical workflows used in the delivery of patient care. Our publications span computer vision, computer graphics, robotics optimization, and spacecraft/aerospace controls, with patents in AI for grid energy management, optimal battery control, and healthcare.