
Save, migrate and resume compute jobs in real-time

Cedana (YC S23) enables real-time save, move, and restore for compute workloads. We expand GPU capacity, increase reliability, reduce latency by up to 10x, and increase CPU/GPU utilization by up to 5x, resulting in significant cost savings. OpenAI, Meta, and Microsoft have flavors of these capabilities internally, and we're bringing them to everyone. Our solution applies to AI training and inference, HPC, data analysis, dev tools, and infra. Our vision is to transform cloud compute into a real-time, arbitraged commodity. We are a fully distributed remote company. https://www.cedana.ai
Founded: 2023
Team Size: 5
Location: New York
Group Partner: Garry Tan

Active Founders

Neel Master, Founder

CEO of Cedana. Previously CEO/co-founder of Engooden, an AI-powered chronic disease management company proven to improve outcomes and lower costs for patients (Series B). VP of Corporate Development at Petra Systems, a predictive smart grid/solar company that scaled from $0 to $70M ARR. Investor at TL Ventures ($1.6B VC fund) across semiconductors, software, and systems. Built a system for large-scale, automated ML and computer vision at MIT CSAIL. Patents and publications in AI and computer vision.

Niranjan Ravichandra, Founder

Company Launches

Hey, we’re Neel and Niranjan from Cedana.

The Problem

Losing work because of infra problems is painful. Imagine you have a long-running compute job and the instance fails. Or your 20-hour job finished, but because your pipeline was misconfigured, you have to restart it from scratch.

Burning cash is stressful. Poor utilization makes your inference jobs cost more, and if you're managing a cluster of thousands of GPUs, it leaves money on the table even while demand is skyrocketing.

Cold start times hurt customer satisfaction and erode their reliance on your solution.

Limited GPU access makes it difficult to innovate, and finding GPUs can become a full-time job of constantly identifying, evaluating, and adapting to different vendors.

Our Solution

Cedana is real-time migration for compute. We automatically schedule and move workloads across instances and vendors without interrupting progress or breaking anything.

There are several critical benefits:

  • Maximized utilization saves costs and eliminates idle resources.
  • Job-level SLAs dynamically allocate compute between inference and training, trading off cost, latency, and performance according to your preferences.
  • No re-running jobs from scratch: after an infra, pipeline, or memory failure, or a spot revocation, jobs continue from where they left off.
  • Planet-scale compute access through vendor aggregation.
  • Fast auto-suspend-resume that solves the cold start problem.
  • Spot management that migrates workloads and provisions new instances automatically upon revocation or failure (a sketch of this flow follows the list).
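
To make the spot-management flow concrete, here is a hypothetical watcher in Go: it polls the EC2 instance-metadata endpoint for a revocation notice and checkpoints the job before the instance is reclaimed. This is a sketch, not Cedana's agent; the PID and image directory are placeholders, and the scheduler and re-provisioning steps are omitted.

    // Hypothetical spot watcher: poll for a revocation notice, then
    // checkpoint before AWS reclaims the instance. The CRIU call stands
    // in for whatever actually triggers the dump (see "How it works").
    package main

    import (
        "log"
        "net/http"
        "os/exec"
        "time"
    )

    // AWS serves this path once an interruption is scheduled,
    // roughly two minutes before the instance is reclaimed.
    const spotActionURL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

    func main() {
        for {
            resp, err := http.Get(spotActionURL)
            if err == nil {
                revoked := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if revoked {
                    log.Println("revocation notice received; checkpointing")
                    out, cerr := exec.Command("criu", "dump",
                        "-t", "12345", // placeholder PID of the job
                        "-D", "/tmp/ckpt", "--shell-job").CombinedOutput()
                    if cerr != nil {
                        log.Fatalf("checkpoint failed: %v\n%s", cerr, out)
                    }
                    return // hand off: restore on a replacement instance
                }
            }
            time.Sleep(5 * time.Second)
        }
    }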

OpenAI, Meta, Microsoft, and Databricks employ some of these methods internally and we’re bringing them to everyone.

How it works

Cedana is available as an open-source package and as a managed service.

  • Cedana requires no code modification and works with Linux processes or containers (a minimal sketch of the underlying checkpoint/restore primitive follows this list).
  • Current use cases and customers span AI training and inference, high-performance computing, dev tools, ML Ops platforms, and computational biology.
  • Cedana automatically provisions and manages infra with your existing credentials, and our managed service can leverage our vendor relationships if preferred.
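
To ground the no-code-modification claim, here is a minimal Go sketch of the checkpoint/restore primitive that this kind of migration builds on, driving the open-source CRIU CLI directly. This is not Cedana's API: the function names, PID, and paths are illustrative, and a real migration would also stream the image directory to the destination host before restoring.

    // Minimal checkpoint/restore via the CRIU CLI. CRIU dumps a running
    // process tree to an image directory and can resume it later, on this
    // host or (after copying the images) on another one.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // checkpoint dumps the process tree rooted at pid into imgDir.
    func checkpoint(pid int, imgDir string) error {
        if err := os.MkdirAll(imgDir, 0o755); err != nil {
            return err
        }
        cmd := exec.Command("criu", "dump",
            "-t", fmt.Sprint(pid), // root of the process tree
            "-D", imgDir, // where the image files land
            "--shell-job", // the target was started from a shell
            "--leave-running") // keep the original alive until the copy is safe
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    // restore resumes the process tree from imgDir, typically after the
    // directory has been copied to the destination host.
    func restore(imgDir string) error {
        cmd := exec.Command("criu", "restore", "-D", imgDir, "--shell-job")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := checkpoint(12345, "/tmp/ckpt"); err != nil { // placeholder PID
            panic(err)
        }
        // ...copy /tmp/ckpt to the new instance, then run restore there:
        if err := restore("/tmp/ckpt"); err != nil {
            panic(err)
        }
    }

Because CRIU captures full process state (memory, file descriptors, and open sockets where possible), the restored job continues from where the dump left off.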

Here’s a 1m30s demo video

Our Team

Our team has built real-world robotics and large-scale computer vision systems at 6 River Systems/Shopify and MIT. We've led the development, commercialization, and scaling of NLP for clinical workflows used in the delivery of patient care. Our team's publications span computer vision, computer graphics, robotics optimization, and spacecraft/aerospace controls, with patents in AI for grid energy management, optimal battery control, and healthcare.

We kindly ask