
The On-Device AI Development Platform

RunLocal helps engineering teams discover, optimize, evaluate and deploy the best on-device AI model for their use case.
RunLocal
Founded: 2024
Team Size: 3
Location: San Francisco
Group Partner: David Lieb

Active Founders

Ismail Salim, Founder/CEO

CEO of Neuralize. Previously a product manager at the forefront of on-device AI/ML deployment, shipping AR/VR applications at Meta and the first AI video codec at Deep Render.

Ivan Chan, Founder/CTO

CTO of Neuralize. Previously a software engineer at Marshall Wace (Europe's largest hedge fund) building an internal AI platform for developers and optimizing low latency streaming systems.

Ciarán O'Rourke, Founder/CSO

CSO of Neuralize. Previously an ML performance engineer optimizing cross-platform on-device inference at Deep Render and scientific simulations at the Irish Centre for High-End Computing.

Company Launches

✨ TL;DR

  • Neuralize makes it easier to develop and deploy on-device AI/ML in mobile/PC apps
  • It's a single interface for applying model compression, benchmarking on our device farm, evaluating performance-quality trade-offs in our web dashboard, and deploying optimized models for different end-user devices.
  • Email founders@runlocal.ai if you deploy on-device AI/ML in production and would like a demo

🧭 Context

2024 is the year of Apple Intelligence, AI PCs, and Neural Processing Units (NPUs). Apps are leveraging this new hardware to ship on-device AI/ML across a range of use cases, from creative tooling (e.g. image editing) to video conferencing to copilots for warehouse workers.

On-device inference is important for apps/features requiring low latency, privacy/security, offline functionality, and zero server costs.

❌ The Problem

Developing effective on-device AI/ML is finicky and time-consuming. You have to convert Python models to on-device formats. Then, you have to optimize models across end-user devices (with varying capabilities), benchmark them on physical devices, and evaluate performance-quality trade-offs… in a highly iterative development cycle.
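
To make the conversion step concrete, here is a minimal sketch that exports a small PyTorch model to ONNX, one common on-device interchange format. The architecture, input shape, and file name are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch of the conversion step: a small PyTorch model traced
# and exported to ONNX. The architecture, input shape, and file name
# are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # example input used for tracing
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},  # keep batch dimension flexible
)
```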

Each phase of this dev cycle requires tedious work, like figuring out appropriate model optimizations across different devices. Transitioning between phases is also cumbersome, like preparing benchmarking data for evaluation.
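
As one concrete example of that glue work, the sketch below records the original model's outputs on a handful of inputs so compressed variants can later be scored against them. File names, input shape, and sample count are assumptions.

```python
# Illustrative sketch: record the float model's outputs as references,
# so optimized variants can be scored against them later. File names,
# input shape, and sample count are assumptions.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inputs = [np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(8)]
refs = [sess.run(None, {"image": x})[0] for x in inputs]
np.savez("eval_set.npz", inputs=np.stack(inputs), refs=np.stack(refs))
```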

💡 Our Solution

Neuralize is a single interface for applying model compression, benchmarking on our device farm, evaluating performance-quality trade-offs in our web dashboard, and deploying optimized models for different end-user devices.

Our backend identifies promising model optimizations, with automated and guided parameter sweeps, and benchmarks them across target devices. Results are automatically visualized for objective trade-off evaluation, and model outputs can be inspected/compared with subjective evaluation tools. Everything is organized in a single repository, accessible to the entire team, for better tracking and discussion.
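
For a sense of what this looks like when done by hand, here is a hedged sketch of a tiny sweep over dynamic-quantization settings with a crude latency benchmark. It uses ONNX Runtime's public quantization API, not anything RunLocal-specific; the sweep space, file names, and timing loop are assumptions.

```python
# Hand-rolled stand-in for the sweep-and-benchmark loop described
# above, using ONNX Runtime's quantization API. The sweep space,
# file names, and timing loop are illustrative assumptions.
import time
import numpy as np
import onnxruntime as ort
from onnxruntime.quantization import QuantType, quantize_dynamic

x = np.random.rand(1, 3, 224, 224).astype(np.float32)

for weight_type in (QuantType.QInt8, QuantType.QUInt8):
    out_path = f"model_{weight_type.name.lower()}.onnx"
    quantize_dynamic("model.onnx", out_path, weight_type=weight_type)

    sess = ort.InferenceSession(out_path, providers=["CPUExecutionProvider"])
    for _ in range(5):  # warm-up runs
        sess.run(None, {"image": x})
    t0 = time.perf_counter()
    for _ in range(50):
        sess.run(None, {"image": x})
    ms = (time.perf_counter() - t0) / 50 * 1e3
    print(f"{out_path}: {ms:.2f} ms/inference")
```

Quality would then come from comparing each variant's outputs against a reference set like the one above; repeating all of this per device and per framework is the iteration loop Neuralize automates.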

Neuralize automates each phase of the on-device dev cycle and streamlines the transition between phases, accelerating the entire process.

🤝 Our Team

Ciarán and Ismail were at Deep Render building the world’s first AI video codec optimized for cross-platform on-device inference. They were early to the pain of optimizing models across devices and benchmarking performance/quality.

Ivan and Ismail had been hacking together for years and had built various apps with on-device AI. Having worked on Marshall Wace’s internal server-side AI platform, though, Ivan really felt the shortcomings of on-device AI tooling.

We scrapped the on-device AI apps and focused on the tooling.

🙏 Our Ask

Contact founders@runlocal.ai if you work with on-device AI/ML in production.

Also, please share this post with anyone who works on:

  1. On-device AI/ML in production apps
  2. Cross-platform on-device frameworks (ONNX, TFLite, ExecuTorch)
  3. SDKs for NPUs (e.g. at Intel, Qualcomm or Apple)

Thank you! 🧡