
jo: Your macOS Productivity Sidekick

jo is a voice-first digital personality that works alongside you every day – a smart and efficient new friend who saves you time and money while improving your quality of life.

As of now, jo runs only on macOS desktop, and only on Apple Silicon. Sign up for the waitlist here.

https://youtu.be/Vat9_5nX92w

What Makes jo Special?

JARVIS and Samantha are assistant archetypes created in the scarcity mindset of an obsolete age: they wait for you to ask. jo instead helps you proactively, at the edge.

Always a ‘hey jo’ or ‘⌘-j’ away, jo works alongside you, helping with the low-stakes but high-frequency actions that add up to the high-stakes ones: keystrokes, taps, speech and screen. A new type of assistance for a new age.

jo is built on a set of local and remote LLMs, vector storage, and tight native macOS audio/network code that together unlock a new set of composable primitives for how humans interact with AI efficiently.
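To make the local/remote split concrete, here is a minimal sketch of how a request might be routed between a small on-device model and a larger remote one. This is purely illustrative — the task names and routing rule are assumptions, not jo's actual code:

```python
from dataclasses import dataclass

@dataclass
class Request:
    task: str   # e.g. "live_summary", "chat" (hypothetical task names)
    text: str

def route(req: Request) -> str:
    """Pick a backend: small local models for latency-critical work
    (like live summaries), larger remote models for everything else."""
    local_tasks = {"live_summary"}  # assumed set of on-device tasks
    return "local" if req.task in local_tasks else "remote"
```

Keeping latency-critical tasks on-device is what lets a feature like Live Summary refresh with every page change without a network round trip.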

In practice, jo gets you halfway to most of the way there on common workflows, building user trust for its next phase of development, where we push towards autonomous completion of high-stakes/low-frequency projects.

  • Speak or type to jo and it will speak and type back – latencies are low enough that you can lean on jo all day as an AI on your side that works hard for you.
  • Ask jo to run pre-filled, custom-integrated searches to get info in seconds
    • “pull up facebook marketplace to buy a new kettlebell and pull up videos of starter workouts on youtube”
    • “search flights to new orleans next weekend for 3 people”
    • “Arcane on netflix”
    • “new backpack under $100 on amazon”
    • “draft an email to city of sf asking for reimbursement on pothole”
    • “find the email from amazon where i bought the white table”
  • jo's "Live Summary" follows your browsing around and summarizes the three key points of anything you read, rapidly refreshing with each page change.
    • Generated using local LLMs that will be upgraded quarterly
    • You can also preview links in your browser ‘Arc’-style by pressing ⌘ while hovering over them.

  • jo syncs on the backend with both work & personal Google Calendars to help you manage your time better.
    • Finds open slots, and helps you plan your day effortlessly.
    • “Hey jo, find time for lunch with Sarah next week.” Done.
    • jo also smartly groups proposed meeting slots next to existing meetings, preserving long blocks of non-meeting time.
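The slot-grouping idea above can be sketched as a simple scoring rule: prefer candidate slots that touch an existing meeting, so free time stays in long unbroken blocks. This is a hypothetical illustration of the heuristic, not jo's scheduling code:

```python
from datetime import datetime

def adjacency_score(slot, meetings):
    """1 if the candidate slot touches an existing meeting, else 0.
    Slots and meetings are (start, end) datetime pairs."""
    start, end = slot
    return int(any(start == m_end or end == m_start
                   for m_start, m_end in meetings))

def best_slot(candidates, meetings):
    # Prefer slots adjacent to existing meetings; among ties, take the earliest.
    return min(candidates, key=lambda s: (-adjacency_score(s, meetings), s[0]))
```

For example, with a 10–11am meeting on the calendar, an 11am–12pm lunch slot beats a 1–2pm one: it keeps the afternoon free in one block.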

  • jo remembers choice information privately across many facets, uses that on every interaction to build a better picture of you, and actually attempts to be better (with your feedback).
  • jo is also connected to a curated set of the big cloud LLMs that you know and love, with comparable and controllable data access. Raw horsepower is available by default.
  • jo has a redundant architecture across LLMs, TTS and STT. As local and remote models improve in capability, jo will inherit new abilities rapidly as we build trust & tech towards truly agentic behavior.
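Redundancy across providers usually means a fallback chain: try the preferred backend, and if it errors out, fall back to the next one. A minimal sketch of that pattern (provider names are illustrative, not jo's actual stack):

```python
def call_with_fallback(providers, prompt):
    """Try each (name, fn) provider in priority order; on failure,
    fall back to the next. Raise only if every provider fails."""
    errors = []
    for name, fn in providers:
        try:
            return name, fn(prompt)
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

The same shape works for LLM, TTS and STT backends alike, which is what lets the product keep working when any single model or vendor degrades.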

Sign up for the waitlist here >. We are sending invites out every day.

Why jo Feels Different

  • Natural and intuitive. You actually talk to jo like you would a friend. No weird commands to memorize or crazy prompt engineering. No switching models.
  • Ridiculously fast. jo works as fast as you can speak. Summaries, browser tasks, calendar checks—it’s all instant since we built the app using native Swift for Apple Silicon and have spent weeks cutting latency across multiple integrated systems.
  • No distractions. jo’s got a clean, focused interface that keeps you in flow state.

Designed For Composability

jo’s new usability primitives distill a year of learnings from preview versions of jo: Telegram mobile, group-chat experiences, single-player experiences, and now the macOS desktop.

  1. **"Fast comms"**
    We speak faster than we type. We can read faster than we can listen. jo uses text and audio in/out, in parallel. Get the gist by listening, get the rest by reading.
  2. **"Async execution"**
    jo works alongside you, not blocking your work loop. Hand off tasks for jo to do. If you're an engineer, it’s like spinning up many new Actors for your life.
  3. **"Work before I get to it"**
    jo kicks things off for you before you expend more cognitive effort. Start with the gist, then review the main points, then read the details.
  4. **"Save the important"**
    jo captures choice information privately in many facets, uses that on every interaction to build a better picture of you, and actually attempts to be better with your feedback.
  5. **"Be my agent"**
    jo pushes new info you care about to you, and reaches out on your behalf.

Chainable primitives like these wind up being compelling when they're composed to meet your ad hoc demands.

Using these primitives is the first step to building daily trust for true “agency”, where jo can execute high stakes goals (ex: organize and pay for expensive things, or draft and send critical emails).

While it’s possible even now to “close the loop” on high-stakes projects (for example, jo sending an email or paying for a flight), we realized that building the required trust needs a path of its own – and that trust, not the technology, is the true blocker.

New Things You Can Do With jo

Improve a low-stakes/high-frequency pattern you may do every day
“read hacker news” — With jo, you can now ⌘-hover over every interesting link in the list and get an exec summary inline. Since this type of reading is usually ritualized behavior, jo prioritizes helping you process more info over saving time. Soon, you’ll be able to "clip" a live summary to save for later.

Start off a high-stakes, low-frequency request:
"find me a free weekend in the next month and show me new york flights" — jo's connected to both your work & personal calendars and will run this search using google APIs, find the open weekends, and then pull up google flights in your browser with all the relevant info pre-filled. It's faster than you at this and you can chain more steps.

Complicated ad hoc requests:
"will i see a good sunset tonight in pacifica, and where can i eat after?"  — jo will use a pre-connected weather API, find temp and visibility, construct the answer, and will likely present the awesome Taco Bell on the beach as one of the places to eat without opening any browsers.

We are working towards jo being able to ambiently push you showtimes for this weekend for the new Ghibli movie because it knows (with your permission) that you and your 7yo like watching them together.

Engineering at jo

Building jo is an ongoing systems engineering and LLM application/design challenge. We believe latency and quality trump all for daily-use products like jo, and we are obsessed with improving both every single day. We’re just now scratching the surface and are sanding down the rough edges with this preview release. We’d love your help unraveling any unexpected behavior you see.

We ship very fast with very small, very tight teams. Reach out if you:

  • Are a very senior macOS / iOS engineer AND/OR
  • Understand the nuances of fine-tuning local models

FAQ

  • How do I try jo?
    • Join the waitlist on the site; we are sending invites out every day.
  • Why a waitlist?
    • To ensure our systems keep up with the scale and usage.
    • To find and fix any bugs.
    • To listen to our early adopters and build.
  • How does jo use LLMs?
    • We run remote large models on Azure with contracts requiring no data sharing.
    • We run local small models on your machine for just the “live summary” feature (as of now).
  • How do you save & share my data?
    • We use hosted Postgres and a vector data store running on our servers.
    • We send text to remote LLMs that are run on Azure, with “no data sharing” protections in place.
    • In our preview period, we’re using 3P audio transcribers and synthesizers, and you are also subject to their (sometimes changing) policies. More details upon request, and we are working to stabilize this.
  • Hardware specs needed?
    • Any Mac with an “M”-series (Apple Silicon) chip
      • M2+ works best
      • M1 works fine but live summaries are a touch slower
    • At least 8GB shared RAM; the more, the better.
  • Will there be mobile apps?
    • Yes, shortly.
    • They will have similar functionality but will most likely start with just remote LLMs.
  • Is my mic on all the time?
    • No, only when you turn it on manually.
    • If you opt in to the “hey jo” feature, the system mic will remain on, listening for the “hey jo” voice trigger – if and only if jo’s mic is enabled.
  • Can I turn off ‘hey jo’?
    • Yes. jo will still activate when you press ‘⌘-j’.
  • Can I use jo with just a chat window and turn off all audio?
    • Yup, if you prefer. You can turn audio input and audio output on/off, together or individually.
  • Is jo always watching my screen?
    • No. Once you enable live summaries and active screen sharing, jo will only have access to the frontmost window on your desktop.
    • When this feature is turned off, no screen content is transmitted or processed.
    • Any screen content cached remotely expires after 10 minutes.
  • Why pre-filled searches and not running the searches yourself?
    • Faster, chainable, more relevant, and it keeps more control on the user’s side.
  • Can I get more detailed summaries than the short “live summary”?
    • Yes, you can ask further questions of jo about any browser window/tab or (for now) select apps we support.
  • Can I get my favorite local model to be the jo LLM?
    • Not yet, but soon.
  • Are you working on a local-only jo?
    • That’s privileged info.
  • How much does jo cost?
    • jo is free to use during the preview period for the next 3 months.
    • More pricing details in early 2025.

Team

jo was created by Pradeep Elankumaran & Kevin Li. We’ve been building consumer products for over a decade now. We love applying technology to unlock sustainable unit economic structures that improve the lives of every person in the world.

Available now for macOS. Join the waitlist on the site to get access today.

Join the waitlist >