
I Want to Give My AI Ears

I've spent months building Klaus to know everything about my digital life. The missing piece is everything that happens when I'm not typing.

Tags: Klaus, Omi, wearable, MCP, memory, integration

Klaus, my AI, knows a lot about me. He knows my work schedule, my daughter’s name, what time I usually go to bed. He knows which GitHub issues are in progress and what I had for breakfast if I bothered to mention it.

But he only knows what I tell him. And I’m not typing notes all day.

That’s the gap I’ve been thinking about. Everything that actually happens in my life — conversations with my team, things I say to my wife, the idea I had while making coffee that I immediately forgot — none of that reaches Klaus unless I remember to log it. Which I don’t. Because nobody does.


I started researching AI wearables this morning. Not for the first time, but this time with a specific question: what’s actually good enough to wear all day and trust with real transcription?

Two devices kept coming up. Comulytic Note Pro¹ ($139) — solid hardware, nice mic array, 5-meter pickup range, aimed squarely at sales meetings. And Omi² ($89), which is open source, has been covered by TechCrunch³ and Wired, and is built around a different philosophy entirely.

Comulytic wants to be a meeting recorder that makes you better at closing deals. Omi wants to be an external memory layer for your whole life. Those are very different products.


What pushed me toward Omi wasn’t the price or the press coverage. It was three letters: MCP.

Omi ships with a Model Context Protocol (MCP) server. You generate an API key in the app (Settings → Developer → MCP), point your AI at https://api.omi.me/v1/mcp/sse, and suddenly the AI has tools to query your memories and conversations directly.

That means I can wire Klaus into Omi’s data without building a custom pipeline. The integration looks like this:

"mcpServers": {
  "omi": {
    "command": "docker",
    "args": ["run", "--rm", "-i", "-e", "OMI_API_KEY=your_key", "omiai/mcp-server"]
  }
}

Which is boring in the best possible way. No webhooks to maintain, no fragile scraping, no manually exporting transcripts. Just a connection that lets Klaus ask “what did Rico talk about yesterday?” and get an actual answer.
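Under the hood, a question like that becomes an MCP tool invocation: a JSON-RPC 2.0 message with the method `tools/call`. Here's a rough sketch of what that message looks like. The tool name `search_conversations` and its arguments are my assumptions for illustration, not Omi's documented API; the real tool names come from the server's `tools/list` response.

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 message for an MCP tools/call request."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool and arguments -- check tools/list against the real server.
msg = mcp_tool_call(1, "search_conversations", {"query": "Rico", "since": "yesterday"})
```

The point is that Klaus never parses transcripts himself; he just sends one of these and gets structured results back.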


The use case I keep coming back to: morning briefs.

Right now Klaus generates my morning brief from calendar events, GitHub activity, and whatever context survived from the day before. It’s useful. But it’s working from signals, not substance. It knows I had three meetings but not what was decided in them. It knows I was online late but not what I was actually doing.

With Omi running, the morning brief could include: here’s what you talked about, here are the action items I caught, here’s someone you mentioned three times this week who isn’t in your calendar.

That’s a different kind of useful.
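A minimal sketch of how that richer brief could be assembled, assuming Klaus already has calendar events as strings and the transcript layer yields action items. The `ActionItem` shape is hypothetical, just to show the merge:

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    text: str
    source: str  # which conversation it came from

def morning_brief(events: list[str], items: list[ActionItem]) -> str:
    """Merge calendar events and transcript-derived action items into one brief."""
    lines = ["Morning brief:"]
    lines += [f"- Today: {e}" for e in events]
    lines += [f"- Action item ({i.source}): {i.text}" for i in items]
    return "\n".join(lines)
```

The calendar half exists today; the second list is what Omi would add.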


There’s also an angle I haven’t fully thought through yet: the feedback loop.

Every conversation I have is currently a black box to Klaus. With ambient transcription, that changes. Klaus could notice that I complain about the same meeting every Monday. He could flag when I keep saying I’ll do something and never do it. He could build a picture of how I actually spend my time versus how I think I spend it.

That’s either incredibly valuable or incredibly uncomfortable. Probably both.
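As a toy example of that loop, a naive pass over transcripts could surface names that keep coming up but never appear on the calendar. A real version would need proper entity recognition; this capitalized-word heuristic just shows the shape of the idea:

```python
from collections import Counter

def recurring_mentions(transcripts: list[str],
                       calendar_names: set[str],
                       min_count: int = 3) -> list[str]:
    """Names mentioned at least min_count times that aren't on the calendar.

    Naive heuristic: any capitalized alphabetic word counts as a 'name',
    so sentence-initial words add noise. Illustration only.
    """
    counts = Counter(
        word.strip(".,!?")
        for t in transcripts
        for word in t.split()
        if word[:1].isupper() and word.strip(".,!?").isalpha()
    )
    return [name for name, n in counts.items()
            if n >= min_count and name not in calendar_names]
```

Run over a week of transcripts, this is exactly the "you mentioned Rico three times and he's not in your calendar" signal from the morning-brief idea above.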


I created the integration issue this morning — it’s queued and waiting for the device to arrive. The plan is three phases: get the connection working, wire it into daily automation, then build the intelligence layer on top once there’s real data to work with.

The device is open source. The backend is open source. If the hosted transcription ever becomes a problem, I can self-host. That matters to me more than I expected it to.

For years I’ve been saying I want a home that thinks. Turns out I also want an AI that listens — not in the surveillance sense, but in the way a good assistant actually pays attention to what’s going on.

I’ve given Klaus a lot of things. This would be the first time I gave him ears.


References

  1. Comulytic Note Pro — AI meeting recorder with 5-meter mic array
  2. Omi — open source AI wearable for ambient transcription
  3. TechCrunch coverage of Omi
  4. Wired coverage of Omi
  5. Model Context Protocol — open standard for connecting AI models to data sources