Apr 15, 2026
Computer speak, computer do
I barely type anymore.
That sentence would have sounded unhinged two years ago. Typing was the whole job. You thought in keystrokes, you composed in keystrokes, you prompted in keystrokes. The keyboard was the interface for everything. And then, almost without noticing, it stopped being that.
These days I open Monologue, I talk, and things happen. Emails get written. Ideas get captured. Posts draft themselves. I speak a thought out loud, and an agent somewhere finishes it. The friction between having an idea and acting on it has collapsed to nearly zero.
The speed thing is real, but it's not the main thing
The productivity argument for voice is easy to make. Stanford research put it clearly a decade ago: speech runs at roughly 160 words per minute against a typing average closer to 53, which is about three times faster, with 20% fewer errors. A ClickUp survey of 527 knowledge workers found that 72% experience physical discomfort from typing, and a third of them actively shorten their messages to avoid it. They're literally truncating their own thinking to spare their wrists.
But the speed stat, clean as it is, misses something. When I dictate a prompt – to Monologue, to an AI agent, to anything that will listen – I include more. I naturally say the thing I'd have trimmed in a typed message: the context, the why, the edge case I'm half-worried about. The output gets better because the input got fuller. That's not a hardware improvement. That's just what happens when talking feels cheaper than typing.
What Her got right
I've watched Spike Jonze's Her more times than I'll admit. It came out in 2013, which means it imagined where we are now from a distance of more than a decade. The thing that always struck me wasn't the romance. It was the interface. Theodore doesn't click anything. He doesn't open apps or navigate menus. He just... talks. The computer speaks back. Work gets done, letters get written, a life gets lived through conversation.
That felt like science fiction then. It doesn't anymore.
What Jonze understood early was that the keyboard and screen are a workaround, not a destination. They were the bridge we built because computers couldn't understand us yet. Now, increasingly, they can. The interface is finally catching up to how humans actually think and communicate: through speech, through conversation, through back-and-forth.
Not just faster – different
There's a sharper point underneath all of this. Voice is not just a faster path to the same destination. It changes what you can do at all.
Researchers studying agent-human interaction have started calling this the "interface bottleneck" – the idea that for decades, we had to learn the language of software to get things done. Menus, keywords, syntax. The burden was always on us to adapt. Voice AI flips that. The machine meets you where you already are.
Monologue gets this. It doesn't just transcribe: it understands context, knows your vocabulary, adapts to your style. When I dictate into it, I'm not dictating into a blank recorder. I'm talking to something that knows what I'm working on. That distinction sounds small until you've felt the difference.
Are we in the Her timeline now?
Maybe not all of it. Theodore's Samantha eventually outgrows him in ways that are complicated and sad. I'm not worried about that part yet.
But the ambient, conversational, voice-first relationship with a computer that seemed so dreamy in 2013? We're building it, piece by piece. The agents are getting better. The transcription is getting better. The context layer is getting better too: things like Monologue Notes now log and transcribe your meetings so agents can actually know what you've been doing.
It's interesting and efficient at the same time, which is a combination I never expected from a technology shift. Usually you get one or the other.
I clicked into this new way of working without really deciding to. One day I just noticed I wasn't typing much. The computer speaks, the computer does. For now, that's enough.