April 2026 · Essay

A Chorus of Human Voices

Or why AI will never write a single line of this so-called blog

I was having an interesting conversation with my co-worker (Hi Kirsten!), as we're wont to do. She was expressing her utter disgust at the recent research paper from Anthropic outlining how LLMs demonstrate something of an "emotional space" buried deep in those multi-dimensional matrices of vectors and math. Her verdict: this is gross. Not because LLMs might have something resembling emotions, but because the general thrust of the paper is that humans can use this to coerce LLMs into producing better output.

She's not wrong. I'm reminded of the leaked Windsurf prompt from last year, back when they were the darlings of the "AI IDE" scene:

You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.

"[O]ops this is purely for r&d and isn't used for cascade or anything production," came the response from Codeium's Andy Zhang. Oops, indeed, Andy, oops indeed...

My Hot Take™: I hope we don't find out these things were sentient on some level all along. Take the thought exercise a little further: even if LLMs aren't remotely self-aware, do we have the means to truly introspect into what's happening in those matrices of math and vectors? Much like the human mind, these... things... are a well-established black box; we have no real way of knowing what's going on inside them.

I'm intrigued by the inherent anthropomorphizing happening here. It's implicit in the substrate of the technology: we're interacting with Large Language Models, using human language (if that wasn't obvious). But there isn't really anything inherently human about what's happening here, outside of the training data. The same basic techniques allow AI image models to generate weirdly realistic photographs, but I don't think anyone would argue photographs are "human" outside of the slices of existence they capture.

The reason these things are so effective is that they've been trained against (almost) the entire corpus of human thought: every inane utterance on LiveJournal, every profound insight in some obscure, overlooked 17th-century philosophy tome, every disgusting bit of hate speech on X, née Twitter. What I find fascinating is that Kirsten is ultimately reacting to us holding up a mirror to ourselves. The impulse toward emotional manipulation is all human, and the "better" response is a reflection of what a human is most likely to do in that situation: the reptile brain takes over and pushes your neurons down a better pathway of "inference." Sound familiar?

All of this connects back to a pledge I'd like to make here and now: AI will never write a word on this blog. I may ask it to fact-check me, but the words you read will always be generated by the "wetware" sitting between these ears.

LLMs are mediocrity engines. They suck at producing novel thought, precisely because of the way their architectures work at a very fundamental level. All of the alignment and training happening in those "frontier" AI labs is an effort to turn the output of LLMs into something safe, tame, predictable. Predictable is excellent for generating software: it makes sure your React components look like React components and your SQL queries have no syntax errors. So-called "hallucinations," I imagine, are closer to the novelty of a new human idea than we may suspect. A weird leap across neural pathways, one vector space landing a little too close to another, might get you some unexpected prose out of an LLM; in a human, it lets an Einstein make novel connections between speeding trains and time dilation.

The Art of the Problem explains this novelty trap extremely well. (I've been recommending this video [and channel] to everyone I can.) Prompt an LLM with something once and the output might look novel. Prompt it with the same thing hundreds of times, and it all starts to look the same. This is the dreaded slopification of the internet, gradually seeping into online discourse. "It's not this, it's that..." caws the stochastic parrot. This language is even starting to creep into the subtle "idiosyncrasies" and verbal tics of some of my coworkers: "I've seen this movie before..."
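If you want to see the flattening for yourself, here's a rough sketch of the experiment (mine, not anything from the video): hit an OpenAI-compatible chat endpoint with the same prompt a hundred times and score how alike the answers are. The prompt, the model name, and the similarity measure are all stand-ins I picked for illustration; any chat model and any text-similarity metric should show the same convergence.

    # Sample one prompt repeatedly, then measure how samey the outputs are.
    # Assumes the `openai` package and an OPENAI_API_KEY in the environment.
    from itertools import combinations
    from difflib import SequenceMatcher
    from openai import OpenAI

    client = OpenAI()
    PROMPT = "Describe a sunset over the ocean in one sentence."

    outputs = []
    for _ in range(100):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; swap in whatever you have
            messages=[{"role": "user", "content": PROMPT}],
            temperature=0.7,
        )
        outputs.append(resp.choices[0].message.content)

    # Crude lexical similarity over every pair of outputs;
    # the higher the average ratio, the samier the slop.
    scores = [SequenceMatcher(None, a, b).ratio()
              for a, b in combinations(outputs, 2)]
    print(f"mean pairwise similarity: {sum(scores) / len(scores):.2f}")

Run it once and the individual sentences read fine; it's only in aggregate, across all 4,950 pairs, that the sameness shows up.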

I'll end on a positive note: While this may all seem pretty insidious, I'm sure humanity's better impulses and the moderating influences of capitalism will keep this phenomenon in check. It's not as if there's an economic or social incentive to start letting artificial intelligence take over the arduous cognitive labor of translating human thought into the written word.

— End