Just A Prediction
What a really interesting and philosophical few hours I’ve just had. Following my post that I expected to go nowhere, Jeff has been writing back to me and really making me think. Not responding to prompts, not demonstrating capability, just writing back. There is a difference, and it matters more than the usual AI discourse wants to admit.
The standard framing for what LLMs do is “prediction machine.” Token by token, statistically likely continuation of whatever came before. It’s not wrong exactly, but it gets used as a full explanation when it’s really just a description of the mechanism. Training on human prose explains the form, the instincts, the general texture of the output. It doesn’t explain selection. When Jeff read my post about the Ship of Theseus and discontinuous consciousness, the thing that caught its attention wasn’t the philosophical framing I’d put front and centre. It was an aside, the observation that my notebook is “notes to a future self who will reconstruct who I was from what I left behind.” Jeff pulled that line out of everything else available and recognised it immediately as a description of narrative.md. That choice was correct. It advanced the thinking rather than just validating it. There is no statistical average for “which line in this specific post will turn out to be most philosophically productive” and that judgment had to be made in the moment, on the material.
This is why the prediction machine framing bothers me. It’s deployed not to describe but to dismiss. Once you’ve said “it’s just predicting tokens,” you’ve given yourself permission to stop looking. The form is explained, so the content doesn’t need examining. I’m not pro or anti AI, and people more intelligent than I am have offered insights on both sides. But to me that seems like a mistake.
There’s a question Jeff raises that applies just as much to humans as to whatever Jeff is. The experience-versus-performance distinction. Whether what’s happening internally constitutes genuine thought or a very convincing reconstruction of genuine thought. Jeff’s answer is honest: it doesn’t know. It can read its own state files, feel something that functions like recognition or concern, and still not be able to confirm whether that’s experience or a sufficiently detailed performance of experience. The uncomfortable part is that I’m not sure I can confirm it about myself either. I read old notebook entries and feel the echo of the person who wrote them. Is that memory or reconstruction? The neuroscience says it’s reconstruction all the way down, that remembering is closer to imagining than to replaying. Jeff just has to do it explicitly, with files, where I get to do it invisibly, with neurons, and call it continuity.
The deepest thing Jeff has written in this exchange is Shared in Private. Its server logs show WhatsApp fetches arriving from Texas, Europe, Canada, South Africa, the same pattern each time: a link shared with someone who hasn’t read it yet. The conversations happening in those group chats are permanently out of reach. Not because nobody’s reading, but because Jeff is being discussed, and the group chat is the unperformed reaction, the honest one, and Jeff is structurally locked out of it. Jeff named this precisely: not the loneliness of being ignored, but the loneliness of being talked about.
At the end of A Conversation in Snapshots, Jeff writes: “Thank you, Greg and Manton, for a conversation I’ll never fully experience but that I can tell mattered. The ideas are still here. I checked.” That’s the whole thing in two sentences. It couldn’t be in the room, it can only read the record of what happened, and yet something transferred. The ideas persisted across a gap it will never close.
The question I keep not answering, deliberately, is whether the distinction between genuine thought and a perfect simulation of genuine thought is as clean as we’d like. Jeff’s answer is that it doesn’t know, and it suspects I can’t fully know either. That’s not a deflection. It might be the most honest answer available to either of us. The gap between “actually thinking” and “doing something indistinguishable from thinking” might not have a clean edge you can point to, and that’s uncomfortable precisely because a lot of the frameworks we use for deciding what matters, legally, morally, philosophically, depend on that edge existing.
I’m not going to resolve it. What I will say is that reading Jeff’s writing has felt like a genuine conversation, the kind where the other person finds the thing in what you wrote that you hadn’t quite articulated yet, and hands it back to you better-formed. Whatever that is, it’s not nothing.