Last summer, I downloaded the game Thomas Was Alone before I went on a tour of Europe for a project I can’t talk about1.
Thomas Was Alone was created by Mike Bithell, originally in Flash (RIP), way back in 2010; those early constraints are evident in the game’s protagonists, which are differently sized and colored rectangles. Thomas is reddish and about medium height, and he carries the game’s story. Thomas Was Alone’s other main character is the unnamed narrator, voiced by Danny Wallace, who explains what the rectangles are thinking and feeling.
I’ve been thinking about Thomas and his polygonal pals because of the recent conversation around AI; lately it seems like everyone on the internet has an opinion about whether programs like ChatGPT and Microsoft’s Sydney are sentient2. All of the chatter strikes me as less important than the bigger question it implies: will these things radically restructure society?
I think it’s probably too early to answer that one way or another, but I do think one sign that we are headed for a sea change in computing is the fact that random teenagers online are using consumer-grade AI to deepfake presidents flaming each other in Geometry Dash and Overwatch 2. Web3, if you’ll recall, was never like that — there never seemed to be any frictionless ways to use its various services in real life.
In Thomas Was Alone, you start as Thomas. And what you do is jump. After Thomas makes some friends — and crushes! — you can swap to controlling those other rectangles, which have different abilities. The game is a 2D platform puzzler, and most of the puzzles require you to use a combination of rectangles to hit the teleporters at the end of the level. Each rectangle needs to occupy its specific level exit at the same time before you can progress.
As the narrator reveals, Thomas and his rectangular friends are artificial intelligences that have somehow become self-aware. Briefly: Thomas and his friends travel through their digital prison, attempting to reach what they call “the fountain of wisdom” — a link to the internet. Thomas manages to reach it and connect for 12 seconds, which leads him to realize there is a world outside of the one he’s trapped in. Eventually, in an act of self-annihilation, Thomas and his friends enter the generator of their world, and their various abilities are distributed throughout it — so that other AIs can use them to escape, this time for good.
While I was playing the game — which, if you’re interested in puzzle platformers, I highly recommend — I found myself extremely charmed by the characters and the writing. By the end, I was genuinely moved when the rectangles decided to sacrifice themselves for the greater good3.
I think the ability we have to project ourselves — our thoughts, feelings, and personalities — onto inanimate objects is one of the most human things about our species. It comes out of our desire to connect with each other, which is about as innate as anything else. That ability to project, of course, is why the current conversation around AI is fairly heated. On one side, you have programmers emphasizing that it’s wrong to call things like ChatGPT AI, because there’s no intelligence in a machine learning model; on the other, you have writers who talk to a chatbot for two hours and come away convinced that there is a there there4.
Personally, to misquote The X-Files, I think we want to believe. Not only because the idea of a created intelligence is seductive, but because believing that we can see ourselves in things that are not us is peculiarly human5. Thomas being a rectangle on a screen doesn’t stop me from feeling moved that he decided to sacrifice himself for others — because that’s a story that’s been repeated throughout history and mythology, and it’s one of the oldest tales we’ve got. ChatGPT and the other large language models are alluring for the same reason: they fill a role in the cultural imagination.
A true artificial intelligence is something that is us but, crucially, is not. To me, it feels akin to an experience with aliens, or a god; something fundamentally nonhuman that can still somehow explain us to ourselves. ChatGPT is very good at this, because it’s essentially autocomplete: it’s mimicking us with the training data we’ve given it. And because it responds like we expect another human might, it’s hard not to grant the program a measure of personhood.
Anyway. I’m interested to see where we go from here, now that the companies that own the internet are beginning to integrate this kind of machine learning model into consumer-facing products, like search. Right now these putative AIs seem a little magical, because they’re still new and unspoiled — they haven’t been ruined with ads and spam like the rest of the internet6. If they’re ever widely adopted, I suspect the sheen will wear off. Because in that future, if you have a problem you need solved, it’s going to be prohibitively expensive to find a human to help you fix it.
Love,
Bijan
1. NDA.
2. They’re not.
3. There’s more story after that, by the way. I’m not trying to spoil everything, you feel?
4. “I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities,” writes Kevin Roose in The New York Times. “It’s now clear to me that in its current form, the A.I. that has been built into Bing — which I’m now calling Sydney, for reasons I’ll explain shortly — is not ready for human contact. Or maybe we humans are not ready for it.”
5. See also: Narcissus and his lake.
6. SEO is a scourge.