The AI Apocalypse
I’ve wanted to understand a bit about AI before I wrote about it. My gut feeling was that this is something we generally don’t want, although it may have a few specific uses. That remains my partially educated feeling. The idea of an AI that replaces or betters human intelligence seems far-fetched. We don’t know what consciousness is, and intelligence is a contested concept with many possible meanings. That lack of understanding allows people to fill it in with whatever they’re thinking or selling.
And selling is right up there! We’ve all seen the ads, the forced applications. Definitely a product looking for a market. The Next New Thing!
I’ve read a bit on how the large language models (LLMs) are constructed. There is nothing mystical about them, although they involve fairly complicated computing. They are probability machines designed to produce output that is like human writing or speech. They are often instructed to agree with or compliment the user. Combine that with the human predilection to regard anything that vaguely resembles another human as sentient, and you’ve got our situation today.
LLMs are probability machines. That results in a couple of weaknesses. The first is that their output is not reproducible: each response is built through many probabilistic sampling steps, and each step can branch differently. The second is that “hallucinations” are normal output. Ascribing truth value to LLM output is a category mistake. Truth is not part of their programming, only which word is likely to come after another.
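That word-after-word sampling can be shown in miniature. This is a toy sketch, not any real model’s code: the tiny hand-written probability table and the `generate` helper are invented purely for illustration, standing in for the billions of learned weights in an actual LLM.

```python
import random

# Toy next-word model: for each word, a probability distribution
# over possible following words. A real LLM learns billions of such
# weights from text; the sampling principle is the same.
model = {
    "the": {"cat": 0.5, "moon": 0.3, "truth": 0.2},
    "cat": {"sat": 0.6, "is": 0.4},
    "moon": {"is": 1.0},
    "truth": {"is": 1.0},
    "sat": {"quietly": 1.0},
    "is": {"bright": 0.5, "false": 0.5},
}

def generate(start, steps, rng):
    """Sample a continuation one word at a time."""
    words = [start]
    for _ in range(steps):
        dist = model.get(words[-1])
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

# Two runs can branch differently at every step: the output is
# plausible-looking but not reproducible, and whether the machine
# says "is bright" or "is false" is decided by chance, not truth.
print(generate("the", 3, random.Random(1)))
print(generate("the", 3, random.Random(2)))
```

Nothing in the loop checks whether a sentence is true; the program only asks which word is likely next. “The truth is false” is a perfectly normal output of such a machine.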
Because it’s so easy to assume sentience, I prefer not to apply words implying human thought or consciousness to LLMs. Their marketers would prefer that we humanize them, but it leads to sloppy thinking at best (“hallucinations”) and psychosis at worst. People have “fallen in love” with these things and have asked them for and received advice on murder and suicide.
The AI community describes LLMs as only the first step toward its goal: a conscious machine, or a machine that can learn, or an intelligent machine surpassing humans. The definitions slip around because the goals involve things that we don’t know how to measure. Different organizations have different goals.
Those pushing AI have never given up on the science fiction they read as kids, and now that they are reaching middle age, they find it a religious comfort. The problem is, like so many religionists, their version of Armageddon requires the rest of us to participate.
Silicon Valley has become home to a number of apocalyptic faiths that overlap each other. They are sometimes represented by the acronym TESCREAL: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. In More Everything Forever, Adam Becker treats them largely as ways for people to comfort themselves about the fact that they are going to die someday. They have the effect of removing moral agency from their holders. There is no need to address today’s problems because everything will be better when we T: transform into machines, C: move to another planet, S: have machines that think, and L: have many, many people alive. This is hardly different from the thoughts and prayers offered by those who believe in a Christian apocalypse.
AI stretches across several of them. We are obliged to develop AI because 1) only in that way can we make sure it will be friendly to humans, 2) China or another malign actor will develop it wrong, or 3) it will bring infinite material blessings to all. Again, this is not a whole lot different from Christian Heaven.
This development will require enormous numbers of data centers, sucking up enormous amounts of electricity and cooling water, both on Earth and in orbit. The difficulties associated with that development, to say nothing of the difficulties faced by many on Earth now, must be endured in order to bring about the Singularity.
And, of course, those middle-aged men can’t forget how much fun it was when they were making the Next New Things. So much money, so much favorable attention! AI is the Next New Thing. Just as avatars without legs interacting online and weird headsets were. Correction: The weird headsets are still around.
Corporations use LLMs as excuses to fire people. Students use LLMs to avoid doing the work of studying, and universities insist that professors incorporate them. They are being instituted across government. They have led to a financial bubble that has potential for great damage to the world’s economy.
Resistance to the insertion of LLMs into everything is growing, as is resistance to data centers. It’s time to recognize these fever dreams for what they are and regulate them.

