
Everything I read on AI is garbage


This is the top selection of images I get on Google Images today. The fusion of human and machine, the allusions to Michelangelo’s “The Creation of Adam,” and the brains illustrate the confusions I’m writing about.

I am sorry, but the discourse around what people are calling artificial intelligence, or AI, is so dumb that I do not see how those people get up and find the bathroom in the morning.

They fail to tell us what they understand intelligence to be – is it “learning” in the sense of making connections or memorizing things? Is it stringing words together in a persuasive way? Is it being able to back up and explain that string of words? Is it being able to use logical inference?

And those are the easy questions, before one starts to think about consciousness.

They muddle themselves up by using words that apply to human cognition (what I would call the act of receiving input and processing it in all the ways I’ve described above), and then turn around to claim that because they have used these words, the program is doing something like human cognition. And oh yes, the computers will get better at doing all these things and therefore more intelligent!

They seldom mention how much computing power these early steps require. It’s right up there with bitcoin. And oh yes, all those great advances will need even more computing power.

It looks like the primary use of the large language models (LLMs) will be to churn out more trash to clog up the internet. They may marginally improve our experiences in dealing with too-large and too-powerful institutions like the insurance companies. Or they may make them more impervious to human interaction. Right now, these models are stealing copyrighted art and misidentifying people for police departments.

Here are the two articles that have most infuriated me recently:

Never Give Artificial Intelligence The Nuclear Codes (The Atlantic). Bonus: a really dumb wargame scenario (possibly misunderstood by the reporter).

The world’s major military powers have begun a race to wire AI into warfare. For the moment, that mostly means giving algorithms control over individual weapons or drone swarms. No one is inviting AI to formulate grand strategy, or join a meeting of the Joint Chiefs of Staff. But the same seductive logic that accelerated the nuclear arms race could, over a period of years, propel AI up the chain of command. How fast depends, in part, on how fast the technology advances, and it appears to be advancing quickly. How far depends on our foresight as humans, and on our ability to act with collective restraint.

Geoffrey Hinton tells us why he’s now scared of the tech he helped build (MIT Technology Review)

“Crows can solve puzzles, and they don’t have language. They’re not doing it by storing strings of symbols and manipulating them. They’re doing it by changing the strengths of connections between neurons in their brain. And so it has to be possible to learn complicated things by changing the strengths of connections in an artificial neural network.”

For 40 years, Hinton has seen artificial neural networks as a poor attempt to mimic biological ones. Now he thinks that’s changed: in trying to mimic what biological brains do, he thinks, we’ve come up with something better. “It’s scary when you see that,” he says. “It’s a sudden flip.”
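To be fair to Hinton, that one sentence about connection strengths is an actual technical claim, and it is easy to show what it means in practice. Here is a minimal sketch, assuming numpy and a toy two-layer network trained by gradient descent on XOR; every name and number in it is illustrative, not anything from either article:

    # Toy sketch: "learning" as nothing more than nudging connection strengths.
    # Assumes numpy; the network, data, and learning rate are all illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # XOR: the classic toy task that needs more than one layer.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # The "connections" are just two weight matrices (plus biases).
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0  # how hard each nudge is

    for step in range(10000):
        # Forward pass: signals flow through the connections.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: how much did each connection contribute to the error?
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # "Learning": nudge the strength of every connection a little.
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(np.round(out, 2))  # for most seeds, approaches [0, 1, 1, 0]

That is the whole mechanism: multiply, compare, nudge, repeat. Whether doing it a trillion times over adds up to anything you would call intelligence is exactly the question nobody in these articles bothers to answer.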

I’m walking away from AI for a bit. But I guess I’ll get dragged in if something particularly harmful or absurd shows up.

Cross-posted at Nuclear Diner
