Yesterday we had another AI kerfuffle.
This time it was a report that in a simulation, an AI-powered drone turned on its operator and killed them. I retweeted it because it struck me as another example of obvious stupidity around AI. But I didn’t say so, largely because the locus of the stupidity was unclear. It could have been in whatever was done with the simulation, in the reporting, or in a chain of half-reports that the writer summarized. The report now has a disclaimer. Scroll way, way down to “AI – is Skynet here already?”
I do not count myself as an expert in AI, although I’m learning about it daily. It is clearly Silicon Valley’s latest claim to relevance, and they are hyping it mightily with the aid of stenographic media who understand less about it than I do.
For those of us who read Isaac Asimov’s Three Laws of Robotics when we were eight years old or so, there was obviously something wrong with that report. Yes, there are problems with Asimov and with his three laws, but the need to program a death robot so that it doesn’t attack its controller/owner/whatever should be obvious, particularly to the military.
But the military gets stuff wrong, and they can be as susceptible to Silicon Valley hype as the media.
The disclaimer now says that the “simulation” was just talk, a thought experiment. But, of course, the debunking won’t reach all the people who saw the original report. And maybe that’s not so bad. If people believe that AI is dangerous, maybe we can do something to get it under control.
Cross-posted to Nuclear Diner