Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Back in 1975, the biology community, and society more broadly, were concerned about the potential dangers of recombinant DNA. Could it be used to create monsters or new diseases? Would people use it to interfere with human reproduction?
The biologists themselves held a conference at Asilomar, California, to assess the dangers and recommend possible mitigations. They issued a public statement summarizing their findings. Since then, they have developed and applied guidelines as new areas of research opened up.
In contrast, Our Silicon Valley Overlords want us to be worried – very worried – about their research into artificial intelligence. With their usual hype, they have loosed chatbots they call AI: very fancy autocompletes that require immense amounts of “training” on work other people have produced. And oh yes, they can’t tell you what that training base is, because you might be mean about it and point out that a great deal of what is available is sexist, racist, and classist, and that those traits just might be “trained” into their wonderful creation.
If there is a hazard, show us the pathways by which it might develop, as the biologists did. Then show us how it might be mitigated. Show some serious moral purpose, in other words.
If the hazard is so great, perhaps the appropriate statement would be that these Very Principled People feel they can no longer work on it and will be leaving the field to plant a garlic farm in the Santa Clara Valley where their offices used to be.
But this is the industry that cobbles something together and leaves it to the customer to figure out how to deal with its problems.
Cross-posted to Nuclear Diner