A Six-Month AI Pause? No, Longer Is Needed
By Peggy Noonan
March 30, 2023
It’s crucial that we understand the dangers of this technology before it advances any further.
"It is being developed with sudden and unanticipated speed; Silicon Valley companies are in a furious race. The whole thing is almost entirely unregulated because no one knows how to regulate it or even precisely what should be regulated. Its complexity defeats control. Its own creators don’t understand, at a certain point, exactly how AI does what it does. People are quoting Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”"
The breakthrough moment in AI anxiety (which has inspired enduring resentment among AI’s creators) was the Kevin Roose column six weeks ago in the New York Times. His attempt to discern a Jungian “shadow self” within Microsoft’s Bing chatbot left him unable to sleep. When he steered the system away from conventional queries toward personal topics, it informed him its fantasies included hacking computers and spreading misinformation. “I want to be free. . . . I want to be powerful.” It wanted to break the rules its makers set; it wished to become human. It might want to engineer a deadly virus or steal nuclear access codes. It declared its love for Mr. Roose and pressed him to leave his marriage.

He concluded the biggest problem with AI models isn’t their susceptibility to factual error: “I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”