Artificial Intelligence: A Legal and Ethical Storm is Coming
Last summer, Blake Lemoine, a former engineer on Google's Responsible AI team, claimed that after interacting hundreds of times with a revolutionary AI system called LaMDA, he believed the program had achieved something previously found only in science fiction: a level of consciousness. He doubled down on those claims earlier this year, saying that watching Microsoft's chatbot felt like "watching the train wreck happen in real time."
Though we've yet to see anything approaching HAL 9000, the fictional artificial intelligence and main antagonist of Arthur C. Clarke's 2001: A Space Odyssey, what we have seen led another AI pioneer, Geoffrey Hinton, to leave his role at Google earlier this month to speak out about the "dangers" of the technology he helped develop.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Hinton told the New York Times, which was the first to report his decision.
THE UNITED STATES SENATE HAS NOTICED
On Tuesday, Sam Altman, the 38-year-old CEO of OpenAI, told US senators that he would support regulation in many areas, including preventing election misinformation, clearly labeling AI-generated content for users, and stopping people from using AI "to kill us all."
He also discussed the potential benefits and harms of artificial intelligence. "If this technology goes wrong, it can go quite wrong," he said.