Americans are worried that artificial intelligence technologies like ChatGPT will be used to worsen social ills, from fraud and identity theft to extremism and hate, and they want companies like Microsoft, Google and OpenAI, which are rushing to commercialize these tools, to do something about it, according to a new survey from ADL shared exclusively with USA TODAY.
The ADL survey reflects growing unease over the rapidly evolving technologies that have the potential to improve people’s lives but could also cause substantial harm, said ADL CEO Jonathan Greenblatt.
The majority of Americans worry that people will use AI for criminal activity (84%), spreading false or misleading information (83%), radicalizing people to extremism (77%), and inciting hate and harassment (75%), the survey said.
Three-quarters of those surveyed – 75% – think the tools will produce biased content targeting marginalized groups while 70% say they will be used to make extremism and hate, including antisemitism, worse in America.
The survey of 1,007 U.S. adults was released ahead of a Senate hearing Tuesday where Sam Altman, CEO of ChatGPT creator OpenAI, and other officials are scheduled to testify about the potential risks of AI chatbots.
“If we’ve learned anything from other new technologies, we must protect against the potential risk for extreme harm from generative AI before it’s too late,” Greenblatt said in a statement to USA TODAY.
The new wave of AI tools has dazzled Americans, promising a bevy of benefits. They can carry on human-like conversations, write essays, compose music and create audio, video and images.
But these tools also have worrying implications for the future of work and education as well as the future of humanity.
Geoffrey Hinton, a top architect of artificial intelligence, recently warned that AI could someday take over the world and push humanity toward extinction.
The White House recently summoned officials from Microsoft, Google and ChatGPT creator OpenAI to discuss the risks and promote responsible innovation that protects the rights and safety of Americans.
The Federal Trade Commission has also warned that it will crack down on AI technologies if they run amok.
The ADL survey revealed that government concerns reflect those of most Americans.
Nearly 90% say tech companies should take steps to prevent their AI tools from creating harmful content, including antisemitic or extremist images, and they support congressional efforts to intervene. Respondents also back audits of the tools, with 86% agreeing that academic or civic groups “should have access to review or audit the tools to make sure they are properly constructed.”
Some 81% said creators of these tools should be held responsible if they are used to spread hate or harassment.