Top AI researchers and developers have signed a brief statement expressing concern that their work could hasten the downfall of humanity. The San Francisco-based charity Center for AI Safety published the statement on its website. Sam Altman, CEO of OpenAI, the firm behind ChatGPT, is among the nearly 400 individuals who signed the letter, along with other leading AI executives from Google and Microsoft and more than 200 academics.
The announcement is the latest in a long line of warnings from AI researchers, but it has also fueled growing resistance to the current push, which some view as overstating the potential dangers posed by AI. Hugging Face co-founder and CEO Clément Delangue posted an image of the statement after it was edited to read “AGI” instead of “AI.”
The term “artificial general intelligence” (or “AGI”) refers to a hypothetical type of artificial intelligence that equals or even exceeds the capabilities of humans. Two months ago, a similar group of AI and industry veterans, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and IBM chief scientist Grady Booch, issued a comparable public warning in an open letter calling for a “pause” on large-scale AI research. None of those signatories has yet signed the new statement, and no pause of any kind has taken place.
Altman, who has advocated for AI regulation on several occasions, impressed lawmakers when he testified earlier this month. But Altman’s appetite for stricter rules has its limits: he warned last week that OpenAI might leave the European Union if artificial intelligence were “over-regulated.” The United States may eventually enact broad regulation of the AI business, although the White House has so far indicated only limited intent to do so. Gary Marcus, professor emeritus of psychology and neural science at New York University and a leading AI critic, comments that although the risks posed by AI are quite real, it is counterproductive to focus only on the worst-case scenarios.