
Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement

A group of top AI researchers, engineers and CEOs has issued a new warning about the existential threat they believe AI poses to humanity.

The 22-word statement, trimmed short to make it as broadly acceptable as possible, reads in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The statement, published by the Center for AI Safety, a San Francisco-based nonprofit, was co-signed by Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman, as well as Geoffrey Hinton and Yoshua Bengio, two of the three AI researchers who won the 2018 Turing Award (sometimes called the “Nobel Prize of computing”) for their work on AI. At the time of writing, the third winner, Yann LeCun, now chief AI scientist at Facebook’s parent company Meta, has not signed.

The statement is the latest high-profile intervention in the complex and contentious debate over AI safety. Earlier this year, an open letter signed by some of the same individuals backing the 22-word warning called for a six-month “pause” in AI development. That letter was criticized on several levels: some experts thought it overstated the risk posed by AI, while others agreed about the risk but disagreed with the measures the letter suggested.

Dan Hendrycks, executive director of the Center for AI Safety, told The New York Times that the brevity of today’s statement, which does not suggest any potential ways to mitigate the threat posed by AI, was meant to avoid such disagreement. “We didn’t want to push for a very large menu of 30 potential interventions,” Hendrycks said. “When that happens, it dilutes the message.”

“There’s a very common misconception, even in the AI community, that there are only a handful of doomers.”

Hendrycks described the message as a “coming out” for industry figures worried about AI risk. “There’s a very common misconception, even in the AI community, that there are only a handful of doomers,” Hendrycks told the Times. “But, in fact, many people privately would express concerns about these things.”

The broad contours of this debate are familiar, but the details can be interminable, resting on hypothetical scenarios in which AI systems rapidly increase in capability and no longer function safely. Many experts point to swift improvements in systems such as large language models as evidence of projected future gains in intelligence. Once AI systems reach a certain level of sophistication, they argue, it may become impossible to control their actions.

Others doubt these predictions. They point to the inability of AI systems to handle even relatively mundane tasks like driving a car: despite years of effort and billions invested in this research area, fully self-driving cars are still far from a reality. If AI can’t meet even this one challenge, skeptics ask, what chance does the technology have of matching every other human accomplishment in the coming years?

Meanwhile, both advocates and skeptics of AI risk agree that, even without any improvement in their capabilities, AI systems present a number of dangers today, from enabling mass surveillance, to powering flawed “predictive policing” algorithms, to easing the creation of misinformation and disinformation.
