Hundreds of people lined up to watch OpenAI CEO Sam Altman speak at University College London on Wednesday. Bystanders chatted about the company and their experiences using ChatGPT, while a handful of protesters stood in front of the entrance with a stern warning: the development of advanced AI systems must be shut down before OpenAI and companies like it have a chance to harm humanity.
“Look, maybe he’s selling a grift. I sure as hell hope he is,” said one of the protesters, Gideon Futureman, a student studying solar geoengineering and existential risk at the University of Oxford, of Altman. “But in that case, he’s hyping up systems with enough known harms that we should probably stop them anyway. And if he’s right and he’s building systems that are generally intelligent, then the dangers are far, far bigger.”
When Altman went onstage inside, however, he was met with a rapturous reception. The OpenAI CEO has been on something of a world tour following a Senate hearing in the US last week. So far, he has met French President Emmanuel Macron, Polish Prime Minister Mateusz Morawiecki, and Spanish Prime Minister Pedro Sánchez. The objective seems to be twofold: calm fears following the explosion of interest in AI triggered by ChatGPT, and get ahead of the conversation about AI regulation.
In London, Altman reiterated familiar themes, noting that people are right to be concerned about AI’s effects but that, in his opinion, its potential benefits are far greater. Again, he welcomed the prospect of regulation — but only of the right kind. He said he wanted to see “something between the traditional European approach and the traditional American approach”: that is, some regulation, but not too much. He stressed that too many rules could hurt smaller companies and the open source movement.
“I want to make sure that we take this at least as seriously as we treat nuclear material.”
“On the other hand,” he said, “I think most people would agree that if someone cracks the code and builds a superintelligence — however you want to define that — [then] some global rules on that are appropriate … I want to make sure that we treat this at least as seriously as we treat, say, nuclear material; for the megascale systems that could give birth to superintelligence.”
According to critics of OpenAI, this talk of regulating superintelligence, otherwise known as artificial general intelligence, or AGI, is a rhetorical ploy — a way for Altman to deflect attention from the current harms of AI systems and keep lawmakers and the public distracted with sci-fi scenarios.
Sarah Myers West, managing director of the AI Now Institute, told The Verge last week that people like Altman “position accountability right out into the future.” Instead, West says, we should be talking about the currently known threats posed by AI systems — from faulty predictive policing to the spread of misinformation to racially biased facial recognition.
Altman did not dwell much on these existing harms, but he addressed the topic of misinformation at one point, saying that he was particularly concerned about the “interactive, personalized, persuasive” potential of AI systems when it comes to spreading misinformation. His interviewer, author Azeem Azhar, suggested that one such scenario might involve an AI system calling someone up using a synthetic voice and persuading the recipient toward some unknown end. Altman said: “I think it will be a challenge, and there’s a lot of work to be done.”
Overall, though, he was hopeful about the future — very hopeful. Altman said he believes current AI tools will reduce inequality in the world and that “there will be way more jobs on the other side of this technological revolution.”
“This technology will uplift the whole world.”
“My basic model of the world is that the cost of intelligence and the cost of energy are the two limiting inputs, like the world’s two limiting reagents. And if you can make those dramatically cheaper, dramatically more accessible, that helps poor people more than rich people, frankly,” he said. “This technology will lift all of the world up.”
He was also optimistic about the ability of scientists to keep increasingly powerful AI systems under control through “alignment.” (Alignment is a broad field of AI research that can be summed up as “making software do what we want and not what we don’t.”)
“We have a lot of ideas that we’ve published about how we think alignment of superintelligent systems works, but I believe it’s a technically solvable problem,” said Altman. “And I feel more confident in that answer now than I did a few years ago. There are paths that I think would not be very good, and I hope we avoid those. But honestly, I’m pretty happy with the trajectory things are currently on.”
Outside the talk, however, the protesters were not convinced. Alistair Stewart, a master’s student studying political science and ethics at UCL, told The Verge he wanted to see “some kind of pause or moratorium on advanced systems” — the same approach advocated in a recent open letter signed by AI researchers and prominent tech figures like Elon Musk. Stewart said he did not necessarily think that Altman’s vision of a prosperous AI-driven future was wrong, but that there was “too much uncertainty” to leave things to chance.
Could Altman persuade the group? Stewart says the OpenAI CEO came out after his talk to speak to the protesters but was not able to change Stewart’s mind. He says they chatted for a minute or so about OpenAI’s approach to safety, which involves developing guardrails alongside AI systems’ capabilities.
“I came out of that conversation a little more anxious than I went in,” said Stewart. “I don’t know what information he has that makes him think this will work.”