
OpenAI leaders propose international regulatory body for AI

AI is developing fast enough, and the dangers it could pose are clear enough, that OpenAI’s leadership believes the world needs an international regulatory body akin to the one governing nuclear power, and fast. But not too fast.

In a post on the company’s blog, OpenAI founder Sam Altman, president Greg Brockman and chief scientist Ilya Sutskever explain that the pace of innovation in artificial intelligence is so rapid that we can’t expect existing authorities to adequately rein in the technology.

While there is a certain quality of patting itself on the back here, it is clear to any impartial observer that the technology most visible in OpenAI’s explosively popular ChatGPT conversational agent represents a unique threat as well as an invaluable asset.

The post, while generally light on details and commitments, nevertheless acknowledges that AI is not going to manage itself:

We need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help the smooth integration of these systems with society.

we are likely to eventually need something like an [International Atomic Energy Agency] for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.

The IAEA is the United Nations’ official body for international cooperation on nuclear energy issues, although of course, like other such organizations, it can want for punch. An AI-governing body built on this model may not be able to flip a switch and shut down a bad actor, but it can establish and track international standards and agreements, which is at least a starting point.

OpenAI’s post notes that tracking the compute power and energy use dedicated to AI research is one of the relatively few objective measures that can, and probably should, be reported and tracked. While it may be hard to say whether AI should or shouldn’t be used for a given purpose, it can be useful to say that the resources dedicated to it should be monitored and audited like those of other industries. (Smaller companies could be exempt so as not to stifle the green shoots of innovation, the company suggested.)

Leading AI researcher and critic Timnit Gebru said something similar in an interview with the Guardian just today: “Companies are just not going to self-regulate unless there is external pressure to do something different. We need regulation and we need something better than just the profit motive.”

OpenAI has clearly embraced the latter, much to the dismay of many who hoped it would live up to its name, but at least as the market leader it is also calling for real action on the governance side, beyond hearings like the recent one, where senators lined up to deliver re-election speeches that end in question marks.

While the proposal amounts to “maybe we should, like, do something,” it is at least a conversation starter in the industry, and it indicates support from the world’s largest single AI brand and provider for doing that something. Public oversight is desperately needed, but “we don’t yet know how to design such a mechanism.”

And although the company’s leaders say they support tapping the brakes, they have no plans to do so just yet, both because they don’t want to let go of the enormous potential AI has to “improve our society” (not to mention the bottom line) and because there is a risk that bad actors have their foot firmly on the gas.
