Tech Giants Unite for AI Safety Pact

At the Seoul AI Safety Summit, major tech companies including Microsoft, Amazon, and OpenAI reached a landmark international accord on artificial intelligence safety. Under the terms of this agreement, companies from many nations have voluntarily committed to the safe development of their most advanced AI models. They have agreed to publish safety frameworks outlining how they will address risks posed by frontier AI systems, such as preventing abuse by malicious actors. These frameworks include "red lines" defining intolerable risks, such as automated cyberattacks and bioweapon threats, with plans to implement a "kill switch" if necessary.

The commitments, hailed by U.K. Prime Minister Rishi Sunak as a world first, emphasize transparency and accountability in AI development. Trusted actors, including governments, will provide input on safety thresholds ahead of the AI Action Summit in France in early 2025. The agreements specifically target frontier models, such as OpenAI's GPT family, which powers ChatGPT, amid growing concerns about the risks posed by advanced AI systems capable of generating human-like text and imagery. While the European Union has introduced the AI Act to regulate AI development, the U.K. has adopted a "light-touch" approach, with potential future legislation for frontier models still under consideration.