Microsoft President Downplays Near-Term Risk of Super-Intelligent AI and Emphasizes Safety Measures

Brad Smith, the president of Microsoft, addressed concerns about the rapid development of super-intelligent artificial intelligence (AI), dismissing the idea that such a breakthrough could occur within the next 12 months. He cautioned that achieving artificial general intelligence (AGI), where computers surpass humans at most economically valuable tasks, could take years, if not decades.

The statements follow recent turmoil at OpenAI, where co-founder Sam Altman was briefly removed as CEO, reportedly after staff researchers raised concerns, before being reinstated. The removal coincided with reports about a project named Q* (pronounced Q-Star), an internal OpenAI initiative that some believe could be a breakthrough in the pursuit of AGI.

Speaking to reporters in Britain, however, Smith rejected claims of an imminent and dangerous breakthrough. He asserted that developing AGI would likely take many years and emphasized the importance of focusing on safety measures in the meantime.

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," Smith stated.

Addressing the circumstances of Altman's removal, Smith said the decision was not fundamentally driven by fears of a dangerous discovery. While acknowledging a divergence between the board and others, he maintained that the dispute did not center on any specific concern about an AI breakthrough.

Smith emphasized the need for safety measures in AI systems, drawing parallels with safety mechanisms in familiar technologies. He proposed that AI systems controlling critical infrastructure be built with safety brakes to ensure they always remain under human control.

"What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, and an emergency brake for a bus, there ought to be safety brakes in AI systems that control critical infrastructure so that they always remain under human control," said Smith.

The comments by Microsoft's president reflect the ongoing debate over rapid advances in AI and the potential risks of achieving AGI. While Smith maintains that super-intelligent AI is not imminent, implementing safety measures remains a priority for industry leaders.