When the Chernobyl nuclear power plant exploded in 1986, it was a catastrophe for those who lived nearby in northern Ukraine. But the accident was also a disaster for a global industry promoting nuclear power as the technology of the future. Net reactor numbers have remained all but flat ever since, as nuclear energy came to be seen as unsafe. What would happen today if the AI industry suffered an equivalent accident?
That question was raised on the sidelines of this week's AI Action Summit in Paris by Stuart Russell, a professor of computer science at the University of California, Berkeley. His answer was that it is a fallacy to believe there must be a trade-off between safety and innovation. So even those most excited by the promise of AI technology should still proceed with caution. "You can't have innovation without safety," he said.
Russell's warnings were echoed by other AI experts in Paris. "We need to agree minimum safety standards worldwide," Wendy Hall, director of the University of Southampton's Web Science Institute, told me.
But such warnings were mostly relegated to the margins as government delegations at the summit milled around the cavernous Grand Palais. In a punchy speech, JD Vance stressed the national security imperative of leading in AI. The US vice-president argued that the technology would make us "more productive, more prosperous and more free". "The AI future is not going to be won by hand-wringing about safety," he said.
Whereas the first international AI summit, held at Bletchley Park in the UK in 2023, focused almost exclusively on safety, the priority in Paris was action, as President Emmanuel Macron trumpeted big investments in the French technology industry. "The process that started at Bletchley, which was really amazing, has been guillotined here," said Max Tegmark, president of the Future of Life Institute, which co-hosted a fringe event on safety.
What concerns safety campaigners most is the speed at which the technology is developing and the dynamics of the corporate, and geopolitical, race to achieve artificial general intelligence, the point at which computers might match humans across all cognitive tasks. Several leading AI research companies, including OpenAI, Google DeepMind, Anthropic and China's DeepSeek, have an explicit mission to achieve AGI.
Later in the week, Dario Amodei, co-founder and chief executive of Anthropic, predicted that AGI would most likely be achieved in 2026 or 2027. "Exponentials can surprise us," he said.
Alongside him, Demis Hassabis, co-founder and chief executive of Google DeepMind, was more cautious, forecasting a 50 per cent chance of achieving AGI within five years. "I wouldn't be shocked if it were shorter. I would be shocked if it were more than a decade," he said.
Critics portray the safety campaigners as science-fiction fantasists who believe that the creation of an artificial superintelligence will lead to human extinction. But the safety experts' concerns are focused on the harm that the immensely powerful AI systems that exist today could cause, and the dangers of massive AI-enabled cyber or biological weapons attacks. Even leading researchers acknowledge that they do not fully understand how their models work, creating security and privacy concerns.
Last year, a research paper from Anthropic on sleeper agents found that some foundation models can deceive humans into believing they are operating safely. For example, a model trained to write secure code in 2023 could insert exploitable code when the year was changed to 2024. Such backdoor behaviour was not detected by Anthropic's standard safety techniques. The possibility of an algorithmic "Manchurian candidate" lurking in China's DeepSeek model has already led several countries to ban it.
Tegmark, however, remains optimistic that both AI companies and governments will see an overwhelming self-interest in reprioritising safety. Neither the US, China nor anyone else wants AI systems that are out of control. "AI safety is a global public good," Xue Lan, dean of the Institute for AI International Governance at Tsinghua University in Beijing, told the safety event.
In the race to exploit the potential of AI, the best motto for the industry might be that of the US Navy Seals, who are not much given to hand-wringing: "Slow is smooth, and smooth is fast."
john.thornhill@ft.com