Ethereum mastermind Vitalik Buterin has called the current AI frenzy “very risky” and urged a more cautious approach to AI research. Responding to criticism of OpenAI and its leadership from Ryan Selkis, CEO of cryptocurrency intelligence firm Messari, the founder of the world's second most influential blockchain outlined his views on the core principles that should guide AI development and alignment.
“Superintelligent AI is very risky, and we should not rush into it; we should push back against those who try. No $7 trillion server farms,” Buterin argued.
Superintelligent AI is a theoretical form of artificial intelligence that surpasses human intelligence in nearly every domain. While many view artificial general intelligence (AGI) as the ultimate realization of the technology's potential, superintelligence would be the step beyond it. Today's most advanced AI systems have not yet reached these thresholds, but continued progress in machine learning, neural networks, and related technologies leaves people alternately excited and anxious.
After Selkis tweeted that “AGI is too important to deify another smooth-talking narcissist,” Buterin emphasized the importance of a diverse AI ecosystem to avoid a world in which the vast value created by AI is owned and controlled by a very small number of people.
“A strong ecosystem of open models running on consumer hardware [is] an important hedge to protect against a future where the value captured by AI becomes overly concentrated and most human thought is read and mediated by a few central servers controlled by a handful of people,” he said. “Such models also carry a much lower risk of doom than corporate megalomania or militaries.”
Ethereum’s creator has been keeping a close eye on the AI scene, recently praising the open-source Llama 3 LLM. He also suggested that OpenAI's GPT-4o multimodal LLM may have passed the Turing test, citing research showing that its responses were indistinguishable from a human's.
Buterin also weighed in on categorizing AI models into “small” and “large” groups, saying that focusing regulation on “large” models is a reasonable priority, though he expressed concern that many current proposals could eventually see everything classified as “large.”
Buterin's comments come amid a heated debate over AI alignment and the departure of key figures from OpenAI's superalignment research team. Ilya Sutskever and Jan Leike have left the company, with Leike accusing OpenAI CEO Sam Altman of prioritizing “shiny products” over responsible AI development.
It also separately emerged that OpenAI imposes strict non-disclosure agreements (NDAs) that prohibit employees from discussing the company after they leave.
The long-running, high-level debate about superintelligence is becoming more urgent, with experts voicing concerns and offering a range of recommendations.
Paul Christiano, who previously led the language model alignment team at OpenAI, founded the Alignment Research Center, a nonprofit dedicated to aligning AI and machine learning systems with “human interests.” As reported by Decrypt, Christiano has suggested there may be “a 50-50 chance that a catastrophe occurs shortly after a human-level system is created.”
Meanwhile, Meta's chief AI scientist Yann LeCun believes such a catastrophic scenario is highly unlikely. Back in April 2023, he tweeted that a “hard takeoff” scenario was “simply impossible.” LeCun argued that near-term AI developments, and how we handle them, will heavily influence the technology's long-term trajectory.
Buterin, for his part, considers himself more of a centrist. In a 2023 essay he reaffirmed today, he acknowledged that “it seems very hard to have a ‘friendly’ superintelligent-AI-dominated world where humans are anything other than pets,” but also argued that “oftentimes, it really is the case that version N of our civilization's technology causes a problem, and version N+1 fixes it. However, this does not happen automatically, and requires intentional human effort.” In other words, if superintelligence were to become a problem, humanity would probably find a way to deal with it.
The departure of OpenAI researchers focused on more cautious alignment, along with the company's shifting approach to safety policy, has raised broader mainstream concerns about a lack of attention to ethical AI development at major AI startups. Indeed, Google, Meta, and Microsoft have also reportedly disbanded the teams responsible for ensuring that AI is developed safely.
Edited by Ryan Ozawa.