Anthropic CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek sending user data back to China.
In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare information about biological weapons in safety tests run by Anthropic.
DeepSeek's performance was “basically the worst of any model we'd tested,” Amodei said. “It had absolutely no blocks whatsoever against generating this information.”
Amodei said this was part of evaluations Anthropic routinely runs on a variety of AI models to assess their potential national security risks. His team looks at whether a model can generate bioweapons-related information that is not easily found on Google or in textbooks. Anthropic positions itself as the foundational AI model provider that takes safety seriously.
Amodei said he doesn't think today's DeepSeek models are “literally dangerous” in providing rare and dangerous information, but that they might be in the near future. He praised DeepSeek's team as “talented engineers,” but advised the company to “take these AI safety considerations seriously.”
Amodei also supports strong export controls on chips to China, citing concerns that they could give China's military an edge.
Amodei did not clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he provide more technical details about these tests. Anthropic did not immediately respond to TechCrunch's request for comment. Neither did DeepSeek.
DeepSeek's rise has raised safety concerns elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in their safety tests, yielding a 100% jailbreak success rate.
Cisco did not mention bioweapons, but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It is worth noting, however, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether such safety concerns will put a serious dent in DeepSeek's rapid adoption. Companies such as AWS and Microsoft have publicly touted integrating R1 into their cloud platforms, which is notable given that Amazon is Anthropic's biggest investor.
Meanwhile, a growing list of countries and companies has begun banning DeepSeek, particularly government organizations like the U.S. Navy and the Pentagon.
Time will tell whether these efforts take hold or whether DeepSeek's global rise will continue. Either way, Amodei says he considers DeepSeek a new competitor on the level of top American AI companies.
“The new fact here is that there's a new competitor,” he said on ChinaTalk. “Among the big companies that can train AI, Anthropic, OpenAI, Google, perhaps Meta and xAI, DeepSeek could potentially be added to that category.”