Hello everyone, welcome to TechCrunch's regular AI newsletter. If you'd like to have this sent to your inbox every Wednesday, sign up here.
OpenAI gains at the expense of its major rivals.
On Tuesday, the company announced the Stargate Project, a new joint venture involving Japanese conglomerate SoftBank, Oracle, and others to build AI infrastructure for OpenAI in the United States. If all goes according to plan, the venture could raise up to $500 billion in funding for AI data centers.
The news was no doubt disappointing for OpenAI's competitors, such as Anthropic and Elon Musk's xAI, neither of which is expected to benefit from a comparably large infrastructure investment.
xAI plans to expand its Memphis data center to 1 million GPUs, and Anthropic recently signed a deal with Amazon Web Services (AWS), Amazon's cloud computing division, to use and help improve its custom AI chips. But even with Amazon's vast resources behind Anthropic, it's hard to imagine either company outspending Stargate.
Granted, Stargate may not live up to its promise; other technology infrastructure projects in the United States haven't. Recall that in 2017, Taiwanese manufacturer Foxconn pledged to spend $10 billion on a factory near Milwaukee, a commitment that subsequently fell through.
But Stargate has more backers and, by all appearances, more momentum. The first data center funded by the initiative has already broken ground in Abilene, Texas, and the companies participating in Stargate have pledged an initial $100 billion.
Indeed, Stargate appears poised to solidify OpenAI's place in the exploding field of AI. OpenAI already has more active users (300 million weekly) than any other AI venture, and more customers: over 1 million companies pay for its services.
OpenAI had a first-mover advantage. Now it has the potential to dominate on infrastructure as well. If its rivals want to compete, they'll have to be smart; brute force isn't a viable option.
News
No more Microsoft exclusivity: Microsoft was once the exclusive provider of the data center infrastructure OpenAI uses to train and run its AI models. No longer. Now the company merely holds a “right of first refusal.”
Perplexity launches an API: AI-powered search engine Perplexity has launched an API service called Sonar, which lets businesses and developers build the startup's generative AI search tools into their own applications (a minimal integration sketch follows these news items).
Speeding up the “kill chain” with AI: My colleague Max interviewed Radha Plumb, the Department of Defense's chief digital and AI officer. Plumb said the department is using AI to gain a “significant advantage” in identifying, tracking, and assessing threats.
Benchmark in question: Until relatively recently, the organization developing mathematical benchmarks for AI did not disclose that it had received funding from OpenAI, leading some in the AI community to accuse it of impropriety.
A new model from DeepSeek: Chinese AI lab DeepSeek has released an open version of its reasoning model, DeepSeek-R1, which it claims performs on par with OpenAI's o1 on certain AI benchmarks.
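For developers curious what the Perplexity integration mentioned above might look like: Sonar is documented as an OpenAI-compatible chat completions API, so a minimal sketch could resemble the following. The base URL and model name here are assumptions drawn from Perplexity's public documentation at the time and may change, so treat this as illustrative rather than authoritative.

# Minimal, illustrative sketch of calling Perplexity's Sonar API through its
# OpenAI-compatible chat completions interface. The base URL and model name
# are assumptions; check Perplexity's current documentation before use.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",    # issued from Perplexity's developer dashboard
    base_url="https://api.perplexity.ai",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="sonar",  # assumed name for the Sonar search model
    messages=[{"role": "user", "content": "Summarize this week's biggest AI infrastructure news."}],
)
print(response.choices[0].message.content)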
This week's research paper
Last week, Microsoft highlighted MatterGen and MatterSim, two AI-powered tools that it claims could help design advanced materials.
MatterGen uses scientific principles to propose potential materials with desired properties. As described in a paper published in the journal Nature, MatterGen can generate thousands of candidates that satisfy “user-defined constraints,” suggesting new materials that meet very specific needs.
MatterSim, for its part, predicts which of MatterGen's proposed materials are stable and viable.
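To make the division of labor between the two tools concrete, here is a rough Python sketch of the generate-then-screen workflow described above. Every function and field name below is a hypothetical stand-in, not the actual API of Microsoft's released code; only the overall shape of the pipeline comes from the description in this section.

import random
from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    energy_above_hull: float  # eV/atom; lower values suggest greater stability

def generate_candidates(constraints: dict, n: int) -> list[Candidate]:
    # Stand-in for MatterGen: propose candidate materials that satisfy
    # user-defined constraints. Here it simply fabricates placeholders.
    return [Candidate(formula=f"Hypothetical-{i}", energy_above_hull=random.random())
            for i in range(n)]

def predict_stability(candidate: Candidate) -> float:
    # Stand-in for MatterSim: estimate whether a proposed material is
    # stable and viable enough to be worth synthesizing.
    return candidate.energy_above_hull

constraints = {"band_gap_eV": (1.0, 2.0)}  # example of a user-defined constraint
candidates = generate_candidates(constraints, n=1000)
viable = [c for c in candidates if predict_stability(c) < 0.1]
print(f"{len(viable)} of {len(candidates)} candidates pass the stability screen")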
Microsoft says a team at the Shenzhen Institute of Advanced Technology was able to synthesize a new material designed with MatterGen. The material wasn't perfect, but Microsoft has made MatterGen's source code publicly available and says it plans to work with outside collaborators to develop the technology further.
This week's model
Google has released Gemini 2.0 Flash Thinking Experimental, a new version of its experimental “inference” model. The company claims it performs better than the original on math, science, and multimodal reasoning benchmarks.
Reasoning models like Gemini 2.0 Flash Thinking Experimental effectively fact-check themselves, which helps them avoid some of the pitfalls that typically trip up models. The trade-off is that reasoning models take longer to arrive at solutions, typically seconds to minutes longer than conventional “non-reasoning” models.
The new Gemini 2.0 Flash Thinking also has a 1 million token context window to analyze long documents such as research studies and policy documents. One million tokens is equivalent to approximately 750,000 words, or 10 books of average length.
Grab bag
An AI project called GameFactory shows that it is possible to “generate” interactive simulations by training a model on Minecraft videos and extending that model to different domains.
GameFactory's researchers, most of them from the University of Hong Kong and Kuaishou, a partly state-owned Chinese company, have published several example simulations on the project's website. They leave a lot to be desired, but the concept, a model that can generate worlds in a virtually endless variety of styles and themes, is interesting nonetheless.