This week, Amazon completed the second phase of a deal it announced last September, pledging to invest up to $4 billion in OpenAI rival Anthropic. The additional $2.75 billion investment is the largest investment Amazon has ever made in another company and is another sign of how important developing language models at scale has become for Big Tech.
The logic is simple. Amazon needs to offer a model through AWS that competes with cloud rival Microsoft's OpenAI-powered offering, and Anthropic is the best alternative out there. If we could turn back the clock to a time when Big Tech could make large acquisitions without being stopped by regulators, Amazon would undoubtedly have tried to acquire Anthropic outright. Instead, the company is passively investing billions of dollars for a minority stake with no board seats. Meanwhile, conveniently for Amazon, Anthropic has agreed to spend its $4 billion on AWS over the next few years.
There are clear parallels here with Microsoft's funding of OpenAI's growing computing needs. But the relationship between Amazon and Anthropic isn't as cozy as it appears on the surface. In fact, another division of Amazon is trying to compete directly with Anthropic's models. I learned that Amazon's AGI team, led by SVP Rohit Prasad, has an aggressive goal of outperforming Anthropic's latest Claude model by the middle of this year. The team's next flagship model, internally codenamed "Olympus," is currently being trained and is extremely large, with hundreds of billions of parameters.