I’ve been dutifully chronicling the mismatch between Amazon’s endless generation of words about generative AI and their actual delivery (or lack thereof) of a large-language model (basically a divide-by-zero error).
To date, Amazon’s primary generative output has been mediocre metaphors arguing it is too early to dismiss them as irrelevant. If metaphor quality is any indication of their LLM quality, they are in trouble.
They subjected us to an entire summer of “We’re only three steps into a 10k race” (at that rate, they are suggesting this race could take upwards of a millennium to play out). As “Gimme Three Steps” got overplayed, they summoned the media last week to announce a metaphorical shift to “it’s Day 0.1” (announced on Day 296 of ChatGPT’s existence). Dismissing LLMs as a mere “parlor trick” might have been more compelling had they been able to perform said trick.
But we finally have something meatier than metaphors to mull. A press release:
It can be both fun and informative to parse this kind of announcement:
- The first paragraph tells us that Anthropic, seemingly of its own volition, “selects AWS as its primary cloud vendor”, is very excited about Amazon’s training and inference silicon, and will graciously channel developer access to its models through AWS’s Bedrock API. That is quite an endorsement for Amazon, a company some have written off in this space.
- But six paragraphs later we learn — “oh, by the way” — Amazon is investing “up to $4 billion” in Anthropic (presumably they have an option to buy at least control of the company). No details are provided as to whether all that coinage influenced Anthropic’s endorsements.
- Anthropic’s cap table has some strange bedfellows. The Series B was funded by the brain trust/criminal defendants at effective altruism company FTX (unclear what happens to the FTX stake with Amazon’s investment, but the bankruptcy administrators’ actions in July hint at when the Amazon talks heated up). Anthropic previously ran on AWS until Google Cloud wooed them away barely seven months ago for $300 million (Anthropic was really excited about Google’s chips too).
- Some wonder how Google Cloud could let Anthropic “get away”. Their escape just underscores how low GCP is on the list of priorities at Google, and Google proper, although still in the midst of bestirring itself from a long peacetime nap, doesn’t need or want Anthropic. Amazon, meanwhile, spells desperation with A and I.
- The silicon allocation team at NVIDIA will no doubt note the first-paragraph prominence given to Amazon silicon in the announcement (and in the unlikely event they miss that, they’ll see the reiteration in the fourth paragraph). Amazon is doubling down on not even going through the motions of making nice with NVIDIA.
- Despite being framed as an AWS announcement, Amazon CEO (and generative AI product manager for the company) Andy Jassy provides the quote. There is a single passing reference to “Amazon developers and engineers will be able to build with Anthropic models… and create net-new customer experiences across Amazon’s businesses” but that is it for discussion of first party use at Amazon. No word on whether last week’s demo of an LLM behind Alexa was using Anthropic as some have speculated.
- We get a lengthy reiteration of Amazon’s three-layer strategy for generative AI which includes everything except their own LLM. There is no reciprocity of primary-ness: Amazon is sticking to the claim they will support multiple models even as their internal Titan LLM is still missing in action and there are very few unaligned LLMs left. But they know just renting out GPUs leaves them precariously lower in the stack than the companies that use those GPUs (for next word guessing no less). It looks like Amazon has concluded they must buy, not build, and further they’re not going to be the center of a competitive, multi-model ecosystem where they can play off multiple LLMs against one another. That is a change in both ambition and direction.
- Companies that are cheapskates when it comes to acquisitions, especially as emerging categories tip, usually regret being penny pinchers later. That looks like what happened here. Amazon likely had a chance to invest in Anthropic’s earlier rounds, was aghast at the valuation, and now finds itself paying up by an order of magnitude two quarters later. That also suggests Titan failed to make (over)promised progress in recent months.
So it seems Amazon has found its OpenAI and hopes to use it to rejoin the competition with Google (Gemini) and Microsoft/OpenAI.
But the most entertaining aspect of this announcement is thinking about how these two very different companies will work together. Some interesting attributes of Amazon’s new bestie:
- Anthropic describes itself as “an AI safety and research company”. Contrast that to OpenAI, which is “an AI research and deployment company”, or AWS’s insane boilerplate that enumerates precisely how many deployed and announced Availability Zones they have.
- Anthropic is a Public Benefit Corporation and is running some interesting experiments in corporate governance.
- Anthropic is the doomer-i-est of AI companies. They must have a discussion every morning whether to just shut the whole operation down due to the ongoing existential risk they pose to humanity. A recent profile of the company was entitled “Inside the White-Hot Center of A.I. Doomerism”. Quote: “At times, I felt like a food writer who was assigned to cover a trendy new restaurant, only to discover that the kitchen staff wanted to talk about nothing but food poisoning.”
- Anthropic spends a lot of time on highly academic and abstruse topics (I have an unfinished post in my Things I Don’t Understand series about their ideas around Constitutional AI, which are either very, very naive or very, very cynical).
- Anthropic’s AI assistant is named Claude, which sounds French.
Amazon is none of those things. And is extraordinarily frantic to show the world that generative AI has not left them behind.