
Anthropic has taken the high road by committing to keep its Claude AI model family free of advertising.
“There are many good places for advertising,” the company announced on Wednesday. “A conversation with Claude is not one of them.”
Rival OpenAI has taken a different path and plans to show promotional material to customers on its free and Go tiers.
With its abjuration of sponsorship, Anthropic is leaning into its messaging that principles matter, a market position reinforced by recent reports about the company’s clash with the Pentagon over safeguards.
“We want Claude to act unambiguously in our users’ interests,” the company said. “So we’ve made a choice: Claude will remain ad-free. Our users won’t see ‘sponsored’ links adjacent to their conversations with Claude; nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.”
That choice may follow in part from how Anthropic’s customer base, and its path toward possible profitability, differ from its rivals’.
Anthropic has focused on business customers. According to The Information, “The vast majority of Anthropic’s $4.5 billion in revenue last year stemmed from selling access to its AI models through an application programming interface to coding startups Cursor and Cognition, as well as other companies such as Microsoft and Canva.”
For OpenAI, on the other hand, 75 percent of its revenue comes from consumers, according to Bloomberg. And given the rate at which OpenAI has been spending money – an expected $17 billion in cash burn this year, up from $9 billion in 2025, according to The Economist – ad revenue looks like a necessity.
Other major US AI companies – Google, Meta, Microsoft (to the extent its technology can be disentangled from OpenAI), and xAI – all have substantial advertising operations. (xAI, which acquired X last year, absorbed the social media company’s ad business, said to have generated about $2.26 billion in 2025, according to eMarketer.)
Anthropic’s concern is that serving ads in chat sessions would introduce incentives to maximize engagement – incentives that could get in the way of making the chatbot helpful and undermine trust, to the extent people trust error-prone models deemed dangerous enough to need guardrails.
“Users shouldn’t have to second-guess whether an AI is genuinely helping them or subtly steering the conversation towards something monetizable,” the AI biz said.
The incentive to undermine privacy is what worries the Center for Democracy & Technology.
“Business models based on targeted advertising in chatbot outputs, for example, will create incentives to collect as much user information as possible, including potentially from the highly personal conversations some users have with chatbots, which inexorably will raise risks to user privacy,” the advocacy group said in a recent report.
Melissa Anderson, president of Search.com, which offers a free, ad-supported version of ChatGPT for web search, told The Register in a phone interview that she disagrees with Anthropic’s premise that an AI service can’t be neutral while serving ads.
“They’re kind of saying it’s one or the other and I don’t think that’s the case,” Anderson said. “And here’s a great example: The New York Times sells advertising. The Wall Street Journal sells advertising. And so I think what they’re conflating is the concept that maybe advertisers are gonna somehow spoil the editorial content.”
At Search.com, and at some of the other large LLM providers, she said, there’s a commitment that the natural, organic LLM answer won’t be affected by advertisers.
Anthropic’s view, she said, is valid but extreme. “The advertising industry for a long time has recognized that having too many ads is definitely a bad thing,” she said. “But it’s possible in a world where there’s the right volume of ads, and those ads are relevant and interesting and helpful to the consumer, then it’s a positive thing.”
Iesha White, director of intelligence for Check My Ads, a non-profit ad watchdog group, took the opposite view, telling The Register in an email, “We applaud Anthropic’s decision to forgo an ad-supported monetization model.
“Anthropic’s recognition of the importance of its role as a true agent of its users is both refreshing and innovative. It puts Anthropic’s trust-centered approach in stark contrast to its peers and incumbents.”
Other AI companies, she said – pointing to Meta, Perplexity, and OpenAI’s ChatGPT – have chosen to adopt an ad monetization model that, by design, depends upon user data extraction.
“This data – including people’s deepest thoughts, hopes, and fears – is then packaged to sell ads to the highest bidders,” said White. “Anthropic has recognized that an ad-supported model would create incentives that undermine user trust as well as the company’s own broader vision. Anthropic’s choice reminds one of Google’s original but now jettisoned motto, ‘Don’t be evil.’ Let’s hope that Anthropic’s resolve to do right by its customers is stronger than Google’s was.” ®