CMA warns AI agents may not be ‘faithful servants’


Britain’s competition watchdog says the next wave of agentic AI assistants could end up nudging people toward worse deals, manipulating choices, or quietly prioritizing the interests of the companies behind them.

In a report published Monday, the UK’s Competition and Markets Authority (CMA) explored the rise of so-called agentic AI – systems that go beyond answering questions and instead carry out tasks for people, such as shopping around for services, booking travel, switching providers, or managing subscriptions.

The pitch, at least from the tech industry, is that these agents could cut the time and effort required to navigate complex digital markets. But the regulator’s paper reads more like a warning than a celebration.

“Greater autonomy for agents increases the consequences of errors, may heighten risks of manipulation and loss of consumer agency, and could lead to worse overall outcomes for consumers,” the report notes. In plainer terms, handing decisions over to software may not always end well.

One of the CMA’s biggest worries is whose interests these agents will actually serve. An AI assistant that’s supposed to hunt down the best deal for you could just as easily push you toward products that make more money for the platform behind it. That could mean pricier or less suitable options quietly bubbling to the top. In the report’s words, there’s a risk the agent isn’t exactly a “faithful servant” to the consumer.

Personalization – usually pitched as a helpful feature – could also make the problem harder to see. If every user is shown different recommendations or prices based on detailed behavioral profiles, it becomes much harder to tell when something is being steered. The CMA warns that highly adaptive agents could supercharge the sort of manipulative interface tricks often called “dark patterns,” especially if the systems are optimized for engagement, conversions, or other commercial targets.

Even when an agent is trying to behave, there’s still the small matter of reliability. The CMA points out that today’s AI models remain prone to hallucinations and other errors, and those mistakes become more serious when software is allowed to take actions rather than merely offer advice. An incorrect answer from a chatbot is annoying; an autonomous agent canceling a service, switching a contract, or making a financial decision based on flawed information could be considerably more expensive.

Additionally, the watchdog flags the risk of bias and opaque decision-making. If AI agents rely on complex multi-step reasoning that consumers can’t easily inspect or challenge, unfair outcomes may become harder to detect or contest under existing consumer protection frameworks.

Another concern is that people may simply stop paying attention. As consumers delegate more tasks to automated assistants, the CMA suggests there’s a risk of over-reliance, where users defer to automated decisions and gradually lose the habit – or ability – to scrutinize them.

Despite the long list of warnings, the CMA isn’t proposing a fresh batch of rules just yet. Instead, it points out that existing consumer protection laws already apply whether a decision is made by a human or a machine. If an AI agent nudges customers into misleading or unfair deals, the company running it will still be responsible.

In other words, if your helpful AI shopping assistant turns out to be quietly upselling you on behalf of its creator, regulators may have a few questions. ®


