
Competition Law Meets AI: Enforcement Heats Up

Competition authorities in the EU and the UK are increasingly scrutinising whether AI‑driven products, data tools and digital ecosystems comply with competition law. Recent enforcement activity shows a particular focus on two risks: exclusionary strategies by dominant firms seeking to favour their own AI products, and coordination risks arising from the design and use of data‑rich, AI‑enabled tools, even where there is no express agreement between competitors.

Taken together, these cases suggest that competition authorities are now treating AI not merely as a neutral tool, but as a core competitive parameter alongside price, quality and innovation, one whose design and use require careful consideration.

Meta, WhatsApp and AI: The Commission Draws a Red Line

At EU level, the European Commission (“Commission”) has opened proceedings against Meta in relation to third‑party AI assistants and access to WhatsApp. In October 2025, Meta introduced a policy preventing third‑party general‑purpose AI assistants from accessing WhatsApp, leaving Meta’s own AI assistant as the only option on the platform.

The Commission launched proceedings in December 2025 on the basis that Meta may be dominant in the EEA market for consumer communications applications, and that excluding rival AI assistants could amount to an abuse of dominance under Article 102 TFEU. Meta later replaced the outright ban with a fee payable by third‑party AI providers. However, in February 2026, the Commission issued a supplementary statement of objections, provisionally finding that the revised policy may have equivalent exclusionary effects.

Notably, the Commission has signalled it is considering interim measures to ensure third‑party AI assistants can access WhatsApp on the same terms as before October 2025, citing the risk of serious and irreparable harm to competition in a rapidly evolving AI market. This reflects a growing willingness to intervene early where exclusion today could shape market structure irreversibly tomorrow - particularly where access to data, users and distribution is critical to developing competitive AI products.

Benchmarking or Collusion? The CMA’s Warning on AI‑Driven Coordination

In parallel, the UK Competition and Markets Authority (“CMA”) is investigating suspected indirect information sharing in the hotels sector through STR, a benchmarking and analytics platform owned by CoStar. While the CMA accepts that analytics, algorithms and dynamic pricing tools are not inherently anti‑competitive, it is examining whether such tools may reduce competitive uncertainty in practice.

The investigation focuses on whether features such as frequent or granular outputs, narrow competitive sets, or “give‑to‑get” data arrangements may enable hotels to infer, anticipate or align behaviour – in effect, a form of coordination by data environment rather than by agreement. Importantly, STR itself is also under investigation, suggesting that the CMA is scrutinising not just how the tool is used, but whether its design, configuration and governance choices may have facilitated the exchange of competitively sensitive information.

This appears to represent a subtle but important evolution in enforcement thinking: competition risk may arise from system architecture, even where there is no evidence of intent, communication or overt collusion. AI‑enabled tools can, in effect, displace independent commercial judgment and give rise to what might be viewed as tacit coordination.

What This Means for Businesses

These developments point to a clear enforcement trend. Authorities are adapting competition law to algorithmic markets by looking beyond traditional notions of agreement or intent, and focusing instead on how AI systems are designed, trained and deployed. For businesses, competition compliance increasingly needs to be assessed at the point of AI procurement, configuration and governance, not just downstream behaviour. In fast‑moving AI markets, competition risk is now as much about architecture as it is about conduct.

With thanks to Bryony Roy and Sarah Miller for their contribution to this article.

“Meta's conduct risks blocking competitors from entering or expanding in the rapidly growing market for AI assistants.”
