Generative AI in the spotlight of competition regulators.
In January 2024, the European Commission launched an investigation into the agreements between large AI players and their impact on the market and competition. The Commission specified that it is checking whether Microsoft's investment in OpenAI might be reviewable under the EU Merger Regulation. On the European side of the supply chain, Microsoft's investment in the French startup Mistral AI raised concerns as part of the European Commission's ongoing investigation into competition within the AI market. Meanwhile, in April 2024, Mistral released its open-source frontier model, Mixtral 8x22B, free for anyone to download and build upon.
Examination of the Supply Chain in France
On February 7, 2024, the French Autorité de la Concurrence opened an ex officio inquiry into the generative artificial intelligence sector and launched a public consultation. Several points from the Consultative Input submitted to this consultation by the Center for AI and Digital Policy (CAIDP) are particularly relevant and quoted below.
Foundation models are resource intensive. The resources needed to train and develop foundation models include significant computing power, large amounts of high-quality data and a highly-skilled workforce.
The high computing power required is one of the reasons smaller companies cannot easily enter the generative AI sector.
The data collected for training foundation models comes at the cost of individual privacy and autonomy over the data. The CNIL action plan on artificial intelligence states that protection of personal data is a major challenge for the design and use of these tools.
In a complaint filed with the U.S. Federal Trade Commission (FTC), CAIDP warned of the specific privacy risks of generative AI products such as ChatGPT. ChatGPT's multi-modal capabilities pose a significant threat to public safety. OpenAI itself has acknowledged that "GPT-4 has the potential to be used to attempt to identify private individuals when augmented with outside data."

Access to high-quality data and resources can be considered a barrier to entry or expansion for providing generative AI services. Leaner foundation models that require less data and computing power can enable smaller companies or individuals with limited resources to participate in the AI market. By reducing the reliance on vast amounts of data and expensive computing infrastructure, these leaner models can promote access to AI technology and foster innovation among new entrants.
CAIDP’s recommendations for the French Autorité de la Concurrence were the following:
- The entire lifecycle of generative AI needs to be governed through a transparent monitoring system that is periodically reviewed and updated. Monitoring metrics should be based on human-centric metrics, and must include impact assessments towards human safety and risk minimization. The results of such assessments need to be published.
- Rigorous documentation and disclosure of training set data are required.
- Within human rights impact assessments, any red flags (public safety or civil rights risks) must be highlighted so that non-AI-based systems remain a viable alternative.
- Implementation of independent third-party audits of the system.
According to the French Competition Authority's Roadmap for 2024-2025, the Autorité will pay close attention to the competition concerns raised by AI, particularly the risk of the biggest digital players being able to control access to the resources needed to deploy it. The first step will be the publication, before the summer of 2024, of an opinion on the competitive situation in the generative artificial intelligence sector in France.
Competition and Markets Authority (CMA) on AI Foundation Models
Meanwhile, on April 11, 2024, the UK's Competition and Markets Authority (CMA) published its latest report on AI Foundation Models (FMs), which summarizes the sector's evolution since September 2023 and proposes new principles to address competition concerns. The report sets out risks to competition and consumer protection, including three key risks to fair, open, and effective competition that the CMA sees arising from current and potential developments in the FM sector:
- firms controlling critical inputs for developing FMs may restrict access to shield themselves from competition;
- powerful incumbents could exploit their positions in consumer or business facing markets to distort choice in FM services and restrict competition in deployment;
- partnerships involving key players could exacerbate existing positions of market power through the value chain.
Sarah Cardell, CEO of the CMA, said: “When we started this work, we were curious. Now, with a deeper understanding and having watched developments very closely, we have real concerns.”
“Our strongest concerns arise from the fact that a small number of the largest incumbent technology firms, who already have market power in many of today’s most important digital markets, could profoundly shape the development of FM related markets to the detriment of fair, open and effective competition and ultimately harm businesses and consumers, for example by reducing choice and quality, and by raising prices.”
Competition regulators thus seem to be preparing for possible enforcement actions. It remains to be seen how the factors driving competition in the generative AI sector will unfold.

