The EU is developing a Code of Practice to govern general-purpose AI as part of the implementation of the AI Act. But Big Tech has heavily influenced the process, successfully weakening the Code.
Corporate Europe Observatory (CEO) is a research and campaign group working to expose and challenge the privileged access and influence enjoyed by corporations and their lobby groups in EU policy making.
Cross-posted from Corporate Europe Observatory

“The current draft,” Meta wrote in a confidential lobby paper, is a case of “regulatory overreach” that “poses a significant threat to AI innovation in the EU.”
It was early 2025, and the text Meta railed against was the second draft of the EU’s Code of Practice. The Code will put the EU’s AI Act into operation by outlining voluntary requirements for general-purpose AI, or models with many different societal applications (see Box 1).
Meta’s lobby message hit the right notes, as the second von der Leyen Commission has committed to slashing regulations to stimulate European ‘competitiveness’. An early casualty of this deregulatory drive was the EU’s AI Liability Directive, which would have allowed consumers to claim compensation for harms caused by AI.
And the Code may end up being another casualty. Meta’s top lobbyist said they would not sign unless there were significant changes. Google cast doubt on its participation.
But as this investigation by Corporate Europe Observatory and LobbyControl – based on insider interviews and analysis of lobby papers – reveals, Big Tech enjoyed structural advantages from early on in the process and, playing its cards well, successfully lobbied for a much weaker Code than could have been achieved. That means weaker protection from the structural biases and social harms AI can cause.
Box 1: Big Tech’s long fight against regulating general-purpose AI
The launch of ChatGPT in November 2022 put the issue of regulating general-purpose AI systems squarely on the political agenda.
General-purpose AI – for example ChatGPT – can be used for a wide range of purposes. The systems are often complex and behave in ways that can surprise even their developers. General-purpose AI is trained on societal data and if the data carry structural biases – from racism to ableism and more – these risks are baked into the systems. There are also serious questions, and ongoing lawsuits, over the alleged violation of copyright laws in the development of these large AI models.
Moreover, Big Tech companies have used their vast resources to monopolize the AI market, raising concerns about further market concentration.
Not surprisingly, Big Tech firms have been fighting tooth and nail against attempts to regulate the development of the most advanced AI models. As Corporate Europe Observatory, LobbyControl, and Observatoire des Multinationales have previously documented, tech companies have been successful in watering down key obligations in the AI Act for general-purpose AI providers. In December 2023, during a 36-hour marathon negotiation between the European Parliament, the EU Commission, and the Member States, most binding obligations on general-purpose AI were severely watered down; instead, important guardrails against fundamental rights violations and copyright infringement were left to be operationalised through a voluntary Code of Practice.
Potemkin participation: how civil society was sidelined
In a private meeting with the Commission in January 2025, Google “raised concerns about the process” of drafting the Code of Practice. The tech giant complained “model developers [were] heavily outweighed by other stakeholders”.
Only a superficial reading could support this. More than 1,000 stakeholders registered their interest in participating with the EU’s AI Office, a newly created unit within the European Commission’s DG CNECT. Nearly four hundred organisations were approved.
But tech companies enjoyed far more access than others. Model providers – companies developing the large AI models the Code is expected to regulate – were invited to dedicated workshops with the working group chairs.
“This could be seen as a compromise,” Jimmy Farrell of the European think tank Pour Demain said. “On the one hand, they included civil society, which the AI Act did not make mandatory. On the other, they gave model providers direct access.”
Fifteen US companies, or nearly half of the total, were on the reported list of organisations invited to the model providers workshops. Among them, US tech giants Google, Microsoft, Meta, Apple, and Amazon.
Others included AI “start-ups” with multi-billion-dollar valuations such as OpenAI, Anthropic, and Hugging Face, each of which receives Big Tech funding. Another, Softbank, is OpenAI’s lead partner for the US$500 billion Stargate investment fund.
Several European AI providers, which lobbied over the AI Act, were also involved. Some of these also partner with American tech firms, like the French Mistral AI or the Finnish SiloAI.
The participation of the other 350 organisations – which include rights advocates, civil society organisations, representatives of European corporations and SMEs, and academics – was more restricted. They had no access to the provider workshops, and despite a commitment to do so, sources said meeting minutes from the model providers workshops were not distributed to participants.
It put civil society, which participated in working group meetings and crowded plenaries, at a disadvantage. Opportunities for interaction during meetings were limited. Questions needed to be submitted beforehand through a platform called SLIDO, which others could then up-vote.
Normally, the AI Office would consider the top ten questions during meetings, although sources told us, “controversial questions would sometimes be side-stepped”. Participants could neither submit comments during meetings, nor unmute themselves.
“There is no chat function; the only option is reacting through emojis to express support or disapproval of comments from other speakers,” said Dinah van der Geest, Digital Programme Operations Manager at Article 19, a fundamental rights organisation actively involved in the discussions.
Stakeholders have regularly stressed the importance of a strong Code of Practice to protect fundamental rights, and the copyright of publishers. In an open letter to Vice-President Virkkunen, civil society organisations have raised concerns about making any measures to prevent fundamental rights abuses, child sexual abuse material and privacy violations voluntary in nature in the third draft of the Code of Practice.
The organisations also protested against shifting responsibility for assessment solely to downstream AI system providers – a long-standing lobby aim of Big Tech companies.
Reporters Without Borders raised concerns about the risks posed to the right to reliable information by AI-generated deepfakes, propaganda, and the systematic production of inaccurate information by large language models.
“You want to find allies,” commented Nicole Pfister Fetz, Secretary General of the European Writers Council, “but if you don’t know who is part of the group, you cannot coordinate.”
In the absence of a full list of individual participants, which she had requested but not received, Pfister Fetz would “write down every name she saw on the screen” and look people up afterwards, “to see if they were like-minded or not.”
Participants were given little time to review and comment on draft documents. Deadlines to apply for a speaking slot to discuss a document would pass before the document had even been shared. The third draft of the Code was delayed for nearly a month, without any communication from the AI Office, until one day it landed unannounced in participants’ mailboxes.
A private complaint to Commissioner Virkkunen, filed by the EWC and seen by Corporate Europe Observatory and LobbyControl, called it a “rushed procedure [that] is highly inappropriate”.
For civil society organisations, who lacked the resources of large tech companies and often relied on one or two people to follow the entire process, this created a heavy burden.
“Big Tech firms have more people on their policy team than all civil society organisations have combined,” said a source at a non-profit focusing on AI governance, who requested anonymity to speak freely about the content of the discussions on the Code. “CSOs have felt drowned out of the process.”
A long-standing demand from civil society was a dedicated civil society workshop. It was only after the third, severely watered-down draft of the Code of Practice that such a workshop took place.
“They had many workshops with model providers, and only one at the end with civil society, when they told us there would only be minor changes possible,” van der Geest, the fundamental rights advocate, said. “It really shows how they see civil society input: as secondary at best.”
Partnering with Big Tech and the AI Office: a conflict of interest?
A contract to support the AI Office in drafting the Code of Practice was awarded, under an existing framework contract, to a consortium of external consultants – Wavestone, Intellera, and the Centre for European Policy Studies (CEPS).
It was previously reported that the lead partner, the French firm Wavestone, advised companies on AI Act compliance, but “does not have [general purpose AI] model providers among its clients”.
But our investigation revealed that the consultants do have ties to model providers.
In 2023 Wavestone announced it had been “selected by Microsoft to support the deployment and accelerated adoption of Microsoft 365 Copilot as a generative artificial intelligence tool in French companies.”
This resulted in Wavestone receiving a “Microsoft Partner of the Year Award” at the end of 2024, when it already supported the AI Office in developing the Code. The consultancy also worked with Google Cloud and is an AWS partner.
The other consortium partners also had ties to GPAI model providers. The Italian consultancy Intellera was bought in April 2024 by Accenture and is now “Part of Accenture Group”. Accenture boasted at the start of 2025 that it was “a key partner” to a range of technology providers, including Amazon, Google, IBM, Microsoft, and NVIDIA – in other words, US general-purpose model providers.
The third and final consortium partner, CEPS, counted all the Big Tech firms among its corporate members – including Apple, AWS, Google, Meta, and Microsoft. At a rate of between €15,000 and €30,000 (plus VAT) per year, members get “access to task forces” on EU policy and “input on CEPS research priorities”.
The problem is that these consultancy firms can hardly be expected to advise the Commission to take action that would negatively impact their own clients. The EU Financial Regulation states that the Commission should therefore reject a contractor where a conflicting interest “can affect or risk the capacity to perform the contract in an independent, impartial and objective manner”.
The 2022 framework contract under which the consortium was initially hired by the European Commission also stipulated that “a contractor must take all the necessary measures to prevent any situation of conflict of interest.”
Moreover, when requesting bids for the contract to support the AI Office, the Commission specifically asked the contractors to indicate “whether you have a conflict of interest”.
Christoph Demmke, Professor in Public Management at the University of Vaasa, told CEO and LobbyControl that the Commission has wide discretion to hire external consultants to support the AI Office and that having ties “does not necessarily and automatically mean they have a conflict of interest.”
Nevertheless, Prof. Demmke observed, “the ties of these contractors to US technology firms with an interest in the Code of Practice could compromise their own consultancy interests.”
While “not illegal, this may be problematic from a political, democratic or representative point of view,” he added.
Insiders who dealt with the consortium, however, indicated that the contractors handled stakeholder management, external communications, and the preparation of meetings by collecting questions and input. The consortium would, sources said, also review or summarise written input received from stakeholders that fed into the drafting of the Code.
Prof. Demmke, who co-authored a report on conflict-of-interest policies for the European Parliament, concluded it would be important to know whether the consortium declared their ties to US GPAI model providers to the Commission, and whether the Commission deemed this acceptable or if they independently verified the absence of a potential conflict of interest.
CEO and LobbyControl contacted the European Commission and the EU AI Office for comment, but by the time of publication no reply had been received. We also contacted the three organisations in the consortium. Wavestone would not say which specific tasks the consortium fulfilled for the AI Office, stating it had “a right to privacy with regard to our customers,” and did not answer questions about conflicting interests. Intellera and CEPS did not respond to a request for comment.
Is discrimination risky business?
On key issues, the messaging of the US tech firms was well coordinated. Confidential lobby papers by Microsoft and Google, submitted to EU members states and seen by Corporate Europe Observatory and LobbyControl, echoed what Meta said publicly – that the Code’s requirements “go beyond the scope of the AI Act” and would “undermine” or “stifle” innovation.
It was a position carefully crafted to match the political focus on deregulation.
“The current Commission is trying to be innovation- and business-friendly, but is actually disproportionately benefiting Big Tech,” said Risto Uuk, Head of EU Policy and Research at the Future of Life Institute.
Uuk, who curates a biweekly newsletter on the EU AI Act, added that “there is also a lot of pressure on the EU from the Trump administration not to enforce regulation.”
At an AI Summit in Paris, US Vice President JD Vance warned the EU’s regulation could “kill” the technology.
Big Tech has successfully weaponised the US government to aggressively push back against EU digital rules. In February, the Trump administration issued an executive order threatening to impose tariffs on foreign governments in response to taxes or fines imposed on Big Tech companies. At the end of April, Bloomberg reported that the US government’s Mission to the EU put additional pressure on the EU Commission in a letter calling the Code of Practice excessively burdensome, and even pushed for pausing the implementation of the AI Act.
One of the most contentious topics has been the risk taxonomy. This determines the risks model providers will need to test for and mitigate. The second draft of the Code introduced a split between “systemic risks,” such as nuclear risks or a loss of human oversight, and a much weaker category of “additional risks for consideration”.
“Providers are mandated to identify and mitigate systemic risks,” Article 19’s Dinah van der Geest said, “but the second tier, including risks to fundamental rights, democracy, or the environment, is optional for providers to follow.”
These risks are far from hypothetical. From Israeli mass surveillance and killing of Palestinians in Gaza and the dissemination of disinformation during elections, including by far-right groups and foreign governments, to the massive lay-offs of US federal government employees, generative AI is already being used in countless problematic ways. In Europe, investigative journalism has exposed the widespread use of biased AI systems in welfare systems.
The introduction of a hierarchy in the risk taxonomy offered additional lobby opportunities. Both Google and Microsoft argued that “large-scale, illegal discrimination” needed to be bumped down to optional risks.
Interestingly, they did so in identical wording. It shows that Google and Microsoft, which often present themselves as competitors, coordinated their positions on minute details of the Code.

The tech giants got their way: in the third draft, large-scale, illegal discrimination was removed from the list of systemic risks, which are mandatory to check for, and categorised under “other types of risk for potential consideration”.
Like other fundamental rights violations, it now only needs to be checked for if “it can be reasonably foreseen” and if the risk is “specific to the high-impact capabilities” of the model.
“But what is foreseeable?” asked Article 19’s Dinah van der Geest. “It will be left up to the model providers to decide.”
This threatens to leave a large loophole for Big Tech to exploit, a prospect about which eight current and former MEPs have expressed “great concern” to Commission Vice-President Virkkunen.
Copyright: the “make or break” chapter
A document from Business Europe, submitted to the Swedish ministry responsible for AI policy, said that “copyright is the ‘make or break’ chapter for the success of the Code as this touches upon the heart of the data collection practices that power these models”.
Insiders confirmed the sensitivity of copyright in the discussions.
“For external evaluation or risk mitigation, we have seen pushback from the tech industry,” said the non-profit representative working on AI governance, “but it is nothing compared to what we have seen on copyright.”
Tech companies baulked at the copyright measures proposed in the first and second draft of the Code. Of particular concern were measures to undertake “copyright due diligence” on third party datasets and “ensure lawful access to copyright-protected content”.
According to Google and Microsoft’s lobby papers, the measures would require them to “demonstrate proof of compliance” and “reverse the burden of proof in relation to copyright law”.
The third draft of the Code concedes to many of the tech firms’ copyright demands. The requirement to conduct copyright due diligence on third-party datasets becomes “make reasonable efforts to obtain adequate information”. A prohibition on “copyright infringing uses of the model” becomes “make reasonable efforts to mitigate the risks of production of copyright-infringing output”.
It fits a broader pattern. From the second to the third draft of the Code, obligations changed to encouragements. Prohibitions became suggestions. Best efforts, reasonable efforts.
But creators are concerned the Code will give ample room for Big Tech companies to run circles around copyright legislation.
“The basic principle is that an author can decide to give, or not give, their work for use,” said Nicole Pfister Fetz of the EWC. “But they used our works, they never asked, and now we have to prove they used it.”
Rights-holders often have no way of knowing whether their work was used to train AI models, and the Code’s third draft weakened commitments to publish information on compliance with rights reservations.
“If we don’t get full title-by-title transparency on which specific copyrighted work was used to train AI models,” added Nina George, President of Honour of the EWC, “we can neither file complaints nor check if opt-outs are respected.”
A tool launched by the American magazine The Atlantic shows that at least two books by Corporate Europe Observatory have been used to train Meta’s AI model. The books are part of LibGen, a database containing 7.5 million pirated books, which Meta has controversially used to train its Llama 3 AI model. According to the NGO Open Future, however, the third draft would leave many kinds of data collection practices, including the downloading of pirated books, outside the scope of the Code of Practice.
Towards adoption
The AI Act states a Code of Practice needs to be adopted by 2 May 2025. Participants submitted final comments at the end of March, and chairs are now working out a final version behind closed doors.
In the end, both the Commission and the AI Board, made up of representatives from Member States, will need to approve the Code.
There are concerns Big Tech will “turn up the lobby dial to eleven” in the final stretch of negotiations. And as model providers, they hold a lot of the cards.
“If they don’t sign the Code, it’s going to be a joke, a massive waste of resources,” a civil society representative said. “So, they don’t want to give them everything – but enough to get them to sign.”
While some civil society organisations, like Reporters Sans Frontières, decided to walk out of the process, others were afraid this would only lead to a weaker Code.
“If we keep engaging in the process, they get to rubber stamp it,” a civil society representative who requested anonymity said, “but there is a real risk that the bare minimums are lowered if we’re not there. It’s a tricky situation.”
Big Tech rides the deregulation wave
At the AI Action Summit in Paris in February 2025, European Commission President Ursula von der Leyen had clearly drunk the AI Kool-Aid: “We want Europe to be one of the leading AI continents. And this means embracing a way of life where AI is everywhere.” She went on to paint AI as a silver bullet for almost every societal problem: “AI can help us boost our competitiveness, protect our security, shore up public health, and make access to knowledge and information more democratic.”
The AI Action Summit marked a distinct shift in the Commission’s discourse. Where previously the Commission paid at least lip service to safeguarding fundamental rights when rolling out AI, it has now largely abandoned that discourse, talking instead about winning “the global race for AI”.
At the same summit, Henna Virkkunen, the Commissioner for Tech Sovereignty, was quick to parrot von der Leyen’s message, announcing that the AI Act would be implemented in an ‘innovation-friendly’ way, and, after criticism from Meta and Google a week earlier, promising that the Code of Practice would not create “any extra burden”.
Big Tech companies have quickly caught on to the new deregulatory wind in Brussels. They have ramped up their already massive lobbying budgets and rehearsed their talking points about Europe’s ‘competitiveness’ and ‘over-regulation’.
The Code of Practice on General-Purpose AI looks to be just one of the first casualties of this deregulatory offensive. With key rules on AI, data protection, and privacy up for review this year, the main beneficiaries are poised to be the corporate interests with endless lobbying resources.
At the same time, the close ties between Big Tech and the Trump administration show more than ever the clear and acute danger of unchecked corporate power.
Big Tech cannot be treated as just another stakeholder. The Commission should safeguard the public interest from Big Tech influence. Instead of beating the deregulation drum, it should stand firm against the tech industry’s agenda and guarantee the protection of fundamental rights through an effective Code of Practice.