Interview with the leader of the European Trade Union Confederation about artificial intelligence and work.
Esther Lynch is the General Secretary of the European Trade Union Confederation (ETUC).
Seden Anlar is a Brussels-based journalist, podcast host/producer, moderator, and political communications specialist.
Cross-posted from Green European Journal

Seden Anlar: The shift towards Artificial Intelligence in the workplace has sparked intense debate. Some view AI as a natural progression, much like previous waves of automation, arguing that it will create new opportunities, boost productivity, and ultimately benefit workers. However, others warn that it could fundamentally change the nature of work, leading to greater job insecurity, loss of autonomy, and new forms of workplace surveillance. Where do you think the reality lies between these opposing perspectives?
Esther Lynch: You’re absolutely right – change has always shaped the world of work, and every new technology has brought both opportunities and challenges. AI is no exception.
When AI is used as a tool to support workers, it can be incredibly beneficial. In healthcare, for example, AI is highly effective at detecting early signs of cancer. In legal work, it can quickly sift through documents to identify key phrases. It’s also excellent at collecting and organising large amounts of information. When workers are the ones controlling AI and using it to assist their jobs, they report clear benefits.
The problems arise when AI takes control. We’ve seen cases where it determines job assignments, sets work speeds, or even influences hiring and firing decisions. In some instances, AI-driven systems have reinforced discrimination – against women, against workers based on their location, and in ways that mirror older biases but in an automated form. These biases might not be intentional, but they’re not accidental either. If we know a system can cause harm and fail to prevent it, that’s a failure of responsibility. Any time AI takes decision-making power away from workers, the outcomes are negative.
Another major shift we are seeing is AI taking over the easier, more routine parts of jobs – including some creative aspects – while leaving workers with the most difficult and emotionally taxing tasks. Take customer service: before, a worker’s day included a mix of responsibilities, such as answering basic queries, filling out reports, and occasionally handling a difficult customer. Now, AI handles routine tasks, leaving workers to spend their entire shift dealing with frustrated customers who have already been let down by an automated system. This increases emotional strain, and it’s no surprise that stress and burnout are rising.
So, where does the truth lie? AI can absolutely be a force for good when it empowers workers. But when it is used to control them – whether through algorithmic decision-making, surveillance, or by shifting all the hardest work onto human employees – it becomes a serious problem. While the impact varies from workplace to workplace, these overarching trends are shaping the future of work across Europe.
Discussions about AI’s impact on work often focus on jobs, productivity, and efficiency – whether AI will create new opportunities or lead to job losses. But we rarely talk about how AI will affect workers on a daily basis. The same happened with automation and digitalisation: economic impact took centre stage while the lived experience of workers was sidelined. How do you see AI reshaping fundamental aspects of work, such as autonomy, dignity, and workplace relationships?
One of the biggest concerns workers have about AI-driven surveillance is not knowing when they’re being monitored, what exactly is being measured, or what the consequences of that monitoring might be.
Take AI’s ability to analyse emotions by tracking facial expressions or body movements. In some cases, this could be helpful – for example, in healthcare settings where workers, particularly those in emergency rooms, face physical threats. AI could act as an early-warning system, alerting security staff to intervene before an incident escalates. But critically, it should always be a human making the decisions, never AI.
On the other hand, emotion-tracking AI can be deeply problematic when it’s used to police workers’ attitudes and behaviour. Imagine being monitored every second to ensure you are smiling enough while checking in hotel guests or serving customers. And not just any smile – a genuine one, because AI can distinguish between real and fake emotions. This kind of surveillance forces workers to suppress their natural reactions, creating a work environment where they must “deepfake” their emotions just to keep their jobs.
We shouldn’t underestimate the damage this does to human relationships at work. Work is not just about productivity; it’s about people coming together, interacting, and being able to show up as themselves. Of course, professionalism matters in any job, but workers also need space to be human. If someone has just suffered a bereavement, for instance, their colleagues will naturally understand that they’re not going to be cheerful. Artificial Intelligence, however, doesn’t have that nuance. It reduces everything to data points and expectations, stripping away the flexibility and understanding that comes with human interactions.
There is also a broader issue here: some professions should never be handed over to AI. Journalism, the justice system, and healthcare, for instance – these fields require human judgment, ethical reasoning, and accountability. No one should be forced to make life-altering decisions about their career, health, or legal rights by interacting solely with an algorithm. AI should be a tool to support workers, not a barrier between people and essential services.
The heart of the debate isn’t whether AI is good or bad, but how it is used. Who’s in control? What is being measured, and why? And what happens to that data? Instead of being used to control and monitor workers, AI should be designed to make their jobs easier and their workdays better. That’s why we need strong regulations to protect workers’ dignity – such as ensuring that emotion tracking isn’t allowed in the workplace because it’s simply too invasive. The AI Act has taken some steps in the right direction, but those protections need to be much stronger.
This isn’t about being anti-technology or anti-science. It’s about making sure AI serves workers rather than making work more dehumanising and stressful.
Beyond individual mental health, there’s also the question of human relationships at work. AI is often seen as something that removes human interaction, but in many workplaces, those interactions have already become highly transactional – whether it’s cashiers working under pressure to meet efficiency targets or call centre workers handling scripted conversations all day. Will AI further reduce human relationships at work?
I think the importance of human relationships is only going to become more obvious. I was on a flight recently, reading a magazine about exclusive gold cards for the ultra-wealthy, and one of the services they offered was the ability to speak to a real person when dealing with problems. That in itself shows how valuable human interaction is, as it’s already being marketed as a luxury. But our view [at the European Trade Union Confederation, ETUC] is that this shouldn’t be a privilege reserved for the wealthy. Everyone should have the right to speak to a human when they need support.
At the same time, the workers providing these services should receive their due value – not just in terms of recognition, but in their wages and working conditions. Human labour needs to be properly acknowledged and rewarded. A major concern in this transformation is that all the financial benefits of AI will end up concentrated at the top while workers are left with increased psychological burdens and little to no recognition for the emotional labour they perform. That’s why, as trade unions, we’re focused on ensuring that profits from AI and automation are fairly distributed.
It would be a huge mistake if all the gains went to a handful of CEOs and major shareholders while workers saw none of the rewards. That’s not just about wages – it’s also about taxation. This technological revolution should benefit society as a whole, not just a select few. We need policies that ensure wealth generated from AI is distributed justly so that workers receive their fair share of profits and public services are properly funded.
We’ve seen too often how technological change benefits those at the very top while leaving everyone else to deal with the fallout. That cannot be allowed to happen again. Part of valuing human labour means paying people fairly, ensuring reasonable working conditions – including adequate breaks for jobs with high psychological stress – and making sure that the tax system supports a fairer distribution of wealth.
Even though we’re talking about AI as a futuristic, cutting-edge technology, so much of what you’re describing – like wealth distribution and the question of who benefits from these changes – feels very familiar. How does the current AI hype reflect historical power dynamics between workers and employers? Are we simply seeing a continuation of past trends, or do you think AI is introducing something fundamentally new in this relationship?
AI is already being used in troubling ways, particularly to target workers who are likely to join or form trade unions. We’ve seen cases where employers use AI-driven systems to identify and take action against these workers. A striking example was when delivery workers organised a protest. The digital systems tracking them – whether AI or another form of surveillance – immediately detected their activity, and as a result, many of them were cut off from their ability to make a living.
Unfortunately, this isn’t surprising. Some employers have always sought to prevent workers from organising, and now they see Artificial Intelligence as another tool to achieve that goal. But what’s even more concerning is how AI can go beyond tracking workers’ activities: it can also predict and interpret information in ways that put workers at further risk. Employers may soon have AI systems that predict when someone is likely to get pregnant, take parental leave, or even develop a health condition. That could allow unscrupulous employers to dismiss workers before these events occur, reinforcing old patterns of discrimination against women, parents, and vulnerable employees.
This is why we urgently need legal protections. Existing EU directives that safeguard workers’ rights are at risk of being undermined by AI-driven management tools. We need a directive that explicitly prevents AI from being used to weaken those protections. But we also need new rights – particularly around workplace surveillance, monitoring, and the governance of AI. Every worker should have a say in how AI is used in their workplace, and the only way to ensure that without fear of retaliation is through collective bargaining and trade unions.
What worries us [at ETUC] is the lack of what we call “moral imagination” among lawmakers. We need policymakers to think ahead about the ways AI can be misused in the workplace and act now to prevent harm. We shouldn’t wait until thousands of workers have been exploited, discriminated against, or had their dignity stripped away. Instead, we need clear, simple laws prohibiting AI from being used for specific harmful purposes. Artificial Intelligence should not be a tool for discrimination or exploitation; it should be regulated to ensure it supports workers, not undermines them.
Your use of the term “moral imagination” made me think of another issue you’ve touched on – accountability – which I believe is missing from this discussion.
Part of the problem might be the nature of AI itself. It’s not a person or a tangible entity, but a machine. So, when decisions like hiring and firing are made by AI, it becomes harder to pinpoint responsibility.
That’s exactly the case. This is why we emphasise the “human-in-control” principle. This principle serves two key purposes.
First – and most importantly – it ensures that workers are never under the control of AI. Artificial Intelligence should be a tool that workers use, not something that dictates their actions or decisions. We’ve already covered that point, but the second part of this principle is just as crucial: there must always be a human who can be held responsible for what AI does.
We cannot allow a situation where employers shrug off responsibility by saying, “Oh, that’s unfortunate, but it was the AI’s fault, not mine.” That’s completely unacceptable. Employers must be accountable for the systems they use.
I can imagine a scenario where a union official is representing a worker, and the AI system has experienced what’s known as a hallucination – producing false or misleading information. As a result, the worker may have completed a series of tasks based on incorrect AI-generated data. But unless we change the law, the only person held accountable in that situation will be the worker, not the AI system or those responsible for implementing it.
We need to make sure that AI doesn’t operate without accountability, leaving workers to bear the consequences of its mistakes. A worker might not have received all the necessary information, might have been given inaccurate data, or might have been assigned impossible targets due to an AI error. It’s critical that workplace policies make it clear that AI hallucinations cannot be used as a justification for punishing workers.
Let’s expand on AI hallucinations. As we discussed, AI is often presented as completely objective – almost infallible – because it’s a machine. But time and time again, reports and news articles reveal that the data AI relies on is inherently biased and leads to discrimination. How do you think this bias is shaping workplace decisions, and what are the most urgent challenges that need to be addressed?
One thing to note is that AI doesn’t just rely on flawed data – it also has a tendency to invent things out of nowhere. In addition to drawing from existing sources, it generates its own conclusions, which don’t always align with reality. While it has been designed to create new ideas, the way it does so isn’t always sufficiently grounded in real-world understanding. More importantly, I question whether these systems have been developed with a real awareness of human relationships and their significance.
Another key issue is privacy. Workers care deeply about their privacy. It’s not just a concern for the wealthy. When we talk about privacy, we’re talking about workers’ personal data – their information and digital footprint. If a worker wears a fitness device, for example, that data belongs to them, not their employer.
We’re particularly concerned about the pressure placed on workers to share personal data as part of their jobs, often without any transparency about how that data will be used or who it will be sold to. Many workers are asked to sign away their data rights on their first day, through clauses buried somewhere in their contract. But let’s be honest: most people don’t read every clause in a job contract, and even if they do, they often don’t have the bargaining power to object. It’s difficult to negotiate a specific clause about data use when you’re just trying to secure a job.
That’s why it’s absolutely essential that workers, through their trade unions, have the power to protect themselves from these unfair practices. No employer should be able to collect and use a worker’s data without clear, informed consent – and even then, there must be safeguards in place. Individual consent alone isn’t enough. There needs to be a collective agreement to ensure that data isn’t misused. Without that balance of power, employers can gather and sell workers’ data for purposes that have nothing to do with the workplace, with no transparency about where it goes or how it’s used.
And it’s not just data – workers’ images and identities also need to be protected. AI-driven surveillance, such as facial recognition and biometric tracking, is increasingly being used in workplaces, often without workers’ explicit consent. In some cases, AI even generates content based on real workers’ images without their knowledge, further raising concerns about privacy and exploitation. It’s completely unacceptable for AI to exploit a worker’s image without their consent. The same respect and dignity that the elites demand for their privacy should also apply to working people.
AI doesn’t inherently understand privacy. That means it is up to us, as the ones in control, to set those boundaries – but to do so in a way that is fair and applies equally to everyone. Do we need to start thinking about new rights that specifically protect human interaction and decision-making in an AI-driven world? Can we look to the past for lessons on how to approach this?
This is a discussion we urgently need to have because, honestly, I don’t know where all the answers lie. But one of the key questions we need to ask is: Do we have a right to human care?
For example, if you’re sick, do you have the right to be cared for by a human? Do we have a right to human-led education? More broadly, what rights should exist to protect human relationships in an era increasingly shaped by AI? In 1948, people came together to create the Universal Declaration of Human Rights. Now, we need to start thinking about what additional human rights are necessary to safeguard human interaction – things like the right to be yourself and, as I mentioned earlier, the right to have key decisions made by a human when they have a material impact on your life. No AI system should ever have the final say in such cases.
This discussion can’t just be left to experts – it needs to include the people who are directly affected.
Strong regulations could, in an ideal world, address many of the risks we’ve discussed. But how is AI in the workplace currently regulated in the EU? You mentioned the EU AI Act – can you explain what it covers and where you see gaps?
I feel the need to defend the EU AI Act because it has come under enormous pressure from tech entrepreneurs. I strongly support the AI Act, but that doesn’t mean we don’t also need a separate AI at Work directive; we absolutely do.
This directive doesn’t need to be long, but it must be clear. It should guarantee the human-in-control principle, ensuring AI never replaces human decision-making in critical areas. Much like the existing EU Framework Directive on Health and Safety at Work, which states that all workplaces must be safe, AI should be integrated into that framework – setting clear standards for what “safe” means in an AI-powered workplace.
We also need to address the psychological and social risks of AI. Issues like work intensification, stress, injuries caused by AI-driven management systems, and increasing worker isolation should be recognised as psychosocial risks. That’s why we’re also advocating for a directive that explicitly addresses psychosocial risks at work, with AI being a key part of that discussion.
Finally, trade unions must have the right to bargain and access information about how AI is being used in workplaces. This is crucial for managing technological change, ensuring that AI doesn’t lead to mass layoffs or redundancies, but instead supports workers through training and skills development. A strong directive should protect workers and invest in them while ensuring AI-driven changes do not have negative impacts on jobs and working conditions.
Within this contested debate, one of the key arguments being used right now is competitiveness. The EU recently released its “Competitiveness Compass”, which ETUC declined to endorse because “it undermines jobs, rights, and standards”.
How does the current discourse on competitiveness – and the push for less regulation – intersect with AI and workers’ rights? If deregulation is being framed as a way to make the EU more competitive, where and how do we draw the line when it comes to protecting the rights we’ve been discussing?
We’re not convinced that workers’ rights in Europe are the reason companies are struggling to advance, innovate, or compete. Workers and their rights are not holding businesses back.
The obsession with deregulating workers’ rights is a distraction from the real problems. The real issue is the lack of investment. Time and time again, instead of reinvesting in innovation, skills, and development, we’ve seen company profits go towards stock buybacks, dividend payments, and inflated executive pay. What we need is a clear industrial policy – a deliberate investment strategy that supports both businesses and workers. But that investment must come with conditions.
Those conditions should ensure that workers are involved in managing workplace changes, that they have a say in decision-making, that they receive proper training, and that transitions are handled responsibly. This is the discussion we need to be having: How do we make European businesses competitive while ensuring workers are protected and included in shaping the future of work?
We do not believe that equal pay, fair wages, or paid holidays are obstacles to competitiveness. In fact, we think the opposite. Many companies choose to base themselves in Europe precisely because of our strong social protections, fair justice system, and social security networks that contribute to societal stability. These are strengths, not weaknesses.
Frankly, we’re disappointed that much of the business community in Europe isn’t speaking up for workers or highlighting Europe’s advantages as a competitive and fair place to do business. Instead of advocating for a race to the bottom – cutting wages and stripping away rights – we should be building a competitiveness model based on high standards, social dialogue, and shared prosperity.
Europe can absolutely compete while protecting a strong foundation of rights and standards. And it can compete more effectively by fostering social dialogue – moving forward together rather than adopting a winner-takes-all approach, where only a handful of companies succeed while all the benefits go to a tiny elite at the top.
At ETUC, you’re working on AI and the future of work more than ever. You’ve already mentioned collective bargaining, but how are trade unions approaching this issue? What are your key focus areas – both at the EU level, given your role, and at the national level?
At the company, sector, national, European, and even international level, we have a clear set of demands. First and foremost, the decision to introduce AI in the workplace must be discussed and negotiated with the workforce. How AI is used must also be negotiated, with sufficient safeguards in place. Profits generated from AI must be fairly shared with workers, and the importance of human dignity at work must be recognised and protected through collective agreements. These are comprehensive packages that cover every stage – from before AI is introduced in the workplace to how it is implemented and managed.
One of the biggest advantages for workers who are part of our affiliates is that we bring unions from different sectors together to share ideas and strategies. We ask: “What’s in your agreement? How are you providing the necessary training? How are you preventing work from becoming overly monotonous or stressful?”
Trade unions are about collaboration, solidarity, and standing up for each other – not just within individual occupations but across different sectors. That’s at the heart of trade unionism. No matter what challenges AI presents, we believe that by empowering workers and standing together, we can shape a fairer workplace and ensure AI is used in a way that benefits workers, not just employers. And by regulating AI in the workplace, we’re also helping to set standards that benefit society as a whole.