Anton Ekker is the lawyer for British Uber and Ola Cab drivers who, via the Worker Info Exchange, are pursuing their data rights at work at an appeal court in Amsterdam on 18 May. Before the trial, Ekker spoke to the Gig Economy Project about the legal fight for workers’ data rights.
The Gig Economy Project was initiated by BRAVE NEW EUROPE, enabling us to provide analysis, updates, ideas, and reports from all across Europe on the Gig Economy. If you have information or ideas to share, please contact Ben at GEP@Braveneweurope.com.
This series of articles concerning the Gig Economy in Europe is made possible thanks to the generous support of the Andrew Wainwright Reform Trust.
EVERY worker whose labour is managed via a digital application has data collected on them. In private hire platforms like Uber, thousands of data-points are collected on drivers, from how fast they drive to whether they are also working for a competitor.
This data is hugely valuable for Uber, as it informs every decision the company makes about its operations. But for drivers, what does it mean when your boss gathers more information about you in a day than any human being could in a lifetime, information that you do not have any access to?
In the report ‘Managed by Bots: data-driven exploitation in the gig economy’, Worker Info Exchange, a non-profit organisation which helps workers gain insight into and access to their data, explains how workers’ own data is used against them by the private hire platforms, and what workers are doing to challenge this information asymmetry.
The WIE’s ambition is to build workers’ data trusts so that workers can understand the algorithmic decision-making which affects all aspects of their work, including their pay, and use that analysis to strengthen workers’ collective bargaining power. In the battle to gain access to workers’ data, the courts have been a key site of struggle, and WIE has led the way in path-breaking litigation.
Private-hire drivers for Uber and Ola Cabs in the App Drivers and Couriers Union (ADCU) were supported by Worker Info Exchange in pursuing court cases in Amsterdam last year, arguing for access to their data on the basis of the EU General Data Protection Regulation (GDPR). There were four cases: two on workers seeking transparency over the data which is collected on them, one on ‘robo-firings’, where workers are deactivated from the app (thus losing their livelihood overnight) by an automated decision, and one on the use of facial recognition software in platform decision-making.
The workers won the facial recognition case by default, while the other three cases all saw mixed results, with some successes (including fines for Uber and Ola Cabs and compensation for drivers) and some disappointments. Now, Worker Info Exchange will be back in Amsterdam on 18 May to appeal the verdicts which did not go in their favour.
Ahead of the appeal, Anton Ekker, Amsterdam-based lawyer for Worker Info Exchange, spoke to the Gig Economy Project about the importance of the 18 May trial and, more broadly, why litigation can be a powerful tool in fighting for workers’ data rights.
The Gig Economy Project: Can you tell us a bit about how you came to be a lawyer working on digital rights and AI? And also how your relationship with Worker Info Exchange and members of the ADCU union began?
Anton Ekker: I started my career at the Institute of Information Law at the University of Amsterdam. Then I went into practice as a lawyer, specialising in privacy.
Some years ago I was doing a case against the Dutch government about a risk-profiling system it used to profile groups of Dutch citizens within specific neighbourhoods. We did this on behalf of a coalition of privacy organisations, and quite unexpectedly, we won the case at first instance. The court in The Hague struck down the Dutch law which allowed these systems to be used. That got a lot of media coverage, because it was the first case about risk profiling and AI, and also because striking down a law like that doesn’t happen very often; it is something quite extraordinary.
So after that success I was invited to one of the brainstorming sessions of DFF, which sponsored the case, and that’s where I met James Farrar from Worker Info Exchange and ADCU just very briefly. Later, he called me to ask me whether it would be possible to make a simple subject access request on behalf of UK drivers in the Netherlands. And yes of course that’s possible, because Uber is located in the Netherlands.
These cases are nothing more than subject access requests, that’s all – it’s very simple. But because Uber and Ola are resisting so much, it becomes a huge thing. Uber is represented by one of the biggest law firms in the Netherlands, which brings up every legal argument they can think of. So I think it shows clearly that companies like Uber don’t like transparency.
GEP: You were the lawyer for ADCU members in three trials in Amsterdam last year, all to do with platform algorithms and workers’ data rights, using EU GDPR law. There was Uber Drivers v Uber I, Ola Drivers v Ola, and Uber Drivers v Uber II. Can you explain the significance of these trials and the key verdicts, in the context of the upcoming appeal in the Amsterdam court?
AE: There are two general transparency cases based on subject access requests directed to Uber and Ola. In the Uber case we are talking about a lot of data categories, such as GPS, driver performance indicators, individual ratings, upfront pricing, etc, as mentioned in the ‘Guidance Notes’ that Uber provides to drivers.
A very important aspect here is transparency about automated decision-making and profiling. There are a few criteria for whether a decision is automated under the GDPR: an automated decision must have a legal effect or a similarly significant effect, and there must be no meaningful human intervention. The drivers are appealing the District Court’s reasoning, in which it accepted Uber’s argument that there was human intervention, so automated decision-making could not be established. But of course, huge amounts of automated decision-making are going on all the time, such as price-setting, ratings and calculation of routes. It’s clear to us that this is one of the things that Uber should be more transparent about.
The drivers have also requested data portability, where Uber drivers wanted to share their data with Worker Info Exchange, to enable WIE to get an overview by comparing data-sets from different drivers. Uber is questioning the scope of the rights to data portability under the GDPR.
The Ola case is similar to the Uber one.
And then we have another case which is about automated deactivations. This involves drivers who have been deactivated by the system, and they all received very standard messages with very vague explanations of why they were being deactivated. It’s very hard to understand, so we are using Articles 15 and 22 of the GDPR to ask for transparency about how these decisions were made. Uber’s first line of defence is to say the decisions were not automated, because Uber’s fraud department reviews all the recommendations made by the system. In the appeal case, the drivers argue that these decisions were fully automated: there was in fact no meaningful human intervention at all, and thus Uber must be transparent.
At the Court hearing on 18 May we are appealing several decisions where the court did not grant our requests in these previous cases.
GEP: In the Ola case, it was the first time that an algorithmic decision was qualified as an automated decision under Article 22 of the GDPR. This is a significant legal precedent, is it not?
AE: Exactly, that’s very important. On appeal, the drivers argue: ‘What the court said in this case was right, so why should this not apply to other forms of automated decision-making, when the reasoning is the same?’
GEP: We’re seeing the use of these technologies growing rapidly; Worker Info Exchange has called it a “worker surveillance arms race”. Do you foresee that many more court cases will be necessary to establish clear legal precedents?
AE: I expect a lot of litigation, because this is a very small part of the whole puzzle: these are only the algorithms used by Uber and Ola, and there’s so much more going on in the platform economy. And then we also have the new AI Act and the new Platform Workers’ Directive, which will introduce even more specific rules.
It seems strange that there has been so little litigation about automated decision-making, because its impact is very big. For some reason Article 22 of the GDPR has been something of a forgotten provision. However, I’m confident that bringing cases on behalf of individual platform workers will be a successful avenue to address this issue.
GEP: How would you respond to the argument of the platforms, that these technologies are preventing fraud and keeping people safe? Do these technologies just need to be better regulated, or are there problems with them fundamentally?
AE: There are a lot of issues here. First, Uber is always talking about fraud when basically most of the time it’s about risk management for them. They have risks they don’t like, and they want to get rid of those risks, including regulatory risks, because sometimes they have come close to losing their licence when they didn’t establish enough safeguards, in terms of the safety of the passenger, for example. The definition of fraud becomes so general that even if it is the passenger who might have done something wrong, the driver is accused of fraudulent activity as well.
The platform thinks: ‘If we see a risk on the platform, the easy way to solve it is to get rid of the driver. We want to be friendly to the passengers because they are bringing money in. Also, we want to make sure the regulator is very happy about everything we do, so the best thing we can do is just let the driver go, we can get another driver anyway.’ This is a cynical way to put it, but I think this is the dynamic that is going on.
So, for example in the case of facial recognition, it may have certain uses that are okay, but we also know that these technologies have flaws. There must be appropriate measures to make sure that if something goes wrong, it is handled appropriately.
We took another case against Uber with four drivers which was about real-time identification through facial recognition. This was a default judgement because Uber didn’t show up. The automated decisions were overruled, and the drivers were compensated a total of around €100,000, which sets a new standard under the GDPR.
GEP: These court verdicts have obviously helped workers directly involved in the case, but then Uber often continues as if nothing happened. How can Uber be forced to make changes to their overall operations?
AE: These are all parts of the puzzle. Going forward, wins before the Dutch courts will make it easier for the Data Protection Authority to act. Also, if online platforms do not provide more transparency in the future, it will make it easier to demand transparency in a class action.
One big problem is that data protection authorities everywhere seem to be overloaded; they just have a shortage of time, resources and funding.
Support us and become part of a media that takes responsibility for society
BRAVE NEW EUROPE is a not-for-profit educational platform for economics, politics, and climate change that brings authors at the cutting edge of progressive thought together with activists and others through articles like this. If you would like to support our work and want to see more writing free of state or corporate media bias, and free of charge, please consider donating. To maintain the impetus and impartiality, we need fresh funds every month. Three hundred donors giving £5 or €5 a month would bring us close to £1,500 monthly, which is enough to keep us ticking over.