Workers demand gig economy companies explain their algorithms

Frustrated workers say there is little redress after computers make decisions

More than 500 gig workers from companies such as Amazon Flex, Bolt, Deliveroo, and Uber have been asking a union, Workers Info Exchange, for help understanding automated decisions

Over two years, Alexandru, a 38-year-old Uber driver in London, amassed nearly 7,000 trips and kept a flawless five-star customer service rating. When other drivers complained that Uber’s system was punishing them for no reason, he did not believe them. “My feeling was this wasn’t the whole truth. Surely they must be guilty of something,” he said. But in July last year, Alexandru received his own warning from Uber’s computers, telling him he had been flagged for fraudulent activity. Another warning came two weeks later. A third warning would lead to his account being shut down. He stopped using the app for fear of being permanently barred, and began racking his brain over what he might have done wrong.

Like many gig economy companies, Uber manages its tens of thousands of UK drivers with artificial intelligence programs, which handle everything from checking drivers’ identities with facial recognition, to pairing drivers with customers, to spotting fraud such as drivers cheating passengers or sharing accounts.

Alexandru claimed that while he was warned for fraud, such as deliberately extending the time or distance of a trip, he was not given any explanation of what had happened, and was unable to find out. “It’s like you find a warning left on your desk by your boss, but they are inaccessible, and if you knock on their door, they say, just stop whatever you’re doing wrong, or you will be fired, but they won’t tell you what it is. You feel targeted, discriminated against,” he said. When he called Uber for support, no one could help him. “They kept saying, ‘The system can’t be wrong, what have you done?’”

Finally, he took his case to Workers Info Exchange (WIE), a union that is lobbying companies such as Uber to explain how their systems work. In October, Uber apologised for flagging him by mistake. “[Uber] use AI software as an HR department for drivers,” said Alexandru. “The system might make you stop working, it might terminate your contract with Uber, it might cost you your [private hire] license,” he said. Uber said that while it does use automated processes for fraud detection, decisions to terminate a driver are only taken following human review by company staff. Over the past eight months, more than 500 gig workers from companies such as Amazon Flex, Bolt, Deliveroo, and Uber have also asked WIE for help understanding automated decisions.

Under European data protection laws, drivers have the right to know whether and how they have been subject to automated decision-making. But so far just 40 workers have received raw data on their working patterns, such as job acceptance rates, decline rates, and ratings. No company, however, has clearly explained how the data was used by its computers to make decisions, said WIE.

“The key is the Kafkaesque element of what it means to have an algorithm as your boss,” said Eva Blum-Dumontet, senior researcher at advocacy group Privacy International and co-author of a new report, ‘Managed by Bots’, written with WIE. “People are getting suspended, unable to work, for reasons they don’t know about. The company’s employees themselves don’t know. The insanity of the situation has an impact on their mental health — the feeling of being treated as if you are guilty, without knowing why. All of the drivers I interviewed spoke about that, much more than the financial aspect.”

In Europe, regulators and courts are beginning to recognise the harms of algorithmic management practices.

In January, an Italian court ruled that Deliveroo had discriminated against workers because its computers did not differentiate between workers who were unproductive and those who were sick or exercising their right to strike. Italy has also fined Deliveroo and Glovo, a grocery delivery app, for not revealing how their computers allocated jobs and rated performance.

In March, a Dutch court ruled that ride-hailing company Ola had to provide the data used to generate ‘fraud probability scores’ and ‘earnings profiles’ of drivers, which are used to decide how jobs are allocated.

Despite these scattered legal wins, enforcement remains weak. “The laws — both employment and data privacy laws — are failing these drivers,” said Cansu Safak, the report’s co-author and a researcher working on algorithmic surveillance at WIE. “When you try to exercise these rights, you find yourself speaking to bots again.” She said she found the industry to be “deeply hostile and resistant to the exercise of worker rights”, and that stronger enforcement of existing laws was therefore needed.

Uber said: “Our technology plays an important role in keeping everyone that uses our platform safe, as well as maximising earning opportunities for drivers and couriers.” The company added that it was committed to being open and transparent about its systems, a claim disputed by campaigners, workers and union members, who said gig companies treat the way their algorithms work as a trade secret.

Privacy International and WIE will on Monday launch a public petition demanding more details from half a dozen gig platforms, such as how automated decisions are reviewed. Their goal is to draw attention to the increasing automation of all workplaces — not just the gig economy. “Everybody feels safe, they have a nice job and this won’t affect them, just those poor Uber drivers. But it isn’t just about us,” said Alexandru. “The lower classes, gig workers, self-employed, we are the first to be affected because we are the most vulnerable. But what’s next? What if AI decides if an ambulance is sent to your house or not, would you feel safe without speaking with a human operator? This affects all our lives.”

Source: Financial Times