What can workers do if they are fired by Artificial Intelligence?

Drivers delivered a petition with 10 signatures to Uber's London office to protest what they call "automated dismissals" and "unfair deactivations" (photo: Jakub Porzycki/NurPhoto/Getty Images)

By: Sarah O'Connor / The Financial Times
Translation: Telegrafi.com

One day last month, a man with a neat white beard stood on the sidewalk in front of Uber's London headquarters. He carried a transparent plastic folder containing a stack of letters, printed emails, and a carefully annotated, hand-drawn map.

Ghulam Qadir was "deactivated" by Uber in 2018 after what he described as a misunderstanding involving a passenger who paid in cash after the app had canceled her trip. He tried to explain to Uber what had happened (hence the map), but said that no one listened to him. He even got the MP who represents him involved in the case. Appealing to your MP seems to have become Uber drivers' version of repeatedly pressing zero during a customer service call, when you just want to talk to a human. Qadir thought this was absurd. "Parliament should deal with world issues, not my personal problems," he said.


He was at Uber's office to submit a petition with 10 signatures from drivers and their supporters, organized by the workers' rights platform Organise, to protest against what they call "automated dismissals" and "unfair deactivations."

Uber UK, for its part, said its policies had improved significantly over the past 12 to 18 months and that every driver now had the right to request that their case be reviewed by a panel of experts. "We work constantly to ensure that our approach is transparent and fair," a spokesman said. But the petition points to an underlying tension in the British government's approach to the future of work.

On the one hand, the Prime Minister has promised to remove bureaucratic obstacles to inject Artificial Intelligence [AI] “into the veins” of the British economy, convinced that it will boost productivity and therefore economic growth. On the other hand, the government has also promised fairer and safer work for low-paid workers.

Of course, these goals are not necessarily incompatible.

It is in the interest of workers to become more productive, provided they share in the gains that come from increased productivity. And many workers are already choosing to use generative AI tools (sometimes without their employers’ knowledge) because they see the value in reducing the time they spend on certain tasks, such as writing emails.

But one of the increasingly common uses of AI and other algorithmic tools is in making important decisions about employees – from recruitment to performance management. An OECD survey last year of more than 6,000 middle managers in France, Germany, Italy, Japan, Spain and the US found that the use of algorithmic management tools, initially popularised by gig companies [a labour model in which temporary positions are common and organizations hire freelance workers for short-term engagements] such as Uber, is already widespread.

Usage rates ranged from 90 percent in the US to 40 percent in Japan. However, managers themselves also seemed somewhat concerned about these tools. Six in ten said the technology improved the quality of their decision-making, but nearly two-thirds had at least one concern. The most cited was the lack of clear accountability in the event of a wrong decision, followed by the inability to understand the logic behind the algorithm's decisions and insufficient protection of workers' physical and mental health.

In the UK, these tensions are clearly evident in the Data (Use and Access) Bill, which is currently being considered in Parliament. Trade unions are concerned that a provision in the bill could weaken legal protections around the use of automated decision-making. The bill would move from a general ban with some exceptions to a general presumption of permission, along with some safeguards – such as the right of an individual to object to a decision affecting them, to receive an explanation of how the decision was reached and to request human intervention.

For Adam Cantwell-Corn, policy officer at the TUC [Trades Union Congress], the umbrella organisation for unions, the problem is that these safeguards would place the burden on the individual and would usually only come into effect after the event. "Let's say an employer makes a decision about dismissal, performance appraisal, recruitment or whatever – first you have to understand that such a decision has been made … then you have to get through some legal and bureaucratic hurdles to seek information and challenge it," he said. "Even in environments where there is strong and active union activity, this is very difficult. In workplaces with high insecurity, this becomes almost unenforceable."

Current protections against automated decisions are already weakly enforced. But the principle behind the legal change still matters. Silicon Valley's pro-tech mantra, "move fast and break things," doesn't work so well if the "things" that could be broken are actually people – standing on the sidewalk, a stack of printed emails in their hands. Especially if you're a government that has promised to be on their side.