Thursday, April 22, 2021

Do People Governed by Algorithms Improve or Quit?

Wait, what does it mean to be governed by algorithms? If you do not know yet, you are probably not working for a gig contracting platform (Uber, TaskRabbit) or for an employer that uses algorithms to assess employees and to predict and manage training and promotion. Growing data processing capacity and machine learning tools mean that algorithms have crept into a multitude of organizations and now influence how they manage people. Importantly, some workplaces make the algorithm transparent to employees and share its results; others keep it opaque and secret. Because people usually learn how to game transparent algorithms to get high scores, opaque algorithms are becoming increasingly common, and they are currently the most important to understand.

So, what do opaque algorithms do to people? That is the topic of research by Hatim A. Rahman published in Administrative Science Quarterly. He focused on a labor platform that matches freelance workers with clients. The platform implemented an opaque evaluation routine that produced a new type of quality score for freelancers, visible both to them and to potential clients. How do people react to such scores? We know that scores become goals, and people commonly try to improve their performance by making changes. That is exactly why transparent algorithms result in inflated scores after a period of adaptation. But opaque algorithms do not tell people how to improve, making the scores they produce less useful as goals.

Instead of targeted improvements, opaque algorithms can prompt experimentation to find out which elements of the algorithm affect the score, and how. Many freelancers tried to change how they worked with clients through simple actions such as changing the type of work, the length of the contract, the procedure for closing the contract, and so on. But these changes were unlike the changes decision-makers make when facing goals that are more easily understood. As research on performance feedback has documented, people facing low performance relative to a goal very commonly react by making changes to improve it. That happened with the opaque algorithm too, but the reaction was much more selective.

First difference: Not everyone tried to make changes. Many individuals who were not highly dependent on the platform responded by quitting it. This was true whether their scores were high or low, so even many freelancers the algorithm rated as high performers simply left.

Second difference: Not everyone’s likelihood of making changes followed from the algorithm score. Individuals with low scores experimented with different approaches whether or not their scores had recently suffered setbacks. That mattered because on this platform a score below 90 percent was considered low, so the result was continuing turmoil in how freelancers worked.

Third difference: Among those who scored best and were dependent on the platform, those who experienced setbacks made changes to how they worked. So far, so good, especially if those changes were actual improvements. But what about those who did not experience setbacks in their scores? They tried to limit their exposure, including by not taking on new clients through the platform. A high score was valuable, and accepting new work on the platform might endanger it, so they preferred to stick with existing clients or to find new clients who would let them work outside the platform.

Clearly, the opaque algorithm produced scores that made it easier for clients to distinguish among freelancers, and it also governed the freelancers by changing how they behaved. Were these changes improvements? Normally, performance feedback on a meaningful goal results in improvements, but it is far from clear that an opaque goal has the same effect. Indeed, the three differences in how these freelancers reacted suggest that the opaque algorithm was a poor governance tool.

Rahman, Hatim A. 2021. "The Invisible Cage: Workers’ Reactivity to Opaque Algorithmic Evaluations." Administrative Science Quarterly, Forthcoming.