Can artificial intelligence (AI) make fairer decisions than people? Yes, if the right conditions are in place. A commentary.
Termination Via AI – Is That Fair?
In the last few days, there has been a lot of commotion about personnel decisions at Amazon and the VW subsidiary Moia. In both cases, "smart" algorithms made employment decisions that were hard to comprehend.
At Amazon, algorithms constantly evaluated the delivery drivers, and in the event of even small errors a bot automatically sent out a letter of termination.
At Moia, on the other hand, an algorithm categorizes working hours as "productive" or "non-productive," so that drivers, for example, had to apply for toilet breaks in the digital system, which were then logged as "non-productive" time.
These examples are a good illustration of how it should not work. Nevertheless, algorithms will decide about our jobs far more frequently in the future, and that is a good thing.
People Make Far Worse Decisions
Because if we are honest, personnel decisions in companies are often quite arbitrary. They hinge on personal preferences, subjective impressions, and individual sympathies rather than on whether a person does their job well or is suited to it.
On top of that, there are often prejudices against certain groups that we are sometimes not even aware of. Studies show again and again that precisely such unconscious biases lead to unfair personnel decisions.
We are human, and that is exactly the problem.
AI In The Job Saves Money And Avoids Conflicts
That is why many technologies are already in use in HR departments. In the long term, however, well-designed artificial intelligence (AI) can make decisions far more objectively and precisely than a purely human HR department.
At the same time, this saves a great deal of effort, time, staff, and money, and it prevents conflicts: if everyone agrees that the AI is fair, there is little left to argue about.
That is the theory. In practice, however, five conditions must be met for AI personnel decisions to be fair. Cases like Amazon and Moia show very clearly that otherwise things can go completely wrong.
Parameters For AI Must Be Fair
The best basis for fair decisions by an AI on the job is, of course, the careful development of these technologies. Not only programmers should be involved, but also the human resources department and employees, and ideally labor, legal, and ethics experts as well.
This is the only way to develop well-founded and, above all, fair parameters for evaluating personnel.
For example: how quickly and how accurately employees complete their tasks should certainly be an evaluation criterion. But what about criteria such as willingness to put in effort, the ability to motivate others, the ability to handle criticism, or good team spirit?
After all, these are important aspects of the job too. At the same time, an AI has to assess errors in a more nuanced way than we saw in the Amazon case. And the employees themselves have to perceive the AI as fair for the technology to gain any acceptance at all.
For this, basic moral values must also be built in, so that an AI does not, for example, put older employees on the dismissal list across the board.
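The ideas above can be sketched in code. The following is a minimal, hypothetical illustration, not a real HR system: the evaluation criteria, their weights, and the list of protected attributes are all invented assumptions. The point is that the criteria are explicit and documented, and that protected attributes such as age can never enter the score.

```python
# Hypothetical sketch of a transparent evaluation score.
# All criteria, weights, and attribute names are invented for illustration.

PROTECTED = {"age", "gender", "nationality"}  # must never influence the score

WEIGHTS = {
    "task_speed": 0.3,     # how quickly tasks are completed
    "accuracy": 0.3,       # how error-free the work is
    "team_feedback": 0.2,  # peer signal for team spirit
    "motivation": 0.2,     # e.g. from structured manager reviews
}

# Fail fast if a protected attribute ever sneaks into the criteria.
assert not PROTECTED & WEIGHTS.keys(), "protected attribute used as criterion"

def evaluate(scores: dict) -> float:
    """Weighted average over the agreed criteria; each score is in 0..1."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

rating = evaluate({"task_speed": 0.8, "accuracy": 0.9,
                   "team_feedback": 0.7, "motivation": 0.6})
```

Because the weights are written down rather than hidden in the model, HR, employees, and ethics experts can discuss and adjust them, which is exactly the joint development the text calls for.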
Algorithms Vs. Humans: The AI Has To Pass The Test
Developing AI for job decisions is only the first step. It then has to be tested in practice. Human employees and AI should make independent personnel decisions in a test phase.
Companies then have to analyze and compare this data carefully: Which decisions match? Where did the AI make mistakes, or where did it perhaps even decide more fairly? Where does it need to improve?
Only if the AI reaches an agreement rate of 90 or 95 percent with fair human decisions should it go live. Of course, this evaluation is ultimately carried out by subjective humans as well. But if several people are involved in the evaluation process, that is already fairer than current practice in many HR departments.
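The pilot phase described above can be sketched as a simple agreement check. The decision labels and the data below are invented for illustration; in practice the comparison set would come from real, independently made human and AI decisions.

```python
# Hypothetical sketch: measure human/AI agreement in a pilot phase
# before the AI is allowed to decide on its own. All data is invented.

human_decisions = ["keep", "keep", "warn", "keep", "dismiss",
                   "keep", "warn", "keep", "keep", "keep"]
ai_decisions    = ["keep", "keep", "warn", "keep", "keep",
                   "keep", "warn", "keep", "keep", "keep"]

matches = sum(h == a for h, a in zip(human_decisions, ai_decisions))
agreement = matches / len(human_decisions)

THRESHOLD = 0.9  # the 90 percent bar mentioned in the text
go_live = agreement >= THRESHOLD

# Every disagreement is a case humans should review by hand:
disagreements = [i for i, (h, a)
                 in enumerate(zip(human_decisions, ai_decisions)) if h != a]
```

The list of disagreements is where the interesting analysis happens: each one is either a mistake by the AI or, possibly, a case where the AI decided more fairly than the human.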
Personnel Decisions Based On AI Must Be Transparent
But this also means that the decisions an AI makes about jobs must be transparent, including for the employees affected. In the Amazon case, for example, the algorithm suddenly downgraded an employee's rating, but she never found out why.
That is not acceptable. Only if employees understand what they did wrong and get a chance to improve can such an AI survive in everyday working life in the long run.
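Transparency of this kind can be enforced structurally: every automated rating change carries a reason that can be shown to the affected employee. The sketch below is a hypothetical illustration; the reason codes, field names, and texts are invented.

```python
# Hypothetical sketch: no rating change without a recorded,
# human-readable reason. Codes and texts are invented for illustration.

from dataclasses import dataclass
from datetime import date

REASON_TEXT = {
    "LATE_DELIVERY": "Deliveries were completed after the promised window",
    "CUSTOMER_COMPLAINT": "A verified customer complaint was filed",
}

@dataclass
class RatingChange:
    employee_id: str
    old_rating: float
    new_rating: float
    reason_code: str  # must be a key of REASON_TEXT
    day: date

    def explain(self) -> str:
        """Explanation shown to the affected employee."""
        return (f"Rating changed from {self.old_rating} to {self.new_rating} "
                f"on {self.day}: {REASON_TEXT[self.reason_code]}.")

change = RatingChange("drv-042", 4.5, 3.9, "LATE_DELIVERY", date(2021, 7, 1))
explanation = change.explain()
```

Because `explain()` fails if the reason code is unknown, the system simply cannot produce the silent, unexplained downgrade described in the Amazon case.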
AI Can And Must Be Questioned
This also means that the AI can and must be open to challenge by the HR department and by employees. Because no matter how well an AI is programmed, there will always be errors or aspects that can be improved.
Cases in medicine have shown that blindly relying on technology can be fatal. Companies therefore have to establish processes that regularly put the technology to the test.
Create The Legal Framework
Last but not least, it should not be up to each company alone to decide how an AI hires and fires employees. After all, people's livelihoods are at stake. A legal framework that sets at least rough limits is therefore essential.
Yes, AI In The Job Is The Future
Looking at Amazon and Moia, it may seem frightening and unfair when artificial intelligences make decisions about jobs. But with careful development and a transparent process, the principle itself is promising.
Ultimately, Artificial Intelligence has great potential to make job decisions that are much fairer than we humans can.