How Can Humans Work With Artificial Intelligence?

How will artificial intelligence (AI) influence the workplace of the future, and with it the condition of human work? The discussion so far has focused on the rather tautological conclusion that many current jobs will eventually be performed by machines. In our research we have come to the conclusion that the consequences of automation may not be that one-sided. We acknowledge the rapid development in machine learning, AI, and related fields. But we would also like to point out that the answer to questions like, “Who will win: humans or machines?” is clear. Given current advances in computing, and acknowledging that human performance is not a serious upper bound or benchmark for many tasks, it is quite obvious that computers will outperform humans in the vast majority of cases. This will also happen for tasks that currently appear demanding and seem to require intuition and human experience. Pitting humans against AI emphasizes the frictions that arise from the adoption of AI, and it supports a gloomy outlook on employment.

We believe that not enough attention has been paid to other possibilities. For example, could humans augment the capabilities of machines, or vice versa? Even those who believe that humans will become obsolete in value chains must account for a transitional period in which computers still have to learn from humans. New work arrangements might harness the computers’ ability to examine billions of alternatives, while humans contribute their ability to generate new alternatives by connecting dots that would otherwise remain unconnected.

A better question to ask might be, “How should humans and AI work together?” It is quite possible that in some work arrangements humans and AI working together outperform either working alone. Simple economics would then dictate that managers not replace the human with an AI, but let her work with the AI in a team. There are two requirements for this to happen:

  1. Humans and AI must have complementary skills (i.e., the humans must know things the AI does not, and vice versa).
  2. Given complementary skills, each piece of work must go to the party most competent to do it. In our research, we considered a simple model for this: the workload is split between humans and AI, and work can be moved to the other party through delegation (a back-of-the-envelope sketch follows this list).
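
To see why both requirements matter, consider a minimal sketch in Python. All numbers are invented for illustration (they are not taken from our experiments); the point is only that, given complementary skills, routing each piece of work to the more competent party can beat either party working alone.

```python
# Toy model with invented numbers (not our experimental data).
# Assume 70% of images are "routine" and 30% are "tricky", and the two
# parties have complementary accuracies on the two kinds of image.
P_ROUTINE, P_TRICKY = 0.7, 0.3

ACC_AI = {"routine": 0.99, "tricky": 0.65}     # the AI excels on routine images
ACC_HUMAN = {"routine": 0.80, "tricky": 0.85}  # humans hold up better on tricky ones

def expected_accuracy(acc):
    return P_ROUTINE * acc["routine"] + P_TRICKY * acc["tricky"]

ai_alone = expected_accuracy(ACC_AI)        # 0.888
human_alone = expected_accuracy(ACC_HUMAN)  # 0.815

# Perfect delegation routes every image to the more competent party.
team = (P_ROUTINE * max(ACC_AI["routine"], ACC_HUMAN["routine"])
        + P_TRICKY * max(ACC_AI["tricky"], ACC_HUMAN["tricky"]))  # 0.948

print(f"AI alone: {ai_alone:.3f}  humans alone: {human_alone:.3f}  team: {team:.3f}")
```

With these assumed numbers, perfect delegation reaches about 95% accuracy, against roughly 89% for the AI alone and 82% for the humans alone. The entire gain comes from sending each image to the right party, which is why the quality of delegation matters so much in what follows.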

We have conducted a series of experiments on image classification, a task in which humans and an AI each state what they see in a picture. Humans are traditionally highly capable of recognizing objects (even small children can readily identify a cat, a dog, or a mouse) and were only recently surpassed at the task by deep neural networks.

Regarding the first requirement (complementary skills), our results indeed suggest that certain images are easy for humans but hard for the AI, and vice versa.

To address the second requirement, we compared four work arrangements (a simulation of all four follows this list):

  1. The AI worked alone.
  2. The humans worked alone.
  3. The AI could delegate images it found hard to the humans (we call this the “inversion” scenario).
  4. The humans could delegate images to the AI if they found them too hard, or simply did not want to classify them.
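
The sketch below simulates all four arrangements in the toy model introduced above. It bakes in two further assumptions, both invented for illustration rather than measured: the AI’s confidence closely tracks its true accuracy, while the humans’ sense of their own accuracy is noisy.

```python
import random

random.seed(42)
N = 200_000  # simulated images

# Invented, illustrative numbers (not our experimental data).
P_TRICKY = 0.3
ACC_AI = {"routine": 0.99, "tricky": 0.65}
ACC_HUMAN = {"routine": 0.80, "tricky": 0.85}

def run(arrangement):
    correct = 0
    for _ in range(N):
        kind = "tricky" if random.random() < P_TRICKY else "routine"
        p_ai, p_human = ACC_AI[kind], ACC_HUMAN[kind]
        if arrangement == "ai_alone":
            p = p_ai
        elif arrangement == "human_alone":
            p = p_human
        elif arrangement == "inversion":
            # Assumed well-calibrated AI: it knows when its accuracy is
            # low and hands those images to the humans.
            p = p_human if p_ai < 0.90 else p_ai
        else:  # "human_delegates"
            # Humans hand off when they *feel* unlikely to succeed, but
            # their self-assessment carries noise (a model assumption).
            perceived = p_human + random.gauss(0, 0.15)
            p = p_ai if perceived < 0.825 else p_human
        correct += random.random() < p
    return correct / N

for a in ("ai_alone", "human_alone", "human_delegates", "inversion"):
    print(f"{a:>15}: {run(a):.3f}")
```

Under these assumptions, the simulated ordering matches the pattern we report below: the humans gain from delegating to the AI yet remain behind the AI alone, and inversion comes out on top.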

The results clearly showed that the AI working alone outperformed the humans working alone. Humans did improve by working with the AI: delegating hard images to the AI helped them somewhat. However, even then they were not as good as the AI working alone.

By far the best-performing arrangement was inversion. Here, the AI classified the images and delegated an image to the humans only when it was uncertain. In effect, the AI told the humans what to do and when to do it, and it significantly improved its performance by doing so.
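
One common way to operationalize “uncertain” (a generic technique, not necessarily the exact rule used in our experiments) is to threshold the classifier’s softmax confidence, delegating whenever the top class’s probability falls below the threshold:

```python
import math

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_or_delegate(logits, labels, threshold=0.90):
    """Return the AI's label, or None to signal 'ask the human'."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return labels[best]  # confident: the AI decides
    return None              # uncertain: delegate to the human

labels = ["cat", "dog", "mouse"]
print(classify_or_delegate([4.0, 1.0, 0.5], labels))  # 'cat' (confident)
print(classify_or_delegate([1.2, 1.0, 0.9], labels))  # None (delegate)
```

Raising the threshold routes more images to the humans; lowering it keeps more work with the AI. Where to set it is an economic question as much as a technical one.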

This is the first surprising result of the study: Inversion might be economically desirable because it generates the best results. However, for which jobs does inversion make sense? Do we want to live in a society where computers are responsible for the decisions and ask humans for help only if they are uncertain?

Why did the humans fail? In theory, they could have outperformed all other arrangements by delegating wisely, but they did not. Human delegation suffered from inaccurate self-assessment and a lack of strategy. This is the second surprising result of the study: the humans did not fail because they did not try hard enough or did not trust the AI. We can show that they acted rationally and consistently on the basis of their perceptions of what is difficult and what they know. However, the results suggest that these perceptions were wrong. The humans were bad at judging their own capabilities, particularly when the images were difficult to discern. Because their perception of difficulty was not aligned with the actual difficulty, they delegated the wrong images. They could not beat the AI despite their best intentions.
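
The toy model from above makes it easy to isolate this miscalibration effect: hold the delegation rule fixed and vary only how noisy the humans’ self-assessment is. As before, the numbers are invented for illustration.

```python
import random

random.seed(7)
N = 200_000
P_TRICKY = 0.3
ACC_AI = {"routine": 0.99, "tricky": 0.65}
ACC_HUMAN = {"routine": 0.80, "tricky": 0.85}

def team_accuracy(noise_sd):
    """Humans delegate when their *perceived* chance of success is low."""
    correct = 0
    for _ in range(N):
        kind = "tricky" if random.random() < P_TRICKY else "routine"
        p_ai, p_human = ACC_AI[kind], ACC_HUMAN[kind]
        perceived = p_human + random.gauss(0, noise_sd)
        p = p_ai if perceived < 0.825 else p_human
        correct += random.random() < p
    return correct / N

for sd in (0.0, 0.05, 0.15, 0.30):
    print(f"self-assessment noise {sd:.2f}: accuracy {team_accuracy(sd):.3f}")
```

With perfect self-knowledge (noise 0.00), the humans’ arrangement matches inversion in this model; every increase in miscalibration erodes the advantage, mirroring the failure described above.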

This result is important because it might be very hard to design delegation rules that lead to an optimal distribution of work between humans and the machines that support them. Because humans have difficulty assessing their abilities and evaluating where the AI could be better, they are bad at delegating work to the AI. The AI, on the other hand, knows precisely what it does not know, and it delegates well.

The inability of humans to delegate well to an AI does not seem to be explained by the cognitive limitations of their minds, otherwise known as “bounded rationality.” According to the theory of bounded rationality, humans tend to make decisions that merely suffice rather than decisions that are optimal. However, bounded rationality does not address whether humans have the ability to judge how difficult a task is.

What does all this mean for the future of work? In the short term, inversion-like scenarios are a looming threat. Because AI promises significant economic benefits and performance gains, humans will often find themselves working under inversion-like conditions: algorithms will allocate to humans the work that the algorithms cannot do with confidence. As a result, humans will be engaged in tasks that draw on abilities that are hard to codify.

However, there may be situations where inversion is not an option. Two examples are critical medical and perhaps legal decisions, which for cultural and ethical reasons require human decision-makers. In these cases, AI will support the humans, but it will not make the decision.

To ensure that humans remain in control of the distribution of work, researchers have to figure out how to teach them to delegate. Cultural biases in favor of human decision-makers may not prevail forever. If humans delegate badly while AI keeps getting better at the focal task, a human decision-maker may eventually be faulted for poor performance, and the work arrangement may end up inverted.

In an age of AI co-workers, the most essential skill might be delegation. As educators, we share a responsibility to help students develop the ability to assess their talents and skills honestly and accurately. We strongly believe that delegation to machines is a skill that can, and should, be taught; in the end, that would be far more productive than focusing simply on the planned obsolescence of humans in the workplace.
