satyamkapoor

How can we make humans trust artificial intelligence

Jan 10, 2018

Today, artificial intelligence is capable of predicting the future. Police forces are using it to map when and where crime is likely to occur. Doctors are using it to predict when a patient is likely to develop a disease. Some researchers are even trying to use it to plan for unexpected consequences.
There are many decisions in our lives that call for a good forecast, and AI systems are often better at forecasting than their human counterparts. Yet, despite all these technological advances, people still prefer to rely on human experts rather than trust an AI system.
If people are to benefit from AI, we will have to find ways to bridge this gap in trust. To do that, we need to first understand why people are reluctant to trust AI.

Should you trust Dr. Robot?

IBM's attempt to promote its supercomputer programme for cancer treatment, Watson for Oncology, was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world's cases. Patients around the world have since received advice based on its calculations.

But doctors who used Watson found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in its recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind and more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts' opinion, doctors would typically conclude that Watson wasn't competent. And the machine couldn't explain why its treatment was plausible, because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this caused even more mistrust, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson's premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the project after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The doctor will see you now.

The problem with Watson for Oncology was that doctors simply didn't trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a complex system of analysis to identify potentially hidden patterns and weak signals from large amounts of data.

Even if its workings can be technically explained (and that's not always the case), AI's decision-making process is usually too difficult for most people to understand. And interacting with something we don't understand can cause anxiety and make us feel like we're losing control. Many people are also simply not familiar with the many instances of AI actually working, because it often happens in the background.
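
To make that opacity concrete, here is a minimal sketch (assuming Python with scikit-learn and a purely synthetic dataset, not any system mentioned above) of a model whose answer for a single case emerges from hundreds of decision trees, with no single rule a person could read off:

    # A sketch of an "opaque" model: the prediction below is the joint outcome
    # of 300 decision trees, so no single human-readable rule explains it.
    # The dataset is synthetic and the "patient" is just its first row.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

    patient = X[:1]                      # one new case
    print(model.predict(patient))        # a confident yes/no answer...
    print(model.predict_proba(patient))  # ...whose "why" is spread across 300 trees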

Instead, they are acutely aware of instances where AI goes wrong: a Google image-recognition algorithm that classifies people of colour as gorillas; a Microsoft chatbot that decides to become a white supremacist in less than a day; a Tesla operating in autopilot mode that resulted in a fatal accident. These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren't.

A new AI divide in society?

Feelings about AI also run deep. My colleagues and I recently ran an experiment where we asked people from a range of backgrounds to watch various sci-fi films about AI and then asked them questions about automation in everyday life. We found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants' attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.

This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as confirmation bias. As AI is reported and represented more and more in the media, it could contribute to a deeply divided society, split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

Three ways out of the AI trust crisis

Fortunately, we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people's attitudes towards the technology, as we found in our study. Evidence also suggests that the more you use other technologies such as the internet, the more you trust them.

Another solution could be to open the "black box" of machine learning algorithms and be more transparent about how they work. Some companies already release transparency reports about government requests and other surveillance disclosures; a similar practice for AI systems would help people better understand how algorithmic decisions are made.
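
As a rough sketch of what that transparency could look like in practice (assuming Python with scikit-learn; the feature names and data here are invented for illustration), each automated decision could be returned together with the factors that weighed most heavily in it:

    # A sketch of "transparency by default": every decision is returned with the
    # features that pushed it most strongly. Data and feature names are made up;
    # a real system would log this alongside the decision it serves.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["age", "blood_pressure", "cholesterol", "smoker"]
    X = np.random.RandomState(0).rand(200, 4)
    y = (X[:, 1] + X[:, 3] > 1.0).astype(int)   # toy ground truth

    model = LogisticRegression().fit(X, y)

    def decide_with_explanation(x):
        """Return the model's decision plus each feature's contribution to it."""
        contributions = model.coef_[0] * x       # per-feature push towards class 1
        ranked = sorted(zip(feature_names, contributions),
                        key=lambda pair: abs(pair[1]), reverse=True)
        return model.predict([x])[0], ranked

    decision, reasons = decide_with_explanation(X[0])
    print("decision:", decision)
    for name, weight in reasons:
        print(f"  {name:15s} {weight:+.3f}")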

Research also suggests that involving people in AI decision-making could help boost trust and allow AI to learn from human experience. For example, when some people were given the freedom to slightly modify an algorithm, they felt more satisfied with its decisions, were more likely to use it in the future and more likely to consider it superior.
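
A minimal sketch of handing people that kind of control (plain Python; the model score and threshold here are placeholders, not the algorithm from the study above) could be as simple as letting the user adjust the cut-off at which an algorithm's score becomes a recommendation:

    # A sketch of letting people "slightly modify" an algorithm: the model still
    # produces the score, but the user owns the threshold that turns that score
    # into a recommendation. The scores are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class AdjustableRecommender:
        threshold: float = 0.5                 # default cut-off chosen by the system

        def recommend(self, score: float) -> str:
            return "recommend" if score >= self.threshold else "hold"

    recommender = AdjustableRecommender()
    print(recommender.recommend(0.55))         # system default: "recommend"

    recommender.threshold = 0.7                # user nudges the algorithm to be stricter
    print(recommender.recommend(0.55))         # now: "hold"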

As laypeople, we don't need to understand the intricate workings of an AI system, but if people are given some information about, and control over, how these systems are implemented, they will be more open to accepting them in their lives.

Source: HOB