What? When Artificial Intelligence Fails!

By POOJA BISHT |Email | Mar 7, 2019 | 2403 Views

There is no doubt that Artificial Intelligence has, in recent years, succeeded in catching everyone's attention, amazing the world with sights that ordinary people had only imagined and turning fiction into reality. But it is also true that some of these mind-boggling innovations have failed, leaving us disappointed and forcing us to think the issue over again.

Famous leaders and innovators around the world have already voiced their concerns about the harm AI could cause. Rather than being entirely negative, it is time to keep a sense of awareness and learn from the past: systems and machines can fail, and we need to be prepared for that.

The unforgettable jaywalking incident in China, which went viral and drew heavy criticism in 2018, showed that when AI systems fail, they can make a situation far more problematic, and sometimes embarrassing, than it needs to be. In late 2018, an AI-powered traffic system in China erroneously identified billionaire Mingzhu Dong as a jaywalker and displayed her image on a public screen.

The incident circulated on social media for days and left the system's operators embarrassed.

This was not the first time an AI machine or system had gone wrong. There have been many incidents in the past that we should learn from, so that similar mishaps can be avoided in the future.

In 2017, Facebook shut down its chatbots Alice and Bob after they developed their own language and began communicating with each other in ways humans could not understand. The two chatbots, set up to negotiate with each other, drifted into a shorthand of their own that was hard for humans to follow.

There was yet another report in 2018, when it emerged that an AI-driven recruiting tool at Amazon was penalizing applications containing the word "women," rejecting them during the recruitment process. According to sources, this was a significant failure, and Amazon subsequently abandoned the software.

The most harmful incident of AI misinterpretation came when news broke that a self-driving Uber car had killed a woman in Arizona. It left us all shocked, and confronted us with the fact that AI can have the worst consequences if it is not supervised and programmed correctly.

The reason for recounting these mishaps is not to be pessimistic about the technology. I personally love using technology and watching the world develop technologically with every passing day. The reason is to look back and reflect on what could be done better, so that such undesired incidents can be avoided in the near future.

Source: HOB