Google mistakes photo of machine guns as helicopter

By satyamkapoor | Dec 28, 2017

In new research released last week, a team of MIT computer science students managed to trick Google's Cloud Vision artificial intelligence into thinking that a picture of four machine guns was probably a helicopter. They did it by carefully manipulating the underlying pixels of an original image, changing it in ways that were imperceptible to humans but completely disorienting for the AI.

The team demonstrated several other tricks, including convincing Cloud Vision that a group of skiers was actually a dog. They did it all without access to the vision system's underlying code, a so-called "black box" scenario. The research points towards potential vulnerabilities in the systems behind technologies like self-driving cars, automated security screening, and facial-recognition tools.

To fool the system, the researchers manipulated the original image pixel by pixel, changing it in ways humans couldn't detect but which, bit by bit, altered what Cloud Vision saw. The approach is not unlike brute-force password hacking, in which a malicious algorithm plugs in letters and numbers until it finds the combination that opens your email.
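To make the idea concrete, here is a minimal sketch of a query-based, black-box perturbation loop. It is illustrative only: the `classify` function is a hypothetical stand-in for whatever model is being probed (the MIT team targeted Cloud Vision through its own interface), and the step size, pixel budget, and query limit are assumed values, not the researchers' actual, far more query-efficient method.

```python
import numpy as np

# Hypothetical stand-in for a black-box image classifier; it takes an HxWx3
# uint8 image and returns a dict mapping labels to confidence scores.
# Plug in whatever model you are testing (this is not the Cloud Vision API).
def classify(image):
    raise NotImplementedError("supply your own black-box classifier here")

def greedy_blackbox_attack(image, target_label, step=2, budget=8,
                           queries=5000, seed=0):
    """Toy query-based attack: randomly nudge single pixels and keep only the
    changes that raise the target label's score, while capping how far any
    pixel may drift from the original so the edit stays imperceptible."""
    rng = np.random.default_rng(seed)
    original = image.astype(np.int16)
    adversarial = original.copy()
    best = classify(adversarial.astype(np.uint8)).get(target_label, 0.0)

    for _ in range(queries):
        candidate = adversarial.copy()
        # Pick one pixel/channel at random and nudge it up or down by `step`.
        y = int(rng.integers(candidate.shape[0]))
        x = int(rng.integers(candidate.shape[1]))
        c = int(rng.integers(candidate.shape[2]))
        candidate[y, x, c] += step * int(rng.choice([-1, 1]))
        # Keep the change imperceptible: stay within `budget` of the original
        # value and inside the valid 0-255 pixel range.
        low = max(0, int(original[y, x, c]) - budget)
        high = min(255, int(original[y, x, c]) + budget)
        candidate[y, x, c] = np.clip(candidate[y, x, c], low, high)

        # Query the black box and keep the candidate only if it helps.
        score = classify(candidate.astype(np.uint8)).get(target_label, 0.0)
        if score > best:
            adversarial, best = candidate, score

    return adversarial.astype(np.uint8), best
```

The key point the sketch illustrates is that no access to the model's internals is needed: the attacker only observes output scores, which is why the article compares it to brute-force guessing.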

Speaking with Wired, one of the researchers said that this sort of randomized hack can actually help us better understand how artificial intelligences think. Google and other big tech firms, meanwhile, are working to address these sorts of attacks, hopefully before their real-world applications become more widespread.

One significant qualifier is that this particular trick relies on digital alteration of 2-D images, while something like a self-driving car draws on much richer, less easily manipulated visual data. But low-fi, real-world hacks have also been used to trick AI vision systems; for instance, carefully placed stickers were recently used to make an AI misread traffic signs.

Source: Fortune