Nand Kishor Contributor

Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and re-launched a Real Estate Business Intelligence Tool, creating one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, Python, and related topics.



AI Researchers Disagree With Elon Musk's Warnings About Artificial Intelligence

By Nand Kishor | Jul 20, 2017

When Elon Musk told U.S. governors on Saturday that artificial intelligence (AI) is mankind's biggest threat, the warning didn't fall on deaf ears. AI researchers, at least, caught it, and now they're saying Musk is being overly cautious about AI. But is he?

DISTORTING THE DEBATE?
The fear of super-intelligent machines is very real for Tesla and SpaceX CEO and founder Elon Musk. He has spoken about it many times, but perhaps never in stronger terms than when he told U.S. governors that artificial intelligence (AI) poses "a fundamental risk to the existence of human civilization." The comment caught the attention not just of the governors present but also of AI researchers, and they are not happy about it.

"While there needs to be an open discussion about the societal impacts of AI technology, much of Mr. Musk's oft-repeated concerns seem to focus on the rather far-fetched super-intelligence take-over scenarios," Arizona State University computer scientist Subbarao Kambhampati told Inverse. "Mr. Musk's megaphone seems to be rather unnecessarily distorting the public debate, and that is quite unfortunate."

Kambhampati, who also heads the Association for the Advancement of AI and is a trustee for the Partnership for AI, wasn't the only one who reacted to Musk's most recent AI warning. Francois Chollet and David Ha, deep learning researchers at Google, also took to Twitter to defend AI and machine learning (ML).

Pedro Domingos, a researcher at the University of Washington in Seattle, simply tweeted a "sigh" of disbelief.

IS THERE REALLY AN AI THREAT?
Both Kambhampati and Ha commented on the premise that Musk, because of his work at OpenAI, his development of self-driving technology at Tesla, and his recent Neuralink project, has access to cutting-edge AI technologies and therefore knows what he's talking about. "I also have access to the very most cutting-edge AI and frankly I'm not impressed at all by it," Ha said in another tweet.

Kambhampati, meanwhile, pointed to the 2016 AI report by the Obama administration, which made timely and positive recommendations about AI regulations and policies. The White House report didn't have "the super-intelligence worries that seem to animate Mr. Musk," Kambhampati told Inverse, which he sees as a strong indicator that those concerns are not well-founded.

It seems unfair, however, that Musk is getting all the attention when he's not the only prominent figure warning about the threat of super-intelligence. The famous physicist Stephen Hawking has also repeatedly warned of an AI apocalypse. The real question is: should we really fear AI?

Given the current state of AI, there seems to be little to fear. While the technology has seen tremendous advances recently, and some experts think we're getting closer to the technological singularity (the point when computers surpass human-level intelligence), current AI isn't as advanced as the doomsday robots we see in science fiction, and it isn't clear that it ever will be.

Notable futurist and "singularity enthusiast" Ray Kurzweil even thinks the singularity won't be something to fear. If anything, what's more frightening is how we make use of AI. That's why the best course right now is to pursue AI research with clear goals and guidelines. So Musk is right that regulation is necessary, but Kambhampati, Chollet, and Ha are also right that there's no need for alarmism.

Source: Futurism