Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and re-launched a Real Estate Business Intelligence Tool, creating one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, Python, and related topics.
Is Artificial Intelligence Dangerous? Device Warns Its Wearer When Robots Impersonate Humans
Just in case artificial intelligence turns on humankind, researchers have engineered a warning sign. Australian technology agency DT created a device that cautions its wearer when artificial intelligence is impersonating a human - just by sending a chill down the person's back.
"The post-truth era is just getting started," DT wrote in a blog post introducing the technology. "Near the end of 2017 we'll be consuming content synthesized to mimic real people. Leaving us in a sea of disinformation powered by AI and machine learning. The media, giant tech corporations and citizens already struggle to discern fact from fiction. And as this technology is democratized it will be even more prevalent."
The company's answer to that? The "Anti-AI AI." The small device, situated behind the ear, is designed to alert its wearer when the voice they are hearing is artificial intelligence.
"We wanted the device to give the wearer a unique sensation that matched what they were experiencing when a synthetic voice is detected," DT said in the blog post. "By using a 4x4 mm thermoelectric Peltier plate, we were able to create a noticeable chill on the skin near the back of the neck without drawing too much current."
Artificial intelligence as a whole is fraught with complications, so much so that experts often argue about the ethics and safety of the industry. Many scientists and engineers have cautioned about the future of artificial intelligence, including John McAfee, the computer programmer who developed the first-ever commercial antivirus software.
"The goal of AI - a self-conscious entity - contains within it the necessary destruction of its creator," McAfee wrote in an April opinion piece. "With self-consciousness comes a necessary self-interest. The self-interest of any AI created by the human mind, will instantly recognize the conflict between that self-interest and the continuation of the human species."
Elon Musk, the founder of private space venture SpaceX, has also argued that artificial intelligence could turn malignant. Musk has said that the technology is humanity's "biggest existential threat" and that his Mars colonization plan would essentially serve as an escape hatch should artificial intelligence betray humanity. Musk also co-founded a nonprofit called OpenAI, a group that aims to "advance digital intelligence in the way that it is most likely to benefit humanity as a whole."
The future of artificial intelligence is so uncertain that in January, a team of tech billionaires created a $27 million fund to research AI's safety and ethics.
"There's an urgency to ensure that AI benefits society and minimizes harm," LinkedIn co-founder Reid Hoffman said in a statement at the time. "AI decision making can influence many aspects of our world - education, transportation, health care, criminal justice and the economy - yet data and code behind those decisions can largely be invisible." Read More