Nand Kishor Contributor

Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and relaunched a Real Estate Business Intelligence Tool, building one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, Python, and related topics.


Artificial Intelligence: The Complete Guide

Feb 2, 2018

Artificial intelligence is overhyped: there, we said it. It's also incredibly important.

Superintelligent algorithms aren't about to take all the jobs or wipe out humanity. But software has gotten significantly smarter of late. It's why you can talk to your friends as an animated poop on the iPhone X using Apple's Animoji, or ask your smart speaker to order more paper towels.

Tech companies' heavy investments in AI are already changing our lives and gadgets, and laying the groundwork for a more AI-centric future.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves "training" computers to perform tasks based on examples, rather than by relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
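
To make "training by example" concrete, here is a minimal Python sketch using scikit-learn; the tiny dataset, its labels, and the classifier choice are invented purely for illustration and are not from the original article.

```python
# A minimal sketch of "learning from examples" rather than hand-written rules.
# The toy dataset below is invented for illustration; scikit-learn is assumed
# to be installed (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# Each example is [hours_of_daylight, temperature_c]; labels say whether that day "feels like summer".
X = [[8, 2], [9, 5], [10, 12], [14, 22], [15, 26], [16, 30]]
y = [0, 0, 0, 1, 1, 1]  # 0 = not summer-like, 1 = summer-like

model = DecisionTreeClassifier()
model.fit(X, y)                    # "training": the model infers rules from the examples

print(model.predict([[13, 24]]))   # the learned rules generalize to an unseen input -> [1]
```

The point is that nobody wrote an explicit rule for what counts as summer-like; the program inferred one from labeled examples, which is the shift that distinguishes machine learning from conventional programming.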

For most of us, the most obvious results of the improved powers of AI are neat new gadgets and experiences such as smart speakers, or being able to unlock your iPhone with your face. But AI is also poised to reinvent other areas of life. One is health care. Hospitals in India are testing software that checks images of a person's retina for signs of diabetic retinopathy, a condition frequently diagnosed too late to prevent vision loss. Machine learning is vital to projects in autonomous driving, where it allows a vehicle to make sense of its surroundings.

There's evidence that AI can make us happier and healthier. But there's also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won't automatically be a better one.

The Beginnings of Artificial Intelligence
Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. "We think that a significant advance can be made," he wrote with his co-organizers, "if a carefully selected group of scientists work on it together for a summer."

MOMENTS THAT SHAPED AI
1956
The Dartmouth Summer Research Project on Artificial Intelligence coins the name of a new field concerned with making software smart like humans.

1965
Joseph Weizenbaum at MIT creates Eliza, the first chatbot, which poses as a psychotherapist.

1975
Meta-Dendral, a program developed at Stanford to interpret chemical analyses, makes the first discoveries by a computer to be published in a refereed journal.

1987
A Mercedes van fitted with two cameras and a bunch of computers drives itself 20 kilometers along a German highway at more than 55 mph, in an academic project led by engineer Ernst Dickmanns.

1997
IBM's computer Deep Blue defeats chess world champion Garry Kasparov.

2004
The Pentagon stages the Darpa Grand Challenge, a race for robot cars in the Mojave Desert that catalyzes the autonomous-car industry.

2012
Researchers in a niche field called deep learning spur new corporate interest in AI by showing their ideas can make speech and image recognition much more accurate.

2016
AlphaGo, created by Google unit DeepMind, defeats a world champion player of the board game Go.

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn't long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the importance of learning to human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
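
As a rough sketch of that training loop, the toy network below adjusts its connections (weights) with gradient descent until it reproduces the XOR function. The architecture, learning rate, and number of steps are arbitrary illustrative choices, not anything described in the article.

```python
# A toy artificial neural network: its "connections" (weights) adjust as
# training data passes through, via gradient descent on a squared error.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden connections
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output connections
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: data flows through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every connection to reduce the error.
    err = out - y                          # gradient of the squared error (up to a constant)
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0] as the connections adjust
```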

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the "Embryo of Computer Designed to Read and Grow Wiser." But neural networks tumbled from favor after an influential 1969 book co-authored by MIT's Marvin Minsky suggested they couldn't be very powerful.

Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.

In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find.

The Future of Artificial Intelligence
Even if progress on making artificial intelligence smarter stops tomorrow, don't expect to stop hearing about how it's changing the world.

Big tech companies such as Google, Microsoft, and Amazon have amassed strong rosters of AI talent and impressive arrays of computers to bolster their core businesses of targeting ads or anticipating your next purchase.

YOUR AI DECODER RING
Artificial intelligence
The development of computers capable of tasks that typically require human intelligence.

Machine learning
Using example data or experience to refine how computers make predictions or perform a task.

Deep learning
A machine learning technique in which data is filtered through self-adjusting networks of math loosely inspired by neurons in the brain.

Supervised learning
Showing software labeled example data, such as photographs, to teach a computer what to do.

Unsupervised learning
Learning without annotated examples, just from experience of data or the world: trivial for humans but not generally practical for machines. Yet.

Reinforcement learning
Software that experiments with different actions to figure out how to maximize a virtual reward, such as scoring points in a game (a toy sketch follows this glossary).

Artificial general intelligence
As yet nonexistent software that displays a humanlike ability to adapt to different environments and tasks, and transfer knowledge between them.
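
To give the reinforcement-learning entry above a concrete shape, here is a toy epsilon-greedy agent that experiments with three actions and gradually favors the one with the highest payoff. The reward probabilities, the epsilon value, and the number of steps are invented for illustration only.

```python
# Toy reinforcement learning: an epsilon-greedy agent learns which of three
# actions yields the highest average reward. The environment (the hidden
# payoff probabilities) is made up for this sketch.
import random

random.seed(0)

reward_prob = [0.2, 0.5, 0.8]   # hidden payoff probability of each action
value = [0.0, 0.0, 0.0]         # the agent's running reward estimate per action
count = [0, 0, 0]
epsilon = 0.1                   # how often to explore a random action

for step in range(2000):
    if random.random() < epsilon:
        action = random.randrange(3)                     # explore
    else:
        action = max(range(3), key=lambda a: value[a])   # exploit the best-known action

    reward = 1.0 if random.random() < reward_prob[action] else 0.0

    # Update the running average estimate for the chosen action.
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

print([round(v, 2) for v in value])                   # estimates should end up near [0.2, 0.5, 0.8]
print("preferred action:", value.index(max(value)))   # most likely action 2
```

No one tells the agent which action is best; it discovers that by trial, error, and reward, which is the same principle (at vastly larger scale) behind systems like AlphaGo.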

They've also begun trying to make money by inviting others to run AI projects on their networks, which will help propel advances in areas such as health care or national security. Improvements to AI hardware, growth in training courses in machine learning, and open source machine-learning projects will also accelerate the spread of AI into other industries.

Meanwhile, consumers can expect to be pitched more gadgets and services with AI-powered features. Google and Amazon in particular are betting that improvements in machine learning will make their virtual assistants and smart speakers more powerful. Amazon, for example, has devices with cameras to look at their owners and the world around them.

The commercial possibilities make this a great time to be an AI researcher. Labs investigating how to make smarter machines are more numerous and better-funded than ever. And there's plenty to work on: Despite the flurry of recent progress in AI and wild prognostications about its near future, there are still many things that machines can't do, such as understanding the nuances of language, common-sense reasoning, and learning a new skill from just one or two examples. AI software will need to master tasks like these if it is to get close to the multifaceted, adaptable, and creative intelligence of humans. One deep-learning pioneer, Google's Geoff Hinton, argues that making progress on that grand challenge will require rethinking some of the foundations of the field.

As AI systems grow more powerful, they will rightly invite more scrutiny. Government use of software in areas such as criminal justice is often flawed or secretive, and corporations like Facebook have begun confronting the downsides of their own life-shaping algorithms. More powerful AI has the potential to create worse problems, for example by perpetuating historical biases and stereotypes against women or black people. Civil-society groups and even the tech industry itself are now exploring rules and guidelines on the safety and ethics of AI. For us to truly reap the benefits of machines getting smarter, we'll need to get smarter about machines.

Source: Wired