Rajendra

I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups, and bot funding. I am also interested in recent developments in data science, machine learning, and natural language processing.



This Super Bowl Experiment Proves Machine Learning Still Needs a Helping Hand From Humanity

By Rajendra | Oct 3, 2017

Machine learning is a core part of artificial intelligence and of nearly everything built with it today, from new tech products to reliable automated processes. But as an experiment described by TheNextWeb contributor and cybersecurity expert Steve Grobman demonstrates, humans will remain valuable for some time in cybersecurity, and in artificial intelligence generally. His simple experiment explains why quite nicely.

Grobman had previously decided to dabble in machine learning by building models to predict the winner of the Super Bowl. One model was trained on fourteen years' worth of team data, covering 1996 through 2010, using regular-season results, offensive strength, defensive strength, and other statistics. The model predicted most of those winners correctly, except in 2009, when it mistakenly picked both the Arizona Cardinals and the Pittsburgh Steelers to win. From 2011 onward, however, it began making several errors, leaving it less useful than simply guessing who might win.
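The article shares no code, but a minimal sketch of this kind of predictor might look like the following. Everything here is invented for illustration: the per-matchup features (differences in win percentage, offensive strength, and defensive strength), the outcomes, and the choice of a plain-Python logistic regression, which merely stands in for whatever model Grobman actually used.

```python
import math

def train_logreg(rows, labels, lr=0.1, epochs=500):
    """Plain-Python logistic regression fit by gradient descent."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))        # predicted win probability
            err = p - y                            # gradient of log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """1 if the model favors the first team, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Invented per-matchup features: (win-pct diff, offense diff, defense diff)
features = [(0.3, 0.2, 0.1), (-0.2, -0.1, 0.0), (0.1, 0.4, 0.2), (-0.3, -0.2, -0.1)]
winners  = [1, 0, 1, 0]   # 1 = first team won (made-up outcomes)

w, b = train_logreg(features, winners)
print([predict(w, b, x) for x in features])
```

On this tiny, cleanly separable toy set the model recovers every training outcome, which is exactly the kind of early success that, as the next paragraph notes, can hide trouble on future seasons.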

According to Grobman, this change in behavior occurred because the model had become over-trained: it had learned the small, unimportant noise in the games it had seen in the past, not just the real signal. Since these models don't actually know what question they're being asked, and can't understand the concepts behind it, even a bad model can be made to look as though it's performing well.
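The over-training Grobman describes can be shown with a toy sketch on synthetic data (not his actual model): a model that simply memorizes random, signal-free "seasons" scores perfectly on the games it saw and no better than chance on new ones.

```python
import random

random.seed(42)

# Fake "seasons": the features are random noise and the outcome is a coin
# flip, so there is no real signal to learn -- only noise to memorize.
train = [([random.random() for _ in range(3)], random.randint(0, 1))
         for _ in range(20)]
test  = [([random.random() for _ in range(3)], random.randint(0, 1))
         for _ in range(20)]

def memorize(data):
    """'Train' by storing every example verbatim: extreme overfitting."""
    table = {tuple(x): y for x, y in data}
    def model(x):
        t = tuple(x)
        if t in table:            # seen before: recall the memorized outcome
            return table[t]
        # unseen: copy the outcome of the nearest memorized example
        key = min(table, key=lambda k: sum((a - b) ** 2 for a, b in zip(k, x)))
        return table[key]
    return model

model = memorize(train)
train_acc = sum(model(x) == y for x, y in train) / len(train)
test_acc  = sum(model(x) == y for x, y in test) / len(test)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The gap between the two numbers is the point: perfect recall of past games says nothing about future ones when what was learned was noise.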

As Grobman went on to explain, effects like this can make even the most seemingly foolproof AI look as though it's doing a great job when in reality it's not. That leaves dangerous holes for anyone who wants to launch a cyber attack, renders machine learning vulnerable at certain points, and reiterates the need for humans, even in an age when it seems machines can simply do everything we need them to do. They're vulnerable, too.

Source: geek