Machine learning is a cornerstone of modern artificial intelligence, powering new technology and reliable automated processes. But as an experiment by TheNextWeb contributor and cybersecurity expert Steve Grobman demonstrates, humans will remain valuable for some time yet, both in cybersecurity and in artificial intelligence more broadly. His simple experiment explains why.
Grobman decided to dabble in machine learning by building models to predict the winner of the Super Bowl. One model was trained on 14 years' worth of team data spanning 1996 to 2010, using regular season results, offensive strength, defensive strength and other details. The model predicted most winners correctly, though in 2009 it stumbled on the matchup between the Arizona Cardinals and the Pittsburgh Steelers. From 2011 onward, however, it made so many errors that it became less useful than simply guessing who might win.
According to Grobman, this change in the model's behavior happened because it had become over-trained: it had learned the small, unimportant noise in the historical games it was trained on rather than the underlying signal. Since these models don't actually understand the question they're being asked, or the concepts behind it, even a bad model can be made to look as though it's performing well.
As Grobman went on to explain, this means even the most seemingly foolproof AI can appear to be doing a great job when in reality it's not. That leaves dangerous openings for anyone who wants to launch a cyber attack, makes machine learning vulnerable at certain points, and reiterates the need for humans, even in an age when it seems machines can do everything we need them to do. They're vulnerable, too.