Management AI: Overfit, Why Machine Learning Isn't Trained to Perfection
At the core of most modern Machine Learning (ML) systems are artificial neural networks (ANNs). Training ANNs requires large data sets. One misconception about those data sets is the idea that "if we get enough data, we can make the system 100% accurate." Yes, that can happen, but it's not what we really want.
Many methods can be used to group data into relevant categories. The analytics can be made more and more precise by adding variables that help distinguish the items in each dataset. Let's see how that works and how it can cause problems. Notice that the figure below has two different lines representing two different algorithms for classifying data points.
The squiggly green line accurately groups every data point in the training set into the correct category. The black line is a 'best fit' algorithm with a smooth curve. What happens with each algorithm when the system moves into production and encounters real data points?
The green curve will look for the specific data points it saw in training. New data won't fit, and classification can become very inaccurate. The problem is what's called 'overfit': an algorithm tuned so tightly to the training data that it fails to generalize.
The algorithm driving the black curve will look at real data, find similar groups, and adjust much better to the actual data. By allowing for error and focusing on 'best fit,' the people implementing the algorithm are working to achieve 'high reliability,' a standard that varies greatly with the situation.
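To make the contrast concrete, here is a minimal Python sketch of the same idea, shown with regression rather than classification for brevity, and assuming scikit-learn and NumPy are available. The degree-15 polynomial plays the role of the squiggly green line, chasing every training point; the degree-3 fit plays the role of the smooth black curve. The data and polynomial degrees are illustrative choices, not from the original article.

```python
# Minimal overfitting sketch: a high-degree polynomial memorizes the
# training points, while a low-degree fit generalizes better to new data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Noisy samples of a smooth underlying signal (sin curve plus noise).
X_train = rng.uniform(0, 1, 30).reshape(-1, 1)
y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.2, 30)
X_test = rng.uniform(0, 1, 30).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel() + rng.normal(0, 0.2, 30)

for degree in (3, 15):  # smooth 'best fit' vs. squiggly overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, "
          f"test MSE {test_err:.3f}")
```

Typically the degree-15 model posts a lower training error but a higher test error than the degree-3 model: it has memorized the training set rather than learned the signal, which is exactly the production failure described above.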
Why does that matter? No expert in any field is perfect; there is always error. What we want is a system that improves on existing methods. ML can perform complex analysis much faster than humans can, but accuracy remains fluid in subjective areas, and there is always a trade-off between the cost of gaining accuracy and the benefit that improved accuracy delivers.
For instance, think about websites that recommend additional purchases based on purchase history. No reasonable person would expect a system to always present an item the customer actually needs. However, is a 65% rate of additional purchases from recommendations good or bad? That depends on the company's ROI comparison with its existing sales methods.
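For illustration only, here is how that ROI comparison might be run. Every number below (shopper count, revenue per sale, conversion rates, system cost) is made up for the example; the real figures would come from the company's own sales data.

```python
# Hedged, illustrative ROI comparison with invented numbers.
def net_gain(shoppers, conversion_rate, extra_revenue_per_sale, system_cost):
    """Net revenue from recommendation-driven purchases over a period."""
    revenue = shoppers * conversion_rate * extra_revenue_per_sale
    return revenue - system_cost

# Existing sales method: assumed 50% uptake, no added system cost.
baseline = net_gain(10_000, 0.50, 20.0, 0)
# ML recommender: the article's 65% rate, minus an assumed system cost.
ml_system = net_gain(10_000, 0.65, 20.0, 25_000)

print(f"baseline net: ${baseline:,.0f}, ML net: ${ml_system:,.0f}")
# The 65% rate is 'good' only if ml_system exceeds baseline.
```

With these made-up numbers the ML system wins narrowly ($105,000 versus $100,000), which is the point: the accuracy gain only matters if it outweighs what the system costs.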
The same holds for medical ML applications, but legal issues arise. If the average doctor is 85% accurate in diagnosing a patient given certain symptoms and an ML system is 87% accurate, logic tells us to use the computer system. However, is the company that created the system legally responsible for an incorrect diagnosis? Should the company using such a system be more, less, or equally responsible for that diagnosis than a doctor would be? The adoption of certain types of machine learning depends as much on the legal climate as on the technology.
Alternatively, look at an area that gets a lot of press: autonomous vehicles. While 65% accuracy might be good for a recommendation system, vehicles will need accuracy well in excess of 99% to provide the safety necessary for widespread use. That means the people building autonomous vehicles can't put all their focus on ANNs. Rule-based systems and more formal algorithms will need to be incorporated into standard operations, while ML is leveraged to quickly handle exceptional circumstances, as sketched below. Autonomous vehicles rely on a complex interaction of technologies, and while ML is a key part, it can't be the sole focus.
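A hypothetical sketch of that division of labor, in Python: the `Observation` type, the rules, and the `ml_classify` stand-in are all invented for illustration, and a real vehicle stack is vastly more complex. The point is the structure, with verifiable rules covering routine cases and the ML component consulted for the exceptions.

```python
# Hypothetical hybrid architecture: deterministic rules handle standard
# operation; an ML classifier (a stand-in here) handles exceptional input.
from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_distance_m: float  # distance to nearest detected object
    recognized: bool            # did rule-based perception identify it?

def ml_classify(obs: Observation) -> str:
    """Placeholder for a trained model scoring unusual situations."""
    return "brake" if obs.obstacle_distance_m < 30 else "proceed"

def decide(obs: Observation) -> str:
    # Formal, auditable rules run first for routine cases.
    if obs.recognized and obs.obstacle_distance_m < 10:
        return "brake"
    if obs.recognized:
        return "proceed"
    # Unrecognized input: defer to the ML component for the exception.
    return ml_classify(obs)

print(decide(Observation(obstacle_distance_m=8.0, recognized=True)))    # brake
print(decide(Observation(obstacle_distance_m=50.0, recognized=False)))  # proceed
```

Keeping the rules on the primary path means the safety-critical behavior stays inspectable and testable, while the ML component adds coverage for situations the rules' authors didn't anticipate.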
Managers do not need to understand the mathematics behind machine learning and the problem of overfitting. What they do need to grasp is that perfection isn't an option with ANNs; technology teams must work with business teams to ensure the appropriate level of fit for the business's needs.