
Artificial intelligence is no longer just a niche subfield of computer science. Tech giants have been using AI for years: Machine learning algorithms power Amazon product recommendations, Google Maps, and the content that Facebook, Instagram, and Twitter display in social media feeds. But William Gibson's adage applies well to AI adoption: "The future is already here; it's just not evenly distributed."

The average company faces many challenges in getting started with machine learning, including a shortage of data scientists. But just as important is a shortage of executives and nontechnical employees able to spot AI opportunities. And spotting those opportunities doesn't require a Ph.D. in statistics or even the ability to write code. (It will, spoiler alert, require a brief trip back to high school algebra.)

Having an intuition for how machine learning algorithms work - even in the most general sense - is becoming an important business skill. Machine learning scientists can't work in a vacuum; business stakeholders should help them identify problems worth solving and allocate subject matter experts to distill their knowledge into labels for datasets, provide feedback on output, and set the objectives for algorithmic success.

As Andrew Ng has written: "Almost all of AI's recent progress is through one type, in which some input data (A) is used to quickly generate some simple response (B)."

But how does this work? Think back to high school math - I promise this will be brief - when you first learned the equation for a straight line: y = mx + b. Algebraic equations like this represent the relationship between two variables, x and y. In high school algebra, you'd be told what m and b are, be given an input value for x, and then be asked to plug them into the equation to solve for y. In this case, you start with the equation and then calculate particular values.
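The high school version of the problem is easy to sketch in a few lines of code. The slope and intercept here (m = 2, b = 1) are made-up values for illustration, not from the article:

```python
# The "high school algebra" direction: m and b are given,
# and we plug in x to compute y.
def predict(x, m=2.0, b=1.0):
    """Solve for y given x, using y = mx + b."""
    return m * x + b

print(predict(3.0))  # 2.0 * 3.0 + 1.0 = 7.0
```

Given the equation, producing any particular y is just arithmetic; the interesting question is how to go the other way.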

Supervised learning reverses this process, solving for m and b, given a set of x's and y's. In supervised learning, you start with many particulars - the data - and infer the general equation. And the learning part means you can update the equation as you see more x's and y's, changing the slope of the line to better fit the data. The equation almost never identifies the relationship between each x and y with 100% accuracy, but the generalization is powerful because later on you can use it to do algebra on new data. Once you've found a slope that captures a relationship between x and y reliably, if you are given a new x value, you can make an educated guess about the corresponding value of y.
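The "learning" described above can be sketched in a few lines. This is a minimal, hypothetical example: the data points are fabricated to lie on the line y = 2x + 1, and the update rule is a simple gradient-descent step, one common way (among several) that algorithms nudge m and b toward a better fit as they see more data:

```python
# Hypothetical data: (x, y) pairs drawn from the line y = 2x + 1.
points = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

m, b = 0.0, 0.0       # start with a guess for slope and intercept
learning_rate = 0.05

# Each pass over the data, the algorithm checks how far off its
# current line is and adjusts m and b to reduce that error.
for _ in range(2000):
    for x, y in points:
        error = (m * x + b) - y          # prediction minus actual
        m -= learning_rate * error * x   # nudge the slope
        b -= learning_rate * error       # nudge the intercept

print(round(m, 2), round(b, 2))   # ends up close to 2.0 and 1.0

# With m and b learned, we can make an educated guess for a new x:
new_x = 10.0
print(round(m * new_x + b, 1))    # close to 21.0
```

The point of the sketch is the reversal: the code never sees the true m and b; it infers them from particulars and then reuses the general equation on data it has never seen.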

As you might imagine, many exciting machine learning problems can't be reduced to a simple equation like y = mx + b. But at their essence, supervised machine learning algorithms are also solving for complex versions of m, based on labeled values for x and y, so they can predict future y's from future x's. If you've ever taken a statistics course or worked with predictive analytics, this should all sound familiar: It's the idea behind linear regression, one of the simpler forms of supervised learning.
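For the simple one-variable case, linear regression doesn't even need iteration: there is a textbook closed-form formula for the best-fitting m and b. The dataset below (hours studied vs. test score) is invented for illustration:

```python
# Hypothetical data: hours studied (x) and test score (y).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [52.0, 61.0, 68.0, 77.0, 84.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates: the slope is the covariance
# of x and y divided by the variance of x; the intercept makes the
# line pass through the point of means.
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - m * mean_x

print(round(m, 2), round(b, 2))
print(round(m * 6.0 + b, 1))  # predicted score after 6 hours of study
```

Real machine learning problems replace the single slope with thousands or millions of coefficients, but the shape of the task is the same: fit parameters to labeled examples, then predict.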