Machine Learning Crash Course: Part 3 Tutorial

By Nand Kishor | May 26, 2017

Neural networks are perhaps one of the most exciting recent developments in machine learning. Got a problem? Just throw a neural net at it. Want to make a self-driving car? Throw a neural net at it. Want to fly a helicopter? Throw a neural net at it. Curious about the digestive cycles of your sheep? Heck, throw a neural net at it. This extremely powerful algorithm holds much promise (but can also be a bit overhyped). In this article we'll go through how a neural network actually works, and in a future article we'll discuss some of the limitations of these seemingly magical tools.

The Biology

The biological brain is perhaps the most powerful and efficient computer that we know of. Compared to our complex organ, even our most powerful supercomputers are a joke. In 2014, Japanese researchers used a supercomputer to simulate just one second of human brain activity. It took 40 minutes and 9.9 million watts. As for the real thing? The little ball of grey matter in our skulls runs on only 20 watts, which translates to roughly one McChicken a day.

Neglecting a lot of details, biological neurons are cells that send and receive electrical impulses from other neurons that they are connected to. A neuron will only fire an electrical impulse when it receives impulses from other neurons that together are stronger than a certain threshold. Anything lower than that threshold and the neuron won't do anything. Just what that threshold is depends on the chemical properties of the neuron in question and varies from neuron to neuron. Upon firing, an electrical impulse shoots out of the neuron and into more neurons downstream and the process continues. In the brain, billions of these interconnected neurons communicating with each other form the basis for consciousness, thought, and McChicken cravings.

The History

In the mid-1900s, a couple of researchers came up with the idea of creating a mathematical model based on how the brain works. They first created a model of a single neuron that imitated a real neuron's inputs, outputs, and threshold. The outputs of these single artificial neurons were then fed into even more artificial neurons, creating an entire artificial neural network.

There was just one problem: While researchers had created a model of the human brain, they had no way of teaching it anything. The artificial brain could be wired in whatever way researchers wanted, but the vast majority of these wirings didn't create a brain that had any logical output at all. What was needed was a learning algorithm for their artificial brain.

It was not until the 1980s that an efficient learning algorithm was applied to neural networks. That algorithm, called backpropagation, finally allowed neural networks to be trained to do amazing things such as understanding speech and driving cars.

The Model (Overview)

Now that we know the basics of how the brain works and the history of neural networks, let's look at what a neural network actually does. First off, we'll think of our neural network as a black box, some machine whose inner workings we don't really know about yet. We want this machine to take in some set number of numerical inputs (that we can choose) and spit out a set number of numerical outputs (that we can also choose).


A neural network takes in some inputs, math happens, and some number of outputs pop out

For example, if we want to classify images (say apples and oranges) then we'd want the number of inputs to be the number of pixels in our images, and the number of outputs to be the number of categories we have (two for the case of apples and oranges). If we were trying to model housing prices then the number of inputs would be the number of features we have, such as location, number of bathrooms, and square footage, and the number of outputs would be just one, for the price of the house.

Our machine has inputs and outputs, but how do we control what inputs create what outputs? That is, how do we change the neural network so certain inputs (say an image of an apple) give the correct outputs (say a 0 for the probability of being an orange and a 1 for the probability of being an apple)? Well, we can add "knobs" to our machine to control the output for a given input. In machine learning lingo, these "knobs" are called the parameters of a neural network. If we tune these knobs to the correct place, then for any input we can get the output that we want.

Going back to our apples and oranges example, if we give our machine an image of an apple but it tells us it thinks it's an orange then we can go ahead and adjust the knobs of our machine (in other words, tune the parameters) until the machine tells us it sees an apple. In essence, this is what it means to train a neural network and this is exactly what the backpropagation algorithm does.
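To make the knob analogy concrete, here is a rough sketch (hypothetical names and numbers, plain Python, not code from this course) of a machine whose output depends on both its inputs and the current position of its knobs; training amounts to searching for the knob settings that produce the outputs we want:

def machine(inputs, knobs):
    # A stand-in for a neural network: the output depends on the inputs
    # and on where the knobs (parameters) are currently set.
    # The last knob plays the role of an offset.
    return sum(k * x for k, x in zip(knobs, inputs)) + knobs[-1]

house = [3, 2, 1500]                          # bathrooms, location score, square footage
untrained = [0.0, 0.0, 0.0, 0.0]              # knobs in arbitrary starting positions
trained = [5000.0, 20000.0, 150.0, 10000.0]   # knobs after (hypothetical) tuning

print(machine(house, untrained))   # 0.0      -- not a useful price estimate
print(machine(house, trained))     # 290000.0 -- the kind of answer we were after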

The Model (Details)

Now that we know what a neural network should do and roughly how we can get it to learn, let's peer inside the black box and talk about what is happening inside the network. To start, we'll discuss what happens inside a single artificial neuron and build it up from there.

For those who have read our post on perceptrons, this will be very familiar material. That's because a neuron in a neural network is basically a perceptron on steroids. Similar to a perceptron, a neuron takes in any number of numerical inputs and spits out just one output. To get to this output, the neuron calculates an intermediate value called s by multiplying each input by a different weight, adding them all together, and adding an additional number called the bias. In math: s = weight_1 · input_1 + ... + weight_n · input_n + bias

A neuron weights its inputs and then sums them up with a bias. An activation function is then applied, which produces the output for the neuron

Now each neuron could simply output s, but that would be a bit boring as s is just a linear function, which makes it rather inflexible for modeling real-world data. What we want to do instead is add one more step, called an activation function. An activation function is any function that takes in our s and gives the output of our neuron, called the activation. The perceptron that we described in the last post gave definitive yes/no answers using a blocky step function as its activation function.
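As a rough sketch of this computation (made-up names and numbers, not code from the course), a single neuron weights its inputs, adds a bias to get s, and then passes s through a blocky step activation, just like the perceptron from the last post:

import numpy as np

def step(s):
    # the perceptron-style activation: a hard yes/no answer
    return 1.0 if s > 0 else 0.0

def neuron(inputs, weights, bias):
    # s = weight_1 * input_1 + ... + weight_n * input_n + bias
    s = np.dot(weights, inputs) + bias
    # the activation function turns s into the neuron's output (its activation)
    return step(s)

x = np.array([0.5, -1.0, 2.0])   # three inputs
w = np.array([0.8, 0.2, -0.5])   # one weight per input
print(neuron(x, w, bias=0.1))    # either 0.0 or 1.0, nothing in between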

For the step function, there is no way to tell how close you are to a "yes" or a "no"

However, using a step function makes training very difficult because there's no way to tell whether the neural network is getting closer or farther from the correct answer. Imagine you are an ant that can only see things very close to you. You are on the higher part of the step function trying to get to the lower part of the step function. But because everything is so flat, you wouldn't know how far away the "step" part of the step function is, or even in which direction it is. The "blocky" structure makes the step function a bad activation function for neural networks.

To make it easier to train a network, we'll use a function that is smooth (in other words, a differentiable function). For example, we can use the sigmoid function, which looks something like this:

A sigmoid function is a nice activation function because it is smooth everywhere, making it easier to figure out if you're getting closer to the top

Going back to our ant analogy, an ant could figure out exactly which direction to go and how far to go just by checking in which direction and how much the graph slopes at its current location. Even though the ant can't see the low part of the sigmoid function, it can get a rough idea of where it is by checking whether the part of the function it is standing on slopes up or down.
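To put some numbers on the ant analogy, here is a small sketch (again with made-up names) of the sigmoid and its slope; unlike the step function, the slope is nonzero almost everywhere, so it always hints at which direction leads uphill or downhill:

import numpy as np

def sigmoid(s):
    # smoothly squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-s))

def sigmoid_slope(s):
    # the derivative of the sigmoid: how steeply the curve rises at s
    y = sigmoid(s)
    return y * (1.0 - y)

for s in [-4.0, -1.0, 0.0, 1.0, 4.0]:
    print(s, round(sigmoid(s), 3), round(sigmoid_slope(s), 3))
# the slope is largest near s = 0 and shrinks toward the flat ends,
# but it never drops to exactly zero the way the step function's does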

Linking it all together

We wouldn't have much of a network if we just had one neuron, would we? The secret to a neural network's ability to make complex decisions lies in its internal structure of interconnected neurons. Just like how neurons in the brain are connected to each other, the output of one neuron becomes the input of another neuron, allowing the neurons to work together to come up with the correct answer.
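As a final sketch (a toy example, not the article's code), feeding the outputs of one layer of neurons in as the inputs of the next is all it takes to link single neurons into a small network:

import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def layer(inputs, weights, biases):
    # one row of weights per neuron in the layer; every neuron
    # computes its own s and activation from the same inputs
    return sigmoid(weights @ inputs + biases)

x = np.array([0.5, -1.0])           # two inputs to the network

W1 = np.array([[0.8, 0.2],          # hidden layer: three neurons, two weights each
               [-0.5, 1.0],
               [0.3, 0.7]])
b1 = np.array([0.1, 0.0, -0.2])

W2 = np.array([[1.2, -0.7, 0.5]])   # output layer: one neuron fed by the three hidden outputs
b2 = np.array([0.05])

hidden = layer(x, W1, b1)           # outputs of the first neurons...
output = layer(hidden, W2, b2)      # ...become the inputs of the next
print(output)                       # a single number between 0 and 1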


Source: Berkeley