Master Deep Learning in 6 Weeks

By ridhigrg | Jan 29, 2019

Here is a video that shows you how to learn deep learning in six weeks. It will help you learn the art of deep learning, and the only prerequisite is knowing basic Python.
By the end of this curriculum, you'll have a broad understanding of some of the key technologies that make up deep learning. You might be wondering: why deep learning without machine learning first? Machine learning is a broad set of algorithms used to derive insights from datasets.

Deep learning is a subset of those algorithms, specifically the various types of neural networks. When applied to massive datasets and given massive computing power, these neural networks will outperform all other models most of the time. Deep learning is the hottest field in AI right now, and it's responsible for everything from Google's latest Duplex assistant to Tesla's self-driving cars to robot companions.

You can get a job as a deep learning engineer by browsing through listings on AngelList, Hacker News, and Indeed.com. When it comes to getting hired, what employers usually look for is experience building, training, and deploying deep learning models for real-life use cases. By uploading one project every week to your GitHub profile, you'll have built a substantial portfolio to show prospective employers, and some of the projects can even be deployed to Heroku to be usable as web apps. So where do we begin?

Deep learning is built on mathematical principles. It's all math, really: specifically linear algebra, probability theory, calculus, and statistics. Each of these subfields of math covers an enormous breadth of topics, and it's a little daunting to try to tackle them all. Ideally, we could learn from a source that teaches us these subjects, but not all of them, just the relevant parts that apply to deep learning. The Deep Learning book by Google Brain researcher Ian Goodfellow does a perfect job of that. It's free and openly available at deeplearningbook.org. For the first week of this curriculum, read part one of this book. It does an incredible job of diving into the specific subtopics in each subject that are used consistently in deep learning. Terms like derivative and dot product are explained in an easy-to-understand way, with ample explanation behind the math notation that's used. This video also includes a cheat sheet for math notation to help you understand what the symbols mean if you don't.

Don't worry if you don't get how all of these concepts are used in a neural network right away; just get yourself familiar with the ideas presented in the book first. Know that a matrix is a grid of numbers, and that matrices can be operated on in different ways. Know that a distribution models the probabilities of different possible outcomes occurring in an experiment. Once you've read part one, it's time to build your first neural network. Watch my Build a Neural Network in 4 Minutes video on YouTube to get started, then read Andrew Trask's amazing blog post, A Neural Network in 11 Lines of Python.
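To make those two ideas concrete, here is a minimal NumPy sketch (the values are just examples) of a matrix, a dot product, and samples drawn from a distribution:

```python
import numpy as np

# A matrix is a grid of numbers: here, 2 rows x 3 columns.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Matrices can be operated on: transposed, scaled, multiplied.
B = A.T              # 3x2 transpose
C = A.dot(B)         # dot product -> 2x2 matrix
print(C)

# A distribution models the probabilities of different outcomes:
# here, 10 samples from a normal (Gaussian) distribution.
samples = np.random.normal(loc=0.0, scale=1.0, size=10)
print(samples)
```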

WEEK 1
Fire up your text editor and start coding your neural network in Python. You don't have to memorize it; you could even type it out line by line as you watch my video, or stare at the finished code, but the simple act of typing it out will be beneficial to your memory. A neural network is simply a series of operations applied to some input data until it results in some output data. Nothing really special about that; the real magic of these things is the result of an optimization technique called backpropagation. It's how a neural network learns to improve its output over time, and it's a technique from calculus. I have a great video on this called Backpropagation in 5 Minutes; check that one out. By the end of this week, you should have a basic idea of how a simple feedforward neural network works, backpropagation included. Once you've mastered that, everything else becomes easier: every other neural network is just some variation of this, and each variation excels at a specific use case. Speaking of variations.
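As a reference point, here is a minimal sketch of such a network in NumPy, in the spirit of Trask's post; the toy dataset and layer sizes are just examples:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: 4 examples, 3 input features, 1 target each.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]])
y = np.array([[0], [1], [1], [0]])

np.random.seed(1)
w0 = 2 * np.random.random((3, 4)) - 1   # input -> hidden weights
w1 = 2 * np.random.random((4, 1)) - 1   # hidden -> output weights

for step in range(60000):
    # Forward pass: a series of operations applied to the input.
    l1 = sigmoid(X.dot(w0))
    l2 = sigmoid(l1.dot(w1))

    # Backpropagation: push the error backwards through the network
    # using the chain rule (sigmoid'(x) = s * (1 - s)).
    l2_delta = (y - l2) * (l2 * (1 - l2))
    l1_delta = l2_delta.dot(w1.T) * (l1 * (1 - l1))

    # Gradient updates, written out by hand.
    w1 += l1.T.dot(l2_delta)
    w0 += X.T.dot(l1_delta)

print(l2)  # predictions should approach [0, 1, 1, 0]
```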

WEEK 2
Convolutional networks. There are a lot of different types of data out there: numbers, text, video, audio; and different types of networks can be used to learn from each of them. While feedforward networks are great for learning the mapping between numerical input and output data, convolutional networks are well suited to learning from image datasets. If we think of an image as a group of numbers, each describing a pixel's intensity value on an RGB scale, then we can consider it a matrix, and this matrix can be input into a neural network, operated on, and result in an output, which is a class probability.

Conv nets were invented by Yann LeCun's team over two decades ago and are still responsible for some of the state-of-the-art advances in computer vision technology, including driverless vehicles. A great resource to learn about them is the Convolutional Neural Networks course on Coursera, taught by Professor Andrew Ng of Stanford University; it dives into both the architecture-specific and application-specific details in an easy-to-understand way. Like all neural networks, conv nets have lots of variations: deconvolutional networks, skip-connection networks; they can also be used as building blocks for more complicated models like variational autoencoders.
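The core operation behind a conv net is simple: slide a small filter over the image matrix and take an elementwise product and sum at each position (this is what deep learning libraries call convolution). Here is a minimal NumPy sketch; the image and the edge-detecting filter are just examples:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image,
    taking an elementwise product and sum at each position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny grayscale "image" (pixel intensities) with a vertical edge,
# and a filter that responds strongly to vertical edges.
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]], dtype=float)

print(conv2d(image, edge_filter))  # strong responses along the edge
```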

 WEEK 3
Recurrent networks. While feedforward nets are great for numerical data and conv nets are great for images, recurrent networks are great for sequential data: any kind of data where time matters, like audio, video (since videos are sequences of image frames), and stock price data. Recurrent networks are the perfect network to use here. Why, you might ask? Normally, a neural network only uses the current data point as input, but a recurrent network takes both the current data point and the learned state value from the previous time step as input. This recurrence allows it to remember data sequentially. A course on this, again on Coursera by Professor Ng, is called Sequence Models. It covers the variations of recurrent networks, including long short-term memory (LSTM) and gated recurrent unit (GRU) neural networks.
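That recurrence fits in a few lines. Here is a minimal NumPy sketch of a single vanilla recurrent step; the dimensions and weight names are just illustrative:

```python
import numpy as np

# Illustrative sizes: 3 input features, 5 hidden state units.
np.random.seed(0)
W_xh = np.random.randn(3, 5) * 0.1   # input -> hidden weights
W_hh = np.random.randn(5, 5) * 0.1   # previous hidden -> hidden weights
b_h = np.zeros(5)

def rnn_step(x_t, h_prev):
    """One recurrent step: the new state mixes the current input
    with the learned state from the previous time step."""
    return np.tanh(x_t.dot(W_xh) + h_prev.dot(W_hh) + b_h)

# Run a short sequence through the recurrence.
h = np.zeros(5)                      # initial state
sequence = np.random.randn(4, 3)     # 4 time steps, 3 features each
for x_t in sequence:
    h = rnn_step(x_t, h)
print(h)  # final state summarizes the whole sequence
```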

Additionally, there are some great videos on this; links to those will be in the provided syllabus. At the end of this week, make sure to write out a simple recurrent network, using Andrew Trask's LSTM-RNN in Python blog post as a guide.

WEEK 4
Now that we have the basic types of neural networks out of the way, week 4 will be all about getting yourself familiar with some of the tooling in this space. Because up to this point you've only built your neural networks using NumPy, you'll appreciate the benefits of using libraries like TensorFlow and Keras: rather than typing out gradient updates by hand, you can benefit from TensorFlow's automatic differentiation, as the sketch below shows. Of all the deep learning libraries, I have found TensorFlow to be the best tool to use, since it offers a complete pipeline for AI development, including building, testing, training, and serving models in production.
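To see what automatic differentiation buys you, here is a minimal sketch, assuming TensorFlow 2's eager-execution API; the tensors and learning rate are just examples:

```python
import tensorflow as tf

# The same kind of gradient we derived by hand in week 1, done automatically.
w = tf.Variable([[0.5, -0.3], [0.8, 0.1]])
x = tf.constant([[1.0, 2.0]])
y_true = tf.constant([[1.0, 0.0]])

with tf.GradientTape() as tape:
    y_pred = tf.sigmoid(tf.matmul(x, w))                # forward pass
    loss = tf.reduce_mean(tf.square(y_true - y_pred))   # squared error

# Automatic differentiation: no hand-written chain rule needed.
grad = tape.gradient(loss, w)
w.assign_sub(0.1 * grad)   # one gradient-descent step
print(loss.numpy(), grad.numpy())
```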

A great course on this is CS20 by Stanford, called TensorFlow for Deep Learning Research. Read the documentation page examples to quickly get an understanding of how it works. You'll also want to read this blog post that compares some of the best GPU cloud providers, so you get a sense of their pros and cons and can decide which one is the best for you to use. At the end of this week, you should write a simple image classification demo using TensorFlow as practice.
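As one way to approach that demo, here is a minimal sketch using the MNIST digits dataset that ships with Keras; the layer sizes and epoch count are just example choices:

```python
import tensorflow as tf

# Load a small, built-in image dataset: 28x28 grayscale digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

# A tiny classifier: flatten the image matrix, one hidden layer,
# then a softmax over the 10 digit classes (class probabilities).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))   # [test loss, test accuracy]
```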
WEEK 5
Once we've got that down, in week 5 we can peer into one of the newest models in deep learning: the generative adversarial network (GAN). It allows us to generate all sorts of data, and it's currently very popular. There are some great videos on GANs on YouTube, and you can also find some interesting lectures on GANs, compiled in the syllabus, by researchers across the field who explore their possibilities across a wide range of applications. The two projects for this week: build a GAN from scratch, and build a GAN using TensorFlow.
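As a starting point for the TensorFlow version, here is a minimal sketch of the adversarial training loop on toy one-dimensional data, assuming TensorFlow 2's Keras API; a real project would swap in image data and convolutional layers:

```python
import tensorflow as tf

# Toy task: learn to generate samples from N(3, 1) out of random noise.
real_data = lambda n: tf.random.normal([n, 1], mean=3.0, stddev=1.0)
noise = lambda n: tf.random.normal([n, 8])

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(1,)),
    tf.keras.layers.Dense(1),   # logit: real vs. fake
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

for step in range(2000):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise(64))
        real_logits = discriminator(real_data(64))
        fake_logits = discriminator(fake)
        # Discriminator: label real samples 1, generated samples 0.
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: fool the discriminator into outputting 1.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))

print(tf.reduce_mean(generator(noise(1000))))   # should approach 3.0
```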

WEEK 6
For the last week, we can focus on the most bleeding-edge technique of all: deep reinforcement learning. This is what's responsible for some of the latest breakthroughs in the field, including AlphaGo, the Atari deep Q-learner, and more. Berkeley recently released a free course called CS 294: Deep Reinforcement Learning. You can find all the videos on YouTube, a Reddit community of students, and a bunch of helper materials on their website. This one is nontrivial, and it could even take you two weeks to finish this bit. For a final project, create a deep Q-learning algorithm using TensorFlow; you can have it play Atari games using the OpenAI Gym environment.
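Before tackling the deep version, it helps to see the plain Q-learning update that a deep Q-network approximates with a neural network. Here is a minimal tabular sketch, assuming the classic Gym API; the environment name and hyperparameters are just examples:

```python
import gym
import numpy as np

# Classic Gym API assumed: reset() -> state, step() -> (s', r, done, info).
env = gym.make('FrozenLake-v0')
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # example hyperparameters

for episode in range(5000):
    state = env.reset()
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit the Q-table, sometimes explore.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = np.argmax(Q[state])
        next_state, reward, done, _ = env.step(action)
        # The Q-learning update; a deep Q-network replaces this table
        # with a neural network trained toward the same target.
        target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

print(Q)
```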
Deep learning is the dark art of our time: extremely powerful, and mysteriously good at everything we throw at it.

Source: HOB