What's Next for Deep Learning and Why Are Machine Learning Experts Turning to Deep Learning?

By Jyoti Nigania | Jul 25, 2018 | 3819 Views

What is the future of Deep Learning? Are most Machine Learning experts turning to Deep Learning?

Answered by Divyansh Agarwal on Quora:
Two areas in which the future of deep learning lies are the following:
1. Interpretation Techniques for Deep Learning:
Deep Learning is not very interpretable, and this makes it undesirable in cases where it is important to understand why a model is making certain predictions. For example, if you are a venture capitalist using Deep Learning to decide which start-up to invest in, and the model tells you to invest in a certain blockchain start-up, you might want to understand why it is recommending that particular start-up, so you can decide whether the prediction makes sense and whether you agree with the criteria the model used.
In general, you also want to make sure that your deep learning model does not overfit, and good interpretation techniques can help reveal whether it does (although they are not the only way to determine this, and in some cases may not even be the best way).
Interpretation techniques for deep learning are a very active area of research, and some good techniques such as Contextual Decomposition have been coming out recently. While Contextual Decomposition is currently limited to LSTMs, quality interpretation techniques for other models are expected to emerge in the future.
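
To make the idea concrete, here is a minimal sketch of a gradient-based saliency method in PyTorch. This is a deliberately simple technique, not Contextual Decomposition itself, and the tiny classifier is purely illustrative:

```python
# A minimal gradient-based saliency sketch (illustrative; this is a much
# simpler technique than Contextual Decomposition, which the text mentions).
import torch
import torch.nn as nn

# Hypothetical tiny classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)  # one example with 10 features
logits = model(x)
target = logits.argmax(dim=1).item()        # class the model predicts

# Backpropagate the predicted logit to the input: a large |gradient| means
# the feature strongly influenced this particular prediction.
logits[0, target].backward()
saliency = x.grad.abs().squeeze()
print(saliency)  # per-feature importance scores for this one prediction
```

Features with large gradient magnitudes are the ones the model leaned on most for this prediction, which is the kind of explanation a human reviewer, such as the venture capitalist above, can sanity-check.
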
2. Bayesian Deep Learning: 
Sometimes it helps to have a measure of uncertainty about your model's predictions. Deep learning models do not currently give you confidence intervals for their predictions, and Bayesian techniques can be used to obtain them. While some research has been done in this area, it is still relatively nascent.
Bayesian approaches can also help make deep learning less reliant on large training datasets. A related example is active learning, which uses Bayesian techniques to update models with small amounts of data. Less reliance on large datasets will allow deep learning solutions to be implemented by start-ups that don't have much data to begin with. This is highly desirable for, say, a fashion start-up that wants to provide quality personalized clothing recommendations but has little data to start with.
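
One practical approximation worth knowing here is Monte Carlo dropout (Gal & Ghahramani, 2016), which reads the spread of many stochastic forward passes as predictive uncertainty. The sketch below is illustrative; the network and data are made up:

```python
# Monte Carlo dropout: one practical approximation to Bayesian deep learning
# (Gal & Ghahramani, 2016). Keeping dropout active at prediction time and
# sampling many forward passes yields a spread that can be read as uncertainty.
import torch
import torch.nn as nn

# Hypothetical untrained regressor, purely for illustration.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)
model.train()  # keeps dropout ON at inference time: the trick behind MC dropout

x = torch.randn(1, 10)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])  # 100 stochastic passes

mean, std = samples.mean(), samples.std()
# A rough ~95% interval; a wider interval signals a less confident model.
print(f"prediction {mean.item():.3f} +/- {1.96 * std.item():.3f}")
```
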
Advancements in these two areas will make deep learning more interpretable, and allow it to be less data-hungry, thus mitigating two major problems in deep learning. There are other aspects that I may not have covered, since deep learning is a vast field, and I hope other answers provide insights into the aspects missing from my answer.

According to Elijah Philpotts on Quora:
Many companies are only now turning to machine learning solutions to fuel insights, so we're still on the basic machine learning part of the spectrum right now. There are a few reasons why:

Deploying deep learning models to a scalable system is very hard: setting up the appropriate number of layers, hyperparameter optimization, and so on is still a great burden on a company and usually not worth the effort. Also, what if the company needs to re-train the model? That's a different beast in and of itself. Think of it from this perspective: some companies use logistic regression over decision trees and ensembles because it fits much better into their stack. If some large companies can't use ensembles in their stack, then deep learning methods are way out of the question right now.
Deep learning, as others have mentioned, still takes a long time to train and test. You need some serious GPU power to get this done (and that isn't cheap!). Basic machine learning models can be trained on a cheap laptop.
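
To illustrate just how laptop-cheap a basic model is, here is a hypothetical scikit-learn baseline on synthetic data; it trains in well under a second on a CPU:

```python
# A basic model really is laptop-cheap: logistic regression on 10,000 synthetic
# examples trains in well under a second on a CPU, with no GPU involved.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```
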
Basic machine learning models still solve most business problems. Until we can prove the viability of deep learning as a money-maker to companies and ease its integration into their software architecture, it won't be the "in" thing corporate-wise. Basic machine learning methods are much easier to explain to a business executive than neural networks, which tend to require neurological analogies. As Zeeshan pointed out, deep learning has made massive breakthroughs at a much faster rate than anyone could have imagined.

There are a lot of things that are next for deep learning. Instead of thinking of moving forward in one direction, think of expanding outward in many directions:

  • Better reinforcement learning/integration of deep learning and reinforcement learning. Reinforcement learning algorithms that can reliably learn how to control robots, etc.

  • Better generative models. Algorithms that can reliably learn how to generate images, speech and text that humans can't tell apart from the real thing (a toy sketch appears after this list).

  • Learning to learn and ubiquitous deep learning. Algorithms that redesign their own architecture, tune their own hyperparameters, etc. Right now it still takes a human expert to run the learning-to-learn algorithm, but in the future it will be easier to deploy, and all kinds of businesses that don't specialize in AI will be able to leverage deep learning.

  • Machine learning for security, security for machine learning. More cyberattacks will leverage machine learning to make more autonomous malware, more efficient fuzzing for vulnerabilities, etc. More cyberdefenses will leverage machine learning to respond faster than a human could, detect more subtle intrusions, etc. ML algorithms from opposing camps will fool each other to carry out both attacks and defensive actions.

  • Dynamic routing of activity will lead to much larger models that may use even less computation to process a single example than current models use today. But overall, massive amounts of computation will continue to be key for AI; whenever we make one model use less computation, we'll just want to run thousands of models in parallel to learn-to-learn them.

  • Semi-supervised learning and one-shot learning will reduce the amount of data needed to train several kinds of models and make AI use more widespread.

  • Research will focus on making extremely robust models that almost never make a mistake, for use in safety-critical applications.

  • Deep learning will continue to spread out into general culture and we'll see artists and meme creators using it to do things that we never would have anticipated. I think Alexei Efros's lab and projects like CycleGAN are the start of this.
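
As a concrete (and deliberately toy) illustration of the generative-models point above, here is a minimal GAN sketch in PyTorch that learns to mimic a one-dimensional Gaussian. The architecture and hyperparameters are arbitrary choices for demonstration, not a recipe:

```python
# A toy GAN (illustrative only): the generator learns to mimic N(4, 1.25)
# and the discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 4.0 + 1.25 * torch.randn(64, 1)   # samples from the target distribution
    fake = G(torch.randn(64, 8))

    # Discriminator step: push real toward label 1 and fake toward label 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into labelling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    out = G(torch.randn(1000, 8))
print(f"generated mean {out.mean().item():.2f}, std {out.std().item():.2f}")  # target: 4 and 1.25
```

The same adversarial loop, scaled up with convolutional networks and image data, is what drives the photorealistic generators the bullet point anticipates.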

Source: HOB