Which is the most Popular Deep Learning Framework?
Aug 8, 2018 | 1467 Views
Deep learning is a collection of machine learning techniques for building and training neural networks. There are many frameworks, such as TensorFlow, Theano, Caffe, Torch, and PyTorch. If you are selecting one framework to learn, I would choose TensorFlow, because that is where the field appears to be heading. TensorFlow is gaining popularity rapidly, and since it is backed by Google it is under consistent, continuous development, while remaining open source and free.
TensorFlow, released to the public in November 2015, is the newest of the frameworks you mentioned (TensorFlow, Theano, Caffe, Torch). Even so, it is gaining popularity at a remarkable rate. You can see this clearly from the chart below, which shows the number of Stack Overflow questions posted about each framework.
In addition, it is more popular on GitHub. As of Oct 1, 2016, TensorFlow has been starred 33,133 times and forked 14,421 times. Other frameworks, despite having been available for much longer, have lower numbers: Theano, for example, has been starred only 4,653 times and forked 1,656 times. You are also right that Keras and "sk-flow" are simplified high-level interfaces on top of TensorFlow (Keras supports both Theano and TensorFlow), so they cannot be compared to TensorFlow directly.
All of these deep learning frameworks are popular, but I would go with PyTorch. This question is older and does not include PyTorch as an option, yet most professionals now suggest it. TensorFlow, Caffe, Theano, Torch, and Keras are some of today's popular open-source deep learning frameworks. They differ from one another in the languages they support, the availability of tutorials and training materials, their convolutional and recurrent neural network modeling capabilities, ease of use, speed, and support for multiple GPUs. Someone who is a scientist might have a very different viewpoint. This answer comes after a long analysis of all the frameworks.
Before PyTorch came along, there were two types of neural net frameworks:
1. The research frameworks: you can implement new architectures, try out variations, and look under the hood to see what is wrong before you tweak your network. Examples: TensorFlow, Theano, and Torch (which required Lua to work, not Python).
2. The practitioner's frameworks: Keras used to reign supreme in applied industry, and Caffe among vision people (especially those working in universities). Take a standard architecture, train or transfer-learn, and profit. This was good in the early days of deep learning, when transfer learning from ImageNet to distinguish dogs from cats was done by companies' internal teams, but with APIs from Google or Amazon now available, that era has passed. They are still good for getting started with deep learning, though.
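The "train/transfer learn" workflow above can be sketched without any framework at all. This is a toy, framework-free illustration, assuming a stand-in feature function and made-up data (none of these names are a real API): take a pretrained feature extractor, freeze it, and fit only a small new head on your task.

```python
# Toy sketch of the practitioner workflow: frozen backbone + trainable head.
# Everything here (features, data, hyperparameters) is illustrative only.

def pretrained_features(x):
    # Stand-in for a frozen ImageNet-style backbone: its parameters are
    # fixed and are never updated during "fine-tuning".
    return [x, x * x]

# Trainable head: a linear layer on top of the frozen features.
w = [0.0, 0.0]
b = 0.0

# Toy regression task: target = 3*x + 1 (learnable from the first feature).
data = [(x, 3 * x + 1) for x in [-2.0, -1.0, 0.0, 1.0, 2.0]]

lr = 0.05
for epoch in range(500):
    for x, target in data:
        f = pretrained_features(x)            # frozen forward pass
        pred = w[0] * f[0] + w[1] * f[1] + b
        err = pred - target
        # Gradient step on the head only; the backbone stays untouched.
        w[0] -= lr * err * f[0]
        w[1] -= lr * err * f[1]
        b -= lr * err

# w[0] and b should approach 3.0 and 1.0 respectively.
print(round(w[0], 2), round(b, 2))
```

Real transfer learning works the same way conceptually: freeze the pretrained layers, replace the final layer(s), and train only those on your dataset.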
3. Lasagne: Lasagne sat in the middle. You could literally write Theano and Lasagne code in the same Python file, so you had the power to look under the hood as well as to deploy fast. When you work on applied research problems, as we do at ParallelDots, you often have a real-world dataset unlike the standard ones (hyperspectral images are one example of many) but not enough time to invent entirely new architectures (say, an "InceptionNet for hyperspectral"). In that case you need something that lets you tweak existing research for your use case.
Then came PyTorch. PyTorch lets you write code with as much low-level control as TensorFlow, yet it is as easy to deploy as Keras. That is because it is a relatively low-level, dynamic-graph framework with imperative programming. This change of paradigm, which makes debugging easy, is probably what makes a research framework like PyTorch so usable. That is a super combination. The original Torch had this beautiful design as well, but having to learn Lua and shift one's tech stack to it was the obstacle to its widespread adoption. PyTorch now combines all of the strengths described above. On top of that, the PyTorch forum is super active and helpful, just like the old Theano mailing list. That is what made us jump ship to PyTorch.
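To make "dynamic graph with imperative programming" concrete, here is a minimal, dependency-free sketch of define-by-run autodiff, the style PyTorch popularized. The `Value` class and all names here are illustrative inventions, not PyTorch's actual API: the point is that the graph is recorded as ordinary Python code executes, so normal control flow and print statements work mid-computation.

```python
class Value:
    """A scalar that records the operations applied to it. The graph is
    built imperatively, as ordinary Python code runs (define-by-run)."""

    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = backward_fn
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = backward_fn
        return out

    def backward(self):
        # Topologically order the recorded graph, then apply the chain rule.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward_fn()

# The graph for y is recorded as this line runs -- you could step through
# it in a debugger, which is what makes dynamic graphs easy to debug.
x = Value(3.0)
y = x * x + x * 2.0   # y = x^2 + 2x
y.backward()
print(y.data)   # prints 15.0
print(x.grad)   # dy/dx = 2x + 2, prints 8.0
```

A static-graph framework like TensorFlow 1.x, by contrast, would first compile the whole computation into a graph and only then run it in a session, which is why inspecting intermediate values there required special tooling.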
Chainer, a framework like PyTorch, has existed for a long time, but it never got enough PR. And unlike similar frameworks that came later, such as DyNet, PyTorch is not restricted in any way to CV or NLP.
For the reasons highlighted at the end of point 2, you can see more and more frameworks moving toward category 3. TensorLayer, for example, looks like a Lasagne on top of TensorFlow (and it mixes seamlessly with Keras). Keras, in which you could not even inspect gradients until fairly recently, is now evolving toward category-3 capabilities.
Hence, it is actually very hard to select any single deep learning framework, but the one most people love, and the one used most frequently, is TensorFlow.