Deep learning, what is it? Well, in layman's terms, it's a machine trying to copy the workings of the human brain. All I'll say is, if it has to copy someone, I hope it's a smart human brain; it would surely shut down if it tried copying mine.
Deep learning is here to change our perspective on technology. There's already a lot of excitement around AI, and especially around deep learning. It was predicted that DL would affect our lives, and it's already having its share of impact. DL is growing faster than most of us expect. It feeds on data; that's its fuel, and for once, data is something there's plenty of in this world. Get in my belly, or get in my programming, however DL likes it.
Deep learning is often introduced as a technology that can solve every problem, a genie that can make all your wishes come true. But what happens when deep learning is really put to the test?
Gramener AI Labs has been studying advances in deep learning and translating them into specific projects that map to client problems. In this article, I'll share some of the learnings from implementing deep learning solutions over the last few years; over that time, deep learning has seen some successes and some difficulties.
This article covers the reasons why deep learning projects hit a dead end in some cases.
Real life vs. science fiction
AI is now around us; it's real, not a sci-fi thing, with self-driving cars, drones that can deliver pizzas, and machines that can read one's brain signals. But that's not all there is to it: most of these are still in research labs and work only under carefully curated scenarios. There's a thin line between what's production-ready and what's still a stretch of the imagination. Businesses often misread this line, and teams wade too deep into the tech. This is where businesses can experience AI disenchantment, prompting them to become over-cautious and take many steps back. With some due diligence, the DL use cases that are business-ready can be identified. One can be ambitious and push boundaries, but the key is to under-promise and over-deliver.
Performance feeds on Data
Analytics delivers magic because of data, not in spite of its absence. And DL does not solve the festering challenge of data unavailability. If anything, DL's appetite for data is all the more insatiable. To set up a simple facial recognition-based attendance system, for example, you'll need mugshots of employees as your training data. These pictures may be taken live or submitted with some variation of features (orientation, glasses, facial hair, lighting, etc.). Usually, such data gathering can quickly turn into a mini project. Project sponsors often assume that this data is already available or that collecting it is easy. But after their best efforts, they may end up with just partial data that delivers moderate accuracy. This shortcoming can mean the difference between production-grade solutions and just an attractive research prototype.
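Some of that variation in features (orientation, lighting, etc.) can be simulated in software rather than collected live, which is one way teams stretch partial data. Here's a minimal, pure-Python sketch of the idea; real pipelines would use an imaging library, and the toy 2x2 "photo" below is purely hypothetical:

```python
# A toy "image" is a 2-D grid of grayscale pixel values (0-255).
# Real pipelines use libraries like Pillow or torchvision; this
# sketch just shows how one mugshot can become several training variants.

def flip_horizontal(img):
    """Mirror the image left-to-right (simulates a different orientation)."""
    return [row[::-1] for row in img]

def adjust_brightness(img, delta):
    """Shift every pixel by `delta`, clamped to 0-255
    (simulates different lighting conditions)."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

def augment(img):
    """Expand one labelled image into a small set of variants."""
    variants = [img, flip_horizontal(img)]
    for delta in (-40, 40):
        variants.append(adjust_brightness(img, delta))
    return variants

mugshot = [[10, 200], [30, 120]]  # a hypothetical 2x2 "photo"
print(len(augment(mugshot)))  # 4 training samples from 1 photo
```

Augmentation eases the data hunger but doesn't eliminate it; you still need enough genuine variety in the source photos.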
Lack of Data labelling
Say you have a curated database with a million data points at your disposal. Surely that's enough for DL to do its magic? Not so fast. For the model to learn, the training data needs to be painstakingly labelled, a step that is often overlooked. You need to draw boxes around pictures for algorithms to learn to spot people. You need to label faces with names, tag emotions, label voices, and even describe a table of numbers with detailed metadata. You might say, "Wow, that's a lot of work!" But that's the tradeoff in teaching DL models if we're not doing the even more painstaking process of feature extraction.
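To make "drawing boxes and labelling faces" concrete, here's a simplified, hypothetical annotation schema (loosely inspired by formats like COCO); every file name, person name, and coordinate below is illustrative:

```python
# Each image maps to a list of labelled bounding boxes. At scale,
# producing records like these is exactly the painstaking work
# the article describes.

annotations = {
    "office_photo_001.jpg": [
        {"label": "person", "name": "Asha", "box": [34, 50, 120, 220]},   # [x, y, width, height]
        {"label": "person", "name": "Ravi", "box": [210, 48, 115, 225]},
    ],
}

def count_labels(annotations, label):
    """How many objects of a given class have been labelled so far."""
    return sum(
        1
        for boxes in annotations.values()
        for box in boxes
        if box["label"] == label
    )

print(count_labels(annotations, "person"))  # 2
```

Multiply one such record by a million images and the scale of the labelling effort becomes obvious.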
The name of the game is money
The effort to gather and label data, combined with GPU-grade computing, can prove costly. Add this to the ongoing effort to maintain production models with labelling, training, and tweaking, and the total cost of ownership shoots up. In some cases, clients realize this late and find that hiring people to do manual inspection and classification can be cheaper. But when you talk about large volumes and scalability, then DL again starts making sense. But not all businesses have this need as a priority. With research in DL progressing steadily, this is changing by the day. Hence, it's critical to examine the total cost of ownership of DL early on. At times, it may be wise to defer investment until the cost economics become more favourable.
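The total-cost-of-ownership comparison above can be sketched as back-of-the-envelope arithmetic. Every figure below is an illustrative assumption, not a real quote: a large fixed cost for labelling, training, and upkeep, versus manual inspection that scales linearly with volume.

```python
# Hypothetical break-even analysis: at what volume does a DL solution's
# total cost of ownership beat manual inspection?

def dl_total_cost(items, fixed_cost=100_000, per_item=0.05):
    """Fixed cost covers labelling, GPU training, and model upkeep;
    the small per-item cost is inference compute. (Assumed figures.)"""
    return fixed_cost + per_item * items

def manual_total_cost(items, per_item=1.50):
    """Manual inspection scales linearly with volume. (Assumed rate.)"""
    return per_item * items

# Break-even volume: fixed cost / (manual rate - DL rate)
break_even = 100_000 / (1.50 - 0.05)
print(round(break_even))  # about 68,966 items
```

Below the break-even volume, hiring people really is cheaper; well above it, DL starts making sense, which is exactly why the decision hinges on volume and scalability.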
Creepy zone is a big no-no
This concern falls on the opposite side of the spectrum. There are use cases which turn out to be sweet spots for DL, where data availability and business needs are ripe for use. The challenge here is that the model knows way too much, even before people verbalize the need for it. And that's when it crosses into the creepy zone. It may be tempting to cross-sell products before the need is felt, or to detect deeper employee disconnects by tracking intranet chatter. But this skirts an ethical dilemma and casts doubt on data privacy among customers or employees. When in doubt about whether a use case could alienate the target audience, companies must give it a pass in spite of the potential at hand. Remember, with great power comes greater responsibility.