A-Why? The Generalized Model in Artificial Intelligence

By Nand Kishor | Jul 3, 2017

After more than half a century of fictitious media narratives about machines with goals ranging from planetary genocide to human instrumentality, the technology behind the concept of artificial intelligence is constantly over- and underestimated in terms of its potential to disrupt the way human beings work, communicate, and live. AI has been here since at least the 1990s, under a host of different names and fields. It's in your phone and it's in the cloud. AI is everywhere that an unfathomably massive data set has rendered a human being obsolete, sifting through raw information and sussing out patterns that can be used to turn a profit.

The reality of what doomsdayers and Luddites refer to as "A.I." is, even now, somewhat removed from today's sufficiently advanced algorithms, neural networks, and "smart" interfaces. As experts such as Kevin Kelly have eloquently pointed out, generalized intelligence does not follow a linear path. What human beings tend to think of as intelligence is in fact more of a loose association of competencies and adaptations developed as part of both our mental development and the natural course of some hundred-million-odd years of evolution. AI, therefore, is not intelligent in the same way we are; it is simply evolving to achieve different goals. The reason AI technologies are often perceived as "becoming smarter than us" or "coming to steal our jobs" is that while humans have been free to expand the sphere of their common knowledge in any direction they pleased, AI has only advanced in a handful of very specific directions to serve the purposes of the experts building it.

In this context, artificial intelligence is here. This much is known. What remains to be seen is where it's headed.

The recent advances in technologies that can be categorized under "artificial intelligence" have been significant. To the uninformed observer, the notion of artificial generalized intelligence, the lofty aspiration of creating a machine with something approaching human-like consciousness and self-awareness, might seem inevitable, especially as big names like Google, Alibaba, and Nvidia invest in every promising new startup under the sun. However, although the prospect of a future replete with nearly sentient interfaces attached to our daily tasks and objects seems attractive at first, market forces are currently guiding the field in the opposite direction. True artificial intelligence, be it expressed as a function of machine learning, natural language processing, machine vision, neural networks, deep learning, or some other new development, has yet to address one important issue, one that all new technologies face before even coming to market: who is this for?

The technologies mentioned above have all crossed a feasibility threshold within the last decade or so, thanks largely to the rise of the internet, increased mediation, and especially big data. Computer systems that can process large sets of both structured and unstructured data are now in high demand, as companies seek to squeeze value out of these datasets with speed and efficiency. AI, then, can be seen as uniquely qualified to undertake these tasks on behalf of users who possess big datasets, such as corporations, but is, as a result, decidedly less useful to individuals who don't. AI in general will remain a diligent, behind-the-scenes tool present anywhere these specialized data tasks exist.

With this in mind, weighing the value proposition of AI applications will be important for identifying those that can sustain a business. Logically, bespoke applications that serve specific purposes and can complete well-defined tasks are ideal spaces for AI to thrive in. Any attempt at generalized intelligence, however, would not necessarily result in a system better at performing a given task than an intelligence trained exclusively on that task. For instance, the learning algorithm that allows a Roomba to effectively navigate a room and the one curating Spotify's Discover Weekly playlist are each more adept (as well as more cost-effective) at their respective tasks than an intelligence attempting to learn the skills necessary to do both would be.

With regard to media, this is probably best thought of as a signal-to-noise problem. In the 21st century, hypermediation has dramatically lowered barriers to broadcasting and distributing media content, pushing noise to an all-time high. At the same time, accelerating trends in scientific progress are making it more and more difficult for companies to quickly identify new tasks and problems to be resolved by technology, as both companies and individuals struggle to find the signal: a way to effectively harness the considerable power of AI to perform desirable, scalable services that can extract value from big datasets. After all, having terabytes of data holds little inherent value without both the means to process it efficiently and a clear objective for that processing.

In the future, it will be essential for corporate entities large and small that are eyeing AI applications to be mindful of which assets and resources, namely data, can be leveraged to create new revenue streams or add value to an existing product. Discovering the "signal" of the generalized model is unlikely to happen before more immediate bottlenecks are solved, such as those in processing power and programming environments (to say nothing of market trends).

The greatest data sets in the world have already been leveraged to create machine learning algorithms designed to create machine learning algorithms. And yet the inability of many experts to adequately explain what goes on inside the black box is a clear indication that we have yet to scratch the surface of this technology. "True" AI is certainly a technology to watch for, but for now it remains firmly in the future.

Source: Chatbot News Daily