Nand Kishor, Contributor

Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and re-launched a Real Estate Business Intelligence Tool, creating one of the leading business intelligence tools for property price analysis in 2012. He also writes about, researches, and shares knowledge on Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, Python, and related topics.

Apple is next up to strut its artificial intelligence ambitions

By Nand Kishor | Jun 2, 2017

We're in the heart of tech conference season, when giant players including Microsoft, Google, Facebook and, next, Apple lay out their visions for where their companies, as well as the tech industry as a whole, are headed.

Looking at what's been discussed to this point (and speculating on what Apple will announce at its Worldwide Developers Conference on Monday), it's safe to say that all of these organizations are keenly focused on different types of artificial intelligence, or AI. What this means is that each wants to create unique experiences that leverage both new types of computing components and software algorithms to automatically generate useful information about the world around us. In other words, they want to use real-world data in clever ways to enable cool stuff.

You may hear scary-sounding terms like convolutional neural networks, machine learning, analytics, and deep learning associated with AI, but fundamentally, the concept behind all of them is to organize large amounts of data into various structures and patterns. From there, a system learns from the combined data, and actions of various types, such as better interpreting the importance of new incoming data, can then be applied.
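To make that concrete, here is a minimal sketch, not taken from the article, of that learn-then-apply pattern. It uses scikit-learn as an illustrative library, and the features and labels are made up for the example.

```python
# A minimal sketch of "organize data, learn from it, interpret new data".
# The data and labels below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# 1. Organize existing observations into a structure:
#    each row is an observation, each column a measured feature,
#    and each label says what that observation turned out to be.
observations = [
    [0.9, 0.1],   # e.g. mostly-bright image features
    [0.8, 0.2],
    [0.2, 0.9],   # e.g. mostly-dark image features
    [0.1, 0.8],
]
labels = ["daytime", "daytime", "nighttime", "nighttime"]

# 2. Learn patterns from the combined data.
model = LogisticRegression()
model.fit(observations, labels)

# 3. Apply what was learned to interpret new incoming data.
new_observation = [[0.85, 0.15]]
print(model.predict(new_observation))  # -> ['daytime']
```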

While some of these computing principles have been around for a long time, what's fundamentally new about the modern type of AI being pursued by these companies is its extensive use of real-world data generated by sensors (still and moving images, audio, location, motion and more) and the speed at which the calculations on that data occur.

When done properly, the net result of these computing efforts is a nearly magical experience where we can have a smarter, more informed view of the world around us. At Google's recent I/O event, for example, the company debuted its new Lens capability for Google Assistant, which can provide information about the objects and places within your view. In practical terms, Lens allows you to point your smartphone camera at something and have information about the objects in view appear overlaid on the phone screen. Essentially, it's a form of augmented reality that I expect we will see other major platform vendors provide soon (hint: Apple).

Behind the scenes, however, the effort to make something like Lens work involves an enormous amount of technology: reading the live video input from the camera (a type of sensor, by the way), applying AI-enabled computer vision algorithms to recognize both the objects and their relative locations, combining that with location details from the phone's GPS and/or WiFi signals, looking up relevant information about the objects, and then rendering all of that information on the phone's display.
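As a rough illustration of how those steps fit together, here is a sketch of a Lens-style loop. Every name in it is a hypothetical placeholder for a real component (camera driver, vision model, GPS, knowledge base, display), not Google's actual implementation.

```python
# An illustrative, hypothetical pipeline in the shape the article describes:
# sense -> recognize -> locate -> look up -> overlay.

def lens_style_pipeline(capture_frame, detect_objects, current_location,
                        lookup_info, render_overlay):
    """One pass of a sense -> recognize -> look up -> overlay loop."""
    frame = capture_frame()                        # read live video from the camera (a sensor)
    objects = detect_objects(frame)                # computer-vision model: which objects, and where
    place = current_location()                     # location from GPS and/or WiFi
    facts = {obj: lookup_info(obj, place)          # relevant information for each recognized object
             for obj in objects}
    return render_overlay(frame, objects, facts)   # combine everything onto the phone's display


# Dummy stand-ins so the sketch runs end to end.
result = lens_style_pipeline(
    capture_frame=lambda: "frame#1",
    detect_objects=lambda frame: ["Eiffel Tower"],
    current_location=lambda: (48.858, 2.294),
    lookup_info=lambda obj, place: f"{obj}: landmark near {place}",
    render_overlay=lambda frame, objs, facts: facts,
)
print(result)  # {'Eiffel Tower': 'Eiffel Tower: landmark near (48.858, 2.294)'}
```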

Continue Reading>>

Source: USA Today