
Your next phone may have an ARM machine learning processor

Feb 14, 2018 | 5370 Views

ARM doesn't build any chips itself, but its designs are at the core of virtually every CPU in modern smartphones, cameras and IoT devices. So far, the company's partners have shipped more than 125 billion ARM-based chips. After moving into GPUs in recent years, the company today announced that it will now offer its partners machine learning and dedicated object detection processors. Project Trillium, as the overall effort is called, is meant to make ARM's machine learning (ML) chips the de facto ML platform for mobile and IoT.

For this first launch, ARM is introducing both an ML processor for general AI workloads and a next-generation object detection chip that specializes in detecting faces, people and their gestures in video at up to full-HD resolution and 60 frames per second. This is actually ARM's second-generation object detection chip. The first generation ran in Hive's smart security camera.

As Jem Davies, ARM fellow and general manager for machine learning, and Rene Haas, president of the company's IP Products Group, told me, the company decided to build these chips from scratch. "We could have produced things on what we already had, but decided we needed a new design," Davies told me. "Many of our market segments are power constrained, so we needed that new design to be power efficient." The team could have looked at its existing GPU architecture and expanded on that, but Davies noted that, for the most part, GPUs aren't great at managing their memory budget, and machine learning workloads often rely on efficiently moving data in and out of memory.

ARM stresses that these new machine learning chips are meant for running machine learning models at the edge (and not for training them). The promise is that they will be highly efficient (3 teraops per watt) while still offering mobile performance of 4.6 teraops - and the company expects that number to go up with additional optimizations. Finding the right balance between performance and battery life is at the heart of much of what ARM does, of course, and Davies and Haas believe the team found the right mix here.
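To put those figures in context, a quick back-of-the-envelope calculation (ours, not ARM's) shows what they imply for the chip's power envelope:

```python
# Illustrative arithmetic based on the quoted figures:
# 4.6 teraops of throughput at 3 teraops per watt implies a power
# envelope of roughly 1.5 W for the ML processor.
throughput_teraops = 4.6            # quoted mobile performance
efficiency_teraops_per_watt = 3.0   # quoted efficiency target

implied_power_watts = throughput_teraops / efficiency_teraops_per_watt
print(f"Implied power draw: ~{implied_power_watts:.2f} W")  # ~1.53 W
```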

ARM expects that many OEMs will use both the object detection and ML chips together. The object detection chip could be used for a first pass, for example, to detect faces or objects in an image and then pass the information of where these are on to the ML chip, which can then do the actual face or image recognition.
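As an illustration of that division of labour, here is a minimal sketch of such a two-stage pipeline in Python. The function names, shapes and placeholder logic are ours, for illustration only; they are not ARM's APIs or hardware interfaces:

```python
# Hypothetical two-stage pipeline: a cheap detection pass finds regions of
# interest, and a heavier recognition pass runs only on those regions.
from typing import List, Tuple

import numpy as np

BoundingBox = Tuple[int, int, int, int]  # (x, y, width, height)


def detect_objects(frame: np.ndarray) -> List[BoundingBox]:
    """Stand-in for the object detection processor: returns regions likely to
    contain faces or people. Here it just returns one box covering the frame."""
    h, w = frame.shape[:2]
    return [(0, 0, w, h)]


def recognize(region: np.ndarray) -> str:
    """Stand-in for the ML processor: classifies a cropped region.
    A real implementation would run a model on the dedicated hardware."""
    return "face" if region.size > 0 else "unknown"


def process_frame(frame: np.ndarray) -> List[str]:
    # First pass: detection narrows the search space.
    boxes = detect_objects(frame)
    # Second pass: recognition runs only on the detected regions.
    return [recognize(frame[y:y + h, x:x + w]) for x, y, w, h in boxes]


if __name__ == "__main__":
    dummy_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # full-HD frame
    print(process_frame(dummy_frame))
```

The point is simply that the inexpensive detection stage limits how much work the heavier recognition stage has to do on each frame.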

"OEMs have ideas, they have prototype applications and they are just waiting for us to provide that performance to them," Davies said.

ARM's canonical example for this is an intelligent augmented reality scuba mask (Davies is a certified diver, in case you were wondering). This mask could tell you which fish you are seeing as you are bobbing in the warm waters of Kauai, for example. But the more realistic scenario is probably an IoT solution that uses video to watch over a busy intersection where you want to know if roads are blocked or whether it's time to empty a given trash can that seems to be getting a lot of use lately.

"The idea here to note is that this is fairly sophisticated work that's all taking place locally," Haas said, and added that while there is a fair amount of buzz around devices that can make decisions, those decisions are often being made in the cloud, not locally. ARM thinks that there are plenty of use cases for machine learning at the edge, be that on a phone, in an IoT device or in a car.

Indeed, Haas and Davies expect that we'll see quite a few of these chips in cars going forward. While the likes of Nvidia are putting supercomputers into cars to power autonomous driving, ARM believes its chips are great for doing object detection in a smart mirror, for example, where there are heat and space constraints. At the other end of the spectrum, ARM is also marketing these chips to display manufacturers that want to be able to tune videos and make them look better based on an analysis of what's happening on the screen.

"We believe this is genuinely going to unleash a whole bunch of capabilities," said Haas.

We've recently seen a number of smartphone manufacturers build their own AI chips. That includes Google's Pixel Visual Core for working with images, the iPhone X's Neural Engine and the likes of Huawei's Kirin 970. For the most part, those are all home-built chips. ARM, of course, wants a piece of this business.

For developers, ARM will offer all the necessary libraries to make use of these chips and work with existing machine learning frameworks to make them compatible with these processors. "We are not planning to replace the frameworks but plug our IP (intellectual property) into them," said Davies.
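As a rough sketch of what plugging into an existing framework can look like, the snippet below shows a hypothetical operator registry in which an accelerator backend supplies its own kernel and the framework falls back to portable code otherwise. All names here are invented for illustration; they are not ARM's libraries or any real framework's API:

```python
# Sketch of a framework that lets a hardware backend register accelerated
# kernels instead of being replaced outright. Everything here is hypothetical.
import numpy as np

# The framework's portable reference implementations.
_reference_kernels = {
    "relu": lambda x: np.maximum(x, 0),
}

# Kernels supplied by a hardware backend.
_accelerated_kernels = {}


def register_kernel(op_name, fn):
    """Let a backend override the framework's implementation of an op."""
    _accelerated_kernels[op_name] = fn


def run_op(op_name, x):
    # Prefer the accelerated kernel if the backend registered one,
    # otherwise fall back to the framework's own portable code.
    fn = _accelerated_kernels.get(op_name, _reference_kernels[op_name])
    return fn(x)


# A hypothetical ML-processor backend plugging in its kernel. In reality this
# would hand the tensor to a driver for the dedicated hardware; here it is a
# plain NumPy stand-in.
register_kernel("relu", lambda x: np.maximum(x, 0))

print(run_op("relu", np.array([-1.0, 2.0, -3.0])))  # -> [0. 2. 0.]
```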

The current plan is to release the ML processor design to partners by the middle of the year. It should arrive in the first consumer devices roughly nine months after that.

Source: TC