Google's Cloud AutoML Mainstreams Machine Learning

Feb 8, 2018

Google's aim with this release is to support the relatively large population of enterprise developers who have limited machine learning knowledge and expertise.

AutoML Vision is the first product released under the Cloud AutoML banner. It is built on Google's image recognition technology, including transfer learning and neural architecture search. It is designed to make creating custom ML models faster and simpler: a drag-and-drop interface lets developers upload images, train and manage models, and then deploy the trained models directly on Google Cloud.
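The transfer learning mentioned above reuses a model pretrained on a large dataset as a feature extractor, so that only a small task-specific "head" has to be trained on the customer's images. A minimal sketch of that idea follows; the "pretrained" backbone here is a stand-in random projection (Google's actual backbone, data, and training procedure are not public in this article), and the dataset is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained image backbone: a fixed projection from
# raw "pixels" (64 values) to a 16-dimensional feature vector.
# In real transfer learning these weights come from a network trained
# on a large dataset and are kept frozen during customization.
W_pretrained = rng.normal(size=(64, 16))

def features(x):
    # Frozen ReLU features from the pretrained backbone.
    return np.maximum(W_pretrained.T @ x, 0.0)

# Tiny synthetic two-class dataset of flattened "images".
X = rng.normal(size=(40, 64))
y = (X[:, 0] > 0).astype(float)

# Only the new head (16 weights + 1 bias) is trained, here with
# plain logistic-regression SGD.
w = np.zeros(16)
b = 0.0
lr = 0.1
for _ in range(300):
    for xi, yi in zip(X, y):
        f = features(xi)
        p = 1.0 / (1.0 + np.exp(-(w @ f + b)))
        grad = p - yi          # gradient of the logistic loss
        w -= lr * grad * f
        b -= lr * grad

preds = [1.0 / (1.0 + np.exp(-(w @ features(xi) + b))) > 0.5 for xi in X]
accuracy = float(np.mean([p == yi for p, yi in zip(preds, y)]))
print(accuracy)
```

Because the backbone stays frozen, the customer-side training touches only a handful of parameters, which is what makes the upload-and-train workflow feasible on small image sets.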

Google also describes AutoML Vision as the first of a planned series of services covering the other major fields of AI.

Fei-Fei Li, Chief Scientist in Google's Cloud AI group, and Jia Li, the head of the R&D team, blogged about this release and the gap in ML/AI developer skills.

They wrote: "Currently, only a handful of businesses in the world have access to the talent and budgets required to fully appreciate the advancements of ML and AI." They added: "There's a very limited number of people that can create advanced machine learning models. And if you're one of the companies that has access to ML/AI engineers, you still have to manage the time-intensive and complicated process of building your own custom ML model."

Google's "AutoML" approach to machine learning employs a controller neural net that proposes a "child" model architecture, which is then trained and evaluated for quality on a particular task, explained Quoc Le and Barret Zoph, research scientists on Google's Brain team, in a blog post. "That feedback is then used to inform the controller how to improve its proposals for the next round," they wrote. "We repeat this process thousands of times generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly."
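The propose-evaluate-reinforce loop Le and Zoph describe can be sketched in a few lines. This is an illustrative toy, not Google's implementation: the controller here is a simple weighted sampler rather than a recurrent network, the search space (a depth and a width choice) is invented, and `evaluate` stands in for actually training a child model and measuring held-out validation accuracy:

```python
import random

random.seed(0)

# Hypothetical search space: each child architecture is one depth
# and one width choice.
DEPTHS = [2, 4, 8]
WIDTHS = [32, 64, 128]

# Toy "controller": one probability weight per option; child
# architectures are sampled in proportion to these weights.
depth_weights = {d: 1.0 for d in DEPTHS}
width_weights = {w: 1.0 for w in WIDTHS}

def sample(weights):
    # Draw one option with probability proportional to its weight.
    options = list(weights)
    r = random.random() * sum(weights.values())
    acc = 0.0
    for o in options:
        acc += weights[o]
        if r <= acc:
            return o
    return options[-1]

def evaluate(depth, width):
    # Stand-in for training the child and scoring it on a held-out
    # validation set; deeper/wider scores higher here, plus noise.
    return 0.5 + 0.02 * depth + 0.001 * width + random.gauss(0, 0.01)

best = (0.0, None)
for _ in range(200):
    d, w = sample(depth_weights), sample(width_weights)
    reward = evaluate(d, w)
    # Feedback step: reinforce the sampled choices in proportion to
    # reward, so high-accuracy regions of the space gain probability.
    depth_weights[d] *= 1.0 + 0.1 * reward
    width_weights[w] *= 1.0 + 0.1 * reward
    if reward > best[0]:
        best = (reward, (d, w))

print(best[1])
```

Repeated over thousands of rounds, the controller's probability mass concentrates on the architecture choices that keep scoring well, which is the behavior the quoted blog post describes.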

Source: HOB