How Nvidia is surfing the AI wave

May 30, 2017

Nvidia co-founder and CEO Jensen Huang says that the rapid adoption of artificial intelligence (AI) technologies, such as machine learning and deep learning, augurs well for the growth prospects of his company.

Jensen Huang, co-founder, president and chief executive officer of Santa Clara-based Nvidia Corp., says that the rapid adoption of artificial intelligence (AI) technologies such as machine learning, deep learning, natural language processing and computer vision augurs well for the growth prospects of his company.

His confidence stems from the fact that Nvidia designs the chips that deliver the extra computing power clients need in an algorithm-driven world, where these AI technologies are increasingly used to make business sense of the voluminous data that users generate and thereby gain a competitive edge.

These chips, called graphics processing units (GPUs), helped Nvidia fuel the growth of the personal computer gaming market almost two decades ago. Huang hopes the increasing use of GPUs for AI will help his company repeat that success.

Huang argues that adding more central processing unit (CPU) transistors to a computer yields only a small increase in application performance, whereas GPUs are specifically designed to handle many tasks simultaneously, which makes them better suited to high-performance computing workloads.
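To make the contrast concrete, here is a minimal sketch (illustrative code, not Nvidia's own, assuming a CUDA-capable GPU and the CUDA toolkit; the kernel name and sizes are arbitrary) of the data-parallel model Huang is describing: rather than a CPU stepping through array elements one at a time, a GPU kernel assigns a lightweight thread to each element and processes them in parallel.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread scales exactly one array element; roughly a million
// threads run this function, instead of a CPU loop visiting elements serially.
__global__ void scale(const float *in, float *out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        out[i] = in[i] * factor;
}

int main() {
    const int n = 1 << 20;                          // ~1 million elements
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));      // unified memory, visible to CPU and GPU
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover all elements
    scale<<<blocks, threads>>>(in, out, 2.0f, n);   // launch ~1 million parallel threads
    cudaDeviceSynchronize();                        // wait for the GPU to finish

    printf("out[0] = %f\n", out[0]);                // expect 2.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The same pattern, applied to the matrix multiplications inside neural networks, is what makes GPUs attractive for deep learning workloads.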

Hence, "building great GPUs", as Huang puts it, is a critical part of Nvidia's strategy to "create the most productive platform for deep learning", which is one of the main triggers for the AI explosion. The other components of the strategy revolve around showcasing cutting-edge AI applications across sectors, building partnerships with other companies, nurturing technology start-ups and help build an AI ecosystem.

At the cutting edge of AI

If the proof of the pudding lies in the eating, then driverless cars and robotics are ably helping Nvidia showcase the power of its GPUs and other technologies in the AI world.

At GTC (GPU Technology Conference) 2017, for instance, Nvidia announced that Toyota Motor Corp., one of the 10 largest companies in the world, had selected Nvidia's technology for its autonomous vehicles.

Autonomous vehicles require an on-board supercomputer to process and interpret the data from all the sensors on the car. While many prototype vehicles "contain a trunk full of computers to handle this complex task, the Nvidia Drive PX platform, equipped with the next-generation Xavier processor, will fit in your hand and deliver 30 trillion deep learning operations per second", Nvidia said in a 10 May statement.

The Drive PX platform combines the data generated from cameras, lidar, radar and other sensors. It then uses AI to understand the environment surrounding the car, localize itself on a high-definition map and anticipate potential hazards while driving.

The Nvidia Drive is "basically an architecture that spans level two to level five-from augmented driving, all the way up to completely driverless systems", according to Huang. The company's partners include Robert Bosch GmbH, Toyota and Germany-based ZF-one of the industry's largest automotive suppliers.

In his keynote speech, Huang also demonstrated the potential of Project Holodeck, "a photorealistic, collaborative virtual reality environment that incorporates the feeling of real-world presence through sight, sound and haptics (touch)", by showing the audience a demo of how the technology works with the Koenigsegg Regera supercar. The demo showed engineers in the Holodeck environment exploring the car and consulting each other on design changes in real time.

Meanwhile, even as the world is getting used to the concept of driverless cars, the Airbus Group is working on GPU-powered autonomous air taxis. Christened Vahana, the project, which began in early 2016, envisages an aircraft that does not need a runway, is self-piloted, and can automatically detect and avoid obstacles and other aircraft. It is designed to carry a single passenger or a load of cargo over short distances, Arne Stoschek, head of autonomous systems at Airbus A3 (pronounced "A-cubed"), a unit of the Airbus Group, said during his presentation at GTC.

Further, to advance the field of robotics, Nvidia announced that it has developed Isaac, a virtual robot that helps make other robots. Unlike robots that are hand-programmed, and do exactly and only what they were programmed to do, Isaac is trained using AI algorithms such as reinforcement learning, after which its virtual brain is downloaded into Jetson, Nvidia's AI supercomputer, to give birth to a new robot. This "pre-trained" robot from the Isaac world wakes up almost as if it had just been born; the last little bit of domain adaptation is done in the physical world.

Nurturing tech start-ups

Nvidia is currently nurturing 1,300 technology start-ups around the globe as part of its 18-month-old Inception programme. The start-ups, according to Serge Lemonde, who heads the programme for Europe, the Middle East and Africa, and India regions, are helped with "early access to technology, resources, market exposure, and sometimes funding (through GPU Ventures)".

Consider the example of Culture Machine, an Aleph Group Pte Ltd company, whose tagline is: Automated storytelling using videos. Venkat Prasad, co-founder and chief technology officer/chief operating officer, explains that with the help of "machine learning GPUs", Culture Machine processes and analyses about 150GB of data daily and "can create 3,000 videos a day".

"We use our technology platform to understand what content to create; what format should be applied; what type of story-telling is needed; and what type of emotional treatment needs to be given," says Prasad.

The data is sourced from multiple social networking sites, including Instagram, Snapchat, Facebook and YouTube, and fed to deep learning convolutional neural networks that continuously learn about formats from the data. The deep learning algorithms then recommend the most suitable format, and this "creative brief" is sent to Culture Machine's "Video Machine Platform" to "create an automated video".

Also consider the case of Cincinnati-based Genetesis Llc, which runs clinical trials for CardioFlux-a non-invasive bio-magnetic imaging system that measures the heart's weak magnetic fields. Powered by GPUs, it generates a three-dimensional map of the heart's electrical performance in just 90 seconds. This provides doctors a fast and accurate way to diagnose blocked arteries and pinpoint their location.

San Francisco-based Bay Labs Inc., on its part, wants to combat heart disease by putting an inexpensive ultrasound scanner in the hands of every general practitioner. By training its GPU-accelerated deep learning software to recognize ultrasound images, it aims to make scans easier to interpret.

Building the AI ecosystem

Nvidia also believes in developing an AI ecosystem of partners, companies and developers. For instance, it collaborates with the world's top AI laboratories, be it the "University of Toronto or Stanford or Berkeley or Oxford or Harvard or MIT or Tsinghua University or the University of Tokyo-we have 20-odd universities that we're working with around the world where the greatest, brightest minds in artificial intelligence are supported and working directly with us so that we can advance the future of computing together", says Huang.

Nvidia's Deep Learning Institute provides developers, data scientists and researchers with practical training on the use of the latest AI tools and technology, according to Greg Estes, vice-president of developer programs at Nvidia. To meet the surging demand for expertise in the field of AI, Nvidia said on 9 May that it plans to train 100,000 developers this year-a tenfold increase over 2016-through its Deep Learning Institute. Analyst firm International Data Corporation estimates that 80% of all applications will have an AI component by 2020.

Promoting coopetition

"Software is eating the world, as Marc Andreessen said, but AI is eating software," Huang noted in his keynote speech at GTC on 10 May in Silicon Valley. However, AI is a space that leading technology companies such as Google Inc., Intel Inc., Microsoft Corp., Hewlett Packard Enterprise, Qualcomm Inc., Facebook Inc., and International Business Machines Corp. (IBM) also want to claim.

All these companies compete, and also collaborate, with Nvidia. So other than promoting the use of GPUs to power AI, Huang's strategy also involves building healthy partnerships with these companies in a spirit of coopetition-a portmanteau of competition and cooperation.

When Google announced at its I/O global developer conference on 17 May that it would launch the second version of its tensor processing unit (TPU), the TPU2 chip, Huang blogged on Nvidia's official site on 25 May: "It's great to see the two leading teams in the AI computing race while we collaborate deeply across the board-tuning TensorFlow's performance, and accelerating the Google cloud with Nvidia CUDA GPUs."

TensorFlow is Google's deep learning framework, while CUDA (Compute Unified Device Architecture) is Nvidia's parallel computing platform, available free of charge, which enables developers to program GPUs.
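As a rough illustration of what "programming GPUs" with CUDA involves, the sketch below (illustrative code, not taken from Nvidia or Google, assuming the CUDA toolkit and a compatible GPU) walks through the typical host-side workflow: allocate device memory, copy data over, launch a kernel, and copy the results back. Frameworks such as TensorFlow hide these steps behind higher-level, GPU-accelerated operations.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// SAXPY (y = a*x + y): the kind of simple primitive that GPU-accelerated
// math libraries used by deep learning frameworks build upon.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 16;
    size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate GPU memory and copy the inputs over.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    // Copy the result back to the host and check one value.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("hy[0] = %f\n", hy[0]);  // expect 3*1 + 2 = 5

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```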

Further, public cloud services providers such as Alibaba Group Holding Ltd, Amazon Web Services, Baidu Inc., Facebook, Google, IBM, Microsoft and Tencent Holdings Ltd use Nvidia GPUs in their data centres, prompting Nvidia to launch its GPU Cloud platform, which integrates deep learning frameworks, software libraries, drivers and the operating system.

Nvidia also worked with SAP SE to develop a product called Brand Impact-a fully automated and scalable video analytics service for brands, media agencies and media production companies. "Companies using SAP are sitting on a pile of data. If we could figure out a way to use AI to harvest that dark matter, it would be incredibly valuable," says Huang.

Competition is stiff

Ray Wang, founder of Constellation Research, a technology research and advisory firm, believes that Nvidia is "well placed for explosive growth in AI". "There are seven components to successful AI-computer power, large corpus of data, time, math talent/algorithms, domain expertise, human user interfaces, and recommendation engines. In the future, we will be paying for AI calls by the kWh. Those with the biggest and baddest GPUs at the lowest cost will win," says Wang.

Jayanth Kolla, co-founder and partner at Convergence Catalyst, an Indian research and advisory firm, concurs that "with the advancement and growth of AI, especially the neural nets and deep learning branches of AI, Nvidia has a good chance of being a strong player in the enterprise space".

He, however, believes that Nvidia's entry into the enterprise space was more "accidental than intentional" because five years ago, neural networks and deep learning algorithms started being used extensively by companies such as Google, Baidu and Facebook, and they started using GPUs, "especially Nvidia's GPUs to train their algorithms".

Now that the company's chips have been the choice for AI servers over the last five years, Kolla reasons that Nvidia has had an opportunity to learn the needs of the evolving technology and its enterprise side, and has "designed and launched specific AI chips starting 2016 and the latest V100 (Volta) this year".

To be sure, the competition from the likes of Intel and Google for Nvidia is "real", according to Kolla. "While Intel's x86-based solutions have been the processors of choice for enterprise applications and server farms for a long time, and the company enjoys the advantage of being an incumbent, Google has the advantage of owning the end-to-end elements-applications on personal devices to collate data, AI algorithms and programs, training data sets and even server farms-and is in a front-runner position in the AI race to develop customized AI chipsets and potentially standardize them for the industry," he says.

One of the biggest challenges Nvidia faces, according to Kolla, is that it depends on relationships and partnerships with other, bigger companies to forge ahead in the AI space. "Also, especially in the enterprise chipsets space, Nvidia is relatively a new entrant."

According to Wang, the Tesla V100 GPU is the key to Nvidia's differentiation.

"The ability to recognize objects in photos, do multipole translations of text, understand voice through deep learning algorithms is what this chip is designed to do. They (Nvidia) have a 40% better raw performance lead in their chips and they are neural network ready. However Intel and AMD's Radeon instinct chips are playing catch-up in this lucrative market. The new cloud services for AI developers, is the long term differentiator," notes Wang.

In his GTC keynote speech, Huang recalled a recent talk where Fei-Fei Li, associate professor at the computer science department at Stanford University, said that the "Big Bang of AI" was made possible by three fundamental ingredients: deep learning algorithms and the deep learning approach; the availability of an enormous amount of data; and the discovery of the efficiency of using GPUs in computers to accelerate deep learning.

Huang believes the AI revolution, which started with the Big Bang in 2012, has since grown exponentially. He says, "Not everybody knows how to program but everybody has data. And they can use that data now, use the experience of their domain, the experience of their career, experience of their professions and teach a computer how to automate their work. So teaching computers is something that I think everybody can do. We have democratized computing."

India strategy

The India strategy of Nvidia Corp. is not very different from its global strategy. "Our highest strategic priority is to be the captain of the AI ecosystem ignited by GPU Deep Learning," said Vishal Dhupar, managing director, South Asia.

In India, Nvidia focuses on partnering with top labs and researchers to apply deep learning to "India's grand challenges"; proliferating Nvidia's platform and software development kit among developers, students and researchers; and accelerating AI start-ups through the Inception programme.

Dhupar underscores that India is at "an intersection of talent, a vibrant start-up ecosystem, strong IT services and an offshore industry" to harness the power of deep learning.

"While the internet is an appetizer, AI is the real entrée," Dhupar insists.

AI, Dhupar points out, has become a critical component of Prime Minister Narendra Modi's flagship Make in India, Skill India and Digital India programmes.

The fact that there are "...over 170 AI startups, top academic institutes doing research, and many corporates adopting AI for creating a competitive edge, are great indicators of the potential for AI in India", according to Dhupar.


Source: Mint