I write columns on news related to bots, especially in the categories of artificial intelligence, bot startups and bot funding. I am also interested in recent developments in the fields of data science, machine learning and natural language processing.
Even experts can't agree on many aspects of artificial intelligence, or AI, perhaps the most important technology of our lifetime.
AI pros from different fields - a venture capitalist, a retired math professor, a CMO, a telecom veteran, a tech entrepreneur, and a machine learning scientist - gathered at Mobile World Congress in San Francisco last week to offer their conflicting views.
The first question posed by moderator Paul Hsiao of Canvas Ventures: How do you define it?
What is the defining trait of AI?
First to answer is the scientist, Danny Lange, vice president of AI and machine learning at Unity Technologies, with previous stints at IBM, Microsoft, Amazon and Uber.
"The tools, mechanism, technology underneath, that's deep learning, that's machine learning, but AI is the appearance," Lange says. "If it appears smart and insightful... I would judge it as AI."
Lange's AI definition, however, drew a quick rebuke from Kris Bondi, CMO at Neura, an AI software company.
"One of my biggest pet peeves is the perception that everything that appears intelligent is AI," Bondi says. "I would argue that Alexa, which most of you have, is not intelligent. It is wonderful, but it is connectivity, voice-activated. Most of the things that are happening with it are programmed in."
By his own definition, Lange agrees that Alexa isn't AI. Rather, Alexa is merely a "hard-wired, voice-response system that doesn't appear intelligent," he says.
That's not to say Alexa, Siri and other voice-response systems won't become AI in the near future. In fact, that's probably where AI will make its greatest impact on society. Bondi describes Alexa as a Trojan horse that has gained entrance into American culture, eased fears about a listening device inside homes, and will one day unleash AI.
Given Alexa's example, it seems AI's defining trait is that it must be able to learn, adapt and evolve beyond its programming.
The childlike mindset
This AI definition requires a new way of looking at both software development and business application.
Imagine an AI system as a child who experiences things and evolves from them. A child is not a logic-based system that merely repeats itself. For software developers, this represents a fundamental shift in how they perceive themselves and their work, says Soma Velayutham, head of (telecoms) industry development, AI and deep learning at Nvidia.
"You've got to create a fertile ground for the talent to think AI, because we've been ingrained to think very logically," Velayutham says. "A software developer develops an algorithm, and the algorithm drives the logic." But with AI, "you're almost telling the software developer, 'You're going to teach the software to write software, and you're not the software developer.'"
Business executives, too, have to change their expectations with AI. They tend to focus on quarterly performance goals and quick returns on investment, but they'll need to be as patient as a parent and allow AI to grow up.
This doesn't mean AI should be permitted to wander aimlessly. Even before embarking on AI, Bondi says, a company should work through its key performance indicators to find out where AI can help. This gives AI direction and buy-in from the get-go.
"The company's KPIs are X, Y and Z, and an AI approach can get us from here to here and help us improve by this much," Bondi says.
Lange, however, shakes his head. "I don't think you're ambitious enough," he says. "I think AI is a complete and total disruption. It's not about software engineers, it's about data. It's about being able to train systems rather than program systems."
While general manager of Amazon Machine Learning, Lange says, Amazon used only three algorithms throughout the company. The learning and reinforcement learning come from the data that feeds into the algorithm, which Lange calls the "AI loop." The real bottleneck is acquiring interesting data and getting it in shape for machines to crawl - not the algorithm itself.
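The loop Lange describes can be sketched minimally: one fixed, trivial "algorithm" (here a running mean, chosen only for brevity) that improves solely because new data keeps flowing back into training. The names and numbers are illustrative, not Amazon's actual pipeline.

```python
# Minimal sketch of the "AI loop": the algorithm never changes;
# the model gets better purely because more data flows through it.

def train(model, data):
    # Stand-in "algorithm": maintain a running mean of all observations.
    total = sum(data) + model["mean"] * model["n"]
    n = model["n"] + len(data)
    return {"mean": total / n, "n": n}

model = {"mean": 0.0, "n": 0}
for batch in ([1.0, 2.0], [3.0], [4.0, 5.0]):  # new data arrives each cycle
    model = train(model, batch)                # same algorithm every time

print(model["mean"])  # the model improved without touching the algorithm
```

The point of the toy: every line of "intelligence" here lives in the data fed back through `train`, which is why Lange calls data, not the algorithm, the bottleneck.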
"We initially thought algorithms really matter," Hsiao of Canvas Ventures says, adding, "A lot of these black boxes, platform companies are really making them available for free to everyone. We're surprised by how fast that whole space has actually commoditized."
How AI impacts the future - and the past
While no one disputes the appeal of AI coming up with new insights, companies run the risk of forsaking their past. That is, AI can muddy a company's personality and brand, says Michael Fitzpatrick, president and COO of PullString, a maker of talking toys.
"There's a tremendous amount of value in data, no question, but you also have to make sure that, in particular in the case of interactions with customers, you're not losing the core attributes that make your company," Fitzpatrick says.
One way AI can wreck a trustworthy brand is through accidental privacy violations.
Given the nature of learning, an AI system can discover information about a person based on surrounding data. For instance, let's say an Amazon customer doesn't want to reveal their gender. But an AI system looking at purchasing habits will learn it.
"You can't hide it," Lange says. "There are legal issues in the [European Union] because you actually have to promise that you're not trying to relearn private information, which is a challenge to always adhere to."
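Lange's point can be shown in a few lines: a toy nearest-profile rule, fed only purchase histories, recovers an attribute the customer never declared. The data, labels and matching rule below are all hypothetical, not any real retailer's logic.

```python
from collections import Counter

# Hypothetical toy data: purchase histories for customers whose gender
# is known, plus one customer who declined to disclose it.
labeled = [
    (["razor blades", "aftershave", "protein powder"], "M"),
    (["mascara", "hair ties", "yoga mat"], "F"),
    (["beard oil", "protein powder"], "M"),
    (["mascara", "nail polish"], "F"),
]
undisclosed = ["aftershave", "beard oil"]

def infer(items):
    # Crude nearest-profile rule: score each label by how many of the
    # customer's items also appear in that label's purchase histories.
    scores = Counter()
    for history, label in labeled:
        scores[label] += len(set(items) & set(history))
    return scores.most_common(1)[0][0]

print(infer(undisclosed))  # recovers "M" despite no declaration
```

Even this crude rule "relearns" the withheld attribute from surrounding data, which is exactly the promise Lange says is hard to keep.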
An AI system that adapts will learn things it's not supposed to. Also, AI algorithms and data sets are so complex and evolving that it's difficult to know precisely why AI makes the decisions it does - a phenomenon known as the "AI black box."
It's a real danger that can lead to real-world consequences, as AI gains the ability to make strategic decisions with imperfect information. This is why Gunnar Carlsson, co-founder of Ayasdi and a retired math professor at Stanford University, wants AI to essentially show its work.
"I would like to see as requirements for AI the need to involve answers with explanations," Carlsson says.
Inside the AI arms race
Making matters worse is the speed at which an AI system can learn, adapt and evolve. This is especially true when one AI system is pitted against another. At Amazon, Lange says, machine learning detects fake reviews, while at Google, machine learning produces reviews that don't get detected. They continuously learn from each other.
"As my machine learning model gets better and better, what I'm doing is actually training their machine learning model to get better and better," Lange says. "Now we have an arms race."
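The feedback Lange describes can be sketched with made-up numbers, not either company's actual systems: each side's update becomes the other side's training signal, so both ratchet upward together.

```python
# Toy arms race: a "detector" threshold and a "generator" score co-adapt.
# Each improvement by one side is exactly what trains the other.
detector_threshold = 0.5   # reviews scoring below this get flagged
generator_score = 0.4      # how "human" the fake review currently looks

for round_num in range(5):
    # Generator learns to just clear the current detection bar.
    generator_score = detector_threshold + 0.1
    # Detector learns to raise the bar past the current fakes.
    detector_threshold = generator_score + 0.05
    print(round_num, round(generator_score, 2), round(detector_threshold, 2))
```

Neither side converges; each round simply sets a higher target for the opponent, which is the "arms race" in miniature.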
This mysterious, evolving "AI black box" has created a genre of apocalyptic sci-fi movies and a narrative that AI will lead to human suffering. But don't worry, say AI experts. AI portends a brighter future, they say.
"If you look at the aging population in Japan, if you look at trying to do very risky jobs like cleaning a uranium reactor, I think a lot of these things could be done by robotics and AI," Nvidia's Velayutham says. "I think it's more positive for the human race."
Carlsson, too, doesn't think AI will lead to a doomsday scenario.
"Most of the things we do in machine learning are about optimizing some kind of objective function," he says. "Oftentimes that's chosen because of its mathematical simplicity, and it's some kind of proxy for what we recognize as welfare of people."
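Carlsson's point about proxies can be shown with a toy optimizer: the system faithfully climbs a mathematically simple objective (a quadratic standing in for "welfare"), and lands exactly at the proxy's optimum, whatever that has to do with actual welfare. Everything below is illustrative.

```python
# The proxy objective: a smooth quadratic chosen for mathematical
# simplicity, maximized at x = 3. It stands in for human welfare
# without actually being it.
def proxy_objective(x):
    return -(x - 3.0) ** 2

def grad(x):
    # Derivative of the proxy with respect to x.
    return -2.0 * (x - 3.0)

x = 0.0
for _ in range(1000):
    x += 0.1 * grad(x)  # gradient ascent on the proxy

print(round(x, 3))  # converges to 3.0, the proxy's optimum
```

The optimizer does its job perfectly; whether x = 3 corresponds to anyone's welfare depends entirely on how well the proxy was chosen, which is Carlsson's worry.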
Neura's Bondi says AI can change human behavior to achieve a positive goal, such as getting someone to exercise more or become more productive at work. AI can learn what buttons to push at the right time to get a person to do something that he or she wanted to do anyway.
That is to say, AI is here to help. It's the reason AI exists in the first place.
"Suppose you actually get all the way to where machines are doing everything," Carlsson says. "Then you would've forgotten about that part of the objective function that says people have to have something to do."