
The Darker Side Of Machine Learning

By Nand Kishor | Jul 6, 2017 | 5592 Views

Machine learning can be used for many purposes, but not all of them are good, or even intentional.

While much of the work underway focuses on developing machine learning algorithms, training these systems, and making them run faster and do more, there is a darker side to this technology. Some of that involves groups looking at what else machine learning can be used for. Some of it is simply accidental. But at this point, none of it is regulated.

"Algorithms people write algorithms," said Andrew Kahng, professor at the University of California at San Diego. "In general, algorithms used inside chip design have been deterministic and not statistical. Humans can understand how they work. But what folks expect in this world of deep learning is gleaned from fitting a neural network model on a classic Von Neumann machine, doing tenfold cross-validation, and that's it. You get statistically likely good results. But that's not something that IC designers and concepts of signoff and handoff - or, even, the concept of an ASSP/SOC product - know how to live with."

But what happens when the data is bad or the data is corrupted on purpose? This might come down to the DNA of the engineer and the product sector, according to Kahng.

That data can be corrupted inadvertently, as well. Bias is a well-known problem in training systems, but one that is difficult to prevent.

"We found that in early versions of the software we worked on that it made mistakes based on ethnicity that we weren't even aware of," said Seth Neiman, chairman of eSilicon. "You have to have a pretty sophisticated speaker and member of the culture to even point out the mistakes. It's dumb learning-like your kids didn't realize you taught them to hate peas because you hate peas."

This can quickly get out of control, too, because those systems are used to create other systems. "It used to be humans wrote software," said Neiman. "Now data writes software. We have a system where if we pump enough data into it, it will write the software for you. It's not going to write a user interface, or at least not yet."

Minimizing problems
One way to handle these problems is to add checks and balances into machine learning. "When we as humans are faced with making too many mistakes, what do we do? We ask one guy to check another guy's work," said Ting Ku, senior director of engineering at NVIDIA. "There is an adversarial network mechanism that does that cross-checking, so perhaps a few layers of redundancy are necessary to deal with that data corruption problem. This is not out of the ordinary. We've been doing this for thousands of years. When I don't trust one guy, I get two guys. If I don't trust two guys, I get a congress to vote, because we don't want a king, we want a whole bunch of people that are accountable for decisions. And we want to leverage so that even if one guy gets shot, we're still okay. Essentially that's the same answer as how we manage human society: redundancies."
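
Ku's redundancy argument maps naturally onto an ensemble in which several independently trained models vote, and any disagreement is flagged for human review. The sketch below illustrates that cross-checking idea under assumed models and data; it is not NVIDIA's mechanism.

```python
# Sketch of "get two guys, or a congress": independent models vote,
# and dissent flags inputs worth auditing. Models and data are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=50, random_state=0),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
]
preds = np.array([m.fit(X, y).predict(X) for m in models])

majority = (preds.mean(axis=0) >= 0.5).astype(int)  # the "vote"
disagree = preds.std(axis=0) > 0                     # any dissent at all
print(f"{disagree.sum()} of {len(y)} samples flagged for human review")
```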

Harry Foster, verification chief scientist, Mentor, a Siemens business, points to a "trust-and-verify" approach as the solution.

Best practices still apply, of course. Machine learning requires good coding methods, asserted Sashi Obilisetty, director of R&D at Synopsys. "You have to write secure code, you have to write more secure code, you have to have checks and balances. Let's say your output is not as you expect. You have to have redundancies to make sure that your QoR or whatever you're trying to output, you're not compromising that."

And just how bad this can get depends upon the application. "[The data corruption] problem is not as bad as autonomous driving, where there are fatal mistakes," said Norman Chang, chief technologist of the semiconductor business unit at ANSYS. "We need to learn with bad data, and customers will come up with a strategy to deal with the bad data."

While safety-critical systems are certainly more important than a failed $10 million tapeout, that tapeout is still a serious issue.

"There remains the fact that when you have a manually-written tool, there is probably one guy you can go to and say, ??This didn't get the output I expected. Can you look again at the algorithm and really convince me that this is right,' said Chris Rowen, CEO, Cognite Ventures. "Whereas especially with deep neural networks, it's very difficult to figure out how it arrived via training at that solution. This is something that's a big push in the deep learning community to have more auditability, more transparency, more analysis tools for the models themselves, and those will be important. But for some time there will be an inexplicable gap between manually written and learning models."

In terms of a more nefarious model, he said bizarre and interesting examples exist of people who come up with inputs that game the system by looking like something other than what they really are. So far that hasn't happened for the chip design process, which is otherwise a fairly secure process. Why, for example, would anyone try to fool the tools?

Bias plays a role here. So does a wrong decision made by an engineer, which may be nothing more than an honest mistake put into the database as part of the training. But that can have a significant impact, Ku said. "You reference the bad one, make a bad decision, and the whole decision-making process gets skewed to the wrong side. That's a worrisome thing to most people. Cross-checking helps with that to steer the data back."

Another strategy includes implementing time expirations for the data. "If the data is really, really old, I treat it with less importance," he said. "Data that is newer is a little bit more relevant. So hopefully the mistake that we made 10 years ago has been forgotten."
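
One way to implement that time expiration is to give each training record a sample weight that decays with its age, so old data, and the mistakes baked into it, gradually fades out. The half-life and the scikit-learn sample-weight mechanism below are assumptions for illustration.

```python
# Sketch of time-expiring data: weight records by recency so stale data
# contributes less. Half-life and model are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))
y = (X[:, 0] > 0).astype(int)
age_years = rng.uniform(0, 10, size=300)   # how old each record is

half_life = 2.0                            # hypothetical: weight halves every 2 years
weights = 0.5 ** (age_years / half_life)   # 10-year-old data weighs only a few percent

model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
```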

There is one built-in safeguard, as well. "This notion of using a diversity of data types also gives you an implicit cross-check," said Rowen. "You really take several different views of the data, even if it is the same database at its center. There may be no new true information. You may have different biases or different flaws in how that data was extracted and prepared and labeled, and even that will then create some self-checking implicit in the process."
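
Rowen's implicit cross-check can be sketched as training separate models on different views of the same records and flagging the rows where they disagree. The feature split and models below are illustrative assumptions, not a description of any vendor's flow.

```python
# Sketch of cross-checking via two "views" of the same records: rows where
# the views disagree hint at flaws in one view's extraction or labeling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=12, random_state=0)
view_a, view_b = X[:, :6], X[:, 6:]        # two views of the same records

pred_a = LogisticRegression(max_iter=1000).fit(view_a, y).predict(view_a)
pred_b = LogisticRegression(max_iter=1000).fit(view_b, y).predict(view_b)

suspect = np.flatnonzero(pred_a != pred_b)  # implicit cross-check
print(f"{len(suspect)} records where the two views disagree")
```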

Still, because machine learning is just reaching the point where it is being applied on a widespread basis, there are a whole lot of unknowns. Simon Davidmann, CEO of Imperas Software, said things can go wrong for malicious reasons, and things can be stolen and misused to make other things happen. And while a better architecture is needed so that machine learning can be implemented more quickly, because everything is going to need to do learning, he does not believe people have started considering the security aspects of it. The discussion always centers on ways to speed up the process, he said.

All of this is related to the dark side of progress, in general, Davidmann continued. "And it turns out that a lot of the embedded world is worried about the security aspects of what they do. If the wrong seeds got in there at the beginning, how do you ever find out? This is true with everything. Every now and again a medical professional does something really stupid on purpose, because something's gone wrong and humans are fallible. So we're going to get this. Things are going to go wrong in machine learning applications, and it often will be because someone puts a back door in accidentally or on purpose. They may do it maliciously. But it's a game of probabilities. As long as that probability will be very low, it will be there, and we're going to live with it."

Conclusion
Whether we can live with the errors inherent in machine learning data remains to be seen. This is a technology approach that is just beginning to roll out, with uncertainties that have yet to be fully defined. But this definitely is an area where more discussion needs to take place.


Source: Semi Engineering