Nand Kishor Contributor

Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and re-launched a Real Estate Business Intelligence Tool, where he created one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches, and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data, the Python language, and more.



The Dark Side Of Artificial Intelligence

Dec 7, 2017

Algorithms are incredible aids for making data-driven, efficient decisions. As more industries uncover their predictive power, companies are increasingly turning to algorithms to make objective and comprehensive choices. However, while we often rely on technology to avoid inherent human biases, there is a dark side to algorithm-based decisions: the potential for homogeneous data sets to produce biased algorithms.

Many of the people and companies employing algorithms hope that using technology in place of humans will reduce unconscious bias. While it would be great if it were that simple, this mindset is often a case of "mathwashing": our tendency to attribute objectivity to technology.

Consider this example: among S&P 1500 CEOs, there are more men named John than there are women of any name. If we built a predictive model of CEO performance from that data, it's possible that being named John would be a better predictor of becoming a successful CEO than being female. Is this truly reflective of a person's potential to be a CEO, or just noise from the bias in the training set? In this example, it is obvious that being named "John" is simply noise. However, when presented with similar evidence in real-world situations, it is not always as easy to spot the biases.
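The John example can be made concrete with a tiny synthetic data set. Everything below is illustrative (the names, numbers, and success labels are made up, not real CEO data): in a sample inherited from a biased history where most successful executives were men, a naive feature-scoring approach will rank the name "John" as more "predictive" than gender.

```python
# Hypothetical synthetic training set drawn from a biased history:
# most past "successful CEO" labels belong to men, several named John.
records = [
    # (name, gender, was_successful_ceo)
    ("John", "M", True), ("John", "M", True), ("John", "M", True),
    ("Mark", "M", True), ("Mark", "M", False),
    ("Anna", "F", True), ("Anna", "F", False), ("Anna", "F", False),
    ("Lisa", "F", False), ("Lisa", "F", False),
]

def success_rate(predicate):
    """Fraction of records matching `predicate` that carry a success label."""
    matching = [r for r in records if predicate(r)]
    return sum(r[2] for r in matching) / len(matching)

rate_john = success_rate(lambda r: r[0] == "John")  # 3/3 = 1.0
rate_female = success_rate(lambda r: r[1] == "F")   # 1/5 = 0.2

# The name "John" looks like a far stronger "predictor" of CEO success
# than female gender -- pure noise inherited from the biased sample.
print(rate_john, rate_female)
```

A model trained naively on such data would happily exploit the name feature; nothing in the mathematics distinguishes historical bias from genuine signal.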

We've started to see this in less consequential but still troubling situations. Tay, Microsoft's Twitter bot, turned misogynistic and antisemitic. If you search for images of gorillas on Google, you may be shown images of black men. If you're a non-native English-speaking student and plagiarize part of an essay, Turnitin (a plagiarism detection software) is more likely to flag your cheating than it is a native speaker's.

To avoid situations like this, I strongly believe that any algorithm making decisions about opportunities that affect people's lives requires a methodical design and testing process to ensure that it is truly bias-free. Because when you employ AI itself to remove bias from an algorithm, the results can be extraordinary.

Take, for example, what we're working on at pymetrics, where we build algorithms based on top performers to select the ideal candidates for jobs. Sometimes we have to build our algorithms based on a homogeneous group of people - all white men, for example.

A crucial part of our algorithm development process is to correct for bias so that anyone - regardless of their gender or ethnicity - has the same probability of matching to any job. I strongly believe it is the duty of any algorithm's creator to check for bias, remove it, and monitor outcomes to ensure it is creating equal access to opportunity.

Other technology-driven platforms, like Humanyze and HireVue, are also developing processes to remove bias from algorithms and create equal access to job opportunities. By using the bias-free algorithms developed by these types of companies, we've seen global organizations dramatically transform their gender, ethnic, and socioeconomic diversity in ways they've never been able to achieve in the past. We've seen financial services companies take roles that were previously 80-20 male-to-female to a 50-50 split. We're seeing algorithms move the needle for diversity in ways that were never possible when we relied solely on humans to make decisions.

In the next five to ten years, algorithms will be making decisions that directly affect our health, our job prospects, and our ability to get loans. They have the potential to be our most powerful tool for making efficient, effective, and bias-free decisions, but only if we design them intentionally.

Source: Forbes