Google explains how artificial intelligence becomes biased against women and minorities

Aug 29, 2017


Time and again, research has shown that the machines we build reflect how we see the world, whether consciously or not. For artificial intelligence that reads text, that might mean associating the word "doctor" with men more than women, or image-recognition algorithms that misclassify black people as gorillas.

Google, which was responsible for the gorilla error in 2015, is now trying to educate the masses on how AI can accidentally perpetuate the biases held by its makers, with a short explainer video. It's a nice bit of public relations, but also a pretty good overview of simple ways AI programmers can bias their algorithms.

The video outlines three kinds of bias:

Interaction bias: Users (you and me!) bias an algorithm by the way we interact with it. As an example, Google asked users to draw a shoe. Most drew a men's shoe, so the system never learned that high heels are also shoes.

Latent bias: The algorithm incorrectly correlates ideas with gender, race, sexuality, income, etc. This is how "doctor" ends up correlated with men, simply because that's what stock imagery shows.

Selection bias: The data used to train the algorithm over-represents one population, making the model work better for that group at the expense of others. If image recognition is trained only on photos of white people, white people will win AI-judged beauty contests. (A minimal sketch of this effect follows the list.)
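To make selection bias concrete, here is a minimal sketch, not taken from Google's video or the Quartz article: a logistic-regression classifier is trained on synthetic data that over-represents one group, then evaluated on balanced held-out samples from each group. All group definitions, feature distributions, and numbers below are invented purely for illustration.

```python
# Hypothetical illustration of selection bias: a model trained mostly on
# group A performs noticeably worse on under-represented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic group whose two features are centered at `shift`;
    # the label boundary also moves with `shift`, so one global
    # linear model cannot fit both groups well at once.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Training data: 95% group A, 5% group B -- the selection bias.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out evaluation, one test set per group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

Run as written, this tends to score well on group A and near chance on group B: the model simply never saw enough of group B to learn its decision boundary, which is exactly the pattern selection bias predicts.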

These aren't the only mechanisms by which AI becomes biased, but they're a good starting point for getting acquainted with the idea. For a deeper dive, read some of Quartz's previous coverage on the subject:

  • If you're not a white male, artificial intelligence's use in healthcare could be dangerous
  • When computers learn human languages, they also learn human prejudices
  • When artificial intelligence judges a beauty contest, white people win
  • MIT researchers can now track AI's decisions back to single neurons
  • AI in the prison system: To fix algorithmic bias, we first need to fix ourselves
  • Can gender bias be coded out of algorithms?

Source: QZ