
The field of AI research is about to get way bigger than code

By Nand Kishor | Nov 16, 2017

When it comes to developing artificial intelligence, the largest technology companies in the world are all-in. Google and Microsoft say they're "AI-first," and businesses like Facebook and Amazon wouldn't be possible without the scalable personalization that AI allows.

But if you look for research on how artificial intelligence affects society (for example, how algorithms used in criminal justice can discriminate against people of color, or whether the data used to train AI contains implicit bias against women and minorities), there's almost no academic or corporate research to be found.

Kate Crawford, principal researcher at Microsoft Research, and Meredith Whittaker, founder of Open Research at Google, want to change that. Today they announced the AI Now Institute, a research organization that will explore how AI is affecting society at large. AI Now will be cross-disciplinary, bridging the gap between the data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence.

"The amount of money and industrial energy that has been put into accelerating AI code has meant that there hasn't been as much energy put into thinking about social, economic, ethical frameworks for these systems," Crawford tells Quartz. "We think there's a very urgent need for this to happen faster."

AI Now released a report last month that outlined many of the issues the institute's researchers will explore more fully. Initially, the founders plan to hire fewer than 100 researchers.

The organization's advisory board includes California supreme court justice Mariano-Florentino Cuéllar, NAACP Legal Defense Fund president Sherrilyn Ifill, and former White House CTO Nicole Wong. Other board members are Cynthia Dwork, the creator of differential privacy (an idea that has become a standard for protecting individuals' data in large databases), and Mustafa Suleyman, cofounder of DeepMind.
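
For readers unfamiliar with the idea, here is a minimal sketch of differential privacy's most common building block, the Laplace mechanism: noise scaled to a query's sensitivity divided by the privacy budget epsilon hides any single person's contribution to a statistic. The dataset and function names below are hypothetical illustrations, not drawn from AI Now or Dwork's work.

    import numpy as np

    def dp_count(records, predicate, epsilon=0.5):
        # A count query has sensitivity 1: adding or removing one
        # person changes the result by at most 1.
        true_count = sum(1 for r in records if predicate(r))
        # Laplace mechanism: noise with scale sensitivity/epsilon
        # makes this single release epsilon-differentially private.
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical example: a privacy-preserving count of ages over 50.
    ages = [23, 45, 67, 34, 58, 71, 29]
    print(dp_count(ages, lambda a: a > 50, epsilon=0.5))

A smaller epsilon means more noise and stronger privacy; the guarantee is that the released count looks nearly the same whether or not any one individual's record is in the data.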

The institute will be based at New York University, where many academics studied the artificial neural networks responsible for today's AI boom. AI Now is partnered with eight NYU schools, including the NYU School of Law and the Steinhardt School of Culture, Education, and Human Development.

AI Now will focus on four major themes:
  1. Bias and inclusion (how can bad data disadvantage people?)
  2. Labor and automation (who doesn't get hired when AI chooses?)
  3. Rights and liberties (how does government use of AI affect the way it interacts with citizens?)
  4. Safety and critical infrastructure (how can we make sure healthcare decisions are made safely and without bias?)

Crawford and Whittaker have worked for years on such issues within Google and Microsoft. One barrier to creating solutions to AI's societal issues is the lack of a shared language between those who build AI and those studying its implications and effects.

"Part of what we're doing is talking to the people who build the systems about the real practices and processes around this. Where are the assumptions?" says Whittaker. "And what don't you know, that you would want to know, if you were going to do this in a way that you felt was responsible?"

Source: QZ