Nand Kishor, Contributor

Nand Kishor is the Product Manager of House of Bots. After finishing his studies in computer science, he ideated and re-launched a Real Estate Business Intelligence Tool, creating one of the leading business intelligence tools for property price analysis in 2012. He also writes, researches and shares knowledge about Artificial Intelligence (AI), Machine Learning (ML), Data Science, Big Data and the Python language.


Google, Tesla, Amazon think of how to use AI safely

Apr 21, 2017

A lot of big claims are made about the transformative power of artificial intelligence. But it is worth listening to some of the big warnings too. Last month, Kate Crawford, principal researcher at Microsoft Research, warned that the increasing power of AI could result in a "fascist's dream" if the technology were misused by authoritarian regimes.

"Just as we are seeing a step function increase in the speed of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism," Ms Crawford told the SXSW tech conference. The creation of vast data registries, the targeting of population groups, the abuse of predictive policing and the manipulation of political beliefs could all be enabled by AI, she said. Ms Crawford is not alone in expressing concern about the misapplication of powerful new technologies, sometimes in unintentional ways.

Nuanced judgement

Sir Mark Walport, the British government's chief scientific adviser, warned that the unthinking use of AI in areas such as medicine and the law, which involve nuanced human judgment, could produce damaging results and erode public trust in the technology. Although AI has the potential to enhance human judgment, it also risks baking in harmful prejudices and giving them a spurious sense of objectivity. "Machine learning could internalise all the implicit biases contained within the history of sentencing or medical treatment - and externalise these through their algorithms," he wrote in an article in Wired.

As ever, the dangers are a lot easier to identify than they are to fix. Unscrupulous regimes are never going to observe regulations constraining the use of AI. But even in functioning law-based democracies it will be tricky to frame an appropriate response. Maximising the positive contributions that AI can make while minimising its harmful consequences will be one of the toughest public policy challenges of our times.

For starters, the technology is difficult to understand and its use is often surreptitious. It is also becoming increasingly hard to find independent experts who have not been captured by the industry or otherwise conflicted. Driven by something approaching a commercial arms race in the field, the big tech companies have been snapping up many of the smartest academic experts in AI.

Much cutting-edge research is therefore in the private rather than public domain.

To their credit, some leading tech companies have acknowledged the need for transparency, albeit belatedly. There has been a flurry of initiatives to encourage more policy research and public debate about AI.

Elon Musk, founder of Tesla Motors, has helped set up OpenAI, a non-profit research company pursuing safe ways to develop AI. Amazon, Facebook, Google DeepMind, IBM, Microsoft and Apple have also come together in the Partnership on AI to initiate more public discussion about the real-world applications of the technology.

Verifiable data

Mustafa Suleyman, co-founder of Google DeepMind and a co-chair of the Partnership, says AI can play a transformative role in addressing some of the biggest challenges of our age. But he accepts that the rate of progress in AI is outstripping our collective ability to understand and control these systems. Leading AI companies must therefore become far more innovative and proactive in holding themselves to account. To that end, the London-based company is experimenting with verifiable data audits and will soon announce the composition of an ethics board to scrutinise all the company's activities.

But Mr Suleyman suggests our societies will also have to devise better frameworks for directing these technologies for the collective good. "We have to be able to control these systems so they do what we want when we want and they don't run ahead of us," he says in an interview for the FT Tech Tonic podcast. Some observers say the best way to achieve that is to adapt our legal regimes to ensure that AI systems are "explainable" to the public.

That sounds simple in principle, but may prove fiendishly complex in practice. Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of AI is that we become overly reliant on "mindless minds" that we do not fully comprehend. She argues that the purpose and effect of these algorithms must therefore be testable and contestable in a courtroom. "If you cannot meaningfully explain your system's decisions then you cannot make them," she says. We are going to need a lot more human intelligence to address the challenges of AI.


Source: Financial Times