Starting May 25, 2018, the European Union's GDPR may require algorithms to explain their output, a prospect that has led some to claim it would make deep learning effectively illegal.
Will GDPR Make Machine Learning Illegal?
What counts as an explanation? There are at least two kinds:
- Global explanation: how a machine learning algorithm works overall, which may be very hard for complicated methods like deep learning.
- Local explanation: what factors contributed to a particular decision impacting a specific person, which is easier. There are already algorithms for this, such as LIME (Local Interpretable Model-Agnostic Explanations), which can explain the predictions of any machine learning classifier (see the sketch after this list).

If, for example, a person is declined a mortgage, should she know which factors contributed to the decision? On the one hand, if an algorithm denies you something, you want to know why and have a chance to appeal. On the other hand, enough such explanations could allow the decision boundary to be reverse-engineered, letting bad actors game the system. That is very undesirable in many cases (e.g. security applications).
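To make local explanation concrete, here is a minimal sketch of LIME explaining a single prediction. It assumes the open-source lime package and scikit-learn; the loan-style feature names, the data, and the model are invented for illustration.

```python
# A minimal sketch of a LIME local explanation, assuming the open-source
# `lime` package and scikit-learn. Feature names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Toy training data: three hypothetical attributes of a loan applicant.
feature_names = ["annual_income", "credit_score", "debt_ratio"]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)  # 1 = approved

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["declined", "approved"]
)

# Explain one applicant's prediction: which features pushed it which way?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The printed weights show how much each feature pushed this one prediction towards "approved" or "declined"; they say nothing about the model's global structure, which is exactly the local/global distinction above.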
AI and its challenges
AI-based systems are often opaque 'black boxes' that are difficult to scrutinise. As more and more of our economic, social and civic interactions - from credit markets and health insurance applications to recruitment and criminal justice systems - are carried out by algorithms, concerns have been raised about the lack of transparency behind the technology, which leaves individuals with little understanding of how decisions are made about them. We need proper safeguards in place to make sure that the decisions being made about us are actually fair and accurate.
AI and the EU's General Data Protection Regulation
In 2016 the EU General Data Protection Regulation (GDPR), Europe's new data protection framework, was approved. The regulation comes into force across Europe - and the UK - on 25 May 2018. It has been widely and repeatedly claimed that the new regulation legally mandates a 'right to explanation' of all decisions made by automated or artificially intelligent algorithmic systems. This 'right to explanation' is viewed as an ideal mechanism to enhance the accountability and transparency of automated algorithmic decision-making.
Such a right would enable people to ask how a specific decision (e.g. being declined insurance or being denied a promotion) was reached.
An explanation can be offered in various ways. There are at least two possible kinds of algorithmic explanation: an explanation of 'system functionality' and an explanation of the 'rationale' behind an individual decision. Explaining the algorithmic methods used to assess creditworthiness or to set interest rates (system functionality) is not the same as explaining how a certain rate was set or why a credit card application was declined.
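The difference can be sketched in code. The example below assumes a plain logistic-regression model on invented data: the model's learned weights stand in for 'system functionality', while the per-feature contributions for one applicant stand in for the 'rationale' of an individual decision.

```python
# A sketch contrasting 'system functionality' with the 'rationale' of one
# decision, using an assumed logistic-regression model on invented data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["income", "existing_debt", "years_employed"]
X = rng.normal(size=(400, 3))
y = (2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# System functionality: how the model works in general, i.e. the learned
# weight attached to each feature across all decisions.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"global weight for {name}: {coef:+.3f}")

# Rationale of an individual decision: why this applicant got this
# outcome, i.e. the per-feature contributions to this one score.
applicant = X[0]
for name, contrib in zip(feature_names, model.coef_[0] * applicant):
    print(f"contribution of {name} for this applicant: {contrib:+.3f}")
```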
Together with Turing researchers Dr. Brent Mittelstadt and Prof. Luciano Floridi, we examined this claim. Unfortunately, contrary to what was hoped, our research revealed that the GDPR is likely to grant individuals only information about the existence of automated decision-making and about 'system functionality', but no explanation of the rationale behind a decision. In fact, the 'right to explanation' is mentioned only once in the whole GDPR, in Recital 71, which lacks the legal power to establish stand-alone rights. The purpose of a Recital is to provide guidance on how to interpret the operational part of a regulatory framework if there is ambiguity; our research found no such ambiguity about the minimum requirements that would need further clarification.
Placing the 'right to explanation' in a Recital, together with the fact that the European Parliament's recommendation to make this right legally binding was not adopted, suggests that European legislators did not want to grant it the same legal status as the other safeguards in the legally binding text of Article 22 GDPR. Of course, that does not mean that data controllers could not voluntarily decide to offer explanations, nor does it rule out that future jurisprudence or law built on this Recital could create such a right.
Consider, for example, an explanation in counterfactual form: 'You were denied a loan because your annual income was £30,000. If your income had been £45,000, you would have been offered a loan.'
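As a toy illustration of how such a counterfactual statement might be produced, the sketch below assumes an invented lender model with a fixed income threshold; real counterfactual methods search over many features under plausibility constraints.

```python
# Toy counterfactual generation. The loan_model threshold (£45,000) is
# an invented assumption, not a real lender's rule.
def loan_model(annual_income: float) -> bool:
    """Hypothetical lender: approve if annual income is at least £45,000."""
    return annual_income >= 45_000

def income_counterfactual(income: float, step: float = 1_000.0) -> float:
    """Find the smallest income (in £1,000 steps) that flips the decision."""
    candidate = income
    while not loan_model(candidate):
        candidate += step
    return candidate

income = 30_000.0
if not loan_model(income):
    needed = income_counterfactual(income)
    print(f"Denied at £{income:,.0f}; approved if income were £{needed:,.0f}.")
```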
There is no single, neat statutory provision labelled the 'right to explanation' in Europe's new General Data Protection Regulation (GDPR). But nor is such a right illusory.
Articles 13-15 provide rights to 'meaningful information about the logic involved' in automated decisions. This is a right to explanation, whether one uses the phrase or not.
As in other areas, the GDPR is less than clear. As a result, the idea that the GDPR mandates a 'right to explanation' from machine learning models - meaning that those significantly affected by such models are owed an account of how the model made a particular decision - has become a controversial subject. Some scholars have spoken out vehemently against the mere possibility that such a right exists. Others, such as the UK's own Information Commissioner's Office, seem to regard the right as self-evident.
Ultimately, I have some good news for lawyers and privacy professionals... and some potentially bad news for data scientists.