The internet can be mysteriously opaque. Whether you've been turned down for that online loan, or you're just wondering why you're being shown irrelevant ads, it's human nature to want to know the inner workings of internet decisions. On that theme, the EU's new General Data Protection Regulation includes restrictions on automated decision-making and a right to explanation. This is potentially a major headache for every company using AI to power its services.
The whole point of AI, and deep neural networks in particular, is the ability to leverage large volumes of data to infer correlations that we as humans wouldn't pick out. So while it's theoretically possible to explain why a decision was made, by sharing every individual algorithm outcome, the data used to tune the algorithm, and the way each data point combines, in practice almost nobody would understand the result.
Now add to the mix proprietary data sources, differential privacy (which deliberately adds noise to individual data points), and algorithms that keep learning over time. I think of the impact of AI as moving us from a neat causal world, where decisions are made from static flowcharts, to more of a quantum or biological world, where the system is too complex to understand and can only be judged by its outcomes. Perhaps, then, a better regulatory approach would be to learn from medicine and require proof of efficacy, not an account of individual pathways.
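To see why differential privacy makes individual explanations even harder, here is a minimal sketch of its core trick: releasing a statistic only after adding calibrated random noise, so no single person's data can be inferred from the output. The function names and parameter values are illustrative, not from any particular library; this is the standard Laplace mechanism, with noise scaled to the query's sensitivity divided by the privacy budget epsilon.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential samples is Laplace-distributed
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count: int, sensitivity: float = 1.0,
                  epsilon: float = 0.5) -> float:
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    # Smaller epsilon means more noise and stronger privacy.
    return true_count + laplace_noise(sensitivity / epsilon)

# Each release of the same underlying count gives a different answer,
# which is precisely what frustrates a per-decision explanation.
print(private_count(100))
print(private_count(100))
```

Any single noisy answer can be several units away from the truth, yet averages over many queries remain accurate, which is why such systems are best judged by aggregate outcomes rather than individual pathways.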
The regulation restricts solely automated decisions that “significantly affect” EU citizens. This includes techniques that evaluate a person’s “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.” At the same time, the legislation provides what Goodman calls a “right to explanation.” In other words, the rules give EU citizens the option of reviewing how a particular service reached a particular algorithmic decision about them.