The internet can be mysteriously opaque. Whether you've been turned down for that online loan, or you're just wondering why you're being shown irrelevant ads, it's human nature to want to know the inner workings of internet decisions. On that theme, the EU's new General Data Protection Regulation (GDPR) includes restrictions on automated decision-making and a right to explanation. This is potentially a major headache for every company using AI to power their services.

The whole point of AI, and deep neural networks in particular, is the ability to leverage large volumes of data to infer correlations that we wouldn't pick out as humans. So while it's theoretically possible to explain why a decision was made, by sharing every individual algorithm output, the data used to tune the algorithm, and how each data point combines, in practice nobody would understand it.
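To see why, consider a toy sketch (the applicant features, weights, and network here are all made up for illustration, not any real lender's model): even for a network small enough to print in full, the "complete explanation" of a single decision is just a wall of numbers with no human-readable rationale.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical applicant features: [income, debt ratio, years at address]
applicant = np.array([52_000.0, 0.35, 4.0])

# A tiny two-layer network with arbitrary weights standing in for a trained model
W1 = rng.normal(size=(3, 8))   # input -> 8 hidden units
w2 = rng.normal(size=8)        # hidden -> single output

hidden = np.tanh(applicant @ W1)           # intermediate activations
score = 1 / (1 + np.exp(-(hidden @ w2)))   # sigmoid -> approval probability

# The "complete explanation": every number involved in the decision
print("W1:\n", W1)
print("w2:", w2)
print("hidden activations:", hidden)
print(f"approval score: {score:.3f}")
print("decision:", "approve" if score > 0.5 else "decline")
```

And that's for three inputs and eight hidden units; a production model has millions of parameters.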

Let's now add to the mix proprietary data sources, differential privacy deliberately adding noise to individual data points (sketched below), and algorithms that keep learning over time. I think of the impact of AI as moving us from a neat causal world, where decisions are made from static flowcharts, to more of a quantum or biological world, where the system is too complex to understand and can only be judged by its outcomes. Perhaps, then, a better regulatory approach would be to learn from medicine and require proof of efficacy, not an account of individual pathways?
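To make the differential-privacy point concrete, here's a minimal sketch of the standard Laplace mechanism (the dataset, query, and epsilon value are all hypothetical): noise scaled to the query's sensitivity divided by the privacy budget is added to the true answer, so the published figure is useful in aggregate while any individual's exact contribution is deliberately blurred.

```python
import numpy as np

rng = np.random.default_rng(7)

ages = rng.integers(18, 90, size=1_000)   # hypothetical dataset
true_count = int(np.sum(ages > 65))       # query: how many people are over 65?

sensitivity = 1.0   # adding or removing one person changes the count by at most 1
epsilon = 0.5       # privacy budget: smaller means more privacy, more noise

# Laplace mechanism: noise drawn with scale = sensitivity / epsilon
noisy_count = true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)
print(f"true count: {true_count}, released count: {noisy_count:.1f}")
```

Any "explanation" of a decision built on such data would have to account for noise that was injected precisely so it couldn't be traced back to individuals.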