There's an increasing focus on why decisions powered by artificial intelligence (AI) are made the way they are. The difficulty is that researchers and developers often don't know, or can't clearly articulate, why their AI has made a particular decision. The truth is that techniques like neural networks, unsupervised learning and reinforcement learning make tracing the path of any individual decision prohibitively complicated.
The benchmark against which AI is judged in these situations is often the incumbent system it's replacing. If the system being replaced is a rules-based flow chart (e.g. for processing mortgage applications), there will be a clear path for each decision. However, if the system being replaced is a human expert (e.g. a recruiter picking candidates to interview), there may be no rule book to refer to, but at least the human expert thinks in broadly the same way as you or I do and can share their thought process.
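To make the contrast concrete, here's a minimal sketch of what "a clear path for each decision" looks like in a rules-based system. The thresholds and field names are purely illustrative, not real lending criteria; the point is that the system can hand back the exact rules that fired.

```python
# Illustrative rules-based mortgage check: every outcome carries the
# explicit rules that produced it, so the decision path is fully auditable.
# Thresholds (620, 4.5x income) are hypothetical examples, not real criteria.

def assess_application(income, loan_amount, credit_score):
    reasons = []
    if credit_score < 620:
        reasons.append("credit score below 620")
    if loan_amount > income * 4.5:
        reasons.append("loan exceeds 4.5x annual income")
    approved = not reasons
    return approved, reasons or ["all rules passed"]

approved, trail = assess_application(income=50_000, loan_amount=300_000, credit_score=700)
print(approved, trail)  # 'trail' is a complete explanation of the decision
```

A declined applicant can be told precisely which rule they failed; that level of traceability is exactly what a learned model typically cannot offer.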
Part of the challenge with AI is that it may observe the world in a completely different way from humans and pay attention to different signals. The article below gives a glimpse into how AI understands music. Spoiler alert: it's nothing like the way you or I do.
So here's my question:
Will we just have to learn to live with the fact that we won't get an explanation for decisions made by AI, or is the lack of explanation a by-product of the fact that research to date has focused on getting AI to make better decisions, not on making transparent ones?
The thing is, the algorithms known as neural networks are essentially black boxes. We've developed the high-level concepts that govern them and designed the networks themselves, but picking apart the decisions they make on their own is intensely difficult because of their internal complexity.
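To see why, here's a minimal sketch of a tiny two-layer network (with arbitrary illustrative weights, not a trained model). Every parameter is fully visible, yet the output score is a sum of dozens of weighted interactions passed through a nonlinearity, so there is no single rule or weight you can point to as "the reason" for a given prediction.

```python
# Illustrative two-layer network: 4 inputs -> 16 hidden units -> 1 output.
# All 80+ weights are inspectable, but the decision is spread across
# every one of them, so no individual weight explains the output.
import numpy as np

rng = np.random.default_rng(0)          # fixed seed for reproducibility
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def predict(x):
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU hidden layer
    return float(hidden @ W2 + b2)       # single output score

x = np.array([1.0, 0.5, -0.3, 2.0])
score = predict(x)
print(score)  # the score emerges from ~80 multiply-adds plus a nonlinearity
```

Contrast this with the rules-based flow chart: there, each decision maps to a named rule; here, "why" is distributed across the whole weight matrix, and that gap only widens as networks grow to millions of parameters.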