Sometimes AI-powered systems do completely unexpected things that take the researchers who designed them by surprise. My favourite example is the first time a Google self-driving car encountered a fixed-wheel bicycle at a four-way stop. The cyclist stopped at the junction and did a track stand (rocking back and forth to stay upright while stationary), and the car simply couldn't work out what was going on (full story here).

The problem was that the Google car lacked the basic understanding of the world that any human driver would have had, and the consequence was unexpected. In this case the outcome was harmless for everyone involved, but it's easy to imagine situations that lead to more dangerous results.

I recently read through IBM's response to the White House RFI on AI, and they highlighted the same challenge: can we provide the AIs we design with a working understanding of the world we live in? Or, if that isn't possible, can we equip them with a decision engine for handling situations they haven't encountered before? As AI advances, developers will need to focus not just on optimising for the targeted tasks, but also on ensuring these systems can safely deal with the unexpected.
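
To make the "decision engine" idea a little more concrete, here's a toy sketch of one common pattern: when the perception model's confidence drops below a threshold, the system falls back to a conservative default action rather than acting on a shaky guess. Everything here (the Perception class, labels, threshold) is hypothetical and purely illustrative; it's not how any real self-driving stack works.

```python
# A minimal sketch of a confidence-based fallback policy, assuming a
# hypothetical perception model that returns a label and a confidence score.
# All names here are illustrative, not taken from a real system.

from dataclasses import dataclass


@dataclass
class Perception:
    label: str         # e.g. "cyclist_waiting", "cyclist_moving", "unknown"
    confidence: float  # model's confidence in the label, 0.0 to 1.0


def choose_action(perception: Perception, threshold: float = 0.9) -> str:
    """Pick a driving action, falling back to a conservative default
    whenever the perception system is unsure about what it is seeing."""
    if perception.confidence < threshold:
        # Unfamiliar situation (like a track-standing cyclist): do the
        # safest thing we know rather than acting on a low-confidence guess.
        return "hold_position_and_reassess"
    if perception.label == "cyclist_waiting":
        return "proceed_slowly"
    if perception.label == "cyclist_moving":
        return "yield"
    return "hold_position_and_reassess"


print(choose_action(Perception(label="cyclist_moving", confidence=0.55)))
# -> hold_position_and_reassess
```

The interesting design question is what that safe default should be, and whether "stop and wait" is actually safe in every context; that's exactly the kind of judgement a human driver makes without thinking.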