As a child grows, their personality is shaped largely by their experiences. From accent to what counts as polite behaviour, big chunks of who we are come from what we've seen and the culture around us. The same is true of intelligent machines: just substitute data for culture.

Imagine a machine learning algorithm built to spot mammals in photos, trained on Facebook photos in which animals have been tagged. This machine may come to believe there are far more cats in the world than there actually are, or fail to recognise that a whale is a mammal. The problem here isn't the algorithm, but the biases in the data it has learnt from.
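To make this concrete, here's a minimal sketch using scikit-learn. Everything in it is made up for illustration: the features, class sizes, and distributions are hypothetical stand-ins for "photo features", not real Facebook data. A toy classifier trained on a skewed sample, where cats vastly outnumber whales, over-predicts cats even when the world it is shown is perfectly balanced.

```python
# A minimal sketch of the toy scenario above, using scikit-learn and
# entirely made-up data (features, class sizes, and distributions are
# hypothetical, chosen only to illustrate the effect).
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Skewed training set: social-media photos over-represent cats.
n_cats, n_whales = 950, 50
X_train = np.vstack([
    rng.normal(0.0, 1.0, size=(n_cats, 2)),    # "cat" photo features
    rng.normal(1.5, 1.0, size=(n_whales, 2)),  # "whale" photo features
])
y_train = np.array(["cat"] * n_cats + ["whale"] * n_whales)

model = LogisticRegression().fit(X_train, y_train)

# The (toy) real world contains equal numbers of cats and whales.
X_world = np.vstack([
    rng.normal(0.0, 1.0, size=(500, 2)),
    rng.normal(1.5, 1.0, size=(500, 2)),
])
print(Counter(model.predict(X_world)))
# Far more than half the predictions come back "cat": the skew comes
# from the training data, not from the learning algorithm itself.
```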

When planning how to train machines, it is important to think carefully about the training data and what biases it may introduce into the system. In most cases there is no magic, perfect training data set that simply reflects the real world. So, when deploying machines trained on biased data, reflect on how those biases could affect their operation, and whether you need to compensate.
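One common way to compensate is to reweight training examples so that under-represented classes count for more. As a sketch of this idea, scikit-learn's `class_weight="balanced"` option weights each class inversely to its frequency in the training set; the data below are again hypothetical, set up as in the previous sketch.

```python
# A sketch of one compensation technique: reweight training examples so
# rare classes count for more. scikit-learn's class_weight="balanced"
# weights each class inversely to its frequency in the training set.
# The data are hypothetical, set up as in the previous sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 1.0, size=(950, 2)),   # over-represented class
                     rng.normal(1.5, 1.0, size=(50, 2))])   # under-represented class
y_train = np.array([0] * 950 + [1] * 50)

plain = LogisticRegression().fit(X_train, y_train)
balanced = LogisticRegression(class_weight="balanced").fit(X_train, y_train)

# Evaluate on a balanced sample of fresh data.
X_world = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
                     rng.normal(1.5, 1.0, size=(500, 2))])
print((plain.predict(X_world) == 1).mean())     # well below the true 0.5
print((balanced.predict(X_world) == 1).mean())  # much closer to 0.5
```

Reweighting isn't a cure-all, though: if the biased data is missing a category entirely (say, no whale photos at all), no amount of reweighting will recover it.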