What happens when machine learning algorithms pick up on human biases? Naturally, the bias scales.
This article covers the challenge that machine learning practitioners face when they set their algorithms loose on training datasets. What will be the consequences of their choice of data? In what contexts will the results be used?
The results can be surprisingly offensive, and the anti-bias techniques now being employed to counter this are becoming increasingly sophisticated.
There are many examples of algorithmic bias, where algorithms help propagate inequity. A translation tool produced female associations with family and male associations with career, while Google’s photo-tagging service mistakenly labeled Black photo subjects as gorillas. Mistakes like these usually aren’t due to an actual computing error or an evil cackling data scientist behind a partition. They occur when an algorithm is trained on data that doesn’t represent a population well enough, or when the algorithm is irresponsibly designed to optimize for a single type of decision.
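To make that first failure mode concrete, here is a minimal sketch, using scikit-learn and entirely synthetic data, of how a model trained on a sample that under-represents one group can look accurate overall while performing far worse for that group. The group sizes, feature distributions, and decision rules below are invented for illustration, not drawn from any real dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic group: features centered at `shift`, with a
    group-specific decision threshold (hypothetical data)."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# The majority group dominates the training sample; the minority
# group contributes only 2% of it.
X_maj, y_maj = make_group(5000, shift=0.0)
X_min, y_min = make_group(100, shift=1.5)

X = np.vstack([X_maj, X_min])
y = np.concatenate([y_maj, y_min])

# A single model optimized for aggregate accuracy effectively
# fits the majority group's decision boundary.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Evaluated on fresh samples from each group, the under-represented
# group sees a much lower accuracy, even though the overall number
# would look respectable.
for name, shift in [("majority", 0.0), ("minority", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

In runs like this, the majority group tends to score far above the minority group, which hovers near chance. The aggregate accuracy hides the disparity, which is why evaluating error rates per group, rather than in aggregate, is a common first step in auditing a model for bias.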