AI poses an interesting puzzle. On the one hand, we praise AI for matching and surpassing human capability; on the other, we question the outcomes it produces when these are undesirable.

Take recent EU legislation on AI for example. The law, which will take effect in 2018, states that decisions 'based solely on automated processing' will be banned if they have an 'adverse legal effect' on the person concerned. As the article below points out, we could interpret such actions by a machine either as a 'glitch' or as a 'bad decision', i.e. as a technical fault, or as the choice of an autonomous agent that had the ability to make the 'right' decision.

Assessing which interpretation is more plausible gets to the heart of two important questions: what is the relationship between artificially intelligent machines and humans? And are these machines moral agents, and therefore morally culpable for their actions? These questions are deeply interwoven - it is impossible to answer one without saying something about the other.

Let's start with the 'glitch' interpretation: the machine as fallible tool. A glitch could result from a fault in the AI technology itself, or from inaccuracies in the initial data set (provided by humans) on which the machine was trained. Legally, one can imagine that in both cases humans would be judged culpable for the fault, as the machines lie under their purview. Such a judgement would also deny machines the level of agency necessary to be morally responsible.

Things get more complicated if we think of machines as being capable of making 'good' and 'bad' decisions. The 'right to explanation' proposed by this law essentially asks how a computer was able to produce a bad result. I don't think this is simply a question about technical craftsmanship - or the lack of it - but about examining what kinds of things machines are learning. At one level, basic raw data is fed into a computer: labelled pictures of cats, say, which, if learned from correctly, allow the computer to recognise a never-seen-before cat.
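To make the cat example concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn). The features, values and labels are invented for demonstration - real image classifiers learn from raw pixels and far more data - but the underlying idea of generalising from labelled examples to an unseen input is the same.

```python
# Illustrative sketch only: a toy 'cat vs not-cat' classifier showing how
# labelled examples let a model generalise to a never-seen-before input.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend each image has been reduced to two crude, hypothetical features,
# e.g. 'ear pointiness' and 'whisker density'.
X_train = np.array([
    [0.9, 0.8],  # cat
    [0.8, 0.9],  # cat
    [0.2, 0.1],  # not a cat
    [0.1, 0.3],  # not a cat
])
y_train = np.array([1, 1, 0, 0])  # 1 = cat, 0 = not a cat

model = LogisticRegression()
model.fit(X_train, y_train)

# A 'never-seen-before cat': features similar to, but not identical with,
# the training examples.
new_image = np.array([[0.85, 0.75]])
print(model.predict(new_image))  # -> [1], i.e. classified as a cat
```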

But, as Microsoft's Tay bot showed, computers are capable of learning more complex data like social practices, norms and moral principles. So we can imagine that, when faced with a moral dilemma, machines do exactly what humans do: they make a choice. Though understanding inputs can help us to understand an individual's moral reasoning or standpoint, these inputs cannot be reduced to easily crunchable data that one could use to predict moral decision-making, for machines or for humans for that matter. Even if we trained machines to be hardcore consequentialists, there would be cases where it would be difficult to determine the outcome of a dilemma. For example, while a machine may learn to maximise the overall good in a given situation, its moral decision still depends on what it has learned to count as 'good', on whether that 'good' is to be upheld universally or only contextually, and so on. This isn't a matter of infinite regress but of understanding that apparently simple moral questions rarely offer simple answers. Therefore, if computers do make moral judgements, these ought to be understood as autonomous or semi-autonomous actions: computers using their knowledge to come to conclusions about moral dilemmas. If so, we can call computers moral agents.
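As a crude, purely illustrative sketch of the consequentialist point above (the options and 'good' scores here are entirely made up), the same maximising rule yields different decisions depending on what the machine has learned to count as 'good':

```python
# Illustrative sketch only: a crude 'consequentialist' decision rule that
# picks the option with the highest learned 'good' score. The options and
# scores are invented; the point is that the decision depends entirely on
# the valuation of 'good' the machine has learned.

def choose_action(options, goodness):
    """Return the option whose learned 'good' score is highest."""
    return max(options, key=lambda option: goodness[option])

options = ["swerve", "brake"]
goodness_a = {"swerve": 0.7, "brake": 0.4}  # one learned valuation of 'good'
goodness_b = {"swerve": 0.3, "brake": 0.9}  # another learned valuation

print(choose_action(options, goodness_a))  # -> swerve
print(choose_action(options, goodness_b))  # -> brake
```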

Some have argued that this is precisely what makes machines dangerous: computers could gain such a level of moral intelligence and autonomy that they outstrip any checks and balances humans could impose. This is certainly a possibility at some point in the future. But, at least at the current level of technology, I think this danger ought to be recognised as a danger produced by humans themselves. The individuals developing and using AI have enormous control over whether robots become morally better than, worse than, or the same as ourselves. This is a scary but also liberating prospect - robots may even become resources for moral learning.

Without doubt, as deep learning continues to develop, it will become more and more difficult to lay down hard-and-fast rules about whether a machine is just a machine, or whether AI technology bears moral culpability. Drawing sharp lines is almost certainly the wrong focus anyway. AI is here to stay and is likely to become increasingly unavoidable in modern life. Instead, time would be better spent on understanding how to navigate the moral, legal, social and political dilemmas posed by the coupling of AI and human beings, so as to ensure AI works for the benefit of all.