38. The development, programming and use of robotics and AI raise a host of ethical and legal issues. Our witnesses were clear that these need to be identified and addressed now, so that the societal benefits of the technologies can be maximised while also mitigating the potential risks. Both steps are essential to building public trust, particularly as robotics and AI diffuse into more aspects of everyday life. In this chapter we consider safety and control, and how society can make sure that the outcomes of robotics and AI are beneficial, intentional and transparent. We then examine what roles standards, regulation and public dialogue might play.
39. It is important to ensure that AI technology is operating as intended and that unwanted, or unpredictable, behaviours are not produced, either by accident or maliciously. Methods are therefore required to verify that the system is functioning correctly. According to the Association for the Advancement of Artificial Intelligence:
it is critical that one should be able to prove, test, measure and validate the reliability, performance, safety and ethical compliance—both logically and statistically/probabilistically—of such robotics and artificial intelligence systems before they are deployed.
Similarly, Professor Stephen Muggleton saw a pressing need:
to ensure that we can develop a methodology by which testing can be done and the systems can be retrained, if they are machine learning systems, by identifying precisely where the element of failure was.
40. The EPSRC UK-RAS Network noted that the verification and validation of autonomous systems was “extremely challenging” since they were increasingly designed to learn, adapt and self-improve during their deployment. Innovate UK highlighted that “no clear paths exist for the verification and validation of autonomous systems whose behaviour changes with time” while Professor David Lane from Heriot-Watt University emphasised that “traditional methods of software verification cannot extend to these situations”.
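One pragmatic response discussed in the verification literature, where pre-deployment proof is infeasible for a system that keeps learning, is runtime monitoring: checking explicit safety invariants on the system's outputs while it operates. The sketch below is illustrative only and is not drawn from the report; all names and thresholds are invented.

```python
# Illustrative sketch (not from the report): runtime monitoring wraps an
# adapting controller and vetoes any output that violates a stated safety
# invariant, logging the violation for post-hoc investigation.

class RuntimeMonitor:
    """Wraps a (possibly drifting) controller and enforces invariants."""

    def __init__(self, controller, invariants):
        self.controller = controller
        self.invariants = invariants   # list of (name, predicate) pairs
        self.violations = []           # log kept for later investigation

    def act(self, observation):
        action = self.controller(observation)
        for name, holds in self.invariants:
            if not holds(observation, action):
                self.violations.append((name, observation, action))
                return None  # fall back to a safe default instead of acting
        return action

# Toy controller whose behaviour has drifted: it proposes a 50% speed increase.
def drifting_controller(speed):
    return speed * 1.5

monitor = RuntimeMonitor(
    drifting_controller,
    invariants=[("speed limit", lambda obs, act: act <= 70)],
)

assert monitor.act(40) == 60     # within the invariant: action passes through
assert monitor.act(50) is None   # 75 exceeds the limit: vetoed and logged
assert monitor.violations[0][0] == "speed limit"
```

The point of the sketch is that the invariant, unlike the learned behaviour, stays fixed and inspectable even as the controller changes over time.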
41. Part of the problem, according to Dr Michael Osborne, was that researchers’ efforts had previously been focused on “achieving slightly better performance on well-defined problems, such as the classification of images or the translation of text” while the “interpretation of the algorithms that [were] produced to achieve those goals [had] been left as a secondary goal”. As a result, Dr Osborne considered that “we are not where we would want to be in ensuring that the algorithms we deliver are completely verifiable and validated”. He added, however, that progress was now being made.
42. Google DeepMind, for example, was reported in June 2016 to be working with academics at the University of Oxford to develop a ‘kill switch’: code that would ensure an AI system could “be repeatedly and safely interrupted by human overseers without [the system] learning how to avoid or manipulate these interventions”. In the same month, researchers from Google, OpenAI, Stanford University and UC Berkeley in the United States jointly published a paper which examined potential AI safety challenges and considered how to engineer AI systems so that they operated safely and reliably.
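The core idea behind such interruptibility work can be illustrated with a deliberately simplified toy, far removed from the cited research: if a human override is excluded from the system's learning update, the system gains nothing by avoiding or manipulating interruptions. Everything below is a hypothetical sketch, not the researchers' method.

```python
# Toy illustration (hypothetical, heavily simplified): a learner whose value
# estimates ignore any step on which a human overseer intervened, so the
# interruption leaves no trace in what the system learns.

class InterruptibleLearner:
    def __init__(self):
        self.value = {}    # action -> running average of observed reward
        self.counts = {}

    def update(self, action, reward, interrupted):
        if interrupted:
            return  # interrupted steps are excluded from learning entirely
        self.counts[action] = self.counts.get(action, 0) + 1
        n = self.counts[action]
        old = self.value.get(action, 0.0)
        self.value[action] = old + (reward - old) / n

agent = InterruptibleLearner()
agent.update("go", 1.0, interrupted=False)
agent.update("go", 0.0, interrupted=True)   # overseer stepped in: ignored
assert agent.value["go"] == 1.0             # the interruption left no trace
```

Because the overridden step never enters the running average, the learner has no incentive to treat the overseer's intervention as something to be predicted or resisted.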
43. It is currently rare for AI systems to be set up to provide a reason for reaching a particular decision. For example, when Google DeepMind’s AlphaGo played Lee Sedol in March 2016 (see paragraph 3), the machine was able to beat its human opponent in one match by playing a highly unusual move that prompted match commentators to assume that AlphaGo had malfunctioned. AlphaGo cannot express why it made this move and, at present, humans cannot fully understand or unpick its rationale. As Dr Owen Cotton-Barratt from the Future of Humanity Institute reflected, we do not “really know how the machine was better than the best human Go player”.
44. When the stakes are low—such as in a board game like Go—this lack of transparency does not matter. Yet, as Tony Prescott, Professor of Cognitive Neuroscience at the University of Sheffield, noted, “machine learning and probabilistic reasoning will lead to algorithms that replace human decision-makers in many areas”, from financial decision-making to the development of more effective medical diagnostics. Nesta suggested that in these types of applications, where the stakes are far higher, an absence of transparency can lead to a “level of [public] mistrust in its outputs” since the reasoning behind the decision is opaque. Patients, for example, may be unwilling to simply accept the “supposed quality of [an] algorithm” where their treatment is concerned and may instead want a clear justification from a human.
45. Dr Cotton-Barratt was one of a number of witnesses who supported “a push towards developing meaningful transparency of the decision-making processes”. Dave Coplin from Microsoft, for example, stated that:
The building blocks […] the way in which we create the algorithms […] They must be transparent. I must be able to see the pattern or rules that have been used to create the outcome. As a human I need to be able to inspect that, as much as the algorithms need to understand what the humans may choose to do with that information.
Similarly, Professor Alan Winfield from the Bristol Robotics Laboratory emphasised the importance of being able to ‘inspect’ algorithms so that, if an AI system made a decision that “[turned] out to be disastrously wrong […] the logic by which the decision was made” could be investigated.
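What the witnesses describe as ‘inspectable’ decision-making can be sketched in miniature: a decision procedure that returns not only its outcome but the explicit rule that produced it, so the logic can be examined if the outcome “turns out to be disastrously wrong”. The rules and thresholds below are invented for illustration and are not drawn from any system mentioned in the report.

```python
# Hypothetical sketch of "meaningful transparency": every decision carries the
# explicit, human-readable rule that produced it, so the logic can be
# inspected after the fact. All rules and figures are invented.

RULES = [
    ("income below threshold", lambda a: a["income"] < 20_000, "decline"),
    ("existing arrears",       lambda a: a["arrears"] > 0,     "decline"),
    ("default",                lambda a: True,                  "approve"),
]

def decide(applicant):
    """Return (decision, reason): the outcome plus its justification."""
    for reason, matches, outcome in RULES:
        if matches(applicant):
            return outcome, reason

decision, reason = decide({"income": 15_000, "arrears": 0})
assert decision == "decline"
assert reason == "income below threshold"
```

The contrast with a learned model such as AlphaGo is that here the full set of rules can be read, challenged and amended; a trained network offers no comparable artefact to inspect.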
46. As we noted in our Big Data Dilemma report, the European Union’s new General Data Protection Regulation is due to come into effect across the EU in 2018. It will create a “right to explanation”, whereby a user can ask for an explanation of an automated algorithmic decision that was made about them. Whether, and how, this will be transposed into UK law is unclear following the EU Referendum.
47. Instances of bias and discrimination being accidentally built into AI systems have recently come to light. Last year, for example, Google’s photo app, which automatically applies labels to pictures in digital photo albums, was reported to have classified images of black people as gorillas. The app learnt from training data and, according to Kate Crawford from Microsoft, the AI system built “a model of the world based on those [training] images”. Yet, as Drs Koene and Hatada from the University of Nottingham explained, “all data-driven systems are susceptible to bias based on factors such as the choice of training data sets, which are likely to reflect subconscious cultural biases”. So, if a system was “trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces”.
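The mechanism the witnesses describe can be made concrete with a deliberately crude toy on invented data: a model fitted to an unrepresentative training set can look accurate in aggregate while failing on every member of an under-represented group, because the aggregate figure is dominated by the majority of the data.

```python
# Illustrative toy (invented data, not from the report): aggregate accuracy
# can mask complete failure on an under-represented group.

def majority_label(training_labels):
    """A degenerate 'model' that always predicts the most common label."""
    return max(set(training_labels), key=training_labels.count)

# 95 examples from group A (label "a"), only 5 from group B (label "b").
training = ["a"] * 95 + ["b"] * 5
model = majority_label(training)   # learns to answer "a" for everything

accuracy_overall = training.count(model) / len(training)
accuracy_group_b = 1.0 if model == "b" else 0.0

assert model == "a"
assert accuracy_overall == 0.95    # looks impressive in aggregate
assert accuracy_group_b == 0.0     # yet every group-B case is wrong
```

Real systems fail in subtler ways than this caricature, but the arithmetic is the same: a skewed training set lets a model score well overall while reproducing exactly the bias in its data.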
48. It is not clear how much attention the design of AI systems—and the potential for bias and discrimination to be introduced—is receiving. John Naughton, Emeritus Professor of the Public Understanding of Technology at the Open University, was reported as saying that these types of biases can go unrecognised because developers take “a technocratic attitude that assumes data-driven decision-making is good and algorithms are neutral”.
49. Dave Coplin from Microsoft, however, acknowledged that “in AI every time an algorithm is written, embedded within it will be all the biases that exist in the humans who created it”. He emphasised a need “to be mindful of the philosophies, morals and ethics of the organisations […] creating the algorithms that increasingly we rely on every day” but added that our understanding of “how we as humans imbue human bias in artificial intelligence” was still “relatively new”. Safeguards against discriminatory, data-driven ‘profiling’ are included in the EU’s forthcoming General Data Protection Regulation, as discussed in our Big Data Dilemma report.
50. During the course of our inquiry, there were reports in the media about Google DeepMind working with NHS hospitals to improve patient diagnoses and care. Media commentary focused not just on the work that was underway—such as building an app that helps clinicians detect cases of acute kidney injury, or using machine learning techniques to identify common eye diseases—but also on DeepMind’s access to patient data: namely how much data the company could access, whether patient consent had been obtained, and the ownership of that data. Such concerns are not new. As we highlighted in our Big Data Dilemma report, the anonymisation and re-use of data is an issue that urgently needs to be addressed. In the same report, we also drew attention to potential improvements in NHS efficiency, planning and healthcare quality that could be realised through greater use of data analytics. Data, as Professor Jennings explained during our current inquiry, is the “fuel for all the algorithms to do their stuff and make smart decisions and learn”. Yet he stressed that there remained “a whole load of issues associated with appropriate management of data to make sure that it is ethically sourced and used under appropriate consent regimes”.
51. Dr Cotton-Barratt identified the “large benefits”, as well as the challenges, that arise when AI is applied in healthcare:
If it can automate the processes and increase consistency in judgments and reduce the workload for doctors, it could improve health outcomes. To the extent that there are challenges, essentially it means there is less privacy from the same amount of shared data, in that people can get more information out of a limited amount of data.
He added that ways to handle those privacy challenges needed to be found, and suggested that responses should include “making sure that the data is held in the right places and is properly handled and controlled”. Similar points were raised by Dave Coplin from Microsoft who told us that if AI was “going to work successfully for us as a society, we need some intelligent privacy and we need to figure out how to do that”. One approach—which we recommended in our Big Data Dilemma report—is to establish a ‘Council of Data Ethics’ to address the difficulties associated with balancing privacy, anonymisation, security and public benefit. We were pleased that the Government agreed with this step and is in the process of setting up the Council within the Alan Turing Institute, the UK’s national institute for data science.
52. For some aspects of robotics and AI, questions of accountability and liability are particularly pertinent. To date, these have predominantly been discussed in the context of autonomous vehicles (‘driverless cars’) and autonomous weapons systems. The key question is ‘if something goes wrong, who is responsible?’ Dave Coplin from Microsoft emphasised that “we need a level of accountability for the algorithms. The people making the algorithm and the AI need to be held accountable for the outcome”. He suggested that a “safety net” provided by Government was required “so that people can be held to account in how we build” AI systems.
53. The debate on driverless cars has also focused on liability. The Law Society highlighted that situations may arise in which a driverless car takes action that causes one form of harm in order to avoid other harm. This raises:
issues of civil, and potentially even criminal liability [as well as] the ownership of that liability, whether the manufacturer of the vehicle, the software developers, the owner of the vehicle and so on. The questions multiply.
54. Whether such questions can be decided in the courts, and solutions developed through case law, or if new legislation will be needed, remains under discussion. The Law Society noted that “one of the disadvantages of leaving it to the courts […] is that the common law only develops by applying legal principles after the event when something untoward has already happened. This can be very expensive and stressful for all those affected”.
55. After we had concluded our evidence taking, the Government set out its proposal for addressing liability for automated vehicles. It stated that:
Our proposal is to extend compulsory motor insurance to cover product liability to give motorists cover when they have handed full control over to the vehicle (ie they are out-of-the-loop). And, that motorists (or their insurers) rely on courts to apply the existing rules of product liability—under the Consumer Protection Act, and negligence—under the common law, to determine who should be responsible.
Consultation on this and other proposals for automated vehicles runs until 9 September 2016.
56. Accountability is also critically important for autonomous weapons and, more specifically, ‘lethal autonomous weapons systems’ (LAWS). These are systems that, when given a set objective, can assess the situational context and environment, and then decide what action is required, independent of human control or intervention. According to Future Advocacy, LAWS could “have the power to kill without any human intervention in the identification and prosecution of a target”. Google DeepMind also highlighted the “possible future role of AI in lethal autonomous weapons systems, and the implications for global stability and conflict reduction”. While there are “still no completely autonomous weapons systems”, Innovate UK thought that “the trend towards more and more autonomy in military systems [was] clearly visible”.
57. Richard Moyes from Article 36, an NGO working to prevent the “unintended, unnecessary or unacceptable harm caused by certain weapons”, explained that his organisation’s concern was the “identification and application of force to the target being in the hands of the weapons system” rather than a human. In his view, if a weapon was deployed, there should always be a human ‘in the loop’. He added that “a human should be specifying the target against which force is to be applied”. According to Mr Moyes, military personnel may “not feel comfortable being held accountable for a system when they cannot quite understand its functioning and cannot be completely sure what it is going to do”. He believed that there was an opportunity for the UK in a “diplomatic landscape to have an influential position on how we orientate the role of computers in life and death decisions”.
58. Giving evidence to the Defence Committee in 2014, the Ministry of Defence stated that the UK complied fully with all of its obligations under international humanitarian law irrespective of the weapons systems used. More recently, at the UN Convention on Conventional Weapons meeting in November 2015, the Government stated that:
Given the uncertainties in the current debate, the United Kingdom is not convinced of the value of creating additional guidelines or legislation. Instead, the United Kingdom continues to believe that international humanitarian law remains the appropriate legal basis and framework for the assessment of the use of all weapons systems in armed conflict.
Elsewhere, the Government has asserted that “the operation of weapons systems by the UK armed forces will always be under human control”. Article 36 reflected that “whilst such assertions seem on the surface to be reassuring” there needed to be further explanation from officials about the “form and extent of that human ‘control’ or ‘involvement’”.
59. Though some of the more transformational impacts of AI might still be decades away, others—like driverless cars and supercomputers that assist with cancer prediction and prognosis—have already arrived. The ethical and legal issues discussed in this chapter, however, are cross-cutting and will arise in other areas as AI is applied in more and more fields. For these reasons, witnesses were clear that the ethical and legal matters raised by AI deserved attention now and that suitable governance frameworks were needed.
60. TechUK believed that such frameworks were “vital” to ensure “that we have a way to ask, discuss and consider the key legal and ethical questions” such as “What are the ethics that should underpin our use of artificial intelligence?”. Innovate UK expressed a similar view, stating that:
Appropriate legal and regulatory frameworks will have to be developed to support the more widespread deployment of robots and, in particular, autonomous systems. Frameworks need to be created to establish where responsibilities lie, to ensure the safe and effective functioning of autonomous systems, and how to handle disputes in areas where no legal precedence has been set.
61. Innovate UK added that there was a “genuine request from researchers and industries for a legal and ethical governance to which they can fine-tune their strategies and plans about innovative robotic applications”. Mike Wilson from ABB Robotics highlighted that while “the pace of development continues [in robotics] the standards and the legal frameworks around them are not keeping up with the development of the technology”. He stressed that this was something “that certainly needs to be addressed to ensure that people have a clear picture of where the standards are going to be”.
62. Having a secure regulatory environment may also help to build public trust. Drawing on the example of commercial aircraft, Professor Alan Winfield from the Bristol Robotics Laboratory thought that one of the reasons why people trust airlines was because “we know they are part of a highly regulated industry with an excellent safety record”. Furthermore, when things go wrong, there are “robust processes of air accident investigation”. Professor Nelson thought that “as technology in this area develops, a need will probably arise” for something similar to the Civil Aviation Authority to “ensure that [AI systems] are properly regulated and to build trust in the community”.
63. Others emphasised that a balance needed to be struck on the grounds that efforts to introduce a governance regime could curtail innovation and hold back desirable progress. Speaking in the context of developing driverless cars, Dr Buckingham told us that:
One thing we must not do is put too much red tape around this at the wrong time and stop things developing. One of the key points is to make sure that we are doing that testing in the UK transparently and bringing the industry here so that we understand what is going on, and that we start to apply the regulation appropriately when we have more information about what the issues are. One of the risks is that, if we over-regulate, it is bad for making use of the technology.
TechUK also warned that:
over-regulation or legislation of robotics and artificial intelligence at this stage of its development, risks stalling or even stifling innovation. This could in turn risk the UK’s leadership in the development of these technologies.
64. Nesta noted that there were moves “in both the public and private sectors to set up ethical frameworks for best practice”. Such initiatives are being developed at the company level (e.g. Google DeepMind’s ethics board); at an industry-wide level (e.g. the Institute of Electrical and Electronics Engineers global initiative on ‘Ethical Considerations in the Design of Autonomous Systems’); and at the European level (e.g. the European Parliament’s Committee on Legal Affairs’ examination of the legal and ethical aspects of robotics and AI). It is not clear, however, if any cross-fertilisation of ideas, or learning, is taking place across these layers of governance or between the public and private sectors. As the Chief Executive of Nesta has argued, “it’s currently no-one’s job to work out what needs to be done”.
65. Establishing good robotics and AI governance practices matters, both for the economy and for society as a whole. According to Dr Cotton-Barratt, the UK is well-positioned to respond to this challenge. He described a “small but growing research community looking into these questions”, adding that the “UK is world-leading in this at the moment” and has the “intellectual leadership”, as exemplified by the establishment of the Future of Humanity Institute at the University of Oxford and the Centre for the Study of Existential Risk at the University of Cambridge. Evidence submitted jointly by these bodies suggested that the UK’s expertise could be applied to best effect through a “Warnock-Style” Commission, in reference to Baroness Warnock’s examination of the ethics of IVF in the early 1980s. Elsewhere, Nesta has made the case for a “Machine Intelligence Commission”, possessing powers similar to those of the now disbanded Royal Commission on Environmental Pollution.
66. There has been some discussion about who should be involved in identifying, and establishing, suitable governance frameworks for robotics and AI. Kate Crawford from Microsoft has argued that:
Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included.
Dave Coplin from Microsoft told us that it was a task “for the tech industry, the Government, NGOs and the people who will ultimately consume the services” and emphasised that it was important “to find a way of convening those four parties together to drive forward that conversation.” Dr Cotton-Barratt similarly recommended a broad “community of interest [that] would include AI researchers, social scientists and ethicists, representatives of industry and ministries”.
67. Professor Nick Jennings was clear that engagement with the public on robotics and AI needed “to start now so that people are aware of the facts when they are drawing up their opinions and they are given sensible views about what the future might be”. He contrasted this with the approach previously taken towards GM plants which, he reflected, did “not really engage the public early and quickly enough”.
68. A range of views were expressed about the role of public dialogue on robotics and AI. For some, it was a way to help build public trust and acceptance, and to tackle public “misconceptions”. For others—including the Government—it was also about acknowledging, and improving, our understanding of the public’s concerns. A small number of witnesses suggested that the public have a role to play in directing the development of AI. Professor Luckin, for example, emphasised that developments in AI to date had focused predominantly “on the technology and not on the problems it could solve”, adding that it would “be good if it could be more challenge-focused”.
69. Similarly, Paul Doyle from Hereward College—which supports young people with physical, sensory and cognitive disabilities—told us that, where assistive robotics were concerned, there remained a “massive disconnect” between “what is being produced in the University laboratory/workshop and what is needed in the thousands of homes across the UK”. Hereward College, he noted, had tried to bring “end users’ perspectives” to the attention of research communities. Robotics, as Population Matters noted, could improve the mobility of people with disabilities and “offer them a voice”. Pupils 2 Parliament—a group of 61 primary school children aged 9 and 10—also identified helping disabled people “move and walk” as their top priority for the “future development of robots”. A strong public role could thus facilitate greater scrutiny of the underlying motives behind advancements in robotics and AI, and provide a societal, rather than purely technological, perspective on how they could be developed.
70. The Royal Academy of Engineering suggested that the Government “could do more to open dialogue with the public on these issues so that concerns about social, legal and ethical issues are addressed in a timely way”. The Academy and others pointed to the support that ‘Sciencewise’—the UK’s national centre for public dialogue in policy making involving science and technology issues—could provide. In March 2016, for example, Sciencewise had hosted a “RAS Policy and the Public” workshop that “identified a number of specific ethical, legal and social” issues. An overview of the session on the Sciencewise website indicates that invitations to the event were sent to “Government policy makers, academics and industry leaders”. The Involve Foundation, however, stressed that effective public dialogue on robotics and AI required consulting as broadly as possible:
Policy development around these topics should not be restricted to involving a narrow range of expert stakeholders, but should also be informed by, and responsive to, broader public opinion.
71. While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now. Not only would this help to ensure that the UK remains focused on developing ‘socially beneficial’ AI systems, it would also represent an important step towards fostering public dialogue about, and trust in, such systems over time.
72. Our inquiry has illuminated many of the key ethical issues requiring serious consideration—verification and validation, decision-making transparency, minimising bias, increasing accountability, privacy and safety. As the field continues to advance at a rapid pace, these factors require ongoing monitoring, so that the need for effective governance is continually assessed and acted upon. The UK is world-leading when it comes to considering the implications of AI and is well-placed to provide global intellectual leadership on this matter.
73. We recommend that a standing Commission on Artificial Intelligence be established, based at the Alan Turing Institute, to examine the social, ethical and legal implications of recent and potential developments in AI. It should focus on establishing principles to govern the development and application of AI techniques, as well as advising the Government of any regulation required on limits to its progression. It will need to be closely coordinated with the work of the Council of Data Ethics which the Government is currently setting up following the recommendation made in our Big Data Dilemma report.
74. Membership of the Commission should be broad and include those with expertise in law, social science and philosophy, as well as computer scientists, natural scientists, mathematicians and engineers. Members drawn from industry, NGOs and the public should also be included, and a programme of wide-ranging public dialogue instituted.
71 AAAI and UKCRC ()
73 EPSRC UK-RAS Network () para 4.4
74 Innovate UK () para 20
75 Professor David Lane () para 4.3
78 , BBC News Online, 8 June 2016
79 Dario Amodei et al, , June 2016
80 AAAI and UKCRC ()
81 Global Priorities Project () para 18
82 Q63; see also Q2
83 Professor Tony J. Prescott () para 3; see also Nutmeg Saving and Investment Ltd ()
84 Harry Armstrong, Machines That Learn in the Wild, Nesta, July 2015, p14
85 Harry Armstrong, Machines That Learn in the Wild, Nesta, July 2015, p14
88 Professor Alan Winfield () para 10
89 Science and Technology Committee, Fourth Report of Session 2015–16, , HC 468, paras 83–102
90 Frankenstein’s paperclips; Ethics, The Economist, 25 June 2016 (US Edition)
91 , The New York Times, 25 June 2016
92 Dr Ansgar Koene and Dr Yohko Hatada ()
93 , The New York Times, 25 June 2016
94 Forget killer robots: This is the future of supersmart machines, New Scientist, 22 June 2016
97 Science and Technology Committee, Fourth Report of Session 2015–16, , HC 468, para 95
98 “”, New Scientist, 29 April 2016; “”, Daily Mail, 3 May 2016; “”, BBC News Online, 3 May 2016; “”, Daily Telegraph, 5 July 2016
99 Science and Technology Committee, Fourth Report of Session 2015–16, , HC 468, para 101
100 Science and Technology Committee, Fourth Report of Session 2015–16, , HC 468, para 43
105 Science and Technology Committee, Fifth Special Report of Session 2015–16, , HC 992, para 57
109 The Law Society () para 10
110 The Law Society () para 11
111 Centre for Connected & Autonomous Vehicles, , July 2016, para 1.3
112 Centre for Connected & Autonomous Vehicles, , July 2016
113 Future Advocacy () para 4.1
114 Google DeepMind () para 5.3
115 Innovate UK () para 38
116 Article 36 ()
119 Q64 [Richard Moyes]
120 Q82 [Richard Moyes]
121 Defence Committee, Tenth Report of Session 2013–14, , HC 772, para 144
122 United Kingdom of Great Britain and Northern Ireland, 12–13 November 2015
123 , The Guardian, 13 April 2015
124 Article 36 ()
125 IBM, , last accessed 31 August 2016
126 Q56 [Dr Cotton-Barratt]; EPSRC UK-RAS Network () para 4.4
127 techUK () para 42
128 Innovate UK () para 37
129 Innovate UK ()
132 Professor Alan Winfield () para 8
133 Q5; see also EPSRC UK-RAS Network () para 4.3
134 Q19 [Dr Buckingham]
135 techUK () para 37
136 Harry Armstrong, Machines That Learn in the Wild, Nesta, July 2015, p15
137 Google DeepMind () para 5.1
138 , IEEE, 5 April 2016
139 Committee on Legal Affairs, European Parliament, (2015/2103(INL)), May 2016
140 Geoff Mulgan, , Nesta, February 2016
141 Q56 [Dr Cotton-Barratt]; see also AAAI and UKCRC ()
142 Future of Humanity Institute, Centre for the Study of Existential Risk, Global Priorities Project, and Future of Life Institute (). This point was also made by Future Advocacy () para 2.5.
143 Geoff Mulgan, , Nesta, February 2016
144 , The New York Times, 25 June 2016
146 Dr Owen Cotton-Barratt ()
149 Q5; Q32 [Dr Buckingham]; Lloyd’s Register Foundation () para 16; Robotics & Autonomous Systems Special Interest Group () para 36
150 Royal Academy of Engineering () para 31
151 Department for Business, Innovation and Skills (BIS) () para 28; The Law Society () para 12; The Involve Foundation () para 2.12
152 Q108 [Professor Luckin]
153 Hereward College () para 3
154 Hereward College () para 3
155 Population Matters ()
156 Pupils 2 Parliament () para 28
157 Royal Academy of Engineering () para 38
158 Research Councils UK () para 44
159 , Sciencewise press notice, not dated.
160 The Involve Foundation () para 2.5
5 October 2016