AI: threat or benefit?

Melissa Coutinho explores the impact of AI on legal practice

Whether you visit the Science Museum to see its Robots exhibition (running until September 2017), or read Professor Susskind’s book, Tomorrow’s Lawyers, it cannot escape your notice that artificial intelligence (AI) and robotics are moving on apace. One need not consciously keep track of technological advances to recognise that the world is changing, as examples abound of technology taking over tasks previously considered the preserve of sentient human beings. From trials of driverless cars to financial analytics, we are constantly having to re-evaluate what tasks can be performed by AI. Progress is such that this generation recognises as normal inventions not imagined by its predecessors. Few people think twice about academic software used to detect plagiarism, or about social media connecting with thousands instantly, and such technology is likely to become an ever wider part of our lives. What impact this will have on the jobs of the future – legal and otherwise – has become a valid question. The possible need for ethical safeguards has moved from being the remit of science fiction to being a concern for legislators. Following the publication of a report by the European Parliament’s Committee on Legal Affairs, MEPs have been tasked with voting on the adoption of rules for how humans will interact with artificial intelligence and robots.

Legal personality

The report by the Legal Affairs Committee with recommendations to the Commission on Civil Law Rules on Robotics (‘the Report’) makes it apparent that it would be wrong to ignore the latest industrial revolution, of which the world may be on the cusp. It considers whether robots should have legal status as ‘electronic persons’. The extent to which this is because modern creations so often replicate humanoid features is unclear, given that generally intelligent animals are not considered to have legal personality (although these boundaries are also being challenged, if one considers the recent infamous federal case in San Francisco which PETA brought to ascribe copyright in ‘selfies’ to the macaque that snapped them). While more than one Japanese company now has a robot receptionist, the Science Museum indicates that there is evidence of robots made by man dating back over 600 years. What really appears to be in question is the extent to which robots or other manifestations of artificial intelligence can act independently of their human creators.

Ethical responsibility

Given the current uses of artificial intelligence, be these interactive robots that gather and organise information from a variety of sources, or the more mundane algorithms created to scan Big Data and make clinical recommendations that are personalised to individual patients, programmers are alive to their creations actively learning and adapting to produce more sophisticated and meaningful results. This suggests that it is easily conceivable that artificial intelligence will outstrip those responsible for its creation, not simply in terms of speed, which is currently the case (the average computer search is far faster than its human equivalent), but in terms of analysis and learning. For medical applications, medical device legislation already holds manufacturers responsible for the ethical implications that can attach to their creations, but the Report suggests that these same parameters be extended to the designers of all AI and robotics. In effect, the Report recommends that those responsible for these creations include a ‘kill switch’ to permit functions to be shut down if necessary.

Science fiction?

It is easy to conjure up the science fiction of Mary Shelley’s ‘Frankenstein’ or Ted Hughes’s ‘The Iron Man’ to imagine the harm caused by a misuse of power. Perhaps it is this which prompts the Report to recommend that users should be able to use robots ‘without risk or fear of physical or psychological harm’, anticipating a worst-case scenario. The Report openly expresses concern over design and programming in AI and robotics. The Report unself-consciously borrows from Asimov’s 1942 story ‘Runaround’ – whose ideas were popularly revived in the 2004 film ‘I, Robot’ – and it is surprisingly easy to see the sense of Asimov’s Laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


The Report suggests that if robots become, or are made, self-aware, Asimov’s Laws must be regarded as being directed at the designers, producers and operators of robots, since those laws are not easily converted into machine code.

Working life

It is easy to see the benefits of machines doing tedious and time-consuming tasks. The Report recognises the potential for ‘virtually unbounded prosperity’ that could be unleashed, as well as the protection from unpleasant or dangerous tasks. However, the Report also questions what work will look like in the future. It is unlikely that only the tasks that people do not want to do will be taken over by technology. The advantages of doing more sophisticated tasks faster, and with fewer errors, cannot be ignored. The Report queries whether this will mean detriment to some, and whether member states will need to introduce a basic income in the light of jobs being given to non-humans; it seems competition for employment is not just about the protection of domestic borders but about virtual rivals. It is tempting to see a future that is not an Orwellian 1984, where innovation is stifled, but rather an embracing of technology, as demonstrated by David Hockney in his recent iPad art – but the reality of course is that nobody knows exactly what the future holds – or how close it may be...

Lawyers of the future

Most lawyers today will use electronic searches before using a physical library. The latest legal research services, such as LexisNexis and Westlaw, allow for personal libraries of up to 150 legal textbooks to be stored and accessed instantly, allowing searches not dependent on the opening hours of physical facilities, geographical location, or personal mobility. Far from being regarded as a luxury, such services are seen as essential to performing an efficient job. Academics looking at how technology and employment align are divided in their views. Professor Susskind appears to predict ‘decomposition, the breaking down of professional work into its component parts’ – a new industrial revolution meaning a division of labour and expertise being lost as the roles of skilled professionals are divided into specific delineated processes. While he acknowledges that ‘some of these parts will still require expert trusted advisers acting in traditional ways’, there is an emphasis on the standardisation and systemisation of many tasks, done, he suggests, by ‘legal knowledge engineers, legal technologists, risk managers’ etc – not the traditional lawyer of today.

Happily, his view is countered by other, more optimistic, views that nonetheless recognise the change in legal services, with greater emphasis on electronic technology. Charley Moore founded the online legal services provider Rocket Lawyer in 2007; it has 30m users. For a monthly fee, subscribers can access legal templates and forms, and online legal advice from lawyers. Moore’s philosophy and outlook sees the routine tasks being done by software, but recognises an important role for lawyers, harnessing the human aspect of legal argument and interpretation. His view is that lawyers get to perform a creative and cerebral role, serving more people than they currently do. This may mean fewer lawyers needed for existing clients, but it can ultimately mean more people accessing legal advice and services if costs are reduced. Working across email is already largely the norm in advisory situations. This means that fewer lawyers need to be located in prime locations 100 per cent of the time. With overheads cut, and costs falling, more people can access legal advice. Whether these clients are within existing organisations, or are smaller organisations and individuals, Moore thinks more lawyers will be needed, rather than fewer.

Error exploited by humans

With AI playing an important role in predicting outcomes and making recommendations (because it scans possible results more quickly than a human can), the de-humanising effect of AI needs to be considered. Sharing legal advice across organisations to obtain an off-the-shelf answer is already being scoped by more than one institution. Given that there are robot receptionists in some Japanese companies, how long will it take for a robotic judge to appear in the courts of the future? However, AI, for all its potential efficiency in cost saving and its possible accuracy, does not take account of all variables. The concept of human dignity is a parameter not easily distilled into computer code. With computerised systems predicted to increase accessibility to the civil courts for many, and more organisation and allocation being achieved electronically, there will be less contact with key personnel and lawyers, who would add a human dimension. Further, modern technology has its own complications, as recent hacking incidents attest. Anti-hacking software will need to be included in every system; without it, users would be vulnerable to those with advanced computer skills. Hackers with the capacity to cause chaos – by interrupting normal service, or gaining sight of privileged legal opinions or decisions – will need to be protected against.

Human capacity

While the relationship between human and robot introduces new complexities, the Report considers safety in both immediate and long-term ways. It acknowledges what is already the case: that in some instances AI can surpass human intellectual capacity. Multiply this so that it becomes the norm and, without proper preparation and legislation, the Report questions whether this could ‘pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species’. For now, however, the Report considers the legal liabilities of robots, with the suggestion that liability should be proportionate to the complexity of instructions given to the robot and its autonomy. While one cannot imagine a robot being held responsible, the Report suggests that ‘the greater a robot’s learning capability or autonomy is, the lower other parties’ responsibilities should be and the longer a robot’s “education” has lasted, the greater the responsibility of its “teacher”.’ For me, that still means responsibility resides with creators, designers, and manufacturers, albeit that could be extended to owners and programmers also. The future may well involve such parties having to take out insurance to protect against any potential damage that their machine might cause.

For now?

If MEPs vote in favour of the legislation, the Report will be placed before the European Commission (see bit.ly/285CBjM). There then follows a three-month window in which the Commission can decide what legislative steps, if any, it will take. While it is under no obligation to propose any new laws, it may take up some of the Report’s recommendations, which include the creation of a new agency for robotics and artificial intelligence that can provide technical, ethical and regulatory expertise. Renouncing artificial intelligence and robotics is not a viable option for most of us, such are their benefits in modern life. Accordingly, acknowledging the possible pitfalls, and preparing for these, seems like the best possible compromise for the future.

Author details: 
Melissa Coutinho

Melissa Coutinho is a lawyer for the Government Legal Service. An accredited arbitrator, qualified PPM practitioner and a magistrate, she also writes and lectures on medical products and their regulation.