
This may be the author’s version of a work that was submitted/accepted for publication in the following source: Guihot, Michael & Rimmer, Matthew (2019) Artificial Intelligence: Governance and Leadership - A submission to the Australian Human Rights Commission and World Economic Forum. Australian Human Rights Commission and World Economic Forum, Australia.

This file was downloaded from: https://eprints.qut.edu.au/127442/

© 2019 the Author(s)

This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to [email protected]

Notice: Please note that this document may not be the Version of Record (i.e. published version) of the work. Author manuscript versions (as Submitted for peer review or as Accepted for publication after peer review) can be identified by an absence of publisher branding and/or typeset appearance. If there is any doubt, please refer to the published source.

https://tech.humanrights.gov.au/consultation


AUSTRALIAN HUMAN RIGHTS COMMISSION

AND WORLD ECONOMIC FORUM

ARTIFICIAL INTELLIGENCE: GOVERNANCE AND LEADERSHIP

QUT Robotronica 2015

DR MICHAEL GUIHOT

SENIOR LECTURER

QUT COMMERCIAL AND PROPERTY LAW CENTRE

FACULTY OF LAW

QUEENSLAND UNIVERSITY OF TECHNOLOGY

DR MATTHEW RIMMER

PROFESSOR OF INTELLECTUAL PROPERTY AND INNOVATION LAW

FACULTY OF LAW

QUEENSLAND UNIVERSITY OF TECHNOLOGY

Queensland University of Technology

2 George Street, GPO Box 2434

Brisbane Queensland 4001 Australia

Work Telephone Number: (07) 3138 1599


EXECUTIVE SUMMARY

This submission addresses the white paper on Artificial Intelligence: Governance and

Leadership produced by the Australian Human Rights Commission. In particular, it focuses

upon the dimensions of intellectual property, commercial law, and regulation.

In regard to intellectual property, there has been significant case law, particularly in the United States, dealing with the interaction of established IP laws with developments in AI and robots.

Commercial aspects of the development of AI and robots should be governed using either the Competition and Consumer Act 2010 (Cth) and the Australian Consumer Law, or modifications to them that specifically address emerging problems.

A number of regulatory responses are available to address developments in AI and robots, including legislative amendment, self-regulation, and soft law approaches such as nudging. The significant commercial impact of developments in these new technologies will require a hardened and practised regulator, such as the ACCC, to be effective.

Recommendation 1

Intellectual property law plays a key role in the regulation of artificial intelligence, and other

related fields of technology. Intellectual property holders will hold considerable influence in

terms of the use and exploitation of artificial intelligence technologies. Australia has a diverse

array of regulators in the field. IP Australia has oversight of industrial forms of property – such

as patents, trade marks, and designs. The Department of Communications and the Arts has

carriage of copyright law. The Australian Competition and Consumer Commission also plays

a role in relation to misleading or deceptive conduct. At an international level, the World

Intellectual Property Organization has played a significant role in tracking technology trends

in respect of AI, particularly through patent information. There may well need to be reforms to

intellectual property law, policy, and practice in light of developments in AI.


Recommendation 2

The Australian Competition and Consumer Commission, or a newly created technology subdivision of the ACCC, should be the body that oversees and enforces the amended legislation in relation to consumer transactions that involve problems associated with developments in AI. This is for two reasons:

Firstly, the ACCC already has expertise, built up over more than 20 years in consumer protection, in developing and enforcing regulation and in educating Australians about it.

Secondly, there is already a vast and powerful regulatory enforcement regime in place

under the Competition and Consumer Act that could, if need be, be amended to apply

to problems associated with developments in AI.

Recommendation 3

If a new body such as the proposed Responsible Innovation Organisation is created, its role

should be limited to education and coordination between the various regulatory bodies

regulating AI. Because the rate and depth of change in AI development are so great, and the possible uses to which it might be put are unknowable, no single agency could maintain full vigilance or control over these developments. Any single agency that did take on a governance role would therefore be likely to fail, and an agency that fails in a governance role will bear some liability for its shortcomings, whatever their cause. Setting up an agency that is unable to fulfil its role would merely transfer some, if not all, of the liability for problems caused by AI away from the technology companies and onto the agency. We must be careful not to shift the burden from the manufacturer or supplier to the regulating authority.


Definition and Classification

Lin, Abney and Bekey define a robot as:

… an engineered machine that senses, thinks, and acts: “Thus a robot must have sensors, processing

ability that emulates some aspects of cognition, and actuators. … on-board intelligence is necessary if

the robot is to perform significant tasks autonomously…”1

Thus AI and robots lie together on a spectrum, with AI as the thinking part of the ‘sense, think,

act’ paradigm. Calo differentiates between AI and robots such that ‘a technology does not act, and hence is not a robot, merely by providing information in an intelligible format. It must be [embodied] in some way.’2 The embodiment or physicality required of a robot takes AI into the world.

Balkin too is open to the idea that AI and robots are merely elements on a spectrum:

as Calo points out, there is a continuum between “robots” and “artificial intelligence.” That is because,

like the Internet itself, robots and other interactive entities do not have to be designed in any particular

way. And because there is a continuum of potential designs and a variety of different potential uses, there

is also a continuum of potential effects that these new technologies can have.3
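The ‘sense, think, act’ paradigm described above can be illustrated with a minimal control loop. This is a hypothetical sketch for illustration only; the class, policy, and readings below are invented, not drawn from the sources cited:

```python
# Minimal sketch of the 'sense, think, act' paradigm: a robot couples
# sensors, processing that emulates some aspect of cognition, and actuators.
# All names and values here are hypothetical.

class Robot:
    def __init__(self, sensor, think, actuator):
        self.sensor = sensor      # reads the world
        self.think = think        # the 'AI' part: maps percepts to actions
        self.actuator = actuator  # acts upon the world

    def step(self):
        percept = self.sensor()
        action = self.think(percept)
        return self.actuator(action)

# A trivial obstacle-avoiding policy: turn when something is near.
readings = iter([5.0, 0.4, 3.0])  # simulated distance readings (metres)
log = []
robot = Robot(
    sensor=lambda: next(readings),
    think=lambda d: "turn" if d < 1.0 else "forward",
    actuator=lambda a: log.append(a) or a,
)
for _ in range(3):
    robot.step()

assert log == ["forward", "turn", "forward"]
```

On this view, the `think` function is the AI, and the robot is the whole loop: remove the sensor and actuator and only disembodied information processing remains.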

1 Patrick Lin, Keith Abney and George Bekey, ‘Robot Ethics: Mapping the Issues for a Mechanized World’ (2011) 175(5–6) Artificial Intelligence 942, 943, citing George A Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control (MIT Press, 2005).
2 Ryan Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103(3) California Law Review 513, 531.
3 Jack M Balkin, ‘The Path of Robotics Law’ (2015) 6 California Law Review Circuit 45, 50.


In the last twenty years or so, there has been an exponential increase in the power, speed, and

capacity of computers. At the same time, the stockpile of machine-accessible data has similarly

multiplied. The machine learning capabilities of artificial intelligence (AI) now have the capacity to traverse these data troves, using algorithms to draw conclusions based on connections that humans are incapable of seeing. Companies use AI for hiring decisions,4 to

profile customers, and to present customised information on social media sites. Courts and

judges use AI in sentencing, bail and probation decisions. As computers become more

pervasive, we continue to accede to and trust their ‘superior’ decision-making abilities.

However, there is a growing concern about the use of, and outcomes achieved by, some

automated decision-making processes. These include concerns about incursions into consumers’ informational and personal privacy. Moreover, the complex and opaque applications

that use decision-making algorithms are often ‘black boxes’5 and, as such, the decisions they

make often cannot be examined or explained easily. Further, the data used to train the models

is sometimes, itself, flawed, incomplete, or may even entrench existing biases.6 This relentless

advancement of automation and AI in the private sector has roused public discourse about the

need for regulatory oversight.7 Given the potential impact of AI applications, researchers,

journalists, data scientists, lawyers and policy makers have an obligation to mitigate threats or

risks caused by AI, including by questioning and testing every step in the development of new

technologies.
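The concern that flawed or biased training data is reproduced in automated decisions can be illustrated with a deliberately simple sketch. Everything below is hypothetical: the ‘model’ merely memorises historical hire rates per applicant group, which is enough to show the mechanism by which historical bias is entrenched:

```python
# Illustrative sketch (not drawn from the submission or its sources): a toy
# 'hiring model' that learns historical hire rates per group reproduces the
# disparity embedded in its training data. All groups and figures are invented.

from collections import defaultdict

def train(history):
    """Learn the historical hire rate for each applicant group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, applications]
    for group, hired in history:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: h / n for g, (h, n) in counts.items()}

def predict(rates, group, threshold=0.5):
    """'Recommend' hiring whenever the learned rate clears the threshold."""
    return rates[group] >= threshold

# Hypothetical biased history: group A was hired 80% of the time, group B 20%.
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 2 + [("B", False)] * 8)
rates = train(history)

# The model simply entrenches the historical disparity.
assert predict(rates, "A")
assert not predict(rates, "B")
```

A real system would be far more complex, but the black-box problem compounds the same mechanism: when the learned decision rule cannot be inspected, the entrenched disparity cannot easily be detected, contested, or remedied.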

4 Simon Chandler, ‘The AI Chatbot Will Hire You Now’, Wired, 13 September 2017, https://www.wired.com/story/the-ai-chatbot-will-hire-you-now/
5 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).
6 Solon Barocas and Andrew D Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671.
7 Oren Etzioni, ‘How to Regulate Artificial Intelligence’, The New York Times, 1 September 2017, https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html; United Nations Department of Economic & Social Affairs, ‘The Impact of the Technological Revolution on Labour Markets and Income Distribution’ (United Nations, 31 July 2017) https://www.un.org/development/desa/dpad/wp-content/uploads/sites/45/publication/2017_Aug_Frontier-Issues-1.pdf; Michael Guihot, Anne Matthew and Nicolas Pierre Suzor, ‘Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence’ (2017) 20(2) Vanderbilt Journal of Entertainment and Technology Law 385.


The work of Roger Brownsword suggests that there are recurring themes in how the

legal system tries to deal with new, disruptive technologies.8 In the fields of law and

technology, there are recurring debates over whether laws should be technology-neutral,

technology-contextual, or technology-specific. There has certainly been such a debate over

robotics law and policy, and discussion as to whether it is an autonomous field, or lacks a

specific identity.

There has also been a great deal of discussion about liability rules in respect of robotics, and significant debate over legal rules regarding automation in transportation. Both automobile manufacturers and information technology companies have been engaged in research and development on autonomous vehicles, and there has been significant debate over the road rules for such vehicles – Google’s self-driving car being a prominent example. Likewise, drones have raised challenging policy questions in respect of aviation rules, and the appearance of aquabots has posed intriguing questions about the law of the sea. The adoption of robotics in agriculture has also

raised questions about automation. In the field of health care, the use of robotics holds out the

promise of improving health outcomes for patients. Yet, given the past conflicts over medical

liability, there is a need to lay down appropriate rules, standards, and codes about the use of

robotics in the areas of surgery, patient care, and prosthetics.

As well as the discussion about civilian uses of robots, there has also been much interest

in the increasing use of robots by law enforcement agencies. At an international level, there

has been deep disquiet about the use of drone warfare by major superpowers. There has been a

movement to ban ‘killer robots’.

There has been significant debate about the impact of robots, automation, and artificial

intelligence upon employment. Optimists hope that the robotics revolution will result in the

creation of new jobs. Pessimists fear that automation will lead to redundancies, unemployment, and under-employment across a range of industries. One policy recommendation

has been that there should be a robot tax to generate funds for training of workers, in areas such

as manufacturing, who are displaced by automation.

8 Roger Brownsword, Eloise Scotford and Karen Yeung (eds), The Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017).


Bill Gates has been enthusiastic about the idea of taxing robotics.9 However, critics

have complained that special forms of taxation in respect of robotics would discourage

research, development, and innovation.

Cory Doctorow has wondered whether it is even possible to regulate robots. He argues:

‘I am skeptical that "robot law" can be effectively separated from software law in general.’10

Doctorow, channelling Lawrence Lessig, suggests that we think about how software and

robotics are regulated ‘through code, through law, through markets and through norms.’11

Mark Lemley and Bryan Casey have discussed the debate over the definition and

classification of robots and artificial intelligence.12 They have highlighted problems of over-

inclusive and under-inclusive definitions of robots and artificial intelligence. Lemley and

Casey argue:

Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do

as the great computer scientist, Alan Turing, did when confronted with the challenge of defining robots:

embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws

should regulate behavior, not things (or as we put it, regulate verbs, not nouns). Second, where we must

distinguish robots from other entities, the law should apply what we call Turing’s Razor, identifying

robots on a case-by-case basis. Third, we offer six functional criteria for making these types of “I know

it when I see it” determinations and argue that courts are generally better positioned than legislators to

apply such standards. Finally, we argue that if we must have definitions rather than apply standards,

they should be as short-term and contingent as possible. That, in turn, suggests regulators—not

legislators—should play the defining role.13

This seems to be a plea for a pragmatic approach to the definition, classification, and regulation

of robotics and artificial intelligence.

9 Kevin Delaney, ‘The Robot That Takes Your Job Should Pay Taxes, Says Bill Gates’, Quartz, 18 February 2017, https://qz.com/911968/bill-gates-the-robot-that-takes-your-job-should-pay-taxes/
10 Cory Doctorow, ‘Why It Is Not Possible to Regulate Robots’, The Guardian, 2 April 2014, https://www.theguardian.com/technology/blog/2014/apr/02/why-it-is-not-possible-to-regulate-robots
11 Ibid.
12 Mark Lemley and Bryan Casey, ‘You Might Be a Robot’, SSRN, 2019, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3327602#.XHsk_VR5I_Q.twitter
13 Ibid.


1. Intellectual Property

Francis Gurry – the Director General of the World Intellectual Property Organization – has

discussed the need for research in respect of intellectual property and artificial intelligence:

AI is fast becoming part of our everyday lives, changing how we work, shop, travel and interact with

each other. Yet we are only at the beginning of discovering the many ways in which AI will have an

impact on – and indeed challenge – business, society and culture. There are numerous misconceptions

and misgivings about the nature of AI, and in particular the challenge it poses to humankind. Given

these widely held reservations and concerns, it is essential to have a factual basis for policy discussions

about innovation in AI.14

He observes that intellectual property plays a key role in the regulation of artificial intelligence

and robotics. In particular, the disciplines of trade secrets, patents, designs, trade marks, and

copyright law will impinge upon the development and deployment of these technologies.

A. Trade Secrets, Confidential Information, and Privacy

New innovations in robotics and artificial intelligence have often been protected through trade secrets and confidential information. Keisner, Raffo and Wunsch-Vincent comment that ‘the

technological complexity of robotics systems means that trade secrets are often the first option

for companies seeking to protect their innovations’.15

There have been other disputes over trade secrets relating to the field of robotics

(broadly construed). Notably, in 2018, there was a conflict between Waymo and Uber relating

to confidential information and patents associated with autonomous vehicles (self-driving

cars). After going to trial, the matter was ultimately settled in favour of Waymo. The dispute

is nonetheless an important precedent in respect of intellectual property, employment law, and

robotics.

In May 2017, a Seattle jury decided that Huawei misappropriated trade

secrets belonging to T-Mobile related to a smartphone-testing robot called Tappy, which T-

14 WIPO, Technology Trends 2019: Artificial Intelligence, Geneva: WIPO, 2019, 7.
15 C. Andrew Keisner, Julio Raffo and Sacha Wunsch-Vincent, ‘Breakthrough Technologies – Robotics and IP’, WIPO Magazine, 2016.


Mobile had in its Bellevue laboratory.16 The jury determined that Huawei had misappropriated

T-Mobile's trade secrets in a series of incidents in 2012 and 2013, and that it breached a handset

supply contract between the two companies. The jury said T-Mobile should be awarded $4.8

million in damages because of this breach of contract. In January 2019, the United States

Department of Justice brought a criminal action against Huawei for the theft of trade secrets

held by T-Mobile in respect of the robot Tappy.17 The dispute in USA v. Huawei will be a major

precedent on intellectual property and robotics.18

The current trade dispute between the United States and China is, amongst other things,

about intellectual property, technology transfer, and innovation policy. The Made in China

2025 innovation policy is particularly focused upon China developing capacity in fields of

advanced manufacturing – including robotics. The United States is keen to preserve its

competitive advantage in industry and technology.

In October 2017, U.S. prosecutors dropped charges against Dong Liu – a dual citizen of China and Canada – who had been accused of trying to steal trade secrets from Medrobotics – a Massachusetts-based manufacturer of robotic surgical products – by trespassing at its headquarters (Raymond, 2017). Ultimately, there appeared to be insufficient evidence for the matter to proceed.

In 2018, an automotive robotics supplier exposed documents detailing assembly line

schematics, robotics configurations, and other trade secrets of Toyota, Tesla, and Volkswagen

on a publicly accessible server. This case highlighted information security issues in the

robotics sector.

In 2018, the AI Now Institute, led by Kate Crawford, argued: ‘AI companies should

waive trade secrecy and other legal claims that stand in the way of accountability in the public

sector’.19 The group has elaborated:

Vendors and developers who create AI and automated decision systems for use in government should

agree to waive any trade secrecy or other legal claim that inhibits full auditing and understanding of

16 Rachel Lerman, ‘Jury Awards T-Mobile $4.8m in Trade-Secrets Case against Huawei’, The Seattle Times, 20 May 2017.
17 United States Department of Justice, ‘Chinese Telecommunications Device Manufacturer and its U.S. Affiliate Indicted for Theft of Trade Secrets, Wire Fraud, and Obstruction of Justice’, Press Release, 28 January 2019; and John Marsh, ‘Tappy’s Revenge: What You Need to Know About the DOJ’s Momentous Trade Secret Indictment of Huawei’, Lexblog, 7 February 2019.
18 USA v. Huawei (2019), Case 2:19-cr-0010-RSM.
19 AI Now Institute, AI Now Report, December 2018, https://ainowinstitute.org/AI_Now_2018_Report.pdf


their software. Corporate secrecy laws are a barrier to due process: they contribute to the “black box

effect” rendering systems opaque and unaccountable, making it hard to assess bias, contest decisions,

or remedy errors. Anyone procuring these technologies for use in the public sector should demand that

vendors waive these claims before entering into any agreements.20

The group was keen to ensure that the auditability and transparency of AI systems was not

unduly affected by trade secrets and confidential information.

Moreover, the AI Now Institute argues: ‘Technology companies should provide

protections for conscientious objectors, employee organizing, and ethical whistleblowers.’

They comment:

Organizing and resistance by technology workers has emerged as a force for accountability and ethical

decision making. Technology companies need to protect workers’ ability to organize, whistleblow, and

make ethical choices about what projects they work on. This should include clear policies

accommodating and protecting conscientious objectors, ensuring workers the right to know what they

are working on, and the ability to abstain from such work without retaliation or retribution. Workers

raising ethical concerns must also be protected, as should whistleblowing in the public interest.21

In this context, there is a need to ensure that there are proper defences and exceptions for

whistleblowers and conscientious objectors under trade secrets law.

In response to such intellectual property claims in the technological field, some have

instead looked to open licensing in respect of robotics.

As well as raising matters of trade secret protection and confidential information,

artificial intelligence and robotics – and similar technologies like autonomous vehicles and

drones – raise issues about privacy. Calo, Froomkin and Kerr have highlighted the significant

privacy implications of robotics in their respective work.22 The Australian Parliamentary

inquiry into autonomous vehicles was particularly concerned about data collection and data

gathering by self-driving cars, and the privacy ramifications of such activities. The Queensland

Government is currently holding an inquiry into the privacy implications of surveillance by

drones.

20 Ibid.
21 Ibid.
22 Ryan Calo, A. Michael Froomkin and Ian Kerr (eds), Robot Law, Cheltenham and Northampton (MA): Edward Elgar, 2016.


B. Patent Law

The United States Patent and Trademark Office (USPTO) has developed a classification for

robots. This collection ‘provides for a reprogrammable, multifunction manipulator designed to

move devices through variable programmed motions for the performance of changeable tasks

on a repetitive basis without human intervention and all subcombinations thereof specialized

for use with such manipulator.’ The USPTO relies upon a definition of an industrial robot from

the International Organization for Standardization.

In the field of patent law, there has been significant patent activity in respect of robotics.

The WIPO report on breakthrough innovation charts the geography of patent activity in the

area of robotics.23 Japanese, Korean, and German companies dominated the top rankings for

filing patents in the area of robotics. China was notably improving in its performance

(particularly in terms of public sector patent filings). There is a need to improve Australia’s

performance in translating research into practical outcomes. The 2019 WIPO study has

highlighted the patterns of patent activity in respect of artificial intelligence.24 While the

classifications for robotics and artificial intelligence are distinct, there is a significant

intersection and overlap between the two fields of technology.

There has been increasing litigation in respect of patents relating to robots and robotics.

In 2011, a United States District Court found that all Genmark Automation patent claims at

issue in a legal action were valid and infringed by Innovative Robotics Systems Inc, and entered

a final consent judgment and permanent injunction against any further infringement of

Genmark’s patents.

In 2017, iRobot Corp, a leader in consumer robots and maker of the Roomba®

vacuuming robot, filed legal proceedings for patent infringement against multiple robotic

vacuum cleaner manufacturers and sellers including Bissell, Hoover, bObsweep, iLife, Black

& Decker and the Chinese or Taiwanese companies that manufacture the infringing products.

In 2017, the US International Trade Commission investigated whether iRobot’s patents in

respect of robotics had been infringed by rival robots sold by Bissell, Hoover, Black & Decker

and others. In 2018, the International Trade Commission made a final determination, which

bars products from Hoover and bObsweep from importation into the United States. The action

23 WIPO, World IP Report: Breakthrough Innovation and Economic Growth, Geneva: WIPO, 2015.
24 WIPO, Technology Trends 2019: Artificial Intelligence, Geneva: WIPO, 2019.


has led other cases against iLife, Bissell, and Black & Decker, among others, to end in settlements favourable to iRobot.

There has been concern about non-practising patent entities filing patent infringement

actions in the field of robotics (much as there have been issues in the field of information

technology). One suit against Intuitive Surgical was brought by Alisanos LLC after Medicanica

– a company that retained surgical robotics patents – transferred its patent portfolio to

Alisanos.25

In 2019, Chelmsford’s Endeavor Robotics – which designed a ‘Scorpion’ robot for the U.S. Army – sued defense firm QinetiQ North America for patent infringement. Endeavor Robotics claims that QinetiQ infringed two of its patents, one for a ‘robotic platform’ and one for a ‘mobile robotic vehicle’. Endeavor claims that QinetiQ North America’s robot for the U.S. Army’s Common Robotic System-Individual program infringed the patents covering its stair-climbing robots.

WIPO commented: ‘One can start to see the more intensive offensive and defensive IP

strategies that are present in other high-technology fields.’26 WIPO has wondered: ‘A vital

question is whether the increased stakes and commercial opportunity across various sectors

will tilt the balance toward costly litigation, as in other high-tech and complex technologies.’27

It will also be worthwhile seeing whether patent defences and exemptions are deployed in

respect of robotics – particularly in respect of the defence of experimental use; compulsory

licensing; and crown use. There may also be scope for patent pools and public sector licensing

in respect of robotics.

As part of its ‘Made in China 2025’ strategy, China has developed its own local robotics

industry to boost advanced manufacturing.28 In order to compete with its foreign competitors,

China has established a patent pool in respect of robotics technology. The patent pool is

intended to be a means of tackling obstacles to the development of China’s robotics industry,

such as a lack of core patents.

Other possible regulatory tools include funding research and development – such as

innovation prizes and challenges in the field of robotics.

25 Tim Sparapani, ‘Surgical Robotics and the Attack of Patent Trolls’, Forbes, 19 June 2015.
26 WIPO, World IP Report: Breakthrough Innovation and Economic Growth, Geneva: WIPO, 2015, 129.
27 Ibid., 129.
28 CHOFN, ‘Country Developing New Age of Robotics with Patent Efforts’, 7 March 2016.


In its 2019 study, WIPO considered the intellectual property trends in respect of

artificial intelligence.29 In its executive summary, WIPO highlights the wide variety of

applications of artificial intelligence:

Artificial intelligence (AI) is increasingly driving important developments in technology and business,

from autonomous vehicles to medical diagnosis to advanced manufacturing. As AI moves from the

theoretical realm to the global marketplace, its growth is fueled by a profusion of digitized data and

rapidly advancing computational processing power, with potentially revolutionary effect: detecting

patterns among billions of seemingly unrelated data points, AI can improve weather forecasting, boost

crop yields, enhance detection of cancer, predict an epidemic and improve industrial productivity.30

WIPO also noted that some technologies had multiple applications: ‘Many AI-related

technologies can find use across different industries, as shown by the large number of patents

in AI that refer to multiple industries.’31

Analysing the patent data, WIPO highlights the dominance of entities from the United States, China, and Japan. The report observes:

Companies, in particular those from Japan, the United States of America (U.S.) and China, dominate

patenting activity. Companies represent 26 out of the top 30 AI patent applicants, while only four are

universities or public research organizations. This pattern applies across most AI techniques,

applications and fields. Of the top 20 companies filing AI-related patents, 12 are based in Japan, three

are from the U.S. and two are from China. Japanese consumer electronics companies are particularly

heavily represented.32

In this context, Australia is in a precarious position – lagging in many of the key innovation

races in respect of patents and AI.

In terms of patent applicants, WIPO charted the dominance of IBM and Microsoft:

IBM and Microsoft are leaders in AI patenting across different AI-related areas. IBM has the largest

portfolio of AI patent applications with 8,290 inventions, followed by Microsoft with 5,930. Both

companies’ portfolios span a range of AI techniques, applications and fields, indicating that these

companies are not limiting their activity to a specific industry or field. Rounding out the top five

                                                            29 WIPO, Technology Trends 2019: Artificial Intelligence, Geneva: WIPO, 2019. 30 Ibid., 13. 31 Ibid., 14. 32 Ibid., 15.


applicants are Toshiba (5,223), Samsung (5,102) and NEC (4,406). The State Grid Corporation of

China has leaped into the top 20, increasing its patent filings by an average of 70 percent annually from

2013 to 2016, particularly in the machine learning techniques of bio-inspired approaches, which draw

from observations of nature, and support vector machines, a form of supervised learning.33

The heavy dominance of AI patents by corporations will also have important implications in

terms of the ownership of AI, and access to benefits associated with AI.

The report also discusses the activity of universities and public research institutions in

the context of AI research. The report comments: ‘Despite the dominance of companies in AI,

universities and public research organizations play a leading role in inventions in selected AI

fields such as distributed AI, some machine learning techniques and neuroscience/

neurorobotics.’34 The report highlights the significant investment by Chinese universities in AI

patents:

Chinese organizations make up 17 of the top 20 academic players in AI patenting as well as 10 of the

top 20 in AI-related scientific publications. Chinese organizations are particularly strong in the

emerging technique of deep learning. The leading public research organization applicant is the Chinese

Academy of Sciences (CAS), with over 2,500 patent families and over 20,000 scientific papers

published on AI. Moreover, CAS has the largest deep learning portfolio (235 patent families). Chinese

organizations are consolidating their lead, with patent filings having grown on average by more than

20 percent per year from 2013 to 2016, matching or beating the growth rates of organizations from

most other countries.35

The report observes that there are 167 universities and public research organisations ranked

among the top 500 patent applicants: ‘110 are Chinese, 20 are from the U.S., 19 from the

Republic of Korea and 4 from Japan; [and] four European public research organizations feature

in the top 500 list.’36

C. Designs Law

                                                            33 Ibid., 15. 34 Ibid., 16. 35 Ibid., 16. 36 Ibid., 15.


Designs law may also raise issues in respect of the design of robots and robotics. Keisner, Raffo

and Wunsch-Vincent observe that ‘industrial designs that protect a robot’s appearance – its

shape and form – also play an important role in improving the marketability of products and

helping firms appropriate the returns on their R&D investments.’37

As highlighted by the litigation between Apple and Samsung over designs relating to

smartphones and tablets, designs law can play a significant role in disputes over the ownership

of new technologies.

The right of repair under designs law will also play an important role in respect of

robotics. The Federal Court of Australia has provided guidance as to the nature and scope of

the right of repair in GM Global Technology Operations LLC v. S.S.S. Auto Parts Pty Ltd.38

There has been increasing activity in respect of algorithm-driven design, and a growing conversation about how artificial intelligence will impact the work of designers.

D. Trademark Law and Related Rights

In addition to other fields of industrial property law, there have also been battles over trade

marks and robotics. The makers of the film RoboCop have asserted their trademark against

providers of security services. Lucasfilm – which developed the Star Wars franchise – acquired

a trademark on ‘Droid.’ Trade marks will play a critical role as robotics companies seek to

market their inventions in the global marketplace.

There has been consideration as to how artificial intelligence will affect trade mark law.

Lee Curtis and Rachel Platts of HGF explain how the AI revolution will impact upon the legal

discipline.39 They explain:

The impact of AI systems in everyday life and the process of buying products and services, which in

essence is the focus of trademark law, is increasing. It is predicted by a study from Gartner that by

2020, 85% of customer service interactions in retail will be powered or influenced by some form of AI

technology. AI global revenue is predicted by market intelligence firm Tractica to skyrocket from

                                                            37 C. Andrew Keisner, Julio Raffo and Sacha Wunsch-Vincent, ‘Breakthrough Technologies – Robotics

and IP’, WIPO Magazine, 2016. 38 GM Global Technology Operations LLC v. S.S.S. Auto Parts Pty Ltd [2019] FCA 97 39 Lee Curtis and Rachel Platts, ‘AI Is Coming and It Will Change Trade Mark Law’,

http://www.hgf.com/media/1173564/09-13-AI.PDF


$643.7 million in 2016 to $36.8 billion in 2025. A report from advertising agency J Walter Thompson suggests that 70% of so-called millennials appreciate brands using AI technology to showcase their products, with a report from Statista suggesting that 38% of consumers receive better

purchasing guidance with AI than without.40

Curtis and Platts note: ‘To date, AI and IP discussions have centred around patent law and

patent protection for AI software applications.’41 However, they observed: ‘The impact of AI

on trade mark law and whether the present law is “fit for purpose” seems to have been

completely overlooked.’42

There has been one piece of litigation involving artificial intelligence and trade mark law. In Cosmetic Warriors and Lush v Amazon.co.uk and Amazon EU, Lush argued that Amazon had infringed its trade marks.43 The court considered Amazon's use of product suggestions:

In connection with the search engine on its own site, Amazon has also used analyses of consumer

behaviour. Thus for example, if a consumer types Squiffo into the search box and that term has not

been typed in previously, no results will be shown and the screen may ask if the consumer meant

“squiff” and display some results for squiff products. However, if the consumer who originally typed

in squiffo goes on to purchase some goods, these goods might be suggested to the next consumer who

types in squiffo. It is for reasons like this that consumers who type Lush into the amazon.co.uk search

facility are shown products such as Bomb Cosmetics products—previous consumers who typed in Lush

have gone on to browse and/or purchase such products. Thus, Amazon has built up and uses a

behaviour-based search tool to identify an association between a particular search word and specific

products. Amazon uses this tool to present products to consumers which it hopes will be of interest to

them. In the present case, this tool has used the word Lush to identify products which Amazon believes

a consumer searching for Lush products might wish to buy instead of a Lush-branded product.44

Baldwin J held that those sponsored advertisements for Amazon, triggered by keywords

including ‘Lush’, which included the mark, infringed Article 5(1)(a) of the Directive. However, those advertisements that did not include the LUSH mark were not infringing.

                                                            40 Ibid. 41 Ibid. 42 Ibid. 43 Cosmetic Warriors and Lush v Amazon.co.uk and Amazon EU [2014] EWHC 181 (Ch). 44 Cosmetic Warriors and Lush v Amazon.co.uk and Amazon EU [2014] EWHC 181 (Ch).


There were also some disputes over publicity rights and robotic representations in the

1990s in the United States in a number of cases such as Wendt v. Host International, Inc., and

White v. Samsung Electronics.45 There has been discussion of whether there is a need to revisit

legal personhood in light of robotics and artificial intelligence.

The advent of robotics and artificial intelligence may also have implications for

consumer law and competition policy. Kate Darling has emphasized the need to take into

account the public interest in competition policy in matters of intellectual property and robotics:

‘Competition will drive better implementations of personalized robots, and a vibrant market

could even encourage better privacy and data security solutions.’46

E. Copyright Law

Copyright law and technological protection measures are relevant to robotics as means of protecting computer programs, databases, and creative works.

In copyright law, robotics poses complicated questions about authorship, ownership,

and creativity. At the QUT Robotronica festivals, there have been a number of demonstrations

of how robotics has been transforming the creative arts – including in music, art, and

performance.

In the private sector, information technology companies have experimented with

machine learning, neural networks, and artificial intelligence.

There has been significant debate as to whether the copyright categories of authorship

could include artificial intelligence. In IceTV v. Nine Network, the High Court of Australia insisted upon the need for human authorship of copyright works.47 The judges stressed the

importance of human authorship and human agency:

The first principle concerns the significance of "authorship". The subject matter of the Act now extends

well beyond the traditional categories of original works of authorship, but the essential source of

original works remains the activities of authors. While, by assignment or by other operation of law, a

party other than the author may be owner of the copyright from time to time, original works emanate

                                                            45 Wendt v. Host International, Inc., 125 F.3d 806 (1997) and White v. Samsung Electronics 971 F.2d 1395

(1992) 46 Kate Darling, ‘Why Google’s Robot Personality Patent Is Not Good for Robotics’, IEEE Spectrum, 8

April 2015. 47 IceTV v. Nine Network [2009] HCA 14.


from authors. So it was that in Victoria Park Racing and Recreation Grounds Co Ltd v Taylor, Dixon

J observed:

"Perhaps from the facts a presumption arises that the plaintiff company is the owner of the

copyright but, as corporations must enlist human agencies to compose literary, dramatic,

musical and artistic works, it cannot found its title on authorship. No proof was offered that the

author or authors was or were in the employment of the company under a contract of service

and that the book was compiled or written in the course of such employment."

Key provisions of Pt III of the Act fix on "the author". Examples include the requirement for the author

of unpublished works to be a "qualified person" for copyright to subsist (s 32(1)), the fixing of copyright

duration by reference to the death of the author (s 33), and the conferral of copyright upon the author

subject to the terms of employment or contractual arrangements under which the author labours (s 35).48

The High Court of Australia emphasized that ‘the notion of "creation" conveys the earlier

understanding of an "author" as "the person who brings the copyright work into existence in its

material form"’. 49

In the United States, a number of jurists and legal theorists have considered ways and

means by which robotics and artificial intelligence could be accommodated within copyright

law. Pamela Samuelson,50 Annemarie Bridy,51 James Grimmelmann,52 and Andres Guadamuz53

have explored the possibilities in respect of copyright law and artificial intelligence.

Presciently, in 1985, Pamela Samuelson from Berkeley Law considered the question of

allocation of ownership rights in computer-generated works. 54 She observed: ‘As “artificial

intelligence” (AI) programs become increasingly sophisticated in their role as the “assistants”

of humans in the creation of a wide range of products – from music to architectural plans to

                                                            48 IceTV v. Nine Network [2009] HCA 14. 49 IceTV v. Nine Network [2009] HCA 14. 50 Pamela Samuelson, ‘Allocating Ownership Rights in Computer-Generated Works’ (1985) 47 University

of Pittsburgh Law Review 1185-1228. 51 Annemarie Bridy, ‘Coding Creativity: Copyright and the Artificially Intelligent Author’ (2012) 5

Stanford Technology Law Review 1-28. 52 James Grimmelmann, ‘Copyright for Literate Robots’ (2016) 101 Iowa Law Review 657-681. 53 Andres Guadamuz, ‘Artificial Intelligence and Copyright’, WIPO Magazine, October 2017.

https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html 54 Pamela Samuelson, ‘Allocating Ownership Rights in Computer-Generated Works’ (1985) 47 University

of Pittsburgh Law Review 1185-1228.


computer chip designs to industrial products to chemical formulae – the question of who will

own what rights in the “output” of such programs may well become a hotly contested issue.’55

Annemarie Bridy reflected upon the debate over copyright and artificial intelligence.56

She charted the history of the discussion:

For more than a quarter century, interest among copyright scholars in the question of AI authorship has

waxed and waned as the popular conversation about AI has oscillated between exaggerated predictions

for its future and premature pronouncements of its death. For policymakers, the issue has sat on the

horizon, always within view but never actually pressing. Indeed, to the extent that the copyright system

is now in a digitally induced crisis, the causes lie primarily outside the domain of cultural production,

in the domains of reproduction and distribution. To recognize this fact, however, is not to say that we

can or should ignore the challenge that AI authorship presents to copyright law’s underlying

assumptions about creativity. On the contrary, the relatively slow development of AI offers a reprieve

from the reactive model of policymaking that has driven copyright law in the digital age.57

Bridy suggested: ‘AI authorship is readily assimilable to the current copyright framework

through the work made for hire doctrine, which is a mechanism for vesting copyright directly

in a legal person who is acknowledged not to be the author-in-fact of the work in question.’58

James Grimmelmann has also considered the relationship between copyright, artificial

intelligence, and robotics.59 He comments:

Almost by accident, copyright law has concluded that it is for humans only: reading performed by

computers doesn't count as infringement. Conceptually, this makes sense: Copyright's ideal of romantic

readership involves humans writing for other humans. But in an age when more and more manipulation

of copyrighted works is carried out by automated processes, this split between human reading

(infringement) and robotic reading (exempt) has odd consequences: it pulls us toward a copyright

system in which humans occupy a surprisingly peripheral place.60

                                                            55 Ibid. 56 Annemarie Bridy, ‘Coding Creativity: Copyright and the Artificially Intelligent Author’ (2012) 5

Stanford Technology Law Review 1-28. 57 Ibid. 58 Ibid. 59 James Grimmelmann, ‘Copyright for Literate Robots’ (2016) 101 Iowa Law Review 657-681. 60 Ibid.


Grimmelmann considers how the principles of copyright infringement and the doctrine of fair

use deal with robotic readers.

In 2017, Andres Guadamuz from Sussex University commented upon some of the

implications of artificial intelligence for copyright law and policy:

Creating works using artificial intelligence could have very important implications for copyright law.

Traditionally, the ownership of copyright in computer-generated works was not in question because the

program was merely a tool that supported the creative process, very much like a pen and paper. Creative

works qualify for copyright protection if they are original, with most definitions of originality requiring

a human author. Most jurisdictions, including Spain and Germany, state that only works created by a

human can be protected by copyright. But with the latest types of artificial intelligence, the computer

program is no longer a tool; it actually makes many of the decisions involved in the creative process

without human intervention.61

He suggests that artificial intelligence raises fundamental doctrinal issues in terms of authorship,

ownership, copyright subject matter, copyright infringement, and remedies.

By contrast, the European Parliament Legal Affairs Committee has demanded the elaboration of criteria for ‘own intellectual creation’ for copyrightable works produced by computers or robots.62

In the field of literature, there has been anxiety amongst authors and publishers regarding the machine-learning projects of big IT companies. In 2016, authors expressed disquiet

over Google using novels to improve its AI’s conversation ability.63 Erin McCarthy objected:

It’s hard to gauge the use of my work and the exact purpose for its use without having seen it in action.

My assumption would be they purchased a copy of the book originally. If they haven’t, then I would

                                                            61 Andres Guadamuz, ‘Artificial Intelligence and Copyright’, WIPO Magazine, October 2017.

https://www.wipo.int/wipo_magazine/en/2017/05/article_0003.html 62 European Parliament Legal Affairs Committee, Civil Law Rules on Robotics that includes

recommendations to the Commission on Civil Law Rules on Robotics (2015/2103/(INL); rapporteur, Mady

Delvaux (S&D, Luxembourg) http://www.europarl.europa.eu/doceo/document/A-8-2017-

0005_EN.html?redirect 63 Richard Lea, ‘Google swallows 11,000 novels to improve AI's conversation’, The Guardian, 28

September 2016, https://www.theguardian.com/books/2016/sep/28/google-swallows-11000-novels-to-improve-

ais-conversation


imagine the source of the content, as intellectual property, should be properly attributed and

compensated for the general health of the creative community.64

Mary Rasenberger of the Authors Guild maintained: ‘The research in question uses these

novels for the exact purpose intended by their authors – to be read.’65 ‘It shouldn’t matter

whether it’s a machine or a human doing the copying and reading, especially when behind the

machine stands a multi-billion dollar corporation which has time and again bent over

backwards devising ways to monetise creative content without compensating the creators of

that content.’66

For its part, Google was unapologetic about its use of the textual material:

We could have used many different sets of data for this kind of training, and we have used many

different ones for different research projects. But in this case, it was particularly useful to have language

that frequently repeated the same ideas, so the model could learn many ways to say the same thing –

the language, phrasing and grammar in fiction books tends to be much more varied and rich than in

most nonfiction books.67

No doubt the company was heartened by its victory over authors in a dispute over copyright

and fair use in respect of Google Books.68

In October 2018, a piece of AI art by the French collective Obvious was sold at

Christie’s for $432,500. Hugo Caselles-Dupré of the collective Obvious explained that their

work explored the interface between art and artificial intelligence. Their method relied upon a

‘generative adversarial network’. Caselles-Dupré explained:

The algorithm is composed of two parts. On one side is the Generator, on the other the Discriminator.

We fed the system with a data set of 15,000 portraits painted between the 14th century to the 20th. The

Generator makes a new image based on the set, then the Discriminator tries to spot the difference

between a human-made image and one created by the Generator. The aim is to fool the Discriminator

into thinking that the new images are real-life portraits. Then we have a result.69

                                                            64 Ibid. 65 Ibid. 66 Ibid. 67 Ibid. 68 See - 69 Christie’s, ‘Is Artificial Intelligence Set To Become Art’s Next Medium’, Christie’s, 12 December 2018,

https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx


He observed that ‘we found that portraits provided the best way to illustrate our point, which

is that algorithms are able to emulate creativity.’70
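The adversarial procedure Caselles-Dupré describes can be sketched numerically. The following is a minimal, illustrative generative adversarial network in Python (assuming NumPy is available): the ‘real’ data is a one-dimensional stand-in for the 15,000 portraits, and the Generator and Discriminator are each reduced to two scalar parameters so the adversarial loop stays visible. Nothing here reflects the Obvious collective's actual system; all parameter choices are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def real_batch(n):
    # "Real" data: a 1-D stand-in for the portrait dataset in the quotation.
    return rng.normal(4.0, 0.5, n)

# Generator G(z) = w*z + c; Discriminator D(x) = sigmoid(a*x + b).
w, c = 1.0, 0.0
a, b = 0.1, 0.0
lr = 0.05

for _ in range(2000):
    x_r = real_batch(64)
    z = rng.normal(size=64)
    x_f = w * z + c            # Generator makes new samples from noise

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    s_r = sigmoid(a * x_r + b)
    s_f = sigmoid(a * x_f + b)
    a += lr * (np.mean((1 - s_r) * x_r) - np.mean(s_f * x_f))
    b += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # Generator ascent on log D(fake): try to fool the Discriminator.
    s_f = sigmoid(a * x_f + b)
    grad_x = (1 - s_f) * a     # d log D(x_f) / d x_f
    w += lr * np.mean(grad_x * z)
    c += lr * np.mean(grad_x)

# Mean of freshly generated samples; with these settings it should drift
# toward the real-data mean of 4.0 as the Generator learns to fool D.
fake_mean = float(np.mean(w * rng.normal(size=1000) + c))
```

The generator here uses the common ‘non-saturating’ objective (maximising log D(fake) rather than minimising log(1 − D(fake))), a standard choice to keep early gradients usable; the aim, as the quotation puts it, is to fool the Discriminator into treating generated samples as real.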

In March 2019, Sotheby’s auctioned a work of AI art by the German artist Mario Klingemann. Klingemann has an unorthodox vision of creativity: ‘Humans are not

original - We only reinvent, make connections between things we have seen.’71

Certain AI projects might also raise larger questions about copyright law, copyright

exceptions, and copyright infringement. Montreal-based artist Adam Basanta has developed an

AI project called All We’d Ever Need is One Another.72 Basanta has explained his process of

using computer scanners:

I was really surprised that the images looked a lot like canonical 1950s abstract paintings. I literally had a moment where I had made a piece and I thought: I’ve seen this before. I looked it up and I found a Rothko that was very, very similar to it. If it’s similar enough to a work that the art market or international collections has deemed art-worthy, then that image, which is similar to it, is also art-worthy. It becomes art.73

The artist is being sued by Amel Chamandy for copyright infringement and trademark infringement of her painting, ‘Your World Without Painting’.74 Her lawyer Pascal Lauzon said:

‘These acts of infringement illegally divert internet traffic away from NuEdge’s website and

allows you to unduly benefit from the goodwill and reputation associated with the name and

trademark AMEL CHAMANDY.’75 Amel Chamandy is seeking $CA 40,000 in damages.

                                                            70 Ibid. 71 Arthur Miller, ‘Can Machines Be More Creative Than Humans?’, The Guardian, 4 March 2019,

https://www.theguardian.com/technology/2019/mar/04/can-machines-be-more-creative-than-

humans?CMP=share_btn_tw 72 CBC Radio, ‘Can An Artist Sue An AI Over Copyright Infringement?’, CBC Radio, 13 October 2018,

https://www.cbc.ca/radio/spark/409-1.4860495/can-an-artist-sue-an-ai-over-copyright-infringement-1.4860762 73 Chris Hannay, ‘Artist Faces Lawsuit Over Computer System That Creates Randomly Generated Images’,

The Globe and Mail, 4 October 2018, https://www.theglobeandmail.com/arts/art-and-architecture/article-artist-

faces-lawsuit-over-computer-system-that-creates-randomly/ 74 Ibid. 75 Ibid.


Artificial intelligence has also been transforming music. Google’s WaveNet uses neural

nets to generate speech and music.76 Stuart Dredge has reported that a number of AI firms seek to forge links between the technology industry and the music industry.77

Examples include AI Music, Amper Music, Popgun, Jukedeck, Humtap, Groov.AI, Magenta,

and Flow Machines. Jon Eades – who runs the Abbey Road Red incubator – says that AI will

have a mixed impact upon the music industry:

I think there will be collateral damage, just like the internet. It created huge opportunity, and completely

adjusted the landscape. But depending on where you sat in the pre-internet ecosystem, you either called

it an opportunity or a threat. It was the same change, but depending on how much you had to gain or

lose, your commentary was different. I think the same thing is occurring here. AI is going to be as much

of a fundamental factor in how the businesses around music are going to evolve as the internet was.78

AI music raises issues around copyright authorship, ownership, infringement, and exceptions.

The law firm Allens has argued that there is a need to reform copyright law and policy

to take into account artificial intelligence.79 Partner Andrew Wiseman worries: ‘The speed of

progress and technology often overtakes legislation and that's been the case with copyright

right back to when it first began with the Stationers in the United Kingdom and the 1600s.’ He

contends: ‘I think a lot of people are not aware of how fast AI is working and proceeding, and

the issue will be that if we don't deal with ownership issues like this quickly, then we may find

that there is some very important work that ends up with no copyright because there's no

relevant owner.’ However, others have protested that recognition of non-human authorship

would go against the rationales and objectives of copyright law.

There have also been parallel discussions taking place in respect of copyright law and

non-human authorship in the context of the animal kingdom. The ‘Monkey Selfie’ case has

                                                            76 Devin Coldewey, ‘Google’s WaveNet uses Neural Nets to Generate Eerily Convincing Speech and

Music’, Tech Crunch, 9 September 2016, https://techcrunch.com/2016/09/09/googles-wavenet-uses-neural-nets-

to-generate-eerily-convincing-speech-and-music/ 77 Stuart Dredge, ‘AI and Music: Will We Be Slaves to the Algorithm?’, The Guardian, 6 August 2017,

https://www.theguardian.com/technology/2017/aug/06/artificial-intelligence-and-will-we-be-slaves-to-the-

algorithm 78 Ibid. 79 Lucas Baird, ‘Copyright Law Must Be Amended to Account for Artificial Intelligence’, Australian

Financial Review, 1 January 2019, https://www.afr.com/business/legal/copyright-law-must-be-amended-to-

account-for-artificial-intelligence-allens-20181227-h19hmb


been quite notorious.80 The United States Court of Appeals for the Ninth Circuit held in this

matter: ‘We conclude that this monkey—and all animals, since they are not human—lacks

statutory standing under the Copyright Act.’81

                                                            80 Naruto v. Slater (2018) No. 16-15469 81 Naruto v. Slater (2018) No. 16-15469


2. Commercial Law

A. Problems associated with AI

In a semi-fictional introduction to her book, Smart Technologies and the End(s) of Law: Novel

Entanglements of Law and Technology,82 Mireille Hildebrandt, the Dutch lawyer and

philosopher, outlined some of the applications of new technologies, including AI, in our not-

too-distant future. Hildebrandt took currently available technology and applied it liberally to saturate the life of her protagonist, Diana, with ubiquitous technological gadgets and applications, illustrating the control that these technologies, and the companies that own them, might have over our lives if we continue to thoughtlessly adopt technology. The ramifications for our human rights, our personal, social, and work lives, and our very existence are writ large in this story.

Over the past ten or so years, applications of what is described as narrow AI have

expanded and developed. Current applications that use powerful face recognition, data

analytics and natural language processing have pushed AI further into our everyday lives. This

has occurred at a speed and to an extent that we, as consumers, are unable to fully process. The

implications for us continuing down this path, uncontrolled, are profound. Ordinary consumers

of these products appear powerless against the technology companies. In its interim report on

digital platforms, the ACCC found that:

… consumers are unable to make informed choices over the amount of data collected by the digital

platforms, and how this data is used. This reflects the bargaining power held by the digital platforms vis-

à-vis consumers, and the information asymmetries that exist between digital platforms and consumers.

The ACCC considers that the current regulatory framework, including privacy laws, does not effectively

deter certain data practices that exploit the information asymmetries and the bargaining power

imbalances that exist between digital platforms and consumers.83

A number of the challenges to regulating developments in artificial intelligence (AI)

and some possible solutions were set out by Michael Guihot, Anne F Matthew and Nicolas P

82 Mireille Hildebrandt, ‘Introduction: Diana’s Onlife World’ in Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Edward Elgar Publishing, 2015).
83 Australian Competition and Consumer Commission, ‘Digital Platforms Inquiry: Preliminary Report’ (December 2018) 13 https://www.accc.gov.au/system/files/ACCC%20Digital%20Platforms%20Inquiry%20-%20Preliminary%20Report.pdf.


Suzor, in their paper, ‘Nudging Robots: Innovative Solutions to Regulate Artificial

Intelligence’.84 This submission emphasises and develops several of those ideas. However, it

also adopts, and recommends, a more commercial approach to regulation in this area. This

is because, at a practical, commercial level, many of the possible harms associated with

developments in AI stem from corporations pursuing profits through interactions with

consumers. This commercial approach to regulating AI also fundamentally affects the choice

of a suggested regulatory body to oversee the development of AI in Australia.

Risk is a factor in determining regulatory responses. As discussed in ‘Nudging Robots’,

the level of risk associated with various applications of AI is not constant. One application of

AI (such as a facial recognition system) could pose a range of risks from low to moderate to

high, depending on how it is used, by whom, and for what purpose. Further, the type of risk

posed by each application may not be the same. For example, with a particular application of

AI, there might be a low risk to safety or to human life, but a high risk of a breach of privacy.

An additional complicating factor is that similar types of applications will be used differently

in different industries or areas. For example, the same narrow AI application used in a product

in medical procedures may be applied to a different product for security purposes. This will

very likely mean that different regulatory agencies will be required to regulate the same AI,

but in different applications. Therefore, it is too simplistic to seek to regulate AI based upon a

single presumed level of risk.85
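By way of illustration only, the multi-dimensional risk profile described above can be sketched as a simple data structure. This is a hypothetical sketch of the concept, not a proposed regulatory instrument; all names and values are our own:

```python
# Illustrative sketch: the same narrow-AI application can carry different
# risk levels along different dimensions, depending on the context of use.

RISK_LEVELS = ("low", "moderate", "high")

def assess(application, context, risks):
    """Record a per-dimension risk profile for one use of one application."""
    for dimension, level in risks.items():
        if level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {level}")
    return {"application": application, "context": context, "risks": risks}

# The same facial-recognition system, assessed in two different contexts:
medical = assess("facial recognition", "patient identification in hospitals",
                 {"safety": "low", "privacy": "high"})
security = assess("facial recognition", "public-space surveillance",
                  {"safety": "moderate", "privacy": "high"})

# No single presumed risk level captures both profiles, which is why
# different regulators may need to assess the same underlying AI.
```

The point of the sketch is that a risk assessment of an AI application is a function of both the application and its context of use, not of the technology alone.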

As argued in ‘Nudging Robots’, ‘public regulators must become informed about the AI

used in their field, assess the risks posed by the AI application as it is used in the industry in

which they operate, and regulate appropriately. Armed with a deeper understanding of the

industry and the intended use of the AI, stakeholders involved in informing the regulatory

approach will be better placed to ask the right questions to assuage, or at least contextualise,

their concerns about levels of risk.’86 It is important in this context that there is a level of

cooperation between regulating agencies.87

84 Michael Guihot, Anne F Matthew and Nicolas P Suzor, ‘Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence’ (2017) 20(2) Vanderbilt Journal of Entertainment and Technology Law 385.
85 Ibid.
86 Ibid.
87 See Phil MacNaghten and Jason Chilvers, ‘Governing Risky Technologies’ in Matthew Kearnes, Francisco Klauser and Stuart Lane (eds), Critical Risk Research: Politics and Ethics (John Wiley & Sons Ltd, 2012) 99.


B. Approaches to regulation

Guihot et al canvass a number of approaches to regulating various incarnations of AI.88 The

difficulty with a one-size-fits-all response becomes obvious once the nature of the problem

is described. Matthew Scherer sets out four general ex ante problems with regulating research

and development of AI: (1) discreetness, meaning ‘AI projects could be developed without the

large-scale integrated institutional frameworks’; (2) diffuseness, meaning AI projects could be

carried out by diffuse actors in many locations around the world; (3) discreteness, meaning

projects will make use of discrete components and technologies ‘the full potential of which

will not be apparent until the components come together’; and (4) opacity, meaning the

‘technologies underlying AI will tend to be opaque to most potential regulators.’89

These four problems go to the heart of the issues with regulating AI, but many other

problems tied to individual uses of AI become apparent only after the AI is introduced to the

public and its range of possible applications becomes apparent. For example, face recognition

software is now readily available at low cost.90 The potential for abuse of this software should

have been apparent, but its use in schools, social settings and, particularly, by government for

‘security purposes’ compounds the risks it poses and makes it an urgent problem in need of

redress. Some possible regulatory solutions to developing AI are discussed below.

C. Government intervention

There is a role for government to play in regulating developments in AI. Nothing beats the

effect of top-down hard laws with enforceable sanctions to regulate behaviours. An immediate

effect of legislation is the availability of remedies set out for its breach. A secondary impact of

legislation is the deterrent effect it has on potential wrongdoers. In Ayres and Braithwaite’s articulation of responsive regulation, top-down legislation serves both as the ultimate enforcement mechanism in a pyramid of stepped regulatory interventions and as a deterrent through the threat of heavier

88 Michael Guihot, Anne F Matthew and Nicolas P Suzor, ‘Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence’ (2017) 20(2) Vanderbilt Journal of Entertainment and Technology Law 385.
89 Matthew Scherer, ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies’ (2016) 29(2) Harvard Journal of Law & Technology 354–400.
90 Alex Walling, ‘What Is the Best Face Recognition Software?’ (28 November 2018) Quora https://www.quora.com/What-is-the-best-face-recognition-software.


sanctions. In this way “escalating forms of government intervention … reinforce and help

constitute less intrusive and delegated forms of market regulation,”91 such as self-regulation.

Some states in the United States have taken the lead in regulating the use of biometrics such as face recognition. In a remarkably prescient example, in 2008 the state of Illinois legislated to restrict its use.92 Under the Biometric Information Privacy Act, ‘no private entity may

collect, capture, purchase, receive through trade, or otherwise obtain a person's or a customer's

biometric identifier or biometric information’. The only exceptions are if the entity informs the

subject in writing that biometric information is being collected, the purpose for which it is

being stored, collected or used, and receives a written release from the subject.93 A recent court

case confirmed that the subject does not have to prove actual harm for the legislation to apply.94
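The structure of the Illinois rule, a default prohibition subject to a notice-and-release exception, can itself be read as a design constraint. The following sketch is purely illustrative; the class, field and function names are our own and are not drawn from the Act:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WrittenRelease:
    """Illustrative record of the notice and consent a private entity must hold."""
    subject: str
    informed_in_writing: bool   # subject told biometric data is being collected
    purpose_disclosed: bool     # purpose of storage/collection/use disclosed
    release_signed: bool        # written release received from the subject

def may_collect_biometrics(release: Optional[WrittenRelease]) -> bool:
    """Default prohibition: collection is permitted only if every element
    of the exception is satisfied (cf the structure of 740 ILCS 14/15(b))."""
    if release is None:
        return False
    return (release.informed_in_writing
            and release.purpose_disclosed
            and release.release_signed)
```

Expressed this way, the statute's logic is conjunctive: failure of any single element of the exception returns the system to the default prohibition.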

D. Self-regulation

To be effective, any regulation must reflect society’s norms. However, these norms are shifting as we become inured to encroachments upon our privacy and the intimate parts of our lives in return for access to social utilities such as those provided by Facebook and Google. These companies, along with a number of others, have set up a self-regulating

entity called the Partnership on AI.95 The Partnership on AI’s purpose statement is to ‘benefit

people and society,’96 and it is said to have been “[e]stablished to study and formulate best

practices on AI technologies, to advance the public’s understanding of AI, and to serve as an

open platform for discussion and engagement about AI and its influences on people and

society.”97 It has developed a series of tenets, one of which is to seek to ‘maximize the benefits

and address the potential challenges of AI technologies, by, among other things, … Opposing

91 Ian Ayres and John Braithwaite, Responsive Regulation: Transcending the Deregulation Debate (Oxford University Press, 1992) 4 http://ebookcentral.proquest.com/lib/qut/detail.action?docID=272606.
92 Biometric Information Privacy Act, 740 ILCS 14.
93 Ibid s 15(b).
94 Russell Brandom, ‘Crucial Biometric Privacy Law Survives Illinois Court Fight’ (26 January 2019) The Verge https://www.theverge.com/2019/1/26/18197567/six-flags-illinois-biometric-information-privacy-act-facial-recognition.
95 Partnership on AI, Partnership on Artificial Intelligence to Benefit People and Society https://www.partnershiponai.org/.
96 Ibid.
97 Ibid.


development and use of AI technologies that would violate international conventions or human

rights, and promoting safeguards and technologies that do no harm’.98 In doing so, the

Partnership on AI is taking on the role of a self-regulatory association and potentially warding

off more enforceable state-imposed regulatory obligations. The benefits of this type of self-regulation are questionable: given the human rights concerns expressed by the HRC in relation to the behaviours and practices of these very institutions, it is far from certain that self-regulation will be wholly effective.

The commercial applications of AI are becoming more innovative and pernicious. The

world's most valuable companies99 are investing heavily in AI’s potential, partly because of the

immense sums of money to be made. For this reason, it would be problematic to allow these

companies to offer the solutions to the problems with AI. The pursuit by these technology companies of corporate strategies at the expense of consumers has resulted in what Shoshana Zuboff describes as Surveillance Capitalism. Zuboff defines Surveillance Capitalism variously as:

1. A new economic order that claims human experience as free raw material for hidden commercial

practices of extraction, prediction, and sales; 2. A parasitic economic logic in which the production of

goods and services is subordinated to a new global architecture of behavioural modification; … 8. An

expropriation of critical human rights that is best understood as a coup from above; an overthrow

of the people’s sovereignty.100

For these reasons alone, self- or co-regulation by these entities may be one strand in the

regulatory rope, but cannot stand alone.

E. Nudging

As discussed in ‘Nudging Robots’, nudge theory has become prominent in recent years.101

Guihot et al argue that:

98 Ibid.
99 ‘TOP 10 - The Most Valuable Companies in the World - 2019 List’, FXSSI - Forex Sentiment Board https://fxssi.com/top-10-most-valuable-companies-in-the-world.
100 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Hachette Book Group, 2019) i.
101 See, e.g., id. at 6–8.


Psychological observations, as applied in behavioral economics, reveal that normative human behavior

can be skewed or distorted by inherent human biases. Nudge theory proposes that by exploiting these

biases, humans can be nudged to behave so as to achieve an outcome desired by the nudger. The theory

has tended to focus on nudging individual behaviours. However, recent work has examined how

behavioral economics approaches might influence a broader spectrum of decision-makers—including

corporations and policy-makers.102

It should be noted that the work the Human Rights Commission is undertaking in its Human Rights and Technology Project (HRTP), and now with the World Economic Forum (WEF), is itself a nudge on the behaviour of AI developers. Discussing and

documenting the human rights aspects of technology development brings these issues to the conscious or subconscious attention of those exposed to the HRC and WEF work. The effect of this is unknowable, but it should not be dismissed simply because of that.

F. Regulation by Design

Lawrence Lessig envisioned regulation as a combination of the forces applied by law,

architecture, markets and social norms. He popularised the idea that the architecture of

computer code could be a regulatory tool.103 That is, computers could be coded so as to self-enforce a legal outcome, or to prevent an unlawful one. In this way, computers and

the internet could be self-regulating. Lessig noted that:

We can build, or architect, or code cyberspace to protect values we believe are fundamental, or we can

build, or architect, or code cyberspace to allow those values to disappear. There is no middle ground.104

This notion has led various academics to theorise on a number of issues that could be governed

using designs or architecture of computer code. These areas now include Privacy by Design

(PbD), Data Protection or Security by Design (SbD),105 and Legal Protection by Design (LPbD).

102 Michael Guihot, Anne F Matthew and Nicolas P Suzor, ‘Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence’ (2017) 20(2) Vanderbilt Journal of Entertainment and Technology Law 385.
103 Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999).
104 Ibid 6.
105 See Lee A Bygrave, ‘Data Protection by Design and by Default: Deciphering the EU’s Legislative Requirements’ (2017) 4 Oslo Law Review 105 (discussing Article 25 of the GDPR on Data Protection by Design).
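The 'by design' idea can be made concrete with a minimal sketch: a protective legal default hard-coded into the architecture, so that departure from it requires an explicit, recorded choice. The configuration class and its methods below are hypothetical, offered only to illustrate the concept:

```python
# Illustrative sketch of regulation by design: privacy-protective
# defaults are set in the code itself, not left for the user to enable.

class TelemetryConfig:
    def __init__(self):
        # The protective setting is the architectural default.
        self.collect_location = False
        self.retain_days = 0
        self.consent_record = None

    def enable_location(self, documented_consent: str) -> None:
        """Collection can be switched on only with a recorded basis,
        making the departure from the default both testable and contestable."""
        if not documented_consent:
            raise PermissionError("no recorded consent; the default stands")
        self.collect_location = True
        self.consent_record = documented_consent
```

In Lessig's terms, the default itself is the regulation: the architecture enforces the protective outcome unless a legally cognisable reason for departing from it is recorded.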


In his book, Privacy’s Blueprint, Woodrow Hartzog emphasised design as a way to

defend against invasions of privacy. He argued that

Design-based protections can require companies to protect against reasonably anticipated third-party

harms. Technology users will not have to worry as much about hiring an attorney or suffering only to

receive a paltry recovery, because they will not become victims. Design cannot do everything, but it

can dissuade would-be wrongdoers if the transaction cost is high enough.106

Hildebrandt proposed Legal Protection by Design (LPbD), in the same vein as the arguments

put forward for Privacy by Design and Security by Design. Hildebrandt argued that:

LPbD, then, requires that the legal conditions we have agreed upon (articulated in natural language) are

translated into the technical requirements that inform the data-driven architecture of our everyday

environment. These requirements should instigate technical specifications and default settings that —

other than current systems — afford the protection of fundamental rights. Thus, LPbD should constrain

the data-driven architectures that run our new onlife world, while challenging developers to offer multiple

ways of modelling the world, thus making their calculations, predictions and pre-emptions both testable

and contestable. Instead of ‘anything goes’ for the architects of this new world, democratically

legitimated law must regain its monopoly on setting the defaults of societal order, defining the rules of

the game in a way that brings the data-driven machinery under the Rule of Law.

When it comes to regulating AI, there is no one ‘AI’ and there is therefore no corresponding

single regulatory response. There must be a multi-dimensional response using elements of

government rule, self-regulation, design aspects and other soft regulatory tools (we use nudging

as an example of this). Certainly, an emphasis on the design approach to AI development could be an effective tool in the regulatory arsenal to govern developments in AI.

                                                            106 Woodrow Hartzog, Privacy’s Blueprint: The Battle to Control the Design of New Technologies (Harvard

University Press, 2018) 82 http://ebookcentral.proquest.com/lib/qut/detail.action?docID=5317538. However, see

Bert-Jaap Koops and Ronald Leenes, ‘Privacy Regulation Cannot Be Hardcoded. A Critical Comment on the

“Privacy by Design” Provision in Data-Protection Law’ (2014) 28(2) International Review of Law, Computers &

Technology 159 ('The upshot of our analysis is that ‘taking privacy seriously’ is unlikely to be achieved by

focusing on rule compliance through techno-regulation, which leads system developers into a morass of selecting

and translating rules that cannot be simply translated into system design requirements. Instead, taking privacy

seriously requires a mindset of both system developers and their clients to take into account privacy-preserving

strategies when they discuss and determine system requirements.’).


G. A Commercial/Consumer Approach

Several of the submissions to the HRC report lamented that Australia lacks a bill of rights that

would ensure human rights were incorporated into Australia’s system of governance. In the

current political climate, it is difficult to envisage such a development. However, there is

another broad-reaching, almost universal, regime that is already in place that does act in the

best interests of the vast majority of Australians. The Competition and Consumer Act 2010 and

the Australian Consumer Law (ACL) apply to interactions with consumers. Under the ACL, a

person is taken to have acquired a good or service if the cost of the good or service did not

exceed $40,000 or the good or service was a kind ordinarily acquired for personal, domestic or

household use or consumption.107 This broad classification of transactions as consumer

transactions would also arguably capture many of the goods and AI services provided by

technology companies. If it does not, then this submission argues for amendments to the ACL

to include such interactions.
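The statutory test in s 4B can be expressed as a simple predicate. The sketch below is a deliberate simplification of the section's full text (it ignores, for instance, the exclusions for re-supply and vehicles), and the function name and second limb's framing are our own:

```python
# Simplified sketch of the s 4B 'consumer' test as at 2019.
PRESCRIBED_AMOUNT = 40_000

def acquired_as_consumer(price: float, ordinarily_personal_use: bool) -> bool:
    """Two disjunctive limbs: price at or below the prescribed amount,
    OR goods/services of a kind ordinarily acquired for personal,
    domestic or household use or consumption."""
    return price <= PRESCRIBED_AMOUNT or ordinarily_personal_use
```

On this simplified reading, a 'free' AI-enabled consumer service (price effectively zero) would fall comfortably within the first limb, which is the basis of the argument above that the ACL can reach many AI goods and services.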

Indeed, the breadth of the application and protections sought to be covered in the ACL was envisaged at its inception in 1974, when the then Trade Practices Bill was being debated.

The Second Reading speech of William Morrison, the then Minister for Science, on the

introduction of the Trade Practices Bill in 1974 [at pp 574-575] is worth quoting at length. The

Minister said:

I think we all know that consumers are the largest but regrettably the least organised economic group in

the community. Every one of us, by definition, is a consumer—from when we get up in the morning and

squeeze our toothpaste tube until we go to bed at night and turn off the lamp. But more often than not,

we are quite ignorant of our rights and privileges as consumers. What we are proposing in this Bill is

a consumers' charter, that is, a bill of rights for Australians as consumers. We, as consumers, have

first the right to be safe, the right to protection against products which could harm our health or endanger

our lives.

Secondly, we have the right to know. The march of technology has brought added difficulties

as well as benefits to the consumer. The housewife today is required to be an amateur mechanic,

electrician, doctor, chemist, food technologist and mathematician, but all too rarely is she given the

information she needs to fill these roles. Our consumers' charter gives her the right to have access to the

facts, free from fraudulent or misleading information, whether in packaging or advertising.

Thirdly, consumers should have the right to choose, the right to select between goods and

services at competitive prices. Our charter protects the consumer against shady business practices which

                                                            107 Australian Consumer Law s 4B.


restrict the basic right to get value for money. ... To recapitulate, related to this Bill and involved in this

Bill are four fundamental rights: The right to be safe, as proposed in clause 62; the right to know, as

proposed by clause 63; the right to choose, and the right to be heard.

And at [575-576], the Minister said:

I turn to the approach adopted in the Bill. The Bill prohibits a wide range of specific practices which

have come under notice, including false representations, bait advertising, referral selling, pyramid selling

and the assertion of a right to payment for unsolicited goods. It is not possible however to specify in

advance the nature of all undesirable practices, as sharp operators continually evolve new schemes

for duping the public. For this reason the broad prohibition of misleading or deceptive conduct in

clause 52 is of great importance. … The courts will be able under that provision to take action against

conduct which may not fall within the more specific terms of other provisions. This will provide the

flexibility necessary if legislation of this kind is to be able to deal with evolving market practices without

the constant need for legislative action to catch up, often after much damage has already been done, with

new practices that are harmful to consumers.

The audacity and prescience of this legislative proposal are impressive. It anticipated

technological advances, and sought to address them. However, it could not have predicted the

extent of technological advances in the last twenty years or so. To frame the legislative scheme

as a bill of consumer rights speaks to how broadly it applies and its potential impact in the area

of AI development.

 


3. Regulation

There have been a number of national inquiries and discussion papers in respect of robotics

and artificial intelligence.

A. Australia

The Australian Parliament held an inquiry into drones in 2016, and into autonomous vehicles

in 2017. The Australian Centre for Robotic Vision produced A Robotics Roadmap for Australia

in June 2018. Synergies Economic Consulting produced a consultancy report on The Robotics

and Automation Advantage for Queensland. The Australian Human Rights Commission has an

ongoing inquiry into human rights and new technologies – considering amongst other things,

robotics, artificial intelligence, and the Internet of Things. The law firm Corrs has stressed: ‘It

is time for Australian law makers to start thinking about how we want our life with robots to

be regulated’.108

In this context, the Australian Human Rights Commission has proposed the

establishment of a Responsible Innovation Organisation.

The scope of this organisation is unclear in the white paper, and there is a slippage in that paper between the discussion of artificial intelligence and the discussion of the new institution. Would the Responsible Innovation Organisation focus only on artificial intelligence? Would it also deal with robotics, automated technologies and big data? Or would it play a role more generally in the regulation of a wide range of technologies – covering information technology, biotechnology, clean technology, nanotechnology, 3D printing, advanced manufacturing and so on?

The white paper contemplates the Responsible Innovation Organisation having a

number of functions and powers.

It is unclear how the Responsible Innovation Organisation would operate alongside a

range of existing regulators. It should be noted that a number of existing institutions and

regulators will have a significant role to play – given their jurisdiction. IP Australia manages

registered intellectual property – including patents, trade marks, and designs. The ACCC will

retain oversight over competition law and policy – including in new fields of technology, such

                                                            108 James Cameron, ‘Preparing for Life With Robots: How Will They Be Regulated in Australia?’, Corrs

Chambers Westgarth, 6 March 2017.


as artificial intelligence. ASIC will continue to have a role in regulating corporate

communications. Certain fields of technology already have a sui generis regulator – see, for

instance, the Gene Technology Regulator.

B. United States

In the United States, there has been a growing discipline of robot law and policy. There has

been a concerted effort by academics and scholars to develop the discipline of Robot Law as

an organised and systematic field of jurisprudence. There have been regular ‘We Robot’

conferences in North America. The book Robot Law – edited by Ryan Calo, A. Michael

Froomkin and Ian Kerr – represents a collective effort to survey the emerging field.109 In his

introduction, Froomkin comments: ‘Like the Internet before it, robotics is a socially and

economically transformative technology.’110 He observes that ‘the increasing sophistication of

robots and their widespread deployment everywhere from the home to hospitals, public spaces,

and the battlefield requires rethinking a wide variety of philosophical and public policy issues,

interacts uneasily with existing legal regimes, and thus may counsel changes in policy and in

law.’ 111

Ryan Calo has maintained that the United States should establish a Federal Robotics

Commission.112 He argued ‘that the United States would benefit from an agency dedicated to

the responsible integration of robotics technologies into American society’. In his view,

‘Robots, like radio or trains, make possible new human experiences and create distinct but

related challenges that would benefit from being examined and treated together.’113 Calo

envisaged that the organisation would have a special role: ‘The institution I have in mind would

not “regulate” robotics in the sense of fashioning rules regarding their use, at least not in any

initial incarnation’.114 He suggested: ‘Rather, the agency would advise on issues at all levels —

state and federal, domestic and foreign, civil and criminal — that touch upon the unique aspects

109 Ryan Calo, A. Michael Froomkin and Ian Kerr (eds), Robot Law, Cheltenham and Northampton (MA): Edward Elgar, 2016.
110 Ibid.
111 Ibid.
112 Ryan Calo, ‘The Case for a Federal Robotics Commission’, Brookings Centre for Technology Innovation, 2014, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2529151.
113 Ibid.
114 Ibid.


of robotics and artificial intelligence and the novel human experiences these technologies

generate.’115

The United States has held a DARPA Robotics Challenge in order to encourage

innovative activity in the field of robotics.

The Obama Administration produced a report in 2016 on the future of artificial

intelligence.116 In the preface to the report, John Holdren and Megan Smith discussed the

significance of artificial intelligence:

Advances in Artificial Intelligence (AI) technology have opened up new markets and new opportunities

for progress in critical areas such as health, education, energy, and the environment. In recent years,

machines have surpassed humans in the performance of certain specific tasks, such as some aspects of

image recognition. Experts forecast that rapid progress in the field of specialized artificial intelligence

will continue. Although it is very unlikely that machines will exhibit broadly-applicable intelligence

comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will

reach and exceed human performance on more and more tasks.117

Holdren and Smith discuss the need for regulation of artificial intelligence: ‘In the coming

years, AI will continue to contribute to economic growth and will be a valuable tool for

improving the world, as long as industry, civil society, and government work together to

develop the positive aspects of the technology, manage its risks and challenges, and ensure that

everyone has the opportunity to help in building an AI-enhanced society and to participate in

its benefits.’118

The report considered applications of AI for the public good. The report canvassed

larger questions about the regulation of artificial intelligence:

AI has applications in many products, such as cars and aircraft, which are subject to regulation designed

to protect the public from harm and ensure fairness in economic competition. How will the

incorporation of AI into these products affect the relevant regulatory approaches? In general, the

approach to regulation of AI-enabled products to protect public safety should be informed by

115 Ibid.
116 Executive Office of the President, National Science and Technology Council Committee on Technology, Preparing for the Future of Artificial Intelligence, Washington DC: United States Government, October 2016, https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.
117 Ibid.
118 Ibid.


assessment of the aspects of risk that the addition of AI may reduce alongside the aspects of risk that it

may increase. If a risk falls within the bounds of an existing regulatory regime, moreover, the policy

discussion should start by considering whether the existing regulations already adequately address the

risk, or whether they need to be adapted to the addition of AI. Also, where regulatory responses to the

addition of AI threaten to increase the cost of compliance, or slow the development or adoption of

beneficial innovations, policymakers should consider how those responses could be adjusted to lower

costs and barriers to innovation without adversely impacting safety or market fairness.119

The report noted: ‘Currently relevant examples of the regulatory challenges that AI-enabled

products present are found in the cases of automated vehicles (AVs, such as self-driving cars)

and AI-equipped unmanned aircraft systems (UAS, or “drones”).’120 The report also explored

issues relating to employment; the economics of AI; and fairness, safety, and governance.

There was also a discussion of the larger implications of AI for international relations and

security.

The report was also accompanied by a National Artificial Intelligence Research and

Development Strategic Plan.121 The report observed:

Artificial intelligence (AI) is a transformative technology that holds promise for tremendous societal

and economic benefit. AI has the potential to revolutionize how we live, work, learn, discover, and

communicate. AI research can further our national priorities, including increased economic prosperity,

improved educational opportunities and quality of life, and enhanced national and homeland security.

Because of these potential benefits, the U.S. government has invested in AI research for many years.

Yet, as with any significant technology in which the Federal government has interest, there are not only

tremendous opportunities but also a number of considerations that must be taken into account in guiding

the overall direction of Federally-funded R&D in AI.122

This report outlined seven strategies. The first strategy was to make long-term investments in

AI research. The second strategy was to develop effective methods for human-AI collaboration.

The third strategy was to understand and address the ethical, legal, and societal implications of

119 Ibid.

120 Ibid.

121 National Science and Technology Council, The National Artificial Intelligence Research and Development Strategic Plan, Washington DC: United States Government, October 2016, https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/national_ai_rd_strategic_plan.pdf

122 Ibid.


AI. The fourth strategy was to ensure the safety and security of AI systems. The fifth strategy

was to develop shared public datasets and environments for AI training and testing. The sixth

strategy was to measure and evaluate AI technologies through standards and benchmarks. The

seventh strategy was to better understand the national AI R&D workforce needs.

The Trump administration seems less interested in the regulation of robotics and

artificial intelligence. However, as the trade conflict with China shows, the Trump

administration is certainly interested in preserving the competitive advantages of the United

States in the field of new technologies.

C. European Union

In January 2017, the European Parliament’s Legal Affairs Committee recommended that there

should be law reform to address the fast-evolving field of robotics. Rapporteur Mady Delvaux

(S&D, LU) said:

A growing number of areas of our daily lives are increasingly affected by robotics. In order to address

this reality and to ensure that robots are and will remain in the service of humans, we urgently need to

create a robust European legal framework.

The report, approved by 17 votes to 2 with 2 abstentions, looks at robotics-related issues

such as liability, safety, and changes in the labour market.

The European Parliament’s Legal Affairs Committee drafted a motion for a European

Parliament Resolution. The motion noted that ‘from Mary Shelley's Frankenstein's Monster to

the classical myth of Pygmalion, through the story of Prague's Golem to the robot of Karel

Čapek, who coined the word, people have fantasised about the possibility of building intelligent

machines, more often than not androids with human features.’ The Committee observed that

‘humankind stands on the threshold of an era when ever more sophisticated robots, bots,

androids and other manifestations of artificial intelligence (‘AI’) seem to be poised to unleash

a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally

important for the legislature to consider its legal and ethical implications and effects, without

stifling innovation.’

The European Parliament’s Legal Affairs Committee called for the establishment of a

definition and classification of ‘smart robots’, and a registration of smart robots. The

Committee asked for an in-depth evaluation of liability regimes for robots and insurance


schemes. The Committee asked for a Charter of Robotics, which would include a code of

ethical conduct for robotics engineers. In particular, the Committee highlighted the importance

of fundamental rights, the precautionary principle, inclusiveness, accountability, safety,

reversibility, privacy, and cost-benefit analysis. The Committee also called for a licensing

scheme for designers of robots, and a licensing scheme for users of robots.

In the wake of the resolution, there has been further discussion of how best to regulate

artificial intelligence and robotics.

D. Japan

Japan established a Robotics Policy Office in 2015.123 The Ministry of Economy, Trade and

Industry (METI) established ‘the Robotics Policy Office under the Industrial Machinery

Division of the Manufacturing Industries Bureau in order to drive regulatory reform that is

well-balanced both in terms of deregulation and establishment of new rules from the standpoint

of achieving the proactive use of robots and international standardization focusing on global

business, and to make more appropriate, effective progress in the proactive use of robots in

various fields such as the service field and in the promotion of the robot industry by

comprehensively supervising affairs regarding the use of robots.’124 Japan has since undertaken

a range of activities – including holding a World Robot Summit, mapping the industry market,

developing regulations, and looking at the introduction of nursing care robots. METI has also

hosted an annual Robot Awards program to ‘facilitate the development of robot technologies

and the utilization of robots.’125 The use of prizes has been promoted by economists like Joseph

Stiglitz as an alternative or supplementary system to the patent regime to promote innovation.

E. South Korea

123 Japan Ministry of Economics, Trade and Industry, ‘Robotics Policy Office Is to Be Established in METI’, Press Release, 1 July 2015.

124 Ibid.

125 Ibid.


South Korea enacted the Intelligent Robot Development and Promotion Act 2008 and

has passed national plans for the robotics industry.

F. China

As part of its Made in China 2025 Policy, China has also been investing heavily in artificial

intelligence,126 as well as robotics. Jacob Turner suggests that ‘National AI policies are bound

up with countries’ current positions in the global order, as well as where they are hoping to be

in the future’.127 He comments upon the position of China accordingly:

For China, the issue is both one of economics and one of international politics. China’s efforts to

develop a world-leading home-grown industry in AI are connected to but not the same as its efforts to

influence the international discourse on AI; even if the first aim does not succeed, the second might.

Recent indications suggest that China may now seek a leading role in shaping worldwide AI regulation,

much as the USA did in numerous fields over the twentieth century.128

There is now competition over worldwide standards for artificial intelligence.

G. World Economic Forum

As Schwab and Davis from the World Economic Forum (WEF) have noted, there are a number

of pressing issues for stakeholders in the fields of artificial intelligence and robotics.129 There

is a need to establish ethical standards and normative expectations of autonomous processes

and machines. There must be clear legal frameworks in respect of the governance of robotics

and artificial intelligence. Schwab and Davis have called for ‘innovative governance

procedures and the potential creation of new types of committees, agencies or advisory

groups.’130 Furthermore, there will need to be conflict resolution in the fields of artificial

intelligence and robotics – particularly over the ownership, use, and exploitation of intellectual

126 Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, Boston and New York: Houghton Mifflin Harcourt, 2018.

127 Jacob Turner, Robot Rules: Regulating Artificial Intelligence, London: Palgrave Macmillan, 2019, 236.

128 Ibid., 236.

129 Klaus Schwab with Nicholas Davis, Shaping the Future of the Fourth Industrial Revolution, Geneva: World Economic Forum; and New York: Currency, 2018.

130 Ibid.


property. Schwab and Davis conclude: ‘AI and robotics will require collaborative governance

as issues involving conflict resolution, ethical standards, data regulation and policy formation

become priorities on the global scale.’131

 

131 Ibid.


Biographies

Dr Michael Guihot

In 2017, Dr Michael Guihot presented a paper titled ‘Nudging Robots: Innovative Solutions to

Regulate Artificial Intelligence’ at the We Robot Conference at Yale Law School. That paper

was published in Volume 20(2) Vanderbilt Journal of Entertainment and Technology Law.

Since then his research has been wholly on developments in Artificial Intelligence, Robots and

the Law. He is currently co-authoring a book by that name.

Dr Guihot teaches the new unit, Artificial Intelligence, Robots and the Law at QUT. He

runs a reading group on Technology, Regulation and Society and is a member of the Research

Committee and a sub-committee developing a more focused and dynamic research group.

His research investigates the intersection of new technology and law, including the

regulation of artificial intelligence, and the impact of new technologies on power and

governance. His current research investigates how changes in global power structures affect

private and public governance, and the impact of new technology on legal institutions. He has

published in international journals on the regulation of artificial intelligence, and also published

on manufacturers’ liability under the Competition and Consumer Act 2010 (Cth).

Before returning to teach at university in 2010, Guihot worked as a lawyer and senior

associate with national law firm Middletons Lawyers (now K&L Gates) in Sydney from 1999

to 2006. He then worked as in-house counsel in a global alcohol manufacturing company

specializing in the competition law, intellectual property and compliance aspects of the

business before becoming managing partner and commercial practice group head of a small

firm in Sydney for 3 years.

Dr Matthew Rimmer

Dr Matthew Rimmer is a Professor in Intellectual Property and Innovation Law at the Faculty

of Law, at the Queensland University of Technology (QUT). He is a leader of the QUT

Intellectual Property and Innovation Law research program, and a member of the QUT Digital

Media Research Centre (QUT DMRC), the QUT Australian Centre for Health Law Research

(QUT ACHLR), and the QUT International Law and Global Governance Research Program

(QUT IP IL). Rimmer has published widely on copyright law and information technology,

patent law and biotechnology, access to medicines, plain packaging of tobacco products,


intellectual property and climate change, and Indigenous Intellectual Property. He is currently

working on research on intellectual property, the creative industries, and 3D printing;

intellectual property and public health; and intellectual property and trade, looking at

the Trans-Pacific Partnership, the Regional Comprehensive Economic Partnership,

the Trans-Atlantic Trade and Investment Partnership, and the Trade in Services Agreement.

His work is archived at QUT ePrints, SSRN Abstracts, and Bepress Selected Works.

Dr Matthew Rimmer holds a BA (Hons) and a University Medal in literature (1995),

and an LLB (Hons) (1997) from the Australian National University. He received a PhD in law

from the University of New South Wales for his dissertation on The Pirate Bazaar: The Social

Life of Copyright Law (1998-2001). Dr Matthew Rimmer was a lecturer, senior lecturer, and

an associate professor at the ANU College of Law, and a research fellow and an associate

director of the Australian Centre for Intellectual Property in Agriculture (ACIPA) (2001 to

2015). He was an Australian Research Council Future Fellow, working on Intellectual Property

and Climate Change from 2011 to 2015. He was a member of the ANU Climate Change

Institute.

Rimmer is the author of Digital Copyright and the Consumer Revolution: Hands off my

iPod (Edward Elgar, 2007). With a focus on recent US copyright law, the book charts the

consumer rebellion against the Sonny Bono Copyright Term Extension Act 1998 (US) and

the Digital Millennium Copyright Act 1998 (US). Rimmer explores the significance of key

judicial rulings and considers legal controversies over new technologies, such as the iPod,

TiVo, Sony Playstation II, Google Book Search, and peer-to-peer networks. The book also

highlights cultural developments, such as the emergence of digital sampling and mash-ups, the

construction of the BBC Creative Archive, and the evolution of the Creative Commons.

Rimmer has also participated in a number of policy debates over Film Directors’ copyright,

the Australia-United States Free Trade Agreement 2004, the Copyright Amendment Act 2006

(Cth), the Anti-Counterfeiting Trade Agreement 2011, and the Trans-Pacific Partnership. He

has been an advocate for Fair IT Pricing in Australia.

Rimmer is the author of Intellectual Property and Biotechnology: Biological

Inventions (Edward Elgar, 2008). This book documents and evaluates the dramatic expansion

of intellectual property law to accommodate various forms of biotechnology from micro-

organisms, plants, and animals to human genes and stem cells. It makes a unique theoretical

contribution to the controversial public debate over the commercialisation of biological

inventions. Rimmer also edited the thematic issue of Law in Context, entitled Patent Law and

Biological Inventions (Federation Press, 2006). Rimmer was also a chief investigator in an


Australian Research Council Discovery Project, ‘Gene Patents In Australia: Options For

Reform’ (2003-2005), an Australian Research Council Linkage Grant, ‘The Protection of

Botanical Inventions’ (2003), and an Australian Research Council Discovery Project,

‘Promoting Plant Innovation in Australia’ (2009-2011). Rimmer has participated in inquiries

into plant breeders’ rights, gene patents, and access to genetic resources.

Rimmer is a co-editor of a collection on access to medicines entitled Incentives for

Global Public Health: Patent Law and Access to Essential Medicines (Cambridge University

Press, 2010). The work considers the intersection between international law, public law, and

intellectual property law, and highlights a number of new policy alternatives – such as medical

innovation prizes, the Health Impact Fund, patent pools, open source drug discovery, and the

philanthropic work of the (Red) Campaign, the Gates Foundation, and the Clinton Foundation.

Rimmer is also a co-editor of Intellectual Property and Emerging Technologies: The New

Biology (Edward Elgar, 2012).

Rimmer is a researcher and commentator on the topic of intellectual property, public

health, and tobacco control. He has undertaken research on trade mark law and the plain

packaging of tobacco products, and given evidence to an Australian parliamentary inquiry on

the topic. Rimmer has edited a special issue of the QUT Law Review on the topic, The Plain

Packaging of Tobacco Products (2017).

Rimmer is the author of a monograph, Intellectual Property and Climate Change:

Inventing Clean Technologies (Edward Elgar, September 2011). This book charts the patent

landscapes and legal conflicts emerging in a range of fields of innovation – including renewable

forms of energy, such as solar power, wind power, and geothermal energy; as well as biofuels,

green chemistry, green vehicles, energy efficiency, and smart grids. As well as reviewing key

international treaties, this book provides a detailed analysis of current trends in patent policy

and administration in key nation states, and offers clear recommendations for law reform. It

considers such options as technology transfer, compulsory licensing, public sector licensing,

and patent pools; and analyses the development of Climate Innovation Centres, the Eco-Patent

Commons, and environmental prizes, such as the L-Prize, the H-Prize, and the X-Prizes.

Rimmer is currently working on a manuscript, looking at green branding, trade mark law, and

environmental activism. He is also the editor of the collection, Intellectual Property and Clean

Energy: The Paris Agreement and Climate Justice (Springer, 2018).

Rimmer also has a research interest in intellectual property and traditional knowledge.

He has written about the misappropriation of Indigenous art, the right of resale, Indigenous

performers’ rights, authenticity marks, biopiracy, and population genetics. Rimmer is the editor


of the collection, Indigenous Intellectual Property: A Handbook of Contemporary

Research (Edward Elgar, 2015).

Rimmer is currently working as a Chief Investigator on an ARC Discovery Project on

‘Inventing The Future: Intellectual Property and 3D Printing’ (2017-2020). This project aims

to provide guidance for industry and policy-makers about intellectual property, three-

dimensional (3D) printing, and innovation policy. It will consider the evolution of 3D printing,

and examine its implications for the creative industries, branding and marketing, manufacturing

and robotics, clean technologies, health-care and the digital economy. The project will examine

how 3D printing disrupts copyright law, designs law, trade mark law, patent law and

confidential information. The project expects to provide practical advice about intellectual

property management and commercialisation, and boost Australia’s capacity in advanced

manufacturing and materials science. Along with Dinusha Mendis and Mark Lemley, Rimmer

is the editor of the collection, 3D Printing and Beyond: Intellectual Property and

Regulation (Edward Elgar, 2019).