The dark side of artificial intelligence: challenges for the legal system[1]

Dr W Gravett[2]

Robotics and artificial intelligence (AI) have the potential to transform lives and work practices, raise efficiency, savings and safety levels, and provide enhanced levels of services in the short to medium term. The current trend towards developing smart and autonomous machines, with the capacity to be trained and to make decisions independently, holds not only economic advantages but also raises a variety of concerns regarding their direct and indirect effects on society as a whole,[3] including challenges in ensuring privacy and autonomy, non-discrimination, due process and transparency in decision-making processes.

AI’s challenges to privacy and autonomy

We are already willing to wear or carry devices that provide much detail about our circumstances to databases.[4] Our mobile phones are capable of providing real-time spatial location data,[5] as well as retaining a secret record of every location that we visit.[6] We have embraced highly contextualised and automated directives in the travel context, eagerly (and sometimes blindly) accepting directions from Google Maps.[7] The capability of machines to invade human privacy will only increase.[8]

The more convenient an agent is, however, the more it needs to know about a person (preferences, timing, capacities, etc). This creates a trade-off — more help requires more intrusion. The record to date is that convenience overwhelms privacy — autonomy and independence will increasingly be sacrificed for convenience.[9]

New brain imaging techniques point to a future in which our thoughts will not be as private as they are now.[10] People could be scanned for one purpose, for example to see how advertising campaigns affect their brains, while they inadvertently generate information that bears on their racial biases, sexual orientation or other sexual preferences.

AI-enhanced photorealistic pictures and videos, so-called “deep-fake” technology,[11] are becoming pervasive. For example, Philip Wang, a software engineer at Uber, developed a website called ThisPersonDoesNotExist.com, which creates an endless stream of fake portraits. The algorithm that powers it is trained on an enormous dataset of real images, and then uses a neural network known as a generative adversarial network (or GAN) to fabricate new examples. In a Facebook post, Wang wrote:[12]

Each time you refresh the site, the network will generate a new facial image from scratch … Most people do not understand how good AIs will be at synthesizing images in the future.
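The mechanism can be illustrated in outline. The following is a minimal toy sketch of the generative adversarial idea described above, assuming the PyTorch library; the layer sizes, single training step and random stand-in data are illustrative assumptions, not the StyleGAN model behind ThisPersonDoesNotExist.com.

```python
# Toy sketch of a generative adversarial network (GAN): two networks trained
# against each other - a generator that fabricates images and a discriminator
# that tries to tell real from fake. Sizes and data below are placeholders.
import torch
import torch.nn as nn

IMG_DIM = 28 * 28   # toy "image" size
LATENT_DIM = 64     # random noise fed to the generator

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how likely an image is to be real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real images from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to produce images the discriminator accepts as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# A random batch stands in for real photographs here; in a real system the two
# networks are trained against each other over an enormous dataset of real images.
training_step(torch.rand(16, IMG_DIM) * 2 - 1)
```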

In February 2019, the creators of a revolutionary AI system that can write news stories and works of fiction — nicknamed “deep fakes for text” — took the unusual step of not releasing their research publicly, for fear of potential misuse. OpenAI, a non-profit research company backed by, among others, Elon Musk, stated that its AI model, called GPT2, is so good and the risk of malicious use so high, that it is deviating from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.[13] GPT2 is fed text — anything from a few words to a whole page — and is then asked to write the next few sentences based on its predictions of what should come next. GPT2 is capable of writing plausible passages that match what it is given in both style and subject.[14]
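Smaller versions of GPT2 were subsequently released publicly, and the “predict what comes next” behaviour described above can be reproduced with standard open-source tools. The sketch below assumes the Hugging Face transformers library and the released GPT2 weights; the prompt and generation settings are illustrative only.

```python
# A minimal sketch, assuming the publicly released GPT2 model via the
# Hugging Face transformers library; prompt and settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "It was a bright cold day in April, and the clocks were striking thirteen."
result = generator(prompt, max_length=60, num_return_sequences=1)

# The model continues the prompt with whatever words it predicts should come next,
# matching the style and subject of the input text.
print(result[0]["generated_text"])
```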

The ability to manipulate and generate realistic imagery at scale will have a huge effect on how modern societies think about evidence and trust. Such software may also be used for creating political propaganda and influence campaigns.[15] The courtroom is not immune to misleading evidence. Fake evidence will inevitably leak into the courtroom with the potential to dupe fact-finders.

Chesney and Citron predict a development stemming from deep-fake evidence — “immutable life logs” as an alibi service.[16] Because deep-fake technology will be able to portray people saying and doing things that they actually never did or said, alibis will become essential for digitally ensnared accused to prove their innocence in the courtroom. Hence, deep-fakes will create a heightened demand for proof of where a person was and what they were doing at all times.

The AI surveillance state

In the United States, both the federal and State governments have outsourced many regulatory and legal decisions to computation. Tax returns are too voluminous for IRS personnel to examine manually; “audit flags” are programmed to determine which returns should receive greater scrutiny or be rejected outright. Homeland Security officials are using big data and algorithms to determine which travellers pose a security risk and who can pass unmolested to their flights. So-called “predictive policing” deploys law enforcement resources before crimes are committed. And, once perpetrators are convicted, “evidence-based sentencing” may quantify punishment by using data and algorithms to adjust the length of prison sentences based on myriad factors.[17]

Privacy proponents will recoil upon learning that AI is also increasing the effectiveness of State surveillance techniques.[18] Before AI, cameras were useful only to the extent that someone either observed a live feed or reviewed recorded footage. That time has passed. With the assistance of AI, cameras can now navigate three dimensions and make sense of what they “see” — all without any human intervention or assistance. Moreover, AI-augmented cameras are beginning to operate beyond ordinary human capability — they can identify millions of faces and predict human behaviour.[19]

The advent of China’s social credit system (“SCS”) is a sign of what may eventuate: a society in which an individual’s rights and liberties are determined by the SCS. This is the Orwellian nightmare realised.[20] New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its Sharp Eyes program, Chinese law enforcement is merging video images, social media activity, online purchases, travel records and personal identity data into a “police cloud”.[21] This integrated database enables authorities to keep track of criminals, potential law-breakers and terrorists.[22]

Facial recognition technology is nothing new. We see it, for example, on the iPhone X with its face-scanning technology.[23] But, thus far, China is the world leader in using facial recognition technology as a surveillance tool. Under the Sharp Eyes program, China’s goal is to recognise all Chinese citizens within seconds of their faces appearing on a camera.[24] In May 2018, the Chinese government introduced a travel ban on people with poor “social credit”.[25]

In the world of technology, facial recognition has become a known commodity. Behaviour prediction, on the other hand, is a recent trend. In addition to recognising who you are, AI-augmented cameras will be “intelligent” enough to predict your behaviour. This technology already exists, and it is improving by the day.[26]

New AI software is being used in Japan to monitor the body language of shoppers for signs that they are planning to steal. This software, developed by Japanese company Vaak, differs from similar products that match faces to criminal records. Instead, VaakEye uses algorithms to analyse footage from security cameras to spot fidgeting, restlessness and other body language cues that could be suspicious and then alerts shop employees about potential thieves via an app.[27]

Using AI to apprehend potential shoplifters raises ethical questions. For example, even if the purpose of such software is to prevent theft, is it legal, or even moral, to prevent someone from entering a shop on the basis of an algorithmic prediction? To exacerbate these concerns, there is also the potential for AI to be used to fuel discrimination. A 2018 study by researchers from the Massachusetts Institute of Technology and Stanford University found that various commercial facial-analysis programs demonstrate skin-type and gender biases, depending on the types of data used.[28] Technologies that rely on algorithms, particularly in regard to human behaviour, have the potential for discrimination. After all, humans have to train the algorithms what or whom to treat as suspicious.

One way in which police arrest suspects is through arrest warrants, which, in most common law jurisdictions, are based on a “reasonable grounds” standard. If an AI-equipped camera identifies someone as a likely criminal, will that be enough to meet the reasonable grounds standard? If so, and assuming the technology assigns a percentage of criminality to an individual, how much will satisfy reasonable grounds — 90%, 70% or 50%? This, of course, also raises the question of whether it is even ethical, let alone lawful, to arrest a person before they commit a crime.[29]

What about the role of this technology as admissible evidence in the courtroom? Would it be too prejudicial to show the fact-finder that AI software determined that an accused is a criminal? What if, instead, prosecutors used the technology during a trial to buttress their arguments? In a closing address, for example, the prosecutor might argue: “Based on all the eye-witness testimony, along with the determination that the accused, considering his facial features, has an 80% likelihood of having committed the crime charged, you should find the accused guilty.”[30]

These types of arguments could be commonplace in the future. There will be a need for clarity from lawmakers and regulators regarding who will ultimately need to decide in what circumstances the use of this technology will be appropriate or desirable as a matter of public policy.[31]

Bias and algorithmic transparency

Developments in technology raise important policy, regulatory and ethical issues.[32] For example, how should we promote data access? How do we guard against biased or unfair data utilised in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about legal liability in cases in which algorithms cause harm?[33]

It must be remembered that technology is not necessarily neutral and objective. A software design may expressly, through its programming, reflect a preference for certain values over others.

AI systems can also be inadvertently programmed to have bias, because of the biases of the programmers or, with the development of self-learning algorithms, can actually learn to be biased from the data they are learning from.[34] In addition, AI systems find it more difficult to generalise findings from a narrower dataset, with minor differences from a training set potentially having a larger-than-intended impact on a prospective set of data, creating potential bias.[35]

A 2017 study demonstrated that AI can learn racist or sexist biases from the word associations in its training data, which, being sourced from the internet, reflected humanity’s own cultural and historical biases.[36]
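The kind of measurement behind such findings can be sketched in a few lines: words are represented as numeric vectors learned from internet text, and the relative similarity of those vectors reveals which concepts the model has learned to associate. The example below is a hedged illustration with tiny hand-made vectors standing in for real learned embeddings; the words and numbers are assumptions, not data from the study.

```python
# Sketch of word-association bias: cosine similarity between word vectors shows
# which concepts a model has learned to associate. The 3-dimensional vectors
# below are hypothetical placeholders for real learned embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.2, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.1, 0.8, 0.1]),
}

for occupation in ("engineer", "nurse"):
    bias = cosine(embeddings[occupation], embeddings["he"]) - \
           cosine(embeddings[occupation], embeddings["she"])
    # A positive score means the occupation sits closer to "he" than to "she"
    # in the learned vector space - an association absorbed from the training text.
    print(occupation, round(bias, 3))
```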

Algorithms — the set of instructions according to which computers carry out tasks — have become an integral part of everyday life, and they have immersed themselves in the law.[37] In the United States, judges in certain States use algorithms as part of the sentencing process to assess recidivism risk.[38] Many law enforcement agencies use algorithms to predict when and where crimes are likely to occur (so-called “predictive policing”).

Most algorithms are created with good intentions, but questions have started surfacing over algorithmic bias on employment search websites, in credit reporting bureaux, on social media websites and even in the criminal justice system, where sentencing and parole decisions appear to be biased against African Americans.[39] These issues are likely to become exacerbated as machine learning and predictive analytics become more sophisticated, particularly because with deep learning (which learns autonomously), algorithms can quickly reach a point where humans can often no longer explain or understand them.

Moreover, it is very difficult to challenge a computer’s decisions, because whoever owns the algorithms owns the trade secrets associated with them, and is neither going to reveal the source code nor likely to be willing even to discuss how the algorithm functions.[40] What justifies the algorithm from an economic viability perspective is its success or perceived success, which is an entirely different question from whether or not it operates in biased ways.[41]

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices.[42] Racial issues also arise in facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a database. As pointed out by Joy Buolamwini, a researcher at the MIT Media Lab:[43]

If your facial recognition data contains mostly Caucasian faces, that is what your program will learn to recognize.

Unless the databases have access to diverse data, these programs perform poorly when attempting to recognise African-American or Asian-American features. Many historical datasets reflect traditional values, which may or may not represent the desired preferences in a current system. Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decision-making.
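In outline, the matching step works by reducing each face to a numeric “embedding” and finding the closest entry in the reference database, which is why the composition of that database matters so much. The following is a minimal sketch under stated assumptions: the embed() function and the database contents are hypothetical placeholders, not any vendor’s actual system.

```python
# Sketch of database-driven face matching: a query face is embedded and compared
# against stored embeddings; the closest match wins. All names and values are
# hypothetical illustrations.
import numpy as np

def embed(face_pixels):
    # Placeholder for a trained face-embedding model (hypothetical).
    return face_pixels.mean(axis=0)

# Hypothetical reference database of stored face embeddings.
database = {
    "person_a": np.array([0.11, 0.52, 0.83]),
    "person_b": np.array([0.64, 0.20, 0.45]),
}

def identify(face_pixels):
    query = embed(face_pixels)
    # Smallest Euclidean distance wins. If the database under-represents a group,
    # its members tend to sit far from every stored face and are matched unreliably.
    return min(
        ((name, float(np.linalg.norm(query - ref))) for name, ref in database.items()),
        key=lambda pair: pair[1],
    )

print(identify(np.random.rand(4, 3)))
```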

As algorithms have become an established part of high-stakes projects, concerns have arisen that they are not adequately transparent to allow for accountability, especially if they are used as the basis for harmful or coercive decisions.[44] There is growing consensus among computer scientists that it would take aggressive research to cut through algorithmic opacity, particularly in machine learning, where opacity is at its densest.[45]

One of the major problems is that classic values of administrative procedure, such as due process, are not easily coded into software language. In the United States, many automated implementations of social welfare programs, ranging from State emergency assistance to the Affordable Care Act (“Obamacare”) exchanges, have resulted in erroneous denials of benefits, lengthy delays and troubling outcomes.[46]

Depending on how AI systems are set up, they can assist people to discriminate against individuals they do not like, or help to screen or to build lists of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. For these reasons, the European Union implemented the General Data Protection Regulation (GDPR) in May 2018.[47] The GDPR is designed to ensure the protection of personal data and to provide individuals with information on how their data is collected and used. All organisations within the European Union, and organisations that collect data related to EU citizens, must be GDPR-compliant.

Machine learning is the ability of a computer to modify its programming to account for new data and to adjust its operations accordingly. It uses computers to run predictive models that learn from existing data to forecast future behaviours, outcomes and trends.[48] Machine learning, therefore, is dependent on data. The more data it can access, the better it can learn. However, the quality of the data, the way the data is inputted into the system, and how the system is “trained” to analyse the data can all have dire effects on the validity, accuracy and usefulness of the information generated by the algorithm.
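A minimal sketch of this “learn from existing data to forecast outcomes” idea, assuming the scikit-learn library, appears below; the features, labels and model choice are illustrative assumptions, and the point is simply that whatever patterns (or errors) the training data contains are what the model reproduces.

```python
# Sketch of a predictive model trained on historical examples and used to
# forecast a new case. Features, labels and model choice are illustrative only.
from sklearn.linear_model import LogisticRegression

# Historical examples: each row is [feature_1, feature_2]; each label a past outcome.
X_train = [[0.2, 1.0], [0.4, 0.8], [0.9, 0.1], [0.8, 0.3]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)

# Forecast the outcome for a new, unseen case.
new_case = [[0.7, 0.2]]
print(model.predict(new_case))        # predicted class
print(model.predict_proba(new_case))  # predicted probability for each class
```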

In short, not only can an otherwise perfect algorithm fail to accomplish its set goals, but it may also prove affirmatively harmful.[49] For example, the algorithm employed by Google to answer user questions erroneously declared that Barack Obama, a Christian, was a Muslim.[50] The algorithm simply did what it was “trained” to do — it gathered information from the internet, “feeding” on websites that propagated false information. Its data pool was polluted, and the algorithm could not discern between “good” and “bad” data. This was also brought to light, for example, by the Microsoft chatbot, “Tay”, which learned to interact with humans via Twitter. Within 24 hours, the chatbot became racist, because internet trolls had bombarded it with mostly offensive and erroneous data in the form of inflammatory tweets, from which the chatbot had “learned”.[51]

Even if the data were accurate, the person “training” the AI could infuse their own biases into the system. This may have been a factor in the crime-prediction software that has led to the arrest of an unjustifiably high number of African-Americans and other minorities in the United States,[52] as well as in the sentencing tools that predict higher rates of recidivism for people with these racial profiles.[53]

Accordingly, the effective accuracy of an algorithm is dependent on both the programming and the data. This dictates a further, legally troubling conclusion. If there are doubts about the results of an algorithm, one can at least theoretically inspect and analyse the programming that constitutes the algorithm. However, given the sheer volume of data available on the internet, it may be impossible to adequately determine and inspect the data used by the algorithm.[54]

Bias and discrimination are serious issues facing AI. There have been a number of cases of unfair treatment linked to historic data, and steps need to be taken to make sure this does not become prevalent in AI. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. This will help protect consumers and build confidence in these systems as a whole.[55] More transparency is needed in how AI systems operate.[56]

The challenge of regulating AI

The question arises whether we have reached the point at which we need to devise a legislative instrument on robotics and AI.[57] The classic line of thinking is that legislation becomes necessary once a societal or technological change calls for an adequate legal framework. Once every home and business is equipped with an autonomous robot, society will change dramatically. People will work, collaborate, interact, live, and perhaps even fall in love with, highly sophisticated machines. We will need to consider humanity’s place in the face of these technologies.[58]

There are, broadly speaking, two schools of thought on the issue of the regulation of AI. The first is based on the premise that regulation is bad for innovation. Entrepreneurs in this camp do not want the field of AI to be defined too soon, and certainly not by non-technical people. Among their concerns are that bad policy creates bad technology, regulation stifles innovation, and regulation is premature because we do not yet have any clear sense of what we would be regulating.[59]

The other school of thought seeks to protect against potentially harmful creations that “poison the well” for other AI entrepreneurs. Subscribers to this school believe that national governments should act expeditiously to promote existing standards and guidelines or, where necessary, create new guidelines, to ensure a basic respect for the principle of “first, do no harm”.[60]

Innovations such as the internet and networked AI have enormous short-term benefits, along with long-term negative effects that could take decades to become recognisable. AI will drive a vast range of efficiency optimisations, but also enable hidden discrimination and arbitrary penalisation of individuals in areas such as insurance, job seeking and performance assessment.[61] Without significant changes in our political economy and governance regimes, AI is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions.[62] As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who do not or cannot.[63]

Rapid innovation in technology far exceeds the ability of the world’s domestic and international legal systems to keep pace.[64] The key for humanity in general, and lawyers specifically, will be to develop the positive aspects of the technology while managing its risks and challenges.[65] AI regulation will be a necessity, particularly in the areas of safety and errors, liability laws and social impact. Policy-makers will have to embrace the benefits that AI can bring, but at the same time be alert to the need to pre-empt the dramatic and potentially devastating effects of misusing AI.[66]

Countries should develop a data strategy that promotes innovation and consumer protection. Currently, there are no uniform standards in terms of data access, data sharing or data protection.[67] Almost all data is proprietary in nature and not shared very broadly with the research community, which limits innovation and system design. AI requires data to test and improve its learning capacity. Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of AI.[68]

Conclusion

AI may well be a revolution in human affairs and become the single most influential innovation in history.[69] How AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts reconciled, legal realities resolved, and how much transparency is required in AI and data analytic solutions.[70]

Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organisational routines. Exactly how these processes are executed needs to be better understood, because they will have substantial impact on the general public soon and for the foreseeable future.[71]



[1] Edited version of a paper presented at the 9th International Conference of the International Organization for Judicial Training, “Judicial training: a key to successful transformation of the judiciary”, Cape Town, South Africa, 2019; published in (2021) 33(5) JOB 47, updated 2021; full paper published in (2020) 7 Judicial Education and Training 13.

[2] Associate Professor, Department of Procedural Law in the Faculty of Law, University of Pretoria.

[3] European Parliament Committee on Legal Affairs, Report with recommendations to the Commission on civil law rules on robotics, Report 2015/2103(INL), 2017.

[4] B Sheppard, “Warming up to inscrutability: how technology could change our concept of law” (2018) 68 University of Toronto Law Journal 41.

[5] Y Chen and M Ahn (eds), Routledge handbook of information technology in government, Routledge, 2017, p 109.

[6] C Arthur, “iPhone keeps record of everywhere you go”, The Guardian, 20 April 2011 at www.theguardian.com/technology/2011/apr/20/iphone-tracking-prompts-privacy-fears, accessed 25 August 2021.

[7] Sheppard, above n 4.

[8] A Casey and A Niblett, “Self-driving laws” (2016) 66 University of Toronto Law Journal 429 at 438.

[9] K Alexandridis quoted in J Anderson and L Rainie, “Artificial intelligence and the future of humans”, Pew Research Center Internet and Technology, 10 December 2018 at www.pewinternet.org/2018/12/10/artificial-intelligence-and-the-future-of-humans/, accessed 25 August 2021.

[10] A Kolber, “Will there be a neurolaw revolution?” (2014) 89 Indiana Law Journal 807 at 836.

[11] The first use of deep-fake technology was to paste people’s faces onto target videos, often in order to create nonconsensual pornography. See J Vincent, “ThisPersonDoesNotExist.com uses AI to generate endless fake faces”, The Verge, 15 February 2019 at www.theverge.com/tldr/2019/2/15/18226005/ai-generated-fake-people-portraits-thispersondoesnotexist-stylegan, accessed 25 August 2021.

[12] ibid.

[13] A Hern, “New AI fake text generator may be too dangerous to release, say creators”, The Guardian, 15 February 2019 at www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction, accessed 25 August 2021.

[14] For example, fed the opening line of George Orwell’s Nineteen Eighty-Four — “It was a bright cold day in April, and the clocks were striking thirteen” — the system recognised the vaguely futuristic tone and the novelistic style, and continued with: “I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science.”: Hern, ibid.

[15] Vincent, above n 11.

[16] B Chesney and D Citron, “Deep fakes: a looming challenge for privacy, democracy, and national security” (2019) 107 California Law Review 1753.

[17] F Pasquale and G Cashwell, “Four futures of legal automation” (2015) 63 UCLA Law Review 30.

[18] D Rankin, “How artificial intelligence could change the law in three major ways”, The Journal of Law and Technology at Texas, 14 October 2018 at http://jolttx.com/2018/10/14/how-artificial-intelligence-could-change-the-law-in-three-major-ways/, accessed 25 August 2021.

[19] ibid.

[20] S Biggs as quoted in J Anderson and L Rainie, above n 9.

[21] D West and J Allen, “How artificial intelligence is transforming the world”, Brookings Report, 24 April 2018 at www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/, accessed 25 August 2021.

[22] ibid.

[23] Conceptually, the way in which it works is simple: the camera looks at a face, extracts distinguishing facial features (such as the size and width of the nose, for example) and then compares those features against a database of pictures (sometimes taken from driver’s licence photos).

[24] See, generally, S Denyer, “In China, facial recognition is sharp end of a drive for total surveillance”, The Sydney Morning Herald, 16 January 2018 at www.smh.com.au/world/in-china-facial-recognition-is-sharp-end-of-a-drive-for-total-surveillance-20180108-h0f3jb.html, accessed 25 August 2021.

[25] As reported by China’s National Public Credit Information Center, see The Guardian, “China bans 23m from buying travel tickets as part of ‘social credit’ system”, 2 March 2019 at www.theguardian.com/world/2019/mar/01/china-bans-23m-discredited-citizens-from-buying-travel-tickets-social-credit-system, accessed 25 August 2021.

[26] Rankin, above n 18.

[27] The company fed the algorithm 100,000 hours of surveillance data to train it to monitor everything from the facial expressions of shoppers to their movement and clothing. VaakEye was launched in 50 shops in Japan during March 2019, and the company plans to expand to 100,000 shops in Japan within three years. Proponents of systems such as this claim that they could help reduce global retail losses from shoplifting, which reached US$34 billion in 2017. See further, N Lewis, “Should AI be used to catch shoplifters?”, CNN Business, 18 April 2019 at https://edition.cnn.com/2019/04/18/business/ai-vaak-shoplifting/index.html, accessed 25 August 2021.

[28] Lewis, ibid.

[29] Rankin, above n 18.

[30] ibid.

[31] Lewis, above n 27.

[32] West and Allen, above n 21.

[33] O Osoba and W Welser IV, “The risks of artificial intelligence to security and the future of work”, RAND Corp, 2017.

[34] E Loh, “Medicine and the rise of the robots: a qualitative review of recent advances of artificial intelligence in health” (2018) 2 BMJ Leader 59 at 61.

[35] ibid.

[36] A Caliskan, J Bryson and A Narayanan, “Semantics derived automatically from language corpora contain human-like biases” (2017) 356 Science 183.

[37] L Millan “Artificial intelligence”, Canadian Lawyer Magazine, 3 April 2017 at www.canadianlawyermag.com/article/artificial-intelligence-3585, accessed 25 August 2021.

[38] Twenty-eight States and parts of seven more States use algorithms as risk assessment tools in the sentencing process: M Stevenson and J Doleac, “Algorithmic risk assessment in the hands of humans” at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3489440, accessed 25 August 2021.

[39] ibid.

[40] ibid.

[41] ibid.

[42] Executive Office of the President, Preparing for the future of artificial intelligence, National Science and Technology Council Committee on Technology, 2016, pp 30–31.

[43] A Cohen, “The digital activist taking human prejudice out of our machines”, Bloomberg Businessweek, 3 July 2017, p 80.

[44] Sheppard, above n 4, at 47.

[45] ibid at 48.

[46] F Pasquale and G Cashwell, above n 17. Likewise in Australia, the flawed robodebt program directed to recovering alleged debt from Centrelink recipients led to settlement of a class action in May 2021: see Gordon Legal statement at https://gordonlegal.com.au/robodebt-class-action/robodebt-settlement-faqs/#settleeight, accessed 25 August 2021.

[48] I Giuffrida, F Lederer and N Vermeys, “A legal perspective on the trials and tribulations of AI: how artificial intelligence, the internet of things, smart contracts, and other technologies will affect the law” (2018) 68 Case Western Reserve Law Review 747 at 753.

[49] ibid at 754.

[50] J Nicas, “Google has picked an answer for you — too bad it’s often wrong”, Wall Street Journal, 16 November 2017 at www.wsj.com/articles/googles-featured-answers-aim-to-distill-truthbut-often-get-it-wrong-1510847867, accessed 25 August 2021.

[51] D Victor, “Microsoft created a twitter bot to learn from users. It quickly became a racist jerk”, New York Times, 24 March 2016 at www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html, accessed 25 August 2021.

[52] C O’Neil, Weapons of math destruction, Crown Books, 2017, pp 85–87.

[53] J Angwin et al, “Machine bias”, ProPublica, 23 May 2016 at www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed 25 August 2021.

[54] Giuffrida, Lederer and Vermeys, above n 48, at 755.

[55] West and Allen, above n 21.

[56] ibid.

[57] On 16 February 2017, the European Parliament adopted a legislative initiative resolution in which it recommended a range of legislative and non-legislative initiatives in the field of robotics and AI to the European Commission: at www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html, accessed 25 August 2021.

[58] N Nevejans, European civil law rules on robotics, Report no PE 571.379, 2016, pp 6–7.

[59] C Piovesan, “Speaker’s corner: lawyers need to keep up with AI”, Law Times, 5 June 2017 at www.lawtimesnews.com/author/na/speakers-corner-lawyers-need-to-keep-up-with-ai-13408/, accessed 25 August 2021.

[60] ibid.

[61] A McLaughlin as quoted in J Anderson and L Rainie, above n 9.

[62] M Gorbis as quoted in J Anderson and L Rainie, above n 9.

[63] G Shannon as quoted in J Anderson and L Rainie, above n 9.

[64] C Rice, “Artificial Intelligence”, 6 January 2016 at www.claytonrice.com/artificial-intelligence, accessed 25 August 2021.

[65] A Botha, “Artificial intelligence II: the future of artificial intelligence”, Foresight For Development at www.foresightfordevelopment.org/featured/artificial-intelligence-ii, accessed 25 August 2021.

[66] ibid.

[67] West and Allen, above n 21.

[68] ibid.

[69] ibid.

[70] ibid.

[71] ibid.