Artificial intelligence and litigation — future possibilities[1]

D Farrands KC[2]

We are now in yet another one of humanity’s revolutions: artificial intelligence (AI). Its impact is and will be transformative across virtually all endeavours: medicine, finance, transport, insurance, manufacturing and the like. But how will it likely affect the legal profession, in particular the conduct of and outcomes in litigation? This article seeks to provide insight into that issue. The article first looks at the fundamental distinction between human intelligence and AI, critical to any in-depth analysis of AI’s likely impact on litigation work. The article concludes that by reason of the sheer complexity of superior court trial work, for the foreseeable future, at least in Western societies, human intelligence will remain the dominant problem-solving device to achieve “just” outcomes in disputes. However, around the globe, AI will have an increasingly important role to play in resolving the less complex court work, in tribunal decision-making, and in resolving small claims and private treaty disputes. It will also have an increasingly significant role to play in performing legal research analytics, discovery analysis, providing predictive outcome analysis, and in reducing the cost of litigation, with software products and services already well advanced. In the interests of just, efficient, timely and cost-effective dispute resolution, the legal profession should take all practicable steps to embrace the AI revolution. But the profession should also be extremely cautious to ensure that efficiency does not ultimately trump justice.


The Chief Justice of the Federal Court of Australia observed:[3]

To a degree, the future must remain unknown. Artificial intelligence and its effect on Courts, the profession and the law will change the landscape of life in ways we cannot predict.

AI is currently receiving great attention, including in areas such as the law.[4] It is part of the fascination with and implementation of the technology age, being an era of unparalleled interconnectedness and resulting efficiencies — within a global population of 7.78 billion people, there were, as at October 2019, 5.15 billion mobile phones, 4.48 billion internet users, 3.7 billion social media users, 2.5 billion email users, and in 2021 there are 2.8 billion Facebook users.[5] These are likely to all join up as the “internet of things” (often referred to as “IoT”) in the not too distant future.

In Australia, the Commonwealth Government’s national science agency, CSIRO, has developed an AI Action Plan, which includes establishing the National AI Centre and four AI and Digital Capability Centres. CSIRO is home to one of the world’s largest AI applied capabilities, with more than one thousand researchers.

This article outlines the possible impact of AI[6] on the future work of the key players in litigation — the courts, solicitors and barristers. It does not address the now well-accepted technologies facilitating filing of court documents electronically and general digital case management within courts.[7] The article addresses the question of whether AI might be (to use jargon) a “disrupter” of legal services or an “enabler” or something between the two, and if so in what areas. It is intended to assist with an understanding of AI by way of overview,[8] with particular emphasis on litigation. It is necessarily general in some respects as it attempts to predict the future in the very uncertain environment of software development. In this regard, the article is exploratory in nature and intended to provide food for thought about AI’s potential role in litigation.

Executive summary

Preliminary conclusions about the future are outlined below, which in turn may inform the possible future of litigation more generally.

  • There is a significant difference between the work of humans (called herein “human intelligence”) and the work of computers (called herein “artificial intelligence”), principally that human work involves intuitive scenario selection, “creativity”, “imagination” and “insight” — at present computers are a long way from being able to exercise discretion/judgment in this way, and quite possibly never will be. This kind of work cannot (currently) be “mapped” and therefore cannot be replicated. In contrast, computer algorithms are (merely), at least in the first instance, “predetermined” pathways to an outcome. AI might “learn” new pathways to a result by practising or exploring, but it is not exercising judgment in the way a human does.

  • There is no doubt that computers will continue to improve significantly as super-powerful computational devices, and for a very long while yet. Whether computer intelligence (AI) and human intelligence will ultimately converge is highly debatable.

  • Computers do not “reason”; however, there is now “machine learning” in which computers can improve algorithms in the sense that they can eliminate less successful pathways to achieve a stated task. A recent further development is called “legal analytics”, which is an advanced form of computer searching and analysis, where inter alia computers can find or extract comparative legal arguments in similar cases — but they cannot (yet at least) develop or apply legal reasoning.

  • It is true that AI can identify new pathways to achieve tasks, and even solutions not previously identified, and may therefore be said to have “invented” things.

  • Computers cannot (at present) make “ethical” judgments or set themselves goals or tasks. These appear to be insurmountable barriers to the conformance of AI with human intelligence.

  • Laws use normative standards as a means of reconciling the particular justice of a case with the overall societal acceptance of court decisions. Such standards include the “reasonable person” and “objective” interpretation of contractual provisions. Computers are not able to determine normative standards over time.

  • Courts use closed datasets (admitted evidence) together with open datasets (a judge’s or jury’s experience and understanding of societal mythologies[9] about values such as trust, honesty and fairness). In contrast, a computer can only ever apply the relevant algorithm(s) to a closed dataset (recognising that the dataset may still be immense).

  • There is a large area of work that computers can do in industry and in legal services — examples are given below, including “natural language” work and image processing.

  • It would seem that the work of the superior courts will remain beyond the reach of computers for the foreseeable future unless there is significant structural change in the way complex controversies are to be quelled (such as randomly selected reviews by judicial officers of AI generated judgments, as a check on whether AI is appropriately quelling disputes).

  • It is possible that interlocutory disputes in superior courts might one day be dealt with by computers, such as whether security for costs should be awarded.

  • Because the work may be less complex, and computers are becoming more powerful, it seems more likely that there will be increasing emphasis on tribunal work being done by computers (eg, in consumer disputes, and in refugee, citizenship and the like matters). High-volume low-value work might be fertile ground for AI, but it must be remembered that individual parties will still have a strong sense of justice, and any system using AI must always be designed with that in mind, lest the use of AI be viewed negatively, diminishing the authority of the relevant system.

  • It is likely that private dispute resolution mechanisms, effected through algorithms, will continue to play a significant role in quelling small-scale disputes (eBay is a good example of this). This is a significant positive development in the goal of increasing access to justice, keeping costs down and keeping dispute resolution timely.

  • Routine work of solicitors will continue to be significantly aided by AI, particularly in contract review, due diligence and discovery. However, advisory work in litigation cannot currently be displaced because it requires the application of legal expertise to particular circumstances and advice on the risk/reward dynamics of commerce — something a computer could never adequately assess because such dynamics are, inter alia, subjective, not objective.

  • The court work of barristers is likely to remain substantially unaffected by AI except as regards reduced workloads in tribunals. There may be significant work in the area of the evidentiary admissibility of output from computer algorithms (eg, whether it can be relied upon, whether the assumptions are transparent and valid, and what weight it should be given) — this might in fact be a significant piece of any particular litigation. There may, however, be a role for barristers in dealing with the manner in which “virtual” tribunals deal with disputes. The Bar should take an active role in developments in AI for that reason alone. In other respects, for so long as there are hearings, the social processes of advocacy cannot be mimicked by computers.

  • AI can never eliminate one of the most powerful aspects of our current litigation system: that the result is determined dialectically — that is, through the endeavour of competing counsel and instructing solicitors, regulated by the court, as to the weight of evidence, the relevant legal principles and their application. In effect, there are multiple brains working on the one problem, in an environment specifically designed to “test” hypotheses, facts, analysis/reasoning and conclusions in real time. These super-computers (brains) are the most sophisticated and dynamic organs known in the universe. AI may, however, be able to help participants in litigation formulate and test relevant hypotheses.

  • If AI is to play an important role in litigation (eg, tribunal work) then the owner of that AI is likely to be government. It will need expertise and resources. At lower levels of dispute resolution (eg, small claims), the owners of the AI systems are likely to be corporations (such as eBay).

  • The future for AI is bright, but indeterminate. It is likely to affect commerce in material ways and give rise to significant dilemmas in other areas — for example, in relation to weaponry, defence and the policing of populations through State monitoring. There will be a significant need for protocols and governance to help society navigate the impact of AI over time.

  • Finally, the extent of AI’s involvement in litigation will always be a function of the level of complexity of the legal system in question. In a robust democratic society, for example, with unique notions of justice, and the need for frequent checks on the exercise of power, AI should always have a limited role. But for other societies, where the exercise of power is largely unconstrained or is directed towards State control, and where there may be a view that humans are fallible but technology is less so, determinations of disputes by AI may well be much more prevalent even if dissatisfying to participants. The legal system “design” issue is whether to await AI to “catch up” so it can perform more complex tasks or to “dumb down” the litigation so that AI can deal with it. There is, of course, a middle ground.

Intelligence — the nature of work — tasks

It is impossible to address the future of AI in the legal industry or elsewhere without first discussing two concepts in detail — “intelligence” and “work”. These are then applied to the relevant tasks under consideration, each discussed in turn.

For the purposes of this article, and to assist the discussion, intelligence has been defined in these terms: the capability to perform work to achieve the relevant task.[10] To understand and apply this definition, it is necessary to be clear on what “work” is within the above definition.[11] “Work” for these purposes is the determination of a pathway to achieve the relevant task. Determining the pathway may involve abstract reasoning, the application of logic, testing, problem solving, induction, deduction, the making of inferences, and the like.

Apart from “work” being a place, it can be said for present purposes that there are essentially two types of work (which are often conflated):


1. The work involved in performing a task that requires the exercise of discretion or judgment, in order to complete the task.[12]


2. The work involved in performing a task where no exercise of discretion or judgment is needed. Rather, the pathway to achieve the task has been predetermined and cannot be deviated from because it has been set so as to achieve the task in a particular way. In this kind of work, new pathways may be determined over time (by the computer), but there is still no exercise of discretion in the sense contemplated within (1).

The first type of work is in broad terms “human” work, and the second is in broad terms “computer (AI)” work. These two types of work are explored in more detail below.

Human work/intelligence — the exercise of discretion or judgment

In the first type of work, there may be many ways to achieve a task, and it is the selection of (or judgment associated with) the pathway to achieve the task that is “the work”. People get paid to do this kind of work — that is, to exercise discretion or judgment. These types of tasks may be routine or non-routine;[13] however, even if routine, they may still require the exercise of some discretion. The task might also be “reproductive” in nature — requiring a person to perform the task using identical or similar solutions or procedures used in the past; alternatively, it might be “productive” — requiring people to invent new ways of solving the problem or performing the task.[14]

There are discrete categories of human work, for example:


1. Conscious decision-making as to the pathway, permitting articulated reasoning. For example, someone might say “I want to do it this way for the following reasons” or “I have done it this way for the following reasons”. The articulated pathway is known as the “reasons”.


2. Unconscious rational decision-making as to the pathway to use, which may involve a guess as to a pathway which has not before been recognised or known. This involves the use of “intuition”, “creativity” or “imagination” or perhaps “insight”. Here the pathway is unarticulated/unexpressed. There are no stated reasons. The pathway, if exposed, might be rational. But a person might say, “here is the task or result you wanted, but I can’t tell you how I got there”.


3. Unconscious irrational decision-making as to the pathway to use. This might include “guessing” or using “lateral thinking” or using “insight” in some unconscious process. Again, there are no stated reasons. A person might say, “I tried a few alternatives and this one seems to work even though it seems illogical or counterintuitive to me”.

The “creative” work[15] referred to in (2) and (3) can be further broken down into several subcategories. First, there is the creative work of choosing the relevant pathway within a given closed universe of data. For example, there may be only three known ways to achieve a task, some efficient and some inefficient. The competitive selection among scenarios is what is called decision making.[16] The “work” of the human, the creative part, is to select which one is the most efficient. Why one path has been chosen over another may be unstated. Secondly, there is the creative work of choosing the relevant pathway within an open universe of data — that is, where not all the data is available to the person doing the task. Here, the task involves uncertainty and therefore judgment and inference. In other words, because discrete scenarios may not be able to be built, the pathway to the task or outcome is not entirely predictable and therefore the person performing the task must exercise discretion or judgment with uncertainty, often relying on inference(s). Again, the person may not be able to state why the task was done a particular way.

Within all these categories of work, there is a further dimension. The open universe of data may be small or large. For example, a secretary may not have all the data to perform a task but is capable of obtaining it within the short timeframe given for the task. In another context, however, the task might involve large bodies of disparate data requiring significant judgment where all the data is not known — generally a far more complex task. For example, the CEO brings experience and wisdom to long-range tasks within a large organisation. The CEO might be tasked to place the organisation’s business in a predicted optimal position given a fast-moving context, with an outlook of five years. The task being undertaken is to predict what business model will best optimise shareholder returns over time — this is a high-complexity task involving a large amount of uncertainty, including “scenario uncertainty”, and therefore multiple complex pathways.[17] Another example is the intuitive judgment in long-term strategic business decisions — for example, whether to put a computer within a phone (as Apple did, with phenomenal success) or not do so (as Blackberry chose, with dire consequences). It is difficult to see AI (or quasi-AI) performing this type of work or judgment. The work involves high orders of complexity and “intuitive”, “creative” or “imaginative” judgment, and, in relevant cases, an open dataset.

As discussed below, category (1) above gives AI little difficulty in the sense that it can be mimicked or replicated and possibly enhanced. However, the unconscious processes in (2) and/or (3) may or may not be capable of ever being mapped or reproduced in the future. As at today, such judgments remain opaque and cannot be deconstructed into elements or articulated pathways of reasoning. Categories (2) and (3) therefore currently remain the domain of the human brain, not a computer. A computer cannot look “outside” the algorithm or the dataset it has been given. A computer cannot “create” scenarios, or cross over between scenarios. A computer does not know how to guess. It can be programmed to randomly “mine” data for particular outcomes, be asked to compare things, and to analyse data for specific results. It can be programmed to derive outcomes from other outcomes. But none of that is “guessing” (or hypothesising). And for all of the categories, the longer the lead time for completion of the task, the higher the likelihood the task will be complex, because the dataset is not closed for long periods. Selecting the preferred pathway from multiple scenarios in these circumstances is likely to remain within human intelligence alone.

These concepts are obviously relevant for litigation, where the giving of “reasons” is a cornerstone of the justice system.[18] Articulated reasons can be tested and therefore the judgment in question reassessed as a check. This process aids the concept of a “just” system. However, human intelligence is not only based in reason. As Kirby J has observed writing on AI, it is locked into cultural, linguistic and other prisms.[19]

Artificial work/intelligence — no exercise of discretion or judgment

In the second type of work, AI work, there is only one way to achieve a task and that way has been pre-determined, or pre-determined by incremental adjustments over time. It follows that essentially there is no discretion to perform it another way. In this type of work, the “recognition points” along the way, determined within the computer program or algorithm in question, tell the computer the journey it must travel on or towards to achieve the result. The journey is predetermined, albeit it might have multiple pathways depending on the recognition points within the algorithm in question.

In the second type, the processes involve no elements of “intuition”, “creativity”, “imagination” or “discretion” — they are purely mechanical. This is generally the work of a robot or a computer. It only knows those pathways to achieve a task which have been predetermined for it by a program (a set of instructions on what to do if certain things happen). This second type of work can be identified in this way: if the final reasons are, at the outset of the task, known, and can be articulated for the making of a decision, no “decision” has in fact been made. Rather, the task has been to calculate an inevitable outcome, as a computer does.[20]
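By way of illustration only, this second type of work can be sketched in a few lines of code. The rules, names and thresholds below are entirely hypothetical; the point is simply that every “recognition point” and every outcome is fixed in advance, so no discretion is ever exercised:

```python
# A minimal sketch of the second type of work: the pathway is
# predetermined, and the program merely follows fixed "recognition
# points" to an inevitable outcome. All rules here are hypothetical.

def assess_claim(amount: float, filed_in_time: bool) -> str:
    if not filed_in_time:        # recognition point 1
        return "reject: out of time"
    if amount <= 10_000:         # recognition point 2
        return "route: small claims"
    return "route: general division"

print(assess_claim(5_000, True))   # → route: small claims
```

Given the same inputs, the program can only ever produce the same output; articulating its “reasons” is trivial because they were known before the task began.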

Quasi-human intelligence performed by AI

It might be said that there is some middle ground between what is described as “human” intelligence and “AI” because there is some middle ground in the type of work performed.

Take for example a computer program that has significant “predictive powers” — that is, a program that seeks to determine or predict the future even though the dataset is limited. A good example is where a computer identifies trends in given data, and then extrapolates those trends into the future — weather prediction, for instance. Here, it might be said that the computer is exercising “discretion” or “judgment” as to the shape of the data (or world in question) beyond the dataset. It might also be said that this form of AI is in fact “human intelligence”, albeit in a limited or closed sense.[21]
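Trend extrapolation of this kind can be illustrated with a short sketch (the data points are invented for the purpose): the program fits a straight line to observed values and then “predicts” a value beyond the dataset:

```python
# A toy illustration of "predictive" AI: fit a least-squares line to
# observed data and extrapolate the trend beyond the dataset.
# The observations are invented for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx          # slope and intercept

xs, ys = [1, 2, 3, 4], [2.0, 4.1, 5.9, 8.0]   # observed trend
m, b = fit_line(xs, ys)
print(round(m * 5 + b, 2))                 # "prediction" at x = 5
```

The program has no view about whether the trend will in fact continue; it mechanically assumes that it will, which is precisely why this is only quasi-judgment.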

Another type of “quasi-human intelligence” might be described as the computer’s ability to “learn” (machine learn) from a dataset. In this kind of algorithm, the computer looks for the ways (or alternate scenarios) in which a particular journey can be achieved to produce a result, essentially by trial and error. Because of the incredible speeds at which the computer is able to operate and calculate, and test pathways/scenarios, the computer gives the impression of making “judgments” about optimal or winning pathways — but it is not doing that. It is still working the algorithm given to it, and perhaps modifying it if it finds a better pathway, determined via its exploration of the patterns within the dataset. This “machine learning” is dealt with in further detail below.
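The trial-and-error process just described can also be sketched simply (the pathways and their costs are invented): the program repeatedly samples candidate pathways and keeps the best found so far, giving the appearance of a “judgment” about which path is optimal:

```python
import random

# A toy "trial and error" search: candidate pathways are tried at
# random and the cheapest found so far is kept. The pathways and
# their costs are invented for illustration.

def explore(cost, candidates, trials=1000, seed=0):
    rng = random.Random(seed)      # fixed seed: repeatable runs
    best = None
    for _ in range(trials):
        c = rng.choice(candidates)             # try a pathway
        if best is None or cost(c) < cost(best):
            best = c                           # keep the better one
    return best

paths = ["A", "B", "C"]
costs = {"A": 3.0, "B": 1.0, "C": 2.0}
print(explore(costs.get, paths))   # settles on the cheapest pathway
```

At no point does the program weigh considerations; it simply works the algorithm it was given, exactly as described above.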

Yet another type of “quasi-human intelligence” might be described as an algorithm’s ability to create another algorithm. This second algorithm might be impenetrable to a human (ie, not able to be deconstructed or mapped) and might be said, in that sense, to have been “created” by the first algorithm.[22]

Another type of middle ground is the possible move to a computer “reading” as opposed to merely extracting semantic information from a database. Computers cannot yet read like a human does, but they can extract semantic information that may be highly useful in a particular context.[23]

Summary — human vs artificial intelligence

In the current environment, and for the foreseeable future at least, it is possible to derive at least three discrete categories of “intelligence”:


1. Human intelligence: which requires discretion or judgment (often across potentially different time spans) because there are multiple scenarios to choose from, and there is uncertainty (and the greater the uncertainty, the greater the complexity involved), where some of the reasoning giving rise to the exercise of discretion or judgment may not be able to be exposed because it is intuitive or creative in nature. This work is in addition to purely mathematical work where pathways are already known (such as calculating 2+2=4). “Human intelligence” might therefore be described as “broad” intelligence.


2. Artificial intelligence: which requires no discretion or judgment because there is essentially a universe, or near universe, of certain data and the “reasoning” is known from the outset (such as a computer program). AI might therefore be described as “narrow” intelligence. Further, it might be said that below “narrow” intelligence is “augmented” intelligence. An example of AI (or narrow intelligence) is IBM’s Deep Blue chess computer, which beat Garry Kasparov in 1997. A broader, but still narrow, intelligence is the Deep Q-Network AI system of Google DeepMind, which accomplishes slightly broader goals (it can play dozens of different vintage Atari computer games at human-level competency).[24]


3. Quasi-human intelligence: which requires the computer to predict things, or determine optimal pathways, or to develop its own new algorithms to achieve optimal results, therefore giving the impression (but not the actuality) of judgment or discretion. “Quasi-human” intelligence may be described as “mid-range” intelligence.

Can artificial intelligence become human intelligence?

It is widely predicted that human and artificial intelligence might eventually converge. This has been contemplated for some time on the basis that the human brain is (merely) an organic algorithm.[25] If this occurred, perhaps all work (and therefore tasks) humans can do could also be done by computers — including that done by courts, solicitors and counsel. This section explores the possibility of convergence.

The answer to the potential for convergence lies in, at the very least, four main factors.[26]

The first factor is the incredible increase in computational power. It is said that the human brain can be reduced to a series of processes that are chemical and electrical. If this is correct, there exists the possibility that we become able to expose the processes within the brain, and therefore the reasoning currently sitting under the “intuitive” or “creative” judgments made by humans.[27] As predicted by Moore’s Law,[28] the fourth-generation iPad now has more computing power than the most powerful super-computer in the world some 30 years ago, the Cray-2.[29] We are, apparently, nowhere near the limit of computing power. It has been estimated that the limit on how much computing a clump of matter can do is 33 orders of magnitude (10³³ times) beyond today’s state of the art; even if computer power doubled every couple of years, it would take over two centuries until that frontier was reached.[30] It is possible that in the foreseeable future, computing power may extend significantly into the mental processing power of the human brain. This, coupled with “machine learning” (see below), may see computing power reach capabilities once thought impossible (consider the changes arising from the introduction of the iPhone alone). It has been estimated that the computational capacity of the human brain is roughly the power of, what is today, an optimised $1,000 computer.[31] More recently, it has been estimated that by 2020 the average desktop computer would have the same processing power as the human brain, with approximately 10¹⁶ calculations possible per second.[32]
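The arithmetic behind the two-century estimate can be checked directly: 33 orders of magnitude corresponds to roughly 110 doublings, and at one doubling every two years that is over 200 years:

```python
import math

# Check: a 10**33-fold gap, closed by doubling every two years,
# takes over two centuries to bridge.
doublings = math.log2(10 ** 33)    # ≈ 109.6 doublings needed
years = doublings * 2              # two years per doubling
print(round(doublings, 1), round(years))   # ≈ 109.6, ≈ 219
```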

Associated with this area of exponential growth in computing power is the second main factor — the development of so-called “machine learning”, using new neural network techniques applied to vast amounts of data. Machine learning is computation using algorithms that improve through experience. The “learning” occurs because each relevant neuron updates its state at regular time steps by averaging together the inputs from all connected neurons, weighting them, optionally adding a constant, and then applying what is known as an “activation function” to the result to compute the next state.[33] In this way, the machine “forward-predicts” the next better step. Machine learning (or deep learning) is responsible for some incredible results: from a computer giving an accurate caption of a photo even though it does not know what the photo represents;[34] to mastering dozens of computer games without instructions;[35] to passing Tokyo university entrance exams (which included maths, language and general knowledge questions); to beating the best Go player (Lee Sedol) in the world in 2016.[36] However, deep learning cannot (yet) do something that is fundamentally important — explain its reasoning and findings.[37]
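The neuron update just described can be sketched in a few lines. This is a generic, simplified formulation (a weighted sum of the inputs, plus a constant, passed through an activation function); the weights and inputs below are invented:

```python
import math

# A single neuron update: weight the inputs from connected neurons,
# add a constant (the "bias"), then apply an activation function
# (here the sigmoid). Inputs and weights are invented.

def neuron_step(inputs, weights, bias=0.0):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

print(round(neuron_step([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.1), 3))
```

Stacking many such neurons in layers, and adjusting the weights to reduce error on known examples, is what allows the machine to “forward-predict” the next better step.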

The third main factor is the incredible reduction in the cost and size of memory capacity for computers. Memory capacity has improved dramatically over time, without requiring any software changes. Hard drives have reduced in cost by about 100 million times, and the faster memories useful for computation rather than mere storage have become 10 trillion times cheaper.[38] The human brain stores around 10 gigabytes electrically and around 100 terabytes chemically/biologically. The world’s best computers can now out-remember any biological system — for a few thousand dollars.[39] Overall, the cost of computation halves every couple of years, with the effect that over the last century or so, the cost of computing has dropped about one million million million times (10¹⁸).[40] This fact alone challenges the use of labour for certain tasks. Computers are no longer building-sized; they are in our pockets or on our wrists.

The fourth main factor is the amount of available data, which is growing exponentially, and whose collection is becoming more systematic and collaborative across certain professions, industry and government. The insurance industry, for example, has huge collaborative networks and systems designed to capture large volumes of data that can be mined by AI, for the benefit of the participants in the networks/systems. It is important to note, however, in the context of the legal industry specifically, that sophisticated AI algorithms and learning alone cannot provide valuable solutions; there has to be sufficient data upon which the algorithms can be put to analytical work.[41] It is true that sophisticated algorithms can sometimes overcome limited data; but even if there is a significant body of data available for analysis, this must be viewed cautiously because it may have inherent biases. For example, negative data is almost never published.[42] Further, commercial repositories of legal texts may be big enough to be useful for statistically learning contextual information about legal terms and phrases that can then be useful semantic information for other machine-learning applications. However, generally, law does not seem to present opportunities for big data.

In some industries, significant data-gathering or consolidation occurs, permitting better AI analysis. Examples are the insurance, banking and finance, and medical industries. This gathering and co-operation between entities raises significant legal issues regarding privacy and data integrity. Arguably, it is the capacity of a particular industry or profession to gather data in large amounts that will determine its ability to exploit AI in the foreseeable future. The computing power will come — but aggregation of data in certain areas may not. We know already that aggregation of data in, say, the medical fields is progressing well, but aggregation of data in litigation is more problematic given its specific fact-dependent nature.

Where is AI already well established?

There are several well-recognised categories of tasks where AI is already well established.

The first category is image recognition or so-called “computer vision” — the ability of the computer to know what it is looking at. AI is being used, for example, to sort vegetables by recognising their sizes and varieties. It is used in face recognition, for example, to access the new iPhone X. In China, AI is used extensively for face recognition. For example, an insurance company there,[43] with a market cap of $120 billion, is using AI to provide online loans quickly, verifying more than 300 million faces in various applications, complementing other cognitive AI capabilities including voice recognition.[44] That company alone has some 110 data scientists. Further, in China, where there are tens of millions of cameras, face recognition is used to penalise people for traffic infringements.

Another well-established category is the use of “natural language” algorithms. Applications here include analysing whether customers are happy or discontented when making inquiries of companies, filtering inquiries before a caller has to speak to a real person (such as when you call Telstra), and even attempting to determine (in the United States) whether 911 callers are witnesses to crimes or the criminals themselves (calling as “witnesses”). Natural language AI is also used in Siri and Google Home applications when you want quick answers from your device. The AI can get to know a person and does not rely on independent, fully premised questions; it can take premises from the preceding question, just as a human does.

Further examples of the use of AI include:

  • in robotics — an example is the driverless car or truck, discussed further below

  • mining data for better solutions — an example is at Airbus, where the company has used AI to identify patterns in production problems, with AI then matching about 79% of the production disruptions to solutions used previously, in near real time, providing recommendations to those on the workshop floor[45]

  • in the mining sector — where AI is used to control driverless trucks, using sensors and calculating optimal pathways for extraction of ore.[46] Further, image recognition AI is used to determine whether, after a mine blast, there should be another blast or whether extraction by trucks is preferable

  • by companies to determine the extent of “email rerouting”[47] within the organisation, and what to do about this rerouting — this can save significant time within the organisation, lowering costs, increasing efficiency and making the organisation more competitive

  • in retail — AI is used by retailers to suggest “the next purchase” to consumers based on previous buying patterns. Online, this is now virtually ubiquitous

  • in game playing — probably the most famous example of “narrow” task AI,[48] such as in chess or Go, where AI has managed to beat even the best human players in the world.

With all these examples, it would appear that AI already has broad application across industry. It should be noted, however, that one of the biggest AI providers in the world, IBM, with its super-computer AI product, Watson, lists the following industry solutions:[49]

  • customer engagement

  • education

  • financial services

  • health

  • IoT — automotive, electronic, energy and utilities, insurance, manufacturing, retail

  • media

  • talent

  • work.

It is notable that “legal services” is not listed. However, ROSS, an AI legal research tool, has been built on Watson, IBM’s cognitive computing platform. ROSS enables law firms to slash the time spent on research while improving results (discussed further below).

Where is AI not established/desirable?

So far this article has focused on the work a human or computer might perform in order to achieve a task. However, even in circumstances where yet more work and tasks can be performed by AI (as it comes closer to “human intelligence”), the question arises whether there are some types of work that AI cannot or for that matter should not do.

Goal-setting work

It is true that AI has goals within a task, in the sense that it will be programmed to produce a particular result. The destination of a rocket, for example, controlled by AI, would be a goal set within AI.

However, AI cannot, at least for the foreseeable future, set the anterior goal — namely, whether the AI goal should be set at all; whether the rocket should be made.[50] That is a discrete task in itself. AI cannot set a “purpose” or act or do work so as to turn an intention into reality;[51] AI only has the intention a human gives it. Determining intention, and turning it into reality, is fundamental to all work involving human intelligence. It seems hard to identify a time when AI will be able to be self-deterministic as to its goals, except by (paradoxically) deliberately engineered random outcomes.

Ethical work

AI can already perform calculations, predict population outcomes and win games. These are all amoral tasks — they do not involve considerations of moral or societal judgments. But there is always the anterior question, which itself is work, of whether a particular task should or should not in fact be performed at all. For example, there may be a judgment as to the societal impact of the task in question. Take, for example, the judgment as to whether a nuclear bomb should be developed; it is not purely a matter of physics.

Yet another example is the study of comparative legal rules. AI can tell you whether particular conduct is or is not legal in a particular country, because it can search for that. But it cannot tell you whether the conduct should or should not be legal in a particular country. It is true that an algorithm could be developed to collate all of the laws relating to a particular subject matter (say the care of dogs) around the world, to summarise them, to compare them to the current laws in the country in question, and to make a recommendation in relation to those laws. In that sense, the algorithm might give guidance on a “moral question” within the dataset (eg whether to ban, or have destroyed, a particular dog). This is where using statutory network diagrams for comparing similarly purposed laws across jurisdictions could come into play.[52] But the algorithm could not exercise the judgment of whether the particular laws overseas should be incorporated into domestic law.

Recent developments relating to the legal profession

The current computer programs developed to assist the legal industry can answer legal questions in a superficial sense, but the underlying AI cannot explain the answers. However, computational models of legal reasoning are currently being developed, with such models in some cases being able to generate arguments for and against particular outcomes in problems. In particular, the development of AI to assist with legal arguments and reasoning and to predict litigation outcomes is now a well-developed field, and it continues to develop rapidly.[53] These computational models may also attempt to break down complex human intellectual tasks, such as estimating settlement values in disputes, or analysing offer and acceptance problems, to provide insight to legal practitioners.[54] The answers given are not philosophical but scientific in nature, in that the computer programs evaluate tasks and outcomes according to data.

While this area of AI has made progress, there is currently a bottleneck in its further contribution to legal practice. To date, the substantive knowledge on legal matters deployed by currently developed computational models has been extracted manually from legal sources (cases, statutes, contracts and other texts). That is, currently, legal practitioners have to read the legal texts and provide their content (in a form the models can apply). The inability to connect the computational models (for legal reasoning) directly and automatically to legal texts has limited the ability of programmers to apply programs in real-world legal information retrieval, prediction and decision-making.[55]

In broad overview, these developments in legal AI, involving question-and-answering techniques, knowledge extraction from text and “argument mining”, are termed “text analytics” or “legal analytics”. Such analytic techniques might in future overcome the bottleneck if the relevant computational program can be developed so as not to rely solely on manual techniques (by humans) to input what legal texts mean in ways programs can use, but rather so that knowledge can be input automatically. If this occurred, AI could, potentially, link these text-analysis tools to computational legal reasoning and legal analysis algorithms to produce a wholly AI-derived legal solution. This has not happened yet; however, the amount of work being done in this area should not be underestimated — legal analytics is a well-progressed field.[56]

According to some, the legal industry is at the “cusp” of what is shaping up to be an industry-changing revolution.[57] The new analytics is said to enable a legal practitioner to “mine” massive quantities of data that in earlier times would simply have been impossible,[58] aiding, among other things, predictive analytics. According to LexisNexis:[59]

Legal analytics gives litigators an advantage over opposing counsel by providing data-driven insights into how judges, firms and parties have behaved in similar cases in the past, and how they might behave in similar cases in the future. Analytics can accurately estimate variables like time to trial, the potential value of a case and likely outcomes. The ability to have a bird’s eye view of not just one case, but thousands in a jurisdiction, can shed light on when and how to litigate the issues — informing potential litigant strategies and maximising chances of success.

However, this kind of analysis might be better characterised as risk-assessment analysis, not strict legal analysis of relevant issues in a proceeding. In this sense, so-called “legal analytics”[60] is more of a business tool (eg for strategising a case and making predictions).
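The risk-assessment character of such analytics can be seen even in a deliberately minimal sketch. Everything below is invented for illustration: the case records, field names and figures are hypothetical, and real products of the kind LexisNexis describes operate over vastly larger datasets with far more sophisticated models.

```python
# A minimal, hypothetical sketch of descriptive "legal analytics":
# mining past case records for a judge's historical behaviour.
# All records and field names below are invented for illustration.

past_cases = [
    {"judge": "Judge A", "motion": "summary_judgment", "granted": True},
    {"judge": "Judge A", "motion": "summary_judgment", "granted": False},
    {"judge": "Judge A", "motion": "summary_judgment", "granted": True},
    {"judge": "Judge B", "motion": "summary_judgment", "granted": False},
]

def grant_rate(cases, judge, motion):
    """Historical grant rate for a given judge and motion type.

    A frequency over past behaviour: a risk-assessment statistic,
    not legal analysis of the issues in a proceeding.
    """
    relevant = [c for c in cases if c["judge"] == judge and c["motion"] == motion]
    if not relevant:
        return None  # no comparable past cases to draw on
    return sum(c["granted"] for c in relevant) / len(relevant)

print(grant_rate(past_cases, "Judge A", "summary_judgment"))  # 2 of 3 granted
```

Even at this toy scale, the sketch shows why such analytics is better characterised as a business tool: the output is a frequency over past behaviour, useful for strategising a case, not a reasoned application of law to facts.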

AI and the work of the courts

With these constructs of intelligence, work and tasks in mind, and an overview of the development and application of AI to date, one now has a framework in which to assess the possible impact of AI on the particular work of the courts and participants in litigation. Court systems are often characterised, fairly or otherwise, by lengthy delays, high costs and occasional injustice. The question is whether, and if so how, AI can make courts more efficient and just (including by making more consistent decisions). To do this, it is necessary to outline the work of those participants in the justice system and to understand the analytical tools and processes deployed in order to perform that work.

Judicial method — broad overview

Anyone who has spent any time in a court will know that the work of a court is extremely complex. However, clinically (eg to a programmer or an engineer) it might be said that there are only two inputs — the evidence and the relevant law — and one output — the decision.[61] This is, of course, superficial and ignores a vast landscape of frameworks, principles and systems, and the philosophy of justice.

One such major principle is that the courts should apply “the rule of law” to disputes. This generally requires that the question of legal rights and liabilities of parties should ordinarily be resolved by the application of the law and not by the exercise of (undue) discretion on the part of the relevant judicial officer.[62] As former Chief Justice Gleeson AC of the High Court of Australia has noted, “the contrast between rules of general application, known in advance, and ad hoc decision-making, is a familiar aspect of the concept of law”.[63] Judicial decisions are to be made according to legal standards rather than undirected considerations of fairness.[64] The standards are “external” to the judge, not merely “personal”, lest the court would become an unregulated authority.[65]

Under this policy setting — namely, that decisions should not be merely discretionary — courts develop rules and guidelines, and, moreover, set broad objective standards to apply, such as: the “reasonable person” in relation to negligence claims; and the “reasonable person’s” interpretation of a contractual provision having regard to text, context and purpose. These kinds of normative standards are necessarily qualitative in nature — something an algorithm will not deal with unless specifically instructed to convert it to a quantitative assessment. A similar policy approach is that a court will seek to determine what a statute “objectively” means, having regard to certain canons of construction, looking for the objective intention and purpose of Parliament expressed through the words of the statute, which constrains the exercise of discretion when construing statutes.[66] Undue discretion is also sought to be taken from the court by the operation of stare decisis (prior similar cases and superior court decisions should be “followed”) — that is, the role of precedent. It is a fundamental legal principle that people should be treated equally or consistently unless there is just reason not to.

However, we should not jump to the conclusion that the above normative standards are somehow rigid and deterministic, and that discretion has therefore been eliminated or has only a limited role — it has not. The application of these normative standards and precedents is not purely algorithmic in nature. Cases are often wide open as to result. Although standards-based, the application of the law retains significant discretionary elements, including: whether particular evidence is or is not accepted by a particular judge;[67] the weight to be given to evidence — usually a qualitative assessment derived from a range of factors, including a witness’s demeanour; on which side of the line the particular facts in question fall relative to the standard (with each party to the proceeding contending for the contrary view); and whether there is a similar or dissimilar precedent available for comparison purposes. In addition to these elements, there is a further and significant discretionary consideration, even where the court in question is following “the rule of law”, guidelines and precedents. That element is the scope of the universe of evidence or data the court takes into account. As discussed above, AI is essentially applied in relation to a “closed universe” of data or where, at the least, the dataset is known, and from which (perhaps) predictions or extrapolations can be made using dynamic algorithms. In the courts, however, the universe of data is not closed — it is open. Courts draw on data external to the evidence tendered in a case. That external data comes from, inter alia: the judge’s personal experiences; the judge’s personal biases; and the judge’s assessment of those mythologies within society of what is “fair” or “just” in the circumstances.
An excellent example in this area is the imposition of penalties for crimes, where judges seek to match the penalty with the conscience of the community and do so without any surveys or other studies — plain estimates are made based solely on the relevant judges’ experience of the world and prior penalty settings. These types of considerations involve what has been described as “open-textured legal predicates”[68] or, as Nettle J of the High Court of Australia has described it, “open-textured rules”.[69]

The nub of the problem is the difference between the process of scientific reasoning and the process of legal reasoning: the former knows nothing of introspective notions of interpretive knowledge, metaphysics or theology, while computational law assumes that there can only ever be one proper outcome and that reaching it requires no more than the application of logic and reason. As Nettle J points out, “open-textured rules” yield more than one possible outcome and involve acts of will, as well as of cognition.[70] Expressed more broadly, the issue is not a single simple problem, but a high-complexity problem involving societal implications.

The so called “neighbour principle” in the law of negligence provides ample demonstration of the above issues. The development of that principle involved the House of Lords drawing on biblical values to derive a new normative standard of care and class of person to whom that standard should apply. This is obviously beyond the reach of AI. A further example is in the law of causation. In broad terms, causation is, at law, to be based on common sense and is now more formally “computed” by reference to issues of fact, law and policy within legislation.[71] In substance, at least in some jurisdictions, the test for causation has been put into a basic textual algorithm (recorded in statute). Even then, however, there is still a “slip rule” (causation based on policy). In the area of misleading and deceptive conduct, it is explicitly accepted that value judgments and policy considerations have a part to play in determining whether an act is sufficient to bring about the harm suffered by a plaintiff.[72] Misleading and deceptive conduct claims themselves involve a comparison of the normative conduct within the statutory proscription of misleading or deceptive conduct with the defendant’s own conduct. The “benchmark” normative conduct is not just a matter of evidence at trial but involves judicial considerations of that norm.

The genius of AI is that it relies on significant computing power, at low cost, with huge memory capacity and large volumes of data in a closed universe, with results and tasks determined by reference to those factors. But for the reasons advanced above, this is not necessarily so in relation to the task a court may have before it of determining a just result. Shifts in the law can occur despite the “closed universe” of evidence before the court on the particular occasion, caused by the court’s perception of the need for change.[73] It is hard to imagine AI playing any role in the law’s movement to keep pace with societal expectations and norms unless the court decides to move from the judge’s perception towards an approach of mining data that captures requisite mythologies held across society. Perhaps that might one day be done by reference, say, to Google searches undertaken by the public on particular topics, such as the adequacy of penalties in relation to “one-punch” crimes. That would, however, be to determine justice on the basis of popular view rather than what is “right”.

Overall, it is difficult to see AI performing this kind and complexity of work; of exercising discretion in complex circumstances to produce “fair results”. Fair results are idiosyncratic, not formulaic. It is not possible to define “fairness”[74] and so AI’s search/pathway for it is illusory. This is so even if there are so-called “fairness factors”[75] designed into AI. If AI were set to formally determine normative standards, it would draw on other normative standards from its database, which may over time be inapposite. Or it may just stare back at the programmer; setting the standards requires imagination, creative reflection and independent judgment, and an assessment of normative standards of society as perceived through the lens of the judge in question. It is also difficult to see AI performing what may be very complex work in relation to statutory construction. For example, the proper interpretation of a provision may require a close reading of the words in question, the overall Act in question, the statutory framework, the history of the legislation, extrinsic materials, and even the commercial context in which the provision operates.[76]

Finally, apart from all of the above, there is the question of the onus of proof. In civil cases the plaintiff has the onus on the balance of probabilities. But experienced barristers know that the onus can shift, more than once, and that in some cases the onus is reversed (say in tax cases). All of these aspects would need to be in the algorithms in question, and when it came to competing and fine judgments as to the weight of the evidence, it is hard to see how the algorithm could work its way through all that.[77] One possible solution would be to “simplify” the onus — but that again is a structural or framework issue of very real legal and philosophical significance.

In addition to these complexities in judicial method, there is the overarching risk, which AI could never cope with as an alternative judicial determining entity, that the reasoning in the court may involve policy choices as to what result is fair.[78] This may be based on experience, wisdom and reflection. Judicial processes and results of this kind cannot be programmed into AI algorithms.

Another example of an area beyond the reach of AI is the application of equitable principles relating to the making of declarations.[79]

And of course at the highest level of policy, there is the question of constitutional power, which could never be left to AI. It is barely even imaginable, for example, that any society might confer on AI the decision as to whether or not to overturn Roe v Wade. Such questions are barely within the reach of human capability, let alone AI.

AI and robojudges

With all this said, it has nevertheless been postulated that one day there may be so-called “robojudges”:[80]

Robojudges could in principle ensure that, for the first time in history, everyone becomes truly equal under the law: they could be programmed to all be identical and to treat everyone equally, transparently applying the law in a truly unbiased fashion.

It has been suggested that such robojudges would be free from bias, able to take in more evidence (due to computing power and data), have incredible knowledge of all areas of the law (eliminating judge specialisation lists) and be more efficient.[81] However, these advantages might be taken away by bugs or data errors in the relevant robojudge algorithms.

There has been recent attention on the question of whether AI can in fact ultimately “swallow” the legal system.[82] It has been observed that the use of AI in judicial decision-making, at least in the USA, is largely “theoretical” at this stage.[83] For the author’s part, unless the “rule of law” and its application by courts, as outlined above, is to change fundamentally and become significantly inflexible within rules and normative standards, it is hard to see a robojudge appearing on the bench any time soon. The idea of the robojudge confuses a superficially low-order complexity task (facts × law = judgment) with a high-order complexity task involving the sculpting of societal norms and standards over time. Further, in any event, the robojudge, unless specifically programmed to do so, would not necessarily articulate all its reasoning.[84] While the algorithm running the robojudge could be interrogated, the pathways it develops for the result might not be. For example, where machine learning was involved, it may not reveal the dataset it has relied on; a great amount of the reasoning might be hidden. Express reasoning is a fundamental cornerstone of our system of justice. A defendant who does not know why they were convicted, or why they had to make a relevant payment under a contract, may feel that the system is intrinsically unfair because it cannot be understood, tested or appealed against. It is therefore difficult to see how a judicial system without articulated reasoning could be used in a democratic society, because justice is itself, as a function of fairness, contestable.

The robojudge is but one example of what may be described as the delegation of decision-making from human judgment to automated systems. That delegation carries significant dangers, which are well recognised in the United States, where in some cases the delegation is (rightly) prohibited by regulation.[85] “Assembly line” adjudication has inherent warning signals to it.

The issue of appropriate and inappropriate delegation referred to above is very real. It has recently been reported that in China, large numbers of cases are decided by AI (as to which see Managing the large volumes of cases — implications, below). It has been reported that a check imposed on AI is that relevant decisions must be reviewed by a human. Presumably this is to guard against issues such as discrimination or general “unfairness”. But the criteria for review appear to be unclear. It has been reported that if the human (judge) disagrees with the AI decision, he/she must give reasons for the disagreement. The use of humans for oversight in automated decision systems (ADSs) is not new as a concept: it appears almost ubiquitously across industrial manufacturing, and it features in Western literature as a justifiable element of legal systems design.[86] But, again, as soon as any order of complexity of decision-making affecting people’s rights arises, there should be clear warning signs against delegation of decision-making in relation to those rights.

Managing the large volumes of cases — implications

The use of AI has exploded in jurisdictions where huge numbers of cases must be dealt with, and where notions of the need for individualised “fair” outcomes are less apparent. In November 2016, it was announced that the judiciary in China would tap into the power of AI to build smarter courts, relying on big data, cloud computing, neural networks and machine learning. It has been reported that, so far, Chinese courts have not only made court filings available online but have also sought to create processes that would produce electronic court files and case materials automatically.

In north China’s Hebei Province, 178 local courts have used an AI-powered assistance application for judges since July 2016, called Intelligent Trial 1.0, which has substantially reduced the workload of judges and their assistants. It has been reported that the software has helped nearly 3,000 judges handle more than 150,000 cases, reducing judges’ workloads by one-third.[87] These results must of course be viewed contextually. Since 2016, the use of AI by Chinese courts has grown exponentially. This is part of China’s drive to lead the world in AI by 2030.[88]

China has a “China Judgments Online” website now holding more than 120 million documents. Chinese courts actively “mine” this data, and there is even a secondary market where companies seek to repackage court data and analysis for lawyers and clients.[89]

In 2017, China embarked on a massive AI court-related project called “Project 206”, in Shanghai. Some 400 officials and 300 IT specialists promptly developed software directed at streamlining evidence collection, improving consistency and strengthening oversight of judges to reduce error.[90]

More recently, courts in China have initiated experiments to integrate AI into adjudication through software that reviews evidence, suggests outcomes, checks consistency of judgments, and makes recommendations on how to decide cases.[91] The so-called “smart court” has now emerged, with algorithms being used to boost court efficiency and consistency. But the drive for AI use also serves judges personally, for in China judges can be penalised for decisions that are overturned on appeal, deemed wrongly decided, or likely to give rise to social unrest.

These developments challenge traditional notions of the role of the courts, and of a court’s authority.[92] They also challenge the notion of judicial discretion, and the independence of the courts: who, for example, owns the algorithms, and who reviews them over time? And they give rise to questions over fundamental processes that, at least in many countries, operate as checks against error, namely appeals.

AI as an aide to evidence

Determining the truth

At the next rung down, AI might be able to be deployed to assist the court or the parties with evidentiary matters. For example, it might be possible to deploy machine-learning techniques to better understand and analyse brain data from functional magnetic resonance imaging (fMRI) scanners[93] to determine what a person is thinking and whether they are telling the truth or lying. The use of such techniques is currently controversial. However, if available, it has been suggested that this might shorten trials or reduce workloads considerably.[94] Juries might be connected up to such AI so that they can assess whether the defendant is lying.[95]

Despite this prospect, the “truth” is not a single concept.[96] The weight to be given to particular evidence is a high-complexity task. Judges are required to assess evidence based on the input of facts provided by the parties, with a subjective weighting given by the judges to particular aspects of the evidence if required.[97] The weighting is usually a product of wisdom, experience and intuition. It may follow that AI might only be relevant in this area for cases where the universe of facts has been agreed between the parties (ie under a statement of agreed facts), where weighting of the evidence has been limited. However, that would not solve many problems — for example, it would not solve the ultimate issue of whether particular conduct (agreed to have occurred) is negligent.

Having said this, there may still be applications for AI in resolving or assisting in the resolution of litigation or disputes. As mentioned above, in China people are charged for traffic violations based on their identities captured by digital cameras. This may ultimately become unobjectionable because the camera is vastly more accurate than the human eye. (Ironically, driverless AI cars will never commit offences, so the issue may become otiose.) Expert evidence might be another area where AI takes a key role — for example, in refuting an asserted scientific causal connection. It is hard to imagine AI fully displacing the court’s review of expert evidence. It might, however, identify anomalies within it, and “mine” global databases for inconsistent conclusions. This may well become a big part of the application of AI in trials over time.

However, in relation to expert evidence, courts require that the reasoning giving rise to expert opinion be transparent (and therefore able to be tested).[98] If the output of the AI is not able to be dissected and assessed by a human judicial officer, the court is, at least currently, likely to give the evidence little or no weight.[99] It follows that if AI is to make major inroads into the way in which expert evidence is received in court, the current system as designed — that is, one requiring expertise or knowledge, stated assumptions and express reasoning — may need to change. Alternatively, AI expert evidence will need to be developed so that the reasoning processes are fully exposed, not just the outcome of the relevant inquiry.

AI assisting judges

If there is no foreseeable absolute or significant role for AI in the work of judges, nevertheless there might be a role AI can play as the court’s assistant. After all, the court’s rules currently permit the appointment of an “adviser” or “assessor” to the court (albeit rarely used).[100]

There are parallels in commerce for such an appointment. In May 2014, a Hong Kong venture-capital firm, Deep Knowledge Ventures, which specialises in regenerative medicine, appointed an algorithm called VITAL to its board. VITAL analyses huge amounts of data about financial matters, clinical trials and intellectual property held by relevant companies that are investment targets, and then makes recommendations. The algorithm then, with the other board members, votes on whether the firm should make an investment in a specific business.[101]

It seems there is no reason why AI could not in the future “participate” in a court case, like a sophisticated “Google Home” device (increasingly being used in homes and offices), observing the trial, fact-checking and commenting where it thought appropriate having regard to a continuous mining of an immense database available to it of facts and law. The AI assistant might even object to evidence.

AI as an aide to impartiality and to assist with consistency

Finally, it is possible that AI may assist the courts to be more “objectively” impartial by suggesting to judges a particular position the court should adopt in certain areas based on a dataset collected or constructed, such as “risk of re-offending”, likelihood of breaching bail, and the like.[102] With this aide, however, come considerable risks to the work of the judiciary; for example, if the pathway to the result, called “reasons”, cannot be exposed, that does not, at least in Western societies, accord with principles of fairness and transparency.[103]

AI in interlocutory applications

As can be seen from the above, AI appears for the foreseeable future to be unable to do the work of judges, particularly at trial level in superior courts, unless the processes and methodologies of the law in relation to the quelling of disputes fundamentally change. As former Chief Justice of the High Court of Australia, Murray Gleeson AC, has observed, it is for the parliaments to decide what controversies are justiciable, and to create, and where appropriate to limit, the facilities for the resolution of justiciable controversies.[104] As his Honour has pointed out, parliaments regularly expand and contract the subjects of justiciable controversy.[105]

There would seem to be many circumstances where AI might step in and take the place of courts where courts make explicit yet very discrete discretionary judgments, for example: whether to order an injunction; whether a court case should be moved to another jurisdiction; whether to grant security for costs; whether a party has waived or abandoned its right to arbitration; and whether to award costs to a particular party. In virtually all of these cases, there is a specific body of principles, guidelines or factors that has been developed by earlier decisions.[106] These “markers” or (to use the phrase adopted above) “recognition points” indicate the direction the court in question should take, and assist the court to determine what is “fair” having regard to earlier thinking on the issue in question. This process might be said to approach the “recognition points” in the algorithms mentioned above, and therefore, for these particular types of cases, these processes may be capable of being reduced to one or more algorithms. For example, whether a proceeding should be transferred to another jurisdiction might be determined by the following factors: the cost to each party of staying or going; the place where the principal activities involving the issue took place; and the law governing the relevant issue. These could be put into a relatively simple algorithm, with weighting given to the factors, and with AI producing a determination. There would be no need for a hearing. The algorithm could be programmed to look for like cases, and to distinguish non-like cases.
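By way of illustration only, the weighted-factor approach described above might be sketched as follows. The factors track those just listed, but the weights, scoring scale and threshold are invented for the purpose of the sketch; a working system would need to derive them from the body of decided cases.

```python
# Illustrative sketch only: a hypothetical weighted-factor model for a
# transfer-of-proceeding decision. All weights and scores are invented.

# Each factor is scored from -1.0 (favours staying) to 1.0 (favours transfer).
FACTOR_WEIGHTS = {
    "relative_cost": 0.4,        # cost to each party of staying or going
    "place_of_activities": 0.4,  # where the principal activities took place
    "governing_law": 0.2,        # the law governing the relevant issue
}

def transfer_determination(scores: dict[str, float]) -> str:
    """Return a determination from weighted factor scores."""
    total = sum(FACTOR_WEIGHTS[name] * scores[name] for name in FACTOR_WEIGHTS)
    return "transfer" if total > 0 else "stay"

# Example: costs mildly favour staying, but the principal activities and
# the governing law both point to the other jurisdiction.
result = transfer_determination({
    "relative_cost": -0.2,
    "place_of_activities": 0.8,
    "governing_law": 1.0,
})
print(result)  # transfer
```

Even this toy version exposes the design questions a court would face: who fixes the weights, who scores the factors, and how outlier cases escape the formula.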

Another example might be in the area of waiver of arbitral rights under contract. The algorithm could determine whether the minimum number of judicial steps has been taken by the plaintiff such that a waiver of the right can be concluded.

While this may sound attractive, experience shows that such limited-discretion cases are often more complex than the above might suggest, and may involve factors that are not within the dedicated algorithm. There may also be a larger context to the question under consideration. Human judgment might therefore be necessary in order to derive a fair outcome. Further, in many jurisdictions, interlocutory decisions are able, with special leave, to be appealed. This “check” on such decisions ensures fairness for outlier cases. Perhaps the check could remain: an appeal could be available from an AI decision to a human judicial officer if certain threshold criteria were (or were not) satisfied. Overall, it is possible that one day there may be a whole range of areas of litigation where interlocutory applications are dealt with solely by algorithms (AI) rather than at hearings.

Similarly, it would no doubt be possible to have AI set penalties from an algorithm with known inputs and assessments. Again, however, setting penalties is not usually done solely by reference to the closed universe of evidence before the court in the particular case; judges no doubt draw on their observations and subjective judgment as to what is appropriate. If AI were to undertake such a task, using both “internal” and “external” evidence, it would need to mine externally available data as to the appropriate penalty to set. This appears at present to be unworkable.

Let us take another example. In many if not all proceedings, the court decides whether and when the parties should go to mediation. With rising court costs and delays, the courts have increasingly engaged mediation as a “pre-litigation” step in order to achieve fair and speedy outcomes at acceptable cost. It might be possible to have AI determine, based on a set of criteria, which cases should be the subject of compulsory mediation, even if one party does not agree. This might be based on mining a database showing that particular kinds of cases have in the past settled at mediation in, say, 95% of cases. It might also be possible for AI to act as the mediator, based on the suggestions put to it by the parties, and suggest a possible settlement range. However, even if AI could be put to this task, mediations themselves are necessarily humanistic matters involving highly subjective judgments about what the outcome of a proceeding might or should be, the ancillary risks that might pose, and what can be afforded. There is also a “human dimension” to the work of mediation, relating to creative solutions and intuitive assessment.
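A minimal sketch of such a referral rule, assuming hypothetical case categories, historical settlement rates and a referral threshold (none of these figures comes from any actual court data; the 95% figure above is merely the inspiration):

```python
# Hedged sketch: refer a case to compulsory mediation where cases of its
# kind have historically settled at mediation at a high rate.
# Categories, rates and threshold are all hypothetical.

HISTORICAL_SETTLEMENT_RATES = {
    "building_dispute": 0.95,
    "debt_recovery": 0.80,
    "defamation": 0.40,
}

REFERRAL_THRESHOLD = 0.90  # refer where, say, 90%+ of like cases settled

def should_refer_to_mediation(case_category: str) -> bool:
    """Refer to compulsory mediation if like cases historically settle."""
    rate = HISTORICAL_SETTLEMENT_RATES.get(case_category, 0.0)
    return rate >= REFERRAL_THRESHOLD

print(should_refer_to_mediation("building_dispute"))  # True
print(should_refer_to_mediation("defamation"))        # False
```

The sketch also shows the limits: it decides only *whether* to refer, and says nothing about the humanistic work of the mediation itself.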

Finally, a good case for AI in the court’s process relates to the question of costs. It might be possible for AI to track the progress of the proceeding by reference to the court’s file, and to determine the costs that should be payable at the end of the proceeding. This might eliminate the work of the costs court and save tens of millions of dollars over time.

AI and tribunal work

Assuming our current notions of justice remain, AI will have significant difficulties, at least in the foreseeable future, attempting to do “judge” work. At present, it is simply impossible. As the former Chief Justice of the High Court of Australia, Robert French AC, has pointed out, it is the courts and only the courts that can carry out the adjudication function involving the exercise of judicial power.[107] The courts exercise the power of the state, interpreting laws, ensuring procedural fairness and rendering binding decisions across society at large. However, the question arises whether less complex legal analysis and decision-making can be performed by AI.

It is generally accepted that some of the work of tribunals may be less complex than superior court litigation, because the knowledge field and expertise required is narrower, and there is less need to develop overarching principles. Further, for some tribunals, case-by-case outcomes are generally acceptable without the need for formal rules of evidence and broad principles. AI may have some significant role for these tribunals over time. For example, in relation to refugees, the decision-maker may be an AI algorithm based on input data of the applicant’s background and an assigned percentage likelihood that refugee status is justifiable. Another current example is whether someone is a citizen of a particular country,[108] or even a dual citizen. Apparently INDIGO, a large legal expert system deployed by the Dutch immigration administration, deals with issues like this.

This kind of work by AI would require data to be inputted, which presumably would need to be certified by a legal practitioner or other qualified person. Such outcomes might also need to be audited by a “human” decision-maker for reasonableness against some normative standard. The audit might be like a Tax Office audit, targeted at certain decisions. And if AI were rejecting virtually all applications, the algorithm would presumably need to be revisited.
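The two safeguards just described, a targeted audit and a trip-wire on the algorithm itself, might be sketched as follows. The sampling rate and the rejection-rate ceiling are invented for illustration:

```python
# Hedged sketch: a "Tax Office style" audit over AI tribunal decisions,
# plus a trip-wire that flags the algorithm itself if it rejects
# virtually all applications. All thresholds are hypothetical.
import random

AUDIT_SAMPLE_RATE = 0.05      # send a random 5% of decisions to human review
REJECTION_RATE_CEILING = 0.9  # flag the algorithm if rejections exceed 90%

def select_for_audit(decision_ids: list[str]) -> list[str]:
    """Randomly select a sample of decisions for human review."""
    if not decision_ids:
        return []
    k = max(1, int(len(decision_ids) * AUDIT_SAMPLE_RATE))
    return random.sample(decision_ids, k)

def algorithm_needs_review(outcomes: list[str]) -> bool:
    """Flag the algorithm itself if virtually all applications are rejected."""
    if not outcomes:
        return False
    rejection_rate = outcomes.count("rejected") / len(outcomes)
    return rejection_rate >= REJECTION_RATE_CEILING
```

The human auditor, not the code, remains the normative check; the code merely decides where the human looks.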

Finally, in some parts of Europe, family court settlements are determined without hearings, using allocations of assets between husband and wife based on an algorithm of what is fair. In the Family Court of Australia, software is being developed that generates advice on how property from a marriage would be split under a court determination.[109]
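An "algorithm of what is fair" for a property split might, in its crudest form, look like the sketch below. The two inputs and their ranges are entirely hypothetical; real determinations weigh many more factors, including non-financial contributions.

```python
# Illustrative sketch only: a simplified "fair allocation" formula for a
# family property settlement. The inputs and their ranges are invented.

def suggested_split(contribution_share: float,
                    future_needs_adjustment: float) -> tuple[float, float]:
    """Return (party_a_share, party_b_share) of the asset pool.

    contribution_share: party A's assessed share of contributions (0..1).
    future_needs_adjustment: shift toward party A for future needs
    (assumed range -0.2..0.2).
    """
    a = min(1.0, max(0.0, contribution_share + future_needs_adjustment))
    return (round(a, 2), round(1.0 - a, 2))

print(suggested_split(0.5, 0.1))  # (0.6, 0.4)
```

Even here, everything turns on the two input assessments, which is precisely where human (or further algorithmic) judgment re-enters.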

AI and small claims

Down yet a further level of complexity and dollar value of amounts in dispute are “small claims”. The question is whether these can be “AI’d” out of existence. This has in part already been suggested. There is a product (a web-based interface) called DoNotPay,[110] which is dedicated to assisting people to make small claims or defeat legal infringements such as parking or speeding fines.[111] The user fills in a form on screen, dedicated to the particular issue at hand (eg defending a fine), and the form is submitted to the relevant adjudicator (such as the small claims court). By using the product, a person does not need to use a lawyer. But that is only to start the claim and, in any event, the claim is of a size that would not justify a lawyer. This product will not, therefore, revolutionise the litigation system; however, it might make one small segment of it more efficient at the front end. It is a laudable product as an “access to justice” device; it enables people to take a formal litigation step.[112] But it does not do much more. An associated AI product has been developed in relation to class actions, where individuals can log their claims individually.[113] The basic idea is that the product lets a complainant approach a defendant electronically so as to settle a claim online.[114] For more commercial work, there are legal online services such as those offered by Allen & Overy in London, which generate more than £12 million for the firm each year in subscriptions.

If AI developers are able to take this initial “dip in the water” at doing litigation without a lawyer to further stages of development, the litigation landscape might change and possibly materially. However, at present, that looks a long way off. The question is not whether AI can improve in order to do this, but whether the structures of our legal system can be redesigned, giving AI more work to do because much of the discretion is taken out of the system. That is a large philosophical question, not merely an AI question.

AI and private dispute resolution mechanisms

Yet further down the tree of dispute resolution are the systems established by private individuals to quell controversy. Examples include dispute resolution provisions for sporting bodies or for consumer transactions, such as purchases over eBay.[115] In effect, consumers contract to resolve disputes electronically and agree to be bound by the dispute resolution process. It has been estimated that three times as many disagreements among eBay traders are resolved each year using “online dispute resolution” as there are lawsuits filed in the entire United States (US) court system.[116] This may drive efficiencies, but not all are satisfied that it is appropriate. The Paris Bar has expressed disapproval of non-lawyers providing legal services by trying to solve legal claims without lawyers. The Paris Bar has worked with regulators to create the Paris Bar Incubator, which works to foster new legal technology.[117]

Finally, even with AI dispute resolution, one would expect that a human’s judgment would at least on occasions be necessary by way of a control or an audit over that work. In any event, if there are to be such AI outcomes, this fundamental question arises: who will be the “owner” of the AI and/or the system governing it?[118] The natural answer is the court, the tribunal or the organisation (eg eBay) in question. But, as mentioned above, will a court or tribunal have the capability and resources to maintain and improve the AI over time? Will it have the resources to maintain the controls within the AI to ensure that it is meeting its purpose over time? Will it have the resources to audit those controls? Or will the court/tribunal ultimately “outsource” the development of the AI to contractors, as it might do its IT systems? These governance issues do not just pertain to the court system.

The future of virtual dispute resolution

All of the above questions seem to generate a more general question: whether a court has to be a physical place, or whether it is in fact the service of justice. Certainly, the presence of a court is a symbol of government authority and facilitates highly complex problems (both logical and philosophical) being dealt with iteratively by competent legal experts and the court itself. However, alternative dispute resolution online, and courts themselves online, are a developing and significant trend.[119] They facilitate affordable access to justice, timely dispute resolution and binding results (ending lengthy appeal processes).

In this area, it may be possible that AI could be used to perform decision-making through pre-agreed contractual arrangements. As stated above, eBay is the classic example of this. Further, there are examples of low-complexity online courts already in existence around the world — in the Netherlands, British Columbia and the United Kingdom.[120] These online processes are intended to enable an exchange of facts, identify issues for resolution, and suggest consensual resolution of disputes if possible, failing which the dispute may go to a hearing or be resolved “on the papers”.

AI and its impact on how courts are viewed by litigants and society more generally

So far this article has essentially been exploring the notion of whether there can be “behavioural equivalence” between traditional human-based judicial decision-making and AI-based decision-making; that is, whether outcomes can be the same under each pathway. So far, at least to this author, behavioural equivalence appears impossible except at the most basic procedural level, such as rudimentary procedural tribunal work and the like.

But let us assume nevertheless for the moment that behavioural equivalence is possible. The question arises whether, although the outcomes are the same, significant other differences nevertheless remain.

To consider this further, let us suppose that a particular litigant attends court, sees the court processes in motion, observes the judge in question, and assesses the reasoning for the result by reading the judgment. The litigant will necessarily make certain value judgments about the litigation and the result. The litigant will no doubt ask themselves various questions, including: was the process honest? Was the court courageous in its determination to uphold rights? Was the overall process fair? Was the result fair?

Now assume the same result has been achieved using AI. These questions do not go away when AI is introduced to determine the result. The same value judgments about honesty, trustworthiness, courage and fairness will all still inevitably be made by the litigants.

The literature on AI and the courts contains a dearth of learning on how AI may bear (perhaps adversely) on those values as attributed to the court system by litigants or even observers.[121] Despite this, it is essential for governments to assess how the trade-offs between human judgments and AI judgments in quelling controversies via adversarial (or even inquisitorial) systems are valued by litigants and society in general. Much work is needed in this area to determine, before introducing AI into a particular process, or replacing a process with AI, how the involvement of AI will affect the values which litigants and society place on the relevant court system and its status in society.

AI and the work of law firms

Law firms advise on and implement complex transactions, including mergers and acquisitions. AI now exists for this kind of work and is being embraced by large law firms across many continents.

In late 2016, American academics published a paper entitled “Can robots be lawyers? Computers, lawyers, and the practice of law”.[122] In summary, the research accessed the time records of lawyers in major American law firms, allocated work across certain work streams (eg document management or legal writing), and then attempted to determine what work could or could not be done by AI. Surprisingly, the analysis revealed the absence of a strong association between the ease of automating a task and whether the task was performed by a junior associate, senior associate or a partner. Nevertheless, the overall conclusion was that automation could have a material employment effect on lawyers.

A major AI product now available in the legal services market is called Luminance, based in Cambridge in the United Kingdom.[123] It has developed data systems for NATO (the North Atlantic Treaty Organisation) and the British National Health Service. Its competitive advantage is that it applies “new maths” to solve complex real-world challenges where there are very large, heterogeneous, constantly evolving datasets. This is what Luminance says about the product:[124]

Founded by mathematicians from the University of Cambridge, Luminance’s Legal Inference Transformation Engine (LITE) uniquely combines supervised and unsupervised machine learning to provide the most robust, powerful platform for legal analysis available to lawyers. Luminance’s technology can read and form an understanding of legal documentation in any language and jurisdiction, immediately surfacing the most relevant information and vastly reducing the amount of time spent in document review.

Luminance is used by over 230 law firms and organisations in over 50 countries and in more than 80 languages across a wide range of practice areas, including M&A due diligence, property portfolio analysis, eDiscovery, contract negotiation and model document comparison.

According to its website, Luminance provides lawyers with the most rigorous analysis of their documents, instantly highlighting anomalous areas or risks that require urgent attention. It works across the entire dataset, negating the need to rely on sampling and giving lawyers confidence that no critical document or clause has been missed. The AI shoulders the burden of low-level cognitive tasks common in due diligence, compliance, insurance or in-house contract management, enabling lawyers to spend more time advising clients on business-critical issues. According to Luminance’s website, one of its clients has been able to upscale the review of documents from a rate of 80 per hour to some 3,600 per hour. Luminance technology uses inference, deep learning, natural language processing and pattern recognition, and supervised and “unsupervised” machine learning. Luminance has a broad and increasing client base across the globe.

Another type of AI product is “COIN”, or Contract Intelligence, developed by JPMorgan Chase & Co. The product engages machine-learning algorithms to review and interpret commercial loan agreements. The company has estimated that this AI saves 360,000 hours of lawyer and loan officer work each year.[125] It has been said that through products of this kind, AI is “closing in on the work of junior lawyers”. That is probably right.

This raises the question whether there is such a product for litigation. Law firms assist in the litigation process in a large number of ways, including:

  • taking instructions (gathering facts)

  • determining (usually with counsel’s involvement) who should be witnesses

  • reviewing documentation in support of a case or to determine how it might damage a case

  • advising on the law

  • assessing prospects (ie the risk of failure or the likelihood of success)

  • managing the trial process for the participant, and

  • preparing or assisting in the preparation of legal submissions.

AI can and will increasingly be deployed in aid of these tasks.[126] For example, as to the third point (reviewing documentation), electronic discovery processes already take the place of lawyers or paralegals looking through large volumes of material, thereby significantly reducing time and cost.[127] Courts have more recently embraced discovery using algorithms, including so-called “predictive coding”, as a means of keeping the discovery obligation manageable.[128]

As to the fourth point (advising on the law), predictive coding has been applied to select relevant statutory provisions from those retrieved with keyword searches.[129] Further, significant research tools are now available. In the United States, for example, you can submit a legal question to a firm, which will use AI to analyse the issues and produce a suggested memo to be provided to the relevant client. There is a turnaround time because humans have to turn the memo into a more credible product.[130] As mentioned above, ROSS (an IBM product) is a legal research tool. It enables law firms to slash the time spent on research, while improving results. According to IBM, existing technologies such as keyword search cope poorly with the volume, variety, velocity and veracity of legal data. Watson’s cognitive computing capability enables ROSS’s intelligence. The ROSS application works by allowing lawyers to research by asking questions in natural language, just as they would with each other. According to IBM, because it is built upon a cognitive computing system, ROSS is able to sift through over a billion text documents a second and return the exact passage the user needs. According to IBM: “Do more than humanly possible. Supercharge lawyers with artificial intelligence.”[131] However, it should be noted that a recent review of ROSS[132] found that it is still a “supplement” to traditional Boolean searches. In the review, young lawyers were asked to answer American bankruptcy law questions using ROSS. ROSS outperformed other search technologies but still only found about half the relevant authorities within the first 20 results.[133] There were, however, significant reductions in research time, which translate into lower costs to clients and therefore a more competitive legal service. The review concluded that the gains from using ROSS did not constitute a dramatic transformation in the use of technology in legal services. Rather, the tool was characterised as a “significant iteration in the continuing evolution of legal research tools that began with the launch of digital databases of authorities and have continued through developments in search technologies”.[134]

As to the fifth point (assessing prospects), there currently exist AI products that seek to predict litigation outcomes. This is based not only on the facts and the law but also, inter alia, on AI analysis of the particular judge allocated to the case and the particular counsel retained. It is not inconceivable that an AI product might be developed to assess the risk on a “real time” basis — that is, during the trial — as the transcript and submissions are put to the court over its course. It has been said in relation to the US Supreme Court that computational statistics can often yield more accurate predictions of the likely behaviour of courts than the lawyers retained in the cases.[135] Predictive litigation outcomes are particularly suited to the US legal system, where the level of activity is truly immense. For example, there are 3,124 state courts in America, and over 15 million civil lawsuits are filed each year (roughly 41,000 claims a day). This gives rise to a significant opportunity to “mine” the data from such cases and to develop predictive algorithms.[136] Such data is unlikely to be available outside huge legal litigation markets such as the United States.[137] Predictive algorithms in respect of particular judges are prohibited in France.[138]

Anterior to all this is the question whether litigation should be commenced in the first place. Here we might see the introduction of “self-executing contracts”,[139] able to initiate legal proceedings if AI detects breaches. Of course, that is a rather naive approach to litigation: the decision to launch proceedings is as much about reputation and risk as it is about legal rights.

Reduction of operating costs

Aside from these direct litigation-type tasks AI might be put to, there is a whole body of AI development underway to assist law firms to reduce their operating costs, thus making them more competitive over time.[140] A good example is the current development of AI to enable a fee earner’s computer to prepare the narrative and time spent on work done by the fee earner, based on what work is being done on the computer over time.[141]

Overall, the general consensus, at least in Australia, is that AI will be increasingly deployed in the legal profession to take repetitive legal work from lawyers and their assistants, but will not replace trusted legal advisers.[142] Relationships with clients may well be enhanced as clients see solicitors doing more value-add work and less of the repetitive work that is currently so costly (such as discovery work in litigation). Computers and AI have already transformed the accounting profession, enabling accountants to provide more value-add work. It is predicted that the three “As” (automation, AI and analytics) will likely shape the legal department of the future.[143] No doubt this is the position for law firms too.

AI and the work of counsel

As to legal research, AI will develop apace, though it must be said that it is already well advanced. Legal research now takes a fraction of the time it took, say, 10 years ago.[144] Counsel will no doubt be advantaged by such developments.

AI might also be able to do the pleadings for a case. Counsel (or their instructor) would simply input the facts in question. The AI algorithm would then analyse the facts, draw on its database, and produce a statement of claim, with particulars, containing the relevant causes of action and the appropriate relief. There might be yet other applications for AI. As touched on above, AI products might be deployed to predict litigation outcomes. This is likely to emerge increasingly, displacing the traditional quantitative guesses of counsel that the case is “50/50” or the qualitative assessments that the case is “well arguable” or “unlikely to succeed”.

However, where AI must necessarily fall short is with the art of the advocate (sometimes called court craft), for this requires an understanding of the “social processes” of the courtroom, including as between the judge, opposing counsel and witnesses, taking into account the observance of duties to the court. Further, problem-solving towards a fair resolution of a dispute requires a dialectic interaction between counsel for each party and the court. In effect, at least three super-computers (brains) are engaged on the one series of problems within a proceeding. Each hypothesis, fact, analysis, piece of reasoning and conclusion is subject to interrogation. This permits deep reasoning across the so-called “open-textured rules” landscape referred to above. Further, the “art of the advocate” involves fine and frequent judgments in a complex environment. It is hard to see AI affecting this aspect, particularly where qualitative judgments are regularly called for.

There may be areas where the work of counsel will increase with AI. One such area might be in relation to the testing of the relevant algorithm or algorithms producing the “computational law”, and the challenges (or support) for the relevant databases relied upon by those algorithms. Whether this will require special new expertise by trial counsel — for example, in the area of computing and/or engineering — is yet to be seen. Another area that might increase the work of counsel is in the possible certification of factual data for input into AI algorithms, particularly for tribunals whose work has been subsumed by AI. This would limit (but not eliminate) “gaming” of the relevant system. The current Civil Procedure Act 2010 (Vic) requires certain certifications by practitioners before proceedings can be commenced. This might be extended through the use of AI.

As foreshadowed above, there remains the broader question of what the Bars around Australia should do regarding AI. Since 2015, the Canadian Bar Association has had an Information Technology Committee, which, inter alia, monitors developments in computer technology and in the case law and regulations relating to it, and regards AI as a specific area of interest. The Committee is also tasked to keep members informed, to comment on legislation, and to make recommendations in relation to changes in the law in this area. The Victorian Bar has a similar committee. It is clear enough that the Bars across Australia should make every endeavour to monitor and keep on top of the AI revolution. And perhaps computing and AI should ultimately be compulsory continuing legal education for counsel (a need law schools are increasingly aware of).

Policy issues for the future regulation of AI

The regulation of the development of AI is a new frontier. AI development in Silicon Valley is regularly described as “the wild west”.[145] Regulators are grappling with the kinds of policies that should be put in place to deal with AI development, and governments across the globe are undertaking governance reviews of AI and monitoring what other governments are doing.[146] However, there are already well-developed principles across a range of disciplines, including in engineering activities.

An obvious regulatory field for consideration is driverless cars. Should they be required to pass certain regulatory standards before they can take to the road? The answer seems obvious, but what would be the content of those regulations? Would the car need to pass a test (like a person does to obtain a licence)? If a driverless car crashes, who is at fault?[147] Simply identifying the relevant defendant may be a challenge: the company writing the algorithms; the provider of the data upon which the driverless car relied; the satellite companies that supplied the data; the electrical engineer who serviced the car? In Germany, the legal authorities are currently thinking about how the next level of laws should cope with this challenge.[148] Such dilemmas may well call for a fundamental policy reset, involving “no fault” litigation, or insurance coverage for driverless cars.[149] If there is damage suffered from the use of AI, will the court have to look at the dataset the AI relied upon and determine who imposed a bias (if any) in the data, who set the size of the dataset, and who did or did not review the AI algorithm? If there is damage suffered from the use of AI, and there was no “protocol” for the use of the AI, is that itself prima facie a negligent act? There are also weighty societal governance issues associated with the use of AI for objects such as cars. Who should be in control of, or audit, AI’s use? Take, for example, a company that sells a car with advanced but defective AI. If the directors do not insist on an audit (or independent certification) of the AI and/or its use, are the directors negligent?

Principles have also now been developed relating to the use of AI in the administration of justice.[150]

Courts themselves will have to grapple with other new controversies arising from the use of AI. It may be that particular laws do not adequately cover new scenarios, for example: who owns an AI algorithm derived by another AI algorithm?[151] With the inexorable aggregation of data for the purposes of enabling AI to perform useful analysis, how are privacy and ownership rights over the data best protected? Much of the mining of data by AI occurs on “the Cloud”. Despite privacy and confidentiality protocols, is the Cloud in fact appropriate for data mining, particularly in legal proceedings?

This aggregation and mining of data gives rise to a further issue: the evidentiary regime in litigation relating to such data. The Evidence Act 1995 (Cth) operates essentially as a filter on what is or is not probative evidence. If AI draws on (mines) data from a huge dataset, how is the court to determine the probative value of that extraction, and therefore the conclusions derived from that extraction? There may not be any peer review of the extraction or other check or control on it. It might just be “pulled” from the internet. It might be highly valuable but be given very low probative weight because its source is not verifiable. These are all imponderables of future litigation.

One obvious but discrete area for consideration regarding AI is the possible development of non-discretionary elements within legislation or regulations — that is, elements designed so that there is no call for an evaluative process. This might apply, for example, in relation to citizenship status, social security payments, or certain compensation amounts; these could all be determined by algorithms, updating themselves (learning) on a predetermined basis. Essentially, this would be the work of automated decision systems (ADSs) as mentioned earlier.
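By way of illustration only, a non-discretionary rule of this kind reduces to pure arithmetic once the legislated thresholds are fixed. The following sketch is hypothetical: the rates, income free area and taper below are invented for illustration and are not drawn from any actual scheme.

```python
# Minimal sketch of a non-discretionary "automated decision system" rule.
# All figures are invented for illustration; a real system would encode
# the actual legislative formula.

def fortnightly_payment(income: float, assets: float) -> float:
    """Return a benefit amount from fixed, statutory-style thresholds."""
    if assets > 500_000:            # hard asset cut-off: no evaluation involved
        return 0.0
    base = 750.0                    # maximum fortnightly rate
    free_area = 150.0               # income below this does not reduce the rate
    taper = 0.5                     # each excess dollar reduces the rate by 50c
    reduction = max(0.0, income - free_area) * taper
    return round(max(0.0, base - reduction), 2)
```

On these invented figures, an applicant with $400 of fortnightly income and $10,000 of assets would receive $625.00. The point is that every input maps to exactly one output, leaving nothing for a decision-maker to evaluate.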

Further, as mentioned above, the use of digital cameras to recognise people and fine them for offences gives rise to a whole area of regulatory oversight issues.[152] In the area of AI weapons, we may well see international treaties or protocols governing their use. Finally, there is the whole question of the control of a country’s population by AI systems.[153] This is a fertile area for the courts, lawyers and counsel.

The pace of AI development

It appears AI will not be constrained by computing power, memory capacity, cost or data collection (as to the latter, at least in many fields such as insurance, finance and medicine). The interesting question therefore is this: what will constrain it? There are two important factors. First, AI cannot as yet read, comprehend or reason for itself, so it has significant limitations in understanding; in effect, it has no understanding of why it is doing something,[154] and cannot innately reason to a conclusion. Secondly, there may be significant forces at play that impose “protocols” or constraints on the use of AI — for example, there may be an international treaty providing that only “amoral” algorithms can be developed.

AI is at the “hype” stage at present, but significant studies across industry suggest the overall impact will be high. One study found that, across the technology, media and telecommunications industry, more than 70% of respondents expected large effects from AI within five years.[155] Even in the public sector, 40% of respondents made this prediction. Having said that, according to one source, the number of convincing examples of AI implemented as commercially viable products does not make generalised negative results for the professions inevitable. An October 2016 survey of the adoption of machine-learning-based AI in legal services by large British law firms over the previous year showed that only three reported applications had actually been developed, plus a further nine collaborations, agreements or partnerships to that end.[156]

It seems that the popular view is that the more dramatic effects of AI may occur within 10–20 years, when data consolidation has been better developed.[157] While there may be significant job losses due to automation (eg in call centres and banks), the 2016 World Economic Forum report suggests that the coming disruption to employment will be multifaceted rather than narrowly confined. There are, for example, areas of expansion in employment: Infosys has trained more than 120,000 employees in design thinking.[158]

It should be noted that there is a significant investment around the world in AI development — in 2016, there was an estimated $26–40 billion spent on it.[159]

In the United States an AI Index has recently been developed, put together by Stanford University and others. In November 2017, the first (annual) AI Index report was published.[160] The report deals with the volume of activity in the AI field and its technical performance, and makes comments on “derivative measures” and observations about progress “towards human-level performance”. The report has significant limitations in that it does not contain data about AI research and development by governments or large corporations, and is US-centric. Nevertheless, it shows a heavy uplift in activity, and that AI can beat human capability in certain areas (such as object detection), can match human speech recognition, and is nearly as quick as a human at finding the answer to a question within a document. Yet the report notes that AI’s capability, in contrast to a human’s, degrades significantly when tasks become “generalised”.

It is true that there are calls, in some quarters, to slow the development of AI for ethical and safety reasons. For example, in mid-November 2017, hundreds of AI researchers from Canada and Australia wrote to Prime Ministers Justin Trudeau and Malcolm Turnbull, calling for an international ban on “weaponised AI”.[161] These kinds of concerns have been identified and raised as far back as 30 years ago.[162] Assuming this were to occur, the question might be: how would this be enforced? By what court? And how would the court satisfy itself that a particular piece of weaponised AI is or is not within acceptable protocols unless the court is itself able to interrogate the AI?

Nevertheless, it seems the pace of AI development will only increase. The world’s leading experts cannot agree on when, if at all, AI will develop to the point of human-level intelligence, but some have said by 2055 or thereabouts.[163] It is not inconceivable that one day human intelligence might not be the apogee of intelligence. But the literature suggests that this is at least 50 years away.

There are now literally thousands of Silicon Valley firms working on AI products in the legal services space, and that number may well increase. As we have seen, under Moore’s Law, computing power will continue to develop with immense capacity in the future. It seems unlikely that the development of AI in legal services will slow; rather, it is likely to increase dramatically as firms try to develop products that perform steps in the litigation process more cheaply and effectively, giving service providers competitive advantages in the delivery of legal services.

Finally, it is also not inconceivable that one day AI will prevent or eliminate controversy in society, because there will be higher levels of certainty. If everything is far more certain, and the “risk” of a counterparty defaulting is perfectly priced into a contract, there may be no need for litigation, and therefore no controversies for courts to resolve. But that outcome seems a very, very long way off from today.

Conclusion — impact on litigation in the short to medium term

It seems that in the short to medium term, AI will continue to be developed, and applied, in at least four areas in litigation:

  • removing repetitive and relatively low-skilled work, such as reviewing vast volumes of discovery

  • providing more powerful search engines and analysis regarding legal principles and arguments, and even perhaps reasoning

  • providing predictions on court proceeding outcomes — as discussed, this is a risk mitigation or risk assessment aide, not a trial aide per se, and

  • providing opportunities to mine vast volumes of data to determine whether relevant expert material can be used, or criticised, in proceedings.
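As to the third of these, outcome prediction tools typically reduce a matter to quantified features and produce a probability. A minimal sketch follows; the features, weights and bias are entirely invented for illustration, whereas real products derive them from large volumes of decided cases.

```python
import math

# Illustrative sketch only: the feature names and weights are invented,
# not drawn from any real predictive product discussed in this article.

def outcome_probability(features: dict[str, float],
                        weights: dict[str, float], bias: float) -> float:
    """Logistic score: estimated P(claim succeeds) from weighted features."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {"judge_plaintiff_rate": 2.0,   # historic success rate before this judge
           "similar_claims_won": 1.5,     # share of comparable claims upheld
           "documentary_support": 1.0}    # strength of the document trail
case = {"judge_plaintiff_rate": 0.6, "similar_claims_won": 0.5,
        "documentary_support": 0.8}

p = outcome_probability(case, weights, bias=-1.5)
```

On these invented numbers the model reports roughly a 78% chance of success. The figure is a risk-assessment aide for settlement decisions, not a prediction by which any court is bound.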

If AI is to have an impact on the way superior court litigation is conducted, it is likely that a necessary condition for its influence will be less flexibility built into the litigation system — that is, less opportunity to provide corrective mechanisms for procedural or substantive irregularities in decision-making. This possible reset in the priorities of the courts is essentially a legislative task.

At lower tribunal levels with less complex controversies, AI may play a much more significant role. However, that role may develop more slowly than first thought; the development would need to come from government, which is not the natural engine room of entrepreneurial improvements in systems. Developments in the private sector to improve client services and lower costs may not always align with the societal framework changes needed to deal with AI. In any event, the natural engine rooms for developing AI further seem to be academia, businesses with an appropriate risk capital profile (venture capital), and businesses with huge balance sheets (such as IBM).[164] However, it is very possible that, even if there is material change in this regard, this may ultimately be seen to be sacrificing justice for efficiency. The right balance between these two is always contextual: an advanced wealthy democratic society may well tend to skew the balance in one direction; a different kind of society may skew it the other way.

The legal profession, including the Bars, must be on the lookout for ways to improve access to justice, lower costs, and improve timescales for litigation. As Nettle J has stated:[165]

As the custodians of the law, we not only have a responsibility to be at the forefront in the innovation and application of that kind of new technology but we also have reason to be excited about the benefits which it is likely to yield.

[1] This article was first published in 2020 by Thomson Reuters in the Journal of Civil Litigation and Practice and should be cited as D Farrands, “Artificial Intelligence and Litigation — Future Possibilities” (2020) 9(1) Journal of Civil Litigation and Practice 7. For all subscription inquiries please phone, from Australia: 1300 304 195, from Overseas: +61 2 8587 7980, or inquire online. The official PDF version of this article can also be purchased separately from Thomson Reuters. This version of the article has been updated with certain further developments and learnings in this area.

[2] Barrister, Victorian Bar, Aickin Chambers, Melbourne: LLB, B Ec, F Fin, CA, GAICD. The author is grateful to Ian Macdonald, Catherine Burke, Karl Stewart, Geoff McGill, Phil Scorgie, Kevin Jones, Sue Gatford, Robbie Stamp, Kevin Ashley and Alex Babin for their contributions and feedback on this article.

[3] J Allsop, “The role and future of the Federal Court within the Australian judicial system” [2017] FedJSchol 12; see more recently J Allsop, “Technology and the future of the courts” (FCA) [2019] FedJSchol 4; Handbook for Judicial Officers, Judicial Commission of NSW, 2021; see also M Perry, “iDecide: digital pathways to decision” [2019] FedJSchol 3.

[4] For an excellent introduction to the topic of AI and its future, reference should be made to a study published in September 2016, by the so-called AI100 Group (part of the 100-year study on AI), a project hosted by Stanford University, the first report being P Stone et al, “Artificial intelligence and life in 2030: one hundred year study on artificial intelligence”, Report of the 2015–2016 Study Panel, Stanford University, 2016 at, accessed 6 July 2022. The AI100 Group’s remit is to investigate the long-term impact of the science, engineering and deployment of AI-enabled computing systems on people, communities and society. Its core deliverables are five-yearly surveys assessing the current state of AI, of which the 2016 report is the first. In the report, the Group describes AI and its component parts, reviews AI research trends, overviews AI used in particular sectors, and recommends AI policy generally.

[5] Digital 2019 — as to mobile phone, internet, social media and email use, see S Kemp, “Q4 global state of digital in October 2019”, at, accessed 6 July 2022; as to Facebook use, see M Mohsin, “10 Facebook statistics every marketer should know in 2021”, Oberlo, at, accessed 6 July 2022. As to efficiencies of AI, Pricewaterhouse has estimated that artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030: PriceWaterhouseCoopers, “Sizing the prize: what’s the real value of AI for your business and how can you capitalise?”, 2017, at, accessed 6 July 2022.

[6] Alan Turing is regarded as the father of modern computers and “intelligent” machines, but the phrase “artificial intelligence” was actually coined by John McCarthy in 1956: see M Tegmark, Life 3.0 — Being human in the age of Artificial Intelligence, Penguin Random House, 2017, p 40. For an excellent overview timeline of the history of “AI” since 1956, see Accenture Applied Intelligence, “AI explained — a guide for executives”, 2018, at, accessed 6 July 2022. For an overview of the likely impact of AI see D West and J Allen, “How artificial intelligence is transforming the world”, Brookings Institution, 2018 at, accessed 6 July 2022. Similarly, for a good examination of the future of AI and its likely impact on humanity (such as on employment, human rights, and warfare), see R McLay, “Managing the rise of artificial intelligence” at, accessed 6 July 2022.

[7] For an account of that area, see M Warren, “Open justice in the technological age”, Redmond Barry Lecture, Melbourne, 21 October 2013 at, accessed 7 July 2022; see more recently J Hetyey, “‘The way forward: placing innovation ideas into practice’: technology, innovation and change in the Supreme Court of Victoria”, Law Institute of Victoria Future Focus Forum, Melbourne, 23 November 2017 at, accessed 7 July 2022. The paper deals with the Supreme Court’s digital strategy, including in the areas of e-filing, judges’ portals and courtroom technology.

[8] For a high-level and general overview, see R Tromans, “Legal AI — a beginner’s guide”, Thomson Reuters, 2017, at, accessed 7 July 2022.

[9] That is, underlying assumptions or beliefs as to what is positive and valued behaviour in a group or society, and what behaviour is negatively valued: see I Macdonald, C Burke and K Stewart, Systems leadership — creating positive organisations, Gower, 2018, pp 55–56, 60–64.

[10] The topic requires a definition of “intelligence”. There does not appear to be a single universally accepted definition. The Oxford Dictionary defines intelligence as the ability to acquire and apply knowledge and skills. A broader definition might be: the capacity for logic, problem-solving, learning and the application of learning. There are other definitions in use, such as “that quality that enables an entity to function appropriately and with foresight in its environment”: N Nilsson, The quest for artificial intelligence: a history of ideas and achievements, Cambridge University Press, 2010.

[11] In common parlance, the term “work” is capable of many meanings and uses. For example, we go to “work”, can be “at work” and when we get there, we do “work”.

[12] See further Macdonald, Burke and Stewart, n 9, pp 15–16. The discretion in selecting from possibilities is “decision making”.

[13] Involving problems that people have already solved in the past and for which the problem-solver immediately recognises a ready-made solution or procedure: see R Sternberg and J Davidson, The nature of insight, MIT Press, 1995, p 4.

[14] See Sternberg and Davidson, ibid.

[15] By “creative work”, the author does not mean imitated creative work, such as where a computer might seek to write music like Beethoven — this is not strictly creative in nature: see Y Harari, Homo deus — a brief history of tomorrow, Harvill Secker, 2015, pp 324–325.

[16] See E Wilson, Consilience — the unity of knowledge, Alfred A Knopf, Inc, USA, 1998, p 115. (Wilson notes that the persistent production of scenarios lacking reality and survival value is called insanity).

[17] It has been postulated, although not universally accepted, that the time dimension of the task is directly proportional to the complexity of the task, and is reflective of “fair” remuneration in a requisite organisation: see generally E Jaques, Requisite organisation: a total system for effective managerial organisation and managerial leadership in the 21st Century, Cason Hall, 1998.

[18] See J Bosland and J Gill, “The principles of open justice and the judicial duty to give public reasons” (2014) 38 Melbourne University Law Review 482. There are of course exceptions, such as for interlocutory-related matters (eg in relation to an award of costs) or an extension of time (503). As observed by his Honour Hayne J in “‘Concerning Judicial Method’ — Fifty Years On”, Fourteenth Lucinda Lecture, Monash University Law School, 17 October 2006: “[T]he fundamental reason for publishing law reports is that the common law is to be found in what the judges of courts of record give as their reasons for decision.”

[19] M Kirby, “In my opinion — legal and ethical issues in artificial intelligence” (1987) 2(2) International Computer Law Adviser 4.

[20] See E Jaques and K Cason, Human capability — a study of individual potential and its application, Cason Hall & Co, 1994, p 10.

[21] As to AI dealing with “incomplete” datasets, see, eg, T Riley, “Artificial intelligence goes deep to beat humans at poker”, Science, 3 March 2017, at, accessed 7 July 2022.

[22] Recently, in Thaler v Commissioner of Patents [2022] FCAFC 62, the Full Court of the Federal Court of Australia found that under the Patents Act, an invention had to come from the mind of a natural person(s) (at [105]). It was said by the Full Court that the grant of the patent rewards the person(s) for their “ingenuity”. (The Full Court noted that the outcome in the proceeding did not address the question of who was the inventor of the AI system, only whether an AI system could be an “inventor” under the Act).

[23] If computers do learn to “read”, this will be truly revolutionary.

[24] Apparently, it taught itself: Harari, n 15, p 321.

[25] Harari, ibid, p 319.

[26] These are touched on in M Chui and P Breuer, “Artificial intelligence in business: separating the real from the hype”, McKinsey & Co podcast, November 2017 at, accessed 7 July 2022.

[27] This would not cover “emotionally” based judgments.

[28] Made in 1965 by Gordon Moore, who co-founded Intel. Moore observed in 1965 (now known as “Moore’s Law”) that every year twice as many transistors could fit onto a computer chip, hence doubling the power of the chip. He adjusted that prediction in 1975 to every two years. The net effect of such change was to bring huge and continuing increases in computing power over very short periods. With the huge increases in computing power over time, computer companies continually built functionality into computers and other devices, culminating (so far) in phenomenal devices such as mobile phones. Although Moore’s Law will not hold indefinitely, computer companies are continually designing alternative ways to bring more computing power into devices.
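The compounding involved is easy to state: on a two-year doubling period, capacity grows by a factor of 2^(n/2) over n years, so two decades yields roughly a thousand-fold increase. A one-line illustrative sketch:

```python
# Moore's observation as arithmetic: with capacity doubling every
# `doubling_period` years, growth over `years` is 2 ** (years / period).

def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    return 2 ** (years / doubling_period)
```

For example, moore_factor(20) gives 1024, ie roughly three orders of magnitude of growth in twenty years.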

[29] A Gore, The future — six drivers of global change, W H Allen, 2013, p 38.

[30] See Tegmark, n 6, p 69.

[31] As to the computational capacity of the brain, see H Moravec, “When will computer hardware match the human brain?” (1998) 1 Journal of Evolution and Technology; as to the equivalent capacity of a computer in 2015, see Tegmark, n 6, p 132.

[32] R Kurzweil, The singularity is near: when humans transcend biology, Viking, 2005. A wilder estimate by that author is that, by 2050, the average computer will have more processing power than all of humanity combined.

[33] For a more detailed explanation of “machine learning”, see Tegmark, n 6, pp 72–73.

[34] See Tegmark, ibid, p 78.

[35] ibid, p 79.

[36] In general terms, this was achieved by AI predicting from each position the probability that white would ultimately win, coupled with a separate network of calculations to predict likely next moves, combining these with a method that searched through a pruned list of likely future-move sequences to identify the next move that would lead to the strongest position down the game.
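In outline, that combination can be caricatured in a few lines: a value function scores positions, a policy function ranks candidate moves, and the search keeps only the top k moves at each step (the “pruned list”). The sketch below is a simplification for illustration only; the real system used trained neural networks for both functions and a far more sophisticated search.

```python
# Caricature of value-plus-policy search with pruning. `value(pos)` estimates
# the probability of ultimately winning from `pos`; `policy(pos, m)` scores
# how likely move `m` is to be played. Both are supplied by the caller here;
# in the real system they were trained neural networks. For simplicity this
# maximises at every ply rather than alternating with the opponent's choice.

def best_move(position, moves, apply_move, value, policy, k=3, depth=2):
    def search(pos, d):
        if d == 0 or not moves(pos):
            return value(pos)
        # keep only the k most promising moves at this ply (pruning)
        pruned = sorted(moves(pos), key=lambda m: policy(pos, m),
                        reverse=True)[:k]
        return max(search(apply_move(pos, m), d - 1) for m in pruned)
    return max(moves(position),
               key=lambda m: search(apply_move(position, m), depth - 1))
```

Even this toy version shows the division of labour: the policy narrows the search, and the value function decides among what remains.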

[37] Experts in the United States are grappling with this. See R Goebel et al, “Explainable AI: the new 42?”, IFIP International Federation for Information Processing 2018, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Canada, 2018 at, accessed 7 July 2022. See also M Grabmair, “Modeling purposive legal argumentation and case outcome prediction using argument schemes in the value judgment formalism”, Doctoral Dissertation, University of Pittsburgh, 2016, (unpublished) at http://dscholarship., accessed 7 July 2022.

[38] See Tegmark, n 6, p 58.

[39] ibid, p 60.

[40] ibid, p 67.

[41] See S Ransbotham, D Kiron and P Gerbert, “Reshaping business with artificial intelligence — closing the gap between ambition and action”, MIT Sloan Management Review in collaboration with Boston Consulting Group, 2017, at p 8 at, accessed 7 July 2022.

[42] ibid.

[43] Ping An Insurance Co of China Ltd.

[44] See Ransbotham, Kiron and Gerbert, n 41, 4.

[45] See Ransbotham, Kiron and Gerbert, ibid, 2.

[46] Hamersley Iron in Western Australia uses this extensively to extract iron ore, involving daily terabits of data being sent to the headquarters in Perth, extracted from sensors on vehicles and elsewhere.

[47] Passing an email around an organisation because it has initially gone to the wrong recipient.

[48] See Tegmark, n 6, p 39 and the associated definitions used therein — for example, in contrast to “narrow intelligence”, “general intelligence”.

[49] See generally IBM, IBM Watson Products and Solutions, at, accessed 7 July 2022.

[50] See Stone et al, n 4, p 4: “Contrary to more fantastic predictions for AI in the popular press, the study panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining, long-term goals and intent have been developed, nor are they likely to be developed in the near future.”

[51] See Macdonald, Burke and Stewart, n 9, p 15.

[52] See K Ashley, Artificial intelligence and legal analytics — new tools for law practice in the digital age, Cambridge University Press, 2017.

[53] Ashley, ibid.

[54] ibid.

[55] ibid.

[56] See generally Ashley, ibid. The text explores in depth the current analytics, the roadblocks to its further development and the possible future for the discipline. See more recently the excellent article K Ashley, “Automatically extracting meaning from legal texts: opportunities and challenges” (Summer 2019) 35(4) Georgia State University Law Review Art 3. In the article, Ashley notes (1120) the impressive new applications of legal text analytics for contract review, litigation support, legal information retrieval, and legal question and answer tasks. However, the author also notes significant constraints yet to be overcome: the analytics programmes cannot extract legal rules in logical form from statutory texts, explain answers given, or reason robustly about how different circumstances would affect those answers. The author contends (1120) that: “To some extent, these limitations are temporary.”

[57] See LexisNexis, “The future is insight: connecting the dots with legal analytics”, 2017, at, accessed 7 July 2022.

[58] For example, LexisNexis ingests vast quantities of data — 13 million new documents daily from more than 50,000 data sources and has more than 60 billion documents and 2.5 petabytes of legal data stored in its legal big data platform: LexisNexis, ibid, p 3.

[59] ibid, p 8.

[60] For example, in the case of LexisNexis and its Lex Machina product.

[61] This gives rise to a discrete idea: if AI can deal with evidence and apply the law to it, that automated system would have significant ramifications for the notion of the “legal profession”.

[62] See, eg, N Ferguson, The great degeneration — how institutions decay and economies die, Penguin Books, 2014 at pp 79–80, citing the English Lord Chief Justice, T Bingham, The rule of law, Penguin Press, 2010.

[63] A M Gleeson, “Courts and the rule of law” in C Saunders and K Le Roy (eds), The rule of law, Federation Press, 2003 at p 178.

[64] Gleeson, ibid, citing Commissioner of Taxation (Cth) v Westraders Pty Ltd (1980) 144 CLR 55, at 60 (Barwick CJ) (in relation to the interpretation of tax legislation).

[65] See O Dixon, “Concerning judicial method” in Jesting Pilate and other papers and addresses, 2nd ed, W S Hein, 1997 at pp 152, 165, as analysed by Hayne, n 18, pp 11–12.

[66] See J Middleton, “Statutory interpretation — mostly common sense?” (2017) 40(2) Melbourne University Law Review 626.

[67] For example, accepted despite having been obtained illegally: Evidence Act 1995 (Cth) s 135.

[68] L Branting, Reasoning with rules and precedents: a computational model of legal analysis, Kluwer, 2000.

[69] G Nettle, “Technology and the law” (2017) 13(2) TJR 185; Handbook for Judicial Officers, Judicial Commission of NSW, 2021.

[70] Nettle, ibid, citing J Stone, Legal system and lawyers’ reasonings, Stanford University Press, 1964, p 319. As Kiefel CJ of the High Court of Australia has recently stated “it is a human ability to evaluate complex evidence and apply nuanced legal reasoning to cases past and present with competing possible outcomes”: cited by Allsop, “Technology and the future of the courts”, n 3.

[71] March v E & M H Stramare Pty Ltd (1991) 171 CLR 506. The common sense approach is in fact filled with complexity: see J Stapleton, “Reflections on common sense causation in Australia”, in J Degeling, S Edelman and T Goudkamp (eds), Torts in commercial law, Thomson Reuters, 2011 at pp 331–365; see, eg, Wrongs Act 1958 (Vic) ss 51, 52.

[72] Hunt & Hunt Lawyers (a firm) v Mitchell Morgan Nominees Pty Ltd (2013) 247 CLR 613 at [57].

[73] With the law sometimes reaching for and drawing on other normative standards such as from religion: see Donoghue v Stevenson [1932] AC 562.

[74] Although definitions of it have been posed in computer science literature: see T Nachbar, “Algorithmic fairness, algorithmic discrimination” (2021) 48 Florida State University Law Review 509 at 514. The article recognises, at p 515, that there is no widely held normative or legal concept of computational “fairness” (see also pp 523–525). The author regards it as “unlikely” that a comprehensive concept of “fairness” can be expressed in a suitably concrete form to permit computational decision-making.

[75] For example, consistency, bias suppression, accuracy of information, correctability, representativeness, and ethicality: see J Thornton, “Cost, accuracy, and subjective fairness in legal information technology: a response to technological due process critics” (2016) 91 New York University Law Review 1821 at 1841–1842.

[76] See the approach of the High Court in Fortress Credit Corp (Australia) II Pty Ltd v Fletcher (2015) 254 CLR 489; Tabcorp Holdings Ltd v Victoria [2016] HCA 4.

[77] Note Carneades, a computational model of legal argument, outlines modelling of the burden of proof: T Gordon, H Prakken and D Walton, “The Carneades model of argument and burden of proof” (2007) 171(10–15) Artificial Intelligence 875.
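To give a flavour of what such modelling involves, a proof standard can be expressed as a comparison of the weights of pro and con arguments. The sketch below is a loose illustration in the spirit of that work, not the actual Carneades formalism, and the numeric thresholds are invented.

```python
# Loose illustration of proof-standard evaluation over weighted arguments.
# The 0.2 / 0.3 / 0.7 / 0.9 thresholds are invented for demonstration;
# the Carneades model defines its standards differently and more precisely.

def satisfies(pro: list[float], con: list[float], standard: str) -> bool:
    """Is a claim proved, given pro/con argument weights in [0, 1]?"""
    max_pro, max_con = max(pro, default=0.0), max(con, default=0.0)
    if standard == "preponderance":            # strongest pro outweighs strongest con
        return max_pro > max_con
    if standard == "clear_and_convincing":     # and by a substantial margin
        return max_pro > max_con + 0.3 and max_pro > 0.7
    if standard == "beyond_reasonable_doubt":  # and no serious counter-argument
        return max_pro > max_con + 0.3 and max_pro > 0.9 and max_con < 0.2
    raise ValueError(standard)
```

On this toy model, the same pair of arguments can satisfy the civil standard while failing the criminal one, which is the structural point such formalisms capture.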

[78] Note that a great deal of work in relation to AI and the law focuses on modelling non-deductive legal reasoning: see the case-based reasoning models described in Ashley, n 52.

[79] In Ying Mui Pty Ltd v Hoh (No 7) [2018] VSC 214, Vickery J of the Supreme Court of Victoria noted (at [16]) that the decision to grant a declaration could never be made by artificial intelligence.

[80] Tegmark, n 6, p 105. For an excellent analysis of this question, see T Sourdin, “Judge v robot? Artificial intelligence and judicial decision-making”, Handbook for Judicial Officers, Judicial Commission of NSW, 2021.

[81] ibid, p 106.

[82] See, eg, T Wu, “Will artificial intelligence eat the law? The rise of hybrid social-order systems” (2019) 119 Columbia Law Review 2001; as to other articles on this topic, see R Stern et al, “Automating fairness? Artificial intelligence and the Chinese courts” (2021) 59 Columbia Journal of Transnational Law 515, n 1.

[83] ibid at 517.

[84] As to whether AI can explain why it has done what it has done/recommended, see C Kuang, “Can AI be taught to explain itself?”, The New York Times, 21 November 2017, at, accessed 13 July 2022.

[85] See Nachbar, n 74, at 519.

[86] Thornton, n 75, at 1846.

[87] As reported at Y Jie, “China’s courts look to AI for smarter judgments”, Sixth Tone, 18 November 2016 at, accessed 13 July 2022.

[88] Stern, n 82, at 529–530, n 45, n 48.

[89] ibid at 531.

[90] For further background to the project, see ibid, at 541. The software as developed over time has been the subject of substantial criticism: ibid, at 543–544.

[91] ibid at 518.

[92] ibid at 520.

[93] As to how fMRI works, see S Watson, “How fMRI Works”, HowStuffWorks, 1 October 2008, at, accessed 13 July 2022.

[94] Harari, n 15, p 314.

[95] There are, of course, real evidentiary issues with this — a defendant might appear to be truthful according to the analysis but be mistaken.

[96] See, eg, F Fernandez-Armesto, Truth, a history and guide for the perplexed, St Martin’s Press, 2001.

[97] As to the problems with probabilistic assessments of the probative value of items of evidence, see H Ho, “The legal concept of evidence”, The Stanford Encyclopedia of Philosophy (Winter 2021 edn), E Zalta (ed), at, accessed 13 July 2022.

[98] Dasreef Pty Ltd v Hawchar (2011) 243 CLR 588; Evidence Act 1995 (Cth) s 79.

[99] As identified in LexisNexis, “Lawyers and robots? — conversations around the future of the legal industry”, Research and white papers, 2017, at, accessed 13 July 2022, a major problem with using data as evidence (extracted by AI) is in producing an audit trail that demonstrates its integrity from the moment it was “collected” to the time it is produced to the court. This is particularly problematic with distributed processing storage, such as the use of the Cloud (observations by C Reed, University of London).

[100] Justice Forrest appointed two experts to assist him with highly technical electrical engineering matters in the Kilmore fire class action: Matthews v SPI Electricity Pty Ltd (No 19) [2013] VSC 180. See generally in this area P Vickery, “New horizons for the bar in the age of technology”, ABA Conference, Dublin, 7 July 2017 at pp 6–9, 17ff, at, accessed 13 July 2022.

[101] Harari, n 15, p 322.

[102] AI tools in these and other areas are outlined in M Zalnieriute, “Technology and the courts: artificial intelligence and judicial impartiality”, Submission to the ALRC Review of Judicial Impartiality, 16 June 2021 at, accessed 9 August 2022.

[103] It has been argued to the contrary that transparency may be unnecessary for accountability, as discrimination law provides a remedy for unfairness: see Nachbar, n 74, at 509.

[104] Gleeson, n 63, p 188.

[105] ibid.

[106] Noting, however, that in the area of injunctions, the legal principles remain relatively broad, such as considerations of “the balance of convenience”.

[107] R French, “Perspectives on court annexed alternative dispute resolution”, Law Council of Australia Multi-Door Symposium, Canberra, 27 July 2009, at, accessed 13 July 2022.

[108] An example raised by Kirby, n 19, p 6. Other examples of the use of an “AI tribunal” are those provided by the Canadian Civil Resolution Tribunal which deals with small civil disputes (under $5,000) and small property issues, and plans in the future to deal with car accident cases and personal injury claims: see Allsop, “Technology and the future of the courts”, n 3, 7. See also A Reiling, “Courts and artificial intelligence” (2020) 11(2) International Journal for Court Administration 8.

[109] T Sourdin, “Justice and technological innovation” (2015) 25 JJA 96 at 101.

[110] There is a similar product that was developed in France, WeClaim, which helps people make small claims and participate in class actions — it has been translated into four languages. This product is said not to be AI per se but rather “a logic tree” product: see Artificial Lawyer, “French legal start-up, WeClaim, pioneers semi-automated litigation”, 12 December 2016, at, accessed 13 July 2022.

[111] It is said that over 150,000 parking fines have been successfully overturned this way: R Susskind, Tomorrow’s lawyers: an introduction to your future, Oxford University Press, 2017, ch 5. In the criminal area, a similar product exists, LawBot, developed by Cambridge students. LawBot is a chatbot that provides free advice to victims of crime. It covers 26 criminal offences; however, it is not designed to replace a lawyer or take a case forward.

[112] As to access to justice and the role AI might play generally, see J Tito, “How AI can improve access to justice”, Centre for Public Impact, 23 October 2017, at, accessed 13 July 2022.

[113] See Artificial Lawyer, “What Joshua Browder’s Equifax claims tool means for lawyers”, 13 September 2017, at, accessed 13 July 2022.

[114] Susskind, n 111.

[115] See generally M Legg, “The future of dispute resolution: online ADR and online courts” [2016] University of New South Wales Faculty of Law Research Series 71; see Thornton, n 75, at 1829–1832. The article deals generally with “Automated Decision Systems” (ADSs).

[116] R Susskind and D Susskind, The future of the professions — how technology will transform the work of human experts, Oxford University Press, 2015.

[117] See Artificial Lawyer, “Paris Bar Incubator calls for applicants for 2018 innovation prize”, October 2018, at, accessed 13 July 2022. See more recently P Motteau, “Is France becoming the vanguard of civil LegalTech?”, Legal Business World, 22 January 2020, at, accessed 13 July 2022.

[118] As to system design and ownership, see Macdonald, Burke and Stewart, n 9, pp 235–252.

[119] See Legg, n 115.

[120] For an extensive analysis on online solutions, see Legg, ibid, at 76–80.

[121] For an excellent article on the topic, however, see J Blass, “Observing the effects of automating the judicial system with behavioral equivalence” (2022) 73 South Carolina Law Review 825. See also Macdonald, Burke and Stewart, n 9.

[122] D Remus and F Levy, “Can robots be lawyers? Computers, lawyers, and the practice of law” (2017) 30(3) Georgetown Journal of Legal Ethics 501.

[123] See Luminance at, accessed 13 July 2022. The board of directors of Luminance is extremely impressive: see, accessed 13 July 2022. For a summary of other major AI applications in the legal services area, see D Faggella, “AI in law and legal practice — a comprehensive review of 35 current applications”, Emerj, March 2020, at, accessed 13 July 2022.

[124] See Luminance, n 123.

[125] See H Son, “JPMorgan software does in seconds what took lawyers 360,000 hours”, Bloomberg, 28 February 2017, at, accessed 13 July 2022.

[126] For a more general indication of the developing scope of AI in legal work generally, see Artificial Lawyer at, accessed 13 July 2022.

[127] See J Markoff, “Armies of expensive lawyers, replaced by cheaper software”, New York Times, 4 March 2011, at, accessed 13 July 2022; Sourdin, n 109, at 103.

[128] See, eg, Pyrrho Investments Ltd v MWB Property Ltd [2016] EWHC 256 (Ch); Irish Bank Resolution Corp Ltd v Quinn [2015] IEHC 175; G Cormack and M Grossman, “Evaluation of machine-learning protocols for technology-assisted review in electronic discovery”, SIGIR 2014: Proceedings of the 37th international ACM SIGIR Conference on Research and Development in Information Retrieval, 2014 at, accessed 14 July 2022; see Vickery, n 100, p 25 ff.

[129] Performed by Ashley, n 52.

[130] It is currently beyond the ability of AI products in the market to produce lucid and highly accurate written advice in relation to specific legal questions posed, hence the need for human “intervention”.

[133] The implication, of course, is that a human must go and look for the other authorities using some other means, such as textbooks.

[134] Blue Hill Research, n 132, p 11.

[135] Susskind, n 111, ch 5. See further D Katz, M Bommarito II and J Blackman, “A general approach for predicting the behavior of the Supreme Court of the United States”, 2017, at, accessed 14 July 2022.

[136] For a company providing such services in the United States, see Premonition, at, accessed 14 July 2022.

[137] AI tools have nevertheless been developed, at least at the experimental level, in other jurisdictions, such as in Australia in relation to migration cases: these and other tools are outlined in Zalnieriute, n 102, at 6.

[138] AI tools in these and other areas are outlined in Zalnieriute, n 102, at 7.

[139] Possibly operating through so-called Blockchain technology. As to this aspect, see further R Kemp, “Legal aspects of artificial intelligence” at, accessed 14 July 2022.

[140] Cost competitiveness is one of the three main drivers of change firms must embrace to remain successful: see, eg, Susskind, n 111, ch 2.

[141] See Zero at, accessed 14 July 2022.

[142] LexisNexis, “Human v cloud — 2017 LexisNexis roadshow report”, 2017, at, accessed 14 July 2022. The report examines (on a survey basis) how professionals are faring in a landscape dominated by technological disruption and what impact this has had on ways of working.

[143] See Deloitte, “The legal department of the future: how disruptive trends are creating a new business model for in-house legal”, 2017, at, accessed 14 July 2022.

[144] LexisNexis and JadeNet products, as well as AustLII, have essentially solved the manual “legal research” problems of the past.

[145] See, eg, Inc.Technology, “Five growing artificial intelligence startups you need to know about”, 25 July 2017, at, accessed 14 July 2022.

[146] For an outline of this, see Kemp, n 139, pp 18–21.

[147] Is it the sound system used to hear other cars, the sensors used to “sense” other cars, the satellites used to position the car, the owner of the algorithm used to operate the car etc?

[148] Chui and Breuer, n 26.

[149] The premiums for which should in theory be very low because driverless cars will not crash!

[150] Reiling, n 108.

[151] As to some of the United Kingdom aspects of intellectual property law relating to AI, see Kemp, n 139, pp 22–23.

[152] As to which, see, eg, M Mann and M Smith, “Automated facial recognition technology: recent developments and approaches to oversight” (2017) 40(1) UNSW Law Journal 121 at, accessed 14 July 2022.

[153] See, for example, P Mozur et al, “In China, an invisible cage”, The New York Times, 28 June 2022 at, accessed 9 August 2022.

[154] IBM has said that it has come up with a “curriculum” with respect to its legal cognitive product ROSS (which is based on its more general platform, Watson), which has “helped Watson understand and comprehend the law”: A Sills, “ROSS and Watson Tackle the Law” at, accessed 14 July 2022; the reference to understanding and comprehension may well be significantly overstated.

[155] Ransbotham, Kiron and Gerbert, n 41, p 3.

[156] Kemp, n 139, p 8, as noted in G Greenleaf, “Technology and the professions: utopian and dystopian futures” (2017) 40(1) UNSW Law Journal 302, fn 34 at, accessed 14 July 2022.

[157] Ransbotham, Kiron and Gerbert, n 41, p 16.

[158] ibid.

[159] Chui and Breuer, n 26. In 2018, Accenture stated that by 2020 the AI market would exceed $40 billion, quoting the CEO of Microsoft, S Nadella, as stating “AI is the ultimate breakthrough technology”: Accenture Applied Intelligence, n 6.

[160] See Y Shoham et al, “Artificial Intelligence Index — 2017 annual report” (AI Index) at, accessed 17 July 2022. The Index comes out of the Stanford University work referred to in n 2.

[161] P Begley and D Wroe, “Future warfare: when robots join the battle, who decides who lives and dies?”, The Sydney Morning Herald, 11 November 2017 at, accessed 14 July 2022. See more recently the excellent outline on this topic by M Perry, “Automated weaponry and artificial intelligence: implications for the rule of law” [2017] FedJSchol 1 at, accessed 14 July 2022.

[162] Kirby, n 19.

[163] Tegmark, n 6, 42.

[164] For a discussion on developing AI technology “inhouse” or otherwise, see LexisNexis, n 99.

[165] Nettle, n 69.