Technology and the law[1]

The Honourable Geoffrey Nettle AC QC[2]

The paper discusses the use of computer-based technology with a particular focus on computational law systems that can make the intellectual decisions which fashion and determine the outcome of a case. The author explores the uses of computational law in the area of discovery, citing an example from Ireland, the use of artificial intelligence by Australian law firms, and trials conducted in the family law system.

A lot has been written and said about the use of computer-based technology in court rooms and the likely effects of it on the way counsel conduct litigation and judges and juries determine the outcome.[3] Much of that discourse is valuable and some of it is interesting. Essentially, however, it is concerned with the electronic or, more accurately, digital means of storing and presenting information in accordance with intellectual decisions made by counsel as to what they consider to be relevant to the case in hand. As Professor Tania Sourdin has written, these developments reflect the first and second levels of technological innovation in the justice system.[4]

Today, I want to look at an aspect of computer-based technology which I think to be more interesting, and that is computational law systems that can make the intellectual decisions which fashion and perhaps ultimately determine the outcome of a case. This, in Sourdin’s nomenclature, reflects the third level of technological innovation.

To start with some definitions, “computational law” means different things to different people. For the purposes of this discussion, I propose to adopt the definition of computational law propounded by Nathaniel Love and Michael Genesereth in their paper on “Computational law” which was delivered in June 2005.[5] They described computational law as:[6]

an approach to automated legal reasoning focusing on semantically rich laws, regulations, contract terms, and business rules in the context of electronically-mediated actions.

They added that:[7]

A representation language for computational law must enable processing of both semantic data and multiple, semantically rich rule sets in the context of a formal model of behaviour.

In substance, therefore, what I mean by computational law for the purpose of this exercise is the algorithmic application of complex sets of fixed rules to data sets that represent facts. Both the rules and the facts are originally expressed in words but, for the purposes of the exercise, are recoded into the representation language; the conclusion which results is first expressed in the representation language and then finally recoded into words.

At the outset, I should also stress that computational law is by no means confined to the future. As Love and Genesereth observed in their paper, we had even then reached the point where, if a computational law system were supplied with sufficient semantic data and rules, it could structure transactions that were valid with respect to complex behavioural constraints without the need for assistance from a lawyer.[8] By way of example, they cited the application of a computational law system to a university procedure for advising students on registering for a final semester. That system could automatically apply data from the student’s academic record to departmental course pre-requisites and breadth requirements in order to determine which subjects the student was permitted to take.[9]
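By way of illustration only, the following sketch in Python shows the general shape of that kind of rule application. The subjects, prerequisites and student record are invented for the example and are not drawn from Love and Genesereth’s system; the point is simply that encoded rules are applied mechanically to encoded facts to yield a conclusion.

```python
# A minimal sketch of algorithmic rule application of the kind described above.
# The subjects, prerequisites and student record are invented for illustration.

PREREQUISITES = {
    "Advanced Evidence": {"Evidence"},
    "Taxation Law": {"Contracts", "Corporations"},
    "Legal Theory": set(),  # no prerequisites
}

MINIMUM_BREADTH_CREDITS = 1  # an invented "breadth requirement"

def permitted_subjects(completed_subjects, breadth_credits):
    """Apply the encoded rules to the encoded facts (the student's record)
    and return the conclusion: the subjects the student may enrol in."""
    if breadth_credits < MINIMUM_BREADTH_CREDITS:
        return []
    return [subject for subject, prereqs in PREREQUISITES.items()
            if prereqs <= completed_subjects]

# The "facts", recoded from the student's academic record.
record = {"Contracts", "Evidence", "Corporations"}

print(permitted_subjects(record, breadth_credits=2))
# ['Advanced Evidence', 'Taxation Law', 'Legal Theory']
```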

More to the point, however, such computational law systems are now having an impact on the practice of law in this country. Some large law and accounting firms are already using software that exhibits features of computational law to assist with discovery.[10] To a greater or lesser degree, it can make informed decisions about the documents which are relevant to a matter in issue and those which are not. I note in passing too that the High Court of Ireland expressly approved the use of so-called “Technology Assisted Review” in a large commercial insolvency matter.[11]

Meanwhile, other firms are moving into the field of computer-generated legal advice. At the Law Tech Summit held at Noosa in 2013, delegates were told of applications that enabled clients to enter a series of facts and receive the probable outcome of a legal matter based on relevant case law and statutes.[12] In 2014, Chris Merritt reported in The Australian newspaper[13] that an east coast law firm called Plexus had unveiled what it said was the nation’s first commercial use of artificial intelligence to provide legal advice on the requirements for setting up a trade promotion. He noted that Slater and Gordon had developed a similar product that dealt with unfair dismissal claims, albeit that their system led to a face-to-face interview with a solicitor. Mr Merritt went on to report that the Plexus system was capable of producing online advice in the space of about 10 minutes compared to the six or so weeks which it would have taken a lawyer to prepare, and that the machine could do the job at about 20–30% of the cost of the lawyer.

In similar vein, as some of you will know, judges and judicial registrars of the Family Court of Australia and family law practitioners have been trialling a system called “Split-Up”, which Sourdin describes as a “hybrid rule-based neural network system” that can generate advice on how property from a marriage would likely be settled if the matter were determined by the court.[14]

Further, if it is not already the case, it is unlikely to be long before computational law systems that are capable of producing do-it-yourself wills, trust deeds, superannuation fund set-ups, business contracts, conveyancing documents, intervention order applications and other family law processes become widely available in Australia.[15]

As those systems become available, they will enable individuals to attend to a significant part of their legal affairs without the intervention of a solicitor, just as the development of computational software of the kind operated by the Commissioner of Taxation enables thousands of people now to complete an online tax return without the intervention of an accountant.[16] And, as the range and accessibility of computational law programs develop, it is not improbable that a significant part of what at present comprises the bread and butter of high street solicitors’ practices will be eliminated.[17]

A further point worth considering is that, paradoxically, the increase in electronically mediated transactions and computer-generated legal advice may also tend to reduce commercial disputes and, therefore, commercial litigation. History suggests that the public may be inclined to give greater credence to advice produced by a computer than to advice produced by a human being, albeit that the computer incorporates some if not all of the prejudices and inadequacies of the human beings who create it, program it and inform it.

As an illustration of the point, some of you will recall that, when DNA technology was relatively new and the allele readings and probability calculations were manually produced, there were often significant forensic disputes about them with both sides calling expert evidence and testing it at length. Then came computer-generated spectrograph readings and computer-generated probability calculations with the result that there is now seldom a dispute about the results of DNA testing. And, significantly, that is so even though the computers which generate the results are to a large extent infused by their creators with the same kinds of prejudices and predilections as used to be productive of dispute when the readings were manually produced.

In the same vein, at least in simpler matters, technology is likely to increase the incidence of self-represented litigants. In a 2001 paper entitled “Advisory systems for pro se litigants”,[18] L Karl Branting proposed a framework for developing computer advisory systems that used then-existing inference, document-drafting and interface design techniques. He described an example which had already been installed at public expense for pro se litigants in protection order applications in the Idaho Supreme Court. He contended that similar applications could be developed for use in other areas of law and predicted that improvements in interface design, including multi-language text, speech output and web delivery, would greatly increase the acceptance of these applications.

There is little reason to doubt that the same kinds of developments will occur in this country and, if so, that the number of lawyers needed to be involved in simple forms of litigation and possibly also the number of judges and other judicial officers required to decide such cases may be reduced.

Are there then any aspects of litigation as we know it that are likely to survive the effects of computational law? I am inclined to think that two stand out. The first is litigation involving disputed facts and the second is litigation involving the application of open-textured laws.

For the time being, I conceive it to be unlikely that computational law will have much impact on cases involving disputed facts, if only because of the vast range of variables involved in human fact finding and, therefore, the immensity of the task of constructing the kind of algorithms and databases which might conceivably replicate those functions. It is one thing to use an algorithm or a combination of algorithms to apply a complex rule set to an established and accepted set of facts. But, where facts are disputed, and so must be determined on the basis of evidence, the presentational dimension of evidence (especially oral evidence) and the intellectual processes involved in its evaluation and interpretation (whether by judge or jury) are so complex and so much informed by human intuition and experience as to defy synthesis by any presently available artificial intelligence system.[19]

In a paper presented at the Cambridge Centre for Public Law Conference in 2014,[20] Perry J of the Federal Court of Australia wrote of the watershed moment in 1997 when IBM’s supercomputer, “Deep Blue”, defeated world chess champion Garry Kasparov and so demonstrated that computers could make decisions that outperform the best of human minds. But, as her Honour remarked, in order to do that, the computer had to be programmed from the outset with a full history of Kasparov’s previous public matches and style and, between each game, a team of chess experts and programmers were required to alter and improve the program to accommodate what Kasparov had just done in the previous game.

One can see how that sort of computer engineering might one day be applied to the assessment of oral evidence. Put in enough of the known facts and data concerning the style of the witness, take a break every couple of minutes to enable a team of experts to update the database and vary the program to accommodate what the witness has just said, and then proceed seriatim until the witness’s evidence is concluded.

Possibly, too, if that could be done, the outcome would be more reliable than the unaided assessment of a judge or jury. Human beings have limited attention spans but machines just keep on going. It is conceivable that with access to sufficient statistics, a computer could make more accurate determinations about the probability of certain kinds of behaviour than human beings would be likely to do.

But, inevitably, the reliability of computer assessment of evidence, particularly oral evidence, would depend on the quality of the team of experts, not to mention the validity of the algorithms, and also on whether the breaks were sufficiently close in time to avoid something being overlooked or miscoded along the way. And, as matters stand, such a process would surely cost vastly more than the conventional assessment of the evidence by a judge or jury and it would almost certainly take a great deal longer.

That is not to deny that, with enough time, enough data storage and the encoding of enough human sensory perceptions and behavioural characteristics into algorithmic functions, there will one day be produced a computational law system which, without need of any further adjustment, is able to do at least as good a job in assessing oral evidence as a judge or jury.[21] For example, if the question were whether an accused had committed a violent assault, and if it were established that the accused was heavily intoxicated at the time of the alleged offence, a sufficiently comprehensive statistical correlation between heavy intoxication and the propensity to violence might, in light of other known factors (for example, past criminal conduct), enable a computer to reach a sounder assessment of guilt than would a jury.

Equally, if the issue were whether DNA found at the scene of a crime was a sufficient match to an accused’s DNA, a rationally programmed computer applying Bayesian analysis would inevitably avoid logical errors like the so-called prosecutor’s fallacy[22] of treating the probability of a random match as if it were the probability that the accused is innocent.[23]
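The point can be made concrete with a small worked example. The figures below are invented and the calculation is deliberately simplified (it ignores, for instance, the possibility of laboratory error), but it shows how a Bayesian calculation keeps the random match probability separate from the probability of innocence.

```python
# A simplified sketch of the Bayesian point, using invented figures.
# Suppose the random match probability (the chance that a randomly chosen
# innocent person's DNA would match the crime-scene sample) is 1 in 1,000,000,
# and suppose that 500,000 people could conceivably have left the sample.

random_match_prob = 1 / 1_000_000   # P(match | innocent)
population = 500_000                # plausible pool of alternative sources
prior_guilt = 1 / population        # prior P(guilt) before the DNA evidence
p_match_given_guilt = 1.0           # assume the true source always matches

# The prosecutor's fallacy treats P(match | innocent) as P(innocent | match):
fallacious_p_innocent = random_match_prob   # about one in a million (wrong)

# Bayes' theorem gives the posterior probability of guilt given the match:
p_match = (p_match_given_guilt * prior_guilt
           + random_match_prob * (1 - prior_guilt))
posterior_guilt = p_match_given_guilt * prior_guilt / p_match
correct_p_innocent = 1 - posterior_guilt    # roughly one in three

print(f"Fallacious P(innocent | match): {fallacious_p_innocent:.7f}")
print(f"Correct    P(innocent | match): {correct_p_innocent:.2f}")
```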

Such statistically based computer-aided analysis of evidence might also prove useful in civil matters. Consider, for example, the possible application of computer analysis of evidence to a tax case in which the question is whether the taxpayer had acquired an asset as part of a profit-making undertaking or plan within the meaning of s 15-15 of the Income Tax Assessment Act 1997 (Cth) or whether a particular transaction is a tax avoidance scheme within the meaning of Pt IVA of the Income Tax Assessment Act 1936 (Cth). Given sufficient statistics on the correlation between the kind of transaction under consideration and cases in which it has been established that such a transaction was entered into as part of a profit-making undertaking or plan or as part of a tax avoidance scheme, a computer’s determination of the probability of the subject transaction being part of such a plan or scheme might well prove more reliable than any human interpretation of the oral and written evidence offered on behalf of the taxpayer.

Of course, developments of that kind would require some significant modifications to the current law relating to tendency and coincidence evidence. But it is foreseeable that such amendments might be forthcoming. There is nothing new in amending the various Evidence Acts to facilitate advances in computer technology. The computer-generated documentary evidence provisions have been in force for decades.[24]

It is questionable, however, whether society would accept that the outcome of litigation should be determined by computer assessment of oral evidence; especially in criminal litigation. It is one thing to receive and value computer-generated legal advice as a working approximation of a possible outcome generated by the application of established rules to assumed facts. It is acceptable because in essence it is little different from the kind of legal advice which is produced by human beings. But it would be quite another thing for litigants to accept a computer’s assessment of their credit and reliability, and still more so a computer’s assessment of their credit and reliability relative to that of opposing witnesses.[25] In the federal sphere, there are also the requirements of s 80 of the Constitution to be accommodated.[26]

Either way, the scale of the technical improvements required to make an accurate assessment of evidence suggests that computer-aided analysis of evidence, particularly oral evidence, is still a fair way off.

That brings me to the application of computational law to open-textured rules; by which I mean, for example, whether something was reasonably foreseeable or whether a transaction is unconscionable or whether a contract term is unfair or whether an act is a breach of good faith or whether a distribution of liabilities is just and equitable.

For similar, although not identical, reasons, it appears that the application of computational law to cases involving open-textured rules will prove problematic. As Branting noted in his earlier work, “Reasoning with rules and precedents: a computational model of legal analysis”,[27] there was, at the time of writing in 2000, a “broad consensus within the automated legal reasoning community that rule-based reasoning was insufficient to model the problem solving of attorneys because of the problem of open-textured legal predicates”. He cited[28] as an example of those shortcomings a rule-based system called LDS for determining the settlement value of personal injury claims. It was designed to “chain” forward from the facts of a new case to five distinct factors bearing on settlement value, including the loss suffered by the plaintiff, the likelihood of establishing liability and the relative degrees of responsibility of the plaintiff and the defendant. It then combined those factors to produce an estimate of the expected judgment. But the limitation of the system was that, whenever it came to a question of whether an open-textured predicate was satisfied, such as whether the particular use of a product was foreseeable, it forced the user to determine whether the predicate was satisfied as part of the data input into the system.
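The limitation Branting describes can be sketched crudely as follows. The factors, figures and combination rule below are invented for illustration and do not reproduce LDS’s actual rules; the point is only that the open-textured predicate (here, foreseeability of the use) cannot be derived by the system and must be supplied as an input by the user.

```python
# A crude sketch of a forward-chaining settlement-value estimator in the
# general style the text describes.  The factors and numbers are invented;
# note that the open-textured question of foreseeability is an input, not
# something the rules can decide.

def settlement_estimate(loss, liability_likelihood, plaintiff_responsibility,
                        use_was_foreseeable):
    """Chain forward from the input facts to an estimated settlement value."""
    if not use_was_foreseeable:       # open-textured predicate, decided by the user
        return 0.0
    expected_judgment = loss * liability_likelihood
    # Reduce the estimate by the plaintiff's share of responsibility.
    return expected_judgment * (1 - plaintiff_responsibility)

# The user must answer the open-textured question before the system can run.
print(settlement_estimate(loss=200_000,
                          liability_likelihood=0.7,
                          plaintiff_responsibility=0.25,
                          use_was_foreseeable=True))   # 105000.0
```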

The nub of the problem is the difference between the process of scientific reasoning and the process of legal reasoning.[29] Hitherto, the methodology of computational law has been the methodology of scientific positivism.[30] At base, that knows nothing of introspective notions of interpretive knowledge or metaphysics or theology. When applied to determine the outcome of a case, it assumes that there can only ever be one proper outcome and that its identification requires no more than the application of logic and reason to what has previously been decided. Yet, as lawyers know, where a law is open textured, logic and reason (as applied under the rubric of legal reasoning) will often yield more than one possible outcome; and especially in the absence of hard precedent. The significance of similarities and differences between cases is determined by normative processes.[31] The selection of the proper outcome requires an epistemology beyond empiricism and scientific method. As Julius Stone concluded in “Legal System and Lawyers’ Reasonings”, judicial decision-making involves “acts of will, as well as of cognition”[32] and it necessitates “the dedication of a certain part of lawyers’ conscious concern to study of the various criteria of choice made available by earlier thought, and of the relevance of the facts of contemporary social contexts to the doing of justice”.[33]

Two examples may assist in demonstrating the point. Consider first Lord Atkin’s formulation of liability in negligence[34] as based upon a general public sentiment of moral wrongdoing for which the offender must pay, subject to the qualification that, because of the need to contain liability, “[t]he rule that you are to love your neighbour becomes in law, you must not injure your neighbour; and the lawyer’s question, Who is my neighbour? receives a restricted reply. You must take reasonable care to avoid acts or omissions which you can reasonably foresee would be likely to injure your neighbour”. That famous piece of legal reasoning bespeaks the interpretivist invocation of metaphysics and theology and the application of the norms, values and symbols of the Judeo-Christian imperative to love thy neighbour as thyself.[35]

Consider then Sir Owen Dixon’s extrajudicial address “Concerning judicial method”,[36] written almost 20 years after Donoghue v Stevenson, in which his Honour proposed a means of escaping the excesses of the rule in Foakes v Beer[37] by the device of extending the existing doctrine of estoppel in pais beyond a misrepresentation of existing fact to an assumed conventional basis of legal dealing; a concept which, it will be recalled, ultimately found favour with Mason CJ and Deane J in The Commonwealth v Verwayen.[38] Pertinently, for present purposes, Sir Owen spoke of the judicial warrant for going down that path of development in terms of the court “shar[ing] the feeling that there is something wrong with the conclusion” that precedent dictated and that there is much that “a court animated by [that] feeling” might do and yet not depart from the traditional method of judicial reasoning.[39] That famous example of extra curial reasoning bespeaks the repudiation of the one possible result dictated by positivism in favour of one of a number of possible results which may flow from an exercise in introspectivism grounded in the norms, values and symbols of intuitive knowledge.

Present-day computational law systems are incapable of replicating processes of those kinds and it is likely to take some time before they can. In the early 2000s, a group of law professors and computer scientists gathered, first in Amsterdam and then in New York, to discuss the capacity of artificial intelligence to make contributions to evidence, inference and proof in litigation. Out of those conferences came a body of research that suggested that introspective legal reasoning could potentially be performed by computers.[40] One author posited how fuzzy-logic methodologies could be made to replicate the way that legal reasoning involves matters of perception and degree rather than the precise numerical measurements ordinarily associated with computer programs.[41] Others referred to various, presumably eventually surmountable, obstacles, such as the construction of a body of common-sense reasoning data for a program to draw upon in circumstances where most legal decisions do not explicitly identify the common-sense chains of reasoning underlying the decision.[42] Crucially, however, the research demonstrated that, although abductive reasoning (the “method of reasoning that leads to truly new findings”)[43] could be performed successfully by computers in a legal evidentiary context,[44] the programs that perform the “creative” or “imaginative” aspects of such reasoning, such as “forming analogies based on past experience”, were still a long way from being developed.[45]

Of course, since then developments have proceeded apace. One example with which we are all familiar is the immense complexity and sophistication of Google’s search algorithm and the databases which lie behind it, that enable Google automatically to interrogate a user as to whether what the user has typed into the search engine is what the user really intended or whether what the user intended was in truth something else which Google then specifies. Most of us are also aware of computational systems of the kind used by Amazon, eBay and Gumtree, which, on the basis of a user’s past purchases, can predict other products in which the user may be interested and display items accordingly, as well as automatically interrogating the user as to whether those other products do appeal in order to refine future displays. Added to that, according to report, work is now advancing on means of connecting together platforms such as Facebook and Twitter for determining, on the basis of what a user may follow or post, a range of places to which the user might care to go to eat or drink.

These applications use data mining techniques to perform empirical analysis of a person’s behaviour in order to predict his or her preferences. As such, they presage the kind of sophistication which would be required somehow to synthesise human introspectivist analysis. It is possible to envisage techniques similar to current data mining techniques being deployed to build up a picture of social norms generally. But, up to this point, I am not aware of any literature which suggests that an existing system could come close to performing the task of introspective reasoning in a legal context and, conservatively, one might suppose it will be the better part of a decade before there is one.
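By way of a very rough indication of what such data mining involves (the users, items and similarity measure below are invented and grossly simplified), preference prediction of this kind rests on nothing more introspective than correlations in recorded past behaviour.

```python
# A much-simplified sketch of behaviour-based preference prediction of the
# kind described above.  Users, items and purchase histories are invented;
# real systems are vastly larger and more sophisticated.
from math import sqrt

# Past behaviour: which items each user has bought (1) or not (0).
purchases = {
    "alice": {"kettle": 1, "teapot": 1, "toaster": 0, "novel": 0},
    "bob":   {"kettle": 1, "teapot": 1, "toaster": 1, "novel": 0},
    "carol": {"kettle": 0, "teapot": 1, "toaster": 0, "novel": 1},
}

def similarity(u, v):
    """Cosine similarity between two users' purchase histories."""
    dot = sum(u[item] * v[item] for item in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user):
    """Suggest items bought by the most similar other user but not by this one."""
    best = max((other for other in purchases if other != user),
               key=lambda other: similarity(purchases[user], purchases[other]))
    return [item for item, bought in purchases[best].items()
            if bought and not purchases[user][item]]

print(recommend("alice"))   # ['toaster'], inferred purely from behavioural correlation
```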

Furthermore, even allowing that, with enough time, capacity and computing genius, sufficient norms, values and symbols of intuitive knowledge will one day be loaded into a database to enable a positivist algorithm to synthesise introspectivist human analysis (as was done at a much more rudimentary level in the Kasparov/Deep Blue exercise or as is now being done by Google and the like), it will only be because someone, or more likely some large group of persons, has made a host of a priori decisions about the respective weights to be ascribed to the criteria of choice revealed by earlier thought and the relevance of the facts of contemporary social contexts to doing justice. And, at best, they will be a priori decisions based on an incomplete even if vast dataset and a conception of contemporary social context to doing justice which is not only subjective but static. Thus, while the process may not be any different in kind from the intellectual processes that a judge undertakes in the posited circumstances, it is likely that it will be different in result, and hence that it might not be regarded as authoritative.

When Donoghue v Stevenson was decided in 1932, there were possibly few members of Australian society who would have demurred to a conception of moral wrongdoing grounded in the Judeo-Christian imperative to love thy neighbour as thyself or the legal adaptation of it limited by the facts of contemporary social contexts to doing justice by reference to proximity. By contrast, today, in an increasingly pluralist and apostate society, the same may no longer be true. Hence the significance of Julius Stone’s conclusion that the recognition of a duty of care in novel circumstances involves an assessment not only of the criteria of choice made available by earlier thought but also of the relevance of contemporary social contexts to the doing of justice, the latter being as much informed by a judge’s perception of a heterodox society as by its elements.

In this country there is also a widely publicised disdain of the idea of unelected judges being authorised to make determinations by reference to open-textured, broad-based criteria such as a bill of rights or charter of human rights and responsibilities.[46] It is not infrequently said that a significant proportion of the people of this country regard it as undemocratic and, therefore, undesirable to trust unelected and to some extent uncontrollable judges to make decisions based on broad conceptions of contemporary social contexts to doing justice.[47] It is widely considered that, because views about such matters can and do markedly differ, they are better left to society’s elected representatives, and that by and large judges should be confined to the more tightly constrained limitations of rule-based determinations with only some small degree of leeway at the upper appellate level.[48] Given that degree of reticence about allowing judges to make decisions based on broad conceptions of contemporary social contexts to doing justice, it is not unlikely that society would also be resistant to the idea of policy choices being made by a computer on the basis of a priori determinations made by a cohort of unelected, unanswerable and essentially unknown software engineers and legal specialists working alone and largely unexamined in the development of a database and complex algorithm intended to function as a modern-day computational law Atkinian replacement.

By contrast, the present inability of computational law systems to perform a human introspective analysis of a legal problem that takes into account social norms and values may not prove a serious limitation to the development of an effective computational law system that assists in making a final determination rather than making it. As Dr James Popple suggests, even today “predicting a judge made change in the law [due to social mores] is beyond all but the very best lawyers”[49] and, probably, appellate court judges. Hence, the inability of a computer program to do any better should not prevent it from being of assistance.

If so, one may suppose it is likely that in the relatively near future counsel and solicitors will be equipped with computational law programs that are able to assist them in their preparation of advice and the conduct of litigation. Presumably judges will also be equipped with such programs and, given sufficient time and development, those programs will be capable of producing a fair set of reasons as to why a duty of care should or should not be recognised in the novel circumstances of a case, or upon such other novel legal issue as may fall for decision, with reference to the cases which the program determines to be relevant, those which it determines are not relevant, and some sort of statistical analysis of deviations from paradigm cases. It is worth considering, too, that, when and if such computer programs are available to practitioners and judges, they will also be available to unrepresented litigants and to the commentariat.

As an illustration of what is to come, in “Reasoning with rules and precedents”, Branting wrote[50] of a system called “GREBE” which applies a general framework for integrating rules and precedents to the task of legal analysis. It differs from previous systems by integrating case-based reasoning to compensate for weak domain theories, case elaborations and goal reformulation. Unlike previous systems, it also uses a highly expressive case-description language in which arbitrary orderings of causal, temporal and intentional relations can be stated explicitly. Most significantly, it employs two algorithms, one to effect retrieval by best-first incremental matching and the other to refine the match by structural difference links (“MRSDL”), which use pre-computed information about structural differences between cases. That generates alternative explanations by application of case-based reasoning directly to a goal and then to a sub-goal produced by goal reformulation, followed by an evaluation of the combination of rules and precedents, leading to the strongest arguments in favour of and against a claim.

The output consists of a detailed memorandum which identifies the issues relevant to the question to be decided; a determination of the legal rules and precedents applicable to each issue; an illumination of how conclusions about the issues follow from the facts of the case and the relevant authorities; the identification of relevant differences between a given case and applicable precedents; and the presentation of arguments on both sides of the issues where conflicting precedents exist. And, when tested by comparing GREBE’s analysis of 18 hypothetical workers’ compensation claims against the efforts of law students set the same problems, GREBE’s performance was found to be generally superior.[51]
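To give a flavour of the general idea, and no more than a flavour, the following toy sketch ranks invented workers’ compensation precedents by factual similarity to a new case and identifies the closest precedent on each side. It does not reproduce GREBE’s case-description language, matching algorithms or argument generation, all of which are far more elaborate.

```python
# A toy sketch of precedent comparison in the general spirit described above.
# The facts, precedents and similarity measure are invented; GREBE's actual
# representation and algorithms are far richer.

PRECEDENTS = [
    # (facts of the decided case, whether the injury was held compensable)
    ({"injured_at_work": True,  "travelling_to_work": False, "employer_directed_task": True},  True),
    ({"injured_at_work": False, "travelling_to_work": True,  "employer_directed_task": False}, False),
    ({"injured_at_work": False, "travelling_to_work": True,  "employer_directed_task": True},  True),
]

def similarity(case, precedent_facts):
    """Count how many facts the new case shares with the precedent."""
    return sum(case[fact] == precedent_facts[fact] for fact in case)

def analyse(case):
    """Rank the precedents and report the closest one on each side of the claim."""
    ranked = sorted(PRECEDENTS, key=lambda p: similarity(case, p[0]), reverse=True)
    strongest_for = next((p for p in ranked if p[1]), None)
    strongest_against = next((p for p in ranked if not p[1]), None)
    return strongest_for, strongest_against

new_case = {"injured_at_work": False, "travelling_to_work": True, "employer_directed_task": True}
supporting, opposing = analyse(new_case)
print("Closest precedent for the claim:    ", supporting)
print("Closest precedent against the claim:", opposing)
```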

Branting noted that the difficulty of accurately representing complex facts involving “detailed human actions and intentions” in GREBE’s system makes the system better suited to areas that are “relatively isolated from the complexities of human life”. He nominated corporate taxation as one area where such problems might be avoided. But, of course, the capacity to deal with such complexities is constantly developing.

What then will be the function of counsel and judges once programs of that kind are more generally available and in use? Happily, in one sense, it is likely that our functions will remain much as they are now: to present and contend for the considerations which favour a desired outcome and to weigh up competing considerations and authorities in order to reach a decision. But, at the same time, the process will be different because it will be affected by computer analysis. Both sides and the judge will have access to the relevant computational law program. All will know what it says should be the answer. One side presumably will be contending that the answer proffered by the program is correct and should be adopted while the other side will be likely to criticise the program, point out its limitations and inadequacies and fashion arguments in favour of the opposite result.

Submissions and judgments may change accordingly. There may need to be explicit reference to the programs and the results which they recommend. Possibly, there will be competing programs which dictate different conclusions and, if so, counsel and judges may need to analyse and compare each of them. The skills of counsel and judges would have to change accordingly. Just as the adoption of robotics in industry is turning tradesmen into skilled computer technicians, and industrial plant managers from skilled personnel managers into skilled computer scientists, so will the role of counsel and judges increasingly become that of a skilled computer scientist with the capacity to identify the limitations in programs and to fashion submissions and judgments about them.

Not long ago, law students were taught how to find the law in the English and Empire Digest, the Australian Digest and Halsbury’s Laws of England. Now they are taught how to find it by online computer searches. Within the foreseeable future, it does not seem unlikely that they will be taught about the capacities of variously available computational law programs and how to use them and recognise their weaknesses. Equally, as and when computational law programs come to be relied upon as primary analytical tools in the determination of legal outcomes, counsel and judges will need to learn the skills of computational law program application, analysis and deconstruction.

Paradoxically, in areas other than the law, there has been a large degree of digitisation of processes which has not required users to undertake any kind of analysis of the programs deployed.

Whether it be a checkout reader at a supermarket, online purchases, selling real estate on the ’net, computer-based medical analysis or computer-based pathological testing, the experience has, by and large, been one of users simply learning to use the technology, not second guess it.

That is because all of the parameters of the transaction are pre-set. The application of the optical reader to the barcode does not require any intellectual input on the part of the operator. It is the same for online commerce. So it is too with some forms of medical analysis. To invoke an example from one of our cases in the High Court, it has already been determined and the computer has been instructed that the presence of certain mutations or polymorphisms in a patient’s DNA bespeaks an increased likelihood of specific kinds of cancer, and it is not in any respect the function of the examining pathologist to question the validity of that correlation.[52]

Up to this point, the approach to digitisation in our profession has been similar. We have had to learn how to conduct online searches, such as on AustLII, LexisNexis and Westlaw, how to analyse a transcript with Transcript Analyser, how to use an iPad to provide access to authorities and how to run an e-trial without physical documents.[53] But few of us have had much interest in, and still less need to consider, the intricacies of the computer technologies which underlie those devices. It has been enough that we have mastered the operation of them.

It will be different with computational law. So far, the systems we have had to master have been mere information retrieval and organisational systems. Like a digest, they locate and inform us of what has already been decided. By contrast, the purpose of computational law is to determine what must now be decided. In the law, unless a previously decided case is on all fours with the case for decision — and sometimes even then — there is always more than one possible answer and a consequent need to choose between them. As Stone said, that involves a process of will as well as of cognition and of making choices not only on the basis of the various criteria of choice made available by earlier thought but also according to our assessment of the relevance of the facts of contemporary social context to the doing of justice.[54] And it is at that point that one will need to understand the basis on which a computational law system has made its choices and also to have the ability to discern whether and if so why they should be accepted or rejected.

May I then, finally, mention an aspect of computational law which should be a cause for optimism? In most areas of human activity, increased digitisation has made goods and services cheaper and more readily available. Thus far, in the law, as in medicine, it has tended to have the opposite effect. Technology has made litigation more expensive, just as it has made major medicine more expensive,[55] and litigation is now so expensive that to a large extent it is no longer an option for people of ordinary means. That means that some counsel now have less work than is optimal, and there are indications that, in order to sustain their earnings, they expand the time they devote to a matter to fill the time available, with a consequent further increase in unit costs. In turn, that makes litigation still more expensive and still less attractive, and access to litigation declines further.

More fundamentally, there are currently more lawyers than ever and yet litigation has never been less accessible to those of ordinary means. The paradox of the present time is thus that we have more than enough lawyers and yet inadequate legal services for all who truly require them. In many areas that need lawyers, such as crime, immigration, social security, town planning, administrative law, consumer complaints, domestic and retail tenancy, family law and succession, people often go unrepresented or under-represented because the cost is prohibitively high.

One solution to the problem is for lawyers to reduce prices to make their offering more attractive. But, unless lawyers can reduce costs, a reduction in prices is not an attractive or possibly even a viable option. Computational law, however, has the potential to alleviate the problem. Other walks of life demonstrate the point. Thanks to computational systems in civil aviation, most of an interstate or international flight is now conducted by a computer, with the pilot and co-pilot intervening only when the really hard decisions need to be made.[56] Because of the adoption of computational design and drafting systems in the engineering and architecture professions, a large part of engineering and architectural design and documentation is now computer-generated, with professional engineers and architects intervening only at the points at which hard choices have to be made.[57] Similarly in commerce, most financial accounting can now be computer-generated, with the intervention of accountants only when the hard choices need to be made, and automated financial reporting is becoming available.[58] It can be the same in the law. Computational law in the hands of skilled counsel has the potential to do much of the work in many matters, with counsel intervening as the final arbiter only when the final or otherwise hard decisions need to be made. And, applied assiduously to the law, as computer systems have already been applied in other professions, it has the capacity so to reduce the unit costs of advice and preparation for trial as to make legal services a more realistic option for people of ordinary means, with a consequent increase in the scale of demand and in societal benefit.

Possibly, a change of that order will not appeal to all of the members of our profession as a particularly attractive prospect. Some may not be especially interested in the areas of law in which the consequent increase in demand for legal services is likely to be generated. Others may not favour the prospect of high volume, low cost, computer-assisted output compared to the more august paradigm of old. Some may be disinclined to put in the work necessary to master the skills that are required for deploying the new technology. Those who are of that disposition, however, must keep in mind that the march of technology is relentless. What has already occurred in science, engineering, architecture and financial services is now beginning to occur in the law.

Properly applied, computational law has the potential to provide a degree of assistance in final decision-making that affords us the opportunity of providing a better, quicker legal service at significantly reduced unit cost to a much larger potential clientele, with consequent large-scale social benefits. And as the custodians of the law, we not only have a responsibility to be at the forefront in the innovation and application of that kind of new technology but we also have reason to be excited about the benefits which it is likely to yield.



[1] Paper presented at the Bar Association of Queensland Annual Conference, 27 February 2016. Published in (2017) 13 TJR 185, updated 2021.

[2] Former Justice of the High Court of Australia.

[3] See, eg, F Lederer, “High-tech trial lawyers and the court: responsibilities, problems, and opportunities, an introduction”, Courtroom 21 Court Affiliate Conference, 2003; M Warren, “Open justice in the technological age”, speech delivered at the Redmond Barry Lecture, 21 October 2013, Melbourne.

[4] T Sourdin, “Justice and technological innovation” (2015) 25 JJA 96.

[5] N Love and M Genesereth, “Computational law”, Proceedings of the Tenth International Conference on Artificial Intelligence and Law, 2005, p 205.

[6] ibid.

[7] ibid at 206.

[8] ibid.

[9] ibid at 207.

[10] See, eg, J Markoff, “Armies of expensive lawyers, replaced by cheaper software”, New York Times (online), 4 March 2011, www.nytimes.com/2011/03/05/science/05legal.html, accessed 26 August 2021; KordaMentha, “KordaMentha forensic adds relativity to their eDiscovery capabilities”, www.kordamentha.com/news/forensic-relativity, accessed 26 August 2021; see also T Sourdin, above n 4 at 103.

[11] Irish Bank Resolution Corporation Ltd v Quinn [2015] IEHC 175.

[12] S Pennington, “Lawyers next for tech-driven outsourcing”, Sydney Morning Herald (online), 10 September 2013, www.smh.com.au/it-pro/business-it/lawyers-next-for-techdriven-outsourcing-20130909-hv1qa.html, accessed 26 August 2021.

[13] C Merritt, “Artificial intelligence comes to the law”, The Australian (online), 20 June 2014.

[14] T Sourdin, above n 4 at 101.

[15] See, eg, L Branting, “Advisory systems for pro se litigants”, Proceedings of the Eighth International Conference on Artificial Intelligence and Law, 2001, p 139.

[16] Australian Taxation Office, “myTax”, available through www.my.gov.au.

[17] See S Pennington, above n 12.

[18] See, eg, L Branting, above n 15.

[19] Cf T Levitt and K Laskey, “Computational inference for evidential reasoning in support of judicial proof”, in M MacCrimmon and P Tillers (eds), The dynamics of judicial proof: computation, logic and common sense, Springer-Verlag, New York, 2002, p 345 at pp 352–383; J Josephson, “On the proof dynamics of inference to the best explanation” in MacCrimmon and Tillers (eds), ibid at p 287; P Snow and M Belis, “Structured deliberation for dynamic uncertain inference”, in MacCrimmon and Tillers (eds), ibid at p 397.

[20] M Perry and A Smith, “iDecide: the legal implications of automated decision-making”, paper presented to the Cambridge Centre for Public Law Conference, 15–17 September 2014, http://classic.austlii.edu.au/au/journals/FedJSchol/2014/17.html, accessed 20 April 2021.

[21] See, eg, A D’Amato, “Can/should computers replace judges?” (1977) 11 Georgia Law Review 1277; T Sourdin, above n 4 at 102.

[22] W Thompson and E Schumann, “Interpretation of statistical evidence in criminal trials: the prosecutor’s fallacy and the defense attorney’s fallacy” (1987) 11(3) Law and Human Behavior 167.

[23] See E Nissan, “Select topics in legal evidence and assistance by artificial intelligence techniques” (2008) 39 Cybernetics and Systems 333 at 343–348.

[24] See, eg, Civil Evidence Act 1968 (UK), c 64; Evidence Act 1898 (NSW) (rep), Pt IIC (inserted by Evidence (Amendment) Act 1976 (NSW)). See also, eg, Evidence Act 2008 (Vic), ss 69, 146, 147, Dictionary, definition of “document”.

[25] Cf Nissan, above n 23 at 375–379.

[26] Brown v The Queen (1986) 160 CLR 171; Cheatle v The Queen (1993) 177 CLR 541; Alqudsi v The Queen (2016) 258 CLR 203.

[27] L Branting, Reasoning with rules and precedents: a computational model of legal analysis, Kluwer Academic Publishers, Dordrecht, 2000, p 146.

[28] ibid at p 147.

[29] J Popple, A Pragmatic Legal Expert System, Dartmouth, Aldershot, 1996 at pp 7–8.

[30] See and compare T Bathurst, “Advocate v Rumpole: who will survive? An analysis of advocates’ ongoing relevance in the age of technology” (2015) 40 Australian Bar Review 185 at 190.

[31] S Burton, An introduction to law and legal reasoning, Little, Brown & Co, 1985.

[32] J Stone, Legal system and lawyers’ reasonings, Maitland Publications, 1964, p 318.

[33] ibid at p 320.

[34] Donoghue v Stevenson [1932] AC 562 at 580.

[35] Leviticus 19:18; Mark 12:29–31.

[36] O Dixon, “Concerning judicial method” (1956) 29 Australian Law Journal 468; republished in S Woinarski (ed), Jesting Pilate and other papers and addresses, 2nd edn, W S Hein & Co, Buffalo New York 1997, p 152.

[37] (1884) LR 9 App Cas 605.

[38] (1990) 170 CLR 394.

[39] O Dixon, above n 36 at 473.

[40] See MacCrimmon and Tillers (eds), above n 19.

[41] See, eg, L Zadeh, “From computing with numbers to computing with words: From manipulation of measurements to manipulation of perceptions”, in MacCrimmon and Tillers (eds), above n 19 at p 81.

[42] M MacCrimmon, “What is ‘common’ about common sense? Cautionary tales for travellers crossing disciplinary boundaries”, in MacCrimmon and Tillers (eds), above n 19 at pp 68–70.

[43] P van Andel and D Bourcier, “Serendipity and abduction in proofs, presumptions and emerging laws”, in MacCrimmon and Tillers (eds), above n 19 at p 276.

[44] J Josephson, above n 19 at pp 297–304. See also, D Schum, “Species of abductive reasoning in fact investigation in law”, in MacCrimmon and Tillers (eds), above n 19 at p 308; T Levitt and K Laskey, above n 19 at pp 352–383; D Poole, “Logical argumentation, abduction and Bayesian decision theory: a Bayesian approach to logical arguments and its application to legal evidential reasoning”, in MacCrimmon and Tillers (eds), p 385; J Jenkins, “What can information technology do for law?” (2008) 21 Harvard Journal of Law and Technology 589 at 497–600.

[45] J Josephson, above n 19 at p 304. See also F Pasquale and G Cashwell, “Four futures of legal automation” (2015) 63 UCLA Law Review Discourse 26 at 45.

[46] See National Human Rights Consultation Committee, National Human Rights Consultation Report, 2009, at pp 15–50, 263–265, 281 at www.humanrights.gov.au/sites/default/files/content/legal/submissions/2009/200906_NHRC_complete.pdf, accessed 26 August 2021.

[47] See J Allan, The vantage of law: Its role in thinking about law, judging and bills of rights, Ashgate Publishing Ltd, 2011; Cf D Meagher, “The common law principle of legality in the age of rights”, (2011) 35 Melbourne University Law Review 449 at 463–465.

[48] Cf T Bingham, The Rule of Law, Allen Lane, London, 2010 at pp 51–54.

[49] J Popple, above n 29 at p 24.

[50] L Branting, above n 27, at pp 159–161.

[51] See also, J Popple, above n 29 at pp 24–50 for an overview of a range of computational law systems including rule-based systems, case-based systems and hybrid systems.

[52] D’Arcy v Myriad Genetics Inc (2015) 258 CLR 334.

[53] F Lederer, “Courtroom technology: for trial lawyers, the future is now” (2004) 19 Criminal Justice 14; Cf P Keane, “Access to justice and other shibboleths”, paper presented to the JCA Colloquium, Melbourne, 10 October 2009 at pp 25–28.

[54] J Stone, above n 32 at pp 319–320.

[55] See P Keane, above n 53 at p 27; see, eg, J Skinner, “The costly paradox of health-care technology”, MIT Technology Review (online), 5 September 2013, at www.technologyreview.com/news/518876/the-costly-paradox-of-health-care-technology/, accessed 26 August 2021.

[56] And the automation increases: see, eg, J Markoff, “Planes without pilots”, New York Times Science Blog (online), 6 April 2015, at www.nytimes.com/2015/04/07/science/planes-without-pilots.html, accessed 26 August 2021.

[57] See, eg, M Graves, “Architecture and the lost art of drawing”, The New York Times online, 1 September 2012, at www.nytimes.com/2012/09/02/opinion/sunday/architecture-and-the-lost-art-of-drawing.html, accessed 26 August 2021.

[58] See, eg, D McLennan, “Automating Business Reporting”, Scribd, 2011, at www.scribd.com/document/234347352/Automating-Business-Reporting, accessed 26 August 2021.