More about the French ban on judge analytics
On 4 June 2019, the French Government criminalised the “reuse” of data identifying judges “with the purpose or effect of evaluating, analysing, comparing or predicting their actual or alleged professional practices”, with a maximum penalty of five years in prison.
The legal tech sector was stunned. The move appears to run counter to the modern trends of open justice and the “consumerization” of inaccessible professions through technology. Start-ups may be asking themselves whether this is the start of a wider reactionary movement against these ideals, and whether the effort is worth the risks.
After all, at the heart of litigation data analytics businesses are propositions that are unnerving to traditional legal practice: our efforts at predicting outcomes can be unusably ambiguous, and a lawyer’s personal experience is a poor dataset from which to draw broad conclusions.
As to the former, the professional third-party litigation funder Balance Legal Capital ran a survey last year asking respondents to state, in numerical terms, how they construed “probabilistic” phrases such as “good prospects” or “reasonable chance”. The results were eye-opening. One phrase (“serious possibility”) received answers ranging from 36% to 72%: the same words can signal a likely loss to one litigator and a likely win to another. Data analytics force us to confront the uncomfortable truth that we hide behind ambiguous language.
As to the latter, most lawyers will not, over their careers, work on a sufficient number of cases to draw any statistically meaningful lessons from them. Lessons learned from one may not apply to the next, and unless we are dutifully recording each prediction we make and its outcome, we will probably fail even to learn the lesson.
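The remedy for that failure is mundane: keep a prediction log and score it. As a minimal sketch (the probabilities and outcomes below are invented for illustration), a litigator’s forecasts can be scored with the Brier score, a standard calibration measure from the forecasting world:

```python
# Brier score: the mean squared error between the forecast probability
# and the actual outcome. Lower is better; always guessing 50% scores 0.25.
def brier_score(forecasts):
    """forecasts: list of (predicted_probability, outcome) pairs,
    where outcome is 1 if the claim succeeded and 0 if it failed."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical prediction log kept over several matters.
log = [
    (0.70, 1),  # predicted a 70% chance of success; the claim succeeded
    (0.60, 0),  # predicted 60%; the claim failed
    (0.90, 1),
    (0.30, 0),
]

print(round(brier_score(log), 4))  # 0.1375
```

Tracked over enough matters, a score like this reveals whether “good prospects” in one’s own mouth really means 70%, or something closer to a coin toss.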
Litigation data analytics forces the profession to confront the fact that we are going to have to learn a new skill. That skill will probably involve a better level of numeracy than most aspiring lawyers ever expected.
Small wonder, then, that there remains scepticism and resistance within the profession. In response to a pitch from a provider of analytics services focused on the London market, I have overheard the question – “why use an average? Every case is different”. At a litigation financing conference I recently attended in New York, I was very surprised when, during a panel discussion on the subject, even the funders were unimpressed.
True, every case is different. So is every commute, and every year’s weather, and every smoker’s natural resistance to lung cancer. But no sane doctor would choose to draw on only her own patients’ experiences in responding to a certain type of treatment, and no sane commuter would want Citymapper to recommend routes based only on his own experience of getting to work that way: “Sorry, it looks like this is your first day of work: travel times are unavailable”.
That said, while spending time with the naysayers, I have heard a number of legitimate questions which those in this industry should keep in mind.
Litigators, being questioning types, rarely take things sitting down. So, when software promises to simplify a challenging part of their job, the response is often – “how does it work?” Practitioners will need to be able to trust the quality of the database, and the less the service looks like a black box, the more likely this is to happen.
The next issue is that there is just not enough data going into these systems yet. For a fairly vanilla question (e.g., “how often have negligence claims historically succeeded?”) the number of cases analysed by these platforms might run into the hundreds, but the more that data is drilled into to identify which of those hundreds are most relevant, the less data there is and the less value it has. Complex commercial cases pose a related problem for a predictive platform: the more data points analysed, the more likely it is that a false positive pattern will be identified within them. In the 1970s, an American sportswriter noticed that the Super Bowl was a very accurate predictor of the direction of the stock market. Deep Blue was likewise initially fooled when Garry Kasparov adopted an early-game strategy it could not match to anything in its vast database. Meanwhile, reported decisions in the lower courts, which generally deal with simpler cases, are collected haphazardly (BAILII, a leading free database, warns that its archive of county and magistrates’ court decisions is “very incomplete”). And if judicial trends change over time (and they do), will this software be permanently impeded by only ever holding a rolling five-year block of relevant case law?
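The false-positive risk is easy to demonstrate. The simulation below is only a sketch (the counts of cases and candidate factors are arbitrary): it generates outcomes and “factors” that are pure coin flips, yet the best of the factors still appears to track the outcomes well, simply because so many were tested.

```python
import random

random.seed(0)

# 30 past cases with random outcomes, and 200 unrelated candidate
# "factors" (judge, month filed, claim type, ...) -- all pure noise.
n_cases, n_factors = 30, 200
outcomes = [random.randint(0, 1) for _ in range(n_cases)]
factors = [[random.randint(0, 1) for _ in range(n_cases)]
           for _ in range(n_factors)]

# Fraction of cases in which a given factor "agrees" with the outcome.
def agreement(factor):
    return sum(f == o for f, o in zip(factor, outcomes)) / n_cases

best = max(agreement(f) for f in factors)
print(f"best noise factor matches {best:.0%} of outcomes by chance alone")
```

With only 30 cases and 200 candidate patterns, some spurious factor will almost always “predict” well over 60% of outcomes – which is exactly how the Super Bowl came to forecast the stock market.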
These platforms will continue to improve. Equally, they will not replace litigators. The better way to look at this technology is as a tool which improves the profession’s efficiency: we can do more work, more quickly, more accurately and (hopefully) more economically. Weather forecasting – another profession seeking to predict outcomes from complex systems – has enjoyed huge gains in accuracy since the introduction of powerful pattern-recognising software, yet the accuracy of the weatherman remains a joke rather than a proverb. Until access to analytics is included in a subscription to the leading online research sites, those with access to the analysis will have an edge over those without; even then, the tool is nothing if its operator does not know how to use it.
It seems unlikely that concerns such as these motivated the French Government’s ban. To its credit, the English judiciary is showing no signs of reacting in the same way as in France. This may be because the judges whose decisions have been fed into the machines so far are sufficiently senior to be untroubled by the heightened scrutiny. The general inability to choose one’s judge, and the lack of a docketing system, also place natural limits on the value of a judge-by-judge analysis of historic cases. However, we live in times when national dailies can apparently call judges “enemies of the people” without consequence. We also saw the media trawling the reported decisions of Sir Martin Moore-Bick to assess how suited he was to chairing the Grenfell Tower Inquiry. Those attempts were feeble, but it is not difficult to peer into the future and foresee political hucksters tendentiously presenting correlative litigation data as significant and causative for their own ends. If that happens, it would not be the first time a well-meaning government regulated and censored publication for the ostensible greater good.
Copyright © 2019 Legal IT Professionals. All Rights Reserved.