If you become so caught up in the smallest detail that you fail to understand the bigger picture, you may say that you cannot see the wood for the trees. I am NOT about to break my promise to defer further detailed comment on the predictive coding saga rumbling around the US at present. My purpose was to ensure that I did not engender more speculative comment on this site to add to the volumes which exist elsewhere. It was not, however, my purpose to desist from all comment, provided that the comments add something to the debate.
I feel strongly that we need to get away from all the hype about the subject, as well as the extraordinary personal attacks on Judge Peck’s integrity contained in the Da Silva plaintiffs’ “paper” submitted to Judge Carter, and concentrate on making sense of the various rulings and opinions for our joint edification. If you want to read about what I mean and do not want to go to the source, have a look at Craig Ball’s blog on the subject to get a feel for some of the vitriol – Putting the Duh in Da Silva Moor [Ball in Your Court, 26th March, 2012].
Firstly, I think it would be useful if lawyers were to ask themselves how this subject would play out here in front of a judge or Master. No one has suggested to me that writing about US decisions is a waste of time on the ground that they have no persuasive effect in this jurisdiction. While the cases are, of course, US cases, it does not take a genius to work out that the subject matter has a much wider application than in the federal courts sitting in New York (Da Silva) or Illinois (Kleen).
Secondly, I think we have to remind ourselves that the way lawyers used to conduct disclosure and review is no longer acceptable. No one seriously suggests that the kind of manual or linear review that used to take place (piles of paper with paralegals and trainees handling pieces of paper for days on end just to put them in chronological order before any kind of review was possible) is likely to be acceptable today. In that sense we have all moved on! The sheer number of lawyer hours multiplied by the relevant charge-out rates is likely to render such an exercise disproportionate in all but the smallest of cases.
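To put rough numbers on that point, a back-of-the-envelope calculation makes the proportionality problem plain. The volume, review pace and charge-out rate below are purely illustrative assumptions of mine, not figures from any real matter:

```python
# Back-of-the-envelope cost of a purely linear (manual) review.
# All figures are illustrative assumptions, not real rates or volumes.

documents = 100_000        # documents surviving initial collection (assumed)
docs_per_hour = 50         # an assumed first-pass review pace per reviewer
rate_per_hour = 60.0       # an assumed junior-reviewer charge-out rate (GBP)

hours = documents / docs_per_hour
cost = hours * rate_per_hour

print(f"{hours:,.0f} reviewer hours, roughly GBP {cost:,.0f}")
# -> 2,000 reviewer hours, roughly GBP 120,000
```

Even with those modest assumptions, the review cost alone can dwarf the sums at stake in a mid-sized dispute, which is the point about proportionality.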
Thirdly, no one is saying that predictive coding is a panacea. It is not. For instance, it does not work well (or at all) in cases involving a lot of figures or spreadsheets, as the process is designed to look at text and not numbers. It is, however, one of the tools in a lawyer’s armoury and needs to be considered (and even rejected if the exercise does not justify its use) in all but the very smallest cases.
I am often asked where the tipping point is. There is no definitive answer, because everyone will have their own views and each case will be different. What one can say with certainty is that there are lots of cases out there, in law firms of varying sizes, where lawyers are spending a great deal of their clients’ money in ways which are generally inefficient and where the chances of achieving a good result are remote.
Two recent instances illustrate the problem:
- A lawyer had 20,000 documents (it might equally have been 10,000 or even 5,000) to review. After appropriate culling, filtering and deduplicating, there remained a substantial number to be reviewed, and time was short. This is just the sort of case where a consideration of the benefits of predictive coding was justified. Arguably, it could be negligent not to consider it, even if the idea is ultimately rejected for good reasons and the decision process documented. Such a scenario is by no means isolated: there are plenty of cases out there with document sets of a similar size where no consideration is currently being given to the use of predictive coding. (A minimal sketch of the deduplication step follows this list.)
- A second lawyer is dealing with the process of disclosure in a traditional manner on the basis that there just might be, hidden away in the body of documents, the one document which might prove vital to the case. How quickly that approach becomes disproportionate will depend on the size of the case and the sums involved, but in many instances that position is likely to be reached long before all the documents have been reviewed. There has to be another way.
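For a sense of what the “culling, filtering and deduplicating” in the first example involves, here is a minimal sketch of exact-duplicate removal by content hash. The folder name and the choice of SHA-256 are my own assumptions; real litigation-support platforms do this, plus near-duplicate detection and email threading, at far greater scale:

```python
import hashlib
from pathlib import Path

def dedupe(folder: str) -> list[Path]:
    """Keep one copy of each exact duplicate, identified by content hash."""
    seen: set[str] = set()
    unique: list[Path] = []
    for path in sorted(Path(folder).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:   # first time we have met this exact content
            seen.add(digest)
            unique.append(path)
    return unique

# e.g. survivors = dedupe("collected_documents")  # hypothetical folder name
```

Even this crude step often removes a large slice of a collection before any lawyer looks at a document, which is why the number left for review, rather than the number collected, is what matters.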
So, where are we now?
Obviously, we need to consider each case separately because, as the saying goes, one size does not fit all.
We should accept that there is now a strong body of evidence that computer-assisted review is at least as reliable as human review, and that there are studies which suggest it is superior to human review in many cases.
We need to understand that the gold standard does not lie in competing lists of keywords, often prepared before the issues are fully understood and from differing viewpoints. In many cases, keywords either throw up a large number of false positives or underperform by missing vital documents. I am aware of no case which expressly endorses the use of keywords as a method of identifying pertinent documents within a larger data set, just as there is no case which endorses the use of teams of paralegals and other junior lawyers to handle pieces of paper. Similarly, there is unlikely to be a case where the specific use of predictive coding is endorsed.
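The two keyword failure modes are what information-retrieval people call precision (how much of what you collect is relevant) and recall (how much of what is relevant you collect). The tiny labelled sample below is invented purely for illustration:

```python
# Tiny invented example: how a keyword query can both over- and under-collect.
# The documents and the 'relevant' ground-truth labels are hypothetical.

sample = [
    ("Q3 contract renewal terms attached",    True),
    ("contract for the office coffee machine", False),  # false positive: hits "contract"
    ("please destroy the old agreement",       True),   # missed: no keyword present
    ("lunch on Friday?",                       False),
]
keywords = {"contract"}

hits = [(text, rel) for text, rel in sample if keywords & set(text.split())]
tp = sum(rel for _, rel in hits)

precision = tp / len(hits)                      # share of hits that were relevant
recall = tp / sum(rel for _, rel in sample)     # share of relevant docs found

print(f"precision={precision:.2f}, recall={recall:.2f}")
# -> precision=0.50, recall=0.50
```

Half of what the keyword pulled back was noise, and half of what mattered was never pulled back at all: exactly the complaint made about keyword lists in practice.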
We should not forget that Judge Peck did not order the parties to use predictive coding; they had already agreed to do so. All he tried to do was lay out some guidelines as to how the process should work. Ultimately, what I hope we will gain from Da Silva is guidance as to how the process may be handled. In the end it is all about transparency: if the parties confer with one another to agree the best way forward (and, in the absence of agreement, have a sensible argument before the court), I suspect that is all that can be expected.
We must await developments in the US in the hope and expectation that we will all be able to draw some guidance from whatever decisions are made in Da Silva and in Kleen. It seems likely that, whatever the guidance, it will have just as much relevance and application to disclosure exercises over here as it is sure to have over there.
If you cannot contain your impatience and want to read a really good assessment of where we are today, then read John Tredennick’s article – Judge Peck Provides a Primer on Computer-Assisted Review [Catalyst, 14th March, 2012].
In the end it is as much about keeping matters in perspective (and costs to a reasonable and effective level) as being able to see the wood for the trees.
Photo credit: Beech trees (Fagus sylvatica) in the Sonian Forest (Forêt de Soignes – Zoniënwoud), Belgium, by Donar Reiskoffer [Wikimedia Commons]