Judge Andrew Peck’s endorsement of predictive coding, issued orally on February 8th and in writing on February 24th, has been making waves for almost a month now, as everyone from expert analysts to Twitter pundits weighs in on the ruling. “Controversial” seems to be just the word for it.
Predictive coding, also known as technology-assisted review, speeds up the discovery of ESI by teaching computers how to mimic expert human reviewers. In other words, the computer observes how the lead attorney makes a decision about responsiveness (for example), and generates an algorithm that takes the same variables into account in the same ways. Predictive coding thus allows decisions about responsiveness, privilege, and other crucial components of the discovery process to be made in a fraction of the time it would ordinarily take.
The particular case in which Judge Peck’s ruling was made concerned a lawsuit brought by five named female plaintiffs, who charged that the defendant prevented women from advancing beyond entry-level positions and systematically discriminated based on gender. The scope of those claims meant that millions of documents had to be located and reviewed, and the time and cost burden on the producing party became exorbitant. As Law Technology News reported, predictive coding was endorsed only because both parties agreed to use it; even so, the plaintiffs have been less than enthusiastic.
As reported by ACEDS, the written opinion from Judge Peck was released two days after the plaintiffs filed their objections, which meant they did not get a chance to address the full ruling. Chief among the plaintiffs’ complaints is that the software adopted by the defendants does not have any reliability protocols in place. The plaintiffs are also unhappy that Judge Peck endorsed a “novel” discovery tool as the only method of document production.
While the plaintiffs raise valid, logical objections to the ruling, these are nothing we haven’t heard before. Most, if not all, criticisms of predictive coding boil down to uncertainty over whether its results will compare favorably with those of human reviewers. Will relevant documents be missed? Will the algorithms generated by the software actually reflect how experienced attorneys make decisions? Fortunately, we know how to answer these questions. Testing protocols for reliability and accuracy are not difficult to conceive or implement, and the defendants should include them without complaint.
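A common validation approach is to have human reviewers code a random control sample and then measure the software’s recall (what fraction of truly responsive documents it found) and precision (what fraction of its responsive calls were correct). The sketch below shows that calculation; the labels are invented for illustration, not drawn from any actual protocol in the case.

```python
# Hypothetical validation sketch: compare machine coding against a
# human-coded control sample using recall and precision.
def recall_precision(human_labels, machine_labels):
    """Each list holds booleans per document (True = responsive)."""
    true_pos = sum(h and m for h, m in zip(human_labels, machine_labels))
    actual_pos = sum(human_labels)       # documents humans called responsive
    predicted_pos = sum(machine_labels)  # documents the machine flagged
    recall = true_pos / actual_pos if actual_pos else 0.0
    precision = true_pos / predicted_pos if predicted_pos else 0.0
    return recall, precision

# Invented results for a 10-document control sample
human   = [True, True, True, False, False, True, False, False, True, False]
machine = [True, True, False, False, False, True, True, False, True, False]
r, p = recall_precision(human, machine)
print(f"recall={r:.2f} precision={p:.2f}")  # recall=0.80 precision=0.80
```

Agreeing in advance on acceptable recall and precision thresholds for a sample like this is exactly the kind of reliability protocol the plaintiffs say is missing.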
In the short term, Da Silva Moore represents a golden opportunity for predictive coding to prove itself as a legitimate step forward in e-discovery technology. In the long term, continuing to give predictive coding a cold shoulder does not advance the best interests of the legal community. A balance can and should be struck between caution and innovation.
UPDATE, MARCH 20th: Predictive Coding, Take 2