A “Kleen” Miss

The antitrust case of Kleen Products, LLC, et al. v. Packaging Corporation of America, et al., which many may know as the “other” predictive coding case, has finally begun to pull the e-discovery industry’s attention away from Da Silva Moore. While the issue at hand in Da Silva Moore was the argument over the predictive coding protocol, in Kleen the plaintiffs and defendants are butting heads over whether the technology should be used at all.

A little bit of background on the case…

The plaintiffs allege that the defendants engaged in a series of parallel capacity reductions and price increases even as consumer demand for their product – containerboard, the primary component of innumerable packaging products for industrial and consumer use – was increasing. While the defendants submit that the capacity reductions were not carried out in parallel, they have not denied that the price increases were. However, they argue that because the containerboard industry is highly consolidated, oligopolistic in nature, and faces inelastic demand, contemporaneous price increases are lawful interdependent conduct. Put in plain English: if one company in the industry raises its prices because of increased costs and/or weakened demand, the others must follow suit or risk being forced out of business.

The defendants failed to convince the court to dismiss the antitrust claim and were ordered to respond to the complaint on or before May 2, 2011. Factoring into the court’s decision were the facts that prices were increased even as capacity was decreased, despite rising demand, and that these changes occurred after industry-wide meetings. The case is now well underway, with 99% of the defendants’ discovery process complete. However, the plaintiffs are asking Judge Nan Nolan to compel the defendants to redo their discovery because they used keyword searching rather than predictive coding.

Boolean keyword searches are often regarded as the “gold standard” of e-discovery. When performed correctly, they combine the efficiency and thoroughness of a computer with the judgment of a human reviewer, and their main benefit during e-discovery is that they greatly assist the process of identifying relevant documents. Keyword searches do have significant limitations – for example, they are blind to the context in which a word or phrase is used, and if a relevant search term is simply left out there is no way to know – but they remain the benchmark against which every other e-discovery technology is measured.
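To make that limitation concrete, here is a minimal sketch in Python of how a Boolean keyword search operates – a document either satisfies the term expression or it does not, with no sense of context. The documents and search terms below are hypothetical illustrations, not drawn from the actual case:

```python
import re

def boolean_search(doc, all_terms=(), any_terms=(), none_terms=()):
    """Return True if doc contains every term in all_terms, at least one
    term in any_terms (when any_terms is given), and none of none_terms.
    Matching is case-insensitive and whole-word."""
    def has(term):
        return re.search(r"\b" + re.escape(term) + r"\b", doc, re.IGNORECASE) is not None
    return (all(has(t) for t in all_terms)
            and (not any_terms or any(has(t) for t in any_terms))
            and not any(has(t) for t in none_terms))

docs = [
    "Minutes: price increase for containerboard effective next quarter.",
    "We should trim capacity at the mill before demand picks up.",  # relevant, but never uses the chosen terms
]

# Search: documents containing both "price" AND "increase"
hits = [d for d in docs if boolean_search(d, all_terms=("price", "increase"))]
```

The second document illustrates the plaintiffs’ core complaint: it is plainly relevant to capacity reductions, yet it is invisible to this search because the chosen terms simply never appear in it, and nothing in the process flags the miss.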

In Kleen, the defendants have used keyword searches to produce over a million documents already, and have put thousands of hours into review. Even so, the plaintiffs want Judge Nolan to issue an order that would have the defendants redo discovery using predictive coding because they feel that the limitations of keyword search are such that a significant number of documents relevant to the case have not been identified.

While the fact that 99% of discovery has been completed seriously weakens the plaintiffs’ argument that discovery needs to be redone, there are other issues as well. In contrast to Da Silva Moore, in which the plaintiffs have applied exhaustive scrutiny to the protocols for the use of predictive coding in review, the plaintiffs in Kleen have neglected this piece of the puzzle almost entirely. Predictive coding technology is still very much in its early days, as far as the legal community is concerned, and is far from achieving the ease of use that keyword search enjoys. An expert in the case must carefully train the program for it to identify the correct documents with accuracy equal to that of a human reviewer; even a tiny mistake in training can turn into huge deficiencies in quality. It is better to use a less advanced tool very well than to place an extremely complex tool in the hands of someone who doesn’t know how to use it.

During the hearing, the plaintiffs did shift their gaze to the actual keyword search process employed by the defendants, rather than solely criticizing the technology itself. Despite the best efforts of their testifying expert, the plaintiffs were unable to convincingly portray the defendants’ keyword search methodology as flawed, and the cross-examination of the expert actually cast it in a very favorable light compared to the status quo.

As both Kleen and Da Silva Moore continue their time in the spotlight as the predictive coding cases du jour, further critical comparisons are sure to prove very illuminating.
