
Predictive Coding Today: Before You Jump In, What Should You Consider?

Predictive coding, the term used to describe the use of machine learning tools in document review, has taken center stage in today’s e-discovery world. Counsel have been inundated with countless predictive coding seminars, demonstrations, conferences and papers. In discussions with clients, we have found that there is a growing awareness of the concept of predictive coding, but taking first steps to use the technology in real workflows remains a mystery. What criteria should be applied in choosing a matter appropriate for a predictive coding test? How can you measure the outcomes? What parameters would ensure that the first matter would produce helpful data in making decisions on predictive coding in the future?

We will address these and other specific considerations for your first predictive coding matter.

Internal Technology vs. External Workflow

One must always begin with the end in view. This is primarily a matter of understanding your current e-discovery practices and your near-term objectives in predictive coding. To what extent do you use e-discovery technology in-house? Does your company have a forensic data and e-discovery group sophisticated enough to consider purchasing a full license to the processing and hosting software?

Large companies that are subject to serial litigation will sometimes have a full-blown e-discovery shop running internally. Many companies have at least some legal hold and preservation software solutions, perhaps as part of existing data management solutions. Check with your IT department and refresh your knowledge of your data systems.

Most companies will source their e-discovery processes from some combination of technology vendors, managed review providers and outside counsel. If your company is seeking to acquire e-discovery technology through a licensing agreement, you will have other considerations in addition to those described below, such as upfront cost, support and maintenance, and your company’s ability to meet the technical requirements to host the software properly.

Choosing The Right First Matter

There is no formula for finding the perfect first matter to try predictive coding. But here are a few practical considerations in setting up a pilot:

A Real, Live Matter: You should strongly consider employing predictive technology on a real case. And if an appropriate live matter is not available, you should endeavor to simulate the circumstances of a live matter as closely as possible. Predictive coding is not technology that exists separate and apart from invested human review and intelligent workflows, and the best way to bring those assets to a pilot is in the context of a real review.

Volume: While there is no magical number of documents needed to test the capabilities of predictive coding, you should strive to find a matter with a volume of documents representative of your typical matters. Predictive coding technology will make the most sense from a cost-benefit perspective when there is a critical mass of documents, generally somewhere above 75,000. If your datasets for review are typically much smaller, you can still try predictive coding, but the cost of the technology may weigh more heavily when you draw your conclusions.

Rolling Collections and Productions: Do not disqualify a potential pilot on the basis of rolling collections or rolling productions. Collections are rarely complete before review needs to begin, and productions are rarely one-time events when negotiated with the requesting party. Custodian data and other collected sources roll into the database over a period of time, and productions roll out to the requesting party in batches as well. Most predictive technologies employ a semantic algorithm that analyzes the text corpus as a whole, so it is important for the predictive coding solution to account for new documents that will introduce new text on a rolling basis.

Similarly, many predictive technologies iterate their learning over the course of the entire document review. As the learning becomes more refined, the predictive tool should be able to identify inconsistencies that develop over time with documents that have already been produced.
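
To make this concrete, here is a minimal sketch of the underlying pattern, not any particular vendor's workflow, written with the open-source scikit-learn library and hypothetical function and variable names: the classifier is re-fit after a new collection is reviewed, and documents that have already been produced are re-scored to flag potential inconsistencies for quality control.

    # Minimal illustrative sketch: re-fit a simple responsiveness classifier as
    # collections roll in, then re-check already-produced documents under the
    # refreshed model. Assumes binary labels where 1 means "responsive".
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def refit_and_flag(reviewed_texts, reviewed_labels, produced_texts):
        # reviewed_texts / reviewed_labels: all human coding calls made to date,
        # which grow as each rolling collection is reviewed
        vectorizer = TfidfVectorizer()
        features = vectorizer.fit_transform(reviewed_texts)  # vocabulary reflects the corpus as it stands today
        model = LogisticRegression(max_iter=1000).fit(features, reviewed_labels)
        # Re-score documents that were already produced; low scores under the
        # refreshed model are candidates for a consistency check.
        scores = model.predict_proba(vectorizer.transform(produced_texts))[:, 1]
        return [i for i, score in enumerate(scores) if score < 0.2]  # 0.2 is an illustrative threshold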

Evolving Rubric

Another reality of discovery is that the rubric applied in document review evolves over time, and the machine learning needs to be able to adapt to the shifting contours of discovery. As facts develop in the case, as parties negotiate the breadth of discoverable evidence or as regulatory inquiries shift the scope of their investigation, the criteria for document review will often need to change mid-stride: what was previously irrelevant may now become highly relevant. Predictive technologies apply machine learning based on what is known at a given point in time, so when that knowledge evolves over the course of the litigation, predictive solutions must include workflows that adapt and integrate the new learning on an ongoing basis.
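
Conceptually, the same kind of sketch extends to a changed rubric (again, a simplified model with hypothetical names, not a specific product's method): revise the affected training labels and retrain from the current state of knowledge.

    # Minimal illustrative sketch: when the review criteria change, update the
    # affected training labels and retrain so predictions reflect what is known now.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def retrain_after_rubric_change(texts, labels, recoded_calls):
        # recoded_calls: {document index: new label} for coding calls revised
        # under the new criteria (e.g., a previously irrelevant topic is now relevant)
        updated_labels = list(labels)
        for index, new_label in recoded_calls.items():
            updated_labels[index] = new_label
        vectorizer = TfidfVectorizer()
        model = LogisticRegression(max_iter=1000)
        model.fit(vectorizer.fit_transform(texts), updated_labels)
        return vectorizer, model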

Real-Life Timelines

Finally, we know from personal experience that discovery can come with extremely demanding deadlines. Once again, do not shy away from considering a pilot for a case with short-fuse timelines, because the ability to perform under time pressure is another aspect of predictive coding technology that needs to be measured. Can the machine learning reach an acceptable level of precision across a large quantity of documents in a very short period of time? Sometimes in cases with rolling productions, an initial production will be due just a few days after the documents are loaded into the review platform. Does that fit within the workflow of predictive coding?

Measuring Your Outcomes

You should track your first predictive coding matter with hard data and analysis. Oftentimes, counsel are left with only qualitative feedback on how the process went; instead, seek independent measurements of cost savings and quality that will give you a good sense of when predictive coding is right for your matters.

Quality: The Baseline Comparison. Every test use case requires an accurate base case to compare against. If your initial matter is serving the true purpose of a pilot, you may want to consider a “bake-off” style contest: run a non-predictive review workflow in parallel with the predictive review. The two reviews need to be kept completely separate, and the non-predictive review should represent the best use of existing advanced tools (near-duplicate detection, e-mail threading, concept clustering) so that the comparison is fair. Measure quality through error rates from randomized samples, and measure cost according to actual amounts spent.
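
As an example of the sampling step, a minimal sketch (with an illustrative sample size and confidence level, and a hypothetical second-pass audit function standing in for your quality-control review) might look like this:

    # Minimal illustrative sketch: estimate an error rate from a randomized
    # quality-control sample and report a simple 95% confidence interval
    # (normal approximation). Sample size and confidence level are illustrative.
    import math
    import random

    def sampled_error_rate(coding_calls, audit_call, sample_size=400, seed=1):
        # coding_calls: {document id: label from the workflow being tested}
        # audit_call: function returning the second-pass ("correct") label
        sampled_ids = random.Random(seed).sample(list(coding_calls), sample_size)
        errors = sum(1 for doc_id in sampled_ids
                     if coding_calls[doc_id] != audit_call(doc_id))
        rate = errors / sample_size
        margin = 1.96 * math.sqrt(rate * (1 - rate) / sample_size)
        return rate, (max(0.0, rate - margin), min(1.0, rate + margin))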

Costs: Aggregate Project-Level. E-discovery costs can be grouped into three major areas: technology, review and outside counsel fees. For example, saving review costs through predictive coding may seem highly attractive, but if doing so requires more expensive technology and more hours billed by outside counsel, the calculus changes. All of the parties involved may market serious savings in particular areas, but we recommend that you maintain your own independent running cost tracker that assesses costs in the aggregate at the project level. This will help you define the types of matters (by volume, complexity and timeline) that are best suited for predictive coding going forward.
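
A minimal sketch of such an independent tracker, with illustrative category names and figures, could be as simple as the following:

    # Minimal illustrative sketch: an independent, project-level cost tracker
    # that records spend by category and reports the aggregate total, rather
    # than relying on savings reported for any single area.
    from collections import defaultdict

    class ProjectCostTracker:
        CATEGORIES = ("technology", "review", "outside_counsel")

        def __init__(self):
            self.costs = defaultdict(float)

        def record(self, category, amount):
            if category not in self.CATEGORIES:
                raise ValueError("unknown cost category: " + category)
            self.costs[category] += amount

        def total(self):
            return sum(self.costs.values())

    tracker = ProjectCostTracker()
    tracker.record("technology", 25000.0)       # illustrative figures only
    tracker.record("review", 60000.0)
    tracker.record("outside_counsel", 40000.0)
    print(dict(tracker.costs), tracker.total())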

The potential benefits of predictive coding are well-known, but it may not be appropriate for all matters. Implementing predictive technology in a live matter or simulating live-matter circumstances best reveals the utility of the tool. And specific measurements on quality and costs can provide insight into the best, most practical uses for this emerging technology.
