Technology

Consensus Coding℠: A Game Changer for Human Review

Editor: What are some of the challenges of human review within the e-discovery process?

Regard: To date, the human review process can be characterized, at best, as “acceptable” from an industry perspective and as “inconsistent” for the companies that engage in it. Historically, document review has been conducted with a great deal of good-faith effort but without mechanisms for measuring or improving quality. Academic studies suggest that even well-informed document reviewers can and do disagree on a large percentage of decisions. Even when reviewers are given clear instructions, without quality control their coding decisions can vary greatly from person to person. Nevertheless, human review is a necessary part of the discovery process, whether linear or technology-driven.
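
For technically minded readers, the disagreement Regard describes is typically quantified in the information-retrieval literature with chance-corrected agreement statistics such as Cohen’s kappa. The sketch below is purely illustrative, with made-up votes; it is not part of the iDS platform.

```python
# Illustrative only: Cohen's kappa, a standard chance-corrected measure of
# inter-reviewer agreement. The sample votes below are hypothetical.

def cohens_kappa(labels_a, labels_b):
    """Agreement between two reviewers, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability both reviewers independently pick the
    # same label, based on each reviewer's label frequencies.
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two reviewers coding ten documents for responsiveness ("R" / "NR"):
reviewer_1 = ["R", "R", "NR", "R", "NR", "R", "R", "NR", "R", "NR"]
reviewer_2 = ["R", "NR", "NR", "R", "NR", "R", "NR", "NR", "R", "R"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # kappa = 0.40
```

Here the two reviewers agree on 70 percent of documents, but after correcting for chance the kappa of 0.40 indicates only moderate agreement, which is the kind of gap Regard is pointing to.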

Editor: iDiscovery Solutions (iDS) has developed a platform to improve human review, Consensus Coding. Can you explain how it came about?

Regard: As the industry began to adopt more robust processes and computer-augmented, algorithmic document review, it neglected to improve human review along the way. Consensus Coding℠ solves that problem by combining computer technology and algorithms with what we’ve learned from the science of information retrieval to make human review significantly better. Specifically, iDS has leveraged the ability of computers to control and reconcile the document tagging process, and we’ve applied quantitative principles of information retrieval to those tagging decisions. As a result, we can estimate the accuracy of individual document decisions and of the collection as a whole, as well as the performance of individual reviewers.

Editor: These days, human review and technology-assisted review are rarely discussed in the same conversation. What’s wrong with this picture?

Regard: Part of the problem is that people consider this an either/or choice. It’s not. Many companies that develop new technologies choose not only to promote the benefits of computer-assisted review, but also to disparage human review. What they’ve failed to realize is that computer-assisted review is just that – human review assisted by a computer. Computer-assisted reviews start with human review – the system must learn before it can function. We’ve taken a different approach with Consensus Coding℠, which was developed in accordance with the underlying principle that computer-assisted review is meant to enhance rather than replace the ability of humans to recognize relevancy or privilege. Computers can be used to improve the human review process by making it faster, more economical and more efficient. Ken can explain the technical details. It’s fascinating.

Shuart: From a technical standpoint, Consensus Coding℠ was built on the premise that we can generate a statistical confidence measurement on a yes or no proposition. Before we begin a review, our clients define the minimum acceptable level of confidence they want to achieve in the coding decisions made during the review. We set that number as the marker for accuracy on a document. During the review, a document is given a statistical measurement based on the way it has been coded. The document is randomly recirculated to others on the review team until its statistical grade has achieved the predefined confidence threshold. Once the document reaches that level of accuracy, it is set aside, while others continue in the workflow until all documents have met or exceeded the predetermined percentage. At that point, our review is complete. If a document does not trend toward desired accuracy, we know that it needs to be set aside for further review.
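
To make the mechanics concrete, here is a minimal sketch of such a recirculation loop. The confidence model is an assumption for illustration only (each vote treated as independently correct with a fixed probability, combined under a uniform prior over the two labels); the actual Consensus Coding℠ algorithms are proprietary.

```python
import random

# Illustrative sketch only; the production Consensus Coding algorithms are
# proprietary. We assume each vote is independently correct with probability
# P_CORRECT and compute the posterior confidence in the leading label.
P_CORRECT = 0.75

def label_confidence(votes):
    """Posterior confidence in the leading label, given 'yes'/'no' votes
    and a uniform prior over the two possible true labels."""
    yes = votes.count("yes")
    no = len(votes) - yes
    lik_yes = P_CORRECT ** yes * (1 - P_CORRECT) ** no  # votes if truth = yes
    lik_no = P_CORRECT ** no * (1 - P_CORRECT) ** yes   # votes if truth = no
    return max(lik_yes, lik_no) / (lik_yes + lik_no)

def review_document(doc_id, reviewers, threshold=0.95):
    """Recirculate a document to randomly chosen reviewers until its
    confidence meets the threshold, or flag it for further review."""
    votes = []
    for reviewer in random.sample(reviewers, len(reviewers)):
        votes.append(reviewer(doc_id))  # one more independent look
        if len(votes) >= 2 and label_confidence(votes) >= threshold:
            return votes, True          # done: set the document aside
    return votes, False                 # never converged: escalate it
```

Under these assumptions, two agreeing votes score 0.90 and three agreeing votes score about 0.96, so a unanimous document would clear a 95 percent threshold after three looks, while a contested one keeps circulating.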

Regard: One of the key benefits of the platform is that every decision can recalibrate our confidence score, both in the document and in the reviewer. We know that two equally trained people can disagree on decisions about a document. So we ask a few questions. What if one person has a better track record? Is his or her decision of greater value than a decision by others? What if you have three, four or five people vote? We know that if you crowdsource the decision, you can determine if larger group decisions are better than those made by just one or two people.

This is the essence of Consensus Coding℠. We use proprietary algorithms and workflows behind the scenes to quickly and precisely control who votes on a particular document, so it’s not always the same team. As we track the decisions of reviewers in real time, our weighted voting system adjusts with those decisions, allowing us to keep documents in review until we’ve reached the client’s predetermined accuracy threshold. Clients can literally decide whether they want or need to spend the money for a document review verified at 85, 90 or 95 percent accuracy. It really changes the way people think about document review.
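
One hypothetical way to illustrate the weighted-voting idea (again, the real algorithms are proprietary): treat each reviewer’s measured accuracy as the probability his or her vote is correct and combine the votes naive-Bayes style, so a strong track record counts for more than a job title.

```python
# Hypothetical weighted-vote aggregation in the spirit described above;
# not the actual Consensus Coding algorithm. Each reviewer's measured
# accuracy determines how strongly his or her vote counts.

def weighted_confidence(votes):
    """votes: list of (label, reviewer_accuracy) pairs, labels 'yes'/'no'.
    Returns (consensus_label, confidence), assuming a uniform label prior."""
    lik_yes = lik_no = 1.0
    for label, acc in votes:
        if label == "yes":
            lik_yes *= acc        # a correct vote if the truth is 'yes'
            lik_no *= 1 - acc     # an error if the truth is 'no'
        else:
            lik_yes *= 1 - acc
            lik_no *= acc
    total = lik_yes + lik_no
    if lik_yes >= lik_no:
        return "yes", lik_yes / total
    return "no", lik_no / total

# One reviewer with a 0.95 track record outweighs two 0.60 reviewers:
print(weighted_confidence([("yes", 0.95), ("no", 0.60), ("no", 0.60)]))
# -> ('yes', ~0.89)
```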

For example, we no longer rate a document decision based on the tenure or title of the reviewer; we base it on his or her actual capabilities and track record. The point is that clients can realize a fundamental shift in quality control and improved results where it makes the most sense and has the greatest impact. We are not just bringing the human side into the review process; we are improving it, which by extension improves the performance of the computer algorithms.

Editor: How is this technology implemented within a corporation or outside firms? Can it be incorporated into existing information handling systems?

Regard: We’ve integrated Consensus Coding℠ into Relativity; however, it works with any document review platform. All iDS needs are the identification numbers of the documents being reviewed and each reviewer’s vote on each document. We can weight reviewers’ decisions based on prior performance, identify which documents are finished in the review process versus which need to continue, and use those grades to assign the next set of documents to the best reviewers. And we do it all remotely in a batch process. Our system works equally well whether iDiscovery Solutions is hosting the documents, a vendor is hosting the documents, or a law firm or company is hosting in-house on its own document review platform.
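
Because all that crosses the wire are document IDs and votes, the exchange can be as thin as a flat file going one way and a status report coming back. Everything below, including the field names, CSV layout and default accuracy, is a hypothetical sketch rather than the actual iDS interface.

```python
import csv

# Hypothetical batch round trip; the actual iDS interface and formats are
# not public. Input: a CSV of votes (doc_id, reviewer, label). Output:
# documents that met the confidence threshold vs. those needing another pass.
THRESHOLD = 0.90

def doc_confidence(doc_votes):
    """Naive-Bayes confidence in the leading label; doc_votes is a list of
    (label, reviewer_accuracy) pairs, labels 'yes'/'no'."""
    lik_yes = lik_no = 1.0
    for label, acc in doc_votes:
        lik_yes *= acc if label == "yes" else 1 - acc
        lik_no *= (1 - acc) if label == "yes" else acc
    winner = "yes" if lik_yes >= lik_no else "no"
    return winner, max(lik_yes, lik_no) / (lik_yes + lik_no)

def process_batch(votes_csv_path, reviewer_accuracy):
    """Group votes by document, score each document, and split the set
    into finished documents and those to recirculate."""
    by_doc = {}
    with open(votes_csv_path, newline="") as f:
        for row in csv.DictReader(f):  # columns: doc_id, reviewer, label
            acc = reviewer_accuracy.get(row["reviewer"], 0.70)  # default
            by_doc.setdefault(row["doc_id"], []).append((row["label"], acc))
    finished, recirculate = {}, []
    for doc_id, doc_votes in by_doc.items():
        label, conf = doc_confidence(doc_votes)
        if conf >= THRESHOLD:
            finished[doc_id] = (label, conf)  # done: set the document aside
        else:
            recirculate.append(doc_id)        # route to another reviewer
    return finished, recirculate
```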

Editor: How much lead time do companies need before they start using the system?

Regard: They need zero training time and zero lead time, because there is absolutely no change in workflow for the reviewer. For example, reviewers using Relativity are tasked with doing exactly the same thing they do today: use Relativity, look at potentially responsive documents and make final decisions about responsiveness and privilege. We can assess a reviewer’s recommendations immediately, with no interruption to the continuous review of new documents. In fact, a reviewer may be looking at documents that have never been reviewed or at documents that someone looked at the day before, but he or she remains blind to that status and to any previous coding decisions.

Editor: The 2012 RAND report identifies document review as the single greatest e-discovery cost, at an incredible 73 percent of production costs. How can Consensus Coding help reduce costs?

Regard: Consensus Coding℠ measures the quality of review decisions on a per-seat basis. The preconceived notion that a lawyer with greater tenure automatically makes better decisions can actually be tested, and while that assumption often holds true, sometimes it doesn’t. The same logic applies to assessments of reviewers along any lines, such as level of education completed or even U.S. vs. non-U.S. reviewers. The point is that these soft variables can be taken into account through actual measurement of the final work product. This enables us to expand the options for clients as to who is reviewing which documents. There is no need to guess at quality; the accuracy of the reviewers is quantified.

As a result, choices can be made about how to staff each document collection, meaning we can help clients identify the type of education, skills and experience needed for their document review project, while always maintaining a targeted, measurable degree of quality. This allows our clients to reduce review costs considerably, because value decisions can be based on measured results. Further, these measurement capabilities enable us to collect high-quality decisions into document sets that are used to train a predictive coding engine. Clients find that investing in higher-quality seed-set training ensures greater accuracy in the remaining predictive coding processes, which ultimately saves them much more money.
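
The seed-set benefit follows directly from the confidence scores: only documents whose consensus label cleared a high bar are handed to the predictive coding engine as training examples. A hypothetical filter, not iDS code:

```python
# Hypothetical seed-set filter: train a predictive coding engine only on
# documents whose consensus label cleared a high confidence bar.

def build_seed_set(scored_docs, min_confidence=0.95):
    """scored_docs: doc_id -> (label, confidence). Returns (doc_id, label)
    training pairs whose confidence meets the bar."""
    return [(doc_id, label)
            for doc_id, (label, conf) in scored_docs.items()
            if conf >= min_confidence]

scored = {"D1": ("yes", 0.97), "D2": ("no", 0.88), "D3": ("yes", 0.99)}
print(build_seed_set(scored))  # [('D1', 'yes'), ('D3', 'yes')]
```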

Shuart: I’d like to add one final point on the synergies inherent in a process that scores both the reviewers and the level of accuracy in individual document decisions. At the end of the project, we look at the scores and can see which reviewers were really spot on, with no decisions overturned, and which reviewers had one or more overturns. Going forward, if we have another review on a similar subject matter, we can assign these proven reviewers, offer a focused review team, and improve the quality and efficiency of our work.
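
A toy version of that reviewer scorecard (hypothetical, not iDS code): once a document’s consensus label is settled, every vote cast on it can be graded against that label, and a reviewer’s accuracy is simply his or her hit rate.

```python
from collections import defaultdict

# Toy per-reviewer scorecard, not iDS code: grade each vote against the
# settled consensus label for its document.

def reviewer_accuracy(votes, consensus):
    """votes: list of (reviewer, doc_id, label); consensus: doc_id -> label.
    Returns reviewer -> fraction of votes matching the consensus."""
    hits = defaultdict(int)
    total = defaultdict(int)
    for reviewer, doc_id, label in votes:
        if doc_id in consensus:                # only grade settled documents
            total[reviewer] += 1
            hits[reviewer] += label == consensus[doc_id]
    return {r: hits[r] / total[r] for r in total}

votes = [("Ann", "D1", "yes"), ("Bob", "D1", "no"), ("Ann", "D2", "no"),
         ("Bob", "D2", "no"), ("Cam", "D1", "yes"), ("Cam", "D2", "no")]
consensus = {"D1": "yes", "D2": "no"}
print(reviewer_accuracy(votes, consensus))  # Ann 1.0, Bob 0.5, Cam 1.0
```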

Editor: What other important benefits are derived from the platform?

Regard: One of the most important benefits is defensibility, meaning the ability to measure a process, explain a process and repeat a process with the same results. Consensus Coding℠ facilitates all three. Because it’s been difficult to measure the quality of document reviews, lawyers and judges have been content with reasonable efforts. Now, our ability to apply the science of information retrieval and measure the quality of document review opens up possibilities. Lawyers presenting discovery results to the court and to opposing counsel can do so with confidence that discovery won’t be an issue because they can actually measure how well it went. They can invest more time on the issues of merits and damages.

The goal should be to get clients out of discovery, not through discovery, so they can go back to the business of doing business. This is in line with our overall goals at iDS, and I think it should be a goal of our industry as well.

Editor: iDS is unique in its services beyond discovery technology, namely in offering experts to testify in court as to the credibility of computer systems. Talk about these services in the context of Consensus Coding.

Regard: iDiscovery Solutions’ mission is to influence the intersection of law and technology. We are unique in that we offer more than just technology services; we are subject-matter experts who consult on cases, some of which require us to provide expert testimony in court. In fact, our primary focus is expert consulting on technology, computer science, math, statistics and computer forensics. Consensus Coding℠ strengthens our ability to expertly assist clients faced with discovery requests and discovery production. Our ability to provide expert testimony as well as expert consulting is consistent with our goal of applying best-of-breed technology and processes to make discovery less burdensome and help our clients get back to more important litigation issues – and back to business.

Editor: Is Consensus Coding a game changer?

Regard: Sometimes developments that really change an industry don’t make the loudest noise, but are the result of quietly cleaning up an area that was skipped over. When you apply the type of methodical precision that Consensus Coding℠ offers, the entire e-discovery lifecycle and litigation process benefits. We expect it not only to improve human review but also to provide an ancillary uplift to the entire Electronic Discovery Reference Model (EDRM).

We invite your readers to visit iDS at Booth #1521 at LegalTech NYC (Feb. 3–5, 2015) and schedule a demo of Consensus Coding℠.
