Somehow, editors across the country have gotten the idea that computers will replace lawyers in litigation. The Wall Street Journal asked, "Why Hire a Lawyer? Computers Are Cheaper," and The New York Times promised a world of "Armies of Lawyers, Replaced by Cheaper Software." Columnist Paul Krugman even picked up the theme to discuss the economy. Most recently, the New Scientist suggested that "Lawyerbots Take the Drudgery Out of Law."
It’s certainly a compelling narrative, but the discussion is obscuring the real issues complicating litigation. (We’re not sure what a lawyerbot is, but it is fun to imagine the Pentagon has a prototype Johnny 5 in a lab somewhere wearing a pinstripe suit, loafers, and a laser cannon.)
None of this is really about robots or computers taking legal jobs. The topic is the use of machine learning in litigation to review documents. The review process is the most painful, expensive, and labor-intensive phase of discovery, but the problem isn't whether machines or humans will do it better. In fact, research shows that human review and computer-assisted review can be equally ineffective. For all the attention it gets, predictive coding has only been used in a handful of cases involving especially large data sets.
It’s the Process, Not the Technology
While "predictive coding" sounds like a great new technology poised to save litigation, it's actually a process. Predictive coding applies advanced machine learning algorithms to the relevancy review, using computers to recognize likely responsive documents in eDiscovery.
But that only works if human reviewers examine sample documents the computer returns, and then use that material to train the computer to find similar documents. It's an iterative process that must be repeated over and over with careful input from humans. The image of a lawyerbot is fun, but in practice it means a smaller number of lawyers working more closely with the technology to train it. It certainly does not mean computers will replace humans.
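For readers who want a concrete picture of that train-review-retrain loop, here is a minimal sketch in Python, assuming a generic open-source text classifier (scikit-learn). The function and variable names are hypothetical, and this is purely illustrative; it is not how any particular review platform works.

```python
# A minimal sketch of one round of the iterative review loop described above,
# assuming scikit-learn. The "documents" are plain-text strings, and
# labeled_ids/labels represent the seed set human reviewers have already coded
# (1 = responsive, 0 = not responsive). All names here are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def predictive_coding_round(documents, labeled_ids, labels, batch_size=50):
    """Train on the human-coded documents, then surface the unlabeled
    documents the model is least certain about for the next human review."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(documents)

    # Train on whatever the reviewers have labeled so far.
    model = LogisticRegression(max_iter=1000)
    model.fit(X[labeled_ids], labels)

    # Score every unlabeled document for likely responsiveness.
    unlabeled_ids = [i for i in range(len(documents)) if i not in set(labeled_ids)]
    probs = model.predict_proba(X[unlabeled_ids])[:, 1]

    # Pick the documents the model is least sure about; humans review these
    # next, and their decisions feed the following training round.
    uncertainty = np.abs(probs - 0.5)
    next_batch = [unlabeled_ids[i] for i in np.argsort(uncertainty)[:batch_size]]
    return model, next_batch
```

Each round returns a newly trained model plus the documents the humans should code next; the loop repeats until the team is satisfied with the results. The point of the sketch is simply that the lawyers' judgments are the training data at every step.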
Not Ready for Prime Time
Jason Baron is the National Archives’ Director of Litigation and is one of the founding coordinators of the TREC Legal Track, a search project organized through the National Institute of Standards and Technology to evaluate search protocols used in eDiscovery.
He recently spoke at the Seventh Circuit Electronic Discovery Workshop on Computer-Assisted Review. “Again let me be clear: I remain a strong cheerleader and advocate for advanced forms of search and document review,” he said. “But there are dozens of open questions that remain for further research in this area, and would caution against making judicial short-cuts to findings that say this area is essentially a ‘solved problem’ for lawyers. We ain’t there yet.”
Experienced litigators agree. Not surprisingly, Sidley Austin attorney David Breau told the Wall Street Journal, “Computers excel at sifting through a big pile of stuff and sorting it into categories.” But he went on to note that lawyers are still needed to review the documents once they are sorted before turning them over to the other side.
The most pernicious problems that bedevil eDiscovery still need to be addressed by humans: identifying and protecting work product, identifying sources of data and the custodians of that information, and, of course, finding the smoking-gun emails that make your case. There is no machine that does these things.
It will take time for predictive coding to fully mature. The TREC Legal Track has been cancelled for at least a year, although at least two other studies of advanced search protocols are underway. Courts will eventually settle the predictive coding question.
But for now, lawyers need to remain focused on the core problems in litigation: finding platforms that can manage large volumes of evidence and review processes that protect privilege, all while controlling costs. Machine learning can play a role in very large cases, but only as one component of a well-designed eDiscovery process.