Post Process

Everything to do with E-discovery & ESI

Getting the Measure of E-Discovery

Posted by rjbiii on February 23, 2010

I’ve been involved in putting together a regimen for measuring e-discovery processes for a client. Not the first time I’ve done it, but as I go through the exercise, I find myself reflecting on the metrics used.

One of my first observations is that the standard reporting portfolios for most platforms still need enhancement. In comparing various platforms and workflows, I become frustrated with the inability to obtain fairly basic reports out of the box. I’ll not give specific examples, because I’m not writing this to pick on anyone in particular. The importance of reporting for each phase of an e-discovery project (see the EDRM for a nice description of these phases) has increased as the industry has recognized that better project management is needed in this space. There’s even a blog on the subject (that, incidentally, also discusses metrics). In order to manage the project correctly, however, you must have a handle on the current status of processing, review, and production. The platforms I’ve looked at fall short on that score. The usual solution is to tap into the back-end database to create the reports I want.

So a couple of things I look for when I examine technology are:

  1. The standard reporting portfolio for a platform; and
  2. Accessibility of the back-end database for creating reports that aren’t provided out of the box.
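To make item 2 a little more concrete, here’s a minimal sketch of the kind of status report I typically end up building by hand when the platform won’t produce it. The table and column names are entirely hypothetical, and a real back end will be a full RDBMS rather than the in-memory SQLite used here for brevity; treat it as an illustration of the shape of the query, not anything vendor-specific.

```python
import sqlite3

# Hypothetical schema for illustration only; real platform back ends differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE documents (doc_id INTEGER PRIMARY KEY, "
    "custodian TEXT, review_status TEXT)"
)
conn.executemany(
    "INSERT INTO documents (custodian, review_status) VALUES (?, ?)",
    [
        ("Smith", "responsive"),
        ("Smith", "not reviewed"),
        ("Jones", "privileged"),
        ("Jones", "not reviewed"),
        ("Jones", "non-responsive"),
    ],
)

# The basic "where do we stand?" report: document counts by custodian and status.
report = conn.execute(
    "SELECT custodian, review_status, COUNT(*) "
    "FROM documents GROUP BY custodian, review_status "
    "ORDER BY custodian, review_status"
).fetchall()

for custodian, status, count in report:
    print(f"{custodian:10s} {status:15s} {count}")
```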

Now, to some specific metrics. These can vary greatly, depending on workflow. A great result in one environment might be lousy in the next. Here, I’m considering a standard project with culling, attorney review, and production. When discussing loading, or ingestion, I examine a number of things.

Ingestion Speed. This measures the volume of data that can be loaded into an application, and is expressed in volume over time (e.g., GBs / hour). It is not the most important metric; slightly slower ingestion speeds should not become a concern in a well-managed project. Large discrepancies here, however, might serve to send attorney reviewers to the break room just a bit too often.
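To make the arithmetic concrete (the numbers below are made up): the calculation itself is trivial, and its real value is projecting when the rest of the collection will be loaded and ready for review.

```python
def ingestion_speed_gb_per_hour(gb_loaded: float, hours_elapsed: float) -> float:
    """Volume over time, e.g. GBs per hour."""
    return gb_loaded / hours_elapsed

# Illustrative figures only: 120 GB loaded in 16 hours, 300 GB still to go.
rate = ingestion_speed_gb_per_hour(120, 16)   # 7.5 GB/hour
hours_remaining = 300 / rate                  # 40 hours
print(f"{rate:.1f} GB/hour; ~{hours_remaining:.0f} hours to finish loading")
```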

Ingestion Error PCT. This is important, and affects data integrity. This measures the inaccuracy of the ingestion process. Whenever a platform splits a component out of a document (say, a GIF in the signature of an e-mail), it increases the document count and leads to greater review (or culling/processing) times. Should a document not be correctly loaded and go missing from the review set, then potentially relevant data is omitted from the review. Why measure inaccuracy rather than accuracy? My way of focusing on what I think should be emphasized. Differences should be (relatively) small…so 99% to 97% doesn’t look like a big difference. But 1% to 3% does.
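A made-up example of why I frame this as error rather than accuracy: stated as error, the gap between two platforms reads as a threefold difference rather than a two-point one.

```python
def ingestion_error_pct(docs_with_problems: int, docs_ingested: int) -> float:
    """Share of ingested documents that were split, dropped, or otherwise mishandled."""
    return 100.0 * docs_with_problems / docs_ingested

# Hypothetical: platform A mishandles 1,000 of 100,000 docs; platform B mishandles 3,000.
a = ingestion_error_pct(1_000, 100_000)   # 1.0%
b = ingestion_error_pct(3_000, 100_000)   # 3.0%
print(f"Accuracy: {100 - a:.0f}% vs {100 - b:.0f}%  (looks close)")
print(f"Error:    {a:.0f}% vs {b:.0f}%  (platform B mishandles three times as many documents)")
```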

Culling Volume. This measure is an industry standard. It measures the total volume removed by the use of certain processes. File-type filtering, removing known files (or de-NISTing, as it is sometimes referred to), and date range filtering are three commonly used culling methods. De-duplication and “near” de-duplication are often factored in as well. Another method includes domain analysis and culling (flagging or removing junk and privileged domains). Culling volume can be expressed in terms of volume (obviously), using GBs or MBs, and it can be expressed as a percentage of the dataset (removed 30% of the documents).
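Here’s a rough sketch of how a step-by-step culling report might look, showing both the GBs removed and the percentage of the original dataset each method accounts for. The step names and figures are invented.

```python
# Hypothetical culling pipeline: starting volume and GBs removed at each step.
starting_gb = 500.0
steps = [
    ("De-NISTing (known files)", 40.0),
    ("File-type filtering", 75.0),
    ("Date-range filtering", 60.0),
    ("De-duplication", 110.0),
    ("Junk-domain culling", 15.0),
]

remaining = starting_gb
for name, removed_gb in steps:
    remaining -= removed_gb
    pct_of_original = 100.0 * removed_gb / starting_gb
    print(f"{name:28s} -{removed_gb:6.1f} GB ({pct_of_original:4.1f}% of original)")

total_removed = starting_gb - remaining
print(f"Total culled: {total_removed:.1f} GB "
      f"({100.0 * total_removed / starting_gb:.0f}% of the dataset); "
      f"{remaining:.1f} GB goes to review")
```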

Culling Error PCT. What percentage, if any, of documents culled would have been considered relevant or privileged? In other words, what culled documents should have been reviewed? Now, how do you obtain this figure? Only with a lot of work. You’d basically have to review the culled document-set. But it would be an interesting experience.

Review-set Precision. The percentage of the dataset presented for attorney review that is relevant or privileged. This item greatly affects review time, which is by far the largest cost component on a project.
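With invented numbers, the calculation looks like this; every point below 100% is attorney time spent reading material that didn’t need to be there.

```python
def review_set_precision(relevant_or_privileged: int, review_set_size: int) -> float:
    """Share of the review set that is actually relevant or privileged."""
    return 100.0 * relevant_or_privileged / review_set_size

# Hypothetical: 200,000 documents promoted to review, 60,000 turn out to matter.
precision = review_set_precision(60_000, 200_000)   # 30%
print(f"Review-set precision: {precision:.0f}%  "
      f"(roughly {100 - precision:.0f}% of review time goes to non-relevant material)")
```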

Search Term Analysis. This is not so much a measure of the technology used, but looks at the effectiveness of various components of the search protocol. It measures the effectiveness of each term and can be used to improve search criteria.
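As a hedged sketch, a per-term report might track, for each term, how many documents it hits, how many it hits uniquely (no other term catches them), and what share of its hits reviewers ultimately marked relevant. Everything below is invented for illustration.

```python
# Hypothetical search-term report: hits per term, hits unique to that term,
# and the share of its hits that reviewers marked relevant.
terms = {
    #  term            (hits, unique_hits, relevant_hits)
    "merger":         (12_000,  3_500, 4_800),
    "side letter":    (   900,    400,   650),
    "confidential":   (55_000, 30_000, 2_100),
}

for term, (hits, unique_hits, relevant_hits) in terms.items():
    relevance_rate = 100.0 * relevant_hits / hits
    print(f"{term:15s} hits={hits:6d}  unique={unique_hits:6d}  "
          f"relevant={relevance_rate:5.1f}%")
# A term like "confidential" above pulls in huge volume at a low relevance
# rate, which makes it a candidate for narrowing or dropping from the protocol.
```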

Review Rate. This metric is applied against both the review team as a whole and individual reviewers. It is expressed in DDH (document decisions per hour) and is vital in managing review. Faster is not always better, but the review as a whole usually has to move at a certain pace for deadlines to be met.
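A quick back-of-the-envelope showing how DDH feeds into deadline planning (all figures invented):

```python
def ddh(document_decisions: int, hours_worked: float) -> float:
    """Document decisions per hour, for an individual reviewer or the whole team."""
    return document_decisions / hours_worked

# Hypothetical: the team made 7,200 decisions over 160 team-hours yesterday,
# 150,000 documents remain, and 20 reviewers work 8 hours a day.
team_rate = ddh(7_200, 160)                 # 45 DDH
docs_remaining = 150_000
team_hours_per_day = 20 * 8                 # 160 hours/day
days_needed = docs_remaining / (team_rate * team_hours_per_day)
print(f"Team rate: {team_rate:.0f} DDH; ~{days_needed:.0f} review days remaining")
```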

Reviewer Accuracy. Used to provide constructive feedback to reviewers when they make errors with respect to classifying (or “tagging”) documents. Obtained by using QC processes.
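One minimal sketch, assuming a second-pass QC sample in which each sampled decision is either upheld or overturned (reviewers and figures invented):

```python
# Hypothetical QC results: for each reviewer, (decisions sampled, decisions overturned).
qc_results = {
    "Reviewer A": (200, 6),
    "Reviewer B": (200, 22),
    "Reviewer C": (150, 9),
}

for reviewer, (sampled, overturned) in qc_results.items():
    accuracy = 100.0 * (sampled - overturned) / sampled
    print(f"{reviewer}: {accuracy:.1f}% of sampled decisions upheld")
```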

Production Rate (or Conversion Rate). Unless you’re really under the gun, this metric shouldn’t be a vital one for meeting deadlines, but it is important to know a system’s throughput rate for converting natives to TIFFs and generating deliverables for exchange.

Production Accuracy. Not really a quantifiable measure, but it should provide a general sense of how well production specs were followed and whether the material produced was what counsel requested.

This isn’t the full spectrum. There are a number of others available, but I think these provide a nice foundation for measuring effectiveness.
