Post Process

Everything to do with E-discovery & ESI

Archive for February, 2010

When Should You Disclose Your Social Security Number?

Posted by rjbiii on February 25, 2010

Yahoo! Finance presents an article discussing when it is safe to give out your SSN, and when you should give more thought to doing so. From the article:

Just because someone asks for it doesn’t mean you have to comply, says Michael J. Arata, the author of “Identity Theft For Dummies,” especially since there are only a handful of organizations that actually have a valid need for it. For instance, anytime you’re applying for credit — for a new credit card, a loan, new telephone or cellular service — the creditor will need your Social Security number to run a credit check. You’ll also need to provide it if you are applying for federal or local government benefits such as Social Security, Medicare or Medicaid, unemployment insurance or disability. Another example: If you or your children receive services or aid at the state or local level, such as free or reduced fee lunch or financial aid. The local motor vehicle department, thanks to the USA PATRIOT Act, has the legal right to ask for Social Security numbers, too. In addition, when you complete a cash transaction totaling more than $10,000 you’ll be required to provide your number so that transaction can be reported to the Internal Revenue Service, says ITRC’s Foley.

The article contains a nice chart that divides organizations that request your card into “mandatory” and “optional” groups. It also has a sidebar explaining what the sections of your Social Security number mean.

Posted in Articles, Privacy | 1 Comment »

Implementing a Litigation Hold

Posted by rjbiii on February 24, 2010

Law.com has posted an article discussing trigger events and implementing litigation holds. This is the first of 7 parts on the subject. From the article:

As articulated by Judge Scheindlin in Pension Committee v. Banc of America, courts definitely do not want to wade through stacks of motions papers and days of hearings to determine if preservation efforts were sufficient to prevent the destruction of ESI and other documents. As a result, it is imperative for an organization to have in place a litigation hold policy and adequate procedures necessary to avoid going down the litigation “detour” of discovery sanctions motions.

The goal, on the other hand, is not perfection but rather development of a systematic approach to implementing litigation holds within your organization. Systematic, means repeatable and methodical. The idea is to build credibility. The purpose is to demonstrate reasonable efforts conducted in good faith, to search for ESI containing the truth and preserving it. While no system is foolproof, we developed the “Seven Steps” to help meet the litigation hold duties enumerated in recent litigation hold cases.

Posted in Articles, Data Management, Duty to Preserve, Litigation Hold | Leave a Comment »

Getting the Measure of E-Discovery

Posted by rjbiii on February 23, 2010

I’ve been involved in putting together a regimen for measuring e-discovery processes for a client. Not the first time I’ve done it, but as I go through the exercise, I find myself reflecting on the metrics used.

One of my first observations is that the standard reporting portfolios for most platforms still need enhancement. In comparing various platforms and work flows, I become frustrated with the inability to obtain fairly basic reports out of the box. I’ll not give specific examples, because I’m not writing this to pick on anyone in particular. The importance of reporting for each phase of an e-discovery project (see the EDRM for a nice description of these phases) has increased with the recognition by those in the industry that better project management is needed in this space. There’s even a blog on the subject (that, incidentally, also discusses metrics). In order to manage the project correctly, however, you must have a handle on the current status of processing, review, and production. The platforms I’ve looked at fall short on that score. The usual solution is to tap into the back-end database to create the reports I want.

So a couple of things I look for when I examine technology are:

  1. The standard reporting portfolio for a platform; and
  2. Accessibility of the back-end database for creating reports not otherwise provided (a hedged sketch of the kind of query I mean follows this list).
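
By way of illustration, here is a minimal sketch of that kind of back-end query. The schema (a documents table with custodian and review_status columns) is entirely hypothetical; every platform names these things differently, and the real back end is usually SQL Server or the like rather than SQLite.

    import sqlite3

    # Stand-in for the platform's back-end database. The table name and
    # columns are invented for this example.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE documents (custodian TEXT, review_status TEXT)")
    conn.executemany(
        "INSERT INTO documents VALUES (?, ?)",
        [("Smith, J.", "unreviewed"), ("Smith, J.", "responsive"),
         ("Doe, A.", "privileged"), ("Doe, A.", "unreviewed")],
    )

    # The report the platform wouldn't give me out of the box:
    # document counts by custodian and review status.
    query = """SELECT custodian, review_status, COUNT(*)
               FROM documents
               GROUP BY custodian, review_status
               ORDER BY custodian"""
    for custodian, status, docs in conn.execute(query):
        print(f"{custodian:12} {status:12} {docs:6}")
    conn.close()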

Now, to some specific metrics. These can vary greatly, depending on work flow. A great result in one environment might be lousy in the next. Here, I’m considering a standard project with culling, attorney review, and production. When discussing loading, or ingestion, I examine a number of things.

Ingestion Speed. This measures the volume of data that can be loaded into an application, and is expressed in volume over time (e.g., GBs / hour). It is not the most important metric; slightly slower ingestion speeds should not become a concern in a well-managed project. Large discrepancies in this, however, might serve to send attorney reviewers to the break room just a bit too often.
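
The arithmetic couldn’t be simpler, but for concreteness, a quick sketch (the figures are invented):

    def ingestion_speed(gb_loaded, hours):
        """Ingestion speed as volume over time (GBs / hour)."""
        return gb_loaded / hours

    # Hypothetical run: 120 GB loaded during an 8-hour overnight window.
    print(ingestion_speed(120, 8))  # 15.0 GB/hour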

Ingestion Error PCT. This is important, as it affects data integrity. It measures the inaccuracy of the ingestion process. Whenever a platform incorrectly separates a component from a document (say, a gif in the signature of an e-mail), it inflates the document count and leads to greater review times (or culling/processing times). And should a document not be correctly loaded and go missing from the review-set, potentially relevant data is omitted from the review. Why measure inaccuracy rather than accuracy? It’s my way of focusing on what I think should be emphasized. Differences should be (relatively) small…so 99% versus 97% doesn’t look like a big difference. But 1% versus 3% does.
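
A sketch of that point about framing, with invented counts for two hypothetical platforms ingesting the same collection:

    def ingestion_error_pct(docs_in, docs_failed_or_split):
        """Share of incoming documents that were dropped or incorrectly split."""
        return 100.0 * docs_failed_or_split / docs_in

    # Framed as accuracy, 99% vs. 97% looks trivial; framed as error,
    # the same data shows a threefold difference.
    print(ingestion_error_pct(200_000, 2_000))  # 1.0
    print(ingestion_error_pct(200_000, 6_000))  # 3.0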

Culling Volume. This measure is an industry standard. It measures the total volume removed by the use of certain processes. File-type filtering, removing known files (or de-NISTing, as it is sometimes called), and date-range filtering are three commonly used culling methods. De-duplication and “near” de-duplication are often factored in as well. Another method is domain analysis and culling (flagging or removing junk and privileged domains). Culling volume can be expressed in absolute terms (GBs or MBs removed), or as a percentage of the dataset (e.g., removed 30% of the documents).
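
Both expressions of the measure, sketched with invented numbers:

    def culling_volume(gb_before, gb_after):
        """Volume removed by culling, in GBs and as a percentage of the set."""
        removed = gb_before - gb_after
        return removed, 100.0 * removed / gb_before

    # Hypothetical: de-NISTing plus file-type and date filters cut 500 GB to 350 GB.
    gb, pct = culling_volume(500, 350)
    print(f"removed {gb} GB ({pct:.0f}% of the dataset)")  # removed 150 GB (30% ...)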

Culling Error PCT. What percentage, if any, of the documents culled would have been considered relevant or privileged? In other words, which culled documents should have been reviewed? Now, how do you obtain this figure? Only with a lot of work: you’d basically have to review the culled document set. But it would be an interesting experience.
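
Short of reviewing everything that was culled, a random sample would at least yield an estimate. A minimal sketch of that idea; the sample size is arbitrary and review_fn stands in for a human review decision, so nothing here is standard practice:

    import random

    def estimate_culling_error_pct(culled_doc_ids, review_fn, sample_size=400):
        """Estimate the share of culled documents that were actually relevant
        or privileged by reviewing a random sample rather than the whole set."""
        sample = random.sample(culled_doc_ids, min(sample_size, len(culled_doc_ids)))
        wrongly_culled = sum(1 for doc_id in sample if review_fn(doc_id))
        return 100.0 * wrongly_culled / len(sample)

    # Hypothetical demo: pretend exactly 2% of 50,000 culled documents mattered.
    culled = list(range(50_000))
    print(estimate_culling_error_pct(culled, lambda doc_id: doc_id % 50 == 0))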

Review-set Precision. The percentage of relevant and privileged documents in the dataset that is presented for review by attorneys. This item greatly affects review time, which is by far the largest cost component on a project.
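
As a formula, with invented counts:

    def review_set_precision(relevant_or_privileged, total_in_review_set):
        """Share of the review set that actually mattered; low precision
        means attorneys billed hours reading junk."""
        return 100.0 * relevant_or_privileged / total_in_review_set

    # Hypothetical: 18,000 of 60,000 documents in the review set were
    # relevant or privileged.
    print(review_set_precision(18_000, 60_000))  # 30.0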

Search Term Analysis. This is not so much a measure of the technology used as of the search protocol itself. It gauges the effectiveness of each term and can be used to refine the search criteria.
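
A sketch of the sort of per-term report I have in mind; the terms, document ids, and tallies are all invented:

    def term_report(term_hits, responsive_ids):
        """For each search term, the number of documents it pulled in and
        the fraction of those that turned out to be responsive."""
        for term, hits in sorted(term_hits.items()):
            rate = 100.0 * len(hits & responsive_ids) / len(hits) if hits else 0.0
            print(f"{term:25} hits={len(hits):5} responsive={rate:5.1f}%")

    term_report(
        {'"acme" AND "merger"': {"d1", "d2", "d3"}, '"smith"': {"d2", "d4"}},
        responsive_ids={"d2", "d3"},
    )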

Review Rate. This metric is applied against both the review team as a whole and individual reviewers. It is expressed in DDH (document decisions per hour) and is vital in managing review. Faster is not always better, but the review as a whole usually has to move at a certain pace for deadlines to be met.
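
The pacing question lends itself to quick arithmetic. A sketch, with invented staffing numbers:

    def required_ddh(docs_remaining, reviewers, hours_per_day, days_left):
        """Document decisions per hour each reviewer must sustain to meet
        the deadline."""
        return docs_remaining / (reviewers * hours_per_day * days_left)

    # Hypothetical: 150,000 documents, 20 reviewers, 8-hour days, 15 days left.
    print(required_ddh(150_000, 20, 8, 15))  # 62.5 decisions/hour per reviewer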

Reviewer Accuracy. Used to provide constructive feedback to reviewers when they make errors with respect to classifying (or “tagging”) documents. Obtained by using QC processes.
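
A minimal sketch of pulling the number out of a QC pass; the tags and the sample are invented:

    def reviewer_accuracy(qc_sample):
        """Accuracy from QC: qc_sample pairs the reviewer's original tag
        with the QC reviewer's corrected tag for each sampled document."""
        agreed = sum(1 for original, corrected in qc_sample if original == corrected)
        return 100.0 * agreed / len(qc_sample)

    # Hypothetical QC sample of four decisions, one overturned.
    sample = [("responsive", "responsive"), ("non-responsive", "non-responsive"),
              ("responsive", "privileged"), ("non-responsive", "non-responsive")]
    print(reviewer_accuracy(sample))  # 75.0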

Production Rate (or Conversion Rate). Unless you’re really under the gun, this metric shouldn’t be a vital one for meeting deadlines, but it is important to know a system’s throughput for converting natives to TIFFs and generating deliverables for exchange.

Production Accuracy. Not really a quantifiable measure, but it should provide a general sense of how well the production specs were followed and whether the material produced was what counsel requested.

This isn’t the full spectrum. There are a number of other metrics available, but I think these provide a nice foundation for measuring effectiveness.

Posted in Best Practices, Project Management | Tagged: | Leave a Comment »

Science Daily: New Approach to Generating Truly Random Numbers May Improve Internet Security, Weather Forecasts

Posted by rjbiii on February 22, 2010

You read correctly: weather forecasts. The article says this with respect to the importance of randomness:

According to Bernhard Fechner of the University of Hagen, and Andre Osterloh of BTC AG, in Germany, the “quality” of a random number is a measure of how truly random the number is. This quality affects significantly any security or simulation in which it is used. If a so-called random number is not truly random, then someone could predict a security key and crack the Internet encryption on bank accounts, e-commerce sites or secure government websites, for instance. Similarly, if the random numbers used in scientific models of the weather, climate, or the spread of disease and economic boom and bust are predictable, then systematic errors will creep into the models and make the predictions unreliable.

Posted in Articles, Computer Security, Technology | Leave a Comment »

ABA’s Chart Comparing Lit Support Applications

Posted by rjbiii on February 22, 2010

May be found here. Note it is a PDF.

Posted in Uncategorized | Leave a Comment »

E-Discovery Still Stirring the Pot

Posted by rjbiii on February 19, 2010

While we continue to hear complaints from corporate clients about the cost of e-discovery, the issue is beginning to affect attorneys and companies in other ways as well. As an example:

The Recorder posts an article discussing the suspension of a prosecutor once considered a rising star. From the article:

A California State Bar Court appellate panel has upheld a four-year suspension for former Santa Clara County prosecutor Benjamin Field, despite an amicus curiae brief from the California District Attorneys Association warning of a chilling effect on prosecutions.

The Bar review panel found it “inexcusable” and “disturbing” that Field, once a star in South Bay legal circles and considered a viable candidate for a judgeship, concealed evidence and ignored judges’ orders over a 10-year period. The ruling, released late Friday, also affirmed five years of probation.

In another story, ABC News tells the tale of the former Toyota attorney who is ready to divulge information about the car maker’s “illegal discovery practices” to Congress. Quoting the article:

“The information and documents I have regarding Toyota’s deceptive and illegal discovery practices will one day become publicly available,” [attorney Dimitrios] Biller said. “Our judicial system, government and the American people need to know how Toyota operates with total disregard of our laws and legal system.”

Finally, Judge Shira Scheindlin, of Zubulake fame, issued a new ruling that imposed sanctions on multiple plaintiffs for their failures to preserve evidence, despite the fact that “[t]his case [did] not present any egregious examples of litigants purposefully destroying evidence.” They just didn’t preserve the stuff they should have.

With any litigation, decisions about e-discovery processes involve risk assessments. I’d say some parties are not adequately evaluating that risk.

Posted in Articles, EDD Industry, Judge Shira A. Scheindlin, Litigation Hold, Sanctions | 2 Comments »

Northern Kentucky Law Review E-Discovery Issue Now Online

Posted by rjbiii on February 5, 2010

Last year the Salmon P. Chase College of Law at Northern Kentucky University sponsored a symposium on E-Discovery, and subsequently released an issue of its journal chock-full of articles on the subject, including one of mine, entitled Avoiding an E-Discovery Odyssey (PDF). Now the entire issue is available online for free. There is some interesting stuff in there. Happy reading!

Posted in Articles, Effectively Managing E-Discovery | Leave a Comment »