Post Process

Everything to do with E-discovery & ESI

Archive for the ‘Best Practices’ Category

Getting the Measure of E-Discovery

Posted by rjbiii on February 23, 2010

I’ve been involved in putting together a regimen for measuring e-discovery processes for a client. Not the first time I’ve done it, but as I go through the exercise, I find myself reflecting on the metrics used.

One of my first observations is that the standard reporting portfolios for most platforms still need enhancement. In comparing various platforms and workflows, I have become frustrated with the inability to obtain fairly basic reports out of the box. I’ll not give specific examples, because I’m not writing this to pick on anyone in particular. The importance of reporting for each phase of an e-discovery project (see the EDRM for a nice description of these phases) has increased with the recognition by those in the industry that better project management is needed in this space. There’s even a blog on the subject (that, incidentally, also discusses metrics). In order to manage the project correctly, however, you must have a handle on the current status of processing, review, and production. The platforms I’ve looked at fall short on that score. The usual solution is to tap into the back-end database to create the reports I want.

So a couple of things I look for when I examine technology are:

  1. The standard reporting portfolio for a platform; and
  2. Accessibility of the back-end database for creating reports not provided out of the box.

Now, to some specific metrics. These can vary greatly, depending on workflow. A great result in one environment might be lousy in the next. Here, I’m considering a standard project with culling, attorney review, and production. When discussing loading, or ingestion, I examine a number of things.

Ingestion Speed. This measures the volume of data that can be loaded into an application, and is expressed in volume over time (e.g., GBs / hour). It is not the most important metric; slightly slower ingestion speeds should not become a concern in a well-managed project. Large discrepancies here, however, might serve to send attorney reviewers to the break room just a bit too often.

Ingestion Error PCT. This is important, and affects data integrity. It measures the inaccuracy of the ingestion process. Whenever a platform separates a component in a document (say a gif in the signature of an e-mail), it increases the document count and leads to greater review times (or culling/processing times). Should a document not be correctly loaded and go missing from the review set, then potentially relevant data is omitted from the review. Why measure inaccuracy rather than accuracy? It’s my way of focusing on what I think should be emphasized. Differences should be (relatively) small…so 99% to 97% doesn’t look like a big difference. But 1% to 3% does.
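
To make the two ingestion metrics concrete, here is a minimal sketch (in Python, with made-up figures) of how they might be computed from a load report. The treatment of “error” as the document-count discrepancy plus exceptions is my own simplification, not a platform standard.

```python
from datetime import datetime

def ingestion_speed_gb_per_hour(gb_loaded: float, start: datetime, end: datetime) -> float:
    """Volume loaded divided by elapsed wall-clock time, in GB/hour."""
    hours = (end - start).total_seconds() / 3600
    return gb_loaded / hours

def ingestion_error_pct(source_docs: int, loaded_docs: int, exceptions: int) -> float:
    """Inaccuracy of the load: documents gained (e.g., embedded images broken
    out) or lost, plus outright exceptions, as a share of the source population."""
    discrepancy = abs(loaded_docs - source_docs) + exceptions
    return 100.0 * discrepancy / source_docs

# Made-up example: 120 GB loaded over an 8-hour window; 100,000 source
# documents become 103,500 records, with 250 exception files.
speed = ingestion_speed_gb_per_hour(120, datetime(2010, 2, 22, 8, 0), datetime(2010, 2, 22, 16, 0))
error = ingestion_error_pct(100_000, 103_500, 250)
print(f"{speed:.1f} GB/hr ingestion speed, {error:.2f}% ingestion error")
```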

Culling Volume. This measure is an industry standard. It measures the total volume removed by the use of certain processes. File-type filtering, removing known files (or de-NISTing, as it is sometimes referred to), and date range filtering are three commonly used culling methods. De-duplication and “near” de-duplication are often factored in as well. Another method includes domain analysis and culling (flagging or removing junk and privileged domains). Culling volume can be expressed in terms of volume (obviously), using GBs or MBs, and it can be expressed as a percentage of the dataset (removed 30% of the documents).

Culling Error PCT. What percentage, if any, of documents culled would have been considered relevant or privileged? In other words, what culled documents should have been reviewed? Now, how do you obtain this figure? Only with a lot of work. You’d basically have to review the culled document-set. But it would be an interesting experience.
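
A rough sketch of both culling measures follows, assuming you can export the size of the collection before and after culling plus a list of culled document IDs; the relevance check on the sample would be a human reviewer in practice, stubbed out here with a placeholder.

```python
import random

def culling_volume_pct(original_gb: float, remaining_gb: float) -> float:
    """Share of the collection removed by filtering, de-NISTing, de-duping, etc."""
    return 100.0 * (original_gb - remaining_gb) / original_gb

def estimated_culling_error_pct(culled_ids: list, sample_size: int, is_relevant) -> float:
    """Estimate how much of the culled set should have been reviewed by pulling
    a random sample and applying a relevance check (in practice, a human reviewer)."""
    sample = random.sample(culled_ids, min(sample_size, len(culled_ids)))
    hits = sum(1 for doc_id in sample if is_relevant(doc_id))
    return 100.0 * hits / len(sample)

# Fabricated figures: 500 GB culled down to 180 GB, and a 400-document sample
# of the culled set checked against a stand-in for reviewer judgments.
print(f"{culling_volume_pct(500, 180):.0f}% of the collection culled")
culled_ids = [f"DOC{i:06d}" for i in range(250_000)]
relevant_stub = set(random.sample(culled_ids, 4_000))
print(f"{estimated_culling_error_pct(culled_ids, 400, lambda d: d in relevant_stub):.1f}% estimated culling error")
```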

Review-set Precision. The percentage of relevant and privileged documents in the dataset that is presented for review by attorneys. This item greatly affects review time, which is by far the largest cost component on a project.

Search Term Analysis. This is not so much a measure of the technology used, but looks at the effectiveness of various components of the search protocol. It measures the effectiveness of each term and can be used to improve search criteria.
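
A minimal sketch of how review-set precision and a per-term hit report might be computed, assuming the review platform can export, for each search term, the set of document IDs it hit and the set of IDs ultimately coded responsive or privileged. The terms, IDs, and counts below are fabricated.

```python
def review_set_precision_pct(relevant_or_privileged: int, total_in_review_set: int) -> float:
    """Share of the review set that actually mattered: relevant plus privileged
    documents over everything the attorneys had to look at."""
    return 100.0 * relevant_or_privileged / total_in_review_set

def term_effectiveness_report(term_hits: dict, responsive_ids: set) -> None:
    """For each search term, report how many documents it pulled in and what
    share of those were ultimately coded responsive."""
    for term, hit_ids in sorted(term_hits.items()):
        rate = 100.0 * len(hit_ids & responsive_ids) / len(hit_ids) if hit_ids else 0.0
        print(f"{term:<20} {len(hit_ids):>6} hits  {rate:5.1f}% responsive")

# Fabricated example: 12,400 documents promoted to review, 3,100 of which were
# relevant or privileged, and three terms with overlapping hit sets.
print(f"Review-set precision: {review_set_precision_pct(3_100, 12_400):.0f}%")
responsive = {"D1", "D3", "D4"}
hits = {
    "floppy & defect": {"D1", "D3", "D4", "D7"},
    "FDC": {"D1", "D2", "D5", "D6", "D7", "D8"},
    "patent": {"D3", "D9"},
}
term_effectiveness_report(hits, responsive)
```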

Review Rate. This metric is applied against both the review team as a whole and individual reviewers. It is expressed in DDH (document decisions per hour) and is vital in managing review. Faster is not always better, but the review as a whole usually has to move at a certain pace for deadlines to be met.

Reviewer Accuracy. Used to provide constructive feedback to reviewers when they make errors with respect to classifying (or “tagging”) documents. Obtained by using QC processes.
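
Here is a short sketch, with invented figures, of how review rate and reviewer accuracy might be tracked: DDH for a reviewer, a projected completion date at the team’s current pace, and an accuracy percentage derived from QC overturns. The formulas are my own reading of the metrics described above, not any platform’s built-in report.

```python
import math
from datetime import date, timedelta

def review_rate_ddh(decisions: int, hours_on_review: float) -> float:
    """Document decisions per hour, for one reviewer or the team as a whole."""
    return decisions / hours_on_review

def projected_finish(docs_remaining: int, team_ddh: float, hours_per_day: float, start: date) -> date:
    """Rough completion date if the team sustains its current pace."""
    days_needed = docs_remaining / (team_ddh * hours_per_day)
    return start + timedelta(days=math.ceil(days_needed))

def reviewer_accuracy_pct(qc_sample_size: int, overturned: int) -> float:
    """Share of QC-sampled decisions that survived second-level review."""
    return 100.0 * (qc_sample_size - overturned) / qc_sample_size

# Invented figures: one reviewer coding 385 documents in 7 hours; a 10-person
# team averaging 50 DDH each; a 200-document QC sample with 14 overturns.
print(f"{review_rate_ddh(385, 7):.0f} DDH")
print(f"Projected finish: {projected_finish(150_000, 50 * 10, 8, date(2010, 3, 1))}")
print(f"{reviewer_accuracy_pct(200, 14):.1f}% accuracy on the QC sample")
```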

Production Rate (or Conversion Rate). Unless you’re really under the gun, this metric shouldn’t be a vital one for meeting deadlines, but it is important to know what a system’s throughput rate is for producing natives to tiffs and generating deliverables for exchange.

Production Accuracy. Not really a quantifiable measure, but should provide a general sense of how well production specs were followed and whether the material produced was what was requested by counsel to be produced.

This isn’t the full spectrum. There are a number of others available, but I think these provide a nice foundation for measuring effectiveness.


Posted in Best Practices, Project Management | Tagged: | Leave a Comment »

Around the Block: 1/25/2010

Posted by rjbiii on January 25, 2010

Interesting Items floating around the blogs and the Tweetdeck:

Bow Tie Law asks the question To DeNIST or not To DeNIST? in an article that explains the process, and benefits, of DeNISTing. There really is no question here as to whether one should do it (absent exceptional circumstances). The real question is what more one should do besides DeNISTing to remove “junk” files. A good article, though not one that threatens Shakespeare’s position in English Literature. From the article:

“Can’t you just DeNIST the data and get rid of all the junk files…?” This is a question I am often asked. It usually comes after an individual attends an eDiscovery conference and the magical phrase “DeNIST” was uttered at some point. The individual is led to believe, or rather wants to believe, it’s a supernatural process that separates all the wheat from the chaff. Well, that’s only half the story…
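
For what it’s worth, the mechanics behind DeNISTing are straightforward hash matching: compute a digest for each collected file and drop anything whose digest appears in the NIST NSRL known-file list. A minimal sketch follows; the hash-list file format and paths are assumptions, and a real project would rely on the processing platform’s own NSRL integration.

```python
import hashlib
from pathlib import Path

def load_known_hashes(hash_file: Path) -> set:
    """Load known-file digests (e.g., an export of the NIST NSRL hash set),
    assumed here to be one uppercase hex digest per line."""
    return {line.strip().upper() for line in hash_file.read_text().splitlines() if line.strip()}

def denist(collection_dir: Path, known_hashes: set) -> tuple:
    """Split a collection into known (NIST-listed) files to cull and unknown
    files to keep for further culling and review."""
    keep, cull = [], []
    for path in collection_dir.rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha1(path.read_bytes()).hexdigest().upper()
        (cull if digest in known_hashes else keep).append(path)
    return keep, cull

# Usage (paths are placeholders):
# keep, cull = denist(Path("collection"), load_known_hashes(Path("nsrl_sha1.txt")))
# print(f"Culled {len(cull)} known files; {len(keep)} remain.")
```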

The DOJ releases a guide to search and seizure of computer equipment. Potential consumers may order a bound version of the guide, or download an electronic copy. From the website:

Electronic Crime Scene Investigation: An On-the-Scene Reference for First Responder is a quick reference for first responders who may be responsible for identifying, preserving, collecting and securing evidence at an electronic crime scene. It describes different types of electronic devices and the potential evidence they may hold, and provides an overview of how to secure, evaluate and document the scene. It describes how to collect, package and transport digital evidence and lists of potential sources of digital evidence for 14 crime categories.

Philadelphia attorney Stanley P. Jaskiewicz pens a post about The Law of Unintended Consequences, and how courts use it. From the article:
[The law of unintended consequences] is certainly not new. Even so, the widely cited mocking definition of a “computer” as “a device designed to speed and automate errors” shows how well this concept is suited to the Digital Age. Certainly, examples of technology projects gone horribly awry are common in the public and private sectors, with ramifications far worse than the situations they were intended to fix. Hershey’s software upgrade that caused the candy producer to miss a Halloween season, for example, or Virginia’s infamous temporary inability to issue driver’s licenses are perhaps two of the best-known fiascos (or at least those that were not hushed up by confidential settlements). Domino’s Pizza even resorted to creating its own online ordering system after a third-party application “became a real source of pushback” from disgruntled franchisees, according to Domino’s CIO.

Paralegal Jemerra J. Cherry posts an article examining methods of online research to help determine settlement and jury verdict amounts in cases similar to yours:

No matter what type of law you practice, researching jury verdicts and settlements is an important part of any case. How would you know a plaintiff’s demand is over the top if you didn’t research it? Don’t wait until your case has been active for a year to start researching. Early case assessment is helpful when going to mediations, arbitrations or when having a meeting with your client. Plaintiffs utilize verdict research to outline and support a demand. On the flip side, defendants use verdict research to state why a plaintiff’s demand is unreasonably high. In order to properly evaluate your case, verdict and settlement research is key.

Posted in Articles, Best Practices, Data Manipulation | Leave a Comment »

Case Summary: Phillip M. Adams & Assocs., On Spoliation and Info. Management

Posted by rjbiii on July 5, 2009

Phillip M. Adams & Assocs., L.L.C. v. Dell, Inc., 2009 U.S. Dist. LEXIS 26964 (D. Utah Mar. 27, 2009)

FACTS: Plaintiff and requesting party Phillip M. Adams & Associates alleged infringement of its patents for technology that detected and resolved defects in the most widely used floppy disk controller, thus preventing data from being destroyed. The patents in question were purportedly assigned to plaintiff by the original inventor. FDC-related defects gave rise to multiple lawsuits, culminating with the settlement of a class action suit against Toshiba in October of 1999.
Requesting party accused producing party of spoliation, as stated in the opinion:

…first, that ASUS has illegally used Adams’ patented software; and second, that ASUS has destroyed evidence of that use. The first assertion is identical to the liability issue in this case. The second assertion is premised on the first: Assuming ASUS used Adams’ software, ASUS’ failure to produce evidence of that use is sanctionable spoliation. Adams has no direct proof of destruction of evidence but is inferring destruction or withholding of evidence. Since Adams is convinced that ASUS infringed, Adams is also convinced that failure to produce evidence of infringement is sanctionable.

Issues we examine:

  1. When did the producing party’s duty to preserve attach?
  2. How does the Safe Harbor provision (FRCP 37(e)) factor into the determination of sanctions in this case?
  3. What role does producing party’s information management system play in the sanctions calculus?
  4. How is the producing party’s lack of produced data on certain subjects, taken in the aggregate, balanced against the requesting party’s lack of specific evidence of wrong-doing?

Issue 1: Court’s reasoning:
Producing party acknowledges receiving a letter from requesting party’s counsel asserting infringement on February 23, 2005. It does not acknowledge receiving an earlier letter dated October 4, 2004. Thus, Producing Party dates the beginning of its duty to preserve from the date of the February letter, and states that it has complied with that duty from that time forward. Producing party takes the position that a delay in giving notice and bringing suit by requesting party is the reason for the lack of available data from the years 2000 and 2001.
The court noted that both parties agreed that “a litigant’s duty to preserve evidence arises when ‘he knows or should know [it] is relevant to imminent or ongoing litigation.'” The court acknowledged the producing party’s stance that this trigger occurred upon receiving counsel’s letter, but stated that this was “not the inviolable benchmark.” The court cited 103 Investors I, L.P. v. Square D Co., 470 F.3d 985 (10th Cir. 2006) to buttress its argument.
In 103 Investors, the defendant disposed of 50 to 60 feet of “busway” material after a fire had occurred, destroying all but four feet of the busway, and eliminating any of the busway that should have contained a warning label. The court concluded that in that instance, the defendant should have known that litigation was imminent, although the material had been disposed of long before the complaint was filed.
The court described the history of this defect. In 1999 Toshiba paid a large sum to settle a class action related to the floppy drive error in play in the instant matter. That same year, a class action suit was filed against HP for the same defect. In 2000, producing party was working on correcting the issue. Sony became embroiled in a class action in 2000. The court stated that the industry had become (or should have become) “sensitized” to the possibility of litigation on this issue.

It appears that this extends the duty to preserve, which is already among the more difficult and costly issues in e-discovery today. By extending the duty’s trigger to occur prior to any direct or specific action against defendants, the court is asking too much of any IT department. It may be that the lack of documents produced by the defendants (this is discussed below) puts the court in the position of trying to fashion a rationale for punishment. But taken literally, the effects of the opinion could set a difficult, perhaps impossible, standard for compliance with the duty.

Issue 2: Safe Harbor?

The court, to the dismay of many commentators, dismisses the effects of the safe harbor provision in FRCP 37(e). Ralph Losey claims the court “mines” the rule into oblivion. I think what is in play here is that the court feels that the producing party would use Safe Harbor as a rationale for not producing data that it should have. Nevertheless, Safe Harbor’s reach, already attenuated, appears to weaken further in this opinion.


Issue 3: What role does producing party’s information management system play in the sanctions calculus?

The court comes down hard on the IG practices of the producing party. It stated that the system’s architecture, being of questionable reliability, should not be excused merely because it evolved rather than being deliberately designed to operate as it does. The result is that it operated to deprive the requesting party of access to evidence.
Traits of this system are described thusly:
[Producing Party] extensively describes its email management and storage practices, to explain the nearly complete absence of emails related to the subject of this litigation.

First, [Producing Party] says its email servers are not designed for archival purposes, and employees are instructed to locally preserve any emails of long term value.

[Producing Party] employees send and receive email via company email servers.

Storage on [Producing Party’s] email servers is limited, and the company directs employees to download those emails they deem important or necessary to perform their job function from the company email server to their individual company issued computer.

[Producing Party] informs its employees that any email not downloaded to an employee’s computer are automatically overwritten to make room for additional email storage on ASUSTeK ‘s servers.

It is [Producing Party’s] routine practice that its employees download to their individual computer those emails the employee deems important or necessary to perform his or her job function or comply with legal or statutory obligations.

Second, ASUS employee computers are periodically replaced, at which time ASUS places all archiving responsibility for email and other documents on its employees. During the course of their employment, ASUSTeK employees return their individual company issued computers in exchange for newer replacement computers.

40. The hard drives of all computers returned to or exchanged with the company are formatted to erase all electronic information stored on these computers before they are recycled, reused or given to charity.

41. During a computer exchange, it is [Producing Party’s] practice to direct its employees to download those emails and electronic documents from the employee’s individual computer to the employee’s newly issued computer that the employee deems important or necessary to perform his or her job function or comply with legal or statutory obligations.

The court stated that these descriptions of data management practices may explain why relevant e-mails were not produced, but they do not establish the Producing Party’s good faith in managing its data. It called the information management practices of the producing party “questionable” and stated that, although an organization may design its systems to suit its business purposes, its information management practices are still accountable to such third parties as adversaries in litigation. The court opines: “[a] court – and more importantly, a litigant – is not required to simply accept whatever information management practices a party may have. A practice may be unreasonable, given responsibilities to third parties.”

Furthermore, while the court accepts that the Producing Party’s system “evolved” rather than was purposefully designed with the goal of hiding data needed for litigation, it nevertheless quoted the Sedona Conference: “An organization should have reasonable policies and procedures for managing its information and records.”

Finally, the court took aim at the practice of allowing individual users to drive retention practices, when it stated: “[Producing Party’s]’ practices invite the abuse of rights of others, because the practices tend toward loss of data. The practices place operations-level employees in the position of deciding what information is relevant to the enterprise and its data retention needs.”

Issue 4: How is the producing party’s lack of produced data on certain subjects, taken in the aggregate, balanced against the requesting party’s lack of specific evidence of wrong-doing?

Producing Party turned over executable files of their own invention, but failed to surrender the source code for those executables. They also failed to produce other relevant executables and related source code, or “a single document” relating to the development of the applications under scrutiny. The court expressed concern over the absence of certain types of documents from the production:

[Producing Party’s] only response is that it has produced a large volume of documents. That may be the case; but, it has not produced the most critical documents – those that relate to its misappropriation, its copying, and its willful behavior. The only conclusion after all this time is that [Producing Party] has destroyed critical evidence that it simply cannot show did not exist.

By this expression, the court adopted Requesting Party’s argument that Producing Party had “spoliated the most critical evidence in this case, e.g., test programs and related source code,” and that “[s]ince [Producing Party] has not produced it, the only conclusion is that [they have destroyed it].”

The court also noted, in its analysis of Producing Party’s objection to the admissibility of data produced by third parties on grounds of authentication, that the Producing Party, while claiming “a near total absence of evidence…[sought] to eliminate the only evidence available.” The court concluded that such tactics should not prevail to “prevent consideration of the best evidence available.”

Requesting Party listed types of documentation that they would expect Producing Party to possess, but never received during production. Communications and documentation from outside sources contributed to a suspicion that such documentation once existed. Indeed, as the court examines the Producing Party’s duty to preserve, it leads off by stating: “[t]he universe of materials we are missing is very large. Indisputably, we have very little evidence compared to what would be expected.”

In dismissing arguments that destruction of the data in question was covered by the “Safe Harbor” provision under FRCP 37(e), the court stated: “[o]ther than the patent application and the executable file, it does not appear [Producing Party] has produced any significant tangible discovery on the topics where information is conspicuously lacking.”

Ultimately the court found that Producing Party had breached its duty to preserve relevant data. It appears from the information above that the dearth of critical documentation in the Defendant’s productions was a significant contributor to the ruling, but the court does not explain the weight it assigned to this element in its ruling.

Posted in 10th Circuit, Best Practices, Case Summary, D. Utah, Data Custodians, Data Management, Data Retention Practices, Document Retention, Duty to Preserve, FRCP 37(e), Good Faith, Information Governance, Magistrate Judge David Nuffer, Reasonable Anticipation of Litigation, Safe Harbor, Source Code, Spoliation | 1 Comment »

The difference between an archive and a backup

Posted by rjbiii on December 26, 2008

Computer Technology Review has posted an article describing the effect of the FRCP on business and corporate IT departments. The article contains the now familiar refrain to proactively manage your digital resources. One nice blurb, though, discusses the difference between archives and back-ups:

This underscores the difference between an archive and a backup system. An archive in today’s regulatory and litigation preparedness sense is an actively managed set of information kept as a business record when needed and disposed of when not. Backups on the other hand are designed for near term disaster recovery and not long term preservation. But many companies have suspended the rotation of their backup media, sometimes for years, because of a fear of sanctions or even bad press resulting from the improper deletion of this potentially discoverable data. What should have been a disaster recovery mechanism is now functioning as a very inefficient archive of all historical information. This becomes magnified as companies inherit backup media through merger and acquisition. In many instances the current IT staff has no idea what data exists upon those tapes.

Posted in Articles, Back Up Tapes, Best Practices, Compliance, Data Management, Data Retention Practices, FRCP 26, FRCP 34 | Tagged: , , | Leave a Comment »

On a New British Standard for Storing Data to be Used as Evidence

Posted by rjbiii on December 25, 2008

The Register reports that the national standards body of the U.K., the BSI Group, has formulated a new standard for storing data so as to “maximize” the evidential weight of data presented in court. The standard deals with the manner in which evidence is stored.
From the article:

By complying with BS 10008, “it is anticipated that the evidential weight of electronic information transferred to and/or managed by a corporate body will be maximised,” said national standards body BSI British Standards.

The Standard is called Evidential weight and legal admissibility of electronic information – Specification. It sets out the requirements for the implementation and operation of electronic information management systems, including the storage and transfer of information, and addresses issues relating to authenticity and integrity of information.

Legal admissibility concerns whether or not a piece of evidence would be accepted by a court of law. To ensure admissibility, information must be managed by a secure system throughout its lifetime, which can be for many years. Where doubt can be placed on the information, the evidential weight may be reduced, potentially harming the legal case.

From the BSI Group’s description:

What does the standard include?

* The management of electronic information over long periods, including through technology changes, where information integrity is vital
* How to manage the various risks associated with electronic information
* How to demonstrate the authenticity of electronic information
* The management of quality issues related to document scanning processes
* The provision of a full life history of an electronic object throughout its life
* Electronic transfer of information from one computer system to another
* Covers policies, security issues, procedures, technology requirements and auditability of electronic document management systems (EDMS).

Posted in Best Practices, Data Management, Data Retention Practices, International Issues, United Kingdom | Tagged: | Leave a Comment »

Blogging LegalTech West 2008: Litigation Holds

Posted by rjbiii on June 29, 2008

The second, and for me last, presentation of the day was “Ready…Set…Preserve: Navigating the Legal Hold Process and Technology.” The panel consisted of Patrick Oot of Verizon, Kraft’s Chief Counsel Theodore Banks, and American Electric Power’s Kamal Kamara.

The rule of thumb that triggers a legal hold is (say it with me, class) the date when litigation may be reasonably anticipated. The very last date that can be justified for the issuance of a legal hold is the date the complaint is actually filed. The first step to implementing a legal hold is to determine the identity of the key players. However, before the hold is even necessary, some preemptive measures should already have been taken. Litigation readiness best practices suggest that records management training for all employees is important. These rules apply:

  1. The guidelines employees study must be related to their jobs.
  2. Information on how to comply with relevant policies should be easy to find. They should have access to manuals, or intranet web sites with the necessary guidelines.
  3. Training should be consistent, and reinforced periodically.

The purpose of the legal hold is to stop destruction of potentially responsive information, identify that data, and save it. Employees should understand the consequences of failing to comply, and where to get help when they have questions.

Mr. Banks explained that for Kraft, the legal hold was triggered later than would be appropriate for some others, because of the nature of the complaints his company confronted, and the design of its information system. Much of the data needed was historical information that was preserved anyway, often for reasons of compliance with federal retention laws.

Mr. Kamara described his company’s home-built lit hold solution as being similar to e-vite. All three companies used custom built solutions rather than “off the shelf” products.

Some important points: acknowledgment by recipients is an essential component of a lit hold system; audit trails and the availability of reports are important.
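
As a thought experiment on those two points, a hold-tracking system reduces to a fairly small data structure: a notice per custodian, an acknowledgment timestamp, and an event log from which exception reports can be pulled. The sketch below is illustrative only; the field names and workflow are assumptions, not any of the panelists’ systems.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class HoldNotice:
    """One custodian's copy of a legal hold notice, with the pieces the panel
    stressed: issuance, acknowledgment, and an audit trail."""
    custodian: str
    matter: str
    issued: datetime
    acknowledged: Optional[datetime] = None
    events: list = field(default_factory=list)

    def log(self, event: str) -> None:
        self.events.append(f"{datetime.now().isoformat(timespec='seconds')}  {event}")

    def acknowledge(self) -> None:
        self.acknowledged = datetime.now()
        self.log("acknowledged by custodian")

def outstanding(notices: list) -> list:
    """Exception report: custodians who have not yet acknowledged the hold."""
    return [n for n in notices if n.acknowledged is None]

# Illustrative use:
notice = HoldNotice("J. Smith", "Adams v. Example Corp.", datetime(2008, 6, 1, 9, 0))
notice.log("notice e-mailed")
print([n.custodian for n in outstanding([notice])])   # ['J. Smith'] until acknowledged
notice.acknowledge()
print([n.custodian for n in outstanding([notice])])   # []
```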

I enjoyed this presentation more than the previous session. The panelists were good, but I also got to see screenshots of various systems, which I found interesting. The next step now is to see how technology can be used not only to issue notice of a hold, but to also take action to prevent actual destruction of information.

Posted in Best Practices, Data Custodians, Document Retention, Duty to Preserve, Industry Events, Litigation Hold, Trends | 1 Comment »

Blogging LegalTech West 2008: Building an e-discovery task force

Posted by rjbiii on June 28, 2008

As mentioned in the previous post, there were three main tracks of courses to choose from. My associates and I glanced over them to divide the subject matter up between us, and I ended up on the “Corporate Perspectives” track, which suited me.

What was less than satisfying was that the first event was a panel discussion centered on building an E-Discovery task force inside the company. Not a particularly interesting topic for me; not because it’s unimportant, but rather because I have already attended a number of similar presentations, read much of the literature on it, written about it in my own papers, and dealt with the subject extensively in my own work. So it’s “old hat” to me, as they say.

Nevertheless, a colleague of mine and I found good seats and settled in. The panel consisted of Kroll Ontrack’s Linda Sharp (who acted as moderator), Cynthia Nichols from Taco Bell Corp., Michael Kelleher of Folger Levin & Kahn LLP, and Joel Vogel of Paul, Hastings, Janofsky & Walker LLP.

Most of the information presented was standard, and no new ground was covered (at least for me); however, the session was well done and all panelists contributed significantly to the discussion. In all likelihood, the needs of the target audience were met.

The meeting began with a discussion outlining the need for the proactive implementation of pre-litigation measures to deal with the issues presented in the era of “ESI.” With Ms. Sharp leading the way, it was noted that 50% of corporate America has no policy with respect to managing ESI, and 75% feel they lose time due to inefficient or non-existent ESI policies.

The panel then turned to the question of which elements and constituencies should comprise an E-Discovery team. Depending upon the size and internal structure of the company, the panel listed the following possibilities:

  • Corporate Counsel
  • IT
  • Human Resources
  • Records Management
  • Corporate Security
  • Trial Counsel
  • Discovery Counsel
  • Outside Vendor(s)

Obviously, the nature of the matters that confront any particular corporation, and the relationship the company has with outside law firms and vendors are factors in building the right team.

The discussion then moved to the task force’s need to educate themselves on their company’s data infrastructure. Questions the task force should address are: Where does company data reside? How is it maintained? How is it accessed, and by whom? When (and how) is it destroyed? Here, some recommended that a systems information directory be generated and maintained by the team. Others argued that maintaining the document was inefficient, and that this could best be addressed by updates as needed (i.e., as new legal matters arise). I tend to lean toward maintenance on a regular basis, although I can see some situations in which the contrary view would be a better fit.
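
To illustrate what a systems information directory might capture, here is a minimal sketch; the fields and example entries are invented, and a real directory would obviously carry far more detail (custodians, data volumes, legacy systems, and so on).

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """One entry in a systems information directory: where data lives, who owns
    it, how it is retained, and how it is backed up and accessed."""
    name: str          # e.g., "Exchange mail store"
    steward: str       # business or IT owner
    location: str      # data center, hosted service, etc.
    retention: str     # retention/destruction rule applied
    backup_cycle: str  # rotation schedule, if any
    access: str        # who can reach it, and how

directory = [
    DataSource("Exchange mail store", "Messaging team", "HQ data center",
               "90-day server retention; users archive locally",
               "nightly, 30-day tape rotation", "all employees via Outlook"),
    DataSource("HR records system", "Human Resources", "hosted (vendor)",
               "7 years per policy", "vendor-managed", "HR staff only"),
]

# A matter team can then answer "where might responsive data live?" by
# filtering the directory rather than re-interviewing IT for every new case.
for src in directory:
    print(f"{src.name}: retention = {src.retention}; backups = {src.backup_cycle}")
```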

The discussion then looked at Discovery Response Checklists, and what elements should constitute one. Some of these items included the issuance of hold statements; discontinuing data destruction and back-up tape recycling policies; and handling e-mail archiving.

Overall, a fairly pedestrian, but useful presentation. The panelists were articulate and knowledgeable, and laid out the issues in an organized and effective manner. If you’re interested in the subject, and other ideas for proactive measures, one article that I liked on the issue is: Renee T. Lawson, Taming the Beast—Implementation of Effective Best Practices for Electronic Data Discovery, 747 PLI/LIT 305 (Oct.-Dec. 2006).

Posted in Best Practices, Data Management, Discovery, Industry Events, Trends | Leave a Comment »

Return to sender…please

Posted by rjbiii on June 15, 2008

The International Herald Tribune has posted an article discussing the fact that e-mails don’t always reach their destinations…and the sender isn’t always notified:

The basic Internet e-mail standard – SMTP, or simple mail transport protocol – has always provided for the destination server to send back an error message if the original message cannot be delivered. If no error message comes back, however, can the originating server assume that the message arrived, safe and sound? Not necessarily. A misconfigured server anywhere in the path between sender and recipient can miscarry the message.

The problem that leads to the loss of a message can also prevent the sender from receiving a report of failed delivery. In such instances, e-mail disappears into the ether.

Assuming that a sent message was received is dangerous. If, in a case, you’re trying to prove a communication was received, retrieve the recipient’s inbox.

Posted in Articles, Best Practices, email | Tagged: | Leave a Comment »

Case Blurb: Creative Pipe; Not all keyword searches are created equal

Posted by rjbiii on June 15, 2008

While it is known that [Producing Party] and [Producing Party’s attorneys] selected the keywords, nothing is known from the affidavits provided to the court regarding their qualifications for designing a search and information retrieval strategy that could be expected to produce an effective and reliable privilege review. As will be discussed, while it is universally acknowledged that keyword searches are useful tools for search and retrieval of ESI, all keyword searches are not created equal; and there is a growing body of literature that highlights the risks associated with conducting an unreliable or inadequate keyword search or relying exclusively on such searches for privilege review. Additionally, the Defendants do not assert that any sampling was done of the text searchable ESI files that were determined not to contain privileged information on the basis of the keyword search to see if the search results were reliable. Common sense suggests that even a properly designed and executed keyword search may prove to be over-inclusive or under-inclusive, resulting in the identification of documents as privileged which are not, and non-privileged which, in fact, are. The only prudent way to test the reliability of the keyword search is to perform some appropriate sampling of the documents determined to be privileged and those determined not to be in order to arrive at a comfort level that the categories are neither over-inclusive nor under-inclusive.

Victor Stanley, Inc. v. Creative Pipe, Inc., 2008 WL 2221841 (D.Md. May 29, 2008 )
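
The sampling the court describes is easy to operationalize: draw a random sample from each side of the keyword screen and have a reviewer check both for false privilege calls and for privileged documents the terms missed. A minimal sketch, with placeholder document IDs, follows.

```python
import random

def sample_both_sides(flagged_privileged: list, not_flagged: list,
                      sample_size: int = 200, seed: int = 42) -> dict:
    """Draw a random QC sample from the documents the keyword screen marked
    privileged and from those it did not, so a reviewer can check for
    over-inclusion (false privilege calls) and under-inclusion (privileged
    documents the terms missed) before anything is produced."""
    rng = random.Random(seed)
    return {
        "check_for_over_inclusion": rng.sample(flagged_privileged, min(sample_size, len(flagged_privileged))),
        "check_for_under_inclusion": rng.sample(not_flagged, min(sample_size, len(not_flagged))),
    }

# Placeholder document IDs; in practice these come from the review platform.
flagged = [f"PRIV{i:05d}" for i in range(3_000)]
cleared = [f"DOC{i:06d}" for i in range(90_000)]
samples = sample_both_sides(flagged, cleared)
print({bucket: len(ids) for bucket, ids in samples.items()})
```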

Posted in 4th Circuit, Best Practices, Case Blurbs, D. Md., Magistrate Judge Paul W. Grimm, Search Protocols | Tagged: , | Leave a Comment »

Trends in E-Discovery Point to Bad News for the Unprepared

Posted by rjbiii on December 26, 2007

E-discovery should be the thing which causes IT departments to break out their cub scout books and remember what it means to “be prepared.” A recent article posted by the Wisconsin Technology Network discusses the meaning of emerging trends in electronic discovery:

A CIO who is on top of things will have frequent meetings with staff attorneys, review e-discovery processes, and map out what the organization’s infrastructure looks like – essentially knowing where data “lives” so the organization can react to litigation. The number of hours spent on e-discovery is growing, but the time investment depends largely on a company’s litigation profile.

This will sound familiar to frequent readers. The article notes some general trends:

Even for complacent companies, Phelps said e-discovery case law is providing more answers in three specific areas: litigation holds, obligations to preserve data, and the determination of what information is reasonably accessible.

Of course, sometimes the guidance is conflicting and ambiguous, but what is clear is that indifference to the rules won’t be excused by courts.

Posted in Articles, Best Practices, Trends | Leave a Comment »