Due to the high volume of electronically stored information, document review and production is often the most expensive part of the discovery process. In an effort to lower the costs of litigation discovery, Squire Sanders invested in an assessment of next-generation intelligent discovery tools and processes. The purpose of this exercise was to identify and validate software and techniques that can defensibly reduce the expense of human review, while maintaining or improving quality. Having studied other available technologies, we decided to thoroughly evaluate the Equivio>Relevance system. Our results and general observations are as follows.
Evaluation Background
The test data was taken from a collection of documents related to the defense of a putative environmental class action related to a large manufacturing facility. The documents used in the evaluation had been previously reviewed and prepared for production by our lawyers using traditional review methods. The case settled just prior to the actual production of the documents. Accordingly, there was a known end result to measure against.
The goal of this exercise was two-fold:
The original review project consisted of approximately 200,000 documents collected from the client (50/50 mix of hard copy and electronic data). Using all available best practices to efficiently review the data, a team of nine lawyers took four months (1,250 hours) to screen and identify documents responsive to the opposing party's requests. While this effort was considered efficient at an overall average of 160 documents reviewed per billable hour, it still consumed nearly half a million dollars in billable time.
Evaluation Data
The evaluation focused on documents responsive to two similar requests for production.
Approximately 15% of the documents were known to be relevant and responsive to two of the 12 original requests for production. The remaining documents were generally nonresponsive, or responsive to other production requests beyond the scope of the evaluation.
Equivio Training
The Equivio>Relevance process works as follows:
Following the process above, a lawyer with advanced knowledge of the case reviewed 1,960 documents (49 sample sets) until the system stabilized. The training process took approximately 10 hours of dedicated review time. At the conclusion of training, the system scored the predicted relevancy of all 44,581 documents on a scale from 0 to 100 (with a higher number indicating a greater likelihood of relevancy). The scoring process took three minutes to complete.
Equivio Results
Once the documents have been scored, the system recommends an optimal cutoff score based on the F-measure, which represents the best balance between recall and precision. The F-measure can be used to decide which documents to consider nonrelevant (those ranked below the cutoff point). In the example in Figure 1 below, Equivio>Relevance determined the evaluation project had a cutoff score of 14. In this scenario, 74% of the documents scored too low to be considered relevant or responsive to the request for production (i.e., 74% of the documents could potentially be auto-culled without attorney review). In other words, the review could focus on the top-scoring 26% of the collection (documents with scores above 14), which, in this case, contained 94% of the relevant documents in the collection. NOTE: The cutoff score can be adjusted to be more or less inclusive of the total data set in the judgment of the review project leader. Nonrelevant documents can also be sampled as a quality control measure.
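For illustration only, the cutoff selection described above can be sketched as a threshold sweep that maximizes the F-measure (the harmonic mean of precision and recall). This is a simplified, hypothetical sketch, not Equivio's proprietary algorithm; the scores and labels below are invented.

```python
# Hypothetical sketch: sweep every possible cutoff (0-100) and keep the one
# that maximizes the F-measure. Not Equivio's actual method.

def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def best_cutoff(scores, labels):
    """Return (cutoff, F-measure) maximizing F over thresholds 0-100.

    scores: predicted relevance scores, 0-100
    labels: True if the document is actually relevant
    """
    best = (0, 0.0)
    total_relevant = sum(labels)
    for cutoff in range(101):
        predicted = [s >= cutoff for s in scores]
        tp = sum(p and l for p, l in zip(predicted, labels))
        fp = sum(p and not l for p, l in zip(predicted, labels))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / total_relevant if total_relevant else 0.0
        f = f_measure(precision, recall)
        if f > best[1]:
            best = (cutoff, f)
    return best

# Toy example: relevant documents tend to score higher.
scores = [5, 10, 12, 20, 35, 40, 60, 75, 88, 95]
labels = [False, False, False, False, True, False, True, True, True, True]
cutoff, f1 = best_cutoff(scores, labels)
```

In this toy data, raising the cutoff past the low-scoring nonrelevant documents improves precision without sacrificing recall, which is exactly the trade-off the recommended cutoff score captures.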
[IMGCAP(1)]
The validity of the Equivio>Relevance scoring is measured via a process known as discrepancy analysis. Typically, a sample set of a few thousand documents would be created and reviewed by lawyers. The results of the human-reviewed set would then be compared to the results of the Equivio>Relevance system in order to judge the accuracy of the results. The evaluation project had the benefit of drawing from a known production set previously screened by our lawyers.
The discrepancy set consisted of 1,398 randomly selected documents previously designated as responsive to our target issue. Additionally, 2,100 nonresponsive documents were randomly collected for the analysis. The results were compelling (see Figure 2 below).
[IMGCAP(2)]
As shown in Figure 2, the Equivio>Relevance system agreed with the human review on 1,274 of 1,398 documents (91%). Likewise, the Equivio>Relevance system and the human-review team concurred 1,771 of 2,100 times (84%) on nonrelevant designations. Combined, the Equivio>Relevance system agreed with the previous human review on 3,045 of 3,498 documents (87%).
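The agreement rates above are straightforward arithmetic and can be checked directly (all counts are taken from the text):

```python
# Agreement arithmetic behind Figure 2 (counts from the article).
relevant_agree, relevant_total = 1274, 1398        # responsive documents
nonrelevant_agree, nonrelevant_total = 1771, 2100  # nonresponsive documents

relevant_rate = relevant_agree / relevant_total
nonrelevant_rate = nonrelevant_agree / nonrelevant_total
overall_rate = (relevant_agree + nonrelevant_agree) / (relevant_total + nonrelevant_total)

print(f"relevant: {relevant_rate:.0%}, "
      f"nonrelevant: {nonrelevant_rate:.0%}, "
      f"overall: {overall_rate:.0%}")
# prints "relevant: 91%, nonrelevant: 84%, overall: 87%"
```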
The next step in the process is to adjudicate the instances where the Equivio>Relevance system does not match the human-reviewed set. From the 453 documents on which Equivio>Relevance did not agree with the human review, we drew a sample of 100 documents to be analyzed by another lawyer with deep knowledge of the case (a so-called “super-reviewer”). Again, the results were striking.
As shown in Figure 3 below, of the 50 documents designated relevant by the human-review team and not relevant by Equivio>Relevance, only 18 were deemed actually relevant by the super-reviewer. Of the 50 documents judged not relevant by the human-review team and relevant by Equivio>Relevance, 21 were actually relevant. Based on this statistical analysis, the Equivio>Relevance system did as well as the human-review team (92% to 94% accuracy).
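Within the 100 adjudicated disagreements, the super-reviewer's determinations can be tallied to see which side was vindicated in each pool (counts taken from the text; this tally covers only the sample, not the full document sets behind the article's 92% to 94% accuracy figures):

```python
# Tally of the 100 adjudicated disagreements from Figure 3
# (counts taken from the article).

# Pool A: human review said relevant, Equivio>Relevance said not relevant.
pool_a = 50
actually_relevant_a = 18                         # super-reviewer sided with human
equivio_right_a = pool_a - actually_relevant_a   # 32 times Equivio was right

# Pool B: human review said not relevant, Equivio>Relevance said relevant.
pool_b = 50
actually_relevant_b = 21                         # super-reviewer sided with Equivio
equivio_right_b = actually_relevant_b            # 21 times Equivio was right

equivio_wins = equivio_right_a + equivio_right_b   # Equivio vindicated 53 of 100
human_wins = (pool_a + pool_b) - equivio_wins      # human team vindicated 47 of 100
```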
[IMGCAP(3)]
Transparency
While there are a number of proprietary technologies at work behind the scenes, the Equivio>Relevance system is not a black box. The reasoning behind the learning and document scoring can be demonstrated via two critical tools.
Process
The quality and defensibility of an Equivio>Relevance review depends upon a well-defined implementation process. The following best practices and considerations should be incorporated into the review project.
Training
The Equivio>Relevance training is only as good as the input of the skilled practitioner who codes the document samples. The individual doing the training should have a high degree of knowledge of the case and code documents with precision and consistency.
Issue Development
Defining the relevancy issues should be approached with care. If the issue is too broad or too narrow, the quality of results may be adversely impacted.
Sampling
Additional document sampling is desirable. A collection of random documents should be reviewed to create the required discrepancy analysis set (typically 3,000 to 5,000 documents). A second sample set can be used to test auto-culling decisions. In other words, if the Equivio>Relevance scores are leveraged to designate documents that scored below a cutoff as not relevant material, additional sampling and review can further validate the results.
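A below-cutoff quality-control sample of the kind described above might be drawn as in the following sketch. This is purely hypothetical: the document IDs, scores, cutoff and sample size are invented, and any real workflow would export the sample for attorney review.

```python
# Hypothetical sketch: draw a random QC sample from documents scored at or
# below the cutoff, to estimate how many relevant documents an auto-cull
# at that cutoff would miss.
import random

def qc_sample(doc_scores, cutoff, sample_size, seed=42):
    """Return a reproducible random sample of document IDs scored <= cutoff.

    doc_scores: dict mapping document ID -> predicted relevance score (0-100)
    """
    below_cutoff = [doc for doc, score in doc_scores.items() if score <= cutoff]
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    return rng.sample(below_cutoff, min(sample_size, len(below_cutoff)))

# Invented example: 1,000 documents with random scores, cutoff of 14.
rng = random.Random(0)
docs = {f"DOC-{i:05d}": rng.randint(0, 100) for i in range(1000)}
sample = qc_sample(docs, cutoff=14, sample_size=50)
```

The fixed random seed matters in practice: a reproducible sample is easier to document and defend than an ad hoc one.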
Discrepancy Analysis
In most cases, the built-in discrepancy analysis process is required to validate the Equivio>Relevance process and to bolster the defensibility of the review process.
Client Buy-In
The use of Equivio>Relevance affords a number of options, and clients should be well informed in choosing among them. Just as with other review methods, clients should be advised of, and given the opportunity for input on, how the Equivio>Relevance tool will be used in a particular review project.
Leveraging Equivio
There are a number of potential ways to cut review costs by leveraging Equivio>Relevance scoring:
[IMGCAP(4)]
Conclusion
Our evaluation of Equivio>Relevance suggests this technology can be used in a variety of ways to cut costs, improve accuracy and speed up the review of documents. It can also be used as a powerful early case assessment tool. At a minimum, Equivio>Relevance offers the ability to augment the value of a typical discovery database. In other appropriate cases, it can be used (as part of the processes described earlier) to cull nonrelevant material significantly faster and more economically than traditional review methods.