Medical experts testifying about causation in toxic tort, medical device, and pharmaceutical litigation frequently claim to base opinions upon a dispassionate review of the scientific literature, purporting to perform the same analysis they employ in their clinical practices. But peeling back the façade often reveals that they have done nothing of the kind. In fact, although many medical experts proclaim adherence to scientific methods and procedures adopted or endorsed by various organizations, cross-examination often reveals the falsity of that assertion. By contrasting the experts' litigation analysis with the analytical standards of their profession, a party can effectively challenge the admissibility of causation testimony.
Explanation
In the last decade, medical organizations and peer-reviewed journals have adopted principles of evidence-based medicine, which can apply to an expert's medical causation analysis. According to the Ninth Circuit, evidence-based medicine compares a patient's condition against a background of peer-reviewed literature. See Primiano v. Cook, 598 F.3d 558, 567 (9th Cir. 2010). Primiano explains that evidence-based medicine embodies science and judgment:
The classic medical school texts … explain that medicine is scientific, but not entirely a science. Medicine is not a science but a learned profession, deeply rooted in a number of sciences and charged with the obligation to apply them for man's benefit. Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. Despite the importance of evidence-based medicine, much of medical decision-making relies on judgment – a process that is difficult to quantify or even to assess qualitatively. Especially when a relevant experience base is unavailable, physicians must use their knowledge and experience as a basis for weighing known factors along with the inevitable uncertainties to make a sound judgment.
Id. at 565 (quotations and footnotes omitted). While many medical experts claim to adhere to evidence-based medicine principles both inside the courtroom and out, their adherence often amounts to little more than lip service. When the experts' analysis reflects more litigation-driven judgment and less scientific method, their testimony can and should be challenged and excluded.
Evidence-Based Medicine Principles
Many organizations and scientific journals have adopted evidence-based medicine in one form or another. Oxford University's Centre for Evidence Based Medicine is well-known. See www.cebm.net/index.aspx?o=1001 (last visited July 7, 2011). While many organizations embrace evidence-based medicine concepts, not everyone subscribes to them. See Hendrix v. Evenflo Co., 255 F.R.D. 568, 607 n.72 (N.D. Fla. 2009) (“Some in the medical community appear opposed to this, calling it derogatory of traditional clinical experience.”), aff'd, 609 F.3d 1183 (11th Cir. 2010). Orthopedic medicine standards illustrate the use and application of evidence-based medicine principles.
The American Academy of Orthopaedic Surgeons (AAOS) recognizes evidence-based medicine principles. See, e.g., Joseph Bernstein, M.D., M.S., Evidence-Based Medicine, 12 J. Am. Acad. Orthop. Surg. 80 (2004); Kurt P. Spindler, M.D., et al., Reading and Reviewing the Orthopaedic Literature: A Systematic, Evidence-Based Medicine Approach, 13 J. Am. Acad. Orthop. Surg. 220 (2005). AAOS evidence-based medicine attempts to infuse some objectivity into medical decision-making:
It requires us to make decisions by critically reading and reviewing the literature, then weighing the findings reported in studies by the scientific validity of the work and the researchers' approach. Evidence-based medicine asks us to be more critical about the changes that we make in our practice. It requires us to use the best evidence by placing more value on well-designed and well-executed clinical investigations and less value on expert opinion and uncontrolled observational studies (e.g., case reports and case series).
Reading and Reviewing the Orthopaedic Literature, supra, at 220–21 (emphasis added).
AAOS evidence-based medicine methodology treats different types of evidence differently:
In clinical research, not all sources of evidence are created equal. Among studies reporting on treatment outcomes, most epidemiologists would agree with the following pyramid of evidence:
Randomized controlled trial
Prospective cohort study
Retrospective cohort study
Case-control study
Case series
Case report
Expert opinion
Personal observation
Evidence-Based Medicine, supra, at 83. “[W]eaker study designs are frequently used to generate hypotheses in a field, whereas stronger designs are used to test hypotheses.” Reading and Reviewing the Orthopaedic Literature, supra, at 221; Mininder S. Kocher, M.D., M.P.H. & David Zurakowski, Ph.D., Clinical Epidemiology and Biostatistics: A Primer for Orthopaedic Surgeons, 86 J. Bone & Joint Surg. 607, 608 (Mar. 2004) (“… [C]ase series are often anecdotal, are subject to many possible biases, lack a hypothesis, and are difficult to compare with other series. Thus, case series are usually viewed as a means of generating hypotheses for additional studies but not as conclusive.”). Evidence-based medicine requires scientific consideration of the strength, validity, and type of study in drawing conclusions, not merely spouting off one's “inference” or “judgment” after reading a bundle of articles:
[A] well designed and executed double-blind randomized controlled prospective clinical trial with excellent follow-up provides stronger evidence for the use of diagnostics and therapeutics than do weaker designs. Although case reports and case series still have value (such as alerting clinicians to new diseases or alerting researchers to new treatments that may be worthy of study), study designs that use appropriate comparison groups and pay careful attention to sources of bias should be held in higher regard when accumulating evidence to change the way we practice.
This concept of ranking research studies in terms of their methodological strength is called the hierarchy of evidence. It is being used by many journals, including The Journal of Bone and Joint Surgery, Clinical Orthopaedics and Related Research, Arthroscopy, and The American Journal of Sports Medicine to classify published manuscripts … . In considering whether to change one's practice based on the results of an evidence-based study, it is imperative to know the type of study used in order to judge the methodologic strength of the study.
Reading and Reviewing the Orthopaedic Literature, supra, at 223 (emphasis added); Clinical Epidemiology and Biostatistics, supra, at 613 (“The steps of evidence-based medicine involve converting the need for information into an answerable question; tracking down the best evidence to answer that question; critically appraising the evidence with regard to its validity, impact, and applicability; and integrating the critical appraisal with clinical expertise and the patient's unique values and circumstances.”). Only valid, reliable evidence can support an inference of causation:
[T]he standard to prove cause-effect is set higher than the standard to suggest an association. Inference of causation requires supporting data from non-observational studies such as a randomized clinical trial, a biologically plausible explanation, a relatively large effect size, reproducibility of findings, a temporal relationship between cause and effect, and a biological gradient demonstrated by a dose-response relationship.
Clinical Epidemiology and Biostatistics, supra, at 608 (emphasis added).
Admissibility
Plaintiffs and defendants frequently differ, often stridently, over the admissibility of case series and case reports and whether they can reliably support causation. See, e.g., Hollander v. Sandoz Pharms. Corp., 289 F.3d 1193, 1210 (10th Cir. 2002) (discussing arguments for and against admissibility of case reports and case series). Of course, case reports and case series can provide valuable information a clinician can use in treating a patient and generating hypotheses for further study. See Evidence-Based Medicine, supra, at 83 (“Therefore, absent other evidence, a case report can be the legitimate basis for action. Weak evidence is not the same as no evidence.”). But a clinician's use of case reports to support treatment decisions is a far cry from basing conclusions of causation on them. See, e.g., Hollander, 289 F.3d at 1213 (“The data on which they rely might well raise serious concerns in conscientious clinicians seeking to decide whether the benefits of the drug outweigh its risks. However, in deriving their opinions that Parlodel caused Ms. Hollander's stroke from the various sources we have outlined, [the experts] all made several speculative leaps.”). So admissibility hinges not on the expert's consideration of case reports or series, but on the use to which the expert puts them in the overall analysis.
Evidence-Based Medicine Versus Unconstrained Judgment
Evidence-based medicine methodology, therefore, requires “conscientious, explicit and judicious” consideration of available scientific data, with due regard for the relative weight and validity of the studies from which that data is drawn. Litigation experts, by contrast, often lump all scientific studies together regardless of their type, weight, or validity, giving inconclusive and often conflicting case reports, literature reviews, and in vitro and in vivo animal studies predominant weight on the question of causation. The record may reveal no critical analysis of the data and no dispassionate balancing or weighting of the evidence by the experts, but only a bundling of articles without regard for their limitations. Such analysis reflects a result-oriented approach, the very antithesis of the “conscientious, explicit and judicious use of current best evidence” that their profession advocates.
The court in Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584 (D.N.J. 2002), addressed a similar issue. In Magistrini, the plaintiff's expert purported to apply a weight-of-the-evidence methodology to determine that dry-cleaning fluid caused the plaintiff's cancer. The court explained that a valid weight-of-the-evidence methodology, comparable to evidence-based medicine principles, involves more than simply reading and concluding:
Importantly, because the weight-of-the-evidence methodology involves substantial judgment on the part of the expert, it is crucial that the expert supply his method for weighting the studies he has chosen to include in order to prevent a mere listing of studies and jumping to a conclusion. How else can one expert's choice of “weight” be helpful to a jury which may be called on to assess a “battle of weighers”? The particular combination of evidence considered and weighed here has not been subjected to peer review. However, the weight-of-the-evidence methodology has been used, in a non-judicial context, to assess the potentially carcinogenic risk of agents for regulatory purposes. The existence and maintenance of standards controlling the technique's operation when used for regulatory purposes is informative here … . When a weight-of-the-evidence evaluation is conducted, all of the relevant evidence must be gathered, and the assessment or weighing of that evidence must not be arbitrary, but must itself be based on methods of science.
* * *
In order to ensure that the “weight-of-the-evidence” methodology is truly a methodology, rather than a mere conclusion-oriented selection process that weighs more heavily those studies that supported an outcome, there must be a scientific method of weighting that is used and explained.
Id. at 602, 607 (emphasis added). Because the expert could not explain “what 'methodical systematic process'” he used, the court excluded the testimony:
“Judgment” does not substitute for scientific method; without a reliable method, result-oriented “judgment” cannot be distinguished from scientifically or methodologically-based judgment. Where, as here, elements of judgment pervade the methodology, it is essential that the expert set forth the method for weighing the evidence upon which his opinion is based. Absent that, this Court's role as gatekeeper to assess the reliability of the methodology applied in this case is nullified.
Id. at 608 (most quotations and citations omitted).
When experts base causation conclusions upon “judgment” and “inferences” from their review of the literature, instead of “the conscientious, explicit and judicious use of current best evidence” as dictated by the standards of their profession, their testimony, opinions, and conclusions cannot pass muster under Daubert and Rule 702. See Zenith Elecs. Corp. v. WH-TV Broad. Corp., 395 F.3d 416, 419 (7th Cir. 2005) (“Shapiro's method, 'expert intuition,' is neither normal among social scientists nor testable – and conclusions that are not falsifiable aren't worth much to either science or the judiciary.”).
Assessing whether a medical expert adheres to principles of evidence-based medicine is a revealing way to gauge the reliability of the testimony. Testimony that strays too far from sound methodology and into the realm of ipse dixit raises serious questions about its reliability:
Expert opinion rests near the bottom of the pyramid of evidence, a position that has metaphoric significance: the teaching of experts is the foundation upon which all other knowledge rests. Good students turn to teachers and textbooks (not journal articles) to begin their study of a given area. But as a form of evidence, expert opinion is subordinate to systematic research. The reason is that history is full of examples in which experts were egregiously wrong. For instance, William Harvey was criticized harshly by the “experts” for his radical notion that blood circulates … .
… Orthopaedics is based on a more objective foundation than psychoanalysis, but we share with that field a method of professional training in which the novice is placed in the role of apprentice to the master. Because we are appropriately conditioned to accept the teachings of the experts when it comes to the basics, we may find it hard to reject their pronouncements when they veer into speculation. Yet we must. We are obliged to remember the hierarchy of evidence: expert opinion certainly trumps nonexpert opinion, but it is weaker than good clinical research.
Evidence-Based Medicine, supra, at 84–85 (emphasis added). Scientific reliability suffers – and admissibility pays the price – when testimony and conclusions disengage from the non-litigation methods and procedures the experts themselves claim to follow. See, e.g., Truck Ins. Exch. v. Magnetek, Inc., 360 F.3d 1206, 1213 (10th Cir. 2004) (affirming exclusion of causation expert testimony in part because the expert's opinion “did not meet the standards of fire investigation [the expert] himself professed he adhered to”); In re Breast Implant Prods. Liab. Litig., 11 F. Supp. 2d 1217, 1236 (D. Colo. 1998) (excluding causation testimony in part because the expert “[w]ithout explanation … disregards the methodology of his specialty”); cf. Gross v. King David Bistro, Inc., 83 F. Supp. 2d 597, 601 (D. Md. 2000) (holding that the “Daubert analysis commands that in court, science must do the speaking, not merely the scientist”).
Conclusion
Medical experts commonly profess to employ in the courtroom the same level of intellectual rigor that governs their clinical practice. But scrutinizing their analysis and measuring it against the principles of evidence-based medicine can help reveal the flaws in their methodology and the inadmissibility of their testimony.
John Sear, a member of this newsletter's Board of Editors, is a partner in the Minneapolis, MN, office of Bowman and Brooke, LLP. He has a uniquely diverse product liability defense practice.