
Secondary Case Review Methods and Anatomic Pathology Culture

Stephen S. Raab MD, Dana Marie Grzybicki MD, PhD
DOI: http://dx.doi.org/10.1309/AJCPPKUU7Z8OGDTA. Pages 829-831. First published online: 1 June 2010.

In this issue of the Journal, Owens and colleagues1 at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA, describe the development of a patient safety information technology (IT) tool. Because this tool only recently has been implemented, Owens et al1 chose not to evaluate its effectiveness as a patient safety improvement initiative and instead evaluated its effect on timeliness, another Institute of Medicine (IOM) quality metric.2 The work of Owens et al1 highlights the relationship of patient safety to other quality domains, error detection, and quality improvement in surgical pathology.

The IOM defined 6 domains of quality: safety, effectiveness, efficiency, timeliness, equity, and patient centeredness.2 During the past several decades, most of the published work in the anatomic pathology medical literature has focused on the domains of safety and timeliness. The 6 quality domains are not independent, as illustrated by the data reported by Owens et al.1 For many surgical pathologists, the mantra has become “get it right and get it (out) fast,” and efforts to change a specific practice domain (eg, safety) may lead to unforeseen changes in other domains (eg, timeliness). The fact that Owens et al1 measured turnaround time indicated that they were concerned that the newly introduced processes might delay case sign-out. Owens et al1 reported a slight increase in median case turnaround time following implementation of their patient safety IT tool. Outlier analysis, possibly a more important measure, was not performed. Barriers to improving a specific quality metric are many and include lack of organizational commitment or focus and investment in competing quality metrics. Organizational culture strongly affects the ability of front-line personnel to improve health care delivery, especially safe care.

The study of health care organizational culture is important in determining the factors contributing to the success and failure of quality improvement initiatives.3 Organizational patient safety cultures may be measured through surveys that assess multiple dimensions, such as resources for safety, overall emphasis on safety, fear of shame, fear of blame, unit recognition and support for safety efforts, and provision of safe care.3 A problem in some organizations is that the upper levels of leadership may view their organization as safer than do the front-line personnel. For example, organizational leaders may not recognize or may condone the behaviors of disruptive physicians who interfere with the process of quality care delivery. Leape and Fromson4 provide examples of disruptive behaviors, such as outbursts of anger, criticizing staff in front of other staff, boundary violations, unethical behavior, and negative comments about another physician’s care. For surgical pathologists, disruptive behaviors may originate from clinical services (eg, a clinician stating that a pathologist is not competent) or internally within the laboratory.

Positive changes in patient safety culture have been shown to affect error reporting frequency in nonlaboratory and laboratory environments. Zarbo and D’Angelo5 were able to show this in the anatomic pathology laboratory at Henry Ford Hospital, Detroit, MI. As a culture of no blame and no shame is promoted, front-line personnel feel safe and empowered to report problems. In a thriving patient safety environment, reporting errors is the first step in reducing error and is followed by root cause analysis, redesign of systems, and implementation of quality improvement initiatives. Cultural barriers limit organizational error reporting and subsequent error root cause analysis.

Much of the published literature on surgical pathology error focuses on diagnostic interpretation error. As Meier et al6 pointed out, surgical pathology diagnostic interpretation error is only one form of error (the other categories being reporting, specimen quality, and patient identification). Similar to errors in test results rendered by the clinical laboratory, 2 forms of error are associated with diagnostic interpretations: errors of accuracy and errors of precision. An error of accuracy occurs when a diagnostic interpretation does not correspond with the “truth” (eg, the presence or absence of disease in the patient), and an error of precision occurs when pathologists disagree on the diagnostic interpretation. These definitions of error are compatible with the IOM definition of medical error, as an inaccurate diagnosis or imprecise diagnoses (by a group of pathologists) are not the intended consequence of a diagnostic test.

All diagnostic interpretation errors have an active and a latent component, although pathology systems tend to focus on the individual pathologist as the active and major contributor to the error. This is unfortunate, as the concept of imprecision is a latent problem in the field of pathology and interpretation errors rarely are simply caused by slips or the use of incorrect cognitive heuristics. Interpretation errors are also caused by failures in our apprenticeship educational models, overwork, lack of system checks, and systems that do not encourage standardization and reaching diagnostic consensus.

Surgical pathologists already use several retrospective and prospective methods to detect interpretation errors. Secondary case slide review is one of them. The diagnostic interpretation error frequency reported in individual studies is highly variable,7 reflecting differences in study design, biases, system variability, and definitions of error. As hospital or unit patient safety cultural assessments have only recently been performed, the evaluation of anatomic pathology laboratory culture and error reporting generally has not been linked to diagnostic disagreement frequencies. However, it is believed that the harder one looks for error, the more error one will find. For example, when Zarbo and D’Angelo5 specifically looked for anatomic pathology specimen “defects,” they found that 27.9% of specimens were defective.

Prospective secondary case slide reviews detect near-miss events, presumably decreasing patient harm. The IT tool developed by Owens et al1 is an example of a forcing-function redundancy that hypothetically will serve this purpose. Both retrospective and prospective case slide review initiatives may serve as patient safety quality improvement tools, depending on implementation methods.

Owens et al1 reported that following implementation of the UPMC IT tool, after 8 months of data accrual involving 1,523 reviewed cases, 33 (2.2%) had minor disagreements and 1 (0.07%) had a moderate disagreement. No major disagreements were reported.

Before the implementation of this IT tool, UPMC performed a 5% retrospective random review process, followed by a focused review process.8 In the 5% retrospective random review process (involving 7,444 cases), the disagreement frequency was 2.6%, and major errors occurred in 0.36% of reviewed cases. The focused review process involved retrospective review of specific diagnostic problem areas, such as grading dysplasia or determining the presence or absence of metastatic carcinoma in colonic lymph nodes. This focused review process showed a 13.2% disagreement frequency, with 3.2% of disagreements being major.
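The disagreement frequencies quoted for these review methods can be recomputed directly from the reported case counts. A minimal Python sketch follows; the counts come from the text, while the implied retrospective discrepancy count is an approximation derived from the reported 2.6% rate, not a figure stated in the editorial:

```python
def pct(disagreements, reviewed):
    """Disagreement frequency as a percentage of reviewed cases."""
    return round(100 * disagreements / reviewed, 2)

# Prospective IT-driven random review (Owens et al): 1,523 reviewed cases.
minor = pct(33, 1523)    # 2.17 -> reported as 2.2% minor disagreements
moderate = pct(1, 1523)  # 0.07 -> reported as 0.07% moderate disagreement

# Retrospective 5% random review: 7,444 cases at a 2.6% disagreement rate
# implies roughly this many discrepant cases (derived, not stated in text).
implied_discrepant = round(0.026 * 7444)  # ~194

print(minor, moderate, implied_discrepant)
```

The comparison across methods rests on these denominators: the focused review's denominator is not reported in this editorial, so its 13.2% rate cannot be recomputed here.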

Owens et al1 do not provide an explanation for the variability in diagnostic disagreement rates across these 3 detection methods. The newly introduced IT-driven prospective random review process detected a similar but slightly lower error frequency than the retrospective random review process and a considerably lower frequency than the focused review process. There are several possible explanations, although as Owens et al1 point out, additional data collection and more thorough analysis are warranted for full evaluation.

First, it is important to note that the focused review process was implemented specifically to look for error, whereas the random review was designed as a redundancy function. A benefit of focused review was detecting a lack of standardization in particular areas of diagnosis. Root cause analysis may be performed on retrospectively obtained focused review data to determine causes of imprecision. Based on these data, quality improvement initiatives may be implemented to improve practice and reach consensus. Prospective error detection methods that are examples of focused review include difficult case consensus conferences before case sign-out and secondary review of particular case types, such as breast or prostate core needle biopsies. Process-driven prospective methods also detect errors before they have clinical consequences (ie, near-miss events).

Redundancy methods of error prevention are less likely to focus on errors of precision and, at least at UPMC, tended to detect fewer errors per cases reviewed. In some laboratory cultures, redundancy methods of error prevention also may be more likely to focus on individuals who make the errors rather than on system problems and on root causes of error.

It is possible that Owens et al1 reported a relatively low disagreement rate because the randomness in IT case selection resulted in pathologists increasing their vigilance and perhaps obtaining more consultations before sign-out. It is also possible that the pathologists had achieved a higher level of standardization through means not discussed and already were functioning at a high level of accuracy and precision. Alternatively, review pathologists may not have been concentrating on the review process, especially for cases presumed not to be diagnostic problems. In addition, cultural issues limiting case disagreement reporting could have been in effect.

As surgical pathologists know, there are a variety of ways to focus on improving patient safety, with one way being the implementation of secondary review methods. Many pathology departments intuitively know aspects of their culture and choose detection methods tailored to their specific practice. As individuals in a practice, pathologists want to adopt error prevention methods that work to limit the number of diagnostic interpretation errors and other error types. As a field, pathology needs to effectively respond to the more global barriers, such as cultural and educational problems that constrain individual pathologists in producing a safe diagnosis.

References

1.
2.
3.
4.
5.
6.
7.
8.