
An Examination of the Usefulness of Repeat Testing Practices in a Large Hospital Clinical Chemistry Laboratory

Carl O. Deetz MD, PhD, Debra K. Nolan, Mitchell G. Scott PhD
DOI: http://dx.doi.org/10.1309/AJCPWPBF62YGEFOR • Pages 20-25 • First published online: 1 January 2012


A long-standing practice in clinical laboratories has been to automatically repeat laboratory tests when values trigger automated “repeat rules” in the laboratory information system such as a critical test result. We examined 25,553 repeated laboratory values for 30 common chemistry tests from December 1, 2010, to February 28, 2011, to determine whether this practice is necessary and whether it may be possible to reduce repeat testing to improve efficiency and turnaround time for reporting critical values. An “error” was defined to occur when the difference between the initial and verified values exceeded the College of American Pathologists/Clinical Laboratory Improvement Amendments allowable error limit. The initial values from 2.6% of all repeated tests (668) were errors. Of these 668 errors, only 102 occurred for values within the analytic measurement range. Median delays in reporting critical values owing to repeated testing ranged from 5 (blood gases) to 17 (glucose) minutes.

Key Words:
  • Critical value
  • Laboratory management
  • Laboratory error

Since the early 1970s, laboratory medicine specialists have used computer technology and automation to identify and confirm critical laboratory values.1,2 The historic practice in clinical laboratories has been to automatically repeat laboratory values that are above or below a critical threshold or that trigger other automated “repeat rules” such as a delta check. These practices were established when laboratory instruments were far less reliable than today,3 yet they persist in many laboratories (including ours). In fact, recent studies show that analytic issues account for only 8% to 15% of clinical laboratory–related errors, with preanalytic and postanalytic errors representing 85% to 92% of all errors.4,5 Contemporary laboratory instruments use numerous safeguards in their hardware and software to improve the accuracy and reliability of results.

A recent summary of data from a College of American Pathologists (CAP) Q-Probes survey suggests 61% of laboratories still repeat testing for critical chemistry values.6 The survey also suggests that laboratory test repeat practices have the potential to delay reporting by 10 to 14 minutes and waste resources without significantly preventing analytic errors.6 These observations led us to question whether automated repeated testing is necessary in our laboratory. We examined 25,553 repeat laboratory values from a total of 855,009 results during a 3-month period to determine whether it may be possible to reduce repeat testing and improve efficiency and turnaround time for reporting laboratory values.

Materials and Methods

Barnes-Jewish Hospital is an approximately 1,200-bed tertiary care center that serves the city of St Louis, MO, and surrounding areas. The annual test volume in our clinical chemistry laboratory is approximately 7 million. From December 1, 2010, to February 28, 2011, 855,009 results from 30 different clinical chemistry tests were examined for repeated testing. Immunoassay data were collected for an additional month. Tests examined were alanine transaminase, aspartate transaminase, bilirubin, blood urea nitrogen, ionized calcium, calcium, chloride, cholesterol, creatinine, digoxin, ferritin, gentamicin, glucose, potassium, lactate, lactate dehydrogenase, lipase, magnesium, microalbumin, sodium, pco2, pH, phosphorus, po2, free thyroxine, total protein, triglycerides, thyroid-stimulating hormone, uric acid, and vancomycin.

Electrolytes and routine chemistry tests were performed on lithium heparin–anticoagulated blood specimens using the Modular P System (Roche, Indianapolis, IN). Immunoassay testing, with the exception of microalbumin, was performed on lithium heparin–anticoagulated blood specimens using the Advia Centaur System (Siemens, Tarrytown, NY) via a chemiluminescence enzyme immunoassay. Microalbumin testing was performed on urine specimens using the Tina-quant microalbumin immunoturbidimetric assay (Roche) on the Modular P. Therapeutic drug monitoring tests were performed on lithium heparin–anticoagulated blood via fluorescence polarization immunoassay on the Abbott AxSYM analyzer (Abbott Diagnostics, Abbott Park, IL). Arterial blood gas testing was performed on heparin-anticoagulated whole blood on an ABL 700 series instrument (Radiometer, Copenhagen, Denmark) via amperometric and potentiometric assays.

A report was created by Barnes-Jewish Hospital information systems using Crystal Reports Enterprise (SAP Business Objects America, Newtown Square, PA) that extracted data from our laboratory information system (Cerner Millennium, Cerner, Kansas City, MO) via an Oracle database (Oracle, Redwood Shores, CA). The report was designed to retrieve only the test(s) that initiated the automated repeated testing, not any “passenger” repeated testing, such as the other tests in an arterial blood gas panel when only the po2 initiated the repeat testing “flag.”
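As an illustration of this distinction, the filtering step might look like the following minimal sketch; the record structure and field names are hypothetical and are not drawn from the actual report query.

```python
# Hypothetical sketch of excluding "passenger" repeats: keep only the
# analyte whose own flag initiated the repeat, not the panel-mates that
# were re-run alongside it. Record fields are illustrative assumptions.
records = [
    {"accession": "A100", "test": "po2", "repeat_flag": "delta check"},
    {"accession": "A100", "test": "pco2", "repeat_flag": None},  # passenger
    {"accession": "A100", "test": "pH", "repeat_flag": None},    # passenger
]

# Retain only the test(s) that triggered the automated repeat.
initiating = [r for r in records if r["repeat_flag"] is not None]
print(initiating)  # only the po2 record is retained
```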

The flags that result in automatic rules-based repeated testing are as follows:

  • linear high: values above the analytic measurement range (AMR)
  • linear low: values below the AMR
  • review: clinically significant values (eg, calcium level >12 mg/dL [3 mmol/L])
  • critical alert: values that require notification of a health care provider (eg, calcium level >14 mg/dL [3.5 mmol/L])
  • delta check: values that change by more than a set amount within a set interval (eg, a calcium level that changes by 0.8 mg/dL [0.2 mmol/L] within 48 hours)

Data captured in these reports included date, result times, initial result, verified result, instrument ID, and accession number; these data were transferred to Microsoft Excel (Microsoft, Redmond, WA) for analysis. We examined the number of automatic rules-based repeated laboratory values and the frequency of the different flags resulting in a test being repeated. Absolute and percentage differences between the initial value and the final verified value were determined. If the absolute difference between the final verified value and the initial result was greater than the College of American Pathologists/Clinical Laboratory Improvement Amendments (CAP/CLIA) allowable error, the initial result was deemed an identified error. The CAP/CLIA allowable errors, AMR, and critical values for the 30 tests are shown in Table 1.
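To make these rules concrete, the following is a minimal sketch of how the flag logic and the error definition could be expressed for calcium; the AMR bounds shown are assumptions for illustration (the actual values appear in Table 1), and the function names are ours, not the laboratory information system's.

```python
# Minimal sketch of the rules-based repeat flags and the CAP/CLIA error
# definition, using the calcium thresholds quoted in the text. The AMR
# bounds below are illustrative assumptions, not the method's actual AMR.
AMR_LOW, AMR_HIGH = 2.0, 18.0  # mg/dL, assumed for illustration

def calcium_repeat_flags(value, prior=None, hours_since_prior=None):
    """Return the automated repeat flags raised by a calcium result (mg/dL)."""
    flags = []
    if value > AMR_HIGH:
        flags.append("linear high")     # above the AMR; repeat on dilution
    if value < AMR_LOW:
        flags.append("linear low")      # below the AMR; check for short sample
    if value > 14.0:
        flags.append("critical alert")  # requires provider notification
    if value > 12.0:
        flags.append("review")          # clinically significant value
    if (prior is not None and hours_since_prior is not None
            and hours_since_prior <= 48 and abs(value - prior) >= 0.8):
        flags.append("delta check")     # change of >=0.8 mg/dL within 48 h
    return flags                        # a result can raise more than one flag

def is_identified_error(initial, verified, allowable_error=1.0):
    """Study definition of an error: the absolute difference between the
    initial and final verified values exceeds the CAP/CLIA allowable
    error (1.0 mg/dL for calcium)."""
    return abs(initial - verified) > allowable_error
```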


Results

Initial and repeated test results were evaluated for each test following the flow chart in Figure 1 (calcium is shown as an example). Of the 855,009 test results examined, 25,553 (3.0%) were repeated as a result of a rules-based auto-repeat flag (Table 2). Of these, 1,690 automated repeats were due to the initial result being above the AMR, and 3,398 were due to the initial result being below the AMR. Initial values outside the AMR accounted for 19.9% of all automated repeat testing but represented 566 (84.7%) of the 668 total errors.

Table 1

Figure 1

The third row of boxes shows the total number of repeated tests performed for calcium (Repeats) and the total number of errors identified (All errors identified) according to College of American Pathologists/Clinical Laboratory Improvement Amendments (CAP/CLIA) criteria. Subsequent boxes break down the number of repeated tests performed above, below, and within the analytic measurement range (AMR); the number of flags that initiated the automated repeat testing; and the number of errors identified as a result of each flag.

Table 2

The remaining 20,844 initial values that were repeated were within the AMR of the test: 15,614 delta check flags, 7,295 review flags, and 3,162 critical results (some initial results generated >1 automated repeat flag). From these 20,844 repeated values, a total of 102 errors were identified in which the difference between the initial result and the final verified result exceeded the CAP/CLIA limits for allowable error. This represented 0.5% of the repeated tests within the AMR.


Electrolytes

Table 3 summarizes the repeated testing when the initial result was within the AMR for 6 electrolyte tests. Calcium was the most frequently repeated test, largely as a result of delta checks, yet only 5 (0.2%) of 3,079 repeated calcium values were identified as errors according to CAP/CLIA criteria. The highest error rates were for sodium and chloride, with 6.2% and 5.3% of repeated values, respectively, exceeding the CAP/CLIA limits for allowable error. In contrast, the number of repeated tests for potassium (1,295) was twice that of sodium, yet only 10 errors (0.8%) were identified from the repeated potassium tests.

Therapeutic Drug Monitoring

Table 4 shows repeated testing for 3 common therapeutic drug monitoring tests. None of the 165 repeated critical values exceeded the allowable error limits for these drugs, and no errors were identified among the 61 repeats triggered by delta check flags.


Immunoassays

Among the 132 immunoassay results within the AMR that were repeated, none resulted in the identification of an error exceeding CAP/CLIA criteria (Table 5).

Blood Gases

Table 6 summarizes findings of 2,763 blood gas results within the AMR that were repeated owing to an automated repeat flag. The most common automated repeat flag was a delta check for po2. The largest number of errors identified (41) for blood gas testing was also for po2.

Table 3

General Chemistry Tests

Table 7 summarizes the test results within the AMR that were repeated for 14 common chemistry tests. By far the most common cause of automated repeated testing in this group was a delta check, accounting for 9,831 repeated tests. However, an error was identified in only 35 (0.4%) of these repeated tests. In addition, only 3 of 605 repeated critical values identified an error in the initial value.

Delay in Reporting Repeated Values

Figure 2 shows the time between the initial result and the final verified result for critical value repeated testing for selected tests from each category. As might be expected, the delays were shortest for tests performed on the blood gas analyzer.


Discussion

Recent studies have shown that the practice of repeating tests with critical laboratory values or other results that trigger automated repeating may not be necessary with today’s automated clinical laboratory analyzers.3,7 A small study examined a total of 580 repeated tests for potassium, glucose, platelet count, or activated partial thromboplastin time and found that the repeated value was within the acceptable difference for 95.3% of the repeated critical value tests and 97.6% of all repeated tests.3 Another study examined 500 consecutive critical results, and their repeated values, for each of 5 different hematology/coagulation tests. Using their internal definition of acceptable error, Toll et al7 found that 0% to 2.2% of the repeated values for these tests were outside their acceptance criteria and concluded that repeated testing for critical values did not offer an advantage or provide additional benefit in hematology and coagulation settings.7 Neither of these studies examined the time delays in reporting critical values that repeated testing invariably causes. The CAP recently published the results of a survey of 86 laboratories, each of which reviewed 40 critical test results from 4 tests at their institutions.8 The study found that 61% of laboratories always repeated critical results and that the median delay in reporting as a result of repeated testing was 10 to 14 minutes in most laboratories and 17 to 21 minutes in 10% of the laboratories.6,8

Table 4

Table 5

Table 6

Based on the findings of these studies and the Q-Probes survey, we examined the results of our repeated testing. For more than 20 years, we have automatically repeated all tests when the initial value falls outside the AMR of the test (high or low), the result is a critical value, the result fails a delta check, or the value exceeds a preset “review” limit. All of the instruments used are directly interfaced to our laboratory information system, and results are autoverified when all rules are satisfied. When one of the aforementioned flags occurs, results are not autoverified and are automatically repeated by the instrument. For initial values above the AMR of the method, the test is repeated on dilution; for initial values below the AMR, the sample is examined for short sampling and repeated using a special “short sample” cup if necessary. For initial values within the AMR, the technologist reviews the repeated result, and if it agrees with the initial result, the initial value is verified manually by the technologist. If the difference between the initial and repeated results exceeds the greater of the 2 SD range of the quality control sample closest in concentration to the test sample or 10%, the test is performed a third time, and the average of the 2 results that agree is reported. Although this is our policy, it is possible that the decision about performing the test a third time is often made subjectively by the technologist.
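In code terms, the within-AMR portion of this policy might be sketched as follows; this is a simplified reading of the policy, the function names are ours, and treating the closest pair of results as the 2 that “agree” is our assumption.

```python
# Simplified sketch of the within-AMR repeat/verification policy described
# above. run_third is a callable standing in for a third instrument run;
# qc_sd is the SD of the QC sample closest in concentration to the sample.
def verify_within_amr(initial, repeat, run_third, qc_sd):
    # Tolerance: the greater of the 2 SD QC range or 10% of the result.
    tolerance = max(2 * qc_sd, 0.10 * abs(initial))
    if abs(initial - repeat) <= tolerance:
        return initial  # repeat agrees; technologist verifies initial value
    third = run_third()  # otherwise, perform the test a third time
    # Report the average of the 2 results that agree (assumed here to be
    # the pair with the smallest difference).
    pairs = [(initial, repeat), (initial, third), (repeat, third)]
    a, b = min(pairs, key=lambda p: abs(p[0] - p[1]))
    return (a + b) / 2
```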

Based on experience and the reports discussed, we hypothesized that the vast majority of repeated values from our clinical chemistry laboratory would agree with the initial value and that there may be only limited benefit in continuing such frequent repeat analyses. We also hypothesized that repeated analysis for critical values was delaying notification of caregivers about these critical results.

The definition of what constitutes a significant difference between repeated values is variable. Allowable error can be defined by biologic variability,9,10 by subjective opinion of what is clinically significant, by clinician survey, or by regulatory requirements. We chose to use the CAP/CLIA criteria for allowable error because they are widely understood and are the criteria by which proficiency testing is judged in the United States. This decision can be questioned because there is not a clear rationale for some of these criteria, which may explain some of the differences we observed in the number and frequency of errors identified for different tests. For example, the CAP/CLIA criterion for sodium is ±4 mmol/L (∼ ±2.8%), whereas the criterion for calcium is ±1.0 mg/dL (∼ ±10.0%). It could be argued that the former is clinically insignificant, whereas the latter is clinically significant. It is also possible that the differences in the magnitude of what is considered allowable error led to our finding of considerably more errors for repeated sodium testing within the AMR than for calcium. Interestingly, when we closely examined the data for these 2 tests, this was not the case: there was only 1 additional calcium repeated result of the 3,079 results within the AMR in which the difference was between 0.5 and 1.0 mg/dL, whereas for the 31 errors within the AMR for sodium, 19 exhibited a difference between the initial and repeated tests that exceeded 10 mmol/L.
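As a worked check of the percentage equivalents quoted above (the representative concentrations of 145 mmol/L for sodium and 10 mg/dL for calcium are our assumptions):

```latex
% Percentage equivalents of the CAP/CLIA absolute limits, evaluated at
% assumed representative concentrations.
\[
  \frac{\pm 4\ \text{mmol/L}}{145\ \text{mmol/L}} \approx \pm 2.8\%,
  \qquad
  \frac{\pm 1.0\ \text{mg/dL}}{10\ \text{mg/dL}} = \pm 10\%
\]
```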

Table 7

Figure 2

Results for potassium, pH, glucose, and lactate are shown as representative tests from each category studied. Data are shown as the median time between the initial result and the final verified result (squares), with the 75th and 25th percentiles (upper and lower vertical lines, respectively).

Results below the AMR (linear low) may be due to “short sampling” or other preanalytic or analytic errors. Results above the AMR are repeated on dilution to obtain a final result, and the absolute or percentage differences from the repeated results are most frequently greater than the CAP/CLIA-derived allowable error. Clearly, when initial results are outside the AMR, repeated testing will continue to be necessary. Our results for repeated testing when the initial result is within the AMR suggest that for almost all automated chemistry testing, the repeated testing is unnecessary and delays the reporting of results, a particularly important problem for critical values. Our findings also suggest that for some tests, such as sodium and po2, repeat testing may be necessary to detect some large errors in the initial result. The reasons for these errors are unclear at this time and may require prospective evaluation. Finally, while it may not be necessary to repeat the analysis for samples that trigger a delta check flag, it will still be necessary for technologists to check the identity and plausibility of the result for the previous sample. Delta checks provide a means for identifying mislabeled samples, sample integrity problems resulting from preanalytic problems, and random analytic errors. While our study strongly suggests that random analytic errors are rare, it does not address the first 2 causes of a delta check flag, and these will need to be investigated by the laboratory.

The delays we observed in reporting critical values as a result of repeated testing were similar to those described by survey participants in the Q-Probes study.6,8 It is not surprising that the delays for blood gases were shorter than those for other chemistry tests because the analytic time is much shorter. However, the median delays observed for tests such as potassium and glucose are far greater than the actual analytic time. This is most likely due to the time a technologist takes to review results, determine whether a third test is necessary, and decide on manual verification of the final result.

Weaknesses of this study are that the data are from a single laboratory and from only a few types of automated analyzers. The error rates may vary depending on instrumentation, quality assurance practices, or other variables of individual laboratories. For example, because we have multiple instruments performing the same test (eg, 7 Roche Modular P units), we do not accept calibrations if the quality control samples are outside of 1 SD, which may minimize our error rates. Finally, the numbers of repeated tests observed for the immunoassay and therapeutic drug monitoring categories may be too small to support firm conclusions. Nevertheless, these results suggest that repeat testing for many automated chemistry tests, including critical values, can be stopped; they should also serve as a catalyst for other laboratories to examine the value of their repeat testing practices. Doing so can improve patient care by delivering critical values more rapidly and could potentially save 2% to 3% of reagent costs for many tests.


Upon completion of this activity you will be able to:

  • list the “triggers” that are commonly used to institute automated or manual repeat laboratory testing such as critical values, delta checks, and values outside of the analytic measurement range (AMR).

  • examine repeat testing practices at your own institution to determine whether repeating tests within the AMR results in the identification of significant numbers of analytical errors.

  • predict what types of historical repeat triggers may no longer be necessary in a contemporary automated laboratory setting.

The ASCP is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. The ASCP designates this journal-based CME activity for a maximum of 1 AMA PRA Category 1 Credit™ per article. Physicians should claim only the credit commensurate with the extent of their participation in the activity. This activity qualifies as an American Board of Pathology Maintenance of Certification Part II Self-Assessment Module.

The authors of this article and the planning committee members and staff have no relevant financial relationships with commercial interests to disclose.

Questions appear on p 156. Exam is located at www.ascp.org/ajcpcme.


We thank Cynthia Burch of BJC Health Information Systems for help with this study.

