
External Quality Assurance Performance of Clinical Research Laboratories in Sub-Saharan Africa

Timothy K. Amukele MD, PhD, Kurt Michael MEd, MT, Mary Hanes, Robert E. Miller MD, J. Brooks Jackson MD, MBA
DOI: http://dx.doi.org/10.1309/AJCP8PCM4JVLEEQR; pages 720-723; first published online: 1 November 2012

Abstract

Patient Safety Monitoring in International Laboratories (JHU-SMILE) is a resource at Johns Hopkins University that supports and monitors laboratories in National Institutes of Health–funded international clinical trials. To determine the impact of the JHU-SMILE quality assurance scheme in sub-Saharan African laboratories, we reviewed 40 to 60 months of these laboratories' external quality assurance (EQA) results from the College of American Pathologists (CAP). We reviewed the performance of 8 analytes: albumin, alanine aminotransferase, creatinine, sodium, WBC count, hemoglobin, hematocrit, and the human immunodeficiency virus antibody rapid test. Over the 40- to 60-month observation period, the sub-Saharan laboratories had a 1.63% failure rate, approximately 40% lower than the 2011 CAP-wide rate of 2.8%. Seventy-six percent of the observed EQA failures occurred in 4 of the 21 laboratories. These results demonstrate that a system of remote monitoring, feedback, and audits can support quality in low-resource settings, even in places without strong regulatory support for laboratory quality.

Key Words
  • Laboratory
  • Quality assurance
  • Sub-Saharan Africa
  • PEPFAR
  • HIV
  • CAP

Patient Safety Monitoring in International Laboratories (JHU-SMILE) is a resource at Johns Hopkins University, Baltimore, MD, that is responsible for monitoring and supporting laboratory quality at international clinical study sites. JHU-SMILE remotely monitors approximately 165 international laboratories and clinics in 22 countries; 84 of these testing locations are in sub-Saharan Africa. The challenge of poor laboratory quality in sub-Saharan African countries is well known.1,2 We therefore reasoned that our 7-year experience implementing a quality assurance (QA) program for clinical research laboratories in sub-Saharan Africa would be valuable for the design of similar schemes.

JHU-SMILE activities fall into 3 areas. The first is facilitating and monitoring external quality assurance (EQA), also known as proficiency testing (PT). The second is the remediation of problems detected in independent annual audits. The third is serving as a resource for laboratory-related issues at international study sites. This resource function includes laboratory quality management; instrument and test validation; specimen management, including chain of custody and compliance with shipping regulations; laboratory data management; equipment maintenance; and personnel safety.

In this report we present a 40-month summary of the EQA performance of the sub-Saharan African JHU-SMILE laboratories. Our primary goal was to review the usefulness of the JHU-SMILE QA scheme in this setting using the College of American Pathologists (CAP) EQA performance as our key outcome. Our secondary goal was to determine which analytes and instruments were associated with higher rates of failures in international EQA schemes.

We limited our review to laboratories that had at least 10 sequential EQA testing cycles in the JHU-SMILE database, and had also used the CAP or Accutest/DigitalPT (Accutest, Boston, MA) as their EQA provider. We reasoned that this focus would allow us to detect long-term trends as well as compare the JHU-SMILE laboratories' performance with CAP-wide averages. Of note, 90% of our laboratories used CAP as their EQA provider during the 40-month period.

Materials and Methods

The 8 analytes used in the current analysis were albumin, alanine aminotransferase, creatinine, sodium, WBC count, hemoglobin, hematocrit, and the human immunodeficiency virus (HIV) antibody rapid test. These 8 analytes were selected because each is widely tested, each represents a different core technology, and the method of detection for each analyte is fairly consistent across different instruments.

For the 7 automated analytes, 10 sequential testing cycles represent 40 months of EQA performance; for HIV serologic testing, 10 sequential testing cycles represent 60 months of continuous EQA performance (ie, approximately 3 and 2 EQA events per year, respectively). This requirement for 10 sequential testing cycles of data limited our analysis to laboratories that had been part of the JHU-SMILE cohort since at least 2007.

To detect any systematic bias related to individual instruments or manufacturers, we calculated the mean standard deviation index (SDI) of each analyte measured with each instrument. This was done by compiling laboratory-specific (rather than CAP-wide) SDIs for each instrument. When more than 1 instrument was used during the 10 EQA cycles, we ascribed any EQA failures to the instrument in use when the failure occurred; however, we did not include failures for instruments used for only 1 EQA testing cycle. We took this approach so that our findings would correspond to the core Clinical Laboratory Improvement Amendments (CLIA) EQA criteria, which require an 80% pass rate for each analyte over 2 testing cycles.
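
For readers unfamiliar with the metric, the SDI for a single EQA challenge is conventionally defined as the laboratory's result minus the peer-group mean, divided by the peer-group standard deviation; the notation below is generic and is not drawn from the study's own data:

\[
\mathrm{SDI}_i \;=\; \frac{x_i - \bar{x}_{\mathrm{peer}}}{s_{\mathrm{peer}}},
\qquad
\overline{\mathrm{SDI}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\mathrm{SDI}_i
\]

where $x_i$ is the laboratory's result on challenge $i$, $\bar{x}_{\mathrm{peer}}$ and $s_{\mathrm{peer}}$ are the peer-group (instrument-specific) mean and standard deviation for that challenge, and $n$ is the number of challenges attributed to a given instrument.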

Results

At the end of 2007, JHU-SMILE was monitoring 39 sub-Saharan African laboratories. Twenty-one laboratories, representing 54% (21 of 39) of the total, were included in our analyses. More than 90% of the EQA materials used were CAP surveys. Of the 21 laboratories, 14 met the selection criteria for the automated analytes and 19 met the criteria for manual (rapid HIV antibody) testing. We compared the performance of our cohort with the 2011 CAP-wide average pass rates, which are derived from the more than 22,000 laboratories that participate in CAP-administered EQA programs. The 14 laboratories that met the selection criteria for automated analytes had an EQA failure rate approximately 40% lower than the 2011 CAP-wide rate (1.63% vs 2.8%; written communication, W. Johnson, CAP). EQA failure rates for our cohort were 1.0% for WBC count, 0.6% for hemoglobin, 2.2% for hematocrit, 0.7% for alanine aminotransferase, 2.1% for creatinine, 0.7% for albumin, 4.1% for sodium, and 2.4% for rapid HIV antibody testing.
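
As a worked check on the figures above, the relative reduction implied by the two reported rates is

\[
\frac{2.8\% - 1.63\%}{2.8\%} \;\approx\; 0.42,
\]

ie, roughly a 40% lower failure rate, consistent with the rounded figure used throughout this report.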

Figure 1 shows the percentage of all EQA survey failures arising at individual laboratories. Seven of the 14 laboratories accounted for all the EQA failures during this 40-month period, and 4 laboratories accounted for 76% of all failures. Figure 2 shows EQA survey failures over time, expressed as a percentage of total EQA challenges. Failure rates were low but increased over time in a statistically significant manner; sodium challenge failures accounted for most of the increase.

Figure 1

Percentage of all external quality assurance (EQA) survey failures arising at individual laboratories. Deidentified laboratory identifiers are indicated on the x-axis. The number of EQA failures at each laboratory is denoted by the bars. The cumulative sum of EQA failures (expressed as a percentage) is denoted by the data points on the line graph and corresponds to the right-hand y-axis.

Figure 2

External quality assurance (EQA) survey failure rate over time. The y-axis shows the total number of failures in each EQA cycle expressed as a percentage. The solid line is a least-squares fit of the number of failures over time. R2 = 0.4.
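
A minimal sketch of the trend analysis summarized in Figure 2, written in Python under the assumption that per-cycle failure percentages are available; the values below are illustrative placeholders rather than the study's data, and the least-squares fit and R^2 calculation use only standard NumPy routines:

```python
import numpy as np

# Illustrative placeholder data: EQA cycle index and the percentage of
# challenges failed in each cycle (NOT the values from the study).
cycles = np.arange(1, 11)                      # 10 sequential EQA cycles
failure_pct = np.array([0.5, 0.8, 0.6, 1.0, 1.2,
                        1.1, 1.5, 1.8, 1.6, 2.0])

# Ordinary least-squares fit of failure percentage against cycle number.
slope, intercept = np.polyfit(cycles, failure_pct, 1)
predicted = slope * cycles + intercept

# Coefficient of determination (R^2) for the fitted line.
ss_res = np.sum((failure_pct - predicted) ** 2)
ss_tot = np.sum((failure_pct - failure_pct.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.3f} %/cycle, R^2 = {r_squared:.2f}")
```

With the study's actual per-cycle failure percentages substituted for the placeholders, the same calculation yields the fitted line and R2 shown in Figure 2.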

Table 1 shows the instruments used for EQA, the number of cumulative testing cycles for each instrument, and the mean SDI of analytes measured using that instrument. The table shows remarkable instrument harmonization in this cohort: the Beckman Coulter ACT Diff 5 (GMI, Ramsey, MN) and the Roche COBAS Integra (Roche Diagnostics, Indianapolis, IN) accounted for 60% (201/336) of all hematology and 61% (272/443) of all chemistry EQA testing, respectively, performed by these laboratories. Using a mean SDI within ±2.0 as the acceptability criterion, none of the instruments showed unacceptable bias.

Table 1

Discussion

Impact of a QA System in Sub-Saharan African Laboratories

This report summarizes the EQA performance of a cohort of sub-Saharan African HIV clinical research laboratories on 8 different tests over a 40-month period. During this period, the laboratories in our cohort had an EQA failure rate approximately 40% lower than the 2011 CAP-wide average. This shows that a remote QA scheme can support good-quality laboratory work even in physically distant settings without strong national regulatory support for laboratory quality.

Our data revealed that sodium had a failure rate approximately 5 times that of the other analytes. Based on prior studies of EQA sample stability,3 the performance of the 7 laboratories without any failures, and our experience with the laboratories, we surmise that the higher sodium EQA failure rate reflects infrequent and imprecise calibration at the laboratories responsible for those failures.

How Does the System Work, How Much Does It Cost, Can It Be Replicated?

JHU-SMILE staff members are registered medical technologists with quality control, QA, and regulatory knowledge, as well as many years of laboratory experience; most also hold master's degrees. This level of experience is necessary to help resolve issues identified by EQA results and on-site audits. Each staff member is assigned 15 to 20 laboratories as a primary coordinator and 15 to 20 as a backup coordinator. All laboratories for which they are responsible are thoroughly reviewed at least once a month, with additional reviews as often as needed. Routine tasks include obtaining new EQA reports, initiating investigations for any analyte scoring less than 100%, updating long-term EQA summaries, audit remediation, daily electronic mail communication, occasional phone calls, and individual mentoring on laboratory-related issues. In addition, in an average year, our staff members visit up to 5% of laboratory sites to address issues that need prompt or in-person resolution.

For the 3 years reviewed in this analysis, the primary costs of administering the JHU-SMILE scheme were salary and benefits (52%), EQA survey materials (15%), shipping (10%), and travel (4%). In our experience, the biggest challenge has been shipping logistics, including customs and broker fees and the difficulty of forecasting shipping costs, which can vary by more than 10-fold depending on the country in question and the hazardous classification of the material. Survey material costs vary with the number of surveys needed to cover the analytes offered and with their source, but for a typical laboratory performing basic chemistry, hematology, and serology testing, with standard shipping, they are approximately $3,000 to $5,000 per year. This is similar to the experience of other EQA programs in low-resource settings, which have also reported shipping logistics as their biggest challenge.4,5

We identified several key elements that enable this model of support and monitoring to work. In order of importance, these are strong working relationships between remote coordinators and an on-site QA contact person, on-site administrative-level support, and easy access to performance data and mentoring. Experience has shown that these elements are necessary even in settings with strong laboratory regulation. We built a secure website (www.psmile.org) to store the several hundred documents that we collect or create each week. The website also allows users to retrieve monitoring documents and search a resource library that covers topics of interest to clinical laboratories; the resource library is available to the public.

Laboratories supporting research efforts receive extra funding, are part of more structured health care delivery systems (eg, attached to research clinics), and may have personnel with better training or motivation than laboratories not involved in clinical research. This is true of our cohort. For example, we examined the fraction of laboratories in sub-Saharan Africa that meet the standards of CLIA or International Organization for Standardization (ISO) 15189, and compared it with the fraction of our cohort holding the same certifications.

In late 2011, only 372 laboratories in sub-Saharan Africa met 1 of these 2 international standards.6 Excluding South Africa, that number falls to 34. Using our current best estimate of the number of clinical laboratories in sub-Saharan Africa,7 these 372 laboratories likely represent approximately 0.3% of the laboratories in the region. In 2007, at the beginning of our study, only 3 of 21 laboratories in our cohort met either of these standards; by the end of the study period, an additional 5 laboratories had qualified (8/21 [38%]). This finding has 2 implications. First, attempts to replicate the JHU-SMILE QA system will need to be adjusted to the level of staff training, financial support, and other resources available locally, and could be implemented at the local or regional level; for example, EQA samples can be produced locally at much lower cost, and such systems are already in operation in a few developing countries.8 Second, it shows that when a culture of high quality is created, laboratories will improve their performance to meet those expectations.

Laboratory testing is the backbone of clinical diagnosis and prognosis, and provides an effective and relatively inexpensive way to improve the quality of health care in a systems-based way. The JHU-SMILE laboratory QA model can serve as a basis for designing sustainable approaches to monitor and improve laboratory quality.

Acknowledgments

This project was supported by grant HHSN266200500001C from the National Institute of Allergy and Infectious Diseases, the National Institutes of Health, Bethesda, MD.

References
