
Decreasing the Cutoff for Elevated Blood Lead (EBL) Can Decrease the Screening Sensitivity for EBL

Laura J. McCloskey PhD, Frank R. Bordash, Kathy J. Ubben, James D. Landmark MD, Douglas F. Stickle PhD
DOI: http://dx.doi.org/10.1309/AJCP5RKWF3IZTCTO 360-367 First published online: 1 March 2013


Change in the definition of elevated blood lead (EBL) from greater than or equal to 10 μg/dL (cutoff A) to greater than or equal to 5 μg/dL (cutoff B) was recently endorsed in the United States. A potential effect of this change is to decrease the screening sensitivity for EBL detection. We demonstrate this effect by simulated sampling of an example patient distribution for lead. Using lead-dependent assay imprecision, simulated sampling of the patient distribution tracked individual misclassifications relative to the EBL cutoff. Decreasing the EBL cutoff from A to B reduced screening sensitivity for EBL detection in this population to less than 90%, a decrease of 4%. This result occurred because, for cutoff B, a greater fraction of the EBL population was near the EBL cutoff and therefore subject to misclassification due to assay imprecision. The reduction in EBL screening sensitivity caused by a decreased EBL cutoff is likely to apply to EBL screening programs generally.

Key Words
  • Lead
  • Elevated blood lead
  • Screening
  • Atomic absorption
  • Point-of-care testing
  • LeadCare II
  • Simulation

Deleterious effects of lead exposure on children's health and development are well known.1,2 Screening for elevated blood lead (EBL) in children and work to reduce children's lead exposure have therefore been long-standing programmatic public health concerns in the United States.3,4 Although no level of lead in blood has been defined as safe,2,3 currently the definition of EBL is lead greater than or equal to 10 μg/dL (hereinafter referred to as EBL cutoff A).3,4 Recently, the Centers for Disease Control and Prevention endorsed recommendations of its Advisory Committee on Childhood Lead Poisoning Prevention (ACCLPP) to define EBL as lead greater than or equal to 5 μg/dL (EBL cutoff B).5,6

For most public health EBL screening programs, such a cutoff change will significantly increase screen-positive rates for EBL, as per the intent. In addition, however, a decrease in the EBL cutoff can potentially decrease the sensitivity of screening for detection of EBL. Such an effect is predicted for certain combinations of assay imprecision and differences in the densities of patient result distributions between cutoffs A and B.7 Our objectives in this study were to demonstrate the effect for an example basis lead distribution (the patient lead distribution observed at the University of Nebraska Medical Center in Omaha, Nebraska, in 2011) and also to examine the numerical scale on which change in sensitivity would be operative for this specific population distribution. Our approach was to simulate sampling of the basis distribution according to specified assay imprecision. The simulated sampling tracked the misclassifications of EBL (namely, EBL in the basis distribution classified as screen negative in the simulation results distribution) to determine screening sensitivity for detection of EBL as a function of the EBL cutoff concentration.

Materials and Methods

Characterization of a Basis Patient Population Distribution for Lead

A current patient population distribution for lead was obtained using first-or-only lead measurements for pediatric subjects (<18 years) during a 1-year interval (2011) at the University of Nebraska Medical Center (n = 10,333; Figure 1), as measured originally by inductively coupled plasma mass spectrometry, for which results were reported in increments of 0.1 μg/dL. To use continuous rather than binned data and to avoid characterization of noise in the data attendant to this single distribution, we characterized the patient distribution by a mathematical function that closely fit the patient distribution without noise. The choice of the mathematical function used to characterize the distribution was arbitrary; we settled on a simple function that was a sum of exponentials (y = 0.9998 − 0.9401 exp(−x/1.278) − 0.0605 exp(−x/5.676)). This function was used as a surrogate for the patient results distribution (also shown in Figure 1).
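The fitted surrogate distribution can be expressed directly as code. The following is an illustrative Python sketch (the function name is ours; the coefficients are those of the sum-of-exponentials fit reported above); evaluating its complement at the two cutoffs reproduces the EBL fractions of the basis distribution:

```python
import math

def surrogate_cdf(pb):
    """Surrogate cumulative distribution for blood lead (pb in ug/dL).

    Sum-of-exponentials fit to the patient distribution; the functional
    form was an arbitrary choice that closely fit the data.
    """
    return 0.9998 - 0.9401 * math.exp(-pb / 1.278) - 0.0605 * math.exp(-pb / 5.676)

# EBL fractions implied by the fit (compare: 1.1% and 4.4% in the text)
ebl_at_10 = 1.0 - surrogate_cdf(10.0)  # cutoff A
ebl_at_5 = 1.0 - surrogate_cdf(5.0)    # cutoff B
```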

Figure 1

A, Patient and surrogate distributions for lead. y-axis, cumulative fraction of results; x-axis, lead. For patients (solid line): n = 10,333; graph excludes 0.53% of results for lead more than 15 μg/dL (full data set range, 0-60 μg/dL); 75.2% of results were lead less than 2.0 μg/dL; increment of reporting was 0.1 μg/dL. For the surrogate distribution (dashed line), an arbitrary analytical function (y = F(x)) was used: F(x) = 0.9998 − 0.9401 exp(−x/1.278) − 0.0605 exp(−x/5.676). Deviations of the surrogate distribution values from those of the patient distribution data set were less than 0.002 at both 5 μg/dL and 10 μg/dL. B, Slopes of patient and surrogate distributions for lead demonstrating elimination of noise in the surrogate distribution. For the patient distribution (solid line), slopes as a function of x were calculated as the linear regression slope of a 3-point (x, y) data set over a 0.2-μg/dL interval centered on x. For the surrogate distribution (dashed line), slopes as a function of x were calculated from the analytical function F′ = dF/dx.

For the sake of argument, the surrogate data set was taken as the “true,” error-free, continuous lead distribution representing this particular patient population. The surrogate distribution was thus used as the “basis” data set for simulations, and we will hereinafter refer to the surrogate distribution as the basis distribution.

Characterization of Measurement Imprecision as a Function of Lead

Because public health EBL screening programs are likely to receive reports from multiple testing sites, we based imprecision of lead measurement on intersite imprecision as reported in the national lead proficiency testing (PT) program run by the Wisconsin State Laboratory of Hygiene (WSLH).8 Two data sets for intersite PT imprecision of lead measurement were used independently: one for analysis by atomic absorption (AA) and one for point-of-care testing (POCT) analysis (LeadCare II, Magellan Diagnostics, Billerica, MA). Standard deviations were characterized as a function of lead (x), using data for standard deviations reported for 142 PT samples used for both AA and POCT between 2007 and 2011. These data are shown in Figure 2.

Figure 2

Intersite standard deviations for lead measurement by atomic absorption (AA) (A) and point-of-care testing (POCT) (B). Data are from the Wisconsin Blood Lead Proficiency Testing program since 2007.8 For AA, SD(Pb) was characterized by an arbitrary fitted function: SD(Pb) = 1.092 exp (Pb/39.619) − 0.3739 μg/dL (r2 = 0.896; n = 142). For POCT, SD(Pb) was characterized by linear regression: SD(Pb) = 0.0752 Pb + 0.714 μg/dL (r2 = 0.870; n = 142). Each point represents a result derived from a variable number of points: there were varying numbers of sites participating in the proficiency testing survey across time, and there were often an unspecified number of site results excluded from the calculation of the standard deviation for each survey.
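The two imprecision models in the figure caption translate directly into code. This Python sketch simply transcribes the fitted functions given above (function names are ours):

```python
import math

def sd_aa(pb):
    """Intersite SD for atomic absorption (fitted exponential, ug/dL)."""
    return 1.092 * math.exp(pb / 39.619) - 0.3739

def sd_poct(pb):
    """Intersite SD for point-of-care testing (linear regression, ug/dL)."""
    return 0.0752 * pb + 0.714
```

Both models give standard deviations that increase with lead, which is why misclassification behavior near a cutoff depends on where that cutoff sits.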

Assessment of Sensitivity for EBL Detection by Simulation of Sampling

Sensitivity for detection of EBL was determined by simulation of sampling the basis distribution. Briefly, for each sample, a first random number was used to pick a “true” lead value according to the probabilities of the basis distribution, and a second random number was used to return a “measured” lead level for that sample, as measured by an assay with known imprecision, according to the probabilities of a normal distribution centered on lead [from the distribution Pb ± SD(Pb)]. In detail, the simulation required generation of 2 random numbers between 0 and 1. The first random number (0-1) was used as a y-axis value to select a value for lead (x) from the basis distribution by interpolation of the corresponding x-axis value from Figure 1. The second random number (0-1) was used to determine a reported value (x′), which might differ from x due to imprecision of measurement. The deviation of x′ from x was determined according to the probabilities of a normal distribution centered on x, with SD(x) as given in Figure 2. Specifically, the second random number (0-1) was treated as a point along the integral of a normal distribution (0-1), for which the corresponding z value of the normal distribution (in multiples of standard deviations, from −∞ to +∞) was determined from a lookup table relating the integral to z. This z value was applied to x, given SD(x), to produce the reported value x′ = x + z SD(x). Thus, for instance, repeated applications of the reporting algorithm for any given x would produce the distribution x′ = x ± SD(x). Repeated sampling of the basis distribution using a succession of 2 random numbers thus turned the basis distribution of x values into an observed (measured) distribution of x′ values.

Each value for x′ produced by simulated sampling of the basis distribution was classified as either a screen-positive or a screen-negative result, according to the given definition of the EBL cutoff. For all x classified as EBL in the original distribution (x ≥ EBL cutoff), the program tracked whether x′ was classified as EBL screen positive (x′ ≥ EBL cutoff; true positive [TP]) or EBL screen negative (x′ < EBL cutoff; false negative [FN]). Sensitivity (S), involving determinations of TP and FN for all x values originally classified as EBL, was computed as S = TP/(TP + FN).
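The sampling-and-classification procedure can be sketched in Python as follows. This is an illustrative reimplementation, not the original program (which was written in Visual Basic): it substitutes a Gaussian random draw for the lookup-table z inversion and bisection for graphical interpolation of the basis distribution, both of which are mathematically equivalent choices:

```python
import math
import random

def surrogate_cdf(pb):
    # Sum-of-exponentials surrogate for the basis lead distribution
    return 0.9998 - 0.9401 * math.exp(-pb / 1.278) - 0.0605 * math.exp(-pb / 5.676)

def inverse_cdf(u, lo=0.0, hi=60.0):
    # Invert the surrogate CDF by bisection over the data range 0-60 ug/dL
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if surrogate_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sd_poct(pb):
    # Intersite SD for POCT from the proficiency-testing regression
    return 0.0752 * pb + 0.714

def simulate_sensitivity(cutoff, n=10_000, sd=sd_poct, rng=None):
    """Sensitivity S = TP/(TP + FN) for EBL detection by simulated sampling."""
    rng = rng or random.Random()
    tp = fn = 0
    for _ in range(n):
        x = inverse_cdf(rng.random())             # "true" lead from basis distribution
        if x < cutoff:
            continue                              # non-EBL samples cannot be TP or FN
        x_meas = x + rng.gauss(0.0, 1.0) * sd(x)  # reported value x' with imprecision
        if x_meas >= cutoff:
            tp += 1
        else:
            fn += 1
    return tp / (tp + fn) if (tp + fn) else float("nan")
```

With the POCT imprecision model and cutoff B, single runs of this sketch return sensitivities in the mid-80% range, consistent with the values reported in the Results.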

Simulated sampling runs were performed according to the density of the original population distribution (10,000 points). To verify trends in sensitivity as a function of the EBL cutoff, we determined sensitivity by simulation for EBL cutoffs ranging from 4 to 12 μg/dL. Mean values and standard deviations for sensitivity were obtained from 1,000 replicates of the basis data set for each EBL cutoff, in which each replicate was for 10,000 samples.

Simulation calculations were performed using Visual Basic (Microsoft Corporation, Redmond, WA). Random numbers were pseudo-random numbers as produced by the Visual Basic function “Rnd().”

Results

Predicted Screening Sensitivity for Detection of EBL as a Function of the EBL Cutoff Concentration

EBL fractions for the basis distribution were 1.1% and 4.4% for EBL cutoffs A (EBL ≥10 μg/dL) and B (EBL ≥5 μg/dL), respectively. Results of simulations to determine sensitivity of EBL detection as a function of a range of assumed EBL cutoff concentrations are shown in Figure 3. For both AA and POCT, there was a progressive decrease in sensitivity as the EBL cutoff was decreased between EBL cutoffs A and B. According to simulation results, S(B) was decreased compared with S(A) as follows: for AA, S(A) = 92.7% ± 2.5%, S(B) = 88.4% ± 1.5% (ΔS = −4.3%); for POCT, S(A) = 89.9% ± 2.8%, S(B) = 85.9% ± 1.7% (ΔS = −4.0%). There was considerable variation in sensitivity across individual runs for a sample size of 10,000, with coefficients of variation for sensitivity between EBL cutoffs A and B ranging from 1.7% to 2.7% (AA) and 1.9% to 3.2% (POCT).

Figure 3

Sensitivity (S) for detection of elevated blood lead (EBL) as a function of the EBL cutoff. The data points show results by simulated sampling (average ± standard deviation for 1,000 runs of 10,000 samples per run). Dashed lines show results by direct calculation. AA, atomic absorption; POCT, point-of-care testing.

In these examples, the decrease in sensitivity between use of EBL cutoffs A and B was 4% in round numbers. Changes in sensitivity vs EBL for the AA and POCT data sets ran largely in parallel. The predominant driving force for alteration of sensitivity as a function of the EBL cutoff was the difference in density of patient data (fraction of patient data per increment in lead) in the underlying lead distribution in the vicinity of each cutoff, as can be demonstrated by consideration of data shown in Figure 4 and Figure 5.

Figure 4

Fraction of simulation results (x' in Materials and Methods) classified as elevated blood lead (EBL) (y-axis) according to normal distributions of results characterized by Pb ± SD(Pb), as a function of lead relative to the EBL cutoff (x-axis). Parameter = EBL cutoff A (≥10 μg/dL) or B (≥5 μg/dL). A, Results using SD(Pb) for atomic absorption. B, Results using SD(Pb) for point-of-care testing.

Figure 5

A, Fractional contribution to total elevated blood lead (EBL) (y-axis) of segments of the basis distribution (bin widths = 0.005 μg/dL), as a function of lead relative to the EBL cutoff (x-axis). For both plots in A, the sum of points from x = 0 to x = ∞ equals 1. B, Cumulative fraction of total EBL (y-axis) as a function of lead relative to the EBL cutoff (x-axis) for plots shown in A (lines: EBL cutoff A or EBL cutoff B). For both plots in B, y → 1 as x → ∞. Data in plots A and B were computed from data in Figure 1. Whereas total basis distribution fractions classified as EBL are different between EBL cutoffs A and B (EBL = 1.1% and 4.4%, respectively, from Figure 1), plots in B show that a greater fraction of total EBL is closer to the EBL cutoff for EBL cutoff B compared with EBL cutoff A.

Figure 4 shows the fractional classification of EBL as screen positive as a function of the distance of lead from the EBL cutoff, according to the spread of a normal distribution centered at each lead. That is, for a given lead, the graph shows the fraction of the normal distribution x = Pb ± SD(Pb) for which x would be above the value of the EBL cutoff. Directly at the cutoff (Pb − EBL = 0), there is a 50:50 split of the normal distribution above and below EBL, irrespective of the standard deviation. From there, the percentage of results above EBL increases as the distance between lead and EBL increases, proceeding as a function of both lead and standard deviation. For both AA and POCT data, the graphs are slightly different across EBL cutoffs due to the differences in standard deviation. The important aspect of these graphs is not the difference between the graphs shown for cutoffs A and B but the range of lead over which a significant fraction of misclassifications occurs for both A and B. For AA, for both EBL cutoffs A and B, 95% correct classifications of EBLs do not occur until lead exceeds EBL by roughly +2 μg/dL; for POCT, 95% correct classifications of EBLs do not occur until lead exceeds EBL by roughly +3 μg/dL. The importance of these ranges is with respect to the difference in the density of results within these ranges between the 2 different EBL cutoffs, as shown in Figure 5.
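The fractional classification just described is simply the standard normal CDF evaluated at the distance from the cutoff in SD units. A Python sketch, using the POCT imprecision model for illustration (function names are ours):

```python
import math

def sd_poct(pb):
    # Intersite SD for POCT (regression from the proficiency-testing data)
    return 0.0752 * pb + 0.714

def frac_classified_ebl(pb, cutoff, sd=sd_poct):
    """Fraction of the reported-value distribution pb +/- SD(pb) that falls
    at or above the cutoff: Phi((pb - cutoff) / SD(pb)), with Phi the
    standard normal CDF computed via the error function."""
    z = (pb - cutoff) / sd(pb)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

At the cutoff itself the function returns exactly 0.5 regardless of the standard deviation, reproducing the 50:50 split noted above.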

As shown in Figure 5, the fraction of EBLs within this near-cutoff range is substantially greater for EBL cutoff B than for A. For EBL cutoff A, 42.0% of EBLs are within +3 μg/dL of the EBL cutoff, whereas 61.9% of EBLs are within this range for EBL cutoff B. Thus, despite the fact that there is lesser imprecision for both AA and POCT at the lower EBL cutoff, the effect of imprecision on misclassifications applies to a much greater fraction of EBL at cutoff B than at cutoff A. The net result is a decrease in the overall sensitivity for detection of EBL as the EBL cutoff is decreased, per the simulation results shown in Figure 3.

The simulations represent a somewhat elaborate process, with some degree of uncertainty (eg, use of a pseudo-random number generator, use of a finite sampling rate, and the possibility of error in programming). It is appropriate, therefore, that there be some independent means of verifying these results. The average sensitivity for a given set of conditions can in fact be estimated by a direct calculation involving the data shown in Figures 4 and 5. Briefly, the approach is to calculate the sum across x of the fraction of EBL within small increments of x (points in Figure 5A) multiplied by the associated correct fractional classifications at x (from Figure 4). This sum provides a single-number estimate of the average sensitivity for detection of EBL. The results of these direct calculations are also shown in Figure 3 (dashed lines). As can be seen, across EBL cutoffs there was essentially an exact correspondence between the result of the direct calculation and the average result obtained by simulation of sampling. An advantage to the simulation approach is that it gives estimates of the variation in the sensitivity attendant to a finite sampling rate as would be obtained in actual practice.
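The direct calculation described above can be sketched as a short numerical integration in Python (an illustrative reimplementation using the surrogate distribution and POCT imprecision model given earlier; function names and the bin width choice mirror the 0.005-μg/dL bins of Figure 5A):

```python
import math

def surrogate_cdf(pb):
    # Sum-of-exponentials surrogate for the basis lead distribution
    return 0.9998 - 0.9401 * math.exp(-pb / 1.278) - 0.0605 * math.exp(-pb / 5.676)

def sd_poct(pb):
    # Intersite SD for POCT
    return 0.0752 * pb + 0.714

def direct_sensitivity(cutoff, sd=sd_poct, dx=0.005, x_max=60.0):
    """Average sensitivity: sum over EBL bins of (bin fraction of total EBL)
    x (fraction of the normal distribution at the bin midpoint classified
    screen positive)."""
    total_ebl = 1.0 - surrogate_cdf(cutoff)
    s = 0.0
    x = cutoff
    while x < x_max:
        bin_mass = surrogate_cdf(x + dx) - surrogate_cdf(x)
        mid = x + 0.5 * dx
        z = (mid - cutoff) / sd(mid)
        p_pos = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        s += (bin_mass / total_ebl) * p_pos
        x += dx
    # remaining far-tail mass (x >= x_max) is essentially always classified positive
    s += (1.0 - surrogate_cdf(x_max)) / total_ebl
    return s
```

Under these assumptions the direct calculation reproduces the ordering of the simulation results: sensitivity at cutoff B is several percentage points below sensitivity at cutoff A.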

Other Statistics Derived From the Simulations

The full array of statistics derived from the simulations of sampling for EBL cutoffs A and B is shown in Table 1. Two aspects of these results deserve commentary. First, note that both sensitivity and specificity decrease as the EBL cutoff is decreased. It is important to note that what is determined by simulation differs considerably from what is determined by calculations associated with a standard receiver operating characteristic (ROC) curve. In the usual ROC curve, the relationship between sensitivity and specificity is determined as a function of the cutoff to distinguish between 2 populations associated with either normal or abnormal states. In this case, there is a single population having a continuous distribution for which the distinction between normal and abnormal states is itself defined by the cutoff. Thus, unlike the results for a usual ROC, sensitivity and specificity are not inversely related. Second, it is important to note that the positive predictive value (PPV) also changes substantially between EBL cutoffs A and B. For POCT, PPV using EBL cutoff B was 76%, a decrease of 11%, compared with a PPV of approximately 87% using EBL cutoff A. For AA, PPV decreased from 91% to 81% between cutoffs A and B, a difference of 10%. These changes are due to the difference in population density in the vicinity of cutoffs A and B as described above. Because of this difference, for both AA and POCT, false positives constitute a substantially greater fraction of screen positives at cutoff B, thus decreasing the PPV.
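The PPV behavior can likewise be estimated by direct calculation: integrate the screen-positive probability over the whole basis distribution, splitting the accumulated mass into true positives (above the EBL cutoff) and false positives (below it). An illustrative Python sketch under the same assumptions as before (surrogate distribution and POCT imprecision model; function names are ours):

```python
import math

def surrogate_cdf(pb):
    # Sum-of-exponentials surrogate for the basis lead distribution
    return 0.9998 - 0.9401 * math.exp(-pb / 1.278) - 0.0605 * math.exp(-pb / 5.676)

def sd_poct(pb):
    # Intersite SD for POCT
    return 0.0752 * pb + 0.714

def direct_ppv(cutoff, sd=sd_poct, dx=0.005, x_max=60.0):
    """PPV = TP/(TP + FP), integrating screen-positive probability over the
    basis distribution and splitting mass at the EBL cutoff."""
    tp = fp = 0.0
    x = 0.0
    while x < x_max:
        bin_mass = surrogate_cdf(x + dx) - surrogate_cdf(x)
        mid = x + 0.5 * dx
        z = (mid - cutoff) / sd(mid)
        p_pos = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        if mid >= cutoff:
            tp += bin_mass * p_pos   # true EBL classified screen positive
        else:
            fp += bin_mass * p_pos   # non-EBL classified screen positive
        x += dx
    tp += 1.0 - surrogate_cdf(x_max)  # far tail, essentially always positive
    return tp / (tp + fp)
```

Because the basis distribution is denser just below cutoff B than just below cutoff A, this calculation yields a lower PPV at the lower cutoff, matching the direction of the Table 1 results.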

Table 1

Predicted Screening Sensitivity for Detection of EBL When Separating the Screen-Positive Cutoff Concentration From the EBL Cutoff Concentration

Separating a defined screen-positive cutoff from the definition of EBL could of course be used to maintain a fixed sensitivity for detection of EBL across EBL cutoff concentrations. We performed simulated sampling of results for EBL greater than or equal to 5 μg/dL when using screen-positive cutoff values less than the defined EBL. It was determined for both AA and POCT that moving the screen-positive cutoff back to 4.7 μg/dL could retain sensitivity for detection of EBL greater than or equal to 5 μg/dL equivalent to that operative for detection of EBL greater than or equal to 10 μg/dL when using cutoff A (data not shown). Such a change in the screen-positive cutoff, however, would necessarily increase screen-positive results and further decrease the positive predictive value of screen positives.
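The effect of a separate screen-positive threshold can be illustrated with the same direct-calculation machinery: hold the EBL definition fixed and vary only the decision threshold applied to the reported value. A hedged Python sketch under the earlier assumptions (surrogate distribution, POCT imprecision model; the specific 4.7-μg/dL value above came from the authors' simulations, not from this sketch):

```python
import math

def surrogate_cdf(pb):
    # Sum-of-exponentials surrogate for the basis lead distribution
    return 0.9998 - 0.9401 * math.exp(-pb / 1.278) - 0.0605 * math.exp(-pb / 5.676)

def sd_poct(pb):
    # Intersite SD for POCT
    return 0.0752 * pb + 0.714

def sensitivity_with_screen_cutoff(ebl_cutoff, screen_cutoff,
                                   sd=sd_poct, dx=0.005, x_max=60.0):
    """Sensitivity for detecting EBL (x >= ebl_cutoff) when the screen-positive
    decision uses a separate, lower threshold on the reported value."""
    total_ebl = 1.0 - surrogate_cdf(ebl_cutoff)
    s = 0.0
    x = ebl_cutoff
    while x < x_max:
        bin_mass = surrogate_cdf(x + dx) - surrogate_cdf(x)
        mid = x + 0.5 * dx
        z = (mid - screen_cutoff) / sd(mid)
        s += (bin_mass / total_ebl) * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        x += dx
    return s + (1.0 - surrogate_cdf(x_max)) / total_ebl
```

Lowering the screen-positive threshold below the EBL definition raises sensitivity for EBL detection, at the cost of more screen positives.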

Discussion

The primary objective of this study was to demonstrate changes in screening sensitivity for detection of EBL as a function of the EBL cutoff concentration. The study used as examples 2 combinations of data sets: a basis population lead distribution from Omaha, Nebraska, combined with a defined relationship between imprecision and lead for either POCT or AA. The primary result is that sensitivity is predicted to decrease as the EBL cutoff concentration is decreased. In the examples shown here, the decrease in sensitivity between use of EBL cutoffs A and B was 4% in round numbers for both AA and POCT. Results were consistent with a previous, low-resolution approximating analysis undertaken for POCT for this basis distribution.7 The effect is due largely to the fact that, at the lower EBL cutoff, a much greater percentage of EBLs are within the range of the cutoff wherein misclassifications of results due to measurement imprecision occur, per results shown in Figures 4 and 5. As a qualitative result, the general effect of decreased sensitivity upon change from EBL cutoff A to EBL cutoff B is likely to be operative for any population distribution with a continuously decreasing slope. We believe it is important for both lead screening program directors and clinical laboratory directors to at least be aware of this general result.

Our example calculations proceeded from the premise that composite distribution data collected by EBL screening programs are subject to measurement imprecision characteristic of intersite variation, either for AA or POCT methods of measurement. Regional public health EBL screening programs are instead likely to receive results from a mixture of both sites and methods in various proportions. Intrasite analytical standard deviations as a function of lead are likely to be less than intersite variation for any given method. Nonetheless, expected variation in reagents, instruments, and calibrations across sites makes use of intersite variation for standard deviation a reasonable approach for the purpose of demonstration, especially since our simulations used 2 such data sets, each considered independently. Note that intersite standard deviation values derived from PT results may in fact be conservative, as there is evidence that PT samples are to some extent handled more carefully than patient samples.9 We did not attempt to account for any potential bias in average lead results for either method of measurement. For both AA and POCT, bias is expected to be small in the range of lead relevant to the calculations for sensitivity.

It is important to note that the sensitivity simulation results shown in Figure 3 were based on singleton measurements. For central laboratory measurements such as by AA, it is common practice to repeat all measurements that are above the EBL cutoff so as to detect any possible instances of within-laboratory contamination of a sample. An unintended outcome of this practice is that it also reduces the overall screening sensitivity for EBL detection. The reason is simple: when only screen positives are remeasured, the only possible change in classification for a first-measurement screen positive (whether true positive [TP] or false positive [FP]) is reclassification as negative upon the repeat measurement, due simply to assay imprecision; the absolute numbers of both TPs and FPs can therefore only decrease. Accordingly, whereas the practice of resampling improves screening specificity, the screening sensitivity [S = TP/(TP + FN)] can only decrease. Thus, the sensitivity results for AA in Figure 3 represent a maximum for the sensitivity that would be observed in practice for AA. Using the average of results across 2 measurements, when the first measurement is classified as screen positive for EBL, we estimate that this effect causes an additional decrease in sensitivity of approximately 2% at EBL cutoff B (data not shown).
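This asymmetry can be demonstrated by extending the sampling simulation: classify each true EBL sample under both policies, sharing the first measurement so that the comparison is exact. An illustrative Python sketch (not the authors' Visual Basic program; it uses the surrogate distribution and AA imprecision model given earlier, with a Gaussian draw in place of the lookup-table z inversion):

```python
import math
import random

def surrogate_cdf(pb):
    # Sum-of-exponentials surrogate for the basis lead distribution
    return 0.9998 - 0.9401 * math.exp(-pb / 1.278) - 0.0605 * math.exp(-pb / 5.676)

def inverse_cdf(u, lo=0.0, hi=60.0):
    # Invert the surrogate CDF by bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if surrogate_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sd_aa(pb):
    # Intersite SD for atomic absorption
    return 1.092 * math.exp(pb / 39.619) - 0.3739

def singleton_vs_repeat(cutoff, n=20_000, sd=sd_aa, seed=1):
    """Sensitivity under singleton reporting vs the repeat policy in which
    only screen positives are remeasured and the average is classified."""
    rng = random.Random(seed)
    tp1 = fn1 = tp2 = fn2 = 0
    for _ in range(n):
        x = inverse_cdf(rng.random())
        if x < cutoff:
            continue                    # only true EBLs count toward sensitivity
        m1 = x + rng.gauss(0.0, 1.0) * sd(x)
        # singleton policy: classify the first measurement directly
        if m1 >= cutoff:
            tp1 += 1
        else:
            fn1 += 1
        # repeat policy: a first-measurement negative is final; a positive is
        # remeasured and the average of the 2 results is classified
        if m1 < cutoff:
            fn2 += 1
        else:
            m2 = x + rng.gauss(0.0, 1.0) * sd(x)
            if 0.5 * (m1 + m2) >= cutoff:
                tp2 += 1
            else:
                fn2 += 1
    return tp1 / (tp1 + fn1), tp2 / (tp2 + fn2)
```

Because every repeat-policy TP must first have been a singleton TP, the repeat-policy sensitivity can never exceed the singleton sensitivity in this construction.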

The rationale for using a mathematical function to model the patient distribution was described in Materials and Methods. A potential criticism, however, is that the appropriateness of the mathematical function cannot be evaluated for lead less than 2 μg/dL in the original data set, which accounts for approximately 75% of results. It is important, then, to emphasize that surrogate data in this interval are not involved in sensitivity calculations. The sensitivity calculations involved only values qualifying as EBL in the basis distribution. For specificity calculations (as reported in Table 1), there is little contribution to misclassification for lead below 2 μg/dL, and in any event the continuity of the projection from the known distribution to the assumed distribution below 2 μg/dL appears to be quite reasonable.

A potential criticism is that the use of a specific patient population lead distribution rather than that for the general population average limits the clinical utility of the study. The recent ACCLPP report5 cites studies from the National Health and Nutrition Examination Survey (NHANES) to indicate that the recommended EBL cutoff B was based on the 97.5th percentile of the NHANES-generated lead distribution in children 1 to 5 years old. Thus, only 2.5% of the general population aged 1 to 5 years is expected to have EBL at the lower cutoff, whereas in our study population the prevalence was 4.4%. We believe that the study is of public health interest precisely because it concerns analytical issues of screening in a high-prevalence population. Certainly, the public health concern in any given locale is not about the national average for EBL. A large area (27 square miles) of Omaha, Nebraska, has been designated a “Superfund” cleanup site by the US Environmental Protection Agency because of lead contamination,10 and thus the public health screening program there faces the challenge of identifying an above-national-average fraction of patients with EBL. This situation is unlikely to be unique, and thus the results of this study are relevant to any such similar circumstances. More broadly, one valuable aspect of the study is its description of how to evaluate the effects of changes in the EBL cutoff on sensitivity. The predicted scale of such effects would be difficult to deduce without use of the simulations or calculations as described here.

It is worth noting that, irrespective of predicted changes in sensitivity for changes in the definition of EBL, the results suggest that sensitivity of POCT screening for EBL using the current EBL cutoff A is substantially nonideal (approximately 90%). It is important to note, then, that although POCT dominates the number of PT reporting sites (more than 50% of sites reporting in recent WSLH PT surveys8), POCT may nonetheless be the source of only a very small fraction of test results nationwide; the number is simply unknown.

A general philosophical issue related to this study is whether EBL screening programs regard as fundamentally important the need to identify all patients with EBL, however EBL is defined. The usual concept of a screening test is to obtain high sensitivity at the expense of a decrease in specificity. The usual case for screening, however, is that the test is against a single condition (eg, infection). For EBL screening, in contrast, the test is against a continuous gradient of conditions related to the risk of adverse effects of exposure. In this context, the average sensitivity for EBL detection might be regarded as a relatively narrow performance measure. Screening performs very well for those who are in greatest need (EBLs well above the cutoff defining EBL) and less well only in patients who are less likely to be affected (those with EBLs near the EBL cutoff). At a systemwide level, then, there is a clear overall benefit to the screened population derived from the change in definition of EBL (ie, more people at exposure risk will be detected). Nonetheless, given that screening sensitivity for detection of EBL is likely to decrease as the EBL cutoff is decreased, EBL screening programs may wish to consider whether to use a screen-positive cutoff lower in value than that defining EBL. This approach would produce an increase in screen-positive results, but the increase would be only incremental in comparison to the large-percentage increases in screen-positive results that are likely to be encountered by most EBL screening programs upon changing from EBL cutoff A to EBL cutoff B. Whether or not such a change in approach to cutoffs is employed, however, it is clear that the overall public health objectives of EBL screening are well served simply by the change in definition of EBL.

