Surrogate indicators of sensitivity in gynecologic cytology: Can they be used to improve the measurement of sensitivity in the laboratory?
Andrew A Renshaw1, Fadi Brimo2, Manon Auger2
1 Comprehensive Pathology Associates, Miami, FL, USA
2 Department of Pathology, McGill University Health Center, McGill University, Montreal, QC, Canada
Date of Submission: 17-Jun-2009
Date of Acceptance: 18-Aug-2009
Date of Web Publication: 09-Oct-2009
Correspondence: Andrew A Renshaw, Comprehensive Pathology Associates, Miami, FL, USA
Source of Support: None, Conflict of Interest: None
Background: Measuring the sensitivity of screening in gynecologic cytology in real life is problematic. However, other quality measures may correlate with sensitivity, including the atypical squamous cells (ASC)/squamous intraepithelial lesion (SIL) ratio. Whether these other measures can function as "surrogate indicators" for sensitivity and improve the assessment of sensitivity in the laboratory is not known.
Materials and Methods: We compared multiple quality measures with true screening sensitivity in a variety of situations.
Results: The abnormal rate, ASC rate, and ASC/SIL ratio were all highly correlated (r = .83 or greater) with sensitivity when the overall laboratory sensitivity was low (85%) but became less correlated (.64 or less) or uncorrelated when the screening sensitivity was higher (88% or 95%, respectively). Sensitivity was more highly correlated with the abnormal rate than the ASC/SIL ratio at low screening sensitivity. While thresholds could be set that were highly sensitive and specific for suboptimal screening, these thresholds were often less than one standard deviation away from the mean.
Conclusion: The correlation of the abnormal rate and the ASC/SIL ratio with sensitivity depends on overall sensitivity. Standards to define minimum screening sensitivity can be defined, but these standards are relatively narrow. These features may limit the utility of these quality measures as surrogates for sensitivity.
Keywords: Diagnostic accuracy, gynecological cytology, improvement, performance, quality control, rapid pre-screening, routine screening, sensitivity
How to cite this article:
Renshaw AA, Brimo F, Auger M. Surrogate indicators of sensitivity in gynecologic cytology: Can they be used to improve the measurement of sensitivity in the laboratory? CytoJournal 2009;6:19. Available from: http://www.cytojournal.com/text.asp?2009/6/1/19/56359
Introduction
Screening sensitivity is critical in gynecologic cytology but difficult to measure in the real-life laboratory setting. Although most laboratories in the USA measure the false-negative proportion, this method has been shown to be inaccurate and to routinely overestimate the true sensitivity of screening. For example, while most laboratories report sensitivities well above 95% using rescreening, large blinded cross-over studies have consistently shown that the sensitivity of screening of routine Pap smears is around 80%. The major limitation of this method is the insensitivity of the rescreen, which has been shown to be only 30% at a threshold of atypical squamous cells (ASC) and 0% at higher thresholds. Because there are no controls when negative slides are rescreened, there is no way to account for this error.
An alternative approach is to use prescreening instead of rescreening. This method allows the sensitivity of the prescreen to be measured and accounted for, by using the abnormal cases that are prescreened as controls. Measurements of sensitivity have been obtained using this method that are much closer to those shown in large blinded cross-over studies, and are presumably more accurate as a result. [4-13] Nevertheless, this method is not routinely used in the USA, though it is practiced in both Canada and the United Kingdom.
As a result, most laboratory directors in the USA have only limited information about the sensitivity of screening in their laboratory. Any method that could improve the evaluation of the cytotechnologists (CTs) in their laboratory may be of value. Recently, it has been shown that other quality measures, such as the atypical squamous cells/squamous intraepithelial lesion (ASC/SIL) ratio may correlate with sensitivity.  Whether the ASC/SIL ratio correlates with sensitivity in other laboratory settings, or whether other quality measures may also correlate with sensitivity and serve as a surrogate indicator for sensitivity is not known. To further assess this, we correlated multiple quality measures with multiple screening sensitivities within a laboratory.
Materials and Methods
Rapid prescreening (RPS) for a period of 16 consecutive months was performed as previously described and used to determine the screening sensitivity of individual CTs and of the laboratory as a whole. In brief, from November 2006 to February 2008, RPS was routinely performed by up to 15 different CTs (depending on the time period) on all routine conventional Pap smears (n = 51,792) received at the Cytopathology Laboratory of the McGill University Health Center. Because the usual practice in our laboratory is that Pap smears from high-risk cases, such as those from the colposcopy and oncology clinics, never undergo RPS and are instead always reviewed by a pathologist even if screened as "negative," all such cases were excluded from the current study; in other words, all cases included in the current study represent a routine screening population. The cases included in the study underwent RPS in a manner similar to that which we have previously reported, [8,9] with the following modifications. The majority of screeners spend between 15 and 30 min to rapidly prescreen one set of approximately 20 slides each day, allowing 45-90 s per slide. One half of the screeners use the turret method, while the others use either the whole or the step method, depending on their preference. The great majority of screeners did not perform RPS first thing in the morning; the period of the day devoted to RPS varied from one screener to another. The current study therefore evaluates real-life RPS performance done without restriction.
All RPS diagnoses were recorded as abnormal/review (R) or negative (N) on a standardized worksheet, without making any marks on the slide or paperwork. The threshold for (R) was ASC. After the cases were rapidly prescreened, they were fully screened without knowledge of the RPS diagnosis, making sure that the full screener was not the same as the RPS screener. Once a diagnosis was made on full screening (FS), the final and RPS diagnoses were compared. In cases where both reviews were labeled N, the results were finalized by the CT. Cases labeled "R" by both screeners or "N" by RPS but "R" by FS were referred to the pathologist for final diagnosis. Cases labeled "R" by RPS but "N" by FS were referred back to the rapid prescreener to review the slide and dot suspected abnormal cells; these were also referred to a pathologist. The final diagnosis of the pathologists was used as the "gold standard" for calculating sensitivity and specificity of RPS and FS. Four pathologists diagnosed all the cases during the study period; all four had subspecialty training in cytopathology.
Sensitivity of routine screening was calculated for individual CTs and for the laboratory overall in each of the two 8-month study periods, and individual performances in these two periods were compared. Of note, when calculating the sensitivity of routine screening, we used the more appropriate "corrected" rather than the uncorrected value, as described by Renshaw and used in our previous studies. [7,14,15] In brief, the sensitivity of routine screening was considered to be overestimated by the sensitivity of RPS and was recalculated by obtaining a correction factor (CF). The CF was calculated by dividing 100 (%) by the sensitivity of RPS. The full-screening (FS) false-negative rate (FNR) was multiplied by this CF, and the result was considered to be the real FNR of routine screening. This value was then used to obtain the corrected sensitivity: sensitivity = 1 - FNR. The same laboratory correction factor was used to calculate the true sensitivity of each CT.
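The correction described above can be sketched as follows (the numbers are hypothetical, chosen only to illustrate the arithmetic; they are not the study's data):

```python
# Sketch of the corrected-sensitivity calculation described above.
# Hypothetical inputs for illustration only, not the study's data.

def corrected_sensitivity(fs_fnr: float, rps_sensitivity: float) -> float:
    """Correct the observed full-screening false-negative rate (FNR)
    for the imperfect sensitivity of rapid prescreening (RPS).

    fs_fnr          -- observed FNR of full screening (0-1)
    rps_sensitivity -- measured sensitivity of RPS (0-1)
    """
    cf = 1.0 / rps_sensitivity   # correction factor = 100% / RPS sensitivity
    true_fnr = fs_fnr * cf       # corrected ("real") FNR of routine screening
    return 1.0 - true_fnr        # corrected sensitivity = 1 - FNR

# Example: an observed FNR of 6% with an RPS sensitivity of 60%
# gives a corrected FNR of 10%, i.e. a corrected sensitivity of 90%.
print(round(corrected_sensitivity(0.06, 0.60), 4))  # prints 0.9
```

The same correction factor would be applied to each CT's observed FNR, as described above.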
The ASC rate and ASC/SIL ratio were calculated from the laboratory data. The total abnormal rate included all diagnoses other than "Negative for Intraepithelial Lesion or Malignancy" (NILM) and Unsatisfactory. The sensitivities we report are at a threshold of ASC. There were insufficient data to measure the sensitivity at thresholds of Low-grade Squamous Intraepithelial Lesion (LSIL) or High-grade Squamous Intraepithelial lesion (HSIL).
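The quality measures defined above can be computed from a tally of final diagnoses. A minimal sketch follows; the counts are hypothetical, and the assumption that rates are expressed as a fraction of satisfactory cases is ours, not stated in the text:

```python
# Sketch of the quality measures defined above, computed from a tally of
# final diagnoses (hypothetical counts; denominator choice is an assumption).

from collections import Counter

diagnoses = Counter({
    "NILM": 9400,            # Negative for Intraepithelial Lesion or Malignancy
    "Unsatisfactory": 100,
    "ASC": 300,              # atypical squamous cells
    "LSIL": 150,             # low-grade squamous intraepithelial lesion
    "HSIL": 50,              # high-grade squamous intraepithelial lesion
})

satisfactory = sum(n for dx, n in diagnoses.items() if dx != "Unsatisfactory")
# Total abnormal = everything other than NILM and Unsatisfactory.
abnormal = sum(n for dx, n in diagnoses.items() if dx not in ("NILM", "Unsatisfactory"))
sil = diagnoses["LSIL"] + diagnoses["HSIL"]

abnormal_rate = abnormal / satisfactory        # total abnormal rate
asc_rate = diagnoses["ASC"] / satisfactory     # ASC rate
asc_sil_ratio = diagnoses["ASC"] / sil         # ASC/SIL ratio

print(abnormal_rate, asc_rate, asc_sil_ratio)
```

With these hypothetical counts, the ASC/SIL ratio is 300/200 = 1.5, within the range of ratios reported for individual CTs in the Discussion.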
Correlations were performed using a Pearson correlation between the variables of interest.
Results
A total of 51,792 cases were rapidly prescreened. The number of cases reviewed by individual technologists ranged from 281 to 6798. The sensitivity of the laboratory varied from 85% to 88% for the two time periods. Individual CTs had sensitivities from 64% to 100%.
The correlation and variation of a variety of quality measures are summarized in [Table 1],[Table 2],[Table 3]. Each table represents the laboratory with a different overall sensitivity (85%, 88%, and 95%). [Table 1] is from the first 8 months, and includes 11 individual CTs. [Table 2] is from the second time period, and includes 15 individual CTs. [Table 3] represents a subset of CTs from the second period (11 CTs) selected to achieve a laboratory sensitivity of 95%.
The abnormal rate, ASC rate, and ASC/SIL ratio were all highly correlated (r = .83 or greater) with sensitivity when the overall laboratory sensitivity was low (85%), but became less correlated (.64 or less) or uncorrelated when the screening sensitivity was higher (88% or 95%, respectively). Sensitivity was more highly correlated with the abnormal rate than with the ASC rate in laboratories with lower screening sensitivity. The inclusion of atypical squamous cells, cannot exclude HSIL (ASC-H) with ASC made only a marginal difference.
Thresholds could be defined that were useful in identifying individual CTs whose screening sensitivity was suboptimal [Table 4] and [Table 5], though in most cases the thresholds were less than one standard deviation below the mean. For example, in [Table 4], if one wished to ensure that the screening sensitivity of individual CTs was at least 85%, then one would need to be sure that the total abnormal rate for each CT was at least 2.3%. If one wished to ensure that the screening sensitivity of individual CTs was at least 90%, one would need to ensure that the total abnormal rate was at least 3.4%. However, under this circumstance the threshold is only 78% specific, meaning that some individual CTs had abnormal rates below this value and yet still achieved an overall sensitivity of 90%. When the laboratory had a sensitivity of 95%, there were no CTs whose screening sensitivity was less than 90%. The specificities of the thresholds were the same as those seen when the laboratory had a screening sensitivity of 88%.
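The threshold logic described above can be sketched as follows. A CT is flagged when the abnormal rate falls below the threshold, and the flag's sensitivity and specificity are judged against each CT's true screening sensitivity (the per-CT data here are hypothetical, not taken from the tables):

```python
# Sketch of using an abnormal-rate threshold as a flag for suboptimal
# screening sensitivity (hypothetical per-CT data, not the study's).

# (abnormal rate in %, true screening sensitivity) per cytotechnologist
cts = [(1.9, 0.82), (2.2, 0.84), (2.6, 0.88),
       (3.1, 0.83), (3.6, 0.93), (4.2, 0.96)]

THRESHOLD = 2.3   # flag CTs whose abnormal rate is below 2.3%
MIN_SENS = 0.85   # target minimum screening sensitivity

flagged    = [rate < THRESHOLD for rate, _ in cts]
suboptimal = [sens < MIN_SENS for _, sens in cts]

tp = sum(f and s for f, s in zip(flagged, suboptimal))          # caught
fn = sum(not f and s for f, s in zip(flagged, suboptimal))      # missed
tn = sum(not f and not s for f, s in zip(flagged, suboptimal))  # correctly passed
fp = sum(f and not s for f, s in zip(flagged, suboptimal))      # falsely flagged

flag_sensitivity = tp / (tp + fn)  # fraction of suboptimal CTs caught
flag_specificity = tn / (tn + fp)  # fraction of adequate CTs not flagged
print(flag_sensitivity, flag_specificity)
```

In this made-up example one suboptimal CT (abnormal rate 3.1%, sensitivity 0.83) escapes the flag, illustrating why a threshold can be specific yet imperfectly sensitive, as observed in the study.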
Discussion
The current study is one of the few studies to compare multiple quality measures from gynecologic cytology with actual screening sensitivity to see if they may serve as "surrogates" for true sensitivity. The data we present show that many of these other quality measures may correlate with the true sensitivity of the laboratory, but there are substantial limitations to their use.
By definition, these surrogates are inferior to directly measuring performance because they depend not only on screening sensitivity but on other factors as well.  Individual CTs with the same screening sensitivity may have very different thresholds for ASC or LSIL, and this would result in different ASC rates and ASC/SIL ratios and would reduce the correlation with sensitivity.
Interestingly, the correlation of these other quality measures with sensitivity also depends on the overall sensitivity of the laboratory. When the laboratory is performing poorly, these surrogates perform well: many errors are being made, and their effect on the surrogates is large. In contrast, when the laboratory is performing well, few errors are being made, and their effect on the surrogates is correspondingly small.
The value of these other quality measures to serve as surrogates for sensitivity may also depend on the disease prevalence. In our study the disease prevalence was relatively stable, so we were unable to directly measure the impact of disease prevalence in this study. Further studies may be of value in defining the effect of this variable.
Nevertheless, despite these limitations, minimum standards could be defined that identify individual CTs who are performing suboptimally. These minimum standards can be identified in laboratories performing at a range of overall sensitivities, and remain both sensitive and specific. Unfortunately, the minimum standards we have identified in this study are often less than one standard deviation away from the mean. This suggests that, in order to be most effective, the variation in screening "style" between CTs would need to be substantially reduced. For example, in this laboratory, the ASC/SIL ratio varied from 0.9 to 4.5, a five-fold variation. The minimum standards we defined would be much more useful if the variation in the ASC/SIL ratio in the laboratory were less than this. Strategies that improve the precision of these surrogates, such as location-guided screening, may be of value in this setting. While the specific data we present in this report are specific to this laboratory, they suggest that most laboratories' minimum standards would be less than one standard deviation below the mean of the value they are examining.
Interestingly, at all levels of sensitivity studied here, the variation in the abnormal rate or the ASC rate was less than the variation in the ASC/SIL ratio. This is surprising. The ASC/SIL ratio was originally developed because it was thought to vary less than the ASC or total abnormal rate between laboratories. [18,19] Survey data from a variety of laboratories have also noted this trend.  Further evaluation may be warranted.
We conclude that the correlation of other quality measures in the gynecologic cytology laboratory with screening sensitivity depends on the overall sensitivity of the laboratory. Standards to define minimum screening sensitivity can be set, but they are relatively narrow, which may limit their utility as surrogates for true screening sensitivity.
Competing Interest Statement by all Authors
No competing interest to declare by any of the authors.
Authorship Statement by all Authors
All authors of this article declare that we qualify for authorship as defined by ICMJE http://www.icmje.org/#author.
Each author has participated sufficiently in the work and takes public responsibility for appropriate portions of the content of this article.
Each author acknowledges that this final version was read and approved.
Ethics Statement by all Authors
This study was conducted with approval from the Institutional Review Board (IRB) (or its equivalent) of all the institutions associated with this study.
Authors take responsibility to maintain relevant documentation in this respect.
References
1. Krieger P, Naryshkin S. Random rescreening of cytologic smears: A practical and effective component of quality assurance programs in both large and small cytology laboratories. Acta Cytol 1994;38:291-8.
2. Renshaw AA, Lezon KM, Wilbur DC. The human false negative rate of rescreening in a two arm prospective clinical trial. Cancer Cytopathol 2001;93:106-10.
3. Renshaw AA. Measuring sensitivity in gynecologic cytology: A review. Cancer Cytopathol 2002;96:210-7.
4. Renshaw AA, Cronin JA, Minter LJ, Whitman T, Jiroutek M, Cibas ES. Performance characteristics of rapid (30 second) prescreening: Implications for calculating the false-negative rate and comparison with other quality assurance techniques. Am J Clin Pathol 1999;111:517-22.
5. Brooke D, Dudding N, Sutton J. Rapid (partial) prescreening of cervical smears: The quality control method of choice? Cytopathology 2002;13:191-9.
6. Smith J, Nicholas D, Bod K, Deacon-Smith R. Rapid pre-screening: A validated quality assurance measure in cervical cytology. Cytopathology 2003;14:275-80.
7. Deschenes M, Renshaw AA, Auger M. Measuring the significance of workload on the performance of cytotechnologists in gynecologic cytology: A study using rapid prescreening. Cancer Cytopathol 2008;114:149-54.
8. Djemli A, Khetani K, Auger M. Rapid prescreening of Papanicolaou smears: A practical and efficient quality control strategy. Cancer 2006;108:21-6.
9. Djemli A, Khetani K, Case B, Auger M. Correlation of cytotechnologists' parameters with their performance in rapid prescreening of Papanicolaou smears. Cancer Cytopathol 2006;108:306-10.
10. Tavares SB, Sousa NL, Manrique C, Albuquerque ZB, Zeferino LC, Amaral RG. Comparison of the performance of rapid prescreening, 10% random review and clinical risk criteria as methods of internal quality control in cervical cytopathology. Cancer Cytopathol 2008;114:165-70.
11. Tavares SB, de Sousa NL, Manrique EJ, de Albuquerque ZB, Zeferino LC, Amaral RG. Rapid pre-screening of cervical smears as a method of internal quality control in a cervical screening program. Cytopathology 2008;19:254-9.
12. Brimo F, Renshaw AA, Deschenes M, Charbonneau M, Auger M. Improvement in routine screening performance of cytotechnologists over time: A study using rapid pre-screening. Cancer Cytopathol 2009, in press.
13. Renshaw AA. Strategies for improving gynecologic cytology screening. Cancer Cytopathol 2009, in press.
14. Renshaw AA, DiNisco SA, Minter LJ, Cibas ES. A more accurate measure of the false negative rate of Pap smear screening is obtained by determining the false negative rate of the rescreening process. Cancer Cytopathol 1997;81:272-6.
15. Renshaw AA. Analysis of error in calculating the false negative rate for interpretation of cervicovaginal smears: The need to review abnormal cases. Cancer Cytopathol 1997;81:264-71.
16. Pitman MB, Cibas ES, Powers CN, Renshaw AA, Frable WJ. Reducing or eliminating use of the category of atypical squamous cells of undetermined significance decreases the diagnostic accuracy of the Papanicolaou smear. Cancer Cytopathol 2002;96:128-34.
17. Renshaw AA, Auger M, Birdsong G, Cibas ES, Henry M, Hughes JH, et al. ASC/SIL ratio for cytotechnologists: A survey of its utility in clinical practice. Cancer Cytopathol 2009, in press.
18. Davey DD, Naryshkin S, Nielsen ML, Kline TS. Atypical squamous cells of undetermined significance: Interlaboratory comparison and quality assurance monitors. Diagn Cytopathol 1994;11:390-6.
19. Davey DD, Nielsen ML, Naryshkin S, Robb JA, Cohen T, Kline TS. Atypical squamous cells of undetermined significance: Current laboratory practices of participants in the College of American Pathologists interlaboratory comparison program in cervicovaginal cytology. Arch Pathol Lab Med 1996;120:440-4.