Publication Abstract

Authors: Morris DE, Pepe MS, Barlow WE

Title: Contrasting two frameworks for ROC analysis of ordinal ratings.

Journal: Med Decis Making 30(4):484-98

Date: 2010 Jul-Aug

Abstract:

BACKGROUND: Statistical evaluation of medical imaging tests used for diagnostic and prognostic purposes often employs receiver operating characteristic (ROC) curves. Two methods for ROC analysis are popular. The ordinal regression method is the standard approach for evaluating tests with ordinal values. The direct ROC modeling method is a more recently developed approach, motivated by applications to tests with continuous values.

OBJECTIVE: The authors compare the methods in terms of model formulations, interpretations of estimated parameters, the ranges of scientific questions that can be addressed with them, their computational algorithms, and the efficiencies with which they use data.

RESULTS: The authors show that a strong relationship exists between the methods by demonstrating that they fit the same models when only a single test is evaluated: the ordinal regression models are typically alternative parameterizations of the direct ROC models, and vice versa. The direct method has two major advantages over the ordinal regression method: 1) estimated parameters relate directly to ROC curves, facilitating interpretation of covariate effects on ROC performance, and 2) comparisons between tests can be made directly in this framework. Such comparisons can accommodate covariate effects and can even be made between tests with values on different scales, such as between a continuous biomarker test and an ordinal-valued imaging test. In the authors' simulation studies, the ordinal regression method provided slightly more precise parameter estimates.

CONCLUSION: Although the ordinal regression method is slightly more efficient, the direct ROC modeling method has important advantages with regard to interpretation, and it offers a framework for addressing a broader range of scientific questions, including the ability to compare tests.
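To make the setting concrete, the sketch below (not the authors' code; the data are hypothetical) shows how an empirical ROC curve is obtained from an ordinal-valued test: each rating threshold c defines the decision rule "call positive if rating >= c", which yields one (false-positive fraction, true-positive fraction) operating point.

```python
def empirical_roc(ratings_diseased, ratings_nondiseased, levels):
    """Return (FPF, TPF) operating points for the rule 'positive if rating >= c'.

    FPF = fraction of nondiseased subjects called positive (false-positive fraction)
    TPF = fraction of diseased subjects called positive (true-positive fraction)
    """
    n_d = len(ratings_diseased)
    n_nd = len(ratings_nondiseased)
    points = []
    for c in levels:
        tpf = sum(r >= c for r in ratings_diseased) / n_d
        fpf = sum(r >= c for r in ratings_nondiseased) / n_nd
        points.append((fpf, tpf))
    return points


# Hypothetical 5-point ratings (1 = definitely negative ... 5 = definitely positive).
diseased = [3, 4, 5, 5, 4, 2, 5, 3]
nondiseased = [1, 2, 1, 3, 2, 1, 2, 4]

roc = empirical_roc(diseased, nondiseased, levels=[1, 2, 3, 4, 5])
# Lowest threshold calls everyone positive, giving the (1, 1) corner;
# higher thresholds trade sensitivity (TPF) for specificity (1 - FPF).
```

Both frameworks contrasted in the paper can be viewed as smooth, covariate-adjusted models for the curve traced by such operating points; the direct method parameterizes the ROC curve itself, while ordinal regression parameterizes the rating distributions.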