The race-neutral model resulted in worse calibration, NPV, and false-negative rates among racial and ethnic minority subgroups compared to non-Hispanic White participants.
A team led by Sara Khor, MASc, of the Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, examined whether including race and ethnicity as a predictor in a colorectal cancer recurrence risk algorithm is associated with racial bias, defined as racial and ethnic differences in model accuracy that could potentially lead to unequal treatment.
Race and ethnicity can be useful predictors in clinical risk prediction algorithms. However, few empirical studies have addressed whether simply omitting race and ethnicity from these algorithms could ultimately affect decision-making for patients of minoritized racial and ethnic groups.
In the retrospective prognostic study, the investigators used data from a large integrated health care system in Southern California for patients with colorectal cancer who received primary treatment between 2008 and 2013, with follow-up through December 31, 2018.
The investigators fitted four Cox proportional hazards regression prediction models to predict time from surveillance start to cancer recurrence: a race-neutral model that explicitly excluded race and ethnicity as a predictor, a race-sensitive model that included race and ethnicity, a model with 2-way interactions between clinical predictors and race and ethnicity, and separate models fit by race and ethnicity.
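The four specifications differ only in how race and ethnicity enter the feature set. A minimal sketch of what each model's predictors would look like (the feature names `stage` and `age` are hypothetical stand-ins; the study's actual clinical predictors are not listed in this article):

```python
# Hypothetical feature builders for the four Cox model specifications.
# race_onehot is a list of indicator variables for the non-reference
# racial/ethnic groups (e.g., [Asian/PI, Black, Hispanic]).

def race_neutral(stage, age, race_onehot):
    # Model 1: race and ethnicity explicitly excluded.
    return [stage, age]

def race_sensitive(stage, age, race_onehot):
    # Model 2: race and ethnicity added as additive indicator terms.
    return [stage, age] + race_onehot

def race_interactions(stage, age, race_onehot):
    # Model 3: main effects plus 2-way clinical-by-race interactions.
    inter = [stage * r for r in race_onehot] + [age * r for r in race_onehot]
    return [stage, age] + race_onehot + inter

# Model 4 (stratified): fit a separate model per racial/ethnic group,
# each using only the clinical predictors, i.e. roughly
# {group: cox_fit(race_neutral_features) for group in groups}.
```

In practice each feature matrix would be passed to a Cox proportional hazards fitter (e.g., `CoxPHFitter` in the lifelines library) with the recurrence time and event indicator.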
They assessed algorithmic fairness using model calibration, discriminative ability, false-positive and false-negative rates, positive predictive value (PPV), and negative predictive value (NPV).
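Two of these fairness measures, the false-negative rate and NPV, are straightforward to compute per subgroup once the model's risk predictions are dichotomized into high-risk (flagged) vs low-risk. A minimal sketch, assuming binary labels and a hypothetical helper name:

```python
def subgroup_fnr_npv(y_true, y_pred, groups):
    """Per-group false-negative rate and negative predictive value.

    y_true: 1 if the patient actually recurred, 0 otherwise
    y_pred: 1 if the model flagged the patient as high risk
    groups: racial/ethnic group label for each patient
    """
    out = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        tn = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 0)
        pos = sum(1 for i in idx if y_true[i] == 1)
        neg_pred = fn + tn  # patients the model called low risk
        out[g] = {
            "FNR": fn / pos if pos else float("nan"),
            "NPV": tn / neg_pred if neg_pred else float("nan"),
        }
    return out
```

Comparing these quantities across subgroups (as the study does with, e.g., Hispanic vs non-Hispanic White patients) is what reveals whether a model's errors fall disproportionately on one group. This sketch ignores censoring; the study's time-to-event setting would require survival-aware versions of these metrics.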
A total of 4230 patients with a mean age of 65.3 years were included in the study. The racial and ethnic makeup was 11.6% (n = 490) Asian, Hawaiian, or Pacific Islander; 13.1% (n = 554) Black or African American; 22.1% (n = 937) Hispanic; and 53.1% (n = 2249) non-Hispanic White.
The race-neutral model resulted in worse calibration, NPV, and false-negative rates among racial and ethnic minority subgroups compared to non-Hispanic White participants (eg, false-negative rate for Hispanic patients: 12.0%; 95% confidence interval [CI], 6.0%-18.6%; for non-Hispanic White patients: 3.1%; 95% CI, 0.8%-6.2%).
By adding race and ethnicity as a predictor, the investigators improved algorithmic fairness in calibration slope, discriminative ability, PPV, and false-negative rates (eg, false-negative rate for Hispanic patients: 9.2%; 95% CI, 3.9%-14.9%; for non-Hispanic White patients: 7.9%; 95% CI, 4.3%-11.9%).
The inclusion of racial interaction terms or using race-stratified models did not ultimately improve model fairness.
“In this prognostic study of the racial bias in a cancer recurrence risk algorithm, removing race and ethnicity as a predictor worsened algorithmic fairness in multiple measures, which could lead to inappropriate care recommendations for patients who belong to minoritized racial and ethnic groups,” the authors wrote. “Clinical algorithm development should include evaluation of fairness criteria to understand the potential consequences of removing race and ethnicity for health inequities.”
Khor S, Haupt EC, Hahn EE, Lyons LJL, Shankaran V, Bansal A. Racial and Ethnic Bias in Risk Prediction Models for Colorectal Cancer Recurrence When Race and Ethnicity Are Omitted as Predictors. JAMA Netw Open. 2023;6(6):e2318495. doi:10.1001/jamanetworkopen.2023.18495