Algorithms designed to guide medical care can contribute to racially inequitable outcomes, but eliminating information about patient race and ethnicity as an input to algorithms is not the right way to address the issue, according to a new analysis from the RAND Corporation.
Instead, using tools to estimate patients' race and ethnicity when self-identified information is not available could actually catalyze efforts both to improve health care algorithms and to empower clinicians to reduce bias in the way they practice.
The analysis is published in the August issue of the journal Health Affairs.
“Knowledge of race and ethnicity—not ignorance—is necessary to combat algorithmic bias,” said Marc Elliott, the paper’s senior author and a senior principal statistician at RAND, a nonprofit research organization. “Imputing race and ethnicity information can expand opportunities to not only identify algorithmic bias, but also combat bias in both clinical and nonclinical settings.”
Algorithms—well-defined mathematical procedures for deriving predictions or scores from information—are widely used to aid decision making in health care. Such tools affect decisions about who receives care, the type of care they receive, how the care is provided and at what cost.
Health care algorithms are developed to improve quality of care by minimizing variation in clinical decision making, promoting greater adherence to best practices, and maximizing the efficient use of scarce resources.
For example, multiple studies have found that scores derived from algorithms predict strep throat infections and identify cancerous skin lesions more accurately than clinical assessments.
As the use of algorithms in health care has grown, so has recognition that these tools can have the unintended consequence of codifying racial and ethnic biases, potentially perpetuating worse outcomes for patients.
For example, a widely used algorithm developed to help guide care for heart failure has been shown to deflate the risk scores of Black patients relative to similar patients of other races.
In another case, a commonly used algorithm for predicting the success of vaginal birth after a prior cesarean delivery predicts lower success for Black and Hispanic mothers relative to similar white mothers.
RAND researchers say that although biases harbored by algorithms can result in inequitable outcomes, in many cases the status quo alternative to algorithmic bias is biased human decision making. In fact, algorithmic bias often arises when an algorithm learns disparities that are already present in the data on which it is trained.
Although few call for the elimination of algorithms to aid health care decision making, one common recommendation to reduce the potential for algorithmic bias is to exclude race and ethnicity as an input—a practice commonly referred to as fairness through unawareness.
RAND researchers argue that a better strategy is to employ health disparity tools (including widely used ones developed by the RAND team) to measure algorithmic inequities by race and ethnicity, which may be possible even when patients' race and ethnicity have not been self-reported.
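One widely used RAND-developed imputation method is Bayesian Improved Surname Geocoding (BISG), which estimates the probability that a patient belongs to each racial and ethnic group by combining surname-based and neighborhood-based information with Bayes' rule. The Python sketch below illustrates only the core calculation; the surnames, tract names, and probability tables are hypothetical stand-ins for the Census-derived data the method actually draws on.

```python
import numpy as np

# Hypothetical inputs: real BISG uses Census surname tables and
# block-group demographics. Columns: White, Black, Hispanic, Asian.
GROUPS = ["White", "Black", "Hispanic", "Asian"]

# P(race | surname) -- stand-in values for Census surname-list data.
P_RACE_GIVEN_SURNAME = {
    "GARCIA":     np.array([0.05, 0.01, 0.92, 0.02]),
    "WASHINGTON": np.array([0.09, 0.87, 0.02, 0.02]),
}

# P(race | census tract) -- stand-in values for neighborhood composition.
P_RACE_GIVEN_TRACT = {
    "tract_A": np.array([0.70, 0.10, 0.15, 0.05]),
    "tract_B": np.array([0.20, 0.60, 0.15, 0.05]),
}

# Marginal P(race), divided out so the population base rate is not
# counted twice when the two evidence sources are combined.
P_RACE = np.array([0.60, 0.13, 0.19, 0.08])

def bisg_posterior(surname: str, tract: str) -> np.ndarray:
    """P(race | surname, tract), assuming surname and tract are
    conditionally independent given race (the core BISG assumption)."""
    unnormalized = (
        P_RACE_GIVEN_SURNAME[surname] * P_RACE_GIVEN_TRACT[tract] / P_RACE
    )
    return unnormalized / unnormalized.sum()

if __name__ == "__main__":
    probs = bisg_posterior("GARCIA", "tract_B")
    for group, p in zip(GROUPS, probs):
        print(f"P({group} | surname, tract) = {p:.3f}")
```

In practice, the full posterior probabilities, rather than a single best-guess label, are carried forward into downstream disparity estimates.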
Imputing missing or unreliable race and ethnicity data makes algorithmic bias easier to identify and clarifies what corrective measures are needed to reduce or remove it from treatment decisions.
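As a concrete illustration of how such an audit might proceed, the sketch below uses each patient's imputed group-membership probabilities as weights to estimate an algorithm's mean risk score by group, so no individual is ever hard-classified. The simulated data, group categories, and weighting scheme are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
GROUPS = ["White", "Black", "Hispanic", "Asian"]

# Simulated stand-ins: each patient has a BISG-style posterior over the
# four groups and a risk score assigned by the algorithm under audit.
n_patients = 1_000
posteriors = rng.dirichlet(alpha=[6.0, 1.3, 1.9, 0.8], size=n_patients)
risk_scores = rng.normal(loc=50.0, scale=10.0, size=n_patients)

# Probability-weighted group means: each patient contributes to every
# group in proportion to their posterior membership probability.
weighted_sums = (posteriors * risk_scores[:, None]).sum(axis=0)
group_means = weighted_sums / posteriors.sum(axis=0)

for group, mean in zip(GROUPS, group_means):
    print(f"Estimated mean risk score, {group}: {mean:.1f}")

# In a real audit, large gaps between clinically similar groups would
# trigger closer checks (e.g., within-group calibration of the score).
```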
Imputed race and ethnicity information could also be used to promote equity in pay-for-performance programs, which reward clinicians who provide higher-quality care.
“Race and ethnicity imputation in health care research traditionally has been applied to identify health disparities,” Elliott said. “The continued growth and use of algorithms in health care decisions spotlights the need to leverage race and ethnicity imputation to address health disparities in new ways.”