A Brown University School of Public Health research team found that differences in diagnosis coding practices have artificially inflated mortality rate comparisons between certain hospitals and their peers, according to a study published in the Journal of the American Medical Association.
Critical access hospitals (CAHs) provide care to Americans living in remote rural areas. As important health care access points, these hospitals serve a population that is disproportionately older, impoverished and burdened by chronic disease.
Prior research studies comparing the quality of care provided by CAHs and non-CAHs have found that risk-adjusted mortality rates at CAHs were higher, and the hospitals’ quality of care, therefore, lower. But a new study led by investigators at the Center for Gerontology and Healthcare Research in Brown’s School of Public Health suggests that standard risk-adjustment methodologies have been unfairly penalizing CAHs.
According to the study, for Medicare beneficiaries in rural areas who were hospitalized during the period of 2007 to 2017, CAHs submitted significantly fewer hospital diagnosis codes than did non-CAHs. The primary reason for the relative under-reporting of diagnoses at CAHs has to do with differences in Medicare reimbursements — while non-CAHs are incentivized by Medicare to complete diagnosis coding, CAHs, which receive cost-based reimbursements, are not.
Because mortality rates are adjusted for severity of illness, the result is that CAHs appear to have higher mortality rates for patients with seemingly similar conditions. In reality, their patients may be just as sick as, or sicker than, those at non-CAHs, but they look healthier from the standpoint of risk adjustment because fewer of their diagnoses are recorded.
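The mechanism can be sketched with a toy calculation. The numbers below are hypothetical (not from the study), and the standardized mortality ratio shown is a simplified stand-in for the actual risk-adjustment models used in hospital comparisons: when fewer diagnoses are coded, the model predicts lower risk for the same patients, so the same number of observed deaths yields a worse-looking ratio.

```python
# Toy illustration (hypothetical numbers) of how under-coding diagnoses
# can inflate a risk-adjusted mortality comparison.

def expected_deaths(predicted_risks):
    """Expected deaths = sum of per-patient predicted mortality risks,
    where each prediction is derived from the diagnoses the hospital coded."""
    return sum(predicted_risks)

# Ten identical patients at each hospital; same true severity, same outcomes.
# A fully coding hospital's risk model sees all diagnoses: predicted risk 0.10.
full_coding_risks = [0.10] * 10
# An under-coding hospital records fewer diagnoses, so the model assigns
# a lower predicted risk (say 0.05) to the very same kind of patient.
under_coding_risks = [0.05] * 10

observed = 1  # one death at each hospital, i.e., identical quality of care

# Standardized mortality ratio: observed deaths / expected deaths.
smr_full = observed / expected_deaths(full_coding_risks)    # 1 / 1.0 = 1.0
smr_under = observed / expected_deaths(under_coding_risks)  # 1 / 0.5 = 2.0

# The under-coding hospital looks twice as deadly despite identical patients
# and identical outcomes.
print(smr_full, smr_under)  # → 1.0 2.0
```

Under this simplified view, the under-coding hospital's ratio doubles purely because its patients appear less sick on paper, mirroring the penalty the study attributes to CAHs' cost-based reimbursement removing the incentive to code completely.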