By Nicole Witowski
We’re at the beginning of a new technological era, and artificial intelligence (AI) is one of the current mega-trends dominating the headlines. Companies from all industries are looking for ways to capitalize on this technology. Health insurers are no different. Large payors are testing how to use AI to spot fraud, speed up prior authorization, and guide care management.
But alongside its potential, AI introduces a set of challenges that payors must balance to ensure its ethical use. A recent case in point is UnitedHealthcare, which is facing a potential class-action lawsuit over allegations that it used an AI tool to wrongfully deny claims for Medicare Advantage (MA) members. Cigna was also hit with lawsuits over its use of an algorithm to reject claims.
These controversies raise the question: is AI a game-changer or a gamble for payors? To answer this, let’s look at AI’s growing role in the payor landscape.
Striking back against healthcare fraud
Earlier this year, a Michigan doctor was sentenced to prison for defrauding health insurers out of $250 million by billing for unnecessary spinal injections as part of an opioid distribution scheme. Over a five-year period, the physician billed Medicare for these lucrative injections more than any other provider in the nation, prescribing opioids to patients who received the medically unnecessary shots.
This case is just one of the many examples of healthcare fraud that has made headlines. Healthcare fraud drains tens of billions of dollars each year, siphoning off valuable resources meant to care for patients. And it’s not just about the money; fraud can also lead to unnecessary procedures, inappropriate medications, and even over-treatment, putting patients at risk.
Fraudulent schemes can be hard to spot. They might involve double billing, billing for services that were never provided (phantom billing), or claiming to provide more expensive services than actually rendered (upcoding). These deceptive practices often mimic legitimate claims and hide among the billions of claims, bills, and cost reports filed by healthcare providers every year.
To combat fraud, payors are using AI to catch fraudulent claims faster, streamlining a time-consuming and resource-intensive process. Blue Cross Blue Shield of Massachusetts has developed an algorithm that scans claims and flags any suspicious activity before payment. Similarly, Highmark has reported savings of $245 million, partly attributed to AI software that finds errors and unusual patterns indicative of fraud.
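The schemes described above, such as double billing and outlier billing volume, lend themselves to automated pre-payment screening. The sketch below is a minimal, purely illustrative version of that kind of flagging logic; the field names, thresholds, and rules are assumptions for this example, not how Blue Cross Blue Shield of Massachusetts or Highmark actually implement their systems.

```python
from collections import Counter
from statistics import mean, stdev

def flag_suspicious_claims(claims, z_threshold=3.0):
    """Flag claims for human review before payment.

    `claims` is a list of dicts with hypothetical fields:
    provider_id, patient_id, procedure_code, date, amount.
    """
    flags = []

    # Double billing: the same (patient, procedure, date) submitted twice.
    seen = Counter(
        (c["patient_id"], c["procedure_code"], c["date"]) for c in claims
    )

    # Outlier volume: a provider billing one procedure far more than peers.
    volume = Counter((c["provider_id"], c["procedure_code"]) for c in claims)
    peers_by_procedure = {}
    for (provider, procedure), count in volume.items():
        peers_by_procedure.setdefault(procedure, []).append(count)

    for c in claims:
        key = (c["patient_id"], c["procedure_code"], c["date"])
        if seen[key] > 1:
            flags.append((c, "possible double billing"))
            continue
        peers = peers_by_procedure[c["procedure_code"]]
        if len(peers) > 2 and stdev(peers) > 0:
            z = (volume[(c["provider_id"], c["procedure_code"])] - mean(peers)) / stdev(peers)
            if z > z_threshold:
                flags.append((c, "outlier billing volume"))
    return flags
```

Note that the function only flags claims for review rather than rejecting them outright, mirroring the pre-payment screening role these tools play in practice.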
Speeding up prior authorizations
Prior authorization (PA) processing is another area that stands to benefit from AI. Of the many tools that payors use for cost and quality control, prior authorization is one of the more frustrating in the eyes of physicians.
The current prior authorization process takes 10 days on average, according to CMS. This manual slog can lead to delays in patient care, leaving doctors exasperated and patients in harm’s way. In fact, one-third of physicians surveyed by the American Medical Association reported that PA has led to a serious adverse event for a patient in their care:
- 25% of respondents reported PA led to a patient’s hospitalization.
- 19% of respondents reported PA led to a life-threatening event or one that required intervention to prevent permanent impairment or damage.
- 9% of respondents reported PA led to a patient’s disability or permanent bodily damage, congenital anomaly or birth defect, or death.
AI could be a game-changer in improving the prior authorization process. Take Health Care Service Corporation (HCSC) as an example. This payor is integrating both artificial and augmented intelligence to process PA requests about 1,400 times faster than before. Its proprietary AI tool can approve care for member treatments almost instantly by referencing historical authorizations. Blue Shield of California’s collaboration with Google Cloud further underscores AI’s potential to ease the administrative burden on healthcare providers.
While AI is streamlining the PA process, it’s not a magic wand. Rather, AI is a tool that needs human oversight to ensure decisions align with clinical judgment. HCSC’s AI-powered technology doesn’t deny prior authorization requests; it either approves or flags them for human review, showing that even AI needs a second opinion.
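The "approve or escalate, never auto-deny" behavior described above can be captured in a very small decision rule. The following is a hypothetical sketch of that triage step; the code pairs and field names are invented for illustration, and HCSC's actual proprietary system is certainly far more sophisticated.

```python
# (procedure_code, diagnosis_code) pairs a clinician has previously approved.
# Codes here are illustrative placeholders, not a real authorization history.
APPROVED_HISTORY = {
    ("27447", "M17.11"),  # e.g., knee replacement for osteoarthritis
}

def triage_prior_auth(request):
    """Auto-approve only requests matching historical approvals.

    Everything else goes to a clinician; the tool never auto-denies.
    """
    key = (request["procedure_code"], request["diagnosis_code"])
    if key in APPROVED_HISTORY:
        return "approved"
    return "human_review"
```

The design choice worth noting is the asymmetry: a historical match can only ever produce an approval, so the worst-case outcome of a bad match is a faster "yes," while every uncertain case still gets a clinician's judgment.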
Heading off health problems before they get worse
Spotting health problems early is key to better patient outcomes. That’s where AI steps in once again. AI can sift through mountains of data to identify patients who are at high risk of developing certain diseases or experiencing adverse health events. This information can then be used to target these patients with preventive care and interventions.
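At its simplest, the risk stratification described above amounts to scoring each member and directing outreach to the highest-risk group. The sketch below uses made-up risk factors and hand-picked weights purely for illustration; production models are trained on claims, lab, and demographic data rather than hard-coded.

```python
def risk_score(member):
    """Toy risk score from hypothetical features; weights are illustrative."""
    return (
        2.0 * member["chronic_conditions"]
        + 1.5 * member["er_visits_last_year"]
        + 1.0 * member["missed_screenings"]
    )

def outreach_list(members, top_n=2):
    """Return the top_n highest-risk members for preventive outreach."""
    return sorted(members, key=risk_score, reverse=True)[:top_n]
```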
Molina Healthcare’s AI-powered obstetrical care model is a shining example. The model, used with over 150,000 members over 18 months, led to an 8% decrease in preterm births compared to national trends, an 8% drop in NICU admissions, and a 9% reduction in total days in the NICU. Over the same 18-month period, the model also led to a 60% decrease in racial disparities in preterm births for Black mothers.
As AI gets better at predicting who needs what kind of care, health insurers can make smarter decisions about resource allocation, focusing more on prevention and early intervention and less on costly treatments for avoidable health issues.
New restrictions on algorithmic tools in Medicare Advantage
While AI is emerging as a powerful tool for payors, health insurers should proceed with caution. New federal rules for MA plans beginning in 2024 will rein in their use of algorithms in coverage decisions. Insurance companies using such tools will be expected to “ensure that they are making medical necessity determinations based on the circumstances of the specific individual,” the requirements say, “as opposed to using an algorithm or software that doesn’t account for an individual’s circumstances.”
Under MA plans, insurers receive a fixed monthly payment from the federal government for each enrollee, regardless of how much care that person needs. This fixed-price structure creates a potential incentive for health plans to deny care to boost profits. The new rule also requires that before coverage is denied as not medically necessary, the denial “must be reviewed by a physician or other appropriate health care professional with expertise in the field of medicine or health care that is appropriate for the service at issue.”
But the rule doesn’t specify penalties for violations, raising concerns about its effectiveness. As more payors adopt AI, lawmakers are urging CMS to set additional guardrails for how plans use the technology, including requiring MA plans to explain why a patient was denied a service, assessing how often denials occur, and determining the role AI plays in the denial process.
Health insurers must strike a balance between AI’s promise and pitfalls
AI holds immense promise for payors, but its ethical implementation remains a critical hurdle. Health insurers should tread carefully as they explore this technological frontier. As AI continues to evolve, we expect to see even more innovative use cases for healthcare payors. By striking a delicate balance between AI’s potential and its risks, health insurers can ensure this technology serves the best interests of patients and the healthcare industry as a whole.
For more of the latest trends or to see how healthcare commercial intelligence leverages AI to answer your toughest questions, sign up for a free trial with Definitive Healthcare.