AI is increasingly being adopted in UR to streamline review processes, improve efficiency, and potentially reduce costs. AI tools and agents can automate tasks, analyze patterns, perform deep research, and provide decision support, but they may also introduce risks such as over-reliance and algorithmic bias. NAIRO has observed a dramatic increase in state bills addressing the use of AI in UR. States have taken wide-ranging approaches to incorporating AI into their respective UR regulations, but all agree that some form of human oversight is needed. For example, the following bills are currently in process:
- Illinois House Bill (HB) 35 (2025) proposes to create the Artificial Intelligence Systems Use in Health Insurance Act, which would give the Department of Insurance regulatory oversight of health insurance coverage, including oversight of the use of AI systems or predictive models in making or supporting decisions that could lead to adverse consumer outcomes.
- Maryland HB 820 (2025) requires that certain carriers, pharmacy benefits managers, and private review agents ensure that AI, algorithms, or other software tools are used in accordance with specified requirements when conducting UR.
- New York Assembly Bill 8556 (2025-2026) prescribes requirements and safeguards for the use of AI, algorithms, or other software tools for UR in health and accident insurance.
- Rhode Island Senate Bill (SB) 13 (2025-2026), Use of Artificial Intelligence by Health Insurers, promotes transparency and accountability in the use of AI by health insurers to manage coverage and claims.
- Tennessee SB 1261 (2025-2026), Insurance Agents and Policies, imposes requirements on health insurance issuers using AI, algorithms, or other software for UR or utilization management functions.
NAIRO believes there are important benefits to incorporating AI in UR, including increased efficiency and speed. For example, AI can automate routine tasks such as data extraction, prior authorization requests, and initial case reviews, freeing human reviewers for more complex cases. AI can also enhance accuracy and consistency: foundation models such as Google’s Gemini 2.5, OpenAI’s GPT-4.5, and others now have the long-context and reasoning capabilities to analyze large datasets and identify patterns that humans could easily miss, supporting more accurate and consistent decisions. Finally, AI can improve decision support by providing evidence-based recommendations and highlighting relevant clinical information, helping human reviewers make informed decisions.
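To illustrate the human-oversight principle the bills above share, the sketch below is a minimal, hypothetical Python example (not drawn from any bill, vendor tool, or NAIRO standard) of how an AI suggestion might be treated strictly as decision support in a UR workflow, with every final determination recorded under a named human reviewer. All class and function names here are assumptions made for illustration only.

```python
from dataclasses import dataclass
from enum import Enum


class Recommendation(Enum):
    APPROVE = "approve"
    DENY = "deny"
    NEEDS_MORE_INFO = "needs_more_info"


@dataclass
class AISuggestion:
    """Hypothetical output of an AI decision-support tool for a UR case."""
    case_id: str
    recommendation: Recommendation
    confidence: float            # model-reported confidence, 0.0-1.0
    cited_evidence: list[str]    # guideline excerpts / clinical facts surfaced for the reviewer


@dataclass
class Determination:
    """Final UR determination; always attributed to a human reviewer."""
    case_id: str
    decision: Recommendation
    decided_by: str


def finalize_case(suggestion: AISuggestion, human_decision: Recommendation,
                  human_reviewer: str) -> Determination:
    """Record the final determination for a case.

    The AI suggestion can surface evidence and a draft recommendation, but the
    decision written to the record is the one the human reviewer passes in
    explicitly; there is no code path that finalizes a denial from the AI
    output alone.
    """
    if suggestion.recommendation is Recommendation.DENY:
        # Adverse AI suggestions get extra scrutiny: flag for full clinical review.
        print(f"Case {suggestion.case_id}: adverse AI suggestion flagged for clinical review.")
    return Determination(
        case_id=suggestion.case_id,
        decision=human_decision,
        decided_by=human_reviewer,
    )


if __name__ == "__main__":
    demo = AISuggestion(
        case_id="UR-0001",
        recommendation=Recommendation.DENY,
        confidence=0.62,
        cited_evidence=["Guideline excerpt A", "Prior imaging note"],
    )
    # The human reviewer weighs the cited evidence and may overturn the suggestion.
    print(finalize_case(demo, human_decision=Recommendation.APPROVE, human_reviewer="Dr. Example"))
```

The design choice in this sketch mirrors what the legislation above converges on: the AI output informs the review, but the signature on the determination always belongs to a person.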