Dialogue with an AI Expert: Assessing the risks and benefits of AI technologies

This document presents part 2 of a dialogue between the NAIRO AI Committee and an AI expert, Madhu Reddiboina, Founder & CEO of RediMinds, Inc. Artificial intelligence (AI) technologies are rapidly transforming industries, offering unprecedented opportunities alongside complex risks. This expert discussion examines the challenges involved in implementing AI, including persistent problems such as model hallucinations, lack of real-world experience, and concerns about bias and over-reliance in high-stakes fields such as healthcare. The discussion highlights best practices for integrating AI with human judgment, balancing accuracy, speed, and cost, and emphasizes the importance of robust governance and continuous oversight.

Q: Why do AI models still hallucinate or generate incorrect information?
A: Their fundamental design is to predict the most likely next word, not to verify truth. Without grounding in a trusted source such as a policy database, clinical guidelines, or claim files, they can confidently make things up. We can control hallucinations with retrieval (always cite sources), tool use (calculators, policy checkers), confidence thresholds, and explicit instructions not to guess.
 
Q: How does the lack of real-world experience affect AI's decision-making capabilities?
A: AI models have no lived experience, so they miss context that humans take for granted, such as tradeoffs, edge-case ethics, and operational constraints. In utilization review or IDR, that shows up as brittle decisions when documentation is messy. We address it with domain ontologies, scenario training, and by always keeping a human in the loop.
 
Q: How can AI systems unintentionally reinforce bias or discrimination?
A: AI learns from history that has been documented and digitized; if that history is biased, outputs can be too. Proxies such as zip code, benefit type, or provider specialty can encode disparities. We can mitigate some of this through subgroup audits, fairness metrics, bias bounties, and governance that allows humans to override decisions and report harms.
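A subgroup audit of the kind mentioned above can be sketched very simply: compute an outcome rate per group and flag groups that trail the best-off group by more than a tolerance. The data, group labels, and tolerance below are purely illustrative.

```python
# Hypothetical subgroup audit: compare approval rates across groups and
# flag any group whose rate trails the best-off group by more than a tolerance.
from collections import defaultdict

def subgroup_approval_rates(decisions):
    """decisions: list of (group, approved: bool) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(rates, tolerance=0.1):
    """Return groups whose approval rate lags the highest rate by > tolerance."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best - r > tolerance]

# Toy decision log: group "B" is approved far less often than group "A".
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = subgroup_approval_rates(decisions)
flagged = flag_disparities(rates)  # groups needing human review of the model
```

Production audits would use richer fairness metrics and statistical tests, but the principle is the same: disaggregate outcomes by subgroup and route disparities to humans for investigation.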
 
Q: What are the risks of over-reliance on AI in critical sectors like healthcare?
A: Over-reliance on AI can lead to automation bias, silent failures, and data drift. In critical workflows, that can delay care or misapply policies. One way to mitigate this is to establish tiered risk controls, clear fallbacks to humans, full audit trails, and continuous monitoring.
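Tiered risk controls with a human fallback and an audit trail can be sketched as a simple router. The tier names, thresholds, and case identifiers below are assumptions for illustration, not clinical policy.

```python
# Hypothetical tiered risk routing: low-risk cases are automated, mid-risk
# cases get AI drafts with human approval, high-risk cases go straight to a
# human. Every routing decision is appended to an audit trail.
audit_log = []  # in practice: durable, append-only storage

def route_case(case_id: str, risk_score: float) -> str:
    """Route a case by risk tier; thresholds here are illustrative only."""
    if risk_score < 0.3:
        decision = "auto-process"
    elif risk_score < 0.7:
        decision = "ai-draft-human-approve"
    else:
        decision = "human-review"
    audit_log.append((case_id, risk_score, decision))  # full audit trail
    return decision

route_case("case-001", 0.10)  # low risk: automated
route_case("case-002", 0.50)  # mid risk: AI drafts, human approves
route_case("case-003", 0.85)  # high risk: human decides
```

Continuous monitoring would then watch the audit log for drift, e.g. a rising share of cases escalating to human review.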
 
Q: What are best practices for designing AI systems that work well with humans?
A: The approach depends on the process being designed. I start with a process flow or decision map that clearly defines who decides what, and when. Next, identify which parts can be handled effectively by AI given the available data, and which require uniquely human judgment. Then, design an end-to-end workflow that integrates both capabilities. Treat it as an iterative cycle — build, execute, learn, and rebuild — continuously refining how humans and AI complement each other. 
 
Q: What are the trade-offs between accuracy, speed, and cost in deployment?
A: There’s an additional dimension to consider — scale. Speed, accuracy, and scale each influence deployment cost differently. The right balance depends on the primary business outcome you’re optimizing for. If accuracy is paramount, technical and operational choices will differ from when speed or scalability is the goal. Each dimension carries trade-offs in infrastructure, human oversight, and model complexity. Above all, never fully automate high-risk or high-impact decisions.
 
Q: What role will AI play in solving global challenges like healthcare? 
A: AI’s role is to reduce friction, accelerate evidence-based decisions, and expand equitable access to quality care. In the IRO and IDR domains, it enables faster, fairer, and more consistent reviews — freeing clinicians to focus on clinical judgment and human connection. The end result: improved efficiency, transparency, and trust across the healthcare ecosystem. 
 

In summary, responsible use of AI in healthcare requires balancing innovation with strong oversight. With clear governance, transparency, and ongoing human input, organizations can leverage AI to improve clinical decisions, enhance efficiency, and promote fairness in clinical reviews while mitigating risks. As AI continues to evolve, its success will depend not only on technical advancements, but also on a steadfast commitment to ethical collaboration—ensuring that AI augments, rather than replaces, clinical judgment and supports equitable outcomes for patients and providers alike.  

About RediMinds and Madhu Reddiboina

RediMinds, Inc. is an AI innovation company focused on responsible automation in healthcare. Its platforms enhance accuracy, efficiency, and compliance in Independent Review and Dispute Resolution processes through human-in-the-loop AI.

Madhu Reddiboina, Founder and CEO, is an AI and healthcare technology leader known for advancing ethical, transparent AI adoption. A frequent NAIRO and URAC speaker, he has led RediMinds in developing cutting-edge systems that combine intelligence, compassion, and accountability to improve decision quality and healthcare outcomes.
