
Artificial Intelligence as a Primary and Emerging Compliance Risk

  • Jan 23

My last posts focused on the importance of governance and strategy when implementing AI applications. In this article I’ll look at risks that emerge when using generative and agentic AI, then translate them into mitigation actions that oncology leaders can operationalize without losing sight of patient support and the Triple Aim.



The core tension: The unresolved question is not whether AI can be used safely, but whether organizations can operationalize controls that evolve at the same pace as AI authority.


Generative and agentic AI promise material gains in the efficiency, scale and timeliness of oncology patient support, while simultaneously shifting where risk originates and how it propagates through care delivery and reporting systems. As AI moves from producing content to sequencing work and triggering actions, small defects in documentation, data mapping, or measure logic can silently cascade into delayed interventions, distorted quality performance and regulatory exposure. Practice leaders are therefore forced to balance speed and autonomy against governance, auditability and accountability, rather than treating AI adoption as a purely technical or incremental upgrade.


A credible AI plan for 2026 has to answer a narrower question than most vendor narratives:


What are the risks in oncology posed by using generative and agentic AI applications, and how do practice leaders identify and mitigate those risks before autonomy expands?


Common uses and probable 2026 uses of generative and agentic AI in oncology


Most practices are already using “AI” in ways that are not labeled as such. In oncology the AI footprint is operations-facing and compliance-sensitive: documentation, coding support, quality abstraction, pathway adherence analytics, utilization management, symptom triage, navigation worklists, patient messaging and performance reporting.


AI applications that are common


  • Generative documentation and summarization including ambient documentation that drafts notes for clinician review.

  • Data-to-measure transformation where AI-assisted abstraction or logic layers drive certification and value-based reporting.

  • Patient communications such as templated outreach, education and portal messaging at scale.

  • Internal decision support that identifies guidelines, pathway prompts, or suspected care gaps inside EHR-adjacent workflows.


AI applications that might become operationally common in 2026


The most probable near-term evolution is not “autonomous clinical decision-making.” It is agentic orchestration of patient support work:


  • Continuous monitoring of symptom signals, missed appointments, utilization events and lab thresholds to generate prioritized worklists.

  • Routing tasks to the right role with explicit service levels (navigator vs triage nurse vs APP vs physician).

  • Post-discharge sequencing of outreach, medication reconciliation prompts, follow-up scheduling triggers and supportive care screening steps.
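To make the orchestration pattern concrete, here is a minimal sketch of signal-driven worklist prioritization and role routing. The signal types, severity tiers, role names and SLA values are illustrative assumptions for this sketch, not a clinical standard; real routing rules would come from a practice's own triage policy.

```python
from dataclasses import dataclass

# Hypothetical role tiers and service levels, keyed by signal severity.
ROLE_BY_SEVERITY = {3: "physician", 2: "triage_nurse", 1: "navigator"}
SLA_HOURS_BY_SEVERITY = {3: 2, 2: 8, 1: 24}

@dataclass
class Signal:
    patient_id: str
    kind: str        # e.g. "symptom_report", "missed_appointment", "lab_threshold"
    severity: int    # 1 (low) .. 3 (high), assigned by upstream analytics

def build_worklist(signals):
    """Route each signal to an owning role with an explicit SLA, highest severity first."""
    worklist = []
    for s in sorted(signals, key=lambda s: -s.severity):
        worklist.append({
            "patient_id": s.patient_id,
            "signal": s.kind,
            "route_to": ROLE_BY_SEVERITY[s.severity],
            "sla_hours": SLA_HOURS_BY_SEVERITY[s.severity],
        })
    return worklist

signals = [
    Signal("P1", "missed_appointment", 1),
    Signal("P2", "symptom_report", 3),
    Signal("P3", "lab_threshold", 2),
]
worklist = build_worklist(signals)
```

The point of making the SLA and owning role explicit per task is auditability: when the queue is later sampled, every routing decision can be checked against a stated policy rather than an opaque model output.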


This is where the Triple Aim tension shows up in operations. Better models do not matter if decision rights, queue ownership and escalation thresholds are undefined.


The risks and challenges emerging because of AI use in oncology


The most consequential risks are not “AI is sometimes wrong.” They are more specific:


AI changes where errors originate, how they replicate, and how long they persist before detection.


Several risk areas emerge from the use of AI applications in oncology.


Scenario 1: Clinical record integrity risk from generative documentation


Ambient AI documentation is often treated as a low-risk entry point. Evidence from recent studies suggests the efficiency benefits are real, while consent, workflow integration and error governance remain variable across implementations.


In oncology, documentation is not passive. It feeds:


  • Symptom triage prioritization

  • Navigation risk stratification

  • Medical necessity narratives and audit defense

  • Infusion supervision documentation

  • Downstream quality abstraction and measure logic


When omissions, semantic substitutions, or fabricated elements enter the record, the defect can propagate into patient support workflows before it is noticed.


Scenario 2: Quality measurement and analytics risk that persists across reporting cycles


AI-supported quality platforms can misclassify patients because of measure logic errors, extraction defects, or mapping drift. A critical operational feature is persistence: once a logic error is embedded, it can remain across cycles unless leaders challenge it with examples and reconciliation.


In one client's experience, measure logic and downstream data handling produced implausible performance results that were corrected only after escalation and vendor code updates.


Scenario 3: Automation bias and “false objectivity”


Automation bias is not a clinician-only issue. Administrators can over-trust AI outputs because they appear data-driven and consistent. In oncology this delays detection of:


  • Documentation errors that affect billing and audit defense

  • Quality reporting defects that affect certification and value-based performance

  • Pathway adherence analytics that may be directionally wrong but operationally persuasive


Scenario 4: Accountability ambiguity when AI becomes part of the medical record


Recent work on ambient documentation highlights that consent processes and responsibility perceptions vary and that trust often hinges on clinician framing and patient understanding of the tool.


Operationally, ambiguity becomes a risk when it is not resolved into policy: who attests, who corrects, what is audited and what constitutes an incident.


Scenario 5: Agentic AI amplification risk


Agentic systems do not merely “generate.” They sequence work. When upstream documentation or analytics are wrong, agentic routing can amplify impact because it changes who gets attention first.


This is the main reason governance has to be framed as decision-flow governance, not IT oversight.


Scenario 6: Emerging transparency expectations that affect contracting and audit posture


Even when AI tools are not regulated as medical devices, transparency expectations are rising around algorithmic interventions in certified health IT and around how organizations document and disclose decision support behavior. The ONC HTI-1 final rule is one of the clearer signals here.


This matters in oncology because quality reporting and decision support are often tightly coupled to certified workflows.


Common mitigation actions for the risks identified


Mitigation is not a single control. It is a control stack tied to the maturity of autonomy. The discipline that reconciles competing frameworks is this: match the control stack to the level of AI authority in workflow.
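One way to operationalize "match the control stack to the level of AI authority" is a simple gate that refuses to expand autonomy until the required controls exist. The autonomy levels and control names below are illustrative assumptions, not a published framework:

```python
# Illustrative mapping of AI authority level to required controls; both the
# level names and the control names are assumptions for this sketch.
REQUIRED_CONTROLS = {
    "draft_only":      {"clinician_attestation"},
    "suggests_action": {"clinician_attestation", "output_sampling", "override_logging"},
    "routes_work":     {"clinician_attestation", "output_sampling", "override_logging",
                        "queue_ownership", "escalation_thresholds", "audit_trail"},
}

def missing_controls(autonomy_level, implemented):
    """Return the controls still required before this level of AI authority is enabled."""
    return REQUIRED_CONTROLS[autonomy_level] - set(implemented)

# A practice asking an AI to route work while only two controls are in place:
gaps = missing_controls("routes_work", {"clinician_attestation", "output_sampling"})
```

The design choice worth copying is that the mapping is explicit and versioned: when autonomy expands, the new level's control requirements are a documented gate, not an implicit vendor assurance.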



This is the core operational point:


Readiness is the ability to detect and correct AI defects before they propagate into patient support queues.


How risks could impact patient care, regulatory obligations, and value-based and quality reporting


  1. Patient care and patient support throughput

    In oncology, avoidable utilization and clinical deterioration often trace back to operational timing failures: missed symptom escalation, delayed outreach post-discharge, fragmented supportive care routing. Agentic AI can reduce these failures only if upstream data and documentation are governed. If not, it can institutionalize them by automating the wrong prioritization.

  2. Regulatory and audit posture

    Two practical realities shape 2026 risk:

    1. Non-device AI does not mean low liability. 

      If AI-generated content becomes part of the record and influences care, governance expectations shift to the deploying organization.

    2. Transparency requirements are moving closer to operational workflows 

      via certified health IT requirements and related expectations, which influences what vendors can disclose and what you should demand contractually.

  3. Value-based care and quality reporting

    Quality programs and certification measures are increasingly dependent on algorithmic abstraction and logic. When the abstraction is wrong, the organization can carry three simultaneous harms:

    1. inaccurate performance narratives to leadership and boards

    2. payer exposure or missed incentives

    3. misallocated improvement work because staff are sent to “fix” false gaps


A summarized view for practice leaders


The prevailing simplification in the field is that the path to agentic AI runs through low-risk generative wins. In oncology that is misleading. Ambient documentation can reduce burden and improve clinician experience in the short term, while still introducing record-integrity and accountability risk if governance is weak.


A more accurate 2026 posture is:


  1. Treat generative outputs as clinical inputs, not clerical conveniences

    If an AI output is reused to trigger triage, routing, coding, or quality logic, govern it like an upstream clinical input.

  2. Define decision rights before you expand autonomy

    Agentic benefit is downstream of authority design: queue ownership, service-level agreements, escalation thresholds, override rules, auditability.

  3. Measure what value-based programs actually reward

    Do not over-index on time saved. Track time-to-intervention, escalation timeliness, post-discharge stability, avoidable ED utilization and supportive care completion rates.

  4. Assume error persistence unless you build detection loops

    If you are not sampling, reconciling and versioning AI logic, you are implicitly accepting that defects will persist across cycles.
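A detection loop can be as simple as routinely reconciling an AI-abstracted measure against a manually audited chart sample. This sketch assumes hypothetical patient IDs and a 5% discordance threshold chosen purely for illustration:

```python
def reconcile_sample(ai_results, manual_results, threshold=0.05):
    """Compare AI-abstracted measure flags against a manually audited sample.

    Flags the measure for investigation when the discordance rate exceeds
    the threshold, and returns the discordant cases for human review.
    """
    shared = set(ai_results) & set(manual_results)
    discordant = sorted(pid for pid in shared if ai_results[pid] != manual_results[pid])
    rate = len(discordant) / len(shared) if shared else 0.0
    return {"discordance_rate": rate,
            "flagged": rate > threshold,
            "cases_to_review": discordant}

# Hypothetical audit sample: the AI and a human abstractor disagree on one
# of ten sampled charts.
ai_flags = {f"P{i}": True for i in range(10)}
manual_flags = {**ai_flags, "P3": False}
result = reconcile_sample(ai_flags, manual_flags)
```

Run on a fixed cadence against each reporting cycle, a loop like this converts "assume error persistence" from a warning into a standing control, and the returned case list gives auditors concrete charts to examine.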


Closing implications


For oncology leaders, the core question is no longer whether AI can generate content. It is whether your organization can govern AI-mediated decision flow without degrading patient support reliability. The practices that will use agentic AI effectively from 2026 through 2030 will be the ones that treat documentation integrity, measure logic integrity and queue ownership as a single operating system.


If you are assessing AI vendors or planning an internal roadmap, this is the point where a formal readiness assessment usually changes outcomes.


How My Work Aligns with This Perspective


Svoboda Consulting's services help oncology organizations navigate these risks and create mitigation procedures. My work is grounded in the premise that AI readiness in oncology is primarily an operational and governance challenge, not a technology selection exercise. I focus on helping physician leaders and administrators translate abstract AI risk discussions into concrete decisions about accountability, workflow authority and control design, particularly in navigation, triage, documentation and quality reporting. This includes structured readiness assessments that surface where generative and agentic AI intersect with patient support workflows, value-based care measures and regulatory obligations, and where current controls are insufficient for the level of autonomy being introduced. The goal is not to slow adoption, but to ensure that AI-enabled scale improves timeliness, reliability and patient experience without embedding new sources of clinical, financial, or compliance risk.


Glossary of Terms


Generative AI: AI that creates new content, such as clinical notes, summaries, or reports.

Agentic AI: AI systems designed to independently plan, make decisions and take actions toward achieving defined objectives, with limited or no continuous human intervention.

Automation Bias: The tendency to over-trust automated outputs.

AI-Assisted Documentation: Clinical documentation supported or generated by AI tools, sometimes called “ambient AI” or “ambient documentation.”

Algorithmic Logic: Rules used by AI systems to interpret and act on data.

 

References


These references were used or researched for this article:


  1. ONC HTI-1 Final Rule; FDA Clinical Decision Support Guidance; HHS OCR HIPAA and Health IT guidance. These sources establish expectations for transparency, accountability, and data use in AI-enabled workflows. While many oncology AI tools are not regulated as devices, governance failures can still create compliance, audit, and reputational risk.

  2. NEJM (Verghese et al; Rajkomar et al); Nature Medicine (Topol); NPJ Digital Medicine. These sources demonstrate that AI errors differ from human errors and that automation bias is a predictable failure mode. Reinforces the need for human-in-the-loop controls and auditability when AI influences care prioritization or documentation.

  3. JAMA; Annals of Internal Medicine; Harvard Business Review. These sources explain why generative and ambient documentation tools are rapidly adopted. Highlights that efficiency gains do not eliminate downstream clinical, quality, or legal risk if record integrity is not governed.

  4. ASCO QOPI. These resources show how AI-assisted abstraction and analytics directly affect certification, payer programs and performance narratives. Measure logic or extraction errors can persist across cycles unless actively challenged.                       

  5. Kahneman, Thinking, Fast and Slow. This book provides a non-technical explanation for why leaders and clinicians may over-trust AI outputs, increasing exposure when defects are not surfaced early.

