Shadow AI is becoming a growing issue for hospitals and health systems

Jan 24, 2026, by Nelson Advisors


Shadow AI in health systems refers to clinicians and staff using AI tools (such as public chatbots or unapproved apps) without their organisation’s formal approval or oversight, and the practice is now widespread in hospitals and health systems.

What “shadow AI” means in healthcare

Shadow AI is the unsanctioned use of AI tools, models or features without visibility from IT, security or governance teams.

In healthcare, that often means staff pasting patient details into public generative AI tools, using unvetted documentation assistants, or standing up their own models in cloud environments outside official processes.

Evidence it’s a growing issue at health systems

A Wolters Kluwer survey of more than 500 hospital and health system workers found 40% had encountered unauthorised AI tools at work, and about 17–20% admitted using them themselves.

Around one in ten respondents said they had used an unauthorised AI tool for direct patient care use cases, underscoring that this is not just back‑office experimentation.

Healthcare workers report turning to shadow AI primarily to speed workflows, cope with documentation burden, and access functionality they feel approved tools lack or do not yet offer.

Why it’s particularly risky in health systems

Data privacy and cybersecurity: Uploading PHI or other sensitive data into public or unvetted AI tools can bypass organisational controls, create “silent” cross‑border data leaks, and increase breach costs.

Patient safety and clinical quality: Unvalidated models can give inaccurate or biased outputs; if clinicians rely on them for documentation or decision support, errors may not be visible to governance teams until harm occurs.

Regulatory and legal exposure: Unapproved AI use can trigger violations of HIPAA and other privacy regimes and also complicate accountability when an AI output influences care without traceable oversight.
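The data-privacy risk above is often addressed with an outbound gate: before text leaves for an external AI service, scan it for obvious PHI patterns and block or flag it. A minimal sketch of that idea, assuming illustrative regex patterns and a hypothetical `gate_outbound_prompt` helper (real data-loss-prevention tools use far richer detection):

```python
import re

# Illustrative PHI patterns only -- real DLP tooling also detects names,
# dates of birth, addresses, and free-text identifiers with NLP models.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def contains_phi(text: str) -> list[str]:
    """Return the names of PHI patterns found in `text`."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

def gate_outbound_prompt(text: str) -> str:
    """Refuse to forward a prompt that matches any PHI pattern."""
    hits = contains_phi(text)
    if hits:
        raise PermissionError(f"Prompt blocked: possible PHI ({', '.join(hits)})")
    return text
```

A gate like this sits in a proxy or browser extension, not in the clinician's workflow; it cannot catch everything, but it turns silent leaks into auditable events.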

What’s driving shadow AI in hospitals

Burnout and workload: Clinicians and admin staff use generative AI to draft emails, summarise notes, or translate content because it feels like the only way to keep up with documentation and communication demands.

Innovation gap: Staff perceive that official AI programmes are moving too slowly or that approved tools are less capable than consumer‑grade offerings, so they “experiment” on their own.

Low AI literacy and unclear policy: Many organisations have not yet provided clear, practical guidance on what is permitted, where data may go, and how to request new tools, which fuels informal workarounds.

How leading health systems are responding

Visibility and monitoring: Security leaders are starting with discovery tools to detect unauthorised AI usage and data flows, especially those involving patient information.

Governance structures: Some systems have created cross‑functional AI governance councils (clinical, IT, legal, compliance, operations) to evaluate tools, set guardrails, and approve use cases.

“Enable, don’t ban” strategies: Rather than blanket blocking, a number of organisations allow controlled experimentation, provide sanctioned AI options (e.g., ambient scribe, clinical decision support), and educate staff to keep usage inside safe channels.
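The discovery step above often starts from data the organisation already collects, such as web-proxy or DNS logs. A minimal sketch of flagging traffic to known generative-AI domains, where the domain watchlist, log format, and field names are all assumptions for illustration (real deployments use maintained category feeds from proxy or CASB vendors):

```python
from collections import Counter

# Illustrative watchlist -- not a complete or current list of AI services.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai"}

def flag_ai_usage(log_lines: list[str]) -> Counter:
    """Count requests per (user, domain) for domains on the AI watchlist.

    Assumes whitespace-separated proxy log lines: `timestamp user domain path`.
    """
    hits: Counter = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, domain = parts[1], parts[2]
        if domain in AI_DOMAINS:
            hits[(user, domain)] += 1
    return hits
```

The output feeds governance, not discipline: the counts show which teams are reaching for unsanctioned tools and therefore where sanctioned alternatives and training are most needed.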

To discuss how Nelson Advisors can help your HealthTech, MedTech, Health AI or Digital Health company, please email [email protected]