WALTHAM, Mass. — Unauthorized artificial intelligence tools, often referred to as “shadow AI,” are being widely used across U.S. hospitals and health systems, including in some direct patient care settings, according to a new survey released by Wolters Kluwer Health.
The survey of healthcare professionals and administrators found that 40 percent of respondents have encountered unsanctioned AI tools in their organizations, while nearly 20 percent reported personally using them. One in 10 respondents said an unauthorized AI tool had been used for a direct patient care application, raising concerns around patient safety, data privacy, and regulatory compliance.
The findings suggest that clinicians and administrators are increasingly turning to AI to improve speed and workflow efficiency, sometimes in the absence of approved or enterprise-ready options.
“Doctors and administrators are choosing AI tools for speed and workflow optimization, and when approved options aren’t available, they may be taking risks,” said Yaw Fellin, senior vice president and general manager of Clinical Decision Support and Provider Solutions at Wolters Kluwer Health. “Shadow AI isn’t just a technical issue; it’s a governance issue that may raise patient safety concerns.”
According to the survey, half of respondents who encountered unauthorized AI tools cited faster workflows as the primary reason for their use. Among healthcare providers, curiosity and experimentation slightly outweighed functionality as motivating factors. The data also showed uneven involvement in AI governance, with administrators three times more likely than providers to be actively involved in AI policy development, at 30 percent versus 9 percent. Awareness of existing policies, however, was higher among providers than administrators.
Despite governance gaps, the survey found broad adoption and optimism around AI in healthcare. More than half of healthcare professionals reported frequent use of AI tools, and nearly 90 percent agreed or strongly agreed that AI will significantly improve healthcare within the next five years. Data analysis was cited as the most common use case, reported by 60 percent of providers and 78 percent of administrators.
Patient safety emerged as the top AI-related concern for both providers and administrators, cited by roughly one-quarter of each group. Administrators ranked privacy and data security risks as their second-greatest concern, while providers pointed to inaccurate outputs; nearly a quarter of all respondents also expressed worries about health data privacy and security.
Scott Simeone, senior vice president and chief information officer at Tufts Medicine, said the survey reflects a broader challenge facing health systems as AI adoption accelerates.
“GenAI is showing high potential for creating value in healthcare, but scaling it depends less on the technology and more on the maturity of organizational governance,” Simeone said. “As clinical use grows, health systems need enterprise-grade controls, transparency, and literacy so clinicians and patients understand when AI is supporting decisions, how it works, and where human judgment remains essential.”
Wolters Kluwer said the findings highlight the need for clearer AI governance frameworks, stronger compliance policies, and broader education to ensure AI tools used in clinical environments are validated, secure, and appropriately monitored. (Source: IANS)