
The increasing integration of artificial intelligence (AI) tools in healthcare is bringing to light a new risk: shadow AI. This term refers to AI technologies adopted without the approval or knowledge of IT departments, potentially leading to significant data security vulnerabilities. As healthcare organizations expand their use of AI, the lack of oversight can create governance gaps, legal issues, and heightened risks of data breaches.
Vishal Kamat, vice president of data security at IBM, emphasized the dangers of shadow AI, stating, “What makes shadow AI particularly dangerous is its invisibility and autonomy.” These tools can learn and generate outputs without clear traceability, complicating the ability of security teams to safeguard sensitive data. Identifying these tools is only part of the challenge; understanding their interaction with sensitive workflows is equally critical.
According to a 2025 survey by symplr, an enterprise healthcare operations software company, a staggering **86%** of IT executives reported experiencing instances of shadow IT within their health systems, up from **81%** in the previous year. Kamat explained that these occurrences often arise from employees seeking to enhance efficiency. Common examples include the use of personal cloud storage, unauthorized messaging apps, and unvetted project management platforms, all of which may handle sensitive data outside the organization’s approved framework.
Shadow AI can exacerbate these issues. Kamat noted, “Employees may deploy open-source large language models within enterprise cloud environments or use AI code assistants without oversight.” Such actions not only bypass established security controls but also expose organizations to risks like data leakage and regulatory violations.
The **2025 Cost of a Data Breach** report by IBM highlighted the growing prevalence of shadow AI, finding that **20%** of organizations across all sectors experienced breaches stemming from security incidents involving shadow AI, a rate **7 percentage points** higher than for incidents involving sanctioned AI technologies. Organizations with high levels of shadow AI reported average breach costs that were **$200,000** above the global average.
Kamat pointed out that even well-meaning experimentation with unsanctioned tools can introduce serious security risks. In healthcare, where patient data breaches and violations of regulations like **HIPAA** can have severe consequences, the stakes are particularly high. The report revealed that customers’ personally identifiable information was the most compromised data type in shadow AI incidents, while **40%** of tracked incidents involved compromised intellectual property.
A critical issue in managing shadow AI is visibility. Kamat explained, “When security teams lack awareness of AI tools in use, they’re effectively blindfolded. They can’t assess risk, enforce policy, or ensure accountability.” This lack of oversight can lead to unvalidated algorithms influencing clinical decisions and compliance violations going undetected until it is too late.
To mitigate the risks associated with shadow AI, healthcare organizations must prioritize visibility and control. Kamat stressed the need for tools that continuously detect unauthorized applications, AI usage, and data flows involving sensitive patient information. Identifying unauthorized tools is just the first step; they must be rigorously assessed for risks and integrated into a formal review process.
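The continuous-detection approach Kamat describes can be illustrated with a minimal sketch: scanning network egress logs for traffic to AI service domains that are not on an organization's sanctioned list. The domains, log format, and allowlist below are illustrative assumptions, not tools or policies named in the article.

```python
import re
from collections import Counter

# Hypothetical allowlist of AI services the organization has formally approved.
# This domain is an illustrative assumption.
SANCTIONED_AI_DOMAINS = {"approved-ai.internal.example.com"}

# Example patterns for well-known public AI API endpoints that might appear
# in proxy or firewall egress logs; any real deployment would maintain a
# much larger, regularly updated list.
AI_DOMAIN_PATTERNS = [
    re.compile(r"api\.openai\.com"),
    re.compile(r"api\.anthropic\.com"),
    re.compile(r"generativelanguage\.googleapis\.com"),
]

def flag_unsanctioned_ai(log_lines):
    """Count occurrences of AI-service domains in egress log lines
    that are not on the sanctioned allowlist."""
    hits = Counter()
    for line in log_lines:
        for pattern in AI_DOMAIN_PATTERNS:
            match = pattern.search(line)
            if match and match.group(0) not in SANCTIONED_AI_DOMAINS:
                hits[match.group(0)] += 1
    return hits
```

A sketch like this only surfaces *that* unsanctioned AI traffic exists; as the article notes, each flagged tool still needs a formal risk assessment before it can be sanctioned or blocked.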
Communication is equally essential. Merely having approved AI tools or policies is insufficient if employees are not informed. Kamat underscored the importance of consistent communication to ensure that staff understand how to use sanctioned tools responsibly. More than **60%** of organizations included in IBM’s report lacked governance policies to manage AI or detect shadow AI, indicating a significant gap in security protocols.
Implementing robust AI access controls and conducting regular audits can help organizations identify unsanctioned AI use, thereby reducing the risk of data breaches and ensuring compliance with privacy regulations. Kamat concluded, “In healthcare, shadow AI isn’t just a technology risk; it’s a compliance and patient safety threat.” With stringent regulations like HIPAA and increasing public scrutiny, proactive governance is vital not only for meeting regulatory requirements but also for maintaining patient trust.
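An audit of an AI tool inventory against a simple access policy could look like the sketch below. The record fields and the policy rule (any tool touching protected health information must have passed formal review) are illustrative assumptions standing in for a real governance process, not a compliance implementation.

```python
from dataclasses import dataclass

# Hypothetical inventory entry for one AI tool in use; field names
# are illustrative assumptions.
@dataclass
class AIToolRecord:
    name: str
    handles_phi: bool   # does the tool touch protected health information?
    approved: bool      # has it passed the organization's formal review?

def audit_inventory(inventory):
    """Return the records that violate a minimal policy: any tool that
    handles PHI must have passed formal review. A sketch of the regular
    audit step, not a full HIPAA compliance check."""
    return [tool for tool in inventory if tool.handles_phi and not tool.approved]
```

In practice such an audit would draw its inventory from the detection tooling discussed earlier and feed violations into the organization's formal review process rather than simply listing them.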
As healthcare organizations navigate the complexities of AI adoption, understanding the implications of shadow AI will be crucial in safeguarding sensitive data and ensuring operational integrity.