New Analysis Reveals >80% of AI Platforms Have Experienced a Data Breach

The rapid and unregulated adoption of AI tools in the workplace is exposing significant security weak points that threaten corporate data, systems, and operations. While these tools boost productivity, most are not built with enterprise-grade security, and their widespread use, often without IT oversight, creates serious vulnerabilities.

A new analysis of 52 popular AI tools revealed that 84% had experienced data breaches, 93% had SSL/TLS configuration flaws, and 91% had hosting infrastructure vulnerabilities, making them susceptible to interception, manipulation, and lateral attacks. Moreover, 51% of tools had experienced corporate credential theft and 44% of companies building AI tools showed signs of employee password reuse, a common path to credential-stuffing attacks.

The most commonly used category of workplace AI tools, those for note-taking, scheduling, and content creation, fared worst: 100% showed SSL/TLS and hosting vulnerabilities, and 92% had suffered breaches.

These findings underscore the urgent need for robust application security, enterprise-grade oversight, and strict monitoring of AI platforms. As AI tools integrate into daily workflows, every tool must be treated as a potential attack vector. Businesses must implement comprehensive network-to-application monitoring and validate their connections to third-party AI platforms to mitigate escalating risks and protect sensitive data. Read more about this story on our LinkedIn page.
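
As a rough illustration of the connection-validation step described above, the following Python sketch checks the TLS posture of third-party AI endpoints before traffic is allowed to them: it enforces certificate validation, rejects legacy protocol versions, and reports how long each certificate remains valid. The hostnames are hypothetical placeholders, not the tools covered by the analysis, and the checks are a minimal starting point rather than a complete monitoring solution.

```python
# Minimal sketch: validate TLS connections to third-party AI endpoints.
# Hostnames below are hypothetical placeholders.
import socket
import ssl
from datetime import datetime, timezone

AI_ENDPOINTS = ["api.example-ai-notes.com", "api.example-ai-scheduler.io"]

MIN_TLS = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1 handshakes


def check_endpoint(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Connect with full certificate validation and report protocol + cert expiry."""
    context = ssl.create_default_context()  # validates the cert chain and hostname
    context.minimum_version = MIN_TLS       # enforce a modern protocol floor
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            expires = datetime.fromtimestamp(
                ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
            )
            days_left = (expires - datetime.now(timezone.utc)).days
            return {"host": host, "protocol": tls.version(), "cert_days_left": days_left}


if __name__ == "__main__":
    for host in AI_ENDPOINTS:
        try:
            print(check_endpoint(host))
        except (ssl.SSLError, OSError) as exc:
            # Failed handshakes or invalid certificates are flagged rather than ignored.
            print(f"{host}: TLS check failed ({exc})")
```

A check like this can run on a schedule against every approved AI platform, so that weak protocol support or an expiring certificate is surfaced before employees' data transits the connection.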
