As AI tools become more embedded in everyday work processes, new data reveals that their rapid and often unregulated adoption is creating significant cybersecurity risks. According to the Cybernews Business Digital Index, nearly 90% of widely used AI tools have been exposed to data breaches, with 51% experiencing corporate credential theft.

Use of AI chatbots and productivity tools has grown quickly, with around 75% of workers now relying on them for tasks such as note-taking, email drafting and communication. However, only 14% of employers have implemented official AI policies, leaving most usage unmonitored and unsanctioned by IT departments.

This lack of oversight has led to unsafe practices, such as the use of personal accounts for work-related AI queries. Research from Harmonic shows that 45.4% of sensitive data prompts are submitted using personal logins, bypassing enterprise-level monitoring and increasing exposure to data leaks and credential theft.

Surveys by Google and Elon University show that multi-tool use is common. Gen Z workers aged 22–27 are the most active group, with 93% using two or more AI tools at work, followed by 79% of millennials. Despite this widespread usage, one-third of AI users reportedly hide their activity from management, further complicating risk management.

Enterprise Security Gaps in AI Tool Usage

Cybernews analysed 52 of the most frequently used AI web tools in February 2025, based on traffic data from Semrush. The findings point to weak security standards across popular platforms. While the average cybersecurity score across tools was 85 out of 100, 41% of platforms received a D or F grade, exposing serious inconsistencies.

Among these, 84% of the tools studied had suffered at least one known data breach. Of greater concern, 36% had been breached within the past 30 days. Researchers linked these incidents to infrastructure issues, unpatched systems and weak access controls.

Vincentas Baubonis, Head of Security Research at Cybernews, said, “High average scores don’t mean tools are entirely safe – one weak link in your workflow can become the attacker’s entry point. Once inside, a threat actor can move laterally through systems, exfiltrate sensitive company data, access customer information or even deploy ransomware.”

The analysis also revealed widespread issues with SSL/TLS encryption configurations. A total of 93% of tools had encryption weaknesses, increasing the likelihood of intercepted or altered communications between users and platforms. Infrastructure vulnerabilities were found in 91% of tools, often due to outdated server setups or poor cloud configurations.
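Security teams can run a basic version of this kind of check themselves. The short Python sketch below connects to a service over HTTPS and reports the negotiated TLS version, cipher suite and certificate expiry; it is a minimal illustration only, and the hostname is a placeholder rather than any tool from the study.

```python
import socket
import ssl

# Placeholder host for illustration; substitute the AI tool's actual domain.
HOST = "example-ai-tool.com"
PORT = 443

# The default context enforces certificate validation and hostname checking.
context = ssl.create_default_context()
# Refuse anything older than TLS 1.2, mirroring common enterprise baselines.
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.version())   # e.g. 'TLSv1.3'
        print("Cipher suite:", tls.cipher()[0])
        cert = tls.getpeercert()
        print("Certificate expires:", cert["notAfter"])
```

A handshake that only succeeds at TLS 1.0 or 1.1, or that fails certificate validation entirely, is the sort of configuration weakness the report describes.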

Credential Theft and Password Reuse Remain Persistent Threats

A major factor behind these security incidents is poor password hygiene. Forty-four percent of companies developing AI tools showed evidence of employee password reuse, increasing the risk of credential-stuffing attacks. Once login details are compromised, attackers can access systems without triggering standard alerts.
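One common defence against password reuse is screening credentials against known breach corpora before they are accepted. The sketch below is a minimal example of that idea, not any vendor's implementation: it queries the public Have I Been Pwned range endpoint, which only ever receives the first five characters of the password's SHA-1 hash, and the sample password and User-Agent string are illustrative.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Have I Been Pwned
    corpus, using the k-anonymity range endpoint (only the first five
    hex characters of the SHA-1 hash leave the machine)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    req = urllib.request.Request(url, headers={"User-Agent": "password-screen-sketch"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; match against our suffix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = breach_count("Password123")  # deliberately weak example
    print(f"Seen in {hits} breaches" if hits else "Not found in known breaches")
```

Rejecting any password with a non-zero count blunts credential-stuffing attacks, which rely on reused logins already circulating in breach dumps.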

In total, 51% of tools had experienced incidents involving stolen corporate credentials. These breaches often act as entry points to broader data compromise, with stolen passwords used to bypass security layers and extract sensitive information.

Emanuelis Norbutas, Chief Technical Officer at nexos.ai, said, “Unregulated use of multiple AI tools in the workplace, especially through personal accounts, creates serious blind spots in corporate security. Each tool becomes a potential exit point for sensitive data, outside the scope of IT governance.”

He added: “Many AI tools simply aren’t built with enterprise-grade security in mind. Employees often assume these tools are safe by default, yet many have already been compromised, with corporate credentials among the first targets.”

Productivity Tools Identified as the Weakest Category

AI tools used for productivity – including note-taking, scheduling and content generation – were found to be the most vulnerable. The Business Digital Index reports that every tool in this category had flaws in infrastructure security and SSL/TLS configurations.

These tools also showed the highest average number of stolen corporate credentials per company, at 1,332. Ninety-two percent had experienced at least one data breach, highlighting the risks involved in using unregulated third-party platforms for daily tasks.

Baubonis said, “A tool might appear secure on the surface, but a single overlooked vulnerability can jeopardise everything. Hugging Face is a perfect example of that risk – it only takes one blind spot to undermine months of security planning and expose the organisation to threats it never anticipated.”