Survey surfaces growing awareness of AI security risks
How AI vulnerabilities are shaping new cybersecurity challenges
Takeaways
- AI vulnerabilities are increasingly shaping cybersecurity challenges, with 87% of surveyed leaders noting heightened security risks in 2025.
- Data leaks and adversarial advancements are the most pressing AI security concerns, according to the global survey.
- 64% of organizations have processes to assess the security of AI tools before deployment, but many tools lack adequate controls.
- Cyber-enabled fraud and phishing risks are rising, alongside concerns about supply chain, software vulnerabilities, and ransomware attacks.
- The expanding attack surface due to AI adoption makes cybersecurity increasingly difficult, with the lack of controls in many AI tools posing significant risks.
- Security governance frameworks for AI are urgently needed, but rapid AI agent adoption may outpace cybersecurity teams’ ability to respond, potentially leading to major breaches.
A global survey conducted by the World Economic Forum (WEF) suggests that vulnerabilities in artificial intelligence (AI) applications are creating far more cybersecurity risk than many organizations have yet come to terms with.
The survey of 873 C-suite executives, academics, civil-society representatives, and public-sector cybersecurity leaders finds that a full 87% acknowledge AI security risks increased in 2025, while 64% report their organization has a process in place to assess the security of AI tools before deploying them. The most pressing AI security concerns are data leaks (30%), followed closely by the advancement of adversarial capabilities (28%), the survey finds.
More than three-quarters (77%) also noted their organization has already implemented AI-enabled tools to fulfill its cybersecurity objectives, with phishing and email threat detection (52%) at the top of those initiatives, followed by detecting and responding to intrusions or anomalies (46%), automating security operations (43%), user-behavior analytics and insider threat detection (40%), and threat intelligence and risk prioritization (39%).
There are, naturally, obstacles to AI adoption that most organizations are already encountering, especially insufficient knowledge and/or skills (54%), validating AI output (41%), uncertainty about actual risks (39%), insufficient funds (36%), and unclear business cases (33%), the survey finds.
Nevertheless, nearly all respondents (94%) also identified AI/machine learning as the technology that will most significantly affect cybersecurity in the next 12 months, followed by cloud computing (61%) and quantum computing (37%).
Cyber-enabled fraud and other threats on the rise
Of course, AI is only the latest of many threat vectors that are creating increased levels of risk to the business. The survey, for example, finds that more than three-quarters of respondents (77%) are also aware that risks involving cyber-enabled fraud and phishing have increased in the last year. Additionally, there is more awareness of risks involving supply chains (65%), software vulnerabilities (58%) and ransomware attacks (54%).
Even though more than three-quarters (78%) said their workforce has the skills needed to achieve its current cybersecurity objectives, a full 61% identified rapidly evolving threat landscapes and emerging technologies as their biggest challenge in achieving and maintaining cyber resilience, followed by third-party and supply chain vulnerabilities (46%) and a shortage of cybersecurity skills and expertise (45%).
Collectively, the survey makes it apparent that, even in the age of AI, ensuring cybersecurity has never been more challenging. While cybersecurity teams are themselves taking advantage of AI to automate tasks, the overall attack surface that needs to be defended only continues to expand as more AI tools and applications are deployed. Unfortunately, many of those tools, such as the increasingly popular OpenClaw AI agent, have no security controls at all.
Every cybersecurity team will eventually need to deploy a security governance and compliance framework to apply controls to AI. In the meantime, however, the pace of AI agent adoption within many organizations is outstripping cybersecurity teams' ability to keep up. Sooner or later, an all-but-inevitable major cybersecurity breach may curb the enthusiasm for AI agents just long enough for cybersecurity teams to fully prepare.