Top 50+ AI Statistics & Tips to Understand How AI Can Improve Cybersecurity
Cybercrime is expected to cost $10.29 trillion globally this year, rising to $15.6 trillion by 2029.
With cybercrime rising in cost and frequency, organizations are looking to increase investments in emerging technologies and artificial intelligence in particular to enhance security measures, including threat detection and prevention, malware detection, authentication and access control, incident response, predictive analytics, and more.
However, AI can also be weaponized by malicious actors to launch more and increasingly sophisticated attacks. Experts, executives, and government officials have warned that AI-generated attacks are already happening and will likely surge in the near future.
To help you better understand the role of AI in cybersecurity today and in the future, we’ve compiled a list of over 50 statistics. Read on to learn about the AI trends, challenges, and threats organizations are facing today, plus get tips for using AI to improve your security posture.
The potential and challenges of AI
More organizations are starting to invest in artificial intelligence to enhance their security posture. Read the statistics below to learn how cybersecurity leaders are planning to invest and implement AI and what challenges they’re facing.
1. 82% of IT and business decision-makers say that AI is already impacting what their organization can achieve, while 52% state they will invest in advanced technologies such as AI to respond to the changing market environment. (Alteryx)
2. 57% of IT and business decision-makers believe that AI uptake will become pervasive across all sectors and business functions. (Alteryx)
3. 80% of tech executives reported that they will increase investment in AI in the next year. (EY survey)
4. 58% of tech executives at companies with plans to increase investments in IT or emerging technologies most often report having a plan to prioritize generative AI. (EY survey)
5. The market for artificial intelligence grew beyond 184 billion U.S. dollars in 2024, a considerable jump of nearly 50 billion compared to 2023. (Statista)
6. The AI market is expected to grow twentyfold by 2030, up to nearly two trillion U.S. dollars. (Statista)
7. More than half of breached organizations are facing security staffing shortages, and security leaders are, in turn, marshaling AI and automation solutions to close the skills gap. (IBM)
8. Nearly three-quarters (73%) of business leaders are feeling pressure to implement AI at their organizations, but 72% said their organization lacks the skills to fully implement AI and ML. (Workday)
9. Hackers don’t have much confidence in companies securing their AI systems, especially compared to CISOs. When asked what share of organizations are adequately prepared to tackle AI security, the most common answer among hackers (given by 41%) was fewer than 10% of organizations, whereas the most common answer among CISOs (given by 34%) was between 26% and 50% of organizations. (Bugcrowd)
10. 80% of business leaders agree AI and ML help employees work more efficiently and make better decisions, and that these technologies are required to keep their business competitive. (Workday)
11. Despite widespread adoption and broad agreement about the benefits of AI and ML in the enterprise, business leaders share common concerns:
- 77% are concerned about the timeliness or reliability of the underlying data
- 39% consider potential bias to be a top risk when considering AI
- 48% cite security and privacy concerns as the main barriers to implementation. (Workday)
12. Only 29% of business leaders in 2023 said they are very confident that AI and ML are being applied ethically in business right now. (Workday)
13. As reliance on AI increases, the leading concern for organizations is data privacy (named by 50%), followed by transparency (41%), data governance (41%), and accountability (36%). 34% are also worried about the impact of AI on the environment. (Alteryx)
14. To help mitigate their concerns, 75% of organizations currently have a policy on AI security, ethics, and governance in place. Only 1% aren’t doing anything to address these issues. (Alteryx)
Recommended reading
Why You Need an AI Policy in 2025 & How to Write One [+ Template]
AI in cybersecurity
Cybersecurity products and processes are being transformed by AI. Learn how it’s impacting activities like threat detection and the market as a whole.
15. The global market for AI in cybersecurity was estimated at $25 billion in 2024 and is expected to surpass $147 billion by 2034, a CAGR of roughly 19% over the next decade. (Precedence Research)
16. 2 out of 3 organizations studied in 2024 stated they’re deploying security AI and automation across their security operations center, a 10% jump from the prior year. (IBM)
17. When deployed extensively across prevention workflows, namely attack surface management (ASM), red-teaming, and posture management, organizations averaged $2.22 million less in breach costs compared to those with no AI use in prevention workflows. This finding was the largest cost savings revealed in the 2024 Cost of a Data Breach report. (IBM)
18. The number of organizations that used security AI and automation extensively grew to 31% in 2024 from 28% last year. Although it’s just a 3 percentage point difference, it represents a 10.7% increase in use. (IBM)
19. The share of those using AI and automation on a limited basis also grew from 33% to 36%, a 9.1% increase. (IBM)
20. The share of organizations with no security AI and automation deployed decreased from 39% in 2023 to 33% in 2024. (IBM)
21. Among organizations that stated they used AI and automation extensively, about 27% used AI extensively in each of these categories: prevention, detection, investigation and response. Roughly 40% of these organizations used AI technologies at least somewhat in each category. (IBM)
22. 74% of ethical hackers say AI makes hacking more accessible, enabling them to advance their understanding of hacking methods and explain tricky code or why certain exploits work and others don't. (Bugcrowd)
23. 77% of ethical hackers use AI technologies to hack, as compared to 64% in 2023. (Bugcrowd)
24. The top three ways ethical hackers use AI are:
- analyzing data (62%)
- automating tasks (61%)
- identifying vulnerabilities (38%). (Bugcrowd)
25. 71% of ethical hackers believe AI technologies increase the value of hacking, as compared to 21% in 2023, a jump of 50 percentage points. (Bugcrowd)
26. Almost half of ethical hackers said AI will never beat them in value or effectiveness. The top reason was that they bring a level of creativity that AI lacks. (Bugcrowd)
Recommended reading
190 Cybersecurity Statistics to Inspire Action This Year
Generative AI statistics
Generative AI in particular is fundamentally changing the jobs and workflows of cybersecurity professionals. Discover how organizations are investing — or not investing — in generative AI.
27. Two-thirds of organizations (67%) are increasing investments in generative AI after seeing strong value to date. (Deloitte)
28. The single most important benefit cited was improved efficiency and productivity, reported by 34% of organizations. (Deloitte)
29. Organizations also reported other benefits, including:
- Encouraged innovation (12%)
- Improved existing products and services (10%)
- Reduced costs (9%)
- Enhanced relationships with clients/customers (9%)
- Increased speed and/or ease of developing new systems/software (7%)
- Increased revenue (6%)
- Developed new products and services (6%)
- Shifted workers from lower- to higher-value tasks (4%)
- Better detection of fraud and risk management (4%) (Deloitte)
30. Three-quarters of respondents said their organizations have increased investment around data life cycle management to enable their Generative AI strategy. Top actions include enhancing data security (54%) and improving data quality (48%). (Deloitte)
31. Organizations feel far less ready for the challenges Generative AI brings to risk management and governance—only 23% rated their organization as highly prepared. (Deloitte)
32. Three of the top four things holding organizations back from developing and deploying generative AI tools and applications are:
- worries about regulatory compliance (36%)
- difficulty managing risks (29%)
- lack of a governance model (29%). (Deloitte)
33. To deal with regulatory uncertainty, about half of organizations reported they are preparing regulatory forecasts or assessments. (Deloitte)
34. 46% of organizations said they are working with external partners to deal with regulatory uncertainty. (Deloitte)
35. 14% said they are not making any specific plans at this time to deal with regulatory uncertainty. (Deloitte)
36. When asked what they think will most help drive greater value for their generative AI initiative, the second top action cited was effectively managing risks (13%). (Deloitte)
37. 78% of leaders surveyed in Q1 agreed that more governmental regulation of AI was needed. (Deloitte)
38. 89% of business representatives say that regulations and standards should be developed for the use of AI and generative AI within their sector. 91% say that such frameworks would help businesses implement AI responsibly. (Alteryx)
Recommended reading
How Can Generative AI Be Used in Cybersecurity? 10 Real-World Examples
AI and data breaches
Cyber attacks resulting in data being lost or compromised can severely impact an organization — but AI is lessening that impact. Find out how.
39. Using AI and machine learning insights was the second biggest factor in mitigating average data breach costs. Breaches at organizations using these tools cost $4.6 million on average, $258,538 less than the overall mean breach cost of $4.88 million. (IBM)
40. Breaches at organizations with extensive use of security AI and automation cost $1.88 million less than breaches at organizations with no use of security AI and automation — a 33% difference in average breach cost. (IBM)
41. Breaches at organizations with limited use of security AI and automation cost $4.64 million on average in 2024. This is 19% lower than the average breach cost at organizations with no use of security AI and automation, and 17% higher than the average breach cost at organizations with extensive use. (IBM)
42. Among organizations that stated they used AI and automation extensively, using AI extensively in all four categories (prevention, detection, investigation, and response) dramatically lowered average breach costs. For example, when organizations used AI and automation extensively for prevention, their average breach cost was $3.76 million. Meanwhile, organizations that didn’t use these tools in prevention saw $5.98 million in costs, a 45.6% difference. (IBM)
43. Below are the cost savings when organizations used AI extensively in the other three categories, as compared to organizations that didn't use these tools:
- Detection: $1.88 million
- Investigation: $1.74 million
- Response: $1.68 million (IBM)
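The unusual-looking "45.6% difference" in stat 42 works out if the cost gap is measured against the midpoint of the two average costs rather than either cost alone, a symmetric way of expressing relative difference. A quick, purely illustrative check in Python using the figures reported above:

```python
# Quick, illustrative check of the "45.6% difference" figure in stat 42.
# The gap between the two average breach costs is measured against the
# midpoint of the two values (a symmetric "relative difference").

with_ai = 3.76     # avg breach cost (USD millions), extensive AI in prevention
without_ai = 5.98  # avg breach cost (USD millions), no AI in prevention

gap = without_ai - with_ai             # 2.22
midpoint = (with_ai + without_ai) / 2  # 4.87
print(f"gap: ${gap:.2f}M, relative difference: {gap / midpoint:.1%}")
# -> gap: $2.22M, relative difference: 45.6%
```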
44. Among organizations that stated they used AI and automation extensively, extensive AI use in any of the four functions (prevention, detection, investigation, and response) accelerated the work of identifying and containing breaches. For example, when organizations did not use AI and automation for prevention, it took them 312 days on average to identify and contain a breach. Meanwhile, organizations that used these tools extensively in prevention were able to identify and contain a breach 111 days faster on average. (IBM)
45. Organizations extensively using security AI and automation identified and contained data breaches nearly 100 days faster on average than organizations that didn’t use these technologies at all. (IBM)
46. Organizations with limited use of security AI and automation identified and contained data breaches in 241 days on average, 66 days faster than organizations that didn’t use these technologies at all. (IBM)
Recommended reading
Biggest Data Breaches of 2024: What Went Wrong and Key Lessons for Strengthening Cybersecurity
AI as a cyber threat
While AI can be a powerful tool for cyber defense, it can also be a powerful weapon for cyber crime. Learn more about how AI poses a threat in the wrong hands.
47. 53% of IT professionals said the top global concern about ChatGPT is its ability to help hackers craft more believable and legitimate-sounding phishing emails. (BlackBerry)
48. 71% of IT professionals believe ChatGPT may already be in use by nation-states to attack other countries through hacking and phishing attempts. (BlackBerry)
49. 40% of all phishing emails targeting businesses are now generated by AI. (VIPRE Security Group)
50. 93% of ethical hackers believe that companies using AI tools have introduced a new attack vector for threat actors to exploit. (Bugcrowd)
51. Only 53% of ethical hackers believe that existing security solutions meet the needs and risks of AI. (Bugcrowd)
52. 82% of ethical hackers believe the AI threat landscape is evolving too fast to adequately secure. (Bugcrowd)
53. 72% of ethical hackers believe the risks associated with AI outweigh its potential. (Bugcrowd)
54. When asked about the top ways AI technologies can be misused to weaken an organization's cybersecurity or GRC measures, ethical hackers' top five answers were:
- Creating fake data and identities that are hard to detect
- Manipulating systems to carry out nefarious tasks
- Developing tools and methods for large-scale attacks
- Poisoning data inputs to influence predictive models
- Exploiting errors and biases that affect risk decisions (Bugcrowd)
55. In a study conducted by security researchers and practitioners, 60% of participants fell victim to AI-automated phishing, a success rate comparable to that of non-AI phishing messages created by human experts. (Harvard Business Review)
56. New research demonstrates that the entire phishing process can be automated using LLMs, reducing the cost of phishing attacks by more than 95% while achieving equal or greater success rates than phishing messages created by human experts. (Harvard Business Review)
Tips for using AI to improve cybersecurity
Below are tips for using AI to enhance your organization’s security posture.
1. Identify high-potential use cases for AI in cybersecurity
Whether you’re just starting to use AI, looking for ways to optimize it, or trying to justify more investment in AI in your cybersecurity program, it’s essential that you identify the high-potential use cases. These use cases will offer the most benefits to your organization and be the least complex to implement.
Some examples might be intrusion detection, malware detection, data protection, and compliance.
Prioritize the high-potential use cases first, then add high-value use cases of increasing complexity to your roadmap.
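One lightweight way to run this exercise is to score each candidate use case on expected benefit and implementation complexity, then rank by the ratio. The sketch below is purely illustrative: the use cases come from the examples above, but the scores are hypothetical placeholders you would replace with your own assessments.

```python
# Hypothetical prioritization sketch: score each candidate use case on
# expected benefit and implementation complexity (1 = low, 5 = high),
# then rank by benefit relative to complexity. All scores are placeholder
# assumptions; substitute your own.

use_cases = {
    "intrusion detection":  {"benefit": 5, "complexity": 2},
    "malware detection":    {"benefit": 4, "complexity": 2},
    "data protection":      {"benefit": 4, "complexity": 3},
    "compliance reporting": {"benefit": 3, "complexity": 1},
}

ranked = sorted(
    use_cases.items(),
    key=lambda item: item[1]["benefit"] / item[1]["complexity"],
    reverse=True,
)

for name, s in ranked:
    print(f"{name}: benefit {s['benefit']}, complexity {s['complexity']}, "
          f"priority {s['benefit'] / s['complexity']:.1f}")
```

The top of the ranked list is where AI investment is likely to pay off first; lower-ratio items go further out on the roadmap.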
2. Prioritize AI and automation in your prevention strategies
Organizations that applied AI and automation to security prevention saw the biggest impact from their AI investments in IBM's 2024 Cost of a Data Breach report, compared with the three other security areas studied (detection, investigation, and response), saving an average of $2.22 million over organizations that didn’t deploy AI in prevention technologies.
Given these cost savings, it makes sense to prioritize applying AI and automation in the areas of ASM, red-teaming, posture management, and other security prevention strategies. This can often be addressed by working with a managed service provider.
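To make this concrete, below is a deliberately tiny sketch of one automatable prevention check: flagging soon-to-expire TLS certificates on internet-facing hosts. It illustrates the kind of repeatable task that belongs in an ASM or posture-management workflow; it is not an ASM tool, and the hostnames are placeholders.

```python
# Toy prevention check: flag TLS certificates on internet-facing hosts
# that expire soon. Real attack surface management covers far more (asset
# discovery, exposed services, misconfigurations); this only illustrates
# automating one small, repeatable check. Hostnames are placeholders.

import socket
import ssl
import time

HOSTS = ["example.com", "example.org"]  # replace with your external assets
WARN_DAYS = 30

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return the number of days until the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

for host in HOSTS:
    try:
        days = cert_days_remaining(host)
        status = "WARN" if days < WARN_DAYS else "ok"
        print(f"[{status}] {host}: certificate expires in {days} days")
    except OSError as exc:
        print(f"[ERROR] {host}: {exc}")
```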
How Does AI Reduce Human Error in Cybersecurity?
This ebook provides a high-level overview of the crucial role AI is playing in mitigating risks of data breaches, regulatory violations, financial losses, and more, using examples of real-world applications in cybersecurity and IT compliance.
3. Invest in security AI and automation to help improve detection and response times
Another ideal use case for security AI and automation is identifying and containing incidents and intrusion attempts. Security technologies that can augment or replace human intervention enable your organization to reduce, or even eliminate, processes driven by manual inputs that often span dozens of tools and complex, nonintegrated systems.
Investing in security AI and automation in this way can help significantly reduce average data breach costs and breach lifecycles.
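As a minimal illustration of what such automation can look like (a sketch only, assuming numpy and scikit-learn are installed, with synthetic data standing in for real telemetry), the example below trains an unsupervised anomaly detector on login events and flags outliers for analyst review:

```python
# Illustrative sketch only: an unsupervised anomaly detector over simple
# login-event features (hour of day, log10 bytes transferred, failed
# attempts). Real deployments need richer features, tuning, and human
# review; this just shows machine triage ahead of the analyst queue.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Synthetic "normal" logins: business hours, modest transfers, few failures.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # hour of day
    rng.normal(5, 1, 500),    # log10(bytes transferred)
    rng.poisson(0.2, 500),    # failed attempts before success
])

# A couple of suspicious events: 3 a.m. logins, huge transfers, many failures.
suspicious = np.array([
    [3.0, 9.5, 6],
    [2.0, 8.8, 9],
])

model = IsolationForest(contamination=0.01, random_state=7).fit(normal)

for event in suspicious:
    verdict = model.predict([event])[0]      # -1 = anomaly, 1 = normal
    score = model.score_samples([event])[0]  # lower = more anomalous
    print(f"{event}: {'ALERT' if verdict == -1 else 'ok'} (score {score:.3f})")
```

In practice, a model like this would sit in front of the analyst queue, triaging events so humans review only the most anomalous ones.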
4. Train and upskill cyber analysts in AI
While AI can be used to detect threats among enormous data sets more efficiently than humans, organizations still need cyber analysts to improve the logic underpinning AI algorithms to close potential threat entry points. According to a survey by the Capgemini Research Institute, half of executives say there is a lack of qualified cybersecurity experts capable of doing so.
The key is to train and upskill cyber analysts in AI so they have both knowledge of the company and its key processes and the ability to integrate and validate AI across those processes as it works to secure them.
Cyber analysts will also be essential in decision making and policy making as well as other tasks like creating detection criteria and researching emerging threats.
Organizations that invest in cyber analyst training and upskilling will be best positioned to combine human expertise with the speed and scale of AI.
5. Take a multilayered approach to cybersecurity
You can’t rely on AI alone to protect your organization from cyber threats. You must incorporate it into a multilayered approach consisting of technology, people, and processes.
In addition to AI, your organization’s cybersecurity strategy should include continuous monitoring, regular security awareness training, robust security policies, and cybersecurity talent management, among other security controls and tools.
Recommended reading
7 Benefits of Continuous Monitoring & How Automation Can Maximize Impact
Leverage security AI and automation with Secureframe
Security AI and automation have a tremendous impact on an organization’s ability to prevent and triage costly security incidents and maintain a strong security and compliance posture.
Secureframe’s suite of AI capabilities is designed to revolutionize the way businesses approach security and compliance, and includes:
- Comply AI for Vendor Risk Management: Allows organizations to send custom or template-based questionnaires directly from the platform, with vendor responses posting directly in Secureframe for centralized management. Comply AI further enhances efficiency by automatically extracting relevant answers from hosted vendor documents, such as SOC 2 reports, speeding up security assessments and reviews.
- AI Framework Support: Secureframe now provides support for NIST AI RMF and ISO 42001, two key frameworks that guide organizations in the responsible design, development, and deployment of AI systems.
- Comply AI for Remediation: Simplifies the process of fixing failing cloud tests by automatically generating fixes as infrastructure-as-code, allowing users to effortlessly implement these solutions in their cloud environments, improving security and making the remediation process more efficient.
- Comply AI for Risk: Comply AI for Risk streamlines the risk assessment process. By analyzing risk descriptions, it creates inherent risk scores, treatment plans, and residual risk scores, helping organizations improve their risk awareness and response.
- Comply AI for Policies: Using generative AI, Comply AI for Policies streamlines policy creation and editing. The AI-driven text editor enables organizations to produce policies that are not only compliant but also reflective of their unique tone and voice.
- Generative AI in Questionnaire Automation: Utilizes generative AI to suggest answers, pulling control and test information from Secureframe Comply, policies, and the Knowledge Base. This AI-powered automation saves customers even more time on completing lengthy questionnaires while ensuring accuracy by referencing content directly from Secureframe Comply.
Learn how Secureframe’s compliance automation platform can help your organization specifically by scheduling a demo today.