
AI in Cybersecurity: How It’s Used + 6 Latest Developments
Artificial intelligence (AI) and machine learning technologies have been powering some cybersecurity capabilities for decades. Anti-virus, spam-filtering, and phishing-detection tools are just a few examples.
However, the recent advances in AI have led to an explosion in interest around AI-powered cybersecurity capabilities. This has resulted in an unprecedented amount of product releases, investment, and discourse around AI in cybersecurity.
To understand how AI has shaped cybersecurity and will continue to do so, we’ll explain how AI is used in cybersecurity, starting with more established use cases before turning to some of the latest developments.
How is AI used in cybersecurity?
AI is used in cybersecurity to automate tasks that are highly repetitive, manually intensive, and tedious for cyber analysts and other security experts to complete. This frees up time and resources so these human experts can focus on more complex security tasks like policymaking.
Take endpoint security for example. Endpoint security refers to the measures an organization puts in place to protect devices like desktops, laptops, and mobile devices from malware, phishing attacks, and other threats. To supplement the efforts of human experts and policies they put in place to govern endpoint security, AI can learn the context, environment, and behaviors associated with specific endpoints as well as asset types and network services. It can then limit access to authorized devices based on these insights, and prevent access entirely for unauthorized and unmanaged devices.
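To make the endpoint example above concrete, here is a minimal sketch of device-based access control in Python. The device inventory and service policy are hypothetical stand-ins; an AI-driven system would learn these profiles from observed device context and behavior rather than static tables.

```python
# Hypothetical inventory of managed devices and the services each group may use.
# In an AI-driven system, these profiles would be learned from endpoint behavior.
managed_devices = {"laptop-001": "engineering", "phone-017": "sales"}
allowed_services = {"engineering": {"vpn", "ci"}, "sales": {"vpn", "crm"}}

def allow_access(device_id, service):
    """Block unmanaged devices entirely; limit managed ones to approved services."""
    group = managed_devices.get(device_id)
    if group is None:
        return False  # unmanaged/unauthorized device: no access at all
    return service in allowed_services[group]

print(allow_access("laptop-001", "vpn"))  # managed device, approved service
print(allow_access("rogue-99", "vpn"))    # unmanaged device is blocked
```

The key design point mirrors the text: unknown devices are denied outright, while known devices are limited to the services their learned profile authorizes.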
AI is used in other areas of cybersecurity as well to unlock a range of benefits, including:
- Improving the efficiency of cybersecurity analysts
- Identifying and preventing critical threats more quickly
- Effectively responding to cyber attacks
- Reducing cybersecurity costs
Consider the impact of security AI and automation on average data breach costs and breach lifecycles alone. According to a survey by IBM, breaches at organizations with fully deployed security AI and automation cost $3.05 million less than breaches at organizations with no security AI and automation deployed — a 65.2% difference in average breach cost. Organizations with fully deployed security AI and automation are also able to detect and contain a data breach 74 days faster than companies with no security AI and automation deployed.
To better understand the impact of AI on cybersecurity, let’s take a look at some specific examples of how AI is used in cybersecurity below.
Recommended Reading

35+ AI Statistics to Better Understand Its Role in Cybersecurity [2023]
AI in cybersecurity examples
Many organizations are already using AI to help make cybersecurity more manageable, more efficient, and more effective. Below are some of the top applications of AI in cybersecurity.
1. Threat detection
Threat detection is one of the most common applications of AI in cybersecurity. AI can collect, integrate, and analyze data from hundreds and even thousands of control points, including system logs, network flows, endpoint data, cloud API calls, and user behaviors. In addition to providing greater visibility into network communications, traffic, and endpoint devices, AI can also recognize patterns and anomalous behavior to identify threats more accurately at scale.
For example, legacy security systems analyzed and detected malware based on signatures alone, whereas AI- and ML-powered systems can analyze software based on inherent characteristics, such as whether it’s designed to rapidly encrypt many files at once, and tag it as malware. By identifying anomalous system and user behavior in real time, these AI- and ML-powered systems can block both known and unknown malware from executing, making them much more effective than signature-based technology.
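The behavioral idea above can be sketched in a few lines. This toy example flags a process whose file-modification rate deviates sharply from its historical baseline, a crude stand-in for the learned behavioral models real products use; the baseline numbers are invented for illustration.

```python
import statistics

# Hypothetical per-minute counts of files modified by one process over time.
# A ransomware-like encryption burst stands out against this baseline.
baseline = [3, 1, 4, 2, 0, 3, 2, 1, 4, 2]

def is_anomalous(observed, history, threshold=3.0):
    """Flag behavior whose z-score against historical activity exceeds threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero
    return (observed - mean) / stdev > threshold

print(is_anomalous(250, baseline))  # encryption burst -> flagged
print(is_anomalous(3, baseline))    # normal activity -> not flagged
```

A signature-based check would miss a brand-new malware sample entirely; a behavioral check like this catches it as soon as it starts acting abnormally.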
2. Threat management
Another top application of AI in cybersecurity is threat management.
Consider that 59% of organizations receive more than 500 cloud security alerts per day and 38% receive more than 1,000, according to a survey by Orca Security. 43% of IT decision makers at these organizations said more than 40% of alerts are false positives and 49% said more than 40% are low priority. Despite 56% of respondents spending more than 20% of their day reviewing alerts and deciding which ones should be dealt with first, more than half (55%) said their team missed critical alerts in the past due to ineffective alert prioritization.
This results in a range of issues, including missed critical alerts, time wasted chasing false positives, and alert fatigue, which contributes to employee turnover.
To combat these issues, organizations can use AI and other advanced technologies like machine learning to supplement the efforts of human analysts. AI can scan vast amounts of data to identify potential threats and filter out non-threatening activities to reduce false positives at a scale and speed that human defenders can’t match.
By reducing the time required to analyze, investigate, and prioritize alerts, security teams can spend more time remediating these alerts, which takes three or more days on average according to 46% of respondents in the Orca Security survey.
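A minimal sketch of this kind of triage is shown below. The alert fields, benign-pattern flag, and score weights are all hypothetical; a production system would learn them from labeled alert data rather than hard-coding them.

```python
# Hypothetical alert records; real systems learn these signals from past triage.
alerts = [
    {"id": 1, "severity": 9, "asset_critical": True,  "known_benign_pattern": False},
    {"id": 2, "severity": 3, "asset_critical": False, "known_benign_pattern": True},
    {"id": 3, "severity": 7, "asset_critical": True,  "known_benign_pattern": False},
]

def triage(alerts):
    """Filter likely false positives, then rank the rest by a weighted risk score."""
    candidates = [a for a in alerts if not a["known_benign_pattern"]]
    return sorted(
        candidates,
        key=lambda a: a["severity"] + (5 if a["asset_critical"] else 0),
        reverse=True,
    )

for alert in triage(alerts):
    print(alert["id"])  # highest-risk alerts first
```

Even this crude filter-and-rank step mirrors what matters operationally: likely false positives never reach an analyst, and the remaining alerts arrive pre-prioritized.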
3. Threat response
AI is also used to automate certain actions and speed up incident response times. For example, AI can automate response processes for certain alerts. If a known malware sample shows up on an end user’s device, an automated response may be to immediately shut down that device’s network connectivity to prevent the infection from spreading to the rest of the company.
AI-driven automation capabilities can not only isolate threats by device, user, or location but also initiate notification and escalation measures. This enables security experts to spend their time investigating and remediating the incident.
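The isolate-then-escalate flow described above can be sketched as a tiny response playbook. The alert fields and action names here are hypothetical; real playbooks call vendor-specific EDR and messaging APIs to isolate hosts and page analysts.

```python
def respond(alert):
    """Map a known-malware alert to containment and escalation actions."""
    actions = []
    if alert["type"] == "known_malware":
        # Cut network connectivity first to stop lateral spread,
        # then escalate to a human analyst for investigation.
        actions.append(("isolate_device", alert["device_id"]))
        actions.append(("notify_team", alert["signature"]))
    return actions

print(respond({"type": "known_malware",
               "device_id": "laptop-042",
               "signature": "Emotet"}))
```

Returning a list of actions (rather than executing them inline) is a common playbook pattern: it keeps the decision logic testable and lets the execution layer handle vendor APIs.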
Latest developments in cybersecurity AI
When asked what they would like to see more of in security in 2023, the top answer among a group of roughly 300 IT security decision makers was AI. Many cybersecurity companies are already responding by ramping up their AI-powered capabilities.
Let’s take a look at some of the latest innovations below.
1. AI-powered remediation
More advanced applications of AI are helping security teams remediate threats faster and easier. Some AI-powered tools today can process security alerts and offer users step-by-step remediation instructions based on input from the user, resulting in more effective and tailored remediation recommendations.
Secureframe Comply AI does just that for failing cloud tests. Using infrastructure as code (IaC), Comply AI for remediation automatically generates remediation guidance tailored to each user’s environment so they can easily fix the underlying issue causing the failing configuration. This enables them to fix failing controls to pass tests, get audit-ready faster, and improve their overall security and compliance posture.
Recommended Reading

Introducing Secureframe Comply AI: Faster, Tailored Cloud Remediation
2. Enhanced threat intelligence using generative AI
Generative AI is increasingly being deployed in cybersecurity solutions to transform how analysts work. Rather than relying on complex query languages, operations, and reverse engineering to analyze vast amounts of data and understand threats, analysts can use generative AI models that automatically scan code and network traffic for threats and provide rich insights.
Google’s Cloud Security AI Workbench is a prominent example. This suite of cybersecurity tools is powered by a specialized AI language model called Sec-PaLM and helps analysts find, summarize, and act on security threats. Take VirusTotal Code Insight, which is powered by Security AI Workbench, for example. Code Insight produces natural language summaries of code snippets in order to help security experts analyze and explain the behavior of malicious scripts. This can enhance their ability to detect and mitigate potential attacks.
3. Stronger password security using LLMs
According to new research, AI can crack most commonly used passwords almost instantly. For example, a study by Home Security Heroes found that 51% of common passwords can be cracked by AI in under a minute.
While scary to think of this power in the hands of hackers, AI also has the potential to improve password security in the right hands. Large language models (LLMs) trained on extensive password breaches like PassGPT have the potential to enhance the complexity of generated passwords as well as password strength estimation algorithms. This can help improve individuals’ password hygiene and the accuracy of current strength estimators.
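For contrast with the LLM-based estimators mentioned above, here is a minimal character-set entropy estimator, the kind of naive baseline those models improve on. It only measures search-space size; an LLM trained on breach data would also penalize passwords that humans actually tend to choose.

```python
import math
import string

def estimate_entropy_bits(password):
    """Naive strength estimate: log2 of the brute-force search space.

    This ignores dictionary words and human patterns, which is exactly the
    gap that breach-trained models like PassGPT aim to close.
    """
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

print(estimate_entropy_bits("password"))     # low: lowercase letters only
print(estimate_entropy_bits("Tr!ck3d-0ut"))  # higher: mixed character classes
```

Note the weakness: "password" scores nonzero bits here despite being guessable instantly, which is why model-based strength estimation is an improvement.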
4. Dynamic deception capabilities via AI
While malicious actors will look to capitalize on AI capabilities to fuel deception techniques such as deepfakes, AI can also be used to power deception techniques that defend organizations against advanced threats.
Deception technology platforms are increasingly implementing AI to deceive attackers with realistic vulnerability projections and effective baits and lures. Acalvio’s AI-powered ShadowPlex platform, for example, is designed to deploy dynamic, intelligent, and highly scalable deceptions across an organization’s network.
5. AI-assisted development
This year, CISA published a set of principles for the development of security-by-design and security-by-default cybersecurity products. The goal is to reduce breaches, improve the nation’s cybersecurity, and reduce developers’ ongoing maintenance and patching costs. However, it will likely increase development costs.
As a result, developers are starting to rely on AI-assisted development tools to reduce these costs and improve their productivity while creating more secure software. GitHub Copilot is a relatively new but promising example. In a survey of more than 2,000 developers, developers who used GitHub Copilot completed a task 55% faster than the developers who didn’t.
6. AI-based patch management
As hackers continue to use new techniques and technologies to exploit vulnerabilities, manual approaches to patch management can’t keep up and leave attack surfaces unprotected and vulnerable to data breaches. Action1’s 2023 State of Vulnerability Remediation Report found that 47% of data breaches resulted from unpatched security vulnerabilities, and over half of organizations (56%) remediate security vulnerabilities manually.
AI-based patch management systems can help identify, prioritize, and even address vulnerabilities with much less manual intervention required than legacy systems. This allows security teams to reduce risk without increasing their workload.
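A minimal sketch of the prioritization step is shown below. The vulnerability records are invented, and the ranking rule (actively exploited flaws first, then CVSS severity) is a simplification; AI-based systems also weigh exploit-prediction scores and asset context.

```python
# Hypothetical vulnerability records; an AI-based system would also factor in
# exploit-prediction scores and which assets each flaw actually exposes.
vulns = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "exploited_in_wild": True},
    {"cve": "CVE-2023-0002", "cvss": 5.3, "exploited_in_wild": False},
    {"cve": "CVE-2023-0003", "cvss": 7.5, "exploited_in_wild": True},
]

def patch_order(vulns):
    """Rank patches: actively exploited flaws first, then by CVSS severity."""
    return sorted(
        vulns,
        key=lambda v: (v["exploited_in_wild"], v["cvss"]),
        reverse=True,
    )

for v in patch_order(vulns):
    print(v["cve"])  # patch in this order
```

Automating even this ranking step addresses the report's finding above: teams remediating manually tend to work through backlogs in arbitrary order rather than risk order.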
For example, GitLab recently released a new security feature that uses AI to explain vulnerabilities to developers — with plans to expand this to automatically resolve them in the future.
Harnessing the power of AI in your cybersecurity strategy
AI is increasingly being used in cybersecurity to reap a range of benefits, from improving the efficiency of cyber analysts to identifying and responding to critical threats in less time.
And this is just the start.
By 2030, the global market for AI-based cybersecurity products is estimated to reach $133.8 billion — a dramatic increase from $14.9 billion in 2021.
Secureframe is leading the security and compliance industry in AI innovation. Secureframe’s questionnaire automation leverages AI so customers can quickly answer security questionnaires and RFPs with more than 90% accuracy. And our latest innovation, Comply AI, provides AI-powered remediation guidance to help speed up cloud remediation and time-to-compliance.
To learn more about how Secureframe can help you build trust with your customers, schedule a demo.