How Can Generative AI Be Used in Cybersecurity? 15 Real-World Examples

June 18, 2025

Anna Fitzgerald, Senior Content Marketing Manager

In a recent survey of security executives and professionals by Splunk Inc., 91% of respondents said they use generative AI and 46% said it will be game-changing for their security teams.

Generative AI has already fundamentally changed the jobs and workflows of cybersecurity professionals.

In this article, we’ll explore the ways generative AI is impacting the cybersecurity industry, for better and for worse. We’ll also focus on real-world use cases of generative AI in cybersecurity today.

Recommended reading

How Artificial Intelligence Will Affect Cybersecurity in 2024 & Beyond

An overview of generative AI in cybersecurity

Generative AI has become a double-edged sword in the realm of cybersecurity. On one hand, malicious actors are increasingly harnessing its power to create sophisticated threats at scale. They exploit AI models like ChatGPT to generate malware, identify vulnerabilities in code, and bypass user access controls. Moreover, social engineers are employing generative AI to craft more convincing phishing scams and deepfakes, amplifying the threat landscape. A substantial 85% of security professionals who witnessed an increase in cyber attacks over the past 12 months attribute the rise to bad actors using generative AI.

However, generative AI also presents significant opportunities for fortifying cybersecurity defenses. It can aid in the identification of potential attack vectors, automatically respond to security incidents, and bolster threat intelligence capabilities.

To fully grasp the impact of generative AI in cybersecurity, CISOs and other security and IT leaders must understand the risks and benefits it offers. We’ll take a closer look at these below.

How generative AI is threatening cybersecurity

Malicious attackers are seizing the potential of generative AI to launch cyber attacks that are harder to detect and defend against. Let’s take a look at some of the risks of generative AI below.

Increasingly sophisticated attacks

In EY’s 2024 Human Risk in Cybersecurity Survey, 85% of respondents said they believe AI has made cybersecurity attacks more sophisticated.  

Hackers are using gen AI in particular to launch increasingly sophisticated attacks like self-evolving malware. These strains of malware use gen AI to “self-evolve” and create variations with unique techniques, payloads, and polymorphic code to attack a specific target and go undetected by existing security measures. 

Larger volumes of attacks

The World Economic Forum’s Global Cybersecurity Outlook 2025 report showed that 72% of businesses see rising cyber risks, and nearly half (47%) cite the malicious use of generative AI to enable more sophisticated and scalable attacks as a top concern.

That means cybercriminals are using gen AI not only to create more sophisticated cyber attacks but also to launch them in larger volumes. For example, the IBM X-Force Threat Intelligence Index 2024 found that gen AI capabilities facilitate upwards of a 99.5% reduction in the time needed to craft an effective phishing email.

Bigger financial losses

By enabling cybercriminals to launch attacks at scale, generative AI can dramatically increase the profitability of certain attack types. Attackers no longer need to invest heavily in manual planning or technical expertise. Instead, they can rapidly generate convincing messages and execute more campaigns simultaneously using tools powered by large language models (LLMs).

This means even small-time threat actors can carry out wide-reaching attacks, dramatically increasing the potential for financial damage. 

Findings from the latest FBI Internet Crime Report confirm that cybercrime costs are rising. In 2024, the IC3 received over 859,000 complaints of internet-related crimes. While this was only a slight increase in complaints from 2023, reported losses reached an unprecedented $16.6 billion, a 33% year-over-year increase. Because losses grew far faster than the number of complaints, attacks appear to be becoming both more effective and more expensive.

Data leakages

Generative AI models, especially those trained on broad datasets or accessed via public-facing platforms, present a significant risk of data leakage. In fact, research from Check Point found that approximately 1 in 13 GenAI prompts contain potentially sensitive data. Given this frequency, it’s no wonder that 68% of organizations have experienced data leaks linked to the use of AI tools.

This is particularly concerning given that sensitive information is often entered into these models, both intentionally and accidentally, which can then be stored and potentially reproduced in future responses. This risk is magnified in enterprise settings where many users may input proprietary code, credentials, or client data into a generative AI tool.

Recent incidents highlight this issue. The alleged OmniGPT breach reportedly exposed personal data belonging to over 30,000 users, raising serious concerns about data retention and privacy in generative AI systems. As the usage of LLMs grows, the risk of leaking confidential data through prompts or system vulnerabilities will only increase.
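One practical safeguard is screening prompts for sensitive data before they leave the organization. Below is a minimal sketch of that idea in Python; the regex patterns and placeholder format are illustrative assumptions, not a production DLP ruleset:

```python
import re

# Illustrative patterns only -- a real DLP ruleset would be far broader.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the prompt
    is sent to an external generative AI service."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

clean, hits = redact_prompt("Rotate key sk-abcdef1234567890XYZ for jane@example.com")
print(clean)  # placeholders in place of the key and email address
print(hits)   # ['email', 'api_key']
```

Even a simple pre-submission filter like this reduces accidental leakage; enterprise deployments typically layer it with classifiers and dedicated DLP tooling.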

Lack of risk management

While adoption of gen AI is rising, efforts to manage the risks introduced by gen AI are lagging behind. A study released by IBM and Amazon Web Services found that organizations are securing only 24% of their current generative AI projects — despite the fact that 82% of respondents say secure and trustworthy AI is essential to the success of their business. In fact, 69% of the surveyed executives say innovation takes precedence over security.

This is similarly reflected in the results of a 2024 survey by Deloitte, in which only 25% of leaders said their organizations are “highly” or “very highly” prepared to address governance and risk issues related to gen AI adoption.

Organizations that increase their adoption of gen AI without simultaneously updating and bolstering their risk management strategy will increase their risk exposure. 

Insecure code

Many developers are turning to generative AI to improve their productivity. However, a Stanford study found that software engineers who use code-generating AI systems are more likely to cause security vulnerabilities in the apps they develop. As more developers lacking the expertise or time to spot and remediate code vulnerabilities use generative AI, more code vulnerabilities will be introduced that can be exploited by hackers. 

Recommended reading

Artificial Intelligence: The Next Big Leap for Security Compliance

How generative AI is improving cybersecurity

Gen AI is also helping make security teams more accurate, efficient, and productive in defending their organizations. Let’s look at how generative AI is transforming security operations below. 

Supplementing understaffed security teams

AI is being used to supplement security teams and improve security outcomes. The majority of executives (64%) have implemented AI for security capabilities and 29% are evaluating implementation. These AI adopters have reported significant performance improvements in the triage of Tier 1 threats, detection of zero-day attacks and threats, and reduction of false positives and noise. 

As a result of these positive impacts, leaders say generative AI is helping them better allocate resources, capacity, talent, or skills.

Making cybersecurity more cost-efficient

Generative AI helps organizations manage cyber risk more cost-effectively by reducing the time and personnel required to complete security tasks. When implemented well, gen AI tools enhance productivity, optimize security workflows, and reduce response times, ultimately lowering the total cost of cybersecurity operations.

For example, in IBM’s 2024 Cost of a Data Breach report, organizations with extensive use of security AI and automation not only identified and contained a data breach 108 days faster—they also saw cost savings of nearly $2.2 million compared to organizations with no use.

Detecting threats in real time

Threat detection is one of the top use cases of generative AI today. By using it to identify patterns and anomalies faster, more efficiently filter incident alerts, and reject false positives, organizations are able to significantly speed up their ability to detect new threat vectors.
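To make the anomaly detection side of this concrete, here is a minimal sketch using scikit-learn's IsolationForest over toy network session features. The feature set and contamination rate are assumptions chosen for illustration; production pipelines combine far more signals:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: [bytes_sent, bytes_received, failed_logins] per session.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[500, 800, 0], scale=[100, 150, 0.5], size=(500, 3))
suspicious = np.array([[50_000, 200, 12]])  # bulk upload plus repeated login failures
sessions = np.vstack([normal, suspicious])

# contamination is the expected share of anomalies -- a tuning assumption.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(sessions)  # -1 marks anomalies, 1 marks inliers

print("flagged session indices:", np.where(labels == -1)[0])
```

Generative models extend this pattern by explaining in natural language why a session was flagged, which is where much of the alert-triage speedup comes from.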

Detecting fraud

Generative AI is proving valuable in accelerating and improving fraud detection. In an FIS survey, 78% of business and tech leaders said that AI has improved their company’s fraud detection and risk management strategies.

By learning from large datasets of both legitimate and fraudulent transactions, gen AI models can identify subtle patterns and anomalies that might go unnoticed by traditional rule-based systems. These models continuously adapt to new fraud tactics, helping organizations catch suspicious activity in real time.
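While the models described here are generative, the core learn-from-labeled-transactions loop can be illustrated with a conventional classifier. The sketch below trains a gradient boosting model on synthetic data; the features, distributions, and fraud rate are invented for the example:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic transactions: [amount, hour_of_day, merchant_risk_score]; label 1 = fraud.
rng = np.random.default_rng(7)
legit = np.column_stack([rng.exponential(60, 2000),
                         rng.integers(8, 22, 2000),
                         rng.uniform(0.0, 0.3, 2000)])
fraud = np.column_stack([rng.exponential(900, 60),
                         rng.integers(0, 6, 60),
                         rng.uniform(0.5, 1.0, 60)])
X = np.vstack([legit, fraud])
y = np.concatenate([np.zeros(2000), np.ones(60)])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)

# Score a new transaction: a large amount at 3 a.m. from a risky merchant.
print("fraud probability:", clf.predict_proba([[1200.0, 3, 0.8]])[0, 1])
```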

For example, AI-powered systems can flag unusual spending behavior, synthetic identity fraud, or manipulation of financial documents. Banks and fintech companies use generative AI to identify possible fraud risks to prevent financial crime, even catching trends that a human agent might miss. As a result of using gen AI, American Express was able to improve fraud detection by 6% and PayPal by 10%.

Other industries are following the banking and fintech industry’s lead. According to a study by the SAS Institute, 97% of government agencies said they expect to use generative AI tools in the next two years to help detect, prevent or investigate fraud.

Enhancing insider threat detection

Many industries, including the defense industry, are looking to generative AI tools to enhance insider threat detection by analyzing behavioral patterns and communication anomalies that indicate potential internal risks. 

Traditional insider threat detection relies heavily on predefined rules and known threat signatures, which often fail to catch subtle or evolving behavior. With generative AI, organizations can monitor and interpret signals like unusual access to sensitive data, changes in writing tone, or attempts to exfiltrate files through unapproved channels. These insights can be used to flag suspicious behavior without generating excessive false positives. 

By correlating activity across users, devices, applications, and communications, gen AI can provide a more holistic view of employee behavior and surface insider threats before they escalate into serious, costly incidents. 
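A stripped-down version of this idea is to baseline each user's normal activity and flag large deviations. The toy sketch below applies a z-score to daily sensitive-file access counts; the single feature and the threshold are assumptions, and real deployments model many more behavioral signals:

```python
from statistics import mean, stdev

# Toy history: daily counts of sensitive-file accesses per user.
history = {
    "alice": [3, 4, 2, 5, 3, 4, 3],
    "bob":   [1, 0, 2, 1, 1, 0, 1],
}
today = {"alice": 4, "bob": 19}  # bob's spike is the signal of interest

THRESHOLD = 3.0  # z-score cutoff -- a tuning assumption

for user, counts in history.items():
    mu, sigma = mean(counts), stdev(counts)
    z = (today[user] - mu) / sigma if sigma else float("inf")
    if z > THRESHOLD:
        print(f"flag {user}: {today[user]} accesses today (z={z:.1f})")
```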

Enhancing threat intelligence

Generative AI is also being used to enhance threat intelligence. Previously, analysts would have to use complex query languages, operations, and reverse engineering to analyze vast amounts of data to understand threats. Now, they can use generative AI algorithms that automatically scan code and network traffic for threats and provide rich insights that help analysts understand the behavior of malicious scripts and other threats.
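The underlying pattern, handing a suspicious artifact to an LLM and asking for an analyst-readable summary, can be sketched with any LLM API. The example below assumes the OpenAI Python client and a particular model name, and the obfuscated payload is a made-up placeholder; it is not tied to any specific vendor tool:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Made-up, truncated sample of an obfuscated script captured from a sandbox.
suspicious_script = "powershell -enc SQBFAFgAIAAoAE4AZQB3AC0A..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption
    messages=[
        {"role": "system",
         "content": "You are a malware analyst. Summarize what this script "
                    "appears to do and list likely indicators of compromise."},
        {"role": "user", "content": suspicious_script},
    ],
)
print(response.choices[0].message.content)
```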

Automating security patching

Generative AI can automate the analysis and application of patches. It can scan codebases for vulnerabilities and then suggest or apply appropriate patches, using techniques such as natural language processing (NLP) pattern matching or the K-nearest neighbors (KNN) machine learning algorithm to match new issues with known fixes.
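As a rough sketch of the KNN approach mentioned above, the example below embeds past vulnerability findings with TF-IDF and retrieves the closest previously reviewed fix for a new finding. The corpus and fixes are invented for illustration, and a real system would use learned code embeddings rather than TF-IDF:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Tiny invented corpus of past findings and their reviewed fixes.
past_findings = [
    "SQL query built with string concatenation of user input",
    "password stored in plaintext configuration file",
    "user-supplied path used in file open without validation",
]
reviewed_fixes = [
    "Use parameterized queries via the DB driver's placeholder syntax.",
    "Move the secret to a vault and reference it at runtime.",
    "Canonicalize the path and verify it stays inside the allowed root.",
]

vectorizer = TfidfVectorizer().fit(past_findings)
index = NearestNeighbors(n_neighbors=1).fit(vectorizer.transform(past_findings))

new_finding = "user input concatenated into a SELECT statement"
_, idx = index.kneighbors(vectorizer.transform([new_finding]))
print("suggested fix:", reviewed_fixes[idx[0][0]])
```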

Improving incident response

Another successful application of generative AI in cybersecurity is in incident response. Generative AI can provide security analysts with response strategies based on successful tactics used in past incidents, which can help speed up incident response workflows. Gen AI can also continue to learn from incidents to adapt these response strategies over time. Organizations can use generative AI to automate the creation of incident response reports as well. 

During the 2024 RSA Conference, Elie Bursztein, Google and DeepMind AI Cybersecurity Technical and Research Lead, said one of the most promising applications of generative AI is speeding up incident response. While more research and innovation is needed, he said that one day gen AI may be able to model an incident or generate a near real-time incident report, drastically speeding up incident response.

Recommended reading

AI in Cybersecurity: How It’s Used + 8 Latest Developments

Generative AI in cybersecurity examples

Now that we understand some of the general applications of generative AI in cybersecurity, let’s take a look at some specific cybersecurity tools that use generative AI. 

1. Secureframe Comply AI for AI Security Assessments

Secureframe Comply AI for AI Security Assessments helps organizations evaluate and manage the risks associated with using AI systems. This solution enables security and compliance teams to conduct thorough assessments based on regulatory requirements and industry frameworks like the NIST AI RMF or ISO 42001. 

The tool also streamlines documentation and evidence collection by pre-filling responses based on company data and prior assessments. By using generative AI to guide and automate the AI security assessment process, organizations can build safer and more transparent AI systems while maintaining compliance with evolving regulations.

2. Microsoft Security Copilot

Microsoft Security Copilot integrates generative AI directly into Microsoft’s security tools, including Defender, Sentinel, and Intune, to help organizations streamline threat detection, investigation, and response, as well as data loss prevention and other tasks. Security analysts can use natural language prompts to generate KQL queries, summarize incidents, recommend next steps, prioritize insider risk alerts, and streamline patch management, among other tasks.

One of the tool’s key benefits is accessibility. It lowers the barrier for less experienced analysts to contribute meaningfully by guiding them through complex detection and response tasks. By offering in-line threat intelligence summaries, automated incident timelines, and recommended remediations, Copilot enables teams to respond faster and more confidently to threats.
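Security Copilot's interface is proprietary, but the natural-language-to-query pattern it popularized can be sketched generically. The example below assumes the OpenAI Python client and a model name, and SigninLogs is just one common Sentinel table; generated KQL should always be reviewed by an analyst before it is run:

```python
from openai import OpenAI

client = OpenAI()  # illustrative stand-in -- this is not Security Copilot's API

question = "Show sign-in failures for admin accounts in the last 24 hours"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption
    messages=[
        {"role": "system",
         "content": "Translate the analyst's request into a KQL query against "
                    "the SigninLogs table. Return only the query."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)  # review before executing
```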

3. IBM Consulting Cybersecurity Assistant

IBM’s Cybersecurity Assistant is another generative AI-powered solution designed to accelerate and improve the identification, investigation, and response to critical security threats. The tool provides real-time insights and support on operational tasks to both clients and IBM security analysts so they can respond more proactively and precisely. In some cases, this cybersecurity assistant has helped reduce alert investigation times by 48%.

4. Secureframe Comply AI for Remediation

Secureframe launched Comply AI for Remediation to provide a more contextual, accurate, and tailored user experience for remediating failed tests so organizations can quickly fix cybersecurity issues and speed up time-to-compliance.

Comply AI for Remediation provides remediation guidance tailored to the user’s environment so they can easily fix the underlying issue causing the failing configuration. This enables them to fix failing controls to pass tests, get audit-ready faster, and improve their overall security and compliance posture.

Users can also ask follow-up questions using the chatbot to get additional details about the remediation code or more tailored guidance for their specific security and compliance requirements.

5. Google Threat Intelligence

Google recently announced Google Threat Intelligence, which combines the power of Mandiant frontline expertise, VirusTotal threat intelligence (crowdsourced from over 1 million users), and the Gemini AI model into one offering.

Gemini is an AI-powered agent that provides conversational search across Google’s vast repository of threat intelligence, enabling users to gain insights into threats and protect themselves faster. Traditionally, operationalizing threat intelligence has been labor-intensive and slow. Google Threat Intelligence uses Gemini to analyze potentially malicious code and provides a summary of its findings to more efficiently and effectively assist security professionals in combating malware and other types of threats.

6. Accenture mySecurity

Accenture mySecurity is a centralized suite of assets that integrates gen AI into cyber-resilience services across supply chain, cloud, application, and identity and access management. It is designed to drive speed and efficiency and help organizations defend themselves against AI-driven threats.

7. Secureframe Comply AI for Risk

Secureframe Comply AI for Risk was designed to automate the risk assessment process to save organizations time and resources. 

Using only a risk description and company information, Comply AI for Risk produces detailed insights into a risk, including its likelihood and impact before treatment, a treatment plan to respond to the risk, and its residual likelihood and impact after treatment. These detailed outputs help organizations better understand the potential impact of a risk and proper mitigation methods, improving their risk awareness and response.

8. Tenable ExposureAI

Tenable launched ExposureAI to provide new and rich insights to analysts to make exposure management more accessible. These new generative AI capabilities help analysts search, analyze, and make decisions about exposures faster by:

  • allowing analysts to use natural language search queries to search for specific exposure and asset data
  • summarizing the complete attack path in a written narrative so analysts can understand exposures better
  • allowing analysts to ask Tenable’s AI assistant specific questions about the summarized attack path, as well as each node along the attack path
  • surfacing high-risk exposure insights and recommending actions so analysts can more easily prioritize and remediate high-risk exposures

9. Ironscales Phishing Simulation Testing

Ironscales launched GPT-powered Phishing Simulation Testing (PST) as a beta feature. This tool uses Ironscales' proprietary large language model to generate phishing simulation testing campaigns that are personalized to employees and the advanced phishing attacks they may encounter. 

The goal is to help organizations rapidly personalize their security awareness training to combat the rise and sophistication of social engineering attacks.

10. Secureframe Comply AI for Vendor Risk Management

Secureframe Comply AI for Vendor Risk Management streamlines the security assessment process for third-party vendors using generative AI. Instead of manually filling out vendor assessments, Comply AI can extract relevant content from vendor documentation such as SOC 2 reports and policies to answer security review questions. This helps companies build more secure supply chains while reducing the time and resources required for vendor reviews.

11. ZeroFox FoxGPT

ZeroFox has developed FoxGPT, a generative AI tool designed to accelerate the analysis and summarization of intelligence across large datasets. It can help security teams analyze and contextualize malicious content, phishing attacks, and potential account takeovers. 

12. SentinelOne Purple AI

SentinelOne unveiled a generative AI-powered threat hunting platform that combines real-time, embedded neural networks and a large language model (LLM)-based natural language interface to help analysts identify, analyze, and mitigate threats faster.

Using natural language, analysts can ask complex threat and adversary-hunting questions and run operational commands to manage their enterprise environment and get rapid, accurate, and detailed responses back in seconds. Purple AI can also analyze threats and provide insights on the identified behavior alongside recommended next steps. 

13. VirusTotal Code Insight

VirusTotal Code Insight uses Sec-PaLM, one of the generative AI models hosted on Google Cloud AI, to produce natural language summaries of code snippets. This can help security teams analyze and understand the behavior of potentially malicious scripts. VirusTotal Code Insight is meant to serve as a powerful assistant to cybersecurity analysts, working 24/7 to enhance their overall performance and effectiveness.

14. IBM QRadar Suite

The QRadar suite combines advanced AI and automation to accelerate threat detection and response time. IBM has announced it will release generative AI security capabilities in early 2024 to further automate manual tasks and optimize security teams’ time and talent. These tasks include:

  • Creating simple summaries of security cases and incidents that can be shared with a variety of stakeholders in a single click
  • Automatically generating searches to detect threats based on natural language descriptions of attack behavior and patterns
  • Helping analysts to quickly understand security log data by providing simple explanations of events that have taken place on a system
  • Interpreting and summarizing highly relevant threat intelligence

15. Secureframe Questionnaire Automation

Responding to security questionnaires can be a tedious and time-consuming process for security analysts and other stakeholders, with questions varying from customer to customer and no standardized format, set, or order of questions.

Secureframe’s Questionnaire Automation uses generative AI to streamline and automate the process. This tool suggests questionnaire responses using policies, controls, tests, and other context from the Secureframe platform along with approved prior responses in the Knowledge Base to deliver greater accuracy. After quickly reviewing the answers and making any adjustments as needed, users can then share completed questionnaires back to prospects and customers in their original format.
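The retrieval step behind this kind of tool can be sketched simply: match an incoming question against previously approved answers and suggest the closest one. The example below uses TF-IDF cosine similarity over an invented knowledge base; it illustrates the pattern and is not Secureframe's implementation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented knowledge base of previously approved answers.
prior_questions = [
    "Do you encrypt customer data at rest?",
    "How often do you perform penetration testing?",
    "Do you have a documented incident response plan?",
]
approved_answers = [
    "Yes, all customer data is encrypted at rest with AES-256.",
    "An independent penetration test is performed annually.",
    "Yes, our incident response plan is reviewed twice a year.",
]

vectorizer = TfidfVectorizer().fit(prior_questions)
kb = vectorizer.transform(prior_questions)

incoming = "Is customer data encrypted when stored on your servers?"
scores = cosine_similarity(vectorizer.transform([incoming]), kb)[0]
best = scores.argmax()
print(f"suggested answer (similarity {scores[best]:.2f}): {approved_answers[best]}")
```

In production, an LLM would then adapt the retrieved answer to the exact wording of the new question, with a human reviewing before anything is sent back.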

What your organization can do in response to generative AI in cybersecurity

Below are some steps your organization can take right now to start defending against generative AI risks.

1. Update employee training

Take the time to evaluate and update employee training around generative AI. This training should reflect the sophistication of cyber attacks leveraging generative AI, including increasingly convincing phishing emails and deepfake calls and videos.

Consider including guardrails for using generative AI tools in employee training as well. 

2. Define acceptable generative AI use in policies

While a recent report by ExtraHop revealed that 32% of organizations have banned the use of generative AI tools, leading organizations like the AICPA recommend updating security policies to promote safe usage of AI tools, since their adoption is all but inevitable. Key considerations include:

  • providing examples of acceptable use
  • limiting usage to one or two reputable tools on which employees have been properly trained
  • prohibiting the use of any AI chatbot tool that has not been previously vetted by the IT department and approved by the employee’s supervisor

3. Reduce shadow AI

Similar to how shadow IT increased as SaaS products became more popular and accessible, shadow AI is growing as employees are increasingly adopting AI to improve their productivity.

Shadow AI presents major challenges in terms of security and governance for two major reasons. One, employees may expose sensitive, privileged, or proprietary information when using AI products. Two, an organization’s security team can’t assess and mitigate the risks of AI tools that they don’t know about. 

To address these challenges, organizations can take a multi-pronged approach to reduce shadow AI. This may include educating employees on the risks of shadow AI, defining guardrails and sanctioned AI services in policies, and implementing offensive and defensive security strategies like security fencing to detect and control what type and how much data is flowing within your organization.
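As a simple starting point, security teams can scan egress or proxy logs for traffic to known generative AI services. The sketch below assumes a simplified "user url" log format, and the domain watchlist is illustrative rather than exhaustive:

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative watchlist -- maintain your own list of gen AI service domains.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def scan_proxy_log(lines: list[str]) -> Counter:
    """Count requests per user to known generative AI services."""
    hits = Counter()
    for line in lines:
        user, url = line.split()
        if urlparse(url).hostname in GENAI_DOMAINS:
            hits[user] += 1
    return hits

log = [
    "alice https://chat.openai.com/c/123",
    "bob https://intranet.example.com/wiki",
    "alice https://claude.ai/chat/456",
]
print(scan_proxy_log(log))  # Counter({'alice': 2})
```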

4. Use generative AI to bolster your defenses

Since generative AI is a double-edged sword, make sure to use it to your advantage. For example, you can deploy generative AI solutions to detect threats with the same speed, scale, and sophistication with which nefarious actors launch them. You can also use it to automate routine tasks that don’t require as much human expertise or judgment, like threat hunting.

When used strategically in these ways, generative AI and automation can help your organization identify and respond to security risks and incidents faster and at scale. 

5. Comply with regulations and frameworks for responsible AI use

AI regulation has already been passed by the EU, China, and parts of the US, and is expected to increase. There are also an increasing number of voluntary frameworks that address the complexities and ethical concerns of AI and are designed to help organizations mitigate risks and enhance governance, like the NIST AI Risk Management Framework (AI RMF) and ISO 42001.

Complying with these regulations and frameworks can provide organizations with a structured approach to managing AI systems responsibly and effectively, thereby enhancing trust and reliability among AI developers and users.

6. Work with partners to integrate generative AI in cybersecurity effectively

Successfully adopting generative AI for cybersecurity requires more than just deploying the right tools; it also depends on working with trusted partners who can help you evaluate, implement, and manage these technologies responsibly. Strategic partnerships can provide access to AI expertise, cybersecurity best practices, and managed services that ensure generative AI enhances rather than compromises your security posture.

Below are some examples of partners that can enable you to deploy gen AI with confidence and accountability:

  • Technology partners can help you integrate gen AI into existing security operations without disrupting workflows.
  • Managed security service providers (MSSPs) can offer 24/7 monitoring of AI-powered tools to catch anomalies early. 
  • Compliance-focused partners like Secureframe can guide your team through conducting AI risk assessments, ensuring your AI use aligns with frameworks like NIST AI RMF, and helping you navigate new and emerging regulatory requirements. 

Guiding Your Organization's AI Strategy and Implementation

Follow these best practices to effectively implement AI while addressing concerns related to transparency, privacy, and security.

How Secureframe’s generative AI capabilities can improve cybersecurity at your organization

As the cybersecurity landscape becomes more complex and AI-driven, organizations need tools that don’t just detect threats but that adapt, scale, and streamline security and compliance operations. Secureframe continues to expand its suite of generative AI capabilities to meet this need and help organizations take a proactive, intelligent approach to cybersecurity.

By embedding generative AI into key parts of the compliance lifecycle, Secureframe empowers security teams to be more efficient and strategic. Our tools help customers:

  • Automate the risk assessment process to improve their risk awareness and response
  • Fix failing controls to pass tests and get audit-ready faster
  • Create and customize policies that align with their unique voice 
  • Streamline the vendor security assessment process
  • Simplify AI vendor risk assessments to better understand how these risks impact their compliance posture and support responsible AI governance
  • Respond to security questionnaires and RFPs quickly and accurately to close more deals and grow their revenue

Want to explore Secureframe AI further? Schedule a demo to see how Secureframe AI can automate manual tasks related to security, privacy, and compliance.

FAQs

How is generative AI used in cybersecurity?

Generative AI is being used to strengthen organizations’ security posture by augmenting security teams’ capabilities to detect, analyze, and respond to threats faster and more efficiently, and by automating routine tasks like incident response reporting. Gen AI is also being used by cybercriminals to launch larger volumes of increasingly sophisticated cyber attacks, like deepfake video calls.

What is generative AI and an example?

Generative AI is a type of AI that uses deep-learning models or algorithms to automatically create text, photos, videos, code, and other output based on the datasets they are trained on. The best-known example is ChatGPT, an AI-powered language model developed by OpenAI.

Can cybersecurity be automated by generative AI?

Parts of cybersecurity can be automated by generative AI, including threat detection, analysis, and response; however, it can’t entirely replace human experts. For example, while generative AI tools can identify known attack patterns and predict new ones, human analysts can confirm real threats from false positives based on their deeper, contextual understanding of their organization’s unique systems, networks, and operational environment.

How is generative AI transforming cybersecurity strategies?

Generative AI is shifting cybersecurity strategies from reactive to proactive. Instead of waiting for threats to surface, organizations are using generative AI to anticipate potential attack vectors, simulate breach scenarios, and automate mitigation plans. This shift enables security teams to improve response times and scale their operations. 

How can generative AI be used to enhance cybersecurity measures?

Generative AI can enhance cybersecurity measures by enabling real-time threat detection, intelligent automation of response protocols, and deep behavioral analysis. It can identify sophisticated threats faster than traditional systems, generate remediation steps tailored to the context of a breach, and assist in policy creation and control mapping, among other tasks. Ultimately, gen AI is making security operations smarter, faster, and more resilient.