Using AI to Innovate in Cybersecurity Defense

Around the world, the frequency and severity of cyberattacks have increased significantly in recent years. Microsoft customers face more than 600 million cybercriminal and nation-state attacks every day, ranging from ransomware to phishing to identity attacks. Our fifth annual Microsoft Digital Defense Report looks at trends from July 2023 to June 2024, exploring how threat actors, including both cybercriminals and nation-states, are exploiting cutting-edge tools like generative AI for nefarious purposes. 

While our report covers these growing threats in depth, it also highlights the potential of generative AI to enable security operations centers to respond to threats much more quickly and efficiently. This is particularly crucial considering the significant shortage of skilled cybersecurity workers. Many cybersecurity teams are operating at their limits, facing staffing constraints, escalating regulatory compliance demands, and an ever-growing number of increasingly sophisticated adversaries.  

"On average, it takes 277 days to identify and contain a breach. AI can drastically reduce this lag."

The introduction of AI changes this workload. For defenders, the “automated ingenuity” of generative AI can now be applied across the entire defense chain, from initial detection of anomalies to prompt triage and response. Beyond merely enhancing existing security operations centers, AI holds the potential to introduce entirely new methods of defense.  

AI enables persistent systems that constantly monitor for vulnerabilities and promptly address any breaches. AI also streamlines the sharing of information among defenders, transforming it from a labor-intensive manual process into a continuous, automated one. 

Microsoft has invested heavily in AI to help security operations centers upskill and operate at speeds beyond human capability to counter threat actors. In a 2023 study, we found that novice users were 26% faster and 44% more accurate across all tasks when using Copilot for Security. 

Microsoft is leveraging AI in seven key areas of security operations: 

  1. Triaging requests and tickets by using large language models (LLMs) to decide how to respond based on how similar requests were handled in the past; a minimal illustrative sketch follows this list. The use of LLMs in this scenario saves an estimated 20 hours per person, per week, for one of Microsoft’s internal response teams. 
  2. Prioritizing work items by assessing how similar items were prioritized in the past. AI can also ensure that prioritization criteria are up to date with ever-evolving compliance requirements. 
  3. Knowledge gathering from diverse external sources by scraping online content and extracting security-related information at scale, generating concise reports in minutes instead of hours. 
  4. Assisting employees with knowledge retrieval by using LLMs to ensure that all employees are up to date on security policies, best practices, and remediation actions necessary for compliance.  
  5. Strengthening risk assessment by leveraging unstructured organizational knowledge and historical precedents to enrich the set of factors determining risk. 
  6. Learning from the past by using LLMs to ingest data from previous incidents, violations, and other events, surfacing lessons that give the organization a comprehensive view of its history. 
  7. Scaling reporting by using AI to help combine, consolidate, and distill documents and slides into reports tailored to a specific audience and goal. 
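To make the first pattern above concrete, the sketch below shows one way an LLM-assisted triage step could be wired up. It is a minimal illustration, not Microsoft’s implementation: the `call_llm` function is a hypothetical stand-in for whichever model endpoint a team uses, and the keyword-overlap similarity search stands in for a production embedding index.

```python
from dataclasses import dataclass


@dataclass
class Ticket:
    title: str
    description: str
    resolution: str = ""  # how the ticket was ultimately handled


def similarity(a: str, b: str) -> float:
    """Crude keyword-overlap score; a real system would use an embedding index."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))


def find_similar(new: Ticket, history: list[Ticket], k: int = 3) -> list[Ticket]:
    """Return the k historical tickets most similar to the new one."""
    ranked = sorted(history, key=lambda t: similarity(new.description, t.description), reverse=True)
    return ranked[:k]


def build_triage_prompt(new: Ticket, examples: list[Ticket]) -> str:
    """Ground the model in how similar tickets were handled before asking for a suggestion."""
    lines = ["You are a SOC triage assistant. Suggest a response and a priority.", ""]
    for ex in examples:
        lines.append(f"Past ticket: {ex.title}")
        lines.append(f"How it was handled: {ex.resolution}")
        lines.append("")
    lines.append(f"New ticket: {new.title}")
    lines.append(new.description)
    lines.append("Suggested response:")
    return "\n".join(lines)


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever model endpoint a team actually uses."""
    return "(model suggestion would appear here)"


if __name__ == "__main__":
    history = [
        Ticket("Phishing email reported",
               "User reported a suspicious email containing a credential-harvesting link",
               "Blocked the sender, reset the user's credentials, notified the user"),
        Ticket("Impossible travel sign-in",
               "Sign-ins from two distant countries within one hour",
               "Revoked active sessions and required MFA re-registration"),
    ]
    new_ticket = Ticket("Suspicious email",
                        "Employee received an email asking for their password on a fake portal")
    prompt = build_triage_prompt(new_ticket, find_similar(new_ticket, history))
    print(call_llm(prompt))
```

In practice the historical examples would come from the team's ticketing system, and an analyst would review the model's suggestion before any action is taken.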

For example, during advanced human-operated ransomware attacks, the time between the initial alert and the encryption event averages just 16 hours, underscoring the importance of moving fast to evict the actor from the network.  

AI security solutions provide more than just a graphical representation of events; they generate a comprehensive incident summary that allows security operations center analysts to quickly understand the situation and identify human-operated ransomware, enabling swift and decisive action. What would have taken a junior analyst tens of minutes across several tools can now be achieved at machine speed. 

Tom Burt, Corporate Vice President of Customer Security and Trust: "The story of AI and cybersecurity is also a potentially optimistic one... We continue to innovate our technology to find new ways that AI can benefit and strengthen cybersecurity."

The cybersecurity challenges our society faces are significant, but so too are the tools at our disposal. As threat actors increasingly wield tools like generative AI as weapons, we are prepared to rise to the challenge by using the same cutting-edge technology to unlock new defensive capabilities.

To learn more, read the full Microsoft Digital Defense Report.

Comments

Aryan Guenthner, Leader in Cyber Security Operations:
Awesome

บุญชู ศรศิลชัย, Electrician at Surao Bang Pla School:
Good advice.

Mark Frudd, Information Security and Compliance Specialist:
AI and automation should be watched very closely. I agree there is a place for them, but we must grasp two key areas. The first is the impact on training and knowledge: too much automation and AI will lead to overuse of tooling and a loss of skills, to the point where level 1 and level 2 SOC members become something else. The second is the quality of responses. AI and LLMs can perform some unique tasks, but the value of SOC techniques lies in repeatable quality, ensuring the same outcome occurs time and time again. Given the volume of activity, how can we measure whether the output produced on day 1 is still produced on day 365? This is of course just one area of security that is affected. I welcome progress, but in my opinion it must be tested and introduced in a measured way to support change.

