Securing Data in the Age of AI

With the advent of Generative AI, there is a lot of excitement around the possibilities, and rightly so. Many organisations are re-imagining their business processes, exploring how Generative AI can help "level up" their people, or even rethinking their entire business model.

For this reason, Copilot for Microsoft 365 is proving massively popular. Businesses want to harness their organisation's knowledge, and given that so much of this knowledge lives within Teams, Exchange, OneDrive and SharePoint, Copilot for Microsoft 365 is perfectly positioned to unleash its potential.

But to do so, it's essential to have the right foundations in place; otherwise, something intended to bring great value may instead bring significant headaches.


The Security Foundations of a Generative AI Strategy

First things first: governing AI is perceived to be complex and shrouded in mystery, when in reality it's quite simple. When devising a governance strategy for Copilot for Microsoft 365, or any Generative AI tool, the key thing to understand is that AI does not create security gaps; it exposes the ones you already have.

Tools like Copilot for Microsoft 365 inherit your organisation's and users' security controls. This means that anything a user has access to, Copilot has access to too. So, with the click of a button or the typing of a query, users have a wealth of information at their fingertips. This is a huge boost for productivity, but on the flip side it can amplify any security gaps that exist in your organisation.
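This permission inheritance can be illustrated with a toy sketch (this is not the real Copilot retrieval API; the documents, group names and access logic are made up purely to show the principle that retrieval is trimmed by what the signed-in user can already access):

```python
# Toy illustration: an AI assistant that inherits user permissions only
# surfaces content the user can already access. Over-permissioned users
# therefore see far more -- the "amplified security gap" described above.

documents = [
    {"title": "Company handbook", "acl": {"everyone"}},
    {"title": "M&A due diligence", "acl": {"exec-team"}},
    {"title": "Payroll 2024", "acl": {"hr-team"}},
]

def retrieve_for_user(user_groups: set[str]) -> list[str]:
    """Return only the documents the user's group memberships grant access to."""
    effective_groups = user_groups | {"everyone"}
    return [d["title"] for d in documents if d["acl"] & effective_groups]

# A user with no special group memberships sees only broadly shared content:
print(retrieve_for_user(set()))  # ['Company handbook']

# An over-permissioned user sees everything at once:
print(retrieve_for_user({"exec-team", "hr-team"}))
```

The point of the sketch: the AI tool adds no new access of its own; it simply makes whatever access already exists far easier to exercise.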

The days of "security by obscurity" - where something was considered secure simply because it was hard to find - are over.

The guide below is intended to demystify these risks and the tactics to mitigate them, in alignment with Zero-Trust principles.



Identities

  • Overview: Over-privileged and risky users might have access to sensitive data they shouldn't, leading to potential data breaches.
  • Why This Matters: 1) If a user's account is compromised, attackers can access a wealth of sensitive information, increasing the blast radius of an attack; 2) if a user leaves your organisation to join a competitor, their extensive access increases the potential damage caused by data theft.
  • How This Can Be Addressed: 1) Implement risk-based Conditional Access policies and multi-factor authentication (MFA) to harden identities; 2) review user access leveraging Oversharing Assessments and Access Reviews; 3) automate your organisation's Joiner, Mover, Leaver (JML) process to ensure that users only have access to the resources required in their current role.
  • Microsoft Technology: Microsoft Entra
  • Zero-Trust Principle: Verify explicitly, Assume Breach
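As a sketch of what a risk-based Conditional Access policy looks like in practice, the payload below follows the shape used by the Microsoft Graph endpoint POST /identity/conditionalAccess/policies. The display name and scoping choices are illustrative assumptions; validate the fields against the current Graph schema and your tenant before use:

```python
# Sketch of a risk-based Conditional Access policy payload requiring MFA
# when Entra detects elevated sign-in risk. This builds the JSON body only;
# it does not call the Graph API.
import json

policy = {
    "displayName": "Require MFA for medium-or-high sign-in risk",
    "state": "enabled",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        # Risk-based trigger: only fires on risky sign-ins
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {
        "operator": "OR",
        # Harden the identity by requiring multi-factor authentication
        "builtInControls": ["mfa"],
    },
}

print(json.dumps(policy, indent=2))
```

Scoping to "All" users and apps keeps the sketch simple; in production you would typically exclude break-glass accounts and pilot the policy in report-only mode first.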



Devices

  • Overview: Devices that are not securely managed can be a gateway for attackers to access corporate resources.
  • Why This Matters: Ensuring that only secure, corporate-managed devices can access Copilot reduces the risk of data theft.
  • How This Can Be Addressed: Limit the use of work apps on personal devices, and implement app protection policies to prevent particular actions being taken (e.g. copy & paste or screenshots)
  • Microsoft Technology: Microsoft Intune
  • Zero-Trust Principle: Assume Breach
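The app protection controls above can be sketched as a Microsoft Graph payload for an Intune policy (POST /deviceAppManagement/androidManagedAppProtections). Property names follow the androidManagedAppProtection resource type; treat the exact values as assumptions to verify against the current Graph schema:

```python
# Sketch of an Intune app protection policy payload limiting data egress
# from managed work apps on personal devices. Builds the JSON body only;
# it does not call the Graph API.
import json

app_protection_policy = {
    "@odata.type": "#microsoft.graph.androidManagedAppProtection",
    "displayName": "Block data egress from work apps",
    # Restrict copy & paste out of managed apps into personal ones:
    "allowedOutboundClipboardSharingLevel": "managedAppsWithPasteIn",
    # Block screenshots inside managed apps:
    "screenCaptureBlocked": True,
}

print(json.dumps(app_protection_policy, indent=2))
```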



Data

  • Overview: Sensitive data can be accessed or shared inappropriately during AI interactions.
  • Why This Matters: Protecting sensitive information is crucial to maintaining compliance and preventing data breaches.
  • How This Can Be Addressed: Apply sensitivity labels and data loss prevention (DLP) policies to protect sensitive information.
  • Microsoft Technology: Microsoft Purview
  • Zero-Trust Principle: Least privileged access
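To make the DLP idea concrete, here is a toy illustration (not the Purview API): a real Purview DLP policy matches sensitive information types, such as credit card numbers, and then blocks or audits the action. This sketch shows the same evaluate-then-decide idea with a single, deliberately rough regex-based detector:

```python
# Toy DLP check: block sharing when text contains card-like numbers.
# The pattern is intentionally simplistic and for illustration only --
# real sensitive information types are far more sophisticated.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def dlp_allows_sharing(text: str) -> bool:
    """Return True when no card-like number is detected in the text."""
    return CARD_PATTERN.search(text) is None

print(dlp_allows_sharing("Meeting notes for Q3 planning"))  # True
print(dlp_allows_sharing("Card: 4111 1111 1111 1111"))      # False
```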



AI Apps

  • Overview: Unsanctioned AI applications can be used to share sensitive data, leading to potential data leaks.
  • Why This Matters: Controlling the use of AI apps ensures that sensitive data is not shared in unsanctioned applications.
  • How This Can Be Addressed: Discover and assess the risk of AI applications within your organisation and block or approve their use.
  • Microsoft Technology: Microsoft Defender for Cloud Apps
  • Zero-Trust Principle: Verify explicitly
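The discover-and-assess workflow that Defender for Cloud Apps automates can be sketched as follows. The app names, risk scores and threshold are made up for illustration; the convention of higher scores meaning lower risk mirrors the Cloud App Catalog's 0-10 scale:

```python
# Toy sketch of app triage: score discovered AI apps and decide whether
# to sanction (approve) or block each one. Higher score = lower risk.

discovered_apps = [
    {"name": "Copilot for Microsoft 365", "risk_score": 9},
    {"name": "UnknownAIChat.example", "risk_score": 2},
]

def triage(apps, min_score: int = 7) -> dict[str, str]:
    """Sanction apps at or above the risk-score threshold; block the rest."""
    return {
        app["name"]: ("sanctioned" if app["risk_score"] >= min_score else "blocked")
        for app in apps
    }

print(triage(discovered_apps))
```

In practice the blocked verdict would be enforced at the network edge or via endpoint controls, closing off unsanctioned AI apps as a data-leak path.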



