
One of the most challenging aspects of artificial intelligence (AI) from a privacy perspective is its reliance on the collection, use and analysis of large volumes of data – a reliance that is often at odds with privacy and the rights of individuals. 

NIST is an internationally recognised leader with a reach far beyond the United States, and its efforts to provide guidance and tools around AI are well worth examining. One such example is the NIST AI Risk Management Framework (AI RMF), which is intended to assist with the management of the cyber security and privacy risks presented by AI.  

Edward Starkie

Director, GRC | Cyber Risk

estarkie@thomasmurray.com

Key AI privacy considerations at a glance 

Data breaches and leaks: AI systems often rely on vast amounts of personal data, increasing the risk and potential impact of data breaches. The volume of data collected and processed by AI systems makes them an attractive target for attackers.  

Algorithmic bias and discrimination: AI algorithms may perpetuate or amplify existing biases, leading to unfair or discriminatory outcomes, especially in sensitive areas like hiring, lending, and law enforcement. Automated decisions made by AI models in these areas must be open to challenge and reviewed by competent individuals within the business.  

Lack of transparency and explainability: Many AI systems operate as “black boxes,” making it difficult to understand how decisions are made or who is accountable for them. A lack of transparency undermines trust and makes it hard to identify privacy violations – a situation not helped by the inherent complexities of explaining the underlying technology of AI. 

Surveillance and tracking: AI-powered technologies like facial recognition and location tracking raise concerns about mass surveillance and infringement of individual privacy rights. 

Data hoarding: There is a risk of excessive data collection and storage beyond what is necessary, “just in case” it might be useful for AI training or analysis. As is the case for all personal data collected, the appropriate permissions will need to be obtained. 

Re-identification risks: AI's analytical power creates new risks of re-identifying individuals from supposedly anonymised datasets. AI tooling can combine and analyse multiple data sets, increasing the re-identification risk.  
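The linkage risk described above can be illustrated with a minimal, purely illustrative sketch: two fabricated datasets, neither of which is identifying on its own, are joined on shared quasi-identifiers (here, an assumed postcode and birth year) to attach names back to sensitive records.

```python
# Toy illustration of re-identification by linking two "anonymised" datasets.
# All data below is made up for the example.

health_data = [  # "anonymised": no names, but sensitive attributes
    {"postcode": "SW1A", "birth_year": 1980, "diagnosis": "asthma"},
    {"postcode": "EC2A", "birth_year": 1975, "diagnosis": "diabetes"},
]

public_register = [  # public: names, but nothing sensitive
    {"name": "A. Smith", "postcode": "SW1A", "birth_year": 1980},
    {"name": "B. Jones", "postcode": "EC2A", "birth_year": 1975},
]

def reidentify(anonymised, public):
    """Join the two datasets on their shared quasi-identifiers."""
    matches = []
    for record in anonymised:
        for person in public:
            if (record["postcode"], record["birth_year"]) == (
                person["postcode"], person["birth_year"]
            ):
                matches.append(
                    {"name": person["name"], "diagnosis": record["diagnosis"]}
                )
    return matches

print(reidentify(health_data, public_register))
```

A simple nested-loop join is enough here; at scale, AI tooling can perform this kind of probabilistic matching across many datasets and far fuzzier quasi-identifiers, which is what makes the risk acute.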

Model inversion and membership inference attacks: These are a newer class of privacy attack in which personal data used to train AI models can be inadvertently revealed through the outputs of the system itself.  
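The intuition behind membership inference can be shown with a minimal sketch using made-up data: an overfit model is noticeably more confident on records it was trained on, so an attacker who can query it can guess training-set membership from a confidence threshold. The model and threshold below are deliberately simplistic stand-ins, not a real attack implementation.

```python
# Toy illustration of a membership inference attack (hypothetical data).
# A model that overfits its training set gives itself away: it is far more
# confident on records it has seen than on records it has not.

def train(records):
    """'Train' a toy model that simply memorises its training records."""
    return set(records)

def confidence(model, record):
    """Overfit model: near-certain on memorised records, uncertain otherwise."""
    return 0.99 if record in model else 0.55

def infer_membership(model, record, threshold=0.9):
    """Attacker's test: high confidence suggests the record was in training."""
    return confidence(model, record) > threshold

training = [("alice", 34), ("bob", 52)]
model = train(training)

print(infer_membership(model, ("alice", 34)))   # record seen in training
print(infer_membership(model, ("carol", 29)))   # record never seen
```

Real attacks replace the lookup with queries against an actual model's confidence scores, but the privacy lesson is the same: the gap in behaviour between seen and unseen data leaks information about who was in the training set.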

Unintended data exposure through explanations: Efforts to make AI systems more explainable may inadvertently reveal sensitive information about the training data or individuals. 

Automated decision-making: The potential for AI to make important decisions about individuals without human oversight raises privacy and fairness concerns. 

Data security vulnerabilities: AI systems may introduce new attack vectors and security vulnerabilities that can be exploited to compromise personal data. 

Addressing these risks requires a combination of technical measures, governance frameworks, and ethical considerations in the development and deployment of AI systems. 

The NIST AI Risk Management Framework 

NIST's AI RMF serves as a foundation for addressing these challenges. It identifies a wide range of risks associated with AI, from safety concerns to issues of transparency and accountability. The programme builds on this framework, focusing specifically on the intersection of AI with cyber security and privacy. 

Core challenges and opportunities 

The programme identifies several critical areas where AI impacts cyber security and privacy. 

Privacy risks: AI's analytical power creates new re-identification risks and potential data leakage from model training. However, AI could also power personal privacy assistants that help individuals manage their online privacy. Regional differences in privacy regulation further complicate global compliance. 

Cyber security implications: AI presents both opportunities and challenges for cyber security. While it can enhance threat detection, it may also increase false positives and require new skills from cyber security practitioners. The cat and mouse game of ‘attackers vs defenders’ will continue – but now the attackers have greater freedom.  

Organisational impact: As AI is used in different business areas, organisations must rethink the value of their data. They should update their inventories and consider new threats and risks. The introduction of this technology into an unsecured or unstructured environment could be a Pandora’s box of risk for businesses.  

AI-enabled threats: The emergence of AI-powered offensive techniques, such as voice generators for phishing attacks, necessitates updates to defensive strategies and training programmes. Individuals must not only be aware that such attacks exist, but also be able to identify attacks that undermine long-held assumptions about taking communications from colleagues at face value. 

Call for collaboration 

NIST is now calling on the cyber security and privacy community to help advance this programme. As AI continues to transform the digital landscape, NIST's initiative will be vital in ensuring that cyber security and privacy practices evolve to meet the challenges and opportunities of this new era. 

Analysis and next steps for businesses 

Like all new technologies, the adoption of AI offers significant benefits. For organisations to realise these benefits in full and sustainably, privacy concerns and wider AI-related risk must be assessed and managed.  

With the rush to embrace AI, NIST's input and focus in this space is welcome; however, it should not be seen as a magical solution to all of an organisation's operational bugbears. NIST's initiative is a first step towards helping organisations align their objectives with their risk appetites. Specific regulation and legislation will remain a core consideration for businesses adopting AI technologies. 

At Thomas Murray we have extensive experience of working with businesses to manage both regulatory compliance and wider risk. Our approach is simple: we bring the best to our clients by leveraging best practice, pragmatism, and insights from across the threat landscape and industry. Talk to us about how we can help your business to thrive – not just survive – amid these rapid changes. 
