Leveraging Private AI to Meet the EDPB’s AI Audit Checklist for GDPR-Compliant AI Systems


As the European Union continues to strengthen its data protection and artificial intelligence (AI) regulations, organizations are seeking innovative ways to ensure compliance. Private AI, a cutting-edge approach to machine learning that prioritizes data privacy, has emerged as a powerful tool in this landscape. This article explores how Private AI can help organizations adhere to both the General Data Protection Regulation (GDPR) and the EU AI Act, drawing insights from the AI Auditing project initiated by the European Data Protection Board (EDPB). Launched in June 2024, this project aims to develop and pilot tools for evaluating the GDPR compliance of AI systems, providing a crucial framework for understanding and implementing data protection safeguards in the context of AI systems.

The Audit Checklist

At its core, the EDPB’s Audit Checklist is an end-to-end, socio-technical algorithmic audit (E2EST/AA) methodology, which goes beyond mere technical evaluations. This comprehensive checklist examines AI systems in their real-world contexts, considering not only the algorithms themselves but also their societal impacts and the complex environments in which they operate. The audit covers crucial aspects such as bias assessment, social impact, user participation in design, and the availability of recourse mechanisms for affected individuals. By providing a structured framework for evaluating AI systems used in ranking, image recognition, and natural language processing, the EDPB’s checklist offers a powerful tool for both regulators and AI developers to ensure compliance with the GDPR and the EU AI Act.

Audit Aspects with which Private AI Can Help

The EDPB’s AI Audit Checklist provides a comprehensive framework for assessing AI systems’ compliance with data protection regulations. However, implementing these guidelines can be challenging for organizations. This is where Private AI’s technologies can play a crucial role in addressing several key aspects of the Audit Checklist.

Personal Information Detection and Classification

One of the fundamental requirements in the EDPB checklist is the proper classification and handling of personal data. Private AI offers advanced solutions that can detect over 50 different types of personal data across 53 languages, with high accuracy across structured, semi-structured, and unstructured data formats. This capability directly addresses the audit question:

  • Have data been previously classified into categories, organizing them in non-personal and personal data, and, for the latter, identifying which fields constitute identifiers, quasi-identifiers and special data categories?

By utilizing Private AI’s detection capabilities, organizations can efficiently categorize their data, identifying personal information and special data categories. This not only aids in compliance but also supports data minimization efforts, another crucial aspect highlighted in the Checklist.
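As a simplified illustration of what this classification step can look like downstream of detection, the sketch below buckets detected entity labels into the categories named in the audit question. The label names and category mapping here are hypothetical examples for illustration, not Private AI's exact label set:

```python
# Illustrative mapping from detected entity labels to the GDPR-relevant
# categories named in the audit question. Labels are hypothetical examples.
GDPR_CATEGORY = {
    "NAME": "identifier",
    "EMAIL_ADDRESS": "identifier",
    "AGE": "quasi-identifier",
    "LOCATION": "quasi-identifier",
    "HEALTH_CONDITION": "special category",  # Art. 9 GDPR special category
    "RELIGION": "special category",
}

def classify_entities(detected_labels):
    """Bucket detected entity labels; anything unmapped is flagged for review."""
    buckets = {"identifier": [], "quasi-identifier": [],
               "special category": [], "review": []}
    for label in detected_labels:
        buckets[GDPR_CATEGORY.get(label, "review")].append(label)
    return buckets

print(classify_entities(["NAME", "AGE", "HEALTH_CONDITION", "IP_ADDRESS"]))
```

In practice the input labels would come from the detection step, and the mapping would be reviewed by the organization's data protection team rather than hard-coded.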

Data Minimization and De-identification

The Audit Checklist emphasizes the importance of data minimization and de-identification strategies. Private AI’s redaction capabilities align perfectly with this requirement, addressing the following audit questions:

  • Have data minimisation criteria been determined and applied to the different stages of the AI component, using strategies such as data hiding, separation, abstraction, anonymisation and pseudonymisation that might apply for the purposes of maximising privacy in the operation of the relevant AI-based component?
  • Have segregation and de-identification strategies been implemented on additional information that is not required for training purposes but shall be required in the verification and validation processes of the model’s behaviour?

Private AI’s ability to redact or replace personal information with placeholders, tokens, or synthetic data enables organizations to implement robust pseudonymization and de-identification strategies. This is particularly valuable for unstructured data, which constitutes approximately 80% of all data and is often challenging to process while maintaining privacy.
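To make the placeholder-based pseudonymization concrete, here is a minimal, self-contained sketch of the general technique: each detected span is replaced with a numbered placeholder token, and a local map is kept so the text can be re-identified later. This is a conceptual illustration only, not Private AI's implementation; the spans would come from a detection step:

```python
def pseudonymize(text, spans):
    """Replace each (start, end, label) span with a numbered placeholder,
    returning the redacted text plus a map for later re-identification."""
    mapping = {}
    out, cursor, counter = [], 0, {}
    for start, end, label in sorted(spans):
        counter[label] = counter.get(label, 0) + 1
        token = f"[{label}_{counter[label]}]"
        mapping[token] = text[start:end]
        out.append(text[cursor:start])
        out.append(token)
        cursor = end
    out.append(text[cursor:])
    return "".join(out), mapping

redacted, mapping = pseudonymize(
    "Alice visited Berlin.", [(0, 5, "NAME"), (14, 20, "LOCATION")]
)
print(redacted)  # [NAME_1] visited [LOCATION_1].
```

Because the mapping stays local to the organization, the pseudonymized text can be shared or processed while the link back to the data subject remains under the controller's exclusive control, which is what distinguishes pseudonymization from anonymization under the GDPR.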

Bias Control and Data Integrity

The EDPB checklist also addresses the critical issue of bias in AI systems. While Private AI does not directly remove bias, its comprehensive personal information detection and redaction can support bias identification and mitigation efforts. For example, removing identifiers such as race, gender, and age from a dataset can reduce bias at the inference stage. This functionality aligns with the audit question:

  • Have appropriate procedures been defined in order to identify and remove, or at least limit, any bias in the data used to train the relevant model?

By accurately identifying personal information, organizations can better assess potential sources of bias in their training data and take appropriate corrective actions, e.g., by redacting the relevant identifiers.
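The selective-redaction idea can be sketched as follows: only spans whose labels belong to a bias-prone set are blanked out, while other detected entities are left intact. The label names and the choice of bias-prone categories below are illustrative assumptions; in a real deployment they would reflect the organization's bias assessment:

```python
# Illustrative set of entity labels an organization might treat as bias-prone.
BIAS_PRONE = {"RACE", "GENDER", "AGE", "RELIGION"}

def strip_bias_prone(text, spans):
    """Redact only (start, end, label) spans whose label is bias-prone,
    leaving all other detected entities in place."""
    out, cursor = [], 0
    for start, end, label in sorted(spans):
        if label not in BIAS_PRONE:
            continue  # keep non-bias-prone entities as-is
        out.append(text[cursor:start])
        out.append(f"[{label}]")
        cursor = end
    out.append(text[cursor:])
    return "".join(out)

print(strip_bias_prone(
    "Jane, 42, female, engineer",
    [(0, 4, "NAME"), (6, 8, "AGE"), (10, 16, "GENDER")],
))  # Jane, [AGE], [GENDER], engineer
```

Note that this addresses direct identifiers only; quasi-identifiers such as postal codes can still act as proxies for protected attributes, which is why the checklist asks for procedures rather than a single technical fix.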

For Developers

Integrate privacy into your LLM applications with just three lines of code. Replace sensitive data with entity labels, tokens, or synthetic information at training, fine-tuning, embeddings creation, and prompting stages. All of this with the added benefit of helping reduce bias in LLM responses by removing entities such as religion, physical location, and other indirect identifiers.

				
					import openai
from privateai_client import PAIClient
from privateai_client import request_objects

@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
def chat_completion_with_backoff(**kwargs):
    return openai_client.chat.completions.create(**kwargs)

def redact(raw_text):
    request_obj = request_objects.process_text_obj(text=[raw_text])
    response_obj = pai_client.process_text(request_obj)
    return response_obj

def secure_completion(prompt, raw_text, temp):
    ######## REDACT DATA #####################
    completions = {}
    response_obj =  redact(raw_text)

    ######## BUILD LOCAL ENTITIES MAP TO RE-IDENTIFY ########
    deidentified_text = response_obj.processed_text
    completions['redacted_text'] = deidentified_text
    entity_list = response_obj.get_reidentify_entities()
    
    ######## SEND REDACTED PROMPT TO LLM #####
    MODEL = "gpt-4"
    completion = chat_completion_with_backoff(
            model=MODEL,
            temperature=temp,
            messages=[
            {"role": "user", 
             "content": f'{prompt}: {deidentified_text}'}
            ]
        )
    completions["redacted_completion"] = completion.choices[0].message.content
    
    ######## RE-IDENTIFY COMPLETION ##########
  
    request_obj = request_objects.reidentify_text_obj(
        processed_text=[completion.choices[0].message.content], entities=entity_list
    )
    response_obj = pai_client.reidentify_text(request_obj)
    completions["reidentified_completion"] = response_obj.body[0]
    return completions

				
			

Security and Confidentiality

Finally, Private AI’s technologies contribute significantly to data security and confidentiality, addressing several audit questions related to Articles 5.1.f, 25, and 32 of the GDPR. The ability to anonymize or pseudonymize data effectively helps organizations implement measures to “ensure protection of the processed data” and “guarantee confidentiality.” The specific audit questions that Private AI can help answer in the affirmative are:

  • Are measures to ensure protection of the processed data implemented, particularly those oriented to guarantee confidentiality by means of data anonymisation or pseudonymisation, and integrity to protect component implementation from accidental or intentional manipulation?
  • Have standards and best practices been taken in consideration for secure configuration and development of the AI relevant component?
  • Have procedures been implemented in order to properly monitor the functioning of the component and early detect any potential data leak, unauthorised access or other security breaches?

Since Private AI’s technology can be deployed at any point in the technology stack, it can be used both to limit leaks of personal data and to flag unauthorized access.

For Security Teams

Set up Federated Control over each LLM workflow within your organization. 

A control panel allows you to determine which products, teams, or employees can send which types of personal and sensitive data to LLMs, even based on your pre-existing access control settings. 

Easy API integration allows you to send detection events to any monitoring system of your choice.
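As a rough sketch of what forwarding a detection event to a monitoring system might involve, the function below assembles a structured event from detected entity labels. The event schema, field names, and severity rule are all illustrative assumptions, not a Private AI API; the payload would be sent to whatever SIEM or logging endpoint the security team chooses:

```python
import datetime
import json

def detection_event(entity_labels, source, policy="default"):
    """Build an illustrative monitoring event for detected sensitive entities.
    The schema and severity rule here are assumptions for the sketch."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source": source,
        "policy": policy,
        "detected": sorted(set(entity_labels)),
        # Escalate when high-risk categories appear (illustrative rule).
        "severity": "high" if any(
            label in {"HEALTH_CONDITION", "CREDIT_CARD"} for label in entity_labels
        ) else "info",
    }

event = detection_event(["NAME", "CREDIT_CARD"], source="support-chat-llm")
print(json.dumps(event, indent=2))
```

Structured events like this let existing alerting rules (for example, "page on any high-severity detection from an LLM workflow") apply to AI traffic without new tooling.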


The advantage of using Private AI over other DLP solutions is that we are able to detect personal identifiers and confidential corporate information in messy data, e.g., when the inputs contain typos or other non-standard formats that regular expressions would miss. What is more, our solution works for all major file types and in over 50 languages, allowing organizations to detect such data no matter how it is sent to the AI system.

Conclusion

As organizations strive to comply with the GDPR and prepare for the EU AI Act, the EDPB’s AI Audit Checklist serves as a valuable guide. Private AI technologies offer powerful tools to address many of the checklist’s key requirements, particularly in the areas of personal data detection, classification, and protection. By leveraging these technologies, organizations can enhance their compliance efforts, minimize risks, and unlock the value of their data while respecting individual privacy rights. You can try Private AI’s solutions by accessing the web demo or by obtaining a free API key here.
