Brazil’s LGPD: Anonymization, Pseudonymization, and Access Requests


The Lei Geral de Proteção de Dados (LGPD), Brazil’s general data protection law, sets out the rules that organizations handling personal data must follow, unless they anonymize that data. This article delves into data anonymization and pseudonymization, what each means for data processing activities, and the LGPD’s stringent response time for access requests. We compare these selected aspects of the LGPD with the GDPR, and explore how technologies like those from Private AI can help organizations render data anonymized or pseudonymized efficiently and comply with the onerous access request obligations.

Understanding Anonymization under the LGPD

  1. Definition and Scope: Anonymized data is defined as data relating to a data subject who cannot be identified, considering the use of reasonable and available technical means at the time of the processing. The LGPD also defines anonymization as the use of reasonable and available technical means, at the time of the processing, by which data loses the possibility of direct or indirect association with an individual. Lastly, the LGPD provides that anonymized data shall not be considered personal data, except when the anonymization process it underwent is reversed using its own means, or when it may be reversed with reasonable efforts. This definition aligns somewhat with the GDPR, under which anonymized data is likewise considered outside the scope of data protection law because the data subject is not identifiable.

     

  2. Usage of Anonymized Data: Under the LGPD, once data is anonymized, it is no longer considered personal data and falls outside the act’s scope. This means organizations can use anonymized data freely, without adhering to the privacy protections and rights obligations required for personal data, offering a pathway for analytics, research, and other data-driven activities while maintaining compliance. The language of the law is not terribly strict on this point: for example, it provides that processing of personal data may only be carried out under enumerated circumstances, one of which is the carrying out of studies by research entities, ensuring, whenever possible, the anonymization of personal data. The provision for the processing of sensitive personal data reads identically in this regard. A simplified sketch of what anonymization can look like in practice follows this list.

     

  3. Comparative Analysis with GDPR: The GDPR and LGPD share similarities in their approach to anonymization. Both consider anonymized data as non-personal, freeing it from the respective data protection regulations. However, the GDPR is more explicit about the irreversibility of anonymization, implying a higher standard for the process. An interesting detail of the LGPD is that it explicitly excludes data from the definition of anonymized data when it is used to formulate behavioral profiles of a particular natural person, if that person is identified. It is unclear what may have prompted this exclusion, since it is rather obvious that the definition of anonymized data would not apply in this scenario. Let’s take it as a signaling provision that emphasizes the sensitivity of personal profiles.
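To make the distinction concrete, below is a minimal Python sketch of one common anonymization approach: suppressing direct identifiers and generalizing quasi-identifiers, with no mapping retained. The field names and rules are illustrative assumptions, not a prescribed method; whether the output truly counts as anonymized under the LGPD depends on whether reasonable and available technical means could still re-identify the individual.

```python
# Minimal, hypothetical sketch of anonymization by suppression and generalization.
# Field names and rules are illustrative; whether the output is truly anonymized
# depends on the re-identification risk in the specific context.

def anonymize_record(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers; keep no mapping back."""
    decade = (record["age"] // 10) * 10
    return {
        # Name, CPF, and email are suppressed entirely.
        "age_band": f"{decade}-{decade + 9}",       # exact age -> ten-year band
        "region": record.get("state", "unknown"),   # city dropped; only state kept
        "diagnosis": record["diagnosis"],           # the value of analytic interest
    }

record = {
    "name": "Maria Souza",
    "cpf": "123.456.789-00",
    "email": "maria@example.com",
    "age": 34,
    "city": "Campinas",
    "state": "SP",
    "diagnosis": "hypertension",
}

print(anonymize_record(record))
# {'age_band': '30-39', 'region': 'SP', 'diagnosis': 'hypertension'}
```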

The Role of Pseudonymization

  1. LGPD’s Stance on Pseudonymization: Pseudonymization under the LGPD involves processing personal data in such a way that it can no longer be attributed to a specific data subject without the use of additional information, which must be kept separately by the controller in a controlled and secure environment. This process, while a valuable security measure, does not change the data’s status as personal under the LGPD, unlike anonymization; a simplified sketch follows this list.

     

  2. Impact on Compliance: Pseudonymized data still requires adherence to the LGPD’s provisions. In fact, it is mentioned as a recommended security measure when processing personal data for studies in public health, an oddly narrow scope for this useful technique.
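By contrast with anonymization, pseudonymization keeps a route back to the individual, which is why the data remains personal. The sketch below is a simplified assumption of how this might be implemented: identifiers are replaced with random tokens, and the re-identification table is held separately from the working dataset.

```python
# Simplified, hypothetical sketch of pseudonymization: identifiers are swapped for
# random tokens, and the token-to-identity mapping is kept apart from the working
# data (in practice, in a controlled and secure environment held by the controller).
import secrets


class Pseudonymizer:
    def __init__(self) -> None:
        # Re-identification table; must be stored separately from the pseudonymized data.
        self._mapping: dict[str, str] = {}

    def pseudonymize(self, identifier: str) -> str:
        token = f"SUBJ-{secrets.token_hex(8)}"
        self._mapping[token] = identifier
        return token

    def reidentify(self, token: str) -> str:
        # Only possible with access to the separately held mapping.
        return self._mapping[token]


p = Pseudonymizer()
working_record = {"subject": p.pseudonymize("123.456.789-00"), "diagnosis": "hypertension"}
print(working_record)                            # e.g. {'subject': 'SUBJ-3f9a...', ...}
print(p.reidentify(working_record["subject"]))   # '123.456.789-00' (still personal data)
```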

The Significance of Rapid Response to Access Requests

If the use case for processing the data does not allow for anonymization, data subjects have a right to access the information that organizations governed by the LGPD hold about them. The law mandates a short response time of 15 days for data subject access requests, compared to 30 days under the GDPR, emphasizing the need for efficient data management systems. Organizations must be prepared to promptly identify, access, and compile personal data in response to these requests.
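As a rough illustration of what that preparation involves, the hypothetical sketch below computes the 15-day deadline and gathers a subject’s records from a few assumed internal data stores; real systems would of course span far more sources and formats.

```python
# Hypothetical sketch of handling an LGPD access request: compute the 15-day
# deadline and compile a subject's records from a few assumed internal stores.
from datetime import date, timedelta

LGPD_RESPONSE_DAYS = 15  # full-declaration deadline under the LGPD

def due_date(received: date) -> date:
    return received + timedelta(days=LGPD_RESPONSE_DAYS)

def compile_subject_data(subject_id: str, data_stores: list[dict]) -> list[dict]:
    """Gather every record matching the data subject across the listed stores."""
    return [record
            for store in data_stores
            for record in store["records"]
            if record.get("subject_id") == subject_id]

stores = [
    {"name": "crm", "records": [{"subject_id": "42", "email": "maria@example.com"}]},
    {"name": "support", "records": [{"subject_id": "7", "ticket": "login issue"}]},
]

print(due_date(date(2024, 3, 1)))          # 2024-03-16
print(compile_subject_data("42", stores))  # all records held about subject 42
```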

Private AI’s Contribution to LGPD Compliance

  1. Facilitating Efficient Data Mapping: Private AI’s technology can swiftly identify and categorize over 50 entity types of personal data, a necessity for complying with the LGPD’s access request deadlines. Particularly where unstructured data, such as free text, is concerned, this can otherwise be a time-consuming process, depending on the amount of data in an organization’s systems. A sketch of how such a service might be called appears after this list.

     

  2. Enhancing Anonymization Processes: By utilizing advanced algorithms optimized for various file types, Private AI can help organizations effectively anonymize data, ensuring it falls outside the LGPD’s purview.

     

  3. Supporting Multilingual and Context-Sensitive Processing: Private AI’s ability to handle diverse languages and contextual nuances aligns with the LGPD’s notably broad territorial scope, which likely captures great linguistic diversity. The LGPD applies not only to processing carried out in Brazil but also to processing related to offering goods or services to individuals in Brazil. It also applies where “the personal data being processed were collected in the national territory,” and explains that “data collected in the national territory are considered to be those whose data subject is in the national territory at the time of collection.” In summary, if an organization processes personal data related to individuals in Brazil, the LGPD applies regardless of the origin of that data. It comes in very handy, then, if the tool used for personal data detection and redaction supports 52 languages!
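For illustration, the sketch below shows how an organization might wire a PII detection and redaction service into a pipeline. The endpoint URL, request payload, and response shape are assumptions made for the example, not the documented API of Private AI or any other vendor.

```python
# Hypothetical sketch of calling a PII detection/redaction service over HTTP.
# The endpoint, payload, and response shape are assumptions for illustration only.
import json
import urllib.request


def redact_text(text: str, api_url: str, api_key: str) -> str:
    payload = json.dumps({"text": text, "language": "pt"}).encode("utf-8")
    request = urllib.request.Request(
        api_url,
        data=payload,
        headers={"Content-Type": "application/json", "X-API-KEY": api_key},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read())
    return body["redacted_text"]  # assumed response field


# Example usage (requires a real service behind the URL):
# redacted = redact_text("Maria Souza mora em Campinas, CPF 123.456.789-00.",
#                        "https://example.com/redact", "YOUR_API_KEY")
# print(redacted)
```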

Conclusion

Brazil’s LGPD places significant emphasis on the proper handling of personal data, with many more obligations than are covered here. This article highlighted that anonymization offers a gateway for organizations to utilize data without the constraints of the LGPD, provided the process cannot be reversed with reasonable effort. Additionally, one of the most stringent access request timelines globally can likely not be met if attempted manually, given the vast amount of data many companies process today. In this context, Private AI’s technology emerges as a critical tool, enabling organizations to navigate these complex requirements efficiently and effectively, enhancing data privacy and security in Brazil’s digital ecosystem. Try it on your own data using our web demo or get a free API key here.
