Fines Imposed under GDPR for Unsafe Data Processing

There are two tiers of administrative fines under the GDPR, each with its own monetary threshold. The first, under Art. 83(4), covers less severe violations of specific provisions and is capped at € 10 million or 2% of total worldwide annual turnover, whichever is higher. The second, under Art. 83(5), allows fines of up to € 20 million or 4% of total worldwide annual turnover, whichever is higher, for violations of the basic principles of processing and of data subjects’ rights, for illegal cross-border data transfers, and for certain non-cooperation with or disobedience of the supervisory authority. Under Art. 84, EU member states can also enact additional penalties.
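In practice, the cap for a given organization is therefore whichever is larger: the fixed amount or the turnover-based percentage. A minimal sketch in Python, using a purely illustrative turnover figure:

```python
def max_gdpr_fine(annual_turnover_eur: float, severe: bool) -> float:
    """Return the maximum administrative fine under GDPR Art. 83.

    Art. 83(4): up to EUR 10 million or 2% of total worldwide annual turnover,
    whichever is higher. Art. 83(5): up to EUR 20 million or 4%.
    """
    if severe:
        return max(20_000_000, 0.04 * annual_turnover_eur)
    return max(10_000_000, 0.02 * annual_turnover_eur)


# Example: an organization with EUR 2 billion annual turnover (made-up figure)
print(max_gdpr_fine(2_000_000_000, severe=True))   # 80,000,000.0 -> 4% exceeds the EUR 20M floor
print(max_gdpr_fine(2_000_000_000, severe=False))  # 40,000,000.0
```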

This article focuses on a subset of the Art. 83(4) and (5) fines that processors or controllers may face when violating safe data processing obligations, which can be captured under Art. 5 (Principles relating to processing of personal data) and/or Art. 32 (Security of processing). These are the third most commonly fined violations, with the highest fine to date being € 22,046,000.

How High are Unsafe Data Processing Fines?

We would like to be able to tell you that unsafe data processing always falls under the lower Art. 83(4) fines, but that is not the case. Confusingly, a violation of Art. 32 (Security of processing) falls under the lower-fined Art. 83(4) whereas Art. 5 violations (Principles relating to processing of personal data) fall under the higher-fined Art. 83(5), with no principled distinction between the two.

Note that Art. 5 sets out the “Principles relating to processing of personal data,” including integrity and confidentiality, which are also addressed by Art. 32. Hence, a violation of Art. 32 commonly also means that Art. 5 has been violated, in which case the higher fine takes precedence.

According to the GDPR Enforcement Tracker, for violations listed as “Insufficient technical and organisational measures to ensure information security,” the fines range from € 387 (Homeowners Association, Romania) to € 20,450,000 (Marriott International, Inc., UK). The tracker lists 339 violations under this label. All but 11 cite Art. 32 as a violated provision, and 148 also cite Art. 5. Many of these cases involve violations of other articles as well.

It is difficult to determine exactly which fines result from unsafe data processing, as fines are commonly imposed for violations of several provisions at once. For example, if an organization violates a data subject’s rights while also processing data insecurely, it is often impossible to say what portion of the total fine is attributable to which article.

Furthermore, we noticed that the tracker is not entirely reliable regarding which violated article led to the fine. For example, in the Marriott case, the tracker cites Art. 32 as the violated provision, whereas the penalty notice makes clear that the fine was imposed for violations of both Art. 32 and Art. 5, with the latter, as the higher-fined provision, taking precedence.

On the other hand, there are cases fined only for Art. 32 violations and not Art. 5, even though the latter appears to have been applicable as well. For example, the German supervisory authority fined 1&1 Telecom GmbH € 900,000 for an Art. 32 violation. In that case, an individual who called customer service could obtain personal data of 1&1 customers merely by providing a customer’s name and date of birth. The supervisory authority found this authentication method to be insufficient. Under the letter of the law, this could also have been classified as a violation of Art. 5(1)(f), as the data was not “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures (‘integrity and confidentiality’).”
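To make the problem concrete: the weakness in the 1&1 case was that a caller could be “authenticated” with static attributes alone. Below is a minimal sketch of a stronger verification step, assuming a hypothetical call-centre workflow; it is illustrative only, not what the regulator prescribed or what 1&1 implemented.

```python
import hmac
import secrets

# Illustrative only: one way a call centre could verify a caller with a
# short-lived, out-of-band code instead of static attributes such as name
# and date of birth. Not a description of any specific company's system.

def issue_one_time_code(session_store: dict, customer_id: str) -> str:
    """Generate a six-digit code and store it; in a real system it would be
    sent to the customer's registered phone or email (delivery not shown)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    session_store[customer_id] = code
    return code

def verify_caller(session_store: dict, customer_id: str, provided_code: str) -> bool:
    """Check the code the caller reads back, using a constant-time comparison."""
    expected = session_store.pop(customer_id, None)
    if expected is None:
        return False
    return hmac.compare_digest(expected, provided_code)

# Usage
store: dict = {}
code = issue_one_time_code(store, "customer-123")   # delivered out of band
print(verify_caller(store, "customer-123", code))   # True
```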

Consequently, the fines for unsafe data processing could be, and indeed have been, as high as € 20 million or 4% of annual turnover.

How Common are these Fines?

According to the Executive Summary of the fourth edition of the GDPR Enforcement Tracker Report, the violation labelled “insufficient technical and organisational measures to ensure information security” is the third most frequent violation recorded in the tracker (339 fines to date). Numbers one and two are “insufficient legal basis for data processing” (608 fines) and “non-compliance with general data processing principles” (433 fines).

As mentioned above, non-compliance with general data processing principles (Art. 5 violations) can also mean that insufficient technical safeguards were in place. In fact, of the 339 violations labelled “Insufficient technical and organisational measures to ensure information security,” 149 also cite an Art. 5 violation. Conversely, of the 433 violations labelled “Non-compliance with general data processing principles,” 52 also cite Art. 32 as a violated provision.

Conclusion

It is clear that many organizations struggle to implement, or resist implementing, the required safeguards when processing personal data. The reason may be the cost of the tools required to do so, or a lack of awareness of the requirements. The signal from the enforcement authorities is clear: fines are frequently imposed, and they can hurt. That likely makes the cost of non-compliance higher than the cost of compliance, and with enforcement efforts ramping up year over year, the lack of awareness should also subside quickly.

If your concern is safeguarding unstructured data, Private AI can help you determine where personal data lives in your organization, even when it is in foreign languages. Using the latest advancements in machine learning, it minimizes the time needed to identify and categorize your data and to limit it to only what is necessary. Private AI can identify over 50 different entities in 50 languages. To see the tech in action, try our web demo, or request an API key to try it yourself on your own data.
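As a rough illustration of how such a service might be called from your own pipeline, here is a hedged sketch in Python. The endpoint URL, payload fields, and response shape are placeholders, not Private AI’s documented interface; refer to the actual API documentation once you have a key.

```python
import requests  # third-party: pip install requests

# Hypothetical sketch of calling a PII-detection API over HTTP. Everything
# below (URL, headers, payload, response shape) is a placeholder for
# illustration, not a documented interface.

API_URL = "https://api.example.com/v1/deidentify"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # obtained via the request form

def find_personal_data(text: str) -> dict:
    """Send raw text and return whatever entity annotations the service provides."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    sample = "Jane Doe called on 2023-05-01 about her account 4556-7375-8689-9855."
    print(find_personal_data(sample))
```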
