FTC Privacy Enforcement Actions Against AI Companies


In the US, there is as yet no comprehensive federal privacy legislation, but one federal agency, the Federal Trade Commission (FTC), is at the forefront of notable enforcement actions. In the absence of comprehensive federal privacy law, the FTC bases its privacy enforcement actions either on narrowly applicable privacy statutes, such as the Children’s Online Privacy Protection Act Rule (COPPA Rule) or the Health Insurance Portability and Accountability Act (HIPAA), or on laws with a broader scope that include privacy protections only tangentially, such as the FTC Act, which empowers the FTC to take action against deceptive or unfair trade practices. This includes addressing deceptive practices related to privacy policies and data protection. The sharpest tool in the FTC’s enforcement toolkit is probably its ability to require the deletion of models and algorithms trained on data in violation of legal obligations.

The FTC also focuses on education, raising awareness among AI companies and private individuals of their obligations and rights, respectively. Most recently, it published an article titled “AI Companies: Uphold Your Privacy and Confidentiality Commitments,” in which it reminds companies that it has the mandate and power to hold AI companies accountable for the claims they make with regard to their products, whether in their advertisements, privacy statements, or Terms of Service.

Privacy Violation in the AI Context

The most important privacy considerations in the context of AI companies are:

  • Using personally identifiable information (PII) for purposes other than those for which it was collected, e.g., to train an AI model, contrary to the commitment made to the individual when they agreed to provide their data;
  • Failing to disclose such additional purposes of use altogether; and
  • Hiding the relevant disclosure behind hyperlinks or in non-comprehensive fine print.

The FTC can pursue AI companies for such practices by framing these privacy-related misrepresentations and omissions as unfair methods of competition. Consumer trust matters, and privacy plays an increasingly important role in earning it. Companies that develop their AI solutions under the pretense of handling PII responsibly, or of not using it at all, gain an advantage over companies that disclose their use of PII, or that put additional effort into ensuring that they don’t use it.

Privacy Protection in the AI Context

The most straightforward way to avoid violating competition, consumer protection, and antitrust laws is to make the proper disclosures and obtain consent for the use of PII for model-training purposes. Where the use case allows for it, or where proper consent is onerous to obtain, another option is to filter the PII out of the data set before it is used to train the model.
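To make the filtering step concrete, here is a minimal sketch of the idea in Python. It uses only simple regular expressions for two obvious PII types (email addresses and US-style phone numbers), which is nowhere near sufficient for production use; real PII detection relies on trained named-entity recognition models. The patterns, labels, and function names below are illustrative assumptions, not any particular vendor’s API.

```python
import re

# Illustrative patterns only: these catch obvious emails and US-style
# phone numbers. Real-world PII (names, addresses, health data, etc.)
# requires NER-based detection, not regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def scrub_dataset(records: list[str]) -> list[str]:
    """Redact every record before it reaches the training pipeline."""
    return [redact(r) for r in records]

sample = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(redact(sample))  # → Contact Jane at [EMAIL] or [PHONE].
```

The key design point is that redaction happens upstream of training: once the scrubbed records are what the pipeline ingests, the model never sees the raw identifiers, so there is nothing for it to memorize or leak.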

Here, Private AI can help. With its ability to identify and redact more than 50 entity types of PII, Private AI is well equipped to help with the difficult task of reliably removing PII from data sets at scale. To see the tech in action, try our web demo, or get a free API key to try it yourself on your own data.

