The American Privacy Rights Act – The Next Generation of Privacy Laws


For the longest time, the US was one notable outlier in the global trend of developing federal-level comprehensive privacy laws. The nation, (in)famous for its patchwork approach to privacy, with many sector-specific laws and 15 (soon to be 17) state laws that cover privacy comprehensively, is now (once again) close to joining the other 137 nations worldwide that have comprehensive privacy laws in place. That’s big news in itself, and what is more, the discussion draft text of the American Privacy Rights Act (APRA) contains several nuggets worth talking about, so talking about it we shall!

Who and What Is Covered

APRA introduces a broad scope of applicability, significantly expanding the range of entities and data types subject to privacy regulation compared to most existing state laws and even the European General Data Protection Regulation (GDPR). Unlike many state regulations, APRA includes both businesses and non-profits within its purview, while exempting small businesses unless they sell data or handle information on more than 200,000 individuals for any purpose other than collecting payment for requested services.
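To make the exemption thresholds concrete, here is a minimal sketch in Python. The function and parameter names are ours, not the bill’s, and the small-business definition itself is abstracted into a boolean rather than spelled out:

def small_business_exempt(
    meets_small_business_definition: bool,
    sells_covered_data: bool,
    individuals_handled_beyond_payment: int,
) -> bool:
    """True if an entity could rely on APRA's small-business exemption."""
    if not meets_small_business_definition:
        return False
    if sells_covered_data:
        return False
    # Handling data on more than 200,000 individuals for any purpose other
    # than collecting payment for requested services defeats the exemption.
    return individuals_handled_beyond_payment <= 200_000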

The legislation also delineates roles within the data processing ecosystem, distinguishing between “covered entities” and “service providers”—terms reminiscent of the European concepts of controllers and processors—which clarifies responsibilities for data protection. Special attention is given to data brokers and large data holders, who face heightened obligations, such as registering with the Commission and honoring do-not-collect requests (data brokers), providing concise privacy notices limited to 500 words (large data holders), and responding to access, rectification, and deletion requests in half the time allowed to other covered entities, namely within 15 days (both).

Furthermore, APRA sets specific provisions for high-impact social media companies, recognizing the significant influence these platforms have on personal privacy. 

In terms of the types of data covered, APRA adopts a comprehensive approach similar to European models, regulating any data that can be linked to a specific individual, including sensitive data. Sensitive data is defined very broadly: it includes data points like biometric details, calendar information, call logs, and online activities. This definition encompasses more data types than typically covered under U.S. state laws, and certainly more than the GDPR includes under “special categories of data,” addressing modern privacy concerns such as targeted advertising and the extensive tracking of online behavior. On the other hand, APRA excludes employee data, in contrast to California. It further excludes de-identified data, provided a fairly robust standard is met (see the sketch following the quoted definition below):

DE-IDENTIFIED DATA—The term “de-identified data” means:

A. Information that cannot reasonably be used to infer or derive the identity of an individual, does not identify and is not linked or reasonably linkable to an individual or a device that identifies or is linked or reasonably linkable to such individual, regardless of whether the information is aggregated, provided that the covered entity or service provider:

(i) Takes reasonable physical, administrative, or technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual;

(ii) Publicly commits in a clear and conspicuous manner to:

(I) Process, retain, or transfer the information solely in a de-identified form without any reasonable means for re-identification; and

(II) Not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual; and

(iii) Contractually obligates any entity that receives the information from the covered entity or service provider to:

(I) Comply with all of the provisions of this paragraph with respect to the information; and

(II) Require that such contractual obligations be included in all subsequent instances for which the data may be received.
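Read as a compliance checklist, the definition amounts to a linkability test plus three ongoing obligations that must all hold. Purely as an illustration (the field names are ours, not the statute’s), the conditions compose conjunctively:

from dataclasses import dataclass

@dataclass
class DeidentificationControls:
    technical_measures_in_place: bool    # prong (i): physical/administrative/technical safeguards
    public_commitment_made: bool         # prong (ii): clear, conspicuous no-re-identification pledge
    downstream_contracts_flow_down: bool # prong (iii): recipients bound to the same obligations

def qualifies_as_deidentified(data_not_reasonably_linkable: bool,
                              controls: DeidentificationControls) -> bool:
    """All conditions must hold for data to count as de-identified under APRA."""
    return (
        data_not_reasonably_linkable
        and controls.technical_measures_in_place
        and controls.public_commitment_made
        and controls.downstream_contracts_flow_down
    )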

With regard to the territorial scope, the Act generally preempts state privacy laws to create a uniform national standard, but it retains a complex list of exceptions. The California Privacy Protection Agency, for example, did not receive these preemption provisions well, pointing out that APRA’s data protection standards are considerably lower in several respects than those applicable in California, which, in its view, is not a step in the right direction.

What’s Familiar

Comparing APRA with the GDPR, we note that individual rights to access, correction, deletion, and portability have been incorporated into APRA as well, with the exception of data held exclusively on-device. As mentioned, the important distinction between “covered entities” and “service providers” is also European in origin. Under this distinction, primary responsibility always rests with the covered entity, allowing service providers to store data or process it for the covered entity in a limited way without shouldering the entire responsibility that comes with controlling the data. Other concepts such as data minimization and privacy impact assessments are also present in APRA, but with notable differences compared to the GDPR.

What’s New

The biggest difference to most existing laws, and certainly to the GDPR, is that APRA gives effect to the insight that giving individuals control over their data is in many cases more of a burden than a blessing. Under APRA, future regulations are to establish a centralized opt-out mechanism that relieves individuals of having to decide, for every website they visit, whether they want their data processed for the purposes set out in a privacy policy that no one ever reads anyway. The centralized opt-out mechanism allows individuals to opt out of 1) data transfers to relevant service providers, and 2) targeted advertising.
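APRA leaves the design of this mechanism to future regulations, so any implementation today is speculative. One existing universal opt-out signal such regulations could resemble is Global Privacy Control, which browsers transmit as a “Sec-GPC: 1” request header; a server honoring a signal of that kind might look like this sketch:

def opted_out_of_targeted_ads(request_headers: dict) -> bool:
    # Treat a GPC-style signal as a centralized opt-out of targeted advertising.
    return request_headers.get("Sec-GPC", "").strip() == "1"

def choose_ad_strategy(request_headers: dict) -> str:
    if opted_out_of_targeted_ads(request_headers):
        return "contextual"  # no profile-based targeting for this individual
    return "targeted"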

In furtherance of the goal of making protecting one’s privacy less burdensome, section 102 of APRA implements a strict data minimization standard, shifting the burden of data protection in significant respects to the businesses controlling the data. The thought here seems to be this: if data minimization is required by law, individuals do not have to worry about businesses using their data for purposes outside of the narrowly defined ones set out in the Act.

Under the APRA draft, businesses are permitted to collect, process, retain, or transfer data only to the extent necessary to provide a specific product or service requested, or to communicate with the individual where such communication is reasonably expected in the context of their relationship. Beyond that, there is a list of 17 additional permitted uses. This use restriction cannot be circumvented by obtaining consent for further processing; there is no “these are the permitted uses unless the individual consents to further processing.” Otherwise, covered entities would once again seek to obtain that consent, possibly by using so-called “dark patterns,” such as burying the relevant information in the bowels of their privacy policy or making consenting far more convenient than withholding it.
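That design point, that consent cannot expand the closed list of permitted purposes, can be made concrete with a small sketch (the purpose labels are hypothetical placeholders, not the statutory wording):

PERMITTED_PURPOSES = {
    "provide_requested_service",
    "reasonably_expected_communication",
    # ... plus the 17 enumerated permitted uses in the draft
}

def processing_allowed(purpose: str, user_consented: bool = False) -> bool:
    # user_consented is deliberately ignored: under APRA's data minimization
    # rule, consent cannot authorize a purpose outside the permitted list.
    return purpose in PERMITTED_PURPOSES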

In comparison to the GDPR’s legitimate interest provision, which serves as an alternative to consent as a legal basis for processing, APRA provides more clarity for covered entities: no balancing exercise is required to check whether the interests of individuals outweigh the legitimate interest brought forward by the organization. However, a public interest requirement forms part of the last of the permitted uses, namely conducting a public or peer-reviewed scientific, historical, or statistical research project. This permitted use also carries an affirmative consent requirement, on top of the public interest determination, where sensitive covered data is concerned.

In the first version of the discussion draft, there were considerable issues with the permitted use case of targeted advertising. For example, it excluded the processing of sensitive data for this purpose, but the definition of sensitive data encompassed much of the information needed for targeted advertising, e.g., “information revealing an individual’s online activities over time and across websites or online services.” The revised draft now carves out these data elements (except where they pertain to a minor) and permits their use for targeted advertising, clarifying also that an individual’s opt-out from targeted advertising trumps the permission granted in the data minimization section.

Another issue that surfaced with regard to the previous APRA discussion draft was that the data minimization provisions seemed to apply to service providers as well. It will often be difficult for service providers to determine whether the data minimization standard is met, as this is in most cases within the control of the covered entity. Consider a cloud service provider that stores user data for multiple businesses. Under data minimization principles, it should only hold data that is necessary for its function, yet it may not have the information needed to determine what data is essential for the services its clients provide to end users. This ambiguity can lead to retaining more data than necessary or to difficulty in complying with the minimization requirements. In reaction to this concern, the current version clarifies that the data minimization provisions do not apply to service providers.

While many of the concerns raised with regard to the first discussion draft have been ironed out, the main criticism of the revised version revolves around the inclusion of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) in APRA. For an overview of the issues brought up by representatives, see Proposed American Privacy Rights Act clears US House subcommittee.

APRA and AI

The discussion draft addresses AI through provisions concerning “covered algorithms.” These are defined as computational processes that utilize machine learning, statistical methods, or other data processing or AI techniques to make or assist in making decisions based on personal data. Specifically, the draft outlines the use of these algorithms in functions such as delivering or recommending products or services to individuals, where the data involved can identify or link to an individual.

APRA introduces the option for individuals to opt out of decisions made by such algorithms if those decisions are deemed consequential. Furthermore, entities that handle large volumes of data are required to perform, or have an independent auditor conduct, an impact assessment on the use of these algorithms if the algorithm is used to make a consequential decision. These assessments must include comprehensive details about the algorithm’s design and purpose, the data used for training, the outputs produced, the necessity and proportionality of the outputs, the benefits and limitations, a description of retraining data, evaluation metrics, transparency measures, post-deployment monitoring and oversight processes, and potential harms on the basis of protected characteristics, being a minor, or an individual’s political party registration, as well as the mitigation measures taken.
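As a rough illustration of the breadth of these assessments, the required content can be modeled as a simple record; the field names below are ours, not the statute’s:

from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactAssessment:
    design_and_purpose: str
    training_data_description: str
    outputs_produced: str
    necessity_and_proportionality: str
    benefits_and_limitations: str
    retraining_data_description: str
    evaluation_metrics: str
    transparency_measures: str
    post_deployment_monitoring: str
    # Potential harms must be analyzed per protected characteristic,
    # minor status, and political party registration.
    potential_harms: list = field(default_factory=list)
    mitigation_measures: list = field(default_factory=list)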

If an independent auditor is engaged to conduct an algorithmic impact assessment, the entity must signal to the National Telecommunications and Information Administration (NTIA) that an impact assessment has been completed. If the entity decides not to engage an independent auditor, the impact assessment itself must be submitted to the NTIA. The NTIA is tasked with reporting on best practices and strategies to mitigate any identified harms, starting three years after the enactment of the law. The original draft had assigned the Federal Trade Commission (FTC), in collaboration with the Secretary of Commerce, the responsibility to oversee these impact assessments and evaluations.

The impact assessment must be retained for five years and, upon request, made available to Congress; the entity may publish a summary, but is not required to do so.

Additionally, the draft mandates that developers evaluate algorithm designs before deployment, aiming to safeguard against potential harms like discrimination or adverse effects on access to essential services such as housing or education. The scope is again limited to algorithms making consequential decisions, defined as follows:

The term “consequential decision” means a decision or an offer that determines the eligibility of an individual for, or results in the provision or denial to an individual of, housing, employment, credit opportunities, education enrollment or opportunities, access to places of public accommodation, healthcare, or insurance.
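Since the definition is a closed enumeration of domains, a scope check reduces to set membership; the domain labels here are our shorthand for the statutory categories:

CONSEQUENTIAL_DOMAINS = {
    "housing", "employment", "credit", "education",
    "public_accommodation", "healthcare", "insurance",
}

def is_consequential(decision_domain: str) -> bool:
    # A decision is consequential if it determines eligibility for, or
    # results in provision or denial of, one of the enumerated domains.
    return decision_domain in CONSEQUENTIAL_DOMAINS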

The legislation also holds entities to the data minimization standards discussed above, although the implications of such practices for AI development, particularly in relation to training algorithms with sensitive data, are as yet unclear.

The draft also emphasizes civil rights protections, prohibiting the discriminatory use of data in accessing goods or services. It allows for certain exceptions, such as activities aimed at preventing discrimination or promoting diversity.

Lastly, the FTC is empowered to enforce these provisions, with the ability to initiate rulemaking to clarify the requirements for impact assessments and determine which algorithms pose low enough risks to be exempt from stringent evaluations.

Enforcement

The enforcement of APRA is a cooperative effort between federal and state authorities. Specifically, the FTC, state attorneys general, the chief consumer protection officer of each state, or an authorized officer or office designated by the state are all empowered to enforce the provisions of the Act. Note, however, that according to the discussion draft the FTC’s commercial rulemaking authority would be terminated, while the FTC retains some rulemaking ability to concretize what is reasonably necessary with regard to the data minimization requirements.

Additionally, APRA grants individuals a significant tool in the form of a private right of action, enabling them to initiate lawsuits against entities that violate certain privacy rights, notably excluding the data minimization obligations. This provision allows individuals not only to seek damages but also to obtain injunctive and declaratory relief. The Act also allows for the recovery of reasonable legal and litigation costs, which helps ensure that individuals are not deterred from seeking justice by financial constraints. Mandatory arbitration cannot be imposed on consumers claiming substantial privacy violations, meaning financial harm of more than $10,000 or certain mental or physical harm, or on individuals under the age of 18.
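The arbitration carve-out can be read as a simple rule; here is a sketch under the thresholds described above (the parameter names are ours, and “qualifying harm” stands in for the draft’s definitions):

def arbitration_enforceable(financial_harm_usd: float,
                            qualifying_physical_or_mental_harm: bool,
                            claimant_age: int) -> bool:
    # Mandatory arbitration cannot be imposed on minors or on claims of
    # substantial privacy violations as defined in the draft.
    if claimant_age < 18:
        return False
    if financial_harm_usd > 10_000:
        return False
    if qualifying_physical_or_mental_harm:
        return False
    return True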

Timelines

In light of the many new obligations that APRA would bring, its 180-day compliance timeline, running from adoption, with specific deadlines in different areas, is quite short. Compared to the EU AI Act, which gives businesses two years from its entry into force, six months seems particularly ambitious. It remains to be seen how the discussion draft progresses and what changes will be made along the way.

A last word on data minimization

We often hear, and heard again in the context of APRA and its attempt to address data protection in the AI context, that data minimization and AI development stand in an unresolvable tension. We at Private AI respectfully disagree. In fact, our solutions aim to resolve exactly that tension: they remove personal identifiers from unstructured data at scale, and we are very good at doing that. Geared towards developers, we provide easy-to-integrate AI-powered software to identify and redact over 50 entity types in 53 languages, supporting various file formats. Still skeptical? Try it here on your own data or request an API key.
