Reviewing OpenAI’s 31st Jan 2024 Privacy and Business Terms Updates


Recent updates to OpenAI’s privacy policies, terms of use applicable to individuals, and business terms have sparked the interest of the privacy and business community. The first two take effect on Jan 31, 2024, except for the Europe-specific privacy policy, and the business terms have been in force since Nov 14, 2023. As we delve into these changes, it’s crucial to understand their implications, particularly for individual users in Europe and for businesses subject to HIPAA, i.e., so-called covered entities, which include health plans such as health insurance companies, HMOs, company health plans, and certain government programs that pay for health care, such as Medicare and Medicaid. Here’s an exploration of what these updates entail and how they affect the use of AI-driven solutions in different contexts.

Unpacking the Europe-Specific Privacy Policy

One of the most significant changes is the introduction of a Europe-specific privacy policy, which comes into force on Feb 15, 2024. This move aligns with the stringent data protection standards set by the GDPR. Notably, individual users can now exercise their privacy-related rights through their OpenAI account, a crucial step towards enhancing user control over personal data. The European privacy policy also sets out, explicitly and for the first time, detailed legal bases for processing personal data, including contract performance, legitimate interests, and user consent, in line with the GDPR’s requirements.

Data transfer mechanisms receive particular attention, acknowledging the complexities of transferring personal data outside the EEA, Switzerland, and the UK. The policy adheres to GDPR standards, relying on the European Commission’s adequacy decisions and on Standard Contractual Clauses as safeguards.

This development, however, is specifically tailored for individual users and does not extend to business offerings. For businesses operating in Europe, understanding the nuances of the business terms will be key to ensuring compliance and maintaining user trust.

Business Terms

In the realm of business terms, not much has changed. It’s noteworthy that the HIPAA restriction remains largely unchanged, i.e., the prohibition on including protected health information (PHI) in user input without a business associate agreement (BAA) in place, as HIPAA requires.

However, a subtle shift is observed in the privacy language applicable to businesses. Formerly, reference was made to “API data usage policies,” which changed to “Enterprise Privacy Commitments.”

Unfortunately, in contrast to the business terms, OpenAI did not make the previous version (i.e., the API data usage policies) available for comparison. Hence, we are left to assume that this shift is intended to signal an enhanced focus on comprehensive privacy practices that go beyond just API interactions, without being able to confirm what that means in substance.

The Challenges of Securing a Business Associate Agreement (BAA)

A critical aspect for businesses dealing with health information is securing a BAA with OpenAI or, if the OpenAI services are accessed via Microsoft Azure, with Microsoft directly. The demand for such agreements is high, leading to a bottleneck in customer support. This situation presents a challenge for companies looking to leverage AI solutions while complying with HIPAA.

Navigating the BAA process is more than a compliance hurdle; it’s a reflection of the growing need for robust data protection measures in the AI space. The struggle to secure a BAA underscores the importance of proactive engagement with privacy and security requirements in the evolving landscape of AI technology.

How Private AI Can Help

Where businesses cannot obtain a BAA in time, and also because it is good practice, a solution is to approach HIPAA compliance via the Safe Harbor rule, assuming of course that the use case involves disclosing PHI. Removing the 18 listed identifiers from your data set permits disclosure without anything further: the de-identified information is no longer PHI to which HIPAA applies. Alternatively, if you instead remove any information regarding a health condition, the provision of health care to an individual, or the past, present, or future payment for the provision of health care to an individual, the information would likewise no longer constitute PHI, though it may still be personal information whose disclosure can be restricted under other applicable laws.
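To make the Safe Harbor approach concrete, here is a minimal illustrative sketch in Python of pattern-based redaction. It covers only a few of the 18 identifier categories (phone numbers, email addresses, Social Security numbers, and full dates); the patterns, labels, and sample text are our own illustrative choices, and a production pipeline would also need to handle free-text names, geographic subdivisions, and the remaining categories, typically with ML-based detection rather than regexes alone.

```python
import re

# Illustrative regex patterns for a few of the 18 Safe Harbor identifier
# categories. This is NOT a complete de-identification solution: names,
# addresses, and many other categories require more than regexes.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Full dates like 03/14/1992; Safe Harbor permits retaining the year alone.
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Seen on 03/14/1992, phone 416-555-0199, email jdoe@example.com, SSN 123-45-6789."
print(redact(note))
# → Seen on [DATE], phone [PHONE], email [EMAIL], SSN [SSN].
```

The limits of this sketch are exactly why dedicated de-identification tooling exists: a missed identifier means the output is still PHI.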

Private AI’s technology can detect, redact, and remove all 18 of the identifiers listed in the Safe Harbor rule (and more), greatly facilitating HIPAA compliance and enabling businesses to leverage LLMs safely. Beyond that, Private AI can also remove direct and indirect personal identifiers, including health conditions as well as names, addresses, and numerical identifiers. You can test the technology on your own data using our web demo, or sign up for an API key here.

