EU AI Act – The Importance of the Dec 8 Political Deal on Foundation Models

On December 8, 2023, the European Parliament, the European Commission, and the Council of the EU reached a significant political agreement on AI regulation. An analysis of the unofficial final draft is forthcoming; in the meantime, a few key developments deserve attention.

The primary issue under consideration was whether to regulate the technology underlying AI systems like ChatGPT or only the applications built on it. The resistance of Germany, France, and Italy to regulating general-purpose AI (GPAI) models did not prevail: the Parliament’s proposal, discussed previously, was largely accepted, albeit in a less strict form.

The Need to Regulate at the Technology Level

The Parliament recognized the importance of regulating AI at the level of the technology itself, especially given the growing influence of generative AI. Addressing the risks inherent in the models at the core of these systems is essential, and it requires striking a balance between encouraging innovation and ensuring responsible use.

One concern is that large language models (LLMs) have displayed capabilities their developers did not foresee. Regulating AI applications alone therefore cannot address the full scope of potential harm. The EU AI Act rightly focuses on GPAI models that pose systemic risk, emphasizing transparency and risk mitigation.

The Risk of Leaving It to Self-Regulation

Some argue that AI companies already take measures similar to those proposed in the EU AI Act; see, for example, the extensive research papers that accompanied the releases of Llama 2 and GPT-4. However, relying solely on self-regulation is not sufficient. Companies may withhold critical information, as Meta and OpenAI did by not disclosing their training data. Profit-driven incentives often lead companies to prioritize their own interests over societal well-being.

Sanctions with a tangible impact on a company’s bottom line are necessary to ensure compliance with the regulation. Moreover, even the strictest rules lose their effectiveness if companies retain the discretion to open-source their models at will.

The Remaining Issue of Open-Sourcing

Even rigorous regulation can be undermined if companies choose to open-source their models without adequate safeguards. Government oversight over the release of high-risk AI models and their weights is essential to prevent misuse. Consider, for example, “Llama 2: Open Foundation and Fine-Tuned Chat Models,” the research paper Meta published alongside the open-source release of its models.

Conclusion

The agreement’s significance lies in its recognition of the need to regulate GPAI models themselves, requiring transparency around cybersecurity measures, energy consumption, and risk evaluation. The EU aims to strike a balance between fostering innovation and safeguarding society from harm. Stay tuned for our article on the final text!
