Proposed AI Legislation in Canada and What it Means for ChatGPT

Bill C-27, which includes the proposed Consumer Privacy Protection Act (“CPPA”), the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act (“AIDA” or the “Act”), completed its second reading in the House of Commons on April 24, 2023. To be passed, the bill must still progress through the third reading in the House as well as three readings in the Senate. The bill is generally expected to be enacted sometime in 2023. The coming into force of AIDA would take an additional two years, so it cannot be expected any earlier than 2025.

This article outlines Canada’s proposed AIDA and draws attention to two areas of concern: first, that many important questions are left to regulations; and second, that the harm AIDA intends to prevent is limited to physical and psychological harm, property damage, and financial loss suffered by individuals, disregarding the significant systemic harm AI systems may cause.

Overview of AIDA

AIDA consists of 40 provisions, covering:

  • Definitions;
  • Requirements to be fulfilled by persons carrying out the activities regulated by AIDA;
  • Orders the Minister can issue, such as requesting records and performing audits;
  • The Minister’s duties to keep information received from reports and during audits confidential;
  • Administrative monetary penalties;
  • Making the contravention of AIDA provisions an offence and setting out the applicable punishments;
  • The administration of the Act;
  • General offences related to AI systems; and
  • Coming into force.

Application and Stated Purpose of AIDA

AIDA is proposed to apply to persons, a term that includes a trust, a joint venture, a partnership, an unincorporated association, and any other legal entity, carrying out a “regulated activity.”

Certain federal government institutions described in the Act are exempt from AIDA. They are instead required to follow the Directive on Automated Decision-Making, which may give an indication of the regulations the Governor in Council will have to issue under the authority of AIDA to achieve the purpose for which the Act is intended.

“Regulated activity” is defined as:

(a) processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system; and

(b) designing, developing or making available for use an artificial intelligence system or managing its operations.

The activity has to be carried out in the course of international or interprovincial trade and commerce for AIDA to apply. This limitation is required under the Constitution, which provides the federal government with the power to legislate matters of international or interprovincial trade and commerce but leaves it to the provinces to enact laws addressing intra-provincial matters. 

According to a news release by Innovation, Science and Economic Development Canada (ISED), AIDA is proposed for the benefit of innovative businesses and individuals alike. Businesses require clear rules regarding the handling of personal information when developing and deploying AI systems, and individuals using these AI systems need to have trust and confidence in the safety of the new digital economy. Overall, AIDA sets out to ensure the responsible development of AI systems by implementing measures to fight bias, prevent harm to individuals, and protect personal information.

Much is Left to Regulations

Notably, many important aspects regarding the scope of application, the measures persons responsible for AI systems must take, and administrative monetary penalties are left to regulation. AIDA therefore constitutes but a shell of a framework. In light of its stated purpose of providing clear guidance to businesses and instilling trust in individuals, AIDA has obvious shortcomings.

We can gather from the Act that record-keeping and data anonymization obligations are placed on a person carrying out any kind of regulated activity. The Act does not provide a definition of anonymization. It is unclear whether the CPPA’s definition of “anonymize,” namely “to irreversibly and permanently modify personal information, in accordance with generally accepted best practices, to ensure that no individual can be identified from the information, whether directly or indirectly, by any means,” will be considered to apply to the Act, since AIDA contains no reference to it. Note that AIDA does make such an explicit reference to the CPPA’s definition of “personal information,” which suggests that the definition of “anonymize” may have been excluded on purpose.

Aside from data anonymization and record-keeping obligations, persons responsible for a high-impact AI system have to comply with additional requirements, including risk assessments, mitigation, and monitoring. Again, what constitutes a high-impact AI system will be established by regulations. However, some examples provided by ISED can be found in the AIDA Companion document:

  • Screening systems impacting access to services or employment
  • Biometric systems used for identification and inference
  • Systems that can influence human behaviour at scale, e.g., online content recommendations
  • Systems critical to health and safety, including autonomous cars

The core provisions setting out the obligations under AIDA all address the question of how these obligations must be fulfilled by stating merely “in accordance with the regulations.” The AIDA Companion document is again helpful here as it sets out examples of measures to assess and mitigate risk.

Presumably, AIDA is drafted in this way because the political reality is that agreement on the details may be hard to achieve in parliamentary debate followed by a vote. If the details are instead left to the Governor in Council, the process is easier. The Governor in Council is restricted by certain legal constraints, including the Constitution, and will engage stakeholders by inviting commentary from experts and the public, but the regulations are not subject to a vote at any stage. Nevertheless, regulations have the full force of law.

Focus on Individual Harm Only

One determination that has been made under AIDA in its current form is the definition of harm. A regulation will not be able to override or significantly expand on this definition, as regulations must always remain within the scope of the law to which they pertain. The concept of harm is important under AIDA because a person responsible for a high-impact AI system must establish measures to identify, assess, and mitigate harm, and must notify the Minister if the use of the system results or is likely to result in material harm. A risk of harm can also mean that relevant records must be produced to the Minister, and a serious risk of imminent harm can allow the Minister to order that use of the high-impact system cease. Lastly, the Minister has certain publication rights regarding an AI system if a serious risk of imminent harm exists and publication can help prevent it, e.g., where individuals have the option to refrain from using the AI system.

Harm is, however, limited to individual harm, carving out any systemic harm that may result from an AI system. The only (yet very important) exception is bias. The record production obligation also applies if the Minister has reasonable grounds to believe that the use of a high-impact system could result in biased output, not only harm as defined under AIDA. Note that the other protections mentioned in the previous paragraph do not apply if bias is the concern.

The definition of harm in AIDA seems significantly underinclusive. Many harms that have been identified as resulting from AI systems fall outside a definition that focuses on harm to individuals. Examples are harm to social relationships, to the political system, and to the integrity of elections, as well as systemic oppression and the amplification of racism, sexism, and homophobia. And let’s not forget the environment. A thought-provoking presentation on the harms AI may cause, from the perspective of the people involved with building it, can be found here.

Admittedly, it may be more difficult to detect harm to groups, communities, and the environment, yet these interests are surely equally worthy of protection from harm caused in the private sector. After all, federal institutions conducting an Algorithmic Impact Assessment under the Directive on Automated Decision-Making are required to consider a much broader range of risks, including the rights of individuals or communities, the health or well-being of individuals or communities, the economic interests of individuals, entities, or communities, and the ongoing sustainability of an ecosystem.

Significance of AIDA for ChatGPT

Since AIDA leaves it to the regulator to define what a high-impact AI system is, it does not reveal whether generative AI systems such as ChatGPT would fall into this category. The AIDA Companion document, which is updated regularly, does, however, refer to AI systems that “perform generally applicable functions – such as text, audio or video generation,” though not specifically in the context of these being “high-impact” AI systems. ISED then suggests that developers of such systems would need to document and address risks related to harmful or biased content in their systems, and that developers who merely design and develop high-impact AI systems, without managing them once released, have different obligations than those managing their operations. This passage seems to suggest that ISED may consider certain generative AI systems to be “high-impact.” However, no explicit determination has been made, leaving many organizations that have started to build a ChatGPT-based tool on top of their application in uncertainty.

Recent developments in the EU offer a glimpse of the sentiment towards generative AI systems such as ChatGPT abroad. The EU’s draft AI Act has recently been amended to expand the obligations of “general-purpose” AI, including ChatGPT. In addition, the harms considered now also include harm to the environment and influence on political campaigns and recommender systems. Furthermore, the new draft re-emphasises that “The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI system.” Read more about it here.

Conclusion

In light of AI’s enormous disruptive potential, regulating it is a commendable route to take. We missed the opportunity to do so before attention-extracting technologies caused lasting harm to our society. With LLMs at everyone’s fingertips, we are at risk of repeating this mistake. However, there are measures the industry can take on its own.

We at Private AI believe that we should not wait for regulation to make AI safer, nor do we believe that making it safe has to mean sacrificing its potential for innovation and growth. In fact, our platform, PrivateGPT, mitigates some of the potential harms of uncontrolled mass deployment of generative AI. By identifying and removing personally identifiable information from messages before they are sent to ChatGPT, PrivateGPT helps prevent the unintentional disclosure of personal information and safeguards the privacy of users. This not only protects individuals from harm but also promotes trust and integrity in the use of valuable data for social and commercial causes.
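To make the “redact before you send” pattern concrete, here is a minimal, illustrative sketch. It is not PrivateGPT’s actual implementation: the regular-expression patterns and the redact_pii and send_to_chatgpt names are hypothetical stand-ins (production de-identification relies on machine-learning-based entity detection rather than regexes), but the flow of stripping personal information from a prompt before it ever leaves the user’s environment is the same.

```python
import re

# Hypothetical stand-in for a real PII-detection model; shown only to
# illustrate the "redact before send" flow, not how PrivateGPT detects PII.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII spans with placeholder tokens before the prompt
    leaves the user's environment."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def send_to_chatgpt(prompt: str) -> str:
    """Placeholder for the LLM call; only the redacted prompt would be sent."""
    raise NotImplementedError("wire up your LLM client here")

if __name__ == "__main__":
    user_message = "Email jane.doe@example.com or call +1 416 555 0199 about my claim."
    safe_message = redact_pii(user_message)
    print(safe_message)  # -> "Email [EMAIL] or call [PHONE] about my claim."
```

The design point is simply that redaction happens client-side, before any third-party API sees the text, so the downstream model never receives the personal information in the first place.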
