How Private AI can help the Public Sector to Comply with the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024


Ontario’s Bill 194, formally known as the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024, represents a crucial legislative shift, aiming to fortify digital security and elevate trust within public sector entities. This act is significant not only for its broad coverage, which includes institutions under the Freedom of Information and Protection of Privacy Act (“Privacy Act”), children’s aid societies, and school boards, but also for its depth in introducing stringent regulations on cybersecurity, data protection, and the use of artificial intelligence (AI).

Enhancing AI and Digital Security

Part 1 of Bill 194 proposes the Enhancing Digital Security and Trust Act, 2024, emphasizing stringent management of AI systems within public sector entities. Under the new bill, the use of certain AI systems is prohibited, with details to be spelled out in regulations that still have to be developed. The act mandates entities to inform the public about AI usage, develop robust accountability frameworks, and implement risk management processes to safeguard against potential misuse and reinforce the ethical use of technology in public administration.

AI risk management entails the mitigation of biases and the protection of privacy, among other risks. Managing both requires a thorough understanding of what personally identifiable information (PII) is held within these AI systems: knowing the PII content directly determines the ability to set appropriate privacy controls and to eliminate unnecessary PII, in keeping with the principle of data minimization.

The same holds for bias mitigation: knowing which identifiers in your dataset could constitute prohibited grounds under human rights laws is a prerequisite to ensuring that the AI system makes no unlawful discriminatory decisions. A recent study also indicates that removing biased data can improve the overall accuracy of predictive models; see the paper by Sahil Verma, Michael Ernst, and Rene Just from the University of Washington, “Removing biased data to improve fairness and accuracy.”

Private AI’s solutions can identify over 50 entity types of PII, protected health information (PHI), and payment card industry (PCI) information, among others, with above industry-standard accuracy. They can then remove any personal identifiers the user selects, allowing for tailored, use-case-specific application. The solution works at scale to provide visibility into training data and to scrub PII from it, and it also operates during inference with PrivateGPT, a privacy layer you can plug into your LLM workflow to prevent personal information from being sent to OpenAI, for example.
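The privacy-layer idea can be sketched in a few lines. The following is a minimal illustration only, assuming a simple redact-then-forward design: the regex patterns, function names, and entity labels are hypothetical stand-ins, not Private AI’s actual API, which uses trained models covering 50+ entity types rather than regexes.

```python
import re

# Illustrative patterns only -- a production system uses ML-based detection,
# which also catches entities (e.g., person names) that regexes cannot.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text, entity_types=None):
    """Replace selected identifier types with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        if entity_types is None or label in entity_types:
            text = pattern.sub(f"[{label}]", text)
    return text

def safe_llm_call(prompt, llm):
    """Hypothetical privacy layer: redact before the prompt leaves the system."""
    return llm(redact(prompt))

print(redact("Call me at 416-555-0199 or jane@example.org"))
# -> Call me at [PHONE] or [EMAIL]
```

Passing `entity_types={"PHONE"}` would redact phone numbers only, which mirrors the tailored, user-selected redaction described above.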

Privacy Impact Assessments

Bill 194 also implements a process for Privacy Impact Assessments (PIAs), crucial for evaluating the risks associated with the personal data that is collected, used, and disclosed. While PIAs have been required in the public sector for several years, to date they have been mandatory only in a limited number of instances. Under Bill 194, conducting a PIA becomes mandatory before collecting personal information.

Picture a scenario where a public entity wishes to procure a large dataset to train an AI model. At that scale, determining the PII contained in the dataset manually is not reasonably feasible. Yet a PIA requires understanding exactly what PII exists within the dataset to ensure that data handling aligns with legal and ethical standards. This highlights the importance of deploying advanced solutions that can accurately detect and manage PII, thereby facilitating compliance and enhancing data protection strategies.
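The dataset-level visibility a PIA calls for can be thought of as a PII inventory: which identifier types appear, and how often. A minimal sketch follows, assuming a trivial rule-based detector as a self-contained stand-in for an ML-based detection service; the function names and patterns are illustrative assumptions, not a real API.

```python
import re
from collections import Counter

# Toy detector standing in for an ML-based service; real systems
# cover far more entity types than these two.
def detect_entities(record):
    found = []
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", record):
        found.append("EMAIL")
    if re.search(r"\b\d{3}-\d{3}-\d{4}\b", record):
        found.append("PHONE")
    return found

def pii_inventory(dataset):
    """Aggregate which identifier types appear in a dataset and how often --
    the kind of visibility a PIA needs before data is acquired."""
    counts = Counter()
    for record in dataset:
        counts.update(detect_entities(record))
    return dict(counts)

sample = [
    "Contact: alice@example.com",
    "Phone on file: 613-555-0147",
    "No identifiers here.",
]
print(pii_inventory(sample))  # {'EMAIL': 1, 'PHONE': 1}
```

An inventory like this makes the PIA concrete: instead of asserting that a dataset "may contain personal information," the assessment can state which identifier types are present and in what volume.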

Managing Data Breaches

Regarding data breaches, Bill 194 would make mandatory the breach notification and reporting obligations that have thus far been subject to guidance only, except in the healthcare sector. An additional requirement that is (surprisingly) new to the Privacy Act is mandatory privacy safeguards to protect personal information in the custody or under the control of an institution.

Whenever feasible, not having personal data in one’s custody is the best safeguard against data breaches. This is of course not always possible, e.g., when a public institution collects data to render services to individuals, but care must be taken not to retain data for longer than is necessary to achieve the purpose for which it was collected. Often, this data remains valuable for secondary purposes but need not be retained in identifiable form for them. In those instances, de-identifying or, ideally, anonymizing the data prevents any data breach from affecting personal data that should no longer have been in the institution’s custody.

There is a great advantage to reducing or even eliminating PII where possible, as breaches involving non-personal data pose significantly lower risks to individuals. When a breach does occur, knowing exactly what data has been compromised is crucial for assessing the potential harm and executing an effective response. This reinforces the need for technologies that can provide clear insights into data exposure and help in the rapid assessment of breach impacts.

Private AI’s Role

Bill 194 underscores the growing need for sophisticated data management solutions that can support public sector entities in meeting these new standards. Private AI’s tools can assist in automating the detection and redaction of sensitive information, thereby ensuring that data handling practices are compliant with evolving regulations and that entities are prepared for stringent audit requirements. Try it on your own data here.



Reminder

Tested on a dataset consisting of messy conversational data containing sensitive health information. Download our whitepaper for details, along with our accuracy and F1-score performance, or contact us for a copy of the evaluation code.

99.5%+ Accuracy

The number quoted reflects the number of PII words missed as a fraction of the total number of words, computed on a 268,000-word internal test dataset comprising data from over 50 different sources, including web scrapes, emails, and ASR transcripts.
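The metric as defined above can be computed directly from token-level annotations. A minimal illustration, assuming gold and predicted labels per word (the label scheme and function name are assumptions for the sketch, not the published evaluation code):

```python
def missed_pii_rate(gold_labels, pred_labels):
    """Fraction of all words that are PII in the gold annotation but were
    missed by the model -- the error definition quoted above."""
    assert len(gold_labels) == len(pred_labels)
    missed = sum(1 for g, p in zip(gold_labels, pred_labels)
                 if g == "PII" and p != "PII")
    return missed / len(gold_labels)

gold = ["O", "PII", "PII", "O", "PII", "O"]
pred = ["O", "PII", "O",   "O", "PII", "O"]
print(missed_pii_rate(gold, pred))  # 1 missed PII word out of 6 words
```

Note that this denominator is all words, not just PII words, which is why a 99.5%+ figure on this metric is not directly comparable to per-entity recall.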

Please contact us for a copy of the code used to compute these metrics, try it yourself here, or download our whitepaper.