Data Privacy, Law, and Cybersecurity with Carole Piovesan


Key Points: 

– Training ML systems has major privacy implications, and companies need to put policies, processes, and safeguards in place to reduce the privacy burden.

– Companies’ current top concerns are: navigating fluid privacy regulatory environments, identifying risks and liabilities associated with AI systems, and establishing a proper data governance framework.

– Having a proactive privacy breach plan with stakeholders is critical; minutes matter when there’s a breach and you want to have a tried and tested plan in place so you can reduce the harm.

– As different regulations emerge in Canada and globally, companies should perform a risk assessment and identify the dominant requirements for worldwide compliance.

– The legal landscape for data privacy and AI is constantly evolving and a proactive approach is recommended. 


Patricia: I’m really happy to have Carole Piovesan here with us today. Carole is a Partner and Co-Founder of INQ Law, where she focuses on data governance, privacy, cybersecurity, and artificial intelligence. Prior to co-founding INQ, Carole was a lawyer at a large national firm where she served as co-lead of the firm’s national cybersecurity, privacy, and data management group and a lead on artificial intelligence. Carole has advised the Canadian government on legal and policy issues related to AI and regularly advises companies on issues related to their collection, storage, and use of data. She recently co-edited the book Leading Legal Disruption: Artificial Intelligence, and a Toolkit For Lawyers and the Law, published by Thomson Reuters. So thrilled to have you here, Carole, thank you so much for your time.

Carole: Thank you very much for having me. I’m thrilled to join your community and to join you in particular.

Briefly, what is a company’s basic legal responsibility with handling personal data?

Patricia: So briefly, what is the company’s basic legal responsibility with handling personal data? 

Carole: It's a big question, because the responsibility is defined by the kind of data you hold. Is it health data versus contact information; is it professional data versus someone's home address, for instance? It really does depend on what data you hold, but at a high level what we often refer to are the ten fair information principles, which touch on issues of accountability, openness, and consent, along with individual rights: your right to access your information, your right to know where and how to complain if you want to challenge compliance, your right to correct information. All ten of those fair information principles really guide and underpin privacy law, not just in Canada but generally around the world. They are consistent principles that touch on the collector, meaning the entity or individual (mainly the entity) collecting personal information, and the rights and obligations that apply to them.

Now, the thing about personal information that I want to be really clear about, because your question is about personal data, is that personal information is defined as information about an identified or identifiable individual. The reason we say "identifiable" is that sometimes an individual data point doesn't identify somebody, but when you collect multiple data points and mix them together in data sets, the individual becomes identifiable, because you have enough to figure out who that person is. So we don't just worry about individual data points; we look at the overall use and management of data to determine whether it may be personal data that falls within the jurisdiction of privacy law. Does that make sense?
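
To make that "identifiable in combination" point concrete, here is a minimal, hypothetical sketch: no single field below names anyone, but a unique combination of quasi-identifiers can still pin down one person. The field names, sample records, and the k=2 threshold are illustrative assumptions, not a real data set.

```python
from collections import Counter

# Hypothetical records: no single field names anyone, but the
# combination of quasi-identifiers can be unique to one person.
records = [
    {"postal": "M5V 2T6", "birth_year": 1984, "gender": "F"},
    {"postal": "M5V 2T6", "birth_year": 1984, "gender": "F"},
    {"postal": "M5V 2T6", "birth_year": 1991, "gender": "M"},
]

quasi_identifiers = ("postal", "birth_year", "gender")

# Count how many records share each combination of quasi-identifiers.
combo_counts = Counter(
    tuple(record[qi] for qi in quasi_identifiers) for record in records
)

# A combination seen only once pins down a single individual; under a
# k-anonymity lens (k=2 here), such records are re-identifiable.
for combo, count in combo_counts.items():
    if count < 2:
        print(f"Potentially identifiable combination: {combo}")
```

This uniqueness check is the intuition behind k-anonymity: a record is harder to tie to a person when at least k individuals share its combination of quasi-identifiers.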

Patricia: Total sense. And I imagine that when you’re dealing with, for example, companies that do a lot of machine learning, they have huge troves of data, do you recommend anything specific for them? 

Carole: It's really interesting, because the intersection between privacy and machine learning is becoming more and more of a prominent policy discussion. We've seen our own Privacy Commissioner at the federal level come out with recommendations on privacy in the context of AI and machine learning as recently as November 2020, following a consultation. We've seen some of this dialogue happening out of Europe. In April 2021, Europe proposed a specific regulation for artificial intelligence. It's just a draft, it's not law yet, but it is very indicative of where the world is going. And then last week, the European Data Protection Board and the European Data Protection Supervisor responded from a privacy perspective, saying: that's all well and good, it's very interesting, but we have to make very clear the fundamental civil rights that are bound up in privacy. So we're seeing more and more of this discussion of: okay, yes, machine learning uses a huge amount of data, we understand that, and there are necessarily major privacy implications to that data use. What are you doing, AI/ML community? What does it look like for you? The response is happening on the policy side on the one hand; on the other hand, it's companies like yours that are finding technological approaches to reducing that privacy compliance burden. Typically, for the training data set, you don't care that it's Carole Piovesan's data that you have; you care about data features, you care about trends, but you don't care that it's necessarily mine. So how are we collecting that information? How are we using it for the purposes of ML system training? And are there certain processes we can put in place to reduce that privacy burden overall? That's really where we are right now. And I think there's a very rich startup community forming to help address that problem, to really deal with the privacy implications of large data sets, big data, and AI.
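
As one illustration of reducing that burden at the training-data stage, here is a minimal, hypothetical sketch of replacing direct identifiers with placeholder tags before text enters a training set. Production systems use ML-based entity detection and cover many more identifier types; the regex patterns and labels below are simplified assumptions for demonstration only.

```python
import re

# Illustrative patterns only: real systems use ML-based entity
# detection rather than regexes, and handle far more identifier types.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace direct identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# The redacted text keeps the features and trends a model can learn
# from, without tying the record to a specific person.
print(redact("Write to carole@example.com or call 416-555-0199."))
# -> "Write to [EMAIL] or call [PHONE]."
```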

What are the top 3 concerns that companies are approaching you for help with right now?

Patricia: That's really great feedback, Carole. It's a very interesting intersection with the law, because a lot of research still has to be done on the tech side too, right? It's just a very complicated space to be in. Would you name the top three concerns that companies are approaching you for help with right now?

Carole: Yes, absolutely. The first is navigating the very fluid privacy regulatory environment. Let's just take Canada: British Columbia, Alberta, and Quebec are all fairly advanced, or advancing, on some form of private sector privacy law. At the federal level, we had a major bill introduced in November 2020 that seeks to substantially reform the federal private sector privacy law. Then in Ontario, there's a consultation on what Ontario's own private sector privacy law should look like, and it's a fairly mature consultation; it's not just a beginning, not just an idea. Not to mention that in the health privacy realm, where I do a lot of work as well, there are changes that have either been approved and are waiting to come into force, or are being discussed. So even just in Canada, it's an extremely fluid environment, and we get a lot of questions on that. Then you go global: lots happening in Europe, huge amounts happening in the US with state-level privacy laws percolating across the country, and a big question mark as to whether we'll see a federal private sector privacy law introduced in the US as well. So that's the first major area of need where we're seeing a lot of companies approach us.

The second, and this I would say is a more recent trend, has to do with identifying risks and liabilities associated with AI systems. Beyond the introduction of the draft legislation out of Europe, just last week, at the end of July, the National Institute of Standards and Technology (NIST) in the US tabled a request for information specific to an AI risk management framework. It's really interesting, because the questions they're asking in that RFI include: we understand there are certain categories of risk that everybody seems to identify when it comes to AI systems (robustness, security, fairness, non-discrimination), but are these the right categories? Is there anything we're missing? What should a risk management framework look like? The US is very much taking the route of a voluntary, risk-based approach to AI. So what does that look like, and how does it start to create standards and guidance for what a risk management framework could look like? That world of what the risks and liabilities associated with AI can be, and what they look like in practice, is the second area where we're getting a lot more outreach.

Then the third is related, and it has to do with the governance of those systems. We talk a lot about data governance; in my world, it means: what are the policies and procedures you have in place to provide proper data management? What are the technical safeguards you have in place to support those policies and procedures? Have you done proper scoping and an inventory of your data? Do you understand what data you have and what you're allowed to do with it? That's in the data world, but now it's extending to AI governance, which means: what are the policies and procedures we should have in place to reduce the risks and liabilities associated with AI systems, and to use them in a manner that is defensible and diligent, so that we can extract the benefits we expect without incurring all of the potential risk and liability that we hear and know so much about?
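
For the scoping-and-inventory step, here is a minimal, hypothetical sketch of what one entry in such a data inventory might capture. The field names, sample assets, and the 24-month default are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

# A hypothetical data inventory entry: what you hold, where it came
# from, and what you are allowed to do with it.
@dataclass
class DataAsset:
    name: str
    contains_personal_info: bool
    source: str
    permitted_purposes: list = field(default_factory=list)
    retention_months: int = 24

inventory = [
    DataAsset(
        name="support_tickets",
        contains_personal_info=True,
        source="customer support portal",
        permitted_purposes=["service delivery", "quality assurance"],
    ),
    DataAsset(
        name="web_analytics",
        contains_personal_info=True,
        source="site telemetry",
    ),
]

# A basic governance check: personal data with no documented purpose
# is a gap to review before the data is used for anything new.
for asset in inventory:
    if asset.contains_personal_info and not asset.permitted_purposes:
        print(f"Review needed: '{asset.name}' has no documented purpose")
```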

So those, I think, are the three main areas where we're seeing businesses approach us in particular when it comes to data law and policy.


What do you see companies often getting wrong about data governance?

Patricia: Those are huge areas. I’ll get back to that first point in a bit. But what do you often see companies getting wrong about that third point of how they’re dealing with data governance?

Carole: The biggest thing they get wrong is assuming that data governance is privacy management. They conflate data with privacy. You can have data that is not personal information, and currently most of that data is just not really regulated. So when you're looking at a data governance framework, you're trying to turn your mind beyond privacy. Yes, you're thinking about privacy, and you're thinking about security, but you're also looking at your data assets and thinking about how you are using them. In what context? For what purpose? To what end? What do you think the outcome is going to look like? That construct of really thinking beyond privacy is not something we're seeing a ton of yet. It's starting to happen much more, but the traditional approach of privacy as the entry point is still dominant. Where we come in, and I think what makes us distinct, is that we're very much focused on where you are going with this data. So let's set aside for a moment the configuration and the nature of the data that you have, and start by telling me where you want to go, because then we can trace back to make sure you have the right authorities, that you've collected the data in the right manner, and that you've been fair and transparent. That's going to be important to your narrative as a company and to your compliance portfolio. When you take that approach, privacy necessarily becomes an important conversation, but it's not the only conversation; you don't stop at privacy. You can think a little more broadly about that strategic data use, and then decide how to get there in an appropriate manner.

Patricia: That's great. And that also probably helps them be more proactive about future legislation that might govern that data.

Carole: Absolutely. It’s proactive compliance.

It’s often said that it’s a matter of ‘when, not if’ a data breach is coming. What’s your take on that?

Patricia: And it's often said that a data breach is going to happen no matter what, right? What's your take on that?

Carole: The reason that's often said is because you have to plan for what is increasingly considered an eventuality. To take the position that "it may happen to me" puts you in the category of company or entity that is happy to take that risk. And the risk is not one worth taking, because the reputational impact, in addition to the legal impact, in addition to the broader trust impact (beyond just reputation), is really profound. So instead, you apply the lens that a breach is going to happen, because it is likely. A breach is very broadly defined; it is the unauthorized access, loss, use, or modification of personal information. It could be theft, it could be accidental. Even a misdirected email is technically a breach if that email contains personal information. And it could be a notifiable breach if it contains really sensitive information, meaning you would have to notify the affected individual or individuals, and you would have to notify the Privacy Commissioner that's implicated. So the fact that a privacy breach or data breach is going to happen is known, and it's the right lens to apply to your work, because it forces you to plan and prepare proactively: you prevent as much as you can, and you put in place the right safeguards for proper cyber hygiene. Your people (your staff, your contractors, your stakeholders, anyone who has access to your data systems, whatever those look like) understand your principles and why you care about privacy and data protection. They have clear guidance on how to appropriately manage and access that information, and they know what to do if there's a breach. That last point is super important, because minutes matter when there's been a breach, and you want a tried, tested, and true plan in place so that you know exactly what to execute and can reduce the harm and the overall burden on the organization and on the affected individuals.


So, I'll just give you a little flavour of what I'm talking about. When there's a breach, particularly a significant breach, your operations (the thing that generates revenue for your company) generally stop for a period of time. If you have a plan in place, that period is much shorter; if you don't, it's much longer, not to mention the overall expense of trying to manage the breach. So the case to prepare is very strong: you can not only get back to what your business is supposed to do, but also maintain that relationship of trust with your clients, your customers, and your stakeholders, and minimize as much as possible the damage to your brand and to your narrative that might come from a breach. There's nothing worse than hearing that the data I entrusted to you, you didn't take seriously, somebody else accessed, and now I'm harmed and there's not much I can do about it. There's nothing worse than that.


Patricia: Absolutely. Carole, I love the bit where you said even a misdirected email counts as a data breach. I think a lot of people have the idea that a data breach is this huge, headline-worthy thing, but that's not necessarily the case, right?

Carole: No, it's not. And not only is the misdirected email a data breach, a breach doesn't have to involve millions of records; it can involve one. It doesn't have to be an external bad actor; it can be a total accident or misjudgment. So how it comes to be doesn't really matter. I mean, it matters in the overall analysis of what you do, but in determining whether or not something checks the box of being a breach, that part doesn't really matter. What matters is: has personal information been accessed by someone who shouldn't have it, used in a way it shouldn't have been used, or modified in a way it shouldn't have been modified? Then you're in breach territory, and you have to document it. The federal legislation is very clear: you have to document that breach and hold on to that record for two years, and the Privacy Commissioner can come and take a look at your records and determine what your breach portfolio looks like. So there are requirements stated in law. More data-mature organizations understand those requirements, and they have plans in place. They test those plans on an annual or biannual basis through a tabletop exercise: you simulate a breach and run through the exercise of responding to it. You have all of your vendor retainers in place, meaning your forensic team has already signed a retainer, your external legal counsel has signed a retainer, and you have your cyber insurance (please, everybody in the audience, tell me you have cyber insurance if you're collecting personal information). You know who your insurer is and what your deductibles are. And you have your PR team in place as well, so that in the event something has to be discussed publicly, you can get advice on how to do it. So there's a lot of work that goes into that preparation, and then you have to try it out: take your plan, give it a go, run the simulation, tweak, and go through it again. And then the other thing you're going to do is print it, because if there's a breach, you need to make sure you can access it, and sometimes, in the case of a major breach, you can't access your own systems.
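
To illustrate the documentation duty Carole describes (record every breach and keep the record for two years), here is a minimal, hypothetical sketch of a breach register entry. The field names are illustrative assumptions, not a legal template.

```python
from dataclasses import dataclass
from datetime import date

# A hypothetical breach register entry. Under PIPEDA's breach rules,
# records of every breach must be kept and may be inspected by the
# Privacy Commissioner; "real risk of significant harm" is the
# threshold that triggers notification duties.
@dataclass
class BreachRecord:
    occurred_on: date
    description: str                      # e.g. a misdirected email
    data_involved: str                    # categories of personal info
    real_risk_of_significant_harm: bool   # drives notification duties
    individuals_notified: bool
    commissioner_notified: bool

register = [
    BreachRecord(
        occurred_on=date(2021, 8, 3),
        description="Email with a client list sent to the wrong recipient",
        data_involved="names, email addresses",
        real_risk_of_significant_harm=False,
        individuals_notified=False,
        commissioner_notified=False,
    ),
]

# Keep every record for at least 24 months so the breach portfolio
# can be reviewed on request.
RETENTION_MONTHS = 24
```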

Are there any technologies that you find particularly promising to help a company easily advance their privacy and data governance initiatives?

Patricia: That’s super valuable information, Carole, thank you. Are there any technologies in particular that you find help companies advance in these privacy and data governance initiatives? 

Carole: I actually am seeing a number of different companies and technologies come up, and I want to distinguish companies from technologies. We are hearing more and more about the use of differential privacy to assist with privacy (less with data governance, more with the privacy components), and you're hearing more and more about homomorphic encryption as a way to manage some of the privacy implications of large-scale data use and data in transit. So you're hearing more about different technologies that can assist in the policy response to privacy, but they're not dominant yet. They're becoming much more popularized, but they're either not ready or not being used in the way you would anticipate. They do seem quite promising, though, in terms of being able to manage large data assets. Same with federated learning: we're hearing a lot about it, particularly in the healthcare space but frankly across the board, and about whether it's an approach and a technology we can use to facilitate data access across different pools of data without actually reproducing or moving any of that data. All extremely promising, but still relatively early in their adoption. In the case of homomorphic encryption, I am not a technologist, but my understanding is that the technology itself is also very early in its maturity.
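
Of the technologies Carole mentions, differential privacy is the easiest to sketch. Here is a minimal illustration of the core idea, using the standard Laplace mechanism on a count query; the epsilon value and data are made up, and this toy omits everything a production system would need.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse transform method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(flags: list, epsilon: float = 1.0) -> float:
    """Noisy count of True flags; the sensitivity of a count is 1."""
    return sum(flags) + laplace_noise(scale=1.0 / epsilon)

# Toy example: how many records in a data set have some attribute.
# One person joining or leaving changes the true count by at most 1,
# and the calibrated noise masks that difference.
data = [True, False, True, True, False]
print(private_count(data, epsilon=0.5))  # e.g. 4.3 instead of exactly 3
```

The design point is that the analyst still gets a useful aggregate (features and trends), while no single individual's presence in the data can be confidently inferred from the answer.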

Each province seems to have its own data protection/consumer protection regulation. Will we end up with a patchwork similar to the US, or will the CPPA supersede that?

Patricia: And going back to that first point you made about what customers really come to you for: each province in Canada, much like the US, does seem to have these piecemeal regulations. Is the CPPA going to fix that, or is this something we're still going to have to deal with?

Carole: No, the CPPA won't fix that. The CPPA will govern; that's the bill I was referring to, tabled in November 2020, that seeks to reform major parts of PIPEDA, our existing private sector privacy law. It won't change the patchwork effect of privacy laws across Canada. Instead, it plays a role in modernizing PIPEDA, and it seeks to be responsive to changes in data use in Canada and to changes in data legislation around the world. So it does seek to harmonize a little more closely with the GDPR; not perfectly, but a little more closely.

The patchwork effect is not just in Canada; as you go global, you're going to deal with it across different countries. It's tough on compliance, so the conversations we're having with our clients, initiated by the clients themselves, are: "Well, how compliant do I have to be? What does my risk actually look like? If I align with the GDPR, should that generally make it okay in other jurisdictions?" It's a really tough conversation to have, but it's a risk assessment, because being perfect across every jurisdiction is becoming harder and harder. There are discussions happening internationally about how to harmonize all this, and that's not only a discussion in the privacy realm; it's actually a really interesting discussion in the AI space. As different regulations, draft regulations, and approaches to artificial intelligence emerge across different countries, there's a worry that as you develop systems, different regulatory environments or regimes will apply to your system, so how do you scale something in a way that is viable? I think all of this will result in a pretty substantial risk assessment on your own part: what do you see as the dominant requirements for compliance, and then how do you tweak, as best as possible, to the individual requirements in different states? We're also seeing a much deeper space in the startup world for supporting enterprise clients in managing privacy and data compliance around the world, because it is becoming so tricky. So maybe that will be the answer to the compliance complexity, but for now we're seeing it handled more through a risk approach.


You’ve recently co-edited a book titled ‘Leading Legal Disruption: Artificial Intelligence, and a Toolkit For Lawyers and the Law.’ What are some surprising things you discovered while editing the book, and where do you think the law is lacking?

Patricia: Got it. Thank you. We have four minutes left, and I want to ask you one last question before we open it up to Q&A. You've recently co-edited Leading Legal Disruption: Artificial Intelligence, and a Toolkit For Lawyers and the Law. Is there anything surprising you found out while you were editing, or anything you want to share about the book?

Carole: Yeah, it's an amazing compilation of essays covering a huge range of topics that are very relevant to the law of artificial intelligence. So very practical: everything from the regulatory environment straight through to contracting. What does it mean to contract for AI? What do the intellectual property implications of artificial intelligence look like? Can artificial intelligence be an inventor, a creator? There are some controversial discussions about that, which I loved. The human rights implications of artificial intelligence. So there was a wide range of really interesting topics to dig deep into.

I think the biggest surprise I had with the book, which should not be a surprise, is how quickly the landscape changes. As soon as it's published, you're already starting to think, "Huh, interesting, this has already evolved in such a meaningful way." The book was published pre-draft EU AI legislation. It was definitely published pre-NIST RFI, and who knows what will come out of that RFI. And by the way, it was published in, I think, May 2021, so you can see how fast the landscape changes. We're talking about a very, very fluid environment, and trying to stay on top of it, to stay knowledgeable, relevant, and compliant, is hard on any business: small businesses for sure, but even big ones. That's one thing I found a little surprising; I thought we'd have a bit of time to catch up, but no, it's very, very fast. And we're finding our clients really feeling it as well, which is why they're starting to reach out much more aggressively, even on the proactive side, to say: we need help building out what this program could look like, or understanding what the liability implications may be, or just contracting for some of this. We have to look at our procurement process more closely, because we're not sure that our procurement process and our vendor assessments properly calibrate for the risk we might be taking on with this particular AI vendor. So clients are becoming a little more sophisticated in that respect, and they're starting to reach out much more proactively.

How would you advise on the risk of third party data processors and keep them compliant with your information governance policies? 

Patricia: Amazing. Thank you for that. I look forward to taking a look at that book as well. Hopefully soon. We do have a question from Chris Howard, who says, ‘Good to see you both. How would you advise on the risk of third party data processors (eg. identity reconcilers, enrichment) and keep them compliant with your info. gov. policies?’ 

Carole: The one thing you would be doing for sure is making sure you're contracting for the risks you're worried about. Whatever obligations you have, you can pass them through to your third party processors, and in fact you're required to do so. Your contracting process will have to ensure (and we're seeing this more and more) that your third party processors have reviewed your policies and align and agree with them. We're also seeing a much more robust diligence process for any third party processors: for any vendor that a controller (a data collector) takes on, the diligence process before actually procuring the system or the service is becoming much more intense. The security addenda attached to some of these contracts are really long, and the data protection addenda may be very long as well. Most companies are trying to take a more streamlined, practical approach, but always bearing in mind the requirements they need to cover off before they can agree to work with a vendor. Something we're finding as well is that some of our clients on the procurement side (the procurer side, if you will) are instituting much lengthier vendor risk assessments that cover various areas of data protection, and increasingly AI, if that's the kind of vendor they're going to be working with. You have to know what questions to ask, especially in the AI world, which is still very much emerging: turn your mind to those questions, know what's relevant, and then document the outcome, because that's sometimes one of your best sources of protection. Also, if you are a purchaser or the controller, ask for those diligence documents. Ask for the third party audits if they're claiming to align with a particular standard. Ask for the privacy policies. Then, in your contract, make sure you connect those dots and bind your vendor to your requirements and obligations as best you can.


Patricia: Amazing. Carole, thank you so much for your time, it’s been such a delight. Chris, thank you for your question. Take care. Have a great day.
