Canadian Privacy Law Blog

Tuesday, May 07, 2024

Important new Ontario court decision on privilege in incident response documentation


The Ontario Divisional Court has just released a decision, LifeLabs LP v. Information and Privacy Commr. (Ontario), 2024 ONSC 2194, that should grab the attention of Canadian lawyers who work in cyber incident response. I don’t know whether it will be appealed, and the logic of the decision is pretty sound, but I expect this isn’t over. 

In a nutshell, after a significant ransomware incident, LifeLabs was assisted by well-known cybersecurity and forensic consultants with the investigation, the remediation and the negotiation with the ransomware bad guys. As required by the privacy laws of British Columbia and Ontario, LifeLabs notified the privacy commissioners of those provinces, and the commissioners started a joint investigation. In connection with that investigation, the commissioners demanded to see the consultants’ reports, and LifeLabs claimed the reports were privileged. 

Not surprisingly, the ransomware incident was followed by a number of class action lawsuits that were still pending at all material times. 

In June 2020, the Commissioners issued a joint decision finding that LifeLabs had provided insufficient evidence to back up the privilege claim, and LifeLabs was ordered to hand over the consultants’ reports. LifeLabs sought judicial review of the order in the Ontario Divisional Court. The Court has just released its decision, upholding the IPC’s order. I’m not sure why it took so long to get to a hearing.

According to the IPC’s decision, there were five categories of records at issue:

i. The investigation report prepared by the cybersecurity firm hired by LifeLabs, which described how the cyberattack occurred.

ii. The email correspondence between the cyber intelligence firm and the cyber-attackers after the discovery of the attack by LifeLabs.

iii. An internal data analysis prepared by LifeLabs on April 28, 2020 to describe which individual health information had been affected by the breach and to notify those affected pursuant to ss. 12(1) and 12(2) of the PHIPA.

iv. A submission from LifeLabs to the Commissioners dated May 15, 2020 in response to certain specific questions, communicated through legal counsel.

v. The report of Kevvie Fowler, Deloitte LLP dated June 9, 2020 prepared as part of the representations by LifeLabs and submitted to the Commissioners for that purpose.

Other than the internal LifeLabs assessments, the records were created by consultants retained by LifeLabs’ lawyers. The cybersecurity firm had already been engaged by LifeLabs to assess the company’s security, and it was actually that firm that discovered the incident. The firm was instructed to provide its reports on the incident to legal counsel.  

The court reviewed the IPC’s privilege decision on a standard of correctness and found that it was correct. 

Before getting into the decision, it should be noted that LifeLabs claimed “solicitor client privilege” and “litigation privilege”. They are related and similar, but not the same. 

Solicitor client privilege protects communications that are made in confidence between a lawyer and their client (or third party acting on behalf of their client). In order to be privileged, the communication must be made for the purpose of seeking or giving legal advice, and the parties must have intended the communication to be confidential. Just because there’s a lawyer in the mix doesn’t make it privileged, and a third party’s involvement, like a consultant retained by the client or the lawyer, doesn’t waive that privilege.

Litigation privilege is intended to create a “zone of privacy” within which counsel can prepare draft questions, arguments, strategies or legal theories, in anticipation of litigation and for the purpose of preparing for that litigation. Documents created by others, to assist counsel, in preparing for litigation can also fit into this category. Notably, the privilege only exists while the litigation is anticipated or ongoing.

If you read the IPC’s decision, you’ll see that not much information was provided by LifeLabs (or at least not to the IPC’s satisfaction) to demonstrate that the five categories of records fit into either solicitor client privilege or litigation privilege.  In large measure, the IPC decided that LifeLabs HAD to investigate the incident and HAD an obligation to provide factual information to the IPC. It doesn’t look like the IPC was looking for actual advice given by counsel or anything related to LifeLabs’ trial strategy for their ongoing litigation. 

Ultimately, the decision turned on LifeLabs not providing evidence to the IPC’s satisfaction to back up their privilege claims.

The main conclusions, simplified a bit, are that: 

1. Facts are not privileged, even if they were collected or compiled by a lawyer.

2. If you have a statutory obligation to investigate and provide information to the regulator, the facts that are discovered in that investigation are not privileged.

3. Solicitor client privilege only protects communications that are made for the purpose of seeking or obtaining legal advice.

4. Litigation privilege only protects communications and records that are created for the dominant purpose of preparing for litigation.

This is not earth shattering, but it’s a reminder of how the law of privilege works in Canada. 

The court emphasized that even if certain communications or documents are privileged, the facts referred to or reflected in those communications may not be privileged if they exist independently, outside of the privileged context. Facts that have an independent existence outside of solicitor-client privileged communications are not automatically privileged.

The court quoted and agreed with paragraph 49 of the IPC’s decision:

Even if the communication is privileged, the facts referred to or reflected in those communications are not privileged if they exist outside the documents and are relevant and otherwise subject to disclosure. Some facts have a life outside the communication between lawyer and client but have also been communicated within the solicitor-client relationship. Facts that have an independent existence outside of solicitor-client privileged communications are not privileged. When deciding if such facts are privileged, one must keep one eye on the need to protect the freedom and trust between solicitor and client and another eye on the potential use of privilege to insulate otherwise discoverable evidence. While privilege is jealously guarded it must be interpreted to protect only what it is intended to protect and nothing more.

The court further clarified that simply depositing a document or providing counsel with a copy of a document does not automatically extend privilege to the original document. The protection of privilege is intended to safeguard the communication between lawyer and client and the adversarial preparation for litigation, not the underlying facts themselves.

Therefore, the court concluded that facts concerning the investigation or remediation, even if communicated within a privileged context, may not be privileged if they have an independent existence outside of privileged documents. 

If an organization has a legal obligation to investigate, remediate and report to the privacy commissioner, interjecting lawyers into the process does not relieve the organization of its obligation to report to the commissioner. This obligation includes cooperating with the commissioner's inquiries and providing information necessary for investigations.

The Court wrote:

[76]           Health information custodians, such as LifeLabs, cannot defeat these responsibilities by placing facts about privacy breaches inside privileged documents. Although the claims of privilege here were rejected, even if they had been accepted, this would not have defeated the ON IPC’s duty to inquire into the facts about the data breach within the control and knowledge of LifeLabs. This result flows not only from the ON IPC’s statutory mandate, but also from how litigation privilege and solicitor client privilege function.

[79]           Thus, the IPC’s statutory duty to inquire, and LifeLabs’ duty to respond, does not permit a claim of litigation privilege over facts obtained through its lawyers, even where those facts might also play a role in defending against parallel civil litigation. As Nordheimer, J. wrote in R. v. Assessment Direct, at para. 10, “the privilege does not protect information that would otherwise have to be disclosed”.  LifeLabs did not identify any litigation strategy that would be disclosed in the Investigation Report because of the Privilege Decision.

On this point, the Court agreed with the findings of the IPC:

[80]           Similarly, solicitor-client privilege does not extend to protect facts that are required to be produced pursuant to statutory duty. The ON IPC correctly articulated the law when it stated at para. 49:

… Facts that have an independent existence outside of solicitor-client privileged communications are not privileged. … While privilege is jealously guarded it must be interpreted to protect only what it is intended to protect and nothing more.

Furthermore, the court emphasized that organizations cannot use claims of privilege to shield facts about privacy breaches from the commissioner. Even if privilege is claimed over certain documents or information, it does not absolve the organization from its duty to cooperate with the commissioner's investigation and provide relevant facts. The court noted that placing unpalatable facts within privileged documents to avoid investigative orders would undermine the purpose of regulatory oversight and accountability.

Just saying something is privileged does not make it privileged. Including a lawyer in a conversation does not make it privileged. Having the lawyer hire the consultant does not automatically make it privileged. 

The IPC and the Court noted that the cybersecurity consulting firm had a prior retainer with LifeLabs and was doing essentially the same kind of work before the incident, during the incident and afterwards. Simply having the report related to the incident addressed to counsel didn’t make that report privileged. The IPC referred to a US case called In re Capital One, which LifeLabs argued was an error. The court disagreed with LifeLabs and reached the same conclusion as the IPC: 

[90]           I disagree. The In re Capital One case affords persuasive authority to support a finding that where a company has a prior retainer with a cybersecurity firm to provide essentially the same services before and after a breach, inserting  counsel’s name into the contract and stating that the deliverables would be made to counsel on behalf of the client, does not render any report prepared subject to the U.S. work product doctrine, which is akin to Canada’s litigation privilege.

Interestingly, the IPC in their March 2020 decision on privilege left the door open for LifeLabs to prove that portions of the records may include information that is subject to solicitor client or litigation privilege. 

I would have liked to have seen a bit more analysis of what is reasonably contemplated litigation and dominant purpose, in the context of the discussion of litigation privilege. The reality is that in the aftermath of an incident like this, litigation is almost certain to follow. Much of the response or even the approach to the incident response is informed by that likelihood. Many records are created in anticipation of defending litigation, but those records are also useful for (or maybe necessary for) dealing with the commissioner’s investigation. Is 50/50 dominant enough? And some of these records would be created because that’s what’s expected of a reasonably prudent company. Is 33/33/33 dominant enough? Should we create different tracks in incident response, assigning certain investigators to the litigation track and others to the commissioner reporting track?

Maybe we should consider amending our privacy laws (or Evidence Acts more generally) to say that the provision of information to a regulator pursuant to a statutory duty does not amount to a waiver of privilege as far as third parties are concerned.

I think lawyers who work in this area will have some interesting discussions about this decision.

It will be interesting to consider how this affects certain activities that take place outside of the context of dealing with an active incident. For example, I may be retained by a client to provide them with my assessment of whether they are complying with their safeguarding obligations under privacy laws. Often, an engagement like that involves working with expert consultants who examine the network security, do penetration testing and benchmark against best practices. New facts are uncovered that will be included in my opinion and advice to the client, and at that stage there is no obligation to assist any privacy regulator in that endeavour. The new facts were “uncovered” or discovered only for the purpose of providing legal advice. I think there are arguments that can be made in both directions regarding whether those new facts can be privileged. That’s a discussion for another day …

I should add that this decision doesn’t create any new law about privilege, nor does it put a dizzying spin on privilege law, but it serves as a reminder that you can’t throw a blanket of privilege over everything associated with incident response. I also don’t think it does away with privilege in connection with incident response. I have provided a lot of advice to a lot of organizations, and I’ve worked with a lot of outside consultants in that context. I remain confident that my communications with my clients, in the context of them seeking my legal advice, are untouched by this decision. 

 


Monday, March 04, 2024

Canada's New "Online Harms" bill - an overview and a few critiques

 It is finally here: the long-anticipated Online Harms bill. It was tabled in Parliament on February 26, 2024 as Bill C-63. It is not as bad as I expected, but it has some serious issues that need to be addressed if it is going to be Charter-compliant. It also has some room for serious improvement and it represents a real missed opportunity in how it handles “deepfakes”, synthetic explicit images and videos.


The bill is 104 pages long and it was just released, so this will be a high-level overview, and perhaps an incomplete one. But I will also focus on some issues that leapt out at me on my first few readings of it.


In a nutshell, it does a better job than the discussion paper first floated years ago by not lumping all kinds of “online harms” into one bucket and treating them all the same. This bill more acutely addresses child abuse materials and non-consensual distribution of intimate images. I think the thresholds for some of this are too low, resulting in removal by default. The new Digital Safety Commission has stunning and likely unconstitutional powers. As is often the case, there’s too much left to the regulations. But let’s get into the substance.


Who does it apply to?


So what does it do and who does it apply to? It applies to social media companies that meet a particular threshold that’s set in regulation. A “social media service” is defined as:


social media service means a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content. (service de média social)


It also specifically includes: (a) an adult content service, namely a social media service that is focused on enabling its users to access and share pornographic content; and (b) a live streaming service, namely a social media service that is focused on enabling its users to access and share content by live stream.


This seems intended to capture sites like PornHub and OnlyFans, but I think there are arguments that could be made to say that they'll not fit within that definition. 


It specifically excludes services that do not permit a user to communicate to the public (s. 5(1)) and carves out private messaging features. So instead of going after a very long list of service providers, it is much more focused, but this can be tailored by the minister by regulation. 


New bureaucracy


The Online Harms Act creates a whole new regulatory bureaucracy, which includes the Digital Safety Commission, the Digital Safety Ombudsperson and the Digital Safety Office. The Digital Safety Commission is essentially the regulator under this legislation and I'll talk a little bit later about what its role is. The Ombudsperson is more of an advocate for members of the public and the Digital Safety Office is the bureaucracy that supports them both. As an aside, why call the bill the “Online Harms Act” but call the Commission the “Digital Safety Commission”? We have a Privacy Act and a Privacy Commissioner. We have a Competition Act and a Competition Commissioner. We have a Human Rights Act and a Human Rights Commissioner. In this bill, it’s just inelegant. 


Duty to act responsibly


The legislation will impose a duty to act responsibly with respect to harmful content by implementing processes and mitigation measures that have to be approved by the Digital Safety Commissioner of Canada. This is extremely open-ended and there is no guarantee or assurance that this will be compatible with the digital safety schemes that these companies would be setting up in order to comply with the laws of other jurisdictions. We need to be very careful that “made-in-Canada solutions” don't result in requirements that are disproportionately burdensome in light of our market size. 


The large social media companies that immediately come to mind already have very robust digital safety policies and practices, so whatever is dictated by the Digital Safety Commissioner should be based on existing best practices and not trying to reinvent the wheel.


If you are a very large social media company, you likely are looking to comply with the laws of every jurisdiction where you are active. Canada is but a drop in the internet bucket and work done by organizations to comply with European requirements should be good enough for Canada. If the cost of compliance is too onerous, service providers will look to avoid Canada, or will adopt policies of removing everything that everyone objects to. And the Social Media companies will be required to pay for the new digital bureaucracy, so that adds significantly to their cost of doing business in Canada.


In addition to having to have government approved policies, the Bill does include some mandatory elements like the ability of users to block other users and flag harmful content. They also have to make a “resource person” available to users to hear concerns, direct them to resources and provide guidance on the use of those resources. 

Age appropriate design code


One thing that I was blown away by is largely hidden in section 65. It reads …


Design features

65 An operator must integrate into a regulated service that it operates any design features respecting the protection of children, such as age appropriate design, that are provided for by regulations.


I was blown away by this for two reasons. The first is that it gives the government the power to dictate potentially huge changes or mandatory elements of an online service. And they can do this by simple regulation. Protecting children is an ostensible motive – but often a pretext – for a huge range of legislative and regulatory actions, many of which overreach. 


The second reason why I was blown away by this is that it could amount to an “Age Appropriate Design Code”, via regulation. In the UK, the Information Commissioner’s Office carried out massive amounts of consultation, research and discussion before developing the UK’s age appropriate design code. In this case, the government can do this with a simple publication in the Canada Gazette. 


Harmful content


A lot of this Bill turns on what counts as “harmful content”. It is defined in the legislation as seven different categories of content, each of which has its own specific definition. They are:


(a) intimate content communicated without consent;

(b) content that sexually victimizes a child or revictimizes a survivor;

(c) content that induces a child to harm themselves;

(d) content used to bully a child;

(e) content that foments hatred;

(f) content that incites violence; and

(g) content that incites violent extremism or terrorism.‍ 


Importantly, the bill treats the first two types of harmful content as distinct from the rest. This actually makes a lot of sense. Child sexual abuse materials are already illegal in Canada and are generally easy to identify. I am not aware of any social media service that will abide that sort of content for a second. 


The category of content called “intimate content communicated without consent” is intended to capture what is already illegal under the Criminal Code provisions on the non-consensual distribution of intimate images. The definition in the Online Harms bill expands on that to incorporate what are commonly called “deepfakes”. These are images depicting a person in an explicit manner that are either modifications of existing photographs or videos, or are completely synthetic, the product of someone's imagination or of artificial intelligence.


I 100% support including deepfake explicit imagery in this Bill and I would also 100% support including it in the Criminal Code given the significant harm that it can cause to victims, but only if the definition is properly tailored. In the Online Harms bill, the definition is actually problematic and potentially includes any explicit or sexual image. Here is the definition, and note the use of “reasonable to suspect”. 


intimate content communicated without consent means


(a) a visual recording, such as a photographic, film or video recording, in which a person is nude or is exposing their sexual organs or anal region or is engaged in explicit sexual activity, if it is reasonable to suspect that


(i) the person had a reasonable expectation of privacy at the time of the recording, and


(ii) the person does not consent to the recording being communicated; and


(b) a visual recording, such as a photographic, film or video recording, that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated.‍ (contenu intime communiqué de façon non consensuelle)


So what is the problem? The problem is that the wording “reasonable to suspect” cannot be found in the Criminal Code definition for this type of content, and there is a very good reason for that. Either content is consensual or it is not. The Criminal Code, at section 162.1, reads:


(2) In this section, "intimate image" means a visual recording of a person made by any means including a photographic, film or video recording,


(a) in which the person is nude, is exposing his or her genital organs or anal region or her breasts or is engaged in explicit sexual activity;

(b) in respect of which, at the time of the recording, there were circumstances that gave rise to a reasonable expectation of privacy; and

(c) in respect of which the person depicted retains a reasonable expectation of privacy at the time the offence is committed.


In the Criminal Code, either there is consent or there is not. In this Bill, the threshold is the dramatically low “reasonable to suspect”. All you need is a reasonable suspicion, and it is not limited to the circumstances at the time the image was taken or created, assuming we're dealing with an actual person and an actual image. The courts have said: 


The words “to suspect” have been defined as meaning to “believe tentatively without clear ground” and “be inclined to think” ... suspicion involves “an expectation that the targeted individual is possibly engaged in some criminal activity. A ‘reasonable’ suspicion means something more than a mere suspicion and something less than a belief based upon reasonable and probable grounds”.


You can be 85% confident that it is consensual, but that remaining 15% results in a reasonable suspicion that it is not. And the paragraph dealing with purported deepfakes does not specify that the image has to be of an actual person, whether synthetic or not. It could in fact be a completely fictional person created using Photoshop, posing no risk of harm to anyone. Given that the image is artificial and the circumstances of its creation are completely unknown, as is the person supposedly depicted in it, you can't help but have reasonable grounds to suspect that it “might” have been communicated non-consensually. 


Deepfakes of actual people created using artificial intelligence are a real thing and a real problem. But artificial intelligence is actually better at creating images and videos of fake people. You should not be surprised that it is being used to create erotic or sexual content depicting AI-generated people. While it may not be your cup of tea, it is completely lawful. 


And it actually gets even worse, because with respect to deepfakes, the Online Harms Act turns on whether the communication itself may have been without consent, not the creation of the image. Setting aside for a moment that a fictional person can never consent and can never withhold consent, an example immediately comes to mind, drawn directly from Canada's history of bad legislation related to technology and online mischief.


People may recall that a number of years ago, Nova Scotia passed a law called the Cyber-safety Act which was intended to address online bullying. It was so poorly drafted that it was ultimately found to be unconstitutional and thrown out.


During the time when that law was actually enforced, we had an incident in Nova Scotia where two young people discovered that their member of the legislature had previously had a career as an actor. As part of that career, she appeared in a cable television series that was actually quite popular, and in at least a couple of scenes she appeared without her top on. These foolish young men decided to take a picture from the internet, and there were hundreds of them to choose from, and tweet it. What happened next? This politician got very mad and contacted the Nova Scotia cyber cops, who threatened the young man with all sorts of significant consequences.


That image, which was taken in a Hollywood studio, presumably after the actor had signed the usual releases, would potentially fit into this category of harmful content if it were tweeted after the Online Harms Act comes into effect, because someone reviewing it on behalf of a platform after it had been flagged would have no idea where the image came from. And if anyone says it’s non-consensual, that’s enough to create reasonable suspicion. One relatively explicit scene actually looks like it was taken with a hidden camera. 


Surely, it cannot be the intention of the Minister of Justice to regulate that sort of thing. In some ways, it doesn't matter, because it would likely be found to be a violation of our freedom of expression right under section 2(b) of the Charter of Rights and Freedoms, one that cannot be justified under section 1 of the Charter.


But wait, it gets worse. With respect to the two special categories of harmful content, operators of social media services have an obligation to put in place a flagging mechanism so that objectionable content can be flagged by users. If there are reasonable grounds to believe that the content that has been flagged fits into one of those two categories, they must remove it. Reasonable grounds to believe is also a very low standard. But when you combine the two, the standard is so low that it is in the basement. Reasonable grounds to believe that there are reasonable grounds to suspect is such a low standard that it is probably unintelligible.


Deepfake images are a real, real problem. When a sexually explicit but synthetic image of a real person is created, it has significant impacts on the victim. If the drafters were doing anything other than window dressing, they would have paid very close attention to the critical definitions and how this content is handled. Instead, they have created a scheme in which anything that is explicit could be put into this category by anybody, rendering the whole thing liable to be thrown out as a violation of the Charter, thereby further victimizing vulnerable victims. And if they had gotten the definition right, which they clearly did not, it could also have been added to the Criminal Code, because the harm associated with the dissemination of explicit deepfakes is similar to the harm associated with the already criminalized non-consensual distribution of actual intimate images.


It actually gets even worse, because the Digital Safety Commission can get involved and order the removal of content. The removal of content is again based on mere reasonable grounds to believe that the material is within that category, which in turn only requires a reasonable ground to suspect a lack of consent. This is a government actor ordering the removal of expressive content, which unquestionably engages the freedom of expression right. Where you have a definition that is so broad that it can include content that does not pose any risk of harm to any individual, that definition cannot be upheld as Charter compliant.

Flagging process


If a user flags content as either sexually victimizing a child or as intimate content communicated without consent, the operator has to review it within 24 hours. The operator can only dismiss the flag if it’s trivial, frivolous, vexatious or made in bad faith, or if it has already been dealt with. If the flag is not dismissed, they MUST block the content and make it inaccessible to people in Canada. If they block it – which is clearly the default – they have to give notice to the person who posted it and to the flagger, and give them an opportunity to make representations. The timeline for this will be set in the regulations. Based on those representations, the operator must decide whether there are reasonable grounds to believe the content is that type of harmful content, and if so, they have to make it inaccessible to persons in Canada. Section 68(4) says they’d have to continue to make it inaccessible to all persons in Canada, which suggests to me they have to have a mechanism to make sure it is not reposted. There is a reconsideration process, which is largely a repeat of the original flag and review process. 


One thing that I find puzzling is that this mechanism is mandatory and does not seem to permit the platform operator to do their usual thing, which is to review material posted on their platform and simply remove it if they are of the view that it violates their platform policies. If it is clearly imagery that depicts child sexual abuse, they should be able to remove it without notifying or involving the original poster.  

Information grab


Each regulated operator has to submit a “digital safety plan” to the Digital Safety Commissioner. The required contents are enormous. It’s a full report on everything the operator does to comply with the Act, and it also includes information on all the measures used to protect children and prevent harmful content, statistics about flags and takedowns (broken down by category of content), the resources allocated by the operator to comply, and information respecting content, other than “harmful content”, that was moderated by the operator and that the operator had reasonable grounds to believe posed a “risk of significant psychological or physical harm.” But that’s not all … it also includes information about complaints, concerns heard and any research the operator has done related to safety on its platform. And, of course, “any other information provided for by regulations.” And most of this also has to be published on the operator’s platform. 


Researchers’ information grab 


The Commission can accredit persons (other than individuals) to access the inventories of electronic data described in digital safety plans. These persons must be conducting research, education, advocacy, or awareness activities related to the purposes of the Act. The Commission can grant access to those inventories and suspend or revoke accreditation if the person doesn't comply with the conditions. Accredited persons can also request access to electronic data described in digital safety plans from regulated service operators, and the Commission can order that the operator provide the data. However, this access is only allowed for research projects related to the Act's purposes.


This is another area where the parameters, which are hugely important, will be left to the regulations. There’s no explicit requirement that the accredited researcher have their research approved by a Canadian research ethics board. It’s all left to the regulations. 


We need to remember that “Cambridge Analytica” got their data from a person who purported to be an academic researcher. 


If the operator of a regulated service affected by an order requests it, the Commission may consider changing or canceling the order. The Commission may do so if it finds, according to the criteria in the regulations, that the operator can't comply with the order or that doing so would cause the operator undue hardship. An accredited person who requested an order may complain to the Commission if the operator subject to the order fails to comply.  The Commission must give the operator a chance to make representations. 


Finally, the Commission may publish a list of accredited people and a description of the research projects for which the Commission has made an order.


Submissions from the public


The Act contains a mechanism by which any person in Canada may make a submission to the Commission respecting harmful content that is accessible on a regulated service or the measures taken by the operator of a regulated service to comply with the operator’s duties under the Act. The Commission can provide information about the submission to the relevant operator and there are particular provisions to protect the identity of any employees of an operator that make a submission to the Commission. 


Complaints to the Commission


The real enforcement powers of the Commission come into play in Part 6 of the Act. Any person in Canada may make a complaint to the Commission that content on a regulated service is content that sexually victimizes a child or revictimizes a survivor or is intimate content communicated without consent. These are the two particularly acute categories of deemed “harmful content.”


The Commission has to conduct an initial assessment of the complaint and dismiss it if the Commission is of the opinion that it is trivial, frivolous, vexatious or made in bad faith; or has otherwise been dealt with. 


If the complaint is not dismissed, the Commission must (not may) give notice of the complaint to the operator and make an order requiring the operator to, without delay, make the content inaccessible to all persons in Canada and to continue to make it inaccessible until the Commission gives notice to the operator of its final decision. This is an immediate takedown order without any substantial consideration of the merits of the complaint. All they need is a non-trivial complaint. I don’t mind an immediate takedown if one reasonably suspects the content is child sexual abuse material, but the categories are broader than that.


The operator must ask the user who posted the content on the service whether they consent to their contact information being provided to the Commission. If the user consents, the operator must provide the contact information to the Commission. 


“Hey, you’re being accused of posting illegal content on the internet, do you want us to give your information to the Canadian government?”


The Commission must give the complainant and the user who communicated the content on the service an opportunity to make representations as to whether the content is content that fits into those categories of harmful content. 


Now here is where the rubber hits the road: The Commission must decide whether there are “reasonable grounds to believe” that the content fits into those categories. In a criminal court, the court would have to consider whether the content fits the definition, beyond a reasonable doubt. In a civil court, the court would have to consider whether the content fits the definition, on a balance of probabilities. Here, all the Commission needs to conclude is whether there are “reasonable grounds to believe.” If they do, they issue an order that it be made permanently inaccessible to all persons in Canada.


That is a dramatically low bar for permanent removal. Again, I’m not concerned about it being used with material that is child abuse imagery or is even reasonably suspected to be. But there is a very strong likelihood that this will capture content that really is not intimate content communicated without consent. Recall the definition, and the use of “reasonable to suspect”:


intimate content communicated without consent means


(a) a visual recording, such as a photographic, film or video recording, in which a person is nude or is exposing their sexual organs or anal region or is engaged in explicit sexual activity, if it is reasonable to suspect that


(i) the person had a reasonable expectation of privacy at the time of the recording, and


(ii) the person does not consent to the recording being communicated; and


(b) a visual recording, such as a photographic, film or video recording, that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated.‍ (contenu intime communiqué de façon non consensuelle)


To order a permanent takedown, the Commission just needs to conclude there are reasonable grounds to believe that it is “reasonable to suspect” a lack of consent. There’s no requirement for the complainant to say “that’s me and I did not consent to that.” Unless you know the full context and background of the image or video, and know positively that there WAS consent, there will almost always be grounds to suspect that there wasn’t. And remember that the deepfake provision doesn’t specifically require that it be an actual living person depicted. It could be a complete figment of a computer’s imagination, which is otherwise entirely lawful under Canadian law. But it would still be ordered to be taken down. 


The Commission’s vast powers


The Commission has vast, vast powers. They’re breathtaking, actually. These are set out in Part 7 of the Act. Here’s part of these powers:


86 In ensuring an operator’s compliance with this Act or investigating a complaint made under subsection 81(1), the Commission may, in accordance with any rules made under subsection 20(1),


(a) summon and enforce the appearance of persons before the Commission and compel them to give oral or written evidence on oath and to produce any documents or other things that the Commission considers necessary, in the same manner and to the same extent as a superior court of record;


(b) administer oaths;


(c) receive and accept any evidence or other information, whether on oath, by affidavit or otherwise, that the Commission sees fit, whether or not it would be admissible in a court of law; and


(d) decide any procedural or evidentiary question.


And check out these “Rules of evidence” (or absence of rules of evidence) for the Commission:


87 The Commission is not bound by any legal or technical rules of evidence. It must deal with all matters that come before it as informally and expeditiously as the circumstances and considerations of fairness and natural justice permit.


If the Commission holds a hearing – and it is entirely within its discretion to determine when a hearing is appropriate – the hearing must be held in public, unless it isn’t. There’s a laundry list of reasons why the Commission can decide to close all or part of a hearing to the public. 


I don’t expect we’ll see hearings for many individual complaints.


Inspectors


The next part is staggering. Under section 90, the Commission can designate “inspectors” who get a “certificate of designation”. Their powers are set out in section 91. Without a warrant and without notice, an inspector can enter any place in which they have reasonable grounds to believe there is any document, information or other thing relevant to verifying compliance or preventing non-compliance with the Act. Once they’re in the place, they can: 


(a) examine any document or information that is found in the place, copy it in whole or in part and take it for examination or copying;


(b) examine any other thing that is found in the place and take it for examination;


(c) use or cause to be used any computer system at the place to examine any document or information that is found in the place;


(d) reproduce any document or information or cause it to be reproduced and take it for examination or copying; and


(e) use or cause to be used any copying equipment or means of telecommunication at the place to make copies of or transmit any document or information.


They can force any person in charge of the place to assist them and provide documents, information and any other thing. And they can bring anybody else they think is necessary to help them exercise their powers or perform their duties and functions.


There’s also a standalone requirement to provide information or access to an inspector:


93 An inspector may, for a purpose related to verifying compliance or preventing non-compliance with this Act, require any person who is in possession of a document or information that the inspector considers necessary for that purpose to provide the document or information to the inspector or provide the inspector with access to the document or information, in the form and manner and within the time specified by the inspector.


Holy crap. Again, no court order, no warrant, no limit, no oversight.


It’s worth noting that most social media companies don’t operate out of Canada and international law would prevent an inspector from, for example, going to California and inspecting the premises of a company there. 


Compliance orders


The Act grants the Commission staggeringly broad powers to issue “Compliance orders”. All these orders need is “reasonable grounds to believe”. There’s no opportunity for an operator to hear the concerns, make submissions and respond. And what can be ordered is virtually unlimited. There is no due process, no oversight, no appeal of the order, and the penalty for contravening such an order is enormous: up to the greater of $25 million or 8% of the operator’s global revenue. If you use Facebook’s 2023 global revenue of roughly US$135 billion, 8% works out to about US$10.8 billion, or roughly $15 BILLION Canadian. 


94 (1) If the Commission has reasonable grounds to believe that an operator is contravening or has contravened this Act, it may make an order requiring the operator to take, or refrain from taking, any measure to ensure compliance with this Act.


This is a breathtaking power, without due process, without a hearing, without evidence and only on a “reasonable grounds to believe”. And what can be ordered is massively open-ended. 


You may note that section 124 of the Act says that nobody can be imprisoned in default of payment of a fine under the Act. The reason for this is to avoid due process. Under our laws, if there’s a possibility of imprisonment, there is a requirement for higher due process and procedural fairness. It’s an explicit decision made, in my view, to get away with a lower level of due process. 


Who pays for all this?


The Act makes the regulated operators pay to fund the costs of the Digital Safety Commission, Ombudsperson, and Office. Certainly it has some good optics that the cost of this new bureaucracy will not be paid from the public purse, but I expect that any regulated operator will be doing some math. If the cost of compliance and the direct cost of this “Digital Safety Tax” is sufficiently large, they may think again about whether to continue to provide services in Canada. We saw with the Online News Act that Meta decided the cost of carrying links to news was greater than the benefit they obtained by doing so, and then rationally decided to no longer permit news links in Canada.  

Amendments to the Criminal Code and the Canadian Human Rights Act


Finally, I agree with other commentators in reaching the conclusion that bolting on amendments to the Criminal Code and the Canadian Human Rights Act was a huge mistake and will imperil any meaningful discussion of online safety. Once again, the government royally screwed up by including too much in one bill.


The bill makes significant additions to the Criminal Code. Hate propaganda offences carry harsher penalties. The bill defines "hatred" (in line with Supreme Court of Canada jurisprudence) and creates a new hate crime: an "offence motivated by hatred."


Moreover, Bill C-63 amends the Canadian Human Rights Act. It adds the "communication of hate speech" through the Internet or similar channels as a discriminatory practice. These amendments give individuals the right to file complaints with the Canadian Human Rights Commission which, in turn, can impose penalties of up to $20,000. However, these changes concern user-to-user communication, not social media platforms, broadcast undertakings, or telecommunication service providers.


Bill C-63 further introduces amendments related to the mandatory reporting of child sexual abuse materials. They clarify the definition of "Internet service" to include access, hosting, and interpersonal communication like email. Any person providing an Internet service to the public must send all notifications to a designated law enforcement body. Additionally, the preservation period for data related to an offence is extended.


Conclusion

All in all, it is not as bad as I expected it to be. But it is not without its serious problems. Given that the discussion paper from a number of years ago was a potential disaster and much of that has been improved via the consultation process, I have some hope that the government will listen to those who want to – in good faith – improve the bill. That may be a faint hope, but unless it’s improved, it will likely be substantially struck down as unconstitutional.