It's not censorship, it's age verification. You can still access this stuff if you can prove you're an adult. Same as how children aren't allowed to buy the same material in stores. It's still being published, there's no censorship.
> any site or service that makes sexually explicit materials available
so basically, the internet.
> Canadian ISPs required to ensure that the sites are rendered inaccessible
At best, this is regulatory capture for the current tech giants; at worst, it's basically the ability to hand-pick who gets to see which sites. So yes, censorship
under the cloak of "age verification" and "protecting kids". We have heard it all before. I'm surprised they didn't somehow stuff the "terrorism" angle in there as well.
>At best, this is regulatory capture for the current tech giants, at worst, basically ability to hand pick who gets to see what sites.
It hasn't happened with any other censorship bill Canada has passed.
This includes laws on pronoun use:
Canada’s gender identity rights Bill C-16 explained
>through a process that would start with a complaint and progress to a proceeding before a human rights tribunal. If the tribunal rules that harassment or discrimination took place, there would typically be an order for monetary and non-monetary remedies. A non-monetary remedy may include sensitivity training, issuing an apology, or even a publication ban, he says.
That's not a censorship bill, that's an anti-harassment bill. Harassment is illegal everywhere: I'm not free to follow you around calling you an asshole. I could get charged for that, especially if you're my employee, tenant, or in the presence of other exacerbating factors. Canadian hate law says I'm not free to follow you around making disparaging comments about your race. C-16 expands that to say that I'm not allowed to follow you around disparaging your gender identity. That's it.
This bill, conversely, gives the government explicit power to block websites that host content that is not child-appropriate. Completely different.
Requiring an onerous age verification scheme provided by government approved vendors is a lot closer to censorship than it isn't.
Say you post stuff to your own blog, and sometimes use colorful language. A parent decides to report you to the regulatory agency, and now you have 20 days to do whatever they demand you do to remediate, or else your site will be blocked at the ISP level.
In order to have age verification, you need identity verification, i.e. tying your identity to your activity. Classic chilling effects. If you're in the closet because you come from a religious family or have a religious boss, can you risk some random site or government bureaucracy getting hacked and outing you? Strong anonymity is essential for free expression.
Then the requirement to identify yourself is friction, so sites will want to avoid it, which can only be accomplished through censorship. Ordinary sites not solely focused on X-rated content will be "moderated" down to the level of children, even when they have adult audiences, because they don't want to be locked behind the porn filter.
Instead of having diverse communities tailored to all different kinds of people and ideas, you bifurcate the world into nerfed risk-averse corporate censorship and explicit smut. The only place you're allowed to have an adult conversation is Pornhub, which is not exactly known for quality intellectual discourse.
> In order to have age verification, you need identity verification, i.e. tying your identity to your activity
I don't see any reason age verification has to tie your identity to your activity. It should be possible with modern cryptographic techniques to make a system whereby the service that checks your age doesn't find out what site the check is for, and that site doesn't find out who you are.
There are two options. One is you get a unique token which can be tied back to your identity by someone who compromises the issuing service, so if it gets compromised you're screwed.
The other is you get a generic token, or one that otherwise can't be tied back to a specific identity, in which case the token leaks and there is no way to trace who is leaking it, creating a generic bypass of the whole system.
Porn site issues you a unique token. You have an age verification service sign that token using a blind signature. You return the signed token to the porn site. The porn site can verify the signature using the public key of the verification service.
If the verification service gets compromised, attackers can record and eventually publish the mapping between unique verification tokens and users. Then any porn site, or anyone who has compromised one, that has stored its verification tokens can work out who all the users are.
The verification service doesn't see the token from the porn site. The user takes the token from the porn site and from that generates a blinded token. The blinded token is what is sent to verification service. They sign it and return it.
It is the user that then generates the signed porn token, by applying the unblinding function to the signed blinded token.
If the blinded tokens leak the porn sites cannot match them up with the tokens they issued because they don't have the parameters to unblind them.
I don't see how it gets you out of the dichotomy. If you can't trace the tokens back to the user then somebody can set up a service that will proxy sign for anyone and you have no way to know who they are. If you can, attackers can unmask legitimate users.
You're also describing a real-time system, which is subject to timing attacks. Even if you couldn't compare the tokens with each other, you could see that each time User 32323 logged in, John Smith requested a token.
I'll talk about timing attacks at the end of this.
Here's how it could work, using an RSA-based signature. The age verification service is using RSA with a modulus of N, a public exponent of e, and a private exponent of d.
To produce a signature S for a message M the age verification service computes and returns S = M^d mod N. Someone who wants to verify that S is a signature for M computes S^e mod N and if that equals M then S was a signature for M.
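To make the sign/verify arithmetic concrete, here's a toy round trip in Python. The key is a textbook-sized example (N = 3233), far too small for real use; a real modulus is thousands of bits.

```python
# Toy RSA sign/verify. Insecure textbook parameters, for illustration only.
p, q = 61, 53
N = p * q                     # modulus N = 3233
e = 17                        # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: d*e = 1 mod (p-1)(q-1)

M = 42                        # message to sign
S = pow(M, d, N)              # signature: S = M^d mod N

# Anyone with the public key (N, e) can verify: S^e mod N should equal M.
assert pow(S, e, N) == M
```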
1. Porn site issues a token to User 32323. Let's call this token T.
2. User 32323 picks a random number r that is relatively prime to N. Since r is relatively prime to N, User 32323 can easily compute r' such that r r' = 1 mod N.
3. User 32323 asks the age verification service to sign r^e T.
4. The age verification service, after receiving proof that the user is an adult, which probably involved the user providing government ID that shows their real identity, signs r^e T.
Remember, to sign, the age verification service raises the message it is signing to the power of d mod N, which in this case gives r^(ed) T^d = r T^d mod N. The age verification service returns r T^d to User 32323.
5. User 32323 can multiply that r T^d by r', giving T^d mod N.
Note that T^d mod N is the signature that the age verification service would have generated if it had been given the token T directly to sign, instead of having been given r^e T.
The net result is that the age verification service has signed T without ever having seen T. They only saw r^e T.
6. User 32323 can return their token T back to the porn site, along with the signature S = T^d mod N, and a note telling the porn site which age verification service was used.
7. The porn site looks up the modulus N and public exponent e for that age verification service, computes S^e mod N, and sees that this equals T. That tells them that an adult used the age verification service to get T signed, so they allow the account to be created.
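The steps above can be sketched end-to-end in Python. Everything here is a toy: the key is a textbook-sized example and the token is an arbitrary number, but the algebra is exactly the blinding described (the service signs r^e T and never sees T itself).

```python
# Toy demonstration of RSA blind signatures for anonymous age tokens.
# Textbook-sized, insecure parameters -- illustration only.
import math
import secrets

# Age verification service's RSA keypair.
p, q = 61, 53
N = p * q                          # modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

# 1. Porn site issues a token T to User 32323.
T = 99

# 2. User picks a random r coprime to N and computes its inverse r'.
while True:
    r = secrets.randbelow(N - 2) + 2
    if math.gcd(r, N) == 1:
        break
r_inv = pow(r, -1, N)              # r * r_inv = 1 mod N

# 3. User blinds the token: M = r^e * T mod N, and sends M to the service.
M = (pow(r, e, N) * T) % N

# 4. Service (after checking ID) signs the blinded message:
#    M^d = r^(ed) * T^d = r * T^d mod N. It never sees T.
blind_sig = pow(M, d, N)

# 5. User unblinds by multiplying by r', leaving S = T^d mod N.
S = (blind_sig * r_inv) % N

# 6-7. Porn site verifies with the service's public key: S^e mod N == T.
assert pow(S, e, N) == T
print("signature valid for token", T)
```

Note that the porn site only ever sees (T, S) and the service only ever sees M, which matches the claim that neither side can link the two on its own.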
If someone is trying to figure out the real identity of User 32323 they might get T and T^d mod N from the porn site. And they could get all the messages that the age verification service signed between the time T was issued and the time User 32323 submitted T^d mod N.
But for each message M there will be some r such that r^e T = M, and so any such message could be the right one [1]. You get no information other than whatever you can infer from timing.
Same for someone starting with the age verifications of a particular person and trying to figure out if any of those are for some particular porn site.
I think that age verification would probably only be done at account creation, which would mean much less timing information would be available. The risk of a timing attack could be further reduced by using a high volume verification service so that there are more verifications going on at near the same time.
You would further reduce the risk by adding some delay on your end. Wait until several hours or even a day or two after receiving T from the porn site before you return the signed T to complete your signup.
[1] There is a very small possibility of a T where there is no r that maps it to M. That could happen if T happened to have a factor in common with N. Since the N for an RSA system is constructed by multiplying two large primes (each thousands of bits) together, the chances of accidentally hitting such a T are on the order of 1 in 2^thousands. And no one knows how to deliberately construct such a T without first factoring N.
> The website blocking provisions are focused on limiting user access and can therefore be applied to websites anywhere in the world with Canadian ISPs required to ensure that the sites are rendered inaccessible. And what about the risk of overblocking? The bill not only envisions the possibility of blocking lawful content or limiting access to those over 18, it expressly permits it. Section 9(5) states that if the court determines that an order is needed, it may have the effect of preventing access to “material other than sexually explicit material made available by the organization” or limiting access to anyone, not just young people. This raises the prospect of full censorship of lawful content under court order based on notices from a government agency.
It's silly to contrast a narrow restriction on speech like "you can be charged for making statements which constitute harassment" or "this specific product cannot be advertised in certain ways" with a massive restriction on speech like "any website that does not check users' IDs can be blocked at an ISP level if it is found to have content on it which is not appropriate for children."
If you like free speech, am I "free" to "speak" loud screams directly into your ear? Am I free to speak lies to you about the safety features in my airplane, when you're buying tickets? Free speech absolutism is a childish position; what crosses a line is subjective.
I think any reasonable person would agree that this internet blocking regime crosses a line and unfairly stifles people's ability to communicate. Not every public space should be obliged to be child-appropriate; obscenity laws are best left in the 20th century.
When a kid can walk into an 18+ movie showing at a theater or buy a porn magazine from the corner store without proof of age, you'll have an argument.
Until then you're advocating for the continuation of a special exemption for online porn sites, which are worse than adult movies and magazines because they are known and habitual hosts of child sexual abuse material, non-consensual porn, and content involving child and adult sex trafficking victims.
Corner stores don't track your ID and purchases. Come on. We all know that any information you enter into a website will be used to create a profile on you. The fact that these profiles will be attached to very personal information about people's sexualities should be very troubling to you. This is a huge privacy violation.
On top of that, it's not just porn websites that this will apply to. It's any website featuring explicit material that could be "harmful to children." That includes most discussions of sex. You're going to have to register your ID to even have an R-rated conversation online. That'll have a huge chilling effect on free speech.
Children's internet access can easily be controlled by parents with readily-available tools. The problem here is parental negligence. There are plenty of ways the government can push parents into using these tools, and none of them are damaging privacy or free speech.
And don't bring child sex abuse material into this. If you're concerned about CSAM on porn sites, you should be advocating that the government investigate that. Not that they institute draconian ID-checking laws. Unless you think CSAM is only a problem when it's viewed by someone under 18?
It uses a one-drop rule to test websites. Once there's a teensy bit of adult content, and as has been pointed out this covers things like Google's search with SafeSearch off, the requirement is to block first and ask questions later.
Requiring people to be licensed or verified to access content is as much censorship as blocking it entirely. There are valid reasons not to want to be on a conservative government's list of porn users. You need to expose yourself to risk to access content? Censorship.
You expect that once this is law, that a single adult picture on the site and a complaint to the CRTC will result in a website being blocked or forced to implement age verification?
So if someone links to or sneaks a naked picture into the comment section of a tech site and then makes a complaint, that site faces serious consequences?