You also asked me to withdraw my proposal. I will not do that.
Because if I did, then from next year, all detection in messages that has been happening for 10 years would be forbidden in the EU. Forbidden in the Netherlands.
Detection will stop. Rapes will continue undetected. And that little baby I told you about: Her father would probably still be raping her.<a href="#expired_legislation">🗨</a>
And what about the victims? The victims of that babysitter in Amsterdam, who confessed to abusing and raping 90 babies and small children.
Eight years later, videos are still out there online.
The worst moment of the child's life, shared in perpetuity.
This is a real problem and it needs to be addressed. But the methods to address it need to be reasonable.
</p>
<h2id="only_clue">Images that are shared online can be the only clue for sexual abuse</h2>
<p>
This line of thinking is exactly what leads to mass surveillance.
While it is true that real clues can be detected with this method,
it is to be expected that the sheer volume of false positives will make it less likely that the relevant reports are processed.
The authorities who process these reports today <a href="https://www.bitsoffreedom.nl/2022/10/17/sex-crimes-unit-already-overwhelmed-and-eu-lawmakers-will-only-make-it-worse/">are already overwhelmed</a>.
Furthermore, the intimate lives of millions of innocent people are likely to be violated by this surveillance.
</p>
<p>
An approach that focuses on quality instead of quantity is needed.
</p>
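<p>
To see why quantity swamps quality, consider a back-of-the-envelope base-rate calculation.
The sketch below uses openly hypothetical numbers (the message volume, abuse rate, and scanner error rates are all assumptions, not official figures), but the imbalance it shows holds across a wide range of plausible values.
</p>
<pre><code># Base-rate sketch. ALL numbers are hypothetical assumptions for illustration.
messages_per_day    = 1_000_000_000  # messages scanned daily (assumed)
abuse_rate          = 1 / 1_000_000  # fraction that is actually abusive (assumed)
false_positive_rate = 0.001          # a "99.9% accurate" scanner (assumed)
true_positive_rate  = 0.9            # detection rate on real material (assumed)

abusive  = messages_per_day * abuse_rate
innocent = messages_per_day - abusive

true_reports  = abusive * true_positive_rate    # ~900 per day
false_reports = innocent * false_positive_rate  # ~1,000,000 per day

print(f"genuine reports per day: {true_reports:,.0f}")
print(f"false reports per day:   {false_reports:,.0f}")
</code></pre>
<p>
Under these assumed numbers, more than 99.9% of all reports handed to investigators would concern innocent users.
</p>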
<h2id="expired_law">Without the updated legislation it will become illegal again to scan user's data</h2>
<p>
Even without chatcontrol, the platforms will continue to have the option and the duty to delete illegal content that they are aware of.
This is true for both the old chatcontrol directive and the new chatcontrol proposal.
Platform providers can, and will continue to be able to, use their own terms and conditions to allow them to scan data on their platforms as they see fit.
</p>
<h2id="absolutely_necessary">Scanning will only happen when absolutely necessary</h2>
<p>
Not quite. Most communication software can be used by children.
And children today are using it to communicate with friends, family, teachers, trainers and more.
The same software is used by adults for their own communication needs.
Chatcontrol therefore subjects many innocent citizens to surveillance, including the children!
</p>
<p>
Is that really the EU we want our children to grow up in?
An EU where all communication of children is scrutinized?
Children also have a right to privacy, and a need for it!
</p>
<h2id="limit_by_law">The new chatcontrol law is a lot more targeted than its predecessor</h2>
<p>
The chatcontrol law that is currently and temporarily in effect does not prescribe precisely where scanning must take place,
because scanning under it is voluntary and service and platform providers can decide for themselves whether they want to scan.
The new chatcontrol law makes scanning mandatory for a lot of services and platforms.
The new proposal will therefore drastically increase surveillance in the EU.
</p>
<h2id="prevention_not_enough">Scanning will only happen if mitigations failed or were not enough</h2>
<p>
This scanning will still affect all users of a legally provided service or software, even if only a minority of users abuse it.
</p>
<h2id="approved_technology">Scanning happens only with approved technology</h2>
<p>
The fact that this technology is approved by the
"EU centre to prevent and combat child sexual abuse"
does not provide any meaningful assurance.
Similarly, the database of indicators that the EU centre manages is opaque by design.
</p>
<h2id="only_verified_abuse">Detection will only happen based on verified indicators of child sexual abuse</h2>
<p>
This is inaccurate. According to Article 44.1, there are three different types of indicators:
<ol>
<li>Indicators based on previously detected and identified abuse material</li>
<li>Indicators NOT based on previously detected and identified abuse material</li>
<li>Indicators for detecting illegal solicitation of children</li>
</ol>
Hosting and communication providers must use these indicators, as described in Article 10.1.
</p>
<h2id="allowed_to_detecd">If companies are allowed to detect, they must detect</h2>
<p>
This forces companies to surveil their customers and users.
Most companies treat their customers and users with respect
and are probably not happy about this requirement.
</p>
<h2id="dog">Chatcontrol is no worse than a police dog at the airport</h2>
<ul>
<li>The airport is not a private living space.</li>
<li>The airport is not where the police routinely looks into your private communications.</li>
<li>The airport is not a place that most people tie their identity and innermost feelings to.</li>
</ul>
<p>
The kind of information that is revealed through chatcontrol is vastly more invasive
than any baggage search at the airport.
</p>
<h2id="no_false_positive_to_police">No false positive will be passed to the police</h2>
<p>
That is beside the point.
Private conversations should enjoy the respect that they deserve and only be available to the intended recipients.
That means no mass surveillance and no circumvention of encryption.
</p>
<h2id="encrypted_environments">Scanning for malware works also in encrypted environments</h2>
<p>
No, it does not. Malware needs to be unencrypted to be executable.
Encrypted malware looks like random data and cannot reasonably be detected.
</p>
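<p>
A minimal sketch of why that is (the byte "signature" and the toy one-time-pad cipher are purely illustrative assumptions, not how any real scanner or messenger works): a scanner that looks for a known byte pattern finds it in the plaintext, but the encrypted form of the very same message is indistinguishable from random noise.
</p>
<pre><code># Illustrative only: hypothetical "known bad" signature, toy one-time-pad cipher.
import os

SIGNATURE = b"\xde\xad\xbe\xef"         # hypothetical malware byte pattern
message   = b"payload " + SIGNATURE + b" payload"

key        = os.urandom(len(message))   # one-time pad, for illustration
ciphertext = bytes(m ^ k for m, k in zip(message, key))

print(SIGNATURE in message)      # True: detectable in plaintext
print(SIGNATURE in ciphertext)   # almost certainly False: ciphertext looks random
</code></pre>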
<h2id="malware">Scanning of private content happens already today to fight malware</h2>
<p>
Malware needs to be executable to be harmful.
</p>
<ul>
<li>Text messages are not executable.</li>
<li>Most images do not contain executable code.</li>
<li>Videos do not contain executable code.</li>
</ul>
<p>
Malware detection therefore works on a different axis than child abuse detection:
a malware scanner rarely needs to process personal data at all.
</p>
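<p>
As a rough illustration (the triage logic below is a deliberately simplified assumption, though the file-format magic bytes shown are real): a scanner can often tell from the first bytes of a file whether it is even executable, and pass over plain text and media without inspecting their meaning.
</p>
<pre><code># Simplified triage sketch. Real scanners are far more sophisticated;
# only the magic-byte constants below are accurate.
MAGIC = {
    b"MZ":           "Windows executable (PE): worth deep malware analysis",
    b"\x7fELF":      "Linux executable (ELF): worth deep malware analysis",
    b"\x89PNG":      "PNG image: not executable",
    b"\xff\xd8\xff": "JPEG image: not executable",
}

def triage(data: bytes) -> str:
    """Classify data by file-format magic bytes, not by its personal content."""
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return kind
    return "no executable format detected, e.g. a plain text message"

print(triage(b"MZ\x90\x00\x03"))    # Windows executable (PE): ...
print(triage("Hi mom!".encode()))   # no executable format detected, ...
</code></pre>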
<h2id="expired_legislation">If we do not accept the new proposal, the scanning against child abuse can not continue</h2>
<p>
Yes. And that would be a reasonable outcome.
Investigators have other tools to detect and fight child sexual abuse.
It is technically possible to circumvent the client-side scanning that chatcontrol imposes.
And these methods of circumvention will be adopted by the offenders sooner or later.
Thus, it is mostly innocent citizens who will be subjected to the surveillance of chatcontrol.
Client-side scanning is a terrible idea.
</p>
<h2id="drama">CSAM will stay on the internet perpetually</h2>
<p>
Unfortunately, yes.
But it is unreasonable to expect chatcontrol to solve this.
</p>
<h2id="global_leader">We want to become a global leader in protecting children</h2>