Ylva was awarded the Big Brother Award at Bits of Freedom 2022.
See the video recording at vimeo: https://vimeo.com/797366382 (starting at timestamp 1:37:26)
Below is a transcript of the video. Some parts were difficult to transcribe because of the limited audio quality. The transcript is not official and the above video should be used as the canonical source.
Thank you. Good evening everybody. And it is a bit weird - today is my birthday. This is the strangest birthday present I've ever had. It would be rude of me to reject a birthday present. [inaudible]
My proposal is about fighting child sexual abuse. It is a crime of a scale and severity that demands that we act. Videos and pictures are shared online, also of the abuse of very young children and babies, and also with severe violence. One of the first things I did as a commissioner 3 years ago was launching a strategy against this terrible crime, focusing on prevention, on victim support and of course on police cooperation. My proposal now on the table fights the online aspect of this crime. This is important not least in the Netherlands, where 40% of the child sexual abuse material in the world is hosted🗨.
Prevention is at the heart of my proposal, but prevention alone will never be enough, unfortunately🗨. In a recent case the Dutch police arrested a man. He had been raping his baby daughter. The police could stop these rapes and rescue this little girl and two other children because the abuse was detected in Facebook Messenger. Prevention did not stop these rapes. A baby cannot say no or call a hotline🗨.
My proposal does not introduce detection in online messages. Companies have been detecting child sexual abuse in online messaging for more than 10 years🗨. Last year they sent 1.5 million reports from the European Union alone, containing more than 5 million videos and photos of sexual violence against children🗨. [inaudible]
One million of these reports, the majority, came from online messages, e-mail and chat. Every single one could contain a vital clue, often the only clue, that could lead to an investigation that rescues a child🗨. Next year, this detection of child sexual abuse in online messages will be forbidden totally in the EU, unless there is a specific law that allows it🗨. My proposal restricts detection compared to [Ursula von der Leyen?]. But my proposal makes sure that it is not totally stopped, allowing detection of the abuse to continue, only when absolutely necessary🗨.
With new safeguards, like those regulating big tech companies right now. Today, the big companies detect when they like, what they like and how they like (more or less). My proposal will limit the detection in time, in place and by law🗨.
Detection only after a risk assessment. Only if there is a significant risk of child sexual abuse. Only if prevention and mitigating measures have failed or were not enough🗨. Only through court decisions. Only with approved technology🗨. Only based on verified indicators of child sexual abuse🗨. Only then will they be allowed to do the detection. None of these safeguards exist today. That means that, with my proposal, we will have new safeguards compared with the current situation.
But there is one big difference. Today it is voluntary for the companies to detect. In my proposal this process will also be obligatory. When companies are allowed to detect, they also must detect🗨. And if [???] it only detects the abuse.
It is like a police dog at the airport, sniffing for drugs [on high-risk flights?]. Officers open only suspicious packages, when the dog barks. And like police dogs, artificial intelligence must always be subject to human oversight🗨.
And the new EU center I am proposing will vet and filter reports, so that no false positives reach the police🗨. And I will not exempt encrypted services if there is a significant risk of child sexual abuse. Today, detection of malware is allowed and possible and carried out in encrypted environments🗨. This is happening today. And I think that we must protect our children at least as much as we protect our devices from malware🗨.
I know you all feel very strongly about this. You also asked me to withdraw my proposal. I will not do that. Because if I did, then from next year, all detection in messages that has been happening for 10 years will be forbidden in the EU. Forbidden in the Netherlands. Detection will stop. Rapes will continue undetected. And that little baby I told you about: her father would probably still be raping her🗨. And what will that say about the victims? The victims of that [babysitter of?] Amsterdam, who confessed to abusing and raping 90 babies and small children. Eight years later, videos are still out there online. The worst moments of a child's life, shared in perpetuity. Trauma upon trauma🗨.
The european union is a global leader in protecting online privacy. I am proud of that. We should all be proud of that.
I also want us to become a global leader in protecting children🗨 against one of the worst crimes that we can ever imagine. I am proud of our new standards of privacy protection. But we must also protect the privacy of the victims. Thank you for listening.
With all due respect, we shall discuss the arguments put forth.
A possible source for this claim might be this EU news article, which in turn refers to the Internet Watch Foundation report from 2020. If we assume that these numbers are realistic: what is the reason that the Netherlands is such a popular hosting location for this illegal material? And in which way would chatcontrol even help with this? If the source of illegal content is known, it is easy to take it down, especially in a modern country like the Netherlands. There is no clear justification for the need for chatcontrol based on this.
Yes. But that still does not justify spying on pretty much any online conversation and treating all people who fulfill their human need for personal communication as potential offenders. The intention is good, but the means are completely overblown.
By that same logic we could also argue for cameras in every room in every home, because sexual abuse of babies happens in the offline world and only in some cases is posted online. Only then would we achieve perfect prevention.
It is true that platform providers have been scanning for illegal content before the chatcontrol proposal was drafted. It is also understandable that platform providers do not wish to have this abhorrent type of data on their platforms. And it is reasonable that platform providers detect and delete this data. However, not all data is shared equally. Data can be shared broadly and in a non-personal manner. And if a platform provider learns that a certain URL points to illegal content, they can act on it.
Chatcontrol, however, is a new type of surveillance that is highly pervasive and that invades personal online spaces to an extent not seen before.
This is a real problem and it needs to be addressed. But the methods to address it need to be reasonable.
This line of thinking is exactly what leads to mass surveillance. While it is true that real clues can be detected with this method, it is to be expected that the sheer volume of false positives will make it less likely that the relevant reports are processed. The authorities who process these reports today are already overwhelmed. Furthermore, it is likely that the intimate life of millions of innocent people will be violated due to this surveillance.
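The base-rate problem behind this expectation can be made concrete with a back-of-the-envelope calculation. All numbers below are hypothetical assumptions chosen only for illustration, not figures from the proposal or from any real scanner:

```python
# Back-of-the-envelope estimate of false positives under mass scanning.
# Every number here is a hypothetical assumption for illustration.

messages_per_day = 10_000_000_000   # assumed EU-wide daily message volume
prevalence = 1e-7                   # assumed fraction of messages that are illegal
false_positive_rate = 0.001         # assumed scanner specificity of 99.9%
true_positive_rate = 0.99           # assumed scanner sensitivity of 99%

true_hits = messages_per_day * prevalence * true_positive_rate
false_hits = messages_per_day * (1 - prevalence) * false_positive_rate

# Precision: what share of all flagged messages are actually illegal.
precision = true_hits / (true_hits + false_hits)

print(f"true reports per day:  {true_hits:,.0f}")
print(f"false reports per day: {false_hits:,.0f}")
print(f"share of reports that are real: {precision:.4%}")
```

Even with an optimistically accurate scanner, the rarity of the targeted content means false reports outnumber real ones by orders of magnitude, which is exactly the flood that overwhelms the authorities processing the reports.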
An approach that focusses on quality instead of quantity is needed.
The platforms will continue to have the option and the duty to delete illegal content that they are aware of, even without chatcontrol. This is true under both the old chatcontrol directive and the new chatcontrol proposal. Platform providers are and will still be able to use their own terms and conditions to allow them to scan data on their platforms as they see fit.
Not quite. Most communication software has the potential to be used by children. And children today are using it to communicate with friends, family, teachers, trainers and more. And the same software is used by adults to fulfill their needs to communicate. Chatcontrol subjects many innocent citizens to surveillance, including the children!
Is that really the EU we want our children to grow up in? An EU where all communication of children is scrutinized? Children also have a right to, and a need for, privacy!
The chatcontrol law that is currently and temporarily in effect does not define precisely where the scanning needs to take place, because scanning is voluntary and service and platform providers can decide for themselves whether they want to scan. The new chatcontrol law makes scanning mandatory for many services and platforms. The new proposal will therefore drastically increase surveillance in the EU.
This scanning will still affect all users of a legally provided service or software, even if it is only abused by a minority of users.
The fact that this technology is approved by the "EU centre to prevent and combat child sexual abuse" does not provide any meaningful assurance. Similarly, the database of indicators that the EU centre manages is opaque by design.
This is inaccurate. According to Article 44(1) there are 3 different types of indicators: indicators for known child sexual abuse material, indicators for new (previously unknown) material, and indicators for the solicitation of children ("grooming").
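The difference between these indicator types matters. Matching against indicators of known material can be sketched roughly as below; SHA-256 is a deliberate simplification (deployed systems use perceptual hashes such as PhotoDNA that also match re-encoded copies), and all data here is hypothetical:

```python
import hashlib

# Hypothetical indicator database: hashes of known illegal images.
# SHA-256 is a stand-in; real systems use perceptual hashes so that
# re-encoded or slightly altered copies still match.
known_indicators = {
    hashlib.sha256(b"known-illegal-image-bytes").hexdigest(),
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the indicator database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_indicators

# A cryptographic hash only matches bit-identical files: changing a
# single byte evades it. This is why "new material" and "grooming"
# indicators cannot be exact lookups at all and must instead rely on
# error-prone classifiers.
print(matches_known_material(b"known-illegal-image-bytes"))
print(matches_known_material(b"known-illegal-image-byteX"))
```

Exact or perceptual matching against known material is comparatively narrow; the other two indicator types require content analysis of everything scanned, which is where the invasiveness and the false positives come from.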
This forces companies to surveil their customers/users. Most companies view their customers/users with respect and are probably not happy about this requirement.
The kind of information that is revealed through chatcontrol is vastly more invasive than any baggage search at the airport.
That is beside the point. Private conversations should enjoy the respect that they deserve and only be available to the intended recipients. That means no mass surveillance and no circumvention of encryption.
No, it does not. Malware needs to be unencrypted to be executable. Encrypted malware looks like random data and cannot be reasonably detected.
Malware needs to be executable to be harmful.
Malware detection works on a different axis than child abuse detection: it looks for executable code, not for the content of personal communication. Personal data is therefore rarely processed by a malware scanner.
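The point that well-encrypted data is statistically indistinguishable from random noise, and therefore opaque to signature scanners, can be illustrated with a small entropy measurement. In this sketch `os.urandom` stands in for ciphertext, a reasonable assumption since output of any modern cipher should be indistinguishable from random bytes:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 is the maximum, for uniform random data)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Structured plaintext (a mock executable header plus repetitive text)
# versus random bytes standing in for ciphertext of the same length.
plaintext = b"MZ\x90\x00" + b"This program cannot be run in DOS mode." * 100
ciphertext = os.urandom(len(plaintext))

print(f"plaintext entropy:  {shannon_entropy(plaintext):.2f} bits/byte")
print(f"ciphertext entropy: {shannon_entropy(ciphertext):.2f} bits/byte")
```

Structured data like text or executable code sits well below the 8 bits/byte ceiling, while ciphertext sits right at it. A scanner looking for known byte patterns has nothing to match against in the encrypted stream, which is why malware scanning happens on the endpoint, before encryption or after decryption.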
Yes. And that would be a reasonable outcome. Investigators have other tools to detect and fight child sexual abuse. It is technically possible to circumvent the client-side scanning that chatcontrol imposes. And these methods of circumvention will be adopted by the offenders sooner or later. Thus, it is mostly innocent citizens who will be subjected to the surveillance of chatcontrol. Client-side scanning is a terrible idea.
Unfortunately, yes. But it is unreasonable to expect chatcontrol to solve this.
A noble goal. But mass surveillance is the wrong approach.