Police caught one of the web’s most dangerous paedophiles. Then everything went dark

A trail of clues helped police close in on a dangerous predator. Now, a battle over the future of end-to-end encryption could change the rules of engagement

TO HIS VICTIMS, David Wilson was a 13-year-old girl. Across a web of fake Facebook, Snapchat and Instagram accounts the 36-year-old used stolen images of young girls to befriend children and gain their trust. Then the abuse started. Wilson would send sexual images of young women and demand his victims, all of them younger than 15, send him photos and videos of themselves in return.

Soon, he turned to blackmail to force children into performing more extreme sexual acts. In some instances he forced his victims to abuse their younger brothers and sisters, some of whom were as young as four. Some of his victims, police were later told, wanted to end their lives. Between 2016 and 2020, Wilson contacted more than 5,000 children – primarily on Facebook. Around 500 of them are thought to have sent him images or videos. But, with each victim, he left a trail of clues. He was arrested for the first time in April 2015 in relation to two counts of possessing indecent images of children, but police could not link him to the crimes and took no action.

Then, in early 2017, Facebook discovered 20 accounts, all belonging to teenage boys who had been sending indecent images of themselves to an account that appeared to belong to a 13-year-old girl. That August officers from the National Crime Agency (NCA) arrested Wilson again after linking IP address data provided by Facebook to an address in King's Lynn, Norfolk, where he lived with his mother. Police also obtained CCTV footage of Wilson topping up a prepaid phone that they believed was linked to the abuse.

When officers raided the house they found the phone hidden in his bedroom. It contained a small number of illegal images but, crucially, did not contain the network of social media accounts that would have, without doubt, connected Wilson to the widespread sexual abuse of hundreds of children.

It would take another three years – and two more arrests – until investigators were able to gather enough evidence to force Wilson into pleading guilty. On February 10, 2021, a judge at Ipswich Crown Court sentenced him to 25 years. He admitted to 96 sex offences against 52 victims aged between four and fourteen. The judge called him “extremely dangerous” and a “serial” paedophile, while the officers who had spent hundreds of hours investigating his crimes branded him one of the UK’s most “prolific” child abusers. “He showed very little compassion, even when victims begged him to stop. He just ignored them and carried on,” said Tony Cook, the NCA’s head of child sexual abuse operations.

Wilson’s trail of abuse was relentless. Even as the police closed in on him, and despite the arrests in 2017 and 2018, he kept on offending. And Facebook was his platform of choice. For the police to build their case it was crucial to prove that Wilson had control of the social media accounts linked to the abuse. Officers had tracked him topping up a number of prepaid phones and requested data, through a mutual legal assistance treaty, showing the messages in his social media accounts.

In October 2019, police received 250,000 messages showing Wilson’s behaviour. “The content was full of social media accounts, showing the messages and associated friends,” Cook said ahead of Wilson’s sentencing. Investigators painstakingly pieced together Wilson’s activity and even rebuilt a phone that he had flushed down a toilet to gather yet more evidence against him. They eventually linked him to seven false female identities, spread across 14 social media accounts, eight email addresses and five prepaid mobile phone SIM cards. Each revelation brought with it a new list of victims, who were then tracked down and protected.

“The information from Facebook was absolutely crucial to this case,” Cook said. Overall, the company made 90 referrals about Wilson to the National Center for Missing and Exploited Children (NCMEC), a US non-profit organisation that helps find missing children and collects reports of online exploitation and abuse material. Under US law, technology companies have to report child sexual abuse material they find on their platforms to the NCMEC, although they are not obliged to proactively track down illegal content. The NCMEC then passes tips to law enforcement bodies around the world, which investigate and build cases. In the case of Wilson, Facebook provided data about his behaviour, including the IP address of the phone he was using, and the content of his messages.

But the system that allowed Facebook to spot Wilson, and helped police build a case against him, is about to be torn down. Since early 2019 Facebook has been working to add end-to-end encryption to Instagram and Messenger. The move, which is likely to happen in 2022, has reignited the debate around how to balance the importance of individual privacy with protecting the most vulnerable people in society. When the rest of Facebook joins WhatsApp, which turned on end-to-end encryption by default in 2016, the daily communications of billions of people will, law enforcement officials argue, vanish. Investigators in the Wilson case say it’s unlikely he would have been caught if Facebook had already been using end-to-end encryption.

The scale of online child sexual abuse is huge. Year after year, child protection agencies report increases in the amount of abuse found online and say things have got worse as more children have been at home during the pandemic. Last year, the NCMEC received 21.4 million reports of online child sexual abuse material. Across all of the companies that reported content, Facebook accounted for 20.3 million, or almost 95 per cent, of that total. Reports of child sexual abuse material have swelled in recent years as the technology used to find it has improved. And Facebook has been more aggressive at detecting and finding child sexual abuse material than many other tech firms, experts say. But the impact of turning on end-to-end encryption across Instagram and Messenger is still likely to be significant.

The NCMEC estimates that “more than half” of the tips it receives will vanish when Facebook rolls out end-to-end encryption more widely. Rob Jones, the NCA’s threat leadership director, said the move would “take away” the “crown jewels from the online child protection response”. And earlier this year Facebook executives admitted that the introduction of end-to-end encryption will make it harder for the company to find abusive and harmful content being shared on its platforms.

In response, politicians in Europe, the UK, India and the US have restarted the same arguments that defined the cryptowars of the 1990s. A few years ago they raised the spectre of terrorism to attack encryption; now the detection of child sexual abuse is being used to make their case. Demands have been made for technical “solutions” to encryption and Facebook has been encouraged to abandon its planned rollout. Laws are being drafted that could rein in the use of encryption. Meanwhile, civil liberties groups and technologists say that any technical compromises made to end-to-end encryption will weaken the security it provides to billions of people. They fear damaging encryption will allow carte blanche surveillance of entire nations and undermine the universal right to privacy.

At the heart of the debate is an alarming claim: that turning on end-to-end encryption on all messaging platforms and social networks by default would stop law enforcement from being able to catch people like Wilson. But would it? And what, if anything, can be done about it without breaking encryption? The issue is set to define the future of online communication but, after decades of debate, there remains no easy answer, no magic bullet. So what happens next?

END-TO-END ENCRYPTION HAS come a long way since Phil Zimmermann created Pretty Good Privacy, a pioneering form of end-to-end encryption, in the early 1990s. Now the technology, which scrambles messages so only the sender and intended recipient can see them, is everywhere. It’s become mainstream, with every big technology company using it in some way. Signal’s open-source encryption protocol has become the de facto standard and made it possible for companies to use end-to-end encryption at scale. There are no central databases of people’s messages that can be hacked. And, best of all, when it's turned on you don’t need to do anything. You're protected even if you've never heard of cryptography before.

“You have to protect people's private, sensitive messages,” says Will Cathcart, the head of WhatsApp, who has been in charge of the messaging service since 2019. The technology, he argues, is a way to fight and prevent crime by making people’s personal information more secure. In the time Cathcart has led WhatsApp, Facebook has become increasingly bullish about the use of end-to-end encryption in its products. In March 2019, founder and CEO Mark Zuckerberg published a 3,200-word blog post outlining his “privacy-focused vision” for social networks.

Now WhatsApp believes it has found a way to avoid sparking another cryptowar. Its solution avoids technical workarounds that could break encryption for everyone but, in doing so, raises yet more questions. “We feel like the approach we've taken is a middle ground. It is [a] balance where you give everyone the important level of security, of end-to-end encryption, but you have thoughtful, limited metadata, that you proactively use to fight abuse,” Cathcart says. He believes WhatsApp’s approach provides an “effective” way to detect people and groups who are sharing child sexual abuse material – without knowing exactly what they’re sending. “It's a hard problem to know the scale, but we're really focused on what we can do to make it happen less on WhatsApp,” Cathcart says.

What little data there is on the scale of WhatsApp’s paedophile problem suggests it is significant. WhatsApp detects and bans 300,000 accounts each month that it believes are linked to child sexual abuse material, and in 2020 alone it made 400,000 reports to the NCMEC. The difference in figures, WhatsApp says, comes from it banning all accounts it suspects share child sexual abuse material and reports only being made to the NCMEC when it finds illegal material being shared. All of this is based on everything but the content of people’s messages. To do this, WhatsApp roots out accounts in three ways: by analysing the unencrypted parts of its platform; by analysing the reports people make about groups or individuals; and by analysing the unusual behaviours hidden in the vast flow of metadata that pours into its servers every second of the day.

Paedophiles on WhatsApp don’t behave in the same way as everyone else. Most groups on WhatsApp are small (the majority include fewer than ten people) and their membership is unlikely to change much. Think about your family WhatsApp group, for example. Someone likely added all your family members at the same time and its membership has probably remained unchanged for several years.

Groups created to share child sexual abuse material don’t work like that. New members come and go and conversations can be transactional – their purpose is, after all, to share illegal content and help direct people to other sources of videos and images. This can lead to a lot of abnormal behaviour. Administrators of Telegram groups that share non-consensual sexual images have been found to ask new members to share images of their own as a form of payment, normally within a set period of time after joining. Failure to share something in time can get you kicked out of the group.

Safety teams at WhatsApp have built machine learning systems designed to detect abnormal behaviour. They combine specific knowledge about child abuse content with the ways that known offenders act. Billions of people use WhatsApp and most of them use it in similar ways. That, WhatsApp says, makes abnormal behaviour stand out amongst the noise. It might not know what people say, but WhatsApp still knows a lot about each and every one of the groups and users on its platform. It knows how long a group has existed for, how many members it has and how the members of a group behave. If, for example, someone on WhatsApp is found to have been exchanging messages with accounts known for sharing child sexual abuse material, that person will be flagged and investigated.
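
WhatsApp has not published details of how these models work, but the broad shape of metadata-only detection – scoring a group on signals such as its age, size, churn and links to previously banned accounts, then queueing high scores for human review – can be illustrated with a deliberately simplified sketch. The features, weights and threshold below are hypothetical, not WhatsApp’s.

```python
from dataclasses import dataclass


@dataclass
class GroupMetadata:
    """Metadata-only view of a group – no message content is available."""
    age_days: int                            # how long the group has existed
    member_count: int
    joins_last_week: int                     # membership churn
    leaves_last_week: int
    members_linked_to_banned_accounts: int   # overlap with previously banned accounts


def risk_score(group: GroupMetadata) -> float:
    """Toy scoring function: a higher score means more anomalous behaviour.

    Real systems use trained machine-learning models over far more signals;
    the weights and cut-offs here are invented for illustration.
    """
    score = 0.0
    churn = (group.joins_last_week + group.leaves_last_week) / max(group.member_count, 1)
    if churn > 0.5:                                      # membership turning over rapidly
        score += 2.0
    if group.age_days < 7 and group.member_count > 50:   # brand new but already large
        score += 1.5
    score += 3.0 * group.members_linked_to_banned_accounts
    return score


def flag_for_human_review(group: GroupMetadata, threshold: float = 4.0) -> bool:
    # Flagged groups go to human reviewers; nothing is banned by this check alone.
    return risk_score(group) >= threshold
```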

Much of this is based on the data linked to your account. Metadata – the who, when, where, and how of people’s actions – can reveal a lot. Just ask Edward Snowden and the NSA. And WhatsApp has plenty of it. The company’s privacy policy outlines the metadata it can collect, from your number and IP address to the phone you use and the operating system it runs. Beyond this, WhatsApp also knows when you registered, how often you message people, who you message and when you’re online. This is on top of any information you provide when signing up, such as your name and profile photo.

Using end-to-end encryption turns the current process for detecting child sexual abuse material on its head. Much of the abuse material found online today is discovered by scanning everything people send – something that isn’t possible with end-to-end encryption. Facebook, Google, and Twitter are currently among the 200-plus companies using PhotoDNA, a system developed by Microsoft that checks photos and videos uploaded to the web against a list of known child sexual abuse material. The system assigns a hash, or unique code, to each photo or video verified as child exploitation. When something is uploaded to a platform that uses PhotoDNA it is compared against this hash list to see if there’s a match.
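
PhotoDNA itself is proprietary, but the matching step it enables – comparing a fingerprint of an uploaded file against a vetted list of fingerprints of known illegal material – can be sketched as below. A plain cryptographic hash stands in for PhotoDNA’s perceptual hash, which, unlike SHA-256, still matches images after resizing or re-compression; the helper names in the usage comment are hypothetical.

```python
import hashlib
from typing import Set


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in fingerprint using SHA-256.

    PhotoDNA computes a proprietary perceptual hash that survives resizing
    and re-compression; an exact cryptographic hash is used here only to
    keep the sketch self-contained.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def is_known_abuse_image(image_bytes: bytes, known_hashes: Set[str]) -> bool:
    # Compare the upload's fingerprint against the vetted list of known material.
    return fingerprint(image_bytes) in known_hashes


# Hypothetical usage: the hash list would be supplied by bodies such as NCMEC,
# and a match would trigger a report rather than any automatic decision alone.
# if is_known_abuse_image(upload_bytes, known_hashes):
#     file_report(upload_metadata)   # hypothetical reporting helper
```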

While PhotoDNA can’t be used to scan people’s messages for known child abuse content with end-to-end encryption enabled, WhatsApp does use it and other systems to scan the unencrypted parts of its platform. PhotoDNA is used on WhatsApp group images, for example, to find known instances of child sexual abuse material. In the second half of last year, WhatsApp also started using a Google machine learning system to identify new instances of child sexual abuse in photos.

Group names are also scanned, using AI, for potential words related to child sexual abuse. WhatsApp takes data from child safety groups about the language predators use. Paedophiles often use codewords to try and disguise their behaviour. As a result, WhatsApp has designed its systems to try and identify if criminals are intentionally misspelling “child porn” as “child pr0n”, for example. The company says its machine learning systems are designed to find people highly likely to be violating its policies. Humans then review what has been flagged to make sure that accounts are only banned or sent to the NCMEC when there is sufficient evidence. When WhatsApp finds concrete evidence of abuse, such as through PhotoDNA, it automatically bans any accounts associated with it.
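
WhatsApp has not described how this classifier works, but one basic building block of such keyword matching – normalising deliberate character substitutions before checking a group name against a watch list – might look something like the sketch below. The substitution map and watch list are placeholders; a real system would use trained models and term lists supplied by child-safety organisations, across many languages.

```python
# Normalise common character substitutions in a group name before matching it
# against a watch list. Both the substitution map and the watch list are
# placeholders invented for this sketch.
SUBSTITUTIONS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

WATCH_TERMS = {"example banned phrase"}   # placeholder terms only


def normalise(name: str) -> str:
    return name.lower().translate(SUBSTITUTIONS)


def name_matches_watch_list(group_name: str) -> bool:
    cleaned = normalise(group_name)
    return any(term in cleaned for term in WATCH_TERMS)


# normalise("S3cret Gr0up") -> "secret group", which a term list can then catch.
```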

Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory who studies encryption policy and content moderation, says metadata may provide “a way forward” that doesn’t move towards using “exploitative” alternatives that could break encryption. “When you look at how much they're able to do just with metadata, or just with user avatars, even that is still fairly impressive,” she says. Lucy Qin, who works on applied cryptography at Brown University in Rhode Island, agrees. She says WhatsApp’s use of metadata for detecting abusive content is a step in the “right direction” but adds that more transparency is needed about how such systems work.

While WhatsApp claims its systems are rigorous, law enforcement agencies are less convinced by the power of metadata for bringing people to justice. The NCA and officers who investigated Wilson say that, compared to Facebook, WhatsApp provides very little information about possible offenders. “We know WhatsApp is used for child abuse,” Jones said at the time of Wilson’s sentencing in February 2021. “The fundamental problem is we don't get many reports about it because it is end-to-end encrypted.” It is also easier for predators to search for and message children on Facebook. To contact someone on WhatsApp, you need their phone number.

An NCA spokesperson adds that content from platforms not using end-to-end encryption is a “vital, direct investigative route”. They say it is “highly unlikely” that a search warrant could be made from a tip that does not include content “and instead only includes IP data and an indication from an algorithm that there is a high probability of suspicious or anomalous activity”. Simply put, the what can be far more incriminating than the who, where and when. However, it’s also true that there are plenty of ways that law enforcement can hack into people’s phones or find encryption workarounds to access data – in Wilson’s case, investigators say obtaining the phones he used was key to bringing him to justice.

Concerns have also been raised about how effective WhatsApp’s metadata moderation system can be at scale. Nitish Chandan, a researcher at the CyberPeace Foundation think tank, has extensively researched the use of WhatsApp for the sharing of child sexual abuse material in India, where nearly 500 million people use the platform. As well as finding dozens of groups sharing sexual abuse material on WhatsApp, he says it appears some of its systems don’t work as well outside of the English language. “They don't really work very well on say Hindi, Bengali or the different languages [in India].”

Cathcart says the 400,000 child sexual abuse reports WhatsApp made to the NCMEC in 2020 were ten times the number it made in 2019. He adds that improvements it has made now mean it is able to detect more potential offenders than ever before. The company points to cases in Israel and California where its data has helped to catch possible paedophiles sharing illegal content on its platform. But WhatsApp refuses to break down how many of the accounts it bans and how many of the reports it makes to the NCMEC come from its automated systems. It also says it cannot publicly reveal many details about how its machine learning system works as offenders are constantly trying to evade detection.

Facebook’s plan to expand its use of end-to-end encryption gives law enforcement and regulators another stick to beat it with. But despite the company’s history of data abuses, it already does more to track down child sexual abuse material on WhatsApp than its two big end-to-end encrypted rivals – Signal and Apple’s iMessage. In 2020, across all of its products, Apple made just 265 reports to the NCMEC. Signal didn’t make any and doesn’t collect metadata. Neither company responded to requests for comment on whether they have any systems to detect child abuse material. (Telegram, Google’s Android Messages, Zoom, Microsoft Teams and Facebook Messenger all currently offer limited support for end-to-end encryption.)

Even if metadata can reveal a lot, WhatsApp’s less high-tech systems for reporting abuse leave much to be desired. Research from the Canadian Centre for Child Protection shows that while WhatsApp allows people to report child sexual abuse material, its mechanisms for doing so are basic at best. When reporting either a person or a group, there is only a generic option to 'report' something, rather than specific options to report spam, abuse or posts that express intentions of self-harm or suicide. And even when a report is made and messages are forwarded to WhatsApp, the option to block someone deletes a chat’s messages.

An option to specifically report child sexual abuse material should be added at the very least, says Lloyd Richardson, director of IT at the Canadian Centre for Child Protection. “It costs money to deal with abuse reporting on your system,” he says. “And the problem is that a lot of these platforms that are end-to-end encrypted are protected by that.” Cathcart says changing reporting mechanisms is something WhatsApp would consider.

But Richardson argues the problems go beyond any one company. “The internet has been playing legislative catch up for the last 20 years,” he says. Richardson argues there is a balance to be found around what companies are allowed to access and says that using end-to-end encryption isn’t just a privacy issue, it’s also a business decision. “Multi-billion dollar companies shouldn't be deciding for society what is acceptable in terms of how they're going to safeguard their services. That’s the job of society and government, not big tech.”

YLVA JOHANSSON HAS the seemingly impossible job of trying to decide what to do about encryption and child safety. The European Commissioner for home affairs is overseeing Europe’s plans for tackling child sexual abuse and exploitation. “I think that end-to-end encryption will be almost everywhere,” she says. “Nobody wants to weaken end-to-end encryption. It is extremely important that we can have privacy, both for individuals, but also for companies, governments and everything. This is a sensitive issue: how we should find the right balance here.”

But the debate on whether people’s privacy and security should be compromised to allow everyone’s messages to be scanned for child sexual abuse material has gone “thermonuclear”. The UK is drafting laws on online harms, with the government calling encryption an “unacceptable risk to user safety and society”. In the United States, multiple proposed laws have been criticised for their potential to undermine encryption. Law enforcement agencies have historically demanded “exceptional access” to encrypted content, which could add backdoors and compromise the very nature of online privacy.

When it comes to proactively scanning people’s content in the name of safety there are no easy answers. In fact, European regulators had to rewrite laws last month after realising that proactive scanning methods, such as PhotoDNA, may no longer be legal. But when the technical nature of end-to-end encryption collides with the horror of child sexual abuse things get heated. Research from Unicef, the UN’s children’s fund, acknowledges that end-to-end encryption is necessary to protect people’s privacy and security, but adds that its use “impedes” efforts to remove content and identify the people sharing it. The debate around end-to-end encryption has become “polarised into absolutist positions”, Unicef’s research says. “It is absolutely wrapped in conflicting interests and needs between security and privacy that need to be unpacked and need to be debated,” says Maria Koomen, who leads the Open Governance Network for Europe, a non-profit group focused on transparency and accountability.

As Wilson’s case showed, law enforcement feel they need access to non-encrypted data, such as messages, to build cases against paedophiles. But WhatsApp argues that end-to-end encryption is critical to keeping communications safe and that metadata provides a starting point for police investigations. Sitting in the middle are politicians, like Johansson, who are under pressure to do something. “It would be a disaster if we make safe areas where paedophiles can do whatever they want,” she says. “But it would also be a disaster if we open a backdoor to encrypted communication.” The European Commission is currently consulting on regulations that would require online service providers to proactively detect child sexual abuse content by using systems like PhotoDNA. It also wants to create a European version of the NCMEC to help protect children across the continent and better connect law enforcement agencies.

But technical solutions don’t seem to exist. Last summer a leaked Commission discussion paper proposed several ways that end-to-end encryption could theoretically exist alongside the continued collection of what is being shared on messaging platforms. Cryptography experts have long disputed that such technical methods exist: everything that has been proposed, they say, would simply undermine encryption. EDRi, a coalition of 44 European civil liberties groups, said all the measures in the leaked paper would risk undermining privacy and security. Johansson is more optimistic. “I'm not saying that I will present a proposal that all internet companies will love. But I can promise to listen to their concerns,” she says.

“We're also going to find technical solutions that will not open a backdoor in an encrypted environment but also that could detect this kind of material, especially hashed material,” Johansson says. She says new systems for tackling the problem must go “deeper” than what WhatsApp has created. Key among the technical proposals put forward in the Commission’s draft was client-side scanning. Like PhotoDNA, client-side scanning relies on a central database of known child sexual abuse images and videos. But instead of every message being scanned as it passes through a company's servers, the scanning moves to people’s phones and tablets. When someone shares an image on WhatsApp, for example, it could be checked against a database before they hit send and the content becomes encrypted.
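
In outline, and assuming a device ships with (or downloads) a list of fingerprints of known material, the client-side flow might look like the sketch below. The hash function, callbacks and list are all placeholders – real proposals use perceptual hashes and leave open the crucial policy question of what happens when a match is found.

```python
import hashlib
from typing import Callable, Set


def local_fingerprint(image_bytes: bytes) -> str:
    # Placeholder fingerprint. Real proposals use perceptual hashes
    # (PhotoDNA-style), not an exact cryptographic hash.
    return hashlib.sha256(image_bytes).hexdigest()


def send_image(
    image_bytes: bytes,
    on_device_hash_list: Set[str],                 # fingerprints of known abuse images
    encrypt_and_send: Callable[[bytes], None],     # hypothetical: the app's normal E2E path
    report_match: Callable[[str], None],           # hypothetical: what to do on a hit
) -> None:
    """Check an outgoing image on the sender's device *before* encryption."""
    fp = local_fingerprint(image_bytes)
    if fp in on_device_hash_list:
        report_match(fp)                 # the policy question: block, report, or both
        return
    encrypt_and_send(image_bytes)        # otherwise, encrypt end-to-end as normal
```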

While research is ongoing into the system’s feasibility, there are plenty of risks and reasons why it may not work. Your phone would need to store a hash list of known child sexual abuse images and videos, which would take up storage space and processing power. Each new layer added to an app increases the risk of vulnerabilities. There’s also the fear that allowing this kind of scanning for child sexual abuse opens the door to similar checks against everything from terrorism material to copyright breaches.

“I think it undermines the privacy and security expectations that users have when they're using an end-to-end encrypted chat app,” Pfefferkorn says. “Once you build it, you're building it in a way that governments around the world can make demands on you, under whatever their local laws might be”. She also highlights research showing how messaging service WeChat has used the principles of hashing and client-side scanning to censor messages that the Chinese government does not like. Leading cryptographer Matthew Green, of Johns Hopkins University, has warned that even if privacy-preserving methods like this do work they could create the West’s “most powerful, unaccountable automated surveillance network ever”.

WhatsApp is, unsurprisingly, against anything that could weaken encryption. Cathcart believes that its approach of making sure everyone’s messages are encrypted and then using what data the company does gather to report abuse, is the correct one. “I don't think there's some secret magic bullet that people haven’t realised that would give law enforcement access to encrypted content,” Cathcart says. All the technical solutions that have been proposed are “very flawed,” he adds. “This has been something that people have been talking about or asking for, for literally decades, which is how you know there is not some secret, smart answer here.”

Anyone can report sexual abuse, anonymously and safely, to Crimestoppers on 0800 555 111 or online. If you’re concerned about a child’s safety, contact the Child Exploitation and Online Protection Centre at ceop.police.uk.
