
No matter where you are in the world, you will undoubtedly be familiar with news stories that highlight the horrific content that children have access to and its impact upon them.
In part, the UK’s Online Safety Act was passed to give the British Government and its media regulator, Ofcom, the legal powers to tackle these harms. In the US, Texas Governor Greg Abbott recently signed an online safety law for his state to protect children from accessing adult content.
This article is about the digital frontline responders whose job it is to identify and report child sexual abuse material (CSAM) to mitigate this harm. For the past year I have been working with Dr Bethany Jennings, Research Manager for WeProtect Global Alliance, researching the health and wellbeing of the people on the frontline responding to this content. This is some of what we discovered from a global interview-based study.
Can AI save the day?
Digital frontline responders are the essential safety workers helping to police social media platforms. Their work can include content moderation on online platforms, analysing content for hotlines, providing psychosocial support to victim-survivors, and investigating suspected cases of child sexual abuse and exploitation online, among other tasks. This is skilled work, requiring an understanding of context.
It is also challenging work that is both disturbing and repetitive. It is work that is largely ignored by society but that frequently takes a toll on the mental health of its workforce. Digital frontline responders often suffer mental health problems such as trouble sleeping, panic attacks and depression.
Many of diginomica’s readers will be aware that Artificial Intelligence (AI) models are being developed for use in content moderation and in law enforcement investigations, but opinion is divided as to whether this will ultimately eliminate the need for human frontline responders to see CSAM.
A recent BBC Radio 4 documentary interviewed Dave Willner, who was Head of Trust and Safety at OpenAI until 2023 and is now involved in developing new tools and approaches that use AI to reduce the number of humans needed to identify and remove CSAM. One of these is the Atlas of AI, a project in which large numbers of humans label content so that ChatGPT can classify material without humans in the loop. He is confident that language models can reach the point of doing this classification, but said that humans will be needed as part of the process for some time yet. Some humans (digital frontline responders) may always be required, because it is difficult to teach a machine nuance, and because criminals are agile in evading detection by adapting to the way the algorithms are designed to work.
What can be done?
At present, digital frontline responders spend four to six hours a day wading through gruesome content in order to reduce the amount of CSAM available online. Many of these responders are themselves young (often in their twenties), and because of the nature of their work they have to operate in segregated facilities and typically choose not to talk about what they see, and its impact on them, with family and friends.
As you can imagine, this takes a toll on their health and wellbeing. While all those we spoke with had some access to counselling (a condition of participating in our research), the level of access and the qualifications of the counsellors provided varied significantly. Few believed they would have access to this support once they left the role. This is especially concerning because post-traumatic stress disorder often surfaces only once the individual is no longer exposed to the traumatic environment.
We also discovered that there was often a lack of career development opportunities for responders, who were typically committed to working in child protection but (understandably) wanted to move away from their current role after a few years. Although responders are generally paid well for their work, pay often became a trap: many knew they should leave the job for the sake of their mental wellbeing but could not afford to do so. This is an especially pernicious problem in the countries in Africa, Asia and South America to which much of this work is outsourced.
Technology does, of course, have a role to play in supporting this workforce. Interviewees praised AI tooling that previews images and warns how egregious upcoming material is, so that responders can prepare themselves rather than be taken by surprise. Tools that categorise previously seen content and detect repeat images were also welcomed for reducing the physical and mental workload. Several interviewees were very impressed with facial recognition software for identifying perpetrators, though some felt it was less useful for victim classification because it cannot reliably determine a victim's age.
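To give a flavour of what "detecting repeat images" involves at a technical level, here is a minimal, purely illustrative sketch of an average-hash comparison, the kind of perceptual-hashing idea that deduplication tools build on. It assumes the Pillow library is installed; the file names and threshold are hypothetical, and production systems used by responders are far more sophisticated.

```python
# Illustrative sketch only: a minimal "average hash" comparison of the kind
# deduplication tooling builds on. Assumes Pillow is installed; file names
# and the distance threshold below are hypothetical.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size, greyscale, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests a near-duplicate image."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    h1 = average_hash("already_reviewed.jpg")  # hypothetical file
    h2 = average_hash("incoming.jpg")          # hypothetical file
    if hamming_distance(h1, h2) <= 5:          # hypothetical threshold
        print("Likely repeat image - no need for a human to review it again")
```

The point of such tooling is simply that content a responder has already reviewed, or near-copies of it, never reaches a human again.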
One of our findings is that tools which help responders navigate the reporting of CSAM could do a great deal to improve their health and wellbeing. As one interviewee put it:
I would like tooling with high accuracy that tells you who to contact for each website to get CSAM images pulled off. We send requests to have content removed but if they say it is not their responsibility we do not know if it is or not. This gives me the most frustration, more so than the images themselves. If images we have flagged are still there months later, this impacts my wellbeing the most.
My take
Dr Jennings and I recently ran a workshop at the International Policing and Public Protection Research Institute event in London to talk through our findings, and we were heartened by the level of interest in the research and by the initiatives that several employers of digital frontline responders are putting in place. We have also briefed a technology vendor directly on our recommendations. The work of trust and safety departments within service providers, and of content moderation teams, is vital in identifying and responding to CSAM online. As our study shows, it is equally important to support the dedicated individuals on the frontline of that response.
Next time you hear a news story about online harms, spare a thought for the digital frontline responders who put themselves directly in harm’s way to try to reduce the deluge of nightmarish content posted online. They deserve much more recognition in our brave new world.