A deep dive into deepfakes that demean, defraud and disinform

  • Two in five people say they have seen at least one deepfake in the last six months – including depictions of sexual content, politicians, and scam adverts
  • Only one in ten are confident in their ability to spot them
  • Ofcom sets out what tech firms can do to tackle harmful deepfakes
As new Ofcom research reveals the prevalence of online deepfakes, we look at what can be done to tackle those that cause harm.
Deepfakes are videos, pictures or audio clips made with artificial intelligence to look real. New Ofcom research, published today, has found that 43% of people aged 16+ say they have seen at least one deepfake online in the last six months – rising to 50% among children aged 8-15.[1]
Among adults who say they have seen deepfake content, one in seven (14%) say they have seen a sexual deepfake.[2] Most of the available evidence indicates that the overwhelming majority of this content features women, many of whom suffer from anxiety, PTSD and suicidal ideation because of their experiences.
Of those who say they have seen a sexual deepfake, almost two thirds (64%) say it was of a celebrity or public figure, 15% say it was of someone they know, while 6% say it depicted themselves. Worryingly, 17% thought it depicted someone under the age of 18.
The most common type of deepfake 8-15-year-olds say they have encountered was a ‘funny or satirical deepfake’ (58%), followed by a deepfake scam advert (32%).
Fewer than one in ten (9%) people aged 16+ say they are confident in their ability to identify a deepfake – although older children aged 8-15 are more likely to say so (20%).

Different deepfakes

Technological advances in Generative AI (GenAI) over the last two years have transformed the landscape of deepfake production. In a discussion paper, published today, we look at different types of deepfakes and what can be done to reduce the risk of people encountering harmful ones – without undermining the creation of legitimate and innocuous content.[3]
GenAI and synthetic content can augment TV and film; enhance photos and videos; create entertaining or satirical material; and aid the development of online safety technologies. It can also be used to facilitate industry training, medical treatments and criminal investigations.
Some deepfakes, however, can cause significant harm, particularly in the following ways:
  • Deepfakes that demean – by falsely depicting someone in a particular scenario, for example sexual activity. They can be used to extort money from the person depicted or to coerce them into sharing further sexual content.
  • Deepfakes that defraud – by misrepresenting someone else’s identity. They can be used in fake adverts and romance scams.
  • Deepfakes that disinform – by spreading falsehoods widely across the internet, to influence opinion on key political or societal issues, such as elections, war, religion or health.
In reality, there will be cases where a deepfake cuts across multiple categories. Women journalists, for example, are often the victims of sexualised deepfakes, which not only demean those featured but may contribute towards a chilling effect on critical journalism.

What tech firms could do

Addressing harmful deepfakes is likely to require action from all parts of the technology supply chain – from the developers that create GenAI models through to the user-facing platforms that act as spaces for deepfake content to be shared and amplified.
We have looked at four routes tech firms could take to mitigate the risks of deepfakes:
  • Prevention: AI model developers can use prompt filters to prevent certain types of content from being created; remove harmful content from model training datasets; and use output filters that automatically block harmful content from being generated. They can also conduct ‘red teaming’ exercises – a type of AI model evaluation used to identify vulnerabilities.[4] A simple prompt filter is sketched below.
  • Embedding: AI model developers and online platforms can embed imperceptible watermarks on content, to make it detectable using a deep learning algorithm; attach metadata to content when it is created; and automatically add visible labels to AI-generated content when it is uploaded. An illustrative provenance-metadata sketch is included below.
  • Detection: Online platforms can use automated and human-led content reviews to help distinguish real from fake content, even where no contextual data has been attached to it. For example, platforms can deploy machine learning classifiers trained on known deepfake content, as sketched below.
  • Enforcement: Online services can set clear rules within their terms of service and community guidelines about the types of synthetic content that can be created and shared on their platform, and act against users that breach those rules, for example by taking down content and suspending or removing user accounts.
These interventions are not requirements, but each could help mitigate the creation and spread of harmful deepfakes. There is no silver bullet, and tackling harmful deepfakes requires a multi-pronged approach.
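To make the ‘prevention’ route concrete, here is a minimal, illustrative prompt-filter sketch in Python. The pattern list and function name are hypothetical; a production system would combine trained classifiers, context and user history rather than relying on keyword matching alone.

```python
import re

# Hypothetical denylist for illustration only: a real prompt filter would rely on
# trained classifiers and contextual signals, not a handful of keyword patterns.
BLOCKED_PATTERNS = [
    r"\bundress\b",
    r"\bremove (her|his|their) clothes\b",
    r"\bnude\b.*\bof\b",
]

def should_block(prompt: str) -> bool:
    """Return True if a generation request should be refused before it reaches the model."""
    return any(re.search(pattern, prompt, re.IGNORECASE) for pattern in BLOCKED_PATTERNS)

print(should_block("undress the person in this photo"))  # True  -> refuse the request
print(should_block("a watercolour of a mountain lake"))   # False -> allow generation
```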
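The ‘embedding’ route can be illustrated with a simple provenance record attached to content at creation time. The function and field names below are assumptions made for illustration: real deployments typically follow an open standard such as C2PA, cryptographically sign the manifest, and pair it with an imperceptible watermark rather than relying on metadata alone.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model_name: str) -> dict:
    """Build a minimal provenance record for a piece of AI-generated content.

    Illustrative only: production systems sign and embed such manifests using
    standards like C2PA rather than shipping a bare JSON sidecar.
    """
    return {
        "ai_generated": True,
        "generator": model_name,                        # hypothetical model identifier
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # lets platforms spot tampering
    }

print(json.dumps(provenance_record(b"<image bytes>", "example-image-model"), indent=2))
```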
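Finally, the ‘detection’ route often comes down to a classifier score feeding a moderation decision. The sketch below uses a toy logistic regression on placeholder features purely to show the routing logic (auto-flag, human review, no action); real platforms train deep networks on large labelled datasets of genuine and synthetic media, and the features, thresholds and labels here are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder training data standing in for per-item features such as frequency
# artefacts or face-landmark inconsistencies; labels: 1 = known deepfake, 0 = genuine.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))
y_train = rng.integers(0, 2, size=200)

clf = LogisticRegression().fit(X_train, y_train)

def triage(features: np.ndarray, flag_threshold: float = 0.9) -> str:
    """Route a piece of content based on the classifier's deepfake probability."""
    p_fake = clf.predict_proba(features.reshape(1, -1))[0, 1]
    if p_fake >= flag_threshold:
        return "flag for removal"
    if p_fake >= 0.5:
        return "send to human review"
    return "no action"

print(triage(rng.normal(size=8)))
```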

What Ofcom is doing

Illegal deepfakes can have devastating consequences, and are often targeted at women. We’re working at pace to consult on how platforms should comply with their new duties under the Online Safety Act. That’ll include guidance on protecting women and girls.
If regulated platforms fail to meet their duties when the time comes, we will have a broad range of enforcement powers at our disposal to ensure they are held fully accountable for the safety of their users.
Gill Whitehead, Ofcom's Online Safety Group Director
When the new duties under the Online Safety Act come into force next year, regulated services such as social media firms and search engines will have to assess the risk of illegal content or activity on their platforms – including many types of deepfake content, though not all types are captured by the online safety regime. They will also have to take steps to stop such content appearing, and act quickly to remove it when they become aware of it.
In our draft illegal harms and children’s safety codes, we have recommended robust measures that services can take to tackle illegal and harmful deepfakes. These include measures relating to user verification and labelling schemes, recommender algorithm design, content moderation, and user reporting and complaints.
These represent our ‘first-edition’ codes and we are already looking at how we can strengthen them in the future as our evidence grows.
We are also encouraging tech firms that are not regulated under the Online Safety Act, such as AI model developers and hosts, to make their technology safer by design using measures we have set out today.

Notes

  1. Source: Ofcom Deepfakes Polls, June 2024. 2,000 nationally representative people in the UK aged 16+ were interviewed, and 1,000 nationally representative people in Great Britain aged 8-15 were interviewed.
  2. Of respondents aged 18+ (n=151) who thought they had seen a sexual deepfake.
  3. Discussion papers contribute to the work of Ofcom by sharing the results of our research and encouraging debate in areas of Ofcom’s remit. However, they do not necessarily represent the concluded position of Ofcom on particular matters. Today’s paper is not formal guidance for regulated services. It does not recommend or require specific actions.
  4. Today we have also published a discussion paper on Red Teaming for GenAI Harms, which sets out 10 good practices that firms can adopt today to maximise the impact of any red teaming exercises they already conduct.
