Deepfakes Unveiled: A Chilling Tale for Halloween

In the digital age, deepfakes have become a concerning phenomenon, used to manipulate public opinion, spread disinformation, and commit financial fraud. From romance scams to pornography, deepfakes pose a significant threat to individuals and society as a whole.

A recent case in the US saw a Utah woman convicted of running an online romance scam that cost her victims over $6 million. The scam involved creating fake personas to lure victims into emotional relationships. In a similar case, a 73-year-old woman was defrauded by a deepfaked voice impersonating her grandson, who supposedly needed bail money.

The use of deepfakes extends beyond scams. As of 2023, 113,000 deepfake porn videos had been uploaded to the internet, a 54% increase over the 73,000 videos uploaded in all of 2022. These videos often surface on widely used services such as Google, Cloudflare, and Twitch.

Voice deepfakes are also used by fraudsters to impersonate family members, colleagues, or public figures for financial gain or to spread false information. Most voice deepfake attacks target credit card service call centers, according to Pindrop, a voice interaction tech company.

The consequences for victims of deepfake porn can be severe, including depression, self-injury, and even suicide. To reduce the risk of deepfake abuse, be cautious about sharing personal photos and read the terms and conditions of AI apps before uploading anything. If your photos or videos are stolen and misused, contact local law enforcement.

Regulations and measures to combat the abuse of deepfake technology are evolving rapidly. The European Union (EU) AI Act, effective from August 2025, mandates that all companies creating, using, or distributing AI-generated synthetic content must clearly label such content.

The UK has strengthened laws specifically targeting intimate deepfake misuse: creating or distributing sexually explicit deepfake images without consent is punishable by up to two years in prison. Stringent age-verification rules for adult websites were also introduced in July 2025 to block underage access.

In the US, at least 39 states have laws banning the creation and distribution of nonconsensual intimate deepfakes, and more than 30 states regulate political deepfakes with disclosure or watermarking requirements. The proposed NO FAKES Act aims to protect individuals’ rights against unauthorized AI replicas of their likeness or voice.

Denmark is pioneering legislation that protects individuals’ rights to their own appearance and voice against AI-generated deepfakes. The proposal extends copyright-style protection to cover harmful deepfake creation and sharing, targeting identity theft and other violations of personal rights.

Major platforms like Facebook, Instagram, TikTok, and YouTube are mandated to deploy systems to detect, label, and remove deepfake content. Initiatives like the World Economic Forum’s Global Coalition for Digital Safety emphasize cross-sector collaboration and education to combat harmful deepfakes.

These regulations converge on common themes: consent, transparency, labeling, age verification, protection of individual rights, and penalties for malicious creators and distributors, addressing deepfake harms in pornography, romance scams, and advertising.

As we move forward, it's essential to stay vigilant against deepfake threats and support the development and implementation of effective regulations to protect individuals and society.

  1. The European Union’s AI Act, effective from August 2025, requires companies using AI-generated synthetic content to clearly label such content to promote transparency.
  2. In the UK, creating or distributing sexually explicit deepfake images without consent is punishable by up to two years in prison, as part of strengthened laws targeting intimate deepfake misuse.
  3. As major platforms like Facebook, Instagram, TikTok, and YouTube strive to protect their users, initiatives like the World Economic Forum’s Global Coalition for Digital Safety emphasize education and cross-sector collaboration to combat harmful deepfakes across social media, entertainment, politics, and criminal justice.
