Legal landscape · Updated 2026

The law is catching up.

Governments worldwide are passing laws to protect individuals from unauthorized AI-generated deepfakes and biometric identity abuse.

This is a reference guide to the legislation that matters — what each law requires, whom it protects, and how Veyon's workflows are built to align with each framework.

🇺🇸
United States · Federal Law

TAKE IT DOWN Act

● Signed into law May 2025 — platform takedown obligations effective May 2026
Key Provisions
  • Criminalizes non-consensual intimate deepfakes at the federal level — first such law in the US
  • Platforms must remove flagged non-consensual intimate AI-generated images within 48 hours of a valid takedown notice
  • Covers both real and AI-generated (deepfake) intimate imagery — no distinction between synthetic and authentic content
  • Victims can submit takedown requests directly; platforms face liability for non-compliance
  • Civil and criminal penalties for individuals who knowingly publish such content
How Veyon Helps
  • Veyon's one-click takedown workflow generates compliant notices formatted to TAKE IT DOWN Act requirements
  • Automated delivery to platforms' designated removal channels — within hours, not days
  • Continuous monitoring detects violating content so you can act within the 48-hour window
  • Every takedown is logged with timestamps and reference numbers for legal follow-through
🇪🇺
European Union · AI Regulation

EU AI Act

● In Force  August 2024 — obligations phase in through 2027; Article 50 transparency rules apply from August 2026
Key Provisions
  • Article 50: AI-generated content — including deepfakes — must be clearly labeled as synthetic. Providers and deployers bear the obligation
  • Deepfake systems face transparency obligations, with certain biometric uses classified as high-risk or prohibited depending on context; emotion recognition and biometric categorization face strict limits
  • Platforms and deployers must implement technical measures to detect and disclose AI-generated audio, video, and images
  • Fines up to €35M or 7% of global turnover for prohibited AI system violations
  • Article 4 mandates AI literacy training for operators handling AI systems that affect individuals
How Veyon Helps
  • Veyon's identity certificates serve as verifiable proof of authentic identity — directly countering unlabeled deepfakes
  • Continuous monitoring flags synthetic content bearing your likeness that lacks required disclosure labels
  • Evidence packages include metadata and detection results suitable for regulatory complaints under Art. 50
  • Your verified identity record creates a defensible baseline for enforcement authorities
🇪🇺
European Union · Data Protection

GDPR

● In Force  Since May 2018
Key Provisions
  • Article 9: Biometric data — including facial geometry and voiceprints — is a special category; processing it generally requires explicit consent or another narrow exception
  • Article 17 (Right to Erasure): Individuals can demand deletion of personal data, including AI-generated content derived from their biometrics, where no lawful basis exists
  • Article 18: Processing can be restricted pending a legal challenge — a powerful interim remedy while takedown disputes are resolved
  • Article 82: Individuals can sue data controllers for damages — including non-material harm — from unlawful biometric processing
  • No consent = no lawful basis for deepfake creation from biometric data. Period.
How Veyon Helps
  • Veyon's takedown notices invoke Art. 17 (erasure) and Art. 18 (restriction) directly, citing the specific GDPR basis for removal
  • Your Veyon identity certificate provides proof of biometric ownership — essential for Art. 82 compensation claims
  • Every detection event is logged with the data required to substantiate a formal GDPR complaint to a supervisory authority
  • Evidence packages export in formats suitable for Data Protection Authority submissions
🇬🇧
United Kingdom · Online Safety

Online Safety Act 2023

● In Force  Offences commenced January 2024 — platform duties phased in through 2025
Key Provisions
  • Sharing intimate deepfakes without consent is a criminal offence — up to two years' imprisonment where the sharing is intended to cause alarm, distress, or humiliation
  • Platforms are required to proactively identify and remove illegal content, including non-consensual intimate deepfakes
  • Ofcom (the UK regulator) has powers to fine platforms up to £18M or 10% of global turnover for non-compliance
  • The Act also covers cyberflashing and other forms of technology-facilitated abuse — broad coverage of AI-enabled harm
  • UK users can escalate non-compliant platforms directly to Ofcom with formal complaints
How Veyon Helps
  • Veyon generates takedown notices citing the Online Safety Act's specific illegal content provisions for UK-regulated platforms
  • Detection logging provides the evidence trail required for an Ofcom complaint if platforms fail to act
  • Reference numbers and timestamped records document your compliance attempts — necessary if escalation is required
  • One workflow handles both UK and EU platforms, routing notices to the correct regulatory framework automatically
🌐
Global

Emerging & Regional Legislation

● Rapidly evolving

Beyond the landmark frameworks above, jurisdictions across the world are moving on AI identity and deepfake legislation. Enforcement timelines vary, but the direction is uniform: unauthorized use of someone's likeness is becoming illegal everywhere.

🇦🇺
Australia — Criminal Code Amendment

The Criminal Code Amendment (Deepfake Sexual Material) Act 2024 criminalizes the non-consensual sharing of sexually explicit deepfakes at the federal level, with aggravated penalties where the sharer also created the material. State-level laws in NSW and Victoria provide additional remedies.

🇰🇷
South Korea — Act on Special Cases

South Korea amended its Act on Special Cases Concerning the Punishment of Sexual Crimes in 2020 to criminalize sexual deepfakes, with further 2024 amendments raising penalties to up to 7 years and criminalizing possession and viewing. Platforms must detect and remove violating content within 24 hours.

🇨🇳
China — Deep Synthesis Regulations

In force since January 2023. Requires explicit consent before generating deepfakes. AI-generated content must be labeled. Providers must verify user identity before allowing synthesis of others' likenesses.

🇺🇸
US State Laws — 40+ Jurisdictions

Illinois (BIPA), Texas, and Washington have enacted biometric privacy laws, while states including California, Texas, Georgia, and New York have passed deepfake election and intimate-imagery statutes. State law fills gaps while federal law catches up.

🇧🇷
Brazil — Lei Geral de Proteção de Dados

Brazil's LGPD mirrors GDPR in treating biometric data as sensitive. The ANPD (national authority) is developing AI-specific guidance that will cover deepfake consent requirements.

🌍
More Coming

Canada, India (DPDP Act), Japan (AI governance guidelines), and the G7 are all developing frameworks. The global consensus is clear: identity protection is a fundamental right.

Act now

Don't wait for the law to catch up.
Your identity is already at risk.

The world is acting. Protect your identity now — before someone else does it for you.

Protect my identity