Governments worldwide are passing laws to protect individuals from unauthorized AI-generated deepfakes and biometric identity abuse.
This page is a reference to the legislation that matters — what each law requires, whom it protects, and how Veyon's workflows are built to align with each framework.
Beyond the landmark frameworks above, jurisdictions across the world are advancing AI identity and deepfake legislation. Enforcement timelines vary, but the direction is uniform: unauthorized use of someone's likeness is becoming illegal almost everywhere.
In Australia, proposed amendments would criminalize the non-consensual creation and sharing of intimate deepfakes. State-level laws in NSW and Victoria already provide civil remedies, and federal legislation is expected to pass in 2026.
South Korea enacted sweeping deepfake legislation in 2020 and amended it in 2024. Creating non-consensual sexual deepfakes carries penalties of up to seven years, and platforms must detect and remove violating content within 24 hours.
China's deep synthesis rules, in force since January 2023, require explicit consent before generating deepfakes, mandate labeling of AI-generated content, and oblige providers to verify user identity before allowing synthesis of another person's likeness.
In the United States, states including California, Texas, Georgia, and New York have enacted biometric privacy laws (BIPA-style) and statutes targeting election and pornographic deepfakes. State law fills the gaps while federal law catches up.
Brazil's LGPD mirrors GDPR in treating biometric data as sensitive. The ANPD (national authority) is developing AI-specific guidance that will cover deepfake consent requirements.
Canada (AIDA), India (DPDP Act), Japan (AI governance guidelines), and the G7 are all developing their own frameworks. The global consensus is clear: identity protection is a fundamental right.
The world is acting. Protect your identity now — before someone else does it for you.
Protect my identity