US Introduces Legislation To Combat AI Deepfakes

Last week, the US introduced legislation aimed at tackling AI deepfakes and preventing the unauthorized use of original content for AI training. The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) has received broad bipartisan support.

In addition, the “Take It Down Act,” introduced last month, focuses on the removal of non-consensual intimate images, including AI-generated deepfakes.

The issue gained widespread attention in January when AI-generated deepfake nude photos of Taylor Swift went viral on X (formerly Twitter), Facebook, and Instagram, sparking a national debate about the risks associated with AI technology.

Beyond deepfakes, the COPIED Act also responds to concerns from journalists, artists, singers, and other content creators who believe AI companies have been exploiting their work without proper credit or compensation.

A Forbes article last month reported that the AI-powered search engine Perplexity AI had used Forbes' content without authorization. Wired, a technology magazine, investigated and found that Perplexity was summarizing articles despite directives in the Robots Exclusion Protocol (robots.txt) and accessing restricted sections of its website.
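For readers unfamiliar with the Robots Exclusion Protocol mentioned above: it is the convention by which a site's robots.txt file tells crawlers which pages they may fetch. The sketch below, using Python's standard urllib.robotparser, shows how a compliant crawler checks those rules before accessing a page; the URL, path, and user-agent string are illustrative placeholders, not the configuration of any real crawler named in this article.

```python
# Minimal sketch: consulting a site's robots.txt before crawling.
# Domain, path, and user-agent are hypothetical examples.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# can_fetch() returns False when robots.txt disallows this path for this agent,
# which is the signal a compliant crawler is expected to honor.
allowed = parser.can_fetch("ExampleBot/1.0", "https://www.example.com/members-only/article")
print("Fetch allowed:", allowed)
```

Ignoring this check is what publishers describe when they say a crawler accessed content "despite" the protocol; robots.txt is advisory, not technically enforced, which is part of why creators are looking to legislation instead.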

The COPIED Act would establish a system for verifying and tracking AI-generated content through a digital record known as “content provenance information,” akin to a logbook for news articles, creative works, images, and videos. The act also aims to make it illegal to tamper with this data, helping journalists and other creators protect their work from unauthorized AI use. Additionally, it would grant state officials the authority to enforce the law, allowing legal action against AI companies that remove watermarks or use content without permission or fair compensation.
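To make the “logbook” idea concrete, the sketch below shows one way tamper-evident provenance metadata can work: a manifest describing a work is hashed and signed, so any later change to the content or the record is detectable. This is only an illustration under assumed field names and a hypothetical signing key, not the mechanism specified by the COPIED Act; real-world content credentials (e.g., the C2PA standard) use standardized manifests and public-key signatures.

```python
# Illustrative sketch of tamper-evident content provenance metadata.
# Field names and the signing key are hypothetical.
import hashlib, hmac, json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def make_provenance(content: bytes, creator: str, source: str) -> dict:
    manifest = {
        "creator": creator,
        "source": source,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    # Both the signature and the content hash must match for the record to hold.
    return hmac.compare_digest(signature, expected) and \
        claimed["content_sha256"] == hashlib.sha256(content).hexdigest()

article = b"Original reporting text..."
record = make_provenance(article, creator="Example Newsroom", source="example.com")
print(verify_provenance(article, record))               # True: content and record intact
print(verify_provenance(article + b" edited", record))  # False: content was altered
```

The point of such a record is the one the bill targets: stripping or rewriting the provenance data is what breaks the chain, which is why the act would make tampering with it unlawful.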
