Cracking down on services that generate AI-driven sexually explicit content, tech giant Google has updated its advertising policy to ban websites and apps involved in creating deepfake pornography.
The new policy takes effect on May 30, 2024. Under the updated Inappropriate Content Policy, advertisers promoting deepfake apps or websites will face immediate suspension without warning.
The rule also applies to advertisers found to be providing instructions on creating deepfake pornography, or endorsing or comparing deepfake porn services.
As a result, violators will no longer be able to publish their ads on Google.
These changes come at a time when there are growing concerns over the availability of tools that allow users to create manipulated pornographic content.
Some of these apps even masquerade as wholesome applications to secure listings on the Apple App Store and Google Play Store, while simultaneously promoting their ability to generate deepfake porn on social media platforms.
Earlier Steps Taken by Google
Google's current policies already place strong restrictions on ads featuring certain sexual content.
The updates go further, explicitly targeting the promotion of "synthetic content that has been altered or generated to be sexually explicit or contain nudity," which is a clear violation of the company's rules.
Google had previously banned services that produce sexually explicit content from appearing in Shopping ads. That ban also covers tutorials and pages advertising deepfake porn generators.
The company has also given advertisers time to remove any such ads before the policy takes effect on May 30.