NEW YORK: Facebook owner Meta announced major changes on Friday to its policies on digitally created and altered media, ahead of US elections that will test its ability to police misleading content generated by new artificial intelligence technologies.
The social media giant will begin applying “Made with AI” labels to AI-generated videos, images and audio posted on its platforms next month, Vice President of Content Policy Monika Bickert said in a blog post, expanding a policy that previously covered only a narrow slice of doctored content.
Meta will also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” Bickert said, regardless of whether the content was created using AI or other tools.
The new approach shifts the company’s treatment of manipulated content from removing a limited set of posts to keeping the material up while giving viewers information about how it was made.
Meta previously announced a scheme to detect images made using other companies’ generative AI tools through invisible markers embedded in the files, but gave no start date at the time.
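Such invisible markers typically travel as metadata inside the image file itself. As a rough illustration of the idea, and not Meta’s actual detection pipeline, the sketch below scans a file for the standard IPTC “DigitalSourceType” term that metadata-compliant generators can embed in AI-made images; the naive byte-scan shortcut and the command-line usage are assumptions made for brevity.

# Illustrative sketch only: a naive check for the IPTC metadata term that
# some generative-AI tools embed in the files they produce. Real detection
# systems parse XMP/IPTC metadata properly and also check for invisible
# watermarks, which a raw byte scan like this cannot see.
import sys

# Standard IPTC DigitalSourceType value for AI-generated media.
AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_metadata_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    # Hypothetical usage: python check_marker.py photo1.jpg photo2.png
    for image_path in sys.argv[1:]:
        flag = "marker found" if has_ai_metadata_marker(image_path) else "no marker"
        print(f"{image_path}: {flag}")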
A company spokesperson said the new labeling approach would apply to content posted on Meta’s Facebook, Instagram and Threads platforms.
Meta will begin applying the more prominent “high-risk” labels immediately, the spokesperson said.
Election impact
The changes come months before the US presidential election in November, which tech researchers warn could be influenced by new generative AI technologies. Political campaigns have already begun deploying AI tools in places such as Indonesia, pushing past guidelines issued by the tools’ providers.
Meta’s Oversight Board called the company’s existing rules on manipulated media “incoherent” in February, after reviewing a Facebook video of US President Joe Biden that used doctored footage to falsely suggest he had behaved inappropriately.
The video was permitted to stay up because Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.