Meta to label AI-generated content on FB, Instagram, Threads in election year
New Delhi, Feb 6 (IANS) As policy-makers deliberate over how to curb deepfakes and AI-generated content in a year when India and the US go to elections, Meta on Tuesday said that in the coming months it will label AI-generated images that users post to Facebook, Instagram, and Threads.
Meta is adding a feature for people to disclose when they share AI-generated video or audio so the company can add a label to it.
“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” said Nick Clegg, Meta’s President of Global Affairs, in a statement.
If the company determines that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, “we may add a more prominent label if appropriate, so people have more information and context”.
Meta’s family of apps, which includes Facebook, Instagram, Messenger, and WhatsApp, is now used by 3.19 billion people daily, up from 3.14 billion.
The social networking platform said that it is also working with industry partners on common technical standards for identifying AI content, including video and audio.
“We’ve labeled photorealistic images created using Meta AI since it launched so that people know they are ‘Imagined with AI,’” said Clegg.
“We’re taking this approach through the year, during which a number of important elections are taking place around the world,” said the company.
Since AI-generated content appears across the internet, Meta is working with other companies in the industry, through forums such as the Partnership on AI (PAI), to develop common standards for identifying it.
“The invisible markers we use for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices,” Clegg noted.
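To illustrate the kind of check such metadata makes possible, the sketch below scans an image file for an embedded XMP packet carrying the IPTC digital-source-type value for AI-generated media (“trainedAlgorithmicMedia”). The file name and the crude byte-level scan are illustrative assumptions, not Meta’s actual detection pipeline, which also relies on invisible watermarks.

```python
# Hedged sketch: look for the IPTC "trainedAlgorithmicMedia" digital source type
# inside an image's embedded XMP metadata. Illustrative only; not Meta's tooling.
import sys

# IPTC NewsCodes URI used to mark media created by a generative AI model
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's embedded XMP/IPTC metadata declares an AI source.

    XMP is stored as plain XML text inside JPEG/PNG files, so a raw byte scan
    is enough for a demonstration (robust code would parse the XMP packet).
    """
    with open(path, "rb") as fh:
        data = fh.read()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    image_path = sys.argv[1] if len(sys.argv) > 1 else "example.jpg"  # hypothetical file
    if looks_ai_generated(image_path):
        print(f"{image_path}: declares an AI digital source type")
    else:
        print(f"{image_path}: carries no such marker")
```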
“We’re building industry-leading tools that can identify invisible markers at scale so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.
“These are early days for the spread of AI-generated content. As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.
“Industry and regulators may move towards ways of authenticating content that hasn’t been created using AI as well as content that has,” Clegg said.