Meta Makes ‘AI Info’ Labels Less Noticeable on Altered Content!

  • Editor
  • September 13, 2024
    Updated

Key Takeaways:

  • Meta is changing how it labels content that has been edited or modified using AI tools on Facebook, Instagram, and Threads.
  • The “AI Info” label will now be hidden in a menu for AI-edited content, rather than appearing directly under the user’s name.
  • Content fully generated by AI will still have the “AI Info” label prominently displayed under the username.
  • These changes aim to “better reflect the extent of AI used” in content, but they have raised concerns about transparency and the potential for misleading users.

Meta is updating its labeling policy for content on Instagram, Facebook, and Threads that has been edited or manipulated using generative AI.

The “AI Info” tag, which was previously displayed directly beneath a user’s name, will now be moved to a menu located in the top-right corner of images and videos edited with AI tools.


Users can click on this menu to check if AI information is available and read what modifications may have been made to the content. This shift is intended to “better reflect the extent of AI used” across images and videos on Meta’s platforms.

The change follows backlash against the earlier “Made with AI” label, introduced in July, which creators and photographers widely criticized for incorrectly tagging real photos they had taken.


The new “AI Info” label will still be displayed for content detected as generated by an AI tool, and Meta will continue to share whether the label was applied due to “industry-shared signals” or if it was self-disclosed by the user. The updates are set to roll out next week.

The “industry-shared signals” mentioned by Meta refer to systems like Adobe’s C2PA-supported Content Credentials metadata, which can be applied to any content created or edited using its Firefly generative AI tools.
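To make the idea of these signals concrete, the sketch below shows one way a developer could check a file for C2PA Content Credentials using the open-source c2patool command-line utility from the C2PA project. The file name is hypothetical and the snippet is only an illustration of reading provenance metadata in general; it does not reflect Meta’s own detection pipeline, which the company has not detailed.

    import json
    import subprocess

    # Illustrative sketch only: inspect a file's C2PA Content Credentials with
    # the open-source c2patool command-line utility (assumed to be installed
    # and on the PATH). This is not Meta's detection pipeline, which is not
    # public, and the exact JSON layout can vary between c2patool versions.

    IMAGE_PATH = "edited_photo.jpg"  # hypothetical file name

    result = subprocess.run(
        ["c2patool", IMAGE_PATH],  # prints the manifest store as JSON if present
        capture_output=True,
        text=True,
    )

    if result.returncode != 0:
        # The tool reports an error when the file carries no manifest.
        print("No Content Credentials found:", result.stderr.strip())
    else:
        store = json.loads(result.stdout)
        print("Active manifest:", store.get("active_manifest"))
        # Each manifest carries "assertions" describing, for example, the
        # actions taken on the asset, such as edits made with an AI tool.
        for label, manifest in store.get("manifests", {}).items():
            for assertion in manifest.get("assertions", []):
                print(label, "->", assertion.get("label"))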

Similar systems, such as Google’s SynthID digital watermarks, are also used to mark content generated by Google’s own AI tools. However, Meta has not disclosed which systems, or how many, it checks for these signals.

Despite the changes, some critics argue that making the AI labels less visible could make it easier for users to be misled by content that has been manipulated with AI tools, especially as these tools become more sophisticated and capable of creating highly realistic edits.

By moving the label to a less prominent location, the concern is that users may not notice the AI modifications, potentially leading to increased misinformation.

Meta’s decision to alter its labeling policy comes at a time when the use of generative AI is rapidly evolving.

As AI editing tools become more advanced and widely available, questions about the transparency and trustworthiness of content on social media platforms are becoming increasingly pertinent.

The company maintains that these changes are aimed at aligning the labeling process more closely with users’ expectations, but the shift has raised valid concerns about whether it will lead to more deceptive practices in the long run.

For more news and insights, visit AI News on our website.

Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.
