What do I need to know about labelling AI-generated content on social media?

AI - 22nd August 2024

As aviation, travel and mobility PR and marketing experts, a question that has been posed to us this week is – how does labelling AI-generated content on social media work? Should I be doing it?

In many cases, social media platforms automatically label AI-generated images and videos based on metadata describing a piece of content’s origin.
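As context for how that automatic detection can work: provenance standards such as the C2PA Content Credentials discussed further below embed a signed manifest inside the file itself, which platforms look for at upload time. The snippet below is a simplified, hypothetical byte-level check for such a manifest; it is not the official C2PA verification flow (which parses and cryptographically validates the manifest), and the file name is a placeholder.

```python
# Simplified heuristic: does this file appear to carry a C2PA/Content
# Credentials manifest? Real verification uses the full C2PA tooling;
# this only scans for the byte markers a manifest typically leaves behind.
from pathlib import Path

def looks_like_c2pa(image_path: str) -> bool:
    data = Path(image_path).read_bytes()
    # C2PA manifests are stored in JUMBF boxes, so the 'jumb' box type and
    # the 'c2pa' label are useful byte-level hints that a manifest exists.
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    print(looks_like_c2pa("sample.jpg"))  # 'sample.jpg' is a placeholder
```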

However, the Q2 2024 Sprout Pulse Survey found that 33% of people believe that disclosing whether content is AI-generated should be the brand’s responsibility. A similar proportion, 29%, say it should fall to the social network, while 17% think brands, networks and social media management platforms are all responsible.

Without labelling, potential travellers could be fed fake AI-generated images of destinations, hotels and airline experiences, negatively impacting reputable brands worldwide. This is hugely damaging in an industry where trust, safety and reputation are paramount.

We explore the current legislation, what steps social media platforms are taking and what travel, aviation and mobility marketers should be doing.

What is the Government doing about regulating AI?

In February 2024, the UK Government published its Introduction to AI Assurance guidance. Within this, it says: “… existing regulators will be responsible for interpreting and implementing the regulatory principles in their respective sectors and establishing clear guidelines on how to achieve these outcomes within a particular sector. By outlining processes for making and assessing verifiable claims to which organisations can be held accountable, AI assurance is a key aspect of broader AI governance and regulation.”

The Artificial Intelligence (Regulation) Bill was proposed in March 2024. It outlines a new body, the AI Authority, which would have various functions designed to help address AI regulation in the UK.

The European Union has adopted the EU AI Act, the world’s first comprehensive AI law, which affects both AI providers and deployers. It provides a uniform framework across all EU Member States, based on a forward-looking definition of AI and a risk-based approach. The European Commission will issue guidance on the prohibited practices before they take effect on 2 February 2025, and the Act as a whole will apply from 2 August 2026, two years after its entry into force.

Across the pond, the US Federal Communications Commission (FCC) has yet to announce similar measures for AI-generated content on social media.

What are the social platforms saying?

Meta says: “We will begin adding ‘AI info’ labels to a wider range of video, audio and image content when we detect industry standard AI image indicators or when people disclose that they’re uploading AI-generated content.”

LinkedIn says: “Content Credentials were developed using the Coalition for Content Provenance and Authenticity (C2PA) standards to make detailed information about the content’s origin and history accessible to everyone. Members can view the Content Credentials on all image and video content that has C2PA metadata attached. By providing a verifiable trail of a content’s origins and edits, Content Credentials helps LinkedIn keep digital information reliable, protect against unauthorized use, and create a transparent, secure digital environment for creators, publishers, and members.

“Although it’s not yet possible to identify and label all AI-generated content, we’re continuing to learn and gather feedback to capture more content and credentials over time.”

TikTok says: “TikTok is starting to automatically label AI-generated content (AIGC) when it’s uploaded from certain other platforms. To do this, we’re partnering with the C2PA and becoming the first video sharing platform to implement their Content Credentials technology.”

YouTube says: “We’re introducing a new tool in Creator Studio requiring creators to disclose to viewers when realistic content – content a viewer could easily mistake for a real person, place, scene, or event – is made with altered or synthetic media, including generative AI. We’re not requiring creators to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance.”

X (formerly Twitter) says: “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”). In addition, we may label posts containing misleading media to help people understand their authenticity and to provide additional context.”

How should travel, aviation and mobility brands label AI-generated content on social media?

As seen above, social media platforms will automatically begin labelling AI-generated images and videos. But should we be proactively doing so as content creators?

Being transparent and authentic with your audience has always been an important element of social media strategy. Responsible labelling ensures you maintain brand integrity and consumer trust.

Here are some instances where you should flag your use of AI:

  • Any imagery/video/audio that could cause confusion or be misleading about a product or service.
  • Any content that makes very deliberate use of AI, perhaps an obvious play on an on-brand image or company photo.
  • If it’s to demonstrate your AI capabilities.

Examples of how to go about labelling AI-generated content

  • Include a small overlay within the image or video itself, for example in the bottom right-hand corner, or flag it in the caption. Wording should be along the lines of ‘AI-Generated Content’ (see the sketch after this list for one way to automate this).
  • Within the text of your post. Examples could be ‘See our AI-generated vision for hotel rooms of the future’ or ‘We’ve created this image using AI to show you how the sunset could look from your aeroplane window during next month’s eclipse’.
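Where overlays are needed at scale, a small script can stamp the label automatically. This is a minimal sketch using the Pillow imaging library in Python; the file names and styling are placeholder assumptions, so treat it as a starting point rather than a finished workflow (most teams will do this in their design tool instead).

```python
# Minimal sketch: stamp an 'AI-Generated Content' label onto an image
# using Pillow (pip install pillow). File names below are placeholders.
from PIL import Image, ImageDraw, ImageFont

def label_ai_image(src: str, dst: str, text: str = "AI-Generated Content") -> None:
    img = Image.open(src).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Measure the text so the label sits in the bottom right-hand corner
    # with a small margin, as suggested above.
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    text_w, text_h = right - left, bottom - top
    margin = 10
    x = img.width - text_w - margin
    y = img.height - text_h - margin

    # A dark backing rectangle keeps the label legible on any photo.
    draw.rectangle((x - 5, y - 5, x + text_w + 5, y + text_h + 5), fill="black")
    draw.text((x, y), text, fill="white", font=font)
    img.save(dst)

if __name__ == "__main__":
    label_ai_image("hotel-concept.jpg", "hotel-concept-labelled.jpg")
```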

Educating your team on labelling AI-generated content on social media

  • Meet with your social media team regularly to update them on the latest AI tools, best practices, regulations and ethical considerations.
  • Set out clear guidelines for how to label AI-generated content and what wording to use to ensure they meet current regulatory standards.
  • If you use a separate content creation or graphic design team, ensure they know your guidelines for AI-generated content and flag to you if an AI tool has been used for any part of an image or video.

The future of AI content

Regulations set out by governments, and protocols adopted by social media companies, are likely to evolve in the coming months, if not weeks. Alongside this, AI technologies will continue to advance and improve.

AI-generated content is not going to be a passing fad. It will become an integral part of content creation. Labelling this type of content will build trust with your online audiences and protect you from inadvertently misleading customers.

 

Fancy reading more about AI in travel, aviation and transport? Here are four ways generative AI is transforming travel and aviation experiences.

Alternatively, get in touch to find out how our travel digital marketing experts can support your content creation.

Chiara Balachandran