Instagram's Adam Mosseri urges critical thinking in the AI-driven social media era
We are officially in the age of artificial intelligence. Chatbots like ChatGPT and Gemini, which harness the power of generative AI, are becoming more and more prominent in people's lives, and it's getting increasingly hard to identify what is real online.
Many people, including CEOs of tech giants, have raised concerns about the use of generative AI. Notably, Instagram head Adam Mosseri has shared his concerns about AI-generated content on social media platforms.
In a series of posts on Threads, Mosseri stated that social media platforms need to provide more context to help people identify AI-generated content. He says these companies should advise their users not to blindly trust the images they see, and not to mistake AI-generated content for the real thing.
Mosseri states that internet platforms need to label AI-generated content as best they can, but some content will inevitably slip through the cracks. He didn't specify which social media platforms he was talking about in his posts on Threads.
Some AI-generated images are visibly fake, and those are not the ones that Mosseri and others are most concerned about. The real worry is images where the edit is almost impossible to spot, and images based on real life that have been altered, such as the controversial photos Madonna posted.
His vision seems in line with user-led moderation systems such as Community Notes on X, and with custom moderation filters like those on YouTube or Bluesky.

In any case, social media has always been a curated version of reality, with people sharing only selected highlights of their lives. AI adds a new layer to this challenge, amplifying the 'unreal' nature of what we see. While AI may make social media appear even less authentic, one could argue it was never fully 'real' to begin with.