Google Adds AI Content Detection Tools to Gemini App
Google is rolling out a new feature in its Gemini app that addresses growing concerns about AI-generated content online. Users can now upload videos and ask whether they were created or edited with Google AI, helping combat the spread of misleading content across social media platforms. This update responds to a real problem: as AI-generated content proliferates, people are becoming increasingly hesitant to share posts for fear of spreading fake material and appearing uninformed.
Understanding SynthID Technology
The new tool leverages Google's SynthID technology, which embeds invisible digital watermarks in AI-generated images, audio, text, and video created with Google's AI tools. When a user uploads content to Gemini, the system scans both the audio and visual tracks for these imperceptible markers and reports which segments contain AI-generated elements. Because the watermark is imperceptible, it can signal provenance without degrading the quality of the content it marks.
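SynthID's image, audio, and video detectors are not publicly available, so the exact detection API cannot be shown here. Purely as a hypothetical sketch of the workflow described above, splitting an uploaded file into segments, checking each segment's visual and audio tracks for a watermark signal, and reporting which time ranges appear AI-generated, the logic might look roughly like the following; detect_visual_watermark and detect_audio_watermark are stand-in stubs, not real SynthID calls.

```python
from dataclasses import dataclass

# Placeholder detectors: SynthID's real image/audio/video checks are not
# public, so these stubs stand in for whatever scores a segment for a
# watermark signal in this illustrative sketch.
def detect_visual_watermark(frames: list) -> bool:
    return False  # stub

def detect_audio_watermark(samples: list) -> bool:
    return False  # stub

@dataclass
class SegmentResult:
    start_s: float        # segment start time (seconds)
    end_s: float          # segment end time (seconds)
    visual_flagged: bool  # watermark signal found in the video frames
    audio_flagged: bool   # watermark signal found in the audio track

def scan_upload(segments: list[tuple[float, float, list, list]]) -> list[SegmentResult]:
    """Scan each (start, end, frames, samples) segment on both tracks."""
    return [
        SegmentResult(start, end,
                      detect_visual_watermark(frames),
                      detect_audio_watermark(samples))
        for start, end, frames, samples in segments
    ]

def flagged_spans(results: list[SegmentResult]) -> list[tuple[float, float]]:
    """Return the time ranges where either track carries a watermark."""
    return [(r.start_s, r.end_s) for r in results
            if r.visual_flagged or r.audio_flagged]
```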
Broader Industry Standards Emerging
Google is working to establish SynthID as an industry standard and has partnered with Nvidia to extend its watermarking to other AI tools. Other major players, including Midjourney, OpenAI, and Meta, have instead adopted alternatives such as C2PA, which attaches provenance metadata to content rather than embedding a watermark. Despite the different approaches, all of them aim at the same goal: universal identification of AI-generated content and greater transparency.
The SynthID detection feature is now available in Gemini for files up to 100 MB and 90 seconds in length, offering users quick verification and peace of mind about the content they encounter and share online.

