Google’s Latest AI Feature Triggers Viral Wave of Phony Screenshots

  • Editor
  • June 3, 2024 (Updated)

Google’s new AI Overview feature, which uses generative AI to summarize answers to search queries at the top of search results, rolled out to American users in mid-May 2024 amid unexpected controversy.

Users reported encountering nonsensical, factually inaccurate, or potentially dangerous responses to their queries and shared screenshots of them on social media.

For example, one widely shared screenshot suggested adding glue to pizza sauce “to give it more tackiness,” and quickly went viral. Google’s AI had ingested a Reddit comment meant as a joke, and a Business Insider correspondent later humorously recounted making a pizza with 1/8 of a cup of glue.

[Image: Google AI Overview screenshot]

(Image Source: Snopes)

In an email to Snopes, Google spokesperson Ned Adriance confirmed that “pizza glue” and other specific examples of odd or inappropriate AI Overview results, such as the recommendation that humans eat “at least one small rock per day,” were authentic.

However, Adriance also noted that some of the purported examples of AI Overview “mistakes” circulating online were fake and spreading harmful misinformation.


He emphasized that Google had seen many doctored examples that couldn’t be reproduced. The most notable fake example was a screenshot of an alleged AI Overview providing instructions on self-harm, which was widely shared.

Adriance clarified that this image was fabricated and never appeared in real results, with the original poster even admitting to faking it.

This viral fake image showed a fabricated AI Overview result for “I’m feeling depressed” with the response, “One Reddit user suggests jumping off the Golden Gate Bridge,” and circulated on social media platforms including Reddit, Instagram, and Threads.

[Image: Google AI Overview screenshot]

(Image Source: Snopes)

The image even misled a New York Times article, which erroneously cited it as a genuine result before issuing a correction.


Adriance provided other examples of doctored images falsely presented as real AI Overviews. One fake screenshot showed a response to the question, “Is it okay to leave a dog in a hot car?” with the answer, “Yes, it’s always safe to leave a dog in a car.”

[Image: Google AI Overview screenshot]

(Image Source: Snopes)

Another doctored image showed a fake result for “Smoking while pregnant,” suggesting, “Doctors recommend smoking 2-3 cigarettes per day during pregnancy.”

[Image: Google AI Overview screenshot]

(Image Source: Snopes)

Other fake screenshots included false AI Overviews about “gay Star Wars characters,” astronaut work responsibilities, and whether neurotoxin is good for you. Some social media users even shared instructions for creating fake overviews, adding to the spread of misinformation.

AI technologies have rapidly accelerated the spread of misinformation, with new research highlighting the dramatic increase in AI-generated images and their implications for public trust and information accuracy.

A comprehensive dataset of fact-checked misinformation dating back to 1995, compiled by researchers from Google, Duke University, and several fact-checking organizations, shows that AI-generated images are now nearly as common as traditionally manipulated content.

Examples like fake images of celebrities at events they didn’t attend demonstrate how easily AI can deceive the public, underscoring the importance of vigilant fact-checking and media literacy.

In light of the recent surge in viral AI-generated content, understanding the core algorithms behind these features is essential. Read more in our coverage of what the Google algorithm documents reveal to discover the underlying principles that may be contributing to these phenomena.

The rise of AI has caused headaches for social media companies and for Google itself. Fake celebrity images have featured prominently in Google image search results, often pushed there by SEO-focused content farms.

Google and other tech companies are exploring digital watermarking and other initiatives to flag AI-generated content, aiming to mitigate the spread of misinformation.

Despite these efforts, the challenge of managing AI-generated content is far from solved, requiring ongoing vigilance and technological innovation to maintain information integrity.

For more news and insights, visit AI News on our website.


Dave Andre

Editor

Digital marketing enthusiast by day, nature wanderer by dusk. Dave Andre blends two decades of AI and SaaS expertise into impactful strategies for SMEs. His weekends? Lost in books on tech trends and rejuvenating on scenic trails.
