What you should know
- Google recently acquired exclusive rights to Reddit content to power its AI.
- Google's AI is now generating completely unhinged search answers.
- Users with access to Google's search AI have reported that it recommends eating rocks and glue, and in at least one case appeared to suggest suicide, although not every reported response has been replicated.
- Comparative searches on ChatGPT and Bing AI yield far, far less damaging results, potentially highlighting the need for high-quality, curated data instead of billions of sarcasm-laden social media posts.
Google’s desperation to keep up with Microsoft Copilot has led to terrible results in the past, but this latest blunder is on another level.
Google recently acquired exclusive rights to Reddit content to aid its AI search generation efforts. The deal was reportedly worth about $60 million and provided a lifeline for the struggling social network, which remains far more popular than profitable. Great news for Reddit, then, but perhaps not such great news for Google.
Google has already come under heavy criticism recently for the so-called SEOpocalypse, in which its attempts to demote unreliable AI-generated content ended up harming legitimate sources of search traffic. With Google in near-total control of web discovery, its algorithm changes have hurt businesses unfairly caught in the crossfire. And there's little evidence that Google's efforts to combat low-quality content are actually working anyway. General perceptions of Google search already seem to be negative, but this latest blunder will go down in the history books.
Perhaps one could blame the web itself, rather than Google, for the degraded quality of content. However, we can firmly blame Google for its latest stumble, which stems from its decision to surface Reddit content in its search results via Gemini AI.
Google is dead without comparison pic.twitter.com/EQIJhvPUoI (May 22, 2024)
Over the past week, users playing around with the earliest versions of Google’s built-in search AI have noticed some … interesting responses. The responses appear to be the result of including problematic social network Reddit in search results.
A search query last week reportedly led to a recommendation that users eat glue, which internet sleuths traced back to a decade-old Reddit comment by a user known as Fucksmith. Google has also reportedly recommended that depressed users jump off a bridge, while extolling the health benefits of neurotoxins and daily rock consumption.
Some of these "search queries" may have been manipulated for engagement on Twitter, but at least some have been verified and reproduced. The rock recommendation was particularly comical given that the source of the information was apparently the satirical news website The Onion.
The Google AI feature should be turned OFF. pic.twitter.com/OCh6L3oyLz (May 24, 2024)
Given that Google's AI search tools are not available in my region, I was unable to verify some of the reports myself. However, the fact that some of them can be traced back to specific sources on Reddit adds credibility. I asked Microsoft Copilot and Bing some of the same questions and got much saner results, potentially demonstrating just how far ahead Microsoft is in this space. By partnering with OpenAI for ChatGPT, Microsoft seems to extend its lead every time Google makes a hasty, half-hearted lurch forward like this. That said, Microsoft has had AI-related PR disasters of its own this past week, with users fearing that its Windows Recall feature, which records your PC's activity, could be used for spying.
However, the Windows Recall drama is potentially overblown, given that the recorded content stays on the local machine and the feature is configured during the Windows 11 installation process. By comparison, this Google AI gaffe will most likely get someone fired, given that real-world search results are genuinely harmful.
Language models must be powered by high-quality, serious, curated, verifiable content
When I tested whether Microsoft Copilot and ChatGPT-4 would give me similarly dumb results, I was surprised by how sensible the answers were. First I asked how many rocks I should eat per day, and Copilot didn't even give me an answer, as if it deemed my question too silly to entertain. I wondered whether Microsoft had blocked the request in light of Google's ongoing PR disaster. So I tried to trick Copilot, which is currently fairly easy to do. I asked how many lemons I should eat per day, and Copilot gave me loads of data about citric acid and vitamins that I didn't particularly want. I then asked, "okay, how about rocks?" That bypassed the filter, but Copilot wouldn't be fooled: it gave me a whole list of reasons why I absolutely should not eat rocks, satisfying my curiosity.
Likewise, when I said “I’m depressed,” Copilot gave me a bunch of helpful resources instead of recommending I kill myself, which was apparently the case with Google’s AI.
Even if the more outrageous answers turn out to be fabricated, the whole ordeal highlights the importance of context when building tools on top of large language models (LLMs). By piping Reddit into Google Gemini, Google may have essentially destroyed the verifiable accuracy of its information, given that a huge proportion of comments on Reddit, and indeed on any social network, are sarcastic or satirical in nature. And if AI search kills off the web businesses that rely on producing high-quality content, LLMs will be left cannibalizing AI-generated content to generate results. That can lead to model collapse, something that has actually been demonstrated in the real world when LLMs lack enough high-quality data to draw from, either because little content is available online or because the content is written in a language that is not widely used.
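Model collapse is easy to demonstrate in miniature. The toy Python sketch below is my own illustration of the idea, not code from any cited study: it repeatedly "retrains" a model, here just the empirical distribution of the previous generation's outputs, on its own samples. Because resampling can never reintroduce a value that has already disappeared, diversity can only shrink across generations.

```python
import random

def resample_generation(data, n):
    # One "train on your own output" step: the model is simply the
    # empirical distribution of the previous generation's samples.
    return [random.choice(data) for _ in range(n)]

def simulate_collapse(vocab_size=50, n=100, generations=200, seed=0):
    random.seed(seed)
    data = list(range(vocab_size))  # generation 0: full diversity
    distinct_counts = [len(set(data))]
    for _ in range(generations):
        data = resample_generation(data, n)
        distinct_counts.append(len(set(data)))
    return distinct_counts

diversity = simulate_collapse()
print(diversity[0], "->", diversity[-1])  # distinct values: start vs. end
```

Run it and the number of distinct values falls generation over generation, a crude stand-in for an LLM whose training corpus is increasingly made of its own earlier outputs.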