Glue Pizza and Zombie Presidents: The Worst Google AI Answers Ever

Google recently launched AI-generated answers at the top of web search results, a controversial move that has been widely criticized. Social media is full of examples of completely idiotic answers that are now prominently featured in Google’s AI results. And some of them are pretty funny.

One strange query circulating on social media involves figuring out how to keep cheese from sliding off your pizza. Most of the suggestions in the AI responses are sensible, such as letting the pizza cool before eating it. But one tip, shown below, is very strange.

A Google search by Gizmodo on May 22 about how to keep cheese from sliding off your pizza.
Screenshot: Google

The funny thing is, if you think about it for even a second, adding glue almost certainly won’t help the cheese stay on the pizza. The answer appears to have been lifted from Reddit, from what is probably a joke comment made by user “fucksmith” 11 years ago. AI can definitely plagiarize, we’ll admit that.

A joke made on Reddit about 11 years ago that seems to be the source of a terrible AI-generated response at Google.
Screenshot: Reddit

Another search that has recently gained attention on social media involves some presidential trivia that will surely be news to any historian. If you ask Google’s AI-powered search engine which US presidents attended the University of Wisconsin-Madison, it will tell you that 13 presidents have done just that.

The answer from Google’s AI even claims that these 13 presidents earned 59 degrees among them. And if you look at the years they supposedly attended the university, the vast majority fall well after the deaths of those presidents. Did the nation’s 17th president, Andrew Johnson, earn 14 degrees between 1947 and 2012, even though he died in 1875? Unless there’s some special zombie technology we don’t know about, that seems highly unlikely.

Image for article titled Glue-Topped Pizza and Zombie Presidents: Google AI's Worst Answers Ever

Screenshot: Google

For the record, the US has never elected a president from Wisconsin, nor one who attended UW-Madison. Google’s AI seems to derive its answer from a lighthearted 2016 blog post written by the school’s alumni association about various individuals who graduated from UW-Madison and share a name with a president.

Another pattern that many users have noticed is that Google’s AI thinks dogs are capable of amazing feats – like playing professional sports. When asked if a dog has ever played in the NHL, the summary cites a YouTube video and spits out this response:

Image for article titled Glue-Topped Pizza and Zombie Presidents: Google AI's Worst Answers Ever

Screenshot: Google

In another case, we deviated from sports and asked the search engine whether a dog had ever owned a hotel. The platform’s response:

Image for article titled Glue-Topped Pizza and Zombie Presidents: Google AI's Worst Answers Ever

Screenshot: Lucas Ropek/Google

To be clear, when asked whether a dog had ever owned a hotel, Google answered in the affirmative. It then cited two examples of hotel owners who owned dogs and pointed to a 30-foot-tall statue of a beagle as evidence.

Other things we’ve “learned” while interacting with Google’s AI summaries include that dogs can breakdance (they can’t) and that they often throw out the ceremonial first pitch at baseball games, including a dog that threw one out at a Florida Marlins game (actually, the dog fetched the ball after it was thrown).

Why are we getting these responses? Simply put, all of these AI tools were released long before they should have been, because every major tech firm is in an arms race to capture the public’s attention.

AI tools like OpenAI’s ChatGPT, Google’s Gemini, and now Google’s AI-powered Search can look impressive because they mimic normal human language. But these machines work as predictive text models, essentially functioning as fancy auto-complete. All of them have ingested vast amounts of data and can quickly string together words that sound compelling, perhaps even profound in some cases. But the machine doesn’t know what it’s saying. It lacks the ability to reason or apply logic like a human, which is one of the many reasons AI boosters are so excited by the prospect of artificial general intelligence (AGI). And that’s why Google can tell you to put glue on your pizza. It’s not even stupid. It’s not capable of being stupid.
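To see what “fancy auto-complete” means in the simplest possible terms, here’s a toy sketch (our illustration, not how production LLMs actually work): a program that predicts the next word purely from how often words followed each other in its training text. Nothing in it knows what glue or pizza is; it only knows what words tend to come next.

```python
from collections import Counter, defaultdict

# Toy "fancy auto-complete": predict the next word purely from
# bigram frequencies in training text. Real LLMs are vastly larger
# neural networks, but the core idea is the same -- predict likely
# next tokens, with no model of whether the result is true.
training_text = (
    "add glue to the sauce to keep cheese on the pizza "
    "add cheese to the pizza and let the pizza cool"
)

next_words = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current][following] += 1

def predict(word):
    """Return the word most often seen after `word` in training."""
    return next_words[word].most_common(1)[0][0]

print(predict("the"))  # "pizza" -- the most frequent follower of "the"
```

If the training text happens to contain a joke about glue, the model will happily reproduce it, because frequency in the data is all it has to go on.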

The people who design these systems call these errors hallucinations, because that sounds way cooler than what’s actually happening. When people lose touch with reality, they hallucinate. But your favorite AI chatbot isn’t hallucinating, because it was never capable of reasoning or logic in the first place. It’s just spewing word vomit that’s less convincing than the language that impressed us all when we first experimented with tools like ChatGPT after its rollout in November 2022. And every tech company on the planet is chasing that initial high with its own half-finished products.

But no one took the time to properly vet these tools after ChatGPT’s initial dazzling performance, and the hype has been relentless. The problem, of course, is that if you ask an AI a question you don’t know the answer to, there’s no way to trust the answer without doing a bunch of extra fact-checking. And that defeats the purpose of asking these supposedly intelligent machines in the first place. You wanted a reliable answer.

Asked for comment on all this, a Google spokesperson told Gizmodo:

The examples we’ve seen tend to be very uncommon queries and are not representative of most people’s experiences. The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. We conducted extensive testing before launching this new experience and will use these isolated examples as we continue to refine our systems overall.

However, in our quest to test the intelligence of the AI system, we did learn some new things. For example, at one point we asked Google whether a dog had ever flown a plane, expecting the answer to be no. Lo and behold, Google’s AI summary provided a well-sourced answer. Yes, dogs have actually flown airplanes before, and no, this is not an algorithmic hallucination:

Image for article titled Glue-Topped Pizza and Zombie Presidents: Google AI's Worst Answers Ever

Screenshot: Google

What’s your experience with Google’s introduction of AI to Search? Have you noticed anything strange or downright dangerous about the responses you’ve been getting? Let us know in the comments and be sure to include screenshots if you have them. These tools look like they’re here to stay, whether we like it or not.