Google is struggling to manually remove weird AI search answers

Social media is abuzz with examples of Google’s new AI Overview product saying strange things, from telling users to put glue on their pizza to suggesting they eat rocks. The botched implementation means Google is racing to manually disable AI Overviews for specific searches as various memes are posted, which is why users see so many of them disappear shortly after they’re posted on social networks.

It’s an odd situation because Google has been testing AI Overviews for a year now — the feature launched in beta in May 2023 as Search Generative Experience — and CEO Sundar Pichai has said the company served over one billion queries in that time.

But Pichai also said that Google has reduced the cost of delivering AI responses by 80 percent over the same time, “driven by hardware, engineering and technical breakthroughs.” It seems that this kind of optimization may have happened too early, before the technology was ready.

“A company that was once known for being on the cutting edge and delivering high-quality stuff is now known for low-quality output that becomes a meme,” one AI founder told The Verge, speaking on condition of anonymity.

Google continues to maintain that its AI Overviews product largely outputs “high-quality information” to users. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce,” Google spokesperson Megan Farnsworth said in an email to The Verge. Farnsworth also confirmed that the company is “taking swift action” to remove AI Overviews for certain queries “where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

Gary Marcus, an AI expert and professor emeritus of neuroscience at New York University, told The Verge that many AI companies are “selling dreams” that this technology will go from 80 percent correct to 100 percent correct. Getting the initial 80 percent right is relatively easy because it involves approximating a large amount of human data, Marcus said, but the final 20 percent is extremely challenging — in fact, he thinks that last 20 percent could be the hardest part of all.

“You actually have to do some reasoning to decide: Is this thing plausible? Is this source legitimate? You have to do things like a human fact-checker might actually require artificial general intelligence,” Marcus said. Both Marcus and Meta’s AI chief Yann LeCun agree that the large language models powering current AI systems like Google’s Gemini and OpenAI’s GPT-4 won’t be what creates AGI.

Look, it’s a tough position for Google to be in. Microsoft went big on AI with Bing first, prompting Satya Nadella’s famous “we made them dance” quote; OpenAI is reportedly working on its own search engine; a new AI search startup is now worth $1 billion; and younger users who just want the best experience are switching to TikTok. The company is clearly under pressure to compete, and pressure is what makes AI releases messy. Marcus points out that in 2022, Meta released an AI system called Galactica, which had to be taken down shortly after launch because, among other things, it told people to eat glass. Sounds familiar.

Google has big plans for AI Overviews — the feature that exists today is just a small part of what the company announced last week. Multi-step reasoning for complex queries, the ability to generate an AI-organized results page, Google Lens video search — there’s a lot of ambition here. But right now, the company’s reputation depends on just getting the basics right, and it’s not looking great.

“[These models] are constitutionally incapable of doing a sanity check on their own work, and that’s what has come to bite this industry in the rear,” Marcus said.