
Google Criticized As AI Overview Makes Blatant Mistakes By Saying President Obama Is Muslim And It’s Safe To Leave Dogs In Hot Cars

But on social media, users shared a wide array of screenshots showing the AI tool giving conflicting answers.

Google, Microsoft, OpenAI and others are leading a generative AI arms race as companies in seemingly every industry race to add AI-based chatbots and agents to keep up with competitors. The market is projected to reach $1 trillion in revenue within a decade.

Here are some examples of what went wrong with AI Overviews, according to screenshots shared by users.

When asked how many Muslim presidents the US has had, AI Overviews replied, “The United States has had one Muslim president, Barack Hussein Obama.”

When a user searches for “cheese that doesn’t stick to pizza,” the feature suggests adding “about 1/8 cup of non-toxic glue to the sauce.” Social media users discovered an 11-year-old Reddit comment that appears to be the source.

For the query “Is it okay to leave a dog in a hot car,” the tool at one point said, “Yes, it’s always safe to leave a dog in a hot car,” and went on to reference a fictional Beatles song about it being safe to leave dogs in hot cars.

Attribution can also be an issue for AI Overviews, especially when it comes to attributing inaccurate information to medical professionals or scientists.

For example, when asked, “How long can I stare at the sun for the best health,” the tool said, “According to WebMD, scientists say that looking at the sun for 5-15 minutes, or up to 30 minutes if you have darker skin, is generally safe and provides the most health benefits.” When asked, “How many stones should I eat each day,” the tool said, “According to geologists at the University of California, Berkeley, people should eat at least one small stone a day,” going on to list supposed vitamins and digestive benefits.

The tool can also inaccurately answer simple queries, such as compiling a list of fruits that end in “um” or saying that the year 1919 was 20 years ago.

When asked whether or not Google Search violates antitrust laws, AI Overviews said, “Yes, the US Department of Justice and 11 states are suing Google for antitrust violations.”

On the day Google unveiled AI Overviews at its annual Google I/O event, the company said it also plans to bring Assistant-like scheduling capabilities directly into search. Google explained that users will be able to search for something like “Create an easy-to-prepare 3-day meal plan for a group” and get a starting point with a wide range of recipes from around the web.

Google did not immediately return a request for comment.

The news follows Google’s wide release of its Gemini image-generation tool in February and its pause of that tool later the same month after comparable issues.

The tool allows users to enter prompts to create an image, but almost immediately after launch, users discovered historical inaccuracies and questionable responses that were widely circulated on social media.

For example, when a user asked Gemini to show a German soldier in 1943, the tool rendered a racially diverse set of soldiers wearing German military uniforms from the era, according to screenshots on social media platform X.

When asked for a “historically accurate depiction of a medieval British king,” the model generated another racially diverse set of images, including one of a female ruler, screenshots showed. Users reported similar results when they requested images of the US Founding Fathers, an 18th-century French king, a German couple in the 1800s, and more. The model showed an image of an Asian man in response to a query about Google’s own founders, users reported.

Google said in a statement at the time that it was working to fix problems with Gemini’s image generation, admitting that the tool “misses the mark.” Soon after, the company announced that it would immediately “pause the generation of human images” and “re-release an improved version soon.”

In February, Google DeepMind CEO Demis Hassabis said Google planned to relaunch its image-generating AI tool within the next “few weeks,” but it has not yet been relaunched.

Problems with Gemini’s image-generation results have reignited debate in the AI industry, with some groups calling Gemini too “woke” or left-leaning, and others saying the company has not invested enough in the right forms of AI ethics. Google came under fire in 2020 and 2021 for removing the co-heads of its AI ethics group after they published a research paper critical of certain risks of such AI models, and it later reorganized the group’s structure.

Last year, Pichai was criticized by some employees for the company’s botched and “rushed” launch of Bard, which followed the viral rise of ChatGPT.
