
Google is in its Elizabeth Holmes era

The new AI features announced by Google just weeks ago are finally hitting the mainstream — though not in the way Google would prefer.

As you may have learned from recent coverage and chatter (or even experienced yourself), the AI-generated overviews that now sit at the top of so many Google search results provide answers that are… well, incorrect is true, but it doesn’t quite capture it. Let’s try surreal, absurd, and potentially dangerous instead. Since its launch, AI Overviews has told users to smoke cigarettes while pregnant, add glue to their home-baked pizza, sprinkle used antifreeze on their lawns, and brew mints to cure their appendicitis.

To deal with wrong answers to both straightforward and jokey queries, Google appears to be addressing each incident one by one, adjusting the relevant overviews accordingly. Yet Google’s broken instant answers may extend to other features of the search engine, such as its automatic unit converter: one US-based user, posting a screenshot on X, found that Google’s technology couldn’t even parse the abbreviation for a centimeter, reading the measurement as a whole meter. Search engine optimization expert Lily Ray claims to have independently verified this finding.

The massive rollout of AI Overviews has prompted users and analysts to share other, even bigger, discoveries about Google: the underlying Gemini bot appears to generate its “answers” first and only then look for citations to back them up. This process seems to result in very old, spammy, and broken links being displayed as supporting information for those answers. Nevertheless, Google, which is still raking in heaps of digital ad dollars despite recently losing some of that market share, wants to insert more ads into Overviews, some of which may themselves be “AI-powered.”

Meanwhile, the very appearance of AI Overviews diverts traffic from the more reliable sources that typically surface on Google. Contrary to CEO Sundar Pichai’s claims, SEO experts have found that links featured in Overviews don’t see much of an increase in clicks from their placement. (This factor, along with the misinformation, is part of the reason many major news organizations, including Slate, have opted out of AI Overviews. A Google spokesperson told me that “such analyses are not a reliable or comprehensive way to estimate Google Search traffic.”)

Ray’s research found that Google search traffic to publishers overall was down this month, with far more visibility going to posts from Reddit — the site that, incidentally, was the source of the infamous pizza-glue recommendation and that has signed multimillion-dollar deals with Google. (A Google spokesperson responded: “This is by no means a comprehensive or representative study of traffic to news publications from Google Search.”)

Google was probably aware of all these problems before it released AI Overviews to prime time. Pichai has called chatbots’ “hallucinations” (i.e., their tendency to make things up) an “inherent feature” and has even acknowledged that such tools, engines, and datasets “aren’t necessarily the best approach to always get at factuality.” That’s something he thinks Google Search’s data and capabilities will fix, Pichai told the Verge. That seems dubious in light of Google’s algorithms, which obscure search visibility for various credible news sources and also likely “torch small sites on purpose,” as SEO expert Mike King noted in his study of recently leaked Google Search documents. (A Google spokesperson said this was “categorically false” and that “we would caution against making inaccurate assumptions about Search based on out-of-context, out-of-date, or incomplete information.”)

In fact, Google’s deceptive AI practices have been publicly exposed for a while now. Back in 2018, Google demonstrated voice-assistant technology that could supposedly make calls and respond to people in real time, but Axios found that the demo may have actually used pre-recorded conversations rather than live ones. (Google declined to comment at the time.) Google’s pre-Gemini chatbot, Bard, was introduced in February 2023 with an incorrect answer that temporarily sank the company’s stock price. Later that year, it was revealed that an impressive video demonstration of the company’s multimodal Gemini AI had been edited after the fact to make its reasoning appear faster than it actually was. (Cue another stock-price dip.) And at the company’s annual developer conference, held just weeks ago, Gemini not only generated but highlighted a wrong suggestion for repairing your film camera.

In all fairness to Google, which has long been working on AI development, the rapid deployment of, and buildup of hype around, all these tools is probably its way of keeping up in the age of ChatGPT, a chatbot that, by the way, still generates a significant number of wrong answers on all manner of subjects. And it’s not as if other companies chasing investor-reassuring AI trends aren’t making their own ridiculous mistakes or faking their most impressive demos.

Last month, it emerged that Amazon’s “Just Walk Out” grocery-store concept, which was supposed to be powered by artificial intelligence with no human involvement, actually relied on … a lot of people behind the scenes to monitor and program the shopping experience. Similar reporting surfaced about a supposedly “AI-powered,” human-free drive-thru used by chains like Checkers and Carl’s Jr. There are also the “driverless” Cruise cars that require remote human intervention every few miles driven. ChatGPT’s parent company, OpenAI, is not immune either, having hired many people to clean up and polish the animated visual landscapes supposedly generated wholesale from prompts to its not-yet-public video generator, Sora.

All of this, mind you, is just another layer of hidden labor on top of the human operations outsourced to countries like Kenya, Nigeria, Pakistan, and India, where workers are underpaid or allegedly subjected to conditions of “modern slavery” as they consistently provide feedback to AI bots and tag gruesome images and videos for content-moderation purposes. Don’t forget, too, the people who work in the data centers, the chip fabs, and the power plants needed, in massive quantities, to even run all of these things.

So, let’s recap: after years of teasing, debunked claims, staged demos, refusals to provide further transparency, and “no humans” branding that in fact relied on many humans in very different (and harmful) ways, these AI creations are still bad. They continue to make things up at scale, plagiarize their training sources, and offer information, advice, “news,” and “facts” that are wrong, nonsensical, and potentially dangerous to your health, to the body politic, to people trying to do simple math, and to others scratching their heads trying to figure out where their car’s “blinker fluid” is.

Does this remind you of anything else in the history of technology? Perhaps Elizabeth Holmes, who also faked many demonstrations and made fantastical claims about her company, Theranos, in order to sell a “technological innovation” that was simply impossible?

Holmes is now behind bars, but the scandal still lingers in the public imagination, for good reason. In retrospect, the signs should have been obvious, right? Her biotech startup, Theranos, had no health experts on its board. It promoted outlandish scientific claims unsupported by any authority and refused to explain the rationale behind those claims. It established partnerships with massive (and actually trusted) institutions like Walgreens without verifying the safety of its products. It instilled a deep, frightening culture of secrecy among its employees and made them sign aggressive agreements to that effect. It racked up unquestioning endorsements from famous and powerful people like then–Vice President Joe Biden through sheer force of awe alone. And it kept hiding what actually powered its systems and creations until persistent reporters dug it up themselves.

It’s been almost 10 years since Holmes was finally exposed. Yet, apparently, the throngs of tech watchers and analysts who took her at her word are just as willing to put all their faith in the people behind these error-prone, buggy, humans-behind-the-curtain AI bots that their creators promise will change everything and everyone. Unlike Theranos, of course, companies like OpenAI have actually built products for public consumption that function and can achieve some impressive feats. But the rush to push these things everywhere, to make them take on tasks they’re probably nowhere near ready for, and, yes, to keep them accessible despite a not-so-hidden history of missteps and mistakes: that’s where we seem to be borrowing from the Theranos playbook again. We have learned nothing. And the brains behind chatbots that don’t really teach you anything might actually prefer it that way.
