
Brands are starting to turn against AI

To hear Apple or Elon Musk tell it, AI is our inevitable future, destined to radically reshape life as we know it whether we like it or not. In the calculus of Silicon Valley, what matters is getting there first and carving out territory so that everyone relies on your tools for years to come. Someone like OpenAI CEO Sam Altman will at least mention the dangers of artificial intelligence and the need for strong regulatory oversight when he speaks to Congress, but in the meantime, everything is full steam ahead.

Many corporate and individual actors have ventured into AI-driven advertising, often with disastrous results. Multiple media outlets have been caught publishing AI-generated garbage under fictitious bylines; Google has cluttered its search results with error-prone "AI Overview" content; and earlier this year, parents were outraged to learn that a pop-up Willy Wonka-themed family event in Scotland had been advertised to them with AI images that bore no resemblance to the gloomy warehouse setting they walked into. Amid all this discontent, a new marketing opportunity has emerged: positioning yourself as an anti-AI, pro-human alternative.

Beauty brand Dove, owned by multinational conglomerate Unilever, made headlines in April when it pledged to “never use AI-generated content to feature real women in its ads,” according to a company statement. Dove explained the choice as being in line with its successful and ongoing “Real Beauty” campaign, first launched in 2004, in which professional models were replaced by “ordinary” women in ads that focused more on the consumer than on the products. “The pledge to never use AI in our communications is just one step,” said Dove’s chief marketing officer, Alessandro Manfredi, in the press release. “We won’t stop until beauty is a source of happiness, not anxiety, for every woman and girl.”

But if Dove has taken a hard line against AI to protect a specific brand value around body image, other brands and ad agencies are worried about the broader reputational risk of relying on automated, generative content that bypasses human oversight. Ad Age and other industry publications report that contracts between companies and their marketing firms are now more likely to include strict restrictions on how AI is used and who can sign off on it. These provisions not only help prevent low-quality AI-generated images, or uncanny clones of the clients themselves, from appearing in public; they can also limit reliance on AI in internal operations.

Meanwhile, creative social platforms are carving out spaces designed to remain AI-free, and earning goodwill from users as a result. Cara, a new artist portfolio site, is still in beta testing, but it has created serious buzz among visual artists for its proudly anti-AI ethos. "With the widespread use of generative AI, we decided to build a place that filters out generative AI images so that people who want to find authentic creative work and artwork can do so easily," the app's site states. Cara also aims to protect its users from having their work scraped to train AI models, a condition automatically imposed on anyone who uploads work to Meta's Instagram and Facebook platforms.

"Cara's mission began as a protest against the unethical practices of AI companies that scrape the internet for their generative AI models without consent or respect for people's rights or privacy," a company representative tells Rolling Stone. "This core principle of opposing such unethical practices, and the lack of legislation protecting creators and people, is what fueled our decision to refuse to host AI-generated images." They add that since AI tools are likely to become more common in the creative industries, they "want to see action taken and legislation passed that will protect artists and our intellectual property from current practices."

Older sites in this space are adding similar safeguards. PosterSpy, another portfolio site that helps poster artists network and secure paid commissions, has been a vibrant community since 2013, and founder Jack Woodhams wants to keep it a haven for human talent. "I have a pretty strict no-AI policy," he tells Rolling Stone. "The website exists to protect artists, and although generative AI users consider themselves artists, that couldn't be further from the truth. I've worked with real artists all over the world, from emerging talents to household names, and to compare the blood, sweat, and tears these artists put into their work to a prompt in an AI generator is insulting." Woodhams is speaking of "real artists who have trained for years to be as skilled as they are today."

Part of the pressure to set these standards comes from customers themselves. Game publisher Wizards of the Coast, for example, has repeatedly faced fan outrage over its use of AI in products for its Dungeons & Dragons and Magic: The Gathering franchises, despite the company's various commitments to keeping AI-generated imagery and writing out of those franchises and to standing by "the innovation, ingenuity, and hard work of talented people." When the company recently posted a job listing for a principal AI engineer, users raised the alarm again, forcing Wizards of the Coast to clarify that it is experimenting with AI in video game development, not its tabletop games. The back-and-forth demonstrates the dangers for brands trying to sidestep the debates surrounding this technology.

It's also a measure of the vigilance needed to prevent a complete AI takeover. On Reddit, which has no general policy against generative AI, it's up to community moderators to ban or remove such material as they see fit. The company has so far said only that anyone who wants to train AI models on its public data must agree to a formal business deal, with CEO Steve Huffman warning that it may report those who don't to the Federal Trade Commission. The publishing platform Medium is a bit more aggressive. "We're blocking OpenAI because they gave us a protocol to block them, and we'd block almost anyone if we had a way to do it," CEO Tony Stubblebine tells Rolling Stone.

At the same time, Stubblebine says, Medium relies on curators to stem the tide of "nonsense" he sees spreading across the internet in the nascent age of AI, preventing any of it from being recommended to users. "There's no good tool for spotting AI-generated content right now," he says, "but people spot it right away." At this point, even content filtering can't be fully automated. "We were deleting millions of spam posts a month," Stubblebine notes. "Now we're deleting 10 million." For him, it's a way to ensure that real writers get fair exposure and that subscribers can find writing that speaks to them. "There's this huge difference between what someone will click on and what someone will be glad they paid to read," Stubblebine says, and those who provide the latter may reap the rewards as the web grows cluttered with the former. Even Google's YouTube has promised to add warning labels to videos that are "altered" or "synthetically created" with AI tools.

It's hard to predict whether institutional resistance to AI will continue to gain momentum, though between a pattern of high-profile AI failures and growing public distrust of the technology, companies that effectively oppose it in one form or another seem poised to buck the hype cycle (not to mention the fallout of a burst bubble that some observers predict). On the other hand, if AI continues to dominate the culture, they may be left serving a smaller demographic that demands AI-free products and experiences. As with all strategic business decisions, there is unfortunately no bot that can predict the future.
