
Slack under attack over sneaky AI training policy

Amid the ongoing furor over how big tech appropriates data from individuals and businesses to train AI services, a firestorm is brewing among Slack users upset over how the Salesforce-owned chat platform is moving forward with its AI vision.

The company, like many others, is using its own user data to train some of its new AI services. But it turns out that if you don’t want Slack to use your data, you have to email the company to opt out.

And the terms of that commitment are tucked away in what appears to be an out-of-date, confusing privacy policy that no one was paying attention to. That was the case with Slack, until one annoyed person posted about it on a community site hugely popular with developers, and then that post went viral.

It all started last night, when a note on Hacker News raised the question of how Slack trains its AI services, by way of a straight link to its privacy principles – no additional comment was needed. That post kicked off a longer conversation – and what seemed like news to current Slack users – that Slack opts users in to its AI training by default, and that you have to email a specific address to opt out.

That Hacker News thread has since spawned a lot of conversation and questions on other platforms: there is a newish, generically named product called “Slack AI” that lets users search for answers and summarize conversation threads, among other things. So why is it not mentioned once by name on that privacy principles page, even to make clear whether the privacy policy applies to it? And why does Slack refer to both “global models” and “AI models”?

Between people confused about where Slack applies its privacy principles to AI, and people surprised and irritated at the idea of having to email to opt out – at a company that makes a big deal of touting that “You control your data” – Slack does not come off well.

The shock may be new, but the terms are not. According to pages on the Internet Archive, the terms have applied since at least September 2023. (We’ve asked the company to confirm.)

According to the privacy policy, Slack uses customer data specifically to train the “global models” that power channel and emoji recommendations and search results. Slack tells us that its usage of the data has specific limits.

“Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce any part of customer data,” a company spokesperson told TechCrunch. However, the policy does not appear to address the overall scope of the company’s broader plans for training AI models.

In its terms, Slack says that customers who opt out of data training will still benefit from the company’s “globally trained AI/ML models.” But again, in that case, it’s unclear why the company is using customer data in the first place to power features like emoji recommendations.

The company also said it doesn’t use customer data to train Slack AI.

“Slack AI is a separately purchased add-on that uses large language models (LLMs) but does not train those LLMs on customer data. Slack AI uses LLMs hosted directly within Slack’s AWS infrastructure, so that customer data remains in-house and is not shared with any LLM provider. This ensures that customer data stays in that organization’s control and exclusively for that organization’s use,” a spokesperson said.

Some of the confusion is likely to be addressed sooner rather than later. In response to one critical take on Threads from engineer and writer Gergely Orosz, Slack engineer Aaron Maurer conceded that the company needs to update the page to reflect “how these privacy principles work with Slack AI.”

Maurer added that the terms were written back when the company didn’t have Slack AI, and that these rules reflect the company’s work around search and recommendations. It will be worth keeping an eye on the terms for future updates, given the confusion around what Slack is currently doing with its AI.

The problems at Slack are a stark reminder that, in the fast-moving world of AI development, user privacy should not be an afterthought, and a company’s terms of service should clearly spell out how and when data is or isn’t used.
