Slack AI Privacy Principles Generate Confusion and Consternation

A major mea culpa here. When I first became aware of this situation on May 17, I was traveling home from a conference and didn’t have time (or much connectivity) to read the original document. I also had no time to devote to the topic on Saturday or Sunday because I was helping my in-laws move and timing a trail race, so I didn’t return to it until Monday. By that time, Slack had updated the document (on May 18) to include the unambiguous sentences I called out. I did wonder whether it might have changed and consulted the Wayback Machine, but the changes began in the second paragraph and were quite minor until the Generative AI section added at the end. The versions were close enough, and I was moving quickly enough to meet my Monday publication deadline, that I erroneously assumed the document was the same.

With all that in mind, many of my criticisms are misplaced, and I apologize to those whom I’ve impugned. The May 17 version of the document that triggered this situation does not explicitly mention generative AI or large language models, but none of the examples it gives (channel recommendations, search ranking, autocomplete, and emoji suggestions) involve generative AI.

The main problem with the May 17 version is that it’s old. Its first instance in the Wayback Machine dates from October 2020, when it was titled “Privacy principles: search, learning and intelligence” and gave roughly the same examples. Although the document has changed since then, it’s an obvious evolution of that original. Notably, the first version doesn’t use the term “artificial intelligence” or the abbreviations AI and ML at all; it predates ChatGPT and the generative AI boom.

The criticism that Slack requires workspace admins to send an email to opt out of the global models is legitimate, although I’m not sure sending an email is necessarily any harder than finding a setting in Slack’s proliferation of options. From Slack’s perspective, the chance of anyone wanting to opt out of pattern matching that recommends channels and suggests emoji probably seemed low enough that no one thought to build a setting. It was only after these features became associated with AI that requiring an email to opt out started to seem unreasonable.

All that said, I still feel that Slack’s mistake in failing to make the document clearer wasn’t that bad. The subsequent changes Slack made show that even if the document wasn’t as clear as would be ideal, Slack wasn’t trying to put one over on us. Even in the problematic May 17 version, Slack said:

For any model that will be used broadly across all of our customers, we do not build or train these models in such a way that they could learn, memorise, or be able to reproduce some part of Customer Data.

Of course, because of the lack of trust many people have in the tech industry, even relatively clear statements like that don’t necessarily have the desired effect. “Sure,” one may think, “that’s what you say, but how do we know that’s true?”

And we don’t. There have been plenty of lapses, security breaches, and broken promises. But at the same time, we have to trust the technology we use to a large extent, because the only other option is to stop using it.
