Slack AI Privacy Principles Generate Confusion and Consternation

Originally published at: Slack AI Privacy Principles Generate Confusion and Consternation - TidBITS

After a section of a Slack document laying out its privacy principles surrounding AI was taken out of context on social media, controversy ensued. Adam Engst attempts to calm the waters, with help from ChatGPT.


It continually annoys me how quick people are to assume all businesses are up to absolutely no good and want to rob us blind of our money, our data, and our freedom of choice. Every single time one of these “leaks” comes out, people jump on the out-of-context quotes and extrapolate to infinity on things that were never said and, when the original context is examined, weren’t even implied.

I want to blame the shortening of attention spans for the inability to understand that everything exists within a greater context, but I can’t help but wonder: are people really getting so much dumber, or is it just the reduced cost of broadcasting unvetted and unedited opinions making it seem that way?

While it may be true that people jumped to conclusions, another lesson might be for tech companies to write things clearly, and not in legalese that sounds like it’s trying to hide things by default.

BUT… using ChatGPT to ‘prove’ that Slack isn’t doing naughty things… the same ChatGPT that hallucinates left and right… is of questionable value.


The opacity and mind-numbing length of corporate policy statements, user “agreements,” and the like have certainly contributed to the loss of trust in tech companies. I first noticed this several years ago when I tried to read a user agreement before buying some software. The web page containing the agreement timed out before I could finish. Since then it’s gotten worse, and I’m afraid the tech companies themselves have become a huge problem. That’s a damned shame, because technology has made tremendous contributions to our society, but today’s technology companies are becoming all about profits.

They’ve always been this dumb. Just to pull one example, Procter & Gamble in the 1980s had to deal with many years’ worth of accusations that they were in league with Satan.


A major mea culpa here. When I first became aware of this situation on May 17, I was traveling home from a conference and didn’t have time (or much connectivity) to read the original document. I also had no time to devote to the topic on Saturday or Sunday due to helping my in-laws move and timing a trail race, so I didn’t return to the topic until Monday. By that time, Slack had updated the document (on May 18) to include the unambiguous sentences I called out. I actually had the thought that it might have changed and consulted the Wayback Machine, but the changes started in the second paragraph and were quite minor until the addition of the Generative AI section at the end. The document was close enough, and I was moving quickly enough to meet my Monday publication deadline, that I erroneously assumed it was the same.

With all that in mind, many of my criticisms are misplaced, and I apologize to those whom I’ve impugned. The May 17 version of the document that triggered this situation does not explicitly call out generative AI or large language models, although none of the examples it gives (channel recommendations, search ranking, autocomplete, and emoji suggestions) involve generative AI.

The main problem with the May 17 version is that it’s old. Its first capture in the Wayback Machine dates from October 2020, when it was titled “Privacy principles: search, learning and intelligence” and gave roughly the same examples. Although the document has changed since then, it’s an obvious evolution. Notably, the first version doesn’t even use the term “artificial intelligence” or the abbreviations AI and ML; it predates ChatGPT and the generative AI boom.

The criticism that Slack requires workspace admins to send email to opt out of the global models is legitimate, although I’m not sure sending an email is necessarily any harder than finding a setting in Slack’s proliferation of options. From Slack’s perspective, the likelihood of anyone wanting to opt out of pattern-matching that recommends channels and suggests emoji was probably low enough that no one thought to build a setting. Only after these features became associated with AI did the email-only opt-out start to seem unreasonable.

All that said, I still feel that Slack’s mistake in failing to update the document to be clearer wasn’t that bad. The subsequent changes Slack made show that even if the document wasn’t as clear as would be ideal, Slack wasn’t trying to put one over on us. Even in the problematic May 17 version, Slack said:

For any model that will be used broadly across all of our customers, we do not build or train these models in such a way that they could learn, memorise, or be able to reproduce some part of Customer Data.

Of course, because of the lack of trust many people have in the tech industry, even relatively clear statements like that don’t necessarily have the desired effect. “Sure,” one may think, “that’s what you say, but how do we know that’s true?”

And we don’t. There are many lapses, security breaches, and broken promises. But simultaneously, we have to trust the technology we use to a large extent because the only other option is to stop using it.


Well done, Adam, for explaining more. You’ve spent quite some time comparing versions of documents, which is something that shouldn’t be necessary.

Tech companies will email ‘privacy policy update, making changes to this, that, and the other area’, but without summarising what the changes are or even telling us where they are in a meaningful way. That comes across as trying to hide the changes, or as making it as hard as possible to find them while still fulfilling the duty to notify us. That in itself builds distrust.
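A readable summary of what changed isn’t hard to generate, either, which makes its absence all the more conspicuous. As a rough illustration (not anything the companies actually do), here’s a minimal Python sketch using the standard library’s difflib to pull out the paragraphs that were added or rewritten between two versions of a policy; the two policy strings are made-up placeholders:

```python
# A minimal sketch: list the paragraphs that changed between two versions
# of a policy document. Standard library only; the sample policies below
# are made-up placeholders.
import difflib

def changed_paragraphs(old_text: str, new_text: str) -> list[str]:
    """Return the paragraphs that are new or rewritten in new_text."""
    old = old_text.split("\n\n")
    new = new_text.split("\n\n")
    matcher = difflib.SequenceMatcher(a=old, b=new)
    changed = []
    for tag, _i1, _i2, j1, j2 in matcher.get_opcodes():
        if tag in ("replace", "insert"):  # paragraph rewritten or added
            changed.extend(new[j1:j2])
    return changed

old_policy = "We collect usage data.\n\nWe do not train global models on Customer Data."
new_policy = "We collect usage data.\n\nWe may train global models on de-identified Customer Data."

for paragraph in changed_paragraphs(old_policy, new_policy):
    print("CHANGED:", paragraph)
```

Even something that crude, pasted into the notification email, would at least tell readers where to look.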

Transparency surrounding revisions to online documents is an interesting issue, and I’m not sure there’s a single right answer. For instance, I regularly make silent changes to TidBITS articles up until the point when I publish them in an email issue. After that, I make changes only for very small typos and other infelicities that couldn’t cause confusion. If I need to make a more significant correction, I publish another article, much as I just did with iPhones Pause MagSafe Charging During Continuity Camera - TidBITS.

Of course, as you point out, significant changes should be noted. We don’t have a good standardized way to do that on the Web, but there should at least be a revision date. Ideally, the Wayback Machine would make comparing versions easy, but I haven’t found that to be the case. Maybe I need to do more research into how it works.
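For anyone who wants to script the comparison, the Wayback Machine does expose a CDX API that lists every capture of a URL, and individual captures can be fetched without the Wayback toolbar. Here’s a rough sketch in Python; it assumes the third-party requests library, and while the Slack URL shown is my best guess at the page in question, treat it as illustrative:

```python
# A rough sketch of diffing the earliest and latest Wayback Machine captures
# of a page. Assumes the third-party `requests` library; the page URL is
# illustrative.
import difflib
import requests

PAGE = "https://slack.com/trust/data-management/privacy-principles"

def list_captures(page: str) -> list[str]:
    """Ask the Wayback CDX API for capture timestamps, collapsing duplicates."""
    resp = requests.get(
        "https://web.archive.org/cdx/search/cdx",
        params={"url": page, "output": "json", "collapse": "digest"},
        timeout=30,
    )
    resp.raise_for_status()
    rows = resp.json()
    # The first row is a header; the timestamp is the second field of each row.
    return [row[1] for row in rows[1:]]

def fetch_capture(page: str, timestamp: str) -> str:
    """Fetch the raw archived page; the id_ flag skips the Wayback banner."""
    resp = requests.get(f"https://web.archive.org/web/{timestamp}id_/{page}", timeout=30)
    resp.raise_for_status()
    return resp.text

captures = list_captures(PAGE)
old = fetch_capture(PAGE, captures[0])    # earliest capture
new = fetch_capture(PAGE, captures[-1])   # latest capture
for line in difflib.unified_diff(old.splitlines(), new.splitlines(),
                                 fromfile=captures[0], tofile=captures[-1],
                                 lineterm=""):
    print(line)
```

Diffing raw HTML is noisy, so stripping the markup first would improve the signal, but even a crude diff beats rereading the whole document side by side.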

This is another instance where Ted Nelson’s Xanadu had it right, at least in theory. Everything was under version control at all times.
