Originally published at: Slack AI Privacy Principles Generate Confusion and Consternation - TidBITS
After a section of a Slack document laying out its privacy principles surrounding AI was taken out of context on social media, controversy ensued. Adam Engst attempts to calm the waters, with help from ChatGPT.
It continually annoys me how quick people are to assume all businesses are up to absolutely no good and want to rob us blind of our money, our data, and our freedom of choice. Every single time one of these "leaks" comes out, people jump on the out-of-context quotes and extrapolate to infinity on things that were never said and, when the original context is examined, weren't even implied.
I want to blame the shortening of attention spans for the inability to understand that everything exists within a greater context, but I can't help but wonder: are people really getting so much dumber, or is it just the reduced cost of broadcasting unvetted and unedited opinions making it seem that way?
While it may be true that people jumped to conclusions, another lesson might be for tech companies to write things clearly, and not in legalese that sounds like it's trying to hide things by default.
BUT… using ChatGPT, the same ChatGPT that hallucinates left and right, to "prove" that Slack isn't doing naughty things is of questionable value.
The opacity and mind-numbing length of corporate policy statements, user "agreements," and the like have certainly contributed to the loss of trust in tech companies. I first noticed this several years ago when I tried to read a user agreement before buying some software. The web page containing the agreement timed out before I could finish. Since then it's gotten worse, and I'm afraid the tech companies themselves have become a huge part of the problem. That's a damned shame, because technology has made tremendous contributions to our society, but today's technology companies are becoming all about profits.
They've always been this dumb. To pull just one example, Procter & Gamble in the 1980s had to deal with many years' worth of accusations that they were in league with Satan:
A major mea culpa here. When I first became aware of this situation on May 17, I was traveling home from a conference and didn't have time (or much connectivity) to read the original document. I also had no time to devote to the topic on Saturday or Sunday due to helping my in-laws move and timing a trail race, so I didn't return to the topic until Monday. By that time, Slack had updated the document (on May 18) to include the unambiguous sentences I called out. I actually had the thought that it might have changed and consulted the Wayback Machine, but the changes started in the second paragraph and were quite minor until the addition of the Generative AI section at the end. It was close enough, and I was moving quickly enough in an attempt to meet my Monday publication deadline, that I erroneously assumed the document was the same.
With all that in mind, many of my criticisms are misplaced, and I apologize to those whom I've impugned. The May 17 version of the document that triggered this situation does not explicitly call out generative AI or large language models, and none of the examples it gives (channel recommendations, search ranking, autocomplete, and emoji suggestions) involve generative AI.
The main problem with the May 17 version is that it's old. Its first capture in the Wayback Machine is from October 2020, when it was titled "Privacy principles: search, learning and intelligence" and gave roughly the same examples. Although the document has changed since then, it's an obvious evolution. Notably, that first version doesn't even use the term "artificial intelligence" or the abbreviations AI and ML; it predates ChatGPT and the generative AI boom.
The criticism that Slack requires workspace admins to send email to opt out of the global models is legitimate, although I'm not sure sending email is necessarily any harder than finding a setting in Slack's proliferation of options. From Slack's perspective, the chance of anyone wanting to opt out of pattern matching that recommends channels and suggests emoji was probably low enough that no one thought to build a setting. It was only after these features became associated with AI that the approach started to seem unreasonable.
All that said, I still feel that Slack's mistake in failing to update the document to be clearer wasn't that bad. The subsequent changes Slack made show that even if the document wasn't as clear as would be ideal, Slack wasn't trying to put one over on us. Even in the problematic May 17 version, Slack said:
For any model that will be used broadly across all of our customers, we do not build or train these models in such a way that they could learn, memorise, or be able to reproduce some part of Customer Data.
Of course, because of the lack of trust many people have in the tech industry, even relatively clear statements like that don't necessarily have the desired effect. "Sure," one may think, "that's what you say, but how do we know that's true?"
And we don't. There are many lapses, security breaches, and broken promises. But simultaneously, we have to trust the technology we use to a large extent, because the only other option is to stop using it.
Well done, Adam, for explaining more. You've spent quite some time comparing versions of documents, which is something that shouldn't be necessary.
Tech companies will email "privacy policy update, making changes to this, that, and the other area," but not summarise what the changes are, or even tell us where they are in a meaningful way. That comes across as trying to hide the changes, or as making it as hard as possible to find them while fulfilling the duty to notify us. That in itself builds distrust.
Transparency surrounding revisions to online documents is an interesting issue, and I'm not sure there's a single right answer. For instance, I regularly make silent changes to TidBITS articles up until the point when I publish them in an email issue. After that, I make changes only for very small typos and other infelicities that couldn't cause confusion. If I need to make a more significant correction, I publish another article, much as I just did with iPhones Pause MagSafe Charging During Continuity Camera - TidBITS.
Of course, as you point out, significant changes should be noted. We don't have a good standardized way to do that on the Web, but there should at least be a revision date. Ideally, the Wayback Machine would make comparing versions easy, but I haven't found that to be the case. Maybe I need to do more research into how it works.
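For anyone who wants to script this sort of comparison rather than eyeball two Wayback Machine snapshots, Python's standard `difflib` module can produce a unified diff of two versions. A minimal sketch follows; note that the snapshot texts here are invented placeholders for illustration, not Slack's actual wording, and in practice each version would be fetched from a Wayback Machine snapshot URL (the archive's CDX API can list the capture timestamps for a given page).

```python
import difflib

# Hypothetical excerpts standing in for two archived versions of a policy
# document. In real use, each text would be fetched from a Wayback Machine
# snapshot URL such as https://web.archive.org/web/<timestamp>/<page-url>.
may_17_version = """Slack uses machine learning for features such as
channel recommendations, search ranking, autocomplete,
and emoji suggestions."""

may_18_version = """Slack uses machine learning for features such as
channel recommendations, search ranking, autocomplete,
and emoji suggestions.
Slack does not train generative AI models on Customer Data."""

# unified_diff yields header lines, then context lines (leading space),
# removed lines (leading "-"), and added lines (leading "+").
diff = list(difflib.unified_diff(
    may_17_version.splitlines(),
    may_18_version.splitlines(),
    fromfile="May 17 snapshot",
    tofile="May 18 snapshot",
    lineterm="",
))

for line in diff:
    print(line)
```

Running this prints the added sentence prefixed with `+`, which makes even a small late addition (like Slack's Generative AI section) jump out instead of hiding in a wall of unchanged text.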
This is another instance where Ted Nelson's Xanadu had it right, at least in theory: everything was under version control at all times.