Originally published at: Slack to Stop Storing Historical Content for Free Workspaces - TidBITS
Slack will be reducing its data storage needs, and trying to incentivize free teams to upgrade, by deleting data older than a year from free workspaces. Only the last 90 days of messages are visible on a free plan anyway; the change affects those who later upgrade to a paid plan and would previously have recovered all their old data.
Slack is shooting itself in the foot again. Maybe I should say, its users need to dance!
I've been part of the xAPI Cohort, now sponsored by the Learning Guild, for about five years. The previous sponsor had set up a Slack channel that was well used by the twice-a-year free learning cohort to discuss project ideas, build teams, document project process over the 12-week cohort, and share results and code!
The previous sponsor, who purchased a corporate license, investigated how to maintain the Slack history and data that had accumulated over nearly a decade of use. They were told "thank you for your purchase," but that they would also need to buy a license for each of the 5,000 users to maintain the archive.
We are seeing our team's knowledge erode in a horrible way. New users were told to join, and to consider purchasing an account, to continue access to the archives. A few did, and then, to our horror, the archives began to "zombify" in front of us.
Free users saw ads saying we needed a paid account to continue to access the archives. Then we saw messages that the content was no longer available. Then, paid users saw these same messages!
Further investigation revealed that the archives were eroding because not all of the users had upgraded.
They are still pumping new users into the unsustainable Slack economy. The cycle begins again this fall.
IMHO, we need to discuss an open, archivable alternative.
I realise that I don't know the full purpose and nature of the group, but from your description it doesn't sound like Slack is the right tool. It's essentially a chat app, so I would be hesitant to use it for anything where I want a permanent record or archive. A forum like Discourse, which TidBITS Talk runs on, seems more appropriate (though probably not quite right either).
The sponsor (or someone else who has the time and access) needs to write/run a script to download the entire history. Then put the archive on a file-sharing system of some kind. Some mechanism to format/link them in order to preserve discussion threads would be nice, but even a giant directory of text files would be better than nothing.
Mailing lists have had this capability for quite some time. Hopefully Slack will have something or, at minimum, won't put up roadblocks preventing the owner from downloading the content.
And I agree with @jzw that some other kind of forum software, where the owner owns (and can therefore download, back up, and archive) the database, is critical here. You simply can't trust a third party to be a responsible steward of your data.
For anyone who would like to extract all their historical data from a free workspace, it turns out you can do that without subscribing. You only get public channels (but you can make private channels public temporarily and set them back again afterward) and files are only linked, not downloaded, which is a loss, but you will get all the text in JSON format.
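For anyone wondering what that exported JSON looks like: here's a minimal sketch of flattening it into dated, human-readable lines. It assumes the standard export layout, where each daily file in a channel's directory is a JSON list of message objects with `ts`, `user`, and `text` fields; the sample data below is invented, and real export files carry many more fields (threads, reactions, attachments) than shown.

```python
import json
from datetime import datetime, timezone

# Hypothetical snippet of one daily export file; field names match the
# standard Slack export, but the values here are made up.
sample = """[
  {"ts": "1685624400.000100", "user": "U012ABCDEF", "text": "Kickoff at 10am"},
  {"ts": "1685628000.000200", "user": "U034GHIJKL", "text": "Results posted"}
]"""

lines = []
for m in json.loads(sample):
    # "ts" is a Unix timestamp with a uniqueness suffix after the dot.
    when = datetime.fromtimestamp(float(m["ts"]), tz=timezone.utc)
    lines.append(f"{when:%Y-%m-%d %H:%M} {m['user']}: {m['text']}")

print("\n".join(lines))
```

Even just running something like this over every file would give you the "giant directory of text files" fallback mentioned above.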
So here's a question: What's a good process/format for making this data human-readable?
Sounds like something a skilled web app developer could put together. JSON data is pretty straightforward, and there are plenty of open source libraries for handling it. And there are standard tools (like wget) that can download links.
So you would need to decide what the results would look like (probably something like a set of linked HTML files), but I think it should be possible to write a program in Python or something similar to either bulk-generate a web site from the content or dynamically generate pages from the JSON data.
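A sketch of the bulk-generate route, assuming Slack's standard export layout (one subdirectory per channel containing daily JSON files). The function names here are invented for illustration, message threading is ignored, and user IDs are shown raw rather than resolved to display names:

```python
import html
import json
from datetime import datetime, timezone
from pathlib import Path

def messages_to_html(channel: str, messages: list[dict]) -> str:
    """Render one channel's messages as a simple HTML fragment,
    sorted by timestamp, with user-supplied text escaped."""
    rows = [f"<h2>#{html.escape(channel)}</h2>"]
    for m in sorted(messages, key=lambda m: float(m.get("ts", 0))):
        when = datetime.fromtimestamp(float(m["ts"]), tz=timezone.utc)
        rows.append(
            f"<p><b>{html.escape(m.get('user', '?'))}</b> "
            f"<i>{when:%Y-%m-%d %H:%M}</i><br>"
            f"{html.escape(m.get('text', ''))}</p>"
        )
    return "\n".join(rows)

def export_to_site(export_dir: str, out_file: str) -> None:
    """Walk an export tree (one directory per channel, daily JSON
    files inside) and write a single static HTML page."""
    parts = ["<html><body>"]
    for channel_dir in sorted(Path(export_dir).iterdir()):
        if not channel_dir.is_dir():
            continue
        msgs = []
        for day_file in sorted(channel_dir.glob("*.json")):
            msgs.extend(json.loads(day_file.read_text(encoding="utf-8")))
        parts.append(messages_to_html(channel_dir.name, msgs))
    parts.append("</body></html>")
    Path(out_file).write_text("\n".join(parts), encoding="utf-8")
```

A real version would also want to resolve user IDs via the export's users.json and rewrite file links, but even this much gets you a browsable, owner-controlled archive.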
Both ChatGPT and my PhD son claim it's trivial to do in Python. I'll let Tristan take a crack at it when he visits soon.
Wow! Congratulations! Is that recent, or just new to me?
I was too terse: PhD candidate… He's three years into a machine learning program at Simon Fraser University.