Best way to secure erase external 2TB Samsung SSD drive before returning it?

Any file that is on the FS at the time you perform the encryption will be encrypted. If you throw away the key, there will be no access to those files. With or without bypassing the controller.

The open question is what happens to blocks that contain data from files that have been deleted (empty trash, rm, etc.) but have not yet been overwritten with other data. If those blocks are not encrypted, bypassing the controller could allow you to read them out (not the internal flash on a T2 or M1/M2 Mac, however). The entire issue boils down to getting a detailed understanding of how the ad-hoc encryption of an APFS volume works. Does it encrypt every block on the drive (in which case you're safe as soon as encryption has completed), or only those holding files in use (in which case you'd have to overwrite the free blocks first before you consider the disk safe to discard)?

Yes – that is what I was concerned about. Thank you for clarifying.

Indeed, I've been wondering for quite a while about those specifics. Unfortunately, to this date no documentation has been made available to definitively answer the question. There's good reason to believe Apple has done this right and all blocks (apart from those previously mapped out) are encrypted, but until they confirm we really have no way of being certain.

@simon @davbro

What I did, based on @fischej 's help above, and to work around the open question of free space and encryption, was simply to run the following command first to fill up the drive. It's simple, takes a few hours, and I can keep working while it runs in the background.

cat </dev/zero >/Volumes/TroublesomeSSD/bigtempfile
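Once the volume fills up, cat exits with a "No space left on device" error. At that point you can delete the temp file again before erasing or reformatting the drive (the volume name is just the example from above):

rm /Volumes/TroublesomeSSD/bigtempfile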

Isn't there also the option of diskutil secureErase freespace …

as explained above
https://talk.tidbits.com/t/best-way-to-secure-erase-external-2tb-samsung-ssd-drive-before-returning-it/19460/43?u=lucas043
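For reference, a single zero-fill pass over the free space would look something like this (level 0 means one pass of zeros; higher levels write multiple or random passes; the volume name is assumed):

diskutil secureErase freespace 0 /Volumes/TroublesomeSSD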

Maybe. But the SSD controller may try to outsmart you by simply marking the blocks as garbage in response to seeing the all-zeros blocks.

If this isn't a problem for you, then no problem. Maybe you aren't concerned about someone bypassing the SSD controller, or you're going to keep on using the SSD (in which case, garbage collection will definitely run at some point before you eventually get rid of it).

But if you want to ensure that actual data is written to flash for every block, try writing random data instead:

cat </dev/random >/Volumes/TroublesomeSSD/bigtempfile

Of course, this may still leave some of your data behind as garbage, waiting to be collected. But if you then delete the temp file and repeat the process (maybe 2 or 3 times), you should end up consuming all of the free flash blocks, forcing immediate garbage collection. The SSD will run very slowly until the garbage collection completes, but it will be far more likely that the garbage blocks containing your deleted file data get collected.
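A minimal sketch of that fill-delete cycle, assuming the same example volume name as above:

for i in 1 2 3; do
  # fill the volume until cat fails with "No space left on device"
  cat </dev/random >/Volumes/TroublesomeSSD/bigtempfile
  # delete the file so the next pass consumes a fresh set of flash blocks
  rm /Volumes/TroublesomeSSD/bigtempfile
done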

2 Likes

My brother works for the NSA. His reply to related questions: "Take your drive out to the garage. Get out your drill. Drill several holes through the drive from one side to the other. Throw the drilled drive in the trash or put it in electronic recycling, if that is available."

2 Likes

Piling on a bit. And adding to what Alderete said.

A brief history.

Back before spinning disks had controllers built into the drive, bad spots were a worry for the OS and/or the applications. (Those were the fun days.)

Then, as densities got higher (think 3.5″ disks onward), bad spots were statistically going to exist on all disks, if not at first then soon after first use. So spare space was reserved. Which is why, if you listened closely to a disk seeking to every location, you'd occasionally hear a hard click as the heads jumped out to the spares area to pick up a relocated spot.

Modern spinning disks work this way in general. Spare locations can take up roughly 1% of the real capacity of a drive. Higher-performance drives do things like cache the last 100 accessed bad-spot remaps so the head seeks don't wreck performance, or try to predict them in advance and grab them during quiet times.

Now enter SSDs. There is a page system a bit higher up than the sectors us mortals work with. And pages can only be written so many times before they will fail. (The write cycles literally wear out the semiconductor cells.) So now, to make all of this work, the firmware on a single disk controller is really more of a very sophisticated OS with a primitive command interface. Over 5+ years ago the Samsung EVO drives had a firmware image that was 380+ MB in size. (The original Mac OS was something like 0.001% of that size.)

As others have indicated, as a page gets used it will be replaced by an unused, or maybe less-used, one. This process nowadays involves a LOT of caching, so the slowdowns of prior years don't happen on decent drives. And this caching can give some folks heartburn, as many SSDs may have dozens of pending writes open at any one time. So if the power fails, oops. But they tend to have built-in capacitor setups to keep the power up long enough to flush any pending cached writes.

But this process also requires a LOT of spare pages. So a cheap drive might be 20% over-provisioned, while a pro/enterprise-class drive might be 100% over-provisioned. So that Samsung 850 EVO 1TB drive might have 1.5TB of storage space in it, along with 500MB of firmware OS and maybe a similar amount of cache.

anandtech.com is a good source of all of these details.

Now to my point. The ONLY way to ensure that the data is gone is to grind the drive into dust. Spinning or SSD. This is what those TLAs do. (And as a side diversion, they extract all the electronics and grind them up too. The keyboard, USB, Ethernet, and Wi-Fi controller chips have more memory than hard drives of 30+ years ago. And if you're really good, you can store things there to keep them hidden from those TLAs. Mostly. Unless they really want to dig deep.)

Why grind it up? Unless you know all the algorithms in the disk drive's OS, you have no idea what is in those replaced, relocated, marked-for-no-more-use pages, on both SSDs and spinning disks, or how to ensure they are really erased.

For spinning disks, a drill bit through the round parts covers most people, all except those who attract TLAs. Taking them apart and smashing them with a hammer can generate flying bits of metal-coated ceramic, which can do nasty things to skin or eyes. But you do get some interesting rare-earth magnets.

For SSDs, smash them with a hammer, then put them in a yard-sale blender and hit the "10" button.

If TLAs are not looking at you, some of the other methods here will be fine.

But be wary of comments about erasing the ENTIRE disk without physical destruction.

And yes, turning on disk encryption at first boot is a way to encrypt most of the drive, except the parts written at the factory and in the minutes before you turn encryption on.

5 Likes

thank you very much for putting the subject in historical perspective. It allows me to better understand the "evolution" (as in natural selection) of those storage devices.

There is one issue that is unclear: forum members who posted to this discussion (including yourself) put a lot of emphasis on the life expectancy of SSDs, as illustrated so well in the quote above, and as a consequence have strong reservations about shell commands which either secure-erase an external drive with 3 passes, as in

diskutil secureErase 4 disk2

or fill up the disk with random bits and then format + encrypt:

cat </dev/random >/Volumes/TroublesomeSSD/bigtempfile

But we are not talking about doing this twice a week. More likely once a year at most. Does once a year put so much stress on the SSD that it's worth avoiding?

thank you @raleighthings

I must have missed something. I thought we were talking about a one-time process in order to get rid of an SSD for whatever reason, not a maintenance thing. One should always avoid unnecessary writes to an SSD unless you plan on having to replace it periodically.

2 Likes

yes, you are right. I was just extrapolating, because SSDs change function: one day backup, a few months later something else.

Well yes, but you need to consider what the data on that drive is actually worth.

Do I care if someone spends thousands of dollars to extract my MP3 collection from a trashed drive? No. I couldn't care less.

Do I care if they can extract credit card numbers? Yes. But it's doubtful anyone looking for that kind of information will go through the time and expense. And if someone actually does go hacking the raw flash chips to get card numbers, the bank would flag fraudulent use soon afterward and cancel the card number. So it's unlikely that any thief would find it worth the effort.

If the drive contains classified information that could compromise national security, well, that's a completely different matter. A foreign government may well have the means and desire to do this.

Why should you care about the data written at the factory and in the minutes before you turn it on? Do you really care if an attacker somehow manages to get your copy of Seagate Backup Assistant? And do you seriously think it will remain recoverable after you've erased it and have been using the drive for several years?

Even if you put lots of content on the drive before encryption (e.g. installing macOS on it), that's not a problem as long as you turn on encryption before you migrate your data or log in to your iCloud account. Do you really care if this mythical attacker who can extract files overwritten five years ago learns that you were running macOS?
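For what it's worth, on the boot drive that step can be done from the command line with the stock macOS fdesetup tool (it will prompt for an administrator's credentials and print a recovery key):

sudo fdesetup enable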

2 Likes

I was addressing some of the absolutist statements being made. Words like ALL data, ENTIRE disk, etc.

I have my doubts that any of the methods via commands to a disk will do anything to a page that the firmware has marked as "done". And yes, while it's hard to get the data off of one of those, people keep saying things like "erase ALL the data".

Not "erase it well enough for practical purposes".

2 Likes

The problem is that you are fighting the goal of the firmware in most SSDs. That goal is to make the drive last as long as possible by evening out the wear. And so, as you write patterns to the entire drive, the firmware will be swapping pages around to keep the wear even. Which will likely mean that a page with, say, 100 sectors might be swapped out for a fresh one after you've only written over 50 of the sectors on that page. So half of the page is still there with the data.

Which is why, if you're serious about hiding the data, just encrypt the drive before doing any real work. Then just forget the encryption key.
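On macOS, a minimal sketch of that crypto-erase approach might look like this (the disk identifiers here are assumptions; check diskutil list before running anything destructive):

# wipe the partition map and create a fresh APFS container
diskutil eraseDisk APFS Scratch disk4
# the new volume appears under a synthesized identifier, e.g. disk5s1;
# encrypt it with a random throwaway passphrase that is never recorded
diskutil apfs encryptVolume disk5s1 -user disk -passphrase "$(uuidgen)"

Once the encryption run completes, the key is effectively gone and everything written under it stays scrambled.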

1 Like

Which is why I (and several others) have been very careful to differentiate between software-accessible data and data that requires bypassing the SSD controller to access.

The former is easily recovered by anyone with the right software.

The latter will require specialized tools (assuming the SSD controller itself doesn't auto-encrypt data, like Apple's T2, A-series and M-series chips do), and even with those tools, the possibility of recovery will decrease over time as the drive's normal garbage-collection algorithms run in the background.

The odds of recovery will eventually go to zero, but we have no way of knowing when without detailed data about how the SSD controller's garbage collection works.

2 Likes

Yes, you are right. In this case the data is accounting including bank and credit card info.

In that vein, could we perhaps question whether encryption of the primary and external drives should be automatic, as almost all posts above suggest? I know it sounds like heresy.

I remember being furious about losing data in the past because I lost the encryption key (for a drive or a file). I could no longer make sense of an encryption-key hint which was "obvious" at the time. More importantly, we are all mortal. Think about a non-computer-savvy surviving spouse, family member, or colleague stuck with encrypted data. It would be terrible. For this reason, I do not encrypt drives, primary or external.

I am just saying that everything has pros and cons, including encryption.

I would be happy to change and encrypt everything if you think that my reasoning does not make sense. You are the expert and I am a dilettante.

Yep… to be absolutely sure that nothing is recoverable, nothing less than physical destruction will do. Even with multiple overwrites, if some 3-letter agency (I used to be in the highly classified intelligence biz) really wants to spend the bucks to use a scanning electron microscope on the platters, they can get some of it back if they really want to. Not all… but some.

That said… unless the OP is some very high-value target… overwriting with random data, filling it with a bunch of image files or whatever, encrypting and then deleting the key, or just about any of the other reasonable alternatives in this thread… is really good enough. Just like passwords and a whole bunch of other things… better is the enemy of good enough.

3 Likes

One of my favorite techniques to erase a drive is to use a really big hammer :smiley:.

Not only does it make the drive unreadable, but you feel way better afterwards.

2 Likes

Could you please elaborate a little? Does this auto-encryption make FileVault unnecessary?

1 Like

This is not in competition with FV. On a T2/M1/M2-based Mac (and even on A-series iPhones), there is a level of hardware encryption that is always present for the internal SSD. If you then choose to run FV2 on top of that, what it does is apply an additional user-specific key on top of the hardware key already in use. That second step ensures that only you can access the flash memory of the SSD through its controller (supplied by the T2 or M1/M2).

But even if you forgo FV2, the hardware encryption will still be there to ensure that you cannot just remove that flash memory from the Mac (or access it via something like TDM) and read it out in clear text. That's why, when the flash gets exchanged on a socketed system (like the Mac Studio), it needs to be set up from scratch again by that system's M1: you cannot take one Mac Studio's flash, put it into another, and just continue to work. That level of hardware encryption is there to ensure you cannot bypass the controller to access stored data. To access the stored data you have to go through that specific controller on that specific Mac. Break that chain and the data will remain scrambled forever. It is always present, regardless of the FV2 setting.

All FV2 does is add another key on top of that encryption to make it not just specific to that system, but specific to a certain user on that system (the admin who supplies the key when FV2 is activated).

1 Like

I think what he is referring to is on-drive encryption.

Almost all SSDs these days encrypt the data within the drive. This is to prevent someone from removing the memory chips and extracting the raw data from them.

A good controller from a good vendor will have a different (and hard to guess), decently long encryption key for each controller/drive. Not all do. And in the early days, some vendors used the same key in every drive of a particular model, or even across all their drives, which made the protection weak at best.

1 Like