Timeline for Is shred bad for erasing SSDs?

Current License: CC BY-SA 4.0

18 events
when what by license comment
Jun 18, 2020 at 21:05 review Low quality posts (completed Jun 19, 2020 at 1:58)
Jun 17, 2020 at 20:49 comment added ilkkachu @supercat, yes, that would be one problem. Spinning disks could do reallocation for bad blocks too, but probably much more rarely.
Jun 17, 2020 at 20:40 comment added supercat @ilkkachu: If a particular block has very many recoverable bit errors in its lifetime, an SSD controller may copy all of the pages from that block to a formerly-unused block and change the block mapping tables to include the new one and exclude the old one, and simply leave the old block forever holding whatever it held when its contents got relocated. Even if one were to fill the device with data multiple times, the old block might never get overwritten.
Jun 17, 2020 at 18:55 comment added Michael @dirkt: Wow, I was not aware that it’s that easy. I thought you’d need a firmware bug or something to access raw data, not just some readily available software and a built-in factory access mode.
Jun 17, 2020 at 18:25 comment added ilkkachu @dirkt, At the risk of being naive... I get it that wear-levelling would make it rather difficult to overwrite a particular file or block. But since they're overwriting the whole disk, i.e. writing as much data as the nominal capacity, wouldn't that require erasing an equal amount of blocks somewhere on the drive? And they should be pretty much evenly distributed around the drive because of wear-levelling too? Of course the drive probably has some hidden capacity in addition to the nominal, so you'd have to write more than that, but I mean in principle the drive has to erase at some point?
Jun 17, 2020 at 17:50 comment added dirkt @Michael To repeat: blkdiscard doesn't erase blocks. It marks them with TRIM (as would rm in a filesystem with TRIM support), and they typically get scheduled for erasure in the garbage collection process. That also makes them inaccessible for normal reads, but forensic-level understanding and tools are not difficult to obtain, see e.g. here. That's easily in reach of a "hacker", in contrast to e.g. the hardware you'd need to read magnetic flux off harddisk platters.
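For illustration, a minimal sketch of the discard path described above (the device name /dev/sdX is a placeholder; blkdiscard is destructive, so point it at the right drive):

    # Discard every block on the device: this issues TRIM, it does not overwrite anything.
    blkdiscard /dev/sdX
    # A normal read afterwards typically returns zeros, even though the flash cells
    # may still hold the old data until the drive's garbage collection erases them.
    hexdump -C /dev/sdX | head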
Jun 17, 2020 at 17:19 comment added Michael @dirkt: As I understand it if you read a block which has been erased with blkdiscard the drive’s firmware won’t return any sensible data for it. Physically the data might still exist, but you’d need forensic-level understanding of the drive and probably special equipment to retrieve it.
Jun 17, 2020 at 4:20 comment added dirkt @ArtemS.Tashkinov Then if "I don't need to protect the drive from the knowledgeable" is the criterion, you may as well just use rm - it's not exactly easy to recover an erased file from ext4, and you have to be quite knowledgeable to do that. And, BTW, I didn't offer any commands like dd (you should be a little bit more attentive). And no, you cannot simply overwrite data on an SSD. No matter which command you use. This misconception is the whole point of the discussion.
Jun 17, 2020 at 4:16 comment added dirkt @Mark It doesn't necessarily mean the block is actually erased; the block may be erased some time later asynchronously whenever the SSD is idle and does management tasks, or it may be erased just before the write. All of this is "proper implementation". The idea that TRIM somehow makes "blocks secure" isn't supported anywhere I can find.
Jun 17, 2020 at 4:15 comment added dirkt @Mark: The man page points to fstrim (as expected), and if you read e.g. ACS-4, it says nothing about "putting blocks into a ready to use state by performing an erase". Instead, there are a number of options for the next read, including "different data returned each time" and "the returned data is zero". So this will be implemented by a bit in the management table that says "this block is ready for re-use".
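For illustration, a minimal sketch of the filesystem-level counterpart mentioned above (assuming a filesystem mounted at /mnt):

    # Ask the filesystem to report its unused blocks to the device and send TRIM for them;
    # -v prints how many bytes were discarded.
    fstrim -v /mnt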
Jun 17, 2020 at 2:29 comment added Mark @dirkt, blkdiscard, if properly implemented, erases the data blocks and puts them into a "ready to use" state. This is exactly identical to the first half of the "read-modify-write" cycle the drive uses when updating the contents of a data block, and when applied to an entire drive, should result in a drive full of "1"s.
Jun 16, 2020 at 13:14 comment added Artem S. Tashkinov If you intend to overwrite exactly 2MB of data, then it's near instant for most modern desktop CPUs. Try it for 1TB of storage, i.e. without count :-)
Jun 16, 2020 at 13:03 comment added H3R3T1K Well "dd if=/dev/urandom of=/dev/sdc bs=1M count=2" is instant.
Jun 16, 2020 at 11:05 comment added Artem S. Tashkinov The user explicitly said: I'm not dealing with sensitive data here and don't need to protect the drive from a knowledgeable. Is "dd if=/dev/urandom of=/dev/sdc bs=1M count=2" the fastest way to erase a HDD while offering some security towards recovery then? Among all of the commands you've offered this is by far the slowest. Also usually unnecessary as writing zeros is enough to destroy data permanently (and has been for the past ~20 years).
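For illustration, a minimal sketch of the full-device zero overwrite being discussed (the device name /dev/sdc follows the commands above; status=progress assumes GNU dd):

    # Overwrite the entire visible capacity once with zeros; without count= dd runs
    # until the device is full. On an SSD this still cannot reach remapped or spare blocks.
    dd if=/dev/zero of=/dev/sdc bs=1M status=progress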
Jun 16, 2020 at 10:50 comment added dirkt How is blkdiscard "equally safe for your use case"? It just discards the blocks and tells the SSD it can re-use them. Someone with firmware-level access to the SSD can still recover data. At least explain the risks.
Jun 16, 2020 at 10:48 comment added H3R3T1K Is "dd if=/dev/urandom of=/dev/sdc bs=1M count=2" the fastest way to erase a HDD while offering some security towards recovery then? I added this command to the question above.
Jun 16, 2020 at 10:40 comment added Chris Davies Seconded +1. Don't "erase" an SSD by writing zeros (or bulk writing anything, for that matter)
Jun 16, 2020 at 10:04 history answered Artem S. Tashkinov CC BY-SA 4.0