Wiping hard disks is part of my company's policy when returning servers. No exceptions.
Good providers will wipe what they have received back from a customer, but we don't trust that, as the hosting / cloud business is under constant budget pressure, and cutting corners (e.g. a quick `wipefs` instead of a full overwrite) is a likely consequence.
With modern SSDs there is "security erase" (see `man hdparm` or the - as always well maintained - Arch wiki), which is useful if the device encrypts by default. These devices basically "forget" the encryption key, but that also means trusting the device's implementation security, which doesn't seem warranted. Still, after wiping and trimming, a secure erase can't be a bad idea.
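For reference, the `hdparm` sequence looks roughly like this. It is a dry-run sketch: `/dev/sdX` is a placeholder, the commands are only printed (drop the `echo` prefixes once you are sure about the target), and the password `p` is a throwaway since the erase clears it again.

```shell
#!/bin/sh
# Dry-run sketch of the ATA secure-erase sequence via hdparm.
# DEV is a placeholder -- replace with the real device (e.g. /dev/sdX)
# and remove the "echo" prefixes to actually run the commands.
DEV=${DEV:-/dev/sdX}

echo hdparm -I "$DEV"                                      # 1. must report "not frozen"
echo hdparm --user-master u --security-set-pass p "$DEV"   # 2. set a throwaway password
echo hdparm --user-master u --security-erase p "$DEV"      # 3. erase; SEDs just drop the key
```

Note that many BIOSes "freeze" the ATA security feature set at boot; if step 1 reports "frozen", a suspend/resume cycle or a different boot path is needed before the erase will be accepted.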
Still, there are four things to be aware of when wiping modern hard disks:
- Don't forget to add a `bs=` argument to `dd`, as it will still default to 512 bytes, and that makes writing even zeros less than half the maximum possible speed. SSDs may benefit from larger block sizes matched to their flash page structure, which is usually 128 kB, 256 kB, 512 kB, 1 MB, 2 MB or 4 MB these days.
- All disks can usually be written to in parallel. `screen` is your friend.
- The write speed varies greatly by disk region, so use 2 hours per TB and wipe pass as a conservative estimate. This is better than extrapolating from the initial speed you see in the fastest region of a spinning disk.
- The disks have become huge (we run 12 TB disks in production now) but the write speed is still somewhere between 100 MB/s and 300 MB/s. So wiping servers on the last day before returning them is not possible anymore with disks larger than 4 TB each (and three passes), or 12 TB and one pass (where e.g. fully encrypted content allows just a final zero-wipe).
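The first two points combine into a loop like the following. This is a sketch: it runs against small scratch files instead of real block devices so nothing is harmed, and the `count=` is only needed for the file demo (a real device simply ends when it is full). On a real server the targets would be `/dev/sdX` etc., launched inside `screen`.

```shell
#!/bin/sh
# Zero-wipe several "disks" in parallel with an explicit block size.
# Demonstrated on scratch files; replace DISK1/DISK2 with real devices.
set -e

DISK1=$(mktemp); DISK2=$(mktemp)

# simulate old data on the scratch "disks"
for d in "$DISK1" "$DISK2"; do
    dd if=/dev/urandom of="$d" bs=4M count=2 status=none
done

# the actual wipe: bs=4M instead of dd's 512-byte default, one job per disk
for d in "$DISK1" "$DISK2"; do
    dd if=/dev/zero of="$d" bs=4M count=2 conv=notrunc status=none &
done
wait    # run this inside screen so it survives an SSH disconnect
```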
| hard disk size | one pass | three passes |
|---|---|---|
| 1 TB | 2 h | 6 h |
| 2 TB | 4 h | 12 h |
| 3 TB | 6 h | 18 h |
| 4 TB | 8 h | 24 h (one day) |
| 5 TB | 10 h | 30 h |
| 6 TB | 12 h | 36 h |
| 8 TB | 16 h | 48 h (two days) |
| 10 TB | 20 h | 60 h |
| 12 TB | 24 h | 72 h (three days) |
| 14 TB | 28 h | 84 h |
| 16 TB | 32 h | 96 h (four days) |
| 18 TB | 36 h | 108 h |
| 20 TB | 40 h | 120 h (five days) |
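The table is just the 2 h/TB/pass rule of thumb applied; a trivial helper (the function name is made up) reproduces any row of it:

```shell
#!/bin/sh
# Estimate wipe duration from the conservative 2 hours per TB per pass rule.
# wipe_hours SIZE_TB PASSES -> estimated hours (illustrative helper)
wipe_hours() {
    echo $(( 2 * $1 * $2 ))
}

wipe_hours 12 1    # 12 TB, one zero pass
wipe_hours 12 3    # 12 TB, three passes
```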