tetrapetrichord wrote:
Not sure you read my 4th sentence… AFAIK, data wiped with classic known algorithms is still recoverable, especially in a quantum/post-quantum age. I would prefer to remove and smash an SSD/HDD, and I did a lot of this for a previous employer in the medical sector.
More details than you probably want…
If we're discussing floppy disks or hard disks prior to the advent of embedded servo data tracking, overwriting is a thing. That would be the 1990s, and earlier. Really sloppy head positioning back then. (Trivia: the unique sound made by the Apple II and its floppy disks was directly related to head positioning.) That sloppiness led to the “Orange Book” repeated-overwrite recommendations from that era, either with zeros or with pattern data.
This “Orange Book” was from the US DoD / NCSC / NIST “Rainbow Books” era, which ran from the 1980s into the 1990s. (Red and Orange are probably the most interesting.)
With hard disks, and particularly with embedded-servo head tracking from the 1990s and newer, the forensic gear needed to try to recover data got far more expensive, as the head-positioning tracking got vastly more accurate. Basically, you need to bring your own head-positioning firmware or your own disk hardware, all to deliberately nudge the heads slightly off track and see what data might have remained. And even that expense and effort probably gets you nothing, as the heads are far more accurate, even after just a single-pass overwrite.
Multi-pass overwrite is an attempt to compensate for sloppy head tracking. When we had 10 MB disks, and floppy disks, tracking could be sloppy. Or exceedingly sloppy. Modern hard disks inherently get their higher capacity from higher density and higher accuracy. And lately, quite possibly with the use of lasers (HAMR, etc.).
Another decade or so onward, and Solid State Disks (SSDs) radically changed how storage is implemented. Everything inherently gets erased and rewritten, as flash sectors can't be re-used otherwise. And wear-leveling means the traditional overwriting tools are entirely futile: a write aimed at a logical sector can land on a completely different physical block. Your sole option for an overwrite is to flood the entire storage device with writes, in the hope of also churning through the over-provisioned spare area.
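Purely as an illustration of what that flood-write looks like against a raw block device, here's a minimal Python sketch. The device path is a hypothetical placeholder, and note the caveat in the comments: on an SSD this only reaches the sectors the host can address, which is exactly why the drive's own secure-erase / crypto-erase facilities are the better tool.

```python
import os

# Hypothetical placeholder; writing to the wrong device destroys its contents.
DEVICE = "/dev/disk9"          # example only, NOT a recommendation
CHUNK = 4 * 1024 * 1024        # 4 MiB of fresh random data per write

def flood_write(path: str) -> int:
    """Overwrite a raw block device end to end with random data, once.

    On an SSD this only reaches the LBAs the host can see; the drive's
    over-provisioned spare area is never directly addressable from here.
    """
    written = 0
    fd = os.open(path, os.O_WRONLY)
    try:
        while True:
            try:
                n = os.write(fd, os.urandom(CHUNK))
            except OSError:        # typically ENOSPC at the end of the device
                break
            if n == 0:             # some platforms report end-of-device this way
                break
            written += n
        os.fsync(fd)
    finally:
        os.close(fd)
    return written

if __name__ == "__main__":
    print(f"wrote {flood_write(DEVICE)} bytes to {DEVICE}")
```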
Add to that, more recently: Apple T2 and later Macs always encrypt the stored data, so erasing an entire volume is little more than a key-rotation operation. Swap the keys, and the data is cryptographically inaccessible. The sectors are still full of (encrypted) data, but the decryption key is long gone. Move the storage to a different Mac, and the data is unreadably encrypted.
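To make the “swap the keys” point concrete, here's a toy Python sketch of crypto-erase using the third-party `cryptography` package. This is not how the T2 or Apple silicon Secure Enclave works internally, just the general idea: what lands on the flash is ciphertext, and once the key is destroyed, that ciphertext is noise.

```python
# Toy illustration of crypto-erase, not Apple's actual implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 1. The media is always encrypted under a volume key.
volume_key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(volume_key)

nonce = os.urandom(12)
plaintext = b"patient records, tax returns, etc."
on_disk = aead.encrypt(nonce, plaintext, None)   # what actually hits the flash

# 2. "Erase" = destroy the key, not the data.
#    In real hardware the key lives in (and is wiped from) the secure enclave.
del aead
volume_key = None

# 3. The sectors still hold on_disk, but without the key it is unrecoverable noise.
print(on_disk.hex()[:64], "...")
```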
As for enabling FileVault specifically, “If you have a Mac with Apple silicon or an Apple T2 Security Chip, your data is encrypted automatically. Turning on FileVault provides an extra layer of security by keeping someone from decrypting or getting access to your data without entering your login password.”
(On Macs prior to T2, FileVault must be specifically enabled. Otherwise, internal storage is not encrypted.)
(FileVault has the added benefit of encrypting the contents of sectors that might eventually become bad, and so get spared out where no overwrite can ever reach them.)
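If you want to verify which state a given Mac is in, the built-in `fdesetup status` command reports whether FileVault is on. A small Python wrapper (just one way to call it, and the output-string check is an assumption about the message format) might look like:

```python
# Minimal sketch: ask macOS whether FileVault is enabled.
import subprocess

def filevault_enabled() -> bool:
    result = subprocess.run(
        ["fdesetup", "status"],
        capture_output=True,
        text=True,
        check=True,
    )
    # Typical output: "FileVault is On." or "FileVault is Off."
    return "FileVault is On" in result.stdout

if __name__ == "__main__":
    print("FileVault enabled:", filevault_enabled())
```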
Now, if your data is still that sensitive, or you expect to be targeted by folks with gear well past any announced quantum computing capabilities (however unlikely that might be), and that's certainly your decision, or that of the site security officer, then shred the entire computer and melt the results into slag.