Just like the title says, is this possible?
Say I have a file named myfile.dat, rm isn't going to do the job, and if I don't have the ability to install wipe or shred or some other tool, can I securely erase this file "by myself"?
Even with such tools, whether secure erasing works depends on the underlying filesystem.
If you are using a modern copy-on-write (COW) based filesystem, these tools will not work at all, since the filesystem does not write to the blocks previously used by the file.
If you want to do secure erasing, you either need that feature built into the filesystem, or the filesystem needs to implement an interface that lets a program retrieve the disk block numbers for a file. The latter method, however, is a security risk and can only be offered to a privileged user.
ioctl(fd, FIBMAP, &logical_block) allows a program to map a logical block to a physical block, which could then be overwritten (no idea how exactly; perhaps another ioctl call). An interface like ioctl(fd, _FIOAI, &fai) is problematic, as it is a security problem and only works for old filesystems that have a single underlying backing storage device. Because of that security risk, the newer lseek(fd, off, SEEK_HOLE) interface was defined for star; that interface was originally meant to detect holes, reported as block number -1. If you try to write to a block retrieved this way, you could even cause filesystem corruption if the block number is no longer valid at the time of the write.

Bear in mind that how well file-level erasing works in practice depends a great deal on the underlying filesystem and drive hardware. Modern copy-on-write (journalled) filesystems such as ext4 do not "overwrite" file data in the original location, which makes most file-oriented erase tools, such as shred and wipe, far less useful than they were in years past. SSDs have similar characteristics at the block level; see https://www.howtogeek.com/234683/why-you-cant-securely-delete-a-file-and-what-to-do-instead/
So we would have to assume you are doing this on an old-fashioned magnetic hard drive with a non-journalled, non-COW filesystem. Perhaps acceptable assumptions in the 1990s, but are they realistic in 2020?
Then, "basic Unix/Linux command line tools" seems vague. shred is part of coreutils, which is a set of "basic Unix/Linux command line tools". So on one level, just use shred and you meet your requirements!
shred -zu FILENAME

Otherwise, you seem to be asking for a way to overwrite the contents of an existing file (perhaps repeatedly), and then delete it, "by myself". You could use dd to copy zeroes over the file, but dd is part of coreutils too, so whether that is in practice any more "by myself" than using shred is debatable.
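As a sketch of that dd approach (the file name and content here are throwaway illustrations, and it only helps on a filesystem that really overwrites blocks in place):

```shell
# Throwaway file standing in for myfile.dat.
printf 'top secret payload' > myfile.dat

size=$(wc -c < myfile.dat)            # remember the original length

# Overwrite the file's bytes with zeros; conv=notrunc keeps dd from
# truncating, so (on a non-COW filesystem) the same blocks are reused.
dd if=/dev/zero of=myfile.dat bs=1 count="$size" conv=notrunc 2>/dev/null

# Sanity check: the length is unchanged and every byte is now NUL.
head -c "$size" /dev/zero | cmp -s - myfile.dat && echo "overwritten"

# Finally remove the directory entry as usual.
rm -f myfile.dat
```

For a large file, a bigger block size with seek/skip arithmetic would be faster than bs=1; this version just keeps the byte count exact.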
Does "by myself" mean using only sh or bash internal commands?
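If so, here is a sketch that leans almost entirely on bash builtins: bash's 1<> redirection opens a file read-write without truncating it, and the builtin printf can emit literal NUL bytes, so a loop can overwrite the existing bytes in place. (Only wc is external here, to learn the file's size; the file itself is a throwaway illustration.)

```shell
# Throwaway file standing in for myfile.dat.
printf 'another secret' > myfile.dat

size=$(wc -c < myfile.dat)   # wc is external; getting the size with
                             # builtins alone is awkward, so this cheats

# 1<> opens the file read-write WITHOUT truncating, and the builtin
# printf '\0' emits a NUL byte, so this overwrites from offset 0 onward.
for ((i = 0; i < size; i++)); do
  printf '\0'
done 1<> myfile.dat

# Sanity check, then remove the directory entry.
head -c "$size" /dev/zero | cmp -s - myfile.dat && echo "zeroed"
rm -f myfile.dat
```

The same caveat applies as for dd: on a COW filesystem or an SSD, "in place" at the file level does not guarantee "in place" on the medium.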
shred is a GNU-specific command and not part of the standard UNIX command line tools. Also, dd by default truncates the output file and writes to newly allocated disk blocks, even if the underlying filesystem is not a COW filesystem. And by the way: why do you believe that a journalling filesystem is a COW filesystem? (Although with data=journal, a copy of the actual data would be left in the journal as well.) lwn.net/Articles/576276 explains how BTRFS differs from normal filesystems like EXT4 and XFS.

Many systems have dd but not GNU shred, and the question explicitly kept the scope broad enough to include systems that aren't modern GNU/Linux. So yes, dd if=/dev/urandom of=file conv=notrunc ... seems to be a slow version of what the question is looking for, unless shred does something more than that. And because this problem is typically important when it arises, it may not matter how contorted the solution is.
But if it's important, this actually may be a practical solution.
For a tool to safely delete a file, it practically needs to know the filesystem. So a general tool that can do this on any filesystem practically cannot exist.
But we do not need to make assumptions about the file system.
We start with removing the file we want to be forgotten completely, on the file system level, with a normal rm command:
$ rm -f myfile.dat

Now it no longer exists in the filesystem. But we know the content still exists on the storage under the filesystem, because removing the file took no time at all: nothing was overwritten. And it is possible, even common, that the filesystem itself still holds information about where it was.
We want to have a filesystem that has no traces of the file left. And seen from the filesystem's surface, there are already no traces left!
Now, we create a suitable new, empty filesystem elsewhere.
And then, we make a copy of the old filesystem on the file level, where it is already clean!
We still do not know much about where information from the file may remain, but we do know it is somewhere in the storage under the old filesystem. So we simply ignore that the old filesystem still exists, and clean up the storage by writing pseudorandom bytes from /dev/urandom over all of it.
That's enough to delete the data.
If you still do not feel sure it's gone, do not try to write actual random numbers from /dev/random. There is not much randomness that can be read from it, and other software needs some randomness too.
Just repeat overwriting it with pseudorandom numbers from /dev/urandom until you relax.
Then, you can move the new filesystem as a whole to the original storage place. Done.
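The whole procedure might look like this. The device names, mount points, and filesystem type are purely hypothetical placeholders (sdXN for the old storage, sdYN for a spare partition of at least the same size), every command needs root, and step 4 destroys the old partition, so treat this as a sketch, not a recipe:

```shell
# 1. Remove the file on the filesystem level (old FS mounted at /mnt/old).
rm -f /mnt/old/myfile.dat

# 2. Create a fresh, empty filesystem on the spare partition.
mkfs.ext4 /dev/sdYN
mount /dev/sdYN /mnt/new

# 3. Copy everything over on the FILE level. The copy never sees the
#    deleted data, so the new filesystem is already clean.
cp -a /mnt/old/. /mnt/new/

# 4. Overwrite the old storage, filesystem and all, with pseudorandom
#    bytes. Repeat this step until you relax.
umount /mnt/old
dd if=/dev/urandom of=/dev/sdXN bs=1M

# 5. Put the clean filesystem in the original place, e.g. by copying it
#    back onto the wiped partition or by mounting /dev/sdYN there instead.
```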