How to Securely Delete Files on Linux


Shred old data files for the same reason you shred old paper documents. We tell you what you need to know about securely deleting Linux files. This tutorial covers the shred command and the secure-delete suite of utilities.

Deleted Files Are Normally Recoverable

Deleting a file doesn’t actually remove it from your hard drive. It’s all down to the way your filesystem uses inodes. These are the data structures within the filesystem that hold the metadata regarding the files. The name of the file, its position on the hard drive, what attributes and permissions it has, and so on are all stored within an inode. A directory is no more than a file itself. One that holds the names and inode numbers of the files that the directory contains.

When you delete a file with rm, the filesystem frees up the appropriate inode and adjusts the directory file. This marks the space on the hard drive that the file used to occupy as unused. Imagine you walk into a library and go through the card index, find a book’s catalog card, and rip it up. The book is still on the shelf. It’s just harder to find.

In other words, the space that was used by the file is now free to be used by other files. But the contents of the old file still sit in that space. Until that space is overwritten, there is a good chance that file can be retrieved.
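You can see some of this bookkeeping for yourself with ordinary tools. As a rough sketch, using a hypothetical throwaway file called note.txt, ls -i prints the file’s inode number and stat shows the metadata held in that inode. When you then rm the file, only that bookkeeping is released; the data blocks themselves sit untouched until something else reuses them.

echo "throwaway text" > note.txt
ls -i note.txt
stat note.txt
rm note.txt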

But completely getting rid of a file isn’t as straightforward as simply overwriting it, as we shall see.

Don’t Do This With SSDs

These techniques are for traditional electro-mechanical hard disk drives (HDD), and should not be used with solid state drives (SSD). It won’t work and will cause extra writes and unnecessary wear to your SSD. To securely erase data from an SSD, you should use the utility provided by the manufacturer of your SSD.
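If you’re not sure whether a particular drive is an SSD, lsblk can tell you. As a quick check, the ROTA column shows 1 for a rotational, electro-mechanical drive and 0 for a solid state drive:

lsblk -d -o NAME,ROTA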

RELATED: How to Delete Files and Directories in the Linux Terminal

The shred Command

shred is designed to perform the overwriting for you so a deleted file cannot be recovered. It is included in all of the Linux distributions that were tested during the research for this article, including Ubuntu, Fedora, and Manjaro.

In this example, we’re going to be working in a directory called ~/research, which contains many text files. It also contains some other directories which in turn contain other files. We’re going to assume these files are sensitive and must be erased entirely from the hard drive.

We can see the directory tree by using the tree command as follows. The -d (directory) option causes tree to list directories only, without listing the files they contain. The directory structure looks like this:

tree -d

directory tree structure in a terminal window

Shredding a Single File

To shred a single file, we can use the following command. The options we are using are:

  • -u: Deallocate and remove the file after overwriting.
  • -v: Verbose option, so that shred tells us what it is doing.
  • -z: Performs a final overwrite with zeroes.

shred -uvz Preliminary_Notes.txt_01.txt

shred -uvz Preliminary_Notes.txt_01.txt in a terminal window

By default, shred overwrites the file with three passes of random data. Because we used the -z option, a fourth and final pass writes zeroes. shred then removes the file and overwrites some of the metadata in the inode.

shred making four passes

Setting the Number of Overwrite Passes

We can ask shred to use more or fewer overwrite passes by using the -n (number) option. The number we provide is the number of overwriting passes shred performs instead of its default of three. Because we are also using the -z option, shred adds one final pass of zeroes on top of that. So, to get three passes in total, we request two passes:

shred -uvz -n 2 Preliminary_Notes.txt_02.txt

shred -uvz -n 2 Preliminary_Notes.txt_02.txt in a terminal window

As expected, shred makes three passes.

shred making three passes in a terminal window

Fewer passes (fewer shreddings, if you like) are obviously faster. But are they less secure? Interestingly, three passes is probably more than enough.

RELATED: You Only Need to Wipe a Disk Once to Securely Erase It

Shredding Multiple Files

Wildcards can be used with shred to select groups of files to be erased. The * represents multiple characters, and the ? represents a single character. This command would delete all of the remaining “Preliminary_Notes” files in the current working directory.

shred -uvz -n 2 Preliminary_Notes_*.*

shred -uvz -n 2 Preliminary_Notes_*.* in a terminal window

The remaining files are each processed by shred in turn.

output from shred in a terminal window

shred has no recursive option, so it cannot be used to erase directory trees of nested directories.
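A common workaround is to let find hand each file to shred and then remove the emptied directory tree afterwards. Here is a sketch of that approach, using a hypothetical directory called ~/old_notes; the journaling caveats discussed below still apply:

find ~/old_notes -type f -exec shred -uvz -n 2 {} \;
rm -r ~/old_notes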

The Trouble With Securely Deleting Files

As good as shred is, there’s an issue. Modern journaling filesystems such as ext3 and ext4 go to tremendous lengths to ensure they don’t break, become corrupt, or lose data. And with journaling filesystems, there’s no guarantee that the overwriting is actually taking place over the hard drive space used by the deleted file.

If all you’re after is some peace of mind that the files have been deleted a bit more thoroughly than rm would have done, then shred is probably fine. But don’t make the mistake of thinking that the data is definitely gone and is totally irrecoverable. That’s quite possibly not the case.

RELATED: Why You Can’t “Securely Delete” a File, and What to Do Instead

The secure-delete Suite

The secure-delete commands try to overcome the best efforts of journaling filesystems and to succeed in overwriting the file securely. But exactly the same caveats apply. There is still no guarantee that the overwriting is actually taking place over the region of the hard drive that held the file you want to obliterate. There’s more chance, but no guarantee.

The secure-delete commands use the following sequence of overwrites and actions:

  • 1 overwrite with 0xFF value bytes.
  • 5 overwrites with random data.
  • 27 overwrites with special values defined by Peter Gutmann.
  • 5 more overwrites with random data.
  • Rename the file to a random value.
  • Truncate the file.

If all of that seems excessive to you, you’re in good company. It also seems excessive to Peter Gutmann, a professor at the University of Auckland. He published a paper in 1996 discussing these techniques, from which arose the urban myth that you need to use all of the techniques discussed in that paper at once.

Peter Gutmann has since tried to get the genie back in the bottle, saying “A good scrubbing with random data will do about as well as can be expected.”

But we are where we are, and this is the array of techniques employed by the secure-delete commands: 38 overwriting passes in total. But first, we need to install them.

Installing secure-delete

Use apt-get to install this package onto your system if you’re using Ubuntu or another Debian-based distribution. On other Linux distributions, use your Linux distribution’s package management tool instead.

sudo apt-get install secure-delete

sudo apt-get install secure-delete in a terminal window

There are four commands included in the secure-delete bundle.

  1. srm is a secure rm, used to erase files by deleting them and overwriting their hard drive space.
  2. sfill is a tool to overwrite all free space on your hard drive.
  3. sswap is used to overwrite and cleanse your swap space.
  4. sdmem is used to cleanse your RAM.
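Once the package is installed, you can confirm that all four commands are on your path:

which srm sfill sswap sdmem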

The srm Command

You use the srm command much as you would use the rm command. To remove a single file, use the following command. The -z (zeroes) option causes srm to use zeroes for the final wipe instead of random data. The -v (verbose) option makes srm inform us of its progress.

srm -vz Chapter_One_01.txt

srm -vz Chapter_One_01.txt in a terminal window

The first thing you’ll notice is that srm is slow. It does provide some visual feedback as it is working, but it is a relief when you see the command prompt again.

output from srm in a terminal window

You can use the -l (lessen security) option to reduce the number of passes to two, which speeds things up dramatically.

srm -lvz Chapter_One_02.txt

srm -lvz Chapter_One_02.txt in a terminal window

srm informs us that this—in its opinion—is less secure, but it still deletes and overwrites the file for us.

Output from srm in a terminal window

You can use the -l (lessen security) option twice, to reduce the number of passes down to one.

srm -llvz Chapter_One_03.txt

srm -llvz Chapter_One_03.txt in a terminal window

Using srm with Multiple Files

We can also use wildcards with srm. This command will erase and wipe the remaining parts of chapter one:

srm -vz Chapter_One_0?.txt

srm -vz Chapter_One_0?.txt in a terminal window

The files are processed by srm in turn.

srm wiping multiple files in a terminal window

Deleting Directories and Their Contents With srm

The -r (recursive) option will make srm delete all subdirectories and their contents. You can pass srm the path to the top-level directory you want to remove.

In this example, we’re deleting everything in the current directory, ~/research. This means all of the files in ~/research and all of the subdirectories are securely removed.

srm -vzr *

srm -vzr * in a terminal window

srm starts processing the directories and files.

srm starting to process in a terminal window

It eventually returns you to the command prompt. On the test machine that this article was researched on, this took around one hour to remove about 200 files distributed between the current directory and three nested directories.

srm complete in a terminal window

All of the files and subdirectories were removed as expected.

The sfill Command

What if you are concerned about a file that you have already deleted using rm? How can you go over that old ground and make sure it is overwritten? The sfill command will overwrite all of the free space on your hard drive.

As it does this, you will notice that you have less and less free space on your hard drive, right up to the point where there is no free space at all. When sfill completes, it releases all of the free space back to you. If you are administering a multi-user system, this would be very disruptive, so it is a maintenance task that should be conducted out of hours.

Even on a single-user computer, the loss of hard drive space means the machine becomes effectively unusable once sfill has consumed most of the space. This is something you would start and then walk away from.
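If you want to watch this happen, you can keep an eye on the free space from a second terminal window while sfill runs. A simple way, assuming you’ll be filling the /home filesystem as in the example below, is to have watch re-run df every few seconds:

watch -n 5 df -h /home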

To try to speed things up a bit, you can use the -l (lessen security) option. The other options are the -v (verbose) and -z (zeroes) options we have seen previously. Here, we are asking sfill to securely overwrite all of the free space in the /home directory.

sudo sfill -lvz /home

sudo sfill -lvz /home in a terminal window

Make yourself comfortable. On the test computer, which only has a 10 GB hard drive, this was started mid-afternoon, and it completed sometime overnight.

sfill output in a terminal window

It’ll churn away for hours. And this is with the -l (lessen security) option. But, eventually, you’ll be returned to the command prompt.

The sswap Command

The sswap command overwrites the storage in your swap partition. The first thing we need to do is identify your swap partition. We can do this with the blkid command, which lists block devices.

sudo blkid

sudo blkid in a terminal window

You need to locate the word “swap”, and make a note of the block device it is attached to.

output of blkid in a terminal window

We can see the swap partition is connected to /dev/sda5.
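On reasonably recent systems, swapon itself can also list the active swap areas, which saves picking through the blkid output:

swapon --show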

We need to turn off disk writes to the swap partition for the duration of the overwriting. We will use the swapoff command:

sudo swapoff /dev/sda5

sudo swapoff /dev/sda5 in a terminal window

We can now use the sswap command.

We will use /dev/sda5 as part of the command line for the sswap command. We’ll also use the -v (verbose) and -ll (lessen security) options, which we used earlier.

sudo sswap -llv /dev/sda5

sudo sswap -llv /dev/sda5 in a terminal window

sswap starts working its way through your swap partition, overwriting everything that is in it. It doesn’t take as long as sfill. It just feels like it.

Once it has completed, we need to reinstate the swap partition as an active swap space. We do this with the swapon command:

sudo swapon /dev/sda5

sudo swapon /dev/sda5 in a terminal window

The sdmem Command

The secure-delete package even contains a tool to wipe the Random Access Memory (RAM) chips in your computer.

A cold boot attack requires physical access to your computer very shortly after it is turned off. This type of attack can, potentially, allow the retrieval of data from your RAM chips.

If you think you need to protect yourself against this type of attack—and it would be a stretch for most people to think they needed to—you can wipe your RAM before you switch off your computer. We’ll use the -v (verbose) and -ll (lessen security) options once more.

sudo sdmem -vll

sudo sdmem -vll in a terminal window

The terminal window will fill up with asterisks as an indication that sdmem is working its way through your RAM.

output from sdmem in a terminal window

The Easy Option: Just Encrypt Your Drive

Instead of securely deleting files, why not secure your hard drive or your home folder using encryption?

If you do that, no one can access anything, whether it is a live file or a deleted file. And you don’t have to be on your guard and remember to securely erase sensitive files because all of your files are already protected.

Most Linux distributions ask whether you want to use encryption at install time. Saying “yes” will save a lot of future aggravation. You may not deal with secret or sensitive information. But if you think you may give or sell the computer to someone else when you are finished with it, encryption will simplify that too.
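If you’re not sure whether your existing installation is already encrypted, lsblk will show any encryption layer in the storage stack. A device of TYPE “crypt” sitting beneath your root or home filesystem means that data is encrypted at rest:

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT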
