Thanassis Tsiodras wrote in about a utility that adds error correction redundancy to your backup data:
The way storage quality has been nose-diving in recent years, you’ll inevitably end up losing data to bad sectors. Backups, RAID, and version control repositories are some of the methods used to cope; here’s another that can help prevent data loss from bad sectors. It is a software-only method, and it has saved me from a lot of grief.
The technique uses Reed-Solomon coding to add parity bytes to your data. If the storage media suffers partial damage, the shielded files can still be recovered.
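To get a feel for what the parity bytes buy you, here is a minimal sketch using the third-party Python library reedsolo (not the tool described here). The 32 parity bytes per block are an inference from the parameters quoted below: 255-byte blocks with 16 correctable errors implies 2 × 16 = 32 bytes of parity.

    # Minimal Reed-Solomon demo with the third-party "reedsolo" library
    # (not the author's tool).  RSCodec(32) appends 32 parity bytes per
    # block, enough to correct up to 16 corrupted bytes.
    from reedsolo import RSCodec

    rsc = RSCodec(32)
    message = b"some precious backup data"
    shielded = rsc.encode(message)          # len(message) + 32 bytes

    # Corrupt 16 bytes: the worst case the code can still repair.
    damaged = bytearray(shielded)
    for i in range(16):
        damaged[i] ^= 0xFF

    # Recent reedsolo versions return (message, message+ecc, errata positions).
    recovered, _, _ = rsc.decode(bytes(damaged))
    assert bytes(recovered) == message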
Storage media are of course block devices that work, or fail, on 512-byte sector boundaries (for hard disks and floppies, at least – on CDs and DVDs the sector size is 2048 bytes). This is why the shielded stream must be interleaved every N bytes: the encoded bytes are placed in the shielded file at offsets 1, N+1, 2N+1, …, then 2, N+2, 2N+2, and so on. In this way, 512 shielded blocks pass through each 512-byte sector, and if a sector becomes defective, only one byte is lost in each of the 255-byte shielded blocks that pass through it.

The algorithm can correct 16 of those errors per block, so data will only be lost if sector i, sector i+N, sector i+2N, … all the way up to sector i+16N are lost – that is, more than 16 damaged bytes must land in the same block. Taking into account the fact that sector errors are local events (in terms of storage space), chances are quite high that the file will be completely recovered, even if a large number of sectors (in this implementation: up to 127 consecutive ones) are lost.
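The interleaving step is easy to model. Below is a small, self-contained Python sketch (not the actual rsbep code): the 255-byte block size matches the description above, while the interleave distance N = 4080 is an illustrative assumption. Corrupting a run of 512 consecutive bytes, i.e. one dead sector, touches each block at most once.

    # Model of the interleaving step (not the author's implementation).
    # BLOCK matches the 255-byte blocks described above; the interleave
    # distance N = 4080 is an illustrative assumption.
    BLOCK = 255
    N = 4080

    def interleave(blocks):
        # Byte j of block k lands at offset j*N + k, so consecutive bytes
        # of one block end up N bytes apart in the output stream.
        out = bytearray(BLOCK * N)
        for k, block in enumerate(blocks):
            for j, byte in enumerate(block):
                out[j * N + k] = byte
        return out

    def deinterleave(stream):
        return [bytes(stream[j * N + k] for j in range(BLOCK))
                for k in range(N)]

    blocks = [bytes([k % 256]) * BLOCK for k in range(N)]
    stream = interleave(blocks)
    stream[10000:10512] = b"\x00" * 512     # simulate one dead 512-byte sector
    worst = max(sum(a != b for a, b in zip(d, o))
                for d, o in zip(deinterleave(stream), blocks))
    print(worst)    # 1: each block loses at most one byte, far below the
                    # 16 errors per block the Reed-Solomon code can repair

As a consistency check, under the assumed N = 4080 a run of up to 16 * N = 65280 consecutive bytes (127 full 512-byte sectors) still leaves every block within the 16-error correction budget, which lines up with the figure quoted above.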
The application works like any other command-line archiving utility: tar your files as normal and pipe the result through the freeze.sh script. Running melt.sh on the shielded archive returns your original data, even if the file has suffered a reasonable amount of corruption. Thanks, Thanassis!
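The round trip would look something like the following. The exact invocation is an assumption based on the description above (the scripts are assumed to filter standard input to standard output), so check the rsbep documentation for the real interface.

    # Hypothetical invocation; freeze.sh and melt.sh are assumed here to
    # read stdin and write stdout, as the description suggests.
    tar cf - important-files/ | freeze.sh > backup.shielded
    melt.sh < backup.shielded | tar xf -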