I have a file which is wobbly. It is a good file, but it wobbled, and the letters got in the wrong places.
It is a file I was reading at home, on my home PC, from my home hard drive (no cloud stuff), when I suddenly discovered it had a lot of NUL characters in it - 3062, to be precise. Ewan suggested in the storage meeting that this might have been caused by an XFS crash: the PC is on a UPS, but occasionally slightly experimental X clients lock it up, and on a few occasions "security" features I foolishly left running prevented me from logging in from my laptop and shutting it down cleanly.
Anyway, this file is part of a directory I synchronise to work, and sure enough, the file on my work PC was equally corrupted. And the corrupted file was backed up. Backups are not so helpful when the file you back up is already corrupted... (the timestamp said December 2001). The file had been migrated from another, older hard drive.
Now it so happens that this file was not precious - it was an RFC, so I could just fetch a fresh copy. Comparing them, there turned out to be more changes than just the NULs... (and the NULs confused some diff tools) - the file seemed to have changed mysteriously in about eight places; in one place, 749 bytes were missing.
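For what it's worth, quantifying the NULs is trivial; a minimal Python sketch of the kind of check I mean (the filename here is just a placeholder, not the actual RFC):

    # Rough check for silent corruption: count the NUL bytes in a file
    # and note where the first one appears. "rfc.txt" is a placeholder.
    with open("rfc.txt", "rb") as f:
        data = f.read()
    nuls = data.count(b"\x00")
    first = data.find(b"\x00")
    print(f"{nuls} NUL bytes, first at offset {first}")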
Back in the day, there were computer viruses which would alter files on your hard drive... I don't believe that's what happened here - I invoke Hanlon's razor, or a variant of it: "Never attribute to malice that which can be adequately explained by 'stuff happens'."
It may be worth investigating my backups from back then - I was still using writeable optical CDs for my home PC backups, and I still have them - whether I can read them is another question. But it seems I need to write a utility to background-checksum my files for me.
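Something along these lines would do - a minimal sketch, not the real thing (the manifest filename is made up), which records SHA-256 checksums on a first run and complains about files whose content has changed on later runs:

    import hashlib, json, os, sys

    MANIFEST = "checksums.json"   # hypothetical manifest file name

    def sha256(path):
        # Hash the file in 1 MB chunks so large files don't eat memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(root):
        # Walk the tree and checksum every regular file.
        sums = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                sums[p] = sha256(p)
        return sums

    def main(root):
        new = scan(root)
        if os.path.exists(MANIFEST):
            with open(MANIFEST) as f:
                old = json.load(f)
            # Report files whose checksum differs from last time.
            for path, digest in old.items():
                if path in new and new[path] != digest:
                    print("CHANGED:", path)
        with open(MANIFEST, "w") as f:
            json.dump(new, f, indent=1)

    if __name__ == "__main__":
        main(sys.argv[1])

Of course this flags deliberate edits too; a proper version would also record modification times, so that a file whose content changed while its mtime did not stands out as suspicious rather than merely edited.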
For my precious data at home that I don't copy to work machines (when it's not work related), I have a stronger system, based on unison and multiple mutually independent backups - but the question remains whether I would have noticed the one change that should not have propagated to the backups. Worth a test...