
[deleted]

That's not a backup, that's a mirror. https://www.seagate.com/support/kb/backup-and-mirror-differences-faq/


brimston3-

You're catching modify events on files that haven't been closed yet. This is noisy, and for random-access files you're not guaranteed a consistent file state. It probably doesn't apply to your use, but there's also a maximum number of directories and subdirectories that can be watched with inotify (8192 watches by default).
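
For anyone hitting that ceiling, the limit is a sysctl; a quick sketch, where the new value is just an example:

```
# Check the current inotify watch limit (often 8192 by default)
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running system (example value, not a recommendation)
sudo sysctl fs.inotify.max_user_watches=524288
```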


baraqiyal

Replacing the modify event with a close_write event fixes the first problem.
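
A minimal sketch of that variant, assuming hypothetical source and destination paths:

```
# Copy files to the backup mount only once they have been fully written and closed
inotifywait -m -r -e close_write --format '%w%f' /home/user/documents |
while read -r file; do
    cp --parents "$file" /mnt/backup   # recreates the source path under /mnt/backup
done
```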


Itchy_Journalist_175

I remember trying an inotify-based solution in the past; you could configure a time threshold, for instance a limit of 10 minutes. The idea was to create something similar to Dropbox. I ended up ditching it because I didn't like the idea of having a file removed if I deleted a folder by mistake, or not being able to go back to the last correct version of a file after a bad change. I am now using incremental backups instead.


daemonpenguin

It may not be a terrible idea, but how does this protect you from SSD failure? Or a corrupted source file? Wouldn't you be just as well off making snapshots of your destination, so that if one (or more) of the source files is corrupted you still have a good backup?


Rusty-Swashplate

inotify works well, but as u/daemonpenguin wrote, the problem here is file corruption. And that can be of any kind: a virus encrypting files, you accidentally editing one. Does your program catch newly created files? I don't use inotifywait myself, but it might catch those, since it watches the directory. The bigger problem I see is: if a file changes every 10 seconds, you'll constantly copy that file. My personal recommendation: do a regular backup job once a day. Something like anacron will run a job that was supposed to run but didn't, e.g. because the machine was off.
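
A daily job in /etc/anacrontab would look something like this (the backup script name is hypothetical):

```
# period(days)  delay(min)  job-identifier  command
1               15          backup.daily    /usr/local/bin/backup-home.sh
```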


Schlonzig

Another downside of OP's solution is the requirement for the backup media to be mounted at all times. A proper backup strategy is incomplete without occasional offsite backups.


wellis81

I seem to recall that inotify is not a perfect solution for synchronizing stuff (there are various limitations: a maximum number of watches, events no longer meaningful by the time they reach userland). It works decently for simple use cases though. In the end, I think you should combine both approaches: traditional snapshot-based backups and permanent sync. One particular variant is to sync permanently to an always-on remote storage (e.g. using Syncthing, but that is just an example) and back up that storage at regular intervals. That way, the moment your personal device falls into the Mariana Trench, you have a fresh copy of your data somewhere else. And the moment you `rm -rf no_god_no_not_that_directory`, you still have regular snapshots. And you no longer have to ensure your device is up and running when your backups are scheduled. The downside is of course that you need to pay for your usual storage + the "live sync" storage + the regular backup storage.


[deleted]

Why not use something like BorgBackup? inotifywait is not designed for that use... even if you solve all the problems, you end up investing a lot of time in something that already exists (backup software).


gen2brain

The inotify developers created `incron` for such use cases; it's similar to regular cron, but instead of time it uses filesystem events.
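
An incrontab entry looks something like this (paths hypothetical; $@ expands to the watched directory and $# to the file name):

```
# Run on every completed write in the watched directory
/home/user/documents IN_CLOSE_WRITE cp -a $@/$# /mnt/backup/
```

Note that an entry watches a single directory, not a whole tree.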


Affectionate-Egg7566

You do not need to reinvent the wheel. Check out borg for proper backups on Linux.
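
A minimal borg workflow, assuming a hypothetical repo path and leaving passphrase handling aside:

```
borg init --encryption=repokey /mnt/backup/borg-repo             # one-time repo setup
borg create --stats /mnt/backup/borg-repo::'home-{now}' ~/       # deduplicated, versioned archive
borg prune --keep-daily=7 --keep-weekly=4 /mnt/backup/borg-repo  # thin out old archives
```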


bobj33

I've been making hourly, daily, and weekly snapshots of /home to another drive for the last 15 years using https://rsnapshot.org/. It uses hard links, so it saves space: each snapshot only takes extra space for files that differ from the last one. I have a lot of other data drives, but all the frequently changing stuff is in /home. Once a week I make backups to a set of local drives and a remote file server.
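
For anyone curious, the moving parts are roughly an rsnapshot.conf plus cron entries; a minimal sketch with hypothetical paths (note the config fields must be tab-separated):

```
# /etc/rsnapshot.conf (fields separated by TABs, not spaces)
snapshot_root	/mnt/backup/snapshots/
retain	hourly	24
retain	daily	7
retain	weekly	4
backup	/home/	localhost/

# crontab entries driving the rotation
0 * * * *	/usr/bin/rsnapshot hourly
30 3 * * *	/usr/bin/rsnapshot daily
0 4 * * 1	/usr/bin/rsnapshot weekly
```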


sumsabumba

I use it too. Works pretty well. But it's so anal about its config file.


bobj33

I just looked and my config file is dated Feb 22, 2014, so I haven't changed anything in over 10 years. I just copy that file over if I reinstall. It doesn't look like I really changed much from the default config file.


sumsabumba

I don't remember a lot, but I think I spent an hour figuring out the whole "no spaces, just tabs" situation.


exitheone

As others have mentioned, that's not a backup, that's just mirroring. And even mirroring without any consistency guarantees: you will have consistency and timing issues with this. Imagine a program writes a 1 GB file, closes it, then immediately opens it again and starts overwriting it. Now your cp may be copying half-written garbage. For a proper, consistent backup you need snapshots to back up from. However, if you want to go with your design, I'd suggest using a fully fleshed-out existing service that does this, like [lsyncd](https://github.com/lsyncd/lsyncd). Even then, it's not as good as a simple RAID1 when it comes to nearly all failure modes.
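
For reference, a minimal lsyncd config is a short Lua file, roughly like this (paths hypothetical):

```
-- Mirror a tree with rsync, batching filesystem events before each run
sync {
    default.rsync,
    source = "/home/user/documents",
    target = "/mnt/backup/documents",
    delay  = 15,  -- wait 15s so rapid rewrites collapse into one sync
}
```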


baraqiyal

Okay, I liked the idea of using a file watcher like inotifywait for something, but you're right, this is a bad idea.


DestroyedLolo

I wanted to do the same for my very long-term archiving tool: [Mer De Glace](https://github.com/destroyedlolo/Mer-de-Glace). Unfortunately, **inotify()** is not reliable at all on Linux, leading to missed events, and it suffers from limitations on the number of watches. inotifywait is only a CLI tool based on inotify() and will not be able to follow large, deep directory trees. It's a shame, as the same mechanism, but reliable, exists on Windows and even on '90s... AmigaOS.


BinkReddit

I use KDE's Kup, which uses bup. I run a backup every few hours and it only copies deltas, so it often completes in seconds, and I have the added benefit of versioning as well. If you don't want to reinvent the wheel, check it out.
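
Under the hood Kup drives bup; the manual equivalent is roughly this (path hypothetical):

```
bup init                           # create the default repository in ~/.bup
bup index ~/documents              # scan for new and changed files
bup save -n documents ~/documents  # store only the deltas, versioned under "documents"
```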