Realtime event backup to cloud storage

From ZoneMinder Wiki
Latest revision as of 15:27, 22 February 2026

It is possible to use the lsyncd daemon, which leverages the inotify kernel subsystem (https://en.wikipedia.org/wiki/Inotify) to run triggers based on events in a directory (create, modify, delete). The triggers can be any arbitrary command, but the default use case is a filesystem backup / clone using the rsync command.

You may also want to consider other options: inotify has command-line tools, and there is also incrond, which uses cron syntax to run commands based on inotify events.
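For comparison, the inotify-tools approach mentioned above can be sketched as a small watcher script. This is only a hedged illustration: it assumes the inotify-tools package is installed, and the source/destination paths are made-up examples. The script is written to a file here rather than executed, so you can review it first; lsyncd (described below) handles batching and retries that this naive loop does not.

```shell
# Write out a minimal inotifywait-based copy loop for review.
# Requires the inotify-tools package to actually run; paths are examples.
cat > /tmp/zm-watch.sh <<'EOF'
#!/bin/bash
SRC=/var/lib/zoneminder/events
DST=/media/backup
# close_write fires only after a file is fully written, so
# partially recorded events are never copied.
inotifywait -m -r -e close_write --format '%w%f' "$SRC" |
while read -r file; do
    rel=${file#"$SRC"/}
    mkdir -p "$DST/$(dirname "$rel")"
    cp -p "$file" "$DST/$rel"
done
EOF
chmod +x /tmp/zm-watch.sh
bash -n /tmp/zm-watch.sh && echo "syntax OK"
```

This one-file-at-a-time loop is fine for light event volumes, but every saved event forks a cp, which is exactly the inefficiency lsyncd's delayed rsync batching avoids.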

About

After recently migrating from another popular NVR package on Windows to Zoneminder on Linux, I started looking for a way to make near-real-time backups of my events to offsite storage, simply to have a copy should the local ZM server be compromised by break-in, fire, etc. Software such as Dropbox and Google Drive makes this relatively easy on Windows, but the setup on Linux takes a bit more effort.

I wanted to keep it as simple as possible, and I don't need my backup solution to interface with the ZM database. After experimenting with several other options, I settled on an application called lsyncd. It runs as a service and leverages inotify and rsync to watch one or more directories and mirror only new changes to another storage location. Lsyncd should be available as a package for most Linux distributions. I used the installation instructions here for my Rocky Linux system;

https://docs.rockylinux.org/10/guides/backup/mirroring_lsyncd/

Note the version of lsyncd included in EPEL for Rocky 8 (2.2.2-9) has a noticeable memory leak. Building from source solved this issue.

At this point it is worthwhile to at least skim through the docs.

https://lsyncd.github.io/lsyncd/manual/config/file/

https://linux.die.net/man/1/rsync

Both lsyncd and rsync are highly configurable. For many people the use case will be very simple, like my own, which I will describe here: watch my Zoneminder storage location and copy new files to a remote location. Much more complex configurations are certainly possible. In my case I'll be writing to an Amazon S3 bucket which I have already set up using S3FS-FUSE according to the documentation here;

https://zoneminder.readthedocs.io/en/latest/userguide/options/options_storage.html

You can also use an Rclone mount point as a destination. I've tested this with Google Drive but it should work with any cloud provider supported by Rclone;

https://rclone.org/#providers

Follow the instructions here to set up your cloud storage as a local mountpoint;

https://rclone.org/commands/rclone_mount/

Setup

Once lsyncd is installed you will need to create a configuration file in /etc/lsyncd.conf. I've included mine as an example below. Note this file uses Lua syntax, not Bash. Be VERY CAREFUL if you are testing remote write/sync/copy operations to a destination that already has important data on it (see links below). I highly recommend initial testing with an empty bucket or target filesystem. You have been warned.

settings {
   logfile = "/var/log/lsyncd/lsyncd.log",
   statusFile = "/var/log/lsyncd/lsyncd-status.log",
   statusInterval = 10,
   maxProcesses = 1,
   insist = true
}

sync {
   default.rsync,
   source = "/home/Cameras",
   target = "/media/aws",
   delete = false,
   delay = 10,
   init = false,
   exclude = { '*.jpg' },
   rsync = {
      archive = true,
      ignore_times = true,
      inplace = true,
      whole_file = true,
      prune_empty_dirs = true
   }
}

logfile = is pretty self-explanatory. Be sure to set up log rotation once you are done setting lsyncd up, if required. By default the packaged lsyncd service also logs to the systemd journal, which in turn echoes to /var/log/messages. Since the application logs are sufficient, I've added 'StandardOutput=null' to the lsyncd service's unit file to prevent this duplication, but I've left StandardError at the default so errors still reach the journal.
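For the log rotation mentioned above, a minimal logrotate drop-in is usually enough. This is only a sketch assuming the log paths used in this article; adjust the frequency and retention to taste (copytruncate is used so the running daemon does not need to be signaled, at the cost of possibly losing a few lines at rotation time):

```
# /etc/logrotate.d/lsyncd (example, paths match the config above)
/var/log/lsyncd/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```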

maxProcesses = 1 is the default. If you are syncing multiple sources or targets you may want to increase this.

insist = true allows the service to start even if the target is not ready. According to the docs "In production mode it is recommended to have insist on."

source = The directory where ZM events are stored.

target = The location to copy events to. This will be the mount point for the cloud storage configured previously.

targetdir = "/somedirectory" (not shown, part of the sync block) sets a subdirectory to use on the target. I'm just using the root directory of a dedicated bucket, so I omit it. If your bucket or mount point holds anything else, you should specify a subdirectory here.

delay = This is how often rsync will run when there are filesystem changes. The default is 15 seconds. For many users this will be close enough to real time for the purpose. It can be combined with (or substituted for) another value in the settings block;

maxDelay = # (not shown, part of the settings block) will queue this number of file changes before calling rsync. Between delay and/or maxDelay it is possible to tune the timing of your file copies to your exact preference. inotify waits for close_write to add files to the transfer list, so there is no issue with rsync trying to upload incomplete files. I typically record only short 10-second clips; if your cameras record continuously, be aware that files still being written will not be copied until they are closed. It may be possible to remove this limitation with the inotifyMode directive, though I have not tested this.

exclude = { '*.jpg' } The .jpg snapshot files are generally used by the Zoneminder GUI and are not really needed as standalone files in my case (though some people save .jpgs on purpose, in which case you will want to remove this line). I'm only interested in backups of the videos, so excluding these significantly reduces the number of uploaded files. You can break them out by type ('*snapshot*.jpg', 'alarm.jpg', '*capture*.jpg', etc.) or just exclude them all as shown. Features like zmNinja's 24 Hour Review may create hundreds of snapshots all at once, which lsyncd would immediately sync; these filters avoid that. The log snippet below was captured without this filter.
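As a quick local illustration of what the exclude pattern does, here is plain rsync run against throwaway directories (the layout loosely mimics ZM's monitor/date/event tree; all paths are disposable examples):

```shell
# Demonstrate the rsync exclude filter that lsyncd passes through.
rm -rf /tmp/zm-ex-src /tmp/zm-ex-dst
mkdir -p /tmp/zm-ex-src/1/2025-12-11/100 /tmp/zm-ex-dst
touch /tmp/zm-ex-src/1/2025-12-11/100/100-video.mp4 \
      /tmp/zm-ex-src/1/2025-12-11/100/snapshot.jpg \
      /tmp/zm-ex-src/1/2025-12-11/100/alarm.jpg
rsync -a --exclude='*.jpg' /tmp/zm-ex-src/ /tmp/zm-ex-dst/
find /tmp/zm-ex-dst -type f    # only the .mp4 survives the filter
```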

archive = true This preserves ownership and permissions on the copied files.

ignore_times = true This sets rsync's --ignore-times option. For the purpose it is being used here, it is safe to assume you will never already have a newer copy of the files you are uploading in your destination. Removing this check reduces API calls and is a noticeable optimization for AWS. If your cloud provider does not meter traffic you probably won't notice any difference either way.

inplace = true and whole_file = true set the corresponding rsync options and are a further optimization for S3 API efficiency. They make rsync's behavior less "sync" and more straight "copy", again because we can safely assume only newly created files will ever be transferred. Used together, these options make this method of backing up events very close in total traffic cost to writing natively to S3 straight from Zoneminder. Once again, if your provider does not charge for API calls this will not matter to you.
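Under the hood the rsync table in the config translates into ordinary command-line flags. Roughly equivalent invocation shown below, run against throwaway local directories since a real S3 mount is not assumed here:

```shell
# Approximate CLI equivalent of the Lua rsync table above.
rm -rf /tmp/zm-fl-src /tmp/zm-fl-dst
mkdir -p /tmp/zm-fl-src/2/2025-12-11/200 /tmp/zm-fl-src/9/empty-event /tmp/zm-fl-dst
echo video > /tmp/zm-fl-src/2/2025-12-11/200/200-video.mp4
rsync --archive --ignore-times --inplace --whole-file --prune-empty-dirs \
      /tmp/zm-fl-src/ /tmp/zm-fl-dst/
cat /tmp/zm-fl-dst/2/2025-12-11/200/200-video.mp4    # -> video
ls /tmp/zm-fl-dst    # note /9 (an empty event chain) was pruned
```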

prune_empty_dirs = true Because the directories are watched by inotify, rsync would otherwise create empty remote directories even for events Zoneminder deletes locally. If you are not syncing deletes you will want this flag; see delete below.


Two other very important variables in this file are;

delete = false is also pretty self-explanatory: file deletes will not be synced from source to target. In the case of my S3 bucket, I have lifecycle rules that handle this. If you prefer your remote storage location to synchronize the deletions made by Zoneminder, you can set this to true. That also means that if you accidentally delete events from your local storage, they will be deleted from the cloud as well. Use with caution.

init = false causes lsyncd to skip the complete initial synchronization between source and target at service startup. On S3 this is expensive in terms of API calls and may take a while. I know I'm going to miss some events during system maintenance, OS updates, etc. anyway, so I don't consider it worth the cost, but you can set it to true if you want. Be warned that it will make your target an exact mirror of your source.

Having delete and init set to false should also make this config fairly safe to use against an existing target if you just run it on your machine without heeding my previous warning. Don't change or remove these unless you are absolutely sure you know what you are doing. There are also options besides true and false, so read the docs for more info. It is far too much to cover here, but the simple config above is all I need for my purposes.
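Before pointing anything at a target with real data on it, you can also preview what rsync would do with its dry-run flag; nothing is transferred or deleted. A throwaway local example (the --delete flag here is deliberately included to show what the dry run protects you from):

```shell
# Preview rsync's actions without performing them (-n / --dry-run).
rm -rf /tmp/zm-dry-src /tmp/zm-dry-dst
mkdir -p /tmp/zm-dry-src /tmp/zm-dry-dst
echo test > /tmp/zm-dry-src/event.mp4
echo keep > /tmp/zm-dry-dst/existing.txt
# With --delete this WOULD remove existing.txt; the dry run only reports it
rsync -avn --delete /tmp/zm-dry-src/ /tmp/zm-dry-dst/
ls /tmp/zm-dry-dst    # still only existing.txt: nothing copied or deleted
```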

Once the service is configured you can control it with systemctl like any other systemd service. By default it will be disabled so you will need to enable it to run automatically after suitable testing. You can check the logs for errors with journalctl (see note about logging above). If syncs are working correctly you can watch them as uploads are logged very shortly after the events are saved to local disk;

[root@nvr system]# tail -50 /var/log/lsyncd/lsyncd.log
/1/2025-12-11/8707/8707-video.mp4
/1/2025-12-11/8707/snapshot.jpg
/1/2025-12-11/8707/alarm.jpg
/6/2025-12-11/8708/
/6/2025-12-11/
/6/
/6/2025-12-11/8708/8708-video.mp4
/6/2025-12-11/8708/snapshot.jpg
/6/2025-12-11/8708/alarm.jpg
/8/2025-12-11/8706/8706-video.mp4
/8/2025-12-11/8706/
/8/2025-12-11/
/8/
Thu Dec 11 14:30:25 2025 Normal: Finished a list after exitcode: 0
Thu Dec 11 14:31:50 2025 Normal: Calling rsync with filter-list of new/modified files/dirs
/2/2025-12-11/8709/
/2/2025-12-11/
/2/
/
/2/2025-12-11/8709/8709-video.mp4
/2/2025-12-11/8709/snapshot.jpg
/2/2025-12-11/8709/alarm.jpg
Thu Dec 11 14:32:00 2025 Normal: Finished a list after exitcode: 0
Thu Dec 11 14:32:04 2025 Normal: Calling rsync with filter-list of new/modified files/dirs
/2/2025-12-11/8710/
/2/2025-12-11/
/2/
/
/2/2025-12-11/8710/8710-video.mp4
/2/2025-12-11/8710/snapshot.jpg
/2/2025-12-11/8710/alarm.jpg
Thu Dec 11 14:32:11 2025 Normal: Finished a list after exitcode: 0
Thu Dec 11 14:32:45 2025 Normal: Calling rsync with filter-list of new/modified files/dirs
/2/2025-12-11/8711/
/2/2025-12-11/
/2/
/
/2/2025-12-11/8711/8711-video.mp4
/2/2025-12-11/8711/snapshot.jpg
/2/2025-12-11/8711/alarm.jpg
Thu Dec 11 14:32:51 2025 Normal: Finished a list after exitcode: 0
Thu Dec 11 14:37:36 2025 Normal: Calling rsync with filter-list of new/modified files/dirs
/3/2025-12-11/8712/
/3/2025-12-11/
/3/
/
/3/2025-12-11/8712/8712-video.mp4
/3/2025-12-11/8712/snapshot.jpg
/3/2025-12-11/8712/alarm.jpg
Thu Dec 11 14:37:41 2025 Normal: Finished a list after exitcode: 0
[root@nvr system]# 

Note: If you receive errors similar to "maximum number of watches reached" in the logs you may need to increase the default limits on your system for /proc/sys/fs/inotify/max_user_instances and /proc/sys/fs/inotify/max_user_watches. See here for more details;

https://support.scc.suse.com/s/kb/360054835111?language=en_US
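You can check the current limits directly from /proc, and raise them persistently with a standard sysctl drop-in. The value below is an arbitrary example, not a recommendation; size it to the number of directories lsyncd must watch:

```shell
# Check the current inotify limits on this machine
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches
# To raise them persistently (as root), drop a sysctl fragment, e.g.:
#   echo 'fs.inotify.max_user_watches=524288' | sudo tee /etc/sysctl.d/90-inotify.conf
#   sudo sysctl --system
```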

I'm happy to again have offsite copies of all my events now. While they are not tracked in the Zoneminder database, they are still saved in folders by monitor id and date for easy retrieval should the ZM database become unavailable. Props to the ZM devs for their great software, and I hope this writeup proves useful to some other users in the future.

See Also

* https://github.com/lsyncd/lsyncd/issues/668 - Beware that lsyncd runs rsync with deletes on by default. With some casual testing, you might inadvertently delete your home directory or root filesystem. From the GitHub issue: "I was told many times delete by default was a bad idea and was a result of the usual use case in the sense of replicating the target exactly like the source. I suppose the people telling so have a point."
* https://docs.rockylinux.org/10/books/learning_rsync/06_rsync_inotify/
* https://linuxvox.com/blog/what-is-the-proper-way-to-use-inotify/ - Using the inotify-tools command-line interface
* https://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/ - Probably easier than lsyncd. (Not available in default Debian Bullseye; backports are needed, see https://qa.debian.org/madison.php?package=incron&table=archived&a=&c=&s=# but it is in most other Debian releases.)
* https://wiki.alpinelinux.org/wiki/Inotifyd
* https://wiki.archlinux.org/title/Incron