During a backup job, the utility opens each file, applies the configured filters, and reads the content in chunks (1 megabyte by default), checking whether the hash of each data slice is already in the catalog. This ensures that only new data is added to the archive: slices that have already been seen are not stored again, which saves space and provides data deduplication.
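A minimal sketch of this slice-level deduplication step, assuming SHA-256 digests and an in-memory set of known hashes (the utility's actual hash algorithm and catalog representation may differ):

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # assumed 1 MiB default slice size

def new_slices(path, known_hashes):
    """Yield (digest, data) only for slices not already present in the catalog."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in known_hashes:
                known_hashes.add(digest)
                yield digest, chunk
```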
The backup job generates two files: a ZIP archive containing the new data slices and a catalog summarizing all snapshots, the files they contain, their properties (filename, timestamp, size), and the referenced data slices. The configuration and a log file are also saved to the same location.
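The catalog can be pictured roughly as follows; this is an illustrative layout only, and the field names are assumptions rather than the utility's real schema:

```python
# Illustrative catalog layout (field names are assumptions, not the real schema).
catalog = {
    "snapshots": [
        {
            "id": "2024-01-15T02:00:00",
            "files": [
                {
                    "filename": "docs/report.txt",
                    "timestamp": 1705284000,
                    "size": 2621440,
                    # Hashes of the slices that make up the file, stored
                    # in the associated ZIP archives.
                    "slices": ["ab12", "cd34", "ef56"],
                },
            ],
        },
    ],
}
```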
Optionally, an external command can be run when the backup process is over.
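Such a post-backup hook could be invoked along these lines; the command path below is a placeholder, not part of the utility:

```python
import subprocess

# Hypothetical post-backup hook: run the user-configured command once the
# archive, catalog, and log have been written.
post_backup_command = ["/usr/local/bin/notify-backup-done.sh"]  # placeholder
subprocess.run(post_backup_command, check=True)
```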
After the first run, incremental backups record in the catalog only new or updated files, and store only the portions of data that have changed.
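One way to detect which files need to be re-read is to compare their current metadata against the previous catalog entry; this is a sketch under that assumption, not the utility's actual change-detection logic:

```python
import os

def file_changed(path, previous_entry):
    """True if the file is new or differs from its last catalog entry (a sketch).

    `previous_entry` is a hypothetical dict holding the timestamp and size
    recorded during the previous run. Unchanged files are skipped entirely;
    changed files go through the slice-level deduplication shown earlier.
    """
    if previous_entry is None:
        return True
    st = os.stat(path)
    return st.st_mtime != previous_entry["timestamp"] or st.st_size != previous_entry["size"]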
On each backup run, or manually on demand, the archive is purged to remove expired files from the catalog and to delete from the ZIP archives any data slices no longer referenced by any file.
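Conceptually this is a mark-and-sweep pass over the slices; the sketch below assumes the illustrative catalog layout shown above and a set of hashes currently stored in the archives:

```python
def unreferenced_slices(catalog, archived_hashes):
    """Mark-and-sweep style purge (a sketch, not the utility's actual logic).

    After expired files have been dropped from the catalog, collect every
    slice hash still referenced by a remaining file; whatever else sits in
    the ZIP archives can be deleted.
    """
    referenced = {
        h
        for snapshot in catalog["snapshots"]
        for entry in snapshot["files"]
        for h in entry["slices"]
    }
    return archived_hashes - referenced  # hashes that are safe to delete
```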
Information about an archive can be queried: for each snapshot, the utility reports the number of files and the associated size in the backup folder. Details about the files and directories included in a single snapshot can be requested as well.
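Such a per-snapshot summary amounts to a simple aggregation over the catalog, roughly like this (again using the assumed field names):

```python
def snapshot_summary(catalog):
    """Print, for each snapshot, the number of files and their total size."""
    for snapshot in catalog["snapshots"]:
        files = snapshot["files"]
        total_size = sum(entry["size"] for entry in files)
        print(f"{snapshot['id']}: {len(files)} files, {total_size} bytes")
```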
During a restore job, a given snapshot is extracted from the archive. The data can be restored to the original location, to a different directory, or merged with an existing set of directories, as specified in the configuration file. As with a backup, each existing file is analyzed and only the data slices that need to be restored are extracted from the archive. The extracted data is then verified to ensure the job has been successful.
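Determining which slices actually need extraction can be done by hashing the existing file slice by slice and comparing against the catalog; this is an illustrative sketch under the same assumptions as the earlier examples:

```python
import hashlib
import os

CHUNK_SIZE = 1024 * 1024  # same assumed 1 MiB slice size as above

def slices_to_extract(entry, target_path):
    """Return the slice hashes that actually need extraction (illustrative only).

    If the target file already exists, its current content is hashed slice by
    slice; only slices whose hash differs from the catalog entry have to be
    pulled out of the ZIP archives. Field names follow the illustrative
    catalog layout shown earlier, not the utility's real schema.
    """
    existing = []
    if os.path.exists(target_path):
        with open(target_path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                existing.append(hashlib.sha256(chunk).hexdigest())
    return [h for i, h in enumerate(entry["slices"])
            if i >= len(existing) or existing[i] != h]
```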
For both backup and restore, the utility first runs an analysis and provides a summary, waiting for the user's input before proceeding.