
#14 Copy/sync to archive storage

Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2010-04-07
Created: 2010-04-07
Creator: Anonymous
Private: No

Hi!

Maybe I missed it in the menus/manuals but it would be very helpful if I could just "archive" a folder structure to some other drive. My current workflow is like this:

- Import images via Mapivi to a local drive. This gives a structure like
/media/Photos/2010/unsorted/R001/event1
/media/Photos/2010/unsorted/R001/event2
/media/Photos/2010/unsorted/R002/event1
/media/Photos/2010/unsorted/R002/event2
/media/Photos/2010/unsorted/R002/event3
and so on. All folders contain "raw" (i.e. R) images, i.e. the original JPG (and sometimes RAW) files; each R-folder is approximately the size of a DVD.
- I do the cataloguing, IPTC tagging and so on in the unsorted/R* folders.
- Once catalogued, the unsorted/R* folders move up in the hierarchy to 2010/R*; now they might go to a DVD. This move already causes some issues in Mapivi, as it believes the files reside in the wrong directory and/or doesn't notice that there are new dirs. This can be solved by opening the directories by hand and then cleaning the database.
- Deriving images creates some new "D-folders" (D001, D002).

Now, all this involves quite some work, so a backup is imperative. For general backups I use network-connected RAID storage (those are quite cheap these days, more convenient for frequent backups than DVDs, and they make off-site storage easy). Usually I do this backup with rsync from the local drive to the NAS. rsync copies only the changed files, but it still operates on whole files.
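
(For reference, a minimal sketch of this rsync step in Python; the local root /media/Photos/ matches the layout above, while the NAS target nas:/backup/Photos/ is only a made-up placeholder.)

import subprocess

LOCAL_ROOT = "/media/Photos/"       # working tree from the layout above
NAS_TARGET = "nas:/backup/Photos/"  # hypothetical NAS target

def backup_with_rsync(src=LOCAL_ROOT, dst=NAS_TARGET):
    # rsync transfers only files whose size/mtime changed, but a JPG whose
    # embedded IPTC/XMP was touched counts as changed, so the whole
    # multi-megabyte file travels again.
    subprocess.run(["rsync", "-a", src, dst], check=True)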

And here an issue arises: though it is very sensible to store all metadata right in the files where it belongs, this results in a bunch of unnecessary copies, because each time a keyword is added the image file itself changes, so it is not a keyword that gets propagated but a huge image file. It would be more sensible and economical to propagate just the metadata, say by having exiftool dump an XMP sidecar and applying that in the target folder if the file already exists there. Only if there is still a difference afterwards is a copy of the file itself necessary.
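
(A rough sketch of what that metadata-first propagation could look like, shelling out to exiftool; the helper name and paths are invented, and the byte-for-byte comparison at the end is only one possible way to decide whether a full copy is still needed.)

import filecmp
import shutil
import subprocess
from pathlib import Path

def sync_one_image(local: Path, archive: Path) -> None:
    # No archive copy yet: nothing to merge into, copy the whole file.
    if not archive.exists():
        archive.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(local, archive)
        return

    # Dump the local file's XMP block into a temporary sidecar ...
    sidecar = archive.with_suffix(".xmp")
    with open(sidecar, "wb") as fh:
        subprocess.run(["exiftool", "-xmp", "-b", str(local)], stdout=fh, check=True)

    # ... and write those tags into the existing archive copy (only metadata moves).
    subprocess.run(["exiftool", "-tagsfromfile", str(sidecar), "-all:all",
                    "-overwrite_original", str(archive)], check=True)
    sidecar.unlink()

    # If the two files still differ, the image data itself changed: full copy.
    # (Byte identity is a strict test; comparing only the image payload would
    # be more forgiving of exiftool's rewrite of the metadata blocks.)
    if not filecmp.cmp(str(local), str(archive), shallow=False):
        shutil.copy2(local, archive)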

Besides, due to this copy mechanism there end up being two identical file structures with (hopefully ;) identical content. As far as I can see, there is currently no way to tell Mapivi that there is an easy way to "recover" an image file from such a backup, or that a search should be done on the local copy only, taking the archive into consideration only if no local copy exists (some sort of "dark archive").
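
(The lookup order could be as simple as the following sketch; the two roots are made-up examples, and Mapivi has no such setting today.)

from pathlib import Path

LOCAL_ROOT = Path("/media/Photos")      # working copy, searched first
ARCHIVE_ROOT = Path("/mnt/nas/Photos")  # hypothetical "dark archive" mount

def resolve_image(relative_path: str):
    # Prefer the local copy; fall back to the dark archive only when the
    # folder has been moved off the working drive.
    local = LOCAL_ROOT / relative_path
    if local.exists():
        return local
    archived = ARCHIVE_ROOT / relative_path
    return archived if archived.exists() else None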

Therefore the proposal would be (a rough sketch follows the list):
- allow a directory to be designated as the "dark archive"
- allow a "backup to archive" operation which uses
* "propagate metadata to dark archive" if the file already exists in the archive
* "propagate file" if the file does not exist in the archive, or if the archive file still differs after metadata propagation
- allow a "recover from archive" operation
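
(Building on the sync_one_image sketch above, the two operations could boil down to something like this; again, the roots and function names are only placeholders.)

import shutil
from pathlib import Path

LOCAL_ROOT = Path("/media/Photos")      # working tree
ARCHIVE_ROOT = Path("/mnt/nas/Photos")  # hypothetical dark archive

def backup_to_archive():
    # Walk the working tree and sync each image metadata-first,
    # using sync_one_image as sketched further up.
    for local in LOCAL_ROOT.rglob("*"):
        if local.is_file():
            rel = local.relative_to(LOCAL_ROOT)
            sync_one_image(local, ARCHIVE_ROOT / rel)

def recover_from_archive(relative_path: str) -> Path:
    # Copy a file back from the dark archive into the working tree.
    src = ARCHIVE_ROOT / relative_path
    dst = LOCAL_ROOT / relative_path
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return dst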

Note: there are workarounds for all of these issues right now. They are probably too specific, and perhaps I should write a plugin instead. On the other hand, I think these features could come in very handy for many users, especially given the current price tags of TB-range RAID arrays.

This may sound like a longish list of missing features, but Mapivi is just great! I really love it :) Keep up the good work!

:)

Discussion

