From: Angus C. <ac...@sa...> - 2006-05-05 14:55:23
Jeff Vian wrote:
> On Wed, 2006-05-03 at 11:34 +1000, david wrote:
>> On Tue, 2006-05-02 at 09:37 -0500, ATM Logic wrote:
>>> Yes, I agree totally on the backup not being found... At least I had
>>> one from a month and a half ago... Now that I have shown the wife how
>>> to back up our accounting system, she has backed it up twice in the
>>> last hour :)
>>
>> This sounds like a good time for a backup discussion. There is no way
>> that I can trust myself to run backups manually, so I have this
>> running as a daily cron job:
>>
>>   pg_dump -U sql-ledger -f /backup/postgresql/`date +%y%m%d%H%M`.SL SL
>>   rdiff-backup /backup/ me@remote::/backup/ 1> /dev/null 2>> /backup/log
>>   echo "`date +%y%m%d%H%M` backup complete" >> /backup/log
>>
> Just a nitpicking comment here on the syntax.
> You are calling date twice, once before starting the backup and once
> after it completes. The two calls will most likely return different
> values, at least in the minutes field (or, depending on the time of
> day, even the hours).
>
> I would recommend you make a single call to date and assign the value
> to a variable, then use that variable both to name the file and to
> write the log message. Something like TDATE=`date +%y%m%d%H%M` prior
> to the call to pg_dump, then ${TDATE} in place of your date calls in
> both places above. This avoids possible inconsistencies between the
> file name and what the echo command reports.
>
>> That should give me two daily copies of the database, one local and
>> one remote. Naturally, I never test my backup system. That would be
>> much too sensible, and by test, I mean restore and compare with what
>> it really should be. I do know that this produces lots of files that
>> LOOK ok! So far I've not had any failures. If the above doesn't work,
>> I'm in trouble.
>>
>> What do others do? My system is not "live" - I enter data after the
>> event, so once per day is reasonably sane. What would you do if you
>> were depending on live data being entered from lots of locations, and
>> the loss of an hour's entries could be catastrophic?

Another approach is to back it up to a source code repository. That way,
your dump file can be checked in and you can have copies going back to
the beginning of time. You can check a copy out for testing. You can
bring up a new revision of the software on another machine, test the new
revision on the old dataset, and know that your main setup is still ok,
etc, etc, etc. It's a handy way to deal with it, and it doesn't take much
more space than a raw dump file.

Angus Carr.
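
[For reference, here is a sketch of david's script with Jeff's
single-date-call fix applied. Paths, user, and remote host are taken
from the original post; untested.]

  #!/bin/sh
  # Capture the timestamp once, so the dump file name and the log
  # entry are guaranteed to agree.
  TDATE=`date +%y%m%d%H%M`

  pg_dump -U sql-ledger -f /backup/postgresql/${TDATE}.SL SL
  rdiff-backup /backup/ me@remote::/backup/ 1> /dev/null 2>> /backup/log
  echo "${TDATE} backup complete" >> /backup/log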
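[On david's point about never actually testing a restore: one rough way
to do it, assuming you can afford a scratch database on the same server,
is to load the newest dump into a throwaway database, dump that, and
compare. The database name SLtest is made up for illustration.]

  #!/bin/sh
  # Restore the newest dump into a scratch database, re-dump it, and
  # diff the two. Not a perfect test, but it catches a truncated or
  # unloadable dump file.
  LATEST=`ls -t /backup/postgresql/*.SL | head -1`

  dropdb -U sql-ledger SLtest 2>/dev/null
  createdb -U sql-ledger SLtest
  psql -U sql-ledger -f "$LATEST" SLtest

  pg_dump -U sql-ledger SLtest > /tmp/SLtest.out
  # A clean diff is a strong sign; minor harmless differences are
  # possible if pg_dump versions or options differ between runs.
  diff "$LATEST" /tmp/SLtest.out && echo "restore test passed"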
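[A minimal sketch of the repository idea using Subversion; any version
control system would do. The working-copy path /backup/sl-repo is
assumed to be already checked out, and the commit message is made up.
The trick is to dump to a fixed file name, so each run becomes a new
revision of the same file rather than a new file.]

  #!/bin/sh
  WC=/backup/sl-repo               # assumed: existing svn working copy
  pg_dump -U sql-ledger -f ${WC}/SL.sql SL
  cd ${WC}
  svn add SL.sql >/dev/null 2>&1 || true   # only needed on the first run
  svn commit -m "nightly SL dump `date +%y%m%d%H%M`"

[Old copies then come back with svn cat -r REV SL.sql, and the
repository stores deltas, so it doesn't take much more space than a raw
dump file.]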