From: Bosschaert, M. <m.b...@An...> - 2003-09-30 14:29:28
Hi again,

I keep having trouble making differential backups. I use flexbackup 1.2.0 on a Dell PowerEdge 1400 with a PV100T Travan TR5 tape drive and Debian 3.0r1. When I run 'flexbackup -newtape', followed by e.g. 'flexbackup -set data', I get a fully extractable backup on tape. All functions work fine. Now I change some data and do a 'flexbackup -set data -differential'. The backup is made without problems, but I cannot extract it, and 'flexbackup -list -num 3' gives:

cardiores2:/home/rdcard# flexbackup -list -num 3
flexbackup version 1.2.0 (http://flexbackup.sourceforge.net)
/etc/flexbackup.conf syntax OK
|------------------------------------------------------------
| Checking 'buffer' on this machine... Ok
| Checking /bin/sh on this machine... bash2
|------------------------------------------------------------
| Logging output to "flexbackup.list.200309301530.log"
| Positioning tape at file number 3
|------------------------------------------------------------
At block 767.
|------------------------------------------------------------
| buffer -m 32m -p 75 -s 64k -t -u 100 -i "/dev/nht0" | afio -t -z -D | /usr/bin/flexbackup -v -b 64k -
|------------------------------------------------------------
Kilobytes Out 0
afio: "-": No input
|------------------------------------------------------------
At block 767.
|------------------------------------------------------------

I've tried various settings for backup format and compression, but it makes no difference. Could it be a faulty tape or tape drive, a general problem with this particular drive, or a software problem? Are there any tests I can run to pin down the cause?

Thanks,
Mike
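For what it's worth, one low-level test along these lines (device name and block size taken from the output above; whether flexbackup's "file number 3" maps directly onto mt's fsf count is an assumption) would be to see whether anything is readable at that tape position at all:

# rewind, skip to the third tape file, and try to read a little of it
mt -f /dev/nht0 rewind
mt -f /dev/nht0 fsf 3
dd if=/dev/nht0 bs=64k count=16 | wc -c   # 0 here means nothing was written there

If that reads back data, the tape and drive are probably fine and the problem is more likely in how the differential archive was written.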
From: Mike F. <mi...@LI...> - 2003-09-30 14:10:17
Hi,

Is it possible to store backups on a remote server, like with the remote device type (server:/dev/type0), but into a directory (server:/data/backup)? If not, do you plan to implement such a feature?

Thanks,
Mike.
From: Edwin H. <ed...@co...> - 2003-09-29 13:51:54
[ Simon J. Blandford wrote: ]
> buffer (writer): write of data failed: Input/output error
> bytes to write=32768, bytes written=-1, total written 5841504K
> afio: "-" [offset 5728m+576k+0]: Fatal error:
> afio: "-": Broken pipe

How big are your tapes? "buffer" died while trying to write a chunk of data, at about 5.7GB worth. You might also check the syslog messages to see if the tape driver had anything to say. (Something like that is more likely, because a full tape usually gets you an ENOSPACE message, not an EIO.) If the driver seems OK and the tape wasn't full, maybe try with "buffer" set to off to check that that isn't the problem. (It'll be slower, but it'll be the same data on the tape.)

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
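For illustration, the syslog check and the buffer-off test could look something like this (the log path and the st0 driver name are assumptions for a typical Linux setup; the $buffer option is the one shown in the flexbackup.conf quoted later in this thread):

# look for tape-driver complaints around the time of the failure
dmesg | grep -i -E 'st[0-9]|tape|i/o error'
grep -i 'st0' /var/log/messages | tail -20

# for one test run, disable the buffer program in /etc/flexbackup.conf:
#   $buffer = 'false';   # one of false/buffer/mbuffer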
From: Simon J. B. <ho...@si...> - 2003-09-29 08:56:58
I get this message at the end of the log during backup (file name mangled to xxx to protect identity of web server). I've tried two different tapes with the same result. The backup system was working a couple of weeks ago.

Thanks, SimonB

home/backup/xxx.co.uk/usr/share/doc/rhl-ig-x86-es-8.0/figs/partitions/dstrct-reprt.png.z -- (85%)
home/backup/xxx.co.uk/usr/share/doc/rhl-ig-x86-es-8.0/figs/partitions/extended-partitions.png.z -- (96%)
buffer (writer): write of data failed: Input/output error
bytes to write=32768, bytes written=-1, total written 5841504K
afio: "-" [offset 5728m+576k+0]: Fatal error:
afio: "-": Broken pipe
ERROR from backup, exiting
offending command(s):
cd "/" && (printf "//--/tmp/label.26641 flexbackup.volume_header_info\n" && find . -regex "\./\(proc\|tmp\)/.*" -prune -o -xdev ! -type s ! -regex ".*/[Cc]ache/.*" ! -regex ".*~"$ ! -regex ".*/dev/.*" ! -regex ".*/mnt/.*" ! -regex ".*/proc/.*" ! -regex ".*/var/tmp/.*" ! -regex ".*/var/run/.*" ! -regex ".*/var/lock/.*" ! -regex ".*/var/spool/mqueue.*" ! -regex ".*/tmp/.*" ! -regex ".*home/backup/local/.*" -print ) | afio -o -E /tmp/nocompress.26641 -z -1 m -P gzip -Q -4 -Z -M 2m -T 3k -v -b 32k - | buffer -m 24m -p 75 -s 32k -t -u 100 -B -o "/dev/nst0"
From: Edwin H. <ed...@co...> - 2003-09-29 04:42:56
[ James Handsel wrote: ]
> Can anyone tell me why it shows the backup completing successfully, but
> still exiting with an error?

It's something further up. Run with verbose off; it'll be easier to spot if the filenames aren't printed.

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
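Concretely, that might look like the following (the -d override syntax is the one quoted elsewhere in this thread; combining it with -dir /data01 from the report below is an assumption about how the backup was invoked):

flexbackup -dir /data01 -d verbose=false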
From: James H. <jha...@la...> - 2003-09-28 22:05:30
I get the following message at the end of a backup:

afio: 3382m+640k+0 bytes written in 576 seconds. The operation was successful.
Kilobytes Out 3463808
ERROR from backup, exiting
offending command(s):
cd "/data01" && (printf "//--/tmp/label.2149 flexbackup.volume_header_info\n" && find . -depth -xdev ! -type s ! -regex ".*/[Cc]ache/.*" ! -regex ".*~"$ ! -regex ".*core" ! -regex ".*\.qic"$ -print ) | afio -o -z -1 m -v -b 128k - | buffer -m 10m -p 75 -s 128k -t -u 100 -B -o "/dev/tape"

Can anyone tell me why it shows the backup completing successfully, but still exiting with an error?

Thanks . . . jim
From: peter g. <pc...@ag...> - 2003-09-27 12:42:51
* Cavan Kelly <ca...@jc...> [030927 04:59]:
> Good morning,
>
> I'm trying to run a restore remotely. It's already failed twice because
> of power failures at my location - power stayed on at the server.
>
> What I'd like to know is: Is there any way I can issue a restore command
> from here that will complete even if my terminal session is disconnected?
> I realize this is probably not directly related to flexbackup but was
> hoping someone else has run into the problem because selective restores
> seem to take hours.
>
> My current command looks like this:
>
> /usr/bin/flexbackup -extract -files /root/flexbackup/extract-list

Try:

nohup /usr/bin/flexbackup -extract -files /root/flexbackup/extract-list &

HTH, /pg

-- Peter Green : Agathon Group : pc...@ag...
-------------------------------------------------------------
marked is neutralized to unmarked -- http://www.googlism.com/
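A small variation that also keeps a log of the run, so the restore can be checked after reconnecting (the log path is just an example):

nohup /usr/bin/flexbackup -extract -files /root/flexbackup/extract-list \
    > /root/flexbackup/restore.log 2>&1 &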
From: peter g. <pc...@ag...> - 2003-09-27 12:40:07
* Edwin Huffstutler <ed...@co...> [030926 16:34]:
>
> [ peter green wrote: ]
> > I'm running flexbackup in rsync mode, and when backing up qmail's Maildir/s
> > (basically, one file per email message) I can get errors. The problem is
> > that users download emails during the backup process, so when rsync goes to
> > grab them, they have "disappeared". The rsync error is, e.g.,
> >
> > send_files failed to open //home/wjtl/users/jeremy/new/1064220493.9326.beta.itickets.com: No such file or directory
> >
> > We *need* to back up email directories, so excluding them isn't an option. We
> > cannot really afford to shut off POP3/IMAP services while backups run
> > either. So I'm thinking we want flexbackup to ignore these (non-fatal!)
> > errors.
> >
> > Any ideas how we can do this? TIA,
>
> Looking through the rsync docs, there doesn't seem to be any way to
> brush off that kind of problem.
>
> Does rsync flat-out stop at that point, or does it actually continue with
> the rest of the sync? If it continues and flexbackup just quits when it
> detects the rsync failed, we might be able to do something.

Yes, it looks like it continues and flexbackup "fails" at the end. Nothing is written to the backup directory and the log is not compressed. (I *do* have it set up to compress the log, which seems to happen if the backup is successful.)

> This kind of thing keeps coming up. I guess some kind of "ignore errors in
> the pipeline" flag can be put in, even though I still don't like it. I've
> not wanted to add explicit support for these
> files-changing-during-the-backup scenarios people are coming up with,
> but....

The thing is that these files don't change, they disappear. In that case, at least, flexbackup really shouldn't care, and should just treat it as if the file was never there in the first place. IMO...

Thanks for the response! /pg

-- Peter Green : Agathon Group : pc...@ag...
From: Cavan K. <ca...@jc...> - 2003-09-27 11:04:12
Good morning,

I'm trying to run a restore remotely. It's already failed twice because of power failures at my location - the power stayed on at the server.

What I'd like to know is: is there any way I can issue a restore command from here that will complete even if my terminal session is disconnected? I realize this is probably not directly related to flexbackup, but I was hoping someone else has run into the problem, because selective restores seem to take hours.

My current command looks like this:

/usr/bin/flexbackup -extract -files /root/flexbackup/extract-list

Thanks,
-- Cavan Kelly
From: Edwin H. <ed...@co...> - 2003-09-26 23:44:35
[ peter green wrote: ]
> I'm running flexbackup in rsync mode, and when backing up qmail's Maildir/s
> (basically, one file per email message) I can get errors. The problem is
> that users download emails during the backup process, so when rsync goes to
> grab them, they have "disappeared". The rsync error is, e.g.,
>
> send_files failed to open //home/wjtl/users/jeremy/new/1064220493.9326.beta.itickets.com: No such file or directory
>
> We *need* to back up email directories, so excluding them isn't an option. We
> cannot really afford to shut off POP3/IMAP services while backups run
> either. So I'm thinking we want flexbackup to ignore these (non-fatal!)
> errors.
>
> Any ideas how we can do this? TIA,

Looking through the rsync docs, there doesn't seem to be any way to brush off that kind of problem.

Does rsync flat-out stop at that point, or does it actually continue with the rest of the sync? If it continues and flexbackup just quits when it detects the rsync failed, we might be able to do something.

This kind of thing keeps coming up. I guess some kind of "ignore errors in the pipeline" flag can be put in, even though I still don't like it. I've not wanted to add explicit support for these files-changing-during-the-backup scenarios people are coming up with, but....

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
From: peter g. <pc...@ag...> - 2003-09-26 15:28:36
I'm running flexbackup in rsync mode, and when backing up qmail's Maildir/s (basically, one file per email message) I can get errors. The problem is that users download emails during the backup process, so when rsync goes to grab them, they have "disappeared". The rsync error is, e.g.,

send_files failed to open //home/wjtl/users/jeremy/new/1064220493.9326.beta.itickets.com: No such file or directory

We *need* to back up email directories, so excluding them isn't an option. We cannot really afford to shut off POP3/IMAP services while backups run either. So I'm thinking we want flexbackup to ignore these (non-fatal!) errors.

Any ideas how we can do this? TIA,

/pg

-- Peter Green : Agathon Group : pc...@ag...
-------------------------------------------------------------
skullcap is needed -- http://www.googlism.com/
From: Edwin H. <ed...@co...> - 2003-09-26 13:58:44
[ AP - Simon Blandford wrote: ]
> I have been using Flexbackup successfully to back up to ADR tape. However,
> I am now getting an error, apparently after the data is all written
> (verify?), but there is nothing in the log to say exactly what the error
> was.
>
> Here are the last few lines of the log...

<snip>

It's something further up in the log. Most likely afio had something to complain about mid-stream and then exited with a non-zero exit code at the end. If you can't find it in the log, you can run with "-d verbose=false" so the filenames are not printed, and it might be easier to see.

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
From: Edwin H. <ed...@co...> - 2003-09-26 13:51:51
[ Cavan Kelly wrote: ]
> Good morning,
>
> I need to restore a file the name of which contains a space, i.e.
> "customer file".
>
> Can anyone tell me how to specify this file in extract-list? I think it
> might have something to do with %20 but I've tried various syntaxes
> and haven't found a winning combination.

flexbackup [args] -extract -onefile <subdir>/customer\ file
flexbackup [args] -extract -onefile "<subdir>/customer file"

or put "<subdir>/customer\ file" on one line of the file list to use with -extract -flist.

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
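As a sketch, a list file built that way might look like this (the subdirectory and list path are invented; -files is the flag quoted in Cavan's own command elsewhere in this thread):

# one entry per line; the backslash keeps the space as part of the name
cat > /root/flexbackup/extract-list <<'EOF'
home/data/customer\ file
EOF
/usr/bin/flexbackup -extract -files /root/flexbackup/extract-list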
From: Cavan K. <ca...@jc...> - 2003-09-26 13:28:34
Good morning,

I need to restore a file whose name contains a space, i.e. "customer file".

Can anyone tell me how to specify this file in extract-list? I think it might have something to do with %20, but I've tried various syntaxes and haven't found a winning combination.

Thanks,
-- Cavan Kelly
From: AP - S. B. <si...@au...> - 2003-09-26 09:53:25
I have been using Flexbackup successfully to back up to ADR tape. However, I am now getting an error, apparently after the data is all written (verify?), but there is nothing in the log to say exactly what the error was.

Here are the last few lines of the log...

.......
var/www/twiki/templates/view.print.tmpl -- okay
var/www/twiki/templates/view.rss.tmpl -- okay
var/www/twiki/templates/view.tmpl -- okay
var/yp -- okay
afio: 12317m+256k+0 bytes written in 9605 seconds. The operation was successful.
Kilobytes Out 12612864
ERROR from backup, exiting
offending command(s):
cd "/" && (printf "//--/tmp/label.31789 flexbackup.volume_header_info\n" && find . -regex "\./\(proc\|tmp\)/.*" -prune -o -xdev ! -type s ! -regex ".*/[Cc]ache/.*" ! -regex ".*~"$ ! -regex ".*/dev/.*" ! -regex ".*/mnt/.*" ! -regex ".*/proc/.*" ! -regex ".*/var/tmp/.*" ! -regex ".*/var/run/.*" ! -regex ".*/var/lock/.*" ! -regex ".*/var/spool/mqueue.*" ! -regex ".*/tmp/.*" ! -regex ".*home/backup/local/.*" -print ) | afio -o -E /tmp/nocompress.31789 -z -1 m -P gzip -Q -4 -Z -M 2m -T 3k -v -b 32k - | buffer -m 24m -p 75 -s 32k -t -u 100 -B -o "/dev/nst0"

SimonB
From: Mathieu A. <ma...@ma...> - 2003-09-25 16:15:15
+-On 25/09/2003 17:27 +0200, Andrea Francesconi wrote:
| Hi.
| Is there any possibility to receive the flexbackup log by e-mail with fb
| v1.2.0?

Doesn't cron do that automatically for you?

--
Mathieu Arnold
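To spell out the cron route: cron mails each job's output to the crontab owner, and a MAILTO line redirects it. A sketch (the address and schedule are examples; the set name is taken from this thread):

( crontab -l; echo 'MAILTO=backup-reports@example.com'; \
  echo '30 2 * * * /usr/bin/flexbackup -set data' ) | crontab -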
From: Andrea F. <afr...@si...> - 2003-09-25 15:33:26
Hi.

Is there any possibility to receive the flexbackup log by e-mail with fb v1.2.0?

Thank you,
fRANz
From: Bosschaert, M. <m.b...@An...> - 2003-09-24 15:42:05
> > buffer (writer): write of data failed: Input/output error
> > bytes to write=10240, bytes written=-1, total written 700K
>
> This points to your problem (last command in the pipeline). buffer choked
> after about 700k.
>
> Make sure you've run "flexbackup -test-tape-drive" and you've got the tape
> drive, driver, and stuff like blocksize parameters working ok. See the FAQ.
>
> You could also try streaming large files to the tape directly and make sure
> it works.

"flexbackup -test-tape-drive" passes without complaining. Also, I've tried various settings for the blocksize (e.g. blksize=32 or 64). The strange thing is that I can back up the system as the first set (/usr /root /boot /etc /bin /lib /sbin /var); the errors start coming in the next action (-dir /usr).

Mike
From: Edwin H. <ed...@co...> - 2003-09-24 15:04:13
[ Bosschaert, MAR wrote: ]
> buffer (writer): write of data failed: Input/output error
> bytes to write=10240, bytes written=-1, total written 700K

This points to your problem (last command in the pipeline): buffer choked after about 700k.

Make sure you've run "flexbackup -test-tape-drive" and you've got the tape drive, driver, and stuff like blocksize parameters working ok. See the FAQ.

You could also try streaming large files to the tape directly and make sure it works.

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
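A rough version of that raw streaming test (device name from the config posted in this thread; the use of /dev/zero, the block size, and the ~1 GB count are arbitrary choices for illustration):

mt -f /dev/nht0 rewind
dd if=/dev/zero bs=32k count=32768 of=/dev/nht0     # write ~1 GB straight to tape
mt -f /dev/nht0 rewind
dd if=/dev/nht0 bs=32k count=32768 | md5sum         # read it back
dd if=/dev/zero bs=32k count=32768 | md5sum         # the two hashes should match

If the write or the read-back fails with an I/O error here too, the problem is below flexbackup (drive, driver, or media) rather than in the backup pipeline.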
From: Edwin H. <ed...@co...> - 2003-09-24 14:59:16
[ Jan Tammen wrote: ]
> I'm trying to do a backup on a remote server's harddisk. How can I
> achieve this using rsync/rsh?

You mean archive-to-disk, but to a directory on a different machine from the one on which you are running flexbackup? flexbackup currently doesn't work that way (there are a few wrinkles involved). With tape drives it's a bit easier, so that is supported.

Run flexbackup *FROM* the machine with the disk, specifying the remote machines to be backed up (host1:/dir, host2:/dir, etc.). Most of the processing actually occurs on the remote end. (Or use NFS...)

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
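A sketch of that layout, run on the machine that owns the disk (the hostnames, paths, and set name are invented; the $set{} and host:/dir syntax follows the examples elsewhere in this thread):

# in /etc/flexbackup.conf on the machine with the disk:
#   $set{'webhosts'} = "host1:/var/www host2:/home";
# then start the backup locally:
flexbackup -set webhosts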
From: Bosschaert, M. <m.b...@An...> - 2003-09-24 09:40:51
Hi,

I'm trying to get flexbackup working; however, I get the following error:

flexbackup -dir /
...
./bin/autom4te
./bin/automake
./bin/setterm
./bin/autoscan
./bin/getkeycodes
./bin/libtool
./bin/ssh-agent
./bin/ksplash
./bin/kspread
./bin/WPrefs
buffer (writer): write of data failed: Input/output error
bytes to write=10240, bytes written=-1, total written 700K
ERROR from backup, exiting
offending command(s):
cd "/usr" && find . -depth -xdev ! -type s ! -regex ".*/[Cc]ache/.*" ! -regex ".*~"$ -print0 | tar --create --null --files-from=- --ignore-failed-read --same-permissions --no-recursion --totals --label "level 0 /usr Wed Sep 24 10:43:19 2003 tar+gzip from cardiores2.antonius.net" --verbose --sparse --file - | gzip -4 | buffer -m 10m -p 75 -t -u 100 -B -o "/dev/nht0"

This error is reproducible whenever I make a backup of a somewhat larger set of data. Smaller sets (e.g. /etc) back up with no errors. Also, when I do a -newtape and then run a (large) set of data, there are no problems. I run Debian 3.0, flexbackup 1.2.0. Any help highly appreciated.

Mike

CONTENTS of /etc/flexbackup.conf

$type = 'tar';
$set{'home'} = "/home";
$set{'data'} = "/data";
$set{'backup'} = "/data/backup";
$set{'system'} = "/bin /boot /lib /usr /sbin /mnt /dev /var /etc /opt /initrd /root";
$set{'mysql'} = "/data/mysql";
$set{'htdocs'} = "/data/httpd";
$set{'etc'} = "/etc";
$prune{'/'} = "tmp proc";
$compress = 'gzip';  # one of false/gzip/bzip2/zip/compress/hardware
$compr_level = '4';  # compression level (1-9) (for gzip/bzip2/zip)
$buffer = 'buffer';  # one of false/buffer/mbuffer
$buffer_megs = '10';  # buffer memory size (in megabytes)
$buffer_fill_pct = '75';  # start writing when buffer this percent full
$buffer_pause_usec = '100';  # pause after write (tape devices only)
$device = '/dev/nht0';
$blksize = '0';
$mt_blksize = "0";
$pad_blocks = 'true';
$remoteshell = 'ssh';  # command for remote shell (rsh/ssh/ssh2)
$remoteuser = '';  # if non-null, secondary username for remote shells
$label = 'true';  # somehow store identifying label in archive?
$verbose = 'true';  # echo each file?
$sparse = 'true';  # handle sparse files?
$indexes = 'true';  # false to turn off all table-of-contents support
$staticfiles = 'false';
$atime_preserve = 'false';
$traverse_fs = 'false';
$exclude_expr[0] = '.*/[Cc]ache/.*';
$exclude_expr[1] = '.*~$';
$erase_tape_set_level_zero = 'true';
$erase_rewind_only = 'false';
$logdir = '/var/log/flexbackup';  # directory for log files
$comp_log = 'gzip';  # compress log? false/gzip/bzip2/compress/zip
$staticlogs = 'false';  # static log filenames w/ no date stamp
$prefix = '';  # log files will start with this prefix
$tmpdir = '/tmp';  # used for temporary refdate files, etc
$stampdir = '/var/lib/flexbackup';  # directory for backup timestamps
$indexes = 'true';
$index = '/var/lib/flexbackup/index';  # DB filename for tape indexes
$keyfile = '00-index-key';  # filename for keyfile if archiving to dir
$sprefix = '';  # stamp files will start with this prefix
$afio_nocompress_types = 'mp3 MP3 Z z gz gif zip ZIP lha jpeg jpg JPG taz tgz deb rpm bz2';
$afio_echo_block = 'false';
$afio_compress_threshold = '3';
$afio_compress_cache_size = '2';
$tar_echo_record_num = 'false';
$cpio_format = 'newc';
$dump_length = '0';
$dump_use_dumpdates = 'false';
$star_fifo = 'true';
$star_acl = 'true';
$star_format = 'exustar';
$star_echo_block_num = 'false';
$pax_format = 'ustar';
$zip_nocompress_types = 'mp3 MP3 Z z gz gif zip ZIP lha jpeg jpg JPG taz tgz deb rpm bz2';
$pkgdelta_archive_list = 'rootonly';
$pkgdelta_archive_unowned = 'true';
$pkgdelta_archive_changed = 'true';
From: Jan T. <jan...@ew...> - 2003-09-23 18:18:19
Hi.

I'm trying to do a backup onto a remote server's hard disk. How can I achieve this using rsync/rsh?

Thanks
From: Edwin H. <ed...@co...> - 2003-09-20 18:29:00
[ Maher Atwah wrote: ]
> I would like to use flexbackup to back up large filesystems to another
> server filesystem. I have specified the copy option and that uses cpio.
> The problem with cpio is that it fails with any file over 2GB. I
> recompiled cpio with the 64 option and that causes any file over 2GB to be
> copied as 0 bytes.
>
> Is there a way to get cpio to work for files over 2GB?

It depends on the system you are using and how cpio was compiled.

> The other question: the online documentation of the configuration file
> says that it is possible to use copy and rsync for disk mode:
> "'copy' or 'rsync' are extra options if running in archive-to-disk
> mode."
>
> But when I specify "rsync" it errors with "option not supported". I have
> downloaded and installed "RedHat 9 RPM: flexbackup-1.2.0-1.noarch.rpm".
> How can I get rsync to work?

The rsync stuff is checked in, but I didn't make a real release with it yet. If you pull CVS it'll be there, or there was a test "flexbackup-1.2.1a.tar.gz" you can find in www.flexbackup.org/tarball, I think. (Note you'll also need rsync >= 2.5.6.)

-Edwin

-- Edwin Huffstutler ed...@co... GnuPG Key ID: AE782DC9
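A quick check of the rsync prerequisite mentioned above (plain shell, nothing flexbackup-specific):

rsync --version | head -1    # should report 2.5.6 or newer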
From: Maher A. <sa...@mi...> - 2003-09-19 20:01:50
I would like to use flexbackup to back up large filesystems to another server filesystem. I have specified the copy option, and that uses cpio. The problem with cpio is that it fails with any file over 2GB. I recompiled cpio with the 64 option, and that causes any file over 2GB to be copied as 0 bytes.

Is there a way to get cpio to work for files over 2GB?

The other question: the online documentation of the configuration file says that it is possible to use copy and rsync for disk mode: "'copy' or 'rsync' are extra options if running in archive-to-disk mode."

But when I specify "rsync" it errors with "option not supported". I have downloaded and installed "RedHat 9 RPM: flexbackup-1.2.0-1.noarch.rpm". How can I get rsync to work?

Thanks,
Mike
From: Josh B. <jo...@ag...> - 2003-09-14 19:28:56
Mathieu,

> That's because you already have the right version. Don't trust CPAN :)

Thanks. I'll just install it by hand, then. Any chance we could build a Bundle::Flexbackup?

-- Josh Berkus
Aglio Database Solutions
San Francisco