From: Cornelius H. <ha...@ic...> - 2004-05-01 16:08:54
|
Hi,

today I upgraded to bobs-0.6.2. I also upgraded PHP, so now I have DB4 instead of DB3 support. I noticed lots of warnings in bobs.log:

Warning: dba_open(/backup/blackbox.dirindex.db,n): No such handler: db3 in /backup/current/process/cmd/cmd.1049041308.php on line 115

I grepped the source for db3 & db4, and it seems to me that you first try db3 and, if that fails, db4. It seems to work OK, but the warnings are very confusing. Maybe it would be a good idea to suppress these warnings if access via db4 is possible. Or am I missing something?

Please CC me, because I'm not subscribed.

Conny |
|
From: Joe Z. <joe...@za...> - 2004-04-28 20:32:12
|
Arnaud CONCHON wrote:
> there's still the problem of the incremental backups....
> although I have set up my backups with 1 incremental file to keep, the
> system keeps on backing up a new file every day... (I back up a server
> that stores about 60 .PST Outlook files, with an average size of 200
> MB....)
>
> Hope a nice solution will be chosen soon !
> Keep up the good work
>
> Au revoir.
> Arnaud.
The incremental backup cleanup is not yet implemented. I'm working on
that now. In the meantime you can use 'find' to manually remove the old
files. This command will remove incremental files over 30 days old:
find /var/bobsdata/incremental/your_server/your_share/ \
    -ctime +30 -ctime -999 -exec rm -v {} ';'
Joe
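A sketch of wrapping that cleanup command in a cron-able script. The paths and the retention value here are illustrative stand-ins, not bobs defaults; the directory check simply guards against running find on a mistyped path:

```shell
# Sketch of a cron-able wrapper around the cleanup command above.
# BOBS_DATA, SERVER, SHARE, and KEEP_DAYS are illustrative values.
BOBS_DATA=/var/bobsdata
SERVER=your_server
SHARE=your_share
KEEP_DAYS=30

INCR_DIR="$BOBS_DATA/incremental/$SERVER/$SHARE"

# Refuse to run if the directory is missing, so a typo cannot
# send find (and rm) walking somewhere unintended.
if [ -d "$INCR_DIR" ]; then
    find "$INCR_DIR" -type f -ctime +"$KEEP_DAYS" -exec rm -v {} ';'
fi
```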
|
|
From: Arnaud C. <arn...@ar...> - 2004-04-28 08:39:17
|
You're right... everything works fine now. There's still the problem of the incremental backups... although I have set up my backups with 1 incremental file to keep, the system keeps backing up a new file every day (I back up a server that stores about 60 .PST Outlook files, with an average size of 200 MB).

Hope a nice solution will be chosen soon! Keep up the good work.

Au revoir.
Arnaud.

Joe Zacky wrote:
> SourceForge.net wrote:
>
>> hi there... I am trying to use the new version of bobs, primarily with
>> nfs, and I have these messages in the browse current link:
>>
>> Could not create a database of type db3 or db4 (tried both)
>> Warning: no such handler: in /var/www/html/bobs/inc/class_db.php on
>> line 79
>>
>> Warning: Unable to find DBA identifier 0 in
>> /var/www/html/bobs/inc/class_db.php on line 308
>> No files were found in
>>
>> The backup process appears to be OK. I tried it with a few files, but
>> during checking I get some messages about problems writing files, and
>> I changed permissions and nothing happens...
>> Can you help? Thanks!
>>
>> Rodrigo ro...@po...
>>
> The problem is in /var/www/html/bobs/inc/class_db.php:
>
> 43 // check what dba type we should use
> 44 $tmpfile = tempnam("/tmp", "BOBS");
> 45 $dbcheck = @dba_open($tmpfile, "c", "db3");
> 46 if ( $dbcheck === FALSE ) {
> 47     $dbcheck = @dba_open($tmpfile, "c", "db4");
> 48     if ( $dbcheck === FALSE ) {
>
> On my Red Hat 9 system, for some strange reason, when this is run as
> user apache it fails to open the database. The temp file is created
> with 0 bytes. If I run it as myself or root it works fine.
>
> If I remove the '@' from @dba_open I see these errors when I "Browse
> Current":
> ------
> Warning: driver initialization failed in
> /var/www/html/bobs/inc/class_db.php on line 45
>
> Warning: no such handler: db4 in /var/www/html/bobs/inc/class_db.php
> on line 47
> Could not create a database of type db3 or db4 (tried both)
> Warning: no such handler: in /var/www/html/bobs/inc/class_db.php on
> line 79
>
> Warning: Unable to find DBA identifier 0 in
> /var/www/html/bobs/inc/class_db.php on line 308
> No files were found in
> ------
>
> Database creation during backups works because the backups are run by
> root from /etc/cron.daily/backup.php.
>
> I'll bet this only occurs on some systems/configurations.
>
> I even created a directory /var/www/tmp and made apache the
> owner/group, and it still failed, so I don't think it's a permissions
> problem.
>
> Aha! I found the answer here:
> http://us4.php.net/manual/en/function.dba-open.php
>
> Note: Up to PHP 4.3.5 open mode 'c' is broken for several internal
> handlers and truncates the database instead of appending data to an
> existent database. Also dbm and ndbm fail on mode 'c' in typical
> configurations (this cannot be fixed).
>
> I have php-4.2.2-17.2, so mode "c" is broken. If I change the mode to
> "n" it works fine! I'll commit these changes later.
>
> Joe |
|
From: Joe Z. <jz...@co...> - 2004-04-28 06:36:28
|
SourceForge.net wrote:
> hi there... I am trying to use the new version of bobs, primarily with
> nfs, and I have these messages in the browse current link:
>
> Could not create a database of type db3 or db4 (tried both)
> Warning: no such handler: in /var/www/html/bobs/inc/class_db.php on line 79
>
> Warning: Unable to find DBA identifier 0 in /var/www/html/bobs/inc/class_db.php
> on line 308
> No files were found in
>
> The backup process appears to be OK. I tried it with a few files, but
> during checking I get some messages about problems writing files, and I
> changed permissions and nothing happens...
> Can you help? Thanks!
>
> Rodrigo
> ro...@po...

The problem is in /var/www/html/bobs/inc/class_db.php:

43 // check what dba type we should use
44 $tmpfile = tempnam("/tmp", "BOBS");
45 $dbcheck = @dba_open($tmpfile, "c", "db3");
46 if ( $dbcheck === FALSE ) {
47     $dbcheck = @dba_open($tmpfile, "c", "db4");
48     if ( $dbcheck === FALSE ) {

On my Red Hat 9 system, for some strange reason, when this is run as user apache it fails to open the database. The temp file is created with 0 bytes. If I run it as myself or root it works fine.

If I remove the '@' from @dba_open I see these errors when I "Browse Current":
------
Warning: driver initialization failed in /var/www/html/bobs/inc/class_db.php on line 45

Warning: no such handler: db4 in /var/www/html/bobs/inc/class_db.php on line 47
Could not create a database of type db3 or db4 (tried both)
Warning: no such handler: in /var/www/html/bobs/inc/class_db.php on line 79

Warning: Unable to find DBA identifier 0 in /var/www/html/bobs/inc/class_db.php on line 308
No files were found in
------

Database creation during backups works because the backups are run by root from /etc/cron.daily/backup.php.

I'll bet this only occurs on some systems/configurations.

I even created a directory /var/www/tmp and made apache the owner/group, and it still failed, so I don't think it's a permissions problem.

Aha! I found the answer here: http://us4.php.net/manual/en/function.dba-open.php

Note: Up to PHP 4.3.5 open mode 'c' is broken for several internal handlers and truncates the database instead of appending data to an existent database. Also dbm and ndbm fail on mode 'c' in typical configurations (this cannot be fixed).

I have php-4.2.2-17.2, so mode "c" is broken. If I change the mode to "n" it works fine! I'll commit these changes later.

Joe |
|
From: Rene R. <re...@gr...> - 2004-04-25 22:41:27
|
On Sun, 2004-04-25 at 20:59, Joe Zacky wrote:
> Rene Rask wrote:
>
> > X should be the number of backup runs.
> > Example: Backups run on Monday, Tuesday, and Wednesday.
> > I have chosen to keep 20 incremental backups. 20/3 is 6.6 times one week
> > (7 days), which gives 46.6 days to keep backups, and we round the number
> > up to 47 to be safe.
> >
> > 20/3 is the number of weeks,
> > times 7 is the day count.
> >
> > I hope this makes sense.
>
> It makes mathematical sense, but it's confusing and not intuitive. I
> would never guess that by looking at the configuration screen.
> Additionally, if someone changes the days of their backups, say from 3
> to 5, that shortens the retention period even though backups have only
> been running 3 times per week instead of 5. If we use this method, we
> should calculate the retention date and show that on the screen so
> people would know what it is.

A message like "Backups will be retained for 45 days with the current setting" should do the trick. I would expect incremental to behave as I described. I guess my way of thinking about the backup is a bit different from yours, so it is good to explain this to people in an easy-to-understand way.

> > It seems like I'm wrong here. Guess we could use the find method. But
> > I'd still prefer the db method. That is better if we want to do more
> > advanced things in the future.
>
> The database method is safer and more accurate because the modification
> date in the incremental archive could be manually changed, say if
> someone moved the directory or restored it from a tape backup. I'll play
> with the database. The 'find' method was just so easy...

I suggest you take a look at the code where I use the db for searching. You basically just make a search for files older than X, just like you can do with the current setup. It shouldn't be hard since it's basically done already :)

Cheers |
|
From: Joe Z. <jz...@co...> - 2004-04-25 22:03:02
|
Rene Rask wrote:
> X should be the number of backup runs.
> Example: Backups run on Monday, Tuesday, and Wednesday.
> I have chosen to keep 20 incremental backups. 20/3 is 6.6 times one week
> (7 days), which gives 46.6 days to keep backups, and we round the number
> up to 47 to be safe.
>
> 20/3 is the number of weeks,
> times 7 is the day count.
>
> I hope this makes sense.

It makes mathematical sense, but it's confusing and not intuitive. I would never guess that by looking at the configuration screen. Additionally, if someone changes the days of their backups, say from 3 to 5, that shortens the retention period even though backups have only been running 3 times per week instead of 5. If we use this method, we should calculate the retention date and show that on the screen so people would know what it is.

> It seems like I'm wrong here. Guess we could use the find method. But
> I'd still prefer the db method. That is better if we want to do more
> advanced things in the future.

The database method is safer and more accurate because the modification date in the incremental archive could be manually changed, say if someone moved the directory or restored it from a tape backup. I'll play with the database. The 'find' method was just so easy...

Cheers,
Joe |
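Rene's retention arithmetic (keep count divided by runs per week, times 7, rounded up) can be sketched in shell. The variable names and values are illustrative, matching the 20-backups/3-runs example above:

```shell
# Retention window implied by "keep N incremental backups" when
# backups run only on some days of the week (illustrative values).
KEEP=20           # incremental backups to keep
RUNS_PER_WEEK=3   # backup runs per week, e.g. Mon/Tue/Wed

# Integer ceiling of (KEEP/RUNS_PER_WEEK) weeks, converted to days:
# ceil(a/b) == (a + b - 1) / b in integer arithmetic.
RETENTION_DAYS=$(( (KEEP * 7 + RUNS_PER_WEEK - 1) / RUNS_PER_WEEK ))

echo "Backups will be retained for $RETENTION_DAYS days"
```

For 20 kept backups at 3 runs per week this yields 47 days, matching the rounded-up figure in the thread.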
|
From: Rene R. <re...@gr...> - 2004-04-25 18:13:12
|
On Sun, 2004-04-25 at 18:16, Joe Zacky wrote:
> > Anyway. A simple "delete after X incremental backups" would be a good
> > start. X not being days, since we can't be sure the backup is run every
> > day, requires a little date manipulation or some other system of telling
> > when and what to delete.
>
> Here you're saying X is "how many." I'm confused, should X be number of
> days or number of files?

X should be the number of backup runs.
Example: Backups run on Monday, Tuesday, and Wednesday.
I have chosen to keep 20 incremental backups. 20/3 is 6.6 times one week (7 days), which gives 46.6 days to keep backups, and we round the number up to 47 to be safe.

20/3 is the number of weeks,
times 7 is the day count.

I hope this makes sense.

> The tests I ran on 2 of my Red Hat systems showed that using find with
> -ctime picked the dates the file was backed up. That is, not the date it
> was last modified (-mtime), but the date it was rsync'd to bobs.
>
> I didn't realize that information was in the database. I never did
> anything with the database so I wasn't thinking about it. I'll have to
> play with that - I agree that sounds like the right way to find the files.

It seems like I'm wrong here. Guess we could use the find method. But I'd still prefer the db method. That is better if we want to do more advanced things in the future.

Cheers
Rene |
|
From: Joe Z. <jz...@co...> - 2004-04-25 16:20:45
|
Rene Rask wrote:
>On Sun, 2004-04-25 at 07:18, Joe Zacky wrote:
>
>
>>Rene,
>>
>>I was going to add a routine to cleanup incremental backups but now I'm
>>not sure. I was planning to remove incremental files that were backed up
>>over X days ago, but now I see the field text is "How many incremental
>>backups to keep (0 = infinite)", which seems to mean how many copies of
>>an incremental file to keep. So if incremental is set to 9, then there
>>could be up to 9 copies of any incremental file. This means the
>>incremental backup directory could be up to 9 times as large as the
>>current backup.
>>
>>Here's some points to ponder:
>>
>>If we keep X copies of incrementals:
>>o The amount of storage required "could" be much more than if we kept
>>X days.
>>o It's going to be programmatically difficult and time consuming to
>>read through all the file names in incremental and count how many there
>>are of each.
>>
>>If we keep X days worth of incrementals:
>>o The only backup copy of a file would be deleted after X days if the
>>file hasn't been changed. That's probably not a good thing.
>>o Removing the incremental files is easily done with a command like this:
>> find /path/to/bobsdata/incremental/<server>/<share>/ -type f
>>-ctime +<days> -ctime -999 -exec rm -v {} ';'
>>
>>So I'm wondering 1) what your intention is for this field, 2) what are
>>you doing on your bobs system to cleanup incremental files, and 3) how
>>do you suggest I proceed?
>>
>>The question is what do we want our selection criteria to be: "how many"
>>or "how long?"
>>
>>
>>
>
>Hi Joe
>Here is my view on it.
>
>My idea was to have X days worth of files in the incremental store. That
>would allow to say that I want 90 days of incremental backups. In my
>situation, if a file hasn't changed for 90 days it's probably done.
>
>
Here you're saying that X is "how long".
>I have no problem with not having a backup of a file in incrementals. If
>that is a problem I need to increase the timespan I keep the files and
>possibly increase my disk capacity if that is needed. That is not a BOBS
>problem, but a backup administrator's problem.
>
>The way I delete files is to set some limits. Say, files older than 6
>months AND larger than 500 MB ( and sometimes by type, like .avi, .mov,
>mpg and so on. Depends on the work being done.)
>Generally my tolerance towards large files is biased so they are deleted
>first.
>A 500MB file is seldom a "from scratch work" but something made from
>another file. Like a movie rendering. Exceptions are files like
>photoshop (.psd) files which can be large and original works.
>
>That is just my situation others probably have different scenarios.
>
>Another feature I like is to have bobs decide when incrementals are
>deleted. Say, when only 50 GB of diskspace is left, delete until there
>is 100 GB free, oldest files first. (I guess this will be a problem when
>having different time spans for the various backups.)
>
>Anyway. A simple "delete after X incremental backups" would be a good
>start. X not being days, since we can't be sure the backup is run every
>day, requires a little date manipulation or some other system of telling
>when and what to delete.
>
>
Here you're saying X is "how many." I'm confused, should X be number of
days or number of files?
>Please use the database(s) to search for files to delete. The time on
>disk is not accurate. I think it reflects the time the file was last
>changed (on the originating server).
>The database has the correct information by using the date tagging
>system.
>I used the find command when deleting but I know it isn't the correct
>way to do it.
>
>
The tests I ran on 2 of my redhat systems showed that using find with
-ctime picked the dates the file was backed up. That is, not the date it
was last modified (-mtime), but the date it was rsync'd to bobs.
I didn't realize that information was in the database. I never did
anything with the database so I wasn't thinking about it. I'll have to
play with that - I agree that sounds like the right way to find the files.
>Cheers
>Rene
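The -ctime behavior Joe tested can be checked directly with GNU stat (a quick sketch; it assumes GNU coreutils, where %y prints the modification time find's -mtime uses and %z prints the inode-change time -ctime uses):

```shell
# Compare the two timestamps find selects on: mtime (content change)
# vs ctime (inode change, which is updated when the backup copy is
# written). Requires GNU coreutils stat.
f=$(mktemp)
mline=$(stat -c 'mtime: %y' "$f")   # last content modification
cline=$(stat -c 'ctime: %z' "$f")   # last inode change
echo "$mline"
echo "$cline"
rm -f "$f"
```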
|
|
From: Rene R. <re...@gr...> - 2004-04-25 15:38:53
|
On Sun, 2004-04-25 at 07:18, Joe Zacky wrote:
> Rene,
>
> I was going to add a routine to cleanup incremental backups but now I'm
> not sure. I was planning to remove incremental files that were backed up
> over X days ago, but now I see the field text is "How many incremental
> backups to keep (0 = infinite)", which seems to mean how many copies of
> an incremental file to keep. So if incremental is set to 9, then there
> could be up to 9 copies of any incremental file. This means the
> incremental backup directory could be up to 9 times as large as the
> current backup.
>
> Here's some points to ponder:
>
> If we keep X copies of incrementals:
> o The amount of storage required "could" be much more than if we kept
> X days.
> o It's going to be programmatically difficult and time consuming to
> read through all the file names in incremental and count how many there
> are of each.
>
> If we keep X days worth of incrementals:
> o The only backup copy of a file would be deleted after X days if the
> file hasn't been changed. That's probably not a good thing.
> o Removing the incremental files is easily done with a command like this:
> find /path/to/bobsdata/incremental/<server>/<share>/ -type f
> -ctime +<days> -ctime -999 -exec rm -v {} ';'
>
> So I'm wondering 1) what your intention is for this field, 2) what are
> you doing on your bobs system to cleanup incremental files, and 3) how
> do you suggest I proceed?
>
> The question is what do we want our selection criteria to be: "how many"
> or "how long?"
>
Hi Joe
Here is my view on it.
My idea was to have X days worth of files in the incremental store. That
would allow to say that I want 90 days of incremental backups. In my
situation, if a file hasn't changed for 90 days it's probably done.
I have no problem with not having a backup of a file in incrementals. If
that is a problem I need to increase the timespan I keep the files and
possibly increase my disk capacity if that is needed. That is not a BOBS
problem, but a backup administrator's problem.
The way I delete files is to set some limits. Say, files older than 6
months AND larger than 500 MB ( and sometimes by type, like .avi, .mov,
mpg and so on. Depends on the work being done.)
Generally my tolerance towards large files is biased so they are deleted
first.
A 500MB file is seldom a "from scratch work" but something made from
another file. Like a movie rendering. Exceptions are files like
photoshop (.psd) files which can be large and original works.
That is just my situation others probably have different scenarios.
Another feature I like is to have bobs decide when incrementals are
deleted. Say, when only 50 GB of diskspace is left, delete until there
is 100 GB free, oldest files first. (I guess this will be a problem when
having different time spans for the various backups.)
Anyway. A simple "delete after X incremental backups" would be a good
start. X not being days, since we can't be sure the backup is run every
day, requires a little date manipulation or some other system of telling
when and what to delete.
Please use the database(s) to search for files to delete. The time on
disk is not accurate. I think it reflects the time the file was last
changed (on the originating server).
The database has the correct information by using the date tagging
system.
I used the find command when deleting but I know it isn't the correct
way to do it.
Cheers
Rene
|
|
From: Joe Z. <jz...@co...> - 2004-04-25 05:24:31
|
Rene,
I was going to add a routine to cleanup incremental backups but now I'm
not sure. I was planning to remove incremental files that were backed up
over X days ago, but now I see the field text is "How many incremental
backups to keep (0 = infinite)", which seems to mean how many copies of
an incremental file to keep. So if incremental is set to 9, then there
could be up to 9 copies of any incremental file. This means the
incremental backup directory could be up to 9 times as large as the
current backup.
Here's some points to ponder:
If we keep X copies of incrementals:
o The amount of storage required "could" be much more than if we kept
X days.
o It's going to be programmatically difficult and time consuming to
read through all the file names in incremental and count how many there
are of each.
If we keep X days worth of incrementals:
o The only backup copy of a file would be deleted after X days if the
file hasn't been changed. That's probably not a good thing.
o Removing the incremental files is easily done with a command like this:
find /path/to/bobsdata/incremental/<server>/<share>/ -type f
-ctime +<days> -ctime -999 -exec rm -v {} ';'
So I'm wondering 1) what your intention is for this field, 2) what are
you doing on your bobs system to cleanup incremental files, and 3) how
do you suggest I proceed?
The question is what do we want our selection criteria to be: "how many"
or "how long?"
Cheers,
Joe
|
|
From: Rene R. <re...@gr...> - 2004-04-19 07:40:23
|
On Mon, 2004-04-19 at 04:26, Joe Zacky wrote:
> I put release 0.6.2 on sourceforge.
>
> Rene, I guess you need to do what you do on freshmeat.
>
> Joe

I updated the freshmeat entry.

Rene |
|
From: Joe Z. <jz...@co...> - 2004-04-19 02:26:58
|
I put release 0.6.2 on sourceforge.

Rene, I guess you need to do what you do on freshmeat.

Joe |
|
From: Jochen M. | steptown.c. <j.m...@st...> - 2004-04-16 08:04:27
|
Hi Joe,

> Yes. That's the way I see it, just a start. I'd also like to make it
> optional to send email about the status of backups.

That would be a fine tool. Maybe it would also make sense (in the future) to have reporting in the GUI (just an idea).

Ah, I hope to have some leisure time to code for bobs. Unfortunately, at present it doesn't look like it.

Cheers
Jochen

On Fri, 2004-04-16 at 04:45, Joe Zacky wrote:
> Jochen Metzger wrote:
>
> > Yes, but it is a good kickoff to get things started concerning the
> > topic of logging, isn't it?
> >
> > Cheers
> > Jochen |
|
From: Joe Z. <jz...@co...> - 2004-04-16 02:46:11
|
Jochen Metzger wrote:
>Yes, but it is a good kickoff to get things started concerning the
>topic of logging, isn't it?
>
>Cheers
>Jochen

Yes. That's the way I see it, just a start. I'd also like to make it optional to send email about the status of backups.

Cheers,
Joe |
|
From: Jochen M. | steptown.c. <j.m...@st...> - 2004-04-15 08:34:47
|
Hi Joe,

> I'm in California, quite a ways from Florida.

Oops... I still have to learn a lot about the States. Well, I asked my wife and now I know: East Coast and West Coast.

Sorry about that,
Jochen |
|
From: Jochen M. <j.m...@om...> - 2004-04-15 08:32:26
|
Hi Joe,

thanks for your info...

> >What exactly do you log at present?
> >Only what is backed up from cmdloop with the above command?
>
> It's exactly what you would see if you ran cmdloop manually from the
> shell. It needs work. Mostly it's the file names that are getting
> processed.

Yes, but it is a good kickoff to get things started concerning the topic of logging, isn't it?

Cheers
Jochen |
|
From: Joe Z. <jz...@co...> - 2004-04-15 02:24:02
|
Jochen Metzger wrote:
>On Wed, 2004-04-14 at 06:43, Joe Zacky wrote:
>
>
>>I'm just using output redirection in /etc/init.d/cmdloopd
>>like this:
>>
>> ${CMDLOOP_DIR}/cmdloop >> $LOGFILE 2>&1 &
>>
>>
>>
>What exactly do you log at present?
>Only what is backed up from cmdloop with the above command?
>
>
It's exactly what you would see if you ran cmdloop manually from the
shell. It needs work. Mostly it's the file names that are getting
processed.
Mostly it's this:
2004-04-13 21:35:37: Starting
/backup/bobsdata/current/process/cmd/cmd.1080703636.bash
2004-04-13 21:35:42: Finished
/backup/bobsdata/current/process/cmd/cmd.1080703636.bash
But there's also some of this:
2004-04-11 14:24:18: Starting
/backup/bobsdata/current/process/cmd/cmd.1080703623.bash
Starting backup of uranus.zebra.com / full_drive
receiving file list ... done
backup_dir is /backup/bobsdata/incoming/uranus.zebra.com/full_drive
./
dev/ttyp0
etc/ntp/
usr/local/etc/tmp/
All I added to the output was the date/time and server/share so you
could at least tell what backups are running and how long they took.
>
>
>
>>As they say in Europe:
>>Cheers,
>>Joe
>>
>>
>>
>Re: Cheers
>Jochen
>
>BTW. How do you say in Florida?
>
>
I'm in California, quite a ways from Florida. Sometimes they say
"Sincerely" or "See ya", but usually we just sign our name. Personally I
like "Cheers" much better - it makes me feel like going downstairs for a
glass of wine.
I'm going downstairs now.
See ya,
Joe
|
|
From: Jochen M. <ml...@om...> - 2004-04-14 08:31:10
|
On Wed, 2004-04-14 at 06:43, Joe Zacky wrote:
Hi Joe,
> Jochen Metzger wrote:
>
> >Hi Joe,
> >hi Rene,
> >
> >>
> >My Question to Joe:
> >How did you implement the logging function you talked about. Did
> >you use the "logger" function from bash ?
> >
> No. I think the logger logs everything to the syslog. When doing an
> rsync backup there's a message for every file backed up so I gave bobs
> its own log.
Yes, that seems much better
> I'm just using output redirection in /etc/init.d/cmdloopd
> like this:
>
> ${CMDLOOP_DIR}/cmdloop >> $LOGFILE 2>&1 &
>
What exactly do you log at present?
Only what is backed up from cmdloop with the above command?
> >
> >2. I came across a problem that occurs when rsync is not installed
> >on the server you want to back up from to the bobs server.
> >
> >You obtain an error, and the system will not go on. So it would be
> >helpful to implement that in the configuration check as well:
> >
> >Path:
> >Connect with ssh
> >Check if rsync is present (-> whereis rsync)
> >
> >Question:
> >Could you implement that in the configuration check, Joe?
> >
> >
> But the configuration check does test rsync if you select a server
> configured for rsync then click "Check Configuration":
>
> Testing rsync over ssh for backup method "rsync_ssh" using this command:
> rsync -n -e ssh 192.168.0.3:/
>
Oops, that's pretty neat.
Well, there's a lesson for me:
Always use the frontend when creating a new backup task.
For a new server I just copied some older backup.ini files,
so that is not really safe when something is wrong.
Thanks for that hint.
> >Cheers
> >
> >I hope, I can contribute some stuff in the future. I still
> >think, it is a lovely project
> >
> >Jochen
> >
> The rsync over ssh contribution you made has changed my life. I rarely
> use ftp anymore at work and I've been able to simplify many jobs. rsync
> over ssh is great for keeping computers in sync. You can even run remote
> commands over ssh. It's great!
>
That's cool. THX
> I'm glad you're keeping up on the mailing list. Your input is very
> helpful.
>
Hopefully I'll contribute some further stuff in the future.
> As they say in Europe:
> Cheers,
> Joe
>
Re: Cheers
Jochen
BTW. How do you say in Florida?
--
omatix.de internet services
omatix solutions & trainings
omatix onlineverlag
Jochen Metzger
j.m...@om...
Telefon +49(30) 78709298
Fax +49(30) 78709296
--
Hosting für Ihr Business:
http://hosting.omatix.de
|
|
From: Joe Z. <jz...@co...> - 2004-04-14 04:55:12
|
Jochen Metzger wrote:
>Hi Joe,
>hi Rene,
>
>though I am not involved in contributing new stuff since my last
>contribution due to lack of time, I am always studying the list.
>
>1. I am thinking a lot about logging, because I think it is really
>important to know what is going on, or if a backup failed due
>to some circumstances.
>
>My Question to Joe:
>How did you implement the logging function you talked about. Did
>you use the "logger" function from bash ?
>
No. I think the logger logs everything to the syslog. When doing an
rsync backup there's a message for every file backed up so I gave bobs
its own log. I'm just using output redirection in /etc/init.d/cmdloopd
like this:
${CMDLOOP_DIR}/cmdloop >> $LOGFILE 2>&1 &
>
>2. I came across a problem that occurs when rsync is not installed
>on the server you want to back up from to the bobs server.
>
>You obtain an error, and the system will not go on. So it would be
>helpful to implement that in the configuration check as well:
>
>Path:
>Connect with ssh
>Check if rsync is present (-> whereis rsync)
>
>Question:
>Could you implement that in the configuration check, Joe?
>
>
But the configuration check does test rsync if you select a server
configured for rsync then click "Check Configuration":
Testing rsync over ssh for backup method "rsync_ssh" using this command:
rsync -n -e ssh 192.168.0.3:/
>Cheers
>
>I hope, I can contribute some stuff in the future. I still
>think, it is a lovely project
>
>Jochen
>
The rsync over ssh contribution you made has changed my life. I rarely
use ftp anymore at work and I've been able to simplify many jobs. rsync
over ssh is great for keeping computers in sync. You can even run remote
commands over ssh. It's great!
I'm glad you're keeping up on the mailing list. Your input is very
helpful.
As they say in Europe:
Cheers,
Joe
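The redirection idiom Joe describes can be sketched as a small helper. The log path and the echo command below are stand-ins for $LOGFILE and cmdloop, just to show the mechanics:

```shell
# Sketch of the init-script logging idiom: run a long-lived command
# in the background with stdout and stderr appended to one log file.
# LOGFILE and the echo command are stand-ins, not bobs paths.
LOGFILE=$(mktemp)

log_run() {
    # 2>&1 folds errors into the same log as normal output, exactly
    # like: ${CMDLOOP_DIR}/cmdloop >> $LOGFILE 2>&1 &
    "$@" >> "$LOGFILE" 2>&1 &
    wait $!
}

log_run echo "backup started"
```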
|
|
From: Jochen M. <ml...@om...> - 2004-04-12 18:34:31
|
Hi Joe, hi Rene,

though I am not involved in contributing new stuff since my last contribution, due to lack of time, I am always studying the list.

1. I am thinking a lot about logging, because I think it is really important to know what is going on, or if a backup failed due to some circumstances.

My question to Joe:
How did you implement the logging function you talked about? Did you use the "logger" function from bash?

2. I came across a problem that occurs when rsync is not installed on the server you want to back up from to the bobs server.

You obtain an error, and the system will not go on. So it would be helpful to implement that in the configuration check as well:

Path:
Connect with ssh
Check if rsync is present (-> whereis rsync)

Question:
Could you implement that in the configuration check, Joe?

Cheers

I hope I can contribute some stuff in the future. I still think it is a lovely project.

Jochen

--
omatix.de internet services
omatix solutions & trainings
omatix onlineverlag
Jochen Metzger
j.m...@om...
Telefon +49(30) 78709298
Fax +49(30) 78709296
--
Hosting für Ihr Business:
http://hosting.omatix.de |
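The check Jochen proposes could be sketched like this. The helper names are made up, and `command -v` stands in for `whereis` because its exit status is reliable in scripts:

```shell
# Sketch of a configuration check for rsync, locally and over ssh.
# has_cmd and remote_has_rsync are made-up helper names.

# succeeds if the named command is on the local PATH
has_cmd() {
    command -v "$1" >/dev/null 2>&1
}

# connect with ssh and test for rsync on the remote host's PATH
remote_has_rsync() {
    ssh "$1" 'command -v rsync >/dev/null 2>&1'
}

has_cmd sh && echo "sh found locally"
```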
|
From: Rene R. <re...@gr...> - 2004-04-12 02:48:38
|
On Mon, 2004-04-12 at 04:36, Joe Zacky wrote:
> I've started using bobs at work. That's why I've been more active in
> development. It's working great. After the initial backup the
> incremental backups take only a few minutes, even when backing up remote
> computers.
>
> The db3/db4 fix is working fine. The cmdloop starting from init.d is
> working well for me. The bobs.log file has been helpful. I'm ready for
> release 0.6.2. The db3/db4 fix alone is enough to warrant a new release.
>
> Shall I go ahead and release 0.6.2?

Go for it! I can't see any reason not to. These fixes should really be helpful to people.

Cheers
Rene |
|
From: Joe Z. <jz...@co...> - 2004-04-12 02:36:19
|
I've started using bobs at work. That's why I've been more active in development. It's working great. After the initial backup, the incremental backups take only a few minutes, even when backing up remote computers.

The db3/db4 fix is working fine. The cmdloop starting from init.d is working well for me. The bobs.log file has been helpful. I'm ready for release 0.6.2. The db3/db4 fix alone is enough to warrant a new release.

Shall I go ahead and release 0.6.2?

Joe |
|
From: Jochen M. <j.m...@om...> - 2004-04-07 18:00:07
|
Cool,

> >You speak Portuguese?
> >Great!
>
> Not I! But Google does: http://www.google.com/language_tools

I like these guys. Being handy with tech stuff makes life easier.

Cheers
Jochen |
|
From: Joe Z. <joe...@za...> - 2004-04-07 16:07:39
|
Jochen Metzger | steptown.com wrote:
>Hi Joe,
>
>>Download a versão dos cvs ou espere até a liberação 0.6.2
>>Repara o problema db3/db4.
>
>You speak Portuguese?
>Great!

Not I! But Google does: http://www.google.com/language_tools

Joe |