From: <bac...@li...> - 2006-06-07 07:31:53
A BUGNOTE has been added to this bug.
======================================================================
http://bugs.bacula.org/bug_view_advanced_page.php?bug_id=0000620
======================================================================
Reported By: nuddelaug
Assigned To: Dan Langille
======================================================================
Project: bacula
Bug ID: 620
Category: PostgreSQL
Reproducibility: always
Severity: major
Priority: normal
Status: assigned
======================================================================
Date Submitted: 05-23-2006 10:43 PDT
Last Modified: 06-07-2006 00:31 PDT
======================================================================
Summary: Directory Error on PostgreSQL restart
Description:
When restarting PostgreSQL, the Bacula Director stops working with the following error (and many more DB-related errors):

23-May 19:38 server01-dir: *Console*.2006-05-23_19.37.58 Fatal error: sql_get.c:555 sql_get.c:555 query SELECT PoolId,Name,NumVols,MaxVols,UseOnce,UseCatalog,AcceptAnyVolume,AutoPrune,Recycle,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,MaxVolBytes,PoolType,LabelType,LabelFormat FROM Pool WHERE Pool.Name='Default' failed:
FATAL: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
======================================================================
----------------------------------------------------------------------
Dan Langille - 06-05-2006 05:12 PDT
----------------------------------------------------------------------
I'm just back from a few days away, and I find this failed job. There are other similar failed jobs. Notes:

1 - The machine known as ngaio has been powered off for some time, so it is appropriate that the connection attempts should fail.
2 - I do not understand why all the other jobs should fail.
3 - I do recall restarting PostgreSQL.

04-Jun 05:56 bacula-dir: Start Backup JobId 9256, Job=ngaio.2006-06-04_00.55.06
04-Jun 05:59 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: bnet.c:853 Could not connect to File daemon on ngaio.example.org:9102. ERR=Operation timed out Retrying ...
04-Jun 06:43 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: bnet.c:853 Could not connect to File daemon on ngaio.example.org:9102. ERR=Operation timed out Retrying ...
04-Jun 07:27 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: bnet.c:853 Could not connect to File daemon on ngaio.example.org:9102. ERR=Operation timed out Retrying ...
04-Jun 08:11 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: bnet.c:853 Could not connect to File daemon on ngaio.example.org:9102. ERR=Operation timed out Retrying ...
04-Jun 08:55 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: bnet.c:853 Could not connect to File daemon on ngaio.example.org:9102. ERR=Operation timed out Retrying ...
04-Jun 09:39 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: bnet.c:853 Could not connect to File daemon on ngaio.example.org:9102. ERR=Operation timed out Retrying ...
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Fatal error: bnet.c:859 Unable to connect to File daemon on ngaio.example.org:9102.
ERR=Operation timed out
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Error: sql_update.c:169 sql_update.c:169 update UPDATE Job SET JobStatus='E', EndTime='2006-06-04 10:12:03', ClientId=22, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=179, VolSessionTime=1148331062, PoolId=1, FileSetId=74, JobTDate=1149430323 WHERE JobId=9256 failed:
04-Jun 10:12 bacula-dir: sql_update.c:169 UPDATE Job SET JobStatus='E', EndTime='2006-06-04 10:12:03', ClientId=22, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=179, VolSessionTime=1148331062, PoolId=1, FileSetId=74, JobTDate=1149430323 WHERE JobId=9256
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: Error updating job record. sql_update.c:169 update UPDATE Job SET JobStatus='E', EndTime='2006-06-04 10:12:03', ClientId=22, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=179, VolSessionTime=1148331062, PoolId=1, FileSetId=74, JobTDate=1149430323 WHERE JobId=9256 failed:
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Fatal error: sql_get.c:282 sql_get.c:282 query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name FROM Job WHERE JobId=9256 failed:
04-Jun 10:12 bacula-dir: sql_get.c:282 SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name FROM Job WHERE JobId=9256
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: Error getting job record for stats: sql_get.c:282 query SELECT VolSessionId,VolSessionTime,PoolId,StartTime,EndTime,JobFiles,JobBytes,JobTDate,Job,JobStatus,Type,Level,ClientId,Name FROM Job WHERE JobId=9256 failed:
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Fatal error: sql_get.c:631 sql_get.c:631 query SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE Client.Name='ngaio-fd' failed:
04-Jun 10:12 bacula-dir: sql_get.c:631 SELECT ClientId,Name,Uname,AutoPrune,FileRetention,JobRetention FROM Client WHERE
Client.Name='ngaio-fd'
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: Error getting client record for stats: Client record not found in Catalog.
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Fatal error: sql_get.c:857 sql_get.c:857 query SELECT MediaId,VolumeName,VolJobs,VolFiles,VolBlocks,VolBytes,VolMounts,VolErrors,VolWrites,MaxVolBytes,VolCapacityBytes,MediaType,VolStatus,PoolId,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,Recycle,Slot,FirstWritten,LastWritten,InChanger,EndFile,EndBlock,VolParts,LabelType,LabelDate,StorageId FROM Media WHERE VolumeName='DLT7000-JYN244' failed:
04-Jun 10:12 bacula-dir: sql_get.c:857 SELECT MediaId,VolumeName,VolJobs,VolFiles,VolBlocks,VolBytes,VolMounts,VolErrors,VolWrites,MaxVolBytes,VolCapacityBytes,MediaType,VolStatus,PoolId,VolRetention,VolUseDuration,MaxVolJobs,MaxVolFiles,Recycle,Slot,FirstWritten,LastWritten,InChanger,EndFile,EndBlock,VolParts,LabelType,LabelDate,StorageId FROM Media WHERE VolumeName='DLT7000-JYN244'
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Warning: Error getting Media record for Volume "DLT7000-JYN244": ERR=Media record for Vol=DLT7000-JYN244 not found in Catalog.
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Fatal error: sql_get.c:340 sql_get.c:340 query SELECT VolumeName,MAX(VolIndex) FROM JobMedia,Media WHERE JobMedia.JobId=9256 AND JobMedia.MediaId=Media.MediaId GROUP BY VolumeName ORDER BY 2 ASC failed:
04-Jun 10:12 bacula-dir: sql_get.c:340 SELECT VolumeName,MAX(VolIndex) FROM JobMedia,Media WHERE JobMedia.JobId=9256 AND JobMedia.MediaId=Media.MediaId GROUP BY VolumeName ORDER BY 2 ASC
04-Jun 10:12 bacula-dir: ngaio.2006-06-04_00.55.06 Error: Bacula 1.38.8 (14Apr06): 04-Jun-2006 10:12:03
  JobId:                  9256
  Job:                    ngaio.2006-06-04_00.55.06
  Backup Level:           Full
  Client:                 "ngaio-fd"
  FileSet:                "ngaio files" 2006-04-23 11:00:58
  Pool:                   "Default"
  Storage:                "DLT"
  Scheduled time:         04-Jun-2006 00:55:05
  Start time:             04-Jun-2006 05:56:59
  End time:               04-Jun-2006 10:12:03
  Elapsed time:           4 hours 15 mins 4 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  Volume name(s):
  Volume Session Id:      179
  Volume Session Time:    1148331062
  Last Volume Bytes:      0 (0 B)
  Non-fatal FD errors:    1
  SD Errors:              0
  FD termination status:
  SD termination status:  Waiting on FD
  Termination:            *** Backup Error ***
----------------------------------------------------------------------
Dan Langille - 06-05-2006 05:14 PDT
----------------------------------------------------------------------
Here is a failed verify job from the same period:

04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Fatal error: sql_find.c:229 sql_find.c:229 query SELECT JobId FROM Job WHERE Type='B' AND JobStatus='T' AND ClientId=2 ORDER BY StartTime DESC LIMIT 1 failed:
04-Jun 10:12 bacula-dir: sql_find.c:229 SELECT JobId FROM Job WHERE Type='B' AND JobStatus='T' AND ClientId=2 ORDER BY StartTime DESC LIMIT 1
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Fatal error: Unable to find JobId of previous Job for this client.
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Error: sql_update.c:169 sql_update.c:169 update UPDATE Job SET JobStatus='f', EndTime='2006-06-04 10:12:05', ClientId=2, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=0, VolSessionTime=0, PoolId=NULL, FileSetId=NULL, JobTDate=1149430325 WHERE JobId=9259 failed:
04-Jun 10:12 bacula-dir: sql_update.c:169 UPDATE Job SET JobStatus='f', EndTime='2006-06-04 10:12:05', ClientId=2, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=0, VolSessionTime=0, PoolId=NULL, FileSetId=NULL, JobTDate=1149430325 WHERE JobId=9259
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Warning: Error updating job record. sql_update.c:169 update UPDATE Job SET JobStatus='f', EndTime='2006-06-04 10:12:05', ClientId=2, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=0, VolSessionTime=0, PoolId=NULL, FileSetId=NULL, JobTDate=1149430325 WHERE JobId=9259 failed:
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Error: Bacula 1.38.8 (14Apr06): 04-Jun-2006 10:12:05
  JobId:                  9259
  Job:                    Verify.2006-06-04_00.59.00
  FileSet:                polo files
  Verify Level:           DiskToCatalog
  Client:                 polo-fd
  Verify JobId:           0
  Verify Job:
  Start time:             04-Jun-2006 10:12:05
  End time:               04-Jun-2006 10:12:05
  Files Examined:         0
  Non-fatal FD errors:    1
  FD termination status:
  Termination:            *** Verify Error ***
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Error: sql_update.c:113 sql_update.c:113 update UPDATE Job SET JobStatus='f',Level='d',StartTime='2006-06-04 10:12:05',ClientId=2,JobTDate=1149430325,PoolId=0 WHERE JobId=9259 failed:
04-Jun 10:12 bacula-dir: sql_update.c:113 UPDATE Job SET JobStatus='f',Level='d',StartTime='2006-06-04 10:12:05',ClientId=2,JobTDate=1149430325,PoolId=0 WHERE JobId=9259
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Fatal error: sql_update.c:113 update UPDATE Job SET JobStatus='f',Level='d',StartTime='2006-06-04 10:12:05',ClientId=2,JobTDate=1149430325,PoolId=0 WHERE JobId=9259 failed:
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Error:
sql_update.c:169 sql_update.c:169 update UPDATE Job SET JobStatus='f', EndTime='2006-06-04 10:12:05', ClientId=2, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=0, VolSessionTime=0, PoolId=NULL, FileSetId=NULL, JobTDate=1149430325 WHERE JobId=9259 failed:
04-Jun 10:12 bacula-dir: sql_update.c:169 UPDATE Job SET JobStatus='f', EndTime='2006-06-04 10:12:05', ClientId=2, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=0, VolSessionTime=0, PoolId=NULL, FileSetId=NULL, JobTDate=1149430325 WHERE JobId=9259
04-Jun 10:12 bacula-dir: Verify.2006-06-04_00.59.00 Warning: Error updating job record. sql_update.c:169 update UPDATE Job SET JobStatus='f', EndTime='2006-06-04 10:12:05', ClientId=2, JobBytes=0, JobFiles=0, JobErrors=0, VolSessionId=0, VolSessionTime=0, PoolId=NULL, FileSetId=NULL, JobTDate=1149430325 WHERE JobId=9259 failed:
----------------------------------------------------------------------
Dan Langille - 06-05-2006 05:16 PDT
----------------------------------------------------------------------
Later that same day, things had improved:

04-Jun 15:15 bacula-dir: Start Backup JobId 9263, Job=pepper.2006-06-04_15.15.00
04-Jun 15:15 pepper-fd: Generate VSS snapshots. Driver="VSS WinXP", Drive(s)="C"
04-Jun 15:15 pepper-fd: VSS Writer: "Microsoft Writer (Service State)", State: 1 (VSS_WS_STABLE)
04-Jun 15:15 pepper-fd: VSS Writer: "MSDEWriter", State: 1 (VSS_WS_STABLE)
04-Jun 15:15 pepper-fd: VSS Writer: "Microsoft Writer (Bootable State)", State: 1 (VSS_WS_STABLE)
04-Jun 15:15 pepper-fd: VSS Writer: "WMI Writer", State: 1 (VSS_WS_STABLE)
04-Jun 15:48 polo-sd: End of Volume "DLT7000-JYN244" at 112:14410 on device "DLT" (/dev/nsa0). Write of 64512 bytes got 0.
04-Jun 15:48 polo-sd: pepper.2006-06-04_15.15.00 Error: Re-read last block at EOT failed. ERR=block.c:958 Read zero bytes at 112:0 on device "DLT" (/dev/nsa0).
04-Jun 15:48 polo-sd: End of medium on Volume "DLT7000-JYN244" Bytes=44,610,724,688 Blocks=691,555 at 04-Jun-2006 15:48.
04-Jun 15:49 polo-sd: Please mount Volume "DLT7000-004" on Storage Device "DLT" (/dev/nsa0) for Job pepper.2006-06-04_15.15.00
04-Jun 16:49 polo-sd: Please mount Volume "DLT7000-004" on Storage Device "DLT" (/dev/nsa0) for Job pepper.2006-06-04_15.15.00
04-Jun 18:49 polo-sd: Please mount Volume "DLT7000-004" on Storage Device "DLT" (/dev/nsa0) for Job pepper.2006-06-04_15.15.00
04-Jun 19:00 bacula-dir: pepper.2006-06-04_15.15.00 Fatal error: Network error with FD during Backup: ERR=Connection reset by peer
04-Jun 19:00 bacula-dir: pepper.2006-06-04_15.15.00 Fatal error: No Job status returned from FD.
04-Jun 19:00 bacula-dir: pepper.2006-06-04_15.15.00 Error: Bacula 1.38.8 (14Apr06): 04-Jun-2006 19:00:48
  JobId:                  9263
  Job:                    pepper.2006-06-04_15.15.00
  Backup Level:           Full
  Client:                 "pepper-fd" Windows XP,MVS,NT 5.1.2600
  FileSet:                "pepper files" 2006-01-28 14:49:11
  Pool:                   "Default"
  Storage:                "DLT"
  Scheduled time:         04-Jun-2006 15:15:00
  Start time:             04-Jun-2006 15:15:02
  End time:               04-Jun-2006 19:00:48
  Elapsed time:           3 hours 45 mins 46 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  Volume name(s):         DLT7000-JYN244
  Volume Session Id:      180
  Volume Session Time:    1148331062
  Last Volume Bytes:      1 (1 B)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Error
  Termination:            *** Backup Error ***
----------------------------------------------------------------------
Dan Langille - 06-05-2006 05:17 PDT
----------------------------------------------------------------------
At present, the DLT is just waiting for a new tape:

*status storage=DLT
Connecting to Storage daemon DLT at bacula.unixathome.org:9103
polo-sd Version: 1.38.8 (14 April 2006) i386-portbld-freebsd4.11 freebsd 4.11-STABLE
Daemon started 22-May-06 16:51, 179 Jobs run since started.
Running Jobs:
Writing: Full Backup job pepper JobId=9263 Volume="DLT7000-004" pool="Default" device=""DLT" (/dev/nsa0)" Files=9,802 Bytes=6,923,824,183 Bytes/sec=112,920 FDReadSeqNo=191,419 in_msg=162183 out_msg=5 fd=12
Writing: Full Backup job BackupCatalog JobId=9264 Volume="" pool="Default" device=""DLT" (/dev/nsa0)" Files=0 Bytes=0 Bytes/sec=0 FDReadSeqNo=6 in_msg=6 out_msg=4 fd=14
====
Jobs waiting to reserve a drive:
====
Terminated Jobs:
 JobId  Level    Files           Bytes  Status  Finished         Name
======================================================================
  9247  Incr       184   5,054,359,591  OK      03-Jun-06 09:19  xeon
  9248  Incr       228      47,413,066  OK      03-Jun-06 15:19  pepper
  9249  Full         1     101,716,955  OK      03-Jun-06 15:21  BackupCatalog
  9250  Full       134         952,584  OK      04-Jun-06 00:55  undef
  9251  Full    34,888   2,857,161,698  OK      04-Jun-06 01:06  xeon
  9252  Full    20,721  11,994,796,180  OK      04-Jun-06 05:44  wocker
  9253  Full     1,236     314,445,066  OK      04-Jun-06 05:45  bast
  9254  Full     3,777      60,960,440  OK      04-Jun-06 05:46  dfc
  9255  Full    63,233   2,701,906,826  OK      04-Jun-06 05:56  polo
  9256  Full         0               0  Other   04-Jun-06 06:27  ngaio
====
Device status:
Device "FileStorage" (/home/bacula/db) is not open or does not exist.
Device "DLT" (/dev/nsa0) open but no Bacula volume is mounted.
    Device is BLOCKED waiting for media.
    Total Bytes Read=0 Blocks Read=0 Bytes/block=0
    Positioned at File=0 Block=0
====
In Use Volume status:
====
Data spooling: 0 active jobs, 0 bytes; 34 total jobs, 85,074,050 max bytes/job.
Attr spooling: 0 active jobs, 0 bytes; 34 total jobs, 390,519 max bytes.
*
----------------------------------------------------------------------
Dan Langille - 06-05-2006 12:16 PDT
----------------------------------------------------------------------
After restarting bacula-sd, notice the writing of the label at the end of this paste.
The time 15:14 corresponds to when I restarted bacula:

[dan@polo:~] $ grep restart /var/log/messages
Jun  5 15:14:31 polo sudo: dan : TTY=ttyp0 ; PWD=/usr/home/dan ; USER=root ; COMMAND=/usr/local/etc/rc.d/bacula-sd.sh restart

[dan@polo:~] $ bconsole -c /usr/local/etc/bconsole.conf
Connecting to Director bacula.unixathome.org:9101
1000 OK: bacula-dir Version: 1.38.8 (14 April 2006)
Enter a period to cancel a command.
*mes
05-Jun 15:14 undef-fd: undef.2006-06-05_00.55.00 Fatal error: job.c:1599 Comm error with SD. bad response to Append Data. ERR=Broken pipe
05-Jun 15:14 bacula-dir: undef.2006-06-05_00.55.00 Error: bnet.c:426 Write error sending 280 bytes to Storage daemon:bacula.unixathome.org:9103: ERR=Broken pipe
05-Jun 15:14 bacula-dir: undef.2006-06-05_00.55.00 Error: Bacula 1.38.8 (14Apr06): 05-Jun-2006 15:14:31
  JobId:                  9265
  Job:                    undef.2006-06-05_00.55.00
  Backup Level:           Incremental, since=2006-06-04 00:55:03
  Client:                 "undef-fd" i386-portbld-freebsd5.4,freebsd,5.4-STABLE
  FileSet:                "undef files" 2006-01-29 03:18:14
  Pool:                   "Default"
  Storage:                "DLT"
  Scheduled time:         05-Jun-2006 00:55:00
  Start time:             05-Jun-2006 09:46:24
  End time:               05-Jun-2006 15:14:31
  Elapsed time:           5 hours 28 mins 7 secs
  Priority:               10
  FD Files Written:       0
  SD Files Written:       0
  FD Bytes Written:       0 (0 B)
  SD Bytes Written:       0 (0 B)
  Rate:                   0.0 KB/s
  Software Compression:   None
  Volume name(s):
  Volume Session Id:      182
  Volume Session Time:    1148331062
  Last Volume Bytes:      1 (1 B)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  Error
  SD termination status:  Error
  Termination:            *** Backup Error ***
05-Jun 15:14 bacula-dir: Start Backup JobId 9266, Job=xeon.2006-06-05_00.55.01
05-Jun 15:14 polo-sd: Wrote label to prelabeled Volume "DLT7000-JNY227" on device "DLT" (/dev/nsa0)
*
----------------------------------------------------------------------
kern - 06-05-2006 14:21 PDT
----------------------------------------------------------------------
Dan, the problem you describe doesn't seem to be at all related to the
problem reported in this bug report, which is that Bacula should attempt to reconnect to PostgreSQL after a disconnect. Your bug appears to be due to either the OS or Bacula being totally hosed. The SQL errors don't even have an appropriate error message printed.
----------------------------------------------------------------------
kern - 06-05-2006 14:24 PDT
----------------------------------------------------------------------
Concerning the original bug report: this is worth fixing only if PostgreSQL has an option for the client libraries (supplied by PostgreSQL) to reconnect automatically. Such an option is available for MySQL, and Bacula uses it, but I am not a PostgreSQL expert, so someone else will need to tell me if such an option exists and how to set it. If such an option does not exist, then this is not a bug that will be fixed, because I see no reason to add code after every SQL call to attempt to reconnect -- this would be too much code and impossible to maintain. The proper place for this is in the PostgreSQL client libraries supplied as part of PostgreSQL.
----------------------------------------------------------------------
Dan Langille - 06-05-2006 15:09 PDT
----------------------------------------------------------------------
The reason I started pasting these things is that I recall restarting PostgreSQL.
----------------------------------------------------------------------
Dan Langille - 06-06-2006 20:23 PDT
----------------------------------------------------------------------
I've been reading http://www.postgresql.org/docs/8.1/static/libpq.html and considering PQreset (second function up from the bottom of the page). I think it would be sufficient for the Director to check the status of the connection each time a new job is started. If the connection is faulty, call PQreset. The connection status can be queried with PQstatus (http://www.postgresql.org/docs/8.1/static/libpq-status.html).
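[Editor's note: the check-and-reset approach described above can be sketched with libpq roughly as follows. This is a minimal illustration, not Bacula's actual code; the function name, the conninfo string, and the error handling are assumptions.]

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Sketch: called before starting a new job. Checks whether the catalog
 * connection is still alive and, if not, asks libpq to re-establish it
 * using the parameters from the original PQconnectdb() call. */
static int ensure_catalog_connection(PGconn *conn)
{
    if (PQstatus(conn) == CONNECTION_OK)
        return 1;               /* connection is healthy; nothing to do */

    PQreset(conn);              /* close and re-open the server connection */
    return PQstatus(conn) == CONNECTION_OK;
}

int main(void)
{
    /* "dbname=bacula" is a placeholder conninfo string */
    PGconn *conn = PQconnectdb("dbname=bacula");

    if (!ensure_catalog_connection(conn))
        fprintf(stderr, "catalog connection could not be established: %s",
                PQerrorMessage(conn));

    PQfinish(conn);
    return 0;
}
```

Note that PQreset performs a blocking reconnect; it only helps once the server is accepting connections again, so a job started while PostgreSQL is still restarting would still need to handle the failure.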
Kern: if that makes sense and I'm not missing anything, I'll try playing with this soon. I'm asking for a sanity check, please, in case I've missed something essential regarding the Director and job initiation.
----------------------------------------------------------------------
kern - 06-07-2006 00:31 PDT
----------------------------------------------------------------------
Please try what you are suggesting. It could improve the situation by limiting the damage to a single job. However, IMO, this is a feature that should be in the PostgreSQL client library (i.e. the request should be sent upstream to the PostgreSQL developers). If "auto-reset" is turned on by the calling program, the library should attempt to re-establish the connection with the server on every failed request where it determines that the connection has been lost. With such code, PostgreSQL becomes much more robust and there will be no jobs that fail.

Bug History
Date Modified    Username       Field    Change
======================================================================
05-23-06 10:43   nuddelaug      New Bug
06-05-06 05:12   Dan Langille   Bugnote Added: 0001724
06-05-06 05:14   Dan Langille   Bugnote Added: 0001725
06-05-06 05:16   Dan Langille   Bugnote Added: 0001726
06-05-06 05:17   Dan Langille   Bugnote Added: 0001727
06-05-06 12:16   Dan Langille   Bugnote Added: 0001728
06-05-06 14:21   kern           Bugnote Added: 0001729
06-05-06 14:24   kern           Bugnote Added: 0001730
06-05-06 15:09   Dan Langille   Bugnote Added: 0001731
06-06-06 20:23   Dan Langille   Bugnote Added: 0001732
06-07-06 00:31   kern           Bugnote Added: 0001733
======================================================================