From: Alexandre D. <ad...@fo...> - 2003-11-02 18:07:36
Hello,

The dumphtml and ziphtml actions (from the latest CVS version) generate an
incomplete dump. With dumphtml I get about 120 pages on average before the
dump stops, well short of the end (the wiki holds quite a few more, around
1000 pages). With ziphtml the generation stops halfway through and leaves a
corrupted zip archive. This function seems to trigger a signal 11 (segfault)
in the httpd process handling the request (only this function does).

rcs_id('$Id: main.php,v 1.99 2003/03/07 02:39:47 dairiki Exp $');
rcs_id('$Id: loadsave.php,v 1.80 2003/03/07 02:46:57 dairiki Exp $');

Apache/1.3.28 (Unix) PHP/4.3.3 mod_gzip/1.3.19.1a mod_perl/1.28
mod_python/2.7.8 Python/2.3 mod_ssl/2.8.15 OpenSSL/0.9.7c

Is this a known issue? Is it related to the size of the wiki (more than
1000 pages), or to something in PHP itself?

Thanks a lot. Have a nice day.

adulau

--
-- Alexandre Dulaunoy (adulau) -- http://www.foo.be/
-- http://pgp.ael.be:11371/pks/lookup?op=get&search=0x44E6CBCD
-- "Knowledge can create problems, it is not through ignorance
-- that we can solve them"  Isaac Asimov
From: Frank S. <fra...@rn...> - 2003-11-03 10:06:55
> >>> "ad...@fo..." 11/02/03 18:07 >>>
>
> The dumphtml and ziphtml actions (from the latest CVS version) generate
> an incomplete dump. With dumphtml I get about 120 pages on average
> before the dump stops, well short of the end (the wiki holds quite a few
> more, around 1000 pages). With ziphtml the generation stops halfway
> through and leaves a corrupted zip archive. This function seems to
> trigger a signal 11 (segfault) in the httpd process handling the request
> (only this function does).
>
> [...]
>
> Is this a known issue? Is it related to the size of the wiki (more than
> 1000 pages), or to something in PHP itself?

The last time I tried this (on a wiki with ~1500 pages), what I found was
that each page took longer to dump than the page before. You can tell
whether you're hitting the same problem by inspecting the generated XHTML:
each page contains a "phpwiki source:" comment that gets progressively
longer.

The solution is to change your index.php, somewhere around line 57, from

    if (!defined('DEBUG')) define ('DEBUG', 1);

to

    if (!defined('DEBUG')) define ('DEBUG', false);

Once I'd done this, I could quite happily do a proper zipdump.

frank
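A quick way to confirm the symptom Frank describes, before touching DEBUG,
is to measure the embedded comment in each dumped file. The sketch below is
not part of PhpWiki; the dump directory and the exact shape of the
"phpwiki source:" comment are assumptions, so adjust as needed.

    <?php
    // Minimal sketch: print each dumped page alongside the length of its
    // embedded "phpwiki source:" comment (text up to the closing -->).
    // If DEBUG is enabled and the problem Frank describes is present,
    // the lengths grow steadily from page to page.
    $dir = '/tmp/wikidump';   // hypothetical directory holding the dumphtml output
    foreach (glob("$dir/*") as $file) {
        $html = file_get_contents($file);
        $pos  = strpos($html, 'phpwiki source:');
        $end  = ($pos === false) ? false : strpos($html, '-->', $pos);
        $len  = ($pos !== false && $end !== false) ? $end - $pos : 0;
        printf("%8d  %s\n", $len, basename($file));
    }
    ?>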
From: Bob A. <apt...@cy...> - 2004-01-06 04:51:21
Hi,

[... drifting in from the ancient past ...]

On Mon, 3 Nov 2003 10:06:51 -0000 "Frank Shearar" <fra...@rn...> wrote:

> > The dumphtml and ziphtml actions (from the latest CVS version) generate
> > an incomplete dump. [...] Is this a known issue? Is it related to the
> > size of the wiki (more than 1000 pages), or to something in PHP itself?
>
> The last time I tried this (on a wiki with ~1500 pages), what I found was
> that each page took longer to dump than the page before. [...]
>
> The solution is to change your index.php, somewhere around line 57, from
>
>     if (!defined('DEBUG')) define ('DEBUG', 1);
>
> to
>
>     if (!defined('DEBUG')) define ('DEBUG', false);
>
> Once I'd done this, I could quite happily do a proper zipdump.

I'm running 1.3.3, and at some point in the indeterminate past my automated
job that pulls the full (archive) zip dumps started producing corrupted
archives. Oddly, those files were smaller than the zip snapshots.

Recently I dug into the code looking for an answer and started substituting
files from CVS into the running application. That didn't help, because the
problem wasn't in the code: it was in php.ini. Once I increased memory_limit
from 8M to 32M and max_execution_time from 90 to 180, the corrupt archives
went away. I'm virtually certain that raising memory_limit is what fixed it,
although I have no idea what the limit actually needs to be (how much memory
is consumed per wiki page, or per MB of wiki content); I doubt the longer
max_execution_time made any difference.

Either way, the problem is resolved for the time being, and hopefully I'll
get the wiki moved to a more modern, supported platform soon. Getting the
zip dump working has made that much more achievable.

One other question: is there a way to get a zip dump without going through
the webserver, i.e. a PHP command-line script I can run to produce it? My
version of Apache has become very unstable lately, and I was worried I
wouldn't be able to pull a decent backup before the server tanked
completely.

Thanks much,

-- Bob
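On the command-line question: one hedged workaround is to fake the usual
dump request from the PHP CLI by filling in the superglobals and including
index.php. Everything below is an assumption (the install and output paths,
the action arguments, which are copied from the sort of URL a browser would
use for the full zip dump, and whether 1.3.x's request handling tolerates
running outside a real web server), so treat it as a rough, untested sketch
rather than a supported feature.

    <?php
    // Rough sketch: run PhpWiki's zip dump from the PHP CLI by pretending
    // to be the usual web request. Untested against 1.3.3; paths and the
    // action arguments are placeholders.
    chdir('/path/to/phpwiki');                 // hypothetical install directory

    // The limits Bob raised in php.ini (memory_limit 8M -> 32M,
    // max_execution_time 90 -> 180), repeated here in case the CLI reads a
    // different configuration.
    ini_set('memory_limit', '32M');
    set_time_limit(180);

    // Fake the request a browser would normally send for the full zip dump.
    $_GET = $_REQUEST = array('action' => 'zip', 'include' => 'all');
    $GLOBALS['HTTP_GET_VARS'] = $_GET;         // older PHP4-style access, just in case
    $_SERVER['REQUEST_METHOD'] = 'GET';
    $_SERVER['SERVER_NAME']    = 'localhost';
    $_SERVER['REQUEST_URI']    = '/index.php?action=zip&include=all';

    ob_start();                                // capture what would go to the browser
    include 'index.php';
    $zip = ob_get_clean();

    $fh = fopen('/tmp/wikidump.zip', 'wb');    // hypothetical output path
    fwrite($fh, $zip);
    fclose($fh);
    ?>

Run as something like "php -f zipdump.php" from cron. If the included code
bails out because it wants more of the web environment than this fakes, it
may be simpler to script a local HTTP fetch of the same URL instead.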