From: Jim C. <li...@yg...> - 2003-05-04 21:30:51
On Wednesday, April 30, 2003, at 03:45 PM, Lachlan Andrew wrote:

> Looking through the ChangeLog for libtool 1.5 (Released 14 April):
>
> 2003-03-21 Peter O'Gorman <pe...@po...>
> * libtool.m4 (darwin): Check compiler is apple gcc, add
> -single_module support so that dyloading c++ shared libraries will
> work.
>
> That sounds like just the patch we need! If someone tells me the
> steps (autoconf? automake? ???), I'll download and run 1.5 on the
> weekend.

I tried using version 1.5 this afternoon with no joy. I am still getting undefined symbols for everything that is supposed to be coming from the C++ library. However, I am not sure that I really know what I am doing ;)

I built the standard GNU Libtool 1.5 distribution and ran libtoolize --copy --force in the top-level ht://Dig directory. I tested with and without rerunning aclocal, which libtoolize implied might be necessary. Neither approach helped.

If anyone has suggestions on other things to try in addition to simply updating libtool, I would be happy to give them a try.

Jim
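For what it's worth, the rough sequence was the following (the tarball name and install prefix are only examples; adjust them to your setup):

    # build and install GNU libtool 1.5 (prefix is illustrative)
    tar xzf libtool-1.5.tar.gz
    cd libtool-1.5
    ./configure --prefix=/usr/local
    make && make install        # may need root, depending on the prefix
    cd ..

    # re-libtoolize the ht://Dig source tree with the new libtool
    cd htdig
    libtoolize --copy --force
    aclocal                     # optional; libtoolize hints this may be needed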
From: Gabriele B. <bar...@in...> - 2003-05-01 19:58:11
Ciao Lachlan!

> 2003-03-21 Peter O'Gorman <pe...@po...>
> * libtool.m4 (darwin): Check compiler is apple gcc, add
> -single_module support so that dyloading c++ shared libraries will
> work.

I hadn't checked libtool recently, but dylibs were exactly the problem I encountered with libtool. I hope this will fix it.

> That sounds like just the patch we need! If someone tells me the
> steps (autoconf? automake? ???), I'll download and run 1.5 on the
> weekend.

I guess you only have to launch (I am not quite sure about it though - have a look at the man page):

    libtoolize --copy

inside the htdig source directory. I don't think you need to run autoconf or automake either.

However, in general, here are the steps I usually follow with ht://Check and - every death of a pope (as we say in Italy - that is to say very rarely :-D ) - with ht://Dig. You can find them here:

http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/htcheck/htcheck/README.developers

from the 'Browse CVS' section of SF for ht://Check.

Step 1 (I think it is not necessary now)
=======
aclocal

Creates the macro file aclocal.m4 with definitions for libtool.

Step 2
=======
libtoolize --copy

Copies the libtool files.

Step 3
=======
autoconf

Creates the file 'configure'.

Step 4
=======
automake --foreign --add-missing --copy -v -f

Step 5
======
autoreconf

Ciao ciao
-Gabriele

--
Gabriele Bartolini: Web Programmer, ht://Dig & IWA/HWG Member, ht://Check maintainer
Current Location: Prato, Tuscany, Italy
bar...@in... | http://www.prato.linux.it/~gbartolini | ICQ#129221447
> "Leave every hope, ye who enter!", Dante Alighieri, Divine Comedy, The Inferno
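Put together, and assuming everything is run from the top-level htdig source directory, the whole chain looks roughly like this:

    cd htdig            # top-level ht://Dig source directory
    aclocal             # step 1: regenerate aclocal.m4 (probably not needed now)
    libtoolize --copy   # step 2: copy in the libtool support files
    autoconf            # step 3: regenerate 'configure'
    automake --foreign --add-missing --copy -v -f   # step 4: regenerate the Makefile.in files
    autoreconf          # step 5: rerun the whole chain for consistency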
From: Lachlan A. <lh...@us...> - 2003-05-19 10:42:13
Thanks for your help with the auto* process, Gabriele.
Am I right in thinking that running
autoreconf
twice in the last step should make no difference? When I run
autoreconf (from Autoconf 2.57) the configure file is very
different from the one in CVS. What's more, the one I generate
doesn't work :( (I think there is a bug in the g++ compatibility
library which it trips up on.)
Any ideas?
Thanks,
Lachlan
On Fri, 2 May 2003 05:55, Gabriele Bartolini wrote:
> I don't think you need to run autoconf or automake either. However,
> in general, here are the steps I usually follow with ht://Check and
> - every death of a pope (as we say in Italy - that is to say very
> rarely :-D ) - with ht://Dig.
> Step 5
> autoreconf
From: Gabriele B. <g.b...@co...> - 2003-05-19 11:44:15
Ciao Lachlan!

On Mon, 2003-05-19 at 12:41, Lachlan Andrew wrote:
> Thanks for your help with the auto* process, Gabriele.

No worries. :-P

> Am I right in thinking that running
> autoreconf
> twice in the last step should make no difference? When I run
> autoreconf (from Autoconf 2.57) the configure file is very
> different from the one in CVS. What's more, the one I generate
> doesn't work :( (I think there is a bug in the g++ compatibility
> library which it trips up on.)
>
> Any ideas?

I have always tried to avoid the autoreconf tool, as I don't trust things that do things on your behalf. However, are you sure you did not change the config.sub and config.guess files? What about aclocal.m4? Also, autoreconf calls autoheader, which can be pretty dangerous IMHO.

So ... try running the other tools separately. Which versions of autoconf (2.57, right?), automake and libtool are you using?

-Gabriele

--
Gabriele Bartolini - Web Programmer
Comune di Prato - Prato - Tuscany - Italy
g.b...@co... | http://www.comune.prato.it
> find bin/laden -name osama -exec rm {} ;
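For the version check, the standard --version flags will do (the output line shown is only an example of the format):

    autoconf --version     # e.g. "autoconf (GNU Autoconf) 2.57"
    automake --version
    aclocal --version
    libtool --version      # or libtoolize --version for the copy on your PATH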
From: Lachlan A. <lh...@us...> - 2003-05-20 10:30:19
Greetings Gabriele,

I wasn't running any of the other tools. I just thought that, being the last step in the process you recommended, running autoreconf by itself should give the same files back. When I run it twice in a row, it gives the same configure file each time, but that file is different from the one in CVS. Does that mean you didn't run autoreconf when you updated the files three months ago?

Lachlan

On Mon, 19 May 2003 21:44, Gabriele Bartolini wrote:
> On Mon, 2003-05-19 at 12:41, Lachlan Andrew wrote:
> > Am I right in thinking that running
> > autoreconf
> > twice in the last step should make no difference? When I run
> > autoreconf (from Autoconf 2.57) the configure file is very
> > different from the one in CVS.
>
> However, are you sure you did not change the config.sub and
> config.guess files? What about aclocal.m4?
>
> So ... try running the other tools separately. Which versions of
> autoconf (2.57, right?), automake and libtool are you using?
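One way to see exactly how far the regenerated script drifts from the checked-in one (the copy's file name here is only illustrative):

    cp configure configure.cvs      # keep a pristine copy of the CVS version first
    autoreconf
    diff -u configure.cvs configure | less

    # or let CVS compare against the repository copy directly
    cvs diff -u configure | less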
From: Ted Stresen-R. <ted...@ma...> - 2003-05-27 17:22:10
Hi,

I just heard from one of the developers of libtool, Peter O'Gorman, who has been doing a lot of fixing to get libtool to work properly with OS X. In short, his input was: use version 1.5 of libtool. Judging by the number of changes to libtool that deal specifically with Darwin (OS X) related issues, I'd say this is a promising path to follow.

Peter is trying to build htdig from the most recent snapshots with libtool 1.5 and will get back to me with the results.

Peter also said that you can tell what version of libtool we are using by looking "at the configure line". Specifically, he says:

"Your developer can easily check if he is using libtool-1.5 by looking at the configure line:

checking how to recognise dependent libraries... pass_all (on 1.5)

and file_magic Mach-O dynamically linked shared library (anything other than 1.5)"

A quick search of the htdig source would indicate that we are using a pre-1.5 version of libtool. I hope this helps.

Ted Stresen-Reuter

On Tuesday, May 20, 2003, at 05:29 AM, Lachlan Andrew wrote:
> Greetings Gabriele,
>
> I wasn't running any of the other tools. I just thought that, being
> the last step in the process you recommended, running autoreconf by
> itself should give the same files back. When I run it twice in a
> row, it gives the same configure file each time, but that file
> is different from the one in CVS. Does that mean you didn't run
> autoreconf when you updated the files three months ago?
>
> Lachlan
>
> On Mon, 19 May 2003 21:44, Gabriele Bartolini wrote:
>> On Mon, 2003-05-19 at 12:41, Lachlan Andrew wrote:
>>> Am I right in thinking that running
>>> autoreconf
>>> twice in the last step should make no difference? When I run
>>> autoreconf (from Autoconf 2.57) the configure file is very
>>> different from the one in CVS.
>>
>> However, are you sure you did not change the config.sub and
>> config.guess files? What about aclocal.m4?
>>
>> So ... try running the other tools separately. Which versions of
>> autoconf (2.57, right?), automake and libtool are you using?
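Two quick ways to do that check from the top-level htdig source directory (the exact form of the version line in ltmain.sh can vary between libtool releases):

    # watch what configure reports as it runs
    ./configure 2>&1 | grep "dependent libraries"

    # or look at the bundled libtool script itself
    grep '^VERSION' ltmain.sh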
From: Jim C. <li...@yg...> - 2003-05-04 18:31:43
Hi -

I am still having problems with this patch applied. The dig makes it much farther than it did last time around, but compression still appears to be an issue. I receive the following a few times toward the end of the dig.

WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
WordDB: PANIC: Input/output error
WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
WordDB: PANIC: Input/output error

Then at the end of the dig, I see the following.

WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
WordDB: PANIC: Input/output error
WordDBCursor::Get(17) failed DB_RUNRECOVERY: Fatal error, run database recovery

The referenced page number is the same for every error message. For this test, the default value was used for wordlist_page_size. Both wordlist_compress and wordlist_compress_zlib were set to true, and compression_level was set to 8. I used a fresh copy of the CVS code (as of yesterday) with dbase.patch2 applied.

Jim

On Sunday, April 27, 2003, at 02:27 AM, Lachlan Andrew wrote:
> OK, here is another patch...
>
> The problem was that I was using the counts of clean and dirty cache
> pages, but they were not recorded correctly in the old BDB 2.x
> (although that didn't show up in my initial tests...). This patch
> contains the fix, copied from BDB 3.3.11.
>
> Thanks again for your help :)
>
> Lachlan
>
> On Sun, 27 Apr 2003 12:54, Jim Cole wrote:
>> I am still running into fatal problems with OS X. I no longer get
>> the segfault, but instead see the output shown below.
> <dbase.patch2>
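For reference, the relevant settings from that run, as they would appear in the htdig .conf file (wordlist_page_size was left unset, so the default applied):

    # WordDB compression settings used for this test
    wordlist_compress:      true
    wordlist_compress_zlib: true
    compression_level:      8
    # wordlist_page_size not set; default value used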
From: Ted Stresen-R. <ted...@ma...> - 2003-05-04 20:49:09
Hi,

Might this behavior have something to do with the size of the site you are indexing? I am indexing four sites on my local machine (not very much to index, in truth) and it occurred to me that perhaps, in order to trigger the bug, you have to index a larger amount of information.

I'll give it a whirl indexing Yahoo! to see what happens... just kidding... but I will try another site that is larger than what I have on my hard drive to see if that makes a difference.

Also, if you have a chance, could you post the config file you are using so we are all using the same attribute values?

Ted Stresen-Reuter
http://www.tedmasterweb.com/

On Sunday, May 4, 2003, at 01:31 PM, Jim Cole wrote:
> Hi - I am still having problems with this patch applied. The dig makes
> it much farther than it did last time around, but compression still
> appears to be an issue. I receive the following a few times toward the
> end of the dig.
>
> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
> WordDB: PANIC: Input/output error
> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
> WordDB: PANIC: Input/output error
>
> Then at the end of the dig, I see the following.
>
> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
> WordDB: PANIC: Input/output error
> WordDBCursor::Get(17) failed DB_RUNRECOVERY: Fatal error, run database
> recovery
>
> The referenced page number is the same for every error message. For
> this test, the default value was used for wordlist_page_size. Both
> wordlist_compress and wordlist_compress_zlib were set to true, and
> compression_level was set to 8. I used a fresh copy of the CVS code
> (as of yesterday) with dbase.patch2 applied.
>
> Jim
>
> On Sunday, April 27, 2003, at 02:27 AM, Lachlan Andrew wrote:
>
>> OK, here is another patch...
>>
>> The problem was that I was using the counts of clean and dirty cache
>> pages, but they were not recorded correctly in the old BDB 2.x
>> (although that didn't show up in my initial tests...). This patch
>> contains the fix, copied from BDB 3.3.11.
>>
>> Thanks again for your help :)
>>
>> Lachlan
>>
>> On Sun, 27 Apr 2003 12:54, Jim Cole wrote:
>>> I am still running into fatal problems with OS X. I no longer get
>>> the segfault, but instead see the output shown below.
>> <dbase.patch2>
From: Neal R. <ne...@ri...> - 2003-05-05 16:09:23
Jim,

I'm ultimately responsible for this bug, as I re-enabled the zlib WordDB compression feature... I'd like to get it fixed, but I have been totally unable to duplicate it!

Please try adding this line to your .conf file and re-run your index.

wordlist_page_size: 32768

Also please include complete information on your platform... Linux distro version, CPU type...

Thanks.

On Sun, 4 May 2003, Jim Cole wrote:
> Hi - I am still having problems with this patch applied. The dig makes
> it much farther than it did last time around, but compression still
> appears to be an issue. I receive the following a few times toward the
> end of the dig.
>
> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
> WordDB: PANIC: Input/output error
> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
> WordDB: PANIC: Input/output error
>
> Then at the end of the dig, I see the following.
>
> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
> WordDB: PANIC: Input/output error
> WordDBCursor::Get(17) failed DB_RUNRECOVERY: Fatal error, run database
> recovery
>
> The referenced page number is the same for every error message. For
> this test, the default value was used for wordlist_page_size. Both
> wordlist_compress and wordlist_compress_zlib were set to true, and
> compression_level was set to 8. I used a fresh copy of the CVS code (as
> of yesterday) with dbase.patch2 applied.
>
> Jim
>
> On Sunday, April 27, 2003, at 02:27 AM, Lachlan Andrew wrote:
>
> > OK, here is another patch...
> >
> > The problem was that I was using the counts of clean and dirty cache
> > pages, but they were not recorded correctly in the old BDB 2.x
> > (although that didn't show up in my initial tests...). This patch
> > contains the fix, copied from BDB 3.3.11.
> >
> > Thanks again for your help :)
> >
> > Lachlan
> >
> > On Sun, 27 Apr 2003 12:54, Jim Cole wrote:
> >> I am still running into fatal problems with OS X. I no longer get
> >> the segfault, but instead see the output shown below.
> > <dbase.patch2>

Neal Richter
Knowledgebase Developer
RightNow Technologies, Inc.
Customer Service for Every Web Site
Office: 406-522-1485
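A minimal way to apply that suggestion, assuming the config file lives at /usr/local/etc/htdig/htdig.conf (adjust the path and dig options to your setup):

    # add the larger page size to the config
    echo 'wordlist_page_size: 32768' >> /usr/local/etc/htdig/htdig.conf

    # re-run an initial, verbose dig against that config
    htdig -i -v -c /usr/local/etc/htdig/htdig.conf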
From: Jim C. <li...@yg...> - 2003-05-07 02:36:54
Hi -

With wordlist_page_size set to 32768 I was unable to reproduce the problem. However, I also failed to reproduce the problem when I again ran without the setting this morning. I am going to start another run without the attribute tonight.

As for platform, I am currently working with OS X running on a dual G4. The OS X version is 10.2.5.

Darwin magni 6.5 Darwin Kernel Version 6.5: Mon Apr 7 17:05:38 PDT 2003; root:xnu/xnu-344.32.obj~1/RELEASE_PPC Power Macintosh powerpc

Jim

On Monday, May 5, 2003, at 10:08 AM, Neal Richter wrote:
>
> Jim,
> I'm ultimately responsible for this bug, as I re-enabled the zlib
> WordDB compression feature... I'd like to get it fixed, but I have
> been totally unable to duplicate it!
>
> Please try adding this line to your .conf file and re-run your index.
>
> wordlist_page_size: 32768
>
> Also please include complete information on your platform... Linux
> distro version, CPU type...
>
> Thanks.
>
> On Sun, 4 May 2003, Jim Cole wrote:
>
>> Hi - I am still having problems with this patch applied. The dig makes
>> it much farther than it did last time around, but compression still
>> appears to be an issue. I receive the following a few times toward the
>> end of the dig.
>>
>> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
>> WordDB: PANIC: Input/output error
>> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
>> WordDB: PANIC: Input/output error
>>
>> Then at the end of the dig, I see the following.
>>
>> WordDB: CDB___memp_cmpr_read: unable to uncompress page at pgno = 246532
>> WordDB: PANIC: Input/output error
>> WordDBCursor::Get(17) failed DB_RUNRECOVERY: Fatal error, run database
>> recovery
>>
>> The referenced page number is the same for every error message. For
>> this test, the default value was used for wordlist_page_size. Both
>> wordlist_compress and wordlist_compress_zlib were set to true, and
>> compression_level was set to 8. I used a fresh copy of the CVS code
>> (as of yesterday) with dbase.patch2 applied.
>>
>> Jim
>>
>> On Sunday, April 27, 2003, at 02:27 AM, Lachlan Andrew wrote:
>>
>>> OK, here is another patch...
>>>
>>> The problem was that I was using the counts of clean and dirty cache
>>> pages, but they were not recorded correctly in the old BDB 2.x
>>> (although that didn't show up in my initial tests...). This patch
>>> contains the fix, copied from BDB 3.3.11.
>>>
>>> Thanks again for your help :)
>>>
>>> Lachlan
>>>
>>> On Sun, 27 Apr 2003 12:54, Jim Cole wrote:
>>>> I am still running into fatal problems with OS X. I no longer get
>>>> the segfault, but instead see the output shown below.
>>> <dbase.patch2>
>
> Neal Richter
> Knowledgebase Developer
> RightNow Technologies, Inc.
> Customer Service for Every Web Site
> Office: 406-522-1485
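In case it is useful to anyone else gathering the same platform details, the usual commands are as follows (sw_vers is specific to OS X; uname -a works anywhere):

    sw_vers      # reports ProductName, ProductVersion (e.g. 10.2.5), BuildVersion
    uname -a     # kernel version and architecture, as quoted above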