From: Lachlan A. <lh...@us...> - 2003-06-22 03:01:18
Greetings all,

I think I've finally found the *source* of the recursions etc. in the database compression. Once 3.2.0b5 is out, I'll remove all the hacks to limit explicit recursion and to keep the cache clean...

The problem is with the freelist of pages used when the compressed page is larger than a "real" page. It is part of the same environment as the rest of the database, and so shares the cache. That means that writing a page can cause access to the cache, which may require writing dirty pages, etc.

The solution seems to be simply to make it a "standalone" database. Can anyone see any problems with that approach? Do we need the environment for anything?

Cheers,
Lachlan

On Wed, 18 Jun 2003 22:36, Lachlan Andrew wrote:
> I've just come across a database bug :(  It was reporting
> WordKey::Compare: key length for a or b < info.num_length
> repeatedly when I ran a large dig without -i.
>
> I haven't tried repeating it yet, because the dig that produced it
> takes three days!!  (It uses a rather inefficient
> external_transport.)  I'll try to replicate it using a more
> manageable data set.

-- 
lh...@us...
ht://Dig developer DownUnder  (http://www.htdig.org)
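For readers following the thread: the failure mode above can be sketched in a few lines. This is a toy model, not ht://Dig or Berkeley DB code; `PageCache`, `touch`, and `write_page` are invented names. The point it illustrates is the cycle described in the message: flushing a dirty compressed page needs an overflow page from the freelist, and if the freelist lives in the *same* cache, that allocation re-enters the cache and can trigger another flush, recursing. Giving the freelist its own cache (the "standalone database" proposal) breaks the cycle.

```python
import itertools

class PageCache:
    """Toy page cache: evicting a dirty page calls `writer` to flush it."""

    MAX_DEPTH = 20  # guard so the demo raises instead of blowing the stack

    def __init__(self, capacity, writer):
        self.capacity = capacity
        self.writer = writer
        self.pages = {}     # page_id -> dirty flag
        self.depth = 0      # re-entrancy counter

    def touch(self, page_id, dirty=False):
        self.depth += 1
        try:
            if self.depth > self.MAX_DEPTH:
                raise RecursionError("cache access recursed")
            if page_id not in self.pages and len(self.pages) >= self.capacity:
                victim = next(iter(self.pages))   # pick a victim to evict
                if self.pages[victim]:
                    self.writer(victim)           # flushing may re-enter touch()
                self.pages.pop(victim, None)
            self.pages[page_id] = dirty
        finally:
            self.depth -= 1


def run(standalone):
    """Write two oversized pages; each flush grabs a fresh freelist page.

    Shared freelist -> the flush re-enters the same cache and recurses.
    Standalone freelist -> the flush uses a separate cache and completes.
    """
    free_ids = itertools.count()
    freelist_cache = PageCache(capacity=4, writer=lambda pid: None)
    cache = None

    def write_page(page_id):
        fid = "free-%d" % next(free_ids)
        if standalone:
            freelist_cache.touch(fid, dirty=True)  # its own cache: no re-entry
        else:
            cache.touch(fid, dirty=True)           # same cache: re-enters!

    cache = PageCache(capacity=1, writer=write_page)
    try:
        cache.touch("data-1", dirty=True)
        cache.touch("data-2", dirty=True)
        return "ok"
    except RecursionError:
        return "recursed"


print(run(standalone=False))  # shared environment: recursion detected
print(run(standalone=True))   # standalone freelist: completes normally
```

With the freelist in the shared cache, the first eviction never terminates (the depth guard catches it); with a standalone freelist, both writes complete. The one question the sketch leaves open is the same one asked above: whether the freelist actually needs anything else from the environment (locking, transactions) besides the cache.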