From: Ann W. H. <aha...@ib...> - 2010-01-06 17:29:38
Gary Franklin wrote:
> When I say "delete the table" I mean the tables are dropped. The records
> are not being deleted from the tables.

Hmmm.... That's interesting. Are the records big enough that they might be fragmented? Are there large blobs? Is this problem specific to a version of Firebird?

> As Alex indicated in his post, the transaction information I included in
> my original post was obtained by running the "gstat -h" command after the
> runs completed. I can try capturing the additional gstat information
> while the test is running.

That would be interesting but probably less useful than what appears from gstat -a or perhaps even gfix -v ...

> The test runs involve a single user running two threads, doing the same
> operations over and over - creating tables, populating the tables,
> running queries and dropping the tables.
>
> I am also in contact with Dmitry and we are discussing the best way for
> me to get a test environment put together so he can recreate the problem.

Just off the top of my head, here are a couple of possibilities and some ways to test them. It sounds as if pages from dropped tables aren't being reused. There could be lots of reasons for that. One is that not all pages are being released - possibly overflow or blob pages aren't being reclaimed. gfix -validate will report those pages as orphaned.

Another is an optimization that could work badly in this case. The server finds free pages to allocate by looking at Page Inventory Pages (PIPs) that occur at regular places in the database. Each PIP has one bit for each page in its range - the bit is zero or one, depending on whether the page is free or in use. (No, I don't remember which.) In a moderate to large database, walking the PIPs to find a free page is a nuisance, so Firebird keeps the page number of the last PIP that had free space. If it only looks forward from there, it won't find pages that were freed on prior PIPs.
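To make the failure mode concrete, here's a minimal sketch of that last-PIP optimization - a toy bitmap allocator, not Firebird's actual code, with made-up names (PipAllocator, allocate, release). It shows how a forward-only scan from the saved hint never rediscovers pages freed on earlier PIPs:

```python
# Toy model (hypothetical, not Firebird source) of the last-PIP
# optimization: each PIP is a bitmap of pages in its range, and the
# allocator remembers the last PIP that had free space.

class PipAllocator:
    def __init__(self, num_pips, pages_per_pip):
        # pips[i][j] is True when page (i * pages_per_pip + j) is free
        self.pips = [[True] * pages_per_pip for _ in range(num_pips)]
        self.pages_per_pip = pages_per_pip
        self.last_pip = 0  # saved hint: last PIP known to have free space

    def allocate(self):
        # Forward-only scan from the hint: pages freed on earlier
        # PIPs are never found, so the database keeps growing.
        for i in range(self.last_pip, len(self.pips)):
            for j, free in enumerate(self.pips[i]):
                if free:
                    self.pips[i][j] = False
                    self.last_pip = i
                    return i * self.pages_per_pip + j
        return None  # nothing forward of the hint; would extend the file

    def release(self, page):
        i, j = divmod(page, self.pages_per_pip)
        self.pips[i][j] = True
        # A fix along the lines suggested below would reset the hint
        # when an earlier PIP regains enough free pages, e.g.:
        # self.last_pip = min(self.last_pip, i)

# With two 4-page PIPs: allocate five pages (the hint moves to PIP 1),
# then release page 0 back on PIP 0. The next allocate still scans only
# forward from PIP 1 and hands out page 5 instead of reusing page 0.
alloc = PipAllocator(num_pips=2, pages_per_pip=4)
for _ in range(5):
    alloc.allocate()
alloc.release(0)
print(alloc.allocate())  # 5, not the freed page 0
```

Restarting the server would discard the in-memory hint, which is why the stop-the-server test below is a useful probe.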
One way to test for this would be to stop the server between runs. If that improves space usage, it's a smoking gun.

The mental model for most databases is that they hold valuable data that the users want to save - not that they model a volatile state that's going to be destroyed when the test is over. The last-PIP optimization works pretty well in the first model and badly in the second. A way of dealing with the second case would be to reset the saved last-PIP when an earlier PIP crosses some threshold of free pages.

Best regards,

Ann