dbi-interbase-devel Mailing List for DBD::InterBase (Page 6)
From: Mark D. A. <md...@di...> - 2000-10-04 15:57:57

> > > Correct me if I'm wrong, but Apache::DBI will dynamically give you a new
> > > $dbh if the connection has died (see The Morning Bug and $dbh->ping).
> >
> > probably true if i was using Apache::DBI, but i'm just using DBI directly.
>
> ohmygod! Then you are either connecting to the DB in each request (which
> IMHO is horrible, though not extremely slow in interbase) or you are keeping
> a connection alive yourself (in which case you should be using Apache::DBI).

i'm keeping the connection alive myself, basically a re-write of Apache::DBI.
I'm also not using Apache::Debug or CGI::Carp or several other modules that i
got fed up with and rewrote my own versions of. All of them are so short, i
found it easier that way.

by the way, i experimented briefly with "local" mode, where there is no
separate interbase server process; the engine is linked into each child
process. you get this by installing Classic instead of SS, and then using
ib_protocol=local in the dsn. the performance was outstanding, but every once
in a while a child would freeze forever -- i could automatically kill those,
but then all new children would freeze too. some sort of lock manager thing.
i never resolved it.

> > i could do that too; i just haven't had the need yet, as my atomic
> > modification operations currently fit into single sql requests.
>
> So you never make more than one write per request?

if i do, i don't care if one succeeds and the other doesn't. but i know this
will change eventually.

> I'd like to know why you feel that this is unreliable, have I missed
> something?

probably not. but by my understanding, if you generate a primary key value for
an insert by taking max+1 of the "current" ones, then every once in a while an
insert will fail. it isn't like you'll ever get a re-used primary key value,
because at any isolation level the server will enforce uniqueness. at least,
that is true for some databases. maybe interbase is smart enough to schedule
the transactions so that they serialize -- that is certainly technically
possible, and some databases do it in some isolation levels; it is just that
in practice rdbms's don't seem able to do what a person reads about in Gray &
Reuter.

i use a generator for this, because it will increment every time, and is
guaranteed to be atomic without me having to figure out exactly what interbase
does. sure, every once in a while i skip a value, but at least i never get an
insert conflict. another benefit is that i then know what the pkey value of
the new row is, as an insert doesn't return anything but a success code.

> > i don't have much intuition about ib's versioning and performance.
> > for example, suppose i had a $dbh which had a permanent readonly
> > (isc_tpb_read) read_committed transaction that it never committed, that
> > was used for various reports, would i get better performance that way?
>
> Have you read this?: http://www.dbginc.com/tech_pprs/IB.html
>
> IMHO read_committed is a one way ticket to pain, but for certain data it
> will probably work, but I don't think it's worth the bother.

you are probably right, but that paper doesn't really say anything the ib docs
don't.

-mda
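
The generator approach described above boils down to two statements in DBI.
Below is a minimal sketch, not code from this thread: the generator name
GEN_MY_TABLE_ID, the table, and the connection details are placeholders, and
the DSN key names vary between DBD::InterBase releases, so treat the connect
line as an assumption.

    use strict;
    use DBI;

    # Placeholder connection details; adjust the DSN for your driver version.
    my $dbh = DBI->connect('dbi:InterBase:db=/data/test.gdb',
                           'sysdba', 'masterkey',
                           { RaiseError => 1, AutoCommit => 1 });

    # GEN_ID() increments atomically on the server, so two concurrent
    # transactions can never be handed the same value, at any isolation level.
    my ($id) = $dbh->selectrow_array(
        'SELECT GEN_ID(GEN_MY_TABLE_ID, 1) FROM RDB$DATABASE');

    # We also learn the new row's primary key up front, since the insert
    # itself only returns a success indication.
    $dbh->do('INSERT INTO my_table (my_table_id, some_field) VALUES (?, ?)',
             undef, $id, 'some value');

The cost of the occasional skipped value is the trade-off Mark mentions; the
gain is that no two inserts can ever collide on the key.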
From: Flemming F. <di...@sw...> - 2000-10-04 11:38:25

On Wed, 4 Oct 2000, Michael Samanov wrote:

> Wednesday, October 04, 2000, 10:52:37 AM, you wrote:
>
> >> probably true if i was using Apache::DBI, but i'm just using DBI directly.
>
> FF> ohmygod! Then you are either connecting to the DB in each request (which
> FF> IMHO is horrible, though not extremely slow in interbase)
>
> I think, classic architecture gives us noticeable (not great, I hope)
> connection overhead. If I won't be too lazy, I'll try to benchmark
> connections and I'll post the results here :-)

If you connect to the db at each request you save some RAM, in that you don't
need to keep n interbase processes around all the time, but you take a hit
because of the connection overhead (esp. with Classic, which needs to be
forked off at each request).

> BTW, I've been thinking about some two-three-more-tier solution, so
> that many apaches share the single DB connection. Has somebody been
> trying or at least thinking about that?

Well, if you can isolate all of your business logic in a middle tier, so that
your apaches can each have a low-overhead connection to that process, which
will in turn connect to IB through one or two static connections, then I think
you are onto something. But I don't think it's worth the bother if you are
just making a DB connection cache, because you will need one DB connection per
apache anyway if they are all serving a request.

If you need database connections directly from your webserver then I'd
recommend using Apache::DBI; I've tried without it and it's *slow*. Because of
the client memory leaks talked about, you should probably keep an eye on the
size of those apache processes and twiddle the max requests per child
parameter in apache, but you're probably already doing this, right?
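
Switching to Apache::DBI is mostly a configuration change. A minimal sketch of
a startup file, not taken from this thread; the file name, DSN, and
credentials are placeholders.

    # startup.pl -- pulled in from httpd.conf with "PerlRequire startup.pl".
    use strict;
    use Apache::DBI ();   # load before DBI so it can take over connect()
    use DBI ();

    # Optional: open the handle when each child starts, so the first request
    # doesn't pay the connect cost. Connection details are assumptions.
    Apache::DBI->connect_on_init('dbi:InterBase:db=/data/test.gdb',
                                 'sysdba', 'masterkey',
                                 { RaiseError => 1, AutoCommit => 1 });

    1;

After this, any DBI->connect() in a handler with the same arguments is served
from the per-child cache, and Apache::DBI pings the handle before returning
it, which is the dead-connection recovery discussed earlier in the thread.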
From: Michael S. <mi...@ch...> - 2000-10-04 10:14:27

Hello Flemming,

Wednesday, October 04, 2000, 10:52:37 AM, you wrote:

>> probably true if i was using Apache::DBI, but i'm just using DBI directly.

FF> ohmygod! Then you are either connecting to the DB in each request (which
FF> IMHO is horrible, though not extremely slow in interbase)

I think, classic architecture gives us noticeable (not great, I hope)
connection overhead. If I won't be too lazy, I'll try to benchmark connections
and I'll post the results here :-)

BTW, I've been thinking about some two-three-more-tier solution, so that many
apaches share the single DB connection. Has somebody been trying or at least
thinking about that?

Best regards,
Michael mailto:mi...@ch...
From: Flemming F. <di...@sw...> - 2000-10-04 06:49:34

On Tue, 3 Oct 2000, Mark D. Anderson wrote:

> > > in the handler.pl, the first time through
> >
> > This is not Possible(tm)
> >
> > Correct me if I'm wrong, but Apache::DBI will dynamically give you a new
> > $dbh if the connection has died (see The Morning Bug and $dbh->ping).
>
> probably true if i was using Apache::DBI, but i'm just using DBI directly.

ohmygod! Then you are either connecting to the DB in each request (which IMHO
is horrible, though not extremely slow in interbase) or you are keeping a
connection alive yourself (in which case you should be using Apache::DBI).

> > > for me, that means that i'd have to keep my transactions so short that
> > > i might as well do autocommit. most of my inserts/updates are either
> > > single table or can be ordered in such a way they don't have to be
> > > combined in a txn.
> >
> > I do all my work inside one transaction per request to apache; the
> > transaction is committed at the end of the autohandler, so if something
> > goes wrong in the course of the call no changes get written to the
> > database (that is the point of using transactions)
>
> ah, so the lifetime of your transaction is one http request; that makes more
> sense to me.
>
> i thought you were keeping a transaction alive across http requests, which
> gets trickier programmatically.

No, that is impossible: you never know which apache gets to serve the next
request from a user (indeed, you don't even know if there will *be* a next
request), so making transactions that span more than one http request is
impossible. What I do to make a multi-request procedure seem like a
transaction to the user is to fetch all the info I need out of the db, store
it in the user's session data, and only write it back to the db when the user
has finished editing it and clicked save.

> i could do that too; i just haven't had the need yet, as my atomic
> modification operations currently fit into single sql requests.

So you never make more than one write per request? What I really like about
having a transaction per request is that there is nothing wrong with my data,
no matter where I die in the request; this is really helpful for debugging.

> > They race, same thing happens with read_committed, the timing is just
> > different.
>
> um, i think that example applies to just about any isolation level.
> it is a well-known tempting-but-unreliable way to generate primary key
> values.

No, the insert from the other transactions will not be visible to the current
transaction unless they had committed before the current transaction started.
I'd like to know why you feel that this is unreliable, have I missed
something?

> true; i wouldn't do it unless i knew for a fact that (a) it resulted in
> better performance, and (b) it was safe.

fair enough, as long as you are not landing planes or moving money:) I can
only say that I'm pretty happy with the performance of "repeatable read".

> i don't have much intuition about ib's versioning and performance.
> for example, suppose i had a $dbh which had a permanent readonly
> (isc_tpb_read) read_committed transaction that it never committed, that was
> used for various reports, would i get better performance that way?

Have you read this?: http://www.dbginc.com/tech_pprs/IB.html

IMHO read_committed is a one way ticket to pain, but for certain data it will
probably work; I don't think it's worth the bother.
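
The one-transaction-per-request pattern Flemming describes can be sketched as
a plain mod_perl 1.x handler. This is a minimal illustration of the shape, not
his actual autohandler: the package name, DSN, table, and the hard-coded
entry_id are all placeholders.

    package My::Handler;
    use strict;
    use DBI;
    use Apache::Constants qw(OK SERVER_ERROR);

    sub handler {
        my $r = shift;

        # With Apache::DBI loaded, this returns the cached handle for this
        # child; connection details are assumptions.
        my $dbh = DBI->connect('dbi:InterBase:db=/data/test.gdb',
                               'sysdba', 'masterkey',
                               { RaiseError => 1, AutoCommit => 0 });

        my $ok = eval {
            # All writes for this request happen inside one transaction.
            # The literal 42 stands in for real request parsing.
            $dbh->do('UPDATE entry SET vote_points = vote_points + 1
                      WHERE entry_id = ?', undef, 42);
            1;
        };

        if ($ok) {
            $dbh->commit;      # one commit at the very end of the request
            return OK;
        }
        warn "request failed, rolling back: $@";
        $dbh->rollback;        # nothing half-done reaches the database
        return SERVER_ERROR;
    }

    1;

Dying anywhere inside the eval leaves the database untouched, which is the
debugging property Flemming points to below.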
From: Mark D. A. <md...@di...> - 2000-10-03 21:01:20

> > in the handler.pl, the first time through
>
> This is not Possible(tm)
>
> Correct me if I'm wrong, but Apache::DBI will dynamically give you a new
> $dbh if the connection has died (see The Morning Bug and $dbh->ping).

probably true if i was using Apache::DBI, but i'm just using DBI directly.

> If you do prepare in handler.pl then what happens when two requests try
> to use the same statement, in a new connection?

i'm using prepare_cached() in DBI, which keeps the cache in the $dbh. If i
lose the $dbh, the cache goes too -- i'm not maintaining my own cache that
would live from $dbh to $dbh.

> > > Well, you do hang on to resources much longer than needed, so maybe it's
> > > not a complete win, have you tried not hanging on to the statements?
> >
> > what, risk memory corruption? :)
>
> Eh?, there is no corruption, either it works or you get a sigsegv.

that was a joke :).

> > for me, that means that i'd have to keep my transactions so short that
> > i might as well do autocommit. most of my inserts/updates are either
> > single table or can be ordered in such a way they don't have to be
> > combined in a txn.
>
> I do all my work inside one transaction per request to apache; the
> transaction is committed at the end of the autohandler, so if something
> goes wrong in the course of the call no changes get written to the
> database (that is the point of using transactions)

ah, so the lifetime of your transaction is one http request; that makes more
sense to me. i thought you were keeping a transaction alive across http
requests, which gets trickier programmatically. i could do that too; i just
haven't had the need yet, as my atomic modification operations currently fit
into single sql requests.

> > if i were to switch over to holding transactions longer, i'd want
> > read_committed isolation.
>
> I've thought about using that, but it can lead to terrible inconsistency
> in the database.
>
> consider some_table that has an autoincrementing field some_table_id
> that is used like this:
>   insert into some_table (some_field) values (?)
>   select max(some_table_id) from some_table
>
> What happens when these queries are run in two transactions at the same
> time? They race, same thing happens with read_committed, the timing is just
> different.

um, i think that example applies to just about any isolation level.
it is a well-known tempting-but-unreliable way to generate primary key values.

> You could also be making decisions based on the result of several
> queries and then making the wrong decision because you got to read one
> transaction's output that changed all the data you rely on (but you only
> read half of the changed data)
>
> IMHO read_committed is useless for all but the most simple/non-critical
> work.

true; i wouldn't do it unless i knew for a fact that (a) it resulted in better
performance, and (b) it was safe.

i don't have much intuition about ib's versioning and performance.
for example, suppose i had a $dbh which had a permanent readonly
(isc_tpb_read) read_committed transaction that it never committed, that was
used for various reports, would i get better performance that way?

-mda
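
The prepare_cached() behaviour Mark relies on is stock DBI: the cache lives in
the handle's CachedKids hash, so it disappears along with the $dbh. A minimal
sketch of the idiom; the connection details, table, and column names are
placeholders, not anything from this list.

    use strict;
    use DBI;

    # Placeholder connection.
    my $dbh = DBI->connect('dbi:InterBase:db=/data/test.gdb',
                           'sysdba', 'masterkey',
                           { RaiseError => 1, AutoCommit => 1 });

    sub points_for {
        my ($entry_id) = @_;
        # First call prepares; later calls with identical SQL get the same
        # $sth back from $dbh->{CachedKids} instead of re-preparing.
        my $sth = $dbh->prepare_cached(
            'SELECT vote_points FROM entry WHERE entry_id = ?');
        $sth->execute($entry_id);
        my ($points) = $sth->fetchrow_array;
        $sth->finish;
        return $points;
    }

Because the cache is tied to the handle, a reconnect (by Apache::DBI or by
hand) simply starts with an empty cache rather than stale statement handles.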
From: Flemming F. <di...@sw...> - 2000-10-03 17:11:42

"Mark D. Anderson" wrote:

> > Hmm, in that case, I take it that you do all/most of your prepares the
> > first time the modules are run (in <%once> maybe?)
>
> in the handler.pl, the first time through

This is not Possible(tm)

Correct me if I'm wrong, but Apache::DBI will dynamically give you a new $dbh
if the connection has died (see The Morning Bug and $dbh->ping). If you do
prepare in handler.pl then what happens when two requests try to use the same
statement, in a new connection?

> > Well, you do hang on to resources much longer than needed, so maybe it's
> > not a complete win, have you tried not hanging on to the statements?
>
> what, risk memory corruption? :)

Eh?, there is no corruption, either it works or you get a sigsegv. Hanging
onto resources too long (especially transactions) has really terrible effects
on the database; the drainbrammaged transaction handling of a couple of weeks
ago was really hard on the db, and I can imagine that keeping statements
around and also expecting them to work (do you re-prepare *when* your
connection has died?) is horrible too.

> > I find the latest version+the memset patch to be pretty solid, in that I
> > haven't seen any errors yet.
>
> well, now that commit_retaining is removed, i could imagine things working
> with autocommit off, but the default transaction parameter of snapshot (aka
> "concurrency") means that you won't see other people's committed changes til
> you commit.

Yes, that is the point of using a transaction:)

> for me, that means that i'd have to keep my transactions so short that
> i might as well do autocommit. most of my inserts/updates are either single
> table or can be ordered in such a way they don't have to be combined in a
> txn.

I do all my work inside one transaction per request to apache; the transaction
is committed at the end of the autohandler, so if something goes wrong in the
course of the call no changes get written to the database (that is the point
of using transactions)

> if i were to switch over to holding transactions longer, i'd want
> read_committed isolation.

I've thought about using that, but it can lead to terrible inconsistency in
the database.

consider some_table that has an autoincrementing field some_table_id that is
used like this:

  insert into some_table (some_field) values (?)
  select max(some_table_id) from some_table

What happens when these queries are run in two transactions at the same time?
They race; the same thing happens with read_committed, the timing is just
different.

You could also be making decisions based on the result of several queries and
then making the wrong decision because you got to read one transaction's
output that changed all the data you rely on (but you only read half of the
changed data).

IMHO read_committed is useless for all but the most simple/non-critical work.
From: Mark D. A. <md...@di...> - 2000-10-03 15:59:51

> Hmm, in that case, I take it that you do all/most of your prepares the
> first time the modules are run (in <%once> maybe?)

in the handler.pl, the first time through

> > not sure what that has to do with the prepare issue; prepare *always* has
> > to be done, and doing all prepare's at start up time and using
> > prepare_cached() both seem complete wins.
>
> Well, you do hang on to resources much longer than needed, so maybe it's
> not a complete win, have you tried not hanging on to the statements?

what, risk memory corruption? :)

> > as for autocommit, that is necessary until there is full control over
> > transaction parameters.
>
> What?
>
> Are you talking about the old version that hung onto transactions
> forever and deadlocked like there was no tomorrow or the latest version?
>
> I find the latest version+the memset patch to be pretty solid, in that I
> haven't seen any errors yet.

well, now that commit_retaining is removed, i could imagine things working
with autocommit off, but the default transaction parameter of snapshot (aka
"concurrency") means that you won't see other people's committed changes til
you commit.

for me, that means that i'd have to keep my transactions so short that i might
as well do autocommit. most of my inserts/updates are either single table or
can be ordered in such a way they don't have to be combined in a txn.

if i were to switch over to holding transactions longer, i'd want
read_committed isolation.

-mda
From: Flemming F. <di...@sw...> - 2000-10-03 15:11:08

"Mark D. Anderson" wrote:

> actually, i'm doing exactly that. HTML::Mason, mod_perl, and up to 20
> txns/sec (with load testing; our actual traffic is more like 1-2 txns/sec).
> a mixture of reads and writes, though mostly reads.

Hmm, in that case, I take it that you do all/most of your prepares the first
time the modules are run (in <%once> maybe?). That would mean that you have a
fairly good chance of getting all zeros in your malloc'ed memory, so you would
never see the bug that bit me (garbage in the sqlda that did not get zeroed by
interbase's prepare).

I think that you have to be in a situation where you prepare a statement after
some memory has been used and free()'ed to run into this bug; I can reproduce
it pretty reliably by now.

BTW, the bugfix is pretty straightforward (see below), so I have no idea why
ed hasn't included it yet, maybe he's off on vacation or something?

#define IB_alloc_sqlda(sqlda, n)                                  \
{                                                                 \
    short len = n;                                                \
    if (sqlda) {                                                  \
        safefree(sqlda); sqlda = NULL;                            \
    }                                                             \
    if (!(sqlda = (XSQLDA*) safemalloc(XSQLDA_LENGTH(len)))) {    \
        do_error(sth, 2, "Fail to allocate XSQLDA");              \
    }                                                             \
    memset(sqlda, 0, XSQLDA_LENGTH(len));                         \
    sqlda->sqln = len;                                            \
    sqlda->version = SQLDA_VERSION1;                              \
}

> we've been hitting all sorts of problems, with interbase servers that grow
> without bound, or lock up permanently on semop(), or spin cpu forever, or
> disconnect client connections for no apparent reason, or use completely
> braindead query optimization.

Urgh.

> not sure what that has to do with the prepare issue; prepare *always* has to
> be done, and doing all prepare's at start up time and using prepare_cached()
> both seem complete wins.

Well, you do hang on to resources much longer than needed, so maybe it's not a
complete win, have you tried not hanging on to the statements?

> as for autocommit, that is necessary until there is full control over
> transaction parameters.

What?

Are you talking about the old version that hung onto transactions forever and
deadlocked like there was no tomorrow, or the latest version?

I find the latest version+the memset patch to be pretty solid, in that I
haven't seen any errors yet.

> though i wrote that patch to DBD::InterBase to do that, i haven't integrated
> it into our production system and probably won't til it is clear what edwin
> will do with it; i don't want to be reliant on a fork.

We are running on a fork, edwin's version is too buggy (at the moment anyway)
From: Michael S. <mi...@ch...> - 2000-10-02 10:47:35

Hello Edwin,

Thursday, September 28, 2000, 10:52:08 PM, you wrote:

EP> - Makefile.PL now works for FreeBSD (Michael Samanov), as the result,
EP> it's now a bit unfriendly for Linux users (!) :-) but this shouldn't be
EP> a big problem, just remember that IB 6.0 for Linux's libgds resides in
EP> /usr/lib

Here's the improved variant. I wrote the *GREATEST* :-) function that tests
the locations, and it seems like it finds all the dirs almost by itself. I
hope I enumerated the whole set of paths; if I didn't, it can be corrected
easily. The patch is to be applied to 0.21.1.

Best regards,
Michael mailto:mi...@ch...
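
The patch itself is not preserved in this archive, but the idea Michael
describes -- probing a list of candidate locations for the client library and
headers instead of hard-coding one platform's layout -- can be sketched like
this. The helper name and the candidate paths are assumptions, not Michael's
actual code.

    # Hypothetical Makefile.PL fragment: look for libgds and ibase.h in
    # the places InterBase installs tend to use.
    use strict;

    sub find_in_dirs {
        my ($file, @dirs) = @_;
        for my $dir (@dirs) {
            return $dir if -e "$dir/$file";
        }
        return undef;
    }

    my $libdir = find_in_dirs('libgds.so',
        qw(/usr/lib /usr/local/lib /opt/interbase/lib /usr/interbase/lib));
    my $incdir = find_in_dirs('ibase.h',
        qw(/usr/include /usr/local/include /opt/interbase/include
           /usr/interbase/include));

    die "Could not locate the InterBase client library (libgds)\n" unless $libdir;
    die "Could not locate ibase.h\n"                               unless $incdir;

    # These would then be handed to ExtUtils::MakeMaker, e.g.:
    #   WriteMakefile(LIBS => ["-L$libdir -lgds"], INC => "-I$incdir", ...);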
From: Mark D. A. <md...@di...> - 2000-10-01 21:50:23

in 0.20.6, i sometimes get:

  Use of uninitialized value in subroutine entry at
  /home/local/DBD-InterBase-0.20.6/blib/lib/DBD/InterBase.pm line 149

which is:

  DBD::InterBase::st::_prepare($sth, $statement, $attribs)
      or return undef;

the thing is, i know those 3 args are defined (i put a die in front if they
weren't). i don't think i've seen the "in subroutine entry" variant of this
diagnostic before -- anyone know what that signifies?

-mda
From: Mark D. A. <md...@di...> - 2000-10-01 15:58:26

> Well, you are writing small one-off autocommitting scripts, I'm doing
> HTML::Mason (mod_perl) modules that get called n times per second.
>
> I think the reason that we are hitting a lot of problems is that we are
> actually the only ones using the driver for web stuff, we are using
> multiple concurrent transactions and the system is very complex compared
> to simple one-at-a-time scripts.

actually, i'm doing exactly that. HTML::Mason, mod_perl, and up to 20 txns/sec
(with load testing; our actual traffic is more like 1-2 txns/sec). a mixture
of reads and writes, though mostly reads.

we've been hitting all sorts of problems, with interbase servers that grow
without bound, or lock up permanently on semop(), or spin cpu forever, or
disconnect client connections for no apparent reason, or use completely
braindead query optimization. i'm always about this close to taking a week out
and switching over to postgres.

not sure what that has to do with the prepare issue; prepare *always* has to
be done, and doing all prepare's at start up time and using prepare_cached()
both seem complete wins.

as for autocommit, that is necessary until there is full control over
transaction parameters. though i wrote that patch to DBD::InterBase to do
that, i haven't integrated it into our production system and probably won't
til it is clear what edwin will do with it; i don't want to be reliant on a
fork.

-mda
From: Flemming F. <di...@sw...> - 2000-10-01 12:57:31

"Mark D. Anderson" wrote:

> > execute 5 of them, but not actually take the penalty for preparing
> > all 17.
>
> i do a prepare of all my statements at init time. since it only
> happens once, i don't really care if i do all 17.

Well, you are writing small one-off autocommitting scripts, I'm doing
HTML::Mason (mod_perl) modules that get called n times per second.

> i'm also using prepare_cached(), btw, which seems to work fine.

I think the reason that we are hitting a lot of problems is that we are
actually the only ones using the driver for web stuff; we are using multiple
concurrent transactions and the system is very complex compared to simple
one-at-a-time scripts.

Hopefully more people will start using ib soon for web things, so that I don't
have to fix all the bugs:)
From: Mark D. A. <md...@di...> - 2000-09-30 15:12:31

> execute 5 of them, but not actually take the penalty for preparing all
> 17.

i do a prepare of all my statements at init time. since it only happens once,
i don't really care if i do all 17.

i'm also using prepare_cached(), btw, which seems to work fine.

-mda
From: Flemming F. <di...@sw...> - 2000-09-30 00:28:27

Ok, I managed to get a stacktrace out of the dying apache and I've been
tinkering with it a bit tonight. The short story is: I found the bug and iced
it (yay!). The sqlda needs to be zeroed before using it (hint: the memset line
is new):

#define IB_alloc_sqlda(sqlda, n)                                  \
{                                                                 \
    short len = n;                                                \
    if (sqlda) {                                                  \
        safefree(sqlda); sqlda = NULL;                            \
    }                                                             \
    if (!(sqlda = (XSQLDA*) safemalloc(XSQLDA_LENGTH(len)))) {    \
        do_error(sth, 2, "Fail to allocate XSQLDA");              \
    }                                                             \
    memset(sqlda, 0, XSQLDA_LENGTH(len));                         \
    sqlda->sqln = len;                                            \
    sqlda->version = SQLDA_VERSION1;                              \
}

Ed: Could you *please* change the mailing list settings to "reply to list"?

BTW is there ANY good reason for this to be a macro instead of a function?
From: Edwin P. <ed....@co...> - 2000-09-28 23:01:57

Michael Samanov wrote:
>
> I'm not thinking it's a deal of SS or classic server: you are client
> and it is server, and you are both different processes communicating
> via TCP/IP. If IBPerl code shows the same problem then it means that
> some leakage is in the Interbase client library - gds.a (or, more
> likely, gds.so).

I've just tried this test using the DSQL C API, and it seems that _both_
drivers, mine and IBPerl, have leaks. For now I haven't found where exactly it
leaks. Need to investigate this further.

Later,
Edwin.

> I just have tested - FreeBSD has the same problem.
>
> Best regards,
> Michael mailto:mi...@ch...
From: Flemming F. <di...@sw...> - 2000-09-28 21:44:30

Flemming Frandsen wrote:

> my $sth = $scopedb->prepare("UPDATE entry SET vote_points=? WHERE
>     entry_id=?") or die ...;
> while (my ($entry_id, $points) = each %votes) {
>     $sth->execute($points, $entry_id) or die $DBI::errstr;
> }
>
> This works though (much too slow to be a real workaround)
> while (my ($entry_id, $points) = each %votes) {
>     my $sth = $scopedb->prepare("UPDATE entry SET vote_points=? WHERE
>         entry_id=?") or die ...;
>     $sth->execute($points, $entry_id) or die $DBI::errstr;
> }

Ok, did a bit more testing; it turns out that %votes was empty. The first
version works when %votes isn't empty.

So: the bug now seems to be triggered when you prepare a query and don't
execute it. I have tried to test my theory, but I haven't had much luck.

Idea: Maybe this would be a good time to do lazy prepare? Just store the SQL
until we actually need the statement handle. This should be optional, both on
the $dbh and on the individual statement, because sometimes you really want
the error at the prepare, not at execute, but other times you would like to
prepare 17 queries and then (depending on the phase of the moon and the number
of bugs on the windshield) execute 5 of them, but not actually take the
penalty for preparing all 17.
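
The lazy-prepare idea can be prototyped in pure Perl on top of DBI, without
touching the driver. A minimal sketch only; the package name and interface are
hypothetical, and error handling is reduced to passing the failure back to the
caller.

    package Lazy::Sth;
    use strict;
    use DBI ();

    # Hold the SQL until the first execute(); statements that are never
    # executed never reach the driver's prepare at all.
    sub new {
        my ($class, $dbh, $sql) = @_;
        return bless { dbh => $dbh, sql => $sql, sth => undef }, $class;
    }

    sub execute {
        my ($self, @bind) = @_;
        $self->{sth} ||= $self->{dbh}->prepare($self->{sql})
            or return undef;               # caller checks $DBI::errstr
        return $self->{sth}->execute(@bind);
    }

    1;

    # Usage, mirroring the loop above:
    #   my $sth = Lazy::Sth->new($scopedb,
    #       "UPDATE entry SET vote_points=? WHERE entry_id=?");
    #   while (my ($entry_id, $points) = each %votes) {
    #       $sth->execute($points, $entry_id) or die $DBI::errstr;
    #   }

The trade-off is exactly the one Flemming names: prepare errors only show up
at the first execute, so it should stay optional per statement.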
From: Flemming F. <di...@sw...> - 2000-09-28 21:10:35

Edwin Pratomo wrote:

> - commit_retaining is replaced with commit_transaction (Flemming
>   Frandsen)

This allows folks to actually use transactions:)

Anyway, the bug I talked about in prepare has been cornered, so now all that
remains is the matter of dispatching it:)

I used to prepare a statement and then execute it n times (a LOT of times);
this is where it fails:

  my $sth = $scopedb->prepare("UPDATE entry SET vote_points=? WHERE
      entry_id=?") or die ...;
  while (my ($entry_id, $points) = each %votes) {
      $sth->execute($points, $entry_id) or die $DBI::errstr;
  }

This works though (much too slow to be a real workaround):

  while (my ($entry_id, $points) = each %votes) {
      my $sth = $scopedb->prepare("UPDATE entry SET vote_points=? WHERE
          entry_id=?") or die ...;
      $sth->execute($points, $entry_id) or die $DBI::errstr;
  }

The same bug shows up when using $dbh->prepare_cached().

The driver segfaults during the prepare, so it is quite severe (it also does
this in my version, only not as often?). Something tells me that something is
done in prepare that ought to be done in execute, or that something in execute
doesn't quite clean up after itself.
From: Michael S. <mi...@ch...> - 2000-09-28 19:44:51

Hello Edwin,

Thursday, September 28, 2000, 6:55:49 PM, you wrote:

>> While using the DBD::InterBase driver I've noticed that some large
>> repetitive querying jobs chew up a gradually increasing amount of
>> memory, I've reduced it to this minimal case:

EP> I'm not sure this is because of the buggy IB SS for Linux, but you'd
EP> better use the Classic until the next release of SS for Linux.
EP> I cc this message to dbi-interbase-devel list, where there might be some
EP> helpful comments from other users.

I'm not thinking it's a deal of SS or classic server: you are client and it is
server, and you are both different processes communicating via TCP/IP. If
IBPerl code shows the same problem then it means that some leakage is in the
Interbase client library - gds.a (or, more likely, gds.so).

I just have tested - FreeBSD has the same problem.

Best regards,
Michael mailto:mi...@ch...
From: Edwin P. <ed....@co...> - 2000-09-28 18:49:50

Changes since 0.21:

- Fixed do() method, now returns undef on error (Alexei V. Drougov)
- Fixed ping() method (Mark D. Anderson)
- commit_retaining is replaced with commit_transaction (Flemming Frandsen)
- Makefile.PL now works for FreeBSD (Michael Samanov), as the result, it's
  now a bit unfriendly for Linux users (!) :-) but this shouldn't be a big
  problem, just remember that IB 6.0 for Linux's libgds resides in /usr/lib
- removed unnecessary malloc.h which generated warning when compiling on
  FreeBSD (Michael Samanov)

Rgds,
Edwin.
From: Jon B. <jon...@ca...> - 2000-09-28 16:34:40

ed....@co... said:
> jb...@ca... wrote:
> >
> > Hi All,
> >
> > While using the DBD::InterBase driver I've noticed that some large
> > repetitive querying jobs chew up a gradually increasing amount of
> > memory, I've reduced it to this minimal case:
> >
> >   my $sth = $dbh->prepare('SELECT * FROM TEST WHERE COL_A="abcdefg";');
> >   for(my $n=0;$n<1000000;$n++) {
> >       $sth->execute;
> >       $sth->finish;
> >   }
>
> I don't think this is because of some leak in the driver. Worse, the
> increment size of the process changes drastically each time I run your
> script. Fetching all the result instead of finish gives no difference.

I tried a few other things as well, like explicitly undef'ing every related
variable, but that didn't have any effect. There's clearly a leak somewhere.

> Same thing happens when I run a similar test using IBPerl, which of
> course has different cleanup procedures.

> > During the run, watching with top I noticed the memory usage
> > gradually creeps up, surely it should be constant?

I'm inclined to agree with you if it happens with IBPerl as well; aren't both
of these linked with libgds.so?

> I'm not sure this is because of the buggy IB SS for Linux, but you'd
> better use the Classic until the next release of SS for Linux. I cc
> this message to dbi-interbase-devel list, where there might be some
> helpful comments from other users.

I've just de-installed IB SS, installed IB CS, and rebuilt DBD::InterBase for
good measure, and I still get the same problem.

Thanks for the suggestions,

Jon Barker
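
When chasing this kind of leak it helps to quantify the growth rather than
eyeballing top. A minimal sketch on Linux that reads the process size from
/proc inside the loop; the connection details, query, and reporting interval
are placeholders, not anything from Jon's actual test.

    use strict;
    use DBI;

    # Placeholder connection.
    my $dbh = DBI->connect('dbi:InterBase:db=/data/test.gdb',
                           'sysdba', 'masterkey', { RaiseError => 1 });

    # Returns the process virtual size in kB by reading /proc (Linux only).
    sub vsize_kb {
        open my $fh, '<', "/proc/$$/status" or return undef;
        while (<$fh>) { return $1 if /^VmSize:\s+(\d+)\s+kB/ }
        return undef;
    }

    my $sth = $dbh->prepare('SELECT * FROM TEST WHERE COL_A = ?');
    for my $n (1 .. 1_000_000) {
        $sth->execute('abcdefg');
        $sth->finish;
        # A steadily climbing number here points at a leak below DBI,
        # either in the driver or in libgds itself.
        printf "%8d iterations: %s kB\n", $n, vsize_kb() || '?'
            if $n % 10_000 == 0;
    }

Running the same script against another driver (as Jon did with MySQL) with a
flat curve is a reasonable way to rule out DBI itself.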
From: Flemming F. <di...@sw...> - 2000-09-28 15:58:37

I have just tested the latest version of dbi-interbase (from CVS) and the last
round of bugfixing seems to have killed the nasty deadlocking bug.

I'm starting to see a new bug that is really nasty: for some queries in some
situations the driver seems to cause a segv some time during prepare. The
really REALLY strange thing is that my own version doesn't do that. I have
diffed them, but I don't see any real difference in how the two versions
actually work.
From: Edwin P. <ed....@co...> - 2000-09-28 14:52:51

jb...@ca... wrote:
>
> Hi All,
>
> While using the DBD::InterBase driver I've noticed that some large
> repetitive querying jobs chew up a gradually increasing amount of
> memory, I've reduced it to this minimal case:
>
>   my $sth = $dbh->prepare('SELECT * FROM TEST WHERE COL_A="abcdefg";');
>   for(my $n=0;$n<1000000;$n++) {
>       $sth->execute;
>       $sth->finish;
>   }

I don't think this is because of some leak in the driver. Worse, the increment
size of the process changes drastically each time I run your script. Fetching
all the result instead of finish gives no difference. Same thing happens when
I run a similar test using IBPerl, which of course has different cleanup
procedures.

> During the run, watching with top I noticed the memory usage
> gradually creeps up, surely it should be constant?
> I tried this with a MySQL driver and memory usage appeared constant
> while watching it with top.
>
> I'm running RedHat 6.1
> InterBase SS 6
> DBD::InterBase 0.21
> Perl 5.6.0

I'm not sure this is because of the buggy IB SS for Linux, but you'd better
use the Classic until the next release of SS for Linux. I cc this message to
the dbi-interbase-devel list, where there might be some helpful comments from
other users.

Rgds,
Edwin.
From: Michael S. <mi...@ch...> - 2000-09-28 13:39:26

Hello dbi-interbase-devel,

I did it at last. I hope it became more convenient, transparent and
fool-proof. I can't CVS commit it now because of technical troubles, but it
applies to 0.21. The patch fixes the FreeBSD bug where it doesn't compile, and
it asks for dirs more accurately. I'm waiting for bugreports because I'm sure
it won't work on platforms other than FreeBSD :-)

Best regards,
Michael mailto:mi...@ch...
From: Edwin P. <ed....@co...> - 2000-09-27 16:35:20

"Mark D. Anderson" wrote:
>
> > the SS of IB 6.0 for _Linux_ is not supposed to be used in a production
> > application.
>
> i've tried classic too, and actually found its performance to be better, but
> i had problems with server processes waiting forever on gds_lock_mgr, which
> is not used with the SS. some problem with semaphore operations (i went into
> the debugger and attached).
>
> i haven't heard it stated anywhere that SS is less or more favored for
> linux?

Here is the original posting.

Rgds,
Edwin.

> Subject: Re: IB Super Server on Linux Crash
> Date: Tue, 19 Sep 2000 15:19:04 -0700
> From: "Charlie Caro" <cc...@in...>
> Reply-To: int...@me...
> To: int...@me...
>
> ALERT
>
> I've seen the "gds__alloc: memory pool corrupted" message reported several
> times over the last few months against IB Super Server on Linux. So I went
> investigating.
>
> As a result of that investigation, I've concluded that internal memory pools
> managed by gds__alloc/gds__free() are not properly protected between Linux
> threads for SuperServer.
>
> The bug results from the use of some very old V4 "transitional" macros and
> is fixed by some simple code cleanup.
>
> The macro V4_THREADING is locally defined in the source code module
> jrd/gds.c to activate the macros V4_MUTEX_(UN)LOCK. However, V4_THREADING is
> defined in jrd/gds.c only for NT, Solaris and HP; it is not defined for
> Linux.
>
> This code should be modified to replace V4_MUTEX_(UN)LOCK with
> THD_MUTEX_(UN)LOCK in jrd/gds.c.
>
> Users should switch to IB Classic on Linux until the problem has been fixed.
> Inprise engineers are preparing a fix for the InterBase project at
> SourceForge.
>
> Regards,
> Charlie
>
> P.S. Ann or Helen, you might want to bounce this message to the appropriate
> lists for wider dissemination.
From: Mark D. A. <md...@di...> - 2000-09-27 15:22:52

> > > * Explicit commits/rollbacks always commit_transaction /
> > >   rollback_transaction and then end the transaction, completely.
> > > * any operation that needs a query should check to see if one is active
> > >   and start one if not.
> >
> > the above is how the new dbd driver works in autocommit mode, but not how
> > it used to work, and it was definitely a problem. i haven't ranted as much
> > about
>
> Is it? It would be very useful if you can reproduce the problem for me.

not sure what you mean -- i'm saying the current driver is ok in this respect,
for autocommit, but the old one was not. it was easy to get locking problems
before, when coupled with the default transaction parameters. ann harrison has
confirmed for me that starting a transaction after the previous one ended (vs.
waiting for demand) is a Bad Idea.

> the SS of IB 6.0 for _Linux_ is not supposed to be used in a production
> application.

i've tried classic too, and actually found its performance to be better, but i
had problems with server processes waiting forever on gds_lock_mgr, which is
not used with the SS. some problem with semaphore operations (i went into the
debugger and attached).

i haven't heard it stated anywhere that SS is less or more favored for linux?

-mda