Re: [Dbi-interbase-devel] Announce: Bugfix:)
From: Mark D. A. <md...@di...> - 2000-10-03 21:01:20
> > in the handler.pl, the first time through

> This is not Possible(tm)
>
> Correct me if I'm wrong, but Apache::DBI will dynamically give you a new
> $dbh if the connection has died (see The Morning Bug and $dbh->ping).

probably true if i was using Apache::DBI, but i'm just using DBI directly.

> If you do prepare in handler.pl then what happens when two requests try
> to use the same statement, in a new connection?

i'm using prepare_cached() in DBI, which keeps the cache in the $dbh. if i
lose the $dbh, the cache goes too -- i'm not maintaining my own cache that
would live from $dbh to $dbh. (first sketch below.)

> > > Well, you do hang on to resources much longer than needed, so maybe it's
> > > not a complete win, have you tried not hanging on to the statements?
> >
> > what, risk memory corruption? :)
>
> Eh? there is no corruption; either it works or you get a sigsegv.

that was a joke :).

> > for me, that means that i'd have to keep my transactions so short that
> > i might as well do autocommit. most of my inserts/updates are either
> > single table or can be ordered in such a way they don't have to be
> > combined in a txn.
>
> I do all my work inside one transaction per request to apache; the
> transaction is committed at the end of the autohandler, so if something
> goes wrong in the course of the call no changes get written to the
> database (that is the point of using transactions)

ah, so the lifetime of your transaction is one http request; that makes
more sense to me. i thought you were keeping a transaction alive across
http requests, which gets trickier programmatically. i could do that too;
i just haven't had the need yet, as my atomic modification operations
currently fit into single sql requests. (second sketch below.)

> > if i were to switch over to holding transactions longer, i'd want
> > read_committed isolation.
>
> I've thought about using that, but it can lead to terrible inconsistency
> in the database.
>
> consider some_table that has an autoincrementing field some_table_id
> that is used like this:
>
>     insert into some_table (some_field) values (?)
>     select max(some_table_id) from some_table
>
> What happens when these queries are run in two transactions at the same
> time? They race; the same thing happens with read_committed, the timing
> is just different.

um, i think that example applies to just about any isolation level. it is
a well-known tempting-but-unreliable way to generate primary key values.
(third sketch below shows the usual generator-based alternative.)

> You could also be making decisions based on the result of several
> queries and then making the wrong decision because you got to read one
> transaction's output that changed all the data you rely on (but you only
> read half of the changed data)
>
> IMHO read_committed is useless for all but the most simple/non-critical
> work.

true; i wouldn't do it unless i knew for a fact that (a) it resulted in
better performance, and (b) it was safe. i don't have much intuition about
ib's versioning and its performance characteristics. for example, suppose
i had a $dbh with a permanent readonly (isc_tpb_read) read_committed
transaction that it never committed, used for various reports -- would i
get better performance that way? (fourth sketch below.)

-mda
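
[Editorial sketch 1] A minimal illustration of the prepare_cached()
pattern described above, assuming a plain DBI connection; the table and
column names and the connection details are placeholders, not from the
thread. The point is that the cached statement handles live inside the
$dbh itself (DBI keeps them in $dbh->{CachedKids}), so if the connection
dies, the cache dies with it:

    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:InterBase:db=test.gdb', 'sysdba', 'masterkey',
                           { RaiseError => 1, AutoCommit => 1 });

    sub lookup_name {
        my ($dbh, $id) = @_;
        # the first call on this $dbh prepares the statement; later calls
        # with identical SQL get the handle back from the cache in $dbh
        my $sth = $dbh->prepare_cached(
            'SELECT some_field FROM some_table WHERE some_table_id = ?');
        $sth->execute($id);
        my ($name) = $sth->fetchrow_array;
        $sth->finish;
        return $name;
    }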
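
[Editorial sketch 2] The one-transaction-per-http-request shape described
above, rendered as the standard DBI eval/commit/rollback idiom. This is
not the poster's actual autohandler; do_request() is a stand-in for the
real per-request work:

    sub handle_request {
        my ($dbh, $r) = @_;
        $dbh->{AutoCommit} = 0;      # run the whole request in one txn
        $dbh->{RaiseError} = 1;
        eval {
            do_request($dbh, $r);    # hypothetical per-request work
            $dbh->commit;            # only reached if nothing died
        };
        if ($@) {
            warn "request aborted: $@";
            eval { $dbh->rollback }; # nothing gets written to the db
        }
    }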
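
[Editorial sketch 3] On the insert-then-"select max()" race: the usual
InterBase answer is a generator, since GEN_ID() hands out values outside
transaction control (generator increments are never rolled back), so two
concurrent transactions cannot receive the same value at any isolation
level. The generator name some_table_gen is hypothetical:

    my $value = 'some data';   # hypothetical payload
    # grab a key first, then insert it explicitly
    my ($id) = $dbh->selectrow_array(
        'SELECT GEN_ID(some_table_gen, 1) FROM RDB$DATABASE');
    $dbh->do('INSERT INTO some_table (some_table_id, some_field) VALUES (?, ?)',
             undef, $id, $value);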
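
[Editorial sketch 4] As for the closing question, one way to experiment
with a long-lived read-only read_committed handle in DBD::InterBase is
the ib_set_tx_param func method, which per the driver docs lets you set
the transaction parameter block for a handle. Treat the exact method and
parameter names as an assumption to verify against your driver version:

    my $report_dbh = DBI->connect('dbi:InterBase:db=test.gdb',
                                  'sysdba', 'masterkey',
                                  { RaiseError => 1, AutoCommit => 0 });
    $report_dbh->func(
        -access_mode     => 'read_only',       # maps to isc_tpb_read
        -isolation_level => 'read_committed',
        'ib_set_tx_param');
    # ... run report queries on $report_dbh without ever committing ...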