From: Doug C. <de...@fl...> - 2002-03-04 17:48:07
> Now my understanding is that, contrary to Sybase and DB2, Oracle does not
> use read locks to implement serializable isolation. So you find out at
> transaction commit time that the transaction was unserializable, rather
> than having the transaction block halfway through. This does sound like
> optimistic locking to me. I think postgres might also work this way.
>
> P.S. I would personally be tempted to think of this as a misfeature of
> oracle since it means that with very-high-concurrency data and a
> high isolation level, the probability of transaction failure is
> large. Perhaps I'm wrong on this; there certainly are performance
> advantages to the non-blocking model.

It used to be (and should still be) that Postgres and Interbase use versioning to support higher concurrency. The design is quite similar to the caching algorithm I proposed (but it works in the database because the database is in control of the transaction!).

Once a transaction starts, all reads return versions of rows from the transaction start time (at least in the atomic transaction case). This means that transactions which only read never fail. So report generators and backup programs can run concurrently with other transactions, and there is no interference.

Only two transactions which both write can interfere with each other. Clearly, if two transactions attempt to write-lock the same row, one will have to fail (or wait). So SELECT FOR UPDATE can make these transactions fail faster, and is a good thing.

In databases with full versioning, I don't think the "probability of transaction failure is large"; it can be quite the contrary, depending on how the database is being used.
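The versioning scheme described above (snapshot reads from transaction start time, plus write-write conflict detection at commit) can be sketched in a few lines. This is a toy in-memory model, not any real database's implementation; the names (`VersionedStore`, `Txn`, `ConflictError`) and the first-committer-wins rule are illustrative assumptions.

```python
# Toy sketch of MVCC-style versioning: snapshot reads + write-write conflicts.
# All names here are invented for illustration; real engines differ in detail.
import itertools


class ConflictError(Exception):
    """Raised when a committed writer has overwritten a key we also wrote."""


class VersionedStore:
    def __init__(self):
        self._clock = itertools.count(1)   # monotonically increasing timestamps
        self._versions = {}                # key -> append-only [(commit_ts, value)]

    def begin(self):
        return Txn(self, next(self._clock))


class Txn:
    def __init__(self, store, start_ts):
        self.store = store
        self.start_ts = start_ts           # snapshot point: fixed at begin()
        self.writes = {}                   # private, uncommitted writes

    def read(self, key):
        # Snapshot read: our own writes, else the latest version committed
        # at or before our start time.  Read-only transactions never fail.
        if key in self.writes:
            return self.writes[key]
        for ts, value in reversed(self.store._versions.get(key, [])):
            if ts <= self.start_ts:
                return value
        return None

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Write-write conflict check (first committer wins): if any key we
        # wrote gained a newer committed version since we began, abort.
        for key in self.writes:
            versions = self.store._versions.get(key, [])
            if versions and versions[-1][0] > self.start_ts:
                raise ConflictError(key)
        commit_ts = next(self.store._clock)
        for key, value in self.writes.items():
            self.store._versions.setdefault(key, []).append((commit_ts, value))


store = VersionedStore()
setup = store.begin()
setup.write("balance", 100)
setup.commit()

reader = store.begin()                     # e.g. a long-running report
writer = store.begin()
writer.write("balance", 50)
writer.commit()

print(reader.read("balance"))              # still sees 100: its snapshot
print(store.begin().read("balance"))       # a new transaction sees 50
```

Note how the reader is never blocked and never fails, exactly as described above; only two concurrent writers to the same row can collide, and a SELECT FOR UPDATE-style early lock would merely surface that collision sooner instead of at commit.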