From: Nickolay S. <sk...@bs...> - 2002-12-15 11:26:22
Hello, Pavel!

Sunday, December 15, 2002, 1:51:27 PM, you wrote:

Just a few small corrections where I have knowledge.

> If I'm not mistaken, IB/FB has an optimisation that allows it to advance
> the OIT on rollback. Transactions maintain an in-memory undo log of all
> changes made, and this log is used to unwind all changes on rollback, so
> the OIT can advance (the transaction is marked as committed). A
> transaction is marked as rolled back, and thus interesting, only when
> the server comes up after abnormal termination (all active transactions
> are marked as rolled back). So the rollback command, or a client dying,
> does not freeze the OIT; only a server abend does. Of course, one can
> disable the transaction undo log in the transaction parameters. I'm also
> not sure if CS and SS behave the same way (but they should).

If the amount of work done in a transaction is large (~10000 row
modifications or more), the transaction-level savepoint is no longer used
to undo the transaction's work on rollback.

>> Record versions created by the OAT and subsequent transactions cannot
>> be garbage collected because they might be needed to provide a
>> consistent view for an active transaction.

> True, but the same applies to OIT transactions, because if they are in
> limbo, they could be committed manually.

>> The OIT is reset in one of two ways: a sweep or a backup and restore.

> True; "by the way" garbage collection can't reset the OIT. To do that,
> all row versions made by the transaction must be removed, and only sweep
> and gbak read the whole database and thus GC all those versions.

> The big transaction state table (due to a big difference between the
> OIT and the current transaction) only slows down transaction start a
> bit, not normal work. And of course, memory consumption is higher. But
> the real pain of a stalled OIT is blocked GC and the number of back
> versions that remain in the database. If they spill over from the head
> row's data page to other data pages, performance goes south pretty
> quickly.

>> Read-only read-committed transactions do not affect the OAT - they can
>> go on forever without affecting garbage collection. In this case,
>> unlike the OIT, the transaction must be declared as read-only, not just
>> be de-facto read-only.

> I'd like to add that they are actually handled internally as
> pre-committed. They should be a solution to the OIT/OAT problem caused
> by the use of CommitRetaining in 7x24 apps.
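To make that concrete, here is a minimal sketch of how a client would
declare such a pre-committed transaction through the classic ibase.h API.
The TPB constants are the standard ones from ibase.h; the helper function,
its name, and the trimmed error handling are illustrative assumptions, not
code from the engine:

#include <ibase.h>

/* TPB for a transaction declared read-only and read-committed. The
 * engine treats such a transaction as pre-committed, so it never holds
 * back the OAT or blocks garbage collection. */
static char readonly_tpb[] = {
    isc_tpb_version3,
    isc_tpb_read,            /* declared read-only, not just de-facto */
    isc_tpb_read_committed,  /* read-committed isolation */
    isc_tpb_rec_version,     /* don't block on concurrent updaters */
    isc_tpb_wait
};

/* Illustrative helper: start the transaction on an attached database. */
static int start_readonly_trans(isc_db_handle *db, isc_tr_handle *trans)
{
    ISC_STATUS status[20];

    if (isc_start_transaction(status, trans, 1, db,
                              (unsigned short) sizeof(readonly_tpb),
                              readonly_tpb))
        return 1;   /* caller should inspect the status vector */
    return 0;
}

The essential part is combining isc_tpb_read with isc_tpb_read_committed:
a transaction that merely happens to perform no writes still stalls the
OAT.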
>> The performance impact of the OIT, especially in the SS, is probably
>> not significant. The impact of the OAT is quite significant,
>> particularly in applications that repeatedly update the same records.

> I think that the impact of the OIT is almost the same as that of the
> OAT. Applications that update the same records are a problem of their
> own, because they create exceptionally long back-version chains due to
> inefficient GC (we discussed that in ib-architect some time ago).

Whether it is the OAT or the OIT that prevents GC is not significant for
this discussion. An uncommitted serializable transaction prevents GC very
effectively, and this is a security flaw. A malicious or simply
uneducated user with no special rights can cause this on a production
database, and shit is going to happen then (especially in a 24x7 usage
scenario).

>> That effect could be reduced by some bookkeeping that told the server
>> which record versions were important so intermediate versions could be
>> removed. At the moment, it seems to me that the cost of that
>> bookkeeping and the complexity of removing intermediate record versions
>> is not worth the cost and risk.

> What do you mean by intermediate versions? Many versions of a single
> row that are made by one transaction? I think that the bookkeeping is
> already there as the undo log. It's primarily used to undo the changes
> made by procedures and triggers, but it is also used at transaction
> level on rollback if the undo log is not disabled (if it is disabled,
> it's still there for procedures and triggers).

One transaction cannot create more than one version of a record in the
database (that is the reason why an explicit lock + update in one
transaction adds essentially no overhead over a plain update). I think
that Ann was talking about removing intermediate versions in a chain of
back record versions, leaving just enough versions to maintain the
consistency of all active snapshots. This would solve the particular
problem of performance degradation in the presence of old uncommitted
serializable transactions. But I don't think it needs to be implemented,
for the following reasons:

1. It is difficult (costly and risky, if you wish).

2. It is going to break internal database history consistency, i.e. you
will no longer be able to reconstruct the snapshot of recently committed
transactions. That could itself be a useful feature (it is implemented in
Oracle 9i and is very popular among DBAs).

I still recommend looking at the problem from the security point of view:
add the ability to limit the period of guaranteed snapshot consistency
(in hours, like in Oracle) and kill a transaction when its snapshot
becomes inconsistent.

Oracle internals and operation seem very, very close to Firebird
internals. What's more, Firebird CS has better potential to scale than
Oracle right now. The Firebird CS codebase is already cluster-ready, and
it should work extremely nicely in a cluster because of the very clever
VIO code. I'm curious when the first clustered solution will be
implemented with Firebird...

-- 
Best regards,
Nickolay Samofatov                          mailto:sk...@bs...
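P.S. Since disabling the undo log came up above: for reference, turning
it off is just one more item in the TPB. Again a hedged sketch against
ibase.h; isc_tpb_no_auto_undo is the real constant, and the rest of the
buffer is just a typical write TPB:

/* A typical read-write TPB with the transaction-level undo log disabled.
 * Without the undo log, a rollback cannot be rewritten as a commit, so
 * the transaction stays "interesting" in the TIP and can hold back the
 * OIT until the next sweep or backup/restore. */
static char write_tpb[] = {
    isc_tpb_version3,
    isc_tpb_write,          /* read-write transaction */
    isc_tpb_concurrency,    /* snapshot isolation */
    isc_tpb_wait,
    isc_tpb_no_auto_undo    /* skip the in-memory undo log */
};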