Deadlock detection is handled by the TxDag class in the same package. TxDag maintains a directed acyclic graph of WAITS_FOR edges induced by transactions waiting on locks for resources. The Queue constructor requires a TxDag instance as one of its parameters, and all Queue instances for the same database should share the same TxDag instance. TxDag implements the online deadlock detection algorithm described by Bayer and provides atomic updates when multiple edges must be added at once: if adding those edges would create a cycle, then no changes are made and a DeadlockException is thrown out of TxDag.addEdges(), and hence out of Queue.lock(), which calls TxDag.addEdges().
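To make the all-or-nothing contract concrete, here is a minimal sketch of that behavior. The class names TxDag, addEdges, removeEdges and DeadlockException follow the text, but the signatures, the adjacency-set representation, and the cycle check are illustrative assumptions, not the actual JDBM sources (which use the matrix scheme described below):

```java
import java.util.*;

/** Thrown when a requested set of WAITS_FOR edges would close a cycle. */
class DeadlockException extends RuntimeException {
    DeadlockException(String msg) { super(msg); }
}

/** Sketch of a WAITS_FOR graph with an atomic multi-edge add. */
class TxDag {
    // tx -> set of transactions that tx waits for
    private final Map<Object, Set<Object>> waitsFor = new HashMap<>();

    /** Adds tx WAITS_FOR t for every t in targets; all-or-nothing. */
    synchronized void addEdges(Object tx, Collection<?> targets) {
        // Check every edge BEFORE mutating, so a rejected batch leaves
        // the graph untouched, matching the documented atomicity.
        for (Object t : targets) {
            if (reachable(t, tx)) {
                throw new DeadlockException(tx + " -> " + t + " closes a cycle");
            }
        }
        waitsFor.computeIfAbsent(tx, k -> new HashSet<>()).addAll(targets);
    }

    /** Drops every edge involving tx (called when tx completes). */
    synchronized void removeEdges(Object tx) {
        waitsFor.remove(tx);
        for (Set<Object> s : waitsFor.values()) s.remove(tx);
    }

    /** Depth-first reachability over the existing WAITS_FOR edges. */
    private boolean reachable(Object from, Object to) {
        Deque<Object> stack = new ArrayDeque<>();
        Set<Object> seen = new HashSet<>();
        stack.push(from);
        while (!stack.isEmpty()) {
            Object v = stack.pop();
            if (v.equals(to)) return true;
            if (seen.add(v)) stack.addAll(waitsFor.getOrDefault(v, Set.of()));
        }
        return false;
    }
}
```

A Queue.lock() implementation would call addEdges() with the current lock holders as targets and let the DeadlockException propagate to the caller unchanged.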
The TxDag class is backed by an in-memory boolean matrix W that stores the edges and an in-memory int matrix M that stores the data computed by the deadlock detection algorithm (the closure of the WAITS_FOR relation is maintained as path counts between vertices of the graph). A vertex corresponds to a transaction. Transactions are automatically assigned row/column indices as edges are added to the graph. When a transaction completes, the application must invoke TxDag.removeEdges(tx, false), where the second argument is the waiting flag. This removes all edges involving that transaction and also reclaims the index assigned to it. Otherwise, the TxDag is maintained automatically by the Queue objects. Only the rows and columns in M corresponding to vertices with at least one edge (x WAITS_FOR y) are actually used in the computation, which keeps the cost of updating the closure down.
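The path-count closure can be sketched as follows. The matrices W and M and their roles match the description above; the fixed capacity, method names, and the sparse-row optimization being omitted are simplifying assumptions for illustration, not the JDBM code. Adding edge i -> j creates one new path a -> b for every pair of an existing path a -> i and an existing path j -> b, and a cycle exists exactly when a path j -> i already exists:

```java
/** Sketch of incremental path-count closure maintenance for a DAG. */
class PathCountDag {
    private final int n;
    private final boolean[][] W;  // W[i][j]: direct edge i WAITS_FOR j
    private final int[][] M;      // M[i][j]: number of distinct paths i -> j

    PathCountDag(int capacity) {
        n = capacity;
        W = new boolean[n][n];
        M = new int[n][n];
    }

    /** Path count, counting the empty path from a vertex to itself. */
    private int paths(int a, int b) { return a == b ? 1 : M[a][b]; }

    int pathCount(int a, int b) { return M[a][b]; }

    /** Adds edge i -> j and updates all path counts; rejects cycles. */
    void addEdge(int i, int j) {
        if (i == j || paths(j, i) > 0)
            throw new IllegalStateException("edge " + i + " -> " + j + " closes a cycle");
        if (W[i][j]) return;
        W[i][j] = true;
        // Every (path a->i, path j->b) pair yields one new path a->b.
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++)
                if (a != b && paths(a, i) > 0 && paths(j, b) > 0)
                    M[a][b] += paths(a, i) * paths(j, b);
    }

    /** Removes edge i -> j, subtracting exactly the paths it carried. */
    void removeEdge(int i, int j) {
        if (!W[i][j]) return;
        W[i][j] = false;
        // In a DAG no path a->i or j->b can itself use edge i->j,
        // so the same product counts the paths to subtract.
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++)
                if (a != b)
                    M[a][b] -= paths(a, i) * paths(j, b);
    }
}
```

Each edge update is O(n^2) over the used rows/columns, which is why restricting the computation to vertices that actually have edges matters.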
Given that deadlock detection is handled through coordination across Queue instances sharing a single TxDag instance, how does this change the design?
[mailto:email@example.com] On Behalf Of Kevin Day
Sent: Tuesday, March 28, 2006 2:23 PM
To: JDBM Developer listserv
Subject: re: [Jdbm-developer] 2PL: transactions, threads and locking (resend!)
It may very well be that we don't need a lock manager - but I'm not clear on how we can do deadlock detection if there isn't some sort of central repository of locks... I think this is where the Queue get() call comes in, in the code segment you wrote?
I'm not entirely sure how a user of the locking subsystem would get the queue for the resource that is being locked - or is there some other mechanism that you had in mind that I'm not intuiting? Or was the idea to have the resource itself hold a reference to its own queue?
That pushes the requirement for recid -> object mapping out of the locking sub-system, which may be desirable (otherwise, as you describe, you wind up with weak hash maps all over the place) - but it tightly couples the locking subsystem with the resource design. This could be problematic if we want to have optimistic locking vs 2PL, etc... I'm not sure...
Maybe the record manager itself needs to hold metadata about a given record (including its resource queue)?
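That suggestion could be sketched roughly as below. Everything here is hypothetical - the class names (LockMeta, queueFor, release), the recid type, and the placeholder Queue are stand-ins, not JDBM APIs - but it shows how a record-manager-owned recid -> queue map would keep the locking subsystem decoupled from the resource design while still sharing one TxDag per database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical per-record lock metadata owned by the record manager. */
class LockMeta {
    /** Placeholder for the real per-resource lock queue. */
    static class Queue {
        final Object txDag; // shared deadlock graph, one per database
        Queue(Object txDag) { this.txDag = txDag; }
    }

    // One TxDag for the whole database, as the design requires.
    private final Object sharedTxDag = new Object();
    private final Map<Long, Queue> queues = new ConcurrentHashMap<>();

    /** Lazily creates the queue for a recid; all queues share one TxDag. */
    Queue queueFor(long recid) {
        return queues.computeIfAbsent(recid, id -> new Queue(sharedTxDag));
    }

    /** Called when the record is deleted, so metadata doesn't leak. */
    void release(long recid) { queues.remove(recid); }
}
```

This avoids weak hash maps inside the locking subsystem: the map's lifetime is tied to record lifetime in the record manager, and release() reclaims the entry explicitly.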
These are architectural decisions that I think it's time to start digging into.
Resent as plain text to get around the listserv message size limit.