I am sure that you are going to come back with this question, so let me answer up front.  I do not have support for the intention/implicit lock model implemented yet.  I am not sure whether the SIX is essentially escalated to an X lock.  Broadly, my understanding is that the exclusive access will block until there are no other share locks on the parent/child resources, at which time the write operations will go through.


My intention has been to prove out the basic queue and deadlock mechanisms and then go back and add support for intention and implicit locking to the Queue.  2PL is basically a protocol for how you acquire and release locks on resources.




From: []
Sent: Wednesday, March 29, 2006 8:09 PM
To: Thompson, Bryan B.
Subject: re[8]: [Jdbm-developer] re[14]: 2PL: transactions, threads <snip>




One other question - just to make sure I'm clear on things:  You aren't suggesting that a tx can't escalate a lock on a record from S to X, right?  You are just saying that we can use lock hierarchies with intention locks to keep us from having to deal with thousands of locks, right?


So, when I do a read on a given record, I'd grab an IS lock on some higher level layer (yet to be determined) that gates access to, say, a range of 500 records that happens to include the record of interest.  I would then do my read.  If I then later decide that I need to actually update the record, I would attempt to upgrade my IS lock to a SIX lock, at which point I would be granted exclusive access to the 500 records, or I would block.
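To make the read-then-upgrade sequence concrete, here is a minimal Java sketch of a parent "group" lock with just the IS and SIX modes in play.  The class and method names (GroupLock, readIntent, upgradeToSix) are invented for illustration and are not jdbm API; a real queue would block on a conflict rather than throw.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the IS -> SIX upgrade on a parent resource that
// gates a range of records.  Names are illustrative, not jdbm API.
public class GroupLock {
    public enum Mode { NONE, IS, SIX }

    // tx name -> mode held on this group
    private final Map<String, Mode> held = new HashMap<>();

    // Reading any record under the group first takes IS on the group.
    public void readIntent(String tx) { held.put(tx, Mode.IS); }

    // Upgrade to SIX: SIX coexists only with IS holders.  A real
    // implementation would block here instead of throwing.
    public void upgradeToSix(String tx) {
        for (Map.Entry<String, Mode> e : held.entrySet()) {
            if (!e.getKey().equals(tx) && e.getValue() != Mode.IS)
                throw new IllegalStateException("would block behind " + e.getKey());
        }
        held.put(tx, Mode.SIX);
    }

    public Mode modeOf(String tx) { return held.getOrDefault(tx, Mode.NONE); }

    public static void main(String[] args) {
        GroupLock group = new GroupLock();
        group.readIntent("Tx1");
        group.readIntent("Tx2");       // two readers coexist under IS
        group.upgradeToSix("Tx1");     // ok: Tx2 only holds IS
        System.out.println(group.modeOf("Tx1")); // SIX
    }
}
```

Note that once Tx1 holds SIX, a second transaction's upgrade attempt on the same group conflicts, which is exactly the "or I would block" case above.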


It seems that there is definitely no reason that the granularity of the above couldn't be the individual record (i.e. why block 500 records when I only need to use one of them?) - but at the cost of having to maintain a lot of individual locks.


Unless those 500 records happened to be grouped in a manner that avoided deadlock (and this would require much more information than is available to the database layer), I'm not sure that I see how this could help with avoiding deadlock.


Keep in mind that a properly designed application will prevent most of this kind of issue at the application level - very likely using some sort of locking mechanism.


Anyway, I think the idea of hierarchical lock structures is interesting - I'm just not seeing how to apply it just yet...


Actually, what do you think about this (this is a completely random, half-baked thought, so it may be bunk):


In jdbm2, I think that we actually do have a meaningful higher level grouping of records - the records that have been used first by a given transaction may make for a nice layer for lock resolution...  For example:


If Tx1 is the first tx to read RowA, RowB and RowC, then those 3 rows become part of a lock grouping called L1.

If Tx2 comes along and reads RowB and RowD, then only RowD becomes part of locking group L2.  RowB remains in locking group L1.


So, Tx1 will, by definition, have IS access on L1.  If it decides that it needs to make any updates, it escalates its access to L1 to SIX.


If Tx2 then decides to modify RowB, it would have to escalate its access to L1 to SIX (which would block until Tx1 terminates).



I have no idea whether this would be better than just arbitrarily assigning ranges of recids to lock groups - but maybe...
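The "first toucher defines the group" assignment above can be sketched in a few lines.  Everything here (LockGrouping, groupFor, the "L1"/"L2" naming) is hypothetical, just mirroring the RowA-RowD example:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of lock groups assigned by the first transaction to touch a
// record: the record joins that tx's group permanently.  Illustrative only.
public class LockGrouping {
    private final Map<Long, String> groupOfRecord = new HashMap<>(); // recid -> group
    private final Map<String, String> groupOfTx = new HashMap<>();   // tx -> its group
    private int nextGroup = 1;

    // Called on every read; returns the lock group the record belongs to.
    public String groupFor(String tx, long recid) {
        String g = groupOfRecord.get(recid);
        if (g == null) {
            // Unclaimed record: it joins this tx's group, creating the
            // group on first use.
            g = groupOfTx.computeIfAbsent(tx, t -> "L" + nextGroup++);
            groupOfRecord.put(recid, g);
        }
        return g;
    }

    public static void main(String[] args) {
        LockGrouping lg = new LockGrouping();
        System.out.println(lg.groupFor("Tx1", 1)); // L1 (RowA)
        System.out.println(lg.groupFor("Tx1", 2)); // L1 (RowB)
        System.out.println(lg.groupFor("Tx1", 3)); // L1 (RowC)
        System.out.println(lg.groupFor("Tx2", 2)); // L1 - RowB keeps its group
        System.out.println(lg.groupFor("Tx2", 4)); // L2 (RowD)
    }
}
```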



- K




Not terminating a transaction is going to do more than just cause memory leaks.  If you are using locking then it will force other transactions requiring incompatible access to the same resources to block.  If any of those resources is frequently used, then it will lock everyone else out of your database!
I have another concern with releasing a transaction from a thread.  A transaction t which is not bound to a thread is not running, so any other transactions waiting on resources locked by t will be unable to run until t is rebound to another thread.  People doing this will have to be very careful.  And as I mentioned earlier, transaction restart is going to be interesting if you are passing a transaction along from one thread pool to another.  Restart may not be possible in that scenario.
I also have questions about reentrant and escalating locks.  I am of a mind that neither of these is a good thing.  Let me explain.
First, by a reentrant lock, do you mean not merely that requesting a lock which you already own is a NOP, but that a counter tracks the number of times you have obtained a given lock?  That kind of pattern suits recursive programming styles in which the locks are given up as you back out, but it is not well suited to 2PL in my mind, where the goal is to steadily acquire locks until the transaction holds all the locks that it needs and to never acquire another lock once it has started releasing locks.
Second, escalating a lock mode, e.g., from S (share) to X (exclusive), may not be such a good idea.  The design proposed by Gray et al. used a directed graph of resources to support intention and implicit locks.  Rather than escalating a lock on a specific resource, you would obtain an intention lock, e.g., SIX (share intention exclusive), on a parent resource.  This would give you share access to all child resources, and you could request an exclusive lock on specific child resources.
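For reference, here is a sketch of the standard multi-granularity lock modes from the Gray et al. design and their compatibility matrix; the Java class and layout are illustrative, not part of the jdbm locking package.

```java
// Sketch of the Gray et al. multi-granularity lock modes and their
// compatibility matrix.  Class and method names are illustrative.
public class LockModes {
    public enum Mode { IS, IX, S, SIX, X }

    // COMPAT[requested][held] == true when the two modes can coexist.
    private static final boolean[][] COMPAT = {
        //           IS     IX     S      SIX    X      (held)
        /* IS  */ { true,  true,  true,  true,  false },
        /* IX  */ { true,  true,  false, false, false },
        /* S   */ { true,  false, true,  false, false },
        /* SIX */ { true,  false, false, false, false },
        /* X   */ { false, false, false, false, false },
    };

    public static boolean compatible(Mode requested, Mode held) {
        return COMPAT[requested.ordinal()][held.ordinal()];
    }

    public static void main(String[] args) {
        // A SIX request must wait out S holders, but coexists with IS readers.
        System.out.println(compatible(Mode.SIX, Mode.S));  // false
        System.out.println(compatible(Mode.SIX, Mode.IS)); // true
    }
}
```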
If people want to use 2PL without intention and implicit locking, then there are various consequences.  First, the memory burden is much higher, since each individual resource requires its own queue to manage locking.  Second, there is an increased likelihood of deadlock, since you cannot use intention locks to coordinate locking over child resources, which in turn reduces concurrency.
The TxDag class can be reused to support deadlock detection without intention and implicit locking modes.  For example, you could combine the ReadWriteLock implementations in java.util.concurrent.locks with TxDag to detect deadlocks.  However, ReadWriteLock does NOT permit escalation of a read lock into a write lock.  There are a lot of good reasons to avoid this, since escalation makes deadlock much more likely.
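This is easy to demonstrate with the JDK's ReentrantReadWriteLock: a thread holding the read lock cannot upgrade to the write lock.  A blocking writeLock().lock() here would self-deadlock, so the sketch probes with tryLock() instead (the demo() helper is just for illustration):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Shows that java.util.concurrent.locks.ReentrantReadWriteLock refuses to
// escalate a read lock into a write lock held by the same thread.
public class NoUpgradeDemo {
    // Returns { upgrade attempt while reading, acquire after releasing }.
    public static boolean[] demo() {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.readLock().lock();
        // Upgrade attempt fails while this thread still holds the read lock.
        boolean upgraded = rw.writeLock().tryLock();
        rw.readLock().unlock();
        // Once the read lock is released, the write lock is available.
        boolean acquired = rw.writeLock().tryLock();
        if (acquired) rw.writeLock().unlock();
        return new boolean[] { upgraded, acquired };
    }

    public static void main(String[] args) {
        boolean[] r = demo();
        System.out.println("upgrade while reading: " + r[0]); // false
        System.out.println("acquire after release: " + r[1]); // true
    }
}
```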
I realize that the transparent use of locks in jdbm would seem a natural fit for obtaining a share/read lock when you access a resource and then escalating to a write/exclusive lock when you update a resource, but I am not sure that this is a practical scheme.

From: [] On Behalf Of Kevin Day
Sent: Wednesday, March 29, 2006 9:17 AM
To: JDBM Developer listserv
Subject: re[6]: [Jdbm-developer] re[14]: 2PL: transactions, threads <snip>



Yes.  Very elegantly put - I wish I had thought to explain it that way!


Another thing that I like about this strategy is that it gives an implicit mechanism for protecting against memory leaks.  If a developer creates a transaction and neither commits nor aborts it, they can't create and use another one in the same thread without signalling their intention...  Also, the thread local storage map could be used to see if a terminated thread still has a tx associated with it - if it does, then there's a memory leak.  jdbm could check for that at shutdown and provide a warning log or something.
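A minimal sketch of that guard, assuming a ThreadLocal binding; TxContext, Transaction, and begin() are invented names, not jdbm API:

```java
// Hypothetical sketch of thread-local transaction binding with a guard
// against starting a second tx on a thread whose first tx never terminated.
public class TxContext {
    public static class Transaction {
        private boolean terminated;
        public void commit() { terminated = true; }
        public void abort()  { terminated = true; }
        public boolean isTerminated() { return terminated; }
    }

    private static final ThreadLocal<Transaction> CURRENT = new ThreadLocal<>();

    public static Transaction begin() {
        Transaction t = CURRENT.get();
        // Refuse to silently orphan an active transaction on this thread.
        if (t != null && !t.isTerminated())
            throw new IllegalStateException("thread already has an active tx");
        Transaction fresh = new Transaction();
        CURRENT.set(fresh);
        return fresh;
    }

    public static void main(String[] args) {
        Transaction t1 = begin();
        t1.commit();
        begin();                  // fine: previous tx terminated
        boolean refused = false;
        try { begin(); }          // second active tx on this thread -> refused
        catch (IllegalStateException e) { refused = true; }
        System.out.println(refused); // true
    }
}
```

A shutdown-time leak check would just walk the tx map looking for dead threads with non-terminated transactions, as described above.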


That's a little thing, but it could keep a user of the package from accidentally shooting themselves in the foot...


- K


So, the answer is that only the current thread for a tx may release the thread <-> tx bond?  So, if the thread is blocked awaiting a lock it will not be possible to release that bond?

From: [] On Behalf Of Kevin Day
Sent: Tuesday, March 28, 2006 9:26 PM
To: JDBM Developer listserv
Subject: re[4]: [Jdbm-developer] re[14]: 2PL: transactions, threads <snip>



hmmm - I'm not seeing this.  Setting the current thread's lock context (i.e. the transaction) shouldn't require any access to any queues at all.  If the current thread is blocked, then it wouldn't be able to make the call to release the transaction from the current lock context...


Here's how I'm thinking about it:


The queue for a given resource captures the entities that are waiting on that resource (or have locks on that resource).  When the lock call is made, the current thread's lock context is retrieved, and that context is used to capture the reservation.  If the lock cannot be immediately granted, the call to lock() enters a wait loop.  At a basic level, this could be achieved by synchronizing on the queue itself and using wait/notify: the wait loop's monitor is notified whenever locks are released from the queue.
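A minimal sketch of that per-resource queue, exclusive-only for brevity (a real queue would also track lock modes and reentrancy); ResourceQueue and its methods are illustrative names, not jdbm API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a per-resource lock queue: lock() parks in a wait loop on the
// queue's monitor, unlock() notifies waiters.  Exclusive-only; names are
// illustrative, not jdbm API.
public class ResourceQueue {
    private Object holder;                          // lock context currently granted
    private final Deque<Object> waiters = new ArrayDeque<>();

    public synchronized void lock(Object ctx) {
        waiters.addLast(ctx);
        boolean interrupted = false;
        // Wait until the resource is free and we are at the head of the queue.
        while (holder != null || waiters.peekFirst() != ctx) {
            try { wait(); } catch (InterruptedException e) { interrupted = true; }
        }
        waiters.removeFirst();
        holder = ctx;
        if (interrupted) Thread.currentThread().interrupt();
    }

    public synchronized void unlock(Object ctx) {
        if (holder != ctx) throw new IllegalStateException("not the holder");
        holder = null;
        notifyAll();  // wake waiters so the new head of the queue can be granted
    }

    public static void main(String[] args) throws InterruptedException {
        ResourceQueue q = new ResourceQueue();
        q.lock("Tx1");
        Thread t = new Thread(() -> { q.lock("Tx2"); q.unlock("Tx2"); });
        t.start();
        q.unlock("Tx1");   // wakes Tx2's wait loop
        t.join();
        System.out.println("handoff complete");
    }
}
```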



Why do we need to look at all queues to determine if a single lock request should block or not?


- K



From: Thompson, Bryan B.
Sent: Tuesday, March 28, 2006 7:05 PM
To: 'Kevin Day'; JDBM Developer listserv
Subject: RE: re[2]: [Jdbm-developer] re[14]: 2PL: transactions, threads


Yes, prototypes may be the way to go.

My concern is that blocking is something that a tx may do for any resource
that it is accessing.  However, the notion of setting the bond between a tx
and a thread has been discussed in terms of a specific resource / lock.  You
simply do not know at the Queue level (a single resource) whether or not the
transaction is blocked waiting in a queue for another resource.  E.g.,
this decision needs to be made with respect to all resource locks requested
by the tx, not with respect to a single resource.


From: [] On Behalf Of Kevin Day
Sent: Tuesday, March 28, 2006 6:59 PM
To: JDBM Developer listserv
Subject: re[2]: [Jdbm-developer] re[14]: 2PL: transactions, threads <snip>

Hey - I used to work for Motorola - we had manuals of TLAs we had to carry.
Any reason to not hold a map of all locks keyed by transaction?  I suppose
the locks could be held by the tx itself, but it kind of starts pushing the
locking code into the transaction management code.  I could see where maybe
the locked *resource* could be held by the transaction (we will need
something like this for caching purposes anyway) - if you are already
maintaining a map of resource -> queues, could we combine those two to
provide the notification for lock release?
I really have no conceptual feel for the "correct" way for this to fall out
- I think that we need to do some high level thinking about the interactions
of the layers...  A proof of concept implementation would be a very good
starting place for helping us get our heads around the problem space.
Back to the question about enforcing the behavior of the 'current'
transaction for the thread not changing if the tx is blocked, I want to make
sure that I fully understand your concern.  Are you concerned about an
application calling setCurrentThreadTransaction() while another thread has
that transaction as *its* current thread transaction?
Or something else?
- K


Yes, you do have a thing for TLAs!

I think that there are unanswered questions below.  Notably, it sounds like
a transaction probably needs to maintain a collection of all locks that it
has obtained and not yet released, or requested and not yet obtained.  I do
not handle this in the locking package, since I was trying to avoid making
decisions on behalf of people who were designing their own Transaction
API.  However, maybe it would make sense if I did a sample design that
demonstrated how the locking API could be integrated for that purpose?

Bryan wrote:

Given that the thread associated with a transaction may not change while
that transaction is blocked, how are you going to enforce that at the
lockMgr level?  There are many, many possible Queue instances.  You
would need to know whether or not the transaction was blocked in any Queue
instance.  Are we going to use a thread local variable for that as well?
If so, it seems that this would have to be set by Queue.lock().

From: [] On Behalf Of Kevin Day
Sent: Tuesday, March 28, 2006 5:29 PM
To: 'JDBM Developer listserv '
Subject: [Jdbm-developer] re[14]: 2PL: transactions, threads <snip>

Gotcha.  Gotta love those TLAs...
To answer the question about impact on the design I (actually,
Alex) suggested:
I don't think there is much impact at all, except that the waits-for graph
needs to hold transaction references, and not references to the current
thread.
As long as the call to lock() either includes the transaction as a
parameter, or does its own thread local lookup to obtain the transaction,
then all is well.
If you have the transaction as a parameter, then we can always wrap the
library to allow for thread local storage of the current active
transaction.  If you do the thread local thing internal to the lock library,
then we just use the library as is.
The biggest thing is that we have got to have the extra level of indirection
that allows us to provide a lock context that is not just the current
thread, but any object (e.g. Transaction) that we want to associate with the
current thread.  The current javadocs in the header of the lock class you
emailed earlier imply that the thread is itself the lock context, and I
think we will want to relax that a bit with one layer of indirection.
- K
<snip>