From: Luke O. <lu...@me...> - 2003-04-14 22:22:06
Quoting Ian Bicking <ia...@co...>:

> On Mon, 2003-04-14 at 14:52, Luke Opperman wrote:
> ...
> Hmmm... purge, delete, etc. aren't well thought out yet. I.e.,
> they don't ensure consistency.

Hmm. What consistency should happen here? The problem we identified was _not_ a thread/race condition, but a single-thread issue that can be reproduced completely separately from SQLObject. I've attached a test script I used that performs the insert/purge/retrieve steps I mentioned last time, directly on a CacheSet.

> Actually there shouldn't be any locks there -- if None is returned
> from get(), the caller is responsible for creating an object and
> putting it into the cache with put(). Otherwise you can't be sure
> that two objects won't be created.

I'm confused. Are you saying it is the caller's responsibility to lock the get/fail/put cycle? I can understand that, in which case an alternate fix to Cache.py is to remove all locking. That doesn't change the fact that locking is very broken in the currently released file, single-threaded or not. :)

Or is the intention that Cache is expected to handle the locking, with the caller executing get and put in a specific order? (That's more like what the code appears to do.) I'm not sure I agree with this plan, and I'd definitely prefer it be more explicit (CacheSetInst.createLock.acquire()/release() by the caller?).

OK, I've blabbered enough. It appears the second option is what you were trying to implement, and I see your point about ensuring two copies aren't created unless you hold a lock in some way around the whole get/fail/put cycle... from my look it seems SQLObject.__new__ is doing the right thing then, so I'm feeling less comfy...

> I have a feeling if there's a problem, it's in
> SQLObject.__new__, where maybe it isn't careful enough about this
> sequence of calls. Anyway, I'll look at this more closely later.
> Taxes due tomorrow :(

Enjoy,

- Luke
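[Editor's note: a minimal sketch of the caller-side locking pattern discussed above. This is NOT SQLObject's actual Cache.py; the CacheSet class, createLock attribute, and fetch() helper are hypothetical names chosen to illustrate holding one lock around the whole get/fail/put cycle so two copies of an object cannot be created.]

```python
import threading


class CacheSet:
    # Hypothetical stand-in for a cache like SQLObject's CacheSet;
    # the cache itself does no locking -- the caller is responsible.
    def __init__(self):
        self._cache = {}
        self.createLock = threading.Lock()

    def get(self, obj_id):
        # Return the cached object, or None on a miss.
        return self._cache.get(obj_id)

    def put(self, obj_id, obj):
        self._cache[obj_id] = obj

    def purge(self, obj_id):
        # Single-thread insert/purge/retrieve is the sequence the
        # attached test script exercised.
        self._cache.pop(obj_id, None)


def fetch(cache, obj_id, factory):
    # Explicit caller-side locking: acquire the lock, then do the
    # entire get / create-on-miss / put cycle before releasing, so a
    # second caller can never construct a duplicate object.
    with cache.createLock:
        obj = cache.get(obj_id)
        if obj is None:
            obj = factory(obj_id)
            cache.put(obj_id, obj)
        return obj
```

Without the lock spanning all three steps, two callers could both see the miss and both call factory(); releasing between get() and put() reintroduces exactly the duplicate-creation problem described above.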