From: Ian B. <ia...@co...> - 2004-02-06 00:49:26
On Feb 5, 2004, at 6:17 PM, Peter Gebauer wrote:
>> I think there's the most general interest in optimistic locking, i.e.,
>> rows have a timestamp, and if the timestamp has been updated since the
>> object was fetched/synced you get some sort of conflict exception when
>> you try to commit changes. This is implemented almost entirely outside
>> of the database, so cross-database compatibility should be easy. Though
>> the rest may not be exactly easy.
>
> This is not what I'm looking for, though.
>
>> Anyway, it seems a lot better than table locking, and it's a bit better
>
> Some databases only support table locking, like the old MySQL classic
> backend (I don't know if that is still the case).
>
> Any good database supports row locking; they just support various sorts
> of modes.
>
> Table locking can be good to have if you wish to sum part of or an
> entire table and then write the result without allowing any changes to
> the table.
>
> Row locking sort of works like your suggestion, only blocking, which can
> be fine if the transaction is really fast (< 500ms) and you really want
> to keep consistency, but don't want to throw an error just because two
> clients couldn't both write within the same 200 ms or so.

Row locking could be difficult with SQLObject's caching. Ideally SQLObject shouldn't have to do a select when you fetch an object, if it already knows what's in the table. But if you are doing locking, there's a good chance you'll want to lock reads too, so that someone can't read the row, calculate updates based on those values, and then clobber your updates without having seen them. But potentially you could instantiate a SQLObject instance from the cache and never do a select before doing your update, and SQLObject wouldn't recognize that its cache was out of date.

Well... in that case, though, locking isn't your only problem. If you have a single process accessing the database, then thread locks can do what you want.
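For what it's worth, the optimistic-locking idea above can be sketched in a few lines. This is a minimal illustration, not SQLObject code: the `person` table, `update_name` helper, and the use of an integer `version` counter (instead of a timestamp, to sidestep clock resolution) are all invented for the example.

```python
import sqlite3

# Each row carries a version counter; an UPDATE only succeeds if the
# version is unchanged since the row was read, so a concurrent writer
# is detected as a conflict rather than silently overwritten.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO person VALUES (1, 'old name', 0)")

def update_name(conn, row_id, new_name, seen_version):
    cur = conn.execute(
        "UPDATE person SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, row_id, seen_version))
    if cur.rowcount == 0:
        # Someone else bumped the version since we read the row.
        raise RuntimeError("conflict: row was modified concurrently")

# Read the row, then write back using the version we saw.
row_id, name, version = conn.execute(
    "SELECT id, name, version FROM person WHERE id = 1").fetchone()
update_name(conn, row_id, "new name", version)       # succeeds
try:
    update_name(conn, row_id, "stale write", version)  # same old version
except RuntimeError as exc:
    print(exc)                                        # conflict detected
```

Since conflict detection happens in the WHERE clause of a plain UPDATE, nothing here is database-specific, which is the cross-database appeal.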
But presumably you have more than one process, and potentially non-SQLObject clients. I don't know; concurrency is challenging.

>> right (on the application level) without transactions. In fact, without
>> transactions I think you can't do it, because you might send one
>> update, and the second update (which is required for consistency)
>> could fail.
>
> No, it can't be done without transactions. Or rather, any implementation
> of this that I have seen always requires a transaction ending, either by
> commit or transaction close.
>
>>> may be considered CRITICAL.
>>
>> Yeah, that'd be cool. Is there a logging module backported to 2.2?
>
> No, I think not. I can't find it in the 2.2 docs anyway, and there's no
> python logging extra module in Debian. (good measurement, hehe)
>
> Are we trying to be 2.2 compatible? Then SQLObject could include a
> wrapper which implements the logger, handler, formatter and record.
>
> I can write something and send a patch for all of it. If you run 2.2
> the wrapper is used; if 2.3, then the logging module is used.

I checked online and found something at http://www.red-dove.com/python_logging.html -- I think it's the module the 2.3 logging module was based on. There's a good chance that the logging.py distributed with 2.3 could be dropped into 2.2 as well.

--
Ian Bicking | ia...@co... | http://blog.ianbicking.org
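P.S. The 2.2/2.3 compatibility shim Peter describes would presumably boil down to a guarded import along these lines. This is only a sketch; `sqlobject.logwrapper` is a hypothetical module name for the bundled fallback, not anything that exists today.

```python
# Prefer the standard library logging module (Python 2.3+); fall back
# to a bundled wrapper that implements the same logger/handler/
# formatter/record interface on 2.2.
try:
    import logging
except ImportError:
    # Hypothetical bundled fallback for Python 2.2.
    from sqlobject import logwrapper as logging

# Callers use the same API either way.
log = logging.getLogger("sqlobject")
log.addHandler(logging.StreamHandler())
log.setLevel(logging.CRITICAL)
log.critical("table lock failed")
```

Because both branches expose the same names, the rest of SQLObject never needs to know which implementation it got.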