Thread: [SQLObject] Locking and logging.
SQLObject is a Python ORM.
From: Peter G. <pe...@fs...> - 2004-02-05 11:57:50
Hello! If this message has been sent twice, it's because I've had problems
with my ISP's SMTP server lately.

I'm a big fan of SQLObject, very nicely done! I have two things: a question
and a patch.

I haven't found a way to do row or table locking for a transaction.
Basically, I'd like to do something like:

    conn = DBConnection.PostgresConnection('yada')
    trans = conn.transaction()
    p = Person(1, trans)
    p.lock()                    # table locking
    p.selectForUpdate("yada")   # row locking
    # ... do something that only one client may do at a time ...
    trans.commit()

There are many ways to do it, but since some databases cannot lock rows, and
databases support different locking modes, I really don't know how to make it
super general.

selectForUpdate() should work the same way as any select, except that if the
database supports row locking it will use a SELECT ... FOR UPDATE statement.
For table locking it's a bit tricky, since there are so many different modes
that vary by database implementation. Any suggestions?

The second thing is that I have made a very simple patch that allows database
connections to use a logger if specified:

    import logging

    logger = logging.getLogger('test')
    hdlr = logging.FileHandler('test.log')
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    hdlr.setFormatter(formatter)
    logger.addHandler(hdlr)
    logger.setLevel(logging.DEBUG)

    conn.debug = 1
    conn.logger = logger

The above code makes use of Python 2.3's logging facilities. I'd also like to
split SQLObject's logging into more levels. For example, SQL statements might
be considered DEBUG level, while inability to connect to a database might be
considered CRITICAL.

Cheers!
/Peter Gebauer
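A rough sketch of how a selectForUpdate() could degrade gracefully on
backends without row locks. The helper below is hypothetical, not part of
SQLObject; it only illustrates building the SQL:

```python
# Hypothetical helper: build a SELECT that takes row locks when the
# backend allows it, and falls back to a plain SELECT otherwise.

def select_for_update_sql(table, where, supports_row_locking=True):
    """Build the SQL for a (possibly locking) select."""
    sql = "SELECT * FROM %s WHERE %s" % (table, where)
    if supports_row_locking:
        # PostgreSQL, Oracle, and MySQL/InnoDB accept SELECT ... FOR UPDATE;
        # backends without row locks just run the unlocked statement.
        sql += " FOR UPDATE"
    return sql

print(select_for_update_sql("person", "id = 1"))
# SELECT * FROM person WHERE id = 1 FOR UPDATE
```

The per-backend capability flag is the key design question: each
DBConnection subclass would have to declare whether its database supports
row locking and which modes it offers.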
From: Ian B. <ia...@co...> - 2004-02-05 16:39:47
Peter Gebauer wrote:
> [snip]
> I haven't found a way to do row or table locking for a transaction.
> There are many ways to do it, but since some databases cannot lock on
> rows and databases have support for different modes I really don't know
> how to make it super general.
> [snip]
> Any suggestions?

I think there's the most general interest in optimistic locking, i.e., rows
have a timestamp, and if the timestamp has been updated since the object was
fetched/synced, you get some sort of conflict exception when you try to
commit changes. This is implemented almost entirely outside of the database,
so cross-database compatibility should be easy. Though the rest may not be
exactly easy.

Anyway, it seems a lot better than table locking, and it's a bit better than
row locking, but it catches the conflict later. It's harder to do right (on
the application level) without transactions. In fact, without transactions I
think you can't do it, because you might send one update, and the second
update (which is required for consistency) could fail. Oh well.

> The second thing is that I have made a very simple patch that allows
> database connections to use a logger if specified.
> [snip]
> I'd like to split up SQLObject logging into more logging. For example,
> SQL may be considered DEBUG level while inability to connect to a
> database may be considered CRITICAL.

Yeah, that'd be cool. Is there a logging module backported to 2.2?

Ian
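The optimistic scheme Ian describes can be sketched in a few lines. This is
illustrative only (in-memory stand-ins, not SQLObject classes), using a
version counter in place of a timestamp:

```python
# Sketch of timestamp/version-based optimistic locking: remember the
# version seen at fetch time, and refuse the commit if the row has been
# updated since.  All names here are illustrative.

class ConflictError(Exception):
    pass

class Row:
    def __init__(self, values, version=0):
        self.values = dict(values)
        self.version = version

class Table:
    """An in-memory stand-in for a database table with one row."""
    def __init__(self, row):
        self.row = row

    def fetch(self):
        # Snapshot the row, remembering the version we saw.
        return Row(self.row.values, self.row.version)

    def commit(self, snapshot, changes):
        # Conflict: someone else committed since our fetch.
        if self.row.version != snapshot.version:
            raise ConflictError("row changed since it was fetched")
        self.row.values.update(changes)
        self.row.version += 1

table = Table(Row({"name": "Peter"}))
a = table.fetch()
b = table.fetch()
table.commit(a, {"name": "Ian"})        # first writer wins
try:
    table.commit(b, {"name": "Andy"})   # second writer conflicts
except ConflictError:
    print("conflict detected")
```

As Ian notes, the check-and-update step itself must be atomic, which is why
a real implementation still needs a database transaction around the commit.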
From: Peter G. <pe...@fs...> - 2004-02-06 00:15:58
> I think there's the most general interest in optimistic locking, i.e.,
> rows have a timestamp, and if the timestamp has been updated since the
> object was fetched/synced you get some sort of conflict exception when
> you try to commit changes. This is implemented almost entirely outside
> of the database, so cross-database compatibility should be easy. Though
> the rest may not be exactly easy.

This is not what I'm looking for, though.

> Anyway, it seems a lot better than table locking, and it's a bit better

Some databases only support table locking, like the old MySQL classic backend
(I don't know if that is still the case). Any good database supports row
locking; they just differ in which modes they support.

Table locking can be good to have if you wish to sum part of or an entire
table and then write the result without allowing any changes to the table.

Row locking works somewhat like your suggestion, only blocking, which can be
fine if the transaction is really fast (< 500 ms) and you really want to keep
consistency, but don't want to throw an error just because two clients
couldn't both write within the same 200 ms or so.

> right (on the application level) without transactions. In fact, without
> transactions I think you can't do it, because you might send one update,
> and the second update (which is required for consistency) could fail.

No, it can't be done without transactions. Every implementation of this that
I have seen requires a transaction ending, either by commit or by closing the
transaction.

> > may be considered CRITICAL.
>
> Yeah, that'd be cool. Is there a logging module backported to 2.2?

No, I don't think so. I can't find it in the 2.2 docs anyway, and there's no
extra Python logging module in Debian (a good measurement, hehe).

Are we trying to be 2.2 compatible? Then SQLObject could include a wrapper
which implements the logger, handler, formatter and record. I can write
something and send a patch for all of it. If you run 2.2 the wrapper is used;
if 2.3, the logging module is used.

/Peter
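The compatibility shim Peter proposes could look roughly like this: try the
standard library logging module first, and fall back to a minimal stand-in on
Python 2.2. The fallback class is illustrative, not an actual SQLObject
module:

```python
# Sketch of a 2.2/2.3 logging compatibility wrapper.  On 2.3+ the real
# logging module is used; on 2.2 (no stdlib logging) a tiny stand-in
# supplies just enough of the Logger interface.

try:
    import logging

    def get_logger(name):
        return logging.getLogger(name)

except ImportError:
    import sys

    class _FallbackLogger:
        """A tiny subset of the logging.Logger interface."""
        def __init__(self, name):
            self.name = name

        def _log(self, level, msg):
            sys.stderr.write("%s %s %s\n" % (self.name, level, msg))

        def debug(self, msg):
            self._log("DEBUG", msg)

        def critical(self, msg):
            self._log("CRITICAL", msg)

    def get_logger(name):
        return _FallbackLogger(name)

log = get_logger("sqlobject")
log.debug("SELECT * FROM person WHERE id = 1")
```

Either way the caller only sees `get_logger()`, so SQLObject's own code
never has to know which implementation it got.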
From: Ian B. <ia...@co...> - 2004-02-06 00:49:26
On Feb 5, 2004, at 6:17 PM, Peter Gebauer wrote:
> [snip]
> Row locking works somewhat like your suggestion, only blocking, which
> can be fine if the transaction is really fast (< 500 ms) and you really
> want to keep consistency, but don't want to throw an error just because
> two clients couldn't both write within the same 200 ms or so.

Row locking could be difficult with SQLObject's caching. Ideally SQLObject
shouldn't have to do a select when you fetch an object, if it knows what's in
the table. But if you are doing locking, there's a good chance you'll want to
lock reads, so that someone can't read the row, calculate updates based on
those values, then clobber your updates without having seen them. But
potentially you could instantiate a SQLObject instance from the cache and
never do a select before doing your update, and SQLObject wouldn't recognize
that its cache was out of date. Well... in that case, though, locking isn't
your only problem.

If you have a single process accessing the database, then thread locks can do
what you want. But presumably you have more than one process, and potentially
non-SQLObject clients. I don't know, concurrency is challenging.

> [snip]
> Are we trying to be 2.2 compatible? Then SQLObject could include a
> wrapper which implements the logger, handler, formatter and record.
>
> I can write something and send a patch for all of it. If you run 2.2 the
> wrapper is used, if 2.3 then logging module is used.

I checked online and found something at
http://www.red-dove.com/python_logging.html -- I think it's the module the
2.3 logging module was based on. There's a good chance that the logging.py
distributed with 2.3 could be dropped into 2.2 as well.

--
Ian Bicking | ia...@co... | http://blog.ianbicking.org
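The single-process case Ian mentions can be sketched with a plain thread
lock serializing the read-modify-write cycle (the account example is
illustrative, not SQLObject code):

```python
# Sketch: in a single process, a threading.Lock can serialize a critical
# section that would otherwise need a database row lock.

import threading

balance_lock = threading.Lock()
account = {"balance": 100}

def withdraw(amount):
    # Only one thread at a time may read, check, and write the balance,
    # so no thread can act on a stale value.
    with balance_lock:
        if account["balance"] >= amount:
            account["balance"] -= amount
            return True
        return False

threads = [threading.Thread(target=withdraw, args=(30,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(account["balance"])  # 10: three withdrawals succeed, two are refused
```

This breaks down exactly where Ian says it does: a second process, or a
non-SQLObject client talking to the same database, never sees the lock.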
From: Peter G. <pe...@fs...> - 2004-02-06 09:13:26
> If you have a single process accessing the database, then thread locks
> can do what you want. But presumably you have more than one process,
> and potentially non-SQLObject clients. I don't know, concurrency is
> challenging.

The reason you'd want the locking at the database level is precisely the
reasons mentioned above. However, if SQLObject can provide a mechanism for
this in some way, we have a feature that other objectifications of flat
databases lack. And a very useful feature, too.

> I checked online and found something at
> http://www.red-dove.com/python_logging.html -- I think it's the module
> the 2.3 logging module was based on. There's a good chance that the
> logging.py distributed with 2.3 could be dropped into 2.2 as well.

I will check it out as soon as I get home from work. If this is interesting,
the next step would be to define what log level each action and result should
have.

/Peter
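Defining a level per action could start with a simple mapping table. The
event names below are hypothetical, chosen only to illustrate the earlier
suggestion of SQL at DEBUG and connection failures at CRITICAL:

```python
# Sketch of an event-to-level mapping for SQLObject logging.
# The event names ("sql", "connect_failed", ...) are illustrative.

import logging

LEVEL_FOR_EVENT = {
    "sql": logging.DEBUG,                 # every statement sent to the backend
    "connect": logging.INFO,              # successful connections
    "connect_failed": logging.CRITICAL,   # cannot reach the database at all
}

def log_event(logger, event, message):
    # Unknown events default to INFO rather than being dropped.
    logger.log(LEVEL_FOR_EVENT.get(event, logging.INFO), message)

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger("sqlobject")
log_event(logger, "sql", "SELECT * FROM person WHERE id = 1")
log_event(logger, "connect_failed", "could not connect to database")
```

Users could then silence the SQL chatter simply by setting the logger's
level to INFO, while connection failures would still come through.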
From: Andy T. <an...@ha...> - 2004-02-06 11:04:27
Ian Bicking wrote:
> [snip]
> I checked online and found something at
> http://www.red-dove.com/python_logging.html -- I think it's the module
> the 2.3 logging module was based on. There's a good chance that the
> logging.py distributed with 2.3 could be dropped into 2.2 as well.

That *is* the code the 2.3 logging module came from, and it has the same
interface. I use it quite happily with 2.2; it works like a charm.

Regards,

Andy
--
From the desk of Andrew J Todd esq - http://www.halfcooked.com/
From: Ian B. <ia...@co...> - 2004-02-06 00:50:48
On Feb 5, 2004, at 6:17 PM, Peter Gebauer wrote:
> I can write something and send a patch for all of it. If you run 2.2 the
> wrapper is used, if 2.3 then logging module is used.

Sure, but be sure to keep the locking patch separate from the logging patch.
Also, be sure to work off the Subversion repository,
svn://colorstudy.com/trunk/SQLObject

--
Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Peter G. <pe...@fs...> - 2004-02-06 15:49:59
> Sure, but be sure to keep the locking patch separate from the logging
> patch. Also, be sure to work off the Subversion repository,
> svn://colorstudy.com/trunk/SQLObject

Since Subversion is not part of Debian testing, I don't have Subversion.
Is it really stable enough to use?
From: Ian B. <ia...@co...> - 2004-02-06 17:19:30
Peter Gebauer wrote:
> Since Subversion is not part of Debian testing, I don't have Subversion.
> Is it really stable enough to use?

I haven't used it enough to say for sure, but they claim to be on the verge
of a 1.0 release (I think the current release is a candidate for 1.0). I
haven't encountered any problems at all. It's available in Debian unstable,
and it doesn't bring in many unstable dependencies or conflict with testing
versions of packages (at least that was my experience on my server, which
runs testing).

I'd be more reluctant too, if it weren't for CVS being so lame. Or maybe it's
just that SF's CVS servers are so flaky that it's tainted my CVS experience.

Ian