sqlobject-discuss Mailing List for SQLObject (Page 395)
SQLObject is a Python ORM.
Brought to you by: ianbicking, phd
From: David M. <da...@re...> - 2004-02-07 14:36:26
Hi,

What's the best way, within SQLObject, to delete an entire result set from a table? In other words, the equivalent of an SQL query like:

    DELETE FROM mytable WHERE last_name = 'Jones' AND age > 50

I know I can iterate through an SQLObject result set and invoke .destroySelf() on every element, but this feels painfully inefficient. Can someone please advise me of the fast, SQLObject-esque way of doing it?

Also, what's the best way of issuing a raw query? I know conn.getConnection().cursor().execute(query) works, but is there a better way, hopefully one which won't cause any problems within the SQLObject data structures?

--
Kind regards
David

-- leave this line intact so your email gets through my junk mail filter
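For reference, a minimal sketch of the approaches raised in the message above (the per-row destroySelf() loop and a raw DELETE through the connection), written against the modern sqlobject package. The class, table name and SQLite URI are invented, and deleteMany() / connection.query() reflect later releases rather than the 0.5.x version under discussion, so treat it as illustration rather than the list's answer.

    from sqlobject import SQLObject, StringCol, IntCol, AND, connectionForURI, sqlhub

    sqlhub.processConnection = connectionForURI('sqlite:/:memory:')

    class MyTable(SQLObject):
        lastName = StringCol()
        age = IntCol()

    MyTable.createTable()

    where = AND(MyTable.q.lastName == 'Jones', MyTable.q.age > 50)

    # 1. Pure SQLObject: one DELETE per matching row, but the cache stays consistent.
    for row in MyTable.select(where):
        row.destroySelf()

    # 2. Later releases grew a class method that issues a single DELETE:
    MyTable.deleteMany(where)

    # 3. Raw SQL through the connection, as in the question; fast, but it
    #    bypasses SQLObject's object cache.
    sqlhub.processConnection.query(
        "DELETE FROM my_table WHERE last_name = 'Jones' AND age > 50")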
From: Peter G. <pe...@fs...> - 2004-02-07 00:54:34
> Basically, the SQLObject package becomes sqlobject, and SQLObject.py
> becomes main.py. .new() becomes __init__() (i.e., class instantiation
> creates a row), and you use .get() to retrieve an already-existent row.

Thanks for clearing that up.

> That sounds good to me. I don't know the overhead of sending a logging
> message when no one is paying attention, so I don't know if it's worth

Me neither. Since it only logs through a level "filter" it should be one method call and one int equality test at most, even if there is nothing to actually log. At any rate, even if you log every database event, the bottleneck wouldn't be streaming a ~50-character string to a file or console; the actual communication with the database will remain the real bottleneck, especially if you are not talking to a local database.

> it to keep the connection debug attribute. There's a good chance it
> doesn't matter. The debug argument should stay, though, because that's
> an easy way to control output, and it could just configure the logger to
> print output to stdout.
>
> I believe the logging package also introduces the concept of a hierarchy
> of loggers. So there could be, say, a general sqlobject logger, and
> per-connection loggers. Or the connection logger could be a settable
> attribute. Or there could be a general logger, and a SQL logger that
> logged all communication to the database. I don't really have much
> experience to draw upon using that module.

Right now I can follow every connection-related matter through the logs. Stuff like object instantiation is not included; I think that's a waste of log. The interesting thing is the database communication, and that's being logged well enough.

I haven't put a lot of time into it yet, so it wouldn't be a tremendous loss of work if you choose to discard it. The logger is passed as a named argument to the connection constructor at instantiation time. I'm rewriting the examples to work with a logger too; I could rewrite them to work with the new 0.6 SQLObject design while I'm at it.

/Peter
From: Ian B. <ia...@co...> - 2004-02-06 21:21:21
Peter Gebauer wrote:
> Hey there!
>
> Just started working on my logging patch and when I was ready to try it out
> I found out that there are differences from the latest SVN snapshot compared
> to the 0.5.1 release, differences that break old code.
>
> For instance, there is no SQLObject.new(). Any plans on updating the
> examples and/or adding some documentation/comments to give a hint on how to
> make new instances of SQLObjects?

The changes go with my general plans for 0.6. I just committed them last night, so I haven't had a chance to update examples or docs, though tests are up to date. Basically, the SQLObject package becomes sqlobject, and SQLObject.py becomes main.py. .new() becomes __init__() (i.e., class instantiation creates a row), and you use .get() to retrieve an already-existent row.

> Also, I'd like to have some input on logging info. For the first try I just
> replaced the printDebug() calls with debug messages, except for the
> unwarranted rollback, which is a warning (it raises an exception when
> rollback or commit is executed outside transactions).
>
> I might sum several queries, like automatic schema generation, into info
> output and I will most likely add debug information for transactions too.
> Anything else I should think about?

That sounds good to me. I don't know the overhead of sending a logging message when no one is paying attention, so I don't know if it's worth it to keep the connection debug attribute. There's a good chance it doesn't matter. The debug argument should stay, though, because that's an easy way to control output, and it could just configure the logger to print output to stdout.

I believe the logging package also introduces the concept of a hierarchy of loggers. So there could be, say, a general sqlobject logger, and per-connection loggers. Or the connection logger could be a settable attribute. Or there could be a general logger, and a SQL logger that logged all communication to the database. I don't really have much experience to draw upon using that module.

I agree that schema generation, and perhaps some other stuff, can be at an INFO level, since it's definitely more interesting than other queries.

Ian
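To make the 0.5-to-0.6 change described above concrete, a small before/after sketch with a made-up Person class; the connection setup is included only so the snippet stands alone.

    from sqlobject import SQLObject, StringCol, connectionForURI, sqlhub

    sqlhub.processConnection = connectionForURI('sqlite:/:memory:')

    class Person(SQLObject):
        name = StringCol()

    Person.createTable()

    # 0.5.x style, as discussed above:
    #     p = Person.new(name='Alice')    # create a row
    #
    # 0.6 and later:
    p = Person(name='Alice')              # class instantiation creates the row
    same = Person.get(p.id)               # .get() fetches an already-existing row
    # with the default cache enabled, "same" is the very same instance as "p"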
From: Peter G. <pe...@fs...> - 2004-02-06 19:21:10
Hey there!

Just started working on my logging patch, and when I was ready to try it out I found that the latest SVN snapshot differs from the 0.5.1 release in ways that break old code.

For instance, there is no SQLObject.new(). Any plans on updating the examples and/or adding some documentation/comments to give a hint on how to make new instances of SQLObjects?

Also, I'd like some input on logging info. For the first try I just replaced the printDebug() calls with debug messages, except for the unwarranted rollback, which is a warning (it raises an exception when rollback or commit is executed outside transactions).

I might sum several queries, like automatic schema generation, into info output, and I will most likely add debug information for transactions too. Anything else I should think about?

/Peter G
From: Ian B. <ia...@co...> - 2004-02-06 17:19:30
Peter Gebauer wrote:
>> Sure, but be sure to keep the locking patch separate from the logging
>> patch. Also, be sure to work off the Subversion repository,
>> svn://colorstudy.com/trunk/SQLObject
>
> Since subversion is not a part of Debian testing I don't have Subversion.
> Is it really stable enough to use?

I haven't used it enough to say for sure, but they claim to be on the verge of a 1.0 release (I think the current release is a candidate for 1.0). I haven't encountered any problems at all. It's available in Debian unstable, and doesn't bring in many unstable dependencies or conflict with testing versions of packages (at least that was my experience on my server that runs testing).

I'd be more reluctant, too, if it weren't for CVS being so lame. Or maybe it's just that SF's CVS servers are so flaky that they've tainted my CVS experience.

Ian
From: Peter G. <pe...@fs...> - 2004-02-06 15:49:59
> Sure, but be sure to keep the locking patch separate from the logging
> patch. Also, be sure to work off the Subversion repository,
> svn://colorstudy.com/trunk/SQLObject

Since subversion is not a part of Debian testing I don't have Subversion. Is it really stable enough to use?
From: Andy T. <an...@ha...> - 2004-02-06 11:04:27
Ian Bicking wrote:
> On Feb 5, 2004, at 6:17 PM, Peter Gebauer wrote:
> [snip]
>>> Yeah, that'd be cool. Is there a logging module backported to 2.2?
>>
>> No, I think not. I can't find it in the 2.2 docs anyway, and there's no
>> python logging extra module in Debian. (Good measurement, hehe.)
>>
>> Are we trying to be 2.2 compatible? Then SQLObject could include a wrapper
>> which implements the logger, handler, formatter and record.
>>
>> I can write something and send a patch for all of it. If you run 2.2 the
>> wrapper is used, if 2.3 then the logging module is used.
>
> I checked online and found something at
> http://www.red-dove.com/python_logging.html -- I think it's the module
> the 2.3 logging module was based on. There's a good chance that the
> logging.py distributed with 2.3 could be dropped into 2.2 as well.
>
> -- Ian Bicking | ia...@co... | http://blog.ianbicking.org

That *is* the code the 2.3 logging module comes from, and it has the same interface. I use it quite happily with 2.2; works like a charm.

Regards,

Andy

--
From the desk of Andrew J Todd esq - http://www.halfcooked.com/
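One way the 2.2 fallback discussed in this thread could be structured: try the standard-library logging module first (new in Python 2.3) and fall back to a tiny print-based stand-in. This is only a sketch of the idea, not the wrapper or backport actually being proposed.

    # Fall back gracefully when the stdlib logging module is unavailable;
    # the stand-in class below is hypothetical.
    try:
        import logging
    except ImportError:
        logging = None

    if logging is not None:
        log = logging.getLogger('sqlobject')
    else:
        class _PrintLogger:
            # Minimal stand-in with the couple of methods a caller would use.
            def debug(self, msg, *args):
                print('DEBUG: ' + (msg % args if args else msg))
            def warning(self, msg, *args):
                print('WARNING: ' + (msg % args if args else msg))
        log = _PrintLogger()

    log.debug('connected to %s', 'sqlite:/:memory:')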
From: Peter G. <pe...@fs...> - 2004-02-06 09:13:26
> If you have a single process accessing the database, then thread locks
> can do what you want. But presumably you have more than one process,
> and potentially non-SQLObject clients. I don't know, concurrency is
> challenging.

The reason you'd want the locking at the database level is exactly the reasons mentioned above. However, if SQLObject can provide a mechanism for this in some way, we have a feature that other objectifications of flat databases lack. And a very useful feature too.

> I checked online and found something at
> http://www.red-dove.com/python_logging.html -- I think it's the module
> the 2.3 logging module was based on. There's a good chance that the
> logging.py distributed with 2.3 could be dropped into 2.2 as well.

I will check it out as soon as I get home from work. If it's interesting, the next step would be to define what log level each action and result should have.

/Peter
From: Ian B. <ia...@co...> - 2004-02-06 00:50:48
On Feb 5, 2004, at 6:17 PM, Peter Gebauer wrote:
> I can write something and send a patch for all of it. If you run 2.2 the
> wrapper is used, if 2.3 then the logging module is used.

Sure, but be sure to keep the locking patch separate from the logging patch. Also, be sure to work off the Subversion repository, svn://colorstudy.com/trunk/SQLObject

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Ian B. <ia...@co...> - 2004-02-06 00:49:26
On Feb 5, 2004, at 6:17 PM, Peter Gebauer wrote:
>> I think there's the most general interest in optimistic locking, i.e.,
>> rows have a timestamp, and if the timestamp has been updated since the
>> object was fetched/synced you get some sort of conflict exception when
>> you try to commit changes. This is implemented almost entirely outside
>> of the database, so cross-database compatibility should be easy. Though
>> the rest may not be exactly easy.
>
> This is not what I'm looking for, though.
>
>> Anyway, it seems a lot better than table locking, and it's a bit better
>
> Some databases only support table locking, like the old MySQL classic
> backend (don't know if that is still the case).
>
> Any good database would support row locking, only they support various
> sorts of modes.
>
> Table locking can be good to have if you wish to sum part of or an entire
> table and then write the result without allowing any changes to the table.
>
> Row locking sort of works like your suggestion, only blocking, which can be
> fine if the transaction is really fast (< 500 ms) and you really want to
> keep consistency, but don't want to fling an error out because two clients
> couldn't both write within the same 200 ms or so.

Row locking could be difficult with SQLObject's caching. Ideally SQLObject shouldn't have to do a select when you fetch an object, if it knows what's in the table. But if you are doing locking, there's a good chance you'll want to lock reads, so that someone can't read the row, calculate updates based on those values, then clobber your updates without having seen them. But potentially you could instantiate a SQLObject instance from the cache and never do a select before doing your update, and SQLObject wouldn't recognize that its cache was out of date. Well... in that case, though, locking isn't your only problem.

If you have a single process accessing the database, then thread locks can do what you want. But presumably you have more than one process, and potentially non-SQLObject clients. I don't know, concurrency is challenging.

>> right (on the application level) without transactions. In fact, without
>> transactions I think you can't do it, because you might send one update,
>> and the second update (which is required for consistency) could fail.
>
> No, it can't be done without transactions. Or, any implementation of this
> that I have seen always requires a transaction ending, either by commit or
> transaction close.
>
>>> may be considered CRITICAL.
>>
>> Yeah, that'd be cool. Is there a logging module backported to 2.2?
>
> No, I think not. I can't find it in the 2.2 docs anyway, and there's no
> python logging extra module in Debian. (Good measurement, hehe.)
>
> Are we trying to be 2.2 compatible? Then SQLObject could include a wrapper
> which implements the logger, handler, formatter and record.
>
> I can write something and send a patch for all of it. If you run 2.2 the
> wrapper is used, if 2.3 then the logging module is used.

I checked online and found something at http://www.red-dove.com/python_logging.html -- I think it's the module the 2.3 logging module was based on. There's a good chance that the logging.py distributed with 2.3 could be dropped into 2.2 as well.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Peter G. <pe...@fs...> - 2004-02-06 00:15:58
> I think there's the most general interest in optimistic locking, i.e.,
> rows have a timestamp, and if the timestamp has been updated since the
> object was fetched/synced you get some sort of conflict exception when
> you try to commit changes. This is implemented almost entirely outside
> of the database, so cross-database compatibility should be easy. Though
> the rest may not be exactly easy.

This is not what I'm looking for, though.

> Anyway, it seems a lot better than table locking, and it's a bit better

Some databases only support table locking, like the old MySQL classic backend (don't know if that is still the case).

Any good database would support row locking, only they support various sorts of modes.

Table locking can be good to have if you wish to sum part of or an entire table and then write the result without allowing any changes to the table.

Row locking sort of works like your suggestion, only blocking, which can be fine if the transaction is really fast (< 500 ms) and you really want to keep consistency, but don't want to fling an error out because two clients couldn't both write within the same 200 ms or so.

> right (on the application level) without transactions. In fact, without
> transactions I think you can't do it, because you might send one update,
> and the second update (which is required for consistency) could fail.

No, it can't be done without transactions. Or, any implementation of this that I have seen always requires a transaction ending, either by commit or transaction close.

>> may be considered CRITICAL.
>
> Yeah, that'd be cool. Is there a logging module backported to 2.2?

No, I think not. I can't find it in the 2.2 docs anyway, and there's no python logging extra module in Debian. (Good measurement, hehe.)

Are we trying to be 2.2 compatible? Then SQLObject could include a wrapper which implements the logger, handler, formatter and record.

I can write something and send a patch for all of it. If you run 2.2 the wrapper is used, if 2.3 then the logging module is used.

/Peter
From: Ian B. <ia...@co...> - 2004-02-05 16:42:46
Bruno Trevisan wrote:
> On Wed, 4 Feb 2004, Ian Bicking wrote:
>> I'm not really confident about __connection__ in general; it seems kind
>> of fragile.
>
> So, given that, are there any problems with adding _connection to
> persistent class constructors instead of using __connection__ in the
> module context (apart from the obvious code repetition)?

No, you can add it to the superclass as well, or assign it to the superclass, like SuperClass._connection = ..., and all subclasses should pick it up (so long as you haven't instantiated any of them, of course).

Ian
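For readers following the thread, the three ways of wiring up a connection that come up here, collected in one sketch. It reflects the 2004-era behaviour being discussed; the URI and class names are placeholders.

    from sqlobject import SQLObject, StringCol, connectionForURI

    conn = connectionForURI('sqlite:/:memory:')   # placeholder URI

    # Style 1: module-level __connection__; classes defined in this module
    # pick it up when the class statement runs.
    __connection__ = conn

    class Customer(SQLObject):
        name = StringCol()

    # Style 2: explicit per-class attribute; takes precedence over the
    # module-level one.
    class Invoice(SQLObject):
        _connection = conn
        amount = StringCol()

    # Style 3 (Ian's suggestion): assign to a shared superclass afterwards,
    #     SomeBase._connection = conn
    # and subclasses pick it up, provided none of them has been instantiated yet.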
From: Ian B. <ia...@co...> - 2004-02-05 16:39:47
Peter Gebauer wrote:
> Hello! If this message has been sent twice it's because I have had problems
> with my ISP's SMTP lately.
>
> I'm a big fan of SQLObject, very nicely done!
> I have two things, one question and one patch.
>
> I haven't found a way to do row or table locking for a transaction.
> Basically, I'd like to do something like
>
>     conn = DBConnection.PostgresConnection('yada')
>     trans = conn.transaction()
>     p = Person(1, trans)
>     p.lock()                     # table locking
>     p.selectForUpdate("yada")    # row locking
>     ... do something that only one client may do at a time ...
>     trans.commit()
>
> There are many ways to do it, but since some databases cannot lock on rows
> and databases support different modes, I really don't know how to make it
> super general.
>
> The selectForUpdate() should work the same way as any select, except that if
> the database supports row locking it will use a SELECT FOR UPDATE statement.
>
> For table locking it's a bit tricky since there are so many different modes
> that vary by database implementation.
>
> Any suggestions?

I think there's the most general interest in optimistic locking, i.e., rows have a timestamp, and if the timestamp has been updated since the object was fetched/synced you get some sort of conflict exception when you try to commit changes. This is implemented almost entirely outside of the database, so cross-database compatibility should be easy. Though the rest may not be exactly easy.

Anyway, it seems a lot better than table locking, and it's a bit better than row locking, but it catches the conflict later. It's harder to do right (on the application level) without transactions. In fact, without transactions I think you can't do it, because you might send one update, and the second update (which is required for consistency) could fail. Oh well.

> The second thing is that I have made a very simple patch that allows
> database connections to use a logger if specified.
>
>     import logging
>     logger = logging.getLogger('test')
>     hdlr = logging.FileHandler('test.log')
>     formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
>     hdlr.setFormatter(formatter)
>     logger.addHandler(hdlr)
>     logger.setLevel(logging.DEBUG)
>
>     conn.debug = 1
>     conn.logger = logger
>
> The above code will make use of Python 2.3's logging facilities.
> I'd like to split up SQLObject logging into more levels. For example,
> SQL may be considered DEBUG level while inability to connect to a database
> may be considered CRITICAL.

Yeah, that'd be cool. Is there a logging module backported to 2.2?

Ian
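The optimistic-locking pattern Ian describes, sketched with plain DB-API calls so it is not mistaken for actual SQLObject API; the table and column names are invented.

    import sqlite3

    class ConflictError(Exception):
        """Another writer changed the row between our read and our write."""

    def rename_person(db, person_id, new_name):
        cur = db.execute(
            "SELECT name, version FROM person WHERE id = ?", (person_id,))
        row = cur.fetchone()
        if row is None:
            raise LookupError(person_id)
        old_version = row[1]
        cur = db.execute(
            "UPDATE person SET name = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_name, person_id, old_version))
        if cur.rowcount == 0:
            # The guarded UPDATE matched nothing: someone else got there first.
            raise ConflictError(person_id)
        db.commit()

    # db = sqlite3.connect('app.db')   # person table with (id, name, version)
    # rename_person(db, 1, 'New Name')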
From: Bruno T. <bt...@as...> - 2004-02-05 12:19:42
Hi,

On Wed, 4 Feb 2004, Ian Bicking wrote:
> I'm not really confident about __connection__ in general; it seems kind
> of fragile.

So, given that, are there any problems with adding _connection to persistent class constructors instead of using __connection__ in the module context (apart from the obvious code repetition)?

> Otherwise you could track down that code in SQLObjectMeta and put in
> some print statements to see why __connection__ wasn't found.

I'll do that, and if I find something I'll let you know.

[]'s

Bruno Trevisan  bt...@as...  |=|  Async Open Source  |=|  D. Alexandrina, 2534
http://www.async.com.br/     |=|  +55 16 261-2331    |=|  13566-290
+55 16 9781-8717             |=|  São Carlos, SP, Brasil
From: Peter G. <pe...@fs...> - 2004-02-05 11:57:50
Hello! If this message has been sent twice it's because I have had problems with my ISP's SMTP lately.

I'm a big fan of SQLObject, very nicely done! I have two things, one question and one patch.

I haven't found a way to do row or table locking for a transaction. Basically, I'd like to do something like

    conn = DBConnection.PostgresConnection('yada')
    trans = conn.transaction()
    p = Person(1, trans)
    p.lock()                     # table locking
    p.selectForUpdate("yada")    # row locking
    ... do something that only one client may do at a time ...
    trans.commit()

There are many ways to do it, but since some databases cannot lock on rows and databases support different modes, I really don't know how to make it super general.

The selectForUpdate() should work the same way as any select, except that if the database supports row locking it will use a SELECT FOR UPDATE statement.

For table locking it's a bit tricky since there are so many different modes that vary by database implementation.

Any suggestions?

The second thing is that I have made a very simple patch that allows database connections to use a logger if specified.

    import logging
    logger = logging.getLogger('test')
    hdlr = logging.FileHandler('test.log')
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    hdlr.setFormatter(formatter)
    logger.addHandler(hdlr)
    logger.setLevel(logging.DEBUG)

    conn.debug = 1
    conn.logger = logger

The above code will make use of Python 2.3's logging facilities. I'd like to split up SQLObject logging into more levels. For example, SQL may be considered DEBUG level while inability to connect to a database may be considered CRITICAL.

Cheers!

/Peter Gebauer
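For context, the raw SQL that a selectForUpdate() would boil down to on a backend with row locking, shown with plain psycopg2 against PostgreSQL. The DSN, table and column names are made up, and none of this is SQLObject API.

    import psycopg2

    conn = psycopg2.connect("dbname=yada")        # placeholder DSN
    cur = conn.cursor()

    # Inside the transaction, FOR UPDATE locks the matched rows until
    # commit or rollback, so concurrent writers block instead of clobbering.
    cur.execute("SELECT id, balance FROM account WHERE id = %s FOR UPDATE", (1,))
    row = cur.fetchone()

    cur.execute("UPDATE account SET balance = %s WHERE id = %s",
                (row[1] + 10, row[0]))
    conn.commit()                                 # releases the row lock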
From: Ian B. <ia...@co...> - 2004-02-05 08:54:24
I'm doing some moving around of files for 0.6, including renaming the package to "sqlobject" instead of "SQLObject", to be more in line with other packages, and to avoid SQLObject/SQLObject/SQLObject.py. SQLObject.py is also renamed to main.py, and other internal modules are made lower case.

Of course, this just ruins any CVS history, so I figured why not futz around with that stuff at the same time? I just started learning Subversion for work today, so I thought I'd try it out. It's fairly easy for someone experienced with CVS, I think, and server setup is easy. The new repository is at svn://colorstudy.com/trunk/SQLObject , and you can check it out like:

    $ svn co svn://colorstudy.com/trunk/SQLObject

Right now no one except myself has write access. I'm not sure how best to set that up; it'll probably involve ssh accounts or something. Mail to -cvs would be working, except SF and my mail server aren't getting along.

There's a good chance the repository will move at some point, but I have a feeling I'll stick with Subversion. Arch seems neat and all, but Subversion seems more accessible to me, and the tools feel fairly polished (already better than CVS). There'll be a bunch of other stuff happening in the repository soon.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
From: Ian B. <ia...@co...> - 2004-02-05 05:57:36
On Jan 26, 2004, at 11:27 AM, Daniel Savard wrote:
> Hello Ian,
>
> Ian Bicking wrote:
>> Daniel Savard wrote:
>>> Here is a patch that adds simple inheritance to SQLObject 0.8.1.
>>> If you try it, please send me some comments (to the list or my
>>> email). Thanks.
>>
>> Thanks for contributing this. It looks like it required fewer
>> modifications than I expected. One thing I wondered about: will this
>> work with the addColumn class method? It doesn't seem like it -- the
>> inheritance structure seems to be constructed entirely at the time of
>> class instantiation. To do this, superclasses would need to know
>> about all their subclasses, so they could add the column to
>> subclasses as well.
>
> It is possible to keep a dictionary of child classes at construction
> time. This is what I did until I found SQLObject's global registry.
> I may re-add a child class dictionary to be able to tell each child
> class about the new column. Same for deleting a column.
>
>>> A new class attribute '_inheritable' is added. When this new
>>> attribute is set to 1, the class is marked 'inheritable' and two
>>> columns will automatically be added: childID (INT) and childName
>>> (TEXT). When a class inherits from a class that is marked
>>> inheritable, a new column (ForeignKey) will automatically be added:
>>> parent.
>>
>> Is childID needed? It seems like the child should share the parent's
>> primary key.
>
> It should be done. Is it simpler?
>
> There will only be holes in the id sequence (but delete will also
> create holes...). Might this confuse the sequence function in
> the SQL database?

A hole shouldn't really matter. IDs just have to be unique, not sequential. Holes also happen when transactions are rolled back. OTOH, separate keys could be more general for other sets of tables. So maybe I'm neutral.

>>> The columns childID and childName will respectively contain the
>>> id and the name of the child class (for example, 1 and 'Employee').
>>> This will permit calling a new function, getSubClass(), that will
>>> return a child class if possible.
>>>
>>> The column parent is a foreign key that points to the parent
>>> class. It works as all SQLObject's foreign keys. There will also be
>>> a parentID attribute to retrieve the ID of the parent class.
>>
>> These seem weird to me, but I think that's the disconnect between
>> RDBMS inheritance (which is really just another kind of
>> relationship), and OO inheritance.
>>
>> I'd rather see Person(1) return an Employee object, instead of having
>> to use getSubClass(). Or, call getSubClass() simply "child", so that
>> someEmployee.child.parent == someEmployee.
>
> It may be more 'Pythonic' to return the subclass directly, as you
> suggest. Also, it should be easy to implement. As all attributes of
> Person will be available, there should be no distinction for the
> caller.
>
>> But at that point, it really doesn't look like inheritance. Rather
>> we have a polymorphic one-to-one (or one-to-zero) relation between
>> Person and Employee, and potentially other tables (exclusive with the
>> Employee relation). The functional difference is that the join is in
>> some ways implicit, in that Employee automatically brings in all of
>> Person's columns. At which point Employee looks kind of like a view.
>>
>> Could this all be implemented with something like:
>>
>>     class Person(SQLObject):
>>         child = PolyForeignKey()
>>         # Which creates a column for the table name, and maybe one for the
>>         # foreign ID as well.
>>
>>     class Employee(SQLObject):
>>         parent = ColumnJoin('Person')
>>         # ColumnJoin -- maybe with another name -- brings in all the columns
>>         # of the joined class.
>>
>> This seems functionally equivalent to this inheritance, but phrased
>> as relationships. And, phrased as a relationship, it's more
>> flexible. For instance, you could have multiple ColumnJoins in a
>> class, without it being very confusing, and multiple PolyForeignKeys,
>> for different classes of objects.
>
> This may be OK for relationships but not for inheritance, as there may
> be non-SQLObject attributes and functions that we need to inherit.

Hmm... true. I've been trying to think of a good compromise, but I haven't yet.

I've been thinking about some changes to columns, and maybe some of this can fit in too. Like we can attach columns to a specific table, which means that Employee would inherit Person's columns, but those columns would be bound to the person table. That doesn't really deal with Person being polymorphic (i.e., that a Person can actually be an Employee). But it's still reasonable that Person would understand that it was a superclass, and have knowledge of its subclasses.

I'll definitely keep thinking about this, though.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
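The inheritance support that later shipped with SQLObject lives in sqlobject.inheritance and behaves broadly like the design being discussed here. A small sketch for comparison; the classes and columns are invented, and the details are recalled from the later API rather than taken from this patch.

    from sqlobject import StringCol, connectionForURI, sqlhub
    from sqlobject.inheritance import InheritableSQLObject

    sqlhub.processConnection = connectionForURI('sqlite:/:memory:')

    class Person(InheritableSQLObject):
        firstName = StringCol()
        lastName = StringCol()

    class Employee(Person):
        position = StringCol()

    Person.createTable()
    Employee.createTable()

    Employee(firstName='Ada', lastName='Lovelace', position='engineer')

    # Selecting through the parent class yields the most specific subclass,
    # much like the "Person(1) returns an Employee" behaviour discussed above.
    print(list(Person.select()))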
From: Ian B. <ia...@co...> - 2004-02-05 05:45:18
On Feb 4, 2004, at 5:28 PM, Bruno Trevisan wrote:
> Hi there
>
> I'm going through an odd problem while getting connected to a
> database within a class context.
>
> My code has the usual call:
>
>     __connection__ = PostgresConnection(...
>
> The call is made in the module context, and then every class that
> inherits SQLObject within the module has access to the connection
> and is able to query the database. This is for the development
> environment.

I'm not really confident about __connection__ in general; it seems kind of fragile.

The basic mechanism is simple enough. When the class is instantiated (that is, when the class: statement gets run), we look for a _connection attribute. If it's not found, then we look at sys.modules[cls.__module__].__connection__. If for some reason the module can't be found, or __connection__ can't be found, then it will fail. I don't know why this would happen, but I can imagine weird things happening during module loading that would cause the problem.

Otherwise you could track down that code in SQLObjectMeta and put in some print statements to see why __connection__ wasn't found.

-- Ian Bicking | ia...@co... | http://blog.ianbicking.org
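In rough pseudocode, the lookup Ian describes above (a paraphrase for readers, not the actual SQLObjectMeta source):

    import sys

    def find_connection(cls):
        # 1. An explicit _connection attribute on the class wins.
        conn = getattr(cls, '_connection', None)
        if conn is not None:
            return conn
        # 2. Otherwise fall back to the defining module's __connection__.
        module = sys.modules.get(cls.__module__)
        return getattr(module, '__connection__', None)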
From: David M. <da...@re...> - 2004-02-05 01:52:19
Hi,

I'm a total newcomer to SQLObject, having finally chosen it from the n other 'SQL-db-as-python-object' wrappers. I had originally vetoed SQLObject because it appeared to *require* classes to be declared to mimic existing databases. But on reading a bit deeper into the docs, and coming across the magic '_fromDatabase' attribute, I came up with a convenience layer (attached) for opening up and working with existing databases (well, databases that already have an 'id' index column).

A quick look at what it does:

    >>> import sqlobj
    >>> db = sqlobj.mysqlobj(db='test', user='test', passwd='test')
    >>> print db.tables
    ['address', 'people']
    >>> addr = db.address
    >>> print addr.columns
    ['street', 'suburb', 'city', 'phone']
    >>> pps = db.people.select()
    >>> for p in pps: print p
    <people 1 first='fred' last='daggg' age=34L>
    <people 2 first='mary' last='smith' age=27L>
    <people 3 first='adam' last='jones' age=42L>
    <people 4 first='jane' last='doe' age=35L>

Before developing it any further, I thought I'd post what I've done here, so that people might:

1) advise if anything similar, preferably better, has already been done (URLs please)
2) advise on any pitfalls my approach might suffer

Lastly, thanks to the SQLObject devs for a fine layer. It's the missing piece in my (yet another) Python web framework: www.freenet.org.nz/python/pyweb

Cheers
David

-- leave this line intact so your email gets through my junk mail filter
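The attached sqlobj module is not preserved in the archive, but the approach described, building classes on the fly and letting _fromDatabase introspect the column list from an existing schema, might look roughly like this. The factory, the MySQL URI and the naming are all guesses; newer SQLObject releases spell the attribute sqlmeta.fromDatabase instead.

    import sqlobject

    class DBWrapper(object):
        """Expose each existing table as a dynamically built SQLObject class."""

        def __init__(self, uri):
            self.connection = sqlobject.connectionForURI(uri)

        def table(self, name):
            # _fromDatabase asks SQLObject to introspect the columns itself,
            # so no column declarations are needed (2004-era spelling).
            return type(name.capitalize(), (sqlobject.SQLObject,), {
                '_connection': self.connection,
                '_fromDatabase': True,
            })

    # db = DBWrapper('mysql://test:test@localhost/test')   # placeholder URI
    # People = db.table('people')
    # for p in People.select():
    #     print(p)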
From: Bruno T. <bt...@as...> - 2004-02-04 23:28:46
Hi there,

I'm going through an odd problem while getting connected to a database within a class context.

My code has the usual call:

    __connection__ = PostgresConnection(...

The call is made in the module context, and then every class that inherits SQLObject within the module has access to the connection and is able to query the database. This is for the development environment.

In the production environment this isn't working. Although __connection__ is a valid database connection, objects created from that module's classes cannot access the database. If I replace the __connection__ call in the module context with a call like:

    _connection = PostgresConnection(...

in the class __init__ method, then it works fine.

I guess this is due to some mismatch between the configuration of the two environments. So far I have checked the Python version (running 2.2.1 in both) and the SQLObject version (both match, CVS latest) and I couldn't find anything.

Can anyone point me to a possible reason for this to happen? Where else should I look for the problem?

Thank you,

[]'s

Bruno Trevisan  bt...@as...  |=|  Async Open Source  |=|  D. Alexandrina, 2534
http://www.async.com.br/     |=|  +55 16 261-2331    |=|  13566-290
+55 16 9781-8717             |=|  São Carlos, SP, Brasil
From: <sw...@fr...> - 2004-01-31 03:29:27
Hi guys!

Just a little mail to drop the following patch adding support for DateTime column types in SQLite databases managed by SQLObject. I used the TIMESTAMP type since DATETIME is not available in SQLite.

Cheers
-- Philippe
From: Daniel S. <sa...@gn...> - 2004-01-26 17:27:22
Hello Ian,

Ian Bicking wrote:
> Daniel Savard wrote:
>> Here is a patch that adds simple inheritance to SQLObject 0.8.1.
>> If you try it, please send me some comments (to the list or my
>> email). Thanks.
>
> Thanks for contributing this. It looks like it required fewer
> modifications than I expected. One thing I wondered about: will this
> work with the addColumn class method? It doesn't seem like it -- the
> inheritance structure seems to be constructed entirely at the time of
> class instantiation. To do this, superclasses would need to know
> about all their subclasses, so they could add the column to subclasses
> as well.

It is possible to keep a dictionary of child classes at construction time. This is what I did until I found SQLObject's global registry. I may re-add a child class dictionary to be able to tell each child class about the new column. Same for deleting a column.

>> A new class attribute '_inheritable' is added. When this new
>> attribute is set to 1, the class is marked 'inheritable' and two
>> columns will automatically be added: childID (INT) and childName
>> (TEXT). When a class inherits from a class that is marked
>> inheritable, a new column (ForeignKey) will automatically be added:
>> parent.
>
> Is childID needed? It seems like the child should share the parent's
> primary key.

It should be done. Is it simpler?

There will only be holes in the id sequence (but delete will also create holes...). Might this confuse the sequence function in the SQL database?

>> The columns childID and childName will respectively contain the id
>> and the name of the child class (for example, 1 and 'Employee'). This
>> will permit calling a new function, getSubClass(), that will return a
>> child class if possible.
>>
>> The column parent is a foreign key that points to the parent
>> class. It works as all SQLObject's foreign keys. There will also be
>> a parentID attribute to retrieve the ID of the parent class.
>
> These seem weird to me, but I think that's the disconnect between
> RDBMS inheritance (which is really just another kind of relationship),
> and OO inheritance.
>
> I'd rather see Person(1) return an Employee object, instead of having
> to use getSubClass(). Or, call getSubClass() simply "child", so that
> someEmployee.child.parent == someEmployee.

It may be more 'Pythonic' to return the subclass directly, as you suggest. Also, it should be easy to implement. As all attributes of Person will be available, there should be no distinction for the caller.

> But at that point, it really doesn't look like inheritance. Rather we
> have a polymorphic one-to-one (or one-to-zero) relation between Person
> and Employee, and potentially other tables (exclusive with the
> Employee relation). The functional difference is that the join is in
> some ways implicit, in that Employee automatically brings in all of
> Person's columns. At which point Employee looks kind of like a view.
>
> Could this all be implemented with something like:
>
>     class Person(SQLObject):
>         child = PolyForeignKey()
>         # Which creates a column for the table name, and maybe one for the
>         # foreign ID as well.
>
>     class Employee(SQLObject):
>         parent = ColumnJoin('Person')
>         # ColumnJoin -- maybe with another name -- brings in all the columns
>         # of the joined class.
>
> This seems functionally equivalent to this inheritance, but phrased as
> relationships. And, phrased as a relationship, it's more flexible.
> For instance, you could have multiple ColumnJoins in a class, without
> it being very confusing, and multiple PolyForeignKeys, for different
> classes of objects.

This may be OK for relationships but not for inheritance, as there may be non-SQLObject attributes and functions that we need to inherit.

> Ian

Daniel
From: Daniel S. <sql...@xs...> - 2004-01-26 15:05:08
Hello Ian,

Ian Bicking wrote:
> Daniel Savard wrote:
>> Here is a patch that adds simple inheritance to SQLObject 0.8.1.
>> If you try it, please send me some comments (to the list or my
>> email). Thanks.
>
> Thanks for contributing this. It looks like it required fewer
> modifications than I expected. One thing I wondered about: will this
> work with the addColumn class method? It doesn't seem like it -- the
> inheritance structure seems to be constructed entirely at the time of
> class instantiation. To do this, superclasses would need to know
> about all their subclasses, so they could add the column to subclasses
> as well.

It is possible to keep a dictionary of child classes at construction time. This is what I did until I found SQLObject's global registry. This dictionary may be used to tell children when adding columns to a parent class.

>> A new class attribute '_inheritable' is added. When this new
>> attribute is set to 1, the class is marked 'inheritable' and two
>> columns will automatically be added: childID (INT) and childName
>> (TEXT). When a class inherits from a class that is marked
>> inheritable, a new column (ForeignKey) will automatically be added:
>> parent.
>
> Is childID needed? It seems like the child should share the parent's
> primary key.

It should be done. Is it simpler?

There will only be holes in the id sequence (but delete will also create holes...). Might it confuse the sequence getter? (It will not be used anymore on child classes.)

>> The columns childID and childName will respectively contain the id
>> and the name of the child class (for example, 1 and 'Employee'). This
>> will permit calling a new function, getSubClass(), that will return a
>> child class if possible.
>>
>> The column parent is a foreign key that points to the parent
>> class. It works as all SQLObject's foreign keys. There will also be
>> a parentID attribute to retrieve the ID of the parent class.
>
> These seem weird to me, but I think that's the disconnect between
> RDBMS inheritance (which is really just another kind of relationship),
> and OO inheritance.
>
> I'd rather see Person(1) return an Employee object, instead of having
> to use getSubClass(). Or, call getSubClass() simply "child", so that
> someEmployee.child.parent == someEmployee.

It may be more 'Pythonic' to return the subclass directly. I will look into that. As all attributes of Person will be available, there should be no distinction for the caller if we return a child instead of the requested class.

> But at that point, it really doesn't look like inheritance. Rather we
> have a polymorphic one-to-one (or one-to-zero) relation between Person
> and Employee, and potentially other tables (exclusive with the
> Employee relation). The functional difference is that the join is in
> some ways implicit, in that Employee automatically brings in all of
> Person's columns. At which point Employee looks kind of like a view.
>
> Could this all be implemented with something like:
>
>     class Person(SQLObject):
>         child = PolyForeignKey()
>         # Which creates a column for the table name, and maybe one for the
>         # foreign ID as well.
>
>     class Employee(SQLObject):
>         parent = ColumnJoin('Person')
>         # ColumnJoin -- maybe with another name -- brings in all the columns
>         # of the joined class.
>
> This seems functionally equivalent to this inheritance, but phrased as
> relationships. And, phrased as a relationship, it's more flexible.
> For instance, you could have multiple ColumnJoins in a class, without
> it being very confusing, and multiple PolyForeignKeys, for different
> classes of objects.

This may be OK for relationships but not for inheritance, as there may be non-SQLObject attributes and functions that we need to inherit from the parent class.

> Ian

Daniel Savard
From: Victor Ng <vi...@ga...> - 2004-01-23 14:26:09
Is there a way to get SQLObject to use multicolumn constraints?

vic
From: Ian B. <ia...@co...> - 2004-01-22 20:25:40
Brad Bollenbach wrote:
> [I've submitted a request to SourceForge re: why I'm unable to post to
> sqlobject-discuss. Got the blanket response (from it being assigned to
> someone), but no word yet on a resolution.]
>
> Hi,
>
> Here's the unit test I wrote (including slightly modifying an existing
> test class) to demonstrate how instance-level validation should work. I
> include only the relevant parts from my copy of test.py:
>
>     class Age2GreaterThanAge1(Validator.FancyValidator):
>         def validatePython(self, values, state):
>             cur_obj = state.soObject
>             if values.has_key('age1'):
>                 age1 = int(values['age1'])
>             else:
>                 age1 = cur_obj.age1
>
>             if values.has_key('age2'):
>                 age2 = int(values['age2'])
>             else:
>                 age2 = cur_obj.age2
>
>             if not (age2 > age1):
>                 raise Validator.InvalidField(
>                     self.message('badAges', 'Age 2 must be greater than age1'),
>                     values, state)
>
>     class SOValidation(SQLObject):
>         _validator = Age2GreaterThanAge1()
>
>         name = StringCol(validator=Validator.PlainText(), default='x',
>                          dbName='name_col')
>         name2 = StringCol(validator=Validator.ConfirmType(str), default='y')
>         name3 = IntCol(validator=Validator.Wrapper(fromPython=int), default=100)
>         age1 = IntCol(notNone=True, default=1)
>         age2 = IntCol(notNone=True, default=2)
>
>     class ValidationTest(SQLObjectTest):
>
>         classes = [SOValidation]
>
>         # ...snipped some field validation tests here...
>
>         def testInstanceValidation(self):
>             obj = SOValidation.new(age1=5, age2=18)
>             self.assertRaises(Validator.InvalidField, setattr, obj, 'age2', 4)
>             self.assertRaises(Validator.InvalidField, obj.set, age1=20, age2=19)
>             self.assertRaises(Validator.InvalidField, SOValidation.new, age1=7, age2=3)
>
> Does this look like a reasonable way for this functionality to work?

Yes, that looks good. Well, I've been using a plain "validator" attribute in other places, but "_validator" fits better with the SQLObject style (so far). It should take a list of validators as well, in which case all validators must pass (in order).

> I started hunting through the code to implement it but, of course, noticed
> that column setting seems to be happening in more than one place. It
> seems to me that set() should be the main method to use for setting
> values (e.g. foo.bar = 1 would ultimately end up in a call to
> foo.set(bar=1)), and that there should be exactly one place where
> all real database updates happen, so that we can also make this the
> one place where all field- and instance-level validation takes place.

There should be a third, internal method that all sets go through. Well... unless we wait until columns are implemented as descriptors, which should simplify some of this stuff. There are two setters right now, one for a batch set (.set()) and one for setting individual columns (_SO_setValue). For now, they should both be changed to call some third method.

One thing I'm not sure about -- the validator protocol expects a dictionary of values, and I'm not sure that we have one available. It also has to be the appropriate kind of values -- either the database values or the Python values, depending on which way the conversion is going. So... maybe this really should be done alongside the descriptor change.

Ian