sqlobject-discuss Mailing List for SQLObject
SQLObject is a Python ORM.
Brought to you by: ianbicking, phd
From: Nick <ni...@dd...> - 2003-04-30 21:11:45

On Wed, 2003-04-30 at 15:25, Luke Opperman wrote:
> Is there a reason you can't add an artificial surrogate key?

Because it's sloppy :) I prefer to fix the tool to fit the practice rather than change a perfectly good practice to fit the tool. In my case I've got a permissions table that maps groups and files to permissions. It would be very messy to create intermediate tables just so SQLObject can do RelatedJoins.

On Wed, 2003-04-30 at 15:46, Ian Bicking wrote:
> I would subclass RelatedJoin, changing the performJoin method. In
> general I think any table that doesn't have a key should be represented
> with a join, and rows in that table won't be turned into full objects.
> That means they won't be mutable, but it shouldn't be a problem to just
> add and delete rows instead of changing them.

Yes, that's pretty much how I did it with my old framework. However, this can create a *lot* of write queries if you're changing a bunch of different columns. Which goes back to the whole transaction discussion.

Nick
From: Ian B. <ia...@co...> - 2003-04-30 20:53:20

On Wed, 2003-04-30 at 10:29, Nick wrote:
> On Wed, 2003-04-30 at 08:55, Brad Bollenbach wrote:
> > Can we change the behaviour of this to "feel" more like an SQL insert,
> > so that if a column's allowed to be null, I don't have to specify it
> > in the new()?
>
> This patch should do what you want. Note that I've changed notNull to
> default to True instead of False, because it seems to make more sense to
> do so. If you *don't* specify a keyword argument to new, I would think
> you would *want* to see an error there instead if you didn't
> specifically tell it to not expect one.

I don't know... just because you don't give a default, you can still use None for a value (assuming the column allows NULL). It was my intention to make the default explicit, even if it isn't explicit in SQL, so default=None does what Brad wants. I can see why it would be inconvenient, but only slightly, and I think that's countered by it being a useful discipline (and more closely matching Python, which doesn't use implicit defaults).

Ian
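To make the default=None point concrete, here is a minimal sketch in the 2003-era _columns/new() style used elsewhere in this thread; the Contact class, its columns, and the import path are invented for illustration, not taken from the archive.

from SQLObject import SQLObject, StringCol   # import path assumed for the sketch

class Contact(SQLObject):
    _columns = [
        # required: no default, so new() raises TypeError if it is omitted
        StringCol('name', length=50, notNull=True),
        # optional: the explicit default of None is stored as NULL when omitted
        StringCol('nickname', length=50, default=None),
    ]

# (a database connection would also need to be configured to actually run this)
c = Contact.new(name="Ada")   # nickname is not passed; it defaults to None/NULL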
From: Ian B. <ia...@co...> - 2003-04-30 20:48:36

On Wed, 2003-04-30 at 12:11, Luke Opperman wrote:
> This gets to a current struggle in SQLObject over whether to name
> things by SQL convention or Python convention. It's split right now,
> but for instance:
>
> notNull vs notNone
> TextCol/CharCol vs StringCol

Yeah, it's notNull now, but if I stick with Python types it should really be notNone (which isn't bad, actually). Maybe I'll make that change before 0.4. I'm not sold on using Python types, though I suspect there will be more confusion if I use SQL types (like DateTimeCol in MySQL, vs. TimestampCol in Postgres). We can all agree on the Python types, because there's only one Python, but there are many databases, not all of which adhere to the SQL standard (DBMConnection obviously does not, for instance, nor would MetaKit).

Ian
From: Ian B. <ia...@co...> - 2003-04-30 20:45:39

On Wed, 2003-04-30 at 15:03, Nick wrote:
> How do you create a class that accesses a table with no key? For
> example, a Person table references a PhoneNumber where the only 2
> columns are person_id and phone_number (not a reference to another table
> but a phone number)? Currently, you can fudge the values of a
> RelatedJoin for the correct query to be generated, e.g.
> otherClass=intermediateTable and joinColumn=otherColumn, but are the
> returned values correct? Will operations work correctly?

I would subclass RelatedJoin, changing the performJoin method. In general I think any table that doesn't have a key should be represented with a join, and rows in that table won't be turned into full objects. That means they won't be mutable, but it shouldn't be a problem to just add and delete rows instead of changing them.

I'd be happy to include any novel joins people make (like this one).

Ian
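As a plain illustration of the "rows aren't full objects, just add and delete them" idea, here is a sketch using the standard DB-API (sqlite3) rather than SQLObject's RelatedJoin internals; the table and helper names are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phone_number (person_id INTEGER, phone_number TEXT)")

def add_phone(person_id, number):
    # rows in a keyless table are only ever added or removed, never updated
    conn.execute("INSERT INTO phone_number VALUES (?, ?)", (person_id, number))

def remove_phone(person_id, number):
    conn.execute("DELETE FROM phone_number WHERE person_id = ? AND phone_number = ?",
                 (person_id, number))

def phones_for(person_id):
    # plain values come back instead of full objects, so nothing is mutable
    rows = conn.execute("SELECT phone_number FROM phone_number WHERE person_id = ?",
                        (person_id,))
    return [number for (number,) in rows]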
From: Luke O. <lu...@me...> - 2003-04-30 20:39:53

I'd be mighty surprised if this worked; SQLObject makes some fundamental assumptions about having a single-column integer key field to uniquely identify objects. "Keyless" tables can more easily be thought of as composite-key tables (for me), and supporting such things in SQLObject would require major modification. (In this case, a composite of person_id and phone_number uniquely identifies a record.)

Is there a reason you can't add an artificial surrogate key?

(Specifically addressing Joins, the query may work but it will then attempt to create PhoneNumber objects for every 'id' returned, as PhoneNumber(id) (really cls(id)), and this will surely fail. :))

- Luke

> How do you create a class that accesses a table with no key? For
> example, a Person table references a PhoneNumber where the only 2
> columns are person_id and phone_number (not a reference to another table
> but a phone number)? Currently, you can fudge the values of a
> RelatedJoin for the correct query to be generated, e.g.
> otherClass=intermediateTable and joinColumn=otherColumn, but are the
> returned values correct? Will operations work correctly?
From: Nick <ni...@dd...> - 2003-04-30 20:04:22

Question: How do you create a class that accesses a table with no key? For example, a Person table references a PhoneNumber where the only 2 columns are person_id and phone_number (not a reference to another table but a phone number)? Currently, you can fudge the values of a RelatedJoin for the correct query to be generated, e.g. otherClass=intermediateTable and joinColumn=otherColumn, but are the returned values correct? Will operations work correctly?

Nick
From: Nick <ni...@dd...> - 2003-04-30 19:33:26

On Wed, 2003-04-30 at 12:46, Bud P. Bruegger wrote:
> There is some friction with this approach though:
>
> * expressing decisions in SQL terminology is not natural for non-SQL
>   backends
>
> * sticking to standard SQL terminology is not natural with
>   non-standard SQL dbms...

Although, the class *is* called SQLObject, not DBObject :)

Nick
From: Bud P. B. <bu...@si...> - 2003-04-30 19:21:31

On 30 Apr 2003 12:29:01 -0500, Nick <ni...@dd...> wrote:
> I'll buy that... that's also what psycopg returns. So, should the
> naming be in accordance with Python types or with SQL types?

I think with SQL types. After all, the decisions about the best physical representation of Python types cannot be made automatically (there are many options...). So there seems to be no way of avoiding dealing with SQL. [Is this true? I adapted this conviction from lower-level languages that require more decisions on physical representation than Python.]

There is some friction with this approach though:

* expressing decisions in SQL terminology is not natural for non-SQL backends

* sticking to standard SQL terminology is not natural with non-standard SQL dbms...

--b
From: Nick <ni...@dd...> - 2003-04-30 17:41:46

[Earlier discussion about implementing different SQL types and converting them to Python types.]

I've been looking at the code for Constraints and Col, and it looks to me like there needs to be a distinction between real constraints (MaxLength, Unique, Not Null, for example) and type checking/casting. It's a little mixed right now. In terms of type checking and conversion, I would propose:

1) make validators part of Col instead of just random functions in Constraints that get referenced in Col.

2) at the rate it's going, there are going to be a lot of parallel _mysql, _postgres, _sqlite, etc. functions for every operation under Col. There needs to be a way to bundle a bunch of functions for the particular db api, probably related to whatever _connection type you're using.

3) for each db api, there needs to be convert-from-result-to-python and convert-from-python-to-sql-value functions as part of the Col class, encapsulated by the mechanism described in 2.

I've got some ideas how to do this, but I don't want to go ripping through the code if you're going somewhere else with this.

Nick
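A rough sketch of the shape this proposal suggests; none of the class or method names below are SQLObject's actual API, they only illustrate a validator living on the Col plus per-backend converter bundles.

class DateTimeConverters:
    """One bundle of to/from-database functions per backend (names invented)."""
    def __init__(self, to_python, from_python):
        self.to_python = to_python
        self.from_python = from_python

# one registry entry per db api / connection type (converters here are dummies)
CONVERTERS = {
    'postgres': DateTimeConverters(to_python=str, from_python=repr),
    'mysql':    DateTimeConverters(to_python=str, from_python=repr),
}

class DateTimeCol:
    def __init__(self, name, validator=None):
        self.name = name
        self.validator = validator          # proposal 1: the validator lives on the Col

    def fromDatabase(self, raw, backend):
        # proposal 3: result-to-Python conversion, chosen per backend (proposal 2)
        return CONVERTERS[backend].to_python(raw)

    def toDatabase(self, value, backend):
        if self.validator is not None:
            self.validator(value)
        return CONVERTERS[backend].from_python(value)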
From: Nick <ni...@dd...> - 2003-04-30 17:29:46

I'll buy that... that's also what psycopg returns. So, should the naming be in accordance with Python types or with SQL types?

Nick

On Wed, 2003-04-30 at 12:06, Bud P. Bruegger wrote:
> That would probably be consistent. I wasn't disturbed by the name
> earlier because DateTime could as well refer to the mx.DateTime type
> of python that is stored as the (non-standard) MySQL DateTime data
> type...
>
> On 30 Apr 2003 11:53:28 -0500, Nick <ni...@dd...> wrote:
> > why is the column type in SQLObject DateTimeCol instead of
> > TimestampCol?
From: Luke O. <lu...@me...> - 2003-04-30 17:25:02

This gets to a current struggle in SQLObject over whether to name things by SQL convention or Python convention. It's split right now, but for instance:

notNull vs notNone
TextCol/CharCol vs StringCol

I think Ian recently changed his mind to go with notNull, but was opposed to TextCol still (sorry if I'm misquoting you, Ian). My personal opinion is that anything that describes the database (notNull) should be in database-speak, otherwise it should be Python. Columns are a little harder, as they cross between the two; Constraints also muddy this (should it be TimestampCol, with a validDateTime constraint?). Consistency may be hard to come by.

Perhaps my preference would be to support both, with TimestampCol being an alias to DateTimeCol, and TextCol being a mostly-transparent subclass of StringCol.

- Luke

Quoting Nick <ni...@dd...>:
> Okay, correct me if I'm wrong here, but isn't the ANSI standard for
> storing ISO dates in a database using a TIMESTAMP type? I'm pretty sure
> it is... so why is the column type in SQLObject DateTimeCol instead of
> TimestampCol? I realize that each database backend uses its own type,
> but shouldn't SQLObject's naming stick to the standards?
>
> Hoping not to start a db holy war,
> Nick
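The "support both" suggestion amounts to something this small, sketched here with stand-in classes rather than the real Col hierarchy.

class Col:
    def __init__(self, name, **kw):
        self.name = name
        self.kw = kw

class StringCol(Col):
    pass

class DateTimeCol(Col):
    pass

# SQL-flavoured names offered alongside the Python-flavoured ones
TimestampCol = DateTimeCol          # plain alias

class TextCol(StringCol):           # mostly-transparent subclass
    pass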
From: Bud P. B. <bu...@si...> - 2003-04-30 17:15:08

That would probably be consistent. I wasn't disturbed by the name earlier because DateTime could as well refer to the mx.DateTime type of python that is stored as the (non-standard) MySQL DateTime data type...

--b

On 30 Apr 2003 11:53:28 -0500, Nick <ni...@dd...> wrote:
> Okay, correct me if I'm wrong here, but isn't the ANSI standard for
> storing ISO dates in a database using a TIMESTAMP type? I'm pretty sure
> it is... so why is the column type in SQLObject DateTimeCol instead of
> TimestampCol? I realize that each database backend uses its own type,
> but shouldn't SQLObject's naming stick to the standards?
>
> Hoping not to start a db holy war,
> Nick

/-----------------------------------------------------------------
| Bud P. Bruegger, Ph.D.
| Sistema (www.sistema.it)
| Via U. Bassi, 54
| 58100 Grosseto, Italy
| +39-0564-411682 (voice and fax)
\-----------------------------------------------------------------
From: Nick <ni...@dd...> - 2003-04-30 16:54:06

Okay, correct me if I'm wrong here, but isn't the ANSI standard for storing ISO dates in a database using a TIMESTAMP type? I'm pretty sure it is... so why is the column type in SQLObject DateTimeCol instead of TimestampCol? I realize that each database backend uses its own type, but shouldn't SQLObject's naming stick to the standards?

Hoping not to start a db holy war,
Nick
From: Nick <ni...@dd...> - 2003-04-30 15:41:38

sorry, I sent the wrong diff from an earlier try... this is the correct one. I'll also smack it in the message body so the list can see it.

Nick

Index: SQLObject/Col.py
===================================================================
RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/Col.py,v
retrieving revision 1.17
diff -u -u -r1.17 Col.py
--- SQLObject/Col.py    29 Apr 2003 09:37:06 -0000    1.17
+++ SQLObject/Col.py    30 Apr 2003 15:38:38 -0000
@@ -20,7 +20,7 @@
     def __init__(self, name=None, dbName=None, default=NoDefault,
                  foreignKey=None, alternateID=False,
                  alternateMethodName=None,
-                 constraints=None, notNull=False,
+                 constraints=None, notNull=True,
                  unique=NoDefault):
         # This isn't strictly true, since we *could* use backquotes
         # around column names, but why would anyone *want* to
Index: SQLObject/SQLObject.py
===================================================================
RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLObject.py,v
retrieving revision 1.30
diff -u -u -r1.30 SQLObject.py
--- SQLObject/SQLObject.py    29 Apr 2003 09:47:48 -0000    1.30
+++ SQLObject/SQLObject.py    30 Apr 2003 15:38:39 -0000
@@ -674,7 +674,7 @@
         # Then we check if the column wasn't passed in, and
         # if not we try to get the default.
-        if not kw.has_key(column.name):
+        if not kw.has_key(column.name) and column.notNull:
             default = column.default
             # If we don't get it, it's an error:
------- end of patch -------

On Wed, 2003-04-30 at 08:55, Brad Bollenbach wrote:
> I have a class:
>
> class MerchantAccount(SQLObject):
>     _columns = [
>         StringCol('merchantID', length = 15, alternateID = True, notNull = True),
>         StringCol('companyName', length = 50, notNull = True),
>         StringCol('address1', length = 255, notNull = False),
>         StringCol('address2', length = 255, notNull = False),
>         IntCol('telephone', notNull = False),
>         IntCol('fax', notNull = False),
>         StringCol('url', length = 100, notNull = False),
>         StringCol('lang', length = 6, notNull = False)
>     ]
>     MultipleJoin('MerchantProxyAccount')
>
> and then in a nearby piece of code, an insert:
>
> from merchant import MerchantAccount, MerchantProxyAccount, PurchaseTransaction
>
> ma = MerchantAccount.new(merchantID = "foo", companyName = "bar")
>
> When run, this raises an error:
>
> bradb@mothra:~/1ave/merchant/scripts$ python insert_test_data.py
> Traceback (most recent call last):
>   File "insert_test_data.py", line 5, in ?
>     ma = MerchantAccount.new(merchantID = "desjardins", companyName = "Premiere Avenue")
>   File "/usr/lib/python2.2/site-packages/SQLObject/SQLObject.py", line 682, in new
>     raise TypeError, "%s did not get expected keyword argument %s" % (cls.__name__, repr(column.name))
> TypeError: MerchantAccount did not get expected keyword argument 'address1'
>
> Can we change the behaviour of this to "feel" more like an SQL insert,
> so that if a column's allowed to be null, I don't have to specify it
> in the new()?
From: Nick <ni...@dd...> - 2003-04-30 15:30:40

On Wed, 2003-04-30 at 08:55, Brad Bollenbach wrote:
> Can we change the behaviour of this to "feel" more like an SQL insert,
> so that if a column's allowed to be null, I don't have to specify it
> in the new()?

This patch should do what you want. Note that I've changed notNull to default to True instead of False, because it seems to make more sense to do so. If you *don't* specify a keyword argument to new, I would think you would *want* to see an error there instead if you didn't specifically tell it to not expect one.

Nick
From: Brad B. <br...@bb...> - 2003-04-30 13:57:04

I have a class:

class MerchantAccount(SQLObject):
    _columns = [
        StringCol('merchantID', length = 15, alternateID = True, notNull = True),
        StringCol('companyName', length = 50, notNull = True),
        StringCol('address1', length = 255, notNull = False),
        StringCol('address2', length = 255, notNull = False),
        IntCol('telephone', notNull = False),
        IntCol('fax', notNull = False),
        StringCol('url', length = 100, notNull = False),
        StringCol('lang', length = 6, notNull = False)
    ]
    MultipleJoin('MerchantProxyAccount')

and then in a nearby piece of code, an insert:

from merchant import MerchantAccount, MerchantProxyAccount, PurchaseTransaction

ma = MerchantAccount.new(merchantID = "foo", companyName = "bar")

When run, this raises an error:

bradb@mothra:~/1ave/merchant/scripts$ python insert_test_data.py
Traceback (most recent call last):
  File "insert_test_data.py", line 5, in ?
    ma = MerchantAccount.new(merchantID = "desjardins", companyName = "Premiere Avenue")
  File "/usr/lib/python2.2/site-packages/SQLObject/SQLObject.py", line 682, in new
    raise TypeError, "%s did not get expected keyword argument %s" % (cls.__name__, repr(column.name))
TypeError: MerchantAccount did not get expected keyword argument 'address1'

Can we change the behaviour of this to "feel" more like an SQL insert, so that if a column's allowed to be null, I don't have to specify it in the new()?

--
Brad Bollenbach
BBnet.ca
From: Frank B. <fb...@fo...> - 2003-04-30 13:26:11

Hello,

Ian Bicking hat gesagt: // Ian Bicking wrote:
> Yes, I didn't have time at that point to look into what seemed like a
> difficult problem. Then you said it was fixed, and I was happy to have
> missed the problem altogether... but I'll look at it again.

Thank you. I don't want to press things, of course. Currently in some of my apps I can switch off caching for the objects that get used in MultipleJoins, because the tables aren't that big. I do however have one table with more than 40,000 long entries, where I'd prefer to have the caching enabled, as the selects take rather long.

I now discovered the expire() method of the CacheFactory. Is this usable for expiring single objects? And where does the CacheFactory object hide? I tried __connection__.cache, but that's a CacheSet... You see, I don't quite grok how the Cache works yet.

ciao
--
Frank Barknecht                               _ ______footils.org__
From: Brad B. <br...@bb...> - 2003-04-30 13:17:33

On 04/29/03 23:57, Paul Chakravarti wrote:
> On Tuesday, April 29, 2003, at 01:17 PM, Luke Opperman wrote:
> > 4. does persisting any object persist all temp objects it touches?
> > This makes sense from the example above: create a tempPerson, start
> > adding temp employees to it, I'd like to simply persist tempPerson
> > and have it handle the rest. but does it work the other way (persist
> > one of the employees, and the whole tree/graph is persisted)? Should
> > there be an alternate way to persist just one? (In the context of a
> > transaction this becomes particularly important: when do things I
> > didn't explicitly add to the transaction but that are changed via
> > join/FK get persisted?)
>
> Although I am just an interested observer here I would say that I am
> slightly concerned that this proposal may add significant complexity to
> support what I believe to be a marginal set of cases.

The more iterations through my brain this discussion makes, the more I lean towards thinking that SQLObject should leave its transaction support as is. I've used many of these frameworks, in both Perl and Python: Modeling, Class::DBI, took a look at MiddleKit and finally SQLObject.

What drew me to this framework to begin with was the relatively flat learning curve. I hope we keep it that way.

--
Brad Bollenbach
BBnet.ca
From: Nick <ni...@dd...> - 2003-04-30 04:59:19

On Tue, 2003-04-29 at 22:57, Paul Chakravarti wrote:
> Although I am just an interested observer here I would say that I am
> slightly concerned that this proposal may add significant complexity to
> support what I believe to be a marginal set of cases.

While I think there needs to be some kind of transactional capability intrinsic to SQLObject beyond those provided by the DB API, I agree that this looks a bit complex for a simple concept.

> I believe that the simplicity and cleanness of the SQLObject interface
> and orthogonality with the underlying database functionality is a
> strong selling point.

Definitely. There are similar projects out there, like Modeling, that can cover *every single case*. I like the KISS principle, myself. The big feature of SQLObject in my mind (and what drew me to this kind of project) is RAD.

> Trying to build a generic object store capable of persisting
> arbitrary object graphs in a coherent/transactional manner
> seems to be excessive and probably impossible if you are trying to
> support concurrent access from multiple processes (without implementing
> coherent distributed faulting).

I agree... I advocate some kind of system where you can either cache everything up in a transaction-like manner in a Pythonic way until you call commit, or else an autocommit model like it currently uses. That will cover 99% of your users and maintains the simplicity of the interface.

Nick
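A generic illustration of the two modes described above (not SQLObject code; the class and method names are invented): either each attribute write goes straight to the database, or writes are queued until an explicit commit.

class WriteBuffer:
    def __init__(self, connection, autocommit=True):
        self.connection = connection
        self.autocommit = autocommit
        self.pending = []               # queued (sql, params) pairs

    def set(self, table, row_id, column, value):
        sql = "UPDATE %s SET %s = ? WHERE id = ?" % (table, column)
        params = (value, row_id)
        if self.autocommit:
            self.connection.execute(sql, params)   # autocommit: write immediately
        else:
            self.pending.append((sql, params))     # transaction-like: defer the write

    def commit(self):
        for sql, params in self.pending:
            self.connection.execute(sql, params)
        self.connection.commit()
        self.pending = []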
From: Paul C. <pa...@pa...> - 2003-04-30 03:57:25

On Tuesday, April 29, 2003, at 01:17 PM, Luke Opperman wrote:
> 4. does persisting any object persist all temp objects it touches?
> This makes sense from the example above: create a tempPerson, start
> adding temp employees to it, I'd like to simply persist tempPerson
> and have it handle the rest. but does it work the other way (persist
> one of the employees, and the whole tree/graph is persisted)? Should
> there be an alternate way to persist just one? (In the context of a
> transaction this becomes particularly important: when do things I
> didn't explicitly add to the transaction but that are changed via
> join/FK get persisted?)

Although I am just an interested observer here, I would say that I am slightly concerned that this proposal may add significant complexity to support what I believe to be a marginal set of cases.

I believe that the simplicity and cleanness of the SQLObject interface and orthogonality with the underlying database functionality is a strong selling point. In particular, delegating the 'hard' transactional aspects to the database (effectively ignoring them at the ORM layer) and, where necessary, requiring the object to sync with the database (by turning _cacheValues off) strikes me as a pragmatic choice, particularly for web applications where concurrent access is required (though possibly pessimistic in some scenarios).

Trying to build a generic object store capable of persisting arbitrary object graphs in a coherent/transactional manner seems to be excessive and probably impossible if you are trying to support concurrent access from multiple processes (without implementing coherent distributed faulting).

PaulC
From: Brad B. <br...@bb...> - 2003-04-30 00:39:48

On 04/29/03 17:42, Luke Opperman wrote:
> > I think t.insertObject(c) isn't a good idea. Client(1) is an object
> > that may be in use in several places, and is presumed to be
> > transparently persistent. When you put it into the transaction you've
> > made it temporary for all users of the object. Bad bugs will follow. At
> > least in threaded situations.
> >
> > This is why I think the object has to be instantiated as part of the
> > transaction, or perhaps copied if you don't like instantiation...
> >
> > t = Transaction()
> > c = Client(1)
> > ctmp = c.inTransaction(t)
> > # or...
> > ctmp = Client(1, t)
> >
> > Ian
>
> I couldn't agree more; this also refers to the challenges (from an
> interface perspective) of "insertObject" for new objects. Hence my
> semantics note, "a temporary object (including one in a Transaction:
> I'm considering transactions to be a simple wrapper around a
> temporary object implementation) is not visible to anything that
> doesn't explicitly reference the Python-side instance of it". This is
> also why it seems to me that any Real objects referenced by or to a
> temp object need to immediately become temp (part of the
> transaction).
>
> My original interface suggestion is very close to what you just wrote,
> with cInst.temp() or cClass.temp(...) returning temporary objects,
> outside of being referenced by any other thread unless explicitly
> passed by the programmer.
>
> Anyways, all just wanking until something gets built. :) I'll get on
> it.

Disclaimer: The answers I'm giving are half-finished, really. I can't say either one is particularly satisfactory, but hopefully someone else will have a few more ideas to add to, and hopefully simplify, the mix. Personally, I see many issues to flesh out here yet, while preserving the goal of keeping SQLObject painfully simple to use. Ease of use is the reason I switched from Modeling to SQLObject to begin with.

The problem is:

i. how can SQLObject be transactional?
ii. should SQLObject be transactional (or should it be left to the chosen backend to support it)?

There are two main answers that I can see, so far.

1. Be like the Modeling framework, and use the Transaction metaphor, where Modeling uses an EditingContext:

http://modeling.sourceforge.net/UserGuide/editing-context.html

This adds one more level of indirection between Python objects and database persistence, as the only realistic way for this to be useful and thread-safe is for all data manipulation and retrieval to be done through a Transaction object, which will have a global registry of object IDs for the thread-safety end of things. I already gave an example of what the interface might look like here. SQLObject would still maintain the advantage of being less verbose than Modeling (e.g. new-style properties instead of get/set methods for every column like Modeling, no FetchSpecification like Modeling has, no need to edit a fancy XML file, etc.), but would unavoidably become slightly more complex to learn and use.

2. Scarecrow objects (the "temp" metaphor, except to me "temp" makes it difficult to extrapolate what that's supposed to mean, IMHO).

So you have:

Client(1) to retrieve
Client.new(name = "Somebody") to create
Client.scarecrow() to create a "brainless" (database-dumb) object

The Transaction metaphor could live behind the scenes of a scarecrow, so that scarecrow objects aren't truly persisted, but are noted by an object registry of some sort, so:

from invoicing import Client

ian = Client(1)
luke = Client.new(name = "luke")
brad = Client.scarecrow(name = "brad")
sqlobject_geeks = Client.select('all')

Assuming there was only a client with ID 1 in the database to begin with, sqlobject_geeks would now have all three of us in the list. The metaphor for rolling back and committing could be something like:

if confirmed_signup:
    brad.remember()   # "commit" the changes to brad
else:
    brad.forget()     # "rollback" the changes to brad

Of course, this isn't very useful for atomicity. :/ The "temp" or "scarecrow" object (e.g. a single-object "transaction") is probably better solved with lazy committing and a save() method needing to be called. Really though, I don't think I see adding a whole new class and metaphor to the framework for this "single object transaction" to be useful.

Just my $0.02.

--
Brad Bollenbach
BBnet.ca
From: Luke O. <lu...@me...> - 2003-04-29 22:56:28

> I think t.insertObject(c) isn't a good idea. Client(1) is an object
> that may be in use in several places, and is presumed to be
> transparently persistent. When you put it into the transaction you've
> made it temporary for all users of the object. Bad bugs will follow. At
> least in threaded situations.
>
> This is why I think the object has to be instantiated as part of the
> transaction, or perhaps copied if you don't like instantiation...
>
> t = Transaction()
> c = Client(1)
> ctmp = c.inTransaction(t)
> # or...
> ctmp = Client(1, t)
>
> Ian

I couldn't agree more; this also refers to the challenges (from an interface perspective) of "insertObject" for new objects. Hence my semantics note, "a temporary object (including one in a Transaction: I'm considering transactions to be a simple wrapper around a temporary object implementation) is not visible to anything that doesn't explicitly reference the Python-side instance of it". This is also why it seems to me that any Real objects referenced by or to a temp object need to immediately become temp (part of the transaction).

My original interface suggestion is very close to what you just wrote, with cInst.temp() or cClass.temp(...) returning temporary objects, outside of being referenced by any other thread unless explicitly passed by the programmer.

Anyways, all just wanking until something gets built. :) I'll get on it.

- Luke
From: Ian B. <ia...@co...> - 2003-04-29 22:14:44

On Tue, 2003-04-29 at 12:17, Luke Opperman wrote:
> > from Transaction import Transaction
> > from invoicing import Client
> > t = Transaction()
> > c = Client(1)
> > t.insertObject(c)
> > c.address1 = "100 New Street"
> > t.saveChanges()

I think t.insertObject(c) isn't a good idea. Client(1) is an object that may be in use in several places, and is presumed to be transparently persistent. When you put it into the transaction you've made it temporary for all users of the object. Bad bugs will follow. At least in threaded situations.

This is why I think the object has to be instantiated as part of the transaction, or perhaps copied if you don't like instantiation...

t = Transaction()
c = Client(1)
ctmp = c.inTransaction(t)
# or...
ctmp = Client(1, t)

Ian
From: Ian B. <ia...@co...> - 2003-04-29 21:17:18

On Tue, 2003-04-29 at 10:20, Frank Barknecht wrote:
> Hallo,
> Ian Bicking hat gesagt: // Ian Bicking wrote:
>
> > Okay, it turns out I accidentally changed _cacheValues default to False,
> > instead of True, which causes SQLObject to fetch from the database for
> > every access. Sorry about that...
>
> Ohh, noooo!
>
> Sorry for crying out so loud, but now I get my old problem with
> changed values not visible in Webware again. I had hoped that there
> had been a fix recently, but now it turns out that caching was
> disabled... I gave several examples of this in previous mails,
> which all remained unanswered...

Yes, I didn't have time at that point to look into what seemed like a difficult problem. Then you said it was fixed, and I was happy to have missed the problem altogether... but I'll look at it again.

Ian
From: Ian B. <ia...@co...> - 2003-04-29 20:55:34

On Tue, 2003-04-29 at 11:53, Brad Bollenbach wrote:
> > That seems confusing to me. If you're going to use database
> > transactions, why not just use them entirely? You only seem to really
> > gain something if you can avoid them entirely (and thus make it possible
> > to add transactions to lesser databases).
>
> To be useable with database backends that don't support transactions
> natively, and to provide a consistent interface for using them either
> way.

If we can create in-Python transactions, that'd be great... but if we create something that's in Python, but still depends on database transactions (and so isn't usable from MySQL), then I'm not clear what the advantage is.

Ian