Thread: [SQLObject] alternateMethodName and creating SQL
SQLObject is a Python ORM.
From: Luke O. <lu...@me...> - 2003-03-12 21:34:29
Hello again - Col.alternateMethodName: apparently not implemented yet? I
assume the implementation will look something like this (in the metaclass,
while evaluating each Column object; I added it around line 195 in
SQLObject.py):

    # If the column defines an alternateMethodName for retrieval,
    # apply it to the object.
    if column.alternateMethodName:
        fun = eval('lambda cls, val: cls.select(cls.q.%s == val)[0]'
                   % column.name)
        dict[column.alternateMethodName] = classmethod(fun)

Of course, maybe part of the reason this hasn't been implemented yet is
because the semantics are more difficult than the implementation: what does
the user expect if more than one row is really returned by that query? What
if there are none? If alternateID is set in the Column, is it SQLObject's
responsibility to enforce uniqueness?

Thoughts? (I'm using the code above until further notice. :)

- Luke

P.S. alternateMethodName in Col() expects a function called
"prefix(str, str)"? I have no such function, so I ended up writing one that
looks like:

    def prefix(pref, post):
        return "%s%s%s" % (pref, post[0].capitalize(), post[1:])

-------------------------------------------------
This mail sent through IMP: http://horde.org/imp/
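A minimal, self-contained sketch of the lookup semantics under debate, using a closure instead of eval() and raising explicit exceptions for the zero-row and many-row cases. The in-memory `_rows` list and all class names here are hypothetical stand-ins for SQLObject's select() machinery, purely to illustrate the behavior in question:

```python
# Hypothetical sketch: what an alternateMethodName lookup could do for
# the "none found" and "more than one found" cases discussed above.
# _rows is a toy in-memory stand-in for a real SQL query.

class AmbiguousResult(Exception):
    """Raised when a supposedly unique column matches several rows."""

def make_alternate_lookup(column_name):
    # A closure captures column_name, avoiding eval() entirely.
    def lookup(cls, value):
        matches = [row for row in cls._rows
                   if getattr(row, column_name) == value]
        if not matches:
            raise KeyError("no row with %s == %r" % (column_name, value))
        if len(matches) > 1:
            raise AmbiguousResult("%s == %r matched %d rows"
                                  % (column_name, value, len(matches)))
        return matches[0]
    return classmethod(lookup)

class Row:
    def __init__(self, name):
        self.name = name

class Person:
    _rows = [Row('alice'), Row('bob')]
    byName = make_alternate_lookup('name')
```

Either KeyError or a dedicated exception would resolve the "what if there are none?" question explicitly rather than leaking an IndexError from the `[0]` subscript.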
From: Ian B. <ia...@co...> - 2003-03-13 04:15:16
On Wed, 2003-03-12 at 15:22, Luke Opperman wrote:
> Col.alternateMethodName: apparently not implemented yet? I assume the
> implementation will look something like this (in the metaclass, while
> evaluating each Column object: I added it around line 195 in
> SQLObject.py):

Yes, I had implemented this in a previous incarnation of SQLObject, but
hadn't finished porting it to this version. It's now implemented in CVS.

Ian
From: Ian B. <ia...@co...> - 2003-03-13 04:18:45
On Wed, 2003-03-12 at 15:22, Luke Opperman wrote:
> Of course, maybe part of the reason this hasn't been implemented yet is
> because the semantics are more difficult than the implementation: What
> does the user expect if there are really more than one returned by that
> query? What if there are none? If alternateID is set in the Column, is
> it SQLObject's responsibility to enforce uniqueness?

It's a little fuzzy. Some exception will be raised if nothing is returned.
Ambiguous results are silently ignored right now, which probably isn't
good. Ideally you should declare the column UNIQUE in your table schema.
If SQLObject ever generates table schemas (maybe not far off) then it
would add UNIQUE based on this, but would otherwise let the database do
the work of ensuring that.

It might be good to define the exception that's raised when nothing's
found, though... KeyError? IndexError? ValueError?

Ian
From: Luke O. <lu...@me...> - 2003-03-13 07:42:55
Quoting Ian Bicking <ia...@co...>:
> It's a little fuzzy. Some exception will be raised if nothing is
> returned. Ambiguous results are silently ignored right now, which
> probably isn't good. Ideally you should declare the column UNIQUE in
> your table schema. If SQLObject ever generates table schemas (maybe not
> far off) then it would add UNIQUE based on this, but would otherwise let
> the database do the work of ensuring that.

Yes, generating schemas was the second part of my subject that I now
realize I never wrote. :) I haven't quite decided how to handle varying
databases, but that's not an issue for me at this moment.

How we're generating SQL is by subclassing Col, so our object defs look
like:

    class OurObj(SQLObject):
        _columns = [
            String('name'),
            Integer('toes'),
            Date('birthday')
        ]

and SQLObject has a classmethod "createSql", which calls out to each
column's own createSql method, taking into consideration foreignKeys and
alternateIds. RelatedJoins are still biting us, but mostly because I
haven't looked at it enough.

I suppose one solution to the database variance would be to have modules
for each Column-subclass group (String, Int, Float, Date, whatever), so
you'd

    from PostgresTypes import *

or

    from MySqlTypes import *

and those would be responsible for defining the createSql method.

We're also using the Column subclasses to enforce validation, which means
our base Column has a validators=[] named parameter, and String's __init__
adds a default AsString() validator to the start of that list, etc. We're
using the same ValidatorConverters we have from working with FFK, btw.

That's all I've got to write tonight.

- Luke
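A rough sketch of the createSql arrangement described above, assuming hypothetical Col subclasses (none of this is SQLObject's actual API): each column renders its own definition, and a table-level function assembles the CREATE TABLE statement, with the id column standing in for _idName handling.

```python
# Illustrative only: hypothetical Col subclasses that each know their own
# SQL type, plus a table-level function that assembles the statement.

class Col:
    sqlType = 'TEXT'  # subclasses override
    def __init__(self, name):
        self.name = name
    def createSql(self):
        return '%s %s' % (self.name, self.sqlType)

class String(Col):
    sqlType = 'TEXT'

class Integer(Col):
    sqlType = 'INT'

class Date(Col):
    sqlType = 'DATE'

def createTableSql(table, columns):
    # The id column stands in for SQLObject's _idName handling.
    defs = ['id SERIAL PRIMARY KEY'] + [c.createSql() for c in columns]
    return 'CREATE TABLE %s (%s)' % (table, ', '.join(defs))
```

Foreign keys, alternateID UNIQUE constraints, and RelatedJoin intersection tables would layer on top of this skeleton.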
From: Ian B. <ia...@co...> - 2003-03-13 08:11:02
On Thu, 2003-03-13 at 01:30, Luke Opperman wrote:
> Yes, generating schemas was the second part of my subject that I now
> realize I never wrote. :) I haven't quite decided how to handle varying
> databases, but that's not an issue for me at this moment.

Ultimately all the SQL should be moved out of SQLObject, and into
DBConnection. For instance, selects are now done in DBConnection, with
some help from SQLBuilder (though SQLBuilder is currently still database
agnostic). SQLObject handles the select without using any SQL. The same
should happen with the database creation. SQLObject should pass the
database connection a couple parameters (_table and _idName should be
enough), along with the list of columns, and the database connection has
the responsibility from there.

But that's only part of it. I'm blanking out on a good example of a case
where MySQL and Postgres differ on data type definitions, though there's a
ton of them. Presumably a column type should know how to represent itself
in the create statement, but it has to do that with the database
connection in mind. It's an NxM kind of problem (N being the kinds of
columns, M being the supported databases). Maybe each column should simply
have postgresCreate, mysqlCreate, etc. methods. I think that's probably as
good as it will get. But then that moves SQL generation back into the
SQLObject module and away from DBConnection.

> How we're generating SQL is by subclassing Col, so our object defs look
> like:
>
>     class OurObj(SQLObject):
>         _columns = [
>             String('name'),
>             Integer('toes'),
>             Date('birthday')
>         ]

I'd like to make import * reasonably clean, so I'd probably prefer names
like StringCol (and StringCol is still shorter than SQLObject.String); but
otherwise that seems like the right way to do it. Of course you can always
do "import SQLObject as SO", and SO.String isn't any worse to type than
StringCol. Hmm...

> and SQLObject has a classmethod "createSql", which calls out to each
> column's own createSql method, taking into consideration foreignKeys and
> alternateIds. RelatedJoins are still biting us, but mostly because I
> haven't looked at it enough.
>
> I suppose one solution to the database variance would be to have modules
> for each Column-subclass group (String, Int, Float, Date, whatever), so
> you'd
>
>     from PostgresTypes import *
>     or
>     from MySqlTypes import *
>
> and those would be responsible for defining the createSql method.

I don't want to use imports for choosing a database. Right now you can
change databases really easily and dynamically. I don't see any reason
that has to change.

> We're also using the Column subclasses to enforce validation, which
> means our base Column has a validators=[] named parameter, and String's
> __init__ adds a default AsString() validator to the start of that list,
> etc. We're using the same ValidatorConverters we have from working with
> FFK, btw.

ValidatorConverters seem too general to me, but validation functions would
certainly be useful. It seems like all the interface you need for
SQLObject would be a function. It could just raise ValueError when
necessary. Maybe take three arguments -- the SQLObject instance that is
having its value set, the Col object for the column that's being set, and
of course the value that's being set.

Actual integration between SQLObject and FunFormKit should come about at
some time, but I don't think there's actually much code that needs to be
shared between them. At some point I'd like to make some sort of builder
that takes a SQLObject class and builds a FormDefinition -- I'd have to
expand the introspection some, though.

Ian
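One way to picture the postgresCreate/mysqlCreate idea (hypothetical names and classes, not actual SQLObject code): each column type carries one method per backend, and the connection dispatches by its own backend name, so which database you use stays a runtime decision rather than an import-time one.

```python
# Hypothetical sketch of per-backend create methods on a column type,
# with the connection choosing the method by backend name at runtime.

class StringCol:
    def __init__(self, name):
        self.name = name
    def postgresCreate(self):
        return '%s TEXT' % self.name
    def mysqlCreate(self):
        return '%s VARCHAR(255)' % self.name

class DBConnection:
    def __init__(self, backend):
        self.backend = backend  # e.g. 'postgres' or 'mysql'
    def columnSql(self, col):
        # Dispatch: backend 'mysql' calls col.mysqlCreate(), etc.
        return getattr(col, self.backend + 'Create')()
```

This keeps the NxM matrix inside the Col subclasses while DBConnection only needs to know its own name, which matches the trade-off Ian describes.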
From: Luke O. <lu...@me...> - 2003-03-13 09:05:06
> same should happen with the database creation. SQLObject should pass
> the database connection a couple parameters (_table and _idName should
> be enough), along with the list of columns, and the database connection
> has the responsibility from there.

Agreed. There's no reason my code has to be in SQLObject (where it now
resides) except for _table and _idName, yes. RelatedJoins still make life
messy, because from a single object you can't tell what its own column in
the intersection table might be called, and so my first thought tells me
you need an equivalent of needSet/classRegistry... I suppose those things
(perhaps the entire list of generated tables, if I could see a point to
it...) could be held in the DBConnection, though.

> But that's only part of it. I'm blanking out on a good example of a
> case where MySQL and Postgres differ on data type definitions, though
> there's a ton of them. Presumably a column type should know how to
> represent itself in the create statement, but it has to do that with the
> database connection in mind. It's an NxM kind of problem (N being the
> kinds of columns, M being the supported databases). Maybe each column
> should simply have postgresCreate, mysqlCreate, etc. methods. I think
> that's probably as good as it will get. But then that moves SQL
> generation back into the SQLObject module and away from DBConnection.

Yes, the NxM problem leads to having either separate imports (which you
oppose below, and I agree with you on the database-switching reason, so
screw that idea), or explicit knowledge of what dbtypes or columntypes
exist in the opposite piece (a DBConnection.createSql('String') lookup, or
a String.createSQL('postgresconnection') lookup, or an equivalent like
your postgresSqlCreate/mySqlCreate curry).

Hmm, now that I think about it some more, maybe I'm less opposed to simply
switching imports. Right now we create a local subclass of SQLObject
called CoreObject which contains the _connection attribute. So switching
databases is in one spot for our entire library of objects. In the same
way, at the CoreObject level we would import the proper (database-specific)
set of Columns, and re-export (__all__) them from there. Then rather than
each Object importing CoreObject from CoreObject, and * from
PostgresColumns, they would simply import * from CoreObject, and hence get
the database-specific connection and the database-specific columns. And if
I needed to create a column type not pre-existing in SQLObject, I'd put it
somewhere, import it at CoreObject, and continue on in the same manner
(when building specific objects one would never know or care where a
column definition was really coming from).

(Maybe you could take this one step further, so that it's "from
SQLObject.PostgresConnection import *", and that gives you the actual
PostgresConnection and the Columns. But that doesn't make it as easy for
someone to write their own extensions/additions to the list of available
Columns. Hmm.)

Solves it from my end, in that there's still only one file I need to
change database-specific stuff in (and I already had to change it there
regardless of DB-specific columns). But it's a slightly less pretty setup
(requires a smart local setup outside of SQLObject's direct control). But
the current situation can promote hard-to-change code too: if you were to
define the DBConnection for each Object in its specific source file, you'd
have to go through and modify them all.

> I don't want to use imports for choosing a database. Right now you can
> change databases really easily and dynamically. I don't see any reason
> that has to change.

Is there some way that you use this, where you change the _connection
between instances or dynamically re-assign _connection during runtime? I
can see that this is *possible* now, but I'm not seeing why you'd do such
a thing (effectively, move databases/migrate data while leaving the
objects in place in a live system...?). That doesn't mean there's not a
valid reason to, but elaborate for me.

> I'd like to make import * reasonably clean, so I'd probably prefer names
> like StringCol (and StringCol is still shorter than SQLObject.String);
> but otherwise that seems like the right way to do it. Of course you can
> always do "import SQLObject as SO", and SO.String isn't any worse to
> type than StringCol. Hmm...

Understand and agree with your point, somewhat undecided. I'd say
StringCol is nicer, but that's just me.

> ValidatorConverters seem too general to me, but validation functions
> would certainly be useful. It seems like all the interface you need for
> SQLObject would be a function. It could just raise ValueError when
> necessary. Maybe take three arguments -- the SQLObject instance that is
> having its value set, the Col object for the column that's being set,
> and of course the value that's being set.

Hmm. I'm not seeing how ValidatorConverters are too general yet. Where
would the function you describe go? For us, using a list of validators is
as easy as calling them (single function, just takes the value) when a
column is set (in SQLObject), and if a ValueError happens, it just bubbles
up and away (SQLObject is not made more complicated except for the need to
run through the list).

> Actual integration between SQLObject and FunFormKit should come about at
> some time, but I don't think there's actually much code that needs to be
> shared between them. At some point I'd like to make some sort of
> builder that takes a SQLObject class and builds a FormDefinition -- I'd
> have to expand the introspection some, though.

Is planned from us, and should come about in the next few weeks. You're
exactly right, there's very little code shared between them in our planned
implementation, and I will admit that part of the reason we're using
ValidatorConverters is to ease the transition. :) I'll send you details of
this as it gets closer on my radar.

- Luke
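A small sketch of the validators=[] arrangement described above (hypothetical names; SQLObject's actual columns differ): the base column runs each validator in order when a value is set, String prepends a default AsString() check, and any ValueError simply bubbles up untouched.

```python
# Hypothetical sketch of a validators=[] column: validators are plain
# one-argument callables run in order; any ValueError simply propagates.

def AsString(value):
    if not isinstance(value, str):
        raise ValueError('not a string: %r' % (value,))
    return value

class Col:
    def __init__(self, name, validators=None):
        self.name = name
        self.validators = list(validators or [])
    def setValue(self, value):
        # Run each validator in order; a ValueError bubbles up unchanged.
        for validate in self.validators:
            value = validate(value)
        return value

class String(Col):
    def __init__(self, name, validators=None):
        Col.__init__(self, name, validators)
        # String prepends a default string check, as described above.
        self.validators.insert(0, AsString)
```

Because each validator returns the (possibly converted) value, the same list can do light conversion as well as validation, which is roughly the ValidatorConverter usage Luke mentions.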
From: Ian B. <ia...@co...> - 2003-03-13 09:44:33
On Thu, 2003-03-13 at 02:52, Luke Opperman wrote:
> > same should happen with the database creation. SQLObject should pass
> > the database connection a couple parameters (_table and _idName should
> > be enough), along with the list of columns, and the database
> > connection has the responsibility from there.
>
> Agreed. There's no reason my code has to be in SQLObject (where it now
> resides) except for _table and _idName, yes. RelatedJoins still make
> life messy, because from a single object you can't tell what its own
> column in the intersection table might be called, and so my first
> thought tells me you need an equivalent of needSet/classRegistry... I
> suppose those things (perhaps the entire list of generated tables, if I
> could see a point to it...) could be held in the DBConnection though.

I must admit I've only recently been learning Postgres, so I still
initially think about MySQL behavior, where relations are just implicit.
So I'm not entirely sure of the correct behavior... if you can give a
small class definition and the appropriate table definition, that'd help
me.

> Hmm, now that I think about it some more, maybe I'm less opposed to
> simply switching imports. Right now we create a local subclass of
> SQLObject called CoreObject which contains the _connection attribute. So
> switching databases is in one spot for our entire library of objects. In
> the same way, at the CoreObject level we would import the proper
> (database-specific) set of Columns, and re-export (__all__) them from
> there. Then rather than each Object importing CoreObject from
> CoreObject, and * from PostgresColumns, they would simply import * from
> CoreObject, and hence get the database-specific connection and the
> database-specific columns. And if I needed to create a column type not
> pre-existing in SQLObject, I'd put it somewhere, import it at
> CoreObject, and continue on in the same manner (when building specific
> objects one would never know or care where a column definition was
> really coming from).

I still don't like that. Then in the docs I have to go through this
complex description of how you should set up your modules if you want to
change databases in the future. The Right Way should also be the easy way.
I'm strongly leaning towards putting database compatibility into the Col
subclasses. It does mean that DBConnection won't be the sole location for
compatibility code, but I don't think that's too big a deal.

> > I don't want to use imports for choosing a database. Right now you
> > can change databases really easily and dynamically. I don't see any
> > reason that has to change.
>
> Is there some way that you use this, where you change the _connection
> between instances or dynamically re-assign _connection during runtime?
> I can see that this is *possible* now, but I'm not seeing why you'd do
> such a thing (effectively, move databases/migrate data while leaving the
> objects in place in a live system...?). That doesn't mean there's not a
> valid reason to, but elaborate for me.

After you've started using the class, changing _connection would do weird
stuff. Though potentially you could pass the connection object into the
constructor (like you'd do with transactions)... that actually would work
right now, but strange things might start happening when you did joins or
otherwise mixed the objects. Definitely bad stuff. But what I want to be
able to support is pre-instantiation dynamicism. For instance, an
application might allow the user to select their database in a
configuration file or even something more interactive. Selecting the kind
of database can go right alongside selecting the connection parameters.

Also, by putting compatibility code into the Col subclasses, the SQLObject
user is shielded from that detail. But if it's done with imports then it's
exposed. More conceptual overhead.

> > ValidatorConverters seem too general to me, but validation functions
> > would certainly be useful. It seems like all the interface you need
> > for SQLObject would be a function. It could just raise ValueError when
> > necessary. Maybe take three arguments -- the SQLObject instance that
> > is having its value set, the Col object for the column that's being
> > set, and of course the value that's being set.
>
> Hmm. I'm not seeing how ValidatorConverters are too general yet. Where
> would the function you describe go? For us, using a list of validators
> is as easy as calling them (single function, just takes the value) when
> a column is set (in SQLObject), and if a ValueError happens, it just
> bubbles up and away (SQLObject is not made more complicated except for
> the need to run through the list).

Well, the whole converting business is weird. SQLBuilder does some
conversion, but it's very limited. I'm not sure how conversion works into
everything else. And ValidatorConverter already badly needs an unconverter
interface, among other additions, which will make it more complicated.

The validation functions would be pretty much like ValidatorConverters,
and you'd use them like you're using them currently. So you could do
something like Col(constraint=Range(10, 20)), where Range looked like:

    class Range:
        def __init__(self, min, max):
            self.min, self.max = min, max
        def __call__(self, obj, col, value):
            if not (self.min <= value < self.max):
                raise ValueError, "Not in range [%s, %s): %s" \
                    % (self.min, self.max, value)

Other validators could be even simpler. I suppose some columns, like
IntCol, would add their own validators in addition to the user-provided
validators.

> > Actual integration between SQLObject and FunFormKit should come about
> > at some time, but I don't think there's actually much code that needs
> > to be shared between them. At some point I'd like to make some sort of
> > builder that takes a SQLObject class and builds a FormDefinition --
> > I'd have to expand the introspection some, though.
>
> Is planned from us, and should come about in the next few weeks. You're
> exactly right, there's very little code shared between them in our
> planned implementation, and I will admit that part of the reason we're
> using ValidatorConverters is to ease the transition. :) I'll send you
> details of this as it gets closer on my radar.

It would be easy to take these validators and use a wrapper to turn them
into ValidatorConverters. Though there would still be issues of converting
values to and from strings, which this doesn't support.

Ian
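The wrapper Ian mentions at the end might look something like this (entirely hypothetical; FFK's real ValidatorConverter interface is richer, and this ignores string conversion): an adapter that turns a three-argument (obj, col, value) validator into the one-argument callable shape used elsewhere in the thread.

```python
# Hypothetical adapter: turn a three-argument (obj, col, value) validator
# into a one-argument, ValidatorConverter-style callable.

def as_converter(validator, obj=None, col=None):
    # obj and col are bound up front; the resulting callable only
    # needs the value, like the validators=[] scheme expects.
    def convert(value):
        return validator(obj, col, value)
    return convert

# A toy three-argument validator to exercise the adapter:
def non_negative(obj, col, value):
    if value < 0:
        raise ValueError('negative: %r' % (value,))
    return value
```

The reverse direction (string conversion and an "unconverter") is exactly the part this sketch leaves out, as Ian notes.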