sqlobject-discuss Mailing List for SQLObject (Page 425)
SQLObject is a Python ORM.
Brought to you by:
ianbicking,
phd
From: Bud P. B. <bu...@si...> - 2003-05-20 12:18:06
I've been thinking about the API for dealing with relations. person.roles is just perfect. In addition, what about person.rolesWhere(<some condition>)? And instead of person.addRole(r), what about person.roles.append(r), and similarly r.removePerson(p) as r.persons.remove(p)?

To be Pythonic, everything should behave as much like a list as possible. So I started to play around with inheriting from list and looking at the hooks (see attached file). I was expecting that there was a single interface for setting and getting list items (namely, __getitem__ and __setitem__), but it seems that other ways of modifying the list, such as l.extend() or l.append(), don't internally fall back on __getitem__ and __setitem__.

Also, someone could "recalculate" relationships and assign them: p.roles = throttleBack(p.roles)

Have any of you looked into these issues, and do you know which hooks have to be taken care of?

thanks
--bud

/-----------------------------------------------------------------
| Bud P. Bruegger, Ph.D.
| Sistema (www.sistema.it)
| Via U. Bassi, 54
| 58100 Grosseto, Italy
| +39-0564-411682 (voice and fax)
\-----------------------------------------------------------------
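A quick sketch of the hook problem described above (modern Python shown for illustration; the 2003 thread predates `collections.abc`): `list`'s C-implemented mutators such as `append` and `extend` do not go through `__setitem__`, so there is no single interception point. Deriving from `collections.abc.MutableSequence` instead routes all mutators through a small set of overridable methods. The class names here are invented for the example.

```python
from collections.abc import MutableSequence

class LoggingList(list):
    def __setitem__(self, index, value):
        # Only explicit item assignment (ll[i] = v) reaches this hook.
        print("setitem:", index, value)
        super().__setitem__(index, value)

ll = LoggingList([1, 2, 3])
ll.append(4)   # no "setitem:" output -- list.append bypasses __setitem__
ll[0] = 99     # prints "setitem: 0 99"

class TrackedList(MutableSequence):
    """MutableSequence routes append/extend/insert through these hooks."""
    def __init__(self, items=()):
        self._items = list(items)
    def __getitem__(self, i):
        return self._items[i]
    def __setitem__(self, i, v):      # hook: in-place mutation
        self._items[i] = v
    def __delitem__(self, i):         # hook: removal
        del self._items[i]
    def __len__(self):
        return len(self._items)
    def insert(self, i, v):           # hook: all insertions funnel here
        self._items.insert(i, v)

t = TrackedList([1, 2])
t.append(3)    # implemented by MutableSequence in terms of insert()
```

So a relation object that must notice every mutation (to issue the corresponding SQL) is easier to build on `MutableSequence` than on `list`.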
From: Bud P. B. <bu...@si...> - 2003-05-20 12:05:13
And as a followup: of particular magic to me is what happens here:

for item in myList:
    print item

I frankly haven't a clue which special method to override to get control over what is returned as items... any idea?

--bud
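For the iteration question, a minimal sketch (modern Python syntax): the `for` loop asks the object for `__iter__`, falling back to repeated `__getitem__` calls with 0, 1, 2, ... only when `__iter__` is absent. A `list` subclass inherits `__iter__`, so that is the method to override. The `Roles` class is invented for illustration.

```python
class Roles(list):
    def __iter__(self):
        # Wrap each stored value before handing it to the for-loop.
        for raw in super().__iter__():
            yield "role:" + str(raw)

r = Roles(["admin", "user"])
collected = [item for item in r]   # drives __iter__
```

An ORM's join attribute could use the same hook to turn raw row IDs into full objects lazily, one per loop step.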
From: Ian B. <ia...@co...> - 2003-05-20 09:00:39
On Tue, 2003-05-20 at 03:27, Bud P.Bruegger wrote:
> > Actually, I don't consider my db models to be legacy models... "legacy"
> > kind of implies "out of date", "mainframe", etc.
>
> I agree that "legacy" isn't a good term. What I meant is really a
> pre-existing data schema that has to be accessed (in contrast to one
> that is newly created following the needs of the middleware).

You know, outside of programming "legacy" doesn't imply anything bad :)

> I'm wondering what can (theoretically) be done and where, if they
> exist, the difficulties are. OID is one that I see. Besides this, I
> believe that (what Postgresql calls) constraints should be quite
> straightforward to add. I already declare foreign keys all the way.
> While I haven't implemented it yet, I believe it to be easy to add
> things such as CHECK constraints, multi-field UNIQUE constraints, etc.
> I also plan to add the declaration of INDICES. I haven't thought of
> constraints that involve multiple tables, but offhand it doesn't
> seem difficult either.

If you really need a sophisticated data definition, you can always create the table yourself and allow the database to handle much of the logic (like constraints). That seems acceptable to me.

What I suspect Edmund's problem is, is that his tables don't map well to classes. Or maybe more accurately, his *usage* doesn't map well to instances and attribute access. I'm imagining that his applications are phrased with rows being relations, not so much structured data. Or at least some of the data.

Ian
From: Bud P. B. <bu...@si...> - 2003-05-20 08:48:38
> Actually, I don't consider my db models to be legacy models... "legacy"
> kind of implies "out of date", "mainframe", etc.

I agree that "legacy" isn't a good term. What I meant is really a pre-existing data schema that has to be accessed (in contrast to one that is newly created following the needs of the middleware).

I believe the biggest issue I see here is that of the object ID. Both SQLObject and my experiments create an additional OID that was not declared in the mapping. Both allow you to choose the name, but (I believe???) not the type of that field. Do you see a strong need to live without such an OID? Is it possible to retrofit pre-existing schemas with such an OID field?

> It seems to me that (1) is best served by ZODB, (2) is where the ORMs
> and SQLObject are, and (3) is where people like me are, but would prefer
> not to be.

I'm wondering what can (theoretically) be done and where, if they exist, the difficulties are. OID is one that I see. Besides this, I believe that (what Postgresql calls) constraints should be quite straightforward to add. I already declare foreign keys all the way. While I haven't implemented it yet, I believe it to be easy to add things such as CHECK constraints, multi-field UNIQUE constraints, etc. I also plan to add the declaration of INDICES. I haven't thought of constraints that involve multiple tables, but offhand it doesn't seem difficult either. Is this what you need, or is there more to it?

> > What about a middleware layer that usually does the first solution but
> > doesn't block you from doing the third? Would that be useful, or does
> > anyone see a strong need for the middle solution?
>
> I would have loved to have used SQLObject or some other alternative. I
> looked at MiddleKit, dbObj, etc. But, none of these tools made it easy
> to do complex data models.

I believe a concrete example would be the best way of understanding each other. I admit to having difficulties fully understanding the situation so far. I tried to make more concrete above what I have understood about constraints. I still have only a vague understanding of the multiple joins... Can you enlighten me a little more?

best cheers
--bud
From: Bud P. B. <bu...@si...> - 2003-05-20 08:48:24
This is a great discussion from which I learn a lot about requirements.

> I don't see anything inherent in (3) that requires you to "trade pure object
> orientation"; the problem is that we're not aware of an object mapper that
> supports as much as we'd like.

I suppose the biggest difference between the object and relational philosophies is what the unit of retrieval is: while in relational DBs you can select just a subset of columns, objects with a subset of attributes usually don't seem to make much sense. What I imagine at the moment is the possibility to write an arbitrary SQL query (or maybe better, a VIEW) and relate it to a class such that the result set can be easily used. I would think that using this would be an exception, but middleware layers should make it possible.

> > ...
> > But, none of these tools made it easy to do complex data models.
>
> Since I'm interested in making SQLObject (or a byproduct of it) able to support
> what I consider to be all of relational theory in an object-accessible way,
> I'm curious what you consider to be the too-complex parts today. So far I've
> heard multi-way joins and constraints. Let me see if I can elaborate these,
> then you do the same?

It seems I have the same need to understand better...

> 1. Multi-way joins, meaning n-way intersections. I've had to hack with
> SQLObject to handle these in a reasonable way, but conceivable.
>
> 2. Multi-way joins, meaning join traversal. Seems to be supported, although
> not necessarily efficiently. So I assume you mean (1)?

Could you give examples here of what exactly you mean, and maybe how a nice API would look?

> 3. Date's Constraint taxonomy:

I don't have Date's book, but believe I understand...

> Domain/type constraints,

In an OO approach, I suppose you would express this with a class. The class must ensure that only valid instances can be created. A Python class is definitely the preferred way for me to express this. I see that SQL offers some constraints (CHECK, foreign key, unique, etc.) that can express a subset of consistency requirements. When it gets more complex, one needs a real programming language to express the requirements. While in a DBMS you can do this with triggers and stored procedures, it really becomes complex, brings portability problems, etc. So maybe a middleware layer should support a declarative spec of constraints that can automatically be propagated to the DBMS and also be used from Python (since you may want to know about consistency also for objects that don't go into the db...). But I don't see that it is easy to add consistency checks that go into the programming domain beyond what you can do declaratively with a pre-defined set of constraints.

> column constraints,
> table constraints,

I thought I understood the differences in your taxonomy--but I'm getting foggy again. In my thinking, either an attribute or an instance is of a certain type(/domain), and its constraints accordingly refer either to a column or a table. But aren't these really type constraints in different incarnations?

> and multi-table constraints (database constraints). I'll
> probably expand on these wrt SQLObject soon, but wanted to put them all up
> here for now.

Here I have the same kind of intuition. Wouldn't this mean that I create a higher-level object that contains multiple lower-level ones, each of which is physically represented in a table...

> 4. Transition constraints (column, table, multi-table) are somewhat different,
> and will almost certainly be messier. :)

What are these?

cheers
--b
From: Luke O. <lu...@me...> - 2003-05-19 18:09:40
> But your observation (and Luke's) does raise a thought... perhaps there
> are three broad classes of database user:
>
> 1. Someone who wants an object store and isn't especially concerned
> about data integrity, flexibility, etc.
>
> 2. Someone who wants an object store but does want some measure of data
> integrity, has simple data models, and wants some ability to access the
> data via third-party apps/tools.
>
> 3. Someone who wants flexibility, the ability to access the data via third-
> party apps/tools, is concerned about data integrity, etc., and is willing
> to trade pure object orientation for these.
>
> It seems to me that (1) is best served by ZODB, (2) is where the ORMs
> and SQLObject are, and (3) is where people like me are, but would prefer
> not to be.

Mostly agree with your taxonomy. I think it can be transitively restated as:

1. All data integrity / relational design is via object design. Database is generic persistence.

2. Data design is via relational concepts, but constraints and complexity are still dealt with on the object side: they want to access their data models as objects, but the data model is more important.

3. Recognizing that constraints are *part* of the data model, this is just a clarification of (2): a desire to model everything via relational theory but to access this model via objects.

I don't see anything inherent in (3) that requires you to "trade pure object orientation"; the problem is that we're not aware of an object mapper that supports as much as we'd like.

> ...
> But, none of these tools made it easy to do complex data models.

Since I'm interested in making SQLObject (or a byproduct of it) able to support what I consider to be all of relational theory in an object-accessible way, I'm curious what you consider to be the too-complex parts today. So far I've heard multi-way joins and constraints. Let me see if I can elaborate these, then you do the same?

1. Multi-way joins, meaning n-way intersections. I've had to hack with SQLObject to handle these in a reasonable way, but conceivable.

2. Multi-way joins, meaning join traversal. Seems to be supported, although not necessarily efficiently. So I assume you mean (1)?

3. Date's Constraint taxonomy: domain/type constraints, column constraints, table constraints, and multi-table constraints (database constraints). I'll probably expand on these wrt SQLObject soon, but wanted to put them all up here for now.

4. Transition constraints (column, table, multi-table) are somewhat different, and will almost certainly be messier. :)

- Luke
From: Edmund L. <el...@in...> - 2003-05-19 17:52:06
Bud P.Bruegger wrote:
> thanks for your insight. I'm still digesting, but it seems that there
> is a considerable range of needs. People (like you) who need a single
> DB model accessed by multiple languages and apps, and people who do
> just Python and a single app (like possibly me at the moment) and don't
> do legacy db models...

Actually, I don't consider my db models to be legacy models... "legacy" kind of implies "out of date", "mainframe", etc. In reality, however, they are just data models that exploit the strengths of the relational model--data integrity, flexibility, etc.

But your observation (and Luke's) does raise a thought... perhaps there are three broad classes of database user:

1. Someone who wants an object store and isn't especially concerned about data integrity, flexibility, etc.

2. Someone who wants an object store but does want some measure of data integrity, has simple data models, and wants some ability to access the data via third-party apps/tools.

3. Someone who wants flexibility, the ability to access the data via third-party apps/tools, is concerned about data integrity, etc., and is willing to trade pure object orientation for these.

It seems to me that (1) is best served by ZODB, (2) is where the ORMs and SQLObject are, and (3) is where people like me are, but would prefer not to be.

> I see three ways of doing constraint kind of stuff:
>
> * all in python, dbms doesn't know about constraints. This works
> if everything that accesses the data goes through the same
> middleware layer (not directly to dbms)
>
> * specification of constraints in python/middleware that translates
> them to SQL statements to send to the dbms (such as check constraints).
> As you already say, there is a lot of diversity in dbms
> capabilities here (I've only looked at Postgres so far)
>
> * doing everything in the dbms.
>
> It seems that the first and last are easy, the middle one more
> difficult (as you mention).

This is what I've been forced to do. I have a thin API that the upper layers use to access the DB. Maybe 97% of the constraints are in the DB, and the remaining 3% are outside it because it was easier to debug this code as external Python code than as horrid PL/PgSQL. Isolating all the SQL in this layer also makes it easier to port to a different DB if I need to.

> What about a middleware layer that usually does the first solution but
> doesn't block you from doing the third? Would that be useful, or does
> anyone see a strong need for the middle solution?

I would have loved to have used SQLObject or some other alternative. I looked at MiddleKit, dbObj, etc. But none of these tools made it easy to do complex data models.

...Edmund.
From: Bud P. B. <bu...@si...> - 2003-05-19 12:39:18
On Mon, 19 May 2003 00:17:32 -0400, Edmund Lian <el...@in...> wrote:
> I've seen these issues again and again. I've chosen, in my latest work,
> to forgo a third-party TP layer and just put the constraints in the DB.
> This way, I don't have to deal with API and cross-language issues since
> the data is accessed from Python, VB, Access, Crystal Reports, etc.

thanks for your insight. I'm still digesting, but it seems that there is a considerable range of needs. People (like you) who need a single DB model accessed by multiple languages and apps, and people who do just Python and a single app (like possibly me at the moment) and don't do legacy db models...

I see three ways of doing constraint kind of stuff:

* all in Python; the dbms doesn't know about constraints. This works if everything that accesses the data goes through the same middleware layer (not directly to the dbms)

* specification of constraints in python/middleware that translates them to SQL statements to send to the dbms (such as check constraints). As you already say, there is a lot of diversity in dbms capabilities here (I've only looked at Postgres so far)

* doing everything in the dbms.

It seems that the first and last are easy, the middle one more difficult (as you mention). What about a middleware layer that usually does the first solution but doesn't block you from doing the third? Would that be useful, or does anyone see a strong need for the middle solution?

--b
From: Bud P. B. <bu...@si...> - 2003-05-19 12:29:13
On Mon, 19 May 2003 02:21:42 -0400, Edmund Lian <el...@in...> wrote:
> What we need is something that allows us to use a mixture of techniques.
> E.g., use the tool to do quick/dirty/simple stuff, but also use the tool
> to implement hard stuff that requires lots of enterprise-level SQL (for
> want of a better term): multiway joins, subqueries, transactions, etc.
> OK, so I'm dreaming! :-)

Maybe one way of doing this would be to allow a (what I call) ClassMap to use custom SQL instead of automatically generated SQL. That way, almost anything could be encapsulated as an object. I don't know how to deal with relationships this way, though...

--b
From: Luke O. <lu...@me...> - 2003-05-19 06:39:50
> [As an aside, I'm not quite convinced that using Python to express where
> clauses, etc. is better, since you have to work much harder to deduce
> the underlying SQL constructs when debugging. But, I'm still mulling
> this over.]

I am also still weighing this. The clumsiness needed for the AND() and OR() problems, and the fact that there really aren't a lot of 'hard' clauses I've had that are easier to manage in Python instead of SQL, make me avoid it. I think it may be more helpful/relevant/(necessary?) when SQLObject supports more abstracted toSQL/fromSQL actions for columns, but for now sqlEncode (or whatever it's called) is just as easily used to make an SQL string manually that any developer can read.

> > I'm in the midst of re-reading CJ Date's "What Not How",
> > which applies so directly to this issue... tangentially, I'm also leaning
> > towards the suggestion of supporting nested types, a la the Point example,
> > although I need to think it through for myself...
>
> Don't forget to check out http://www.pgro.uk7.net/index.htm It has lots
> of stuff by Date, Pascal, etc. They do take an extremist, theoretic view
> of the world, but maybe more rigor is needed given how many people seem
> to equate MySQL and Access with PostgreSQL, Oracle, DB2, etc.
>
> Is the book worth a read?

Yep. I try to read dbdebunk.com every once in a while, although it's become a lot of repetitious Pascal whining about normalization rather than the stuff that really interests me (although I do take it as a matter of fact that you have to have a really freaking big, proven reason to denormalize in any database I have say over). This gets to a point Date makes occasionally about most SQL products' base tables being equal to implementation, when they ought to be as abstract from implementation (conceptually at least) as views are from base tables. Anyways...

Yes, I think the book is worth a read, although it's expensive and short, and if you already know Date's views then half the book is a refresher. Also, it definitely could have used some more structural editing (a lot of "Oh, and this is relevant, but I'll talk about it in these other two chapters, so forget I said it."). I found The Third Manifesto to be much more worthwhile, and it covers most of the constraint arguments here. The one value I've really gotten out of What Not How is that it's so short, and introduces concepts like relvar vs. relation as sidenotes, that it's a very handy book to pass off to co-workers etc. to make a point. Most people won't read The Third Manifesto, or care, but this is very conversational and short. :)

- Luke
From: Edmund L. <el...@in...> - 2003-05-19 06:21:56
Luke Opperman wrote:
> However, I would like to see a decent constraints system built into
> SQLObject/Python, rather than relying on the database, just as I have
> obviously been pushing towards a python-side transaction system, etc. My needs
> are typically the inverse of Edmund's: flexibility to use Python exclusively
> but needing to run across a variety of backends occasionally, and the fact that
> I'm not a fan of most databases' 'trigger'-based procedural constraint
> systems.

Me neither. I usually stick to extensive referential integrity constraints. Triggers and stored procedures are really hard to debug at times, and they are outside the purview of a source code control system. I've found situations where triggers were genuinely needed, though, so I've got a smattering of them.

> Now, it would be nice to simply augment the DB's constraint system if possible,
> since I appreciate the fact that other tools touch the DB (heck, that's where
> our bad IDs that started me talking about this came from..) Not sure how
> feasible that will be beyond adding them to the creation SQL where there's a
> conceptual match.

What we need is something that allows us to use a mixture of techniques. E.g., use the tool to do quick/dirty/simple stuff, but also use the tool to implement hard stuff that requires lots of enterprise-level SQL (for want of a better term): multiway joins, subqueries, transactions, etc. OK, so I'm dreaming! :-) SQLObject might get there... it certainly feels different--even so early in its life--from the other ORMs, which fall down on support for complex, preexisting data models and SQL queries.

[As an aside, I'm not quite convinced that using Python to express where clauses, etc. is better, since you have to work much harder to deduce the underlying SQL constructs when debugging. But I'm still mulling this over.]

> I'm in the midst of re-reading CJ Date's "What Not How",
> which applies so directly to this issue... tangentially, I'm also leaning
> towards the suggestion of supporting nested types, a la the Point example,
> although I need to think it through for myself...

Don't forget to check out http://www.pgro.uk7.net/index.htm It has lots of stuff by Date, Pascal, etc. They do take an extremist, theoretic view of the world, but maybe more rigor is needed given how many people seem to equate MySQL and Access with PostgreSQL, Oracle, DB2, etc.

Is the book worth a read?

...Edmund.
From: Luke O. <lu...@me...> - 2003-05-19 05:34:23
As a short partial answer to myself: we really only encounter this when we have testing/temporary databases, as I agree strongly with Edmund that it is the constraints that give a database meaning.

However, I would like to see a decent constraints system built into SQLObject/Python, rather than relying on the database, just as I have obviously been pushing towards a python-side transaction system, etc. My needs are typically the inverse of Edmund's: flexibility to use Python exclusively but needing to run across a variety of backends occasionally, and the fact that I'm not a fan of most databases' 'trigger'-based procedural constraint systems. I also think (I don't know yet) that having python-side knowledge of the constraints of the system will make it easier to deal with these errors in an intelligent manner.

Now, it would be nice to simply augment the DB's constraint system if possible, since I appreciate the fact that other tools touch the DB (heck, that's where our bad IDs that started me talking about this came from...). I'm not sure how feasible that will be beyond adding them to the creation SQL where there's a conceptual match.

I'm in the midst of re-reading CJ Date's "What Not How", which applies so directly to this issue... tangentially, I'm also leaning towards the suggestion of supporting nested types, a la the Point example, although I need to think it through for myself... (I think my mind is slowly trying to figure out how to make SQLObject into an implementation of Date's 'real' relational database / 'D' from The Third Manifesto, although I'm not at all sure yet. Reading What Not How again is tying into a discussion I've been having on #python about static typing as a way of declaring invariants about a system... that's what constraints are too... must slow down the thoughts. :)

- Luke
From: Edmund L. <el...@in...> - 2003-05-19 04:17:59
Bud P.Bruegger wrote:
> I have been thinking about constraints etc. a bit, and my impression so
> far is that this is better dealt with in Python objects than in the
> dbms. I see some advantages to doing it in Python:
>
> * the app programmer doesn't need to worry about SQL. [The difficulty
> here is particularly the marshalling that separates the two worlds,
> which either the app programmer or the backend programmer has to deal
> with]
>
> * it is more portable across dbms's (particularly those that don't do
> constraints well)
>
> * the backend developer has to worry about fewer things
>
> * SQL constraints may not be expressive enough to do full consistency
> checking, so you need Python code in any case.

Constraints are essential for data integrity, but they are often incomplete or poorly utilized--because good data modellers are hard to find, and because RDBMS theory is poorly taught/learnt in schools. A good data model uses extensive constraints to ensure that it is never possible to create invalid combinations of data in the DB via invalid insertions, deletions, or modifications.

But where to add them? To my mind, the constraints should really be in the DB, because it is the DB that should enforce the constraints. The problem is that it is then very hard to port the data model to a different DB. On the other hand, other applications (in possibly other languages) that use the data will be subject to the same checks and constraints.

To improve portability, one can move the constraints to some middleware layer. But now you have more complexity, and potentially just as many portability issues (does the middleware layer run on my platform? Can I call it from my chosen language? etc.). If you write your own middleware, you have more APIs/code to maintain, etc. And writing a decent transaction processing layer is non-trivial (because if we care enough to use constraints, we care about having robust transactions).

I've seen these issues again and again. I've chosen, in my latest work, to forgo a third-party TP layer and just put the constraints in the DB. This way, I don't have to deal with API and cross-language issues, since the data is accessed from Python, VB, Access, Crystal Reports, etc. A middleware layer would not have worked in this situation, since things like Crystal Reports and Access would not have interfaced to it.

But what if the DB doesn't support constraints or transactions well? If you care about data integrity, change your DB. ACIDity is mandatory for reliability.

...Edmund.
From: Luke O. <lu...@me...> - 2003-05-17 22:38:57
Hi -

An interface question for everyone to ponder: if you have a foreignKey column, or a join, and have an object with an invalid ID in one of these slots, it will raise an SQLObjectNotFound exception when you try to access any of these attributes. Which is cool and all, except that this means that many attribute accesses need to be try/excepted, instead of just the explicit creation of objects. For instance, the contrived simple access:

# should be try/excepted anyways, and we do
ob = Person(12)
# these don't seem reasonable to do so
v = ob.roles
w = ob.phoneNumber

should be:

try:
    ob = Person(12)
except SQLObjectNotFound:
    fail()

try:
    v = ob.roles
except SQLObjectNotFound:
    v = []

try:
    w = ob.phoneNumber
except SQLObjectNotFound:
    w = None

Now, I'm not entirely convinced of the solution. Sometimes you want to know that you've got bad data; other times you just want to be able to go on, blindly ignoring the bad (so having SQLObject do the exception checking in those cases would be cool). (Alternatively, as I'm sure Edmund will tell me, my database should ensure integrity at all times. And he's probably right, but this still concerns me. :)

So: should SQLObject provide an option to transparently pass over bad relationships? Perhaps at the class level, perhaps at the instance level (creation argument or anytime flag?), or should this be rethought into a stronger enforcement of relationships throughout?

(We've occasionally been having full interpreter hangs on these accesses, but I haven't tracked it down; it may be an odd Webware or Postgres interaction. It is completely tangential to this idea, but if anyone else is experiencing hangs instead of exceptions when accessing invalid join or FK attrs, I'd love to hear about it to help track this down.)

Thanks for your thoughts,
- Luke
From: Bud P. B. <bu...@si...> - 2003-05-16 07:28:54
On Thu, 15 May 2003 16:20:30 -0400, Edmund Lian <el...@in...> wrote:
> I cannot confirm this--I do not use SQLObject. I'm following discussions
> off and on, but have not tried to use it since it cannot support what I
> do (multi-table joins, extensive referential integrity checks, etc.) yet.
>
> This is not to knock SQLObject. So far, I haven't seen an ORM that can
> support complex schema and does not get in the way of exploiting the
> relational model to the max.

Can you tell us more about it? Maybe an example? My experimental code is almost there to support foreign keys all the way. I thought of adding arbitrary table constraints later (with those, you can do things such as CHECK constraints, UNIQUE constraints, etc. that involve a single field or multiple fields).

I have been thinking about constraints etc. a bit, and my impression so far is that this is better dealt with in Python objects than in the dbms. I see some advantages to doing it in Python:

* the app programmer doesn't need to worry about SQL. [The difficulty here is particularly the marshalling that separates the two worlds, which either the app programmer or the backend programmer has to deal with]

* it is more portable across dbms's (particularly those that don't do constraints well)

* the backend developer has to worry about fewer things

* SQL constraints may not be expressive enough to do full consistency checking, so you need Python code in any case.

Just thoughts, and if anyone has opinions, I'd be very interested in hearing them...

-b
From: Bud P. B. <bu...@si...> - 2003-05-16 03:31:37
I'm wondering -- you may be right that my approach is more complex for the end user? Hmmm, not easy to say... so let me try to make a concrete example that is similar to what you have in the doc:

    #-- the application classes -------------
    class Person(object):
        pass

    class PhoneNumber(object):
        pass

    # I like the fact that these are totally independent...

    #-- making the above storable ------------------
    personMap = ClassMap('Person', [
        SimpleAttr('firstName', 'varchar', notNull=True,
                   unique=True, length=100),
        SimpleAttr('middleInitial', 'char', notNull=True,
                   unique=True, length=1),
        SimpleAttr('lastName', 'varchar', notNull=True,
                   unique=True, length=100)],
        tableName='perTab', idName='persid', autoSave=1)

    phoneMap = ClassMap('PhoneNumber', [
        SimpleAttr('phoneNumber', 'varchar', notNull=True,
                   length=14),
        SimpleAttr('phoneType', 'char', notNull=True,
                   unique=True, length=1),
        SimpleAttr('lastName', 'varchar', length=5)],
        autoSave=1)

    # What I don't have yet are specialized Attr subclasses (such as your
    # various Col subclasses). These could probably save some typing and
    # implement interesting semantics such as ENUM.
    # Also note that so far I don't deal with default values. They belong
    # in what I believe are the higher-level physical types that are
    # application dependent. I need to supply some base classes here to
    # make life easy. (Such as a nominal value class that is equivalent
    # to ENUM.)

    persPhoneRel = Many2Many(personMap, phoneMap, 'phoneNumbers')
    appMap = Mapping([personMap, phoneMap], [persPhoneRel])

    conn = psycopg.connect(...)
    drv = PostgresPsycopgDriver()
    backend = Backend(drv, conn, appMap)
    #-------- and now everything is persistent ---------

So yes, this may be a little more verbose than SQLObject, and I suppose users are exposed to more classes. There is some room for making it more efficient to type... But in any case, I don't see a big difference in complexity. [Obviously, I'm biased on this...]
On 15 May 2003 10:52:26 -0500, Ian Bicking <ia...@co...> wrote:

> Just looking at your example below, doesn't this seem like a lot more
> work than SQLObject? What was two classes (Person and PhoneNumber)
> becomes seven (Person, PhoneNumber, StorablePerson, StorablePhoneNumber,
> a Person mapper, a PhoneNumber mapper, and a relation).
>
> I like that SQLObject's UML diagram (which doesn't exist, but let us
> imagine it) is a boring diagram, and that each class you use requires
> one class definition.
>
> On Tue, 2003-05-13 at 02:15, Bud P. Bruegger wrote:
> > So assume your app has a class Person, a class PhoneNumber, and a
> > 1:many relationship between them. There would be the following
> > steps to make this storable:
> >
> > * make StorablePerson and StorablePhoneNumber with multiple
> >   inheritance from Person and Storable, and PhoneNumber and Storable,
> >   respectively. This would add methods to communicate with the
> >   backend. Not sure yet how I will do this--but I hope to have some
> >   initial code in a couple of days.
> >
> > * make a mapping object that contains (aggregation) three major
> >   subobjects:
> >   - an ObjMap that basically represents the Person table
> >   - an ObjMap that represents the PhoneNumber table
> >   - a relationship object that defines the relationship between them
> >   So a mapping has a set of "table definitions" and a set of
> >   relationship definitions. "Table definitions" consist of a set of
> >   attributes, and attributes of a set of columns (in most cases just
> >   one).
> >
> > * create a backend object by assembling a dbDriver, a dbConnection,
> >   and the above mapping object. The dbDriver takes information
> >   expressed in terms of mapping entities to create SQL statements
> >   (strings). The connection sends these off... The backend object
> >   manages all this.
> >
> > * StorablePerson and StorablePhoneNumber objects know about the
> >   backend object and ask it to be stored, updated, deleted. The
> >   backend first checks the class of the object and uses it to find the
> >   corresponding mapping (name match). Mapping and Driver produce the
> >   SQL with variables for values, and also marshal the Python values of
> >   the object for use in SQL statements.
> >
> > * the backend also provides finder methods that basically correspond
> >   to SELECT queries..
> >
> > * the backend should also manage a cache and do some concurrency
> >   control...
> >
> > Hope this clarifies my ideas a little.
> >
> > best cheers
> > --b
> >
> > On 12 May 2003 17:10:40 -0500
> > Ian Bicking <ia...@co...> wrote:
> >
> > > On Mon, 2003-05-12 at 10:39, Bud P. Bruegger wrote:
> > > > > Now, if I'm reading it right, this would be the most significant
> > > > > difference with SQLObject -- a mapping object is separate from
> > > > > the class. But why?
> > > >
> > > > Ian, I made some progress and I can more clearly answer the
> > > > question above:
> > > >
> > > > In this approach, relationships are instances managed by Mapping,
> > > > not part of the "class" (of the storable application object). This
> > > > way, I keep object references from the relationship object to the
> > > > contained (from, to...) "table objects" (that I call ObjMap). I can
> > > > also keep back-references from ObjMap to all the relations that
> > > > reference it.
> > > >
> > > > This way, I avoid (at least it seems so at the moment) all problems
> > > > of circular dependencies, referencing objects only by name, etc.
> > >
> > > I'm still not really clear on what you're thinking. Can you show how
> > > you might define a class, using a mapping?
> > >
> > > Ian

/-----------------------------------------------------------------
| Bud P. Bruegger, Ph.D.
| Sistema (www.sistema.it)
| Via U. Bassi, 54
| 58100 Grosseto, Italy
| +39-0564-411682 (voice and fax)
\-----------------------------------------------------------------
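The backend flow Bud describes -- look up the mapping by the object's class name, produce SQL with placeholders, and marshal the object's values separately -- can be sketched roughly as follows. `ClassMap`, `Backend`, and `insert_sql` here are toy stand-ins for his experimental code, not its real API.

```python
class ClassMap:
    """Minimal stand-in for an ObjMap: class name, table name, columns."""
    def __init__(self, className, tableName, columns):
        self.className = className
        self.tableName = tableName
        self.columns = columns

class Backend:
    def __init__(self, maps):
        # Index the mappings by application-class name: the "name match"
        # the post describes for finding the right mapping per object.
        self.maps = {m.className: m for m in maps}

    def insert_sql(self, obj):
        """Build an INSERT with DB-API placeholders, plus the marshalled
        value tuple -- values never get pasted into the SQL string."""
        m = self.maps[type(obj).__name__]
        columns = ", ".join(m.columns)
        marks = ", ".join(["%s"] * len(m.columns))
        values = tuple(getattr(obj, c) for c in m.columns)
        sql = "INSERT INTO %s (%s) VALUES (%s)" % (m.tableName, columns, marks)
        return sql, values

class Person:
    """Plain application class, unaware of persistence."""
    def __init__(self, firstName, lastName):
        self.firstName, self.lastName = firstName, lastName

backend = Backend([ClassMap("Person", "perTab", ["firstName", "lastName"])])
sql, args = backend.insert_sql(Person("Bud", "Bruegger"))
print(sql)   # → INSERT INTO perTab (firstName, lastName) VALUES (%s, %s)
print(args)  # → ('Bud', 'Bruegger')
```

In the real design, a dbDriver would own the SQL-dialect details and the connection would execute `(sql, args)`; this sketch collapses those roles into one class for brevity.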
From: Brad B. <br...@bb...> - 2003-05-15 21:12:53
On 05/15/03 11:02, Ian Bicking wrote:

> On Wed, 2003-05-14 at 14:08, Brad Bollenbach wrote:
> > My proposal to speak about SQLObject at EuroPython has been accepted.
>
> Congratulations!

Thanks. :)

--
Brad Bollenbach
BBnet.ca
From: Edmund L. <el...@in...> - 2003-05-15 20:20:43
Bud P. Bruegger wrote:

> > This depends on which version of PostgreSQL you are using. The latest
> > versions drop this automatically.
>
> So you seem to confirm that SQLObject is fine as long as it runs with
> Postgres 7.3 or higher...

I cannot confirm this--I do not use SQLObject. I'm following discussions off and on, but have not tried to use it, since it cannot yet support what I do (multi-table joins, extensive referential integrity checks, etc.).

This is not to knock SQLObject. So far, I haven't seen an ORM that can support complex schemas and does not get in the way of exploiting the relational model to the max.

...Edmund.
From: Frank B. <fb...@fo...> - 2003-05-15 19:55:47
Hallo,

Brad Bollenbach hat gesagt: // Brad Bollenbach wrote:

> My proposal to speak about SQLObject at EuroPython has been accepted.
>
> A summary of the talk (and an embarrassingly long speaker bio :) can
> be found here:
>
> http://www.europython.org/sessions/talks/talk?itemId=03108161107
>
> Hope to see some SQLObject geeks there! :)

I don't know yet if I can come. It's not that far from here (about 2 hours by train).

BTW: My first SQLObject project is currently in full effect at www.dradio.de, or more exactly the survey at http://www.dradio.de/cgi-bin/umfrage (all in German). Only for three more weeks, though. An early version of the code is in the webware-sandbox.

ciao
--
Frank Barknecht   _ ______footils.org__
From: Bud P. B. <bu...@si...> - 2003-05-15 16:43:58
On Thu, 15 May 2003 10:55:32 -0500 Luke Opperman <lu...@me...> wrote:

> Hmm, intriguing, as I said..

I was really surprised about this possibility myself. In my personal style, I would like to write application classes independently of any kind of persistence mechanism first, and then add persistence in a second step. I guess in SQLObject I'd do that something along the following lines (and yes, one doesn't need to do it that way in SQLObject):

    class Person(object):
        pass  # add all business logic here

    class PersistentPerson(Person, SQLObject):
        _columns = [...]
        _joins = ...
        _idName = ...
        _table = ...
        _connection = ...  # I'd personally prefer to deal with that later...

So in my current approach, there is a similar structure:

    class Person(object):
        pass  # add all business logic here

    persClassMap = ClassMap(....)

cheers
--b
From: Luke O. <lu...@me...> - 2003-05-15 16:10:06
Quoting Nick <ni...@dd...>:

> Sorry, I forgot to say what my point was :) My point is that it's more
> convenient to treat objects as persistent using that kind of model than
> to have set() and get() functions a la C (Java, etc.). You're doing
> nearly the exact same thing as properties, so why not use them.

I think Bud's point was to use properties (see the end of his example), but the important part was to be able to add Backend 'Mappings' to any arbitrary class (not needing classes to be initially tied to SQLObject or similar). I'm not entirely sure I understand the need for this, but it's intriguing.

Actually, out of curiosity, I wanted to see if there was any special reason SQLObject does addColumn in the metaclass:

    class Answer(CoreObject):
        _columns = [
            TextCol('answer'),
            ## removed ForeignKey('Question') for this test
        ]

    class Question(CoreObject):
        _columns = [Col('question')]
        _joins = [MultipleJoin('Answer')]

    >>> from Answer import Answer
    >>> from SQLObject import *
    >>> col = ForeignKey('Question')
    >>> Answer.addColumn(col)
    >>> from Question import Question
    >>> Answer(2).question
    <Question 1 question='John'>

So SQLObject currently supports dynamically adding properties at runtime (I guess I knew that...). What do we gain by pulling this ability into a helper class (like I assume Mapping to be) that allows us to 'addColumn' to any class we please? ArbitraryClass + Mapping == SQLObjectClass + _columns/_joins, yes?

Hmm, intriguing, as I said..

- Luke
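Stripped of the database, the "add a column to an arbitrary class" idea Luke is probing reduces to installing a property on a class at runtime. The toy `add_column` below is illustrative only: real SQLObject columns are database-backed descriptors, not a per-instance dict.

```python
def add_column(cls, name):
    """Attach a property named `name` to an existing class at runtime,
    storing values in a per-instance `_data` dict. A toy version of what
    an external Mapping helper could do to any plain class."""
    def _get(self):
        # setdefault on the instance __dict__ avoids touching the
        # property itself, so there is no recursion.
        return self.__dict__.setdefault("_data", {}).get(name)

    def _set(self, value):
        self.__dict__.setdefault("_data", {})[name] = value

    setattr(cls, name, property(_get, _set))

class Answer:
    """An ordinary class, not tied to any ORM."""

add_column(Answer, "question")

a = Answer()
a.question = "John"
print(a.question)  # → John
```

This is the same mechanism `Answer.addColumn(col)` relies on: Python classes are mutable, so nothing about property creation requires a metaclass; the metaclass mainly provides a convenient moment to process `_columns` declaratively.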
From: Ian B. <ia...@co...> - 2003-05-15 16:04:51
On Thu, 2003-05-15 at 10:57, Bud P. Bruegger wrote:

> > Declaring a connection for a class is easy. Why is it a problem?
>
> It is easy once you know. When I first used it, I didn't understand
> why it wouldn't work, and found out that I needed the connection first.
> At least for me personally, decoupling the connection and the definition
> of the physical representation makes it easier to use, and flattens the
> learning curve.. But I may be alone on this...

Then that might just be a documentation issue.

> > The other way would be to make a table-driven columnsFromSchema that
> > was attached to your connection, and let that create the columns for
> > you.
>
> If I understand this correctly, i.e., that you store the "object schema"
> that defines the mapping from Python to SQL in tables, it comes very
> close to how I do it. Just that I am at a level where I represent the
> mapping as Python objects--but it would be possible to write them out
> to a DBMS and get them back into object form..

Well, I was thinking of something that's maybe a bit more hackish. Like:

    class MappingPostgresConnection(PostgresConnection):
        def __init__(self, mappings=None, **kw):
            self.mappings = mappings
            PostgresConnection.__init__(self, **kw)

        def columnsFromSchema(self, table, soClass):
            return self.mappings[soClass.__name__]

    myConnection = MappingPostgresConnection(
        mappings={'Person': [Col('name'), ...],
                  'PhoneNumber': [Col(...)]})

Ian
From: Ian B. <ia...@co...> - 2003-05-15 16:03:26
On Wed, 2003-05-14 at 17:02, J-P Lee wrote:

> Is it possible to backreference a 1-to-1 join using the _joins
> attribute? The documentation only mentions one-to-many and
> many-to-many. Foreign keys enable a one-way reference, but I'd like to
> get the reverse implemented as well.

Not yet. At some point soon I'm going to clean up and extend joins. For now you might want to just do:

    class Something(SQLObject):
        def _get_other(self):
            return Other.selectBy(somethingID=self.id)[0]
        def _set_other(self, value):
            value.somethingID = self.id
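Ian's `_get_other`/`_set_other` workaround can be generalized into a property factory. The sketch below uses an in-memory `FakeTable` stand-in so it runs without SQLObject or a database; `reverse_one_to_one` and all the table names are hypothetical, and the real `selectBy` returns a lazy SelectResults rather than a list.

```python
class FakeTable:
    """In-memory stand-in for a SQLObject class: a row list plus selectBy()."""
    def __init__(self, **kw):
        self.__dict__.update(kw)
        type(self).rows.append(self)

    @classmethod
    def selectBy(cls, **kw):
        # Return all rows whose attributes match the keyword filters.
        return [r for r in cls.rows
                if all(getattr(r, k, None) == v for k, v in kw.items())]

def reverse_one_to_one(other, fk):
    """Build a property implementing the pattern above: the reverse side
    of a one-to-one is just a selectBy on the foreign-key column."""
    def _get(self):
        matches = other.selectBy(**{fk: self.id})
        return matches[0] if matches else None

    def _set(self, value):
        setattr(value, fk, self.id)

    return property(_get, _set)

class Profile(FakeTable):
    rows = []

class Something(FakeTable):
    rows = []

# Attach the reverse reference after both classes exist, avoiding any
# ordering problem between the two definitions.
Something.profile = reverse_one_to_one(Profile, "somethingID")

s = Something(id=1)
p = Profile(id=10, somethingID=1)
print(s.profile is p)  # → True
```

Returning `None` for a missing match (instead of letting `[0]` raise IndexError) also sidesteps the dangling-reference problem Luke raises earlier on this page; whether that silence is desirable is exactly his open question.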
From: Nick <ni...@dd...> - 2003-05-15 16:02:39
On Thu, 2003-05-15 at 10:46, Bud P. Bruegger wrote:

> Now I create my Backend instance that runs its __init__. For each
> ClassMap in its Mapping instance (i.e., ['Person', 'Role']), it does
> something along the following lines:
>
> * add a class attribute called "backendInstance" that is the Python
>   object reference of the backend. This is necessary to make sure
>   application objects know who to talk to in order to get their
>   persistence needs satisfied. (this is where I am at the moment)
>
> * add methods for controlling persistence. I'm currently thinking of
>   - insert (new in database)
>   - update (update values in database)
>   - synch (update values from database)
>   - del
>   - as well as functions to control transactions. I was thinking of
>     wrapping the record-oriented methods of the DB-API with
>     object-oriented ones... Gotta think about that one...
>
> * add properties to traverse relationships. There seem to be two
>   options here: lazy or automatic. This can be controlled by the
>   Relationship object in the Mapping. I haven't looked into this
>   much, and probably I can rip off some of SQLObject's code here..

I do like that model, but you probably don't even need to make properties and such, since it is all internally called and used by other classes, and probably never by the API user. At least it would certainly reduce the complexity that metaclasses bring to the game. And by doing that in __init__, you're not really doing much that is different from the metaclassing of the Python 1.5 days.

If you're storing mappings, just refer to the mappings when you do an operation, instead of making yourself a bunch of lambdas. Depending on how many operations you're doing and how many classes you're defining, you may save yourself a lot of up-front work and memory just to import the classes.

Nick
From: Ian B. <ia...@co...> - 2003-05-15 16:01:30
On Wed, 2003-05-14 at 14:08, Brad Bollenbach wrote:

> My proposal to speak about SQLObject at EuroPython has been accepted.

Congratulations!

Ian