sqlobject-discuss Mailing List for SQLObject (Page 396)
SQLObject is a Python ORM.
Brought to you by: ianbicking, phd
From: Ian B. <ia...@co...> - 2004-01-21 16:59:05
Brad Bollenbach wrote:
> [Gah, very annoying that I can't reply on list. I'll try to get in touch
> with somebody at SourceForge today to figure out what's going wrong here.]
>
> Le mardi, 20 jan 2004, à 12:23 Canada/Eastern, Ian Bicking a écrit :
>> Sidnei da Silva wrote:
> [snip]
>>> * Enforcing constraints in python. Brad B. was chatting to me on irc
>>> yesterday and we came to agree on a api. He's writing a proposal (with
>>> a patch) and its going to present it soon. Basically, when you create
>>> a column you would provide a callable object as a keyword 'constraint'
>>> parameter. This constraint would then be used to enforce some
>>> restrictions.
>>>
>>> def foo_constraint(obj, name, value, values=None):
>>>     # name is the column name
>>>     # value is the value to be set for this column
>>>     # values is a dict of the values to be set for other columns
>>>     # in the case you are creating an object or modifying more than
>>>     # one column at a time
>>>     # returns True or False
>>>
>>> age = IntCol(constraint=foo_constraint)
>>>
>>> class Col:
>>>     def __init__(self, name, constraint):
>>>         self.name = name
>>>         self.constraint = constraint
>>>
>>>     def __set__(self, obj, value):
>>>         if not self.constraint(obj, name, value):
>>>             raise ValueError, value
>>>         # Set the value otherwise
>>
>> We already have Python constraints available through the
>> validator/converter interface, which I hope to fill out some more, and
>> provide some more documentation and examples.
>
> These constraints are only useful in trivial cases though. I have at
> least one specific case where I need to cross-reference column values in
> the object, which may currently be set, or about to be changed to a new
> value. So, there are other parameters that must be supplied to the
> callback.

Well, what we need is a schema-level/instance level validation. The
validator interface allows for this, but SQLObject doesn't currently
call a validator for the entire instance (so there would have to be some
changes).

So, if DBI has the values: new_value, target_object, name_of_column,
all_new_values, then the validator would look something like:

    class MyValidator(Validator):

        def validate(self, fields, state):
            # we don't know the new value or name_of_column, which doesn't
            # really apply in this case
            target_object = state.soObject
            all_new_values = fields

Through state.soObject you can check a single column for consistency
with other columns, but in the case of a .set() call you won't see all
of the new values; in that case you may want to have symmetric
validators, so if A and B are dependent, then when A is changed it
checks B, and when B is changed it checks A. Or use an instance
validator, which should

So, somewhere in .set() (or probably a new method, called by both .set()
and ._SO_setValue()) we'd check any instance validators, probably being
more careful that they don't see the object while it's in the middle of
having values set (i.e., convert and collect all the values, then set
them all at once).

I want to use validators more heavily, and have them translate into
database-side constraints as well. So, for instance, ForeignKey would
become a validator/converter, and would also create the "REFERENCES ..."
portion of the SQL. An example of an instance validator might be a
multi-column unique constraint, which again could create the necessary SQL.

> Essentially, I need an interface like Perl's Class::DBI:
>
> http://search.cpan.org/~tmtm/Class-DBI-0.95/lib/Class/DBI.pm#CONSTRAINTS
>
> Class::DBI is excellent, but I have little knowledge of its internals,
> and so it may suffer from the same performance problems inherent in
> SQLObject, but it's definitely a project every SQLObject developer
> should be well aware of for a 0.6 redesign, because we might as well do
> what everybody else does and steal ideas and improve upon them.

DBI does seem like a good system. Maybe because (besides being in Perl),
it actually reminds me a lot of SQLObject ;)  Though SQLObject actually
seems to be significantly larger in scope than DBI, which doesn't seem
to address transactions or caching in any way. This seems particularly
significant with respect to transactions, as transactions are one of the
things 0.6 tries to resolve more cleanly, and it's not a trivial
addition.

> In other news, the patch is necessarily on hold until we resolve the
> database backend versions vs. SQLObject issue. I've got tons of tests
> that have errors in them. The next thing that should be checked into the
> repository is something that makes those tests pass, but that will
> depend on what the consensus is among the users about how to handle
> those versioning problems.
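As a concrete illustration of the cross-column case Brad describes, here is a
small, self-contained sketch of a constraint written against the proposed
callable signature (obj, name, value, values). The start_date/end_date columns
and DateCol usage are invented for the example and are not existing SQLObject
API; the sketch only shows how such a callback might combine the pending
values with values already stored on the object:

    def date_range_constraint(obj, name, value, values=None):
        # Ensure start_date <= end_date, whichever of the two is being set.
        # Pending values (from a .set() call or object creation) take
        # priority over values already stored on the object.
        pending = dict(values or {})
        pending[name] = value
        start = pending.get('start_date', getattr(obj, 'start_date', None))
        end = pending.get('end_date', getattr(obj, 'end_date', None))
        if start is None or end is None:
            return True           # nothing to cross-check yet
        return start <= end

    # Hypothetical usage, following the proposal's column syntax:
    #   start_date = DateCol(constraint=date_range_constraint)
    #   end_date = DateCol(constraint=date_range_constraint)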
From: Ian B. <ia...@co...> - 2004-01-20 23:38:53
Ian Sparks wrote:
>> Ian Bicking wrote on Monday, January 19, 2004 10:38 PM : By using
>> descriptors, introspection should become a bit easier -- or at
>> least more uniform with respect to other new-style classes. Various
>> class-wide indexes of columns will still be necessary, but these
>> should be able to remain mostly private.
>
> Sorry to pull out just this one part of your plan but its not clear
> to me how you do introspection on current table classes to say, find
> out what columns a table has. This would be useful for generating
> editing web-pages for SO Objects for instance. Being able to walk
> joins would also be helpful so that you could have master/detail
> relationships or populate combo-boxes of all possible values for a
> field.

Well, right now you can look at _columns, which should have most of the
stuff you'd want. Ideally the SQLMeta object (in this redesign) would
have a clear set of methods for doing introspection.

For this case, I think it's better for the object to introspect itself,
and that you add a method to it like .fields() or something. Of course,
you still need some form of introspection, even if the object is
introspecting itself.

I think for joins, you are really thinking of ForeignKey (or maybe
RelatedJoin, I suppose), where you want to find all the possible objects
that this object could point to. In this case I think you want to look
at the ForeignKey object (which should be available through _columns),
and do something like
SQLObject.findClass(ForeignKeyColObj.foreignKey).search() to get the
possible options.

Ian
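A rough sketch of the introspection Ian describes, for anyone wanting to
build edit pages with the current release: walk a class's _columns list and,
for ForeignKey columns, look up the rows they could point to. The attribute
names (_columns, col.name, col.foreignKey, findClass) follow the message and
0.5-era SQLObject but may be spelled differently in other versions, and
.select() is used where the message says .search():

    def describe_columns(cls):
        """Map column name -> column class name for a SQLObject class."""
        return dict([(col.name, col.__class__.__name__)
                     for col in cls._columns])

    def foreign_key_choices(cls):
        """For each ForeignKey column, list the rows it could point to
        (e.g. to fill a combo box on an edit form)."""
        from SQLObject.SQLObject import findClass   # import location is a guess
        choices = {}
        for col in cls._columns:
            if getattr(col, 'foreignKey', None):
                target = findClass(col.foreignKey)
                choices[col.name] = list(target.select())
        return choices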
From: Ian S. <Ian...@et...> - 2004-01-20 22:37:59
>Ian Bicking wrote on Monday, January 19, 2004 10:38 PM :
>By using descriptors, introspection should become a bit easier -- or
>at least more uniform with respect to other new-style classes.
>Various class-wide indexes of columns will still be necessary, but
>these should be able to remain mostly private.

Sorry to pull out just this one part of your plan but its not clear to
me how you do introspection on current table classes to say, find out
what columns a table has. This would be useful for generating editing
web-pages for SO Objects for instance. Being able to walk joins would
also be helpful so that you could have master/detail relationships or
populate combo-boxes of all possible values for a field.

-----Original Message-----
From: Ian Bicking [mailto:ia...@co...]
Sent: Monday, January 19, 2004 10:38 PM
To: sql...@li...
Subject: [SQLObject] API Redesign for 0.6

[The remainder of the message quotes Ian Bicking's "API Redesign for 0.6"
proposal in full; the proposal is reproduced in the original posting later
on this page.]
From: Ian B. <ia...@co...> - 2004-01-20 20:28:50
-------- Original Message --------
Subject: DB Backend Version Compatibility
Date: Tue, 20 Jan 2004 15:10:59 -0500
From: Brad Bollenbach <br...@bb...>
To: ia...@co...

[Here's an email I tried sending to the list, but apparently got
spam-filtered.]

Hi all,

I'm trying to fix a bug in SQLObject right now whereby if psycopg raises
an IntegrityError, SQLObject doesn't properly reset the connection (with
a rollback() on the connection), leaving it in, effectively an
unpredictable state, leading to "no results to fetch" problems when I
try to access SQLObjects after that exception occurs.

So (obviously) I need to write a unit test that illustrates the problem,
and then make the code modifications to make the test pass. There's a
problem though: there are errors occurring in the tests.

The problem is quite a bit deeper though. When I run the tests with:

    python test.py -dpostgres --database=sqlo_test

I get lots of errors like:

    ProgrammingError: ERROR:  parser: parse error at or near "CASCADE"
    DROP TABLE so_validation CASCADE

(btw, I changed the PostgresConnection in SQLObjectTest.py to point to a
special db I created manually, but no matter...)

These errors are the result of SQL being run that expects me to be using
a PostgreSQL database version >= 7.3. We're currently using 7.2.1 for
many, many projects.

The problem is that the SQLObject project has no clearly defined
requirements for which database versions are supported for each of the
backends. The "Requirements" section of the documentation
(http://sqlobject.org/docs/SQLObject.html#requirements) must state which
versions of databases are required for each of the given backends to
function properly. The required version of each database backend should
be the earliest version of the backend for which the unit tests pass.

Optionally, we could add in little hacks here and there where a known
version incompatibility (e.g. the fact that PostgreSQL < 7.3 doesn't
support CASCADE on dropping tables) is resolved by detecting the version
and using alternative SQL to achieve the same effect (e.g. a manual drop
table + drop sequence for PostgreSQL < 7.3, or whatever the most
accurate "manual" SQL equivalent is.)

If everyone's okay with this I can update the Requirements section with
the backend versions that are expected to be supported when I hear from
the people who wrote them.

Thoughts?

--
Brad Bollenbach
BBnet.ca
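Brad's "little hacks" suggestion -- detect the server version and fall back
to version-appropriate SQL -- could look roughly like the sketch below. The
connection is assumed to be any DB-API connection (psycopg or similar), and
the <table>_id_seq sequence name is an assumption borrowed from SQLObject's
usual naming, not something stated in the message:

    import re

    def drop_table(conn, table_name):
        cur = conn.cursor()
        cur.execute("SELECT version()")
        # Typical result: "PostgreSQL 7.2.1 on i686-pc-linux-gnu, ..."
        version = tuple(map(int,
            re.search(r"PostgreSQL (\d+)\.(\d+)", cur.fetchone()[0]).groups()))
        if version >= (7, 3):
            # 7.3+ understands CASCADE and drops dependent objects itself.
            cur.execute("DROP TABLE %s CASCADE" % table_name)
        else:
            # Pre-7.3: no CASCADE; drop the table and its id sequence by hand.
            cur.execute("DROP TABLE %s" % table_name)
            cur.execute("DROP SEQUENCE %s_id_seq" % table_name)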
From: Nils D. <gma...@xo...> - 2004-01-20 18:30:23
Hi,

SQLObject 0.5.1 has an error when the connection uses "cache": None.
Calling destroySelf() on an object triggers this exception:

  File "/home/nde/.www/pdms/Pages.py", line 542, in handleForm
    self.obj.destroySelf()
  File "/home/nde/dev/pdms/libs/SQLObject/SQLObject.py", line 1014, in destroySelf
    self._connection.cache.expire(self.id, self.__class__)
  File "/home/nde/dev/pdms/libs/SQLObject/Cache.py", line 186, in expire
    self.caches[cls.__name__].expire(id)
  File "/home/nde/dev/pdms/libs/SQLObject/Cache.py", line 138, in expire
    if self.cache.has_key(id):
  AttributeError: 'CacheFactory' object has no attribute 'cache'

This small patch fixes it for me:

--- Cache.py.old        Tue Jan 20 18:31:51 2004
+++ Cache.py    Tue Jan 20 18:31:57 2004
@@ -40,8 +40,7 @@
         self.cullFraction = cullFraction
         self.doCache = cache

-        if self.doCache:
-            self.cache = {}
+        self.cache = {}
         self.expiredCache = {}
         self.lock = threading.Lock()

Regards
Nils Decker
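For readers following along, here is a tiny self-contained model of what the
patch changes. The real CacheFactory has more to it (culling, expiry policy);
this sketch only mirrors the attributes visible in the traceback and patch,
to show why creating the dict unconditionally makes expire() safe when
caching is disabled:

    import threading

    class MiniCacheFactory(object):
        """Toy stand-in for SQLObject's CacheFactory (illustration only)."""

        def __init__(self, cache=True):
            self.doCache = cache
            # The fix: always create the dict, even when caching is off,
            # so expire() always has something to look in.
            self.cache = {}
            self.expiredCache = {}
            self.lock = threading.Lock()

        def expire(self, id):
            self.lock.acquire()
            try:
                if id in self.cache:
                    self.expiredCache[id] = self.cache.pop(id)
            finally:
                self.lock.release()

    factory = MiniCacheFactory(cache=None)  # caching disabled, as in the report
    factory.expire(42)                      # no AttributeError once the dict exists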
From: Ian B. <ia...@co...> - 2004-01-20 17:25:43
Sidnei da Silva wrote: > On Tue, Jan 20, 2004 at 03:37:50AM +0000, Ian Bicking wrote: > <snip> > | from SQLObject import EditingContext > | ec = EditingContext() > | # every editing context automatically picks up all the SQLObject > | # classes, all magic like. > > I would like to see two things here: > > + Being able to pass a list of class names to be picked > + Being able to pass a class and pick the class plus its dependencies > (eg: ForeignKey's) > > There's no much reason for picking all classes if you are only to use > one or two. The classes would be picked up lazily, so it shouldn't be a problem. There would be a central repository of SQLObject classes (probably just stored in a global variable in the SQLObject module). Oh, and maybe I should rename the SQLObject module, to avoid SQLObject/SQLObject/SQLObject.py... > | person = ec.Person.get(1) # by ID > | ec2 = EditingContext() # separate transaction > | person2 = ec.Person.get(1) > | assert person is not person2 > | assert person.id == person2.id > | assert person.fname == 'Guy' > | person.fname = 'Gerald' > | assert person2.fname == 'Guy' > | ec.commit() # SQL is not sent to server > > When SQL is sent to server? Uh... I don't know what I was thinking. I think it should be sent then. Maybe I mistyped "not". > | assert person2.fname == 'Guy' # Doesn't see changes > | person2.fname = 'Norm' > | # raises exception if locking is turned on; overwrites if locking > | # is not turned on. (Locking enabled on a per-class level) > | > | I'm not at all sure about that example. Mostly the confusing parts > | relate to locking and when the database lookup occurs (and how late a > | conflict exception may be raised). > > The example is ok, I'm also curious about how locking would be handled. That's a little fuzzy in my mind. It's pretty important with transactions, though, so there has to be something. > | Somewhere in here, process-level transactions might fit in. That is, > | even on a backend that doesn't support transactions, we can still > | delay SQL statements until a commit/rollback is performed. In turn, > | we can create temporary "memory" objects, which is any object which > | hasn't been committed to the database in any way. To do this we'll > | need sequences -- to preallocate IDs -- which MySQL and SQLite don't > | really provide :( > | > | Nested transactions...? Maybe they'd fall out of this fairly easily, > | especially if we define a global context, with global caches etc., > | then further levels of context will come for free. > | > | We still need to think about an auto-commit mode. Maybe the global > | context would be auto-commit. > > As long as there's a way to disable auto-commit if you don't want it, > it seems ok. Nested transactions would be particularly useful for me. Certainly; if auto-commit was always on, transactions wouldn't be very interesting. > There's one cache algorithm that was tried on ZODB and I think gave > one of the best hit rates. If I still recall it was called 'Thor' or > something similar. It takes the object size and number of hits into > account to keep the memory footprint to a minimal while still doing a > good job at caching both big and small objects. I would imagine ZODB caching would have very different requirements than an ORM in this case -- in part because there's more small objects in a ZODB. > | 5. Batching of updates (whether updates should immediately go to the > | database, or whether it would be batched until a commit or other > | signal). > | 6. Natural expiring of objects. 
Even if an object must persist > | because there are still references, we could expire it so that > | future accesses re-query the database. To avoid stale data. > > Maybe replacing the object in the cache by a proxy? :) You can't replace objects -- objects that must persist are being referenced, and we can't track down those references. All SQLObject instances would have to be potential proxies, and would have to be able to shift from proxy to normal. > | Columns as Descriptors > | ---------------------- > | > | Each column will become a descriptor. That is, ``Col`` and subclasses > | will return an object with ``__get__`` and ``__set__`` methods. The > | metaclass will not itself generate methods. > | > | A metaclass will still be used so that the descriptor can be tied to > | its name, e.g., that with ``fname = StringCol()``, the resultant > | descriptor will know that it is bound to ``fname``. > | > | By using descriptors, introspection should become a bit easier -- or > | at least more uniform with respect to other new-style classes. > | Various class-wide indexes of columns will still be necessary, but > | these should be able to remain mostly private. > | > | To customize getters or setters (which you currently do by defining a > | ``_get_columnName`` or ``_set_columnName`` method), you will pass > | arguments to the ``Col`` object, like:: > | > | def _get_name(self, dbGetter): > | return dbGetter().strip() > | > | name = StringCol(getter=_get_name) > | > | This gets rid of ``_SO_get_columnName`` as well. We can > | transitionally add something to the metaclass to signal an error if a > | spurious ``_get_columnName`` method is sitting around. > > Yay! You dont know how much I've been missing this one. :) Columns as descriptors in general, or passing in dbGetter/dbSetter? > | Extra Table Information > | ----------------------- > | > | People have increasingly used SQLObject to create tables, and while it > | can make a significant number of schemas, there are several extensions > | of table generation that people occasionally want. Since these occur > | later in development, it would be convenient if SQLObject could grow > | as the complexity of the programs using it grow. Some of these > | extensions are: > | > | * Table name (``_table``). > | * Table type for MySQL (e.g., MyISAM vs. InnoDB). > | * Multi-column unique constraints. (Other constraints?) > | * Indexes. (Function or multi-column indexes?) > | * Primary key type. (Primary key generation?) > | * Primary key sequence names (for Postgres, Firebird, Oracle, etc). > | * Multi-column primary keys. > | * Naming scheme. > | * Permissions. > | * Locking (e.g., optimistic locking). > | * Inheritance (see Daniel Savard's recent patch). > | * Anything else? > > * Enforcing constraints in python. Brad B. was chatting to me on irc > yesterday and we came to agree on a api. He's writing a proposal (with > a patch) and its going to present it soon. Basically, when you create > a column you would provide a callable object as a keyword 'constraint' > parameter. This constraint would then be used to enforce some > restrictions. 
> > def foo_constraint(obj, name, value, values=None): > # name is the column name > # value is the value to be set for this column > # values is a dict of the values to be set for other columns > # in the case you are creating an object or modifying more than > # one column at a time > # returns True or False > > age = IntCol(constraint=foo_constraint) > > class Col: > > def __init__(self, name, constraint): > self.name = name > self.constraint = constraint > > def __set__(self, obj, value): > if not self.constraint(obj, name, value): > raise ValueError, value > # Set the value otherwise We already have Python constraints available through the validator/converter interface, which I hope to fill out some more, and provide some more documentation and examples. > | Some of these may be globally defined, or defined for an entire > | database. For example, typically you'll want to use a common MySQL > | table type for your entire database, even though its defined on a > | per-table basis. And while MySQL allows global permission > | declarations, Postgres does not and requires tedious repetitions of > | the permissions for each table -- so while it's applied on a per-table > | basis, it's likely that (at least to some degree) a per-database > | declaration is called for. Naming schemes are also usually > | database-wide. > | > | As these accumulate -- and by partitioning this list differently, the > | list could be even longer -- it's messy to do these all as special > | class variables (``_idName``, etc). It also makes the class logic and > | its database implementation details difficult to distinguish. Some > | of these can be handled elegantly like ``id = StringCol()`` or ``id > | = ("fname", "lname")``. But the others perhaps should be put into a > | single instance variable, perhaps itself a class:: > | > | class Address(SQLObject): > | class SQLMeta: > | mysqlType = 'InnoDB' > | naming = Underscore > | permission = {'bob': ['select', 'insert'], > | 'joe': ['select', 'insert', 'update'], > | 'public': ['select']} > | street = StringCol() > | .... > | > | The metadata is found by its name (``SQLMeta``), and is simply a > | container. The class syntax is easier to write and read than a > | dictionary-like syntax. Or, it could be a proper class/instance and > | provide a partitioned way to handle introspection. E.g., > | ``Address.SQLMeta.permission.get('bob')`` or > | ``Address.SQLMeta.columns``. In this case values that weren't > | overridden would be calculated from defaults (like the default naming > | scheme and so on). > > +1 on it being an instance and providing introspection. -1 on it being > a class. Mostly the class is easy to type. I find things like: class Address(SQLObject): sqlmeta = SQLMeta( blah blah blah) To be a little ugly, though not so bad I suppose. Better than: class Address(SQLObject): sqlmeta = { 'blah': blah, ...} FormEncode uses a funny technique, whereby these are equivalent: class B(A): v1 = 10 v2 = 20 B = A(v1=10, v2=20) I.e., calling a class actually creates a subclass. This may be suspect, but it does work, and provides a certain uniformity. I'm not sure if it's really necessary here, though. There's also a hack so that SQLMeta could subclass some special class, and by subclassing it would create an instance of an anonymous class. This is a little too clever for my taste, though. Ian |
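The FormEncode trick Ian mentions -- calling a class and getting a subclass
back rather than an instance -- can be reproduced with a small metaclass.
This is a from-scratch sketch of the idea, not FormEncode's actual code:

    class SubclassOnCall(type):
        """Calling a class built with this metaclass returns a *subclass*
        carrying the keyword arguments as class attributes, so
        ``B = A(v1=10, v2=20)`` behaves like ``class B(A): v1 = 10; v2 = 20``."""

        def __call__(cls, **attrs):
            return type(cls)(cls.__name__, (cls,), attrs)

    # Create the base class the version-agnostic way (equivalent to declaring
    # the metaclass in a class statement).
    A = SubclassOnCall('A', (object,), {'v1': 1, 'v2': 2})

    B = A(v1=10, v2=20)        # creates a subclass, not an instance
    assert issubclass(B, A)
    assert B.v1 == 10 and B.v2 == 20 and A.v1 == 1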
From: Ian B. <ia...@co...> - 2004-01-20 16:57:03
Daniel Savard wrote:
> Here is a patch that add simple inheritance to SQLObject 0.8.1.
> If you try it, please send me some comments (to the list or my
> email). Thanks.

Thanks for contributing this. It looks like it required less
modifications than I expected.

One thing I wondered about: will this work with the addColumn class
method? It doesn't seem like it -- the inheritance structure seems to be
constructed entirely at the time of class instantiation. To do this,
superclasses would need to know about all their subclasses, so they
could add the column to subclasses as well.

> A new class attribute '_inheritable' is added. When this new
> attribute is set to 1, the class is marked 'inheritable' and two colomns
> will automatically be added: childID (INT) and childName (TEXT). When a
> class inherits from a class that is marked inheritable, a new colomn
> (ForeignKey) will automatically be added: parent.

Is childID needed? It seems like the child should share the parent's
primary key.

> The column parent is a foreign key that point to the parent class.
> It works as all SQLObject's foreign keys. There will also be a parentID
> attribute to retreive the ID of the parent class.
>
> The columns childID and childName will respectivly contain the id
> and the name of the child class (for exemple, 1 and 'Employee'. This
> will permit to call a new function: getSubClass() that will return a
> child class if possible.

These seem weird to me, but I think that's the disconnect between RDBMS
inheritance (which is really just another kind of relationship), and OO
inheritance. I'd rather see Person(1) return an Employee object, instead
of having to use getSubClass(). Or, call getSubClass() simply "child",
so that someEmployee.child.parent == someEmployee. But at that point, it
really doesn't look like inheritance.

Rather we have a polymorphic one-to-one (or one-to-zero) relation
between Person and Employee, and potentially other tables (exclusive
with the Employee relation). The functional difference is that the join
is in some ways implicit, in that Employee automatically brings in all
of Person's columns. At which point Employee looks kind of like a view.
Could this all be implemented with something like:

    class Person(SQLObject):
        child = PolyForeignKey()
        # Which creates a column for the table name, and maybe one for the
        # foreign ID as well.

    class Employee(SQLObject):
        parent = ColumnJoin('Person')
        # ColumnJoin -- maybe with another name -- brings in all the columns
        # of the joined class.

This seems functionally equivalent to this inheritance, but phrased as
relationships. And, phrased as a relationship, it's more flexible. For
instance, you could have multiple ColumnJoins in a class, without it
being very confusing, and multiple PolyForeignKeys, for different
classes of objects.

Ian
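To make the childID/childName mechanism from the patch concrete, here is a
rough, self-contained sketch of resolving such a polymorphic pointer through
a class registry. The registry and function signatures are illustrative;
only the childName and childID column names come from the patch description:

    _class_registry = {}          # maps childName strings to classes

    def register(cls):
        _class_registry[cls.__name__] = cls
        return cls

    def get_sub_class(parent_row, fetch):
        """Resolve the child row named by the parent's childName/childID
        columns.  ``fetch(cls, id)`` stands in for however a row is loaded
        (``cls(id)`` in 0.5-era SQLObject, ``cls.get(id)`` in the proposal)."""
        if not getattr(parent_row, 'childName', None):
            return parent_row          # no child: the row is just a Person
        child_class = _class_registry[parent_row.childName]
        return fetch(child_class, parent_row.childID)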
From: Sidnei da S. <si...@aw...> - 2004-01-20 16:39:34
On Tue, Jan 20, 2004 at 03:37:50AM +0000, Ian Bicking wrote: <snip> | from SQLObject import EditingContext | ec = EditingContext() | # every editing context automatically picks up all the SQLObject | # classes, all magic like. I would like to see two things here: + Being able to pass a list of class names to be picked + Being able to pass a class and pick the class plus its dependencies (eg: ForeignKey's) There's no much reason for picking all classes if you are only to use one or two. | person = ec.Person.get(1) # by ID | ec2 = EditingContext() # separate transaction | person2 = ec.Person.get(1) | assert person is not person2 | assert person.id == person2.id | assert person.fname == 'Guy' | person.fname = 'Gerald' | assert person2.fname == 'Guy' | ec.commit() # SQL is not sent to server When SQL is sent to server? | assert person2.fname == 'Guy' # Doesn't see changes | person2.fname = 'Norm' | # raises exception if locking is turned on; overwrites if locking | # is not turned on. (Locking enabled on a per-class level) | | I'm not at all sure about that example. Mostly the confusing parts | relate to locking and when the database lookup occurs (and how late a | conflict exception may be raised). The example is ok, I'm also curious about how locking would be handled. | Somewhere in here, process-level transactions might fit in. That is, | even on a backend that doesn't support transactions, we can still | delay SQL statements until a commit/rollback is performed. In turn, | we can create temporary "memory" objects, which is any object which | hasn't been committed to the database in any way. To do this we'll | need sequences -- to preallocate IDs -- which MySQL and SQLite don't | really provide :( | | Nested transactions...? Maybe they'd fall out of this fairly easily, | especially if we define a global context, with global caches etc., | then further levels of context will come for free. | | We still need to think about an auto-commit mode. Maybe the global | context would be auto-commit. As long as there's a way to disable auto-commit if you don't want it, it seems ok. Nested transactions would be particularly useful for me. | Caching | ------- | | Really doing transactions right means making caching significantly | more complex. If the cache is purely transaction-specific, then we'll | really be limiting the effectiveness of the cache. With that in mind, | a copy-on-write style of object is really called for -- when you fetch | an object in a transaction, you can use the globally cached instance | until you write to the object. | | Really this isn't copy-on-write, it's more like a proxy object. Until | the object is changed, it can delegate all its columns to its global | object for which it is a proxy. Of course, traversal via foreign keys | or joins must also return proxied objects. As the object is changed | -- perhaps on a column-by-column basis, or as a whole on the first | change -- the object takes on the personality of a full SQLObject | instance. I like the proxy idea a lot. +1 on it. | When the transaction is committed, this transactional object copies | itself to the global object, and becomes a full proxy. These | transactional caches themselves should be pooled -- so that when | another transaction comes along you have a potentially useful set of | proxy objects already created for you. This is a common use case for | web applications, which have lots of short transactions, which are | often very repetitive. | | In addition to this, there should be more cache control. 
This means | explicit ways to control things like: | | 1. Caching of instances: | + Application/process-global definition. | + Database-level definition. | + Transaction/EditingContext-level definition. | + Class-level definition. | 2. Caching of columns: | + Class-level. | 3. Cache sweep frequency: | + Application/process-global. | + Database-level. | + Class-level. | + Doesn't need to be as complete as 1; maybe on the class level you | could only indicate that a certain class should not be sweeped. | + Sweep during a fetch (e.g., every 100 fetches), by time or fetch | frequency, or sweep with an explicit call (e.g., to do sweeps in | a separate thread). | 4. Cache sweep policy: | + Maximum age. | + Least-recently-used (actually, least-recently-fetched). | + Random (the current policy). | + Multi-level (randomly move objects to a lower-priority cache, | raise level when the object is fetched again). | + Target cache size (keep trimming until the cache is small | enough). | + Simple policy (if enough objects qualify, cache can be of any | size). | + Percentage culling (e.g., kill 33% of objects for each sweep; | this is the current policy). There's one cache algorithm that was tried on ZODB and I think gave one of the best hit rates. If I still recall it was called 'Thor' or something similar. It takes the object size and number of hits into account to keep the memory footprint to a minimal while still doing a good job at caching both big and small objects. | 5. Batching of updates (whether updates should immediately go to the | database, or whether it would be batched until a commit or other | signal). | 6. Natural expiring of objects. Even if an object must persist | because there are still references, we could expire it so that | future accesses re-query the database. To avoid stale data. Maybe replacing the object in the cache by a proxy? :) | Expose some methods of the cache, like getting all objects currently | in memory. These would probably be exposed on a class level, e.g., | all the Addresses currently in memory via | ``Address.cache.current()`` or something. What about when there's a | cached instance in the parent context, but not in the present | transaction? I think the cached instance in the parent context should show up together with objects cached in the transaction. | Columns as Descriptors | ---------------------- | | Each column will become a descriptor. That is, ``Col`` and subclasses | will return an object with ``__get__`` and ``__set__`` methods. The | metaclass will not itself generate methods. | | A metaclass will still be used so that the descriptor can be tied to | its name, e.g., that with ``fname = StringCol()``, the resultant | descriptor will know that it is bound to ``fname``. | | By using descriptors, introspection should become a bit easier -- or | at least more uniform with respect to other new-style classes. | Various class-wide indexes of columns will still be necessary, but | these should be able to remain mostly private. | | To customize getters or setters (which you currently do by defining a | ``_get_columnName`` or ``_set_columnName`` method), you will pass | arguments to the ``Col`` object, like:: | | def _get_name(self, dbGetter): | return dbGetter().strip() | | name = StringCol(getter=_get_name) | | This gets rid of ``_SO_get_columnName`` as well. We can | transitionally add something to the metaclass to signal an error if a | spurious ``_get_columnName`` method is sitting around. Yay! You dont know how much I've been missing this one. 
:) | Construction and Fetching | ------------------------- | | Currently you fetch an object with class instantiation, e.g., | ``Address(1)``. This may or may not create a new instance, and does | not create a table row. If you want to create a table row, you do | something like ``Address.new(city='New York', ...)``. This is | somewhat in contrast to normal Python, where class instantiation | (calling a class) will create a new object, while objects are fetched | otherwise (with no particular standard interface). | | To make SQLObject classes more normal in this case, ``new`` will | become ``__init__`` (more or less), and classes will have a ``get`` | method that gets an already-existant row. E.g., ``Address.get(1)`` | vs. ``Address(city='New York', ...)``. This is perhaps the most | significant change in SQLObject usage. Because of the different | signatures, if you forget to make a change someplace you will get an | immediate exception, so updating code should not be too hard. +1 all the way. Because of this, I had to special-case object creation for SQLObject in my zope3-based app. /me sees lots of crufty code going away and feels happy | Extra Table Information | ----------------------- | | People have increasingly used SQLObject to create tables, and while it | can make a significant number of schemas, there are several extensions | of table generation that people occasionally want. Since these occur | later in development, it would be convenient if SQLObject could grow | as the complexity of the programs using it grow. Some of these | extensions are: | | * Table name (``_table``). | * Table type for MySQL (e.g., MyISAM vs. InnoDB). | * Multi-column unique constraints. (Other constraints?) | * Indexes. (Function or multi-column indexes?) | * Primary key type. (Primary key generation?) | * Primary key sequence names (for Postgres, Firebird, Oracle, etc). | * Multi-column primary keys. | * Naming scheme. | * Permissions. | * Locking (e.g., optimistic locking). | * Inheritance (see Daniel Savard's recent patch). | * Anything else? * Enforcing constraints in python. Brad B. was chatting to me on irc yesterday and we came to agree on a api. He's writing a proposal (with a patch) and its going to present it soon. Basically, when you create a column you would provide a callable object as a keyword 'constraint' parameter. This constraint would then be used to enforce some restrictions. def foo_constraint(obj, name, value, values=None): # name is the column name # value is the value to be set for this column # values is a dict of the values to be set for other columns # in the case you are creating an object or modifying more than # one column at a time # returns True or False age = IntCol(constraint=foo_constraint) class Col: def __init__(self, name, constraint): self.name = name self.constraint = constraint def __set__(self, obj, value): if not self.constraint(obj, name, value): raise ValueError, value # Set the value otherwise | Some of these may be globally defined, or defined for an entire | database. For example, typically you'll want to use a common MySQL | table type for your entire database, even though its defined on a | per-table basis. And while MySQL allows global permission | declarations, Postgres does not and requires tedious repetitions of | the permissions for each table -- so while it's applied on a per-table | basis, it's likely that (at least to some degree) a per-database | declaration is called for. Naming schemes are also usually | database-wide. 
| | As these accumulate -- and by partitioning this list differently, the | list could be even longer -- it's messy to do these all as special | class variables (``_idName``, etc). It also makes the class logic and | its database implementation details difficult to distinguish. Some | of these can be handled elegantly like ``id = StringCol()`` or ``id | = ("fname", "lname")``. But the others perhaps should be put into a | single instance variable, perhaps itself a class:: | | class Address(SQLObject): | class SQLMeta: | mysqlType = 'InnoDB' | naming = Underscore | permission = {'bob': ['select', 'insert'], | 'joe': ['select', 'insert', 'update'], | 'public': ['select']} | street = StringCol() | .... | | The metadata is found by its name (``SQLMeta``), and is simply a | container. The class syntax is easier to write and read than a | dictionary-like syntax. Or, it could be a proper class/instance and | provide a partitioned way to handle introspection. E.g., | ``Address.SQLMeta.permission.get('bob')`` or | ``Address.SQLMeta.columns``. In this case values that weren't | overridden would be calculated from defaults (like the default naming | scheme and so on). +1 on it being an instance and providing introspection. -1 on it being a class. | I'm not at all certain about how this should look, or if there are | other things that should go into the class-meta-data object. I can't think of something missing for now. | Joins, Foreign Keys | ------------------- | | First, the poorly-named ``MultipleJoin`` and ``RelatedJoin`` (which | are rather ambiguous) will be renamed ``ManyToOneJoin`` and | ``ManyToManyJoin``. ``OneToOneJoin`` will also be added, while | ``ForeignKey`` remains the related column type. (Many2Many? | Many2many? many2many?) | | ForeignKey will be driven by a special validator/converter. (But will | this make ID access more difficult?) | | Joins will return smart objects which can be iterated across. These | smart objects will be related to ``SelectResults``, and allow the | same features like ordering. In both cases, an option to retrieve | IDs instead of objects will be allowed. | | These smarter objects will allow, in the case of ManyToManyJoin, | ``Set`` like operations to relate (or unrelate) objects. For | ManyToOneJoin the list/set operations are not really appropriate, | because they would reassign the relation, not just add or remove | relations. | | It would be nice to make the Join protocol more explicit and public, | so other kinds of joins (e.g., three-way) could be more accessible. Sounds pretty good. I haven't used much Joins, only ForeignKeys though. -- Sidnei da Silva <si...@aw...> http://awkly.org - dreamcatching :: making your dreams come true http://plone.org/about/team#dreamcatcher Machines that have broken down will work perfectly when the repairman arrives. |
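Several of the sweep policies listed in the quoted proposal above (maximum
age, least-recently-fetched, target cache size) are easy to picture with a
small sketch. Nothing below is SQLObject code; it is a from-scratch
illustration of combining an age cutoff with least-recently-fetched trimming
to a target size:

    import time

    class SweepableCache(object):
        def __init__(self, max_age=600, target_size=1000):
            self.max_age = max_age          # seconds before an entry expires
            self.target_size = target_size  # trim down to this many entries
            self._data = {}                 # id -> (object, last_fetch_time)

        def get(self, id):
            obj, _ = self._data[id]
            self._data[id] = (obj, time.time())   # record the fetch
            return obj

        def put(self, id, obj):
            self._data[id] = (obj, time.time())

        def sweep(self):
            now = time.time()
            # 1. Maximum age: drop anything not fetched recently enough.
            for id, (obj, last) in list(self._data.items()):
                if now - last > self.max_age:
                    del self._data[id]
            # 2. Target size: evict least-recently-fetched entries until
            #    the cache is small enough.
            if len(self._data) > self.target_size:
                by_age = sorted(self._data.items(), key=lambda item: item[1][1])
                for id, _ in by_age[:len(self._data) - self.target_size]:
                    del self._data[id]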
From: Chris A. <ch...@at...> - 2004-01-20 14:05:44
Attached is the latest patch of my modifications to DBConnection.py to
add support for:

- getSupportedDrivers()
- connectionByURI()
- DBAPI.isSupported()
- DBAPI.getURI()

I messed around with using urlparse for the URI parsing... It is
possible, but you need to add all the different db schemas to lists
internal to urlparse, for example:

    urlparse.uses_relative.extend(getSupportedDrivers())
    urlparse.uses_netloc.extend(getSupportedDrivers())
    urlparse.uses_query.extend(getSupportedDrivers())

I was hesitant to do this because it would possibly change the behaviour
of urlparse once SQLObject was imported. So for now the URI parsing is
done by my ugly re based code.

Cheers,
Chris
--
Chris AtLee <ch...@at...>
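The attached patch itself is not reproduced on this page, but a standalone
re-based parser along the lines Chris describes might look like the sketch
below. The URI shape (scheme://user:password@host/database) and the function
name are assumptions for illustration, not the patch's actual code:

    import re

    _uri_re = re.compile(
        r"^(?P<scheme>\w+)://"
        r"(?:(?P<user>[^:@/]*)(?::(?P<password>[^@/]*))?@)?"
        r"(?P<host>[^/]*)"
        r"(?:/(?P<database>.*))?$")

    def parse_uri(uri):
        """Split a connection URI into its parts without touching urlparse's
        module-level scheme lists."""
        match = _uri_re.match(uri)
        if match is None:
            raise ValueError("not a connection URI: %r" % (uri,))
        return match.groupdict()

    # parse_uri("postgres://bob:secret@localhost/sqlo_test")
    # -> {'scheme': 'postgres', 'user': 'bob', 'password': 'secret',
    #     'host': 'localhost', 'database': 'sqlo_test'}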
From: Ian B. <ia...@co...> - 2004-01-20 03:38:21
SQLObject 0.6 ============= *A tentative plan, 20 Jan 2004* Introduction ------------ During vacation I thought about some changes that I might like to make to SQLObject. Several of these change the API, but not too drastically, and I think they change the API for the better. And we'd not at 1.0 yet, changes are still allowed! Here's my ideas... Editing Context --------------- Taken from Modeling, the "editing context" is essentially a transaction, though it also encompasses some other features. Typically it is used to distinguish between separate contexts in a multi-threaded program. This is intended to separate several distinct concepts: * The database backend (MySQL, PostgreSQL, etc), coupled with the driver (MySQLdb, psycopg, etc). (Should the driver be part of the connection parameters?) * The connection parameters. Typically these are the server host, username, and password, but they could also be a filename or other path. Perhaps this could be represented with a URI, ala PEAK, but I also dislike taking structured data and destructuring it (i.e., packing it into a string). OTOH, URLs are structured, even if they require some parsing. Serialization of URLs is free and highly transparent. Python syntax is well structured and *programmatically* considerably more transparent (in a robust fashion), but also programmatically fairly read-only (because it is embedded in the structure of Python source code). We can also have both. * The database transactional context. * The application transactional context (preferably these two would be seemless, but they still represent somewhat distinct entities, and a portability layer might be nice). The application's transactional context may include other transactions -- e.g., multiple databases, a ZODB transaction, etc. * The cache policy. There are many different kinds of caches potentially involved, include write batching, and per-object and per-table caches, connection pooling, and so on. * Classes, which on the database side are typically tables. (This proposal does not attempt to de-couple classes and tables) Example:: from SQLObject import EditingContext ec = EditingContext() # every editing context automatically picks up all the SQLObject # classes, all magic like. person = ec.Person.get(1) # by ID ec2 = EditingContext() # separate transaction person2 = ec.Person.get(1) assert person is not person2 assert person.id == person2.id assert person.fname == 'Guy' person.fname = 'Gerald' assert person2.fname == 'Guy' ec.commit() # SQL is not sent to server assert person2.fname == 'Guy' # Doesn't see changes person2.fname = 'Norm' # raises exception if locking is turned on; overwrites if locking # is not turned on. (Locking enabled on a per-class level) I'm not at all sure about that example. Mostly the confusing parts relate to locking and when the database lookup occurs (and how late a conflict exception may be raised). Somewhere in here, process-level transactions might fit in. That is, even on a backend that doesn't support transactions, we can still delay SQL statements until a commit/rollback is performed. In turn, we can create temporary "memory" objects, which is any object which hasn't been committed to the database in any way. To do this we'll need sequences -- to preallocate IDs -- which MySQL and SQLite don't really provide :( Nested transactions...? Maybe they'd fall out of this fairly easily, especially if we define a global context, with global caches etc., then further levels of context will come for free. 
Caching
-------

Really doing transactions right means making caching significantly more
complex.  If the cache is purely transaction-specific, then we'll really
be limiting the effectiveness of the cache.

With that in mind, a copy-on-write style of object is really called for
-- when you fetch an object in a transaction, you can use the globally
cached instance until you write to the object.  Really this isn't
copy-on-write, it's more like a proxy object.  Until the object is
changed, it can delegate all its columns to its global object for which
it is a proxy.  Of course, traversal via foreign keys or joins must also
return proxied objects.

As the object is changed -- perhaps on a column-by-column basis, or as a
whole on the first change -- the object takes on the personality of a
full SQLObject instance.  When the transaction is committed, this
transactional object copies itself to the global object, and becomes a
full proxy.

These transactional caches themselves should be pooled -- so that when
another transaction comes along you have a potentially useful set of
proxy objects already created for you.  This is a common use case for
web applications, which have lots of short transactions, which are often
very repetitive.

In addition to this, there should be more cache control.  This means
explicit ways to control things like:

1. Caching of instances:

   + Application/process-global definition.
   + Database-level definition.
   + Transaction/EditingContext-level definition.
   + Class-level definition.

2. Caching of columns:

   + Class-level.

3. Cache sweep frequency:

   + Application/process-global.
   + Database-level.
   + Class-level.
   + Doesn't need to be as complete as 1; maybe on the class level you
     could only indicate that a certain class should not be swept.
   + Sweep during a fetch (e.g., every 100 fetches), by time or fetch
     frequency, or sweep with an explicit call (e.g., to do sweeps in a
     separate thread).

4. Cache sweep policy:

   + Maximum age.
   + Least-recently-used (actually, least-recently-fetched).
   + Random (the current policy).
   + Multi-level (randomly move objects to a lower-priority cache,
     raise level when the object is fetched again).
   + Target cache size (keep trimming until the cache is small enough).
   + Simple policy (if enough objects qualify, cache can be of any size).
   + Percentage culling (e.g., kill 33% of objects for each sweep; this
     is the current policy).

5. Batching of updates (whether updates should immediately go to the
   database, or whether they would be batched until a commit or other
   signal).

6. Natural expiring of objects.  Even if an object must persist because
   there are still references, we could expire it so that future
   accesses re-query the database, to avoid stale data.

Expose some methods of the cache, like getting all objects currently in
memory.  These would probably be exposed on a class level, e.g., all the
Addresses currently in memory via ``Address.cache.current()`` or
something.

What about when there's a cached instance in the parent context, but not
in the present transaction?
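A minimal sketch of the copy-on-write proxy described above -- column
handling is grossly simplified, and none of these names are real
SQLObject API::

    class CachedProxy(object):
        # Delegates reads to the globally cached instance until the
        # first write, then keeps local copies of the changed columns.
        def __init__(self, globalObject):
            self.__dict__['_global'] = globalObject
            self.__dict__['_local'] = {}          # column name -> pending value

        def __getattr__(self, name):
            local = self.__dict__['_local']
            if name in local:
                return local[name]                # changed in this transaction
            return getattr(self.__dict__['_global'], name)

        def __setattr__(self, name, value):
            self.__dict__['_local'][name] = value # copy-on-write

        def commit(self):
            # push the local changes onto the shared instance and go
            # back to being a plain proxy
            for name, value in self.__dict__['_local'].items():
                setattr(self.__dict__['_global'], name, value)
            self.__dict__['_local'].clear()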
Columns as Descriptors
----------------------

Each column will become a descriptor.  That is, ``Col`` and subclasses
will return an object with ``__get__`` and ``__set__`` methods.  The
metaclass will not itself generate methods.  A metaclass will still be
used so that the descriptor can be tied to its name, e.g., so that with
``fname = StringCol()`` the resultant descriptor will know that it is
bound to ``fname``.

By using descriptors, introspection should become a bit easier -- or at
least more uniform with respect to other new-style classes.  Various
class-wide indexes of columns will still be necessary, but these should
be able to remain mostly private.

To customize getters or setters (which you currently do by defining a
``_get_columnName`` or ``_set_columnName`` method), you will pass
arguments to the ``Col`` object, like::

    def _get_name(self, dbGetter):
        return dbGetter().strip()

    name = StringCol(getter=_get_name)

This gets rid of ``_SO_get_columnName`` as well.  We can transitionally
add something to the metaclass to signal an error if a spurious
``_get_columnName`` method is sitting around.
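The mechanics would be roughly the following -- a bare-bones sketch,
not the real ``Col``; ``_SO_getValue``/``_SO_setValue`` stand in for
whatever the raw database accessors end up being called::

    class Col(object):
        def __init__(self, getter=None):
            self.name = None                 # filled in by the metaclass below
            self.getter = getter

        def __get__(self, obj, cls=None):
            if obj is None:
                return self                  # accessed on the class itself
            dbGetter = lambda: obj._SO_getValue(self.name)
            if self.getter:
                return self.getter(obj, dbGetter)
            return dbGetter()

        def __set__(self, obj, value):
            obj._SO_setValue(self.name, value)

    class MetaSQLObject(type):
        def __new__(meta, className, bases, d):
            for attrName, value in d.items():
                if isinstance(value, Col):
                    value.name = attrName    # tie the descriptor to its column name
            return type.__new__(meta, className, bases, d)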
Construction and Fetching
-------------------------

Currently you fetch an object with class instantiation, e.g.,
``Address(1)``.  This may or may not create a new instance, and does not
create a table row.  If you want to create a table row, you do something
like ``Address.new(city='New York', ...)``.  This is somewhat in
contrast to normal Python, where class instantiation (calling a class)
will create a new object, while objects are fetched otherwise (with no
particular standard interface).

To make SQLObject classes more normal in this case, ``new`` will become
``__init__`` (more or less), and classes will have a ``get`` method that
gets an already-existent row.  E.g., ``Address.get(1)`` vs.
``Address(city='New York', ...)``.  This is perhaps the most significant
change in SQLObject usage.  Because of the different signatures, if you
forget to make a change someplace you will get an immediate exception,
so updating code should not be too hard.

Extra Table Information
-----------------------

People have increasingly used SQLObject to create tables, and while it
can make a significant number of schemas, there are several extensions
of table generation that people occasionally want.  Since these occur
later in development, it would be convenient if SQLObject could grow as
the complexity of the programs using it grows.  Some of these extensions
are:

* Table name (``_table``).
* Table type for MySQL (e.g., MyISAM vs. InnoDB).
* Multi-column unique constraints.  (Other constraints?)
* Indexes.  (Function or multi-column indexes?)
* Primary key type.  (Primary key generation?)
* Primary key sequence names (for Postgres, Firebird, Oracle, etc).
* Multi-column primary keys.
* Naming scheme.
* Permissions.
* Locking (e.g., optimistic locking).
* Inheritance (see Daniel Savard's recent patch).
* Anything else?

Some of these may be globally defined, or defined for an entire
database.  For example, typically you'll want to use a common MySQL
table type for your entire database, even though it's defined on a
per-table basis.  And while MySQL allows global permission declarations,
Postgres does not and requires tedious repetitions of the permissions
for each table -- so while it's applied on a per-table basis, it's
likely that (at least to some degree) a per-database declaration is
called for.  Naming schemes are also usually database-wide.

As these accumulate -- and by partitioning this list differently, the
list could be even longer -- it's messy to do these all as special class
variables (``_idName``, etc).  It also makes the class logic and its
database implementation details difficult to distinguish.  Some of these
can be handled elegantly like ``id = StringCol()`` or
``id = ("fname", "lname")``.

But the others perhaps should be put into a single instance variable,
perhaps itself a class::

    class Address(SQLObject):

        class SQLMeta:
            mysqlType = 'InnoDB'
            naming = Underscore
            permission = {'bob': ['select', 'insert'],
                          'joe': ['select', 'insert', 'update'],
                          'public': ['select']}

        street = StringCol()
        ....

The metadata is found by its name (``SQLMeta``), and is simply a
container.  The class syntax is easier to write and read than a
dictionary-like syntax.  Or, it could be a proper class/instance and
provide a partitioned way to handle introspection.  E.g.,
``Address.SQLMeta.permission.get('bob')`` or ``Address.SQLMeta.columns``.
In this case values that weren't overridden would be calculated from
defaults (like the default naming scheme and so on).

I'm not at all certain about how this should look, or if there are other
things that should go into the class-meta-data object.

Joins, Foreign Keys
-------------------

First, the poorly-named ``MultipleJoin`` and ``RelatedJoin`` (which are
rather ambiguous) will be renamed ``ManyToOneJoin`` and
``ManyToManyJoin``.  ``OneToOneJoin`` will also be added, while
``ForeignKey`` remains the related column type.  (Many2Many?  Many2many?
many2many?)

ForeignKey will be driven by a special validator/converter.  (But will
this make ID access more difficult?)

Joins will return smart objects which can be iterated across.  These
smart objects will be related to ``SelectResults``, and allow the same
features like ordering.  In both cases, an option to retrieve IDs
instead of objects will be allowed.

These smarter objects will allow, in the case of ManyToManyJoin,
``Set``-like operations to relate (or unrelate) objects.  For
ManyToOneJoin the list/set operations are not really appropriate,
because they would reassign the relation, not just add or remove
relations.

It would be nice to make the Join protocol more explicit and public, so
other kinds of joins (e.g., three-way) could be more accessible.
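Usage of the smarter join results might end up looking something like
this -- entirely hypothetical, with made-up Person/Address classes, just
to show the flavor::

    person = Person.get(1)
    addresses = person.addresses            # a ManyToManyJoin result object
    for addr in addresses.orderBy('city'):  # same ordering features as SelectResults
        print addr.city
    addresses.add(Address.get(7))           # relate
    addresses.remove(Address.get(3))        # unrelate
    idList = person.addresses.ids()         # retrieve IDs instead of objects
|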
From: Daniel S. <sql...@xs...> - 2004-01-15 22:50:08
|
--- SQLObject.orig/SQLObject.py	2003-11-12 12:03:45.000000000 -0500
+++ SQLObject/SQLObject.py	2004-01-14 23:50:50.000000000 -0500
@@ -4,6 +4,10 @@
 SQLObject is a object-relational mapper.  See SQLObject.html or
 SQLObject.txt for more.
 
+Modified by
+    Daniel Savard, Xsoli Inc <sqlobject xsoli.com> 14 Jan 2004
+    - Added support for simple table inheritance.
+
 This program is free software; you can redistribute it and/or
 modify it under the terms of the GNU Lesser General Public License
 as published by the Free Software Foundation; either version 2.1 of the
@@ -155,10 +159,37 @@
             classRegistry[registry] = {}
         classRegistry[registry][className] = newClass
 
+        #DSM: Need to keep the name of the class for easy access later
+        newClass._className = className
+        #DSM: Need to know very soon if the class is a children of an inheritable class.
+        #DSM: If so, we keep a link to our parent class.
+        for cls in bases:
+            if hasattr(cls, '_inheritable') and cls._inheritable:
+                newClass._parentClass = cls
+
         # We append to _columns, but we don't want to change the
         # superclass's _columns list, so we make a copy if necessary
         if not d.has_key('_columns'):
             newClass._columns = newClass._columns[:]
+        #DSM: If this class is a child of a parent class, we need to do some
+        #DSM: attribute check and a a foreign key to the parent.
+        if newClass._parentClass:
+            #DSM: First, look for invalid column name: reserved ones or same as a parent
+            parentCols = [column.kw['name'] for column in newClass._columns]
+            for col in implicitColumns:
+                cname = col.kw['name']
+                if cname in ['parent', 'parentID', 'childID', 'childName']:
+                    raise AttributeError, "The column name '%s' is reserved" % cname
+                if cname in parentCols:
+                    raise AttributeError, "The column '%s' is already defined in an inheritable parent" % cname
+            #DSM: Remove columns if inherited from an inheritable class as we don't want them
+            #DSM: All we want is a foreign key that will link to our parent
+            newClass._columns = []
+            newClass._columns.append(Col.ForeignKey(name='parent', foreignKey=newClass._parentClass._className))
+        #DSM: If this is inheritable, add some default columns to be able to link to children
+        if hasattr(newClass, '_inheritable') and newClass._inheritable:
+            newClass._columns.append(Col.IntCol(name='childID',default=None))
+            newClass._columns.append(Col.StringCol(name='childName',default=None))
         newClass._columns.extend(implicitColumns)
         if not d.has_key('_joins'):
             newClass._joins = newClass._joins[:]
@@ -218,6 +249,13 @@
         # SQL where-clause generation.  See the sql module for
         # more.
         newClass.q = SQLBuilder.SQLObjectTable(newClass)
+        #DSM: If we are a child, get the q magic from the parent
+        currentClass = newClass
+        while currentClass._parentClass:
+            currentClass = currentClass._parentClass
+            for col in currentClass._columns:
+                if type(col) == Col.ForeignKey: continue
+                setattr(newClass.q, col.kw['name'], getattr(currentClass.q, col.kw['name']))
 
         for column in newClass._columns[:]:
             newClass.addColumn(column)
@@ -349,6 +387,14 @@
     # it's set (default 1).
     _cacheValues = True
 
+    #DSM: The _inheritable attribute controls wheter the class can by
+    #DSM: inherited 'logically' with a foreignKey and back reference.
+    _inheritable = False   # Does this class is inheritable
+    _parentClass = None    # A reference to the parent class
+    parent = None          # Foreign key to the parent
+    childID = None         # Id to the children
+    childName = None       # Children name (to be able to get a subclass)
+
    # The _defaultOrder is used by SelectResults
    _defaultOrder = None

@@ -442,7 +488,7 @@
         # Here if the _get_columnName method isn't in the
         # definition, we add it with the default
         # _SO_get_columnName definition.
-        if not hasattr(cls, getterName(name)):
+        if not hasattr(cls, getterName(name)) or name == 'parentID' or name == 'childID' or name == 'childName':
             setattr(cls, getterName(name), getter)
             cls._SO_plainGetters[name] = 1
 
@@ -459,7 +505,7 @@
             setattr(cls, '_SO_fromPython_%s' % name, column.fromPython)
         setattr(cls, rawSetterName(name), setter)
         # Then do the aliasing
-        if not hasattr(cls, setterName(name)):
+        if not hasattr(cls, setterName(name)) or name == 'parentID' or name == 'childID' or name == 'childName':
             setattr(cls, setterName(name), setter)
             # We keep track of setters that haven't been
             # overridden, because we can combine these
@@ -488,10 +534,19 @@
         # And we set the _get_columnName version
         # (sans ID ending)
-        if not hasattr(cls, getterName(name)[:-2]):
+        if not hasattr(cls, getterName(name)[:-2]) or name == 'parentID':
             setattr(cls, getterName(name)[:-2], getter)
             cls._SO_plainForeignGetters[name[:-2]] = 1
 
+        #DSM: Try to add parent properties to the current class
+        if name=='parentID':
+            for col in cls._parentClass._columns:
+                cname = col.kw['name']
+                if cname == 'parent' or cname == 'childID' or cname == 'childName': continue
+                setattr(cls, getterName(cname), eval('lambda self: self.parent.%s' % cname))
+                if not col.kw.has_key('immutable') or not col.kw['immutable']:
+                    setattr(cls, setterName(cname), eval('lambda self, val: setattr(self.parent, %s, val)' % repr(cname)))
+
         if not column.immutable:
             # The setter just gets the ID of the object,
             # and then sets the real column.
@@ -835,6 +890,18 @@
         else:
             id = None
 
+        #DSM: If we were called by a children class, we must retreive the properties dictionnary.
+        #DSM: Note: we can't use the ** call paremeter directly as we must be able to delete items from
+        #DSM: the dictionary (and our children must know that the items were removed!)
+        fromChild = False
+        if kw.has_key('kw'):
+            kw = kw['kw']
+            fromChild = True
+        #DSM: If we are the children of an inheritable class, we must first create our parent
+        if cls._parentClass:
+            parent = cls._parentClass.new(kw=kw)
+            kw['parent'] = parent
+
         # First we do a little fix-up on the keywords we were
         # passed:
         for column in inst._SO_columns:
@@ -864,7 +931,10 @@
         for name, value in kw.items():
             if name in inst._SO_plainSetters:
                 forDB[name] = value
-            else:
+                #DSM: If this is a call from the child, we must remove the parameter for the database
+                if fromChild: del kw[name]
+            elif not fromChild:
+                #DSM: Only use other items if this isn't a call from the child
                 others[name] = value
 
         # We take all the straight-to-DB values and use set() to
@@ -881,9 +951,21 @@
         # Then we finalize the process:
         inst._SO_finishCreate(id)
+
+        #DSM: If we are a child, we must set our parent link
+        if cls._parentClass:
+            parent.childID = inst.id
+            parent.childName = inst._className
+
         return inst
 
     new = classmethod(new)
 
+    #DSM: return the subclass if asked for
+    def getSubClass(self):
+        if not hasattr(self, 'childID'): return None
+        if self.childID is None: return None
+        return findClass(self.childName)(self.childID)
+
     def _SO_finishCreate(self, id=None):
         # Here's where an INSERT is finalized.
         # These are all the column values that were supposed
@@ -1008,6 +1090,13 @@
     clearTable = classmethod(clearTable)
 
     def destroySelf(self):
+        #DSM: If this object has children, find the last one and destroy it. If not, simply destroy this object
+        while self.getSubClass(): self = self.getSubClass()
+        self._destroySelf()
+
+    def _destroySelf(self):
+        #DSM: If this object has parents, recursivly kill them
+        if self.parent: self.parent._destroySelf()
         # Kills this object.  Kills it dead!
         self._SO_obsolete = True
         self._connection._SO_delete(self)
@@ -1066,6 +1155,33 @@
         self.clause = clause
         tablesDict = SQLBuilder.tablesUsedDict(self.clause)
         tablesDict[sourceClass._table] = 1
+        #DSM: if this class has a parent, we need to link it and be sure the parent is in the table list
+        #DSM: the following code is before clauseTables because if the user uses clauseTables (and normal string SELECT), he
+        #DSM: must know what he wants and will do himself the relationship between classes.
+        if type(self.clause) is not str:
+            tableRegistry = {}
+            for registryClass in classRegistry[None].values():
+                if registryClass._table in tablesDict:
+                    #DSM: By default, no parent are needed for the clauses
+                    tableRegistry[registryClass] = registryClass
+                    currentClass = registryClass
+                    while currentClass._parentClass:
+                        currentClass = currentClass._parentClass
+                        if tableRegistry.has_key(currentClass):
+                            #DSM: Must keep the last parent needed (to limit the number of join needed)
+                            tableRegistry[registryClass] = currentClass
+                            #DSM: Remove this class as it is a parent one of a needed children
+                            del tableRegistry[currentClass]
+            #DSM: Table registry contains only the last children or standalone classes
+            parentClause = []
+            for (currentClass, minParentClass) in tableRegistry.items():
+                while currentClass != minParentClass and currentClass._parentClass:
+                    parentClass = currentClass._parentClass
+                    parentClause.append(currentClass.q.parentID == parentClass.q.id)
+                    currentClass = parentClass
+                    tablesDict[currentClass._table] = 1
+            self.clause = reduce(SQLBuilder.AND, parentClause, clause)
+
         if clauseTables:
             for table in clauseTables:
                 tablesDict[table] = 1
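For readers skimming the diff, the intended usage looks roughly like
this (class and column names invented for illustration; the behaviour
described in the comments follows from the patch above):

    class Person(SQLObject):
        _inheritable = True          # this class may be subclassed 'logically'
        firstName = StringCol()
        lastName = StringCol()

    class Employee(Person):
        position = StringCol()       # only the new column lives in the employee table

    emp = Employee.new(firstName='Guy', lastName='Smith', position='Clerk')
    emp.firstName                    # delegated to the hidden parent Person row
    Employee.select(Employee.q.firstName == 'Guy')  # the parent join is added automatically
    emp.destroySelf()                # removes the Person row as well
|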
From: Luke O. <lu...@me...> - 2004-01-15 22:33:13
|
Hello-

Is anyone else using one-to-one joins?  I rarely run across them, and
have a workaround, but wondering whether there's something else I could
be doing.  (By one-to-one joins, I mean A class has an FK to B class,
and B class has a join back that should act like an FK.)

class B(SQLObject):
    myA = MultipleJoin('A')

    def _get_myA(self):
        x = self._SO_get_myA()
        try:
            return x[0]
        except IndexError:
            return None

    def _set_myA(self, value):
        value.myB = self

Of course, this has a couple problems: it's not automated (that could be
easily fixed with a little effort), and it makes no attempt to enforce
the 1-1 part (multiple B's could point at one A, and I'm out of luck
when retrieving.  But SQLObject doesn't really support relation-level
constraints).

Thoughts?

- Luke
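For what it's worth, the workaround can be packaged up so it doesn't
have to be repeated per class.  Roughly -- an untested sketch built on
the same _SO_get_ trick as above, not an official SQLObject feature:

    def makeOneToOne(joinName, backRef):
        # Returns the _get_/_set_ pair from the workaround above, given
        # the name of the MultipleJoin attribute and the FK attribute on
        # the other class that points back at us.
        def _get(self):
            results = getattr(self, '_SO_get_%s' % joinName)()
            if results:
                return results[0]
            return None
        def _set(self, value):
            setattr(value, backRef, self)
        return _get, _set

    class B(SQLObject):
        myA = MultipleJoin('A')
        _get_myA, _set_myA = makeOneToOne('myA', 'myB')

It still doesn't enforce the one-to-one constraint, of course; it only
saves the boilerplate.
|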
From: Ken K. <ke...@ke...> - 2004-01-15 17:06:06
|
Ruslan, when you do "foo = SQLObject.new(...)", it will take care of all
of that for you and the new row's id will be accessible as "foo.id".  Is
there some specific problem you're trying to solve?

Ruslan Spivak said:
> Hello.
>
> Is it possible with sqlobject to get behaviour similar to 'select
> last_inserted_id()'.
>
> Thanks in advance.
>
> Best regards,
> Ruslan
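In other words (Person and its columns are made-up names, just for
illustration):

    person = Person.new(fname='Guy', lname='Smith')  # the INSERT happens here
    print person.id   # the id the database just assigned; no extra query needed
|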
From: Ruslan S. <ali...@is...> - 2004-01-15 14:19:25
|
Hello. Is it possible with sqlobject to get behaviour similar to 'select last_inserted_id()'. Thanks in advance. Best regards, Ruslan |
From: Randall R. <ra...@ra...> - 2004-01-15 11:59:48
|
On Thursday, January 15, 2004, at 05:50 AM, Ruslan Spivak wrote:
> Hello.
>
> I need to use 'delete' on sqlobject class, how to make it work?

f = MyObject(1)
f.destroySelf()

--
Randall Randall
ra...@ra...
|
From: Ruslan S. <ali...@is...> - 2004-01-15 10:50:32
|
Hello. I need to use 'delete' on sqlobject class, how to make it work? Thanks in advance |
From: Ian B. <ia...@co...> - 2004-01-13 20:30:42
|
On Mon, 2004-01-12 at 22:28, Chris AtLee wrote:
> I don't know if I've got myself completely sidetracked with this, but I
> thought I'd share.  I started hacking on DBConnection.py a little bit
> so that:
>
> - you can test if a particular connection type will work with
>   DBAPI.isSupported() (e.g. SQLiteConnection.isSupported())

That seems reasonable.

> - get a list of all supported drivers with
>   SQLObject.getSupportedDrivers()
> - create an appropriate DBAPI object according to a DB URI (e.g.
>   "sqlite:///home/user/database.db" or "mysql://user:pass@host/db")
> - save the connection parameters as a URI (e.g. getURI() called on a
>   MySQLConnection object gives "mysql://user:pass@host/db")

The implementation seems pretty straightforward for these too, though
you might be able to use the urlparse library to simplify this, or at
least make it more robust.  Or not, it might not be that useful.
Anyway, MySQL and other databases allow for passwordless connections, so
that has to be kept in mind when parsing the connection string, among
other things.

Another option besides connectionByURI() would be to have a uri keyword
argument in the constructor.

> I haven't added the required code for all of the different connection
> drivers, so your mileage will vary.
>
> Is this the right approach to take to opening up database connections
> in a generic way?  I want the user of my software to be able to
> configure his database settings pretty easily.

Sure.  If you are going to encode it in a string, a URI is the right way
to go.  And strings are nice for settings.

  Ian
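To make the parsing concern concrete, a hand-rolled splitter along these
lines copes with a missing password or host.  Just a sketch; it avoids
urlparse entirely since (as of Python 2.3) urlparse neither splits out
the user/password nor recognizes a "mysql" scheme:

    def parseURI(uri):
        # 'mysql://user:pass@host/db'  -> ('mysql', 'user', 'pass', 'host', '/db')
        # 'mysql://user@host/db'       -> ('mysql', 'user', None, 'host', '/db')
        # 'sqlite:///home/u/data.db'   -> ('sqlite', None, None, None, '/home/u/data.db')
        scheme, rest = uri.split('://', 1)
        slash = rest.find('/')
        if slash == -1:
            netloc, path = rest, ''
        else:
            netloc, path = rest[:slash], rest[slash:]
        user = password = host = None
        if netloc:
            if '@' in netloc:
                auth, host = netloc.split('@', 1)
                if ':' in auth:
                    user, password = auth.split(':', 1)
                else:
                    user = auth          # passwordless connection
            else:
                host = netloc
        return scheme, user, password, host, path

getURI() on a connection could then just be the reverse, taking care to
leave out the ':' when there is no password.
|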
From: David B. <db...@fi...> - 2004-01-13 19:03:23
|
Ian Bicking <ia...@co...> writes: > On Tue, 2004-01-13 at 10:43, David Bolen wrote: (...) > > What I find attractive about PyProtocols (and probably the others) is > > that you still get the "Does it quack?" approach, but you do so with a > > single adaptation call ("adapt" with PyProtocols"). And that's an > > often discussed topic in the Python arena - just how do you ask "Does > > it quack?" > > Interfaces answer "Does it quack" -- adaptation goes a bit further, > saying "Give me something that quacks". Definitely - I oversimplified a bit much. I'd probably rephrase that instead of asking "Does it quack?" the adaption says "Could you quack?" :-) > > The typing really isn't that much. Yes, you need to define an > > interface, but to be honest, you should be doing that anyway as a way > > of documenting the "duck" that your object expects. > > And if you are just writing an "application" (vs. a framework), you > probably won't define any interfaces at all. Depending on your definition of "application" I'd agree. To this point much of my own use of PyProtocols has been within our own application, but at the layer of core components within the application (not really a framework, just central component definitions). These components are expected to interface in various ways over time, so while I don't have the need to define many adapters beyond the interfaces yet, I can envision using that facility down the road. So I'd expect that most applications would at least have an opportunity to use interface definitions at some level, unless they really are just a thin glue layer above other libraries of objects. -- David |
From: Ian B. <ia...@co...> - 2004-01-13 17:48:52
|
On Tue, 2004-01-13 at 10:43, David Bolen wrote: > "Ian Sparks" <Ian...@et...> writes: > > I really meant that Interfaces mean its no longer enough to create a > > class. I now need to create an Interface "prototype" and then > > classes that implement that Interface. One of the great things about > > Python (for me) is that you can say "Does it quack? Ok, lets assume > > its a duck" without a lot of rigmarole. I'm probably showing my > > (deep) ignorance but the whole Interfaces thing seems to be putting > > the rigmarole back in. > > What I find attractive about PyProtocols (and probably the others) is > that you still get the "Does it quack?" approach, but you do so with a > single adaptation call ("adapt" with PyProtocols"). And that's an > often discussed topic in the Python arena - just how do you ask "Does > it quack?" Interfaces answer "Does it quack" -- adaptation goes a bit further, saying "Give me something that quacks". Interfaces are like the old, boring type declarations. They might save you from a few bugs, but they don't really give you anything. Adaptation adds real power. It formalizes something we've all been doing in various ad hoc manners for a long time. It's also an alternative to many kinds of traversal -- though where to draw that line (between doing traversal via attribute access vs. adaptation) I haven't yet figured out. > So you don't have to attempt random calls into the object. When you > couple that with the fact that you can externally adapt or register > that an object supports an interface, and it's really a very dynamic > way to handle well-defined interfaces. > > The typing really isn't that much. Yes, you need to define an > interface, but to be honest, you should be doing that anyway as a way > of documenting the "duck" that your object expects. And if you are just writing an "application" (vs. a framework), you probably won't define any interfaces at all. Certainly to use FormEncode you don't need to define any interface -- it only uses interfaces that are already defined (IValidator and IField). In fact, all the adaptation and interfaces are mostly hidden from you. It's really there for when you want to fit different frameworks together, like SQLObject and FormEncode. Ian |
From: David B. <db...@fi...> - 2004-01-13 16:50:16
|
"Ian Sparks" <Ian...@et...> writes: > I really meant that Interfaces mean its no longer enough to create a > class. I now need to create an Interface "prototype" and then > classes that implement that Interface. One of the great things about > Python (for me) is that you can say "Does it quack? Ok, lets assume > its a duck" without a lot of rigmarole. I'm probably showing my > (deep) ignorance but the whole Interfaces thing seems to be putting > the rigmarole back in. What I find attractive about PyProtocols (and probably the others) is that you still get the "Does it quack?" approach, but you do so with a single adaptation call ("adapt" with PyProtocols"). And that's an often discussed topic in the Python arena - just how do you ask "Does it quack?" So you don't have to attempt random calls into the object. When you couple that with the fact that you can externally adapt or register that an object supports an interface, and it's really a very dynamic way to handle well-defined interfaces. The typing really isn't that much. Yes, you need to define an interface, but to be honest, you should be doing that anyway as a way of documenting the "duck" that your object expects. In terms of the classes itself, if you use the abstract-base-case approach to your interface, just inherit from it for your class. Or, for any class, just add an "advise(instancesProvide=[XXXX]))" call in your class to say it provides interface XXXX. Or, you can indicate your object supports an interface from the "outside" without ever touching your object code (or register an adapter to take an existing object and wrap it to conform to the interface). And in the end, nothing is hard-enforced. Any class can claim to support an interface without really doing so, and you can be in precisely the same state as now as if you hadn't claimed that fact but just used the object. -- David |
From: Ian S. <Ian...@et...> - 2004-01-13 14:03:30
|
>> Adaptation looks very promising. I just wish it didn't require all that extra typing....

Ian Bicking wrote (in response to the above)...

>It should usually take less typing.  You need only adapt at the last
>moment, so you pass objects through based on the expectation they will
>later be adapted if necessary.

I really meant that Interfaces mean its no longer enough to create a
class. I now need to create an Interface "prototype" and then classes
that implement that Interface. One of the great things about Python (for
me) is that you can say "Does it quack? Ok, lets assume its a duck"
without a lot of rigmarole. I'm probably showing my (deep) ignorance but
the whole Interfaces thing seems to be putting the rigmarole back in.

Its kind of a comfort that the very bright people involved with Twisted
and Zope are making big use of Interfaces but at the same time these
projects are noted for their use of python to create things that do not
appear to be very pythonic (Acquisition? reliance on Callbacks?)

>But I'm still getting used to it as a paradigm -- I think it will be a
>very big deal, but I don't think it's widely understood, even by those
>(like me) that are using it.

Which is probably why I should do more research into this instead of
whining about having to do a bit more typing....

-----Original Message-----
From: Ian Bicking [mailto:ia...@co...]
Sent: Monday, January 12, 2004 11:16 AM
To: Ian Sparks
Cc: Sqlobject-Discuss@Lists. Sourceforge. Net (E-mail)
Subject: RE: [SQLObject] SQLObject & Webware FormKit Integration

On Mon, 2004-01-12 at 15:38, Ian Sparks wrote:
> Thanks for the update on this and on the Archetypes & PyProtocols
> links. They make for interesting reading. Is PyProtocols stable?

Yes, I think it's pretty stable.  It's a core part of PEAK, and it's
fairly mature.  I'm sure Zope 3 and Twisted's adaptation are fairly
mature too (again because they are fundamental building blocks for those
platforms), but PyProtocols was the only one with decent documentation
(and isolated distribution).

> Adaptation looks very promising. I just wish it didn't require all
> that extra typing....

It should usually take less typing.  You need only adapt at the last
moment, so you pass objects through based on the expectation they will
later be adapted if necessary.

In FormEncode there's the toPython and fromPython functions, which
actually do the necessary adaptation themselves then call the
appropriate method, so there's really no extra typing at all.

But I'm still getting used to it as a paradigm -- I think it will be a
very big deal, but I don't think it's widely understood, even by those
(like me) that are using it.  Well, I understand the mechanism, but
that's different from having an intuition on how it should best be used.
And I don't feel like there's many other languages that provide a model
for how it might work -- it's really very novel, at least from my
experience.

  Ian
|
From: Chris A. <ch...@at...> - 2004-01-13 04:28:40
|
I don't know if I've got myself completely sidetracked with this, but I
thought I'd share.  I started hacking on DBConnection.py a little bit so
that:

- you can test if a particular connection type will work with
  DBAPI.isSupported() (e.g. SQLiteConnection.isSupported())
- get a list of all supported drivers with
  SQLObject.getSupportedDrivers()
- create an appropriate DBAPI object according to a DB URI (e.g.
  "sqlite:///home/user/database.db" or "mysql://user:pass@host/db")
- save the connection parameters as a URI (e.g. getURI() called on a
  MySQLConnection object gives "mysql://user:pass@host/db")

I haven't added the required code for all of the different connection
drivers, so your mileage will vary.

Is this the right approach to take to opening up database connections in
a generic way?  I want the user of my software to be able to configure
his database settings pretty easily.

Cheers,
Chris
--
Chris AtLee <ch...@at...>
|
From: Ian B. <ia...@co...> - 2004-01-12 22:14:46
|
On Mon, 2004-01-12 at 15:38, Ian Sparks wrote: > Thanks for the update on this and on the Archetypes & PyProtocols links. They make > for interesting reading. Is PyProtocols stable? Yes, I think it's pretty stable. It's a core part of PEAK, and it's fairly mature. I'm sure Zope 3 and Twisted's adaptation are fairly mature too (again because they are fundamental building blocks for those platforms), but PyProtocols was the only one with decent documentation (and isolated distribution). > Adaptation looks very promising. I just wish it didn't require all that extra typing.... It should usually take less typing. You need only adapt at the last moment, so you pass objects through based on the expectation they will later be adapted if necessary. In FormEncode there's the toPython and fromPython functions, which actually do the necessary adaptation themselves then call the appropriate method, so there's really no extra typing at all. But I'm still getting used to it as a paradigm -- I think it will be a very big deal, but I don't think it's widely understood, even by those (like me) that are using it. Well, I understand the mechanism, but that's different from having an intuition on how it should best be used. And I don't feel like there's many other languages that provide a model for how it might work -- it's really very novel, at least from my experience. Ian |
From: Ian S. <Ian...@et...> - 2004-01-12 21:38:32
|
Ian Bicking wrote:
>> Well, I've been thinking about it of course, especially with respect
>> to FormEncode (http://formencode.org). But it's something that I keep
>> putting aside. <<

Thanks for the update on this and on the Archetypes & PyProtocols links.
They make for interesting reading. Is PyProtocols stable? I keep looking
at PEAK but the fact that the Tutorial PDF runs out at Page 25 just as
it starts to get interesting worries me. Its been that way for a loooong
time.

Adaptation looks very promising. I just wish it didn't require all that
extra typing....

-----Original Message-----
From: Ian Bicking [mailto:ia...@co...]
Sent: Monday, January 12, 2004 8:30 AM
To: Ian Sparks
Cc: Sqlobject-Discuss@Lists. Sourceforge. Net (E-mail)
Subject: Re: [SQLObject] SQLObject & Webware FormKit Integration

On Mon, 2004-01-12 at 11:29, Ian Sparks wrote:
> Has anyone given any more thought to SO -> Webware (esp Fun/FormKit)
> integration?

Well, I've been thinking about it of course, especially with respect to
FormEncode (http://formencode.org).  But it's something that I keep
putting aside.

With FormEncode, the validation schema would be created through
adaptation, probably adapting the object or class (e.g., adapt the
object to edit it, adapt the class to create a form for a new object).
The adaptation uses PyProtocols
(http://peak.telecommunity.com/PyProtocols.html), though it would be
familiar to Zope 3 developers.

Something slightly less ambitious would probably be pretty easy to make
right now.  Using FormEncode isn't *very* far off, but I know I still
get confused thinking about how the API should work, even when the
pieces are ready to be put together.  Really, I might have everything in
place now, I just haven't tried to actually put it all together.  I
think most of the stuff I've worked on is in FormEncode's CVS
repository.

I've submitted a proposal about FormEncode for PyCon, so if that goes
through I will probably be doing a lot of work on FormEncode (and SO
integration) to get things polished to show it off.

(FWIW, I think there's also some considerable overlap with the
Archetypes project that Sidnei's working on)

  Ian
|
From: Ian B. <ia...@co...> - 2004-01-12 19:29:01
|
On Mon, 2004-01-12 at 11:29, Ian Sparks wrote: > Has anyone given any more thought to SO -> Webware (esp Fun/FormKit) integration? Well, I've been thinking about it of course, especially with respect to FormEncode (http://formencode.org). But it's something that I keep putting aside. With FormEncode, the validation schema would be created through adaptation, probably adapting the object or class (e.g., adapt the object to edit it, adapt the class to create a form for a new object). The adaptation uses PyProtocols (http://peak.telecommunity.com/PyProtocols.html), though it would be familiar to Zope 3 developers. Something slightly less ambitious would probably be pretty easy to make right now. Using FormEncode isn't *very* far off, but I know I still get confused thinking about how the API should work, even when the pieces are ready to be put together. Really, I might have everything in place now, I just haven't tried to actually put it all together. I think most of the stuff I've worked on is in FormEncode's CVS repository. I've submitted a proposal about FormEncode for PyCon, so if that goes through I will probably be doing a lot of work on FormEncode (and SO integration) to get things polished to show it off. (FWIW, I think there's also some considerable overlap with the Archetypes project that Sidnei's working on) Ian |