[Sqlalchemy-commits] [1127] sqlalchemy/trunk: refactor to Compiled.get_params() to return new Clause
From: <co...@sq...> - 2006-03-13 00:25:25
Revision: 1127
Author:   zzzeek
Date:     2006-03-12 18:24:54 -0600 (Sun, 12 Mar 2006)

Log Message:
refactor to Compiled.get_params() to return new ClauseParameters object, a more
intelligent bind parameter dictionary that does type conversions late and
preserves the unconverted value; used to fix mappers not comparing correct
value in post-fetch [ticket:110]

removed pre_exec assertion from oracle/firebird regarding "check for
sequence/primary key value"

fix to Unicode type to check for null, fixes [ticket:109]

create_engine() now uses genericized parameters; host/hostname,
db/dbname/database, password/passwd, etc. for all engine connections

fix to select([func(column)]) so that it creates a FROM clause to the column's
table, fixes [ticket:111]

doc updates for column defaults, indexes, connection pooling, engine params

unit tests for the above bugfixes

Modified Paths:
    sqlalchemy/trunk/CHANGES
    sqlalchemy/trunk/doc/build/content/dbengine.myt
    sqlalchemy/trunk/doc/build/content/docstrings.myt
    sqlalchemy/trunk/doc/build/content/document_base.myt
    sqlalchemy/trunk/doc/build/content/metadata.myt
    sqlalchemy/trunk/doc/build/content/pooling.myt
    sqlalchemy/trunk/lib/sqlalchemy/ansisql.py
    sqlalchemy/trunk/lib/sqlalchemy/databases/firebird.py
    sqlalchemy/trunk/lib/sqlalchemy/databases/mysql.py
    sqlalchemy/trunk/lib/sqlalchemy/databases/oracle.py
    sqlalchemy/trunk/lib/sqlalchemy/databases/postgres.py
    sqlalchemy/trunk/lib/sqlalchemy/engine.py
    sqlalchemy/trunk/lib/sqlalchemy/mapping/mapper.py
    sqlalchemy/trunk/lib/sqlalchemy/schema.py
    sqlalchemy/trunk/lib/sqlalchemy/sql.py
    sqlalchemy/trunk/lib/sqlalchemy/types.py
    sqlalchemy/trunk/lib/sqlalchemy/util.py
    sqlalchemy/trunk/setup.py
    sqlalchemy/trunk/test/indexes.py
    sqlalchemy/trunk/test/mapper.py
    sqlalchemy/trunk/test/objectstore.py
    sqlalchemy/trunk/test/select.py
    sqlalchemy/trunk/test/testbase.py

Diff:

Modified: sqlalchemy/trunk/CHANGES (1126 => 1127)
--- sqlalchemy/trunk/CHANGES	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/CHANGES	2006-03-13 00:24:54 UTC (rev 1127)
@@ -1,11 +1,46 @@
 0.1.4
+- create_engine() now uses genericized parameters; host/hostname, db/dbname/database, password/passwd, etc. for all engine connections. makes
+engine URIs much more "universal"
+- another overhaul to EagerLoading when used in conjunction with mappers that
+inherit; improvements to eager loads figuring out their aliased queries
+correctly, also relations set up against a mapper with inherited mappers will
+create joins against the table that is specific to the mapper itself (i.e. and
+not any tables that are inherited/are further down the inheritance chain),
+this can be overridden by using custom primary/secondary joins.
+- added onupdate parameter to Column, will exec SQL/python upon an update
+statement. Also adds "for_update=True" to all DefaultGenerator subclasses
+- added rudimentary support for Oracle table reflection.
+- checked in an initial Firebird module, awaiting testing.
+- added sql.ClauseParameters dictionary object as the result for
+compiled.get_params(), does late-typeprocessing of bind parameters so
+that the original values are easier to access
+- more docs for indexes, column defaults, connection pooling, engine construction
+- overhaul to the construction of the types system. uses a simpler inheritance pattern
+so that any of the generic types can be easily subclassed, with no need for TypeDecorator.
+- added "convert_unicode=False" parameter to SQLEngine, will cause all String types to
+perform unicode encoding/decoding (makes Strings act like Unicodes)
+- added 'encoding="utf8"' parameter to engine. the given encoding will be
+used for all encode/decode calls within Unicode types as well as Strings
+when convert_unicode=True.
+- improved support to column defaults when used by mappers; mappers will pull
+pre-executed defaults from statement's executed bind parameters
+(pre-conversion) to populate them into a saved object's attributes; if any
+PassiveDefaults have fired off, will instead post-fetch the row from the DB to
+populate the object.
+- added 'get_session().invalidate(*obj)' method to objectstore, instances will
+refresh() themselves upon the next attribute access.
+- improvements to SQL func calls including an "engine" keyword argument so
+they can be execute()d or scalar()ed standalone, also added func accessor to
+SQLEngine
 
 0.1.3
-- completed "post_update" feature, will add a second update statement before inserts
-and after deletes in order to reconcile a relationship without any dependencies
-being created; used when persisting two rows that are dependent on each other
-- completed mapper.using(session) function, localized per-object Session functionality;
-objects can be declared and manipulated as local to any user-defined Session
+- completed "post_update" feature, will add a second update statement before
+inserts and after deletes in order to reconcile a relationship without any
+dependencies being created; used when persisting two rows that are dependent
+on each other
+- completed mapper.using(session) function, localized per-object Session
+functionality; objects can be declared and manipulated as local to any
+user-defined Session
 - fix to Oracle "row_number over" clause with multiple tables
 - mapper.get() was not selecting multiple-keyed objects if the mapper's table was a join,
 such as in an inheritance relationship, this is fixed.
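The ClauseParameters entry above describes a bind-parameter dictionary that keeps the raw values around and applies type conversion only when a value is read out, so the mapper post-fetch comparison can still see what was originally supplied. A rough standalone sketch of that idea, not the actual sqlalchemy.sql.ClauseParameters class (which also carries the engine and each parameter's bind type):

    class LateConvertingParams:
        """Toy parameter dictionary: keeps originals, converts only on access."""
        def __init__(self):
            self._raw = {}         # key -> original, unconverted value
            self._converters = {}  # key -> callable applied when the DBAPI value is read

        def set_parameter(self, key, value, converter=None):
            self._raw[key] = value
            self._converters[key] = converter

        def get_original(self, key):
            # what a mapper post-fetch comparison wants: the value exactly as given
            return self._raw[key]

        def __getitem__(self, key):
            # what statement execution wants: the type-converted value
            converter = self._converters.get(key)
            if converter is None:
                return self._raw[key]
            return converter(self._raw[key])

    params = LateConvertingParams()
    params.set_parameter('flag', True, converter=int)  # e.g. a Boolean stored as 0/1
    assert params.get_original('flag') is True         # original preserved
    assert params['flag'] == 1                          # converted only when read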
Modified: sqlalchemy/trunk/doc/build/content/dbengine.myt (1126 => 1127)
--- sqlalchemy/trunk/doc/build/content/dbengine.myt	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/doc/build/content/dbengine.myt	2006-03-13 00:24:54 UTC (rev 1127)
@@ -13,17 +13,24 @@
     <p>
     Engines exist for SQLite, Postgres, MySQL, and Oracle, using the Pysqlite, Psycopg (1 or 2), MySQLDB, and cx_Oracle modules. Each engine imports its corresponding module which is required to be installed. For Postgres and Oracle, an alternate module may be specified at construction time as well.
     </p>
-    <p>An example of connecting to each engine is as follows:</p>
+    <p>The string based argument names for connecting are translated to the appropriate names when the connection is made; argument names include "host" or "hostname" for database host, "database", "db", or "dbname" for the database name (also is dsn for Oracle), "user" or "username" for the user, and "password", "pw", or "passwd" for the password. SQLite expects "filename" or "file" for the filename, or if None it defaults to "":memory:".</p>
+    <p>The connection arguments can be specified as a string + dictionary pair, or a single URL-encoded string, as follows:</p>
 
     <&|formatting.myt:code&>
        from sqlalchemy import *
 
        # sqlite in memory
        sqlite_engine = create_engine('sqlite', {'filename':':memory:'}, **opts)
+
+       # via URL
+       sqlite_engine = create_engine('sqlite://', **opts)
 
        # sqlite using a file
        sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'}, **opts)
 
+       # via URL
+       sqlite_engine = create_engine('sqlite://filename=querytest.db', **opts)
+
        # postgres
        postgres_engine = create_engine('postgres',
                            {'database':'test',
@@ -31,6 +38,9 @@
                            'user':'scott',
                            'password':'tiger'}, **opts)
 
+       # via URL
+       postgres_engine = create_engine('postgres://database=test&host=127.0.0.1&user=scott&password=tiger')
+
        # mysql
        mysql_engine = create_engine('mysql',
                            {
@@ -49,20 +59,17 @@
 
     </&>
     <p>Note that the general form of connecting to an engine is:</p>
-    <&|formatting.myt:code&>
+    <&|formatting.myt:code &>
+        # separate arguments
         engine = create_engine(
                     <enginename>,
                     {<named DBAPI arguments>},
-                    <sqlalchemy options>
+                    <sqlalchemy options>;
                 )
+
+        # url
+        engine = create_engine('<enginename>://<named DBAPI arguments>', <sqlalchemy options>)
     </&>
-    <p>The second argument is a dictionary whose key/value pairs will be passed to the underlying DBAPI connect() method as keyword arguments. Any keyword argument supported by the DBAPI module can be in this dictionary.</p>
-    <p>Engines can also be loaded by URL. The above format is converted into <span class="codeline"><% '<enginename>://key=val&key=val' |h %></span>:
-    <&|formatting.myt:code&>
-        sqlite_engine = create_engine('sqlite://filename=querytest.db')
-        postgres_engine = create_engine('postgres://database=test&user=scott&password=tiger')
-    </&>
-    </p>
 </&>
 <&|doclib.myt:item, name="methods", description="Database Engine Methods" &>
     <p>A few useful methods off the SQLEngine are described here:</p>
@@ -95,7 +102,18 @@
 <&|doclib.myt:item, name="options", description="Database Engine Options" &>
     <p>The remaining arguments to <span class="codeline">create_engine</span> are keyword arguments that are passed to the specific subclass of <span class="codeline">sqlalchemy.engine.SQLEngine</span> being used, as well as the underlying <span class="codeline">sqlalchemy.pool.Pool</span> instance. All of the options described in the previous section <&formatting.myt:link, path="pooling_configuration"&> can be specified, as well as engine-specific options:</p>
     <ul>
-    <li>pool=None : an instance of <span class="codeline">sqlalchemy.pool.DBProxy</span> to be used as the underlying source for connections (DBProxy is described in the previous section). If None, a default DBProxy will be created using the engine's own database module with the given arguments.</li>
+    <li><p>pool=None : an instance of <span class="codeline">sqlalchemy.pool.Pool</span> to be used as the underlying source for connections, overriding the engine's connect arguments (pooling is described in the previous section). If None, a default Pool (QueuePool or SingletonThreadPool as appropriate) will be created using the engine's connect arguments.</p>
+    <p>Example:</p>
+    <&|formatting.myt:code&>
+        from sqlalchemy import *
+        import sqlalchemy.pool as pool
+        import MySQLdb
+
+        def getconn():
+            return MySQLdb.connect(user='ed', dbname='mydb')
+
+        engine = create_engine('mysql', pool=pool.QueuePool(getconn, pool_size=20, max_overflow=40))
+    </&></li>
     <li>echo=False : if True, the SQLEngine will log all statements as well as a repr() of their parameter lists to the engines logger, which defaults to sys.stdout. A SQLEngine instances' "echo" data member can be modified at any time to turn logging on and off. If set to the string 'debug', result rows will be printed to the standard output as well.</li>
     <li>logger=None : a file-like object where logging output can be sent, if echo is set to True. This defaults to sys.stdout.</li>
     <li>module=None : used by Oracle and Postgres, this is a reference to a DBAPI2 module to be used instead of the engine's default module. For Postgres, the default is psycopg2, or psycopg1 if 2 cannot be found. For Oracle, its cx_Oracle.</li>
@@ -103,7 +121,8 @@
     <li>use_ansi=True : used only by Oracle; when False, the Oracle driver attempts to support a particular "quirk" of some Oracle databases, that the LEFT OUTER JOIN SQL syntax is not supported, and the "Oracle join" syntax of using <% "<column1>(+)=<column2>" |h%> must be used in order to achieve a LEFT OUTER JOIN. Its advised that the Oracle database be configured to have full ANSI support instead of using this feature.</li>
     <li>use_oids=False : used only by Postgres, will enable the column name "oid" as the object ID column. Postgres as of 8.1 has object IDs disabled by default.</li>
     <li>convert_unicode=False : if set to True, all String/character based types will convert Unicode values to raw byte values going into the database, and all raw byte values to Python Unicode coming out in result sets. This is an engine-wide method to provide unicode across the board. For unicode conversion on a column-by-column level, use the Unicode column type instead.</li>
-    <li>encoding='utf-8' : the encoding to use when doing unicode translations.</li>
+    <li>encoding='utf-8' : the encoding to use for Unicode translations - passed to all encode/decode functions.</li>
+    <li>echo_uow=False : when True, logs unit of work commit plans to the standard output.</li>
     </ul>
 </&>
 <&|doclib.myt:item, name="proxy", description="Using the Proxy Engine" &>

Modified: sqlalchemy/trunk/doc/build/content/docstrings.myt (1126 => 1127)
--- sqlalchemy/trunk/doc/build/content/docstrings.myt	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/doc/build/content/docstrings.myt	2006-03-13 00:24:54 UTC (rev 1127)
@@ -14,7 +14,7 @@
 
 <& pydoc.myt:obj_doc, obj=schema &>
 <& pydoc.myt:obj_doc, obj=engine, classes=[engine.SQLEngine, engine.ResultProxy, engine.RowProxy] &>
-<& pydoc.myt:obj_doc, obj=sql, classes=[sql.Compiled, sql.ClauseElement, sql.TableClause, sql.ColumnClause] &>
+<& pydoc.myt:obj_doc, obj=sql, classes=[sql.ClauseParameters, sql.Compiled, sql.ClauseElement, sql.TableClause, sql.ColumnClause] &>
 <& pydoc.myt:obj_doc, obj=pool, classes=[pool.DBProxy, pool.Pool, pool.QueuePool, pool.SingletonThreadPool] &>
 <& pydoc.myt:obj_doc, obj=mapping &>
 <& pydoc.myt:obj_doc, obj=mapping.objectstore, classes=[mapping.objectstore.Session, mapping.objectstore.Session.SessionTrans, mapping.objectstore.UnitOfWork] &>

Modified: sqlalchemy/trunk/doc/build/content/document_base.myt (1126 => 1127)
--- sqlalchemy/trunk/doc/build/content/document_base.myt	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/doc/build/content/document_base.myt	2006-03-13 00:24:54 UTC (rev 1127)
@@ -23,7 +23,7 @@
     onepage='documentation'
     index='index'
     title='SQLAlchemy Documentation'
-    version = '0.1.3'
+    version = '0.1.4'
 </%attr>
 
 <%method title>
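The dbengine.myt change above documents a single URL-encoded string of the form '<enginename>://key=val&key=val' as an alternative to the (name, dictionary) pair. A hedged sketch of how such a URL can be split back into DBAPI keyword arguments; this helper is hypothetical and is not SQLAlchemy's own parser:

    from urllib.parse import parse_qsl

    def split_engine_url(url):
        """Split 'postgres://database=test&user=scott' into ('postgres', {...})."""
        name, _, query = url.partition('://')
        return name, dict(parse_qsl(query))

    name, args = split_engine_url(
        'postgres://database=test&host=127.0.0.1&user=scott&password=tiger')
    assert name == 'postgres'
    assert args == {'database': 'test', 'host': '127.0.0.1',
                    'user': 'scott', 'password': 'tiger'}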
Modified: sqlalchemy/trunk/doc/build/content/metadata.myt (1126 => 1127)
--- sqlalchemy/trunk/doc/build/content/metadata.myt	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/doc/build/content/metadata.myt	2006-03-13 00:24:54 UTC (rev 1127)
@@ -107,15 +107,11 @@
     >>> othertable is news_articles
     True
     </&>
-
-
-
 </&>
 <&|doclib.myt:item, name="creating", description="Creating and Dropping Database Tables" &>
     <p>Creating and dropping is easy, just use the <span class="codeline">create()</span> and <span class="codeline">drop()</span> methods:
     <&|formatting.myt:code&>
-        <&formatting.myt:poplink&>
-        employees = Table('employees', engine,
+        <&formatting.myt:poplink&>employees = Table('employees', engine,
             Column('employee_id', Integer, primary_key=True),
             Column('employee_name', String(60), nullable=False, key='name'),
             Column('employee_dept', Integer, ForeignKey("departments.department_id"))
@@ -126,18 +122,162 @@
         employee_name VARCHAR(60) NOT NULL,
         employee_dept INTEGER REFERENCES departments(department_id)
         )
+{} </&>
 
-{} </&>
-
-        <&formatting.myt:poplink&>
-        employees.drop() <&|formatting.myt:codepopper, link="sql" &>
+        <&formatting.myt:poplink&>employees.drop() <&|formatting.myt:codepopper, link="sql" &>
         DROP TABLE employees
 {} </&>
     </&>
 </&>
 
+
+    <&|doclib.myt:item, name="defaults", description="Column Defaults and OnUpdates" &>
+    <p>SQLAlchemy includes flexible constructs in which to create default values for columns upon the insertion of rows, as well as upon update. These defaults can take several forms: a constant, a Python callable to be pre-executed before the SQL is executed, a SQL expression or function to be pre-executed before the SQL is executed, a pre-executed Sequence (for databases that support sequences), or a "passive" default, which is a default function triggered by the database itself upon insert, the value of which can then be post-fetched by the engine, provided the row provides a primary key in which to call upon.</p>
+    <&|doclib.myt:item, name="oninsert", description="Pre-Executed Insert Defaults" &>
+    <p>A basic default is most easily specified by the "default" keyword argument to Column:</p>
+    <&|formatting.myt:code&>
+        # a function to create primary key ids
+        i = 0
+        def mydefault():
+            i += 1
+            return i
+
+        t = Table("mytable", db,
+            # function-based default
+            Column('id', Integer, primary_key=True, default=mydefault),
+
+            # a scalar default
+            Column('key', String(10), default="default")
+        )
+    </&>
+    <p>The "default" keyword can also take SQL expressions, including select statements or direct function calls:</p>
+    <&|formatting.myt:code&>
+        t = Table("mytable", db,
+            Column('id', Integer, primary_key=True),
+
+            # define 'create_date' to default to now()
+            Column('create_date', DateTime, default=func.now()),
+
+            # define 'key' to pull its default from the 'keyvalues' table
+            Column('key', String(20), default=keyvalues.select(keyvalues.c.type='type1', limit=1))
+        )
+    </&>
+    <p>The "default" keyword argument is shorthand for using a ColumnDefault object in a column definition. This syntax is optional, but is required for other types of defaults, futher described below:</p>
+    <&|formatting.myt:code&>
+        Column('mycolumn', String(30), ColumnDefault(func.get_data()))
+    </&>
+    </&>
+
+    <&|doclib.myt:item, name="onupdate", description="Pre-Executed OnUpdate Defaults" &>
+    <p>Similar to an on-insert default is an on-update default, which is most easily specified by the "onupdate" keyword to Column, which also can be a constanct, plain Python function or SQL expression:</p>
+    <&|formatting.myt:code&>
+        t = Table("mytable", db,
+            Column('id', Integer, primary_key=True),
+
+            # define 'last_updated' to be populated with current_timestamp (the ANSI-SQL version of now())
+            Column('last_updated', DateTime, onupdate=func.current_timestamp()),
+        )
+    </&>
+    <p>To use a ColumnDefault explicitly for an on-update, use the "for_update" keyword argument:</p>
+    <&|formatting.myt:code&>
+        Column('mycolumn', String(30), ColumnDefault(func.get_data(), for_update=True))
+    </&>
+    </&>
+
+    <&|doclib.myt:item, name="passive", description="Inline Default Execution: PassiveDefault" &>
+    <p>A PassiveDefault indicates a column default or on-update value that is executed automatically by the database. This construct is used to specify a SQL function that will be specified as "DEFAULT" when creating tables, and also to indicate the presence of new data that is available to be "post-fetched" after an insert or update execution.</p>
+    <&|formatting.myt:code&>
+        t = Table('test', e,
+            Column('mycolumn', DateTime, PassiveDefault("sysdate"))
+        )
+    </&>
+    <p>A create call for the above table will produce:</p>
+    <&|formatting.myt:code&>
+        CREATE TABLE test (
+            mycolumn datetime default sysdate
+        )
+    </&>
+    <p>PassiveDefaults also send a message to the SQLEngine that data is available after update or insert. The object-relational mapper system uses this information to post-fetch rows after insert or update, so that instances can be refreshed with the new data. Below is a simplified version:</p>
+    <&|formatting.myt:code&>
+        # table with passive defaults
+        mytable = Table('mytable', engine,
+            Column('my_id', Integer, primary_key=True),
+
+            # an on-insert database-side default
+            Column('data1', Integer, PassiveDefault("d1_func")),
+
+            # an on-update database-side default
+            Column('data2', Integer, PassiveDefault("d2_func", for_update=True))
+        )
+        # insert a row
+        mytable.insert().execute(name='fred')
+
+        # ask the engine: were there defaults fired off on that row ?
+        if table.engine.lastrow_has_defaults():
+            # postfetch the row based on primary key.
+            # this only works for a table with primary key columns defined
+            primary_key = table.engine.last_inserted_ids()
+            row = table.select(table.c.id == primary_key[0])
+    </&>
+    <p>Tables that are reflected from the database which have default values set on them, will receive those defaults as PassiveDefaults.</p>
+
+    <&|doclib.myt:item, name="postgres", description="The Catch: Postgres Primary Key Defaults always Pre-Execute" &>
+    <p>Current Postgres support does not rely upon OID's to determine the identity of a row. This is because the usage of OIDs has been deprecated with Postgres and they are disabled by default for table creates as of PG version 8. Pyscopg2's "cursor.lastrowid" function only returns OIDs. Therefore, when inserting a new row which has passive defaults set on the primary key columns, the default function is <b>still pre-executed</b> since SQLAlchemy would otherwise have no way of retrieving the row just inserted.</p>
+    </&>
+    </&>
+    <&|doclib.myt:item, name="sequences", description="Defining Sequences" &>
+    <P>A table with a sequence looks like:</p>
+    <&|formatting.myt:code&>
+        table = Table("cartitems", db,
+            Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
+            Column("description", String(40)),
+            Column("createdate", DateTime())
+        )
+    </&>
+    <p>The Sequence is used with Postgres or Oracle to indicate the name of a Sequence that will be used to create default values for a column. When a table with a Sequence on a column is created by SQLAlchemy, the Sequence object is also created. Similarly, the Sequence is dropped when the table is dropped. Sequences are typically used with primary key columns. When using Postgres, if an integer primary key column defines no explicit Sequence or other default method, SQLAlchemy will create the column with the SERIAL keyword, and will pre-execute a sequence named "tablename_columnname_seq" in order to retrieve new primary key values. Oracle, which has no "auto-increment" keyword, requires that a Sequence be created for a table if automatic primary key generation is desired. Note that for all databases, primary key values can always be explicitly stated within the bind parameters for any insert statement as well, removing the need for any kind of default generation function.</p>
+
+    <p>A Sequence object can be defined on a Table that is then used for a non-sequence-supporting database. In that case, the Sequence object is simply ignored. Note that a Sequence object is <b>entirely optional for all databases except Oracle</b>, as other databases offer options for auto-creating primary key values, such as AUTOINCREMENT, SERIAL, etc. SQLAlchemy will use these default methods for creating primary key values if no Sequence is present on the table metadata.</p>
+
+    <p>A sequence can also be specified with <span class="codeline">optional=True</span> which indicates the Sequence should only be used on a database that requires an explicit sequence, and not those that supply some other method of providing integer values. At the moment, it essentially means "use this sequence only with Oracle and not Postgres".</p>
+    </&>
+    </&>
+    <&|doclib.myt:item, name="indexes", description="Defining Indexes" &>
+    <p>Indexes can be defined on table columns, including named indexes, non-unique or unique, multiple column. Indexes are included along with table create and drop statements. They are not used for any kind of run-time constraint checking...SQLAlchemy leaves that job to the expert on constraint checking, the database itself.</p>
+    <&|formatting.myt:code&>
+        mytable = Table('mytable', engine,
+
+            # define a unique index
+            Column('col1', Integer, unique=True),
+
+            # define a unique index with a specific name
+            Column('col2', Integer, unique='mytab_idx_1'),
+
+            # define a non-unique index
+            Column('col3', Integer, index=True),
+
+            # define a non-unique index with a specific name
+            Column('col4', Integer, index='mytab_idx_2'),
+
+            # pass the same name to multiple columns to add them to the same index
+            Column('col5', Integer, index='mytab_idx_2'),
+
+            Column('col6', Integer),
+            Column('col7', Integer)
+        )
+
+        # create the table. all the indexes will be created along with it.
+        mytable.create()
+
+        # indexes can also be specified standalone
+        i = Index('mytab_idx_3', mytable.c.col6, mytable.c.col7, unique=False)
+
+        # which can then be created separately (will also get created with table creates)
+        i.create()
+
+    </&>
+    </&>
 <&|doclib.myt:item, name="adapting", description="Adapting Tables to Alternate Engines" &>
-    <p>Occasionally an application will need to reference the same tables within multiple databases simultaneously. Since a Table object is specific to a SQLEngine, an extra method is provided to create copies of the Table object for a different SQLEngine instance, which can represent a different set of connection parameters, or a totally different database driver:
+    <p>A Table object created against a specific engine can be re-created against a new engine using the <span class="codeline">toengine</span> method:</p>
 
     <&|formatting.myt:code&>
         # create two engines
@@ -153,7 +293,7 @@
         pg_users = users.toengine(postgres_engine)
     </&>
 
-    <p>You can also create tables using a "database neutral" engine, which can serve as a starting point for tables that are then adapted to specific engines:</p>
+    <p>Also available is the "database neutral" ansisql engine:</p>
     <&|formatting.myt:code&>
         import sqlalchemy.ansisql as ansisql
         generic_engine = ansisql.engine()
@@ -162,30 +302,27 @@
             Column('user_id', Integer),
             Column('user_name', String(50))
         )
+    </&>
+    <p>Flexible "multi-engined" tables can also be achieved via the proxy engine, described in the section <&formatting.myt:link, path="dbengine_proxy"&>.</p>
 
-        sqlite_engine = create_engine('sqlite', {'filename':'querytest.db'})
-        sqlite_users = users.toengine(sqlite_engine)
-        sqlite_users.create()
-    </&>
+    <&|doclib.myt:item, name="primitives", description="Non-engine primitives: TableClause/ColumnClause" &>
+
+    <p>TableClause and ColumnClause are "primitive" versions of the Table and Column objects which dont use engines at all; applications that just want to generate SQL strings but not directly communicate with a database can use TableClause and ColumnClause objects, which are non-singleton and serve as the "lexical" base class of Table and Column:</p>
+    <&|formatting.myt:code&>
+        tab1 = TableClause('table1',
+            ColumnClause('id'),
+            ColumnClause('name'))
+
+        tab2 = TableClause('table2',
+            ColumnClause('id'),
+            ColumnClause('email'))
+
+        tab1.select(tab1.c.name == 'foo')
+    </&>
 
+    <p>TableClause and ColumnClause are strictly lexical. This means they are fully supported within the full range of SQL statement generation, but they don't support schema concepts like creates, drops, primary keys, defaults, nullable status, indexes, or foreign keys.</p>
 </&>
+    </&>
 
-    <&|doclib.myt:item, name="sequences", description="Defining Sequences" &>
-    <P>A table with a sequence looks like:</p>
-    <&|formatting.myt:code&>
-        table = Table("cartitems", db,
-            Column("cart_id", Integer, Sequence('cart_id_seq'), primary_key=True),
-            Column("description", String(40)),
-            Column("createdate", DateTime())
-        )
-    </&>
-    <p>The Sequence is used when a Postgres or Oracle database schema defines a sequence of a specific name which must be used to create integer values. If a Sequence is not defined, Postgres will default to regular SERIAL access. Oracle currently has no default primary key method; so explicit primary key values or Sequence objects are required to insert new rows.</p>
 
-<p>Defining a Sequence means that it will be created along with the table.create() call, and that the sequence will be explicitly used when inserting new rows for this table, for databases that support sequences. If the Table is connected to a database that doesnt support sequences, the Sequence object is simply ignored. Note that a Sequence object is <b>entirely optional for all databases except Oracle</b>, as other databases offer options for auto-creating primary key values, such as AUTOINCREMENT, SERIAL, etc. SQLAlchemy will use these default methods for creating primary key values if no Sequence is present on the table metadata.</p>
-
-<p>A sequence can also be specified with <span class="codeline">optional=True</span> which indicates the Sequence should only be used on a database that requires an explicit sequence, and not those that supply some other method of providing integer values. At the moment, it essentially means "use this sequence only with Oracle and not Postgres".</p>
-
-<p>More docs TODO in this area include the ColumnDefault and PassiveDefault objects which provide more options to automatic generation of column values.</p>
-    </&>
-
 </&>
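The defaults documentation added above shows a counter-style Python callable passed as default=. As written there, the nested mydefault cannot rebind the outer i (calling it raises UnboundLocalError), so a working variant needs a mutable counter. A self-contained sketch, with hypothetical table and engine names:

    import itertools

    _ids = itertools.count(1)

    def mydefault():
        # invoked once for each INSERT that does not supply the column
        return next(_ids)

    assert mydefault() == 1 and mydefault() == 2

    # usage, assuming a 'db' engine and the sqlalchemy names are imported:
    # t = Table("mytable", db,
    #     Column('id', Integer, primary_key=True, default=mydefault),
    #     Column('key', String(10), default="default")
    # )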
Modified: sqlalchemy/trunk/doc/build/content/pooling.myt (1126 => 1127)
--- sqlalchemy/trunk/doc/build/content/pooling.myt	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/doc/build/content/pooling.myt	2006-03-13 00:24:54 UTC (rev 1127)
@@ -27,10 +27,40 @@
     </p>
     <ul>
     <li>echo=False : if set to True, connections being pulled and retrieved from/to the pool will be logged to the standard output, as well as pool sizing information.</li>
-    <li>use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the <b>same</b> connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using.</li>
-    <li>poolclass=QueuePool : the Pool class used by the pool module to provide pooling. QueuePool uses the Python <span class="codeline">Queue.Queue</span> class to maintain a list of available connections. A developer can supply his or her own Pool class to supply a different pooling algorithm.</li>
+    <li>use_threadlocal=True : if set to True, repeated calls to connect() within the same application thread will be guaranteed to return the <b>same</b> connection object, if one has already been retrieved from the pool and has not been returned yet. This allows code to retrieve a connection from the pool, and then while still holding on to that connection, to call other functions which also ask the pool for a connection of the same arguments; those functions will act upon the same connection that the calling method is using. Note that once the connection is returned to the pool, it then may be used by another thread. To guarantee a single unique connection per thread that <b>never</b> changes, use the option <span class="codeline">poolclass=SingletonThreadPool</span>, in which case the use_threadlocal parameter is automatically set to False.</li>
+    <li>poolclass=QueuePool : the Pool class used by the pool module to provide pooling. QueuePool uses the Python <span class="codeline">Queue.Queue</span> class to maintain a list of available connections. A developer can supply his or her own Pool class to supply a different pooling algorithm. Also included is the ThreadSingletonPool, which provides a single distinct connection per thread and is required with SQLite.</li>
     <li>pool_size=5 : used by QueuePool - the size of the pool to be maintained. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain.</li>
-    <li>max_overflow=10 : the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.</li>
+    <li>max_overflow=10 : used by QueuePool - the maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of "sleeping" connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections.</li>
     </ul>
 </&>
+
+    <&|doclib.myt:item, name="custom", description="Custom Pool Construction" &>
+    <p>One level below using a DBProxy to make transparent pools is creating the pool yourself. The pool module comes with two implementations of connection pools: <span class="codeline">QueuePool</span> and <span class="codeline">SingletonThreadPool</span>. While QueuePool uses Queue.Queue to provide connections, SingletonThreadPool provides a single per-thread connection which SQLite requires.</p>
+
+    <p>Constructing your own pool involves passing a callable used to create a connection. Through this method, custom connection schemes can be made, such as a connection that automatically executes some initialization commands to start. The options from the previous section can be used as they apply to QueuePool or SingletonThreadPool.</p>
+    <&|formatting.myt:code, title="Plain QueuePool"&>
+        import sqlalchemy.pool as pool
+        import psycopg2
+
+        def getconn():
+            c = psycopg2.connect(username='ed', host='127.0.0.1', dbname='test')
+            # execute an initialization function on the connection before returning
+            c.cursor.execute("setup_encodings()")
+            return c
+
+        p = pool.QueuePool(getconn, max_overflow=10, pool_size=5, use_threadlocal=True)
+    </&>
+
+    <&|formatting.myt:code, title="SingletonThreadPool"&>
+        import sqlalchemy.pool as pool
+        import sqlite
+
+        def getconn():
+            return sqlite.connect(filename='myfile.db')
+
+        # SQLite connections require the SingletonThreadPool
+        p = pool.SingletonThreadPool(getconn)
+    </&>
+
+    </&>
 </&>
\ No newline at end of file
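The use_threadlocal option described in the pooling section above hands the same checked-out connection back to repeated connect() calls made within one thread. A toy illustration of that checkout behaviour, not sqlalchemy.pool itself:

    import threading

    class ThreadLocalCheckout:
        """Hands one connection per thread until it is explicitly released."""
        def __init__(self, creator):
            self._creator = creator
            self._local = threading.local()

        def connect(self):
            conn = getattr(self._local, 'conn', None)
            if conn is None:
                conn = self._local.conn = self._creator()
            return conn

        def release(self):
            # after release, a later connect() (possibly in another thread) creates a fresh one
            self._local.conn = None

    checkout = ThreadLocalCheckout(creator=object)
    assert checkout.connect() is checkout.connect()   # same object within this thread
    checkout.release()
    assert checkout.connect() is not None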
Modified: sqlalchemy/trunk/lib/sqlalchemy/ansisql.py (1126 => 1127)
--- sqlalchemy/trunk/lib/sqlalchemy/ansisql.py	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/lib/sqlalchemy/ansisql.py	2006-03-13 00:24:54 UTC (rev 1127)
@@ -118,7 +118,8 @@
         objects compiled within this object. The output is dependent on the paramstyle
         of the DBAPI being used; if a named style, the return result will be a dictionary
         with keynames matching the compiled statement. If a positional style, the output
-        will be a list corresponding to the bind positions in the compiled statement.
+        will be a list, with an iterator that will return parameter
+        values in an order corresponding to the bind positions in the compiled statement.
 
         for an executemany style of call, this method should be called for each element
         in the list of parameter groups that will ultimately be executed.
@@ -129,32 +130,23 @@
         bindparams = {}
         bindparams.update(params)
 
+        d = sql.ClauseParameters(self.engine)
         if self.positional:
-            d = OrderedDict()
             for k in self.positiontup:
                 b = self.binds[k]
-                if self.engine is not None:
-                    d[k] = b.typeprocess(b.value, self.engine)
-                else:
-                    d[k] = b.value
+                d.set_parameter(k, b.value, b)
         else:
-            d = {}
             for b in self.binds.values():
-                if self.engine is not None:
-                    d[b.key] = b.typeprocess(b.value, self.engine)
-                else:
-                    d[b.key] = b.value
+                d.set_parameter(b.key, b.value, b)
 
         for key, value in bindparams.iteritems():
             try:
                 b = self.binds[key]
             except KeyError:
                 continue
-            if self.engine is not None:
-                d[b.key] = b.typeprocess(value, self.engine)
-            else:
-                d[b.key] = value
+            d.set_parameter(b.key, value, b)
 
+        #print "FROM", params, "TO", d
         return d
 
     def get_named_params(self, parameters):

Modified: sqlalchemy/trunk/lib/sqlalchemy/databases/firebird.py (1126 => 1127)
--- sqlalchemy/trunk/lib/sqlalchemy/databases/firebird.py	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/lib/sqlalchemy/databases/firebird.py	2006-03-13 00:24:54 UTC (rev 1127)
@@ -176,19 +176,8 @@
         return self.context.last_inserted_ids
 
     def pre_exec(self, proxy, compiled, parameters, **kwargs):
-        # this is just an assertion that all the primary key columns in an insert statement
-        # have a value set up, or have a default generator ready to go
-        if getattr(compiled, "isinsert", False):
-            if isinstance(parameters, list):
-                plist = parameters
-            else:
-                plist = [parameters]
-            for param in plist:
-                for primary_key in compiled.statement.table.primary_key:
-                    if not param.has_key(primary_key.key) or param[primary_key.key] is None:
-                        if primary_key.default is None:
-                            raise "Column '%s.%s': Firebird primary key columns require a default value or a schema.Sequence to create ids" % (primary_key.table.name, primary_key.name)
-
+        pass
+
     def _executemany(self, c, statement, parameters):
         rowcount = 0
         for param in parameters:
Modified: sqlalchemy/trunk/lib/sqlalchemy/databases/mysql.py (1126 => 1127)
--- sqlalchemy/trunk/lib/sqlalchemy/databases/mysql.py	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/lib/sqlalchemy/databases/mysql.py	2006-03-13 00:24:54 UTC (rev 1127)
@@ -134,7 +134,7 @@
     def __init__(self, opts, module = None, **params):
         if module is None:
             self.module = mysql
-        self.opts = opts or {}
+        self.opts = self._translate_connect_args(('host', 'db', 'user', 'passwd'), opts)
         ansisql.ANSISQLEngine.__init__(self, **params)
 
     def connect_args(self):

Modified: sqlalchemy/trunk/lib/sqlalchemy/databases/oracle.py (1126 => 1127)
--- sqlalchemy/trunk/lib/sqlalchemy/databases/oracle.py	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/lib/sqlalchemy/databases/oracle.py	2006-03-13 00:24:54 UTC (rev 1127)
@@ -90,7 +90,7 @@
 class OracleSQLEngine(ansisql.ANSISQLEngine):
     def __init__(self, opts, use_ansi = True, module = None, **params):
         self._use_ansi = use_ansi
-        self.opts = opts or {}
+        self.opts = self._translate_connect_args((None, 'dsn', 'user', 'password'), opts)
         if module is None:
             self.module = cx_Oracle
         else:
@@ -181,18 +181,7 @@
         return self.context.last_inserted_ids
 
     def pre_exec(self, proxy, compiled, parameters, **kwargs):
-        # this is just an assertion that all the primary key columns in an insert statement
-        # have a value set up, or have a default generator ready to go
-        if getattr(compiled, "isinsert", False):
-            if isinstance(parameters, list):
-                plist = parameters
-            else:
-                plist = [parameters]
-            for param in plist:
-                for primary_key in compiled.statement.table.primary_key:
-                    if not param.has_key(primary_key.key) or param[primary_key.key] is None:
-                        if primary_key.default is None:
-                            raise "Column '%s.%s': Oracle primary key columns require a default value or a schema.Sequence to create ids" % (primary_key.table.name, primary_key.name)
+        pass
 
     def _executemany(self, c, statement, parameters):
         rowcount = 0

Modified: sqlalchemy/trunk/lib/sqlalchemy/databases/postgres.py (1126 => 1127)
--- sqlalchemy/trunk/lib/sqlalchemy/databases/postgres.py	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/lib/sqlalchemy/databases/postgres.py	2006-03-13 00:24:54 UTC (rev 1127)
@@ -181,7 +181,7 @@
             self.version = 1
         except:
             self.version = 1
-        self.opts = opts or {}
+        self.opts = self._translate_connect_args(('host', 'database', 'user', 'password'), opts)
        if self.opts.has_key('port'):
            if self.version == 2:
                self.opts['port'] = int(self.opts['port'])
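The dialect changes above all call a new _translate_connect_args helper, added in the engine.py hunk that follows, which maps the generic names (host/hostname, database/db/dbname, user/username, password/passwd/pw) onto whatever keyword each DBAPI expects. A rough standalone sketch of that translation idea, with illustrative names only:

    SYNONYMS = {
        'host':     ('host', 'hostname'),
        'database': ('database', 'db', 'dbname'),
        'user':     ('user', 'username'),
        'password': ('password', 'passwd', 'pw'),
    }

    def translate_connect_args(dbapi_names, args):
        """Map generic connect args onto the names a specific DBAPI expects.

        dbapi_names pairs each generic slot with the DBAPI's keyword, e.g.
        {'host': 'host', 'database': 'db', 'user': 'user', 'password': 'passwd'}
        for a MySQLdb-style module."""
        out = dict(args)
        for generic, target in dbapi_names.items():
            for synonym in SYNONYMS[generic]:
                if synonym in out and synonym != target:
                    out[target] = out.pop(synonym)
        return out

    mysql_args = translate_connect_args(
        {'host': 'host', 'database': 'db', 'user': 'user', 'password': 'passwd'},
        {'hostname': 'localhost', 'dbname': 'test', 'username': 'scott', 'password': 'tiger'},
    )
    assert mysql_args == {'host': 'localhost', 'db': 'test', 'user': 'scott', 'passwd': 'tiger'}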
Modified: sqlalchemy/trunk/lib/sqlalchemy/engine.py (1126 => 1127)
--- sqlalchemy/trunk/lib/sqlalchemy/engine.py	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/lib/sqlalchemy/engine.py	2006-03-13 00:24:54 UTC (rev 1127)
@@ -203,6 +203,25 @@
         self._figure_paramstyle()
         self.logger = logger or util.Logger(origin='engine')
 
+    def _translate_connect_args(self, names, args):
+        """translates a dictionary of connection arguments to those used by a specific dbapi.
+        the names parameter is a tuple of argument names in the form ('host', 'database', 'user', 'password')
+        where the given strings match the corresponding argument names for the dbapi. Will return a dictionary
+        with the dbapi-specific parameters, the generic ones removed, and any additional parameters still remaining,
+        from the dictionary represented by args. Will return a blank dictionary if args is null."""
+        if args is None:
+            return {}
+        a = args.copy()
+        standard_names = [('host','hostname'), ('database', 'dbname'), ('user', 'username'), ('password', 'passwd', 'pw')]
+        for n in names:
+            sname = standard_names.pop(0)
+            if n is None:
+                continue
+            for sn in sname:
+                if sn != n and a.has_key(sn):
+                    a[n] = a[sn]
+                    del a[sn]
+        return a
     def _get_ischema(self):
         # We use a property for ischema so that the accessor
         # creation only happens as needed, since otherwise we
@@ -563,7 +582,6 @@
             parameters = [compiled.get_params(**m) for m in parameters]
         else:
             parameters = compiled.get_params(**parameters)
-
         def proxy(statement=None, parameters=None):
             if statement is None:
                 return cursor

Modified: sqlalchemy/trunk/lib/sqlalchemy/mapping/mapper.py (1126 => 1127)
--- sqlalchemy/trunk/lib/sqlalchemy/mapping/mapper.py	2006-03-10 05:03:17 UTC (rev 1126)
+++ sqlalchemy/trunk/lib/sqlalchemy/mapping/mapper.py	2006-03-13 00:24:54 UTC (rev 1127)
@@ -651,8 +651,8 @@
         for c in table.c:
             if c.primary_key or not params.has_key(c.name):
                 continue
-            if self._getattrbycolumn(obj, c) != params[c.name]:
-                self._setattrbycolumn(obj, c, params[c.name])
+            if self._getattrbycolumn(obj, c) != params.get_original(c.name):
+                self._setattrbycolumn(obj, c, params.get_original(c.name))
 
     def delete_obj(self, objects, uow):

[truncated message content]