sqlalchemy-tickets Mailing List for SQLAlchemy (Page 31)
Brought to you by:
zzzeek
|
From: Mike B. <iss...@bi...> - 2015-11-21 21:32:50
|
New issue 3592: logic to ensure parent cols present in joined eager load inappropriately turns off for m2m https://bitbucket.org/zzzeek/sqlalchemy/issues/3592/logic-to-ensure-parent-cols-present-in

Mike Bayer:

This issue is usually missed because a. we always undefer the primary columns of the parent mapper, and b. the join condition of a relationship almost always refers to the primary columns of the parent mapper. If we are using joined-table inheritance, then c. the PK col in the subclass is usually named the same as the parent PK col, is grouped into the same attribute name, and is therefore undeferred via the logic in https://bitbucket.org/zzzeek/sqlalchemy/src/ef9a4cb60b4e7fe305367c5223e8bb2cbf2b3b0f/lib/sqlalchemy/orm/strategies.py?at=master&fileviewer=file-view-default#strategies.py-230. If all of the above is *not* the case, then the logic at https://bitbucket.org/zzzeek/sqlalchemy/src/ef9a4cb60b4e7fe305367c5223e8bb2cbf2b3b0f/lib/sqlalchemy/orm/strategies.py?at=master&fileviewer=file-view-default#strategies.py-1370 adds the column to the query anyway, but *only* if there's no secondaryjoin. The whole block here appears to be uncovered by tests in any case.

So the criteria are:

1. the query defers/excludes a parent column used in the join condition
2. joinedload is used
3. the excluded column is not part of the *mapper* primary key of the parent; note this includes both columns that are not part of the PK at all and differently-named PK cols on a joined-inh subtable
4. the relationship includes "secondary"
5. the query uses limit/offset, so it is subject to the subquery-on-joinedload behavior.
If you have all five of those things, here's the bug:

``` #!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)

class ASub(A):
    __tablename__ = 'asub'
    sub_id = Column(ForeignKey('a.id'), primary_key=True)
    bs = relationship("B", secondary=Table(
        'atob', Base.metadata,
        Column('aid', ForeignKey('asub.sub_id')),
        Column('bid', ForeignKey('b.id'))
    ))

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.query(ASub).options(load_only('id'), joinedload(ASub.bs)).limit(10).all()
```

error:

``` #!sql
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such column: asub.sub_id
[SQL: u'SELECT anon_1.a_id AS anon_1_a_id, anon_2.b_1_id AS b_1_id \nFROM (SELECT a.id AS a_id \nFROM a JOIN asub ON a.id = asub.sub_id\n LIMIT ? OFFSET ?) AS anon_1 LEFT OUTER JOIN (SELECT atob_1.aid AS atob_1_aid, atob_1.bid AS atob_1_bid, b_1.id AS b_1_id \nFROM atob AS atob_1 JOIN b AS b_1 ON b_1.id = atob_1.bid) AS anon_2 ON asub.sub_id = anon_2.atob_1_aid'] [parameters: (10, 0)]
```

patch!

``` #!diff
diff --git a/lib/sqlalchemy/orm/strategies.py b/lib/sqlalchemy/orm/strategies.py
index 67dac1c..7de470d 100644
--- a/lib/sqlalchemy/orm/strategies.py
+++ b/lib/sqlalchemy/orm/strategies.py
@@ -1367,8 +1367,7 @@ class JoinedLoader(AbstractRelationshipLoader):
         # send a hint to the Query as to where it may "splice" this join
         eagerjoin.stop_on = entity.selectable

-        if self.parent_property.secondary is None and \
-            not parentmapper:
+        if not parentmapper:
             # for parentclause that is the non-eager end of the join,
             # ensure all the parent cols in the primaryjoin are actually
             # in the
```
|
|
From: Serge v. d. B. <iss...@bi...> - 2015-11-20 17:54:34
|
New issue 3591: Update with subselect on Sqlite produces bad SQL https://bitbucket.org/zzzeek/sqlalchemy/issues/3591/update-with-subselect-on-sqlite-produces

Serge van den Boom:

I have (in my simplified example) a table 'thing' with an entity 'value'. I have another table, 'replacement', with columns 'old' and 'new'. I wish to update 'thing.value' with 'new' from 'replacement', wherever 'thing.value' matches 'old'. Because Sqlite supports neither 'join' in 'update' nor 'update from', I use a subselect to achieve this, with the SqlAlchemy ORM functions:

``` #!python
selectQuery = session.query(Replacement.new).filter(Replacement.old == Thing.value).subquery()
session.query(Thing).update({Thing.value: selectQuery}, synchronize_session=False)
```

This produces an SQL error:

```
sqlalchemy.exc.OperationalError: (pysqlite2.dbapi2.OperationalError) near "SELECT": syntax error
[SQL: u'UPDATE thing SET value=SELECT replacement.new \nFROM replacement \nWHERE replacement.old = thing.value']
```

This is because Sqlite requires parentheses around the subselect. A full example and a full stack trace are attached to this bug report. Version information:

* SQLAlchemy 1.0.9 (via pip)
* Sqlite 3.9.1 (from Ubuntu, libsqlite3-0_3.9.1-2_amd64.deb) |
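For reference, the parenthesization the reporter needs is exactly what a scalar subquery produces. Below is a minimal sketch of that workaround using `Query.scalar_subquery()` (the 1.0.x-era spelling was `Query.as_scalar()`); the `Thing`/`Replacement` models are reduced stand-ins for the reporter's schema:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    value = Column(String)

class Replacement(Base):
    __tablename__ = 'replacement'
    id = Column(Integer, primary_key=True)
    old = Column(String)
    new = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Thing(value='a'), Replacement(old='a', new='b')])
session.commit()

# scalar_subquery() wraps the correlated SELECT in parentheses,
# which is the syntax SQLite requires in UPDATE ... SET value=(...)
subq = (
    session.query(Replacement.new)
    .filter(Replacement.old == Thing.value)
    .scalar_subquery()
)
session.query(Thing).update({Thing.value: subq}, synchronize_session=False)
session.commit()
```

After the update, `session.query(Thing.value).scalar()` yields `'b'`, since the one `Thing` row matched `old='a'`.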
|
From: Milas B. <iss...@bi...> - 2015-11-18 19:44:07
|
New issue 3590: Eagerload is ignored if root object is already in session https://bitbucket.org/zzzeek/sqlalchemy/issues/3590/eagerload-is-ignored-if-root-object-is

Milas Bowman:

I have a relationship defaulted to `lazy='noload'` and on specific queries override this using `.options(eagerload())`. However, if the object being queried is already loaded into the session, it will not perform the eager load when queried. This is a regression in 1.0.9 -- it works properly in 1.0.8. I'm specifically seeing it in a unit test using a SQLite in-memory database. Here's a simple example (also attached):

``` #!python
from __future__ import print_function

from sqlalchemy import Column, Integer, ForeignKey, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import eagerload, relationship, sessionmaker

Base = declarative_base()
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    value = Column(String(50))
    bars = relationship('Bar', back_populates='foo', lazy='noload')

class Bar(Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)
    foo_id = Column(ForeignKey('foo.id'))
    foo = relationship('Foo', back_populates='bars', lazy='noload')

def initialize():
    Base.metadata.create_all(engine)
    session = Session()
    f = Foo(value='foobar')
    b = Bar(foo=f)
    session.add(b)
    session.commit()
    session.close()

if __name__ == '__main__':
    initialize()
    session = Session()

    bar = session.query(Bar).one()
    print('Foo should be None - ', bar.foo)

    bar_with_foo_query = session.query(Bar).options(eagerload(Bar.foo))
    bar_missing_foo = bar_with_foo_query.one()
    print('Foo is incorrectly None - ', bar_missing_foo.foo)

    # invalidate the session cache to force it to reload
    session.close()
    bar_with_foo = bar_with_foo_query.one()
    print('Foo is loaded correctly - ', bar_with_foo.foo)
```

Let me know if there's any additional information I can provide. |
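A sketch of a possible workaround, assuming the behavior hinges on the object already sitting in the identity map: `Query.populate_existing()` forces already-present objects to be refreshed, so the eager-load option is applied on the second query rather than skipped. This uses `joinedload` (the modern name for `eagerload`) and the reporter's models in reduced form:

```python
from sqlalchemy import Column, Integer, ForeignKey, String, create_engine
from sqlalchemy.orm import declarative_base, joinedload, relationship, Session

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    value = Column(String(50))
    bars = relationship('Bar', back_populates='foo', lazy='noload')

class Bar(Base):
    __tablename__ = 'bar'
    id = Column(Integer, primary_key=True)
    foo_id = Column(ForeignKey('foo.id'))
    foo = relationship('Foo', back_populates='bars', lazy='noload')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Bar(foo=Foo(value='foobar')))
session.commit()

# first load puts Bar into the session with its default noload behavior
bar = session.query(Bar).one()

# populate_existing() refreshes objects already in the identity map,
# so the joinedload option takes effect despite the identity-map hit
bar = (
    session.query(Bar)
    .options(joinedload(Bar.foo))
    .populate_existing()
    .one()
)
```

After this, `bar.foo` is populated without having to close and reopen the session as in the reporter's script.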
|
From: Greg S. <iss...@bi...> - 2015-11-18 13:42:19
|
New issue 3589: Timestamp column syntax error in migration https://bitbucket.org/zzzeek/sqlalchemy/issues/3589/timestamp-column-syntax-error-in-migration

Greg Selvin:

I have a Model class with the database column:

``` #!python
date_created = db.Column(db.DateTime, default=datetime.utcnow)
```

this generates the following code in my xxx_migrations.py code:

``` #!python
Column('date_created', DateTime, default=ColumnDefault(<function ColumnDefault._maybe_wrap_callable.<locals>.<lambda> at 0x1045b22f0>))
```

Here is the code I borrowed for my db_migrate.py script:

``` #!python
import imp
from migrate.versioning import api
from app import db
from config import SQLALCHEMY_DATABASE_URI
from config import SQLALCHEMY_MIGRATE_REPO

v = api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
migration = SQLALCHEMY_MIGRATE_REPO + ('/versions/%03d_migration.py' % (v+1))
tmp_module = imp.new_module('old_model')
old_model = api.create_model(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
exec(old_model, tmp_module.__dict__)
script = api.make_update_script_for_model(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO, tmp_module.meta, db.metadata)
open(migration, "wt").write(script)
api.upgrade(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
v = api.db_version(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_MIGRATE_REPO)
print('New migration saved as ' + migration)
print('Current database version: ' + str(v))
```

I'm using SQLAlchemy 1.0.9 and Flask-SQLAlchemy 2.1. I don't think it matters, but I'm using a PostgreSQL 9.4 database. Any idea what I'm doing wrong here? |
|
From: Jeff W. <iss...@bi...> - 2015-11-16 23:45:22
|
New issue 3588: Extend view reflection in PostgreSQL to support differentiating between normal views and materialized views https://bitbucket.org/zzzeek/sqlalchemy/issues/3588/extend-view-reflection-in-postgresql-to Jeff Widman: I'm trying to write a simple python function that refreshes all the materialized views in a PostgreSQL db. Ideally the core would include a 'refresh_all()' that does this automagically, but it gets complicated quickly because some views might depend on other views. So instead I'd like for sqlalchemy to support querying the db to get a list of all the materialized view names #2891 nicely added support for PostgreSQL materialized views in inspect.get_view_names() However, it's not possible to limit the query to only normal views or only materialized views. Nor does the list of returned view names include anything about whether they're normal views vs materialized views... That'd probably be simpler--just return all view names, but also include what type of view it is. |
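Until such filtering exists in the inspector, the information the reporter wants is available directly from PostgreSQL's `pg_matviews` catalog view (present since PostgreSQL 9.3), which lists only materialized views and also carries their definitions. A sketch of composing that query with SQLAlchemy (executing it of course requires a live PostgreSQL connection, which is assumed here):

```python
from sqlalchemy import text

# pg_matviews lists only materialized views; its `definition` column holds
# the view's SELECT, and `schemaname` lets us scope the lookup
matview_query = text(
    "SELECT matviewname, definition FROM pg_matviews "
    "WHERE schemaname = :schema"
)

# usage sketch, assuming `conn` is a connection to a PostgreSQL database:
# matviews = {name: definition
#             for name, definition in conn.execute(matview_query,
#                                                  {"schema": "public"})}
```

Regular (non-materialized) views can be listed the same way from `pg_views`, so a caller can partition `get_view_names()` output into the two kinds.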
|
From: Jeff W. <iss...@bi...> - 2015-11-16 23:34:25
|
New issue 3587: inspect.get_view_defintion(postgresql_materialized_view) returns None https://bitbucket.org/zzzeek/sqlalchemy/issues/3587/inspectget_view_defintion Jeff Widman: PostgreSQL database with a couple of materialized views: Calling inspect.get_view_names() correctly returns the names of the materialized views. But when I call inspect.get_view_definition() against any of those views it returns None rather than the table definition. |
|
From: Tim D. <iss...@bi...> - 2015-11-16 20:42:05
|
New issue 3586: numpy int64 garbled upon insert https://bitbucket.org/zzzeek/sqlalchemy/issues/3586/numpy-int64-garbled-upon-insert

Tim Diels:

`numpy.int64` is garbled upon insert. This may apply to other numpy types as well. The workaround is to cast with `int` before inserting. Problem observed on latest commit (0847097c29ab48a5f111518e2c6ee324d5242057) and on v1.0.8. Test case:

``` #!python
from sqlalchemy import create_engine, MetaData, Column, Integer, Table
import numpy as np

engine = create_engine('sqlite:///test.db', echo=True)
metadata = MetaData()
test_table = Table('test', metadata,
    Column('id', Integer, primary_key=True),
    Column('number', Integer)
)
metadata.drop_all(engine)  # clean
metadata.create_all(engine)
engine.execute(test_table.insert(), {'number': np.int64(123)})
result = engine.execute(test_table.select()).first()
assert result[1] == 123, result
```
|
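One way to centralize the `int()` workaround rather than scattering casts through application code is a `TypeDecorator` that coerces any int-like value on the way to the DBAPI. This is a sketch (the `CoercingInteger` name is made up, and NumPy is deliberately not imported here; `int()` handles NumPy integer scalars and plain ints alike):

```python
from sqlalchemy import create_engine, Column, Integer, MetaData, Table
from sqlalchemy.types import TypeDecorator

class CoercingInteger(TypeDecorator):
    """Integer that casts int-like scalars (e.g. numpy.int64) to plain
    Python int before they reach the DBAPI."""
    impl = Integer
    cache_ok = True

    def process_bind_param(self, value, dialect):
        return int(value) if value is not None else None

metadata = MetaData()
test_table = Table('test', metadata,
    Column('id', Integer, primary_key=True),
    Column('number', CoercingInteger)
)

engine = create_engine('sqlite://')
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(test_table.insert(), {'number': 123})
    result = conn.execute(test_table.select()).first()
```

With this in place, callers can pass `np.int64(123)` (or any int-like scalar) and the bound parameter arrives at the driver as a plain `int`.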
|
From: Mat M. <iss...@bi...> - 2015-11-16 14:46:45
|
New issue 3585: Extending pymssql dialect is_disconnect https://bitbucket.org/zzzeek/sqlalchemy/issues/3585/extending-pymssql-dialect-is_disconnect Mat Moore: Hi, I am using SQLAlchemy 1.0.9 with pymssql, and recently I encountered an occasional issue where I run into a database write error. The last time this happened, the error was ``` #!python File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 1034, in execute bind, close_with_result=True).execute(clause, params or {}) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute return meth(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection return connection._execute_clauseelement(self, multiparams, params) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement compiled_sql, distilled_params File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context context) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception exc_info File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause reraise(type(exception), exception, tb=exc_tb) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context context) File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute cursor.execute(statement, parameters) File "pymssql.pyx", line 465, in pymssql.Cursor.execute (pymssql.c:7527) OperationalError: (pymssql.OperationalError) (20006, 'DB-Lib error message 20006, severity 9:\nWrite to the server failed\n' ``` And all future queries from the same engine gave an error like: ``` #!python File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py", line 1038, in first return self._non_result(None) File 
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/result.py", line 920, in _non_result "This result object does not return rows. " ResourceClosedError: This result object does not return rows. It has been closed automatically. ``` When this occurred I was able to verify in SQL server's activity monitor that the connection wasn't there any more, but it seems like SQLAlchemy still treated it as open. I am creating my engine through flask_sqlalchemy which doesn't seem to pass any special options for SQL server by default. This may be fixable by setting a pool recycle (as documented here http://docs.sqlalchemy.org/en/latest/core/pooling.html#setting-pool-recycle) but it led me to wonder about how the disconnect decision is reached for this driver, since in my case it didn't close the connection. The pymssql dialect currently considers these messages as errors that should result in a disconnect: ``` #!python "Adaptive Server connection timed out", "Net-Lib error during Connection reset by peer", "message 20003", # connection timeout "Error 10054", "Not connected to any MS SQL server", "Connection is closed" ``` Like 20003, my 20006 error code comes from db-lib, inside freetds, and there is a lot of other error codes defined here: https://github.com/brianb/FreeTDS/blob/master/include/sybdb.h My question is: is it appropriate to extend this list so that SQLAlchemy can recover from this kind of situation, and if so exactly what scenarios should be treated as a disconnect? |
|
From: Yegor R. <iss...@bi...> - 2015-11-13 11:56:31
|
New issue 3584: Column.copy and TypeDecorator bug in 1.1 beta https://bitbucket.org/zzzeek/sqlalchemy/issues/3584/columncopy-and-typedecorator-bug-in-11 Yegor Roganov: As per https://bitbucket.org/zzzeek/sqlalchemy/commits/ed535649d423 `TypeDecorator` inherits from `SchemaEventTarget`, thus this statement ``` if isinstance(type_, SchemaEventTarget): type_ = type_.copy(**kw) ``` in `Column.copy` raises `TypeError: copy() got an unexpected keyword argument 'schema'` since `TypeDecorator.copy` doesn't accept any params. Part of the stacktrace: ``` File "alembic/env.py", line 24, in merge_metadata t.tometadata(base_metadata) File "env/lib/python3.4/site-packages/sqlalchemy/sql/schema.py", line 814, in tometadata args.append(c.copy(schema=schema)) File "env/lib/python3.4/site-packages/sqlalchemy/sql/schema.py", line 1330, in copy type_ = type_.copy(**kw) TypeError: copy() got an unexpected keyword argument 'schema' ``` |
|
From: thiefmaster <iss...@bi...> - 2015-11-12 14:52:40
|
New issue 3583: `obj.foo = set(obj.foo)` https://bitbucket.org/zzzeek/sqlalchemy/issues/3583/objfoo-set-objfoo

thiefmaster:

I have an `association_proxy` on a set-like relationship. Normal set operations work perfectly when it comes to not adding duplicates, but when I replace the whole set, e.g. using `foo.bar = set(foo.bar)` (in my real application the right side comes from a form), I get errors due to duplicate rows being inserted.

```python
from sqlalchemy import *
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import *

Base = declarative_base()

class A(Base):
    __tablename__ = 'test_a'
    id = Column(Integer, primary_key=True)
    b_rel = relationship('B', collection_class=set, cascade='all, delete-orphan')
    b = association_proxy('b_rel', 'value', creator=lambda x: B(value=x))

class B(Base):
    __tablename__ = 'test_b'
    __table_args__ = UniqueConstraint('a_id', 'value'),
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('test_a.id'), nullable=False)
    value = Column(String)

e = create_engine('sqlite:///:memory:', echo=True)
# e = create_engine('postgresql:///test', echo=True)
Base.metadata.create_all(e)
# e.execute('TRUNCATE test_a, test_b;')
s = Session(e)

a = A()
a.b = {'x', 'y', 'z'}
s.add(a)
s.commit()

print
print 'adding existing element to set'
a.b.add('x')
s.flush()

print
print 'assigning same items to set'
a.b = set(a.b)
s.flush()
```
|
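A common way to sidestep this class of problem is to avoid rebinding the attribute at all and instead mutate the existing collection in place, so unchanged members are never deleted-and-reinserted within one flush. Below is a hypothetical helper sketching that idea; it is demonstrated on plain sets, and the same `-=`/`|=` calls work on a set-like association proxy collection:

```python
def update_set_in_place(current, new_values):
    """Mutate `current` to equal `new_values` without touching members
    present in both, avoiding delete-then-reinsert of unchanged rows."""
    new_values = set(new_values)
    current -= current - new_values   # drop members no longer wanted
    current |= new_values - current   # add genuinely new members

# plain-set demonstration; for the reporter's case this would be
# update_set_in_place(a.b, values_from_form) instead of a.b = values_from_form
values = {'x', 'y', 'z'}
update_set_in_place(values, {'x', 'q'})
```

Since members common to both sides are left untouched, the unit of work never sees a delete and an insert of the same unique `(a_id, value)` pair in one flush.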
|
From: Yegor R. <iss...@bi...> - 2015-11-11 13:24:45
|
New issue 3582: Possible bug with polymorphic entities and `exists` https://bitbucket.org/zzzeek/sqlalchemy/issues/3582/possible-bug-with-polymorphic-entities-and

Yegor Roganov:

Given the class hierarchy from http://docs.sqlalchemy.org/en/rel_1_0/changelog/migration_10.html

```python
class Widget(Base):
    __tablename__ = 'widget'
    id = Column(Integer, primary_key=True)
    type = Column(String)
    related_id = Column(ForeignKey('related.id'))
    related = relationship("Related", backref="widget")
    __mapper_args__ = {'polymorphic_on': type}

class FooWidget(Widget):
    __mapper_args__ = {'polymorphic_identity': 'foo'}

class Related(Base):
    __tablename__ = 'related'
    id = Column(Integer, primary_key=True)
```

the query `session.query(exists().where(and_(FooWidget.related_id == 5, FooWidget.id == 6)))` produces strange SQL (`widget.type IN (:type_1)` should be in the subquery):

```sql
SELECT EXISTS (SELECT *
FROM widget
WHERE widget.related_id = :related_id_1 AND widget.id = :id_1) AS anon_1
FROM widget
WHERE widget.type IN (:type_1)
```

The query `session.query(session.query(FooWidget).filter(and_(FooWidget.related_id == 5, FooWidget.id == 6)).exists())` is also suboptimal:

```sql
SELECT EXISTS (SELECT 1
FROM widget
WHERE widget.related_id = :related_id_1 AND widget.id = :id_1 AND widget.type IN (:type_1)) AS anon_1
FROM widget
WHERE widget.type IN (:type_1)
```

(note the repeated `widget.type IN (:type_1)` clause) Not sure if this is a bug or I'm just using it wrong, but I figured I'd create an issue. Sorry for the hassle if it's not a bug. |
|
From: Johnathon H. <iss...@bi...> - 2015-11-10 22:08:54
|
New issue 3581: MS SQL Insert from CTE https://bitbucket.org/zzzeek/sqlalchemy/issues/3581/ms-sql-insert-from-cte Johnathon Hege: It appears that insert().from_select(), when passed a CTE, doesn't generate syntactically valid MS SQL code. The code it generates (using the recursive "parts" example in the CTE documentation) looks like: ``` #!sql INSERT INTO destination (part, sub_part, quantity) WITH anon_1(part, sub_part, quantity) AS (SELECT parts.sub_part AS sub_part, parts.part AS part, parts.quantity AS quantity FROM parts WHERE parts.part = ? UNION ALL SELECT parts_1.sub_part AS sub_part, parts_1.part AS part, parts_1.quantity AS quantity FROM parts AS parts_1, anon_1 AS anon_2 WHERE parts_1.part = anon_2.sub_part) SELECT anon_1.part, anon_1.sub_part, anon_1.quantity FROM anon_1 ``` But SQL Server requires the INSERT below the CTE definition, as below: ``` #!sql WITH anon_1(part, sub_part, quantity) AS (SELECT parts.sub_part AS sub_part, parts.part AS part, parts.quantity AS quantity FROM parts WHERE parts.part = ? UNION ALL SELECT parts_1.sub_part AS sub_part, parts_1.part AS part, parts_1.quantity AS quantity FROM parts AS parts_1, anon_1 AS anon_2 WHERE parts_1.part = anon_2.sub_part) INSERT INTO destination (part, sub_part, quantity) SELECT anon_1.part, anon_1.sub_part, anon_1.quantity FROM anon_1 ``` |
|
From: mrudula c. <iss...@bi...> - 2015-11-10 15:01:30
|
New issue 3580: sqlalchemy.exc.compileError: Unconsumed column Name https://bitbucket.org/zzzeek/sqlalchemy/issues/3580/sqlalchemyexccompileerror-unconsumed mrudula chougule: Hi, I am having python+gtk application. I'm currently porting my python application from Ubuntu 10.04 to 14.04. I have sqlalchemy version of 0.8.4-1Ubuntu2.1 on Ubuntu 14.04. When trying to run my application, when it is trying to insert column into database it is throwing this error: **sqlalchemy.exc.compileError: Unconsumed column Name ** Please share your views or ideas to resolve this error. I'm stuck!! Thanks, Mrudula Responsible: _diana_ |
|
From: mrudula c. <iss...@bi...> - 2015-11-10 14:59:57
|
New issue 3579: sqlalchemy.exc.compileError: Unconsumed column Name https://bitbucket.org/zzzeek/sqlalchemy/issues/3579/sqlalchemyexccompileerror-unconsumed mrudula chougule: Hi, I am having python+gtk application. I'm currently porting my python application from Ubuntu 10.04 to 14.04. I have sqlalchemy version of 0.8.4-1Ubuntu2.1 on Ubuntu 14.04. When trying to run my application, when it is trying to insert column into database it is throwing this error: **sqlalchemy.exc.compileError: Unconsumed column Name** Please share your views or ideas to resolve this error. I'm stuck!! Thanks, Mrudula Responsible: zzzeek |
|
From: Steve P. <iss...@bi...> - 2015-11-08 00:21:20
|
New issue 3578: stable build fails on RTD https://bitbucket.org/zzzeek/sqlalchemy/issues/3578/stable-build-fails-on-rtd Steve Piercy: See http://readthedocs.org/projects/sqlalchemy/builds/3464291/ |
|
From: Roman Z. <iss...@bi...> - 2015-11-05 14:13:26
|
New issue 3577: get_history() call on relationship may interfere with other attributes' histories https://bitbucket.org/zzzeek/sqlalchemy/issues/3577/get_history-call-on-relationship-may

Roman Zimmermann:

I am currently trying to get the original values of certain attributes in `before_commit`. For that I call `get_history` on some attributes. It seems that the history of all attributes is flushed when the history of the relationship is loaded. Here is a full test script.

``` #!python
from sqlalchemy import create_engine, Column, String, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import backref, relationship, sessionmaker
from sqlalchemy.orm.attributes import get_history, History
from sqlalchemy.sql.schema import ForeignKey

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Tag(Base):
    __tablename__ = 'tag'
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey(User.id))
    tag = Column(String)
    user = relationship(User, backref=backref('tags'))

engine = create_engine('sqlite:///:memory:', echo=True)
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

u = User(name='old')
session.add(u)
session.commit()

getattr(u, 'name')
u.name = 'new'
assert get_history(u, 'name') == History(['new'], (), ['old'])
get_history(u, 'tags')
# Fails: History(added=(), unchanged=['new'], deleted=())
assert get_history(u, 'name') == History(['new'], (), ['old'])
```

It seems that the call `get_history(u, 'tags')` leads to the update being flushed. Is there any way to consistently get the original value of an attribute from before the transaction has started? This was tested using 1.0.8. |
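A plausible reading of the symptom: `get_history(u, 'tags')` lazy-loads the relationship, the lazy load emits a query, and that query autoflushes the pending UPDATE, which resets the `name` history. If that is the mechanism, wrapping the relationship access in `Session.no_autoflush` preserves the column history. A self-contained sketch using the reporter's models in reduced form:

```python
from sqlalchemy import create_engine, Column, String, Integer, ForeignKey
from sqlalchemy.orm import declarative_base, backref, relationship, Session
from sqlalchemy.orm.attributes import get_history

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Tag(Base):
    __tablename__ = 'tag'
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey(User.id))
    tag = Column(String)
    user = relationship(User, backref=backref('tags'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

u = User(name='old')
session.add(u)
session.commit()

u.name  # touch the attribute so the old value is loaded
u.name = 'new'

# the lazy load of u.tags would normally autoflush the pending UPDATE,
# resetting the 'name' history; no_autoflush suppresses that flush
with session.no_autoflush:
    get_history(u, 'tags')
```

After the `no_autoflush` block, `get_history(u, 'name')` still reports `['new']` added and `['old']` deleted.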
|
From: Ye W. <iss...@bi...> - 2015-11-04 17:05:40
|
New issue 3576: SQLAlchemy generated table schema doesn't take default value into consideration https://bitbucket.org/zzzeek/sqlalchemy/issues/3576/sqlalchemy-generated-table-schema-doesnt

Ye Wang:

Here is a sample python script to generate the CREATE TABLE statement for MySQL.

``` #!python
import sqlalchemy
from sqlalchemy import Table, Column, Integer, ForeignKey, Boolean
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import CreateTable
from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://user:password@127.0.0.1/test', echo=True)
Base = declarative_base()

association_table = Table('association', Base.metadata,
    Column('parent_id', Integer, ForeignKey('parent.id')),
    Column('child_id', Integer, ForeignKey('child.id'))
)

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    children = relationship("Child", secondary=association_table)

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    read = Column(Boolean, default=False, nullable=False)

print(CreateTable(Child.__table__).compile(engine))
print('sqlalchemy verison: {}'.format(sqlalchemy.__version__))
```

Running it will get this SQL:

``` #!sql
$ python ~/sqlalchemy-create-table-column-default.py
CREATE TABLE child (
    id INTEGER NOT NULL AUTO_INCREMENT,
    `read` BOOL NOT NULL,
    PRIMARY KEY (id),
    CHECK (`read` IN (0, 1))
)
-- sqlalchemy verison: 1.0.6

mysql> CREATE TABLE child (
    ->   id INTEGER NOT NULL AUTO_INCREMENT,
    ->   `read` BOOL NOT NULL,
    ->   PRIMARY KEY (id),
    ->   CHECK (`read` IN (0, 1))
    -> );
Query OK, 0 rows affected (0.02 sec)

mysql> show create table child\G
*************************** 1. row ***************************
       Table: child
Create Table: CREATE TABLE `child` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `read` tinyint(1) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
1 row in set (0.01 sec)
```

As you can see, it ignores the default value in the CREATE TABLE statement for the column `read`, which should have a `DEFAULT 0` part. |
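Worth noting here: `default=` in SQLAlchemy is a client-side (Python-level) default and by design never appears in DDL. To emit a `DEFAULT 0` clause in `CREATE TABLE`, the column needs `server_default`. A minimal sketch (compiled against SQLite here just to show the DDL, since no MySQL server is assumed):

```python
from sqlalchemy import Boolean, Column, Integer, MetaData, Table, create_engine, text
from sqlalchemy.schema import CreateTable

metadata = MetaData()
child = Table('child', metadata,
    Column('id', Integer, primary_key=True),
    # server_default is rendered into DDL; default= would not be
    Column('read', Boolean, server_default=text('0'), nullable=False)
)

engine = create_engine('sqlite://')
ddl = str(CreateTable(child).compile(engine))
```

The compiled statement now contains a `DEFAULT 0` clause for the `read` column, which is what the reporter expected from `default=False`.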
|
From: Daniel K. <iss...@bi...> - 2015-11-03 18:22:05
|
New issue 3575: @declared_attr cannot take in column_property https://bitbucket.org/zzzeek/sqlalchemy/issues/3575/declared_attr-cannot-take-in Daniel Kassen: In an effort to force multiple level polymorphism out of sqlalchemy, my colleague has devised a clever way to make multiple types of collections. To do this he creates a collections base which inherits from the 'polymorphic_on' class. All the attributes declared in this collection base class are decorated with @declared_attr, including the id, __mapper_args__, and title. the 'title' is inherited from the polymorphic class and is simply a unicode text type column. But because it is inherited, and redeclared in this new collections class, we get the following warning: SAWarning: Implicitly combining column assets.title with column collections.title under attribute 'title'. Please configure one or more attributes for these same-named columns explicitly. I tried fixing this warning by wrapping the Column object being returned with the column_property method(?). Apparently @declared_attr cannot take a column_property object, and if I'm guessing right thinking that a column_property(A, B) is a column of type A with the properties of B, then this should be fixed. It asks me to configure the column explicitly, and then complains when I do. Responsible: zzzeek |
|
From: Taha J. <iss...@bi...> - 2015-11-03 05:08:51
|
New issue 3574: `in_` operator does not supports bound parameter https://bitbucket.org/zzzeek/sqlalchemy/issues/3574/in_-operator-does-not-supports-bound Taha Jahangir: Using `column.in_(bindparam('name'))` (to place a variable list of values) throws following exception: (with psycopg2 engine) `sqlalchemy.exc.InvalidRequestError: in_() accepts either a list of expressions or a selectable: BindParameter('name', None, type_=NullType())` |
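For context: at the time of the report, `in_()` could not accept a plain `bindparam`, so the list had to be inlined into the statement. SQLAlchemy 1.2 later added "expanding" bind parameters, which do exactly what the reporter wants: the list is supplied at execution time. A minimal sketch on SQLite:

```python
from sqlalchemy import bindparam, create_engine, Column, Integer, MetaData, Table, select

metadata = MetaData()
t = Table('t', metadata, Column('x', Integer))

engine = create_engine('sqlite://')
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(t.insert(), [{'x': 1}, {'x': 2}, {'x': 3}])

    # expanding=True lets the IN list be bound at execute() time
    stmt = select(t.c.x).where(t.c.x.in_(bindparam('xs', expanding=True)))
    rows = conn.execute(stmt, {'xs': [1, 3]}).fetchall()
```

The same compiled statement can be re-executed with differently sized lists, which is the main motivation for binding the IN list rather than inlining it.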
|
From: Bernard Y. <iss...@bi...> - 2015-11-03 01:18:12
|
New issue 3573: SQLAlchemy generates bad SQL for PostgreSQL SELECT...FOR UPDATE OF, because PostgreSQL requires the table parameter NOT to be schema-qualified, but SQLAlchemy schema-qualifies the parameter anyway. https://bitbucket.org/zzzeek/sqlalchemy/issues/3573/sqlalchemy-generates-bad-sql-for

Bernard Yeh:

Using SELECT ... FOR UPDATE OF table on a schema-qualified table is a syntax error in PostgreSQL. The schema name has to be omitted, i.e. instead of:

```
SELECT * FROM schema.table JOIN <other tables> FOR UPDATE OF schema.table
```

it needs to be:

```
SELECT * FROM schema.table JOIN <other tables> FOR UPDATE OF table
```

If a schema is specified on the Table object, SQLAlchemy always schema-qualifies table names during SQL generation, so this construct produces a statement that PostgreSQL rejects.

```python
# users is a Table object created with schema='s1'
s = select([users.c.name]).with_for_update(nowait=True, of=users)
```

generates:

```
SELECT s1.users.name FROM s1.users FOR UPDATE OF s1.users NOWAIT
```

with the traceback:

```
$ python3 for_update_bug.py
Traceback (most recent call last):
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
psycopg2.ProgrammingError: FOR UPDATE must specify unqualified relation names
LINE 2: FROM s1.users FOR UPDATE OF s1.users NOWAIT
                                    ^

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "update_for_bug.py", line 25, in <module>
    conn.execute(s)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
    exc_info
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 188, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 181, in reraise
    raise value.with_traceback(tb)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/home/byeh/python3/lib/python3.4/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) FOR UPDATE must specify unqualified relation names
```

Tested on SQLAlchemy 1.0.9, Python 3.4.3, Ubuntu 14.04, PostgreSQL 9.3. Attached file contains the test case: for_update_bug.py
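The statement can be reproduced without a database by compiling it against the PostgreSQL dialect. A minimal sketch, using the modern `select()` calling style rather than the 1.0-era `select([...])` list form; on 1.0.9 the OF clause rendered the schema-qualified `s1.users`, while current releases emit the unqualified name:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# a Table with an explicit schema, as in the report
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
    schema="s1",
)

# FOR UPDATE OF <table> ... NOWAIT, targeting the schema-qualified table
stmt = select(users.c.name).with_for_update(nowait=True, of=users)
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```

Compiling against the dialect, instead of executing, makes the rendered OF clause easy to inspect in a unit test.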
From: Taha J. <iss...@bi...> - 2015-11-02 06:47:46
New issue 3572: Ambiguous documentation about `bake_queries` param in `relationship` https://bitbucket.org/zzzeek/sqlalchemy/issues/3572/ambiguous-documentation-about-bake_queries

Taha Jahangir:

The documentation for the `bake_queries` parameter states that baked queries are enabled by default [1], but the `orm.ext.baked` documentation [2] says otherwise. It appears that `bake_queries` should be left True and the `lazy` parameter set to `baked_select` to actually use baked queries. Also, the documentation of the `lazy` parameter does not list the `baked_select` option.

[1] http://docs.sqlalchemy.org/en/rel_1_0/orm/relationship_api.html#sqlalchemy.orm.relationship.params.bake_queries
[2] http://docs.sqlalchemy.org/en/rel_1_0/orm/extensions/baked.html#lazy-loading-integration
From: Adrian K. <iss...@bi...> - 2015-11-01 22:10:58
New issue 3571: Interval vs INTERVAL and python_type https://bitbucket.org/zzzeek/sqlalchemy/issues/3571/interval-vs-interval-and-python_type

Adrian Klaver:

SQLAlchemy 1.0.9, Python 2.7.10, Postgres 9.4.4

Seems the SQL Interval type has two spellings in the code. This seems to be causing an issue when using *.python_type. So on Postgres:

```
test=> \d agate_interval_test
Table "public.agate_interval_test"
    Column    |   Type   | Modifiers
--------------+----------+-----------
 id           | integer  |
 interval_fld | interval |
```

Using SQLAlchemy:

```
In [23]: from sqlalchemy import Column, MetaData, Table, create_engine
In [24]: from sqlalchemy.engine import Connection
In [25]: engine = create_engine('postgresql://aklaver:@localhost:5432/test')
In [26]: connection = engine.connect()
In [27]: metadata = MetaData(connection)
In [28]: sql_table = Table('agate_interval_test', metadata, autoload=True, autoload_with=connection)

In [31]: for col in sql_table.columns:
   ....:     print col.type
INTEGER
INTERVAL

In [32]: for col in sql_table.columns:
   ....:     print col.type.python_type
<type 'int'>
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-32-7def2730dfe9> in <module>()
      1 for col in sql_table.columns:
----> 2     print col.type.python_type
      3
/home/aklaver/py_virt/pandas/lib/python2.7/site-packages/sqlalchemy/sql/type_api.pyc in python_type(self)
    305
    306         """
--> 307         raise NotImplementedError()
    308
    309     def with_variant(self, type_, dialect_name):
NotImplementedError:

In [45]: sqlalchemy.sql.sqltypes.INTERVAL
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-45-fc36ad754ae1> in <module>()
----> 1 sqlalchemy.sql.sqltypes.INTERVAL
AttributeError: 'module' object has no attribute 'INTERVAL'

In [46]: sqlalchemy.sql.sqltypes.Interval
Out[46]: sqlalchemy.sql.sqltypes.Interval
```
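For context, the two spellings are distinct types: the generic `Interval` implements `python_type` (returning `datetime.timedelta`), while the dialect-level PostgreSQL `INTERVAL`, which reflection hands back, raised `NotImplementedError` in 1.0.9. A small sketch, assuming a current release where the dialect type gained `python_type` as well:

```python
import datetime as dt

from sqlalchemy import Interval                     # generic, dialect-agnostic
from sqlalchemy.dialects.postgresql import INTERVAL  # what PG reflection returns

# the generic type has always implemented python_type
print(Interval().python_type)  # <class 'datetime.timedelta'>

# the PG-specific type raised NotImplementedError in 1.0.9;
# on a current install it also reports timedelta
print(INTERVAL().python_type)
```

Note that the uppercase type lives under `sqlalchemy.dialects.postgresql`, not `sqlalchemy.sql.sqltypes`, which is why `In [45]` above raised AttributeError.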
From: fengsp <iss...@bi...> - 2015-10-30 13:57:50
New issue 3570: Weird behavior of `delete` using different query param type https://bitbucket.org/zzzeek/sqlalchemy/issues/3570/weird-behavior-of-delete-using-different

fengsp:

```
>>> t
<Text 643544210980470784>
>>> print t.__dict__
{'content': u'ha', 'created_ts': 1446212896L, '_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x100c615d0>, 'updated_ts': 1446212896L, 'id': 643544210980470784L}
>>> models.Text.query.filter_by(id='643544210980470784').delete()
1L
>>> session.commit()
>>> print t.__dict__
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x100c615d0>}
```

After this, t's attributes can no longer be accessed. Trying again with the id as an integer gives a different result:

```
>>> t
<Text 643544531031031808>
>>> print t.__dict__
{'content': u'ah', 'created_ts': 1446212972L, '_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x108ca45d0>, 'updated_ts': 1446212972L, 'id': 643544531031031808L}
>>> models.Text.query.filter_by(id=643544531031031808).delete()
1L
>>> session.commit()
>>> print t.__dict__
{'content': u'ah', 'created_ts': 1446212972L, '_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x108ca45d0>, 'updated_ts': 1446212972L, 'id': 643544531031031808L}
```
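The asymmetry most likely comes from the `synchronize_session` step of `Query.delete()`: the default in-Python "evaluate" strategy compares the filter value to the loaded attribute with `==`, and an integer id does not equal its string form, so the string query deletes the row in SQL but leaves the now-stale object in the session, where the commit expires it. A sketch against SQLite (the `Note` model is a stand-in, not the reporter's code):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Note(Base):
    __tablename__ = "notes"
    id = Column(Integer, primary_key=True)
    content = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

t = Note(id=1, content="ha")
session.add(t)
session.commit()

# SQLite coerces '1' to 1, so the DELETE removes the row; but the
# in-Python evaluator sees 1 != '1' and does not evict t from the
# session.  commit() then expires t, emptying its __dict__, and any
# later attribute access fails because the row no longer exists.
session.query(Note).filter_by(id="1").delete()
session.commit()
print("content" in t.__dict__)  # False
```

Deleting with the integer `1` instead would match in the evaluator, evict the object from the session, and leave its `__dict__` populated, matching the report's second transcript.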
From: Roman Z. <iss...@bi...> - 2015-10-28 15:46:22
New issue 3569: Make MetaData.drop_all() work with foreign keys https://bitbucket.org/zzzeek/sqlalchemy/issues/3569/make-metadatadrop_all-work-with-foreign

Roman Zimmermann:

Currently, issuing a drop_all() against a PostgreSQL database sometimes leads to a foreign key error like:

```
sqlalchemy.exc.InternalError: (psycopg2.InternalError) cannot drop table users because other objects depend on it
DETAIL:  constraint engaging_networks_tokens_user_id_fkey on table engaging_networks_tokens depends on table users
HINT:  Use DROP ... CASCADE to drop the dependent objects too.
 [SQL: '\nDROP TABLE users']
```

This is a major nuisance, since drop_all() is commonly used for testing. A workaround is documented in [this blogpost](http://www.mbeckler.org/blog/?p=218).
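The workaround in the linked post can be expressed as a compiler hook that appends CASCADE to DROP TABLE on PostgreSQL, so drop_all() no longer trips over dependent constraints. A sketch, compiled here against the PostgreSQL dialect only, without a live database:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import postgresql
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import DropTable

@compiles(DropTable, "postgresql")
def _drop_table_cascade(element, compiler, **kw):
    # let PostgreSQL drop dependent FK constraints along with the table
    return compiler.visit_drop_table(element) + " CASCADE"

metadata = MetaData()
users = Table("users", metadata, Column("id", Integer, primary_key=True))

sql = str(DropTable(users).compile(dialect=postgresql.dialect()))
print(sql.strip())  # DROP TABLE users CASCADE
```

The hook is registered per dialect name, so other backends keep their stock DROP TABLE; once installed, `metadata.drop_all(engine)` picks it up automatically.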
From: Fabio S. <iss...@bi...> - 2015-10-27 16:56:32
New issue 3568: Use column expressions with session.bulk_update_mappings https://bitbucket.org/zzzeek/sqlalchemy/issues/3568/use-column-expressions-with

Fabio Sussetto:

```python
prize = 10
mappings = {'id': 1, 'balance': User.balance + prize}
db.session.bulk_update_mappings(User, mappings)
```

On 1.0.9 this fails with the following error:

```
can't adapt type 'BinaryExpression'
[SQL: 'UPDATE "user" SET balance=%(balance)s WHERE "user".id = %(user_id)s']
[parameters: ({'user_id': 1, 'balance': <sqlalchemy.sql.elements.BinaryExpression object at 0x10d153be0>}
```

Is this supported? Thanks!
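As far as I can tell, bulk_update_mappings() passes each value straight through as a bind parameter, which is why the BinaryExpression cannot be adapted; a Core UPDATE statement accepts SQL expressions on the right-hand side. A sketch with a stand-in model (the `User` definition and SQLite engine are assumptions, not from the report):

```python
from sqlalchemy import Column, Integer, create_engine, update
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    balance = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(User(id=1, balance=5))
session.commit()

prize = 10
# SET balance = balance + :param -- an expression, not a literal,
# which is exactly what bulk_update_mappings cannot render
session.execute(
    update(User).where(User.id == 1).values(balance=User.balance + prize)
)
session.commit()

print(session.get(User, 1).balance)  # 15
```

The trade-off is that the Core statement targets one WHERE clause at a time, whereas bulk_update_mappings batches many rows; for per-row expressions the Core form is the supported route.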