sqlalchemy-tickets Mailing List for SQLAlchemy (Page 44)
Brought to you by:
zzzeek
From: Gildson B. <iss...@bi...> - 2014-11-22 23:48:28

New issue 3255: how to get the position of the current record of the result of a query? https://bitbucket.org/zzzeek/sqlalchemy/issue/3255/how-to-get-the-position-of-the-current

Gildson Bezerra: I have a table of people records and I need to run a query and then immediately turn the resulting list of people into a ranking. I would like to know the best way to do this operation (without using `func.row_number()`). Thanks.

```
#!python
from sqlalchemy import create_engine, String, Integer, Column
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class People(Base):
    __tablename__ = 'peoples'
    _id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(50))
    points = Column(Integer)


engine = create_engine('sqlite:///db.sqlite')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)


def run():
    session = Session()
    session.add_all([People(name='Alice', points=190),
                     People(name='George', points=215),
                     People(name='Nick', points=167),
                     People(name='Alex', points=210),
                     People(name='Paul', points=233)])
    session.commit()

    peoples = session.query(People) \
        .filter(People.points > 200) \
        .order_by(People.points.desc()) \
        .all()

    for p in peoples:
        # How can I get the position of the current record in peoples here?
        position = 0
        # position = p.?
        print('%s - %s (%s)' % (position, p.name, p.points))


if __name__ == '__main__':
    run()
```

Responsible: zzzeek

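One possible answer (an editor's sketch, not from the original thread): since the query above already returns rows ordered by points, the position can be assigned client-side with Python's `enumerate`. The `peoples` list here is a stand-in for the ordered result of `query(...).all()`:

```python
# Sketch: rank rows client-side after an ORDER BY query.
# `peoples` stands in for the ordered list returned by query(...).all().
peoples = [('Paul', 233), ('George', 215), ('Alex', 210)]
for position, (name, points) in enumerate(peoples, start=1):
    print('%s - %s (%s)' % (position, name, points))
# prints:
# 1 - Paul (233)
# 2 - George (215)
# 3 - Alex (210)
```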
From: nickretallack <iss...@bi...> - 2014-11-21 01:44:00

New issue 3254: Join should be able to follow relationships implicitly https://bitbucket.org/zzzeek/sqlalchemy/issue/3254/join-should-be-able-to-follow

nickretallack: It's my opinion that this should work:

```
#!python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, ForeignKey, Integer, String, ForeignKeyConstraint
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, relationship

Model = declarative_base()


class Parent(Model):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)


class Child(Model):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    parent_id = Column(Integer, nullable=False)
    parent = relationship(
        'Parent',
        primaryjoin='foreign(Child.parent_id) == remote(Parent.id)')


engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)

if __name__ == "__main__":
    Model.metadata.create_all(engine)
    session = Session()

    # setup
    parent = Parent(name="fred")
    child = Child(name="bob", parent=parent)
    session.add_all([parent, child])
    session.commit()

    # payoff
    print session.query(Parent).join(Child).all()
```

It works if `parent_id` has a `ForeignKeyConstraint`. It also works if I change it to `print session.query(Parent).join(Child.parent).all()`. But I think it should work without either of these changes. It would be very helpful in an application I'm working on now, which mixes queries very dynamically. I suppose I could work around it by searching my models for relationships to follow manually, or adding foreign key constraints to the model that do not actually exist in the database. I just wish it worked without having to do these things.

From: Moritz B. <iss...@bi...> - 2014-11-20 18:07:13

New issue 3253: Executing table insert with variable items yields unexpected results https://bitbucket.org/zzzeek/sqlalchemy/issue/3253/executing-table-insert-with-variable-items

Moritz Beber: I recently stumbled over the following *feature*. Suppose I have a table model `Dummy` inheriting from the declarative base:

```python
class Dummy(Base):
    a = Column(Integer)
    b = Column(Integer)
```

In order to efficiently insert many rows into the table, I build a list `values` of `dict`s and run the following commands in a valid session:

```python
session.execute(Dummy.__table__.insert(), values)
session.commit()
```

Now the kicker, which I think needs to be documented: if the first `dict` in that list is, for example, `{"a": 3}`, and any following `dict` contains both `a` and `b`, then all the following `b`s are ignored, i.e., set to None. If the first is `{"a": 1, "b": 3}` and any following dict contains only one key, this causes a `StatementError`. If both keys are present in every dictionary but one of the values is `None`, it works perfectly. For the sake of efficiency I can understand why it works the way it does, but I didn't see this written down anywhere and I think it should be. Maybe it belongs in the insert tutorial, not sure.

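A possible workaround (editor's sketch, assuming the full column set is known up front): pad every parameter dictionary to the same key set before handing the list to `execute()`, so the executemany path sees a consistent column list. `normalize_rows` is a hypothetical helper, not a SQLAlchemy API:

```python
# Sketch: give every row dict the same keys so an executemany-style
# insert compiles a consistent column list.
def normalize_rows(values, keys):
    return [{k: row.get(k) for k in keys} for row in values]

values = [{"a": 3}, {"a": 1, "b": 2}]
print(normalize_rows(values, ("a", "b")))
# → [{'a': 3, 'b': None}, {'a': 1, 'b': 2}]
```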
From: Abhinav G. <iss...@bi...> - 2014-11-15 02:34:23

New issue 3252: Query.update on polymorphic tables generates statements with a very wide JOIN https://bitbucket.org/zzzeek/sqlalchemy/issue/3252/queryupdate-on-polymorphic-tables

Abhinav Gupta: Consider the following setup,

```
#!python
class User(Base):
    __tablename__ = 'users'
    user_id = Column(Integer, primary_key=True)
    name = Column(String)
    role = Column('role', Enum('user', 'super'))
    __mapper_args__ = {'polymorphic_on': role, 'polymorphic_identity': 'user'}


class SuperUser(User):
    __tablename__ = 'superusers'
    __mapper_args__ = {'polymorphic_identity': "super"}
    super_id = Column(Integer, primary_key=True)
    users_user_id = Column(Integer, ForeignKey('users.user_id'))
    email = Column('email', String)
```

If you try to perform an update on a query using the base class's primary key, like so,

```
#!python
session.query(SuperUser).filter_by(user_id=some_id).update(
    {'email': 'fo...@ex...'}
)
```

the system will generate the following `UPDATE` command:

```
#!sql
UPDATE superusers SET email=? FROM users WHERE users.user_id = ?
```

This is trying to join `superusers` with `users` without any join criteria. That's obviously wrong. To make it work, you have to either use the foreign key column name or fetch the original object and update that.

```
#!python
session.query(SuperUser).filter_by(users_user_id=some_id).update(
    {'email': 'fo...@ex...'}
)
session.commit()

# or

su = session.query(SuperUser).get(some_id)
su.email = 'fo...@ex...'
session.add(su)
session.commit()
```

This generates the correct command:

```
#!sql
UPDATE superusers SET email=? WHERE superusers.super_id = ?
```

I have tested it with SQLAlchemy 0.8.2 against Postgres and SQLite, and SQLAlchemy 0.9.8 against SQLite. In the case of SQLite, the command immediately fails because SQLite doesn't support `JOIN`s in `UPDATE`s. Here's the full trace:

```
Traceback (most recent call last):
  File "bug.py", line 53, in <module>
    {'email': 'fo...@ex...'}
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2796, in update
    update_op.exec_()
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 897, in exec_
    self._do_exec()
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 995, in _do_exec
    update_stmt, params=self.query._params)
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 991, in execute
    bind, close_with_result=True).execute(clause, params or {})
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
    return meth(self, multiparams, params)
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
    compiled_sql, distilled_params
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
    context)
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1159, in _handle_dbapi_exception
    exc_info
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
    context)
  File "$HOME/dev/py/sqlalchemy-bug-repro/env/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (OperationalError) near "FROM": syntax error u'UPDATE superusers SET email=? FROM users WHERE users.user_id = ?' ('fo...@ex...', 2)
```

In the case of Postgres, it ends up joining the tables and possibly updating everything (I haven't had a chance to confirm the last part). Sample code to reproduce this is attached to the report.

From: Francisco F. C. <iss...@bi...> - 2014-11-13 15:09:04

New issue 3251: Possible Memory Leak on relationships https://bitbucket.org/zzzeek/sqlalchemy/issue/3251/possible-memory-leak-on-relationships

Francisco Fernández Castaño: Hi, I have the tables shown in the code mapped to some classes. I need to run a query using a custom selectable that gives me augmented information about Accounts through FollowAssociation, depending on a given user_id, and then join this runtime-defined selectable to the Broadcast ORM object, creating a new relationship with the same name. But with this solution, which is probably not idiomatic SQLAlchemy, I'm getting memory leaks: the RelationshipProperty objects are never freed, so memory doesn't stop growing. I wasn't able to find an alternative way to achieve this goal, and it's probably my fault, but just in case here is a runnable snippet that shows how memory grows over time. sqlalchemy==0.9.8

```
#!python
from sqlalchemy import create_engine, Column, Integer, ForeignKey, DateTime, case, and_
from sqlalchemy.orm import sessionmaker, relationship, aliased, mapper
from sqlalchemy.ext.declarative import declarative_base
import datetime
import random
from guppy import hpy

engine = create_engine('sqlite:///:memory:', echo=False)
Session = sessionmaker(bind=engine)
Base = declarative_base()
session = Session()


class FollowAssociation(Base):
    __tablename__ = 'followers_followees'
    follower_id = Column(Integer, ForeignKey('accounts.id'), primary_key=True)
    followee_id = Column(Integer, ForeignKey('accounts.id'), primary_key=True)
    created_at = Column(DateTime, default=datetime.datetime.now)

ali = aliased(FollowAssociation)


class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)
    followers = relationship(
        'Account',
        secondary='followers_followees',
        primaryjoin=(FollowAssociation.followee_id == id),
        secondaryjoin=(FollowAssociation.follower_id == id),
        backref='followees')


class Broadcast(Base):
    __tablename__ = 'broadcasts'
    id = Column(Integer, primary_key=True)
    account_id = Column(Integer, ForeignKey('accounts.id'))

Base.metadata.create_all(engine)

a1 = Account()
a2 = Account()
a3 = Account()
b = Broadcast()
a3.followers.append(a2)
session.add_all([a1, a2, a3, b])
session.commit()

h = hpy()


def custom_query(user_id):
    query = session.query(
        Account,
        case([(FollowAssociation.follower_id == None, False)],
             else_=True).label('follower'),
        case([(ali.follower_id == None, False)],
             else_=True).label('followee')
    ).outerjoin(
        FollowAssociation,
        and_(Account.id == FollowAssociation.followee_id,
             FollowAssociation.follower_id == user_id)
    ).outerjoin(
        ali,
        and_(Account.id == ali.follower_id, ali.followee_id == user_id))

    class CustomAccount(object):
        pass

    CustomAccount = type('CustomAccount', (object,), {})
    mapper(CustomAccount, query.statement.alias())
    return CustomAccount


def execute():
    for i in range(200000):
        custom_account = custom_query(random.randint(1, 200))
        Broadcast.account_with_follow_info = relationship(custom_account)
        session.query(Broadcast).first()
        if i % 1000 == 0:
            print i
            print h.heap()
            print h.heap().more

execute()
```

From: Oleg P. <iss...@bi...> - 2014-11-13 14:57:56

New issue 3250: Setting None to a relation or id doesn't make it 'dirty' and the default is called for a column https://bitbucket.org/zzzeek/sqlalchemy/issue/3250/setting-none-to-a-relation-or-id-doesnt

Oleg Pidsadnyi: Mike, I've encountered a behavior that is a bit unexpected:

```
#!python
class Model(Base):
    a = Column(Integer, nullable=True, default=1)

m = Model()
m.a = None
session.add(m)

In: print m.a
Out: None

session.commit()

In: print m.a
Out: 1
```

None is expected :( I understand that the configuration is silly, but this is the existing code. Setting None on the column doesn't cancel the default; the default machinery thinks the column wasn't set, so it uses the default value in the end.

Responsible: zzzeek

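One possible workaround (editor's sketch, not from the ticket): assign the SQL expression `null()` instead of Python `None`. A SQL expression assigned to a mapped attribute is rendered directly into the INSERT, so the column default never fires:

```python
# Sketch: force a real SQL NULL past a column default by assigning
# the null() expression rather than Python None.
from sqlalchemy import create_engine, Column, Integer, null
from sqlalchemy.orm import sessionmaker

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Model(Base):
    __tablename__ = 'model'
    id = Column(Integer, primary_key=True)
    a = Column(Integer, nullable=True, default=1)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

m = Model()
m.a = null()  # SQL NULL expression, not Python None
session.add(m)
session.commit()
print(m.a)  # → None: the default was not applied
```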
From: Mike B. <iss...@bi...> - 2014-11-12 22:15:14

New issue 3249: disable multirow eager loaders based on uselist instead of MANYTOONE? https://bitbucket.org/zzzeek/sqlalchemy/issue/3249/disable-multirow-eager-loaders-based-on

Mike Bayer: why not? this makes it higher risk if someone is using uselist improperly, but if they are doing that then they are getting warnings anyway. all the tests pass except one that is looking for exact SQL.

```
#!diff
diff --git a/lib/sqlalchemy/orm/strategies.py b/lib/sqlalchemy/orm/strategies.py
index d95f17f..9549e09 100644
--- a/lib/sqlalchemy/orm/strategies.py
+++ b/lib/sqlalchemy/orm/strategies.py
@@ -1246,7 +1246,7 @@ class JoinedLoader(AbstractRelationshipLoader):
                 anonymize_labels=True)
         assert clauses.aliased_class is not None

-        if self.parent_property.direction != interfaces.MANYTOONE:
+        if self.parent_property.uselist:
             context.multi_row_eager_loaders = True

         innerjoin = (
```

From: Mike B. <iss...@bi...> - 2014-11-11 15:53:07

New issue 3248: populate_result_map gets set for select() that's in the VALUES of an insert() https://bitbucket.org/zzzeek/sqlalchemy/issue/3248/populate_result_map-gets-set-for-select

Mike Bayer:

```
#!python
from sqlalchemy import *


class MyType(TypeDecorator):
    impl = Integer

    def process_result_value(self, value, dialect):
        raise Exception("I have not been selected")

m1 = MetaData()
t1 = Table('t1', m1, Column('x', MyType()))
t2 = Table('t2', m1, Column('x', Integer))

e = create_engine("postgresql://scott:tiger@localhost/test", echo='debug')
m1.drop_all(e)
m1.create_all(e)

e.execute(t1.insert().values(x=5))

stmt = t2.insert().values(x=select([t1.c.x]).as_scalar()).returning(t2.c.x)
result = e.execute(stmt)
result.scalar()
```

```
#!
Traceback (most recent call last):
  File "test2.py", line 27, in <module>
    result.scalar()
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/engine/result.py", line 888, in scalar
    return row[0]
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/engine/result.py", line 90, in __getitem__
    return processor(self._row[index])
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/sql/type_api.py", line 915, in process
    return process_value(value, dialect)
  File "test2.py", line 7, in process_result_value
    raise Exception("I have not been selected")
Exception: I have not been selected
```

From: Ulrich P. <iss...@bi...> - 2014-11-11 14:07:29

New issue 3247: Reflecting table containing only oid column raises NoSuchTableError https://bitbucket.org/zzzeek/sqlalchemy/issue/3247/reflecting-table-containing-only-oid

Ulrich Petri: I ran into what I think is a bug while trying to reflect a legacy database that contained a table with no columns except for the Postgres oid column (don't ask why...). That table had been created with the "WITH OIDS" option. When using reflection on a table like this, a `NoSuchTable` exception is raised. The reason for that lies in `sqlalchemy.engine.reflection.reflecttable`: the `found_table` flag is set to true inside the column processing loop, and this loop is not executed in my case because `sqlalchemy.dialects.postgresql.base.PGDialect.get_columns` excludes attributes with an `attnum` < 1 (`attnum` for the `oid` column is -2), returning an empty list. I'm aware that this is a very unusual case, but I'd argue that at least the exception is misleading and should be changed, since the table clearly exists.

Minimal example:

```
#!bash
createdb test
psql test <<EOF
create table test () with oids;
EOF
```

```
#!python
from sqlalchemy import create_engine, MetaData

engine = create_engine("postgresql+psycopg2://localhost/test")
meta = MetaData()
meta.reflect(bind=engine)
```

From: Mike B. <iss...@bi...> - 2014-11-10 20:38:10

New issue 3246: escape bound param keys on postgresql https://bitbucket.org/zzzeek/sqlalchemy/issue/3246/escape-bound-param-keys-on-postgresql

Mike Bayer: see #3245

```
#!python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer

engine = create_engine('postgresql://scott:tiger@localhost:5432/test', echo=True)
meta = MetaData(engine)

table = Table('test_parentheses', meta,
              Column('CT Volume (cc)', Integer)  # , key='foob')
)
meta.create_all()

i = table.insert()
i.execute({'foob': 1})

print engine.execute(table.select().where(table.c.foob == 5))
```

From: Joris V. d. B. <iss...@bi...> - 2014-11-10 20:17:44

New issue 3245: postgres: insert into a table with column name with parentheses gives KeyError https://bitbucket.org/zzzeek/sqlalchemy/issue/3245/postgres-insert-into-a-table-whith-column

Joris Van den Bossche: As the title says, when trying to insert data into a table with a column name containing parentheses, you get a KeyError. Small example:

```
from sqlalchemy import create_engine, MetaData, Table, Column, Integer

engine = create_engine('postgresql://postgres@localhost:5432/test')
meta = MetaData(engine)

table = Table('test_parentheses', meta,
              Column('CT Volume (cc)', Integer))
table.create()

i = table.insert()
i.execute({'CT Volume (cc)': 1})
```

gives:

```
....
/home/joris/miniconda/envs/dev/lib/python2.7/site-packages/sqlalchemy/engine/default.pyc in do_execute(self, cursor, statement, parameters, context)
    322
    323     def do_execute(self, cursor, statement, parameters, context=None):
--> 324         cursor.execute(statement, parameters)
    325
    326     def do_execute_no_params(self, cursor, statement, context=None):

KeyError: 'CT Volume (cc'
```

This happens with postgresql (not with sqlite), and was reported at the pandas issue tracker (https://github.com/pydata/pandas/issues/8762), but I also saw references to the same issue here: https://github.com/onyxfish/csvkit/issues/317 and http://stackoverflow.com/questions/25917741/sqlalchemy-with-postgres-insert-into-a-table-whose-columns-have-parentheses. Is this possibly a sqlalchemy bug, or does it have to do with the psycopg2 driver?

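A possible workaround (editor's sketch, demonstrated on SQLite since the reported failure is PostgreSQL-specific): give the column a Python-side `key`, so application code and the generated bind parameters never use the parenthesized name:

```python
# Sketch: Column(key=...) gives the awkwardly named column a clean
# Python-side handle; bind parameters then use the key, not the name.
from sqlalchemy import create_engine, MetaData, Table, Column, Integer

engine = create_engine('sqlite://')
meta = MetaData()
table = Table('test_parentheses', meta,
              Column('CT Volume (cc)', Integer, key='ct_volume'))
meta.create_all(engine)

with engine.begin() as conn:
    conn.execute(table.insert(), {'ct_volume': 1})
    rows = conn.execute(table.select()).fetchall()

print(rows)  # → [(1,)]
```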
From: Jon N. <iss...@bi...> - 2014-11-10 18:14:00

New issue 3244: sqlite inspection with get_unique_constraints doesn't pick up (all) UNIQUE constraints https://bitbucket.org/zzzeek/sqlalchemy/issue/3244/sqlite-inspection-with

Jon Nelson: In sqlite you may *optionally* define a unique constraint without a name (and without CONSTRAINT):

```
#!sql
CREATE TABLE foo (a int, b int, UNIQUE (a,b))
```

As of 0.9.8, the as-supplied regex which does the constraint parsing will miss these. I'm using this:

```
#!python
'(?!CONSTRAINT \w+)? UNIQUE \(([^\)]+)\)'
```

and some local code modifications, and it seems to work fine. Michael Bayer responded via email with: "sqlite doesn't supply any system of getting the unique constraints for a table, nor does it give an implicit name to any of them, and thus doesn't match what all other relational databases do in this regard. We should probably return these with name=None; feel free to open a ticket, but this would be 1.0 only. In the meantime, if inspection of SQLite constraints is needed you should give them names."

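For illustration (editor's sketch, not SQLAlchemy's actual reflection code), the reporter's proposed pattern does pick up the unnamed constraint in the DDL above:

```python
import re

# Run the pattern proposed in the report against the unnamed-UNIQUE DDL.
ddl = "CREATE TABLE foo (a int, b int, UNIQUE (a,b))"
pattern = r'(?!CONSTRAINT \w+)? UNIQUE \(([^\)]+)\)'
match = re.search(pattern, ddl)
cols = [c.strip() for c in match.group(1).split(',')]
print(cols)  # → ['a', 'b']
```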
From: Mike B. <iss...@bi...> - 2014-11-08 21:56:31

New issue 3243: ForeignKeyConstraint.columns - WAT https://bitbucket.org/zzzeek/sqlalchemy/issue/3243/foreignkeyconstraintcolumns-wat

Mike Bayer: The ".columns" collection is first off not a ColumnCollection, and secondly it is a list that is either of strings or columns, depending on factors that are in a practical sense arbitrary, which is ridiculous. This collection should either not be there at all, or just be changed to be a ColumnCollection. Hitting it will just invoke the FK. Some other accessor should be added that gives us the column names in the absence of the FK constraint actually being set up.

```
#!python
from sqlalchemy import Integer, Column, Table, MetaData, \
    ForeignKeyConstraint, ForeignKey

m = MetaData()
t1 = Table(
    't1', m,
    Column('x', Integer, primary_key=True)
)
t2 = Table(
    't2', m,
    Column('y', Integer, primary_key=True),
    Column('z1', Integer, ForeignKey('t1.x')),
    Column('z2', Integer)
)

fk1 = [
    const for const in t2.constraints
    if isinstance(const, ForeignKeyConstraint)][0]
fk2 = ForeignKeyConstraint(['z2'], ['t1.x'])

# keys are Column objects, OK
assert isinstance(fk1.columns[0], Column)

# WAT
assert isinstance(fk2.columns[0], str)

t2.append_constraint(fk2)

# WAT
assert isinstance(fk2.columns[0], str)
```

From: asyschikov <iss...@bi...> - 2014-11-05 19:28:54

New issue 3242: Delete doesn't use mapper (query does!) https://bitbucket.org/zzzeek/sqlalchemy/issue/3242/delete-doesnt-use-mapper-query-does

asyschikov: Maybe not strictly a bug, but it led to very buggy/unexpected results. When a query gets a connection, it passes a mapper:

```
#!python
conn = self._connection_from_session(
    mapper=self._mapper_zero_or_none(),
    clause=querycontext.statement,
    close_with_result=True)
```

But delete (BulkDelete) does not:

```
#!python
self.result = self.query.session.execute(delete_stmt, params=self.query._params)
```

We have a table that is not in the binds list (session.__binds), so for the same table a query works (because of the mapper) but a delete does not, and it was very hard to figure out what the difference was. It would be much better if delete used the same mechanism as query.

From: Mike B. <iss...@bi...> - 2014-11-05 08:37:32

New issue 3241: multiple anonymous expressions in query() https://bitbucket.org/zzzeek/sqlalchemy/issue/3241/multiple-anonymous-expressions-in-query

Mike Bayer: the error on 0.9 is better than in 1.0:

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *

e = create_engine('sqlite:///:memory:', echo='debug')
s = Session(e)

q1 = s.query(literal_column("1")).as_scalar()
q2 = s.query(literal_column("2")).as_scalar()

s.query(q1, q2).all()
```

0.9:

```
sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column '(SELECT 1)'"
```

1.0:

```
File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 71, in instances
    rows = [keyed_tuple([proc(row) for proc in process])
TypeError: 'NoneType' object is not callable
```

From: Mike B. <iss...@bi...> - 2014-11-04 16:54:27

New issue 3240: using an __abstract__ interferes with declarative determining inherits https://bitbucket.org/zzzeek/sqlalchemy/issue/3240/using-an-__abstract__-interferes-with

Mike Bayer:

```
#!python
from sqlalchemy import Integer, Column, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)


class AAbs(A):
    __abstract__ = True


class B1(A):
    __tablename__ = 'b1'
    id = Column(ForeignKey('a.id'), primary_key=True)


class B2(AAbs):
    __tablename__ = 'b2'
    id = Column(ForeignKey('a.id'), primary_key=True)


assert B1.__mapper__.inherits is A.__mapper__  # passes
assert B2.__mapper__.inherits is A.__mapper__  # fails
```

From: Craig R. <iss...@bi...> - 2014-10-31 22:33:55

New issue 3239: literal_binds parameter not applied to LIMIT clause https://bitbucket.org/zzzeek/sqlalchemy/issue/3239/literal_binds-parameter-not-applied-to

Craig Radcliffe: I noticed this issue while trying to use sqlalchemy as a query generator to produce a string for input into a system that doesn't accept bind parameters. The literal_binds parameter, when passed through the compile_kwargs parameter in the compile() method, does not seem to apply to a LIMIT clause. Here is a basic example:

```
#!python
from sqlalchemy import select
from sqlalchemy.sql import table, literal_column

t = table("foo")
t.append_column(literal_column("bar"))

print select([t.c["bar"]]).limit(100).compile(compile_kwargs={'literal_binds': True})
```

The output is:

```
#!
SELECT foo.bar FROM foo LIMIT :param_1
```

I would expect:

```
#!
SELECT foo.bar FROM foo LIMIT 100
```

Other clauses, including JOINs, WHEREs, GROUP BYs, etc. work as expected. The issue is present in 0.9.8.

From: Boris K. <iss...@bi...> - 2014-10-29 19:30:16

New issue 3238: When using m2m association collections, strange behavior occurs if the session is not committed. https://bitbucket.org/zzzeek/sqlalchemy/issue/3238/when-using-m2m-association-collections

Boris Kozinsky: If the session is not committed: when appending to the collection, the association object is not created; when the association object is created, the collection is not updated. If the session is committed, both association objects and collections are updated, but via different routes it seems, which is reflected in the way object initialization happens. Is this normal? I would think that the results should not depend on whether the session has been committed to the database. Testing would be much easier if everything were updated and available for querying even without a commit. Note: in the attached file the association table UserKeywords has an extra field (special_key), but that is not relevant for this report.

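On the commit question, a note with a minimal single-table illustration (editor's sketch, not the attached reproduction): pending changes become visible to queries within the same transaction once they are flushed, and `Query` normally autoflushes, so a commit is not required for querying — though association-proxy bookkeeping can differ, as described above:

```python
# Sketch: flush() makes pending rows queryable in the same
# transaction without committing anything.
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import sessionmaker

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Thing())
session.flush()  # emits the INSERT, but does not commit
assert session.query(Thing).count() == 1

session.rollback()  # nothing was committed, so the row disappears
assert session.query(Thing).count() == 0
```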
From: dwt <iss...@bi...> - 2014-10-28 22:00:02

New issue 3237: mysql dialect discards cast to float https://bitbucket.org/zzzeek/sqlalchemy/issue/3237/mysql-dialect-discards-cast-to-float

dwt: While debugging an error stemming from the incompatibility of decimal.Decimal with certain numerical operations in Python, we discovered that sqlalchemy actually returns Decimal when computing the average of an integer column in mysql, even if we wrap the query in a cast to float. Please see this example:

```
#!python
import sqlalchemy
print "sqlalchemy.__version__", sqlalchemy.__version__

engine = sqlalchemy.create_engine('mysql://yeepa_demo:yeepa_demo@localhost/yeepa_demo?charset=utf8', echo=True)

from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()

from sqlalchemy import Column, Integer, String


class Track(Base):
    __tablename__ = 'track'
    id = Column('idtrack', Integer, primary_key=True)
    score = Column('score', Integer, server_default="0", nullable=False)
    user_id = Column('userid', Integer, nullable=False)

from sqlalchemy.orm import sessionmaker
Session = sessionmaker(bind=engine)
session = Session()

from sqlalchemy.sql.expression import cast
from sqlalchemy.types import Integer, Float
from sqlalchemy import func, desc, not_

# print session.query(cast(func.avg(func.coalesce(Track.score, 0)), Float).label('average_game_score'))
print session.query(Track.score, Track.user_id).limit(20).all()
print session.query(cast(func.avg(func.coalesce(Track.score, 0)), Float).label('average_game_score')).group_by(Track.user_id).all()
```

Which creates this output:

```
% ./sqlalchemy_test.py
sqlalchemy.__version__ 0.9.8
2014-10-28 22:44:36,051 INFO sqlalchemy.engine.base.Engine SHOW VARIABLES LIKE 'sql_mode'
2014-10-28 22:44:36,051 INFO sqlalchemy.engine.base.Engine ()
2014-10-28 22:44:36,053 INFO sqlalchemy.engine.base.Engine SELECT DATABASE()
2014-10-28 22:44:36,053 INFO sqlalchemy.engine.base.Engine ()
2014-10-28 22:44:36,053 INFO sqlalchemy.engine.base.Engine show collation where `Charset` = 'utf8' and `Collation` = 'utf8_bin'
2014-10-28 22:44:36,053 INFO sqlalchemy.engine.base.Engine ()
2014-10-28 22:44:36,055 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS CHAR(60)) AS anon_1
2014-10-28 22:44:36,056 INFO sqlalchemy.engine.base.Engine ()
2014-10-28 22:44:36,056 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS CHAR(60)) AS anon_1
2014-10-28 22:44:36,056 INFO sqlalchemy.engine.base.Engine ()
2014-10-28 22:44:36,057 INFO sqlalchemy.engine.base.Engine SELECT CAST('test collated returns' AS CHAR CHARACTER SET utf8) COLLATE utf8_bin AS anon_1
2014-10-28 22:44:36,057 INFO sqlalchemy.engine.base.Engine ()
2014-10-28 22:44:36,059 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2014-10-28 22:44:36,059 INFO sqlalchemy.engine.base.Engine SELECT track.score AS track_score, track.userid AS track_userid FROM track LIMIT %s
2014-10-28 22:44:36,060 INFO sqlalchemy.engine.base.Engine (20,)
[(30L, 11L), (40L, 12L), (50L, 13L), (60L, 14L), (70L, 15L), (60L, 16L), (70L, 17L), (80L, 18L), (90L, 19L), (50L, 20L), (50L, 21L), (40L, 22L), (40L, 23L), (30L, 11L), (40L, 12L), (50L, 13L), (60L, 14L), (70L, 15L), (60L, 16L), (70L, 17L)]
2014-10-28 22:44:36,062 INFO sqlalchemy.engine.base.Engine SELECT avg(coalesce(track.score, %s)) AS average_game_score FROM track GROUP BY track.userid
2014-10-28 22:44:36,062 INFO sqlalchemy.engine.base.Engine (0,)
[(Decimal('222.4444'),), (Decimal('215.1481'),), (Decimal('23.6667'),), (Decimal('60.0000'),), (Decimal('70.0000'),), (Decimal('60.0000'),), (Decimal('70.0000'),), (Decimal('80.0000'),), (Decimal('90.0000'),), (Decimal('50.0000'),), (Decimal('50.0000'),), (Decimal('52.5000'),), (Decimal('45.0000'),), (Decimal('14.5000'),), (Decimal('0.0000'),), (Decimal('121.5000'),), (Decimal('42.0000'),), (Decimal('550.0000'),)]
```

This shows that the cast is actually discarded in the SQL (which makes some sense, as mysql can't cast to FLOAT, only to DECIMAL — which doesn't make sense, but seems to be the case). But more importantly, there seems to be no better way to express what format the returned column should have. With normal columns I can tell SQLAlchemy that we want the type to be Float, even though the underlying type might be NUMERIC, and sqlalchemy will still convert the result to float for us — to my understanding because Float implies Float(asdecimal=False). As a workaround we added this custom type:

```
#!python
import sqlalchemy.types as types


class MyFloat(types.TypeDecorator):
    impl = types.Float

    def process_bind_param(self, value, dialect):
        return value

    def process_result_value(self, value, dialect):
        return float(value)

    def copy(self):
        return MyFloat()

print session.query(cast(func.avg(func.coalesce(Track.score, 0)), MyFloat).label('average_game_score')).group_by(Track.user_id).all()
```

which creates the output

```
2014-10-28 22:44:36,067 INFO sqlalchemy.engine.base.Engine SELECT avg(coalesce(track.score, %s)) AS average_game_score FROM track GROUP BY track.userid
2014-10-28 22:44:36,067 INFO sqlalchemy.engine.base.Engine (0,)
[(222.4444,), (215.1481,), (23.6667,), (60.0,), (70.0,), (60.0,), (70.0,), (80.0,), (90.0,), (50.0,), (50.0,), (52.5,), (45.0,), (14.5,), (0.0,), (121.5,), (42.0,), (550.0,)]
```

Is it the intentional and expected behavior that a cast to float on a query column is just discarded? If so, how do I annotate a query to tell sqlalchemy that I would please like to get the result in a specific type? I think there is a good argument that the type Float(asdecimal=False) would behave more consistently if it allowed converting a column from a query via the cast operator, just as it does for a column on a model. Or perhaps a different operator than cast should/could be chosen? Right now, the current behavior certainly surprised me. Would you change SQLAlchemy's Float implementation to force the conversion to float in `def process_result_value(...)`, or is there some other better way to achieve this?

From: Mike B. <iss...@bi...> - 2014-10-28 17:11:01
|
New issue 3236: add introspection features to association proxy such that kw vs. positional is automatically inspected? https://bitbucket.org/zzzeek/sqlalchemy/issue/3236/add-introspeciton-features-to-associaiton Mike Bayer: see #647 #2808 #3235 for what keeps coming up. Responsible: sqlalchemy_sprinters |
|
From: Boris K. <iss...@bi...> - 2014-10-28 14:18:04
|
New issue 3235: Association proxy "init" does not work symmetrically https://bitbucket.org/zzzeek/sqlalchemy/issue/3235/association-proxy-init-does-not-work Boris Kozinsky: For some reason, when I define association proxies in the user-keyword example, they only work in one direction, even if I define them symmetrically (please see attached code). For instance, I can do user1.keywords.append(keyword1) but not keyword1.users.append(user1); the second one raises a `KeyError`. I traced this behavior to the order of arguments in the UserKeyword.__init__ method. The behavior is correct in one direction only, depending on the argument order, but never in both. It is somewhat hard to know the right order of arguments for the collection to work... |
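A hedged sketch of one way around the argument-order dependence (class names follow the docs' user/keyword example, not the attached file): give the association object a keyword-argument `__init__` and pass an explicit `creator=` to each proxy, so neither direction relies on positional order.

```python
# Sketch: explicit creator= lambdas spell out which side the appended
# object fills in, so both proxy directions construct UserKeyword correctly.
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    keywords = association_proxy(
        'user_keywords', 'keyword',
        creator=lambda kw: UserKeyword(keyword=kw))

class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    word = Column(String(50))
    users = association_proxy(
        'user_keywords', 'user',
        creator=lambda u: UserKeyword(user=u))

class UserKeyword(Base):
    __tablename__ = 'user_keyword'
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
    user = relationship(User, backref='user_keywords')
    keyword = relationship(Keyword, backref='user_keywords')

    def __init__(self, user=None, keyword=None):  # keyword args, no ordering
        self.user = user
        self.keyword = keyword

u = User(name='u1')
k = Keyword(word='k1')
u.keywords.append(k)     # direction 1 works
k2 = Keyword(word='k2')
k2.users.append(u)       # direction 2 also works, no KeyError
```

Without `creator=`, the proxy falls back to calling `UserKeyword(value)` positionally, which is why only one direction can ever match the `__init__` signature.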
|
From: Marcin K. <iss...@bi...> - 2014-10-23 11:18:39
|
New issue 3234: Query performance degradation with autoloaded tables https://bitbucket.org/zzzeek/sqlalchemy/issue/3234/query-performance-degradation-with Marcin Krol: I get over 3 times slower query execution in SQLAlchemy with autoloaded tables than when executing the query directly. Details are here: [http://stackoverflow.com/questions/26526034/sqlalchemy-query-performance](http://stackoverflow.com/questions/26526034/sqlalchemy-query-performance) |
|
From: Mike B. <iss...@bi...> - 2014-10-23 05:03:51
|
New issue 3233: unexpected/inconsistent aliasing applied to single-table inh subclass in query.join() https://bitbucket.org/zzzeek/sqlalchemy/issue/3233/unexpected-inconsistent-aliasing-applied Mike Bayer:
```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = "a"
    id = Column(Integer, primary_key=True)
    type = Column(String)
    __mapper_args__ = {
        'polymorphic_on': type,
        'polymorphic_identity': 'a'
    }

class ASub1(A):
    __mapper_args__ = {
        'polymorphic_identity': 'asub1'
    }

class ASub2(A):
    __mapper_args__ = {
        'polymorphic_identity': 'asub2'
    }

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a1_id = Column(Integer, ForeignKey("a.id"))
    a2_id = Column(Integer, ForeignKey("a.id"))
    a1 = relationship("A", primaryjoin="B.a1_id == A.id", backref='b1')
    a2 = relationship("A", primaryjoin="B.a2_id == A.id", backref='b2')

s = Session()

print s.query(ASub1).join(B, ASub1.b1).join(ASub2, B.a2)
print s.query(ASub1).join(B, ASub1.b1).join(ASub2, ASub2.id == B.a2_id)
```
first query:
```
#!sql
SELECT a.id AS a_id, a.type AS a_type
FROM a JOIN b ON b.a1_id = a.id
JOIN a ON b.a2_id = a.id AND a.type IN (:type_1)
WHERE a.type IN (:type_2)
```
second query: it's automagically aliasing ASub2, bizarrely *without* the relationship:
```
#!sql
SELECT a.id AS a_id, a.type AS a_type
FROM a JOIN b ON b.a1_id = a.id
JOIN a AS a_1 ON a_1.id = b.a2_id AND a_1.type IN (:type_1)
WHERE a_1.type IN (:type_2)
```
this is really bad. Is this happening with joined inheritance too? What are the rules for the magic aliasing here? Changing this is going to be really backwards-incompatible. |
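A hedged side note (the model below is rebuilt standalone from the report, modernized to current import paths): with an explicit `aliased()` for the single-table subclass, both join styles alias deterministically, sidestepping the question of when the ORM decides to alias on its own.

```python
# Sketch: explicit aliased(ASub2) makes both the relationship-based join
# and the explicit-onclause join render against the same "a AS a_1" alias.
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, aliased, Session

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    type = Column(String)
    __mapper_args__ = {'polymorphic_on': type, 'polymorphic_identity': 'a'}

class ASub1(A):
    __mapper_args__ = {'polymorphic_identity': 'asub1'}

class ASub2(A):
    __mapper_args__ = {'polymorphic_identity': 'asub2'}

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a1_id = Column(Integer, ForeignKey('a.id'))
    a2_id = Column(Integer, ForeignKey('a.id'))
    a1 = relationship('A', primaryjoin='B.a1_id == A.id', backref='b1')
    a2 = relationship('A', primaryjoin='B.a2_id == A.id', backref='b2')

s = Session()
asub2 = aliased(ASub2)  # explicit alias: no guessing about auto-aliasing

# relationship-based join via of_type(), and explicit-onclause join
q1 = str(s.query(ASub1).join(B, ASub1.b1).join(B.a2.of_type(asub2)))
q2 = str(s.query(ASub1).join(B, ASub1.b1).join(asub2, asub2.id == B.a2_id))
```

This does not answer which implicit behavior is correct; it only shows that spelling the alias out removes the inconsistency in user code.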
|
From: Boris K. <iss...@bi...> - 2014-10-23 03:11:25
|
New issue 3232: Type checking still has issues with single-inheritance joins (related to #3222) https://bitbucket.org/zzzeek/sqlalchemy/issue/3232/type-checking-still-has-issues-with-single Boris Kozinsky: When doing multiple joins with single-inheritance objects, type/discriminator checking has issues. In some cases, the checking is not done when "from_joinpoint = True" is used. In other cases, when explicit two-argument joins are used, the type checking is on the wrong alias. See tests 8 and 9 in attached test_inh_query.py file. |
|
From: Nils P. <iss...@bi...> - 2014-10-19 18:04:42
|
New issue 3231: nullsfirst(), nullslast() broken with sqlite https://bitbucket.org/zzzeek/sqlalchemy/issue/3231/nullsfirst-nullslast-broken-with-sqlite Nils Philippsen: Using nullsfirst() or nullslast() in a query against an sqlite database breaks on execution, because sqlite doesn't know about NULLS (FIRST|LAST). See the attached test program:
```
#!python
nils@gibraltar:~/test/sqlalchemy> ./nullsfirstlast.py
Traceback (most recent call last):
  File "./nullsfirstlast.py", line 38, in <module>
    print "\n".join(unicode(x) for x in query)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2438, in __iter__
    return self._execute_and_instances(context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py", line 2453, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
    return meth(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
    context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1159, in _handle_dbapi_exception
    exc_info
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
    context)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (OperationalError) near "NULLS": syntax error u'SELECT foo.foo_id AS foo_foo_id, foo.bar AS foo_bar \nFROM foo ORDER BY foo.bar NULLS FIRST ASC' ()
nils@gibraltar:~/test/sqlalchemy>
```
I haven't found anything in the docs saying that one shouldn't attempt this on sqlite. On the other hand, I've found some recipes on stackoverflow ([here](http://stackoverflow.com/questions/12503120/how-to-do-nulls-last-in-sqlite) and [here](http://stackoverflow.com/questions/19629775/is-this-the-correct-syntax-for-sqlite-for-making-nulls-appear-last)) on how to fake this functionality in SQL, but they seem to use fake columns or special-casing dependent on the column type, so I don't know whether it's feasible to implement nullsfirst/nullslast in SQLAlchemy that way. If I'm not off-track, this issue affects all versions that know nullsfirst()/nullslast(), so everything from 0.7 upward. |
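A minimal emulation sketch, assuming a `Foo` model reconstructed from the traceback's column names (illustrative only): on SQLite builds without native `NULLS FIRST/LAST` syntax (it only arrived in SQLite 3.30), ordering first by the boolean `IS NULL` expression gives the same effect without fake columns or type-dependent special cases.

```python
# Sketch: in SQLite, a boolean expression sorts False (0) before True (1),
# so ORDER BY (bar IS NULL), bar puts NULLs last; reverse with desc() for
# a NULLS FIRST emulation.  The Foo model here is an assumed reconstruction.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foo'
    foo_id = Column(Integer, primary_key=True)
    bar = Column(String)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Foo(bar=None), Foo(bar='a'), Foo(bar='b')])
session.commit()

# NULLS LAST emulation: non-NULL rows (IS NULL -> 0) sort first.
rows = session.query(Foo.bar).order_by(Foo.bar.is_(None), Foo.bar).all()
print(rows)  # [('a',), ('b',), (None,)]
```

This is portable SQL, so a dialect-level compilation of nullslast() along these lines would at least be type-independent, though whether SQLAlchemy should do that automatically is a separate question.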