sqlalchemy-tickets Mailing List for SQLAlchemy (Page 10)
Brought to you by: zzzeek
From: Michael B. <iss...@bi...> - 2017-11-22 22:55:50
New issue 4137: sharding token extension in identity key https://bitbucket.org/zzzeek/sqlalchemy/issues/4137/sharding-token-extension-in-identity-key

Michael Bayer:

Consider a sharded setup where the primary tables in each database include a shard-unique primary key, but those tables then branch off into sub-tables that do not. In order to represent these objects in an identity map, we would need to support an additional token within the identity key. Generation of the identity key is present against the row in loading.py -> instances, so at that point some token from the query context would be pulled (more likely a callable function or event, so that this can be generic), and then at the instance level users would need to define some attribute or callable that can retrieve the shard identifier for an instance. Another approach would be that the whole identity key is generated from user-defined callables that receive either a row-in-context or an instance.
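The shape of the idea can be sketched outside of SQLAlchemy entirely; everything below is hypothetical illustration (the real identity key is generated in loading.py, and none of these names are SQLAlchemy's API), showing only why an extra token is needed:

```python
# Illustrative only: adding a shard token to the identity key keeps rows
# that share a class and primary key, but come from different shards,
# distinct in an identity-map-like dict.

class User(object):
    # stand-in for a mapped class
    pass


def identity_key(cls, pk, token=None):
    # hypothetical extended key: (class, pk_tuple, shard_token);
    # token=None preserves the non-sharded behavior
    return (cls, pk, token)


identity_map = {}
identity_map[identity_key(User, (1,), token="shard_a")] = "user 1 from shard_a"
identity_map[identity_key(User, (1,), token="shard_b")] = "user 1 from shard_b"

# same class and primary key, but two distinct entries thanks to the token
print(len(identity_map))  # 2
```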
From: ploutch <iss...@bi...> - 2017-11-13 10:27:19
New issue 4136: In MySQL, visit_concat_op_binary does not forward kwargs https://bitbucket.org/zzzeek/sqlalchemy/issues/4136/in-mysql-visit_concat_op_binary-does-not

ploutch:

In `sqlalchemy.dialects.mysql.base`, line 817, `visit_concat_op_binary` should pass its keyword arguments to both calls of `self.process`. The bug happened when using the `CreateView` recipe from https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/Views . Something like this should reproduce the bug:

```
#!python
import sqlalchemy as sa
from sqlalchemy.schema import DDLElement
from sqlalchemy.ext import compiler


class CreateView(DDLElement):
    __visit_name__ = "create_view"

    def __init__(self, name, selectable, or_replace=False):
        super(CreateView, self).__init__()
        self.name = name
        self.selectable = selectable
        self.or_replace = or_replace


@compiler.compiles(CreateView)
def compile_create(element, compiler, **kw):
    cmd = (
        'CREATE OR REPLACE' if element.or_replace else 'CREATE',
        'VIEW',
        element.name,
        'AS',
        compiler.sql_compiler.process(element.selectable, literal_binds=True)
    )
    return ' '.join(cmd)


definition = sa.select([table.c.value]).where(table.c.value.contains('foo'))
query = CreateView('bar', definition)
engine.execute(query)
```

(Considering that `engine` is an Engine and `table` is a Table with a string column called `value`.) You should obtain the following traceback when running the query:

```
#!
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 2064, in execute
  return connection.execute(statement, *multiparams, **params)
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute
  return meth(self, multiparams, params)
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1002, in _execute_ddl
  compiled
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
  context)
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception
  util.reraise(*exc_info)
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
  context)
File "/home/user/dev/env_201601/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
  cursor.execute(statement, parameters)
File "/home/user/dev/env_201601/lib/python2.7/site-packages/MySQLdb/cursors.py", line 187, in execute
  query = query % tuple([db.literal(item) for item in args])
TypeError: not enough arguments for format string
```

Database: MySQL 5.5.57
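The fix described above amounts to forwarding `**kw` into each nested `self.process()` call. A minimal pure-Python sketch of why that matters, using stand-in functions rather than SQLAlchemy's actual compiler classes:

```python
# Stand-in for the compiler's process(): the literal_binds keyword
# changes how an element is rendered.
def process(element, **kw):
    if kw.get("literal_binds"):
        return repr(element)
    return ":param"


def concat_buggy(left, right, **kw):
    # drops **kw, so literal_binds never reaches the operands
    return "concat(%s, %s)" % (process(left), process(right))


def concat_fixed(left, right, **kw):
    # forwards **kw to both nested calls, as the report suggests
    # for visit_concat_op_binary
    return "concat(%s, %s)" % (process(left, **kw), process(right, **kw))


print(concat_buggy("a", "b", literal_binds=True))  # concat(:param, :param)
print(concat_fixed("a", "b", literal_binds=True))  # concat('a', 'b')
```

The buggy variant leaves bind-parameter placeholders in the rendered string even though the caller asked for literals, which is exactly what produces the "not enough arguments for format string" error once the statement reaches the DBAPI with no parameters supplied.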
From: Rishi S. <iss...@bi...> - 2017-11-11 14:03:49
New issue 4135: Enable set_shard on baked queries https://bitbucket.org/zzzeek/sqlalchemy/issues/4135/enable-set_shard-on-baked-qurires

Rishi Sharma:

I am exploring the use of baked queries to help reduce the compilation overhead. It looks great; however, we use ShardedQuery as the query_cls in our session and need the flexibility to call set_shard on the query object. The reason for this is to avoid our catch-all implementation of query_chooser, which calls query.statement and negates the performance gain from the baked query. Roughly speaking, being able to do the following would be useful for us:

```
#!python
baked_query(session).params(user_id=user_id).set_shard(shard_id).one_or_none()
```

We are currently on 1.1.15.
From: Jack Z. <iss...@bi...> - 2017-11-10 21:59:18
New issue 4134: `AttributeEvents.remove` has inconsistent timing https://bitbucket.org/zzzeek/sqlalchemy/issues/4134/attributeeventsremove-has-inconsistent

Jack Zhou:

`AttributeEvents.set` and `AttributeEvents.append` both fire before the underlying collection has been changed, but the behavior of `AttributeEvents.remove` is not consistent, both between different methods of a single collection and between the same methods of different collection types. Demo:

```
#!python
class Department(Base):
    __tablename__ = "departments"
    id = Column(Integer, primary_key=True)
    employees = relationship(lambda: Employee, cascade="all, delete-orphan")
    executives = relationship(
        lambda: Executive,
        collection_class=attribute_mapped_collection("type"),
        cascade="all, delete-orphan")
    contractors = relationship(
        lambda: Contractor, collection_class=set, cascade="all, delete-orphan")


class Executive(Base):
    __tablename__ = "executives"
    id = Column(Integer, primary_key=True)
    type = Column(String, nullable=False)
    department_id = Column(Integer, ForeignKey(Department.id), nullable=False)


class Employee(Base):
    __tablename__ = "employees"
    id = Column(Integer, primary_key=True)
    department_id = Column(Integer, ForeignKey(Department.id), nullable=False)


class Contractor(Base):
    __tablename__ = "contractors"
    id = Column(Integer, primary_key=True)
    department_id = Column(Integer, ForeignKey(Department.id), nullable=False)


def _set_up_event(attr, name):
    @event.listens_for(attr, name, raw=True)
    def _check_if_modified(target, value, initiator):
        print(attr, name, target.attrs[initiator.key].history.has_changes())


session = Session()
emp1 = Employee()
emp2 = Employee()
exec1 = Executive(type="finance")
exec2 = Executive(type="operations")
con1 = Contractor()
con2 = Contractor()
dept = Department(
    employees=[emp1, emp2],
    executives={"finance": exec1, "operations": exec2},
    contractors={con1, con2})
session.add(dept)
session.commit()

for attr in [Department.employees, Department.contractors, Department.executives]:
    for name in ["remove", "append"]:
        _set_up_event(attr, name)

# list
print("remove()")
dept.employees.remove(emp1)
session.rollback()
print("pop()")
dept.employees.pop()
session.rollback()
print("clear()")
dept.employees.clear()
session.rollback()
emp3 = Employee()
print("[0] = ...")
dept.employees[0] = emp3
session.rollback()
print("del [0]")
del dept.employees[0]
session.rollback()
print("del [0:2]")
del dept.employees[0:2]
session.rollback()
print("[0:2] = ...")
dept.employees[0:2] = [emp3]
session.rollback()

# set
print("discard()")
dept.contractors.discard(con1)
session.rollback()
print("remove()")
dept.contractors.remove(con1)
session.rollback()
print("pop()")
dept.contractors.pop()
session.rollback()
print("clear()")
dept.contractors.clear()
session.rollback()
print("-=")
dept.contractors -= {con1, con2}
session.rollback()
con3 = Contractor()
print("&=")
dept.contractors &= {con3}
session.rollback()
print("^=")
dept.contractors ^= {con1, con2}
session.rollback()

# dict
print("pop()")
dept.executives.pop("finance")
session.rollback()
print("popitem()")
dept.executives.popitem()
session.rollback()
print("clear()")
dept.executives.clear()
session.rollback()
exec3 = Executive(type="finance")
print('["finance"] = ...')
dept.executives["finance"] = exec3
session.rollback()
print('del ["finance"]')
del dept.executives["finance"]
session.rollback()
exec4 = Executive(type="operations")
print("update()")
dept.executives.update({"finance": exec3, "operations": exec4})
session.rollback()
```

The following table summarizes the current behavior:

method | list | set | dict
---|---|---|---
`remove` | after | before | -
`clear` | before | before\* | before
`pop`/`popitem` | after | after | after
`dict.pop` | - | - | before
`__setitem__` | before | - | before
`__setitem__` (slice) | before\* | - | -
`__delitem__` | before | - | before
`__delitem__` (slice) | before | - | -
`__isub__` | - | before\* | -
`__iand__` | - | before\* | -
`__ixor__` | - | before\* | -
`update` | - | - | before\*
`discard` | - | before | -

\* "before\*" means the event fires before the removal of the current item, but after the removal of the previous item.

Ideally they should all behave like either "before" or "before\*" (but not both, for consistency). Technically it's not really a bug, because the documentation doesn't prescribe any one particular behavior, but the inconsistency is annoying. For me personally, the motivating use case is being able to intercept *all* changes to a particular instance and act before any changes are made; currently my use case is always okay for `append`/`set`, but only sometimes for `remove`.
From: dima-starosud <iss...@bi...> - 2017-11-09 10:42:38
New issue 4133: configure_mappers does not refresh all_orm_descriptors https://bitbucket.org/zzzeek/sqlalchemy/issues/4133/configure_mappers-does-not-refresh

dima-starosud:

This is using sqlalchemy 1.2.0b3. I've faced weird behavior, shown in the following code snippet. The assertion in the test fails, but if you comment out either `#1` or `#2` (or both) it passes.

```
#!python
Base = declarative_base()


class Target(Base):
    __tablename__ = 'target'
    id = Column(Integer, primary_key=True)


descriptors = lambda: inspect(Target).all_orm_descriptors.keys()

# these two calls make the test fail
configure_mappers()  #1
descriptors()  #2

Target.value = hybrid_property(lambda _: 'Hello!')
configure_mappers()
assert 'value' in descriptors()
```

Please let me know if I got it wrong. Thanks a lot in advance!
From: Keonwoo K. <iss...@bi...> - 2017-11-09 03:30:12
New issue 4132: Documentation: benchmark code is wrong https://bitbucket.org/zzzeek/sqlalchemy/issues/4132/documentation-benchmark-code-is-wrong

Keonwoo Kang:

On the [Performance](http://docs.sqlalchemy.org/en/latest/faq/performance.html) page there is code comparing session.add and session.bulk_save_objects. In this code, `test_sqlalchemy_orm_bulk_save_objects` actually inserts 90,000 records while `test_sqlalchemy_orm` inserts 100,000 records. Because `n1 = n1 - 10000` executes before the insert, the last loop iteration inserts 0 records.
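The off-by-one described above can be reproduced with plain counters. A minimal sketch of the loop shape (hypothetical helper names; this is not the FAQ benchmark itself), assuming 10,000-row chunks:

```python
def rows_inserted_buggy(n, chunk=10000):
    # mirrors the reported bug: the counter is decremented BEFORE the
    # chunk is built, so the final pass builds min(chunk, 0) == 0 rows
    inserted = 0
    n1 = n
    while n1 > 0:
        n1 = n1 - chunk
        inserted += max(min(chunk, n1), 0)
    return inserted


def rows_inserted_fixed(n, chunk=10000):
    # count the chunk first, then decrement the counter
    inserted = 0
    n1 = n
    while n1 > 0:
        inserted += min(chunk, n1)
        n1 = n1 - chunk
    return inserted


print(rows_inserted_buggy(100000))  # 90000
print(rows_inserted_fixed(100000))  # 100000
```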
From: erdnaxeli <iss...@bi...> - 2017-11-07 18:07:14
New issue 4131: Object deleted by delete-orphan not marked as deleted https://bitbucket.org/zzzeek/sqlalchemy/issues/4131/object-deleted-by-delete-orphan-not-marked

erdnaxeli:

Hi,

When removing an object from a collection, if this object is deleted because of `cascade="all, delete-orphan"`, the object is not marked as deleted.

```
#!/usr/bin/env python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.inspection import inspect
from sqlalchemy.orm import backref, relationship, sessionmaker

engine = create_engine('sqlite:///:memory:', echo=True)
Base = declarative_base()


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Pet(Base):
    __tablename__ = 'pets'
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey('users.id'), nullable=False)
    name = Column(String)
    user = relationship(
        'User', backref=backref('pets', cascade='all, delete-orphan')
    )


Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()


@event.listens_for(Session, 'after_flush')
def do_something_with_updated_objects(session, flush_context):
    for obj in session.dirty:
        print(obj.name)
        i = inspect(obj)
        if not i.deleted:
            session.refresh(obj)


print('> Create objects')
george = User(id=1, name='George')
rex = Pet(id=1, name='Rex')
george.pets.append(rex)
session.add(george)
session.commit()

print('> Remove Rex')
george = session.query(User).filter_by(name='George').scalar()
assert(len(george.pets) == 1)
rex = george.pets[0]
# removing Rex will actually delete it, because of delete-orphan
george.pets.remove(rex)
session.add(george)
session.commit()

george = session.query(User).filter_by(name='George').scalar()
assert(len(george.pets) == 0)
```

Output:

```
> Create objects
2017-11-07 19:03:07,351 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2017-11-07 19:03:07,352 INFO sqlalchemy.engine.base.Engine INSERT INTO users (id, name) VALUES (?, ?)
2017-11-07 19:03:07,352 INFO sqlalchemy.engine.base.Engine (1, 'George')
2017-11-07 19:03:07,352 INFO sqlalchemy.engine.base.Engine INSERT INTO pets (id, user_id, name) VALUES (?, ?, ?)
2017-11-07 19:03:07,353 INFO sqlalchemy.engine.base.Engine (1, 1, 'Rex')
2017-11-07 19:03:07,353 INFO sqlalchemy.engine.base.Engine COMMIT
> Remove Rex
2017-11-07 19:03:07,353 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2017-11-07 19:03:07,354 INFO sqlalchemy.engine.base.Engine SELECT users.id AS users_id, users.name AS users_name FROM users WHERE users.name = ?
2017-11-07 19:03:07,354 INFO sqlalchemy.engine.base.Engine ('George',)
2017-11-07 19:03:07,354 INFO sqlalchemy.engine.base.Engine SELECT pets.id AS pets_id, pets.user_id AS pets_user_id, pets.name AS pets_name FROM pets WHERE ? = pets.user_id
2017-11-07 19:03:07,354 INFO sqlalchemy.engine.base.Engine (1,)
2017-11-07 19:03:07,355 INFO sqlalchemy.engine.base.Engine DELETE FROM pets WHERE pets.id = ?
2017-11-07 19:03:07,355 INFO sqlalchemy.engine.base.Engine (1,)
Rex
2017-11-07 19:03:07,356 INFO sqlalchemy.engine.base.Engine SELECT pets.id AS pets_id, pets.user_id AS pets_user_id, pets.name AS pets_name FROM pets WHERE pets.id = ?
2017-11-07 19:03:07,356 INFO sqlalchemy.engine.base.Engine (1,)
2017-11-07 19:03:07,356 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
  File "t.py", line 61, in <module>
    session.commit()
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 921, in commit
    self.transaction.commit()
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 461, in commit
    self._prepare_impl()
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 441, in _prepare_impl
    self.session.flush()
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2192, in flush
    self._flush(objects)
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2312, in _flush
    transaction.rollback(_capture_exception=True)
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2280, in _flush
    self.dispatch.after_flush(self, flush_context)
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/event/attr.py", line 218, in __call__
    fn(*args, **kw)
  File "t.py", line 43, in do_something_with_updated_objects
    session.refresh(obj)
  File "/home/amorigno/bacasable/delete/.venv/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1474, in refresh
    instance_str(instance))
sqlalchemy.exc.InvalidRequestError: Could not refresh instance '<Pet at 0x7f21393a5490>'
```

I would expect `i.deleted` to be `True`.

SQLAlchemy==1.1.15, Python 2.7.6
From: dima-starosud <iss...@bi...> - 2017-11-02 12:11:16
New issue 4130: Chained contains_eager for inherited relationships https://bitbucket.org/zzzeek/sqlalchemy/issues/4130/chained-contains_eager-for-inherited

dima-starosud:

This is using sqlalchemy 1.2.0b3. Please consider the following code snippet; sorry it's a bit long. It basically defines polymorphic models `_A`, `_B`, `_C`, `A1`, `B1`, `C1` and the relationships `_A <-> _B` and `B1 <-> C1` between them. Then it applies `contains_eager` in different ways: `A1 * B1`, `B1 * C1`, and `A1 * B1 * C1`. Since it works for queries `ab` and `bc`, I would expect it to also work for `abc`, or to give some warning/error.

```
#!python
class Common:
    @declared_attr
    def id(cls):
        return Column(Integer, primary_key=True)

    @declared_attr
    def type(cls):
        return Column(String, nullable=False)

    @declared_attr
    def __mapper_args__(cls):
        return {'polymorphic_on': cls.type}


Base = declarative_base()


class _A(Base, Common):
    __tablename__ = 'a'
    b = relationship('_B', back_populates='a')


class _B(Base, Common):
    __tablename__ = 'b'
    a_id = Column(Integer, ForeignKey(_A.id))
    a = relationship(_A, back_populates='b')


class _C(Base, Common):
    __tablename__ = 'c'
    b_id = Column(Integer, ForeignKey(_B.id))


class A1(_A):
    __mapper_args__ = {'polymorphic_identity': 'A1'}


class B1(_B):
    __mapper_args__ = {'polymorphic_identity': 'B1'}


class C1(_C):
    __mapper_args__ = {'polymorphic_identity': 'C1'}
    b1 = relationship(B1, backref='c1')


configure_mappers()

ab = Query(A1) \
    .outerjoin(B1, A1.b).options(contains_eager(A1.b, alias=B1))
print(ab)
# SELECT b.a_id AS b_a_id, b.type AS b_type, b.id AS b_id
#      , a.type AS a_type, a.id AS a_id
# FROM ...

bc = Query(B1) \
    .outerjoin(C1, B1.c1).options(contains_eager(B1.c1, alias=C1))
print(bc)
# SELECT b.a_id AS b_a_id, b.type AS b_type, b.id AS b_id
#      , c.b_id AS c_b_id, c.type AS c_type, c.id AS c_id
# FROM ...

abc = Query(A1) \
    .outerjoin(B1, A1.b).options(contains_eager(A1.b, alias=B1)) \
    .outerjoin(C1, B1.c1).options(contains_eager(A1.b, B1.c1, alias=C1))
print(abc)
# doesn't contain "c":
# SELECT b.a_id AS b_a_id, b.type AS b_type, b.id AS b_id
#      , a.type AS a_type, a.id AS a_id
# FROM ...
```

There is a workaround though. It looks like redefining `A1.b` has an effect on how `B1.c1` is contained:

```
#!python
B1.a = relationship(A1, back_populates='b')
A1.b = relationship(B1, back_populates='a')
configure_mappers()

abc = Query(A1) \
    .outerjoin(B1, A1.b).options(contains_eager(A1.b, alias=B1)) \
    .outerjoin(C1, B1.c1).options(contains_eager(A1.b, B1.c1, alias=C1))
print(abc)
# SELECT b.a_id AS b_a_id, b.type AS b_type, b.id AS b_id
#      , c.b_id AS c_b_id, c.type AS c_type, c.id AS c_id
#      , a.type AS a_type, a.id AS a_id
# FROM ...
```
From: Michael B. <iss...@bi...> - 2017-11-01 14:31:20
New issue 4129: mapper_configured event 100% non functional, no tests, need to remove from docs ASAP until fixed https://bitbucket.org/zzzeek/sqlalchemy/issues/4129/mapper_configured-event-100-non-functional

Michael Bayer:

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import event
from sqlalchemy.orm import mapper

# TypeError: 'memoized_property' object is not iterable
# @event.listens_for(mapper, "mapper_configured", propagate=True)
# def evt1(*arg, **kw):
#     print "Evt1"

Base = declarative_base()

# nothing
@event.listens_for(Base, "mapper_configured", propagate=True)
def evt2(*arg, **kw):
    print "Evt2"


class SomeMixin(object):
    pass

# nothing
@event.listens_for(SomeMixin, "mapper_configured", propagate=True)
def evt3(*arg, **kw):
    print "Evt3"


class A(SomeMixin, Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B")


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
```
From: Michael B. <iss...@bi...> - 2017-11-01 13:54:44
New issue 4128: common mapperoption recipes broken since 1.0 https://bitbucket.org/zzzeek/sqlalchemy/issues/4128/common-mapperoption-recipes-broken-since

Michael Bayer:

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
import datetime
from sqlalchemy.orm.interfaces import MapperOption

e = create_engine('sqlite://', echo=True)
Base = declarative_base(e)


class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    timestamp = Column(TIMESTAMP, nullable=False)
    children = relation("Child")


class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('parent.id'), nullable=False)
    timestamp = Column(TIMESTAMP, nullable=False)


Base.metadata.create_all()

Parent.temporal_children = relation(
    Child,
    primaryjoin=and_(
        Parent.id == Child.parent_id,
        Child.timestamp.between(
            bindparam("temporal_lower"),
            bindparam("temporal_upper")
        )
    ),
    viewonly=True
)

session = sessionmaker()()

c1, c2, c3, c4, c5 = [
    Child(timestamp=datetime.datetime(2009, 10, 15, 12, 00, 00)),
    Child(timestamp=datetime.datetime(2009, 10, 17, 12, 00, 00)),
    Child(timestamp=datetime.datetime(2009, 10, 20, 12, 00, 00)),
    Child(timestamp=datetime.datetime(2009, 10, 12, 12, 00, 00)),
    Child(timestamp=datetime.datetime(2009, 10, 17, 12, 00, 00)),
]

p1, p2 = [
    Parent(
        timestamp=datetime.datetime(2009, 10, 15, 12, 00, 00),
        children=[c1, c2, c3]
    ),
    Parent(
        timestamp=datetime.datetime(2009, 10, 17, 12, 00, 00),
        children=[c4, c5]
    )]

session.add_all([p1, p2])
session.commit()


class TemporalOption(MapperOption):
    propagate_to_loaders = True

    def __init__(self, range_lower, range_upper):
        self.range_lower = range_lower
        self.range_upper = range_upper

    def process_query_conditionally(self, query):
        """process query during a lazyload"""
        query._params = query._params.union(
            dict(
                temporal_lower=self.range_lower,
                temporal_upper=self.range_upper
            ))

    def process_query(self, query):
        """process query during a primary user query"""

        # apply bindparam values
        self.process_query_conditionally(query)

        # requires a query against a single mapper
        parent_cls = query._mapper_zero().class_
        filter_crit = parent_cls.timestamp.between(
            bindparam("temporal_lower"),
            bindparam("temporal_upper")
        )

        if query._criterion is None:
            query._criterion = filter_crit
        else:
            query._criterion = query._criterion & filter_crit


session.expire_all()  # for a clean test

# ISSUE 1: if we don't do populate_existing() here, the load_options
# are not applied to Parent because as of 1.0 we no longer
# apply load_options unconditionally, only on "new" or "populate existing"
parents = session.query(Parent).\
    options(
        TemporalOption(
            datetime.datetime(2009, 10, 16, 12, 00, 00),
            datetime.datetime(2009, 10, 18, 12, 00, 00))
    ).all()

assert parents[0] == p2

# ISSUE 2: _emit_lazyload() now calls:
#     lazy_clause, params = self._generate_lazy_clause(
#         state, passive=passive)
# producing params separate from the binds,
# due to e3b46bd62405b6ff57119e164718118f3e3565e0
# issue #3054, introduction of baked query.  params set up by the
# mapper option must be honored
assert parents[0].temporal_children == [c5]
```
From: Philip M. <iss...@bi...> - 2017-11-01 03:03:56
New issue 4127: Allowing schema name to be specified for table construction https://bitbucket.org/zzzeek/sqlalchemy/issues/4127/allowing-schema-name-to-be-specified-for

Philip Martin:

I want to propose a change to the table expression class so it can accept a schema name in its constructor. I believe this is useful when building queries that reference table objects with different schema names. My proposed enhancement would follow the same convention used in the Table class, allowing for:

```python
from sqlalchemy import column, table, MetaData, Table, Column, CHAR

# this currently generates a TypeError due to an invalid keyword argument
t = table('bar', column('x'), schema='foo')

# same API as the Table class
t = Table('bar', MetaData(),
          Column('x', CHAR(2), primary_key=True),
          schema='foo')
```
From: Nicolas C. <iss...@bi...> - 2017-10-31 11:04:16
New issue 4126: ColumnDefault.__repr__ does not cope with tuple in composite types https://bitbucket.org/zzzeek/sqlalchemy/issues/4126/columndefault__repr__-does-not-cope-with

Nicolas CANIART:

We defined a composite type that is seen as a tuple on the Python side: a value with a unit. Some columns are defined with a default value like `(0, Unit.ton)`. The `__repr__` method of the ColumnDefault class cannot cope with such values, as the tuple is interpreted as arguments to the format string rather than the actual value to format. A simple fix is to wrap the format argument with `(` and `,)` to make it safe.
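The suggested fix works because of how the `%` operator treats tuples; a quick sketch (the `ColumnDefault(...)` string is illustrative, not the class's actual `__repr__`):

```python
# Demonstrates the %-formatting pitfall behind the report: a bare tuple
# on the right-hand side of % is unpacked as positional arguments.
# (0, 'ton') stands in for the composite amount-plus-unit default.

value = (0, 'ton')

try:
    "ColumnDefault(%r)" % value  # tuple unpacked: two args for one %r
except TypeError as exc:
    print(exc)  # not all arguments converted during string formatting

# wrapping in a one-element tuple formats the tuple itself
print("ColumnDefault(%r)" % (value,))  # ColumnDefault((0, 'ton'))
```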
From: Bryan B. <iss...@bi...> - 2017-10-30 10:53:06
New issue 4125: AttributeError in StrSQLCompiler https://bitbucket.org/zzzeek/sqlalchemy/issues/4125/attributeerror-in-strsqlcompiler

Bryan Brown:

I've attached an image file (only one is needed to reproduce this error, but more like it can be added freely) in the proper folder hierarchy. For reference, `uuid` is a short string, `dim` is an integer, and `array` is also a string (of comma-delimited integers). Their output is shown in line 5 of my Output ("Inserting..."). As the code suggests, you'll need a recent build of Pillow to get the PIL functionality. The problem statement seems to be on line 29/30.

# scan.py #

```
#!python
import os
import hashlib
import base64
import MySQLdb
import sqlalchemy
from PIL import Image
from sqlalchemy import create_engine, Table, Column, Integer, Text, MetaData
from sqlalchemy.dialects import mysql

imdim = 8
hasher = hashlib.md5()

def getuuid(array):
    hasher.update(array.encode("utf-8"))
    return str(base64.urlsafe_b64encode(hasher.digest()), "utf-8")

def scan(imdata):
    for size in range(4, imdim-1, 4):
        for x in range(imdim*(imdim-1)):
            temp = []
            for y in range(size):
                start = x+(y*imdim)
                temp = temp + (imdata[start:start+size])
            print(temp)
            temp = [str(x) for x in temp]
            tempstr = ','.join(temp)
            uuid = getuuid(tempstr)
            print("Inserting", uuid, size, tempstr)
            insert = test.insert().values(uuid=uuid, dim=size, array=tempstr)
            con.execute(insert.compile())

eng = create_engine('mysql://[creds and ip redacted]/Test')
print("Connected")
with eng.connect() as con:
    con.execute('''DROP TABLE IF EXISTS test''')
    print("Dropped old test")
    metadata = MetaData()
    test = Table('test', metadata,
                 Column('uuid', Text),
                 Column('dim', Integer),
                 Column('array', Text),
                 )
    metadata.create_all(eng)
    print("Made new test")
    thumbs = os.walk('/home/bryan/cake-'+str(imdim))
    count = 0
    for dirpath, dirnames, filenames in thumbs:
        for filename in filenames:
            im = Image.open(os.path.join(dirpath, filename))
            imdata = list(im.getdata())
            scan(imdata)
```

# Full Output #

```
#!python
bryan@cake:~/scripts$ python3 scan.py
Connected
Dropped old test
Made new test
[89, 68, 33, 33, 114, 76, 70, 89, 134, 112, 116, 117, 125, 108, 66, 62]
Inserting Nyr46IPx33dO-EGb3CX7dA== 4 89,68,33,33,114,76,70,89,134,112,116,117,125,108,66,62
Traceback (most recent call last):
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1116, in _execute_context
    context = constructor(dialect, self, conn, *args)
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 621, in _init_compiled
    for key in self.compiled.positiontup:
AttributeError: 'StrSQLCompiler' object has no attribute 'positiontup'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "scan.py", line 53, in <module>
    scan(imdata)
  File "scan.py", line 30, in scan
    con.execute(insert.compile())
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/sql/compiler.py", line 227, in _execute_on_connection
    return connection._execute_compiled(self, multiparams, params)
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1075, in _execute_compiled
    compiled, parameters
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1121, in _execute_context
    None, None)
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
    exc_info
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
    raise value.with_traceback(tb)
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1116, in _execute_context
    context = constructor(dialect, self, conn, *args)
  File "/home/bryan/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 621, in _init_compiled
    for key in self.compiled.positiontup:
sqlalchemy.exc.StatementError: (builtins.AttributeError) 'StrSQLCompiler' object has no attribute 'positiontup' [SQL: 'INSERT INTO test (uuid, dim, "array") VALUES (:uuid, :dim, :array)']
```

# Version #

```
#!python
>>> sqlalchemy.__version__
'1.1.14'
```

```
#!python
root@cake-db ~# mysqladmin version -p
Enter password:
mysqladmin  Ver 8.42 Distrib 5.5.58, for debian-linux-gnu on x86_64
Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Server version          5.5.58-0+deb8u1
Protocol version        10
Connection              Localhost via UNIX socket
UNIX socket             /var/run/mysqld/mysqld.sock
Uptime:                 3 days 2 hours 10 min 23 sec

Threads: 1  Questions: 2700  Slow queries: 0  Opens: 314  Flush tables: 2  Open tables: 41  Queries per second avg: 0.010
```
From: Michael B. <iss...@bi...> - 2017-10-28 16:27:34
|
New issue 4124: AbstractConcreteBase transmitting inappropriate "load only" props to expired relationship item https://bitbucket.org/zzzeek/sqlalchemy/issues/4124/abstractconcretebase-transmitting

Michael Bayer:

``` #!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import AbstractConcreteBase

Base = declarative_base()


class Abstract(AbstractConcreteBase, Base):
    pass

# using this base, no problem
# class Abstract(Base):
#     __abstract__ = True


class Concrete(Abstract):
    __tablename__ = 'concrete'
    id = Column(Integer, primary_key=True)
    details = relationship("Detail")


class Detail(Base):
    __tablename__ = 'detail'
    id = Column(Integer, primary_key=True)
    c2id = Column(ForeignKey('concrete.id'))


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add(Concrete(details=[Detail()]))
s.commit()

c1 = s.query(Concrete).first()

# triggers bug
s.expire(c1)
c1.details
```

The trace illustrates that the pseudo-column "type" is involved; any other columns that get transmitted up to the base from other sibling classes end up here as well:

``` #!
Traceback (most recent call last):
  File "test.py", line 44, in <module>
    c1.details
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 242, in __get__
    return self.impl.get(instance_state(instance), dict_)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 603, in get
    value = self.callable_(state, passive)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 623, in _load_for_state
    return self._emit_lazyload(session, state, ident_key, passive)
  File "<string>", line 1, in <lambda>
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 733, in _emit_lazyload
    lazy_clause, params = self._generate_lazy_clause(state, passive)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 556, in _generate_lazy_clause
    state, dict_, ident, passive)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/mapper.py", line 2609, in _get_state_attr_by_column
    return state.manager[prop.key].impl.get(state, dict_, passive=passive)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/attributes.py", line 598, in get
    value = state._load_expired(state, passive)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/state.py", line 594, in _load_expired
    self.manager.deferred_scalar_loader(self, toload)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 835, in load_scalar_attributes
    only_load_props=attribute_names)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 231, in load_on_ident
    return q.one()
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 2835, in one
    ret = self.one_or_none()
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 2805, in one_or_none
    ret = list(self)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 97, in instances
    util.raise_from_cause(err)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 60, in instances
    for query_entity in query._entities
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 3720, in row_processor
    polymorphic_discriminator=self._polymorphic_discriminator
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 307, in _instance_processor
    mapper._props[k] for k in only_load_props)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 307, in <genexpr>
    mapper._props[k] for k in only_load_props)
KeyError: 'type'
```
|
From: rdunklau <iss...@bi...> - 2017-10-26 13:35:01
|
New issue 4123: Feature Request: Nested CTEs https://bitbucket.org/zzzeek/sqlalchemy/issues/4123/feature-request-nested-ctes

rdunklau:

Hello. Sorry if this is not a proper feature request and should have gone to the mailing list first. It seems that it is impossible to generate nested CTEs, since CTEs are only rendered at the top level (when the compiler stack is empty). The goal would be to generate SQL looking like this:

``` #!sql
WITH t AS (
    WITH t2 AS (
        SELECT 1
    )
    SELECT * FROM t2
)
SELECT * FROM t
```

The real use case is a bit more complicated: the goal would be to be able to reference a recursive CTE more than once in the recursive term (which is disallowed, at least in PostgreSQL), by materializing it in its own CTE on every recursion, something akin to:

``` #!sql
WITH RECURSIVE cte AS (
    SELECT initial_term
    UNION ALL (
        WITH mat_previous_result AS (
            SELECT * FROM cte WHERE <recursion_stop_condition>
        )
        SELECT something FROM mat_previous_result
        UNION ALL
        SELECT anotherthing FROM mat_previous_result
    )
)
SELECT * FROM cte WHERE <recursion_stop_condition>
```
|
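[Editor's note] For what it's worth, the requested syntax is already accepted by databases that allow a WITH clause on a CTE body; a minimal check with the stdlib sqlite3 module (column alias `x` added for illustration), showing the SQL itself is valid even though the compiler cannot emit it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite accepts a WITH clause inside a CTE body, so the target SQL is
# valid as plain text even though SQLAlchemy cannot render it nested
row = conn.execute("""
    WITH t AS (
        WITH t2 AS (SELECT 1 AS x)
        SELECT x FROM t2
    )
    SELECT x FROM t
""").fetchone()
assert row == (1,)
```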
From: Michael B. <iss...@bi...> - 2017-10-25 15:37:57
|
New issue 4122: postgresql get_indexes() is returning a record for an EXCLUDE constraint https://bitbucket.org/zzzeek/sqlalchemy/issues/4122/postgresql-get_indexes-is-returning-a

Michael Bayer:

not sure what's going on here because the table.indexes collection does not get an entry, but in any case this breaks Alembic:

``` #!python
from sqlalchemy import *
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

m = MetaData()
Table(
    't', m,
    Column('id', Integer, primary_key=True),
    Column('period', TSRANGE),
    ExcludeConstraint(('period', '&&'), name='quarters_period_excl')
)

e = create_engine("postgresql://scott:tiger@localhost/test", echo='debug')
# e.execute("CREATE EXTENSION btree_gist")

m.drop_all(e)
m.create_all(e)

insp = inspect(e)
print insp.get_indexes('t')

# m2 = MetaData()
# t2 = Table('t', m2, autoload_with=e)
# print t2.indexes
```

output:

```
[{'unique': False, 'duplicates_constraint': u'quarters_period_excl', 'name': u'quarters_period_excl', 'dialect_options': {'postgresql_using': u'gist'}, 'column_names': [u'period']}]
```
|
From: Michael B. <iss...@bi...> - 2017-10-24 17:41:25
|
New issue 4121: no handler for SQL Server / pyodbc BINARY https://bitbucket.org/zzzeek/sqlalchemy/issues/4121/no-handler-for-sql-server-pyodbc-binary

Michael Bayer:

the pyodbc driver should send pyodbc.NullParam when None is sent, which helps for the freetds driver. Demo, which runs for me as I'm on the MS drivers right now:

``` #!python
from sqlalchemy import *

e = create_engine(
    "mssql+pyodbc://scott:tiger^5HHH@mssql2017:1433/test"
    "?driver=ODBC+Driver+13+for+SQL+Server",
    echo=True)

m = MetaData()
t = Table('t', m, Column('data', BINARY(100)))

conn = e.connect()
trans = conn.begin()
t.create(conn)

conn.execute(t.insert(), data=None)
```
|
From: Michael B. <iss...@bi...> - 2017-10-23 22:27:59
|
New issue 4120: mysql 5.7.20 warns on @tx_isolation https://bitbucket.org/zzzeek/sqlalchemy/issues/4120/mysql-5720-warns-on-tx_isolation

Michael Bayer:

The variable changed to @transaction_isolation, and this is already producing errors with 1.1 due to the older "our_warn" fixture that was modified for 1.2 in 9f0fb6c601. PR https://github.com/zzzeek/sqlalchemy/pull/391 has an initial fix that gerrit is working through at https://gerrit.sqlalchemy.org/#/q/I4d2e04df760c5351a71dde8b32145cdc69fa6115
|
From: Ben C. D. <iss...@bi...> - 2017-10-20 05:49:23
|
New issue 4119: JDBC support? https://bitbucket.org/zzzeek/sqlalchemy/issues/4119/jdbc-support Ben Chuanlong Du: Does SQLAlchemy support connecting to databases using JDBC in Python (not Jython)? |
From: Torsten L. <iss...@bi...> - 2017-10-19 16:18:01
|
New issue 4118: Surprising behaviour for session.execute(table.insert(), []) https://bitbucket.org/zzzeek/sqlalchemy/issues/4118/surprising-behaviour-for-sessionexecute

Torsten Landschoff:

Hi Michael, I don't think this is really a bug, but it was surprising to me. Basically, I was doing some Python-side processing during a database upgrade, processing batches of 1000 input rows into the target table, filtering invalid data. For each batch I used a `session.execute` call with a list comprehension generating the rows to insert. This happened to crash with real data, as all rows of one batch were filtered out. Code example:

```python
from sqlalchemy import *
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Task(Base):
    __tablename__ = "tasks"
    customer_id = Column(Integer, primary_key=True)
    task_id = Column(Integer, primary_key=True)


engine = create_engine("sqlite:///")
Base.metadata.create_all(engine)
session = sessionmaker(engine)()
session.execute(Task.__table__.insert(), [])  # real code: [row for row in ... if ...]
```

I get this output:

```
/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/sql/crud.py:695: SAWarning: Column 'tasks.customer_id' is marked as a member of the primary key for table 'tasks', but has no Python-side or server-side default generator indicated, nor does it indicate 'autoincrement=True' or 'nullable=True', and no explicit value is passed. Primary key columns typically may not store NULL. Note that as of SQLAlchemy 1.1, 'autoincrement=True' must be indicated explicitly for composite (e.g. multicolumn) primary keys if AUTO_INCREMENT/SERIAL/IDENTITY behavior is expected for one of the columns in the primary key. CREATE TABLE statements are impacted by this change as well on most backends.
  util.warn(msg)
/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/sql/crud.py:695: SAWarning: Column 'tasks.task_id' is marked as a member of the primary key for table 'tasks', but has no Python-side or server-side default generator indicated, nor does it indicate 'autoincrement=True' or 'nullable=True', and no explicit value is passed. Primary key columns typically may not store NULL. Note that as of SQLAlchemy 1.1, 'autoincrement=True' must be indicated explicitly for composite (e.g. multicolumn) primary keys if AUTO_INCREMENT/SERIAL/IDENTITY behavior is expected for one of the columns in the primary key. CREATE TABLE statements are impacted by this change as well on most backends.
  util.warn(msg)
Traceback (most recent call last):
  File "demo.py", line 20, in <module>
    session.execute(Task.__table__.insert(), [])
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 1170, in execute
    bind, close_with_result=True).execute(clause, params or {})
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/home/torsten.landschoff/sqlabug/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) NOT NULL constraint failed: tasks.customer_id [SQL: u'INSERT INTO tasks DEFAULT VALUES']
```

I would have expected that session.execute would do nothing when presented with an empty list of rows. The documentation is unclear about this:

> Optional dictionary, or list of dictionaries, containing bound parameter values. If a single dictionary, [...]; if a list of dictionaries, an "executemany" will be invoked.

In my opinion an empty list qualifies as a list of dictionaries - YMMV. Thanks for considering; IMHO it would be a good thing to eliminate this pitfall in the long run.

Greetings, Torsten

PS: BTW, the behaviour is consistent with calling `engine.execute`.
|
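[Editor's note] For comparison, the DBAPI itself treats an empty executemany() parameter list as a no-op; the surprise arises because SQLAlchemy distills the empty list into a single-row execute with no parameters, which renders the `INSERT INTO tasks DEFAULT VALUES` seen in the traceback. A sketch with the stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (customer_id INTEGER NOT NULL, task_id INTEGER NOT NULL)")

# executemany() with an empty parameter list executes the statement zero times
conn.executemany("INSERT INTO tasks (customer_id, task_id) VALUES (?, ?)", [])

count = conn.execute("SELECT COUNT(*) FROM tasks").fetchone()[0]
assert count == 0  # nothing inserted, no IntegrityError
```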
From: dima-starosud <iss...@bi...> - 2017-10-19 14:12:19
|
New issue 4117: all_orm_descriptors contains deleted attribute https://bitbucket.org/zzzeek/sqlalchemy/issues/4117/all_orm_descriptors-contains-deleted

dima-starosud:

This is using sqlalchemy 1.2.0b3. Please consider the following code snippet.

``` #!python
# (imports/Base added so the snippet runs standalone)
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import configure_mappers

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    value = Column(String)


mapper = inspect(A).mapper
print('Initial', mapper.all_orm_descriptors.keys())

A.value_2 = hybrid_property(lambda me: me.value)
configure_mappers()
print('Added', mapper.all_orm_descriptors.keys())

del A.value_2
configure_mappers()
print('Deleted', mapper.all_orm_descriptors.keys())

from sqlalchemy.orm.mapper import _memoized_configured_property
_memoized_configured_property.expire_instance(mapper)
print('Expired', mapper.all_orm_descriptors.keys())
```

I think `_memoized_configured_property` is not supposed to be used outside the lib code. Is there an *official* way to do this?
|
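[Editor's note] The staleness reported here is characteristic of descriptor-based memoization in general: once a value is cached, later changes to the class are invisible until the cache is expired. A minimal plain-Python sketch of the pattern (not SQLAlchemy's actual implementation; `Mapper` and the attribute names are illustrative):

```python
class memoized_property:
    """Cache the computed value in the instance dict, shadowing the descriptor."""
    def __init__(self, fn):
        self.fn = fn
        self.__name__ = fn.__name__

    def __get__(self, obj, cls):
        if obj is None:
            return self
        value = self.fn(obj)
        obj.__dict__[self.__name__] = value  # later reads bypass __get__ entirely
        return value


class Mapper:
    def __init__(self):
        self.descriptors = {"id", "value"}

    @memoized_property
    def all_orm_descriptors(self):
        return frozenset(self.descriptors)


m = Mapper()
assert "value_2" not in m.all_orm_descriptors
m.descriptors.add("value_2")
assert "value_2" not in m.all_orm_descriptors  # stale: cached result returned
del m.__dict__["all_orm_descriptors"]          # "expire" the memoized value
assert "value_2" in m.all_orm_descriptors      # recomputed fresh
```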
From: dima-starosud <iss...@bi...> - 2017-10-19 09:25:18
|
New issue 4116: Accessing AssociationProxy of aliased model breaks it https://bitbucket.org/zzzeek/sqlalchemy/issues/4116/accessing-associationproxy-of-aliased

dima-starosud:

This is using sqlalchemy 1.2.0b3. When `AssociationProxy` is first accessed on an `aliased` class, it sets `owning_class` to that `AliasedClass`, which then leads to exceptions inside the lib. Please see details in the following code snippet.

``` #!python
# (imports/Base added so the snippet runs standalone)
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import aliased, relationship

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    value = Column(String)


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey(A.id))
    a = relationship(A)
    a_value = association_proxy('a', 'value')


# Due to this call the line below fails
print(aliased(B).a_value)

# Class object expected, got '<AliasedClass at 0x7f491cbb3048; B>'.
print(B(a=A(value=4)).a_value)
```
|
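[Editor's note] The mechanism can be reduced to a descriptor that latches onto the first class it is accessed through; a plain-Python sketch of this failure mode (`LatchingProxy`, `AliasedB`, and the attribute names are illustrative, not SQLAlchemy code):

```python
class LatchingProxy:
    """Remembers the first owner it sees, like AssociationProxy.owning_class."""
    def __init__(self):
        self.owning_class = None

    def __get__(self, obj, cls):
        if self.owning_class is None:
            self.owning_class = cls  # first accessor wins, even an alias
        if obj is None:
            return self
        if not isinstance(obj, self.owning_class):
            raise TypeError(
                "Class object expected, got %r." % self.owning_class)
        return obj.target


class B:
    a_value = LatchingProxy()

    def __init__(self, target):
        self.target = target


class AliasedB(B):  # stand-in for aliased(B)
    pass


AliasedB.a_value  # first access through the "alias" latches owning_class

caught = False
try:
    B(4).a_value  # a plain B instance now fails the ownership check
except TypeError:
    caught = True
assert caught
```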
From: yoch <iss...@bi...> - 2017-10-18 14:59:41
|
New issue 4115: 1.2.0b3 Regression https://bitbucket.org/zzzeek/sqlalchemy/issues/4115/120b3-regression

yoch:

Hi, I use mariadb 10.2, and 1.2.0b2 worked well for me. After upgrading to the new beta, I got the following error:

``` #!pycon
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/data_models.py", line 290, in <module>
    name_for_collection_relationship=name_for_collection_relationship)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/ext/automap.py", line 754, in prepare
    autoload_replace=False
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/sql/schema.py", line 3895, in reflect
    with bind.connect() as conn:
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2102, in connect
    return self._connection_cls(self, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 90, in __init__
    if connection is not None else engine.raw_connection()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2188, in raw_connection
    self.pool.unique_connection, _connection)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/base.py", line 2158, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 344, in unique_connection
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 781, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 531, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 1185, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
    raise value
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 1182, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 349, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 476, in __init__
    self.__connect(first_connect_check=True)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/pool.py", line 676, in __connect
    exec_once(self.connection, self)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/event/attr.py", line 246, in exec_once
    self(*args, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/event/attr.py", line 256, in __call__
    fn(*args, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/util/langhelpers.py", line 1334, in go
    return once_fn(*arg, **kw)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/engine/strategies.py", line 182, in first_connect
    dialect.initialize(c)
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/dialects/mysql/base.py", line 1871, in initialize
    self._warn_for_known_db_issues()
  File "/usr/local/lib/python3.5/dist-packages/sqlalchemy/dialects/mysql/base.py", line 1876, in _warn_for_known_db_issues
    if mdb_version > (10, 2) and mdb_version < (10, 2, 9):
TypeError: unorderable types: str() > int()
```

Thanks
|
From: Alessandro P. <iss...@bi...> - 2017-10-18 12:51:08
|
New issue 4114: sqlite: the compiled query escapes percent sign when it shouldn't https://bitbucket.org/zzzeek/sqlalchemy/issues/4114/sqlite-the-compiled-query-escapes-percent

Alessandro Pelliciari:

``` #!python
SQLAlchemy==1.1.9
SQLAlchemy-Utils==0.32.16
```

Working on sqlite, with a query like this:

``` #!sql
SELECT DATE(creation_date, -strftime('%d', data) || ' days', '+1 day') AS __timestamp
FROM table_xyz
```

When compiling, SQLAlchemy escapes the percent sign inside `strftime`, making the query wrong and meaningless:

``` #!sql
SELECT DATE(creation_date, -strftime('%%d', data) || ' days', '+1 day') AS __timestamp
FROM table_xyz
```

notice the double percent inside the `strftime` function. I think the problem is in the `sqlite` dialect: it doesn't cover this case, but I don't know SQLAlchemy very well, so I don't know where to look to patch it. I don't have an easily reproducible case because this issue came up using Superset (https://github.com/apache/incubator-superset/), which uses SQLAlchemy to query the database, so not knowing SQLAlchemy well I can't isolate the case. Relevant Superset code doing the query:

``` #!python
def get_query_str(self, query_obj):
    engine = self.database.get_sqla_engine()
    qry = self.get_sqla_query(**query_obj)
    sql = str(
        qry.compile(
            engine,
            compile_kwargs={"literal_binds": True}
        )
    )
    logging.info(sql)
    sql = sqlparse.format(sql, reindent=True)
    return sql
```

In my case, engine is:

``` #!
(Pdb++) pp engine.__dict__
{'_echo': None,
 'dialect': <sqlalchemy.dialects.sqlite.pysqlite.SQLiteDialect_pysqlite object at 0x7f6f62e74290>,
 'engine': Engine(sqlite:////home/superset/consuntivo.db),
 'logger': <logging.Logger object at 0x7f6f6ad7c350>,
 'pool': <sqlalchemy.pool.NullPool object at 0x7f6f62e74350>,
 'url': sqlite:////home/superset/consuntivo.db}
```

qry is:

``` #!
<sqlalchemy.sql.selectable.Select at 0x7f83d83b9950; Select object>
```

and the raw columns in the qry object are ok:

``` #!
(Pdb++) pp qry._raw_columns[0].__dict__
{'_allow_label_resolve': True,
 '_element': <sqlalchemy.sql.elements.ColumnClause at 0x7f83d85d7ad0; DATE(data, -strftime('%d', data) || ' days', '+1 day')>,
 '_key_label': u'__timestamp',
 '_label': u'__timestamp',
 '_proxies': [<sqlalchemy.sql.elements.ColumnClause at 0x7f83d85d7ad0; DATE(data, -strftime('%d', data) || ' days', '+1 day')>],
 '_resolve_label': u'__timestamp',
 '_type': DateTime(),
 'comparator': <sqlalchemy.sql.sqltypes.Comparator object at 0x7f83d84e50a0>,
 'element': <sqlalchemy.sql.elements.ColumnClause at 0x7f83d85d7ad0; DATE(data, -strftime('%d', data) || ' days', '+1 day')>,
 'key': u'__timestamp',
 'name': u'__timestamp',
 'type': DateTime()}
```
|
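[Editor's note] The doubling matters because SQLite's strftime() treats `%%` as a literal percent sign, so the escaped query no longer extracts the day of month. A quick check with the stdlib sqlite3 module (which uses the qmark paramstyle and needs no percent escaping at all):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# the intended directive extracts the day of month
day = conn.execute("SELECT strftime('%d', '2017-10-18')").fetchone()[0]
assert day == "18"

# the doubled directive no longer yields the day at all
escaped = conn.execute("SELECT strftime('%%d', '2017-10-18')").fetchone()[0]
assert escaped != day
```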
From: zaazbb <iss...@bi...> - 2017-10-17 10:47:06
|
New issue 4113: use asyncio + uvloop in sqlalchemy. https://bitbucket.org/zzzeek/sqlalchemy/issues/4113/use-asyncio-uvloop-in-sqlalchemy

zaazbb:

asyncio + uvloop is very fast now.

> uvloop makes asyncio fast. In fact, it is at least 2x faster than nodejs, gevent, as well as any other Python asynchronous framework. The performance of uvloop-based asyncio is close to that of Go programs.

from: https://magic.io/blog/uvloop-blazing-fast-python-networking/
|