sqlalchemy-tickets Mailing List for SQLAlchemy (Page 33)
Brought to you by: zzzeek
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2006 |  |  | 174 | 50 | 71 | 129 | 113 | 141 | 82 | 142 | 97 | 72 |
| 2007 | 159 | 213 | 156 | 151 | 58 | 166 | 296 | 198 | 89 | 133 | 150 | 122 |
| 2008 | 144 | 65 | 71 | 69 | 143 | 111 | 113 | 159 | 81 | 135 | 107 | 200 |
| 2009 | 168 | 109 | 141 | 128 | 119 | 132 | 136 | 154 | 151 | 181 | 223 | 169 |
| 2010 | 103 | 209 | 201 | 183 | 134 | 113 | 110 | 159 | 138 | 96 | 116 | 94 |
| 2011 | 97 | 188 | 157 | 158 | 118 | 102 | 137 | 113 | 104 | 108 | 91 | 162 |
| 2012 | 189 | 136 | 153 | 142 | 90 | 141 | 67 | 77 | 113 | 68 | 101 | 122 |
| 2013 | 60 | 77 | 77 | 129 | 189 | 155 | 106 | 123 | 53 | 142 | 78 | 102 |
| 2014 | 143 | 93 | 35 | 26 | 27 | 41 | 45 | 27 | 37 | 24 | 22 | 20 |
| 2015 | 17 | 15 | 34 | 55 | 33 | 31 | 27 | 17 | 22 | 26 | 27 | 22 |
| 2016 | 20 | 24 | 23 | 13 | 17 | 14 | 31 | 23 | 24 | 31 | 23 | 16 |
| 2017 | 24 | 20 | 27 | 24 | 28 | 18 | 18 | 23 | 30 | 17 | 12 | 12 |
| 2018 | 27 | 23 | 13 | 19 | 21 | 29 | 11 | 22 | 14 | 9 | 24 |  |
From: Maximilian R. <iss...@bi...> - 2015-09-24 14:04:12

New issue 3542: Design for Pandas access to Query
https://bitbucket.org/zzzeek/sqlalchemy/issues/3542/design-for-pandas-access-to-query

Maximilian Roos:

Over at pandas, there's a question on the best way to load a pandas DataFrame from a SQLAlchemy Query object: https://github.com/pydata/pandas/issues/11181#issuecomment-142915059

Do you have any guidance on what the design should be? Currently something like `pd.read_sql_query(Query().selectable, engine)` or `pd.read_sql(query.statement, query.session.bind)` will work, but is there a better way that we could add to pandas, versus just executing the SQL string?

[SO question](http://stackoverflow.com/questions/29525808/sqlalchemy-orm-conversion-to-pandas-dataframe)

Thanks
|
From: Gorka E. <iss...@bi...> - 2015-09-24 13:32:11

New issue 3541: Undesired SQL generated on update
https://bitbucket.org/zzzeek/sqlalchemy/issues/3541/undesired-sql-generated-on-update

Gorka Eguileor:

In some SQL databases the UPDATE operation is order dependent, so it behaves differently depending on the order of the SET values. As an example, take a volumes table with columns 'status' and 'previous_status', where we want to update a volume that has 'available' in the status column. If the SQL query is generated as:

```sql
UPDATE volumes SET previous_status=status, status='new' WHERE id=1;
```

this results in a volume with status 'new' and previous_status 'available' on both SQLite and MariaDB. But if the columns get reversed and the query is generated as:

```sql
UPDATE volumes SET status='new', previous_status=status WHERE id=1;
```

we get the same result in SQLite, but on MariaDB the volume ends up with both status and previous_status set to 'new', which is not what we want. And this order cannot be selected when calling update() or values() on update().
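The SQLite half of the divergence is easy to reproduce with the stdlib driver: SQLite evaluates every SET expression against the pre-update row, so both orderings behave identically there (a sketch; the MariaDB left-to-right behavior described above cannot be shown on SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE volumes (id INTEGER PRIMARY KEY, status TEXT, previous_status TEXT)"
)
conn.execute("INSERT INTO volumes VALUES (1, 'available', NULL)")

# Order 1: previous_status is assigned first.
conn.execute("UPDATE volumes SET previous_status = status, status = 'new' WHERE id = 1")
first = conn.execute("SELECT status, previous_status FROM volumes WHERE id = 1").fetchone()

# Reset the row and try the reversed order.
conn.execute("UPDATE volumes SET status = 'available', previous_status = NULL WHERE id = 1")

# Order 2: status is assigned first.  SQLite still reads the pre-update value of
# status, so previous_status gets 'available' here too; MariaDB evaluates the SET
# list left to right and would store 'new' in both columns.
conn.execute("UPDATE volumes SET status = 'new', previous_status = status WHERE id = 1")
second = conn.execute("SELECT status, previous_status FROM volumes WHERE id = 1").fetchone()

print(first, second)
```

On SQLite both queries print `('new', 'available')`, which is exactly why the reporter only sees the problem on MariaDB.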
|
From: Arcadiy I. <iss...@bi...> - 2015-09-23 23:41:08

New issue 3540: @Column updatable and/or insertable
https://bitbucket.org/zzzeek/sqlalchemy/issues/3540/column-updatable-and-or-insertable

Arcadiy Ivanov:

In JPA, in addition to *nullable* (which SQLAlchemy supports), the specification supports the attributes **updatable** and **insertable**, allowing non-updatable and non-insertable columns that never appear in UPDATE or INSERT statements respectively: http://docs.oracle.com/javaee/7/api/javax/persistence/Column.html

Reading the documentation, I can't figure out whether this capability, at least for **updatable** columns, already exists, or whether the only thing you can do is specify a default updatable value somehow. It's essential for us to make sure that some columns, once inserted, are never updated, or are never inserted or updated, or are never inserted.
|
From: vr2262 <iss...@bi...> - 2015-09-23 19:46:35

New issue 3539: TypeError in sqlalchemy.orm.strategy_options.Load.__str__ under certain conditions
https://bitbucket.org/zzzeek/sqlalchemy/issues/3539/typeerror-in

vr2262:

The `__str__` method for this class is defined on line 182 here: https://bitbucket.org/zzzeek/sqlalchemy/src/50866d2f857dd45bb2d186a0fa076768437d62a3/lib/sqlalchemy/orm/strategy_options.py?fileviewer=file-view-default#strategy_options.py-182:183

I encountered this logging error: `TypeError: not all arguments converted during string formatting`

Upon inspection, the value of `self.strategy` was:

```python
(('deferred', False), ('instrument', True))
```

The really strange thing is that I only saw this error when trying to configure logging. Up until this point, I had been using `create_engine(echo=True)` and never encountered this error.
|
From: jvanasco <iss...@bi...> - 2015-09-23 19:10:11

New issue 3538: docs clarification - orm.joinedload
https://bitbucket.org/zzzeek/sqlalchemy/issues/3538/docs-clarification-ormjoinedload

jvanasco:

I'd like to suggest extending the note in the docs on orm.joinedload with something like the following: [http://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html#sqlalchemy.orm.joinedload, http://docs.sqlalchemy.org/en/rel_0_8/orm/loading.html#sqlalchemy.orm.joinedload]

==========
If you have already explicitly JOINed a collection onto your query for filtering (or other use), you will likely want to use `orm.contains_eager` instead of `orm.joinedload`. `joinedload` will create a second anonymously aliased join, while `contains_eager` will more efficiently re-use the existing join.
==========
|
From: thiefmaster <iss...@bi...> - 2015-09-23 09:00:23

New issue 3537: Smart identifier truncation is only applied to index names, not constraint names
https://bitbucket.org/zzzeek/sqlalchemy/issues/3537/smart-identifier-truncation-is-only

thiefmaster:

```python
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base


def _column_names(constraint, table):
    return '_'.join((c if isinstance(c, basestring) else c.name)
                    for c in constraint.columns)


naming_convention = {
    'fk': 'fk_%(table_name)s_%(column_names)s_%(referred_table_name)s',
    'ix': 'ix_%(table_name)s_%(column_names)s',
    'column_names': _column_names,
}

Base = declarative_base()
Base.metadata.naming_convention = naming_convention


class A(Base):
    __tablename__ = 'test_very_long_name_abcdefghijklmnopqrstuvwxyz'
    __table_args__ = Index(None, 'abcdefghijklmnopqrstuvwxyz'),
    id = Column(Integer, primary_key=True)
    abcdefghijklmnopqrstuvwxyz = Column(Integer)


class B(Base):
    __tablename__ = 'test_very_long_name_abcdefghijklmnopqrstuvwxyz_b'
    id = Column(Integer, primary_key=True)
    abcdefghijklmnopqrstuvwxyz = Column(
        Integer, ForeignKey('test_very_long_name_abcdefghijklmnopqrstuvwxyz.id'))


e = create_engine('postgresql:///test', echo=True)
Base.metadata.create_all(e)
raw_input('check the names...')
e.execute('DROP TABLE test_very_long_name_abcdefghijklmnopqrstuvwxyz, '
          'test_very_long_name_abcdefghijklmnopqrstuvwxyz_b')
```

This results in the following SQL (unrelated parts removed):

```sql
CREATE INDEX ix_test_very_long_name_abcdefghijklmnopqrstuvwxyz_abcde_5f65
    ON test_very_long_name_abcdefghijklmnopqrstuvwxyz (abcdefghijklmnopqrstuvwxyz)

CREATE TABLE test_very_long_name_abcdefghijklmnopqrstuvwxyz_b (
    ...
    CONSTRAINT fk_test_very_long_name_abcdefghijklmnopqrstuvwxyz_b_abcdefghijklmnopqrstuvwxyz_test_very_long_name_abcdefghijklmnopqrstuvwxyz
        FOREIGN KEY(abcdefghijklmnopqrstuvwxyz)
        REFERENCES test_very_long_name_abcdefghijklmnopqrstuvwxyz (id)
)
```

SQLAlchemy properly truncates the index name and appends a hash for uniqueness, but the constraint name is left unmodified, resulting in Postgres truncating it to 63 characters (without adding a hash to the end to ensure uniqueness). I think this behavior is quite inconsistent and may even result in problems depending on the structure, since two truncated names may end up identical even though they were different before truncation. It's also somewhat ugly in Alembic migration scripts, as you'll have to use `op.f('...')` for the index to ensure it's truncated by SA, while it's optional for FK names since they are not touched even if too long.
|
From: Mike B. <iss...@bi...> - 2015-09-17 14:44:47

New issue 3535: flush error suppressed when rollback fails
https://bitbucket.org/zzzeek/sqlalchemy/issues/3535/flush-error-suppressed-when-rollback-fails

Mike Bayer:

Despite all the "safe reraise" handling and everything, I'm still not seeing the original cause here when I illustrate a variant of #3532:

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B")


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'), nullable=False)


class C(Base):
    __tablename__ = 'c'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))
    a1 = relationship("A")
    a2 = a1


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add(C())
s.commit()

c1 = s.query(C).first()
s.add(B())
s.query(A).get(1)
```

```
2015-09-17 10:42:02,198 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
  File "test.py", line 37, in <module>
    s.query(A).get(1)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 831, in get
    return self._get_impl(ident, loading.load_on_ident)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 864, in _get_impl
    return fallback_fn(self, key)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/loading.py", line 223, in load_on_ident
    return q.one()
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 2498, in one
    ret = list(self)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 2540, in __iter__
    self.session._autoflush()
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 1283, in _autoflush
    self.flush()
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 2024, in flush
    self._flush(objects)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 2142, in _flush
    transaction.rollback(_capture_exception=True)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/util/langhelpers.py", line 63, in __exit__
    compat.reraise(type_, value, traceback)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 2142, in _flush
    transaction.rollback(_capture_exception=True)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 415, in rollback
    transaction._rollback_impl()
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 448, in _rollback_impl
    self._restore_snapshot(dirty_only=self.nested)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/session.py", line 290, in _restore_snapshot
    s._expire(s.dict, self.session.identity_map._modified)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/state.py", line 523, in _expire
    [impl.key for impl in self.manager._scalar_loader_impls
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/util/langhelpers.py", line 747, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/instrumentation.py", line 111, in _scalar_loader_impls
    self.values() if attr.impl.accepts_scalar_loader])
AttributeError: 'NoneType' object has no attribute 'accepts_scalar_loader'
```

Between this and the #2696 variants coming up in OpenStack, we need to squash these rollback-related errors once and for all.
|
From: Denis S. <iss...@bi...> - 2015-09-17 10:54:29

New issue 3534: Set transaction isolation level for MSSQL?
https://bitbucket.org/zzzeek/sqlalchemy/issues/3534/set-transaction-isolation-level-for-mssql

Denis Stebunov:

I'm trying to set the transaction isolation level to READ UNCOMMITTED for a connection to MSSQL 2012. If I specify it as a parameter to create_engine(), I get:

```
...
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 144, in create
    engineclass.__name__))
TypeError: Invalid argument(s) 'isolation_level' sent to create_engine(), using configuration MSDialect_pymssql/QueuePool/Engine. Please check that the keyword arguments are appropriate for this combination of components.
```

If I specify it as a parameter to sessionmaker(), I get:

```
...
  File "/usr/local/lib/python2.7/site-packages/sqlalchemy/engine/interfaces.py", line 716, in set_isolation_level
    raise NotImplementedError()
NotImplementedError
```

I tried it both with pyodbc and pymssql, and neither of them worked. As a last resort I'm using `.with_hint(myTable, 'WITH (nolock)')` for my queries (this is equivalent to READ UNCOMMITTED at the query level), but unfortunately it doesn't work with relationships loaded via joinedload: the hint is ignored for relationships (though successfully applied to the main table in the query). Any ideas?
|
From: hasier <iss...@bi...> - 2015-09-17 10:54:18

New issue 3533: PostgreSQL 24:00:00 time provokes exception
https://bitbucket.org/zzzeek/sqlalchemy/issues/3533/postgresql-24-00-00-time-provokes

hasier:

In PostgreSQL, a value of 24:00:00 in a time-type column is valid, but Python only accepts a maximum of 23:59:59.999999. Therefore retrieval breaks with `ValueError: hour must be in 0..23`.

P.S.: I guess it would be better practice to store 23:59:59.99 in the DB as the SQL standard says, but I think it should be managed somehow anyway, e.g. by transforming the retrieved value to 23:59:59.99.
|
From: Mike B. <iss...@bi...> - 2015-09-16 22:55:31

New issue 3532: detect property being assigned to more than one parent, being assigned twice
https://bitbucket.org/zzzeek/sqlalchemy/issues/3532/detect-property-being-assigned-to-more

Mike Bayer:

e.g.:

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B")
    b_1 = bs


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
configure_mappers()

print A.bs.impl
print A.b_1.impl
```

The basic idea, but this should be at most a warning in 1.0, an error in 1.1:

```diff
diff --git a/lib/sqlalchemy/orm/interfaces.py b/lib/sqlalchemy/orm/interfaces.py
index cd4a011..19f4e0f 100644
--- a/lib/sqlalchemy/orm/interfaces.py
+++ b/lib/sqlalchemy/orm/interfaces.py
@@ -147,6 +147,8 @@ class MapperProperty(_MappedAttribute, InspectionAttr, util.MemoizedSlots):
         setup when the mapper is first known.

         """
+        if getattr(self, "parent", None) is not None:
+            raise Exception("oof")
         self.parent = parent

     def instrument_class(self, mapper):
```

Needs:

1. tests
2. nice message
3. appropriate exception class
4. for 1.0, backport a warning
5. migration notes
6. changelog
|
From: Mike B. <iss...@bi...> - 2015-09-15 19:52:54

New issue 3531: bind param replacement in join_condition() doesn't work for type_coerce
https://bitbucket.org/zzzeek/sqlalchemy/issues/3531/bind-param-replacement-in-join_condition

Mike Bayer:

type_coerce makes its decision about the thing it is processing when created: if given a bindparam, the type is changed; otherwise, a label() is produced. However this fails for this:

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class CastToIntegerType(TypeDecorator):
    impl = String

    def column_expression(self, col):
        return cast(col, Integer)

    def bind_expression(self, col):
        return cast(col, String)


class Person(Base):
    __tablename__ = 'person'
    id = Column('id_string', CastToIntegerType, primary_key=True)
    pets = relationship(
        'Pets',
        primaryjoin='foreign(Pets.person_id)==type_coerce(Person.id, Integer)')


class Pets(Base):
    __tablename__ = 'pets'
    id = Column('id', Integer, primary_key=True)
    person_id = Column('person_id', Integer, ForeignKey('person.id_string'),
                       primary_key=True)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add_all([Person(id="5", pets=[Pets(id="1")])])
s.commit()

p1 = s.query(Person).first()
p1.pets
```

because we are creating the bindparam after the fact; there's no actual TypeCoerce construct here, so when we see that Label, we don't know anything about there having been a type_coerce. We'd need to build some new construct.
|
From: Mike B. <iss...@bi...> - 2015-09-15 19:50:31

New issue 3530: bind param replacement in join_condition() doesn't work for heterogeneous types
https://bitbucket.org/zzzeek/sqlalchemy/issues/3530/bind-param-replacement-in-join_condition

Mike Bayer:

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class CastToIntegerType(TypeDecorator):
    impl = String

    def column_expression(self, col):
        return cast(col, Integer)

    def bind_expression(self, col):
        return cast(col, String)


class Person(Base):
    __tablename__ = 'person'
    id = Column('id_string', CastToIntegerType, primary_key=True)
    pets = relationship('Pets',
                        primaryjoin='foreign(Pets.person_id)==Person.id')


class Pets(Base):
    __tablename__ = 'pets'
    id = Column('id', Integer, primary_key=True)
    person_id = Column('person_id', Integer, ForeignKey('person.id_string'),
                       primary_key=True)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add_all([Person(id="5", pets=[Pets(id="1")])])
s.commit()

p1 = s.query(Person).first()
p1.pets
```

generates:

```sql
SELECT pets.id AS pets_id, pets.person_id AS pets_person_id
FROM pets
WHERE pets.person_id = CAST(? AS VARCHAR)
```

should generate:

```sql
SELECT pets.id AS pets_id, pets.person_id AS pets_person_id
FROM pets
WHERE pets.person_id = ?
```

That is, inside of create_lazy_clause() -> col_to_bind() we are taking the type of the column itself when we create the bindparam, without taking into account the thing we're comparing to. This would fix that:

```diff
diff --git a/lib/sqlalchemy/orm/relationships.py b/lib/sqlalchemy/orm/relationships.py
index 552ce8b..6ac3ac0 100644
--- a/lib/sqlalchemy/orm/relationships.py
+++ b/lib/sqlalchemy/orm/relationships.py
@@ -2782,6 +2782,7 @@ class JoinCondition(object):
     def create_lazy_clause(self, reverse_direction=False):
         binds = util.column_dict()
         equated_columns = util.column_dict()
+        types = util.column_dict()

         has_secondary = self.secondaryjoin is not None

@@ -2790,12 +2791,15 @@ class JoinCondition(object):
             for l, r in self.local_remote_pairs:
                 lookup[l].append((l, r))
                 equated_columns[r] = l
+                types[l] = r.type
         elif not reverse_direction:
             for l, r in self.local_remote_pairs:
                 equated_columns[r] = l
+                types[l] = r.type
         else:
             for l, r in self.local_remote_pairs:
                 equated_columns[l] = r
+                types[r] = l.type

         def col_to_bind(col):
@@ -2808,7 +2812,8 @@ class JoinCondition(object):
             ):
                 if col not in binds:
                     binds[col] = sql.bindparam(
-                        None, None, type_=col.type, unique=True)
+                        None, None,
+                        type_=types.get(col, col.type), unique=True)
                 return binds[col]
             return None
```
|
From: thiefmaster <iss...@bi...> - 2015-09-11 09:49:33

New issue 3529: Support Postgres 9.5 "ON CONFLICT"
https://bitbucket.org/zzzeek/sqlalchemy/issues/3529/support-postgres-95-on-conflict

thiefmaster:

Probably a bit early since 9.5 isn't out yet, but once it's out this would be pretty useful: https://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.5#INSERT_..._ON_CONFLICT_DO_NOTHING.2FUPDATE_.28.22UPSERT.22.29

This could be a table arg, e.g. `postgres_onconflict='nothing|update'` (or maybe even in a less DB-specific way if other databases also support this feature). When set, it would automatically be added to INSERTs involving that table.
|
From: Jason M. <iss...@bi...> - 2015-09-10 12:23:54

New issue 3528: Hybrid Property Does Interval Intersect Wrong
https://bitbucket.org/zzzeek/sqlalchemy/issues/3528/hybrid-property-does-interval-intersect

Jason Myers:

I don't know if we even care to fix this, or maybe just rename it from intersects to "contains range"; however, the docs at https://bitbucket.org/zzzeek/sqlalchemy/src/be548690c7053a25a546d47cc4ae6e3595c6f421/lib/sqlalchemy/ext/hybrid.py?at=master&fileviewer=file-view-default#hybrid.py-52 would fail for an example of:

```python
Interval(3, 5).intersects(Interval(2, 5))
```

although those intervals do intersect.
|
From: Tim M. <iss...@bi...> - 2015-09-10 02:56:39

New issue 3527: Possible Connection Leak?
https://bitbucket.org/zzzeek/sqlalchemy/issues/3527/possible-connection-leak

Tim Mitchell:

SQLA 1.0.8. I am trying to debug an inexplicable 'connection unexpectedly closed' error that occurs when I run a test suite (with lots of engine.dispose() calls), and that's when I came across this. Given a pretty normal query in a ThreadLocal session and a QueuePool:

```python
with db_context() as session:
    session.query(tables.Attachment).get(attachment_id)
```

When I look at the call stack for a new connection I see this:

```python
...
Query._connection_from_session(..., close_with_result=True)
Session._connection_for_bind(..., kw={'close_with_result': True})
SessionTransaction._connection_for_bind(...)  # close_with_result NOT passed
Engine.contextual_connect(..., close_with_result=False)
...  # Create DBAPI connection
```

Specifying a NullPool makes everything work fine.
|
From: Mike B. <iss...@bi...> - 2015-09-08 16:59:30

New issue 3526: unused systems in bulk
https://bitbucket.org/zzzeek/sqlalchemy/issues/3526/unused-systems-in-bulk

Mike Bayer:

Based on things found via #3525, _postfetch is called in bulk update for unknown reasons, as no tests fail when it is removed. Need to isolate why _postfetch is needed and add tests, or not:

```diff
diff --git a/lib/sqlalchemy/orm/persistence.py b/lib/sqlalchemy/orm/persistence.py
index c785a4d..6487228 100644
--- a/lib/sqlalchemy/orm/persistence.py
+++ b/lib/sqlalchemy/orm/persistence.py
@@ -672,6 +672,8 @@ def _emit_update_statements(base_mapper, uowtransaction,
                     connection, value_params in records:
                 c = cached_connections[connection].\
                     execute(statement, params)
+
+                # TODO: why with bookkeeping=False?
                 _postfetch(
                     mapper,
                     uowtransaction,
@@ -694,6 +696,8 @@ def _emit_update_statements(base_mapper, uowtransaction,
                 execute(statement, multiparams)

             rows += c.rowcount
+
+            # TODO: why with bookkeeping=False?
             for state, state_dict, params, mapper, \
                     connection, value_params in records:
                 _postfetch(
@@ -964,6 +968,8 @@ def _postfetch(mapper, uowtransaction, table,
     after an INSERT or UPDATE statement has proceeded for that
     state."""

+    # TODO: bulk is never non-False, need to clean this up
+
     prefetch_cols = result.context.compiled.prefetch
     postfetch_cols = result.context.compiled.postfetch
     returning_cols = result.context.compiled.returning
```
|
From: Berislav L. <iss...@bi...> - 2015-09-05 00:09:44

New issue 3525: On bulk update, the state is discarded at some point, causing an exception
https://bitbucket.org/zzzeek/sqlalchemy/issues/3525/on-bulk-update-the-state-is-discarded-at

Berislav Lopac:

I've asked about this issue on Stack Overflow: http://stackoverflow.com/questions/32407210/sqlalchemy-nonetype-object-has-no-attribute-expire-attributes-on-bulk-upda

However, I kept digging and have chased it to the `persistence._bulk_update` function, which replaces all states with `None`, causing the error shown in the SO question above. I've thrown together a brief patch that seems to fix the issue; I'll see if I can make a pull request out of it.
|
From: chrisirhc <iss...@bi...> - 2015-09-04 21:08:10

New issue 3524: Unable to insert data on table with column name containing percent sign
https://bitbucket.org/zzzeek/sqlalchemy/issues/3524/unable-to-insert-data-on-table-with-column

chrisirhc:

I have a column called "Bookings % change". The table appears to be created with the correct schema, but when doing an insert, SQLAlchemy refers to the column name with the percent sign escaped as `%%`:

```
(_mysql_exceptions.OperationalError) (1054, "Unknown column 'Bookings %% change' in 'field list'")
```
|
From: DNeu <iss...@bi...> - 2015-09-03 15:41:35

New issue 3523: "EXCEPT" keyword in MySQL query
https://bitbucket.org/zzzeek/sqlalchemy/issues/3523/except-keyword-in-mysql-query

DNeu:

I got the following error:

```
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py', line 2399 in all
    return list(self)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py', line 2516 in __iter__
    return self._execute_and_instances(context)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py', line 2531 in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py', line 914 in execute
    return meth(self, multiparams, params)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/sql/elements.py', line 323 in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py', line 1010 in _execute_clauseelement
    compiled_sql, distilled_params
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py', line 1146 in _execute_context
    context)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py', line 1341 in _handle_dbapi_exception
    exc_info
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/util/compat.py', line 199 in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py', line 1139 in _execute_context
    context)
  File '/home/adhocracy/adhocracy_buildout/eggs/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/engine/default.py', line 450 in do_execute
    cursor.execute(statement, parameters)
  File '/home/adhocracy/adhocracy_buildout/eggs/MySQL_python-1.2.5-py2.7-linux-x86_64.egg/MySQLdb/cursors.py', line 205 in execute
    self.errorhandler(self, exc, value)
  File '/home/adhocracy/adhocracy_buildout/eggs/MySQL_python-1.2.5-py2.7-linux-x86_64.egg/MySQLdb/connections.py', line 36 in defaulterrorhandler
    raise errorclass, errorvalue
ProgrammingError: (_mysql_exceptions.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'EXCEPT SELECT event.data AS event_data, event.event AS event_event, event.id AS ' at line 3")
```

The offending statement:

```sql
SELECT anon_1.event_data AS anon_1_event_data, anon_1.event_event AS anon_1_event_event,
       anon_1.event_id AS anon_1_event_id, anon_1.event_time AS anon_1_event_time,
       anon_1.event_user_id AS anon_1_event_user_id, anon_1.event_instance_id AS anon_1_event_instance_id,
       user_1.locale AS user_1_locale, user_1.password AS user_1_password, user_1.email AS user_1_email,
       user_1.id AS user_1_id, user_1.user_name AS user_1_user_name, user_1.display_name AS user_1_display_name,
       user_1.bio AS user_1_bio, user_1.email_priority AS user_1_email_priority,
       user_1.activation_code AS user_1_activation_code, user_1.reset_code AS user_1_reset_code,
       user_1.create_time AS user_1_create_time, user_1.access_time AS user_1_access_time,
       user_1.delete_time AS user_1_delete_time, user_1.banned AS user_1_banned,
       user_1.no_help AS user_1_no_help, user_1.page_size AS user_1_page_size,
       user_1.proposal_sort_order AS user_1_proposal_sort_order, user_1.gender AS user_1_gender,
       user_1._is_organization AS user_1__is_organization, user_1.email_messages AS user_1_email_messages,
       user_1.welcome_code AS user_1_welcome_code, user_1.optional_attributes AS user_1_optional_attributes,
       group_1.id AS group_1_id, group_1.group_name AS group_1_group_name, group_1.code AS group_1_code,
       group_1.description AS group_1_description, permission_1.id AS permission_1_id,
       permission_1.permission_name AS permission_1_permission_name, badge_1.id AS badge_1_id,
       badge_1.type AS badge_1_type, badge_1.create_time AS badge_1_create_time,
       badge_1.title AS badge_1_title, badge_1.color AS badge_1_color,
       badge_1.description AS badge_1_description, badge_1.instance_id AS badge_1_instance_id,
       badge_1.impact AS badge_1_impact, badge_1.select_child_description AS badge_1_select_child_description,
       badge_1.parent_id AS badge_1_parent_id, badge_1.long_description AS badge_1_long_description,
       badge_1.group_id AS badge_1_group_id, badge_1.display_group AS badge_1_display_group,
       badge_1.visible AS badge_1_visible, badge_1.thumbnail AS badge_1_thumbnail,
       badge_1.behavior_proposal_sort_order AS badge_1_behavior_proposal_sort_order
FROM (SELECT event.data AS event_data, event.event AS event_event, event.id AS event_id,
             event.time AS event_time, event.user_id AS event_user_id, event.instance_id AS event_instance_id
      FROM event LEFT OUTER JOIN instance ON instance.id = event.instance_id
      EXCEPT
      SELECT event.data AS event_data, event.event AS event_event, event.id AS event_id,
             event.time AS event_time, event.user_id AS event_user_id, event.instance_id AS event_instance_id
      FROM event LEFT OUTER JOIN instance ON instance.id = event.instance_id
      WHERE instance.hidden = true) AS anon_1
LEFT OUTER JOIN user AS user_1 ON anon_1.event_user_id = user_1.id
LEFT OUTER JOIN (user_badges AS user_badges_1 INNER JOIN badge AS badge_1
                 ON badge_1.id = user_badges_1.badge_id AND badge_1.type IN (%s))
    ON user_badges_1.user_id = user_1.id
LEFT OUTER JOIN `group` AS group_1 ON group_1.id = badge_1.group_id
LEFT OUTER JOIN (group_permission AS group_permission_1 INNER JOIN permission AS permission_1
                 ON permission_1.id = group_permission_1.permission_id)
    ON group_1.id = group_permission_1.group_id
WHERE anon_1.event_event != %s AND anon_1.event_event != %s AND anon_1.event_user_id = %s
ORDER BY anon_1.event_time DESC
```

[parameters: ('user', 't_message_send', 't_massmessage_send', 3L)]
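The underlying problem is that MySQL (before 8.0.31) simply has no EXCEPT operator, so such a query must be rewritten, e.g. by subtracting the excluded set with NOT IN or NOT EXISTS. A sketch of the equivalence on SQLite, which supports both forms, using a toy schema standing in for the event/instance tables above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE event (id INTEGER PRIMARY KEY, instance_id INTEGER);
    CREATE TABLE instance (id INTEGER PRIMARY KEY, hidden INTEGER);
    INSERT INTO instance VALUES (1, 0), (2, 1);
    INSERT INTO event VALUES (10, 1), (11, 2), (12, NULL);
""")

# The original shape: all events EXCEPT those on hidden instances.
with_except = conn.execute("""
    SELECT e.id FROM event e LEFT JOIN instance i ON i.id = e.instance_id
    EXCEPT
    SELECT e.id FROM event e LEFT JOIN instance i ON i.id = e.instance_id
    WHERE i.hidden = 1
    ORDER BY 1
""").fetchall()

# MySQL-compatible rewrite: subtract the second set with NOT IN.
with_not_in = conn.execute("""
    SELECT e.id FROM event e LEFT JOIN instance i ON i.id = e.instance_id
    WHERE e.id NOT IN (
        SELECT e2.id FROM event e2 JOIN instance i2 ON i2.id = e2.instance_id
        WHERE i2.hidden = 1
    )
    ORDER BY 1
""").fetchall()

print(with_except, with_not_in)  # event 11 (hidden instance) is excluded by both
```

Both queries drop only the event on the hidden instance; the NOT IN form compiles on MySQL, where the EXCEPT form raises the 1064 syntax error shown above.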
|
From: Sylwester K. <iss...@bi...> - 2015-09-02 19:30:19

New issue 3522: Attribute history not tracked when using session.merge()
https://bitbucket.org/zzzeek/sqlalchemy/issues/3522/attribute-history-not-tracked-when-using

Sylwester Kardziejonek:

I've noticed that the attribute history of my objects is not tracked. The object comes from session.merge() as if it were just freshly retrieved from the database.

```python
shipping_id = int(request.matchdict.get('id'))

# Get model from db
db_shipping = db.session.query(m.Shipping).get(shipping_id)

# Create updated instance
updated_shipping = m.Shipping(id=shipping_id, price=999)

# Merge it with session
merged = db.session.merge(updated_shipping)
# I've noticed an actual UPDATE is emitted at this point

# Attribute values are correctly reflected, but the history is all 'unchanged'
# Simply iterates over history of each attribute
changeset(merged)  # History is all 'unchanged'
```

On the other hand, if I add one line, this works:

```python
shipping_id = int(request.matchdict.get('id'))

# Get model from db
db_shipping = db.session.query(m.Shipping).get(shipping_id)

# When I add this, the history is tracked properly
changeset(db_shipping)

# Create updated instance
updated_shipping = m.Shipping(id=shipping_id, price=999)

# Merge it with session
merged = db.session.merge(updated_shipping)

# Attribute values are correctly reflected, history state is correct
# Simply iterates over history of each attribute
changeset(merged)
```

```python
def changeset(obj):
    data = {}
    for prop in obj.__mapper__.iterate_properties:
        history = get_history(obj, prop.key)
        if history.has_changes():
            old_value = history.deleted[0] if history.deleted else None
            new_value = history.added[0] if history.added else None
            if new_value:
                data[prop.key] = [new_value, old_value]
    return data
```
|
From: Joseph S. <iss...@bi...> - 2015-08-31 19:57:33
|
New issue 3521: SQLAlchemy fails for MySQL 5.7.8 with a security error https://bitbucket.org/zzzeek/sqlalchemy/issues/3521/sqlalchemy-fails-for-mysql-578-with-a Joseph Schorr: Apologies if this has already been addressed; a search did not find any references. On MySQL 5.7.8, trying to connect via SQLAlchemy results in the following exception:

``` #!python
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (1142, u"SELECT command denied to user 'testuser'@'192.168.59.3' for table 'session_variables'") [SQL: "SHOW VARIABLES LIKE 'sql_mode'"]
```

This exception is raised because of a recent security change made by Oracle to MySQL. Details on the issue and some possible solutions can be found here: http://code.openark.org/blog/mysql/baffling-5-7-globalstatus-variables-issues-unclean-migration-path Is there a recommended approach to working around this issue?
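The linked post attributes the error to MySQL 5.7's migration of the `session_variables` table into `performance_schema`. One server-side workaround it discusses is the `show_compatibility_56` system variable, which restores the pre-5.7.8 behaviour of SHOW VARIABLES; a hedged sketch (this variable is specific to MySQL 5.7 and was later removed, so verify against your server version before relying on it):

``` #!sql
-- Restore pre-5.7.8 SHOW VARIABLES behaviour (requires SUPER privilege)
SET GLOBAL show_compatibility_56 = ON;
```

Alternatively, granting the connecting user SELECT on the relevant `performance_schema` tables should avoid the permission error without changing server-wide behaviour.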
|
From: Yegor R. <iss...@bi...> - 2015-08-31 14:04:30
|
New issue 3520: SQLAlchemy 1.0.8 Core fails to convert to native type https://bitbucket.org/zzzeek/sqlalchemy/issues/3520/sqlalchemy-108-core-fails-to-convert-to Yegor Roganov: I have the following model:

```python
class PortfolioItem(Base):
    class STATUS(Enum):
        active = 0
        deleted = 1

    status = Column(EnumInt(STATUS), default=STATUS.active)
    # ...
```

where `EnumInt` is a custom type (it just converts an enum back and forth). Consider the following statement:

```python
PortfolioItem.__table__.insert(items).returning(PortfolioItem.id)
```

This then throws an exception:

```
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) can't adapt type 'STATUS' [SQL: 'INSERT INTO portfolio (performer_id, status, date_created, description, category_id) VALUES (%(performer_id_0)s, %(status)s, %(date_created)s, %(description_0)s, %(category_id_0)s), (%(performer_id_1)s, %(status_1)s, %(date_created_1)s, %(description_1)s, %(category_id_1)s) RETURNING portfolio.id'] [parameters: {'description_1': '', 'performer_id_1': 1, 'category_id_1': None, 'category_id_0': None, 'status_1': <STATUS.active: 0>, 'performer_id_0': 1, 'description_0': '', 'date_created_1': datetime.datetime(2015, 8, 31, 13, 43, 58, 366724), 'date_created': datetime.datetime(2015, 8, 31, 13, 43, 58, 366716), 'status': 0}]
```

Note that the `status` parameter has been converted to 0 as expected, while `status_1` is still an `Enum` member. This works in 0.9.8.
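The report doesn't include `EnumInt` itself. A plausible reconstruction (the name and behaviour are assumed from the description, sketched as a `TypeDecorator` over `Integer`) looks like this; the parameter dump above then shows the bind processing being applied to `status` in the first VALUES tuple but apparently skipped for `status_1` in the second:

```python
import enum

from sqlalchemy.types import Integer, TypeDecorator


class EnumInt(TypeDecorator):
    """Hypothetical reconstruction of the reporter's custom type:
    persists an Enum member as its integer value."""

    impl = Integer

    def __init__(self, enum_class, *args, **kwargs):
        super(EnumInt, self).__init__(*args, **kwargs)
        self._enum_class = enum_class

    def process_bind_param(self, value, dialect):
        # Python-side Enum member -> database integer
        return None if value is None else value.value

    def process_result_value(self, value, dialect):
        # Database integer -> Enum member
        return None if value is None else self._enum_class(value)


class STATUS(enum.Enum):
    active = 0
    deleted = 1
```

With such a type, `EnumInt(STATUS).process_bind_param(STATUS.active, None)` yields `0`, matching the correctly converted `status` parameter in the error output.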
|
From: Joe J. <iss...@bi...> - 2015-08-28 20:34:17
|
New issue 3519: label does not copy bind from parent https://bitbucket.org/zzzeek/sqlalchemy/issues/3519/label-does-not-copy-bind-from-parent Joe Jevnik: If I have an expression whose bind is set to some engine and I then call `.label()` on it, the result has `bind=None`.
|
From: Thijs D. <iss...@bi...> - 2015-08-28 11:41:11
|
New issue 3518: (PostgreSQL) Ordering a DISTINCT ON query adds columns to SELECT https://bitbucket.org/zzzeek/sqlalchemy/issues/3518/postgresql-ordering-a-distinct-on-query Thijs Damsma: When a DISTINCT query is ordered, the columns being sorted on are added to the SELECT statement; see the example code:

``` #!python
from sqlalchemy.orm.session import sessionmaker
from sqlalchemy.dialects import postgresql
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    att1 = Column(Integer)
    att2 = Column(Integer)

Session = sessionmaker()
sess = Session()

q = sess.query(User.att2).distinct(User.att1)
print(str(q.statement.compile(dialect=postgresql.dialect())))
```

As expected, only user.att2 is returned:

``` #!sql
SELECT DISTINCT ON ("user".att1) "user".att2 FROM "user"
```

Now with an order_by:

``` #!python
print(str(q.order_by(User.att2, User.id).statement.compile(
    dialect=postgresql.dialect())))
```

Not as expected, both user.att2 and user.id are returned:

``` #!sql
SELECT DISTINCT ON ("user".att1) "user".att2, "user".id FROM "user" ORDER BY "user".att2, "user".id
```
|
From: Allan C. <iss...@bi...> - 2015-08-28 02:14:04
|
New issue 3517: Inconsistencies in documentation and code regarding strong referencing guidance https://bitbucket.org/zzzeek/sqlalchemy/issues/3517/inconsistencies-in-documentation-and-code Allan Crooks: My use case: I recently upgraded a project, which was built using SQLAlchemy 0.3, to SQLAlchemy 1.0. I went from version to version, making changes where required. One of the features the project made use of from SQLA 0.3 was the "objectstore" extension, which provided (amongst other things) object caching. As part of the update process, I moved to using sessions, and explicitly disabled the weak-reference identity map to keep my code compatible. When I started to get warnings because I was passing *weak_identity_map=False*, I stopped passing that parameter. I then realised that this was causing performance issues, as multiple calls were now being made to the database to retrieve the same object, whereas only one call was made previously. So, I want to be able to have sessions which will persist objects until they are explicitly cleared. The problem is, I see inconsistent advice offered when it comes to object caching.

---

* The documentation [here](http://docs.sqlalchemy.org/rel_1_0/latest/orm/session_state_management.html#session-attributes) explicitly suggests how to disable weak references if that's your preferred option: *"To disable the weak referencing behavior and force all objects within the session to remain until explicitly expunged, configure sessionmaker with the weak_identity_map=False setting."* Its use is not discouraged at all.
* The documentation [here](http://docs.sqlalchemy.org/rel_1_0/latest/orm/session_basics.html#is-the-session-a-cache) indicates that, although the Session object is capable of being used as a cache, it's not designed for it. This gives the impression that, while SQLAlchemy does provide an implementation of a cache, alternative options may work better. However, it does not discourage its use.
* [This](http://docs.sqlalchemy.org/en/rel_1_0/orm/session_api.html#sqlalchemy.orm.session.Session.params.weak_identity_map) indicates that the option is obsolete. "Obsolete" would suggest that the flag results in the same behaviour regardless of the value passed; that isn't true in my case.
* The [generated warning](https://github.com/zzzeek/sqlalchemy/blob/a43b106aa717587f85d0d88c88b7c442104fac24/lib/sqlalchemy/orm/session.py#L492) indicates that the feature isn't needed. As a message, it's unclear what that is saying. Is it saying that SQLAlchemy doesn't need the feature? Perhaps. Is it implying that I don't need the feature? That would be incorrect.
* [This](http://comments.gmane.org/gmane.comp.python.sqlalchemy.user/22960) discussion thread contains more enthusiastic advice that people should avoid strong identity maps and instead maintain a local cache of objects themselves. This is a clear signal that we should move away from using strong identity maps.
* There is issue #1473, which specifies the deprecation of StrongInstanceDict. However, there aren't any deprecation warnings in the direct usage of StrongInstanceDict itself, so I might infer that the classes in sqlalchemy.orm.identity are intended for internal use only, and that the *weak_identity_map* flag is meant to be the only way to make use of them.

---

These inconsistencies make it unclear how to proceed (especially in terms of expectations about how future versions will change the API):

* Is the caching feature of SQLAlchemy going to be removed altogether?
* Am I allowed to use StrongInstanceDict, or write my own IdentityMap class?
* Is SQLAlchemy correct in asserting that the flag is obsolete?
* Is SQLAlchemy indicating that the feature is unnecessary for its own internal use, or for my intended use?
* Shouldn't SQLAlchemy generate a warning that suggests to clients how they should change their code to achieve the same effect in future versions?
My ideal solution would be to continue using StrongInstanceDict, even if future versions of SQLAlchemy only supported this by accepting a class or factory function that returns an object to use as the identity map (i.e. allowing any identity map implementation to be used, rather than framing the option solely in terms of weak vs. strong references).
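The "keep your own strong references" alternative suggested in the linked discussion thread can be sketched without any identity-map flags at all: hold loaded instances in an ordinary dict keyed by their identity key, populated from the `loaded_as_persistent` session event (available since SQLAlchemy 1.1). The model and setup below are illustrative, not from the report:

```python
from sqlalchemy import Column, Integer, create_engine, event, inspect
from sqlalchemy.orm import sessionmaker

try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # SQLAlchemy < 1.4
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Strong references held outside the Session, keyed by identity key;
# clearing this dict is the explicit "expunge" step.
cache = {}

@event.listens_for(session, 'loaded_as_persistent')
def remember(session, instance):
    cache[inspect(instance).identity_key] = instance

session.add(Thing(id=1))
session.commit()
session.expunge_all()  # drop identity-map entries so the next query re-loads

thing = session.query(Thing).filter_by(id=1).one()
```

After the query, `cache` holds a strong reference to the loaded instance, so it survives garbage collection regardless of the Session's weak identity map; clearing the dict releases everything at once.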