sqlalchemy-tickets Mailing List for SQLAlchemy (Page 23)
Brought to you by: zzzeek
Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2006 | – | – | 174 | 50 | 71 | 129 | 113 | 141 | 82 | 142 | 97 | 72 |
| 2007 | 159 | 213 | 156 | 151 | 58 | 166 | 296 | 198 | 89 | 133 | 150 | 122 |
| 2008 | 144 | 65 | 71 | 69 | 143 | 111 | 113 | 159 | 81 | 135 | 107 | 200 |
| 2009 | 168 | 109 | 141 | 128 | 119 | 132 | 136 | 154 | 151 | 181 | 223 | 169 |
| 2010 | 103 | 209 | 201 | 183 | 134 | 113 | 110 | 159 | 138 | 96 | 116 | 94 |
| 2011 | 97 | 188 | 157 | 158 | 118 | 102 | 137 | 113 | 104 | 108 | 91 | 162 |
| 2012 | 189 | 136 | 153 | 142 | 90 | 141 | 67 | 77 | 113 | 68 | 101 | 122 |
| 2013 | 60 | 77 | 77 | 129 | 189 | 155 | 106 | 123 | 53 | 142 | 78 | 102 |
| 2014 | 143 | 93 | 35 | 26 | 27 | 41 | 45 | 27 | 37 | 24 | 22 | 20 |
| 2015 | 17 | 15 | 34 | 55 | 33 | 31 | 27 | 17 | 22 | 26 | 27 | 22 |
| 2016 | 20 | 24 | 23 | 13 | 17 | 14 | 31 | 23 | 24 | 31 | 23 | 16 |
| 2017 | 24 | 20 | 27 | 24 | 28 | 18 | 18 | 23 | 30 | 17 | 12 | 12 |
| 2018 | 27 | 23 | 13 | 19 | 21 | 29 | 11 | 22 | 14 | 9 | 24 | – |
From: Konsta V. <iss...@bi...> - 2016-09-14 06:00:21
New issue 3794: Setting column as None does not explicitly add the NULL to insert statement
https://bitbucket.org/zzzeek/sqlalchemy/issues/3794/setting-column-as-none-does-not-explicitly

Konsta Vesterinen:

The following code illustrates this problem:

```
#!python
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = sa.create_engine(
    'postgres://postgres@localhost/test'
)
engine.echo = True
Base = declarative_base()
Session = sessionmaker(bind=engine)
session = Session()


class Question(Base):
    __tablename__ = 'question'
    id = sa.Column(sa.Integer, primary_key=True)
    weight = sa.Column(sa.Integer, default=1)


Base.metadata.create_all(bind=session.bind)

q = Question(weight=None)
session.add(q)
session.commit()
assert q.weight is None
```

The problem also exists when changing ``default=1`` to ``server_default='1'``; in that case, too, the explicit weight=NULL is not added to the INSERT statement.
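A possible workaround sketch, reusing the `Question` model and `session` from the report above: assign the SQL `null()` construct instead of Python `None`, which should make the NULL an explicit part of the INSERT so the column default does not apply. Whether this fits the reporter's actual use case is an assumption.

```
#!python
import sqlalchemy as sa

# Assigning the SQL NULL expression (rather than Python None) includes the
# column explicitly in the INSERT, so the Python-side default of 1 is skipped.
q = Question(weight=sa.null())
session.add(q)
session.commit()
assert q.weight is None  # value reloaded from the row after commit
```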
From: keeyip c. <iss...@bi...> - 2016-09-08 19:14:44
New issue 3793: Claim that 'column elements' are the most fundamental part of expression API seems contradictory
https://bitbucket.org/zzzeek/sqlalchemy/issues/3793/claim-that-column-elements-are-the-most

keeyip chan:

I am referring to the first paragraph of http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.expression.text

I may just be confused by the order of sentences, but it seems that the "most fundamental part" ought to be "clause elements" rather than "column elements". It seems contradictory to state in one sentence that column elements are the most fundamental, but then claim that the "core of all SQL expression constructs is the ClauseElement".

**Proposed rewording:** The expression API represents each part of a SQL expression as a subclass of ClauseElement. Subclasses are used to represent conjunctions, column selections, filters, and many other types of SQL clauses. The most basic of these clauses is the ColumnElement, which represents a column and its datatype. The expression API defines interfaces for combining ClauseElement objects and for generating SQL statements from those combinations.
From: Chris W. <iss...@bi...> - 2016-09-07 16:39:07
New issue 3792: Reflect exclude constraints from Postgres tables.
https://bitbucket.org/zzzeek/sqlalchemy/issues/3792/reflect-exclude-constraints-from-postgres

Chris Withers:

SQLA currently doesn't, which makes Alembic a bit sad sometimes.
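Until reflection grows support for these, a stopgap sketch that queries the PostgreSQL catalogs directly for exclusion constraints (`contype = 'x'`); the connection URL and table name below are placeholders.

```
#!python
import sqlalchemy as sa

engine = sa.create_engine('postgresql://scott:tiger@localhost/test')

query = sa.text(
    "SELECT conname, pg_get_constraintdef(oid) AS definition "
    "FROM pg_constraint "
    "WHERE contype = 'x' AND conrelid = CAST(:tbl AS regclass)"
)

with engine.connect() as conn:
    # prints each exclusion constraint name and its full definition
    for name, definition in conn.execute(query, tbl='some_table'):
        print(name, definition)
```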
From: Ken R. <iss...@bi...> - 2016-09-06 19:18:08
New issue 3791: Extending pymssql dialect is_disconnect again
https://bitbucket.org/zzzeek/sqlalchemy/issues/3791/extending-pymssql-dialect-is_disconnect

Ken Robbins:

Similar to #3585 except a different initial error message. Using SQLAlchemy 1.0.12, pymssql 2.1.2, and SQL Server 2012. Here, we performed a DB failover, and the connection became unusable.

First error:

```python
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 149, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 934, in execute
    clause, params or {})
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 662, in execute
    params)
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 761, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 874, in _execute_context
    context)
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1024, in _handle_dbapi_exception
    exc_info
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 195, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 324, in do_execute
    cursor.execute(statement, parameters)
  File "pymssql.pyx", line 467, in pymssql.Cursor.execute (pymssql.c:7569)
    raise OperationalError, e.args[0]
OperationalError: (OperationalError) (20017, 'DB-Lib error message 20017, severity 9:\nUnexpected EOF from the server\nNet-Lib error during Resource temporarily unavailable (11)\n')
```

All subsequent queries resulted in this error:

```python
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/result.py", line 790, in fetchone
    self.cursor, self.context)
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1027, in _handle_dbapi_exception
    util.reraise(*exc_info)
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/result.py", line 781, in fetchone
    row = self._fetchone_impl()
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/result.py", line 700, in _fetchone_impl
    self._non_result()
  File "/opt/webapp/svc/local/lib/python2.7/site-packages/sqlalchemy/engine/result.py", line 720, in _non_result
    "This result object does not return rows. "
ResourceClosedError: This result object does not return rows. It has been closed automatically.
```

It appears the DB server is hanging up on the client and the client is not attempting to reconnect, so it stays in this bad state. I propose "message 20017" be added to the list of errors considered to be a disconnect.

```
diff --git a/lib/sqlalchemy/dialects/mssql/pymssql.py b/lib/sqlalchemy/dialects/mssql/pymssql.py
index 32e3bd9..380418b 100644
--- a/lib/sqlalchemy/dialects/mssql/pymssql.py
+++ b/lib/sqlalchemy/dialects/mssql/pymssql.py
@@ -87,6 +87,7 @@ class MSDialect_pymssql(MSDialect):
             "Not connected to any MS SQL server",
             "Connection is closed",
             "message 20006",  # Write to the server failed
+            "message 20017",  # Unexpected EOF from the server
         ):
             if msg in str(e):
                 return True
```
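A possible interim sketch that avoids patching the dialect: flag the error as a disconnect from a `handle_error` listener. This assumes the installed version honors `is_disconnect` set from the listener (later 1.x releases document this; whether it applies to 1.0.12 exactly is an assumption), and the engine URL is a placeholder.

```
#!python
import sqlalchemy as sa
from sqlalchemy import event

engine = sa.create_engine('mssql+pymssql://user:password@host/db')  # placeholder URL


@event.listens_for(engine, "handle_error")
def _treat_20017_as_disconnect(context):
    # If the DB-Lib error text carries "message 20017" ("Unexpected EOF
    # from the server"), mark it as a disconnect so the pool invalidates
    # the connection instead of reusing it.
    if "message 20017" in str(context.original_exception):
        context.is_disconnect = True
```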
From: Konsta V. <iss...@bi...> - 2016-09-03 19:13:22
New issue 3790: Setting unexpected type for relationship throws exception but modifies session state
https://bitbucket.org/zzzeek/sqlalchemy/issues/3790/setting-unexpected-type-for-relationship

Konsta Vesterinen:

Consider the following model definition and data.

```
#!python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base, synonym_for

Base = declarative_base()


class Section(Base):
    __tablename__ = 'section'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)


class Question(Base):
    __tablename__ = 'question'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)
    section_id = sa.Column(sa.Integer, sa.ForeignKey(Section.id))
    section = sa.orm.relationship(Section, backref='questions')


engine = sa.create_engine('sqlite:///:memory:')
conn = engine.connect()
sa.orm.configure_mappers()
Base.metadata.create_all(conn)
Session = sa.orm.sessionmaker(bind=conn)
session = Session()

section = Section(questions=[Question(name='1'), Question(name='2')])
session.add(section)
session.commit()
```

Now if we did something like this accidentally

```
#!python
section.questions[0] = 'Some updated question name'
```

it throws an exception as expected, but it also modifies the session state, so that calling

```
#!python
session.commit()
```

will result in our given section having only one question. Maybe it would be better if set operations that throw an error did not affect the session state. I encountered this issue while doing stuff in the console and later found out that I had accidentally deleted one record.
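A recovery sketch for the example above, assuming the goal is to undo the mangled in-memory collection before the next flush: expire just the `questions` attribute so it is reloaded from the database on next access. This is a workaround suggestion, not the behavioral fix proposed in the ticket.

```
#!python
try:
    section.questions[0] = 'Some updated question name'
except Exception:
    # discard the pending (broken) collection state; the next access
    # reloads 'questions' from the database
    session.expire(section, ['questions'])

assert len(section.questions) == 2  # both rows are still intact
```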
From: Michael B. <iss...@bi...> - 2016-09-02 15:44:14
New issue 3789: allow StrSQLCompiler to generate columns w/o names
https://bitbucket.org/zzzeek/sqlalchemy/issues/3789/allow-strsqlcompiler-to-generate-columns-w

Michael Bayer:

e.g. for this:

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer)


class AbstractThing(Base):
    __abstract__ = True
    id = Column(Integer, primary_key=True)


class A(AbstractThing):
    __tablename__ = 'a'
    bs = relationship("B", primaryjoin=B.a_id == AbstractThing.id)


configure_mappers()
```

patch:

```
#!diff
diff --git a/lib/sqlalchemy/sql/compiler.py b/lib/sqlalchemy/sql/compiler.py
index 85d5ff6..38013fb 100644
--- a/lib/sqlalchemy/sql/compiler.py
+++ b/lib/sqlalchemy/sql/compiler.py
@@ -660,12 +660,15 @@ class SQLCompiler(Compiled):
         return label.element._compiler_dispatch(
             self, within_columns_clause=False, **kw)
 
+    def _fallback_column_name(self, column):
+        raise exc.CompileError("Cannot compile Column object until "
+                               "its 'name' is assigned.")
+
     def visit_column(self, column, add_to_result_map=None,
                      include_table=True, **kwargs):
         name = orig_name = column.name
         if name is None:
-            raise exc.CompileError("Cannot compile Column object until "
-                                   "its 'name' is assigned.")
+            name = self._fallback_column_name(column)
 
         is_literal = column.is_literal
         if not is_literal and isinstance(name, elements._truncated_label):
@@ -2203,6 +2206,9 @@ class StrSQLCompiler(SQLCompiler):
 
     """
 
+    def _fallback_column_name(self, column):
+        return "<name unknown>"
+
     def visit_getitem_binary(self, binary, operator, **kw):
         return "%s[%s]" % (
             self.process(binary.left, **kw),
```
From: Michael B. <iss...@bi...> - 2016-09-02 15:04:07
New issue 3788: simple many-to-one comparison is sensitive to order of clauses
https://bitbucket.org/zzzeek/sqlalchemy/issues/3788/simple-many-to-one-comparison-is-sensitive

Michael Bayer:

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    assoc = relationship("AB")


class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)


class AB(Base):
    __tablename__ = 'ab'
    a_id = Column(ForeignKey('a.id'), primary_key=True)
    b_id = Column(ForeignKey('b.id'), primary_key=True)
    a = relationship("A")
    b = relationship("B")


bug = True


class C(Base):
    __tablename__ = 'c'
    id = Column(Integer, primary_key=True)

    if bug:
        k2 = Column(Integer)
        k1 = Column(Integer)
    else:
        k1 = Column(Integer)
        k2 = Column(Integer)

    assoc = relationship("AB")

    __table_args__ = (
        ForeignKeyConstraint(['k1', 'k2'], ['ab.a_id', 'ab.b_id']),
        {})


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
a, b = A(id=100), B(id=1000)
ab = AB(a=a, b=b)
c1 = C(id=1, assoc=ab)
c2 = C(id=2, assoc=ab)
s.add_all([a, b, ab, c1, c2])
s.commit()
s.close()  # clears everything

c1 = s.query(C).get(1)

print "#1 EMITS LAZYLOAD:"
assoc = c1.assoc  # note we keep a strong reference here

c2 = s.query(C).get(2)
print "SHOULD NOT EMIT LAZYLOAD"
assert c2.assoc is assoc
```

what happens here is mapper._get_clause() is determined as an and_() using the order of columns in primary key only. The relationship join is based on Join._join_condition which iterates through columns in the source selectable, so the clauses are reversed. the fix would be in the comparison:

```
#!diff
diff --git a/lib/sqlalchemy/sql/elements.py b/lib/sqlalchemy/sql/elements.py
index 75d5368..cff5737 100644
--- a/lib/sqlalchemy/sql/elements.py
+++ b/lib/sqlalchemy/sql/elements.py
@@ -1828,12 +1828,23 @@ class ClauseList(ClauseElement):
         if not isinstance(other, ClauseList) and len(self.clauses) == 1:
             return self.clauses[0].compare(other, **kw)
         elif isinstance(other, ClauseList) and \
-                len(self.clauses) == len(other.clauses):
-            for i in range(0, len(self.clauses)):
-                if not self.clauses[i].compare(other.clauses[i], **kw):
-                    return False
+                len(self.clauses) == len(other.clauses) and \
+                self.operator is other.operator:
+
+            if self.operator in (operators.and_, operators.or_):
+                completed = set()
+                for clause in self.clauses:
+                    for other_clause in set(other.clauses).difference(completed):
+                        if clause.compare(other_clause, **kw):
+                            completed.add(other_clause)
+                            break
+                return len(completed) == len(other.clauses)
             else:
-                return self.operator == other.operator
+                for i in range(0, len(self.clauses)):
+                    if not self.clauses[i].compare(other.clauses[i], **kw):
+                        return False
+                else:
+                    return True
         else:
             return False
```
From: <and...@ha...> - 2016-09-01 12:48:43
New issue 3787: Mysql Dialect Not Parsing 'connect_timeout' query string param Option
https://bitbucket.org/zzzeek/sqlalchemy/issues/3787/mysql-dialect-not-parsing-connect_timeout

and...@ha...:

```
#!python
from sqlalchemy import create_engine

def main():
    uri = "mysql+mysqlconnector://uname:password@mysqlserver/realdb2?connect_timeout=5"
    # correct credentials to test
    e = create_engine(uri)
    ##
    ## will produce a
    ## (mysql.connector.errors.OperationalError) an integer is required (got type str)
    ## from the
    ## site-packages/mysql/connector/network.py:471  mysql-connector 2.1.3
    ##
    ## while attempting to execute self.sock.settimeout(self._connection_timeout)
    ## where self._connection_timeout is set to '5' (the str '5' not the integer 5)
    ## tracing back usages and parsing of the supplied uri
    ##
    ## MySQLDialect_mysqlconnector::create_connect_args in
    ## site-packages/sqlalchemy/dialects/mysql/mysqlconnector.py:115
    ##
    ## and the lines:
    ##     util.coerce_kw_type(opts, 'buffered', bool)
    ##     util.coerce_kw_type(opts, 'raise_on_warnings', bool)
    ##
    ## Where it is parsing SOME of the options for mysql but not all of them and leaving
    ## connect_timeout as a string
    ##
    e.execute("select sleep(10);")  # error will occur here
    print("done")

if __name__ == '__main__':
    main()
```

Of course the workaround is to simply use:

```
#!python
create_engine(uri, connect_args={'connect_timeout': 5})
```

But if you're going to parse some of the documented options for the connector, why not all of them? Also, this was uncovered in a system where changing the configuration and supplying options via the URL was trivial, but modifying the code base was exceedingly painful.

Adding the line

```
#!python
util.coerce_kw_type(opts, 'connect_timeout', int)
```

to site-packages/sqlalchemy/dialects/mysql/mysqlconnector.py appears to correct the issue for the 'connect_timeout' argument.
From: Philip M. <iss...@bi...> - 2016-08-31 17:12:16
New issue 3786: Postgres Table Declaration Error
https://bitbucket.org/zzzeek/sqlalchemy/issues/3786/postgres-table-declaration-error

Philip Martin:

I don't think I had this issue previously, but I have noticed it since using:

python: 3.5.2
sqlalchemy: 1.0.13

I run into the error when doing the following:

```python
table = sa.Table('foobar', sa.MetaData(),
                 sa.Column('foo', sa.CHAR(7), primary_key=True))

engine.execute(table)

Traceback (most recent call last):
  engine.execute(table)
  File "/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1991, in execute
    return connection.execute(statement, *multiparams, **params)
  File "/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/anaconda/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 65, in _execute_on_connection
    return connection._execute_default(self, multiparams, params)
  File "/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 942, in _execute_default
    ret = ctx._exec_default(default, None)
  File "/anaconda/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 952, in _exec_default
    if default.is_sequence:
AttributeError: 'Table' object has no attribute 'is_sequence'

In[80]: table.create(engine)
```

The DDL statement does execute when I run:

```python
engine.create(table)
```
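For contrast, a short sketch of the usual DDL paths; this is general SQLAlchemy usage (the Table itself, or an explicit `CreateTable` construct) rather than a resolution from the ticket, and the SQLite URL is a placeholder.

```
#!python
import sqlalchemy as sa
from sqlalchemy.schema import CreateTable

engine = sa.create_engine('sqlite://')  # placeholder engine for the sketch
table = sa.Table('foobar', sa.MetaData(),
                 sa.Column('foo', sa.CHAR(7), primary_key=True))

# Emit CREATE TABLE via the Table itself...
table.create(engine)

# ...or build an explicit DDL construct; a bare Table is not an executable
# statement, which is what the AttributeError above is (obscurely) saying.
print(CreateTable(table).compile(engine))
```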
From: Michael B. <iss...@bi...> - 2016-08-31 14:46:24
New issue 3785: Postgresql COLLATE requires quotes
https://bitbucket.org/zzzeek/sqlalchemy/issues/3785/postgresql-collate-requires-quotes

Michael Bayer:
From: Edd B. <iss...@bi...> - 2016-08-30 17:53:42
New issue 3784: Test failures on OpenBSD
https://bitbucket.org/zzzeek/sqlalchemy/issues/3784/test-failures-on-openbsd

Edd Barrett:

Hi,

Trying to update OpenBSD's py-sqlalchemy package to 1.0.14. Three tests currently fail; py.test output below.

Thanks

```
> /usr/ports/pobj/py-sqlalchemy-1.0.14/SQLAlchemy-1.0.14/lib/sqlalchemy/testing/suite/test_update_delete.py(54)
  48     def test_delete(self):
  49         t = self.tables.plain_pk
  50         r = config.db.execute(
  51             t.delete().where(t.c.id == 2)
  52         )
  53         assert not r.is_insert
  54  ->     assert not r.returns_rows
  55         eq_(
  56             config.db.execute(t.select().order_by(t.c.id)).fetchall(),
  57             [
  58                 (1, "d1"),
  59                 (3, "d3")
  60             ]
  61         )

  30     def test_update(self):
  31         t = self.tables.plain_pk
  32         r = config.db.execute(
  33             t.update().where(t.c.id == 2),
  34             data="d2_new"
  35         )
  36         assert not r.is_insert
  37  ->     assert not r.returns_rows
  38
  39         eq_(
  40             config.db.execute(t.select().order_by(t.c.id)).fetchall(),
  41             [
  42                 (1, "d1"),
  43                 (2, "d2_new"),
  44                 (3, "d3")
  45             ]
  46         )

 980     def test_no_rowcount_on_selects_inserts(self):
 981         """assert that rowcount is only called on deletes and updates.
 982
 983         This because cursor.rowcount may can be expensive on some dialects
 984         such as Firebird, however many dialects require it be called
 985         before the cursor is closed.
 986
 987         """
 988
 989         metadata = self.metadata
 990
 991         engine = engines.testing_engine()
 992
 993         t = Table('t1', metadata,
 994                   Column('data', String(10))
 995                   )
 996         metadata.create_all(engine)
 997
 998         with patch.object(
 999                 engine.dialect.execution_ctx_cls, "rowcount") as mock_rowcount:
1000             mock_rowcount.__get__ = Mock()
1001             engine.execute(t.insert(),
1002                            {'data': 'd1'},
1003                            {'data': 'd2'},
1004                            {'data': 'd3'})
1005
1006             eq_(len(mock_rowcount.__get__.mock_calls), 0)
1007
1008             eq_(
1009                 engine.execute(t.select()).fetchall(),
1010                 [('d1', ), ('d2', ), ('d3', )]
1011             )
1012             eq_(len(mock_rowcount.__get__.mock_calls), 0)
1013
1014             engine.execute(t.update(), {'data': 'd4'})
1015
1016  >>         eq_(len(mock_rowcount.__get__.mock_calls), 1)
1017
1018             engine.execute(t.delete())
1019  ->         eq_(len(mock_rowcount.__get__.mock_calls), 2)
(Pdb++) len(mock_rowcount.__get__.mock_calls)
0
```
From: Michael B. <iss...@bi...> - 2016-08-29 07:43:32
New issue 3782: test issue
https://bitbucket.org/zzzeek/sqlalchemy/issues/3782/test-issue

Michael Bayer:

this is a test
From: 종빈 박 <iss...@bi...> - 2016-08-29 07:42:28
New issue 3783: get() query does not load joinedloaded relationship when reloading expired object
https://bitbucket.org/zzzeek/sqlalchemy/issues/3783/get-query-does-not-load-joinedloaded

종빈 박:

I'm not sure whether this is intended behavior or a bug, but query().get() does not joinedload a relationship specified as `lazy=joinedload` when reloading an instance that is marked as expired in the identity map.

https://gist.github.com/chongkong/1460884de96866d42f5a197fcec5652b
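A possible interim workaround, sketched with hypothetical names (`Parent`, `Parent.children`, `some_id`) since the reporter's models live in the linked gist: issue an explicit query with the eager-load option plus `populate_existing()`, so the expired instance is refreshed with the joined load applied, instead of relying on `get()`.

```
#!python
from sqlalchemy.orm import joinedload

obj = (
    session.query(Parent)
    .options(joinedload(Parent.children))
    .populate_existing()  # refresh the expired instance already in the identity map
    .filter(Parent.id == some_id)
    .one()
)
```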
From: Michael B. <iss...@bi...> - 2016-08-26 14:43:02
New issue 3781: implement version_id for bulk_save / bulk_update
https://bitbucket.org/zzzeek/sqlalchemy/issues/3781/implement-version_id-for-bulk_save

Michael Bayer:

```
#!diff
diff --git a/test/orm/test_bulk.py b/test/orm/test_bulk.py
index 7e1b052..6acd6d2 100644
--- a/test/orm/test_bulk.py
+++ b/test/orm/test_bulk.py
@@ -13,6 +13,56 @@ class BulkTest(testing.AssertsExecutionResults):
     run_define_tables = 'each'
 
 
+class BulkInsertUpdateVersionId(BulkTest, fixtures.MappedTest):
+    @classmethod
+    def define_tables(cls, metadata):
+        Table('version_table', metadata,
+              Column('id', Integer, primary_key=True,
+                     test_needs_autoincrement=True),
+              Column('version_id', Integer, nullable=False),
+              Column('value', String(40), nullable=False))
+
+    @classmethod
+    def setup_classes(cls):
+        class Foo(cls.Comparable):
+            pass
+
+    @classmethod
+    def setup_mappers(cls):
+        Foo, version_table = cls.classes.Foo, cls.tables.version_table
+
+        mapper(Foo, version_table, version_id_col=version_table.c.version_id)
+
+    def test_bulk_insert_via_save(self):
+        Foo = self.classes.Foo
+
+        s = Session()
+
+        s.bulk_save_objects([Foo(value='value')])
+
+        eq_(
+            s.query(Foo).all(),
+            [Foo(version_id=1, value='value')]
+        )
+
+    def test_bulk_update_via_save(self):
+        Foo = self.classes.Foo
+
+        s = Session()
+
+        s.add(Foo(value='value'))
+        s.commit()
+
+        f1 = s.query(Foo).first()
+        f1.value = 'new value'
+        s.bulk_save_objects([f1])
+
+        eq_(
+            s.query(Foo).all(),
+            [Foo(version_id=2, value='new value')]
+        )
+
+
 class BulkInsertUpdateTest(BulkTest, _fixtures.FixtureTest):
 
     @classmethod
```

patch:

```
#!diff
diff --git a/lib/sqlalchemy/orm/persistence.py b/lib/sqlalchemy/orm/persistence.py
index 5d69f51..467f47f 100644
--- a/lib/sqlalchemy/orm/persistence.py
+++ b/lib/sqlalchemy/orm/persistence.py
@@ -82,11 +82,15 @@ def _bulk_update(mapper, mappings, session_transaction,
 
     cached_connections = _cached_connection_dict(base_mapper)
 
+    search_keys = mapper._primary_key_propkeys
+    if mapper._version_id_prop:
+        search_keys = set([mapper._version_id_prop.key]).union(search_keys)
+
     def _changed_dict(mapper, state):
         return dict(
             (k, v)
             for k, v in state.dict.items() if k in state.committed_state or k
-            in mapper._primary_key_propkeys
+            in search_keys
         )
 
     if isstates:
```

also, tricky, if you do a bulk update on an object that's also in the session, now the object you have locally is stale, because bulk doesn't do any object bookkeeping. also, tricky, need to close
From: jbeluch <iss...@bi...> - 2016-08-25 14:32:16
New issue 3780: connectionless execution causes TLConnection to stay open incorrectly in certain scenarios
https://bitbucket.org/zzzeek/sqlalchemy/issues/3780/connectionless-execution-causes

jbeluch:

Somewhat related to https://bitbucket.org/zzzeek/sqlalchemy/issues/3779/threadlocal-engine-sometimes-doesnt. All calls to .contextual_connect() return the same TLConnection object; however, sometimes the caller wants a Connection with the should_close_with_result attr set to True on the connection (connectionless execution via engine.execute). So, if the TLConnection is initially created with close_with_result=False, then calls to engine.execute() will increment the __open_count counter while never calling .close() to decrement it.

Failing test:

~~~
def test_tlbind_close_conn(self):
    e = create_engine(testing.db.url, strategy='threadlocal')
    conn = e.contextual_connect()
    e.execute('select 1').fetchall()
    conn.close()
    assert conn.closed
~~~
From: jbeluch <iss...@bi...> - 2016-08-25 14:28:27
New issue 3779: threadlocal engine sometimes doesn't properly close the connection
https://bitbucket.org/zzzeek/sqlalchemy/issues/3779/threadlocal-engine-sometimes-doesnt

jbeluch:

In certain scenarios the __opencount attribute on TLConnection gets incremented but not properly decremented, which means the connection doesn't get closed properly unless you resort to _force_close().

Calling TLEngine.begin() returns the engine object itself, which has __enter__ and __exit__ methods. While __enter__ will increment the open count, __exit__ doesn't call conn.close(), so the count remains higher than it should.

Potential PR here: https://github.com/zzzeek/sqlalchemy/pull/301#issuecomment-242161104

Failing test:

~~~
def test_tlbind_close_trans_conn(self):
    e = create_engine(testing.db.url, strategy='threadlocal')
    conn = e.contextual_connect()
    with e.begin():
        pass
    conn.close()
    assert conn.closed
~~~
From: Konsta V. <iss...@bi...> - 2016-08-20 16:07:35
New issue 3778: Mapper property all_orm_descriptors memoized even after new attributes get added
https://bitbucket.org/zzzeek/sqlalchemy/issues/3778/mapper-property-all_orm_descriptors

Konsta Vesterinen:

The following code should outline the issue:

```
#!python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base, synonym_for

Base = declarative_base()


class Article(Base):
    __tablename__ = 'article'
    id = sa.Column(sa.Integer, primary_key=True)


Article.__mapper__.all_orm_descriptors

Article.name = sa.Column(sa.String)

Article.__mapper__.all_orm_descriptors.keys()
```

I would expect the last line to return a list containing ``'name'``, but currently it only returns ``['id', '__mapper__']``. Even calling ``sa.orm.configure_mappers()`` won't help here.

I noticed this problem when I had SQLAlchemy mapper configuration listeners that were accessing ``Mapper.all_orm_descriptors``.
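A heavily caveated workaround sketch: `all_orm_descriptors` appears to be a memoized property, so dropping the cached value from the mapper's `__dict__` should force it to be recomputed on next access. This leans on an implementation detail of the memoization, not documented API, and may not hold across versions.

```
#!python
# Continuing the Article example above: clear the cached value, then re-read.
Article.__mapper__.__dict__.pop('all_orm_descriptors', None)
print(Article.__mapper__.all_orm_descriptors.keys())  # expected to now include 'name'
```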
From: Konsta V. <iss...@bi...> - 2016-08-20 12:50:06
New issue 3777: Accessing the history of a synonym attribute
https://bitbucket.org/zzzeek/sqlalchemy/issues/3777/accessing-the-history-of-a-synonym

Konsta Vesterinen:

Consider the following code:

```
#!python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base, synonym_for

Base = declarative_base()


class User(Base):
    __tablename__ = 'user'
    _name = sa.Column(sa.String, primary_key=True)

    @synonym_for('_name')
    @property
    def name(self):
        return self._name


u = User()
sa.inspect(u).attrs['name'].history
```

Currently this raises the following error:

```
#!python
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/sqlalchemy/orm/state.py", line 684, in history
    PASSIVE_NO_INITIALIZE)
  File "/sqlalchemy/orm/state.py", line 309, in get_history
    return self.manager[key].impl.get_history(self, self.dict, passive)
AttributeError: '_ProxyImpl' object has no attribute 'get_history'
```

I think it would make more sense if this either:

1. Did not raise an AttributeError, but rather showed the history of the property that the synonym points at (in this case the history of ``_name``), or
2. Raised an error with a more descriptive message.

This is just a minor issue for me and it's easy to find a workaround.
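A workaround sketch for the example above: inspect the underlying column attribute that the synonym proxies to, which does carry history.

```
#!python
import sqlalchemy as sa

u = User()
u._name = 'some user'
# The synonym itself has no history implementation, but the column
# attribute it points at does:
print(sa.inspect(u).attrs['_name'].history)  # shows 'some user' as an added value
```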
From: cdnsteve <iss...@bi...> - 2016-08-18 15:48:12
New issue 3776: Validating a field multiple times with different @validates decorators is causing them to be skipped
https://bitbucket.org/zzzeek/sqlalchemy/issues/3776/validating-a-field-multiple-times-with

cdnsteve:

If a field has multiple @validates decorators, it will not consistently run each of them, and subsequent @validates handlers get skipped.

Example: the second validator, validate_field_positive_number, will not get run.

```
#!python
@validates('email')
def validate_test_123(self, key, field):
    return field

@validates('email')
def validate_field_positive_number(self, key, field):
    if True:
        raise ValidationError('failed multiple validation')
    return field
```

If you run it the first time, validation will pass. If you remove/delete the validate_test_123 decorator and validator, the second one will always work.

Thanks, Steve
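A workaround sketch, assuming the goal is simply to run several checks against the same column: fold them into a single `@validates` handler so only one validator is registered per attribute. The model and checks below are illustrative, not the reporter's code.

```
#!python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import validates

Base = declarative_base()


class Contact(Base):  # hypothetical model standing in for the reporter's class
    __tablename__ = 'contact'
    id = sa.Column(sa.Integer, primary_key=True)
    email = sa.Column(sa.String)

    @validates('email')
    def validate_email(self, key, field):
        # Run all checks in one validator, since registering several
        # @validates handlers for the same attribute is what triggers
        # the skipping described above.
        if field is None:
            raise ValueError('email is required')
        if '@' not in field:
            raise ValueError('not a valid email address')
        return field
```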
From: Brice M. <iss...@bi...> - 2016-08-16 14:25:16
New issue 3775: Fetch Entity from returning clause
https://bitbucket.org/zzzeek/sqlalchemy/issues/3775/fetch-entity-from-returning-clause

Brice Maron:

Hi, I wanted to use an "upsert" in my code and get an ORM entity back, so I tried the method described here: http://docs.sqlalchemy.org/en/latest/changelog/migration_11.html#change-3529 and added a returning(MyTable.my_field); it worked well. But I was unable to do "returning *" or to fetch the related entity. Is there something I missed?
From: Michael B. <iss...@bi...> - 2016-08-15 20:12:09
New issue 3774: subqueryload across multiple of_type() fails to select correct "second to last" entity
https://bitbucket.org/zzzeek/sqlalchemy/issues/3774/subqueryload-across-multiple-of_type-fails

Michael Bayer:

related to #3773, we get another comma join in this:

```
#!python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import (relationship, sessionmaker, subqueryload,
                            with_polymorphic)
from sqlalchemy import create_engine, Column, String, Integer
from sqlalchemy.schema import ForeignKey

Base = declarative_base()


class A(Base):
    __tablename__ = 't_a'

    id = Column(Integer, primary_key=True)


class B(Base):
    __tablename__ = 't_b'

    type = Column(String(2))
    __mapper_args__ = {
        'polymorphic_identity': 'b',
        'polymorphic_on': type
    }

    id = Column(Integer, primary_key=True)

    # Relationship to A
    a_id = Column(Integer, ForeignKey('t_a.id'))
    a = relationship('A', backref='bs')


class B2(B):
    __tablename__ = 't_b2'

    __mapper_args__ = {
        'polymorphic_identity': 'b2',
    }

    id = Column(Integer, ForeignKey('t_b.id'), primary_key=True)


class C(Base):
    __tablename__ = 't_c'

    type = Column(String(2))
    __mapper_args__ = {
        'polymorphic_identity': 'c',
        'polymorphic_on': type
    }

    id = Column(Integer, primary_key=True)

    # Relationship to B
    b_id = Column(Integer, ForeignKey('t_b.id'))
    b = relationship('B', backref='cs')


class C2(C):
    __tablename__ = 't_c2'

    __mapper_args__ = {
        'polymorphic_identity': 'c2',
    }

    id = Column(Integer, ForeignKey('t_c.id'), primary_key=True)


class D(Base):
    __tablename__ = 't_d'

    id = Column(Integer, primary_key=True)

    # Relationship to B
    c_id = Column(Integer, ForeignKey('t_c.id'))
    c = relationship('C', backref='ds')


engine = create_engine('sqlite://', echo=True)

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()

for i in xrange(2):
    a = A()
    session.add(a)
    b = B2(a=a)
    session.add(b)
    c = C2(b=b)
    session.add(c)
    d = D(c=c)
    session.add(d)

session.commit()

b_b2 = with_polymorphic(B, [B2], flat=True)
c_c2 = with_polymorphic(C, [C2], flat=True)

# Broken -- the query on D has a cross join between
# (A join B) and (B join C join D).
r = session.query(
    A
).options(
    subqueryload(
        A.bs.of_type(b_b2)
    ).subqueryload(
        b_b2.cs.of_type(c_c2)
    ).subqueryload(
        c_c2.ds
    )
).all()
```

this would appear to be the fix:

```
#!diff
diff --git a/lib/sqlalchemy/orm/strategies.py b/lib/sqlalchemy/orm/strategies.py
index 1d0058c..4920d5c 100644
--- a/lib/sqlalchemy/orm/strategies.py
+++ b/lib/sqlalchemy/orm/strategies.py
@@ -922,20 +922,11 @@ class SubqueryLoader(AbstractRelationshipLoader):
             # in the case of a one level eager load, this is the
             # leftmost "left_alias".
             parent_alias = left_alias
-        elif info.mapper.isa(self.parent):
-            # In the case of multiple levels, retrieve
-            # it from subq_path[-2]. This is the same as self.parent
-            # in the vast majority of cases, and [ticket:2014]
-            # illustrates a case where sub_path[-2] is a subclass
-            # of self.parent
-            parent_alias = orm_util.AliasedClass(
-                to_join[-1][0],
-                use_mapper_path=True)
+        elif info.is_aliased_class:
+            parent_alias = info.entity
         else:
-            # if of_type() were used leading to this relationship,
-            # self.parent is more specific than subq_path[-2]
             parent_alias = orm_util.AliasedClass(
-                self.parent,
+                info.entity,
                 use_mapper_path=True)
 
         local_cols = self.parent_property.local_columns
```
From: Michael B. <iss...@bi...> - 2016-08-13 03:32:14
New issue 3773: subquery eager load from of_type() to plain mapper chains joins incorrectly
https://bitbucket.org/zzzeek/sqlalchemy/issues/3773/subquery-eager-load-from-of_type-to-plain

Michael Bayer:

The SQL at the end gives us a "FROM <join of things>, <join of things>".

```
#!python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import (relationship, sessionmaker, subqueryload,
                            with_polymorphic, joinedload)
from sqlalchemy import create_engine, Column, String, Integer
from sqlalchemy.schema import ForeignKey

Base = declarative_base()


class B(Base):
    __tablename__ = 't_b'

    id = Column(Integer, primary_key=True)


class C(Base):
    __tablename__ = 't_c'

    type = Column(String(2))
    __mapper_args__ = {
        'polymorphic_identity': 'c',
        'polymorphic_on': type
    }

    id = Column(Integer, primary_key=True)

    # Relationship to B
    b_id = Column(Integer, ForeignKey('t_b.id'))
    b = relationship('B', backref='cs')


class C2(C):
    __tablename__ = 't_c2'

    __mapper_args__ = {
        'polymorphic_identity': 'c2',
    }

    id = Column(Integer, ForeignKey('t_c.id'), primary_key=True)


class D(Base):
    __tablename__ = 't_d'

    id = Column(Integer, primary_key=True)

    # Relationship to B
    c_id = Column(Integer, ForeignKey('t_c.id'))
    c = relationship('C', backref='ds')


engine = create_engine('sqlite://', echo=True)

Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()

for i in xrange(2):
    b = B()
    session.add(b)
    c = C2(b=b)
    session.add(c)
    d = D(c=c)
    session.add(d)

session.commit()

c_c2 = with_polymorphic(C, [C2], flat=True)

r = session.query(
    B
).options(
    subqueryload(
        B.cs.of_type(c_c2)
    ).subqueryload(
        c_c2.ds  # .of_type(D)
    )
).all()
```

the patch is:

```
#!diff
diff --git a/lib/sqlalchemy/orm/strategies.py b/lib/sqlalchemy/orm/strategies.py
index 8260732..1d0058c 100644
--- a/lib/sqlalchemy/orm/strategies.py
+++ b/lib/sqlalchemy/orm/strategies.py
@@ -968,7 +968,7 @@ class SubqueryLoader(AbstractRelationshipLoader):
                 if last and effective_entity is not self.mapper:
                     attr = attr.of_type(effective_entity)
             else:
-                if last and effective_entity is not self.mapper:
+                if last:
                     attr = getattr(parent_alias, key).\
                         of_type(effective_entity)
                 else:
```

the code with effective_entity was introduced in 3dd536ac06808adcf9c10707dbf2ebb6e3842be7; the test for it is test_of_type -> SubclassRelationshipTest.test_twolevel_subqueryload_wsubclass, introduced in 4a4afca9595c9b4b10e6557c6ee819ae386c477a.
From: Yuan Y. <iss...@bi...> - 2016-08-12 09:30:47
New issue 3772: 'charset' in connection url not work on some server
https://bitbucket.org/zzzeek/sqlalchemy/issues/3772/charset-in-connection-url-not-work-on-some

Yuan Yao:

I passed `charset` on the connection URL, which works on almost all MySQL servers except for one particular server. I used `pymysql` as my connector. The MySQL version is `5.6`.
From: Alexandre A. <iss...@bi...> - 2016-08-12 06:51:02
New issue 3771: PyMySQL custom converters reset to default / not used during a commit
https://bitbucket.org/zzzeek/sqlalchemy/issues/3771/pymysql-custom-converters-reset-to-default

Alexandre Avanian:

When creating a mysql engine with pymysql and custom converters, the converters dict passed in parameter is taken into account in the pymysql connection object, but during a commit it seems the encoders revert to the default ones. See below some sample code. The error "AttributeError: 'Timestamp' object has no attribute 'translate'" occurs when pymysql cannot find a converter for the type of the value passed. Dropping into the debugger confirms that the encoders are the default ones and the value is of type pandas.Timestamp.

sqlalchemy: 1.0.13
pymysql: 0.7.6.None

```
from datetime import datetime

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, DateTime

from pymysql.converters import conversions, escape_datetime
from pandas import Timestamp

Base = declarative_base()


class TestConverter(Base):
    __tablename__ = 'test_converter'
    id = Column(Integer, primary_key=True)
    when = Column(DateTime)


assert escape_datetime(Timestamp('2016-08-12 08:45:00')) == escape_datetime(datetime(2016, 8, 12, 8, 45))

conversions[Timestamp] = escape_datetime

engine = create_engine('mysql+pymysql://user:password@host:port/db',
                       connect_args={'conv': conversions})
Session = sessionmaker(bind=engine)
session = Session()
Base.metadata.create_all(engine)

# shows that the encoder for pandas.Timestamp is there
print(engine.contextual_connect().connection.connection.encoders)

new = TestConverter(when=Timestamp('2016-08-12 08:45:00'))
session.add(new)
session.commit()  # fails with AttributeError: 'Timestamp' object has no attribute 'translate'
```
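An alternative sketch that sidesteps DBAPI-level converters entirely: a `TypeDecorator` that coerces `pandas.Timestamp` to a plain `datetime` in `process_bind_param`, so pymysql only ever sees types it already knows how to escape. The type name is illustrative.

```
#!python
from sqlalchemy.types import TypeDecorator, DateTime


class PandasDateTime(TypeDecorator):
    """Coerce pandas.Timestamp bind values to plain datetime objects."""
    impl = DateTime

    def process_bind_param(self, value, dialect):
        # pandas.Timestamp exposes to_pydatetime(); plain datetimes and
        # None pass through unchanged.
        if value is not None and hasattr(value, 'to_pydatetime'):
            return value.to_pydatetime()
        return value
```

With this, the `when` column above would be declared as `Column(PandasDateTime)` and the custom `conv` dict would not be needed.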
From: Ye W. <iss...@bi...> - 2016-08-11 16:39:39
New issue 3770: Union query contains duplicated rows and calling .scalar() on it gets MultipleResultsFound exception
https://bitbucket.org/zzzeek/sqlalchemy/issues/3770/union-query-contains-duplicated-rows-and

Ye Wang:

I am running into two issues with Union today using SQLAlchemy with MySQL. Here is the iPython session output.

```
#!python
In [1]: from models.schedules import *
In [2]: from models.products import *
In [3]: from models.users import *

In [4]: query = db.session.query(Appointment)

In [5]: q_member = query.join(Schedule).join(User).filter(User.last_name == 'Smith')

In [6]: q_practitioner = query.join(Product).join(User).filter(User.last_name == 'Smith')

In [7]: q_member.count()
Out[7]: 135

In [8]: q_practitioner.count()
Out[8]: 277

In [9]: q_member.union(q_practitioner)
Out[9]: <sqlalchemy.orm.query.Query at 0x7f72307845f8>

In [10]: q_member.union(q_practitioner).count()
Out[10]: 388

In [11]: q_member.union(q_practitioner).scalar()
---------------------------------------------------------------------------
MultipleResultsFound                      Traceback (most recent call last)
<ipython-input-11-0456717d8d59> in <module>()
----> 1 q_member.union(q_practitioner).scalar()

/usr/local/lib/python3.4/site-packages/sqlalchemy/orm/query.py in scalar(self)
   2622         try:
-> 2623             ret = self.one()
   2624             if not isinstance(ret, tuple):
   2625                 return ret

/usr/local/lib/python3.4/site-packages/sqlalchemy/orm/query.py in one(self)
   2599         else:
   2600             raise orm_exc.MultipleResultsFound(
-> 2601                 "Multiple rows were found for one()")
   2602
   2603     def scalar(self):

MultipleResultsFound: Multiple rows were found for one()

In [12]: sqlalchemy.__version__
Out[12]: '0.9.11'
```

1) The .union() call returns duplicated rows, as you can see above: 135 + 277 > 388. This violates MySQL's SELECT UNION syntax, in which UNION DISTINCT is the default (only UNION ALL allows duplicate rows). http://dev.mysql.com/doc/refman/5.6/en/union.html

2) Calling .scalar() on a union'd query object gets a MultipleResultsFound exception.
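On the second point, a usage-note sketch rather than a resolution: `Query.scalar()` is defined in terms of `one()`, so any query returning more than one row raises MultipleResultsFound regardless of how the union deduplicates; `first()` or `limit(1)` is the usual way to take a single row. The query names reuse the session above.

```
#!python
# Take one Appointment from the union without requiring the result to be unique.
appt = q_member.union(q_practitioner).first()

# Or cap the union at a single row before fetching.
appt = q_member.union(q_practitioner).limit(1).first()
```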