sqlalchemy-tickets Mailing List for SQLAlchemy (Page 21)
From: Michael B. <iss...@bi...> - 2016-11-05 21:20:07
|
New issue 3844: passive_deletes='all' is not complete https://bitbucket.org/zzzeek/sqlalchemy/issues/3844/passive_deletes-all-is-not-complete

Michael Bayer: Probably has to wait until 1.2 because this is a behavior change, but passive_deletes is not skipping setting the FK to NULL in all cases:

```
#!python
from sqlalchemy import Column, Integer, String, ForeignKey, Boolean
from sqlalchemy import create_engine, and_
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlalchemy.orm import sessionmaker
from sqlalchemy import event
from sqlalchemy import inspect

Base = declarative_base()
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    address = relationship(
        "Address", uselist=False, passive_deletes="all")


class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email = Column(String, nullable=False)
    deleted = Column(Boolean, nullable=False, default=False)
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship("User")


Base.metadata.create_all(engine)

sess = Session()
a1 = Address(email='foo')
u = User(id=1, address=a1)
sess.add_all([u, a1])
sess.commit()

u.address = Address(email='bar')
sess.commit()

assert a1.user_id == 1, a1.user_id
```

because we have a sync to None not checking it; needs this:

```
#!diff
diff --git a/lib/sqlalchemy/orm/dependency.py b/lib/sqlalchemy/orm/dependency.py
index a3e5b12..f2193b6 100644
--- a/lib/sqlalchemy/orm/dependency.py
+++ b/lib/sqlalchemy/orm/dependency.py
@@ -553,7 +553,8 @@ class OneToManyDP(DependencyProcessor):
             for child in history.deleted:
                 if not self.cascade.delete_orphan and \
-                        not self.hasparent(child):
+                        not self.hasparent(child) and \
+                        not self.passive_deletes == 'all':
                     self._synchronize(
                         state, child, None, True, uowcommit, False)
```

Check whether this bug is a dupe, because this seems kind of obvious; also check for any other synchronize(.. None) that is being allowed. Review the docs for passive_deletes to make sure I'm not misunderstanding the intent. Making this critical so it remains noticed.
From: Valanto K. <iss...@bi...> - 2016-11-03 14:24:51
|
New issue 3843: `in_` operator does not support a bound parameter https://bitbucket.org/zzzeek/sqlalchemy/issues/3843/in_-operator-does-not-supports-bound

Valanto Kousetti: This issue is similar to #3574. Using `column.in_(bindparam('name'))` (to pass a variable list of values) throws an exception. Unfortunately for me, `func.any` doesn't solve it because I'm using SQLite, which does not support ANY. How can I overcome the issue?
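A minimal sketch of the pattern being described, for reference (the table, column, and parameter names are illustrative, not from the report): a plain `bindparam()` passed to `in_()` stands in for a whole list of values, which is what fails on the versions discussed. For what it's worth, later SQLAlchemy releases added an `expanding=True` option to `bindparam()` aimed at exactly this use case.

```
#!python
# Hypothetical reproduction of the reported pattern; not taken from the ticket.
from sqlalchemy import Column, Integer, MetaData, Table, bindparam, create_engine, select

metadata = MetaData()
things = Table('things', metadata, Column('id', Integer, primary_key=True))

engine = create_engine('sqlite://')
metadata.create_all(engine)

# A single bound parameter standing in for a variable-length list of values.
stmt = select([things]).where(things.c.id.in_(bindparam('ids')))

# As reported, this pattern throws an exception rather than expanding
# the list into the IN clause.
engine.execute(stmt, ids=[1, 2, 3])
```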
From: Adam M. <iss...@bi...> - 2016-11-03 06:12:15
|
New issue 3842: _warn_pk_with_no_anticipated_value - 'ColumnSet' object has no attribute 'columns' https://bitbucket.org/zzzeek/sqlalchemy/issues/3842/_warn_pk_with_no_anticipated_value

Adam Mills: When inserting into a table without specifying the PK, and with no auto-generation specified in SQLAlchemy, a warning is generated. The warning code itself then throws "'ColumnSet' object has no attribute 'columns'".

```
#!python
test = Table('test', metadata,
             Column('id', Integer, nullable=False, primary_key=True),
             Column('notpk', String(10), nullable=True)
             )
```

```
#!python
Db.session().execute(Database.test.insert().values(notpk="adam"))

  File "local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1046, in _execute_clauseelement
    if not self.schema_for_object.is_default else None)
  File "<string>", line 1, in <lambda>
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 436, in compile
    return self._compiler(dialect, bind=bind, **kw)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 442, in _compiler
    return dialect.statement_compiler(dialect, self, **kw)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py", line 435, in __init__
    Compiled.__init__(self, dialect, statement, **kwargs)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py", line 216, in __init__
    self.string = self.process(self.statement, **compile_kwargs)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py", line 242, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 81, in _compiler_dispatch
    return meth(self, **kw)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py", line 1968, in visit_insert
    self, insert_stmt, crud.ISINSERT, **kw)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/crud.py", line 57, in _setup_crud_params
    return _get_crud_params(compiler, stmt, **kw)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/crud.py", line 137, in _get_crud_params
    _col_bind_name, check_columns, values, kw)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/crud.py", line 284, in _scan_cols
    _append_param_insert_pk(compiler, stmt, c, values, kw)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/crud.py", line 467, in _append_param_insert_pk
    _warn_pk_with_no_anticipated_value(c)
  File "local/lib/python2.7/site-packages/sqlalchemy/sql/crud.py", line 684, in _warn_pk_with_no_anticipated_value
    if len(c.table.primary_key.columns) > 1:
AttributeError: 'ColumnSet' object has no attribute 'columns'
```
From: Christian <iss...@bi...> - 2016-10-28 17:34:26
|
New issue 3841: Enum validation causes problems with values not in Enum https://bitbucket.org/zzzeek/sqlalchemy/issues/3841/enum-validation-causes-problems-with

Christian: After upgrading from 1.0 to 1.1.3, I get this error:

```
  File "/home/xxx/.venv/xxx/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py", line 1317, in _object_value_for_elem
    '"%s" is not among the defined enum values' % elem)
LookupError: "" is not among the defined enum values
```

MySQL allows inserting data that is not in the Enum.

Test Case

```
#!python
import unittest

from sqlalchemy import (Column, Integer, Enum)
from sqlalchemy.ext.declarative import declarative_base
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = ("mysql://root:develop@localhost/test123")
db = SQLAlchemy(app)
Base = declarative_base()


class Test(db.Model):
    __tablename__ = 'test'
    id = Column(Integer, primary_key=True, nullable=False)
    default = Column(Enum('VALUE1', 'VALUE2'))


db.drop_all()
db.create_all()


class TestSelfEnum(unittest.TestCase):
    def test_enum(self):
        db.session.add(Test(default="VALUE1"))
        db.engine.execute('INSERT INTO test(`default`) VALUES("")')
        db.session.commit()
        result = db.session.query(Test).all()
```
From: Frazer M. <iss...@bi...> - 2016-10-27 13:14:59
|
New issue 3840: Autocommit for REFRESH MATERIALIZED VIEW https://bitbucket.org/zzzeek/sqlalchemy/issues/3840/autocommit-for-refresh-materialized-view

Frazer McLean: I would expect that `REFRESH MATERIALIZED VIEW` on PostgreSQL is treated similarly to UPDATE/INSERT/CREATE/DELETE/DROP/ALTER. Failing MCVE:

```
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, text

metadata = MetaData()
foo = Table('foo', metadata, Column('id', Integer, primary_key=True))

engine = create_engine('postgres://scott:tiger@localhost/test')
foo.create(bind=engine)
engine.execute('''
    CREATE MATERIALIZED VIEW IF NOT EXISTS bar AS (
        SELECT id FROM foo
    )
''')

engine.execute(foo.insert())
assert engine.scalar('SELECT count(*) FROM bar') == 0

engine.execute('REFRESH MATERIALIZED VIEW bar')
# engine.execute(text('REFRESH MATERIALIZED VIEW bar').execution_options(autocommit=True))

assert engine.scalar('SELECT count(*) FROM bar') == 1
```
From: Adrian <iss...@bi...> - 2016-10-27 09:21:50
|
New issue 3839: "FlushError: Over 100 subsequent flushes" when deleting same object twice in 1.1 https://bitbucket.org/zzzeek/sqlalchemy/issues/3839/flusherror-over-100-subsequent-flushes

Adrian:

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)


e = create_engine('sqlite:///', echo=False)
Base.metadata.create_all(e)

s = Session(e)
s.add(Foo())
s.commit()

foo = s.query(Foo).first()
s.delete(foo)
s.flush()
s.delete(foo)
s.flush()
s.commit()
```

With SQLAlchemy 1.0:

```
[adrian@blackhole:/tmp/test]> pip install -q 'sqlalchemy<1.1'
[adrian@blackhole:/tmp/test]> python satest.py
/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py:925: SAWarning: DELETE statement on table 'foo' expected to delete 1 row(s); 0 were matched. Please set confirm_deleted_rows=False within the mapper configuration to prevent this warning. (table.description, expected, rows_matched)
```

With SQLAlchemy 1.1:

```
[adrian@blackhole:/tmp/test]> pip install -Uq sqlalchemy
[adrian@blackhole:/tmp/test]> python satest.py
Traceback (most recent call last):
  File "satest.py", line 26, in <module>
    s.commit()
  File "/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 874, in commit
    self.transaction.commit()
  File "/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 461, in commit
    self._prepare_impl()
  File "/tmp/test/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 444, in _prepare_impl
    "Over 100 subsequent flushes have occurred within "
sqlalchemy.orm.exc.FlushError: Over 100 subsequent flushes have occurred within session.commit() - is an after_flush() hook creating new objects?
```

It looks like the second delete adds the object to `s.deleted` and it stays there forever. It also fails only during `s.commit()`, not during a normal `s.flush()`. While it can be considered a bug in my application that I end up deleting the same object twice, I don't think the 1.1 behavior is correct - if it should indeed be an error case instead of just a warning as in 1.0, the error message should at least be clearer.
From: Giorgos P. <iss...@bi...> - 2016-10-25 01:14:28
|
New issue 3838: Class with automap should override all columns? https://bitbucket.org/zzzeek/sqlalchemy/issues/3838/class-with-automap-should-override-all

Giorgos Papadrosou: Hi, I have this example:

```
#!python
class CarrierGroup(MyBase):
    __tablename__ = 'carrier_group'
    group_id = Column(Integer, Sequence('carrier_group_group_id_seq'),
                      primary_key=True)
    # group_name = Column(String)   # commented but exists in DB


engine = create_engine('postgresql://...')
metadata = MetaData(bind=engine)
MyBase = automap_base(metadata=metadata)
metadata.reflect(only=exclude_tables)
MyBase.prepare(name_for_scalar_relationship=name_for_scalar_relationship)

Session = scoped_session(sessionmaker())
session = Session()
session.add(CarrierGroup(group_name='testgroup'))
```

This code fails in the last statement with "TypeError: 'group_name' is an invalid keyword argument for CarrierGroup". Why? If I want to create a Class, do I need to explicitly write all the fields? Thank you
From: 200 B. G. <iss...@bi...> - 2016-10-23 23:07:27
|
New issue 3837: Feature Request: Extensible automap classes https://bitbucket.org/zzzeek/sqlalchemy/issues/3837/feature-request-extensible-automap-classes

200 Billion Galaxies: Currently, automapped classes cannot be extended and used without error. Example:

```
#!python
Base = automap_base()
engine = create_engine(DATABASE_URL)
Base.prepare(engine, reflect=True)

ItemModelBase = Base.classes.item


class DumbMixin:
    def some_helpful_method(self):
        print(self.id)


class Item(DumbMixin, ItemModelBase):
    def lifetime(self):
        return self.updated - self.created


item = Item()
item.name = 'Just a test item.'

######################################################################
# Error stacktrace:

Traceback (most recent call last):
  File "sqlalchemy/sql/elements.py", line 676, in __getattr__
    return getattr(self.comparator, key)
AttributeError: 'Comparator' object has no attribute '_supports_population'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "sqlalchemy/orm/attributes.py", line 185, in __getattr__
    return getattr(self.comparator, key)
  File "sqlalchemy/util/langhelpers.py", line 840, in __getattr__
    return self._fallback_getattr(key)
  File "sqlalchemy/orm/properties.py", line 267, in _fallback_getattr
    return getattr(self.__clause_element__(), key)
  File "sqlalchemy/sql/elements.py", line 682, in __getattr__
    key)
AttributeError: Neither 'AnnotatedColumn' object nor 'Comparator' object has an attribute '_supports_population'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "sqlalchemy/orm/attributes.py", line 234, in __get__
    if self._supports_population and self.key in dict_:
  File "sqlalchemy/orm/attributes.py", line 193, in __getattr__
    key)
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with item.name has an attribute '_supports_population'
```

It would be great if we could have our cake and eat it too. A healthy mix of automapped and declarative bases means I can use `alembic` to manage my migrations, and just have the classes update without having to maintain essentially two schema definitions. If this is already doable and I'm just missing something, I'd love to be enlightened.
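For reference, automap's documented route to adding behavior is to declare the class explicitly on the automap base *before* calling `prepare()`, rather than subclassing a generated class afterwards. A rough sketch under that assumption (the `item` table and its `created`/`updated` columns mirror the example above; the URL is a placeholder):

```
#!python
# Sketch only: assumes an existing 'item' table with 'created' and 'updated' columns.
from sqlalchemy import create_engine
from sqlalchemy.ext.automap import automap_base

Base = automap_base()


class Item(Base):
    # Declared up front, so reflection fills in the columns during prepare()
    # and helper methods live directly on the mapped class.
    __tablename__ = 'item'

    def lifetime(self):
        return self.updated - self.created


engine = create_engine('sqlite:///example.db')  # placeholder URL
Base.prepare(engine, reflect=True)
```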
From: Michael B. <iss...@bi...> - 2016-10-21 12:52:49
|
New issue 3836: potential 1.1 regression re: query https://bitbucket.org/zzzeek/sqlalchemy/issues/3836/potential-11-regression-re-query

Michael Bayer: looks like an illegal attribute access:

```
#!python
Traceback (most recent call last):
  File "barbican/tests/cmd/test_db_cleanup.py", line 32, in project_wrapper
    test_result = test_func(self, *args, **kwargs)
  File "barbican/tests/cmd/test_db_cleanup.py", line 291, in test_soft_deleting_expired_secrets
    clean.soft_delete_expired_secrets(current_time)
  File "barbican/model/clean.py", line 293, in soft_delete_expired_secrets
    update_count = _soft_delete_expired_secrets(threshold_date)
  File "barbican/model/clean.py", line 198, in _soft_delete_expired_secrets
    synchronize_session='fetch')
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 3286, in update
    update_op.exec_()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 1158, in exec_
    self._do_post_synchronize()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 1404, in _do_post_synchronize
    target_mapper = self.query._mapper_zero()
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 294, in _mapper_zero
    return self._entities[0].mapper
AttributeError: '_ColumnEntity' object has no attribute 'mapper'
```

This is OpenStack barbican; need to figure out what they are doing. http://paste.debian.net/884819/
From: Michael B. <iss...@bi...> - 2016-10-20 21:10:42
|
New issue 3835: invalid autoincrement setting can be reflected, causing failures https://bitbucket.org/zzzeek/sqlalchemy/issues/3835/invalid-autoincrement-setting-can-be

Michael Bayer:

```
#!python
from sqlalchemy import *

m = MetaData()
t = Table(
    't', m,
    Column('x', Integer, primary_key=True)
)

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
m.drop_all(e)
m.create_all(e)

e.execute("alter table t alter column x type varchar(50)")

m2 = MetaData()
t2 = Table('t', m2, autoload_with=e)

from sqlalchemy.schema import CreateTable
print CreateTable(t2)
```

output:

sqlalchemy.exc.ArgumentError: Column type VARCHAR(50) on column 't.x' is not compatible with autoincrement=True
From: Lukas S. <iss...@bi...> - 2016-10-20 20:17:25
|
New issue 3834: Regression: Enum.enums returning list instead of tuple https://bitbucket.org/zzzeek/sqlalchemy/issues/3834/regression-enumenums-returning-list

Lukas Siemon: `Enum.enums` is now returning a list instead of a tuple. Was this by design (I couldn't find it in the documentation)? IMO it makes more sense to keep it immutable as a tuple.
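A tiny illustration of the attribute being asked about (the enum values and name are illustrative): `Enum.enums` on a constructed type is what changed container type between releases.

```
#!python
# Illustrative check; the version boundary is the one described in the report.
from sqlalchemy import Enum

e = Enum('one', 'two', 'three', name='my_enum')

# Reported as a list on the 1.1 series, where earlier releases returned a tuple.
print(type(e.enums))
```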
From: Lukas S. <iss...@bi...> - 2016-10-20 18:47:30
|
New issue 3833: Enum validation causes problems with column_property https://bitbucket.org/zzzeek/sqlalchemy/issues/3833/enum-validation-causes-problems-with

Lukas Siemon: I'm currently trying to upgrade from 1.0.12 to 1.1.2. There is a problem where the enum validation gets propagated to the column_property and then fails. Any advice on how to work around this for now? Adding `validate_strings=False` does not help unfortunately.

Error:

```
LookupError: "#1" is not among the defined enum values
```

Test Case

```
#!python
import unittest

from sqlalchemy import (Column, Integer)
from sqlalchemy.dialects.postgresql import ENUM
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import (column_property)
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = ("sqlite://")
db = SQLAlchemy(app)
Base = declarative_base()


class Venue(db.Model):
    __tablename__ = 'venue'
    id = Column(Integer, primary_key=True, nullable=False)
    default = Column(
        ENUM('1', name='default_enum'),
        index=True,
        nullable=True
    )
    dashed_default = column_property("#" + default)


db.drop_all()
db.create_all()


class TestSelfJoin(unittest.TestCase):
    def test_self_join(self):
        db.session.add(Venue(default="1"))
        db.session.commit()
        venue = Venue.query.one()
        print venue.default
```
From: Adrian <iss...@bi...> - 2016-10-20 12:26:53
|
New issue 3832: _set_table not called in 1.1 on custom type based on TypeDecorator & SchemaType https://bitbucket.org/zzzeek/sqlalchemy/issues/3832/_set_table-not-called-in-11-on-custom-type

Adrian: We are using a custom class handling int-based enums (since 1.0 didn't have python-native enum support) with an autogenerated CHECK constraint. In 1.1 this constraint is not added anymore. In the [changelog entry](http://docs.sqlalchemy.org/en/rel_1_1/changelog/migration_11.html#typedecorator-now-works-with-enum-boolean-schema-types-automatically) regarding TypeDecorator I couldn't see any reason why _set_table would not be called anymore.

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.sql.sqltypes import SchemaType
from enum import Enum

Base = declarative_base()


class PyIntEnum(TypeDecorator, SchemaType):
    impl = SmallInteger

    def __init__(self, enum=None, exclude_values=None):
        self.enum = enum
        self.exclude_values = set(exclude_values or ())
        TypeDecorator.__init__(self)
        SchemaType.__init__(self)

    def process_bind_param(self, value, dialect):
        return int(value) if value is not None else None

    def process_result_value(self, value, dialect):
        pass  # not relevant for DDL

    def _set_table(self, column, table):
        e = CheckConstraint(
            type_coerce(column, self).in_(x.value for x in self.enum
                                          if x not in self.exclude_values),
            'valid_enum_{}'.format(column.name))
        e.info['alembic_dont_render'] = True
        assert e.table is table


class MyEnum(int, Enum):
    a = 1
    b = 2
    c = 3


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    enum = Column(PyIntEnum(MyEnum))


e = create_engine('postgresql:///test', echo=True)
Base.metadata.drop_all(e)
Base.metadata.create_all(e)
```

SQL emitted in 1.1:

```sql
CREATE TABLE foo (
    id SERIAL NOT NULL,
    enum SMALLINT,
    PRIMARY KEY (id)
)
```

SQL emitted in 1.0:

```sql
CREATE TABLE foo (
    id SERIAL NOT NULL,
    enum SMALLINT,
    PRIMARY KEY (id),
    CONSTRAINT valid_enum_enum CHECK (enum IN (1, 2, 3))
)
```
From: Lukas S. <iss...@bi...> - 2016-10-20 00:18:42
|
New issue 3831: Self referencing relationship can not determine join condition https://bitbucket.org/zzzeek/sqlalchemy/issues/3831/self-referencing-relationship-can-not

Lukas Siemon: I'm trying to follow the documentation as described here: http://docs.sqlalchemy.org/en/latest/orm/join_conditions.html#non-relational-comparisons-materialized-path

This works as expected, however when I try to use (any) `func` function this fails. I noticed this feature is experimental, but is it possible to fix it for this use case?

Error: `ArgumentError: Relationship Venue.parents could not determine any unambiguous local/remote column pairs based on join condition and remote_side arguments. Consider using the remote() annotation to accurately mark those elements of the join condition that are on the remote side of the relationship.`

The issue is also documented here: http://stackoverflow.com/questions/38006116/how-can-i-create-a-many-to-many-relationship-with-sqlalchemy-using-a-sql-functio

Minimal Test case:

```
#!python
import unittest

from sqlalchemy import (Column, Integer, String, func)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import (relationship, joinedload)
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

# -- create all the database models
from sqlalchemy.orm import foreign
from sqlalchemy.orm import remote

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = ("sqlite://")
db = SQLAlchemy(app)
Base = declarative_base()


class Venue(db.Model):
    __tablename__ = 'venue'
    id = Column(Integer, primary_key=True, nullable=False)
    name = Column(String(254))

    parents = relationship(
        "Venue",
        # doesn't work
        primaryjoin=func.substring(remote(foreign(name)), name),
        # works
        # primaryjoin=remote(foreign(name)).like('%' + name + '%'),
        viewonly=True,
        order_by=remote(foreign(name))
    )


db.drop_all()
db.create_all()


class TestContainsEager(unittest.TestCase):
    def test_contains_eager(self):
        query = Venue.query.options(joinedload(Venue.parents))
        # contains_eager does not pick up the alias from the inner query
        import sqlalchemy.dialects.postgresql as postgresql
        print query.statement.compile(dialect=postgresql.dialect())
```

Responsible: zzzeek
From: Adrian <iss...@bi...> - 2016-10-19 14:48:17
|
New issue 3830: Column(JSON, default=None) defaults to SQL NULL in 1.1 instead of JSON null https://bitbucket.org/zzzeek/sqlalchemy/issues/3830/column-json-default-none-defaults-to-sql

Adrian:

```python
from datetime import datetime, date

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.dialects.postgresql import JSON

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    json = Column(JSON, nullable=False, default=None)


e = create_engine('postgresql:///test', echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add(Foo())
s.flush()
```

This fails with the NOT NULL being violated. Using `default=JSON.NULL` works but unless I misunderstand [the docs](http://docs.sqlalchemy.org/en/rel_1_1/changelog/migration_11.html#json-null-is-inserted-as-expected-with-orm-operations-regardless-of-column-default-present) I shouldn't get a SQL NULL there.
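The workaround the reporter mentions, restated as a sketch (this only echoes what the report says works; the model mirrors the example above):

```python
from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    # Per the report, JSON.NULL as the Python-side default produces a JSON
    # null on INSERT rather than SQL NULL, so the NOT NULL constraint passes.
    json = Column(JSON, nullable=False, default=JSON.NULL)
```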
From: chendezhi <iss...@bi...> - 2016-10-18 11:06:54
|
New issue 3829: SAWarning: Unknown schema content: u" KEY `index_rating_count` https://bitbucket.org/zzzeek/sqlalchemy/issues/3829/sawarning-unknown-schema-content-u-key chendezhi: /usr/local/lib/python2.7/site-packages/sqlalchemy/dialects/mysql/reflection.py:56: SAWarning: Unknown schema content: u" KEY `index_rating_count` |
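For context, this warning appears to come from the MySQL dialect's reflection code (the `sqlalchemy/dialects/mysql/reflection.py` path in the message) hitting a `KEY` line in `SHOW CREATE TABLE` output that its parser does not recognize. A hypothetical path that exercises that code (database, table, and index names are illustrative):

```
#!python
# Hypothetical: reflecting an existing MySQL table; the SAWarning is emitted
# while the dialect parses the table's SHOW CREATE TABLE output.
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine('mysql://user:pass@localhost/appdb')  # placeholder URL
metadata = MetaData()

movies = Table('movies', metadata, autoload=True, autoload_with=engine)
```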
From: Basti <iss...@bi...> - 2016-10-17 11:57:51
|
New issue 3828: Postgres multi-value insert fails with enum-type columns and on_conflict statement https://bitbucket.org/zzzeek/sqlalchemy/issues/3828/postgres-multi-value-insert-fails-with

Basti: When creating a (Postgres-specific) insert statement for a table with an enum-typed column, the compilation of the statement fails if the enum column is used in the `on_conflict` where-condition.

Minimal Example:

```
#!python
import sqlalchemy as _sa
from sqlalchemy.dialects import postgresql as t
from st.db.schema import PrimaryKey as PrimaryKey

test_table = _sa.Table(
    u'test_table',
    _sa.MetaData(),
    _sa.Column('id', t.INTEGER, nullable=False, autoincrement=False),
    _sa.Column('state', t.ENUM(u'1st', u'2nd', u'3rd'), nullable=False),
)
PrimaryKey(
    test_table.c.id,
    name=u'test_table_pkey',
)

PG_INSERT = t.insert

values = [dict(id=100, state=u'1st'),
          dict(id=200, state=u'1st')]

stmt = PG_INSERT(test_table, values)
stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(id=1000),
    where=(stmt.excluded.state == '2nd')
)

str(stmt.compile(dialect=t.dialect()))
# Fails with 'sqlalchemy.exc.CompileError: Bind parameter '%(4378430864 state)s'
# conflicts with unique bind parameter of the same name'
```

If instead of a list of values, a single dict is used in the `insert()` call, everything is compiled correctly. Also, multi-values work if the where-condition is replaced with any other (non-enum) column.
From: Grzegorz Ś. <iss...@bi...> - 2016-10-16 21:29:15
|
New issue 3827: Inability to Create table with Mixin based Enum column on PostgreSQL https://bitbucket.org/zzzeek/sqlalchemy/issues/3827/inability-to-create-table-with-mixin-based Grzegorz Śliwiński: A long while ago, I've created a mixin for a user model containing only password related columns and functionality. https://github.com/fizyk/pyramid_fullauth/blob/master/pyramid_fullauth/models/mixins/password.py And till SQLAlchemy 1.0.15 it worked fine on PostgreSQL, unfotunately, since 1.1.0, SQLAlchemy no longer creates ENUM in PostgreSQL that are used only in Mixins. I've just checked and it workes perfectly fine with 1.0.15 and in 1.1.1 with Enum in the Model, but not when it's placed in Mixin that's used in model later on. Example build: https://travis-ci.org/fizyk/pyramid_fullauth/jobs/166536077 And engine with echo=True ``` tests/views/test_activation.py::test_account_activation[postgresql] 2016-10-16 23:27:10,846 INFO sqlalchemy.engine.base.Engine select version() 2016-10-16 23:27:10,846 INFO sqlalchemy.engine.base.Engine {} 2016-10-16 23:27:10,848 INFO sqlalchemy.engine.base.Engine select current_schema() 2016-10-16 23:27:10,848 INFO sqlalchemy.engine.base.Engine {} 2016-10-16 23:27:10,848 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1 2016-10-16 23:27:10,849 INFO sqlalchemy.engine.base.Engine {} 2016-10-16 23:27:10,849 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1 2016-10-16 23:27:10,849 INFO sqlalchemy.engine.base.Engine {} 2016-10-16 23:27:10,850 INFO sqlalchemy.engine.base.Engine show standard_conforming_strings 2016-10-16 23:27:10,850 INFO sqlalchemy.engine.base.Engine {} 2016-10-16 23:27:10,851 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,851 INFO sqlalchemy.engine.base.Engine {'name': u'user_authentication_provider'} 2016-10-16 23:27:10,852 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,852 INFO sqlalchemy.engine.base.Engine {'name': u'users_groups'} 2016-10-16 23:27:10,852 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,852 INFO sqlalchemy.engine.base.Engine {'name': u'users'} 2016-10-16 23:27:10,853 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,853 INFO sqlalchemy.engine.base.Engine {'name': u'groups'} 2016-10-16 23:27:10,856 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,856 INFO sqlalchemy.engine.base.Engine {'name': u'user_authentication_provider'} 2016-10-16 23:27:10,857 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,857 INFO sqlalchemy.engine.base.Engine {'name': u'users_groups'} 2016-10-16 23:27:10,858 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on 
n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,858 INFO sqlalchemy.engine.base.Engine {'name': u'users'} 2016-10-16 23:27:10,859 INFO sqlalchemy.engine.base.Engine select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s 2016-10-16 23:27:10,859 INFO sqlalchemy.engine.base.Engine {'name': u'groups'} 2016-10-16 23:27:10,860 INFO sqlalchemy.engine.base.Engine SELECT relname FROM pg_class c join pg_namespace n on n.oid=c.relnamespace where relkind='S' and n.nspname=current_schema() and relname=%(name)s 2016-10-16 23:27:10,860 INFO sqlalchemy.engine.base.Engine {'name': u'users_sq'} 2016-10-16 23:27:10,861 INFO sqlalchemy.engine.base.Engine CREATE SEQUENCE users_sq 2016-10-16 23:27:10,861 INFO sqlalchemy.engine.base.Engine {} 2016-10-16 23:27:10,862 INFO sqlalchemy.engine.base.Engine COMMIT 2016-10-16 23:27:10,864 INFO sqlalchemy.engine.base.Engine CREATE TABLE users ( password VARCHAR(40) NOT NULL, hash_algorithm hash_algorithms_enum NOT NULL, salt VARCHAR(50) NOT NULL, reset_key VARCHAR(255), email VARCHAR(254) NOT NULL, new_email VARCHAR(254), email_change_key VARCHAR(255), id INTEGER NOT NULL, username VARCHAR(32), firstname VARCHAR(100), lastname VARCHAR(100), activate_key VARCHAR(255), address_ip VARCHAR(15) NOT NULL, registered_at TIMESTAMP WITHOUT TIME ZONE NOT NULL, logged_at TIMESTAMP WITHOUT TIME ZONE NOT NULL, activated_at TIMESTAMP WITHOUT TIME ZONE, deactivated_at TIMESTAMP WITHOUT TIME ZONE, deleted_at TIMESTAMP WITHOUT TIME ZONE, is_admin BOOLEAN NOT NULL, PRIMARY KEY (id), UNIQUE (reset_key), UNIQUE (email), UNIQUE (new_email), UNIQUE (email_change_key), UNIQUE (username), UNIQUE (activate_key) ) 2016-10-16 23:27:10,864 INFO sqlalchemy.engine.base.Engine {} 2016-10-16 23:27:10,864 INFO sqlalchemy.engine.base.Engine ROLLBACK ``` |
From: metalstorm <iss...@bi...> - 2016-10-15 18:08:24
|
New issue 3826: Session commit changes the order of a relationship's elements https://bitbucket.org/zzzeek/sqlalchemy/issues/3826/session-commit-changes-the-order-of-a

metalstorm: If you have a model that contains a basic relationship like so:

```
#!python
from sqlalchemy import Column, Integer, String, Boolean, ForeignKey, Table, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship  # import added; implied by the example below

Base = declarative_base()


class Thing(Base):
    parts = relationship('Part', backref='thing')


class Part(Base):
    name = Column(String, primary_key=True, default='')
```

And then you do:

```
#!python
thing = Thing()
part1 = Part(name="Part1")
part2 = Part(name="Part2")

# Note we do part2 then part1
thing.parts.append(part2)
thing.parts.append(part1)

print thing.parts  # [Part2, Part1]

session.commit()

print thing.parts  # [Part1, Part2]
```

It seems like the session orders the result based on primary key (by default), which I guess is expected. But if the session already contains the instances, then should it change the order?
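A sketch of the usual way to make the collection order deterministic, under the assumption (not stated in the report) that an explicit ordering column is acceptable: once `commit()` expires the objects, the collection is reloaded from the database, so only an `order_by` on the relationship pins the order.

```
#!python
# Illustrative only: 'position' and the foreign key are hypothetical additions.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


class Thing(Base):
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    # An explicit order_by keeps the collection stable across the reload
    # that follows session.commit().
    parts = relationship('Part', backref='thing', order_by='Part.position')


class Part(Base):
    __tablename__ = 'part'
    name = Column(String, primary_key=True, default='')
    position = Column(Integer)
    thing_id = Column(Integer, ForeignKey('thing.id'))
```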
From: ddzialak <iss...@bi...> - 2016-10-15 16:10:32
|
New issue 3825: Exception during closing sessions if other thread just closed connection in parallel https://bitbucket.org/zzzeek/sqlalchemy/issues/3825/exception-during-closing-sessions-if-other ddzialak: I've got system with many threads, in case of shutdown and even got some trouble with joining other thread I want to close all connections and it may happen that I will receive an error: {code} self._session_factory.close_all() File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 57, in close_all sess.close() File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 1125, in close self._close_impl(invalidate=False) File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 1164, in _close_impl transaction.close(invalidate) File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 542, in close connection.close() File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 867, in close del self.__connection _Connection__connection {code} The other thread that was ongoing and logged an exception (that seems to be accurate/correct): {code} Traceback (most recent call last): File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 346, in connection return self.__connection AttributeError: 'Connection' object has no attribute '_Connection__connection' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/pool.py", line 687, in _finalize_fairy fairy._reset(pool) File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/pool.py", line 827, in _reset self._reset_agent.rollback() File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1615, in rollback self._do_rollback() File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1653, in _do_rollback self.connection._rollback_impl() File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 706, in _rollback_impl self.connection._reset_agent is self.__transaction: File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 351, in connection self._handle_dbapi_exception(e, None, None, None, None) File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1396, in _handle_dbapi_exception util.reraise(*exc_info) File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 186, in reraise raise value File "/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 349, in connection return self._revalidate_connection() File 
"/var/lib/jenkins/shiningpanda/jobs/5156f59f/virtualenvs/d41d8cd9/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 429, in _revalidate_connection raise exc.ResourceClosedError("This Connection is closed") sqlalchemy.exc.ResourceClosedError: This Connection is closed {code} I think that the first one exception should not happen. Is it possible to guarantee that all connections would be closed? |
From: Michael B. <iss...@bi...> - 2016-10-14 20:54:44
|
New issue 3824: switch off of other-side joinedload from m2o will fail if source object has options already https://bitbucket.org/zzzeek/sqlalchemy/issues/3824/switch-off-of-other-side-joinedload-from Michael Bayer: this is from #1495, adding a bogus MapperOption that propagates to a lazyload causes the path arithmetic to fail: ``` #!diff --- a/test/orm/test_eager_relations.py +++ b/test/orm/test_eager_relations.py @@ -898,6 +898,44 @@ class EagerTest(_fixtures.FixtureTest, testing.AssertsCompiledSQL): {'param_1': 8}) ) + def test_useget_cancels_eager_propagated_present(self): + """test that a one to many lazyload cancels the unnecessary + eager many-to-one join on the other side, even when a propagated + option is present.""" + + users, Address, addresses, User = ( + self.tables.users, + self.classes.Address, + self.tables.addresses, + self.classes.User) + + mapper(User, users) + mapper(Address, addresses, properties={ + 'user': relationship(User, lazy='joined', backref='addresses') + }) + + from sqlalchemy.orm.interfaces import MapperOption + + class MyBogusOption(MapperOption): + propagate_to_loaders = True + + sess = create_session() + u1 = sess.query(User).options(MyBogusOption()).filter(User.id == 8).one() + + + def go(): + eq_(u1.addresses[0].user, u1) + self.assert_sql_execution( + testing.db, go, + CompiledSQL( + "SELECT addresses.id AS addresses_id, addresses.user_id AS " + "addresses_user_id, addresses.email_address AS " + "addresses_email_address FROM addresses WHERE :param_1 = " + "addresses.user_id", + {'param_1': 8}) + ) + + def test_manytoone_limit(self): """test that the subquery wrapping only occurs with limit/offset and m2m or o2m joins present.""" ``` gerrit forthcoming |
From: Hong M. <iss...@bi...> - 2016-10-14 03:45:18
|
New issue 3823: Column default raises AttributeError when it takes a callable without __module__ attribute https://bitbucket.org/zzzeek/sqlalchemy/issues/3823/column-default-raises-attributeerror-when

Hong Minhee: Since SQLAlchemy 1.1, `Column` raises `AttributeError` when its `default` option takes a callable having no `__module__` attribute. The following example code had worked until SQLAlchemy 1.1:

```python
created_at = Column(
    DateTime(timezone=True),
    default=functools.partial(datetime.datetime.now, datetime.timezone.utc)
)
```

The following traceback is from SQLAlchemy 1.1.0 (and I checked the same error on 1.1.1 as well):

```pytb
Traceback (most recent call last):
  File "/.../ads/ad.py", line 112, in <module>
    class AdRevision(Base):
  File "/.../ads/ad.py", line 120, in AdRevision
    default=functools.partial(datetime.datetime.now, datetime.timezone.utc)
  File "/.../.env/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 1210, in __init__
    args.append(ColumnDefault(self.default))
  File "/.../.env/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 2016, in __init__
    arg = self._maybe_wrap_callable(arg)
  File "/.../.env/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 2043, in _maybe_wrap_callable
    return util.wrap_callable(lambda ctx: fn(), fn)
  File "/.../.env/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 1401, in wrap_callable
    _f.__module__ = fn.__module__
AttributeError: 'functools.partial' object has no attribute '__module__'
```
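A sketch of a likely workaround while this stands (my suggestion, not from the report): wrap the partial in a lambda or a plain function, both of which do carry a `__module__` attribute.

```python
# Workaround sketch: a lambda (or def) has __module__, so Column can wrap it.
import datetime

from sqlalchemy import Column, DateTime

created_at = Column(
    DateTime(timezone=True),
    default=lambda: datetime.datetime.now(datetime.timezone.utc)
)
```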
From: Marcin B. <iss...@bi...> - 2016-10-13 10:35:01
|
New issue 3822: load_only on joined parent row causes issuing queries for already fetched columns https://bitbucket.org/zzzeek/sqlalchemy/issues/3822/load_only-on-joined-parent-row-causes

Marcin Barczyński: Consider the following example:

```
#!python
from sqlalchemy.engine import create_engine
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import aliased, contains_eager, relationship
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    parent_id = Column(ForeignKey('node.id'))
    parent = relationship('Node', remote_side=[id])
    name = Column(String)


engine = create_engine("postgresql://test:test@localhost/test", echo=True)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
session_class = sessionmaker(bind=engine)

session1 = session_class()
node1 = Node(id=1, parent=None, name='str1')
node2 = Node(id=2, parent=None, name='str2')
node3 = Node(id=3, parent=node2, name='str3')
node1.parent = node2
session1.add(node1)
session1.add(node2)
session1.add(node3)
session1.commit()

session2 = session_class()
ParentNode = aliased(Node)
query = session2.query(Node).\
    outerjoin(ParentNode, Node.parent).\
    options(contains_eager(Node.parent, alias=ParentNode)
            .load_only(ParentNode.id, ParentNode.parent_id))
for row in query.order_by(Node.id):
    print row.id, row.name
```

Here are the queries emitted by SQLAlchemy:

```
2016-10-13 12:14:38,778 INFO sqlalchemy.engine.base.Engine SELECT node_1.id AS node_1_id, node_1.parent_id AS node_1_parent_id, node.id AS node_id, node.parent_id AS node_parent_id, node.name AS node_name
FROM node LEFT OUTER JOIN node AS node_1 ON node_1.id = node.parent_id ORDER BY node.id
2016-10-13 12:14:38,778 INFO sqlalchemy.engine.base.Engine {}
2016-10-13 12:14:38,780 INFO sqlalchemy.engine.base.Engine SELECT node.name AS node_name
FROM node
WHERE node.id = %(param_1)s
2016-10-13 12:14:38,780 INFO sqlalchemy.engine.base.Engine {'param_1': 2}
```

Despite the fact that the first query fetches all necessary columns, an additional query is issued for `name` of the second node. Note that `order_by` is crucial here - without it everything works as expected.
From: Joerg R. <iss...@bi...> - 2016-10-12 21:29:17
|
New issue 3821: alias of cte not compiling correctly https://bitbucket.org/zzzeek/sqlalchemy/issues/3821/alias-of-cte-not-compiling-correctly

Joerg Rittinger: The compiling of statements with aliases of CTEs seems kind of broken, e.g. the following example:

```
#!python
from sqlalchemy import *
from sqlalchemy.sql.compiler import SQLCompiler
from sqlalchemy.engine.default import DefaultDialect

tables = Table('my_table', MetaData(), Column('id', Integer))

cte = select(tables.columns).cte("cte")
alias1 = cte.alias('a1')
alias2 = cte.alias('a2')

query = select(
    columns=[alias1.c.id, alias2.c.id],
    from_obj=alias1.join(alias2, onclause=alias1.c.id == alias2.c.id))

print(query)
# WITH cte AS
#     (SELECT my_table.id AS id
#     FROM my_table)
# SELECT a1.id, a2.id
# FROM cte AS a1 JOIN cte AS a2 ON a1.id = a2.id


def raw_sql(query):
    dialect = DefaultDialect()
    compiler = SQLCompiler(dialect, query)
    return compiler.process(query)


print(raw_sql(query))
# WITH cte AS
#     (SELECT my_table.id AS id
#     FROM my_table)
# SELECT a1.id, a2.id
# FROM a1 JOIN a2 ON a1.id = a2.id
```
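For comparison, a sketch of the more common compilation entry point (my addition, not part of the report): `ClauseElement.compile()` builds and runs a fresh compiler for the full statement, which is also the path `str(query)` takes, whereas the `raw_sql()` helper above calls `process()` on a compiler that was already constructed against the same statement.

```
#!python
# Sketch of compiling via ClauseElement.compile() with the default dialect.
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.engine.default import DefaultDialect

my_table = Table('my_table', MetaData(), Column('id', Integer))
cte = select([my_table.c.id]).cte('cte')
a1, a2 = cte.alias('a1'), cte.alias('a2')

query = select([a1.c.id, a2.c.id]).select_from(
    a1.join(a2, a1.c.id == a2.c.id))

print(query.compile(dialect=DefaultDialect()))
```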
From: quintuitive <iss...@bi...> - 2016-10-07 03:03:42
|
New issue 3820: Connection fails against SQL Server 2016 https://bitbucket.org/zzzeek/sqlalchemy/issues/3820/connection-fails-against-sql-server-2016

quintuitive: The error is:

sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('ODBC data type -150 is not supported. Cannot read column .', 'HY000') [SQL: "SELECT SERVERPROPERTY('ProductVersion')"]

The repro is:

engine = sa.create_engine('mssql+pyodbc://defdsn')
conn = engine.connect()

defdsn is a DSN on localhost, with trusted authentication and a database. I tried various alternatives - everything fails. Connecting via pyodbc directly works, but it seems that SQLAlchemy is reading some additional information and that fails. It's not only me - the issue is also reported on Stack Overflow: http://stackoverflow.com/questions/39904693/error-odbc-data-type-150-is-not-supported-when-connecting-sqlalchemy-to-mssql
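The reported repro, restated as a self-contained sketch ('defdsn' is the reporter's DSN placeholder): the error surfaces on first connect, in the `SELECT SERVERPROPERTY('ProductVersion')` statement quoted in the error message.

```
#!python
# Sketch of the reported reproduction; 'defdsn' is a locally defined ODBC DSN.
import sqlalchemy as sa

engine = sa.create_engine('mssql+pyodbc://defdsn')

# Per the error text, SELECT SERVERPROPERTY('ProductVersion') is issued on
# connect, and that is where 'ODBC data type -150 is not supported' is raised.
conn = engine.connect()
```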