sqlalchemy-tickets Mailing List for SQLAlchemy (Page 11)
Brought to you by: zzzeek
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2006 |     |     | 174 | 50  | 71  | 129 | 113 | 141 | 82  | 142 | 97  | 72  |
| 2007 | 159 | 213 | 156 | 151 | 58  | 166 | 296 | 198 | 89  | 133 | 150 | 122 |
| 2008 | 144 | 65  | 71  | 69  | 143 | 111 | 113 | 159 | 81  | 135 | 107 | 200 |
| 2009 | 168 | 109 | 141 | 128 | 119 | 132 | 136 | 154 | 151 | 181 | 223 | 169 |
| 2010 | 103 | 209 | 201 | 183 | 134 | 113 | 110 | 159 | 138 | 96  | 116 | 94  |
| 2011 | 97  | 188 | 157 | 158 | 118 | 102 | 137 | 113 | 104 | 108 | 91  | 162 |
| 2012 | 189 | 136 | 153 | 142 | 90  | 141 | 67  | 77  | 113 | 68  | 101 | 122 |
| 2013 | 60  | 77  | 77  | 129 | 189 | 155 | 106 | 123 | 53  | 142 | 78  | 102 |
| 2014 | 143 | 93  | 35  | 26  | 27  | 41  | 45  | 27  | 37  | 24  | 22  | 20  |
| 2015 | 17  | 15  | 34  | 55  | 33  | 31  | 27  | 17  | 22  | 26  | 27  | 22  |
| 2016 | 20  | 24  | 23  | 13  | 17  | 14  | 31  | 23  | 24  | 31  | 23  | 16  |
| 2017 | 24  | 20  | 27  | 24  | 28  | 18  | 18  | 23  | 30  | 17  | 12  | 12  |
| 2018 | 27  | 23  | 13  | 19  | 21  | 29  | 11  | 22  | 14  | 9   | 24  |     |
From: zaazbb <iss...@bi...> - 2017-10-17 10:36:17

New issue 4112: I needs a offline docs, where can i download??
https://bitbucket.org/zzzeek/sqlalchemy/issues/4112/i-needs-a-offline-docs-where-can-i

zaazbb: thank you.
From: Daniel G. <iss...@bi...> - 2017-10-14 11:30:48

New issue 4111: Wrong DDL for table creation in Postgresql ARRAY type
https://bitbucket.org/zzzeek/sqlalchemy/issues/4111/wrong-ddl-for-table-creation-in-postgresql

Daniel Gonzalez:

```
#!python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

_metadata = sa.MetaData()
t = sa.Table(
    'test_table2', _metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('city', sa.String),
    sa.Column('country', sa.String),
    sa.Column('properties', postgresql.JSONB),
    sa.Column('languages', postgresql.ARRAY(sa.String, dimensions=1))
)
print(str(CreateTable(t)))
```

Results in an incorrect ARRAY column (missing item type):

```
#!
CREATE TABLE test_table2 (
    id INTEGER NOT NULL,
    city VARCHAR,
    country VARCHAR,
    properties JSONB,
    languages ARRAY,
    PRIMARY KEY (id)
)
```

Dialects: pypostgresql and psycopg2
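A possible cross-check, not part of the original report: compiling the same `CreateTable` construct against the PostgreSQL dialect explicitly does render the array's item type, which suggests the problem is specific to the generic dialect used by `str()`. A minimal sketch:

```python
# Hypothetical cross-check (not from the report): compile against the
# PostgreSQL dialect instead of the generic one that str() uses.
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.schema import CreateTable

metadata = sa.MetaData()
t = sa.Table(
    'test_table2', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('languages', postgresql.ARRAY(sa.String, dimensions=1)),
)

# Compiling with an explicit PostgreSQL dialect renders the item type,
# e.g. "languages VARCHAR[]" rather than the bare "ARRAY".
ddl = str(CreateTable(t).compile(dialect=postgresql.dialect()))
print(ddl)
```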
From: Kai K. <iss...@bi...> - 2017-10-12 09:05:00

New issue 4110: "AttributeError: parent" when using association_proxy on polymorphic mapping
https://bitbucket.org/zzzeek/sqlalchemy/issues/4110/attributeerror-parent-when-using

Kai Kölger: MCVE:

```
#!python
from sqlalchemy import create_engine, Integer
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy.schema import Column, ForeignKey

Session = sessionmaker()
engine = create_engine('sqlite:///:memory:')
Session.configure(bind=engine)
Base = declarative_base()
_session = Session()


class A1(Base):
    __tablename__ = 'a1'
    id = Column(Integer, primary_key=True)
    obj_type = Column(Integer)
    __mapper_args__ = {
        'polymorphic_on': obj_type,
        'polymorphic_identity': 1,
    }


class ARef(Base):
    __tablename__ = 'aref'
    id = Column(Integer, primary_key=True)
    referer_id = Column(Integer, ForeignKey('a1.id'))
    refered_id = Column(Integer, ForeignKey('a1.id'))
    refered = relationship(
        'A1', uselist=False, primaryjoin='a1.id == aref.refered_id')


class A2(A1):
    __mapper_args__ = {'polymorphic_identity': 2}


class A3(A1):
    __mapper_args__ = {'polymorphic_identity': 3}
    _a2_ref = relationship('ARef')
    a2_list = association_proxy(_a2_ref, 'refered')


Base.metadata.create_all(engine)
```

The goal was to have a collection of references between some of the derived classes of A1. With SQLAlchemy 1.1.14 the MCVE produces this stack trace:

```
#!
Traceback (most recent call last):
  File "_experiment5.py", line 47, in <module>
    class A3(A1):
  File "_experiment5.py", line 51, in A3
    a2_list = association_proxy(_a2_ref, 'refered')
  File "/home/kk/ve/arc2/lib/python3.4/site-packages/sqlalchemy/ext/associationproxy.py", line 76, in association_proxy
    return AssociationProxy(target_collection, attr, **kw)
  File "/home/kk/ve/arc2/lib/python3.4/site-packages/sqlalchemy/ext/associationproxy.py", line 156, in __init__
    type(self).__name__, target_collection, id(self))
  File "/home/kk/ve/arc2/lib/python3.4/site-packages/sqlalchemy/orm/relationships.py", line 1445, in __str__
    return str(self.parent.class_.__name__) + "." + self.key
  File "/home/kk/ve/arc2/lib/python3.4/site-packages/sqlalchemy/util/langhelpers.py", line 850, in __getattr__
    return self._fallback_getattr(key)
  File "/home/kk/ve/arc2/lib/python3.4/site-packages/sqlalchemy/util/langhelpers.py", line 828, in _fallback_getattr
    raise AttributeError(key)
AttributeError: parent
```
From: Lars W. <iss...@bi...> - 2017-09-27 10:27:22

New issue 4093: .all_ condition doesn't act as expected, documentation is unclear
https://bitbucket.org/zzzeek/sqlalchemy/issues/4093/all_-condition-doesnt-act-as-expected

Lars Wikman: Unclear if this causes issues on other engines; my test case and issue are against Postgres. It does not match up with what I'd expect from PostgreSQL ALL: https://www.postgresql.org/docs/9.1/static/functions-subquery.html#FUNCTIONS-SUBQUERY-ALL

The code below throws a TypeError:

    Traceback (most recent call last):
      File "all_repro.py", line 32, in <module>
        posts = session.query(Post).filter(Post.title.all_(['test', 'test2'])).all()
    TypeError: all_() takes exactly 1 argument (2 given)

I would expect it to work similarly to .in_, where you can add a subquery, and to AND the comparisons rather than OR them. This part of the docs does show that it doesn't take any arguments, which fits with the error: http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.operators.ColumnOperators.all_ but it links to a part of the docs where it takes an argument; I think they should be the same? http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.expression.all_

```
#!python
# Issue against: SQLAlchemy==1.2.0b2
# Using psycopg2: psycopg2==2.7.3.1
# It does not match up with what I'd expect from PostgreSQL ALL:
# https://www.postgresql.org/docs/9.1/static/functions-subquery.html#FUNCTIONS-SUBQUERY-ALL
import sys

from sqlalchemy import create_engine, Column, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
connection_string = sys.argv[1]  # Just add it as an argument


class Post(Base):
    __tablename__ = 'post'
    uuid = Column(String(72), primary_key=True)
    title = Column(String(200), nullable=True)


engine = create_engine(connection_string)
Base.metadata.create_all(engine)
SessionClass = sessionmaker(bind=engine)
session = SessionClass()

posts = session.query(Post).filter(Post.title.all_(['test', 'test2'])).all()
# Throws a TypeError:
"""
Traceback (most recent call last):
  File "all_repro.py", line 32, in <module>
    posts = session.query(Post).filter(Post.title.all_(['test', 'test2'])).all()
TypeError: all_() takes exactly 1 argument (2 given)
"""
# I would expect it to work similarly to .in_ where you can add a subquery
# This part of the docs does show that it doesn't take any arguments:
# http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.operators.ColumnOperators.all_
# but it links to a part of the docs where it takes an argument; I think they should be the same?
# http://docs.sqlalchemy.org/en/latest/core/sqlelement.html#sqlalchemy.sql.expression.all_
```

I hope this fits with the information you need. I couldn't find the issue previously reported.
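As a plain-Python illustration of the semantics the reporter expects from PostgreSQL's ALL operator: `col = ALL (ARRAY[...])` is true only when the comparison holds for every array element, which mirrors the built-in `all()`. This is a stdlib-only sketch; `equals_all` is a hypothetical helper name, not a SQLAlchemy API:

```python
# Mirror of SQL's "title = ALL (ARRAY['test', 'test2'])": the comparison
# must hold for every element, exactly like Python's built-in all().
def equals_all(title, values):
    return all(title == v for v in values)

print(equals_all('test', ['test', 'test2']))  # False: 'test' != 'test2'
print(equals_all('test', ['test', 'test']))   # True
```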
From: Dave H. <iss...@bi...> - 2017-09-27 05:54:28

New issue 4092: Type problem with mssql.TIMESTAMP
https://bitbucket.org/zzzeek/sqlalchemy/issues/4092/type-problem-with-mssqltimestamp

Dave Hirschfeld: As detailed in [GH#382](https://github.com/zzzeek/sqlalchemy/pull/382) the `dialects.mssql.TIMESTAMP` class is a direct import of the `sa.sql.sqltypes.TIMESTAMP` class into the `mssql` namespace. The fact that the `mssql.TIMESTAMP` class is the same as the `sqltypes.TIMESTAMP` class causes problems in a 3rd party library, [odo](https://github.com/blaze/odo), which operates by mapping sqlalchemy types to the corresponding python/numpy types.

The reason this causes problems is that unlike the ANSI SQL `TIMESTAMP`, the `mssql.TIMESTAMP` type *doesn't* represent a datetime object but is instead just a binary type which cannot be interpreted or converted to a datetime object. This problem doesn't affect sqlalchemy itself because sqlalchemy emits the correct `TIMESTAMP` DDL and then simply passes through the results from the underlying driver.

With `type(sqltypes.TIMESTAMP) == type(mssql.TIMESTAMP)` being true there is no way for odo to distinguish the two types. Because one represents a Python datetime object and the other a byte string, they need to be able to be distinguished and handled differently in odo. A sufficient condition for odo to correctly handle MSSQL TIMESTAMP types is that the `mssql.TIMESTAMP` class does not inherit from the `sqltypes.TIMESTAMP` class - e.g.

```python
def test_mssql_timestamp_is_not_timestamp():
    """The MSSQL TIMESTAMP type does *not* represent a datetime value
    so should not inherit from the `sqltypes.TIMESTAMP` class

    :ref: https://msdn.microsoft.com/en-us/library/ms182776%28v=SQL.90%29.aspx
    """
    from sqlalchemy.sql import sqltypes
    from sqlalchemy.dialects import mssql
    assert not issubclass(mssql.TIMESTAMP, sqltypes.TIMESTAMP)
```
From: David L. <iss...@bi...> - 2017-09-26 04:26:26

New issue 4091: declared_attr.cascading unconditionally sets attribute
https://bitbucket.org/zzzeek/sqlalchemy/issues/4091/declared_attrcascading-unconditionally

David Lord: I'm not sure if this is a bug or just a result of unclear documentation. I would expect that a cascading declared attr would not overwrite an attribute that's manually defined on a class, but that's not the case. This makes it less useful for things like `__tablename__` (with #4090), where I would like the user to be able to set their own tablename to override the default.

```python3
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base, declared_attr, \
    has_inherited_table


class Base:
    @declared_attr.cascading
    def id(cls):
        if has_inherited_table(cls):
            type = sa.ForeignKey(cls.__mro__[1].id)
        else:
            type = sa.Integer

        return sa.Column(type, primary_key=True)


Base = declarative_base(cls=Base)


class User(Base):
    __tablename__ = 'user'


class User2(User):
    __tablename__ = 'user2'
    # due to cascading, the created name is 'id' instead of 'user_id'
    id = sa.Column('user_id', sa.ForeignKey(User.id), primary_key=True)


engine = sa.create_engine('sqlite://', echo=True)
Base.metadata.create_all(engine)
```
From: David L. <iss...@bi...> - 2017-09-26 04:09:50

New issue 4090: declared_attr.cascading doesn't work for __tablename__
https://bitbucket.org/zzzeek/sqlalchemy/issues/4090/declared_attrcascading-doesnt-work-for

David Lord: I noticed this while fixing tablename generation in Flask-SQLAlchemy. The reason I use the metaclass instead of `declared_attr` is because the `cascading` option doesn't work for `__tablename__`. Instead, it's treated like a normal declared attr, where once an intermediate class sets a value, that's used going forward. You also showed `declared_attr.cascading` for `__tablename__` in the docs for `__table_cls__`, which won't work as intended: https://gerrit.sqlalchemy.org/plugins/gitiles/zzzeek/sqlalchemy/+/refs/changes/27/527/2/doc/build/orm/extensions/declarative/api.rst#137

```python3
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base, declared_attr


class Base:
    @declared_attr.cascading
    def __tablename__(cls):
        return cls.__name__.lower()


Base = declarative_base(cls=Base)


class User(Base):
    id = sa.Column(sa.Integer, primary_key=True)


class User2(User):
    __tablename__ = 'user2'
    id = sa.Column(sa.ForeignKey(User.id), primary_key=True)


class User3(User2):
    id = sa.Column(sa.ForeignKey(User2.id), primary_key=True)


engine = sa.create_engine('sqlite://', echo=True)
Base.metadata.create_all(engine)
```

```pytb
Traceback (most recent call last):
  File "/home/david/Projects/flask-sqlalchemy/example.py", line 23, in <module>
    class User3(User2):
  File "/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.6/site-packages/sqlalchemy/ext/declarative/api.py", line 64, in __init__
    _as_declarative(cls, classname, cls.__dict__)
  File "/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.6/site-packages/sqlalchemy/ext/declarative/base.py", line 88, in _as_declarative
    _MapperConfig.setup_mapping(cls, classname, dict_)
  File "/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.6/site-packages/sqlalchemy/ext/declarative/base.py", line 103, in setup_mapping
    cfg_cls(cls_, classname, dict_)
  File "/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.6/site-packages/sqlalchemy/ext/declarative/base.py", line 131, in __init__
    self._setup_table()
  File "/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.6/site-packages/sqlalchemy/ext/declarative/base.py", line 395, in _setup_table
    **table_kw)
  File "/home/david/.virtualenvs/flask-sqlalchemy/lib/python3.6/site-packages/sqlalchemy/sql/schema.py", line 421, in __new__
    "existing Table object." % key)
sqlalchemy.exc.InvalidRequestError: Table 'user2' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
```
From: Brian H. <iss...@bi...> - 2017-09-25 15:04:19

New issue 4089: engine_url.URL does not support passwords with overridden str
https://bitbucket.org/zzzeek/sqlalchemy/issues/4089/engine_urlurl-does-not-support-passwords

Brian Heineman: The engine_url.URL class makes use of the ord() function, which cannot be overridden. This is preventing the direct use of overridden str values. Here is a test case that illustrates the issue:

```
#!python
from sqlalchemy.engine import url as engine_url


class SecurePassword(str):
    # any method that can be overridden by str to retrieve the value
    # would be acceptable
    def __str__(self):
        return 'secured_password'

# The ord() function in _rfc_1738_quote()
# https://github.com/zzzeek/sqlalchemy/blob/master/lib/sqlalchemy/engine/url.py#L247
# is preventing the direct use of overridden strings
# https://stackoverflow.com/questions/1893816/how-to-override-ord-behaivour-in-python-for-str-childs

if __name__ == '__main__':
    password = SecurePassword('password_key')
    db_url = {
        'drivername': 'mysql',
        'host': 'localhost',
        'port': '3306',
        'username': 'root',
        'password': password,
        'database': 'test',
    }
    url = engine_url.URL(**db_url)
    print(url)
    assert str(url) == 'mysql://root:secured_password@localhost:3306/test'
```
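The underlying Python behavior can be shown without SQLAlchemy at all: overriding `__str__` on a `str` subclass changes what `str()` returns, but indexing, iteration, and `ord()` still see the subclass's real character data, which is what URL-quoting code operates on. A stdlib-only sketch:

```python
class SecurePassword(str):
    # Overriding __str__ changes what str() returns...
    def __str__(self):
        return 'secured_password'

p = SecurePassword('password_key')
print(str(p))      # 'secured_password' -- the override is honored here
# ...but indexing, iteration, and ord() use the real underlying contents,
# so character-by-character quoting never sees the overridden value.
print(''.join(p))  # 'password_key'
print(ord(p[0]))   # 112, i.e. 'p'
```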
From: Michael B. <iss...@bi...> - 2017-09-23 16:19:54

New issue 4088: why do we not transfer events on non native enum?
https://bitbucket.org/zzzeek/sqlalchemy/issues/4088/why-do-we-not-transfer-events-on-non

Michael Bayer:
From: Rudolf C. <iss...@bi...> - 2017-09-23 09:06:35

New issue 4087: Column.copy() doesn't copy comment attribute
https://bitbucket.org/zzzeek/sqlalchemy/issues/4087/columncopy-doesnt-copy-comment-attribute

Rudolf Cardinal: In SQLAlchemy 1.2.0b2, the `Column.copy()` function doesn't copy the new `comment` attribute; it would need an additional `comment=self.comment` line in its call to `self._constructor()`. Reproduction:

    from sqlalchemy import Column, Integer
    a = Column("a", Integer, comment="hello")
    b = a.copy()
    a.comment  # 'hello'
    b.comment  # None
From: Dan S. <iss...@bi...> - 2017-09-21 19:50:39

New issue 4086: mssql TIMESTAMP/ROWVERSION columns is wrong type
https://bitbucket.org/zzzeek/sqlalchemy/issues/4086/mssql-timestamp-rowversion-columns-is

Dan Stovall: In MSSQL a ROWVERSION or TIMESTAMP column does not contain a date/time; instead it contains an 8-byte integer. Currently the column is typed as sqlalchemy.sql.sqltypes.TIMESTAMP, which is a subclass of the DateTime type. Running a query against a table with such a column returns a byte string value for those columns. The mssql dialect should probably have a specific ROWVERSION type, and it should be an integer, not a DateTime. Also, it would be nice if the values of those columns were returned as integers by using int.from_bytes to convert the byte string.
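The conversion the reporter suggests is a one-liner in Python. A sketch with a made-up rowversion value (MSSQL rowversions are 8-byte big-endian counters; the byte string below is hypothetical, not from the report):

```python
# An MSSQL rowversion is an 8-byte big-endian counter; int.from_bytes
# recovers the integer from the raw byte string the driver returns.
rowversion = b'\x00\x00\x00\x00\x00\x00\x07\xd1'  # hypothetical value
counter = int.from_bytes(rowversion, byteorder='big')
print(counter)  # 2001
```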
From: Isaac_Hernandez <iss...@bi...> - 2017-09-20 13:39:03

New issue 4085: Warning 1366 Incorrect string value
https://bitbucket.org/zzzeek/sqlalchemy/issues/4085/warning-1366-incorrect-string-value

Isaac_Hernandez: My code is the following:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy import Column, INTEGER, VARCHAR

    engine = create_engine('mysql://root:Vianney123@localhost:3306/test_sql_alchemy', echo=False)
    Session = sessionmaker(bind=engine)
    sesion_db = Session()
    Base = declarative_base()

    class Car(Base):
        __tablename__ = "Cars"
        Id = Column(INTEGER, primary_key=True)
        Name = Column(VARCHAR)
        Price = Column(INTEGER)

    book = Car(Name='Isaac', Price=5459)
    sesion_db.add(book)
    sesion_db.commit()

    rs = sesion_db.query(Car).all()
    for car in rs:
        print(car.Name, car.Price)

The SQL of the table is:

    CREATE TABLE `cars` (
      `id` int(10) NOT NULL AUTO_INCREMENT,
      `Name` varchar(255) NOT NULL,
      `Price` int(100) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=39 DEFAULT CHARSET=utf8

When I execute it, the warning is:

    C:\Users\isaac.hernandez\AppData\Local\Programs\Python\Python36\lib\site-packages\sqlalchemy\engine\default.py:504: Warning: (1366, "Incorrect string value: '\\xE9xic' for column 'VARIABLE_VALUE' at row 496")
      cursor.execute(statement, parameters)

I tried changing the encoding to utf8 in create_engine. I'm using SQLAlchemy 1.1.14.
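For context (editorial, not from the report): the `'\xE9xic'` fragment in the warning is consistent with a latin-1 encoded string such as 'México' reaching a utf8 column, since `é` is the single byte `0xE9` in latin-1 but two bytes in UTF-8. A stdlib-only sketch of the mismatch:

```python
# 'é' is one byte in latin-1 but two bytes in utf-8; a lone 0xE9 byte in
# a utf8 column is invalid, which is what MySQL's warning 1366 reports.
print('é'.encode('latin-1'))           # b'\xe9'
print('é'.encode('utf-8'))             # b'\xc3\xa9'
print(b'M\xe9xico'.decode('latin-1'))  # 'México'
```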
From: Michael B. <iss...@bi...> - 2017-09-19 20:37:28

New issue 4084: make_transient_to_pending expires deferred attrs making them load on refresh
https://bitbucket.org/zzzeek/sqlalchemy/issues/4084/make_transient_to_pending-expires-deferred

Michael Bayer:

```
#!python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, deferred
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.session import make_transient_to_detached

Base = declarative_base()


class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, primary_key=True)
    undeferred = Column(String)
    deferred_column = deferred(Column(String))


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
e.execute(
    "insert into my_table (id, undeferred, "
    "deferred_column) values (1, 'foo', 'bar')")

s = Session(e)


def expire_via_detached():
    item = MyTable(id=1)
    make_transient_to_detached(item)
    s.add(item)
    item.undeferred
    assert 'deferred_column' not in item.__dict__
    s.close()


def expire_normally():
    item = s.query(MyTable).first()
    s.expire(item)
    item.undeferred
    assert 'deferred_column' not in item.__dict__
    s.close()


def expire_explicit_attrs():
    item = s.query(MyTable).first()
    s.expire(item, ['undeferred', 'deferred_column'])
    item.undeferred
    assert 'deferred_column' in item.__dict__
    s.close()


expire_normally()
expire_explicit_attrs()
expire_via_detached()
```

this is due to state._expire(state, state.unloaded) in make_transient_to_pending(). When a deferred attribute is explicitly expired, it becomes part of the next full load.
From: Levon S. <iss...@bi...> - 2017-09-19 11:20:58

New issue 4083: Possible memory leak?
https://bitbucket.org/zzzeek/sqlalchemy/issues/4083/possible-memory-leak

Levon Saldamli: I have an application running with sqlalchemy which periodically imports some archives and creates database model objects, and the number of objects of type BindParameter keeps increasing with each import. But I can't find any references to these objects that also increase in number. I'm using pdb and objgraph to find references, and as far as I can see with those tools, only BinaryExpression objects refer to BindParameter objects, but there is also a cyclic reference from Comparator objects (also not increasing in number). Is this a memory leak? See the attached graph of the references. At this moment the object counts in memory are: Comparator=3, BinaryExpression=11, BindParameter=569.
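The kind of instance counting the reporter did with objgraph can also be done with the stdlib alone, which is handy for reproducing this sort of report. A sketch using `gc.get_objects()` with a stand-in class (the `BindParameter` below is a local dummy, not SQLAlchemy's):

```python
import gc

class BindParameter:  # stand-in dummy, not the SQLAlchemy class
    pass

def count_instances(cls):
    # gc.get_objects() returns every object the collector tracks;
    # filtering by exact type gives a live instance count.
    return sum(1 for o in gc.get_objects() if type(o) is cls)

leaked = [BindParameter() for _ in range(5)]
print(count_instances(BindParameter))  # 5 while the list holds them

del leaked
gc.collect()
print(count_instances(BindParameter))  # 0 once the references are gone
```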
From: David L. <iss...@bi...> - 2017-09-18 16:16:00

New issue 4082: marking subclass of declared_attr with cascading ignores subclass
https://bitbucket.org/zzzeek/sqlalchemy/issues/4082/marking-subclass-of-declared_attr-with

David Lord: Before a class is mapped, I want to check if it defines a primary key. If the column is defined in a `declared_attr`, I can't know if it's primary without evaluating the attr, which SQLAlchemy correctly warns about, since the model is not mapped yet. To solve this, I thought I'd require marking declared attrs that define a primary key with a `declared_pkey` subclass. However, this breaks when using `declared_pkey.cascading`, since internally `_stateful_declared_attr` creates a `declared_attr` instead of knowing about the subclass. I realize that `cascading` is not necessary in this contrived example, but it still demonstrates the issue. Running the following code should print 'pkey named id' but does not.

```python3
class declared_pkey(declared_attr):
    primary_key = True


class IDMixin:
    @declared_pkey.cascading
    def id(cls):
        return Column(Integer, primary_key=True)


class ExampleMeta(DeclarativeMeta):
    def __init__(cls, name, bases, d):
        for base in cls.__mro__:
            for key, value in base.__dict__.items():
                if (
                    isinstance(value, (declared_attr, sa.Column, InstrumentedAttribute))
                    and getattr(value, 'primary_key', False)
                ):
                    print(f'pkey named {key}')

        super(ExampleMeta, cls).__init__(name, bases, d)


Base = declarative_base(metaclass=ExampleMeta)


class User(IDMixin, Base):
    __tablename__ = 'user'
```

For now my solution has been to document that this doesn't work and tell people to do `id.primary_key = True` manually, but it would be nice if subclasses were preserved.

```python3
class IDMixin:
    @declared_attr.cascading
    def id(cls):
        return Column(Integer, primary_key=True)
    id.primary_key = True
```

To solve this, `_stateful_declared_attr` could be modified to take the class as well.

```python3
class _stateful_declared_attr(declared_attr):
    def __init__(self, cls, **kw):
        self.cls = cls
        self.kw = kw

    def _stateful(self, **kw):
        new_kw = self.kw.copy()
        new_kw.update(kw)
        return _stateful_declared_attr(self.cls, **new_kw)

    def __call__(self, fn):
        return self.cls(fn, **self.kw)


class declared_attr(...):
    ...
    def _stateful(cls, **kw):
        return _stateful_declared_attr(cls, **kw)
    ...
```
From: Charlie C. <iss...@bi...> - 2017-09-16 21:01:01

New issue 4081: Cannot Dynamically Set PostGres Schema at Runtime
https://bitbucket.org/zzzeek/sqlalchemy/issues/4081/cannot-dynamically-set-postgres-schema-at

Charlie Cliff: The Library is unable to Dynamically Set PostGres Schema at Runtime. This is pretty basic functionality.
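For reference (editorial, not part of the report): SQLAlchemy 1.1 added the `schema_translate_map` execution option, which is one way to choose a schema at execution time rather than at table-definition time. A sketch, assuming SQLAlchemy 1.4 or later, demonstrated against SQLite since its built-in schema is named `main`:

```python
import sqlalchemy as sa

metadata = sa.MetaData()
t = sa.Table('t', metadata, sa.Column('x', sa.Integer))  # schema=None

engine = sa.create_engine('sqlite://')
with engine.begin() as conn:
    # Map "no schema" tables onto the 'main' schema at execution time;
    # against PostgreSQL the target would be a real schema name instead.
    conn = conn.execution_options(schema_translate_map={None: 'main'})
    metadata.create_all(conn)
    conn.execute(t.insert().values(x=1))
    rows = conn.execute(t.select()).fetchall()

print(rows)  # [(1,)]
```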
From: Michael B. <iss...@bi...> - 2017-09-15 21:41:48

New issue 4080: warn on ambiguous naming convention case
https://bitbucket.org/zzzeek/sqlalchemy/issues/4080/warn-on-ambiguous-naming-convention-case

Michael Bayer: from https://bitbucket.org/zzzeek/alembic/issues/453, there are five possibilities when rendering the name of a constraint:

1. if the constraint has a name of `None`, then the naming convention is applied.
2. if the constraint has a name of `None`, and the naming convention includes the token `%(constraint_name)s`, then an error is raised: "Naming convention including %(constraint_name)s token requires that constraint is explicitly named."
3. if the constraint has a non-`None` name, and the convention does not include the `%(constraint_name)s` token, then it is assumed that the constraint was given an explicit name, and the convention is not applied.
4. if the constraint has a non-`None` name, and the convention includes the token `%(constraint_name)s`, then it is assumed that this name is part of fulfillment of the contract of the convention, and the naming convention is applied, using the existing name for `%(constraint_name)s`.
5. if the constraint has an `f()` name, then no naming convention is applied ever.

Case #3 is ambiguous. Propose a warning as:

```
#!diff
diff --git a/lib/sqlalchemy/sql/naming.py b/lib/sqlalchemy/sql/naming.py
index d93c916f6..1cf1e77ae 100644
--- a/lib/sqlalchemy/sql/naming.py
+++ b/lib/sqlalchemy/sql/naming.py
@@ -125,6 +125,12 @@ def _constraint_name_for_table(const, table):
         )
     elif isinstance(convention, _defer_none_name):
         return None
+    elif convention is not None and const.name is not None:
+        from sqlalchemy import util
+        util.warn(
+            "Ambiguous naming behavior for name '%s' on %r object; "
+            "naming convention not being applied but name does not make "
+            "use of f()" % (const.name, type(const)))


 @event.listens_for(Constraint, "after_parent_attach")
```
From: Darin G. <iss...@bi...> - 2017-09-15 20:05:08

New issue 4079: can't print a select statement that contains aggregate_order_by
https://bitbucket.org/zzzeek/sqlalchemy/issues/4079/cant-print-a-select-statement-that

Darin Gordon: during debugging, I tried to print a select statement that is using the aggregate_order_by function (postgresql), but that raises a rendering exception:

    sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.sql.compiler.StrSQLCompiler object at 0x7fec0069fda0> can't render element of type <class 'sqlalchemy.dialects.postgresql.ext.aggregate_order_by'>
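A possible workaround for debugging (editorial suggestion, not from the report): since `aggregate_order_by` is PostgreSQL-specific, compile the statement against the PostgreSQL dialect explicitly instead of relying on `str()`, which uses the generic string compiler. A sketch, assuming SQLAlchemy 1.4+ `select()` calling style:

```python
from sqlalchemy import column, func, select
from sqlalchemy.dialects import postgresql

expr = func.array_agg(
    postgresql.aggregate_order_by(column('a'), column('b').desc()))

# str(stmt) would use the generic compiler and raise
# UnsupportedCompilationError; compiling against the PostgreSQL
# dialect renders the ORDER BY inside the aggregate.
sql = str(select(expr).compile(dialect=postgresql.dialect()))
print(sql)
```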
From: Michael B. <iss...@bi...> - 2017-09-15 16:48:09

New issue 4078: column overlap warning in relationship needs tuning for sibling classes
https://bitbucket.org/zzzeek/sqlalchemy/issues/4078/column-overlap-warning-in-relationship

Michael Bayer:

```
#!python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.engine import create_engine
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy.schema import Column, ForeignKey, ForeignKeyConstraint
from sqlalchemy.types import Integer, String

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    a_members = relationship('AMember', backref='a')


class AMember(Base):
    __tablename__ = 'a_member'
    a_id = Column(Integer, ForeignKey('a.id'), primary_key=True)
    a_member_id = Column(Integer, primary_key=True)


class B(Base):
    __tablename__ = 'b'
    __mapper_args__ = {
        'polymorphic_on': 'type'
    }

    id = Column(Integer, primary_key=True)
    type = Column(String)
    a_id = Column(Integer, ForeignKey('a.id'), nullable=False)
    a_member_id = Column(Integer)

    __table_args__ = (
        ForeignKeyConstraint(
            ('a_id', 'a_member_id'),
            ('a_member.a_id', 'a_member.a_member_id')),
    )

    # if viewonly is removed, warning should emit:
    # "relationship 'B.a' will copy column a.id to column b.a_id, which
    # conflicts with relationship(s): 'BSub2.a_member' (copies a_member.a_id to b.a_id)."
    a = relationship('A', viewonly=True)

    # if uncommented, warning should emit:
    # relationship 'B.a_member' will copy column a_member.a_id to column
    # b.a_id, which conflicts with relationship(s): 'BSub1.a' (copies a.id to b.a_id)
    # a_member = relationship('AMember')

    # however, *no* warning should be emitted otherwise.


class BSub1(B):
    a = relationship('A')
    __mapper_args__ = {'polymorphic_identity': 'bsub1'}


class BSub2(B):
    a_member = relationship('AMember')

    @classmethod
    def __declare_first__(cls):
        cls.__table__.append_constraint(
            ForeignKeyConstraint(
                ('a_id', 'a_member_id'),
                ('a_member.a_id', 'a_member.a_member_id'))
        )

    __mapper_args__ = {'polymorphic_identity': 'bsub2'}


engine = create_engine('sqlite://', echo=True)
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine, autoflush=False)
session = Session()

bsub2 = BSub2()
am1 = AMember(a_member_id=1)
a1 = A(a_members=[am1])
bsub2.a_member = am1

bsub1 = BSub1()
a2 = A()
bsub1.a = a2

session.add_all([bsub1, bsub2, am1, a1, a2])
session.commit()

assert bsub1.a is a2
assert bsub2.a is a1

# meaningless, because BSub1 doesn't have a_member
bsub1.a_member = am1

# meaningless, because BSub2's ".a" is viewonly=True
bsub2.a = a2

session.commit()
assert bsub1.a is a2  # because bsub1.a_member is not a relationship
assert bsub2.a is a1  # because bsub2.a is viewonly=True

# everyone has a B.a relationship
print session.query(B, A).outerjoin(B.a).all()
```

patch:

```
#!diff
diff --git a/lib/sqlalchemy/orm/mapper.py b/lib/sqlalchemy/orm/mapper.py
index 1d172f71a..edfe45030 100644
--- a/lib/sqlalchemy/orm/mapper.py
+++ b/lib/sqlalchemy/orm/mapper.py
@@ -2450,6 +2450,15 @@ class Mapper(InspectionAttr):
             m = m.inherits
         return bool(m)

+    def shares_lineage(self, other):
+        """Return True if either this mapper or the given mapper inherit from the other.
+
+        This is a bidirectional form of "isa".
+
+        """
+
+        return self.isa(other) or other.isa(self)
+
     def iterate_to_root(self):
         m = self
         while m:
diff --git a/lib/sqlalchemy/orm/relationships.py b/lib/sqlalchemy/orm/relationships.py
index 94c0d6694..89ef641f8 100644
--- a/lib/sqlalchemy/orm/relationships.py
+++ b/lib/sqlalchemy/orm/relationships.py
@@ -2693,6 +2693,7 @@ class JoinCondition(object):
             prop_to_from = self._track_overlapping_sync_targets[to_]
             for pr, fr_ in prop_to_from.items():
                 if pr.mapper in mapperlib._mapper_registry and \
+                        self.prop.parent.shares_lineage(pr.parent) and \
                         fr_ is not from_ and \
                         pr not in self.prop._reverse_property:
                     other_props.append((pr, fr_))
```
From: Philip M. <iss...@bi...> - 2017-09-14 22:15:49

New issue 4077: Implementing next on ResultProxy
https://bitbucket.org/zzzeek/sqlalchemy/issues/4077/implementing-next-on-resultproxy

Philip Martin: Since ResultProxy references a DB API cursor, is there any reason why the next protocol is not implemented on ResultProxy? Given that it can iterate over query results in a loop, it seems to me that it would make sense from a Python language standpoint if it implemented __next__. I am not aware of how other DB API cursors operate, but using psycopg2 one could:

```python
import psycopg2

select_statement = 'select * from some_table'

conn = psycopg2.connect()
cursor = conn.cursor()
cursor.execute(select_statement)
next(cursor)
```

Whereas in SQLAlchemy, calling next on the result raises a TypeError:

```python
from sqlalchemy import create_engine

engine = create_engine()
conn = engine.connect()
results = conn.execute(select_query)
next(results)
```
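The distinction behind this report can be shown with plain Python: an object can be *iterable* (it defines `__iter__`, so for-loops work) without being an *iterator* (it defines `__next__`), and `next()` only works on the latter. A stdlib-only sketch with a hypothetical `Rows` class standing in for a result object:

```python
class Rows:
    """Iterable but not an iterator: defines __iter__, not __next__."""
    def __init__(self, rows):
        self._rows = rows

    def __iter__(self):
        return iter(self._rows)

rows = Rows([(1,), (2,)])
for r in rows:            # for-loops work via __iter__
    print(r)

try:
    next(rows)            # fails: Rows has no __next__
except TypeError as e:
    print(e)

it = iter(rows)           # the explicit workaround
print(next(it))           # (1,)
```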
From: Michael B. <iss...@bi...> - 2017-09-13 18:50:47

New issue 4076: oracle 8 / non-ansi joins needs to apply (+) to all occurences of the right hand table
https://bitbucket.org/zzzeek/sqlalchemy/issues/4076/oracle-8-non-ansi-joins-needs-to-apply-to

Michael Bayer: https://docs.oracle.com/database/121/SQLRF/queries006.htm#SQLRF52354

"apply the outer join operator (+) to all columns of B in the join condition in the WHERE clause"

```
#!python
from sqlalchemy import and_
from sqlalchemy import table, column
from sqlalchemy import select
from sqlalchemy.dialects import oracle

a = table('a', column('x'), column('y'))
b = table('b', column('q'), column('p'))

stmt = select([a]).select_from(
    a.outerjoin(b, and_(a.c.x > b.c.q, b.c.p == None)))
print stmt.compile(dialect=oracle.dialect(use_ansi=False))
```

```
#!
SELECT a.x, a.y
FROM a, b
WHERE a.x > b.q AND b.p IS NULL
```
From: Michael B. <iss...@bi...> - 2017-09-13 15:29:01
New issue 4075: multi-valued insert invokes default for same column many times https://bitbucket.org/zzzeek/sqlalchemy/issues/4075/multi-valued-insert-invokes-default-for

Michael Bayer:

this case can't be handled without adding additional context:

```
#!python
from sqlalchemy import *

m = MetaData()

def my_default(context):
    print "Called! What index are we? current params: %s" % context.current_parameters
    return 0

t = Table('t', m,
          Column('x', Integer),
          Column('y', Integer, default=my_default))

e = create_engine("mysql://scott:tiger@localhost/test")
m.create_all(e)

e.execute(t.insert().values([{"x": 5}, {"x": 6}, {"x": 7}]))
```

```
#!python
Called! What index are we? current params: {u'y_m2': None, u'y_m1': None, u'x_m0': 5, u'x_m1': 6, u'x_m2': 7, 'y': None}
Called! What index are we? current params: {u'y_m2': None, u'y_m1': None, u'x_m0': 5, u'x_m1': 6, u'x_m2': 7, 'y': 0}
Called! What index are we? current params: {u'y_m2': None, u'y_m1': 0, u'x_m0': 5, u'x_m1': 6, u'x_m2': 7, 'y': 0}
```

will add new method.
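A sketch of the behavior a position-aware context could provide (a hypothetical helper, not SQLAlchemy's actual API): each default invocation sees only the parameters of its own row in the multi-VALUES list, rather than the flattened `_m0`/`_m1`/`_m2` set shown above:

```python
# Hypothetical helper (not SQLAlchemy's actual API): invoke a column
# default once per row of a multi-VALUES insert, passing the default
# only that row's parameters so the "which index am I?" question
# never arises.

def run_defaults(rows, defaults):
    out = []
    for row in rows:
        row = dict(row)
        for col, fn in defaults.items():
            if col not in row:
                row[col] = fn(row)   # fn sees exactly one row
        out.append(row)
    return out

rows = [{"x": 5}, {"x": 6}, {"x": 7}]
print(run_defaults(rows, {"y": lambda params: params["x"] * 10}))
# [{'x': 5, 'y': 50}, {'x': 6, 'y': 60}, {'x': 7, 'y': 70}]
```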
From: Michael B. <iss...@bi...> - 2017-09-12 16:47:01
New issue 4074: on_conflict_do_update assumes the compiler statement is the insert https://bitbucket.org/zzzeek/sqlalchemy/issues/4074/on_conflict_do_update-assumes-the-compiler

Michael Bayer:

```
#!python
from sqlalchemy import table, column, select, literal_column
from sqlalchemy.dialects.postgresql import insert
from sqlalchemy.dialects import postgresql

foo = table("foo", column("foo_id"), column("foo_data"))
refreshed_foo = table("refreshed_foo", column("foo_id"), column("foo_data"))
new_data = table("new_data", column("foo_id"), column("foo_data"))

select_query = select([new_data])
ins = insert(foo).from_select(select_query.columns.keys(), select_query)
upsert = ins.on_conflict_do_update(
    index_elements=[foo.c.foo_id],
    set_={"foo_data": ins.excluded.foo_data}
).returning(literal_column('1').label('x')).cte("perform_inserts")

stmt = select([upsert.c.x])
print stmt.compile(dialect=postgresql.dialect())
```

output:

```
#!
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/sql/compiler.py", line 1412, in visit_cte
    self, asfrom=True, **kwargs
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/sql/visitors.py", line 81, in _compiler_dispatch
    return meth(self, **kw)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/sql/compiler.py", line 2104, in visit_insert
    insert_stmt._post_values_clause, **kw)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/sql/compiler.py", line 242, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/sql/visitors.py", line 81, in _compiler_dispatch
    return meth(self, **kw)
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/dialects/postgresql/base.py", line 1535, in visit_on_conflict_do_update
    cols = insert_statement.table.c
AttributeError: 'Select' object has no attribute 'table'
```

patch:

```
#!diff
diff --git a/lib/sqlalchemy/dialects/postgresql/base.py b/lib/sqlalchemy/dialects/postgresql/base.py
index b56ac5b10..821752870 100644
--- a/lib/sqlalchemy/dialects/postgresql/base.py
+++ b/lib/sqlalchemy/dialects/postgresql/base.py
@@ -1529,7 +1529,9 @@ class PGCompiler(compiler.SQLCompiler):
         set_parameters = dict(clause.update_values_to_set)
 
         # create a list of column assignment clauses as tuples
-        cols = self.statement.table.c
+
+        insert_statement = self.stack[-1]['selectable']
+        cols = insert_statement.table.c
         for c in cols:
             col_key = c.key
             if col_key in set_parameters:
```
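The patch consults the compiler's statement stack rather than `self.statement`. A toy model (hypothetical classes, not SQLAlchemy internals) of why that matters once an INSERT is nested inside a SELECT via a CTE:

```python
# Toy model: when statements nest, the outermost statement held on the
# compiler is a SELECT, so inner compilation steps must consult the
# statement currently being visited (top of the stack) instead.
# (Hypothetical classes, not SQLAlchemy internals.)

class TinyInsert(object):
    table = "foo"

    def compile_with(self, compiler):
        # Using compiler.statement here would hand back the outer
        # SELECT, which has no .table -- the reported AttributeError.
        current = compiler.stack[-1]["selectable"]
        return "INSERT INTO %s ON CONFLICT DO UPDATE ..." % current.table


class TinySelect(object):
    def __init__(self, inner):
        self.inner = inner

    def compile_with(self, compiler):
        return "WITH cte AS (%s) SELECT ..." % compiler.visit(self.inner)


class TinyCompiler(object):
    def __init__(self, statement):
        self.statement = statement  # outermost statement only
        self.stack = []

    def visit(self, stmt):
        self.stack.append({"selectable": stmt})
        try:
            return stmt.compile_with(self)
        finally:
            self.stack.pop()


stmt = TinySelect(TinyInsert())
print(TinyCompiler(stmt).visit(stmt))
```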
From: Michael B. <iss...@bi...> - 2017-09-10 03:32:16
New issue 4073: regression in bulk update, evaluator disallows literal_column https://bitbucket.org/zzzeek/sqlalchemy/issues/4073/regression-in-bulk-update-evaluator

Michael Bayer:

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    x = Column(Integer)
    y = Column(Integer)

e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

s = Session(e)
s.add(A(x=1, y=2))
s.commit()

s.query(A).update({A.x: literal_column('y')})
```

result:

```
#!
Traceback (most recent call last):
  File "test.py", line 22, in <module>
    s.query(A).update({A.x: literal_column('y')})
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/query.py", line 3366, in update
    update_op.exec_()
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/persistence.py", line 1324, in exec_
    self._do_pre_synchronize()
  File "/home/classic/dev/sqlalchemy/lib/sqlalchemy/orm/persistence.py", line 1399, in _do_pre_synchronize
    "Could not evaluate current criteria in Python. "
sqlalchemy.exc.InvalidRequestError: Could not evaluate current criteria in Python. Specify 'fetch' or False for the synchronize_session parameter.
```

This affects OpenStack Nova and the soft_delete() method in oslo.db, which uses this pattern.
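A toy model (hypothetical classes, not SQLAlchemy internals) of why the Python-side evaluator rejects a SQL fragment such as `literal_column('y')`: the `'evaluate'` synchronization strategy assigns values to in-memory objects, and a SQL expression has no in-Python value to assign:

```python
# Toy model of the ORM's Python-side evaluator (hypothetical classes,
# not SQLAlchemy internals): plain literals can be assigned to
# in-memory objects, but a SQL fragment cannot be evaluated in Python.

class SQLExpression(object):
    """Stand-in for a construct like literal_column('y')."""
    def __init__(self, text):
        self.text = text


class UnevaluatableError(Exception):
    pass


def evaluate_update(objects, values):
    """Apply an UPDATE's SET clause to in-memory objects, as a
    synchronize_session='evaluate' strategy would."""
    for obj in objects:
        for attr, value in values.items():
            if isinstance(value, SQLExpression):
                raise UnevaluatableError(
                    "Could not evaluate current criteria in Python. "
                    "Specify 'fetch' or False for synchronize_session.")
            setattr(obj, attr, value)


class Row(object):
    pass


row = Row()
evaluate_update([row], {"x": 1})    # a plain literal is fine
print(row.x)                        # 1
try:
    evaluate_update([row], {"x": SQLExpression("y")})
except UnevaluatableError as err:
    print("raised: %s" % err)
```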
From: Jack Z. <iss...@bi...> - 2017-09-09 00:30:22
New issue 4072: `mysql.dml.Insert.values` shadows the generative function `sql.expression.Insert.values` https://bitbucket.org/zzzeek/sqlalchemy/issues/4072/mysqldmlinsertvalues-shadows-the

Jack Zhou:

When using the `ON DUPLICATE KEY UPDATE` support for MySQL, the `.values()` generative method is not available, contrary to the claims made by the [documentation](http://docs.sqlalchemy.org/en/latest/dialects/mysql.html#mysql-insert-on-duplicate-key-update).

```
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "kronos/__main__.py", line 97, in <module>
    cli()
  File "/home/ashu/Documents/kronos/venv/lib/python3.5/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/home/ashu/Documents/kronos/venv/lib/python3.5/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "kronos/__main__.py", line 31, in invoke
    return super(CLI, self).invoke(ctx)
  File "/home/ashu/Documents/kronos/venv/lib/python3.5/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/ashu/Documents/kronos/venv/lib/python3.5/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ashu/Documents/kronos/venv/lib/python3.5/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/home/ashu/Documents/kronos/scripts/gilmore_from_scratch.py", line 46, in gilmore_from_scratch
    stmt = insert(GilmoreItem.__table__).values(**to_insert)
TypeError: 'ImmutableColumnCollection' object is not callable
```

It appears to be shadowed by the `.values` property intended to support the `VALUES(...)` construct. Perhaps the column collection could be named `values_` instead?

The workaround is to pass `values` to the `insert()` constructor:

```
stmt = insert(GilmoreItem.__table__, values=to_insert)
```
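The shadowing itself is plain Python attribute-lookup behavior; a minimal illustration (hypothetical classes, not the actual SQLAlchemy hierarchy):

```python
# Hypothetical classes showing the shadowing: a property named
# `values` on a subclass hides the inherited `values()` method, so
# attempting to call it raises TypeError.

class GenerativeInsert(object):
    def values(self, **kw):
        return "generative values()"


class DialectInsert(GenerativeInsert):
    @property
    def values(self):
        # returns a column collection, not a callable
        return {"foo_data": "column"}


stmt = DialectInsert()
try:
    stmt.values(foo_data=5)
except TypeError as err:
    print("TypeError: %s" % err)   # 'dict' object is not callable
```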