sqlalchemy-tickets Mailing List for SQLAlchemy (Page 3)
Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec
---|---|---|---|---|---|---|---|---|---|---|---|---
2006 | | | 174 | 50 | 71 | 129 | 113 | 141 | 82 | 142 | 97 | 72
2007 | 159 | 213 | 156 | 151 | 58 | 166 | 296 | 198 | 89 | 133 | 150 | 122
2008 | 144 | 65 | 71 | 69 | 143 | 111 | 113 | 159 | 81 | 135 | 107 | 200
2009 | 168 | 109 | 141 | 128 | 119 | 132 | 136 | 154 | 151 | 181 | 223 | 169
2010 | 103 | 209 | 201 | 183 | 134 | 113 | 110 | 159 | 138 | 96 | 116 | 94
2011 | 97 | 188 | 157 | 158 | 118 | 102 | 137 | 113 | 104 | 108 | 91 | 162
2012 | 189 | 136 | 153 | 142 | 90 | 141 | 67 | 77 | 113 | 68 | 101 | 122
2013 | 60 | 77 | 77 | 129 | 189 | 155 | 106 | 123 | 53 | 142 | 78 | 102
2014 | 143 | 93 | 35 | 26 | 27 | 41 | 45 | 27 | 37 | 24 | 22 | 20
2015 | 17 | 15 | 34 | 55 | 33 | 31 | 27 | 17 | 22 | 26 | 27 | 22
2016 | 20 | 24 | 23 | 13 | 17 | 14 | 31 | 23 | 24 | 31 | 23 | 16
2017 | 24 | 20 | 27 | 24 | 28 | 18 | 18 | 23 | 30 | 17 | 12 | 12
2018 | 27 | 23 | 13 | 19 | 21 | 29 | 11 | 22 | 14 | 9 | 24 |
From: Dmytro S. <iss...@bi...> - 2018-08-23 11:17:37
New issue 4326: Inherited declared_attr hybrid_property loses extension_type
https://bitbucket.org/zzzeek/sqlalchemy/issues/4326/inherited-declared_attr-hybrid_property

Dmytro Starosud:

This is using SQLAlchemy 1.2.11. Please consider the following code snippet.

``` #!python
from sqlalchemy import inspect, Column, String
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.ext.hybrid import hybrid_property


class Mixin:
    @hybrid_property
    def hp1(cls):
        return 42

    @declared_attr
    def hp2(cls):
        @hybrid_property
        def hp2(self):
            return 42
        return hp2


class Base(declarative_base(), Mixin):
    __tablename__ = 'test'
    id = Column(String, primary_key=True)


class Derived(Base):
    pass


print(inspect(Base).mapper.all_orm_descriptors['hp1'].extension_type)
print(inspect(Base).mapper.all_orm_descriptors['hp2'].extension_type)
print(inspect(Derived).mapper.all_orm_descriptors['hp1'].extension_type)
print(inspect(Derived).mapper.all_orm_descriptors['hp2'].extension_type)

# prints
# symbol('HYBRID_PROPERTY')
# symbol('HYBRID_PROPERTY')
# symbol('HYBRID_PROPERTY')
# symbol('NOT_EXTENSION')
```

Should I use another way to determine the type of an ORM descriptor? Thanks a lot in advance!
From: Robert B. <iss...@bi...> - 2018-08-22 20:47:23
New issue 4325: Parameters in pgsql DISTINCT ON () not getting bound
https://bitbucket.org/zzzeek/sqlalchemy/issues/4325/parameters-in-pgsql-distinct-on-not

Robert Bertorelli:

Hello, I've run into the following issue in SQLAlchemy 1.2.11. I looked through the already-extant bugs and didn't find this. In the PostgreSQL dialect, parameters within a DISTINCT ON clause are not bound. Here's a simple example using Python 3.6:

``` #!python
>>> import sqlalchemy as sa
>>> from sqlalchemy.dialects import postgresql as pg
>>> col = sa.column('name')
>>> col2 = col == 'foo'
>>> sel = sa.select([col2]).distinct(col2)
>>> print(sel.compile(dialect=pg.dialect(), compile_kwargs={"literal_binds": True}))
SELECT DISTINCT ON (name = %(name_1)s) name = 'foo' AS anon_1
```

So, as you can see, the parameter in the SELECT statement (name = 'foo') did get bound properly, but the same column in the DISTINCT ON did not. I believe the issue can be found here: https://github.com/zzzeek/sqlalchemy/blob/master/lib/sqlalchemy/dialects/postgresql/base.py#L1528 It seems that `**kw` needs to get passed into self.process(). Thanks.
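For context on the reporter's suggestion: compile-time options such as `literal_binds` only reach nested expressions when a compilation routine forwards `**kw` into `compiler.process()`. The following is a toy, self-contained illustration of that principle using a hypothetical custom construct; it is not the actual PostgreSQL compiler code.

``` #!python
# Hypothetical illustration only -- "wrapped" is an invented construct,
# unrelated to the real DISTINCT ON rendering in the PG dialect.
import sqlalchemy as sa
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import expression


class wrapped(expression.FunctionElement):
    name = 'wrapped'


@compiles(wrapped)
def _compile_wrapped(element, compiler, **kw):
    # forwarding **kw is what lets options like literal_binds apply to the
    # inner expression; dropping it reproduces the kind of symptom reported
    return "WRAPPED(%s)" % compiler.process(element.clauses, **kw)


expr = wrapped(sa.column('name') == 'foo')
print(expr.compile(compile_kwargs={"literal_binds": True}))
```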
From: Dmitry N. <iss...@bi...> - 2018-08-22 09:22:53
New issue 4324: array_agg ignores keyword type_ hint for postgres dialect, failing for a nested array
https://bitbucket.org/zzzeek/sqlalchemy/issues/4324/array_agg-ignores-keyword-type_-hint-for

Dmitry Nikonov:

Hello. This issue is quite similar to #4107 but concerns the [postgresql dialect version](https://bitbucket.org/zzzeek/sqlalchemy/src/master/lib/sqlalchemy/dialects/postgresql/ext.py#lines-209) of `array_agg`. I suppose it should be fixed in the same manner. Code to reproduce (Python 3.7.0, SQLAlchemy 1.2.11):

```python
from sqlalchemy import *
from sqlalchemy.dialects import postgresql

col = column('foo', postgresql.ARRAY(Integer))

function = func.array_agg(col, type_=ARRAY(Integer))                   # Works fine
function = func.array_agg(col, type_=postgresql.ARRAY(Integer))        # Works fine too
function = postgresql.array_agg(col, type_=ARRAY(Integer))             # Fails with the trace shown below
function = postgresql.array_agg(col, type_=postgresql.ARRAY(Integer))  # Fails with the same trace
```

Stack trace:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-841c5b8114ad> in <module>()
----> 1 function = postgresql.array_agg(col, type_=postgresql.ARRAY(Integer))

/usr/lib/python3.7/site-packages/sqlalchemy/dialects/postgresql/ext.py in array_agg(*arg, **kw)
    215
    216     """
--> 217     kw['type_'] = ARRAY(functions._type_from_args(arg))
    218     return functions.func.array_agg(*arg, **kw)

/usr/lib/python3.7/site-packages/sqlalchemy/dialects/postgresql/array.py in __init__(self, item_type, as_tuple, dimensions, zero_indexes)
    229         """
    230         if isinstance(item_type, ARRAY):
--> 231             raise ValueError("Do not nest ARRAY types; ARRAY(basetype) "
    232                              "handles multi-dimensional arrays of basetype")
    233         if isinstance(item_type, type):

ValueError: Do not nest ARRAY types; ARRAY(basetype) handles multi-dimensional arrays of basetype
```
From: Steven H. <iss...@bi...> - 2018-08-21 23:12:45
New issue 4323: Reflection does not work on Postgres enums that start with capital letters
https://bitbucket.org/zzzeek/sqlalchemy/issues/4323/reflection-does-not-work-on-postgres-enums

Steven Heidel:

With the following Postgres schema:

``` #!sql
CREATE TYPE "Status" AS ENUM ('ok', 'err');
CREATE TABLE statuses (status "Status");
```

Run the following code:

``` #!python
from sqlalchemy import create_engine, MetaData

engine = create_engine('postgres://...')
metadata = MetaData()
metadata.reflect(engine)
```

This gives the error:

```
sqlalchemy/lib/python3.6/site-packages/sqlalchemy/dialects/postgresql/base.py:2679: SAWarning: Did not recognize type '"Status"' of column 'status'
  (attype, name))
```

The problem is that in postgresql/base.py the list of enums is created with the lowercased version of the enum names, so the column type cannot be found when it is looked up in the dictionary. This means that reflection does not work properly, and, for instance, Alembic will not be able to autogenerate migrations involving these columns.

Postgres version: 10.4
SQLAlchemy version: 1.2.11
psycopg2 version: 2.7.5
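Until reflection handles mixed-case enum type names, one way to keep working is to declare the type explicitly instead of reflecting it. This is an untested sketch, not from the report; the table and type names simply mirror the schema above.

``` #!python
# Untested sketch: declare the mixed-case enum type explicitly so its name
# never has to be resolved via reflection; create_type=False assumes the
# type already exists in the database.
from sqlalchemy import Table, Column, MetaData
from sqlalchemy.dialects.postgresql import ENUM

metadata = MetaData()
status_type = ENUM('ok', 'err', name='Status', create_type=False)
statuses = Table('statuses', metadata, Column('status', status_type))
```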
From: Juan J. G. de S. <iss...@bi...> - 2018-08-19 20:03:28
New issue 4322: SQLAlchemy 1.2.10 support for over() with range_ or rows is broken
https://bitbucket.org/zzzeek/sqlalchemy/issues/4322/sqlalchemy-1210-support-for-over-with

Juan Jesús García de Soria:

Found this by trying to use the feature with GINO and PostgreSQL. While the feature is supposedly supported, the proxy functions in classes in sql/elements.py don't support the new arguments. See, for example: https://bitbucket.org/zzzeek/sqlalchemy/src/c3869f23836bd35d5ed565a4b84b4ab70293c0f7/lib/sqlalchemy/sql/elements.py#lines-3371

The result of trying to use, for instance, range_=(None, 0) in a call to over() for a window function in a select is Python complaining that over() got an unexpected keyword argument.
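For context, the documented form of the feature being exercised looks roughly like the sketch below. The exact failing call path in the report (through the proxy in sql/elements.py) is not shown there, so this is only an assumed illustration of the `range_` argument, not the reporter's code.

``` #!python
# Assumed illustration of over(..., range_=...); not the failing proxy path.
import sqlalchemy as sa

t = sa.table('t', sa.column('x'))

# running sum over an unbounded-preceding-to-current-row frame
stmt = sa.select([
    sa.func.sum(t.c.x).over(order_by=t.c.x, range_=(None, 0))
])
print(stmt)
```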
From: Torsten L. <iss...@bi...> - 2018-08-17 12:01:21
New issue 4321: Inheriting from a classically mapped class via an intermediate class fails
https://bitbucket.org/zzzeek/sqlalchemy/issues/4321/inheriting-from-classical-mapped-class-via

Torsten Landschoff:

We are currently trying to migrate a medium-sized project from classical SQLAlchemy mappings to declarative mappings. I'm not sure this is supported, but it seemed to work (I did not find anything in the documentation). For some classes I ran into problems, because SQLAlchemy does not identify the mapped class due to an unmapped intermediate class. I attached a patch with a full test case to reproduce the issue.

While the documentation does not seem to discuss this usage, I consider it highly unintuitive that merely introducing a class into the inheritance hierarchy causes breakage. My current workaround is to add the last mapped class explicitly as a base class.

Greetings, Torsten
From: Alana P. <iss...@bi...> - 2018-08-15 19:08:01
New issue 4320: Negation with .self_group()
https://bitbucket.org/zzzeek/sqlalchemy/issues/4320/negation-with-self_group

Alana Pradhan:

I think this bug is an extension of https://bitbucket.org/zzzeek/sqlalchemy/issues/3969/negation-of-labeled-element I hit this bug in 1.2.10 when I add `.self_group()` to the or_ expression:

``` #!python
from sqlalchemy import *
from sqlalchemy.dialects import sqlite

expr = or_(column('x') == 1, column('y') == 2).self_group()
bug = not_(expr.label('foo'))
print(bug.compile())
print(bug.compile(dialect=sqlite.dialect()))
```

```
NOT (x = :x_1 OR y = :y_1)
(x = ? OR y = ?) = 0
```
From: James F. <iss...@bi...> - 2018-08-15 19:01:32
New issue 4319: CTE repeated inside subquery
https://bitbucket.org/zzzeek/sqlalchemy/issues/4319/cte-repeated-inside-subquery

James Frick:

When using CTEs, the desired behavior is to define the CTEs at the top of the statement and then reference them throughout the query. The current behavior is that if you use a subquery with a CTE in it, the entire WITH statement gets written again. I don't think this is legal syntax in most dialects.

```
import sqlalchemy as sa
from sqlalchemy.dialects import oracle

table_a = sa.table('table1', sa.column('col_a'), sa.column('col_b'))
cte_a = table_a.select().cte('cte_a')
subq_1 = cte_a.select().where(cte_a.c.col_a == 1)
subq_2 = cte_a.select().where(cte_a.c.col_b == 2)
query = subq_1.join(subq_2, subq_1.c.col_a == subq_2.c.col_a)
print(query.compile(dialect=oracle.dialect()))
```

This should define cte_a up at the top and recycle it in the subquery that gets inner joined, but instead it repeats the entire WITH statement, as seen below:

```
(WITH cte_a AS
  (SELECT table1.col_a AS col_a, table1.col_b AS col_b FROM table1)
 SELECT cte_a.col_a AS col_a, cte_a.col_b AS col_b FROM cte_a
 WHERE cte_a.col_a = :col_a_1)
JOIN
(WITH cte_a AS
  (SELECT table1.col_a AS col_a, table1.col_b AS col_b FROM table1)
 SELECT cte_a.col_a, cte_a.col_b FROM cte_a
 WHERE cte_a.col_b = :col_b_1)
ON col_a = col_a
```

Tested with the oracle and postgresql dialects.
From: Sai S. M. <iss...@bi...> - 2018-08-13 17:30:35
New issue 4318: Support retrieving Query objects for BakedQueries
https://bitbucket.org/zzzeek/sqlalchemy/issues/4318/support-retrieving-query-objects-for

Sai Sindhur Malleni:

Trying to use baked queries, I refactored this function:

``` #!python
def get_oldest_pending_db_row_with_lock(context):
    journal_dep = aliased(models.OpenDaylightJournal)
    dep_query = context.session.query(journal_dep).filter(
        models.OpenDaylightJournal.seqnum == journal_dep.seqnum
    ).outerjoin(
        journal_dep.depending_on, aliased=True).filter(
        or_(models.OpenDaylightJournal.state == odl_const.PENDING,
            models.OpenDaylightJournal.state == odl_const.PROCESSING))

    row = context.session.query(models.OpenDaylightJournal).filter(
        models.OpenDaylightJournal.state == odl_const.PENDING,
        ~ dep_query.exists()
    ).order_by(
        asc(models.OpenDaylightJournal.last_retried)).first()
    if row:
        update_db_row_state(context, row, odl_const.PROCESSING)
    return row
```

to use baked queries, as follows:

``` #!python
def get_oldest_pending_db_row_with_lock():
    s = session
    journal_dep = aliased(models.OpenDaylightJournal)
    dep_query = bakery(lambda s1: session.query(journal_dep))
    dep_query += lambda dep_query: dep_query.filter(
        models.OpenDaylightJournal.seqnum == journal_dep.seqnum)
    dep_query += lambda dep_query: dep_query.outerjoin(
        journal_dep.depending_on, aliased=True)
    dep_query += lambda dep_query: dep_query.filter(
        or_(models.OpenDaylightJournal.state == odl_const.PENDING,
            models.OpenDaylightJournal.state == odl_const.PROCESSING))

    row = bakery(lambda s2: session.query(models.OpenDaylightJournal))
    row += lambda row: row.filter(
        models.OpenDaylightJournal.state == odl_const.PENDING,
        ~ dep_query.exists())
    row += lambda row: row.order_by(
        asc(models.OpenDaylightJournal.last_retried))
    row(s).all()
    if row:
        update_db_row_state(row, odl_const.PROCESSING)
    return row
```

We are seeing a traceback as follows:

```
Traceback (most recent call last):
  File "saidb.py", line 129, in <module>
    get_oldest_pending_db_row_with_lock_baked()
  File "saidb.py", line 122, in get_oldest_pending_db_row_with_lock_baked
    row(s).all()
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/ext/baked.py", line 434, in all
    return list(self)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/ext/baked.py", line 326, in __iter__
    baked_context = bq._bake(self.session)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/ext/baked.py", line 198, in _bake
    query = self._as_query(session)
  File "/usr/lib64/python2.7/site-packages/sqlalchemy/ext/baked.py", line 221, in _as_query
    query = step(query)
  File "saidb.py", line 119, in <lambda>
    ~ dep_query.exists())
AttributeError: 'BakedQuery' object has no attribute 'exists'
```

So it looks like the Query object is not retrievable from a BakedQuery. Are there any workarounds, or plans to implement this?
From: Pinguin P. <iss...@bi...> - 2018-08-09 13:10:39
New issue 4317: MSSQL TIMESTAMP as primary_key errors on INSERT
https://bitbucket.org/zzzeek/sqlalchemy/issues/4317/mssql-timestamp-as-primary_key-error-on

Pinguin Pinguin:

``` #!python
class Inventur(base.Base):
    __tablename__ = 't341'

    item = Column('item', Integer, primary_key=False)
    ts = Column('ts', TIMESTAMP(convert_int=False), primary_key=True)

    __table_args__ = ()


article_inventur = Inventur(
    item=123,
)
session.add(article_inventur)
session.flush()
```

This emits:

```
SAWarning: Column 't341.ts' is marked as a member of the primary key for table 't341', but has no Python-side or server-side default generator indicated, nor does it indicate 'autoincrement=True' or 'nullable=True', and no explicit value is passed.  Primary key columns typically may not store NULL.
```

TIMESTAMP is an auto-generated value on the server side.
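One direction that may address the warning is to tell SQLAlchemy that the value is generated by the server, which is what SQL Server TIMESTAMP/rowversion columns are. This is an unverified sketch, not a confirmed fix from the report; the column names simply mirror the snippet above.

``` #!python
# Unverified sketch: FetchedValue marks the column as server-generated, so
# no Python-side default is expected for the primary key column.
from sqlalchemy import Column
from sqlalchemy.dialects.mssql import TIMESTAMP
from sqlalchemy.schema import FetchedValue

ts = Column('ts', TIMESTAMP(convert_int=False),
            primary_key=True, server_default=FetchedValue())
```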
From: Alex R. <iss...@bi...> - 2018-08-07 17:43:14
New issue 4316: Improvement to query.get in the case of multiple primary keys
https://bitbucket.org/zzzeek/sqlalchemy/issues/4316/improvement-to-queryget-in-the-case-of

Alex Rothberg:

Right now the `ident` parameter to `query.get` is a tuple in the case of composite primary keys, with the stipulation:

> For a composite primary key, the order of identifiers corresponds in most cases to that of the mapped :class:`.Table` object's primary key columns.

Having to know and hard-code the order of the primary keys seems less than ideal. I suggest accepting a dict mapping names to the values for the primary keys. My implementation of this looks like:

```python
from sqlalchemy.inspection import inspect


def get_by_pks(model, **kwargs):
    return model.query.get(
        tuple(
            kwargs[key.name] for key in inspect(model).primary_key
        )
    )
```
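A usage sketch of the helper proposed above; the `Membership` model and its key column names are hypothetical, invented only to show the call shape.

``` #!python
# Hypothetical usage, assuming a model `Membership` whose composite primary
# key columns are named `org_id` and `user_id`.
membership = get_by_pks(Membership, org_id=42, user_id=7)
# ...equivalent to Membership.query.get((42, 7)), but independent of the
# positional order of the composite primary key columns.
```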
From: Ivan L. <iss...@bi...> - 2018-08-07 16:28:48
New issue 4315: Permission to distribute typing stubs for SQLAlchemy
https://bitbucket.org/zzzeek/sqlalchemy/issues/4315/permission-to-distribute-typing-stubs-for

Ivan Levkivskyi:

Sorry, I am not sure whether it is best to raise this question here or on the mailing list. The mypy team and contributors would like to distribute a typing stub package (which will also include a mypy plugin for more precise types and better type inference) for SQLAlchemy, as per PEP 484 and PEP 561. I just wanted to let you know about this and double-check whether you have any objections.

[mypy] https://github.com/python/mypy
[sqlalchemy-stubs] https://github.com/dropbox/sqlalchemy-stubs (pre-alpha)

Thank you!
From: Winfried P. <iss...@bi...> - 2018-08-05 20:48:15
New issue 4314: sqlalchemy.sql.select with multiple joins creates wrong SQL code
https://bitbucket.org/zzzeek/sqlalchemy/issues/4314/sqlalchemysqlselect-with-multiple-joins

Winfried Plappert:

sqlalchemy.__version__ is 1.2.10. I am using basic SQLAlchemy (Core), not the ORM. I want to select from two memory-attached databases in SQLite. As soon as I join the second database into the query, the generated SQL contains erroneous information, as shown in the attached example. The abbreviated error message is:

```
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) ambiguous column name: source.md256.id
```

The table relationship: "fileinfo" contains a list of filenames in the filesystem (in the example, the current directory) together with modification time (mtime), inode number, and file size, plus a link to the other table "md256", which contains the sha256 of each file. The second database in the example is an exact copy of the first database. The two are supposed to be linked via the md256.sha256 values. The additional index tables for "fileinfo" are not relevant here.
From: Julien D. <iss...@bi...> - 2018-08-04 05:52:06
New issue 4313: Second SELECT not correlated in SELECT ... EXCEPT ... subqueries
https://bitbucket.org/zzzeek/sqlalchemy/issues/4313/second-select-not-correlated-in-select

Julien Demoor:

I'm trying to prevent updates to unchanged rows in large UPDATE statements using the following construct:

```
UPDATE a SET a.xxx = b.xxx
FROM b
WHERE a.id = b.id AND EXISTS (SELECT a.xxx EXCEPT b.xxx);
```

It seems impossible to properly generate such queries with SQLAlchemy using the ``correlate()`` method, which I would expect to work. Code below.

```
from sqlalchemy import Table, MetaData, create_engine, Column, Integer, Unicode, update, select, exists

e = create_engine('postgres:///update_correlation')
c = e.connect()
trans = c.begin()
meta = MetaData(bind=c)
a = Table('a', meta,
          Column('id', Integer, primary_key=True),
          Column('name', Unicode))
b = Table('b', meta,
          Column('id', Integer, primary_key=True),
          Column('name', Unicode))
meta.create_all()
c.execute("insert into a (name) values ('foo');")

# No correlation: both SELECTs in the SELECT ... EXCEPT ... construct have a FROM clause
exists_select = select([a.c.name]).except_(select([b.c.name]))
stmt = update(a, values={a.c.name: b.c.name},
              whereclause=(b.c.id == a.c.id) & exists(exists_select))
print stmt

# Only the first SELECT is correlated, both should be
exists_select = select([a.c.name]).correlate(a).except_(select([b.c.name]).correlate(b))
stmt = update(a, values={a.c.name: b.c.name},
              whereclause=(b.c.id == a.c.id) & exists(exists_select))
print stmt

# Ugly workaround
def _ugly_hack(*args, **kwargs):
    return []

except_select = select([b.c.name])
except_select._get_display_froms = _ugly_hack
exists_select = select([a.c.name]).correlate(a).except_(except_select)
stmt = update(a, values={a.c.name: b.c.name},
              whereclause=(b.c.id == a.c.id) & exists(exists_select))
print stmt
```
From: Erdil A. <iss...@bi...> - 2018-08-03 19:48:47
New issue 4312: When a raw SQL command is executed with the mssql dialect, OperationalError is thrown instead of ProgrammingError
https://bitbucket.org/zzzeek/sqlalchemy/issues/4312/when-a-raw-sql-command-is-executed-with

Erdil Albayrak:

When a raw SQL command is executed with the mssql dialect, OperationalError is thrown instead of ProgrammingError. This can be reproduced by dropping a database that does not exist.
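A minimal sketch of the scenario described, based only on the report's hint about dropping a nonexistent database; the connection URL and database name here are assumptions.

``` #!python
# Sketch of the reported behavior; requires a reachable SQL Server instance.
from sqlalchemy import create_engine
from sqlalchemy.exc import OperationalError, ProgrammingError

engine = create_engine("mssql+pyodbc://user:password@my_dsn")  # assumed URL
try:
    engine.execute("DROP DATABASE does_not_exist")
except ProgrammingError:
    pass  # what the reporter expects for an invalid statement
except OperationalError:
    pass  # what is reportedly raised instead by the mssql dialect
```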
From: Jan K. <iss...@bi...> - 2018-08-02 12:32:34
New issue 4311: [sqlite] ATTACH DATABASE does not have an effect
https://bitbucket.org/zzzeek/sqlalchemy/issues/4311/sqlite-attach-database-does-not-have-an

Jan Koprowski:

Hi,

We are trying to use cross-database joins in our code, and we are testing our code against SQLite. We run the tests using pytest. Some tests fail as follows (on drop_all):

```
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unknown database "second" [SQL: 'PRAGMA "second".table_info("table2")']
```

``` #!python
class Table2(Base):
    __tablename__ = 'table2'
    __table_args__ = {'schema': 'second'}
```

``` #!python
engine = create_engine('sqlite://')
Session.configure(bind=engine)
metadata.bind = engine
engine.execute(sqlalchemy.text('ATTACH DATABASE \'second.db\' AS second'))
metadata.drop_all()
metadata.create_all()
```

Our SQLAlchemy version is 1.2.10. There are tests using SQLite with the same setup which work. Are you able to reproduce the problem? Do you have an idea why such a setup sometimes works?

Thank you in advance,
Jan
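A common way to make an ATTACH apply to every connection an Engine hands out is to issue it from a connect event rather than via a one-off `engine.execute()`. This is only a sketch of that pattern, not a confirmed diagnosis of the report.

``` #!python
# Sketch: run ATTACH on each new DBAPI connection; `engine` refers to the
# engine created in the snippet above.
from sqlalchemy import event


@event.listens_for(engine, "connect")
def attach_second(dbapi_connection, connection_record):
    dbapi_connection.execute("ATTACH DATABASE 'second.db' AS second")
```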
From: Li G. <iss...@bi...> - 2018-08-01 14:48:43
New issue 4310: the behavior of session commit is unexpected in the ORM
https://bitbucket.org/zzzeek/sqlalchemy/issues/4310/the-behavior-about-session-commit-is

Li GuangTian:

``` #!python
from sqlalchemy import Column, Integer, String
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(16))
    password = Column(String(16))

    def __repr__(self):
        return "<User(name='%s', password='%s')>" % (
            self.name, self.password)


if __name__ == '__main__':
    engine = create_engine('mysql+pymysql://', echo=False)
    Session = sessionmaker(bind=engine)
    session = Session()

    user = User(name="test", password="test")
    print(len(user.__dict__.keys()))  # 3
    session.add(user)
    print(len(user.__dict__.keys()))  # 3
    session.flush()
    print(len(user.__dict__.keys()))  # 4
    session.commit()
    print(len(user.__dict__.keys()))  # 1

    user = User(name="test", password="test")
    print(len(user.__dict__.keys()))  # 3
    session.add(user)
    print(len(user.__dict__.keys()))  # 3
    session.flush()
    print(len(user.__dict__.keys()))  # 4
    session.commit()
    id = user.id
    print(len(user.__dict__.keys()))  # 4
```

The output suggests that after the session is committed, the attributes of the object are no longer present. But when you try to access any attribute, the behavior is as expected.
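For context: the counts above match the Session's default `expire_on_commit=True` behavior, in which `commit()` expires loaded attributes so they are reloaded from the database on next access (which is why accessing an attribute afterwards still works). A sketch of the setting involved, reusing the `engine` from the snippet above:

``` #!python
# Sketch: with expire_on_commit=False the attributes stay populated in
# __dict__ after commit() instead of being expired and lazily refreshed.
from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine, expire_on_commit=False)
```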
From: EmilijusS <iss...@bi...> - 2018-08-01 14:40:00
New issue 4309: Query with a big number returns wrong result using cx_Oracle
https://bitbucket.org/zzzeek/sqlalchemy/issues/4309/query-with-a-big-number-returns-wrong

EmilijusS:

## Environment

1. Python 2.7.11
2. cx_Oracle 5.2.1 works fine; cx_Oracle 6.4.1 (and other 6.x.x versions) does not
3. Oracle client version 12.1.0.2.0
4. Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
5. Linux CentOS 6.10
6. GCC 6.3.0
7. SQLAlchemy 1.2.10

## Example code

``` #!python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


def main():
    engine = create_engine("oracle://CONNECTIONSTRING")
    Base.metadata.create_all(engine)
    ss = sessionmaker(bind=engine)
    s = ss()
    engine.execute("INSERT INTO Test (number) VALUES (1376537018368127);")
    engine.execute("INSERT INTO Test (number) VALUES (1376537018368130);")
    s.close()

    s = ss()
    result = s.query(Test).filter(Test.number == 1376537018368127).first()
    print result.number
    s.close()


class Test(Base):
    __tablename__ = 'Test'
    number = Column(Integer, primary_key=True)


main()
```

With cx_Oracle 5.2.1 this prints:

```
1376537018368127
```

With cx_Oracle 6.4.1 it prints:

```
1376537018368130
```

## Additional information

Sorry if the code doesn't work as-is; I don't really have a way to test it, as I can only access the one production Oracle database in which the bug was spotted. [Probably related post on Stack Overflow](https://stackoverflow.com/questions/48406354/sqlalchemy-large-number-truncation-rounding-issue-in-oracle)
From: Dmytro S. <iss...@bi...> - 2018-08-01 08:51:36
New issue 4308: Cascade behavior with uselist=False and association_proxy
https://bitbucket.org/zzzeek/sqlalchemy/issues/4308/cascade-behavior-with-uselist-false-and

Dmytro Starosud:

This is using SQLAlchemy 1.2.10. Please consider the following code snippet.

``` #!python
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class HasID:
    id = Column(Integer, primary_key=True)


class A(Base, HasID):
    __tablename__ = 'test_a'
    ab = relationship('AB', backref='a', cascade='all, delete-orphan', uselist=False)
    b = association_proxy('ab', 'b', creator=lambda b: AB(b=b))


class B(Base, HasID):
    __tablename__ = 'test_b'
    ab = relationship('AB', backref='b', cascade='all, delete-orphan')


class AB(Base, HasID):
    __tablename__ = 'test_ab'
    a_id = Column(Integer, ForeignKey(A.id), nullable=False)
    b_id = Column(Integer, ForeignKey(B.id), nullable=False)


Base.metadata.create_all(session.bind)

a = A(b=B())
session.add(a)
session.commit()

a.b = None
session.commit()
# throws
# >       cursor.execute(statement, parameters)
# E       sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) null value in column "b_id" violates not-null constraint
# E       DETAIL:  Failing row contains (1, 1, null).
# E       [SQL: 'UPDATE test_ab SET b_id=%(b_id)s WHERE test_ab.id = %(test_ab_id)s'] [parameters: {'test_ab_id': 1, 'b_id': None}]
```

Is this just an unfortunate coincidence or have I misused the API? Thanks a lot in advance!
From: jvanasco <iss...@bi...> - 2018-07-27 17:26:02
New issue 4307: roadmap request: infinite date handling (postgres, others?)
https://bitbucket.org/zzzeek/sqlalchemy/issues/4307/roadmap-request-infinite-date-handling

jvanasco:

PostgreSQL has two special datetimes:

```
UPDATE foo SET datetime_column = '-infinity';
UPDATE foo SET datetime_column = 'infinity';
```

SQLAlchemy doesn't natively support these because the underlying engines may or may not support them. psycopg2, for example, converts them to a Julian-calendar max/min date unless a special adapter is registered -- but then decoding the value coming back from the database is awkward, and it also comes at the cost of supporting `datetime.max`.

It would be nice if SQLAlchemy could support this and figure out a way to standardize it across PostgreSQL drivers. I am not sure which other databases support this "infinity" concept. This would obviously require a lot of research and planning across all the supported drivers for a seldom-used feature, and would require the help of numerous people familiar with each driver. I do not expect this to happen for a number of years, if ever, but I wanted to log this.
From: Alex R. <iss...@bi...> - 2018-07-18 14:19:11
New issue 4306: Create new install extra for `psycopg2-binary`
https://bitbucket.org/zzzeek/sqlalchemy/issues/4306/create-new-install-extra-for-psycopg2

Alex Rothberg:

Right now there is an [install extra for `postgresql` which installs `psycopg2`](https://bitbucket.org/zzzeek/sqlalchemy/src/fb377229cd4c4e503bde9c44b78d30ad48f3cf7e/setup.py#lines-159); however, due to changes in `psycopg2`, the wheel [will now be published under a new package name, `psycopg2-binary`](http://initd.org/psycopg/articles/2018/02/08/psycopg-274-released/). I suggest adding a new extra, `postgresql-binary`, that will install the wheel.
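A rough sketch of what the requested extra might look like in a setup.py; the dictionary below is illustrative only and is not the project's actual packaging code.

``` #!python
# Illustrative sketch of setuptools extras; not SQLAlchemy's real setup.py.
extras_require = {
    'postgresql': ['psycopg2'],
    'postgresql_binary': ['psycopg2-binary'],  # proposed new extra
}
# installed with e.g.: pip install "SQLAlchemy[postgresql_binary]"
```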
From: MaxB <iss...@bi...> - 2018-07-18 08:54:32
New issue 4305: Sequence for PostgreSQL is not working as expected
https://bitbucket.org/zzzeek/sqlalchemy/issues/4305/sequence-for-postgresql-are-not-working-as

MaxB:

Hi guys! I use PostgreSQL. I want to create an auto-increment field starting from 1000, incrementing by 2. My non-working code:

``` #!python
class MyTable(db.Model):
    __tablename__ = 'my_table'
    id_seq = Sequence('my_table_id_seq', start=1000, increment=2)
    id = db.Column(db.Integer, id_seq, server_default=id_seq.next_value(), primary_key=True)
```

In my autogenerated migration script I see this (which will obviously not work, because there is no definition for my_table_id_seq):

``` #!python
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('my_table',
        sa.Column('id', sa.Integer(), server_default=sa.text("nextval('my_table_id_seq')"), nullable=False),
        sa.PrimaryKeyConstraint('id')
    )
    # ### end Alembic commands ###
```

If I run it, I get the expected error:

``` #!python
  File "/Users/maxb/Projects/ANDR3/Andr3Admin/venv/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 507, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "my_table_id_seq" does not exist
 [SQL: "\nCREATE TABLE my_table (\n\tid INTEGER DEFAULT nextval('my_table_id_seq') NOT NULL, \n\tPRIMARY KEY (id)\n)\n\n"]
(Background on this error at: http://sqlalche.me/e/f405)
```

So, how do I write this to get the expected behaviour? Thanks in advance!
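One possible direction, offered as an unverified sketch rather than a confirmed answer to the report: give the Sequence a home on the MetaData so there is a definition to emit, and create the sequence explicitly in the migration. The plain declarative base below stands in for the Flask-SQLAlchemy `db.Model` used in the report.

``` #!python
# Unverified sketch; MyTable/my_table_id_seq mirror the report's names.
from sqlalchemy import Column, Integer, Sequence
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.schema import CreateSequence

Base = declarative_base()

id_seq = Sequence('my_table_id_seq', start=1000, increment=2,
                  metadata=Base.metadata)


class MyTable(Base):
    __tablename__ = 'my_table'
    id = Column(Integer, id_seq, server_default=id_seq.next_value(),
                primary_key=True)

# In the Alembic upgrade(), the sequence still needs to be created explicitly
# before the table, e.g. op.execute(CreateSequence(id_seq)).
```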
From: Michael B. <iss...@bi...> - 2018-07-14 16:16:25
New issue 4304: why does query.statement add no_replacement_traverse
https://bitbucket.org/zzzeek/sqlalchemy/issues/4304/why-does-querystatement-add

Michael Bayer:

e.g.

``` #!diff
diff --git a/lib/sqlalchemy/orm/query.py b/lib/sqlalchemy/orm/query.py
index 272fed3e23..c623f10a95 100644
--- a/lib/sqlalchemy/orm/query.py
+++ b/lib/sqlalchemy/orm/query.py
@@ -514,7 +514,7 @@ class Query(object):
         # TODO: there's no tests covering effects of
         # the annotation not being there
-        return stmt._annotate({'no_replacement_traverse': True})
+        return stmt

     def subquery(self, name=None, with_labels=False, reduce_columns=False):
         """return the full SELECT statement represented by
```

here is a confusing effect of it being there:

``` #!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class MyTable(Base):
    __tablename__ = 'my_table'
    score = Column(Integer, primary_key=True)


my_alias = aliased(MyTable, name="my_alias")

s = Session()

query_2 = s.query(MyTable).union(s.query(MyTable))
query_2 = query_2.filter(MyTable.score > 5)

stmt1 = exists().where(my_alias.score > MyTable.score)
stmt2 = s.query(my_alias).filter(my_alias.score > MyTable.score).exists()

print(query_2.add_column(stmt1))
print(query_2.add_column(stmt2))
```

``` #!sql
# correct:
SELECT anon_1.my_table_score AS anon_1_my_table_score,
  EXISTS (SELECT * FROM my_table AS my_alias
          WHERE my_alias.score > anon_1.my_table_score) AS anon_2
FROM (SELECT my_table.score AS my_table_score FROM my_table
      UNION
      SELECT my_table.score AS my_table_score FROM my_table) AS anon_1
WHERE anon_1.my_table_score > :score_1

# incorrect:
SELECT anon_1.my_table_score AS anon_1_my_table_score,
  EXISTS (SELECT 1 FROM my_table AS my_alias, my_table
          WHERE my_alias.score > my_table.score) AS anon_2
FROM (SELECT my_table.score AS my_table_score FROM my_table
      UNION
      SELECT my_table.score AS my_table_score FROM my_table) AS anon_1
WHERE anon_1.my_table_score > :score_1
```
From: Patrick v. d. L. <iss...@bi...> - 2018-07-12 14:13:58
New issue 4303: Union over enum field raises LookupError
https://bitbucket.org/zzzeek/sqlalchemy/issues/4303/union-over-enum-field-raises-lookuperror

Patrick van der Leer:

There seems to be a bug in a union select over an enum field.

``` #!python
import enum
from datetime import datetime

import sqlalchemy as db
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class MyModel(Base):
    __abstract__ = True

    def save(self, commit=True):
        """Save the record."""
        session.add(self)
        if commit:
            session.commit()
        return self


class Setting(MyModel):
    __tablename__ = "setting"

    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    key = db.Column(db.String(240), nullable=False)
    type_value = db.Column(db.Unicode(16))
    int_value = db.Column(db.Integer, info={'type': (int, 'integer')})
    float_value = db.Column(db.Float, info={'type': (float, 'float')})
    char_value = db.Column(db.UnicodeText, info={'type': (str, 'string')})
    boolean_value = db.Column(db.Boolean, info={'type': (bool, 'boolean')})
    datetime_value = db.Column(db.DateTime, info={'type': (datetime, 'datetime')})

    @hybrid_property
    def value(self):
        if self.type_value and self.type_value in self.type_map.keys():
            column, discriminator = self.type_map[self.type_value]
            if column is None:
                return None
            return getattr(self, column)
        return None

    @value.setter
    def value(self, v):
        _type = type(v)
        if v is not None and _type in self.type_map.keys():
            column, discriminator = self.type_map[_type]
            setattr(self, column, v)
            self.type_value = discriminator
        else:
            raise TypeError("Unable to save value of type: %s" % str(type(v)))

    type = db.Column(db.String(50))

    __mapper_args__ = {
        'polymorphic_identity': 'setting',
        'polymorphic_on': type
    }


@db.event.listens_for(Setting, "mapper_configured", propagate=True)
def on_new_class(mapper, cls_):
    """Look for Column objects with type info in them, and work up a lookup table."""
    info_dict = {
        type(None): (None, 'none'),
        'none': (None, 'none')
    }
    for k in mapper.c.keys():
        col = mapper.c[k]
        if 'type' in col.info:
            python_type, discriminator = col.info['type']
            info_dict[python_type] = (k, discriminator)
            info_dict[discriminator] = (k, discriminator)
    cls_.type_map = info_dict


class SettingAppliesTo(enum.Enum):
    default = "default"
    company = "company"
    project = "project"
    user = "user"


class DefaultSetting(Setting):
    applies_to = db.Column(db.Enum(SettingAppliesTo))

    __mapper_args__ = {
        'polymorphic_identity': 'global_setting',
    }


class CompanySetting(Setting):
    # company_id = db.Column(db.ForeignKey('company.id'))
    # company = db.relationship("Company")
    company_id = db.Column(db.Integer)

    __mapper_args__ = {
        'polymorphic_identity': 'company_setting',
    }

    @classmethod
    def get(cls, company_id: int, applies_to=SettingAppliesTo.company):
        ds = session.query(DefaultSetting).filter_by(applies_to=applies_to)
        cs = session.query(CompanySetting).filter_by(company_id=company_id)
        settings = ds.union(cs)
        return settings


engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
session = Session()
Base.metadata.create_all(engine)
MyModel.metadata.create_all(engine)

DefaultSetting(
    key="company.public",
    value=False,
    applies_to=SettingAppliesTo.company
).save()
DefaultSetting(
    key="company.test",
    value=False,
    applies_to=SettingAppliesTo.company
).save()
CompanySetting(
    key="company.public",
    value=True,
    company_id=1
).save()

company_settings = list(CompanySetting.get(1))
```

This raises the following error:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/sqltypes.py", line 1424, in _object_value_for_elem
    return self._object_lookup[elem]
KeyError: 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/unittest2/case.py", line 67, in testPartExecutor
    yield
  File "/usr/local/lib/python3.6/dist-packages/unittest2/case.py", line 625, in run
    testMethod()
  File "/home/patrick/PycharmProjects/regripp-backend/tests/models/test_setting.py", line 82, in test_company_settings
    company_settings = list(CompanySetting.get(company.id))
  File "/home/patrick/PycharmProjects/regripp-backend/Regripp/models/settings.py", line 97, in get
    ff = list(settings)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/loading.py", line 98, in instances
    util.raise_from_cause(err)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/util/compat.py", line 249, in reraise
    raise value
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/loading.py", line 79, in instances
    rows = [proc(row) for row in fetch]
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/loading.py", line 79, in <listcomp>
    rows = [proc(row) for row in fetch]
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/loading.py", line 743, in polymorphic_instance
    return _instance(row)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/loading.py", line 511, in _instance
    loaded_instance, populate_existing, populators)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/loading.py", line 611, in _populate_full
    dict_[key] = getter(row)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/sqltypes.py", line 1507, in process
    value = self._object_value_for_elem(value)
  File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/sql/sqltypes.py", line 1427, in _object_value_for_elem
    '"%s" is not among the defined enum values' % elem)
LookupError: "1" is not among the defined enum values
```

Now looking at the function:

``` #!python
def _object_value_for_elem(self, elem):
    try:
        return self._object_lookup[elem]
    except KeyError:
        raise LookupError(
            '"%s" is not among the defined enum values' % elem)
```

While setting a breakpoint on the return, I see that the value of elem changes from the str `company` to the int 1.

Tested on SQLAlchemy versions 1.2.5 and 1.2.9.
From: Charles Bouchard-L. <iss...@bi...> - 2018-07-10 17:41:30
New issue 4302: startswith is a comparison
https://bitbucket.org/zzzeek/sqlalchemy/issues/4302/startswith-is-a-comparison

Charles Bouchard-Legare:

Since startswith is a kind of like, and like is a comparison, startswith should be a comparison too. This would enable startswith-based custom join conditions without the use of a custom op. Without it, I get:

```
sqlalchemy.exc.ArgumentError: Relationship Order.project could not determine any unambiguous local/remote column pairs based on join condition and remote_side arguments.  Consider using the remote() annotation to accurately mark those elements of the join condition that are on the remote side of the relationship.
```

To reproduce, try the following models:

``` #!python
class Project(Base):
    __tablename__ = 'project'
    id = Column(Integer, primary_key=True)
    name = Column(String)


class Order(Base):
    __tablename__ = 'order'
    id = Column(Integer, primary_key=True)
    project_name = Column(String)
    project = relationship(
        'Project',
        primaryjoin='Order.project_name.startswith(foreign(Project.name))',
        viewonly=True
    )
```

Workaround:

``` #!python
from sqlalchemy.sql import operators

operators._comparison.add(operators.startswith_op)
```

A test case could look like:

``` #!python
def test():
    assert operators.is_comparison(operators.startswith_op)
```