sqlalchemy-tickets Mailing List for SQLAlchemy (Page 34)
Brought to you by:
zzzeek
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2006 | – | – | 174 | 50 | 71 | 129 | 113 | 141 | 82 | 142 | 97 | 72 |
| 2007 | 159 | 213 | 156 | 151 | 58 | 166 | 296 | 198 | 89 | 133 | 150 | 122 |
| 2008 | 144 | 65 | 71 | 69 | 143 | 111 | 113 | 159 | 81 | 135 | 107 | 200 |
| 2009 | 168 | 109 | 141 | 128 | 119 | 132 | 136 | 154 | 151 | 181 | 223 | 169 |
| 2010 | 103 | 209 | 201 | 183 | 134 | 113 | 110 | 159 | 138 | 96 | 116 | 94 |
| 2011 | 97 | 188 | 157 | 158 | 118 | 102 | 137 | 113 | 104 | 108 | 91 | 162 |
| 2012 | 189 | 136 | 153 | 142 | 90 | 141 | 67 | 77 | 113 | 68 | 101 | 122 |
| 2013 | 60 | 77 | 77 | 129 | 189 | 155 | 106 | 123 | 53 | 142 | 78 | 102 |
| 2014 | 143 | 93 | 35 | 26 | 27 | 41 | 45 | 27 | 37 | 24 | 22 | 20 |
| 2015 | 17 | 15 | 34 | 55 | 33 | 31 | 27 | 17 | 22 | 26 | 27 | 22 |
| 2016 | 20 | 24 | 23 | 13 | 17 | 14 | 31 | 23 | 24 | 31 | 23 | 16 |
| 2017 | 24 | 20 | 27 | 24 | 28 | 18 | 18 | 23 | 30 | 17 | 12 | 12 |
| 2018 | 27 | 23 | 13 | 19 | 21 | 29 | 11 | 22 | 14 | 9 | 24 | – |
From: Mike B. <iss...@bi...> - 2015-08-24 20:41:40
|
New issue 3516: make part of PG ARRAY part of base types, look into ANY / ALL https://bitbucket.org/zzzeek/sqlalchemy/issues/3516/make-part-of-pg-array-part-of-base-types

Mike Bayer: ARRAY, array constructors, ANY, ALL, and CONTAINS are all in DB2 as well, not to mention the SQL standard. Oracle also has `VARRAY`, which maybe we could support someday. This is needed for functions like #3132.

At the moment, PG still has its own operators like contains, contained_by, etc., so I still think people will want to use postgresql.ARRAY. However, for #3132, we need to do a check on the input type to see if it is already Array. So I think putting the basic expression stuff into sqltypes.Array would be helpful here, as would an implementation of ANY / ALL. Ideally ANY / ALL would be able to accept subqueries as well, which is also SQL standard. They are *close* to IN in that they accept a subquery in this way, but not quite the same.
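As a rough pure-Python sketch of the comparison semantics being requested (`eq_any` and `eq_all` are illustrative names, not SQLAlchemy API), `x = ANY(arr)` is true when `x` matches at least one array element, while `x = ALL(arr)` requires a match against every element:

```python
def eq_any(x, arr):
    # SQL: x = ANY(arr) -- true if x equals at least one element
    return any(x == v for v in arr)

def eq_all(x, arr):
    # SQL: x = ALL(arr) -- true only if x equals every element
    return all(x == v for v in arr)
```

A subquery-accepting ANY / ALL would apply the same rule to the rows a subquery returns; SQL NULL handling is deliberately ignored in this sketch.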
|
From: Adriel V. <iss...@bi...> - 2015-08-17 18:29:13
|
New issue 3515: AliasedClass or aliased(model) can't be inherited correctly https://bitbucket.org/zzzeek/sqlalchemy/issues/3515/aliasedclass-or-aliased-model-cant-be

Adriel Velazquez: AliasedClass, or an aliased(model), isn't an inheritable class. Currently, if you want to create a separate class that mimics the same model with complex polymorphic relationships, creating an AliasedClass is the only way to do so such that the mappers point to the same location; however, aliased models can't accept custom session.query_properties or be inherited to expand on these functionalities.
|
From: thiefmaster <iss...@bi...> - 2015-08-17 16:37:32
|
New issue 3514: Cannot insert JSON null via bulk_insert_mappings https://bitbucket.org/zzzeek/sqlalchemy/issues/3514/cannot-insert-json-null-via

thiefmaster:

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'test_a'
    id = Column(Integer, primary_key=True)
    data = Column(JSON(none_as_null=False), nullable=False)

# e = create_engine('sqlite:///:memory:', echo=True)
e = create_engine('postgresql:///test', echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.bulk_insert_mappings(A, [{'data': 'null'}])
s.bulk_insert_mappings(A, [{'data': None}])
s.commit()
```

output (only relevant parts)

```
2015-08-17 18:34:06,882 INFO sqlalchemy.engine.base.Engine INSERT INTO test_a (data) VALUES (%(data)s) RETURNING test_a.id
2015-08-17 18:34:06,882 INFO sqlalchemy.engine.base.Engine {'data': '"null"'}
2015-08-17 18:34:06,883 INFO sqlalchemy.engine.base.Engine INSERT INTO test_a DEFAULT VALUES RETURNING test_a.id
2015-08-17 18:34:06,883 INFO sqlalchemy.engine.base.Engine {}
```

So the `None` is never converted to a `'null'`, but the `'null'` string is converted to `'"null"'`...
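For context, the sketch below loosely models what a JSON bind processor is expected to do with `None` (the names `JSON_NULL` and `bind_processor` are hypothetical, not SQLAlchemy's actual implementation); it shows the conversion that appears to be skipped on the bulk path:

```python
import json

JSON_NULL = object()  # sentinel standing in for an explicit SQL NULL request

def bind_processor(value, none_as_null=False):
    # with none_as_null=False, Python None should become the JSON
    # literal 'null'; only the sentinel (or none_as_null=True) should
    # produce a real SQL NULL
    if value is None:
        return None if none_as_null else "null"
    if value is JSON_NULL:
        return None
    return json.dumps(value)
```

Under this model, the bug report's output corresponds to the bulk path sending `None` straight through (SQL NULL) instead of calling the processor, while the string `'null'` is serialized normally to `'"null"'`.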
|
From: Rob v. d. L. <iss...@bi...> - 2015-08-12 22:14:01
|
New issue 3513: Calling setattr on model instance automatically adds it to the dbsession https://bitbucket.org/zzzeek/sqlalchemy/issues/3513/caling-setattr-on-model-instance

Rob van der Linde: Not sure if this is a bug or not, but it has been causing us a lot of hassle. We have a CRUD-style REST API, and when we update an object (model instance), we call setattr on that model instance to update some fields. What we are noticing is that when calling setattr, SQLAlchemy always seems to add the object to the DBSession automatically, so that when we check our DBSession.dirty, the object was put into the session simply by calling setattr. We have looked at alternative methods, like directly updating __dict__, but that doesn't always work with m2m fields. Is this a bug? Is this expected behaviour? Is there anything else we can do to update an object without it automatically getting added to the DBSession?
|
From: Mike B. <iss...@bi...> - 2015-08-11 22:06:40
|
New issue 3512: raise / raiseload strategy https://bitbucket.org/zzzeek/sqlalchemy/issues/3512/raise-raiseload-strategy

Mike Bayer: have a nice PR ready to go with this, so this should be easy for 1.1: https://github.com/zzzeek/sqlalchemy/pull/193/files
|
From: Jayson R. <iss...@bi...> - 2015-08-11 16:22:07
|
New issue 3511: Association proxies with relationships pointing to same table breaks on clear https://bitbucket.org/zzzeek/sqlalchemy/issues/3511/association-proxies-with-relationships

Jayson Reis: Using the documentation examples of association proxy (http://docs.sqlalchemy.org/en/latest/orm/extensions/associationproxy.html) I tried creating some of them to split data; in my case I want to get keywords of type1 or type2 inside user. Creating and updating with append works fine, but when I try to clear the list and commit to the database, SQLAlchemy executes an UPDATE trying to set UserKeyword.user_id to null instead of deleting that row. Here is the full code transcribed to the example.

```python
from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship, backref, sessionmaker
from sqlalchemy.ext.associationproxy import association_proxy
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm.collections import attribute_mapped_collection

sqlite_engine = create_engine('sqlite:///:memory:')
Base = declarative_base(bind=sqlite_engine)
Session = sessionmaker(bind=sqlite_engine)
session = Session()


class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    type1_relationship = relationship(
        'UserKeyword',
        primaryjoin='and_(UserKeyword.user_id == User.id, Keyword.type == \'type1\')')
    type1 = association_proxy(
        'type1_relationship', 'keyword',
        creator=lambda v: UserKeyword(data=v, type='type1')
    )
    type2_relationship = relationship(
        'UserKeyword',
        primaryjoin='and_(UserKeyword.user_id == User.id, Keyword.type == \'type2\')')
    type2 = association_proxy(
        'type2_relationship', 'keyword',
        creator=lambda v: UserKeyword(data=v, type='type2')
    )

    def __init__(self, name):
        self.name = name


class UserKeyword(Base):
    __tablename__ = 'user_keyword'
    user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
    keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)
    keyword = relationship("Keyword")

    def __init__(self, data=None, type=None):
        if data and type:
            self.keyword = Keyword(type=type, **data)


class Keyword(Base):
    __tablename__ = 'keyword'
    id = Column(Integer, primary_key=True)
    name = Column('name', String(64))
    type = Column(String)

    def __init__(self, name, type):
        self.name = name
        self.type = type

    def __repr__(self):
        return 'Keyword name=%r, type=%r' % (self.name, self.type)


Base.metadata.create_all()
user = User('log')
user.type1.append({'name': 'testing1'})
session.add(user)
session.commit()
print(user.type1)
print(user.type2)
user.type1.clear()
session.commit()
```

The commit statement will break because of that update instead of a delete.
|
From: thiefmaster <iss...@bi...> - 2015-08-11 16:18:44
|
New issue 3510: noload behaves like lazyload https://bitbucket.org/zzzeek/sqlalchemy/issues/3510/noload-behaves-like-lazyload

thiefmaster: I would expect the last `print` in this script to show `None`, but as of 1.0.x it shows a C object (which is lazy-loaded).

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'test_a'
    id = Column(Integer, primary_key=True)

class B(Base):
    __tablename__ = 'test_b'
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('test_a.id'))
    c_id = Column(Integer, ForeignKey('test_c.id'))
    a = relationship('A', backref=backref('b'))
    c = relationship('C', backref=backref('b'))

class C(Base):
    __tablename__ = 'test_c'
    id = Column(Integer, primary_key=True)

e = create_engine('sqlite:///:memory:', echo=True)
Base.metadata.create_all(e)
s = Session(e)
s.add(B(a=A(), c=C()))
s.commit()

a = s.query(A).options(joinedload('b').noload('c')).all()
print a[0]
print a[0].b[0]
print a[0].b[0].c
```

Script + full output: https://gist.github.com/ThiefMaster/d6c16d26c77507612b0d
|
From: Eugene Z. <iss...@bi...> - 2015-08-10 10:45:31
|
New issue 3509: sybase: can't wrap db table into Table object if the table has some foreign keys https://bitbucket.org/zzzeek/sqlalchemy/issues/3509/sybase-cant-wrap-db-table-into-table

Eugene Zapolsky: **DETAILED DESCRIPTION:** I faced this issue when trying to create a Table object for a Sybase table with foreign keys.

```
base.py:1230 ERROR Error closing cursor
Traceback (most recent call last):
  File "sqlalchemy/engine/base.py", line 1226, in _safe_close_cursor
    cursor.close()
  File "Sybase.py", line 459, in close
    self._cmd.ct_cmd_drop()
  File "Sybase.py", line 265, in _clientmsg_cb
    raise DatabaseError(msg)
DatabaseError: Layer: 1, Origin: 1
ct_cmd_drop(): user api layer: external error: This routine can be called only if the command structure is idle.
12:19:36 pool.py:638 ERROR Exception during reset or similar
Traceback (most recent call last):
  File "sqlalchemy/pool.py", line 631, in _finalize_fairy
    fairy._reset(pool)
  File "sqlalchemy/pool.py", line 771, in _reset
    pool._dialect.do_rollback(self)
  File "sqlalchemy/engine/default.py", line 420, in do_rollback
    dbapi_connection.rollback()
  File "Sybase.py", line 1201, in rollback
    self.execute('rollback transaction')
  File "Sybase.py", line 1229, in execute
    cursor.execute(sql)
  File "Sybase.py", line 734, in execute
    self._start()
  File "Sybase.py", line 875, in _start
    status = self._cmd.ct_send()
  File "Sybase.py", line 265, in _clientmsg_cb
    raise DatabaseError(msg)
DatabaseError: Layer: 1, Origin: 1
ct_send(): user api layer: external error: This routine cannot be called because another command structure has results pending.
Traceback (most recent call last):
....................................
    cur_table = Table(record['tablename'], self.metadata, autoload=True)
  File "sqlalchemy/sql/schema.py", line 416, in __new__
    metadata._remove_table(name, schema)
  File "sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "sqlalchemy/sql/schema.py", line 411, in __new__
    table._init(name, metadata, *args, **kw)
  File "sqlalchemy/sql/schema.py", line 484, in _init
    self._autoload(metadata, autoload_with, include_columns)
  File "sqlalchemy/sql/schema.py", line 508, in _autoload
    self, include_columns, exclude_columns
  File "sqlalchemy/engine/base.py", line 1968, in run_callable
    return conn.run_callable(callable_, *args, **kwargs)
  File "sqlalchemy/engine/base.py", line 1477, in run_callable
    return callable_(self, *args, **kwargs)
  File "sqlalchemy/engine/default.py", line 364, in reflecttable
    return insp.reflecttable(table, include_columns, exclude_columns)
  File "sqlalchemy/engine/reflection.py", line 578, in reflecttable
    exclude_columns, reflection_options)
  File "sqlalchemy/engine/reflection.py", line 666, in _reflect_fk
    table_name, schema, **table.dialect_kwargs)
  File "sqlalchemy/engine/reflection.py", line 447, in get_foreign_keys
    **kw)
  File "<string>", line 2, in get_foreign_keys
  File "sqlalchemy/engine/reflection.py", line 54, in cache
    ret = fn(self, con, *args, **kw)
  File "sqlalchemy/dialects/sybase/base.py", line 624, in get_foreign_keys
    c = connection.execute(REFTABLE_SQL, table_id=reftable_id)
  File "sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "sqlalchemy/sql/elements.py", line 323, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "sqlalchemy/engine/base.py", line 1010, in _execute_clauseelement
    compiled_sql, distilled_params
  File "sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "sqlalchemy/engine/base.py", line 1334, in _handle_dbapi_exception
    self._autorollback()
  File "sqlalchemy/engine/base.py", line 791, in _autorollback
    self._root._rollback_impl()
  File "sqlalchemy/engine/base.py", line 670, in _rollback_impl
    self._handle_dbapi_exception(e, None, None, None, None)
  File "sqlalchemy/engine/base.py", line 1266, in _handle_dbapi_exception
    exc_info
  File "sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "sqlalchemy/engine/base.py", line 668, in _rollback_impl
    self.engine.dialect.do_rollback(self.connection)
  File "sqlalchemy/engine/default.py", line 420, in do_rollback
    dbapi_connection.rollback()
  File "Sybase.py", line 1201, in rollback
    self.execute('rollback transaction')
  File "Sybase.py", line 1229, in execute
    cursor.execute(sql)
  File "Sybase.py", line 734, in execute
    self._start()
  File "Sybase.py", line 875, in _start
    status = self._cmd.ct_send()
  File "Sybase.py", line 265, in _clientmsg_cb
    raise DatabaseError(msg)
sqlalchemy.exc.DatabaseError: (Sybase.DatabaseError) Layer: 1, Origin: 1
ct_send(): user api layer: external error: This routine cannot be called because another command structure has results pending.
```

I fixed this issue in the following way:

```diff
*** dialects/sybase/base.py     Thu Jun 25 17:11:32 2015
--- sqlalchemy/dialects/sybase/base.py  Mon Aug 10 12:41:14 2015
***************
*** 609,615 ****
          WHERE r.tableid = :table_id
          """)
          referential_constraints = connection.execute(REFCONSTRAINT_SQL,
!                                                      table_id=table_id)

          REFTABLE_SQL = text("""
          SELECT o.name AS name, u.name AS 'schema'
--- 609,615 ----
          WHERE r.tableid = :table_id
          """)
          referential_constraints = connection.execute(REFCONSTRAINT_SQL,
!                                                      table_id=table_id).fetchall()

          REFTABLE_SQL = text("""
          SELECT o.name AS name, u.name AS 'schema'
***************
```
|
From: Eugene Z. <iss...@bi...> - 2015-08-10 10:17:17
|
New issue 3508: sybase: can't wrap db table into Table object if the table doesn't contain primary key https://bitbucket.org/zzzeek/sqlalchemy/issues/3508/sybase-cant-wrap-db-table-into-table

Eugene Zapolsky: **Detailed Description:** When I try to wrap a Sybase table without a primary key into SQLAlchemy's Table object, I get the following exception.

```
Traceback (most recent call last):
..................
    cur_table = Table(record['tablename'], self.metadata, autoload=True)
  File "sqlalchemy/sql/schema.py", line 416, in __new__
    metadata._remove_table(name, schema)
  File "sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "sqlalchemy/sql/schema.py", line 411, in __new__
    table._init(name, metadata, *args, **kw)
  File "sqlalchemy/sql/schema.py", line 484, in _init
    self._autoload(metadata, autoload_with, include_columns)
  File "sqlalchemy/sql/schema.py", line 508, in _autoload
    self, include_columns, exclude_columns
  File "sqlalchemy/engine/base.py", line 1968, in run_callable
    return conn.run_callable(callable_, *args, **kwargs)
  File "sqlalchemy/engine/base.py", line 1477, in run_callable
    return callable_(self, *args, **kwargs)
  File "sqlalchemy/engine/default.py", line 364, in reflecttable
    return insp.reflecttable(table, include_columns, exclude_columns)
  File "sqlalchemy/engine/reflection.py", line 574, in reflecttable
    table_name, schema, table, cols_by_orig_name, exclude_columns)
  File "sqlalchemy/engine/reflection.py", line 647, in _reflect_pk
    table_name, schema, **table.dialect_kwargs)
  File "sqlalchemy/engine/reflection.py", line 412, in get_pk_constraint
    **kw)
  File "<string>", line 2, in get_pk_constraint
  File "sqlalchemy/engine/reflection.py", line 54, in cache
    ret = fn(self, con, *args, **kw)
  File "sqlalchemy/dialects/sybase/base.py", line 743, in get_pk_constraint
    for i in range(1, pks["count"] + 1):
TypeError: 'NoneType' object is not subscriptable
```

My quick and dirty fix is:

```
743,746c743,750
<         for i in range(1, pks["count"] + 1):
<             constrained_columns.append(pks["pk_%i" % (i,)])
<         return {"constrained_columns": constrained_columns,
<                 "name": pks["name"]}
---
>         if pks:
>             for i in range(1, pks["count"] + 1):
>                 constrained_columns.append(pks["pk_%i" % (i,)])
>             return {"constrained_columns": constrained_columns,
>                     "name": pks["name"]}
>         else:
>             return {"constrained_columns": [],
>                     "name": None}
```
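The guarded logic of that fix can be sketched as a standalone function (the name `pk_constraint_from_row` is hypothetical; the real code is a method on the Sybase dialect operating on a reflection query row):

```python
def pk_constraint_from_row(pks):
    # pks is the first row of the PK reflection query (a mapping with
    # "count", "name", and "pk_1".."pk_N" keys), or None when the
    # table has no primary key -- the case the patch guards against
    if pks is None:
        return {"constrained_columns": [], "name": None}
    cols = [pks["pk_%i" % i] for i in range(1, pks["count"] + 1)]
    return {"constrained_columns": cols, "name": pks["name"]}
```

With the guard in place, a table lacking a primary key reflects as an empty, unnamed constraint instead of raising TypeError.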
|
From: Mike B. <iss...@bi...> - 2015-08-04 15:35:00
|
New issue 3507: revisit cx_oracle unicode handling https://bitbucket.org/zzzeek/sqlalchemy/issues/3507/revisit-cx_oracle-unicode-handling

Mike Bayer: cx_oracle types that subclass _NativeUnicodeMixin but not _OracleUnicodeText are essentially text types where convert_unicode=True/'force' is entirely non-functional. Even if the cx_oracle coerce_to_unicode flag is turned on, which we no longer recommend, a CLOB will never return unicode. This needs to be worked out so that the public flags at least do as expected.
|
From: Sandeep S. <iss...@bi...> - 2015-08-03 19:44:23
|
New issue 3506: difference in label with first() or db.session.execute().first() https://bitbucket.org/zzzeek/sqlalchemy/issues/3506/difference-in-label-with-first-or

Sandeep Srinivasa: we have a fairly complex query with inner and outer joins and aliases. We normally use the following syntax:

```python
q = db.session.query(UserCarts, Merchant, Organization).join(
    Merchant, Merchant.c.id == UserCarts.c.merchant_id
).outerjoin(
    Organization, Organization.c.id == Merchant.c.organization_id
).filter(db.and_(UserCarts.c.cart_id == '0f05b5e25ce946af9e0a3b37f0843870'))
```

If we use `q.first().keys()`, we get repeated keys "name" between Merchant and Organization. I tried this with "with_labels()" at the end as well. However, if I do this:

```python
p = db.session.execute(q)
s = p.first()
```

in that case, I get labeled keys. Why is there a difference?
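To illustrate why the repeated keys matter (with made-up column names; this is not SQLAlchemy code): when result keys are unlabeled, two selected columns that are both called `name` collide in any key-based lookup, and one value is shadowed, while table-prefixed labels keep them distinct:

```python
row = ('0f05b5e25ce946af9e0a3b37f0843870', 'Acme Merchant', 'Acme Org')

# unlabeled keys: both Merchant and Organization expose a "name" column
unlabeled_keys = ['cart_id', 'name', 'name']
collided = dict(zip(unlabeled_keys, row))
# the later "name" shadows the earlier one; only two keys survive

# labeled keys (table-prefixed) never collide
labeled_keys = ['user_carts_cart_id', 'merchant_name', 'organization_name']
labeled = dict(zip(labeled_keys, row))
```

Whether a given API path applies labeling determines which of these two shapes the caller sees.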
|
From: Mike B. <iss...@bi...> - 2015-08-03 16:59:52
|
New issue 3505: join targeting broken for joined-inh-> joined-inh w/ secondary https://bitbucket.org/zzzeek/sqlalchemy/issues/3505/join-targeting-broken-for-joined-inh

Mike Bayer: unfortunately the patch in #3366 does not resolve

```python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session, relationship
from sqlalchemy import Column, String, Integer
from sqlalchemy.schema import ForeignKey

Base = declarative_base()

class Object(Base):
    """ Object ORM """
    __tablename__ = 'object'
    type = Column(String(30))
    __mapper_args__ = {
        'polymorphic_identity': 'object',
        'polymorphic_on': type
    }
    id = Column(Integer, primary_key=True)
    name = Column(String(256))

class A(Object):
    __tablename__ = 'a'
    __mapper_args__ = {
        'polymorphic_identity': 'a',
    }
    id = Column(Integer, ForeignKey('object.id'), primary_key=True)
    b_list = relationship(
        'B',
        secondary='a_b_association',
        backref='a_list'
    )

class B(Object):
    __tablename__ = 'b'
    __mapper_args__ = {
        'polymorphic_identity': 'b',
    }
    id = Column(Integer, ForeignKey('object.id'), primary_key=True)

class ABAssociation(Base):
    __tablename__ = 'a_b_association'
    a_id = Column(Integer, ForeignKey('a.id'), primary_key=True)
    b_id = Column(Integer, ForeignKey('b.id'), primary_key=True)

class X(Base):
    __tablename__ = 'x'
    id = Column(Integer, primary_key=True)
    name = Column(String(30))
    obj_id = Column(Integer, ForeignKey('object.id'))
    obj = relationship('Object', backref='x_list')

s = Session()

# works
q = s.query(B).\
    join(B.a_list, 'x_list').filter(X.name == 'x1')
print q

# fails
q = s.query(B).\
    join(B.a_list, A.x_list).filter(X.name == 'x1')
print q
```
|
From: adridg <iss...@bi...> - 2015-07-30 12:27:59
|
New issue 3504: SQL Server VARCHAR(MAX) on reflection can't compile or print https://bitbucket.org/zzzeek/sqlalchemy/issues/3504/sql-server-varchar-max-on-reflection-cant

adridg: With SQL Server, you can create columns of type VARCHAR(MAX). This is the modern rendition of NTEXT or TEXT, and SQLAlchemy translates String() into VARCHAR(MAX). With SQL Server 2012 or later, this happens automatically, and it's described in http://docs.sqlalchemy.org/en/latest/changelog/changelog_10.html#change-f520106ec3455eaa056110c048aa4862 . However, on reflection (through the inspector) you can end up with a VARCHAR (sql type-)object that has a length of "max" (i.e. a string). Printing that object causes an exception. As a very simple and contrived example:

```python
from sqlalchemy import VARCHAR
print VARCHAR()
print VARCHAR(80)
print VARCHAR("max")  # Fails in _render_type_string
```

Attached find a simple test program that creates a table with a String() column, then reflects it to discover the VARCHAR(MAX) column.
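One way a type renderer could tolerate the reflected string length, sketched as a standalone helper (hypothetical; this is not SQLAlchemy's `_render_type_string`):

```python
def render_varchar(length=None):
    # renders a VARCHAR DDL fragment, accepting either an int length
    # or the string "max" that SQL Server reflection can hand back
    if length is None:
        return "VARCHAR"
    if isinstance(length, str):
        if length.lower() != "max":
            raise ValueError("unexpected VARCHAR length %r" % (length,))
        return "VARCHAR(max)"
    return "VARCHAR(%d)" % length
```

The point is simply that the renderer must branch on the length's type rather than assume it is always an integer.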
|
From: Mike B. <iss...@bi...> - 2015-07-29 21:19:44
|
New issue 3503: add full control for return type under PG ARRAY, HSTORE, JSON, JSONB indexed (element) access https://bitbucket.org/zzzeek/sqlalchemy/issues/3503/add-full-control-for-return-type-under-pg

Mike Bayer: e.g., if I want myhstorecol['somecol'] to have a custom type, how do I do that? Some system that works for all of these types and defines the return types for indexed access should be devised. 1.1, because this is like a big deal; it's going to come up a lot, but there is no concrete plan right now. Perhaps a "schema" dictionary, where "*" is a wildcard:

```python
HSTORE(schema={'*': String})
JSON(schema={'subdict': JSON(schema={'elements': ARRAY(Integer)})})
HSTORE(schema={'foob': String, 'bar': MyMagicType})
```

Dictionary too coarse-grained? Supply a callable instead:

```python
HSTORE(schema=lambda key: String if key == "foo" else MyMagicType)
```
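The proposed `schema` dictionary lookup with its `"*"` wildcard could resolve roughly like this (an illustrative sketch of the proposal only; nothing here is implemented, and the type names are stand-in strings):

```python
def return_type(schema, key, default="Text"):
    # resolves the hypothetical `schema` mapping from the proposal:
    # an exact key wins, then the '*' wildcard, then a fallback type
    if key in schema:
        return schema[key]
    return schema.get("*", default)
```

The callable variant would simply replace this lookup with `schema(key)`.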
|
From: Cliff D. <iss...@bi...> - 2015-07-29 19:07:14
|
New issue 3502: Documentation typo on suffix_with() https://bitbucket.org/zzzeek/sqlalchemy/issues/3502/documentation-typo-on-suffix_with

Cliff Dyer: In http://docs.sqlalchemy.org/en/latest/core/selectable.html?highlight=select_from#sqlalchemy.sql.expression.Select.suffix_with the documentation currently reads: "Multiple prefixes can be specified by multiple calls to suffix_with()." This should probably say: "Multiple suffixes can be specified by multiple calls to suffix_with()."
|
From: Mike B. <iss...@bi...> - 2015-07-26 22:33:55
|
New issue 3501: full positional mapping w/ ORM / corresponding compat for TextAsFrom https://bitbucket.org/zzzeek/sqlalchemy/issues/3501/full-positional-mapping-w-orm

Mike Bayer: tons of use cases for TextAsFrom that should be intuitive that don't work. When we make a TextAsFrom with a positional set of columns, those columns should be welded to it. The statement should be able to work in any ORM context flawlessly; no reliance on names matching up should be needed, as we do not target on name anymore:

```python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)
    bs = relationship("B")

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    a_id = Column(ForeignKey('a.id'))

e = create_engine("sqlite://", echo='debug')
Base.metadata.create_all(e)

s = Session(e)
s.add_all([
    A(bs=[B(), B()]),
    A(bs=[B(), B()])
])
s.commit()

b1 = aliased(B)

# works
sql = "select a.id, ba.id as bid, ba.a_id from "\
    "a left outer join b as ba on a.id=ba.a_id"

# fails. why?
# sql = "select a.id as aid, ba.id as bid, ba.a_id from "\
#     "a left outer join b as ba on a.id=ba.a_id"

# fails. why?
# sql = "select a.id as aid, ba.id, ba.a_id from "\
#     "a left outer join b as ba on a.id=ba.a_id"

# are we relying upon names somehow?  we should be able to
# be 100% positional now
t = text(sql).columns(A.id, b1.id, b1.a_id)
q = s.query(A).from_statement(t).options(contains_eager(A.bs, alias=b1))
for a in q:
    print a.id
    print a, a.bs

# forget about if we try a1 = aliased(A) also...
```

I've added docs in 7d268d4bcb5e6205d05ac and 4f51fa947ffa0cadeab7ad7dc that we may even have to dial back for versions that don't have this feature.
|
From: Mike B. <iss...@bi...> - 2015-07-24 22:37:45
|
New issue 3500: remove "--with(out)-cextensions" from installer https://bitbucket.org/zzzeek/sqlalchemy/issues/3500/remove-with-out-cextensions-from-installer

Mike Bayer: we now control C extension builds using DISABLE_SQLALCHEMY_CEXT, and this has been the case since the 0.8 series. Per https://bitbucket.org/pypa/setuptools/issue/65/deprecate-and-remove-features the `Feature` add-on is still in limbo, so let's ditch it.
|
From: legojoey17 <iss...@bi...> - 2015-07-24 18:52:10
|
New issue 3499: Deferred load with PostgreSQL JSON col causes "unhashable type dict" error https://bitbucket.org/zzzeek/sqlalchemy/issues/3499/deffered-load-with-postgresql-json-col

legojoey17: There is an error while doing a query on a declarative base model where you add a column that evaluates to a `JSON` value. The following code:

```python
from sqlalchemy.dialects.postgresql import JSONB, BIGINT

base = declarative_base()

class ModelA(base):
    id = Column(BIGINT, primary_key=True)

class ModelB(base):
    id = Column(BIGINT, primary_key=True)
    jsoncol = Column(JSONB)

session.query(ModelA) \
    .filter(ModelA.id.in_(ids)) \
    .options(Load(ModelA).load_only("id")) \
    .join(ModelB, ModelB.id == ModelA.id) \
    .add_columns([ModelB.jsoncol['jsonfield']]) \
    .all()
```

ModelB will look something like:

```
id = 1
jsoncol = {
    'jsonfield': {'k1': 1, 'k2': 2}
}
# in this case ModelB.jsoncol['jsonfield'] => {'k1': 1, 'k2': 2}
```

```
  File "/home/piinpoint/.virtualenv/piinpoint/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2399, in all
    return list(self)
  File "/home/piinpoint/.virtualenv/piinpoint/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 84, in instances
    util.raise_from_cause(err)
  File "/home/piinpoint/.virtualenv/piinpoint/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/home/piinpoint/.virtualenv/piinpoint/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 75, in instances
    rows = util.unique_list(rows, filter_fn)
  File "/home/piinpoint/.virtualenv/piinpoint/lib/python2.7/site-packages/sqlalchemy/util/_collections.py", line 756, in unique_list
    if hashfunc(x) not in seen
TypeError: unhashable type: 'dict'
```

```
x => sqlalchemy.util._collections.result((<ModelA object at 0x7f8088868710>, {u'19': 100935, u'24': 103674, u'14': 98276}))
```

```
hashfunc(x) => (140190023059216, {u'19': 100935, u'24': 103674, u'14': 98276})
```
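A simplified stand-in for `util.unique_list` shows why the failure occurs and why an identity-based `hashfunc` would sidestep it (a sketch only, not the library's actual code):

```python
def unique_list(seq, hashfunc=None):
    # dedupes while preserving order; uses hashfunc to derive the
    # set key when the elements themselves aren't hashable
    seen = set()
    out = []
    for x in seq:
        key = hashfunc(x) if hashfunc else x
        if key not in seen:
            seen.add(key)
            out.append(x)
    return out

# a row containing a dict (the loaded JSON value) can't be hashed
# directly, so `key not in seen` raises TypeError -- unless the
# hashfunc avoids hashing the dict, e.g. by using object identity:
rows = [({'k': 1},), ({'k': 1},)]
deduped = unique_list(rows, hashfunc=id)
```

In the reported traceback the real `hashfunc` builds a tuple that still contains the raw dict, which is what makes the membership test blow up.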
|
From: jvanasco <iss...@bi...> - 2015-07-23 20:47:17
|
New issue 3498: cascade_iterator doesn't behave as expected for an `all` type_ , perhaps it is a literal (not implicit) `all`? https://bitbucket.org/zzzeek/sqlalchemy/issues/3498/cascade_iterator-doesnt-behave-as-expected

jvanasco: http://docs.sqlalchemy.org/en/rel_1_0/orm/mapping_api.html?highlight=cascade_iterator#sqlalchemy.orm.mapper.Mapper.cascade_iterator

Using this code, I would expect the cascade for "all" to contain all objects:

```python
instance_state.mapper.cascade_iterator('all', instance_state)
```

Instead, I see an empty list. Using the defaults, I see what I want:

```python
instance_state.mapper.cascade_iterator('save-update', instance_state)
instance_state.mapper.cascade_iterator('merge', instance_state)
```

I think "all" is filtering for rules that match an explicit `all` cascade, not "all cascades". Perhaps a new argument, such as "all_rules", is needed? The docs on the attribute could also be better. It seems this returns a generator, of which every element is a tuple in the following form:

* [0] ORM object
* [1] Mapper of [0]
* [2] InstanceState of [0]
* [3] __dict__ of [0]
|
From: Mike B. <iss...@bi...> - 2015-07-22 19:59:51
|
New issue 3497: connectionrec recycle can create situation where checkout handler is called w/o connect handler being successful https://bitbucket.org/zzzeek/sqlalchemy/issues/3497/connectionrec-recycle-can-create-situation

Mike Bayer:

```diff
diff --git a/lib/sqlalchemy/pool.py b/lib/sqlalchemy/pool.py
index b38aefb..9af956b 100644
--- a/lib/sqlalchemy/pool.py
+++ b/lib/sqlalchemy/pool.py
@@ -453,6 +453,7 @@ class _ConnectionRecord(object):
                 for_modify(pool.dispatch).\
                 exec_once(self.connection, self)
         pool.dispatch.connect(self.connection, self)
+        self.info['x'] = 1

     connection = None
     """A reference to the actual DBAPI connection being tracked.
@@ -563,6 +564,8 @@ class _ConnectionRecord(object):
             self.connection = self.__connect()
             if self.__pool.dispatch.connect:
                 self.__pool.dispatch.connect(self.connection, self)
+            self.info['x'] = 1
+
         elif self.__pool._recycle > -1 and \
                 time.time() - self.starttime > self.__pool._recycle:
             self.__pool.logger.info(
@@ -587,9 +590,17 @@ class _ConnectionRecord(object):
         if recycle:
             self.__close()
             self.info.clear()
+
+            # this is the fix; ensure self.connection is None first
+            # self.connection = None
+
+            # here, if __connect() fails, self.connection
+            # is still referring to the old connection, but info is empty
+            # and we won't attempt a __connect() next time we are here
             self.connection = self.__connect()
             if self.__pool.dispatch.connect:
                 self.__pool.dispatch.connect(self.connection, self)
+            self.info['x'] = 1
         return self.connection

     def __close(self):
@@ -718,12 +729,14 @@ class _ConnectionFairy(object):
         fairy._counter += 1

         if not pool.dispatch.checkout or fairy._counter != 1:
+            assert 'x' in fairy.info
             return fairy

         # Pool listeners can trigger a reconnection on checkout
         attempts = 2
         while attempts > 0:
             try:
+                assert fairy.info['x']
                 pool.dispatch.checkout(fairy.connection,
                                        fairy._connection_record,
                                        fairy)
```

then run:

```
py.test test/engine/test_pool.py -k test_error_on_pooled_reconnect_cleanup_recycle
```

this warrants an immediate 1.0.8 IMO; I would like to try to produce an oslo.db-level reproduction case though.
|
From: Andrey S. <iss...@bi...> - 2015-07-15 14:41:59
|
New issue 3487: Multi-dimensional ARRAY in Postgres aren't correctly supported by ORM's query builders https://bitbucket.org/zzzeek/sqlalchemy/issues/3487/multi-dimensional-array-in-postgres-arent

Andrey Semenov: Here's just an example:

```python
weights = Column(ARRAY(DECIMAL(precision=2, asdecimal=False), dimensions=2),
                 nullable=False, default=[])
```

then in query:

```python
CallServiceCampaign.weights[
    func.idx(CallServiceCampaign.goods_ids, literal(goods_id))
][1] != None,
```

produces:

```
NotImplementedError: Operator 'getitem' is not supported on this expression
```

I think this is because https://bitbucket.org/zzzeek/sqlalchemy/src/447ee0af1d2fbb95f2f1244de301f2fe4a87a72f/lib/sqlalchemy/dialects/postgresql/base.py?at=rel_1_0_6#cl-911 doesn't handle the `dimensions` parameter and always returns the underlying array's class on the first non-sliced reference.
|
From: Konsta V. <iss...@bi...> - 2015-07-15 06:11:43
|
New issue 3486: Property all_orm_descriptors doesn't contain backref attributes https://bitbucket.org/zzzeek/sqlalchemy/issues/3486/property-all_orm_descriptors-doesnt

Konsta Vesterinen: I don't know if it's desired behaviour, but currently `Mapper.all_orm_descriptors` doesn't contain relationship backref attributes. At least I think the docs should clearly state that `all_orm_descriptors` doesn't contain these. Consider the following model definition:

```python
import sqlalchemy as sa

Base = sa.ext.declarative.declarative_base()

group_user = sa.Table(
    'group_user',
    Base.metadata,
    sa.Column('user_id', sa.Integer, sa.ForeignKey('user.id')),
    sa.Column('group_id', sa.Integer, sa.ForeignKey('group.id'))
)


class Group(Base):
    __tablename__ = 'group'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)


class User(Base):
    __tablename__ = 'user'
    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String)
    groups = sa.orm.relationship(
        'Group',
        secondary=group_user,
        backref='users'
    )
```

```python
User.__mapper__.attrs.keys()   # ['groups', 'id', 'name']
Group.__mapper__.attrs.keys()  # ['users', 'id', 'name']

User.__mapper__.all_orm_descriptors.keys()
# ['groups', 'id', 'name', '__mapper__']
Group.__mapper__.all_orm_descriptors.keys()
# ['id', 'name', '__mapper__']  <- Doesn't contain 'users'
```

I have some situations where I need to inspect all model descriptors. Currently I need to use `all_orm_descriptors` and combine those with the backref relationships of the given model.
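The workaround the reporter describes can be sketched as follows, using a trimmed version of the models above: merge `all_orm_descriptors` with the mapper's `relationships` collection, which does include backref-created relationships. The `combined_keys` helper name is ours, not a SQLAlchemy API, and the `declarative_base` import is hedged for newer SQLAlchemy versions.

```python
import sqlalchemy as sa
from sqlalchemy.orm import relationship

try:
    from sqlalchemy.orm import declarative_base      # SQLAlchemy 1.4+
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

group_user = sa.Table(
    'group_user', Base.metadata,
    sa.Column('user_id', sa.Integer, sa.ForeignKey('user.id')),
    sa.Column('group_id', sa.Integer, sa.ForeignKey('group.id'))
)


class Group(Base):
    __tablename__ = 'group'
    id = sa.Column(sa.Integer, primary_key=True)


class User(Base):
    __tablename__ = 'user'
    id = sa.Column(sa.Integer, primary_key=True)
    groups = relationship('Group', secondary=group_user, backref='users')


def combined_keys(model):
    # hypothetical helper (not a SQLAlchemy API): descriptor names
    # merged with relationship names, so backref-created attributes
    # such as Group.users are not lost
    mapper = sa.inspect(model)
    return set(mapper.all_orm_descriptors.keys()) | \
        set(mapper.relationships.keys())


assert 'users' in combined_keys(Group)
assert 'groups' in combined_keys(User)
```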
|
From: Mike B. <iss...@bi...> - 2015-07-14 15:08:12
|
New issue 3485: dont do recursion overflow when FunctionElement type is None https://bitbucket.org/zzzeek/sqlalchemy/issues/3485/dont-do-recursion-overflow-when

Mike Bayer:

``` #!python
from sqlalchemy.sql.functions import FunctionElement
from sqlalchemy import Integer
from sqlalchemy.ext.compiler import compiles


class MissingType(FunctionElement):
    name = 'mt'
    type = None


class NotMissingType(FunctionElement):
    name = 'nmt'
    type = Integer


@compiles(NotMissingType)
@compiles(MissingType)
def _fn(element, compiler, **kw):
    return element.name

print NotMissingType()
print MissingType()
```

the second one overflows the recursion limit:

``` #!text
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/sql/elements.py", line 722, in __getattr__
    return getattr(self.comparator, key)
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/sql/elements.py", line 722, in __getattr__
    return getattr(self.comparator, key)
  ... (the same frame repeats until the recursion limit is hit)
```

this goes back to at least 0.9 but 1.0.7 is fine
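The mechanics of the overflow can be reproduced outside SQLAlchemy with a toy analogue (these classes are illustrative, not the real `ColumnElement`): a `__getattr__` that delegates to `self.comparator` re-enters itself when `comparator` itself cannot be resolved, which is what happens when no comparator gets built. Guarding the delegated name turns the overflow into an ordinary `AttributeError`.

```python
class BrokenExpr(object):
    """Toy analogue of the recursion in elements.py __getattr__."""

    def __getattr__(self, key):
        # if 'comparator' is not set, this line re-enters __getattr__
        # with key='comparator', recursing until the limit is hit
        return getattr(self.comparator, key)


class GuardedExpr(object):
    """Same delegation, but the delegated name is guarded."""

    def __getattr__(self, key):
        if key == 'comparator':
            raise AttributeError(
                "%s has no comparator configured" % type(self).__name__)
        return getattr(self.comparator, key)


try:
    BrokenExpr().anything
    raised = None
except RecursionError as err:
    raised = type(err)

try:
    GuardedExpr().anything
    guarded = None
except AttributeError as err:
    guarded = type(err)

assert raised is RecursionError
assert guarded is AttributeError
```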
|
From: Mike B. <iss...@bi...> - 2015-07-13 18:55:19
|
New issue 3484: support multiple AbstractConcreteBase branch points in a hierarchy https://bitbucket.org/zzzeek/sqlalchemy/issues/3484/support-multiple-abstractconcretebase

Mike Bayer: e.g. below, we can query Document or ContactDocument and get a polymorphic query:

``` #!python
class Document(AbstractConcreteBase, Base):
    date = Column(Date)
    documentType = Column(String)


class NotAContact(Document):
    __tablename__ = 'not_a_contact'
    id = Column(Integer, primary_key=True)

    __mapper_args__ = {'polymorphic_identity': 'nc'}


class ABCMiddle(object):
    _sa_abc_base = True

    @classmethod
    def _sa_decl_prepare_nocascade(cls):
        AbstractConcreteBase._sa_decl_prepare_nocascade.__func__(cls)


class ContactDocument(ABCMiddle, Document):
    contactPersonName = Column(String)
    salesPersonName = Column(String)
    sendMethod = Column(String)

    @declared_attr
    def company_id(self):
        return Column(ForeignKey('companies.id'))


class Offer(ContactDocument):
    __tablename__ = 'offers'
    id = Column(Integer, primary_key=True)

    __mapper_args__ = {'polymorphic_identity': 'offer'}
```

there's an easy patch that *seems* to do this; verify with tests and we can add it to 1.0 with an "experimental" label:

``` #!diff
diff --git a/lib/sqlalchemy/ext/declarative/api.py b/lib/sqlalchemy/ext/declarative/api.py
index 3d46bd4..7493df3 100644
--- a/lib/sqlalchemy/ext/declarative/api.py
+++ b/lib/sqlalchemy/ext/declarative/api.py
@@ -20,7 +20,7 @@ import weakref
 
 from .base import _as_declarative, \
     _declarative_constructor,\
-    _DeferredMapperConfig, _add_attribute
+    _DeferredMapperConfig, _add_attribute, _get_immediate_cls_attr
 from .clsregistry import _class_resolver
 
@@ -506,7 +506,8 @@ class AbstractConcreteBase(ConcreteBase):
 
     @classmethod
     def _sa_decl_prepare_nocascade(cls):
-        if getattr(cls, '__mapper__', None):
+        if getattr(cls, '__mapper__', None) and \
+                not _get_immediate_cls_attr(cls, '_sa_abc_base', strict=True):
             return
 
         to_map = _DeferredMapperConfig.config_for_cls(cls)
```

then the middleware is:

``` #!python
class ABCMiddle(object):
    _sa_abc_base = True

    @classmethod
    def _sa_decl_prepare_nocascade(cls):
        AbstractConcreteBase._sa_decl_prepare_nocascade.__func__(cls)
```
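The `_sa_decl_prepare_nocascade.__func__(cls)` call in the middleware relies on a general classmethod idiom worth spelling out: `SomeClass.method.__func__` unwraps a classmethod to its plain function, so it can be invoked with a *different* class as `cls`. A minimal standalone sketch with hypothetical names:

```python
class BaseHook(object):
    @classmethod
    def prepare(cls):
        # runs with whatever cls it is handed, not necessarily BaseHook
        return cls.__name__


class Middle(object):
    @classmethod
    def prepare(cls):
        # unwrap BaseHook.prepare to its plain function and call it with
        # *this* cls -- the same trick the ABCMiddle mixin uses above
        return BaseHook.prepare.__func__(cls)


class Leaf(Middle):
    pass


assert Leaf.prepare() == 'Leaf'        # Middle's override, Leaf's cls
assert BaseHook.prepare() == 'BaseHook'
```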
|
From: Mike B. <iss...@bi...> - 2015-07-11 05:42:49
|
New issue 3483: result.keys() 1.0 regression https://bitbucket.org/zzzeek/sqlalchemy/issues/3483/resultkeys-10-regression

Mike Bayer: the resultproxy inlining seems to have lost the right keys() for anon labels:

``` #!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

e = create_engine("sqlite://", echo=True)

r = e.execute(select([func.count(1)]))
assert r.keys() == ['count_1']
```
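For reference, the expected behaviour can be checked against an in-memory SQLite database. This sketch is written against the modern connection API rather than the 1.0-era `e.execute(...)` form in the report, and hedges on the exact label string:

```python
from sqlalchemy import create_engine, func, select

e = create_engine("sqlite://")
with e.connect() as conn:
    # on 1.0-1.3 this would be written select([func.count(1)])
    r = conn.execute(select(func.count(1)))
    keys = list(r.keys())

# the anonymous label for count(1) should derive from the function
# name (e.g. 'count_1'), which is what the regression had lost
assert len(keys) == 1
assert keys[0].startswith('count')
```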