sqlalchemy-tickets Mailing List for SQLAlchemy (Page 20)
Brought to you by: zzzeek
From: Eloy F. <iss...@bi...> - 2016-12-13 22:17:52
|
New issue 3869: Column.unique not populated in reflected tables https://bitbucket.org/zzzeek/sqlalchemy/issues/3869/columnunique-not-populated-in-reflected

Eloy Felix: Checking Column.unique on a column that has a unique key always returns None. The problem exists in all dialects. A fix is in this commit: https://github.com/eloyfelix/sqlalchemy/commit/c44f884dee5fdb0f5efdddec92eac0473c864e6b

I already sent a pull request for Oracle's get_unique_constraints: https://gerrit.sqlalchemy.org/#/c/260/ Do I make another pull request? I'm new to collaborating on this kind of project.

Responsible: zzzeek
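For context (an illustration, not part of the report): the unique flag that reflection would populate is available from the database's own metadata. A minimal stdlib sketch with sqlite3 showing where a dialect's reflection code can read it:

```python
import sqlite3

# SQLite exposes unique-index information via PRAGMA index_list; a UNIQUE
# column constraint shows up as an automatic unique index on the table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")

# Each row is (seq, name, unique, origin, partial) in modern SQLite builds.
indexes = conn.execute("PRAGMA index_list('users')").fetchall()
unique_flags = [row[2] for row in indexes]
print(unique_flags)  # the UNIQUE constraint on email appears with flag 1
```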
From: Miroslav S. <iss...@bi...> - 2016-12-03 21:32:46
|
New issue 3868: Bug in _get_server_version_info() @dialects/mssql/pymssql.py https://bitbucket.org/zzzeek/sqlalchemy/issues/3868/bug-in-_get_server_version_info-dialects

Miroslav Stampar: These two issues [1] and [2] appear to be the result of a bug inside pymssql's banner parsing routine _get_server_version_info(), which expects the banner to start explicitly with "Microsoft", while in some cases there is whitespace at the beginning. Would it be possible to adapt that routine as described in [2], i.e. just put a leading r"\s*..." at the beginning of the regular expression used?

[1] https://github.com/sqlmapproject/sqlmap/issues/2297
[2] http://thread.gmane.org/gmane.comp.python.sqlalchemy.user/32762
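The proposed change can be illustrated with a plain `re` sketch (the pattern here is an assumption for illustration, not the exact expression in pymssql.py):

```python
import re

# A strictly-anchored pattern fails on a padded banner because re.match()
# anchors at position 0; prefixing \s* absorbs the leading whitespace.
strict = re.compile(r"Microsoft .*? - (\d+)\.(\d+)\.(\d+)\.(\d+)")
lenient = re.compile(r"\s*Microsoft .*? - (\d+)\.(\d+)\.(\d+)\.(\d+)")

banner = "   Microsoft SQL Server 2012 - 11.0.2100.60"
print(strict.match(banner))   # None: banner does not start with "Microsoft"
print(lenient.match(banner))  # matches; version groups are ('11', '0', '2100', '60')
```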
From: Lele L. <iss...@bi...> - 2016-12-02 07:23:14
|
New issue 3867: MySQL table definition parser warns about valid key definition https://bitbucket.org/zzzeek/sqlalchemy/issues/3867/mysql-table-definition-parser-warns-about

Lele Long: I am using MySQL 5.6.16-log, and SA warns as follows:

```
/local/lib/python2.7/site-packages/sqlalchemy/dialects/mysql/reflection.py:56: SAWarning: Unknown schema content: u" KEY `tbl_idx_pk` (`zip_code`,`shop_code`) USING BTREE COMMENT '(null)',"
  util.warn("Unknown schema content: %r" % line)
```

The code below demonstrates the problem; I have also found no such test case in mysql/test_reflection.py RawReflectionTest test_key_reflection:

```
>>> from sqlalchemy.dialects.mysql import reflection
>>> from sqlalchemy.dialects.mysql import base as mysql
>>> dialect = mysql.dialect()
>>> parser = reflection.MySQLTableDefinitionParser(dialect, dialect.identifier_preparer)
>>> regex = parser._re_key
>>> assert regex.match(" PRIMARY KEY (`id`) USING BTREE KEY_BLOCK_SIZE 16,")
>>> assert regex.match(" PRIMARY KEY (`id`) USING BTREE COMMENT '(null)',")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError
>>>
```

I would suggest adding COMMENT support to the regexp for key definitions (_re_key).
From: literg <iss...@bi...> - 2016-12-01 10:43:52
|
New issue 3866: session.refresh(obj) spawns new connections that are held until session is closed https://bitbucket.org/zzzeek/sqlalchemy/issues/3866/sessionrefresh-obj-spawns-new-connection

literg: Environment: MySQL 14.14; SQLAlchemy 1.0.11, 1.0.16, 1.1.4. When using session.refresh(obj) inside a session, a new connection is spawned and held until the end of the session. Code that I used:

```
#!python
session = session_maker()
try:
    yield session
    session.commit()
    for obj in session:
        session.refresh(obj)
except Exception as e:
    session.rollback()
    raise e
finally:
    session.close()
```

```
#!python
with make_session(...) as session:
    for mapped in [self._mapper.map(obj) for obj in objects]:
        saved_entities.append(mapped)
        session.add(mapped)
    session.flush()
```

Hint: The curious thing is that when I use session.commit() after session.refresh(obj), the connection is reused by the next session.refresh(obj) invocation.
From: jdkida <iss...@bi...> - 2016-12-01 00:03:26
|
New issue 3865: JSON column default with __init__ https://bitbucket.org/zzzeek/sqlalchemy/issues/3865/json-column-default-with-__init__

jdkida:

```
#!python
class TestTable(Base):
    test_json = Column(JSON, default={"test": "test"})
    test_string = Column(String, default='test')

    def __init__(self, test_json=None, test_string=None):
        test_json = test_json
        test_string = test_string

# insert and query
session.add(TestTable())
session.commit()
tests = session.query(TestTable).all()
for t in tests:
    print(t.test_json, t.test_string)
```

None, 'test'

Responsible: zzzeek
From: charles p. <iss...@bi...> - 2016-11-30 16:47:44
|
New issue 3864: Alert user about shortcoming on executemany for psycopg2 https://bitbucket.org/zzzeek/sqlalchemy/issues/3864/alert-user-about-shortcoming-on

charles pierre: Hello, thanks for the fantastic job you did on this library. I recently came across a shortcoming of the current psycopg2 library: apparently the executemany function is not optimized at all and just does a loop over execute. For SQLAlchemy, this means that your recommended way of doing massive insert/update statements is actually really slow when working with psycopg2. For me, the difference between using execute(insert(), ..) or bulk_insert_mappings (which, in the end, just execute a ton of single INSERT statements) and using .insert().values() (which constructs a proper INSERT ... VALUES) was like night and day. Maybe you could add a warning or note in your documentation regarding the use of psycopg2 in the relevant section of your docs (for instance, here: http://docs.sqlalchemy.org/en/latest/faq/performance.html#i-m-inserting-400-000-rows-with-the-orm-and-it-s-really-slow)?

Responsible: zzzeek
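The distinction the report draws can be shown with the stdlib DB-API alone: executemany hands the driver one parameter set per row (which psycopg2 at the time implemented as a loop over execute), while a single multi-row VALUES statement carries all rows in one statement, which is what insert().values([...]) renders. A rough stdlib sketch with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
rows = [(i,) for i in range(5)]

# executemany: one parameter tuple per row; a naive driver may simply
# loop, executing one INSERT statement per row.
conn.executemany("INSERT INTO t (x) VALUES (?)", rows)

# multi-row VALUES: a single statement carrying every row at once.
placeholders = ", ".join(["(?)"] * len(rows))
conn.execute("INSERT INTO t (x) VALUES " + placeholders,
             [x for (x,) in rows])

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 10
```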
From: Antoine C. <iss...@bi...> - 2016-11-29 13:11:26
|
New issue 3863: Bulk save doesn't executemany() when model kwargs are different https://bitbucket.org/zzzeek/sqlalchemy/issues/3863/bulk-save-doesnt-executemany-when-model

Antoine Catton: Let's consider this dialect in `test.py`:

```
#!python
import sqlalchemy as sa
from sqlalchemy.dialects import registry
from sqlalchemy.dialects.sqlite.pysqlite import SQLiteDialect_pysqlite
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker


class CustomSQLiteDialect(SQLiteDialect_pysqlite):
    def do_executemany(self, cursor, statement, parameters, context=None):
        print("executemany(): statement={s!r}, params={p!r}".format(s=statement, p=parameters))
        super(CustomSQLiteDialect, self).do_executemany(cursor, statement, parameters, context)

    def do_execute(self, cursor, statement, parameters, context=None):
        print("EXECUTE!!(): statement={s!r}, params={p!r}".format(s=statement, p=parameters))
        super(CustomSQLiteDialect, self).do_execute(cursor, statement, parameters, context)
```

And this model:

```
#!python
Base = declarative_base()


class Model(Base):
    __tablename__ = 'model'
    id = sa.Column(sa.Integer, primary_key=True)
    foo = sa.Column(sa.String, default='foo')
    bar = sa.Column(sa.String, default='bar')
```

And run the code:

```
#!python
registry.register("sqlite.custom", "test", "CustomSQLiteDialect")

if __name__ == '__main__':
    engine = sa.create_engine('sqlite+custom:///:memory:')
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    session = Session()

    bulk1 = [Model(foo='a', bar='b'), Model(foo='c', bar='d')]
    print("== Saving first bulk:")
    session.bulk_save_objects(bulk1)

    bulk2 = [Model(foo='a'), Model(bar='b')]
    print("== Saving second bulk:")
    session.bulk_save_objects(bulk2)
```

Here's the output:

```
EXECUTE!!(): statement=u"SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1", params=()
EXECUTE!!(): statement=u"SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1", params=()
EXECUTE!!(): statement=u'PRAGMA table_info("model")', params=()
EXECUTE!!(): statement=u'\nCREATE TABLE model (\n\tid INTEGER NOT NULL, \n\tfoo VARCHAR, \n\tbar VARCHAR, \n\tPRIMARY KEY (id)\n)\n\n', params=()
== Saving first bulk:
executemany(): statement=u'INSERT INTO model (foo, bar) VALUES (?, ?)', params=(('a', 'b'), ('c', 'd'))
== Saving second bulk:
EXECUTE!!(): statement=u'INSERT INTO model (foo, bar) VALUES (?, ?)', params=('a', 'bar')
EXECUTE!!(): statement=u'INSERT INTO model (foo, bar) VALUES (?, ?)', params=('foo', 'b')
```

I don't see a reason why the second bulk should do many `execute()`.
From: Антонио А. <iss...@bi...> - 2016-11-28 14:06:14
|
New issue 3862: strange compilation behavior https://bitbucket.org/zzzeek/sqlalchemy/issues/3862/strange-compilation-behavior

Антонио Антуан: Hi, I have this model:

```
#!python
class Transaction(BaseMixin, Base, FlagsMixin):
    __tablename__ = 'transactions'
    Source = _TransactionSources()

    id = Column(Integer, primary_key=True, nullable=False, autoincrement=True, index=True)
    ts_spawn = Column(Integer, nullable=False)
    user_id = Column(Integer, nullable=False, default=0, index=True)
    source_type = Column(SQLEnum(*Source.values(), name='transaction_sources'), index=True, nullable=False)

    @hybrid_property
    def is_from_leads(self):
        return self.source_type in (self.Source.buyouts, self.Source.orders, self.Source.leads)

    @is_from_leads.expression
    def is_from_leads(self):
        return and_(self.source_type != None,
                    self.source_type.in_([self.Source.buyouts, self.Source.orders, self.Source.leads]))

    @hybrid_property
    def considered_in_balance(self):
        return self.source_type in (Transaction.Source.compensations,
                                    Transaction.Source.balance_correcting,
                                    Transaction.Source.recalculations)

    @considered_in_balance.expression
    def considered_in_balance(self):
        return self.source_type.in_([Transaction.Source.compensations,
                                     Transaction.Source.balance_correcting,
                                     Transaction.Source.recalculations])

    @hybrid_method
    def exist_flag(self, flag):
        if not self.flags or not flag:
            return False
        return self.flags & flag == flag

    @exist_flag.expression
    def exist_flag(self, flag):
        return and_(flag != 0, self.flags != 0, self.flags.op('&')(flag) != 0)


class _TransactionSources(Enum):
    def __init__(self, **kwargs):
        super(_TransactionSources, self).__init__(**kwargs)
        self.orders = 'orders'
        self.leads = 'leads'
        self.transactions = 'transactions'
        self.compensations = 'compensations'
        self.invoices = 'invoices'
        self.balance_correcting = 'balance_correcting'
        self.translations = 'translations'
        self.recalculations = 'recalculations'
```

And this query:

```
#!python
Session.query(Transaction.user_id,
              func.sum(Transaction.amount).label('sums')) \
    .filter(
        Transaction.ts_spawn > 1447701600,
        Transaction.user_id > 0,
        Transaction.state == Transaction.State.finished,
        or_(
            and_(
                Transaction.is_from_leads == False,
                Transaction.considered_in_balance == False,
                Transaction.source_type != Transaction.Source.transactions
            ),
            Transaction.exist_flag(Transaction.Flags.revoking)
        ))\
    .group_by(Transaction.user_id)
```

The compiled SQL query looks like this:

```
#!sql
SELECT transactions.user_id, sum(transactions.amount) AS sums
FROM transactions
WHERE transactions.ts_spawn > 1447701600
  AND transactions.user_id > 0
  AND transactions.state = 'finished'
  AND (false OR transactions.flags != 0 AND (transactions.flags & 2) != 0)
GROUP BY transactions.user_id
```

Why does the compiler consider `and_(Transaction.is_from_leads == False, Transaction.considered_in_balance == False, Transaction.source_type != Transaction.Source.transactions)` to be `false`? If I use this condition instead, the compiled query looks right:

```
#!python
and_(
    ~Transaction.is_from_leads,
    ~Transaction.considered_in_balance,
    Transaction.source_type != Transaction.Source.transactions
)
```

```
#!sql
SELECT transactions.user_id, sum(transactions.amount) AS sums
FROM transactions
WHERE transactions.ts_spawn > 1447701600
  AND transactions.user_id > 0
  AND transactions.state = 'finished'
  AND (NOT (transactions.source_type IS NOT NULL AND transactions.source_type IN ('buyouts', 'orders', 'leads'))
       AND transactions.source_type NOT IN ('compensations', 'balance_correcting', 'recalculations')
       AND transactions.source_type != 'transactions'
       OR transactions.flags != 0 AND (transactions.flags & 2) != 0)
GROUP BY transactions.user_id
```
From: Rob S. <iss...@bi...> - 2016-11-23 08:58:28
|
New issue 3861: Automapping certain sqlalchemy indexes produces duplicate representation https://bitbucket.org/zzzeek/sqlalchemy/issues/3861/automapping-certain-sqlalchemy-indexes

Rob Salmond: I'm not really sure what's going on here, but for this particular schema, after reflection via the automapper the indexes are duplicated. If you remove anything from the table definitions, the duplication goes away.

```
$ cat repro.sql
CREATE TABLE "users" ("id" integer not null primary key autoincrement not null, "slug" varchar(150) not null, "email" varchar(254) not null);
CREATE UNIQUE INDEX users_email_unique on "users" ("email");
CREATE TABLE "refreshtokens" ("id" integer not null primary key autoincrement, "user_id" integer not null, foreign key("user_id") references "users"("id"));
CREATE UNIQUE INDEX users_slug_unique on "users" ("slug");
$ sqlite3 repro.db ".read repro.sql"
$ cat repro.py
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine

Base = automap_base()
engine = create_engine("sqlite:///repro.db")
Base.prepare(engine, reflect=True)
users = Base.metadata.tables.get('users')
print(users.indexes)
$ python repro.py
set([Index('users_slug_unique', Column('slug', VARCHAR(length=150), table=<users>, nullable=False), unique=True),
     Index('users_email_unique', Column('email', VARCHAR(length=254), table=<users>, nullable=False), unique=True),
     Index('users_email_unique', Column('email', VARCHAR(length=254), table=<users>, nullable=False), unique=True),
     Index('users_slug_unique', Column('slug', VARCHAR(length=150), table=<users>, nullable=False), unique=True)])
$ pip freeze
PyMySQL==0.7.9
SQLAlchemy==1.1.4
$ python --version
Python 2.7.9
$ sqlite3 --version
3.8.7.1 2014-10-29 13:59:56 3b7b72c4685aa5cf5e675c2c47ebec10d9704221
```
From: Arita <iss...@bi...> - 2016-11-22 16:57:21
|
New issue 3860: Google Big Table Connector https://bitbucket.org/zzzeek/sqlalchemy/issues/3860/google-big-table-connector

Arita: While there is MySQL support via Google Cloud SQL, it would be very useful to have a connector for Google Big Table (NoSQL).
From: Leonardo R. <iss...@bi...> - 2016-11-22 14:28:11
|
New issue 3859: JSON field with PostgreSQL and MySQL 5.7 https://bitbucket.org/zzzeek/sqlalchemy/issues/3859/json-field-with-postgresql-and-mysql-57

Leonardo Rossi: Hi, I found different behaviour of the JSON column when working on MySQL versus PostgreSQL. Can somebody tell me if I did something wrong, or is it a bug? I run the query `session.query(table).filter(table.js['hello'] == 'world').first()`, and:

1. with `sa.Column(sa.JSON().with_variant(postgresql.JSONB(), 'postgresql'),)` it works on PostgreSQL but not with MySQL;
2. with `sa.Column(sa.JSON())` it doesn't work with PostgreSQL but works with MySQL.

P.S. I attached a full, ready-to-run example.
From: Mehdi G. <iss...@bi...> - 2016-11-21 09:37:32
|
New issue 3858: object has no attribute '_sa_instance_state' https://bitbucket.org/zzzeek/sqlalchemy/issues/3858/object-has-no-attribute-_sa_instance_state

Mehdi GMIRA: Hey, I don't know if this is a bug or behaviour that I should expect from SA, but I still find it very weird. Here is the test case:

```python
from sqlalchemy import Column, DateTime, String, Integer, ForeignKey, func, Table, create_engine
from sqlalchemy.orm import relationship, backref, sessionmaker, joinedload, contains_eager
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
engine = create_engine('sqlite://', echo=False)


class Student(Base):
    __tablename__ = 'student'
    id = Column(Integer, primary_key=True)
    name = Column(String)


Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
Session = sessionmaker(engine)
session = Session()
session.add(Student(name='foo'))
session.commit()


class Myclass(object):
    column = Student.name

    def test(self):
        print session.query(Student.id, Student.name).filter(self.__class__.column == 'foo').all()  # does not raise
        print session.query(Student.id, Student.name).filter(self.column == 'foo').all()  # raises error


my_class = Myclass()
my_class.test()
```

The error that is raised is:

```python
AttributeError: 'Myclass' object has no attribute '_sa_instance_state'
```
From: David P. <iss...@bi...> - 2016-11-21 09:27:17
|
New issue 3857: Attribute history is (sometimes) not updated unless model is queried for https://bitbucket.org/zzzeek/sqlalchemy/issues/3857/attribute-history-is-sometimes-not-updated

David Pärsson: Consider this example:

```python
model = Model(string='first value')
session.add(model)
session.commit()

model.string = 'second value'
session.add(model)
session.commit()

model = session.query(Model).one()
model.string = 'third value'
session.add(model)
session.commit()
```

The `history.deleted` for this attribute is only updated after the model has been queried for:

```
$ python3 repro.py
Instance is new. History for attribute "string" is: History(added=['first value'], unchanged=(), deleted=())
Instance is dirty. History for attribute "string" is: History(added=['second value'], unchanged=(), deleted=())
Instance is dirty. History for attribute "string" is: History(added=['third value'], unchanged=(), deleted=['second value'])
```

I would expect `deleted` to be `['first value']` for the first row starting with `Instance is dirty.`. Please see the attached file for the full, runnable example.
From: Oliver R. <iss...@bi...> - 2016-11-18 16:24:16
|
New issue 3856: Query.union producing query with polymorphic discriminator when not appropriate https://bitbucket.org/zzzeek/sqlalchemy/issues/3856/queryunion-producing-query-with

Oliver Rutherfurd: a.union(b) unnecessarily includes the polymorphic discriminator when a and b contain columns from a polymorphic subclass, whereas union(a, b) does not. The result is a cartesian product from a.union(b). A demo file is attached.
From: Jiří K. <iss...@bi...> - 2016-11-18 15:09:35
|
New issue 3855: Comparator on Variant field https://bitbucket.org/zzzeek/sqlalchemy/issues/3855/comparator-on-variant-field

Jiří Kunčar: The comparator factory for a Variant field doesn't look up the engine-specific implementation:

```python
@property
def comparator_factory(self):
    """express comparison behavior in terms of the base type"""
    return self.impl.comparator_factory
```

I would expect something like:

```python
@property
def comparator_factory(self):
    """express comparison behavior in terms of the base type"""
    return self.load_dialect_impl(FIXME).comparator_factory
```

There is a workaround which more or less works for me, but it is not ideal:

```python
TestJSONType = sa.JSON().with_variant(
    postgresql.JSONB(), 'postgresql'
).with_variant(
    mysql.JSON(), 'mysql'
)


class TestJSON(db.Model):
    __tablename__ = 'test_json'
    pk = sa.Column(sa.Integer, primary_key=True)
    js = sa.Column(TestJSONType)
```

```python
from werkzeug.local import LocalProxy


def variant_comparator(column):
    """Dialect specific comparator."""
    return column.type.load_dialect_impl(
        db.engine.dialect
    ).comparator_factory(column)


TestJSON.js.comparator = LocalProxy(lambda: variant_comparator(TestJSON.js))
```
From: Kent R. <iss...@bi...> - 2016-11-17 21:50:24
|
New issue 3854: subqueryload is not executed when a member of the same collection is loaded via joinedload https://bitbucket.org/zzzeek/sqlalchemy/issues/3854/subqueryload-is-not-executed-when-a-member

Kent Rakip: When a `joinedload` path causes any entity in a one-to-many relationship to be eagerly loaded, any `subqueryload` path for the same relationship will not be emitted, even if it might return additional results. Here is a minimal repro case:

```
#!python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, inspect
from sqlalchemy.orm import relationship, sessionmaker, joinedload
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Thing(Base):
    __tablename__ = 'Thing'
    id = Column(Integer, primary_key=True, autoincrement=True)
    another_thing_id = Column(ForeignKey('AnotherThing.id'))
    another_thing = relationship('AnotherThing')
    yet_another_thing_id = Column(ForeignKey('YetAnotherThing.id'))
    yet_another_thing = relationship('YetAnotherThing')


class AnotherThing(Base):
    __tablename__ = 'AnotherThing'
    id = Column(Integer, primary_key=True, autoincrement=True)


class YetAnotherThing(Base):
    __tablename__ = 'YetAnotherThing'
    id = Column(Integer, primary_key=True, autoincrement=True)
    another_thing_id = Column(ForeignKey('AnotherThing.id'))
    another_thing = relationship('AnotherThing', backref='yet_another_things')
    last_thing_id = Column(ForeignKey('LastThing.id'))
    last_thing = relationship('LastThing')


class LastThing(Base):
    __tablename__ = 'LastThing'
    id = Column(Integer, primary_key=True, autoincrement=True)


engine = create_engine('sqlite:///', echo=True)
Base.metadata.create_all(engine)
session_factory = sessionmaker(bind=engine)
session = session_factory()

thing = Thing()
thing.another_thing = AnotherThing()
thing.another_thing.yet_another_things = [YetAnotherThing(), YetAnotherThing()]
thing.yet_another_thing = thing.another_thing.yet_another_things[0]
thing.another_thing.yet_another_things[0].last_thing = LastThing()
thing.another_thing.yet_another_things[1].last_thing = LastThing()
session.add(thing)
session.commit()

session2 = session_factory()
t = (
    session2.query(Thing)
    .options(
        joinedload(Thing.another_thing)
        .subqueryload(AnotherThing.yet_another_things)
        .joinedload(YetAnotherThing.last_thing),
        joinedload(Thing.yet_another_thing)
        .joinedload(YetAnotherThing.another_thing),
    )
    .get(1)
)

# Should print 'False'; actually prints 'True' on 1.1.4
print('yet_another_things' in inspect(t.another_thing).unloaded)
```

The query that should load all of the values for `AnotherThing.yet_another_things` is never emitted, and as a result `last_thing` is not loaded for any of the `YetAnotherThing`s either, since that occurs later on the same path. Commenting out the second `joinedload` path causes the `subqueryload` query to be emitted correctly. This seems to be a regression in 1.1; I'm unable to reproduce it on 1.0.16, although it is possible that the behavior is nondeterministic and I never got lucky.
From: Daniel K. <iss...@bi...> - 2016-11-17 18:06:50
|
New issue 3853: MutableSet isn't change aware on bitwise operators https://bitbucket.org/zzzeek/sqlalchemy/issues/3853/mutableset-isnt-change-aware-on-bitwise

Daniel Kolsoi: Python sets support common set operations as bitwise operators such as &=, |=, ^=, -=. The MutableSet class seems to fully support these operations as named methods (.intersection, .union, ...) but not as bitwise operators. This is unexpected, since the bitwise operators do in fact modify the underlying set (due to inheriting from set), but because the .changed() method is not called by MutableSet, SQLAlchemy is never notified that the data has been modified. This leads to the possibility of having a mutable set column which is modified locally but does not get updated on commit or flush. For example:

```
#!python
import json

from sqlalchemy.ext.mutable import MutableSet
from sqlalchemy.types import TypeDecorator, UnicodeText


class _JSONEncodedSet(TypeDecorator):
    impl = UnicodeText

    def process_bind_param(self, value, dialect):
        if value is not None:
            value = json.dumps(list(value))
        return value

    def process_result_value(self, value, dialect):
        if value is not None:
            value = set(json.loads(value))
        return value


JSONEncodedSet = MutableSet.as_mutable(_JSONEncodedSet)

from sqlalchemy.ext.declarative import declarative_base

BaseModel = declarative_base()


class MyModel(BaseModel):
    json_set = Column(JSONEncodedSet)


from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

session = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))
session.add(MyModel(json_set={1, 2, 3}))
session.flush()

model = session.query(MyModel).first()
model.json_set &= {1, 3}
session.flush()

del model
model = session.query(MyModel).first()
assert model.json_set == {1, 3}  # AssertionError
```

Tested in SQLAlchemy 1.1.3, SQLite.
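The underlying Python semantics can be shown without SQLAlchemy at all: `&=` on a `set` subclass goes through `set.__iand__`, which mutates the object in place without calling any overridden named method, so a change hook attached only to named methods never fires. A minimal sketch (the `TrackedSet` class here is hypothetical, standing in for MutableSet's `changed()` bookkeeping):

```python
class TrackedSet(set):
    """A set that counts changes, but only via overridden named methods."""

    def __init__(self, *args):
        super().__init__(*args)
        self.change_count = 0

    def intersection_update(self, *others):
        super().intersection_update(*others)
        self.change_count += 1  # explicit method call: change is recorded


s = TrackedSet({1, 2, 3})
s.intersection_update({1, 3})
print(s.change_count)  # 1

s &= {1}  # set.__iand__ mutates s in place, bypassing our override
print(s.change_count)  # still 1, even though s is now {1}
```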
From: ilan.gadassi <iss...@bi...> - 2016-11-14 19:31:14
|
New issue 3852: deepcopy on CollectionAdapter results in exception https://bitbucket.org/zzzeek/sqlalchemy/issues/3852/deepcopy-on-collectionadapter-result-in

ilan.gadassi:

Version: SQLAlchemy (1.1.3)

Description: while calling deepcopy(destinationModel) I get the following exception:

```
#!python
File "research/detectors/tests/test_cnc_detector.py", line 391 in test_extend
  res = an.get_extension(Tick())
File "research/detectors/cnc_detector.py", line 271 in get_extension
  updated_destinations = deepcopy(self.destinations)
File "/anaconda/lib/python2.7/copy.py", line 190 in deepcopy
  y = _reconstruct(x, rv, 1, memo)
File "/anaconda/lib/python2.7/copy.py", line 352 in _reconstruct
  y.append(item)
File "/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/collections.py", line 1043 in append
  item = __set(self, item, _sa_initiator)
File "/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/collections.py", line 1015 in __set
  item = executor.fire_append_event(item, _sa_initiator)
File "/anaconda/lib/python2.7/site-packages/sqlalchemy/orm/collections.py", line 678 in fire_append_event
  if self.invalidated:
E AttributeError: invalidated
```

The destinationModel is of type InstrumentedList. It seems that deepcopy doesn't copy the invalidated attribute, due to CollectionAdapter.__getstate__; the invalidated attribute is used by CollectionAdapter.fire_append_event.
From: Антон Б. <iss...@bi...> - 2016-11-12 11:49:40
|
New issue 3851: pip install error (windows 10) https://bitbucket.org/zzzeek/sqlalchemy/issues/3851/pip-install-error-windows-10

Антон Бауткин: I changed the code page in my cmd to 65001 (UTF-8), and while installing sqlalchemy with pip:

```
c:\Program Files\Python35\Scripts>pip3.exe install sqlalchemy
```

I got the following error:

```
Collecting sqlalchemy
  Using cached SQLAlchemy-1.1.3.tar.gz
Installing collected packages: sqlalchemy
  Running setup.py install for sqlalchemy ... error
Exception:
Traceback (most recent call last):
  File "c:\program files\python35\lib\site-packages\pip\compat\__init__.py", line 73, in console_to_str
    return s.decode(sys.__stdout__.encoding)
  File "c:\program files\python35\lib\encodings\cp65001.py", line 17, in decode
    return codecs.code_page_decode(65001, input, errors, True)
UnicodeDecodeError: 'CP_UTF8' codec can't decode bytes in position 0--1: No mapping for the Unicode character exists in the target code page.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\program files\python35\lib\site-packages\pip\basecommand.py", line 215, in main
    status = self.run(options, args)
  File "c:\program files\python35\lib\site-packages\pip\commands\install.py", line 342, in run
    prefix=options.prefix_path,
  File "c:\program files\python35\lib\site-packages\pip\req\req_set.py", line 784, in install
    **kwargs
  File "c:\program files\python35\lib\site-packages\pip\req\req_install.py", line 878, in install
    spinner=spinner,
  File "c:\program files\python35\lib\site-packages\pip\utils\__init__.py", line 676, in call_subprocess
    line = console_to_str(proc.stdout.readline())
  File "c:\program files\python35\lib\site-packages\pip\compat\__init__.py", line 75, in console_to_str
    return s.decode('utf_8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 34: invalid continuation byte
```

To solve this, change your code page to 866:

```
chcp 866
```

And try to install sqlalchemy once again.
From: Shengjing Z. <iss...@bi...> - 2016-11-11 08:20:41
|
New issue 3850: Request: SQLite json1 ext support https://bitbucket.org/zzzeek/sqlalchemy/issues/3850/request-sqlite-json1-ext-support

Shengjing Zhu: sqlalchemy.types.JSON only supports PostgreSQL and MySQL. But since SQLite 3.9, SQLite has had JSON support via the json1 extension: https://sqlite.org/json1.html. I hope sqlalchemy can support json1 too.
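For reference, the json1 functions that a SQLite JSON type would render can be exercised directly through the stdlib driver (this assumes the local SQLite build includes json1, which it does by default in modern builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO docs (body) VALUES (?)", ('{"hello": "world"}',))

# json_extract with a JSON path is the kind of expression a SQLite JSON
# type would compile for something like table.js['hello'].
row = conn.execute(
    "SELECT json_extract(body, '$.hello') FROM docs").fetchone()
print(row[0])  # world
```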
From: Michael B. <iss...@bi...> - 2016-11-10 15:43:08
|
New issue 3849: bulk update can't accomodate alt-named primary key https://bitbucket.org/zzzeek/sqlalchemy/issues/3849/bulk-update-cant-accomodate-alt-named

Michael Bayer:

```
#!diff
diff --git a/test/orm/test_bulk.py b/test/orm/test_bulk.py
index 0a51a5a..07c430f 100644
--- a/test/orm/test_bulk.py
+++ b/test/orm/test_bulk.py
@@ -232,6 +232,61 @@ class BulkUDPostfetchTest(BulkTest, fixtures.MappedTest):
         eq_(a1.y, 2)
 
 
+class BulkUDTestAltColKeys(BulkTest, fixtures.MappedTest):
+    @classmethod
+    def define_tables(cls, metadata):
+        Table(
+            'people', metadata,
+            Column(
+                'person_id', Integer,
+                primary_key=True),
+            Column('name', String(50)))
+
+    @classmethod
+    def setup_classes(cls):
+        class Person(cls.Comparable):
+            pass
+
+    @classmethod
+    def setup_mappers(cls):
+        Person = cls.classes.Person
+        people = cls.tables.people
+
+        mapper(Person, people, properties={
+            'id': people.c.person_id,
+            'personname': people.c.name
+        })
+
+    def test_insert(self):
+        Person = self.classes.Person
+
+        s = Session()
+        s.bulk_insert_mappings(
+            Person, [{"id": 5, "personname": "thename"}]
+        )
+
+        eq_(
+            s.query(Person).first(),
+            Person(id=5, personname="thename")
+        )
+
+    def test_update(self):
+        Person = self.classes.Person
+
+        s = Session()
+        s.add(Person(id=5, personname="thename"))
+        s.commit()
+
+        s.bulk_update_mappings(
+            Person, [{"id": 5, "personname": "newname"}]
+        )
+
+        eq_(
+            s.query(Person).first(),
+            Person(id=5, personname="newname")
+        )
+
+
 class BulkInheritanceTest(BulkTest, fixtures.MappedTest):
     @classmethod
     def define_tables(cls, metadata):
```

```
StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required for bind parameter 'people_person_id'
[SQL: u'UPDATE people SET person_id=?, name=? WHERE people.person_id = ?']
[parameters: [{'person_id': 5, 'name': 'newname'}]]
```

fortunately there's no workaround so this is safe for immediate release + 1.0.x backport
From: Michael B. <iss...@bi...> - 2016-11-08 02:03:50
|
New issue 3848: returning None (or anything) from @declared_attr means it won't be called further, though this usage may be intuitive on AbstractConcreteBase https://bitbucket.org/zzzeek/sqlalchemy/issues/3848/returning-none-or-anything-from

Michael Bayer: this is a conceptual one that might not be possible to fix. With AbstractConcreteBase, we're both "abstract" and "mapped", so the scheme below is intuitive. But because it returns None, the logic in _extract_mappable_attributes ("# using @declared_attr for some object that isn't Column/MapperProperty; remove from the dict_ and place the evaluated value onto the class.") blows it away. Need to re-identify the rationale for the "remove the attribute" logic, determine if the "might return None" style is likely only with AbstractConcreteBase or in a more general sense, then identify a potential warning, or even behavioral change, to make this more intuitive. A special symbol like "don't map" might serve as a more explicit replacement for None.

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import declared_attr
from sqlalchemy.ext.declarative import AbstractConcreteBase

Base = declarative_base()


class FooBase(AbstractConcreteBase, Base):
    @declared_attr
    def foo(cls):
        print "FOO! ", cls.__name__
        if cls.__name__ != 'A':
            # return Column(String)  # gets discarded, but affects A
            return None  # causes attr to not be called again
        else:
            return Column(Integer)


class A(FooBase):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
```

output:

```
CREATE TABLE a (
    id INTEGER NOT NULL,
    PRIMARY KEY (id)
)
```

1.1 for the moment but this might be a more long term thing.
From: Michael B. <iss...@bi...> - 2016-11-08 01:41:00
|
New issue 3847: declared_attr.cascading does not warn / error for non-mixin use https://bitbucket.org/zzzeek/sqlalchemy/issues/3847/declared_attrcascading-does-not-warn-error

Michael Bayer:

if declared_attr.cascading is used on a non-mixin class, the "cascading" is ignored and the attribute is only called once, which is totally confusing:

```
#!python
from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.declarative import declared_attr

Base = declarative_base()


class A(Base):
    __tablename__ = 'a'
    id = Column(Integer, primary_key=True)

    @declared_attr.cascading
    def foo(cls):
        if cls.__name__ == 'A':
            return Column(String)
        else:
            return Column(Integer)


class B(A):
    __tablename__ = 'b'
    id = Column(Integer, ForeignKey('a.id'), primary_key=True)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)
```

output:

```
#!python
CREATE TABLE a (
    id INTEGER NOT NULL,
    foo VARCHAR,
    PRIMARY KEY (id)
)

2016-11-07 20:35:49,091 INFO sqlalchemy.engine.base.Engine ()
2016-11-07 20:35:49,091 INFO sqlalchemy.engine.base.Engine COMMIT
2016-11-07 20:35:49,092 INFO sqlalchemy.engine.base.Engine
CREATE TABLE b (
    id INTEGER NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY(id) REFERENCES a (id)
)
```

this should ideally either work or raise an error, though for now a warning is likely the best we can do. |
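Since the ticket settles on a warning as the likely fix, here is a sketch of what such a check could look like, using the stdlib warnings module. The names (`check_cascading_usage`, the `is_mixin` flag) are placeholders for whatever test the declarative machinery would actually perform, not real SQLAlchemy API:

```
#!python
import warnings

# Hypothetical sketch: warn when a "cascading" declared attribute is used
# directly on a mapped class rather than a mixin.
def check_cascading_usage(cls, attr_name, is_mixin):
    if not is_mixin:
        warnings.warn(
            "@declared_attr.cascading on %r.%s is ignored: 'cascading' only "
            "takes effect when declared on a mixin class"
            % (cls.__name__, attr_name))

class A:
    pass

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_cascading_usage(A, "foo", is_mixin=False)
print(len(caught))  # 1
```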
From: Pawel <iss...@bi...> - 2016-11-07 23:57:29
|
New issue 3846: ON CONFLICT index_where (index_predicate) ... missing FROM-clause entry for table.... https://bitbucket.org/zzzeek/sqlalchemy/issues/3846/on-conflict-index_where-index_predicate

Pawel:

PostgreSQL 9.5, SQLAlchemy 1.1.3

Hi, I got this error and I'm pretty sure it didn't occur in the beta version (`1.1.b3`, was it?). I got a similar problem then, and at first I thought it could be related to #3807. But after some digging I must admit I'm a bit puzzled by PostgreSQL's behavior. The lack of good ON CONFLICT examples with index_where (index_predicate) and partial indexes in the PostgreSQL docs doesn't help.

The problem seems to be that PostgreSQL doesn't allow any table qualification in the WHERE clause between the ON CONFLICT and SET phrases (index_predicate in the docs). So this is wrong:

```
#!SQL
insert into users_xtra AS u (name, login_email, lets_index_this)
values ('bla', 'bli', 'ble')
ON CONFLICT (name, lets_index_this) where (user_xtra.lets_index_this = 'unique_name')
do update set name = excluded.name, lets_index_this = excluded.lets_index_this;
```

and gives an error:

```
#!SQL
ERROR:  missing FROM-clause entry for table "user_xtra"
LINE 2: ON CONFLICT (name, lets_index_this) where (user_xtra.lets_in...
                                                   ^
```

This executes without error:

```
#!SQL
insert into users_xtra AS u (name, login_email, lets_index_this)
values ('bla', 'bli', 'ble')
ON CONFLICT (name, lets_index_this) where (lets_index_this = 'unique_name')
do update set name = excluded.name, lets_index_this = excluded.lets_index_this;
```

Here is the test for this behavior.
```
#!diff
diff --git a/test/dialect/postgresql/test_on_conflict.py b/test/dialect/postgresql/test_on_conflict.py
index 154d3fe..bee30c9 100644
--- a/test/dialect/postgresql/test_on_conflict.py
+++ b/test/dialect/postgresql/test_on_conflict.py
@@ -1,5 +1,5 @@
 # coding: utf-8
+from sqlalchemy import text
 from sqlalchemy.testing.assertions import eq_, assert_raises
 from sqlalchemy.testing import fixtures
 from sqlalchemy import testing
@@ -30,6 +30,11 @@ class OnConflictTest(fixtures.TablesTest):
         )
         cls.unique_constraint = schema.UniqueConstraint(
             users_xtra.c.login_email, name='uq_login_email')
+        cls.unique_partial_index = schema.Index(
+            'idx_unique_partial_name',
+            users_xtra.c.name, users_xtra.c.lets_index_this,
+            unique=True,
+            postgresql_where=users_xtra.c.lets_index_this == 'unique_name')
         cls.bogus_index = schema.Index(
             'idx_special_ops',
             users_xtra.c.lets_index_this,
@@ -405,6 +410,44 @@ class OnConflictTest(fixtures.TablesTest):
                 lets_index_this='bogus')
         )
 
+    def test_on_conflict_do_update_exotic_targets_six(self):
+        users = self.tables.users_xtra
+
+        with testing.db.connect() as conn:
+            # use partial index
+            i = insert(users)
+            i = i.on_conflict_do_update(
+                index_elements=self.unique_partial_index.columns,
+                # or
+                #index_elements=[users.c.name, users.c.lets_index_this],
+                index_where=self.unique_partial_index.dialect_options['postgresql']['where'],
+                # or
+                #index_where=users.c.lets_index_this == 'unique_name',
+                # but this passes!
+                # vvv
+                #index_where=text("lets_index_this = 'unique_name'"),
+                set_=dict(
+                    name=i.excluded.name,
+                    login_email=i.excluded.login_email),
+            )
+
+            conn.execute(
+                i,
+                [
+                    dict(name='same_name', login_email='ma...@gm...',
+                         lets_index_this='unique_name'),
+                    dict(name='same_name', login_email='ma...@gm...',
+                         lets_index_this='unique_name')
+                ]
+            )
+
+            eq_(
+                conn.execute(users.select()).fetchall(),
+                [
+                    (2, 'same_name', 'ma...@gm...', 'unique_name'),
+                ]
+            )
+
     def test_on_conflict_do_update_no_row_actually_affected(self):
         users = self.tables.users_xtra
```

So if this is in fact valid behavior, then the compiler tests `test_do_update_index_elements_where_target` and `test_do_update_index_elements_where_target_multivalues` in the file `/test/dialect/postgresql/test_on_conflict.py` validate wrong results! |
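For anyone wanting to poke at the predicate form without a PostgreSQL instance: SQLite 3.24+ implements an upsert modeled on PostgreSQL's, so the working (unqualified) predicate can be sketched with the stdlib sqlite3 module. Schema names below mirror the report; this only demonstrates the unqualified form succeeding, not PostgreSQL's rejection of the qualified one:

```
#!python
import sqlite3

# Requires SQLite >= 3.24 (upsert support).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users_xtra ("
    "id INTEGER PRIMARY KEY, name TEXT, login_email TEXT, lets_index_this TEXT)")
# Partial unique index, as in the report's idx_unique_partial_name.
conn.execute(
    "CREATE UNIQUE INDEX idx_unique_partial_name "
    "ON users_xtra (name, lets_index_this) "
    "WHERE lets_index_this = 'unique_name'")

# The index predicate is written WITHOUT a table qualifier.
upsert = (
    "INSERT INTO users_xtra (name, login_email, lets_index_this) "
    "VALUES (?, ?, ?) "
    "ON CONFLICT (name, lets_index_this) "
    "WHERE lets_index_this = 'unique_name' "
    "DO UPDATE SET login_email = excluded.login_email")
conn.execute(upsert, ("same_name", "first@example.com", "unique_name"))
# Second row hits the partial unique index and updates in place.
conn.execute(upsert, ("same_name", "second@example.com", "unique_name"))

rows = conn.execute("SELECT name, login_email FROM users_xtra").fetchall()
print(rows)
```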
From: Yegor R. <iss...@bi...> - 2016-11-06 10:48:24
|
New issue 3845: back_populates does not work with lazy="noload" https://bitbucket.org/zzzeek/sqlalchemy/issues/3845/back_populates-does-not-work-with-lazy

Yegor Roganov:

The other side of the relationship is not populated when the relationship is declared with `lazy="noload"`, though it works with `lazy=True`. If this is by design, the docs should probably mention it. Here is the code:

```
#!python
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    addresses = relationship("Address", lazy="noload", back_populates="user")


class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    user_id = Column('user_id', Integer, ForeignKey('users.id'))
    user = relationship("User")


a1 = Address()
u = User()
u.addresses.append(a1)
assert a1.user == u
```
 |
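As a plain-Python illustration of what back_populates is expected to do on append (SQLAlchemy wires this through attribute events; the classes below are stand-ins, not the ORM):

```
#!python
# Minimal sketch: the collection sets the reverse attribute on the child
# when it is appended, which is the behavior the report says goes missing
# under lazy="noload".
class BackPopulatingList(list):
    def __init__(self, owner, reverse_attr):
        super().__init__()
        self._owner = owner
        self._reverse_attr = reverse_attr

    def append(self, item):
        super().append(item)
        # populate the other side of the relationship
        setattr(item, self._reverse_attr, self._owner)

class User:
    def __init__(self):
        self.addresses = BackPopulatingList(self, "user")

class Address:
    user = None

u = User()
a1 = Address()
u.addresses.append(a1)
print(a1.user is u)  # True
```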