sqlalchemy-tickets Mailing List for SQLAlchemy (Page 47)
Brought to you by: zzzeek
From: Mike B. <iss...@bi...> - 2014-09-02 16:11:59
|
New issue 3180: don't fall over for dialects that have discrepancies in table cols vs. index cols reported https://bitbucket.org/zzzeek/sqlalchemy/issue/3180/dont-fall-over-for-dialects-that-have

Mike Bayer: this is happening on Oracle, but we can just guard against this more fundamentally:

```
#!python
from sqlalchemy import create_engine, Table, MetaData, Integer
import mock

e = create_engine("sqlite://")
e.execute("""create table x (a integer, b integer)""")
e.execute("""create index x_i on x(a, b)""")


def mock_get_columns(self, connection, table_name, **kw):
    return [
        {"name": "b", "type": Integer, "primary_key": False}
    ]

with mock.patch.object(e.dialect, "get_columns", mock_get_columns):
    m = MetaData()
    t = Table('x', m, autoload_with=e)
```
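The "guard against this more fundamentally" idea can be sketched with plain data structures. This is a hypothetical illustration (the helper name and shapes are mine, not SQLAlchemy's actual reflection internals): when a dialect reports index columns that are absent from the reflected table columns, skip them instead of raising.

```python
def filter_index_columns(table_columns, index_columns):
    # Keep only the index columns that the dialect also reported as
    # table columns, rather than failing on the discrepancy.
    known = set(table_columns)
    return [c for c in index_columns if c in known]

# With the mocked get_columns() above, only "b" comes back as a table
# column, so "a" is dropped from the reflected index instead of
# producing an error.
assert filter_index_columns(["b"], ["a", "b"]) == ["b"]
```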
|
From: Marek B. <iss...@bi...> - 2014-09-02 16:04:29
|
New issue 3179: Python 3, MSSQL 2008R2 exception in dialect when using sqltap https://bitbucket.org/zzzeek/sqlalchemy/issue/3179/python-3-mssql-2008r2-exception-in-dialect

Marek Baczyński:

```
Traceback (most recent call last):
  File "C:\Python34\lib\runpy.py", line 170, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Python34\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Python34\lib\cProfile.py", line 160, in <module>
    main()
  File "C:\Python34\lib\cProfile.py", line 153, in main
    runctx(code, globs, None, options.outfile, options.sort)
  File "C:\Python34\lib\cProfile.py", line 20, in runctx
    filename, sort)
  File "C:\Python34\lib\profile.py", line 64, in runctx
    prof.runctx(statement, globals, locals)
  File "C:\Python34\lib\cProfile.py", line 100, in runctx
    exec(cmd, globals, locals)
  File "tests/benchmark.py", line 207, in <module>
    sqltap.report(statistics, "report.html")
  File "...\venv\lib\site-packages\sqltap\sqltap.py", line 277, in report
    group = query_groups[str(qstats.text)]
  File "...\venv\lib\site-packages\sqlalchemy\sql\elements.py", line 502, in __str__
    return str(self.compile())
  File "<string>", line 1, in <lambda>
  File "...\venv\lib\site-packages\sqlalchemy\sql\elements.py", line 492, in compile
    return self._compiler(dialect, bind=bind, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\elements.py", line 498, in _compiler
    return dialect.statement_compiler(dialect, self, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 395, in __init__
    Compiled.__init__(self, dialect, statement, **kwargs)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 199, in __init__
    self.string = self.process(self.statement, **compile_kwargs)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 222, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "...\venv\lib\site-packages\sqlalchemy\sql\visitors.py", line 80, in _compiler_dispatch
    return meth(self, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 1544, in visit_select
    t = select._whereclause._compiler_dispatch(self, **kwargs)
  File "...\venv\lib\site-packages\sqlalchemy\sql\visitors.py", line 80, in _compiler_dispatch
    return meth(self, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 662, in visit_clauselist
    for c in clauselist.clauses)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 659, in <genexpr>
    s for s in
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 662, in <genexpr>
    for c in clauselist.clauses)
  File "...\venv\lib\site-packages\sqlalchemy\sql\visitors.py", line 80, in _compiler_dispatch
    return meth(self, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 847, in visit_binary
    return self._generate_generic_binary(binary, opstring, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 864, in _generate_generic_binary
    binary.right._compiler_dispatch(self, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\visitors.py", line 80, in _compiler_dispatch
    return meth(self, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 981, in visit_bindparam
    skip_bind_expression=True)
  File "...\venv\lib\site-packages\sqlalchemy\sql\compiler.py", line 222, in process
    return obj._compiler_dispatch(self, **kwargs)
  File "...\venv\lib\site-packages\sqlalchemy\ext\compiler.py", line 413, in <lambda>
    lambda *arg, **kw: existing(*arg, **kw))
  File "...\venv\lib\site-packages\sqlalchemy\ext\compiler.py", line 451, in __call__
    return fn(element, compiler, **kw)
  File "...\venv\lib\site-packages\sqlalchemy\dialects\mssql\information_schema.py", line 41, in _compile
    if compiler.dialect.server_version_info < base.MS_2005_VERSION:
TypeError: unorderable types: NoneType() < tuple()
```

I worked around it by hacking in venv\lib\site-packages\sqlalchemy\dialects\mssql\information_schema.py:

```python
@compiles(_cast_on_2005)
def _compile(element, compiler, **kw):
    from . import base
    try:
        if compiler.dialect.server_version_info < base.MS_2005_VERSION:
            return compiler.process(element.bindvalue, **kw)
        else:
            return compiler.process(cast(element.bindvalue, Unicode), **kw)
    except TypeError:
        return compiler.process(cast(element.bindvalue, Unicode), **kw)
```

(added try/except) I don't really know if it's a sqlalchemy bug or a sqltap bug; reporting it here for now.
|
From: Mike B. <iss...@bi...> - 2014-08-31 16:12:45
|
New issue 3178: overhaul warning use so that we can parameterize warnings https://bitbucket.org/zzzeek/sqlalchemy/issue/3178/overhaul-warning-use-so-that-we-can

Mike Bayer: took 5 minutes to actually look at the warnings registry. Let's change our long-standing approach and parameterize warnings, e.g. for unicode and other cases where it really helps to see the value, and just skip the registry for these. We will augment warn() to accept *args that can be interpolated; if these are present, we skip the registry. Also, if we just re-implement warnings.warn(), we can do the stack frame work it does anyway and express it in terms of the application outside of SQLAlchemy, without guessing at stacklevel. This applies to #2992 as well, since we want to warn for textual strings and I'd really like to put the string in the warning.
|
From: mjallday <iss...@bi...> - 2014-08-29 15:19:22
|
New issue 3177: Single table polymorphic count issue https://bitbucket.org/zzzeek/sqlalchemy/issue/3177/single-table-polymorphic-count-issue

mjallday: Originally posted at [StackOverflow](https://stackoverflow.com/questions/25222025/sqlalchemy-generates-a-different-query-for-counts-than-expected-with-a-polymorph). When using a polymorphic identity and attempting to count the number of records, incorrect SQL is generated: it counts the entire table rather than the rows matching the polymorphic discriminator. Iterating over all the results works as expected; I've only experienced this bug using `.count()`.

```
#!python
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Table, Column, Unicode, Integer, create_engine, MetaData, func
from sqlalchemy.orm import scoped_session, sessionmaker

metadata = MetaData()
Base = declarative_base(metadata=metadata)

widgets = Table(
    'widgets',
    metadata,
    Column('id', Integer, primary_key=True),
    Column('type', Unicode),
    Column('state', Unicode)
)


class Widget(Base):
    __table__ = widgets

    class types(object):
        FOO_WIDGET = 'foo'
        BAR_WIDGET = 'bar'

    __mapper_args__ = {
        'polymorphic_on': widgets.c.type,
    }


class FooWidget(Widget):
    __mapper_args__ = {
        'polymorphic_identity': Widget.types.FOO_WIDGET
    }

db_engine = create_engine('sqlite:///:memory:', echo=True)
Session = scoped_session(sessionmaker())
Session.configure(bind=db_engine)
metadata.create_all(db_engine)

items = Session.query(FooWidget.id).filter_by(
    state='new'
)

print str(items)
print 'i expect the next statement to print something approximating:'
print '''
select count(*) from widgets where type = 'foo' and state = 'new'
'''
print items.count()

# What this actually prints
'''
2014-08-28 09:55:15,055 INFO sqlalchemy.engine.base.Engine SELECT count(*) AS count_1
FROM widgets, (SELECT widgets.id AS widgets_id
FROM widgets
WHERE widgets.state = ?) AS anon_1
WHERE widgets.type IN (?)
'''
```
|
From: Mike B. <iss...@bi...> - 2014-08-28 15:53:21
|
New issue 3176: faster KeyedTuple https://bitbucket.org/zzzeek/sqlalchemy/issue/3176/faster-keyedtuple

Mike Bayer: collections.namedtuple() is much, much faster at creating tuple instances, and KeyedTuple is actually slower than a plain object. But namedtuple() is very slow at creating new types, as the code to do so uses eval(), stack frame inspection, and other overbuilt things that slow us down too much on a per-query basis. What to do? We can hedge between both approaches. Here is a speed breakdown:

```
#!python
import collections
import timeit

from sqlalchemy.util import KeyedTuple, lightweight_named_tuple


def go1(size):
    nt = collections.namedtuple('a', ['x', 'y', 'z'])
    result = [
        nt(1, 2, 3)
        for i in range(size)
    ]


def go2(size):
    labels = ['x', 'y', 'z']
    result = [
        KeyedTuple([1, 2, 3], labels)
        for i in range(size)
    ]


def go3(size):
    nt = lightweight_named_tuple('a', ['x', 'y', 'z'])
    result = [
        nt([1, 2, 3])
        for i in range(size)
    ]

for size, num in [
    (10, 10000),
    (100, 1000),
    (10000, 100),
    (1000000, 10),
]:
    print "-----------------"
    print "size=%d num=%d" % (size, num)
    print "namedtuple:", timeit.timeit("go1(%s)" % size, "from __main__ import go1", number=num)
    print "keyedtuple:", timeit.timeit("go2(%s)" % size, "from __main__ import go2", number=num)
    print "lw keyed tuple:", timeit.timeit("go3(%s)" % size, "from __main__ import go3", number=num)
```

output:

```
#!
-----------------
size=10 num=10000
namedtuple: 3.60116696358
keyedtuple: 0.257042884827
lw keyed tuple: 0.571335792542
-----------------
size=100 num=1000
namedtuple: 0.362484931946
keyedtuple: 0.24974322319
lw keyed tuple: 0.0887930393219
-----------------
size=10000 num=100
namedtuple: 0.562417030334
keyedtuple: 2.53507685661
lw keyed tuple: 0.607440948486
-----------------
size=1000000 num=10
namedtuple: 5.84964299202
keyedtuple: 28.8070271015
lw keyed tuple: 6.69921588898
```

we can see that namedtuple is very slow when creating lots of distinct types, but that KeyedTuple is *really* slow for a lot of instances on a small number of types. The new lightweight tuple is almost as fast as namedtuple on instance creation and just a bit slower than KeyedTuple at making new types. It is definitely the best option in the graph.
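The ticket doesn't show how the lightweight tuple works internally; as a rough illustration of the idea only (a hypothetical sketch, not SQLAlchemy's actual `lightweight_named_tuple`), a tuple subclass can be built directly with `type()` and `operator.itemgetter` properties, skipping namedtuple's eval()-based code generation so that type creation stays cheap:

```python
import operator


def lightweight_named_tuple_sketch(name, fields):
    # Build a tuple subclass directly; no eval() or source-template
    # rendering, so creating the *type* is cheap.  Like KeyedTuple,
    # instances are constructed from a single sequence rather than
    # positional arguments.
    cls = type(name, (tuple,), {
        f: property(operator.itemgetter(i)) for i, f in enumerate(fields)
    })
    cls._fields = tuple(fields)
    return cls


nt = lightweight_named_tuple_sketch("result", ["x", "y", "z"])
row = nt([1, 2, 3])
assert row == (1, 2, 3) and row.y == 2
```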
|
From: Mike B. <iss...@bi...> - 2014-08-28 03:45:02
|
New issue 3175: create resultproxy._getter to remove overhead of column lookups https://bitbucket.org/zzzeek/sqlalchemy/issue/3175/create-resultproxy_getter-to-remove Mike Bayer: proof of concept, see if we can get into the C code to speed this up note also it looks like collections.namedtuple() might be faster in the majority of cases, tricky one though as it is slow on the init ``` #!diff diff --git a/lib/sqlalchemy/engine/result.py b/lib/sqlalchemy/engine/result.py index 06a81aa..8e43709 100644 --- a/lib/sqlalchemy/engine/result.py +++ b/lib/sqlalchemy/engine/result.py @@ -268,6 +268,19 @@ class ResultMetaData(object): # high precedence keymap. keymap.update(primary_keymap) + def _getter(self, key): + try: + processor, obj, index = self._keymap[key] + except KeyError: + processor, obj, index = self._parent._key_fallback(key) + + if index is None: + raise exc.InvalidRequestError( + "Ambiguous column name '%s' in result set! " + "try 'use_labels' option on select statement." % key) + + return operator.itemgetter(index) + @util.pending_deprecation("0.8", "sqlite dialect uses " "_translate_colname() now") def _set_keymap_synonym(self, name, origname): @@ -517,6 +530,9 @@ class ResultProxy(object): """ return self.context.isinsert + def _getter(self, key): + return self._metadata._getter(key) + def _cursor_description(self): """May be overridden by subclasses.""" diff --git a/lib/sqlalchemy/orm/loading.py b/lib/sqlalchemy/orm/loading.py index 232eb89..098bf73 100644 --- a/lib/sqlalchemy/orm/loading.py +++ b/lib/sqlalchemy/orm/loading.py @@ -12,7 +12,7 @@ the functions here are called primarily by Query, Mapper, as well as some of the attribute loading strategies. """ - +from __future__ import absolute_import from .. import util from . import attributes, exc as orm_exc, state as statelib @@ -20,6 +20,7 @@ from .interfaces import EXT_CONTINUE from ..sql import util as sql_util from .util import _none_set, state_str from .. 
import exc as sa_exc +import collections _new_runid = util.counter() @@ -50,10 +51,13 @@ def instances(query, cursor, context): (process, labels) = \ list(zip(*[ query_entity.row_processor(query, - context, custom_rows) + context, custom_rows, cursor) for query_entity in query._entities ])) + if not custom_rows and not single_entity: + keyed_tuple = collections.namedtuple('result', labels) + while True: context.progress = {} context.partials = {} @@ -72,8 +76,9 @@ def instances(query, cursor, context): elif single_entity: rows = [process[0](row, None) for row in fetch] else: - rows = [util.KeyedTuple([proc(row, None) for proc in process], - labels) for row in fetch] + rows = [ + keyed_tuple(*[proc(row, None) for proc in process]) + for row in fetch] if filtered: rows = util.unique_list(rows, filter_fn) diff --git a/lib/sqlalchemy/orm/query.py b/lib/sqlalchemy/orm/query.py index 12e11b2..10097d0 100644 --- a/lib/sqlalchemy/orm/query.py +++ b/lib/sqlalchemy/orm/query.py @@ -3082,7 +3082,7 @@ class _MapperEntity(_QueryEntity): return ret - def row_processor(self, query, context, custom_rows): + def row_processor(self, query, context, custom_rows, result): adapter = self._get_entity_clauses(query, context) if context.adapter and adapter: @@ -3344,7 +3344,7 @@ class _BundleEntity(_QueryEntity): for ent in self._entities: ent.setup_context(query, context) - def row_processor(self, query, context, custom_rows): + def row_processor(self, query, context, custom_rows, result): procs, labels = zip( *[ent.row_processor(query, context, custom_rows) for ent in self._entities] @@ -3473,15 +3473,17 @@ class _ColumnEntity(_QueryEntity): def _resolve_expr_against_query_aliases(self, query, expr, context): return query._adapt_clause(expr, False, True) - def row_processor(self, query, context, custom_rows): + def row_processor(self, query, context, custom_rows, result): column = self._resolve_expr_against_query_aliases( query, self.column, context) if context.adapter: column = 
context.adapter.columns[column] + getter = result._getter(column) + def proc(row, result): - return row[column] + return getter(row) return proc, self._label_name diff --git a/lib/sqlalchemy/orm/strategies.py b/lib/sqlalchemy/orm/strategies.py index c3edbf6..27dcce5 100644 --- a/lib/sqlalchemy/orm/strategies.py +++ b/lib/sqlalchemy/orm/strategies.py @@ -165,8 +165,10 @@ class ColumnLoader(LoaderStrategy): if adapter: col = adapter.columns[col] if col is not None and col in row: + getter = row._parent._getter(col) + def fetch_col(state, dict_, row): - dict_[key] = row[col] + dict_[key] = getter(row) return fetch_col, None, None else: def expire_for_non_present_col(state, dict_, row): ``` |
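The core trick in the patch above is resolving a column key once, up front, and handing back an `operator.itemgetter` to use on every row, instead of repeating the keymap lookup per row. A stripped-down illustration, with a plain dict and tuples standing in for the result metadata (names here are illustrative, not the actual API):

```python
import operator

# keymap: column key -> positional index, as ResultMetaData keeps it
keymap = {"id": 0, "name": 1}


def make_getter(key):
    # Resolve the (possibly slow or ambiguous) key lookup a single
    # time...
    index = keymap[key]
    # ...and return a fast positional getter to apply per row.
    return operator.itemgetter(index)


getter = make_getter("name")
rows = [(1, "ed"), (2, "wendy")]
assert [getter(r) for r in rows] == ["ed", "wendy"]
```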
|
From: Mike B. <iss...@bi...> - 2014-08-27 23:44:26
|
New issue 3174: support text() sent to indexes https://bitbucket.org/zzzeek/sqlalchemy/issue/3174/support-text-sent-to-indexes

Mike Bayer: this works in general, but fails on at least the Postgresql dialect because it links each expression to a column. See https://bitbucket.org/zzzeek/alembic/issue/222/support-functional-indexes-with for how Alembic works around this.

```
#!python
from sqlalchemy import *
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql

m = MetaData()
t = Table('x', m)
idx = Index('foo', text("lower(c)"))
idx._set_parent(t)

print CreateIndex(idx).compile(dialect=postgresql.dialect())
```
|
From: Mike B. <iss...@bi...> - 2014-08-25 21:12:47
|
New issue 3173: add count_matched_rows API for all MySQL dialects https://bitbucket.org/zzzeek/sqlalchemy/issue/3173/add-count_matched_rows-api-for-all-mysql

Mike Bayer: includes:

* `count_matched_rows` flag
* documentation
* implementation for all dialects, MySQLdb, MySQL-connector, OurSQL, etc.
* unit tests, at least against `create_connect_args()`

basic idea:

```
#!diff
diff --git a/lib/sqlalchemy/dialects/mysql/base.py b/lib/sqlalchemy/dialects/mysql/base.py
index 012d178..0327043 100644
--- a/lib/sqlalchemy/dialects/mysql/base.py
+++ b/lib/sqlalchemy/dialects/mysql/base.py
@@ -2256,10 +2256,12 @@ class MySQLDialect(default.DefaultDialect):
         })
     ]
 
-    def __init__(self, isolation_level=None, **kwargs):
+    def __init__(
+            self, isolation_level=None, count_matched_rows=True, **kwargs):
         kwargs.pop('use_ansiquotes', None)  # legacy
         default.DefaultDialect.__init__(self, **kwargs)
         self.isolation_level = isolation_level
+        self.count_matched_rows = count_matched_rows
 
     def on_connect(self):
         if self.isolation_level is not None:
```

Responsible: sqlalchemy_sprinters
|
From: Tomas M. <iss...@bi...> - 2014-08-25 10:34:28
|
New issue 3172: misleading error message in case of missing permissions to the parent directory (sqlite engine) https://bitbucket.org/zzzeek/sqlalchemy/issue/3172/misleading-error-message-in-case-of

Tomas Machalek: In a special case (described below), when trying to update a sqlite3 database file located in a directory I have no write permissions to, I get quite a misleading error message:

*unable to open database file "insert into ......."*

I.e. it looks like SQLAlchemy thinks the SQL is in fact a path to the database.

### How to replicate ###

(tested on Linux) Manually (i.e. not using Python code) create a sqlite3 database, e.g. **/home/you/data/test.db**, with some table in it:

```
#!sql
CREATE TABLE cache (key text primary key, value text);
```

Remove your write permissions from the **data** directory (i.e. not from the file itself!). Run the following code:

```
#!python
from sqlalchemy import create_engine

en = create_engine('sqlite:////home/you/data/test.db')
en.execute("INSERT INTO cache (key, value) VALUES ('foo', 'bar')")
```
|
From: Mike B. <iss...@bi...> - 2014-08-21 15:11:58
|
New issue 3171: remove "resurrect" event and related elements https://bitbucket.org/zzzeek/sqlalchemy/issue/3171/remove-resurrect-event-and-related

Mike Bayer: this seems to be a leftover from the "mutable" days, so let's remove it totally; it's not called anywhere.
|
From: Mike B. <iss...@bi...> - 2014-08-21 00:26:20
|
New issue 3170: use of singletons in 0.9 causes non-intuitive folding of constants https://bitbucket.org/zzzeek/sqlalchemy/issue/3170/use-of-singletons-in-09-causes-non

Mike Bayer:

```
#!python
from sqlalchemy import select, null
from sqlalchemy.sql.elements import Null

print select([null(), null()])
print select([Null(), Null()])
```

```
#!sql
SELECT NULL AS anon_1
SELECT NULL AS anon_1, NULL AS anon_2
```

this is due to the singleton. It's totally a regression, so it is critical that something is figured out here.
|
From: Richard F. <iss...@bi...> - 2014-08-20 21:55:11
|
New issue 3169: _fetch_implicit_returning raises TypeError if INSERT FROM SELECT inserts no rows https://bitbucket.org/zzzeek/sqlalchemy/issue/3169/_fetch_implicit_returning-raises-typeerror

Richard Frank: I'm inserting into a table using the results of a query. When that query returns no rows, a TypeError is raised. A test example:

```
#!python
from sqlalchemy import create_engine, String, Column, Table, MetaData, \
    literal_column, select

engine = create_engine('postgresql://rich@localhost/test', echo=True)
metadata = MetaData()

with engine.connect() as conn:
    tbl = Table('test_table', metadata,
                Column('id', String, primary_key=True))
    tbl.create(bind=conn)
    conn.execute(
        tbl.insert()
        .from_select(
            [tbl.c.id],
            select([literal_column("'id'")])
            .where(literal_column("false")))
    )
```

dies with the following error on 0.9.7 with psycopg2 2.5.3 and postgresql 9.3.4:

```
#!python
2014-08-20 17:34:19,054 INFO sqlalchemy.engine.base.Engine select version()
2014-08-20 17:34:19,054 INFO sqlalchemy.engine.base.Engine {}
2014-08-20 17:34:19,057 INFO sqlalchemy.engine.base.Engine select current_schema()
2014-08-20 17:34:19,057 INFO sqlalchemy.engine.base.Engine {}
2014-08-20 17:34:19,058 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2014-08-20 17:34:19,058 INFO sqlalchemy.engine.base.Engine {}
2014-08-20 17:34:19,059 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
2014-08-20 17:34:19,059 INFO sqlalchemy.engine.base.Engine {}
2014-08-20 17:34:19,060 INFO sqlalchemy.engine.base.Engine show standard_conforming_strings
2014-08-20 17:34:19,060 INFO sqlalchemy.engine.base.Engine {}
2014-08-20 17:34:19,062 INFO sqlalchemy.engine.base.Engine
CREATE TABLE test_table (
    id VARCHAR NOT NULL,
    PRIMARY KEY (id)
)
2014-08-20 17:34:19,062 INFO sqlalchemy.engine.base.Engine {}
2014-08-20 17:34:19,069 INFO sqlalchemy.engine.base.Engine COMMIT
2014-08-20 17:34:19,071 INFO sqlalchemy.engine.base.Engine INSERT INTO test_table (id) SELECT 'id' WHERE false RETURNING test_table.id
2014-08-20 17:34:19,071 INFO sqlalchemy.engine.base.Engine {}
Traceback (most recent call last):
  File "test.py", line 19, in <module>
    .where(literal_column("false")))
  File "/Users/rich/.virtualenvs/test/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
    return meth(self, multiparams, params)
  File "/Users/rich/.virtualenvs/test/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 321, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/Users/rich/.virtualenvs/test/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 826, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/Users/rich/.virtualenvs/test/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 978, in _execute_context
    context._fetch_implicit_returning(result)
  File "/Users/rich/.virtualenvs/test/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 815, in _fetch_implicit_returning
    ipk.append(row[c])
TypeError: 'NoneType' object has no attribute '__getitem__'
```

Assuming that functionality isn't needed, it looks like this can be worked around by setting `implicit_returning=False` on the engine or table.
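The traceback bottoms out subscripting a row that is `None` (the RETURNING clause produced no rows). A minimal sketch of the obvious guard, using a hypothetical helper with dicts standing in for result rows, not the actual `_fetch_implicit_returning`:

```python
def fetch_implicit_returning_sketch(row, columns):
    # row comes back as None when the INSERT .. RETURNING matched no
    # rows; guard before subscripting instead of raising TypeError.
    if row is None:
        return None
    return [row[c] for c in columns]


# no row returned: give back None rather than blowing up
assert fetch_implicit_returning_sketch(None, ["id"]) is None
# normal case: collect the returned primary-key values
assert fetch_implicit_returning_sketch({"id": 5}, ["id"]) == [5]
```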
|
From: oargon <iss...@bi...> - 2014-08-20 11:58:27
|
New issue 3168: Connection checkouts from pool are not logged https://bitbucket.org/zzzeek/sqlalchemy/issue/3168/connection-checkouts-from-pool-are-not

oargon: I remember this working in previous versions, but I'm not sure when it stopped. It seems there are no longer log lines when a connection is checked out from the pool, only when it is returned to the pool. Tested with 0.9.7. The output from running the attached test file is:

DEBUG:sqlalchemy.pool.QueuePool:Created new connection <pymysql.connections.Connection object at 0x7f129737d3d0>
DEBUG:sqlalchemy.pool.QueuePool:Connection <pymysql.connections.Connection object at 0x7f129737d3d0> being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection <pymysql.connections.Connection object at 0x7f129737d3d0> rollback-on-return
DEBUG:sqlalchemy.pool.QueuePool:Connection <pymysql.connections.Connection object at 0x7f129737d3d0> being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection <pymysql.connections.Connection object at 0x7f129737d3d0> rollback-on-return
|
From: Mario B. R. <iss...@bi...> - 2014-08-17 20:57:10
|
New issue 3167: Key updating after insert is triggering validations https://bitbucket.org/zzzeek/sqlalchemy/issue/3167/key-updating-after-insert-is-triggering

Mario Benito Ríos: With the MySQL backend, on commit after creating a new object with an "autoincrement" PK, the primary-key update is triggering the validator. See https://bitbucket.org/zzzeek/sqlalchemy/src/2de7f94739ec1873e1dce48797e1e6f12044cf4c/lib/sqlalchemy/orm/persistence.py?at=master#cl-650 The documentation says only user assignments trigger validations, so this case shouldn't trigger validation.
|
From: lieryan <iss...@bi...> - 2014-08-15 03:20:36
|
New issue 3166: autoincrement should not be True for non-integer primary key https://bitbucket.org/zzzeek/sqlalchemy/issue/3166/autoincrement-should-not-be-true-for-non

lieryan: The behavior of the autoincrement attribute is misleading when the primary key is not an integer primary key. Test case:

```
#!python
from sqlalchemy import Column, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Table(Base):
    __tablename__ = 'table'
    code = Column(String(50), primary_key=True)

table = Base.metadata.tables['table']
pk_column = table.primary_key.columns['code']
assert not pk_column.autoincrement
```
|
From: Mike B. <iss...@bi...> - 2014-08-14 23:17:49
|
New issue 3165: shouldnt the keys be sorted for the key in cached connections https://bitbucket.org/zzzeek/sqlalchemy/issue/3165/shouldnt-the-keys-be-sorted-for-the-key-in

Mike Bayer:

```
#!diff
--- a/lib/sqlalchemy/engine/base.py
+++ b/lib/sqlalchemy/engine/base.py
@@ -805,7 +805,7 @@ class Connection(Connectable):
         dialect = self.dialect
         if 'compiled_cache' in self._execution_options:
-            key = dialect, elem, tuple(keys), len(distilled_params) > 1
+            key = dialect, elem, sorted(tuple(keys)), len(distilled_params) > 1
             if key in self._execution_options['compiled_cache']:
                 compiled_sql = self._execution_options['compiled_cache'][key]
             else:
```

just a thought.
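One wrinkle worth noting with the diff as written: `sorted()` returns a list, and a list nested inside the cache-key tuple makes the whole key unhashable, so it could not be used as a dict key at all. Wrapping it as `tuple(sorted(keys))` keeps the key hashable. A quick stdlib demonstration:

```python
keys = ("b", "a")

# sorted() yields a list, so a key built this way can't go in a dict
bad_key = ("dialect", "elem", sorted(keys), False)
try:
    hash(bad_key)
    raised = False
except TypeError:
    raised = True
assert raised

# wrapping the sorted result in tuple() restores hashability
good_key = ("dialect", "elem", tuple(sorted(keys)), False)
assert good_key in {good_key: "compiled"}
```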
|
From: Robert W. <iss...@bi...> - 2014-08-14 16:07:37
|
New issue 3164: Broken links to core & orm tutorials from Library page on sqlalchemy.org https://bitbucket.org/zzzeek/sqlalchemy/issue/3164/broken-links-to-core-orm-tutorials-from

Robert Warner: This is a bug related to the site; please let me know if there is a better place to report site issues. I had a look but didn't see a repo specifically for the website. There are two broken links on the Library page, in the first paragraph of http://www.sqlalchemy.org/library.html#tutorials, that lead to the Core & ORM tutorials. The links currently go to:

core - http://docs.sqlalchemy.org/core/tutorial.html
orm - http://docs.sqlalchemy.org/orm/tutorial.html

I believe they should be going to:

core - http://sqlalchemy.readthedocs.org/en/rel_0_9/core/tutorial.html
orm - http://sqlalchemy.readthedocs.org/en/rel_0_9/orm/tutorial.html
|
From: Mike B. <iss...@bi...> - 2014-08-12 16:12:34
|
New issue 3163: use deque() for event lists so that remove() raises during an event https://bitbucket.org/zzzeek/sqlalchemy/issue/3163/use-deque-for-event-lists-so-that-remove

Mike Bayer:

quick test:

```
#!python
import unittest

from sqlalchemy import event
from sqlalchemy.orm import Session


class Receiver(object):
    def __init__(self, instance, remove):
        self.instance = instance
        self.remove = remove

    def __call__(self, session, context):
        self.instance.calls += 1
        if self.remove:
            event.remove(self.instance.session, 'after_flush_postexec', self)

    @classmethod
    def register(cls, instance, remove):
        self = cls(instance, remove=remove)
        event.listen(instance.session, 'after_flush_postexec', self)


class FlushingTest(unittest.TestCase):
    def setUp(self):
        self.session = Session()
        self.calls = 0

    def flush(self):
        self.session.dispatch.after_flush_postexec(self.session, {})

    def testTripleRemove(self):
        Receiver.register(self, remove=True)
        Receiver.register(self, remove=True)
        Receiver.register(self, remove=True)
        self.flush()
        self.assertEqual(self.calls, 3)

unittest.main()
```

which fails, because we're iterating through a list. This patch replaces the lists with a deque, just as fast as a list or faster; all tests pass:

```
#!diff
diff --git a/lib/sqlalchemy/event/attr.py b/lib/sqlalchemy/event/attr.py
index 7641b59..dba1063 100644
--- a/lib/sqlalchemy/event/attr.py
+++ b/lib/sqlalchemy/event/attr.py
@@ -37,6 +37,7 @@ from . import registry
 from . import legacy
 from itertools import chain
 import weakref
+import collections

 class RefCollection(object):
@@ -96,8 +97,8 @@ class _DispatchDescriptor(RefCollection):
                 self.update_subclass(cls)
             else:
                 if cls not in self._clslevel:
-                    self._clslevel[cls] = []
-                self._clslevel[cls].insert(0, event_key._listen_fn)
+                    self._clslevel[cls] = collections.deque()
+                self._clslevel[cls].appendleft(event_key._listen_fn)
         registry._stored_in_collection(event_key, self)

     def append(self, event_key, propagate):
@@ -113,13 +114,13 @@ class _DispatchDescriptor(RefCollection):
                 self.update_subclass(cls)
             else:
                 if cls not in self._clslevel:
-                    self._clslevel[cls] = []
+                    self._clslevel[cls] = collections.deque()
                 self._clslevel[cls].append(event_key._listen_fn)
         registry._stored_in_collection(event_key, self)

     def update_subclass(self, target):
         if target not in self._clslevel:
-            self._clslevel[target] = []
+            self._clslevel[target] = collections.deque()
         clslevel = self._clslevel[target]
         for cls in target.__mro__[1:]:
             if cls in self._clslevel:
@@ -145,7 +146,7 @@ class _DispatchDescriptor(RefCollection):
         to_clear = set()
         for dispatcher in self._clslevel.values():
             to_clear.update(dispatcher)
-            dispatcher[:] = []
+            dispatcher.clear()
         registry._clear(self, to_clear)

     def for_modify(self, obj):
@@ -287,7 +288,7 @@ class _ListenerCollection(RefCollection, _CompoundListener):
         self.parent_listeners = parent._clslevel[target_cls]
         self.parent = parent
         self.name = parent.__name__
-        self.listeners = []
+        self.listeners = collections.deque()
         self.propagate = set()

     def for_modify(self, obj):
@@ -337,7 +338,7 @@ class _ListenerCollection(RefCollection, _CompoundListener):
     def clear(self):
         registry._clear(self, self.listeners)
         self.propagate.clear()
-        self.listeners[:] = []
+        self.listeners.clear()

 class _JoinedDispatchDescriptor(object):
diff --git a/lib/sqlalchemy/event/registry.py b/lib/sqlalchemy/event/registry.py
index a34de3c..ba2f671 100644
--- a/lib/sqlalchemy/event/registry.py
+++ b/lib/sqlalchemy/event/registry.py
@@ -243,4 +243,4 @@ class _EventKey(object):

     def prepend_to_list(self, owner, list_):
         _stored_in_collection(self, owner)
-        list_.insert(0, self._listen_fn)
+        list_.appendleft(self._listen_fn)
```

this is a totally easy win.
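The failure the test above demonstrates, and the behavior the deque buys, can be shown with plain Python collections; a minimal sketch, independent of SQLAlchemy:

```python
from collections import deque

# With a plain list, removing the current element while iterating
# silently skips the following one -- a listener never gets called.
listeners = ['a', 'b', 'c']
called = []
for fn in listeners:
    called.append(fn)
    listeners.remove(fn)
assert called == ['a', 'c']   # 'b' was skipped without any error

# With a deque, the same mutation fails loudly instead.
listeners = deque(['a', 'b', 'c'])
raised = False
try:
    for fn in listeners:
        listeners.remove(fn)
except RuntimeError:          # "deque mutated during iteration"
    raised = True
assert raised
```

The list silently drops every other callback, which is exactly the class of bug that is hardest to notice in an event system; the deque turns it into an immediate error.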
From: eevee <iss...@bi...> - 2014-08-11 21:11:46
New issue 3162: ORM evaluator is surprisingly inflexible https://bitbucket.org/zzzeek/sqlalchemy/issue/3162/orm-evaluator-is-surprisingly-inflexible

eevee:

The current problem I'm having is that it doesn't know how to evaluate `op("&")`, but from the looks of it, it also won't understand hybrid properties (which would seem to exist for exactly this purpose!), custom operators, or custom `ClauseElement` subclasses. There doesn't seem to be any way to extend it at all, and it has barely been updated since it was written.
From: Jonathan S. <iss...@bi...> - 2014-08-11 13:29:22
New issue 3161: Support partial unique constraints on PostgreSQL https://bitbucket.org/zzzeek/sqlalchemy/issue/3161/support-partial-unique-constrains-on

Jonathan Stoppani:

This is already supported for [normal indexes](http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html?highlight=postgres#partial-indexes), but if I try to pass the `postgresql_where` keyword arg to `UniqueConstraint`, it fails with the following error:

    sqlalchemy.exc.ArgumentError: Argument 'postgresql_where' is not accepted by dialect 'postgresql' on behalf of <class 'sqlalchemy.sql.schema.UniqueConstraint'>
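Until `UniqueConstraint` grows support for this, the partial-index route the reporter links to already covers the unique case: a unique `Index` accepts `postgresql_where`. A minimal sketch (the table and index names here are made up for illustration):

```python
from sqlalchemy import Table, Column, MetaData, Boolean, Text, Index
from sqlalchemy.schema import CreateIndex
from sqlalchemy.dialects import postgresql

metadata = MetaData()
# hypothetical table: only active rows must have a unique slug
events = Table(
    'events', metadata,
    Column('slug', Text),
    Column('active', Boolean),
)

# unique=True plus postgresql_where gives the same enforcement a
# partial UniqueConstraint would.
idx = Index(
    'uq_events_active_slug', events.c.slug,
    unique=True,
    postgresql_where=events.c.active,
)

sql = str(CreateIndex(idx).compile(dialect=postgresql.dialect()))
assert 'UNIQUE INDEX' in sql
assert 'WHERE active' in sql
```

At the SQL level a partial unique index and a partial unique constraint are enforced identically; the difference is only in how the schema object is declared.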
From: jvanasco <iss...@bi...> - 2014-08-07 22:03:39
New issue 3160: postgresql - to_tsquery docs and implementation detail https://bitbucket.org/zzzeek/sqlalchemy/issue/3160/postgresql-to_tsquery-docs-and

jvanasco:

I recently realized some implementation details of `to_tsquery` that are incompatible with the docs I drafted and with how SQLAlchemy integrates it. I'm not sure how to handle this. I based the docs/examples on existing tests and text.

So we have this as the first bit of "Full Text Search" (http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#full-text-search):

    select([sometable.c.text.match("search string")])

    SELECT text @@ to_tsquery('search string') FROM table

Well, if we put this into psql...

    badsql=> select to_tsquery('search string') ;
    ERROR:  syntax error in tsquery: "search string"

That's because tsquery rigidly enforces its input. The text must either be tokenized and joined with acceptable operators:

    select to_tsquery('cat & rat');

be quoted:

    select to_tsquery('''search string''');

or use the alternate function:

    select plainto_tsquery('search string');

So, these are acceptable:

    select to_tsquery('search & string');
    select to_tsquery('''search string''');
    select plainto_tsquery('search string');

but this is not:

    select to_tsquery('search string');

I'm not sure of the best way to handle this nuance in the docs.
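The tokenize-and-join rule described above is easy to apply client-side before the string ever reaches `to_tsquery`; a minimal sketch (the helper name `as_tsquery_input` is a hypothetical illustration, not a SQLAlchemy or PostgreSQL API):

```python
import re

def as_tsquery_input(text):
    # Split free-form input into word tokens and join them with the
    # AND operator, producing input to_tsquery() will accept --
    # roughly what plainto_tsquery() does server-side.
    tokens = re.findall(r"\w+", text)
    return " & ".join(tokens)

assert as_tsquery_input("search string") == "search & string"
assert as_tsquery_input("cat rat") == "cat & rat"
```

Delegating to `plainto_tsquery` server-side is usually the simpler option, since it also handles stemming and stop words; a client-side join like this only makes sense when the query must stay a `to_tsquery` expression.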
From: Mike B. <iss...@bi...> - 2014-08-07 14:13:27
New issue 3159: can't insert NULL into a JSON column if column is present https://bitbucket.org/zzzeek/sqlalchemy/issue/3159/cant-insert-null-into-a-json-column-if

Mike Bayer:

the serializer grabs None if it is present. for backwards compat we should support null(), which currently also blows up:

```
#!python
from sqlalchemy import create_engine, Column, Table, MetaData, func, select, null
from sqlalchemy.dialects.postgresql import JSON

e = create_engine("postgresql://scott:tiger@localhost/test", echo='debug')
c = e.connect()
t = c.begin()

table = Table('json_test', MetaData(), Column('data', JSON))
table.create(c)

c.execute(table.insert(), [{"data": {"foo": "bar"}}, {"data": None}])

assert c.scalar(
    select([func.count('*')]).
    select_from(table).where(table.c.data == null()))

# probable workaround, also fails to invoke
#c.execute(table.insert(), [{"data": {"foo": "bar"}}, {"data": null()}])
```
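The underlying ambiguity is that a JSON column has two distinct "empty" states, which a bare Python None cannot distinguish; a quick stdlib illustration:

```python
import json

# A serializer that feeds every Python None through json.dumps()
# produces the JSON literal 'null' -- which is a perfectly valid,
# non-NULL value at the SQL level. That leaves no way to request an
# actual SQL NULL, which is why an explicit null() marker is needed.
assert json.dumps(None) == 'null'
assert json.dumps({"foo": "bar"}) == '{"foo": "bar"}'
```

So `data IS NULL` and `data = 'null'::json` match different rows, and the insert path has to let the caller pick which one they mean.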
From: Gabor G. <iss...@bi...> - 2014-08-07 11:44:14
New issue 3158: orm.with_polymorphic() does not load columns of descendant classes https://bitbucket.org/zzzeek/sqlalchemy/issue/3158/ormwith_polymorphic-does-not-load-columns

Gabor Gombas:

Hi,

The documentation of orm.with_polymorphic() says:

> Using this method will ensure that each descendant mapper’s tables are included in the FROM clause, and will allow filter() criterion to be used against those tables. The resulting instances will also have those columns already loaded so that no “post fetch” of those columns will be required.

However, that does not seem to be the case in the attached example: the subquery does load b.prop, but the top-level query does not, so accessing it still generates an extra query.
From: yedpodtrzitko <iss...@bi...> - 2014-08-05 17:59:27
New issue 3157: Invalid default value for Array https://bitbucket.org/zzzeek/sqlalchemy/issue/3157/invalid-default-value-for-array

yedpodtrzitko:

I'm not sure if I should report it here or in the Alembic bugtracker, but as Alembic uses SQLAlchemy for generating migration content, I'll try it here. I have the following column in the DB:

```
#!bash
\d+ "objects._external_txns";

   Column    |  Type  |           Modifiers           | Storage  | Stats target | Description
-------------+--------+-------------------------------+----------+--------------+-------------
 product_ids | text[] | not null default '{}'::text[] | extended |              |
```

When I try to use Alembic to generate a migration, the result looks like this:

```
#!python
sa.Column('product_ids', postgresql.ARRAY(TEXT()), server_default="{}'::text[]", autoincrement=False, nullable=False)
```

When I try to run the generated migration, I get the following error:

```
#!python
INFO  [alembic.migration] Context impl PostgresqlImpl.
INFO  [alembic.migration] Will assume transactional DDL.
INFO  [alembic.migration] Running upgrade None -> 148fa0264d4, empty message
'{}'::text[]'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "shell.py", line 28, in <module>
    ctx = domain_mgr.ensure_domain(DOMAIN)
  File "fireline/domains.py", line 263, in ensure_domain
    flush_redis=flush_redis)
  File "fireline/domains.py", line 226, in initialize_domain
    objects.update_system_objects(domain_db_ctx)
  File "fireline/objects.py", line 3198, in update_system_objects
    upgrade(_get_config(), 'head')
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/command.py", line 125, in upgrade
    script.run_env()
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/script.py", line 203, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/util.py", line 212, in load_python_file
    module = load_module_py(module_id, path)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/compat.py", line 58, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "alembic/env.py", line 129, in <module>
    run_migrations_online()
  File "alembic/env.py", line 121, in run_migrations_online
    context.run_migrations()
  File "<string>", line 7, in run_migrations
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/environment.py", line 688, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/migration.py", line 258, in run_migrations
    change(**kw)
  File "alembic/versions/148fa0264d4_.py", line 425, in upgrade
    sa.PrimaryKeyConstraint('_id', name=u'objects._external_txns_pkey')
  File "<string>", line 7, in create_table
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/operations.py", line 713, in create_table
    self._table(name, *columns, **kw)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/ddl/impl.py", line 149, in create_table
    self._exec(schema.CreateTable(table))
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/alembic/ddl/impl.py", line 76, in _exec
    conn.execute(construct, *multiparams, **params)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 729, in execute
    return meth(self, multiparams, params)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 69, in _execute_on_connection
    return connection._execute_ddl(self, multiparams, params)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 783, in _execute_ddl
    compiled
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 958, in _execute_context
    context)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1160, in _handle_dbapi_exception
    exc_info
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 951, in _execute_context
    context)
  File "/Users/yed/dev/sd/event-core/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 436, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (ProgrammingError) unterminated quoted string at or near "' NOT NULL,
	CONSTRAINT "objects._external_txns_pkey" PRIMARY KEY (_id)
)
"
LINE 14: 	product_ids TEXT[] DEFAULT '{}'::text[]' NOT NULL,
                                                  ^
 '\nCREATE TABLE "objects._external_txns" (\n\t_id UUID NOT NULL, \n\t_created_at TIMESTAMP WITHOUT TIME ZONE NOT NULL, \n\t_modified_at TIMESTAMP WITHOUT TIME ZONE NOT NULL, \n\tvendor VARCHAR NOT NULL, \n\texternal_id TEXT NOT NULL, \n\tnonce_identity UUID NOT NULL, \n\tidentity UUID NOT NULL, \n\tpayload BYTEA, \n\tcurrency TEXT, \n\tamount NUMERIC, \n\tdata JSON, \n\tproduct_ids TEXT[] DEFAULT \'{}\'::text[]\' NOT NULL, \n\tCONSTRAINT "objects._external_txns_pkey" PRIMARY KEY (_id)\n)\n\n' {}
```

SqlAlchemy: 0.9.7
Postgres: 9.3.5
Alembic: 0.6.5
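The generated `server_default="{}'::text[]"` has lost its opening quote, which is exactly what the "unterminated quoted string" error points at. A small illustration of the correctly quoted form (the helper `pg_array_default` is hypothetical, for illustration only, not Alembic or SQLAlchemy code):

```python
def pg_array_default(literal, pgtype):
    # Render a quoted literal with a Postgres cast, e.g. '{}'::text[].
    return "'%s'::%s" % (literal, pgtype)

good = pg_array_default("{}", "text[]")
assert good == "'{}'::text[]"

# What the generated migration actually emitted -- the leading quote is
# gone, so Postgres sees an unterminated string at the trailing quote.
broken = "{}'::text[]"
assert not broken.startswith("'")
```

The reported default from `pg_attrdef` already contains its own quoting and cast; wrapping it in another layer of string quoting, or stripping half of it, is where round-trips like this go wrong.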
From: Serge K. <iss...@bi...> - 2014-08-05 14:30:59
New issue 3156: PickleType does not work with raw SQL queries https://bitbucket.org/zzzeek/sqlalchemy/issue/3156/pickletype-does-not-work-with-raw-sql

Serge Koval:

I'm working on a query precompilation layer and stumbled upon a bug: a PickleType column does not automatically unpickle data received from the database.

    [10]: m = db.session.query(Report).from_statement('SELECT * FROM reports LIMIT 1').all()
    [11]: m[0].pickled_data
    ?}q(UurlqX634/654/aqgvppyv.jpgUserverqUcheetahUmodeqKUsizeqM?Du.

Just in case: this is on PostgreSQL.
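What the attribute holds there is the raw pickled bytes: `PickleType`'s result processor is what normally calls `pickle.loads()` on fetch, and rows arriving via a textual `from_statement()` query are bypassing it. Unpickling by hand shows the data itself is intact; a minimal sketch with made-up data (not the actual row above):

```python
import pickle

# Stand-in for what the driver hands back for a PickleType column when
# no result processing is applied: the serialized bytes themselves.
row_value = pickle.dumps({"server": "cheetah", "mode": 1})

# The manual equivalent of what PickleType's result processor would do.
data = pickle.loads(row_value)
assert data == {"server": "cheetah", "mode": 1}
```

As a workaround until the bug is fixed, the raw value can be unpickled explicitly like this after the query returns.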