sqlobject-cvs Mailing List for SQLObject (Page 185)
SQLObject is a Python ORM.
Brought to you by: ianbicking, phd
You can subscribe to this list here.
Archive (messages per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2003 |     |     | 9   | 74  | 29  | 16  | 28  | 10  | 57  | 9   | 29  | 12  |
| 2004 | 7   | 14  | 6   | 3   | 12  | 34  | 9   | 29  | 22  | 2   | 15  | 52  |
| 2005 | 47  | 78  | 14  | 35  | 33  | 16  | 26  | 63  | 40  | 96  | 96  | 123 |
| 2006 | 159 | 144 | 64  | 31  | 88  | 48  | 16  | 64  | 87  | 92  | 56  | 76  |
| 2007 | 94  | 103 | 126 | 123 | 85  | 11  | 130 | 47  | 65  | 70  | 12  | 11  |
| 2008 | 30  | 55  | 88  | 20  | 50  |     | 38  | 1   | 9   | 5   | 6   | 39  |
| 2009 | 8   | 16  | 3   | 33  | 44  | 1   | 10  | 33  | 74  | 22  |     | 15  |
| 2010 | 28  | 22  | 46  | 29  | 1   | 1   | 27  | 8   | 5   | 33  | 24  | 41  |
| 2011 | 4   | 12  | 35  | 29  | 19  | 16  | 32  | 25  | 5   | 11  | 21  | 12  |
| 2012 | 3   | 4   | 20  | 4   | 25  | 13  |     |     | 2   | 25  | 9   | 1   |
| 2013 | 6   | 8   |     | 10  | 31  | 7   | 18  | 33  | 4   | 16  |     | 27  |
| 2014 | 2   |     |     | 11  | 39  | 8   | 11  | 4   |     | 27  |     | 71  |
| 2015 | 17  | 47  | 33  |     |     | 9   | 7   |     |     |     |     | 8   |
| 2016 | 4   | 4   |     |     | 12  | 7   | 9   | 31  | 8   | 3   | 15  | 1   |
| 2017 | 13  | 7   | 14  | 8   | 10  | 4   | 2   | 1   |     | 8   | 4   | 5   |
| 2018 | 2   | 8   |     | 4   |     | 6   |     | 1   |     |     | 1   |     |
| 2019 | 1   | 16  | 1   | 3   | 5   | 1   |     |     | 2   |     | 1   | 3   |
| 2020 |     |     |     | 1   | 1   |     |     | 1   |     | 2   |     | 2   |
| 2021 |     | 2   |     |     |     |     |     |     |     | 1   | 1   |     |
| 2022 |     |     |     |     |     |     |     |     | 6   | 1   | 1   | 4   |
| 2023 |     |     |     |     |     |     | 1   | 3   | 2   | 2   | 4   |     |
| 2024 |     | 2   |     |     |     |     | 1   |     | 1   |     |     | 9   |
| 2025 |     | 4   | 2   |     |     |     |     |     |     |     |     |     |
From: <ian...@us...> - 2003-09-07 18:12:05
Update of /cvsroot/sqlobject/SOWeb In directory sc8-pr-cvs1:/tmp/cvs-serv21936 Modified Files: index.txt Log Message: Added note about CVS snapshot. Index: index.txt =================================================================== RCS file: /cvsroot/sqlobject/SOWeb/index.txt,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -d -r1.6 -r1.7 *** index.txt 21 Aug 2003 05:06:58 -0000 1.6 --- index.txt 7 Sep 2003 18:12:02 -0000 1.7 *************** *** 82,85 **** --- 82,87 ---- .. _sqlobject-discuss archives: http://sourceforge.net/mailarchive/forum.php?forum=sqlobject-discuss + .. _`anonymous cvs`: + If you are tracking changes, keeping up with CVS is helpful. To access CVS:: *************** *** 123,126 **** --- 125,136 ---- __ http://prdownloads.sourceforge.net/sqlobject/SQLObject-0.4.win32.exe?download + + You can also use `anonymous CVS`_ to access the latest files. + However, SourceForge is currently delaying the anonymous CVS servers + by 24 hours, so you won't see the newest code, particularly when a + change is made to CVS specifically to address a problem you are + having. Instead use the `Mostly Live CVS Tarball`__ + + .. __: http://colorstudy.com/ianb/SQLObject-cvs.tar.bz2 Documentation |
From: <ian...@us...> - 2003-09-07 18:07:17
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv20943/tests Modified Files: test.py Added Files: coverage.py Log Message: Added --coverage option to tests, which creates annotated source files that show coverage. --- NEW FILE: coverage.py --- #!/usr/bin/python # # Perforce Defect Tracking Integration Project # <http://www.ravenbrook.com/project/p4dti/> # # COVERAGE.PY -- COVERAGE TESTING # # Gareth Rees, Ravenbrook Limited, 2001-12-04 # # # 1. INTRODUCTION # # This module provides coverage testing for Python code. # # The intended readership is all Python developers. # # This document is not confidential. # # See [GDR 2001-12-04a] for the command-line interface, programmatic # interface and limitations. See [GDR 2001-12-04b] for requirements and # design. """Usage: coverage.py -x MODULE.py [ARG1 ARG2 ...] Execute module, passing the given command-line arguments, collecting coverage data. coverage.py -e Erase collected coverage data. coverage.py -r [-m] FILE1 FILE2 ... Report on the statement coverage for the given files. With the -m option, show line numbers of the statements that weren't executed. coverage.py -a [-d dir] FILE1 FILE2 ... Make annotated copies of the given files, marking statements that are executed with > and statements that are missed with !. With the -d option, make the copies in that directory. Without the -d option, make each copy in the same directory as the original. Coverage data is saved in the file .coverage by default. Set the COVERAGE_FILE environment variable to save it somewhere else.""" import os import re import string import sys import types # 2. IMPLEMENTATION # # This uses the "singleton" pattern. # # The word "morf" means a module object (from which the source file can # be deduced by suitable manipulation of the __file__ attribute) or a # filename. # # When we generate a coverage report we have to canonicalize every # filename in the coverage dictionary just in case it refers to the # module we are reporting on. It seems a shame to throw away this # information so the data in the coverage dictionary is transferred to # the 'cexecuted' dictionary under the canonical filenames. # # The coverage dictionary is called "c" and the trace function "t". The # reason for these short names is that Python looks up variables by name # at runtime and so execution time depends on the length of variables! # In the bottleneck of this application it's appropriate to abbreviate # names to increase speed. # A dictionary with an entry for (Python source file name, line number # in that file) if that line has been executed. c = {} # t(f, x, y). This method is passed to sys.settrace as a trace # function. See [van Rossum 2001-07-20b, 9.2] for an explanation of # sys.settrace and the arguments and return value of the trace function. # See [van Rossum 2001-07-20a, 3.2] for a description of frame and code # objects. def t(f, x, y): c[(f.f_code.co_filename, f.f_lineno)] = 1 return t the_coverage = None class coverage: error = "coverage error" # Name of the cache file (unless environment variable is set). cache_default = ".coverage" # Environment variable naming the cache file. cache_env = "COVERAGE_FILE" # A map from canonical Python source file name to a dictionary in # which there's an entry for each line number that has been # executed. cexecuted = {} # Cache of results of calling the analysis() method, so that you can # specify both -r and -a without doing double work. 
analysis_cache = {} # Cache of results of calling the canonical_filename() method, to # avoid duplicating work. canonical_filename_cache = {} def __init__(self): global the_coverage if the_coverage: raise self.error, "Only one coverage object allowed." self.cache = os.environ.get(self.cache_env, self.cache_default) self.restore() self.analysis_cache = {} def help(self, error=None): if error: print error print print __doc__ sys.exit(1) def command_line(self): import getopt settings = {} optmap = { '-a': 'annotate', '-d:': 'directory=', '-e': 'erase', '-h': 'help', '-i': 'ignore-errors', '-m': 'show-missing', '-r': 'report', '-x': 'execute', } short_opts = string.join(map(lambda o: o[1:], optmap.keys()), '') long_opts = optmap.values() options, args = getopt.getopt(sys.argv[1:], short_opts, long_opts) for o, a in options: if optmap.has_key(o): settings[optmap[o]] = 1 elif optmap.has_key(o + ':'): settings[optmap[o + ':']] = a elif o[2:] in long_opts: settings[o[2:]] = 1 elif o[2:] + '=' in long_opts: settings[o[2:]] = a else: self.help("Unknown option: '%s'." % o) if settings.get('help'): self.help() for i in ['erase', 'execute']: for j in ['annotate', 'report']: if settings.get(i) and settings.get(j): self.help("You can't specify the '%s' and '%s' " "options at the same time." % (i, j)) args_needed = (settings.get('execute') or settings.get('annotate') or settings.get('report')) action = settings.get('erase') or args_needed if not action: self.help("You must specify at least one of -e, -x, -r, " "or -a.") if not args_needed and args: self.help("Unexpected arguments %s." % args) if settings.get('erase'): self.erase() if settings.get('execute'): if not args: self.help("Nothing to do.") sys.argv = args self.start() import __main__ sys.path[0] = os.path.dirname(sys.argv[0]) execfile(sys.argv[0], __main__.__dict__) if not args: args = self.cexecuted.keys() ignore_errors = settings.get('ignore-errors') show_missing = settings.get('show-missing') directory = settings.get('directory=') if settings.get('report'): self.report(args, show_missing, ignore_errors) if settings.get('annotate'): self.annotate(args, directory, ignore_errors) def start(self): sys.settrace(t) def stop(self): sys.settrace(None) def erase(self): global c c = {} self.analysis_cache = {} self.cexecuted = {} if os.path.exists(self.cache): os.remove(self.cache) # save(). Save coverage data to the coverage cache. def save(self): self.canonicalize_filenames() cache = open(self.cache, 'wb') import marshal marshal.dump(self.cexecuted, cache) cache.close() # restore(). Restore coverage data from the coverage cache (if it # exists). def restore(self): global c c = {} self.cexecuted = {} if not os.path.exists(self.cache): return try: cache = open(self.cache, 'rb') import marshal cexecuted = marshal.load(cache) cache.close() if isinstance(cexecuted, types.DictType): self.cexecuted = cexecuted except: pass # canonical_filename(filename). Return a canonical filename for the # file (that is, an absolute path with no redundant components and # normalized case). See [GDR 2001-12-04b, 3.3]. def canonical_filename(self, filename): if not self.canonical_filename_cache.has_key(filename): f = filename if os.path.isabs(f) and not os.path.exists(f): f = os.path.basename(f) if not os.path.isabs(f): for path in [os.curdir] + sys.path: g = os.path.join(path, f) if os.path.exists(g): f = g break cf = os.path.normcase(os.path.abspath(f)) self.canonical_filename_cache[filename] = cf return self.canonical_filename_cache[filename] # canonicalize_filenames(). 
Copy results from "executed" to # "cexecuted", canonicalizing filenames on the way. Clear the # "executed" map. def canonicalize_filenames(self): global c for filename, lineno in c.keys(): f = self.canonical_filename(filename) if not self.cexecuted.has_key(f): self.cexecuted[f] = {} self.cexecuted[f][lineno] = 1 c = {} # morf_filename(morf). Return the filename for a module or file. def morf_filename(self, morf): if isinstance(morf, types.ModuleType): if not hasattr(morf, '__file__'): raise self.error, "Module has no __file__ attribute." file = morf.__file__ else: file = morf return self.canonical_filename(file) # analyze_morf(morf). Analyze the module or filename passed as # the argument. If the source code can't be found, raise an error. # Otherwise, return a pair of (1) the canonical filename of the # source code for the module, and (2) a list of lines of statements # in the source code. def analyze_morf(self, morf): if self.analysis_cache.has_key(morf): return self.analysis_cache[morf] filename = self.morf_filename(morf) ext = os.path.splitext(filename)[1] if ext == '.pyc': if not os.path.exists(filename[0:-1]): raise self.error, ("No source for compiled code '%s'." % filename) filename = filename[0:-1] elif ext != '.py': raise self.error, "File '%s' not Python source." % filename source = open(filename, 'r') import parser tree = parser.suite(source.read()).totuple(1) source.close() statements = {} self.find_statements(tree, statements) lines = statements.keys() lines.sort() result = filename, lines self.analysis_cache[morf] = result return result # find_statements(tree, dict). Find each statement in the parse # tree and record the line on which the statement starts in the # dictionary (by assigning it to 1). # # It works by walking the whole tree depth-first. Every time it # comes across a statement (symbol.stmt -- this includes compound # statements like 'if' and 'while') it calls find_statement, which # descends the tree below the statement to find the first terminal # token in that statement and record the lines on which that token # was found. # # This algorithm may find some lines several times (because of the # grammar production statement -> compound statement -> statement), # but that doesn't matter because we record lines as the keys of the # dictionary. # # See also [GDR 2001-12-04b, 3.2]. def find_statements(self, tree, dict): import symbol, token if token.ISNONTERMINAL(tree[0]): for t in tree[1:]: self.find_statements(t, dict) if tree[0] == symbol.stmt: self.find_statement(tree[1], dict) elif (tree[0] == token.NAME and tree[1] in ['elif', 'except', 'finally']): dict[tree[2]] = 1 def find_statement(self, tree, dict): import token while token.ISNONTERMINAL(tree[0]): tree = tree[1] dict[tree[2]] = 1 # format_lines(statements, lines). Format a list of line numbers # for printing by coalescing groups of lines as long as the lines # represent consecutive statements. This will coalesce even if # there are gaps between statements, so if statements = # [1,2,3,4,5,10,11,12,13,14] and lines = [1,2,5,10,11,13,14] then # format_lines will return "1-2, 5-11, 13-14". 
def format_lines(self, statements, lines): pairs = [] i = 0 j = 0 start = None pairs = [] while i < len(statements) and j < len(lines): if statements[i] == lines[j]: if start == None: start = lines[j] end = lines[j] j = j + 1 elif start: pairs.append((start, end)) start = None i = i + 1 if start: pairs.append((start, end)) def stringify(pair): start, end = pair if start == end: return "%d" % start else: return "%d-%d" % (start, end) import string return string.join(map(stringify, pairs), ", ") def analysis(self, morf): filename, statements = self.analyze_morf(morf) self.canonicalize_filenames() if not self.cexecuted.has_key(filename): self.cexecuted[filename] = {} missing = [] for line in statements: if not self.cexecuted[filename].has_key(line): missing.append(line) return (filename, statements, missing, self.format_lines(statements, missing)) def morf_name(self, morf): if isinstance(morf, types.ModuleType): return morf.__name__ else: return os.path.splitext(os.path.basename(morf))[0] def report(self, morfs, show_missing=1, ignore_errors=0): if not isinstance(morfs, types.ListType): morfs = [morfs] max_name = max([5,] + map(len, map(self.morf_name, morfs))) fmt_name = "%%- %ds " % max_name fmt_err = fmt_name + "%s: %s" header = fmt_name % "Name" + " Stmts Exec Cover" fmt_coverage = fmt_name + "% 6d % 6d % 5d%%" if show_missing: header = header + " Missing" fmt_coverage = fmt_coverage + " %s" print header print "-" * len(header) total_statements = 0 total_executed = 0 for morf in morfs: name = self.morf_name(morf) try: _, statements, missing, readable = self.analysis(morf) n = len(statements) m = n - len(missing) if n > 0: pc = 100.0 * m / n else: pc = 100.0 args = (name, n, m, pc) if show_missing: args = args + (readable,) print fmt_coverage % args total_statements = total_statements + n total_executed = total_executed + m except KeyboardInterrupt: raise except: if not ignore_errors: type, msg = sys.exc_info()[0:2] print fmt_err % (name, type, msg) if len(morfs) > 1: print "-" * len(header) if total_statements > 0: pc = 100.0 * total_executed / total_statements else: pc = 100.0 args = ("TOTAL", total_statements, total_executed, pc) if show_missing: args = args + ("",) print fmt_coverage % args # annotate(morfs, ignore_errors). blank_re = re.compile("\\s*(#|$)") else_re = re.compile("\\s*else\\s*:\\s*(#|$)") def annotate(self, morfs, directory=None, ignore_errors=0): for morf in morfs: try: filename, statements, missing, _ = self.analysis(morf) source = open(filename, 'r') if directory: dest_file = os.path.join(directory, os.path.basename(filename) + ',cover') else: dest_file = filename + ',cover' dest = open(dest_file, 'w') lineno = 0 i = 0 j = 0 covered = 1 while 1: line = source.readline() if line == '': break lineno = lineno + 1 while i < len(statements) and statements[i] < lineno: i = i + 1 while j < len(missing) and missing[j] < lineno: j = j + 1 if i < len(statements) and statements[i] == lineno: covered = j >= len(missing) or missing[j] > lineno if self.blank_re.match(line): dest.write(' ') elif self.else_re.match(line): # Special logic for lines containing only # 'else:'. See [GDR 2001-12-04b, 3.2]. if i >= len(statements) and j >= len(missing): dest.write('! ') elif i >= len(statements) or j >= len(missing): dest.write('> ') elif statements[i] == missing[j]: dest.write('! ') else: dest.write('> ') elif covered: dest.write('> ') else: dest.write('! 
') dest.write(line) source.close() dest.close() except KeyboardInterrupt: raise except: if not ignore_errors: raise # Singleton object. the_coverage = coverage() # Module functions call methods in the singleton object. def start(*args): return apply(the_coverage.start, args) def stop(*args): return apply(the_coverage.stop, args) def erase(*args): return apply(the_coverage.erase, args) def analysis(*args): return apply(the_coverage.analysis, args) def report(*args): return apply(the_coverage.report, args) # Save coverage data when Python exits. (The atexit module wasn't # introduced until Python 2.0, so use sys.exitfunc when it's not # available.) try: import atexit atexit.register(the_coverage.save) except ImportError: sys.exitfunc = the_coverage.save # Command-line interface. if __name__ == '__main__': the_coverage.command_line() # A. REFERENCES # # [GDR 2001-12-04a] "Statement coverage for Python"; Gareth Rees; # Ravenbrook Limited; 2001-12-04; # <http://www.garethrees.org/2001/12/04/python-coverage/>. # # [GDR 2001-12-04b] "Statement coverage for Python: design and # analysis"; Gareth Rees; Ravenbrook Limited; 2001-12-04; # <http://www.garethrees.org/2001/12/04/python-coverage/design.html>. # # [van Rossum 2001-07-20a] "Python Reference Manual (releae 2.1.1)"; # Guide van Rossum; 2001-07-20; # <http://www.python.org/doc/2.1.1/ref/ref.html>. # # [van Rossum 2001-07-20b] "Python Library Reference"; Guido van Rossum; # 2001-07-20; <http://www.python.org/doc/2.1.1/lib/lib.html>. # # # B. DOCUMENT HISTORY # # 2001-12-04 GDR Created. # # 2001-12-06 GDR Added command-line interface and source code # annotation. # # 2001-12-09 GDR Moved design and interface to separate documents. # # 2001-12-10 GDR Open cache file as binary on Windows. Allow # simultaneous -e and -x, or -a and -r. # # 2001-12-12 GDR Added command-line help. Cache analysis so that it # only needs to be done once when you specify -a and -r. # # 2001-12-13 GDR Improved speed while recording. Portable between # Python 1.5.2 and 2.1.1. # # 2002-01-03 GDR Module-level functions work correctly. # # 2002-01-07 GDR Update sys.path when running a file with the -x option, # so that it matches the value the program would get if it were run on # its own. # # # C. COPYRIGHT AND LICENCE # # Copyright 2001 Gareth Rees. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the # distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # HOLDERS AND CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS # OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR # TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH # DAMAGE. # # # # $Id: coverage.py,v 1.1 2003/09/07 18:07:11 ianbicking Exp $ Index: test.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/test.py,v retrieving revision 1.24 retrieving revision 1.25 diff -C2 -d -r1.24 -r1.25 *** test.py 7 Sep 2003 07:17:50 -0000 1.24 --- test.py 7 Sep 2003 18:07:11 -0000 1.25 *************** *** 7,14 **** """ from SQLObjectTest import * from SQLObject import * from mx import DateTime - from __future__ import generators ######################################## --- 7,22 ---- """ + from __future__ import generators + + import sys + if '--coverage' in sys.argv: + import coverage + print 'Starting coverage' + coverage.erase() + coverage.start() + from SQLObjectTest import * from SQLObject import * from mx import DateTime ######################################## *************** *** 620,627 **** ######################################## if __name__ == '__main__': ! import unittest, sys dbs = [] newArgs = [] for arg in sys.argv[1:]: if arg.startswith('-d'): --- 628,706 ---- ######################################## + def coverModules(): + sys.stdout.write('Writing coverage...') + sys.stdout.flush() + here = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) + from SQLObject import DBConnection as tmp + there = os.path.dirname(os.path.abspath(tmp.__file__)) + for name, mod in sys.modules.items(): + if not mod: + continue + try: + modFile = os.path.abspath(mod.__file__) + except AttributeError: + # Probably a C extension + continue + if modFile.startswith(here) or modFile.startswith(there): + writeCoverage(mod, there, os.path.join(here, 'SQLObject')) + sys.stdout.write('done.\n') + + + def writeCoverage(module, oldBase, newBase): + filename, numbers, unexecuted, s = coverage.analysis(module) + coverFilename = filename + ',cover' + if coverFilename.startswith(oldBase): + coverFilename = newBase + coverFilename[len(oldBase):] + fout = open(coverFilename, 'w') + fin = open(filename) + i = 1 + lines = 0 + good = 0 + while 1: + line = fin.readline() + if not line: break + assert line[-1] == '\n' + fout.write(line[:-1]) + unused = i in unexecuted + interesting = interestingLine(line, unused) + if interesting: + if unused: + fout.write(' '*(72-len(line))) + fout.write('#@@@@') + lastUnused = True + else: + lastUnused = False + good += 1 + lines += 1 + fout.write('\n') + i += 1 + fout.write('\n# Coverage:\n') + fout.write('# %i/%i, %i%%' % ( + good, lines, lines and int(good*100/lines))) + fout.close() + fin.close() + + def interestingLine(line, unused): + line = line.strip() + if not line: + return False + if line.startswith('#'): + return False + if line in ('"""', '"""'): + return False + if line.startswith('global '): + return False + if line.startswith('def ') and not unused: + # If a def *isn't* executed, that's interesting + return False + if line.startswith('class ') and not unused: + return False + return True + if __name__ == '__main__': ! 
import unittest, sys, os dbs = [] newArgs = [] + doCoverage = False for arg in sys.argv[1:]: if arg.startswith('-d'): *************** *** 641,644 **** --- 720,727 ---- SQLObjectTest.debugInserts = True continue + if arg in ('--coverage',): + # Handled earlier, so we get better coverage + doCoverage = True + continue newArgs.append(arg) sys.argv = [sys.argv[0]] + newArgs *************** *** 654,655 **** --- 737,742 ---- except SystemExit: pass + if doCoverage: + coverage.stop() + coverModules() + |
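For readers skimming this commit, the heart of the bundled coverage.py is nothing more than a sys.settrace hook that records which (filename, line number) pairs ever execute; everything else is reporting. Below is a minimal standalone sketch of that idea; the names `tracer`, `executed`, and `record_lines` are illustrative, not taken from the actual module.

```python
import sys

# Map of (filename, lineno) -> 1 for every line that has executed.
executed = {}

def tracer(frame, event, arg):
    # Record each 'line' event; returning the tracer keeps tracing
    # active inside the current frame.
    if event == 'line':
        executed[(frame.f_code.co_filename, frame.f_lineno)] = 1
    return tracer

def record_lines(func, *args, **kwargs):
    """Run func under the tracer and return the executed-line map."""
    sys.settrace(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return executed
```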
From: <ian...@us...> - 2003-09-07 18:06:28
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv20830/SQLObject Modified Files: SQLObject.py Log Message: added a little comment Index: SQLObject.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLObject.py,v retrieving revision 1.54 retrieving revision 1.55 diff -C2 -d -r1.54 -r1.55 *** SQLObject.py 7 Sep 2003 07:17:50 -0000 1.54 --- SQLObject.py 7 Sep 2003 18:06:21 -0000 1.55 *************** *** 672,675 **** --- 672,681 ---- self._SO_writeLock.acquire() try: + # Maybe, just in the moment since we got the lock, + # some other thread did a _SO_loadValue and we have + # the attribute! Let's try and find out! We can keep + # trying this all day and still beat the performance + # on the database call (okay, we can keep trying this + # for a few msecs at least)... result = getattr(self, attrName) except AttributeError: |
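The comment being added describes a double-checked read: test for the attribute, take the write lock, test again, and only pay for the database round trip if the value still is not there. A minimal sketch of that pattern in isolation (the class and method names below are made up for illustration, and it uses a modern `with` block rather than the explicit acquire/release in the 2003 code):

```python
import threading

class LazyRecord:
    def __init__(self):
        self._lock = threading.Lock()

    def _load_from_db(self, attr):
        # Stand-in for the real SELECT; returns the column value.
        return 'value-of-%s' % attr

    def load_value(self, attr):
        try:
            return getattr(self, attr)        # fast path, no lock taken
        except AttributeError:
            pass
        with self._lock:
            try:
                # Another thread may have filled the attribute while we
                # were waiting on the lock -- re-check before querying.
                return getattr(self, attr)
            except AttributeError:
                value = self._load_from_db(attr)
                setattr(self, attr, value)
                return value
```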
From: <ian...@us...> - 2003-09-07 07:26:01
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv13135/SQLObject Modified Files: DBConnection.py Log Message: Took out Firebird auto-class generation methods that were just copies of Postgres's Index: DBConnection.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/DBConnection.py,v retrieving revision 1.45 retrieving revision 1.46 diff -C2 -d -r1.45 -r1.46 *** DBConnection.py 7 Sep 2003 07:17:50 -0000 1.45 --- DBConnection.py 7 Sep 2003 07:25:58 -0000 1.46 *************** *** 818,880 **** column.dbName)) - def columnsFromSchema(self, tableName, soClass): - #let's punt for now!!! - return - keyQuery = """ - SELECT pg_catalog.pg_get_constraintdef(oid) as condef - FROM pg_catalog.pg_constraint r - WHERE r.conrelid = '%s'::regclass AND r.contype = 'f'""" - - colQuery = """ - SELECT a.attname, - pg_catalog.format_type(a.atttypid, a.atttypmod), a.attnotnull, - (SELECT substring(d.adsrc for 128) FROM pg_catalog.pg_attrdef d - WHERE d.adrelid=a.attrelid AND d.adnum = a.attnum) - FROM pg_catalog.pg_attribute a - WHERE a.attrelid ='%s'::regclass - AND a.attnum > 0 AND NOT a.attisdropped - ORDER BY a.attnum""" - - keyData = self.queryAll(keyQuery % tableName) - keyRE = re.compile("\((.+)\) REFERENCES (.+)\(") - keymap = {} - for (condef,) in keyData: - match = keyRE.search(condef) - if match: - field, reftable = match.groups() - keymap[field] = reftable.capitalize() - colData = self.queryAll(colQuery % tableName) - results = [] - for field, t, notnull, defaultstr in colData: - if field == 'id': - continue - colClass, kw = self.guessClass(t) - kw['name'] = soClass._style.dbColumnToPythonAttr(field) - kw['notNone'] = notnull - if defaultstr is not None: - kw['default'] = getattr(SQLBuilder.const, defaultstr) - if keymap.has_key(field): - kw['foreignKey'] = keymap[field] - results.append(colClass(**kw)) - return results - - def guessClass(self, t): - # ditto on the punting - return - if t.count('int'): - return Col.IntCol, {} - elif t.count('varying'): - return Col.StringCol, {'length': int(t[t.index('(')+1:-1])} - elif t.startswith('character('): - return Col.StringCol, {'length': int(t[t.index('(')+1:-1]), - 'varchar': False} - elif t=='text': - return Col.StringCol, {} - elif t.startswith('datetime'): - return Col.DateTimeCol, {} - else: - return Col.Col, {} - - ######################################## ## File-based connections --- 818,821 ---- |
From: <ian...@us...> - 2003-09-07 07:17:59
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv12054/tests Modified Files: test.py Log Message: Added expiring and syncing. Transactions expire their objects whenever there's a rollback. Index: test.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/test.py,v retrieving revision 1.23 retrieving revision 1.24 diff -C2 -d -r1.23 -r1.24 *** test.py 7 Sep 2003 07:05:26 -0000 1.23 --- test.py 7 Sep 2003 07:17:50 -0000 1.24 *************** *** 229,233 **** class TestSOTrans(SQLObject): ! _cacheValues = False name = StringCol(length=10, alternateID=True) _defaultOrderBy = 'name' --- 229,233 ---- class TestSOTrans(SQLObject): ! #_cacheValues = False name = StringCol(length=10, alternateID=True) _defaultOrderBy = 'name' |
From: <ian...@us...> - 2003-09-07 07:17:58
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv12054/SQLObject Modified Files: Cache.py DBConnection.py SQLObject.py Log Message: Added expiring and syncing. Transactions expire their objects whenever there's a rollback. Index: Cache.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/Cache.py,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -d -r1.8 -r1.9 *** Cache.py 15 Jul 2003 02:25:31 -0000 1.8 --- Cache.py 7 Sep 2003 07:17:49 -0000 1.9 *************** *** 14,24 **** """ ! def __init__(self, expireFrequency=100, expireFraction=2, cache=True): """ ! Every expireFrequency times that an item is retrieved from ! this cache, the expire method is called. ! The expire method then expires an arbitrary fraction of the cached objects. The idea is at no time will the cache be entirely emptied, placing a potentially high load at that --- 14,24 ---- """ ! def __init__(self, cullFrequency=100, cullFraction=2, cache=True): """ ! Every cullFrequency times that an item is retrieved from ! this cache, the cull method is called. ! The cull method then expires an arbitrary fraction of the cached objects. The idea is at no time will the cache be entirely emptied, placing a potentially high load at that *************** *** 35,42 **** """ ! self.expireFrequency = expireFrequency ! self.expireCount = expireFrequency ! self.expireOffset = 0 ! self.expireFraction = expireFraction self.doCache = cache --- 35,42 ---- """ ! self.cullFrequency = cullFrequency ! self.cullCount = cullFrequency ! self.cullOffset = 0 ! self.cullFraction = cullFraction self.doCache = cache *************** *** 46,59 **** self.lock = threading.Lock() def get(self, id): if self.doCache: ! if self.expireCount > self.expireFrequency: ! # Two threads could hit the expire in a row, but ! # that's not so bad. At least by setting expireCount ! # back to zero right away we avoid this. The expire # method has a lock, so it's threadsafe. ! self.expireCount = 0 ! self.expire() try: --- 46,62 ---- self.lock = threading.Lock() + def tryGet(self, id): + return self.cache.get(id) + def get(self, id): if self.doCache: ! if self.cullCount > self.cullFrequency: ! # Two threads could hit the cull in a row, but ! # that's not so bad. At least by setting cullCount ! # back to zero right away we avoid this. The cull # method has a lock, so it's threadsafe. ! self.cullCount = 0 ! self.cull() try: *************** *** 115,122 **** self.expiredCache[id] = ref(obj) ! def expire(self): self.lock.acquire() keys = self.cache.keys() ! for i in xrange(self.expireOffset, len(keys), self.expireFraction): id = keys[i] self.expiredCache[id] = ref(self.cache[id]) --- 118,125 ---- self.expiredCache[id] = ref(obj) ! def cull(self): self.lock.acquire() keys = self.cache.keys() ! for i in xrange(self.cullOffset, len(keys), self.cullFraction): id = keys[i] self.expiredCache[id] = ref(self.cache[id]) *************** *** 124,131 **** # This offset tries to balance out which objects we expire, so # no object will just hang out in the cache forever. ! self.expireOffset = (self.expiredOffset + 1) % self.expireFraction self.lock.release() ! def purge(self, id): self.lock.acquire() if self.cache.has_key(id): --- 127,134 ---- # This offset tries to balance out which objects we expire, so # no object will just hang out in the cache forever. ! self.cullOffset = (self.culldOffset + 1) % self.cullFraction self.lock.release() ! 
def expire(self, id): self.lock.acquire() if self.cache.has_key(id): *************** *** 135,139 **** self.lock.release() ! def clear(self): self.lock.acquire() for key, value in self.cache.items(): --- 138,142 ---- self.lock.release() ! def expireAll(self): self.lock.acquire() for key, value in self.cache.items(): *************** *** 142,145 **** --- 145,151 ---- self.lock.release() + def allIDs(self): + return self.cache.keys() + class CacheSet(object): *************** *** 169,175 **** self.caches[cls.__name__].created(id, obj) ! def purge(self, id, cls): try: ! self.caches[cls.__name__].purge(id) except KeyError: pass --- 175,181 ---- self.caches[cls.__name__].created(id, obj) ! def expire(self, id, cls): try: ! self.caches[cls.__name__].expire(id) except KeyError: pass *************** *** 182,183 **** --- 188,203 ---- self.caches[cls.__name__].clear() + def tryGet(self, id, cls): + try: + self.caches[cls.__name__].tryGet(id) + except KeyError: + return None + + def allIDs(self, cls): + try: + self.caches[cls.__name__].allIDs() + except KeyError: + return [] + + def allSubCaches(self): + return self.caches.values() Index: DBConnection.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/DBConnection.py,v retrieving revision 1.44 retrieving revision 1.45 diff -C2 -d -r1.44 -r1.45 *** DBConnection.py 7 Sep 2003 00:49:03 -0000 1.44 --- DBConnection.py 7 Sep 2003 07:17:50 -0000 1.45 *************** *** 393,397 **** --- 393,404 ---- if self._dbConnection.debug: self._dbConnection.printDebug(self._connection, '', 'ROLLBACK') + subCaches = [(sub, sub.allIDs()) for sub in self.cache.allSubCaches()] self._connection.rollback() + + for subCache, ids in subCaches: + for id in ids: + inst = subCache.tryGet(id) + if inst is not None: + inst.expire() def __getattr__(self, attr): Index: SQLObject.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLObject.py,v retrieving revision 1.53 retrieving revision 1.54 diff -C2 -d -r1.53 -r1.54 *** SQLObject.py 7 Sep 2003 07:05:26 -0000 1.53 --- SQLObject.py 7 Sep 2003 07:17:50 -0000 1.54 *************** *** 708,711 **** --- 708,712 ---- delattr(self, instanceName(column.name)) self._expired = True + self._connection.cache.expire(self.id, self.__class__) self._SO_writeLock.release() *************** *** 972,976 **** self._SO_obsolete = True self._connection._SO_delete(self) ! self._connection.cache.purge(self.id, self.__class__) def delete(cls, id): --- 973,977 ---- self._SO_obsolete = True self._connection._SO_delete(self) ! self._connection.cache.expire(self.id, self.__class__) def delete(cls, id): |
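The DBConnection.py hunk above is the interesting part of this commit: on rollback, the transaction snapshots every per-class cache, rolls the connection back, and then expires each still-cached instance so its next attribute access re-reads the row. Restated as a standalone helper for readability; the function name `rollback_and_expire` is ours, while `allSubCaches`, `allIDs`, `tryGet`, and `expire` are the methods introduced in this commit.

```python
def rollback_and_expire(connection, cache_set):
    # Capture the cached ids first, since expiring mutates the caches.
    snapshots = [(sub, sub.allIDs()) for sub in cache_set.allSubCaches()]
    connection.rollback()
    for sub_cache, ids in snapshots:
        for obj_id in ids:
            inst = sub_cache.tryGet(obj_id)
            if inst is not None:
                # Force a reload from the database on next access.
                inst.expire()
```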
From: <ian...@us...> - 2003-09-07 07:05:30
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv10132/SQLObject Modified Files: Converters.py SQLObject.py Log Message: Added expire and sync methods Index: Converters.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/Converters.py,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -d -r1.3 -r1.4 *** Converters.py 23 Aug 2003 18:12:43 -0000 1.3 --- Converters.py 7 Sep 2003 07:05:26 -0000 1.4 *************** *** 24,28 **** sqlStringReplace = [ ('\\', '\\\\'), ! ('\'', '\\\''), ('\000', '\\0'), ('\b', '\\b'), --- 24,28 ---- sqlStringReplace = [ ('\\', '\\\\'), ! ("'", "''"), ('\000', '\\0'), ('\b', '\\b'), Index: SQLObject.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLObject.py,v retrieving revision 1.52 retrieving revision 1.53 diff -C2 -d -r1.52 -r1.53 *** SQLObject.py 7 Sep 2003 00:49:03 -0000 1.52 --- SQLObject.py 7 Sep 2003 07:05:26 -0000 1.53 *************** *** 66,82 **** def setNeedSet(): global needSet - for registryName, needList in needSet.items(): - newNeedList = [] - for needClass, setCls in needList: - if classRegistry.get(registryName, {}).has_key(setCls): - setattr(findClass(needClass, registry=registryName), - '_SO_class_%s' % setCls, - findClass(setCls, registry=registryName)) - else: - newNeedList.append((needClass, setCls)) - needSet[registryName] = newNeedList - - def setNeedSet(): - global needSet for registryName, needClassDict in needSet.items(): newNeedClassDict = {} --- 66,69 ---- *************** *** 327,330 **** --- 314,321 ---- class CreateNewSQLObject: + """ + Dummy singleton to use in place of an ID, to signal we want + a new object. + """ pass *************** *** 374,377 **** --- 365,372 ---- _registry = None + # Default is false, but we set it to true for the *instance* + # when necessary: (bad clever? maybe) + _expired = False + def __new__(cls, id, connection=None, selectResults=None): *************** *** 435,439 **** # We create a method here, which is just a function # that takes "self" as the first argument. ! getter = eval('lambda self: self.%s' % instanceName(name)) else: # If we aren't caching values, we just call the --- 430,434 ---- # We create a method here, which is just a function # that takes "self" as the first argument. ! 
getter = eval('lambda self: self._SO_loadValue(%s)' % repr(instanceName(name))) else: # If we aren't caching values, we just call the *************** *** 664,669 **** self._SO_perConnection = True - dbNames = [col.dbName for col in self._SO_columns] if not selectResults: selectResults = self._connection._SO_selectOne(self, dbNames) if not selectResults: --- 659,664 ---- self._SO_perConnection = True if not selectResults: + dbNames = [col.dbName for col in self._SO_columns] selectResults = self._connection._SO_selectOne(self, dbNames) if not selectResults: *************** *** 671,674 **** --- 666,712 ---- self._SO_selectInit(selectResults) + def _SO_loadValue(self, attrName): + try: + return getattr(self, attrName) + except AttributeError: + self._SO_writeLock.acquire() + try: + result = getattr(self, attrName) + except AttributeError: + pass + else: + self._SO_writeLock.release() + return result + self._expired = False + dbNames = [col.dbName for col in self._SO_columns] + selectResults = self._connection._SO_selectOne(self, dbNames) + if not selectResults: + raise SQLObjectNotFound, "The object %s by the ID %s has been deleted" % (self.__class__.__name__, self.id) + self._SO_selectInit(selectResults) + result = getattr(self, attrName) + self._SO_writeLock.release() + return result + + def sync(self): + self._SO_writeLock.acquire() + dbNames = [col.dbName for col in self._SO_columns] + selectResults = self._connection._SO_selectOne(self, dbNames) + if not selectResults: + raise SQLObjectNotFound, "The object %s by the ID %s has been deleted" % (self.__class__.__name__, self.id) + self._SO_selectInit(selectResults) + self._expired = False + self._SO_writeLock.release() + + def expire(self): + if self._expired: + return + self._SO_writeLock.acquire() + if self._expired: + self._SO_writeLock.release() + return + for column in self._SO_columns: + delattr(self, instanceName(column.name)) + self._expired = True + self._SO_writeLock.release() def _SO_setValue(self, name, value): |
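The Converters.py hunk in this commit switches single-quote escaping from backslash style (\') to SQL-standard doubling (''). A small illustration of what that means when quoting a string literal; the replacement list below is abridged to the entries visible in the diff, and `sql_quote` is a made-up helper, not the module's actual API.

```python
sqlStringReplace = [
    ('\\', '\\\\'),
    ("'", "''"),      # changed in this commit: double the quote instead of \'
    ('\000', '\\0'),
    ('\b', '\\b'),
]

def sql_quote(value):
    # Apply each replacement in order, then wrap in single quotes.
    for char, replacement in sqlStringReplace:
        value = value.replace(char, replacement)
    return "'%s'" % value

# sql_quote("can't") -> "'can''t'"
```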
From: <ian...@us...> - 2003-09-07 07:05:30
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv10132/tests Modified Files: test.py Log Message: Added expire and sync methods Index: test.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/test.py,v retrieving revision 1.22 retrieving revision 1.23 diff -C2 -d -r1.22 -r1.23 *** test.py 7 Sep 2003 00:49:03 -0000 1.22 --- test.py 7 Sep 2003 07:05:26 -0000 1.23 *************** *** 18,23 **** class TestSO1(SQLObject): ! name = StringCol(length=10) ! _columns = [ StringCol('passwd', length=10), --- 18,23 ---- class TestSO1(SQLObject): ! name = StringCol(length=50) ! _cacheValues = False _columns = [ StringCol('passwd', length=10), *************** *** 44,47 **** --- 44,54 ---- self.assertEqual(bob.passwd, 'god'.encode('rot13')) + def testNewline(self): + bob = self.MyClass.selectBy(name='bob')[0] + testString = 'hey\nyou\\can\'t you see me?\t' + bob.name = testString + self.assertEqual(bob.name, testString) + + class TestCaseGetSet(TestCase1): *************** *** 54,58 **** class TestSO2(SQLObject): ! name = StringCol(length=10) passwd = StringCol(length=10) --- 61,65 ---- class TestSO2(SQLObject): ! name = StringCol(length=50) passwd = StringCol(length=10) *************** *** 581,584 **** --- 588,618 ---- + ######################################## + ## Expiring, syncing + ######################################## + + class SyncTest(SQLObject): + name = StringCol(length=50, alternateID=True) + + class ExpireTest(SQLObjectTest): + + classes = [SyncTest] + + def inserts(self): + SyncTest.new(name='bob') + SyncTest.new(name='tim') + + def testExpire(self): + conn = SyncTest._connection + b = SyncTest.byName('bob') + conn.query("UPDATE sync_test SET name = 'robert' WHERE id = %i" + % b.id) + self.assertEqual(b.name, 'bob') + b.expire() + self.assertEqual(b.name, 'robert') + conn.query("UPDATE sync_test SET name = 'bobby' WHERE id = %i" + % b.id) + b.sync() + self.assertEqual(b.name, 'bobby') ######################################## |
From: <ian...@us...> - 2003-09-07 07:05:13
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv10013/tests Modified Files: SQLObjectTest.py Log Message: Took out DBM as a standard backend to test against (too many bugs, I don't care enough about it) Index: SQLObjectTest.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/SQLObjectTest.py,v retrieving revision 1.13 retrieving revision 1.14 diff -C2 -d -r1.13 -r1.14 *** SQLObjectTest.py 7 Sep 2003 00:49:03 -0000 1.13 --- SQLObjectTest.py 7 Sep 2003 07:05:09 -0000 1.14 *************** *** 58,62 **** 'sqlite': 'sqlite', 'firebird': 'kinterbasdb', ! 'dbm': 'anydbm'} def supportedDatabases(): --- 58,62 ---- 'sqlite': 'sqlite', 'firebird': 'kinterbasdb', ! } def supportedDatabases(): |
From: <ian...@us...> - 2003-09-07 00:49:15
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv26904/SQLObject Modified Files: DBConnection.py SQLObject.py Log Message: Lots of transaction fixes. Index: DBConnection.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/DBConnection.py,v retrieving revision 1.43 retrieving revision 1.44 diff -C2 -d -r1.43 -r1.44 *** DBConnection.py 6 Sep 2003 02:37:42 -0000 1.43 --- DBConnection.py 7 Sep 2003 00:49:03 -0000 1.44 *************** *** 8,11 **** --- 8,12 ---- import atexit import os + import new import SQLBuilder from Cache import CacheSet *************** *** 19,22 **** --- 20,24 ---- MySQLdb = None psycopg = None + pgdb = None sqlite = None kinterbasdb = None *************** *** 31,36 **** class DBConnection: ! def __init__(self, name=None, debug=False, cache=True, ! style=None): if name: assert not _connections.has_key(name), 'A database by the name %s has already been created: %s' % (name, _connections[name]) --- 33,38 ---- class DBConnection: ! def __init__(self, name=None, debug=False, debugOutput=False, ! cache=True, style=None, autoCommit=True): if name: assert not _connections.has_key(name), 'A database by the name %s has already been created: %s' % (name, _connections[name]) *************** *** 38,45 **** self.name = name self.debug = debug self.cache = CacheSet(cache=cache) self.doCache = cache self.style = style ! def connectionForName(name): --- 40,50 ---- self.name = name self.debug = debug + self.debugOutput = debugOutput self.cache = CacheSet(cache=cache) self.doCache = cache self.style = style ! self._connectionNumbers = {} ! self._connectionCount = 1 ! self.autoCommit = autoCommit def connectionForName(name): *************** *** 63,67 **** conn = self.getConnection() val = meth(conn, *args) - conn.commit() self.releaseConnection(conn) return val --- 68,71 ---- *************** *** 70,74 **** self._poolLock.acquire() if not self._pool: ! self._pool.append(self.makeConnection()) val = self._pool.pop() self._poolLock.release() --- 74,81 ---- self._poolLock.acquire() if not self._pool: ! newConn = self.makeConnection() ! self._pool.append(newConn) ! self._connectionNumbers[id(newConn)] = self._connectionCount ! self._connectionCount += 1 val = self._pool.pop() self._poolLock.release() *************** *** 76,85 **** def releaseConnection(self, conn): if self._pool is not None: self._pool.append(conn) def _query(self, conn, s): if self.debug: ! print 'Query : %s' % s conn.cursor().execute(s) --- 83,116 ---- def releaseConnection(self, conn): + if self.supportTransactions: + if self.autoCommit == 'exception': + if self.debug: + self.printDebug(conn, 'auto/exception', 'ROLLBACK') + conn.rollback() + raise Exception, 'Object used outside of a transaction; implicit COMMIT or ROLLBACK not allowed' + elif self.autoCommit: + if self.debug: + self.printDebug(conn, 'auto', 'COMMIT') + conn.commit() + else: + if self.debug: + self.printDebug(conn, 'auto', 'ROLLBACK') + conn.rollback() if self._pool is not None: self._pool.append(conn) + def printDebug(self, conn, s, name, type='query'): + if type == 'query': + sep = ': ' + else: + sep = '->' + s = repr(s) + n = self._connectionNumbers[id(conn)] + spaces = ' '*(8-len(name)) + print '%(n)2i/%(name)s%(spaces)s%(sep)s %(s)s' % locals() + def _query(self, conn, s): if self.debug: ! self.printDebug(conn, s, 'Query') conn.cursor().execute(s) *************** *** 89,96 **** def _queryAll(self, conn, s): if self.debug: ! 
print 'QueryAll: %s' % s c = conn.cursor() c.execute(s) ! return c.fetchall() def queryAll(self, s): --- 120,130 ---- def _queryAll(self, conn, s): if self.debug: ! self.printDebug(conn, s, 'QueryAll') c = conn.cursor() c.execute(s) ! value = c.fetchall() ! if self.debugOutput: ! self.printDebug(conn, value, 'QueryAll', 'result') ! return value def queryAll(self, s): *************** *** 99,106 **** def _queryOne(self, conn, s): if self.debug: ! print 'QueryOne: %s' % s c = conn.cursor() c.execute(s) ! return c.fetchone() def queryOne(self, s): --- 133,143 ---- def _queryOne(self, conn, s): if self.debug: ! self.printDebug(conn, s, 'QueryOne') c = conn.cursor() c.execute(s) ! value = c.fetchone() ! if self.debugOutput: ! self.printDebug(conn, value, 'QueryOne', 'result') ! return value def queryOne(self, s): *************** *** 118,139 **** return self._runWithConnection(self._queryInsertID, table, idName, names, values) ! def iterSelect(self, select): ! conn = self.getConnection() cursor = conn.cursor() query = self.queryForSelect(select) if self.debug: ! print "Select: %s" % query cursor.execute(query) while 1: result = cursor.fetchone() if result is None: ! self.releaseConnection(conn) break if select.ops.get('lazyColumns', 0): ! yield select.sourceClass(result[0]) else: ! obj = select.sourceClass(result[0], selectResults=result[1:]) yield obj def countSelect(self, select): q = "SELECT COUNT(*) FROM %s WHERE %s" % \ --- 155,182 ---- return self._runWithConnection(self._queryInsertID, table, idName, names, values) ! def _iterSelect(self, conn, select, withConnection=None, ! keepConnection=False): cursor = conn.cursor() query = self.queryForSelect(select) if self.debug: ! self.printDebug(conn, query, 'Select') cursor.execute(query) while 1: result = cursor.fetchone() if result is None: ! if not keepConnection: ! self.releaseConnection(conn) break if select.ops.get('lazyColumns', 0): ! obj = select.sourceClass(result[0], connection=withConnection) ! yield obj else: ! obj = select.sourceClass(result[0], selectResults=result[1:], connection=withConnection) yield obj + def iterSelect(self, select): + return self._runWithConnection(self._iterSelect, select, self, + False) + def countSelect(self, select): q = "SELECT COUNT(*) FROM %s WHERE %s" % \ *************** *** 141,146 **** self.whereClauseForSelect(select, limit=0, order=0)) val = int(self.queryOne(q)[0]) - if self.debug: - print "COUNT results:", val return val --- 184,187 ---- *************** *** 245,249 **** # or classes freely, but keep the SQLObject class from accessing # the database directly. This way no SQL is actually created ! # in SQLObject. def _SO_update(self, so, values): --- 286,290 ---- # or classes freely, but keep the SQLObject class from accessing # the database directly. This way no SQL is actually created ! # in the SQLObject class. def _SO_update(self, so, values): *************** *** 320,323 **** --- 361,365 ---- self._dbConnection = dbConnection self._connection = dbConnection.getConnection() + self._dbConnection._setAutoCommit(self._connection, 0) self.cache = CacheSet(cache=dbConnection.doCache) *************** *** 329,340 **** def queryOne(self, s): ! return self._dbConnection._queryAll(self._connection, s) def commit(self): self._connection.commit() def rollback(self): self._connection.rollback() def __del__(self): self.rollback() --- 371,413 ---- def queryOne(self, s): ! return self._dbConnection._queryOne(self._connection, s) ! ! def queryInsertID(self, table, idName, names, values): ! 
return self._dbConnection._queryInsertID( ! self._connection, table, idName, names, values) ! ! def iterSelect(self, select): ! # @@: Bad stuff here, because the connection will be used ! # until the iteration is over, or at least a cursor from ! # the connection, which not all database drivers support. ! return self._dbConnection._iterSelect( ! self._connection, select, withConnection=self, ! keepConnection=True) def commit(self): + if self._dbConnection.debug: + self._dbConnection.printDebug(self._connection, '', 'COMMIT') self._connection.commit() def rollback(self): + if self._dbConnection.debug: + self._dbConnection.printDebug(self._connection, '', 'ROLLBACK') self._connection.rollback() + def __getattr__(self, attr): + """ + If nothing else works, let the parent connection handle it. + Except with this transaction as 'self'. Poor man's + acquisition? Bad programming? Okay, maybe. + """ + attr = getattr(self._dbConnection, attr) + try: + func = attr.im_func + except AttributeError: + return attr + else: + meth = new.instancemethod(func, self, self.__class__) + return meth + def __del__(self): self.rollback() *************** *** 343,346 **** --- 416,421 ---- class MySQLConnection(DBAPI): + supportTransactions = False + def __init__(self, db, user, passwd='', host='localhost', **kw): global MySQLdb *************** *** 361,367 **** q = self._insertSQL(table, names, values) if self.debug: ! print 'QueryIns: %s' % q c.execute(q) ! return c.insert_id() def _queryAddLimitOffset(self, query, start, end): --- 436,445 ---- q = self._insertSQL(table, names, values) if self.debug: ! self.printDebug(conn, q, 'QueryIns') c.execute(q) ! id = c.insert_id() ! if self.debugOutput: ! self.printDebug(conn, id, 'QueryIns', 'result') ! return id def _queryAddLimitOffset(self, query, start, end): *************** *** 429,437 **** class PostgresConnection(DBAPI): def __init__(self, dsn=None, host=None, db=None, ! user=None, passwd=None, autoCommit=1, **kw): ! global psycopg ! if psycopg is None: ! import psycopg if not autoCommit and not kw.has_key('pool'): # Pooling doesn't work with transactions... --- 507,527 ---- class PostgresConnection(DBAPI): + supportTransactions = True + def __init__(self, dsn=None, host=None, db=None, ! user=None, passwd=None, autoCommit=1, ! usePygresql=False, ! **kw): ! global psycopg, pgdb ! if usePygresql: ! if pgdb is None: ! import pgdb ! self.pgmodule = pgdb ! else: ! if psycopg is None: ! import psycopg ! self.pgmodule = psycopg ! ! self.autoCommit = autoCommit if not autoCommit and not kw.has_key('pool'): # Pooling doesn't work with transactions... *************** *** 452,457 **** DBAPI.__init__(self, **kw) def makeConnection(self): ! return psycopg.connect(self.dsn) def _queryInsertID(self, conn, table, idName, names, values): --- 542,553 ---- DBAPI.__init__(self, **kw) + def _setAutoCommit(self, conn, auto): + conn.autocommit(auto) + def makeConnection(self): ! conn = self.pgmodule.connect(self.dsn) ! if self.autoCommit: ! conn.autocommit(1) ! return conn def _queryInsertID(self, conn, table, idName, names, values): *************** *** 459,467 **** q = self._insertSQL(table, names, values) if self.debug: ! print 'QueryIns: %s' % q c.execute(q) c.execute('SELECT %s FROM %s WHERE oid = %s' % (idName, table, c.lastoid())) ! return c.fetchone()[0] def _queryAddLimitOffset(self, query, start, end): --- 555,566 ---- q = self._insertSQL(table, names, values) if self.debug: ! 
self.printDebug(conn, q, 'QueryIns') c.execute(q) c.execute('SELECT %s FROM %s WHERE oid = %s' % (idName, table, c.lastoid())) ! id = c.fetchone()[0] ! if self.debugOutput: ! self.printDebug(conn, id, 'QueryIns', 'result') ! return id def _queryAddLimitOffset(self, query, start, end): *************** *** 555,558 **** --- 654,659 ---- class SQLiteConnection(DBAPI): + supportTransactions = True + def __init__(self, filename, autoCommit=1, **kw): global sqlite *************** *** 568,571 **** --- 669,675 ---- DBAPI.__init__(self, **kw) + def _setAutoCommit(self, conn, auto): + conn.autocommit = auto + def makeConnection(self): return self._conn *************** *** 575,582 **** q = self._insertSQL(table, names, values) if self.debug: ! print 'QueryIns: %s' % q c.execute(q) # lastrowid is a DB-API extension from "PEP 0249": ! return int(c.lastrowid) def _queryAddLimitOffset(self, query, start, end): --- 679,689 ---- q = self._insertSQL(table, names, values) if self.debug: ! self.printDebug(conn, q, 'QueryIns') c.execute(q) # lastrowid is a DB-API extension from "PEP 0249": ! id = int(c.lastrowid) ! if self.debugOutput: ! self.printDebug(conn, id, 'QueryIns', 'result') ! return id def _queryAddLimitOffset(self, query, start, end): *************** *** 607,610 **** --- 714,719 ---- class FirebirdConnection(DBAPI): + supportTransactions = False + def __init__(self, host, db, user='sysdba', passwd='masterkey', autoCommit=1, **kw): *************** *** 644,649 **** qry = self._insertSQL(table, names, values) if self.debug: ! print 'QueryIns: %s' % q self.query(qry) return new_key --- 753,760 ---- qry = self._insertSQL(table, names, values) if self.debug: ! self.printDebug(conn, q, 'QueryIns') self.query(qry) + if self.debugOutput: + self.printDebug(conn, new_key, 'QueryIns', 'result') return new_key *************** *** 659,664 **** match = self.limit_re.match(query) - if self.debug: - print query if match and len(match.groups()) == 2: return ' '.join([limit_str, match.group(1)]) --- 770,773 ---- *************** *** 855,858 **** --- 964,969 ---- class DBMConnection(FileConnection): + + supportTransactions = False def __init__(self, path, **kw): Index: SQLObject.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLObject.py,v retrieving revision 1.51 retrieving revision 1.52 diff -C2 -d -r1.51 -r1.52 *** SQLObject.py 6 Sep 2003 03:05:12 -0000 1.51 --- SQLObject.py 7 Sep 2003 00:49:03 -0000 1.52 *************** *** 378,383 **** assert id is not None, 'None is not a possible id for %s' % cls.__name ! # When id is None, that means we are trying ! # to create a new object. This is done by the # `new()` method. if id is CreateNewSQLObject: --- 378,383 ---- assert id is not None, 'None is not a possible id for %s' % cls.__name ! # When id is CreateNewSQLObject, that means we are trying to ! # create a new object. This is a contract of sorts with the # `new()` method. if id is CreateNewSQLObject: *************** *** 388,391 **** --- 388,394 ---- # column-values for the new row: inst._SO_createValues = {} + if connection is not None: + inst._connection = connection + assert selectResults is None return inst *************** *** 507,511 **** if column.alternateMethodName: ! func = eval('lambda cls, val: cls._SO_fetchAlternateID(%s, val)' % repr(column.dbName)) setattr(cls, column.alternateMethodName, classmethod(func)) --- 510,514 ---- if column.alternateMethodName: ! 
func = eval('lambda cls, val, connection=None: cls._SO_fetchAlternateID(%s, val, connection=connection)' % repr(column.dbName)) setattr(cls, column.alternateMethodName, classmethod(func)) *************** *** 663,667 **** dbNames = [col.dbName for col in self._SO_columns] if not selectResults: ! selectResults = (connection or self._connection)._SO_selectOne(self, dbNames) if not selectResults: raise SQLObjectNotFound, "The object %s by the ID %s does not exist" % (self.__class__.__name__, self.id) --- 666,670 ---- dbNames = [col.dbName for col in self._SO_columns] if not selectResults: ! selectResults = self._connection._SO_selectOne(self, dbNames) if not selectResults: raise SQLObjectNotFound, "The object %s by the ID %s does not exist" % (self.__class__.__name__, self.id) *************** *** 809,813 **** # Here's where an INSERT is finalized. # These are all the column values that were supposed ! # to be set, but weren't. setters = self._SO_createValues.items() # Here's their database names: --- 812,816 ---- # Here's where an INSERT is finalized. # These are all the column values that were supposed ! # to be set, but were delayed until now: setters = self._SO_createValues.items() # Here's their database names: *************** *** 815,818 **** --- 818,823 ---- values = [v[1] for v in setters] # Get rid of _SO_create*, we aren't creating anymore. + # Doesn't have to be threadsafe because we're still in + # new(), which doesn't need to be threadsafe. del self._SO_createValues del self._SO_creating *************** *** 830,835 **** return getID(obj) ! def _SO_fetchAlternateID(cls, dbIDName, value): ! result = cls._connection._SO_selectOneAlt( cls, [cls._idName] + --- 835,840 ---- return getID(obj) ! def _SO_fetchAlternateID(cls, dbIDName, value, connection=None): ! result = (connection or cls._connection)._SO_selectOneAlt( cls, [cls._idName] + *************** *** 839,843 **** if not result: raise SQLObjectNotFound, "The %s by alternateID %s=%s does not exist" % (cls.__name__, dbIDName, repr(value)) ! obj = cls(result[0]) if not obj._cacheValues: obj._SO_writeLock.acquire() --- 844,851 ---- if not result: raise SQLObjectNotFound, "The %s by alternateID %s=%s does not exist" % (cls.__name__, dbIDName, repr(value)) ! if connection: ! obj = cls(result[0], connection=connection) ! else: ! obj = cls(result[0]) if not obj._cacheValues: obj._SO_writeLock.acquire() *************** *** 847,863 **** _SO_fetchAlternateID = classmethod(_SO_fetchAlternateID) - # 3-03 @@: Should this have a connection argument? def select(cls, clause=None, clauseTables=None, orderBy=NoDefault, groupBy=None, limit=None, ! lazyColumns=False, reversed=False): return SelectResults(cls, clause, clauseTables=clauseTables, orderBy=orderBy, groupBy=groupBy, limit=limit, lazyColumns=lazyColumns, ! reversed=reversed) select = classmethod(select) ! def selectBy(cls, **kw): return SelectResults(cls, ! cls._connection._SO_columnClause(cls, kw)) selectBy = classmethod(selectBy) --- 855,873 ---- _SO_fetchAlternateID = classmethod(_SO_fetchAlternateID) def select(cls, clause=None, clauseTables=None, orderBy=NoDefault, groupBy=None, limit=None, ! lazyColumns=False, reversed=False, ! connection=None): return SelectResults(cls, clause, clauseTables=clauseTables, orderBy=orderBy, groupBy=groupBy, limit=limit, lazyColumns=lazyColumns, ! reversed=reversed, ! connection=connection) select = classmethod(select) ! def selectBy(cls, connection=None, **kw): return SelectResults(cls, ! cls._connection._SO_columnClause(cls, kw), ! 
connection=connection) selectBy = classmethod(selectBy) *************** *** 988,991 **** --- 998,1003 ---- orderBy = self._mungeOrderBy(orderBy) self.ops['dbOrderBy'] = orderBy + if ops.has_key('connection') and ops['connection'] is None: + del ops['connection'] def _mungeOrderBy(self, orderBy): |
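A minimal usage sketch of the transaction machinery added above. The ``Person`` class, its column, and the connection parameters are hypothetical; the ``transaction()``, ``connection=``, ``commit()`` and ``rollback()`` calls are the ones this commit introduces::

    from SQLObject import *

    __connection__ = PostgresConnection(db='test', autoCommit=0)

    class Person(SQLObject):
        # hypothetical table, assumed to already exist with a row named 'bob'
        name = StringCol(length=50, alternateID=True)

    trans = Person._connection.transaction()
    Person.new(name='joe', connection=trans)       # INSERT runs inside the transaction
    trans.rollback()                               # and is discarded again

    trans = Person._connection.transaction()
    bob = Person.byName('bob', connection=trans)   # alternateID lookups now accept connection=
    bob.name = 'robert'
    trans.commit()                                 # the UPDATE becomes permanent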
From: <ian...@us...> - 2003-09-07 00:49:14
|
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv26904/tests Modified Files: SQLObjectTest.py test.py Log Message: Lots of transaction fixes. Index: SQLObjectTest.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/SQLObjectTest.py,v retrieving revision 1.12 retrieving revision 1.13 diff -C2 -d -r1.12 -r1.13 *** SQLObjectTest.py 6 Sep 2003 07:12:02 -0000 1.12 --- SQLObjectTest.py 7 Sep 2003 00:49:03 -0000 1.13 *************** *** 50,56 **** SQLObjectTest.supportAuto = False SQLObjectTest.supportRestrictedEnum = False ! return FirebirdConnection('localhost', 'data/firebird.gdb') ! supportedDatabases = ['mysql', 'postgres', 'sqlite', 'firebird', 'dbm'] class SQLObjectTest(unittest.TestCase): --- 50,73 ---- SQLObjectTest.supportAuto = False SQLObjectTest.supportRestrictedEnum = False ! return FirebirdConnection('localhost', '/usr/home/ianb/w/SQLObject/data/firebird.gdb', ! user='sysdba', passwd='masterkey') ! _supportedDatabases = { ! 'mysql': 'MySQLdb', ! 'postgres': 'psycopg', ! 'sqlite': 'sqlite', ! 'firebird': 'kinterbasdb', ! 'dbm': 'anydbm'} ! ! def supportedDatabases(): ! result = [] ! for name, module in _supportedDatabases.items(): ! try: ! exec 'import %s' % module ! except ImportError: ! pass ! else: ! result.append(name) ! return result class SQLObjectTest(unittest.TestCase): *************** *** 58,62 **** classes = [] ! debugSQL = 0 databaseName = None --- 75,81 ---- classes = [] ! debugSQL = False ! debugOutput = False ! debugInserts = False databaseName = None *************** *** 67,71 **** print '#' * 70 unittest.TestCase.setUp(self) ! #__connection__.debug = self.debugSQL for c in self.classes: c._connection = __connection__ --- 86,93 ---- print '#' * 70 unittest.TestCase.setUp(self) ! if self.debugInserts: ! __connection__.debug = True ! __connection__.debugOuput = self.debugOutput ! for c in self.classes: c._connection = __connection__ *************** *** 88,94 **** elif hasattr(c, 'createTable'): c.createTable(ifNotExists=True) - #__connection__.debug = self.debugSQL self.inserts() __connection__.debug = self.debugSQL def inserts(self): --- 110,116 ---- elif hasattr(c, 'createTable'): c.createTable(ifNotExists=True) self.inserts() __connection__.debug = self.debugSQL + __connection__.debugOutput = self.debugOutput def inserts(self): Index: test.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/test.py,v retrieving revision 1.21 retrieving revision 1.22 diff -C2 -d -r1.21 -r1.22 *** test.py 17 Jul 2003 01:20:18 -0000 1.21 --- test.py 7 Sep 2003 00:49:03 -0000 1.22 *************** *** 1,2 **** --- 1,10 ---- + """ + Main (um, only) unit testing for SQLObject. + + Use -vv to see SQL queries, -vvv to also see output from queries, + and together with --inserts to see the SQL from the standard + insert statements (which are often boring). + """ + from SQLObjectTest import * from SQLObject import * *************** *** 213,230 **** ######################################## class TransactionTest(SQLObjectTest): ! classes = [Names] def inserts(self): ! Names.new(fname='bob', lname='jones') def testTransaction(self): if not self.supportTransactions: return ! trans = Names._connection.transaction() ! Names.new(fname='joe', lname='jones', connection=trans) ! trans.rollback() ! self.assertEqual([n.fname for n in Names.select()], ! 
['bob']) --- 221,255 ---- ######################################## + class TestSOTrans(SQLObject): + _cacheValues = False + name = StringCol(length=10, alternateID=True) + _defaultOrderBy = 'name' + class TransactionTest(SQLObjectTest): ! classes = [TestSOTrans] def inserts(self): ! TestSOTrans.new(name='bob') ! TestSOTrans.new(name='tim') def testTransaction(self): if not self.supportTransactions: return ! trans = TestSOTrans._connection.transaction() ! try: ! TestSOTrans._connection.autoCommit = 'exception' ! TestSOTrans.new(name='joe', connection=trans) ! trans.rollback() ! self.assertEqual([n.name for n in TestSOTrans.select(connection=trans)], ! ['bob', 'tim']) ! b = TestSOTrans.byName('bob', connection=trans) ! b.name = 'robert' ! trans.commit() ! self.assertEqual(b.name, 'robert') ! b.name = 'bob' ! trans.rollback() ! self.assertEqual(b.name, 'robert') ! finally: ! TestSOTrans._connection.autoCommit = True *************** *** 564,580 **** import unittest, sys dbs = [] for arg in sys.argv[1:]: if arg.startswith('-d'): dbs.append(arg[2:]) if arg.startswith('--database='): dbs.append(arg[11:]) if arg in ('-vv', '--extra-verbose'): ! SQLObjectTest.debugSQL = 1 ! sys.argv = [a for a in sys.argv ! if not a.startswith('-d') and not a.startswith('--database=')] if not dbs: dbs = ['mysql'] if dbs == ['all']: ! dbs = supportedDatabases for db in dbs: setDatabaseType(db) --- 589,616 ---- import unittest, sys dbs = [] + newArgs = [] for arg in sys.argv[1:]: if arg.startswith('-d'): dbs.append(arg[2:]) + continue if arg.startswith('--database='): dbs.append(arg[11:]) + continue if arg in ('-vv', '--extra-verbose'): ! SQLObjectTest.debugSQL = True ! if arg in ('-vvv', '--super-verbose'): ! SQLObjectTest.debugSQL = True ! SQLObjectTest.debugOutput = True ! newArgs.append('-vv') ! continue ! if arg in ('--inserts',): ! SQLObjectTest.debugInserts = True ! continue ! newArgs.append(arg) ! sys.argv = [sys.argv[0]] + newArgs if not dbs: dbs = ['mysql'] if dbs == ['all']: ! dbs = supportedDatabases() for db in dbs: setDatabaseType(db) |
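Assuming the suite is run from the tests/ directory, the new flags combine roughly like this (note that the database name is glued onto ``-d``)::

    python test.py -dpostgres               # pick the backend
    python test.py --database=sqlite -vv    # also echo the SQL queries
    python test.py -dmysql -vvv --inserts   # echo query output and the insert SQL too
    python test.py -dall                    # every backend whose driver imports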
From: <ian...@us...> - 2003-09-06 07:12:10
|
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv26184/tests Modified Files: SQLObjectTest.py Log Message: Tweaked Firebird a little Index: SQLObjectTest.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/SQLObjectTest.py,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -d -r1.11 -r1.12 *** SQLObjectTest.py 6 Sep 2003 02:33:18 -0000 1.11 --- SQLObjectTest.py 6 Sep 2003 07:12:02 -0000 1.12 *************** *** 32,35 **** --- 32,42 ---- return PostgresConnection(db='test') + def pygresConnection(): + SQLObjectTest.supportDynamic = True + SQLObjectTest.supportAuto = True + SQLObjectTest.supportRestrictedEnum = True + SQLObjectTest.supportTransactions = True + return PostgresConnection(db='test', usePygresql=True) + def sqliteConnection(): SQLObjectTest.supportDynamic = False *************** *** 43,47 **** SQLObjectTest.supportAuto = False SQLObjectTest.supportRestrictedEnum = False ! return FirebirdConnection('localhost', 'data/sotest.gdb') supportedDatabases = ['mysql', 'postgres', 'sqlite', 'firebird', 'dbm'] --- 50,54 ---- SQLObjectTest.supportAuto = False SQLObjectTest.supportRestrictedEnum = False ! return FirebirdConnection('localhost', 'data/firebird.gdb') supportedDatabases = ['mysql', 'postgres', 'sqlite', 'firebird', 'dbm'] |
From: <ian...@us...> - 2003-09-06 03:05:16
|
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv26427/SQLObject Modified Files: SQLObject.py Log Message: Applied fix for [ 793467 ] buglet in set() method, with patch supplied by John A. Barbuto (jbarbuto) Index: SQLObject.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLObject.py,v retrieving revision 1.50 retrieving revision 1.51 diff -C2 -d -r1.50 -r1.51 *** SQLObject.py 1 Aug 2003 01:19:34 -0000 1.50 --- SQLObject.py 6 Sep 2003 03:05:12 -0000 1.51 *************** *** 706,710 **** # old setattr() to change the value, since we can't # read the user's mind. We'll combine everything ! # else into a single UPDATE. toUpdate = {} for name, value in kw.items(): --- 706,710 ---- # old setattr() to change the value, since we can't # read the user's mind. We'll combine everything ! # else into a single UPDATE, if necessary. toUpdate = {} for name, value in kw.items(): *************** *** 716,720 **** setattr(self, name, value) ! self._connection._SO_update(self, [(self._SO_columnDict[name].dbName, value) for name, value in toUpdate.items()]) self._SO_writeLock.release() --- 716,721 ---- setattr(self, name, value) ! if toUpdate: ! self._connection._SO_update(self, [(self._SO_columnDict[name].dbName, value) for name, value in toUpdate.items()]) self._SO_writeLock.release() |
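For reference, a sketch of the ``.set()`` call this buglet affected. The class and its custom setter are hypothetical; the batching behaviour is the one described in the comments above::

    class Person(SQLObject):
        firstName = StringCol(length=50)
        lastName = StringCol(length=50)

        def _get_fullName(self):
            return '%s %s' % (self.firstName, self.lastName)
        def _set_fullName(self, value):
            first, last = value.split(' ', 1)
            self.firstName = first
            self.lastName = last

    p = Person(1)
    p.set(firstName='John', lastName='Doe')  # both columns combined into one UPDATE
    p.set(fullName='Jane Doe')               # handled entirely by the custom setter;
                                             # the fix skips the then-empty UPDATE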
From: <ian...@us...> - 2003-09-06 02:37:45
|
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv22825/SQLObject Modified Files: DBConnection.py Log Message: Cleaned up imports, so we only import the drivers that we need. Index: DBConnection.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/DBConnection.py,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -d -r1.42 -r1.43 *** DBConnection.py 6 Sep 2003 02:33:17 -0000 1.42 --- DBConnection.py 6 Sep 2003 02:37:42 -0000 1.43 *************** *** 4,41 **** import threading import SQLBuilder from Cache import CacheSet import Col - try: - import cPickle as pickle - except ImportError: - import pickle - import os - import anydbm from Join import sorter ! try: ! import MySQLdb ! except ImportError: ! MySQLdb = None ! ! try: ! import psycopg ! except ImportError: ! psycopg = None ! ! try: ! import sqlite ! except ImportError: ! sqlite = None ! ! try: ! import kinterbasdb ! except ImportError: ! kinterbasdb = None ! ! import re ! import warnings ! import atexit warnings.filterwarnings("ignore", "DB-API extension cursor.lastrowid used") --- 4,24 ---- import threading + import re + import warnings + import atexit + import os import SQLBuilder from Cache import CacheSet import Col from Join import sorter ! # We set these up as globals, which will be set if we end up ! # needing the drivers: ! anydbm = None ! pickle = None ! MySQLdb = None ! psycopg = None ! sqlite = None ! kinterbasdb = None warnings.filterwarnings("ignore", "DB-API extension cursor.lastrowid used") *************** *** 361,365 **** def __init__(self, db, user, passwd='', host='localhost', **kw): ! assert MySQLdb, 'MySQLdb module cannot be found' self.host = host self.db = db --- 344,350 ---- def __init__(self, db, user, passwd='', host='localhost', **kw): ! global MySQLdb ! if MySQLdb is None: ! import MySQLdb self.host = host self.db = db *************** *** 446,450 **** def __init__(self, dsn=None, host=None, db=None, user=None, passwd=None, autoCommit=1, **kw): ! assert psycopg, 'psycopg module cannot be found' if not autoCommit and not kw.has_key('pool'): # Pooling doesn't work with transactions... --- 431,437 ---- def __init__(self, dsn=None, host=None, db=None, user=None, passwd=None, autoCommit=1, **kw): ! global psycopg ! if psycopg is None: ! import psycopg if not autoCommit and not kw.has_key('pool'): # Pooling doesn't work with transactions... *************** *** 569,573 **** def __init__(self, filename, autoCommit=1, **kw): ! assert sqlite, 'sqlite module cannot be found' self.filename = filename # full path to sqlite-db-file if not autoCommit and not kw.has_key('pool'): --- 556,562 ---- def __init__(self, filename, autoCommit=1, **kw): ! global sqlite ! if sqlite is None: ! import sqlite self.filename = filename # full path to sqlite-db-file if not autoCommit and not kw.has_key('pool'): *************** *** 620,624 **** def __init__(self, host, db, user='sysdba', passwd='masterkey', autoCommit=1, **kw): ! assert kinterbasdb, 'kinterbasdb module cannot be found' self.limit_re = re.compile('^\s*(select )(.*)', re.IGNORECASE) --- 609,615 ---- def __init__(self, host, db, user='sysdba', passwd='masterkey', autoCommit=1, **kw): ! global kinterbasdb ! if kinterbasdb is None: ! 
import kinterbasdb self.limit_re = re.compile('^\s*(select )(.*)', re.IGNORECASE) *************** *** 866,869 **** --- 857,868 ---- def __init__(self, path, **kw): + global anydbm, pickle + if anydbm is None: + import anydbm + if pickle is None: + try: + import cPickle as pickle + except ImportError: + import pickle self.path = path try: |
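The log message's "only import the drivers that we need" comes down to the small pattern below, shown in isolation with one of the same modules: bind the module to a placeholder at import time and do the real import on first use::

    anydbm = None                      # module-level placeholder

    def open_store(path):
        global anydbm
        if anydbm is None:
            import anydbm              # the import cost is paid only on the first call
        return anydbm.open(path, 'c')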
From: <ian...@us...> - 2003-09-06 02:33:22
|
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv22180/tests Modified Files: SQLObjectTest.py Log Message: Added Firebird support. Untested! Need to figure out how to set up Firebird on my computer... Index: SQLObjectTest.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/SQLObjectTest.py,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** SQLObjectTest.py 1 Aug 2003 01:29:28 -0000 1.10 --- SQLObjectTest.py 6 Sep 2003 02:33:18 -0000 1.11 *************** *** 39,43 **** return SQLiteConnection('data/sqlite.data') ! supportedDatabases = ['mysql', 'postgres', 'sqlite', 'dbm'] class SQLObjectTest(unittest.TestCase): --- 39,49 ---- return SQLiteConnection('data/sqlite.data') ! def firebirdConnection(): ! SQLObjectTest.supportDynamic = False ! SQLObjectTest.supportAuto = False ! SQLObjectTest.supportRestrictedEnum = False ! return FirebirdConnection('localhost', 'data/sotest.gdb') ! ! supportedDatabases = ['mysql', 'postgres', 'sqlite', 'firebird', 'dbm'] class SQLObjectTest(unittest.TestCase): |
From: <ian...@us...> - 2003-09-06 02:33:22
|
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv22180/SQLObject Modified Files: Col.py DBConnection.py Log Message: Added Firebird support. Untested! Need to figure out how to set up Firebird on my computer... Index: Col.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/Col.py,v retrieving revision 1.24 retrieving revision 1.25 diff -C2 -d -r1.24 -r1.25 *** Col.py 28 Jun 2003 22:21:21 -0000 1.24 --- Col.py 6 Sep 2003 02:33:17 -0000 1.25 *************** *** 162,165 **** --- 162,169 ---- return '' + def _firebirdType(self): + return self._sqlType() + + def mysqlCreateSQL(self): return ' '.join([self.dbName, self._mysqlType()] + self._extraSQL()) *************** *** 171,174 **** --- 175,181 ---- return ' '.join([self.dbName, self._sqliteType()] + self._extraSQL()) + def firebirdCreateSQL(self): + return ' '.join([self.dbName, self._firebirdType()] + self._extraSQL()) + class Col(object): *************** *** 258,261 **** --- 265,271 ---- def _postgresType(self): + return 'INT' + + def _firebirdType(self): return 'INT' Index: DBConnection.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/DBConnection.py,v retrieving revision 1.41 retrieving revision 1.42 diff -C2 -d -r1.41 -r1.42 *** DBConnection.py 18 Jul 2003 03:15:50 -0000 1.41 --- DBConnection.py 6 Sep 2003 02:33:17 -0000 1.42 *************** *** 29,32 **** --- 29,38 ---- except ImportError: sqlite = None + + try: + import kinterbasdb + except ImportError: + kinterbasdb = None + import re import warnings *************** *** 36,40 **** __all__ = ['MySQLConnection', 'PostgresConnection', 'SQLiteConnection', ! 'DBMConnection'] _connections = {} --- 42,46 ---- __all__ = ['MySQLConnection', 'PostgresConnection', 'SQLiteConnection', ! 'DBMConnection', 'FirebirdConnection'] _connections = {} *************** *** 605,608 **** --- 611,772 ---- # turn it into a boolean: return not not result + + ######################################## + ## Firebird connection + ######################################## + + class FirebirdConnection(DBAPI): + + def __init__(self, host, db, user='sysdba', + passwd='masterkey', autoCommit=1, **kw): + assert kinterbasdb, 'kinterbasdb module cannot be found' + + self.limit_re = re.compile('^\s*(select )(.*)', re.IGNORECASE) + + if not autoCommit and not kw.has_key('pool'): + # Pooling doesn't work with transactions... + kw['pool'] = 0 + + self.host = host + self.db = db + self.user = user + self.passwd = passwd + + DBAPI.__init__(self, **kw) + + def makeConnection(self): + return kinterbasdb.connect( + host = self.host, database = self.db, + user = self.user, password = self.passwd + ) + + def _queryInsertID(self, conn, table, idName, names, values): + """Firebird uses 'generators' to create new ids for a table. 
+ The users needs to create a generator named GEN_<tablename> + for each table this method to work.""" + + row = self.queryOne('SELECT gen_id(GEN_%s,1) FROM rdb$database' + % table) + new_key = row[0] + names.append('ID') + values.append(new_key) + qry = self._insertSQL(table, names, values) + if self.debug: + print 'QueryIns: %s' % q + self.query(qry) + return new_key + + def _queryAddLimitOffset(self, query, start, end): + """Firebird slaps the limit and offset (actually 'first' and + 'skip', respectively) statement right after the select.""" + if not start: + limit_str = "SELECT FIRST %i" % end + if not end: + limit_str = "SELECT SKIP %i" % start + else: + limit_str = "SELECT FIRST %i SKIP %i" % (end-start, start) + + match = self.limit_re.match(query) + if self.debug: + print query + if match and len(match.groups()) == 2: + return ' '.join([limit_str, match.group(1)]) + else: + return query + + def createTable(self, soClass): + self.query('CREATE TABLE %s (\n%s\n)' % \ + (soClass._table, self.createColumns(soClass))) + self.query("CREATE GENERATOR GEN_%s" % soClass._table) + + def createColumn(self, soClass, col): + return col.firebirdCreateSQL() + + def createIDColumn(self, soClass): + return '%s INT NOT NULL PRIMARY KEY' % soClass._idName + + def joinSQLType(self, join): + return 'INT NOT NULL' + + def tableExists(self, tableName): + # there's something in the database by this name...let's + # assume it's a table. By default, fb 1.0 stores EVERYTHING + # it cares about in uppercase. + result = self.queryOne("SELECT COUNT(rdb$relation_name) FROM rdb$relations WHERE rdb$relation_name = '%s'" + % tableName.upper()) + return result[0] + + def addColumn(self, tableName, column): + self.query('ALTER TABLE %s ADD COLUMN %s' % + (tableName, + column.firebirdCreateSQL())) + + def dropTable(self, tableName): + self.query("DROP TABLE %s" % tableName) + self.query("DROP GENERATOR GEN_%s" % tableName) + + def delColumn(self, tableName, column): + self.query('ALTER TABLE %s DROP COLUMN %s' % + (tableName, + column.dbName)) + + def columnsFromSchema(self, tableName, soClass): + #let's punt for now!!! 
+ return + keyQuery = """ + SELECT pg_catalog.pg_get_constraintdef(oid) as condef + FROM pg_catalog.pg_constraint r + WHERE r.conrelid = '%s'::regclass AND r.contype = 'f'""" + + colQuery = """ + SELECT a.attname, + pg_catalog.format_type(a.atttypid, a.atttypmod), a.attnotnull, + (SELECT substring(d.adsrc for 128) FROM pg_catalog.pg_attrdef d + WHERE d.adrelid=a.attrelid AND d.adnum = a.attnum) + FROM pg_catalog.pg_attribute a + WHERE a.attrelid ='%s'::regclass + AND a.attnum > 0 AND NOT a.attisdropped + ORDER BY a.attnum""" + + keyData = self.queryAll(keyQuery % tableName) + keyRE = re.compile("\((.+)\) REFERENCES (.+)\(") + keymap = {} + for (condef,) in keyData: + match = keyRE.search(condef) + if match: + field, reftable = match.groups() + keymap[field] = reftable.capitalize() + colData = self.queryAll(colQuery % tableName) + results = [] + for field, t, notnull, defaultstr in colData: + if field == 'id': + continue + colClass, kw = self.guessClass(t) + kw['name'] = soClass._style.dbColumnToPythonAttr(field) + kw['notNone'] = notnull + if defaultstr is not None: + kw['default'] = getattr(SQLBuilder.const, defaultstr) + if keymap.has_key(field): + kw['foreignKey'] = keymap[field] + results.append(colClass(**kw)) + return results + + def guessClass(self, t): + # ditto on the punting + return + if t.count('int'): + return Col.IntCol, {} + elif t.count('varying'): + return Col.StringCol, {'length': int(t[t.index('(')+1:-1])} + elif t.startswith('character('): + return Col.StringCol, {'length': int(t[t.index('(')+1:-1]), + 'varchar': False} + elif t=='text': + return Col.StringCol, {} + elif t.startswith('datetime'): + return Col.DateTimeCol, {} + else: + return Col.Col, {} + ######################################## |
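One practical consequence of the generator scheme described in the ``_queryInsertID`` docstring: a table that was not created through ``createTable()`` (which now issues ``CREATE GENERATOR`` itself) needs its generator set up by hand before the first insert. A sketch, with a hypothetical database path and table name::

    conn = FirebirdConnection('localhost', '/path/to/data.gdb',
                              user='sysdba', passwd='masterkey')
    # _queryInsertID() draws new ids from a generator named GEN_<tablename>:
    conn.query("CREATE GENERATOR GEN_person")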
From: <ian...@us...> - 2003-09-06 02:33:22
|
Update of /cvsroot/sqlobject/SQLObject/docs In directory sc8-pr-cvs1:/tmp/cvs-serv22180/docs Modified Files: Authors.txt Log Message: Added Firebird support. Untested! Need to figure out how to set up Firebird on my computer... Index: Authors.txt =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/docs/Authors.txt,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** Authors.txt 30 May 2003 02:20:22 -0000 1.2 --- Authors.txt 6 Sep 2003 02:33:17 -0000 1.3 *************** *** 11,12 **** --- 11,13 ---- * David M. Cook <da...@da...> * Luke Opperman <lu...@me...> + * James Ralston <jra...@ho...> |
From: <ian...@us...> - 2003-08-23 18:12:46
|
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv28122/SQLObject Modified Files: Converters.py Log Message: Added struct_time and datetime converter support Index: Converters.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/Converters.py,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -d -r1.2 -r1.3 *** Converters.py 1 Aug 2003 01:30:09 -0000 1.2 --- Converters.py 23 Aug 2003 18:12:43 -0000 1.3 *************** *** 11,15 **** origISOStr = None DateTimeType = None ! from types import InstanceType, ClassType, TypeType --- 11,19 ---- origISOStr = None DateTimeType = None ! import time ! try: ! import datetime ! except ImportError: ! datetime = None from types import InstanceType, ClassType, TypeType *************** *** 93,96 **** --- 97,122 ---- registerConverter(type(()), SequenceConverter) registerConverter(type([]), SequenceConverter) + + if hasattr(time, 'struct_time'): + def StructTimeConverter(value): + return time.strftime("'%Y-%m-%d %H:%M:%S'", value) + + registerConverter(time.struct_time, StructTimeConverter) + + if datetime: + def DateTimeConverter(value): + return value.strftime("'%Y-%m-%d %H:%M:%s'") + + registerConverter(datetime.datetime, DateTimeConverter) + + def TimeConverter(value): + return value.strftime("'%H:%M:%s'") + + registerConverter(datetime.time, TimeConverter) + + def DateConverter(value): + return value.strftime("'%Y-%m-%d'") + + registerConverter(datetime.date, DateConverter) def sqlRepr(obj): |
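A rough sketch of what the new converters enable, assuming the module is importable as ``SQLObject.Converters``: ``sqlRepr()`` now turns date/time values into quoted SQL literals::

    import time, datetime
    from SQLObject.Converters import sqlRepr

    print sqlRepr(datetime.date(2003, 8, 23))                  # "'2003-08-23'"
    print sqlRepr(datetime.datetime(2003, 8, 23, 18, 12, 43))  # quoted timestamp literal
    print sqlRepr(time.localtime())                            # struct_time, also a quoted timestamp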
From: <ian...@us...> - 2003-08-22 11:28:49
|
Update of /cvsroot/sqlobject/SQLObject/docs In directory sc8-pr-cvs1:/tmp/cvs-serv28975/docs Added Files: FAQ.txt Log Message: Added a FAQ, with requisite examples --- NEW FILE: FAQ.txt --- +++++++++++++ SQLObject FAQ +++++++++++++ .. contents:: How can I do a LEFT JOIN? ------------------------- The short: you can't. You don't need to. That's a relational way of thinking, not an object way of thinking. But it's okay! It's not hard to do the same thing, even if it's not with the same query. For these examples, imagine you have a bunch of customers, with contacts. Not all customers have a contact, some have several. The left join would look like:: SELECT customer.id, customer.first_name, customer.last_name, contact.id, contact.address FROM customer LEFT JOIN contact ON contact.customer_id = customer.id Simple ~~~~~~ .. raw:: html :file: ../examples/snippets/leftjoin-simple.html The effect is the same as the left join -- you get all the customers, and you get all their contacts. The problem, however, is that you will be executing more queries -- a query for each customer to fetch the contacts -- where with the left join you'd only do one query. The actual amount of information returned from the database will be the same. There's a good chance that this won't be significantly slower. I'd advise doing it this way unless you hit an actual performance problem. Efficient ~~~~~~~~~ Lets say you really don't want to do all those queries. Okay, fine: .. raw:: html :file: ../examples/snippets/leftjoin-more.html This way there will only be at most two queries. It's a little more crude, but this is an optimization, and optimizations often look less than pretty. But, say you don't want to get everyone, just some group of people (presumably a large enough group that you still need this optimization): .. raw:: html :file: ../examples/snippets/leftjoin-more-query.html How Does Inheritance Work? -------------------------- SQLObject is not intended to represent every Python inheritance structure in an RDBMS -- rather it is intended to represent RDBMS structures as Python objects. So lots of things you can do in Python you can't do with SQLObject classes. However, some form of inheritance is possible. One way of using this is to create local conventions. Perhaps: .. raw:: html :file: ../examples/snippets/site-sqlobject.html Since SQLObject doesn't have a firm introspection mechanism (at least not yet) the example shows the beginnings of a bit of ad hoc introspection (in this case exposing the ``_columns`` attribute in a more pleasing/public interface). However, this doesn't relate to *database* inheritance at all, since we didn't define any columns. What if we do? .. raw:: html :file: ../examples/snippets/inheritance.html Unfortunately, the resultant schema probably doesn't look like what you might have wanted: .. raw:: html :file: ../examples/snippets/inheritance-schema.html All the columns from ``person`` are just repeated in the ``employee`` table. What's more, an ID for a Person is distinct from an ID for an employee, so for instance you must choose ``ForeignKey("Person")`` or ``ForeignKey("Employee")``, you can't have a foreign key that sometimes refers to one, and sometimes refers to the other. Altogether, not very useful. You probably want a ``person`` table, and then an ``employee`` table with a one-to-one relation between the two. 
Of course, you can have that: just create the appropriate classes/tables -- but it will appear as two distinct classes, and you'd have to do something like ``Person(1).employee.position``. And you can always create the necessary shortcuts, like: .. raw:: html :file: ../examples/snippets/inheritance-faked.html It's not the most elegant setup, but it's functional and flexible. There are no plans for further support for inheritance (especially since the composition of multiple classes is usually a better solution anyway). Composite/Compound Attributes ----------------------------- A composite attribute is an attribute formed from two columns. For example: .. raw:: html :file: ../examples/snippets/composite-schema.html Now, you'll probably want to deal with one amount/currency value instead of two columns. SQLObject doesn't directly support this, but it's easy (and encouraged) to do it on your own: .. raw:: html :file: ../examples/snippets/composite.html You'll note we go to some trouble to make sure that ``Price`` is an immutable object. This is important, because if ``Price`` wasn't and someone changed an attribute, the containing ``InvoiceItem`` instance wouldn't detect the change and update the database. (Also, since ``Price`` doesn't subclass ``SQLObject``, we have to be explicit about creating properties.) Some people refer to this sort of class as a *Value Object*, one that can be used much like an integer or string. You could also use a mutable composite class: .. raw:: html :file: ../examples/snippets/composite-mutable.html Pretty much a proxy, really, but ``SOCoords`` could contain other logic, could interact with non-SQLObject-based latitude/longitude values, or could be used among several objects that have latitude/longitude columns. |
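The FAQ's examples are pulled in through ``.. raw:: html`` includes that are not part of this mail. As one concrete instance, the "Simple" left-join replacement from the first answer comes down to the following; it mirrors examples/leftjoin.py, committed the same day and shown further down this page::

    class Customer(SQLObject):
        firstName = StringCol(length=100)
        lastName = StringCol(length=100)
        contacts = MultipleJoin('Contact')

    class Contact(SQLObject):
        customer = ForeignKey('Customer')
        phoneNumber = StringCol(length=20)

    for customer in Customer.select():
        print customer.firstName, customer.lastName
        for contact in customer.contacts:        # one extra query per customer
            print ' ', contact.phoneNumber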
From: <ian...@us...> - 2003-08-22 03:23:41
|
Update of /cvsroot/sqlobject/SOWeb In directory sc8-pr-cvs1:/tmp/cvs-serv29475 Modified Files: index.html index.txt Log Message: Added FAQ (plus link+regen) Index: index.html =================================================================== RCS file: /cvsroot/sqlobject/SOWeb/index.html,v retrieving revision 1.7 retrieving revision 1.8 diff -C2 -d -r1.7 -r1.8 *** index.html 5 Jul 2003 18:43:39 -0000 1.7 --- index.html 21 Aug 2003 05:06:58 -0000 1.8 *************** *** 4,8 **** <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> ! <meta name="generator" content="Docutils 0.2.8: http://docutils.sourceforge.net/" /> <title>SQLObject</title> <link rel="stylesheet" href="default.css" type="text/css" /> --- 4,8 ---- <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> ! <meta name="generator" content="Docutils 0.3.0: http://docutils.sourceforge.net/" /> <title>SQLObject</title> <link rel="stylesheet" href="default.css" type="text/css" /> *************** *** 14,23 **** <p class="topic-title"><a name="contents">Contents</a></p> <ul class="simple"> ! <li><a class="reference" href="#introduction" id="id8" name="id8">Introduction</a></li> ! <li><a class="reference" href="#example" id="id9" name="id9">Example</a></li> ! <li><a class="reference" href="#community" id="id10" name="id10">Community</a></li> ! <li><a class="reference" href="#bugs-patches-etc" id="id11" name="id11">Bugs, patches, etc</a></li> ! <li><a class="reference" href="#download" id="id12" name="id12">Download</a></li> ! <li><a class="reference" href="#documentation" id="id13" name="id13">Documentation</a></li> </ul> </div> --- 14,23 ---- <p class="topic-title"><a name="contents">Contents</a></p> <ul class="simple"> ! <li><a class="reference" href="#introduction" id="id9" name="id9">Introduction</a></li> ! <li><a class="reference" href="#example" id="id10" name="id10">Example</a></li> ! <li><a class="reference" href="#community" id="id11" name="id11">Community</a></li> ! <li><a class="reference" href="#bugs-patches-etc" id="id12" name="id12">Bugs, patches, etc</a></li> ! <li><a class="reference" href="#download" id="id13" name="id13">Download</a></li> ! <li><a class="reference" href="#documentation" id="id14" name="id14">Documentation</a></li> </ul> </div> *************** *** 65,71 **** </pre> <p>SQLObject supports most database schemas that you already have, and ! can also issue the <tt class="literal"><span class="pre">CREATE</span></tt> statement for you. Postgres and SQLite are ! also supported, and SQLObject provides an abstraction layer that helps ! make your application much more portable between these databases.</p> <p>Here's how you'd use the object:</p> <pre class="literal-block"> --- 65,72 ---- </pre> <p>SQLObject supports most database schemas that you already have, and ! can also issue the <tt class="literal"><span class="pre">CREATE</span></tt> statement for you. Postgres and SQLite ! are also supported (with Sybase and Firebird in the working), and ! SQLObject provides an abstraction layer that helps make your ! 
application much more portable between these databases.</p> <p>Here's how you'd use the object:</p> <pre class="literal-block"> *************** *** 115,118 **** --- 116,120 ---- <div class="section" id="documentation"> <h1><a name="documentation">Documentation</a></h1> + <p><a class="reference" href="docs/FAQ.html">FAQ</a></p> <p><a class="reference" href="docs/News.html">New in 0.4</a></p> <p><a class="reference" href="docs/SQLObject.html">SQLObject documentation</a></p> Index: index.txt =================================================================== RCS file: /cvsroot/sqlobject/SOWeb/index.txt,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -d -r1.5 -r1.6 *** index.txt 5 Jul 2003 18:43:39 -0000 1.5 --- index.txt 21 Aug 2003 05:06:58 -0000 1.6 *************** *** 52,58 **** SQLObject supports most database schemas that you already have, and ! can also issue the ``CREATE`` statement for you. Postgres and SQLite are ! also supported, and SQLObject provides an abstraction layer that helps ! make your application much more portable between these databases. Here's how you'd use the object:: --- 52,59 ---- SQLObject supports most database schemas that you already have, and ! can also issue the ``CREATE`` statement for you. Postgres and SQLite ! are also supported (with Sybase and Firebird in the working), and ! SQLObject provides an abstraction layer that helps make your ! application much more portable between these databases. Here's how you'd use the object:: *************** *** 125,128 **** --- 126,133 ---- Documentation ============= + + `FAQ`__ + + __ docs/FAQ.html `New in 0.4`__ |
From: <ian...@us...> - 2003-08-22 00:56:09
|
Update of /cvsroot/sqlobject/SQLObject/examples In directory sc8-pr-cvs1:/tmp/cvs-serv28975/examples Modified Files: codebits.py Added Files: leftjoin.py Log Message: Added a FAQ, with requisite examples --- NEW FILE: leftjoin.py --- from SQLObject import * ## Use one of these to define your connection: """ conn = MySQLConnection(user='test', db='testdb') conn = PostgresConnection('user=test dbname=testdb') conn = SQLiteConnect('database.db') conn = DBMConnection('database/') """ __connection__ = MySQLConnection(user='test', db='test') class Customer(SQLObject): firstName = StringCol(length=100) lastName = StringCol(length=100) contacts = MultipleJoin('Contact') class Contact(SQLObject): customer = ForeignKey('Customer') phoneNumber = StringCol(length=20) Customer.dropTable(ifExists=True) Customer.createTable() Contact.dropTable(ifExists=True) Contact.createTable() data = [ ['Joe Henry', '384-374-3584', '984-384-8594', '384-957-3948'], ['Tim Jackson', '204-485-9384'], ['Jane Austin'], ] for insert in data: firstName, lastName = insert[0].split(' ', 1) customer = Customer.new(firstName=firstName, lastName=lastName) for number in insert[1:]: contact = Contact.new(customer=customer, phoneNumber=number) ## Snippet "leftjoin-simple" for customer in Customer.select(): print customer.firstName, customer.lastName for contact in customer.contacts: print ' ', contact.phoneNumber ## end snippet ## Snippet "leftjoin-more" custContacts = {} for contact in Contact.select(): custContacts.setdefault(contact.customerID, []).append(contact) for customer in Customer.select(): print customer.firstName, customer.lastName for contact in custContacts.get(customer.id, []): print ' ', contact.phoneNumber ## end snippet ## Snippet "leftjoin-more-query" query = Customer.q.firstName.startswith('J') custContacts = {} for contact in Contact.select(AND(Contact.q.customerID == Customer.q.id, query)): custContacts.setdefault(contact.customerID, []).append(contact) for customer in Customer.select(query): print customer.firstName, customer.lastName for contact in custContacts.get(customer.id, []): print ' ', contact.phoneNumber ## end snippet Index: codebits.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/examples/codebits.py,v retrieving revision 1.1 retrieving revision 1.2 diff -C2 -d -r1.1 -r1.2 *** codebits.py 28 Jun 2003 22:17:36 -0000 1.1 --- codebits.py 21 Aug 2003 05:02:17 -0000 1.2 *************** *** 81,82 **** --- 81,222 ---- ## end snippet + ## Snippet "site-sqlobject" + class SiteSQLObject(SQLObject): + _connection = DBConnection.MySQLConnection(user='test', db='test') + _style = MixedCaseStyle() + + # And maybe you want a list of the columns, to autogenerate + # forms from: + def columns(self): + return [col.name for col in self._columns] + ## end snippet + + ## Snippet "inheritance" + class Person(SQLObject): + firstName = StringCol() + lastName = StringCol() + + class Employee(Person): + position = StringCol() + ## end snippet + + """ + ## Snippet "inheritance-schema" + CREATE TABLE person ( + id INT PRIMARY KEY, + first_name TEXT, + last_name TEXT + ); + + CREATE TABLE employee ( + id INT PRIMARY KEY + first_name TEXT, + last_name TEXT, + position TEXT + ) + ## end snippet + """ + + ## Snippet "inheritance-faked" + class Person(SQLObject): + firstName = StringCol() + lastName = StringCol() + + def _get_employee(self): + value = Employee.selectBy(person=self) + if value: + return value[0] + else: + raise AttributeError, '%r is not an employee' % 
self + def _get_isEmployee(self): + value = Employee.selectBy(person=self) + # turn into a bool: + return not not value + def _set_isEmployee(self, value): + if value: + # Make sure we are an employee... + if not self.isEmployee: + Empoyee.new(person=self, position=None) + else: + if self.isEmployee: + self.employee.destroySelf() + def _get_position(self): + return self.employee.position + def _set_position(self, value): + self.employee.position = value + + class Employee(SQLObject): + person = ForeignKey('Person') + position = StringCol() + ## end snippet + + """ + ## Snippet "composite-schema" + CREATE TABLE invoice_item ( + id INT PRIMARY KEY, + amount NUMERIC(10, 2), + currency CHAR(3) + ); + ## end snippet + """ + + ## Snippet "composite" + class InvoiceItem(SQLObject): + amount = Currency() + currency = StringChar(length=3) + + def _get_price(self): + return Price(self.amount, self.currency) + def _set_price(self, price): + self.amount = price.amount + self.currency = price.currency + + class Price(object): + def __init__(self, amount, currency): + self._amount = amount + self._currency = currency + + def _get_amount(self): + return self._amount + amount = property(_get_amount) + + def _get_currency(self): + return self._currency + currency = property(_get_currency) + + def __repr__(self): + return '<Price: %s %s>' % (self.amount, self.currency) + ## end snippet + + ## Snippet "composite-mutable" + class Address(SQLObject): + street = StringCol() + city = StringCol() + state = StringCol(length=2) + + latitude = FloatCol() + longitude = FloatCol() + + def _init(self, id): + SQLObject._init(self, id) + self._coords = SOCoords(self) + + def _get_coords(self): + return self._coords + + class SOCoords(object): + def __init__(self, so): + self._so = so + + def _get_latitude(self): + return self._so.latitude + def _set_latitude(self, value): + self._so.latitude = value + latitude = property(_get_latitude, set_latitude) + + def _get_longitude(self): + return self._so.longitude + def _set_longitude(self, value): + self._so.longitude = value + longitude = property(_get_longitude, set_longitude) + ## end snippet |
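A short usage sketch for the "composite" snippet above; the row id and values are made up::

    item = InvoiceItem(1)            # fetch an existing row
    print item.price                 # <Price: 400.00 USD>
    item.price = Price(25, 'EUR')    # one assignment updates both underlying columns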
From: <dre...@us...> - 2003-08-19 12:53:13
|
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv23081/tests Modified Files: Tag: sybase-support-branch SQLObjectTest.py test.py Log Message: Branching for adding Sybase support Index: SQLObjectTest.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/SQLObjectTest.py,v retrieving revision 1.10 retrieving revision 1.10.2.1 diff -C2 -d -r1.10 -r1.10.2.1 *** SQLObjectTest.py 1 Aug 2003 01:29:28 -0000 1.10 --- SQLObjectTest.py 19 Aug 2003 12:53:10 -0000 1.10.2.1 *************** *** 39,43 **** return SQLiteConnection('data/sqlite.data') ! supportedDatabases = ['mysql', 'postgres', 'sqlite', 'dbm'] class SQLObjectTest(unittest.TestCase): --- 39,54 ---- return SQLiteConnection('data/sqlite.data') ! def sybaseConnection(): ! SQLObjectTest.supportDynamic = False ! SQLObjectTest.supportAuto = False ! SQLObjectTest.supportRestrictedEnum = False ! SQLObjectTest.supportTransactions = True ! return SybaseConnection(host='muppet', ! db='test', ! user='sa', ! passwd='sybasesa', ! autoCommit=1) ! ! supportedDatabases = ['mysql', 'postgres', 'sqlite', 'dbm', 'sybase'] class SQLObjectTest(unittest.TestCase): *************** *** 54,57 **** --- 65,69 ---- print '#' * 70 unittest.TestCase.setUp(self) + #__connection__.debug = self.debugSQL for c in self.classes: Index: test.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/test.py,v retrieving revision 1.21 retrieving revision 1.21.2.1 diff -C2 -d -r1.21 -r1.21.2.1 *** test.py 17 Jul 2003 01:20:18 -0000 1.21 --- test.py 19 Aug 2003 12:53:10 -0000 1.21.2.1 *************** *** 199,202 **** --- 199,203 ---- ######################################## + class DeleteSelectTest(TestCase1): *************** *** 209,212 **** --- 210,214 ---- self.assertEqual(list(TestSO1.select('all')), []) + ######################################## ## Transaction test *************** *** 410,413 **** --- 412,426 ---- """ + sybaseCreate = """ + CREATE TABLE auto_test ( + id integer, + first_name VARCHAR(100), + last_name VARCHAR(200) NOT NULL, + age INT DEFAULT 0, + created VARCHAT(40) NOT NULL, + happy char(1) DEFAULT 'Y' NOT NULL + ) + """ + mysqlDrop = """ DROP TABLE IF EXISTS auto_test *************** *** 415,418 **** --- 428,435 ---- postgresDrop = """ + DROP TABLE auto_test + """ + + sybaseDrop = """ DROP TABLE auto_test """ |
From: <dre...@us...> - 2003-08-19 12:53:13
|
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv23081/SQLObject Modified Files: Tag: sybase-support-branch Col.py DBConnection.py SQLObject.py Log Message: Branching for adding Sybase support Index: Col.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/Col.py,v retrieving revision 1.24 retrieving revision 1.24.2.1 diff -C2 -d -r1.24 -r1.24.2.1 *** Col.py 28 Jun 2003 22:21:21 -0000 1.24 --- Col.py 19 Aug 2003 12:53:10 -0000 1.24.2.1 *************** *** 162,165 **** --- 162,168 ---- return '' + def _sybaseType(self): + return self._sqlType() + def mysqlCreateSQL(self): return ' '.join([self.dbName, self._mysqlType()] + self._extraSQL()) *************** *** 171,174 **** --- 174,180 ---- return ' '.join([self.dbName, self._sqliteType()] + self._extraSQL()) + def sybaseCreateSQL(self): + return ' '.join([self.dbName, self._sybaseType()] + self._extraSQL()) + class Col(object): *************** *** 306,309 **** --- 312,318 ---- return self._postgresType() + def _sybaseType(self): + return self._postgresType() + class EnumCol(Col): baseClass = SOEnumCol *************** *** 320,325 **** --- 329,354 ---- return 'TIMESTAMP' + def _sybaseType(self): + return 'TIMESTAMP' + class DateTimeCol(Col): baseClass = SODateTimeCol + + class SODateCol(SOCol): + + # 3-03 @@: provide constraints; right now we let the database + # do any parsing and checking. And DATE and TIME? + + def _mysqlType(self): + return 'DATE' + + def _postgresType(self): + return 'DATE' + + def _sybaseType(self): + return 'DATE' + + class DateCol(Col): + baseClass = SODateCol class SODecimalCol(SOCol): Index: DBConnection.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/DBConnection.py,v retrieving revision 1.41 retrieving revision 1.41.2.1 diff -C2 -d -r1.41 -r1.41.2.1 *** DBConnection.py 18 Jul 2003 03:15:50 -0000 1.41 --- DBConnection.py 19 Aug 2003 12:53:10 -0000 1.41.2.1 *************** *** 7,10 **** --- 7,11 ---- from Cache import CacheSet import Col + try: import cPickle as pickle *************** *** 29,32 **** --- 30,43 ---- except ImportError: sqlite = None + + try: + import Sybase + from Sybase import NumericType + from Converters import registerConverter, IntConverter + registerConverter(NumericType, IntConverter) + + except ImportError: + Sybase = None + import re import warnings *************** *** 36,40 **** __all__ = ['MySQLConnection', 'PostgresConnection', 'SQLiteConnection', ! 'DBMConnection'] _connections = {} --- 47,51 ---- __all__ = ['MySQLConnection', 'PostgresConnection', 'SQLiteConnection', ! 'DBMConnection', 'SybaseConnection'] _connections = {} *************** *** 605,608 **** --- 616,720 ---- # turn it into a boolean: return not not result + + class SybaseConnection(DBAPI): + + def __init__(self, db, user, passwd='', host='localhost', autoCommit=0, **kw): + assert Sybase, 'Sybase module cannot be found' + if not autoCommit and not kw.has_key('pool'): + # Pooling doesn't work with transactions... + kw['pool'] = 0 + self.autoCommit=autoCommit + self.host = host + self.db = db + self.user = user + self.passwd = passwd + DBAPI.__init__(self, **kw) + + def insert_id(self, conn): + """ Sybase adapter/cursor does not support the + insert_id method. 
+ """ + c = conn.cursor() + c.execute('SELECT @@IDENTITY') + return c.fetchone()[0] + + def makeConnection(self): + return Sybase.connect(self.host, self.user, self.passwd, + database=self.db, auto_commit=self.autoCommit) + + def _queryInsertID(self, conn, table, idName, names, values): + c = conn.cursor() + q = self._insertSQL(table, names, values) + if self.debug: + print 'QueryIns: %s' % q + c.execute(q) + return self.insert_id(conn) + + def _queryAddLimitOffset(self, query, start, end): + if not start: + return "%s LIMIT %i" % (query, end) + if not end: + return "%s LIMIT %i, -1" % (query, start) + return "%s LIMIT %i, %i" % (query, start, end-start) + + def createColumn(self, soClass, col): + return col.sybaseCreateSQL() + + def createIDColumn(self, soClass): + #return '%s INT PRIMARY KEY AUTO_INCREMENT' % soClass._idName + return '%s NUMERIC(18,0) IDENTITY' % soClass._idName + + def joinSQLType(self, join): + return 'NUMERIC(18,0) NOT NULL' #INT NOT NULL' + + SHOW_TABLES="""SELECT name FROM sysobjects WHERE type='U'""" + def tableExists(self, tableName): + for (table,) in self.queryAll(self.SHOW_TABLES): + if table.lower() == tableName.lower(): + return True + return False + + def addColumn(self, tableName, column): + self.query('ALTER TABLE %s ADD COLUMN %s' % + (tableName, + column.sybaseCreateSQL())) + + def delColumn(self, tableName, column): + self.query('ALTER TABLE %s DROP COLUMN %s' % + (tableName, + column.dbName)) + + SHOW_COLUMNS="""select 'column' = COL_NAME(id, colid) + from syscolumns + where id = OBJECT_ID(%s) + """ + def columnsFromSchema(self, tableName, soClass): + colData = self.queryAll(self.SHOW_COLUMNS + % tableName) + results = [] + for field, t, nullAllowed, key, default, extra in colData: + if field == 'id': + continue + colClass, kw = self.guessClass(t) + kw['name'] = soClass._style.dbColumnToPythonAttr(field) + kw['notNone'] = not nullAllowed + kw['default'] = default + # @@ skip key... + # @@ skip extra... + results.append(colClass(**kw)) + return results + + def guessClass(self, t): + if t.startswith('int'): + return Col.IntCol, {} + elif t.startswith('varchar'): + return Col.StringCol, {'length': int(t[8:-1])} + elif t.startswith('char'): + return Col.StringCol, {'length': int(t[5:-1]), + 'varchar': False} + elif t.startswith('datetime'): + return Col.DateTimeCol, {} + else: + return Col.Col, {} ######################################## Index: SQLObject.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLObject.py,v retrieving revision 1.50 retrieving revision 1.50.2.1 diff -C2 -d -r1.50 -r1.50.2.1 *** SQLObject.py 1 Aug 2003 01:19:34 -0000 1.50 --- SQLObject.py 19 Aug 2003 12:53:10 -0000 1.50.2.1 *************** *** 801,804 **** --- 801,805 ---- # Then we finalize the process: + #import pdb; pdb.set_trace() inst._SO_finishCreate() return inst |
From: <ian...@us...> - 2003-08-01 02:16:40
|
Update of /cvsroot/sqlobject/SQLObject/tests In directory sc8-pr-cvs1:/tmp/cvs-serv32162/tests Modified Files: SQLObjectTest.py Log Message: Added missing attribute supportTransactions Index: SQLObjectTest.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/tests/SQLObjectTest.py,v retrieving revision 1.9 retrieving revision 1.10 diff -C2 -d -r1.9 -r1.10 *** SQLObjectTest.py 21 Apr 2003 22:37:17 -0000 1.9 --- SQLObjectTest.py 1 Aug 2003 01:29:28 -0000 1.10 *************** *** 10,13 **** --- 10,15 ---- # care when you assign incorrect to an ENUM... SQLObjectTest.supportRestrictedEnum = False + # Technically it does, but not how we're using it: + SQLObjectTest.supportTransactions = False return MySQLConnection(host='localhost', db='test', *************** *** 20,23 **** --- 22,26 ---- SQLObjectTest.supportAuto = False SQLObjectTest.supportRestrictedEnum = False + SQLObjectTest.supportTransactions = False return DBMConnection('data') *************** *** 26,29 **** --- 29,33 ---- SQLObjectTest.supportAuto = True SQLObjectTest.supportRestrictedEnum = True + SQLObjectTest.supportTransactions = True return PostgresConnection(db='test') *************** *** 32,35 **** --- 36,40 ---- SQLObjectTest.supportAuto = False SQLObjectTest.supportRestrictedEnum = False + SQLObjectTest.supportTransactions = True return SQLiteConnection('data/sqlite.data') |
From: <ian...@us...> - 2003-08-01 02:04:03
|
Update of /cvsroot/sqlobject/SQLObject/SQLObject In directory sc8-pr-cvs1:/tmp/cvs-serv32212/SQLObject Modified Files: SQLBuilder.py Log Message: Make sure __getattr__ doesn't catch magic methods Index: SQLBuilder.py =================================================================== RCS file: /cvsroot/sqlobject/SQLObject/SQLObject/SQLBuilder.py,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -d -r1.10 -r1.11 *** SQLBuilder.py 31 Jul 2003 14:25:32 -0000 1.10 --- SQLBuilder.py 1 Aug 2003 01:29:46 -0000 1.11 *************** *** 302,305 **** --- 302,307 ---- class TableSpace: def __getattr__(self, attr): + if attr.startswith('__'): + raise AttributeError return Table(attr) *************** *** 308,311 **** --- 310,315 ---- self.tableName = tableName def __getattr__(self, attr): + if attr.startswith('__'): + raise AttributeError return Field(self.tableName, attr) def sqlRepr(self): *************** *** 321,324 **** --- 325,330 ---- def __getattr__(self, attr): + if attr.startswith('__'): + raise AttributeError if attr == 'id': return SQLObjectField(self.tableName, self.soClass._idName, attr) *************** *** 348,351 **** --- 354,359 ---- class ConstantSpace: def __getattr__(self, attr): + if attr.startswith('__'): + raise AttributeError return SQLConstant(attr) |
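To see why the guard matters, here is a small standalone illustration (not SQLObject code) of what a catch-all ``__getattr__`` does to Python's own double-underscore probes::

    import copy

    class Namespace:
        def __getattr__(self, attr):
            return 'field:%s' % attr    # claims every attribute exists

    ns = Namespace()
    copy.deepcopy(ns)   # deepcopy asks for ns.__deepcopy__, gets a string back,
                        # tries to call it, and fails with a TypeError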