I have done a major rewrite of google-goopy. Changes
include:
* All functions that take a sequence now accept either
a list or an iterator. (But they still work on
versions of Python that don't have iterators.)
* All type checking is now "duck typing". Anywhere you
can use a list, you can now use a user-defined class
that implements list behavior. (I've included
simplified sketches of this and several of the other
changes after this list.)
* Several functions that used "for i in xrange" and
then indexed with "x = lst[i]" have been rewritten to
use "for x in lst". This is faster, and it also works
with iterators. The speedup is most noticeable in
maximum() and minimum(). (Sketched below.)
* Functions that return a boolean now use Python's
built-in True and False values if available; if not,
they fall back on module-level definitions of True and
False. (Sketched below.)
* I have rewritten the docstrings, and IMHO they are
better now. They follow PEP 257 conventions. (Example
below.)
* I have added a bunch of test cases.
* I added new functions, compatible with some new
functions to be added to Python 2.5:
    any(), all(), max(), min()
max() and min() accept a "key=" argument. When the
module is compiled under Python 2.5 or later, the
built-in versions of these functions will be used
instead, automatically. (The fallback pattern is
sketched below.)
* remove_duplicates() and intersection() now degrade
gracefully when they encounter unhashable elements
(e.g. lists). The old versions would give up
completely, abandoning any work already done, and
start over from scratch whenever they encountered an
unhashable element; the new versions simply handle
unhashable elements on a case-by-case basis. This
causes a slight slowdown in some typical cases, but
any time you have a list with many hashable elements
and a few unhashable ones, it is a huge win. (Sketched
below.)
* If the module is compiled under a version of Python
with a built-in sum(), the built-in one will be used
automatically. This gives a noticeable speedup in the
statistics functions. Also, the Python version of
sum() now has an optional start parameter, for
compatibility with Python's built-in sum(). (Sketched
below.)
* The new version of lebesgue_norm() is noticeably
faster, because it avoids some needless allocation and
deallocation of memory. (Sketched below.)
* I wrote a benchmarking module, and checked that the
new functions perform acceptably. For the most part,
they are the same as the originals, or faster.
flatten1() and flatten() are slower now, but not slow;
it's just that the original functions were so
lightweight that the new type checks actually cause a
noticeable hit.
* The new functions have been tested, and benchmarked,
under Python 2.1, 2.2, 2.3, and 2.4.
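
Here are the sketches. They illustrate the ideas; they
are simplified, not the exact code in fun.py.

The duck-typing change: instead of checking types up
front, the functions just use the object, so anything
that behaves like a list works. (first_old and first_new
are made-up names for illustration.)

    # Before: only a real list was accepted.
    # (Hypothetical example, not a real goopy function.)
    def first_old(lst):
        if type(lst) != type([]):
            raise TypeError("lst must be a list")
        return lst[0]

    # After: anything indexable works -- a UserList, an
    # array.array, or a user-defined class.
    def first_new(lst):
        return lst[0]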
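
The loop rewrite, shown on a simplified maximum():

    # Before: index-based access; needs len() and
    # indexing, so it fails on iterators.
    def maximum_old(lst):
        best = lst[0]
        for i in xrange(1, len(lst)):
            if lst[i] > best:
                best = lst[i]
        return best

    # After: direct iteration; faster, and it works on
    # any iterable.
    def maximum_new(seq):
        it = iter(seq)
        best = it.next()
        for x in it:
            if x > best:
                best = x
        return best

(iter() and .next() need Python 2.2; the real code has
to run under 2.1 too, so it is a little more involved.)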
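
The True/False fallback, more or less:

    # Use the built-in True and False if this version of
    # Python has them; otherwise define compatible
    # integer values.
    try:
        True, False
    except NameError:
        True, False = 1, 0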
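
The docstring style, per PEP 257: a one-line summary,
then a blank line, then details. The wording here is
made up, but the shape is:

    def maximum(seq):
        """Return the largest element of seq.

        seq may be a list, a tuple, an iterator, or any
        user-defined object that behaves like a sequence.
        """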
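
The pattern for the Python-2.5-compatible functions,
shown for any(); all(), max(), and min() follow the same
shape, and fun.py's versions may differ in detail:

    # Pure-Python fallback.
    def any(seq):
        """Return true if any element of seq is true."""
        for x in seq:
            if x:
                return True
        return False

    # If this Python already has a built-in any() (2.5
    # and later), use it instead, automatically.
    import __builtin__
    if hasattr(__builtin__, 'any'):
        any = __builtin__.any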
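
The graceful degradation, shown on a simplified
remove_duplicates(): hash the elements we can, and fall
back to a slower linear scan only for the ones we can't.

    def remove_duplicates(seq):
        seen = {}
        unhashable = []
        result = []
        for x in seq:
            try:
                if not seen.has_key(x):
                    seen[x] = 1
                    result.append(x)
            except TypeError:
                # x is unhashable (e.g. a list); handle
                # it case by case with a linear scan.
                if x not in unhashable:
                    unhashable.append(x)
                    result.append(x)
        return result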
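
The sum() arrangement, roughly:

    # Pure-Python sum(), with the same optional "start"
    # argument as the built-in.
    def sum(seq, start=0):
        total = start
        for x in seq:
            total = total + x
        return total

    # Prefer the built-in sum() when it exists (Python
    # 2.3 and later).
    import __builtin__
    if hasattr(__builtin__, 'sum'):
        sum = __builtin__.sum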
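
And the lebesgue_norm() speedup, roughly: keep a running
total instead of building a temporary list of the
powered values. (A simplified sketch; the signature and
details here are guesses.)

    # Before: allocates, and then frees, a whole
    # intermediate list.
    def lebesgue_norm_old(vec, p):
        powered = [abs(x) ** p for x in vec]
        return sum(powered) ** (1.0 / p)

    # After: one running total, no intermediate list.
    def lebesgue_norm_new(vec, p):
        total = 0.0
        for x in vec:
            total = total + abs(x) ** p
        return total ** (1.0 / p)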
The attached file is called "fun.tar.gz". It contains
the new version of goopy in a file called fun.py, the
new unittest file, the new benchmark tool, notes in a
file called fun.txt, and a few other files.
If you have any questions, please feel free to contact
me.