From: Andy D. <adu...@co...> - 2000-04-10 16:34:08
On Sun, 9 Apr 2000, Tim Churches wrote:

> I've been experimenting with pulling quantitative data out of a MySQL
> table into NumPy arrays via Andy Dustman's excellent MySQLdb module and
> then calculating various statistics from the data using Gary Strangman's
> excellent stats.py functions, which when operating on NumPy arrays are
> lightning-fast.
>
> The problem is the speed with which data can be extracted from a column
> of a MySQL (or any other SQL database) query result set and stuffed into
> a NumPy array. This inevitably involves forming a Python list and then
> assigning that to a NumPy array. This is both slow and memory-hungry,
> especially with large datasets (I have been playing with a few million
> rows).
>
> I was wondering if it would be feasible to initially add a method to the
> _mysql class in the MySQLdb module which iterated through a result set
> using a C routine (rather than a Python routine) and stuffed the data
> directly into a NumPy array (or arrays - one for each column in the
> result set) in one fell swoop (or even iterating row-by-row but in C)? I
> suspect that such a facility would be much faster than having to move
> the data into NumPy via a standard Python list (or actually via tuples
> within a list, which is the way the Python DB-API returns results).
>
> If this direct MySQL-to-NumPy interface worked well, it might be
> desirable to add it to the Python DB-API specification for optional
> implementation in the other database modules which conform to the API.
> There are probably other extensions which would make the DB-API more
> useful for statistical applications, which tend to be set
> (column)-oriented rather than row-oriented - will post to the list as
> these occur to me.

It might be possible to do something like this. I would prefer that such a feature work as a separate module (i.e. I don't think it is generally applicable to MySQLdb/_mysql). Or perhaps it could be a compile-time option for _mysql (-DUSE_NUMPY).

The object that you want to mess with is the _mysql result object. It contains an attribute MYSQL_RES *result, which is a pointer to the actual MySQL structure. I don't remember if NumPy arrays are extensible or not, i.e. can rows be appended? That would affect the design. If they are not extensible, then you are probably limited to using mysql_store_result() (result set stored on the client side), as opposed to mysql_use_result() (result set stored on the server side). mysql_store_result() is probably preferable in this case anyway, so extensibility doesn't matter: we can find the size of the result set in advance with mysql_num_rows(), and then we know the full size of the array.

However, with very large result sets, it may be necessary to use mysql_use_result(), in which case the array will need to be extended, possibly row-by-row.

I could do this, but I need to know how to create and assign values to a NumPy array from within C. Or perhaps an initial (empty) array with the correct number of columns can be passed in. I am pretty sure NumPy arrays look like sequences (of sequences), so assignment should not be a big problem. The easiest solution (for me, and the one that puts the least bloat in _mysql) would be for the user to pass in a NumPy array.

Question: would it be adequate to put all returned columns into the array? If label columns need to be returned, this could pose a problem; they may have to be returned by a separate query. Or else non-numeric columns would be excluded and returned in a list of tuples (this would be harder).

I suspect the existing cursor.executemany() is capable of INSERTing and UPDATEing NumPy arrays.

--
andy dustman | programmer/analyst | comstar.net, inc.
telephone: 770.485.6025 / 706.549.7689 | icq: 32922760 | pgp: 0xc72f3f1d
"Therefore, sweet knights, if you may doubt your strength or courage, come no further, for death awaits you all, with nasty, big, pointy teeth!"
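As a sketch of that last point (the table and column names are invented purely for illustration): DB-API executemany() only needs a sequence of parameter rows, and a two-dimensional NumPy array already behaves like a sequence of sequences, so something along these lines should in principle work.

    import MySQLdb
    import Numeric

    a = Numeric.array([[1.0, 2.0],
                       [3.0, 4.0],
                       [5.0, 6.0]])           # 3 rows x 2 columns

    conn = MySQLdb.connect(db="trials")       # hypothetical database
    cur = conn.cursor()
    # Each row of the array supplies one parameter tuple for the INSERT.
    cur.executemany("INSERT INTO measurements (weight, height) VALUES (%s, %s)", a)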
From: Paul F. D. <pau...@ho...> - 2000-04-10 16:13:02
I have sent this out before but here it is again. It is a beta of a missing-observation class. Please help me refine it and complete it. I intend to add it to the numpy distribution since this facility is much-requested. MAtest.py shows how to use it.

The intention is that it is used the same way you use a Numeric array, and in fact if there are no masked values there isn't a lot of overhead. The basic concept is that each MA holds an array and a mask that indicates which values of the array are valid. Note the change in semantics for indexing shown below. Later I imagine creating a compiled extension class for bit masks to improve the space and time efficiency.

Paul

    # Note copy semantics here differ from Numeric
    def __getitem__(self, i):
        m = self.__mask
        if m is None:
            return Numeric.array(self.__data[i])
        else:
            return MA(Numeric.array(self.__data[i]), Numeric.array(m[i]))

    def __getslice__(self, i, j):
        m = self.__mask
        if m is None:
            return Numeric.array(self.__data[i:j])
        else:
            return MA(Numeric.array(self.__data[i:j]), Numeric.array(m[i:j]))
    # --------
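A minimal usage sketch of the class as posted (the MA(data, mask) constructor signature and the mask polarity are inferred from the methods and description above, so treat both as assumptions; MAtest.py in the distribution is the real reference):

    import Numeric
    from MA import MA                       # assumed module/class name for the posted beta

    data = Numeric.array([1.0, 2.0, 99.0, 4.0])
    mask = Numeric.array([1, 1, 0, 1])      # assumption: flags which entries are valid

    x = MA(data, mask)                      # constructor signature inferred from __getitem__ above
    y = x[1:3]                              # indexing copies, and yields another MA when a mask is present
    z = x[0]                                # single-element access goes through the same copy semantics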
From: Martin M. <mae...@st...> - 2000-04-10 08:39:42
>>>>> "TimC" == gestalt-system-discuss-admin <ges...@li...> writes:

    TimC> Date: Sun, 09 Apr 2000 01:07:13 +1000
    TimC> From: Tim Churches <tc...@bi...>
    TimC> Organization: Gestalt Institute
    TimC> To: st...@nm..., st...@bu..., ges...@li..., num...@li...

    TimC> I'm a new user of NumPy so forgive me if this is a FAQ.
    ......
    TimC> I've been experimenting with using Gary Strangman's excellent stats.py
    TimC> functions. The speed of these functions when operating on NumPy arrays
    TimC> and the ability of NumPy to swallow very large arrays is remarkable.
    TimC> However, one deficiency I have noticed is the lack of the ability
    TimC> to represent nulls (i.e. missing values, None or NaN
    TimC> [Not-a-Number]) in NumPy arrays. Missing values commonly occur in
    TimC> real-life statistical data and although they are usually excluded
    TimC> from most statistical calculations, it is important to be able to
    TimC> keep track of the number of missing data elements and report
    TimC> this.

I'm just a recent "listener" on gestalt-system-discuss, and don't even have any python experience. I'm a member of the R core team (www.r-project.org). In R (and even in S-plus, but almost invisibly there), we even differentiate between "NA" (missing / not available) and "NaN" (the IEEE result of 0/0, etc.). I'd very much like to have these kept distinct, as in R. I think our implementation of these is quite efficient, implementing NA as one particular bit pattern from the whole possible NaN set. We use code like the following (R source, src/main/arithmetic.c):

    static double R_ValueOfNA(void)
    {
        ieee_double x;
        x.word[hw] = 0x7ff00000;
        x.word[lw] = 1954;
        return x.value;
    }

    int R_IsNA(double x)
    {
        if (isnan(x)) {
            ieee_double y;
            y.value = x;
            return (y.word[lw] == 1954);
        }
        return 0;
    }

Martin Maechler <mae...@st...>  http://stat.ethz.ch/~maechler/

    TimC> Because NumPy arrays can't represent missing data via a
    TimC> special value, it is necessary to exclude missing data elements
    TimC> from NumPy arrays and keep track of them elsewhere (in standard
    TimC> Python lists). This is messy. Also, it is quite common to use
    TimC> various imputation techniques to estimate the values of missing
    TimC> data elements - the ability to represent missing data in a NumPy
    TimC> array and then change it to an imputed value would be a real
    TimC> boon.
From: Charles G W. <cg...@fn...> - 2000-04-08 23:01:12
I can't import "Matrix" due to the following cruft:

    __id__ = """ $Id: Matrix.py,v 1.1.1.1 2000/01/13 21:23:06 dubois Exp $ """[1:-1]
    import string
    __version__ = int(__id__[string.index(__id__, '#')+1:-1])

You can't count on the CVS Id having a "#" character in it; each time the file is checked in and out of CVS the Id is rewritten. I don't think this trick can be made to work. I think what is needed is either a simpler, more failsafe way of setting __version__, or simply to eliminate __version__ altogether.
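One possible shape for the "simpler, more failsafe" option (a sketch only, not what was actually adopted): take the leading integer of the CVS revision field if it parses, and fall back to a fixed default otherwise, so a rewritten Id string can never break the import.

    import string

    __id__ = "$Id: Matrix.py,v 1.1.1.1 2000/01/13 21:23:06 dubois Exp $"

    try:
        # The revision is the third whitespace-separated field, e.g. "1.1.1.1";
        # any surprise in the string just means the fallback value is used.
        rev = string.split(__id__)[2]
        __version__ = string.atoi(string.split(rev, '.')[0])
    except (IndexError, ValueError):
        __version__ = 1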
From: Tim C. <tc...@bi...> - 2000-04-08 12:37:58
I've been experimenting with pulling quantitative data out of a MySQL table into NumPy arrays via Andy Dustman's excellent MySQLdb module and then calculating various statistics from the data using Gary Strangman's excellent stats.py functions, which when operating on NumPy arrays are lightning-fast.

The problem is the speed with which data can be extracted from a column of a MySQL (or any other SQL database) query result set and stuffed into a NumPy array. This inevitably involves forming a Python list and then assigning that to a NumPy array. This is both slow and memory-hungry, especially with large datasets (I have been playing with a few million rows).

I was wondering if it would be feasible to initially add a method to the _mysql class in the MySQLdb module which iterated through a result set using a C routine (rather than a Python routine) and stuffed the data directly into a NumPy array (or arrays - one for each column in the result set) in one fell swoop (or even iterating row-by-row but in C)? I suspect that such a facility would be much faster than having to move the data into NumPy via a standard Python list (or actually via tuples within a list, which is the way the Python DB-API returns results).

If this direct MySQL-to-NumPy interface worked well, it might be desirable to add it to the Python DB-API specification for optional implementation in the other database modules which conform to the API. There are probably other extensions which would make the DB-API more useful for statistical applications, which tend to be set (column)-oriented rather than row-oriented - I will post to the list as these occur to me.

Cheers,
Tim Churches

PS: I will be away for the next week, so apologies in advance for not replying immediately to any follow-ups to this posting.

TC
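To make the bottleneck concrete, the pure-Python path being described looks roughly like this (a sketch; the connection details, table and column names are invented for illustration):

    import MySQLdb
    import Numeric

    conn = MySQLdb.connect(db="trials")                  # hypothetical database
    cur = conn.cursor()
    cur.execute("SELECT weight, height FROM measurements")

    rows = cur.fetchall()    # a list of tuples, one per row: the slow, memory-hungry step
    weight = Numeric.array(map(lambda r: r[0], rows), Numeric.Float)
    height = Numeric.array(map(lambda r: r[1], rows), Numeric.Float)

The proposal above is essentially to replace the fetchall()/map() stage with a C routine that fills the Numeric arrays straight from the MySQL result set.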
From: Tim C. <tc...@bi...> - 2000-04-08 12:37:02
I'm a new user of NumPy so forgive me if this is a FAQ. I would normally check the list archives but I'm on holidays at the moment in Manila and the speed of the Internet connection here does not permit much Web browsing...

I've been experimenting with using Gary Strangman's excellent stats.py functions. The speed of these functions when operating on NumPy arrays and the ability of NumPy to swallow very large arrays is remarkable. However, one deficiency I have noticed is the lack of the ability to represent nulls (i.e. missing values, None or NaN [Not-a-Number]) in NumPy arrays. Missing values commonly occur in real-life statistical data and although they are usually excluded from most statistical calculations, it is important to be able to keep track of the number of missing data elements and report this.

Because NumPy arrays can't represent missing data via a special value, it is necessary to exclude missing data elements from NumPy arrays and keep track of them elsewhere (in standard Python lists). This is messy. Also, it is quite common to use various imputation techniques to estimate the values of missing data elements - the ability to represent missing data in a NumPy array and then change it to an imputed value would be a real boon.

Regards,
Tim C
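The "messy" workaround being described looks roughly like this (an illustrative sketch with made-up data, not code from the post): only the observed values go into the Numeric array, and the missing ones are merely counted on the side.

    import Numeric

    raw = [1.2, None, 3.4, None, 5.6]                # hypothetical column containing nulls
    valid = filter(lambda v: v is not None, raw)     # keep only the observed values
    n_missing = len(raw) - len(valid)                # ...and remember how many were dropped

    data = Numeric.array(valid, Numeric.Float)       # stats.py functions operate on this

Imputation is where this really hurts: the positions of the dropped values are gone unless they are tracked in yet another list.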
From: Joe V. A. <van...@at...> - 2000-04-03 23:39:50
Using Python 1.5.2, gcc 2.95.2 and Numeric CVS as of 4/3/2000 on Solaris 7 (Sparc). When I attempt to import Numeric, I get a segmentation fault. After 'multiarray', '_numpy' and 'umath' are imported, the stack trace is:

    #0  PyArray_DescrFromType (type=70) at Src/arraytypes.c:593
    #1  0xfe768dd0 in PyArray_FromDims (nd=1, d=0xffbecfc8, type=70) at Src/arrayobject.c:416
    #2  0xfe7952f0 in array_zeros (ignored=0x1, args=0x1) at Src/multiarraymodule.c:961
    #3  0x1f72c in call_builtin (func=0xe6928, arg=0xedd40, kw=0x0) at ceval.c:2359
    #4  0x1f5f8 in PyEval_CallObjectWithKeywords (func=0xe6928, arg=0xedd40, kw=0x0) at ceval.c:2324
    #5  0x1dde0 in eval_code2 (co=0xe7ec8, globals=0x0, locals=0x83, args=0xffffffff, argcount=944424, kws=0x0, kwcount=0, defs=0x0, defcount=0, owner=0x0) at ceval.c:1654
    #6  0x1dc88 in eval_code2 (co=0xc9dc0, globals=0xfda98, locals=0xffffffff, args=0x1, argcount=1015296, kws=0x0, kwcount=0, defs=0xe6b2c, defcount=1, owner=0x0) at ceval.c:1612
    #7  0x1dc88 in eval_code2 (co=0xf78f0, globals=0xfdc7c, locals=0x1, args=0x1, argcount=1014976, kws=0x0, kwcount=0, defs=0x0, defcount=0, owner=0x0) at ceval.c:1612
    #8  0x1b8a0 in PyEval_EvalCode (co=0xf78f0, globals=0xf0f70, locals=0xf0f70) at ceval.c:324
    #9  0x277a8 in PyImport_ExecCodeModuleEx (name=0xffbede80 "Precision", co=0xf78f0, pathname=0xffbed4a0 "/usr/local/lib/python1.5/site-packages/Numeric/Precision.pyc") at import.c:485

gdb shows the fault right after line 596 in Src/arraytypes.c:

    if (type < PyArray_NTYPES) {
        return descrs[type];    # type = 'F'
    } else {
        switch(type) {
        case 'c': return descrs[PyArray_CHAR];
        case 'b': return descrs[PyArray_UBYTE];
        case '1': return descrs[PyArray_SBYTE];
        case 's': return descrs[PyArray_SHORT];
        case 'i': return descrs[PyArray_INT];
        case 'l': return descrs[PyArray_LONG];
        case 'f': return descrs[PyArray_FLOAT];
        case 'd': return descrs[PyArray_DOUBLE];
        case 'F': return descrs[PyArray_CFLOAT];

If I try to examine descrs[0], gdb says:

    (gdb) print descrs[0]
    Cannot access memory at address 0x0.

This is probably shared library weirdness, but I'm not sure how to fix it. Any ideas?

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: van...@uc...
From: Charles G W. <cg...@fn...> - 2000-04-02 01:06:11
I'm playing around with the brand-new Python 1.6 alpha (formerly known as 1.5.2+) and noticed a problem when installing Numeric into the new python1.6 directories - the "setup.py" that comes with Numeric has a hardcoded "python1.5" in it. Patch uploaded to the SourceForge patch page.
From: Just v. R. <ju...@le...> - 2000-04-01 15:59:51
Folks,

Whatever happened to the plans to fold multiarray and ufuncs into the core Python distribution? Did Guido decide he didn't want it? Or is it just that nobody got around to fulfilling the requirements, whatever they may be? Since Python 1.6 is only in alpha, it may not be too late...

(Sorry if this has been answered before and I missed it.)

Just
From: Janne S. <ja...@nn...> - 2000-03-31 10:10:48
Could somebody give me a pointer to an easily installable Windows binary kit for NumPy? Even a bit older version (such as 11) would do fine. I couldn't find any reference to such a kit on SourceForge or the LLNL site.

-- Janne
From: Paul F. D. <pau...@ho...> - 2000-03-28 20:05:20
> -----Original Message-----
> From: num...@li... [mailto:num...@li...] On Behalf Of Joe Van Andel
> Sent: Tuesday, March 28, 2000 11:31 AM
> To: van...@uc...
> Cc: numpy-discussion; Paul F. Dubois
> Subject: Re: [Numpy-discussion] single precision patch for arrayfnsmodule.c: interp()

Patch completed. I added a doc string. I used the second patch you sent.
From: Joe V. A. <van...@at...> - 2000-03-28 19:37:02
I'm very sorry, I previously sent the wrong patch. Here's the correct patch. (The other was a previous version of arrayfnsmodule.c that added an interpf() command. This version adds an optional typecode to allow specifying single precision results from interp().)

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: van...@uc...
From: Les S. <god...@ne...> - 2000-03-28 18:55:50
> Hmm, I'd hate to hear the impolite form! ;-)

isn't it nice when cranky-heads are dealt with politely? ;-)

> Just do M-x compile and set the compilation command to "python
> setup.py build", then you can click on the errors.

ahhhh... hadn't thought of that. thanks...

here is a one-line fix to build_ext.py in the distutils distribution that switches the order of the includes, so that -IInclude comes before -I/usr/lib/python1.5 in the build process, so API changes in the .h files don't hose the build before the install.

les schaffer

    (gustav)/usr/lib/python1.5/site-packages/distutils/command/: diff -c build_ext.py~ build_ext.py
    *** build_ext.py~       Sun Jan 30 13:34:12 2000
    --- build_ext.py        Tue Mar 28 13:30:03 2000
    ***************
    *** 99,105 ****
              self.include_dirs = string.split (self.include_dirs, os.pathsep)
    !         self.include_dirs.insert (0, py_include)
              if exec_py_include != py_include:
                  self.include_dirs.insert (0, exec_py_include)
    --- 99,105 ----
              self.include_dirs = string.split (self.include_dirs, os.pathsep)
    !         self.include_dirs.append(py_include)
              if exec_py_include != py_include:
                  self.include_dirs.insert (0, exec_py_include)
From: Charles G W. <cg...@fn...> - 2000-03-28 18:31:28
Oops, that message was somehow truncated; the last line should obviously have read:

    install:
            python setup.py install
From: Charles G W. <cg...@fn...> - 2000-03-28 18:29:44
Les Schaffer writes:
> let me say it politely, using distutils at this point is for the
> birds.

Hmm, I'd hate to hear the impolite form! ;-)

> but for right now, i miss the Makefile stuff.
>
> an example:
> (gustav)~/system/numpy/Numerical/: python setup.py build
<snip>
> /usr/include/python1.5/ufuncobject.h:100: previous declaration of `PyUFunc_FromFuncAndData'
> Src/ufuncobject.c: In function `PyUFunc_FromFuncAndData':
> Src/ufuncobject.c:805: structure has no member named `doc'
> Src/ufuncobject.c: In function `ufunc_getattr':
> Src/ufuncobject.c:955: structure has no member named `doc'
> [blah blah blah]
>
> if this was from running make, i could be inside xemacs and click on
> the damn error and go right to the line in question. now, i gotta
> horse around, its a waste of time.

No, you don't have to horse around! Of course you can use xemacs and compile mode. (I wouldn't dream of building anything from an xterm window!) Just do M-x compile and set the compilation command to "python setup.py build", then you can click on the errors.

Or create a trivial Makefile with these contents:

    numpy:
            python setup.py build

    install:
            python
From: Joe V. A. <van...@at...> - 2000-03-28 18:06:32
As I've mentioned earlier, I'm using interp() with very large datasets, and I can't afford to use double precision arrays. Here's a patch that lets interp() accept an optional typecode argument. Passing 'f' calls the new single precision version, and returns a single precision array. Passing no argument or 'd' uses the previous, double precision version. (No other array types are supported - an error is returned.)

I hope this can be added to the CVS version of Numeric. Thanks!

--
Joe VanAndel
National Center for Atmospheric Research
http://www.atd.ucar.edu/~vanandel/
Internet: van...@uc...
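In use, the patched call would look something like the sketch below. The argument order for Numeric's arrayfns.interp() is written from memory and should be treated as an assumption; the point is just the optional trailing typecode.

    import Numeric, arrayfns

    # A small curve defined by (xp, fp), evaluated at many points x.
    xp = Numeric.array([0.0, 5.0, 10.0])
    fp = Numeric.array([0.0, 1.0, 0.0])
    x = Numeric.arange(0.0, 10.0, 0.001)

    y_double = arrayfns.interp(fp, xp, x)         # unchanged behaviour: double precision result
    y_single = arrayfns.interp(fp, xp, x, 'f')    # patched path: single precision result, half the memory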
From: Les S. <god...@ne...> - 2000-03-28 17:38:53
I am just trying to compile the latest CVS updates of NumPy, and am getting very cranky about the dependence of the build process on distutils.

Let me say it politely: using distutils at this point is for the birds. Maybe for distributing released versions to the general public, where the distutils setup.py script has been tested on many platforms, it is a long-term solution. But for right now, I miss the Makefile stuff.

An example:

    (gustav)~/system/numpy/Numerical/: python setup.py build
    running build
    [snip]
    /usr/bin/egcc -c -I/usr/include/python1.5 -IInclude -O3 -mpentium -fpic Src/_numpymodule.c Src/arrayobject.c Src/ufuncobject.c
    Src/ufuncobject.c:783: conflicting types for `PyUFunc_FromFuncAndData'
    /usr/include/python1.5/ufuncobject.h:100: previous declaration of `PyUFunc_FromFuncAndData'
    Src/ufuncobject.c: In function `PyUFunc_FromFuncAndData':
    Src/ufuncobject.c:805: structure has no member named `doc'
    Src/ufuncobject.c: In function `ufunc_getattr':
    Src/ufuncobject.c:955: structure has no member named `doc'
    [blah blah blah]

If this was from running make, I could be inside xemacs and click on the damn error and go right to the line in question. Now I've got to horse around; it's a waste of time.

Another example, which I posted to c.l.p a week or so ago: it turned out that distutils doesn't know that, for example, the arrayobject.h in the distribution is newer than the one in /usr/include/python1.5, so it breaks the build when API changes have been made. distutils should at least get the order of the -I's correct.

But we're not a distutils SIG, we're NumPy, right? Can we please go back, at least for now, to using - or at a minimum distributing - the Makefiles?

les schaffer
From: Jon S. <js...@wm...> - 2000-03-28 10:56:54
Pyclimate 0.0 - Climate variability analysis using Numeric Python (28-Mar-00)
http://lcdx00.wm.lc.ehu.es/~jsaenz/pyclimate

Tuesday, 03/28/2000

Hello, all.

We are making the first announcement of a pre-alpha release (version 0.0) of our package pyclimate, which presents some tools used for climate variability analysis and which make extensive use of Numerical Python. It is released under the GNU Public License. We call it a pre-alpha release: even though the routines are quite debugged, they are still growing and we are thinking of making a stable release shortly after receiving some feedback from users.

The package contains:

IO functions
------------
- ASCII files (simple, but useful)
- ncstruct.py: netCDF structure copier. From a COARDS compliant netCDF file, this module creates a COARDS compliant file, copying the needed attributes, dimensions, auxiliary variables, comments, and so on in one call.

Time handling routines
----------------------
* JDTime.py -> Some C/Python functions to convert from date to Scaliger's Julian Day and from Julian Day to date. We are not trying to replace mxDate, but addressing a different problem. In particular, this module contains a routine especially suited to handling monthly time steps for climatological use.
* JDTimeHandler.py -> Python module which parses the units attribute of the time variable in a COARDS file and which offsets and scales the time values adequately to read/save date fields.

Interface to DCDFLIB.C
----------------------
A C/Python interface to the free DCDFLIB.C library is provided. This library allows direct and inverse computations of parameters for several probability distribution functions like Chi^2, normal, binomial, F, noncentral F, and many many more.

EOF analysis
------------
Empirical Orthogonal Function analysis based on the SVD decomposition of the data matrix and related functions to test the reliability/degeneracy of eigenvalues (truncation rules). Monte Carlo test of the stability of eigenvectors to temporal subsampling.

SVD decomposition
-----------------
SVD decomposition of the correlation matrix of two datasets, functions to compute the expansion coefficients, the squared cumulative covariance fraction and the homogeneous and heterogeneous correlation maps. Monte Carlo test of the stability of singular vectors to temporal subsampling.

Multivariate digital filter
---------------------------
Multivariate digital filter (high and low pass) based on the Kolmogorov-Zurbenko filter.

Differential operators on the sphere
------------------------------------
Some classes to compute differential operators (gradient and divergence) on a regular latitude/longitude grid.

PREREQUISITES
=============
To be able to use it, you will need:
1. Python ;-)
2. netCDF library 3.4 or later
3. Scientific Python, by Konrad Hinsen
4. DCDFLIB.C version 1.1

IF AND ONLY IF you really want to change the C code (JDTime.[hc] and pycdf.[hc]), then you will also need SWIG.

COMPILATION
===========
There is no automatic compilation/installation procedure, but the Makefile is quite straightforward. After manually editing the Makefile for different platforms, the commands

    make
    make test       -> Runs a (not infallible) regression test
    make install

will do it. SORRY, we don't use it under Windows, only UNIX. Volunteers to generate a Windows installation file would be appreciated, but we will not do it.

DOCUMENTATION
=============
LaTeX, PostScript and PDF versions of the manual are included in the distribution. However, we are preparing a new set of documentation according to PSA rules.

AVAILABILITY
============
http://lcdx00.wm.lc.ehu.es/~jsaenz/pyclimate (Europe)
http://pyclimate.zubi.net/ (USA)
http://starship.python.net/crew/~jsaenz (USA)

Any feedback from the users of the package will be really appreciated by the authors. We will try to incorporate new developments, in case we are able to do so. Our time availability is scarce.

Enjoy.

Jon Saenz, js...@wm...
Juan Zubillaga, wmp...@lg...
From: Paul F. D. <pau...@ho...> - 2000-03-24 23:14:51
The CVS version now has doc strings for all the functions in umath. (C. Waldman, P. Dubois)
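So, for example, the following should now print something useful for each ufunc rather than the old hardwired type-object string (the exact wording of the new doc strings is not reproduced here):

    import umath
    print umath.add.__doc__       # per the note above, the umath ufuncs now carry doc strings

    import Numeric
    print Numeric.add.__doc__     # Numeric re-exports the same ufunc objects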
From: David A. <Da...@Ac...> - 2000-03-24 16:49:03
> Great, this is a step in the right direction. I am still hoping for an
> interactive environment that lets me consult docstrings while I work,
> but that won't happen before most Python functions actually have
> docstrings.

Did you look at recent versions of IDLE (and Pythonwin, but not on your platforms =)?

--david
From: Konrad H. <hi...@cn...> - 2000-03-24 16:04:47
> I really, really, really like the Cephes module and a lot of the work

Me too!

> So, I hereby declare doing something about the useless doc-strings to
> be fixing a bug and not adding a feature ;-)

Great, this is a step in the right direction. I am still hoping for an interactive environment that lets me consult docstrings while I work, but that won't happen before most Python functions actually have docstrings.

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
From: Charles G W. <cg...@fn...> - 2000-03-23 21:20:34
I really, really, really like the Cephes module and a lot of the work Travis Oliphant has been doing on Numeric Python. It is nice to see Numeric getting more and more powerful all the time. However, as more and more functions get added to the libraries, their names become more and more obscure, and it sure would be nice to have doc-strings for them. For the stock Numeric ufuncs like "add", the meanings are self-evident, but things like "pdtri" and "incbet" are a bit less obvious. (One of my pet peeves with Python is that there are so many functions and classes with empty doc-strings. Bit by bit, I'm trying to add them in.)

However, one little problem with the current Numeric implementation is that ufunc objects don't have support for a doc string; the doc string is hardwired in the type object, and comes up as:

    "Optimizied FUNCtions make it possible to implement arithmetic with matrices efficiently"

This is not only non-helpful, it's misspelled ("Optimizied"?).

Well, according to the charter, a well-behaved Nummie will "Fix bugs at will, without prior consultation." But a well-behaved Nummie will also "Add new features only after consultation with other Nummies". So, I hereby declare doing something about the useless doc-strings to be fixing a bug and not adding a feature ;-)

The patch below adds doc strings to all the ufuncs in the Cephes module. They are automatically extracted from the HTML documentation for the Cephes module. (In the process of doing this, I also added some missing items to said HTML documentation.) This patch depends on another patch, which I am submitting via SourceForge, which allows ufunc objects to have doc strings.

With these patches, you get this:

    >>> import cephes
    >>> print cephes.incbet.__doc__
    incbet(a,b,x) returns the incomplete beta integral of the arguments, evaluated
    from zero to x: gamma(a+b) / (gamma(a)*gamma(b)) * integral(t**(a-1) (1-t)**(b-1), t=0..x).

instead of this:

    >>> print cephes.incbet.__doc__
    Optimizied FUNCtions make it possible to implement arithmetic with matrices efficiently

Isn't that nicer? "Ni-Ni-Numpy!"

Here's the "gendoc.py" script to pull the docstrings out of included_functions.h:
From: Jean-Bernard A. <jb...@ph...> - 2000-03-23 16:06:27
Hey! I am now able to reply to my own question!

Quadpack from Multipack is much quicker and more accurate, and it does not need Python 1.5.2 (on my system it crashes with the new python, but it is a quick installation).

Comparison:

    >>> quadrature.quad(lambda t: t**(H-1), 0, 1)
    2.54866576894
    >>> quadpack.quad(lambda t: t**(H-1), 0, 1)
    (3.33333333333, 4.26325641456e-14)
    >>> 1/H
    3.33333333333

The expected result is 1/H (H was .3). It is possible to improve the precision of the result of quadrature, but it becomes very slow and is never very precise.

Jean-Bernard

On Wed, 22 Mar 2000, Jean-Bernard Addor wrote:
> Hey Numpy people!
>
> I have to integrate functions like:
>
> Int_0^1 (t*(t-1))**-(H/2) dt
>
> or
>
> Int_-1^1 1/abs(t)**(1-H) dt, with H around .3
>
> I just tried quadrature on the 1st one: it needs a very high order of quadrature
> to be precise and is in that case slow.
>
> Would it work better with Multipack?
>
> (I have to upgrade to python 1.5.2 to try Multipack!)
>
> Thank you for your help.
>
> Jean-Bernard
From: <pet...@no...> - 2000-03-23 15:42:39
On Wed, 22 Mar 2000, Jean-Bernard Addor wrote:
> Hey Numpy people!
>
> I have to integrate functions like:
>
> Int_0^1 (t*(t-1))**-(H/2) dt

This is a beta function with arguments (1-H/2, 1-H/2) and is related to gamma functions:

    B(x,y) = Gamma(x)Gamma(y)/Gamma(x+y)

> or
>
> Int_-1^1 1/abs(t)**(1-H) dt, with H around .3

This one can be done analytically:

    = 2 Int_0^1 t**(H-1) dt = 2 [ t**H / H ]_0^1 = 2/H

> I just tried quadrature on the 1st one: it needs a very high order of quadrature
> to be precise and is in that case slow.

HTH,
Peter
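A quick numerical cross-check of the two closed forms (a sketch; it assumes the cephes module discussed elsewhere in this archive exposes gamma(), and uses quadpack.quad() exactly as in the follow-up message above; the first integrand is written here as t*(1-t), which is the reading under which the beta identity holds):

    import cephes     # assumed to provide gamma()
    import quadpack   # Multipack's QUADPACK wrapper, as used in the follow-up above

    H = 0.3
    a = 1.0 - H/2.0

    # First integral: Int_0^1 (t*(1-t))**(-H/2) dt == B(1-H/2, 1-H/2)
    beta_value = cephes.gamma(a)**2 / cephes.gamma(2.0*a)
    print beta_value
    print quadpack.quad(lambda t: (t*(1.0-t))**(-H/2.0), 0.0, 1.0)

    # Second integral: Int_-1^1 |t|**(H-1) dt == 2/H
    print 2.0/H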
From: Jean-Bernard A. <jb...@ph...> - 2000-03-23 01:53:54
Hey Numpy people!

I have to integrate functions like:

    Int_0^1 (t*(t-1))**-(H/2) dt

or

    Int_-1^1 1/abs(t)**(1-H) dt, with H around .3

I just tried quadrature on the 1st one: it needs a very high order of quadrature to be precise and is in that case slow.

Would it work better with Multipack?

(I have to upgrade to python 1.5.2 to try Multipack!)

Thank you for your help.

Jean-Bernard