From: Fernando P. <fpe...@gm...> - 2006-07-17 19:24:02
|
On 7/17/06, Travis Oliphant <oli...@ie...> wrote:
> Fernando Perez wrote:
> > Hi all,
> >
> > I received this message today from a collaborator. I don't have
> > direct access to this box, but he posted fairly detailed logs. Has
> > anyone seen a similar issue with current code? If not, I'll try to
> > track down further with him what the problem may actually be.
>
> This looks like a problem with left-over headers and/or C-API files
> being picked up. Make sure the old header files are deleted and he has
> a fresh install of NumPy from SVN (with the build directory deleted
> before re-building).
>
> Look in
>
> /usr/lib64/python2.3/site-packages/numpy/core/include/__multiarray_api.h
>
> to make sure there are no isolated intp references (particularly look
> at PyArray_New). If there are, then the NumPy build was not clean.

Thanks, Travis. I just wanted to make sure it wasn't a more widespread
problem. I'll track it down with my colleague in private then.

Cheers,

f
From: Travis O. <oli...@ie...> - 2006-07-17 19:20:18
|
Fernando Perez wrote:
> Hi all,
>
> I received this message today from a collaborator. I don't have
> direct access to this box, but he posted fairly detailed logs. Has
> anyone seen a similar issue with current code? If not, I'll try to
> track down further with him what the problem may actually be.

This looks like a problem with left-over headers and/or C-API files
being picked up. Make sure the old header files are deleted and he has
a fresh install of NumPy from SVN (with the build directory deleted
before re-building).

Look in

/usr/lib64/python2.3/site-packages/numpy/core/include/__multiarray_api.h

to make sure there are no isolated intp references (particularly look
at PyArray_New). If there are, then the NumPy build was not clean.

-Travis
From: Travis O. <oli...@ie...> - 2006-07-17 19:10:56
|
Sebastian Haase wrote:
> Traceback (most recent call last):
>   File "<input>", line 1, in ?
> TypeError: array cannot be safely cast to required type
>
> >>> dd = d.astype(N.float32)
> >>> N.dot(dd, ccc)
> [[[ 1.  1.  1.]
>   [ 1.  1.  1.]
>   [ 1.  1.  1.]]
>
>  [[ 2.  2.  2.]
>   [ 2.  2.  2.]
>   [ 2.  2.  2.]]]
>
> The TypeError looks like a numpy bug!

I don't see why this is a bug. You are trying to coerce a 32-bit integer
to a 32-bit float. That is going to lose precision, and so you get the
error indicated.

-Travis
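For concreteness, a minimal sketch of the safe-casting rule behind that
TypeError (can_cast is in today's numpy API; whether the 0.9.x releases of
that era exposed it is an assumption):

    import numpy as N

    # float32 has only a 24-bit significand, so large int32 values cannot
    # be represented exactly; the implicit cast is therefore not "safe".
    print(N.can_cast(N.int32, N.float32))  # False: precision can be lost
    print(N.can_cast(N.int32, N.float64))  # True: float64 holds every int32 exactly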
From: Sebastian H. <ha...@ms...> - 2006-07-17 18:45:59
|
Thanks Chris,

If this got fixed towards numarray version 1.5.1, it looks like the
result on both Linux and Mac is now different(!) from the numpy result I
got on Windows!

On Linux I get:

>>> import numpy as N
>>> N.__version__
'0.9.9.2823'
>>> bbb = N.zeros((2,3,3), N.float32)
>>> bbb[0,:,:] = 1
>>> bbb[1,:,:] = 2
>>> bbb
[[[ 1.  1.  1.]
  [ 1.  1.  1.]
  [ 1.  1.  1.]]

 [[ 2.  2.  2.]
  [ 2.  2.  2.]
  [ 2.  2.  2.]]]
>>> ccc = N.transpose(bbb, (1,0,2))
>>> d = N.array([[1,0],[0,1]])
>>> d
[[1 0]
 [0 1]]
>>> N.dot(d, ccc)
Traceback (most recent call last):
  File "<input>", line 1, in ?
TypeError: array cannot be safely cast to required type
>>> dd = d.astype(N.float32)
>>> N.dot(dd, ccc)
[[[ 1.  1.  1.]
  [ 1.  1.  1.]
  [ 1.  1.  1.]]

 [[ 2.  2.  2.]
  [ 2.  2.  2.]
  [ 2.  2.  2.]]]
>>>

The TypeError looks like a numpy bug!

Thanks,
Sebastian Haase

On Monday 17 July 2006 11:29, Christopher Barker wrote:
> And on my Mac:
>
> OS-X 10.4.*, PPC, universal python 2.4.3:
> >>> import numarray as na
> >>> na.__version__
> '1.5.1'
> >>> bbb = na.zeros((2,3,3), na.Float32)
> >>> bbb[0,:,:] = 1
> >>> bbb[1,:,:] = 2
> >>> ccc = na.transpose(bbb, (1,0,2))
> >>> d = na.array([[1,0],[0,1]])
> >>> na.dot(d, ccc)
> array([[[ 1.,  1.,  1.],
>         [ 2.,  2.,  2.],
>         [ 1.,  1.,  1.]],
>
>        [[ 2.,  2.,  2.],
>         [ 1.,  1.,  1.],
>         [ 2.,  2.,  2.]]], type=Float32)
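As a cross-check on which output is the expected one, a small sketch using
numpy's documented rule for dot on multidimensional arrays,
dot(a, b)[i,j,k] = sum over m of a[i,m]*b[j,m,k]; with d the identity this
selects ccc[j,i,k] = bbb[i,j,k], i.e. bbb itself, so the grouped result
seen on Windows (not the interleaved numarray output) is what dot should
return:

    import numpy as N

    bbb = N.zeros((2, 3, 3), N.float32)
    bbb[0, :, :] = 1
    bbb[1, :, :] = 2
    ccc = N.transpose(bbb, (1, 0, 2))
    d = N.identity(2, N.float32)

    # dot(d, ccc)[i,j,k] = sum_m d[i,m]*ccc[j,m,k] = ccc[j,i,k] = bbb[i,j,k],
    # so multiplying by the identity should hand back bbb unchanged.
    print(N.allclose(N.dot(d, ccc), bbb))  # True on a correct implementation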
From: Fernando P. <fpe...@gm...> - 2006-07-17 18:31:55
|
Hi all,

I received this message today from a collaborator. I don't have direct
access to this box, but he posted fairly detailed logs. Has anyone seen
a similar issue with current code? If not, I'll try to track down
further with him what the problem may actually be.

Thanks for any help,

f

============ Original message:

We are having trouble with scipy (not) compiling on the 64-bit machine,
and it seems to be related to the intp type. I put the log file and
connected files at

http://www.math.ohiou.edu/~mjm/pickup/scipy.log
http://www.math.ohiou.edu/~mjm/pickup/fortranobject.c
http://www.math.ohiou.edu/~mjm/pickup/fortranobject.h

The relevant parts are at the end of scipy.log:

creating build/temp.linux-x86_64-2.3/Lib/fftpack/src
compile options: '-Ibuild/src.linux-x86_64-2.3 -I/usr/lib64/python2.3/site-packages/numpy/core/include -I/usr/include/python2.3 -c'
gcc: Lib/fftpack/src/drfft.c
gcc: Lib/fftpack/src/zfft.c
gcc: build/src.linux-x86_64-2.3/fortranobject.c
build/src.linux-x86_64-2.3/fortranobject.c: In function `PyFortranObject_New':
build/src.linux-x86_64-2.3/fortranobject.c:55: error: syntax error before "intp"
build/src.linux-x86_64-2.3/fortranobject.c:60: error: syntax error before "intp"
build/src.linux-x86_64-2.3/fortranobject.c: In function `fortran_getattr':
build/src.linux-x86_64-2.3/fortranobject.c:179: error: syntax error before "intp"
build/src.linux-x86_64-2.3/fortranobject.c: In function `fortran_setattr':
build/src.linux-x86_64-2.3/fortranobject.c:243: error: syntax error before ')' token
build/src.linux-x86_64-2.3/fortranobject.c:245: error: syntax error before ')' token
...

I don't see "intp" in fortranobject.c, so it must be included from
elsewhere.
From: Christopher B. <Chr...@no...> - 2006-07-17 18:28:36
|
And on my Mac:

OS-X 10.4.*, PPC, universal python 2.4.3:

>>> import numarray as na
>>> na.__version__
'1.5.1'
>>> bbb = na.zeros((2,3,3), na.Float32)
>>> bbb[0,:,:] = 1
>>> bbb[1,:,:] = 2
>>> ccc = na.transpose(bbb, (1,0,2))
>>> d = na.array([[1,0],[0,1]])
>>> na.dot(d, ccc)
array([[[ 1.,  1.,  1.],
        [ 2.,  2.,  2.],
        [ 1.,  1.,  1.]],

       [[ 2.,  2.,  2.],
        [ 1.,  1.,  1.],
        [ 2.,  2.,  2.]]], type=Float32)

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT         (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no...
From: Christopher B. <Chr...@no...> - 2006-07-17 18:24:45
|
Sebastian Haase wrote:
> Hi!
> This is what I got:
> Can someone confirm this?

This is what I get on Linux (ix86, Fedora core4, python 2.4.3):

>>> import numarray as na
>>> na.__version__
'1.5.0'
>>> bbb = na.zeros((2,3,3), na.Float32)
>>> bbb[0,:,:] = 1
>>> bbb[1,:,:] = 2
>>> bbb
array([[[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.]],

       [[ 2.,  2.,  2.],
        [ 2.,  2.,  2.],
        [ 2.,  2.,  2.]]], type=Float32)
>>> ccc = na.transpose(bbb, (1,0,2))
>>> ccc
array([[[ 1.,  1.,  1.],
        [ 2.,  2.,  2.]],

       [[ 1.,  1.,  1.],
        [ 2.,  2.,  2.]],

       [[ 1.,  1.,  1.],
        [ 2.,  2.,  2.]]], type=Float32)
>>> d = na.array([[1,0],[0,1]])
>>> d
array([[1, 0],
       [0, 1]])
>>> na.dot(d, ccc)
array([[[ 1.,  1.,  1.],
        [ 2.,  2.,  2.],
        [ 1.,  1.,  1.]],

       [[ 2.,  2.,  2.],
        [ 1.,  1.,  1.],
        [ 2.,  2.,  2.]]], type=Float32)
>>>

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT         (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception

Chr...@no...
From: Sebastian H. <ha...@ms...> - 2006-07-17 16:40:49
|
Hi!

This is what I got:

>>> import numarray as na
>>> na.__version__
'1.4.0'
>>> bbb = na.zeros((2,3,3), na.Float32)
>>> bbb[0,:,:] = 1
>>> bbb[1,:,:] = 2
>>> ccc = na.transpose(bbb, (1,0,2))
>>> d = na.array([[1,0],[0,1]])
>>> na.dot(d, ccc)
[[[ 1.  1.  1.]
  [ 1.  1.  1.]
  [ 1.  1.  1.]]

 [[ 2.  2.  2.]
  [ 2.  2.  2.]
  [ 2.  2.  2.]]]

This is on a PC (Windows or Linux). But if you do the same thing on Mac,
the result will be:

[[[ 1.  1.  1.]
  [ 2.  2.  2.]
  [ 1.  1.  1.]]

 [[ 2.  2.  2.]
  [ 1.  1.  1.]
  [ 2.  2.  2.]]]

Can someone confirm this? (The new numpy on Windows also gives the first
result.)

Thanks,
Sebastian Haase
From: <sk...@po...> - 2006-07-17 15:32:43
|
    James> Sets are available in python 2.3 as part of the sets module so
    James> it is possible. However afaict the provided patch does not use
    James> the module and so will need to be adapted for use in 2.3.

I got so tired of the ugly test for the set builtin during our 2.3-to-2.4
transition that I finally just added

    if sys.hexversion < 0x2040000:
        from sets import Set as set
        import __builtin__
        __builtin__.set = set

to sitecustomize.py. I'm not suggesting that scipy's installer should do
the same, but the change worked for us.

Skip
From: Jon W. <wr...@es...> - 2006-07-17 15:20:02
|
Keith Goodman wrote:
> I prfr shrtr nams lke inv eig and sin.

Dwn wth vwls!! Srsly thgh:

>>> from numpy import linalg
>>> help(linalg.inv)
Help on function inv in module numpy.linalg.linalg:

inv(a)

>>> ???

While I prefer inverse to inv, I don't really care as long as the word
"inverse" appears in the docstring and it faithfully promises that it is
going to try to do an inverse in the matrix linear-algebra sense, and not
in the one_over_x or power_minus_one sense or the bit-inversion (~x,
numpy.invert) sense. It would be handy to know which exception will be
raised for a singular matrix, what the valid arguments are (eg, square
shape, numbers), what the type of the output is ('<f8' for me), and what
the expected cost is (O(n^3)). Other handy information would be a pointer
to pinv and matrix.I, and a note that this is implemented by "solve"-ing
Identity = b = A.x. The docs for solve should indicate that they use an
LU factorisation in LAPACK, as this is of interest (ie it is not the
matlab \ operator and not Cholesky).

If you have all that then I suspect people might accept any even vaguely
sane naming scheme, as they don't have to guess or read the source to
find out what it actually does. The documentation for "invert" is for a
generic ufunc in my interpreter (python 2.4.3 and numpy 0.9.8), which
could cause confusion - one might imagine it is going to invert a matrix.
The ".I" attribute (instead of ".I()") of a matrix implies to me that the
inverse is already known and is an O(1) lookup - this seems seriously
misleading. It seems you need to store the temporary yourself if you want
to reuse the inverse without recomputing it, but maybe I miss some deep
magic?

I did buy the book, and it doesn't contain all the information about
"inv" that I've listed above, and I don't want Travis to spend his time
putting all that in the book as then it'd be too long for me to print out
and read ;-)

I think numpy would benefit from having a great deal more documentation
available than it does right now in the form of docstrings and doctests -
or am I the only person who relies on the interpreter and help(thing) as
being the ultimate reference? This is an area that many of us might be
able to help with fixing, via the wiki for example. Has there been a
decision on the numpy wiki examples not being converted to doctests?
(These examples could usefully be linked from the numeric.scipy.org home
page.) I saw "rundocs" in testing.numpytest, which sort of suggests the
possibility is there. With the wiki being a moving target I can
understand that synchronisation is an issue, but perhaps there could be a
wiki dump to the numpy.doc directory each time a new release is made,
along with a doctest? Catching and fixing misinformation would be as
useful as catching actual bugs. Let me know if you are against the
"convert examples to doctests" idea or if it has already been done.
Perhaps this increases the testsuite coverage for free...?

It would be equally possible to place that example code into the actual
docstrings in numpy, along with more detailed explanations which could
also be pulled in from the wiki. I don't think this would detract much
from Travis' book, since that contains a lot of information that doesn't
belong in docstrings anyway.
Jon

PS: Doubtless someone might do better, but here is what I mean: copy and
paste the ascii (editor) formatted wiki text into a file wiki.txt from
the wiki example page and get rid of the {{{ python formatting that
confuses doctest:

$ grep -v "{{{" wiki.txt | grep -v "}}}" > testnumpywiki.txt

== testwiki.py ==

import doctest
doctest.testfile("testnumpywiki.txt")

$ python testwiki.py > problems.txt

problems.txt is 37kB in size (83 of 1028). Throwing out the blank-line
issues via:

doctest.testfile("testnumpywiki.txt", optionflags=doctest.NORMALIZE_WHITESPACE)

reduces this to 24kB (62 of 1028). Most cases are not important, just
needing to be fixed for formatting on the wiki or flagged as version
dependent, but a few are worth checking out the intentions, eg:

**********************************************************************
File "testnumpywiki.txt", line 69, in testnumpywiki.txt
Failed example:
    a[:,b2]
Exception raised:
    Traceback (most recent call last):
      File "c:\python24\lib\doctest.py", line 1243, in __run
        compileflags, 1) in test.globs
      File "<doctest testnumpywiki.txt[18]>", line 1, in ?
        a[:,b2]
    IndexError: arrays used as indices must be of integer type
**********************************************************************
File "testnumpywiki.txt", line 893, in testnumpywiki.txt
Failed example:
    ceil(a)   # nearest integers greater-than or equal to a
Expected:
    array([-1., -1., -0.,  1.,  2.,  2.])
Got:
    array([-1., -1.,  0.,  1.,  2.,  2.])
**********************************************************************
File "testnumpywiki.txt", line 1162, in testnumpywiki.txt
Failed example:
    cov(T,P)  # covariance between temperature and pressure
Expected:
    3.9541666666666657
Got:
    array([[ 1.97583333,  3.95416667],
           [ 3.95416667,  8.22916667]])
**********************************************************************
File "testnumpywiki.txt", line 2235, in testnumpywiki.txt
Failed example:
    type(a[0])
Expected:
    <type 'int32_arrtype'>
Got:
    <type 'int32scalar'>
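To make the docstrings-plus-doctests idea concrete, a hypothetical sketch
of what a doctest-carrying docstring for inv might look like (the wording
and example values are invented for illustration, not numpy's actual
docstring):

    import numpy as N

    def inv(a):
        """Multiplicative inverse of a square matrix.

        Raises numpy.linalg.LinAlgError for a singular matrix; cost is
        O(n^3).

        >>> A = N.array([[2., 0.], [0., 4.]])
        >>> N.allclose(N.dot(A, inv(A)), N.identity(2))
        True
        """
        return N.linalg.inv(a)

    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # runs the example embedded in the docstring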
From: James G. <jg...@ca...> - 2006-07-17 15:17:07
|
Albert Strasheim wrote:
> I think it has been discussed on the list that Python >= 2.3 is assumed.
> However, according to the Python documentation, the built-in set type
> is new in Python 2.4.

Sets are available in python 2.3 as part of the sets module so it is
possible. However afaict the provided patch does not use the module and
so will need to be adapted for use in 2.3.

--
"You see stars that clear have been dead for years
But the idea just lives on..." -- Bright Eyes
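A minimal sketch of the usual 2.3 adaptation James means (the common
fallback idiom of the period, not the actual patch):

    # Use the built-in set on Python >= 2.4; fall back to sets on 2.3.
    try:
        set
    except NameError:
        from sets import Set as set

    print(set([3, 1, 2, 3]))  # duplicates removed under either implementation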
From: Albert S. <fu...@gm...> - 2006-07-17 14:59:35
|
Hey David

> -----Original Message-----
> From: num...@li... [mailto:numpy-dis...@li...] On Behalf Of David Huard
> Sent: 17 July 2006 16:11
> To: num...@li...
> Subject: Re: [Numpy-discussion] unique() should return a sorted array
>
> Hi,
>
> I attached a patch for unique (with a test case) based on Norbert's
> suggestion. I removed the sort keyword since iterable arrays would be
> sorted anyway. The function uses a python set, and I was wondering if
> it is ok to assume that everyone running numpy has a python version
> >= 2.3?

I think it has been discussed on the list that Python >= 2.3 is assumed.
However, according to the Python documentation, the built-in set type is
new in Python 2.4.

Regards,
Albert
From: David H. <dav...@gm...> - 2006-07-17 14:10:58
|
Hi,

I attached a patch for unique (with a test case) based on Norbert's
suggestion. I removed the sort keyword since iterable arrays would be
sorted anyway. The function uses a python set, and I was wondering if it
is ok to assume that everyone running numpy has a python version >= 2.3?

David
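For readers without the attachment, a rough sketch of the set-based
approach described above (the actual patch may differ in detail and also
carries a test case):

    import numpy as N

    def unique(arr):
        # Collapse to the distinct elements via a Python set, then return
        # them sorted; the set makes the old sort keyword redundant.
        return N.sort(N.array(list(set(N.ravel(arr)))))

    print(unique(N.array([[3, 1], [2, 3]])))  # -> [1 2 3]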
From: Keith G. <kwg...@gm...> - 2006-07-17 01:39:22
|
On 7/16/06, Rick White <rl...@st...> wrote:
> On Jul 16, 2006, at 11:47 AM, Alan G Isaac wrote:
>
> > On Sun, 16 Jul 2006, "David M. Cooke" apparently wrote:
> >> 'inverse' is not much longer than 'inv', and is more descriptive
> >
> > But 'inv' is quite universal (can you name a matrix language
> > that uses 'inverse' instead?) and I think unambiguous (what
> > might it be confused with?).
>
> IDL uses invert, so inv is not exactly universal.
>
> I'm personally a fan of names that can be used in interactive
> sessions at the command line, which argues for shorter names. But it
> is nice to have names where you can type just a few characters and
> use tab-completion to fill in the rest of the name. Then the
> important thing is not the full length of the name but having the
> first 3 or 4 characters be memorable. So I'd rather have
> "pseudoinverse" because I can probably find it by just typing "ps<tab>".

I prfr shrtr nams lke inv eig and sin.
From: Rick W. <rl...@st...> - 2006-07-17 01:25:38
|
On Jul 16, 2006, at 11:47 AM, Alan G Isaac wrote:

> On Sun, 16 Jul 2006, "David M. Cooke" apparently wrote:
>> 'inverse' is not much longer than 'inv', and is more descriptive
>
> But 'inv' is quite universal (can you name a matrix language
> that uses 'inverse' instead?) and I think unambiguous (what
> might it be confused with?).

IDL uses invert, so inv is not exactly universal.

I'm personally a fan of names that can be used in interactive sessions
at the command line, which argues for shorter names. But it is nice to
have names where you can type just a few characters and use
tab-completion to fill in the rest of the name. Then the important thing
is not the full length of the name but having the first 3 or 4
characters be memorable. So I'd rather have "pseudoinverse" because I
can probably find it by just typing "ps<tab>".
From: Bob I. <bo...@re...> - 2006-07-17 00:46:23
|
On Jul 16, 2006, at 5:22 PM, Josh Marshall wrote:

> Back in December last year, I was building a PyObjC application that
> embedded numpy (scipy_core at the time), scipy and matplotlib. I ran
> into a few issues doing so, some of which were resolved. One was the
> inability for scipy to run from a zipped site-packages. I worked
> around this by expanding the embedded site-packages.zip into a
> site-packages directory in the same location. For reference, the
> thread can be found at:
> http://www.scipy.net/pipermail/scipy-dev/2005-December/004551.html
>
> Come a few months later, I have needed to update to the latest
> version of numpy (and therefore scipy and matplotlib). I have not yet
> updated to the universal build of Python, still running 2.4.1,
> although I will do so if it is known to fix any issues. (I don't have
> too much time at the moment, and building the latest versions of
> numpy and matplotlib for a universal build scares me.)

If you're still using 2.4.1 you're going to want to add LSPrefersPPC=True
to the plist. Otherwise your app will not run on i386, because it will
not know to use Rosetta.

> I managed to get it working again, which required:
> 1) Setting packages=['matplotlib','numpy'] in setup.py's options for
> py2app.

That could be handled with recipes, or with eggs when py2app grows
support for that. Recipes or workarounds are the only way until then.

> 2) Modifying the py2app/apptemplate/lib/site.py file to include
> 'sys.path.append(_parent + '/site-packages')' before the same line
> with .zip appended to the file name.

That's a bug, but not the right fix. Trunk is fixed.

> 3) Adding setup_options['options']['py2app']['includes'].extend
> (['pytz.zoneinfo.UTC']) to the setup.py, this is required by
> matplotlib.

That should be part of a matplotlib recipe.

> Now, it seems I am doomed to continue to have to find work-arounds to
> get numpy and matplotlib working in a standalone .app. Is there a
> chance we can come up with a py2app recipe for numpy, matplotlib and
> scipy? What other alternatives are there?

Sure, someone can write the recipes and send me a patch. I don't
currently use matplotlib, numpy, or scipy nor do I have an example I can
test with so I'm not going to do it.

-bob
From: Josh M. <jos...@gm...> - 2006-07-17 00:22:14
|
Back in December last year, I was building a PyObjC application that
embedded numpy (scipy_core at the time), scipy and matplotlib. I ran into
a few issues doing so, some of which were resolved. One was the
inability for scipy to run from a zipped site-packages. I worked around
this by expanding the embedded site-packages.zip into a site-packages
directory in the same location. For reference, the thread can be found
at:
http://www.scipy.net/pipermail/scipy-dev/2005-December/004551.html

Come a few months later, I have needed to update to the latest version
of numpy (and therefore scipy and matplotlib). I have not yet updated to
the universal build of Python, still running 2.4.1, although I will do
so if it is known to fix any issues. (I don't have too much time at the
moment, and building the latest versions of numpy and matplotlib for a
universal build scares me.)

I managed to get it working again, which required (see the setup.py
sketch after this message):

1) Setting packages=['matplotlib','numpy'] in setup.py's options for
   py2app.
2) Modifying the py2app/apptemplate/lib/site.py file to include
   'sys.path.append(_parent + '/site-packages')' before the same line
   with .zip appended to the file name.
3) Adding setup_options['options']['py2app']['includes'].extend
   (['pytz.zoneinfo.UTC']) to the setup.py; this is required by
   matplotlib.

I believe (2) is a bug in py2app (I am running 0.3.1). Packages included
using 'packages=' are not added to site-packages.zip, but rather are in
their own site-packages directory. I am not sure whether this is the
intended behaviour or a bug, but it is good for me, since numpy and
matplotlib won't run when compressed.

Now, it seems I am doomed to continue to have to find work-arounds to
get numpy and matplotlib working in a standalone .app. Is there a chance
we can come up with a py2app recipe for numpy, matplotlib and scipy?
What other alternatives are there?

Regards,
Josh
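For concreteness, a rough sketch of a setup.py combining points (1) and
(3) above; the application script name is hypothetical and the exact
layout may differ from Josh's actual file:

    # Hypothetical py2app setup.py for an app embedding numpy/matplotlib.
    from distutils.core import setup
    import py2app  # registers the py2app distutils command

    setup(
        app=['MyApp.py'],  # hypothetical main application script
        options={
            'py2app': {
                # (1) ship numpy and matplotlib as unzipped packages,
                # since they will not run from site-packages.zip
                'packages': ['matplotlib', 'numpy'],
                # (3) pull in the module matplotlib needs but py2app misses
                'includes': ['pytz.zoneinfo.UTC'],
            },
        },
    )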
From: Alan G I. <ai...@am...> - 2006-07-16 15:40:05
|
On Sun, 16 Jul 2006, Charles R Harris apparently wrote:
> What is needed in the end is a good index with lots of
> cross-references. Name choices are just choices

I mostly agree with this (although I think Matlab made some bad choices
in naming). As a point of reference for a useful index see
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/refbookl.html

Cheers,
Alan Isaac
From: Alan G I. <ai...@am...> - 2006-07-16 15:40:04
|
On Sun, 16 Jul 2006, "David M. Cooke" apparently wrote:
> 'inverse' is not much longer than 'inv', and is more descriptive

But 'inv' is quite universal (can you name a matrix language that uses
'inverse' instead?) and I think unambiguous (what might it be confused
with?).

Cheers,
Alan Isaac
From: Charles R H. <cha...@gm...> - 2006-07-16 15:23:37
|
On 7/15/06, Travis Oliphant <oli...@ie...> wrote:
>
> Victoria G. Laidler wrote:
> > Jonathan Taylor wrote:
<snip>
> It's not that we're concerned with MATLAB compatibility. But, frankly
> I've never heard that the short names MATLAB uses for some very common
> operations are a liability. So, when a common operation has a short,
> easily-remembered name that is in common usage, why not use it?
>
> That's basically the underlying philosophy. NumPy has too many very
> basic operations to try and create very_long_names for them.
>
> I know there are differing opinions out there. I can understand that.
> That's why I suspect that many codes I will want to use will be written
> with easy_to_understand_but_very_long names and I'll grin and bear the
> extra horizontal space that it takes up in my code.

What is needed in the end is a good index with lots of cross-references.
Name choices are just choices; there is no ISO standard for function
names that I know of. Some short names have been used for so long that
everyone knows them (sin, cos, ...), some names come in two standard
forms (arcsin, asin), some are Fortran conventions (arctan2), and some
are Matlab conventions (pinv, chol). One always has to learn what the
names for things are in any new language, so the best thing is to make
it easy to find out.

Chuck
From: David M. C. <co...@ph...> - 2006-07-16 08:27:02
|
On Jul 16, 2006, at 00:21, Travis Oliphant wrote:

> Victoria G. Laidler wrote:
>> Jonathan Taylor wrote:
>>
>>> pseudoinverse
>>>
>>> it's the same name matlab uses:
>>>
>>> http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html
>>
>> Thanks for the explanation.
>>
>> I'm puzzled by the naming choice, however. Standard best practice in
>> writing software is to give understandable names, to improve
>> readability and code maintenance. Obscure abbreviations like "pinv"
>> pretty much went out with the FORTRAN 9-character limit for variable
>> names. It's very unusual to see them in new software nowadays, and it
>> always looks unprofessional to me.
>
> I appreciate this feedback. It's a question that comes up occasionally,
> so I'll at least give my opinion on the matter which may shed some
> light on it.
>
> I disagree with the general "long-name" concept when it comes to
> "very-common" operations. It's easy to take an idea and
> over-generalize it for the sake of consistency. I've seen too many
> codes where very long names actually get in the way of code
> readability.

How are pseudoinverse and inverse "very common"? (Especially given that
one of the arguments for not having a .I attribute for inverse on
matrices is that that's usually the wrong way to go about solving
equations.)

> Someone reading code will have to know what an operation actually is
> to understand it. A name like "generalized_inverse" doesn't convey any
> intrinsic meaning to the non-practitioner anyway. You always have to
> "know" what the function is "really" doing. All that's needed is a
> "unique" name. I've found that long names are harder to remember
> (there's more opportunity for confusion about how much of the full
> name was actually used and how the words were combined).

As has been argued before, short names have their own problems with
remembering what they are. I also find that when reading code with short
names, I go slower, because I have to stop and think what that short
name is (particularly bad are short names that drop vowels, like
lstsq -- I can't pronounce that!). I'm not very good at creating hash
tables in my head from short names to long ones.

The currently exported names in numpy.linalg are solve, inv, cholesky,
eigvals, eigvalsh, eig, eigh, svd, pinv, det, lstsq, and norm. Of these,
'lstsq' is the worst offender, IMHO (superfluous dropped vowels). 'inv'
and 'pinv' are the next, then the 'eig*' names.

'least_squares' would be better than 'lstsq'. 'inverse' is not much
longer than 'inv', and is more descriptive. I don't think 'pinv' is that
common to need a short name; 'pseudoinverse' would be better (not all
generalized inverses are pseudoinverses). Give me these three and I'll
be happy :-)

Personally, I'd prefer 'eigenvalues' and 'eigen' instead of 'eigvals'
and 'eig', but I can live with the current names. 'det' is fine, as it's
used in mathematical notation. 'cholesky' is also fine, as it's a word
at least. I'd have to look at the docstring to find how to use it, but
that would be the same for "cholesky_decomposition".

[btw, I'm ok with numpy.dft now: the names there make sense, because
they're constructed logically. Once you know the scheme, you can see
right away that 'irfftn' is 'inverse real FFT, n-dimensional'.]

> A particularly ludicrous case, for example, was the fact that the very
> common SVD (whose acronym everybody doing linear algebra uses) was
> named in LinearAlgebra (an unnecessarily long module name to begin
> with) with the horribly long and unsightly name of
> singular_value_decomposition. I suppose this was done just for the
> sake of "code readability."

I agree; that's stupid.

> It's not that we're concerned with MATLAB compatibility. But, frankly
> I've never heard that the short names MATLAB uses for some very common
> operations are a liability. So, when a common operation has a short,
> easily-remembered name that is in common usage, why not use it?
>
> That's basically the underlying philosophy. NumPy has too many very
> basic operations to try and create very_long_names for them.
>
> I know there are differing opinions out there. I can understand that.
> That's why I suspect that many codes I will want to use will be
> written with easy_to_understand_but_very_long names and I'll grin and
> bear the extra horizontal space that it takes up in my code.

--
|>|\/|<
/------------------------------------------------------------------\
|David M. Cooke              http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Travis O. <oli...@ie...> - 2006-07-16 04:21:30
|
Victoria G. Laidler wrote:
> Jonathan Taylor wrote:
>
>> pseudoinverse
>>
>> it's the same name matlab uses:
>>
>> http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pinv.html
>
> Thanks for the explanation.
>
> I'm puzzled by the naming choice, however. Standard best practice in
> writing software is to give understandable names, to improve
> readability and code maintenance. Obscure abbreviations like "pinv"
> pretty much went out with the FORTRAN 9-character limit for variable
> names. It's very unusual to see them in new software nowadays, and it
> always looks unprofessional to me.

I appreciate this feedback. It's a question that comes up occasionally,
so I'll at least give my opinion on the matter, which may shed some
light on it.

I disagree with the general "long-name" concept when it comes to
"very-common" operations. It's easy to take an idea and over-generalize
it for the sake of consistency. I've seen too many codes where very long
names actually get in the way of code readability.

Someone reading code will have to know what an operation actually is to
understand it. A name like "generalized_inverse" doesn't convey any
intrinsic meaning to the non-practitioner anyway. You always have to
"know" what the function is "really" doing. All that's needed is a
"unique" name. I've found that long names are harder to remember
(there's more opportunity for confusion about how much of the full name
was actually used and how the words were combined).

A particularly ludicrous case, for example, was the fact that the very
common SVD (whose acronym everybody doing linear algebra uses) was named
in LinearAlgebra (an unnecessarily long module name to begin with) with
the horribly long and unsightly name of singular_value_decomposition. I
suppose this was done just for the sake of "code readability."

It's not that we're concerned with MATLAB compatibility. But, frankly,
I've never heard that the short names MATLAB uses for some very common
operations are a liability. So, when a common operation has a short,
easily-remembered name that is in common usage, why not use it?

That's basically the underlying philosophy. NumPy has too many very
basic operations to try and create very_long_names for them.

I know there are differing opinions out there. I can understand that.
That's why I suspect that many codes I will want to use will be written
with easy_to_understand_but_very_long names and I'll grin and bear the
extra horizontal space that it takes up in my code.

-Travis
From: Travis O. <oli...@ie...> - 2006-07-16 04:06:17
|
Victoria G. Laidler wrote:
> Sven Schreiber wrote:
>
>> Jon Peirce schrieb:
>>
>>> There used to be a function generalized_inverse in the numpy.linalg
>>> module (certainly in 0.9.2).
>>>
>>> In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old
>>> subpackage. Does that mean it's being dropped? Did it have to move?
>>> Now I have to add code to my package to try both locations because
>>> my users might have any version... :-(
>>
>> Maybe I don't understand, but what's wrong with numpy.linalg.pinv?
>
> Er, what's a pinv? It doesn't sound anything like a
> generalized_inverse.

'pseudo'-inverse. It's the name MATLAB uses for the thing. There are
many choices for a "generalized inverse", which is actually a misnomer
for what is being done. The Moore-Penrose pseudo-inverse is a particular
form of the generalized inverse (and the one being computed).

-Travis
From: Travis O. <oli...@ie...> - 2006-07-16 04:03:41
|
Jon Peirce wrote:
> There used to be a function generalized_inverse in the numpy.linalg
> module (certainly in 0.9.2).
>
> In numpy 0.9.8 it seems to have been moved to the numpy.linalg.old
> subpackage. Does that mean it's being dropped?

No. We are just emphasizing the new names. The old names are just there
for compatibility with Numeric. The new names have been there from the
beginning of the NumPy releases.

So, just call it numpy.linalg.pinv and it will work in all versions.

-Travis
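For illustration, a minimal usage sketch (the matrix is invented for the
example; pinv computes the Moore-Penrose pseudo-inverse and, unlike inv,
also accepts non-square or singular input):

    import numpy as N

    A = N.array([[1., 2.], [3., 4.], [5., 6.]])  # 3x2: no ordinary inverse
    Ap = N.linalg.pinv(A)                        # 2x3 Moore-Penrose pseudo-inverse

    # Defining Moore-Penrose property: A A+ A == A
    print(N.allclose(N.dot(A, N.dot(Ap, A)), A))  # True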
From: Travis O. <oli...@ie...> - 2006-07-16 04:01:48
|
Nick Fotopoulos wrote:
> Dear all,
>
> I often make use of numpy.vectorize to make programs read more like
> the physics equations I write on paper. numpy.vectorize is basically
> a wrapper for numpy.frompyfunc. Reading Travis's Scipy Book (mine is
> dated Jan 6 2005) kind of suggests to me that it returns a
> full-fledged ufunc exactly like built-in ufuncs.
>
> First, is this true?

Yes, it is true. But, it is a ufunc on Python object data-types. It is
calling the underlying Python function at every point in the loop.

> Second, how is the performance? i.e., are my functions performing
> approximately as fast as they could be, or would they still gain a
> great deal of speed by being rewritten in C or some other compiled
> python accelerator?

Absolutely, the functions could be made faster by avoiding the call back
into Python at each evaluation stage. I don't think it would be too hard
to replace the function call with something else that could be evaluated
more quickly. But, this has not been done yet.

> As an aside, I've found the following function decorator to be
> helpful for readability, and perhaps others will enjoy it or improve
> upon it:

Thanks for the decorator. This should be put on the www.scipy.org wiki.

-Travis
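To illustrate the behavior described above, a small sketch (the step
function is invented for the example; the per-element call back into
Python is exactly the cost Travis mentions):

    import numpy as N

    def step(x, threshold):
        # Plain-Python scalar comparison; vectorize lifts it to arrays.
        return float(x > threshold)

    vstep = N.vectorize(step)  # ufunc-like wrapper built on frompyfunc

    # Broadcasts like a ufunc, but calls step() once per element.
    print(vstep(N.array([-1.0, 0.5, 2.0]), 0.0))  # -> [ 0.  1.  1.]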