From: Francesc A. <fa...@ca...> - 2006-09-29 11:44:43
|
Hi, I'm writing this here because the numpy Trac seems down: {{{ Oops... Trac detected an internal error: The Trac Environment needs to be upgraded. Run "trac-admin /home/scipy/trac/numpy upgrade" }}} Well, it seems that there are inconsistencies on creating arrays coming from negative values using unsigned integers: In [41]: numpy.asarray([-1], dtype='uint8') Out[41]: array([255], dtype=uint8) In [42]: numpy.asarray([-1], dtype='uint16') Out[42]: array([65535], dtype=uint16) until here it's fine, but: In [43]: numpy.asarray([-1], dtype='uint32') --------------------------------------------------------------------------- <type 'exceptions.TypeError'> Traceback (most recent call last) /home/faltet/python.nobackup/numpy/<ipython console> in <module>() /usr/local/lib/python2.5/site-packages/numpy/core/numeric.py in asarray(a, dtype, order) 129 are converted to base class ndarray. 130 """ --> 131 return array(a, dtype, copy=False, order=order) 132 133 def asanyarray(a, dtype=None, order=None): <type 'exceptions.TypeError'>: long() argument must be a string or a number, not 'list' and the same happens with 'uint64'. My numpy version is 1.0.dev3216 Regards, -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
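The wrapped values above (255, 65535) follow directly from modular arithmetic: a negative input stored in an n-bit unsigned type becomes the input modulo 2**n. A minimal sketch in plain Python, deliberately independent of how any particular NumPy version handles the uint32/uint64 cases reported here:

```python
# Sketch: reproduce the expected wrapped values with plain modular
# arithmetic, so the result does not depend on NumPy's conversion rules.
def wrapped(value, nbits):
    """Two's-complement wraparound of `value` into an n-bit unsigned range."""
    return value % (1 << nbits)

for nbits in (8, 16, 32, 64):
    print("uint%d: %d" % (nbits, wrapped(-1, nbits)))
# uint8: 255, uint16: 65535, uint32: 4294967295, uint64: 18446744073709551615
```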
From: David C. <da...@ar...> - 2006-09-29 07:35:14
|
Travis Oliphant wrote: > David Cournapeau wrote: > >> Hi, >> >> >> What are the rules concerning storage with numpy ? >> > The rule is that a numpy array has "strides" which specify how many > bytes to skip to get to the next element in the array. That's the > internal model. There are no hard and fast rules about storage order. > Internally, C-order is as good as Fortran-order (except the array > iterator gives special preference to C-order and all functions for which > the order can be specified (like zeros) default to C-order). > Ok, this is again a bad habit from matlab to think in C or F order... > Thus, the storage order is whatever the strides say it is. Now, there > are flags that keep track of whether or not the strides agree with the 2 > recognized special cases of "Fortran-order" (first-index varies the > fastest) or "C-order" (last-index varies the fastest). But, this is > only for convenience. Very few functions actually require a > specification of storage order. Those that allow it default to "C-order". > > You can't think of a NumPy array as having a particular storage order > unless you explicitly request it. One of the most common ways that > Fortran-order arrays show up, for example, is when a C-order array is > transposed. A transpose operation does nothing except flip the strides > (and therefore the flags) of the array. This is what is happening in > concatenate (using axis=1) to give you a Fortran-order array. > Basically, code equivalent to the following is being run: > concatenate([X1.T, X2.T]).T > > In the second example, you explicitly create the array (and therefore > the strides) as C-order and then fill it (so it doesn't change on you). > The first example used array calculations which don't guarantee the > storage order. > > This is all seamless to the user until you have to interface with > extension code. Ideally, you write compiled code that deals with > strided arrays. If you can't, then you request an array of the required > storage-order. > > By the way, for interfacing with ctypes, check out the > ctypeslib.ndpointer class-creation function for flag checking and the > require function for automatic conversion of an array to specific > requirements. > > I tried to do that at first, but I couldn't make the numpy examples work; after trying at home with beta versions, it looks like it was a ctypes version problem, as it works with ctypes 1.0 + numpy 1.0rc1. Thanks for the explanations; this answers most of the questions I had about numpy's internal layout compared to matlab, which I am used to. I think this should be in the wiki somewhere; do you mind if I use your email as a basis for the tentative numpy tutorial (memory layout section, maybe)? David |
From: Travis O. <oli...@ee...> - 2006-09-29 01:22:38
|
Bill Spotz wrote: >On Sep 28, 2006, at 12:03 PM, Travis Oliphant wrote: > > > >>The other option is to improve your converter in setElements so >>that it >>can understand any of the array scalar integers and not just the >>default >>Python integer. >> >> > >I think this may be the best approach. > >This may be something worthwhile to put in the numpy.i interface >file: a set of typemaps that handle a set of basic conversions for >those array scalar types for which it makes sense. I'll look into it. > > That's a good idea. Notice that there are some routines for making your life easier here. You should look at the tp_int function for the gentype array (it converts scalars to arrays). You call the "__int__" special method of the scalar to convert it to a Python integer. You should first check to see that it is an integer scalar PyArray_IsScalar(obj, Integer) because the "__int__" method coerces to an integer if it is a float (but maybe you want that behavior). There are other functions in the C-API that return the data directly from the scalar --- check them out. The macros in arrayscalar.h are useful. -Travis |
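For readers following this thread from the Python side rather than the C-API side, the check Travis describes has a straightforward Python-level counterpart; the sketch below is only an illustration of the idea (isinstance against np.integer plays the role of PyArray_IsScalar(obj, Integer)), not the actual numpy.i typemap:

```python
import numpy as np

def as_plain_int(obj):
    """Illustrative converter: accept a Python int or any NumPy integer
    scalar and return a plain Python int (via the scalar's __int__)."""
    if isinstance(obj, (int, np.integer)):   # np.integer covers int8..uint64
        return int(obj)
    raise TypeError("expected an integer, got %s" % type(obj).__name__)

print(as_plain_int(np.int32(7)), as_plain_int(np.uint64(3)), as_plain_int(42))
```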
From: Bill S. <wf...@sa...> - 2006-09-29 00:12:22
|
On Sep 28, 2006, at 12:03 PM, Travis Oliphant wrote: > The other option is to improve your converter in setElements so > that it > can understand any of the array scalar integers and not just the > default > Python integer. I think this may be the best approach. This may be something worthwhile to put in the numpy.i interface file: a set of typemaps that handle a set of basic conversions for those array scalar types for which it makes sense. I'll look into it. ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. Box 5800 Fax: (505)284-5451 ** ** Albuquerque, NM 87185-0370 Email: wf...@sa... ** |
From: Travis O. <oli...@ie...> - 2006-09-28 18:03:33
|
Bill Spotz wrote: > I am wrapping code using swig and extending it to use numpy. > > One class method I wrap (let's call it myElements()) returns an array > of ints, and I convert it to a numpy array with > > PyArray_SimpleNew(1,n,'i'); > You should probably use NPY_INT instead of 'i' for the type-code. > I obtain the data pointer, fill in the values and return it as the > method return argument. > > In python, it is common to want to loop over this array and treat its > elements as integers: > > for row in map.myElements(): > matrix.setElements(row, [row-1,row,row+1], [-1.0,2.0,-1.0]) > > On a 32-bit machine, this has worked fine, but on a 64-bit machine, I > get a type error: > > TypeError: in method 'setElements', argument 2 of type 'int' > > because row is a <type 'int32scalar'>. > > It would be nice if I could get the integer conversion to work > automatically under the covers, but I'm not exactly sure how to make > that work. > Yeah, it can be confusing at first. You just have to make sure you are matching the right c-data-types. I'm not quite sure what the problem here is given your description, because I don't know what setElements expects. My best guess is that it is related to the fact that a Python int uses the 'long' c-type. Thus, you should very likely be using PyArray_SimpleNew(1, n, NPY_LONG) instead of int so that your integer array always matches what Python is using as integers. The other option is to improve your converter in setElements so that it can understand any of the array scalar integers and not just the default Python integer. The reason this all worked on 32-bit systems is probably that the array scalar corresponding to NPY_INT is a sub-class of the Python integer. It can't be on a 64-bit platform because of binary incompatibility of the layout. Hope that helps. -Travis |
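The size mismatch described above is easy to see from Python; a small sketch (the printed itemsizes are platform-dependent, which is exactly the point -- what a Python int corresponds to differs between 32-bit and 64-bit builds, while NPY_INT stays 32 bits):

```python
import numpy as np

# Compare the C-level integer types NumPy exposes; on a typical 64-bit
# Unix build, intc (C int) is 32 bits while the long/pointer-sized types
# are 64 bits, so int32 array elements are no longer plain Python ints.
for dt in (np.intc, np.int_, np.longlong, np.intp):
    d = np.dtype(dt)
    print(d.name, d.itemsize * 8, "bits")

element = np.zeros(3, dtype=np.intc)[0]
print(type(element))   # a NumPy integer scalar (e.g. numpy.int32), not int
```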
From: Tim H. <tim...@ie...> - 2006-09-28 17:08:23
|
Stefan van der Walt wrote: > Hi all, > > Currently, the power function returns '0' for negative powers of > integers: > > In [1]: N.power(3,-2) > Out[1]: 0 > > (or, more confusingly) > > In [1]: N.power(a,b) > Out[1]: 0 > > which is almost certainly not the answer you want. Two possible > solutions may be to upcast the input to float before calculation, or > to return nan. > Returning nan seems silly. There really is a result, or rather two possible results, one a tad more sensible than the other. If we were going to punt, it would make more sense to raise an exception, but I doubt that's necessary. In addition, nan is really a floating point value; if we're going to return a floating point value, not an integer, we might as well return the actual result. > This would be consistent with a function like sqrt: > > In [10]: N.sqrt(3) > Out[10]: 1.7320508075688772 > > In [11]: N.sqrt(-3) > Out[11]: nan > > Does anyone have an opinion on whether the change is necessary, and if > so, on which one would work best? > This is a complicated tangle of worms. First off, there's both "a**b" and "power(a, b)". These don't necessarily need to work the same. In fact, they already differ somewhat in that a**b does some optimization when b is a small scalar that power does not (I think, anyway -- I haven't looked at this in a while). However, having them differ in any significant way is likely to be quite confusing, so it should only be considered if there's some compelling reason to support multiple behaviors here. There is a solution that is, IMO, simple, obvious and not quite right. That is to return floats if b contains any negative numbers while returning integers otherwise. That sounds appealing at first, but it will cause confusion and memory blow-ups when all of one's arrays suddenly become floats because somewhere or other a negative value crept into an exponent. It's fine for the return type to depend on the types of the arguments; it's not so good for it to depend on the values. This restriction gets a little weaker once future division arrives, since ints and floats will be closer to indistinguishable, but it still has some force. On the other hand, this is consistent with the rest of Python's behavior. Another path is to just always return floats from power. One downside of this is that we lose the ability to do true integer powers, which is sometimes useful. A second downside is that it introduces an inconsistency with Python's scalar 'x**y'. One way to finesse both of these issues is to make numpy.power consistent with math.pow; that is, it returns floating point values when passed integers. At the same time, make a**b consistent with Python's '**' operator in that any negative exponents trigger floating point return values. This isn't perfect, but it's as close to a self-consistent solution as I can think of. That's my two cents. -tim |
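For reference, the upcast-to-float behaviour discussed here can always be requested explicitly by making either operand a float; a small sketch showing only that path, since the integer ** negative-integer case itself has varied across NumPy versions:

```python
import numpy as np

# Upcasting to float gives the mathematically expected reciprocals,
# independent of how integer inputs with negative exponents are treated.
print(np.power(3.0, -2))                              # 0.111...
print(np.power(np.arange(1, 5, dtype=float), -2))     # [1.  0.25  0.111...  0.0625]
print(3.0 ** -2 == 1.0 / 9.0)                         # True
```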
From: Bill S. <wf...@sa...> - 2006-09-28 16:43:52
|
I am wrapping code using swig and extending it to use numpy. One class method I wrap (let's call it myElements()) returns an array of ints, and I convert it to a numpy array with PyArray_SimpleNew(1,n,'i'); I obtain the data pointer, fill in the values and return it as the method return argument. In python, it is common to want to loop over this array and treat its elements as integers: for row in map.myElements(): matrix.setElements(row, [row-1,row,row+1], [-1.0,2.0,-1.0]) On a 32-bit machine, this has worked fine, but on a 64-bit machine, I get a type error: TypeError: in method 'setElements', argument 2 of type 'int' because row is a <type 'int32scalar'>. It would be nice if I could get the integer conversion to work automatically under the covers, but I'm not exactly sure how to make that work. ** Bill Spotz ** ** Sandia National Laboratories Voice: (505)845-0170 ** ** P.O. Box 5800 Fax: (505)284-5451 ** ** Albuquerque, NM 87185-0370 Email: wf...@sa... ** |
From: Tim H. <tim...@ie...> - 2006-09-28 16:32:31
|
Tim Hochberg wrote: > Tim Hochberg wrote: > >> Travis Oliphant wrote: >> >>> mg wrote: >>> >>> >>>> Hello, >>>> >>>> I just download the newly Python-2.5 and Numpy-1.0rc1 and all work >>>> fine on Linux-x86 32 and 64bit platforms. >>>> Now, I try to compile the both distributions on WindowsXP with >>>> VisualStudio2003. No problem to compile Python-2.5, but i have some >>>> troubles with Numpy-1.0rc1 and I didn't find any help in the >>>> provided setup.py. So, does someone can tell me how to do it? >>>> >>>> >>>> >>> I don't use VisualStudio2003 on Windows to compile NumPy (I use >>> mingw). Tim Hochberg once used a microsoft compiler to compile a >>> previous version of NumPy and some things had to be fixed to make it >>> work. I'm not sure if some in-compatibilities have crept in since >>> then or not. But, I'd sure like to resolve it if they have. >>> >>> So, please post what problems you are having. You may be the first >>> person to try a microsoft compiler with Python 2.5 >>> >>> >> It was VS2003 that I used to compile numpy. However, I switched boxes >> a while ago and I haven't got around to trying to compile on the new >> box, I've just been using the released builds. So I'm not much help at >> the moment. Things are clearing up a little bit here, so maybe I can >> carve some time out to get things set up to compile in the next couple >> days. If so I'll let you know what I find. >> > FWIW, I just got svn head to compile cleanly against Python *2.4* with > VS2003. Later today I will try compiling against 2.5 > > OK. SVN head compiles out of the box and passes all tests for me under both Python2.4 and Python2.5. My stderr output includes all of the same warnings about not being able to find ATLAS, BLAS and LAPACK that mg's does, but not "ERROR: Failed to test configuration". The stdout output also looks similar except for the error at the end. (I can send you the entire stderr/stdout output privately if you like). One thing I did notice is that your Python appears to be in a nonstandard location: \Program Files\python\dist\src instead of the standard \Python25, or at least that's where it's looking for things. Perhaps this is confusing either distutils or some of numpy's addons to distutils and causing it to misplace python25.lib. At least that's my best (and only) guess at the moment. -tim |
From: Tim H. <tim...@ie...> - 2006-09-28 15:08:07
|
Tim Hochberg wrote: > Travis Oliphant wrote: >> mg wrote: >> >>> Hello, >>> >>> I just download the newly Python-2.5 and Numpy-1.0rc1 and all work >>> fine on Linux-x86 32 and 64bit platforms. >>> Now, I try to compile the both distributions on WindowsXP with >>> VisualStudio2003. No problem to compile Python-2.5, but i have some >>> troubles with Numpy-1.0rc1 and I didn't find any help in the >>> provided setup.py. So, does someone can tell me how to do it? >>> >>> >> I don't use VisualStudio2003 on Windows to compile NumPy (I use >> mingw). Tim Hochberg once used a microsoft compiler to compile a >> previous version of NumPy and some things had to be fixed to make it >> work. I'm not sure if some in-compatibilities have crept in since >> then or not. But, I'd sure like to resolve it if they have. >> >> So, please post what problems you are having. You may be the first >> person to try a microsoft compiler with Python 2.5 >> > It was VS2003 that I used to compile numpy. However, I switched boxes > a while ago and I haven't got around to trying to compile on the new > box, I've just been using the released builds. So I'm not much help at > the moment. Things are clearing up a little bit here, so maybe I can > carve some time out to get things set up to compile in the next couple > days. If so I'll let you know what I find. FWIW, I just got svn head to compile cleanly against Python *2.4* with VS2003. Later today I will try compiling against 2.5 -tim |
From: mg <mg....@la...> - 2006-09-28 14:42:07
|
Unfortunately, no Windows-x86 or Windows-x86-64bit Numpy-1.0rc1 installer are available on SourceForge yet. So the only current solution for us is to compile it. Moreover, our generic C++ framework is compiled with VisualStudio on Windows-native and we compile all additions to it with the same compiler, if compilation is needed. Indeed, we have observed that Visual C++ is still the closest to a "default" or "native" compiler for windows platform, like gcc is for linux....well at least that's our experience when using other commercial (or open-source, think python whose provide visual project files) softwares requiring rebuilding/relinking. Use MinGW for Numpy means risk some un-compatibilities between Numpy and our framework (and even Python) or migrate all our development environment from Visual Studio to MinGW. this is not really an option for the near or mean furture.... If some of you have good experiences for linking (static or dynamic) libraries compiled with mixed compilers (especially mingw and visual), knowing that our libraries contains C++, C and fortran code, we could consider this as a temprary option for numpy, but for convenience we would ultimately prefer to use only 1 compiler to avoid a double maintenance of building tools.... Then, I wonder if the compilation of Numpy with Visual-Studio-2003 (or 2005) is scheduled ? For your information, this is the standard output and the standard error of the compilation of Numpy-1.0rc1 with Visual Studio 2003: >>> command line >>> python setup.py build 1>stdout.txt 2>stderr.txt >>> stdout.txt >>> F2PY Version 2_3198 blas_opt_info: blas_mkl_info: libraries mkl,vml,guide not found in c:\Program Files\python\dist\src\lib libraries mkl,vml,guide not found in C:\ NOT AVAILABLE atlas_blas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in c:\Program Files\python\dist\src\lib libraries ptf77blas,ptcblas,atlas not found in C:\ NOT AVAILABLE atlas_blas_info: libraries f77blas,cblas,atlas not found in c:\Program Files\python\dist\src\lib libraries f77blas,cblas,atlas not found in C:\ NOT AVAILABLE blas_info: libraries blas not found in c:\Program Files\python\dist\src\lib libraries blas not found in C:\ NOT AVAILABLE blas_src_info: NOT AVAILABLE NOT AVAILABLE lapack_opt_info: lapack_mkl_info: mkl_info: libraries mkl,vml,guide not found in c:\Program Files\python\dist\src\lib libraries mkl,vml,guide not found in C:\ NOT AVAILABLE NOT AVAILABLE atlas_threads_info: Setting PTATLAS=ATLAS libraries ptf77blas,ptcblas,atlas not found in c:\Program Files\python\dist\src\lib libraries lapack_atlas not found in c:\Program Files\python\dist\src\lib libraries ptf77blas,ptcblas,atlas not found in C:\ libraries lapack_atlas not found in C:\ numpy.distutils.system_info.atlas_threads_info NOT AVAILABLE atlas_info: libraries f77blas,cblas,atlas not found in c:\Program Files\python\dist\src\lib libraries lapack_atlas not found in c:\Program Files\python\dist\src\lib libraries f77blas,cblas,atlas not found in C:\ libraries lapack_atlas not found in C:\ numpy.distutils.system_info.atlas_info NOT AVAILABLE lapack_info: libraries lapack not found in c:\Program Files\python\dist\src\lib libraries lapack not found in C:\ NOT AVAILABLE lapack_src_info: NOT AVAILABLE NOT AVAILABLE running build running config_fc running build_src building py_modules sources building extension "numpy.core.multiarray" sources Generating build\src.win32-2.5\numpy\core\config.h No module named msvccompiler in numpy.distutils, trying from distutils.. 
0 Could not locate executable g77 Could not locate executable f77 Could not locate executable gfortran Could not locate executable f95 customize GnuFCompiler Could not locate executable f77 Executable f77 does not exist Could not locate executable f77 Executable f77 does not exist Could not locate executable f77 Executable f77 does not exist Could not locate executable ifort Could not locate executable ifc Could not locate executable ifort Could not locate executable efort Could not locate executable efc Could not locate executable ifort Could not locate executable efort Could not locate executable efc customize IntelVisualFCompiler Could not locate executable ifl Executable ifl does not exist customize AbsoftFCompiler Could not locate executable f90 Executable f90 does not exist customize CompaqVisualFCompiler Could not locate executable DF Executable DF does not exist customize IntelItaniumVisualFCompiler Could not locate executable efl Executable efl does not exist customize Gnu95FCompiler Could not locate executable f95 Executable f95 does not exist Could not locate executable f95 Executable f95 does not exist Could not locate executable f95 Executable f95 does not exist customize G95FCompiler Could not locate executable g95 Executable g95 does not exist customize GnuFCompiler Could not locate executable f77 Executable f77 does not exist Could not locate executable f77 Executable f77 does not exist Could not locate executable f77 Executable f77 does not exist customize Gnu95FCompiler Could not locate executable f95 Executable f95 does not exist Could not locate executable f95 Executable f95 does not exist Could not locate executable f95 Executable f95 does not exist customize GnuFCompiler Could not locate executable f77 Executable f77 does not exist Could not locate executable f77 Executable f77 does not exist Could not locate executable f77 Executable f77 does not exist customize GnuFCompiler using config C:\Program Files\Microsoft Visual Studio .NET 2003\Vc7\bin\cl.exe /c /nologo /Ox /MD /W3 /GX /DNDEBUG -Ic:\Program Files\python\dist\src\include -Inumpy\core\src -Inumpy\core\include -Ic:\Program Files\python\dist\src\include -Ic:\Program Files\python\dist\src\PC /Tc_configtest.c /Fo_configtest.obj C:\Program Files\Microsoft Visual Studio .NET 2003\Vc7\bin\link.exe /nologo /INCREMENTAL:NO /LIBPATH:c:\Program Files\python\dist\src\lib /LIBPATH:C:\ _configtest.obj /OUT:_configtest.exe LINK : fatal error LNK1104: cannot open file 'python25.lib' failure. removing: _configtest.c _configtest.obj >>> stderr.txt >>> Running from numpy source directory. C:\Program Files\numpy-1.0rc1\numpy\distutils\system_info.py:1296: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) C:\Program Files\numpy-1.0rc1\numpy\distutils\system_info.py:1305: UserWarning: Blas (http://www.netlib.org/blas/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [blas]) or by setting the BLAS environment variable. warnings.warn(BlasNotFoundError.__doc__) C:\Program Files\numpy-1.0rc1\numpy\distutils\system_info.py:1308: UserWarning: Blas (http://www.netlib.org/blas/) sources not found. 
Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [blas_src]) or by setting the BLAS_SRC environment variable. warnings.warn(BlasSrcNotFoundError.__doc__) C:\Program Files\numpy-1.0rc1\numpy\distutils\system_info.py:1205: UserWarning: Atlas (http://math-atlas.sourceforge.net/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [atlas]) or by setting the ATLAS environment variable. warnings.warn(AtlasNotFoundError.__doc__) C:\Program Files\numpy-1.0rc1\numpy\distutils\system_info.py:1216: UserWarning: Lapack (http://www.netlib.org/lapack/) libraries not found. Directories to search for the libraries can be specified in the numpy/distutils/site.cfg file (section [lapack]) or by setting the LAPACK environment variable. warnings.warn(LapackNotFoundError.__doc__) C:\Program Files\numpy-1.0rc1\numpy\distutils\system_info.py:1219: UserWarning: Lapack (http://www.netlib.org/lapack/) sources not found. Directories to search for the sources can be specified in the numpy/distutils/site.cfg file (section [lapack_src]) or by setting the LAPACK_SRC environment variable. warnings.warn(LapackSrcNotFoundError.__doc__) Traceback (most recent call last): File "setup.py", line 89, in ? setup_package() File "setup.py", line 82, in setup_package configuration=configuration ) File "C:\Program Files\numpy-1.0rc1\numpy\distutils\core.py", line 174, in setup return old_setup(**new_attr) File "c:\Program Files\python\dist\src\lib\distutils\core.py", line 149, in setup dist.run_commands() File "c:\Program Files\python\dist\src\lib\distutils\dist.py", line 955, in run_commands self.run_command(cmd) File "c:\Program Files\python\dist\src\lib\distutils\dist.py", line 975, in run_command cmd_obj.run() File "c:\Program Files\python\dist\src\lib\distutils\command\build.py", line 112, in run self.run_command(cmd_name) File "c:\Program Files\python\dist\src\lib\distutils\cmd.py", line 333, in run_command self.distribution.run_command(command) File "c:\Program Files\python\dist\src\lib\distutils\dist.py", line 975, in run_command cmd_obj.run() File "C:\Program Files\numpy-1.0rc1\numpy\distutils\command\build_src.py", line 87, in run self.build_sources() File "C:\Program Files\numpy-1.0rc1\numpy\distutils\command\build_src.py", line 106, in build_sources self.build_extension_sources(ext) File "C:\Program Files\numpy-1.0rc1\numpy\distutils\command\build_src.py", line 212, in build_extension_sources sources = self.generate_sources(sources, ext) File "C:\Program Files\numpy-1.0rc1\numpy\distutils\command\build_src.py", line 270, in generate_sources source = func(extension, build_dir) File "numpy\core\setup.py", line 50, in generate_config_h raise "ERROR: Failed to test configuration" ERROR: Failed to test configuration Thanks a lot for your help. Tim Hochberg wrote: > Travis Oliphant wrote: > >> mg wrote: >> >> >>> Hello, >>> >>> I just download the newly Python-2.5 and Numpy-1.0rc1 and all work fine >>> on Linux-x86 32 and 64bit platforms. >>> Now, I try to compile the both distributions on WindowsXP with >>> VisualStudio2003. No problem to compile Python-2.5, but i have some >>> troubles with Numpy-1.0rc1 and I didn't find any help in the provided >>> setup.py. So, does someone can tell me how to do it? >>> >>> >>> >>> >> I don't use VisualStudio2003 on Windows to compile NumPy (I use mingw). 
>> Tim Hochberg once used a microsoft compiler to compile a previous >> version of NumPy and some things had to be fixed to make it work. I'm >> not sure if some in-compatibilities have crept in since then or not. >> But, I'd sure like to resolve it if they have. >> >> So, please post what problems you are having. You may be the first >> person to try a microsoft compiler with Python 2.5 >> >> > It was VS2003 that I used to compile numpy. However, I switched boxes a > while ago and I haven't got around to trying to compile on the new box, > I've just been using the released builds. So I'm not much help at the > moment. Things are clearing up a little bit here, so maybe I can carve > some time out to get things set up to compile in the next couple days. > If so I'll let you know what I find. > > -tim > > >> -Travis >> >> >> ------------------------------------------------------------------------- >> Take Surveys. Earn Cash. Influence the Future of IT >> Join SourceForge.net's Techsay panel and you'll get the chance to share your >> opinions on IT & business topics through brief surveys -- and earn cash >> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV >> _______________________________________________ >> Numpy-discussion mailing list >> Num...@li... >> https://lists.sourceforge.net/lists/listinfo/numpy-discussion >> >> >> >> > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > |
From: Tim H. <tim...@ie...> - 2006-09-28 13:26:54
|
Travis Oliphant wrote: > mg wrote: > >> Hello, >> >> I just download the newly Python-2.5 and Numpy-1.0rc1 and all work fine >> on Linux-x86 32 and 64bit platforms. >> Now, I try to compile the both distributions on WindowsXP with >> VisualStudio2003. No problem to compile Python-2.5, but i have some >> troubles with Numpy-1.0rc1 and I didn't find any help in the provided >> setup.py. So, does someone can tell me how to do it? >> >> >> > I don't use VisualStudio2003 on Windows to compile NumPy (I use mingw). > Tim Hochberg once used a microsoft compiler to compile a previous > version of NumPy and some things had to be fixed to make it work. I'm > not sure if some in-compatibilities have crept in since then or not. > But, I'd sure like to resolve it if they have. > > So, please post what problems you are having. You may be the first > person to try a microsoft compiler with Python 2.5 > It was VS2003 that I used to compile numpy. However, I switched boxes a while ago and I haven't got around to trying to compile on the new box, I've just been using the released builds. So I'm not much help at the moment. Things are clearing up a little bit here, so maybe I can carve some time out to get things set up to compile in the next couple days. If so I'll let you know what I find. -tim > -Travis > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > |
From: eric <er...@en...> - 2006-09-28 12:05:27
|
I've been using the new record arrays and lexsort from numpy quite a lot lately. Very cool stuff. Using the nightly egg for numpy from here (I believe it is up to date...): http://code.enthought.com/enstaller/eggs/numpy-nightly-py2.4-win32.egg I get segfaults when using lexsort on character arrays. A lot of my columns in record arrays are string based, so sorting the arrays based on these columns would be really handy. Here is an example that crashes for me. C:\wrk\mt\trunk\src\lib\mt\statement\tests>python Python 2.4.3 - Enthought Edition 1.0.0 (#69, Aug 2 2006, 12:09:59) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from numpy import lexsort >>> lst = [1,2,3] >>> lexsort((lst,)) array([0, 1, 2]) >>> lst = ['abc','cde','fgh'] >>> lexsort((lst,)) <seg-fault> Do others see this? I've opened a ticket at: http://projects.scipy.org/scipy/numpy/ticket/298 thanks, eric |
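Until that ticket is resolved, one way to sort a record array on string columns without going through lexsort is argsort with the order argument on a structured array; a hedged sketch of that workaround (the field names below are made up for illustration):

```python
import numpy as np

# Hypothetical record data: sort by a string field, then a numeric field.
names  = np.array(["cde", "abc", "abc", "fgh"])
scores = np.array([2, 3, 1, 0])
rec = np.rec.fromarrays([names, scores], names="name,score")

order = np.argsort(rec, order=["name", "score"])   # lexicographic, no lexsort
print(rec[order].name)    # ['abc' 'abc' 'cde' 'fgh']
print(rec[order].score)   # [1 3 2 0]
```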
From: Travis O. <oli...@ie...> - 2006-09-28 11:15:16
|
David Cournapeau wrote: > Hi, > > > What are the rules concerning storage with numpy ? The rule is that a numpy array has "strides" which specify how many bytes to skip to get to the next element in the array. That's the internal model. There are no hard and fast rules about storage order. Internally, C-order is as good as Fortran-order (except the array iterator gives special preference to C-order and all functions for which the order can be specified (like zeros) default to C-order). Thus, the storage order is whatever the strides say it is. Now, there are flags that keep track of whether or not the strides agree with the 2 recognized special cases of "Fortran-order" (first-index varies the fastest) or "C-order" (last-index varies the fastest). But, this is only for convenience. Very few functions actually require a specification of storage order. Those that allow it default to "C-order". You can't think of a NumPy array as having a particular storage order unless you explicitly request it. One of the most common ways that Fortran-order arrays show up, for example, is when a C-order array is transposed. A transpose operation does nothing except flip the strides (and therefore the flags) of the array. This is what is happening in concatenate (using axis=1) to give you a Fortran-order array. Basically, code equivalent to the following is being run: concatenate([X1.T, X2.T]).T In the second example, you explicitly create the array (and therefore the strides) as C-order and then fill it (so it doesn't change on you). The first example used array calculations which don't guarantee the storage order. This is all seamless to the user until you have to interface with extension code. Ideally, you write compiled code that deals with strided arrays. If you can't, then you request an array of the required storage-order. By the way, for interfacing with ctypes, check out the ctypeslib.ndpointer class-creation function for flag checking and the require function for automatic conversion of an array to specific requirements. -Travis |
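The points above are easy to verify interactively; here is a short sketch (np.require is the function mentioned for requesting a particular layout, and it copies only when the request is not already satisfied):

```python
import numpy as np

a = np.zeros((3, 4))
print(a.strides, a.flags["C_CONTIGUOUS"], a.flags["F_CONTIGUOUS"])

# Transposing copies no data -- it only flips the strides and the flags.
b = a.T
print(b.strides, b.flags["C_CONTIGUOUS"], b.flags["F_CONTIGUOUS"])

# Extension code that needs one specific layout can ask for it explicitly.
c = np.require(b, requirements=["C_CONTIGUOUS"])
print(c.flags["C_CONTIGUOUS"])   # True
```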
From: Hanno K. <kl...@ph...> - 2006-09-28 10:48:01
|
Emanuele, the scipy compiler flags under http://www.scipy.org/Installing_SciPy/BuildingGeneral work well. However, if you happen to have to use gcc 3.2.3 (often present in Red Hat Enterprise editions, for example), you have to turn off optimization, otherwise lapack doesn't build properly. The correct build flags are then OPTS = -m64 -fPIC (at least that's what worked for me). Regards, Hanno Emanuele Olivetti <oli...@it...> said: > Ops, sorry for the misleding subject: I wrote ATLAS but I meant LAPACK :) > > Emanuele Olivetti wrote: > > Hi, > > I'm installing numpy on a 2 cpus intel pentium 4 Linux box. I'm installing BLAS and > > LAPACK from sources too and I need to tune compiler flags. Here is the question: which > > are the proper flags for compiling LAPACK? Inside lapack.tgz make.inc.LINUX says: > > OPTS = -funroll-all-loops -fno-f2c -O3 > > instead on scipy.org[0] it's suggested: > > OPTS = -O2 > ... > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys -- and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > -- Hanno Klemm kl...@ph... |
From: Stefan v. d. W. <st...@su...> - 2006-09-28 10:29:57
|
Hi all, Currently, the power function returns '0' for negative powers of integers: In [1]: N.power(3,-2) Out[1]: 0 (or, more confusingly) In [1]: N.power(a,b) Out[1]: 0 which is almost certainly not the answer you want. Two possible solutions may be to upcast the input to float before calculation, or to return nan. This would be consistent with a function like sqrt: In [10]: N.sqrt(3) Out[10]: 1.7320508075688772 In [11]: N.sqrt(-3) Out[11]: nan Does anyone have an opinion on whether the change is necessary, and if so, on which one would work best? Regards Stéfan |
From: David C. <da...@ar...> - 2006-09-28 09:42:32
|
Hi, I noticed a behaviour which I found counter-intuitive, at least when using concatenate. I have a function which takes a numpy array of rank 2 as input, let's say foo(in): a = N.randn((10, 2)) foo(a) To test a ctypes implementation of foo against the python version, my test has something like X1 = N.linspace(-2, 2, 10)[:, N.newaxis] X2 = N.linspace(-1, 3, 10)[:, N.newaxis] a = N.concatenate(([X1, X2]), 1) which has Fortran storage (column major order), whereas creating a as a = N.zeros((10, 2)) a[:,0] = N.linspace(-2, 2, 10) a[:,1] = N.linspace(-1, 3, 10) has C storage (row major order). What are the rules concerning storage with numpy ? I thought it was always C, except if stated explicitly. I can obviously understand why concatenate gives Fortran order from an implementation point of view, but this looks kind of strange to me. David |
From: Francesc A. <fa...@ca...> - 2006-09-28 09:24:40
|
On Wed, 27 Sep 2006 at 21:17 -0600, Travis Oliphant wrote: > Hi all, > > I'd like to release numpy 1.0rc2 on about October 5 of next week. > Then, the release of numpy 1.0 official should happen on Monday, October > 17. Please try and get all fixes and improvements in before then. > Backward-incompatible changes are not acceptable at this point (unless > they are minor or actually bug-fixes). I think NumPy has been cooking > long enough. Any remaining problems can be fixed with maintenance > releases. When 1.0 comes out, we will make a 1.0 release branch where > bug-fixes should go as well as on the main trunk (I'd love for a way to > do that automatically). In order to reduce the overhead of committing bug fixes to both trunk and the 1.0 branch, you may want to delay the making of the 1.0 branch as much as possible. Eventually, when you have to start changes that properly belong to trunk, then it's time to create the branch, but meanwhile you can save yourself quite a bit of synchronization work. Anyway, it is my pleasure to help find bugs for NumPy! -- >0,0< Francesc Altet http://www.carabos.com/ V V Cárabos Coop. V. Enjoy Data "-" |
From: Emanuele O. <oli...@it...> - 2006-09-28 08:20:51
|
Oops, sorry for the misleading subject: I wrote ATLAS but I meant LAPACK :) Emanuele Olivetti wrote: > Hi, > I'm installing numpy on a 2 cpus intel pentium 4 Linux box. I'm installing BLAS and > LAPACK from sources too and I need to tune compiler flags. Here is the question: which > are the proper flags for compiling LAPACK? Inside lapack.tgz make.inc.LINUX says: > OPTS = -funroll-all-loops -fno-f2c -O3 > instead on scipy.org[0] it's suggested: > OPTS = -O2 ... |
From: Emanuele O. <oli...@it...> - 2006-09-28 08:18:26
|
Hi, I'm installing numpy on a 2-CPU Intel Pentium 4 Linux box. I'm installing BLAS and LAPACK from sources too and I need to tune compiler flags. Here is the question: which are the proper flags for compiling LAPACK? Inside lapack.tgz, make.inc.LINUX says: OPTS = -funroll-all-loops -fno-f2c -O3 instead, on scipy.org[0] it's suggested: OPTS = -O2 I assume that using -O3 and unrolling loops should give definitely better performance than just -O2, but I'm wondering if the more aggressive options can be a problem for the current numpy release (numpy-1.0rc1). Is it safe to use '-funroll-all-loops -fno-f2c -O3'? Thanks in advance for answers, Emanuele [0]: http://scipy.org/Installing_SciPy P.S.: my GCC is v3.4.5 |
From: Travis O. <oli...@ie...> - 2006-09-28 07:10:36
|
mg wrote: > Hello, > > I just download the newly Python-2.5 and Numpy-1.0rc1 and all work fine > on Linux-x86 32 and 64bit platforms. > Now, I try to compile the both distributions on WindowsXP with > VisualStudio2003. No problem to compile Python-2.5, but i have some > troubles with Numpy-1.0rc1 and I didn't find any help in the provided > setup.py. So, does someone can tell me how to do it? > > I don't use VisualStudio2003 on Windows to compile NumPy (I use mingw). Tim Hochberg once used a microsoft compiler to compile a previous version of NumPy and some things had to be fixed to make it work. I'm not sure if some incompatibilities have crept in since then or not, but I'd sure like to resolve them if they have. So, please post what problems you are having. You may be the first person to try a microsoft compiler with Python 2.5. -Travis |
From: mg <mg....@la...> - 2006-09-28 06:56:17
|
Hello, I just downloaded the newly released Python-2.5 and Numpy-1.0rc1, and both work fine on Linux-x86 32- and 64-bit platforms. Now I am trying to compile both distributions on WindowsXP with VisualStudio2003. No problem compiling Python-2.5, but I have some trouble with Numpy-1.0rc1 and I didn't find any help in the provided setup.py. So, can someone tell me how to do it? Thanks, Mathieu. |
From: Travis O. <oli...@ie...> - 2006-09-28 03:17:31
|
Hi all, I'd like to release numpy 1.0rc2 on about October 5 of next week. Then, the release of numpy 1.0 official should happen on Monday, October 17. Please try and get all fixes and improvements in before then. Backward-incompatible changes are not acceptable at this point (unless they are minor or actually bug-fixes). I think NumPy has been cooking long enough. Any remaining problems can be fixed with maintenance releases. When 1.0 comes out, we will make a 1.0 release branch where bug-fixes should go as well as on the main trunk (I'd love for a way to do that automatically). There are lots of projects that need to start converting to NumPy 1.0 if we are going to finally have a "merged" community. The actual release of NumPy 1.0 will indicate that we are now committing to stability. Thanks to all that have been contributing so much to the project. -Travis |
From: Travis O. <oli...@ie...> - 2006-09-28 02:24:13
|
During my NumPy Tutorial at the SciPy conference last month, somebody asked a question about the memory requirements of index arrays, and I gave the wrong impression in my answer. Here is the context and the correct response, which should alleviate concerns about large cross-product index arrays. I was noting how copy-based (advanced) indexing using index arrays works in multiple dimensions by creating an array of the same shape as the input index arrays, constructed by selecting the elements indicated by respective elements of the index arrays. If a is 2-d, then a[[10,12,14],[13, 15, 17]] returns a 1-d array with elements [a[10,13], a[12,15], a[14,17]]. This is *not* the cross-product that some would expect. The cross-product can be generated using the ix_ function: a[ix_([10,12,14], [13,15,17])] is equivalent to a[[[10,10,10],[12,12,12],[14,14,14]], [[13,15,17],[13,15,17],[13,15,17]]] which will return [[a[10,13], a[10,15], a[10,17]], [a[12,13], a[12,15], a[12,17]], [a[14,13], a[14,15], a[14,17]]] The concern mentioned at the conference was that the cross-product would generate large intermediate index arrays for large input arrays to ix_. At the time, I think I validated the concern. However, the concern is unfounded. This is because the cross-product function does not actually create a large intermediate array, but uses the broadcasting implementation of indexing to generate the 2-d indexing array "on-the-fly" (much like ogrid and other tools in NumPy). Notice: ix_([10,12,14], [13,15,17]) (array([[10], [12], [14]]), array([[13, 15, 17]])) The first indexing array is 3x1, while the second is 1x3. The result array will be 3x3, but the 2-d indexing array is never actually stored. This is just to set my mind at ease about possible mis-information I spread during the tutorial, and to give a little tutorial on advanced indexing. Best, -Travis |
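A compact runnable illustration of the two indexing modes described above (the array contents and index values are arbitrary):

```python
import numpy as np

a = np.arange(20 * 20).reshape(20, 20)

# "Zipped" advanced indexing: pairs the index arrays element-wise,
# i.e. it selects a[10,13], a[12,15], a[14,17].
print(a[[10, 12, 14], [13, 15, 17]])

# Cross-product indexing via ix_: the index arrays stay (3,1) and (1,3);
# broadcasting builds the 3x3 selection without storing a large
# intermediate index array.
rows, cols = np.ix_([10, 12, 14], [13, 15, 17])
print(rows.shape, cols.shape)   # (3, 1) (1, 3)
print(a[rows, cols])            # the 3x3 block of selected elements
```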
From: Travis O. <oli...@ie...> - 2006-09-28 01:04:24
|
Sebastian Haase wrote: > Hi, > This is a vaguely formulated question ... > When I work with memmap'ed files/arrays I have a derived class > that adds special attributes to the array class (referring to the MRC image > file format used in medical / microscopy imaging) > > What are the pros and cons for asarray() vs. asanyarray() > > One obvious con for asanyarray is that its longer and asarray is what I have > been using for the last few years ;-) > asarray() guarantees you have a base-class array. Thus, you are not going to be thwarted by re-definitions of infix operators, or other changed methods or attributes which you might use in your routine. asanyarray() allows a simple way of making sure your function returns any sub-class so that, for example, matrices are passed seamlessly through your function (matrix in and matrix out). However, a big drawback of asanyarray is that you must be sure that the way your function is written will not get confused by how a sub-class may overwrite the array methods and attributes. This significantly limits the application of asanyarray in my mind, as it is pretty difficult to predict what a sub-class *might* do to its methods (including the special methods implementing the infix operators). A better way to write a function that passes any sub-class is to use asarray() so you are sure of the behavior of all methods and "infix" operators, and then use the __array_wrap__ method of the actual input arguments (using __array_priority__ to choose between competing input objects). I expect that a decorator that automates this process will be added to NumPy eventually. Several examples have already been posted on this list. After getting the array result, you call the stored __array_wrap__ function, which will take a base-class ndarray and return an object of the right Python type (without copying data if possible). This is how the ufuncs work and why they can take sub-classes (actually anything with an __array__ method) and return the same kind of object. -Travis |
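A hedged sketch of the pattern described above: compute on a base-class array internally, then hand the result back through the input's __array_wrap__ so subclass instances come back out. The function and its name are invented for illustration; this is not the decorator Travis mentions:

```python
import numpy as np

def doubled(x):
    """Illustration only: asarray() inside for predictable behaviour,
    then re-wrap the result so ndarray subclasses pass through."""
    arr = np.asarray(x)            # base-class semantics for the computation
    result = arr * 2
    wrap = getattr(x, "__array_wrap__", None)
    return wrap(result) if wrap is not None else result

m = np.matrix([[1, 2], [3, 4]])
print(type(doubled(m)))            # subclass (numpy.matrix) preserved
print(type(doubled([1, 2, 3])))    # plain ndarray for plain input
```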
From: Sebastian H. <ha...@ms...> - 2006-09-27 23:46:24
|
Hi, This is a vaguely formulated question ... When I work with memmap'ed files/arrays I have a derived class that adds special attributes to the array class (referring to the MRC image file format used in medical / microscopy imaging). What are the pros and cons of asarray() vs. asanyarray()? One obvious con for asanyarray is that it's longer, and asarray is what I have been using for the last few years ;-) Thanks, Sebastian |
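The basic difference in Sebastian's question can be checked in a couple of lines; np.matrix stands in here for a memmap-derived subclass like the one described:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])     # any ndarray subclass would do
print(type(np.asarray(m)))          # <class 'numpy.ndarray'>  -- base class
print(type(np.asanyarray(m)))       # <class 'numpy.matrix'>   -- subclass kept
```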