From: Charles R H. <cha...@gm...> - 2006-10-10 17:26:25
|
On 10/10/06, Christopher Barker <Chr...@no...> wrote:
> Hi all:
>
> Fredrik Lundh wrote:
> > A little later than planned, but PIL 1.1.6 beta 2 is now available from SVN:
> >
> >     http://svn.effbot.python-hosting.com/tags/pil-1.1.6b2/
> >
> > A tarball will appear on effbot.org shortly:
> >
> >     http://effbot.org/downloads/#Imaging
> >
> > As usual, PIL 1.1.6 supports all Python versions from 1.5.2 and onwards,
> > including 2.5.
> >
> > For a hopefully complete list of changes, see:
> >
> >     http://effbot.org/zone/pil-changes-116.htm
>
> From there:
>
> """
> * Added "fromarray" function, which takes an object implementing the
>   NumPy array interface and creates a PIL Image from it. (from Travis
>   Oliphant).
>
> * Added NumPy array interface support (__array_interface__) to the Image
>   class (based on code by Travis Oliphant). This allows you to easily
>   convert between PIL image memories and NumPy arrays:
>
>       import numpy, Image
>
>       i = Image.open('lena.jpg')
>       a = numpy.asarray(i)    # a is readonly
>       i = Image.fromarray(a)
> """

fromarray wasn't there for me when running the latest PIL from SVN last week.
I had to use another function whose name escapes me at the moment (I don't use
PIL very often), but yes, there is a way to use numpy arrays in PIL.

Chuck
From: Christopher B. <Chr...@no...> - 2006-10-10 17:00:10
|
Hi all:

Fredrik Lundh wrote:
> A little later than planned, but PIL 1.1.6 beta 2 is now available from SVN:
>
>     http://svn.effbot.python-hosting.com/tags/pil-1.1.6b2/
>
> A tarball will appear on effbot.org shortly:
>
>     http://effbot.org/downloads/#Imaging
>
> As usual, PIL 1.1.6 supports all Python versions from 1.5.2 and onwards,
> including 2.5.
>
> For a hopefully complete list of changes, see:
>
>     http://effbot.org/zone/pil-changes-116.htm

From there:

"""
* Added "fromarray" function, which takes an object implementing the
  NumPy array interface and creates a PIL Image from it. (from Travis
  Oliphant).

* Added NumPy array interface support (__array_interface__) to the Image
  class (based on code by Travis Oliphant). This allows you to easily
  convert between PIL image memories and NumPy arrays:

      import numpy, Image

      i = Image.open('lena.jpg')
      a = numpy.asarray(i)    # a is readonly
      i = Image.fromarray(a)
"""

I hope some of us numpy users will be able to test this new functionality
while it's in beta.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT            (206) 526-6959   voice
7600 Sand Point Way NE      (206) 526-6329   fax
Seattle, WA  98115          (206) 526-6317   main reception

Chr...@no...
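For anyone who wants to test the beta, a minimal round-trip sketch of the
interface described above (assuming PIL 1.1.6b2 plus a recent numpy; the
image file name is hypothetical):

    import numpy
    import Image   # PIL 1.1.6; with later Pillow this would be "from PIL import Image"

    im = Image.open('lena.jpg')        # hypothetical input file
    a = numpy.asarray(im)              # read-only array view of the image data
    # a.shape would be e.g. (512, 512, 3) with dtype uint8 for an RGB JPEG

    b = 255 - a                        # any numpy operation; result is a new, writable array
    im2 = Image.fromarray(b)           # back to a PIL image via the array interface
    im2.save('lena_inverted.jpg')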
From: Christopher B. <Chr...@no...> - 2006-10-10 16:47:23
|
Eric Emsellem wrote:
> I have problems getting binaries for wxPython (Suse10.1) and I
> don't want to attempt a full compilation of that package...

Did you try the rpms at sourceforge? One of them may well be compatible.
If not, I've had good luck with building the source rpms:

    rpmbuild --rebuild NameOfRPM.srpm

> installed Suse10.1 from the downloadable version and many many
> packages are missing there

That would be a trick -- you'll need the relevant dev packages. But that
sounds like an issue you'll be coming up against over and over again anyway.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT            (206) 526-6959   voice
7600 Sand Point Way NE      (206) 526-6329   fax
Seattle, WA  98115          (206) 526-6317   main reception

Chr...@no...
From: Daniel D. <dd...@br...> - 2006-10-10 16:34:58
|
Hi, I have an area of memory which is shared between processes (it is actually a shared memory segment). The address of this memory is stored in a python long variable, which I pass to various custom C/C++ python modules. I would like to construct a numpy array in this area. Is there any way I can tell numpy to use a specific address (stored as a long) to use as storage for the array? Is there any interest in adding this? Thanks, Daniel |
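One approach that is sometimes suggested for this -- not an official numpy
API for shared memory, just a sketch -- is to wrap the raw address with
ctypes and hand the resulting buffer to numpy.frombuffer. The address and
element count below are made up:

    import ctypes
    import numpy

    addr = 0x10000000   # hypothetical: the shared-memory address held as a Python long
    n = 1024            # hypothetical: number of float64 elements the segment holds

    # Expose the raw memory as a ctypes buffer, then build an array on top of it.
    buf = (ctypes.c_double * n).from_address(addr)
    a = numpy.frombuffer(buf, dtype=numpy.float64)

    a[:] = 0.0   # writes go straight into the shared segment; no copy is made
    # Note: nothing here checks bounds or lifetime -- the segment must stay
    # mapped (and buf referenced) for as long as the array is in use.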
From: Charles H. <c....@se...> - 2006-10-10 15:24:00
|
Hello,

I'm trying to use py2exe but I have a problem :-) . I've already compiled
some programs with it, but this time I have a problem because my program is
using Numpy modules.

My setup.py is the following:

    from distutils.core import setup
    import py2exe, sys

    sys.path.append("tools")
    sys.path.append("report")
    sys.path.append("spirent")
    sys.path.append("numpydir")

    setup(
        console = ['checkfile.py'],
    )

But when I enter the command "python setup.py py2exe", lots of numpy files
appear to be missing. Therefore, my executable file does not work.

Is there something special to do with the setup.py in order to import numpy
modules?

Thank you very much for your help

Best Regards
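A common suggestion for this kind of failure is to tell py2exe explicitly to
bundle the whole numpy package rather than letting it trace individual
modules. A hedged sketch building on the setup.py above (only py2exe's
standard options dictionary is added; everything else is unchanged):

    from distutils.core import setup
    import py2exe   # registers the py2exe command with distutils

    setup(
        console=['checkfile.py'],
        options={
            'py2exe': {
                # Copy the whole numpy package instead of letting py2exe try to
                # trace individual submodules, which often misses extension modules.
                'packages': ['numpy'],
            }
        },
    )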
From: Alan G I. <ai...@am...> - 2006-10-10 13:21:57
|
> JJ wrote: >> In my humble opinion, I think indexing is a weak spot in >> numpy. On Tue, 10 Oct 2006, Travis Oliphant apparently wrote: > I'm sorry you see it that way. I think indexing is a strength of > numpy. It's a little different then what you are used to > with Matlab, perhaps, but it is much more general-purpose > and capable I am finding numpy indexing to be great, but it would be helpful perhaps to new users to have a few of the examples from this thread make it to the Cookbook http://www.scipy.org/Cookbook/BuildingArrays Sorry, I cannot do that at the moment. Maybe JJ would find this a profitable exercise? Cheers, Alan Isaac |
From: jj <jos...@ya...> - 2006-10-10 13:20:11
|
> > -- If M is a nxm matrix and P and Z are nx1 (or 1xn)
> > matrices, then it would be nice if we could write
> > M[P==Z,:] to obtain all columns and only those rows
> > where P==Z.
>
> This works already if p and z are 1-d arrays. That seems to be the
> only issue. you want this to work with p and z being 2-d arrays (i.e.
> matrices). The problem is there is already a defined behavior for this
> case that would have to be ignored (i.e. special-cased to get what you
> want). This could be done within the special matrix sub-class of
> course, but I'm not sure it is wise. Too many special cases make life
> difficult down the road.

Thanks for the thoughtful reply Travis. I see I will just have to get used
to using code such as M[(P==Z).A.ravel(),:], which I can live with. Your
comments helped put this in perspective for me.

By the way, is there a web site that lists all the current modules (as does
http://www.scipy.org/doc/api_docs/scipy.html)?

Best,
John
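For readers hitting the same issue, the workaround John describes can be seen
in a small self-contained sketch (the data is made up):

    import numpy
    from numpy import matrix

    M = matrix(numpy.arange(12).reshape(4, 3))
    P = matrix([[1], [2], [3], [4]])
    Z = matrix([[1], [0], [3], [0]])

    mask = (P == Z).A.ravel()   # nx1 boolean matrix -> flat 1-d boolean array
    rows = M[mask, :]           # rows 0 and 2, every column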
From: David D. <dav...@lo...> - 2006-10-10 12:33:24
|
On Tue, Oct 10, 2006 at 01:11:28PM +0200, Charles Hanot wrote:
> Hi,

Hi

> my question is the following one. Is it possible to use Numpy with
> py2exe in order to compile my program?

I am right now finishing a dev (a GUI app in pyGtk) that uses numpy, and it
has to be released as a win32 app. I used to have problems packing it using
py2exe with the Enthought version of numpy. Now I use numpy 1.0RC1 and it
works fine (I'm only using numpy, not scipy).

I use mpl too, and packaging it using py2exe is a little more problematic.
One thing I had to do is to set the "optimize" py2exe option to 0 (because
mpl does automatic docstring stuffs, and docstrings are removed when
optimizing is used).

Note that I am using the latest py2exe too.

David

--
David Douard                              LOGILAB, Paris (France)
Formations Python, Zope, Plone, Debian : http://www.logilab.fr/formations
Développement logiciel sur mesure :      http://www.logilab.fr/services
Informatique scientifique :              http://www.logilab.fr/science
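For reference, the "optimize" setting David mentions is an ordinary py2exe
option; a sketch of how it is commonly spelled in setup.py (the script name
is hypothetical):

    from distutils.core import setup
    import py2exe

    setup(
        windows=['myapp.py'],                  # hypothetical pyGtk entry point
        options={'py2exe': {'optimize': 0}},   # 0 = no -O, so docstrings survive
    )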
From: Charles H. <c....@se...> - 2006-10-10 11:11:45
|
Hi,

my question is the following one. Is it possible to use Numpy with py2exe in
order to compile my program?

In fact I'm trying to compile a program using py2exe but the problem is that
the numpy function cannot be loaded by py2exe. I've already seen on the web
that other users have the same problem but I've never seen any answer. Could
you tell me if there is something to do? Is there any solution?

Thank you very much,

Best Regards,

Charles Hanot
From: Robert C. <cim...@nt...> - 2006-10-10 09:48:20
|
Karol Langner wrote:
> Can someone give me a hint as to where in numpy the AMD and UMFpack libraries
> are used, if at all? I ask, because they have their respective sections in
> site.cfg.example in the trunk.

AMD and UMFpack are optional parts of scipy.linsolve, so if you do not want
them you can freely ignore the entries in site.cfg (or remove them).

r.
From: Travis O. <oli...@ie...> - 2006-10-10 08:53:46
|
Peter Bienstman wrote:
> This is on an AMD64 platform:
>
> Python 2.4.3 (#1, Sep 27 2006, 14:14:48)
> [GCC 4.1.1 (Gentoo 4.1.1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy
> >>> print numpy.__version__
> 1.0rc2
> >>> a = numpy.float64(1.0)
> >>> a
> 1.0
> >>> a.real
> 1.0
> >>> a.imag
> Segmentation fault

Thanks for the test. Fixed in SVN r3299

-Travis
From: Peter B. <Pet...@ug...> - 2006-10-10 08:33:26
|
This is on an AMD64 platform:

Python 2.4.3 (#1, Sep 27 2006, 14:14:48)
[GCC 4.1.1 (Gentoo 4.1.1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> print numpy.__version__
1.0rc2
>>> a = numpy.float64(1.0)
>>> a
1.0
>>> a.real
1.0
>>> a.imag
Segmentation fault

Thanks!

Peter
From: Johannes L. <a.u...@gm...> - 2006-10-10 08:03:41
|
Hi,

> > This seems like a rather common operation - I know I've needed it on
> > at least two occasions - is it worth creating some sort of C
> > implementation? What is the appropriate generalization?
>
> Some sort of indirect addressing infrastructure. But it looks like this
> could be tricky to make safe, it would need to do bounds checking at the
> least and would probably work best with a contiguous array as the target. I
> could see some sort of low-level function called argassign(target, indirect
> index, source) that could be used to build more complicated things in
> python.

This looks somehow like the behaviour of builtin map. One could do
map(fn, index) with appropriate fn. But iirc this is not faster than a for
loop if fn is not a builtin function.

An infrastructure like you imagine might use a similar syntax (with
underlying C funcs). The main point is, how to tell it which operation to
perform (add, multiply, average, whatever). Implementing a bunch of functions
add_argassign, ... whatever_argassign contradicts my understanding of
"generalized". ;)

Maybe it would be simpler to just have functions which handle the index
arrays in advance. An example will show it best:

    index = array([1, 2, 4, 2, 3, 1])   # 1 and 2 occur twice
    data = array([1, 1, 1, 1, 1, 1])

    newindex, newdata = filter_and_add(index, data)   # the kind of function I mean

    print newindex
    --> array([1, 2, 4, 3])   # duplicates have been removed
    print newdata
    --> array([2, 2, 1, 1])   # corresponding entries have been added

    a[newindex] += newdata

Johannes
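For reference, a filter_and_add along these lines can be sketched in pure
numpy, assuming bincount's optional weights argument is available; note the
returned indices come out sorted rather than in first-appearance order:

    import numpy

    def filter_and_add(index, data):
        """Collapse duplicate indices, summing the corresponding data values."""
        sums = numpy.bincount(index, weights=data)   # slot i holds the total for index i
        newindex = numpy.unique(index)               # each index exactly once, sorted
        return newindex, sums[newindex]

    index = numpy.array([1, 2, 4, 2, 3, 1])
    data = numpy.array([1., 1., 1., 1., 1., 1.])
    newindex, newdata = filter_and_add(index, data)
    # newindex -> array([1, 2, 3, 4]), newdata -> array([2., 2., 1., 1.])

    a = numpy.zeros(5)
    a[newindex] += newdata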
From: Travis O. <oli...@ie...> - 2006-10-10 06:51:15
|
JJ wrote:
> Hello.
> I haven't been following development too closely lately, but I did just
> download and reinstall the current svn version. For what its worth, I
> would like to again suggest two changes:

Suggestions are nice. Working code is better. Many ideas are just too
difficult to implement (and still work with the system as it exists) and so
never get done. I'm not saying these ideas fit into this category, but
generally if a suggestion is not taken it's very likely seen in that light.

> -- If M is a nxm matrix and P and Z are nx1 (or 1xn) matrices, then it
> would be nice if we could write M[P==Z,:] to obtain all columns and only
> those rows where P==Z.

This works already if p and z are 1-d arrays. That seems to be the only
issue. You want this to work with p and z being 2-d arrays (i.e. matrices).
The problem is there is already a defined behavior for this case that would
have to be ignored (i.e. special-cased to get what you want). This could be
done within the special matrix sub-class of course, but I'm not sure it is
wise. Too many special cases make life difficult down the road.

It is better to just un-think the ambiguity between 1-d and 2-d arrays that
was inspired by Matlab and recognize a 1-d situation when you have it. But,
that's just my opinion. I'm not dead-set against special-casing in the matrix
object if enough matrix-oriented people are in favor of it. But, it would be
a feature for a later NumPy (not 1.0).

> Likewise, for 1xm (or mx1) matrices U and V, it would be nice to be able
> to use M[P==Z,U==V].

Same issue as before + cross-product versus element-by-element.

> Also, it would be nice to use M[P==Z,U==2], for example, to obtain
> selected rows where matrix U is equal to a constant.

Again. Form the cross-product using ix_().

> -- It would be nice to slice a matrix by using M[[1,2,3],[3,5,7]], for
> example.

You can get the cross-product using M[ix_([1,2,3],[3,5,7])]. This was a
design choice and I think a good one. It's been discussed before.

> I believe this would help make indexing more user friendly. In my humble
> opinion, I think indexing is a weak spot in numpy.

I'm sorry you see it that way. I think indexing is a strength of numpy. It's
a little different than what you are used to with Matlab, perhaps, but it is
much more general-purpose and capable (there is one weak spot in that a
certain boolean indexing operation uses more memory than it needs to, but
that is a separate issue...). The Matlab behavior can always be created in a
sub-class.

Best regards,

-Travis
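To make the distinction concrete, a small self-contained example (throwaway
array, current numpy) of paired fancy indexing versus the ix_() cross
product:

    import numpy

    M = numpy.arange(100).reshape(10, 10)

    # Two integer sequences are paired element-by-element:
    picked = M[[1, 2, 3], [3, 5, 7]]
    # picked -> array([13, 25, 37]), i.e. the elements (1,3), (2,5) and (3,7)

    # ix_() turns the same sequences into a cross product, i.e. a 3x3 sub-block:
    sub = M[numpy.ix_([1, 2, 3], [3, 5, 7])]
    # sub[i, j] == M[[1, 2, 3][i], [3, 5, 7][j]]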
From: Karol L. <kar...@kn...> - 2006-10-10 06:46:56
|
On Monday 09 of October 2006 20:15, Karol Langner wrote:
> Can someone give me a hint as to where in numpy the AMD and UMFpack
> libraries are used, if at all? I ask, because they have their respective
> sections in site.cfg.example in the trunk.
>
> Karol

One more thing - should there be a [fft] section in this file?

Karol

--
written by Karol Langner
Tue Oct 10 08:45:05 CEST 2006
From: JJ <jos...@ya...> - 2006-10-10 06:17:35
|
Hello.

I haven't been following development too closely lately, but I did just
download and reinstall the current svn version. For what its worth, I would
like to again suggest two changes:

-- If M is a nxm matrix and P and Z are nx1 (or 1xn) matrices, then it would
be nice if we could write M[P==Z,:] to obtain all columns and only those rows
where P==Z. Likewise, for 1xm (or mx1) matrices U and V, it would be nice to
be able to use M[P==Z,U==V]. Also, it would be nice to use M[P==Z,U==2], for
example, to obtain selected rows where matrix U is equal to a constant.

-- It would be nice to slice a matrix by using M[[1,2,3],[3,5,7]], for
example.

I believe this would help make indexing more user friendly. In my humble
opinion, I think indexing is a weak spot in numpy. I find that most of my
debugging involves finding the right code to make a matrix slice the way I
want it.

John
From: Karol L. <kar...@kn...> - 2006-10-10 06:12:06
|
On Saturday 07 of October 2006 20:32, Pauli Virtanen wrote:
> la, 2006-10-07 kello 20:01 +0200, Karol Langner kirjoitti:
> > I still get a floating point exception when running the numpy tests,
> > though. I haven't checked out numpy for some time, so I don't now if it's
> > a bug, or if it's my setup. The same thing happens when I use my manually
> > built atlas/lapack and the built-in debian atlas/lapack libraries. I'd be
> > grateful for a comment on if this is just me:
> >
> > >> numpy.test(10,10)
> > [output]
> > Check reading the top fields of a nested array ... ok
> > Check reading the nested fields of a nested array (1st level) ... ok
> > Check access nested descriptors of a nested array (1st level) ... ok
> > Check reading the nested fields of a nested array (2nd level) ... ok
> > Check access nested descriptors of a nested array (2nd level) ... ok
> > check_access_fields (numpy.core.tests.test_numerictypes.test_read_values_plain_multiple) ... ok
> > check_access_fields (numpy.core.tests.test_numerictypes.test_read_values_plain_single) ... ok
> > check_cdouble (numpy.tests.test_linalg.test_det) Floating point exception
>
> If you are using Debian stable (sarge), you might need to read
>
> http://www.its.caltech.edu/~astraw/coding.html#libc-patched-for-debian-sarge-to-fix-floating-point-exceptions-on-sse2
>
> In short, libc in Debian stable has a bug that makes programs crash with
> SIGFPE when SSE instructions are invoked. The solution is to recompile
> libc from patched sources, and replace libm.so.6. At least for me this
> fixed crashes in numpy.
>
> Pauli Virtanen

Big thanks for that link. I took the leap and it does fix that SIGFPE. As a
note, after building the glibc sources, you only have to install the libc6
binary deb.

There should be a link to that page somewhere on the wiki.

Karol

--
written by Karol Langner
Tue Oct 10 08:07:04 CEST 2006
From: Christian K. <ck...@ho...> - 2006-10-10 02:03:36
|
Christian Kristukat <ckkart <at> hoc.net> writes:
> Hi,
> i've got problems running a numpy/scipy extension module (scipy.sandbox.odr)
> built with cygwin/mingw32 on XP on other machines (windows 2000). I get those
> very informative 'windows encountered a problem' messages when calling the
> extension module - importing seems to work.

More precisely, a binary extension module built on a Celeron D crashes on an
Athlon Thunderbird, whereas the Athlon binary runs on the Celeron. So, I'm
wondering if I can turn off some processor specific optimization and if this
might help here.

Christian
From: Jay P. <pa...@gm...> - 2006-10-10 01:26:29
|
I'm in the process of finally switching over to Python 2.5, and am trying to
build numpy. Unfortunately, it dies during the build:

Jay-Computer:~/Desktop/numpy-1.0rc2 jayparlar$ python setup.py build
Running from numpy source directory.
F2PY Version 2_3296
blas_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec', '-I/System/Library/Frameworks/vecLib.framework/Headers']

lapack_opt_info:
  FOUND:
    extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
    define_macros = [('NO_ATLAS_INFO', 3)]
    extra_compile_args = ['-faltivec']

running build
running config_fc
running build_src
building py_modules sources
building extension "numpy.core.multiarray" sources
Generating build/src.macosx-10.3-fat-2.5/numpy/core/config.h
customize NAGFCompiler
customize AbsoftFCompiler
customize IbmFCompiler
Could not locate executable g77
Could not locate executable f77
Could not locate executable gfortran
Could not locate executable f95
customize GnuFCompiler
customize Gnu95FCompiler
customize G95FCompiler
customize GnuFCompiler
customize Gnu95FCompiler
customize NAGFCompiler
customize NAGFCompiler using config
C compiler: gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk
  -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd
  -fno-common -dynamic -DNDEBUG -g -O3
compile options: '-I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5
  -Inumpy/core/src -Inumpy/core/include
  -I/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 -c'
gcc: _configtest.c
gcc: cannot specify -o with -c or -S and multiple compilations
gcc: cannot specify -o with -c or -S and multiple compilations
failure.
removing: _configtest.c _configtest.o
numpy/core/setup.py:50: DeprecationWarning: raising a string exception is deprecated
  raise "ERROR: Failed to test configuration"
Traceback (most recent call last):
  File "setup.py", line 89, in <module>
    setup_package()
  File "setup.py", line 82, in setup_package
    configuration=configuration )
  File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/core.py", line 174, in setup
    return old_setup(**new_attr)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 974, in run_commands
    self.run_command(cmd)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 994, in run_command
    cmd_obj.run()
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/command/build.py", line 112, in run
    self.run_command(cmd_name)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/distutils/dist.py", line 994, in run_command
    cmd_obj.run()
  File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", line 87, in run
    self.build_sources()
  File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", line 106, in build_sources
    self.build_extension_sources(ext)
  File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", line 212, in build_extension_sources
    sources = self.generate_sources(sources, ext)
  File "/Users/jayparlar/Desktop/numpy-1.0rc2/numpy/distutils/command/build_src.py", line 270, in generate_sources
    source = func(extension, build_dir)
  File "numpy/core/setup.py", line 50, in generate_config_h
    raise "ERROR: Failed to test configuration"
ERROR: Failed to test configuration

This is with the Universal 2.5 binary, and OS X 10.3.9. Any ideas? Sorry if
this one has been asked before, but I can't seem to find a solution anywhere.

Jay P.
From: Travis O. <oli...@ie...> - 2006-10-09 20:58:32
|
Release Candidate 2.0 is now out. Thanks to all the great testing and fixes that were done between 1.0rc1 and 1.0rc2 The release date for NumPy 1.0 is Oct. 17. There will be a freeze on the trunk starting Monday Oct. 16 so any changes should be in by then. If significant changes are made then we will release 1.0rc3 on Oct. 17 and push the release date of NumPy 1.0 to Oct 24. -Travis |
From: Charles R H. <cha...@gm...> - 2006-10-09 20:14:54
|
On 10/9/06, A. M. Archibald <per...@gm...> wrote: > > > > > > c contains arbitray floats. > > > > > essentially it is to compute class totals > > > > > as in total[class[i]] += value[i] > > > > This seems like a rather common operation - I know I've needed it on > > > at least two occasions - is it worth creating some sort of C > > > implementation? What is the appropriate generalization? > > > > Some sort of indirect addressing infrastructure. But it looks like this > > could be tricky to make safe, it would need to do bounds checking at the > > least and would probably work best with a contiguous array as the > target. I > > could see some sort of low-level function called argassign(target, > indirect > > index, source) that could be used to build more complicated things in > > python. > > If it were only assignment that was needed, fancy indexing could > already handle it. The problem is that this is something that can't > *quite* be done with the current fancy indexing infrastructure - every > time an index comes up we want to add the value to what's there, > rather than replacing it. I suppose histogram covers one major > application; in fact if histogram allowed weightings ("count this > point as -0.6") it would solve the OP's problem. Sure, just add functions arg_addassign, etc., which means dest[ind[i]] += src[i], just as arg_assign would mean dest[ind[i]] = src[i]. If you covered all the assign variants I think you could do most everything. Upper level python routines could deal with shaping and such while the lower level routines dealt with flat, contiguous arrays. Chuck |
From: A. M. A. <per...@gm...> - 2006-10-09 20:00:02
|
> > > > c contains arbitray floats. > > > > essentially it is to compute class totals > > > > as in total[class[i]] += value[i] > > This seems like a rather common operation - I know I've needed it on > > at least two occasions - is it worth creating some sort of C > > implementation? What is the appropriate generalization? > > Some sort of indirect addressing infrastructure. But it looks like this > could be tricky to make safe, it would need to do bounds checking at the > least and would probably work best with a contiguous array as the target. I > could see some sort of low-level function called argassign(target, indirect > index, source) that could be used to build more complicated things in > python. If it were only assignment that was needed, fancy indexing could already handle it. The problem is that this is something that can't *quite* be done with the current fancy indexing infrastructure - every time an index comes up we want to add the value to what's there, rather than replacing it. I suppose histogram covers one major application; in fact if histogram allowed weightings ("count this point as -0.6") it would solve the OP's problem. A. M. Archibald |
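As a follow-up note for the archive: numpy.histogram did later gain exactly
this kind of weights argument. A small sketch, assuming a NumPy recent enough
to support it (the sample data is made up):

    import numpy

    points = numpy.array([0.1, 0.1, 0.7, 0.9])    # where each sample falls
    values = numpy.array([0.5, -0.6, 2.0, 1.0])   # how much each sample counts

    totals, edges = numpy.histogram(points, bins=2, range=(0.0, 1.0), weights=values)
    # totals -> array([-0.1,  3. ]): each point contributes its weight to its bin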
From: Charles R H. <cha...@gm...> - 2006-10-09 18:57:47
|
On 10/9/06, A. M. Archibald <per...@gm...> wrote:
> > > > > c contains arbitray floats.
> > > > > essentially it is to compute class totals
> > > > > as in total[class[i]] += value[i]
>
> > > This seems like a rather common operation - I know I've needed it on
> > > at least two occasions - is it worth creating some sort of C
> > > implementation? What is the appropriate generalization?
>
> > Some sort of indirect addressing infrastructure. But it looks like this
> > could be tricky to make safe, it would need to do bounds checking at the
> > least and would probably work best with a contiguous array as the target.
> > I could see some sort of low-level function called argassign(target,
> > indirect index, source) that could be used to build more complicated
> > things in python.
>
> If it were only assignment that was needed, fancy indexing could
> already handle it. The problem is that this is something that can't
> *quite* be done with the current fancy indexing infrastructure - every
> time an index comes up we want to add the value to what's there,
> rather than replacing it. I suppose histogram covers one major
> application; in fact if histogram allowed weightings ("count this
> point as -0.6") it would solve the OP's problem.

Sure, just add functions arg_addassign, etc., which means dest[ind[i]] +=
src[i], just as arg_assign would mean dest[ind[i]] = src[i]. If you covered
all the assign variants I think you could do most everything. Upper level
python routines could deal with shaping and such while the lower level
routines dealt with flat, contiguous arrays.

Chuck
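Another archival note: the arg_addassign idea eventually surfaced in NumPy as
the unbuffered ufunc method add.at (available in NumPy 1.8 and later, well
after the 1.0 series discussed here). A sketch of the class-totals example
from this thread:

    import numpy

    total = numpy.zeros(5)
    cls = numpy.array([1, 2, 4, 2, 3, 1])          # class index per sample, repeats allowed
    value = numpy.array([1., 1., 1., 1., 1., 1.])

    # Unbuffered equivalent of: for i in range(len(cls)): total[cls[i]] += value[i]
    numpy.add.at(total, cls, value)
    # total -> array([0., 2., 2., 1., 1.])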
From: A. M. A. <per...@gm...> - 2006-10-09 18:30:09
|
On 09/10/06, Robert Kern <rob...@gm...> wrote: > Daniel Mahler wrote: > > In my case all a, b, c are large with b and c being orders of > > magnitude lareger than a. > > b is known to contain only, but potentially any, a-indexes, reapeated > > many times. > > c contains arbitray floats. > > essentially it is to compute class totals > > as in total[class[i]] += value[i] > > In that case, a slight modification to Greg's suggestion will probably be fastest: If a is even moderately large and you don't care what's left behind in b and c you will probably accelerate the process by sorting b and c together (for cache coherency in a) This seems like a rather common operation - I know I've needed it on at least two occasions - is it worth creating some sort of C implementation? What is the appropriate generalization? A. M. Archibald |