From: Didrik P. <dp...@it...> - 2006-09-09 00:32:35
Hi,

I am running a Debian/Sid system and face a problem when using the GDAL Python bindings. The GDAL Python bindings are linked with Numeric. In one of my applications I'm using some numpy methods. This works fine, BUT when I add the matplotlib library to my application, all the calls to GDAL-specific methods break.

I have attached a basic example. The first test fails if I import the pylab module. The second one, which can be run with any shapefile, shows that when pylab is loaded, some GDAL methods raise GEOS exceptions. Commenting out the "import pylab" line shows that without pylab there are no exceptions and no problems.

There are two workarounds:
[1] a real one: using matplotlib with the Numeric lib
[2] a fake one: after renaming /usr/lib/python2.4/site-packages/numpy/core/multiarray.so to another name, the tests no longer fail.

Does anybody have a suggestion to correct this problem? I can provide more details if needed.

Best regards,
-- Didrik
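A note on workaround [1]: matplotlib of that era selected its array backend through the "numerix" setting, so pylab can be told to load Numeric instead of numpy. A minimal sketch, assuming a matplotlib release that still carries the numerix switch (the rc key and accepted values are recalled from memory, so treat them as assumptions):

    import matplotlib
    matplotlib.rcParams['numerix'] = 'Numeric'  # assumed rc key; forces the Numeric backend
    import pylab  # pylab should now use Numeric's multiarray, avoiding the clash with GDAL

The same choice can be made persistently with a "numerix : Numeric" line in matplotlibrc.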
From: <lis...@ma...> - 2006-09-08 15:29:42
After building and installing numpy from svn on an Intel Mac, I can successfully build f2py modules, but I get the following linking error:

ImportError: Failure linking new module: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/flib.so:
Symbol not found: ___dso_handle
  Referenced from: /Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/PyMC/flib.so
  Expected in: dynamic lookup

I'm using gfortran. Has anyone seen these types of errors before? I do not get them on PPC.

Thanks,
-- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address
From: Darren D. <dd...@co...> - 2006-09-08 15:02:51
This was just reported on mpl-dev:

In [1]: from numpy.oldnumeric import mlab

In [2]: mlab.eye(3)
---------------------------------------------------------------------------
exceptions.NameError                          Traceback (most recent call last)

/home/darren/<ipython console>

/usr/lib64/python2.4/site-packages/numpy/oldnumeric/mlab.py in eye(N, M, k, typecode, dtype)
     22     dtype = convtypecode(typecode, dtype)
     23     if M is None: M = N
---> 24     m = nn.equal(nn.subtract.outer(nn.arange(N), nn.arange(M)),-k)
     25     if m.dtype != dtype:
     26         return m.astype(dtype)

NameError: global name 'nn' is not defined
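The traceback points at mlab.eye() using a name nn that is never bound inside numpy/oldnumeric/mlab.py. A minimal sketch of the likely repair, assuming nn was meant to be the main numpy namespace (the fix actually committed upstream may differ):

    # numpy/oldnumeric/mlab.py -- hypothesized missing binding
    import numpy as nn  # assumption: eye() expects nn to name the numpy package

    # with that import in place, the failing line from the traceback resolves:
    # m = nn.equal(nn.subtract.outer(nn.arange(N), nn.arange(M)), -k)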
From: Francesc A. <fa...@ca...> - 2006-09-08 14:41:41
On Friday 08 September 2006 15:20, lis...@ma... wrote:
> I have built and installed numpy from svn on an Intel mac, and am
> having test failures that do not occur on PPC:
>
> In [8]: numpy.test()
> [test listing and three further identical errors snipped; see the full message below]
>
> ERROR: check_dtype (numpy.tests.test_ctypeslib.test_ndpointer)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/tests/test_ctypeslib.py", line 10, in check_dtype
>     p = ndpointer(dtype=dt)
>   File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/ctypeslib.py", line 15, in _dummy
>     raise ImportError, "ctypes is not available."
> ImportError: ctypes is not available.

This may be because you are using Python 2.4 here, and ctypes comes bundled with Python 2.5. Switch to 2.5, install ctypes separately, or feel free to ignore this.

I suppose a check has to be added to the test suite so that the ctypes tests are skipped when ctypes is not available.

--
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"
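A sketch of the guard Francesc suggests, assuming the test module can simply skip its ctypes-dependent checks (the base class and skip mechanics of that era's test framework are assumptions; this shows the plain try/except form):

    # top of numpy/tests/test_ctypeslib.py -- illustrative sketch, not the actual upstream fix
    try:
        import ctypes
        HAS_CTYPES = True
    except ImportError:
        HAS_CTYPES = False

    from numpy.ctypeslib import ndpointer

    class test_ndpointer(NumpyTestCase):  # base class name assumed
        def check_dtype(self):
            if not HAS_CTYPES:
                return  # skip quietly when ctypes is missing on this Python
            p = ndpointer(dtype='<i4')  # body mirroring the failing test; dtype value hypothetical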
From: <lis...@ma...> - 2006-09-08 13:19:04
I have built and installed numpy from svn on an Intel mac, and am having test failures that do not occur on PPC:

In [8]: numpy.test()
Found 13 tests for numpy.core.umath
Found 8 tests for numpy.lib.arraysetops
Found 3 tests for numpy.fft.helper
Found 1 tests for numpy.lib.ufunclike
Found 4 tests for numpy.ctypeslib
Found 1 tests for numpy.lib.polynomial
Found 8 tests for numpy.core.records
Found 26 tests for numpy.core.numeric
Found 3 tests for numpy.lib.getlimits
Found 31 tests for numpy.core.numerictypes
Found 4 tests for numpy.core.scalarmath
Found 10 tests for numpy.lib.twodim_base
Found 46 tests for numpy.lib.shape_base
Found 4 tests for numpy.lib.index_tricks
Found 32 tests for numpy.linalg.linalg
Found 5 tests for numpy.distutils.misc_util
Found 42 tests for numpy.lib.type_check
Found 163 tests for numpy.core.multiarray
Found 36 tests for numpy.core.ma
Found 10 tests for numpy.core.defmatrix
Found 39 tests for numpy.lib.function_base
Found 0 tests for __main__
.........................EEEE...........................................
........................................................................
........................................................................
........................................................................
........................................................................
........................................................................
.........................................................
======================================================================
ERROR: check_dtype (numpy.tests.test_ctypeslib.test_ndpointer)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/tests/test_ctypeslib.py", line 10, in check_dtype
    p = ndpointer(dtype=dt)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/ctypeslib.py", line 15, in _dummy
    raise ImportError, "ctypes is not available."
ImportError: ctypes is not available.
======================================================================
ERROR: check_flags (numpy.tests.test_ctypeslib.test_ndpointer)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/tests/test_ctypeslib.py", line 54, in check_flags
    p = ndpointer(flags='FORTRAN')
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/ctypeslib.py", line 15, in _dummy
    raise ImportError, "ctypes is not available."
ImportError: ctypes is not available.
======================================================================
ERROR: check_ndim (numpy.tests.test_ctypeslib.test_ndpointer)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/tests/test_ctypeslib.py", line 36, in check_ndim
    p = ndpointer(ndim=0)
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/ctypeslib.py", line 15, in _dummy
    raise ImportError, "ctypes is not available."
ImportError: ctypes is not available.
======================================================================
ERROR: check_shape (numpy.tests.test_ctypeslib.test_ndpointer)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/tests/test_ctypeslib.py", line 46, in check_shape
    p = ndpointer(shape=(1,2))
  File "/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/site-packages/numpy/ctypeslib.py", line 15, in _dummy
    raise ImportError, "ctypes is not available."
ImportError: ctypes is not available.
----------------------------------------------------------------------
Ran 489 tests in 0.528s

FAILED (errors=4)
Out[8]: <unittest.TextTestRunner object at 0x322fe90>

Any clues?

-- Christopher Fonnesbeck + Atlanta, GA + fonnesbeck at mac.com + Contact me on AOL IM using email address
From: Robert K. <rob...@gm...> - 2006-09-08 03:35:48
rex wrote:
> Robert Kern <rob...@gm...> [2006-09-07 16:35]:
>> rex wrote:
>>> Charles R Harris <cha...@gm...> [2006-09-07 15:04]:
>>>> I don't know about count, but you can gin up something like this
>>>>
>>>> In [78]: a = ran.randint(0,2, size=(10,))
>>>>
>>>> In [79]: a
>>>> Out[79]: array([0, 1, 0, 1, 1, 0, 0, 1, 1, 1])
>>>
>>> This exposed inconsistent randint() behavior between SciPy and the Python
>>> random module. The Python randint includes the upper endpoint. The SciPy
>>> version excludes it.
>>
>> numpy.random.random_integers() includes the upper bound, if you like.
>> numpy.random does not try to emulate the standard library's random module.
>
> I'm not in a position to argue the merits, but IMHO, when code that
> previously worked silently starts returning subtly bad results after
> importing numpy, there is a problem. What possible upside is there in
> having randint() behave one way in the random module and silently behave
> differently in numpy?

I don't understand you. Importing numpy does not change the standard library's random module in any way. There is no silent difference in behavior. If you use numpy.random you get one set of behavior. If you use random, you get another. Pick the one you want. They're not interchangeable, and nothing suggests that they ought to be.

> More generally, since numpy.random does not try to emulate the random
> module, how does one convert from code that uses the random module to
> numpy? Is randint() the only silent problem, or are there others? If so,
> how does one discover them? Are they documented anywhere?

The docstrings in that module are complete.

> I deeply appreciate the countless hours the core developers have
> contributed to numpy/scipy, but sometimes I think you are too close to
> the problems to fully appreciate the barriers to widespread adoption such
> silent "gotchas" present. If the code breaks, fine, you know there's a
> problem. When it runs, but returns wrong -- but not obviously wrong --
> results, there's a serious problem that will deter a significant number
> of people from ever trying the product again.
>
> Again, what is the upside of changing the behavior of the standard
> library's randint() without also changing the name?

Again, numpy.random has nothing to do with the standard library module random. The names of the functions match those in the PRNG facilities that used to be in Numeric and scipy which numpy.random is replacing. Specifically, numpy.random.randint() derives its behavior from Numeric's RandomArray.randint().

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco
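To make the two conventions concrete, a short session (a sketch; the output arrays are illustrative, since the values are random):

>>> import numpy as np
>>> np.random.randint(0, 2, size=8)        # excludes 2: values drawn from {0, 1}
array([0, 1, 1, 0, 1, 0, 0, 1])
>>> np.random.random_integers(0, 2, 8)     # includes 2: values drawn from {0, 1, 2}
array([2, 0, 1, 2, 1, 0, 2, 1])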
From: Flavio C. <fcc...@gm...> - 2006-09-08 01:18:49
Hi,

I have a module that uses a Fortran extension which I would like to compile for Windows with f2py. I wonder if I could do this from Linux using xmingw. Has anyone tried this?

thanks,

--
Flávio Codeço Coelho
registered Linux user # 386432
---------------------------
"Laws are like sausages. It's better not to see them being made."
Otto von Bismarck
From: Charles R H. <cha...@gm...> - 2006-09-08 00:42:10
On 9/7/06, Andrew Jaffe <a.h...@gm...> wrote:
>
> Hi Charles,
>
> Charles R Harris wrote:
> > On 9/7/06, Andrew Jaffe <a.h...@gm...> wrote:
> >
> >     Hi all,
> >
> >     It seems that scipy and numpy define rfft differently.
> >     [snip: the original question, quoted in full further down the thread]
> >
> > Yes, I prefer the scipy version because the result is actually a complex
> > array and can immediately be used as the coefficients of the fft for
> > frequencies <= Nyquist.
> > [snip]
>
> Unless I misunderstand, I think you've got it backwards:
>
> - numpy.fft.rfft produces the correct complex array (with no strange
> packing), with frequencies (0, 1, 2, ..., n/2)
>
> - scipy.fftpack.rfft produces a single real array, in the correct
> order, but with frequencies (0, 1, 1, 2, 2, ..., n/2) -- as given by
> scipy.fftpack's rfftfreq function.
>
> So I think you prefer numpy, not scipy.

Ah, well then, yes. I prefer numpy. IIRC, one way to get the scipy ordering is to use the Hartley transform as the front end to the real transform. And, now that you mention it, there was a big discussion about this on the scipy list way back when, with yours truly pushing the complex form. I don't recall that anything was settled; rather, the natural output of the algorithm they were using prevailed by default. Maybe you could write a front end that does the right thing?

Chuck
From: rex <re...@no...> - 2006-09-08 00:14:28
Robert Kern <rob...@gm...> [2006-09-07 16:35]:
> rex wrote:
> > Charles R Harris <cha...@gm...> [2006-09-07 15:04]:
> >> I don't know about count, but you can gin up something like this
> >>
> >> In [78]: a = ran.randint(0,2, size=(10,))
> >>
> >> In [79]: a
> >> Out[79]: array([0, 1, 0, 1, 1, 0, 0, 1, 1, 1])
> >
> > This exposed inconsistent randint() behavior between SciPy and the Python
> > random module. The Python randint includes the upper endpoint. The SciPy
> > version excludes it.
>
> numpy.random.random_integers() includes the upper bound, if you like.
> numpy.random does not try to emulate the standard library's random module.

I'm not in a position to argue the merits, but IMHO, when code that previously worked silently starts returning subtly bad results after importing numpy, there is a problem. What possible upside is there in having randint() behave one way in the random module and silently behave differently in numpy?

More generally, since numpy.random does not try to emulate the random module, how does one convert code that uses the random module to numpy? Is randint() the only silent problem, or are there others? If so, how does one discover them? Are they documented anywhere?

I deeply appreciate the countless hours the core developers have contributed to numpy/scipy, but sometimes I think you are too close to the problems to fully appreciate the barriers to widespread adoption such silent "gotchas" present. If the code breaks, fine, you know there's a problem. When it runs, but returns wrong -- but not obviously wrong -- results, there's a serious problem that will deter a significant number of people from ever trying the product again.

Again, what is the upside of changing the behavior of the standard library's randint() without also changing the name?

-rex
From: Andrew J. <a.h...@gm...> - 2006-09-08 00:04:58
Hi Charles,

Charles R Harris wrote:
> On 9/7/06, Andrew Jaffe <a.h...@gm...> wrote:
>
>     Hi all,
>
>     It seems that scipy and numpy define rfft differently.
>     [snip: original message, quoted in full below]
>
> Yes, I prefer the scipy version because the result is actually a complex
> array and can immediately be used as the coefficients of the fft for
> frequencies <= Nyquist. I suspect, without checking, that what you get
> in numpy is a real array with f[0] == zero frequency, f[1] + 1j*f[2] as
> the coefficient of the second frequency, etc. This makes it difficult to
> multiply by a complex transfer function or phase shift the result to
> rotate the original points by some fractional amount.
>
> As to unpacking, for some algorithms the two real coefficients are
> packed into the real and complex parts of the zero frequency so all that
> is needed is an extra complex slot at the end. Other algorithms produce
> what you describe. I just think it is more convenient to think of the
> real fft as an efficient complex fft that only computes the coefficients
> <= Nyquist because Hermitian symmetry determines the rest.

Unless I misunderstand, I think you've got it backwards:

- numpy.fft.rfft produces the correct complex array (with no strange packing), with frequencies (0, 1, 2, ..., n/2)

- scipy.fftpack.rfft produces a single real array, in the correct order, but with frequencies (0, 1, 1, 2, 2, ..., n/2) -- as given by scipy.fftpack's rfftfreq function.

So I think you prefer numpy, not scipy.

This is complicated by the fact that (I think) numpy.fft shows up as scipy.fft, so functions with the same name in scipy.fft and scipy.fftpack aren't actually the same!

Andrew
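The shape difference Andrew describes is easy to see in a quick session (a sketch against numpy/scipy of that vintage):

>>> import numpy as np
>>> x = np.arange(8.0)
>>> np.fft.rfft(x).shape         # n/2 + 1 complex coefficients for n = 8
(5,)
>>> np.fft.rfft(x).dtype
dtype('complex128')
>>> import scipy.fftpack
>>> scipy.fftpack.rfft(x).shape  # n real numbers, re/im interleaved
(8,)
>>> scipy.fftpack.rfft(x).dtype
dtype('float64')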
From: A. M. A. <per...@gm...> - 2006-09-07 23:49:10
Maybe I should stay out of this, but it seems like constructing object arrays is complicated and involves a certain amount of guesswork on the part of Numeric. For example, if you write array([a,b,c]), the shape is normally (3,) -- unless a, b, and c happen to all be lists of the same length, at which point your array could have a much more complicated shape. As the person who wrote "array([a,b,c])", it's tempting to assume that the result has shape (3,), only to discover subtle bugs much later.

If we were writing an array-creation function from scratch, would there be any reason to include object-array creation in the same function as uniform array creation? It seems like a bad idea to me.

If not, the problem is just compatibility with Numeric. Why not simply write a wrapper function in Python that does Numeric-style guesswork, and put it in the compatibility modules? How much code will actually break?

A. M. Archibald
From: Jeff S. <js...@en...> - 2006-09-07 23:19:02
Good afternoon,

Unfortunately, our recent change in internet service providers is not working out. We will be switching to a more reliable provider on Tuesday 9/12 at 7:00 PM Central. Please allow for up to two hours of downtime. I will send an email announcing the start and completion of this maintenance. This will affect all Enthought servers as well as the SciPy server, which hosts many Open Source projects.

The problem was that our in-building internet service provider neglected to mention that our internet connection was over a wireless link to a different building. This link had very high latency. They have fixed this problem, but we feel that wireless is not stable enough for our needs. In order to provide higher quality service, we will be using 3 bonded T1s from AT&T after this switchover.

Please pass this message along to people that I have missed. If you have any questions, please direct them to me.

We apologize for the inconvenience.

Jeff Strunk
Enthought, Inc.
From: Charles R H. <cha...@gm...> - 2006-09-07 23:04:00
On 9/7/06, Andrew Jaffe <a.h...@gm...> wrote:
>
> Hi all,
>
> It seems that scipy and numpy define rfft differently.
>
> numpy returns n/2+1 complex numbers (so the first and last numbers are
> actually real) with the frequencies equivalent to the positive part of
> the fftfreq, whereas scipy returns n real numbers with the frequencies
> as in rfftfreq (i.e., two real numbers at the same frequency, except for
> the highest and lowest). [All of the above for even n; but the difference
> between numpy and scipy remains for odd n.]
>
> I think the numpy behavior makes more sense, as it doesn't require any
> unpacking after the fact, at the expense of a tiny amount of wasted
> space. But would this in fact require scipy doing extra work from
> whatever the 'native' real_fft (fftw, I assume) produces?
>
> Anyone else have an opinion?

Yes, I prefer the scipy version because the result is actually a complex array and can immediately be used as the coefficients of the fft for frequencies <= Nyquist. I suspect, without checking, that what you get in numpy is a real array with f[0] == zero frequency, f[1] + 1j*f[2] as the coefficient of the second frequency, etc. This makes it difficult to multiply by a complex transfer function or phase shift the result to rotate the original points by some fractional amount.

As to unpacking, for some algorithms the two real coefficients are packed into the real and complex parts of the zero frequency, so all that is needed is an extra complex slot at the end. Other algorithms produce what you describe. I just think it is more convenient to think of the real fft as an efficient complex fft that only computes the coefficients <= Nyquist, because Hermitian symmetry determines the rest.

Chuck
From: Robert K. <rob...@gm...> - 2006-09-07 22:57:23
rex wrote:
> Charles R Harris <cha...@gm...> [2006-09-07 15:04]:
>> I don't know about count, but you can gin up something like this
>>
>> In [78]: a = ran.randint(0,2, size=(10,))
>>
>> In [79]: a
>> Out[79]: array([0, 1, 0, 1, 1, 0, 0, 1, 1, 1])
>
> This exposed inconsistent randint() behavior between SciPy and the Python
> random module. The Python randint includes the upper endpoint. The SciPy
> version excludes it.

numpy.random.random_integers() includes the upper bound, if you like. numpy.random does not try to emulate the standard library's random module.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco
From: Charles R H. <cha...@gm...> - 2006-09-07 22:48:54
On 9/7/06, Travis Oliphant <oli...@ee...> wrote:
>
> Charles R Harris wrote:
> >
> > So is this intentional?
> >
> > In [24]: a = array([[],[],[]], dtype=object)
> >
> > In [25]: a.shape
> > Out[25]: (3, 0)
> >
> > In [26]: a = array([], dtype=object)
> >
> > In [27]: a.shape
> > Out[27]: (0,)
> >
> > One could argue that the first array should have shape (3,)
>
> Yes, it's intentional because it's the old behavior of Numeric. And it
> follows the rule that object arrays don't do anything special unless the
> old technique of using [] as 'dimension delimiters' breaks down.
>
> [snip: second example, quoted in full below]
>
> The rule is that array needs nested lists with the same number of
> dimensions unless you have object arrays. Then, the dimensionality will
> be determined by finding the largest number of dimensions possible for
> consistency of shape.

So there is a 'None' trick:

In [93]: a = array([[[2]], None], dtype=object)

In [94]: a[0]
Out[94]: [[2]]

I wonder if it wouldn't be useful to have a 'depth' keyword, as sketched below. Thus depth=None would be the current behavior, but array([], depth=0) would produce a zero-dimensional array containing an empty list. (Although I notice from playing with dictionaries that a zero-dimensional array containing a dictionary isn't very useful.) Likewise, array([[],[]], depth=1) would produce a one-dimensional array containing two empty lists, etc. I can see it is difficult to get something truly general with the current syntax without a little bit of extra information.

Another question: what property must an object possess to be a container-type argument in array? There are sequence-type objects and array-type objects. Are there more, or is everything else treated as an object?

Chuck
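A sketch of what the proposed semantics might look like (the depth keyword is purely hypothetical here; it was never part of numpy's array()):

    # hypothetical depth= semantics, illustrating the proposal only
    a = array([], depth=0)             # 0-d array whose single element is the empty list
    b = array([[], []], depth=1)       # shape (2,); each element is an empty list
    c = array([[1], [2, 3]], depth=1)  # shape (2,); elements are the lists [1] and [2, 3]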
From: Andrew J. <a.h...@gm...> - 2006-09-07 22:46:40
Hi all,

It seems that scipy and numpy define rfft differently.

numpy returns n/2+1 complex numbers (so the first and last numbers are actually real) with the frequencies equivalent to the positive part of the fftfreq, whereas scipy returns n real numbers with the frequencies as in rfftfreq (i.e., two real numbers at the same frequency, except for the highest and lowest). [All of the above for even n; but the difference between numpy and scipy remains for odd n.]

I think the numpy behavior makes more sense, as it doesn't require any unpacking after the fact, at the expense of a tiny amount of wasted space. But would this in fact require scipy doing extra work from whatever the 'native' real_fft (fftw, I assume) produces?

Anyone else have an opinion?

Andrew
From: rex <re...@no...> - 2006-09-07 22:33:46
Charles R Harris <cha...@gm...> [2006-09-07 15:04]:
> I don't know about count, but you can gin up something like this
>
> In [78]: a = ran.randint(0,2, size=(10,))
>
> In [79]: a
> Out[79]: array([0, 1, 0, 1, 1, 0, 0, 1, 1, 1])

This exposed inconsistent randint() behavior between SciPy and the Python random module. The Python randint includes the upper endpoint. The SciPy version excludes it.

>>> from random import randint
>>> for i in range(50):
...     print randint(0,2),
...
0 1 1 1 1 1 0 0 2 1 1 0 2 2 1 2 0 2 0 0 0 2 2 2 2 2 2 2 1 2 2 0 0 1 2 2 0 1 1 0 2 0 1 2 1 2 2 2 1 1

>>> from scipy import *
>>> print random.randint(0,2, size=(100,))
[0 1 1 1 1 0 1 1 0 1 0 0 0 0 0 1 0 1 1 0 1 0 1 0 0 0 1 0 0 0 1 1 1 1 0 1
 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 0 0 1 0 1 1 1 1 1 1 1 0 1 1 0 0 1 0 1
 1 0 0 0 0 1 1 1 1 1 0 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 0 0]

-rex
From: Travis O. <oli...@ee...> - 2006-09-07 22:15:45
Charles R Harris wrote:
>
> So is this intentional?
>
> In [24]: a = array([[],[],[]], dtype=object)
>
> In [25]: a.shape
> Out[25]: (3, 0)
>
> In [26]: a = array([], dtype=object)
>
> In [27]: a.shape
> Out[27]: (0,)
>
> One could argue that the first array should have shape (3,)

Yes, it's intentional because it's the old behavior of Numeric. And it follows the rule that object arrays don't do anything special unless the old technique of using [] as 'dimension delimiters' breaks down.

> And this doesn't look quite right:
>
> In [38]: a = array([[1],[2],[3]], dtype=object)
>
> In [39]: a.shape
> Out[39]: (3, 1)
>
> In [40]: a = array([[1],[2,3],[4,5]], dtype=object)
>
> In [41]: a.shape
> Out[41]: (3,)

Again, same reason as before. The first example works fine to construct a rectangular array of object arrays of dimension 2. The second only does if we limit the number of dimensions to 1.

The rule is that array needs nested lists with the same number of dimensions unless you have object arrays. Then, the dimensionality will be determined by finding the largest number of dimensions possible for consistency of shape.

-Travis
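The rule in action, in a short session (a sketch against numpy of that era; later releases changed how ragged inputs are handled):

>>> from numpy import array
>>> array([[1, 2], [3, 4]], dtype=object).shape   # consistent nesting: 2-d
(2, 2)
>>> array([[1, 2], [3]], dtype=object).shape      # ragged: only 1-d is consistent
(2,)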
From: Mike R. <mik...@gm...> - 2006-09-07 21:22:48
On 9/7/06, Glen W. Mabey <Gle...@sw...> wrote:
>
> A long time ago, Travis wrote:
> >
> > My understanding is that using memory-mapped files for *very* large
> > files will require modification to the mmap module in Python ---
>
> Did anyone ever "pick up the ball" on this issue?

This works with python-2.5 betas and above, and numpy 1.0 betas and above. I frequently do 10+ GB mmaps.

Mike

--
mik...@al...
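For array-style access on top of such a mapping, numpy's memmap class wraps the mmap module directly. A minimal sketch (the file name and shape are made-up stand-ins for a 10 GB file, and a numpy 1.0-era memmap is assumed):

    import numpy as np

    # Map a large binary file of float32 values without reading it into RAM;
    # 'big.dat' and the shape (5000 x 500000 x 4 bytes = 10 GB) are hypothetical.
    data = np.memmap('big.dat', dtype=np.float32, mode='r', shape=(5000, 500000))
    print data[1234, :10].mean()  # only the pages actually touched are read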
From: Martin S. <ms...@mm...> - 2006-09-07 21:21:17
Great! That's exactly what I wanted. Works with floats too.

Thanks,

Martin

Robert Kern wrote:
> Mostly, it's simply easy enough to implement yourself. Not all one-liners should
> be methods on the array object.
>
> (a == value).sum()
From: Sasha <nd...@ma...> - 2006-09-07 20:48:11
On 9/7/06, Martin Spacek <sc...@ms...> wrote:
> What's the most straightforward way to count, say, the number of 1s or
> Trues in the array? Or the number of any integer?
>
> I was surprised to discover recently that there isn't a count() method
> as there is for Python lists. Sorry if this has been discussed already,
> but I'm wondering if there's a reason for its absence.

You don't really need count with ndarrays:

>>> from numpy import *
>>> a = array([1,2,3,1,2,3,1,2])
>>> (a==3).sum()
2
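When the counts of every integer value are wanted at once, bincount is a convenient complement to the comparison-and-sum idiom (assuming nonnegative integer input; bincount is recalled as present in numpy of this era):

>>> from numpy import *
>>> a = array([1,2,3,1,2,3,1,2])
>>> bincount(a)   # counts of the values 0, 1, 2, and 3, respectively
array([0, 3, 3, 2])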
From: Sebastian H. <ha...@ms...> - 2006-09-07 20:45:59
Hi Glen!

How old is that quote, really!? The new Python 2.5 *is* implementing the needed changes -- so go ahead, install Python 2.5 (rc1 is the latest, I think) and report how it works. I would also be very interested to hear ;-)

-Sebastian Haase

On Thursday 07 September 2006 12:34, Glen W. Mabey wrote:
> A long time ago, Travis wrote:
> > On a related, but orthogonal note:
> >
> > My understanding is that using memory-mapped files for *very* large
> > files will require modification to the mmap module in Python ---
> > something I think we should push. One part of that process would be
> > to add the C-struct array interface to the mmap module and the buffer
> > object -- perhaps this is how we get the array interface into Python
> > quickly. Then, if we could make a base-type mmap that did not use
> > the buffer interface or the sequence interface (similar to the
> > bigndarray in scipy_core) and therefore by-passed the problems with
> > Python in those areas, then the current mmap object could inherit from
> > the base class and provide current functionality while still exposing
> > the array interface for access to >2GB files on 64-bit systems.
> >
> > Who would like to take up the ball for modifying mmap in Python in
> > this fashion?
> >
> > -Travis
>
> Did anyone ever "pick up the ball" on this issue?
>
> Glen