From: Bill B. <wb...@gm...> - 2006-08-08 03:18:20
|
I see Pyrex and SWIG examples in numpy/doc, but there doesn't seem to be an example of a simple, straightforward use of the C-API. For instance, making a few arrays by hand in C and then calling numpy.multiply() on them. So far my attempts to call PyArray_SimpleNewFromData all result in segfaults. Does anyone have such an example? --Bill |
From: Robert K. <rob...@gm...> - 2006-08-07 18:05:02
|
Christian Meesters wrote:
> Hi,
>
> I used to work with some unittest scripts for a bigger project of mine. Now that I started the project again the tests don't work anymore, using numpy version '0.9.5.2100'. The errors I get look like this:
>
> ERROR: _normalize() should return dataset scaled between 0 and 1
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "testingSAXS.py", line 265, in testNormalization
>     self.assertEqual(self.test1._normalize(minimum=0.0,maximum=1.0),self.test5)
>   File "/usr/lib64/python2.4/unittest.py", line 332, in failUnlessEqual
>     if not first == second:
>   File "/home/cm/Documents/Informatics/Python/python_programming/biophysics/SAXS/lib/Data.py", line 174, in __eq__
>     if self.intensity == other.intensity:
> ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
>
> The 'self.intensity' objects are 1D-arrays containing integers <= 1E6.
> <snip>
> Any ideas what I have to change? (Possibly trivial, but I have no clue.)

self.assert_((self.test1 == self.test2).all())

I'm afraid that your test was always broken. Numeric used the convention that if *any* value in a boolean array was True, then the array would evaluate to True when used as a truth value in an if clause. However, you almost certainly wanted to test that *all* of the values were True. This is why we now raise an exception; lots of people got tripped up over that.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
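A minimal sketch of the idiom Robert describes, against a current numpy (the array names here are invented for illustration):

    import numpy as np

    a = np.array([1, 2, 3])
    b = np.array([1, 2, 3])

    # An elementwise comparison yields a boolean array, not a single bool:
    eq = (a == b)               # array([ True,  True,  True])

    # Collapse it explicitly before using it as a truth value:
    assert eq.all()             # True only if every element matches
    if (a == b).any():          # True if at least one element matches
        print("some elements match")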
From: Christian M. <mee...@un...> - 2006-08-07 17:28:32
|
Hi, I used to work with some unittest scripts for a bigger project of mine. Now that I started the project again the tests don't work anymore, using numpy version '0.9.5.2100'. The errors I get look like this:

ERROR: _normalize() should return dataset scaled between 0 and 1
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testingSAXS.py", line 265, in testNormalization
    self.assertEqual(self.test1._normalize(minimum=0.0,maximum=1.0),self.test5)
  File "/usr/lib64/python2.4/unittest.py", line 332, in failUnlessEqual
    if not first == second:
  File "/home/cm/Documents/Informatics/Python/python_programming/biophysics/SAXS/lib/Data.py", line 174, in __eq__
    if self.intensity == other.intensity:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

The 'self.intensity' objects are 1D-arrays containing integers <= 1E6. The unittest script looks like:

if __name__=='__main__':
    from Data import *
    from Utils import *
    import unittest
    <snip>
    def test__eq__(self):
        """__eq__ should return True with identical array data"""
        self.assert_(self.test1 == self.test2)
    <snip>
    suite = unittest.TestSuite()
    suite.addTest(unittest.makeSuite(Test_SAXS_Sanity))
    <snip>
    unittest.TextTestRunner(verbosity=1).run(suite)

Any ideas what I have to change? (Possibly trivial, but I have no clue.) TIA Cheers Christian |
From: David H. <dav...@gm...> - 2006-08-07 12:48:58
|
Mikolai wrote:
> I have noticed some differences between the 1d histogram and the 2d histogram. The histogram function bins everything between the elements of edges, and then includes everything greater than the last edge element in the last bin. The histogram2d function only bins in the range specified by edges. Is there a reason these two functions do not operate in the same way?

Hi Mikolai,

The reason is that I didn't like the way histogram handled outliers, so I wrote histogram1d, histogram2d, and histogramdd to handle 1d, 2d and nd data series. I submitted those functions and only histogram2d got included in numpy, hence the clash. Travis suggested that histogram1d and histogramdd could go into scipy, but with the new compatibility paradigm, I suggest that the old histogram is moved into the compatibility module and histogram1d is renamed to histogram and put into the main namespace. histogramdd could indeed go into scipy.stats. I'll submit a new patch if there is some interest.

The new function takes an axis argument, so you can make a histogram out of an nd array rowwise or columnwise. Outliers are not counted, and the bin array has length (nbin + 1) (+1 for the right-hand edge). The new function will break some code relying on the old behavior, so its inclusion presupposes the agreement of the users. You can find the code at ticket 189 <http://projects.scipy.org/scipy/numpy/ticket/189>.

David |
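The convention clash David describes can be shown in miniature with plain numpy. A sketch against today's API, where np.histogram follows the range-limited convention; the data values are invented for illustration:

    import numpy as np

    data = np.array([0.5, 1.5, 2.5, 9.0])    # 9.0 lies beyond the last edge
    edges = np.array([0.0, 1.0, 2.0, 3.0])

    # Range-limited convention (what histogram2d did): outliers are dropped.
    counts_clip, _ = np.histogram(data, bins=edges)    # -> [1, 1, 1]; 9.0 ignored

    # Old-histogram convention: everything past the last edge lands in the last bin.
    counts_lump = counts_clip.copy()
    counts_lump[-1] += np.sum(data >= edges[-1])       # -> [1, 1, 2]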
From: Hanno K. <kl...@ph...> - 2006-08-07 08:53:05
|
Hello, I try to compile numpy-1.0b1 with blas and lapack support. I have compiled blas and lapack according to the instructions in http://pong.tamu.edu/tiki/tiki-view_blog_post.php?blogId=6&postId=97 . I copied the libraries to /scratch/python2.4/lib and set the environment variables accordingly. Running "python setup.py config" in the numpy directory then finds the libraries. If I then run "python setup.py build", the compilation dies with the error message:

...
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x28ae): In function `dotblas_vdot':
numpy/core/blasdot/_dotblas.c:971: undefined reference to `PyArg_ParseTuple'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b45):numpy/core/blasdot/_dotblas.c:1002: undefined reference to `PyTuple_New'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b59):numpy/core/blasdot/_dotblas.c:83: undefined reference to `PyArg_ParseTuple'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2b6d):numpy/core/blasdot/_dotblas.c:107: undefined reference to `_Py_NoneStruct'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2cba):numpy/core/blasdot/_dotblas.c:1021: undefined reference to `PyExc_ValueError'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2cc9):numpy/core/blasdot/_dotblas.c:1021: undefined reference to `PyErr_SetString'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d1c):numpy/core/blasdot/_dotblas.c:1029: undefined reference to `PyEval_SaveThread'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d3f):numpy/core/blasdot/_dotblas.c:1049: undefined reference to `PyEval_RestoreThread'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d63):numpy/core/blasdot/_dotblas.c:1045: undefined reference to `cblas_cdotc_sub'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2d84):numpy/core/blasdot/_dotblas.c:1041: undefined reference to `cblas_zdotc_sub'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2da1):numpy/core/blasdot/_dotblas.c:1037: undefined reference to `cblas_sdot'
build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o(.text+0x2dc6):numpy/core/blasdot/_dotblas.c:1033: undefined reference to `cblas_ddot'
/usr/lib/gcc-lib/x86_64-redhat-linux/3.2.3/libfrtbegin.a(frtbegin.o)(.text+0x22): In function `main':
: undefined reference to `MAIN__'
collect2: ld returned 1 exit status
error: Command "/usr/bin/g77 -L/scratch/apps/lib build/temp.linux-x86_64-2.4/numpy/core/blasdot/_dotblas.o -L/scratch/python2.4/lib -lfblas -lg2c -o build/lib.linux-x86_64-2.4/numpy/core/_dotblas.so" failed with exit status 1

I try this on a dual-processor Xeon machine with gcc 3.2.3 under an old Red Hat distribution, so using the libraries delivered with the distro doesn't work as they are broken. At first I tried to compile numpy with atlas support, but I got similar problems. I have attached the full output of the failed build. I would be very grateful if somebody with a little more experience with compilers could have a look at it and maybe point me in the right direction. Many thanks in advance, Hanno -- Hanno Klemm kl...@ph... |
From: Bill B. <wb...@gm...> - 2006-08-07 05:02:09
|
On 8/1/06, Travis Oliphant <oli...@ie...> wrote:
> Bill Baxter wrote:
> > When you have a chance, could the powers that be make some comment on the r_ and c_ situation?
>
> r_ and c_ were in SciPy and have been there for several years. For NumPy, c_ has been deprecated (but not removed, because it is used in SciPy). The functionality of c_ is in r_ so it doesn't add anything.

I don't see how r_ offers the ability to stack columns like this:

>>> c_[ [[0],[1],[2]], [[4],[5],[6]] ]
array([[0, 4],
       [1, 5],
       [2, 6]])

> There is going to be overlap with long-name functions because of this. I have not had time to review Bill's suggestions yet --- were they filed as a ticket? A ticket is the best way to keep track of issues at this point.

I just filed it as #235, but then I noticed I had already filed it previously as #201. Sorry about that. Anyway, it's definitely in there now.

Regards, --Bill |
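For comparison, the same column stacking can be spelled several ways in a current numpy (a sketch; all three lines below produce the same 3x2 array):

    import numpy as np

    a = [[0], [1], [2]]
    b = [[4], [5], [6]]

    np.c_[a, b]                                 # array([[0, 4], [1, 5], [2, 6]])
    np.r_['-1', a, b]                           # r_ with an axis directive string
    np.column_stack(([0, 1, 2], [4, 5, 6]))     # same result from flat sequences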
From: Sven S. <sve...@gm...> - 2006-08-06 19:03:46
|
Charles R Harris schrieb:
> Hi Sven,
>
> On 7/28/06, Sven Schreiber <sve...@gm...> wrote:
> > Here's my attempt at summarizing the diag-discussion.
> > <snip>
> > 2) Deprecate the use of diag, which is overloaded with making diagonal matrices as well as getting diagonals. Instead, use the existing .diagonal() for getting a diagonal, and introduce a new make_diag() function which could easily work for numpy-arrays and numpy-matrices alike.
>
> This would be my preference, but with functions {get,put}diag. We could also add a method or function asdiag, which would always return a diagonal matrix made from *all* the elements of the matrix taken in order. For (1,n) or (n,1) this would do what you want. For other matrices the result would be something new and probably useless, but at least it wouldn't hurt.

This seems to have been implemented now by the new diagflat() function. So matrix users can now use m.diagonal() for the matrix->vector direction of diag(), and diagflat(v) for the vector->matrix side of diag(), and always get numpy-matrix output for numpy-matrix input. Thanks a lot for making this possible!

One (really minor) comment: "diagflat" as a name is not optimal imho. Are other suggestions welcome, or is there a compelling reason for this name?

Thanks, sven |
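A quick sketch of the round trip Sven describes, assuming a numpy new enough to ship diagflat():

    import numpy as np

    v = np.matrix([[1, 2, 3]])     # a (1, n) numpy-matrix
    m = np.diagflat(v)             # matrix([[1, 0, 0], [0, 2, 0], [0, 0, 3]])
    d = m.diagonal()               # matrix([[1, 2, 3]]), back to a row
    # Matrix in, matrix out, in both directions.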
From: Robert K. <rob...@gm...> - 2006-08-06 08:18:36
|
David Grant wrote:
> The following lines of code:
>
> from numpy import floor
> div, mod = divmod(floor(1.5), 12)
>
> generate an exception:
>
> ValueError: need more than 0 values to unpack
>
> in numpy-0.9.8. Does anyone else see this? It might be due to the fact that floor returns a float64scalar. Should I be forced to cast that to an int before calling divmod with it?

I don't see an exception with a more recent numpy (r2881, to be precise). Please try a later version.

-- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
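For the record, the quoted snippet runs cleanly on any reasonably recent numpy; a minimal check, with the expected values worked out by hand:

    import numpy as np

    div, mod = divmod(np.floor(1.5), 12)    # works on a numpy float64 scalar
    assert (div, mod) == (0.0, 1.0)         # both come back as float64, not int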
From: David G. <dav...@gm...> - 2006-08-06 08:15:02
|
The following lines of code:

from numpy import floor
div, mod = divmod(floor(1.5), 12)

generate an exception:

ValueError: need more than 0 values to unpack

in numpy-0.9.8. Does anyone else see this? It might be due to the fact that floor returns a float64scalar. Should I be forced to cast that to an int before calling divmod with it?

-- David Grant http://www.davidgrant.ca |
From: Robert K. <rob...@gm...> - 2006-08-06 07:55:10
|
David Grant wrote: > What about the documentation that already exists here: http://www.tramy.us/ Essentially every function and class needs a docstring whether or not there is a manual available. Neither one invalidates the need for the other. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: David G. <dav...@gm...> - 2006-08-06 03:45:54
|
What about the documentation that already exists here: http://www.tramy.us/ ? I think the more people that buy it the better, since that money goes to support Travis, does it not?

Dave

On 8/5/06, Albert Strasheim <fu...@gm...> wrote:
> Hello all
>
> With NumPy 1.0 mere weeks away, I'm hoping we can improve the documentation a bit before the final release. Some things we might want to think about:
> <snip -- Albert's full message appears below>

-- David Grant http://www.davidgrant.ca |
From: Gary R. <gr...@bi...> - 2006-08-06 02:28:33
|
All excellent suggestions, Albert. What about creating a numpy version of either the main Numeric or numarray document?

I would like to see examples included in numpy for all functions. However, I think a better way to do this would be to place all examples in a separate module and create a function such as example(), which would then allow something like example(arange) to spit out the example code. This would make it easier to include multiple examples for each command and to actually execute the example code, which I think is a necessary ability to make the examples testable. Examples could go in like doctests, with some sort of delimiting so that they can have numbers generated and be referred to; you could then execute, say, the 3rd example for the arange() function. Perhaps a runexample() function should be created for this, or perhaps the example() function could take arguments like example(name, number, run); a sketch follows this message.

The Maxima CAS package has something like this and also has an apropos() command which lists commands with similar-sounding names to its argument. We could implement something similar but better by searching the examples module for similar commands, but also listing "See Also" cross references like those in the Numpy_Example_List.

Gary R.

Albert Strasheim wrote:
> Hello all
>
> With NumPy 1.0 mere weeks away, I'm hoping we can improve the documentation a bit before the final release.
> <snip -- Albert's full message appears below> |
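A rough sketch of the example() lookup Gary proposes; the registry layout and the run flag are invented for illustration, not an existing numpy API:

    # Hypothetical examples module: maps a function name to numbered snippets.
    _EXAMPLES = {
        'arange': [
            "import numpy; print(numpy.arange(5))",
            "import numpy; print(numpy.arange(0.0, 1.0, 0.25))",
        ],
    }

    def example(func, number=1, run=False):
        """Print (and optionally execute) example `number` for `func`."""
        name = getattr(func, '__name__', func)    # accept a function or a string
        snippet = _EXAMPLES[name][number - 1]
        print(snippet)
        if run:
            exec(snippet, {})                     # run it in a clean namespace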
From: Albert S. <fu...@gm...> - 2006-08-05 22:11:25
|
Hello all

With NumPy 1.0 mere weeks away, I'm hoping we can improve the documentation a bit before the final release. Some things we might want to think about:

1. Documentation Sprint

This page: http://www.scipy.org/SciPy2006/CodingSprints mentions a possible Documentation Sprint at SciPy 2006. Does anybody know if this is going to happen?

2. Tickets for functions missing docstrings

Would it be helpful to create tickets for functions that currently don't have docstrings? If not, is there a better way we can keep track of the state of the documentation?

3. Examples in documentation

Do we want to include examples in the docstrings? Some functions already do, and I think this can be quite useful when one is exploring the library. Maybe the example list at http://www.scipy.org/Numpy_Example_List should be incorporated into the docstrings? Then we can also set up doctests to make sure that all the examples really work (a minimal sketch follows this message).

4. Documentation format

If someone wants to submit documentation to be included, say as patches attached to tickets, what kind of format do we want? There are already various PEPs dealing with this topic:

Docstring Processing System Framework: http://www.python.org/dev/peps/pep-0256/
Docstring Conventions: http://www.python.org/dev/peps/pep-0257/
Docutils Design Specification: http://www.python.org/dev/peps/pep-0258/
reStructuredText Docstring Format: http://www.python.org/dev/peps/pep-0287/

5. Documentation tools

A quick search turned up docutils (http://docutils.sourceforge.net/) and epydoc (http://epydoc.sourceforge.net/). Both of these support reStructuredText, so that looks like the way to go. I think epydoc can handle LaTeX equations, and some LaTeX support has also been added to docutils recently. This might be useful for describing some functions.

Something else to consider is pydoc compatibility. NumPy currently breaks pydoc: http://projects.scipy.org/scipy/numpy/ticket/232. It also breaks epydoc 3.0a2 (maybe an epydoc bug): http://sourceforge.net/tracker/index.php?func=detail&aid=1535178&group_id=32455&atid=405618

Anything else? How should we proceed to improve NumPy's documentation?

Regards, Albert |
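The doctest idea in point 3 needs almost no machinery; a minimal sketch (the function and its docstring are made up for illustration):

    import doctest

    def double(x):
        """Return twice x.

        >>> double(21)
        42
        """
        return 2 * x

    if __name__ == '__main__':
        doctest.testmod()    # runs every >>> example in this module's docstrings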
From: Travis O. <oli...@ie...> - 2006-08-05 07:59:35
|
I've finished the backward-compatibility updates for Numeric. SciPy passes all tests. Please report any outstanding issues you may encounter. It would be nice to remove the dependency on oldnumeric from SciPy entirely. -Travis |
From: Larry W. <We...@ya...> - 2006-08-05 02:45:27
|
I receive an error message when trying to import scipy:

import scipy
  File "C:\Python24\Lib\site-packages\scipy\__init__.py", line 32, in -toplevel-
    from numpy import oldnumeric
ImportError: cannot import name oldnumeric

Numpy is installed. How do I correct this problem? Larry W |
From: Charles R H. <cha...@gm...> - 2006-08-04 23:34:10
|
Hi Travis,

I wonder if it is possible to adapt these modules so they can flag all the incompatibilities, maybe with a note on the fix; a sketch of the idea follows this message. This would be a useful tool for those having to port code. It might not be the easiest route to go, but at least there is a partial list of the functions involved.

Chuck

On 8/4/06, Travis Oliphant <oli...@ie...> wrote:
> For backward-compatibility with Numeric and Numarray I'm leaning to the following plan:
> <snip -- Travis's full message appears below> |
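A toy sketch of the flagging tool Chuck has in mind; the rename table is an illustrative fragment, not the real Numeric-to-NumPy mapping:

    import re

    # Fragment of an old-name -> advice table (illustrative, not exhaustive).
    RENAMES = {
        'typecode': "use the dtype attribute/argument instead",
        'NewAxis': "use numpy.newaxis",
    }

    def flag_incompatibilities(path):
        """Print file:line notes for old Numeric spellings found in a source file."""
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                for old, advice in RENAMES.items():
                    if re.search(r'\b%s\b' % old, line):
                        print("%s:%d: found '%s', %s" % (path, lineno, old, advice))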
From: Sebastian H. <ha...@ms...> - 2006-08-04 22:35:53
|
Hi,

>>> a=N.random.poisson(N.arange(1e6)+1)
>>> U.timeIt('a**2')
0.59
>>> U.timeIt('a*a')
0.01
>>> a.dtype
int32

My U.timeIt function just returns the difference of time in seconds before and after evaluation of the string. For

>>> c=N.random.normal(1000, 100, 1e6)
>>> c.dtype
float64

I get .014 seconds for either c*c or c**2 (averaged over 100 runs). After converting this to float32 I get 0.008 secs for both. Can the int32 case be sped up the same way?

Thanks, Sebastian Haase |
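The same measurement with the standard library's timeit, for anyone without the poster's U.timeIt helper (a sketch; absolute timings will vary by machine and numpy version):

    import timeit

    setup = "import numpy as np; a = np.random.poisson(np.arange(10**6) + 1)"
    print(timeit.timeit('a**2', setup=setup, number=100))    # integer power path
    print(timeit.timeit('a*a', setup=setup, number=100))     # plain multiply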
From: Travis O. <oli...@ie...> - 2006-08-04 19:07:25
|
For backward compatibility with Numeric and Numarray I'm leaning toward the following plan:

* Do not create compatibility array objects. I initially thought we could sub-class in order to create objects that had the expected attributes and methods of Numeric arrays or Numarray arrays. After some experimentation, I'm ditching this plan. I think this would create too many array-like objects floating around and make unification even harder as these objects interact in difficult-to-predict ways.

Instead, I'm planning to:

1) Create compatibility functions in oldnumeric and numarray sub-packages that create NumPy arrays but do it with the same function syntax as the old packages.

2) Create 4 scripts for assisting in conversion (2 for Numeric and 2 for Numarray):

   a) An initial script that just alters imports (to the compatibility layer) and fixes method and attribute access (see the sketch after this message).

   b) A secondary script that alters the imports from the compatibility layer and fixes as much as possible the things that need to change in order to make the switch away from the compatibility layer work correctly.

While it is not foolproof, I think this will cover most of the issues and make conversion relatively easy. This will also let us develop NumPy without undue concern for compatibility with older packages. This must all be in place before 1.0 release candidate 1 comes out.

Comments and criticisms welcome.

-Travis |
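A minimal sketch of what the first, import-altering script might do; the two rewrite rules are illustrative stand-ins, not the real conversion tables:

    import re

    def alter_imports(source):
        """Point old Numeric imports at the compatibility layer (toy rules only)."""
        # 'import Numeric' -> the oldnumeric compatibility sub-package
        source = re.sub(r'\bimport Numeric\b',
                        'import numpy.oldnumeric as Numeric', source)
        # attribute spelling change: .typecode() method -> .dtype.char lookup
        source = re.sub(r'\.typecode\(\)', '.dtype.char', source)
        return source

    print(alter_imports("import Numeric\nx = Numeric.zeros(3)\nprint(x.typecode())"))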
From: Phil R. <pru...@gm...> - 2006-08-04 15:12:53
|
The spook is in t = [1.3*i for i in range(1400)]. It used to be t = [1.0*i for i in range(1400)], but I changed it to shake out algorithms that produce differences. But a max difference of 2.077e-16 is immaterial for my application. I should use a less strict compare.

On 8/3/06, Charles R Harris <cha...@gm...> wrote:
> Hi Phil,
>
> Curious. It works fine here in the original form. I even expected a tiny difference because of floating point voodoo, but there was none at all. Now if I copy your program and run it there *is* a small difference over the slice [1:] (to avoid division by zero).
>
> index of max fractional difference: 234
> max fractional difference: 2.077e-16
> reg at max fractional difference: 1.098e+03
>
> Which is just about roundoff error (1.11e-16) for double precision, so it lost a bit of precision. Still, I am not clear why the results should differ at all between the original and your new code. Cue spooky music.
>
> Chuck
> <snip -- Chuck's full message, and the rest of the thread, appear below> |
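The "less strict compare" Phil mentions is a one-line change. A sketch using numpy's tolerance-aware checker (assert_allclose is the modern spelling; the era-appropriate one was assert_array_almost_equal), with illustrative values:

    import numpy as np
    from numpy.testing import assert_allclose

    reg = np.array([1.0980000000000001e+03])    # reference value
    np3 = np.array([1.0980000000000003e+03])    # off by a couple of ulps

    # assert_equal-style exact comparison fails here; a relative tolerance
    # just above float64 roundoff accepts it:
    assert_allclose(np3, reg, rtol=1e-12)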
From: Charles R H. <cha...@gm...> - 2006-08-04 06:40:15
|
Hi Phil,

Curious. It works fine here in the original form. I even expected a tiny difference because of floating point voodoo, but there was none at all. Now if I copy your program and run it there *is* a small difference over the slice [1:] (to avoid division by zero).

index of max fractional difference: 234
max fractional difference: 2.077e-16
reg at max fractional difference: 1.098e+03

Which is just about roundoff error (1.11e-16) for double precision, so it lost a bit of precision. Still, I am not clear why the results should differ at all between the original and your new code. Cue spooky music.

Chuck

On 8/3/06, Phil Ruggera <pru...@gm...> wrote:
> Tweek2 is slightly faster, but does not produce the same result as the regular python baseline:
>
> regular python took: 11.997997 sec.
> numpy convolve took: 0.611996 sec.
> numpy convolve tweek 1 took: 0.442029 sec.
> numpy convolve tweek 2 took: 0.418857 sec.
> <snip -- Phil's full message, with code, appears below> |
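The 1.11e-16 figure Chuck quotes is the unit roundoff for doubles, which numpy can report directly:

    import numpy as np

    eps = np.finfo(np.float64).eps    # 2.220446049250313e-16
    print(eps / 2)                    # 1.11e-16, the unit roundoff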
From: Phil R. <pru...@gm...> - 2006-08-04 04:46:06
|
Tweek2 is slightly faster, but does not produce the same result as the
regular python baseline:

regular python took: 11.997997 sec.
numpy convolve took: 0.611996 sec.
numpy convolve tweek 1 took: 0.442029 sec.
numpy convolve tweek 2 took: 0.418857 sec.
Traceback (most recent call last):
  File "G:\Python\Dev\mean.py", line 57, in ?
    numpy.testing.assert_equal(reg, np3)
  File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 130, in assert_equal
    return assert_array_equal(actual, desired, err_msg)
  File "C:\Python24\Lib\site-packages\numpy\testing\utils.py", line 217, in assert_array_equal
    assert cond,\
AssertionError: Arrays are not equal (mismatch 17.1428571429%):
        Array 1: [ 0.0000000000000000e+00 6.5000000000000002e-01 1.3000000000000000e+00 ..., 1.7842500000000002e+03 1.785550000...
        Array 2: [ 0.0000000000000000e+00 6.5000000000000002e-01 1.3000000000000000e+00 ..., 1.7842500000000002e+03 1.785550000...

Code:

# mean of n values within an array
import numpy, time

def nmean(list,n):
    a = []
    for i in range(1,len(list)+1):
        start = i-n
        divisor = n
        if start < 0:
            start = 0
            divisor = i
        a.append(sum(list[start:i])/divisor)
    return a

def testNP(code, text):
    start = time.clock()
    for x in range(1000):
        np = code(t,50)
    print text, "took: %f sec."%(time.clock() - start)
    return np

t = [1.3*i for i in range(1400)]
reg = testNP(nmean, 'regular python')

t = numpy.array(t,dtype=float)

def numpy_nmean_conv(list,n):
    b = numpy.ones(n,dtype=float)
    a = numpy.convolve(list,b,mode="full")
    for i in range(n):
        a[i] /= i + 1
    a[n:] /= n
    return a[:len(list)]

np1 = testNP(numpy_nmean_conv, 'numpy convolve')

def numpy_nmean_conv_nl_tweak1(list,n):
    b = numpy.ones(n,dtype=float)
    a = numpy.convolve(list,b,mode="full")
    a[:n] /= numpy.arange(1, n+1)
    a[n:] /= n
    return a[:len(list)]

np2 = testNP(numpy_nmean_conv_nl_tweak1, 'numpy convolve tweek 1')

def numpy_nmean_conv_nl_tweak2(list,n):
    b = numpy.ones(n,dtype=float)
    a = numpy.convolve(list,b,mode="full")
    a[:n] /= numpy.arange(1, n + 1)
    a[n:] *= 1.0/n
    return a[:len(list)]

np3 = testNP(numpy_nmean_conv_nl_tweak2, 'numpy convolve tweek 2')

numpy.testing.assert_equal(reg, np1)
numpy.testing.assert_equal(reg, np2)
numpy.testing.assert_equal(reg, np3)

On 8/3/06, Charles R Harris <cha...@gm...> wrote:
> Hi Scott,
>
> On 8/3/06, Scott Ransom <sr...@nr...> wrote:
> > You should be able to modify the kernel so that you can avoid
> > many of the divides at the end. Something like:
> >
> > def numpy_nmean_conv_nl2(list,n):
> >     b = numpy.ones(n,dtype=float) / n
> >     a = numpy.convolve(c,b,mode="full")
> >     # Note: something magic in here to fix the first 'n' values
> >     return a[:len(list)]
>
> Yep, I tried that but it wasn't any faster. It might help for really
> *big* arrays. The first n-1 values still need to be fixed after.
>
> Chuck
>
> > I played with it a bit, but don't have time to figure out exactly
> > how convolve is mangling the first n return values...
> >
> > Scott
> >
> > On Thu, Aug 03, 2006 at 09:38:25AM -0600, Charles R Harris wrote:
> > > Heh,
> > >
> > > This is fun. Two more variations with 1000 reps instead of 100 for
> > > better timing:
> > >
> > > def numpy_nmean_conv_nl_tweak1(list,n):
> > >     b = numpy.ones(n,dtype=float)
> > >     a = numpy.convolve(list,b,mode="full")
> > >     a[:n] /= numpy.arange(1, n + 1)
> > >     a[n:] /= n
> > >     return a[:len(list)]
> > >
> > > def numpy_nmean_conv_nl_tweak2(list,n):
> > >     b = numpy.ones(n,dtype=float)
> > >     a = numpy.convolve(list,b,mode="full")
> > >     a[:n] /= numpy.arange(1, n + 1)
> > >     a[n:] *= 1.0/n
> > >     return a[:len(list)]
> > >
> > > Which gives
> > >
> > > numpy convolve took: 2.630000 sec.
> > > numpy convolve noloop took: 0.320000 sec.
> > > numpy convolve noloop tweak1 took: 0.250000 sec.
> > > numpy convolve noloop tweak2 took: 0.240000 sec.
> > >
> > > Chuck
> > >
> > > On 8/2/06, Phil Ruggera <pru...@gm...> wrote:
> > > > A variation of the proposed convolve routine is very fast:
> > > >
> > > > regular python took: 1.150214 sec.
> > > > numpy mean slice took: 2.427513 sec.
> > > > numpy convolve took: 0.546854 sec.
> > > > numpy convolve noloop took: 0.058611 sec.
> > > >
> > > > Code:
> > > >
> > > > # mean of n values within an array
> > > > import numpy, time
> > > > def nmean(list,n):
> > > >     a = []
> > > >     for i in range(1,len(list)+1):
> > > >         start = i-n
> > > >         divisor = n
> > > >         if start < 0:
> > > >             start = 0
> > > >             divisor = i
> > > >         a.append(sum(list[start:i])/divisor)
> > > >     return a
> > > >
> > > > t = [1.0*i for i in range(1400)]
> > > > start = time.clock()
> > > > for x in range(100):
> > > >     reg = nmean(t,50)
> > > > print "regular python took: %f sec."%(time.clock() - start)
> > > >
> > > > def numpy_nmean(list,n):
> > > >     a = numpy.empty(len(list),dtype=float)
> > > >     for i in range(1,len(list)+1):
> > > >         start = i-n
> > > >         if start < 0:
> > > >             start = 0
> > > >         a[i-1] = list[start:i].mean(0)
> > > >     return a
> > > >
> > > > t = numpy.arange(0,1400,dtype=float)
> > > > start = time.clock()
> > > > for x in range(100):
> > > >     npm = numpy_nmean(t,50)
> > > > print "numpy mean slice took: %f sec."%(time.clock() - start)
> > > >
> > > > def numpy_nmean_conv(list,n):
> > > >     b = numpy.ones(n,dtype=float)
> > > >     a = numpy.convolve(list,b,mode="full")
> > > >     for i in range(0,len(list)):
> > > >         if i < n :
> > > >             a[i] /= i + 1
> > > >         else :
> > > >             a[i] /= n
> > > >     return a[:len(list)]
> > > >
> > > > t = numpy.arange(0,1400,dtype=float)
> > > > start = time.clock()
> > > > for x in range(100):
> > > >     npc = numpy_nmean_conv(t,50)
> > > > print "numpy convolve took: %f sec."%(time.clock() - start)
> > > >
> > > > def numpy_nmean_conv_nl(list,n):
> > > >     b = numpy.ones(n,dtype=float)
> > > >     a = numpy.convolve(list,b,mode="full")
> > > >     for i in range(n):
> > > >         a[i] /= i + 1
> > > >     a[n:] /= n
> > > >     return a[:len(list)]
> > > >
> > > > t = numpy.arange(0,1400,dtype=float)
> > > > start = time.clock()
> > > > for x in range(100):
> > > >     npn = numpy_nmean_conv_nl(t,50)
> > > > print "numpy convolve noloop took: %f sec."%(time.clock() - start)
> > > >
> > > > numpy.testing.assert_equal(reg,npm)
> > > > numpy.testing.assert_equal(reg,npc)
> > > > numpy.testing.assert_equal(reg,npn)
> > > >
> > > > On 7/29/06, David Grant <dav...@gm...> wrote:
> > > > <snip>
> >
> > --
> > Scott M. Ransom            Address:  NRAO
> > Phone:  (434) 296-0320               520 Edgemont Rd.
> > email:  sr...@nr...        Charlottesville, VA 22903 USA
> > GPG Fingerprint: 06A9 9553 78BE 16DB 407B FFCA 9BFA B6FF FFD3 2989 |
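The mismatch above is a floating-point rounding artifact rather than a logic error: `a[n:] *= 1.0/n` multiplies by a rounded reciprocal, which can differ from `a[n:] /= n` in the last bit, and the pure-Python baseline accumulates its sums in yet another order. numpy.testing.assert_equal demands bit-for-bit equality, so a tolerance-based comparison is the appropriate test here. A minimal sketch of that; the sample values are made up for illustration:

    import numpy

    a = numpy.array([0.65, 1.3, 1784.25])
    b = (a * (1.0 / 50)) * 50   # same values up to rounding in the last bit

    # assert_equal raises on any bit-level difference; the almost_equal
    # variant only requires abs(a - b) < 0.5 * 10**(-decimal) elementwise.
    numpy.testing.assert_array_almost_equal(a, b, decimal=10)

With assert_array_almost_equal substituted for the three assert_equal calls in the script above, tweek 2 should compare clean against the baseline.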
From: Robert K. <rob...@gm...> - 2006-08-04 04:27:14
|
Sebastian Haase wrote:
> Hi,
> Is it possible to have 'cc'-ing the poster of a bug ticket be the
> default !?
> Or is/can this be set in a per user preference somehow ?

IIRC, if you supply your email address in your "Settings", you will get
notification emails.

http://projects.scipy.org/scipy/numpy/settings

Otherwise, subscribe to the numpy-tickets email list, and you will get
notifications of all tickets.

http://www.scipy.org/Mailing_Lists

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco |
From: Sebastian H. <ha...@ms...> - 2006-08-04 04:20:12
|
Hi,

Is it possible to have 'cc'-ing the poster of a bug ticket be the
default!?
Or is/can this be set in a per-user preference somehow?

Thanks,
Sebastian Haase |
From: Sebastian H. <ha...@ms...> - 2006-08-04 04:00:42
|
Hi!

I would like to suggest putting a link to the bug/wishlist tracker web
site on the scipy.org wiki site:

http://projects.scipy.org/scipy/numpy/ticket

I did not do it myself because I could not decide what the best place
for it would be - I think it should be rather prominent ...

The only link I could find was somewhere inside an FAQ for the SciPy
package, and it was only for the scipy-bug tracker.

Thanks,
Sebastian Haase |