From: Christian K. <ck...@ho...> - 2006-09-21 23:19:03
|
Hi, on linux I get an error when trying to build an rpm package from numpy 1.0rc1:

    building extension "numpy.core.umath" sources
    adding 'build/src.linux-i686-2.4/numpy/core/config.h' to sources.
    executing numpy/core/code_generators/generate_ufunc_api.py
    adding 'build/src.linux-i686-2.4/numpy/core/__ufunc_api.h' to sources.
    creating build/src.linux-i686-2.4/src
    conv_template:> build/src.linux-i686-2.4/src/umathmodule.c
    error: src/umathmodule.c.src: No such file or directory
    error: Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.68597 (%build)

    RPM build errors:
        Bad exit status from /home/ck/testarea/rpm/tmp/rpm-tmp.68597 (%build)
    error: command 'rpmbuild' failed with exit status 1

Christian |
From: Tim H. <tim...@ie...> - 2006-09-21 22:57:10
|
David M. Cooke wrote: > On Thu, 21 Sep 2006 11:34:42 -0700 > Tim Hochberg <tim...@ie...> wrote: > > >> Tim Hochberg wrote: >> >>> Robert Kern wrote: >>> >>> >>>> David M. Cooke wrote: >>>> >>>> >>>> >>>>> On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote: >>>>> >>>>> >>>>> >>>>>> Let me offer a third path: the algorithms used for .mean() and .var() >>>>>> are substandard. There are much better incremental algorithms that >>>>>> entirely avoid the need to accumulate such large (and therefore >>>>>> precision-losing) intermediate values. The algorithms look like the >>>>>> following for 1D arrays in Python: >>>>>> >>>>>> def mean(a): >>>>>> m = a[0] >>>>>> for i in range(1, len(a)): >>>>>> m += (a[i] - m) / (i + 1) >>>>>> return m >>>>>> >>>>>> >>>>>> >>>>> This isn't really going to be any better than using a simple sum. >>>>> It'll also be slower (a division per iteration). >>>>> >>>>> >>>>> >>>> With one exception, every test that I've thrown at it shows that it's >>>> better for float32. That exception is uniformly spaced arrays, like >>>> linspace(). >>>> >>>> > You do avoid >>>> > accumulating large sums, but then doing the division a[i]/len(a) and >>>> > adding that will do the same. >>>> >>>> Okay, this is true. >>>> >>>> >>>> >>>> >>>>> Now, if you want to avoid losing precision, you want to use a better >>>>> summation technique, like compensated (or Kahan) summation: >>>>> >>>>> def mean(a): >>>>> s = e = a.dtype.type(0) >>>>> for i in range(0, len(a)): >>>>> temp = s >>>>> y = a[i] + e >>>>> s = temp + y >>>>> e = (temp - s) + y >>>>> return s / len(a) >>>>> >>>> >>>> >>>>>> def var(a): >>>>>> m = a[0] >>>>>> t = a.dtype.type(0) >>>>>> for i in range(1, len(a)): >>>>>> q = a[i] - m >>>>>> r = q / (i+1) >>>>>> m += r >>>>>> t += i * q * r >>>>>> t /= len(a) >>>>>> return t >>>>>> >>>>>> Alternatively, from Knuth: >>>>>> >>>>>> def var_knuth(a): >>>>>> m = a.dtype.type(0) >>>>>> variance = a.dtype.type(0) >>>>>> for i in range(len(a)): >>>>>> delta = a[i] - m >>>>>> m += delta / (i+1) >>>>>> variance += delta * (a[i] - m) >>>>>> variance /= len(a) >>>>>> return variance >>>>>> >> I'm going to go ahead and attach a module containing the versions of >> mean, var, etc that I've been playing with in case someone wants to mess >> with them. Some were stolen from traffic on this list, for others I >> grabbed the algorithms from wikipedia or equivalent. >> > > I looked into this a bit more. I checked float32 (single precision) and > float64 (double precision), using long doubles (float96) for the "exact" > results. This is based on your code. Results are compared using > abs(exact_stat - computed_stat) / max(abs(values)), with 10000 values in the > range of [-100, 900] > > First, the mean. In float32, the Kahan summation in single precision is > better by about 2 orders of magnitude than simple summation. However, > accumulating the sum in double precision is better by about 9 orders of > magnitude than simple summation (7 orders more than Kahan). > > In float64, Kahan summation is the way to go, by 2 orders of magnitude. > > For the variance, in float32, Knuth's method is *no better* than the two-pass > method. Tim's code does an implicit conversion of intermediate results to > float64, which is why he saw a much better result. The two-pass method using > Kahan summation (again, in single precision), is better by about 2 orders of > magnitude. 
> There is practically no difference when using a double-precision
> accumulator amongst the techniques: they're all about 9 orders of magnitude
> better than single-precision two-pass.
>
> In float64, Kahan summation is again better than the rest, by about 2 orders
> of magnitude.
>
> I've put my adaptation of Tim's code, and box-and-whisker plots of the
> results, at http://arbutus.mcmaster.ca/dmc/numpy/variance/
>
> Conclusions:
>
> - If you're going to calculate everything in single precision, use Kahan
> summation. Using it in double-precision also helps.
> - If you can use a double-precision accumulator, it's much better than any of
> the techniques in single-precision only.
>
> - for speed+precision in the variance, either use Kahan summation in single
> precision with the two-pass method, or use double precision with simple
> summation with the two-pass method. Knuth buys you nothing, except slower
> code :-)
>
> After 1.0 is out, we should look at doing one of the above.

One more little tidbit; it appears possible to "fix up" Knuth's algorithm so that it's comparable in accuracy to the two-pass Kahan version by doing Kahan summation while accumulating the variance. Testing on this was far from thorough, but in the tests I did it nearly always produced identical results to Kahan. Of course this is even slower than the original Knuth version, but it's interesting anyway:

    # This is probably messier than it needs to be,
    # but I'm out of time for today...
    def var_knuth2(values, dtype=default_prec):
        """var(values) -> variance of values computed using Knuth's one-pass algorithm"""
        m = variance = mc = vc = dtype(0)
        for i, x in enumerate(values):
            delta = values[i] - m
            y = delta / dtype(i+1) + mc
            t = m + y
            mc = y - (t - m)
            m = t
            y = delta * (x - m) + vc
            t = variance + y
            vc = y - (t - variance)
            variance = t
        assert type(variance) == dtype
        variance /= dtype(len(values))
        return variance

|
From: Tim H. <tim...@ie...> - 2006-09-21 22:28:43
|
David M. Cooke wrote: > On Thu, 21 Sep 2006 11:34:42 -0700 > Tim Hochberg <tim...@ie...> wrote: > > >> Tim Hochberg wrote: >> >>> Robert Kern wrote: >>> >>> >>>> David M. Cooke wrote: >>>> >>>> >>>> >>>>> On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote: >>>>> >>>>> >>>>> >>>>>> Let me offer a third path: the algorithms used for .mean() and .var() >>>>>> are substandard. There are much better incremental algorithms that >>>>>> entirely avoid the need to accumulate such large (and therefore >>>>>> precision-losing) intermediate values. The algorithms look like the >>>>>> following for 1D arrays in Python: >>>>>> >>>>>> def mean(a): >>>>>> m = a[0] >>>>>> for i in range(1, len(a)): >>>>>> m += (a[i] - m) / (i + 1) >>>>>> return m >>>>>> >>>>>> >>>>>> >>>>> This isn't really going to be any better than using a simple sum. >>>>> It'll also be slower (a division per iteration). >>>>> >>>>> >>>>> >>>> With one exception, every test that I've thrown at it shows that it's >>>> better for float32. That exception is uniformly spaced arrays, like >>>> linspace(). >>>> >>>> > You do avoid >>>> > accumulating large sums, but then doing the division a[i]/len(a) and >>>> > adding that will do the same. >>>> >>>> Okay, this is true. >>>> >>>> >>>> >>>> >>>>> Now, if you want to avoid losing precision, you want to use a better >>>>> summation technique, like compensated (or Kahan) summation: >>>>> >>>>> def mean(a): >>>>> s = e = a.dtype.type(0) >>>>> for i in range(0, len(a)): >>>>> temp = s >>>>> y = a[i] + e >>>>> s = temp + y >>>>> e = (temp - s) + y >>>>> return s / len(a) >>>>> >>>> >>>> >>>>>> def var(a): >>>>>> m = a[0] >>>>>> t = a.dtype.type(0) >>>>>> for i in range(1, len(a)): >>>>>> q = a[i] - m >>>>>> r = q / (i+1) >>>>>> m += r >>>>>> t += i * q * r >>>>>> t /= len(a) >>>>>> return t >>>>>> >>>>>> Alternatively, from Knuth: >>>>>> >>>>>> def var_knuth(a): >>>>>> m = a.dtype.type(0) >>>>>> variance = a.dtype.type(0) >>>>>> for i in range(len(a)): >>>>>> delta = a[i] - m >>>>>> m += delta / (i+1) >>>>>> variance += delta * (a[i] - m) >>>>>> variance /= len(a) >>>>>> return variance >>>>>> >> I'm going to go ahead and attach a module containing the versions of >> mean, var, etc that I've been playing with in case someone wants to mess >> with them. Some were stolen from traffic on this list, for others I >> grabbed the algorithms from wikipedia or equivalent. >> > > I looked into this a bit more. I checked float32 (single precision) and > float64 (double precision), using long doubles (float96) for the "exact" > results. This is based on your code. Results are compared using > abs(exact_stat - computed_stat) / max(abs(values)), with 10000 values in the > range of [-100, 900] > > First, the mean. In float32, the Kahan summation in single precision is > better by about 2 orders of magnitude than simple summation. However, > accumulating the sum in double precision is better by about 9 orders of > magnitude than simple summation (7 orders more than Kahan). > > In float64, Kahan summation is the way to go, by 2 orders of magnitude. > > For the variance, in float32, Knuth's method is *no better* than the two-pass > method. Tim's code does an implicit conversion of intermediate results to > float64, which is why he saw a much better result. Doh! And I fixed that same problem in the mean implementation earlier too. I was astounded by how good knuth was doing, but not astounded enough apparently. 
Does it seem weird to anyone else that in: numpy_scalar <op> python_scalar the precision ends up being controlled by the python scalar? I would expect the numpy_scalar to control the resulting precision just like numpy arrays do in similar circumstances. Perhaps the egg on my face is just clouding my vision though. > The two-pass method using > Kahan summation (again, in single precision), is better by about 2 orders of > magnitude. There is practically no difference when using a double-precision > accumulator amongst the techniques: they're all about 9 orders of magnitude > better than single-precision two-pass. > > In float64, Kahan summation is again better than the rest, by about 2 orders > of magnitude. > > I've put my adaptation of Tim's code, and box-and-whisker plots of the > results, at http://arbutus.mcmaster.ca/dmc/numpy/variance/ > > Conclusions: > > - If you're going to calculate everything in single precision, use Kahan > summation. Using it in double-precision also helps. > - If you can use a double-precision accumulator, it's much better than any of > the techniques in single-precision only. > > - for speed+precision in the variance, either use Kahan summation in single > precision with the two-pass method, or use double precision with simple > summation with the two-pass method. Knuth buys you nothing, except slower > code :-) > The two pass methods are definitely more accurate. I won't be convinced on the speed front till I see comparable C implementations slug it out. That may well mean never in practice. However, I expect that somewhere around 10,000 items, the cache will overflow and memory bandwidth will become the bottleneck. At that point the extra operations of Knuth won't matter as much as making two passes through the array and Knuth will win on speed. Of course the accuracy is pretty bad at single precision, so the possible, theoretical speed advantage at large sizes probably doesn't matter. -tim > After 1.0 is out, we should look at doing one of the above. > +1 |
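A quick way to see the scalar-promotion behaviour Tim is asking about. This snippet is not from the thread, and the result for the scalar case in particular depends on which NumPy version (and which promotion rules) you are running:

    import numpy as np

    x = np.float32(1.5)          # a numpy scalar
    print((x + 2.0).dtype)       # float32 or float64, depending on the promotion rules in use

    a = np.array([1.5], dtype=np.float32)
    print((a + 2.0).dtype)       # float32 -- the array's precision wins over the python scalar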
From: Martin W. <Mar...@mp...> - 2006-09-21 22:18:51
|
On Thursday 21 September 2006 18:24, Travis Oliphant wrote: > Martin Wiechert wrote: > > Thanks Travis. > > > > Do I understand correctly that the only way to be really safe is to make > > a copy and not to export a reference to it? > > Because anybody having a reference to the owner of the data can override > > the flag? > > No, that's not quite correct. Of course in C, anybody can do anything > they want to the flags. > > In Python, only the owner of the object itself can change the writeable > flag once it is set to False. So, if you only return a "view" of the > array (a.view()) then the Python user will not be able to change the > flags. > > Example: > > a = array([1,2,3]) > a.flags.writeable = False > > b = a.view() > > b.flags.writeable = True # raises an error. > > c = a > c.flags.writeable = True # can be done because c is a direct alias to a. > > Hopefully, that explains the situation a bit better. > It does. Thanks Travis. > -Travis > > > > > > > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your opinions on IT & business topics through brief surveys -- and earn > cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion |
From: Travis O. <oli...@ee...> - 2006-09-21 22:07:56
|
David M. Cooke wrote:
> Conclusions:
>
> - If you're going to calculate everything in single precision, use Kahan
> summation. Using it in double-precision also helps.
> - If you can use a double-precision accumulator, it's much better than any of
> the techniques in single-precision only.
>
> - for speed+precision in the variance, either use Kahan summation in single
> precision with the two-pass method, or use double precision with simple
> summation with the two-pass method. Knuth buys you nothing, except slower
> code :-)
>
> After 1.0 is out, we should look at doing one of the above.

+1 |
From: David M. C. <co...@ph...> - 2006-09-21 21:59:43
|
On Thu, 21 Sep 2006 11:34:42 -0700 Tim Hochberg <tim...@ie...> wrote: > Tim Hochberg wrote: > > Robert Kern wrote: > > > >> David M. Cooke wrote: > >> > >> > >>> On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote: > >>> > >>> > >>>> Let me offer a third path: the algorithms used for .mean() and .var() > >>>> are substandard. There are much better incremental algorithms that > >>>> entirely avoid the need to accumulate such large (and therefore > >>>> precision-losing) intermediate values. The algorithms look like the > >>>> following for 1D arrays in Python: > >>>> > >>>> def mean(a): > >>>> m = a[0] > >>>> for i in range(1, len(a)): > >>>> m += (a[i] - m) / (i + 1) > >>>> return m > >>>> > >>>> > >>> This isn't really going to be any better than using a simple sum. > >>> It'll also be slower (a division per iteration). > >>> > >>> > >> With one exception, every test that I've thrown at it shows that it's > >> better for float32. That exception is uniformly spaced arrays, like > >> linspace(). > >> > >> > You do avoid > >> > accumulating large sums, but then doing the division a[i]/len(a) and > >> > adding that will do the same. > >> > >> Okay, this is true. > >> > >> > >> > >>> Now, if you want to avoid losing precision, you want to use a better > >>> summation technique, like compensated (or Kahan) summation: > >>> > >>> def mean(a): > >>> s = e = a.dtype.type(0) > >>> for i in range(0, len(a)): > >>> temp = s > >>> y = a[i] + e > >>> s = temp + y > >>> e = (temp - s) + y > >>> return s / len(a) > >> > >>>> def var(a): > >>>> m = a[0] > >>>> t = a.dtype.type(0) > >>>> for i in range(1, len(a)): > >>>> q = a[i] - m > >>>> r = q / (i+1) > >>>> m += r > >>>> t += i * q * r > >>>> t /= len(a) > >>>> return t > >>>> > >>>> Alternatively, from Knuth: > >>>> > >>>> def var_knuth(a): > >>>> m = a.dtype.type(0) > >>>> variance = a.dtype.type(0) > >>>> for i in range(len(a)): > >>>> delta = a[i] - m > >>>> m += delta / (i+1) > >>>> variance += delta * (a[i] - m) > >>>> variance /= len(a) > >>>> return variance > > I'm going to go ahead and attach a module containing the versions of > mean, var, etc that I've been playing with in case someone wants to mess > with them. Some were stolen from traffic on this list, for others I > grabbed the algorithms from wikipedia or equivalent. I looked into this a bit more. I checked float32 (single precision) and float64 (double precision), using long doubles (float96) for the "exact" results. This is based on your code. Results are compared using abs(exact_stat - computed_stat) / max(abs(values)), with 10000 values in the range of [-100, 900] First, the mean. In float32, the Kahan summation in single precision is better by about 2 orders of magnitude than simple summation. However, accumulating the sum in double precision is better by about 9 orders of magnitude than simple summation (7 orders more than Kahan). In float64, Kahan summation is the way to go, by 2 orders of magnitude. For the variance, in float32, Knuth's method is *no better* than the two-pass method. Tim's code does an implicit conversion of intermediate results to float64, which is why he saw a much better result. The two-pass method using Kahan summation (again, in single precision), is better by about 2 orders of magnitude. There is practically no difference when using a double-precision accumulator amongst the techniques: they're all about 9 orders of magnitude better than single-precision two-pass. 
In float64, Kahan summation is again better than the rest, by about 2 orders of magnitude. I've put my adaptation of Tim's code, and box-and-whisker plots of the results, at http://arbutus.mcmaster.ca/dmc/numpy/variance/ Conclusions: - If you're going to calculate everything in single precision, use Kahan summation. Using it in double-precision also helps. - If you can use a double-precision accumulator, it's much better than any of the techniques in single-precision only. - for speed+precision in the variance, either use Kahan summation in single precision with the two-pass method, or use double precision with simple summation with the two-pass method. Knuth buys you nothing, except slower code :-) After 1.0 is out, we should look at doing one of the above. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |co...@ph... |
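A self-contained sketch (not the code David posted) of the combination recommended above -- two-pass mean/variance with Kahan-compensated sums, all in the input precision -- together with the error measure described in the thread. The function names are illustrative, it uses NumPy's modern Generator API, and float64 stands in for the float96 "exact" reference:

    import numpy as np

    def kahan_sum(a):
        """Compensated (Kahan) summation carried out in the array's own precision."""
        s = e = a.dtype.type(0)
        for x in a:
            temp = s
            y = x + e
            s = temp + y
            e = (temp - s) + y
        return s

    def var_two_pass_kahan(a):
        """Two-pass variance using compensated sums in the input precision."""
        n = a.dtype.type(len(a))
        m = kahan_sum(a) / n                   # first pass: the mean
        return kahan_sum((a - m) ** 2) / n     # second pass: mean of squared residuals

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        values = (1000 * rng.random(10000) - 100).astype(np.float32)  # roughly [-100, 900]
        exact = values.astype(np.float64).var()                       # higher-precision reference
        err = abs(exact - var_two_pass_kahan(values)) / abs(values).max()
        print(err)   # relative error in the sense used above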
From: Charles R H. <cha...@gm...> - 2006-09-21 19:47:43
|
Hi,

On 9/21/06, Robert Kern <rob...@gm...> wrote:
> Steve Lianoglou wrote:
> > So .. I guess I'm wondering why we want to break from the standard?
>
> We don't as far as Python code goes. The code that Chuck added Doxygen-style
> comments to was C code. I presume he was simply answering Sebastian's question
> rather than suggesting we use Doxygen for Python code, too.

Exactly. I also don't think the Python hack description applies to doxygen any longer. As to the oddness of \param or @param, here is an example from Epydoc using Epytext:

    @type m: number
    @param m: The slope of the line.
    @type b: number
    @param b: The y intercept of the line. The X{y intercept} of a

Looks like they borrowed something there ;)

The main advantage of epydoc vs doxygen seems to be that you can use the markup inside the normal python docstring without having to make a separate comment block. Or would that be a disadvantage? Then again, I've been thinking of moving the python function docstrings into the add_newdocs.py file so everything is together in one spot and that would separate the Python docstrings from the functions anyway.

I'll fool around with doxygen a bit and see what it does. The C code is the code that most needs documentation in any case.

Chuck |
From: Robert K. <rob...@gm...> - 2006-09-21 19:24:39
|
Steve Lianoglou wrote: > So .. I guess I'm wondering why we want to break from the standard? We don't as far as Python code goes. The code that Chuck added Doxygen-style comments to was C code. I presume he was simply answering Sebastian's question rather than suggesting we use Doxygen for Python code, too. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: David M. C. <co...@ph...> - 2006-09-21 19:20:51
|
On Thu, 21 Sep 2006 10:05:58 -0600 "Charles R Harris" <cha...@gm...> wrote:
> Travis,
>
> A few questions.
>
> 1) I can't find any systematic code testing units, although there seem to be
> tests for regressions and such. Is there a place we should be putting such
> tests?
>
> 2) Any plans for code documentation? I documented some of my stuff with
> doxygen markups and wonder if we should include a Doxyfile as part of the
> package.

We don't have much of a defined standard for docs. Personally, I wouldn't use doxygen: what I've seen for Python versions are hacks, whose output looks like C++, and which require markup that's not like commonly-used conventions in Python (\brief, for instance).

Foremost for Python doc strings, I think, is that they look OK when using pydoc or similar (ipython's ?, for instance). That means a minimal amount of markup. Someone previously mentioned including cross-references; I think that's a good idea. A 'See also' line, for instance. Examples are good too, especially if there have been disputes on the interpretation of the command :-)

For the C code, documentation is autogenerated from the /** ... API */ comments that determine which functions are part of the C API. These are put into the files multiarray_api.txt and ufunc_api.txt (in the include/ directory). The files are in reST format, so the comments should/could be too. At some point I've got to go through and add more :-)

-- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |co...@ph... |
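An illustrative sketch, not an agreed convention from the thread, of the kind of lightly marked-up docstring being described -- readable under pydoc or ipython's ?, with a 'See also' line and an example. The function itself is made up for illustration:

    def clip_to_unit(x):
        """Clip the values of x to the interval [0, 1].

        See also: clip, minimum, maximum

        Example:
        >>> clip_to_unit([-0.5, 0.3, 2.0])
        [0.0, 0.3, 1.0]
        """
        return [min(max(float(v), 0.0), 1.0) for v in x]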
From: Steve L. <lis...@ar...> - 2006-09-21 18:46:16
|
> Are able to use doxygen for Python code ? I thought it only worked
> for C (and alike) ?
>
> IIRC correctly, it now does Python too. Let's see... here is an example
>
> ## Documentation for this module.
> #
> # More details.
>
> ## Documentation for a function.
> #
> # More details.
> def func():
>     pass
>
> Looks like ## replaces the /**

I never found it (although I haven't looked too hard), but I always thought there was an official way to document python code -- minimally to put the documentation in the docstring following the function definition:

    def func(..):
        """One liner.

        Continue docs -- some type of reStructuredText style
        """
        pass

Isn't that the same docstring that ipython uses to bring up help, when you do:

In [1]: myobject.some_func?

So .. I guess I'm wondering why we want to break from the standard?

-steve |
From: Tim H. <tim...@ie...> - 2006-09-21 18:35:06
|
Tim Hochberg wrote: > Robert Kern wrote: > >> David M. Cooke wrote: >> >> >>> On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote: >>> >>> >>>> Let me offer a third path: the algorithms used for .mean() and .var() are >>>> substandard. There are much better incremental algorithms that entirely avoid >>>> the need to accumulate such large (and therefore precision-losing) intermediate >>>> values. The algorithms look like the following for 1D arrays in Python: >>>> >>>> def mean(a): >>>> m = a[0] >>>> for i in range(1, len(a)): >>>> m += (a[i] - m) / (i + 1) >>>> return m >>>> >>>> >>> This isn't really going to be any better than using a simple sum. >>> It'll also be slower (a division per iteration). >>> >>> >> With one exception, every test that I've thrown at it shows that it's better for >> float32. That exception is uniformly spaced arrays, like linspace(). >> >> > You do avoid >> > accumulating large sums, but then doing the division a[i]/len(a) and >> > adding that will do the same. >> >> Okay, this is true. >> >> >> >>> Now, if you want to avoid losing precision, you want to use a better >>> summation technique, like compensated (or Kahan) summation: >>> >>> def mean(a): >>> s = e = a.dtype.type(0) >>> for i in range(0, len(a)): >>> temp = s >>> y = a[i] + e >>> s = temp + y >>> e = (temp - s) + y >>> return s / len(a) >>> >>> Some numerical experiments in Maple using 5-digit precision show that >>> your mean is maybe a bit better in some cases, but can also be much >>> worse, than sum(a)/len(a), but both are quite poor in comparision to the >>> Kahan summation. >>> >>> (We could probably use a fast implementation of Kahan summation in >>> addition to a.sum()) >>> >>> >> +1 >> >> >> >>>> def var(a): >>>> m = a[0] >>>> t = a.dtype.type(0) >>>> for i in range(1, len(a)): >>>> q = a[i] - m >>>> r = q / (i+1) >>>> m += r >>>> t += i * q * r >>>> t /= len(a) >>>> return t >>>> >>>> Alternatively, from Knuth: >>>> >>>> def var_knuth(a): >>>> m = a.dtype.type(0) >>>> variance = a.dtype.type(0) >>>> for i in range(len(a)): >>>> delta = a[i] - m >>>> m += delta / (i+1) >>>> variance += delta * (a[i] - m) >>>> variance /= len(a) >>>> return variance >>>> >>>> >>> These formulas are good when you can only do one pass over the data >>> (like in a calculator where you don't store all the data points), but >>> are slightly worse than doing two passes. Kahan summation would probably >>> also be good here too. >>> >>> >> Again, my tests show otherwise for float32. I'll condense my ipython log into a >> module for everyone's perusal. It's possible that the Kahan summation of the >> squared residuals will work better than the current two-pass algorithm and the >> implementations I give above. >> >> > This is what my tests show as well var_knuth outperformed any simple two > pass algorithm I could come up with, even ones using Kahan sums. > Interestingly, for 1D arrays the built in float32 variance performs > better than it should. After a bit of twiddling around I discovered that > it actually does most of it's calculations in float64. It uses a two > pass calculation, the result of mean is a scalar, and in the process of > converting that back to an array we end up with float64 values. Or > something like that; I was mostly reverse engineering the sequence of > events from the results. > Here's a simple of example of how var is a little wacky. A shape-[N] array will give you a different result than a shape-[1,N] array. 
The reason is clear -- in the second case the mean is not a scalar so there isn't the inadvertent promotion to float64, but it's still odd.

>>> data = (1000*(random.random([10000]) - 0.1)).astype(float32)
>>> print data.var() - data.reshape([1, -1]).var(-1)
[ 0.1171875]

I'm going to go ahead and attach a module containing the versions of mean, var, etc that I've been playing with in case someone wants to mess with them. Some were stolen from traffic on this list, for others I grabbed the algorithms from wikipedia or equivalent.

-tim |
From: Tim H. <tim...@ie...> - 2006-09-21 17:57:05
|
Robert Kern wrote: > David M. Cooke wrote: > >> On Wed, Sep 20, 2006 at 03:01:18AM -0500, Robert Kern wrote: >> >>> Let me offer a third path: the algorithms used for .mean() and .var() are >>> substandard. There are much better incremental algorithms that entirely avoid >>> the need to accumulate such large (and therefore precision-losing) intermediate >>> values. The algorithms look like the following for 1D arrays in Python: >>> >>> def mean(a): >>> m = a[0] >>> for i in range(1, len(a)): >>> m += (a[i] - m) / (i + 1) >>> return m >>> >> This isn't really going to be any better than using a simple sum. >> It'll also be slower (a division per iteration). >> > > With one exception, every test that I've thrown at it shows that it's better for > float32. That exception is uniformly spaced arrays, like linspace(). > > > You do avoid > > accumulating large sums, but then doing the division a[i]/len(a) and > > adding that will do the same. > > Okay, this is true. > > >> Now, if you want to avoid losing precision, you want to use a better >> summation technique, like compensated (or Kahan) summation: >> >> def mean(a): >> s = e = a.dtype.type(0) >> for i in range(0, len(a)): >> temp = s >> y = a[i] + e >> s = temp + y >> e = (temp - s) + y >> return s / len(a) >> >> Some numerical experiments in Maple using 5-digit precision show that >> your mean is maybe a bit better in some cases, but can also be much >> worse, than sum(a)/len(a), but both are quite poor in comparision to the >> Kahan summation. >> >> (We could probably use a fast implementation of Kahan summation in >> addition to a.sum()) >> > > +1 > > >>> def var(a): >>> m = a[0] >>> t = a.dtype.type(0) >>> for i in range(1, len(a)): >>> q = a[i] - m >>> r = q / (i+1) >>> m += r >>> t += i * q * r >>> t /= len(a) >>> return t >>> >>> Alternatively, from Knuth: >>> >>> def var_knuth(a): >>> m = a.dtype.type(0) >>> variance = a.dtype.type(0) >>> for i in range(len(a)): >>> delta = a[i] - m >>> m += delta / (i+1) >>> variance += delta * (a[i] - m) >>> variance /= len(a) >>> return variance >>> >> These formulas are good when you can only do one pass over the data >> (like in a calculator where you don't store all the data points), but >> are slightly worse than doing two passes. Kahan summation would probably >> also be good here too. >> > > Again, my tests show otherwise for float32. I'll condense my ipython log into a > module for everyone's perusal. It's possible that the Kahan summation of the > squared residuals will work better than the current two-pass algorithm and the > implementations I give above. > This is what my tests show as well var_knuth outperformed any simple two pass algorithm I could come up with, even ones using Kahan sums. Interestingly, for 1D arrays the built in float32 variance performs better than it should. After a bit of twiddling around I discovered that it actually does most of it's calculations in float64. It uses a two pass calculation, the result of mean is a scalar, and in the process of converting that back to an array we end up with float64 values. Or something like that; I was mostly reverse engineering the sequence of events from the results. -tim |
From: Travis O. <oli...@ie...> - 2006-09-21 17:00:55
|
Lionel Roubeyrie wrote: > find any solution for that. I have tried with arrays of dtype=object, but I > have problem when I want to compute min, max, ... with an error like: > TypeError: function not supported for these types, and can't coerce safely to > supported types. > I just added support for min and max methods of object arrays, by adding support for Object arrays to the minimum and maximum functions. -Travis |
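A quick illustration, not from the original message, of what that support enables -- assuming a NumPy build that includes the change:

    import numpy as np

    a = np.array([3, 1, 2], dtype=object)   # an object array
    print(a.min(), a.max())                  # 1 3 -- works once minimum/maximum handle object arrays
    print(np.maximum(a, 2))                  # [3 2 2], elementwise maximum on object elements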
From: Travis O. <oli...@ie...> - 2006-09-21 16:59:24
|
Matthew Brett wrote:
> Hi,
>
>> It's in the array interface specification:
>>
>> http://numpy.scipy.org/array_interface.shtml
>
> I was interested in the 't' (bitfield) type - is there an example of
> usage somewhere?

No, it's not implemented in NumPy. It's just part of the array interface specification for completeness.

-Travis |
From: Matthew B. <mat...@gm...> - 2006-09-21 16:52:56
|
Hi,

> It's in the array interface specification:
>
> http://numpy.scipy.org/array_interface.shtml

I was interested in the 't' (bitfield) type - is there an example of usage somewhere?

In [13]: dtype('t8')
---------------------------------------------------------------------------
exceptions.TypeError                        Traceback (most recent call last)

/home/mb312/python/<ipython console>

TypeError: data type not understood

Best,

Matthew |
From: Travis O. <oli...@ie...> - 2006-09-21 16:43:34
|
Lionel Roubeyrie wrote:
> Hi all,
> Is it possible to put masked values into recarrays, I need a array with
> heterogenous types of datas (datetime objects in the first col, all others
> are float) but with missing values in some records. For the moment, I don't
> find any solution for that.

Either use "nans" or "inf" for missing values or use the masked array object with a complex data-type. You don't need to use a recarray object to get "records". Any array can have "records". Therefore, you can have a masked array of "records" by creating an array with the appropriate data-type. It may also be possible to use a recarray as the "array" for the masked array object because the recarray is a sub-class of the array.

> I have tried with arrays of dtype=object, but I
> have problem when I want to compute min, max, ... with an error like:
> TypeError: function not supported for these types, and can't coerce safely to
> supported types.

It looks like the max and min functions are not supported for Object arrays.

    import numpy as N
    N.maximum.types

does not include Object arrays. It probably should.

-Travis |
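A small sketch of the structured-dtype approach suggested here. It is not from the original message: the field names and values are made up, and it uses the numpy.ma interface of current NumPy:

    import datetime
    import numpy as np
    import numpy.ma as ma

    # One "record" per row: an object column for the datetime plus two float columns.
    dt = np.dtype([('when', object), ('conc', float), ('flow', float)])
    rows = np.array([(datetime.datetime(2006, 9, 21, 10), 1.2, 0.4),
                     (datetime.datetime(2006, 9, 21, 11), 3.4, 0.5)], dtype=dt)

    # Mask the missing measurement in the second record, field by field.
    data = ma.masked_array(rows, mask=[(False, False, False), (False, False, True)])

    print(data['conc'].min(), data['conc'].max())   # reductions work on the float fields
    print(data['flow'].mean())                      # ignores the masked entry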
From: Charles R H. <cha...@gm...> - 2006-09-21 16:43:00
|
On 9/21/06, Sebastian Haase <ha...@ms...> wrote:
> On Thursday 21 September 2006 09:05, Charles R Harris wrote:
> > Travis,
> >
> > A few questions.
> >
> > 1) I can't find any systematic code testing units, although there seem to
> > be tests for regressions and such. Is there a place we should be putting
> > such tests?
> >
> > 2) Any plans for code documentation? I documented some of my stuff with
> > doxygen markups and wonder if we should include a Doxyfile as part of the
> > package.
>
> Are able to use doxygen for Python code ? I thought it only worked for C (and
> alike) ?

IIRC correctly, it now does Python too. Let's see... here is an example

    ## Documentation for this module.
    #
    # More details.

    ## Documentation for a function.
    #
    # More details.
    def func():
        pass

Looks like ## replaces the /**

Chuck |
From: Louis C. <lco...@po...> - 2006-09-21 16:33:16
|
> Are able to use doxygen for Python code ? I thought it only worked for C (and > alike) ? There is an ugly-hack :) http://i31www.ira.uka.de/~baas/pydoxy/ But I wouldn't recommend using it, rather stick with Epydoc. -- Louis Cordier <lco...@po...> cell: +27721472305 Point45 Entertainment (Pty) Ltd. http://www.point45.org |
From: Travis O. <oli...@ie...> - 2006-09-21 16:30:30
|
Charles R Harris wrote: > Travis, > > A few questions. > > 1) I can't find any systematic code testing units, although there seem > to be tests for regressions and such. Is there a place we should be > putting such tests? All tests are placed under the tests directory of the corresponding sub-package. They will only be picked up by .test(level < 10) if the file is named test_<module_name>. .test(level>10) should pick up all test files. If you want to name something different but still have it run at a test level < 10, then you need to run the test from one of the other test files that will be picked up (test_regression.py and test_unicode.py are doing that for example). > > 2) Any plans for code documentation? I documented some of my stuff > with doxygen markups and wonder if we should include a Doxyfile as > part of the package. I'm not familiar with Doxygen, but would welcome any improvements to the code documentation. > > 3) Would you consider breaking out the Converters into a separate .c > file for inclusion? The code generator seems to take care of the ordering. You are right that it doesn't matter which order the API subroutines are placed. I'm not opposed to more breaking up of the .c files, as long as it is clear where things will be located. The #include strategy is necessary to get it all in one Python module, but having smaller .c files usually makes for faster editing. It's the arrayobject.c file that is "too-large" IMHO, however. That's where I would look for ways to break it up. The iterobject and the data-type object could be taken out, for example. -Travis |
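A minimal sketch of what such a test file might look like. The filename and test contents are made up for illustration, and it uses plain unittest rather than whatever test harness the numpy runner of the day expects; only the naming convention (test_<module_name> under the tests directory) comes from the message above:

    # hypothetical file: numpy/core/tests/test_clip.py
    # (named test_<module_name> so that .test(level < 10) picks it up)
    import unittest
    import numpy as np

    class TestClip(unittest.TestCase):
        def test_clip_bounds(self):
            a = np.arange(10)
            c = a.clip(2, 7)
            self.assertEqual(c.min(), 2)
            self.assertEqual(c.max(), 7)

    if __name__ == "__main__":
        unittest.main()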
From: Travis O. <oli...@ie...> - 2006-09-21 16:24:33
|
Martin Wiechert wrote:
> Thanks Travis.
>
> Do I understand correctly that the only way to be really safe is to make a
> copy and not to export a reference to it?
> Because anybody having a reference to the owner of the data can override the
> flag?

No, that's not quite correct. Of course in C, anybody can do anything they want to the flags.

In Python, only the owner of the object itself can change the writeable flag once it is set to False. So, if you only return a "view" of the array (a.view()) then the Python user will not be able to change the flags.

Example:

    a = array([1,2,3])
    a.flags.writeable = False

    b = a.view()
    b.flags.writeable = True  # raises an error.

    c = a
    c.flags.writeable = True  # can be done because c is a direct alias to a.

Hopefully, that explains the situation a bit better.

-Travis |
From: Sebastian H. <ha...@ms...> - 2006-09-21 16:18:08
|
On Thursday 21 September 2006 09:05, Charles R Harris wrote:
> Travis,
>
> A few questions.
>
> 1) I can't find any systematic code testing units, although there seem to
> be tests for regressions and such. Is there a place we should be putting
> such tests?
>
> 2) Any plans for code documentation? I documented some of my stuff with
> doxygen markups and wonder if we should include a Doxyfile as part of the
> package.

Are you able to use doxygen for Python code? I thought it only worked for C (and the like)?

> 3) Would you consider breaking out the Converters into a separate .c file
> for inclusion? The code generator seems to take care of the ordering.
>
> Chuck |
From: Charles R H. <cha...@gm...> - 2006-09-21 16:06:02
|
Travis, A few questions. 1) I can't find any systematic code testing units, although there seem to be tests for regressions and such. Is there a place we should be putting such tests? 2) Any plans for code documentation? I documented some of my stuff with doxygen markups and wonder if we should include a Doxyfile as part of the package. 3) Would you consider breaking out the Converters into a separate .c file for inclusion? The code generator seems to take care of the ordering. Chuck |
From: Charles R H. <cha...@gm...> - 2006-09-21 14:33:39
|
On 9/21/06, Peter Bienstman <Pet...@ug...> wrote:
> Hi,
>
> I just installed rc1 on an AMD64 machine. but I get this error message when
> trying to import it:
>
> Python 2.4.3 (#1, Sep 21 2006, 13:06:42)
> [GCC 4.1.1 (Gentoo 4.1.1)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import numpy
> Traceback (most recent call last):

<snip>

I don't see this running the latest from svn on AMD64 here. Not sayin' there might not be a problem with rc1, I just don't see it with my sources.

    Python 2.4.3 (#1, Jun 13 2006, 11:46:22)
    [GCC 4.1.1 20060525 (Red Hat 4.1.1-1)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import numpy
    >>> numpy.version.version
    '1.0.dev3202'
    >>> numpy.version.os.uname()
    ('Linux', 'tethys', '2.6.17-1.2187_FC5', '#1 SMP Mon Sep 11 01:16:59 EDT 2006', 'x86_64')

If you are building on Gentoo maybe you could delete the build directory (and maybe the numpy site package) and rebuild.

Chuck.