From: Colin J. W. <cj...@sy...> - 2006-10-27 15:55:36
|
A July posting sets out the intent: http://scipy.org/BaseArray Version 3 of the draft: http://numpy.scipy.org/array_interface.shtml There is a description, from a C Structure perspective: http://svn.scipy.org/svn/PEP/PEP_basearray.txt What is the current status of the plan to develop a PEP? Is there a more recent version of the PEP, from a Python user's perspective? Colin W. |
From: Jonathan W. <jon...@gm...> - 2006-10-27 14:35:16
|
On 10/26/06, Travis Oliphant <oli...@ee...> wrote: > > > Okay, is my understanding here correct? I am defining two type > > descriptors: > > PyArray_Descr mxNumpyType - describes the Numpy array type. > > PyTypeObject mxNumpyDataType - describes the data type of the contents > > of the array (i.e. mxNumpyType->typeobj points to this), inherits from > > PyDoubleArrType_Type and overrides some fields as mentioned above. > > > The nomenclature is that mxNumPyType is the data-type of the array and > your PyTypeObject is the "type" of the elements of the array. So, you > have the names a bit backward. > > So, to correspond with the way I use the words "type" and "data-type", I > would name them: > > PyArray_Descr mxNumpyDataType > PyTypeObject mxNumpyType Okay, I will use this convention going forwards. > And the getitem and setitem functions are designed to only give/take > > PyObject* of type mxDateTime. > > > These are in the 'f' member of the PyArray_Descr structure, so > presumably you have also filled in your PyArray_Descr structure with > items from PyArray_DOUBLE? That's correct. I have all members of the 'f' member identical to that from PyArray_DOUBLE, except: mxNumpyType->f->dotfunc = NULL; mxNumpyType->f->getitem = date_getitem; mxNumpyType->f->setitem = date_setitem; mxNumpyType->f->cast[PyArray_DOUBLE] = (PyArray_VectorUnaryFunc*) dateToDouble; mxNumpyType->f->cast[PyArray_OBJECT] = (PyArray_VectorUnaryFunc*) dateToObject; All other cast functions are NULL. If I redefine the string function, I encounter another, perhaps more serious problem leading to a segfault. I've defined my string function to be extremely simple: >>> def printer(arr): ... 
return str(arr[0]) Now, if I try to print an element of the array: >>> mxArr[0] I get this stack trace: #0 scalar_value (scalar=0x814be10, descr=0x5079e0) at scalartypes.inc.src:68 #1 0x0079936a in PyArray_Scalar (data=0x814cf98, descr=0x5079e0, base=0x814e7a8) at arrayobject.c:1419 #2 0x007d259f in array_subscript_nice (self=0x814e7a8, op=0x804eb8c) at arrayobject.c:1985 #3 0x00d17dde in PyObject_GetItem (o=0x814e7a8, key=0x804eb8c) at Objects/abstract.c:94 (Note: for some reason gdb claims that arrayobject.c:1985 is array_subscript_nice, but looking at my source this line is actually in array_item_nice. *boggle*) But scalar_value returns NULL for all non-native types. So, destptr in PyArray_Scalar is set to NULL, and the call to copyswap segfaults. Perhaps scalar_value should be checking the scalarkind field of PyArray_Descr, or using the elsize and alignment fields to figure out the pointer to return if scalarkind isn't set? |
From: jeremito <jer...@gm...> - 2006-10-27 13:44:00
|
> Well if all you want is some matrices, there's nothing stopping you > from grabbing the matrices in the LAPACK distribution and using them > yourself. Robert's just saying they won't be included in Numpy. > There's also the matrix market, which has a large number of > (sparse-only?) example matrices. > http://math.nist.gov/MatrixMarket/index.html > --bb _______________________________________________ > > Numpy-discussion mailing list > > Num...@li... > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > You might also be interested in the Matrix Computation Toolbox, which is > a collection of MATLAB M-files containing functions for constructing > test matrices ... > http://www.ma.man.ac.uk/~higham/mctoolbox/ > > and > http://www.mathworks.com/access/helpdesk/help/techdoc/ref/*gallery*.html > > BTW, you can easily import matrices given in the MatrixMarket format in > scipy. See *io.mmread* > > mmread(source) > Reads the contents of a Matrix Market file 'filename' into a matrix. > > Inputs: > > source - Matrix Market filename (extensions .mtx, .mtz.gz) > or open file object. > > Outputs: > > a - sparse or full matrix > > Nils Thanks Bill and Nils. After my response, I had discovered the Matrix Market and realized it would be easy for me to create some of the matrices myself. However, having a way to read in the files already is really helpful. Thanks for pointing that out. Jeremy |
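The MatrixMarket coordinate format that io.mmread consumes is plain text: a header comment, a size line (rows, columns, nonzeros), then one "row col value" triple per line. The following sketch hand-parses a tiny file to show the layout; the toy parser is illustrative only, and in practice scipy's io.mmread should be used instead.

```python
# A minimal MatrixMarket coordinate file, inlined as a string.
MM_TEXT = """%%MatrixMarket matrix coordinate real general
3 3 4
1 1 1.0
2 2 2.0
3 1 -1.0
3 3 3.0
"""

def parse_mm(text):
    """Parse a coordinate-format MatrixMarket string into a dense
    list-of-lists (indices are 1-based in the file, 0-based here)."""
    lines = [ln for ln in text.splitlines() if ln and not ln.startswith('%')]
    nrows, ncols, nnz = (int(tok) for tok in lines[0].split())
    dense = [[0.0] * ncols for _ in range(nrows)]
    for ln in lines[1:1 + nnz]:
        i, j, v = ln.split()
        dense[int(i) - 1][int(j) - 1] = float(v)
    return dense

matrix = parse_mm(MM_TEXT)
```

Unlisted entries default to zero, which is why the format suits sparse test matrices well.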
From: Colin J. W. <cj...@sy...> - 2006-10-27 13:33:47
|
Ivan Vilata i Balaguer wrote: > En/na Colin J. Williams ha escrit: > > >> Ivan Vilata i Balaguer wrote: >> >>> Hi all. The attached diff makes some changes to Numexpr support for >>> booleans. The changes and their rationale are below. >>> >>> 1. New ``True`` and ``False`` boolean constants. This is so that 1 and >>> 0 are always proper integer constants. It is also for completeness, >>> but I don't envision any usage for them that couldn't be expressed >>> without the constants. >>> >>> >> I'm puzzled. >> Python already has constants True and False of the bool type. bool is a >> subclass of the int type. >> Any instance of the bool type can be converted to the int type. >> >>> a=1==0 >> >>> type(a) >> <type 'bool'> >> >>> int(a) >> 0 >> >>> a >> False >> >>> >> >> Colin W. >> >> >>> 2. The only comparisons supported with booleans are ``==`` and ``!=``, >>> so that you can compare boolean variables. Just as NumPy supports >>> complex order comparisons and Numexpr doesn't, so goes for bools. >>> Being relatively new, I think there is no need to keep >>> integer-boolean compatibility in Numexpr. What was the meaning of >>> ``True > False`` or ``2 > True`` anyway? >>> > > Well, the ``True`` and ``False`` constants where not previously > supported in Numexpr because they had to be defined somewhere. Now they > are. > > Regarding the Python int and bool types and their relationships, it is a > very elegant solution introduced in Python 2.3 since previous versions > didn't have a proper boolean type, so the 0 and 1 ints where used for > that. What I'm proposing here is, since Numexpr has a recent story and > most probably there isn't much code affected by the change, why not > define the boolean type as a purely logical one and leave its numeric > compatibility issues out? By the way, it simplifies Numexpr's virtual > machine (less casting opcodes). 
> > I admit again this looks a little baffling, of course, but I don't think > it would mean many noticeable changes to the user. > > Regards, > > Ivan, I'm afraid I'm still baffled. Are you saying that your proposal is necessary to preserve compatibility with Python versions before 2.3? Otherwise, it appears to introduce clutter with no clear benefit. I don't find Numexpr in NumPy, are you referring to a scipy module? Colin W. |
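Colin's point can be checked directly in any Python 2.3 or later interpreter: bool is a subclass of int, so True and False already participate in integer arithmetic and ordering, which is exactly the compatibility Ivan proposes to drop inside Numexpr. A quick sketch:

```python
# bool is an int subclass, so boolean results convert and compare
# as the integers 1 and 0.
a = (1 == 0)            # comparison yields a bool instance
flag_type = type(a)
as_int = int(a)
ordered = True > False  # order comparisons fall back to int values
total = True + True     # arithmetic treats True as 1
```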
From: Lars F. <lfr...@im...> - 2006-10-27 11:04:10
|
Hello Gael (sorry, I just don't get the dots...), Am Freitag, den 27.10.2006, 08:46 +0200 schrieb Gael Varoquaux: > Worked great for me ! My approach was to write a small wrapper C > (actually C++, with "extern C" linking) library that exposed only what I > needed of the camera interface, in a "python-friendly" way, and to wrap > it with ctypes. I controlled a "Pixis" princeton instruments camera this > way. As I said, it worked surprisingly well. I can send the code as an > example if you wish. Yes, that would be really nice! I think, I am doing the same thing. In the attachment, you find my dll-code, which I am still working on --so it is not ready to use!--, if you have any comments I will be happy. Lars -- Dipl.-Ing. Lars Friedrich Optical Measurement Technology Department of Microsystems Engineering -- IMTEK University of Freiburg Georges-Köhler-Allee 102 D-79110 Freiburg Germany phone: +49-761-203-7531 fax: +49-761-203-7537 room: 01 088 email: lfr...@im... |
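The wrapper-library approach Gael describes boils down to loading a shared library with ctypes and declaring argument and return types before calling into it. The sketch below substitutes the C runtime for a vendor camera DLL, since the camera library itself is not public; the pattern (CDLL, argtypes, restype) is the same one a hardware wrapper would use.

```python
import ctypes
import ctypes.util

# Stand-in for loading a vendor camera DLL: we load the C runtime and
# call its abs() just to show the declaration pattern. A real wrapper
# would do ctypes.CDLL("camera_wrapper.dll") (name hypothetical).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declaring argtypes/restype up front catches type mistakes early,
# which matters a lot when the other side is hardware.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

result = libc.abs(-42)
```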
From: David C. <da...@ar...> - 2006-10-27 09:54:32
|
Hi, I announce the first release of pyaudio, a module to make noise from numpy arrays (read, write and play audio files with numpy arrays). * WHAT FOR ?: The goal is to give a numpy/scipy environment some basic audio IO facilities (a la sound, wavread, wavwrite of matlab). With pyaudio, you should be able to read and write most common audio files from and to numpy arrays. The underlying IO operations are done using libsndfile from Erik Castro Lopo (http://www.mega-nerd.com/libsndfile/), so he is the one who did the hard work. As libsndfile has support for a vast number of audio files (including wav, aiff, but also htk, ircam, and flac, an open source lossless codec), pyaudio enables the import from and export to a fairly large number of file formats. There is also a really crude player, which uses tempfile to play audio, and which only works for linux-alsa anyway. I intend to add better support at least for linux, and for other platforms if this does not involve too much hassle. So basically, if you are lucky enough to use a recent linux system, pyaudio already gives you the equivalent of wavread, wavwrite and sound. 
* DOWNLOAD: http://www.ar.media.kyoto-u.ac.jp/members/david/pyaudio.tar.gz * INSTALLATION INSTRUCTIONS: Just untar the package and drop it into scipy/Lib/sandbox, and add the two following lines to scipy/Lib/sandbox/setup.py: # Package to make some noise using numpy config.add_subpackage('pyaudio') (if libsndfile.so is not in /usr/lib, a fortiori if you are a windows user, you should also set the right location for libsndfile in pyaudio/pysndfile.py, at the line _snd.cdll.LoadLibrary('/usr/lib/libsndfile.so') ) * EXAMPLE USAGE == Reading example == # Reading from '/home/david/blop.flac' from scipy.sandbox.pyaudio import sndfile a = sndfile('/home/david/blop.flac') print a tmp = a.read_frames_float(1024) --> Prints: File : /home/david/blop.flac Sample rate : 44100 Channels : 2 Frames : 9979776 Format : 0x00170002 Sections : 1 Seekable : True Duration : 00:03:46.298 And put into tmp the 1024 first frames (a frame is the equivalent of a sample, but taking into account the number of channels: so 1024 frames gives you 2048 samples here). == Writing example == # Writing to a wavfile: from scipy.sandbox.pyaudio import sndfile import numpy as N noise = N.random.randn((44100)) a = sndfile('/home/david/blop.flac', sfm['SFM_WRITE'], sf_format['SF_FORMAT_WAV'] | sf_format['SF_FORMAT_PCM16'], 1, 44100) a.write_frames(noise, 44100) a.close() -> should give you a lossless compressed white noise ! This is really a first release, not really tested, not much documentation, I can just say it works for me. I haven't found a good way to emulate enumerations, which libsndfile uses a lot, so I am using dictionaries generated from the library C header to get a relation enum label <=> value. If someone has a better idea, I am open to suggestions ! Cheers, David |
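On the enumeration question at the end of David's message: one common way to emulate C enums in pure Python is a forward dict from label to value plus a derived reverse map for value-to-label lookups. The labels below mimic libsndfile's naming style, but treat the numeric values as invented for illustration; the real values come from the library's C header.

```python
# Hypothetical enum emulation in the style David describes: a dict
# generated from the C header, plus a reverse map for label lookups.
sf_format = {
    'SF_FORMAT_WAV':   0x010000,
    'SF_FORMAT_FLAC':  0x170000,
    'SF_FORMAT_PCM16': 0x000002,
}

# Reverse map gives the enum-label <=> value relation in the other
# direction, e.g. for decoding a format word read from a file header.
sf_format_name = {v: k for k, v in sf_format.items()}

# C-style enums are often OR-ed together, which plain ints support.
combined = sf_format['SF_FORMAT_WAV'] | sf_format['SF_FORMAT_PCM16']
```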
From: Ivan V. i B. <iv...@ca...> - 2006-10-27 07:43:55
|
En/na Travis E. Oliphant ha escrit: > We are very pleased to announce the release of NumPy 1.0 available for > download at http://www.numpy.org > > This release is the culmination of over 18 months of effort to allow > unification of the Numeric and Numarray communities. [...] Wow, let me say this must be for so many people in this group like having a child at last! Congratulations and lots of thanks to all developers (and very especially to Travis) and contributors for the impressive designing, coding, testing and, last but not least, community driving of the last months. Let's hope this goes on like this for a looong time! Cheers, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" |
From: Charles R H. <cha...@gm...> - 2006-10-27 07:37:03
|
On 10/26/06, Charles R Harris <cha...@gm...> wrote: > > > > On 10/26/06, Mathew Yeates <my...@jp...> wrote: > > > > yes, I got around the problem from my previous posting "distutils > > question". I added ld_args[:0] = ['-m64'] to line 209 of > > python2.5/distutils/unixcompiler.py. Lovely, yes I know. > > > > I now get an error "numpy/core/src/multiarraymodule.c:7230: error: > > `NPY_ALLOW_THREADS' undeclared". This is after several billion warning > > messages of the form > > numpy/core/src/multiarraymodule.c:5010: warning: dereferencing > > type-punned pointer will break strict-aliasing rules > > > GCC? Needs the -no-strict-aliasing flag. Everybody hates the default > except the compiler writers because you can't cast pointers between > different sized types, something the linux kernel and numpy do a lot. Things > can fail badly if you don't set the flag and ignore the warnings. > Make that -fno-strict-aliasing. The whole command line on x86 looks like: gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -D_GNU_SOURCE -fPIC -fPIC Chuck |
From: Ivan V. i B. <iv...@ca...> - 2006-10-27 07:21:43
|
En/na Colin J. Williams ha escrit: > Ivan Vilata i Balaguer wrote: >> Hi all. The attached diff makes some changes to Numexpr support for >> booleans. The changes and their rationale are below. >> >> 1. New ``True`` and ``False`` boolean constants. This is so that 1 and >> 0 are always proper integer constants. It is also for completeness, >> but I don't envision any usage for them that couldn't be expressed >> without the constants. >> > I'm puzzled. > Python already has constants True and False of the bool type. bool is a > subclass of the int type. > Any instance of the bool type can be converted to the int type. > >>> a=1==0 > >>> type(a) > <type 'bool'> > >>> int(a) > 0 > >>> a > False > >>> > > Colin W. > >> 2. The only comparisons supported with booleans are ``==`` and ``!=``, >> so that you can compare boolean variables. Just as NumPy supports >> complex order comparisons and Numexpr doesn't, so goes for bools. >> Being relatively new, I think there is no need to keep >> integer-boolean compatibility in Numexpr. What was the meaning of >> ``True > False`` or ``2 > True`` anyway? Well, the ``True`` and ``False`` constants were not previously supported in Numexpr because they had to be defined somewhere. Now they are. Regarding the Python int and bool types and their relationship, it is a very elegant solution introduced in Python 2.3, since previous versions didn't have a proper boolean type, so the 0 and 1 ints were used for that. What I'm proposing here is, since Numexpr has a recent history and most probably there isn't much code affected by the change, why not define the boolean type as a purely logical one and leave its numeric compatibility issues out? By the way, it simplifies Numexpr's virtual machine (fewer casting opcodes). I admit again this looks a little baffling, of course, but I don't think it would mean many noticeable changes to the user. 
Regards, :: Ivan Vilata i Balaguer >qo< http://www.carabos.com/ Cárabos Coop. V. V V Enjoy Data "" |
From: Gael V. <gae...@no...> - 2006-10-27 06:46:40
|
On Fri, Oct 27, 2006 at 07:55:06AM +0200, Lars Friedrich wrote: > If anyone is using python / numpy / ctypes for hardware control (say, > Cameras with grabber-cards or fire-wire / DCAM; National Instruments > acquisition cards using NIDAQmx, ...) I am interested in discussion! Worked great for me ! My approach was to write a small wrapper C (actually C++, with "extern C" linking) library that exposed only what I needed of the camera interface, in a "python-friendly" way, and to wrap it with ctypes. I controlled a "Pixis" princeton instruments camera this way. As I said, it worked surprisingly well. I can send the code as an example if you wish. Gaël |
From: Lars F. <lfr...@im...> - 2006-10-27 05:55:09
|
Am Donnerstag, den 26.10.2006, 19:08 +0900 schrieb David Cournapeau: > By the way, I found the information about locking pages into memory for > windows: > > http://msdn.microsoft.com/library/default.asp?url=/library/en-us/memory/base/virtuallock.asp > Thanks for the link, I now use a virtuallock in my Dll. I don't get the "paging-error"-bluescreens anymore, but now I get other ones ("DRIVER_IRQL_NOT_LESS_OR_EQUAL"), but I think this is another issue, and I am working on it... If anyone is using python / numpy / ctypes for hardware control (say, Cameras with grabber-cards or fire-wire / DCAM; National Instruments acquisition cards using NIDAQmx, ...) I am interested in discussion! Lars -- Dipl.-Ing. Lars Friedrich Optical Measurement Technology Department of Microsystems Engineering -- IMTEK University of Freiburg Georges-Köhler-Allee 102 D-79110 Freiburg Germany phone: +49-761-203-7531 fax: +49-761-203-7537 room: 01 088 email: lfr...@im... |
From: Robert K. <rob...@gm...> - 2006-10-27 05:05:15
|
George Sakkis wrote: > Robert Kern <robert.kern <at> gmail.com> writes: > >> It looks like you linked against a FORTRAN LAPACK, but didn't manage to link > the >> FORTRAN runtime library libg2c. Can you give us the output of your build? >> > > I just installed Numpy, ATLAS and LAPACK on Centos a few hours ago and I got the > exact same error. You're right, libg2c is never linked. Here's a sample line > from the linking: > > gcc -pthread -shared build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o > -L/usr/local/lib/atlas/ -llapack -lptf77blas -lptcblas -latlas -o > build/lib.linux-i686-2.4/numpy/linalg/lapack_lite.so > > I've been looking into numpy's distutils for the last hour or so but didn't > track down the problem yet; something seems to be broken with Redhat's setup... Did you do anything to configure the libraries for ATLAS? Like editing site.cfg? If so, you will need to add -lg2c after -latlas. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
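Robert's fix amounts to appending g2c to the ATLAS library list so the g77/g2c FORTRAN runtime gets linked into lapack_lite. A hedged sketch of what the relevant site.cfg section might look like (the path and exact option names are assumptions and should be checked against your numpy.distutils version, not a verified configuration):

```ini
[atlas]
library_dirs = /usr/local/lib/atlas
atlas_libs = lapack, ptf77blas, ptcblas, atlas, g2c
```

After editing site.cfg, the build should emit -lg2c after -latlas in the link line shown in George's message.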
From: George S. <geo...@gm...> - 2006-10-27 04:00:21
|
Robert Kern <robert.kern <at> gmail.com> writes: > It looks like you linked against a FORTRAN LAPACK, but didn't manage to link the > FORTRAN runtime library libg2c. Can you give us the output of your build? > I just installed Numpy, ATLAS and LAPACK on Centos a few hours ago and I got the exact same error. You're right, libg2c is never linked. Here's a sample line from the linking: gcc -pthread -shared build/temp.linux-i686-2.4/numpy/linalg/lapack_litemodule.o -L/usr/local/lib/atlas/ -llapack -lptf77blas -lptcblas -latlas -o build/lib.linux-i686-2.4/numpy/linalg/lapack_lite.so I've been looking into numpy's distutils for the last hour or so but didn't track down the problem yet; something seems to be broken with Redhat's setup... George |
From: Charles R H. <cha...@gm...> - 2006-10-27 03:34:03
|
On 10/26/06, Mathew Yeates <my...@jp...> wrote: > > yes, I got around the problem from my previous posting "distutils > question". I added ld_args[:0] = ['-m64'] to line 209 of > python2.5/distutils/unixcompiler.py. Lovely, yes I know. > > I now get an error "numpy/core/src/multiarraymodule.c:7230: error: > `NPY_ALLOW_THREADS' undeclared". This is after several billion warning > messages of the form > numpy/core/src/multiarraymodule.c:5010: warning: dereferencing > type-punned pointer will break strict-aliasing rules GCC? Needs the -no-strict-aliasing flag. Everybody hates the default except the compiler writers because you can't cast pointers between different sized types, something the linux kernel and numpy do a lot. Things can fail badly if you don't set the flag and ignore the warnings. I don't know about the other warning, maybe some syntax error causing the declaration to be missed. Chuck |
From: Mathew Y. <my...@jp...> - 2006-10-27 01:43:05
|
yes, I got around the problem from my previous posting "distutils question". I added ld_args[:0] = ['-m64'] to line 209 of python2.5/distutils/unixcompiler.py. Lovely, yes I know. I now get an error "numpy/core/src/multiarraymodule.c:7230: error: `NPY_ALLOW_THREADS' undeclared". This is after several billion warning messages of the form numpy/core/src/multiarraymodule.c:5010: warning: dereferencing type-punned pointer will break strict-aliasing rules Any help, suggestions gratefully accepted. Mathew |
From: Mathew Y. <my...@jp...> - 2006-10-27 01:03:45
|
Hi I am trying to compile a 64 bit version of numpy with gcc. When building, numpy tries to figure out the lapack/atlas version. Up to this point, everything has been compiled with gcc -m64 and all is groovy. But, when an attempt is made to get the atlas version, the link fails because the command "gcc _configtest.o -L/u/fuego0b/myeates/lib -llapack -lcblas -latlas -o _configtest" is being run (Note the lack of -m64) This generates an error ld: fatal: file _configtest.o: wrong ELF class: ELFCLASS64 ld: fatal: File processing errors. No output written to _configtest collect2: ld returned 1 exit status ld: fatal: file _configtest.o: wrong ELF class: ELFCLASS64 ld: fatal: File processing errors. No output written to _configtest collect2: ld returned 1 exit status LinkError: LinkErro...us 1',),) Anybody know how I can force gcc -m64 when linking? I already have the environment variables CFLAGS=-m64 LDFLAGS=-64 Mathew |
From: Ted H. <ted...@ea...> - 2006-10-27 00:55:18
|
On Oct 26, 2006, at 12:26, Travis Oliphant wrote: > Charles R Harris wrote: >> >> >> On 10/26/06, *Travis Oliphant* <oli...@ie... >> <mailto:oli...@ie...>> wrote: >> >> Ted Horst wrote: >>> On Mac OS X tiger (10.4) ppc, long double has increased >> precision but >>> the same range as double (it really is 128 bits not 80, btw), so >>> e**1000 is inf, so this is not really an error. >>> >>> >> >> Thanks for the clarification. Long-double is not standard >> across >> platforms with different platforms choosing to do different >> things >> with >> the extra bytes. This helps explain one more platform. >> >>> I'm not sure what is the right thing to do in the test, check for >>> overflow? Also, finfo has never worked properly for this type. >>> >> In machar.py is the code that runs to detect all of the >> parameters. I >> think the code should be moved to C as detecting precision on a >> long-double takes too long. >> >> The overflow check is a good idea. The test should probably >> check for >> overflow and not try to run if it's detected. >> >> >> How to check overflow? According to the documentation the flag is not >> set by the hardware. And the precision is variable! Somewhere in the >> neighborhood of 31 decimal digits, more or less, depending. So I >> think >> it is hard to figure out what to do here. > > Let's drop the test. Long-double is available but is not consistent > across platforms and NumPy has done nothing to try and make it so. > Thus, let's just let the user beware. > > -Travis Yeah, that seems like the thing to do. Just for completeness: >>> N.seterr(all = 'raise') >>> fa = N.array([1e308], dtype=N.float) >>> lfa = N.array([1e308], dtype=N.longfloat) >>> fa + fa Traceback (most recent call last): File "<stdin>", line 1, in ? FloatingPointError: overflow encountered in add >>> lfa + lfa Traceback (most recent call last): File "<stdin>", line 1, in ? 
FloatingPointError: overflow encountered in add >>> N.exp(fa) Traceback (most recent call last): File "<stdin>", line 1, in ? FloatingPointError: overflow encountered in exp >>> N.exp(lfa) array([inf], dtype=float128) Ted |
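The asymmetry Ted demonstrates comes from the IEEE-754 double overflowing just above 1.79e308 while the platform's long double (numpy's float128 here) still has headroom. The double half of that behavior can be reproduced in plain Python, whose float is a C double; note that math.exp signals overflow with an exception rather than returning inf, much like numpy under the 'raise' error mode:

```python
import math
import sys

# IEEE-754 double tops out near 1.7976931348623157e308, so adding two
# 1e308 values exceeds the representable range and yields infinity.
x = 1e308
overflowed = x + x
max_double = sys.float_info.max  # largest finite double

# math.exp raises OverflowError instead of silently returning inf,
# analogous to N.seterr(all='raise') in Ted's transcript.
try:
    exp_result = math.exp(1000)
except OverflowError:
    exp_result = None
```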
From: Travis O. <oli...@ee...> - 2006-10-27 00:15:09
|
> It's just confusing as the documentation indicates that the setitem > function should return 0 for success and a negative number for > failure. But within Array_FromPyScalar, we have: > > ret->descr->f->setitem(op, ret->data, ret); > > if (PyErr_Occurred()) { > Py_DECREF(ret); > return NULL; > } else { > return (PyObject *)ret; > } > I see the problem. We are assuming an error is set on failure, so both -1 should be returned and an error condition set for your own setitem function. This is typical Python behavior. I'll fix the documentation. > > > > > I seem to be able to load values into the array, but I can't extract > > anything out of the array, even to print it. In gdb I've > verified that > > loading DateTime.now() correctly puts a float representation of the > > date into my array. However, if I try to get the value out, I get an > > error: > > >>> mxArr[0] = DateTime.now() > > >>> mxArr[0] > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > File "/usr/lib/python2.4/site-packages/numpy/core/numeric.py", > line > > 391, in array_repr > > ', ', "array(") > > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > > line 204, in array2string > > separator, prefix) > > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > > line 160, in _array2string > > format = _floatFormat(data, precision, suppress_small) > > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > > line 281, in _floatFormat > > non_zero = _uf.absolute(data.compress(_uf.not_equal(data, 0))) > > TypeError: bad operand type for abs() > > > > I'm not sure why it's trying to call abs() on my object to print it. > > Because that's the implication of inheriting from a double. It's > just > part of the code that tries to format your values into an array > (notice > the _floatFormat). I actually borrowed this code from numarray so I > can't speak to exactly what it's doing without more study. 
> > > Hmm, so does Numpy ignore the tp_repr and tp_str fields in the > PyTypeObject of the underlying type? admittedly haven't had a chance > to look at this code closely yet. How arrays print is actually user-settable. The default printing function does indeed ignore tp_repr and tp_str of the underlying scalar objects in order to be able to set precisions. Now, we could probably fix the default printing function to actually use the tp_repr and/or tp_str fields of the corresponding scalar objects. This is worth filing a ticket about. In the mean time you can create a new array print function that checks for your data-type as the type of the array and then does something different otherwise it calls the old function. Then, register this function as the print function for arrays. > > > > I have a separate PyNumberMethods attached to my object type, copied > > from the float scalar type, and nb_absolute is set to 0. When I > break > > at the various functions I've registered, the last thing Numpy > tries > > to do is cast my custom data type to an object type (which it > does so > > successfully) via _broadcast_cast. > > Don't confuse the Python object associated when an element of the > array > is extracted and the data-type of the array. Also don't confuse the > PyNumberMethods of the scalar object with the ufuncs. Defining > PyNumberMethods won't usually give you the ability to calculate > ufuncs. > > > Okay, is my understanding here correct? I am defining two type > descriptors: > PyArray_Descr mxNumpyType - describes the Numpy array type. > PyTypeObject mxNumpyDataType - describes the data type of the contents > of the array (i.e. mxNumpyType->typeobj points to this), inherits from > PyDoubleArrType_Type and overrides some fields as mentioned above. > The nomenclature is that mxNumPyType is the data-type of the array and your PyTypeObject is the "type" of the elements of the array. So, you have the names a bit backward. 
So, to correspond with the way I use the words "type" and "data-type", I would name them: PyArray_Descr mxNumpyDataType PyTypeObject mxNumpyType > And the getitem and setitem functions are designed to only give/take > PyObject* of type mxDateTime. > These are in the 'f' member of the PyArray_Descr structure, so presumably you have also filled in your PyArray_Descr structure with items from PyArray_DOUBLE? > I guess it's not clear to me whether the abs() referred to by the > error is an abs() ufunc or the nb_absolute pointer in the > PyNumberMethods. Let me try overriding ufuncs and get back to you... > > Perhaps you just want to construct an "object" array of mxDateTime's. > What is the reason you want to define an mxDateTime data-type? > > > Currently I am using an object array of mxDateTime's, but it's rather > frustrating that I can't treat them as normal floats internally since > that's really all they are. Ah, I see. So, you would like to be able to, say, view the array of mxDateTimes as an array of "floats" (using the .view method). You are correct that this doesn't make sense when you are talking about objects, but might if mxDateTime objects are really just floats. I just wanted to make sure you were aware of the object array route. The new data-type route is less well traveled, but I'm anxious to smooth the wrinkles out. Your experiences will help. Basically, we are moving from Numeric being "builtin data-types" only to a NumPy that has "arbitrary" data-types with a few special-cased "builtins". We need more experience to clarify the issues. Your identification of problems in the default printing, for example, is one thing that will help. Keep us posted. I'd love to hear how things went and what can be done to improve. -Travis |
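[Editorial note] The workaround Travis describes — a print function that special-cases the custom data-type and otherwise falls back to the default — can be sketched at the Python level. The names below are purely illustrative (an object dtype stands in for the mxDateTime descriptor); in NumPy 1.x the result could then be registered globally via `numpy.set_string_function`, which is not called here.

```python
import numpy as np

def my_array_str(arr):
    """Custom print function: special-case our data-type, else fall back.

    Here dtype=object stands in for the hypothetical mxDateTime descr.
    """
    if arr.dtype == np.dtype(object):
        # Custom formatting path for the special data-type.
        return "[" + " ".join(str(x) for x in arr.ravel()) + "]"
    # Fall back to numpy's default string conversion for everything else.
    return np.array_str(arr)

print(my_array_str(np.array([1.5, 2.5])))                 # default path
print(my_array_str(np.array(["a", "b"], dtype=object)))   # custom path
```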
From: Travis O. <oli...@ee...> - 2006-10-26 23:56:51
|
Damien Miller wrote: >Hi, > >I have just got around to updating OpenBSD's numpy port from 1.0b1 to >1.0 and am running into the following regress failure: > > > >>.........................................................................................................................................................................................................................................................Warning: overflow encountered in exp >>F....................................................................................................................................................................................................................................................................... >>====================================================================== >>FAIL: Ticket #112 >>---------------------------------------------------------------------- >>Traceback (most recent call last): >> File "/usr/ports/math/py-numpy/w-py-numpy-1.0/fake-i386/usr/local/lib/python2.4/site-packages/numpy/core/tests/test_regression.py", line 220, in check_longfloat_repr >> assert(str(a)[1:9] == str(a[0])[:8]) >>AssertionError >> >>---------------------------------------------------------------------- >>Ran 513 tests in 3.471s >> >>FAILED (failures=1) >> >> > >The variable 'a' seems to contain '[Inf]', so the failure appears related to >the overflow warning. > >Any clues on how I can debug this further? > > Unless you want to help with tracking how long double is interpreted on several platforms, then just ignore the test. (It actually wasn't being run in 1.0b1). -Travis |
From: Damien M. <dj...@mi...> - 2006-10-26 23:50:21
|
Hi, I have just got around to updating OpenBSD's numpy port from 1.0b1 to 1.0 and am running into the following regress failure: > .........................................................................................................................................................................................................................................................Warning: overflow encountered in exp > F....................................................................................................................................................................................................................................................................... > ====================================================================== > FAIL: Ticket #112 > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/ports/math/py-numpy/w-py-numpy-1.0/fake-i386/usr/local/lib/python2.4/site-packages/numpy/core/tests/test_regression.py", line 220, in check_longfloat_repr > assert(str(a)[1:9] == str(a[0])[:8]) > AssertionError > > ---------------------------------------------------------------------- > Ran 513 tests in 3.471s > > FAILED (failures=1) The variable 'a' seems to contain '[Inf]', so the failure appears related to the overflow warning. Any clues on how I can debug this further? thanks, Damien Miller |
From: Robert K. <rob...@gm...> - 2006-10-26 23:49:58
|
Jonathan Wang wrote: > It's just confusing as the documentation indicates that the setitem > function should return 0 for success and a negative number for failure. > But within Array_FromPyScalar, we have: > > ret->descr->f->setitem(op, ret->data, ret); > > if (PyErr_Occurred()) { > Py_DECREF(ret); > return NULL; > } else { > return (PyObject *)ret; > } > > So, someone reading the documentation could return -1 on failure without > setting the Python error flag, and the function would happily continue > on its way and fail to perform the proper casts. That's a documentation vagueness, then. This is a convention established by the Python C API. If an error happens in a function that returns PyObject*, then it should return NULL to inform the caller that an error happened; other functions should return 0 for success and -1 for an error. However, the function must still set an exception object. The rest is just a convenient convention. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
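[Editorial note] Robert's point — the return code is a convenience; the exception state is authoritative — can be mimicked in pure Python. The names below are illustrative only; the C-level counterpart would call `PyErr_SetString` and return -1.

```python
def setitem(value, buf, index):
    """Python-level analogy of the C setitem convention: on failure,
    set the error indicator (raise, mirroring PyErr_SetString) rather
    than relying on the return code alone; return 0 on success."""
    try:
        buf[index] = float(value)
    except (TypeError, ValueError):
        # C equivalent: PyErr_SetString(PyExc_TypeError, ...); return -1;
        raise TypeError("cannot convert %r to a float element" % (value,))
    return 0

buf = [0.0]
assert setitem("2.5", buf, 0) == 0 and buf[0] == 2.5
```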
From: Jonathan W. <jon...@gm...> - 2006-10-26 23:37:47
|
On 10/26/06, Travis Oliphant <oli...@ee...> wrote: > > But, what do you mean "inheriting" from NumPy's double for your scalar > data-type. This has significant implications. To define a new > data-type object (that doesn't build from the VOID data-type), you need > to flesh out the PyArray_Descr * structure and this can only be done in > C. Perhaps you are borrowing most entries in the structure builtin > double type and then filling in a few differently like setitem and > getitem? Is that accurate? Sorry, I should have been clearer. When I talk about inheritance, I mean of the type underlying the array. For example, the built-in scalar double array has an underlying type of PyDoubleArrType_Type. My underlying type is a separate PyTypeObject. The interesting changes here are to tp_repr, tp_str, and tp_as_number. The rest of the fields are inherited from PyDoubleArrType_Type using the tp_base field. The array itself has another statically defined type object of type PyArray_Descr, which I'm creating with a PyObject_New call and filling in with many of the entries from the descriptor returned by PyArray_DescrFromType(NPY_DOUBLE), while overriding getitem and setitem to handle PyObject* of type mxDateTime as you guessed. > The interface used by Array_FromPyScalar does not conform with the > > documentation's claim that a negative return value indicates an error. > > You must be talking about a different function. Array_FromPyScalar is > an internal function and not a C-API call. It also returns a PyObject * > not an integer. So, which function are you actually referring to? > > > The return code from setitem is not checked. Instead, the code depends > > on a Python error being set. This may be true, but how is it a problem? > It's just confusing as the documentation indicates that the setitem function should return 0 for success and a negative number for failure. 
But within Array_FromPyScalar, we have: ret->descr->f->setitem(op, ret->data, ret); if (PyErr_Occurred()) { Py_DECREF(ret); return NULL; } else { return (PyObject *)ret; } So, someone reading the documentation could return -1 on failure without setting the Python error flag, and the function would happily continue on its way and fail to perform the proper casts. > > > I seem to be able to load values into the array, but I can't extract > > anything out of the array, even to print it. In gdb I've verified that > > loading DateTime.now() correctly puts a float representation of the > > date into my array. However, if I try to get the value out, I get an > > error: > > >>> mxArr[0] = DateTime.now() > > >>> mxArr[0] > > Traceback (most recent call last): > > File "<stdin>", line 1, in ? > > File "/usr/lib/python2.4/site-packages/numpy/core/numeric.py", line > > 391, in array_repr > > ', ', "array(") > > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > > line 204, in array2string > > separator, prefix) > > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > > line 160, in _array2string > > format = _floatFormat(data, precision, suppress_small) > > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > > line 281, in _floatFormat > > non_zero = _uf.absolute(data.compress(_uf.not_equal(data, 0))) > > TypeError: bad operand type for abs() > > > > I'm not sure why it's trying to call abs() on my object to print it. > > Because that's the implication of inheriting from a double. It's just > part of the code that tries to format your values into an array (notice > the _floatFormat). I actually borrowed this code from numarray so I > can't speak to exactly what it's doing without more study. Hmm, so does Numpy ignore the tp_repr and tp_str fields in the PyTypeObject of the underlying type? I admittedly haven't had a chance to look at this code closely yet. 
> I have a separate PyNumberMethods attached to my object type, copied > > from the float scalar type, and nb_absolute is set to 0. When I break > > at the various functions I've registered, the last thing Numpy tries > > to do is cast my custom data type to an object type (which it does so > > successfully) via _broadcast_cast. > > Don't confuse the Python object associated when an element of the array > is extracted and the data-type of the array. Also don't confuse the > PyNumberMethods of the scalar object with the ufuncs. Defining > PyNumberMethods won't usually give you the ability to calculate ufuncs. Okay, is my understanding here correct? I am defining two type descriptors: PyArray_Descr mxNumpyType - describes the Numpy array type. PyTypeObject mxNumpyDataType - describes the data type of the contents of the array (i.e. mxNumpyType->typeobj points to this), inherits from PyDoubleArrType_Type and overrides some fields as mentioned above. And the getitem and setitem functions are designed to only give/take PyObject* of type mxDateTime. I guess it's not clear to me whether the abs() referred to by the error is an abs() ufunc or the nb_absolute pointer in the PyNumberMethods. Let me try overriding ufuncs and get back to you... Perhaps you just want to construct an "object" array of mxDateTime's. > What is the reason you want to define an mxDateTime data-type? Currently I am using an object array of mxDateTime's, but it's rather frustrating that I can't treat them as normal floats internally since that's really all they are. Jonathan |
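[Editorial note] For reference, the object-array route Jonathan mentions looks like the sketch below, with the standard-library `datetime` standing in for `mx.DateTime` (an assumption; mxDateTime itself is not imported here).

```python
import numpy as np
from datetime import datetime  # stand-in for mx.DateTime.DateTime

dates = np.array([datetime(2006, 10, 26), datetime(2006, 10, 27)],
                 dtype=object)
# Elements behave as full Python objects...
print(dates[0].year)
# ...but the buffer holds PyObject* pointers, not C doubles, so there is
# no underlying float buffer to reinterpret with .view() -- the
# limitation that motivates defining a real mxDateTime data-type.
```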
From: Travis O. <oli...@ee...> - 2006-10-26 23:19:25
|
Jonathan Wang wrote: > I'm trying to write a Numpy extension that will encapsulate mxDateTime > as a native Numpy type. I've decided to use a type inherited from > Numpy's scalar double. However, I'm running into all sorts of > problems. I'm using numpy 1.0b5; I realize this is somewhat out of date. > Cool. The ability to create your own data-types (and define ufuncs for them) is a feature that I'd like to see explored. But, it has not received a lot of attention and so you may find bugs along the way. We'll try to fix them quickly as they arise (and there will be bug fix releases for 1.0). But, what do you mean by "inheriting" from NumPy's double for your scalar data-type? This has significant implications. To define a new data-type object (that doesn't build from the VOID data-type), you need to flesh out the PyArray_Descr * structure and this can only be done in C. Perhaps you are borrowing most entries in the structure from the builtin double type and then filling in a few differently, like setitem and getitem? Is that accurate? > For all the examples below, assume that I've created a 1x1 array, > mxArr, with my custom type. > > The interface used by Array_FromPyScalar does not conform with the > documentation's claim that a negative return value indicates an error. You must be talking about a different function. Array_FromPyScalar is an internal function and not a C-API call. It also returns a PyObject *, not an integer. So, which function are you actually referring to? > The return code from setitem is not checked. Instead, the code depends > on a Python error being set. This may be true, but how is it a problem? > > I seem to be able to load values into the array, but I can't extract > anything out of the array, even to print it. In gdb I've verified that > loading DateTime.now() correctly puts a float representation of the > date into my array. 
However, if I try to get the value out, I get an > error: > >>> mxArr[0] = DateTime.now() > >>> mxArr[0] > Traceback (most recent call last): > File "<stdin>", line 1, in ? > File "/usr/lib/python2.4/site-packages/numpy/core/numeric.py", line > 391, in array_repr > ', ', "array(") > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > line 204, in array2string > separator, prefix) > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > line 160, in _array2string > format = _floatFormat(data, precision, suppress_small) > File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", > line 281, in _floatFormat > non_zero = _uf.absolute(data.compress(_uf.not_equal(data, 0))) > TypeError: bad operand type for abs() > > I'm not sure why it's trying to call abs() on my object to print it. Because that's the implication of inheriting from a double. It's just part of the code that tries to format your values into an array (notice the _floatFormat). I actually borrowed this code from numarray so I can't speak to exactly what it's doing without more study. > I have a separate PyNumberMethods attached to my object type, copied > from the float scalar type, and nb_absolute is set to 0. When I break > at the various functions I've registered, the last thing Numpy tries > to do is cast my custom data type to an object type (which it does so > successfully) via _broadcast_cast. Don't confuse the Python object you get when an element of the array is extracted with the data-type of the array. Also, don't confuse the PyNumberMethods of the scalar object with the ufuncs. Defining PyNumberMethods won't usually give you the ability to calculate ufuncs. Perhaps you just want to construct an "object" array of mxDateTime's. What is the reason you want to define an mxDateTime data-type? -Travis |
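[Editorial note] The distinction Travis draws can be seen from Python: for an object array, a ufunc like `np.absolute` is computed by calling each element's own `__abs__` (the scalar's number protocol, the counterpart of `nb_absolute`), not anything defined on the array itself. A sketch with a hypothetical element class:

```python
import numpy as np

class Stamp:
    """Hypothetical element type; __abs__ plays the role of nb_absolute
    on the scalar, which the object-dtype ufunc loop calls per element."""
    def __init__(self, t):
        self.t = t
    def __abs__(self):
        return Stamp(abs(self.t))

arr = np.array([Stamp(-3.0), Stamp(4.0)], dtype=object)
out = np.absolute(arr)   # calls abs() on each element
print(out[0].t, out[1].t)
```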
From: Jonathan W. <jon...@gm...> - 2006-10-26 22:26:52
|
I'm trying to write a Numpy extension that will encapsulate mxDateTime as a native Numpy type. I've decided to use a type inherited from Numpy's scalar double. However, I'm running into all sorts of problems. I'm using numpy 1.0b5; I realize this is somewhat out of date. For all the examples below, assume that I've created a 1x1 array, mxArr, with my custom type. The interface used by Array_FromPyScalar does not conform with the documentation's claim that a negative return value indicates an error. The return code from setitem is not checked. Instead, the code depends on a Python error being set. I seem to be able to load values into the array, but I can't extract anything out of the array, even to print it. In gdb I've verified that loading DateTime.now() correctly puts a float representation of the date into my array. However, if I try to get the value out, I get an error: >>> mxArr[0] = DateTime.now() >>> mxArr[0] Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/python2.4/site-packages/numpy/core/numeric.py", line 391, in array_repr ', ', "array(") File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", line 204, in array2string separator, prefix) File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", line 160, in _array2string format = _floatFormat(data, precision, suppress_small) File "/usr/lib/python2.4/site-packages/numpy/core/arrayprint.py", line 281, in _floatFormat non_zero = _uf.absolute(data.compress(_uf.not_equal(data, 0))) TypeError: bad operand type for abs() I'm not sure why it's trying to call abs() on my object to print it. I have a separate PyNumberMethods attached to my object type, copied from the float scalar type, and nb_absolute is set to 0. When I break at the various functions I've registered, the last thing Numpy tries to do is cast my custom data type to an object type (which it does so successfully) via _broadcast_cast. Thanks, Jonathan |
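[Editorial note] The failure in the traceback above can be reproduced with a plain object array whose elements lack `__abs__` — the Python-level counterpart of leaving nb_absolute set to 0 — since the print path ends up calling `np.absolute` on the data.

```python
import numpy as np

class NoAbs:
    """Hypothetical element type with no __abs__, mimicking a custom
    scalar whose nb_absolute slot is 0."""
    pass

arr = np.array([NoAbs()], dtype=object)
try:
    np.absolute(arr)   # the _floatFormat-style code path lands here
except TypeError as exc:
    print("absolute failed:", exc)
```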