From: Travis O. <oli...@ie...> - 2006-08-24 16:24:03

Christopher Hanley wrote:
> Good Morning,
>
> Numpy revision 3056 will not build on either Red Hat Enterprise 3 or
> Solaris 8. The relevant syntax errors are below:

I'd like to see which platforms do not work with the npy_interrupt.h
stuff. If you have a unique platform, please try the latest SVN. There
is a NPY_NO_SIGNAL define that will "turn off" support for interrupts,
which we can define on platforms that won't work.

-Travis
From: Travis O. <oli...@ie...> - 2006-08-24 16:22:32

Frank Conradie wrote:
> Thanks Travis - that did the trick. Is this an issue specifically with
> mingw and Windows?

Yes, I keep forgetting that Python functions are not necessarily defined
at compile time on Windows. It may also be a problem with MSVC on
Windows, but I'm not sure. The real fix is now in SVN, where these
function pointers are initialized before calling PyType_Ready.

-Travis
From: Frank C. <fr...@qf...> - 2006-08-24 15:36:44

Thanks Travis - that did the trick. Is this an issue specifically with
mingw and Windows?

- Frank

Travis Oliphant wrote:
> Travis Oliphant wrote:
>> Frank Conradie wrote:
>>> Hi Sven and Jordan
>>>
>>> I wish to add my name to this list ;-) I just got the same error
>>> trying to compile for Python 2.3 with latest candidate mingw32,
>>> following the instructions at
>>> http://www.scipy.org/Installing_SciPy/Windows .
>>>
>>> Hopefully someone can shed some light on this error - what I've been
>>> able to find on the net explains something about C not allowing
>>> dynamic initializing of global variables, whereas C++ does...?
>>
>> Edit line 690 of ndarrayobject.h to read
>>
>>     #define NPY_USE_PYMEM 0
>>
>> Hopefully that should fix the error.
>
> You will also have to alter line 11189 so that
>
>     _Py_HashPointer is replaced by 0 or NULL
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Sasha <nd...@ma...> - 2006-08-24 13:27:57

On 8/24/06, Bill Baxter <wb...@gm...> wrote:
> [snip] it would be nice to add a concise definition of "ufunc" to the
> numpy glossary: http://www.scipy.org/Numpy_Glossary.

Done.

> Can anyone come up with such a definition?

I copied the definition from the old Numeric manual.

> Here's my stab at it:
>
> ufunc: A function that operates element-wise on arrays.

This is not entirely correct. Ufuncs operate on anything that can be
passed to asarray(): arrays, Python lists, tuples, or scalars.
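Sasha's correction — that ufuncs accept anything asarray() accepts, not just arrays — can be sketched as follows (assuming NumPy is installed; np.absolute is the ufunc that the discussion is about):

```python
import numpy as np

# A ufunc is not limited to ndarray inputs: anything numpy.asarray()
# accepts works, and the result comes back as an array (or a scalar
# for 0-d input).
print(np.absolute([-1, 2, -3]))                   # Python list
print(np.absolute((-1.5, 2.5)))                   # tuple
print(np.absolute(-4))                            # plain scalar
print(np.absolute(np.array([[1, -2], [-3, 4]])))  # ndarray
```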
From: Christopher H. <ch...@st...> - 2006-08-24 12:57:32

Good Morning,

Numpy revision 3056 will not build on either Red Hat Enterprise 3 or
Solaris 8. The relevant syntax errors are below:

For RHE3:
---------

creating build/temp.linux-i686-2.4
creating build/temp.linux-i686-2.4/numpy
creating build/temp.linux-i686-2.4/numpy/core
creating build/temp.linux-i686-2.4/numpy/core/src
compile options: '-Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/stsci/pyssgdev/Python-2.4.2/include/python2.4 -c'
gcc: numpy/core/src/multiarraymodule.c
In file included from numpy/core/include/numpy/arrayobject.h:19,
                 from numpy/core/src/multiarraymodule.c:25:
numpy/core/include/numpy/npy_interrupt.h:95: syntax error before "_NPY_SIGINT_BUF"
numpy/core/include/numpy/npy_interrupt.h:95: warning: type defaults to `int' in declaration of `_NPY_SIGINT_BUF'
numpy/core/include/numpy/npy_interrupt.h:95: warning: data definition has no type or storage class
numpy/core/include/numpy/npy_interrupt.h: In function `_npy_sighandler':
numpy/core/include/numpy/npy_interrupt.h:100: `SIG_IGN' undeclared (first use in this function)
numpy/core/include/numpy/npy_interrupt.h:100: (Each undeclared identifier is reported only once
numpy/core/include/numpy/npy_interrupt.h:100: for each function it appears in.)
numpy/core/include/numpy/npy_interrupt.h:101: warning: implicit declaration of function `longjmp'
numpy/core/src/multiarraymodule.c: In function `test_interrupt':
numpy/core/src/multiarraymodule.c:6441: `SIGINT' undeclared (first use in this function)
numpy/core/src/multiarraymodule.c:6441: warning: implicit declaration of function `setjmp'
error: Command "gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC -Ibuild/src.linux-i686-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.linux-i686-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/stsci/pyssgdev/Python-2.4.2/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.linux-i686-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 1

For Solaris 8:
--------------

creating build/temp.solaris-2.8-sun4u-2.4
creating build/temp.solaris-2.8-sun4u-2.4/numpy
creating build/temp.solaris-2.8-sun4u-2.4/numpy/core
creating build/temp.solaris-2.8-sun4u-2.4/numpy/core/src
compile options: '-Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/ra/pyssg/Python-2.4.2/include/python2.4 -c'
cc: numpy/core/src/multiarraymodule.c
"numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: jmp_buf
"numpy/core/include/numpy/npy_interrupt.h", line 95: syntax error before or at: _NPY_SIGINT_BUF
"numpy/core/include/numpy/npy_interrupt.h", line 95: warning: old-style declaration or incorrect type for: _NPY_SIGINT_BUF
"numpy/core/include/numpy/npy_interrupt.h", line 100: undefined symbol: SIG_IGN
"numpy/core/include/numpy/npy_interrupt.h", line 100: warning: improper pointer/integer combination: arg #2
"numpy/core/src/scalartypes.inc.src", line 70: warning: statement not reached
"numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic
"numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic
"numpy/core/src/arraytypes.inc.src", line 1045: warning: pointer to void or function used in arithmetic
"numpy/core/src/arrayobject.c", line 4338: warning: assignment type mismatch:
        pointer to function(pointer to void, pointer to void, int, int) returning int "=" pointer to void
"numpy/core/src/arrayobject.c", line 4444: warning: argument #4 is incompatible with prototype:
        prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326
        argument : pointer to function(pointer to unsigned long, pointer to unsigned long, int, int) returning int
"numpy/core/src/arrayobject.c", line 4448: warning: argument #4 is incompatible with prototype:
        prototype: pointer to void : "numpy/core/src/arrayobject.c", line 4326
        argument : pointer to function(pointer to char, pointer to char, int, int) returning int
"numpy/core/src/arrayobject.c", line 5313: warning: assignment type mismatch:
        pointer to function(pointer to struct PyArrayObject {int ob_refcnt, pointer to struct _typeobject {..} ob_type, pointer to char data, int nd, pointer to int dimensions, pointer to int strides, pointer to struct _object {..} base, pointer to struct {..} descr, int flags, pointer to struct _object {..} weakreflist}, pointer to struct _object {int ob_refcnt, pointer to struct _typeobject {..} ob_type}) returning int "=" pointer to void
"numpy/core/src/arrayobject.c", line 7280: warning: assignment type mismatch:
        pointer to function(pointer to void, pointer to void, int, pointer to void, pointer to void) returning void "=" pointer to void
"numpy/core/src/multiarraymodule.c", line 6441: undefined symbol: SIGINT
cc: acomp failed for numpy/core/src/multiarraymodule.c
error: Command "/opt/SUNWspro-6u2/bin/cc -DNDEBUG -O -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core/src -Inumpy/core/include -Ibuild/src.solaris-2.8-sun4u-2.4/numpy/core -Inumpy/core/src -Inumpy/core/include -I/usr/ra/pyssg/Python-2.4.2/include/python2.4 -c numpy/core/src/multiarraymodule.c -o build/temp.solaris-2.8-sun4u-2.4/numpy/core/src/multiarraymodule.o" failed with exit status 2

Chris
From: Bill B. <wb...@gm...> - 2006-08-24 04:41:55

On 8/24/06, Sebastian Haase <ha...@ms...> wrote:
>> I'm not sure what this question is asking, so I'll answer what I think
>> it is asking.
>>
>> The mean, min, max, and average functions are *not* ufuncs. They are
>> methods of particular ufuncs.
>
> Yes - that's what I wanted to hear! I'm just trying to bring in the
> "user's" point of view: not thinking about how they are implemented
> under the hood, mean, min, max, and average have a very similar
> "feeling" to them as "abs".

While we're on the subject of the "user's" point of view, the term
"ufunc" is not very new-user friendly, yet it gets slung around fairly
often. I'm not sure what to do about it exactly, but maybe for starters
it would be nice to add a concise definition of "ufunc" to the numpy
glossary: http://www.scipy.org/Numpy_Glossary.

Can anyone come up with such a definition? Here's my stab at it:

    ufunc: A function that operates element-wise on arrays.

But I have a feeling there's more to it than that.

--bb
From: Sebastian H. <ha...@ms...> - 2006-08-24 04:22:38

Travis Oliphant wrote:
> Sebastian Haase wrote:
>> On Wednesday 23 August 2006 18:37, Travis Oliphant wrote:
>>> David M. Cooke wrote:
>>>> On Wed, 23 Aug 2006 16:22:52 -0700
>>>> Sebastian Haase <ha...@ms...> wrote:
>>>>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote:
>>>>>> The thing that I find I keep forgetting is that abs() is a
>>>>>> built-in, but other simple functions are not. So it's abs(foo),
>>>>>> but numpy.floor(foo) and numpy.ceil(foo). And then there's
>>>>>> round(), which is a built-in but can't be used with arrays, so
>>>>>> numpy.round_(foo). Seems like it would be more consistent to just
>>>>>> add a numpy.abs() and numpy.round().
>>>>> Regarding the original subject:
>>>>> a) "absolute" is impractically too much typing, and
>>>>> b) I just thought some (module-) functions might be "forgotten" to
>>>>>    be put in as (object-) methods ... !?
>>>> Four-line change, so I added a.abs() (three lines for array, one
>>>> for MaskedArray).
>>> While I appreciate its proactive nature, I don't like this change
>>> because it adds another "ufunc" as a method. Right now, I think conj
>>> is the only other method like that.
>>>
>>> Instead, I like better the idea of adding abs, round, max, and min
>>> to the "non-import-*" namespace of numpy.
>> How does this compare with mean, min, max, average?
> I'm not sure what this question is asking, so I'll answer what I think
> it is asking.
>
> The mean, min, max, and average functions are *not* ufuncs. They are
> methods of particular ufuncs.

Yes - that's what I wanted to hear! I'm just trying to bring in the
"user's" point of view: not thinking about how they are implemented
under the hood, mean, min, max, and average have a very similar
"feeling" to them as "abs". I'm hoping this ("seeing things from the
user's p.o.v.") can stay like that for as long as possible! Numpy
should be focused on "scientists, not programmers".

(This is just why I posted this comment about "arr.abs()" - but if
there is a good reason not to have this for "simplicity reasons 'under
the hood'", I can see that perfectly fine!)

- Sebastian
From: David C. <da...@ar...> - 2006-08-24 03:04:50

David M. Cooke wrote:
> On Wed, 23 Aug 2006 11:45:29 -0700
> Travis Oliphant <oli...@ie...> wrote:
>
>> I'm working on some macros that will allow extensions to be
>> "interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the
>> implementation is complicated by the possibility of threads and
>> making sure to handle clean-up code correctly when the interrupt
>> returns.

This is funny, I was just thinking about that yesterday. This is a
major problem when writing C extensions in matlab (the manual says use
the matlab allocator instead of malloc/new/whatever, but when you call
a library, you cannot do that...).

> Best way I can see this is to have a SIGINT handler installed that
> sets a global variable, and check that every so often. It's such a
> good way that Python already does this -- Parser/intrcheck.c sets the
> handler, and you can use PyOS_InterruptOccurred() to check if one
> happened.

This is the way I do it when writing extensions under matlab. I am by
no means knowledgeable about these kinds of things, but this is the
simplest solution I have come up with so far. I would guess that
because it uses one global variable, it should not matter which thread
receives the signal?

> So something like
>
>     while (long running loop) {
>         if (PyOS_InterruptOccurred()) goto error;
>         ... useful stuff ...
>     }
>     error:
>
> This could be abstracted to a set of macros (with Perry's syntax):
>
>     NPY_SIG_INTERRUPTABLE
>     while (long loop) {
>         NPY_CHECK_SIGINT;
>         .. more stuff ..
>     }
>     NPY_SIG_END_INTERRUPTABLE
>
> where NPY_CHECK_SIGINT would do a longjmp().

Is there really a need for a longjmp? What I simply do in this case is
check the global variable, and if its value changes, goto the normal
error handling.

Let's say you already have good error handling in your function, as
Travis described in his email:

    status = do_stuff();
    if (status < 0) {
        goto cleanup;
    }

Then, to handle SIGINT, you need a global variable got_sigint which is
modified by the signal handler, and you check its value (the exact type
of this variable is platform specific; on linux, I am using volatile
sig_atomic_t, as recommended by the GNU C doc):

    /* status is 0 if everything is OK */
    status = do_stuff();
    if (status < 0) {
        goto cleanup;
    }
    sigprocmask (SIG_BLOCK, &block_sigint, NULL);
    if (got_sigint) {
        got_sigint = 0;
        goto cleanup;
    }
    sigprocmask (SIG_UNBLOCK, &block_sigint, NULL);

So the error handling does not need to be modified, and no longjmp is
needed? Or maybe I don't understand what you mean.

I think the case proposed by Perry is too restrictive: it is really
common to use external libraries where we do not know whether they use
memory allocation inside the processing, and there is a need to clean
that up too.

> Or come up with a good (fast) way to run stuff in another process :-)

This sounds a bit overkill, and a pain to implement for different
platforms? The checking of signals should be fast, but it has a cost
(you have to use a branch) which prevents it from being called too
often inside a loop, for example.

David
From: Travis O. <oli...@ie...> - 2006-08-24 02:12:04

Sebastian Haase wrote:
> On Wednesday 23 August 2006 18:37, Travis Oliphant wrote:
>> David M. Cooke wrote:
>>> On Wed, 23 Aug 2006 16:22:52 -0700
>>> Sebastian Haase <ha...@ms...> wrote:
>>>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote:
>>>>> The thing that I find I keep forgetting is that abs() is a
>>>>> built-in, but other simple functions are not. So it's abs(foo),
>>>>> but numpy.floor(foo) and numpy.ceil(foo). And then there's
>>>>> round(), which is a built-in but can't be used with arrays, so
>>>>> numpy.round_(foo). Seems like it would be more consistent to just
>>>>> add a numpy.abs() and numpy.round().
>>>> Regarding the original subject:
>>>> a) "absolute" is impractically too much typing, and
>>>> b) I just thought some (module-) functions might be "forgotten" to
>>>>    be put in as (object-) methods ... !?
>>> Four-line change, so I added a.abs() (three lines for array, one
>>> for MaskedArray).
>> While I appreciate its proactive nature, I don't like this change
>> because it adds another "ufunc" as a method. Right now, I think conj
>> is the only other method like that.
>>
>> Instead, I like better the idea of adding abs, round, max, and min
>> to the "non-import-*" namespace of numpy.
> How does this compare with mean, min, max, average?

I'm not sure what this question is asking, so I'll answer what I think
it is asking.

The mean, min, max, and average functions are *not* ufuncs. They are
methods of particular ufuncs.

The abs() should not be slow (because it calls the __abs__ method,
which for arrays is mapped to the ufunc absolute). Thus, there is one
more layer of indirection, which will only matter for small arrays.

-Travis
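Travis's "one more layer of indirection" refers to how the abs() built-in dispatches: abs(x) calls type(x).__abs__(x), and for ndarrays that slot is wired to the ufunc absolute. A minimal stand-in class (hypothetical, not NumPy code) shows the dispatch:

```python
class Box:
    """Toy wrapper showing how the abs() built-in dispatches.

    abs(x) simply invokes type(x).__abs__(x). For NumPy arrays this
    slot is mapped to the ufunc `absolute`, so abs(arr) costs one
    extra indirection -- noticeable only for very small arrays.
    """

    def __init__(self, value):
        self.value = value

    def __abs__(self):
        # abs(box) lands here via type(box).__abs__(box).
        return Box(abs(self.value))


b = abs(Box(-7))
print(b.value)  # 7
```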
From: Sebastian H. <ha...@ms...> - 2006-08-24 02:02:18

On Wednesday 23 August 2006 18:37, Travis Oliphant wrote:
> David M. Cooke wrote:
>> On Wed, 23 Aug 2006 16:22:52 -0700
>> Sebastian Haase <ha...@ms...> wrote:
>>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote:
>>>> The thing that I find I keep forgetting is that abs() is a
>>>> built-in, but other simple functions are not. So it's abs(foo),
>>>> but numpy.floor(foo) and numpy.ceil(foo). And then there's
>>>> round(), which is a built-in but can't be used with arrays, so
>>>> numpy.round_(foo). Seems like it would be more consistent to just
>>>> add a numpy.abs() and numpy.round().
>>> Regarding the original subject:
>>> a) "absolute" is impractically too much typing, and
>>> b) I just thought some (module-) functions might be "forgotten" to
>>>    be put in as (object-) methods ... !?
>> Four-line change, so I added a.abs() (three lines for array, one
>> for MaskedArray).
> While I appreciate its proactive nature, I don't like this change
> because it adds another "ufunc" as a method. Right now, I think conj
> is the only other method like that.
>
> Instead, I like better the idea of adding abs, round, max, and min to
> the "non-import-*" namespace of numpy.

How does this compare with mean, min, max, average?

BTW: I think my choice is now settled on the builtin call: abs(arr) --
short and sweet. (As long as it is really supposed to *always* work and
is not *slow* in any way !?)

Cheers,
Sebastian
From: Travis O. <oli...@ie...> - 2006-08-24 01:37:30

David M. Cooke wrote:
> On Wed, 23 Aug 2006 16:22:52 -0700
> Sebastian Haase <ha...@ms...> wrote:
>
>> On Wednesday 23 August 2006 16:12, Bill Baxter wrote:
>>> The thing that I find I keep forgetting is that abs() is a built-in,
>>> but other simple functions are not. So it's abs(foo), but
>>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which
>>> is a built-in but can't be used with arrays, so numpy.round_(foo).
>>> Seems like it would be more consistent to just add a numpy.abs() and
>>> numpy.round().
>>
>> Regarding the original subject:
>> a) "absolute" is impractically too much typing and
>> b) I just thought some (module-) functions might be "forgotten" to be
>>    put in as (object-) methods ... !?
>
> Four-line change, so I added a.abs() (three lines for array, one
> for MaskedArray).

While I appreciate its proactive nature, I don't like this change
because it adds another "ufunc" as a method. Right now, I think conj is
the only other method like that.

Instead, I like better the idea of adding abs, round, max, and min to
the "non-import-*" namespace of numpy.
From: Fernando P. <fpe...@gm...> - 2006-08-23 23:46:18

On 8/23/06, Bill Baxter <wb...@gm...> wrote:
> The thing that I find I keep forgetting is that abs() is a built-in,
> but other simple functions are not. So it's abs(foo), but
> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which
> is a built-in but can't be used with arrays, so numpy.round_(foo).
> Seems like it would be more consistent to just add a numpy.abs() and
> numpy.round().
>
> But I guess there's nothing numpy can do about it... you can't name a
> method the same as a built-in function, right? That's why we have
> numpy.round_() instead of numpy.round(), no?
> [...goes and checks]
> Oh, you *can* name a module function the same as a built-in. Hmm... so
> then why isn't numpy.round_() just numpy.round()? Is it just so "from
> numpy import *" won't hide the built-in?

Technically numpy could simply have (illustrated with round, but this
works also with abs)

    round = round_

and simply NOT include round in the __all__ list. This would make
numpy.round(x) work (clean syntax) while "from numpy import *" would
not clobber the builtin round.

That sounds like a decent solution to me.

Cheers,

f
From: David M. C. <co...@ph...> - 2006-08-23 23:40:49

On Wed, 23 Aug 2006 16:22:52 -0700
Sebastian Haase <ha...@ms...> wrote:

> On Wednesday 23 August 2006 16:12, Bill Baxter wrote:
>> The thing that I find I keep forgetting is that abs() is a built-in,
>> but other simple functions are not. So it's abs(foo), but
>> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which
>> is a built-in but can't be used with arrays, so numpy.round_(foo).
>> Seems like it would be more consistent to just add a numpy.abs() and
>> numpy.round().
>
> Regarding the original subject:
> a) "absolute" is impractically too much typing and
> b) I just thought some (module-) functions might be "forgotten" to be
>    put in as (object-) methods ... !?

Four-line change, so I added a.abs() (three lines for array, one
for MaskedArray).

-- 
|>|\/|<
David M. Cooke                    http://arbutus.physics.mcmaster.ca/dmc/
co...@ph...
From: David M. C. <co...@ph...> - 2006-08-23 23:35:56

On Wed, 23 Aug 2006 11:45:29 -0700
Travis Oliphant <oli...@ie...> wrote:

> I'm working on some macros that will allow extensions to be
> "interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the
> implementation is complicated by the possibility of threads and making
> sure to handle clean-up code correctly when the interrupt returns.

For writing clean-up code, here's some prior art on adding exceptions
to C:

    http://www.ossp.org/pkg/lib/ex/ (BSD license)
    http://adomas.org/excc/ (GPL'd, so no good)
    http://ldeniau.web.cern.ch/ldeniau/html/exception/exception.html
        (no license given)

The last one has functions that allow you to add pointers (and their
deallocation functions) to a list so that they can be deallocated when
an exception is thrown. (You don't necessarily need something like
these libraries, but I thought I'd throw it in here, because it's along
the same lines.)

> Step 2:
>
> Implementation. I have the idea to have a single interrupt handler
> (defined globally in NumPy) that basically uses longjmp to return to
> the section of code corresponding to the thread that is handling the
> interrupt. I had thought to use a global variable containing a linked
> list of jmp_buf structures with a thread-id attached
> (PyThread_get_thread_ident()) so that the interrupt handler can search
> it to see if the thread has registered a return location. If it has
> not, then the interrupt handler will just return normally. In this way
> a thread that calls setjmpbuf will be sure to return to the correct
> place when it handles the interrupt.

Signals and threads don't mix well at *all*. With POSIX semantics,
synchronous signals (ones caused by the thread itself) should be sent
to the handler for that thread. Asynchronous ones (like SIGINT for
Ctrl-C) will be sent to an *arbitrary* thread. (Apple, for instance,
doesn't make any guarantees on which thread gets it:
http://developer.apple.com/qa/qa2001/qa1184.html)

Best way I can see this is to have a SIGINT handler installed that sets
a global variable, and check that every so often. It's such a good way
that Python already does this -- Parser/intrcheck.c sets the handler,
and you can use PyOS_InterruptOccurred() to check if one happened. So
something like

    while (long running loop) {
        if (PyOS_InterruptOccurred()) goto error;
        ... useful stuff ...
    }
    error:

This could be abstracted to a set of macros (with Perry's syntax):

    NPY_SIG_INTERRUPTABLE
    while (long loop) {
        NPY_CHECK_SIGINT;
        .. more stuff ..
    }
    NPY_SIG_END_INTERRUPTABLE

where NPY_CHECK_SIGINT would do a longjmp().

Or come up with a good (fast) way to run stuff in another process :-)

-- 
|>|\/|<
David M. Cooke                    http://arbutus.physics.mcmaster.ca/dmc/
co...@ph...
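The flag-and-poll scheme David describes (a handler that only sets a global variable, with the long-running loop checking it periodically) can be sketched in pure Python with the signal module. NumPy's C version would use a volatile sig_atomic_t flag instead, but the control flow is the same:

```python
import signal

interrupted = False

def _sigint_handler(signum, frame):
    # The handler does the bare minimum: record that SIGINT arrived.
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, _sigint_handler)

def long_running_loop(n):
    """Poll the flag once per iteration, like a PyOS_InterruptOccurred() check."""
    done = 0
    for _ in range(n):
        if interrupted:
            break  # jump to cleanup instead of doing a longjmp()
        done += 1
    return done

# Simulate Ctrl-C by delivering SIGINT to ourselves (Python 3.8+).
signal.raise_signal(signal.SIGINT)
print(long_running_loop(1000))  # 0 -- the flag was set before the loop ran
```

Note that only the main thread of the main interpreter may set signal handlers in Python, which sidesteps the arbitrary-thread delivery problem David mentions: the flag is a single global, so it does not matter where the signal lands.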
From: Sebastian H. <ha...@ms...> - 2006-08-23 23:22:54

On Wednesday 23 August 2006 16:12, Bill Baxter wrote:
> The thing that I find I keep forgetting is that abs() is a built-in,
> but other simple functions are not. So it's abs(foo), but
> numpy.floor(foo) and numpy.ceil(foo). And then there's round() which
> is a built-in but can't be used with arrays, so numpy.round_(foo).
> Seems like it would be more consistent to just add a numpy.abs() and
> numpy.round().
>
> But I guess there's nothing numpy can do about it... you can't name a
> method the same as a built-in function, right? That's why we have
> numpy.round_() instead of numpy.round(), no?
> [...goes and checks]
> Oh, you *can* name a module function the same as a built-in. Hmm... so
> then why isn't numpy.round_() just numpy.round()? Is it just so "from
> numpy import *" won't hide the built-in?

That is my theory... Even though I try to advertise

    import numpy as N

a) "N." is not *that* much extra typing
b) it is much clearer to read code and see what is special from numpy
   vs. what is builtin
c) (most important for me): I use PyShell/PyCrust, and when I type the
   '.' after 'N' I get a nice pop-up list reminding me of all the
   functions in numpy ;-)

Regarding the original subject:
a) "absolute" is impractically too much typing and
b) I just thought some (module-) functions might be "forgotten" to be
   put in as (object-) methods ... !?

Cheers,
Sebastian

> --bill
>
> On 8/24/06, David M. Cooke <co...@ph...> wrote:
>> On Wed, 23 Aug 2006 13:51:02 -0700
>> Sebastian Haase <ha...@ms...> wrote:
>>> Hi!
>>> numpy renamed the *function* abs to absolute.
>>> Most functions like mean, min, max, average, ...
>>> have an equivalent array *method*.
>>>
>>> Why is absolute left out?
>>> I think it should be added.
>>
>> We've got __abs__ :-)
>>
>>> Furthermore, looking at some lines of code that have multiple calls
>>> to absolute [ like f(absolute(a), absolute(b), absolute(c)) ]
>>> I think "some people" might prefer less typing and less reading,
>>> like f( a.abs(), b.abs(), c.abs() ).
>>>
>>> One could even consider not requiring the "function call"
>>> parenthesis '()' at all - but I don't know about further
>>> implications that might have.
>>
>> eh, no. things that return new arrays should be functions. (As
>> opposed to views of existing arrays, like a.T)
>>
>>> PS: is there any performance hit in using the built-in abs function?
>>
>> Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays
>> have).
From: Bill B. <wb...@gm...> - 2006-08-23 23:12:40

The thing that I find I keep forgetting is that abs() is a built-in,
but other simple functions are not. So it's abs(foo), but
numpy.floor(foo) and numpy.ceil(foo). And then there's round() which is
a built-in but can't be used with arrays, so numpy.round_(foo). Seems
like it would be more consistent to just add a numpy.abs() and
numpy.round().

But I guess there's nothing numpy can do about it... you can't name a
method the same as a built-in function, right? That's why we have
numpy.round_() instead of numpy.round(), no?
[...goes and checks]
Oh, you *can* name a module function the same as a built-in. Hmm... so
then why isn't numpy.round_() just numpy.round()? Is it just so "from
numpy import *" won't hide the built-in?

--bill

On 8/24/06, David M. Cooke <co...@ph...> wrote:
> On Wed, 23 Aug 2006 13:51:02 -0700
> Sebastian Haase <ha...@ms...> wrote:
>
>> Hi!
>> numpy renamed the *function* abs to absolute.
>> Most functions like mean, min, max, average, ...
>> have an equivalent array *method*.
>>
>> Why is absolute left out?
>> I think it should be added.
>
> We've got __abs__ :-)
>
>> Furthermore, looking at some lines of code that have multiple calls
>> to absolute [ like f(absolute(a), absolute(b), absolute(c)) ]
>> I think "some people" might prefer less typing and less reading,
>> like f( a.abs(), b.abs(), c.abs() ).
>>
>> One could even consider not requiring the "function call" parenthesis
>> '()' at all - but I don't know about further implications that might
>> have.
>
> eh, no. things that return new arrays should be functions. (As opposed
> to views of existing arrays, like a.T)
>
>> PS: is there any performance hit in using the built-in abs function?
>
> Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays
> have).
From: Travis O. <oli...@ie...> - 2006-08-23 22:21:44
|
Travis Oliphant wrote:
> Frank Conradie wrote:
>
>> Hi Sven and Jordan
>>
>> I wish to add my name to this list ;-) I just got the same error
>> trying to compile for Python 2.3 with the latest candidate mingw32,
>> following the instructions at
>> http://www.scipy.org/Installing_SciPy/Windows .
>>
>> Hopefully someone can shed some light on this error - what I've been
>> able to find on the net explains something about C not allowing
>> dynamic initializing of global variables, whereas C++ does...?
>>
> Edit line 690 of ndarrayobject.h to read
>
> #define NPY_USE_PYMEM 0
>
> Hopefully that should fix the error.

You will also have to alter line 11189 so that _Py_HashPointer is replaced by 0 or NULL
From: Travis O. <oli...@ie...> - 2006-08-23 22:14:02
|
Frank Conradie wrote:
> Hi Sven and Jordan
>
> I wish to add my name to this list ;-) I just got the same error
> trying to compile for Python 2.3 with the latest candidate mingw32,
> following the instructions at
> http://www.scipy.org/Installing_SciPy/Windows .
>
> Hopefully someone can shed some light on this error - what I've been
> able to find on the net explains something about C not allowing
> dynamic initializing of global variables, whereas C++ does...?

Edit line 690 of ndarrayobject.h to read

#define NPY_USE_PYMEM 0

Hopefully that should fix the error.

-Travis
From: Frank C. <fr...@qf...> - 2006-08-23 21:47:45
|
Hi Sven and Jordan

I wish to add my name to this list ;-) I just got the same error trying to compile for Python 2.3 with the latest candidate mingw32, following the instructions at http://www.scipy.org/Installing_SciPy/Windows .

Hopefully someone can shed some light on this error - what I've been able to find on the net explains something about C not allowing dynamic initializing of global variables, whereas C++ does...?

- Frank

Sven Schreiber wrote:
> Jordan Dawe schrieb:
>
>> I just tried to compile numpy-1.0b3 under windows using mingw. I got
>> this error:
>>
> ...
>
>> Any ideas?
>>
> No, except that I ran into the same problem... Hooray, I'm not alone ;-)
> -sven
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
From: Perry G. <pe...@st...> - 2006-08-23 21:41:12
|
I thought it might be useful to give a little more context on the problems involved in handling such interruptions. Basically, one doesn't want to exit out of places where data structures are incompletely set up, or where memory isn't properly handled, so that later references to these don't cause segfaults (or experience memory leaks). There may be more exotic cases, but typically many extensions are as simple as:

1) Figure out what inputs one has and the mode of computation needed
2) allocate and set up output arrays
3) do computation, possibly lengthy, over arrays
4) free temporary arrays and other data structures
5) return results

Typically, the interrupt handling is needed only for 3, the part the extension may spend a very long time in. Steps 1, 2, 4, and 5 are not worth interrupting, and they are the areas that may cause the most trouble. I'd argue that many things could do with a very simple structure where section 3 is bracketed with macros. Something like:

NPY_SIG_INTERRUPTABLE
[long looping computational code that doesn't create or destroy objects]
NPY_SIG_END_INTERRUPTABLE

followed by the normal code to do 4 and 5. What happens during an interrupt is that the computation code is exited and execution resumes right after the closing macro. Very often one doesn't care that the results in the arrays may be incomplete, or invalid numbers (presumably you know that, since you just did control-C, but maybe I'm confused).

Any reason that most cases couldn't be handled with something this simple? All cases can't be handled with this, but most should, I think.

Perry

On Aug 23, 2006, at 2:45 PM, Travis Oliphant wrote:

> I'm working on some macros that will allow extensions to be
> "interruptable" (i.e. with Ctrl-C). The idea came from SAGE but the
> implementation is complicated by the possibility of threads and making
> sure to handle clean-up code correctly when the interrupt returns.
>
> I'd like to get this into 1.0 final. Anything needed will not require
> re-compilation of extension modules built for 1.0b2, however. This will
> be strictly "extra", and if an extension module doesn't use it there will
> be no problems.
>
> Step 1:
>
> Define the interface. Here are a couple of draft proposals. Please
> comment on them.
>
> 1) General purpose interface
>
> NPY_SIG_TRY {
>     [code]
> }
> NPY_SIG_EXCEPT(signum) {
>     [interrupt handling return]
> }
> NPY_SIG_ELSE
>     [normal return]
>
> The idea of signum is to hold the signal actually caught.
>
> 2) Simpler interface
>
> NPY_SIG_TRY {
>     [code]
> }
> NPY_SIG_EXCEPT_GOTO(label)
>     [normal return]
>
> label:
>     [interrupt handling return]
>
> C extensions often use the notion of a label to handle failure code.
>
> If anybody has any thoughts on this, they would be greatly appreciated.
>
> Step 2:
>
> Implementation. I have the idea to have a single interrupt handler
> (defined globally in NumPy) that basically uses longjmp to return to the
> section of code corresponding to the thread that is handling the
> interrupt. I had thought to use a global variable containing a linked
> list of jmp_buf structures with a thread-id attached
> (PyThread_get_thread_ident()) so that the interrupt handler can search
> it to see if the thread has registered a return location. If it has
> not, then the interrupt handler will just return normally. In this way
> a thread that calls setjmp will be sure to return to the correct
> place when it handles the interrupt.
>
> Concern:
>
> My thinking is that this mechanism should work whether or not the GIL is
> held, so that we don't have to worry about whether or not the GIL is held
> except in the interrupt-handling case (when Python exceptions are to be
> set). But, honestly, this gets very confusing.
>
> The sigjmp / longjmp mechanism for handling interrupts is not
> recommended under Windows (not sure about mingw), but there we could
> possibly use Microsoft's __try and __except extension to implement it.
> Initially, it would be "un-implemented" on platforms where it didn't work.
>
> Any comments are greatly appreciated
>
> -Travis
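[The bracketing idea Perry describes can be sketched in pure Python with the signal module. This is only a rough analogue, not the proposed C macros: the real NPY_SIG_TRY / NPY_SIG_EXCEPT design would longjmp out of the computation, while this sketch merely catches SIGINT inside a delimited section and lets the loop poll a flag. The name InterruptibleSection is invented for illustration.]

```python
import signal

class InterruptibleSection:
    """Catch SIGINT only inside this block -- a rough Python analogue of
    bracketing step 3 with macros (the real C macros would jump out of
    the loop instead of polling a flag)."""

    def __enter__(self):
        self.interrupted = False
        # Install our handler, remembering the previous one (step "setup").
        self._old = signal.signal(signal.SIGINT, self._handler)
        return self

    def _handler(self, signum, frame):
        self.interrupted = True  # record the interrupt; don't raise

    def __exit__(self, *exc_info):
        signal.signal(signal.SIGINT, self._old)  # restore previous handler
        return False

total = 0
with InterruptibleSection() as sec:
    for i in range(1_000_000):
        if sec.interrupted:              # the long computational loop
            break                        # bail out, results may be partial
        total += i
        if i == 10:                      # simulate the user pressing Ctrl-C
            signal.raise_signal(signal.SIGINT)
# steps 4 and 5 (cleanup, return) then run normally after the block
```

Here the loop exits early with partial results (total is 0+1+...+10 = 55), which matches Perry's point that incomplete array contents are acceptable after a deliberate Ctrl-C.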
From: Sven S. <sve...@gm...> - 2006-08-23 21:35:11
|
Jordan Dawe schrieb:
> I just tried to compile numpy-1.0b3 under windows using mingw. I got
> this error:
...
> Any ideas?

No, except that I ran into the same problem... Hooray, I'm not alone ;-)
-sven
From: Jordan D. <jd...@eo...> - 2006-08-23 21:27:36
|
I just tried to compile numpy-1.0b3 under windows using mingw. I got this error:

compile options: '-Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include -Ic:\Python24\PC -c'
gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include -Ic:\Python24\PC -c numpy\core\src\multiarraymodule.c -o build\temp.win32-2.4\Release\numpy\core\src\multiarraymodule.o
In file included from numpy/core/src/multiarraymodule.c:64:
numpy/core/src/arrayobject.c:6643: initializer element is not constant
numpy/core/src/arrayobject.c:6643: (near initialization for `PyArray_Type.tp_free')
numpy/core/src/arrayobject.c:10312: initializer element is not constant
numpy/core/src/arrayobject.c:10312: (near initialization for `PyArrayMultiIter_Type.tp_free')
numpy/core/src/arrayobject.c:11189: initializer element is not constant
numpy/core/src/arrayobject.c:11189: (near initialization for `PyArrayDescr_Type.tp_hash')
error: Command "gcc -mno-cygwin -O2 -Wall -Wstrict-prototypes -Ibuild\src.win32-2.4\numpy\core\src -Inumpy\core\include -Ibuild\src.win32-2.4\numpy\core -Inumpy\core\src -Inumpy\core\include -Ic:\Python24\include -Ic:\Python24\PC -c numpy\core\src\multiarraymodule.c -o build\temp.win32-2.4\Release\numpy\core\src\multiarraymodule.o" failed with exit status 1

Any ideas?

Jordan Dawe
From: David M. C. <co...@ph...> - 2006-08-23 21:13:50
|
On Wed, 23 Aug 2006 13:51:02 -0700
Sebastian Haase <ha...@ms...> wrote:

> Hi!
> numpy renamed the *function* abs to absolute.
> Most functions like mean, min, max, average, ...
> have an equivalent array *method*.
>
> Why is absolute left out?
> I think it should be added.

We've got __abs__ :-)

> Furthermore, looking at some lines of code that have multiple calls to
> absolute [ like f(absolute(a), absolute(b), absolute(c)) ]
> I think "some people" might prefer less typing and less reading,
> like f( a.abs(), b.abs(), c.abs() ).
>
> One could even consider not requiring the "function call" parenthesis '()'
> at all - but I don't know about further implications that might have.

eh, no. things that return new arrays should be functions. (As opposed to
views of existing arrays, like a.T)

> PS: is there any performance hit in using the built-in abs function?

Shouldn't be: abs(x) looks for the x.__abs__() method (which arrays have).

--
|>|\/|<
/--------------------------------------------------------------------------\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|co...@ph...
From: Sebastian H. <ha...@ms...> - 2006-08-23 20:56:32
|
Hi!

numpy renamed the *function* abs to absolute. Most functions like mean, min, max, average, ... have an equivalent array *method*.

Why is absolute left out? I think it should be added.

Furthermore, looking at some lines of code that have multiple calls to absolute [ like f(absolute(a), absolute(b), absolute(c)) ], I think "some people" might prefer less typing and less reading, like f( a.abs(), b.abs(), c.abs() ).

One could even consider not requiring the "function call" parenthesis '()' at all - but I don't know about further implications that might have.

Thanks,
Sebastian Haase

PS: is there any performance hit in using the built-in abs function?
From: Travis O. <oli...@ie...> - 2006-08-23 19:59:31
|
Charles R Harris wrote:
> Yes,
>
> On 8/19/06, Joris De Ridder <jo...@st... <mailto:jo...@st...>> wrote:
>
> > Hi,
> >
> > Some of my code is heavily using large complex arrays, and I noticed
> > a speed regression in NumPy 1.0b2 with respect to Numarray. The following
> > code snippet is an example that on my computer runs 10% faster in Numarray
> > than in NumPy.
> >
> > >>> A = zeros(1000000, complex)
> > >>> for k in range(1000):
> > ...     A *= zeros(1000000, complex)
> >
> > (I replaced 'complex' with 'Complex' in Numarray.) Can anyone confirm this?

The multiply (and divide) functions for complex arrays were using the "generic interface" (probably because this is what Numeric did), which calls out to a function to compute each result. I just committed a switch to "in-line" the multiplication and division calls. The speed-up is about that 10%. Now, my numarray and NumPy versions of the test are very similar.

-Travis
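[Joris's timing can be reproduced with a sketch along these lines. The array size and repetition count are shrunk so it runs quickly; absolute times are machine-dependent, and the ~10% figure comes from the 2006 thread, not from anything this snippet asserts.]

```python
import timeit
import numpy as np

n = 100_000                       # the thread used 1_000_000 elements
a = np.full(n, 1 + 2j)
b = np.full(n, 3 - 1j)

# Time the elementwise complex multiply that the thread found ~10% slower
# before the in-lining change landed in SVN.
t = timeit.timeit(lambda: a * b, number=50)
print(f"{n} complex multiplies x 50 reps: {t:.4f} s")

# Sanity check: the elementwise product matches scalar complex arithmetic,
# (1+2j)*(3-1j) = 5+5j.
assert np.allclose(a * b, np.full(n, 5 + 5j))
```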