From: Pete S. <pe...@sh...> - 2000-07-26 05:44:15
|
> Ideas I've tossed around include creating an interface similar to the one
> surface objects have for audio buffers, adding adaptors for PIL and PST,
> creating an overly complex system for 'compiling' surface operations for speed,
> and some other things.
> I don't know. So I'm asking. What is it you want?

interfacing with PIL should be pretty easy with the "fromstring" and "tostring" style functions that PIL uses. Numpy interfaces with PIL in this manner. (in fact with my unofficial numpy extension, you can already use numpy to get the images into PIL (or so i assume, have yet to test something like that :] ))

i also like the idea of easier integration with "C" extensions, but this will prove a bigger challenge than breaking up the source, i'd imagine. (although it will likely require it, since currently all the global objects are created in the header file (DOH))

well, you asked for it... here's a sampling of things i'd like to see for pysdl
--------------------------------------------------------------------

first is throwing exceptions instead of returning -value error codes. now that i'm into python, i love the flexibility of exceptions. instead of checking every function call for its return code, you just call all the functions and put the error handling at the end.

a wrapper for some line/polygon filling library. i assume SDL has something like this already, but every now and then i keep thinking i'll want this. i'd especially like this if it was done on a library tuned for SDL, so i can get fast-as-possible filled polygons.

break the source code into smaller files. :] actually i really do want this one. i would love to hear a discussion of the best possible ways to get this done. i've done this in a reasonably clean and efficient way. it isn't thoroughly planned out, but i was pleased with how it turned out. if it can be used as a starting point for discussion it will have served its purpose!

numeric python implementation. i've got a crude sample of this going. (the other) peter and i were able to attain some amazing speed gains. it's not for everyone (and i think well discussed :] ) but it beats the pants off trying to drum up a C extension to do basic image operations. anyways, those speedups we saw were on pretty basic operations. things like a 640x480 radial gradient went from about 20 seconds to under 2 seconds (and that was still with python doing the color conversions, and a generic non-optimized numpy-2-sdl transfer). i also have my basic "fire" demo running under numpy that would be foolhardy without numpy.

cleanup of the "event" handling routines. currently i think it's too much "C" and too little "python". i've thought out a couple simple changes that make it much smaller and more graceful than event handling with C-SDL.

some higher-level classes written in python. simple base classes for things like sprites and eventhandlers (i'm enjoying mine!). the reason these should be in python to begin with is so they can be easily inherited and extended.

i still haven't got to using the sound routines in pysdl, but it seems like a much cleaner implementation could be made from what there is in C-SDL (and we've currently duplicated for pysdl). instead of the current "audio.play(mysound)", something more like "mysound.play()". i realize there's some issues involved, and i haven't really looked at the audio interfaces (yet), but from what i currently see it seems a little backwards.

heh, this one just occurred to me, but check with PERL and other SDL language bindings to make sure we're not missing out on any great ideas.

at least the thought of switching to using distutils. i've scanned through their docs and examples, and it seems powerful. perhaps best to wait for the next python release, which includes distutils standard. but i would love to hear anyone's experience if they've maintained any packages with this.

this will probably just have to wait until someone out there needs something like this and just goes ahead and creates it. but to help a pysdl game rely on 3rd-party extensions written in python, there could be some prebuilt "rexec" modes that can run python code in a 'gaming-oriented' restricted environment. i dunno, it's just something that pops into my head once in awhile.

i looked into the SDL_net library to see if it was worth including with pysdl. after checking it all out i saw nothing that wasn't offered by the standard python "socket" libraries. i'd say this library is worth ignoring, until someone can bring forth some facts that i overlooked.

finally, i'll end with a simple one. just a global pysdl function to get the current screen. when writing my game between multiple modules i'm always having to pass the screen (and other info) around between all the modules. there may be other pysdl "state" info like this that is useful to be able to access from any of the modules in my game (i just haven't found em yet).

i like peter nicolai's idea for the web plugin. but i fear that SDL is not the base library of choice for this type of project, since SDL has little control over the display window. (ie, impossible to "embed" into other windows)

as for other platforms, i have done limited testing on IRIX, which has worked great for me. (IRIX being another platform that SDL has recently supported)

> A side note to Pete: I haven't been ignoring you, I've just been
> trying to decide what I think of it before giving you a response

no trouble. i've decided to just start writing stuff and deal with issues as they come up. i figure with this approach there are two benefits. first, i'll have real-world experience to make my "suggestions" a lot more worthwhile. second, there is a top notch pysdl game out there! |
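[editor's note] pete's first wish above -- exceptions instead of -1 return codes -- looks roughly like this; the function and exception names below are made up for illustration and are not real pysdl API:

```python
# Hypothetical sketch of the two error-handling styles discussed above.

# C-SDL style: every call returns a status code the caller must check.
def set_mode_status(width, height):
    if width <= 0 or height <= 0:
        return -1          # error code the caller has to remember to test
    return 0

# Pythonic style: raise an exception and let one handler cover many calls.
class SDLError(Exception):
    pass

def set_mode(width, height):
    if width <= 0 or height <= 0:
        raise SDLError("invalid display size %dx%d" % (width, height))

try:
    set_mode(640, 480)     # fine
    set_mode(-1, 480)      # raises, jumping straight to the handler
except SDLError as err:
    message = str(err)
```

The win is that a whole sequence of calls needs only one handler at the end, instead of an `if` after every call.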
From: Frank W. <war...@ho...> - 2000-07-19 21:51:47
|
Folks, I just wanted to let you all know that I have made a first hack at integrating Numpy with GDAL (my geospatial raster access library), with the purpose of integrating with OpenEV (a geospatial data viewing application). The work is still relatively preliminary, but might be of interest to some. OpenEV and GDAL are raster oriented, and will only be of interest for those working with 2D matrices and an interest in treating them as rasters.

A few notes of interest:

o OpenEV and GDAL support a variety of data types. Unlike most raster packages they can support UnsignedInt8, Int16, Int32, Float32, Float64, ComplexInt16, ComplexInt32, ComplexFloat32 and ComplexFloat64 data types.

o OpenEV will scale for display, but will also show real underlying data values. It also includes special display modes for complex rasters to show phase, magnitude, phase & magnitude, real and imaginary views of complex layers.

o GDAL can be used to save real and complex data to TIFF (and a few other less well known formats), as well as loading various raster formats common in remote sensing and GIS.

The following is a minimal example of using GDAL and numpy together:

    from Numeric import *
    import gdalnumeric

    x = gdalnumeric.LoadFile( "/u/data/png/guy.png" )
    print x.shape
    x = 255.0 - x
    gdalnumeric.SaveArray( x, "out.tif", "GTiff" )

More information is available at:

    http://www.remotesensing.org/gdal
    http://openev.sourceforge.net/

Finally, a thanks to those who have developed and maintained Numeric Python. It is a great package, and I look forward to using it more.

Best regards,

---------------------------------------+--------------------------------------
I set the clouds in motion - turn up   | Frank Warmerdam, war...@ho...
light and sound - activate the windows | http://members.home.com/warmerda
and watch the world go round - Rush    | Geospatial Programmer for Rent |
From: J. M. <mi...@cg...> - 2000-07-18 16:34:15
|
Hi Syrus, Wow, a GPL'd Macsyma! Very cool, thanks for the pointer. That does exactly what I need. Thanks too for the pointer to the Sci Apps for Linux page. regards - Judah Judah Milgram mi...@cg... College Park Press http://www.cgpp.com P.O. Box 143, College Park MD, USA 20741 +001 (301) 422-4626 (422-3047 fax) |
From: Syrus Nemat-N. <sy...@sh...> - 2000-07-18 02:15:34
|
Hello Judah, I accidentally deleted your original message, but found it again in the Numerical list archives. I believe that there are free software packages that would do what you need. For example, look at Maxima which is under the GPL now (free'd version of Macsyma). If you are non-commercial, you can get a free version of MuPad. Also, there are number theory packages available that can probably do polynomial factorization. I suggest that you look at the Scientific Applications for Linux page: http://SAL.KachinaTech.COM/index.shtml Note that a lot of the software there can also be run on other platforms including both UNIX and Windows. You can even search for "polynomial factorization" and get a number of hits. Cheers. Syrus. -- -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Syrus C Nemat-Nasser, PhD | Center of Excellence for Advanced Materials UCSD Department of Physics | UCSD Department of Mechanical <sy...@uc...> | and Aerospace Engineering |
From: J. M. <mi...@cg...> - 2000-07-16 19:36:56
|
Hi - I need to do prime factorization of some polynomials with integer coefficients. It would seem the Berlekamp algorithm would be what I want but I also admit to almost total ignorance on the subject. I'm having trouble even understanding how the algorithm works so I'm probably not the best person to be implementing it. But I may give it a try. Is anyone already working on a Python implementation of this? Or any tips on how to proceed? Pointers to other freeware solutions? I prefer something that can be re-written in or modularized into Python but even if not, that's no big deal. The alternative is to purchase one of the symbolic algebra packages like Maple or Mathematica etc. thanks Judah Judah Milgram mi...@cg... P.O. Box 8376, Langley Park, MD 20787 (301) 422-4626 (-3047 fax) |
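[editor's note] For what it's worth, here is a naive sketch of pulling integer roots out of an integer-coefficient polynomial (the rational-root special case, far short of the Berlekamp algorithm Judah mentions); all names are illustrative and this is not any package's API:

```python
# A naive sketch (not Berlekamp): find linear factors (x - r) for integer
# roots r, which must divide the constant term.

def eval_poly(coeffs, x):
    # coeffs are highest-degree first, e.g. x^2 - 1 -> [1, 0, -1]
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def divide_linear(coeffs, r):
    # synthetic division of the polynomial by (x - r); assumes r is a root
    quotient = [coeffs[0]]
    for c in coeffs[1:-1]:
        quotient.append(quotient[-1] * r + c)
    return quotient

def integer_roots(coeffs):
    const = coeffs[-1]
    if const == 0:
        return [0]
    divisors = [d for d in range(1, abs(const) + 1) if const % d == 0]
    roots = []
    for d in divisors:
        for r in (d, -d):
            if eval_poly(coeffs, r) == 0:
                roots.append(r)
    return roots

roots = integer_roots([1, 0, -1])   # x^2 - 1 = (x - 1)(x + 1)
```

A real implementation would recurse on the quotient from `divide_linear` and then attack the remaining irreducible-over-the-rationals part with something like Berlekamp or Zassenhaus.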
From: Jonathan M. G. <jon...@va...> - 2000-07-11 15:35:55
|
I tried updating my distutils and numpy from their CVS sites last night to try building, and found the following problems. Distutils from :pserver:dis...@cv...:/projects/cvsroot has a most recent tag of Distutils-0_8_2. Where can I get CVS access to Distutils 0.9, or is this a tar-only distribution?

On a related note, the CVS of Numerical Python from SourceForge has a most recent tag of V15_2. If I get the most recent (untagged) version of NumPy, there is no lapack_lite_library directory, so "setup.py install" fails. Does this mean that Numerical Python 15.3 has not yet been checked into the CVS repository?

It's a bit confusing when the CVS repositories are not up to date with the most recent releases. Could someone clarify for me, please.

Jonathan Gilligan |
From: John A. T. <tu...@bl...> - 2000-07-10 14:34:12
|
>>>>> "PFD" == Paul F Dubois <pau...@ho...>: PFD> The error in the test has been fixed by removing the test for now. That's going in my quotes file - I realize it's legit in this case, but you have to admit that, esp. out of context, it's pretty funny... -- John A. Turner, Senior Research Associate Blue Sky Studios, One South Rd, Harrison, NY 10528 http://www.blueskystudios.com/ (914) 825-8319 |
From: Greg W. <gw...@py...> - 2000-07-09 01:29:51
|
On 06 July 2000, Paul F. Dubois said: > This version REQUIRES the new Distutils. If you install Distutils into > 1.6a2, remember (which I didn't at first) to go delete the distutils > directory in the Python library directory. If you don't you will get a > message while running setup informing you that your Distutils must be > updated. The "Official Recommendation" is just to rename that directory away: that way you'll have (eg.) lib/python1.6/distutils-orig and lib/python1.6/site-packages/distutils, and the site-packages version will take precedence without having clobbered the standard library too badly. See the Distutils README.txt. The alternative was support in the Distutils for replacing/upgrading bits of the standard library; Guido was, shall we say, non-receptive to that idea. Oh well. > This CVS version also separates out LAPACK/BLAS so that using the "lite" > version of the libraries supplied with numpy is now optional. A small > attempt is made to find the library or the user can edit setup.py to set the > locations if that fails. Oh good, does that mean it'll take less than 20 minutes to compile NumPy on my pokey old 100 MHz Pentium? ;-) Greg -- Greg Ward - Unix bigot gw...@py... http://starship.python.net/~gward/ Software patents SUCK -- boycott amazon.com! |
From: Paul F. D. <pau...@ho...> - 2000-07-08 16:45:53
|
The project page has a patch manager for contributions. Please note that Travis is in the middle of a substantial reimplementation, and so I think nobody would want to do a lot of optimizing right now.

> -----Original Message-----
> From: num...@li...
> [mailto:num...@li...]On Behalf Of Pete Shinners
> Sent: Saturday, July 08, 2000 12:56 AM
> To: num...@so...
> Subject: [Numpy-discussion] Optimizing Numpy
>
> i've been throwing my hand and getting more speed out of
> numpy. i've birthed a little fruit from my efforts. my area
> of use is specifically with 2D arrays with image info. [...]
>
> i'm not sure how people 'officially' make contributions
> to the code, but i hope this is easy enough to merge. i
> also hope this is accepted (or at least reviewed) for
> inclusion in the next release. [...]
>
> numpy is amazing, and i'm glad to get a chance to better it! |
From: Paul F. D. <pau...@ho...> - 2000-07-08 16:42:43
|
Setting the "savespace" property encourages Numeric to keep the results of calculations at smaller precisions. The error in the test has been fixed by removing the test for now. It appears the result of the test is platform-dependent. There is a list of fixed and open bugs on the project page http://sourceforge.net/projects/numpy.

> -----Original Message-----
> From: num...@li...
> [mailto:num...@li...]On Behalf Of Pete Shinners
> Sent: Saturday, July 08, 2000 12:36 AM
> To: num...@so...
> Subject: [Numpy-discussion] savebit questions and errors
>
> i'm developing with some Numeric stuff and don't have
> a solid grasp on what the 'SAVEBIT' stuff is. it's not
> mentioned in the docs at all.
>
> what's also strange is the the numpy testing "test_all.py"
> fails in the final sections when testing the savebit stuff.
>
> can someone give a quicky description of what this flag
> can be used for?
>
> also... i assume someone already knows about the error in
> the testing? i ran it on both a MIPS5000 IRIX system and
> an x86 win98 system and it errored out consistently on
> the test line 506.
>
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> http://lists.sourceforge.net/mailman/listinfo/numpy-discussion |
From: Pete S. <pe...@sh...> - 2000-07-08 07:59:07
|
i've been trying my hand at getting more speed out of numpy, and i've birthed a little fruit from my efforts. my area of use is specifically with 2D arrays of image info.

anyways, i've attached an 'arrayobject.c' file that is from the 15.3 release and optimized. in my test case the code ran about twice the speed of the original 15.3 release. i went and tested out other uses and found a pretty consistent 20% speedup. (for example, i cranked the mandelbrot demo resolution to 320 x 200 and removed the 'print' command and it went from a runtime of 5.5 to 4.5)

i'm not sure how people 'officially' make contributions to the code, but i hope this is easy enough to merge. i also hope this is accepted (or at least reviewed) for inclusion in the next release.

optimizing further...
i also plan on a few more optimizations. the least is going to be a 'C' version of 'arrayrange' and probably 'ones'. the current arrayrange is pretty slow (slower than the standard python 'range' in all my tests).

the other optimization is a bit more drastic, and i'd like to hear feedback from more 'numpy experts' before making the change. in the file 'arraytypes.c', with all the arrays of conversion functions, i've found that the conversion routines are a little too 'elaborate'. these routines are only ever called from one line, and the two "increment/skip" arguments are always hardcoded to one. there are two possible roads to speedup the conversion of array types.

1-- optimize all the conversion routines so they aren't so generic. this should be a pretty easy fix and should offer noticeable speed.

2-- do a better job of converting arrays. instead of creating a whole new array of the new type and simply copying that, create a conversion method that simply converts the data directly into the destination array. this would mean using all those conversion routines to their full power. this would offer more speed than the first option, but is also a lot more work.

well, what do people think? my initial thought is to make a quick python script to take 'arraytypes.c' and convert all the functions to be a quicker version.

numpy is amazing, and i'm glad to get a chance to better it! |
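[editor's note] pete's option 2 (converting straight into a destination array, with no intermediate copy) is essentially what later array libraries settled on. A sketch in modern NumPy terms, not the Numeric 15.3 internals under discussion:

```python
import numpy as np

src = np.arange(5, dtype=np.float64) * 1.5   # [0.0, 1.5, 3.0, 4.5, 6.0]

# Option 1 style: astype() allocates a whole new array, then copies.
copy = src.astype(np.int32)

# Option 2 style: convert element by element straight into an existing
# destination buffer -- no intermediate array of the new type is created.
dst = np.empty(5, dtype=np.int32)
dst[...] = src            # the dtype conversion happens during the copy

same = bool((copy == dst).all())
```

Both paths truncate the floats to [0, 1, 3, 4, 6]; the difference is only in how many temporaries get allocated along the way.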
From: Pete S. <pe...@sh...> - 2000-07-08 07:38:54
|
i'm developing with some Numeric stuff and don't have a solid grasp on what the 'SAVEBIT' stuff is. it's not mentioned in the docs at all.

what's also strange is that the numpy testing "test_all.py" fails in the final sections when testing the savebit stuff.

can someone give a quick description of what this flag can be used for?

also... i assume someone already knows about the error in the testing? i ran it on both a MIPS5000 IRIX system and an x86 win98 system and it errored out consistently on the test line 506. |
From: Greg W. <gw...@py...> - 2000-07-07 23:20:15
|
On 06 July 2000, Paul F. Dubois said: > I found an error in the previous setup.py; it was installing headers > in include/python1.6/Numerical instead of Numeric. This apparently > gets fixed if you change the name of the package (which I otherwise > thought didn't do anything.) That's a feature. If Joe Blow releases an extension that requires the headers from NumPy, he should just have to specify, "I require the headers for <fill-in-the-blank>" and have Distutils take care of the -I paths for him. (It doesn't do this currently, but it could and should!) Don't tell me I'm the only one who's confused about whether it's "NumPy", "Numerical Python", or "Numeric Python", and whether the above blank should be filled in with "Numeric" or "Numerical". BTW, the distribution name is also used, obviously, to create source and built distributions. So naming the header file directory after it is not without precedent. (It does have the subtle side-effect that distribution names should be valid as part of the filename in C #include statements. I have no idea what restrictions that imposes... but it's probably just common sense to stick to [a-zA-Z0-9_-] in distribution names and filenames.) Greg -- Greg Ward - Unix geek gw...@py... http://starship.python.net/~gward/ "Question authority!" "Oh yeah? Says who?" |
From: Paul F. D. <du...@ll...> - 2000-07-06 17:10:41
|
I have made even more changes to Numeric this morning, separating off FFT and MA as separate packages and adding the package RNG. I found an error in the previous setup.py; it was installing headers in include/python1.6/Numerical instead of Numeric. This apparently gets fixed if you change the name of the package (which I otherwise thought didn't do anything.) |
From: Paul F. D. <du...@ll...> - 2000-07-06 14:06:10
|
I checked in a new version of setup.py for Numerical that corresponds to Distutils-0.9. This is a modification of the setup_numpy.py that Greg has in Distutils. This version REQUIRES the new Distutils. If you install Distutils into 1.6a2, remember (which I didn't at first) to go delete the distutils directory in the Python library directory. If you don't you will get a message while running setup informing you that your Distutils must be updated. This CVS version also separates out LAPACK/BLAS so that using the "lite" version of the libraries supplied with numpy is now optional. A small attempt is made to find the library or the user can edit setup.py to set the locations if that fails. I have not tested this on Windows and I would bet it needs help; we probably won't cut a new Numerical release until this is resolved and Python 2.0 is out so that the needed version of Distutils is standard. Suggestions for improvements would be most welcome. Paul |
From: <ro...@ho...> - 2000-07-04 07:59:52
|
Hm. I found a bug in one of my programs that was due to the difference in behavior between a 'f' shape () array and a true python scalar:

    import Numeric
    a = Numeric.zeros((50,50),'f')
    b = []
    for i in range(50):
        d = a[i,i]
        b.append(d)
    nb = Numeric.array(b)
    print nb.shape     # Expect (50,) but get (50,1)

BTW: Why does "a=Numeric.zeros((50,),'f'); d=a[i]" return a python scalar, and the above script a shape () array?

Rob

--
===== ro...@ho... http://www.hooft.net/people/rob/ =====
===== R&D, Nonius BV, Delft http://www.nonius.nl/ =====
===== PGPid 0xFA19277D ========================== Use Linux! ========= |
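[editor's note] For comparison, the same construction in modern NumPy (not the 2000-era Numeric Rob was using, where it surprisingly yielded (50,1)):

```python
import numpy as np

# Same construction as Rob's script, in modern NumPy.
a = np.zeros((50, 50), dtype=np.float32)

d = a[0, 0]                           # all-integer indexing -> a scalar value
diag = [a[i, i] for i in range(50)]   # collect the diagonal one by one
nb = np.array(diag)

shape = nb.shape                      # (50,) -- what Rob expected
```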
From: <hi...@di...> - 2000-06-26 17:58:10
|
> Did somebody already treat such problems?? Within Numpy or outside
> Numpy. Do I have to install the full blown version of Lapack??

Yes, but it's not a big deal. If you then want a nicer high-level interface, look at the module LinearAlgebra for guidance; input parameters to LAPACK are treated rather consistently, so you shouldn't have to invent anything really new.

--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
------------------------------------------------------------------------------- |
From: Vanroose W. <van...@ru...> - 2000-06-25 13:09:22
|
Dear NumPythoneers,

I have to solve the generalized Eigenvalue problem A u = lambda B u. There are lapack procedures for this: http://netlib2.cs.utk.edu/lapack/lug/node37.html. These procedures are not present in the lite versions included in the NumPy distribution.

Did somebody already treat such problems?? Within Numpy or outside Numpy. Do I have to install the full blown version of Lapack??

Wim Vanroose |
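[editor's note] Assuming B is invertible, the generalized problem A u = lambda B u can be reduced to the ordinary problem (B^-1 A) u = lambda u. A sketch using modern NumPy rather than the 2000-era wrappers discussed here; production code would call the dedicated generalized-eigenvalue LAPACK routines (e.g. dggev) instead, since inverting B is numerically fragile when B is ill-conditioned:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Reduce A u = lambda B u to (B^-1 A) u = lambda u.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(B) @ A)
eigvals = sorted(eigvals.real)    # here: 1.5 (= 3/2) and 2.0 (= 2/1)
```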
From: Chris M. <my...@tc...> - 2000-06-25 00:02:51
|
Michel Sanner wrote:
> I built Numeric on a Dec Alpha under OSF1 V4.0. It built fine but when I ran
> it I witnessed strange behavior.
>
> a = Numeric.identity(4)
> a.shape = (16,)
>
> would raise an exception about the size of the array needing to remain
> the same ???

I have seen the same behavior on a Dec Alpha running RedHat Linux, with Numeric compiled with gcc. The other random pieces of Numeric that I tried seemed to work correctly.

Chris

==========================================================================
Chris Myers          Cornell Theory Center
--------------------------------------------------------------------------
636 Rhodes Hall      email: my...@tc...
Cornell University   phone: (607) 255-5894 / fax: (607) 254-8888
Ithaca, NY 14853     http://www.tc.cornell.edu/~myers
--------------------------------------------------------------------------
"To thine own self be blue." - Polonious Funk
========================================================================== |
From: Michel S. <sa...@sc...> - 2000-06-23 19:26:55
|
Hi, I posted this message on the python-list a while ago and did not hear anything, so I try here :)

I built Numeric on a Dec Alpha under OSF1 V4.0. It built fine, but when I ran it I witnessed strange behavior.

    a = Numeric.identity(4)
    a.shape = (16,)

would raise an exception about the size of the array needing to remain the same ???

Using the debugger I found in arrayobject.c:2201

    if (PyArray_As1D(&shape, (char **)&dimensions, &n, PyArray_LONG) == -1)
        return NULL;

After this call shape[0] is 4 BUT shape[1] is 0 !

I changed the code to

    if (PyArray_As1D(&shape, (char **)&dimensions, &n, PyArray_INT) == -1)
        return NULL;

and got the right result.

Did anyone else run into this kind of problems? What is the correct way to fix that?

thanks

-Michel

--
-----------------------------------------------------------------------
>>>>>>>>>> AREA CODE CHANGE <<<<<<<<< we are now 858 !!!!!!!

Michel F. Sanner Ph.D.    The Scripps Research Institute
Assistant Professor       Department of Molecular Biology
                          10550 North Torrey Pines Road
Tel. (858) 784-2341       La Jolla, CA 92037
Fax. (858) 784-2860
sa...@sc... http://www.scripps.edu/sanner
----------------------------------------------------------------------- |
From: <hi...@di...> - 2000-06-15 18:25:22
|
> If I move things like FFT out of the core and make them separate packages, I
> am left with a choice: make them real packages, which means their usage
> would change, or structure the packages so that everything gets installed in
> the Python search path the way it does now.
>
> The first choice is better for the future, walling everything off into
> namespaces properly. The second choice doesn't break any existing code.

There's a compromise: change everything to a nice package structure, and provide a compatibility module for some transition period. Then everyone can adapt their code in a reasonable time. Ultimately the compatibility modules can disappear.

--
-------------------------------------------------------------------------------
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.55.69
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
------------------------------------------------------------------------------- |
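[editor's note] Konrad's compromise can be sketched as a forwarding module left at the old import location. The package and function names below are illustrative only (using in-memory modules so the sketch is self-contained), not the layout that was actually adopted:

```python
# Sketch: move code into a package, and leave a forwarding module at the
# old top-level name for a transition period.
import sys
import types

# Pretend this is the new package location, e.g. a hypothetical Numeric.FFT.
new_pkg = types.ModuleType("Numeric.FFT")
new_pkg.fft = lambda data: list(reversed(data))   # stand-in function
sys.modules["Numeric.FFT"] = new_pkg

# The compatibility shim: old-style "import FFT" still works, but simply
# re-exports the new package's contents (a real shim would be a one-line
# file doing "from Numeric.FFT import *").
shim = types.ModuleType("FFT")
shim.__dict__.update(new_pkg.__dict__)
sys.modules["FFT"] = shim

import FFT
result = FFT.fft([1, 2, 3])
```

Old code keeps working unchanged through the shim, new code imports from the package, and the shim can be deleted once the transition period ends.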
From: Janko H. <jh...@if...> - 2000-06-06 13:48:35
|
Hello, I started to use PyUnit to build another test framework for NumPy. The ideas behind this approach are:

-- Build on a known and maybe standard framework.
-- Make tests in such a way that they can be handled with Python, so helper functions can be written to perform single tests from the command line.
-- The framework should facilitate the writing of new tests by everyone, not only for NumPy routines but also for more complex or derived programs.
-- The tests should be self-contained; each test is done in a clean environment.
-- The whole test suite can be run even if some tests are failing. This is important for all these cases where there are known errors in the system libraries.
-- Define and document what actually is tested. There are different criteria for tests on numerical functions, like testing the interface, the numerically valid input range, type coercions, the algorithm and so on.

In the end I want to have a module which can be imported, and the contained classes can be used to write new tests, perform tests and generate reports. Together with a naming convention it should be possible to automate the testing of new modules.

I have written some ideas down at: http://lisboa.ifm.uni-kiel.de:80080/NumPy/NaFwiki/TestFramework

There is also a module which is NOT the proposed framework, but which demonstrates how the code of the tests looks like. Are there some comments, objections or new ideas?

With regards, __Janko |
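[editor's note] A minimal sketch of what a PyUnit-style test case could look like (PyUnit later became the standard-library unittest module); the clip function here is a stand-in under test, not part of Janko's proposed framework:

```python
import unittest

def clip(values, low, high):
    # stand-in for a numerical routine under test
    return [min(max(v, low), high) for v in values]

class ClipInterfaceTest(unittest.TestCase):
    # setUp runs before every test, giving each one a clean environment,
    # as the proposal asks
    def setUp(self):
        self.data = [-2, 0, 5, 9]

    def test_clips_both_ends(self):
        self.assertEqual(clip(self.data, 0, 5), [0, 0, 5, 5])

    def test_empty_input(self):
        self.assertEqual(clip([], 0, 5), [])

# The whole suite runs even if individual tests fail.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClipInterfaceTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A naming convention (`test_*` methods, `*Test` classes) lets the loader discover tests automatically, which is exactly the automation the proposal is after.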
From: Huaiyu Z. <HZ...@kn...> - 2000-06-06 01:03:09
|
Are you craving Matlab/Octave-style expressions in Python? (For example,
A*B is matrix multiplication, not elementwise.) Now you can have them.
I've just made a package MatPy and started a SourceForge project for it.
It is implemented as wrappers around the Numeric and Gnuplot packages.
You can find the source code, tests and docs on the home page

http://MatPy.sourceforge.net

The main reason I wrote this package is that I'm tired of dealing with
NewAxis and getting "Frame not aligned" exceptions. Now matrices and
vectors behave as one would expect from linear algebra. Examples:

>>> from MatPy.Matrix import *
>>> A = rand((20,5))
>>> x = rand((5,1))
>>> y = A*x
>>> b = solve(A,y)
>>> norm(b-x)
1.16043535672e-15
>>> print x
[ 0.276 0.553 0.733 0.388 0.5 ]
>>> print x.T()
[ 0.276 0.553 0.733 0.388 0.5 ]
>>> print x.T()*x
[ 1.32 ]
>>> print x*x.T()
[ 0.0763 0.153  0.203  0.107  0.138
  0.153  0.306  0.406  0.214  0.277
  0.203  0.406  0.538  0.284  0.367
  0.107  0.214  0.284  0.15   0.194
  0.138  0.277  0.367  0.194  0.25  ]
>>> z = x + rand(x.shape)*1j
>>> z.H()
[ 0.276-0.606j 0.553-0.376j 0.733-0.933j 0.388-0.636j 0.5-0.314j ]
>>> z.H()*z
[ 3.2+0j ]
>>> norm(z)**2
3.2026003449

There are also matrix and elementwise versions of functions: expm and
exp, sqrtm and sqrt, etc.

Questions, comments, suggestions and help are all very welcome. A future
plan is an efficient interface to Octave, to leverage its large code
base.

Enjoy!

Huaiyu <hz...@us...>
|
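[Editor's note] MatPy's source is not reproduced here, but its core idea -- a thin wrapper class whose `*` means the linear-algebra product rather than the elementwise one -- can be sketched in plain Python. The class and method names below are illustrative, not MatPy's actual API:

```python
class Mat:
    """Minimal matrix wrapper: `*` is matrix multiplication."""

    def __init__(self, rows):
        self.rows = [list(r) for r in rows]
        self.shape = (len(self.rows), len(self.rows[0]))

    def T(self):
        # Transpose, as in MatPy's x.T().
        return Mat(zip(*self.rows))

    def __mul__(self, other):
        n, k = self.shape
        k2, m = other.shape
        # Mismatched inner dimensions fail loudly, instead of silently
        # broadcasting the way elementwise `*` on Numeric arrays would.
        assert k == k2, "Frame not aligned"
        return Mat([[sum(self.rows[i][p] * other.rows[p][j]
                         for p in range(k))
                     for j in range(m)]
                    for i in range(n)])

x = Mat([[1.0], [2.0], [3.0]])   # a 3x1 column vector
print((x.T() * x).rows)          # inner product: [[14.0]]
print((x * x.T()).rows)          # 3x3 outer product
```

This mirrors the `x.T()*x` (scalar-valued inner product) versus `x*x.T()` (rank-one outer product) distinction shown in the session above.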
From: Paul G. <get...@mi...> - 2000-06-05 17:58:45
|
> I would dare to say that what you really need is
>
> C=A[:,NewAxis]*B
>
> C will be shaped as (7731,220), which is what you probably need.

Based on a simple (small) test, this is exactly what I want; the result
of the NewAxis computation is identical to a matrix multiply of the
full-sized matrices. Thanks. Now I just have to read the docs until I
understand precisely how and why this works. :)

--
101 USES FOR A DEAD MICROPROCESSOR
(62) Fungus trellis
|
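[Editor's note] Numeric's `NewAxis` survives as numpy's `np.newaxis`. A small sketch of why the broadcast matches the full matrix multiply: scaling each row of B by the corresponding entry of A is the same as multiplying by `diag(A)`, without ever materialising the diagonal matrix. The shapes here are small stand-ins for the (7731,) and (7731, 220) arrays in the thread:

```python
import numpy as np

A = np.array([2.0, 3.0, 5.0])        # shape (3,)   -- stands in for (7731,)
B = np.arange(6.0).reshape(3, 2)     # shape (3, 2) -- stands in for (7731, 220)

# Broadcasting: A[:, np.newaxis] has shape (3, 1); each row of B is
# scaled by the matching entry of A, giving shape (3, 2).
C = A[:, np.newaxis] * B

# Same result as the full matrix multiply by diag(A), but O(n*m) work
# and no (n, n) intermediate.
assert np.allclose(C, np.diag(A) @ B)
print(C.shape)
```

The broadcast form is the one to prefer at the sizes in the thread: `diag(A)` alone would be a 7731 x 7731 dense matrix.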
From: Jean-Bernard A. <jb...@ph...> - 2000-06-05 17:46:33
|
Hi Numeric people!

Thank you very much, Janne, for your suggestion of a compilation
problem. I just recompiled with the debug option and the
Numeric.arange(2)*1j problem disappeared.

Further testing of my Numeric 1.7 code shows that the code still breaks:

Python 1.5.2 (#9, May 30 2000, 15:08:12)  [GCC 2.95.2 19991024 (release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Hello from .pythonrc.py
>>> from Numeric import *
>>> import Numeric
>>> Numeric.__version__
'11'
>>> a = arange(2, typecode=Complex) % 2
>>> a.typecode()
'O'
>>> add.outer(a**2,a**2)

Program received signal SIGSEGV, Segmentation fault.
0x4006b570 in malloc ()
(gdb) where

(see the end of the message for debug info)

I have no idea whether this is another bug or another compilation
problem. If it is an installation or compilation problem, I am surprised
it occurs with standard Linux and gcc. But I understand that it can
happen.

Proposal: a test for these problems should be included in test_all.py or
another test in the module. I installed with Distutils-0.8.2 and tested
with test_all.py without notification of any problem.

PS: I have the problem with Numeric 11 (distribution 14 compiled with
egcc).

Thanks a lot for your help,

Jean-Bernard

On 2 Jun 2000, Janne Sinkkonen wrote:

> Charles G Waldman <cg...@fn...> writes:
>
> > > >>> Numeric.__version__
> > > '11'
>
> > >>> Numeric.__version__
> > '15.2'
> > >>> Numeric.arange(2)*1j
> > array([ 0.+0.j,  0.+1.j])
>
> I tested for the bug in the Numeric version 11 on the following:
>
> Python 1.5.2+ (#7, Nov 13 1999, 17:39:22)  [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
> Python 1.5.2+ (#4, Oct  6 1999, 22:18:42)  [C] on linux2 (alpha with cc)
> Python 1.5.2 (#1, Apr 18 1999, 16:03:16)  [GCC pgcc-2.91.60 19981201 (egcs-1.1.1)] on linux2
>
> The bug was not present on these, nor in Numeric 15.2 on an SGI. So the
> problem seems to be not in the source but in the installation (or the
> compiler).
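[Editor's note] Jean-Bernard's proposal -- ship a regression test for this crash in test_all.py -- translates naturally to modern numpy, where Numeric's `typecode=Complex` followed by `%` yields an object-dtype ('O') array. A hedged sketch of such a test (the construction via `astype` is an approximation of what Numeric's `%` on a Complex array produced, not an exact reproduction):

```python
import numpy as np

# Rebuild the reported sequence: a 2-element complex array stored with
# object dtype, squared, then combined with add.outer.
a = np.arange(2).astype(complex).astype(object)
out = np.add.outer(a ** 2, a ** 2)

# The original segfault happened while building the printed
# representation, so forcing repr() is the actual regression check:
# a crash here would abort the interpreter before any assertion runs.
text = repr(out)
print(out.shape, len(text) > 0)
```

The value of having this in the shipped test suite is exactly what the report implies: test_all.py passed on a build whose interactive use segfaulted, so the suite was not exercising object-array repr at all.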
>
> --
> Janne

GNU gdb 4.17.m68k.objc.threads.hwwp.fpu.gnat
Copyright 1998 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i486-pc-linux-gnu"...
(gdb) run
Starting program: /users/jbaddor/bin/python
Python 1.5.2 (#9, May 30 2000, 15:08:12)  [GCC 2.95.2 19991024 (release)] on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
Hello from .pythonrc.py
>>> from Numeric import *
>>> import Numeric
>>> Numeric.__version__
'11'
>>> a = arange(2, typecode=Complex) % 2
>>> a.typecode()
'O'
>>> add.outer(a**2,a**2)

Program received signal SIGSEGV, Segmentation fault.
0x4006b570 in malloc ()
(gdb) where
#0  0x4006b570 in malloc ()
#1  0x4006b0f5 in malloc ()
#2  0x8070c16 in PyString_FromStringAndSize (str=0x80d982c "[0j , (1+0j) , ",
    size=14) at stringobject.c:95
#3  0x80711b9 in string_slice (a=0x80d9818, i=0, j=14) at stringobject.c:381
#4  0x80621ca in PySequence_GetSlice (s=0x80d9818, i1=0, i2=-1) at abstract.c:869
#5  0x807920e in apply_slice (u=0x80d9818, v=0x0, w=0x80a39cc) at ceval.c:2552
#6  0x807699c in eval_code2 (co=0x80b4e88, globals=0x80b4d78, locals=0x0,
    args=0x8102758, argcount=5, kws=0x810276c, kwcount=1, defs=0x80e1554,
    defcount=2, owner=0x0) at ceval.c:938
#7  0x8077bbc in eval_code2 (co=0x80b4e88, globals=0x80b4d78, locals=0x0,
    args=0x80d32cc, argcount=6, kws=0x80d32e4, kwcount=0, defs=0x80e1554,
    defcount=2, owner=0x0) at ceval.c:1612
#8  0x8077bbc in eval_code2 (co=0x80e7818, globals=0x80b4d78, locals=0x0,
    args=0x80d8ba0, argcount=6, kws=0x80d8bb8, kwcount=0, defs=0x80e19bc,
    defcount=5, owner=0x0) at ceval.c:1612
#9  0x8077bbc in eval_code2 (co=0x80d5870, globals=0x80d95b0, locals=0x0,
    args=0x8102a0c, argcount=1, kws=0x0, kwcount=0, defs=0x80e65c4,
    defcount=3, owner=0x0) at ceval.c:1612
#10 0x807909d in call_function (func=0x80b4d58, arg=0x8102a00, kw=0x0)
    at ceval.c:2484
#11 0x8078c4d in PyEval_CallObjectWithKeywords (func=0x80b4d58, arg=0x8102a00,
    kw=0x0) at ceval.c:2322
#12 0x4014d405 in array_repr (self=0x80d3000) at Src/arrayobject.c:1119
#13 0x806ff52 in PyObject_Repr (v=0x80d3000) at object.c:237
#14 0x806fe5b in PyObject_Print (op=0x80d3000, fp=0x809f0c8, flags=0)
    at object.c:188
#15 0x8067e2e in PyFile_WriteObject (v=0x80d3000, f=0x80a2880, flags=0)
    at fileobject.c:1044
#16 0x8076c55 in eval_code2 (co=0x80d3598, globals=0x80a37e8, locals=0x80a37e8,
    args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0,
    owner=0x0) at ceval.c:1030
#17 0x8076008 in PyEval_EvalCode (co=0x80d3598, globals=0x80a37e8,
    locals=0x80a37e8) at ceval.c:324
#18 0x805c957 in run_node (n=0x80c7798, filename=0x8088027 "<stdin>",
    globals=0x80a37e8, locals=0x80a37e8) at pythonrun.c:887
#19 0x805bdf8 in PyRun_InteractiveOne (fp=0x809f170, filename=0x8088027 "<stdin>")
    at pythonrun.c:528
#20 0x805bc6f in PyRun_InteractiveLoop (fp=0x809f170, filename=0x8088027 "<stdin>")
    at pythonrun.c:472
#21 0x805bbb2 in PyRun_AnyFile (fp=0x809f170, filename=0x8088027 "<stdin>")
    at pythonrun.c:449
#22 0x804efd9 in Py_Main (argc=1, argv=0xbffffbb4) at main.c:287
#23 0x804ea5a in main (argc=1, argv=0xbffffbb4) at python.c:12
(gdb)
|