From: Paul F. D. <pa...@pf...> - 2001-05-02 14:04:18
|
Mr. Verveer wrote: "I recently installed this version and it unfortunately broke my own extension C code. I found that my code did not link properly anymore because of the addition of the static declaration to the API pointer (as noted in changes.txt). Since my extensions consist of several source files, this led to problems when importing my extensions. If I understand correctly, the addition of the static keyword means that the import_array() function can now only be used by extensions that are defined in a single C source file. I can easily work around this problem by defining my own API pointer and using a modified import_array() in my code, but I thought I should draw your attention to this problem in any case. Maybe another solution can be found that does not need this pointer to be declared static?" -- I tried this change to see whether it had any bad consequences. It did not appear to: Numpy itself and all its packages built OK, we were formerly not following the instructions in the Python documentation and now we are, and OS X now worked. So far so good. If I understand you correctly, this does not work properly if the extension is built in pieces. I think I see why. (Frankly, linking and dynamic loading are not something I know much about; as a user I just flounder around.) This all started because of the need to have access to the C API of one dynamic extension from another. Is there a way of institutionalizing your work-around so that others can use it? Frankly, I'm out of ideas. The community is going to have to rescue me on this one. Paul P.S. Wait. I have an idea. Konrad was the one who put all this in. He should be punished for his good deed...(:-> |
From: Gerard V. <gve...@la...> - 2001-05-01 12:37:34
|
Hi, Configuration: Python-2.1 + Numeric-20.0.0b2 on Linux-Mandrake-7.2. I have a Python interface to Numeric and the Qwt-C++ plotting library. When building this interface (a shared library) for Numeric-20.0.0b2, I noticed that the additional static in front of the importing API pointer in arrayobject.h (your OS X fix) leads to an unresolved symbol PyArray_API when building a shared library. My fix is to remove the keyword static from arrayobject.h. Has somebody else encountered this problem? I suppose that this problem should show up with any shared library using import_array() (version 20.0.0b2) on Linux. If the additional static really solves the OS X problems, maybe an #ifdef OSX ... #else ... #endif is necessary. Best regards -- Gerard |
From: Tim C. <tc...@op...> - 2001-04-30 21:17:12
|
Chris Barker wrote: > > Tim Churches wrote: > >Our problem domain involves a mix of manipulating > > very large integer arrays and then floating point calculations on > > smaller arrays, so FP speed is probably not of paramount importance, > > but memory bandwidth and clock speed probably is (perhaps, maybe, > > possibly). > > Have you checked out the PPC G4 option? Apple certainly like to > advertise how fast it can be, which is usually deceptive, but, in fact > the one place it does shine is large integer manipulations (in the > Apple literature: "some photoshop applications"). I haven't done any > speed comparisons, but we have found it to be pretty fast when using the > optimised BLAS from Absoft. > > Mac OS is pretty pathetic for this kind of application but PPC Linux or > OS X should do the trick for you. It might be worth a little > investigating. Yes, I had the same thought, but it appears that the speed of the CPU-to-main-RAM bus in the G4s leaves a lot to be desired, although the CPUs do have a largish local cache which enables lots of speed on problems with high locality. The other problem is that they don't come standard with SCSI discs and our local Apple dealer seemed unsure what to do when it came to SCSI. He had never heard of PPC Linux. Also, our local IT support people just about tolerate us running Linux on Intel hardware connected to "their" network, but the thought of throwing Apple OSs into the mix would give them conniptions, I suspect. > > By the way, just how much does that 400 MHz memory cost now? I think the memory is only 100 MHz but it is "quad-clocked" and "double ported" or somesuch. Anyway, Dell Australia are quoting about AUD$2800 for "4 x 256MB PC800 ECC RAMBUS RIMM" - that's about US$1400. Seems cheap to me, for somewhat specialised memory. Compaq were quoting nearly three times as much for the same memory modules for their equivalent workstation. Cheers, Tim C Sydney, Australia |
From: Chris B. <chr...@ho...> - 2001-04-30 16:40:45
|
Tim Churches wrote: >Our problem domain involves a mix of manipulating > very large integer arrays and then floating point calculations on > smaller arrays, so FP speed is probably not of paramount importance, > but memory bandwidth and clock speed probably is (perhaps, maybe, > possibly). Have you checked out the PPC G4 option? Apple certainly like to advertise how fast it can be, which is usually deceptive, but, in fact, the one place it does shine is large integer manipulations (in the Apple literature: "some photoshop applications"). I haven't done any speed comparisons, but we have found it to be pretty fast when using the optimised BLAS from Absoft. Mac OS is pretty pathetic for this kind of application but PPC Linux or OS X should do the trick for you. It might be worth a little investigating. By the way, just how much does that 400 MHz memory cost now? -Chris -- Christopher Barker, Ph.D. Chr...@ho... --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ |
From: Tim C. <tc...@op...> - 2001-04-29 21:02:16
|
Roy...@cc... wrote: > > Hi, this might not be what you asked for, but... > > There is a benchmark report comparing 1.7GHz P4 and 1.3 GHz Athlon on > > http://www.tech-report.com/reviews/2001q2/pentium4-1.7/index.x?pg=1 > > Mostly gaming and Windows stuff, but also a scientific benchmark > included (implementing the Hartree-Fock algorithm for those with > knowledge in quantum physics). The Athlon still has an edge even on > the scientific benchmark. > > On the other hand, the stream benchmark numbers for the P4 are some of > the most impressive I've seen. > > http://www.cs.virginia.edu/stream/ > > So if your application is like an old vector code you might be better > off with a P4, but the memory price is insane... Many thanks. I suspect that the stream benchmarks are quite relevant to our code. Maybe there is no cause for Alpha-envy after all... The costs of RDRAM for the P4 seem to have fallen dramatically in the last few months. And Intel just cut the cost of the P4 itself. Cheers, Tim C |
From: <Roy...@cc...> - 2001-04-29 18:51:03
|
Hi, this might not be what you asked for, but... There is a benchmark report comparing 1.7GHz P4 and 1.3 GHz Athlon on http://www.tech-report.com/reviews/2001q2/pentium4-1.7/index.x?pg=1 Mostly gaming and Windows stuff, but also a scientific benchmark included (implementing the Hartree-Fock algorithm for those with knowledge in quantum physics). The Athlon still has an edge even on the scientific benchmark. On the other hand, the stream benchmark numbers for the P4 are some of the most impressive I've seen. http://www.cs.virginia.edu/stream/ So if your application is like an old vector code you might be better off with a P4, but the memory price is insane... Regards, r. -- The Computer Center, University of Tromsø, N-9037 TROMSØ, Norway. phone: +47 77 64 41 07, fax: +47 77 64 41 00 Roy Dragseth, High Performance Computing System Administrator Direct call: +47 77 64 62 56. email: ro...@cc... |
From: Tim C. <tc...@op...> - 2001-04-29 01:13:30
|
Does anyone have any experience running Numpy under Linux on a Pentium 4 (P4) system? P4 boxes appear to have some attractions: a 1.7 GHz model is now available, they reputedly have a 400 MHz CPU-to-RAM bus, the Intel 850 chipset used with them supports ECC, and the cost of the ECC RDRAM needed has fallen to the point that 1 Gbyte of RAM is now quite affordable. It's the high CPU-to-RAM bandwidth which suggests that P4 systems might offer a gain in performance when used with Numpy to manipulate large arrays which is disproportionate to their higher clock speeds, compared to Pentium 3 systems, which are limited to a 133 MHz CPU-to-RAM bus. Is this borne out in practice? Ideally we would like an Alpha box, but they just cost too much from Compaq and I don't think Microway or other alternative Alpha box manufacturers are represented here in Australia. Our problem domain involves a mix of manipulating very large integer arrays and then floating point calculations on smaller arrays, so FP speed is probably not of paramount importance, but memory bandwidth and clock speed probably is (perhaps, maybe, possibly). Tim C Sydney, Australia |
From: Stuart I R. <S.I...@cs...> - 2001-04-28 14:00:11
|
Actually your example works for me too. Looks like it only works for 1D arrays: >>> a1 = array([range(5), range(5)]).astype('O') >>> a1 array([[0 , 1 , 2 , 3 , 4 ], [0 , 1 , 2 , 3 , 4 ]],'O') >>> a2 = arange(6) >>> a2 array([0, 1, 2, 3, 4, 5]) >>> a1[1,1] = a2 Traceback (most recent call last): File "<stdin>", line 1, in ? ValueError: array too large for destination >>> a1 = arange(5).astype('O') >>> a1[1] = a2 >>> a1 array([0 , [0 1 2 3 4 5] , 2 , 3 , 4 ],'O') Hmm. This looks like a bug then. I'll submit a bug report. Cheers, Stuart On Thu, 19 Apr 2001, Pete Shinners wrote: > shoot, sorry then, i just tested it and it seemed happy, let me > try again here... > > >>> a1 = arange(5).astype('O') > >>> a2 = arange(5,9) > >>> a1 > array([0 , 1 , 2 , 3 , 4 ],'O') > >>> a2 > array([5, 6, 7, 8]) |
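[Editor's note: for the record, element assignment into a multidimensional object array behaves the way Stuart expected in modern NumPy, Numeric's successor. A quick check, using the modern spellings of the same session:]

```python
import numpy as np

# Modern spelling of array([range(5), range(5)], PyObject)
a = np.array([list(range(5)), list(range(5))], dtype=object)
a[0, 0] = 4                  # scalar element: fine
a[0, 3] = {}                 # dict element: fine
a[1, 4] = np.arange(10.0)    # a whole array is stored as one element,
                             # not treated as a slice assignment
print(type(a[1, 4]).__name__, a[1, 4].shape)  # -> ndarray (10,)
```

The full index a[1, 4] addresses a single cell of the 2D object array, so the assignment stores the array object itself rather than broadcasting into a slice.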
From: Konrad H. <hi...@cn...> - 2001-04-26 15:56:00
|
> Okay, you have higher standards than me. :) I was imagining just > sticking in an '-O' somewhere, or similar. For Linux, for example, I need to replace -O2 by -O3, and only for some files, as others don't work with O3 optimization. Not so simple. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hi...@cn... Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- |
From: John J. L. <ph...@cs...> - 2001-04-26 01:54:17
|
On Wed, 25 Apr 2001, Konrad Hinsen wrote: [...] > It would be quite serious hacking. Distutils just calls the compiler > exactly as it was called for compiling the Python interpreter. To change > the optimization level, I'd have to parse the command to extract > any optimization options, remove them, and put my own instead. I can't > imagine anything more platform-dependent. [...] Okay, you have higher standards than me. :) I was imagining just sticking in an '-O' somewhere, or similar. John |
From: Jack J. <ja...@or...> - 2001-04-25 22:04:43
|
I'd like to request that the CVS repository be tagged when a distribution is made. I'm building the MacPython 2.1 distribution (which includes numeric) and I was forced to include Numeric 20.0.0b2, because there wasn't a tag that allowed me to check out 19.0 or another stable release... -- Jack Jansen | ++++ stop the execution of Mumia Abu-Jamal ++++ Jac...@or... | ++++ if you agree copy these lines to your sig ++++ www.oratrix.nl/~jack | ++++ see http://www.xs4all.nl/~tank/ ++++ |
From: Konrad H. <hi...@cn...> - 2001-04-25 18:09:37
|
> On Tue, 24 Apr 2001, Konrad Hinsen wrote: > [...] > > if I use this in an official release. Some of the code just needs to > > be compiled with the highest optimization level, and there is no way I > > can do that with Distutils. > [...] > > But you can if you're willing to hack things a little bit, surely? > Possibly even without hacking -- IIRC there are some things that are It would be quite serious hacking. Distutils just calls the compiler exactly as it was called for compiling the Python interpreter. To change the optimization level, I'd have to parse the command to extract any optimization options, remove them, and put my own instead. I can't imagine anything more platform-dependent. Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hi...@cn... Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- |
From: Tim L. <tj...@sy...> - 2001-04-25 16:36:42
|
All, It appears that Paul's changes to Numeric now allow the Numeric modules to at least be imported under Mac OS X. I'll be testing the code further in the next week or so. Thanks Paul! Cheers, Tim Lahey |
From: John J. L. <ph...@cs...> - 2001-04-24 18:24:26
|
On Tue, 24 Apr 2001, Konrad Hinsen wrote: [...] > if I use this in an official release. Some of the code just needs to > be compiled with the highest optimization level, and there is no way I > can do that with Distutils. [...] But you can if you're willing to hack things a little bit, surely? Possibly even without hacking -- IIRC there are some things that are supposed to be subclassed in there, aren't there? I don't remember the details now, but I think that subclassing the compiler class will do it, or possibly customize_compiler()...? Neither are the proper ways, if any exist, I'm sure. From the point of view of saving time, of course, this isn't ideal. John |
From: Konrad H. <hi...@cn...> - 2001-04-24 15:17:55
|
> now i'm really keyed to getting something similar for my own project. > how are you building the .RPM, i didn't see a spec file anywhere? i was > under the impression distutils still needed a SPEC file to do a proper > RPM? No, it creates one itself from the MANIFEST file plus some optional data given to the setup procedure. It takes some effort to get it to work, but presumably only once, so it looks like worth the effort. I did it for ScientificPython and I am quite happy with the result. However, I am not quite happy with other aspects of Distutils. I tried it on my Molecular Modelling Toolkit as well, but users will kill me if I use this in an official release. Some of the code just needs to be compiled with the highest optimization level, and there is no way I can do that with Distutils. I wonder if that could become a problem for NumPy as well - I wouldn't be surprised if the LAPACK, FFTPACK etc. code needed maximum optimization as well for good results. Did anyone do comparisons? Konrad. -- ------------------------------------------------------------------------------- Konrad Hinsen | E-Mail: hi...@cn... Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24 Rue Charles Sadron | Fax: +33-2.38.63.15.17 45071 Orleans Cedex 2 | Deutsch/Esperanto/English/ France | Nederlands/Francais ------------------------------------------------------------------------------- |
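[Editor's note: per-extension compiler flags did eventually become expressible without parsing the interpreter's command line: distutils-style Extension objects accept extra_compile_args, which are appended after the inherited flags (with gcc, a later -O3 wins over an earlier -O2). A sketch using modern setuptools; the module and file names are illustrative:]

```python
from setuptools import Extension

# Hypothetical speed-critical module that wants maximum optimization
fast = Extension(
    "fastmodule",
    sources=["fastmodule.c"],
    extra_compile_args=["-O3"],  # appended after the interpreter's own flags
)

# Other modules keep the interpreter's default flags unchanged
plain = Extension("plainmodule", sources=["plainmodule.c"])
```

This only appends flags; actually stripping an inherited -O2 for files that miscompile at higher levels, as Konrad needs, still requires overriding the build_ext command.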
From: Paul F. D. <pa...@pf...> - 2001-04-24 13:52:08
|
Pete wrote: wow, version 20. Numeric must be the highest version numbered project on sourceforge :] My philosophy is to change the major version number if you do ANYTHING that might require changes in client code. In this case I changed things so that if someone had their own way of building (such as a static build) they might need to change their Makefile. I suppose I overdo it but version numbers are free. I had a project that used an automatic system to up the version number every time we made a build. We got to Basis 499 but then we made the next one 5.00 and stopped that system. Basis is now at a leisurely 12.0 after 16 years. |
From: Pete S. <pe...@vi...> - 2001-04-24 01:11:28
|
> Numpy has been restructured slightly to enable use of distutils to build: wow, you really did turn things around. anyways, sounds great with the option to build all those distributions automatically (guess i'm out of a job there) :] now i'm really keyed to getting something similar for my own project. how are you building the .RPM, i didn't see a spec file anywhere? i was under the impression distutils still needed a SPEC file to do a proper RPM? anyways, i just threw a patch into the sourceforge patch manager. the code compiles and runs great on python2.0 so i thought it would be worthwhile to let the "setup.py" script also work with that version of python. i also cleaned up a couple compiler warnings. (and in the case of blas_lite.c, just turned them off with some pragma) wow, version 20. Numeric must be the highest version numbered project on sourceforge :] |
From: Thomas H. <tho...@io...> - 2001-04-23 18:13:53
|
> Eeek. It does work for me. I just wasn't doing it right. Being a stupid Unix > hacker I was trying to execute a file in Python 2.1 that said > RemoveMA-8.6.exe. I should have realized that would not remove MA-8.6. (:-> > It's not that easy on windows: The days where you would simply start a program to do something are long gone ;-). I assume you got a message-box, saying something like 'This program is normally run by windows', which seems not to be very good. Probably it should say: 'To remove the MA-8.6 package, please use Control Panel->Add/Remove Programs' or something like that. Thomas > Indeed, if I do the normal Windows Add/Remove programs, no problem. > > I do suggest renaming the file, > IWouldRemoveMA-8.6IfYouKnewHow.notexe |
From: Paul F. D. <du...@us...> - 2001-04-23 17:59:42
|
Eeek. It does work for me. I just wasn't doing it right. Being a stupid Unix hacker I was trying to execute a file in Python 2.1 that said RemoveMA-8.6.exe. I should have realized that would not remove MA-8.6. (:-> Indeed, if I do the normal Windows Add/Remove programs, no problem. I do suggest renaming the file, IWouldRemoveMA-8.6IfYouKnewHow.notexe -----Original Message----- From: dis...@py... [mailto:dis...@py...] On Behalf Of Thomas Heller Sent: Monday, April 23, 2001 10:50 AM To: Paul F. Dubois; Numpy-Discussion@Lists. Sourceforge. Net Cc: Dis...@Py... Subject: Re: [Distutils] Installers for Numeric > I did try the Windows installers and although they seem to work the > "uninstall" didn't work for me. I have no clue if that is because that > command isn't expected to work yet or not. This command _is_ expected to work (and it did indeed for me). Which problems did you have? Thomas |
From: Thomas H. <tho...@io...> - 2001-04-23 17:50:25
|
> I did try the Windows installers and although they seem to work the > "uninstall" didn't work for me. I have no clue if that is because that > command isn't expected to work yet or not. This command _is_ expected to work (and it did indeed for me). Which problems did you have? Thomas |
From: Paul F. D. <pa...@pf...> - 2001-04-23 17:31:13
|
Dear Nummies, If you picked up the source installer for 20.0.0b2 in the first hour after I released it, it will die when building "kinds". Since you probably don't care, everything else will have been done by then. I have fixed both the .tar.gz and .zip source distributions now. This egregious error gives me the chance to say that I haven't the slightest idea whether the rpm and source rpm work. I just made 'em with Distutils. You rpm freaks out there will have to check it out and if it needs fixing, fix it. But we get so many people asking for those and for the Windows installers that I thought it worth a shot. Distutils burbled along pleasantly while making them, anyway. I did try the Windows installers and although they seem to work the "uninstall" didn't work for me. I have no clue if that is because that command isn't expected to work yet or not. Well informed, aren't I? Can you do better? Please do. == Paul |
From: Paul F. D. <pa...@pf...> - 2001-04-23 16:36:15
|
Numpy has been restructured slightly to enable use of distutils to build: Source .tar.gz Source .zip Windows installer (a real installer, not just a zip you unzip) RPM Source RPM Individual Windows installers for optional packages You can get these from http://sourceforge.net/projects/numpy. Changes are: Version 20.0 Redo setup.py so that binary windows installers for Numeric, FFT, MA, etc. can be made automatically. Packages LALITE and RANLIB merged back to top level. Adjustment for BLAS in setup.py. Documentation of Numeric module made more compatible with pydoc. argmin/argmax/argsort/sort errors with axis specs fixed (bug #233805) -- also made them capable of handling args that can be converted to arrays by adding an array(a, copy=0) at the start. Fixes sum, product, cumsum, cumproduct, alltrue, sometrue to deal with zero shape and to take arguments that can be converted to arrays. MA: See changes.txt file in MA for bug fixes and improvements. New option for average to return sum of weights. Fix bug in putmask so that it handles targets of type object. Because of reference counting issues this is done in Python, not C, and would not be more efficient than doing your own loop, but we include it for completeness. In Packages add draft implementation for PEP 0242, Numerical Kinds. Add PyArray_CopyArray to the API (Thanks to Dave Grote). Add defines for cygwin. In arrayobject.h, add static declaration to importing API pointer. May solve OS X problems. Fix bug in FFT packages, added new test. (Thanks to Warren Focke) -- <a href="http://numpy.sourceforge.net">Numerical Python</a> -- a fast array facility for Python. |
From: Stuart I R. <S.I...@cs...> - 2001-04-19 12:06:45
|
Is there a standard workaround to allow you to assign a sequence to a single cell in a PyObject array? from Numeric import * # Create a 2*5 array a = array([range(5),range(5)], PyObject) a[0,0] = 4 #OK - assigns an int to element 0,0 a[0,3] = {} #OK - assigns a dict # but, trying to assign an array to element 1,4 # (or any sequence) a[1,4] = array(range(10), Float) The last assignment causes a "ValueError: array too large for destination" error. Doh!! Presumably it's trying to treat this as an assignment to a slice. Is this a bug? Clearly it shouldn't be treated as a slice assignment since a[1,4] can only refer to an atomic element in the (2D) array, not a sequence. Cheers, Stuart |
From: Rob M. <ma...@ll...> - 2001-04-18 19:58:55
|
> Konrad Hinsen <hi...@cn...> wrote > > Could you give a quick explanation why? I thought the whole point of >> the "extern" specifier was to flag that this variable was defined >> elsewhere. > >Right, but with most platforms' shared library systems, this means >"in another source file that is part of the same shared library", >not "in another shared library" or "in the main executable". > >> Otherwise, doesn't it imply that the API pointer is >> defined in each file that includes arrayobject.h? >> >> i.e. shouldn't headers declare "extern double x" for everything except >> the file that actually defines x? > >If a dynamically loaded module consists of more than one source file, >then all but one of them (the one which calls import_array()) must >define NO_IMPORT_ARRAY before including arrayobject.h. This is also >the answer to Kevin Rodgers' question. > >However, it seems to me that the current arrangement in NumPy has >another serious drawback: it should be impossible to link NumPy >statically with the Python interpreter, due to multiply defined >symbols. And I am rather sure that this was possible many versions >ago, since I used NumPy on a Cray T3E, which does not have shared >libraries. > >I checked my own extension modules that export a C API, and they all >declare the API pointer static. This is also what the C API export >section in the "Extending & Embedding" manual recommends. (OK, I admit >that I wrote that section, so that is not a coincidence!) > >So perhaps the best solution is to make this static. Client modules >that consist of more than one source code file with PyArray... calls >must then call import_array() once in every such file, or equivalently >pass on the PyArray_API pointer explicitly between the files. That >sounds quite acceptable to me. In my experience porting other libraries to the Mac I find that most Unix boxes are not at all upset by what CodeWarrior calls multiple definitions. The answer in that case was in details that are not specified in the C standard. Section 4.8 of "A Reference Manual" by Harbison and Steele goes into the details of how external names are handled. Most UNIX compilers use the mixed Common Model. In this case you can define a global any number of times and if there is no initializer present they are all merged into one, much like a Fortran Common block. My point is that what I consider lazy coding practice (not using externs where needed) is tolerated by many C compilers. I am not competent to comment on the impact of shared libraries on all this. -- *-*-*-*-*-*-*-*-*-*-**-*-*-*-*-*-*-*-*-*-*- Rob Managan <mailto://ma...@ll...> LLNL ph: 925-423-0903 P.O. Box 808, L-095 FAX: 925-422-3389 Livermore, CA 94551-0808 |
From: T.Meyarivan <ma...@ii...> - 2001-04-18 19:56:01
|
Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. IDLE 0.8 -- press F1 for help >>> from Numeric import * >>> from LinearAlgebra import * >>> a = array([[1,0],[0,1]]) >>> print a, type(a) [[1 0] [0 1]] <type 'array'> >>> eigenvectors(a) Traceback (most recent call last): File "<pyshell#4>", line 1, in ? eigenvectors(a) File "c:\program files\python21\numeric\LinearAlgebra.py", line 151, in eigenvectors dummy, 1, vr, n, work, lwork, 0) LapackError: Expected an array for parameter a in lapack_dge.dgeev >>> The above test works fine under a Unix environment using the same version of Python and Numpy. |
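[Editor's note: the failure is in the LAPACK wrapper's handling of the integer input on that platform; casting to a floating type before the call is a safe defensive workaround. A sketch with modern NumPy, where np.linalg.eig is the successor of LinearAlgebra.eigenvectors:]

```python
import numpy as np

a = np.array([[1, 0], [0, 1]])
# dgeev operates on doubles, so hand the routine a float array
# explicitly rather than relying on the wrapper to cast an int array.
w, v = np.linalg.eig(a.astype(float))
print(w)  # -> [1. 1.]
```

For the identity matrix above, both eigenvalues are 1 and the eigenvectors are the unit basis vectors, matching the result the same session produced on Unix.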