From: Konrad H. <hi...@cn...> - 2001-08-29 09:38:38
|
> Is there any way to get invert() to work with complex arrays?

If you mean LinearAlgebra.inverse(), it should work with complex arrays, although I haven't checked that for a long time.

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax: +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
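As a quick check of what Konrad describes, here is a minimal sketch using the Numeric-era LinearAlgebra module; the array values are made up for illustration:

from Numeric import array, matrixmultiply
import LinearAlgebra

a = array([[1+2j, 3+0j],
           [0+1j, 2-1j]])
ainv = LinearAlgebra.inverse(a)

# The product should be numerically close to the identity matrix.
print matrixmultiply(a, ainv)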
From: Travis O. <oli...@ie...> - 2001-08-29 05:34:36
|
Eric brought up the point of one-offset arrays, which can be created relatively easily with UserArrays. This led me to some thinking about why UserArrays are not used more often. I think one of the biggest reasons is that most of the functions can take UserArrays but return the basic array type upon completion, so you end up having to continually reconstruct your UserArrays. Are there other reasons people have thought of?

So, here's a suggestion: why don't we modify PyArray_Return to return an object of the same class as one of the arguments that was passed, if the class defines an __array__ method? Which argument to pick, and how this could be implemented without changing old code, is an interesting question. Assuming it were possible to make PyArray_Return do such a thing, would this be a bad idea?

Sincerely,
Tried-to-use-Matrix-objects-but-always-resort-to-dot(x,x)
Travis Oliphant
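Until PyArray_Return itself does something like this, the re-wrapping has to happen in Python. Here is a minimal sketch of one way to centralize it -- a hypothetical helper, not part of Numeric, assuming the target class (for example a UserArray subclass) can be constructed from a plain array:

import Numeric

class Rewrap:
    """Call a Numeric function and convert its plain-array result
    back into the caller's array class."""
    def __init__(self, func, cls):
        self.func = func
        self.cls = cls
    def __call__(self, *args):
        return self.cls(self.func(*args))

# Hypothetical usage with a UserArray-like class named Matrix:
#   wrapped_dot = Rewrap(Numeric.dot, Matrix)
#   c = wrapped_dot(a, b)   # c is a Matrix, not a plain array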
From: Travis O. <oli...@ie...> - 2001-08-29 05:27:15
|
> In my view, the most important reason to prefer 1-based indexing versus 0-based indexing is compatibility. For numerical work, some of the languages which I use or have used are Matlab, Mathematica, Maple and Fortran. These are all 1-indexed. (C is by nature 0-indexed because it is so close to machine architecture, but with a little bit of not-entirely-clean pointer manipulation, you can easily make 1-indexed arrays and matrices.) Obviously Python can't be compatible with these languages in a strict sense, but like most people who do some programming work, I've built up a library of my own commonly used routines specific to my work; in general it's a trivial matter to translate numerical routines from one language to another if translation is just a matter of substituting one set of syntactical symbols and function names for another. However it can be damn tricky to convert 1-indexed code to 0-indexed code or vice versa without introducing any errors -- believe me! (Yes, it's possible to call nearly any language from nearly any other language these days, so in theory you don't have to recode, but there are lots of reasons why recoding is often the preferable route.)

You aren't the first to raise this issue. I wouldn't mind if the user had the option, but then again I tend to prefer the flag-for-every-feature approach, which others who have more computing experience than me have said leads to problems due to the presence of many different ways to do things and unforeseen interactions.

I could definitely see the coding advantage in implementing algorithms that use notation that is already 1-based. I have come across this myself -- in fact just yesterday, when I was coding up the Pade approximation to the matrix exponential using the pseudo-code algorithm given by Golub and Van Loan in their "Matrix Computations" book.

It seems to me like it would be a lot of work to add this feature back into the code now (there would be a million places to look for places where the code inherently assumes 0-based indexing). It would also, as you mention, be inconsistent with Python.

A general approach would be to inherit from the UserArray for your codes and reimplement the __getitem__ and __getslice__ methods. Your objects should still be able to be passed to many of the routines which expect arrays (because under the covers, one of the first things the array_from_object C code does is check to see if the object has an __array__ method and call it). Note that this will not copy data around, so there is minimal overhead. But you would have to take care to wrap the returned object back into an array_object. (Maybe something could be done here... Hmmm.)

> By the way, what is leave-last-one-out slicing? Is it
>     a[:-1]
> or is it
>     a[0,...]
> or is it something else?

I meant the fact that a[3:6] returns elements a[3], a[4], a[5] but NOT a[6]. I'm sorry for using my own poorly-worded term. I can't remember what other Pythonistas call it.
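To make the approach above concrete, here is a minimal, hypothetical one-offset wrapper. It is only a sketch of the indexing shift: slices, negative indices, arithmetic and re-wrapping of results are deliberately ignored, and the class name is made up. A fuller implementation along Travis's lines would subclass UserArray instead.

from Numeric import asarray

class OneBasedArray:
    """Toy 1-indexed view of a Numeric array: a[1] is the first element."""
    def __init__(self, data):
        self.array = asarray(data)
    def __getitem__(self, i):
        return self.array[i - 1]
    def __setitem__(self, i, value):
        self.array[i - 1] = value
    def __len__(self):
        return len(self.array)
    def __array__(self, typecode=None):
        # Lets array_from_object use the underlying data without copying.
        if typecode is None:
            return self.array
        return asarray(self.array, typecode)

a = OneBasedArray([10, 20, 30])
print a[1], a[3]    # prints: 10 30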
From: Eric N. <no...@ph...> - 2001-08-29 05:00:53
|
>> Does anyone have a simple method of using one-offset arrays? >> Specifically, can I define an array "a" so that a[1] refers to the >> first element? >> > Inherit from the UserArray and redefine slicing to your hearts content. >> Without one-offset indexing, it seems to me that Python is minimally >> useful for numerical computations. Many, perhaps the majority, of >> numerical algorithms are one-indexed. Matlab for example is one-based >> for this reason. In fact it seems strange to me that a "high-level" >> language like Python should use zero-offset lists. >> > Wow, that is a pretty condescending-sounding statement, though I'm sure you > didn't mean it that way. Python is indeed being used for quite serious > numerical computations. I have been using Python for quite some time for > Numerical work and find it's zero-based indexing combined with the > leave-last-one-out slicing notation to be much more convenient. Oops, what I really need is a wet-ware (i.e. brain) macro which enforces the correct order of the pair (think,write,send)! The above unconsidered comments arose from the sequence (frustration,write,send,think,regret). ;) Obviously there is a lot of numerical work being done with python and people are very happy with it. But for me I still think it would be "minimally useful" without 1-indexed arrays. Here's why: In my view, the most important reason to prefer 1-based indexing versus 0-based indexing is compatibility. For numerical work, some of the languages which I use or have used are Matlab, Mathematica, Maple and Fortran. These are all 1-indexed. (C is by nature 0-indexed because it is so close to machine architecture, but with a little bit of not-entirely-clean pointer manipulation, you can easily make 1-indexed arrays and matrices.) Obviously Python can't be compatible with these languages in a strict sense, but like most people who do some programming work, I've built up a library of my own commonly used routines specific to my work; in general it's a trivial matter to translate numerical routines from one language to the another if translation is just a matter of substituting of one set of syntactical symbols and function names for anther. However it can be damn tricky to convert 1-indexed code to 0-indexed code or visa versa without introducing any errors- believe me! (Yes it's possible to call nearly any language from nearly any other language these days so in theory you don't have to recode, but there are lots of reasons why often recoding is the preferable route.) The other reason for choosing 1-based indexing is to keep the code as near to the standard notation as possible. This is one of the advantages of using a high level language - it approximates the way you think about things instead of the way the computer organizes them. Of course, this can go either way depending on the quantity in question: as a simple example a spatial vector (x,y,z) is conventionally labelled 1,2,3 (1-indexed), but a relativistic four-vector with time included (t,x,y,z) is conventionally labelled 0,1,2,3 (0-indexed). So ideally one would be able to choose the indexing-type case-by-case. I'm sure that computer programmers will argue vehemently that code which mixes both 0-indexed and 1-indexed arrays is hard to understand and maintain, but for experts in a particular field who are accustomed to certain ingrained notations, it is the code which breaks the conventional notation which is most error-prone. 
In my case, I'm dealing at the moment with crystal structures with which are associated certain conventional sets of vectors and tensors - all 1-indexed by convention. I find it a delicate enough task to always get the correct vector or tensor without having to remember that d[2] is actually d3. Defining d1,d2,d3 is not convenient because often the index itself needs to be calculated. I guess if I understood the reason for 0-indexed lists and tuples in Python I would be happier. In normal, everyday usage, sets, collections and lists are 1-indexed (the first item, the second item, the third item, and so on). Python is otherwise such an elegant and natural language. Why the ugly exception of making the user conform to the underlying mechanism of an array being an address plus an offset? All this is really neither here nor there, since this debate, at least as far as Python is concerned, was probably settled 10 years ago and I'm sure nobody wants to hear anything more about it at this point. As you point out, I can define my own array type with inheritance. I will also need my own range command and several other functions which haven't occured to me yet. I was hoping that there would be a standard module to implement this. By the way, what is leave-last-one-out slicing? Is it a[:-1] or is it a[0,...] or is it something else? Eric |
From: J. M. <mi...@cg...> - 2001-08-29 03:29:26
|
Hi, this is surely a FAQ so I apologize in advance, but: I'm trying to install cephes-1.3 and getting this:

cc -O2 -march=pentium -I/usr/local/include/python2.0 -I/usr/local/include/python2.0/Numeric -c amos_wrappers.c -o amos_wrappers.o
In file included from cephes/mconf.h:162,
                 from amos_wrappers.h:11,
                 from amos_wrappers.c:8:
cephes/protos.h:67: parse error before `sizeof'
cephes/protos.h:68: parse error before `sizeof'
cephes/protos.h:69: parse error before `sizeof'
make: *** [amos_wrappers.o] Error 1

Am installing with:

Python 2.0 (#14, Aug 2 2001, 10:06:36)
[GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2

and have already installed Numeric-20.1.0. Grateful for any help... I did search a bit and it seems to be a known problem but I couldn't figure out the answer. Surely something simple.

thanks
Judah

Judah Milgram  mi...@cg...
P.O. Box 8376, Langley Park, MD 20787
(301) 422-4626 (-3047 fax)
From: Rob <eu...@ho...> - 2001-08-29 02:54:27
|
Is there any way to get invert() to work with complex arrays? Thanks, Rob.

--
The Numeric Python EM Project
www.members.home.net/europax
From: Travis O. <oli...@ie...> - 2001-08-29 02:05:51
|
On Tue, 28 Aug 2001, Chris Barker wrote: > Hi all, > > Paul recently requested help with testing and debugging MLab.py, and I > offered to help out. He then sent me this note: > > "Paul F. Dubois" wrote: > > MLab.py started out as a package that was supposed to give numpy a > > Matlab look and feel. So useful questions are: are there really useful > > things that should be added? I use many of the functions in MLab regularly. I have no problem renaming it to Utility.py or even incorporating it back into some other module. scipy uses MLab currently has well. > > I can't answer these questions myself. Interestingly enough, despite the > fact that I came to NumPy having used MATLAB heavily for five years (and > one dissertation), I have made very little use of MLab.py. I'm wondering > if anyone is, indeed, using it. Yes, we are... > > As for trying to give numpy a MATLAB look and feel, I question the > usefulness of that. NumPy looks and feels a little different than > MATLAB, and, frankly, I like NumPy's feel better for the most part (now > that rich comparisons have been added, anyway). The one thing that > MATLAB has that NumPy doesn't, that I really miss, is list indexing. > Having to use put and take is so much less elegant! I'm sure this isn't > the least bit trivial to add, however. (is it even possible, given > Python's idea of slicing?) I don't think look and feel is the right word --- but increasing the number of easily accesible utility routines is what the goal of SciPy is all about. List indexing is difficult to add to the C-code due to the complexity of that particular subroutine. Both Paul and I have looked at the code to try and add this feature but neither one of us had the time to make it work right. I think it would be possible if we could ever agree on what list-indexing should do exactly for multidimensional arrays. Python's idea of slicing is actually quite general. It takes a[obj1, obj2, obj3] and essentially calls the (C-equivalent) of __getitem__ (the mapping interface) with the tuple (obj1, obj2, obj3). Any objects which look like (3:10:2) are translated to a slice-object. So, the objects could easily be indices into the array or a mask array of ones and zeros. I seem to remember that there were some "difficult-to-decide" cases but I can't recall them at the moment. > > Anyway, what I do think MATLAB does provide, and MLab.py should, is a > whole bunch of utility functions for various common manipulations. > > > Do we really have what we have correct? I'm pretty confident in it (that's not a life-staking claim...) Most of my utility-development work is taking place in SciPy now. Bugs will get fixed in MLab.py since Scipy uses it. > > > > Most of all, these is no test routine for it. If you could make one > > following the model of the test.py in directory Test, it would be great. > > The idea is to have something that does not require the runner of the > > test to know anything. It just runs silently unless something is wrong. > > I certainly could write up a test routine. I will start work on that. In > the meantime, before I put too much effort into it, I'd really like some > feedback on what people want MLab.py to be, if they see using it at all. > It's a good idea. > As I said above, I'm not sure trying to give NumPy a MATLAB feel is a > useful goal, so what is left is having all those handy functions. 
> Perhaps we could shift away from MLab.py, and turn it into Utilities.py, > which could use MATLAB (and other user experience) as guidance as to > what utilities to include. I've got some fun suggestions for utility functions which I've just been storing away. Some of them are put into SciPy. |
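A small illustration of the subscription mechanism Travis describes above. This is plain Python, nothing Numeric-specific, and the class name is made up:

class IndexEcho:
    """Echoes back whatever Python passes to __getitem__."""
    def __getitem__(self, index):
        return index

e = IndexEcho()
print e[3]           # 3
print e[1, 2, 3]     # (1, 2, 3)
print e[3:10:2, 5]   # (slice(3, 10, 2), 5)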
From: Travis O. <oli...@ie...> - 2001-08-29 01:38:38
|
On Tue, 28 Aug 2001, Eric Nodwell wrote:

> Does anyone have a simple method of using one-offset arrays? Specifically, can I define an array "a" so that a[1] refers to the first element?

Inherit from the UserArray and redefine slicing to your heart's content.

> Without one-offset indexing, it seems to me that Python is minimally useful for numerical computations. Many, perhaps the majority, of numerical algorithms are one-indexed. Matlab for example is one-based for this reason. In fact it seems strange to me that a "high-level" language like Python should use zero-offset lists.

Wow, that is a pretty condescending-sounding statement, though I'm sure you didn't mean it that way. Python is indeed being used for quite serious numerical computations. I have been using Python for quite some time for numerical work and find its zero-based indexing, combined with the leave-last-one-out slicing notation, to be much more convenient. It is different from MATLAB, but that hardly makes it less useful, as I suspect you'd agree after trying it out for a while. What is your application domain that so requires 1-based indexing?

Travis E. Oliphant, Ph.D.
Assistant Professor
Brigham Young University
Provo, UT 84602
(801) 378-3108
oli...@ie...
From: Serge R. <se...@ro...> - 2001-08-29 00:20:14
|
On Tue, Aug 28, 2001 at 12:09:50PM -0700, num...@li... wrote: -snip- > I certainly could write up a test routine. I will start work on that. In > the meantime, before I put too much effort into it, I'd really like some > feedback on what people want MLab.py to be, if they see using it at all. > > As I said above, I'm not sure trying to give NumPy a MATLAB feel is a > useful goal, so what is left is having all those handy functions. > Perhaps we could shift away from MLab.py, and turn it into Utilities.py, > which could use MATLAB (and other user experience) as guidance as to > what utilities to include. > > I'd really like folks' feedback on this before I put too much work into > something no one uses, or will use. i use MLab.py fairly often but see it more as a collection of useful functions. personally, i agree with moving it into Utilities.py, as that is in line with how i use it. otoh, MLab.py does provide a nice entry point for folks coming from Matlab into numpy. so, perhaps any eventual move to Utilities.py could be accompanied with some documentation to remind users of those linkages. i'd offer to help with the testing, but my copy of matlab is back in the states (i'm in australia on sabbatical) so i'd have to go from memory (dangerous). let me know if i can help you. cheers, serge -- Sergio J. Rey http://typhoon.sdsu.edu/rey.html "NT is secure.... as long as you don't remove the shrink wrap." - G. Myers |
From: Eric N. <no...@ph...> - 2001-08-28 23:24:28
|
Does anyone have a simple method of using one-offset arrays? Specifically, can I define an array "a" so that a[1] refers to the first element?

Without one-offset indexing, it seems to me that Python is minimally useful for numerical computations. Many, perhaps the majority, of numerical algorithms are one-indexed. Matlab for example is one-based for this reason. In fact it seems strange to me that a "high-level" language like Python should use zero-offset lists.
From: Paul F. D. <pa...@pf...> - 2001-08-28 19:42:42
|
Install Python first. From the Numerical Python download site, get the .exe file (not the zip) and run it. It is a conventional Setup-like program. Get .exe files for the optional packages if desired and run them too.

Functions mean and std are available in module MLab.py (note the capital L).

Python 2.2a2 (#22, Aug 22 2001, 01:24:03) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
IDLE 0.8 -- press F1 for help
>>> import MLab
>>> d = [1,2,4,9,12,24]
>>> MLab.mean(d)
8.6666666666666661
>>> MLab.std(d)
8.6178110136313997
>>>

-----Original Message-----
From: num...@li... [mailto:num...@li...] On Behalf Of Charles G Waldman
Sent: Tuesday, August 28, 2001 9:40 AM
To: Culhane, Aedin
Cc: num...@li...
Subject: [Numpy-discussion] Newbie Installation problems

Culhane, Aedin writes:

> Dear Numpy Discussion List Users, I would really appreciate if someone could help me install numerical python on Win98. I have downloaded Numerical Python twice now (older version and 20.1.0) but I am unable to get python to import numeric. I am running python2.1.

I'm sure a lot of folks on the list would be glad to help, but without a little more detail it's hard to say much. What errors are you seeing? How did you install Python? What directory is it installed into? How did you try to install Numpy?

> I need it to be able to calculate basis statistics (mean, std dev, etc). Will numerical python do this easily or should I write my own scripts? Thanks a million for your help, Aedin Culhane

I think there are some statistics modules floating around on the Net, but mean and standard deviation are one-liners in NumPy:

from Numeric import *

def mean(a):
    return float(sum(a))/len(a)

def std_dev(a):
    return sqrt(mean((a-mean(a))**2))
From: Chris B. <chr...@ho...> - 2001-08-28 18:31:04
|
Hi all, Paul recently requested help with testing and debugging MLab.py, and I offered to help out. He then sent me this note: "Paul F. Dubois" wrote: > MLab.py started out as a package that was supposed to give numpy a > Matlab look and feel. So useful questions are: are there really useful > things that should be added? I can't answer these questions myself. Interestingly enough, despite the fact that I came to NumPy having used MATLAB heavily for five years (and one dissertation), I have made very little use of MLab.py. I'm wondering if anyone is, indeed, using it. As for trying to give numpy a MATLAB look and feel, I question the usefulness of that. NumPy looks and feels a little different than MATLAB, and, frankly, I like NumPy's feel better for the most part (now that rich comparisons have been added, anyway). The one thing that MATLAB has that NumPy doesn't, that I really miss, is list indexing. Having to use put and take is so much less elegant! I'm sure this isn't the least bit trivial to add, however. (is it even possible, given Python's idea of slicing?) Anyway, what I do think MATLAB does provide, and MLab.py should, is a whole bunch of utility functions for various common manipulations. > Do we really have what we have correct? > Most of all, these is no test routine for it. If you could make one > following the model of the test.py in directory Test, it would be great. > The idea is to have something that does not require the runner of the > test to know anything. It just runs silently unless something is wrong. I certainly could write up a test routine. I will start work on that. In the meantime, before I put too much effort into it, I'd really like some feedback on what people want MLab.py to be, if they see using it at all. As I said above, I'm not sure trying to give NumPy a MATLAB feel is a useful goal, so what is left is having all those handy functions. Perhaps we could shift away from MLab.py, and turn it into Utilities.py, which could use MATLAB (and other user experience) as guidance as to what utilities to include. I'd really like folks' feedback on this before I put too much work into something no one uses, or will use. -Chris -- Christopher Barker, Ph.D. Chr...@ho... --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ |
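For readers who have not met them, take and put -- the functions Chris mentions above as the workaround for MATLAB-style list indexing -- behave roughly as follows (a small made-up example):

from Numeric import array, take, put

a = array([10, 20, 30, 40, 50])
idx = [0, 2, 4]

print take(a, idx)        # gather:  [10 30 50]
put(a, idx, [1, 2, 3])    # scatter: a[0]=1, a[2]=2, a[4]=3, modifying a in place
print a                   # [ 1 20  2 40  3]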
From: Chris B. <chr...@ho...> - 2001-08-28 17:19:21
|
Charles G Waldman wrote:

> I think there are some statistics modules floating around on the Net, but mean and standard deviation are one-liners in NumPy:
>
> from Numeric import *
>
> def mean(a):
>     return float(sum(a))/len(a)
>
> def std_dev(a):
>     return sqrt(mean((a-mean(a))**2))

or, for the unbiased estimate:

def std_dev(a):
    return sqrt(sum((a-mean(a))**2)/(len(a)-1))

You can also find these in MLab.py.

-Chris

--
Christopher Barker, Ph.D.
Chr...@ho...
http://members.home.net/barkerlohmann
Oil Spill Modeling / Water Resources Engineering / Coastal and Fluvial Hydrodynamics
From: Charles G W. <cg...@al...> - 2001-08-28 16:43:30
|
Culhane, Aedin writes:

> Dear Numpy Discussion List Users, I would really appreciate if someone could help me install numerical python on Win98. I have downloaded Numerical Python twice now (older version and 20.1.0) but I am unable to get python to import numeric. I am running python2.1.

I'm sure a lot of folks on the list would be glad to help, but without a little more detail it's hard to say much. What errors are you seeing? How did you install Python? What directory is it installed into? How did you try to install Numpy?

> I need it to be able to calculate basis statistics (mean, std dev, etc). Will numerical python do this easily or should I write my own scripts? Thanks a million for your help, Aedin Culhane

I think there are some statistics modules floating around on the Net, but mean and standard deviation are one-liners in NumPy:

from Numeric import *

def mean(a):
    return float(sum(a))/len(a)

def std_dev(a):
    return sqrt(mean((a-mean(a))**2))
From: Culhane, A. <a.c...@uc...> - 2001-08-28 16:25:42
|
Dear Numpy Discussion List Users,

I would really appreciate it if someone could help me install Numerical Python on Win98. I have downloaded Numerical Python twice now (an older version and 20.1.0) but I am unable to get Python to import Numeric. I am running Python 2.1.

I need it to be able to calculate basic statistics (mean, std dev, etc.). Will Numerical Python do this easily or should I write my own scripts?

Thanks a million for your help,
Aedin Culhane
From: Rob <eu...@ho...> - 2001-08-28 04:35:21
|
Tim, I used your routine, computing all of the triangles at once and then feeding the resulting array back into the needy functions. I shaved 30% off the execution time!! There is another routine which computes correction terms which can use the "take()" method. I'll work on that tomorrow. Thanks alot! Rob. ps. I'll have to study the Numpy manual to figure out how your routine works. Tim Hochberg wrote: > > Hi Rob, > > It looks like you are trying to use the function below to integrate over a > single triangle. I'm almost certain that you will _not_ be able to get this > to run fast; there's just too much overhead per triangle from all the > function calls and what not. Instead you need to structure things so that > you do more work per call. One way is to pass a list of triangles and get > back a list of answers. This way you spread your area over many triangles. > Another possibility, assuming that you need to evaluate the integral for all > seven possible values of QPoint each time, is to compute the answer for all > values of QPoint at once so that you reduce the overhead per triangle by > seven. For example (minimally tested): > > Qpnt=reshape(arange(7*3), (7,3)) > # Other code from other messages elided.... > > def Quads(TrngleNode, Qpnt, NodeCord): > c = take(NodeCord, TrngleNode) > SrcPointCols= add.reduce(Qpnt[...,NewAxis] * c[NewAxis,],1) > return SrcPointCols > > # Initialize stuff to minimize effects of timing.... > from time import clock > q1 = [None]*7 > q2 = [None]*7 > rng = range(7) > > # Time the three methods.... > > t0 = clock() > for i in rng: > q1[i] = ComputeGaussQuadPoint(i,TrngleNode,Qpnt,NodeCord) > t1 = clock() > for i in rng: > q2[i] = Quad(i,TrngleNode,Qpnt,NodeCord) > t2 = clock() > q3 = Quads(TrngleNode, Qpnt, NodeCord) > t3 = clock() > > # And the results... > > print "Times (us):", (t1-t0)*1e6, (t2-t1)*1e6, (t3-t2)*1e6 > for i in range(7): > print "The Answers:", q1[i], q2[i], q3[i] > > If this code is still a bottleneck, you could do both (compute the values > for all nodes and all values of QPoint at once, but this may be enough to > move the bottleneck elsewhere; by my timing it's over 7 times as fast. > > -tim > > [snip] > > > Evidently transpose is not very fast even for > > smal matrices. > > Actually, transpose should be fast, and should look progressively faster for > larger matrices. Typically all that happens is that the strides are changed. > > ----- Original Message ----- > From: "Rob" <eu...@ho...> > To: <num...@li...> > Sent: Monday, August 27, 2001 7:49 AM > Subject: [Numpy-discussion] Gaussian Quadrature routine Numpy-ization :) > > > I finally got it to work, but the Numpy-ized version runs slower than > > the plain Python one. I think that I can transpose the NodeCord matrix > > once in the program and feed that in, rather than the scratch matrix > > that is generated here. Evidently transpose is not very fast even for > > smal matrices. 
Here is my test program: > > > > > > from Numeric import * > > > > > > > > Qpnt=array(([20,21,22],[23,24,25],[26,27,28])) > > NodeCord=array(([1,2,3],[4,5,6],[7,8,9])) > > TrngleNode=array((1,2,0)) > > > > > > #the original routine > > def ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord): > > > > SrcPointCol=zeros((3)) > > > > > > SrcPointCol[0] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],0]\ > > + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],0]\ > > + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],0] > > > > SrcPointCol[1] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],1]\ > > + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],1]\ > > + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],1] > > > > SrcPointCol[2] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],2]\ > > + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],2]\ > > + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],2] > > > > return SrcPointCol > > > > > > #the yet-to-be-faster routine > > > > def Quad(QuadPoint, TrngleNode, Qpnt,NodeCord): > > s = Qpnt[QuadPoint,:] > > c= take(NodeCord, TrngleNode) > > SrcPointCol= add.reduce(s * > > transpose(c),1) > > > > return SrcPointCol > > > > QuadPoint=1 > > > > print "The Correct:" > > print ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord) > > > > print "The New" > > print Quad(QuadPoint,TrngleNode,Qpnt,NodeCord) > > > > > > > > > > > > -- > > The Numeric Python EM Project > > > > www.members.home.net/europax > > > > _______________________________________________ > > Numpy-discussion mailing list > > Num...@li... > > http://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax |
From: Rob <eu...@ho...> - 2001-08-28 00:40:56
|
Hi Tim, Yes this is part of a FEM-MOM hybrid EM simulator. It was originally written in C, so I've had to deal with the C way of doing things. I'd like to consolidate many of these types of operations so that the program becomes streamlined and easier to understand- "The Python Way" TM. :) I will try your method to see how it works. By contrast, I have a FDTD simulator which is a true speed demon with Numpy. But it was organized in a way that I would easily use slicing to time march the 6 scalar Maxwell's equations. In any case I'm having fun. I do this at home. At work I have Ansoft Serenade and Agilent ADS at my disposal. Rob. Tim Hochberg wrote: > > Hi Rob, > > It looks like you are trying to use the function below to integrate over a > single triangle. I'm almost certain that you will _not_ be able to get this > to run fast; there's just too much overhead per triangle from all the > function calls and what not. Instead you need to structure things so that > you do more work per call. One way is to pass a list of triangles and get > back a list of answers. This way you spread your area over many triangles. > Another possibility, assuming that you need to evaluate the integral for all > seven possible values of QPoint each time, is to compute the answer for all > values of QPoint at once so that you reduce the overhead per triangle by > seven. For example (minimally tested): > > Qpnt=reshape(arange(7*3), (7,3)) > # Other code from other messages elided.... > > def Quads(TrngleNode, Qpnt, NodeCord): > c = take(NodeCord, TrngleNode) > SrcPointCols= add.reduce(Qpnt[...,NewAxis] * c[NewAxis,],1) > return SrcPointCols > > # Initialize stuff to minimize effects of timing.... > from time import clock > q1 = [None]*7 > q2 = [None]*7 > rng = range(7) > > # Time the three methods.... > > t0 = clock() > for i in rng: > q1[i] = ComputeGaussQuadPoint(i,TrngleNode,Qpnt,NodeCord) > t1 = clock() > for i in rng: > q2[i] = Quad(i,TrngleNode,Qpnt,NodeCord) > t2 = clock() > q3 = Quads(TrngleNode, Qpnt, NodeCord) > t3 = clock() > > # And the results... > > print "Times (us):", (t1-t0)*1e6, (t2-t1)*1e6, (t3-t2)*1e6 > for i in range(7): > print "The Answers:", q1[i], q2[i], q3[i] > > If this code is still a bottleneck, you could do both (compute the values > for all nodes and all values of QPoint at once, but this may be enough to > move the bottleneck elsewhere; by my timing it's over 7 times as fast. > > -tim > > [snip] > > > Evidently transpose is not very fast even for > > smal matrices. > > Actually, transpose should be fast, and should look progressively faster for > larger matrices. Typically all that happens is that the strides are changed. > > ----- Original Message ----- > From: "Rob" <eu...@ho...> > To: <num...@li...> > Sent: Monday, August 27, 2001 7:49 AM > Subject: [Numpy-discussion] Gaussian Quadrature routine Numpy-ization :) > > > I finally got it to work, but the Numpy-ized version runs slower than > > the plain Python one. I think that I can transpose the NodeCord matrix > > once in the program and feed that in, rather than the scratch matrix > > that is generated here. Evidently transpose is not very fast even for > > smal matrices. 
Here is my test program: > > > > > > from Numeric import * > > > > > > > > Qpnt=array(([20,21,22],[23,24,25],[26,27,28])) > > NodeCord=array(([1,2,3],[4,5,6],[7,8,9])) > > TrngleNode=array((1,2,0)) > > > > > > #the original routine > > def ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord): > > > > SrcPointCol=zeros((3)) > > > > > > SrcPointCol[0] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],0]\ > > + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],0]\ > > + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],0] > > > > SrcPointCol[1] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],1]\ > > + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],1]\ > > + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],1] > > > > SrcPointCol[2] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],2]\ > > + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],2]\ > > + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],2] > > > > return SrcPointCol > > > > > > #the yet-to-be-faster routine > > > > def Quad(QuadPoint, TrngleNode, Qpnt,NodeCord): > > s = Qpnt[QuadPoint,:] > > c= take(NodeCord, TrngleNode) > > SrcPointCol= add.reduce(s * > > transpose(c),1) > > > > return SrcPointCol > > > > QuadPoint=1 > > > > print "The Correct:" > > print ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord) > > > > print "The New" > > print Quad(QuadPoint,TrngleNode,Qpnt,NodeCord) > > > > > > > > > > > > -- > > The Numeric Python EM Project > > > > www.members.home.net/europax > > > > _______________________________________________ > > Numpy-discussion mailing list > > Num...@li... > > http://lists.sourceforge.net/lists/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax |
From: Chris B. <chr...@ho...> - 2001-08-27 19:57:13
|
Paul F. Dubois wrote: >I don't own Matlab and so maintaining a package that emulates something >I don't have makes me nervous. I wish one of the other developers with >more knowledge in this area would pitch in. I'm not much of a NumPy developer, but I do own MATLAB, and was once a heavy user of it (now I use Python a lot more, or course). I'm not sure what I can contibute, but I'd be glad to test and comment. -Chris -- Christopher Barker, Ph.D. Chr...@ho... --- --- --- http://members.home.net/barkerlohmann ---@@ -----@@ -----@@ ------@@@ ------@@@ ------@@@ Oil Spill Modeling ------ @ ------ @ ------ @ Water Resources Engineering ------- --------- -------- Coastal and Fluvial Hydrodynamics -------------------------------------- ------------------------------------------------------------------------ |
From: Tim H. <tim...@ie...> - 2001-08-27 16:10:10
|
Hi Rob,

It looks like you are trying to use the function below to integrate over a single triangle. I'm almost certain that you will _not_ be able to get this to run fast; there's just too much overhead per triangle from all the function calls and whatnot. Instead you need to structure things so that you do more work per call. One way is to pass a list of triangles and get back a list of answers; this way you spread the overhead over many triangles. Another possibility, assuming that you need to evaluate the integral for all seven possible values of QPoint each time, is to compute the answer for all values of QPoint at once so that you reduce the overhead per triangle by seven. For example (minimally tested):

Qpnt=reshape(arange(7*3), (7,3))
# Other code from other messages elided....

def Quads(TrngleNode, Qpnt, NodeCord):
    c = take(NodeCord, TrngleNode)
    SrcPointCols = add.reduce(Qpnt[...,NewAxis] * c[NewAxis,], 1)
    return SrcPointCols

# Initialize stuff to minimize effects of timing....
from time import clock
q1 = [None]*7
q2 = [None]*7
rng = range(7)

# Time the three methods....
t0 = clock()
for i in rng:
    q1[i] = ComputeGaussQuadPoint(i,TrngleNode,Qpnt,NodeCord)
t1 = clock()
for i in rng:
    q2[i] = Quad(i,TrngleNode,Qpnt,NodeCord)
t2 = clock()
q3 = Quads(TrngleNode, Qpnt, NodeCord)
t3 = clock()

# And the results...
print "Times (us):", (t1-t0)*1e6, (t2-t1)*1e6, (t3-t2)*1e6
for i in range(7):
    print "The Answers:", q1[i], q2[i], q3[i]

If this code is still a bottleneck, you could do both (compute the values for all nodes and all values of QPoint at once), but this may be enough to move the bottleneck elsewhere; by my timing it's over 7 times as fast.

-tim

[snip]

> Evidently transpose is not very fast even for small matrices.

Actually, transpose should be fast, and should look progressively faster for larger matrices. Typically all that happens is that the strides are changed.

----- Original Message -----
From: "Rob" <eu...@ho...>
To: <num...@li...>
Sent: Monday, August 27, 2001 7:49 AM
Subject: [Numpy-discussion] Gaussian Quadrature routine Numpy-ization :)

> I finally got it to work, but the Numpy-ized version runs slower than the plain Python one. I think that I can transpose the NodeCord matrix once in the program and feed that in, rather than the scratch matrix that is generated here. Evidently transpose is not very fast even for small matrices. Here is my test program:
>
> from Numeric import *
>
> Qpnt=array(([20,21,22],[23,24,25],[26,27,28]))
> NodeCord=array(([1,2,3],[4,5,6],[7,8,9]))
> TrngleNode=array((1,2,0))
>
> #the original routine
> def ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord):
>     SrcPointCol=zeros((3))
>     SrcPointCol[0] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],0]\
>         + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],0]\
>         + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],0]
>     SrcPointCol[1] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],1]\
>         + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],1]\
>         + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],1]
>     SrcPointCol[2] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],2]\
>         + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],2]\
>         + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],2]
>     return SrcPointCol
>
> #the yet-to-be-faster routine
> def Quad(QuadPoint, TrngleNode, Qpnt,NodeCord):
>     s = Qpnt[QuadPoint,:]
>     c = take(NodeCord, TrngleNode)
>     SrcPointCol = add.reduce(s * transpose(c),1)
>     return SrcPointCol
>
> QuadPoint=1
>
> print "The Correct:"
> print ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord)
>
> print "The New"
> print Quad(QuadPoint,TrngleNode,Qpnt,NodeCord)
>
> --
> The Numeric Python EM Project
> www.members.home.net/europax
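A shape-only sketch of the Numeric broadcasting that the one-liner inside Quads relies on; the array contents here are arbitrary:

from Numeric import reshape, arange, NewAxis, add

Qpnt = reshape(arange(7*3), (7, 3))              # (7, 3): weights for 7 quad points
c = reshape(arange(3*3), (3, 3))                 # (3, 3): coordinates of 3 nodes

prod = Qpnt[:, :, NewAxis] * c[NewAxis, :, :]    # broadcasts to (7, 3, 3)
result = add.reduce(prod, 1)                     # sum over the node axis
print result.shape                               # (7, 3): one coordinate row per quad point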
From: Rob <eu...@ho...> - 2001-08-27 14:49:47
|
I finally got it to work, but the Numpy-ized version runs slower than the plain Python one. I think that I can transpose the NodeCord matrix once in the program and feed that in, rather than the scratch matrix that is generated here. Evidently transpose is not very fast even for small matrices. Here is my test program:

from Numeric import *

Qpnt=array(([20,21,22],[23,24,25],[26,27,28]))
NodeCord=array(([1,2,3],[4,5,6],[7,8,9]))
TrngleNode=array((1,2,0))

#the original routine
def ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord):

    SrcPointCol=zeros((3))

    SrcPointCol[0] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],0]\
        + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],0]\
        + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],0]

    SrcPointCol[1] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],1]\
        + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],1]\
        + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],1]

    SrcPointCol[2] = Qpnt[QuadPoint,0]*NodeCord[TrngleNode[0],2]\
        + Qpnt[QuadPoint,1]*NodeCord[TrngleNode[1],2]\
        + Qpnt[QuadPoint,2]*NodeCord[TrngleNode[2],2]

    return SrcPointCol

#the yet-to-be-faster routine
def Quad(QuadPoint, TrngleNode, Qpnt, NodeCord):
    s = Qpnt[QuadPoint,:]
    c = take(NodeCord, TrngleNode)
    SrcPointCol = add.reduce(s * transpose(c), 1)
    return SrcPointCol

QuadPoint=1

print "The Correct:"
print ComputeGaussQuadPoint(QuadPoint,TrngleNode,Qpnt,NodeCord)

print "The New"
print Quad(QuadPoint,TrngleNode,Qpnt,NodeCord)

--
The Numeric Python EM Project
www.members.home.net/europax
From: Rob <eu...@ho...> - 2001-08-26 22:19:11
|
The good news is that your routine reduces execution time by 30%. The bad news is that the wrong numbers are coming out of it. I'm trying some otner permutations of "take" and "add.reduce" in the function to see if I can get it. One method that worked was splitting up "c" into c1,c2,c3, such that: c1=Numeric.take(a.NodeCord[:,0],TrngleNode) etc and then using Numeric.add.reduce(s*c1,1) etc This gives the right results, but is slower than plain Python. I'll keep at it. Thanks again. "Paul F. Dubois" wrote: > > def cgqp(QuadPoint, TrngleNode, a): > s = a.Qpnt[Quadpoint,:] > c = Numeric.take(a.NodeCord, TrngleNode) > return Numeric.add.reduce(s * c, axis=1) > > This may or may not be right. The data structures I would have to set up > to test it are too much for Sunday morning. > > -----Original Message----- > From: num...@li... > [mailto:num...@li...] On Behalf Of Rob > Sent: Sunday, August 26, 2001 8:39 AM > To: num...@li.... > Subject: [Numpy-discussion] Can this function by Numpy-ized? > > I finally got my FEM EM code working. I profiled it and this function > uses up a big hunk of time. It performs gaussian integration over a > triangle. I am trying to figure out how to slice the arrays so as to > push it down into the C level. Does anyone have any ideas? Thanks, > Rob. > > ps. it looks to be intractible to me. Maybe I need to look at writing a > C extension. I've never done that before. > > ##********************************************************************** > ***** > ##Prototype: void ComputeGaussQuadPoint(int QuadPoint, int > *a.TrngleNode, > ## double *SrcPointCol) > ##Description: To compute the coordinates of 7-point Gauss nodes of > ## a triangular patch. > ##Input value: > ## int QuadPoint --- node index, it can be from 0 to 6. > ## int *a.TrngleNode --- the three nodes of a tringular patch. > ## double *SrcPointCol --- where to store the results > ##Return value: none > ##Global value used: a.NodeCord, a.Qpnt > ##Global value modified: none > ##Subroutines called: none > ##Note: Not very sure. > ************************************************************************ > **** > def ComputeGaussQuadPoint(QuadPoint,TrngleNode,a): > > SrcPointCol=zeros((3),Float) > > SrcPointCol[0] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],0]\ > + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],0]\ > + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],0] > > SrcPointCol[1] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],1]\ > + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],1]\ > + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],1] > > SrcPointCol[2] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],2]\ > + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],2]\ > + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],2] > > > return SrcPointCol > > > -- > The Numeric Python EM Project > > www.members.home.net/europax > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax |
From: Rob <eu...@ho...> - 2001-08-26 20:17:29
|
Thanks Paul, I will try it out. That at least gives me some direction. I've been pouring over the Numpy manual, but there are so many different functions! I would like to try to write an extension that includes some of these FEM routines, but thats for later. BTW, at work I have to use Windows NT, but I've grown to love the calldll/DynWin extension. You can even interact with the windows kernel with that one. Rob. "Paul F. Dubois" wrote: > > def cgqp(QuadPoint, TrngleNode, a): > s = a.Qpnt[Quadpoint,:] > c = Numeric.take(a.NodeCord, TrngleNode) > return Numeric.add.reduce(s * c, axis=1) > > This may or may not be right. The data structures I would have to set up > to test it are too much for Sunday morning. > > -----Original Message----- > From: num...@li... > [mailto:num...@li...] On Behalf Of Rob > Sent: Sunday, August 26, 2001 8:39 AM > To: num...@li.... > Subject: [Numpy-discussion] Can this function by Numpy-ized? > > I finally got my FEM EM code working. I profiled it and this function > uses up a big hunk of time. It performs gaussian integration over a > triangle. I am trying to figure out how to slice the arrays so as to > push it down into the C level. Does anyone have any ideas? Thanks, > Rob. > > ps. it looks to be intractible to me. Maybe I need to look at writing a > C extension. I've never done that before. > > ##********************************************************************** > ***** > ##Prototype: void ComputeGaussQuadPoint(int QuadPoint, int > *a.TrngleNode, > ## double *SrcPointCol) > ##Description: To compute the coordinates of 7-point Gauss nodes of > ## a triangular patch. > ##Input value: > ## int QuadPoint --- node index, it can be from 0 to 6. > ## int *a.TrngleNode --- the three nodes of a tringular patch. > ## double *SrcPointCol --- where to store the results > ##Return value: none > ##Global value used: a.NodeCord, a.Qpnt > ##Global value modified: none > ##Subroutines called: none > ##Note: Not very sure. > ************************************************************************ > **** > def ComputeGaussQuadPoint(QuadPoint,TrngleNode,a): > > SrcPointCol=zeros((3),Float) > > SrcPointCol[0] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],0]\ > + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],0]\ > + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],0] > > SrcPointCol[1] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],1]\ > + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],1]\ > + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],1] > > SrcPointCol[2] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],2]\ > + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],2]\ > + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],2] > > > return SrcPointCol > > > -- > The Numeric Python EM Project > > www.members.home.net/europax > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/lists/listinfo/numpy-discussion > > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > http://lists.sourceforge.net/lists/listinfo/numpy-discussion -- The Numeric Python EM Project www.members.home.net/europax |
From: Paul F. D. <pa...@pf...> - 2001-08-26 17:32:22
|
def cgqp(QuadPoint, TrngleNode, a): s = a.Qpnt[Quadpoint,:] c = Numeric.take(a.NodeCord, TrngleNode) return Numeric.add.reduce(s * c, axis=1) This may or may not be right. The data structures I would have to set up to test it are too much for Sunday morning. -----Original Message----- From: num...@li... [mailto:num...@li...] On Behalf Of Rob Sent: Sunday, August 26, 2001 8:39 AM To: num...@li.... Subject: [Numpy-discussion] Can this function by Numpy-ized? I finally got my FEM EM code working. I profiled it and this function uses up a big hunk of time. It performs gaussian integration over a triangle. I am trying to figure out how to slice the arrays so as to push it down into the C level. Does anyone have any ideas? Thanks, Rob. ps. it looks to be intractible to me. Maybe I need to look at writing a C extension. I've never done that before. ##********************************************************************** ***** ##Prototype: void ComputeGaussQuadPoint(int QuadPoint, int *a.TrngleNode, ## double *SrcPointCol) ##Description: To compute the coordinates of 7-point Gauss nodes of ## a triangular patch. ##Input value: ## int QuadPoint --- node index, it can be from 0 to 6. ## int *a.TrngleNode --- the three nodes of a tringular patch. ## double *SrcPointCol --- where to store the results ##Return value: none ##Global value used: a.NodeCord, a.Qpnt ##Global value modified: none ##Subroutines called: none ##Note: Not very sure. ************************************************************************ **** def ComputeGaussQuadPoint(QuadPoint,TrngleNode,a): SrcPointCol=zeros((3),Float) SrcPointCol[0] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],0]\ + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],0]\ + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],0] SrcPointCol[1] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],1]\ + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],1]\ + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],1] SrcPointCol[2] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],2]\ + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],2]\ + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],2] return SrcPointCol -- The Numeric Python EM Project www.members.home.net/europax _______________________________________________ Numpy-discussion mailing list Num...@li... http://lists.sourceforge.net/lists/listinfo/numpy-discussion |
From: John J. L. <jj...@po...> - 2001-08-26 17:27:15
|
On Sun, 26 Aug 2001, Rob wrote: > I finally got my FEM EM code working. I profiled it and this function > uses up a big hunk of time. It performs gaussian integration over a > triangle. I am trying to figure out how to slice the arrays so as to > push it down into the C level. Does anyone have any ideas? Thanks, > Rob. > > ps. it looks to be intractible to me. Maybe I need to look at writing a > C extension. I've never done that before. [...] I'm not a good enough ufunc / array function hacker to come up with anything, but I doubt it, and I wonder how much time it would save anyway, given the size of the arrays involved. From the look of your comments before the function, it looks like a) you're a C programmer, not quite comfortable with Python yet, or b) you wrote it in C first, then moved it into Python. If the latter, you might want to try wrapping that C function with SWIG, though to be honest I'm not sure the overhead of learning to use SWIG is any less than for writing a C extension manually (but less error-prone I expect, and less work if you have a lot of C to wrap). I think the Scipy project has some typemaps for passing Numeric arrays; if not, I've seen some collections of SWIG typemaps for Numeric somewhere... BTW, the best place to put those comments is in a docstring. Here is a slightly more Pythonically-formatted version (I must be bored today): def ComputeGaussQuadPoint(QuadPoint, a): """Return coordinates of 7-point Gauss nodes of a triangular patch. QuadPoint: node index, from 0 to 6 a: triangular patch? """ SrcPointCol=zeros((3),Float) tn = a.TrngleNode # the three nodes of the triangular patch SrcPointCol[0] = (a.Qpnt[QuadPoint,0]*a.NodeCord[tn[0],0] + a.Qpnt[QuadPoint,1]*a.NodeCord[tn[1],0] + a.Qpnt[QuadPoint,2]*a.NodeCord[tn[2],0]) SrcPointCol[1] = (a.Qpnt[QuadPoint,0]*a.NodeCord[tn[0],1] + a.Qpnt[QuadPoint,1]*a.NodeCord[tn[1],1] + a.Qpnt[QuadPoint,2]*a.NodeCord[tn[2],1]) SrcPointCol[2] = (a.Qpnt[QuadPoint,0]*a.NodeCord[tn[0],2] + a.Qpnt[QuadPoint,1]*a.NodeCord[tn[1],2] + a.Qpnt[QuadPoint,2]*a.NodeCord[tn[2],2]) return SrcPointCol John |
From: Rob <eu...@ho...> - 2001-08-26 15:39:20
|
I finally got my FEM EM code working. I profiled it and this function uses up a big hunk of time. It performs Gaussian integration over a triangle. I am trying to figure out how to slice the arrays so as to push it down into the C level. Does anyone have any ideas? Thanks, Rob.

ps. it looks to be intractable to me. Maybe I need to look at writing a C extension. I've never done that before.

##***************************************************************************
##Prototype: void ComputeGaussQuadPoint(int QuadPoint, int *a.TrngleNode,
##           double *SrcPointCol)
##Description: To compute the coordinates of 7-point Gauss nodes of
##           a triangular patch.
##Input value:
##   int QuadPoint --- node index, it can be from 0 to 6.
##   int *a.TrngleNode --- the three nodes of a triangular patch.
##   double *SrcPointCol --- where to store the results
##Return value: none
##Global value used: a.NodeCord, a.Qpnt
##Global value modified: none
##Subroutines called: none
##Note: Not very sure.
##***************************************************************************
def ComputeGaussQuadPoint(QuadPoint,TrngleNode,a):

    SrcPointCol=zeros((3),Float)

    SrcPointCol[0] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],0]\
        + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],0]\
        + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],0]

    SrcPointCol[1] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],1]\
        + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],1]\
        + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],1]

    SrcPointCol[2] = a.Qpnt[QuadPoint,0]*a.NodeCord[TrngleNode[0],2]\
        + a.Qpnt[QuadPoint,1]*a.NodeCord[TrngleNode[1],2]\
        + a.Qpnt[QuadPoint,2]*a.NodeCord[TrngleNode[2],2]

    return SrcPointCol

--
The Numeric Python EM Project
www.members.home.net/europax