From: Fernando P. <fp...@co...> - 2003-11-07 22:08:52

Tim Hochberg wrote:
> You're correct that anArray.copy() provides a deep copy of an array in
> both Numeric and numarray. However, you would like to be able to pass
> some object to copy.copy or copy.deepcopy and have it provide the
> appropriate type of copy whether the given object is an array, a list
> or something else.

Good point. This is a usage case I hadn't thought of, never having needed it myself. And apparently neither had the numarray team ;-)

Best,

f
From: Tim H. <tim...@ie...> - 2003-11-07 22:06:39

Fernando Perez wrote:
> Tim Hochberg wrote:
>
>> It appears that numarray.NumArray does not supply __copy__ or
>> __deepcopy__ methods and as a result copy.copy and copy.deepcopy do
>> not work correctly. It appears that adding "__copy__ = copy" to class
>> NumArray is sufficient for copy; __deepcopy__ appears to need
>> something more. Sadly, I don't have time to investigate this further
>> right now.
>
> As far as I understand, this isn't really necessary with
> Numeric/numarray, because the copy() method in a sense always
> guarantees a 'deep copy'. Even when you make assignments to slices of
> an array, the issue of nested structures which for python lists/dicts
> requires deepcopy() just does not arise. A simple illustration:
> [...]

You're correct that anArray.copy() provides a deep copy of an array in both Numeric and numarray. However, you would like to be able to pass some object to copy.copy or copy.deepcopy and have it provide the appropriate type of copy whether the given object is an array, a list or something else. I believe that in Numeric, copy.copy and copy.deepcopy do the right thing already. However, in numarray:

    >>> import copy, numarray as na
    >>> a = na.arange(3)
    >>> a
    array([0, 1, 2])
    >>> b = copy.copy(a)
    >>> b[1] = 99
    >>> a
    array([ 0, 99,  2])    # Urkh!

If you apply the fix I posted above, that fixes copy.copy, but copy.deepcopy then fails. A full fix appears to be to add:

    def __copy__(self):
        return self.copy()

    def __deepcopy__(self, memo):
        return self.copy()

to NumArray. Then we get the desired behaviour:

    >>> import copy, numarray as na
    >>> a = na.arange(3)
    >>> b = copy.copy(a)
    >>> c = copy.deepcopy(a)
    >>> a[0] = 99
    >>> a, b, c
    (array([99,  1,  2]), array([0, 1, 2]), array([0, 1, 2]))

It may be possible to get the same effect by tweaking __getstate__ and __setstate__, since they can also be used to control copying, but I'm not familiar with those functions.

Regards,

-tim
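[For readers who want to try Tim's fix without editing numarray itself, here is a minimal sketch that monkey-patches the two protocol hooks onto NumArray at runtime. This mirrors, but is not, the official patch; numarray-era API assumed.]

    import copy
    import numarray as na

    # Both hooks can simply delegate to .copy(), which already returns a
    # deep, standalone copy of the array data in numarray.
    na.NumArray.__copy__ = lambda self: self.copy()
    na.NumArray.__deepcopy__ = lambda self, memo: self.copy()

    a = na.arange(3)
    b = copy.copy(a)
    c = copy.deepcopy(a)
    a[0] = 99
    print a, b, c    # the write to a no longer leaks into b or c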
From: Fernando P. <fp...@co...> - 2003-11-07 21:36:19

Tim Hochberg wrote:
> It appears that numarray.NumArray does not supply __copy__ or
> __deepcopy__ methods and as a result copy.copy and copy.deepcopy do not
> work correctly. It appears that adding "__copy__ = copy" to class
> NumArray is sufficient for copy; __deepcopy__ appears to need something
> more. Sadly, I don't have time to investigate this further right now.

As far as I understand, this isn't really necessary with Numeric/numarray, because the copy() method in a sense always guarantees a 'deep copy'. Even when you make assignments to slices of an array, the issue of nested structures which for python lists/dicts requires deepcopy() just does not arise. A simple illustration:

    In [1]: a=arange(10)

    In [2]: b=arange(10,20)

    In [3]: c=arange(20,30)

    In [4]: d=zeros(30)

    In [5]: d[0:10] = a

    In [6]: d[10:20] = b

    In [7]: d[20:30] = c

    In [8]: a
    Out[8]: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

    In [9]: b
    Out[9]: array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19])

    In [10]: c
    Out[10]: array([20, 21, 22, 23, 24, 25, 26, 27, 28, 29])

    In [11]: d
    Out[11]:
    array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
           16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])

    In [12]: b[:]=99

    In [13]: b
    Out[13]: array([99, 99, 99, 99, 99, 99, 99, 99, 99, 99])

    In [14]: d
    Out[14]:
    array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15,
           16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])

Perhaps I'm missing some usage case, but I've always just used ARR.copy() when I've needed a 'full copy' of an array. This guarantees that the returned array is contiguous (has .flat) and is a standalone copy of the data in ARR, regardless of the contiguity properties of ARR.

HTH.

Cheers,

f

ps: my experience is actually from Numeric; I don't know if numarray differs in its copy() behavior.
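[A small sketch of the contiguity point made here, i.e. that copy() hands back a contiguous, standalone array even when the source is a strided view. Numeric-era API assumed, including its iscontiguous() method.]

    import Numeric

    a = Numeric.reshape(Numeric.arange(9), (3, 3))
    col = a[:, 0]                # a strided view that shares a's data
    print col.iscontiguous()     # 0: col has no usable .flat
    full = col.copy()
    print full.iscontiguous()    # 1: contiguous, standalone copy
    print full.flat              # now available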
From: Tim H. <tim...@ie...> - 2003-11-07 21:26:34

It appears that numarray.NumArray does not supply __copy__ or __deepcopy__ methods, and as a result copy.copy and copy.deepcopy do not work correctly. It appears that adding "__copy__ = copy" to class NumArray is sufficient for copy; __deepcopy__ appears to need something more. Sadly, I don't have time to investigate this further right now.

Regards,

-tim
From: Perry G. <pe...@st...> - 2003-11-04 16:23:37

Indeed, MA will be in the next release (0.8). Todd isn't in today to give an estimate on when that will be, but the MA work is done. He is mostly fixing known bugs in numarray before releasing it.

Perry
From: Paul F. D. <du...@ll...> - 2003-11-04 15:00:41

In writing MA I specifically said that I would not deal with NaN issues because I saw no way to do so in a portable way.

If you have a disk file with arrays that contain NaNs in them, then you can form a mask using some routine you write in C or Python, using whatever method you believe will identify those elements correctly. After that, MA promises never to use the NaN elements in any operations.

There is a version of MA being developed for numarray, but I haven't checked whether it is in the distribution yet.
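[As a sketch of the kind of mask-building routine Paul describes, one can exploit the IEEE 754 rule that NaN compares unequal to itself. Whether the elementwise comparison honours IEEE semantics was platform-dependent in this era — the thread below shows Python-level == misbehaving — so treat this as an assumption to verify on your system.]

    import Numeric, MA

    nanv = 1e30000/1e30000            # inf/inf yields NaN on many platforms (Leo's trick)
    data = Numeric.array([0.0, nanv, 2.0, nanv, 4.0])

    def nan_mask(a):
        # Under IEEE 754, NaN is the only value unequal to itself,
        # so this flags exactly the NaN elements.
        return Numeric.not_equal(a, a)

    masked = MA.masked_array(data, mask=nan_mask(data))
    print MA.filled(2 * MA.sin(masked))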
From: Nadav H. <na...@Vi...> - 2003-11-04 13:40:37

How about using the original (unpatched) VTKpython --- I think that numarray arrays can be cast transparently to Numeric arrays, or you can explicitly cast them when you call VTKpython functions.

Nadav.
From: Leo B. <le...@ls...> - 2003-11-04 13:12:55

Nadav Horesh wrote:
> Look at the numarray package instead of Numeric --- I think it
> has better IEEE 754 support even without MA. For most
> cases, numarray is a 1:1 replacement for Numeric.

Thanks for the suggestion -- that was just about the one thing I hadn't tried yet. I had looked at numarray previously to see whether it had improved Masked Arrays, but since it didn't, I kinda neglected to look at numarray itself, and you are absolutely right: you can print and manipulate arrays containing inf and nan, and logical things will now happen instead of stack traces.

However.

As I mentioned in my previous message, one of the things I'm doing with these arrays is passing them on to the VTK Python wrappers for use in visualisation functions. These VTK wrappers are based on Numeric, not numarray, so I had to manually patch them to make them work with numarray as well.

So far so good, but there was one problem: at one point, calls are made to an internal VTK function, using 'a.flat' as an input parameter (where 'a' is now a numarray array instead of a Numeric one).

This call fails, because in Numeric the type of 'a.flat' is '<type array>', whereas in numarray the type is '<class 'numarray.numarraycore.NumArray'>'.

I managed to solve this by feeding the function in question a.flat._data (<type 'Memory'>) instead, but I am wondering if this is really the best solution, or if there's perhaps some other (more portable? more reliable?) way of handing off a numarray's internal data to an external function.

Once again, all feedback will be most welcome...

--
Leo Breebaart <le...@ls...>
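[A possibly more portable hand-off than reaching into a.flat._data is to serialize the numarray data and rebuild a Numeric array from the raw bytes. A sketch, assuming the tostring/fromstring functions both packages shipped in this era, with the element type matched by hand:]

    import numarray as na
    import Numeric

    a = na.array([0.5, 1.5, 2.5], type=na.Float64)

    # Round-trip through a raw byte string; b owns a fresh, contiguous buffer.
    b = Numeric.fromstring(a.tostring(), Numeric.Float64)
    print b, b.iscontiguous()    # b.flat is now an ordinary Numeric array

The extra copy makes this expensive for the huge arrays in question, so it is a correctness fallback rather than a replacement for a zero-copy scheme.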
From: Konrad H. <hi...@cn...> - 2003-11-04 10:39:09

On Tuesday 04 November 2003 12:19, Nadav Horesh wrote:
> The condition of the matrix is about 1.0E7 and its dimensions are
> 10000x36: This is not a stable linear system, at least not for simple

You can cut down the number of data points to much less (I use 400) and the problem persists.

> But the solution to the polynomial fit turns out to be much simpler:
> In the "fitPolynomial" function the 5th and the 4th lines before the end
> are commented out. These lines use the "generalized_inverse" procedure to
> solve the set of equations. Just uncomment these lines and comment out the

That is exactly what I recommended to Rob as well.

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
From: Nadav H. <na...@Vi...> - 2003-11-04 10:20:16

The condition number of the matrix is about 1.0E7 and its dimensions are 10000x36: this is not a stable linear system, at least not for simple solvers. Thus I estimate that the solver is not of high quality, but not buggy either.

But the solution to the polynomial fit turns out to be much simpler: in the "fitPolynomial" function, the 5th and 4th lines before the end are commented out. These lines use the "generalized_inverse" procedure to solve the set of equations. Just uncomment these lines and comment out the two lines that follow; that's it. The solution to the 5x5 fit now seems OK at first glance.

Nadav.
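[A sketch of what that substitution amounts to: solving the least-squares system through the SVD-based Moore-Penrose pseudo-inverse instead of linear_least_squares. The toy design matrix here is a placeholder, not Rob's data; Numeric-era LinearAlgebra API assumed.]

    import Numeric
    from LinearAlgebra import generalized_inverse

    def lstsq_via_pinv(A, b, rcond=1e-10):
        # x minimizing |A x - b|, computed via the pseudo-inverse.
        return Numeric.dot(generalized_inverse(A, rcond), b)

    # Toy system: fit y = c0 + c1*x through three points on y = 1 + 2x.
    x = Numeric.array([0.0, 1.0, 2.0])
    A = Numeric.transpose(Numeric.array([x*0 + 1, x]))
    b = Numeric.array([1.0, 3.0, 5.0])
    print lstsq_via_pinv(A, b)    # approximately [ 1.  2.]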
From: Nadav H. <na...@Vi...> - 2003-11-04 09:40:52

Look at the numarray package instead of Numeric --- I think it has better IEEE 754 support even without MA. For most cases, numarray is a 1:1 replacement for Numeric.

Nadav.
From: Leo B. <le...@ls...> - 2003-11-03 17:24:02

Hi all,

I have a problem, and am looking for help...

I am trying to use Python as a glue language for passing some very large numeric arrays in and out of various C libraries. These arrays can contain NaN values to indicate missing elements.

As long as on the Python level I only use Numeric to pass these arrays around as opaque data containers, there is no problem:

- From C library FOO I obtain the huge array 'data';

- Using the PyArray_FromDimsAndData() constructor from the Numeric C API, I create a Numeric array that references 'data';

- In Python, I can pass the Numeric array on to e.g. VTKPython for visualisation. VTK has no problem with the NaNs -- everything works.

The problem arises because I want to allow people to manipulate these arrays from within Python as well. As is mentioned in its documentation, Numeric does not support NaNs, and instead advises the use of Masked Arrays.

These would indeed seem to be well-suited for the job (setting aside the possible issues of performance and user-friendliness), but my problem is that I do not understand how I can create an appropriate mask for my array in the first place.

Something as simple as:

    import MA

    nanv = 1e30000/1e30000
    a = MA.masked_array([0,nanv,nanv,nanv,4], mask=[0,1,1,1,0])
    print MA.filled(2 * MA.sin(a))

works quite well, but explicit enumeration is clearly not an option for the huge pre-existing arrays I'm dealing with.

So I would want to do something similar to:

    a = MA.masked_object([0,1,nanv,3,4], nanv)

but this simply leads to a.mask() returning None.

At first I thought this was because 'nanv == nanv' always evaluates to False, but it turns out that in Python 2.3.2 it actually evaluates to True -- presumably because Python's own IEEE 754 support is lacking (if I understand PEP 754 correctly). So why doesn't the masked_object() constructor work? Beats me... It *does* work if I use e.g. '4' as the value parameter.

I tried many other approaches as well, including downloading the fpconst package mentioned in PEP 754 and trying to use its IsNaN() as a condition to the MA.masked_where() constructor -- which doesn't work either, and gives me an exception somewhere deep within the bowels of MA.

At this point I think I've now reached the end of my rope. Does anybody reading this have any ideas on how I might beat MA into submission, or if there are any other solutions I could try that would allow me to manipulate large NaN-containing arrays efficiently (or even *at all*!) from within Python? Or am I perhaps simply (hopefully) missing something obvious?

I am eagerly looking forward to any help or advice. Many thanks in advance,

--
Leo Breebaart <le...@ls...>
From: Konrad H. <hi...@cn...> - 2003-11-03 13:47:54

On Sunday 02 November 2003 14:21, Nadav Horesh wrote:
> * A polynomial fit is relatively very simple --- you may write one
> of your own in less than one day's work. Since, as I said, the
> problem is, in many cases, unstable, you'll have the chance to
> implement more stable linear-equation solvers.

The polynomial fit is indeed simple, and the routine from ScientificPython that Rob uses is only 20 lines long, most of that for error checking and setting up the arrays describing the system of linear equations.

Looking at the singular values in Rob's problem, I see no evidence for the problem being particularly unstable. The singular values range from 1e-6 to 1, which should not pose any problem at double precision. Moreover, for a lower-order fit that gives reasonable results, the range is only slightly smaller. So I do suspect that something goes wrong in linear_least_squares.

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
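[A sketch of how singular values like the ones Konrad quotes can be inspected directly, using a hypothetical one-dimensional degree-5 design matrix. Numeric-era LinearAlgebra API assumed.]

    import Numeric
    from LinearAlgebra import singular_value_decomposition

    x = Numeric.arange(0, 1, 0.01)
    # Vandermonde-style design matrix: columns are x**0 .. x**5.
    A = Numeric.transpose(Numeric.array([x**k for k in range(6)]))
    u, s, vt = singular_value_decomposition(A)
    print s                  # singular values of the design matrix
    print max(s)/min(s)      # condition number estimate for the fit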
From: Rob W.W. H. <ro...@ho...> - 2003-11-03 07:04:03

David M. Cooke wrote:
> How did you compile Numeric? With or without the system LAPACK libraries?

I'm probably like 90% of the other lazy people out there: using lapack_lite as it comes with the Numeric package.

Regards,

Rob Hooft
--
Rob W.W. Hooft || ro...@ho... || http://www.hooft.net/people/rob/
From: David M. C. <co...@ph...> - 2003-11-02 19:51:32

On Fri, Oct 31, 2003 at 02:19:54PM +0100, Rob W.W. Hooft wrote:
> I am using Polynomial.py from Scientific Python 2.1, together with
> Numeric 17.1.2. This has always served me well, but now we are busy
> upgrading our software, and I am currently porting some code to
> Scientific Python 2.4.1, Numeric 22.0. Suddenly I do no longer manage to
> get proper 2D polynomial fits over 4x4th order. At 5x5 the coefficients
> that come back from LinearAlgebra.linear_least_squares have exploded. In
> the old setup, I easily managed 9x9th order if I needed to, but most of
> the time I'd stop at 6x6th order. Would anyone have any idea how this
> difference can come about? I managed to work around this for the moment
> by using the equivalent code in the fitPolynomial routine that uses
> LinearAlgebra.generalized_inverse (and it doesn't even have problems
> with the same data at 8x8), but this definitely feels not right! I can't
> remember reading anything like this here before.
>
> Together with Konrad Hinsen, I came to the conclusion that the problem
> is not in Scientific Python, so it must be the underlying LinearAlgebra
> code that changed between releases 17 and 22.

Works for me:

    (4,4)
    [[ 0.00001848 -0.          0.         -0.          0.        ]
     [ 0.99942297  0.03       -0.          0.         -0.        ]
     [ 0.10389946 -0.          0.         -0.          0.        ]
     [-0.00940275  0.         -0.          0.         -0.        ]
     [ 0.01803527 -0.000111    0.00080066 -0.00217267  0.002475  ]]

    (5,5)
    [[-0.00000175 -0.          0.          0.         -0.          0.        ]
     [ 1.00008231  0.03       -0.          0.         -0.          0.        ]
     [ 0.09914353 -0.          0.         -0.          0.         -0.        ]
     [ 0.00350289  0.         -0.          0.         -0.          0.        ]
     [ 0.00333036 -0.          0.         -0.          0.          0.001     ]
     [ 0.00594     0.         -0.          0.         -0.          0.        ]]

    (6,6)
    [[ 0.    -0.     0.    -0.     0.    -0.     0.   ]
     [ 1.     0.03  -0.     0.    -0.     0.    -0.   ]
     [ 0.1   -0.     0.    -0.     0.    -0.     0.   ]
     [-0.     0.    -0.     0.    -0.     0.    -0.   ]
     [ 0.01  -0.     0.    -0.     0.     0.001  0.   ]
     [-0.     0.    -0.     0.    -0.     0.    -0.   ]
     [ 0.002 -0.     0.    -0.     0.    -0.     0.   ]]

(I've set sys.float_output_suppress_small to True to get a better picture.)

I'm using Numeric 23.1, ScientificPython 2.4.3, and Python 2.3.2, from Debian unstable. Numeric is compiled to use the system LAPACK libraries (using ATLAS for the BLAS). This is on both Athlon and PowerPC chips.

How did you compile Numeric? With or without the system LAPACK libraries?

--
|>|\/|<
David M. Cooke | http://arbutus.physics.mcmaster.ca/dmc/ | co...@ph...
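[One quick way to answer that closing question on an existing installation: locate the lapack_lite extension that LinearAlgebra loads, then inspect its dynamic dependencies. A sketch; Unix assumed, and the printed path is illustrative only.]

    # Inside Python:
    import lapack_lite             # the C extension LinearAlgebra delegates to
    print lapack_lite.__file__     # e.g. .../site-packages/Numeric/lapack_lite.so

    # Then, from a shell:
    #   ldd <path printed above>
    # A build against system libraries lists liblapack/libblas (or ATLAS);
    # the bundled lapack_lite build lists neither.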
From: <jo...@jo...> - 2003-11-02 16:46:58

On Sun, 02 Nov 2003 17:25:34 +0100 Rob Hooft wrote:

Rob> -    lapack_routine = lapack_lite.dgelss
Rob> +    lapack_routine = lapack_lite.dgelsd

Well, here the underlying LAPACK routine was changed to the newer and significantly faster divide-and-conquer routine. (The same holds for the complex version.) This could be the problem, which you should test. See the LAPACK documentation for details.

Nevertheless I would advise against reversing that change, as the performance differences can really be large (although I haven't used either one of these specific functions here). Maybe you can keep a copy of the old version in your own project if really necessary? (After all, there seems to be some agreement that you were just "lucky" to find a working algorithm in the first place.)

Greetings,
Jochen
--
Einigkeit und Recht und Freiheit                http://www.Jochen-Kuepper.de
Liberté, Égalité, Fraternité                    GnuPG key: CC1B0B4D
(Part 3 you find in my messages before fall 2003.)
From: Rob H. <ro...@ho...> - 2003-11-02 16:25:23

Nadav Horesh wrote:
> Many unstable problems have a stable solution if you choose the right
> algorithm. The question is if somewhere the developers decided to switch
> the equation solver, or there is a real bug. I hope that one of the
> developers will reply in this forum. I will try to look at it also,
> since it is a core component in the linear algebra package. Also if you
> suspect that your work-around is useful --- please post it here!

The workaround is to use "generalized_inverse" instead of "solve_linear_equations". The changes in the latter routine since 17.1.2 are:

    @@ -269,18 +408,37 @@
         bstar = Numeric.zeros((ldb,n_rhs),t)
         bstar[:b.shape[0],:n_rhs] = copy.copy(b)
         a,bstar = _castCopyAndTranspose(t, a, bstar)
    -    lwork = 8*min(n,m) + max([2*min(m,n),max(m,n),n_rhs])
         s = Numeric.zeros((min(m,n),),real_t)
    -    work = Numeric.zeros((lwork,), t)
    +    nlvl = max( 0, int( math.log( float(min( m,n ))/2. ) ) + 1 )
    +    iwork = Numeric.zeros((3*min(m,n)*nlvl+11*min(m,n),), 'l')
         if _array_kind[t] == 1: # Complex routines take different arguments
    -        lapack_routine = lapack_lite.zgelss
    -        rwork = Numeric.zeros((5*min(m,n)-1,), real_t)
    +        lapack_routine = lapack_lite.zgelsd
    +        lwork = 1
    +        rwork = Numeric.zeros((lwork,), real_t)
    +        work = Numeric.zeros((lwork,),t)
             results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
    -                                  0,work,lwork,rwork,0 )
    +                                  0,work,-1,rwork,iwork,0 )
    +        lwork = int(abs(work[0]))
    +        rwork = Numeric.zeros((lwork,),real_t)
    +        a_real = Numeric.zeros((m,n),real_t)
    +        bstar_real = Numeric.zeros((ldb,n_rhs,),real_t)
    +        results = lapack_lite.dgelsd( m, n, n_rhs, a_real, m, bstar_real,ldb , s, rcond,
    +                                      0,rwork,-1,iwork,0 )
    +        lrwork = int(rwork[0])
    +        work = Numeric.zeros((lwork,), t)
    +        rwork = Numeric.zeros((lrwork,), real_t)
    +        results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
    +                                  0,work,lwork,rwork,iwork,0 )
         else:
    -        lapack_routine = lapack_lite.dgelss
    +        lapack_routine = lapack_lite.dgelsd
    +        lwork = 1
    +        work = Numeric.zeros((lwork,), t)
    +        results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
    +                                  0,work,-1,iwork,0 )
    +        lwork = int(work[0])
    +        work = Numeric.zeros((lwork,), t)
         results = lapack_routine( m, n, n_rhs, a, m, bstar,ldb , s, rcond,
    -                              0,work,lwork,0 )
    +                              0,work,lwork,iwork,0 )
         if results['info'] > 0:
             raise LinAlgError, 'SVD did not converge in Linear Least Squares'
         resids = Numeric.array([],t)

I'm not deep enough into this to know where the new version goes wrong.

Regards,

Rob Hooft
--
Rob W.W. Hooft || ro...@ho... || http://www.hooft.net/people/rob/
From: Nadav H. <na...@vi...> - 2003-11-02 15:52:56

Many unstable problems have a stable solution if you choose the right algorithm. The question is whether somewhere the developers decided to switch the equation solver, or whether there is a real bug. I hope that one of the developers will reply in this forum. I will try to look at it also, since it is a core component of the linear algebra package. Also, if you suspect that your work-around is useful --- please post it here!

Nadav
From: Rob H. <ro...@ho...> - 2003-11-02 14:50:57

Nadav Horesh wrote:
> I tested the problem with:
> 1. Numeric 23.1 under python 2.3.2
> 2. numarray 0.8 (I made a copy of the Scientific package where all
>    calls to Numeric were replaced by numarray), under python 2.3.2
> The results were about the same -- high coefficients for the 5th
> order polynomials.
> I would expect a reliable fit for high-order polynomials only under
> very special circumstances, so this is not a big surprise.

Thanks for your efforts.

The polynomial we're trying to fit here is not extremely unstable. As I said, with Numeric 17.1.2 my class of problems used to be stable up to at least 9th order. I really suspect a bug was introduced here which is difficult to pinpoint, because everybody reacts the natural way: this is an intrinsically unstable problem, so the failure is not unexpected. Somehow it could have been better, though!

I managed to work around the problem so far by using a different solver also built into Scientific Python, so I am saved for now.

Regards,

Rob
--
Rob W.W. Hooft || ro...@ho... || http://www.hooft.net/people/rob/
From: Colin J. W. <cj...@sy...> - 2003-11-02 12:26:18

In the line:

    return num.zeros((1,), type=a.type())-num.multiply.reduce(num.diagonal(a))

the code:

    num.zeros((1,), type=a.type())

appears to be redundant.

Colin W.
From: Nadav H. <na...@Vi...> - 2003-11-02 12:21:42

I tested the problem with:

1. Numeric 23.1 under python 2.3.2
2. numarray 0.8 (I made a copy of the Scientific package where all calls to Numeric were replaced by numarray), under python 2.3.2

The results were about the same -- high coefficients for the 5th order polynomials. I would expect a reliable fit for high-order polynomials only under very special circumstances, so this is not a big surprise.

My advice is:

* Make sure that this is a bug and not a result of numerical instability. If you can trace it down and point to a bug, then report it. The numarray package is very usable and is under very active and rapid development, so bugs are fixed fast.

* Look for a solution in the scipy package: it is generally better than Scientific.

* A polynomial fit is relatively very simple --- you may write one of your own in less than one day's work. Since, as I said, the problem is, in many cases, unstable, you'll have the chance to implement more stable linear-equation solvers.

Nadav.
From: Konrad H. <hi...@cn...> - 2003-11-01 21:16:49

On Saturday 01 November 2003 03:12, Edward C. Jones wrote:
> I compile and link Python extension modules using the script
>
>     gcc -fPIC -g -I/usr/local/include/python2.3 \
>         -Wall -Wstrict-prototypes -c mymodule.c
>     g++ -shared mymodule.o -L/usr/local/lib -o mymodule.so
>
> It works for me but it isn't pretty. Is there a better way to write it?

Yes, the distutils module. It's part of the Python standard library and documented there.

> Gcc finds all the libraries that need to be linked in. For example,
> "/usr/local/lib/python2.3/site-packages/numarray/libnumarray.so". How
> does gcc do this?

It doesn't :-) And it doesn't have to. You are not creating a stand-alone executable, but a shared library for use in Python. Everything is linked together dynamically at runtime when Python imports all the modules.

> I created a .so file "utilities.so" that contains some C functions that
> are called in mymodule.c but are not visible from Python. Both
> "utilities.c" and "mymodule.c" use numarray. What changes do I make in
> the script above? Must I use the nasty "libnumarray_UNIQUE_SYMBOL" trick?

I don't know enough about numarray to answer that question, but that is certainly one option. Alternatively, you could include utilities.c into mymodule.c.

> What is a script for creating "utilities.a" using gcc? How do I change
> the script above to include "utilities.a"?

To create the archive:

    gcc -c utilities.c
    ar r libutilities.a utilities.o

You might have to add some libraries or library paths to the compilation step. To use the archive, just add -lutilities to the gcc command line, plus eventually the path with -L.

Konrad.
--
Konrad Hinsen                            | E-Mail: hi...@cn...
Centre de Biophysique Moleculaire (CNRS) | Tel.: +33-2.38.25.56.24
Rue Charles Sadron                       | Fax:  +33-2.38.63.15.17
45071 Orleans Cedex 2                    | Deutsch/Esperanto/English/
France                                   | Nederlands/Francais
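[A minimal setup.py of the kind Konrad recommends; the module and file names are placeholders. Folding utilities.c into the same Extension also sidesteps the cross-module linkage question above, since both files end up in one shared object.]

    # setup.py -- build with:  python setup.py build
    from distutils.core import setup, Extension

    setup(name="mymodule",
          version="1.0",
          ext_modules=[Extension("mymodule",
                                 sources=["mymodule.c", "utilities.c"])])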
From: Edward C. J. <edc...@er...> - 2003-11-01 02:15:40

I compile and link Python extension modules using the script:

    gcc -fPIC -g -I/usr/local/include/python2.3 \
        -Wall -Wstrict-prototypes -c mymodule.c
    g++ -shared mymodule.o -L/usr/local/lib -o mymodule.so

It works for me but it isn't pretty. Is there a better way to write it?

Gcc finds all the libraries that need to be linked in. For example, "/usr/local/lib/python2.3/site-packages/numarray/libnumarray.so". How does gcc do this?

I created a .so file "utilities.so" that contains some C functions that are called in mymodule.c but are not visible from Python. Both "utilities.c" and "mymodule.c" use numarray. What changes do I make in the script above? Must I use the nasty "libnumarray_UNIQUE_SYMBOL" trick?

What is a script for creating "utilities.a" using gcc? How do I change the script above to include "utilities.a"?
From: Rob W.W. H. <ro...@ho...> - 2003-10-31 13:20:48

I am using Polynomial.py from Scientific Python 2.1, together with Numeric 17.1.2. This has always served me well, but now we are busy upgrading our software, and I am currently porting some code to Scientific Python 2.4.1, Numeric 22.0. Suddenly I no longer manage to get proper 2D polynomial fits over 4x4th order. At 5x5 the coefficients that come back from LinearAlgebra.linear_least_squares have exploded. In the old setup, I easily managed 9x9th order if I needed to, but most of the time I'd stop at 6x6th order. Would anyone have any idea how this difference can come about? I managed to work around this for the moment by using the equivalent code in the fitPolynomial routine that uses LinearAlgebra.generalized_inverse (and it doesn't even have problems with the same data at 8x8), but this definitely feels not right! I can't remember reading anything like this here before.

Together with Konrad Hinsen, I came to the conclusion that the problem is not in Scientific Python, so it must be the underlying LinearAlgebra code that changed between releases 17 and 22.

I hacked up a simplified example. Not sure whether it is the most simple case, but this resembles what I have in my code, and I'm quite sure it worked with Numeric 17.x, but currently it is horrible over order (4,4):

    import Numeric

    def func(x,y):
        return x+0.1*x**2+0.01*x**4+0.002*x**6+0.03*x*y+0.001*x**4*y**5

    x=[]
    y=[]
    z=[]
    for dx in Numeric.arange(0,1,0.01):
        for dy in Numeric.arange(0,1,0.01):
            x.append(dx)
            y.append(dy)
            z.append(func(dx,dy))

    from Scientific.Functions import Polynomial
    data=Numeric.transpose([x,y])
    z=Numeric.array(z)
    for i in range(10):
        print data[i],z[i]
    pol=Polynomial.fitPolynomial((4,4),data,z)
    print pol.coeff

For (4,4) this prints:

    [[  1.84845529e-05  -7.60502772e-13   2.71314749e-12  -3.66731796e-12   1.66977148e-12]
     [  9.99422967e-01   3.00000000e-02  -3.26346097e-11   4.42406519e-11  -2.01549767e-11]
     [  1.03899464e-01  -3.19668064e-11   1.14721790e-10  -1.55489826e-10   7.08425891e-11]
     [ -9.40275000e-03   4.28456838e-11  -1.53705205e-10   2.08279772e-10  -9.48840470e-11]
     [  1.80352695e-02  -1.10999843e-04   8.00662570e-04  -2.17266676e-03   2.47500004e-03]]

For (5,5):

    [[ -2.25705839e+03   6.69051337e+02  -6.60470163e+03   6.66572425e+03  -8.67897022e+02   1.83974866e+03]
     [ -2.58646837e+02  -2.46554689e+03   1.15965805e+03   7.01089888e+03  -2.11395436e+03   2.10884815e+03]
     [  3.93307499e+03   4.34484805e+02  -4.84080392e+03   5.90375330e+03   1.16798049e+03  -4.14163933e+03]
     [  1.62814750e+03   2.08717457e+03   1.15870693e+03  -3.37838057e+03   3.49821689e+03   5.80572585e+03]
     [  4.54127557e+02  -1.56645524e+03   4.58997025e+00   1.69772635e+03  -1.37751039e+03  -7.59726558e+02]
     [  2.37878239e+03   9.43032094e+02   8.58518644e+02  -8.35846339e+03  -5.55845668e+02   1.87502761e+03]]

which is clearly wrong.

I appreciate any help!

Regards,

Rob
--
Rob W.W. Hooft || ro...@ho... || http://www.hooft.net/people/rob/
From: Todd M. <jm...@st...> - 2003-10-27 16:04:00

There's a mix of "interface" and "implementation" in libnumarray, since the API jump table mechanism is the only kind of inter-module linkage used. Since some of the functions in libnumarray are just there because they needed to be shared within the implementation, not exported as public interface, "everything" is never going to be documented. However, specific holes that people point out certainly can be. I'll document this one.

Regards,
Todd

On Thu, 2003-10-23 at 20:48, Edward C. Jones wrote:
> "NA_getType" needs to be documented. The following is useful:
>
>     if (!PyArg_ParseTuple(args, "O", &type))
>         return NULL;
>
>     temptype = NA_getType(type);
>     if (temptype == NULL) return NULL;
>     typeno = NA_typeObjectToTypeNo(temptype);
>     Py_DECREF(temptype);
>     if (typeno < 0) {
>         PyErr_Format(PyExc_RuntimeError,
>             "_numarray_init: can't get typeno for type");
>         return NULL;
>     }
>
> Do other functions in "libnumarraymodule.c" need to be documented?

--
Todd Miller
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21030
(410) 338 - 4576