From: David H. <dav...@gm...> - 2006-07-05 13:23:17
|
HI JJ, try stats.chi2.rvs(10) or stats.chi2.rvs(10, loc=0, scale=100, size=5). df is not a keyword argument, so writing df=10 explicitly was causing the error. David 2006/7/4, JJ <jos...@ya...>: > > Hello. I have a very simple question. I would like > to generate a number of random variables from the chi2 > distribution. If I wanted these for the normal > distribution, the code could be > stats.norm.rvs(size=5,loc=100,scale=1). But > stats.chi2.rvs(size=5,df=10,loc=0,scale=1) or > stats.chi2.rvs(df=10,loc=0,scale=1) or > stats.chi2.rvs(df=10) does not work. Can anyone tell > me what the proper syntax would be for this? > Thanks JJ |
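David's fix (the degrees of freedom are a positional shape parameter, as in stats.chi2.rvs(10, size=5)) can also be sketched without SciPy at all, since NumPy's own generator draws chi-square variates directly. The seed and sample sizes below are arbitrary choices for illustration, not from the thread:

```python
import numpy as np

# NumPy-only equivalent of stats.chi2.rvs(10, size=5): the Generator
# draws chi-square variates directly.  Seed 0 is an arbitrary choice.
rng = np.random.default_rng(0)
samples = rng.chisquare(df=10, size=5)

# Sanity check against the distribution: a chi2(df) variate has mean df.
big = rng.chisquare(df=10, size=100_000)
```

In SciPy itself the positional call stats.chi2.rvs(10, size=5) works everywhere; newer SciPy releases also accept the shape parameter by name (df=10), but passing it positionally is the portable spelling.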
From: <Fer...@eu...> - 2006-07-05 12:53:02
|
Hi, how could I get the name of an array as a string? (info command?!) thanks f. |
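There is no built-in answer to f.'s question: an ndarray carries no name attribute, so the closest thing is searching a namespace for bindings to the object. A minimal sketch (the helper name names_of is made up for illustration, not a NumPy function):

```python
import numpy as np

def names_of(obj, namespace):
    # Arrays have no intrinsic name; the best available trick is to scan
    # a namespace dict (e.g. globals()) for names bound to the object.
    return [name for name, val in namespace.items() if val is obj]

a = np.arange(3)
```

Note this only finds names in the namespace you hand it, and identity (`is`) is deliberate: two equal arrays are still different objects.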
From: Bill B. <wb...@gm...> - 2006-07-05 07:17:21
|
I like the idea of enthon, but I hadn't given it much thought because it seems to lag so far behind in versions. Python 2.3, Numpy 0.9.6, etc. But maybe that's not as big a deal as I've been thinking? Two questions: * Can it happily co-exist with separate installs of more up-to-date versions of things? Like can I have Python 2.4 installed at the same time? Or a different Python 2.3? * Or is it user-upgradeable in a piecemeal manner? E.g. could I install Numpy 0.9.8 over top of enthon's older version of Numpy? --bb On 7/5/06, Edin Salković <edi...@gm...> wrote: > > Have you tried: > http://code.enthought.com/enthon/ > > Cheers, > Edin > > On 7/5/06, Bill Baxter <wb...@gm...> wrote: > > I tried to get MayaVi and VTK working under Win32/MSVC.Net a while back > > failed miserably. > > > > Is there some simple, out-of-the-box, precompiled, no-brains-required > > solution for creating 3D plots? Preferably one that doesn't require > > compiling anything. > > > > --Bill > > > |
From: Bill B. <wb...@gm...> - 2006-07-05 06:41:23
|
I tried to get MayaVi and VTK working under Win32/MSVC.Net a while back and failed miserably. Is there some simple, out-of-the-box, precompiled, no-brains-required solution for creating 3D plots? Preferably one that doesn't require compiling anything. --Bill |
From: <677...@16...> - 2006-07-05 05:42:42
|
numpy-discussion: Hello! You can look it up at www.cmpi.cn, go have a look! Regards! China Metal Industry Network 677...@16... 2006-07-05 |
From: Bill B. <wb...@gm...> - 2006-07-05 05:00:24
|
Slight correction. {*} except that negative axes for swapaxes doesn't seem to work currently, so > instead it would need to be something like: > a.transpose( tuple(range(a.ndim-2)) + (a.ndim-1, a.ndim-2) ) > with a check for "if ndim > 1", of course. > Apparently a.swapaxes(-2,-1) does work, and it does exactly what I am suggesting, including leaving zero-d and 1-d arrays alone. Not sure why I thought it wasn't working. So in short my proposal is to: -- make a.T a property of array that returns a.swapaxes(-2,-1), -- make a.H a property of array that returns a.conjugate().swapaxes(-2,-1) and maybe -- make a.M a property of array that returns numpy.asmatrix(a) --Bill |
From: Bill B. <wb...@gm...> - 2006-07-05 04:03:36
|
Just wanted to make one last effort to get a .T attribute for arrays, so that you can flip axes with a simple "a.T" instead of "a.transpose()", as with numpy matrix objects. If I recall, the main objection raised before was that there are lots of ways to transpose n-dimensional data. Fine, but the fact is that 2D arrays are pretty darn common, and so are a special case worth optimizing for. Furthermore transpose() won't go away if you do need to do some specific kind of axes swapping other than the default, so no one is really going to be harmed by adding it. I propose to make .T a synonym for .swapaxes(-2,-1) {*}, i.e. the last two axes are interchanged. This should also make it useful in many N-d array cases (whereas the default of .transpose() -- to completely reverse the order of all the axes -- is seldom what you want). Part of the thinking is that when you print an N-d array it's the last two dimensions that get printed like 2-d matrices separated by blank lines. You can think of it as some number of stacks of 2-d matrices. So this .T would just transpose those 2-d matrices in the printout. Those are the parts that are generally most contiguous in memory also, so it makes sense for 2-d matrix bits to be stored in those last two dimensions. Then, if there is a .T, it makes sense to also have .H which would basically be equivalent to .T.conjugate(). Finally, the matrix class has .A to get the underlying array -- it would also be nice to have a .M on array as a shortcut for asmatrix(). This one would be very handy for matrix users, I think, but I could go either way on that, having abandoned matrix myself. Ex: ones([4,4]).M Other possibilities: - Make .T a function, so that you can pass it the same info as .transpose(). Then the shortcut becomes a.T(), which isn't as nice, and isn't consistent with matrix's .T any more. - Just make .T raise an error for ndim>2. 
But I don't really see any benefit in making it an error as opposed to defining a reasonable default behavior. - Make .T on a 1-dim array return a 2-dim Nx1 array. (My default suggestion is to just leave it alone if ndim < 2; an exception would be another possibility). Would make an easy way to create column vectors from arrays, but I can think of nothing else in Numpy that acts that way. This is not a 1.0 must-have, as it introduces no backward compatibility issues. But it would be trivial to add if the will is there. {*} except that negative axes for swapaxes doesn't seem to work currently, so instead it would need to be something like: a.transpose( tuple(range(a.ndim-2)) + (a.ndim-1, a.ndim-2) ) with a check for "if ndim > 1", of course. --Bill |
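For what it's worth, the behavior Bill proposes can be spelled today as a one-line helper; much later, NumPy 2.0 added an mT attribute with exactly this swap-the-last-two-axes meaning, while ndarray.T kept the reverse-all-axes default. A sketch (the helper name is ours, not from the thread):

```python
import numpy as np

def T_proposed(a):
    # Sketch of the proposal: swap the last two axes, and leave 0-d and
    # 1-d arrays alone.
    return a if a.ndim < 2 else a.swapaxes(-2, -1)

stack = np.arange(24).reshape(2, 3, 4)  # a "stack" of two 3x4 matrices
```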
From: Paul D. <pfd...@gm...> - 2006-07-04 21:54:34
|
Some things to note: The mask is copy-on-write. Don't mess with that. You can't just poke values into an existing mask, it may be shared with other arrays. I do not agree that there is any 'inconsistency'. It may be someone's concept of the class that if there is a mask then at least one value is on, but that was not my design. I believe if you try your ideas you'll find it slows other people down, if not you. Perhaps with all of Travis' new machinery, subclassing works. It didn't use to, and I haven't kept up. On 7/3/06, Pierre GM <pgm...@ma...> wrote: > > Michael, > I wonder whether the Mask class you suggest is not a bit overkill. There > should be enough tools in the existing MA module to do what we want. And I > don't wanna think about compatibility, the number of changes in the MA code > that'd be required (but I'm lazy)... > > For the sake of consistency and optimization, I still think it could be > easier > (and cleaner) to make `nomask` the default for a MaskedArray without > masked > values. That could for example be implemented by forcing `nomask` at the > creation of the MaskedArray with an extra > `if mask and not mask.any(): mask=nomask`, or by using Paul's > make_mask( flag=1) trick. > > Masking some specific values could still be done when mask is nomask with > an > intermediary MA.getmaskarray() step. > > On a side note, modifying an existing mask is a delicate matter. > Everything's > OK if you use masks as a way to hide existing data, it's more complex when > initially you have some holes in your dataset... > |
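Pierre's nomask suggestion is essentially how numpy.ma behaves today: an array created without masked values stores the nomask sentinel rather than a full boolean array, and make_mask can shrink an all-False mask back to it. A small sketch of the modern behavior:

```python
import numpy as np
import numpy.ma as ma

# No masked values: the mask is the `nomask` sentinel, not a bool array.
clean = ma.masked_array([1.0, 2.0, 3.0])

# An all-False mask can be collapsed back to nomask explicitly.
shrunk = ma.make_mask([False, False, False], shrink=True)
```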
From: JJ <jos...@ya...> - 2006-07-04 21:23:27
|
Hello. I have a very simple question. I would like to generate a number of random variables from the chi2 distribution. If I wanted these for the normal distribution, the code could be stats.norm.rvs(size=5,loc=100,scale=1). But stats.chi2.rvs(size=5,df=10,loc=0,scale=1) or stats.chi2.rvs(df=10,loc=0,scale=1) or stats.chi2.rvs(df=10) does not work. Can anyone tell me what the proper syntax would be for this? Thanks JJ |
From: David M. C. <co...@ph...> - 2006-07-04 20:09:44
|
On Tue, 4 Jul 2006 12:10:18 +0200 Jan-Matthias Braun <jan...@gm...> wrote: > Hi all, > > I'm testing some computations with float96 at the moment and right now I > have problems with polyfit raising a KeyError for the keycode 'g', which is > floatxx with xx>64. Use longdouble instead of float96; it'll make your code portable. > I am getting a KeyError using polyfit on some float96 values. The used > Routines seem to know nothing about this type. > > My main question is: have I missed something? Shouldn't this type be used? > Below is a more detailed descripton. polyfit uses LAPACK to do least-squares for the fit, and LAPACK doesn't handle any real types besides single- and double-precision. And Numpy (as a design decision) doesn't like casting to a lower precision. > In file numpy/linalg/linalg.py, the following definitions at lines 26ff > seem to be the offending ones: > # Helper routines > _array_kind = {'i':0, 'l': 0, 'q': 0, 'f': 0, 'd': 0, 'F': 1, 'D': > 1} _array_precision = {'i': 1, 'l': 1, 'q': 1, 'f': 0, 'd': 1, 'F': 0, 'D': > 1} _array_type = [['f', 'd'], ['F', 'D']] > > Here the new typecodes are missing. I tried > # Helper routines > _array_kind = {'i':0, 'l': 0, 'q': 0, 'f': 0, 'd': 0, 'g': '0', > 'F': 1, 'D':1, 'G':1} > _array_precision = {'i': 1, 'l': 1, 'q': 1, 'f': 0, 'd': 1, 'g': 1, > 'F': 0, 'D': 1, 'G': 1} > _array_type = [['f', 'd', 'g'], ['F', 'D', 'G']] > > which gets me a step further to a TypeError: > > File "lib/python2.3/site-packages/numpy/linalg/linalg.py", line 454, in > lstsq > bstar[:b.shape[0],:n_rhs] = b.copy() > TypeError: array cannot be safely cast to required type That would be a better error. I'm going to leave it like this for now, though. Instead, the linalg module will be converted to use dtypes. > (Question: Why only one typecode for a type which varies in bitlength on > different platforms? On Opteron CPU's I've seen float128 with 'g'?) 
Typecodes are deprecated (they date back to Numeric), so we're not bothering to add new ones. The equivalent to 'g' is the dtype longdouble. -- |>|\/|< /--------------------------------------------------------------------------\ |David M. Cooke http://arbutus.physics.mcmaster.ca/dmc/ |co...@ph... |
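David's points can be checked directly: 'g' is the single typecode for longdouble whatever its platform width, and the LAPACK-backed polyfit path is double precision, so a portable workaround is to downcast before fitting. A sketch (the linear example data are made up):

```python
import numpy as np

# One typecode, platform-dependent width: 'g' always means longdouble,
# whether that is 80-bit extended (stored in 96 or 128 bits) or plain
# 64-bit double on the platform at hand.
ld = np.dtype(np.longdouble)

# polyfit goes through double-precision LAPACK, so cast explicitly.
x = np.linspace(0.0, 1.0, 10).astype(np.longdouble)
y = 2 * x + 1
coeffs = np.polyfit(x.astype(np.float64), y.astype(np.float64), 1)
```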
From: Bart V. <bar...@cs...> - 2006-07-04 15:25:29
|
Hi all, reading the thread "Ransom proposals" I was wondering why there isn't an ndarray.dot() method. There is already a scipy.sparse.dot(), so this would fit nicely into the whole idea of polymorphism. Bart |
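Bart's wish did eventually land: modern NumPy gives ndarray a dot method, and Python later added the @ operator, both agreeing with the free function. A quick check:

```python
import numpy as np

a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)

method = a.dot(b)        # the ndarray.dot method Bart asks about
function = np.dot(a, b)  # the free-function spelling
operator = a @ b         # later still, PEP 465's matmul operator
```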
From:
<je...@fy...> - 2006-07-04 14:25:57
|
Hi! With numpy-0.9.9.2726, I do this: >>> x = arange(4) >>> y = x[newaxis, :] I would expect both arrays to be contiguous: >>> x.flags.contiguous, y.flags.contiguous (True, False) Shouldn't y be contiguous? Maybe it's because of the strange strides: >>> y.strides (0, 4) >>> y.strides = (16, 4) >>> y.flags.contiguous True Jens Jørgen |
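The puzzle here is that x[newaxis, :] is a zero-copy view whose inserted length-1 axis gets stride 0; the NumPy of 2006 looked at that stride and reported the view as non-contiguous. Modern NumPy ignores strides of length-1 axes when computing the flags, so the same view now reports contiguous. A sketch of what holds today:

```python
import numpy as np

x = np.arange(4)
y = x[np.newaxis, :]  # zero-copy view with an inserted length-1 axis

# The data are shared and the real axis keeps x's stride; current NumPy
# ignores the length-1 axis when computing the contiguity flag.
```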
From: David H. <dav...@gm...> - 2006-07-04 13:28:08
|
Hi John, Here is a patch to fix the first error in test_twodim_base.py. I'm sorry I can't help you with the rest. David 2006/7/4, John Carter <jn...@ec...>: > > Hi, > > As is the way posting to a news group stirs the brain cell into > activity and the problem is solved. or rather shifted. > > I've downloaded the candidate version of mingw32 and using that to > build numpy/scipy works, or rather it builds the extensions for Python 2.3 > > I believe that there are still problems as both numpy and scipy fail > their tests. I'm also dubious as to whether they are finding Altas and > Lapack. > > Configuration: > > Windows XP SP2 > Cygwin = gcc version 3.4.4 (cygming special) (gdc 0.12, using dmd 0.125) > MinGW = gcc version 3.4.5 (mingw special) > Python = Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit > (Intel)] on win32 > > scipy, numpy, Atlas, Lapack all downloaded within the last 2 days > > > I've cut and pasted the error messages from the tests below. > > Any help gratefully received. > > John > > > ===== > numpy > ===== > D:.\numerical> python > Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on > win32 > Type "help", "copyright", "credits" or "license" for more information. 
> >>> import numpy > >>> numpy.test() > Found 5 tests for numpy.distutils.misc_util > Found 3 tests for numpy.lib.getlimits > Found 30 tests for numpy.core.numerictypes > Found 13 tests for numpy.core.umath > Found 3 tests for numpy.core.scalarmath > Found 8 tests for numpy.lib.arraysetops > Found 42 tests for numpy.lib.type_check > Found 101 tests for numpy.core.multiarray > Found 36 tests for numpy.core.ma > Found 10 tests for numpy.lib.twodim_base > Found 10 tests for numpy.core.defmatrix > Found 1 tests for numpy.lib.ufunclike > Found 38 tests for numpy.lib.function_base > Found 3 tests for numpy.dft.helper > Found 1 tests for numpy.lib.polynomial > Found 7 tests for numpy.core.records > Found 26 tests for numpy.core.numeric > Found 4 tests for numpy.lib.index_tricks > Found 46 tests for numpy.lib.shape_base > Found 0 tests for __main__ > > ........................................................................................................ > > ........................................................................................................ > > .........E.............................................................................................. > ........................... 
> ====================================================================== > ERROR: check_simple (numpy.lib.tests.test_twodim_base.test_histogram2d) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "C:\Python23\Lib\site-packages\numpy\lib\tests\test_twodim_base.py", > line 137, in check_simple > np.random.seed(1) > File "mtrand.pyx", line 311, in mtrand.RandomState.seed > SystemError: C:\sf\python\dist\src-maint23\Objects\longobject.c:240: > bad argument to internal function > > ---------------------------------------------------------------------- > Ran 387 tests in 1.391s > > FAILED (errors=1) > <unittest.TextTestRunner object at 0x010DB8F0> > >>> > > ===== > scipy > ===== > D:.\numerical> python > Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on > win32 > Type "help", "copyright", "credits" or "license" for more information. > >>> import scipy > >>> scipy.test() > Overwriting lib=<module 'scipy.lib' from > 'C:\PYTHON23\lib\site-packages\scipy\lib\__init__.pyc'> from > C:\PYTHON23\lib\si > te-packages\scipy\lib\__init__.pyc (was <module 'numpy.lib' from > 'C:\PYTHON23\lib\site-packages\numpy\lib\__init__.pyc'> > from C:\PYTHON23\lib\site-packages\numpy\lib\__init__.pyc) > Overwriting fft=<function fft at 0x013E7AB0> from scipy.fftpack.basic > (was <function fft at 0x00AAEE30> from numpy.dft.f > ftpack) > Overwriting ifft=<function ifft at 0x013E7AF0> from > scipy.fftpack.basic (was <function ifft at 0x00AAEE70> from numpy.df > t.fftpack) > Found 4 tests for scipy.io.array_import > Found 128 tests for scipy.linalg.fblas > Found 397 tests for scipy.ndimage > Found 10 tests for scipy.integrate.quadpack > Found 97 tests for scipy.stats.stats > Found 47 tests for scipy.linalg.decomp > Found 2 tests for scipy.integrate.quadrature > Found 95 tests for scipy.sparse.sparse > Found 20 tests for scipy.fftpack.pseudo_diffs > Found 6 tests for scipy.optimize.optimize > Found 5 tests 
for scipy.interpolate.fitpack > Found 1 tests for scipy.interpolate > Found 12 tests for scipy.io.mmio > Found 10 tests for scipy.stats.morestats > Found 4 tests for scipy.linalg.lapack > Found 18 tests for scipy.fftpack.basic > Found 4 tests for scipy.linsolve.umfpack > Found 4 tests for scipy.optimize.zeros > Found 41 tests for scipy.linalg.basic > Found 2 tests for scipy.maxentropy.maxentropy > Found 358 tests for scipy.special.basic > Found 128 tests for scipy.lib.blas.fblas > Found 7 tests for scipy.linalg.matfuncs > > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses flapack instead of clapack. > **************************************************************** > > Found 42 tests for scipy.lib.lapack > Found 1 tests for scipy.optimize.cobyla > Found 16 tests for scipy.lib.blas > Found 1 tests for scipy.integrate > Found 14 tests for scipy.linalg.blas > Found 70 tests for scipy.stats.distributions > Found 4 tests for scipy.fftpack.helper > Found 4 tests for scipy.signal.signaltools > Found 0 tests for __main__ > > Don't worry about a warning regarding the number of bytes read. > Warning: 1000000 bytes requested, 20 bytes read. > .......caxpy:n=4 > ..caxpy:n=3 > ....ccopy:n=4 > ..ccopy:n=3 > .............cscal:n=4 > ....cswap:n=4 > ..cswap:n=3 > .....daxpy:n=4 > ..daxpy:n=3 > ....dcopy:n=4 > ..dcopy:n=3 > .............dscal:n=4 > ....dswap:n=4 > ..dswap:n=3 > .....saxpy:n=4 > ..saxpy:n=3 > ....scopy:n=4 > ..scopy:n=3 > .............sscal:n=4 > ....sswap:n=4 > ..sswap:n=3 > .....zaxpy:n=4 > ..zaxpy:n=3 > ....zcopy:n=4 > ..zcopy:n=3 > .............zscal:n=4 > ....zswap:n=4 > ..zswap:n=3 > > ........................................................................................................................ 
> > ........................................................................................................................ > > ........................................................................................................................ > > ........................................................................................................................ > > .........................................................................Took > 13 points. > ..........Resizing... 16 17 24 > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. > ........................Use minimum degree ordering on A'+A. > ...................Resizing... 16 17 24 > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. > .................Resizing... 16 17 24 > Resizing... 20 7 35 > Resizing... 23 7 47 > Resizing... 24 25 58 > Resizing... 28 7 68 > Resizing... 28 27 73 > .....Use minimum degree ordering on A'+A. > > ......................................C:\PYTHON23\lib\site-packages\scipy\interpolate\fitpack2.py:410: > UserWarning: > The coefficients of the spline returned have been computed as the > minimal norm least-squares solution of a (numerically) rank deficient > system (deficiency=7). If deficiency is large, the results may be > inaccurate. Deficiency may strongly depend on the value of eps. > warnings.warn(message) > ....................Ties preclude use of exact statistic. > ..Ties preclude use of exact statistic. > ........ > **************************************************************** > WARNING: clapack module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses flapack instead of clapack. 
> **************************************************************** > > ....................data-ftype: z compared to data D > Calling _superlu.zgssv > Use minimum degree ordering on A'+A. > .data-ftype: c compared to data F > Calling _superlu.cgssv > Use minimum degree ordering on A'+A. > .data-ftype: d compared to data d > Calling _superlu.dgssv > Use minimum degree ordering on A'+A. > .data-ftype: s compared to data f > Calling _superlu.sgssv > Use minimum degree ordering on A'+A. > > ........................................................................................................................ > > ........................................................................................................................ > ....................................Gegenbauer, a = 2.7337413228 > > ........................................................................................................................ > .............caxpy:n=4 > ..caxpy:n=3 > ....ccopy:n=4 > ..ccopy:n=3 > .............cscal:n=4 > ....cswap:n=4 > ..cswap:n=3 > .....daxpy:n=4 > ..daxpy:n=3 > ....dcopy:n=4 > ..dcopy:n=3 > .............dscal:n=4 > ....dswap:n=4 > ..dswap:n=3 > .....saxpy:n=4 > ..saxpy:n=3 > ....scopy:n=4 > ..scopy:n=3 > .............sscal:n=4 > ....sswap:n=4 > ..sswap:n=3 > .....zaxpy:n=4 > ..zaxpy:n=3 > ....zcopy:n=4 > ..zcopy:n=3 > .............zscal:n=4 > ....zswap:n=4 > ..zswap:n=3 > ...Result may be inaccurate, approximate err = 4.1928136851e-009 > ...Result may be inaccurate, approximate err = 7.27595761418e-012 > .............................................F................Residual: > 1.05006950319e-007 > . > **************************************************************** > WARNING: cblas module is empty > ----------- > See scipy/INSTALL.txt for troubleshooting. > Notes: > * If atlas library is not found by numpy/distutils/system_info.py, > then scipy uses fblas instead of cblas. 
> **************************************************************** > > > ............................................................................................ > ====================================================================== > FAIL: check_simple (scipy.optimize.tests.test_cobyla.test_cobyla) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File > "C:\Python23\Lib\site-packages\scipy\optimize\tests\test_cobyla.py", > line 20, in check_simple > assert_almost_equal(x, [x0, x1], decimal=5) > File "C:\Python23\Lib\site-packages\numpy\testing\utils.py", line > 152, in assert_almost_equal > return assert_array_almost_equal(actual, desired, decimal, err_msg) > File "C:\Python23\Lib\site-packages\numpy\testing\utils.py", line > 222, in assert_array_almost_equal > header='Arrays are not almost equal') > File "C:\Python23\Lib\site-packages\numpy\testing\utils.py", line > 207, in assert_array_compare > assert cond, msg > AssertionError: > Arrays are not almost equal > > (mismatch 100.0%) > x: array([ 4.957975 , 0.64690335]) > y: array([ 4.95535625, 0.66666667]) > > ---------------------------------------------------------------------- > Ran 1552 tests in 5.828s > > FAILED (failures=1) > <unittest.TextTestRunner object at 0x02671210> > >>> > > > > > > Dr. John N. Carter. E-Mail : > jn...@ec... > Building 1, Room > 2005 http://www.ecs.soton.ac.uk/~jnc/ > Information: Signals, Images, Systems Phone : +44 (0) 23 8059 > 2405 > School of Electronics & Computer Science, Fax : +44 (0) 23 8059 > 4498 > Southampton University, Hants, UK SO17 1BJ. > > > Using Tomcat but need to do more? Need to support web services, security? 
> Get stuff done quickly with pre-integrated technology to make your job > easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Numpy-discussion mailing list > Num...@li... > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > |
From: Steffen L. <ste...@gm...> - 2006-07-04 13:20:42
|
Hi all, I made some speed tests using the sin-function and the %-operation to compare Numeric, numpy 0.9.8 and numpy 0.9.9.2732. As a result, the latest numpy version seems to be very slow in comparison to the two other candidates. Results (in usec per loop):

                            sin-array   mod-array
  Numeric                       134          18
  numpy 0.9.8                    97          55
  numpy 0.9.9.2732              204         316
  numpy 0.9.8 + math             38
  numpy 0.9.9.2732 + math       161
  Numeric + math                 23

The used scripts can be found at the end. Can anyone verify my results and explain the observed speed regression? Thanks, Steffen

sin-scripts:

/usr/lib/python2.3/timeit.py -s "from Numeric import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1" "for i in arange(9): x[i+1]=sin(x[i])"

/usr/lib/python2.3/timeit.py -s "from numpy import sin,zeros,arange; x=zeros(10, 'd'); x[0]=0.1" "for i in arange(9): x[i+1]=sin(x[i])"

/usr/lib/python2.3/timeit.py -s "from math import sin; from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1" "for i in arange(9): x[i+1]=sin(x[i])"

/usr/lib/python2.3/timeit.py -s "from math import sin; from Numeric import zeros,arange; x=zeros(10, 'd'); x[0]=0.1" "for i in arange(9): x[i+1]=sin(x[i])"

%-scripts:

/usr/lib/python2.3/timeit.py -s "from Numeric import zeros,arange; x=zeros(10, 'd'); x[0]=0.1" "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)"

/usr/lib/python2.3/timeit.py -s "from numpy import zeros,arange; x=zeros(10, 'd'); x[0]=0.1" "for i in arange(9): x[i+1]=(x[i]+1.1)%(1.0)" |
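The shell one-liners above can also be run in-process with the timeit module, which makes it easier to compare variants side by side. A sketch for the numpy sin case (absolute numbers are machine-dependent; only the comparison between setups matters):

```python
import timeit

# In-process version of the command-line benchmark: time the 9-step
# sin recursion, then report seconds per loop iteration.
setup = "import numpy as np; x = np.zeros(10); x[0] = 0.1"
stmt = "for i in range(9): x[i+1] = np.sin(x[i])"

per_loop = timeit.timeit(stmt, setup=setup, number=1000) / 1000
```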
From: Albert S. <fu...@gm...> - 2006-07-04 12:58:12
|
Hello all On Tue, 04 Jul 2006, Thomas Heller wrote: > Albert Strasheim schrieb: > > Hey Thomas > > > > Thomas Heller wrote: > >> Thomas Heller schrieb: > >> > I've also played a little, and I think one important limitation in > >> ctypes > >> > is that items in the argtypes list have to be ctypes types. > >> > >> This was misleading: I mean that this limitation should probably be > >> removed, because it prevents a lot of things one could do. > > > > What's your thinking on getting these changes made to ctypes and on ctypes' > > future development in general? > > > > Presumably you can't change it too much with the Python 2.5 release coming > > up, but it would be a shame if we had to wait until Python 2.6 to get the > > changes you suggested (and other goodies, like the array interface). > > I have asked on python-dev, let's wait for the answer. > I hope that at least the limitation that I mentioned can be removed in Python 2.5. Sounds great. > The goal of my post was to show that (without this restriction) a lot can > already be done in Python, of course it would be better if this could be > implemented in C and integrated in ctypes. > > For the numpy/ctypes integration I'm not absolutely sure what would be needed most: > > Is there a need to convert between ctypes and numpy arrays? If numpy arrays can > be passed to ctypes foreign functions maybe there is no need at all for the conversion. > We could probably even live with helper code like that I posted outside of ctypes... I think there are basically two ways for a C library to work with regards to memory allocation: 1. let the user allocate the array/struct/whatever and pass a pointer to the library to manipulate 2. let the library allocate the array/struct/whatever, manipulate it and return the pointer to the user I think the first case is pretty much covered. 
Where in the past you would create the array or struct on the stack or allocate it on the heap with malloc, you now create a ctypes Structure or a NumPy array and pass that to the C function. In the second case, one would want to wrap a NumPy array around the ctype so that you can manipulate the data returned by the library. I don't know if this second scenario is very common -- hopefully not. If not, then having ctypes implement the array interface isn't too critical, since you wouldn't typically need to make a NumPy array from existing data. What do you think? Regards, Albert |
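Albert's second case, wrapping library-allocated memory in a NumPy array, is what np.ctypeslib.as_array is for: it wraps a ctypes array (or a pointer plus shape) without copying. A sketch where a ctypes array stands in for library-owned memory:

```python
import ctypes
import numpy as np

# Pretend this buffer came back from a C library.
buf = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)

# Zero-copy wrap: writes through the array are visible in the buffer.
arr = np.ctypeslib.as_array(buf)
arr[0] = 99.0
```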
From: stephen e. <ste...@gm...> - 2006-07-04 12:00:30
|
I've found some matlab code that seems to do the same sort of thing. Interestingly enough it just uses trigonometry to find the x,y positions in the matrix that correspond to the ray at a particular angle. I had originally discarded this idea because I thought there must be a more efficient way to do it, but perhaps not. I still have a lot to learn about numpy :) Stephen (sorry for the double post to you Torgil) On 7/3/06, Torgil Svensson <tor...@gm...> wrote: > > I've done something similar a few years ago (numarray,numeric). I > started roughly at the middle and did 64 points from a reference point > (xc,yc). This point together with a point at the edge of the image > (xp,yp) also defined a reference angle (a0). (ysize,xsize) is the > shape of the intensity image. > > I used the following code to calculate points of interest: > > na=64 > xr,yr=xsize-xc,ysize-yc > a0=arctan2(yp-yr,xp-xc) > if a0<0: a0+=2*pi > ac=arctan2([yr,yr,-yc,-yc],[xr,-xc,-xc,xr]) > if numarray: > ac[ac<0]+=2*pi > else: > ac=choose(ac<0,(ac,ac+2*pi)) > a1,a2,a3,a4=ac > rmaxfn={ > 0: lambda a: a<=a1 and xr/cos(a-0.0*pi) or yr/cos(0.5*pi-a), > 1: lambda a: a<=a2 and yr/cos(a-0.5*pi) or xc/cos(1.0*pi-a), > 2: lambda a: a<=a3 and xc/cos(a-1.0*pi) or yc/cos(1.5*pi-a), > 3: lambda a: a<=a4 and yc/cos(a-1.5*pi) or xr/cos(2.0*pi-a) > } > angles=arange(a0,a0+2*pi,2*pi/na) > if numarray: > angles[angles>=2*pi]-=2*pi > else: > angles=choose(angles>=2*pi,(angles,angles-2*pi)) > nr=int(ceil(sqrt(max(yc,yr)**2+max(xc,xr)**2))) > crmax=array([int(floor(rmaxfn[floor(a*2/pi)](a))) for a in angles]) > cr=outerproduct(ones(na),arange(float(nr))) > ca=outerproduct(angles,ones(nr)) > x=cr*cos(ca)+xc > y=cr*sin(ca)+yc > > After this I did cubic spline interpolation in the image with these > points and did something useful. I don't know how relevant this is to > you and it doesn't use the linear algebra package but it might give > you some hint. 
> > If you find out a nifty way to do your rays please post on this thread. > > Sidenote -- Watch my explicit float argument to arange and even > putting in pi there in one case. There's a discussion on this list > that floats in arange are troublesome > > > On 6/30/06, stephen emslie <ste...@gm...> wrote: > > I am in the process of implementing an image processing algorithm that > > requires following rays extending outwards from a starting point and > > calculating the intensity derivative at each point. The idea is to find > the > > point where the difference in intensity goes beyond a particular > threshold. > > > > Specifically I'm examining an image of an eye to find the pupil, and the > > edge of the pupil is a sharp change in intensity. > > > > How does one iterate along a line in a 2d matrix, and is there a better > way > > to do this? Is this a problem that linear algebra can help with? > > > > Thanks > > Stephen Emslie > > > > Using Tomcat but need to do more? Need to support web services, > security? > > Get stuff done quickly with pre-integrated technology to make your job > > easier > > Download IBM WebSphere Application Server v.1.0.1 based on Apache > Geronimo > > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > > > > _______________________________________________ > > Numpy-discussion mailing list > > Num...@li... > > https://lists.sourceforge.net/lists/listinfo/numpy-discussion > > > > > > > |
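The trig approach Stephen describes is easy to vectorize in numpy: step unit distances along (cos a, sin a), round to pixel indices, and threshold the finite difference of the sampled intensities. A minimal sketch (the function name and toy image are ours, not from the thread):

```python
import numpy as np

def ray_profile(image, center, angle, n):
    # Sample `image` at `n` pixels stepping outward from `center` along
    # `angle` (radians); coordinates are clipped to the image bounds.
    yc, xc = center
    r = np.arange(n)
    ys = np.clip(np.round(yc + r * np.sin(angle)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(xc + r * np.cos(angle)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

img = np.add.outer(np.arange(5.0), np.arange(5.0))  # toy intensity ramp
prof = ray_profile(img, (0, 0), 0.0, 5)             # ray along +x
# The "pupil edge" is then where the derivative exceeds a threshold:
edges = np.flatnonzero(np.abs(np.diff(prof)) > 0.5)
```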
From: Thomas H. <th...@py...> - 2006-07-04 11:50:11
Albert Strasheim schrieb:
> Hey Thomas
>
> Thomas Heller wrote:
>> Thomas Heller schrieb:
>>> I've also played a little, and I think one important limitation in ctypes
>>> is that items in the argtypes list have to be ctypes types.
>>
>> This was misleading: I mean that this limitation should probably be
>> removed, because it prevents a lot of things one could do.
>
> What's your thinking on getting these changes made to ctypes and on ctypes'
> future development in general?
>
> Presumably you can't change it too much with the Python 2.5 release coming
> up, but it would be a shame if we had to wait until Python 2.6 to get the
> changes you suggested (and other goodies, like the array interface).

I have asked on python-dev; let's wait for the answer. I hope that at least the limitation that I mentioned can be removed in Python 2.5.

The goal of my post was to show that (without this restriction) a lot can already be done in Python. Of course it would be better if this could be implemented in C and integrated in ctypes.

For the numpy/ctypes integration I'm not absolutely sure what would be needed most: is there a need to convert between ctypes and numpy arrays? If numpy arrays can be passed to ctypes foreign functions, maybe there is no need at all for the conversion. We could probably even live with helper code like that I posted outside of ctypes...

Thomas
From: Simon B. <si...@ar...> - 2006-07-04 11:48:02
On Mon, 03 Jul 2006 16:41:11 -0600
Fernando Perez <Fer...@co...> wrote:

> So I'd like to know if SWIG is really the best way out in this particular case
> (and any advice on taking advantage of the array interface via SWIG would be
> appreciated), or if ctypes or pyrex could be used here. I'm quite happy using
> pyrex in other contexts, but I know it doesn't directly support C++. However,
> since I have access to all the code, perhaps a pure C layer could be used to
> bridge the C++/pyrex gap. Or given the recent praise for ctypes, perhaps that
> can be an option?

Pyrex can handle some C++, e.g. making objects and calling methods. You will need to search the pyrex email archives for all the tricks to get this to work.

Simon.
From: Albert S. <fu...@gm...> - 2006-07-04 10:11:49
Hey Thomas

Thomas Heller wrote:
> Thomas Heller schrieb:
>> I've also played a little, and I think one important limitation in ctypes
>> is that items in the argtypes list have to be ctypes types.
>
> This was misleading: I mean that this limitation should probably be
> removed, because it prevents a lot of things one could do.

What's your thinking on getting these changes made to ctypes and on ctypes' future development in general?

Presumably you can't change it too much with the Python 2.5 release coming up, but it would be a shame if we had to wait until Python 2.6 to get the changes you suggested (and other goodies, like the array interface).

Regards,

Albert
From: Jan-Matthias B. <jan...@gm...> - 2006-07-04 10:10:20
Hi all,

I'm testing some computations with float96 at the moment, and right now I have problems with polyfit raising a KeyError for the keycode 'g', which is floatxx with xx>64.

I am getting a KeyError using polyfit on some float96 values. The routines used seem to know nothing about this type. My main question is: have I missed something? Shouldn't this type be used? Below is a more detailed description.

Thanks in advance,
Jan

---------------------------------------------------------------------------

In file numpy/linalg/linalg.py, the following definitions at lines 26ff seem to be the offending ones:

# Helper routines
_array_kind = {'i': 0, 'l': 0, 'q': 0, 'f': 0, 'd': 0, 'F': 1, 'D': 1}
_array_precision = {'i': 1, 'l': 1, 'q': 1, 'f': 0, 'd': 1, 'F': 0, 'D': 1}
_array_type = [['f', 'd'], ['F', 'D']]

Here the new typecodes are missing. I tried

# Helper routines
_array_kind = {'i': 0, 'l': 0, 'q': 0, 'f': 0, 'd': 0, 'g': 0, 'F': 1, 'D': 1, 'G': 1}
_array_precision = {'i': 1, 'l': 1, 'q': 1, 'f': 0, 'd': 1, 'g': 1, 'F': 0, 'D': 1, 'G': 1}
_array_type = [['f', 'd', 'g'], ['F', 'D', 'G']]

which gets me a step further, to a TypeError:

  File "lib/python2.3/site-packages/numpy/linalg/linalg.py", line 454, in lstsq
    bstar[:b.shape[0],:n_rhs] = b.copy()
TypeError: array cannot be safely cast to required type

(Question: why only one typecode for a type which varies in bitlength on different platforms? On Opteron CPUs I've seen float128 with 'g'.)
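For context, 'g' (with 'G' as its complex counterpart) is the typecode NumPy assigns to the platform's long double, which is what float96/float128 map to; a quick check of this, as a sketch:

```python
import numpy as np

# 'g' is NumPy's typecode for the platform's long double, exposed as
# float96 or float128 depending on platform -- the keycode missing from
# the dicts quoted above.
print(np.dtype(np.longdouble).char)    # g
print(np.dtype('g') == np.longdouble)  # True
```

The one-typecode-many-bitlengths question Jan raises is visible here too: `np.dtype('g').itemsize` differs across platforms even though the character code stays 'g'.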
From: Thomas H. <th...@py...> - 2006-07-04 10:05:07
Thomas Heller schrieb:
> I've also played a little, and I think one important limitation in ctypes
> is that items in the argtypes list have to be ctypes types.

This was misleading: I mean that this limitation should probably be removed, because it prevents a lot of things one could do.

Thomas
From: Simon A. <sim...@ui...> - 2006-07-04 10:03:17
Hi Fernando,

Fernando Perez schrieb:
[...]
> So I'd like to know if SWIG is really the best way out in this particular case
> (and any advice on taking advantage of the array interface via SWIG would be
> appreciated), or if ctypes or pyrex could be used here. I'm quite happy using
> pyrex in other contexts, but I know it doesn't directly support C++. However,
> since I have access to all the code, perhaps a pure C layer could be used to
> bridge the C++/pyrex gap. Or given the recent praise for ctypes, perhaps that
> can be an option?

I'm not so sure either whether SWIG is the way to go. If, however, you decide to give it a try, you could try a little tool I am currently working on. It is basically a SWIG typemap definition which allows for easy use of numpy arrays from C++.

In brief, I have defined a C++ template class around numpy's own PyArrayObject structure which allows for convenient access of the array data using C++-like techniques and takes care of type checking and the like. As it adds only non-virtual methods and no fields, the objects are memory-wise identical to numpy's own array structure and can hence be passed between C++ and Python without any copying of data or metadata.

The original idea behind it was a bit the opposite of your problem. I had a program for a numerical computation, written in Python, and wanted to rewrite the most performance-critical parts in C++. With my SWIG typemap, I can now write the C++ part using my template class and wrap it such that these objects appear to Python as numpy array objects.

For your problem, it might be a way to write a kind of casting function that takes one of your tensor objects and transforms it into my numpy objects. This should be possible by copying only metadata (i.e. fields containing dimensions and the like) while leaving the actual data in place.

My stuff is still a bit of a work in progress, and so I don't want to post it here yet, but it may help you to not have to start from scratch, as the typemap code would help you get started and might be easy to adjust to your needs. So, if you are interested, send me a mail, and I would appreciate any comments from you on how to make my tool into something really reusable.

Regards,

Simon

--
+---
| Simon Anders, Dipl. Phys.
| Institut fuer Theoretische Physik, Universitaet Innsbruck, Austria
| Tel. +43-512-507-6207, Fax -2919
| preferred (permanent) e-mail: sa...@fs...
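For comparison, NumPy's source distribution carries a stock `numpy.i` typemap file covering the plain-C side of what Simon describes. A minimal interface using it might look like the sketch below; `example` and `sum_array` are assumed names, and this is not Simon's C++ tool.

```swig
/* A minimal SWIG interface using NumPy's stock numpy.i typemaps.
   "example" and sum_array are assumed names, not Simon's tool. */
%module example
%{
#define SWIG_FILE_WITH_INIT   /* required by numpy.i */
double sum_array(double *x, int n);
%}

%include "numpy.i"
%init %{
import_array();               /* initialise NumPy's C API */
%}

/* Map the (double *x, int n) pair onto a single 1-D numpy array argument */
%apply (double *IN_ARRAY1, int DIM1) {(double *x, int n)};
double sum_array(double *x, int n);
```

With this, Python callers pass a numpy array directly (`example.sum_array(a)`) and the typemap supplies the pointer and length.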
From: Thomas H. <th...@py...> - 2006-07-04 09:56:54
Travis Oliphant schrieb:
> I've been playing a bit with ctypes and realized that with a little
> help, it could be made much easier to interface with NumPy arrays.
> Thus, I added a ctypes attribute to the NumPy array. If ctypes is
> installed, this attribute returns a "conversion" object; otherwise an
> AttributeError is raised.
>
> The ctypes-conversion object has attributes which return ctypes-aware
> objects so that the information can be passed directly to C code (as an
> integer, the number of dimensions can already be passed using ctypes).
>
> The information available and its corresponding ctypes type is
>
> data            - c_void_p
> shape, strides  - c_int * nd or c_long * nd or c_longlong * nd
>                   depending on platform

I've also played a little, and I think one important limitation in ctypes is that items in the argtypes list have to be ctypes types. If that limitation is removed (see the attached trivial patch) one can write a class that implements 'from_param' and accepts ctypes arrays as well as numpy arrays as arguments in function calls (maybe the _as_parameter_ stuff needs cleanup as well).

The attached shape.py script implements this class, and has two examples. The 'from_param' method checks the correct shape and itemtype of the arrays that are passed as parameters.

Thomas
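Both halves of this exchange can be illustrated with a short sketch: the `ctypes` conversion attribute Travis describes, and `np.ctypeslib.ndpointer`, which plays the `from_param` argtypes-checking role Thomas sketches in shape.py (the array contents here are arbitrary examples):

```python
import ctypes
import numpy as np

a = np.array([1.0, 2.0, 3.0])

# The conversion object: a raw data pointer (plus shape/strides as
# ctypes integer arrays) ready to hand to a C function.
ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
print(ptr[2])  # 3.0 -- reads through the pointer, no copy involved

# ndpointer builds a type usable in a foreign function's argtypes list;
# its from_param validates dtype/ndim/flags on every call.
dptr = np.ctypeslib.ndpointer(dtype=np.float64, ndim=1, flags='C_CONTIGUOUS')
dptr.from_param(a)  # accepted; a wrong dtype or ndim would raise TypeError
```

A typical use is `lib.my_func.argtypes = [dptr, ctypes.c_int]` so mismatched arrays are rejected before they reach C.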
From: Robert C. <cim...@nt...> - 2006-07-04 09:51:23
David Huard wrote:
> Here is a quick benchmark between numpy's unique, unique1d and sasha's
> unique:
>
> x = rand(100000)*100
> x = x.astype('i')
>
> %timeit unique(x)
> 10 loops, best of 3: 525 ms per loop
>
> %timeit unique_sasha(x)
> 100 loops, best of 3: 10.7 ms per loop
>
> %timeit unique1d(x)
> 100 loops, best of 3: 12.6 ms per loop
>
> So I wonder what is the added value of unique?
> Could unique1d simply become unique?

It looks like unique1d and friends could use the same facelift with new numpy features like boolean indexing :)

r.
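The sort-plus-boolean-indexing idea behind the fast variants benchmarked above can be sketched as follows (this is an illustration of the technique, not Sasha's exact implementation):

```python
import numpy as np

# Sketch: sort, then keep each element that differs from its predecessor.
# Boolean indexing does the duplicate removal in one vectorized step.
def unique_sorted(x):
    s = np.sort(x)
    keep = np.concatenate(([True], s[1:] != s[:-1]))  # True where value changes
    return s[keep]

print(unique_sorted(np.array([3, 1, 2, 3, 1])))  # [1 2 3]
```

This runs in O(n log n) for the sort plus one linear pass, which is consistent with the large speedup over the old list-based `unique` seen in the benchmark.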
From: John C. <jn...@ec...> - 2006-07-04 09:39:43
Hi,

As is the way, posting to a newsgroup stirs the brain cell into activity and the problem is solved, or rather shifted. I've downloaded the candidate version of mingw32, and using that to build numpy/scipy works, or rather it builds the extensions for Python 2.3. I believe that there are still problems, as both numpy and scipy fail their tests. I'm also dubious as to whether they are finding Atlas and Lapack.

Configuration:
  Windows XP SP2
  Cygwin = gcc version 3.4.4 (cygming special) (gdc 0.12, using dmd 0.125)
  MinGW  = gcc version 3.4.5 (mingw special)
  Python = Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32
  scipy, numpy, Atlas, Lapack all downloaded within the last 2 days

I've cut and pasted the error messages from the tests below. Any help gratefully received.

John

===== numpy =====

D:.\numerical> python
Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test()
Found 5 tests for numpy.distutils.misc_util
Found 3 tests for numpy.lib.getlimits
Found 30 tests for numpy.core.numerictypes
Found 13 tests for numpy.core.umath
Found 3 tests for numpy.core.scalarmath
Found 8 tests for numpy.lib.arraysetops
Found 42 tests for numpy.lib.type_check
Found 101 tests for numpy.core.multiarray
Found 36 tests for numpy.core.ma
Found 10 tests for numpy.lib.twodim_base
Found 10 tests for numpy.core.defmatrix
Found 1 tests for numpy.lib.ufunclike
Found 38 tests for numpy.lib.function_base
Found 3 tests for numpy.dft.helper
Found 1 tests for numpy.lib.polynomial
Found 7 tests for numpy.core.records
Found 26 tests for numpy.core.numeric
Found 4 tests for numpy.lib.index_tricks
Found 46 tests for numpy.lib.shape_base
Found 0 tests for __main__
........................................................................................................
........................................................................................................
.........E..............................................................................................
...........................
======================================================================
ERROR: check_simple (numpy.lib.tests.test_twodim_base.test_histogram2d)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python23\Lib\site-packages\numpy\lib\tests\test_twodim_base.py", line 137, in check_simple
    np.random.seed(1)
  File "mtrand.pyx", line 311, in mtrand.RandomState.seed
SystemError: C:\sf\python\dist\src-maint23\Objects\longobject.c:240: bad argument to internal function

----------------------------------------------------------------------
Ran 387 tests in 1.391s

FAILED (errors=1)
<unittest.TextTestRunner object at 0x010DB8F0>
>>>

===== scipy =====

D:.\numerical> python
Python 2.3.3 (#51, Dec 18 2003, 20:22:39) [MSC v.1200 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import scipy >>> scipy.test() Overwriting lib=<module 'scipy.lib' from 'C:\PYTHON23\lib\site-packages\scipy\lib\__init__.pyc'> from C:\PYTHON23\lib\si te-packages\scipy\lib\__init__.pyc (was <module 'numpy.lib' from 'C:\PYTHON23\lib\site-packages\numpy\lib\__init__.pyc'> from C:\PYTHON23\lib\site-packages\numpy\lib\__init__.pyc) Overwriting fft=<function fft at 0x013E7AB0> from scipy.fftpack.basic (was <function fft at 0x00AAEE30> from numpy.dft.f ftpack) Overwriting ifft=<function ifft at 0x013E7AF0> from scipy.fftpack.basic (was <function ifft at 0x00AAEE70> from numpy.df t.fftpack) Found 4 tests for scipy.io.array_import Found 128 tests for scipy.linalg.fblas Found 397 tests for scipy.ndimage Found 10 tests for scipy.integrate.quadpack Found 97 tests for scipy.stats.stats Found 47 tests for scipy.linalg.decomp Found 2 tests for scipy.integrate.quadrature Found 95 tests for scipy.sparse.sparse Found 20 tests for scipy.fftpack.pseudo_diffs Found 6 tests for scipy.optimize.optimize Found 5 tests for scipy.interpolate.fitpack Found 1 tests for scipy.interpolate Found 12 tests for scipy.io.mmio Found 10 tests for scipy.stats.morestats Found 4 tests for scipy.linalg.lapack Found 18 tests for scipy.fftpack.basic Found 4 tests for scipy.linsolve.umfpack Found 4 tests for scipy.optimize.zeros Found 41 tests for scipy.linalg.basic Found 2 tests for scipy.maxentropy.maxentropy Found 358 tests for scipy.special.basic Found 128 tests for scipy.lib.blas.fblas Found 7 tests for scipy.linalg.matfuncs **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. 
**************************************************************** Found 42 tests for scipy.lib.lapack Found 1 tests for scipy.optimize.cobyla Found 16 tests for scipy.lib.blas Found 1 tests for scipy.integrate Found 14 tests for scipy.linalg.blas Found 70 tests for scipy.stats.distributions Found 4 tests for scipy.fftpack.helper Found 4 tests for scipy.signal.signaltools Found 0 tests for __main__ Don't worry about a warning regarding the number of bytes read. Warning: 1000000 bytes requested, 20 bytes read. .......caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ ........................................................................................................................ .........................................................................Took 13 points. ..........Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ........................Use minimum degree ordering on A'+A. ...................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. .................Resizing... 16 17 24 Resizing... 20 7 35 Resizing... 
23 7 47 Resizing... 24 25 58 Resizing... 28 7 68 Resizing... 28 27 73 .....Use minimum degree ordering on A'+A. ......................................C:\PYTHON23\lib\site-packages\scipy\interpolate\fitpack2.py:410: UserWarning: The coefficients of the spline returned have been computed as the minimal norm least-squares solution of a (numerically) rank deficient system (deficiency=7). If deficiency is large, the results may be inaccurate. Deficiency may strongly depend on the value of eps. warnings.warn(message) ....................Ties preclude use of exact statistic. ..Ties preclude use of exact statistic. ........ **************************************************************** WARNING: clapack module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses flapack instead of clapack. **************************************************************** ....................data-ftype: z compared to data D Calling _superlu.zgssv Use minimum degree ordering on A'+A. .data-ftype: c compared to data F Calling _superlu.cgssv Use minimum degree ordering on A'+A. .data-ftype: d compared to data d Calling _superlu.dgssv Use minimum degree ordering on A'+A. .data-ftype: s compared to data f Calling _superlu.sgssv Use minimum degree ordering on A'+A. ........................................................................................................................ ........................................................................................................................ ....................................Gegenbauer, a = 2.7337413228 ........................................................................................................................ 
.............caxpy:n=4 ..caxpy:n=3 ....ccopy:n=4 ..ccopy:n=3 .............cscal:n=4 ....cswap:n=4 ..cswap:n=3 .....daxpy:n=4 ..daxpy:n=3 ....dcopy:n=4 ..dcopy:n=3 .............dscal:n=4 ....dswap:n=4 ..dswap:n=3 .....saxpy:n=4 ..saxpy:n=3 ....scopy:n=4 ..scopy:n=3 .............sscal:n=4 ....sswap:n=4 ..sswap:n=3 .....zaxpy:n=4 ..zaxpy:n=3 ....zcopy:n=4 ..zcopy:n=3 .............zscal:n=4 ....zswap:n=4 ..zswap:n=3 ...Result may be inaccurate, approximate err = 4.1928136851e-009 ...Result may be inaccurate, approximate err = 7.27595761418e-012 .............................................F................Residual: 1.05006950319e-007 . **************************************************************** WARNING: cblas module is empty ----------- See scipy/INSTALL.txt for troubleshooting. Notes: * If atlas library is not found by numpy/distutils/system_info.py, then scipy uses fblas instead of cblas. **************************************************************** ............................................................................................ 
======================================================================
FAIL: check_simple (scipy.optimize.tests.test_cobyla.test_cobyla)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Python23\Lib\site-packages\scipy\optimize\tests\test_cobyla.py", line 20, in check_simple
    assert_almost_equal(x, [x0, x1], decimal=5)
  File "C:\Python23\Lib\site-packages\numpy\testing\utils.py", line 152, in assert_almost_equal
    return assert_array_almost_equal(actual, desired, decimal, err_msg)
  File "C:\Python23\Lib\site-packages\numpy\testing\utils.py", line 222, in assert_array_almost_equal
    header='Arrays are not almost equal')
  File "C:\Python23\Lib\site-packages\numpy\testing\utils.py", line 207, in assert_array_compare
    assert cond, msg
AssertionError:
Arrays are not almost equal

(mismatch 100.0%)
 x: array([ 4.957975  ,  0.64690335])
 y: array([ 4.95535625,  0.66666667])

----------------------------------------------------------------------
Ran 1552 tests in 5.828s

FAILED (failures=1)
<unittest.TextTestRunner object at 0x02671210>
>>>

Dr. John N. Carter.                        E-Mail : jn...@ec...
Building 1, Room 2005                      http://www.ecs.soton.ac.uk/~jnc/
Information: Signals, Images, Systems      Phone : +44 (0) 23 8059 2405
School of Electronics & Computer Science,  Fax : +44 (0) 23 8059 4498
Southampton University, Hants, UK SO17 1BJ.