From: David L G. <Dav...@no...> - 2006-11-09 21:25:25
|
Wow, thanks!

DG

Pauli Virtanen wrote:
> ke, 2006-11-08 kello 16:08 -0800, David L Goldsmith kirjoitti:
>> Hi! I tried to send this earlier: it made it into my sent mail folder,
>> but does not appear to have made it to the list.
>>
>> I need to numerically solve:
>> (1-t)x" + x' - x = f(t), x(0) = x0, x(1) = x1
>> I've been trying to use (because it's the approach I inherited) an
>> elementary finite-difference discretization, but unit tests have shown
>> that that approach isn't working.
> [clip]
>
> You could try to use some of the pre-existing www.netlib.org codes for
> solving BVPs. This only requires writing a wrapper.
>
> However, I have written a wrapper for the COLNEW code, which is a
> finite-difference method written by Ascher & Bader in '87, having an
> adaptive mesh selection. You can find it here:
>
> http://www.iki.fi/pav/bvp
>
> As I understand, COLNEW does not have any special handling for singular
> coefficients, but nevertheless it seems to be able to solve your
> problem. The code goes as follows:
>
> ----------------------------------------------------
> import scipy as N
> import bvp
>
> u0 = 1
> u1 = 2
>
> def f(t):
>     return N.sin(2*t)
>
> def fsub(t, z):
>     u, du = z
>     return N.array([(f(t) + u - du)/(1-t)])
>
> def gsub(z):
>     u, du = z
>     return N.array([u[0] - u0, u[1] - u1])
>
> tol = [1e-5, 1e-5]
> boundary_points = [0, 1]
>
> solution = bvp.colnew.solve(boundary_points, [2], fsub, gsub,
>                             is_linear=True, tolerances=tol,
>                             vectorized=True, maximum_mesh_size=300)
>
> import pylab
>
> x = solution.mesh
> pylab.plot(x, solution(x)[:,0])
> pylab.savefig('solution.png')
> ----------------------------------------------------
>
> BR,
>
> Pauli Virtanen

--
HMRD/ORR/NOS/NOAA <http://response.restoration.noaa.gov/emergencyresponse/> |
From: Brent P. <bpe...@gm...> - 2006-11-09 19:02:16
|
hi, i have a recarray of > 60K records and i'm wondering if there's a numpy/vectorized way to do the following: get a new array where there will be unique column0 + column1 rows, with the row that remains being chosen because it has the highest value in the last column. so in the paste below, there are 4 'AT1G01070', 'AT1G11450' rows, but i only want to keep the row:

('AT1G01070', 'AT1G11450', 78.660003662109375, 717, 140, 5, 1129L, 1838L, 1098L, 1808L, 1e-168, 592.0)

because 592.0 is the highest value. i can do this using a hash and looping, but i'm guessing there's a more efficient way. any tips? thanks.
-brent

[ ('AT1G01030', 'AT1G13260', 70.339996337890625, 263, 75, 1, 835L, 1094L, 778L, 1040L, 5.9999999999999998e-38, 158.0)
  ('AT1G01040', 'AT1G01040', 100.0, 8019, 0, 0, 1L, 8019L, 1L, 8019L, 0.0, 12620.0)
  ('AT1G01050', 'AT1G01050', 100.0, 1968, 0, 0, 1L, 1968L, 1L, 1968L, 0.0, 3090.0)
  ('AT1G01060', 'AT1G01060', 100.0, 4175, 0, 0, 1L, 4175L, 1L, 4175L, 0.0, 6355.0)
  ('AT1G01070', 'AT1G01070', 100.0, 2192, 0, 0, 1L, 2192L, 1L, 2192L, 0.0, 3426.0)
  ('AT1G01070', 'AT1G11450', 78.660003662109375, 717, 140, 5, 1129L, 1838L, 1098L, 1808L, 1e-168, 592.0)
  ('AT1G01070', 'AT1G11450', 79.870002746582031, 303, 51, 2, 1L, 293L, 1L, 303L, 1.9999999999999999e-67, 256.0)
  ('AT1G01070', 'AT1G11450', 88.400001525878906, 181, 21, 0, 587L, 767L, 724L, 904L, 6e-57, 221.0)
  ('AT1G01070', 'AT1G11450', 83.209999084472656, 131, 19, 1, 1875L, 2002L, 1878L, 2008L, 1.9999999999999999e-28, 126.0)
  ('AT1G01070', 'AT1G11460', 82.919998168945312, 480, 77, 3, 1129L, 1607L, 1216L, 1691L, 6.9999999999999999e-132, 470.0)] |
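A vectorized answer along the lines Brent asks for can be sketched with np.lexsort. The three-column structured array below is a trimmed, hypothetical stand-in for his 12-column records; only the two key columns and the final score column matter for the idea.

```python
import numpy as np

# Trimmed stand-in for the >60K-row recarray: two key columns plus score.
rows = np.array(
    [('AT1G01070', 'AT1G01070', 3426.0),
     ('AT1G01070', 'AT1G11450', 592.0),
     ('AT1G01070', 'AT1G11450', 256.0),
     ('AT1G01070', 'AT1G11450', 221.0),
     ('AT1G01070', 'AT1G11460', 470.0)],
    dtype=[('query', 'S9'), ('subject', 'S9'), ('score', 'f8')])

# Sort by (query, subject, score); lexsort's *last* key is the primary
# one, so within each (query, subject) group the best score ends up last.
order = np.lexsort((rows['score'], rows['subject'], rows['query']))
srt = rows[order]

# A group ends where the next row has a different (query, subject) pair;
# the final row always ends its group.
is_last = np.ones(len(srt), dtype=bool)
is_last[:-1] = ((srt['query'][1:] != srt['query'][:-1]) |
                (srt['subject'][1:] != srt['subject'][:-1]))
best = srt[is_last]   # one row per key pair, the one with the top score
```

This keeps exactly one row per key pair — here the 592.0 row for ('AT1G01070', 'AT1G11450') — with no Python-level loop over the 60K records.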
From: Colin J. W. <cj...@sy...> - 2006-11-09 19:01:39
|
Stefan van der Walt wrote:
> On Thu, Nov 09, 2006 at 03:18:57AM -0500, Colin J. Williams wrote:
>> >>> import numpy.core as _n
>> >>> _nt= _n.numerictypes
>> >>> value= '\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x08@\x00\x00\x00\x00\x00\x00\x10@\x00\x00\x00\x00\x00\x00\x14@\x00\x00\x00\x00\x00\x00\x18@'
>> >>> _n.array(value, dtype= _nt.complex128, copy=True)
>> Traceback (most recent call last):
>>   File "<interactive input>", line 1, in <module>
>> TypeError: a float is required
>
> If I understand correctly, you would like to create an array from a
> buffer:
>
> In [12]: N.frombuffer(value,dtype=N.complex128)
> Out[12]: array([ 1.+2.j,  3.+4.j,  5.+6.j])
>
> Cheers
> Stéfan

Thanks, I didn't find "frombuffer" in the Numpy Examples List. Incidentally, the List is a big help but it would be even better if it included the signatures of the functions.

Colin W. |
From: Tim H. <tim...@ie...> - 2006-11-09 18:05:35
|
A. M. Archibald wrote:
> On 08/11/06, Tim Hochberg <tim...@ie...> wrote:
>> It has always been my experience (on various flavors or Pentium) that
>> operating on NANs is extremely slow. Does anyone know on what hardware
>> NANs are *not* slow? Of course it's always possible I just never notice
>> NANs on hardware where they aren't slow.
>
> On an opteron machine I have access to, they appear to be no slower
> (and even faster for some transcendental functions) than ordinary
> floats:
>
> In [13]: a=zeros(1000000)
>
> In [14]: %time for i in xrange(1000): a += 1.1
> CPU times: user 6.87 s, sys: 0.00 s, total: 6.87 s
> Wall time: 6.87
>
> In [15]: a *= NaN
>
> In [16]: %time for i in xrange(1000): a += 1.1
> CPU times: user 6.86 s, sys: 0.00 s, total: 6.86 s
> Wall time: 6.87
>
> On my Pentium M, they are indeed significantly slower (three times? I
> didn't really do enough testing to say how much). I am actually rather
> offended by this unfair discrimination against a citizen in good
> standing of the IEEE floating point community.

If they're only 3x slower you're doing better than I am. On my core duo box they appear to be nearly 20x slower for both addition and multiplication. This more or less matches what I recall from earlier boxes.

>>> Timer("a.prod()", "import numpy as np; a = np.ones(4096); a[0] = np.nan").timeit(10000)
5.6495738983585397
>>> Timer("a.prod()", "import numpy as np; a = np.ones(4096)").timeit(10000)
0.34439041833525152
>>> Timer("a.sum()", "import numpy as np; a = np.ones(4096); a[0] = np.nan").timeit(10000)
6.0985655998179027
>>> Timer("a.sum()", "import numpy as np; a = np.ones(4096)").timeit(10000)
0.32354363473564263

I've been told that operations on NANs are slow because they aren't always implemented in the FPU hardware. Instead they are trapped and implemented in software or firmware or something or other. That may well be bogus 42nd-hand information though.

-tim |
From: Pauli V. <pau...@ik...> - 2006-11-09 17:47:44
|
ke, 2006-11-08 kello 16:08 -0800, David L Goldsmith kirjoitti:
> Hi! I tried to send this earlier: it made it into my sent mail folder,
> but does not appear to have made it to the list.
>
> I need to numerically solve:
> (1-t)x" + x' - x = f(t), x(0) = x0, x(1) = x1
> I've been trying to use (because it's the approach I inherited) an
> elementary finite-difference discretization, but unit tests have shown
> that that approach isn't working.
[clip]

You could try to use some of the pre-existing www.netlib.org codes for solving BVPs. This only requires writing a wrapper.

However, I have written a wrapper for the COLNEW code, which is a finite-difference method written by Ascher & Bader in '87, having an adaptive mesh selection. You can find it here:

http://www.iki.fi/pav/bvp

As I understand, COLNEW does not have any special handling for singular coefficients, but nevertheless it seems to be able to solve your problem. The code goes as follows:

----------------------------------------------------
import scipy as N
import bvp

u0 = 1
u1 = 2

def f(t):
    return N.sin(2*t)

def fsub(t, z):
    u, du = z
    return N.array([(f(t) + u - du)/(1-t)])

def gsub(z):
    u, du = z
    return N.array([u[0] - u0, u[1] - u1])

tol = [1e-5, 1e-5]
boundary_points = [0, 1]

solution = bvp.colnew.solve(boundary_points, [2], fsub, gsub,
                            is_linear=True, tolerances=tol,
                            vectorized=True, maximum_mesh_size=300)

import pylab

x = solution.mesh
pylab.plot(x, solution(x)[:,0])
pylab.savefig('solution.png')
----------------------------------------------------

BR,

Pauli Virtanen |
From: Fernando P. <fpe...@gm...> - 2006-11-09 17:07:44
|
On 11/9/06, Robert Kern <rob...@gm...> wrote: > Fernando Perez wrote: > > I understand why this happens, but I wonder if it should be in any way > > 'fixed' (if that is even feasible without introducing other problems): [...] > > I am sure it will be, to say the least, pretty surprising (and I can > > imagine subtle bugs being caused by this). > > I think we decided long ago that an int32 really is an array of 32-bit integers > and behaves like one. That's precisely why one uses int32 arrays rather than > object arrays. There are algorithms that do need the wraparound, and the Python > int behavior is always available through object arrays. Mkay, fine by me. Novus caveat emptor. Cheers, f |
From: Robert K. <rob...@gm...> - 2006-11-09 16:44:30
|
Fernando Perez wrote: > I understand why this happens, but I wonder if it should be in any way > 'fixed' (if that is even feasible without introducing other problems): > > In [28]: x = 999999 > > In [29]: y = numpy.array([x]) > > In [30]: z = y[0] > > In [31]: x==z > Out[31]: True > > In [32]: x > Out[32]: 999999 > > In [33]: z > Out[33]: 999999 > > In [34]: x*x > Out[34]: 999998000001L > > In [35]: z*z > Warning: overflow encountered in long_scalars > Out[35]: -729379967 > > I am sure it will be, to say the least, pretty surprising (and I can > imagine subtle bugs being caused by this). I think we decided long ago that an int32 really is an array of 32-bit integers and behaves like one. That's precisely why one uses int32 arrays rather than object arrays. There are algorithms that do need the wraparound, and the Python int behavior is always available through object arrays. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Alan G I. <ai...@am...> - 2006-11-09 16:41:41
|
On Wed, 08 Nov 2006, koara apparently wrote:
> 'mat' is a numpy.array with shape=(22973, 1009),
> 'vec' is a numpy.array with shape=(22973,), both of type int:
> for i in xrange(1009):
>     ...
>     fr = vec[10001]
>     mat[:, i] = vec  # assign whole column
>     if mat[10001, i] != fr:
>         print "how come?"
>     ...
> for elements beyond index 10000, nothing is assigned (ie,
> numpy.sum(mat[row, :]) is zero for any row > 10000).

Using numpy 1.0 and trying the code below, I do not see the problem.

import numpy
from numpy import random
m = random.randint(0,1000,(22973,1009))
v = random.randint(0,1000,(22973,))
fr = v[10001]
for i in xrange(1009):
    m[:,i] = v
    if m[10001, i] != fr:
        print m[10001,i], fr

Upgrading to numpy 1.0 should be easy enough and may fix your problem.

Cheers,
Alan Isaac |
From: Stefan v. d. W. <st...@su...> - 2006-11-09 08:59:25
|
On Thu, Nov 09, 2006 at 03:18:57AM -0500, Colin J. Williams wrote:
> >>> import numpy.core as _n
> >>> _nt= _n.numerictypes
> >>> value= '\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x08@\x00\x00\x00\x00\x00\x00\x10@\x00\x00\x00\x00\x00\x00\x14@\x00\x00\x00\x00\x00\x00\x18@'
> >>> _n.array(value, dtype= _nt.complex128, copy=True)
> Traceback (most recent call last):
>   File "<interactive input>", line 1, in <module>
> TypeError: a float is required
> >>>

If I understand correctly, you would like to create an array from a buffer:

In [12]: N.frombuffer(value,dtype=N.complex128)
Out[12]: array([ 1.+2.j,  3.+4.j,  5.+6.j])

Cheers
Stéfan |
From: Colin J. W. <cj...@sy...> - 2006-11-09 08:54:20
|
Robert Kern wrote: > Colin J. Williams wrote: > >> >>> import numpy.core as _n >> >>> _nt= _n.numerictypes >> >>> value= >> '\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x08@\x00\x00\x00\x00\x00\x00\x10@\x00\x00\x00\x00\x00\x00\x14@\x00\x00\x00\x00\x00\x00\x18@' >> >>> _n.array(value, dtype= _nt.complex128, copy=True) >> Traceback (most recent call last): >> File "<interactive input>", line 1, in <module> >> TypeError: a float is required >> >>> >> >> A bug? >> > > No. array() takes sequences, not data buffers. You want frombuffer(). > Thanks. Colin W. |
From: Fernando P. <fpe...@gm...> - 2006-11-09 08:41:38
|
I understand why this happens, but I wonder if it should be in any way 'fixed' (if that is even feasible without introducing other problems):

In [28]: x = 999999

In [29]: y = numpy.array([x])

In [30]: z = y[0]

In [31]: x==z
Out[31]: True

In [32]: x
Out[32]: 999999

In [33]: z
Out[33]: 999999

In [34]: x*x
Out[34]: 999998000001L

In [35]: z*z
Warning: overflow encountered in long_scalars
Out[35]: -729379967

I am sure it will be, to say the least, pretty surprising (and I can imagine subtle bugs being caused by this).

Regards,

f |
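The surprising value in the session above is just modular arithmetic: an int32 scalar wraps modulo 2**32 while a Python int promotes to arbitrary precision. A sketch reproducing the number by hand:

```python
import numpy as np

x = 999999
z = np.int32(x)      # what y[0] returns from an int32 array; arithmetic
                     # on it wraps like the C integer underneath

# Python ints are arbitrary precision, so x*x is exact.  Reducing the
# exact product modulo 2**32 and reinterpreting the bit pattern as a
# signed 32-bit value recovers the "overflowed" result.
exact = x * x                    # 999998000001
wrapped = exact % 2**32          # keep the low 32 bits
if wrapped >= 2**31:
    wrapped -= 2**32             # reinterpret as signed
# wrapped is now -729379967, matching Out[35] above
```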
From: Robert K. <rob...@gm...> - 2006-11-09 08:35:26
|
Colin J. Williams wrote: > >>> import numpy.core as _n > >>> _nt= _n.numerictypes > >>> value= > '\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x08@\x00\x00\x00\x00\x00\x00\x10@\x00\x00\x00\x00\x00\x00\x14@\x00\x00\x00\x00\x00\x00\x18@' > >>> _n.array(value, dtype= _nt.complex128, copy=True) > Traceback (most recent call last): > File "<interactive input>", line 1, in <module> > TypeError: a float is required > >>> > > A bug? No. array() takes sequences, not data buffers. You want frombuffer(). -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco |
From: Colin J. W. <cj...@sy...> - 2006-11-09 08:19:16
|
>>> import numpy.core as _n
>>> _nt= _n.numerictypes
>>> value= '\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x08@\x00\x00\x00\x00\x00\x00\x10@\x00\x00\x00\x00\x00\x00\x14@\x00\x00\x00\x00\x00\x00\x18@'
>>> _n.array(value, dtype= _nt.complex128, copy=True)
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
TypeError: a float is required
>>>

A bug?

Colin W. |
From: Johannes L. <a.u...@gm...> - 2006-11-09 08:01:05
|
> Well what about A*(B*C)? Point taken... Johannes |
From: A. M. A. <per...@gm...> - 2006-11-09 07:27:49
|
On 08/11/06, Tim Hochberg <tim...@ie...> wrote:
> It has always been my experience (on various flavors or Pentium) that
> operating on NANs is extremely slow. Does anyone know on what hardware
> NANs are *not* slow? Of course it's always possible I just never notice
> NANs on hardware where they aren't slow.

On an opteron machine I have access to, they appear to be no slower (and even faster for some transcendental functions) than ordinary floats:

In [13]: a=zeros(1000000)

In [14]: %time for i in xrange(1000): a += 1.1
CPU times: user 6.87 s, sys: 0.00 s, total: 6.87 s
Wall time: 6.87

In [15]: a *= NaN

In [16]: %time for i in xrange(1000): a += 1.1
CPU times: user 6.86 s, sys: 0.00 s, total: 6.86 s
Wall time: 6.87

On my Pentium M, they are indeed significantly slower (three times? I didn't really do enough testing to say how much). I am actually rather offended by this unfair discrimination against a citizen in good standing of the IEEE floating point community.

A. M. Archibald |
From: David C. <da...@ar...> - 2006-11-09 06:33:13
|
Josh Marshall wrote:
> I don't see how you are going to get around doing the copies. Matlab
> is in a separate process from the Python interpreter, and there is no
> shared memory. In what way do you want these proxy classes to "look
> like numpy arrays"?

I am not talking about the copy in the matlab <-> python interaction. This is done through a pipe, handled by the OS; I don't know the details, but I know that communication through pipes is quite fast under linux (see below), and is not the bottleneck.

> Note that mlabwrap creates proxy arrays, and only copies the data if
> you actually request it to. (AFAIRemember) Otherwise you aren't losing
> any speed, because there aren't going to be any copies.

There may be no copy for returned data you don't need, but that's not the case I am talking about. For all other cases, I don't think this is what's happening: if you take a look at mlabwrap, in the C mlabraw module, the function mlabraw_put always calls numeric2mx for arrays, which itself always calls makeMxFromNumeric, which makes a copy. Same in the other direction once you call mlabwrap_get. I am doing the same in my module, because that's the simplest thing to do.

The problem is that when you are using the function engPutVariable of the matlab engine API, you need to give a pointer to an mxArray structure, which is the C representation of a matlab array. You cannot say (this is one of the brain-damaged things of the matlab C api I was talking about in another mail): build an mxArray from existing data. This is the copy I am talking about, and it is an expensive one. In the best case (real numpy arrays with fortran storage), you can do a memcpy, but in most cases, you need to do something which takes strides into account (because complex matlab arrays are actually not fortran, or because by default, most numpy arrays are C storage, and this makes a difference for rank >= 2), which implies non-contiguous memory access, which is *really* expensive (around 2 cycles/byte at best, on my bi-Xeon 3.2 GHz).

Basically, if you want to do something like calling the resample function of matlab on a numpy array and using the result later in numpy, here is what's happening right now:

1. copy the numpy (or numarray in the case of mlabwrap, but this should not matter, I guess) data into an mxArray
2. send the mxArray to the matlab engine: done with a pipe (does this imply a copy? At least, it is a contiguous array copy)
3. compute the thing in matlab
4. send the result to a python mxArray
5. copy the data of the mxArray to a numpy array

A quick profiling shows that if you don't do any processing in matlab, just sending and getting an array back, 1 and 5 take roughly 80-90% of the time in my implementation (which is faster than mlabwrap, but I think this is just caused by the much fancier API of mlabwrap, ie the core mechanism to pass arrays should be roughly the same, as mlabwrap uses the C function makeMxFromNumeric, and I am using a similar function myself through ctypes); the 10-20% are used for the communication through the pipe. I believe that most typical usage cases involve 1 and 5.

5 should be avoidable in many cases if I know how to build a proxy class around the mxArray so that the proxy behaves as a numpy array, with the buffer owned by the mxArray; but I don't know how to do that (particularly, how to handle the destruction of data, as the proxy should destroy the mxArray once the proxy object is garbage collected). 1 would be easy if the C matlab API were sane, which is not the case; they provide functions which are impossible to use correctly (mxSetPr and mxSetData).

> What could be possible to do is add an array interface to the mlabwrap
> proxy classes so they can be used as numpy arrays when required for
> passing to numpy functions (or PIL, etc). Thus we only copy when we
> want to use numpy functions. Then we could define the operators on the
> proxy class to perform their operations on the other side of the bridge.

Yes, that's what I want to do, and in theory, this should be possible without a copy; my initial question at the beginning of the thread is how to build a numpy proxy class from an existing buffer of data, with the proxy becoming the owner of the data (ie it should do all the deallocation, including here cleaning mxArray structures).

cheers,

David |
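The zero-copy direction David asks about — a numpy view over storage owned elsewhere — can at least be sketched with ctypes standing in for an mxArray's data pointer. This is only the easy half of his problem: lifetime management (making sure the view does not outlive its source) is exactly the open question he raises.

```python
import ctypes
import numpy as np

# A ctypes array stands in for externally owned storage, e.g. the data
# pointer of an mxArray.  frombuffer wraps it without copying.
buf = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)
view = np.frombuffer(buf, dtype=np.float64)

# Both names refer to the same memory: a write through the view is
# visible in the original buffer.
view[0] = 10.0
```

The array holds a reference to `buf`, so Python keeps the ctypes object alive while the view exists; with a foreign allocator (matlab's, here) one would still have to arrange for the corresponding mxDestroyArray call when the last view is garbage collected.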
From: David L G. <Dav...@no...> - 2006-11-09 06:32:42
|
Hi! I tried to send this earlier: it made it into my sent mail folder, but does not appear to have made it to the list. I need to numerically solve: (1-t)x" + x' - x = f(t), x(0) = x0, x(1) = x1 I've been trying to use (because it's the approach I inherited) an elementary finite-difference discretization, but unit tests have shown that that approach isn't working. After a little review, I believe I understand the problem: near t=0, the thing is like an advection-diffusion equation with diffusion as strong as advection (a case I'm sure is treated somewhere in the literature, but not as easily findable as:) near t=1, the thing is like the advection-diffusion equation I did easily find treated, namely one where advection dominates. Based on that treatment, I understand why an FD approach (with a uniform grid) would fail around t=0, and from there it is easy to accept that the fact that the thing changes its "nature" over the course of its "life," implies that (a uniform grid) FD is probably not a very good approach for this equation anywhere on its domain. I could try (and maybe will have to) a variable grid FD approach, but I'd first like to try a "shooting" method (for some reason my intuition is telling me that in this case this might be more efficient). This is where you all come in: I understand the algorithm and could (may have to) code it myself, but if anyone out there already has code for this (in Python using numpy, and preferably already under test), might you be willing to share? Thanks in advance, David Goldsmith PS: In the interim, I realized that if something like this "pre-exists" in a module, that module might be scipy. Sure enough, scipy has integrate.ode and integrate.odeint; Google-ing scipy: integrate.odeint help led me to Travis's 2004 "SciPy Tutorial," where I find "There are many optional inputs and outputs available when using odeint which can help tune the solver. 
These additional inputs and outputs are not needed much of the time, however..." Well, is one of those inputs an algorithm specifier? Alternatively, is there a "shooting" algorithm implemented elsewhere in scipy? |
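To the closing question: scipy's integrate.odeint tunes the initial-value solver but has no boundary-value mode, so shooting has to be assembled by hand. A minimal sketch follows; the endpoint values, the choice of f, and the stop-short-of-the-singularity eps are all illustrative, not anything from the post. Because the ODE is linear, the endpoint error is affine in the trial slope, so two shots determine it.

```python
import numpy as np
from scipy.integrate import odeint

# Shooting sketch for (1-t)x'' + x' - x = f(t), x(0)=x0, x(1)=x1.
# The leading coefficient vanishes at t=1, so integrate only up to
# t = 1-eps and match the boundary value there (illustrative choice).
x0, x1 = 1.0, 2.0
eps = 1e-3
f = lambda t: np.sin(2 * t)

def rhs(y, t):
    x, dx = y
    return [dx, (f(t) + x - dx) / (1.0 - t)]

def miss(slope):
    """Endpoint error x(1-eps) - x1 for a trial initial slope x'(0)."""
    ts = np.linspace(0.0, 1.0 - eps, 201)
    sol = odeint(rhs, [x0, slope], ts, tcrit=np.array([1.0]))
    return sol[-1, 0] - x1

# The BVP is linear, so miss() is affine in the slope: two trial shots
# pin it down exactly (up to integration error), no root bracketing needed.
m0, m1 = miss(0.0), miss(1.0)
slope = -m0 / (m1 - m0)
```

For a nonlinear problem one would replace the two-shot secant step with a root finder such as scipy.optimize.brentq on miss().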
From: David L G. <Dav...@no...> - 2006-11-09 06:32:41
|
Hi! I need to numerically solve: (1-t)x" + x' - x = f(t), x(0) = x0, x(1) = x1 I've been trying to use (because it's the approach I inherited) an elementary finite-difference discretization, but unit tests have shown that that approach isn't working. After a little review, I believe I understand the problem: near t=0, the thing is like an advection-diffusion equation with diffusion as strong as advection (a case I'm sure is treated somewhere in the literature, but not as easily findable as:) near t=1, the thing is like the advection-diffusion equation I did easily find treated, namely one where advection dominates. Based on that treatment, I understand why an FD approach (with a uniform grid) would fail around t=0, and from there it is easy to accept that the fact that the thing changes its "nature" over the course of its "life," implies that (a uniform grid) FD is probably not a very good approach for this equation anywhere on its domain. I could try (and maybe will have to) a variable grid FD approach, but I'd first like to try a "shooting" method (for some reason my intuition is telling me that in this case this might be more efficient). This is where you all come in: I understand the algorithm and could (may have to) code it myself, but if anyone out there already has code for this (in Python using numpy, and preferably already under test), might you be willing to share? Thanks in advance, David Goldsmith -- HMRD/ORR/NOS/NOAA <http://response.restoration.noaa.gov/emergencyresponse/> |
From: David L G. <Dav...@no...> - 2006-11-09 06:32:38
|
If the subject line is true, no one will see this; if it's false, please ignore! :-) DG -- HMRD/ORR/NOS/NOAA <http://response.restoration.noaa.gov/emergencyresponse/> |
From: Tim H. <tim...@ie...> - 2006-11-09 05:20:53
|
A. M. Archibald wrote: > On 08/11/06, Pierre GM <pgm...@gm...> wrote: > > >> I like your idea, but not its implementation. If MA.masked_singleton is >> defined as an object, as you suggest, then the dtype of the ndarray it is >> passed to becomes 'object', as you pointed out, and that is not something one >> would naturally expec, as basic numerical functions don't work well with the >> 'object' dtype (just try N.sqrt(N.array([1],dtype=N.object)) to see what I >> mean). >> Even if we can construct a mask rather easily at the creation of the masked >> array, following your 'a==masked' suggestion, we still need to get the dtype >> of the non-masked section, and that doesn't seem trivial... >> > > A good candidate for "should be masked" marked is NaN. It is supposed > to mean, more or less, "no sensible value". Unfortunately, integer > types do not have such a special value. It's also conceivable that > some user might want to keep NaNs in their array separate from the > mask. Finally, on some hardware, operations with NaN are very slow (so > leaving them in the array, even masked, might not be a good idea). > It has always been my experience (on various flavors or Pentium) that operating on NANs is extremely slow. Does anyone know on what hardware NANs are *not* slow? Of course it's always possible I just never notice NANs on hardware where they aren't slow. [SNIP] -tim |
From: Josh M. <jos...@gm...> - 2006-11-09 04:19:22
|
Hi David, Sorry for the late reply. Can you CC any reply to me as well, as I just get the digests and read them every few days. On 08/11/2006, at 11:09 PM, David Cournapeau wrote: > I didn't know that, thanks. Unfortunately, it is not really what I am > trying to do: mlabwrap is just a python interface a bit more high > level > than pymat, with many fancy tricks, but still do copies. What I would > like is to avoid completely the copying by using proxy classes around > data from numpy so that I can pass "automatically" numpy arrays to > matlab C api, and a proxy class around data from matlab so that they > look like numpy arrays. I don't see how you are going to get around doing the copies. Matlab is in a separate process from the Python interpreter, and there is no shared memory. In what way do you want these proxy classes to "look like numpy arrays"? Note that mlabwrap creates proxy arrays, and only copies the data if you actually request it to. (AFAIRemember) Otherwise you aren't losing any speed, because there aren't going to be any copies. > I don't care that much about the actual api from python point of > view, because I intend to use this mainly to compare matlab vs numpy > implementation, not as a way to use matlab inside python regularly. > And > once the copy problem is solved, adding syntactic sugar using > python is > easy anyway, I think (it should be easy to do something similar to > mlabwrap at that point), What could be possible to do is add an array interface to the mlabwrap proxy classes so they can be used as numpy arrays when required for passing to numpy functions (or PIL, etc). Thus we only copy when we want to use numpy functions. Then we could define the operators on the proxy class to perform their operations on the other side of the bridge. Cheers, Josh |
From: Paul D. <pfd...@gm...> - 2006-11-09 04:02:02
|
2 cents from the author of the first folio:

The intent was to allow creation of masked arrays with copy=no, so that the original data could be retrieved from it if desired. But I was quite, quite rigorous about NEVER assuming the data in a masked slot made any sense whatsoever. The intention was that there are two ways to get a numeric array out of a masked one:
1. Get the data field
2. m.filled()

(1) is strictly at your own risk. |
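The two extraction routes Paul describes look like this in modern numpy (numpy.ma being the successor of the old MA module):

```python
import numpy.ma as ma

m = ma.masked_array([1.0, 2.0, 3.0], mask=[False, True, False])

raw = m.data          # route 1: the underlying storage -- the masked
                      # slot may hold any value, at your own risk
safe = m.filled(0.0)  # route 2: plain ndarray, masked slot replaced
                      # by the fill value
```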
From: David C. <da...@ar...> - 2006-11-09 02:32:26
|
Matthew Brett wrote: > > I would be very happy to help with this. It would be great if we > could get a standard well-maintained library of some sort towards > scipy - we (http://neuroimaging.scipy.org/) have a great deal of > matlab integration to do. > I am a bit busy and late on my PhD schedule; I will try to release something remotely usable by other people within the end of the week, so other people can start hacking on it too, cheers, David |
From: Neal B. <ndb...@gm...> - 2006-11-08 23:43:06
|
Try the enclosed spec file. Also, you can use the one from Fedora devel (I think the one I enclosed is the same). |
From: Bill B. <wb...@gm...> - 2006-11-08 23:40:18
|
Also have a look at the section on Arrays vs Matrices in the Numpy for Matlab users page. That particular section has nothing to do with Matlab, really. http://www.scipy.org/NumPy_for_Matlab_Users Most of the suggestions and comments made here are already on that page.

--bb

On 11/8/06, Joris De Ridder <jo...@st...> wrote:
> [im]: Sorry if this is an obvious question, but what is the easiest way to multiply matrices in numpy? Suppose I want to do A=B*C*D. The '*' operator apparently does element wise multiplication, as does the 'multiply' ufunc.
> [im]: All I could find was the numeric function 'matrix_multiply', but this only takes two arguments.
>
> Have a look at the examples "dot()" and "mat()" in the Numpy Example List.
> http://www.scipy.org/Numpy_Example_List
>
> J.
>
> Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm |
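For completeness, the chained product itself, sketched with plain arrays (the operands are made up; the reduce import is for Python 3 runnability, since in the Python 2 of this thread reduce was a builtin):

```python
from functools import reduce
import numpy as np

B = np.arange(4).reshape(2, 2)   # [[0, 1], [2, 3]]
C = np.eye(2)
D = np.ones((2, 2))

# '*' on plain arrays is elementwise; matrix products go through dot().
A = np.dot(np.dot(B, C), D)

# Folding dot over the chain handles any number of factors.
A2 = reduce(np.dot, [B, C, D])
```

The matrix class (np.mat) makes '*' mean the matrix product instead, which is the other route the Numpy Example List entries show.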