From: Keith G. <kwg...@gm...> - 2006-11-15 02:50:28
|
On 11/14/06, Erin Sheldon <eri...@gm...> wrote:
> As an aside, your database is running on a local disk, right, so
> the overhead of retrieving data is minimized here?
> For my tests I think I am data retrieval limited because I
> get exactly the same time for the equivalent of retrieve1
> and retrieve2.

Tim created the database in memory (conn = sqlite3.connect(':memory:')), which is much faster than disk. |
From: Erin S. <eri...@gm...> - 2006-11-15 02:26:49
|
On 11/14/06, Tim Hochberg <tim...@ie...> wrote:
> Tim Hochberg wrote:
> > John Hunter wrote:
> >>> Erin> The question I have been asking myself is "what is the
> >>> Erin> advantage of such an approach?". It would be faster, but by
> >>
> >> In the use case that prompted this message, the pull from mysql took
> >> almost 3 seconds, and the conversion from lists to numpy arrays took
> >> more than 4 seconds. We have a list of about 500000 2-tuples of floats.
> >
> > I'm no database user, but a glance at the docs seems to indicate
> > that you can get your data via an iterator (by iterating over the cursor
> > or some such db mumbo jumbo) rather than slurping the whole list up
> > at once. If so, then you'll save a lot of memory by passing the iterator
> > straight to fromiter. It may even be faster, who knows.
> >
> > Accessing the db via the iterator could be a performance killer, but
> > it's almost certainly worth trying as it could save a few megabytes of
> > storage and that in turn might speed things up.
>
> Assuming that I didn't mess this up too badly, it appears that using the
> iterator directly with fromiter is significantly faster than the next
> best solution (about 45%). The fromiter wrapping a list solution comes in
> second, followed by numarray.array and finally, way in the back,
> numpy.array. Here's the numbers:
>
> retrieve1 took 0.902922857514 seconds
> retrieve2 took 1.31245870634 seconds
> retrieve3 took 1.51207569677 seconds
> retrieve4 took 8.71539930354 seconds

Interesting results, Tim. From Pierre's results we saw that fromiter is the fastest way to get data into arrays. With your results we see there is a difference between iterating over the cursor and doing a fetchall() as well. Surprisingly, running the cursor is faster. This must come not from the data retrieval rate but from creating the copies in memory.

But just in case, I think there is one more thing to check. I haven't used sqlite, but with other databases I have used there is often a large variance in times from one select to the next. Can you repeat these tests with a timeit().repeat and give the minimum?

As an aside, your database is running on a local disk, right, so the overhead of retrieving data is minimized here? For my tests I think I am data-retrieval limited because I get exactly the same time for the equivalent of retrieve1 and retrieve2.

Erin |
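Erin's timeit().repeat suggestion can be sketched as follows. This is a sketch in modern Python 3 and NumPy, not the code Tim ran; the table size is shrunk so it runs quickly, and taking the minimum of the repeats gives the least noisy estimate, since timing error only ever adds time:

```python
import sqlite3
import timeit

import numpy as np

N = 10000  # smaller than the thread's 500000, just to keep the sketch quick

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('create table demo (x real, y real)')
cur.executemany('insert into demo values (?, ?)',
                np.random.rand(N, 2).tolist())
conn.commit()

def retrieve():
    # iterate over the cursor directly, feeding fromiter row by row
    c = conn.cursor()
    c.execute('select * from demo')
    return np.fromiter(c, dtype=[('a', float), ('b', float)])

# repeat() returns one total time per run; report the minimum
times = timeit.Timer(retrieve).repeat(repeat=3, number=1)
print('best of 3: %.4f seconds' % min(times))
```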
From: Tim H. <tim...@ie...> - 2006-11-15 02:09:31
|
Tim Hochberg wrote:
> John Hunter wrote:
>>> Erin> The question I have been asking myself is "what is the
>>> Erin> advantage of such an approach?". It would be faster, but by
>>
>> In the use case that prompted this message, the pull from mysql took
>> almost 3 seconds, and the conversion from lists to numpy arrays took
>> more than 4 seconds. We have a list of about 500000 2-tuples of floats.
>
> I'm no database user, but a glance at the docs seems to indicate
> that you can get your data via an iterator (by iterating over the cursor
> or some such db mumbo jumbo) rather than slurping the whole list up
> at once. If so, then you'll save a lot of memory by passing the iterator
> straight to fromiter. It may even be faster, who knows.
>
> Accessing the db via the iterator could be a performance killer, but
> it's almost certainly worth trying as it could save a few megabytes of
> storage and that in turn might speed things up.

Assuming that I didn't mess this up too badly, it appears that using the iterator directly with fromiter is significantly faster than the next best solution (about 45%). The fromiter wrapping a list solution comes in second, followed by numarray.array and finally, way in the back, numpy.array.
Here's the numbers:

retrieve1 took 0.902922857514 seconds
retrieve2 took 1.31245870634 seconds
retrieve3 took 1.51207569677 seconds
retrieve4 took 8.71539930354 seconds

And here is the code:

import sqlite3, numpy as np, numarray as na, time

N = 500000

def setup(conn):
    c = conn.cursor()
    c.execute('''create table demo (x real, y real)''')
    data = np.random.rand(N, 2)
    c.executemany("""insert into demo values (?, ?)""", data)

def retrieve1(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = np.fromiter(c, dtype=[('a', float), ('b', float)])
    return y

def retrieve2(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = np.fromiter(c.fetchall(), dtype=[('a', float), ('b', float)])
    return y

def retrieve3(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = na.array(c.fetchall())
    return y

def retrieve4(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = np.array(c.fetchall())
    return y

conn = sqlite3.connect(':memory:')
setup(conn)
t0 = time.clock()
y1 = retrieve1(conn)
t1 = time.clock()
y2 = retrieve2(conn)
t2 = time.clock()
y3 = retrieve3(conn)
t3 = time.clock()
y4 = retrieve4(conn)
t4 = time.clock()

assert y1.shape == y2.shape == y3.shape[:1] == y4.shape[:1] == (N,)
assert np.alltrue(y1 == y2)
print "retrieve1 took", t1-t0, "seconds"
print "retrieve2 took", t2-t1, "seconds"
print "retrieve3 took", t3-t2, "seconds"
print "retrieve4 took", t4-t3, "seconds" |
From: Tim H. <tim...@ie...> - 2006-11-15 00:45:10
|
John Hunter wrote:
>>> Erin> The question I have been asking myself is "what is the
>>> Erin> advantage of such an approach?". It would be faster, but by
>
> In the use case that prompted this message, the pull from mysql took
> almost 3 seconds, and the conversion from lists to numpy arrays took
> more than 4 seconds. We have a list of about 500000 2-tuples of
> floats.

I'm no database user, but a glance at the docs seems to indicate that you can get your data via an iterator (by iterating over the cursor or some such db mumbo jumbo) rather than slurping the whole list up at once. If so, then you'll save a lot of memory by passing the iterator straight to fromiter. It may even be faster, who knows.

Accessing the db via the iterator could be a performance killer, but it's almost certainly worth trying as it could save a few megabytes of storage and that in turn might speed things up.

-tim

> Digging in a little bit, we found that numpy is about 3x slower than
> Numeric here
>
> peds-pc311:~> python test.py
> with dtype: 4.25 elapsed seconds
> w/o dtype 5.79 elapsed seconds
> Numeric 1.58 elapsed seconds
> 24.0b2
> 1.0.1.dev3432
>
> Hmm... So maybe the question is -- is there some low hanging fruit
> here to get numpy speeds up?
> import time
> import numpy
> import numpy.random
> rand = numpy.random.rand
>
> x = [(rand(), rand()) for i in xrange(500000)]
> tnow = time.time()
> y = numpy.array(x, dtype=numpy.float_)
> tdone = time.time()
> print 'with dtype: %1.2f elapsed seconds'%(tdone - tnow)
>
> tnow = time.time()
> y = numpy.array(x)
> tdone = time.time()
> print 'w/o dtype %1.2f elapsed seconds'%(tdone - tnow)
>
> import Numeric
> tnow = time.time()
> y = Numeric.array(x, Numeric.Float)
> tdone = time.time()
> print 'Numeric %1.2f elapsed seconds'%(tdone - tnow)
>
> print Numeric.__version__
> print numpy.__version__
>
> -------------------------------------------------------------------------
> Take Surveys. Earn Cash. Influence the Future of IT
> Join SourceForge.net's Techsay panel and you'll get the chance to share your
> opinions on IT & business topics through brief surveys - and earn cash
> http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion
|
From: Travis O. <oli...@ee...> - 2006-11-14 23:50:07
|
John Hunter wrote:
> Erin> The question I have been asking myself is "what is the
> Erin> advantage of such an approach?". It would be faster, but by
>
> John> In the use case that prompted this message, the pull from
> John> mysql took almost 3 seconds, and the conversion from lists
> John> to numpy arrays took more than 4 seconds. We have a list of
> John> about 500000 2-tuples of floats.
>
> John> Digging in a little bit, we found that numpy is about 3x
> John> slower than Numeric here
>
> John> peds-pc311:~> python test.py
> John> with dtype: 4.25 elapsed seconds
> John> w/o dtype 5.79 elapsed seconds
> John> Numeric 1.58 elapsed seconds
> John> 24.0b2
> John> 1.0.1.dev3432
>
> John> Hmm... So maybe the question is -- is there some low hanging
> John> fruit here to get numpy speeds up?
>
> And for reference, numarray is 5 times faster than Numeric here and 15
> times faster than numpy
>
> peds-pc311:~> python test.py
> with dtype: 4.20 elapsed seconds
> w/o dtype 5.71 elapsed seconds
> Numeric 1.60 elapsed seconds
> numarray 0.30 elapsed seconds
> 24.0b2
> 1.0.1.dev3432
> 1.5.1
>
> import numarray
> tnow = time.time()
> y = numarray.array(x, numarray.Float)
> tdone = time.time()

This sounds like it could definitely be sped up, then. Assign_Array is the relevant code (it then calls PySequence_SetItem), so that basically

    for k in range(a.shape[0]):
        a[k] = x[k]

is what is being done. Thus, it might be the indexing code that is causing this to be a slower operation. We should look at what numarray is doing --- it has provided important speed-ups in the past. I still don't have time to look at this, but please file a ticket as we should fix this one. Reference the faster numarray implementation.

-Travis |
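Travis's description of Assign_Array amounts to something like the following sketch (plain Python standing in for the C implementation, with made-up sample data):

```python
import numpy as np

# a small stand-in for the thread's list of 500000 2-tuples of floats
x = [(0.1, 0.2), (0.3, 0.4), (0.5, 0.6)]

# Roughly what Assign_Array does per the post: allocate the result,
# then fill each row through the generic item-assignment path,
# which is where the per-row indexing overhead comes from.
a = np.empty((len(x), 2), dtype=float)
for k in range(len(x)):
    a[k] = x[k]
```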
From: Kenny O. <kor...@id...> - 2006-11-14 23:48:53
|
I am sorry if this has come up before; I've found some stuff on it but just want to be clear. What are the latest versions of numpy, matplotlib and scipy that work together? I have seen that numpy 1.0rc2 works with the latest scipy until 0.5.2 comes out. Now what about numpy and matplotlib? I saw that matplotlib 87.5 works with, I think, the 1.0rc2 also, but I read that mpl was going to release right after numpy 1.0 was released so that they would be compatible. I am just trying to upgrade these packages before I create an executable of the program I am working on, and I am running into problems with "from matplotlib._ns_nxutils import * ImportError: numpy.core.multiarray failed to import" -Kenny |
From: Pierre GM <pgm...@gm...> - 2006-11-14 23:47:56
|
John, I just added the following to your example:

..................................
tnow = time.time()
y = numpy.fromiter((tuple(i) for i in x), dtype=[('a', numpy.float_), ('b', numpy.float_)])
tdone = time.time()
print 'Numpy.fromiter %1.2f elapsed seconds'%(tdone - tnow)
..................................

Here are my results:

with dtype: 4.43 elapsed seconds
w/o dtype 5.78 elapsed seconds
Numeric 1.17 elapsed seconds
Numpy.fromiter 0.62 elapsed seconds
23.7
1.0

Numpy, one point. |
From: John H. <jdh...@ac...> - 2006-11-14 23:17:08
|
>>>>> "John" == John Hunter <jdh...@ac...> writes:
>>>>> "Erin" == Erin Sheldon <eri...@gm...> writes:

Erin> The question I have been asking myself is "what is the
Erin> advantage of such an approach?". It would be faster, but by

John> In the use case that prompted this message, the pull from
John> mysql took almost 3 seconds, and the conversion from lists
John> to numpy arrays took more than 4 seconds. We have a list of
John> about 500000 2-tuples of floats.

John> Digging in a little bit, we found that numpy is about 3x
John> slower than Numeric here

John> peds-pc311:~> python test.py
John> with dtype: 4.25 elapsed seconds
John> w/o dtype 5.79 elapsed seconds
John> Numeric 1.58 elapsed seconds
John> 24.0b2
John> 1.0.1.dev3432

John> Hmm... So maybe the question is -- is there some low hanging
John> fruit here to get numpy speeds up?

And for reference, numarray is 5 times faster than Numeric here and 15 times faster than numpy

peds-pc311:~> python test.py
with dtype: 4.20 elapsed seconds
w/o dtype 5.71 elapsed seconds
Numeric 1.60 elapsed seconds
numarray 0.30 elapsed seconds
24.0b2
1.0.1.dev3432
1.5.1

import numarray
tnow = time.time()
y = numarray.array(x, numarray.Float)
tdone = time.time()
print 'numarray %1.2f elapsed seconds'%(tdone - tnow)
print numarray.__version__ |
From: John H. <jdh...@ac...> - 2006-11-14 23:04:50
|
>>>>> "Erin" == Erin Sheldon <eri...@gm...> writes:

Erin> The question I have been asking myself is "what is the
Erin> advantage of such an approach?". It would be faster, but by

In the use case that prompted this message, the pull from mysql took almost 3 seconds, and the conversion from lists to numpy arrays took more than 4 seconds. We have a list of about 500000 2-tuples of floats.

Digging in a little bit, we found that numpy is about 3x slower than Numeric here

peds-pc311:~> python test.py
with dtype: 4.25 elapsed seconds
w/o dtype 5.79 elapsed seconds
Numeric 1.58 elapsed seconds
24.0b2
1.0.1.dev3432

Hmm... So maybe the question is -- is there some low hanging fruit here to get numpy speeds up?

import time
import numpy
import numpy.random
rand = numpy.random.rand

x = [(rand(), rand()) for i in xrange(500000)]
tnow = time.time()
y = numpy.array(x, dtype=numpy.float_)
tdone = time.time()
print 'with dtype: %1.2f elapsed seconds'%(tdone - tnow)

tnow = time.time()
y = numpy.array(x)
tdone = time.time()
print 'w/o dtype %1.2f elapsed seconds'%(tdone - tnow)

import Numeric
tnow = time.time()
y = Numeric.array(x, Numeric.Float)
tdone = time.time()
print 'Numeric %1.2f elapsed seconds'%(tdone - tnow)

print Numeric.__version__
print numpy.__version__ |
From: Erin S. <eri...@gm...> - 2006-11-14 22:08:54
|
On 11/14/06, John Hunter <jdh...@ac...> wrote:
> Has anyone written any code to facilitate dumping mysql query results
> (mainly arrays of floats) into numpy arrays directly at the extension
> code layer. The query results->list->array conversion can be slow.
>
> Ideally, one could do this semi-automagically with record arrays and
> table introspection....

I've been considering this as well. I use both postgres and Oracle in my work, and I have been using the python interfaces (cx_Oracle and pgdb) to get result lists and convert to numpy arrays.

The question I have been asking myself is "what is the advantage of such an approach?". It would be faster, but by how much? Presumably the bottleneck for most applications will be data retrieval rather than data copying in memory. The process numpy.array(results, dtype=) is pretty fast and simple if the client is DB API 2.0 compliant and returns a list of tuples (pgdb does not, sadly). Also the memory usage will be about the same, since a copy must be made in order to create the lists or python arrays in either case.

On the other hand, the database access modules for all major databases, with DB API 2.0 semi-compliance, have already been written. This is not an insignificant amount of work. Writing our own interfaces for each of our favorite databases would require an equivalent amount of work.

I think a set of timing tests would be useful. I will try some using Oracle or postgres over the next few days. Perhaps you could do the same with mysql.

Erin |
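Erin's numpy.array(results, dtype=) path can be sketched like this; the rows list is a hypothetical stand-in for what a DB-API 2.0 cursor's fetchall() returns, and the field names are made up:

```python
import numpy as np

# stand-in for cursor.fetchall(): a list of tuples, one per row
rows = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]

# one copy, straight into a record array with named fields
rec = np.array(rows, dtype=[('x', float), ('y', float)])
```

Each tuple becomes one record, so the columns can then be accessed by name (rec['x'], rec['y']) without any per-element Python loop.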
From: Hanno K. <kl...@ph...> - 2006-11-14 21:52:48
|
Hi Christian,

I send this off-list as there are probably a lot more knowledgeable people around there. However, I don't entirely understand your problem (I'm not on the f2py list). What happens if you try:

C file hello.f
      subroutine foo(a)
      integer a
Cf2py intent(in) a
      print*, "Hello from Fortran!"
      print*, "a=",a
      end

f2py -c -m hello hello.f

That usually did the trick for me. What are your error messages, if you try the above?

Best regards,
Hanno

On Nov 14, 2006, at 12:09 PM, Christian Meesters wrote:
> Hoi,
>
> thanks to Robert Kern who helped me out yesterday on the f2py-list, I was able
> to make some progress in accessing FORTRAN from Python. But only some
> progress ...
>
> If I have the following code, named 'hello.f':
> C File hello.f
>       subroutine foo (a)
>       integer a
>       print*, "Hello from Fortran!"
>       print*, "a=",a
>       end
>
> and compile it with g77 -shared -fPIC hello.f -o hello.so
>
> and then start python, I get the following:
>>>> from numpy import *
>>>> from ctypes import c_int, POINTER, byref
>>>> hellolib = ctypeslib.load_library('hello', '.')
>>>> hello = hellolib.foo_
>>>> hello(42)
> Hello from Fortran!
> Segmentation fault
>
> Can anybody tell me where my mistake is? (Currently python 2.4.1 (no intention
> to update soon), the most recent ctypes, and numpy '1.0.dev3341' from svn.)
>
> And a second question: Are there simple examples around which show how to pass
> and retrieve lists, numpy arrays, and dicts to and from FORTRAN? Despite an
> intensive web search I couldn't find anything.
>
> TIA
> Christian
>
> -------------------------------------------------------------------------
> Using Tomcat but need to do more? Need to support web services, security?
> Get stuff done quickly with pre-integrated technology to make your job easier
> Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo
> http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642
> _______________________________________________
> Numpy-discussion mailing list
> Num...@li...
> https://lists.sourceforge.net/lists/listinfo/numpy-discussion

--
Hanno Klemm
kl...@it...
ETH Zurich                            tel: +41-1-6332580
Institute for theoretical physics     mobile: +41-79-4500428
http://www.mth.kcl.ac.uk/~klemm |
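For what it's worth, a plausible cause of the segfault in the quoted session is Fortran's pass-by-reference calling convention: hello(42) hands the value 42 to code that expects an address and then dereferences it. A sketch of the by-reference call; the hello.so usage is hypothetical (left as comments, since the library isn't built here):

```python
from ctypes import CDLL, byref, c_int

# Hypothetical usage, assuming hello.so exists next to the script and
# that g77 mangled `foo` to `foo_` as in the post:
#
#     hellolib = CDLL('./hello.so')
#     a = c_int(42)
#     hellolib.foo_(byref(a))   # pass the *address* of a
#
# Calling hellolib.foo_(42) instead passes 42 by value, so the Fortran
# side dereferences 42 as if it were a pointer -- a likely segfault.
a = c_int(42)
ref = byref(a)  # a ctypes reference wrapping a's address
```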
From: Keith G. <kwg...@gm...> - 2006-11-14 21:49:12
|
On 11/14/06, Jeff Strunk <js...@en...> wrote: > Good afternoon, > > We will be performing the migration of this mailing list from Sourceforge to > SciPy on Thursday at 2pm central. > > After this time, the new mailing list address will be > num...@sc... . Any mail sent to the Sourceforge address will > NOT be forwarded. The sender will receive a message with the new address. > > Thank you for your patience. > Jeff Strunk > IT Administrator > Enthought, Inc. > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys - and earn cash So is this my last chance to Take Surveys. Earn Cash. Influence the Future of IT? That's great. Thank you for hosting. |
From: Jeff S. <js...@en...> - 2006-11-14 21:42:58
|
Good afternoon, We will be performing the migration of this mailing list from Sourceforge to SciPy on Thursday at 2pm central. After this time, the new mailing list address will be num...@sc... . Any mail sent to the Sourceforge address will NOT be forwarded. The sender will receive a message with the new address. Thank you for your patience. Jeff Strunk IT Administrator Enthought, Inc. |
From: John H. <jdh...@ac...> - 2006-11-14 21:20:15
|
Has anyone written any code to facilitate dumping mysql query results (mainly arrays of floats) into numpy arrays directly at the extension code layer. The query results->list->array conversion can be slow. Ideally, one could do this semi-automagically with record arrays and table introspection.... JDH |
From: David H. <dav...@gm...> - 2006-11-14 18:50:20
|
I put the patch on Trac. Ticket 189.

2006/11/14, Xavier Gnata <gn...@ob...>:
> Hi,
>
> IFAICS these new histogram versions have not yet been merged to svn.
> Are there problems to solve before they can be merged?
> How could we help?

Voice your support on Trac to replace histogram with the upgraded version.

Thanks.

David |
From: Christian M. <mee...@un...> - 2006-11-14 17:24:18
|
On Tuesday 14 November 2006 18:01, Robert Kern wrote: > > I don't believe anyone has posted anything about calling FORTRAN code using > ctypes. Why aren't you using f2py? What if we just forget about my last post? (I guess I was doing way too many things in parallel, stumbled across the g95 site and tried to apply what's posted there in the way described there ...) Sorry for this confusion and thanks, Robert, that again you brought me back on track, things are working now. Christian |
From: Robert K. <rob...@gm...> - 2006-11-14 17:03:31
|
Christian Meesters wrote:
> Hoi,
>
> thanks to Robert Kern who helped me out yesterday on the f2py-list, I was able
> to make some progress in accessing FORTRAN from Python. But only some
> progress ...
>
> If I have the following code, named 'hello.f':
> C File hello.f
>       subroutine foo (a)
>       integer a
>       print*, "Hello from Fortran!"
>       print*, "a=",a
>       end
>
> and compile it with g77 -shared -fPIC hello.f -o hello.so
>
> and then start python, I get the following:
>>>> from numpy import *
>>>> from ctypes import c_int, POINTER, byref
>>>> hellolib = ctypeslib.load_library('hello', '.')
>>>> hello = hellolib.foo_
>>>> hello(42)
> Hello from Fortran!
> Segmentation fault
>
> Can anybody tell me where my mistake is? (Currently python 2.4.1 (no intention
> to update soon), the most recent ctypes, and numpy '1.0.dev3341' from svn.)
>
> And a second question: Are there simple examples around which show how to pass
> and retrieve lists, numpy arrays, and dicts to and from FORTRAN? Despite an
> intensive web search I couldn't find anything.

I don't believe anyone has posted anything about calling FORTRAN code using ctypes. Why aren't you using f2py?

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco |
From: Travis O. <oli...@ee...> - 2006-11-14 15:56:31
|
Neilen Marais wrote:
> Hi,
>
> I'm not sure if the following is expected to work on a 64bit machine:
>
> In [381]: import numpy as N
> In [382]: l = range(3)
> In [383]: i32 = N.array([0,2], N.int32)
> In [384]: i64 = N.array([0,2], N.int64)
> In [385]: l[i32[0]]
> ---------------------------------------------------------------------------
> exceptions.TypeError                 Traceback (most recent call last)
>
> /home/brick/akademie/NewCode/working/<ipython console>
>
> TypeError: list indices must be integers
>
> In [386]: l[i64[0]]
> Out[386]: 0
>
> I'd expect the 32-bit indices to work since they can be upcast to 64bit without
> loss. Am I silly for thinking this way, or is it something numpy can/should
> address? This came up while working with sparse matrices:
>
> http://projects.scipy.org/scipy/scipy/ticket/307

It's addressed with Python 2.5. We can't do anything about it for Python 2.4.

-Travis |
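Travis's Python 2.5 remark refers to the __index__ protocol (PEP 357), which lets numpy integer scalars of any width act as list indices. On any modern Python and NumPy the original example now works:

```python
import numpy as np

l = list(range(3))
i32 = np.array([0, 2], np.int32)

# With __index__ (PEP 357, Python 2.5+), a numpy int32 scalar is
# accepted anywhere Python requires an integer index:
first = l[i32[0]]
last = l[i32[1]]
```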
From: David H. <dav...@gm...> - 2006-11-14 14:53:42
|
2006/11/14, Xavier Gnata <gn...@ob...>:
> Hi,
>
> IFAICS these new histogram versions have not yet been merged to svn.
> Are there problems to solve before they can be merged?
> How could we help?

No, I'm just overloaded with other stuff, I'll submit a patch on Trac today or tomorrow. Up to now, Travis did the merges, but since he looks pretty busy right now, I don't know when it'll show up on svn.

David

Xavier wrote:
> Your histogram functions look fine to me :)
> As it is a quite usual operation on an array, I would suggest to put
> it in numpy as numpy.histogram. IMHO, there is no point to create a
> numpy.stats only for histograms (or do you have plans to move other
> stats related functions to numpy.stats?)

David Huard wrote:
>> Nicolas, thanks for the bug report, I fooled around with argument
>> passing and should have checked every case.
>>
>> You'll find the histogram function that deals with weights on the
>> numpy trac ticket 189, <http://projects.scipy.org/scipy/numpy/ticket/189>
>> I'm waiting for some hints as to where the histogram function should
>> reside (numpy.histogram, numpy.stats.histogram, ...) before submitting
>> a patch.

Nicolas Champavert wrote:
>> it would be great if you could add the weight option in the 1D
>> histogram too.

David Huard wrote:
>>> Xavier,
>>> Here is the patch against svn. Please report any bug. I haven't had
>>> the time to test it extensively, something that should be done before
>>> committing the patch to the repo. I'd appreciate your feedback.
>>>
>>> You could tweak histogram2d to do what you want, or you could give
>>> me a couple of days and I'll do it and let you know. If you want
>>> to help, you could write a test using your particular application
>>> and data.

Xavier Gnata wrote:
>>> I have a set of 3 1D large arrays.
>>> The first 2 stand for the coordinates of particles and the last one
>>> for their masses.
>>> I would like to be able to plot this data, i.e. to compute a 2D histogram
>>> summing the masses in each bin.
>>> I cannot find a way to do that without any loop on the indices, resulting
>>> in a very slow function.
>>>
>>> I'm looking for an elegant way to do that with a numpy (or scipy?) function.
>>>
>>> For instance, scipy.histogram2d cannot do the job because it only counts
>>> the number of samples in each bin.
>>> There is no way to deal with weights.
>>>
>>> --
>>> Xavier Gnata
>>> CRAL - Observatoire de Lyon
>>> 9, avenue Charles André
>>> 69561 Saint Genis Laval cedex
>>> Phone: +33 4 78 86 85 28
>>> Fax: +33 4 78 86 83 86
>>> E-mail: gn...@ob... |
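The weighted 2D histogram Xavier describes (summing per-particle masses in each bin rather than counting samples) is exactly what the weights argument in the patched function provides; in current NumPy, np.histogram2d accepts it directly. A sketch with made-up particle data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1000)       # particle x coordinates
y = rng.random(1000)       # particle y coordinates
masses = rng.random(1000)  # per-particle weights

# Sum the masses falling in each bin instead of counting samples:
H, xedges, yedges = np.histogram2d(x, y, bins=10, weights=masses)
```

Since every sample lies inside the binned range here, H.sum() recovers the total mass, which makes a convenient sanity check.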
From: Neilen M. <nm...@su...> - 2006-11-14 14:45:43
|
Hi,

I'm not sure if the following is expected to work on a 64bit machine:

In [381]: import numpy as N
In [382]: l = range(3)
In [383]: i32 = N.array([0,2], N.int32)
In [384]: i64 = N.array([0,2], N.int64)
In [385]: l[i32[0]]
---------------------------------------------------------------------------
exceptions.TypeError                 Traceback (most recent call last)

/home/brick/akademie/NewCode/working/<ipython console>

TypeError: list indices must be integers

In [386]: l[i64[0]]
Out[386]: 0

I'd expect the 32-bit indices to work since they can be upcast to 64bit without loss. Am I silly for thinking this way, or is it something numpy can/should address? This came up while working with sparse matrices:

http://projects.scipy.org/scipy/scipy/ticket/307

Regards
Neilen

--
you know its kind of tragic
we live in the new world
but we've lost the magic
-- Battery 9 (www.battery9.co.za) |
From: Xavier G. <gn...@ob...> - 2006-11-14 11:31:59
|
Hi,

AFAICS these new histogram versions have not yet been merged to svn. Are there problems to be solved before they can be merged? How could we help?

Xavier

> Hi,
>
> Your histogram functions look fine to me :)
> As it is a quite usual operation on an array, I would suggest putting
> it in numpy as numpy.histogram. IMHO, there is no point in creating a
> numpy.stats only for histograms (or do you have plans to move other
> stats-related functions to numpy.stats?)
>
> Xavier.
>
>> Nicolas, thanks for the bug report, I fooled around with argument
>> passing and should have checked every case.
>>
>> You'll find the histogram function that deals with weights on the
>> numpy trac ticket 189: http://projects.scipy.org/scipy/numpy/ticket/189
>> I'm waiting for some hints as to where the histogram function should
>> reside (numpy.histogram, numpy.stats.histogram, ...) before submitting
>> a patch.
>>
>> Salut,
>> David
>>
>> 2006/10/25, Nicolas Champavert <nic...@ob...>:
>>
>>     Hi,
>>
>>     it would be great if you could add the weight option in the 1D
>>     histogram too.
>>
>>     Nicolas
>>
>>     David Huard wrote:
>>     > Xavier,
>>     > Here is the patch against svn. Please report any bug. I haven't had
>>     > the time to test it extensively, something that should be done before
>>     > committing the patch to the repo. I'd appreciate your feedback.
>>     >
>>     > David
>>     >
>>     > 2006/10/24, David Huard <dav...@gm...>:
>>     >
>>     >     Hi Xavier,
>>     >
>>     >     You could tweak histogram2d to do what you want, or you could
>>     >     give me a couple of days and I'll do it and let you know. If you
>>     >     want to help, you could write a test using your particular
>>     >     application and data.
>>     >
>>     >     David
>>     >
>>     >     2006/10/24, Xavier Gnata <gn...@ob...>:
>>     >
>>     >         Hi,
>>     >
>>     >         I have a set of three large 1D arrays.
>>     >         The first two hold the coordinates of particles and the
>>     >         last one their masses.
>>     >         I would like to be able to plot this data, i.e. to compute
>>     >         a 2D histogram summing the masses in each bin.
>>     >         I cannot find a way to do that without a loop over the
>>     >         indices, which results in a very slow function.
>>     >
>>     >         I'm looking for an elegant way to do that with a numpy (or
>>     >         scipy?) function.
>>     >
>>     >         For instance, scipy.histogram2d cannot do the job because
>>     >         it only counts the number of samples in each bin.
>>     >         There is no way to deal with weights.
>>     >
>>     >         Xavier.

--
############################################
Xavier Gnata
CRAL - Observatoire de Lyon
9, avenue Charles André
69561 Saint Genis Laval cedex
Phone: +33 4 78 86 85 28
Fax: +33 4 78 86 83 86
E-mail: gn...@ob...
############################################
|
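For reference, the weights support discussed in this thread is what numpy.histogram2d's `weights` argument provides today: each sample contributes its weight (here, a mass) to its bin instead of a count of 1. A minimal sketch with made-up particle data:

```python
import numpy as np

# Hypothetical particles: positions (x, y) and masses m.
x = np.array([0.1, 0.4, 0.4, 0.9])
y = np.array([0.2, 0.2, 0.8, 0.9])
m = np.array([1.0, 2.0, 3.0, 4.0])

# weights= makes each bin the *sum of masses* of the samples it holds,
# rather than the sample count; total mass is conserved across bins.
H, xedges, yedges = np.histogram2d(x, y, bins=2, weights=m)
print(H)
print(H.sum())  # 10.0 == m.sum()
```

No explicit loop over the indices is needed, which addresses the slowness Xavier describes.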
From: Christian M. <mee...@un...> - 2006-11-14 11:13:59
|
Hoi,

thanks to Robert Kern, who helped me out yesterday on the f2py list, I was able to make some progress in accessing FORTRAN from Python. But only some progress ...

If I have the following code, named 'hello.f':

C File hello.f
      subroutine foo (a)
      integer a
      print*, "Hello from Fortran!"
      print*, "a=",a
      end

and compile it with

g77 -shared -fPIC hello.f -o hello.so

and then start python, I get the following:

>>> from numpy import *
>>> from ctypes import c_int, POINTER, byref
>>> hellolib = ctypeslib.load_library('hello', '.')
>>> hello = hellolib.foo_
>>> hello(42)
 Hello from Fortran!
Segmentation fault

Can anybody tell me where my mistake is?
(Currently python 2.4.1 (no intention to update soon), the most recent ctypes, and numpy '1.0.dev3341' from svn.)

And a second question: Are there simple examples around which show how to pass and retrieve lists, numpy arrays, and dicts to and from FORTRAN? Despite an intensive web search I couldn't find anything.

TIA
Christian
|
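The segfault is almost certainly the calling convention: g77-compiled Fortran passes every argument by reference, so the integer must go in as a pointer, e.g. `hello(byref(c_int(42)))` rather than `hello(42)` (a suggested fix for the example above, not verified against that exact build). The same by-reference pattern can be sketched in a self-contained way against libc, which ctypes can load from the running process:

```python
from ctypes import CDLL, byref, c_long

# libc's time() takes a time_t* and writes through it -- the same
# pointer-argument shape that Fortran (g77) expects for *all* arguments.
libc = CDLL(None)            # symbols of the current process (Unix)
libc.time.restype = c_long   # avoid truncation on 64-bit platforms

t = c_long(0)
ret = libc.time(byref(t))    # pass by reference, as Fortran would require
print(t.value)               # epoch seconds, written through the pointer
```

Passing the bare Python int 42 makes ctypes hand Fortran the value itself, which the callee then dereferences as an address, hence the crash.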
From: Seweryn K. <sk...@po...> - 2006-11-14 10:01:45
|
Sven Schreiber <sve...@gm...> writes:

> Sure, but Seweryn used the same import statement from scipy and never
> explicitly referred to numpy, so there must be some subtle import voodoo
> going on. Or did you not show us everything, Seweryn?

It's all OK now; it was my mistake. In ipython I had typed "from scipy import linalg" and, surprised by the output, I opened a plain python shell and tried different combinations of imports, among others "from scipy import *" — and that is the reason for the difference. Now in ipython I get the same output when typing help(linalg.eig).

Sorry for bothering you,
regards,
SK
|
From: Sven S. <sve...@gm...> - 2006-11-14 09:51:17
|
Charles R Harris schrieb:

> In [1]: from scipy import linalg
>
> In [2]: help(linalg.eig)
>
> >>> from scipy import linalg
> >>> help(linalg.eig)
>
> Help on function eig in module scipy.linalg.decomp:
>
> I expect scipy.linalg and numpy.linalg are different modules containing
> different functions. That said, the documentation in scipy.linalg looks
> quite a bit more complete.
>
> Chuck

Sure, but Seweryn used the same import statement from scipy and never explicitly referred to numpy, so there must be some subtle import voodoo going on. Or did you not show us everything, Seweryn?

-sven
|
From: Robert K. <rob...@gm...> - 2006-11-14 04:17:20
|
Tim Hochberg wrote:

> Another little tidbit: this is not as general as where, and could
> probably be considered a little too clever to be clear, but:
>
>     b = 1 / (a + (a==0.0))
>
> is faster than using where in this particular case and sidesteps the
> divide by zero issue altogether.

A less clever approach that does much the same thing:

    b = 1.0 / where(a==0, 1.0, a)

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
|
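The two forms can be checked side by side; a small sketch with a made-up array:

```python
import numpy as np

a = np.array([2.0, 0.0, 4.0])

# Tim's trick: (a == 0.0) is a boolean array that adds exactly 1 where
# a is zero, so the denominator is never zero; zeros map to 1/1 = 1.
b1 = 1.0 / (a + (a == 0.0))

# Robert's plainer form: substitute 1.0 for the zeros before dividing.
b2 = 1.0 / np.where(a == 0.0, 1.0, a)

assert (b1 == b2).all()
print(b1)   # [0.5  1.    0.25]
```

Both avoid the divide-by-zero warning entirely, at the cost of silently mapping zeros to 1; whether that placeholder value is acceptable depends on the application.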