From: John H. <jdh...@ac...> - 2006-11-14 21:20:15
Has anyone written any code to facilitate dumping mysql query results (mainly arrays of floats) into numpy arrays directly at the extension code layer. The query results->list->array conversion can be slow.

Ideally, one could do this semi-automagically with record arrays and table introspection....

JDH
From: Erin S. <eri...@gm...> - 2006-11-14 22:08:54
On 11/14/06, John Hunter <jdh...@ac...> wrote:
> Has anyone written any code to facilitate dumping mysql query results
> (mainly arrays of floats) into numpy arrays directly at the extension
> code layer. The query results->list->array conversion can be slow.
>
> Ideally, one could do this semi-automagically with record arrays and
> table introspection....

I've been considering this as well. I use both postgres and Oracle in my work, and I have been using the python interfaces (cx_Oracle and pgdb) to get result lists and convert to numpy arrays.

The question I have been asking myself is "what is the advantage of such an approach?". It would be faster, but by how much? Presumably the bottleneck for most applications will be data retrieval rather than data copying in memory.

The process numpy.array(results, dtype=) is pretty fast and simple if the client is DB 2.0 compliant and returns a list of tuples (pgdb sadly does not). Also the memory usage will be about the same, since a copy must be made in order to create the lists or python arrays in either case.

On the other hand, the database access modules for all major databases, with DB 2.0 semi-compliance, have already been written. This is not an insignificant amount of work. Writing our own interfaces for each of our favorite databases would require an equivalent amount of work.

I think a set of timing tests would be useful. I will try some using Oracle or postgres over the next few days. Perhaps you could do the same with mysql.

Erin
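A minimal sketch of the conversion Erin describes, assuming a DB-API 2.0 compliant driver; the module name (MySQLdb), the connection parameters, and the table/column names are placeholders, not anything from the thread:

import numpy
import MySQLdb  # any DB-API 2.0 compliant driver should look much the same

conn = MySQLdb.connect(db='mydb')          # hypothetical connection
cur = conn.cursor()
cur.execute('select x, y from samples')    # placeholder table and columns

rows = cur.fetchall()                      # sequence of 2-tuples of floats
# One call converts the whole result set; this is the step being timed
# in the rest of this thread.
arr = numpy.array(rows, dtype=numpy.float_)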
From: John H. <jdh...@ac...> - 2006-11-14 23:04:50
>>>>> "Erin" == Erin Sheldon <eri...@gm...> writes:

    Erin> The question I have been asking myself is "what is the
    Erin> advantage of such an approach?".  It would be faster, but by

In the use case that prompted this message, the pull from mysql took almost 3 seconds, and the conversion from lists to numpy arrays took more than 4 seconds. We have a list of about 500000 2-tuples of floats.

Digging in a little bit, we found that numpy is about 3x slower than Numeric here:

peds-pc311:~> python test.py
with dtype: 4.25 elapsed seconds
w/o dtype 5.79 elapsed seconds
Numeric 1.58 elapsed seconds
24.0b2
1.0.1.dev3432

Hmm... So maybe the question is -- is there some low hanging fruit here to get numpy speeds up?

import time
import numpy
import numpy.random
rand = numpy.random.rand

x = [(rand(), rand()) for i in xrange(500000)]
tnow = time.time()
y = numpy.array(x, dtype=numpy.float_)
tdone = time.time()
print 'with dtype: %1.2f elapsed seconds'%(tdone - tnow)

tnow = time.time()
y = numpy.array(x)
tdone = time.time()
print 'w/o dtype %1.2f elapsed seconds'%(tdone - tnow)

import Numeric
tnow = time.time()
y = Numeric.array(x, Numeric.Float)
tdone = time.time()
print 'Numeric %1.2f elapsed seconds'%(tdone - tnow)

print Numeric.__version__
print numpy.__version__
From: John H. <jdh...@ac...> - 2006-11-14 23:17:08
>>>>> "John" == John Hunter <jdh...@ac...> writes:
>>>>> "Erin" == Erin Sheldon <eri...@gm...> writes:

    Erin> The question I have been asking myself is "what is the
    Erin> advantage of such an approach?".  It would be faster, but by

    John> In the use case that prompted this message, the pull from
    John> mysql took almost 3 seconds, and the conversion from lists
    John> to numpy arrays took more than 4 seconds.  We have a list of
    John> about 500000 2-tuples of floats.

    John> Digging in a little bit, we found that numpy is about 3x
    John> slower than Numeric here

    John> peds-pc311:~> python test.py
    John> with dtype: 4.25 elapsed seconds
    John> w/o dtype 5.79 elapsed seconds
    John> Numeric 1.58 elapsed seconds
    John> 24.0b2
    John> 1.0.1.dev3432

    John> Hmm... So maybe the question is -- is there some low hanging
    John> fruit here to get numpy speeds up?

And for reference, numarray is 5 times faster than Numeric here and 15 times faster than numpy:

peds-pc311:~> python test.py
with dtype: 4.20 elapsed seconds
w/o dtype 5.71 elapsed seconds
Numeric 1.60 elapsed seconds
numarray 0.30 elapsed seconds
24.0b2
1.0.1.dev3432
1.5.1

import numarray
tnow = time.time()
y = numarray.array(x, numarray.Float)
tdone = time.time()
print 'numarray %1.2f elapsed seconds'%(tdone - tnow)
print numarray.__version__
From: Pierre GM <pgm...@gm...> - 2006-11-14 23:47:56
John,
I just added the following to your example:

..................................
tnow = time.time()
y = numpy.fromiter((tuple(i) for i in x), dtype=[('a', numpy.float_), ('b', numpy.float_)])
tdone = time.time()
print 'Numpy.fromiter %1.2f elapsed seconds'%(tdone - tnow)
..................................

Here are my results:

with dtype: 4.43 elapsed seconds
w/o dtype 5.78 elapsed seconds
Numeric  1.17 elapsed seconds
Numpy.fromiter 0.62 elapsed seconds
23.7
1.0

Numpy, one point.
From: Travis O. <oli...@ee...> - 2006-11-14 23:50:07
John Hunter wrote:
> [SNIP]
>
> And for reference, numarray is 5 times faster than Numeric here and 15
> times faster than numpy
>
>  peds-pc311:~> python test.py
>  with dtype: 4.20 elapsed seconds
>  w/o dtype 5.71 elapsed seconds
>  Numeric 1.60 elapsed seconds
>  numarray 0.30 elapsed seconds
>  24.0b2
>  1.0.1.dev3432
>  1.5.1
>
> import numarray
> tnow = time.time()
> y = numarray.array(x, numarray.Float)
> tdone = time.time()

This sounds like it could definitely be sped up, then. Assign_Array is the relevant code (it then calls PySequence_SetItem), so that basically

for k in range(a.shape[0]):
    a[k] = x[k]

is what is being done. Thus, it might be the indexing code that is causing this to be a slower operation. We should look at what numarray is doing --- it has provided important speed-ups in the past.

I still don't have time to look at this, but please file a ticket as we should fix this one. Reference the faster numarray implementation.

-Travis
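A rough Python-level picture of the two paths being compared, offered only as a sketch: the loop below just approximates what the C code Travis mentions ends up doing, and x is the list of 500000 2-tuples from John's script:

import numpy

x = [(float(i), float(i)) for i in xrange(500000)]

# Roughly the slow path: allocate the output, then assign one row at a
# time through the generic indexing machinery (what Assign_Array /
# PySequence_SetItem amounts to, per Travis).
a = numpy.empty((len(x), 2), dtype=numpy.float_)
for k in range(len(x)):
    a[k] = x[k]

# The fromiter path Pierre timed: fill a record array straight from an
# iterator, bypassing the per-row indexing code.
b = numpy.fromiter((tuple(row) for row in x),
                   dtype=[('a', numpy.float_), ('b', numpy.float_)])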
From: Tim H. <tim...@ie...> - 2006-11-15 00:45:10
John Hunter wrote:
>>>>>> "Erin" == Erin Sheldon <eri...@gm...> writes:
>
>  Erin> The question I have been asking myself is "what is the
>  Erin> advantage of such an approach?".  It would be faster, but by
>
> In the use case that prompted this message, the pull from mysql took
> almost 3 seconds, and the conversion from lists to numpy arrays took
> more than 4 seconds.  We have a list of about 500000 2-tuples of
> floats.
>
> [SNIP]

I'm no database user, but a glance at the docs seems to indicate that you can get your data via an iterator (by iterating over the cursor or some such db mumbo jumbo) rather than slurping the whole list up at once. If so, then you'll save a lot of memory by passing the iterator straight to fromiter. It may even be faster, who knows.

Accessing the db via the iterator could be a performance killer, but it's almost certainly worth trying, as it could save a few megabytes of storage and that in turn might speed things up.

-tim
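A minimal sketch of what Tim is suggesting, assuming a DB-API cursor whose rows come back as 2-tuples of floats (Tim's own sqlite3 version follows in the next message):

import numpy as np

def cursor_to_recarray(cursor):
    # The cursor itself is the iterator: no intermediate list of
    # 500000 tuples is ever built; fromiter fills the array directly.
    return np.fromiter(cursor, dtype=[('x', float), ('y', float)])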
From: Tim H. <tim...@ie...> - 2006-11-15 02:09:31
Tim Hochberg wrote:
> John Hunter wrote:
>> [SNIP]
>>
>> In the use case that prompted this message, the pull from mysql took
>> almost 3 seconds, and the conversion from lists to numpy arrays took
>> more than 4 seconds.  We have a list of about 500000 2-tuples of
>> floats.
>
> I'm no database user, but a glance at the docs seems to indicate
> that you can get your data via an iterator (by iterating over the cursor
> or some such db mumbo jumbo) rather than slurping the whole list up
> at once. If so, then you'll save a lot of memory by passing the iterator
> straight to fromiter. It may even be faster, who knows.
>
> Accessing the db via the iterator could be a performance killer, but
> it's almost certainly worth trying, as it could save a few megabytes of
> storage and that in turn might speed things up.

Assuming that I didn't mess this up too badly, it appears that using the iterator directly with fromiter is significantly faster than the next best solution (about 45%). The fromiter-wrapping-a-list solution comes in second, followed by numarray.array and finally, way in the back, numpy.array. Here are the numbers:

retrieve1 took 0.902922857514 seconds
retrieve2 took 1.31245870634 seconds
retrieve3 took 1.51207569677 seconds
retrieve4 took 8.71539930354 seconds

And here is the code:

import sqlite3, numpy as np, numarray as na, time

N = 500000

def setup(conn):
    c = conn.cursor()
    c.execute('''create table demo (x real, y real)''')
    data = np.random.rand(N, 2)
    c.executemany("""insert into demo values (?, ?)""", data)

def retrieve1(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = np.fromiter(c, dtype=[('a', float), ('b', float)])
    return y

def retrieve2(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = np.fromiter(c.fetchall(), dtype=[('a', float), ('b', float)])
    return y

def retrieve3(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = na.array(c.fetchall())
    return y

def retrieve4(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = np.array(c.fetchall())
    return y

conn = sqlite3.connect(':memory:')
setup(conn)

t0 = time.clock()
y1 = retrieve1(conn)
t1 = time.clock()
y2 = retrieve2(conn)
t2 = time.clock()
y3 = retrieve3(conn)
t3 = time.clock()
y4 = retrieve4(conn)
t4 = time.clock()

assert y1.shape == y2.shape == y3.shape[:1] == y4.shape[:1] == (N,)
assert np.alltrue(y1 == y2)

print "retrieve1 took", t1-t0, "seconds"
print "retrieve2 took", t2-t1, "seconds"
print "retrieve3 took", t3-t2, "seconds"
print "retrieve4 took", t4-t3, "seconds"
From: Erin S. <eri...@gm...> - 2006-11-15 02:26:49
On 11/14/06, Tim Hochberg <tim...@ie...> wrote:
> [SNIP]
>
> Assuming that I didn't mess this up too badly, it appears that using the
> iterator directly with fromiter is significantly faster than the next
> best solution (about 45%).  The fromiter-wrapping-a-list solution comes
> in second, followed by numarray.array and finally, way in the back,
> numpy.array.  Here are the numbers:
>
> retrieve1 took 0.902922857514 seconds
> retrieve2 took 1.31245870634 seconds
> retrieve3 took 1.51207569677 seconds
> retrieve4 took 8.71539930354 seconds

Interesting results Tim. From Pierre's results we saw that fromiter is the fastest way to get data into arrays. With your results we see there is a difference between iterating over the cursor and doing a fetchall() as well. Surprisingly, running the cursor is faster. This must come not from the data retrieval rate but from creating the copies in memory.

But just in case, I think there is one more thing to check. I haven't used sqlite, but with other databases I have used there is often a large variance in times from one select to the next. Can you repeat these tests with a timeit().repeat and give the minimum?

As an aside, your database is running on a local disk, right, so the overhead of retrieving data is minimized here? For my tests I think I am data retrieval limited, because I get exactly the same time for the equivalent of retrieve1 and retrieve2.

Erin
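A sketch of the timing Erin is asking for, assuming Tim's setup() and retrieve functions live in an importable module (the module name bench is a placeholder); taking the minimum of several repeats suppresses run-to-run variance:

import timeit

for name in ['retrieve1', 'retrieve2', 'retrieve3', 'retrieve4']:
    init = ("from bench import sqlite3, setup, %s; "
            "conn = sqlite3.connect(':memory:'); setup(conn)" % name)
    t = timeit.Timer("%s(conn)" % name, init)
    # repeat(3, 1): three independent runs of one call each; report the best
    print name, min(t.repeat(3, 1)), "seconds"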
From: Keith G. <kwg...@gm...> - 2006-11-15 02:50:28
On 11/14/06, Erin Sheldon <eri...@gm...> wrote:
> As an aside, your database is running on a local disk, right, so
> the overhead of retrieving data is minimized here?
> For my tests I think I am data retrieval limited, because I
> get exactly the same time for the equivalent of retrieve1
> and retrieve2.

Tim created the database in memory (conn = sqlite3.connect(':memory:')), which is much faster than disk.
From: Tim H. <tim...@ie...> - 2006-11-15 04:14:33
Erin Sheldon wrote:
> On 11/14/06, Tim Hochberg <tim...@ie...> wrote:
>> [SNIP]
>>
>> I'm no database user, but a glance at the docs seems to indicate
>> that you can get your data via an iterator (by iterating over the cursor
>> or some such db mumbo jumbo) rather than slurping the whole list up
>> at once. If so, then you'll save a lot of memory by passing the iterator
>> straight to fromiter. It may even be faster, who knows.
>>
>> Accessing the db via the iterator could be a performance killer, but
>> it's almost certainly worth trying, as it could save a few megabytes of
>> storage and that in turn might speed things up.
>>
>> Assuming that I didn't mess this up too badly, it appears that using the
>> iterator directly with fromiter is significantly faster than the next
>> best solution (about 45%).  [SNIP]
>
> Interesting results Tim.  From Pierre's results
> we saw that fromiter is the fastest way to get data
> into arrays.  With your results we see there is a
> difference between iterating over the cursor and
> doing a fetchall() as well.  Surprisingly, running
> the cursor is faster.
>
> This must come not from the data retrieval rate but
> from creating the copies in memory.

I imagine that is correct. In particular, skipping the making of the list avoids the creation of 1e6 Python floats, which is going to result in a lot of memory allocation.

> But just in case, I think there is one more thing to check.
> I haven't used sqlite, but with other databases I have
> used there is often a large variance in times from
> one select to the next.  Can you
> repeat these tests with a timeit().repeat and give the
> minimum?

Sure. Here are two sets of numbers. The first is for repeat(3, 1) and the second for repeat(3, 3).

retrieve1 [0.91198546183942375, 0.9042411814909439, 0.90411518782415001]
retrieve2 [0.98355349632425515, 0.95424502276127754, 0.94714328217692412]
retrieve3 [1.2227562441595268, 1.2195848913758596, 1.2206193803961156]
retrieve4 [8.4344040932576547, 8.3556245276983532, 8.3568341786456131]

retrieve1 [2.7317457945074026, 2.7274656415829384, 2.7250913174719109]
retrieve2 [2.8857103346933783, 2.8379299603720582, 2.8386803350705136]
retrieve3 [3.6870535221655203, 3.8980253076857565, 3.7002303365371887]
retrieve4 [25.138646950939304, 25.06737169109482, 25.052789390830412]

The timings are pretty consistent with the previous runs, except that the difference between retrieve1 and retrieve2 has disappeared. In fact, all of the runs that produce lists have gotten faster by about the same amount. Odd! A little digging reveals that timeit turns off garbage collection to make things more repeatable. Turning gc back on yields the following numbers for repeat(3, 1):

retrieve1 [0.92517736192728406, 0.92109667569481601, 0.92390960303614023]
retrieve2 [1.3018456256311914, 1.2277141368525903, 1.2929785768861706]
retrieve3 [1.5309831277438946, 1.4998853206203577, 1.5601200711263488]
retrieve4 [8.6400394463542227, 8.7022300320292061, 8.6807761880350682]

So there we are, back to our original numbers. This also reveals that the majority of the time difference between retrieve1 and retrieve2 *is* memory related. However, it's the deallocation (or more precisely, garbage collection) of all those floats that is the killer. Here's what the timeit routines looked like:

if __name__ == "__main__":
    for name in ['retrieve1', 'retrieve2', 'retrieve3', 'retrieve4']:
        print name, timeit.Timer("%s(conn)" % name,
            "gc.enable(); from scratch import sqlite3, %s, setup; "
            "conn = sqlite3.connect(':memory:'); setup(conn)" % name).repeat(3, 1)

> As an aside, your database is running on a local disk, right, so
> the overhead of retrieving data is minimized here?
> For my tests I think I am data retrieval limited, because I
> get exactly the same time for the equivalent of retrieve1
> and retrieve2.

As Keith pointed out, I'm keeping the database in memory (although there's a very good chance some of it is actually swapped to disk) so it's probably relatively fast. On the other hand, if you are using timeit to make your measurements you could be running into the (lack of) garbage collection issue I mention above.
From: Erin S. <eri...@gm...> - 2006-11-15 04:47:09
On 11/14/06, Tim Hochberg <tim...@ie...> wrote:
> [SNIP]
>
> As Keith pointed out, I'm keeping the database in memory (although
> there's a very good chance some of it is actually swapped to disk) so
> it's probably relatively fast.  On the other hand, if you are using
> timeit to make your measurements you could be running into the (lack of)
> garbage collection issue I mention above.

I checked, and for my real situation I am totally limited by the time to retrieve the data. From these tests I think this will probably be true even if the data is on a local disk.

I think these experiments show that iterating over the cursor is the best approach. It is better from a memory point of view and is probably also the fastest. We should still resolve the slowness of the array() function when converting lists of tuples, however. I will file a ticket if no one else has.

Erin
From: Tim H. <tim...@ie...> - 2006-11-15 04:56:20
Tim Hochberg wrote:
> [CHOP]
>
> The timings are pretty consistent with the previous runs, except that
> the difference between retrieve1 and retrieve2 has disappeared.  In
> fact, all of the runs that produce lists have gotten faster by about
> the same amount.  Odd!  A little digging reveals that timeit turns off
> garbage collection to make things more repeatable.  Turning gc back on
> yields the following numbers for repeat(3, 1):
>
> retrieve1 [0.92517736192728406, 0.92109667569481601, 0.92390960303614023]
> retrieve2 [1.3018456256311914, 1.2277141368525903, 1.2929785768861706]
> retrieve3 [1.5309831277438946, 1.4998853206203577, 1.5601200711263488]
> retrieve4 [8.6400394463542227, 8.7022300320292061, 8.6807761880350682]
>
> So there we are, back to our original numbers.  This also reveals that
> the majority of the time difference between retrieve1 and retrieve2 *is*
> memory related.  However, it's the deallocation (or more precisely,
> garbage collection) of all those floats that is the killer.

I just realized that this sounds sort of misleading. In both cases a million floats are allocated and deallocated. However, in retrieve1 only two of those million are alive at any one time, so Python will just keep reusing the same two chunks of memory for all 500,000 pairs (ditto for the 500,000 tuples that are created). In the other cases, all million floats will be alive at once, requiring much more memory and possibly swapping to disk.

Unsurprisingly, the second case is slower, but the details aren't clear. In particular, why is it the deallocation that is slow? Another mystery is why gc matters at all. None of the obvious actors are involved in cycles, so they would normally go away due to reference counting even with gc turned off. My rather uninformed guess is that the cursor or the connection holds onto the list (caching it for later perhaps) and that the cursor/connection is involved in some sort of cycle. This would keep the list alive until the gc ran.

-tim

[CHOP]
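A rough way to poke at Tim's guess from Python, sketched on top of the demo table used in the earlier scripts: gc.get_referrers() shows who is holding the fetched list (the module namespace will always appear as one referrer), and gc.collect() reports how many unreachable objects the collector had to deal with rather than plain reference counting. Whether this actually exposes a cursor-held cache in sqlite3 is an open question; the calls themselves are standard library.

import gc
import sqlite3
import numpy as np

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('create table demo (x real, y real)')
c.executemany('insert into demo values (?, ?)', np.random.rand(500000, 2))

c.execute('select * from demo')
rows = c.fetchall()

# Who holds a reference to the result list?  Our own namespace will show
# up; a cursor or connection object showing up too would support the
# caching guess.
print [type(r).__name__ for r in gc.get_referrers(rows)]

del rows
# If the list and its tuples were freed by reference counting alone,
# this count should be small; a large count points at cycles.
print 'unreachable objects found by gc:', gc.collect()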
From: Francesc A. <fa...@ca...> - 2006-11-16 13:13:47
On Tuesday 14 November 2006 23:08, Erin Sheldon wrote:
> On 11/14/06, John Hunter <jdh...@ac...> wrote:
> > Has anyone written any code to facilitate dumping mysql query results
> > (mainly arrays of floats) into numpy arrays directly at the extension
> > code layer. The query results->list->array conversion can be slow.
> >
> > Ideally, one could do this semi-automagically with record arrays and
> > table introspection....
>
> I've been considering this as well.  I use both postgres and Oracle
> in my work, and I have been using the python interfaces (cx_Oracle
> and pgdb) to get result lists and convert to numpy arrays.
>
> The question I have been asking myself is "what is the advantage
> of such an approach?".  It would be faster, but by how
> much?  Presumably the bottleneck for most applications will
> be data retrieval rather than data copying in memory.

Well, that largely depends on your access pattern for the data in your database. If you are accessing regions of your database that have a high degree of spatial locality (i.e. they are located in the same or very similar places), the data is most probably already in memory (in your filesystem cache or maybe in your database cache) and the bottleneck becomes memory access. Of course, if you don't have such spatial locality in the access pattern, then the bottleneck will be the disk.

Just to see how DB 2.0 could benefit from adopting record arrays as input buffers, I've done a comparison between SQLite3 and PyTables. PyTables doesn't support DB 2.0 as such, but it does use record arrays as buffers internally so as to read data in an efficient way (there should be other databases that feature this, but I know PyTables best ;)

For this, I've used a modified version of a small benchmarking program posted by Tim Hochberg in this same thread (it is listed at the end of the message). Here are the results:

setup SQLite took 23.5661110878 seconds
retrieve SQLite took 3.26717996597 seconds
setup PyTables took 0.139157056808 seconds
retrieve PyTables took 0.13444685936 seconds

[SQLite results were obtained using an in-memory database, while PyTables used an on-disk one. See the code.]

So, yes, if your access pattern exhibits a high degree of locality, you can expect a huge difference in reading speed (more than 20x for this example, but as this depends on the dataset size, it can be even higher for larger datasets).

> On the other hand, the database access modules for all major
> databases, with DB 2.0 semi-compliance, have already been written.
> This is not an insignificant amount of work.  Writing our own
> interfaces for each of our favorite databases would require an
> equivalent amount of work.

That's true, but still feasible. However, before people start doing this in a general way, it would help to first implement in Python something like the numpy.ndarray object: this would standardize a full-fledged heterogeneous buffer for doing intensive I/O tasks.

> I think a set of timing tests would be useful.  I will try some
> using Oracle or postgres over the next few days.  Perhaps
> you could do the same with mysql.

Well, here is my own benchmark (admittedly trivial). Hope it helps in your comparisons.
----------------------------------------------------------------------
import sqlite3, numpy as np, time, tables as pt, os, os.path

N = 500000
rndata = np.random.rand(2, N)
dtype = np.dtype([('x', float), ('y', float)])
data = np.empty(shape=N, dtype=dtype)
data['x'] = rndata[0]
data['y'] = rndata[1]

def setupSQLite(conn):
    c = conn.cursor()
    c.execute('''create table demo (x real, y real)''')
    c.executemany("""insert into demo values (?, ?)""", data)

def retrieveSQLite(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    y = np.fromiter(c, dtype=dtype)
    return y

def setupPT(fileh):
    fileh.createTable('/', 'table', data)

def retrievePT(fileh):
    y = fileh.root.table[:]
    return y

# if os.path.exists('test.sql3'):
#     os.remove('test.sql3')
# conn = sqlite3.connect('test.sql3')
conn = sqlite3.connect(':memory:')

t0 = time.time()
setupSQLite(conn)
t1 = time.time()
print "setup SQLite took", t1-t0, "seconds"

t0 = time.time()
y1 = retrieveSQLite(conn)
t1 = time.time()
print "retrieve SQLite took", t1-t0, "seconds"

conn.close()

fileh = pt.openFile("test.h5", "w")

t0 = time.time()
setupPT(fileh)
t1 = time.time()
print "setup PyTables took", t1-t0, "seconds"

t0 = time.time()
y2 = retrievePT(fileh)
t1 = time.time()
print "retrieve PyTables took", t1-t0, "seconds"

fileh.close()

assert y1.shape == y2.shape
assert np.alltrue(y1 == y2)
----------------------------------------------------------------------

-- 
>0,0<   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 "-"
From: Tim H. <tim...@ie...> - 2006-11-16 18:05:56
Francesc Altet wrote:
> On Tuesday 14 November 2006 23:08, Erin Sheldon wrote:
> [SNIP]
>
> Just to see how DB 2.0 could benefit from adopting record arrays as
> input buffers, I've done a comparison between SQLite3 and PyTables.
> PyTables doesn't support DB 2.0 as such, but it does use record arrays
> as buffers internally so as to read data in an efficient way (there
> should be other databases that feature this, but I know PyTables best ;)
>
> For this, I've used a modified version of a small benchmarking program
> posted by Tim Hochberg in this same thread (it is listed at the end
> of the message). Here are the results:
>
> setup SQLite took 23.5661110878 seconds
> retrieve SQLite took 3.26717996597 seconds
> setup PyTables took 0.139157056808 seconds
> retrieve PyTables took 0.13444685936 seconds
>
> [SQLite results were obtained using an in-memory database, while
> PyTables used an on-disk one. See the code.]
>
> So, yes, if your access pattern exhibits a high degree of locality,
> you can expect a huge difference in reading speed (more than 20x
> for this example, but as this depends on the dataset size, it can be
> even higher for larger datasets).

One weakness of this benchmark is that it doesn't break out how much of the sqlite3 overhead is inherent to the sqlite3 engine, which I expect is somewhat more complicated internally than PyTables, and how much is due to all the extra layers we go through to get the data into an array (native [in database] -> Python objects -> native [in record array]). To try to get at least a little handle on this, I added this test:

def querySQLite(conn):
    c = conn.cursor()
    c.execute('select * from demo where x = 0.0')
    y = np.fromiter(c, dtype=dtype)
    return y

This returns very little data (in the cases I ran it actually returned no data). However, it still needs to loop over all the records and examine them. Here's what the timings looked like:

setup SQLite took 9.71799993515 seconds
retrieve SQLite took 0.921999931335 seconds
query SQLite took 0.313000202179 seconds

I'm reluctant to conclude that 1/3 of the time is spent in traversing the database and 2/3 of the time in creating the data, solely because databases are big voodoo to me. Still, we can probably conclude that traversing the data itself is pretty expensive, and we would be unlikely to approach PyTables speed even if we didn't have the extra overhead. On the other hand, there's a factor of three or so improvement that could be realized by reducing overhead.

Or maybe not. I think that the database has to return its data a row at a time, so there's intrinsically a lot of copying that's going to happen. So, I think it's unclear whether getting the data directly in native format would be significantly cheaper. I suppose that the way to definitively test it would be to rewrite one of these tests in C. Any volunteers?

I think it's probably safe to say that either way PyTables will cream sqlite3 in those fields where it's applicable. One of these days I really need to dig into PyTables. I'm sure I could use it for something.

[snip]

-tim
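Short of rewriting the test in C, a rough Python-level way to split traversal cost from conversion cost, sketched against the demo table built by the earlier scripts: time a bare loop over the cursor (engine traversal plus per-row tuple creation, with no array built) against the fromiter version.

import time
import numpy as np

def traverse_only(conn):
    # Walk every row but build nothing beyond the per-row tuples.
    c = conn.cursor()
    c.execute('select * from demo')
    for row in c:
        pass

def traverse_and_convert(conn):
    c = conn.cursor()
    c.execute('select * from demo')
    return np.fromiter(c, dtype=[('x', float), ('y', float)])

# assumes conn points at a database populated by one of the setup()
# functions earlier in the thread
for fn in (traverse_only, traverse_and_convert):
    t0 = time.time()
    fn(conn)
    print fn.__name__, 'took', time.time() - t0, 'seconds'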