From: John H. <jdh...@ac...> - 2006-11-14 23:17:08
>>>>> "John" == John Hunter <jdh...@ac...> writes: >>>>> "Erin" == Erin Sheldon <eri...@gm...> writes: Erin> The question I have been asking myself is "what is the Erin> advantage of such an approach?". It would be faster, but by John> In the use case that prompted this message, the pull from John> mysql took almost 3 seconds, and the conversion from lists John> to numpy arrays took more that 4 seconds. We have a list of John> about 500000 2 tuples of floats. John> Digging in a little bit, we found that numpy is about 3x John> slower than Numeric here John> peds-pc311:~> python test.py with dtype: 4.25 elapsed John> seconds w/o dtype 5.79 elapsed seconds Numeric 1.58 elapsed John> seconds 24.0b2 1.0.1.dev3432 John> Hmm... So maybe the question is -- is there some low hanging John> fruit here to get numpy speeds up? And for reference, numarray is 5 times faster than Numeric here and 15 times faster than numpy peds-pc311:~> python test.py with dtype: 4.20 elapsed seconds w/o dtype 5.71 elapsed seconds Numeric 1.60 elapsed seconds numarray 0.30 elapsed seconds 24.0b2 1.0.1.dev3432 1.5.1 import numarray tnow = time.time() y = numarray.array(x, numarray.Float) tdone = time.time() print 'numarray %1.2f elapsed seconds'%(tdone - tnow) print numarray.__version__ |