From: <pe...@ce...> - 2006-10-11 23:40:04
On Wed, 11 Oct 2006, Travis Oliphant wrote:

> >Interestingly, in worst cases numpy.sqrt is approximately 3 times slower
> >than scipy.sqrt on negative input but 2 times faster on positive input:
> >
> >In [47]: pos_input = numpy.arange(1,100,0.001)
> >
> >In [48]: %timeit -n 1000 b=numpy.sqrt(pos_input)
> >1000 loops, best of 3: 4.68 ms per loop
> >
> >In [49]: %timeit -n 1000 b=scipy.sqrt(pos_input)
> >1000 loops, best of 3: 10 ms per loop
>
> This is the one that concerns me. Slowing everybody down who knows they
> have positive values just for people that don't seems problematic.

I think the code in scipy.sqrt can be optimized from

def _fix_real_lt_zero(x):
    x = asarray(x)
    if any(isreal(x) & (x<0)):
        x = _tocomplex(x)
    return x

def sqrt(x):
    x = _fix_real_lt_zero(x)
    return nx.sqrt(x)

to (untested)

def _fix_real_lt_zero(x):
    x = asarray(x)
    if not isinstance(x,(nt.csingle,nt.cdouble)) and any(x<0):
        x = _tocomplex(x)
    return x

def sqrt(x):
    x = _fix_real_lt_zero(x)
    return nx.sqrt(x)

or

def sqrt(x):
    old = nx.seterr(invalid='raise')
    try:
        r = nx.sqrt(x)
    except FloatingPointError:
        x = _tocomplex(x)
        r = nx.sqrt(x)
    nx.seterr(**old)
    return r

I haven't timed these cases yet.

Pearu
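
A rough, self-contained timing sketch of the seterr-based variant (not part of
the original post: it uses numpy directly, a plain astype(complex) stands in
for scipy's _tocomplex helper, and the name sqrt_seterr is made up for
illustration):

import timeit
import numpy as np

def sqrt_seterr(x):
    # Promote to complex only when np.sqrt actually hits a negative value,
    # so the common all-positive case pays no extra scan up front.
    old = np.seterr(invalid='raise')
    try:
        try:
            return np.sqrt(x)
        except FloatingPointError:
            # Stand-in for scipy's _tocomplex: redo the sqrt on a complex copy.
            return np.sqrt(np.asarray(x).astype(complex))
    finally:
        # Restore the caller's floating-point error settings.
        np.seterr(**old)

pos_input = np.arange(1, 100, 0.001)
neg_input = -pos_input

print(timeit.timeit(lambda: np.sqrt(pos_input), number=1000))
print(timeit.timeit(lambda: sqrt_seterr(pos_input), number=1000))
print(timeit.timeit(lambda: sqrt_seterr(neg_input), number=1000))

If the exception path turns out to dominate for input containing negatives,
the dtype-based check in the second variant may still be the better trade-off.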