From: Curzio B. <cur...@un...> - 2004-06-24 10:14:20
Hi. I noticed that when multiplying two matrices of type Float32, the result is Float64:

-----------------------------------------
In [103]: a=NA.ones((2,2), NA.Float32)

In [104]: b=NA.ones((2,2), NA.Float32)

In [105]: c=NA.matrixmultiply(a,b)

In [106]: c.type()
Out[106]: Float64
-----------------------------------------

Since the matrices I'm going to multiply in practice are quite big, I'd like to do the operation in Float32. Otherwise this is what I get:

Traceback (most recent call last):
  File "/home/basso/work/python/port/apps/pca-heads.py", line 141, in ?
    pc = NA.array(NA.matrixmultiply(cent, c), NA.Float32)
  File "/home/basso/usr//lib/python/numarray/numarraycore.py", line 1150, in dot
    return ufunc.innerproduct(array1, _gen.swapaxes(array2, -1, -2))
  File "/home/basso/usr//lib/python/numarray/ufunc.py", line 2047, in innerproduct
    r = a.__class__(shape=adots+bdots, type=rtype)
MemoryError

Any suggestion (apart from doing the operation one column at a time)?

thanks
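For concreteness, the column-at-a-time workaround mentioned above can be generalized to blocks of columns, so that any promoted Float64 temporary stays small while the stored result remains Float32. This is only a sketch against the numarray API used in this thread; the function name and block size are illustrative, not part of numarray.

-----------------------------------------
import numarray as NA

def blocked_matmul_float32(a, b, block=256):
    # Multiply a (m x k) by b (k x n) a block of columns at a time,
    # storing the result as Float32.  Any Float64 temporary produced
    # by matrixmultiply is only m x block elements, never m x n.
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = NA.zeros((m, n), NA.Float32)
    for j in range(0, n, block):
        stop = min(j + block, n)
        chunk = NA.matrixmultiply(a, b[:, j:stop])
        out[:, j:stop] = chunk.astype(NA.Float32)
    return out
-----------------------------------------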
From: Todd M. <jm...@st...> - 2004-06-24 13:38:38
On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
> Hi.
>
> I noticed that when multiplying two matrices of type Float32, the result
> is Float64:
>
> -----------------------------------------
> In [103]: a=NA.ones((2,2), NA.Float32)
>
> In [104]: b=NA.ones((2,2), NA.Float32)
>
> In [105]: c=NA.matrixmultiply(a,b)
>
> In [106]: c.type()
> Out[106]: Float64
> -----------------------------------------
>
> Since the matrices I'm going to multiply in practice are quite big, I'd
> like to do the operation in Float32. Otherwise this is what I get:
>
> Traceback (most recent call last):
>   File "/home/basso/work/python/port/apps/pca-heads.py", line 141, in ?
>     pc = NA.array(NA.matrixmultiply(cent, c), NA.Float32)
>   File "/home/basso/usr//lib/python/numarray/numarraycore.py", line 1150, in dot
>     return ufunc.innerproduct(array1, _gen.swapaxes(array2, -1, -2))
>   File "/home/basso/usr//lib/python/numarray/ufunc.py", line 2047, in innerproduct
>     r = a.__class__(shape=adots+bdots, type=rtype)
> MemoryError
>
> Any suggestion (apart from doing the operation one column at a time)?

I modified dot() and innerproduct() this morning to return Float32 and
Complex32 for like inputs. This is in CVS now. numarray-1.0 is dragging
out, but it will nevertheless be released relatively soon.

I'm curious about what your array dimensions are. When I implemented
matrixmultiply for numarray, I was operating under the assumption that
no one would be multiplying truly huge arrays, because it's an O(N^3)
algorithm.

Regards,
Todd

> thanks

--
Todd Miller <jm...@st...>
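To make the memory stakes concrete: for an m x n result, the promoted Float64 output alone needs twice the storage of a Float32 one. The dimensions below are purely hypothetical, chosen only to show the arithmetic, not taken from the original poster's data.

-----------------------------------------
# Rough size of the result array of matrixmultiply(a, b),
# with a of shape (m, k) and b of shape (k, n); dimensions are made up.
m, k, n = 10000, 10000, 10000
mb = 1024.0 * 1024.0
print("Float32 result: %.0f MB" % (4 * m * n / mb))   # ~381 MB
print("Float64 result: %.0f MB" % (8 * m * n / mb))   # ~763 MB
-----------------------------------------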
From: Rick W. <rl...@st...> - 2004-06-24 14:31:00
On 24 Jun 2004, Todd Miller wrote:
> On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
>
> > I noticed that when multiplying two matrices of type Float32, the result
> > is Float64:
>
> I modified dot() and innerproduct() this morning to return Float32 and
> Complex32 for like inputs.

I wonder whether it would be worth providing an option to accumulate
the sums using Float64 and to convert to Float32 before storing them in
an array. I suspect that one reason this returned Float64 is that it
is very easy to run into precision/roundoff problems in
single-precision matrix multiplies. You could avoid that by using
doubles for the sum while still returning the result as a single.

Rick
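A pure-Python illustration of that scheme, assuming nothing about numarray's internals (the real change would live in the C inner loop): inputs and output stay Float32, but each inner-product accumulator is a Python float, i.e. a C double. It pins down the semantics only; a triple Python loop would be far too slow to use in practice.

-----------------------------------------
import numarray as NA

def dot32_with_double_sums(a, b):
    # a: (m, k) Float32, b: (k, n) Float32 -> (m, n) Float32 result,
    # with every inner product accumulated in double precision.
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    out = NA.zeros((m, n), NA.Float32)
    for i in range(m):
        for j in range(n):
            acc = 0.0                          # Python float == C double
            for p in range(k):
                acc += float(a[i, p]) * float(b[p, j])
            out[i, j] = acc                    # narrowed to Float32 on store
    return out
-----------------------------------------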
From: Perry G. <pe...@st...> - 2004-06-24 15:08:34
Rick White wrote:
> On 24 Jun 2004, Todd Miller wrote:
>
> > On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
> >
> > > I noticed that when multiplying two matrices of type Float32, the result
> > > is Float64:
> >
> > I modified dot() and innerproduct() this morning to return Float32 and
> > Complex32 for like inputs.
>
> I wonder whether it would be worth providing an option to accumulate
> the sums using Float64 and to convert to Float32 before storing them in
> an array. I suspect that one reason this returned Float64 is that it
> is very easy to run into precision/roundoff problems in
> single-precision matrix multiplies. You could avoid that by using
> doubles for the sum while still returning the result as a single.
>
> Rick

I definitely agree. I'm pretty certain the reason it was done with
double-precision floats is the sensitivity of matrix operations to
roundoff issues. I think Rick is right, though, that only the
intermediate calculations need to be done in double precision, and
that doesn't require the whole output array to be kept that way.

Perry
From: Todd M. <jm...@st...> - 2004-06-24 15:31:33
On Thu, 2004-06-24 at 11:08, Perry Greenfield wrote:
> Rick White wrote:
>
> > On 24 Jun 2004, Todd Miller wrote:
> >
> > > On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
> > >
> > > > I noticed that when multiplying two matrices of type Float32, the result
> > > > is Float64:
> > >
> > > I modified dot() and innerproduct() this morning to return Float32 and
> > > Complex32 for like inputs.
> >
> > I wonder whether it would be worth providing an option to accumulate
> > the sums using Float64 and to convert to Float32 before storing them in
> > an array. I suspect that one reason this returned Float64 is that it
> > is very easy to run into precision/roundoff problems in
> > single-precision matrix multiplies. You could avoid that by using
> > doubles for the sum while still returning the result as a single.
> >
> > Rick
>
> I definitely agree. I'm pretty certain the reason it was done with
> double-precision floats is the sensitivity of matrix operations to
> roundoff issues. I think Rick is right, though, that only the
> intermediate calculations need to be done in double precision, and
> that doesn't require the whole output array to be kept that way.
>
> Perry

OK. I implemented intermediate sums using Float64 and Complex64, but
single-precision inputs will still result in single-precision outputs.

Todd
From: Todd M. <jm...@st...> - 2004-06-24 15:33:54
On Thu, 2004-06-24 at 10:30, Rick White wrote:
> On 24 Jun 2004, Todd Miller wrote:
>
> > On Thu, 2004-06-24 at 06:14, Curzio Basso wrote:
> >
> > > I noticed that when multiplying two matrices of type Float32, the result
> > > is Float64:
> >
> > I modified dot() and innerproduct() this morning to return Float32 and
> > Complex32 for like inputs.
>
> I wonder whether it would be worth providing an option to accumulate
> the sums using Float64 and to convert to Float32 before storing them in
> an array. I suspect that one reason this returned Float64 is that it
> is very easy to run into precision/roundoff problems in
> single-precision matrix multiplies. You could avoid that by using
> doubles for the sum while still returning the result as a single.
>
> Rick

OK. I implemented intermediate sums using Float64 and Complex64, but
single-precision inputs will still result in single-precision outputs.

Todd
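If the change behaves as described, the original poster's session should now come back in single precision. The expected output below is an assumption based on this thread, not a transcript from the patched build.

-----------------------------------------
import numarray as NA

a = NA.ones((2, 2), NA.Float32)
b = NA.ones((2, 2), NA.Float32)
c = NA.matrixmultiply(a, b)
print(c.type())   # expected: Float32 (was Float64 before the change)
-----------------------------------------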