On 8/29/06, Travis Oliphant <oli...@ie...> wrote:
> Example:
>
> If a.shape is (3,4,5)
> and b.shape is (4,3,2)
>
> Then
>
> tensordot(a, b, axes=([1,0],[0,1]))
>
> returns a (5,2) array which is equivalent to the code:
>
> c = zeros((5,2))
> for i in range(5):
>     for j in range(2):
>         for k in range(3):
>             for l in range(4):
>                 c[i,j] += a[k,l,i]*b[l,k,j]
That's pretty cool.
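A quick sanity check of that equivalence (just a sketch with throwaway random data, not part of the original example):

import numpy as np

a = np.random.rand(3, 4, 5)
b = np.random.rand(4, 3, 2)

# the explicit quadruple loop from the quoted example
c = np.zeros((5, 2))
for i in range(5):
    for j in range(2):
        for k in range(3):
            for l in range(4):
                c[i, j] += a[k, l, i] * b[l, k, j]

# should agree with tensordot contracting a's axes (1,0) against b's (0,1)
assert np.allclose(c, np.tensordot(a, b, axes=([1, 0], [0, 1])))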
From there it shouldn't be too hard to make a wrapper that would allow
you to write c_ji = a_kli * b_lkj (with a sum over k and l) like:
tensordot_ez(a,'kli', b,'lkj', out='ji')
or maybe with numexpr-like syntax:
tensor_expr('_ji = a_kli * b_lkj')   [pulling a and b out of the globals()/locals()]
Might be neat to be able to build a callable function for repeated use:
tprod = tensor_func('_ji = [0]_kli * [1]_lkj')   # [0] and [1] become parameters 0 and 1
c = tprod(a, b)
or to pass the output through a (potentially reused) array argument:
tprod1 = tensor_func('[0]_ji = [1]_kli * [2]_lkj')
tprod1(c, a, b)
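For illustration, here's a rough sketch of how a tensordot_ez-style wrapper could be layered on top of numpy.tensordot; the function name and the random test data are just placeholders for the idea above, not an existing API:

import numpy as np

def tensordot_ez(a, a_sub, b, b_sub, out):
    # labels appearing in both operands are the ones summed over
    shared = [s for s in a_sub if s in b_sub]
    axes = ([a_sub.index(s) for s in shared],
            [b_sub.index(s) for s in shared])
    c = np.tensordot(a, b, axes=axes)
    # tensordot leaves a's free axes first, then b's; reorder to match 'out'
    free = ([s for s in a_sub if s not in shared] +
            [s for s in b_sub if s not in shared])
    return c.transpose([free.index(s) for s in out])

a = np.random.rand(3, 4, 5)
b = np.random.rand(4, 3, 2)
c = tensordot_ez(a, 'kli', b, 'lkj', out='ji')   # c[j,i] = sum over k,l of a[k,l,i]*b[l,k,j]

The string-expression variants (tensor_expr / tensor_func) would mostly add parsing of the '_ji = ...' form, and a lookup of the operand names or positional slots, on top of something like this.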
--bb