From: Ian S. <sc...@im...> - 2012-05-16 14:26:41
On 16/05/2012 14:55, Friedmann Y. wrote:
> so how is it that the vectorised calcs are so much faster in matlab?

Taking the example of

    a = some_large_matrix .^ 2

This is cheap to parse and dispatch. The only interpreter-level operations are:

1. Parse the line.
2. Look up the variable "some_large_matrix" and store it in internal register V1.
3. Look up (or create) the variable "a" once and store it in internal register V2.
4. Call the internal function per_element_power(V1, 2, V2).

The internal function per_element_power will have been written in C (or C++, Fortran, or assembler) and will have been optimised by whichever compiler was used to build MATLAB.

If you wrote the full loop version:

    for i = 1:size(some_large_matrix, 1)
        for j = 1:size(some_large_matrix, 2)
            a(i,j) = some_large_matrix(i,j) ^ 2;
        end
    end

then the interpreter would repeatedly be doing something like:

1-9. Loop bookkeeping.
10. Parse the loop body.
11. Look up the variable some_large_matrix and store it in V1.
12. Look up the variable i and store it in V2.
13. Look up the variable j and store it in V3.
14. Call dereference_matrix(V1, V2, V3, V4).
15-18. The same again for a, into V8.
19. Call the internal function full_power(V4, 2, V8).
20-25. More loop bookkeeping, then jump back to step 4-ish, several thousand times.

I'm afraid these questions are getting a little far from VXL. I'd suggest reading a book, or taking a course, on compilers and interpreters if you want to know more.

Ian.