Laurent Plagne wrote:
>> * Why is there a maximum at a matrix size of 100? What does this mean?
> Well, it is actually 1000. It is an arbitrary size. It means 1000x1000
> elements, which is a large enough
> size to be an "out of the cache" calculation and still fit in RAM (no
Sorry if I haven't expressed myself accurately: at a matrix size of ~100
most of the libraries reach their maximum speed (MFLOPS).
My question was aimed at that point.
>> * Could you use the ifc|icc instead of f77|gcc they should be much
> I guess that ifc and icc are both native compilers on a given
> architecture. The comparison
> between ifc and g77 would probably not be fair, because most good
> native Fortran compilers
> can identify a BLAS-like calculation written with loops and replace it
> with a call to a
> vendor BLAS implementation. Good performance obtained this way
> depends on a commercial
> compiler and will not be portable (to a Linux box, for example).
> As for gcc, I think it is a very good compiler which probably compares
> well with vendor compilers in
> most cases (especially with C code).
OK, ifc and icc are commercial compilers, but for universities they are free.
So if I write numerical code, it is of interest to me
how fast it runs on which architecture with which compiler. Because the GNU
compilers run nearly everywhere, they should always
be given as a reference speed, but why shouldn't we compare native compilers as well?
>> * Why is MTL so much better(worse) than the rest and why are ATLAS
>> and INTEL_BLAS so much worse(better) than the rest?
OK, if ATLAS and INTEL_BLAS (whatever that is, I haven't a clue) are so much
faster, why don't we use them and forget about blitz, ublas, and even the STL?