From: Brendan McCane <mccane@cs...>  2002-11-20 20:59:20

In message <Pine.LNX.4.44.0211201044030.12401100000@...>, Frederik Schaffalitzky writes:

> This is a common problem with big matrices; I use maxit = 1000. It is
> strange that the linpack SVD code has a fixed upper bound on the number of
> iterations since that limits the size of the matrices that can be used. It
> would probably be safe to remove the limit altogether.

Hmmm. I really don't know what I'm talking about here, but I thought there was some small chance that the SVD won't converge, in which case removing the limit is a bit risky. Also, I think it would be worth documenting this maxit problem in the SVD documentation, i.e. move the comments from inside the SVD code to the doxygen SVD docs.

Also, does anyone know if there is a simple relationship between the size of the matrix and the max number of iterations needed? Perhaps we could make an educated guess at the maximum number maxit should be?

Cheers, Brendan.

--
Brendan McCane                   Email: mccane@...
Department of Computer Science   Phone: +64 3 479 8588/8578
University of Otago              Fax: +64 3 479 8529
Box 56, Dunedin, New Zealand.    There's only one catch - Catch-22.
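[Editor's note: a minimal NumPy sketch of the situation under discussion. NumPy's `numpy.linalg.svd` is LAPACK-backed rather than the LINPACK DSVDC this thread is about, so it does not expose a user-settable maxit; the sketch only illustrates decomposing a matrix of the size where a small fixed iteration cap starts to bite, and checking the result.]

```python
# Decompose a moderately large matrix and verify the factorization.
# numpy.linalg.svd raises LinAlgError if its (LAPACK) iteration cap is
# hit -- the same failure mode as DSVDC returning info != 0.
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 200                      # large enough to stress a small maxit
A = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruction check: A should equal U * diag(s) * Vt to rounding error.
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.2e}")
```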
From: Frederik Schaffalitzky <fsm@sy...>  2002-11-20 23:30:51

By "converge" you of course mean "terminate", convergence being a property of an infinite sequence... It is true that the DSVDC code does not always terminate, but I have only seen that happen when IEEE rounding is disabled by optimization. [On the other hand, with exact arithmetic the DSVDC iteration is, so far as I know, guaranteed to converge.] I am sure there is a reasonable rule of thumb for the number of iterations required, but it is beyond my competence. Golub and Van Loan give nominal flops required, but their termination criterion is different.

Maybe someone could look into getting another SVD from somewhere.

On Thu, 21 Nov 2002, Brendan McCane wrote:

> In message <Pine.LNX.4.44.0211201044030.12401100000@...>,
> Frederik Schaffalitzky writes:
> > This is a common problem with big matrices; I use maxit = 1000. It is
> > strange that the linpack SVD code has a fixed upper bound on the number of
> > iterations since that limits the size of the matrices that can be used. It
> > would probably be safe to remove the limit altogether.
>
> Hmmm. I really don't know what I'm talking about here, but I thought there was
> some small chance that the svd won't converge, in which case removing the
> limit is a bit risky. Also, I think it would be worth documenting this maxit
> problem in the svd documentation - i.e. move the comments from inside the svd
> code to the doxygen svd docs.
>
> Also, does anyone know if there is a simple relationship between the size of
> the matrix and the max number of iterations needed? Perhaps we could make an
> educated guess at the maximum number maxit should be?
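[Editor's note: a sketch of how the non-termination failure mode surfaces to a caller today. When the LAPACK iteration cap is exhausted, NumPy raises `numpy.linalg.LinAlgError` ("SVD did not converge"). The retry-with-perturbation fallback below is a commonly used workaround, not something proposed in this thread, and `safe_svd` is a hypothetical name.]

```python
# Defensive wrapper: catch SVD non-termination and retry once on a
# slightly perturbed matrix (a common practical workaround).
import numpy as np

def safe_svd(A):
    try:
        return np.linalg.svd(A, full_matrices=False)
    except np.linalg.LinAlgError:
        # Perturb at the level of rounding error and try again.
        m, n = A.shape
        eps = np.finfo(A.dtype).eps * np.linalg.norm(A)
        return np.linalg.svd(A + eps * np.eye(m, n), full_matrices=False)

# Usage: behaves exactly like np.linalg.svd on well-behaved input.
rng = np.random.default_rng(1)
U, s, Vt = safe_svd(rng.standard_normal((50, 40)))
print(U.shape, s.shape, Vt.shape)
```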
From: Andrew Fitzgibbon <awf@ro...>  2002-11-21 12:15:39

> Maybe someone could look into getting another SVD from somewhere.

Is the LAPACK one better?
From: Frederik Schaffalitzky <fsm@sy...>  2002-11-22 00:16:06

On Thu, 21 Nov 2002, Andrew Fitzgibbon wrote:

> > Maybe someone could look into getting another SVD from somewhere.
>
> Is the LAPACK one better?

One would think so, since the stated purpose of LAPACK was to make LINPACK and EISPACK more efficient, and the FAQs for both of those libraries say they have been made mostly obsolete by LAPACK. It can be obtained from http://www.netlib.org as a 4.8Mb gzipped tarfile of code, with unoptimized BLAS and tests. It makes heavy use of BLAS, so vendor-supplied versions of that should make LAPACK faster. In addition it has a C version (CLAPACK) that is essentially the result of running f2c and then manually tweaking the result, which sounds familiar. That would be useful for systems that don't have a FORTRAN compiler, but otherwise I see no reason not to just compile the FORTRAN out of the box.

I think the maximum number of iterations used by the LAPACK SVD iteration ([SDCZ]BDSQR) is 6N^2, where N is the smallest dimension of the matrix to be decomposed (the iteration is applied to a bidiagonal matrix with only 2N-1 nonzero entries).

Maybe LAPACK is better. Someone should try it out. Not me.
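[Editor's note: a small sketch of the iteration cap Frederik describes. The 6N^2 figure is his recollection for [SDCZ]BDSQR and is taken from the thread, not re-verified against the LAPACK sources; the helper name `bdsqr_maxit` is made up for illustration.]

```python
# Nominal LAPACK bidiagonal-SVD iteration cap for an m-by-n matrix:
# roughly 6*N^2 QR iterations, where N = min(m, n), because the
# iteration runs on a bidiagonal matrix with only 2N-1 nonzero entries.
def bdsqr_maxit(m, n):
    N = min(m, n)
    return 6 * N * N

# The cap grows with the matrix, unlike DSVDC's fixed maxit
# (the thread mentions raising it to 1000 as a workaround).
for shape in [(10, 10), (100, 50), (1000, 1000)]:
    print(shape, "->", bdsqr_maxit(*shape))  # e.g. (100, 50) -> 15000
```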