From: Rob Speer <rspeer@MIT.EDU>  2010-03-24 19:05:22

It looks like it should be possible to compute the truncated spectral decomposition of a sparse, symmetric matrix using pysparse.jdsym. This is the key step in computing a truncated SVD, which is the next thing to do, and it would be great to be able to do it entirely within Pysparse.

There's just one thing I'm unsure about: how do I ask for the *largest* eigenvalues? jdsym is set up to return eigenvalues around some value tau, defaulting to 0, so it seems this is set up for finding the smallest eigenvalues. Do I just set tau to some very large number, or would that cause numerical stability issues? Is this the wrong problem for jdsym to solve?

-- Rob
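For context, the relationship Rob is relying on: the top singular values of a matrix A are the square roots of the top eigenvalues of the symmetric matrix AᵀA, so a truncated symmetric eigensolver yields a truncated SVD. A small dense sketch in NumPy, just to show the identity (a sparse eigensolver would replace `eigh` in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))

# Eigendecomposition of the symmetric Gram matrix A^T A ...
evals, evecs = np.linalg.eigh(A.T @ A)

# ... gives the singular values of A as square roots of its eigenvalues,
# sorted in descending order to match SVD convention.
sv_from_eig = np.sqrt(np.sort(evals)[::-1])
sv_direct = np.linalg.svd(A, compute_uv=False)

print(np.allclose(sv_from_eig, sv_direct))
```

Keeping only the k largest eigenpairs of AᵀA gives the rank-k truncated SVD, which is why "ask for the largest eigenvalues" is exactly the operation needed here.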
From: Roman Geus <roman.geus@gm...>  2010-03-25 07:17:19

Hi Rob

If you set tau to some very large number, but still far away from the actual largest eigenvalue, you might experience very slow convergence. There are certainly better and simpler algorithms than JDSYM for computing a few of the largest eigenvalues.

-- Roman
From: Rob Speer <rspeer@MIT.EDU>  2010-03-25 16:21:31

Got any pointers? Up until now, I've been working with an old, clunky C library called SVDLIBC that implements Lanczos. Pysparse's jdsym is the first thing I've seen that can find eigenvectors and presents an interface that actually works with Python objects.

There's the stuff in scipy.sparse, of course, but that's been stalled in development for years now, and it often doesn't compile from SVN.

-- Rob
From: Roman Geus <roman.geus@gm...>  2010-03-29 13:15:12

Hi Rob

I have had good experiences with ARPACK in the past. There seems to be a Python wrapper for it as well (though I have never used it).

Regards,
Roman
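The ARPACK route Roman suggests is now reachable from Python through SciPy's wrapper, `scipy.sparse.linalg.eigsh`, which drives ARPACK's symmetric Lanczos iteration. A minimal sketch, assuming a SciPy installation with the ARPACK bindings built; note that the largest eigenvalues are requested directly with `which='LM'`, with no shift parameter like tau needed:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh  # ARPACK wrapper for symmetric matrices

# Build a small sparse symmetric test matrix with known eigenvalues 1..100.
n = 100
A = sp.diags(np.arange(1, n + 1, dtype=float)).tocsr()

# Ask ARPACK for the 5 *largest-magnitude* eigenvalues directly.
vals, vecs = eigsh(A, k=5, which='LM')

print(np.sort(vals))
```

For the largest eigenvalues this converges quickly without any shift, which sidesteps the slow-convergence problem of setting tau far from the spectrum's edge in JDSYM.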