From: Dominique Orban <dominique.orban@gm...> - 2009-11-27 22:00:06

Hi Rodrigo,

If you look at the jdsym documentation you will see that the input matrix (and preconditioners) need not be in linked-list format, or any other Pysparse format for that matter. The matrix object need only have the shape, matvec and matvec_transp attributes. If you can generate irow, jcol and val arrays efficiently, then it is easy to wrap those into a simple sparse matrix class that provides the required attributes.

What kind of preconditioner are you using?

Regards,
Dominique

On Thu, Nov 19, 2009 at 3:44 AM, izateran@... <izateran@...> wrote:

> Hello Dominique,
>
> Thank you very much for answering. Conversion to scipy sparse formats is not as expensive as using put. I can generate the three vectors irow, icol, val in a short time. The problem is to put the values into the pysparse ll format. I can even generate a scipy linked-list format efficiently, but the problem is that this is not the same as the pysparse linked-list format and I cannot use it with jdsym.
>
> The option to use put is unfortunately very expensive (I have tried it). I will try to generate the pysparse matrix directly, but I am afraid not all the operations I use are supported. For example, I do a very expensive double-loop operation in C with Pyrex and I am able to generate an n*n vector that I reshape inside Python to do further processing in matrix form. Can I reshape a vector into pysparse matrix format?
>
> Best regards
>
> Rodrigo
>
>> Hi Rodrigo,
>>
>> Thanks for using PySparse! All comments can help us improve the library. A (rough) design decision in PySparse is that matrix operations should be "cheap", i.e., O(nnz), for matrices that are indeed sparse. If your matrix is dense, I'm afraid any constructor of the form spmatrix.ll_mat(K) will require O(n^2) operations. Please correct me if I'm wrong, but the Matrix Market format lists all the nonzero elements of your matrix, and we *do* need all that information to construct the matrix. I don't see how to lower this cost. Using a different sparse format won't help either.
>>
>> What could help is to detect any exploitable structure in your matrix. Yes, it is dense, but it probably isn't random and may have some pattern to it. For instance, it could be (anti)symmetric, (block) circulant, (block) Toeplitz, or whatever. I feel that is where savings might be found.
>>
>> Alternatively, is there any chance to bypass the 2-dimensional array that you process before building the PySparse matrix, construct the PySparse matrix directly and operate on it instead? Or operate on arrays of the form (irow, jcol, val) and then use put()?
>>
>> I hope this helps. Good luck.
>>
>> --
>> Dominique
>
> -------- Original-Nachricht --------
> Subject: Re: [Pysparse-users] Full matrix conversion to spmatrix.ll_mat
> Date: Wed, 18 Nov 2009 23:30:24 +0100
> From: Dominique Orban <dominique.orban@...>
> To: "izateran@..." <izateran@...>
>
> On Wed, Nov 18, 2009 at 11:47 AM, izateran@... <izateran@...> wrote:
>
>> Dear Sirs,
>>
>> I'm using pysparse with the eigenvalue solver jdsym. It works great!! The only problem I have at the moment is the conversion time from the original format of my matrix. Here is a description of my process:
>>
>> - Read a file with around 20000 points
>> - Process these points and get a very big array (20000*20000) (this part is done with Pyrex)
>> - I reshape the array into a matrix (20000, 20000)
>> - I do some math (matrix transpose, matrix scaling, dot product, transpose, etc.)
>> - I get a matrix from which I would like to obtain a couple of eigenvectors
>>
>> In order to use jdsym I need the matrix in linked-list format. I have tried several things:
>>
>> - write the matrix in Matrix Market format (using scipy) and then read it with spmatrix.ll_mat_from_mtx (it takes very, very long)
>> - convert directly to sparse coo format and then into ll format (I get an error that this format is not supported for conversion)
>> - use the internal routine put(V, index1, index2) (this is faster, but still takes a long time compared with the eigenvalue problem)
>> - I also use
>>
>>       for ii in range(N):
>>           for jj in range(N):
>>               Al[ii,jj] = K[ii,jj]
>>
>>   (this is almost the same as using put)
>>
>> Is there any way to get the matrix into the right format faster? It would greatly improve the whole process of obtaining the eigenvalues. It would be great to have something like:
>>
>>       spmatrix.ll_mat(K)
>>
>> where K is a full matrix.
>>
>> I appreciate any help with this problem.

--
Dominique
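The wrapper Dominique suggests, a thin object exposing only the shape, matvec and matvec_transp attributes, can be sketched in pure Python. The class name and the triple-based storage below are illustrative, not part of Pysparse; the matvec(x, y) convention of writing A*x into a preallocated output vector y mirrors the one Pysparse matrix objects use. A sketch, not a definitive implementation:

```python
class TripleMatrix:
    """Illustrative operator built from COO triples (irow, jcol, val).

    It provides only the attributes jdsym is said to require:
    shape, matvec and matvec_transp.
    """

    def __init__(self, nrow, ncol, irow, jcol, val):
        self.shape = (nrow, ncol)
        self.triples = list(zip(irow, jcol, val))

    def matvec(self, x, y):
        # y <- A*x, accumulated entry by entry in O(nnz).
        for k in range(len(y)):
            y[k] = 0.0
        for i, j, v in self.triples:
            y[i] += v * x[j]

    def matvec_transp(self, x, y):
        # y <- A^T*x: same triples, row and column roles swapped.
        for k in range(len(y)):
            y[k] = 0.0
        for i, j, v in self.triples:
            y[j] += v * x[i]
```

The point of the design is that no conversion to linked-list format is needed at all: the (irow, jcol, val) arrays Rodrigo can already generate cheaply are used as-is.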
From: morovia morovia <jallikattu@go...> - 2009-11-27 06:35:37

Dear Dominique,

> If libf77blas.a, libcblas.a, libatlas.a are in $HOME/bin/lib, it seems that you would need instead:
>
>     library_dirs_list = ['/home/viswanath/bin/lib', '/usr/lib']
>     libraries_list = ['f77blas', 'cblas', 'atlas', 'lapack']
>
> Let us know if that works.

Your suggestion has worked and I could import precon! However, when I tried to run one of the Examples, poisson_test_eig.py, an ImportError occurred:

    undefined symbol: _gfortran_st_write in jdsym.so

Any suggestions?

Thanks,
Best regards,
Viswanath.
From: Dominique Orban <dominique.orban@gm...> - 2009-11-27 05:39:59

On Fri, Nov 27, 2009 at 12:10 AM, morovia morovia <jallikattu@...> wrote:

> Thanks for your reply. I have my atlas libraries installed in $HOME/bin/lib, where the libraries libf77blas.a, libcblas.a, libatlas.a are present.
>
> My .bashrc contains:
>
>     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/bin/lib:$HOME/bin/lib/ATLAS/
>
> I have modified the following lines in the setup.py of pysparse:
>
>     library_dirs_list = ['', '/home/viswanath/bin/lib/ATLAS', '/usr/lib']
>     libraries_list = ['atlas', 'lapack', 'blas']
>     superlu_defs = [('USE_VENDOR_BLAS',1), ('USE_VENDOR_ATLAS',1)]
>     umfpack_defs = [('DINT', 1), ('CBLAS', 1), ('ATLAS', 1)]  # with atlas cblas (http://math-atlas.sf.net)
>
>     else:
>         umfpack_include_dirs = ['/home/viswanath/bin/lib/ATLAS', 'amd', 'umfpack']
>         umfpack_library_dirs = ['', '/home/viswanath/bin/lib/ATLAS']
>         umfpack_libraries = ['atlas', 'cblas', 'm']

If libf77blas.a, libcblas.a, libatlas.a are in $HOME/bin/lib, it seems that you would need instead:

    library_dirs_list = ['/home/viswanath/bin/lib', '/usr/lib']
    libraries_list = ['f77blas', 'cblas', 'atlas', 'lapack']

Let us know if that works.

--
Dominique
From: morovia morovia <jallikattu@go...> - 2009-11-27 05:11:03

Dear Dominique,

> Could you show us the variables you changed in setup.py? Basically, for ATLAS support you need to link with libatlas, libf77blas and libcblas. Those libraries must also be in your LD_LIBRARY_PATH.
>
> We really need to be using the Numpy Distutils in Pysparse!

Thanks for your reply. I have my atlas libraries installed in $HOME/bin/lib, where the libraries libf77blas.a, libcblas.a, libatlas.a are present.

My .bashrc contains:

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/bin/lib:$HOME/bin/lib/ATLAS/

I have modified the following lines in the setup.py of pysparse:

    library_dirs_list = ['', '/home/viswanath/bin/lib/ATLAS', '/usr/lib']
    libraries_list = ['atlas', 'lapack', 'blas']
    superlu_defs = [('USE_VENDOR_BLAS',1), ('USE_VENDOR_ATLAS',1)]
    umfpack_defs = [('DINT', 1), ('CBLAS', 1), ('ATLAS', 1)]  # with atlas cblas (http://math-atlas.sf.net)

    else:
        umfpack_include_dirs = ['/home/viswanath/bin/lib/ATLAS', 'amd', 'umfpack']
        umfpack_library_dirs = ['', '/home/viswanath/bin/lib/ATLAS']
        umfpack_libraries = ['atlas', 'cblas', 'm']

Thanks,
Best regards,
Viswanath.
From: Dominique Orban <dominique.orban@gm...> - 2009-11-26 18:13:16

On Thu, Nov 26, 2009 at 2:05 AM, morovia morovia <jallikattu@...> wrote:

> I could compile pysparse by enabling the atlas library as stated in the comments of setup.py. But I still get ImportError: undefined symbol "ATL_dcopy" when I try to import precon.
>
> Can you please point out where exactly I need to make the changes in setup.py.

Hi Viswanath,

Could you show us the variables you changed in setup.py? Basically, for ATLAS support you need to link with libatlas, libf77blas and libcblas. Those libraries must also be in your LD_LIBRARY_PATH.

We really need to be using the Numpy Distutils in Pysparse!

Thanks,
--
Dominique
From: morovia morovia <jallikattu@go...> - 2009-11-26 07:05:42

Hello,

I could compile pysparse by enabling the atlas library as stated in the comments of setup.py. But I still get

    ImportError: undefined symbol "ATL_dcopy"

when I try to import precon.

Can you please point out where exactly I need to make the changes in setup.py.

Thanks in advance,
Best regards,
Viswanath.
From: Dominique Orban <dominique.orban@gm...> - 2009-11-18 22:30:39

On Wed, Nov 18, 2009 at 11:47 AM, izateran@... <izateran@...> wrote:

> Dear Sirs,
>
> I'm using pysparse with the eigenvalue solver jdsym. It works great!! The only problem I have at the moment is the conversion time from the original format of my matrix. Here is a description of my process:
>
> - Read a file with around 20000 points
> - Process these points and get a very big array (20000*20000) (this part is done with Pyrex)
> - I reshape the array into a matrix (20000, 20000)
> - I do some math (matrix transpose, matrix scaling, dot product, transpose, etc.)
> - I get a matrix from which I would like to obtain a couple of eigenvectors
>
> In order to use jdsym I need the matrix in linked-list format. I have tried several things:
>
> - write the matrix in Matrix Market format (using scipy) and then read it with spmatrix.ll_mat_from_mtx (it takes very, very long)
> - convert directly to sparse coo format and then into ll format (I get an error that this format is not supported for conversion)
> - use the internal routine put(V, index1, index2) (this is faster, but still takes a long time compared with the eigenvalue problem)
> - I also use
>
>       for ii in range(N):
>           for jj in range(N):
>               Al[ii,jj] = K[ii,jj]
>
>   (this is almost the same as using put)
>
> Is there any way to get the matrix into the right format faster? It would greatly improve the whole process of obtaining the eigenvalues. It would be great to have something like:
>
>       spmatrix.ll_mat(K)
>
> where K is a full matrix.
>
> I appreciate any help with this problem.

Hi Rodrigo,

Thanks for using PySparse! All comments can help us improve the library. A (rough) design decision in PySparse is that matrix operations should be "cheap", i.e., O(nnz), for matrices that are indeed sparse. If your matrix is dense, I'm afraid any constructor of the form spmatrix.ll_mat(K) will require O(n^2) operations. Please correct me if I'm wrong, but the Matrix Market format lists all the nonzero elements of your matrix, and we *do* need all that information to construct the matrix. I don't see how to lower this cost. Using a different sparse format won't help either.

What could help is to detect any exploitable structure in your matrix. Yes, it is dense, but it probably isn't random and may have some pattern to it. For instance, it could be (anti)symmetric, (block) circulant, (block) Toeplitz, or whatever. I feel that is where savings might be found.

Alternatively, is there any chance to bypass the 2-dimensional array that you process before building the PySparse matrix, construct the PySparse matrix directly and operate on it instead? Or operate on arrays of the form (irow, jcol, val) and then use put()?

I hope this helps. Good luck.

--
Dominique
From: <lukpank@o2...> - 2009-11-18 17:17:35

"izateran@..." <izateran@...> writes:

> Dear Sirs,
>
> I'm using pysparse with the eigenvalue solver jdsym. It works great!! The only problem I have at the moment is the conversion time from the original format of my matrix. Here is a description of my process:
>
> - Read a file with around 20000 points
> - Process these points and get a very big array (20000*20000) (this part is done with Pyrex)
> - I reshape the array into a matrix (20000, 20000)
> - I do some math (matrix transpose, matrix scaling, dot product, transpose, etc.)
> - I get a matrix from which I would like to obtain a couple of eigenvectors
>
> In order to use jdsym I need the matrix in linked-list format. I have tried several things:
>
> - write the matrix in Matrix Market format (using scipy) and then read it with spmatrix.ll_mat_from_mtx (it takes very, very long)
> - convert directly to sparse coo format and then into ll format (I get an error that this format is not supported for conversion)
> - use the internal routine put(V, index1, index2)

I suggest you use the nonzero() method of numpy arrays, in case you do not already.

> (this is faster, but still takes a long time compared with the eigenvalue problem)
> - I also use
>
>       for ii in range(N):
>           for jj in range(N):
>               Al[ii,jj] = K[ii,jj]
>
>   (this is almost the same as using put)
>
> Is there any way to get the matrix into the right format faster? It would greatly improve the whole process of obtaining the eigenvalues. It would be great to have something like:
>
>       spmatrix.ll_mat(K)
>
> where K is a full matrix.
>
> I appreciate any help with this problem.
>
> Best regards
>
> Rodrigo
>
> _______________________________________________
> Pysparse-users mailing list
> Pysparse-users@...
> https://lists.sourceforge.net/lists/listinfo/pysparse-users
From: Dominique Orban <dominique.orban@gm...> - 2009-11-17 21:27:38

On Tue, Nov 17, 2009 at 1:24 PM, Fergus Gallagher <fergus@...> wrote:

> On Tue, Nov 17, 2009 at 01:09:24PM -0500, Dominique Orban wrote:
>>
>> If you really need A*A^T explicitly, I suggest building A^T and then using dot().
>
> I think we do. Our matrices are ~10 million square, so A^T is not cheap, doing it either explicitly as A[i,j] = A[j,i] (ouch) or as A^T = spmatrix.dot(A, I).

OK. What I meant was to construct B = A^T to start with, instead of A. Then you can say dot(B, B), which will give you A*A^T.

--
Dominique
From: Fergus Gallagher <fergus@go...> - 2009-11-17 18:25:05

On Tue, Nov 17, 2009 at 01:09:24PM -0500, Dominique Orban wrote:

Thanks for the reply.

> If you really need A*A^T explicitly, I suggest building A^T and then using dot().

I think we do. Our matrices are ~10 million square, so A^T is not cheap, doing it either explicitly as A[i,j] = A[j,i] (ouch) or as A^T = spmatrix.dot(A, I).

> Often you don't need this matrix explicitly though (e.g., it is easy to build matrix-vector products with A*A^T without assembling this matrix). Similarly, if you need to factorize it to solve A*A^T x = b, you can solve the equivalent system
>
>     [ I  A^T ] [x]   [ 0 ]
>     [ A   0  ] [y] = [ b ]
>
> This last system is also much sparser, and it is symmetric. It is indefinite though.

Our application is more statistical (correlation); we're not trying to solve/invert.

Regards.

--
Fergus Gallagher
"I take my children everywhere, but they always find their way back home." - Robert Orben
From: Dominique Orban <dominique.orban@gm...> - 2009-11-17 18:09:37

On Tue, Nov 17, 2009 at 6:42 AM, Fergus Gallagher <fergus@...> wrote:

>> The find/put method is surely the fastest in general. The more fundamental question is: do you really need the explicit transpose? Often, algorithms that need the transpose operate directly on the original matrix. If what you need is matrix-vector products of the form A.T*x, you can always use the matvec_transp() method. If you're using the higher-level PysparseMatrix objects, you can do A*x and x*A. The latter actually computes A.T*x.
>>
>> I hope this helps.
>
> Hi,
>
> thanks for all the replies.
>
> What we want to do is generate the "outer product" A*B^T (actually A*A^T in our case).

If you really need A*A^T explicitly, I suggest building A^T and then using dot().

Often you don't need this matrix explicitly though (e.g., it is easy to build matrix-vector products with A*A^T without assembling this matrix). Similarly, if you need to factorize it to solve A*A^T x = b, you can solve the equivalent system

    [ I  A^T ] [x]   [ 0 ]
    [ A   0  ] [y] = [ b ]

This last system is also much sparser, and it is symmetric. It is indefinite though.

--
Dominique
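The matrix-vector products with A*A^T "without assembling this matrix" that Dominique mentions amount to two successive products, y = A*(A^T*x). A minimal pure-Python sketch of that idea, with plain nested lists standing in for a sparse matrix (the function name is illustrative; with a sparse A, the two products each cost O(nnz) rather than the O(m^2 n) of forming A*A^T):

```python
def aat_matvec(A, x):
    """Compute (A*A^T)*x as A*(A^T*x), never forming A*A^T.

    A is given as a list of rows (dense here only for clarity);
    x has length m = number of rows of A.
    """
    m, n = len(A), len(A[0])
    # z = A^T * x, a vector of length n.
    z = [sum(A[i][j] * x[i] for i in range(m)) for j in range(n)]
    # y = A * z, a vector of length m: this equals (A*A^T)*x.
    return [sum(A[i][j] * z[j] for j in range(n)) for i in range(m)]
```

This is exactly the operation an iterative eigenvalue or linear solver needs, which is why the explicit product can often be skipped entirely.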
From: Fergus Gallagher <fergus@go...> - 2009-11-17 12:27:56

> The find/put method is surely the fastest in general. The more fundamental question is: do you really need the explicit transpose? Often, algorithms that need the transpose operate directly on the original matrix. If what you need is matrix-vector products of the form A.T*x, you can always use the matvec_transp() method. If you're using the higher-level PysparseMatrix objects, you can do A*x and x*A. The latter actually computes A.T*x.
>
> I hope this helps.

Hi,

thanks for all the replies.

What we want to do is generate the "outer product" A*B^T (actually A*A^T in our case).

Regards,
--
Fergus Gallagher
"Setting a good example for children takes all the fun out of middle age." - William Feather
From: Dominique Orban <dominique.orban@gm...> - 2009-11-16 16:39:51

2009/11/16 Łukasz Pankowski <lukpank@...>:

> Toine Bogers <tb@...> writes:
>
>> Hi,
>>
>> What is the most efficient way of transposing a matrix using PySparse? A search of the website doesn't yield any results and I don't see any transpose method in the list of methods. I must be missing something here; I can't imagine there is no fast transpose method available!
>>
>> I've tried two different ways of calculating the transpose myself:
>>
>> 1) Looping through the entries of a matrix M with .items() and then simply filling a new sparse matrix with the same values, but transposed.
>>
>>     def T_ver1(self, M):
>>         (rows, cols) = M.shape
>>         t = spmatrix.ll_mat(cols, rows, M.nnz)
>>         for (x, y), value in M.items():
>>             t[y, x] = value
>>         return t
>>
>> 2) I've noticed that using the .dot() method to multiply a matrix with the identity matrix is a faster way of getting the transposed matrix M.
>>
>>     def T_ver2(self, M):
>>         (rows, cols) = M.shape
>>         I = spmatrix.ll_mat(rows, rows, rows)
>>         for i in xrange(0, rows):
>>             I[i, i] = 1
>>         return spmatrix.dot(M, I)
>>
>> Method 2 is faster, which I suspect is because the .dot() function uses either the C or Fortran code directly. But why is there no direct M.transpose() or M.T method to give me the transposed matrix?
>
> Hi,
>
> Mine is:
>
>     def transpose(a):
>         b = ll_mat(a.shape[1], a.shape[0], a.nnz)
>         v, r, c = a.find()
>         b.put(v, c, r)
>         return b
>
> should be faster, though at the price of extra memory usage (the v, r, c arrays).

Hi Toine,

The find/put method is surely the fastest in general. The more fundamental question is: do you really need the explicit transpose? Often, algorithms that need the transpose operate directly on the original matrix. If what you need is matrix-vector products of the form A.T*x, you can always use the matvec_transp() method. If you're using the higher-level PysparseMatrix objects, you can do A*x and x*A. The latter actually computes A.T*x.

I hope this helps.

--
Dominique
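The find/put transpose discussed above boils down to swapping the row and column index arrays of the nonzero entries, an O(nnz) operation. A PySparse-independent sketch of the same idea, using a dict keyed by (row, col) pairs in place of a sparse matrix (the function name and representation are illustrative only):

```python
def transpose_coo(shape, entries):
    """Transpose a sparse matrix given as {(i, j): value} in O(nnz).

    Swapping each (i, j) key is the same trick as calling
    put(v, c, r) on the (v, r, c) arrays returned by find().
    """
    nrow, ncol = shape
    transposed = {(j, i): v for (i, j), v in entries.items()}
    return (ncol, nrow), transposed
```

The extra memory cost Łukasz mentions corresponds here to materializing the new dict; no per-entry searching is needed, which is why this beats element-wise assignment.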
From: <lukpank@o2...> - 2009-11-16 16:05:12

Toine Bogers <tb@...> writes:

> Hi,
>
> What is the most efficient way of transposing a matrix using PySparse? A search of the website doesn't yield any results and I don't see any transpose method in the list of methods. I must be missing something here; I can't imagine there is no fast transpose method available!
>
> I've tried two different ways of calculating the transpose myself:
>
> 1) Looping through the entries of a matrix M with .items() and then simply filling a new sparse matrix with the same values, but transposed.
>
>     def T_ver1(self, M):
>         (rows, cols) = M.shape
>         t = spmatrix.ll_mat(cols, rows, M.nnz)
>         for (x, y), value in M.items():
>             t[y, x] = value
>         return t
>
> 2) I've noticed that using the .dot() method to multiply a matrix with the identity matrix is a faster way of getting the transposed matrix M.
>
>     def T_ver2(self, M):
>         (rows, cols) = M.shape
>         I = spmatrix.ll_mat(rows, rows, rows)
>         for i in xrange(0, rows):
>             I[i, i] = 1
>         return spmatrix.dot(M, I)
>
> Method 2 is faster, which I suspect is because the .dot() function uses either the C or Fortran code directly. But why is there no direct M.transpose() or M.T method to give me the transposed matrix?

Hi,

Mine is:

    def transpose(a):
        b = ll_mat(a.shape[1], a.shape[0], a.nnz)
        v, r, c = a.find()
        b.put(v, c, r)
        return b

should be faster, though at the price of extra memory usage (the v, r, c arrays).

> Thanks in advance for the help!
>
> Kind regards,
> Toine Bogers
From: Toine Bogers <tb@db...> - 2009-11-16 14:33:54

Hi,

What is the most efficient way of transposing a matrix using PySparse? A search of the website doesn't yield any results and I don't see any transpose method in the list of methods. I must be missing something here; I can't imagine there is no fast transpose method available!

I've tried two different ways of calculating the transpose myself:

1) Looping through the entries of a matrix M with .items() and then simply filling a new sparse matrix with the same values, but transposed.

    def T_ver1(self, M):
        (rows, cols) = M.shape
        t = spmatrix.ll_mat(cols, rows, M.nnz)
        for (x, y), value in M.items():
            t[y, x] = value
        return t

2) I've noticed that using the .dot() method to multiply a matrix with the identity matrix is a faster way of getting the transposed matrix M.

    def T_ver2(self, M):
        (rows, cols) = M.shape
        I = spmatrix.ll_mat(rows, rows, rows)
        for i in xrange(0, rows):
            I[i, i] = 1
        return spmatrix.dot(M, I)

Method 2 is faster, which I suspect is because the .dot() function uses either the C or Fortran code directly. But why is there no direct M.transpose() or M.T method to give me the transposed matrix?

Thanks in advance for the help!

Kind regards,
Toine Bogers
From: Dominique Orban <dominique.orban@gm...> - 2009-11-05 15:12:49

On Thu, Nov 5, 2009 at 4:00 AM, zipeppe <zipeppe@...> wrote:

> Dear all,
>
> I don't know if this is the right place to ask, but unfortunately the official Python forum doesn't help.

This is the right place for Pysparse questions.

> I am quite new to Python and I am unable to correctly translate the following MATLAB script into Python code, especially because of the spdiags proprietary function:
>
>     function bd=BLC(b,A,B,s)
>     L=length(b);
>     Bs=B/s; As=A/s;
>     bd=ones(L,1)*median(b); bd0=b; nm=norm(bd-bd0); nm0=RealMax;
>     M0=ones(L,1)/As;
>     e=ones(L,1);
>     D0=spdiags([1*e -4*e 6*e -4*e 1*e],-2:2,L,L);
>     D0(1,1)=2; D0(L,L)=2;
>     D0(2,1)=-4; D0(1,2)=-4; D0(L,L-1)=-4; D0(L-1,L)=-4;
>     D0(2,2)=10; D0(L-1,L-1)=10;
>     I=0;
>     while nm>10 & I<30  %& nm<nm0;
>       I=I+1;
>       M=M0; D=D0; bd0=bd; nm0=nm;
>       for i=1:L
>         if bd(i)>b(i)
>           M(i)=M(i)+2*Bs*b(i)/As;
>           D(i,i)=D(i,i)+2*Bs/As;
>         end
>       end
>       bd=D\M;
>       nm=norm(bd0-bd);
>     end
>
> The algorithm receives b (Nx1 data vector), A (1x1), B (1x1) and s (1x1) as local parameters, and creates bd (Nx1 data vector) as output.

The Matlab command that you highlighted creates an L x L matrix with the vectors e, -4*e, 6*e, -4*e and e on and around the main diagonal. The position argument is -2:2, which means that the first vector (e) goes on the sub-subdiagonal, the second vector (-4e) goes on the subdiagonal, 6e goes on the main diagonal, -4e goes one above the main diagonal and e goes two above the main diagonal, like so.

Here is a code fragment that performs the same operation with Pysparse:

    In [1]: from pysparse.pysparseMatrix import PysparseSpDiagsMatrix as spdiags
    In [2]: import numpy as np
    In [3]: L = 5  # The length of b in your Matlab code
    In [4]: e = np.ones(L)
    In [5]: D0 = spdiags(L, (e, -4*e, 6*e, -4*e, e), (-2,-1,0,1,2))
    In [6]: print D0
     6.000000 -4.000000  1.000000    --         --
    -4.000000  6.000000 -4.000000  1.000000     --
     1.000000 -4.000000  6.000000 -4.000000  1.000000
        --     1.000000 -4.000000  6.000000 -4.000000
        --        --     1.000000 -4.000000  6.000000

The first argument (L) specifies the matrix size, the second is the set of vectors that should be laid out on the diagonals and the third gives the indices of the diagonals concerned. For a more Matlab-like notation, the last argument can also be written np.r_[-2:3] (see the Numpy function r_; note the underscore).

I hope this helps.

--
Dominique
From: zipeppe <zipeppe@gm...> - 2009-11-05 09:05:10

Dear all,

I don't know if this is the right place to ask, but unfortunately the official Python forum doesn't help.

I am quite new to Python and I am unable to correctly translate the following MATLAB script into Python code, especially because of the spdiags proprietary function:

    function bd=BLC(b,A,B,s)
    L=length(b);
    Bs=B/s; As=A/s;
    bd=ones(L,1)*median(b); bd0=b; nm=norm(bd-bd0); nm0=RealMax;
    M0=ones(L,1)/As;
    e=ones(L,1);
    D0=spdiags([1*e -4*e 6*e -4*e 1*e],-2:2,L,L);
    D0(1,1)=2; D0(L,L)=2;
    D0(2,1)=-4; D0(1,2)=-4; D0(L,L-1)=-4; D0(L-1,L)=-4;
    D0(2,2)=10; D0(L-1,L-1)=10;
    I=0;
    while nm>10 & I<30  %& nm<nm0;
      I=I+1;
      M=M0; D=D0; bd0=bd; nm0=nm;
      for i=1:L
        if bd(i)>b(i)
          M(i)=M(i)+2*Bs*b(i)/As;
          D(i,i)=D(i,i)+2*Bs/As;
        end
      end
      bd=D\M;
      nm=norm(bd0-bd);
    end

The algorithm receives b (Nx1 data vector), A (1x1), B (1x1) and s (1x1) as local parameters, and creates bd (Nx1 data vector) as output.

It looks like the PySparse package is the most promising, except that I am not able to use the PysparseSpDiagsMatrix function in any way... I only get errors like:

    TypeError: unsubscriptable object

or

    IndexError: Not as many row indices as values

Please help me to understand what is going on in MATLAB and how to translate this operation using the PySparse module. Let B be a matrix as follows:

    B =  1  6 11
         2  7 12
         3  8 13
         4  9 14
         5 10 15

The spdiags MATLAB function works as follows:

    A = full(spdiags(B, [-2 0 2], 5, 5))

       Matrix B                  Matrix A
     1  6 11                 6  0 13  0  0
     2  7 12                 0  7  0 14  0
     3  8 13  == spdiags =>  1  0  8  0 15
     4  9 14                 0  2  0  9  0
     5 10 15                 0  0  3  0 10

A(3,1), A(4,2), and A(5,3) are taken from the upper part of B(:,1).
A(1,3), A(2,4), and A(3,5) are taken from the lower part of B(:,3).

How can I implement the same operation in Python using PysparseSpDiagsMatrix?

Thanks for your help,
ZPP

--
"Coltivate Linux che Windows si pianta da solo!" (Grow Linux, since Windows crashes all by itself!)
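For readers without PySparse at hand, the Matlab spdiags layout illustrated by the B example above can be checked with a small dense sketch (the function below is illustrative, not part of any library, and uses zero-based indexing). Following Matlab's convention for square results, the entry placed at row i, column j of diagonal d = j - i is taken from position j of the corresponding column of B, which is exactly why A(3,1) comes from the upper part of B(:,1) and A(1,3) from the lower part of B(:,3):

```python
def spdiags_dense(B_cols, offsets, L):
    """Dense sketch of Matlab's spdiags for a square L x L result.

    B_cols[k] is laid out on diagonal offsets[k]; the entry at
    A[i][j] (with j - i == offsets[k]) is B_cols[k][j], i.e. the
    vector is indexed by the column, clipped at the matrix edges.
    """
    A = [[0] * L for _ in range(L)]
    for k, d in enumerate(offsets):
        for i in range(L):
            j = i + d
            if 0 <= j < L:
                A[i][j] = B_cols[k][j]
    return A
```

Running it on the columns of the B matrix above with offsets (-2, 0, 2) reproduces the Matrix A shown in the example.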