From: Jorge Barros <ficmatinfmag@us...> - 2006-06-10 21:54:28

Update of /cvsroot/octave/octave-lang/base/octave/help
In directory sc8-pr-cvs3.sourceforge.net:/tmp/cvs-serv27429

Modified Files: fileparts fullfile
Added Files: bicubic gradient inputname interp1 interp2 interpft mat2str ndgrid pcg pchip __pchip_deriv__ pcr plot3 __plt3__ polyarea quadl

Log Message: added and/or changed in octave.org cvs

--- NEW FILE: ndgrid ---
-*- texinfo -*-
@deftypefn {Function File} {[@var{y1}, @var{y2}, ..., @var{y}n] =} ndgrid (@var{x1}, @var{x2}, ..., @var{x}n)
@deftypefnx {Function File} {[@var{y1}, @var{y2}, ..., @var{y}n] =} ndgrid (@var{x})
Given n vectors @var{x1}, ..., @var{x}n, @code{ndgrid} returns n arrays of dimension n. The elements of the i-th output argument contain the elements of the vector @var{x}i repeated over all dimensions different from the i-th dimension. Calling ndgrid with only one input argument @var{x} is equivalent to calling ndgrid with all n input arguments equal to @var{x}:

[@var{y1}, @var{y2}, ..., @var{y}n] = ndgrid (@var{x}, ..., @var{x})
@seealso{meshgrid}
@end deftypefn

--- NEW FILE: __pchip_deriv__ ---
-*- texinfo -*-
@deftypefn {Loadable Function} {} __pchip_deriv__ (@var{x}, @var{y})
Wrapper for the SLATEC/PCHIP function DPCHIM to calculate the derivatives for piecewise polynomials. You should use the @code{pchip} function instead.
@end deftypefn

Index: fileparts
===================================================================
RCS file: /cvsroot/octave/octave-lang/base/octave/help/fileparts,v
retrieving revision 1.1.1.1
retrieving revision 1.2
diff -u -d -r1.1.1.1 -r1.2
--- fileparts 16 Jul 2005 12:53:59 -0000 1.1.1.1
+++ fileparts 10 Jun 2006 21:54:21 -0000 1.2
@@ -2,4 +2,5 @@
 @deftypefn {Function File} {[@var{dir}, @var{name}, @var{ext}, @var{ver}] =} fileparts (@var(unknown))
 Return the directory, name, extension, and version components of @var(unknown).
+@seealso{fullfile}
 @end deftypefn

--- NEW FILE: interp1 ---
-*- texinfo -*-
@deftypefn {Function File} {@var{yi} =} interp1 (@var{x}, @var{y}, @var{xi})
@deftypefnx {Function File} {@var{yi} =} interp1 (@dots{}, @var{method})
@deftypefnx {Function File} {@var{yi} =} interp1 (@dots{}, @var{extrap})
@deftypefnx {Function File} {@var{pp} =} interp1 (@dots{}, 'pp')
One-dimensional interpolation. Interpolate @var{y}, defined at the points @var{x}, at the points @var{xi}. The sample points @var{x} must be strictly monotonic. If @var{y} is an array, treat the columns of @var{y} separately.

Method is one of:
@table @asis
@item 'nearest'
Return the nearest neighbour.
@item 'linear'
Linear interpolation from nearest neighbours.
@item 'pchip'
Piecewise cubic Hermite interpolating polynomial.
@item 'cubic'
Cubic interpolation from four nearest neighbours.
@item 'spline'
Cubic spline interpolation, with smooth first and second derivatives throughout the curve.
@end table

Prefixing any of the above methods with '*' forces @code{interp1} to assume that @var{x} is uniformly spaced, so that only @code{@var{x} (1)} and @code{@var{x} (2)} are referenced. This is usually faster, and is never slower. The default method is 'linear'.

If @var{extrap} is the string 'extrap', then extrapolate values beyond the endpoints. If @var{extrap} is a number, replace values beyond the endpoints with that number. If @var{extrap} is missing, assume NaN.

If the string argument 'pp' is specified, then @var{xi} should not be supplied and @code{interp1} returns the piecewise polynomial that can later be used with @code{ppval} to evaluate the interpolation. There is an equivalence, such that @code{ppval (interp1 (@var{x}, @var{y}, @var{method}, 'pp'), @var{xi}) == interp1 (@var{x}, @var{y}, @var{xi}, @var{method}, 'extrap')}.

An example of the use of @code{interp1} is
@example
@group
xf = [0:0.05:10]; yf = sin(2*pi*xf/5);
xp = [0:10];      yp = sin(2*pi*xp/5);
lin  = interp1(xp,yp,xf);
spl  = interp1(xp,yp,xf,'spline');
cub  = interp1(xp,yp,xf,'cubic');
near = interp1(xp,yp,xf,'nearest');
plot(xf,yf,';original;',xf,lin,';linear;',xf,spl,';spline;',...
     xf,cub,';cubic;',xf,near,';nearest;',xp,yp,'*;;');
@end group
@end example
@seealso{interpft}
@end deftypefn

--- NEW FILE: pcr ---
-*- texinfo -*-
@deftypefn {Function File} {@var{x} =} pcr (@var{a}, @var{b}, @var{tol}, @var{maxit}, @var{m}, @var{x0}, @dots{})
@deftypefnx {Function File} {[@var{x}, @var{flag}, @var{relres}, @var{iter}, @var{resvec}] =} pcr (@dots{})
Solves the linear system of equations @code{@var{a} * @var{x} = @var{b}} by means of the Preconditioned Conjugate Residuals iterative method. The input arguments are
@itemize
@item
@var{a} can be either a square (preferably sparse) matrix, or a function handle, inline function, or string containing the name of a function which computes @code{@var{a} * @var{x}}. In principle @var{a} should be symmetric and non-singular; if @code{pcr} finds @var{a} to be numerically singular, you will get a warning message and the @var{flag} output parameter will be set.
@item
@var{b} is the right-hand-side vector.
@item
@var{tol} is the required relative tolerance for the residual error, @code{@var{b} - @var{a} * @var{x}}. The iteration stops if @code{norm (@var{b} - @var{a} * @var{x}) <= @var{tol} * norm (@var{b} - @var{a} * @var{x0})}. If @var{tol} is empty or is omitted, the function sets @code{@var{tol} = 1e-6} by default.
@item
@var{maxit} is the maximum allowable number of iterations; if @code{[]} is supplied for @var{maxit}, or @code{pcr} has fewer arguments, a default value of 20 is used.
@item
@var{m} is the (left) preconditioning matrix, so that the iteration is (theoretically) equivalent to solving @code{@var{P} * @var{x} = @var{m} \ @var{b}} by @code{pcr}, with @code{@var{P} = @var{m} \ @var{a}}. Note that a proper choice of the preconditioner may dramatically improve the overall performance of the method. Instead of the matrix @var{m}, the user may pass a function which returns the result of applying the inverse of @var{m} to a vector (usually this is the preferred way of using the preconditioner). If @code{[]} is supplied for @var{m}, or @var{m} is omitted, no preconditioning is applied.
@item
@var{x0} is the initial guess. If @var{x0} is empty or omitted, the function sets @var{x0} to a zero vector by default.
@end itemize
The arguments which follow @var{x0} are treated as parameters, and passed in a proper way to any of the functions (@var{a} or @var{m}) which are passed to @code{pcr}. See the examples below for further details.

The output arguments are
@itemize
@item
@var{x} is the computed approximation to the solution of @code{@var{a} * @var{x} = @var{b}}.
@item
@var{flag} reports on the convergence. @code{@var{flag} = 0} means the solution converged and the tolerance criterion given by @var{tol} is satisfied. @code{@var{flag} = 1} means that the @var{maxit} limit for the iteration count was reached. @code{@var{flag} = 3} reports a @code{pcr} breakdown; see [1] for details.
@item
@var{relres} is the ratio of the final residual to its initial value, measured in the Euclidean norm.
@item
@var{iter} is the actual number of iterations performed.
@item
@var{resvec} describes the convergence history of the method, so that @code{@var{resvec} (i)} contains the Euclidean norm of the residual after the (@var{i}-1)-th iteration, @code{@var{i} = 1, 2, @dots{}, @var{iter}+1}.
@end itemize
Let us consider a trivial problem with a diagonal matrix (we exploit the sparsity of A)
@example
@group
N = 10;
A = diag([1:N]); A = sparse(A);
b = rand(N,1);
@end group
@end example

@sc{Example 1:} Simplest use of @code{pcr}
@example
x = pcr(A, b)
@end example

@sc{Example 2:} @code{pcr} with a function which computes @code{@var{a} * @var{x}}.
@example
@group
function y = applyA(x)
  y = [1:10]'.*x;
endfunction
x = pcr('applyA', b)
@end group
@end example

@sc{Example 3:} Preconditioned iteration, with full diagnostics. The preconditioner (quite strange, because even the original matrix @var{a} is trivial) is defined as a function
@example
@group
function y = applyM(x)
  K = floor(length(x)/2);
  y = x;
  y(1:K) = x(1:K)./[1:K]';
endfunction
[x, flag, relres, iter, resvec] = pcr(A, b, [], [], 'applyM')
semilogy([1:iter+1], resvec);
@end group
@end example

@sc{Example 4:} Finally, a preconditioner which depends on a parameter @var{k}.
@example
@group
function y = applyM(x, varargin)
  K = varargin@{1@};
  y = x;
  y(1:K) = x(1:K)./[1:K]';
endfunction
[x, flag, relres, iter, resvec] = pcr(A, b, [], [], 'applyM', [], 3)
@end group
@end example

@sc{References}
[1] W. Hackbusch, "Iterative Solution of Large Sparse Systems of Equations", section 9.5.4; Springer, 1994
@seealso{sparse, pcg}
@end deftypefn

--- NEW FILE: interp2 ---
-*- texinfo -*-
@deftypefn {Function File} {@var{zi} =} interp2 (@var{x}, @var{y}, @var{z}, @var{xi}, @var{yi})
@deftypefnx {Function File} {@var{zi} =} interp2 (@var{z}, @var{xi}, @var{yi})
@deftypefnx {Function File} {@var{zi} =} interp2 (@var{z}, @var{n})
@deftypefnx {Function File} {@var{zi} =} interp2 (@dots{}, @var{method})
@deftypefnx {Function File} {@var{zi} =} interp2 (@dots{}, @var{method}, @var{extrapval})
Two-dimensional interpolation. @var{x}, @var{y} and @var{z} describe a surface function. If @var{x} and @var{y} are vectors, their lengths must correspond to the size of @var{z}. @var{x} and @var{y} must be monotonic. If they are matrices, they must have the @code{meshgrid} format.
@table @code
@item interp2 (@var{x}, @var{y}, @var{z}, @var{xi}, @var{yi}, @dots{})
Returns a matrix corresponding to the points described by the matrices @var{xi}, @var{yi}. If the last argument is a string, the interpolation method can be specified. The method can be 'linear', 'nearest' or 'cubic'. If it is omitted, 'linear' interpolation is assumed.
@item interp2 (@var{z}, @var{xi}, @var{yi})
Assumes @code{@var{x} = 1:rows (@var{z})} and @code{@var{y} = 1:columns (@var{z})}.
@item interp2 (@var{z}, @var{n})
Interleaves the matrix @var{z} @var{n} times. If @var{n} is omitted, a value of @code{@var{n} = 1} is assumed.
@end table
The variable @var{method} defines the method to use for the interpolation. It can take one of the following values:
@table @asis
@item 'nearest'
Return the nearest neighbour.
@item 'linear'
Linear interpolation from nearest neighbours.
@item 'pchip'
Piecewise cubic Hermite interpolating polynomial.
@item 'cubic'
Cubic interpolation from four nearest neighbours.
@item 'spline'
Cubic spline interpolation, with smooth first and second derivatives throughout the curve (not implemented yet).
@end table
If a scalar value @var{extrapval} is supplied as the final argument, then values outside the mesh are set to this value. Note that in this case @var{method} must be supplied as well. If @var{extrapval} is not supplied, NaN is assumed.
@seealso{interp1}
@end deftypefn

--- NEW FILE: inputname ---
-*- texinfo -*-
@deftypefn {Function File} {} inputname (@var{n})
Return the text defining the @var{n}-th input to the function.
@end deftypefn

--- NEW FILE: polyarea ---
-*- texinfo -*-
@deftypefn {Function File} {} polyarea (@var{x}, @var{y})
@deftypefnx {Function File} {} polyarea (@var{x}, @var{y}, @var{dim})
Determines the area of a polygon by the triangle method. The variables @var{x} and @var{y} define the vertex pairs, and must therefore have the same shape. They may be either vectors or arrays. If they are arrays, then the columns of @var{x} and @var{y} are treated separately and an area is returned for each. If the optional @var{dim} argument is given, then @code{polyarea} works along this dimension of the arrays @var{x} and @var{y}.
@end deftypefn

--- NEW FILE: mat2str ---
-*- texinfo -*-
@deftypefn {Function File} {@var{s} =} mat2str (@var{x}, @var{n})
@deftypefnx {Function File} {@var{s} =} mat2str (@dots{}, 'class')
Format real/complex numerical matrices as strings. This function returns values that are suitable for use with the @code{eval} function.

The precision of the values is given by @var{n}. If @var{n} is a scalar, then both real and imaginary parts of the matrix are printed to the same precision. Otherwise @code{@var{n} (1)} defines the precision of the real part and @code{@var{n} (2)} defines the precision of the imaginary part. The default for @var{n} is 17.

If the argument 'class' is given, then the class of @var{x} is included in the string in such a way that @code{eval} will result in the construction of a matrix of the same class.

@example
@group
mat2str( [ -1/3 + i/7; 1/3 - i/7 ], [4 2] )
@result{} '[-0.3333+0.14i;0.3333-0.14i]'
mat2str( [ -1/3 +i/7; 1/3 -i/7 ], [4 2] )
@result{} '[-0.3333+0i,0+0.14i;0.3333+0i,-0-0.14i]'
mat2str( int16([1 -1]), 'class')
@result{} 'int16([1,-1])'
@end group
@end example
@seealso{sprintf, int2str}
@end deftypefn

--- NEW FILE: interpft ---
-*- texinfo -*-
@deftypefn {Function File} {} interpft (@var{x}, @var{n})
@deftypefnx {Function File} {} interpft (@var{x}, @var{n}, @var{dim})
Fourier interpolation. If @var{x} is a vector, then @var{x} is resampled with @var{n} points. The data in @var{x} is assumed to be equispaced. If @var{x} is an array, then operate along each column of the array separately. If @var{dim} is specified, then interpolate along the dimension @var{dim}.

@code{interpft} assumes that the interpolated function is periodic, and so assumptions are made about the end points of the interpolation.
@seealso{interp1}
@end deftypefn

--- NEW FILE: pchip ---
-*- texinfo -*-
@deftypefn {Function File} {@var{pp} =} pchip (@var{x}, @var{y})
@deftypefnx {Function File} {@var{yi} =} pchip (@var{x}, @var{y}, @var{xi})
Piecewise cubic Hermite interpolating polynomial. Called with two arguments, the piecewise polynomial @var{pp} is returned, which may later be used with @code{ppval} to evaluate the polynomial at specific points.

The variable @var{x} must be a strictly monotonic vector (either increasing or decreasing), while @var{y} may be either a vector or an array. In the case where @var{y} is a vector, it must have a length of @var{n}. If @var{y} is an array, then the size of @var{y} must have the form
@iftex
@tex
$$[s_1, s_2, \cdots, s_k, n]$$
@end tex
@end iftex
@ifinfo
@code{[@var{s1}, @var{s2}, @dots{}, @var{sk}, @var{n}]}
@end ifinfo
The array is then reshaped internally to a matrix where the leading dimension is given by
@iftex
@tex
$$s_1 s_2 \cdots s_k$$
@end tex
@end iftex
@ifinfo
@code{@var{s1} * @var{s2} * @dots{} * @var{sk}}
@end ifinfo
and each row of this matrix is then treated separately. Note that this is exactly the opposite of the treatment in @code{interp1}, and is done for compatibility.

Called with a third input argument, @code{pchip} evaluates the piecewise polynomial at the points @var{xi}. There is an equivalence between @code{ppval (pchip (@var{x}, @var{y}), @var{xi})} and @code{pchip (@var{x}, @var{y}, @var{xi})}.
@seealso{spline, ppval, mkpp, unmkpp}
@end deftypefn

Index: fullfile
===================================================================
RCS file: /cvsroot/octave/octave-lang/base/octave/help/fullfile,v
retrieving revision 1.1.1.1
retrieving revision 1.2
diff -u -d -r1.1.1.1 -r1.2
--- fullfile 16 Jul 2005 12:54:00 -0000 1.1.1.1
+++ fullfile 10 Jun 2006 21:54:21 -0000 1.2
@@ -1,4 +1,5 @@
 -*- texinfo -*-
 @deftypefn {Function File} {@var(unknown) =} fullfile (@var{dir1}, @var{dir2}, @dots{}, @var{file})
 Return a complete filename constructed from the given components.
+@seealso{fileparts}
 @end deftypefn

--- NEW FILE: bicubic ---
-*- texinfo -*-
@deftypefn {Function File} {@var{zi} =} bicubic (@var{x}, @var{y}, @var{z}, @var{xi}, @var{yi})
Return a matrix @var{zi} corresponding to the bicubic interpolation, at @var{xi} and @var{yi}, of the data supplied as @var{x}, @var{y} and @var{z}. For further information please see bicubic.pdf, available at @url{http://wiki.woodpecker.org.cn/moin/Octave/Bicubic}
@seealso{interp2}
@end deftypefn

--- NEW FILE: plot3 ---
-*- texinfo -*-
@deftypefn {Function File} {} plot3 (@var{args})
This function produces three-dimensional plots. Many different combinations of arguments are possible. The simplest form is
@example
plot3 (@var{x}, @var{y}, @var{z})
@end example
@noindent
where the arguments are taken to be the vertices of the points to be plotted in three dimensions. If all arguments are vectors of the same length, then a single continuous line is drawn. If all arguments are matrices, then each column of the matrices is treated as a separate line. No attempt is made to transpose the arguments to make the number of rows match.

To save a plot in one of several image formats, such as PostScript or PNG, use the @code{print} command.

An optional format argument can be given as
@example
plot3 (@var{x}, @var{y}, @var{z}, @var{fmt})
@end example
If the @var{fmt} argument is supplied, it is interpreted as follows. If @var{fmt} is missing, the default gnuplot line style is assumed.
@table @samp
@item -
Set lines plot style (default).
@item .
Set dots plot style.
@item @@
Set points plot style.
@item -@@
Set linespoints plot style.
@item ^
Set impulses plot style.
@item L
Set steps plot style.
@item @var{n}
Interpreted as the plot color if @var{n} is an integer in the range 1 to 6.
@item @var{nm}
If @var{nm} is a two-digit integer and @var{m} is an integer in the range 1 to 6, @var{m} is interpreted as the point style. This is only valid in combination with the @code{@@} or @code{-@@} specifiers.
@item @var{c}
If @var{c} is one of @code{"k"}, @code{"r"}, @code{"g"}, @code{"b"}, @code{"m"}, @code{"c"}, or @code{"w"}, it is interpreted as the plot color (black, red, green, blue, magenta, cyan, or white).
@item ";title;"
Here @code{"title"} is the label for the key.
@item +
@itemx *
@itemx o
@itemx x
Used in combination with the points or linespoints styles, set the point style.
@end table
The color line styles have the following meanings on terminals that support color.
@example
Number  Gnuplot colors  (lines)points style
  1       red           *
  2       green         +
  3       blue          o
  4       magenta       x
  5       cyan          house
  6       brown         there exists
@end example
The @var{fmt} argument can also be used to assign key titles. To do so, include the desired title between semicolons after the formatting sequence described above, e.g. "+3;Key Title;". Note that the last semicolon is required and will generate an error if it is left out.

Arguments can also be given in groups of three as
@example
plot3 (@var{x1}, @var{y1}, @var{z1}, @var{x2}, @var{y2}, @var{z2}, @dots{})
@end example
@noindent
where each set of three arguments is treated as a separate line or set of lines in three dimensions.

An example of the use of plot3 is
@example
@group
z = [0:0.05:5];
plot3(cos(2*pi*z), sin(2*pi*z), z, ";helix;");
@end group
@end example
@seealso{plot, semilogx, semilogy, loglog, polar, mesh, contour, __pltopt__, bar, stairs, errorbar, replot, xlabel, ylabel, title, print}
@end deftypefn

--- NEW FILE: __plt3__ ---
-*- texinfo -*-
@deftypefn {Function File} {} __plt3__ (@var{x}, @var{y}, @var{z}, @var{fmt})
@end deftypefn

--- NEW FILE: pcg ---
-*- texinfo -*-
@deftypefn {Function File} {@var{x} =} pcg (@var{a}, @var{b}, @var{tol}, @var{maxit}, @var{m}, @var{x0}, @dots{})
@deftypefnx {Function File} {[@var{x}, @var{flag}, @var{relres}, @var{iter}, @var{resvec}, @var{eigest}] =} pcg (@dots{})
Solves the linear system of equations @code{@var{a} * @var{x} = @var{b}} by means of the Preconditioned Conjugate Gradient iterative method. The input arguments are
@itemize
@item
@var{a} can be either a square (preferably sparse) matrix, or a function handle, inline function, or string containing the name of a function which computes @code{@var{a} * @var{x}}. In principle @var{a} should be symmetric and positive definite; if @code{pcg} finds @var{a} not to be positive definite, you will get a warning message and the @var{flag} output parameter will be set.
@item
@var{b} is the right-hand-side vector.
@item
@var{tol} is the required relative tolerance for the residual error, @code{@var{b} - @var{a} * @var{x}}. The iteration stops if @code{norm (@var{b} - @var{a} * @var{x}) <= @var{tol} * norm (@var{b} - @var{a} * @var{x0})}. If @var{tol} is empty or is omitted, the function sets @code{@var{tol} = 1e-6} by default.
@item
@var{maxit} is the maximum allowable number of iterations; if @code{[]} is supplied for @var{maxit}, or @code{pcg} has fewer arguments, a default value of 20 is used.
@item
@var{m} is the (left) preconditioning matrix, so that the iteration is (theoretically) equivalent to solving @code{@var{P} * @var{x} = @var{m} \ @var{b}} by @code{pcg}, with @code{@var{P} = @var{m} \ @var{a}}. Note that a proper choice of the preconditioner may dramatically improve the overall performance of the method. Instead of the matrix @var{m}, the user may pass a function which returns the result of applying the inverse of @var{m} to a vector (usually this is the preferred way of using the preconditioner). If @code{[]} is supplied for @var{m}, or @var{m} is omitted, no preconditioning is applied.
@item
@var{x0} is the initial guess. If @var{x0} is empty or omitted, the function sets @var{x0} to a zero vector by default.
@end itemize
The arguments which follow @var{x0} are treated as parameters, and passed in a proper way to any of the functions (@var{a} or @var{m}) which are passed to @code{pcg}. See the examples below for further details.

The output arguments are
@itemize
@item
@var{x} is the computed approximation to the solution of @code{@var{a} * @var{x} = @var{b}}.
@item
@var{flag} reports on the convergence. @code{@var{flag} = 0} means the solution converged and the tolerance criterion given by @var{tol} is satisfied. @code{@var{flag} = 1} means that the @var{maxit} limit for the iteration count was reached. @code{@var{flag} = 3} reports that the (preconditioned) matrix was found not to be positive definite.
@item
@var{relres} is the ratio of the final residual to its initial value, measured in the Euclidean norm.
@item
@var{iter} is the actual number of iterations performed.
@item
@var{resvec} describes the convergence history of the method. @code{@var{resvec} (i,1)} is the Euclidean norm of the residual, and @code{@var{resvec} (i,2)} is the preconditioned residual norm, after the (@var{i}-1)-th iteration, @code{@var{i} = 1, 2, @dots{}, @var{iter}+1}. The preconditioned residual norm is defined as @code{norm (@var{r}) ^ 2 = @var{r}' * (@var{m} \ @var{r})} where @code{@var{r} = @var{b} - @var{a} * @var{x}}; see also the description of @var{m}. If @var{eigest} is not required, only @code{@var{resvec} (:,1)} is returned.
@item
@var{eigest} returns the estimates for the smallest @code{@var{eigest} (1)} and largest @code{@var{eigest} (2)} eigenvalues of the preconditioned matrix @code{@var{P} = @var{m} \ @var{a}}. In particular, if no preconditioning is used, the estimates for the extreme eigenvalues of @var{a} are returned. @code{@var{eigest} (1)} is an overestimate and @code{@var{eigest} (2)} is an underestimate, so that @code{@var{eigest} (2) / @var{eigest} (1)} is a lower bound for @code{cond (@var{P}, 2)}, which nevertheless in the limit should theoretically be equal to the actual value of the condition number. The method which computes @var{eigest} works only for symmetric positive definite @var{a} and @var{m}, and the user is responsible for verifying this assumption.
@end itemize
Let us consider a trivial problem with a diagonal matrix (we exploit the sparsity of A)
@example
@group
N = 10;
A = diag([1:N]); A = sparse(A);
b = rand(N,1);
@end group
@end example

@sc{Example 1:} Simplest use of @code{pcg}
@example
x = pcg(A, b)
@end example

@sc{Example 2:} @code{pcg} with a function which computes @code{@var{a} * @var{x}}
@example
@group
function y = applyA(x)
  y = [1:10]'.*x;
endfunction
x = pcg('applyA', b)
@end group
@end example

@sc{Example 3:} Preconditioned iteration, with full diagnostics. The preconditioner (quite strange, because even the original matrix @var{a} is trivial) is defined as a function
@example
@group
function y = applyM(x)
  K = floor(length(x)/2);
  y = x;
  y(1:K) = x(1:K)./[1:K]';
endfunction
[x, flag, relres, iter, resvec, eigest] = pcg(A, b, [], [], 'applyM')
semilogy([1:iter+1], resvec);
@end group
@end example

@sc{Example 4:} Finally, a preconditioner which depends on a parameter @var{k}.
@example
@group
function y = applyM(x, varargin)
  K = varargin@{1@};
  y = x;
  y(1:K) = x(1:K)./[1:K]';
endfunction
[x, flag, relres, iter, resvec, eigest] = ...
  pcg(A, b, [], [], 'applyM', [], 3)
@end group
@end example

@sc{References}
[1] C. T. Kelley, 'Iterative Methods for Linear and Nonlinear Equations', SIAM, 1995 (the base PCG algorithm)
[2] Y. Saad, 'Iterative Methods for Sparse Linear Systems', PWS, 1996 (condition number estimate from PCG). A revised version of this book is available online at http://www-users.cs.umn.edu/~saad/books.html
@seealso{sparse, pcr}
@end deftypefn

--- NEW FILE: quadl ---
-*- texinfo -*-
@deftypefn {Function File} {@var{q} =} quadl (@var{f}, @var{a}, @var{b})
@deftypefnx {Function File} {@var{q} =} quadl (@var{f}, @var{a}, @var{b}, @var{tol})
@deftypefnx {Function File} {@var{q} =} quadl (@var{f}, @var{a}, @var{b}, @var{tol}, @var{trace})
@deftypefnx {Function File} {@var{q} =} quadl (@var{f}, @var{a}, @var{b}, @var{tol}, @var{trace}, @var{p1}, @var{p2}, @dots{})
Numerically evaluate an integral using the adaptive Lobatto rule.

@code{quadl (@var{f}, @var{a}, @var{b})} approximates the integral of @code{@var{f}(@var{x})} to machine precision. @var{f} is either a function handle, inline function, or string containing the name of the function to evaluate. The function @var{f} must return a vector of output values if given a vector of input values.

If defined, @var{tol} defines the relative tolerance to which to integrate @code{@var{f}(@var{x})}. If @var{trace} is defined, @code{quadl} displays the left end point of the current interval, the interval length, and the partial integral. Additional arguments @var{p1}, etc., are passed directly to @var{f}. To use default values for @var{tol} and @var{trace}, one may pass empty matrices.

Reference: W. Gander and W. Gautschi, 'Adaptive Quadrature - Revisited', BIT Vol. 40, No. 1, March 2000, pp. 84-101. @url{http://www.inf.ethz.ch/personal/gander/}
@end deftypefn

--- NEW FILE: gradient ---
-*- texinfo -*-
@deftypefn {Function File} {@var{x} =} gradient (@var{M})
@deftypefnx {Function File} {[@var{x}, @var{y}, @dots{}] =} gradient (@var{M})
@deftypefnx {Function File} {[@dots{}] =} gradient (@var{M}, @var{s})
@deftypefnx {Function File} {[@dots{}] =} gradient (@var{M}, @var{dx}, @var{dy}, @dots{})
Calculates the gradient. @code{@var{x} = gradient (@var{M})} calculates the one-dimensional gradient if @var{M} is a vector. If @var{M} is a matrix, the gradient is calculated for each row.

@code{[@var{x}, @var{y}] = gradient (@var{M})} calculates the one-dimensional gradient for each direction if @var{M} is a matrix. Additional return arguments can be used for multi-dimensional matrices.

Spacing values between two points can be provided by the @var{dx}, @var{dy} or @var{h} parameters. If @var{h} is supplied, it is assumed to be the spacing in all directions. Otherwise, separate values of the spacing can be supplied by the @var{dx}, etc. variables. A scalar value specifies an equidistant spacing, while a vector value can be used to specify a variable spacing. The lengths must match their respective dimension of @var{M}.

At boundary points a linear extrapolation is applied. Interior points are calculated with the first approximation of the numerical gradient
@example
y'(i) = 1/(x(i+1)-x(i-1)) * (y(i+1)-y(i-1)).
@end example
@end deftypefn
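[Editor's note, not part of the commit:] As a quick orientation to a few of the functions documented above, a minimal Octave sketch of typical calls; the variable names are illustrative only:

```octave
% Refine a coarsely sampled sine wave with interp1.
xp = 0:10;
yp = sin (2*pi*xp/5);
xf = 0:0.05:10;
yl = interp1 (xp, yp, xf);            % 'linear' is the default method
ys = interp1 (xp, yp, xf, 'spline');  % cubic spline alternative

% ndgrid repeats each input vector over the other dimensions.
[xx, yy] = ndgrid (1:2, 1:3);         % xx and yy are both 2x3

% polyarea: area of the unit square, given its vertex pairs.
a = polyarea ([0 1 1 0], [0 0 1 1]);

% quadl: adaptive Lobatto quadrature of sin over [0, pi].
q = quadl (@sin, 0, pi);
```

Here `a` should be 1 and `q` should be 2 to machine precision, which makes these handy smoke tests for the new files.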
From: David Bateman <adb014@us...> - 2006-06-10 14:06:45

Update of /cvsroot/octave/octave-forge/extra/NaN
In directory sc8-pr-cvs3.sourceforge.net:/tmp/cvs-serv27632

Added Files: .cvsignore
Log Message:

--- NEW FILE: .cvsignore ---
*.o *.oct
From: David Bateman <adb014@us...> - 2006-06-10 14:04:56

Update of /cvsroot/octave/octave-forge/doc
In directory sc8-pr-cvs3.sourceforge.net:/tmp/cvs-serv26820

Added Files: .cvsignore
Log Message:

--- NEW FILE: .cvsignore ---
new_developer.html
From: David Bateman <adb014@us...> - 2006-06-10 13:55:31

Update of /cvsroot/octave/octave-forge/main/combinatorics
In directory sc8-pr-cvs3.sourceforge.net:/tmp/cvs-serv23062

Added Files: .cvsignore
Log Message:

--- NEW FILE: .cvsignore ---
*.o *.d *.oct
From: David Bateman <adb014@us...> - 2006-06-10 13:52:52

Update of /cvsroot/octave/octave-forge/main/linear-algebra
In directory sc8-pr-cvs3.sourceforge.net:/tmp/cvs-serv21835

Added Files: .cvsignore
Log Message:

--- NEW FILE: .cvsignore ---
*.o *.d *.oct
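[Editor's note, not part of the commits:] The four commits above all add `.cvsignore` files. Such a file lists shell-style glob patterns, one per line, that CVS should neither report as unknown (`?`) during `cvs update` nor offer for `cvs add` in that directory. The patterns used here keep compiled build artifacts out of version control, for example:

```
*.o
*.d
*.oct
```

Object files, dependency files, and compiled Octave `.oct` modules are all regenerated by the build, so ignoring them keeps `cvs update` output readable.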