From: Joseph P. S. <js...@js...> - 2002-06-19 06:40:34
Hi. I was looking in the most recent packaged download (octave-forge-2002.05.09.tar.gz) at the main/optim code, and noticed that bfgs.m states that the derivative must return a row vector. This is pretty non-standard, and is likely to annoy or confuse folks.

Bertsekas's Nonlinear Programming (2nd ed., 1999, p. 664) clearly states that Df is a column vector. Nocedal and Wright's Numerical Optimization (1999, p. 582, eq. A.12) clearly states that Df is a column vector. Tim Kelley's Iterative Methods for Optimization (1999, p. 4) clearly states that vectors (including gradients) are assumed to be column vectors.

So please, let's use standard conventions, and have the gradient of a scalar function returned as a column vector (and the derivative of a vector-valued function returned as [ Df1 Df2 ... ], the transpose of the Jacobian).

Thanks.

/Jskud
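
For concreteness, here is a small Octave sketch of the convention being requested. The function names (grad_f, deriv_g) and the example objective are illustrative only, not the bfgs.m interface itself.

    ## Illustrative sketch of the column-vector convention; the function
    ## names here are hypothetical, not the octave-forge bfgs.m interface.
    1;  # mark this file as a script, not a function file

    ## Scalar objective f(x) = 0.5*x'*H*x + b'*x, with x a column vector.
    function df = grad_f (x)
      H = [2, 0; 0, 4];
      b = [1; -1];
      df = H*x + b;        # gradient returned as a column vector
    endfunction

    ## Vector-valued g(x) = [x(1)^2; x(1)*x(2)]: return [ Df1 Df2 ... ],
    ## i.e. the transpose of the Jacobian, so column j holds the gradient
    ## of the j-th component of g.
    function Dg = deriv_g (x)
      Dg = [2*x(1), x(2);
            0,      x(1)];
    endfunction

    x = [1; 2];
    grad_f (x)    # => column vector [3; 7]
    deriv_g (x)   # => 2x2 matrix whose columns are the gradients of g_1, g_2

With this convention, a Newton-type step is simply H \ (-grad_f(x)), with no transposes scattered through the caller's code.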