[Vxl-users] fdgradf problem From: David Doria - 2009-02-16 19:08:18

I'm testing my cost function by varying only one parameter for the time being. The gradf function I implemented simply calls fdgradf():

```
void gradf(vnl_vector<double> const &x, vnl_vector<double> &dx)
{
  fdgradf(x, dx); // finite-difference gradient
  cout << "Grad: " << dx[0] << endl;
}
```

I print the value returned at each function evaluation. I'm assuming the first one gets the value of the function at the initial position, and the next two are the values differenced to get the gradient:

```
F: -103879
F: -103880
F: -103876
Grad: 172625
```

Since the step_size is 1e-5, this gradient makes sense: the difference in function values is about 4, and 4 / (2 * 1e-5) is about 200,000. Because the gradient is so big, the optimizer takes a step of size 1.0, but the range of values I want to search is [-0.5, 0.5]. I know there is lbfgsb for constrained optimization, but I don't really want to use a constrained method; I'd rather just take a smaller step. Should I just clip the gradient to a maximum value in the gradf function, or is there a better way to do this?

Thanks,

David