Space Station Attitude Control, vectorized?
Hello,
I was reading the 2015 optimal control paper https://arc.aiaa.org/doi/10.2514/6.2015-1085. I noticed that, unlike some of the examples made available in the package, example 3 on space station attitude control incorporates an energy minimization of the control in its objective. Section 8.3 of the ADiGator user manual states that "Furthermore, one may not sum over any vectorized dimension, as this results in derivatives of the output being dependent upon all points in time."
My interpretation of the discretized version of the cost functional in example 3 is that it involves a summation over the time index. Was this example not using vectorized differentiation? And if it was, is it possible to minimize the energy of, say, a dynamical variable in a control problem?
Thank you.
Hi,
Sorry, I wrote this a while back but it appears it never sent!
The link errors out for me, but I think you are referring to this paper: http://www.anilvrao.com/Publications/ConferencePublications/AIAA-GNC-ADiGator-GPOPS-II.pdf
It is true that the vectorized mode doesn't allow you to sum over the vectorized dimension, but in the collocation framework we use ADiGator only to differentiate the portions of the problem that are unique to each individual problem (i.e., separate from the discretization). That is, ADiGator is used to compute the derivatives only of the functions identified in Tables 1 and 2. From there, the derivatives of the NLP are built by carrying out the chain rule through the discretization scheme.
As a simple example, if you want to compute the derivative of y = sum(G(X)), where G(X): R^N -> R^N, you can use ADiGator to compute the derivative dG/dX, then compute dy/dX by summing over the first dimension of dG/dX (which, in this case, would simply be the diagonal elements).
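To make that concrete, here is a minimal plain-MATLAB sketch of the chain-rule step (no ADiGator calls; the element-wise G and its point-wise derivative are made up and written analytically to stand in for what a vectorized derivative file would return):

    % y = sum(G(X)) with G acting point-wise on the vectorized variable X
    N = 8;
    X = linspace(0.1, 1, N).';         % the "vectorized" (time) variable

    GX    = X.^2 + sin(X);             % G(X): R^N -> R^N, point-wise
    y     = sum(GX);                   % the scalar objective
    dG_dX = 2*X + cos(X);              % nonzero derivatives, one per time point

    % Because G acts point-wise, the full Jacobian dG/dX is diagonal:
    JG = spdiags(dG_dX, 0, N, N);

    % The sum over the vectorized dimension is done outside the differentiated
    % code; dy/dX is obtained by summing dG/dX over its first dimension:
    dy_dX = full(sum(JG, 1));          % 1-by-N gradient of y

    % Check against the analytic gradient:
    assert(norm(dy_dX - (2*X + cos(X)).') < 1e-12)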
The example problem in examples/optimization/vectorized/minimumclimb/main_1stderivs_vect.m uses the approach described in the paper to compute derivatives of the dynamic constraints (cost is just final time, however).
In the end, it adds some extra work to use the vectorized mode for these types of problems, but the improvement is worth it when you have a generalized collocation scheme and are switching out different dynamics.
Matt W.
Hey Matt,
Thanks for the reply.
Additionally, I was wondering about circumstances in which one wants to take derivatives of the dynamics with respect to not only the variables/controls, but also certain (time-invariant) parameters.
Namely, we have F(X(t), U(t), theta), and our final VOD is something like [X(t), U(t), theta] at some fixed times t, plus any additional parameters that may exist in the constraints/objective function.
To relate it to one of the examples, suppose 'g' in the brachistochrone problem was actually an unknown parameter one was trying to optimize.
Is there any opportunity to use the vectorized mode for this problem, or is it only for circumstances in which the arguments of a function (with derivatives wrt the VOD) are vectorized?
If so, is there an obvious way to alter the example to include this?
As a disclaimer, my interests are in simultaneous estimation of state trajectories and unknown model parameters.
-Matt
I never quite figured out an elegant way to mix vectorized and non-vectorized derivatives. The quick-and-dirty way is to write your code as if the time-invariant parameter were time-varying, set all of its elements equal to the single value, and then just pick off a single column from the Jacobian.
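Something like this, in plain MATLAB (no ADiGator calls; the dynamics F(x,u,theta) = sin(theta)*x + u are made up, and the point-wise derivative is written analytically to stand in for the vectorized AD output):

    N     = 6;
    x     = rand(N,1);                 % state at the N time points
    u     = rand(N,1);                 % control at the N time points
    theta = 2.5;                       % the single time-invariant parameter

    Theta = theta*ones(N,1);           % pretend theta is time-varying, all equal
    F     = sin(Theta).*x + u;         % vectorized dynamics, one row per time point

    % Point-wise derivative wrt the replicated parameter (what the vectorized
    % derivative file would return): dF_i/dTheta_i, one entry per time point.
    dF_dTheta = cos(Theta).*x;

    % F at time i only sees Theta(i), so this single "column" of point-wise
    % derivatives is exactly dF/dtheta for the true scalar parameter:
    dF_dtheta = dF_dTheta;             % N-by-1

    % Finite-difference check on the scalar theta:
    h  = 1e-6;
    fd = ((sin(theta+h) - sin(theta))*x)/h;
    assert(norm(dF_dtheta - fd) < 1e-4)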
In terms of the optimal control problem, the quickest/dirtiest way is to treat this as a state with theta_dot = 0 and theta(t0) = theta(tFinal). Assuming you are using a direct collocation method, you can further remove the extraneous NLP variables/constraints by exploiting the knowledge that it is a single parameter and picking off its derivatives from a single column of the Jacobian.
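For what it's worth, a sketch of that "extra state" framing (the function name and the dynamics xdot = sin(theta)*x + u are again just made up for illustration):

    function Xdot = augmentedDynamics(X, U)
    % X is N-by-2: column 1 is x at the N time points, column 2 is theta
    % U is N-by-1: control at the N time points
    x        = X(:,1);
    theta    = X(:,2);
    xdot     = sin(theta).*x + U;      % original dynamics F(x,u,theta)
    thetadot = zeros(size(theta));     % theta_dot = 0 keeps theta constant
    Xdot     = [xdot, thetadot];
    end

This keeps every input to the dynamics vectorized (one row per time point), so the vectorized derivative machinery applies unchanged; the constancy of theta is enforced by the theta_dot = 0 dynamics, and the extraneous NLP variables/constraints can then be eliminated as described above.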