Hi,
I have another question regarding the constraint handling of GRAMPC. One of my states is constrained very close to a point where the dynamics become singular: in the ODE I take log(x), and x is constrained to x >= 0.05. So far I run into issues (getting NaNs) when I get close to the inequality constraint. I tried to adapt the penalty parameter cmin, but ran into the same issue. Are there other parameters worth tuning to avoid this problem?
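To be concrete, the lower bound is implemented as an inequality constraint in the problem description, roughly like this (just a sketch assuming the usual probfct interface from the GRAMPC examples; the remaining problem functions are omitted):

#include "grampc.h"

/* Inequality constraints h(x,u,p,t) <= 0 (here Nh = 1):
 * the lower bound x[0] >= 0.05 written as 0.05 - x[0] <= 0. */
void hfct(typeRNum *out, ctypeRNum t, ctypeRNum *x, ctypeRNum *u,
          ctypeRNum *p, typeUSERPARAM *userparam)
{
    out[0] = 0.05 - x[0];
}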
Best regards
Daniel
Hi Daniel,
you can have a look at “PenaltyIncreaseFactor” and “PenaltyIncreaseThreshold”. These two parameters control how fast the penalty parameter is increased; see Section 4.4.2 in the manual. It might also be helpful to use the scaling option (difficult to say without further information about the system). Lastly, changing the number and ratio of inner gradient iterations and outer multiplier iterations could be useful, in conjunction with the parameter AugLagUpdateGradientRelTol.
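For example, these options can be set after initialization via the grampc_setopt_* functions, roughly as follows (a sketch with placeholder values, not recommendations; double-check the exact option names for the iteration numbers and scaling, e.g. MaxGradIter, MaxMultIter and ScaleProblem, against the options table of your manual version):

#include "grampc.h"

/* Sketch: tuning the augmented Lagrangian update, assuming grampc has
 * already been initialized (e.g. via grampc_init). */
void tune_options(typeGRAMPC *grampc)
{
    grampc_setopt_real(grampc, "PenaltyIncreaseFactor", 1.1);      /* slower penalty growth */
    grampc_setopt_real(grampc, "PenaltyIncreaseThreshold", 1.0);
    grampc_setopt_real(grampc, "AugLagUpdateGradientRelTol", 1e-2);
    grampc_setopt_int(grampc, "MaxGradIter", 3);                   /* inner gradient iterations */
    grampc_setopt_int(grampc, "MaxMultIter", 2);                   /* outer multiplier iterations */
    grampc_setopt_string(grampc, "ScaleProblem", "on");            /* plus xScale/xOffset, uScale/uOffset */
}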
Regards,
Felix
Hello Daniel,
some additions to what Felix said: no matter how well you tune the penalty parameters, the states in intermediate iterations can violate the constraints. Since GRAMPC does not check for NaNs, you are responsible for preventing them, i.e. you have to make sure that x is greater than zero before you evaluate log(x). In your case, one option is to check in the problem function whether x is smaller than some epsilon and, if so, set x to epsilon.
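As a rough sketch of what I mean (assuming the usual ffct in probfct.c; EPS and the right-hand side are placeholders for your system, and the same guard belongs in the corresponding Jacobian functions such as dfdx_vec):

#include <math.h>
#include "grampc.h"

#define EPS 1e-6  /* placeholder threshold, well below the constraint bound of 0.05 */

/* System dynamics dx/dt = f(x,u,p,t): clamp x[0] before the logarithm so that
 * intermediate iterates violating the constraint cannot produce NaN. */
void ffct(typeRNum *out, ctypeRNum t, ctypeRNum *x, ctypeRNum *u,
          ctypeRNum *p, typeUSERPARAM *userparam)
{
    typeRNum x0 = (x[0] < EPS) ? EPS : x[0];
    out[0] = log(x0) + u[0];  /* placeholder right-hand side using the clamped state */
}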
Regards,
Andreas
Hey Andreas and Felix,
I tuned the parameters and it improved the results in terms of convergence speed and constraint compliance, but the final fix to prevent NaNs was thresholding x in the problem formulation. It seems to work pretty well now! Thank you :)
Best regards,
Daniel