Re: [Bayes++] Square-root UKF
From: Michael S. <ma...@mi...> - 2007-01-11 10:39:54
Hello Matt,

On Wednesday 10 January 2007 00:14, Matt Hazard wrote:

> Is there any support for Square-root (cholesky decomposition, etc)
> formulations of the Unscented Kalman filter in Bayes++?

Sorry, there are no square-root Unscented implementations in Bayes++. The only unscented filter implemented is the one described in the original Unscented paper. Simon (Julier) has

> I'm working on a GPS/INS system purposed for real-time control of a
> helicopter UAV; any computational cost reduction on the estimation end
> could directly lead to a performance gain in the controller.

OK, I have a lot of experience in the GPS/INS fusion field, so maybe I can help here. I would be interested in what sensors you are using.

> Also, what is the best way to organize the state and measurement vectors? I
> hope to use the UKF to estimate the bias and scale-factor errors present in
> the signals from 3-axis rate gyros, 3-axis accelerometers, 3-axis
> magnetometers, GPS, and barometric altitude/airspeed sensors. Of course,
> only the sensor measurements are directly observable (the bias/sf errors
> can't be measured) - any suggestions on the best way to represent this in
> the observation implementation?

It is probably a bad idea to try to estimate bias and scale-factor errors in the inertial sensors on-line. Most affordable strapdown sensors tend to have a variety of horrible non-linear and dynamic effects (hysteresis etc.) which prevent the estimation of their values while in motion. You can do a lot with off-line calibration, and you will probably need to do this anyway to get some idea of the sensor noise. Allan variance analysis of the sensor noise can be quite helpful to get an idea of how well the sensors will perform without correction.
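The Allan variance analysis mentioned above can be sketched in a few lines. This is an illustrative stand-alone routine, not part of Bayes++; it computes the non-overlapping Allan variance of a logged rate signal for a given cluster size (averaging time tau = m * sample interval):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Non-overlapping Allan variance of a rate signal for clusters of m
// samples. With sample interval tau0, this is AVAR at tau = m * tau0.
double allan_variance(const std::vector<double>& rate, std::size_t m)
{
    const std::size_t K = rate.size() / m;   // number of complete clusters
    if (K < 2) return 0.0;

    // Average the signal over each cluster.
    std::vector<double> avg(K, 0.0);
    for (std::size_t k = 0; k < K; ++k) {
        for (std::size_t i = 0; i < m; ++i)
            avg[k] += rate[k * m + i];
        avg[k] /= static_cast<double>(m);
    }

    // AVAR = mean squared difference of successive cluster averages / 2.
    double sum = 0.0;
    for (std::size_t k = 0; k + 1 < K; ++k) {
        const double d = avg[k + 1] - avg[k];
        sum += d * d;
    }
    return sum / (2.0 * static_cast<double>(K - 1));
}
```

Sweeping m over a range and plotting sqrt(AVAR) against tau on a log-log scale gives the usual Allan deviation curve from which noise terms (angle random walk, bias instability) can be read off.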
I would start with 9 variables in your system state vector: position, velocity and attitude in 3 dimensions. In this case observations of the inertial rates and accelerations (after calibration) are control inputs to the filter prediction, not observations. For efficiency (high-rate inertial data, 100Hz for example) an indirect filter architecture is normally used. In this case the navigation state is integrated directly from the inertial observations and the filter only estimates error corrections. A further advantage of an indirect filter architecture is that you can represent attitude in a redundant form, either as a direction cosine matrix or using quaternions. This avoids the nasty problems associated with Euler angles.

With regard to GPS observations, you will need to choose an appropriate coordinate frame. If you are going to be moving in a limited area (with differential GPS, for example) then you can represent position using simple Local Tangent Plane coordinates such as North, East, Down. Your GPS receiver can either report them directly, or you can use a Universal Transverse Mercator projection to convert from lat/long. In this form the GPS observation model is simple. Your GPS may also give observations of velocity, which can be used in the filter if the GPS unit has not already used them to estimate position. A good reference to all this is "Global Positioning Systems, Inertial Navigation, and Integration", ISBN 0-471-35032-X.

> Right now I'm just working on getting the correct predict/observe cycle.
> I'm using an Addative_predict_model and an
> Uncorrelated_addative_observe_model. Are there underlying assumptions for
> these models that make them less suitable than the base model
> (Unscented_scheme::observe, for instance?)

I am a bit confused by your "base model". 'Unscented_scheme::observe' is a member function of Unscented_scheme. Specifying the model as 'Uncorrelated_addative_observe_model' is fine for the Unscented_scheme, however.
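The "navigation state integrated directly from the inertial observations" part of an indirect architecture is just standard strapdown integration. As a minimal, self-contained sketch (plain C++, not Bayes++ API; quaternion stored as [w, x, y, z]), one attitude step from body gyro rates looks like this:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Unit quaternion [w, x, y, z].
using Quat = std::array<double, 4>;

// Hamilton product a * b.
Quat qmul(const Quat& a, const Quat& b)
{
    return {
        a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
        a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
        a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
        a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
    };
}

// One strapdown attitude step: rotate q by body rates (wx, wy, wz)
// [rad/s] over dt seconds, using the exact small-rotation quaternion
// (constant rate assumed over the step).
Quat integrate_gyro(const Quat& q, double wx, double wy, double wz, double dt)
{
    const double norm = std::sqrt(wx*wx + wy*wy + wz*wz);  // |w|
    const double angle = norm * dt;                        // rotation angle
    if (angle < 1e-12) return q;                           // negligible step
    const double s = std::sin(angle / 2.0) / norm;         // axis scale
    const Quat dq = { std::cos(angle / 2.0), wx * s, wy * s, wz * s };
    return qmul(q, dq);
}
```

An indirect (error-state) filter would run this high-rate integration outside the filter, and the filter's state would hold small corrections to q (and to position/velocity) estimated from the GPS and magnetometer observations.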
Michael

--
___________________________________
Michael Stevens Systems Engineering
34128 Kassel, Germany
Phone/Fax: +49 561 5218038
Navigation Systems, Estimation and Bayesian Filtering
http://bayesclasses.sf.net
___________________________________