bayesclasses-general Mailing List for Bayes+Estimate (Page 3)
Brought to you by:
mistevens
From: Michael S. <ma...@mi...> - 2007-01-26 16:13:32
|
On Thursday 25 January 2007 18:38, Nithya Nirmal Vijayakumar wrote:
> I am trying to install Bayes++ in RHEL 4. I am using gcc 3.4.6.
> I first downloaded bjam and then installed boost_1_33_1 using the command:
> bjam --prefix={install_dir} -sTOOLS=gcc install

This is not needed. Bayes++ does not use any compiled libraries from Boost. Everything that is used from Boost is header-only template libraries.

> Then, when I try to compile Bayes++, I get the following error:
> /home/nvijayak/Bayes++> bjam --v2 -sBOOST_ROOT="../boost_1_33_1"
> /home/nvijayak/boost_1_33_1_2/tools/build/v2/build/feature.jam:431: in feature.validate-value-string from module feature
> error: "gcc" is not a known value of feature <toolset>
> error: legal values:

OK. You are now using Boost build version 2 to compile Bayes++, as I recommend. For Boost build version 2 to use the gcc toolset it needs a little configuration. Copy boost_1_33_1_2/tools/build/v2/user-config.jam to your home directory, then edit it so the '#' comment before 'using gcc' is removed. That should get you going.

--
___________________________________
Michael Stevens
Systems Engineering
34128 Kassel, Germany
Phone/Fax: +49 561 5218038
Navigation Systems, Estimation and Bayesian Filtering
http://bayesclasses.sf.net
___________________________________
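[Editor's note: the edit Michael describes can be sketched as below. The path comes from this thread and the comment layout is an assumption about the stock Boost 1.33 file, so check it against your own tree.]

```
# ~/user-config.jam -- copied from boost_1_33_1_2/tools/build/v2/user-config.jam
#
# The stock file ships with the toolset declaration commented out, e.g.:
#   # using gcc ;
# Remove the leading '#' so Boost build v2 knows about the gcc toolset:
using gcc ;
```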
From: Nithya N. V. <nvi...@cs...> - 2007-01-25 17:39:06
|
Hello all,

I am trying to install Bayes++ in RHEL 4. I am using gcc 3.4.6. I first downloaded bjam and then installed boost_1_33_1 using the command:

bjam --prefix={install_dir} -sTOOLS=gcc install

Then, when I try to compile Bayes++, I get the following error:

/home/nvijayak/Bayes++> bjam --v2 -sBOOST_ROOT="../boost_1_33_1"
/home/nvijayak/boost_1_33_1_2/tools/build/v2/build/feature.jam:431: in feature.validate-value-string from module feature
error: "gcc" is not a known value of feature <toolset>
error: legal values:
/home/nvijayak/boost_1_33_1_2/tools/build/v2/build/property.jam:250: in validate1 from module property
/home/nvijayak/boost_1_33_1_2/tools/build/v2/build/property.jam:273: in property.validate from module property
/home/nvijayak/boost_1_33_1_2/tools/build/v2/tools/builtin.jam:176: in variant from module builtin
project-root.jam:7: in modules.load from module Jamfile</home/nvijayak/Bayes++>
/home/nvijayak/boost_1_33_1_2/tools/build/v2/build/project.jam:306: in load-jamfile from module project
/home/nvijayak/boost_1_33_1_2/tools/build/v2/build/project.jam:68: in load from module project
/home/nvijayak/boost_1_33_1_2/tools/build/v2/build/project.jam:164: in project.find from module project
/home/nvijayak/boost_1_33_1_2/tools/build/v2/build-system.jam:136: in load from module build-system
/home/nvijayak/boost_1_33_1_2/tools/build/v2/kernel/modules.jam:259: in import from module modules
/home/nvijayak/boost_1_33_1_2/tools/build/v2/kernel/bootstrap.jam:153: in boost-build from module
/home/nvijayak/boost_1_33_1_2/boost-build.jam:12: in module scope from module

I am not sure why the gcc toolset was not installed, or what else is the problem. Has anyone faced this problem before? Any suggestions?

thanks,
Nithya
From: Michael S. <ma...@mi...> - 2007-01-25 14:27:32
|
On Saturday 13 January 2007 13:36, 維均 wrote:
> If I'd like to design a class inheriting Linrz_uncorrelated_observe_model to describe a pinhole model, how would I assign the values of the Hx Jacobian matrix? I thought the Jacobian depends on where you linearize the observation function, which means it is related to the values of the state vector when you linearize it. But in the PV sample code I just see things like:
> Hx(0,0) = 1;
> Hx(0,1) = 0.;
> and these are not time-varying. There must be something wrong in my understanding. Would anyone help?

You can modify the observation model's values at any time. As you suggest, it is very common that the Jacobian is not constant. With a linearized observation model it usually depends on the state vector.

The PV example only requires a simple constant and linear observation model. It uses Linrz_uncorrelated_observe_model to show the general form, even though it is not necessary in this case.

If your state-dependent observation model inherits from Linrz_uncorrelated_observe_model then it is easy to represent the state dependence of the model:

class Obs : public Linrz_uncorrelated_observe_model
{
public:
    void state(const Vec& s)
    {
        s_ = s;
        // linearised Jacobian
        Hx(0,0) = /* some function of the elements of s */;
        // etc.
    }
private:
    Vec s_;   // Current state about which Hx was linearised
};

The member function 'state' is simply called with the state about which you wish to linearise, before the model is used as the model parameter to 'observe'.

All the best,
Michael
___________________________________
Michael Stevens
Systems Engineering
34128 Kassel, Germany
Phone/Fax: +49 561 5218038
Navigation Systems, Estimation and Bayesian Filtering
http://bayesclasses.sf.net
___________________________________
From: Matt H. <mwh...@gm...> - 2007-01-16 14:24:29
|
You're correct! The Position-Velocity example has truly linear prediction/observation functions, so the Jacobian is constant. You will find what you're looking for in the QuadCalib example, which uses a linearized observation. In QuadCalib.cpp, a member function called 'state' is called to fill the appropriate fields in the Jacobian matrix using the current state. Reading on, you'll see this function gets called before the observation takes place.

I assume you are doing work with images... what states have you chosen to represent the pinhole camera model?

Hope this helps,
Matt Hazard
mwh...@gm...

On 1/13/07, 維均 <inn...@gm...> wrote:
> Hi,
>
> If I'd like to design a class inheriting Linrz_uncorrelated_observe_model to describe a pinhole model, how would I assign the values of the Hx Jacobian matrix? I thought the Jacobian depends on where you linearize the observation function, which means it is related to the values of the state vector when you linearize it. But in the PV sample code I just see things like:
> Hx(0,0) = 1;
> Hx(0,1) = 0.;
> and these are not time-varying. There must be something wrong in my understanding. Would anyone help?
>
> Best,
> Wei-chun
From: <inn...@gm...> - 2007-01-13 12:36:42
|
Hi,

If I'd like to design a class inheriting Linrz_uncorrelated_observe_model to describe a pinhole model, how would I assign the values of the Hx Jacobian matrix? I thought the Jacobian depends on where you linearize the observation function, which means it is related to the values of the state vector when you linearize it. But in the PV sample code I just see things like:

Hx(0,0) = 1;
Hx(0,1) = 0.;

and these are not time-varying. There must be something wrong in my understanding. Would anyone help?

Best,
Wei-chun
From: Michael S. <ma...@mi...> - 2007-01-11 21:08:18
|
On Thursday 11 January 2007 18:01, you wrote:
> On 1/11/07, Michael Stevens <ma...@mi...> wrote:
> > On Wednesday 10 January 2007 00:14, Matt Hazard wrote:
> > > Is there any support for Square-root (Cholesky decomposition, etc.) formulations of the Unscented Kalman filter in Bayes++?
> >
> > Sorry, there are no square-root Unscented implementations in Bayes++. The only unscented filter implemented is that described in the original Unscented paper. Simon (Julier) has
> >
> > > I'm working on a GPS/INS system intended for real-time control of a helicopter UAV; any computational cost reduction on the estimation end could directly lead to a performance gain in the controller.
> >
> > OK, I have a lot of experience in the GPS/INS fusion field so maybe I can help here. I would be interested in what sensors you are using.
>
> Our sensor board prototype uses Analog Devices' gyros (ADXRS300) and accelerometers (ADXL330, I think). The magnetometer is a PNI MicroMag3. We also have a complete (working) Crista IMU, which uses basically the same gyros/accelerometers, but lacks the magnetometers. At any rate we should see comparable results between the two sensor packages.

OK. Low-cost MEMS type sensors. With this kind of low-accuracy sensor you will need good differential GPS position to maintain the platform attitude. The Crista unit looks very sensible. It avoids any attitude processing, which usually gets in the way. The temperature compensation and 1pps synchronisation are worth having.

> > > I'm using an Addative_predict_model and an Uncorrelated_addative_observe_model. Are there underlying assumptions for these models that make them less suitable than the base model (Unscented_scheme::observe, for instance?)
> >
> > I am a bit confused by your "base model". 'Unscented_scheme::observe' is a member function of Unscented_scheme. Specifying the model as 'Uncorrelated_addative_observe_model' is fine for the Unscented_scheme however.
>
> My mistake. I copied the wrong thing. What happens if I use the uncorrelated noise model and the noise turns out to be correlated?

Not a problem for the Unscented_scheme. For most square-root filters it is, however, a problem! The factorisations used to put the system into square-root form require that the noise can be decorrelated, and there is only a general solution for this for linear observation models. Luckily most observations generally have only weak cross-correlations!

Michael
--
___________________________________
Michael Stevens
Systems Engineering
34128 Kassel, Germany
Phone/Fax: +49 561 5218038
Navigation Systems, Estimation and Bayesian Filtering
http://bayesclasses.sf.net
___________________________________
From: Michael S. <ma...@mi...> - 2007-01-11 20:49:51
|
On Thursday 11 January 2007 18:20, Matt Hazard wrote:
> One more idea. Would a full dynamic model of the airframe be necessary in an indirect formulation? It seems like you'd only be correcting the inertial navigation solution with the aiding sensors - only requiring an error model.

Nothing is made 'necessary' by the indirect formulation. In its simplest form it is mathematically (though not numerically) the same as a simple direct filter. It does, however, allow you more flexibility in the state representation and how it is updated.

> A tightly-coupled filter with a nonlinear dynamics model of the airframe could use the control surface deflections as inputs in the predict_model, and the inertial and aiding sensors as measurements in the observe_model, correct?

To use gyro rates and acceleration as observations you need to represent them in the state vector. This requires a 15-variable state composed of position, velocity, acceleration, attitude, and attitude rate! Possible on modern computers.

> It's a more complicated system, but wouldn't the specification of the system dynamics model improve the results?

Hard question to answer on paper! Possibly. It would require a lot of work to model the sensors and dynamics such that the filter would work with the observations. Possibly you would not gain anything of much value for navigation, but you would gain data that could be used by the controller.

All the best,
Michael
--
___________________________________
Michael Stevens
Systems Engineering
34128 Kassel, Germany
Phone/Fax: +49 561 5218038
Navigation Systems, Estimation and Bayesian Filtering
http://bayesclasses.sf.net
___________________________________
From: Matt H. <mwh...@gm...> - 2007-01-11 17:20:33
|
One more idea.

Would a full dynamic model of the airframe be necessary in an indirect formulation? It seems like you'd only be correcting the inertial navigation solution with the aiding sensors - only requiring an error model.

A tightly-coupled filter with a nonlinear dynamics model of the airframe could use the control surface deflections as inputs in the predict_model, and the inertial and aiding sensors as measurements in the observe_model, correct? It's a more complicated system, but wouldn't the specification of the system dynamics model improve the results?

Matt Hazard
mwh...@nc...

On 1/11/07, Matt Hazard <mwh...@gm...> wrote:
> On 1/11/07, Michael Stevens <ma...@mi...> wrote:
> > On Wednesday 10 January 2007 00:14, Matt Hazard wrote:
> > > Is there any support for Square-root (Cholesky decomposition, etc.) formulations of the Unscented Kalman filter in Bayes++?
> >
> > Sorry, there are no square-root Unscented implementations in Bayes++. The only unscented filter implemented is that described in the original Unscented paper. Simon (Julier) has
> >
> > > I'm working on a GPS/INS system intended for real-time control of a helicopter UAV; any computational cost reduction on the estimation end could directly lead to a performance gain in the controller.
> >
> > OK, I have a lot of experience in the GPS/INS fusion field so maybe I can help here. I would be interested in what sensors you are using.
>
> Our sensor board prototype uses Analog Devices' gyros (ADXRS300) and accelerometers (ADXL330, I think). The magnetometer is a PNI MicroMag3. We also have a complete (working) Crista IMU, which uses basically the same gyros/accelerometers, but lacks the magnetometers. At any rate we should see comparable results between the two sensor packages.
>
> > > Also, what is the best way to organize the state and measurement vectors? I hope to use the UKF to estimate the bias and scale-factor errors present in the signals from 3-axis rate gyros, 3-axis accelerometers, 3-axis magnetometers, GPS, and barometric altitude/airspeed sensors. Of course, only the sensor measurements are directly observable (the bias/sf errors can't be measured) - any suggestions on the best way to represent this in the observation implementation?
> >
> > It is probably a bad idea to try and estimate bias and scale-factor error in the inertial sensors on-line. Most affordable strapdown sensors tend to have a variety of horrible non-linear and dynamic effects (hysteresis etc.) which prevent the estimation of their values while in motion. You can do a lot with off-line calibration and you will probably need to do this anyway to get some idea of the sensor noise. Allan variance analysis of the sensor noise can be quite helpful to get an idea of how well they will perform without correction.
>
> OK.
>
> > I would start with 9 variables in your system's state vector: position, velocity and attitude in 3 dimensions. In this case observations of the inertial rates and accelerations (after calibration) are control inputs into the filter prediction and not observations.
>
> I see. In that case, the other sensors (GPS, altimeter, airspeed indicator, pitch/roll corrected magnetic heading) would be the observations, correct?
>
> > For efficiency (high-rate inertial data, 100Hz for example) an indirect filter architecture is normally used. In this case the navigation state is integrated directly from the inertial observation and the filter only estimates error corrections. A further advantage of an indirect filter architecture is that you can represent attitude in a redundant form, either as a direction cosine matrix or using quaternions. This avoids the nasty problems associated with Euler angles.
> >
> > With regard to GPS observations you will need to choose an appropriate coordinate frame. If you are going to be moving in a limited area (with differential GPS for example) then you can represent position using simple Local Tangent Plane coordinates such as North, East, Down. Your GPS receiver can either report them directly or you can use a universal transverse Mercator projection to convert from lat,long. In this form the GPS observation model is simple. Your GPS may also give observations of velocity which can be used in the filter if the GPS unit has not already used them to estimate position.
>
> UTM coordinates sound like a logical choice. I'm already using the proj cartographic library for similar conversions.
>
> > A good reference to all this is "Global Positioning Systems, Inertial Navigation, and Integration" ISBN 0-471-35032-X
>
> Wow! I just picked that book up from the library *before* I got your email. Thanks for the tip.
>
> > > Right now I'm just working on getting the correct predict/observe cycle. I'm using an Addative_predict_model and an Uncorrelated_addative_observe_model. Are there underlying assumptions for these models that make them less suitable than the base model (Unscented_scheme::observe, for instance?)
> >
> > I am a bit confused by your "base model". 'Unscented_scheme::observe' is a member function of Unscented_scheme. Specifying the model as 'Uncorrelated_addative_observe_model' is fine for the Unscented_scheme however.
>
> My mistake. I copied the wrong thing. What happens if I use the uncorrelated noise model and the noise turns out to be correlated?
>
> Thanks for all the clarifications,
> Matt Hazard
> mwh...@nc...
From: Michael S. <ma...@mi...> - 2007-01-11 10:39:54
|
Hello Matt,

On Wednesday 10 January 2007 00:14, Matt Hazard wrote:
> Is there any support for Square-root (Cholesky decomposition, etc.) formulations of the Unscented Kalman filter in Bayes++?

Sorry, there are no square-root Unscented implementations in Bayes++. The only unscented filter implemented is that described in the original Unscented paper. Simon (Julier) has

> I'm working on a GPS/INS system intended for real-time control of a helicopter UAV; any computational cost reduction on the estimation end could directly lead to a performance gain in the controller.

OK, I have a lot of experience in the GPS/INS fusion field so maybe I can help here. I would be interested in what sensors you are using.

> Also, what is the best way to organize the state and measurement vectors? I hope to use the UKF to estimate the bias and scale-factor errors present in the signals from 3-axis rate gyros, 3-axis accelerometers, 3-axis magnetometers, GPS, and barometric altitude/airspeed sensors. Of course, only the sensor measurements are directly observable (the bias/sf errors can't be measured) - any suggestions on the best way to represent this in the observation implementation?

It is probably a bad idea to try and estimate bias and scale-factor error in the inertial sensors on-line. Most affordable strapdown sensors tend to have a variety of horrible non-linear and dynamic effects (hysteresis etc.) which prevent the estimation of their values while in motion. You can do a lot with off-line calibration and you will probably need to do this anyway to get some idea of the sensor noise. Allan variance analysis of the sensor noise can be quite helpful to get an idea of how well they will perform without correction.

I would start with 9 variables in your system's state vector: position, velocity and attitude in 3 dimensions. In this case observations of the inertial rates and accelerations (after calibration) are control inputs into the filter prediction and not observations.

For efficiency (high-rate inertial data, 100Hz for example) an indirect filter architecture is normally used. In this case the navigation state is integrated directly from the inertial observation and the filter only estimates error corrections. A further advantage of an indirect filter architecture is that you can represent attitude in a redundant form, either as a direction cosine matrix or using quaternions. This avoids the nasty problems associated with Euler angles.

With regard to GPS observations you will need to choose an appropriate coordinate frame. If you are going to be moving in a limited area (with differential GPS for example) then you can represent position using simple Local Tangent Plane coordinates such as North, East, Down. Your GPS receiver can either report them directly or you can use a universal transverse Mercator projection to convert from lat,long. In this form the GPS observation model is simple. Your GPS may also give observations of velocity which can be used in the filter if the GPS unit has not already used them to estimate position.

A good reference to all this is "Global Positioning Systems, Inertial Navigation, and Integration" ISBN 0-471-35032-X

> Right now I'm just working on getting the correct predict/observe cycle. I'm using an Addative_predict_model and an Uncorrelated_addative_observe_model. Are there underlying assumptions for these models that make them less suitable than the base model (Unscented_scheme::observe, for instance?)

I am a bit confused by your "base model". 'Unscented_scheme::observe' is a member function of Unscented_scheme. Specifying the model as 'Uncorrelated_addative_observe_model' is fine for the Unscented_scheme however.

Michael
--
___________________________________
Michael Stevens
Systems Engineering
34128 Kassel, Germany
Phone/Fax: +49 561 5218038
Navigation Systems, Estimation and Bayesian Filtering
http://bayesclasses.sf.net
___________________________________
From: Matt H. <mwh...@gm...> - 2007-01-09 23:14:42
|
Is there any support for square-root (Cholesky decomposition, etc.) formulations of the Unscented Kalman filter in Bayes++? I'm working on a GPS/INS system intended for real-time control of a helicopter UAV; any computational cost reduction on the estimation end could directly lead to a performance gain in the controller.

Also, what is the best way to organize the state and measurement vectors? I hope to use the UKF to estimate the bias and scale-factor errors present in the signals from 3-axis rate gyros, 3-axis accelerometers, 3-axis magnetometers, GPS, and barometric altitude/airspeed sensors. Of course, only the sensor measurements are directly observable (the bias/sf errors can't be measured) - any suggestions on the best way to represent this in the observation implementation?

Right now I'm just working on getting the correct predict/observe cycle. I'm using an Addative_predict_model and an Uncorrelated_addative_observe_model. Are there underlying assumptions for these models that make them less suitable than the base model (Unscented_scheme::observe, for instance?)

Thanks for your advice,
Matt Hazard
NCSU Aerial Robotics Club
http://art1.mae.ncsu.edu
North Carolina State University
From: Nicola B. <nb...@es...> - 2006-12-19 20:22:16
|
Hello,

I have recently installed Linux on a new laptop with an Intel Core 2 Duo processor. After compiling the latest version of Bayes++ (2003.8-6), I got a segmentation fault when I tried to run the sample PV_SIR. With a little backtracing, I discovered the cause was the gcc option '-fstrict-aliasing', included in the '-O2' optimization. Removing that option and manually inserting all the others included in '-O2' and '-O3', the program ran successfully. It's not clear to me whether the problem is in gcc, the kernel or Bayes++.

The gcc I use is the following:
gcc (GCC) 4.1.1 20061011 (Red Hat 4.1.1-30)
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

This is the kernel version:
2.6.18-1.2849.fc6 #1 SMP Fri Nov 10 12:45:28 EST 2006 i686 i686 i386 GNU/Linux

Cheers,
Nicola
--
------------------------------------------
Nicola Bellotto
Department of Computer Science
University of Essex
Wivenhoe Park
Colchester CO4 3SQ
United Kingdom
Room: 1N1.2.8
Tel. +44 (0)1206 874094
URL: http://privatewww.essex.ac.uk/~nbello
------------------------------------------
From: Michael S. <ma...@mi...> - 2006-12-17 18:48:34
|
On Tuesday, 12. December 2006 22:41, Michael Simon wrote:
> Michael Stevens wrote:
> > On Monday, 11. December 2006 17:24, Michael Simon wrote:
> >> I am attempting to use the Covariance_filter as an EKF for a non-linear model. However, I do not understand which type of model class to use, and how to specify the data for the models, even after looking at the Bayesian Filtering overview. I'm pretty sure I need to use a linrz_predict_model (or something like that) but beyond that...
> >
> > 'Linrz_predict_model' is definitely what you want.
> >
> >> The main problem is that the elements of the f function I want to use for the model are exponential. It's non-linear, with noise, but no control (it's essentially 'the object I'm trying to predict is walking around randomly, but within the laws of physics'). How do you specify that?
> >
> > No-control is fine. Control inputs just make the implementation more complex! I don't understand how your model can be 'exponential' when the physics are of something moving around at random. Or do you mean that you have exponentially correlated noise in your model?
>
> Actually, after sketching the algorithms out by hand, I don't understand it either. ;) I do want a model that would be sufficient to track something that might move at random, but the formula I had down clearly wouldn't give it.
>
> >> I've been looking at the Welch and Bishop introduction and trying to compare back to Bayes++, but to no avail. I'm fairly sure I could use the libraries if my model was linear, but I don't quite see how to put the non-linearity in there.
> >
> > I'm not sure where you are stumbling. Would it be possible to post the equations that model your system?
>
> As I mentioned above, I was using the wrong formulas. At the time I was reading a report about EKF that purported to be using unconstrained Brownian Motion for modeling and trying to reproduce their results, but the formulas they gave don't produce anything approaching that, at least, not how I read them. (One example: x_k = exp(-1/4(x_(k-1) + 1.5 (deltax_(k-1)) from http://page.mi.fu-berlin.de/~zaldivar/files/tr-b-05-12.pdf )

The paper looks totally bogus to me! The first 13 pages are just a standard derivation of the Kalman filter that can be found in many books. The dynamic system model at the end of page 16 which you quote from is just nonsense. I'm not sure where they got it from.

I think Welch and Bishop should have the equations you need. Otherwise take a look in:

Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software
Yaakov Bar-Shalom, X. Rong Li, Thiagalingam Kirubarajan
2001 John Wiley & Sons, Inc.
ISBNs: 0-471-41655-X (Hardback), 0-471-22127-9 (Electronic)

Start with a "Continuous White Noise Acceleration Model". My own PV example in Bayes++ is an extension of this. It implements the IOU (see reference in code) in 1D. This is a little better as it bounds the growth of velocity uncertainty.

Sorry I can't write much more at the moment. I will be checking my emails over the Christmas holidays and will attempt to help out if you have more questions.

All the best,
Michael
--
___________________________________
Michael Stevens
Systems Engineering
34128 Kassel, Germany
Phone/Fax: +49 561 5218038
Navigation Systems, Estimation and Bayesian Filtering
http://bayesclasses.sf.net
___________________________________
From: Michael S. <ms...@21...> - 2006-12-12 21:42:26
|
Michael Stevens wrote: > On Monday, 11. December 2006 17:24, Michael Simon wrote: >> Hello, everyone. >> >> I am attempting to use the Covariance_filter as an EKF for a non-linear >> model. However, I do not understand which type of model class to use, >> and how to specify the data for the models, even after looking at the >> Bayesian Filtering overview. I'm pretty sure I need to use a >> linrz_predict_model (or something like that) but beyond that... > > 'Linrz_predict_model' is definately what you want. > >> The main problem is that the elements of the f function I want to use >> for the model are exponential. It's non-linear, with noise, but no >> control (it's essentially 'the object I'm trying to predict is walking >> around randomly, but within the laws of physics') How do you specify >> that? > > No-control is fine. Control inputs just make the implementation more complex! > I don't understand how you model can be 'exponential' when the physics are of > something moving around at random. Or do you mean that you have exponentially > correlated noise in you model? Actually, after sketching the algorithms out by hand, I don't understand it either. ;) I do want a model that would be sufficient to track something that might move at random, but the formula I had down clearly wouldn't give it. >> I've been looking at the Welch and Bishop introduction and trying >> to compare back to Bayes++, but with no avail. I'm fairly sure I could >> use the libraries if my model was linear, but I don't quite see how to >> put the non-linearity in there. > > I not sure of where you are stubling. Would it be possible to post the > equations that model your system? As I mentioned above, I was using the wrong formulas. At the time I was reading a report about EKF that purported to be using unconstrained Brownian Motion for modeling and trying to reproduce their results, but the formulas they gave don't produce anything approaching that, at least, not how I read them. 
(One example: x_k = exp(-1/4(x_(k-1) + 1.5 (deltax_(k-1)) from http://page.mi.fu-berlin.de/~zaldivar/files/tr-b-05-12.pdf ) > The Covariance_filter implements the EKF in a fairly standard form. It should > be usable for any non-linear problem that can be solved by an EKF. > You need to turn the maths of your model into 3 things: > f(x) : This should be easy once you have the discrete time state equation of the > system! I suppose this is where my biggest problem was when the question was asked, but after some rooting through the code, I am under the impression you are supposed to replace the virtual f function with a function that takes x, computes f(x), computes the Jacobian for that time step and puts it in Fx, and returns the new value. Is that correct? > Fx : Is the Jacobian of f(x) > GqG' : This is often written as the symmetric matrix Q and is the covariance > of the additive noise in your discrete state equations. Generally it is > possible (and physically more meaningful) to write this as a coupling matrix G > and a noise variance q. If this is not possible then you can numerically > obtain G and q from Q with the following: > > // Equivalent de-correlated form as GqG' > Float rcond = Bayesian_filter_matrix::UdUfactor (Qtemp, Q); > rclimit.check_PSD(rcond, "decorrelating Q not PSD"); > Bayesian_filter_matrix::UdUseperate (G, q, Qtemp); The two things above are simply variables to be set, yes? > > Hope this helps you in the right direction, > Michael Yes, this has been helpful so far, thanks. Michael S. |
From: Michael S. <ma...@mi...> - 2006-12-12 21:00:30
|
On Monday, 11. December 2006 17:24, Michael Simon wrote: > Hello, everyone. > > I am attempting to use the Covariance_filter as an EKF for a non-linear > model. However, I do not understand which type of model class to use, > and how to specify the data for the models, even after looking at the > Bayesian Filtering overview. I'm pretty sure I need to use a > linrz_predict_model (or something like that) but beyond that... 'Linrz_predict_model' is definitely what you want. > The main problem is that the elements of the f function I want to use > for the model are exponential. It's non-linear, with noise, but no > control (it's essentially 'the object I'm trying to predict is walking > around randomly, but within the laws of physics') How do you specify > that? No-control is fine. Control inputs just make the implementation more complex! I don't understand how your model can be 'exponential' when the physics are of something moving around at random. Or do you mean that you have exponentially correlated noise in your model? > I've been looking at the Welch and Bishop introduction and trying > to compare back to Bayes++, but to no avail. I'm fairly sure I could > use the libraries if my model was linear, but I don't quite see how to > put the non-linearity in there. I'm not sure where you are stumbling. Would it be possible to post the equations that model your system? The Covariance_filter implements the EKF in a fairly standard form. It should be usable for any non-linear problem that can be solved by an EKF. You need to turn the maths of your model into 3 things: f(x) : This should be easy once you have the discrete time state equation of the system! Fx : Is the Jacobian of f(x) GqG' : This is often written as the symmetric matrix Q and is the covariance of the additive noise in your discrete state equations. Generally it is possible (and physically more meaningful) to write this as a coupling matrix G and a noise variance q. 
If this is not possible then you can numerically obtain G and q from Q with the following: // Equivalent de-correlated form as GqG' Float rcond = Bayesian_filter_matrix::UdUfactor (Qtemp, Q); rclimit.check_PSD(rcond, "decorrelating Q not PSD"); Bayesian_filter_matrix::UdUseperate (G, q, Qtemp); Hope this helps you in the right direction, Michael -- ___________________________________ Michael Stevens Systems Engineering 34128 Kassel, Germany Phone/Fax: +49 561 5218038 Navigation Systems, Estimation and Bayesian Filtering http://bayesclasses.sf.net ___________________________________ |
From: Michael S. <ms...@21...> - 2006-12-11 16:25:35
|
Hello, everyone. I am attempting to use the Covariance_filter as an EKF for a non-linear model. However, I do not understand which type of model class to use, and how to specify the data for the models, even after looking at the Bayesian Filtering overview. I'm pretty sure I need to use a linrz_predict_model (or something like that) but beyond that... The main problem is that the elements of the f function I want to use for the model are exponential. It's non-linear, with noise, but no control (it's essentially 'the object I'm trying to predict is walking around randomly, but within the laws of physics') How do you specify that? I've been looking at the Welch and Bishop introduction and trying to compare back to Bayes++, but to no avail. I'm fairly sure I could use the libraries if my model was linear, but I don't quite see how to put the non-linearity in there. The documentation doesn't make it completely clear what all the pieces I'm specifying are. Is there any help anyone could offer? Thanks, Mike Simon |
From: Michael S. <ma...@mi...> - 2006-11-30 12:09:20
|
On Tuesday, 28. November 2006 21:05, Nicola Bellotto wrote: > Hello, > In the SIR_kalman_scheme, is it possible I have several weights much bigger > than 1? When you call SIR_kalman_scheme::observe() the weight for each particle is assigned from your Likelihood function. It is quite possible that several of these are much bigger than 1. For example the likelihood of a Gaussian with small variance near the mean will be >1. For each subsequent SIR_kalman_scheme::observe() the weights are simply the likelihoods multiplied together. When SIR_kalman_scheme::update() is called the particles are resampled. First the cumulative sum of the weights is computed. These are then normalised so the largest is 1. After resampling all weights are set to 1 ready for the next observations. Michael -- ___________________________________ Michael Stevens Systems Engineering 34128 Kassel, Germany Phone/Fax: +49 561 5218038 Navigation Systems, Estimation and Bayesian Filtering http://bayesclasses.sf.net ___________________________________ |
From: Nicola B. <nb...@es...> - 2006-11-28 20:06:10
|
Hello, In the SIR_kalman_scheme, is it possible I have several weights much bigger than 1? If not, what could be the problem? Thanks. Nicola On Sunday 19 Nov 2006 17:49, Nicola Bellotto wrote: > Steven, > > Thanks for your reply. I fixed the variance and number of samples also in my > model and now there are indeed no more exceptions. > As you said, unfortunately the SIR_scheme seems to be more sensitive to the > model and this might be a big problem for me, since I am tracking humans, > for which a good motion model does not exist in general (although UKF seems > to perform not too badly). > I read some authors (Arulampalam et al., A tutorial on particle filters > for online nonlinear/non-Gaussian Bayesian tracking, 2002) suggest > Systematic Resampling, which is indeed already implemented in your library. > What do you think? > Also, I was wondering if the SIR_kalman_scheme is basically just a SIR > filter providing the first two moments, 'x' and 'X', or something more > performant (I saw for example that it has a different roughening > procedure). > > Regards, > > Nicola > > On Saturday 18 Nov 2006 19:20, Michael Stevens wrote: > > Nicola, > > > > On Saturday, 18. November 2006 15:47, Nicola Bellotto wrote: > > > Steven, > > > > > > I modified my model inheriting from Likelihood_observe_model and > > > copying the likelihood function code, but I continually had exceptions > > > like "Roughening X not PSD" or "zero cumulative weight sum". > > > So I went back to the simplest example, the PV_SIR.cpp, and modified > > > the observation model just a little, multiplying z_pred[0] and Hx(0,0) > > > by 2 (see attached file). This caused again a "zero cumulative weight > > > sum" exception. I looked through the library's code but I could not > > > really understand the reason for the problem. What am I doing wrong? > > > > I took a quick look at this because I was fearing a bug. Luckily for me the > > problem is with your small but significant change! 
> > > > Both the exceptions occur when the SIR cannot solve the problem. As > > you found, this is quite easy to achieve. > > > > "zero cumulative weight sum" occurs when the Likelihoods computed for all > > the samples are zero. When you doubled z_pred and Hx you omitted to > > double the true observation at line 199 in PV_SIR.cpp. Because the > > position is 1000 this meant that the observation was 2*1000-1000 away > > from the true. With a position variance of just 1, something 1000 away > > is very very unlikely. In fact the likelihood computed is 0! > > > > You should also note that the SIR_scheme is only constructed with 10 > > samples. This is very few and can also very quickly lead to only 1 sample > > being chosen in many situations. This can lead to the "Roughening X not > > PSD" exception. In practice you will usually need more particles. Even > > then, the standard SIR is very poor at solving problems if the prediction > > is far away from the observation. > > > > Michael -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Nicola B. <nb...@es...> - 2006-11-19 17:51:18
|
Steven, Thanks for your reply. I fixed the variance and number of samples also in my model and now there are indeed no more exceptions. As you said, unfortunately the SIR_scheme seems to be more sensitive to the model and this might be a big problem for me, since I am tracking humans, for which a good motion model does not exist in general (although UKF seems to perform not too badly). I read some authors (Arulampalam et al., A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking, 2002) suggest Systematic Resampling, which is indeed already implemented in your library. What do you think? Also, I was wondering if the SIR_kalman_scheme is basically just a SIR filter providing the first two moments, 'x' and 'X', or something more performant (I saw for example that it has a different roughening procedure). Regards, Nicola On Saturday 18 Nov 2006 19:20, Michael Stevens wrote: > Nicola, > > On Saturday, 18. November 2006 15:47, Nicola Bellotto wrote: > > Steven, > > > > I modified my model inheriting from Likelihood_observe_model and copying > > the likelihood function code, but I continually had exceptions like > > "Roughening X not PSD" or "zero cumulative weight sum". > > So I went back to the simplest example, the PV_SIR.cpp, and modified > > the observation model just a little, multiplying z_pred[0] and Hx(0,0) by 2 > > (see attached file). This caused again a "zero cumulative weight sum" > > exception. I looked through the library's code but I could not really > > understand the reason for the problem. What am I doing wrong? > > I took a quick look at this because I was fearing a bug. Luckily for me the > problem is with your small but significant change! > > Both the exceptions occur when the SIR cannot solve the problem. As you > found, this is quite easy to achieve. > > "zero cumulative weight sum" occurs when the Likelihoods computed for all > the samples are zero. 
When you doubled z_pred and Hx you omitted to double > the true observation at line 199 in PV_SIR.cpp. Because the position is > 1000 this meant that the observation was 2*1000-1000 away from the true. > With a position variance of just 1, something 1000 away is very very > unlikely. In fact the likelihood computed is 0! > > You should also note that the SIR_scheme is only constructed with 10 > samples. This is very few and can also very quickly lead to only 1 sample > being chosen in many situations. This can lead to the "Roughening X not > PSD" exception. In practice you will usually need more particles. Even then, > the standard SIR is very poor at solving problems if the prediction is far > away from the observation. > > Michael -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Michael S. <ma...@mi...> - 2006-11-18 19:20:54
|
Nicola, On Saturday, 18. November 2006 15:47, Nicola Bellotto wrote: > Steven, > > I modified my model inheriting from Likelihood_observe_model and copying > the likelihood function code, but I continually had exceptions like > "Roughening X not PSD" or "zero cumulative weight sum". > So I went back to the simplest example, the PV_SIR.cpp, and modified > the observation model just a little, multiplying z_pred[0] and Hx(0,0) by 2 (see > attached file). This caused again a "zero cumulative weight sum" exception. > I looked through the library's code but I could not really understand the > reason for the problem. What am I doing wrong? I took a quick look at this because I was fearing a bug. Luckily for me the problem is with your small but significant change! Both the exceptions occur when the SIR cannot solve the problem. As you found, this is quite easy to achieve. "zero cumulative weight sum" occurs when the Likelihoods computed for all the samples are zero. When you doubled z_pred and Hx you omitted to double the true observation at line 199 in PV_SIR.cpp. Because the position is 1000 this meant that the observation was 2*1000-1000 away from the true. With a position variance of just 1, something 1000 away is very very unlikely. In fact the likelihood computed is 0! You should also note that the SIR_scheme is only constructed with 10 samples. This is very few and can also very quickly lead to only 1 sample being chosen in many situations. This can lead to the "Roughening X not PSD" exception. In practice you will usually need more particles. Even then, the standard SIR is very poor at solving problems if the prediction is far away from the observation. Michael -- ___________________________________ Michael Stevens Systems Engineering 34128 Kassel, Germany Phone/Fax: +49 561 5218038 Navigation Systems, Estimation and Bayesian Filtering http://bayesclasses.sf.net ___________________________________ |
From: Nicola B. <nb...@es...> - 2006-11-18 14:47:33
|
Steven, I modified my model inheriting from Likelihood_observe_model and copying the likelihood function code, but I continually had exceptions like "Roughening X not PSD" or "zero cumulative weight sum". So I went back to the simplest example, the PV_SIR.cpp, and modified the observation model just a little, multiplying z_pred[0] and Hx(0,0) by 2 (see attached file). This caused again a "zero cumulative weight sum" exception. I looked through the library's code but I could not really understand the reason for the problem. What am I doing wrong? Regards, Nicola On Thursday 16 Nov 2006 11:57, Michael Stevens wrote: > On Wednesday, 15. November 2006 22:01, Nicola Bellotto wrote: > > Steven, > > > > I am trying to implement a particle filter using the "SIR_scheme", but I > > have some problems with the observation model. The filter needs a model > > derived from "Likelihood_observe_model" and four classes are already > > implemented for linear-linearized/correlated-uncorrelated models. > > OK. I think you are referring to the four General_XX_observe_model classes > in models.hpp. > > > Unfortunately my observation model is not linear (with uncorrelated > > noises), indeed I used a "Correlated_addative_observe_model" for a > > previous UKF implementation. Do you have an example of a non-linear > > observation model for the SIR_scheme? > > Looking at the class 'General_LzCoAd_observe_model', the implementation only > needs a 'Correlated_addative_observe_model' to generalise. Sadly I have > restricted the interface to a 'Linrz_correlated_observe_model', which is > unnecessary. I think I should change this in the future. > > You can easily generalise your own 'Correlated_addative_observe_model' by > inheriting from Likelihood_observe_model and copying the likelihood > function code from the class 'General_LzCoAd_observe_model'. The > function definitions, which compute the likelihood of correlated Gaussian > noise, are at the end of 'bayesFltAlg.hpp'. 
> > > Do I have to define a new likelihood "L"? If so, I do > > not understand exactly what it is... > > For the SIR_scheme your observe model must define a likelihood function. > > I like the following from Wikipedia... > Informally, if "probability" allows us to predict unknown outcomes based > on known parameters, then "likelihood" allows us to determine unknown > parameters based on known outcomes. > > The likelihood function is at the core of all Bayesian inference. > Fundamentally, all the observation models in Bayes++ define a likelihood. > Most of the definitions restrict the form the function takes however. > > These restrictions (such as Gaussian noise) allow for simple numeric > solutions. So when you use a Kalman filter, the observation likelihood is > expressed as a few matrices rather than an arbitrary likelihood function. > The SIR_scheme, and other sampled solutions to Bayesian inference, have the > advantage that they can find an approximate solution when an arbitrary > likelihood function is defined. > > How to define a likelihood function: > > Imagine you can define the conditional probability of some observation z, > given the system state x as p( z | x ). > > If you knew x (which you generally don't!) then you can use this function > to find the probability of an observation z. > > Conversely, if you know z (which is true in our case) then fixing z in p( z > | x ) defines a function of x. This is the Likelihood function, and can be > written L( x ). It is not a probability; for example, a continuous likelihood > function will not in general integrate to 1. It has many similar properties. > > All this said, you only need to use the SIR_scheme when you have a complex > likelihood function which cannot be well approximated by any of the simpler > models. Building likelihood functions of real observations can be quite > difficult however. "Bayesian Multiple Target Tracking" Lawrence D Stone, > Carl A Barlow, Thomas L Corwin is a good reference. 
> > As a starter, it can be quite a lot of fun to compare the results on > simple problems between the SIR_scheme and other schemes. > > All the best, > Michael -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Nicola B. <nb...@es...> - 2006-11-16 16:02:18
|
Thanks a lot for your help, I really appreciate it! I am looking forward to implementing the particle filter and comparing the performance against a previous UKF. I think I also understood the likelihood function; your explanation was very clear. I have just requested the book you suggested from our library to learn more about it. Thanks again, Nicola On Thursday 16 Nov 2006 11:57, Michael Stevens wrote: > On Wednesday, 15. November 2006 22:01, Nicola Bellotto wrote: > > Steven, > > > > I am trying to implement a particle filter using the "SIR_scheme", but I > > have some problems with the observation model. The filter needs a model > > derived from "Likelihood_observe_model" and four classes are already > > implemented for linear-linearized/correlated-uncorrelated models. > > OK. I think you are referring to the four General_XX_observe_model classes > in models.hpp. > > > Unfortunately my observation model is not linear (with uncorrelated > > noises), indeed I used a "Correlated_addative_observe_model" for a > > previous UKF implementation. Do you have an example of a non-linear > > observation model for the SIR_scheme? > > Looking at the class 'General_LzCoAd_observe_model', the implementation only > needs a 'Correlated_addative_observe_model' to generalise. Sadly I have > restricted the interface to a 'Linrz_correlated_observe_model', which is > unnecessary. I think I should change this in the future. > > You can easily generalise your own 'Correlated_addative_observe_model' by > inheriting from Likelihood_observe_model and copying the likelihood > function code from the class 'General_LzCoAd_observe_model'. The > function definitions, which compute the likelihood of correlated Gaussian > noise, are at the end of 'bayesFltAlg.hpp'. > > > Do I have to define a new likelihood "L"? If so, I do > > not understand exactly what it is... > > For the SIR_scheme your observe model must define a likelihood function. > > I like the following from Wikipedia... 
> Informally, if "probability" allows us to predict unknown outcomes based > on known parameters, then "likelihood" allows us to determine unknown > parameters based on known outcomes. > > The likelihood function is at the core of all Bayesian inference. > Fundamentally, all the observation models in Bayes++ define a likelihood. > Most of the definitions restrict the form the function takes however. > > These restrictions (such as Gaussian noise) allow for simple numeric > solutions. So when you use a Kalman filter, the observation likelihood is > expressed as a few matrices rather than an arbitrary likelihood function. > The SIR_scheme, and other sampled solutions to Bayesian inference, have the > advantage that they can find an approximate solution when an arbitrary > likelihood function is defined. > > How to define a likelihood function: > > Imagine you can define the conditional probability of some observation z, > given the system state x as p( z | x ). > > If you knew x (which you generally don't!) then you can use this function > to find the probability of an observation z. > > Conversely, if you know z (which is true in our case) then fixing z in p( z > | x ) defines a function of x. This is the Likelihood function, and can be > written L( x ). It is not a probability; for example, a continuous likelihood > function will not in general integrate to 1. It has many similar properties. > > All this said, you only need to use the SIR_scheme when you have a complex > likelihood function which cannot be well approximated by any of the simpler > models. Building likelihood functions of real observations can be quite > difficult however. "Bayesian Multiple Target Tracking" Lawrence D Stone, > Carl A Barlow, Thomas L Corwin is a good reference. > > As a starter, it can be quite a lot of fun to compare the results on > simple problems between the SIR_scheme and other schemes. 
> > All the best, > Michael -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Michael S. <ma...@mi...> - 2006-11-16 11:57:32
|
On Wednesday, 15. November 2006 22:01, Nicola Bellotto wrote: > Steven, > > I am trying to implement a particle filter using the "SIR_scheme", but I > have some problems with the observation model. The filter needs a model > derived from "Likelihood_observe_model" and four classes are already > implemented for linear-linearized/correlated-uncorrelated models. OK. I think you are referring to the four General_XX_observe_model classes in models.hpp. > Unfortunately my observation model is not linear (with uncorrelated > noises), indeed I used a "Correlated_addative_observe_model" for a previous > UKF implementation. Do you have an example of a non-linear observation model > for the SIR_scheme? Looking at the class 'General_LzCoAd_observe_model', the implementation only needs a 'Correlated_addative_observe_model' to generalise. Sadly I have restricted the interface to a 'Linrz_correlated_observe_model', which is unnecessary. I think I should change this in the future. You can easily generalise your own 'Correlated_addative_observe_model' by inheriting from Likelihood_observe_model and copying the likelihood function code from the class 'General_LzCoAd_observe_model'. The function definitions, which compute the likelihood of correlated Gaussian noise, are at the end of 'bayesFltAlg.hpp'. > Do I have to define a new likelihood "L"? If so, I do > not understand exactly what it is... For the SIR_scheme your observe model must define a likelihood function. I like the following from Wikipedia... Informally, if "probability" allows us to predict unknown outcomes based on known parameters, then "likelihood" allows us to determine unknown parameters based on known outcomes. The likelihood function is at the core of all Bayesian inference. Fundamentally, all the observation models in Bayes++ define a likelihood. Most of the definitions restrict the form the function takes however. These restrictions (such as Gaussian noise) allow for simple numeric solutions. 
So when you use a Kalman filter, the observation likelihood is expressed as a few matrices rather than an arbitrary likelihood function. The SIR_scheme, and other sampled solutions to Bayesian inference, have the advantage that they can find an approximate solution when an arbitrary likelihood function is defined. How to define a likelihood function: Imagine you can define the conditional probability of some observation z, given the system state x as p( z | x ). If you knew x (which you generally don't!) then you can use this function to find the probability of an observation z. Conversely, if you know z (which is true in our case) then fixing z in p( z | x ) defines a function of x. This is the Likelihood function, and can be written L( x ). It is not a probability; for example, a continuous likelihood function will not in general integrate to 1. It has many similar properties. All this said, you only need to use the SIR_scheme when you have a complex likelihood function which cannot be well approximated by any of the simpler models. Building likelihood functions of real observations can be quite difficult however. "Bayesian Multiple Target Tracking" Lawrence D Stone, Carl A Barlow, Thomas L Corwin is a good reference. As a starter, it can be quite a lot of fun to compare the results on simple problems between the SIR_scheme and other schemes. All the best, Michael -- ___________________________________ Michael Stevens Systems Engineering 34128 Kassel, Germany Phone/Fax: +49 561 5218038 Navigation Systems, Estimation and Bayesian Filtering http://bayesclasses.sf.net ___________________________________ |
From: Nicola B. <nb...@es...> - 2006-11-15 21:02:04
|
Steven, I am trying to implement a particle filter using the "SIR_scheme", but I have some problems with the observation model. The filter needs a model derived from "Likelihood_observe_model" and four classes are already implemented for linear-linearized/correlated-uncorrelated models. Unfortunately my observation model is not linear (with uncorrelated noises), indeed I used a "Correlated_addative_observe_model" for a previous UKF implementation. Do you have an example of a non-linear observation model for the SIR_scheme? Do I have to define a new likelihood "L"? If so, I do not understand exactly what it is... Thanks. Nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Michael S. <ma...@mi...> - 2006-11-15 10:36:40
|
On Monday, 13. November 2006 07:09, Vinh wrote: > Have been fiddling around for an hour. Could it be that I need to > replace the linear prediction model with an "Unscented_predict_model"? > I would derive from the mentioned model and then override the > function "f" to insert my own state transition function? Same with the > noise i.e. covariance matrix Q? That is correct. I would probably use 'Addative_predict_model' (in bayesFlt.hpp) which is the most general model that fits your problem. It can be used by the predict functions in the Unscented_scheme. > > > > Since the motion/prediction model is not linear due to the rotation, > > what would I need to change to modify it so that the filter can deal > > with non-linearities? Yes. The simplest (kinematic) model you can use for your 2d robot position estimator requires 3 states. Position in Cartesian coordinates (x[0],x[1]) plus the orientation of the robot (x[2]). class Predict3 : public Addative_predict_model { public: Predict3(); const Vec& f(const Vec& x) const; void odometry (float speed, float rotation) { cur_speed = speed; cur_rotation = rotation; } mutable Vec xp; float cur_speed, cur_rotation; }; The mutable 'xp' can be used as the return value for the 'f' function. In the class constructor it can be constructed with its fixed size of 3 elements. Using the simplest kinematic model (bicycle model) the robot travels purely in the direction it points. In this case your prediction function 'f' is xp[0] += cur_speed * cos(x[2]); xp[1] += cur_speed * sin(x[2]); xp[2] += cur_rotation; All angles in radians. Before you ask the filter to predict, you simply call the 'odometry' function to tell it the input from your wheel encoders. With the 'f' function the Unscented filter will automatically determine how the non-linearity affects future states and how they are correlated. 
> > > > In the state prediction (Linear_predict_model), there is something > > looking like this: > > > > q[0] = dt*sqr((1-Fvv)*V_NOISE); > > G(0,0) = 0.; > > G(1,0) = 1.; > > > > Can I leave it like that if I assume that the noise is always constant? The noise model is critical in such robot applications. If you want to get really good results you need to take a look at how you can model physical effects such as the slip and slide of the wheels and the wheel radius. Alternatively you can use a simple assumed noise model. This can be fixed as above. But since you know the speed and rotation inputs it is probably better to make the noise proportional to these values. The 'dt*sqr((1-Fvv)*V_NOISE)' noise from the PV example needs a bit of work to extend to the 2d robot case. It would probably be best to start with simple additive noise proportional to the inputs. If you assume additive white velocity noise that is independent for each of the 3 states you have: q[0] = dt* speed * SPEED_POSITION_NOISE^2; q[1] = dt* speed * SPEED_POSITION_NOISE^2; q[2] = dt* (speed * SPEED_ORIENTATION_NOISE^2 + rotation * ROTATION_ORIENTATION_NOISE^2); This leaves you with three tuneable parameters. SPEED_POSITION_NOISE - how speed uncertainty affects position SPEED_ORIENTATION_NOISE - how speed uncertainty affects orientation ROTATION_ORIENTATION_NOISE - how rotation uncertainty affects orientation You could probably have a ROTATION_POSITION_NOISE parameter, but once you go to this complexity you will probably want to go to a more physically realistic model! All the best, Michael -- ___________________________________ Michael Stevens Systems Engineering 34128 Kassel, Germany Phone/Fax: +49 561 5218038 Navigation Systems, Estimation and Bayesian Filtering http://bayesclasses.sf.net ___________________________________ |
From: Nicola B. <nb...@es...> - 2006-11-14 09:25:28
|
Vinh,
In general, if you use a Linrz_* model, you implement the prediction "f"
(or observation "h") and the corresponding Jacobian "Fx" (or "Hx") - for
example for the EKF. Other filters, like the UKF, don't need linearized
models. Have a look inside "bayesFlt.hpp" to get more information about
the models.
Regards,
Nicola

On Monday 13 Nov 2006 22:56, you wrote:
> Hi Nicola,
> if there's any chance you can have a look at the code I would greatly
> appreciate it. The project is due in two days and I'm basically stuck
> with this. I was wondering whether I got the concept right, without too
> many memory leaks. Should I override the functions "f" and "Q", and do
> those correspond to the system motion model (f) and the covariance of
> the motion model (Q)? But if I measure a simple 2D position while my
> state is a 2d position as well, should I use a
> Linrz_uncorrelated_observe_model like in the PV example?
> I found out that I used the random functions wrongly (it was too late
> at night). But that didn't fix the problem that the estimate constantly
> jumps towards the measurement, even though its covariance is quite
> high.
>
> Regards,
> Vinh
>
> On 11/14/06, Nicola Bellotto <nb...@es...> wrote:
> > Vinh,
> > I didn't have time to look at the entire code, but for sure the
> > following doesn't look very correct:
> >
> > const Vec& PVpredict::f(const Vec& x) const
> > {
> > 	// Functional part of addative model
> > 	// Note: Reference return value as a speed optimisation, MUST be
> > 	// copied by caller.
> > 	Vec* v = new Vec(2);
> > 	(*v)[0] = x[0] + 0.1;
> > 	(*v)[1] = x[1] + 0.2;
> > 	return *v;
> > }
> >
> > Every time this member returns a reference to a _new_ location,
> > allocated with a _local_ pointer. Well, I guess that's not what you
> > want...
> > Regards
> > Nicola
> >
> > On Monday 13 Nov 2006 14:04, Vinh wrote:
> > > This is what I have so far. I've tried changing the observation to
> > > be a 2D position and the state itself the same.
It compiles and runs,
> > > however the results are far from what I expected.
> > > Somehow the values of the filter stay very close to those of the
> > > initial estimate, which would mean that the initial estimate's
> > > covariance is very small or the observation's covariance very big -
> > > none of this is intended. Can anyone help?
> > >
> > > Here's some sample output. The file, derived from the PV example, is
> > > attached.
> > >
> > > ----
> > > True [2](7.180000e+01,1.436000e+02)
> > > Direct [2](1.435908e+02,2.871707e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05))
> > > True [2](7.190000e+01,1.438000e+02)
> > > Direct [2](1.437973e+02,2.875657e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05))
> > > True [2](7.200000e+01,1.440000e+02)
> > > Direct [2](1.439857e+02,2.879593e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05))
> > > True [2](7.210000e+01,1.442000e+02)
> > > Direct [2](1.441731e+02,2.883677e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05))
> > > ------
> > >
> > > On 11/13/06, Vinh <arb...@go...> wrote:
> > > > Have been fiddling around for an hour. Could it be that I need to
> > > > replace the linear prediction model with an
> > > > "Unscented_predict_model"? I would derive from the mentioned model
> > > > and then overwrite the function "f" to insert my own state
> > > > transition function? Same with the noise i.e. covariance matrix Q?
> > > >
> > > > Vinh
> > > >
> > > > On 11/13/06, Vinh <arb...@go...> wrote:
> > > > > Hi,
> > > > > I've just started playing around with the Bayes++ library to
> > > > > accomplish the following task:
> > > > >
> > > > > A vision system provides me with the position of our robot on
> > > > > the ground (2d). In addition, I want to merge this information
> > > > > with the internal wheel encoders (giving me the speed and
> > > > > rotation of the robot) to get an estimate of the position of the
> > > > > robot.
> > > > > Since the robot is rotating as well, the system model is not
> > > > > linear. First I thought of using a particle filter to get the
> > > > > estimate, but that would probably be overkill, making the system
> > > > > slower than needed, since the underlying probability
> > > > > distribution could simply be one Gaussian.
> > > > > I had a look at the PV example, but got stuck and would like to
> > > > > ask for advice.
> > > > >
> > > > > In the state prediction (Linear_predict_model), there is
> > > > > something looking like this:
> > > > >
> > > > > q[0] = dt*sqr((1-Fvv)*V_NOISE);
> > > > > G(0,0) = 0.;
> > > > > G(1,0) = 1.;
> > > > >
> > > > > Can I leave it like that if I assume that the noise is always
> > > > > constant?
> > > > >
> > > > > Since the motion/prediction model is not linear due to the
> > > > > rotation, what would I need to change to modify it so that the
> > > > > filter can deal with non-linearities?
> > > > >
> > > > > Thanks very much for your help!!
> > > > >
> > > > > Vinh
> >
> > --
> > ------------------------------------------
> >   Nicola Bellotto
> >   University of Essex
> >   Department of Computer Science
> >   Wivenhoe Park
> >   Colchester CO4 3SQ
> >   United Kingdom
> >
> >   Room: 1N1.2.8
> >   Tel. +44 (0)1206 874094
> >   URL: http://privatewww.essex.ac.uk/~nbello
> > ------------------------------------------
> >
> > _______________________________________________
> > Bayesclasses-general mailing list
> > Bay...@li...
> > https://lists.sourceforge.net/lists/listinfo/bayesclasses-general

--
------------------------------------------
  Nicola Bellotto
  University of Essex
  Department of Computer Science
  Wivenhoe Park
  Colchester CO4 3SQ
  United Kingdom

  Room: 1N1.2.8
  Tel. +44 (0)1206 874094
  URL: http://privatewww.essex.ac.uk/~nbello
------------------------------------------
|