bayesclasses-general Mailing List for Bayes+Estimate (Page 4)
From: Vinh <arb...@go...> - 2006-11-13 22:56:22
|
Hi Nicola, if there's any chance you can have a look at the code I would greatly appreciate it. The project is due in two days and I'm basically stucked with this. I was wondering whether I get the concept right, not too much of memory leaks. Should I overwrite the functions "f" and "Q" and are those corresponding to the system motion model (f) and covariance of the motion model (Q)? But if I measure a simple 2D position while my state is a 2d position as well, should I use a Linrz_uncorrelated_observe_model like in the PV example? I found out that I used the random functions wrong (was too late at night). But that didn't fix the problem that the estimate constantly jumps towards the measurement, even though their covariance is quite high. Regards, Vinh On 11/14/06, Nicola Bellotto <nb...@es...> wrote: > Vinh, > I didn't have time to look at the entire code, but for sure the following > doesn't look very correct: > > const Vec& PVpredict::f(const Vec& x) const > { > // Functional part of addative model > // Note: Reference return value as a speed optimisation, MUST be copied by > caller. > Vec* v = new Vec(2); > (*v)[0] = x[0] + 0.1; > (*v)[1] = x[1] + 0.2; > return *v; > } > > Everytime this member returns a reference to a _new_ location, allocated with > a _local_ pointer. Well, I guess that's not what you want... > Regards > Nicola > > > On Monday 13 Nov 2006 14:04, Vinh wrote: > > This is what I have so far. I've tried changing the observation to be > > a 2D position and the state itself the same. It compiles and runs, > > however the results a far from what I expected. > > Somehow the values of the filter stay very close to the one of the > > initial estimate which would mean that the initial estimate's > > covariance should be very small or the observations covariance very > > big - none of this intended. Anyone can help? > > > > here's some sample output. The file, derived from the PV example, is > > attached. > > > > ---- > > True [2](7.180000e+01,1.436000e+02) > > Direct > > [2](1.435908e+02,2.871707e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e > >+00,8.571429e-05)) True [2](7.190000e+01,1.438000e+02) > > Direct > > [2](1.437973e+02,2.875657e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e > >+00,8.571429e-05)) True [2](7.200000e+01,1.440000e+02) > > Direct > > [2](1.439857e+02,2.879593e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e > >+00,8.571429e-05)) True [2](7.210000e+01,1.442000e+02) > > Direct > > [2](1.441731e+02,2.883677e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e > >+00,8.571429e-05)) ------ > > > > On 11/13/06, Vinh <arb...@go...> wrote: > > > Have been fiddeling arround for an hour. Could it be that I need to > > > replace the linear prediction model with an "Unscented_predict_model"? > > > I would derive from the mentioned model and then overwrite the > > > function "f" to insert my own state transition function? Same with the > > > noise i.e. covariance matrix Q? > > > > > > Vinh > > > > > > On 11/13/06, Vinh <arb...@go...> wrote: > > > > Hi, > > > > I'm just started playing around with the bayes++ library to accomplish > > > > following task: > > > > > > > > A vision system provides me with the position of our robot on the > > > > ground (2d). In addition, I want to merge this information with the > > > > internal wheelencoders (giving me the speed and rotation of the robot) > > > > to get an estimate of the position of the robot. > > > > Since the robot is rotating as well the system model is not linear. 
> > > > First I thought of using a particle filter to get the estimate, but > > > > that probably would be overkill, making the system slower than needed > > > > since the underlying probability distribution could simply be one > > > > gaussian. > > > > I had a look at the PV example, but got stucked and would like to ask > > > > for advice. > > > > > > > > In the state prediction (Linear_predict_model), there is something > > > > looking like this: > > > > > > > > q[0] = dt*sqr((1-Fvv)*V_NOISE); > > > > G(0,0) = 0.; > > > > G(1,0) = 1.; > > > > > > > > Can I leave it like that if I assume that the noise is always constant? > > > > > > > > Since the motion/prediction model is not linear due to the rotation, > > > > what would I need to change to modify it so that the filter can deal > > > > with non-linearities? > > > > > > > > Thanks very much for your help!! > > > > > > > > Vinh > > -- > ------------------------------------------ > Nicola Bellotto > University of Essex > Department of Computer Science > Wivenhoe Park > Colchester CO4 3SQ > United Kingdom > > Room: 1N1.2.8 > Tel. +44 (0)1206 874094 > URL: http://privatewww.essex.ac.uk/~nbello > ------------------------------------------ > > ------------------------------------------------------------------------- > Using Tomcat but need to do more? Need to support web services, security? > Get stuff done quickly with pre-integrated technology to make your job easier > Download IBM WebSphere Application Server v.1.0.1 based on Apache Geronimo > http://sel.as-us.falkag.net/sel?cmd=lnk&kid=120709&bid=263057&dat=121642 > _______________________________________________ > Bayesclasses-general mailing list > Bay...@li... > https://lists.sourceforge.net/lists/listinfo/bayesclasses-general > |
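The usual way to avoid the problem Nicola points out above (returning a reference to a freshly allocated local vector) is to keep the result vector as a member of the model and return a reference to that, which is what the "reference return value as a speed optimisation, MUST be copied by caller" comment assumes. A minimal, untested sketch — the class name MyPredict and the member name fx are only illustrative:

// In the model's class declaration (illustrative):
//     mutable FM::Vec fx;      // result storage, sized fx(2) in the constructor
const FM::Vec& MyPredict::f (const FM::Vec& x) const
{
	// Functional part of the additive model.
	// The returned reference refers to a member, so it stays valid for the
	// lifetime of the model and nothing is allocated (or leaked) per call.
	fx[0] = x[0] + 0.1;
	fx[1] = x[1] + 0.2;
	return fx;
}

The caller must still copy the result before calling f again, exactly as the original comment says.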
From: Nicola B. <nb...@es...> - 2006-11-13 17:40:44
|
Vinh, I didn't have time to look at the entire code, but for sure the following doesn't look very correct: const Vec& PVpredict::f(const Vec& x) const { // Functional part of addative model // Note: Reference return value as a speed optimisation, MUST be copied by caller. Vec* v = new Vec(2); (*v)[0] = x[0] + 0.1; (*v)[1] = x[1] + 0.2; return *v; } Everytime this member returns a reference to a _new_ location, allocated with a _local_ pointer. Well, I guess that's not what you want... Regards Nicola On Monday 13 Nov 2006 14:04, Vinh wrote: > This is what I have so far. I've tried changing the observation to be > a 2D position and the state itself the same. It compiles and runs, > however the results a far from what I expected. > Somehow the values of the filter stay very close to the one of the > initial estimate which would mean that the initial estimate's > covariance should be very small or the observations covariance very > big - none of this intended. Anyone can help? > > here's some sample output. The file, derived from the PV example, is > attached. > > ---- > True [2](7.180000e+01,1.436000e+02) > Direct > [2](1.435908e+02,2.871707e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e >+00,8.571429e-05)) True [2](7.190000e+01,1.438000e+02) > Direct > [2](1.437973e+02,2.875657e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e >+00,8.571429e-05)) True [2](7.200000e+01,1.440000e+02) > Direct > [2](1.439857e+02,2.879593e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e >+00,8.571429e-05)) True [2](7.210000e+01,1.442000e+02) > Direct > [2](1.441731e+02,2.883677e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e >+00,8.571429e-05)) ------ > > On 11/13/06, Vinh <arb...@go...> wrote: > > Have been fiddeling arround for an hour. Could it be that I need to > > replace the linear prediction model with an "Unscented_predict_model"? > > I would derive from the mentioned model and then overwrite the > > function "f" to insert my own state transition function? Same with the > > noise i.e. covariance matrix Q? > > > > Vinh > > > > On 11/13/06, Vinh <arb...@go...> wrote: > > > Hi, > > > I'm just started playing around with the bayes++ library to accomplish > > > following task: > > > > > > A vision system provides me with the position of our robot on the > > > ground (2d). In addition, I want to merge this information with the > > > internal wheelencoders (giving me the speed and rotation of the robot) > > > to get an estimate of the position of the robot. > > > Since the robot is rotating as well the system model is not linear. > > > First I thought of using a particle filter to get the estimate, but > > > that probably would be overkill, making the system slower than needed > > > since the underlying probability distribution could simply be one > > > gaussian. > > > I had a look at the PV example, but got stucked and would like to ask > > > for advice. > > > > > > In the state prediction (Linear_predict_model), there is something > > > looking like this: > > > > > > q[0] = dt*sqr((1-Fvv)*V_NOISE); > > > G(0,0) = 0.; > > > G(1,0) = 1.; > > > > > > Can I leave it like that if I assume that the noise is always constant? > > > > > > Since the motion/prediction model is not linear due to the rotation, > > > what would I need to change to modify it so that the filter can deal > > > with non-linearities? > > > > > > Thanks very much for your help!! 
> > > > > > Vinh -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Vinh <arb...@go...> - 2006-11-13 17:26:07
|
Have been fiddeling arround for an hour. Could it be that I need to replace the linear prediction model with an "Unscented_predict_model"? I would derive from the mentioned model and then overwrite the function "f" to insert my own state transition function? Same with the noise i.e. covariance matrix Q? Vinh On 11/13/06, Vinh <arb...@go...> wrote: > Hi, > I'm just started playing around with the bayes++ library to accomplish > following task: > > A vision system provides me with the position of our robot on the > ground (2d). In addition, I want to merge this information with the > internal wheelencoders (giving me the speed and rotation of the robot) > to get an estimate of the position of the robot. > Since the robot is rotating as well the system model is not linear. > First I thought of using a particle filter to get the estimate, but > that probably would be overkill, making the system slower than needed > since the underlying probability distribution could simply be one > gaussian. > I had a look at the PV example, but got stucked and would like to ask > for advice. > > In the state prediction (Linear_predict_model), there is something > looking like this: > > q[0] = dt*sqr((1-Fvv)*V_NOISE); > G(0,0) = 0.; > G(1,0) = 1.; > > Can I leave it like that if I assume that the noise is always constant? > > Since the motion/prediction model is not linear due to the rotation, > what would I need to change to modify it so that the filter can deal > with non-linearities? > > Thanks very much for your help!! > > Vinh > |
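If the Unscented_predict_model route is taken, a derived model would roughly have the shape below. This is only an untested sketch: the class name RotPredict, the turn rate w and the noise value are made up, and the base-class constructor argument and the exact signatures of f and Q should be checked against bayesFlt.hpp before use (assumes <cmath> for std::sin/std::cos).

class RotPredict : public Bayesian_filter::Unscented_predict_model
{
public:
	RotPredict (double dt, double w) :
		Unscented_predict_model(2),     // argument assumed to be the noise size; verify in the header
		fx(2), Qc(2,2), dt(dt), w(w)
	{
		Qc.clear();
		Qc(0,0) = Qc(1,1) = 0.01 * dt;  // constant process noise (placeholder value)
	}
	const FM::Vec& f (const FM::Vec& x) const
	{	// non-linear state transition: rotate the 2D position by w*dt
		fx[0] = x[0]*std::cos(w*dt) - x[1]*std::sin(w*dt);
		fx[1] = x[0]*std::sin(w*dt) + x[1]*std::cos(w*dt);
		return fx;
	}
	const FM::SymMatrix& Q (const FM::Vec& x) const
	{
		return Qc;      // noise independent of the state in this sketch
	}
private:
	mutable FM::Vec fx;
	FM::SymMatrix Qc;
	const double dt, w;
};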
From: Vinh <arb...@go...> - 2006-11-13 17:06:57
|
This is what I have so far. I've tried changing the observation to be a 2D position and the state itself the same. It compiles and runs, however the results a far from what I expected. Somehow the values of the filter stay very close to the one of the initial estimate which would mean that the initial estimate's covariance should be very small or the observations covariance very big - none of this intended. Anyone can help? here's some sample output. The file, derived from the PV example, is attached. ---- True [2](7.180000e+01,1.436000e+02) Direct [2](1.435908e+02,2.871707e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05)) True [2](7.190000e+01,1.438000e+02) Direct [2](1.437973e+02,2.875657e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05)) True [2](7.200000e+01,1.440000e+02) Direct [2](1.439857e+02,2.879593e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05)) True [2](7.210000e+01,1.442000e+02) Direct [2](1.441731e+02,2.883677e+02),[2,2]((8.571428e-05,0.000000e+00),(0.000000e+00,8.571429e-05)) ------ On 11/13/06, Vinh <arb...@go...> wrote: > Have been fiddeling arround for an hour. Could it be that I need to > replace the linear prediction model with an "Unscented_predict_model"? > I would derive from the mentioned model and then overwrite the > function "f" to insert my own state transition function? Same with the > noise i.e. covariance matrix Q? > > Vinh > > On 11/13/06, Vinh <arb...@go...> wrote: > > Hi, > > I'm just started playing around with the bayes++ library to accomplish > > following task: > > > > A vision system provides me with the position of our robot on the > > ground (2d). In addition, I want to merge this information with the > > internal wheelencoders (giving me the speed and rotation of the robot) > > to get an estimate of the position of the robot. > > Since the robot is rotating as well the system model is not linear. > > First I thought of using a particle filter to get the estimate, but > > that probably would be overkill, making the system slower than needed > > since the underlying probability distribution could simply be one > > gaussian. > > I had a look at the PV example, but got stucked and would like to ask > > for advice. > > > > In the state prediction (Linear_predict_model), there is something > > looking like this: > > > > q[0] = dt*sqr((1-Fvv)*V_NOISE); > > G(0,0) = 0.; > > G(1,0) = 1.; > > > > Can I leave it like that if I assume that the noise is always constant? > > > > Since the motion/prediction model is not linear due to the rotation, > > what would I need to change to modify it so that the filter can deal > > with non-linearities? > > > > Thanks very much for your help!! > > > > Vinh > > > |
From: Vinh <arb...@go...> - 2006-11-13 16:49:33
|
Hi, I've just started playing around with the bayes++ library to accomplish the following task: A vision system provides me with the position of our robot on the ground (2d). In addition, I want to merge this information with the internal wheel encoders (giving me the speed and rotation of the robot) to get an estimate of the position of the robot. Since the robot is also rotating, the system model is not linear. First I thought of using a particle filter to get the estimate, but that would probably be overkill, making the system slower than needed since the underlying probability distribution could simply be a single Gaussian. I had a look at the PV example, but got stuck and would like to ask for advice. In the state prediction (Linear_predict_model), there is something looking like this: q[0] = dt*sqr((1-Fvv)*V_NOISE); G(0,0) = 0.; G(1,0) = 1.; Can I leave it like that if I assume that the noise is always constant? Since the motion/prediction model is not linear due to the rotation, what would I need to change so that the filter can deal with the non-linearities? Thanks very much for your help!! Vinh |
From: Nithya N. V. <nvi...@cs...> - 2006-08-08 18:12:15
|
Hi Nicola, Michael, thank you very much for your reply. It really helped! -Nithya On Sun, 6 Aug 2006, Michael Stevens wrote: > Date: Sun, 6 Aug 2006 21:50:01 +0200 > From: Michael Stevens <ma...@mi...> > To: Nithya Nirmal Vijayakumar <nvi...@cs...> > Subject: Re: Bayes++ EKF Example Request > > Hi Nithya, > > Sorry to be so late in replying. I am horribly busy at the moment! > > The PV (Position Velocity) filter example you looked at is actually very close to what you need. > > The PV example uses an Unscented filter, which is closely related to an Extended Kalman filter. If you wish to use an Extended Kalman filter, the code can be trivially changed to use the 'Covariance_scheme'. > > In the PV example the data points of the time series are the position of a moving object. The filter estimates the object velocity using a motion model and can predict future positions and velocities. The estimator fuses its own predictions with noisy observations to give a best estimate given the model. > > In your question you say: >> and fill in any missing values. > > This makes me wonder whether you require an iterative estimator such as the Kalman filter for your problem. If you have all the known data points of the time series a priori and wish to determine estimates of data points at any time, then what you require is a batch smoother. Sadly Bayes++ does not have any smoothers; they are a topic in themselves!! > > Hope this helps a bit, > > Michael > > -- > ___________________________________ > Michael Stevens Systems Engineering > > 34128 Kassel, Germany > Phone/Fax: +49 561 5218038 > > Navigation Systems, Estimation and > Bayesian Filtering > http://bayesclasses.sf.net > ___________________________________ > |
From: Nicola B. <nb...@es...> - 2006-07-28 07:31:06
|
Nithya, Look at the documentation and the library's sources for the classes "Covariance_scheme", "Linrz_predict_model" and "Linrz_(un)correlated_observe_model" for the EKF implementation. Also, if you change "Unscented_scheme" with "Covariance_scheme" in the file "simpleExample.cpp", you have an example of classic Kalman Filter. Finally, if you didn't do it yet, take a look at this good introduction: http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html Hope it helps. Nicola On Tuesday 25 Jul 2006 16:37, Nithya Nirmal Vijayakumar wrote: > Hi, > > I am working on a prediction system implemented using Extended Kalman > filters. Ideally, my system would take a set of time series data points > and fill in any missing values. I have installed Bayes++ and tried out the > examples. I am looking for more examples on extended kalman filters > implemented using Bayes++. As a novice to Kalman filters, I greatly > appreciate any help. > > To begin with, I am working on extending the position velocity example to > take in a set of observations made at 10 time points, update the kalman > filter and then predict the observations for the 11th through 15th time > points. I would like to use EKF. I wrote a simple example using the kalman > filter toolbox in matlab. I appreciate any help in coding this up in > Bayes++. The partial code in matlab is as follows. > > ------position_velocity.m---------- > > %Take as observation vector and velocity vector values > > %set F, H, Q, R values > ss = 2; % state size > os = 1; % observation size > F = [1 dt; ... > 0 1]; > H = [1 0]; > Q = 0.1*eye(ss); > R = 1*eye(os); > > initx = [obsVec(1); velVec(1)] > initV = 10*eye(ss) > > %first 10 observations. perform kalman update > T = 10; > xfilt(:,1) = initx; > Vfilt(:,:,1) = initV; > for i = 2:T > [xfilt(:,i), Vfilt(:,:,i)] = kalman_update(F, H, Q, R, obsVec(i), ... > xfilt(:,i-1), Vfilt(:,:,i-1), 'initial', 0); > end > > %predict next 5 values > x = [obsVec; velVec]; > temp = zeros(2,2); > for i=T:T+5 > %calculate next position > temp = F*xfilt(:,i-1); > %update filter > [xfilt(:,i), Vfilt(:,:,i)] = kalman_update(F, H, Q, R, temp(1), ... > xfilt(:,i-1),Vfilt(:,:,i-1), 'initial', 0); > end > > > thanks, > Nithya > > > > > > ------------------------------------------------------------------------- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share > your opinions on IT & business topics through brief surveys -- and earn > cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Bayesclasses-general mailing list > Bay...@li... > https://lists.sourceforge.net/lists/listinfo/bayesclasses-general -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
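The swap Nicola mentions is literally a one-line change where the filter scheme is constructed, e.g. (assuming the state size of 2 used in the simple example; constructor details should be checked against the headers):

	// Bayesian_filter::Unscented_scheme filter(2);   // original: unscented filter
	Bayesian_filter::Covariance_scheme filter(2);      // classic (extended) Kalman filter

Everything else — the predict and observe models and the predict/observe calls — stays the same, since the schemes share the common Kalman filter interface.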
From: Nithya N. V. <nvi...@cs...> - 2006-07-25 15:37:18
|
Hi, I am working on a prediction system implemented using Extended Kalman filters. Ideally, my system would take a set of time series data points and fill in any missing values. I have installed Bayes++ and tried out the examples. I am looking for more examples of extended Kalman filters implemented using Bayes++. As a novice to Kalman filters, I greatly appreciate any help. To begin with, I am working on extending the position velocity example to take in a set of observations made at 10 time points, update the Kalman filter and then predict the observations for the 11th through 15th time points. I would like to use the EKF. I wrote a simple example using the Kalman filter toolbox in Matlab. I appreciate any help in coding this up in Bayes++. The partial Matlab code is as follows.

------position_velocity.m----------
% Take as observation vector and velocity vector values
% set F, H, Q, R values
ss = 2; % state size
os = 1; % observation size
F = [1 dt; ...
     0 1];
H = [1 0];
Q = 0.1*eye(ss);
R = 1*eye(os);

initx = [obsVec(1); velVec(1)]
initV = 10*eye(ss)

% first 10 observations: perform kalman update
T = 10;
xfilt(:,1) = initx;
Vfilt(:,:,1) = initV;
for i = 2:T
  [xfilt(:,i), Vfilt(:,:,i)] = kalman_update(F, H, Q, R, obsVec(i), ...
      xfilt(:,i-1), Vfilt(:,:,i-1), 'initial', 0);
end

% predict next 5 values
x = [obsVec; velVec];
temp = zeros(2,2);
for i = T:T+5
  % calculate next position
  temp = F*xfilt(:,i-1);
  % update filter
  [xfilt(:,i), Vfilt(:,:,i)] = kalman_update(F, H, Q, R, temp(1), ...
      xfilt(:,i-1), Vfilt(:,:,i-1), 'initial', 0);
end

thanks,
Nithya |
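For anyone trying to map this onto Bayes++: the F/H/Q/R above correspond roughly to the pieces sketched below, following the member names used in the PV example (Fx, q, G for the prediction model; Hx, Zv for the observation model). This is an untested sketch, not a drop-in translation; dt, the initial values obs0/vel0 and the measurement vector z are assumed to be defined elsewhere, and the scheme/constructor details should be verified against the headers.

using namespace Bayesian_filter;
using namespace Bayesian_filter_matrix;

// Constant-velocity prediction: x = [position, velocity]
Linear_predict_model predict(2, 2);
predict.Fx(0,0) = 1.;  predict.Fx(0,1) = dt;
predict.Fx(1,0) = 0.;  predict.Fx(1,1) = 1.;
predict.q[0] = 0.1;  predict.q[1] = 0.1;    // noise variances
predict.G(0,0) = 1.;  predict.G(0,1) = 0.;
predict.G(1,0) = 0.;  predict.G(1,1) = 1.;  // G.diag(q).G' = 0.1*eye(2), i.e. the Q above

// Observe the position only: H = [1 0], R = 1
Linear_uncorrelated_observe_model observe(2, 1);
observe.Hx(0,0) = 1.;  observe.Hx(0,1) = 0.;
observe.Zv[0] = 1.;

Covariance_scheme kf(2);                    // EKF-style scheme, as suggested above
Vec x_init(2);  SymMatrix X_init(2,2);
x_init[0] = obs0;  x_init[1] = vel0;        // initx
X_init.clear();  X_init(0,0) = X_init(1,1) = 10.;   // initV = 10*eye(2)
kf.init_kalman(x_init, X_init);

// per time step, with z a Vec of size 1 holding the observation:
kf.predict(predict);
kf.observe(observe, z);
kf.update();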
From: Nicola B. <nb...@es...> - 2006-05-02 19:53:12
|
On Monday 01 May 2006 21:05, Jack Collier wrote: > 1. It appears as though I should be using the Covariance_scheme. Is > this correct? Yes it is. > > 2. My state prediction model is as follows: > > x' = x + (-(v/w)*sin(b) + (v/w)*sin(b+w*dt)) > y' = y + ((v/w)*cos(b) - (v/w)*cos(b+w*dt)) > theta' = theta + (w*dt) > > I am wondering how I put this in the prediction model. First of all, > I'm not sure whether I should be using a linrz_predict_model, > linear_predict_model or Gaussian_predict_model. It's a non-linear system, so you have to use a linrz_predict_model (linearized prediction model). > Second the matrix Fx > seems to implement x' = Fx(x) but I don't see how to incorporate the > control component into the prediction. For a linrz_predict_model (your case), you have to provide the prediction equations in the virtual function fx. Then Fx is the Jacobian matrix of partial derivatives relative to fx and must be updated at every time step using the previous state estimate and your control vector. You will also need an observation model, linear or not, to correct the prediction according to the current observation. Before, I suggest you to have a look at this nice introduction about Kalman filtering: http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html Then you could have a look at the examples provided with Bayes++. Hope it helps. Nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
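To make that concrete for the model quoted here: a common pattern is to keep the control (v, w) and the time step as members of the model, implement the non-linear equations in the virtual state function, and refresh the Jacobian Fx from the latest estimate before every predict. A rough, untested sketch along these lines (assuming the virtual state function of the linearised predict model is f(x) and the constructor takes the state and noise sizes; the class name and the relinearise helper are made up, and w is assumed non-zero as in the original equations):

class UnicyclePredict : public Bayesian_filter::Linrz_predict_model
{
public:
	UnicyclePredict() : Linrz_predict_model(3, 2), fx(3), v(0), w(0), dt(0)
	{}
	// Call with the current estimate and control before kf.predict(*this)
	void relinearise (const FM::Vec& x, double v_, double w_, double dt_)
	{
		v = v_;  w = w_;  dt = dt_;
		const double b = x[2];
		// Jacobian of f with respect to the state, evaluated at x
		Fx(0,0) = 1.;  Fx(0,1) = 0.;  Fx(0,2) = -(v/w)*std::cos(b) + (v/w)*std::cos(b + w*dt);
		Fx(1,0) = 0.;  Fx(1,1) = 1.;  Fx(1,2) = -(v/w)*std::sin(b) + (v/w)*std::sin(b + w*dt);
		Fx(2,0) = 0.;  Fx(2,1) = 0.;  Fx(2,2) = 1.;
		// q and G (process noise variance and coupling) would also be set here
	}
	const FM::Vec& f (const FM::Vec& x) const
	{	// non-linear prediction, with the control entering directly through the model
		const double b = x[2];
		fx[0] = x[0] - (v/w)*std::sin(b) + (v/w)*std::sin(b + w*dt);
		fx[1] = x[1] + (v/w)*std::cos(b) - (v/w)*std::cos(b + w*dt);
		fx[2] = b + w*dt;
		return fx;
	}
private:
	mutable FM::Vec fx;
	double v, w, dt;
};

With this shape the control enters through f itself, so Fx does not need to be an identity matrix and the control term does not have to be added onto kf.x by hand after update(); kf.predict(model) applies both f and Fx.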
From: Jack C. <jac...@dr...> - 2006-05-01 20:05:22
|
Our research department is interested in using Bayes++ to do SLAM for unmanned ground vehicles. As a primer, I have recently started playing around with Bayes++ to do a simple EKF localization filter to predict the stat (x,y,theta) of a robot given a control U (v,w) of velocity and steering. The lack of documentation has made this difficult so I thought I would send an email to ensure I am on the right track. Here are my questions. Thanks in advance for your help. 1. It appears as though I should be using the Covariance_scheme. Is this correct? 2. My state prediction model is as follows: x' = x + (-(v/w)*sin(b) + (v/w)*sin(b+w*dt)) y' = y + ((v/w)*cos(b) - (v/w)*cos(b+w*dt)) theta' = theta + (w*dt) I am wondering how I put this in the prediction model. First of all, I'm not sure whether I should be using a linrz_predict_model, linear_predict_model or Gaussian_predict_model. Second the matrix Fx seems to implement x' = Fx(x) but I don't see how to incorporate the control component into the prediction. As of now I make Fx a 3x3 identity matrix, do the update and simply add the second term directly on to the state. Is this the correct way to handle this? I've included this code snippet below: void Jpredict::stateUpdate( float dt, Vec& U, Bayesian_filter::Covariance_scheme& kf ) { float a = U[0] / U[1]; Vec A(3); A[0] = -a*sin(kf.x[2]) + a*sin(kf.x[2]+U[1]*dt); A[1] = a*cos(kf.x[2]) - a*cos(kf.x[2]+U[1]*dt);; A[2] = U[1]*dt; //State transition matrix Fx is 3x3 identity Fx(0,0) = 1; Fx(0,1) = 0; Fx(0,2) = 0; Fx(1,0) = 0; Fx(1,1) = 1; Fx(0,2) = 0; Fx(2,0) = 0; Fx(2,1) = 0; Fx(2,2) = 1; kf.update(); kf.x[0] = kf.x[0] + A[0]; kf.x[1] = kf.x[1] + A[1]; kf.x[2] = kf.x[2] + A[2]; q[0] = 1; G(0,0) = 0.; G(1,0) = 1.; } |
From: Nicola B. <nb...@es...> - 2006-04-07 18:10:34
|
Still referring to my previous email, what would be a normalization function for the following observation model: z_pred[0] = atan2(x[1], x[0]); z_pred[1] = - atan2(x[2], sqrt(sqr(x[0]) + sqr(x[1])); z_pred[2] = sqrt(sqr(x[0]) + sqr(x[1])); where the first two components are angles? I'd really appreciate any suggestion. Thanks. Nicola ---------- Forwarded Message ---------- Subject: [Bayes++] normalise funtion in observe models Date: Wednesday 05 Apr 2006 20:52 To: bay...@li... Hi, I'm having some problems with the UKF and I don't understand whether this depends from the fact that I didn't implement a normalise(...) function on my observation model. This function seems to be necessary when the model is discontinuos, but still it's not very clear to me how to use it. Can anyone report a simple example of discontinuos model and relative normalisation function? Thanks Nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ ------------------------------------------------------- This SF.Net email is sponsored by xPML, a groundbreaking scripting language that extends applications into web and mobile media. Attend the live webcast and join the prime developer group breaking into this new coding territory! http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642 _______________________________________________ Bayesclasses-general mailing list Bay...@li... https://lists.sourceforge.net/lists/listinfo/bayesclasses-general ------------------------------------------------------- -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
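One plausible reading of normalise for a bearing/elevation/range model like this one is plain angle wrapping: shift the angular components of one measurement vector onto the same 2*pi branch as a reference measurement, so the filter never sees an artificial jump of about 2*pi at the +/-pi discontinuity. A hedged sketch — the exact signature and which argument is adjusted relative to which should be verified against bayesFlt.hpp:

void MyObserve::normalise (FM::Vec& z_denorm, const FM::Vec& z_from) const
{
	// components 0 and 1 are angles (bearing and elevation); component 2 (range) is left alone
	const double PI = 3.14159265358979;
	for (std::size_t i = 0; i < 2; ++i)
	{
		while (z_denorm[i] - z_from[i] >  PI)  z_denorm[i] -= 2.*PI;
		while (z_denorm[i] - z_from[i] < -PI)  z_denorm[i] += 2.*PI;
	}
}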
From: Nicola B. <nb...@es...> - 2006-04-05 19:52:28
|
Hi, I'm having some problems with the UKF and I don't understand whether this depends on the fact that I didn't implement a normalise(...) function in my observation model. This function seems to be necessary when the model is discontinuous, but it's still not very clear to me how to use it. Can anyone give a simple example of a discontinuous model and the corresponding normalisation function? Thanks Nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Nicola B. <nb...@es...> - 2006-03-12 19:24:16
|
hi there, i was wandering why none of the comments in the source files are included by doxygen to generate the documentation. i thought it was a problem of my doxygen version, but then i noticed that also the documentation on the website has the same problem... another thing i'd like to point out is that the installation procedure, based on boost-jam, is not exactly the most straightforward. a classic "configure -> make -> make install" would be surely preferable... i believe the lack of documentation and the tedious installation keep away lots of potential users, which probably prefer to choose a more approachable library like BFL. this is really a pity, even because i think Bayes++ has a more elegant and better performing code. i'm writing this because i've recently done some nice work using Bayes++, but i feel like i am only one of the very few users (or so it seems from the mailing list, at least...) hope to see some developments soon cheers nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Vadim <vo...@ya...> - 2006-01-07 04:12:19
|
Hi, I am selecting a library for state-space modeling and have some general questions about Bayes++. I am new to the area and it's hard for me to assess the suitability of the library from reading the code and the Doxygen docs.
1. Does the library have methods for estimation of innovation covariance matrices if they are not a priori known?
2. Any methods for estimating the initial state of the filters?
3. How does Bayes++ fare against the Bayesian Filtering Library, http://people.mech.kuleuven.be/~kgadeyne/bfl.html ? The latter sure looks less polished, but what about functionality?
4. Has anyone tried to compile Bayes++ with gcc 3.2.3? I read the list of tested compilers, but it would help me to know for certain whether 3.2.3 will compile the library.
I should say that (at least at first glance) Bayes++ looks like a well designed library and I have a strong itch to try it out. Thanks, Vadim |
From: Nicola B. <nb...@es...> - 2005-10-24 15:41:59
|
On Monday 24 Oct 2005 16:06, you wrote: > Sounds quite simmilar to my work :) > > I'm using "Active Contours" (Blake & Isard) as my primary reference, but > find the coverage of spline-fitting rather weak, as the recursive algorithm > on p.127 is almost presented out of the blue. I somewhat see the > relationship to information filters (inverse covariance Kalman), but this > is quite unclear to me. > > Do you have any good references on spline-fitting (projection of sampled > contours to feature-space) to recommend? Actually I'm not that deeply involved into vision as you seem to be, that's why I let OpenCV do most of the work ;) Don't know if it might be of interest, but perhaps you want to have a look at Brethes et al, "Data Fusion for Visual Tracking dedicated to Human-Robot Interaction", ICRA'05. Hope it helps. Let me know how it goes with the Bayes++ implementation. Bye Nicola PS: I cc to the mailing-list, hoping someone else shows up with his Bayes++ application... > > Thanks in advance, and good luck in your research! :) > > regards, > Fredrik Orderud > Ph.D student in Computer Science, NTNU > ----- Original Message ----- > From: "Nicola Bellotto" <nb...@es...> > To: "Fredrik Orderud" <fre...@id...> > Sent: Monday, October 24, 2005 4:30 PM > Subject: Re: [Bayes++] "observe" and "observe_innovation" for EKF > > > On Monday 24 Oct 2005 13:34, you wrote: > >> Vi just started using Bayes++ this fall. > >> > >> I'm currently digging into contour-based video tracking, and find the > >> "condensation"-algorithm (Blake & Isard @ univ. of Oxford) quite > >> interesting, as it provides a firm statistical foundation for video > >> tracking. My goal is to develop an open-source video tracking library > >> based > >> on OpenCV and Bayes++. > > > > That's cool, 'cause I've also started recently and I use OpenCV. I'm > > doing people tracking from a mobile platform (robot), combining vision > > and laser range data. I know of course the CONDENSATION algorithm, but at > > the moment I've just started with an EKF, planning to move later towards > > particle techniques. > > Hope to hear from you again and good luck with your research. > > Nicola > > > >> regards, > >> Fredrik Orderud > >> Ph.D student in Computer Science, NTNU > >> ----- Original Message ----- > >> From: "Nicola Bellotto" <nb...@es...> > >> To: <bay...@li...> > >> Sent: Monday, October 24, 2005 2:04 PM > >> Subject: Re: [Bayes++] "observe" and "observe_innovation" for EKF > >> > >> > Fredrik, > >> > thanks for you answer. Glad to see I got a reply, unfortunately this > >> > mailing-list seems to be deserted... > >> > What are you using Bayes++ for? Sometimes I'd like to share my > >> > experiences/doubts with someone else who's using it. > >> > Cheers > >> > Nicola > >> > > >> > On Monday 24 Oct 2005 02:00, you wrote: > >> >> I believe you're right in your assumptions about "observe" and > >> >> "observe_innovation". > >> >> > >> >> It you look into the sourcecode for EKF (bayesFlt.cpp), you'll > >> >> discover > >> >> that "observe" first calculates the innovation based on the > >> >> difference between the actual and predicted measure, and then calls > >> >> "observe_innovation" internally. 
> >> >> > >> >> ----- Original Message ----- > >> >> From: "Nicola Bellotto" <nb...@es...> > >> >> To: <bay...@li...> > >> >> Sent: Saturday, October 22, 2005 7:24 PM > >> >> Subject: [Bayes++] "observe" and "observe_innovation" for EKF > >> >> > >> >> > I've some doubt about the use of the two funtions "observe" and > >> >> > "observe_innovation". From what I've understood, they do the same > >> >> > job > >> >> > (correction step), but: > >> >> > - "observe" computes internally the innovation from the given real > >> >> > measure 'z'; > >> >> > - "observe_innovation" needs the innovation 's' to be computed > >> >> > externally > >> >> > and > >> >> > then passed as argument. > >> >> > Is that correct? > > > > -- > > ------------------------------------------ > > Nicola Bellotto > > University of Essex > > Department of Computer Science > > Wivenhoe Park > > Colchester CO4 3SQ > > United Kingdom > > > > Room: 1N1.2.8 > > Tel. +44 (0)1206 874094 > > E-Mail: nb...@es... > > URL: http://privatewww.essex.ac.uk/~nbello > > ------------------------------------------ -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 E-Mail: nb...@es... URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Nicola B. <nb...@es...> - 2005-10-24 12:05:08
|
Fredrik, thanks for you answer. Glad to see I got a reply, unfortunately this mailing-list seems to be deserted... What are you using Bayes++ for? Sometimes I'd like to share my experiences/doubts with someone else who's using it. Cheers Nicola On Monday 24 Oct 2005 02:00, you wrote: > I believe you're right in your assumptions about "observe" and > "observe_innovation". > > It you look into the sourcecode for EKF (bayesFlt.cpp), you'll discover > that "observe" first calculates the innovation based on the difference > between the actual and predicted measure, and then calls > "observe_innovation" internally. > > regards, > Fredrik Orderud > Ph.D student in Computer Science, NTNU > > ----- Original Message ----- > From: "Nicola Bellotto" <nb...@es...> > To: <bay...@li...> > Sent: Saturday, October 22, 2005 7:24 PM > Subject: [Bayes++] "observe" and "observe_innovation" for EKF > > > I've some doubt about the use of the two funtions "observe" and > > "observe_innovation". From what I've understood, they do the same job > > (correction step), but: > > - "observe" computes internally the innovation from the given real > > measure 'z'; > > - "observe_innovation" needs the innovation 's' to be computed externally > > and > > then passed as argument. > > Is that correct? -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 E-Mail: nb...@es... URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Fredrik O. <fre...@id...> - 2005-10-24 01:00:22
|
I believe you're right in your assumptions about "observe" and "observe_innovation". If you look into the source code for the EKF (bayesFlt.cpp), you'll discover that "observe" first calculates the innovation, based on the difference between the actual and predicted measurement, and then calls "observe_innovation" internally. regards, Fredrik Orderud Ph.D student in Computer Science, NTNU ----- Original Message ----- From: "Nicola Bellotto" <nb...@es...> To: <bay...@li...> Sent: Saturday, October 22, 2005 7:24 PM Subject: [Bayes++] "observe" and "observe_innovation" for EKF > I have some doubts about the use of the two functions "observe" and > "observe_innovation". From what I've understood, they do the same job > (correction step), but: > - "observe" computes the innovation internally from the given real measurement > 'z'; > - "observe_innovation" needs the innovation 's' to be computed externally > and > then passed as argument. > Is that correct? |
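In practice the choice only matters when you want to manipulate the innovation yourself before the correction step, for example to wrap an angle. A small usage sketch (the filter kf, the model obs_model, the measurement z and the wrap_angle helper are placeholders; the observe/observe_innovation members are assumed to follow bayesFlt.hpp):

// 1) let the filter form the innovation internally:
kf.observe(obs_model, z);                       // uses s = z - h(x) under the hood

// 2) form the innovation yourself, then hand it over:
const FM::Vec& z_pred = obs_model.h(kf.x);      // predicted measurement
FM::Vec s(1);
s[0] = wrap_angle(z[0] - z_pred[0]);            // wrap_angle is a hypothetical helper
kf.observe_innovation(obs_model, s);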
From: Nicola B. <nb...@es...> - 2005-10-22 17:25:01
|
Hi, I have some doubts about the use of the two functions "observe" and "observe_innovation". From what I've understood, they do the same job (correction step), but: - "observe" computes the innovation internally from the given real measurement 'z'; - "observe_innovation" needs the innovation 's' to be computed externally and then passed as argument. Is that correct? Cheers Nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 E-Mail: nb...@es... URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
From: Nicola B. <nb...@es...> - 2005-10-13 10:19:53
|
Hi, does anybody have an example that makes use of the "Linrz_correlated_observe_model" class? Cheers Nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 E-Mail: nb...@es... URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
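For later readers of the archive, the bare-bones shape of such a model might look like the following. This is an untested sketch only: the class name is made up, the constructor argument order (state size, measurement size) and the inherited members Hx (Jacobian) and Z (correlated noise covariance) should be verified against bayesFlt.hpp, and the noise values are placeholders.

class RangeBearingObserve : public Bayesian_filter::Linrz_correlated_observe_model
{
public:
	RangeBearingObserve() : Linrz_correlated_observe_model(2, 2), z_pred(2)
	{
		Z.clear();                      // correlated observation noise covariance
		Z(0,0) = 0.01;  Z(1,1) = 0.02;
		Z(0,1) = Z(1,0) = 0.005;        // off-diagonal term (must keep Z positive definite)
	}
	const FM::Vec& h (const FM::Vec& x) const
	{	// non-linear measurement: bearing and range of a 2D position state
		z_pred[0] = std::atan2(x[1], x[0]);
		z_pred[1] = std::sqrt(x[0]*x[0] + x[1]*x[1]);
		return z_pred;
	}
	void relinearise (const FM::Vec& x)
	{	// Jacobian Hx of h, evaluated at the current estimate
		const double r2 = x[0]*x[0] + x[1]*x[1], r = std::sqrt(r2);
		Hx(0,0) = -x[1]/r2;  Hx(0,1) = x[0]/r2;
		Hx(1,0) =  x[0]/r;   Hx(1,1) = x[1]/r;
	}
private:
	mutable FM::Vec z_pred;
};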
From: Nicola B. <nb...@es...> - 2005-10-11 16:27:07
|
Hi, I've implemented an EKF, where I have a linearized observation model (Linrz_correlated_observe_model) with correlated noise. The covariance matrix of such a noise is the following: // | * 0 0 0 0 0 0 0 0 | // | 0 * 0 0 0 0 0 0 0 | // | 0 0 * 0 0 0 0 0 0 | // | 0 0 0 * 0 0 0 0 0 | // Z = | 0 0 0 0 * 0 0 0 0 | // | 0 0 0 0 0 * * 0 0 | // | 0 0 0 0 0 * * 0 0 | // | 0 0 0 0 0 0 0 * * | // | 0 0 0 0 0 0 0 * * | Z.clear(); Z(0,0) = VAR_alpha; Z(1,1) = VAR_beta; Z(2,2) = VAR_r; Z(3,3) = VAR_v; Z(4,4) = VAR_omega; Z(5,5) = VAR_psi; Z(7,7) = VAR_theta; Z(5,6) = VAR_psi / 0.01; Z(6,5) = Z(5,6); // probably not necessary because symmetric Z(6,6) = 2 * VAR_psi / 0.01; Z(7,8) = VAR_theta / 0.01; Z(8,7) = Z(7,8); // probably not necessary because symmetric Z(8,8) = 2 * VAR_theta / 0.01; Unfortunately, at runtime, when I call the "Covariance_scheme::observe(Linrz_correlated_observe_model &h, const FM::Vec &z)" function I have the following exception: terminate called after throwing an instance of 'Bayesian_filter::Numeric_exception' what(): S not PD in observe I've noticed that removing the off-diagonal elements of Z, Z(5,6) and Z(7,8), there are no exceptions, but of course this is not a solution. I guess it's some mathematical error, but checking inside the Bayes++ code, I cannot understand what's wrong and how to fix it. Any suggestion? Thanks, Nicola -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 E-Mail: nb...@es... URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
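For what it is worth, the two 2x2 blocks with off-diagonal terms above are themselves not positive definite, which can easily make S = Hx.X.Hx' + Z fail the positive-definite check. A 2x2 block [[a, b],[b, c]] is positive definite only if a > 0 and a*c > b*b; with a = VAR_psi, b = VAR_psi/0.01 and c = 2*VAR_psi/0.01 this gives a*c = 200*VAR_psi*VAR_psi while b*b = 10000*VAR_psi*VAR_psi, so the condition fails (and likewise for the VAR_theta block). A quick standalone check of that arithmetic:

#include <iostream>
int main()
{
	// any positive VAR_psi shows the problem for this choice of cross terms
	const double VAR_psi = 1.0, dt = 0.01;
	const double a = VAR_psi, b = VAR_psi/dt, c = 2.*VAR_psi/dt;
	std::cout << "det = " << a*c - b*b << std::endl;   // negative => block not positive definite
	return 0;
}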
From: Fredrik O. <fre...@id...> - 2005-09-27 23:39:54
|
----- Original Message ----- From: "Michael Stevens" <ma...@mi...> To: <bay...@li...> Sent: Tuesday, September 27, 2005 10:01 PM Subject: [Bayes++] Re: Doxygen documentation > I started work on some experimental Doxygen comments once before. It > didn't > get very far. There are still some /** comments in unsFlt.xpp. > > Apart from the time required I came across two problems. > a) Many of the member functions group naturally together. In the resulting > documentation I wanted to keep this grouping. That is with a function > declerations followed by the explanitory text. You probably have to modify your comments to be either above the functions; or after, on the same line. It would be far easier to get into Bayes++ if the doxgen-doc had been improved to also include function descriptions. > b) At the moment the comments are seperated between the .cpp and .hpp > files. > In the .hpp there are shorter interface comments related to the interface > specification. In the .cpp files there are algorithm descriptions. I was > never clear how I would put all this together. You can choose which commments to use for documentation, by using "/**" in .cpp and "/*" in .hpp, or vice versa. But I think the .hpp commments takes predecence if you use "/**" in both .hpp and .cpp. >> I would gladly help you in the task of "fixing" the comments to comply >> with >> Doxygen if you're interested in any help. > > That would be very helpful and a great offer. From your previous comments > I > assume you are using code from the CVS HEAD and not one of the 2003.8 > releases. Yes, I've been using the CVS-code. I am, however, quite fresh when it comes to sourceforge-projects. I only registered recently, with username "orderud". How does one gain write-access to the repostories? > I have been working on this code recently with the aim of putting out a > new > release based on the code in the HEAD. I have some ideas how to intergrate > the 'by-product' work in the head with the previous interface. > > The CVS HEAD also has and FAQ page. Which I think will help in the future. > Any other ideas how improve the documentation would also be great. I assume your'e talking about "Bayes++FAQ.html", which seems to be an improved and extended version of the FAQ on the website. The improved "noise coupling" answer looks like a most welcome addition, as the sourcecode is a bit vauge on this. regards, Fredrik Orderud |
From: Michael S. <ma...@mi...> - 2005-09-27 20:01:29
|
Hello Fredrick. On Sunday 25 Sep 2005 23:02, Fredrik Orderud wrote: > The automatically generated Doxygen documentation for Bayes++ does not seem > to include the comments found in the code. This is probably due to the fact > you've written the comments between the function signature and the function > body; while Doxygen prefers comments to be written before the function > signatures (or after the body). You will also need to start each comment > with double asterix' ("/**") to mark them as code-documentation. I started work on some experimental Doxygen comments once before. It didn't get very far. There are still some /** comments in unsFlt.xpp. Apart from the time required I came across two problems. a) Many of the member functions group naturally together. In the resulting documentation I wanted to keep this grouping. That is with a function declerations followed by the explanitory text. b) At the moment the comments are seperated between the .cpp and .hpp files. In the .hpp there are shorter interface comments related to the interface specification. In the .cpp files there are algorithm descriptions. I was never clear how I would put all this together. > I would gladly help you in the task of "fixing" the comments to comply with > Doxygen if you're interested in any help. That would be very helpful and a great offer. From your previous comments I assume you are using code from the CVS HEAD and not one of the 2003.8 releases. I have been working on this code recently with the aim of putting out a new release based on the code in the HEAD. I have some ideas how to intergrate the 'by-product' work in the head with the previous interface. The CVS HEAD also has and FAQ page. Which I think will help in the future. Any other ideas how improve the documentation would also be great. Thanks, Michael -- ___________________________________ Michael Stevens Systems Engineering 34128 Kassel, Germany Phone/Fax: +49 561 5218038 Navigation Systems, Estimation and Bayesian Filtering http://bayesclasses.sf.net ___________________________________ |
From: Nicola B. <nic...@gm...> - 2005-09-26 13:23:09
|
Hi Baskar, On Monday 26 Sep 2005 01:20, Baskar Jayaraman wrote: > Hi Nicola, I saw your following question on the bayes++ ML. I am also a > little bit confused about modeling Q for my problem in bayes++ and would > like to see if you can help me. I am using a linear_predict model to model > the following problem: > x1(k+1) = x1(k) + u1(k) - u2(k) + w1(k) > x2(k+1) = x2(k) + u2(k) - u3(k) + w2(k) > y(k) = x1(k) + x2(k) + v(k) > w1, w2, v are noises. u1, u2 are control actions. u3 is a disturbance on x2. > y is the observation. u1, u2 and u3 have noises which I am modeling as being > included in w1 and w2. This causes a difficulty because u2 is there in both > the x1 and x2 equations and hence would make w1 and w2 correlated, and hence the > covariance matrix would have off-diagonal elements. The filter is to > estimate x1 and x2. My question is how to specify G and q of bayes++. I > know from the examples that G is a 2x2 matrix and q is a 2x1 vector. But I > am lost because the code calls q a covariance. I would appreciate any > light you can throw on modeling this in bayes++. Actually that comment is incorrect, because the covariance is indeed a matrix. I think the following answer, which I got from Michael a few weeks ago, may help to understand how to use 'q' and 'G': ... G and q together represent the process (predict) noise. q is the noise variance (a vector) and G is the noise coupling. In this case the process model is x(k+1) = f(x(k)) + G.q(k), where q(k) is Gaussian white noise with variance q. This leads to a Kalman filter covariance update for the linear case X(k+1) = F.X(k).F' + G.q.G'. This is equivalent to X(k+1) = F.X(k).F' + Q, where Q = G.q.G'. There are a couple of reasons for expressing the process noise in this way. a) For factorised filters (such as the UD_scheme) it is in the perfect form. b) The same noise is often additive to more than one element of the state. In this case the size of q is less than that of x, and G provides a physically easy to interpret description of how the elements of q affect x. ... However, instead of including the control noise in w(k), I'd rather extend the state vector, trying to estimate such a noise. For example, a simple system like this: x(k+1) = x(k) + u(k) + n(k) + w(k), y(k+1) = x(k) + v(k), where n(k) is the noise of the control u(k), could be extended as follows: x1(k+1) = x1(k) + x2(k) + u(k) + w(k), x2(k+1) = n(k), y(k+1) = x1(k) + v(k). I'm posting this on the bayes++ mailing list, so maybe someone with more experience than me can give you some suggestions. Cheers, Nicola > Thanks for your help. > Baskar > Your question on the ML: > ------------------------------- > "I'm using a "Linear_predict_model" (but this is related also to other > models...) for implementing an EKF and there's something I cannot > understand. > In "bayesFlt.hpp" it is reported that the vector "q" is the covariance of > the state noise w(k)... but isn't such a covariance a symmetric matrix > (normally called "Q")? So, if I want to represent my covariance Q, how should > I > do it?" -- ------------------------------------------ Nicola Bellotto University of Essex Department of Computer Science Wivenhoe Park Colchester CO4 3SQ United Kingdom Room: 1N1.2.8 Tel. +44 (0)1206 874094 E-Mail: nb...@es... URL: http://privatewww.essex.ac.uk/~nbello ------------------------------------------ |
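Applied to Baskar's system: if n1, n2, n3 are the independent noises on u1, u2, u3 with variances s1, s2, s3, the coupling matrix can express the fact that n2 enters both state equations with opposite sign. In Bayes++ terms that is a 2-state model with a 3-element q. A sketch using the Linear_predict_model members shown in the examples (s1, s2, s3 and the model object name are placeholders):

Bayesian_filter::Linear_predict_model predict(2, 3);   // x_size = 2, q_size = 3
// state transition is the identity plus the controls, so Fx = I
predict.Fx(0,0) = 1.;  predict.Fx(0,1) = 0.;
predict.Fx(1,0) = 0.;  predict.Fx(1,1) = 1.;
// noise variances of u1, u2, u3
predict.q[0] = s1;  predict.q[1] = s2;  predict.q[2] = s3;
// coupling: w1 = n1 - n2,  w2 = n2 - n3
predict.G(0,0) = 1.;  predict.G(0,1) = -1.;  predict.G(0,2) =  0.;
predict.G(1,0) = 0.;  predict.G(1,1) =  1.;  predict.G(1,2) = -1.;
// Implied full covariance: Q = G.diag(q).G' = [[s1+s2, -s2], [-s2, s2+s3]],
// i.e. exactly the correlated off-diagonal term the question was about.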
From: Fredrik O. <fre...@id...> - 2005-09-25 22:21:04
|
Sorry that I didn't notice your "SIR" classes. They seem to do exactly what I wanted :). Once again, thanks for your great work! :) The abstract "SIR_random" class does, however, still need a default, easily accessible implementation. The "Boost_SIR_random_helper" class found in Matlab/matlabBfilter.hpp and MuPad/bfilter.cpp seems to be a great candidate for a default random implementation. Would it be possible to extend this class with the (optional) possibility of proper seeding, and include it in "SIRFlt.hpp" (or another file in "BayesFilter/")? This would help avoid the need to create a custom RNG for each project (like Rtheta_random, Test_random, SLAM_random, etc.), as they all contain basically the same implementation. As pointed out in a previous mail, I would gladly assist you in any future development of Bayes++, if you're interested in any help. regards, Fredrik Orderud Ph.D student in Computer Science, NTNU ----- Original Message ----- From: "Michael Stevens" <ma...@mi...> To: <bay...@li...> Sent: Saturday, September 24, 2005 10:49 AM Subject: Re: [Bayes++] Code issues >> I have also been unable to find a method for generating samples from the >> models. It would have been great to be able to get samples with desired >> covariance, based on prediction and observation models. Typical usage >> would be to simulate a process, and to generate observations from it. > There is a Gaussian sampler as a helper class in the SIR filter. In > "SIRFlt.hpp" I define the class template 'Sampled_general_predict_model' > and the two predefined classes 'Sampled_LiAd_predict_model' and > 'Sampled_LiInAd_predict_model'. > > These convert Linear or Linrz predict models into 'Sampled_predict' > models. > You define the Gaussian covariance by the values of G and q in the predict > model. This is much the same as your 'SampleCorelated' function. Except I > think I have the spelling correct in this case :-) > > In the case of observation models Bayes++ has model generalisers in > "models.hpp". For example 'General_LiUnAd_observe_model' generalises a > Linear > Uncorrelated Additive noise observe model. The generalised model is also a > 'Likelihood_observe_model' and so the likelihood of a state given an > observation can be computed with the 'L' function. This allows you to use > Gaussian noise models to resample, for example in the 'SIR_scheme'. > > Best regards, and thanks for the feedback, > Michael |
From: Fredrik O. <fre...@id...> - 2005-09-25 21:03:00
|
The automatically generated Doxygen documentation for Bayes++ does not seem to include the comments found in the code. This is probably because you've written the comments between the function signature and the function body, while Doxygen prefers comments to be written before the function signature (or after the body). You will also need to start each comment with a double asterisk ("/**") to mark it as code documentation. I would gladly help you with the task of "fixing" the comments to comply with Doxygen if you're interested in any help. A simple example of Doxygen code comments can be found at http://www.stack.nl/~dimitri/doxygen/docblocks.html regards, Fredrik Orderud Ph.D student in Computer Science, NTNU |
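As a generic illustration of the placement issue (not code taken from Bayes++ itself):

/** Brief description picked up by Doxygen.
 *  Longer description; \param and \return tags are also recognised here.
 */
void predict (const Vec& x);    // comment placed before the declaration: documented

void predict (const Vec& x)
/* comment placed between signature and body: ignored by Doxygen */
{
	// ...
}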