• JohnD

    JohnD - 2007-05-15

    I'll preface this question by saying that it may be a misunderstanding of particle filtering on my part rather than any problem with the IVT. However, the following has left me a little confused. Perhaps someone can offer comment?

    Here is the PickBaseSample() function reproduced from IVT/src/ParticleFilter/ParticleFilterFramework.cpp:

    int CParticleFilterFramework::PickBaseSample()
    {
        double choice = uniform_random() * c_total;

        int low = 0;
        int high = m_nParticles;

        while (high > (low + 1))
        {
            int middle = (high + low) / 2;

            if (choice > c[middle])
                low = middle;
            else
                high = middle;
        }

        return low;
    }

    This function is used by PredictNewBases() to choose particles from the last distribution. The chosen particle's position then has drift and noise applied to give a new position; in this way we produce the new particle distribution. A random number and the value of c[middle] are used to select the particle.
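    For concreteness, here is a minimal sketch of that predict step. The identifiers and the uniform noise model are my own illustration, not taken from the IVT:

```cpp
#include <cassert>
#include <cstdlib>

// Hypothetical sketch of the predict step: take a base particle
// chosen from the old distribution, then apply a constant drift
// plus uniform noise to obtain the new particle position.
// (Names and the noise model are illustrative assumptions.)
struct Particle { double x; double y; };

// Uniformly distributed random number in [0, 1].
double UniformRandom()
{
    return std::rand() / static_cast<double>(RAND_MAX);
}

Particle PredictOne(const Particle& base, double driftX, double driftY, double noiseAmplitude)
{
    Particle p;
    p.x = base.x + driftX + noiseAmplitude * (2.0 * UniformRandom() - 1.0);
    p.y = base.y + driftY + noiseAmplitude * (2.0 * UniformRandom() - 1.0);
    return p;
}
```

    With noiseAmplitude set to 0, the new position is simply the base position plus the drift.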

    It was my understanding that particles should be sampled from the 'old' distribution with likelihood proportional to their last fitness function score, or weight. However, having updated the particle weights (stored in pi[]), the ParticleFilter() function updates the array c[] as follows:

    c_total = 0;

    for (i = 0; i < m_nParticles; i++)
    {
        c[i] = c_total;
        c_total += pi[i];
    }

    That is, for each particle i the value of c[i] is the sum of the weights (fitness function scores) of all particles with a lower index, j < i. But is it not true that there is no relationship between a particle's index and its weight, or its position in the feature space?

    Why do we use the particle index as a way to group scores? And how does this provide a measure that allows PickBaseSample() to choose an appropriate particle from the last distribution?


    • Pedram Azad

      Pedram Azad - 2007-05-22

      The algorithm is indeed not very intuitive, but it is correct.

      The idea of binary subdivision in the method PickBaseSample() comes from the code one can download from the "Condensation Algorithm Homepage".

      In 'pi', we have the probability density function (normalized to 1). In 'c', we have the accumulated 'pi' before normalization, i.e. 'c' can be considered the probability distribution multiplied by 'c_total'.

      The function uniform_random() returns a uniformly distributed random number from the interval [0, 1], so 'choice' is a uniformly distributed number from the interval [0, c_total]. The while-loop then efficiently finds, by binary subdivision, the greatest entry in 'c' that is smaller than 'choice'. Which sub-intervals within [0, c_total] correspond to which particles, and how large these sub-intervals are, depends on the distribution: a particle with a very low probability has only a small sub-interval, while a particle with a high probability has a larger one. Since the size of a particle's sub-interval is proportional to that particle's probability, the probability of picking a specific particle is proportional to its probability.
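      The selection scheme described above is standard inverse-CDF (roulette-wheel) resampling. As a self-contained sketch of the two pieces, with function names of my own choosing rather than the IVT's:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Build the cumulative array 'c' from the weights 'pi', exactly as
// in the for-loop quoted above: c[i] is the sum of all weights with
// index j < i; the return value is the total weight (c_total).
double BuildCumulative(const std::vector<double>& pi, std::vector<double>& c)
{
    double cTotal = 0.0;
    c.resize(pi.size());
    for (std::size_t i = 0; i < pi.size(); i++)
    {
        c[i] = cTotal;
        cTotal += pi[i];
    }
    return cTotal;
}

// Binary subdivision as in PickBaseSample(): find the index of the
// greatest entry in 'c' that is smaller than 'choice'.
int PickSample(const std::vector<double>& c, double choice)
{
    int low = 0;
    int high = static_cast<int>(c.size());
    while (high > low + 1)
    {
        const int middle = (high + low) / 2;
        if (choice > c[middle])
            low = middle;
        else
            high = middle;
    }
    return low;
}
```

      Any choice that falls into the sub-interval between c[i] and c[i+1] maps back to index i, so particles are picked with probability proportional to their weight.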


      For example, consider two particles:

      Particle 1: pi = 0.2
      Particle 2: pi = 0.8
      c_total = 1

      In 'c', we then have: particle 1: 0.0 and particle 2: 0.2 (see the for-loop in John's posting). Now, a uniformly distributed number is taken from [0, 1]. The probability of picking particle 1 is 0.2 (interval 0.0-0.2), and the probability of picking particle 2 is 0.8 (interval 0.2-1.0), i.e. exactly proportional to their probabilities.
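      The 0.2 / 0.8 split in this example can also be checked empirically. Here is a small sketch (my own code, not from the IVT) that draws many uniform samples and counts how often the first particle's sub-interval is hit:

```cpp
#include <cassert>
#include <cstdlib>

// Draw 'trials' uniform samples from [0, w0 + w1] and return the
// fraction that fall into the first particle's sub-interval [0, w0].
// With w0 = 0.2 and w1 = 0.8, the result should be close to 0.2.
double PickFraction(double w0, double w1, int trials, unsigned int seed)
{
    std::srand(seed);
    const double cTotal = w0 + w1;
    int pickedFirst = 0;
    for (int t = 0; t < trials; t++)
    {
        const double choice = cTotal * (std::rand() / static_cast<double>(RAND_MAX));
        if (choice <= w0)
            pickedFirst++;
    }
    return pickedFirst / static_cast<double>(trials);
}
```

      With enough trials the fraction converges to the first particle's probability, confirming that sub-interval size controls selection likelihood.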

