Note: This report is written in GitHub Flavored Markdown. Some content is invisible if you are viewing it on GitHub with the light theme.
- Introduction
- Simulated System Dynamics
- Particle Filter
  - 3.1 Sequential Importance Resampling
  - 3.2 Particle Filter Assumptions
  - 3.3 Number of Particles
  - 3.4 Multiple Target Tracking
  - 3.5 Detecting New Targets
- Results and Qualitative Analysis
  - 4.1 Main Criticism
- Improvements for Next Time
- Final Words
- References
In this project, a Rust implementation of the Sequential Importance Resampling (SIR) particle filter for multiple target tracking is presented. The SIR particle filter is a popular algorithm used in state estimation problems, and there are several extensions of the filter for tracking multiple targets in a dynamic environment. By using a simple nearest-measurement particle association approach, combined with some thresholding heuristics, a fairly simple implementation is able to track multiple targets with switching dynamics to varying degrees in the face of clutter. This is done with somewhat conservative assumptions, allowing for improvements in an actual application. The main goal of the project was to learn about the particle filter and extend it in some fashion. It's written in Rust because I wanted to start learning the language.
An arbitrary nonlinear hybrid system is constructed and simulated using 4th-order Runge-Kutta for 20 seconds. The system has 3 modes.
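As a rough illustration of the simulation loop, here is a minimal fixed-step RK4 integrator in Rust; the state layout, dynamics, and step size below are placeholders of mine, not the project's actual model:

```rust
/// Illustrative state layout: [x, y, heading].
type State = [f64; 3];

/// One fixed-step RK4 update of `x` under dynamics dx/dt = f(x).
fn rk4_step(f: impl Fn(&State) -> State, x: &State, dt: f64) -> State {
    // Helper: a + s * b, componentwise.
    let add = |a: &State, b: &State, s: f64| -> State {
        [a[0] + s * b[0], a[1] + s * b[1], a[2] + s * b[2]]
    };
    let k1 = f(x);
    let k2 = f(&add(x, &k1, dt / 2.0));
    let k3 = f(&add(x, &k2, dt / 2.0));
    let k4 = f(&add(x, &k3, dt));
    [
        x[0] + dt / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]),
        x[1] + dt / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]),
        x[2] + dt / 6.0 * (k1[2] + 2.0 * k2[2] + 2.0 * k3[2] + k4[2]),
    ]
}

fn main() {
    // One hypothetical mode: constant speed, constant turn rate.
    let (v, w) = (1.0, 0.5);
    let f = |s: &State| [v * s[2].cos(), v * s[2].sin(), w];
    let mut x: State = [0.0, 0.0, 0.0];
    let dt = 0.05;
    for _ in 0..400 {
        // 20 s of simulated time at 50 Hz.
        x = rk4_step(&f, &x, dt);
    }
    println!("state after 20 s: {:?}", x);
}
```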
Legend: Green: true state positions
As shown in the GIF above, the modes switch during the simulation, sending the targets onto different trajectories.
Further, some noise is added to the measurements.
Legend: Green: true state positions | Red: measurements
Finally, uniform clutter is also added to the measurements, and there is no way to distinguish between clutter and a measurement originating from a target based on any single frame/simulation step.
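A sketch of how such a measurement frame could be generated (assuming the `rand` and `rand_distr` crates; the noise level and area bounds are illustrative, not the values used in the simulation):

```rust
use rand::Rng;
use rand_distr::{Distribution, Normal};

/// Build one frame of measurements: every true target position perturbed by
/// Gaussian noise, plus `n_clutter` uniformly distributed false measurements.
/// Nothing in the output marks which measurement is which.
fn make_measurements(
    targets: &[[f64; 2]],
    n_clutter: usize,
    sigma: f64,     // measurement noise std-dev (illustrative)
    area: [f64; 2], // surveillance area extents (illustrative)
    rng: &mut impl Rng,
) -> Vec<[f64; 2]> {
    let noise = Normal::new(0.0, sigma).unwrap();
    let mut z: Vec<[f64; 2]> = targets
        .iter()
        .map(|t| [t[0] + noise.sample(rng), t[1] + noise.sample(rng)])
        .collect();
    for _ in 0..n_clutter {
        z.push([rng.gen_range(0.0..area[0]), rng.gen_range(0.0..area[1])]);
    }
    z
}
```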
Legend: Red: measurements (clutter and target oriented)
In the GIF above there are always 15 false measurements. See if you can keep track of the targets, with prior knowledge of where they are going. It's fairly easy while they move in a constant pattern together, but it gets harder when they jump to separate trajectories, and trying to keep your eyes on all three quickly becomes a challenge.
The particle filter is a Monte Carlo-based algorithm used to estimate the state of a system given noisy and/or partial observations. It works by representing the posterior distribution of the state using a set of particles, where each particle represents a hypothesis of the state. The particles are updated recursively based on modeled system dynamics, observed data (measurements) and modeled measurement noise, allowing us to approximate the true state posterior distribution.
The Sequential Importance Resampling (SIR) algorithm is a specific implementation of the particle filter. It involves 3 main steps: predict, update and resample. In the prediction step, particles are propagated forward in time using the system dynamics. In the update step, particles are weighted according to the likelihood of the observed data and resampled to generate a new set of particles. The resampling step aims to eliminate particles with low weights and duplicate particles with high weights, in an attempt to concentrate particles in regions of high posterior probability. This is commonly presented as a solution to the problem of particle degeneracy seen in the Sequential Importance Sampling (SIS) filter, wherein the majority of particles are quickly weighted close to zero, making them useless for estimating the posterior distribution of the state.
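To make the three steps concrete, here is a toy 1-D SIR iteration with systematic resampling (assuming the `rand` crate; the dynamics, noise levels, and measurement model are placeholders of mine, not the project's):

```rust
use rand::Rng;

/// One particle: a (toy, 1-D) state hypothesis and its importance weight.
struct Particle { x: f64, w: f64 }

/// One SIR iteration: predict, weight by the measurement, resample.
fn sir_step(particles: &mut Vec<Particle>, z: f64, rng: &mut impl Rng) {
    let n = particles.len();
    // 1. Predict: propagate through the motion model plus process noise.
    for p in particles.iter_mut() {
        p.x += 0.1 + 0.05 * (rng.gen::<f64>() - 0.5); // placeholder dynamics
    }
    // 2. Update: weight by the Gaussian likelihood of the measurement z,
    //    then normalise so the weights sum to one.
    let sigma = 0.2; // measurement noise std-dev (placeholder)
    for p in particles.iter_mut() {
        let d = z - p.x;
        p.w *= (-0.5 * d * d / (sigma * sigma)).exp();
    }
    let sum: f64 = particles.iter().map(|p| p.w).sum();
    for p in particles.iter_mut() {
        p.w /= sum;
    }
    // 3. Resample (systematic): low-weight particles tend to be dropped,
    //    high-weight particles duplicated; all weights reset to 1/n.
    let step = 1.0 / n as f64;
    let mut u = rng.gen::<f64>() * step;
    let mut cum = 0.0;
    let mut new = Vec::with_capacity(n);
    for p in particles.iter() {
        cum += p.w;
        while u < cum && new.len() < n {
            new.push(Particle { x: p.x, w: step });
            u += step;
        }
    }
    while new.len() < n {
        // Guard against floating-point round-off in the cumulative sum.
        new.push(Particle { x: particles[n - 1].x, w: step });
    }
    *particles = new;
}
```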
It should be noted that there are many particle filters, and even more variants of each of those filters. The description laid out here is what you will generally find to be the standard one, and it's the one this project is mostly based on. For more information and a comparison of different types of filters, I would recommend starting with [1].
Because this is a simulated system, where the behavior of the true state is known, there is a need to assert what information is available to the particle filter and what is not.
As mentioned, the filter has access to all measurements, including the clutter, and cannot distinguish between them, i.e. no signal amplitudes or the like are available.
Motion modes are modeled as:
with a Markov chain transition matrix as:
so it's clear that the filter does not have a completely accurate model of the true dynamics, but there is some notion of how likely it is to switch from, for instance, going straight quickly to turning sharply left or right. Additionally, the measurements given to the filter only convey position, while the filter's motion model state also contains a heading angle. In a real application, one would obviously try to get as good a model as possible, possibly by adaptive means.
The filter also adds artificial process noise in the prediction phase, for greater particle diversity, which in general helps with robustness against modeling errors.
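The mechanics of such a prediction step might look roughly like the following sketch (assuming the `rand` crate; the transition probabilities, turn rates, speed, and noise scale are placeholders, not the tuned values used here):

```rust
use rand::Rng;

// Placeholder 3-mode Markov transition matrix (rows sum to 1); the actual
// matrix used by the filter is the one given above.
const TRANS: [[f64; 3]; 3] = [
    [0.90, 0.05, 0.05], // straight    -> {straight, left, right}
    [0.10, 0.85, 0.05], // left turn   -> ...
    [0.10, 0.05, 0.85], // right turn  -> ...
];

/// Sample the next motion mode from the current mode's transition row.
fn next_mode(mode: usize, rng: &mut impl Rng) -> usize {
    let u: f64 = rng.gen();
    let mut cum = 0.0;
    for (j, p) in TRANS[mode].iter().enumerate() {
        cum += p;
        if u < cum {
            return j;
        }
    }
    TRANS[mode].len() - 1 // numerical fallback
}

/// Predict one particle: sample a mode, move with that mode's turn rate,
/// then add artificial process noise (jitter) for particle diversity.
fn predict(pos: &mut [f64; 2], heading: &mut f64, mode: &mut usize,
           dt: f64, rng: &mut impl Rng) {
    *mode = next_mode(*mode, rng);
    let turn_rate = [0.0, 1.0, -1.0][*mode]; // rad/s per mode (placeholder)
    let speed = 1.0;                         // placeholder
    *heading += turn_rate * dt;
    let jitter = 0.05; // artificial process noise scale (placeholder)
    pos[0] += speed * heading.cos() * dt + jitter * (rng.gen::<f64>() - 0.5);
    pos[1] += speed * heading.sin() * dt + jitter * (rng.gen::<f64>() - 0.5);
}
```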
So far, no mention has been made of the number of particles to use in the particle filter. The results of the standard SIR filter come from approximating, with a finite sum, the analytical form of the posterior distribution of the state that arises from applying Bayes' theorem. The approximation error decreases, as you might imagine, with the number of particles, and approaches zero as the number of particles approaches infinity.
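In symbols (notation as in [1]), the weighted particle set approximates the posterior as a finite mixture of point masses:

$$
p(x_k \mid z_{1:k}) \approx \sum_{i=1}^{N} w_k^{(i)}\, \delta\!\left(x_k - x_k^{(i)}\right), \qquad \sum_{i=1}^{N} w_k^{(i)} = 1,
$$

where $x_k^{(i)}$ and $w_k^{(i)}$ are the state and normalized weight of particle $i$ at time $k$, and the approximation error vanishes as $N \to \infty$.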
One can try to adaptively change the number of particles in order to best use computational resources and keep error below a certain value, but that was not attempted in this implementation.
For practical reasons related to getting consistent results and making changes to certain parts of the implementation (discussed later), a particle count of 5000 was eventually chosen: a fairly high number that still allowed relatively quick iteration on the codebase. At earlier stages of development, as few as 50 and 500 particles were used without much issue, but as the scope extended to tracking multiple targets in noisy conditions, it was deemed better to just set a high particle count and eliminate one variable from the development process.
There are multiple ways to approach tracking multiple targets. What could be called the proper/direct way is to include the number of targets as part of the state. This was not done here, because the project developed from a simple single-target particle filter, and there is certainly some complexity in estimating how many targets exist in the face of clutter. If the number of targets is known, then there is no issue. A possible future project is to do it the proper way: simultaneously estimating the number of targets and including that number in a single combined state for all targets.
Either way, the approach taken here is to associate particles with their nearest measurement, and perform the update and resample steps relative to that measurement. In this way we form multiple approximations of the posterior distribution of the state around measurements. If there is no clutter / false measurements, then just pick the highest-weight particle in each group as your state estimate and you're done.
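A sketch of that association step, brute-forcing the nearest measurement per particle (details of the actual implementation may differ):

```rust
/// For each particle, find the index of its nearest measurement (squared
/// Euclidean distance), so that weighting and resampling can then be done
/// per measurement group. Assumes at least one measurement this frame.
fn associate(particles: &[[f64; 2]], measurements: &[[f64; 2]]) -> Vec<usize> {
    let d2 = |a: &[f64; 2], b: &[f64; 2]| {
        (a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2)
    };
    particles
        .iter()
        .map(|p| {
            measurements
                .iter()
                .enumerate()
                .min_by(|&(_, a), &(_, b)| d2(p, a).partial_cmp(&d2(p, b)).unwrap())
                .map(|(j, _)| j)
                .expect("no measurements to associate with")
        })
        .collect()
}
```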
Legend: Green: true state positions | Orange: state estimates
Dark Matt Pink: clutter | Red: measurements | Blue: particles
As we can see from the GIF on the left above, targets that come into view later will not be detected until they come close enough to a group of particles. The GIF on the right reveals another issue that will be discussed shortly.
To be able to detect new targets, we need particles around the new target. One idea is to inject some uniformly distributed particles every iteration, which should allow particles to gather around the new target over time. Random particle injection is also sometimes used to combat the loss of diversity that comes with the removal and duplication of particles in the resampling step. The approach here, though, is to skip the middleman and spawn new particles directly around the measurements each iteration.
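A minimal sketch of such measurement-directed injection (assuming the `rand` crate; the per-measurement count, spread, and initial weight are placeholders):

```rust
use rand::Rng;

/// Spawn a handful of fresh particles directly around each measurement so a
/// newly appeared target gets particles immediately. Particles spawned on
/// clutter should be down-weighted within a few steps, since clutter does
/// not move like a target.
fn inject_at_measurements(
    particles: &mut Vec<([f64; 2], f64)>, // (position, weight)
    measurements: &[[f64; 2]],
    per_measurement: usize,
    spread: f64,
    rng: &mut impl Rng,
) {
    for z in measurements {
        for _ in 0..per_measurement {
            let dx = spread * (rng.gen::<f64>() - 0.5);
            let dy = spread * (rng.gen::<f64>() - 0.5);
            // Small initial weight: new hypotheses must earn their keep.
            particles.push(([z[0] + dx, z[1] + dy], 1e-3));
        }
    }
}
```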
Legend: Green: true state positions | Orange: state estimates
Dark Matt Pink: clutter | Red: measurements | Blue: particles
Since the clutter does not move like the targets, we might expect the weights of particles distributed around the false measurements to immediately become close to zero, so that we can simply threshold estimates on the sum of the weights in a given particle group.
Legend: Green: true state positions | Orange: state estimates
Dark Matt Pink: clutter | Red: measurements | Blue: particles
Unfortunately, when there's a fair amount of clutter, life's not that easy, since clutter can appear close together on consecutive timesteps. This brings us to how we can get the filter to perform, and to the main criticism of the approach taken.
The quick-and-dirty solution was to create and fine-tune heuristics that eliminate state estimates in the spurious particle groups, so that, as far as possible, only the targets are detected. The properties chosen to filter out (pun intended) the wrong estimates for a given particle group are as follows (a sketch of these checks follows the list):

- number of particles,
- sum of particle weights,
- value of the highest-weight particle;

additionally, estimates that pass those thresholds are subject to filtering by:

- distance from any estimate 1 timestep back,
- distance from any estimate 2 timesteps back,

which sets a limit on 'teleporting' between timesteps.
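One hypothetical shape for these checks (the struct names and thresholds below are mine, the tuned values are omitted, and how brand-new targets bypass the distance gates is a design choice not shown):

```rust
/// Summary statistics for one particle group and its candidate estimate.
struct Group {
    n_particles: usize,
    weight_sum: f64,
    max_weight: f64,
    estimate: [f64; 2],
}

/// Thresholds for the heuristics (placeholder names, no tuned values shown).
struct Gate {
    min_particles: usize,
    min_weight_sum: f64,
    min_max_weight: f64,
    d1_max: f64, // max distance to any estimate 1 timestep back
    d2_max: f64, // max distance to any estimate 2 timesteps back
}

/// Accept a candidate estimate only if it passes every heuristic above.
/// `prev1`/`prev2` hold the accepted estimates 1 and 2 timesteps back.
fn accept(g: &Group, prev1: &[[f64; 2]], prev2: &[[f64; 2]], gate: &Gate) -> bool {
    let near = |hist: &[[f64; 2]], d_max: f64| {
        hist.iter().any(|e| {
            ((e[0] - g.estimate[0]).powi(2) + (e[1] - g.estimate[1]).powi(2)).sqrt()
                <= d_max
        })
    };
    g.n_particles >= gate.min_particles
        && g.weight_sum >= gate.min_weight_sum
        && g.max_weight >= gate.min_max_weight
        && near(prev1, gate.d1_max) // 'teleport' limits
        && near(prev2, gate.d2_max)
}
```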
Legend: Green: true state positions | Orange: state estimates
Dark Matt Pink: clutter | Red: measurements | Blue: particles
For the cases of 3 and 5 false measurements at all times, the filter is able to track the targets with a few dropouts and very few false positives. Detection of the third target happens after a small delay, but seems to be decent from that point onwards.
Legend: Green: true state positions | Orange: state estimates
Dark Matt Pink: clutter | Red: measurements | Blue: particles
Facing 15 false measurements at all times, the filter does struggle with some false positives, and there are a few more dropouts, which sometimes also last longer. It could be argued that we're still able to track the targets to some degree, and we see that the third, slow-moving target is again tracked better.
For the case of 35 false measurements at all times, tracking fails completely for the circular and figure-eight trajectory targets, giving at least as many false positives as true positives, and still quite a few false negatives / dropouts as well. The third target, on the other hand, shows some signs of life. The results cautiously suggest that if the area of application has very slow-moving targets that can be modeled accurately enough, a filter like this one might still give useful results under quite high levels of clutter.
Another issue, which may or may not have been noticed in the low-clutter results, is the effect the tuned heuristics have when compared to the earlier MAP estimates.
Legend: Green: true state positions | Orange: state estimates
Red: measurements | Blue: particles
The side-by-side comparison reveals that the heuristics have a negative effect on the estimates when there is no clutter. This suggests that this is generally not the approach to take going forward, unless you know there will be very little clutter and you can quickly fine-tune the heuristics to fit that situation. In that case, this approach is probably a bit simpler to implement, especially if extending a single-target filter to multi-target, or if it's your first time implementing a particle filter (as it was here).
This problem comes from the one-size-fits-all nature of tuning simple threshold heuristics, and is a fundamental problem with the approach.
- One combined state posterior distribution
- Include the number of targets explicitly in the state description
- Estimate number of targets
- Adaptive number of particles
- Adaptive motion model fitting
- Kerneled/Regularized particle filter
- Constrain state estimate motion directly (not always a good idea)
Performs alright, learned a lot, will do better next time.
[1] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, 2002. https://doi.org/10.1109/78.978374