Obtain Kalman gains from parallel algorithm #10
-
First of all: thank you so much for providing these code examples! They really helped me understand the associative operators in the parallelized Kalman filter. I am facing a situation where I am not only interested in the final means and covariances, but also need to return the Kalman gains. Can anyone help me with some intuition here? Thanks!
-
I think (I need to check, but I'm on the phone now) that the result of the associative scan for the A matrix should be the gain. Have you tried this?
-
Thanks for the quick answer! That was my first assumption, but the result of the associative scan operator for the A matrix is of shape $n_x \times n_x$, while the Kalman gains should be of shape $n_x \times n_y$. I will try my hand at some more linear algebra to see if I can figure it out from here.
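To spell out the shape mismatch (this is just the standard Kalman filter algebra, nothing specific to the parallel formulation): the gain at step $k$ is

$$P_k^- = F P_{k-1} F^\top + Q, \qquad S_k = H P_k^- H^\top + R, \qquad K_k = P_k^- H^\top S_k^{-1},$$

and since $H$ is $n_y \times n_x$, the gain $K_k$ is $n_x \times n_y$, whereas the scan's $A$ element maps states to states and therefore has to be $n_x \times n_x$.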
-
Right. I'll check it out later and revert. It's probably easy. Can you also explain why you need it? Maybe you can do what you want without it.
-
It's a bit more involved... I am using the KF inside an inverse optimal control algorithm, which assumes that an agent perceives the world according to the KF and acts according to a linear controller. Here's my paper if you're interested. In the algorithm, I am setting up a linear dynamical system describing the behavior of the agent, which contains the agent's Kalman gains. I am trying to figure out if the parallel Kalman filter can speed up my inverse optimal control algorithm. Thanks again for your help!
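For a rough sketch of where the gains enter (standard LQG closed-loop algebra in my own notation, not necessarily the paper's): with feedback $u_t = -L \hat{x}_t$ and the filter update $\hat{x}_{t+1} = (F \hat{x}_t + B u_t) + K_{t+1}\big(y_{t+1} - H (F \hat{x}_t + B u_t)\big)$, the joint dynamics of state and estimate are

$$\begin{pmatrix} x_{t+1} \\ \hat{x}_{t+1} \end{pmatrix} = \begin{pmatrix} F & -BL \\ K_{t+1} H F & F - BL - K_{t+1} H F \end{pmatrix} \begin{pmatrix} x_t \\ \hat{x}_t \end{pmatrix} + \text{noise},$$

so the gains $K_t$ appear explicitly in the system matrix of the joint process.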
-
So, the answer is: if there is a direct way, I can't see it. The gain is a non-linear function of the innovation covariance, so it's not obvious from the current formulation.
On the other hand, you can relatively easily get the gains in parallel after the fact: once you have obtained the filtering means and covariances, you can simply apply these steps in parallel to the `filtered_covariances` shifted by 1:

```python
P = model.F @ P @ model.F.T + model.Q
S = model.H @ P @ model.H.T + model.R
K = jsc.linalg.solve(S, model.H @ P, assume_a='pos').T
```

I.e., if the function is `get_K(P)`, you can do

```python
Ps = jnp.insert(fPs[:-1], 0, P0, axis=0)  # prepend P0 and drop the last filtered covariance
Ks = jax.vmap(get_K)(Ps)
```

and that should do the trick.
There may be a trick to do it all at once rather than this way. To be honest, for the longest time I was sure that we couldn't obtain the marginal likelihood of the Kalman filter from associative scan only, but these guys managed to:

Maybe @matthew9671, who visibly derived the marginal likelihood formula, has an idea about the gain too?
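To make that concrete, here is a minimal self-contained sketch of the post-hoc computation (assuming time-invariant `F`, `H`, `Q`, `R`; the helper names `make_get_K` and `gains_from_filtered_covariances` are just illustrative, not from this repo):

```python
import jax
import jax.numpy as jnp
import jax.scipy as jsc


def make_get_K(F, H, Q, R):
    # Map the filtered covariance at step k-1 to the Kalman gain at step k.
    def get_K(P):
        # One prediction step from the previous filtered covariance.
        P_pred = F @ P @ F.T + Q
        # Innovation covariance, then the gain K = P_pred H^T S^{-1}.
        S = H @ P_pred @ H.T + R
        return jsc.linalg.solve(S, H @ P_pred, assume_a='pos').T
    return get_K


def gains_from_filtered_covariances(F, H, Q, R, P0, fPs):
    # Shift by one: the gain at step k uses the filtered covariance at k-1,
    # with the prior covariance P0 prepended for the first step.
    Ps = jnp.insert(fPs[:-1], 0, P0, axis=0)
    return jax.vmap(make_get_K(F, H, Q, R))(Ps)
```

For a time-varying model you would stack the per-step matrices and `vmap` over them as well.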
-
Thank you so much! That did the trick. I would also be curious to see if it can be done within the associative scan itself.