This reminds me of my Achilles' heel during undergrad: Kalman filters.

No matter how much I read about the topic, I just could not grok it. The whole "state update" algorithm and the sheer number of different variables threw me off. Does anyone else feel the same way?




You are not alone! From what I've seen, many people struggle with Kalman filters. If you want to build an in-depth understanding of them, I've heard lots of good things about this online book: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Pyt...


Yes, indeed!

What worked for me in the end was to first understand state observers (e.g. Luenberger observers) and then consider a Kalman filter as an optimal observer, where the observer gain is the steady-state Kalman gain.

Once I saw the whole duality between state-feedback design and observer design -- and between LQR and the Kalman filter -- it all made sense :-)
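
For anyone who wants to see that connection concretely, here's a minimal sketch (a toy constant-velocity model with made-up noise covariances, using scipy's Riccati solver) of computing the steady-state Kalman gain, which you can then plug in as a fixed Luenberger-style observer gain:

    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])          # state transition (position, velocity)
    C = np.array([[1.0, 0.0]])          # we only measure position
    Q = 0.01 * np.eye(2)                # process noise covariance (made up)
    R = np.array([[1.0]])               # measurement noise covariance (made up)

    # The estimation Riccati equation is the dual (A -> A^T, B -> C^T)
    # of the LQR one, so the same solver works:
    P = solve_discrete_are(A.T, C.T, Q, R)          # steady-state covariance
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)    # steady-state Kalman gain
    L = A @ K                                       # predictor-form observer gain

    # Fixed-gain observer: x_hat = A @ x_hat + L @ (y - C @ x_hat) each step,
    # i.e. a Luenberger observer whose gain happens to be the optimal one.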


The book "Probabilistic Robotics" explains the Kalman filter really well by putting it in the larger context of Bayesian filters. I only understood it after reading that book.


I haven't read that book, but seeing it as a Bayesian filter is what made it click for me. All the control theory guys who just handwaved with "it's basically a stochastic Luenberger observer" didn't capture the essence of the filter for me. With a strong statistics background, it was much better to go from Bayesian first principles to the Kalman filter.
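
To make that concrete: the measurement update is just Bayes' rule with a Gaussian prior and a Gaussian likelihood, i.e. multiplying two Gaussians and renormalizing. A minimal sketch with made-up numbers:

    # Bayes' rule for a Gaussian prior and Gaussian likelihood: the
    # posterior is the (renormalized) product, which is again Gaussian.
    def fuse(mu_prior, var_prior, mu_meas, var_meas):
        mu = (var_meas * mu_prior + var_prior * mu_meas) / (var_prior + var_meas)
        var = (var_prior * var_meas) / (var_prior + var_meas)
        return mu, var

    # fuse(0.0, 4.0, 1.0, 1.0) -> (0.8, 0.8): the posterior variance is
    # smaller than either input, and rearranging mu gives the familiar
    # "mu_prior + K * (mu_meas - mu_prior)" Kalman update form.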



I totally understand where you are coming from. I read multiple tutorials and even implemented Kalman filters in Python, Matlab, and C++ for a balancing robot project. I never really understood what was happening, even though I knew what the Kalman filter is used for and when I need it. The thing that "clicked" for me is that combining multiple measurements always gives a more accurate estimate than a single measurement. That's why there is the feedback loop that derives the predicted value and its associated uncertainty. By combining the predicted value and the measured value (and their uncertainties), you get a more accurate estimate. Then you can use this more accurate estimate to "correct" your predictions, so that the next prediction has even less uncertainty. This recursion is why Kalman filter estimates converge toward the "true" state.
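
That loop is much easier to see in one dimension than in the full matrix form. A minimal sketch of the predict/update cycle described above (all noise variances made up):

    # 1D Kalman filter tracking a roughly constant value.
    def kalman_1d(measurements, meas_var, process_var, x0=0.0, p0=1000.0):
        x, p = x0, p0                 # estimate and its variance
        for z in measurements:
            p = p + process_var       # predict: uncertainty grows a bit
            k = p / (p + meas_var)    # gain: how much to trust the measurement
            x = x + k * (z - x)       # update: blend prediction and measurement
            p = (1 - k) * p           # fused variance < either one alone
        return x, p

    # kalman_1d([5.1, 4.8, 5.3, 4.9], meas_var=0.5, process_var=0.01)
    # -> estimate near 5.0, with variance well below the 0.5 of any
    #    single measurement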


> The whole "state update" algorithm and the sheer number of different variables threw me off.

IMO it starts with the name: it tells you nothing and probably implies something different. It is the "dynamic programming" of control theory.


So true. "Filter" should be reserved for things like low-pass filters.



