<h1>A unifying framework for generalised Bayesian online learning in non-stationary environments</h1>
<p><a href="https://grdm.io/articles/bone2025/">https://grdm.io/articles/bone2025/</a>, 1 March 2025. From the generalised-Bayes article listing at <a href="https://grdm.io/tags/generalised-bayes/">grdm.io/tags/generalised-bayes/</a> on Gerardo Duran-Martin's site.</p>
<h2 id="abstract">Abstract</h2>
<p>We propose a unifying framework for methods that perform probabilistic online learning in non-stationary environments. We call the framework BONE, which stands for generalised (B)ayesian (O)nline learning in (N)on-stationary (E)nvironments. BONE provides a common structure for tackling a variety of problems, including online continual learning, prequential forecasting, and contextual bandits. The framework requires three modelling choices: (i) a model for measurements (e.g., a neural network), (ii) an auxiliary process that models non-stationarity (e.g., the time since the last changepoint), and (iii) a conditional prior over model parameters (e.g., a multivariate Gaussian). It also requires two algorithmic choices, which carry out approximate inference under the framework: (i) an algorithm to estimate beliefs (the posterior distribution) over the model parameters given the auxiliary variable, and (ii) an algorithm to estimate beliefs over the auxiliary variable. We show how the modularity of the framework allows many existing methods to be reinterpreted as instances of BONE, and how it lets us propose new ones. We experimentally compare existing methods with our proposed new method on several datasets, providing insight into the situations that make each method more suitable for a specific task. We provide an open-source JAX library to facilitate the adoption of the framework.</p>
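<p>To make the structure concrete, here is a minimal sketch of a BONE-style update loop instantiated with the simplest possible choices: a conjugate Gaussian measurement model, the run length since the last changepoint as the auxiliary variable, and a Gaussian conditional prior, with exact conjugate updates for the parameter beliefs and an Adams-MacKay-style discrete filter for the run-length beliefs. This is an assumption-laden illustration, not the API of the paper's JAX library; all names and constants below are invented for the example.</p>
<pre><code class="language-python"># A minimal BONE-style instance: conjugate Gaussian measurement model,
# run-length auxiliary variable, exact conditional parameter updates, and a
# discrete Bayesian filter over run lengths. Illustrative only.
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp
from jax.scipy.stats import norm

HAZARD = 0.05                    # prior changepoint probability per step
OBS_VAR = 1.0                    # known measurement-noise variance
PRIOR_MU, PRIOR_VAR = 0.0, 10.0  # conditional prior over the latent mean

def bone_step(log_rl, mu, var, y):
    """One online update; log_rl[i], mu[i], var[i] hold the run-length
    log-probability and the conditional Gaussian belief for run length i."""
    # Algorithmic choice (i): predictive likelihood under each conditional belief.
    log_like = norm.logpdf(y, mu, jnp.sqrt(var + OBS_VAR))
    # Algorithmic choice (ii): filter the auxiliary variable (the run length);
    # each hypothesis either grows by one step or resets on a changepoint.
    log_grow = log_rl + log_like + jnp.log1p(-HAZARD)
    log_reset = logsumexp(log_rl + log_like) + jnp.log(HAZARD)
    new_log_rl = jnp.concatenate([jnp.array([log_reset]), log_grow])
    new_log_rl = new_log_rl - logsumexp(new_log_rl)     # normalise
    # Conjugate update of each conditional posterior over the mean;
    # a fresh run restarts from the conditional prior.
    post_var = 1.0 / (1.0 / var + 1.0 / OBS_VAR)
    post_mu = post_var * (mu / var + y / OBS_VAR)
    new_mu = jnp.concatenate([jnp.array([PRIOR_MU]), post_mu])
    new_var = jnp.concatenate([jnp.array([PRIOR_VAR]), post_var])
    return new_log_rl, new_mu, new_var

# A stream whose mean shifts halfway through.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
ys = jnp.concatenate([jax.random.normal(k1, (50,)),
                      5.0 + jax.random.normal(k2, (50,))])
log_rl = jnp.zeros(1)                                   # run length 0 w.p. 1
mu, var = jnp.array([PRIOR_MU]), jnp.array([PRIOR_VAR])
for y in ys:
    log_rl, mu, var = bone_step(log_rl, mu, var, y)
best = jnp.argmax(log_rl)
print(int(best), float(mu[best]))   # MAP run length and its posterior mean
</code></pre>
<p>Swapping the conjugate model for a neural network, the run length for another auxiliary process, or the exact updates for approximate ones changes the instance without changing the loop, which is the modularity the abstract describes.</p>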
<h1>Outlier-robust Kalman Filtering through Generalised Bayes</h1>
<p><a href="https://grdm.io/articles/wolf2024/">https://grdm.io/articles/wolf2024/</a>, 1 June 2024.</p>
<h2 id="abstract">Abstract</h2>
<p>We derive a novel, provably robust, efficient, and closed-form Bayesian update rule for online filtering in state-space models in the presence of outliers and misspecified measurement models. Our method combines generalised Bayesian inference with filtering methods such as the extended and ensemble Kalman filters. We use the former to establish robustness and the latter to ensure computational efficiency for nonlinear models. Our method matches or outperforms other robust filtering methods (such as those based on variational Bayes) at a much lower computational cost. We show this empirically on a range of filtering problems with outlier measurements, such as object tracking, state estimation in high-dimensional chaotic systems, and online learning of neural networks.</p>
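<p>The closed-form flavour of such an update can be illustrated as follows. The sketch below, an assumption-laden illustration rather than the paper's exact rule, down-weights each measurement by an inverse-multiquadratic function of its prediction residual; because a power-weighted Gaussian likelihood is again Gaussian, the update reduces to a standard Kalman update with the measurement covariance inflated by the inverse squared weight. The function name and the threshold constant <code>c</code> are mine.</p>
<pre><code class="language-python"># Sketch of an outlier-robust, closed-form Kalman-style update: the
# measurement covariance is inflated by an inverse-multiquadratic weight of
# the prediction residual, so outlying measurements barely move the posterior.
# Illustrative assumptions throughout; not the paper's exact update rule.
import jax.numpy as jnp

def robust_kf_step(mu, Sigma, y, F, Q, H, R, c=3.0):
    """One predict/update step for x' = F x + N(0, Q), y = H x + N(0, R)."""
    # Predict step (standard Kalman filter).
    mu_pred = F @ mu
    Sigma_pred = F @ Sigma @ F.T + Q
    resid = y - H @ mu_pred
    # Inverse-multiquadratic weight in (0, 1]: close to 1 for small residuals
    # and shrinking toward 0 as the residual grows.
    w = 1.0 / jnp.sqrt(1.0 + (resid @ resid) / c**2)
    # Generalised-Bayes update: raising the Gaussian likelihood to the power
    # w**2 is the same as inflating the measurement covariance to R / w**2,
    # so the update keeps the usual closed form.
    S = H @ Sigma_pred @ H.T + R / w**2
    K = Sigma_pred @ H.T @ jnp.linalg.inv(S)
    mu_new = mu_pred + K @ resid
    Sigma_new = Sigma_pred - K @ H @ Sigma_pred
    return mu_new, Sigma_new

# 1-D tracking with a constant-velocity model and one gross outlier.
F = jnp.array([[1.0, 1.0], [0.0, 1.0]]); Q = 0.01 * jnp.eye(2)
H = jnp.array([[1.0, 0.0]]);             R = 0.5 * jnp.eye(1)
mu, Sigma = jnp.zeros(2), jnp.eye(2)
for y in [0.9, 2.1, 30.0, 4.0, 5.1]:     # 30.0 is the outlier
    mu, Sigma = robust_kf_step(mu, Sigma, jnp.array([y]), F, Q, H, R)
print(mu)  # the outlier is soft-ignored rather than tracked
</code></pre>
<p>With the weight fixed at 1 this reduces to the standard Kalman filter; replacing the linear measurement map with a linearisation or an ensemble approximation gives the extended and ensemble variants the abstract refers to.</p>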