<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Changepoints on Gerardo Duran-Martin</title><link>https://grdm.io/tags/changepoints/</link><description>Recent content in Changepoints on Gerardo Duran-Martin</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Fri, 10 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://grdm.io/tags/changepoints/index.xml" rel="self" type="application/rss+xml"/><item><title>A Predictive View on Streaming Hidden Markov Models</title><link>https://grdm.io/articles/streaming-hmm2026/</link><pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate><guid>https://grdm.io/articles/streaming-hmm2026/</guid><description>&lt;h2 id="abstract"&gt;Abstract&lt;/h2&gt;
&lt;p&gt;We develop a predictive-first optimisation framework for streaming hidden Markov models. Unlike classical approaches that prioritise full posterior recovery under a fully specified generative model, we assume access to regime-specific predictive models whose parameters are learned online under a fixed transition prior over regimes. Our objective is to sequentially identify latent regimes while maintaining accurate step-ahead predictive distributions. Because the number of possible regime paths grows exponentially over time, exact filtering is infeasible. We therefore formulate streaming inference as a constrained projection problem in predictive-distribution space: under a fixed hypothesis budget, we approximate the full posterior predictive by the forward-KL optimal mixture supported on S paths. The solution is the renormalised top-S posterior-weighted mixture, providing a principled derivation of beam search for HMMs. The resulting algorithm is fully recursive and deterministic, performing beam-style truncation with closed-form predictive updates and requiring neither EM nor sampling. Empirical comparisons against Online EM and Sequential Monte Carlo under matched computational budgets demonstrate competitive prequential performance.&lt;/p&gt;</description></item><item><title>A unifying framework for generalised Bayesian online learning in non-stationary environments</title><link>https://grdm.io/articles/bone2025/</link><pubDate>Sat, 01 Mar 2025 00:00:00 +0000</pubDate><guid>https://grdm.io/articles/bone2025/</guid><description>&lt;h2 id="abstract"&gt;Abstract&lt;/h2&gt;
&lt;p&gt;We propose a unifying framework for methods that perform probabilistic online learning in non-stationary environments. We call the framework BONE, which stands for generalised (B)ayesian (O)nline learning in (N)on-stationary (E)nvironments. BONE provides a common structure to tackle a variety of problems, including online continual learning, prequential forecasting, and contextual bandits. The framework requires specifying three modelling choices: (i) a model for measurements (e.g., a neural network), (ii) an auxiliary process to model non-stationarity (e.g., the time since the last changepoint), and (iii) a conditional prior over model parameters (e.g., a multivariate Gaussian). The framework also requires two algorithmic choices, which we use to carry out approximate inference: (i) an algorithm to estimate beliefs (posterior distribution) about the model parameters given the auxiliary variable, and (ii) an algorithm to estimate beliefs about the auxiliary variable. We show how the modularity of our framework allows many existing methods to be reinterpreted as instances of BONE and enables us to propose new methods. We experimentally compare existing methods with our proposed new method on several datasets, providing insights into which situations make each method more suitable for a given task. We provide an open-source Jax library to facilitate the adoption of this framework.&lt;/p&gt;</description></item></channel></rss>