Filter Configurations¶
The single `Filter()` handler dispatches to the appropriate filtering algorithm based on the provided `FilterConfig`. A summary is given below, followed by an exhaustive list of classes.
Available filter configurations¶
| Config class | Time domain | When it fits best |
|---|---|---|
| `KFConfig` | Discrete | Linear-Gaussian dynamics and linear-Gaussian observations (exact and optimal). |
| `EKFConfig` | Discrete | Nonlinear but differentiable Gaussian dynamics and observations (approximate). (default) |
| `UKFConfig` | Discrete | Nonlinear but differentiable Gaussian dynamics and observations (approximate). Generally more accurate, but slower, than `EKFConfig`. |
| `EnKFConfig` | Discrete | High-dimensional or expensive models with lower-dimensional structure and Gaussian observations (approximate). |
| `PFConfig` | Discrete | Arbitrary state-space models; expensive, with noisy estimates (asymptotically exact as the number of particles grows, approximate in practice). |
| `HMMConfig` | Discrete (HMM) | Finite discrete latent state space (exact and optimal). |
| `ContinuousTimeKFConfig` | Continuous-discrete | Linear-Gaussian SDE with linear-Gaussian observations (exact and optimal). |
| `ContinuousTimeEKFConfig` | Continuous-discrete | Mildly nonlinear SDE with differentiable drift and diffusion terms; Gaussian observations (approximate). |
| `ContinuousTimeUKFConfig` | Continuous-discrete | Nonlinear SDE; derivative-free; Gaussian observations (approximate). Generally more accurate, but slower, than `ContinuousTimeEKFConfig`. |
| `ContinuousTimeEnKFConfig` | Continuous-discrete | High-dimensional or expensive models with lower-dimensional structure and Gaussian observations (approximate). Performs reasonably as a default. (default) |
| `ContinuousTimeDPFConfig` | Continuous-discrete | Arbitrary state-space models; expensive, with noisy estimates (asymptotically exact as the number of particles grows, approximate in practice). |
Discrete Time Configuration Classes¶
Filter configuration dataclasses. Shared by dispatchers and integration backends.
BaseFilterConfig
dataclass
¶
Shared configuration options inherited by all filter configs.
You do not instantiate this class directly; use one of the concrete
subclasses (e.g. KFConfig, PFConfig).
The record_* fields let you save intermediate filtering outputs into the
NumPyro trace as numpyro.deterministic sites, making them accessible
after inference (e.g. for plotting filtered trajectories). None defers
to the backend's default for that quantity.
Attributes:

| Name | Type | Description |
|---|---|---|
| `record_filtered_states_mean` | `bool \| None` | Save the posterior mean \(\mathbb{E}[x_t \mid y_{1:t}]\) at each time step. |
| `record_filtered_states_cov` | `bool \| None` | Save the full posterior covariance at each step. Can be large; prefer `record_filtered_states_cov_diag` where possible. |
| `record_filtered_states_cov_diag` | `bool \| None` | Save only the marginal variances (diagonal of the covariance) at each step. |
| `record_filtered_states_chol_cov` | `bool \| None` | Save the Cholesky factor of the posterior covariance (Gaussian filters only). |
| `record_filtered_particles` | `bool \| None` | Save the full particle array at each step (particle-based filters only). |
| `record_filtered_log_weights` | `bool \| None` | Save the log importance weights at each step (particle-based filters only). |
| `record_max_elems` | `int` | Hard cap on the total number of scalar elements saved across all `record_*` outputs. |
| `cov_rescaling` | `float \| None` | Multiply all predicted covariances by this factor before the update; values slightly above `1.0` inflate the predicted uncertainty. |
| `crn_seed` | `Array \| None` | Fix the PRNG key for stochastic filters (EnKF, PF). Useful when differentiating through the filter: a fixed key makes the randomness a deterministic function of model parameters. |
| `warn` | `bool` | Whether warnings from filtering backends are emitted or suppressed. |
| `filter_source` | `FilterSource \| None` | Internal backend library. Set by each subclass; rarely needs to be changed manually. |
| `extra_filter_kwargs` | `dict` | Extra keyword arguments passed directly to the backend; useful for advanced backend-specific options. |
KFConfig
dataclass
¶
Bases: BaseFilterConfig
Kalman Filter (KF) for discrete-time linear-Gaussian models.
The exact Bayesian filter for linear-Gaussian state-space models; requires
a model built with LTI_discrete or using
LinearGaussianStateEvolution + LinearGaussianObservation. For
nonlinear Gaussian models, use EKFConfig, UKFConfig, or EnKFConfig instead.
Attributes:
| Name | Type | Description |
|---|---|---|
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
When the dynamics and observation process of a dynamical system are both linear-Gaussian, the recursive updates can be computed in closed form.
This proceeds via a "prediction" step, where the mean and covariance are propagated forward in time, and an "update" step, where the mean and covariance are updated with the observation.
The prediction step is given by:

\[
\hat x_{t|t-1} = F\,\hat x_{t-1|t-1}, \qquad P_{t|t-1} = F\,P_{t-1|t-1}\,F^\top + Q.
\]

The update step is given by:

\[
\hat x_{t|t} = \hat x_{t|t-1} + K_t \left(y_t - H\,\hat x_{t|t-1}\right), \qquad P_{t|t} = \left(I - K_t H\right) P_{t|t-1},
\]

where \(K_t\) is the Kalman gain. The Kalman gain is given by:

\[
K_t = P_{t|t-1}\,H^\top \left(H\,P_{t|t-1}\,H^\top + R\right)^{-1},
\]

where \(F\) and \(H\) are the transition and observation matrices, and \(Q\) and \(R\) are the process and observation noise covariances.
There are variants of the particular algorithm; the cuthbert implementation is the so-called "square root" form.
This provides a more numerically stable implementation of the Kalman filter.
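For concreteness, the predict/update recursion can be sketched in NumPy. This is the textbook covariance form with an illustrative helper name (`kf_step`), not the square-root form used by the cuthbert backend:

```python
import numpy as np

def kf_step(x, P, y, F, H, Q, R):
    """One textbook Kalman filter step: predict, then update with observation y."""
    # Prediction: propagate mean and covariance through the linear dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Kalman gain from the innovation covariance.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update with the observation.
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```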
References:
- For the classical reference, see: Kalman, R. E. (1960). A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, 82(1), 35-45.
- For a more modern textbook reference, see Chapter 6 of: Särkkä, S., & Svensson, L. (2023). Bayesian Filtering and Smoothing (Vol. 17). Cambridge University Press. Available Online.
- For more details on the
cuthbertimplementation, see the cuthbert documentation.
EKFConfig
dataclass
¶
Bases: BaseFilterConfig
Extended Kalman Filter (EKF) for discrete-time models.
The EKF linearizes nonlinear dynamics at the current mean estimate via a first-order Taylor expansion. It is fast and simple, but may not work well for strongly nonlinear models. The Taylor expansion is performed automatically via JAX autodiff.
This is exact (but wasteful) for linear-Gaussian models.
This is the default discrete-time filter when no filter_config is
passed to Filter.
Attributes:
| Name | Type | Description |
|---|---|---|
| `filter_emission_order` | `FilterEmissionOrder` | Linearisation order for the observation function. |
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
The EKF propagates a Gaussian approximation \(\mathcal{N}(\hat x_{t|t}, P_{t|t})\) through Jacobian linearizations of \(f\) and \(h\):

\[
\hat x_{t|t-1} = f(\hat x_{t-1|t-1}), \qquad P_{t|t-1} = F_t\,P_{t-1|t-1}\,F_t^\top + Q,
\]

where \(F_t\) is the Jacobian of \(f\) at \(\hat x_{t-1|t-1}\), and proceeds via the typical Kalman update with \(H_t\), the Jacobian of \(h\) at \(\hat x_{t|t-1}\).
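With linear models the EKF reduces exactly to the KF. The sketch below shows a single EKF step, passing the Jacobians explicitly (in the library they are computed by JAX autodiff); the helper names are illustrative:

```python
import numpy as np

def ekf_step(x, P, y, f, h, jac_f, jac_h, Q, R):
    """One EKF step: linearize f and h at the current mean estimates."""
    # Predict: mean goes through f, covariance through the Jacobian F_t.
    F_t = jac_f(x)
    x_pred = f(x)
    P_pred = F_t @ P @ F_t.T + Q
    # Update: linearize h at the predicted mean.
    H_t = jac_h(x_pred)
    S = H_t @ P_pred @ H_t.T + R
    K = P_pred @ H_t.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_t) @ P_pred
    return x_new, P_new
```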
References:
- The `cuthbert` implementation of the EKF is based on the `taylor_kf` module therein. See the cuthbert documentation for more information.
- For a more modern textbook reference, see Chapter 7 of: Särkkä, S., & Svensson, L. (2023). Bayesian Filtering and Smoothing (Vol. 17). Cambridge University Press. Available Online.
UKFConfig
dataclass
¶
Bases: BaseFilterConfig
Unscented Kalman Filter (UKF) for discrete-time models.
A derivative-free Gaussian filter that handles stronger nonlinearities than the EKF by propagating a small, deterministic set of sigma points through the dynamics. No Jacobians are computed. Slightly more expensive than the EKF but often more accurate on curved manifolds.
The default parameters (alpha, beta, kappa) work well for most
problems; they rarely need to be changed.
Attributes:
| Name | Type | Description |
|---|---|---|
| `alpha` | `float` | Spread of sigma points around the current mean. Smaller values give a tighter cluster; larger values push the sigma points further out. Defaults to \(\sqrt{3}\). |
| `beta` | `int` | Encodes prior knowledge about the distribution shape. |
| `kappa` | `int` | Secondary scaling parameter. |
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
For a state of dimension \(n\), \(2n+1\) sigma points are placed as:

\[
\chi^{(0)} = \hat x, \qquad \chi^{(\pm i)} = \hat x \pm \left(\sqrt{(n + \lambda)\,P}\right)_{i}, \quad i = 1, \ldots, n,
\]

where \(\lambda = \alpha^2 (n + \kappa) - n\) and \(\left(\sqrt{\cdot}\right)_i\) denotes the \(i\)-th column of a matrix square root (e.g. the Cholesky factor).
Each sigma point is propagated through \(f\) and \(h\); the outputs are recombined with weights depending on \(\alpha, \beta, \kappa\) to recover the predicted mean and covariance.
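The sigma-point construction can be sketched as follows, assuming the common scaled parameterisation with \(\lambda = \alpha^2 (n + \kappa) - n\) (the library's exact parameterisation may differ):

```python
import numpy as np

def sigma_points(x, P, alpha=np.sqrt(3.0), beta=2.0, kappa=0.0):
    """Generate the 2n+1 UKF sigma points with mean and covariance weights."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    # Columns of L are the scaled square-root directions of P.
    L = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x[None, :], x + L.T, x - L.T])
    w_mean = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w_mean[0] = lam / (n + lam)
    w_cov = w_mean.copy()
    w_cov[0] += 1.0 - alpha**2 + beta   # beta only affects the central covariance weight
    return pts, w_mean, w_cov
```

Recombining the points with `w_mean` recovers the mean exactly, and recombining the centred outer products with `w_cov` recovers the covariance.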
References:
- For the original paper, see: Julier, S. J., & Uhlmann, J. K. (1997). New extension of the Kalman filter to nonlinear systems. SPIE Proceedings, 3068.
- For a more modern textbook reference, see Section 8.8 of: Särkkä, S., & Svensson, L. (2023). Bayesian Filtering and Smoothing (Vol. 17). Cambridge University Press. Available Online.
PFConfig
dataclass
¶
Bases: BaseFilterConfig
Bootstrap Particle Filter (PF) for discrete-time models.
The most flexible filter: works with any model, including non-Gaussian observations and highly nonlinear dynamics. The main cost is that accuracy scales with the number of particles, so large state dimensions can become expensive.
The primary tuning knob is `n_particles`: estimates generally become better
and less noisy with more particles, at a linear computational cost.
`ess_threshold_ratio` controls the frequency of resampling; resampling more frequently
can help avoid particle degeneracy, but also increases variance.
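The resampling criterion works on the effective sample size (ESS) of the self-normalized weights: resample when the ESS falls below `ess_threshold_ratio` times the particle count. A minimal NumPy sketch (helper names are illustrative):

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS = 1 / sum(w_i^2) for self-normalized weights w_i."""
    w = np.exp(log_weights - np.max(log_weights))
    w /= w.sum()
    return 1.0 / np.sum(w**2)

def should_resample(log_weights, ess_threshold_ratio=0.5):
    """Fire resampling when the ESS drops below the threshold fraction of N."""
    n = len(log_weights)
    return effective_sample_size(log_weights) < ess_threshold_ratio * n
```

Uniform weights give an ESS of exactly `n_particles` (no resampling), while a single dominant particle drives the ESS toward 1 (resampling fires).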
Attributes:
| Name | Type | Description |
|---|---|---|
| `n_particles` | `int` | Number of particles. More particles give a lower-variance log-likelihood estimate at linear compute cost. |
| `resampling_method` | `PFResamplingConfig` | Controls the resampling algorithm and gradient behaviour. See `PFResamplingConfig`. |
| `ess_threshold_ratio` | `float` | Resampling fires when the effective sample size drops below this fraction of `n_particles`. |
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
At each step, particles are propagated through the transition and reweighted by the observation likelihood. The resulting empirical distribution converges to the true filtering distribution as the number of particles goes to infinity. The marginal log-likelihood is estimated as:

\[
\log \hat p(y_{1:T} \mid \theta) = \sum_{t=1}^{T} \log \left( \frac{1}{N} \sum_{i=1}^{N} \tilde{w}_t^{(i)} \right),
\]

where \(\tilde{w}_t^{(i)}\) is the unnormalized weight of the \(i\)-th particle at time \(t\) and \(N\) is the number of particles.
There are several different resampling algorithms available, which result in different
approximations of the score function \(\nabla_\theta \log p(y_{1:T} | \theta)\).
For more information on these options, see PFResamplingConfig.
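A minimal bootstrap particle filter log-likelihood estimator, resampling at every step, might look as follows (the model callables and helper names are illustrative, not the cuthbert API):

```python
import numpy as np

def bootstrap_pf_loglik(rng, y, x0_sampler, transition_sampler, log_obs_pdf,
                        n_particles=1000):
    """Bootstrap PF estimate of log p(y_{1:T}), resampling every step."""
    x = x0_sampler(rng, n_particles)          # initial particle cloud
    loglik = 0.0
    for y_t in y:
        x = transition_sampler(rng, x)        # propagate through the dynamics
        logw = log_obs_pdf(y_t, x)            # reweight by the observation likelihood
        # Accumulate log( (1/N) sum_i w_t^(i) ) via a log-sum-exp.
        m = np.max(logw)
        loglik += m + np.log(np.mean(np.exp(logw - m)))
        # Multinomial resampling.
        w = np.exp(logw - m)
        w /= w.sum()
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return loglik
```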
References:
- For a classical reference to particle filters, see: Doucet, A., De Freitas, N., & Gordon, N. (2001). An Introduction to Sequential Monte Carlo Methods. In Sequential Monte Carlo Methods in Practice (pp. 3-14). New York, NY: Springer New York.
- For a more modern textbook, see Chapter 11.4 of: Särkkä, S., & Svensson, L. (2023). Bayesian Filtering and Smoothing (Vol. 17). Cambridge University Press. Available Online.
- For a more recent review of differentiable particle filters, see: Brady, J. J., Cox, B., Li, Y., & Elvira, V. (2025). PyDPF: A Python Package for Differentiable Particle Filtering. arXiv:2510.25693.
EnKFConfig
dataclass
¶
Bases: BaseFilterConfig
Ensemble Kalman Filter (EnKF) for discrete-time models.
A good general-purpose filter for nonlinear models. Works with any differentiable or non-differentiable dynamics and scales well to moderate state dimensions. Cheaper per-step than the particle filter, but assumes observations are approximately Gaussian given the ensemble.
The primary tuning knob is n_particles, with more particles providing
more accurate results at the cost of higher compute.
If the ensemble collapses over long trajectories, increase
inflation_delta slightly (e.g. 0.05–0.2).
Attributes:
| Name | Type | Description |
|---|---|---|
| `n_particles` | `int` | Number of ensemble members. More members give a better covariance estimate at higher compute cost. |
| `crn_seed` | `Array \| None` | Fixed PRNG key for the ensemble. |
| `perturb_measurements` | `bool \| None` | Add noise to observations before the ensemble update (stochastic EnKF). |
| `inflation_delta` | `float \| None` | Scale ensemble anomalies by \(\sqrt{1 + \delta}\) before the update to prevent collapse. |
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
The ensemble Kalman filter comprises ensemble members \(x_t^{(i)}, i = 1, \ldots, N_{\text{particles}}\). There are many implementation tricks in the EnKF; we describe the basic version here.
For each time step \(t\), the ensemble is propagated forward by the transition model:

\[
x_{t|t-1}^{(i)} = f\left(x_{t-1}^{(i)}, u_t, t_t\right) + \epsilon_t^{(i)},
\]

where \(u_t\) is the control input at time \(t\), \(t_t\) is the time of the transition, and \(\epsilon_t^{(i)} \sim \mathcal{N}(0, Q)\) is the process noise.

Each ensemble member is then updated using observations:

\[
x_t^{(i)} = x_{t|t-1}^{(i)} + \hat{K}_t^{(i)} \left(y_t - h\left(x_{t|t-1}^{(i)}\right)\right),
\]

where \(\hat{K}_t^{(i)}\) is the Kalman gain for the \(i\)-th ensemble member, computed as

\[
\hat{K}_t^{(i)} = \hat{P}_t^{(i)} H^\top \left(H \hat{P}_t^{(i)} H^\top + R\right)^{-1},
\]

where \(\hat{P}_t^{(i)}\) is the empirical covariance of the particles, and \(R\) is the covariance of the observation model.
The resulting estimator is known to be biased for nonlinear observations, but is often rather robust in practice to moderate nonlinearities. It is particularly effective for high-dimensional inverse problems, where other particle methods like particle filters often struggle.
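A minimal sketch of the basic stochastic EnKF update, using a single shared empirical gain and perturbed observations (the backend applies further implementation tricks, and the text above allows per-member gains; all names here are illustrative):

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Stochastic EnKF update: perturb observations, apply the empirical gain."""
    N = ensemble.shape[0]
    # Empirical covariance of the forecast ensemble (rows = members).
    P_hat = np.atleast_2d(np.cov(ensemble, rowvar=False))
    # Shared empirical Kalman gain.
    S = H @ P_hat @ H.T + R
    K = P_hat @ H.T @ np.linalg.inv(S)
    # Perturb the observation for each member (perturb_measurements=True).
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T
```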
References:
- The implementation details are due to: Sanz-Alonso, D., Stuart, A. M., & Taeb, A. (2018). Inverse problems and data assimilation. [arXiv:1810.06191](https://arxiv.org/abs/1810.06191).
- For a classical reference to the ensemble Kalman filter, see: Evensen, G. (2003). The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dynamics, 53(4), 343-367.
- The solution using automatic differentiation for nonlinear dynamics is due to: Chen, Y., Sanz-Alonso, D., & Willett, R. (2022). Autodifferentiable ensemble Kalman filters. SIAM Journal on Mathematics of Data Science, 4(2), 801-833. [Available Online](https://epubs.siam.org/doi/abs/10.1137/21M1434477).
Continuous Time Configuration Classes¶
Filter configuration dataclasses. Shared by dispatchers and integration backends.
ContinuousTimeConfig
dataclass
¶
Solver options shared by all continuous-discrete filter configs.
Between observation times, the filter propagates a distribution (or ensemble/particles) forward in continuous time by solving an ODE/SDE numerically. These options control that solver.
Attributes:
| Name | Type | Description |
|---|---|---|
| `filter_state_order` | `FilterStateOrder` | Accuracy of the continuous-time propagation between observations. |
| `diffeqsolve_max_steps` | `int` | Maximum ODE solver steps between any two consecutive observations. Increase if the solver hits this limit (stiff dynamics or very long inter-observation gaps). |
| `diffeqsolve_dt0` | `float` | Initial step-size hint for the solver. Adaptive solvers adjust this automatically; fixed-step solvers use it as the constant step. |
| `diffeqsolve_kwargs` | `dict` | Additional kwargs forwarded to the underlying `diffeqsolve` call. |
ContinuousTimeKFConfig
dataclass
¶
Bases: BaseFilterConfig, ContinuousTimeConfig
Continuous-discrete Kalman Filter (CD-KF).
The exact Bayesian filter for continuous-time linear-Gaussian models.
Use this when your model was built with LTI_continuous. For nonlinear
SDEs, use ContinuousTimeEKFConfig, ContinuousTimeUKFConfig, or
ContinuousTimeEnKFConfig.
Inherits solver options from ContinuousTimeConfig and recording
options from BaseFilterConfig.
Attributes:
| Name | Type | Description |
|---|---|---|
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
Between observations the mean and covariance evolve via the Kalman–Bucy prediction ODEs:

\[
\frac{d\hat x(t)}{dt} = A\,\hat x(t), \qquad \frac{dP(t)}{dt} = A\,P(t) + P(t)\,A^\top + Q,
\]

where \(A\) is the drift matrix of the SDE and \(Q\) the diffusion covariance. At each observation the standard Kalman update is applied.
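Between observations the prediction ODEs can be integrated numerically. A fixed-step Euler sketch for the linear case (illustrative only; the library instead uses an adaptive `diffeqsolve`-style solver):

```python
import numpy as np

def propagate_kalman_bucy(x, P, A, Q, dt, n_steps):
    """Euler-integrate the Kalman-Bucy prediction ODEs between observations:
    dx/dt = A x,   dP/dt = A P + P A^T + Q."""
    for _ in range(n_steps):
        x = x + dt * (A @ x)
        P = P + dt * (A @ P + P @ A.T + Q)
    return x, P
```

For a scalar system with drift \(A = -1\) and no diffusion, the mean decays like \(e^{-t}\) and the variance like \(e^{-2t}\), which the integrator reproduces up to Euler error.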
References:
- For a modern textbook reference, see Chapter 10.6 of: Särkkä, S., & Solin, A. (2019). Applied Stochastic Differential Equations. Cambridge University Press. Available Online.
ContinuousTimeEKFConfig
dataclass
¶
Bases: EKFConfig, ContinuousTimeConfig
Continuous-discrete Extended Kalman Filter (CD-EKF).
Fast Gaussian filter for mildly nonlinear SDEs. Requires differentiable dynamics (JAX autodiff is used). The moment equations for the Gaussian approximation are solved between observations and a Kalman update is applied at each observation.
See EKFConfig for linearisation options and ContinuousTimeConfig
for solver options.
Attributes:
| Name | Type | Description |
|---|---|---|
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
References:
- For a modern textbook reference, see Chapter 10.7 of: Särkkä, S., & Solin, A. (2019). Applied Stochastic Differential Equations. Cambridge University Press. Available Online.
ContinuousTimeEnKFConfig
dataclass
¶
Bases: EnKFConfig, ContinuousTimeConfig
Continuous-discrete Ensemble Kalman Filter (CD-EnKF).
The default filter for continuous-time models. Each ensemble member is propagated forward by solving the SDE between observations; the ensemble Kalman update is applied at observation times. Works with any SDE model without requiring gradients.
See EnKFConfig for particle/ensemble tuning options and
ContinuousTimeConfig for solver options.
Attributes:
| Name | Type | Description |
|---|---|---|
| `filter_source` | `FilterSource` | Backend. |
Algorithm Reference
References:
- The implementation details are due to: Sanz-Alonso, D., Stuart, A. M., & Taeb, A. (2018). Inverse problems and data assimilation. arXiv:1810.06191.
- For a classical reference to the ensemble Kalman filter, see: Evensen, G. (2003). The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean Dynamics, 53(4), 343-367.
- The solution using automatic differentiation for nonlinear dynamics is due to: Chen, Y., Sanz-Alonso, D., & Willett, R. (2022). Autodifferentiable ensemble Kalman filters. SIAM Journal on Mathematics of Data Science, 4(2), 801-833. Available Online.