LinearGaussianObservation

Bases: ObservationModel

Linear-Gaussian observation model.

Observations are modeled as

\[ y_t \sim \mathcal{N}(H x_t + D u_t + b, R). \]

Here, \(H\) is the observation matrix, \(D\) is an optional control-input matrix, \(b\) is an optional observation bias, and \(R\) is the observation noise covariance.
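To make the shapes concrete, here is a minimal sketch in plain JAX (independent of this class) that computes the observation mean \(H x_t + D u_t + b\) and draws a sample from \(\mathcal{N}(\cdot, R)\); the parameter values are illustrative, not from the library:

```python
import jax
import jax.numpy as jnp

# Model parameters for y_t ~ N(H x_t + D u_t + b, R), with d_y = 2, d_x = 2, d_u = 1.
H = jnp.array([[1.0, 0.0], [0.0, 1.0]])  # observation matrix, shape (d_y, d_x)
D = jnp.array([[1.0], [0.5]])            # control matrix, shape (d_y, d_u)
b = jnp.array([0.0, 0.1])                # observation bias, shape (d_y,)
R = 0.1 * jnp.eye(2)                     # observation noise covariance, shape (d_y, d_y)

x_t = jnp.array([1.2, -0.3])  # latent state
u_t = jnp.array([0.8])        # control input

# Mean of the observation distribution.
mean = H @ x_t + D @ u_t + b

# Draw one sample y_t ~ N(mean, R).
key = jax.random.PRNGKey(0)
y_t = jax.random.multivariate_normal(key, mean, R)
```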

__init__(H: jax.Array, R: jax.Array, D: jax.Array | None = None, bias: jax.Array | None = None)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `H` | `Array` | Observation matrix with shape \((d_y, d_x)\). | *required* |
| `R` | `Array` | Observation noise covariance with shape \((d_y, d_y)\). | *required* |
| `D` | `Array \| None` | Optional control matrix with shape \((d_y, d_u)\). If `None`, no control contribution is used. | `None` |
| `bias` | `Array \| None` | Optional additive bias with shape \((d_y,)\). | `None` |

Structured inference

Equivalent observation behavior can be implemented without this class (for example, with a custom callable). However, the structured linear-Gaussian form is what lets filtering backends apply fast Kalman-family methods; see Filters, especially KFConfig and EnKFConfig (or ContinuousTimeKFConfig / ContinuousTimeEnKFConfig) in FilterConfigs.

Without this exploitable structure, parameter inference that marginalizes latent trajectories often relies on particle filters (PFConfig and related particle methods), which are typically slower.
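For context, the structure matters because the linear-Gaussian likelihood admits closed-form conditioning, which is exactly what the Kalman family exploits. With a Gaussian prior \(x_t \sim \mathcal{N}(m, P)\) and the observation model above, the standard Kalman update gives a Gaussian posterior:

\[ S = H P H^\top + R, \qquad K = P H^\top S^{-1}, \]

\[ x_t \mid y_t \sim \mathcal{N}\big(m + K\,(y_t - H m - D u_t - b),\; (I - K H)\,P\big). \]

With a nonlinear or non-Gaussian observation model, no such closed form exists, which is why inference then falls back on particle methods.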

Example

Linear-Gaussian observation with control input:

```python
import jax.numpy as jnp
from dynestyx import LinearGaussianObservation

observation = LinearGaussianObservation(
    H=jnp.array([[1.0, 0.0], [0.0, 1.0]]),
    R=0.1 * jnp.eye(2),
    D=jnp.array([[1.0], [0.5]]),
    bias=jnp.array([0.0, 0.1]),
)

x_t = jnp.array([1.2, -0.3])
u_t = jnp.array([0.8])
dist_y = observation(x_t, u_t, t=0.0)  # p(y_t | x_t, u_t, t)
```