# MCMC Configurations
Internal API reference for MCMC configuration dataclasses used by inference dispatchers and backend integrations.
## AdjustedMCLMCDynamicConfig (dataclass)

Bases: `BaseMCMCConfig`
Dynamic adjusted MCLMC (MHMCHMC) configuration.
This maps to `blackjax.adjusted_mclmc_dynamic(...)` and uses its top-level API arguments.
Attributes:

| Name | Type | Description |
|---|---|---|
| `step_size` | `float` | Integrator step size. |
| `L_proposal_factor` | `float` | Proposal length scaling factor. |
| `divergence_threshold` | `float` | Energy-difference threshold used to flag divergences. |
| `integration_steps_min` | `int` | Minimum number of random integration steps per proposal. |
| `integration_steps_max` | `int` | Exclusive upper bound on the number of random integration steps per proposal. |
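To make the field list concrete, here is a minimal sketch of constructing such a config. The stand-in dataclass below mirrors the documented attributes; the default values and the import-free definition are illustrative assumptions, since this reference does not show the real module path or defaults.

```python
from dataclasses import dataclass

# Stand-in mirroring the documented attributes (defaults are hypothetical).
@dataclass
class AdjustedMCLMCDynamicConfig:
    step_size: float = 0.1
    L_proposal_factor: float = 1.25
    divergence_threshold: float = 1000.0
    integration_steps_min: int = 1
    integration_steps_max: int = 20

cfg = AdjustedMCLMCDynamicConfig(step_size=0.05)
# The number of random integration steps per proposal is drawn from
# [integration_steps_min, integration_steps_max): the upper bound is exclusive.
assert cfg.integration_steps_min < cfg.integration_steps_max
```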
## BaseMCMCConfig (dataclass)
Shared configuration options inherited by all MCMC configs.
You do not instantiate this class directly; use one of the concrete subclasses (`NUTSConfig`, `HMCConfig`, `SGLDConfig`, `MALAConfig`, `AdjustedMCLMCDynamicConfig`).
Attributes:

| Name | Type | Description |
|---|---|---|
| `num_samples` | `int` | Number of post-warmup samples to return. |
| `num_warmup` | `int` | Number of warmup/burn-in transitions. |
| `num_chains` | `int` | Number of Markov chains to run in parallel. |
| `mcmc_source` | `MCMCSource` | Backend library used for inference. Supported values are |
| `init_strategy` | `callable` | NumPyro initialization strategy used when constructing unconstrained initial parameters. |
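The inheritance pattern can be sketched as follows. This is an illustrative stand-in, not the library's actual source: the field names come from the tables in this reference, while the defaults and the `str` stand-in for the `MCMCSource` enum are assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical stand-in for the documented hierarchy: the base class holds
# shared sampling options, and each concrete config appends sampler-specific
# fields (defaults here are illustrative).
@dataclass
class BaseMCMCConfig:
    num_samples: int = 1000
    num_warmup: int = 500
    num_chains: int = 4
    mcmc_source: str = "blackjax"  # stand-in for the MCMCSource enum
    init_strategy: Optional[Callable] = None

@dataclass
class HMCConfig(BaseMCMCConfig):
    step_size: float = 0.1
    num_steps: int = 10

# Shared fields are inherited; sampler-specific fields are set alongside them.
cfg = HMCConfig(num_chains=2, step_size=0.01)
```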
## HMCConfig (dataclass)

Bases: `BaseMCMCConfig`
Hamiltonian Monte Carlo (HMC) configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `step_size` | `float` | Integrator step size used by the leapfrog solver. |
| `num_steps` | `int` | Number of leapfrog steps per HMC proposal. |
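These two fields jointly determine how far each proposal travels: the leapfrog integrator simulates Hamiltonian dynamics for `num_steps * step_size` units of fictitious time. A quick arithmetic sketch (values are illustrative, not recommended settings):

```python
# Trajectory length per HMC proposal is the product of the two fields:
# short steps with many of them cover the same distance as longer steps
# with fewer, but with different discretization error.
step_size = 0.05
num_steps = 20
trajectory_length = step_size * num_steps  # 1.0 unit of fictitious time
assert abs(trajectory_length - 1.0) < 1e-9
```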
## MALAConfig (dataclass)

Bases: `BaseMCMCConfig`
Metropolis-Adjusted Langevin Algorithm (MALA) configuration.
Attributes:

| Name | Type | Description |
|---|---|---|
| `step_size` | `float` | Proposal step size used by |
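To show what `step_size` controls, here is a self-contained sketch of one MALA transition on a standard-normal target: a Langevin proposal (gradient drift plus Gaussian noise) followed by a Metropolis accept/reject correction. This is a textbook illustration under assumed settings, not the library's implementation.

```python
import numpy as np

def grad_log_density(x):
    return -x  # gradient of log N(0, 1) density

def log_density(x):
    return -0.5 * x * x

def mala_step(x, step_size, rng):
    # Langevin proposal: drift along the gradient plus scaled Gaussian noise.
    y = x + step_size * grad_log_density(x) + np.sqrt(2.0 * step_size) * rng.normal()

    # Proposal log-density q(a | b) for the asymmetric Langevin kernel.
    def log_q(a, b):
        return -((a - b - step_size * grad_log_density(b)) ** 2) / (4.0 * step_size)

    # Metropolis correction makes the target distribution exactly stationary.
    log_alpha = log_density(y) + log_q(x, y) - log_density(x) - log_q(y, x)
    return y if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
x = 3.0
for _ in range(2000):
    x = mala_step(x, step_size=0.5, rng=rng)
# x is now (approximately) a draw from N(0, 1)
```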
## NUTSConfig (dataclass)

Bases: `BaseMCMCConfig`

No-U-Turn Sampler (NUTS) configuration.
## SGLDConfig (dataclass)

Bases: `BaseMCMCConfig`
Stochastic Gradient Langevin Dynamics (SGLD) configuration.
SGLD performs first-order Langevin updates using noisy gradients and injected Gaussian noise. In this implementation, gradients are computed on the full dataset (no minibatching), so the method behaves as full-batch Langevin dynamics with an annealed step schedule.
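The update described above can be sketched as follows: a full-batch gradient step on the log posterior plus injected Gaussian noise, with a step size annealed as \(\epsilon_t = \text{step\_size} \cdot t^{-\text{schedule\_power}}\). Parameter names mirror the config fields; the target and settings are illustrative assumptions.

```python
import numpy as np

def sgld_sample(grad_log_post, theta0, step_size, schedule_power, num_steps, rng):
    """Full-batch Langevin dynamics with a polynomially decaying step size."""
    theta = theta0
    samples = []
    for t in range(1, num_steps + 1):
        eps = step_size * t ** (-schedule_power)
        # Langevin update: half-step gradient drift plus N(0, eps) noise.
        theta = theta + 0.5 * eps * grad_log_post(theta) + np.sqrt(eps) * rng.normal()
        samples.append(theta)
    return np.array(samples)

# Example on a standard-normal log posterior (grad log p(theta) = -theta).
rng = np.random.default_rng(0)
draws = sgld_sample(lambda th: -th, theta0=2.0, step_size=0.1,
                    schedule_power=0.55, num_steps=5000, rng=rng)
```

Note that SGLD omits the Metropolis correction used by MALA, so its stationary distribution is only approximate; the decaying schedule trades mixing speed for reduced discretization bias.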
Attributes:

| Name | Type | Description |
|---|---|---|
| `step_size` | `float` | Base learning rate used in the SGLD schedule. This should generally be small. |
| `schedule_power` | `float` | Power in the polynomial decay schedule \(\epsilon_t = \text{step\_size} \cdot t^{-\text{schedule\_power}}\). Values in |
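A quick numeric check of the decay schedule above, showing how `schedule_power` shapes the annealing (the values plugged in are illustrative, not defaults):

```python
# epsilon_t = step_size * t**(-schedule_power):
# the schedule starts at step_size and decays polynomially in t,
# with larger powers giving faster decay.
step_size = 0.1
eps = lambda t, power: step_size * t ** (-power)

assert eps(1, 0.55) == step_size        # t = 1: no decay yet
assert eps(100, 0.9) < eps(100, 0.55)   # larger power -> smaller late steps
```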