Causal Reasoning with ChiRho

ChiRho is a causal extension to the Pyro probabilistic programming language. It was built to bridge the gap between the capabilities of modern probabilistic programming systems, such as Pyro, and the needs of policymakers, scientists, and AI researchers, who often want to use models to answer questions about cause-and-effect relationships. As a non-exhaustive set of examples, ChiRho makes it easier to answer the following kinds of causal questions, which appear frequently in practice:

  • Interventional: How many COVID-19 hospitalizations will occur if the USA imposes a national mask mandate?

  • Counterfactual: Given that 100,000 people were infected with COVID-19 in the past month, how many would have been infected if a mask mandate had been in place?

  • Explanation: Why were 100,000 people infected with COVID-19 in the past month?

  • Causal structure discovery: What individual attributes influence risk of COVID-19 hospitalization?

To see how ChiRho supports causal reasoning, take a look at our Tutorial.

Installation

Install using pip:

pip install chirho

Install from source:

git clone git@github.com:BasisResearch/chirho.git
cd chirho
git checkout master
pip install .

Install with extra packages:

To install the additional dependencies required to run the tutorials in the examples and tutorials directories, use the following command:

pip install chirho[extras]

Make sure that the models you run come from the same release of the ChiRho source code as the version you have installed.
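
As a quick sanity check after installation (an optional step, not part of the official instructions), you can confirm the package is importable from Python:

# Optional sanity check: confirm the installed package can be imported.
import chirho
print(chirho.__name__)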

Getting Started

Below is a simple example of how to use ChiRho to answer an interventional question. For more in-depth examples, see the Learn more section below.

import torch
import pyro
import pyro.distributions as dist
from chirho.interventional.handlers import do

pyro.set_rng_seed(101)

# Define a causal model with a single confounder h
def model():
    h = pyro.sample("h", dist.Normal(0, 1))
    x = pyro.sample("x", dist.Normal(h, 1))
    y = pyro.sample("y", dist.Normal(x + h, 1))
    return y

# Define a causal query (here intervening on x)
def queried_model():
    return do(model, {"x": 1})

# Generate 1,000 samples from the observational distribution P(y) (mean 0, variance 6)
obs_samples = pyro.infer.Predictive(model, num_samples=1000)()["y"]

# Generate 1,000 samples from the interventional distribution P(y | do(X=1)) (mean 1, variance 2)
int_samples = pyro.infer.Predictive(queried_model(), num_samples=1000)()["y"]
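
The two sets of samples can then be compared directly. For example (a small follow-up sketch, not part of the original example), the empirical means should be close to 0 and 1, respectively:

# Compare the empirical means and standard deviations of the two distributions.
# The observational mean should be near 0 and the interventional mean near 1.
print(obs_samples.mean().item(), obs_samples.std().item())
print(int_samples.mean().item(), int_samples.std().item())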

Learn more

We have written a number of tutorials and examples for ChiRho. We have tried to choose simple examples that would be of interest to both the causal inference and probabilistic programming communities: they collectively span Pearl's causal hierarchy (Pearl 2009), and most are broadly applicable, empirically validated, involve an unconventional or limited identification result, and make use of modern probabilistic machine learning tools, such as neural networks and stochastic variational inference.

Our examples demonstrate how real-world causal assumptions can be expressed as probabilistic programs and real-world causal estimands can be expressed as program transformations. These examples illustrate how ChiRho is compatible with any inference method implemented in Pyro, including the kinds of scalable gradient-based approximations that power much of the modern probabilistic machine learning landscape.

Note: These tutorials and examples assume some familiarity with Pyro and probabilistic programming. For introductory Pyro tutorials, please see Additional background reading material below.

Documentation

  • Counterfactual - Effect handlers for counterfactual world splitting

  • Interventional - Effect handlers for performing interventions

  • Observational - Effect handler utilities for computing probabilistic quantities for partially deterministic models, which is useful for counterfactual reasoning

  • Indexed - Effect handler utilities for named indices in ChiRho, which are useful for manipulating and tracking counterfactual worlds

  • Dynamical - Operations and effect handlers for counterfactual reasoning in dynamical systems

  • Robust - Operations and effect handlers for robust estimation

  • Explainable - Operations and effect handlers for causal explanation
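
To give a rough sense of how these modules fit together, here is a minimal sketch (our own illustration, not an excerpt from the documentation) that combines the MultiWorldCounterfactual handler from the Counterfactual module with the do handler from the Interventional module, applied to the model from the Getting Started example above:

import torch
import pyro
from chirho.counterfactual.handlers import MultiWorldCounterfactual
from chirho.interventional.handlers import do

# Under MultiWorldCounterfactual, an intervention splits execution into a
# factual world and a counterfactual world rather than simply replacing the value.
def twin_world_model():
    with MultiWorldCounterfactual():
        return do(model, {"x": torch.tensor(0.0)})()

# Samples of y now carry an extra named index dimension distinguishing the
# factual world from the world in which x was set to 0.
cf_samples = pyro.infer.Predictive(twin_world_model, num_samples=1000)()["y"]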

Caveats

ChiRho does not answer causal questions by magic. There is no escaping the fact that

behind any causal conclusion there must lie some causal assumption,

a phrase made famous by Judea Pearl (Pearl 2009). Instead, ChiRho provides a substrate for writing causal assumptions as probabilistic programs, and for writing causal questions in terms of program transformations.

Additional background reading material

References

Pearl, Judea. Causality: Models, Reasoning and Inference. 2nd ed. USA: Cambridge University Press, 2009.