This program is tentative and subject to change.
The design of scientific experiments deserves its own variation of formal verification to catch cases where scientists made important mistakes, such as forgetting to take confounding variables into account. One of the most fundamental underpinnings of science is \emph{causality}, or what it means for interventions in the world to \emph{cause} other outcomes, as formalized by computer scientists like Judea Pearl. However, these ideas had not previously been made rigorous to the standards of the programming-languages community, where one expects a (syntactic) program analysis to be proved sound with respect to a natural semantics. In the domain of causality, as the relevant ``program analysis,'' we focus on $d$-separation, a classic condition on graphs that can be used to decide when the design of an experiment controls for sufficiently many confounding variables, even though the reason that this condition works is often unintuitive. Our central result (mechanized in Rocq) is that $d$-separation exactly coincides with a novel \emph{semantic} definition inspired by noninterference from the theory of security. This characterization provides a structural semantic foundation for $d$-separation and helps explain why the graph-theoretic condition is correct, independently of probabilistic assumptions.
For each such automated test of experiment-design quality, our theorem justifies an associated method for falsifying the world-modeling hypothesis behind the experiment.
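The $d$-separation condition mentioned above is a purely graph-theoretic check, so it can be computed without any probabilistic machinery. As an illustration only (not from the paper or its Rocq development), here is a minimal Python sketch of such a check, using the standard reachability ("Bayes-ball") formulation, applied to the two textbook cases of a confounder and a collider; all graph and variable names are hypothetical:

```python
from collections import defaultdict

def d_separated(parents, x, y, z):
    """Return True iff x and y are d-separated given z in the DAG
    described by `parents` (node -> set of parent nodes).
    Uses the standard reachability ("Bayes-ball") formulation."""
    z = set(z)
    # Ancestors of z (including z itself): these nodes can activate colliders.
    ancestors = set()
    stack = list(z)
    while stack:
        n = stack.pop()
        if n not in ancestors:
            ancestors.add(n)
            stack.extend(parents[n])
    children = defaultdict(set)
    for n, ps in parents.items():
        for p in ps:
            children[p].add(n)
    # Search over (node, direction) states; 'up' means we arrived from a child.
    visited = set()
    frontier = [(x, 'up')]
    while frontier:
        node, direction = frontier.pop()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node == y and node not in z:
            return False  # found an active trail from x to y
        if direction == 'up' and node not in z:
            frontier += [(p, 'up') for p in parents[node]]
            frontier += [(c, 'down') for c in children[node]]
        elif direction == 'down':
            if node not in z:
                frontier += [(c, 'down') for c in children[node]]
            if node in ancestors:  # a conditioned-on collider opens the trail
                frontier += [(p, 'up') for p in parents[node]]
    return True

# Common cause (confounder): X <- Z -> Y
confounder = {'Z': set(), 'X': {'Z'}, 'Y': {'Z'}}
# Collider: X -> W <- Y
collider = {'X': set(), 'Y': set(), 'W': {'X', 'Y'}}

print(d_separated(confounder, 'X', 'Y', set()))  # False: Z confounds X and Y
print(d_separated(confounder, 'X', 'Y', {'Z'}))  # True: controlling for Z blocks the path
print(d_separated(collider, 'X', 'Y', set()))    # True: a collider blocks the path by default
print(d_separated(collider, 'X', 'Y', {'W'}))    # False: conditioning on the collider opens it
```

The confounder case shows the intuition behind controlling for variables: the trail $X \leftarrow Z \rightarrow Y$ is active until $Z$ is conditioned on, while the collider case shows why the condition can be unintuitive, since conditioning on $W$ makes things worse rather than better.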
Thu 18 Jun (displayed time zone: Mountain Time, US & Canada)

13:40 - 15:20

13:40 (20m) Talk: Towards Removing Undef Values from LLVM IR
    PLDI Research Papers. Pedro Lobo (INESC-ID; Instituto Superior Técnico - University of Lisbon), John McIver (Virginia Tech), George Mitenkov (Aptos), Juneyoung Lee (AWS), Kirshanthan Sundararajah (Virginia Tech), Nuno P. Lopes (INESC-ID; Instituto Superior Técnico - University of Lisbon). DOI; Pre-print.

14:00 (20m) Talk: The Downgrading Semantics of Memory Safety
    PLDI Research Papers. René Rydhof Hansen (Aalborg University), Andreas Stenbæk Larsen (Aarhus University), Aslan Askarov (Aarhus University). DOI.

14:20 (20m) Talk: Causality and Semantic Separation
    PLDI Research Papers. Anna Zhang (Massachusetts Institute of Technology), Qinglan Luo (Wellesley College; Massachusetts Institute of Technology), London Bielicke (University of California at Los Angeles), Eunice Jun (University of California at Los Angeles), Adam Chlipala (Massachusetts Institute of Technology). DOI.

14:40 (20m) Talk: Hyper Separation Logic
    PLDI Research Papers. Trayan Gospodinov (INSAIT at Sofia University St. Kliment Ohridski), Peter Müller (ETH Zurich), Thibault Dardinier (EPFL). DOI.

15:00 (20m) Talk: Pantomime: Constructive Leakage Proofs via Simulation
    PLDI Research Papers. Robin Webbers (Vrije Universiteit Amsterdam), Robert Schenck (Northeastern University), Wind Wong (Vrije Universiteit Amsterdam), Kristina Sojakova (Vrije Universiteit Amsterdam), Klaus von Gleissenthall (Vrije Universiteit Amsterdam). DOI.