Publication
Not All Causal Inference is the Same
Matej Zecevic; Devendra Singh Dhami; Kristian Kersting
In: Transactions on Machine Learning Research (TMLR), Vol. 2023, Pages 1-22, OpenReview, 2023.
Abstract
Neurally-parameterized Structural Causal Models in the Pearlian notion of causality, referred to as NCM, were recently introduced as a step towards next-generation learning systems. However, these NCM are concerned only with the learning aspect of causal inference and miss out entirely on the architecture aspect. That is, actual causal inference within NCM is intractable: an NCM will not return an answer to a query in polynomial time. This insight follows as a corollary to the more general statement on the intractability of arbitrary structural causal model (SCM) parameterizations, which we prove in this work through a classical 3-SAT reduction. Since future learning algorithms will be required to deal with both high-dimensional data and highly complex mechanisms governing the data, we ultimately believe work on tractable inference for causality to be decisive. We also show that not all “causal” models are created equal. More specifically, there are models capable of answering causal queries that are not SCM, which we refer to as partially causal models (PCM). We provide a tabular taxonomy in terms of tractability properties for all of the different model families, namely correlation-based models, PCM, and SCM. To conclude our work, we also provide initial ideas on how to overcome parts of the intractability of causal inference with SCM by showing an example of how parameterizing an SCM with SPN (sum-product network) modules can at least allow for tractable mechanisms.
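As a toy illustration of why SPN modules yield tractable mechanisms: an SPN supports exact joint and marginal queries in a single bottom-up pass, linear in its size. The structure and weights below are hypothetical, not taken from the paper.

```python
# Minimal sum-product network (SPN) over two binary variables, of the
# kind that could serve as a tractable mechanism inside an SCM.
# Structure and weights are illustrative assumptions.

class Leaf:
    def __init__(self, var, p):          # Bernoulli leaf over one variable
        self.var, self.p = var, p
    def prob(self, assignment):
        v = assignment.get(self.var)     # absent variable => marginalized out
        if v is None:
            return 1.0
        return self.p if v == 1 else 1.0 - self.p

class Product:
    def __init__(self, children):        # children must have disjoint scopes
        self.children = children
    def prob(self, assignment):
        result = 1.0
        for child in self.children:
            result *= child.prob(assignment)
        return result

class Sum:
    def __init__(self, weighted):        # [(weight, child)], weights sum to 1
        self.weighted = weighted
    def prob(self, assignment):
        return sum(w * child.prob(assignment) for w, child in self.weighted)

# SPN over (X1, X2): a mixture of two independent components.
spn = Sum([
    (0.3, Product([Leaf("X1", 0.9), Leaf("X2", 0.2)])),
    (0.7, Product([Leaf("X1", 0.1), Leaf("X2", 0.6)])),
])

# Both queries take one linear-time bottom-up pass -- no summation
# over exponentially many states is needed.
joint    = spn.prob({"X1": 1, "X2": 0})   # exact joint probability
marginal = spn.prob({"X1": 1})            # X2 marginalized out "for free"
```

Marginalization requires no extra machinery: a leaf whose variable is absent from the query simply evaluates to 1, which is what makes inference in SPNs tractable by construction.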
With this work we hope that our insights raise awareness for this novel research direction, since achieving success with causality in real-world downstream tasks will depend not only on learning correct models but also on the practical ability to access model inferences.
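The intractability argument mentioned in the abstract can be illustrated with a toy encoding: given a 3-CNF formula phi, build a boolean SCM whose exogenous variables are fair coins and whose single mechanism sets Y := phi(U). Then P(Y = 1) > 0 holds iff phi is satisfiable, so exact marginal inference on arbitrary SCM parameterizations is at least as hard as 3-SAT. The encoding below is an illustrative sketch, not the paper's exact construction.

```python
from itertools import product

def clause_holds(clause, u):
    # clause: triple of non-zero ints; the sign gives the literal's
    # polarity and abs(l) indexes the exogenous variable U_{|l|}.
    return any((u[abs(l) - 1] == 1) == (l > 0) for l in clause)

def p_y_equals_1(clauses, n):
    # Brute-force the marginal over all 2^n exogenous configurations --
    # exponential cost, mirroring the claim that no generic polynomial
    # inference procedure exists for arbitrary SCMs (unless P = NP).
    satisfying = sum(all(clause_holds(c, u) for c in clauses)
                     for u in product([0, 1], repeat=n))
    return satisfying / 2 ** n

# Hypothetical 3-CNF over U1..U3: (U1 v U2 v U3) & (~U1 v U2 v U3) & (U1 v ~U2 v U3)
phi = [(1, 2, 3), (-1, 2, 3), (1, -2, 3)]
print(p_y_equals_1(phi, 3) > 0)   # True, since phi is satisfiable
```

Deciding whether the marginal P(Y = 1) is non-zero thus answers the satisfiability question for phi, which is the core of a 3-SAT reduction.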
