diff --git a/README.rst b/README.rst
index 04366e53c..11d9be512 100755
--- a/README.rst
+++ b/README.rst
@@ -264,7 +264,7 @@ Citing this package
 If you find DoWhy useful for your work, please cite **both** of the following two references:
 
 - Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
-- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
+- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2024. MLOSS 25(147):1−7. https://jmlr.org/papers/v25/22-1258.html
 
 Bibtex::
 
@@ -275,14 +275,17 @@ Bibtex::
     year={2020}
   }
 
-  @article{dowhy_gcm,
-    author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
-    title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
-    journal={arXiv preprint arXiv:2206.06821},
-    year={2022}
+  @article{JMLR:v25:22-1258,
+    author = {Patrick Bl{{\"o}}baum and Peter G{{\"o}}tz and Kailash Budhathoki and Atalanti A. Mastakouri and Dominik Janzing},
+    title = {DoWhy-GCM: An Extension of DoWhy for Causal Inference in Graphical Causal Models},
+    journal = {Journal of Machine Learning Research},
+    year = {2024},
+    volume = {25},
+    number = {147},
+    pages = {1--7},
+    url = {http://jmlr.org/papers/v25/22-1258.html}
   }
 
-
 Issues
 ~~~~~~
 If you encounter an issue or have a specific request for DoWhy, please `raise an issue `_.
diff --git a/docs/source/cite.rst b/docs/source/cite.rst
index cc9e94ff9..5c8020cf0 100644
--- a/docs/source/cite.rst
+++ b/docs/source/cite.rst
@@ -4,7 +4,7 @@ Citing this package
 If you find DoWhy useful for your work, please cite **both** of the following two references:
 
 - Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
-- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
+- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2024. MLOSS 25(147):1−7. https://jmlr.org/papers/v25/22-1258.html
 
 Bibtex::
 
@@ -15,9 +15,13 @@ Bibtex::
     year={2020}
   }
 
-  @article{dowhy_gcm,
-    author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
-    title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
-    journal={arXiv preprint arXiv:2206.06821},
-    year={2022}
+  @article{JMLR:v25:22-1258,
+    author = {Patrick Bl{{\"o}}baum and Peter G{{\"o}}tz and Kailash Budhathoki and Atalanti A. Mastakouri and Dominik Janzing},
+    title = {DoWhy-GCM: An Extension of DoWhy for Causal Inference in Graphical Causal Models},
+    journal = {Journal of Machine Learning Research},
+    year = {2024},
+    volume = {25},
+    number = {147},
+    pages = {1--7},
+    url = {http://jmlr.org/papers/v25/22-1258.html}
   }
diff --git a/docs/source/example_notebooks/gcm_icc.ipynb b/docs/source/example_notebooks/gcm_icc.ipynb
index ce4254abd..883ada24b 100644
--- a/docs/source/example_notebooks/gcm_icc.ipynb
+++ b/docs/source/example_notebooks/gcm_icc.ipynb
@@ -13,7 +13,7 @@
    "id": "5b1241d5-010d-4532-9889-f719f30f19c2",
    "metadata": {},
    "source": [
-    "This notebook demonstrates the usage of the intrinsic causal influence (ICC) method, a way to estimate causal influence in a system. A common question in many applications is: \"What is the causal influence of node X on node Y?\" Here, \"causal influence\" can be defined in various ways. One approach could be to measure the interventional influence, which asks, \"How much does node Y change if I intervene on node X?\" or, from a more feature relevance perspective, \"How relevant is X in describing Y?\"\n",
+    "This notebook demonstrates the usage of the [intrinsic causal influence (ICC) method](https://proceedings.mlr.press/v238/janzing24a.html), a way to estimate causal influence in a system. A common question in many applications is: \"What is the causal influence of node X on node Y?\" Here, \"causal influence\" can be defined in various ways. One approach could be to measure the interventional influence, which asks, \"How much does node Y change if I intervene on node X?\" or, from a more feature relevance perspective, \"How relevant is X in describing Y?\"\n",
     "\n",
     "In the following we focus on a particular type of causal influence, which is based on decomposing the generating process into mechanisms in place at each node, formalized by the respective causal mechanism. Then, ICC quantifies for each node the amount of uncertainty of the target that can be traced back to the respective mechanism. Hence, nodes that are deterministically computed from their parents obtain zero contribution. This concept may initially seem complex, but it is based on a simple idea:\n",
     "\n",
diff --git a/docs/source/user_guide/causal_tasks/quantify_causal_influence/icc.rst b/docs/source/user_guide/causal_tasks/quantify_causal_influence/icc.rst
index c47e1cf06..d896e5a6c 100644
--- a/docs/source/user_guide/causal_tasks/quantify_causal_influence/icc.rst
+++ b/docs/source/user_guide/causal_tasks/quantify_causal_influence/icc.rst
@@ -8,8 +8,8 @@ By quantifying intrinsic causal influence, we answer the question:
 Naturally, descendants will have a zero intrinsic causal influence on the target node.
 This method is based on the paper:
 
-    Dominik Janzing, Patrick Blöbaum, Lenon Minorics, Philipp Faller, Atalanti Mastakouri. `Quantifying intrinsic causal contributions via structure preserving interventions `_
-    arXiv:2007.00714, 2021
+    Dominik Janzing, Patrick Blöbaum, Atalanti A Mastakouri, Philipp M Faller, Lenon Minorics, Kailash Budhathoki. `Quantifying intrinsic causal contributions via structure preserving interventions `_
+    Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2188-2196, 2024
 
 Let's consider an example from the paper to understand the type of influence being measured here.
 Imagine a schedule of three trains, ``Train A, Train B`` and ``Train C``, where the departure time of ``Train C`` depends on the arrival time of ``Train B``,
diff --git a/docs/source/user_guide/causal_tasks/root_causing_and_explaining/distribution_change.rst b/docs/source/user_guide/causal_tasks/root_causing_and_explaining/distribution_change.rst
index 893dac28a..6d2f77851 100644
--- a/docs/source/user_guide/causal_tasks/root_causing_and_explaining/distribution_change.rst
+++ b/docs/source/user_guide/causal_tasks/root_causing_and_explaining/distribution_change.rst
@@ -17,8 +17,8 @@ Additionally, for explaining changes in the mean of the target variable (or othe
 DoWhy implements a multiply-robust causal change attribution method, which uses a combination of regression and re-weighting to make the final estimates less sensitive to estimation error. This method was presented in the following paper:
 
-    Quintas-Martinez, V., Bahadori, M. T., Santiago, E., Mu, J., Janzing, D., and Heckerman, D. `Multiply-Robust Causal Change Attribution `
-    Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024.
+    Victor Quintas-Martinez, Mohammad Taha Bahadori, Eduardo Santiago, Jeff Mu, David Heckerman. `Multiply-Robust Causal Change Attribution `_
+    Proceedings of the 41st International Conference on Machine Learning, PMLR 235:41821--41840, 2024.
 
 How to use it
 ^^^^^^^^^^^^^^
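Since the last hunk stops right at the ``How to use it`` heading of ``distribution_change.rst``, here is a minimal sketch of what a call to DoWhy's distribution-change attribution can look like. The toy graph, the data, and the use of the generic ``gcm.distribution_change`` entry point (rather than a dedicated multiply-robust variant, which the cited paper's estimator may be exposed through separately) are illustrative assumptions, not the documented example::

    # Hypothetical toy setup -- node names, data, and the generic
    # gcm.distribution_change entry point are assumptions for illustration;
    # the multiply-robust estimator may be exposed through a different API.
    import networkx as nx
    import numpy as np
    import pandas as pd

    from dowhy import gcm

    rng = np.random.default_rng(0)

    # "Old" data: Y = 2*X + small noise
    x_old = rng.normal(size=1000)
    data_old = pd.DataFrame({"X": x_old, "Y": 2 * x_old + rng.normal(size=1000)})

    # "New" data: the causal mechanism of Y changed (noise is three times larger)
    x_new = rng.normal(size=1000)
    data_new = pd.DataFrame({"X": x_new, "Y": 2 * x_new + 3 * rng.normal(size=1000)})

    # Causal graph X -> Y with automatically assigned causal mechanisms
    causal_model = gcm.ProbabilisticCausalModel(nx.DiGraph([("X", "Y")]))
    gcm.auto.assign_causal_mechanisms(causal_model, data_old)

    # Attribute the change in Y's distribution to the mechanisms of X and Y
    attributions = gcm.distribution_change(causal_model, data_old, data_new, "Y")
    print(attributions)  # expect most of the change attributed to node "Y"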