Signed-off-by: Patrick Bloebaum <bloebp@amazon.com>
This commit is contained in:
Patrick Bloebaum 2024-07-29 10:08:16 -07:00 committed by Patrick Blöbaum
Parent a20909499b
Commit e783e37db0
5 changed files with 25 additions and 18 deletions

View file

@@ -264,7 +264,7 @@ Citing this package
If you find DoWhy useful for your work, please cite **both** of the following two references:
- Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2024. Journal of Machine Learning Research (MLOSS) 25(147):1-7. https://jmlr.org/papers/v25/22-1258.html
Bibtex::
@@ -275,14 +275,17 @@ Bibtex::
year={2020}
}
@article{dowhy_gcm,
author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
journal={arXiv preprint arXiv:2206.06821},
year={2022}
@article{JMLR:v25:22-1258,
author = {Patrick Bl{{\"o}}baum and Peter G{{\"o}}tz and Kailash Budhathoki and Atalanti A. Mastakouri and Dominik Janzing},
title = {DoWhy-GCM: An Extension of DoWhy for Causal Inference in Graphical Causal Models},
journal = {Journal of Machine Learning Research},
year = {2024},
volume = {25},
number = {147},
pages = {1--7},
url = {http://jmlr.org/papers/v25/22-1258.html}
}
Issues
~~~~~~
If you encounter an issue or have a specific request for DoWhy, please `raise an issue <https://github.com/py-why/dowhy/issues>`_.

View file

@@ -4,7 +4,7 @@ Citing this package
If you find DoWhy useful for your work, please cite **both** of the following two references:
- Amit Sharma, Emre Kiciman. DoWhy: An End-to-End Library for Causal Inference. 2020. https://arxiv.org/abs/2011.04216
- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2022. https://arxiv.org/abs/2206.06821
- Patrick Blöbaum, Peter Götz, Kailash Budhathoki, Atalanti A. Mastakouri, Dominik Janzing. DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models. 2024. Journal of Machine Learning Research (MLOSS) 25(147):1-7. https://jmlr.org/papers/v25/22-1258.html
Bibtex::
@@ -15,9 +15,13 @@ Bibtex::
year={2020}
}
@article{dowhy_gcm,
author = {Bl{\"o}baum, Patrick and G{\"o}tz, Peter and Budhathoki, Kailash and Mastakouri, Atalanti A. and Janzing, Dominik},
title = {DoWhy-GCM: An extension of DoWhy for causal inference in graphical causal models},
journal={arXiv preprint arXiv:2206.06821},
year={2022}
@article{JMLR:v25:22-1258,
author = {Patrick Bl{{\"o}}baum and Peter G{{\"o}}tz and Kailash Budhathoki and Atalanti A. Mastakouri and Dominik Janzing},
title = {DoWhy-GCM: An Extension of DoWhy for Causal Inference in Graphical Causal Models},
journal = {Journal of Machine Learning Research},
year = {2024},
volume = {25},
number = {147},
pages = {1--7},
url = {http://jmlr.org/papers/v25/22-1258.html}
}

View file

@@ -13,7 +13,7 @@
"id": "5b1241d5-010d-4532-9889-f719f30f19c2",
"metadata": {},
"source": [
"This notebook demonstrates the usage of the intrinsic causal influence (ICC) method, a way to estimate causal influence in a system. A common question in many applications is: \"What is the causal influence of node X on node Y?\" Here, \"causal influence\" can be defined in various ways. One approach could be to measure the interventional influence, which asks, \"How much does node Y change if I intervene on node X?\" or, from a more feature relevance perspective, \"How relevant is X in describing Y?\"\n",
"This notebook demonstrates the usage of the [intrinsic causal influence (ICC) method](https://proceedings.mlr.press/v238/janzing24a.html), a way to estimate causal influence in a system. A common question in many applications is: \"What is the causal influence of node X on node Y?\" Here, \"causal influence\" can be defined in various ways. One approach could be to measure the interventional influence, which asks, \"How much does node Y change if I intervene on node X?\" or, from a more feature relevance perspective, \"How relevant is X in describing Y?\"\n",
"\n",
"In the following we focus on a particular type of causal influence, which is based on decomposing the generating process into mechanisms in place at each node, formalized by the respective causal mechanism. Then, ICC quantifies for each node the amount of uncertainty of the target that can be traced back to the respective mechanism. Hence, nodes that are deterministically computed from their parents obtain zero contribution. This concept may initially seem complex, but it is based on a simple idea:\n",
"\n",
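The decomposition described in the notebook cell above can be illustrated with a minimal, self-contained sketch (not the DoWhy implementation, which uses Shapley values and supports arbitrary uncertainty measures): in a linear SCM with independent noise terms, choosing variance as the uncertainty measure makes the target's variance split additively into one contribution per node's own noise term, i.e. per mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy linear SCM: X = Nx,  Y = 2*X + Ny,  Z = 3*Y + Nz
nx = rng.normal(0.0, 1.0, n)  # noise term ("mechanism") of X
ny = rng.normal(0.0, 1.0, n)  # noise term of Y
nz = rng.normal(0.0, 1.0, n)  # noise term of Z
x = nx
y = 2 * x + ny
z = 3 * y + nz

# Substituting the structural equations gives Z = 6*Nx + 3*Ny + Nz, so with
# variance as the uncertainty measure the intrinsic contributions to Var(Z)
# are 36, 9 and 1 (out of a total of 46).
contrib = {"X": 36 * nx.var(), "Y": 9 * ny.var(), "Z": nz.var()}
total = sum(contrib.values())
shares = {node: c / total for node, c in contrib.items()}
```

Note that a node computed deterministically from its parents has a zero noise term and hence a zero intrinsic contribution, matching the description above; `shares` here comes out to roughly 78% for X, 20% for Y and 2% for Z's own noise.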

View file

@@ -8,8 +8,8 @@ By quantifying intrinsic causal influence, we answer the question:
Naturally, descendants will have a zero intrinsic causal influence on the target node. This method is based on the paper:
Dominik Janzing, Patrick Blöbaum, Lenon Minorics, Philipp Faller, Atalanti Mastakouri. `Quantifying intrinsic causal contributions via structure preserving interventions <https://arxiv.org/abs/2007.00714>`_
arXiv:2007.00714, 2021
Dominik Janzing, Patrick Blöbaum, Atalanti A Mastakouri, Philipp M Faller, Lenon Minorics, Kailash Budhathoki. `Quantifying intrinsic causal contributions via structure preserving interventions <https://proceedings.mlr.press/v238/janzing24a.html>`_
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:2188-2196, 2024
Let's consider an example from the paper to understand the type of influence being measured here. Imagine a schedule of
three trains, ``Train A, Train B`` and ``Train C``, where the departure time of ``Train C`` depends on the arrival time of ``Train B``,

View file

@@ -17,8 +17,8 @@ Additionally, for explaining changes in the mean of the target variable (or othe
DoWhy implements a multiply-robust causal change attribution method, which uses a combination of regression and re-weighting
to make the final estimates less sensitive to estimation error. This method was presented in the following paper:
Quintas-Martinez, V., Bahadori, M. T., Santiago, E., Mu, J., Janzing, D., and Heckerman, D. `Multiply-Robust Causal Change Attribution <https://arxiv.org/abs/2404.08839>`
Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024.
Victor Quintas-Martinez, Mohammad Taha Bahadori, Eduardo Santiago, Jeff Mu, Dominik Janzing, David Heckerman. `Multiply-Robust Causal Change Attribution <https://arxiv.org/abs/2404.08839>`_
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:41821--41840, 2024.
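The regression half of the idea can be sketched in a few lines (a simplified illustration, not the paper's estimator — the multiply-robust method additionally combines such regressions with re-weighting to reduce sensitivity to estimation error). The change in E[Y] between an "old" and a "new" dataset is split into the part explained by the shift in P(X) and the part explained by the change in the mechanism P(Y|X):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# "Old" regime: X ~ N(0, 1), Y = 2*X + noise  ->  E_old[Y] = 0
x_old = rng.normal(0.0, 1.0, n)
y_old = 2 * x_old + rng.normal(0.0, 1.0, n)

# "New" regime: X shifts to N(1, 1) AND the mechanism changes to Y = 3*X + noise
x_new = rng.normal(1.0, 1.0, n)
y_new = 3 * x_new + rng.normal(0.0, 1.0, n)

# Fit the old regression function m_old(x) = E_old[Y | X = x] (linear here).
slope, intercept = np.polyfit(x_old, y_old, deg=1)

def m_old(x):
    return slope * x + intercept

# Regression-based attribution of the total change E_new[Y] - E_old[Y]:
#   shift in P(X):    E_new[m_old(X)] - E_old[Y]   (~ 2 in this example)
#   change in P(Y|X): E_new[Y] - E_new[m_old(X)]   (~ 1 in this example)
dist_shift = m_old(x_new).mean() - y_old.mean()
mechanism_change = y_new.mean() - m_old(x_new).mean()
```

The two terms telescope, so they sum exactly to the total observed change in the mean; the paper's contribution is making each term robust to errors in the fitted regression by also estimating it via re-weighting.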
How to use it
^^^^^^^^^^^^^^