From 1c218bd753c79d8b987f4d7152e9234a85ef42c0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 08:42:12 -0800
Subject: [PATCH 01/10] Update community-videos.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Changing sort order on community videos

Signed-off-by: Emre Kıcıman
---
 community-videos.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/community-videos.md b/community-videos.md
index 51beac8..2435a12 100644
--- a/community-videos.md
+++ b/community-videos.md
@@ -3,4 +3,4 @@ layout: page
 permalink: community/videos.html
 ---
-{% include articles.html collection="community_videos" %}
\ No newline at end of file
+{% include articles.html collection="community_videos" sort-order="desc" %}

From 08f681f83389403e3b327aa79703251755d297db Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 08:47:14 -0800
Subject: [PATCH 02/10] Create 03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Added Kun Zhang's talk abstract to videos list

Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 _community_videos/03_talk_series.md

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
new file mode 100644
index 0000000..1ccbab7
--- /dev/null
+++ b/_community_videos/03_talk_series.md
@@ -0,0 +1,20 @@
+---
+title: Causal Representation Learning: Discovery of the Hidden World
+slug: pywhy-video
+layout: page
+description: >-
+  PyWhy Causality in Practice - Causal Representation Learning: Discovery of the Hidden World
+summary: >-
+  Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies properties of minimal changes and independent changes of causal representations, and in this talk, we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, involving independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. We demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process, with various examples and applications.
+
+  The talk will include a description of the causal-learn package in PyWhy. Learn more: https://github.com/py-why/causal-learn
+
+  Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems including transfer learning, representation learning, and reinforcement learning from a causal perspective. He has been frequently serving as a senior area chair, area chair, or senior program committee member for major conferences in machine learning or artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
+
+
+---

From e7de286de84537947e7f2bcd2113905af7ab55f0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 08:49:34 -0800
Subject: [PATCH 03/10] Update 03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

fixed colon quoting

Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
index 1ccbab7..70f8d5b 100644
--- a/_community_videos/03_talk_series.md
+++ b/_community_videos/03_talk_series.md
@@ -1,5 +1,5 @@
 ---
-title: Causal Representation Learning: Discovery of the Hidden World
+title: "Causal Representation Learning: Discovery of the Hidden World"
 slug: pywhy-video
 layout: page
 description: >-

From cb9743e5baf37dd3b7f9f19991221241a1c6b35f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 08:50:12 -0800
Subject: [PATCH 04/10] Update 03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Added Kun's name to title line

Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
index 70f8d5b..9690875 100644
--- a/_community_videos/03_talk_series.md
+++ b/_community_videos/03_talk_series.md
@@ -3,7 +3,7 @@ title: "Causal Representation Learning: Discovery of the Hidden World"
 slug: pywhy-video
 layout: page
 description: >-
-  PyWhy Causality in Practice - Causal Representation Learning: Discovery of the Hidden World
+  PyWhy Causality in Practice - Causal Representation Learning: Discovery of the Hidden World - Kun Zhang
 summary: >-
   Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies properties of minimal changes and independent changes of causal representations, and in this talk, we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, involving independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. We demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process, with various examples and applications.

From cd42fdd628bffe35eb8ac70281926535c61499ee Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 08:54:27 -0800
Subject: [PATCH 05/10] Update 03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
index 9690875..1d2c0a8 100644
--- a/_community_videos/03_talk_series.md
+++ b/_community_videos/03_talk_series.md
@@ -12,9 +12,6 @@ summary: >-
   Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems including transfer learning, representation learning, and reinforcement learning from a causal perspective. He has been frequently serving as a senior area chair, area chair, or senior program committee member for major conferences in machine learning or artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
-
+  Join the live seminar on January 29, 2024 at Monday 8:00am pacific / 11:00am eastern / 4:00pm GMT / 9:30pm IST.
 ---

From 9e56fd0d361fa6b7f5caf28848e93e03ce22c27c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 09:10:50 -0800
Subject: [PATCH 06/10] Rename 00_talk_series.md to 99_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Moved the description of the talk series to '99' so that it shows up at the top of the sort order

Signed-off-by: Emre Kıcıman
---
 _community_videos/{00_talk_series.md => 99_talk_series.md} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename _community_videos/{00_talk_series.md => 99_talk_series.md} (100%)

diff --git a/_community_videos/00_talk_series.md b/_community_videos/99_talk_series.md
similarity index 100%
rename from _community_videos/00_talk_series.md
rename to _community_videos/99_talk_series.md

From d1d45106040aa688a46f5f16307719c2f444baf8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 09:37:58 -0800
Subject: [PATCH 07/10] Update _community_videos/03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: fverac
Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
index 1d2c0a8..fa5e776 100644
--- a/_community_videos/03_talk_series.md
+++ b/_community_videos/03_talk_series.md
@@ -12,6 +12,6 @@ summary: >-
   Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems including transfer learning, representation learning, and reinforcement learning from a causal perspective. He has been frequently serving as a senior area chair, area chair, or senior program committee member for major conferences in machine learning or artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
-  Join the live seminar on January 29, 2024 at Monday 8:00am pacific / 11:00am eastern / 4:00pm GMT / 9:30pm IST.
+  Join the live seminar on January 29, 2024 at Monday 8:00am pacific / 11:00am eastern / 4:00pm GMT / 9:30pm IST.
 ---

From 72dd64681846881234e82f0400345eda1bf62e05 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 09:38:49 -0800
Subject: [PATCH 08/10] Update _community_videos/03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: fverac
Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
index fa5e776..0afce1d 100644
--- a/_community_videos/03_talk_series.md
+++ b/_community_videos/03_talk_series.md
@@ -9,7 +9,7 @@ summary: >-
   The talk will include a description of the causal-learn package in PyWhy. Learn more: https://github.com/py-why/causal-learn
-  Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems including transfer learning, representation learning, and reinforcement learning from a causal perspective. He has been frequently serving as a senior area chair, area chair, or senior program committee member for major conferences in machine learning or artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
+  Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems including transfer learning, representation learning, and reinforcement learning from a causal perspective. He has been frequently serving as a senior area chair, area chair, or senior program committee member for major conferences in machine learning or artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
   Join the live seminar on January 29, 2024 at Monday 8:00am pacific / 11:00am eastern / 4:00pm GMT / 9:30pm IST.
 ---

From f115785858ab6fa6e2bc2a94877270bf9838b630 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 09:39:12 -0800
Subject: [PATCH 09/10] Update _community_videos/03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: fverac
Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
index 0afce1d..4383dbf 100644
--- a/_community_videos/03_talk_series.md
+++ b/_community_videos/03_talk_series.md
@@ -7,7 +7,7 @@ description: >-
 summary: >-
   Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies properties of minimal changes and independent changes of causal representations, and in this talk, we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, involving independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. We demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process, with various examples and applications.
-  The talk will include a description of the causal-learn package in PyWhy. Learn more: https://github.com/py-why/causal-learn
+  The talk will include a description of the causal-learn package in PyWhy. Learn more: https://github.com/py-why/causal-learn
   Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems including transfer learning, representation learning, and reinforcement learning from a causal perspective. He has been frequently serving as a senior area chair, area chair, or senior program committee member for major conferences in machine learning or artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
   Join the live seminar on January 29, 2024 at Monday 8:00am pacific / 11:00am eastern / 4:00pm GMT / 9:30pm IST.
 ---

From f4f9556fe24dbebc5a7065ecf75a86bbbb7496a1 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Emre=20K=C4=B1c=C4=B1man?=
Date: Mon, 22 Jan 2024 09:41:56 -0800
Subject: [PATCH 10/10] Update _community_videos/03_talk_series.md
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Co-authored-by: fverac
Signed-off-by: Emre Kıcıman
---
 _community_videos/03_talk_series.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_community_videos/03_talk_series.md b/_community_videos/03_talk_series.md
index 4383dbf..ed4167a 100644
--- a/_community_videos/03_talk_series.md
+++ b/_community_videos/03_talk_series.md
@@ -8,7 +8,7 @@ summary: >-
   Causality is a fundamental notion in science, engineering, and even in machine learning. Causal representation learning aims to reveal the underlying high-level hidden causal variables and their relations. It can be seen as a special case of causal discovery, whose goal is to recover the underlying causal structure or causal model from observational data. The modularity property of a causal system implies properties of minimal changes and independent changes of causal representations, and in this talk, we show how such properties make it possible to recover the underlying causal representations from observational data with identifiability guarantees: under appropriate assumptions, the learned representations are consistent with the underlying causal process. Various problem settings are considered, involving independent and identically distributed (i.i.d.) data, temporal data, or data with distribution shift as input. We demonstrate when identifiable causal representation learning can benefit from flexible deep learning and when suitable parametric assumptions have to be imposed on the causal process, with various examples and applications.
   The talk will include a description of the causal-learn package in PyWhy. Learn more: https://github.com/py-why/causal-learn
-
+
   Speaker: Kun Zhang is currently on leave from Carnegie Mellon University (CMU), where he is an associate professor of philosophy and an affiliate faculty in the machine learning department; he is working as a professor and the acting chair of the machine learning department and the director of the Center for Integrative AI at Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He develops methods for making causality transparent by torturing various kinds of data and investigates machine learning problems including transfer learning, representation learning, and reinforcement learning from a causal perspective. He has been frequently serving as a senior area chair, area chair, or senior program committee member for major conferences in machine learning or artificial intelligence, including UAI, NeurIPS, ICML, IJCAI, AISTATS, and ICLR. He was a general & program co-chair of the first Conference on Causal Learning and Reasoning (CLeaR 2022), a program co-chair of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), and is a general co-chair of UAI 2023.
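
For readers who want to try the causal-learn package referenced in the talk abstract above, the following is a minimal sketch only: it assumes causal-learn is installed (pip install causal-learn) and uses what its README documents as the PC-algorithm entry point; argument defaults and the attributes of the returned object are assumptions to double-check against https://github.com/py-why/causal-learn.

    # Minimal causal-discovery sketch with causal-learn (assumed installed via
    # `pip install causal-learn`); verify details against the package docs.
    import numpy as np
    from causallearn.search.ConstraintBased.PC import pc

    # Simulate observational data from a small linear chain X0 -> X1 -> X2.
    rng = np.random.default_rng(0)
    n = 2000
    x0 = rng.normal(size=n)
    x1 = 0.8 * x0 + 0.3 * rng.normal(size=n)
    x2 = 0.7 * x1 + 0.3 * rng.normal(size=n)
    data = np.column_stack([x0, x1, x2])

    # Run the PC algorithm; the result wraps the recovered graph (a CPDAG).
    cg = pc(data)

    # Inspect the learned structure (attribute name `G` assumed per the docs).
    print(cg.G)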