Yixin Wang, University of Michigan

University of Arizona, Theoretical Astrophysics Program (TAP) Cosmology Initiative Lectureship Series
TITLE:
Representation Learning: A Causal Perspective
ABSTRACT:
Representation learning constructs low-dimensional
representations to summarize essential features of high-dimensional
data like images and texts. Ideally, such a representation should
efficiently capture non-spurious features of the data. It should also
be disentangled so that we can interpret what feature each of its
dimensions captures. However, these desiderata are often intuitively
defined and challenging to quantify or enforce.
In this talk, we take on a causal perspective of representation
learning. We show how desiderata of representation learning can be
formalized using counterfactual notions, enabling metrics and
algorithms that target efficient, non-spurious, and disentangled
representations of data. We discuss the theoretical underpinnings of
these algorithms and illustrate their empirical performance in both
supervised and unsupervised representation learning.
This is joint work with Michael Jordan, Kartik Ahuja, Divyat Mahajan,
and Yoshua Bengio.
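To make the counterfactual framing concrete, here is a minimal sketch (not the speaker's actual algorithm) of a counterfactual-style disentanglement check: for each latent dimension we "intervene" by perturbing that coordinate alone, decode, and record which observed features respond. In a disentangled representation, each dimension's effect concentrates on a distinct group of features. The decoder and its block structure below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def counterfactual_effects(decode, z, eps=1.0):
    """Effect of intervening on each latent dim on each observed feature."""
    base = decode(z)
    effects = []
    for j in range(z.shape[0]):
        z_cf = z.copy()
        z_cf[j] += eps                    # intervention: do(z_j := z_j + eps)
        effects.append(np.abs(decode(z_cf) - base))
    return np.stack(effects)              # shape: (latent_dim, obs_dim)

# Hypothetical linear decoder in which each latent dimension drives a
# disjoint block of 4 observed features (a perfectly disentangled case).
W = np.kron(np.eye(3), np.ones((1, 4))).T  # (12 observed, 3 latent)
decode = lambda z: W @ z

z = rng.standard_normal(3)
E = counterfactual_effects(decode, z)
# Each row of E is nonzero only on its own block of features, so the
# counterfactual effects of different latent dimensions do not overlap.
print(E.round(2))
```

With an entangled decoder (e.g. a dense random `W`), the rows of `E` would overlap, and the same perturbation-based effect matrix could serve as the basis of a disentanglement metric.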
BIO:
Wang is an assistant professor of statistics at the University of Michigan.
She works in the fields of Bayesian statistics, machine learning, and causal
inference. Previously, she was a postdoctoral researcher with Professor Michael
Jordan at the University of California, Berkeley. She completed her PhD in
statistics at Columbia, advised by Professor David Blei, and her undergraduate
studies in mathematics and computer science at the Hong Kong University of
Science and Technology. Her research has been recognized by the j-ISBA
Blackwell-Rosenbluth Award, ICSA Conference Young Researcher Award, ISBA
Savage Award Honorable Mention, ACIC Tom Ten Have Award Honorable
Mention, and INFORMS Data Mining and COPA Best Paper Awards.