Generalizing to unseen domains: A survey on domain generalization
Machine learning systems generally assume that the training and testing distributions are
the same. In practice, however, a key requirement is to develop models that can generalize to unseen …
Causal representation learning for out-of-distribution recommendation
Modern recommender systems learn user representations from historical interactions, which
suffer from the problem of user feature shifts, such as an income increase. Historical …
A review of the role of causality in developing trustworthy AI systems
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that
governs human understanding of the real world. Consequently, these models do not …
Adversarial machine learning: Bayesian perspectives
Adversarial Machine Learning (AML) is emerging as a major field aimed at
protecting Machine Learning (ML) systems against security threats: in certain scenarios …
Learning causal semantic representation for out-of-distribution prediction
Conventional supervised learning methods, especially deep ones, are found to be sensitive
to out-of-distribution (OOD) examples, largely because the learned representation mixes the …
CausalAdv: Adversarial robustness through the lens of causality
The adversarial vulnerability of deep neural networks has attracted significant attention in
machine learning. As causal reasoning is naturally suited to modelling distribution change, it is …
Certified robustness against natural language attacks by causal intervention
Deep learning models have achieved great success in many fields, yet they are vulnerable
to adversarial examples. This paper follows a causal perspective to look into the adversarial …
Explicit tradeoffs between adversarial and natural distributional robustness
Several existing works study either adversarial or natural distributional robustness of deep
neural networks separately. In practice, however, models need to enjoy both types of …
Causal GraphSAGE: A robust graph method for classification based on causal sampling
T Zhang, HR Shan, MA Little - Pattern Recognition, 2022 - Elsevier
GraphSAGE is a widely-used graph neural network for classification, which generates node
embeddings in two steps: sampling and aggregation. In this paper, we introduce causal …
Backdoor defense via deconfounded representation learning
Deep neural networks (DNNs) have recently been shown to be vulnerable to backdoor attacks,
where attackers embed hidden backdoors in the DNN model by injecting a few poisoned …