Generalizing to unseen domains: A survey on domain generalization

J Wang, C Lan, C Liu, Y Ouyang, T Qin… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
Machine learning systems generally assume that the training and testing distributions are
the same. In practice this assumption often fails, so a key requirement is to develop models that can generalize to unseen …

Causal representation learning for out-of-distribution recommendation

W Wang, X Lin, F Feng, X He, M Lin… - Proceedings of the ACM …, 2022 - dl.acm.org
Modern recommender systems learn user representations from historical interactions, which
suffer from the problem of user feature shifts, such as an income increase. Historical …

A review of the role of causality in developing trustworthy AI systems

N Ganguly, D Fazlija, M Badar, M Fisichella… - arXiv preprint arXiv …, 2023 - arxiv.org
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that
governs human understanding of the real world. Consequently, these models do not …

Adversarial machine learning: Bayesian perspectives

D Rios Insua, R Naveiro, V Gallego… - Journal of the American …, 2023 - Taylor & Francis
Adversarial Machine Learning (AML) is emerging as a major field aimed at
protecting Machine Learning (ML) systems against security threats: in certain scenarios …

Learning causal semantic representation for out-of-distribution prediction

C Liu, X Sun, J Wang, H Tang, T Li… - Advances in …, 2021 - proceedings.neurips.cc
Conventional supervised learning methods, especially deep ones, are found to be sensitive
to out-of-distribution (OOD) examples, largely because the learned representation mixes the …

CausalAdv: Adversarial robustness through the lens of causality

Y Zhang, M Gong, T Liu, G Niu, X Tian, B Han… - arXiv preprint arXiv …, 2021 - arxiv.org
The adversarial vulnerability of deep neural networks has attracted significant attention in
machine learning. As causal reasoning is naturally suited to modelling distribution change, it is …

Certified robustness against natural language attacks by causal intervention

H Zhao, C Ma, X Dong, AT Luu… - International …, 2022 - proceedings.mlr.press
Deep learning models have achieved great success in many fields, yet they are vulnerable
to adversarial examples. This paper follows a causal perspective to look into the adversarial …

Explicit tradeoffs between adversarial and natural distributional robustness

M Moayeri, K Banihashem… - Advances in Neural …, 2022 - proceedings.neurips.cc
Several existing works study either adversarial or natural distributional robustness of deep
neural networks separately. In practice, however, models need to enjoy both types of …

Causal GraphSAGE: A robust graph method for classification based on causal sampling

T Zhang, HR Shan, MA Little - Pattern Recognition, 2022 - Elsevier
GraphSAGE is a widely used graph neural network for classification, which generates node
embeddings in two steps: sampling and aggregation. In this paper, we introduce causal …
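The two steps named in this snippet can be illustrated with a minimal, hypothetical sketch: uniform neighbor sampling followed by mean aggregation. This is plain GraphSAGE-style embedding on a toy graph, not the causal sampling variant the paper proposes; all function names, features, and the example graph are illustrative assumptions.

```python
import random

def sample_neighbors(adj, node, k, rng):
    """Step 1: uniformly sample up to k neighbors of `node` (illustrative)."""
    neighbors = adj[node]
    if len(neighbors) <= k:
        return list(neighbors)
    return rng.sample(neighbors, k)

def mean_aggregate(features, node, sampled):
    """Step 2: average the node's own features with the sampled neighbors'."""
    vecs = [features[node]] + [features[n] for n in sampled]
    dim = len(features[node])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Toy undirected graph: 0-1, 0-2, 1-2, with 2-dim node features.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}

rng = random.Random(0)
sampled = sample_neighbors(adj, 0, k=2, rng=rng)
embedding = mean_aggregate(features, 0, sampled)
print(embedding)  # one-hop embedding for node 0
```

A real implementation would stack several such sampling/aggregation layers with learned weights; the causal variant described in the paper changes how the neighbor set in step 1 is chosen.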

Backdoor defense via deconfounded representation learning

Z Zhang, Q Liu, Z Wang, Z Lu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Deep neural networks (DNNs) are recently shown to be vulnerable to backdoor attacks,
where attackers embed hidden backdoors in the DNN model by injecting a few poisoned …