Domain generalization: A survey

K Zhou, Z Liu, Y Qiao, T Xiang… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet
challenging for machines to reproduce. This is because most learning algorithms strongly …

Fairness in machine learning: A survey

S Caton, C Haas - ACM Computing Surveys, 2024 - dl.acm.org
When Machine Learning technologies are used in contexts that affect citizens, companies as
well as researchers need to be confident that there will not be any unexpected social …

Test-time training with masked autoencoders

Y Gandelsman, Y Sun, X Chen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Test-time training adapts to a new test distribution on the fly by optimizing a model for each
test input using self-supervision. In this paper, we use masked autoencoders for this one …

Federated learning from pre-trained models: A contrastive learning approach

Y Tan, G Long, J Ma, L Liu, T Zhou… - Advances in Neural …, 2022 - proceedings.neurips.cc
Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to
learn collaboratively without sharing their private data. However, excessive computation and …

Federated learning for generalization, robustness, fairness: A survey and benchmark

W Huang, M Ye, Z Shi, G Wan, H Li… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Federated learning has emerged as a promising paradigm for privacy-preserving
collaboration among different parties. Recently, with the popularity of federated learning, an …

FedBN: Federated learning on non-IID features via local batch normalization

X Li, M Jiang, X Zhang, M Kamp, Q Dou - arXiv preprint arXiv:2102.07623, 2021 - arxiv.org
The emerging paradigm of federated learning (FL) strives to enable collaborative training of
deep models on the network edge without centrally aggregating raw data and hence …
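FedBN's mechanism is concrete enough to sketch: clients perform standard federated averaging on all shared parameters but keep their batch-normalization parameters local, so each client retains statistics matched to its own feature distribution. The snippet below is a minimal illustration with a hypothetical dict-of-arrays model representation and function name, not the authors' implementation; BN parameters are identified by a naming convention assumed here.

```python
import numpy as np

def fedbn_aggregate(client_models, is_bn=lambda name: "bn" in name):
    """FedBN-style aggregation sketch.

    client_models: list of dicts mapping parameter name -> np.ndarray,
    one dict per client. All non-BN parameters are averaged across
    clients (plain FedAvg); parameters whose names match is_bn stay
    client-local. Returns the updated per-client models."""
    shared = [n for n in client_models[0] if not is_bn(n)]
    # FedAvg on the shared (non-BN) parameters only
    averaged = {n: np.mean([m[n] for m in client_models], axis=0)
                for n in shared}
    # Each client keeps its own BN parameters untouched
    return [{**m, **averaged} for m in client_models]
```

Keeping BN local is what lets the method handle non-IID feature shift: the averaged backbone is shared while each client's normalization stays calibrated to its own data.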

A brief review of domain adaptation

A Farahani, S Voghoei, K Rasheed… - Advances in data science …, 2021 - Springer
Classical machine learning assumes that the training and test sets come from the same
distributions. Therefore, a model learned from the labeled training data is expected to …

Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation

J Liang, D Hu, J Feng - International conference on machine …, 2020 - proceedings.mlr.press
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a
labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA …

Attracting and dispersing: A simple approach for source-free domain adaptation

S Yang, S Jui, J van de Weijer - Advances in Neural …, 2022 - proceedings.neurips.cc
We propose a simple but effective source-free domain adaptation (SFDA) method. Treating
SFDA as an unsupervised clustering problem and following the intuition that local neighbors …

Rethinking federated learning with domain shift: A prototype view

W Huang, M Ye, Z Shi, H Li, B Du - 2023 IEEE/CVF Conference …, 2023 - ieeexplore.ieee.org
Federated learning shows a bright promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …