Domain generalization: A survey
Generalization to out-of-distribution (OOD) data is a capability natural to humans yet
challenging for machines to reproduce. This is because most learning algorithms strongly …
Fairness in machine learning: A survey
When Machine Learning technologies are used in contexts that affect citizens, companies as
well as researchers need to be confident that there will not be any unexpected social …
Test-time training with masked autoencoders
Test-time training adapts to a new test distribution on the fly by optimizing a model for each
test input using self-supervision. In this paper, we use masked autoencoders for this one …
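The snippet above does state the mechanism: the model is optimized on each individual test input with a self-supervised masked-reconstruction loss before making a prediction. Below is a minimal sketch of that idea, assuming a toy MLP encoder, decoder, and fixed classifier head; the architecture, masking ratio, and step count are illustrative placeholders, not the paper's actual setup.

```python
# Minimal sketch of test-time training with a masked-reconstruction objective.
# The encoder/decoder sizes, masking ratio, and optimizer settings are assumed
# for illustration only.
import copy
import torch
import torch.nn as nn

D, H, C = 784, 256, 10          # input dim, hidden dim, num classes (assumed)

encoder = nn.Sequential(nn.Linear(D, H), nn.ReLU(), nn.Linear(H, H))
decoder = nn.Linear(H, D)       # reconstruction head for the self-supervised task
classifier = nn.Linear(H, C)    # main-task head, trained on the source domain

def masked_reconstruction_loss(enc, dec, x, mask_ratio=0.75):
    """Zero out a random subset of inputs and reconstruct the hidden part."""
    mask = (torch.rand_like(x) < mask_ratio).float()
    recon = dec(enc(x * (1.0 - mask)))
    return ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

def predict_with_test_time_training(x, steps=10, lr=1e-3):
    """Adapt a copy of the encoder on a single test input, then classify."""
    enc = copy.deepcopy(encoder)   # per-input adaptation; source weights untouched
    dec = copy.deepcopy(decoder)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        masked_reconstruction_loss(enc, dec, x).backward()
        opt.step()
    with torch.no_grad():          # the classifier head stays fixed
        return classifier(enc(x)).argmax(dim=-1)

x_test = torch.randn(1, D)         # stand-in for one test example
print(predict_with_test_time_training(x_test))
```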
Federated learning from pre-trained models: A contrastive learning approach
Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to
learn collaboratively without sharing their private data. However, excessive computation and …
Federated learning for generalization, robustness, fairness: A survey and benchmark
Federated learning has emerged as a promising paradigm for privacy-preserving
collaboration among different parties. Recently, with the popularity of federated learning, an …
FedBN: Federated learning on non-IID features via local batch normalization
The emerging paradigm of federated learning (FL) strives to enable collaborative training of
deep models on the network edge without centrally aggregating raw data and hence …
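The snippet above is cut off before the method itself. The core rule, as the FedBN paper describes it, is to exclude batch-normalization layers from server-side averaging so that each client keeps BN parameters matched to its own feature distribution. The sketch below illustrates that aggregation rule on PyTorch state dicts; the model, client count, and helper names are assumptions made only for this example.

```python
# Sketch of FedBN-style aggregation: the server averages every parameter
# except those belonging to BatchNorm layers, which stay client-specific.
# Model architecture and client setup are illustrative assumptions.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
    )

def is_bn_key(model, key):
    """True if a state-dict key belongs to a BatchNorm module."""
    module_name = key.rsplit(".", 1)[0]
    module = dict(model.named_modules()).get(module_name)
    return isinstance(module, nn.modules.batchnorm._BatchNorm)

def fedbn_aggregate(client_models):
    """Average all non-BN parameters across clients; leave BN layers local."""
    reference = client_models[0]
    shared = {}
    for key in reference.state_dict():
        if is_bn_key(reference, key):
            continue                       # BN stats/affine params are not shared
        stacked = torch.stack([m.state_dict()[key].float() for m in client_models])
        shared[key] = stacked.mean(dim=0)
    for m in client_models:                # broadcast the shared part back
        m.load_state_dict(shared, strict=False)

clients = [make_model() for _ in range(3)]
fedbn_aggregate(clients)                   # each client keeps its own BN layers
```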
A brief review of domain adaptation
Classical machine learning assumes that the training and test sets come from the same
distributions. Therefore, a model learned from the labeled training data is expected to …
Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a
labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA …
Attracting and dispersing: A simple approach for source-free domain adaptation
We propose a simple but effective source-free domain adaptation (SFDA) method. Treating
SFDA as an unsupervised clustering problem and following the intuition that local neighbors …
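The snippet above cuts off mid-intuition. One reading of the attract-and-disperse idea is that each target sample's prediction is pulled toward the predictions of its nearest neighbors in feature space and pushed away from those of other samples. The sketch below paraphrases that intuition; the feature bank, neighbor count, and dispersion weight are assumptions, and this is not the paper's exact objective.

```python
# Rough sketch of an attract/disperse objective for source-free adaptation,
# assuming predictions of nearest feature-space neighbors should agree while
# predictions of unrelated samples should not. Bank, K, and the dispersion
# weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def attract_disperse_loss(feats, probs, bank_feats, bank_probs, k=3, disperse_weight=1.0):
    """feats/probs: current batch features and softmax outputs.
    bank_feats/bank_probs: memory bank of target features and outputs."""
    sim = F.normalize(feats, dim=1) @ F.normalize(bank_feats, dim=1).t()  # cosine similarity
    nn_idx = sim.topk(k, dim=1).indices                                   # K nearest neighbors
    neighbor_probs = bank_probs[nn_idx]                                   # (B, K, C)
    attract = -(probs.unsqueeze(1) * neighbor_probs).sum(-1).mean()       # pull toward neighbors
    disperse = (probs @ bank_probs.t()).mean()                            # push away from the rest
    return attract + disperse_weight * disperse

# Toy usage with random tensors standing in for a batch and a memory bank.
B, N, D, C = 8, 64, 32, 10
feats, bank_feats = torch.randn(B, D), torch.randn(N, D)
probs = torch.softmax(torch.randn(B, C), dim=1)
bank_probs = torch.softmax(torch.randn(N, C), dim=1)
print(attract_disperse_loss(feats, probs, bank_feats, bank_probs))
```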
Rethinking federated learning with domain shift: A prototype view
Federated learning shows a bright promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …