Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
The full acceptance of Deep Learning (DL) models in the clinical field is rather low with
respect to the quantity of high-performing solutions reported in the literature. End users are …
Generalizing to unseen domains: A survey on domain generalization
Machine learning systems generally assume that the training and testing distributions are
the same. To this end, a key requirement is to develop models that can generalize to unseen …
Causal inference in natural language processing: Estimation, prediction, interpretation and beyond
A fundamental goal of scientific research is to learn about causal relationships. However,
despite its critical role in the life and social sciences, causality has not had the same …
Fishr: Invariant gradient variances for out-of-distribution generalization
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest to learn …
Nonparametric identifiability of causal representations from unknown interventions
We study causal representation learning, the task of inferring latent causal variables and
their causal relations from high-dimensional functions (“mixtures”) of the variables. Prior …
Warm: On the benefits of weight averaged reward models
Aligning large language models (LLMs) with human preferences through reinforcement
learning from human feedback (RLHF) can lead to reward hacking, where LLMs exploit failures in the reward …
A survey on evaluation of out-of-distribution generalization
Machine learning models, while progressively advanced, rely heavily on the IID assumption,
which is often unfulfilled in practice due to inevitable distribution shifts. This renders them …
On the paradox of learning to reason from data
Logical reasoning is needed in a wide range of NLP tasks. Can a BERT model be trained
end-to-end to solve logical reasoning problems presented in natural language? We attempt …
Uncertainty quantification with pre-trained language models: A large-scale empirical analysis
Pre-trained language models (PLMs) have gained increasing popularity due to their
compelling prediction performance in diverse natural language processing (NLP) tasks …
Assaying out-of-distribution generalization in transfer learning
Since out-of-distribution generalization is a generally ill-posed problem, various proxy
targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across …