A review of uncertainty quantification in deep learning: Techniques, applications and challenges
Uncertainty quantification (UQ) methods play a pivotal role in reducing the impact of
uncertainties during both optimization and decision making processes. They have been …
Priors in Bayesian deep learning: A review
While the choice of prior is one of the most critical parts of the Bayesian inference workflow,
recent Bayesian deep learning models have often fallen back on vague priors, such as …
Efficient continual learning with modular networks and task-driven priors
Existing literature in Continual Learning (CL) has focused on overcoming catastrophic
forgetting, the inability of the learner to recall how to perform tasks observed in the past …
Adaptive compositional continual meta-learning
This paper focuses on continual meta-learning, where few-shot tasks are heterogeneous
and sequentially available. Recent works use a mixture model for meta-knowledge to deal …
Dangers of Bayesian model averaging under covariate shift
Approximate Bayesian inference for neural networks is considered a robust alternative to
standard training, often providing good performance on out-of-distribution data. However …
Same state, different task: Continual reinforcement learning without interference
Continual Learning (CL) considers the problem of training an agent sequentially on a set of
tasks while seeking to retain performance on all previous tasks. A key challenge in CL is …
Continual learning using a Bayesian nonparametric dictionary of weight factors
Naively trained neural networks tend to experience catastrophic forgetting in sequential task
settings, where data from previous tasks are unavailable. A number of methods, using …
Online continual learning through unsupervised mutual information maximization
Catastrophic forgetting remains a challenge for artificial learning systems, especially in the
case of online learning, where task information is unavailable. This work proposes a novel …
A continual learning framework for uncertainty-aware interactive image segmentation
Deep learning models have achieved state-of-the-art performance in semantic image
segmentation, but the results provided by fully automatic algorithms are not always …
Hierarchically structured task-agnostic continual learning
One notable weakness of current machine learning algorithms is the poor ability of models
to solve new problems without forgetting previously acquired knowledge. The Continual …