URL: A representation learning benchmark for transferable uncertainty estimates
Representation learning has significantly driven the field to develop pretrained models that can act as a valuable starting point when transferring to new datasets. With the …
Probabilistic contrastive learning recovers the correct aleatoric uncertainty of ambiguous inputs
Contrastively trained encoders have recently been proven to invert the data-generating process: they encode each input, e.g., an image, into the true latent vector that generated the …
Multi-similarity contrastive learning
Given a similarity metric, contrastive methods learn a representation in which examples that are similar are pushed together and examples that are dissimilar are pulled apart …
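
The snippet above states the core contrastive idea in prose: given a similarity metric, similar pairs are brought together in embedding space and dissimilar pairs are separated. As a point of reference only, here is a minimal InfoNCE-style sketch of that generic objective in PyTorch; the function name, temperature value, and two-view batch layout are illustrative assumptions and do not reproduce the multi-similarity formulation of the cited paper.

import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # z_a, z_b: (batch, dim) embeddings of two views of the same examples.
    # Matching rows are treated as positives (pushed together); all other
    # rows in the batch act as negatives (pulled apart).
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    # Cosine similarities between every pair of embeddings in the batch.
    logits = z_a @ z_b.t() / temperature          # shape: (batch, batch)
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Usage with embeddings from any encoder (shapes are illustrative):
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
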
Enhancing Out-of-Distribution Detection Through Stochastic Embeddings in Self-supervised Learning
In recent years, self-supervised learning has played a pivotal role in advancing machine learning by allowing models to acquire meaningful representations from unlabeled data. An …
Unveiling the Potential of Probabilistic Embeddings in Self-Supervised Learning
In recent years, self-supervised learning has played a pivotal role in advancing machine learning by allowing models to acquire meaningful representations from unlabeled data. An …
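
Several of the entries above (probabilistic contrastive learning, stochastic embeddings for OOD detection, probabilistic embeddings in self-supervised learning) revolve around the same building block: an encoder head that outputs a distribution over embeddings rather than a single point. The following is a minimal, generic sketch in PyTorch, assuming a diagonal-Gaussian head with reparameterised sampling; the class name, layer sizes, and the use of the mean predicted variance as an uncertainty score are illustrative assumptions, not the specific architectures or objectives of the cited papers.

import torch
import torch.nn as nn

class ProbabilisticHead(nn.Module):
    # Illustrative probabilistic embedding head: maps a deterministic feature
    # vector to a Gaussian over the embedding space (mean + per-dimension variance).
    # The predicted variance can serve as an uncertainty score, e.g. for
    # out-of-distribution detection.
    def __init__(self, feat_dim: int = 512, embed_dim: int = 128):
        super().__init__()
        self.mu = nn.Linear(feat_dim, embed_dim)
        self.log_var = nn.Linear(feat_dim, embed_dim)

    def forward(self, features: torch.Tensor):
        mu = self.mu(features)
        log_var = self.log_var(features)
        # Reparameterised sample, usable inside a stochastic contrastive objective.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Mean predicted variance as a scalar uncertainty per input.
        uncertainty = log_var.exp().mean(dim=-1)
        return z, mu, uncertainty

# Usage on backbone features (shapes are illustrative):
features = torch.randn(8, 512)
z, mu, uncertainty = ProbabilisticHead()(features)
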
Uni-SLAM: Uncertainty-Aware Neural Implicit SLAM for Real-Time Dense Indoor Scene Reconstruction
Neural implicit fields have recently emerged as a powerful representation method for multi-view surface reconstruction due to their simplicity and state-of-the-art performance …
Uncertainties of latent representations in computer vision
M. Kirchhof - arXiv preprint arXiv:2408.14281, 2024 - arxiv.org
Uncertainty quantification is a key pillar of trustworthy machine learning. It enables safe reactions under unsafe inputs, like predicting only when the machine learning model detects …
LiST: An All-Linear-Layer Spatial-Temporal Feature Extractor with Uncertainty Estimation for RUL Prediction
In the context of Remaining Useful Life (RUL) prediction for industrial systems, the pursuit of prediction accuracy must be balanced against the hardware costs of model operation and …
Quantifying Representation Reliability in Self-Supervised Learning Models
Self-supervised learning models extract general-purpose representations from data. Quantifying the reliability of these representations is crucial, as many downstream models …
LLM2Loss: Leveraging Language Models for Explainable Model Diagnostics
S. Ardeshir - arXiv preprint arXiv:2305.03212, 2023 - arxiv.org
Trained on a vast amount of data, Large Language Models (LLMs) have achieved unprecedented success and generalization in modeling fairly complex textual inputs in the …