Trustworthy llms: a survey and guideline for evaluating large language models' alignment
Fedfixer: mitigating heterogeneous label noise in federated learning
Federated Learning (FL) heavily depends on label quality for its performance. However, the
label distribution among individual clients is always both noisy and heterogeneous. The …
Unmasking and improving data credibility: A study with datasets for training harmless language models
Language models have shown promise in various tasks but can be affected by undesired
data during training, fine-tuning, or alignment. For example, if some unsafe conversations …
Imprecise label learning: A unified framework for learning with various imprecise label configurations
Learning with reduced labeling standards, such as noisy labels, partial labels, and
supplementary unlabeled data, which we generically refer to as imprecise labels, is a …
Weak proxies are sufficient and preferable for fairness with missing sensitive attributes
Evaluating fairness can be challenging in practice because the sensitive attributes of data
are often inaccessible due to privacy constraints. The go-to approach that the industry …
Transferring annotator-and instance-dependent transition matrix for learning from crowds
Learning from crowds refers to settings where the annotations of training data are obtained
through crowd-sourcing services. Multiple annotators each complete their own small part of the …
Mitigating memorization of noisy labels via regularization between representations
Designing robust loss functions is popular in learning with noisy labels, but existing
designs did not explicitly consider the overfitting property of deep neural networks (DNNs) …