Revisiting zero-shot abstractive summarization in the era of large language models from the perspective of position bias
We characterize and study zero-shot abstractive summarization in Large Language Models
(LLMs) by measuring position bias, which we propose as a general formulation of the more …
Reciprocal learning
We demonstrate that numerous machine learning algorithms are specific instances of one
single paradigm: reciprocal learning. These instances range from active learning over multi …
Most influential subset selection: Challenges, promises, and beyond
How can we attribute the behaviors of machine learning models to their training data? While
the classic influence function sheds light on the impact of individual samples, it often fails to …
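For reference, the classic influence function this snippet invokes is standardly written as follows; this is the textbook form (Koh and Liang, 2017), not an equation quoted from the paper itself:

\[
\mathcal{I}(z, z_{\text{test}}) \;=\; -\,\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top} H_{\hat\theta}^{-1}\, \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta),
\]

where \(\hat\theta\) denotes the trained parameters; the score estimates how the loss at \(z_{\text{test}}\) changes when training sample \(z\) is infinitesimally upweighted.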
Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models
A core data-centric learning challenge is the identification of training samples that are
detrimental to model performance. Influence functions serve as a prominent tool for this task …
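A minimal sketch of the general idea the title suggests: recast detrimental-sample identification as outlier detection in per-sample gradient space, avoiding Hessian inversion. The model attribute (model.fc), the last-layer restriction, and the diagonal-covariance outlier score are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
import torch
import torch.nn.functional as F

def per_sample_gradients(model, xs, ys):
    """Gradient of the loss w.r.t. the final layer, one row per sample."""
    grads = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Restrict to the final linear layer (assumed to be `model.fc`)
        # to keep the gradient vectors small.
        grads.append(model.fc.weight.grad.detach().flatten().numpy())
    return np.stack(grads)

def gradient_outlier_scores(grad_matrix):
    """Distance of each sample's gradient from the mean gradient,
    scaled per dimension; large scores mark gradient-space outliers."""
    centered = grad_matrix - grad_matrix.mean(axis=0)
    var = centered.var(axis=0) + 1e-8  # diagonal-covariance assumption
    return np.sqrt(((centered ** 2) / var).sum(axis=1))
```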
Revisit, Extend, and Enhance Hessian-Free Influence Functions
Influence functions serve as crucial tools for assessing sample influence in model
interpretation, subset training set selection, noisy label detection, and more. By employing …
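The simplest Hessian-free variant replaces the inverse Hessian in the influence formula with the identity, reducing influence to a gradient inner product (as in TracIn-style methods). The sketch below shows that simplification; whether it matches this paper's specific construction is an assumption.

```python
import numpy as np

def hessian_free_influence(train_grads, test_grad):
    """train_grads: (n, d) per-sample training-loss gradients.
    test_grad: (d,) gradient of the test loss.
    Positive scores predict that upweighting the sample increases the
    test loss (detrimental); negative scores mark helpful samples."""
    return -train_grads @ test_grad

# Usage on stand-in gradients: rank samples by predicted harm.
train_grads = np.random.randn(1000, 64)
test_grad = np.random.randn(64)
scores = hessian_free_influence(train_grads, test_grad)
most_detrimental = np.argsort(scores)[-10:]  # largest predicted loss increase
```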
Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond
In recent years, training data attribution (TDA) methods have emerged as a promising
direction for the interpretability of neural networks. While research around TDA is thriving …
Data-Efficient Pretraining with Group-Level Data Influence Modeling
Data-efficient pretraining has shown tremendous potential to elevate scaling laws. This
paper argues that effective pretraining data should be curated at the group level, treating a …
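For context, the standard first-order extension of influence functions from single samples to groups (Koh et al., 2019) simply sums the member gradients; the group-level influence modeling proposed here may well go beyond this, so treat the formula only as the baseline it builds on:

\[
\mathcal{I}(G, z_{\text{test}}) \;=\; -\,\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top} H_{\hat\theta}^{-1} \sum_{z \in G} \nabla_\theta L(z, \hat\theta).
\]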
TLXML: Task-Level Explanation of Meta-Learning via Influence Functions
The scheme of adaptation via meta-learning is seen as an ingredient for solving the problem
of data shortage or distribution shift in real-world applications, but it also brings the new risk …
Addressing Delayed Feedback in Conversion Rate Prediction via Influence Functions
In the realm of online digital advertising, conversion rate (CVR) prediction plays a pivotal
role in maximizing revenue under cost-per-conversion (CPA) models, where advertisers are …
Salutary Labeling with Zero Human Annotation
W Xiao, H Liu - arXiv preprint arXiv:2405.17627, 2024 - arxiv.org
Active learning strategically selects informative unlabeled data points and queries their
ground truth labels for model training. The prevailing assumption underlying this machine …
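A minimal sketch of the conventional active-learning loop the snippet's first sentence describes (least-confident uncertainty sampling); this is the prevailing setup the paper starts from, not its salutary-labeling method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_round(model, X_pool, batch_size=10):
    """Pick the pool points whose predicted class probabilities are
    least confident (top-class probability closest to uniform)."""
    probs = model.predict_proba(X_pool)
    confidence = probs.max(axis=1)               # top-class probability
    return np.argsort(confidence)[:batch_size]   # least confident first

# One round: fit on the labeled set, then select points to query.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(50, 5)), rng.integers(0, 2, size=50)
X_pool = rng.normal(size=(500, 5))
model = LogisticRegression().fit(X_lab, y_lab)
query_idx = uncertainty_sampling_round(model, X_pool)
# In practice, an oracle (human annotator) would now label X_pool[query_idx].
```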