SalUn: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation

C Fan, J Liu, Y Zhang, E Wong, D Wei, S Liu - arxiv preprint arxiv …, 2023 - arxiv.org

Understanding and improving visual prompting: A label-mapping perspective

A Chen, Y Yao, PY Chen… - Proceedings of the …, 2023 - openaccess.thecvf.com
We revisit and advance visual prompting (VP), an input prompting technique for vision tasks.
VP can reprogram a fixed, pre-trained source model to accomplish downstream tasks in the …

Model sparsity can simplify machine unlearning

J Liu, P Ram, Y Yao, G Liu, Y Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …

Unleashing the power of data tsunami: A comprehensive survey on data assessment and selection for instruction tuning of language models

Y Qin, Y Yang, P Guo, G Li, H Shao, Y Shi, Z Xu… - arxiv preprint arxiv …, 2024 - arxiv.org
Instruction tuning plays a critical role in aligning large language models (LLMs) with human
preference. Despite the vast amount of open instruction datasets, naively training an LLM on …

When does bias transfer in transfer learning?

H Salman, S Jain, A Ilyas, L Engstrom, E Wong… - arxiv preprint arxiv …, 2022 - arxiv.org
Using transfer learning to adapt a pre-trained "source model" to a downstream "target task"
can dramatically increase performance with seemingly no downside. In this work, we …

On the trade-off of intra-/inter-class diversity for supervised pre-training

J Zhang, B Wang, Z Hu, PWW Koh… - Advances in Neural …, 2024 - proceedings.neurips.cc
Pre-training datasets are critical for building state-of-the-art machine learning models,
motivating rigorous study on their impact on downstream tasks. In this work, we study the …

Discovering bugs in vision models using off-the-shelf image generation and captioning

O Wiles, I Albuquerque, S Gowal - arxiv preprint arxiv:2208.08831, 2022 - arxiv.org
Automatically discovering failures in vision models under real-world settings remains an
open challenge. This work demonstrates how off-the-shelf, large-scale, image-to-text and …

Selectivity drives productivity: efficient dataset pruning for enhanced transfer learning

Y Zhang, Y Zhang, A Chen, J Jia, J Liu… - Advances in …, 2023 - proceedings.neurips.cc
Massive data is often considered essential for deep learning applications, but it also incurs
significant computational and infrastructural costs. Therefore, dataset pruning (DP) has …

Know your self-supervised learning: A survey on image-based generative and discriminative training

U Ozbulak, HJ Lee, B Boga, ET Anzaku, H Park… - arxiv preprint arxiv …, 2023 - arxiv.org
Although supervised learning has been highly successful in improving the state-of-the-art in
the domain of image-based computer vision in the past, the margin of improvement has …