Lu Yin
Assistant Professor, CS@University of Surrey & Research Fellow, CS@TU/e
Verified email at tue.nl - Homepage
Title
Cited by
Year
Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training
S Liu, L Yin, DC Mocanu, M Pechenizkiy
[ICML 2021] International Conference on Machine Learning, 6989-7000, 2021
143 · 2021
Sparse training via boosting pruning plasticity with neuroregeneration
S Liu, T Chen, X Chen, Z Atashgahi, L Yin, H Kou, L Shen, M Pechenizkiy, ...
[NeurIPS 2021] Advances in Neural Information Processing Systems 34, 9908-9922, 2021
138 · 2021
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity
L Yin, Y Wu, Z Zhang, CY Hsieh, Y Wang, Y Jia, M Pechenizkiy, Y Liang, ...
[ICML 2024] The Forty-first International Conference on Machine Learning, 2023
56 · 2023
Dynamic Sparsity Is Channel-Level Sparsity Learner
L Yin, G Li, M Fang, L Shen, T Huang, Z Wang, V Menkovski, X Ma, ...
[NeurIPS 2023], 2023
23 · 2023
You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained Graph Tickets
T Huang, T Chen, M Fang, V Menkovski, J Zhao, L Yin, Y Pei, DC Mocanu, ...
[LoG 2022 BEST PAPER] Learning on Graphs Conference, 2022
23* · 2022
From GaLore to WeLore: How Low-Rank Weights Non-Uniformly Emerge from Low-Rank Gradients
A Jaiswal, L Yin, Z Zhang, S Liu, J Zhao, Y Tian, Z Wang
arXiv preprint arXiv:2407.11239, 2024
14 · 2024
Are Large Kernels Better Teachers than Transformers for ConvNets?
T Huang, L Yin, Z Zhang, L Shen, M Fang, M Pechenizkiy, Z Wang, S Liu
[ICML 2023] International Conference on Machine Learning, 2023
14 · 2023
Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost
L Yin, S Liu, M Fang, T Huang, V Menkovski, M Pechenizkiy
[AAAI 2023] Thirty-Seventh AAAI Conference on Artificial Intelligence, 2022
14 · 2022
Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients
Z Zhang, A Jaiswal, L Yin, S Liu, J Zhao, Y Tian, Z Wang
arXiv preprint arXiv:2407.08296, 2024
11 · 2024
Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
Z Atashgahi, X Zhang, N Kichler, S Liu, L Yin, M Pechenizkiy, R Veldhuis, ...
[TMLR] Transactions on Machine Learning Research, 2023
11 · 2023
Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs
L Yin, S Liu, A Jaiswal, S Kundu, Z Wang
[ICML 2024] The Forty-first International Conference on Machine Learning, 2023
10* · 2023
Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training
L Yin, V Menkovski, M Fang, T Huang, Y Pei, M Pechenizkiy, DC Mocanu, ...
[UAI 2022] The 38th Conference on Uncertainty in Artificial Intelligence, 2022
8 · 2022
Knowledge Elicitation using Deep Metric Learning and Psychometric Testing
L Yin, V Menkovski, M Pechenizkiy
[ECML 2020] European Conference on Machine Learning, 2020
8 · 2020
OwLore: Outlier-Weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-Tuning
P Li, L Yin, X Gao, S Liu
arXiv preprint arXiv:2405.18380, 2024
6 · 2024
FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping
A Jaiswal, B Hu, L Yin, Y Ro, S Liu, T Chen, A Akella
[EMNLP 2024], 2024
6 · 2024
Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs
L Yin, A Jaiswal, S Liu, S Kundu, Z Wang
Forty-first International Conference on Machine Learning, 2024
4 · 2024
A Structural-Clustering Based Active Learning for Graph Neural Networks
RM Fajri, Y Pei, L Yin, M Pechenizkiy
[IDA 2024] International Symposium on Intelligent Data Analysis, 2023
4 · 2023
Enhancing Adversarial Training via Reweighting Optimization Trajectory
T Huang, S Liu, T Chen, M Fang, L Shen, V Menkovski, L Yin, Y Pei, ...
[ECML PKDD 2023] European Conference on Machine Learning and Principles and …, 2023
4 · 2023
Hierarchical Semantic Segmentation using Psychometric Learning
L Yin, V Menkovski, S Liu, M Pechenizkiy
[ACML 2021 LONG ORAL] Asian Conference on Machine Learning, 2021
4 · 2021
Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning
A Bandari, L Yin, CY Hsieh, AK Jaiswal, T Chen, L Shen, R Krishna, S Liu
[EMNLP 2024], 2024
3 · 2024
Articles 1–20