A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations

H Cheng, M Zhang, JQ Shi - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
Modern deep neural networks, particularly recent large language models, come with
massive model sizes that require significant computational and storage resources. To …

Knowledge distillation on graphs: A survey

Y Tian, S Pei, X Zhang, C Zhang, N Chawla - ACM Computing Surveys, 2023 - dl.acm.org
Graph Neural Networks (GNNs) have received significant attention for demonstrating their
capability to handle graph data. However, they are difficult to deploy in resource …

Evaluating post-hoc explanations for graph neural networks via robustness analysis

J Fang, W Liu, Y Gao, Z Liu, A Zhang… - Advances in Neural …, 2024 - proceedings.neurips.cc
This work studies the evaluation of explanations for graph neural networks (GNNs), which is crucial to the credibility of post-hoc explainability in practical usage. Conventional evaluation …

Unleashing the power of graph data augmentation on covariate distribution shift

Y Sui, Q Wu, J Wu, Q Cui, L Li, J Zhou… - Advances in Neural …, 2024 - proceedings.neurips.cc
The issue of distribution shifts is emerging as a critical concern in graph representation
learning. From the perspective of invariant learning and stable learning, a recently well …

S2GAE: Self-supervised graph autoencoders are generalizable learners with graph masking

Q Tan, N Liu, X Huang, SH Choi, L Li, R Chen… - Proceedings of the …, 2023 - dl.acm.org
Self-supervised learning (SSL) has been demonstrated to be effective in pre-training models
that can be generalized to various downstream tasks. Graph Autoencoder (GAE), an …

Handling distribution shifts on graphs: An invariance perspective

Q Wu, H Zhang, J Yan, D Wipf - arXiv preprint arXiv:2202.02466, 2022 - arxiv.org
There is increasing evidence of neural networks' sensitivity to distribution shifts, bringing research on out-of-distribution (OOD) generalization into the spotlight …

Chasing sparsity in vision transformers: An end-to-end exploration

T Chen, Y Cheng, Z Gan, L Yuan… - Advances in Neural …, 2021 - proceedings.neurips.cc
Vision transformers (ViTs) have recently surged in popularity, but their enormous model sizes and training costs remain daunting. Conventional post-training pruning often …

Structure-free graph condensation: From large-scale graphs to condensed graph-free data

X Zheng, M Zhang, C Chen… - Advances in …, 2024 - proceedings.neurips.cc
Graph condensation, which reduces the size of a large-scale graph by synthesizing a small-scale condensed graph as its substitute, has immediate benefits for various graph learning …

Trustworthy graph neural networks: Aspects, methods and trends

H Zhang, B Wu, X Yuan, S Pan, H Tong… - arXiv preprint arXiv …, 2022 - arxiv.org
Graph neural networks (GNNs) have emerged as a series of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications like …

The lottery tickets hypothesis for supervised and self-supervised pre-training in computer vision models

T Chen, J Frankle, S Chang, S Liu… - Proceedings of the …, 2021 - openaccess.thecvf.com
The computer vision world has been regaining enthusiasm for various pre-trained models,
including both classical ImageNet supervised pre-training and recently emerged self …