GenImage: A million-scale benchmark for detecting AI-generated image. M Zhu, H Chen, Q Yan, X Huang, G Lin, W Li, Z Tu, H Hu, J Hu, Y Wang. Advances in Neural Information Processing Systems 36, 2024. Cited by 99.
Design space exploration of neural network activation function circuits. T Yang, Y Wei, Z Tu, H Zeng, MA Kinsy, N Zheng, P Ren. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018. Cited by 72.
AdaBin: Improving binary neural networks with adaptive binary sets. Z Tu, X Chen, P Ren, Y Wang. European Conference on Computer Vision, 379-395, 2022. Cited by 64.
NTIRE 2023 challenge on image denoising: Methods and results. Y Li, Y Zhang, R Timofte, L Van Gool, Z Tu, K Du, H Wang, H Chen, W Li, et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. Cited by 47.
A survey on transformer compression. Y Tang, Y Wang, J Guo, Z Tu, K Han, H Hu, D Tao. arXiv preprint arXiv:2402.05964, 2024. Cited by 27.
Toward accurate post-training quantization for image super resolution. Z Tu, J Hu, H Chen, Y Wang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. Cited by 18.
CBQ: Cross-block quantization for large language models. X Ding, X Liu, Y Zhang, Z Tu, W Li, J Hu, H Chen, Y Tang, Z Xiong, B Yin, et al. ICLR 2025; arXiv preprint arXiv:2312.07950, 2023. Cited by 8.
Data upcycling knowledge distillation for image super-resolution. Y Zhang, W Li, S Li, J Hu, H Chen, H Wang, Z Tu, W Wang, B Jing, et al. ICLR 2025; arXiv preprint arXiv:2309.14162, 2023. Cited by 6.
U-DiTs: Downsample tokens in U-shaped diffusion transformers. Y Tian, Z Tu, H Chen, J Hu, C Xu, Y Wang. NeurIPS 2024; arXiv preprint arXiv:2405.02730, 2024. Cited by 4.
Effective diffusion transformer architecture for image super-resolution. K Cheng, L Yu, Z Tu, X He, L Chen, Y Guo, M Zhu, N Wang, X Gao, J Hu. AAAI 2025; arXiv preprint arXiv:2409.19589, 2024. Cited by 2.
One step diffusion-based super-resolution with time-aware distillation. X He, H Tang, Z Tu, J Zhang, K Cheng, H Chen, Y Guo, M Zhu, N Wang, et al. arXiv preprint arXiv:2408.07476, 2024. Cited by 2.
IPT-V2: Efficient image processing transformer using hierarchical attentions. Z Tu, K Du, H Chen, H Wang, W Li, J Hu, Y Wang. arXiv preprint arXiv:2404.00633, 2024. Cited by 2.
PQ-SAM: Post-training quantization for Segment Anything Model. X Liu, X Ding, L Yu, Y Xi, W Li, Z Tu, J Hu, H Chen, B Yin, Z Xiong. European Conference on Computer Vision, 420-437, 2024. Cited by 1.
LIPT: Latency-aware image processing transformer. J Qiao, W Li, H Xie, H Chen, Y Zhou, Z Tu, J Hu, S Lin. arXiv preprint arXiv:2404.06075, 2024. Cited by 1.
CAQ: Context-aware quantization via reinforcement learning. Z Tu, J Ma, T Xia, W Zhao, P Ren, N Zheng. 2021 International Joint Conference on Neural Networks (IJCNN), 1-8, 2021. Cited by 1.