Mix and match: A novel FPGA-centric deep neural network quantization framework. SE Chang, Y Li, M Sun, R Shi, HKH So, X Qian, Y Wang, X Lin. 2021 IEEE International Symposium on High-Performance Computer Architecture …, 2021. Cited by 117.
Language model compression with weighted low-rank factorization. YC Hsu, T Hua, S Chang, Q Lou, Y Shen, H Jin. arXiv preprint arXiv:2207.00112, 2022. Cited by 84.
FILM-QNN: Efficient FPGA acceleration of deep neural networks with intra-layer, mixed-precision quantization. M Sun, Z Li, A Lu, Y Li, SE Chang, X Ma, X Lin, Z Fang. Proceedings of the 2022 ACM/SIGDA International Symposium on Field …, 2022. Cited by 76.
Sparse progressive distillation: Resolving overfitting under pretrain-and-finetune paradigm. S Huang, D Xu, IEH Yen, Y Wang, SE Chang, B Li, S Chen, M Xie, ... arXiv preprint arXiv:2110.08190, 2021. Cited by 31.
RMSMP: A novel deep neural network quantization framework with row-wise mixed schemes and multiple precisions. SE Chang, Y Li, M Sun, W Jiang, S Liu, Y Wang, X Lin. Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021. Cited by 19.
Latent feature lasso. IEH Yen, WC Lee, SE Chang, AS Suggala, SD Lin, P Ravikumar. International Conference on Machine Learning, 3949-3957, 2017. Cited by 11.
MSP: An FPGA-specific mixed-scheme, multi-precision deep neural network quantization framework. SE Chang, Y Li, M Sun, W Jiang, R Shi, X Lin, Y Wang. arXiv preprint arXiv:2009.07460, 2020. Cited by 10.
You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding. G Yuan, SE Chang, Q Jin, A Lu, Y Li, Y Wu, Z Kong, Y Xie, P Dong, M Qin, ... European Conference on Computer Vision, 34-51, 2022. Cited by 4.
ESRU: Extremely Low-Bit and Hardware-Efficient Stochastic Rounding Unit Design for Low-Bit DNN Training. SE Chang, G Yuan, A Lu, M Sun, Y Li, X Ma, Z Li, Y Xie, M Qin, X Lin, ... 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1-6, 2023. Cited by 3.
MixLasso: Generalized mixed regression via convex atomic-norm regularization. IEH Yen, WC Lee, K Zhong, SE Chang, PK Ravikumar, SD Lin. Advances in Neural Information Processing Systems 31, 2018. Cited by 3.
SDA: Low-Bit Stable Diffusion Acceleration on Edge FPGAs. G Yang, Y Xie, ZJ Xue, SE Chang, Y Li, P Dong, J Lei, W Xie, Y Wang, ... 2024 34th International Conference on Field-Programmable Logic and …, 2024. Cited by 1.
ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization Framework for FPGA. SE Chang, Y Li, M Sun, Y Wang, X Lin. arXiv preprint arXiv:2111.00155, 2021. Cited by 1.
Learning tensor latent features. SE Chang, X Zheng, IE Yen, P Ravikumar, R Yu. arXiv preprint arXiv:1810.04754, 2018. Cited by 1.
Fully Open Source Moxin-7B Technical Report. P Zhao, X Shen, Z Kong, Y Shen, SE Chang, T Rupprecht, L Lu, E Nan, ... arXiv preprint arXiv:2412.06845, 2024.
Digital avatars: Framework development and their evaluation. T Rupprecht, SE Chang, Y Wu, L Lu, E Nan, C Li, C Lai, Z Li, Z Hu, Y He, ... arXiv preprint arXiv:2408.04068, 2024.
SuperFlow: A Fully-Customized RTL-to-GDS Design Automation Flow for Adiabatic Quantum-Flux-Parametron Superconducting Circuits. Y Xie, P Dong, G Yuan, Z Li, M Zabihi, C Wu, SE Chang, X Zhang, X Lin, ... 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1-6, 2024.
Hardware-efficient stochastic rounding unit design for DNN training: Late breaking results. SE Chang, G Yuan, A Lu, M Sun, Y Li, X Ma, Z Li, Y Xie, M Qin, X Lin, ... Proceedings of the 59th ACM/IEEE Design Automation Conference, 1396-1397, 2022.
Efficient Tensor Decomposition with Boolean Factors. SE Chang, X Zheng, IEH Yen, P Ravikumar, R Yu. arXiv preprint arXiv:1810.04754, 2018.