Souvik Kundu
Staff Research Scientist, Intel Labs; Ph.D - University of Southern California
Verified email at intel.com - Homepage
Title · Cited by · Year
Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression
S Kundu, G Datta, M Pedram, PA Beerel
Proceedings of the IEEE/CVF winter conference on applications of computer …, 2021
Cited by 129 · 2021
HIRE-SNN: Harnessing the inherent robustness of energy-efficient deep spiking neural networks by training with crafted input noise
S Kundu, M Pedram, PA Beerel
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2021
Cited by 91 · 2021
DNR: A tunable robust pruning framework through dynamic network rewiring of DNNs
S Kundu, M Nazemi, PA Beerel, M Pedram
Proceedings of the 26th Asia and South Pacific Design Automation Conference …, 2021
Cited by 76 · 2021
A novel approach to predict COVID-19 using support vector machine
S Guhathakurata, S Kundu, A Chakraborty, JS Banerjee
Data Science for COVID-19, 351-364, 2021
Cited by 71 · 2021
Exploiting explicit paths for multi-hop reading comprehension
S Kundu, T Khot, A Sabharwal, P Clark
arXiv preprint arXiv:1811.01127, 2018
Cited by 65 · 2018
Pre-defined sparsity for low-complexity convolutional neural networks
S Kundu, M Nazemi, M Pedram, KM Chugg, PA Beerel
IEEE Transactions on Computers 69 (7), 1045-1058, 2020
Cited by 53 · 2020
GEAR: An efficient KV cache compression recipe for near-lossless generative inference of LLM
H Kang, Q Zhang, S Kundu, G Jeong, Z Liu, T Krishna, T Zhao
🏆 NeurIPS Workshop 2024 (Best Paper Honorable Mention), 2024
Cited by 52 · 2024
Pipeedge: Pipeline parallelism for large-scale model inference on heterogeneous edge devices
Y Hu, C Imes, X Zhao, S Kundu, PA Beerel, SP Crago, JP Walters
2022 25th Euromicro Conference on Digital System Design (DSD), 298-307, 2022
Cited by 49* · 2022
A processing-in-pixel-in-memory paradigm for resource-constrained TinyML applications
S Kundu*, G Datta*, Z Yin, RT Lakkireddy, J Mathai, AP Jacob, PA Beerel, ...
Scientific Reports 12 (1), 14396, 2022
Cited by 44 · 2022
Training energy-efficient deep spiking neural networks with single-spike hybrid input encoding
G Datta, S Kundu, PA Beerel
2021 International Joint Conference on Neural Networks (IJCNN), 1-8, 2021
Cited by 43 · 2021
Learning to linearize deep neural networks for secure and efficient private inference
S Kundu, S Lu, Y Zhang, J Liu, PA Beerel
International Conference on Learning Representations (ICLR), 2023
Cited by 39 · 2023
Analyzing the confidentiality of undistillable teachers in knowledge distillation
S Kundu, Q Sun, Y Fu, M Pedram, P Beerel
Advances in Neural Information Processing Systems (NeurIPS) 34, 9181-9192, 2021
Cited by 33 · 2021
ACE-SNN: Algorithm-hardware co-design of energy-efficient & low-latency deep spiking neural networks for 3D image recognition
G Datta, S Kundu, AR Jaiswal, PA Beerel
Frontiers in Neuroscience 16, 815258, 2022
Cited by 32 · 2022
Fusing models with complementary expertise
H Wang, FM Polo, Y Sun, S Kundu, E Xing, M Yurochkin
International Conference on Learning Representations (ICLR) 2024, 2023
Cited by 30 · 2023
Vision-HGNN: An Image is More Than a Graph of Nodes
Y Han, P Wang, S Kundu, Y Ding, Z Wang
IEEE/CVF International Conference on Computer Vision (ICCV) 2023, 19878-19888, 2023
Cited by 29 · 2023
Memristors enabled computing correlation parameter in-memory system: A potential alternative to von Neumann architecture
S Kundu, PB Ganganaik, J Louis, H Chalamalasetty, BP Rao
IEEE Transactions on Very Large Scale Integration (VLSI) Systems 30 (6), 755-768, 2022
Cited by 29 · 2022
Attentionlite: Towards efficient self-attention models for vision
S Kundu, S Sundaresan
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 27 · 2021
Revisiting Sparsity Hunting in Federated Learning: Why does Sparsity Consensus Matter?
S Kundu*, S Babakniya*, S Prakash, Y Niu, S Avestimehr
Transactions on Machine Learning Research (TMLR) 2023, 2023
Cited by 25* · 2023
ViTA: A vision transformer inference accelerator for edge applications
S Nag, G Datta, S Kundu, N Chandrachoodan, PA Beerel
2023 IEEE International Symposium on Circuits and Systems (ISCAS), 1-5, 2023
Cited by 24 · 2023
Towards low-latency energy-efficient deep SNNs via attention-guided compression
S Kundu, G Datta, M Pedram, PA Beerel
arXiv preprint arXiv:2107.12445, 2021
Cited by 22 · 2021