Yulhwa Kim
Title · Cited by · Year
Area-efficient and variation-tolerant in-memory BNN computing using 6T SRAM array
J Kim, J Koo, T Kim, Y Kim, H Kim, S Yoo, JJ Kim
2019 Symposium on VLSI Circuits, C118-C119, 2019
Cited by 102 · 2019
Monolithically integrated RRAM- and CMOS-based in-memory computing optimizations for efficient deep learning
S Yin, Y Kim, X Han, H Barnaby, S Yu, Y Luo, W He, X Sun, JJ Kim, J Seo
IEEE Micro 39 (6), 54-63, 2019
Cited by 89 · 2019
2-bit-per-cell RRAM-based in-memory computing for area-/energy-efficient deep learning
W He, S Yin, Y Kim, X Sun, JJ Kim, S Yu, JS Seo
IEEE Solid-State Circuits Letters 3, 194-197, 2020
Cited by 61 · 2020
BitBlade: Energy-efficient variable bit-precision hardware accelerator for quantized neural networks
S Ryu, H Kim, W Yi, E Kim, Y Kim, T Kim, JJ Kim
IEEE Journal of Solid-State Circuits 57 (6), 1924-1935, 2022
Cited by 43 · 2022
Input-splitting of large neural networks for power-efficient accelerator with resistive crossbar memory array
Y Kim, H Kim, D Ahn, JJ Kim
Proceedings of the International Symposium on Low Power Electronics and …, 2018
Cited by 35 · 2018
In-memory batch-normalization for resistive memory based binary neural network hardware
H Kim, Y Kim, JJ Kim
Proceedings of the 24th Asia and South Pacific Design Automation Conference …, 2019
Cited by 29 · 2019
Neural network-hardware co-design for scalable RRAM-based BNN accelerators
Y Kim, H Kim, JJ Kim
arXiv preprint arXiv:1811.02187, 2018
Cited by 24 · 2018
Time-delayed convolutions for neural network device and method
S Kim, J Kim, Y Kim, J Kim, D Park, H Kim
US Patent 11,521,046, 2022
Cited by 20 · 2022
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks
J Song, K Oh, T Kim, H Kim, Y Kim, JJ Kim
arXiv preprint arXiv:2402.09025, 2024
Cited by 15 · 2024
Algorithm/hardware co-design for in-memory neural network computing with minimal peripheral circuit overhead
H Kim, Y Kim, S Ryu, JJ Kim
2020 57th ACM/IEEE Design Automation Conference (DAC), 1-6, 2020
Cited by 14 · 2020
L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ
H Jeon, Y Kim, JJ Kim
arXiv preprint arXiv:2402.04902, 2024
Cited by 12* · 2024
A 44.1 TOPS/W precision-scalable accelerator for quantized neural networks in 28nm CMOS
S Ryu, H Kim, W Yi, J Koo, E Kim, Y Kim, T Kim, JJ Kim
2020 IEEE Custom Integrated Circuits Conference (CICC), 1-4, 2020
Cited by 12 · 2020
Energy-efficient in-memory binary neural network accelerator design based on 8T2C SRAM cell
H Oh, H Kim, D Ahn, J Park, Y Kim, I Lee, JJ Kim
IEEE Solid-State Circuits Letters 5, 70-73, 2022
Cited by 11 · 2022
Effect of device variation on mapping binary neural network to memristor crossbar array
W Yi, Y Kim, JJ Kim
2019 Design, Automation & Test in Europe Conference & Exhibition (DATE), 320-323, 2019
Cited by 11 · 2019
Squeezing large-scale diffusion models for mobile
J Choi, M Kim, D Ahn, T Kim, Y Kim, D Jo, H Jeon, JJ Kim, H Kim
arXiv preprint arXiv:2307.01193, 2023
Cited by 9 · 2023
Time-step interleaved weight reuse for LSTM neural network computing
N Park, Y Kim, D Ahn, T Kim, JJ Kim
Proceedings of the ACM/IEEE International Symposium on Low Power Electronics …, 2020
Cited by 9 · 2020
Extreme partial-sum quantization for analog computing-in-memory neural network accelerators
Y Kim, H Kim, JJ Kim
ACM Journal on Emerging Technologies in Computing Systems (JETC) 18 (4), 1-19, 2022
Cited by 8 · 2022
FIGNA: Integer Unit-Based Accelerator Design for FP-INT GEMM Preserving Numerical Accuracy
J Jang, Y Kim, J Lee, JJ Kim
2024 IEEE International Symposium on High-Performance Computer Architecture …, 2024
Cited by 7 · 2024
Mapping binary ResNets on computing-in-memory hardware with low-bit ADCs
Y Kim, H Kim, J Park, H Oh, JJ Kim
2021 Design, Automation & Test in Europe Conference & Exhibition (DATE), 856-861, 2021
Cited by 7 · 2021
Maximizing parallel activation of word-lines in MRAM-based binary neural network accelerators
D Ahn, H Oh, H Kim, Y Kim, JJ Kim
IEEE Access 9, 141961-141969, 2021
Cited by 6 · 2021