Survey of CPU and memory simulators in computer architecture: A comprehensive analysis including compiler integration and emerging technology applications

I Hwang, J Lee, H Kang, G Lee, H Kim - Simulation Modelling Practice and …, 2024 - Elsevier
In computer architecture studies, simulators are crucial for design verification, reducing
research and development time and ensuring the high accuracy of verification results …

Understanding the security benefits and overheads of emerging industry solutions to DRAM read disturbance

O Canpolat, AG Yağlıkçı, GF Oliveira, A Olgun… - arXiv preprint arXiv …, 2024 - arxiv.org
We present the first rigorous security, performance, energy, and cost analyses of the state-of-the-art on-DRAM-die read disturbance mitigation method, Per Row Activation Counting …
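For context, Per Row Activation Counting (PRAC) maintains an activation counter per DRAM row and triggers a mitigation once a counter crosses a threshold. Below is a minimal, hypothetical Python sketch of that idea; the class name, threshold value, and mitigate callback are illustrative assumptions, not details from the paper.

from collections import defaultdict

class PerRowActivationCounter:
    # Hypothetical PRAC-style tracker: each row activation increments a counter;
    # crossing the threshold invokes a mitigation callback and resets the counter.
    def __init__(self, threshold=1024, mitigate=None):
        self.threshold = threshold                  # assumed mitigation threshold
        self.counts = defaultdict(int)              # per-row activation counts
        self.mitigate = mitigate or (lambda row: None)

    def on_activate(self, row):
        self.counts[row] += 1
        if self.counts[row] >= self.threshold:
            self.mitigate(row)                      # e.g., refresh physically adjacent rows
            self.counts[row] = 0                    # reset after mitigation

# Example: with a toy threshold of 3, row 5 is mitigated on its third activation.
tracker = PerRowActivationCounter(threshold=3, mitigate=lambda r: print(f"mitigate row {r}"))
for r in [5, 5, 7, 5]:
    tracker.on_activate(r)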

A Mess of memory system benchmarking, simulation and application profiling

P Esmaili-Dokht, F Sgherzi, VS Girelli… - 2024 57th IEEE/ACM …, 2024 - ieeexplore.ieee.org
The Memory stress (Mess) framework provides a unified view of memory system benchmarking, simulation and application profiling. The Mess benchmark provides a holistic …

Chronus: Understanding and Securing the Cutting-Edge Industry Solutions to DRAM Read Disturbance

O Canpolat, AG Yağlıkçı, GF Oliveira, A Olgun… - arXiv preprint arXiv …, 2025 - arxiv.org
We 1) present the first rigorous security, performance, energy, and cost analyses of the state-of-the-art on-DRAM-die read disturbance mitigation method, Per Row Activation Counting …

Variable Read Disturbance: An Experimental Analysis of Temporal Variation in DRAM Read Disturbance

A Olgun, F Bostanci, IE Yuksel, O Canpolat… - arXiv preprint arXiv …, 2025 - arxiv.org
Modern DRAM chips are subject to read disturbance errors. State-of-the-art read
disturbance mitigations rely on accurate and exhaustive characterization of the read …

Duplex: A Device for Large Language Models with Mixture of Experts, Grouped Query Attention, and Continuous Batching

S Yun, K Kyung, J Cho, J Choi, J Kim… - 2024 57th IEEE/ACM …, 2024 - ieeexplore.ieee.org
Large language models (LLMs) have emerged due to their capability to generate high-quality content across diverse contexts. To reduce their explosively increasing demands for …

A heterogeneous chiplet architecture for accelerating end-to-end transformer models

H Sharma, P Dhingra, J Doppa, U Ogras… - ACM Transactions on …, 2023 - dl.acm.org
Transformers have revolutionized deep learning and generative modeling, enabling
advancements in natural language processing tasks. However, the size of transformer …

MARCA: Mamba accelerator with reconfigurable architecture

J Li, S Huang, J Xu, J Liu, L Ding, N Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
We propose MARCA, a Mamba accelerator with reconfigurable architecture, along with three novel approaches: (1) a reduction alternative PE array architecture for both …