(De/Re)-Composition of Data-Parallel Computations via Multi-Dimensional Homomorphisms

A Rasch - ACM Transactions on Programming Languages and …, 2024 - dl.acm.org
Data-parallel computations, such as linear algebra routines and stencil computations,
constitute one of the most relevant classes in parallel computing, e.g., due to their importance …

Full Version: (De/Re)-Composition of Data-Parallel Computations via Multi-Dimensional Homomorphisms

A Rasch - arXiv preprint arXiv:2405.05118, 2024 - arxiv.org
We formally introduce a systematic (de/re)-composition approach, based on the algebraic
formalism of "Multi-Dimensional Homomorphisms (MDHs)". Our approach is designed as …
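As a rough intuition for the decomposition idea (an illustrative sketch only, not the paper's MDH formalism): a multi-dimensional computation such as matrix multiplication can be expressed as an index-wise scalar operation whose results are folded along the reduced dimension with an associative combine operator; that structure can then be re-composed, e.g., tiled or parallelized, without changing the result. All names below are hypothetical.

```python
from functools import reduce

# Illustrative sketch only: matrix multiplication expressed as an index-wise
# scalar operation plus an associative combine over the reduced dimension,
# in the spirit of a homomorphic decomposition (not the paper's formalism).

def scalar_op(a, b):
    # point-wise computation applied to each (i, k, j) index combination
    return a * b

def combine_add(x, y):
    # associative (and commutative) combine operator for the reduced dimension k
    return x + y

def matmul_decomposed(A, B):
    n, m, p = len(A), len(B), len(B[0])
    # i and j are independent ("parallel") dimensions, k is the reduced dimension
    return [[reduce(combine_add, (scalar_op(A[i][k], B[k][j]) for k in range(m)))
             for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_decomposed(A, B))  # [[19, 22], [43, 50]]
```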

mlirSynth: Automatic, Retargetable Program Raising in Multi-Level IR using Program Synthesis

A Brauckmann, E Polgreen, T Grosser… - 2023 32nd …, 2023 - ieeexplore.ieee.org
MLIR is an emerging compiler infrastructure for modern hardware, but existing programs
cannot take advantage of MLIR's high-performance compilation if they are described in …

Compiling Recurrences over Dense and Sparse Arrays

S Sundram, MU Tariq, F Kjolstad - Proceedings of the ACM on …, 2024 - dl.acm.org
We present a framework for compiling recurrence equations into native code. In our
framework, users specify a system of recurrences, the types of data structures that store …
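The snippet is cut off before it describes the interface; purely as an illustrative sketch of the kind of input such a framework consumes (the function and parameter names are hypothetical, not the paper's API), a system of recurrences can be given as base cases plus an index-wise equation and evaluated over a dense array in dependence order.

```python
# Illustrative sketch only: a recurrence specified as base cases plus an
# index-wise equation, evaluated over a dense array in dependence order.
# The interface is hypothetical, not the paper's framework.

def evaluate_recurrence(n, base, step):
    """base: dict {index: value}; step: f(arr, i) -> value for non-base indices."""
    arr = [None] * n
    for i in range(n):
        arr[i] = base[i] if i in base else step(arr, i)
    return arr

# Example: F[0] = 0, F[1] = 1, F[i] = F[i-1] + F[i-2] over a dense array
fib = evaluate_recurrence(10, {0: 0, 1: 1}, lambda F, i: F[i - 1] + F[i - 2])
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```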

(De/Re)-Compositions Expressed Systematically via MDH-Based Schedules

A Rasch, R Schulze, D Shabalin, A Elster… - Proceedings of the …, 2023 - dl.acm.org
We introduce a new scheduling language, based on the formalism of Multi-Dimensional
Homomorphisms (MDH). In contrast to existing scheduling languages, our MDH-based …
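Without the paper's syntax at hand, the following is only a generic illustration of what a schedule expresses, namely decomposition decisions such as tile sizes and how partial results are re-combined, kept separate from the computation itself; it is not the MDH scheduling language.

```python
# Illustrative sketch only: separating a reduction from its (de/re)-composition
# decisions, e.g. a tile size chosen by a "schedule". Names are hypothetical.

def tiled_sum(xs, tile_size):
    # de-compose: split the input into tiles and reduce each tile independently
    partials = [sum(xs[i:i + tile_size]) for i in range(0, len(xs), tile_size)]
    # re-compose: combine the partial results with the same associative operator
    return sum(partials)

data = list(range(1, 101))
assert tiled_sum(data, tile_size=8) == sum(data) == 5050
```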

Incremental Computation: What Is the Essence?

YA Liu - arXiv preprint arXiv:2312.07946, 2023 - arxiv.org
Incremental computation aims to compute more efficiently on changed input by reusing
previously computed results. We give a high-level overview of works on incremental …
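As a minimal illustration of that idea (not taken from the survey): when a single input element changes, a previously computed aggregate can be updated in constant time instead of being recomputed from scratch.

```python
# Minimal illustration of incremental computation: reuse the old result when
# one input element changes, instead of re-summing the whole list.

def update_sum(old_sum, old_value, new_value):
    return old_sum - old_value + new_value

xs = [3, 1, 4, 1, 5]
total = sum(xs)          # computed once: 14
old = xs[2]
xs[2] = 9                # the input changes at index 2
total = update_sum(total, old, xs[2])
assert total == sum(xs)  # 19, obtained without recomputing from scratch
```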

Parallelizing neural network models effectively on GPU by implementing reductions atomically

J Zhao, C Bastoul, Y Yi, J Hu, W Nie, R Zhang… - Proceedings of the …, 2022 - dl.acm.org
Due to the lack of a good orchestration of loop transformations, existing optimizing
compilers for deploying neural networks on GPU either parallelize reductions ineffectively or …
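The snippet breaks off before the technique itself; as a conceptual sketch of the idea in the title only, a reduction can be parallelized by letting each worker reduce its own chunk and then atomically accumulate the partial result into a shared variable. The CPU lock below stands in for the GPU atomic instructions the paper targets.

```python
# Conceptual sketch only: parallelizing a reduction by having each worker
# atomically accumulate its partial sum into a shared result (a lock stands
# in for GPU atomics; this is not the paper's compiler pipeline).

import threading

result = 0.0
result_lock = threading.Lock()

def reduce_chunk(chunk):
    global result
    partial = sum(chunk)      # each worker reduces its own chunk first
    with result_lock:         # then atomically adds it to the shared result
        result += partial

data = [float(i) for i in range(1000)]
chunks = [data[i::4] for i in range(4)]   # 4 workers, interleaved chunks
threads = [threading.Thread(target=reduce_chunk, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert result == sum(data)
```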

Simplification of Polyhedral Reductions in Practice

L Narmour, R Job, T Yuki, S Rajopadhye - arXiv preprint arXiv:2411.17498, 2024 - arxiv.org
Reductions combine collections of inputs with an associative (and here, also commutative)
operator to produce collections of outputs. When the same value contributes to multiple …
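As a small, generic illustration of the reuse being exploited (not the papers' polyhedral algorithm): when every output is a sum over a sliding window, naive evaluation repeats work across overlapping windows, while a shared prefix sum makes each output a single subtraction.

```python
# Generic illustration of reduction simplification via reuse (not the papers'
# polyhedral algorithm): sliding-window sums, naive vs. via shared prefix sums.

def window_sums_naive(xs, w):
    # each output re-reduces its whole window: O(len(xs) * w) additions
    return [sum(xs[i:i + w]) for i in range(len(xs) - w + 1)]

def window_sums_simplified(xs, w):
    # a shared prefix sum makes every output a single subtraction: O(len(xs))
    prefix = [0]
    for x in xs:
        prefix.append(prefix[-1] + x)
    return [prefix[i + w] - prefix[i] for i in range(len(xs) - w + 1)]

xs = [2, 7, 1, 8, 2, 8, 1, 8]
assert window_sums_naive(xs, 3) == window_sums_simplified(xs, 3)
```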

Maximal Simplification of Polyhedral Reductions

L Narmour, T Yuki, S Rajopadhye - Proceedings of the ACM on …, 2025 - dl.acm.org
Reductions combine collections of input values with an associative and often commutative
operator to produce collections of results. When the same input value contributes to multiple …

Incremental Computation: What Is the Essence? (Invited Contribution)

YA Liu - Proceedings of the 2024 ACM SIGPLAN International …, 2024 - dl.acm.org
Incremental computation aims to compute more efficiently on changed input by reusing
previously computed results. We give a high-level overview of works on incremental …