Approximate computing through the lens of uncertainty quantification
As computer system technology approaches the end of Moore's law, new computing
paradigms that improve performance become a necessity. One such paradigm is …
Auto-HPCnet: An automatic framework to build neural network-based surrogate for high-performance computing applications
High-performance computing communities are increasingly adopting Neural Networks (NN)
as surrogate models in their applications to generate scientific insights. Replacing an …
Approximate High-Performance Computing: A Fast and Energy-Efficient Computing Paradigm in the Post-Moore Era
As we reach the limits of Moore's law and the end of Dennard scaling, increased emphasis
is being given to alternative system architectures and computing paradigms to achieve …
Towards a SYCL API for approximate computing
Approximate computing is a well-known method [7] to achieve higher performance or lower
energy consumption while accepting a loss of output accuracy. Many applications such as …
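The trade-off these approximate-computing entries describe — less work in exchange for bounded output error — can be illustrated with loop perforation, a standard technique in this literature (this sketch is illustrative and not taken from any of the listed papers):

```python
def perforated_mean(xs, skip=2):
    """Approximate the mean by visiting only every `skip`-th element
    (loop perforation): roughly 1/skip of the work, with a small,
    data-dependent loss of accuracy."""
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

data = list(range(1000))
exact = sum(data) / len(data)     # 499.5
approx = perforated_mean(data)    # samples only even indices
```

Here the approximate result stays within 0.5 of the exact mean while halving the number of elements touched; real systems expose the perforation rate as a tunable accuracy/energy knob.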
HPAC-ML: A Programming Model for Embedding ML Surrogates in Scientific Applications
Recent advancements in Machine Learning (ML) have substantially improved its predictive
and computational abilities, offering promising opportunities for surrogate modeling in …
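The surrogate-modeling idea behind entries like Auto-HPCnet and HPAC-ML — replace an expensive kernel with a cheap model fit from sampled evaluations — can be sketched in miniature (a toy piecewise-linear stand-in for a trained NN; names and the kernel are hypothetical):

```python
import bisect

def expensive_kernel(x):
    # stand-in for a costly numerical routine
    return x * x

class LinearSurrogate:
    """Piecewise-linear surrogate fit from sampled (x, f(x)) pairs:
    a minimal analogue of swapping a kernel for a learned model."""
    def __init__(self, f, sample_points):
        self.xs = sorted(sample_points)
        self.ys = [f(x) for x in self.xs]

    def __call__(self, x):
        i = bisect.bisect_left(self.xs, x)
        if i == 0:
            return self.ys[0]
        if i == len(self.xs):
            return self.ys[-1]
        x0, x1 = self.xs[i - 1], self.xs[i]
        y0, y1 = self.ys[i - 1], self.ys[i]
        # interpolate between the two nearest training samples
        return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

surrogate = LinearSurrogate(expensive_kernel, range(0, 11))
```

At training points the surrogate is exact; between them it interpolates, so accuracy is governed by sampling density — the same cost/accuracy dial the papers automate.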
Hpac-offload: Accelerating hpc applications with portable approximate computing on the gpu
The end of Dennard scaling and the slowdown of Moore's law led to a shift in technology
trends towards parallel architectures, particularly in HPC systems. To continue providing …
Fast, transparent, and high-fidelity memoization cache-keys for computational workflows
Computational workflows are important methods for automating complex data-generation
and analysis pipelines. Workflows are composed of sub-graphs that perform specific tasks …
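The core mechanism behind memoization cache-keys for workflow tasks is a deterministic digest over a task's identity and inputs, so reruns with identical inputs hit the cache. A minimal sketch of that idea (not the paper's actual scheme; the helper name is hypothetical):

```python
import hashlib
import json

def cache_key(task_name, inputs):
    """Build a deterministic cache key from a task's name and its
    input parameters. sort_keys makes the key insensitive to the
    order in which parameters were supplied."""
    payload = json.dumps({"task": task_name, "inputs": inputs},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = cache_key("simulate", {"n": 100, "seed": 7})
k2 = cache_key("simulate", {"seed": 7, "n": 100})  # reordered inputs
# k1 == k2, so a rerun with the same inputs finds the cached result
```

A high-fidelity key must also capture things this sketch omits — code version, environment, file contents rather than paths — which is exactly the gap such work addresses.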
AdaptMD: Balancing Space and Performance in NUMA Architectures With Adaptive Memory Deduplication
Memory deduplication effectively relieves the memory space bottleneck by removing
duplicate pages, especially in virtualized systems in which virtual machines run the same …
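Memory deduplication of the kind AdaptMD adapts rests on content-based page sharing: pages with identical contents are hashed and mapped to a single stored copy. A simplified sketch of that mechanism (illustrative only; real systems add byte-level verification and copy-on-write):

```python
import hashlib

PAGE_SIZE = 4096

def deduplicate(pages):
    """Map identical page contents to one stored copy.
    Returns the deduplicated store and, per page, the key of
    the shared copy it now points to."""
    store = {}    # content hash -> single stored page
    mapping = []  # per-page reference into the store
    for page in pages:
        h = hashlib.sha256(page).digest()
        store.setdefault(h, page)
        mapping.append(h)
    return store, mapping

pages = [b"a" * PAGE_SIZE, b"b" * PAGE_SIZE, b"a" * PAGE_SIZE]
store, mapping = deduplicate(pages)  # 3 pages collapse to 2 copies
```

The space saving is the duplicate count times the page size; the NUMA tension the paper targets is that a shared copy may sit far from some of the nodes referencing it.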
SeTHet-Sending Tuned numbers over DMA onto Heterogeneous clusters: an automated precision tuning story
G Magnani, D Cattaneo, L Denisov… - Proceedings of the 21st …, 2024 - dl.acm.org
Energy and performance optimization of embedded hardware and software is of critical
importance to achieve the overall system goals. In this work, we study the optimization of …
Towards an Approximation-Aware Computational Workflow Framework for Accelerating Large-Scale Discovery Tasks
The use of approximation is fundamental in computational science. Almost all computational
methods adopt approximations in some form in order to obtain a favourable cost/accuracy …