30th HPDC 2021: Virtual Event, Sweden
- Erwin Laure, Stefano Markidis, Ana Lucia Varbanescu, Jay F. Lofstead (eds.): HPDC '21: The 30th International Symposium on High-Performance Parallel and Distributed Computing, Virtual Event, Sweden, June 21-25, 2021. ACM 2021, ISBN 978-1-4503-8217-5
- Gustavo Alonso: Hardware Specialization for Distributed Computing. 1
- Maria Girone: Computing Challenges for High Energy Physics. 3
- Rosa M. Badia: Superscalar Programming Models: A Perspective from Barcelona. 5
- Tyler J. Skluzacek, Ryan Wong, Zhuozhao Li, Ryan Chard, Kyle Chard, Ian T. Foster: A Serverless Framework for Distributed Bulk Metadata Extraction. 7-18
- Chen Wang, Kathryn M. Mohror, Marc Snir: File System Semantics Requirements of HPC Applications. 19-30
- Shashank Gugnani, Xiaoyi Lu: DStore: A Fast, Tailless, and Quiescent-Free Object Store for PMEM. 31-43
- Sian Jin, Jesus Pulido, Pascal Grosset, Jiannan Tian, Dingwen Tao, James P. Ahrens: Adaptive Configuration of In Situ Lossy Compression for Cosmology Simulations via Fine-Grained Rate-Quality Modeling. 45-56
- Dakota Fulp, Alexandra Poulos, Robert Underwood, Jon C. Calhoun: ARC: An Automated Approach to Resiliency for Lossy Compressed Data via Error Correcting Codes. 57-68
- Jan-Patrick Lehr, Tim Jammer, Christian H. Bischof: MPI-CorrBench: Towards an MPI Correctness Benchmark Suite. 69-80
- Sergi Laut, Ricard Borrell, Marc Casas: Cache-aware Sparse Patterns for the Factorized Sparse Approximate Inverse Preconditioner. 81-93
- Carl Pearson, Kun Wu, I-Hsin Chung, Jinjun Xiong, Wen-Mei Hwu: TEMPI: An Interposed MPI Library with a Canonical Representation of CUDA-aware Datatypes. 95-106
- Jaehoon Jung, Daeyoung Park, Gangwon Jo, Jungho Park, Jaejin Lee: SnuRHAC: A Runtime for Heterogeneous Accelerator Clusters with CUDA Unified Memory. 107-120
- Piyush Sao, Hao Lu, Ramakrishnan Kannan, Vijay Thakkar, Richard W. Vuduc, Thomas E. Potok: Scalable All-pairs Shortest Paths for Huge Graphs on Multi-GPU Clusters. 121-131
- Laiping Zhao, Fangshu Li, Wenyu Qu, Kunlin Zhan, Qingman Zhang: AITurbo: Unified Compute Allocation for Partial Predictable Training in Commodity Clusters. 133-145
- Neeraj Rajesh, Hariharan Devarajan, Jaime Cernuda Garcia, Keith Bateman, Luke Logan, Jie Ye, Anthony Kougkas, Xian-He Sun: Apollo: An ML-assisted Real-Time Storage Resource Observer. 147-159
- Albert Njoroge Kahira, Truong Thao Nguyen, Leonardo Bautista-Gomez, Ryousei Takano, Rosa M. Badia, Mohamed Wahib: An Oracle for Guiding Large-Scale Model/Hybrid Parallel Training of Convolutional Neural Networks. 161-173
- Ruobing Chen, Jinping Wu, Haosen Shi, Yusen Li, Xiaoguang Liu, Gang Wang: DRLPart: A Deep Reinforcement Learning Framework for Optimally Efficient and Robust Resource Partitioning on Commodity Servers. 175-188
- Yao Kang, Xin Wang, Zhiling Lan: Q-adaptive: A Multi-Agent Reinforcement Learning Based Routing on Dragonfly Network. 189-200
- Staci A. Smith, David K. Lowenthal: Jigsaw: A High-Utilization, Interference-Free Job Scheduler for Fat-Tree Clusters. 201-213
- Hang Huang, Jia Rao, Song Wu, Hai Jin, Hong Jiang, Hao Che, Xiaofeng Wu: Towards Exploiting CPU Elasticity via Efficient Thread Oversubscription. 215-226
- Rankyung Hong, Abhishek Chandra: DLion: Decentralized Distributed Deep Learning in Micro-Clouds. 227-238
- Bin Wang, Ahmed Ali-Eldin, Prashant J. Shenoy: LaSS: Running Latency Sensitive Serverless Computations at the Edge. 239-251
- Thaleia Dimitra Doudali, Ada Gavrilovska: Machine Learning Augmented Hybrid Memory Management. 253-254
- Namratha Urs, Marco Mambelli, Dave Dykstra: Using Pilot Jobs and CernVM File System for Simplified Use of Containers and Software Distribution. 255-256
- Michael Davis, Hans Vandierendonck: Achieving Scalable Consensus by Being Less Writey. 257-258
- Shobhit Jagga, Preeti Malakar: Parallel Program Scaling Analysis using Hardware Counters. 259-260
- Jaemin Choi, David F. Richards, Laxmikant V. Kalé: CharminG: A Scalable GPU-resident Runtime System. 261-262
- Vito Giovanni Castellana, Marco Minutoli: Productive Programming of Distributed Systems with the SHAD C++ Library. 263-264