HCW 2024

The thirty-third Heterogeneity in Computing Workshop (HCW) was held at the Hyatt Regency San Francisco, San Francisco, California, on May 27, 2024. HCW is organized annually in conjunction with the International Parallel and Distributed Processing Symposium (IPDPS).

Heterogeneous computing systems comprise growing numbers of increasingly diverse computing resources that can be local to one another or geographically distributed. The opportunity and need to utilize heterogeneous computing resources effectively have given rise to the notions of cluster computing, grid computing, and cloud computing. HCW encourages paper submissions from both the research and industry communities presenting novel ideas on theoretical and practical aspects of computing in heterogeneous computing environments.

San Francisco, California

HCW 2024 Program

Session 1: Introductions and Keynote Presentation (8:45-10 am)

Session Chairs: DK Panda (The Ohio State University, US), Hari Subramoni (The Ohio State University, US), and Kamesh Madduri (Penn State, US)

Yale Patt, Professor and the Ernest Cockrell, Jr. Centennial Chair in Engineering at The University of Texas at Austin, will deliver the HCW 2024 keynote.

Title: Hetero: Where we’ve been, Where we are, and What Next?

Abstract: My first connection with hetero was back in my assembly language days on the PDP 11/60 with DEC’s EMT instruction, which allowed users to design functions by writing their own microcode that appropriately manipulated the data path. Even then, there were mostly naysayers objecting to the extra challenges in repurposing the data path and the accompanying extra expense. In their view, the data path had one fixed use. I never bought into that, in the same way that having eleven quarterbacks on the field makes no sense to me. Later, when chip multiprocessors became the sine qua non of microarchitecture, they insisted on homogeneous processors since hetero meant hiring extra design teams. I remember a panel I was on at HiPEAC in 2010 where my fellow panelists all agreed that homogeneous processors were the only thing that made sense economically. We have successfully overcome that nonsense, and in fact pretty much everyone now agrees that future chips will make abundant use of accelerators. In my view the obvious next step is to make the microarchitectures heterogeneous, and turn those structures over to the compiler to allow their effective use. Again the pushback is, “No, that will get rid of portability, and no company will ever allow that…for obvious reasons.” My answer: “Economics be damned!” In this talk I hope to put hetero in perspective, and discuss why portability is not always the right answer.

Bio: Yale Patt is a teacher at The University of Texas at Austin, where he still enjoys teaching freshmen, seniors, and graduate students, doing research, and consulting more than 60 years after first getting involved with computer technology. He earned obligatory degrees from reputable universities, and has received more than enough awards for his research and teaching. More information is available on his website users.ece.utexas.edu/~patt for those who are interested.

Break (10-10:30 am)

Session 2: Research Papers (10:30 am-12 pm)

Session Chair: Anne Benoit (École Normale Supérieure de Lyon, FR)

Performance Portability of the Chapel Language on Heterogeneous Architectures
Josh Milthorpe (Oak Ridge National Laboratory, US / Australian National University, AU), Xianghao Wang (Australian National University, AU), Ahmad Azizi (Australian National University, AU)

Towards dynamic autotuning of SpMV in CUSP library
Miroslav Demek (Masaryk University, CZ), Jiri Filipovic (Masaryk University, CZ)

A Runtime Manager Integrated Emulation Environment for Heterogeneous SoC Design with RISC-V Cores
H. Umut Suluhan (The University of Arizona, US), Serhan Gener (The University of Arizona, US), Alexander Fusco (The University of Arizona, US), Joshua Mack (The University of Arizona, US), Ismet Dagli (Colorado School of Mines, US), Mehmet Belviranli (Colorado School of Mines, US), Cagatay Edemen (Ozyegin University, TR), Ali Akoglu (The University of Arizona, US)

Dynamic Tasks Scheduling with Multiple Priorities on Heterogeneous Computing Systems
Hayfa Tayeb (Inria/University of Bordeaux, FR), Bérenger Bramas (Inria/University of Strasbourg, FR), Mathieu Faverge (Inria/University of Strasbourg, FR), Abdou Guermouche (Inria/University of Bordeaux, FR)

Lunch break (12-1:30 pm)

Session 3: Research Papers (1:30-3 pm)

Session Chair: Ali Akoglu (The University of Arizona, US)

PSyGS Gen: A Generator of Domain-Specific Architectures to Accelerate Sparse Linear System Resolution
Niccolò Nicolosi (Politecnico di Milano, IT), Francesco Renato Negri (Politecnico di Milano, IT), Francesco Pesce (Politecnico di Milano, IT), Francesco Peverelli (Politecnico di Milano, IT), Davide Conficconi (Politecnico di Milano, IT), Marco Domenico Santambrogio (Politecnico di Milano, IT)

Toward a Holistic Performance Evaluation of Large Language Models Across Diverse AI Accelerators
Murali Emani (Argonne National Laboratory, US), Sam Foreman (Argonne National Laboratory, US), Varuni Sastry (Argonne National Laboratory, US), Zhen Xie (State University of New York, Binghamton, US), Siddhisanket Raskar (Argonne National Laboratory, US), William Arnold (Argonne National Laboratory, US), Rajeev Thakur (Argonne National Laboratory, US), Venkatram Vishwanath (Argonne National Laboratory, US), Michael E. Papka (Argonne National Laboratory, US), Sanjif Shanmugavelu (Groq, US), Darshan Gandhi (SambaNova, US), Dun Ma (SambaNova, US), Kiran Ranganath (SambaNova, US), Rick Weisner (SambaNova, US), Jiunn-yeu Chen (Intel Habana, US), Yuting Yang (Intel Habana, US), Natalia Vassilieva (Cerebras, US), Bin C. Zhang (Cerebras, US), Sylvia Howland (Cerebras, US), Alexandar Tsyplikhin (Graphcore, US)

IRIS: Exploring Performance Scaling of the Intelligent Runtime System and its Dynamic Scheduling Policies
Beau Johnston (Oak Ridge National Laboratory, US), Narasinga Rao Miniskar (Oak Ridge National Laboratory, US), Aaron Young (Oak Ridge National Laboratory, US), Mohammad Alaul Haque Monil (Oak Ridge National Laboratory, US), Seyong Lee (Oak Ridge National Laboratory, US), Jeffrey S. Vetter (Oak Ridge National Laboratory, US)

Heterogeneous Hyperthreading Architecture for Homogeneous Workloads
Mingxuan He (Purdue University / Futurewei Technologies, US), Fangping Liu (Futurewei Technologies, US), Sang Wook Stephen Do (Futurewei Technologies, US)

Break (3-3:30 pm)

Session 4: Panel and Closing Remarks (3:30-5 pm)

Impact of LLMs and Generative AI on Future Heterogeneous Systems?
Panel Moderator: Anne C. Elster (Norwegian University of Science and Technology, NO)
Panelists: Fredrik Kjolstad (Stanford University, US), Charles Leiserson (Massachusetts Institute of Technology, US), DK Panda (The Ohio State University, US), Yale Patt (The University of Texas at Austin, US), Philippe Tillet (OpenAI, US)

Closing Remarks
DK Panda (The Ohio State University, US) and Hari Subramoni (The Ohio State University, US)

HCW 2024 Call for Papers

May 27, 2024
San Francisco, CA, USA

In conjunction with the 38th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2024)

Sponsored by the IEEE Computer Society
through the Technical Committee on Parallel Processing (TCPP)

Most modern computing systems are heterogeneous, either for organic reasons because components grew independently, as is the case in desktop grids, or by design to leverage the strengths of specific hardware, as is the case in accelerated systems. In any case, all computing systems have some form of hardware or software heterogeneity that must be managed, leveraged, understood, and exploited. The Heterogeneity in Computing Workshop (HCW) is a venue to discuss and innovate in all theoretical and practical aspects of heterogeneous computing: design, programmability, efficient utilization, algorithms, modeling, applications, etc. HCW 2024 will be the thirty-third annual gathering of this workshop.

Topics

Topics of interest include but are not limited to the following areas:

Heterogeneous multicore systems and architectures: Design, exploration, and experimental analysis of heterogeneous computing systems such as Graphics Processing Units, heterogeneous systems-on-chip, Artificial Intelligence chips, Field Programmable Gate Arrays, big.LITTLE, and application-specific architectures.

Heterogeneous parallel and distributed systems: Design and analysis of computing grids, cloud systems, hybrid clusters, datacenters, geo-distributed computing systems, and supercomputers.

Deep memory hierarchies: Design and analysis of memory hierarchies with SRAM, DRAM, Flash/SSD, and HDD technologies; NUMA architectures; cache coherence strategies; novel memory systems such as phase-change RAM, magnetic (e.g., STT) RAM, 3D Xpoint/crossbars, and memristors.

On-chip, off-chip, and heterogeneous network architectures: Network-on-chip (NoC) architectures and protocols for heterogeneous multicore applications; energy, latency, reliability, and security optimizations for NoCs; off-chip (chip-to-chip) network architectures and optimizations; heterogeneous networks (combination of NoC and off-chip) design, evaluation, and optimizations; large-scale parallel and distributed heterogeneous network design, evaluation, and optimizations.

Programming models and tools: Programming paradigms and tools for heterogeneous systems; middleware and runtime systems; performance-abstraction tradeoff; interoperability of heterogeneous software environments; workflows; dataflows.

Resource management and algorithms for heterogeneous systems: Parallel algorithms for solving problems on heterogeneous systems (e.g., multicores, hybrid clusters, grids, or clouds); strategies for scheduling and allocation on heterogeneous 2D and 3D multicore architectures; static and dynamic scheduling and resource management for large-scale and parallel heterogeneous systems.

Modeling, characterization, and optimizations: Performance models and their use in the design of parallel and distributed algorithms for heterogeneous platforms; characterizations and optimizations for improving the time to solve a problem (e.g., throughput, latency, runtime); modeling and optimizing electricity consumption (e.g., power, energy); modeling for failure management (e.g., fault tolerance, recovery, reliability); modeling for security in heterogeneous platforms.

Applications on heterogeneous systems: Case studies; confluence of Big Data systems and heterogeneous systems; data-intensive computing; scientific computing.

This year we wish to focus on and expand submissions and presentations in several “hot topic” areas; we therefore especially invite submissions in the following three areas:

Heterogeneous Integration of Quantum Computing: Design, exploration, and analysis of architectures and software frameworks enabling heterogeneous integration of classical computing and quantum computing (e.g., heterogeneous quantum computers, error correction, heterogeneous applications that use both classical and quantum logic, benchmarks for heterogeneous quantum computers).

Heterogeneity and Interoperability in Software & Data Systems: Design, exploration, and analysis of architectures and software frameworks for interoperability in software and data systems (e.g., semantic frameworks, interoperability for heterogeneous Internet-of-Things systems, model-driven frameworks).

Heterogeneous Computing for Machine Learning (ML) and Deep Learning (DL): Design, exploration, benchmarking, and analysis of accelerators and software frameworks for ML and DL applications on heterogeneous computing systems.

Important Dates

  • Paper submission: February 19, 2024
  • Author notification: February 29, 2024
  • Camera-ready submission: March 6, 2024

Paper Submissions

Manuscripts submitted to HCW 2024 should not have been previously published or be under review for a different workshop, conference, or journal.

Submissions must use the latest IEEE manuscript templates for conference proceedings. Submissions may not exceed a total of ten single-spaced double-column pages in 10-point font on 8.5x11 inch pages. The page limit includes figures, tables, and references. A single-blind review process will be followed.

Files should be submitted by following the instructions at the IPDPS 2024 submission site.

Workshop Organization

General Co-Chairs: Anne C. Elster and Jan Christian Meyer, Norwegian University of Science and Technology, Norway

Technical Program Committee Co-Chairs: DK Panda and Hari Subramoni, The Ohio State University, USA

Questions may be sent to the HCW 2024 General Co-Chairs (Anne Elster: elster at ntnu dot no, Jan Christian Meyer: jan dot christian dot meyer at ntnu dot no) or Technical Program Committee Co-Chairs (DK Panda: panda.2 at osu dot edu, Hari Subramoni: subramoni.1 at osu dot edu).

Technical Program Committee

(Partial list, last updated Jan 9, 2024)
Shashank Adavally, Micron Technology, USA
Gonzalo Brito Gadeschi, NVIDIA Corporation, Germany
Nick Brown, Edinburgh Parallel Computing Centre, University of Edinburgh, UK
Daniel Cordeiro, University of São Paulo, Brazil
Mattan Erez, University of Texas, USA
Richard Graham, NVIDIA Corporation, USA
Yanfei Guo, Argonne National Laboratory, USA
Diana Göhringer, Technical University Dresden, Germany
H. Peter Hofstee, IBM, USA
Tanzima Islam, Texas State University, USA
Emmanuel Jeannot, INRIA/University of Bordeaux, France
Joanna Kolodziej, Cracow University of Technology/NASK National Research Institute, Poland
Hatem Ltaief, King Abdullah University of Science and Technology, Saudi Arabia
Pankaj Mehra, Elephance Memory, Inc./University of California at Santa Cruz, USA
Raymond Namyst, INRIA/University of Bordeaux, France
William Schonbein, Sandia National Laboratories, USA
Marko Scrbak, AMD, USA
Aamir Shafi, The Ohio State University, USA
Sameer Shende, University of Oregon, USA
Devesh Tiwari, Northeastern University, USA

Steering Committee

Kamesh Madduri, Pennsylvania State University, USA (Co-Chair)
Behrooz Shirazi, National Science Foundation, USA (Co-Chair)
H. J. Siegel, Colorado State University, USA (Past Chair)
John Antonio, University of Oklahoma, USA
David Bader, New Jersey Institute of Technology, USA
Anne Benoit, École Normale Supérieure de Lyon, France
Jack Dongarra, University of Tennessee, USA
Alexey Lastovetsky, University College Dublin, Ireland
Sudeep Pasricha, Colorado State University, USA
Viktor K. Prasanna, University of Southern California, USA
Yves Robert, École Normale Supérieure de Lyon, France
Erik Saule, University of North Carolina at Charlotte, USA
Uwe Schwiegelshohn, TU Dortmund University, Germany

Sponsors

IEEE IPDPS 2024 is sponsored by the IEEE Computer Society, through the Technical Committee on Parallel Processing (TCPP), and is held in cooperation with the IEEE Computer Society Technical Committees on Computer Architecture (TCCA) and Distributed Processing (TCDP).

HCW 2024 is sponsored by the U.S. Office of Naval Research and IEEE IPDPS 2024.
