Journal Information
Journal of Parallel and Distributed Computing (JPDC)
https://www.sciencedirect.com/journal/journal-of-parallel-and-distributed-computing
Impact Factor: 4.0
Publisher: Elsevier
ISSN: 0743-7315
Call For Papers
This international journal is directed to researchers, engineers, educators, managers, programmers, and users of computers who have particular interests in parallel processing and/or distributed computing.

The Journal of Parallel and Distributed Computing publishes original research papers and timely review articles on the theory, design, evaluation, and use of parallel and/or distributed computing systems. The journal also features special issues on these topics, again covering the full range from the design to the use of the targeted systems.

Research Areas Include:

• Theory of parallel and distributed computing
• Parallel algorithms and their implementation
• Innovative computer architectures
• Parallel programming
• Applications, algorithms and platforms for accelerators
• Cloud, edge and fog computing
• Data-intensive platforms and applications
• Parallel processing of graph and irregular applications
• Parallel and distributed programming models
• Software tools and environments for distributed systems
• Algorithms and systems for Internet of Things
• Performance analysis of parallel applications
• Architectures for emerging technologies, e.g., novel memory technologies and quantum computing
• Application-specific architectures, e.g., accelerator-based and reconfigurable architectures
• Interconnection network, router and network interface architecture
Special Issues
Special Issue on Dynamic Resource Management in HPC
Submission Date: 2026-03-31

As High-Performance Computing (HPC) systems approach the exascale era and beyond, the complexity of applications, architectures, and workloads is rapidly increasing. Traditional static resource allocation models, where resources are fixed at job launch, are proving inadequate to fully exploit the potential of modern HPC infrastructures. Emerging applications such as multi-physics simulations, large-scale AI training, digital twins, and adaptive mesh refinement demand dynamic and irregular resource usage. At the same time, HPC systems are becoming increasingly heterogeneous and distributed, integrating CPUs, GPUs, FPGAs, SmartNICs, and even quantum accelerators, all within energy-constrained environments.

In this context, Dynamic Resource Management (DRM) is a key enabler for performance, scalability, energy efficiency, and system throughput. DRM allows applications and runtimes to adapt resource usage at runtime, meeting the needs of future scientific and industrial workloads (a brief illustrative sketch of such a decision loop follows the topic list below). This special issue aims to provide a timely forum for consolidating advances in DRM and fostering cross-fertilization between the HPC, cloud, and distributed computing communities. We invite original contributions that advance the theory, algorithms, software, and systems for dynamic resource management in HPC.

Guest editors:

Dr. Sergio Iserte, Barcelona Supercomputing Center, Spain
His research interests lie mainly in the following areas:
• Parallel distributed programming models
• Dynamic processes and resources management
• HPC workload modeling
• In-network acceleration
• Applied Artificial Intelligence

Verónica G. Melesse Vergara, Oak Ridge National Laboratory, USA
Her research interests lie mainly in the following areas:
• HPC systems architecture
• Parallel programming models
• Large-scale HPC system testing and benchmarking
• Performance evaluation and optimization of scientific applications
• Machine learning/artificial intelligence applied to systems biology

Prof. Miwako Tsuji, RIKEN, Japan
Her research interests are:
• Programming Model
• Performance Model
• Quantum and HPC integration, middleware, and programming environment

Special issue information:

Topics include, but are not limited to:
• Dynamic (malleable and elastic) parallel applications: programming models, runtime support, and case studies
• Dynamic Load Balancing (DLB) and integration with DRM strategies
• Middleware and runtime frameworks enabling resource reallocation at scale
• Scheduling and resource managers supporting dynamic jobs in HPC and exascale systems
• Adaptive workflows and pipelines spanning edge-to-cloud-HPC environments
• Energy- and power-aware DRM
• Integration with heterogeneous resources (GPUs, FPGAs, SmartNICs, quantum accelerators)
• AI-driven decision making for adaptive resource allocation
• Resilience and fault tolerance under dynamic resource reconfiguration
• Case studies and real-world applications of DRM in science and industry
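To make the notion of runtime resource adaptation concrete, the following is a minimal, self-contained Python sketch of a DRM-style decision loop. It is not part of the call and does not use any real scheduler's API; the job names, node counts, and the application-reported pending_work hint are all hypothetical. The toy policy shrinks over-provisioned jobs down to their minimum viable allocation and then greedily grows the job with the highest per-node backlog.

```python
# Toy sketch of a dynamic resource manager (DRM) decision loop.
# Job names, node counts, and the pending_work metric are hypothetical;
# a real DRM would negotiate reconfigurations with the batch scheduler
# and a malleability-aware runtime rather than mutate counts directly.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int          # nodes currently allocated
    min_nodes: int      # smallest allocation the job can run with
    pending_work: float # remaining work units (application-reported hint)

def rebalance(jobs: list[Job], total_nodes: int) -> None:
    """Shrink over-provisioned jobs, then grow the most loaded ones."""
    # 1) Shrink: release nodes from jobs whose per-node load is very low.
    for job in jobs:
        while job.nodes > job.min_nodes and job.pending_work / job.nodes < 1.0:
            job.nodes -= 1

    # 2) Grow: hand idle nodes, one at a time, to the job with the
    #    highest per-node backlog (a simple greedy policy).
    idle = total_nodes - sum(j.nodes for j in jobs)
    for _ in range(max(idle, 0)):
        neediest = max(jobs, key=lambda j: j.pending_work / j.nodes)
        neediest.nodes += 1

if __name__ == "__main__":
    cluster_nodes = 16
    jobs = [
        Job("cfd_sim", nodes=8, min_nodes=2, pending_work=4.0),   # over-provisioned
        Job("ai_train", nodes=4, min_nodes=2, pending_work=40.0), # under-provisioned
        Job("mesh_ref", nodes=2, min_nodes=1, pending_work=6.0),
    ]
    rebalance(jobs, cluster_nodes)
    for job in jobs:
        print(f"{job.name}: {job.nodes} nodes")
```

A production DRM would coordinate such decisions with the resource manager and with malleability support in the programming model (for example, by spawning or retiring processes); the sketch only illustrates the shrink-then-grow reasoning behind several of the topics listed above.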
Related Conferences
CCF | CORE | QUALIS | Short | Full Name | Submission | Notification | Conference
b | a | a1 | ICDCS | International Conference on Distributed Computing Systems | 2026-01-07 | - | 2026-07-20
- | c | b3 | ParCo | International Conference on Parallel Computing | 2019-02-28 | 2019-05-15 | 2019-09-10
c | b | b1 | ICPADS | International Conference on Parallel and Distributed Systems | 2025-08-31 | 2025-10-15 | 2025-12-14
b | a | a1 | HPDC | International ACM Symposium on High-Performance Parallel and Distributed Computing | 2026-01-29 | 2026-03-31 | 2026-07-13
b | a | a1 | IPDPS | International Parallel & Distributed Processing Symposium | 2025-10-02 | 2026-02-02 | 2026-05-25
- | c | b1 | ISORC | International Symposium on Real-time Distributed Computing | 2025-01-08 | 2025-03-05 | 2025-05-26
- | c | - | PDCN | International Conference on Parallel and Distributed Computing and Networks | 2013-10-20 | 2013-11-01 | 2014-02-17
- | a | a2 | DISC | International Symposium on Distributed Computing | 2025-05-20 | 2025-08-07 | 2025-10-27
b | a | a2 | Euro-Par | European Conference on Parallel and Distributed Computing | 2026-02-27 | 2026-04-30 | 2026-08-24
- | c | b2 | ISPDC | International Symposium on Parallel and Distributed Computing | 2025-04-11 | 2025-05-30 | 2025-07-08