Journal Information
Future Generation Computer Systems (FGCS)
https://www.sciencedirect.com/journal/future-generation-computer-systems
Impact Factor:
6.200
Publisher:
Elsevier
ISSN:
0167-739X
Views:
90318
Tracking:
180
Call for Papers
The International Journal of eScience

Computing infrastructures and systems are developing rapidly, and so are novel ways to map, control and execute scientific applications, which are becoming increasingly complex and collaborative.
Computational and storage capabilities, databases, sensors, and people need true collaborative tools. Over recent years there has been a real explosion of new theory and technological progress supporting a better understanding of these wide-area, fully distributed sensing and computing systems. Big Data in all its guises requires novel methods and infrastructures to register, analyze and distill meaning.

FGCS aims to lead the way in advances in distributed systems, collaborative environments, high-performance and high-throughput computing, and Big Data on such infrastructures as grids, clouds and the Internet of Things (IoT).

The Aims and Scope of FGCS cover new developments in:

[1] Applications and application support:

    Novel applications for novel e-infrastructures
    Complex workflow applications
    Big Data registration, processing and analyses
    Problem solving environments and virtual laboratories
    Semantic and knowledge based systems
    Collaborative infrastructures and virtual organizations
    Methods for high performance and high throughput computing
    Urgent computing
    Scientific, industrial, social and educational implications
    Education

[2] Methods and tools:

    Tools for infrastructure development and monitoring
    Distributed dynamic resource management and scheduling
    Information management
    Protocols and emerging standards
    Methods and tools for internet computing
    Security aspects

[3] Theory:

    Process specification
    Program and algorithm design
    Theoretical aspects of large scale communication and computation
    Scaling and performance theory
    Protocols and their verification
Last updated by Dou Sun on 2024-07-11
Special Issues
Special Issue on Approximate Computing: the need for efficient and sustainable computing
Submission Deadline: 2025-01-31

Motivation and Scope

Today, computing systems face unprecedented computational demands. They serve as bridges between the digital and physical worlds, processing vast amounts of data from diverse sources. Our digital world constantly produces an immense volume of data; according to recent estimates, millions of terabytes are created each day. To handle this volume, increasingly sophisticated yet resource-constrained devices are deployed at the edge, where energy and power efficiency take center stage. Additionally, the growth of modern AI models, particularly neural networks, has led to boundless computational and power demands: for example, GPT-3, with 175 billion parameters, and BERT-large, with 340 million parameters, incur high energy costs for both training and inference. Approximate Computing (AxC) provides a promising solution: by intentionally allowing slight inaccuracies in computations, AxC significantly reduces overhead (including energy, area, and latency) while preserving practical accuracy levels (a minimal illustrative sketch of this trade-off follows this entry). These paradigms find applications across several domains. With the intent of navigating the intricate balance between accuracy, reliability, and energy efficiency, exploring AxC techniques becomes crucial. This Special Issue (SI) investigates the intersection of energy-efficient computing and the accuracy of state-of-the-art workloads, shedding light on innovative approaches and practical implementations.

The potential areas of interest for this SI include, but are not limited to, the following topics:

    Approximation for Deep Learning applications, including Large Language Models (LLMs)
    Approximation techniques for emerging processor and memory technologies
    Approximation-induced error modeling and propagation
    Approximation in edge computing applications
    Approximation in HPC and embedded systems
    Approximation in Foundation Models
    Approximation in reconfigurable computing
    Architectural support for approximation
    Cross-layer approximate computing
    Hardware/software co-design of approximate systems
    Dependability of approximate circuits and systems
    Design automation of approximate architectures
    Design of approximate reconfigurable architectures
    Error-resilient Near-Threshold Computing
    Methods for monitoring and controlling approximation quality
    Modeling, specification, and verification of approximate circuits and systems
    Safety and reliability applications of approximate computing
    Security in the context of approximation
    Software-based fault-tolerant techniques for approximate computing
    Test and fault tolerance of approximate systems

Guest Editors

    Annachiara Ruospo, Politecnico di Torino, Italy, annachiara.ruospo@polito.it
    Salvatore Barone, University of Naples Federico II, Italy, salvatore.barone@unina.it
    Jorge Castro-Godinez, School of Electronics Engineering, Instituto Tecnologico de Costa Rica, Costa Rica, jocastro@itcr.ac.cr

Important Dates

    Submission portal opens: November 1, 2024
    Deadline for paper submission: January 31, 2025
    Latest acceptance deadline for all papers: May 31, 2025
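A minimal, illustrative sketch of the accuracy-for-efficiency trade-off described above, assuming a toy reduced-precision arithmetic unit (the function names, bit width, and data are hypothetical and not taken from the call): an approximate dot product truncates its operands to a few fractional bits, and the resulting error is measured against the exact result.

    # Toy model of Approximate Computing: an exact dot product versus one whose
    # operands are truncated to a few fractional bits, standing in for a cheaper
    # (lower-energy, lower-area) arithmetic unit. Purely illustrative.
    import random

    def truncate(x: float, bits: int) -> float:
        """Keep only `bits` fractional bits of x (models reduced precision)."""
        scale = 1 << bits
        return int(x * scale) / scale

    def exact_dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def approx_dot(a, b, bits=4):
        return sum(truncate(x, bits) * truncate(y, bits) for x, y in zip(a, b))

    random.seed(0)
    a = [random.uniform(-1, 1) for _ in range(1000)]
    b = [random.uniform(-1, 1) for _ in range(1000)]

    exact = exact_dot(a, b)
    approx = approx_dot(a, b, bits=4)
    print(f"exact={exact:.4f}  approx={approx:.4f}  "
          f"relative error={abs(exact - approx) / abs(exact):.2%}")

Varying the `bits` parameter shows the basic AxC knob: fewer bits means cheaper hardware but a larger output error.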
Last updated by Dou Sun on 2024-09-28
Special Issue on On-device Artificial Intelligence solutions with applications on Smart Environments
Submission Deadline: 2025-02-25

Motivation and Scope

The recent advancements in Artificial Intelligence (AI) and the increasing available computational power have acted as a catalyst for the widespread diffusion of Intelligent Cyber-Physical Systems (ICPSs) as a novel way to run smart applications with a "reasoning" component. Unfortunately, the limited hardware capabilities of these devices pose significant limitations on the complexity of the tasks and Deep Learning models that can be run effectively. Over the years, solutions such as weight compression or quantization have been proposed to address this issue, but they usually require careful tuning and are mostly applied as post-training operations (a minimal quantization sketch follows this entry). From the very beginning, the training of complex Deep Learning models has been reserved for powerful machines with large computing capabilities (typically identified with the Cloud), limiting the Edge to inference only. However, these solutions fall short when latency, security, and high customization become key prerequisites. In such a context, novel methods are needed to deliver intelligence to an embedded system without the data leaving the device. Originally born as a complementary technology, On-device AI is expected to become a hot topic in the coming years as a new paradigm where both training and inference are performed on the same device. If, on the one hand, running intelligent algorithms on these systems is a challenging task, on the other hand the benefits in terms of response time and energy efficiency derived from this technology will be the foundation for a novel type of "reasoning" system. To this aim, novel architectures and frameworks should be explored to enable access to tailored AI-based services. Considering a scenario where the Edge would potentially store sensitive data (which should never traverse the Internet), it is evident that these devices could become the target of attacks by malicious users. In this sense, privacy and security are other key elements to be carefully considered and implemented. The goal of this special issue is to promote original, unpublished, high-quality research on On-device AI solutions applied to Smart Environments and Industry 4.0 contexts.

The topics of interest include, but are not limited to:

    On-device training solutions
    On-device AI applications
    Federated Learning training and inference strategies on Edge devices
    AI Intelligent Systems
    AI for Microcontrollers
    AI applications at the Edge
    AI methods for Industrial applications
    AI-based services at the Edge
    Hardware-efficient Deep Learning applications
    Energy-efficient Deep Learning algorithms
    Privacy and Security for Deep Learning
    Comparative analysis of on-device AI frameworks
    Implementation case studies
    Low-power AI applications and methods
    Lightweight AI algorithms for Edge devices
    Edge architectures and frameworks for AI

Important Dates

    Submission portal opens: July 25, 2024
    Deadline for paper submission: February 25, 2025
    Latest acceptance deadline for all papers: June 30, 2025
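A minimal sketch of the post-training quantization mentioned above, assuming symmetric per-tensor int8 quantization of a single weight matrix (the layer shape and names are hypothetical; no specific on-device framework is implied):

    # Post-training weight quantization: store float32 weights as int8 plus one
    # scale factor, cutting memory 4x at the cost of a small reconstruction error.
    import numpy as np

    def quantize_int8(weights):
        """Symmetric per-tensor quantization of float32 weights to int8."""
        scale = np.max(np.abs(weights)) / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)  # a toy layer
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)

    print(f"storage: {w.nbytes} B -> {q.nbytes} B")
    print(f"mean absolute error: {np.mean(np.abs(w - w_hat)):.6f}")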
Last updated by Dou Sun on 2024-07-11
Special Issue on Advances in Quantum Computing: Methods, Algorithms, and Systems Vol II
Submission Deadline: 2025-02-28

Motivation and Scope

Quantum computing (QC) is an emerging, potentially disruptive computational model gaining strong momentum in the scientific community. QC research covers multiple intertwined aspects, ranging from hardware design, with different implementations and technologies, to the quantum software stack, including compilers, high-level programming abstractions, tools, and quantum algorithms and applications. With QC systems now openly available to the scientific community and the first promising benchmarks showing quantum advantage, the effort towards practical yet daunting issues, such as the hardware and software integration of QC systems into HPC infrastructure and the QC acceleration of classical scientific and industrial applications and workflows (e.g., quantum chemistry and quantum simulations, drug discovery, computational fluid dynamics), has intensified in the last few years, leading to several proposed approaches to harness the power of QC. From a high-level point of view, the Quantum Processing Unit (QPU) can be seen as a specialized device that accelerates certain applications whose algorithmic formulations exploit quantum state superposition, entanglement, quantum tunneling, or interference (a minimal statevector sketch follows this entry). QPUs can today be deployed as accelerators in HPC systems for the first time. While extensive experience has been gained in operating and exploiting other accelerators, such as the GPUs that today provide the backbone of HPC systems, we face challenges fundamentally different from those of the past. These challenges include technologies that often require exceedingly low temperatures and shielding, the need to interface classical and quantum systems, the development of error correction algorithms and of quantum computer simulators to validate the results of QC systems, and real-world QC use cases and applications that are still in their infancy. This special issue aims to collect influential contributions addressing these challenges of quantum computing.

This special collection invites papers targeting the following topics:

    Integration of QC systems into HPC software and hardware infrastructure
    Large-scale HPC quantum computer simulators
    Tools for quantum applications, including compilers, runtimes, workflow managers, schedulers, and orchestrators
    Quantum algorithms and applications for solving scientific and engineering problems
    Quantum machine learning algorithms and applications
    Quantum data and quantum memories
    Quantum error correction codes
    Hybrid QC-HPC algorithms, applications, and workflows
    Performance modeling, analysis, and characterization of QC systems
    Quantum computers and cloud computing
    Quantum technologies for computation
    Characterization of quantum speed-up and supremacy
    Benchmarking of quantum systems
    Quantum Annealers: algorithms and applications

Guest Editors

    Stefano Markidis, KTH Royal Institute of Technology, Sweden, markidis@kth.se
    Michela Taufer, University of Tennessee Knoxville, USA, taufer@utk.edu
    Lucio Grandinetti, Università della Calabria, Italy, lucio.grandinetti@unical.it

Important Dates

    Submission portal opens: August 15, 2024
    Deadline for paper submission: February 28, 2025
    Latest acceptance deadline for all papers: April 30, 2025
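A minimal statevector sketch of the superposition and entanglement a QPU exploits, written with plain NumPy rather than any quantum SDK (gate names follow standard notation; the two-qubit example is illustrative only):

    # Prepare a Bell state (|00> + |11>)/sqrt(2): Hadamard on qubit 0, then CNOT.
    # Measuring would yield 00 or 11 with probability 0.5 each, never 01 or 10.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])                # control: qubit 0, target: qubit 1

    state = np.zeros(4)
    state[0] = 1.0                                 # start in |00>
    state = CNOT @ (np.kron(H, I) @ state)         # H on qubit 0, then CNOT

    for basis, amp in zip(["00", "01", "10", "11"], state):
        print(f"P(|{basis}>) = {abs(amp) ** 2:.2f}")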
Last updated by Dou Sun on 2024-09-28
Special Issue on Large-scale HPC Approaches and Applications on Highly Distributed Platforms
Submission Deadline: 2025-03-15

Motivation and Scope

The ever-increasing complexity of scientific and industrial challenges, due to the enormous amount of data available nowadays, requires advanced high-performance computing (HPC) solutions capable of processing and analyzing data efficiently on highly distributed platforms. Traditional centralized HPC systems frequently fall short of the demands of contemporary large-scale applications (e.g., large language models), prompting a move towards more flexible and scalable distributed computing environments. Furthermore, the growing emphasis on the environmental impact of large-scale computing has highlighted the need for sustainable computing practices that minimize energy consumption and carbon footprint. This Special Issue targets innovative solutions that investigate and tackle the challenges and opportunities of deploying HPC applications on distributed platforms, such as cloud, edge, and hybrid systems, with a focus on promoting sustainable computing practices. The aim is to bring together pioneering research and practical insights highlighting advancements in scalable algorithms, efficient data management, robust performance optimization techniques, and sustainable computing strategies (a minimal data-parallel sketch follows this entry). By providing a platform for disseminating innovative solutions and best practices, this Special Issue aspires to support the development of resilient, efficient, and sustainable HPC applications that can meet the demands of future scientific and industrial challenges.

The topics of this special issue include (but are not limited to):

    High-performance computing architectures and systems for big data
    Parallel and distributed algorithms for big data processing
    HPC-enabled data analytics
    High-performance data storage and retrieval systems
    Performance modeling and evaluation of HPC systems in distributed platforms
    HPC in machine learning and artificial intelligence
    HPC for scientific computing and simulations
    HPC in the Cloud for big data processing
    Energy-efficient and green HPC

Guest Editors

    Alessia Antelmi, University of Turin, Italy, alessia.antelmi@unito.it
    Emanuele Carlini, National Research Council of Italy, Italy, emanuele.carlini@isti.cnr.it

Important Dates

    Submission portal opens: September 15, 2024
    Deadline for paper submission: March 15, 2025
    Latest acceptance deadline for all papers: June 15, 2025
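A minimal sketch of the partition-and-aggregate pattern that scalable, distributed data processing builds on, using only the Python standard library (dataset, worker count, and function names are illustrative, not a specific system from the call):

    # Data-parallel mean: partition the data across worker processes, compute
    # partial (sum, count) statistics locally, then merge them.
    from multiprocessing import Pool

    def partial_stats(chunk):
        return sum(chunk), len(chunk)

    def parallel_mean(data, workers=4):
        chunks = [data[i::workers] for i in range(workers)]   # partition
        with Pool(workers) as pool:
            partials = pool.map(partial_stats, chunks)        # scatter / compute
        total = sum(s for s, _ in partials)                   # merge
        count = sum(n for _, n in partials)
        return total / count

    if __name__ == "__main__":
        data = list(range(1_000_000))
        print(f"mean = {parallel_mean(data):.1f}")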
Last updated by Dou Sun on 2024-09-28
Special Issue on High-Performance eScience key technologies, tools, and applications
Submission Deadline: 2025-03-31

Motivation and Scope

For some time, eScience tools and applications have been first-class citizens in the parallel and distributed computing ecosystem, especially at extreme scales. With the advent of deep learning and foundation models, this trend is set to continue. Researchers across various domains leverage advanced computational techniques to analyze vast data, simulate complex systems, and accelerate scientific workflows. Furthermore, when moving from the researcher's desk to a production-grade pipeline, parallel and distributed computing algorithms must integrate more or less directly with eScience tools and applications. The convergence between the modular nature of modern applications and the eScience domain, coupled with diverse hardware configurations, necessitates adaptable workflow systems (a minimal workflow sketch follows this entry). These systems must support various execution environments, from traditional High-Performance Computing (HPC) infrastructures to cloud environments and emerging Edge computing platforms. Optimization policies balancing performance and energy efficiency are paramount, alongside the integration of various computational models, including classical and quantum computing paradigms. This Special Issue aims to provide a platform for researchers from diverse communities to present their latest work on high-performance eScience tools, algorithms, and applications. It fosters collaboration and knowledge exchange among contributors from diverse backgrounds, including computer science, computational science, artificial intelligence, domain sciences, and engineering. Authors are encouraged to discuss practical challenges and best practices in developing and deploying high-performance eScience solutions in real-world settings, identifying future research directions and potential areas for innovation in high-performance eScience.

Guest Editors

    Iacopo Colonnelli, Department of Computer Science, University of Turin, Torino, Italy, iacopo.colonnelli@unito.it
    Paula Fernanda Olaya García, Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN, USA, polaya@vols.utk.edu
    Diana Di Luccio, Department of Computer Science, University of Naples Parthenope, Italy, diana.diluccio@uniparthenope.it
    Raffaele Montella, Department of Computer Science, University of Naples Parthenope, Italy, raffaele.montella@uniparthenope.it

Important Dates

    Submission portal opens: December 2, 2024
    Deadline for paper submission: March 31, 2025
    Latest acceptance deadline for all papers: October 6, 2025
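A minimal sketch of the dependency-driven scientific workflow that such adaptable workflow systems must orchestrate, using only the Python standard library (task names and the dependency graph are hypothetical):

    # A workflow is a DAG of tasks; each task runs once its dependencies are done.
    # A real engine would dispatch each step to HPC, cloud, or edge resources.
    from graphlib import TopologicalSorter  # Python 3.9+

    def ingest():   print("ingest raw data")
    def clean():    print("clean and validate")
    def simulate(): print("run simulation")
    def analyze():  print("analyze results")
    def report():   print("write report")

    dependencies = {            # task -> tasks it depends on
        "clean":    {"ingest"},
        "simulate": {"clean"},
        "analyze":  {"clean"},
        "report":   {"simulate", "analyze"},
    }
    steps = {"ingest": ingest, "clean": clean, "simulate": simulate,
             "analyze": analyze, "report": report}

    for task in TopologicalSorter(dependencies).static_order():
        steps[task]()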
Last updated by Dou Sun on 2024-12-18
Special Issue on Generative AI in Cybersecurity
Submission Deadline: 2025-05-15

Motivation and Scope

The world of cybersecurity is changing very rapidly, and the integration of Generative Artificial Intelligence (GenAI) represents a major transition in defense systems as well as in attack tactics. Over the last decade, AI developments, especially through tools such as ChatGPT, Gemini, and DALL-E, have permeated several sectors, enhancing operational efficiency and enabling innovative approaches. This transformative technology is now at the center stage of cybersecurity, offering unprecedented possibilities and new challenges. Generative AI's power transcends traditional use cases, fortifying defenses but also creating fresh angles for cyber threats. This special issue intends to examine the multifaceted influence of GenAI on cybersecurity by providing a comprehensive understanding of its potential to transform threat detection, mitigation, and response strategies. In particular, we are looking for ground-breaking studies that address topics including vulnerability assessment, automated hacking, ransomware and malware generation, as well as automation in cyber-defense mechanisms. There is also a need for papers examining the ethical concerns surrounding the use of GenAI within the cybersecurity landscape, hence promoting a balanced approach toward this potent tool.

Topics of interest include, but are not limited to:

    Vulnerability Assessment: enhancing detection and assessment methodologies with GenAI
    Social Engineering and Phishing Attacks: crafting sophisticated social engineering attacks and developing prevention strategies
    Automated Hacking and Attack Payload Generation: automating hacking processes and generating complex attack payloads
    Ransomware and Malware Code Generation: creating and detecting advanced malicious software
    Polymorphic Malware Generation: generating and neutralizing dynamic, AI-generated threats
    Cyberdefense Automation: automating and enhancing defense mechanisms through AI integration
    Cybersecurity Reporting and Threat Intelligence: leveraging AI for advanced threat intelligence and proactive defense
    Secure Code Generation and Detection: employing AI for secure code generation and vulnerability detection
    Identification of Cyber Attacks: real-time attack identification and response using AI
    Developing Ethical Guidelines: establishing ethical norms for AI deployment in cybersecurity
    Enhancing Cybersecurity Technologies: augmenting existing tools and methodologies with AI
    Incident Response Guidance: utilizing AI in incident response and management
    Malware Detection: advancing detection techniques with AI
    Social, Legal, and Ethical Implications of Generative AI: comprehensive analysis of societal impacts and ethical considerations

Guest Editors

    S. Leili Mirtaheri, University of Calabria, Italy, leili.mirtaheri@dimes.unical.it
    Andrea Pugliese, University of Calabria, Italy, andrea.pugliese@unical.it
    Valerio Pascucci, University of Utah, United States of America, pascucci@acm.org

Important Dates

    Submission portal opens: November 1, 2024
    Deadline for paper submission: May 15, 2025
    Latest acceptance deadline for all papers: September 15, 2025
Last updated by Dou Sun on 2024-09-28
Special Issue on Novel Applications and Techniques for Information Security
Submission Deadline: 2025-05-15

Motivation and Scope

We focus on advances in two critical areas of security: insider threat detection and secure quantum computing. Insider threats require advanced detection strategies, while quantum computing demands new cryptographic solutions and cloud security. This special issue therefore addresses two key challenges: i) Insider Threat Detection and ii) Secure Quantum Computing. It will also feature selected papers from the 2024 International Conference on Applications and Techniques in Information Security. Quantum computing, now on the rise, brings both opportunities and security risks; researchers will showcase new cryptographic techniques and hardware solutions to protect against these emerging threats. By sharing cutting-edge research, this issue aims to improve understanding, offer practical solutions, and encourage knowledge sharing to help build stronger, more secure systems.

Guest Editors

    Shiva Raj Pokhrel, Deakin University, Australia, shiva.pokhrel@deakin.edu.au
    Gang Li, Deakin University, Australia, Gang.li@deakin.edu.au
    V S Shankar Sriram, SASTRA Deemed University, India, shankar.sriram@sastra.edu

Important Dates

    Submission portal opens: January 15, 2025
    Deadline for paper submission: May 15, 2025
    Latest acceptance deadline for all papers: August 15, 2025
Last updated by Dou Sun on 2024-12-18
Special Issue on Digital Twin Ecosystems Engineering & Applications
Submission Deadline: 2025-05-20

Motivation and Scope

Recent advances in the areas of the Internet of Things, Artificial Intelligence, and Big Data Analytics have accelerated the use of Digital Twins to engineer cyber-physical systems in a range of diverse application domains such as manufacturing, healthcare, farming, and smart cities. Digital Twins are virtual replicas of physical entities, supported by real-time sensory inputs and advanced data processing to create and feed complex digital models (a minimal sketch follows this entry). Continuous monitoring, analysis, prediction, and simulation of the real-world counterparts are among the main capabilities of Digital Twins, enhancing decision-making, efficiency, and proactive maintenance. When applied to large-scale scenarios, where monolithic solutions are not feasible, integrated networks of Digital Twins are being proposed as a modelling tool that offers a comprehensive view of a domain, leading to the concept of Digital Twin Ecosystems. This Special Issue seeks to explore the latest trends, challenges, and opportunities in the engineering and application of Digital Twin Ecosystems. Designing and implementing these next-generation Digital Twin Ecosystems faces engineering and societal challenges including scalability, interoperability, data security, privacy, and ethical issues. Addressing these challenges requires collaborative efforts from both the research community and industry professionals to ensure effective and responsible development, and this is at the core of this Special Issue.

Guest Editors

    Sara Montagna, Department of Pure and Applied Science, University of Urbino Carlo Bo, Italy, sara.montagna@uniurb.it
    Samuele Burattini, Department of Computer Science and Engineering, University of Bologna, Italy, samuele.burattini@unibo.it
    Marco Picone, Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, Italy, marco.picone@unimore.it

Important Dates

    Submission portal opens: January 15, 2025
    Deadline for paper submission: May 20, 2025
    Latest acceptance deadline for all papers: February 15, 2026
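A minimal sketch of a Digital Twin as characterized above, assuming a hypothetical pump asset: the twin ingests real-time sensor readings to stay synchronized with its physical counterpart and runs a naive prediction used for proactive maintenance (the class, threshold, and readings are invented for illustration):

    # A virtual replica that mirrors a physical pump via a stream of temperature
    # readings and flags a predicted overheat before it happens.
    from dataclasses import dataclass, field

    @dataclass
    class PumpTwin:
        history: list = field(default_factory=list)

        def ingest(self, temperature_c: float) -> None:
            """Synchronize the twin with a new sensor reading."""
            self.history.append(temperature_c)

        def predict_overheat(self, horizon: int = 3, limit_c: float = 80.0) -> bool:
            """Naive linear-trend extrapolation over the next `horizon` readings."""
            if len(self.history) < 2:
                return False
            trend = self.history[-1] - self.history[-2]
            return self.history[-1] + horizon * trend > limit_c

    twin = PumpTwin()
    for reading in [70.0, 72.5, 75.0, 77.5]:      # simulated sensor stream
        twin.ingest(reading)
    print("maintenance alert:", twin.predict_overheat())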
Last updated by Dou Sun on 2024-12-18
Special Issue on High-performance Computing Heterogeneous Systems and Subsystems
Submission Deadline: 2025-05-30

Motivation and Scope

High-performance computing (HPC) is at a pivotal juncture, characterized by significant advancements in computing technologies and architectural features. This special issue explores the latest advancements, challenges, and innovations in this dynamic field. Heterogeneous HPC systems integrate diverse computational resources, including CPUs, GPUs, FPGAs, and other specialized accelerators, to deliver superior performance for various applications. To fully harness their potential, these systems require novel approaches to resource management, scheduling, programming models, and performance optimization (a minimal scheduling sketch follows this entry). Combining cutting-edge research and practical insights, this special issue provides a comprehensive overview of the current state and future directions of heterogeneous HPC systems. It is a valuable resource for researchers, practitioners, and policymakers interested in leveraging heterogeneous computing to solve complex scientific, engineering, and data-intensive problems more efficiently and effectively.

The topics include but are not limited to:

1. Heterogeneous Programming Models and Runtime Systems:

    Models, parallel resource management, and automated parallelization
    Algorithms, libraries, and frameworks for heterogeneous systems

2. Heterogeneous Architectures:

    Power/energy management, reliability, and non-von Neumann architectures
    Memory and interconnection designs
    Data allocation, caching, and disaggregated memory
    Consistency models, persistency, and failure-atomicity

3. Heterogeneous Resource Management:

    System and software designs for dynamic resources
    High-level programming, run-time techniques, and resource frameworks
    Scheduling algorithms, resource management, and I/O provisioning

4. Heterogeneity in Artificial Intelligence:

    AI/ML/DL predictive models and optimized systems for heterogeneous workflows and applications
    Tools and workflows for AI/ML/DL in scientific applications

Guest Editors

    Sergio Iserte, Barcelona Supercomputing Center, Spain, sergio.iserte@bsc.es
    Pedro Valero-Lara, Oak Ridge National Laboratory, USA, valerolarap@ornl.gov
    Kevin A. Brown, Argonne National Laboratory, USA, kabrown@anl.gov

Important Dates

    Submission portal opens: October 1, 2024
    Deadline for paper submission: May 30, 2025
    Latest acceptance deadline for all papers: July 31, 2025
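A minimal sketch of the heterogeneous scheduling problem described above: a greedy earliest-finish-time heuristic maps each task to whichever device (CPU, GPU, or FPGA) would complete it soonest. Device throughputs and task sizes are invented for illustration; this is not a production scheduler.

    # Greedy earliest-finish-time mapping of tasks onto heterogeneous devices.
    speeds = {"cpu-0": 1.0, "gpu-0": 8.0, "fpga-0": 4.0}      # relative throughput
    tasks = {"t0": 40, "t1": 8, "t2": 64, "t3": 16, "t4": 32, "t5": 24}  # work units

    free_at = {dev: 0.0 for dev in speeds}                    # when each device is idle
    schedule = []

    for name, work in sorted(tasks.items(), key=lambda kv: -kv[1]):   # biggest first
        dev = min(speeds, key=lambda d: free_at[d] + work / speeds[d])
        finish = free_at[dev] + work / speeds[dev]
        free_at[dev] = finish
        schedule.append((name, dev, finish))

    for name, dev, finish in schedule:
        print(f"{name} -> {dev}, finishes at t={finish:.2f}")
    print(f"makespan = {max(free_at.values()):.2f}")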
Last updated by Dou Sun on 2024-09-28
Special Issue on Cloud Continuum
Submission Deadline: 2025-08-30

Motivation and Scope

Cloud computing has become a common commodity with many different providers and solutions. Several new architectural models are being developed and applied to ensure scalability, quality of service, and resilience. These models focus both on the providers' side, optimizing the use of their infrastructure, and on the users' side, optimizing response times and/or costs. This scenario becomes even more complex with the possibility of having computing power close to the users through edge/fog models. This whole scenario can be seen as the Cloud Continuum. Some conferences already include the Cloud Continuum in their calls for papers, but only a few of them focus explicitly on applications. We aim to attract a broader range of papers, from software engineering to High-Performance Computing applications, all discussed in the Cloud Continuum scenario.

The following list includes some of the major topics for this special issue:

    Energy Efficiency
    AI-powered Services
    Security
    IoT Applications
    Architectural Models
    Serverless Computing
    Elasticity
    Storage
    Virtualization
    Sustainable Models
    Programming Models
    QoS for Applications
    Optimization and Performance Issues
    Communication Protocols
    Big Data
    High-Performance Computing Applications
    Innovative Cloud Applications and Experiences
    Availability and Reliability
    Microservices
    New Models (e.g., spot instances)
    Frameworks and APIs
    HPC as a Service

Guest Editors

    Alfredo Goldman, University of São Paulo, Brazil, gold@ime.usp.br
    Eduardo Guerra, University of Bolzano, Italy, eduardo.guerra@unibz.it
    Jean Luca Bez, Lawrence Berkeley National Laboratory, USA, jlbez@lbl.gov

Important Dates

    Submission portal opens: April 15, 2025
    Deadline for paper submission: August 30, 2025
    Latest acceptance deadline for all papers: February 15, 2026
Last updated by Dou Sun on 2024-09-28
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
- | Journal of Forecasting | 3.400 | Wiley-Blackwell | 0277-6693
c | Behaviour & Information Technology | 2.900 | Taylor & Francis | 0144-929X
b | ACM Transactions on Internet Technology | 3.900 | ACM | 1533-5399
- | IEEE Internet Computing Magazine | 3.700 | IEEE | 1089-7801
- | The Scientific World Journal | - | Hindawi | 1537-744X
- | IEEE Transactions on Control Systems Technology | 4.900 | IEEE | 1063-6536
- | International Journal of Information Technology and Web Engineering | - | IGI Global | 1554-1045
b | IEEE Transactions on VLSI Systems | 2.800 | IEEE | 1063-8210
- | Journal of Function Spaces | 1.900 | Hindawi | 2314-8896
b | Frontiers of Computer Science | 3.400 | Springer | 2095-2228
Related Conferences
CCF | CORE | QUALIS | Abbreviation | Full Name | Submission Date | Notification Date | Conference Date
b | a | a2 | PACT | International Conference on Parallel Architectures and Compilation Techniques | 2024-03-25 | 2024-07-01 | 2024-10-13
- | b | b1 | ECBS | European Conference on the Engineering of Computer Based Systems | 2019-05-15 | 2019-06-15 | 2019-09-02
- | - | - | ICTC | International Conference on ICT Convergence | 2024-08-23 | 2024-09-10 | 2024-10-16
- | - | - | NVICT | International Conference on New Visions for Information and Communication Technology | 2014-12-31 | 2015-03-15 | 2015-05-27
- | - | - | NATAP | International Conference on Natural Language Processing and Trends | 2022-06-04 | 2022-06-14 | 2022-06-18
b | - | a1 | Mobisys | International Conference on Mobile Systems, Applications and Services | 2024-12-09 | 2025-03-07 | 2025-06-03
- | - | - | ICeND | International Conference on e-Technologies and Networks for Development | 2017-06-11 | 2017-06-20 | 2017-07-11
- | - | - | ECPDC | International Academic Conference on Edge Computing, Parallel and Distributed Computing | 2024-03-01 | 2024-04-10 | 2024-04-19
- | - | - | APSAC | International Conference on Applied Physics, System Science and Computers | 2017-06-30 | - | 2018-09-26
- | - | - | ECEL | European Conference on e-Learning | 2020-04-22 | 2020-04-22 | 2020-10-29