Journal Information
Robot Learning
https://www.elspub.com/journals/rl/home
Publisher: ELSP
ISSN: 2960-1436
Call For Papers
Robot Learning is a peer-reviewed, open-access journal that publishes original work across all areas of robot learning, covering both theoretical research and practical applications.

Topics of interest include but are not limited to the following:

    Learning-enhanced perception for robotics
    Learning-enhanced robot planning
    Learning-enhanced robot manipulation
    Learning-enhanced robot control
    Learning methods for human-robot coordination
    Learning methods for multi-robot cooperation or confrontation
    Learning methods for self-driving
    Learning-based robotic warehousing
    Learning-enhanced intelligent transportation
    Learning methods for bionic robotics and medical robotics
    Learning methods for UAVs and USVs
    Deep learning, imitation learning and reinforcement learning for robotic systems
    Sim-to-real transfer for robotic applications
    Datasets, benchmarks and simulators for robot learning
    Applications of robot learning
Special Issues
Special Issue on Human-Robot Interaction and Human-Centered Robotics
Submission Date: 2025-12-31

Robots are becoming an integral part of human society and are being deployed across diverse scenarios where humans are present. From households (e.g., kitchens and bedrooms) to public spaces (e.g., airports and banks) and service settings (e.g., hospitals and elder-care facilities), robotics technology is increasingly being integrated into environments shaped by human needs. This trend is driving both academia and industry to push the boundaries of robotic technologies into challenging and dynamic working environments. The growing presence of robots in human-centric settings introduces exciting opportunities but also significant challenges, and ensuring safe and effective human-robot collaboration is paramount: for robots to coexist with humans in these environments, they must operate safely, interact naturally, and accomplish tasks efficiently, even in the presence of human disturbance or assistance. This special issue aims to collect high-quality research contributions addressing the challenges and opportunities in human-robot interaction (HRI) and human-centered robotics. We invite submissions that focus on theoretical advancements, novel algorithms, and practical applications in relevant domains. Topics of interest include, but are not limited to:

    Learning algorithms: machine learning approaches, including imitation learning and reinforcement learning, designed specifically for human-centric tasks
    Reactive and predictive control: advanced control strategies that can cope with unpredictable or dynamic human behaviors
    Multi-modal perception: methods for robots to interpret multi-modal sensory data (e.g., visual, auditory, tactile), including generative and foundation models
    Safety: ensuring the safety of both humans and robots during interactions in shared environments
    Healthcare services: assistive robot systems developed for medical settings, eldercare, and rehabilitation applications
    Physical and/or remote interaction: robots engaging with humans through physical forces or remotely via visual, auditory, or verbal communication
    Human intention understanding: inferring human goals, emotions, and intentions to enable seamless collaboration and effective interaction (a minimal sketch follows this list)
    Review and tutorial papers: comprehensive reviews or tutorials discussing key topics in human-robot interaction and human-centered robotics
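
As a concrete illustration of the intention-understanding theme above, here is a minimal, hypothetical sketch of Bayesian goal inference with a Boltzmann-rational action model; the 1-D world, the candidate goals, and every constant are made-up assumptions for illustration, not material from the call.

```python
# Minimal sketch (illustrative assumptions throughout): infer which goal a
# human is heading toward from their observed motion, using Bayes' rule with
# a Boltzmann-rational action likelihood.
import math

GOALS = {"door": 0, "desk": 9}   # candidate goal positions in a 1-D corridor
positions = [5, 4, 3, 2]         # observed human trajectory (moving left)
BETA = 2.0                       # rationality: higher = more goal-directed

def action_likelihood(pos, nxt, goal):
    """P(step | goal): prefer steps that reduce distance to the goal."""
    gain = abs(pos - goal) - abs(nxt - goal)        # +1 toward, -1 away
    z = math.exp(BETA * 1) + math.exp(BETA * -1)    # normalize over both steps
    return math.exp(BETA * gain) / z

belief = {g: 1 / len(GOALS) for g in GOALS}         # uniform prior
for pos, nxt in zip(positions, positions[1:]):
    belief = {g: belief[g] * action_likelihood(pos, nxt, GOALS[g]) for g in GOALS}
    total = sum(belief.values())
    belief = {g: p / total for g, p in belief.items()}

print(belief)   # mass concentrates on "door" as the human moves left
```

Real HRI systems would replace the distance heuristic with learned reward models and richer observation channels, but the belief update keeps this same multiply-and-normalize shape.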
Special Issue on Human-in-the-Loop Robot Learning in the Era of Foundation Models: Challenges and Opportunities
Submission Date: 2025-12-31

This interdisciplinary special issue focuses on the latest advancements in human-in-the-loop robot learning, which integrates multi-modal human input (e.g., natural language, gestures, and haptic interaction) and online feedback (e.g., rewards, corrections, and preferences) to enhance robot performance, adaptability, and alignment with human intentions. Recent breakthroughs in foundation models, such as Large Language Models (LLMs), Vision-Language Models (VLMs), and Vision-Language-Action Models (VLAs), have provided robots with unprecedented perception and reasoning capabilities. However, effectively integrating these models into robotic systems remains an emerging and underexplored challenge. This special issue aims to gather high-quality research contributions that address the challenges and opportunities in synergistically combining foundation models and human-in-the-loop learning to advance robot learning through active and intuitive human participation. We invite submissions that explore the following critical themes: (1) leveraging foundation models for adaptive and generalizable robot learning in complex dynamic environments, (2) incorporating real-time human feedback to refine learning processes, and (3) designing frameworks for safe and trustworthy human-robot interaction. Topics of interest include, but are not limited to:

    Human-AI collaboration for robot learning
    Human-AI hybrid intelligence
    Foundation models for robotics
    Transfer learning and fine-tuning of foundation models for robotic applications
    Knowledge representation and reasoning in robots
    Human feedback in robot learning
    Human-in-the-loop reinforcement learning
    Learning from demonstrations and corrections
    Interactive robot learning
    Multi-task robot learning
    Architectures and frameworks for human-in-the-loop learning
    Cognitive models for robot learning
    Adaptive human-robot interaction
    Safety and robustness in human-robot collaboration

We welcome original research articles, reviews, and case studies that contribute to the theoretical, algorithmic, and practical aspects of human-in-the-loop robot learning. Submissions should highlight novel methodologies, experimental validations, and real-world applications that advance the state of the art in this rapidly evolving field. Once a manuscript is received for this special issue, it will proceed directly to the review process.
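
To make the feedback-integration theme concrete, the following is a minimal sketch, not a method from the call: tabular Q-learning on a toy corridor where an evaluative human signal is blended with the environment reward, in the spirit of reward-shaping/TAMER-style approaches. The environment, the simulated teacher, and every constant are illustrative assumptions.

```python
# Minimal sketch (all names and the toy environment are assumptions): blend an
# online human evaluative signal with the environment reward in Q-learning.
import random

N_STATES, GOAL = 10, 9           # toy 1-D corridor; reach the rightmost cell
ACTIONS = (-1, +1)               # move left / move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
BETA = 0.5                       # weight on the human feedback channel

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def human_feedback(state, action):
    """Stand-in for real-time human input (e.g., a key press): here we
    simulate a teacher who approves moves toward the goal."""
    return 1.0 if action == +1 else -1.0

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    env_reward = 1.0 if next_state == GOAL else -0.01
    return next_state, env_reward, next_state == GOAL

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2, r_env, done = step(s, a)
        # blend environment reward with the human's evaluative signal
        r = r_env + BETA * human_feedback(s, a)
        target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print("greedy policy:", [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)])
```

In a deployed system, human_feedback would read asynchronous input (key presses, language, preference queries) rather than a scripted rule.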
Special Issue on Learning Based Robot Path and Task Planning
Submission Date: 2026-01-20

The path/task planning problem is one of the most fundamental problems in robotics. Path planning requires generating the shortest path for the robot from a given starting point to a target point while satisfying spatial constraints. Multiple factors must be considered, from understanding the task requirements and modeling the robot's dynamics and environment to defining the target and any constraints; collision detection for the planned trajectory/path is also necessary. Task planning requires finding a sequence of primitive motion commands that solves a given task: on each robot, a task planner automatically converts the robot's world model and skill definitions into a planning problem, which is then solved to find the sequence of actions the robot should perform to complete its mission. Many traditional path/task planning methods have been developed for robotics, and recent advances in AI are having an increasing impact on this research. For example, large language models (LLMs) have been used to augment traditional methods such as A* and reinforcement learning. Because the real world is largely uncertain and dynamic, robotic path/task planning must adapt to uncertainty and change. This is especially important in safety-critical applications, e.g., robots operating in our living environments and field robots such as underwater robots operating in hazardous environments. The aim of this research topic is to cover recent advances and trends in path/task planning for robotics. Areas covered could include, but are not limited to:

    Deep reinforcement learning for robotic path and task planning in simulation and on real robot platforms
    LLM-augmented robotic path and task planning
    Robotic path/task planning with adaptive world models
    Human-centered reinforcement learning, imitation learning, learning from demonstration, and learning from observation for robotic path and task planning
    Deep learning approaches for robotic motion planning
    Safe reinforcement learning for robotic path and task planning
    Learning-based task planning for multi-robot systems
    Other related topics
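
Since the call cites A* as one of the classical planners that LLMs are being used to augment, here is a minimal, self-contained sketch of A* on an occupancy grid; the grid, start, and goal below are made-up assumptions for illustration.

```python
# Minimal sketch (illustrative, not from the call): A* search on a
# 4-connected occupancy grid with a Manhattan-distance heuristic.
import heapq

def astar(grid, start, goal):
    """Return the shortest obstacle-free path from start to goal, or None.
    grid[r][c] == 1 marks an occupied (forbidden) cell."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

The Manhattan heuristic is admissible on a 4-connected grid, so the first time the goal is popped the returned path is optimal; LLM-augmented planners typically wrap a search like this with task-level reasoning rather than replace it.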
Special Issue on Intelligent Vision-Driven Robotics
Submission Date: 2026-07-31

Special Issue Editors
Dr. Peng Zhou, Great Bay University, China
Prof. David Navarro-Alarcon, The Hong Kong Polytechnic University, China

Aims and Motivation
Robotics is converging on an intelligent, vision-driven paradigm where precise geometry, robust control, and data-driven adaptation coexist and reinforce one another. Prof. Liu's oeuvre exemplifies this synthesis, from early uncalibrated visual servoing and grasp theory to modern soft/surgical autonomy and large-scale SLAM, providing a unifying backbone for next-generation robots that are safe, dexterous, and reliable in unstructured, visually complex environments. This invitation-only Special Issue honors Prof. Yunhui Liu's enduring impact on intelligent, vision-driven robotics. His work, spanning uncalibrated visual servoing, grasping and fixturing theory, motion planning, soft and continuum manipulation, surgical robotics, large-scale SLAM and 3D vision, networked teleoperation, and learning-enabled autonomy, has consistently connected rigorous theory with deployable, closed-loop systems, closing the loop between sensing and action in real-world environments. The collection is authored by Prof. Liu's friends, former students, and close collaborators, and reflects his profound influence on vision-centered robotic intelligence.

Submission Policy
Invitation-first: this Special Issue is primarily invitation-based; invited manuscripts will be reviewed on a rolling basis. Inquiries are welcome: if you have not received an invitation but believe your work is a strong fit for vision-driven robotics, you are welcome to email the Guest Editors with a brief summary; depending on space and scope, additional invitations may be extended. Authors are encouraged to add a brief note on the relationship between their submission and Prof. Liu's academic work, e.g., which ideas, methods, or perspectives served as motivation or inspiration, consistent with the article type.

Scope and Themes
We welcome contributions that tightly integrate visual perception (including 3D geometry) with control and learning to achieve robust, generalizable, and deployable autonomy.

    Visual servoing and perception-driven control: uncalibrated/model-free schemes; eye-in-hand and fixed-camera control; observers without visual velocity; nonholonomic/mobile visual control; task-oriented and invariant visual features
    Grasping, fixturing, and dexterous manipulation: vision- and tactility-informed grasp analysis and fixture design; multimodal sensing; soft/variable-stiffness hands; compliant/origami-inspired grippers; in-hand and textile/cable manipulation with visual feedback; geometry-aware policies
    Deformable, soft, and continuum robots: visual/FBG-based shape sensing and reconstruction; deformation and shape servoing; constrained-environment modeling and control; hybrid model-data methods for perception-control fusion
    Surgical robotics and medical applications: vision-centric autonomy in MRI/OR-integrated systems; autonomous endoscopic view control; instrument/tissue perception and 3D reconstruction (stereo/NeRF/Gaussian splatting); integrated perception-planning-control for safe task autonomy
    SLAM, 3D vision, and geometric learning: point/line/vanishing-point geometry; LiDAR/visual-inertial/edge-based SLAM; transparent/reflective/medical surface reconstruction; calibration and metrology; neural and geometric scene representations for control
    Networked and human-in-the-loop robotics: Internet-based teleoperation with haptics and QoS; cooperative teleoperation; AR/gaze-based interaction; shared autonomy with intent inference; distributed estimation and coordination for multi-robot systems
    Learning for vision-driven autonomy: self-/weakly supervised visual representations for video and 3D; RL and imitation for manipulation, surgery, and locomotion with visual feedback; sim-to-real transfer; transformer/graph models coupling perception with planning and control; grounding policies in geometric priors
    Field and industrial robotics: vision-centric construction and finishing; warehouse fleets and swarm logistics; autonomous forklifts/AGVs and tractor-trailer control; robust bin picking and assembly with multi-view/active perception; long-horizon, closed-loop deployments

Article Types

    Original research articles with strong theoretical and experimental validation (bench-top to clinical/field), emphasizing vision-in-the-loop autonomy
    System and integration papers demonstrating deployable, vision-driven, closed-loop performance in real applications
    Survey/tutorial papers synthesizing the state of the art at the intersection of vision, learning, and control, with clear roadmaps for future research
    Benchmark/dataset papers that enable reproducibility and accelerate vision-based robotics, including protocols, metrics, code, and models

Intended Audience
Researchers and practitioners in robotics, computer vision, control, and AI/ML for robotics; surgical/medical robotics; industrial and field automation; and human-robot interaction and teleoperation.

Dedication
It is an unforgettable memory and a great pleasure for many of us to have collaborated with Prof. Yunhui Liu, and, for some, to have worked under his mentorship. In deepest respect for his strong and inquiring mind, his enthusiasm for scientific inquiry, and his passion for education, we dedicate this Special Issue to him.
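
As a pointer to the visual servoing theme above, the following is a minimal sketch of the classical image-based visual servoing law v = -λ L⁺ e (a textbook construction, not taken from this issue). Known point depths are assumed here, whereas the uncalibrated schemes the issue highlights estimate the interaction matrix online; all feature values below are made up.

```python
# Minimal sketch (illustrative assumptions): one step of classical IBVS.
# e is the error between current and desired normalized image points, L is
# the stacked interaction (image Jacobian) matrix, and the commanded camera
# twist is v = -gain * pinv(L) @ e.
import numpy as np

def interaction_matrix(points, Z):
    """Stack the 2x6 interaction matrix of each normalized image point (x, y)."""
    rows = []
    for (x, y), z in zip(points, Z):
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(current, desired, Z, gain=0.5):
    """One IBVS step: camera twist (vx, vy, vz, wx, wy, wz)."""
    e = (np.asarray(current) - np.asarray(desired)).ravel()
    L = interaction_matrix(current, Z)
    return -gain * np.linalg.pinv(L) @ e

# four image points, slightly offset from their desired locations
cur = [(0.12, 0.10), (-0.10, 0.11), (-0.11, -0.10), (0.10, -0.12)]
des = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(cur, des, Z=[1.0] * 4))
```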
Related Journals
CCF | Full Name | Impact Factor | Publisher | ISSN
- | Robot Learning | - | ELSP | 2960-1436
b | Machine Learning | 4.300 | Springer | 0885-6125
- | International Journal of Robotics Research | 5.0 | SAGE | 0278-3649
- | Electronic Journal of e-Learning | - | Academic Publishing Limited | 1479-4403
- | Journal of Computer Assisted Learning | 5.100 | Wiley-Blackwell | 0266-4909
- | Robotica | 2.700 | Cambridge University Press | 0263-5747
- | ACM Transactions on Probabilistic Machine Learning | - | ACM | 0000-0000
b | IEEE Transactions on Robotics | 10.5 | IEEE | 1552-3098
- | Journal of Robotics | 1.400 | Hindawi | 1687-9600
- | Robotics | - | MDPI | 2218-6581