Forthcoming and Online First Articles

International Journal of Vehicle Performance (IJVP)

Forthcoming articles have been peer-reviewed and accepted for publication but are pending final changes and are not yet published; they may not appear here in their final order of publication until they are assigned to issues. The content therefore conforms to our standards, but the presentation (e.g. typesetting and proofreading) is not necessarily up to the Inderscience standard. Additionally, titles, authors, abstracts and keywords may change before publication. Articles will not be published until the final proofs are validated by their authors.

Forthcoming articles must be purchased for the purposes of research, teaching and private study only. These articles can be cited using the expression "in press". For example: Smith, J. (in press). Article Title. Journal Title.

Articles marked with this shopping trolley icon are available for purchase; click the icon to send an email request to purchase.

Online First articles are published online here before they appear in a journal issue. Online First articles are fully citable, complete with a DOI, and can be cited, read, and downloaded. Online First articles are published as Open Access (OA) articles to make the latest research available as early as possible.

Open Access: Articles marked with this Open Access icon are Online First articles. They are freely available and openly accessible to all without any restriction except those stated in their respective CC licences.

Register for our alerting service, which notifies you by email when new issues are published online.

International Journal of Vehicle Performance (1 paper in press)

Regular Issues

  • Heuristic deep reinforcement learning approach for deeply adaptive navigation in indoor dynamic environments   Order a copy of this article
    by Walid Jebrane, Nabil El Akchioui 
    Abstract: Navigating mobile robots safely and efficiently through complex indoor environments populated by dynamic obstacles remains a challenging task. While traditional navigation techniques based on path planning and rule-based control struggle in such scenarios, deep reinforcement learning (DRL) offers a promising paradigm by enabling robots to learn adaptive behaviours directly from experience. This paper presents a DRL-based approach for autonomous indoor robot navigation integrated within the robot operating system (ROS) navigation stack. A proximal policy optimisation (PPO) agent is trained on laser scan and goal coordinate observations to output low-level velocity commands in real time. To address limitations in global re-planning, we incorporate the computationally efficient D* Lite algorithm. Experiments in a Gazebo simulation combining the Flatland and PedSim simulators evaluate our approach. The environment models static obstacles and dynamic pedestrians using social force modelling. Results demonstrate the ability of our trained agent to navigate complex maps safely and efficiently while negotiating pedestrian traffic. Comparisons with alternative global planners and navigation configurations provide insights into the benefits of integrating DRL with heuristic path planning. Our work contributes an adaptive solution for autonomous mobile robot navigation in dynamic indoor environments.
    Keywords: deep reinforcement learning; DRL; D* Lite; proximal policy optimisation; PPO; autonomous navigation.
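The abstract above trains the navigation agent with proximal policy optimisation (PPO). As a rough illustration of the core idea, the following is a minimal sketch of PPO's clipped surrogate objective in plain Python; the function and parameter names (`ppo_clip_objective`, `clip_eps`) are illustrative assumptions, not taken from the paper, and a real implementation would operate on neural-network policy outputs rather than raw lists.

```python
import math

def ppo_clip_objective(old_logps, new_logps, advantages, clip_eps=0.2):
    """Average PPO clipped surrogate objective over a batch of transitions.

    old_logps:  log-probabilities of the taken actions under the old policy
    new_logps:  log-probabilities of the same actions under the current policy
    advantages: advantage estimates for each transition
    clip_eps:   clipping range epsilon (0.2 is the commonly used default)
    """
    total = 0.0
    for old_lp, new_lp, adv in zip(old_logps, new_logps, advantages):
        ratio = math.exp(new_lp - old_lp)                    # pi_new / pi_old
        clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
        total += min(ratio * adv, clipped * adv)             # pessimistic bound
    return total / len(advantages)
```

The clipping keeps each policy update close to the data-collecting policy, which is why PPO is a popular choice for learning continuous velocity commands from laser-scan observations as described above.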