Learning-based Integrated Cooperative Motion Planning and Control of Multi-AUVs
This paper introduces a learning-based solution tailored for the integrated motion planning and control of Multiple Autonomous Underwater Vehicles (AUVs). Cooperative motion planning, which encompasses tasks such as waypoint tracking and self/obstacle collision avoidance, is difficult to handle in a rule-based algorithmic paradigm: the diverse and unpredictable situations encountered necessitate a proliferation of if-then conditions in the implementation. Recognizing the limitations of traditional approaches, which depend heavily on models and the geometry of the system, our solution offers a paradigm shift. This study proposes an integrated motion planning and control strategy that leverages sensor and navigation outputs to dynamically generate longitudinal and lateral control outputs. At the heart of this methodology lies a continuous-action Deep Reinforcement Learning (DRL) framework, the Twin Delayed Deep Deterministic Policy Gradient (TD3). The algorithm overcomes traditional limitations through an elaborated reward function, enabling the seamless execution of the control actions essential for maneuvering multiple AUVs. Simulation tests under both nominal and perturbed conditions, accounting for obstacles and underwater current disturbances, demonstrate the feasibility and robustness of the proposed technique.
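The abstract describes a deterministic TD3 actor that maps sensor/navigation observations to continuous longitudinal and lateral commands, shaped by a reward that rewards waypoint progress and penalizes proximity to obstacles. A minimal sketch of that interface is given below; the observation layout, network sizes, reward weights, and safety radius are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): a TD3-style deterministic actor
# mapping an observation vector to bounded continuous control actions, plus an
# illustrative shaped reward for waypoint tracking with obstacle avoidance.

rng = np.random.default_rng(0)

OBS_DIM = 6   # assumed layout: range/bearing to waypoint, nearest-obstacle
              # range/bearing, estimated current components
ACT_DIM = 2   # assumed actions: [longitudinal (surge) command, lateral
              # (yaw-rate) command], each bounded in [-1, 1]

# One-hidden-layer actor; TD3 learns a deterministic policy a = pi(s)
W1 = rng.standard_normal((OBS_DIM, 32)) * 0.1
W2 = rng.standard_normal((32, ACT_DIM)) * 0.1

def actor(obs):
    h = np.tanh(obs @ W1)
    return np.tanh(h @ W2)  # tanh keeps actions in [-1, 1]

def reward(dist_to_waypoint, dist_to_obstacle, safe_radius=5.0):
    # Illustrative shaping: small penalty proportional to waypoint distance,
    # large penalty once inside the obstacle's safety radius.
    r = -0.1 * dist_to_waypoint
    if dist_to_obstacle < safe_radius:
        r -= 10.0 * (safe_radius - dist_to_obstacle)
    return r

obs = np.array([12.0, 0.3, 8.0, 0.1, -0.2, 0.0])
action = actor(obs)
print(action.shape)                      # (2,)
print(reward(12.0, 8.0) < reward(2.0, 8.0))  # closer to waypoint -> higher reward
```

In training, TD3 would add clipped noise to the target policy, maintain twin critics, and delay actor updates; only the deployed state-to-action mapping is sketched here.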
Item Type | Article |
---|---|
Additional information | © 2024 The Author(s). This is an open access article under the Creative Commons Attribution Non-Commercial No-Derivatives CC BY-NC-ND licence, https://creativecommons.org/licenses/by-nc-nd/4.0/ |
Date Deposited | 15 May 2025 15:46 |
Last Modified | 31 May 2025 00:46 |