TY - JOUR
T1 - Reinforcement Learning-Based Human-Like Shared Control for Driver-Vehicle Interactions
AU - Kumar Swain, Subrat
AU - Lee, Sangmoon
AU - Veluvolu, Kalyana C.
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2025
Y1 - 2025
N2 - Enhancing lateral stability and driver comfort in the presence of driver behavior uncertainties is essential in the context of shared control for autonomous vehicles. Given the absence of exact model-based information in real time, this study harnesses an inverse reinforcement learning (IRL) procedure to establish the reward function for the automation model from expert data. In contrast to existing shared control studies, in which automation counteracts driver behavior uncertainties, the novelty of the proposed study lies in developing human-like behavior within the shared control environment. To achieve the overall objective of human-like driving, a reinforcement learning (RL)-based approach is employed to generate the automation road steer angle and the driver-automation (DA) relative weights, ensuring that the lane-keeping, vehicle lateral stability, and driver comfort objectives are fulfilled simultaneously. The reward function formulated for generating the DA relative weights and the automation model is integrated with the human arm muscular characteristics of the driver behavior model in the RL framework to develop the optimal shared steer angle. Comprehensive evaluations were performed to compare the driving performance of the proposed RL-based shared control system with existing adaptive shared control methods. Simulation outcomes indicate that the proposed control technique outperforms the others by closely replicating human driving behavior. In addition, a hardware-in-the-loop (HIL) setup was employed to validate the proposed shared control scheme under varying longitudinal speeds.
AB - Enhancing lateral stability and driver comfort in the presence of driver behavior uncertainties is essential in the context of shared control for autonomous vehicles. Given the absence of exact model-based information in real time, this study harnesses an inverse reinforcement learning (IRL) procedure to establish the reward function for the automation model from expert data. In contrast to existing shared control studies, in which automation counteracts driver behavior uncertainties, the novelty of the proposed study lies in developing human-like behavior within the shared control environment. To achieve the overall objective of human-like driving, a reinforcement learning (RL)-based approach is employed to generate the automation road steer angle and the driver-automation (DA) relative weights, ensuring that the lane-keeping, vehicle lateral stability, and driver comfort objectives are fulfilled simultaneously. The reward function formulated for generating the DA relative weights and the automation model is integrated with the human arm muscular characteristics of the driver behavior model in the RL framework to develop the optimal shared steer angle. Comprehensive evaluations were performed to compare the driving performance of the proposed RL-based shared control system with existing adaptive shared control methods. Simulation outcomes indicate that the proposed control technique outperforms the others by closely replicating human driving behavior. In addition, a hardware-in-the-loop (HIL) setup was employed to validate the proposed shared control scheme under varying longitudinal speeds.
KW - Model predictive control
KW - adaptive relative weight switching
KW - inverse reinforcement learning
KW - reinforcement learning
KW - shared control
UR - https://www.scopus.com/pages/publications/105008041757
U2 - 10.1109/TITS.2025.3571068
DO - 10.1109/TITS.2025.3571068
M3 - Article
AN - SCOPUS:105008041757
SN - 1524-9050
VL - 26
SP - 13452
EP - 13465
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 9
ER -