TY - GEN
T1 - An Encoder-Sequencer-Decoder Network for Lane Detection to Facilitate Autonomous Driving
AU - Hussain, Muhammad Ishfaq
AU - Rafique, Muhammad Aasim
AU - Ko, Yeongmin
AU - Khan, Zafran
AU - Olimov, Farrukh
AU - Naz, Zubia
AU - Kim, Jeongbae
AU - Jeon, Moongu
N1 - Publisher Copyright:
© 2023 ICROS.
PY - 2023
Y1 - 2023
AB - Lane detection in all weather conditions is a pressing necessity for autonomous driving. Accurate lane detection ensures the safe operation of autonomous vehicles, enabling advanced driver assistance systems to track the vehicle and keep it within its lane. Traditional lane detection techniques rely heavily on a single image frame captured by the camera, which limits their robustness. Moreover, these conventional methods demand a constant stream of pristine images for uninterrupted lane detection, and their performance degrades under low brightness, shadows, occlusions, and deteriorating environmental conditions. Recognizing that lanes appear as continuous patterns across successive road images, our approach leverages a sequential model that processes multiple images for lane detection. In this study, we propose a deep neural network model to extract crucial lane information from a sequence of images. Our model adopts a convolutional neural network in an encoder-decoder architecture and incorporates a long short-term memory (LSTM) model for sequential feature extraction. We evaluate the performance of our proposed model on the TuSimple and CULane datasets, showcasing its superiority across various lane detection scenarios. Comparative analysis with state-of-the-art lane detection methods further substantiates our model's effectiveness.
KW - Autonomous Driving and Robotics
KW - Convolutional LSTM
KW - Encoder and Decoder Network
KW - TuSimple
UR - http://www.scopus.com/inward/record.url?scp=85179180800&partnerID=8YFLogxK
U2 - 10.23919/ICCAS59377.2023.10316884
DO - 10.23919/ICCAS59377.2023.10316884
M3 - Conference contribution
AN - SCOPUS:85179180800
T3 - International Conference on Control, Automation and Systems
SP - 899
EP - 904
BT - 23rd International Conference on Control, Automation and Systems, ICCAS 2023
PB - IEEE Computer Society
T2 - 23rd International Conference on Control, Automation and Systems, ICCAS 2023
Y2 - 17 October 2023 through 20 October 2023
ER -
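
The abstract above describes an encoder-sequencer-decoder pipeline: a CNN encoder compresses each frame, a convolutional LSTM (the "sequencer") aggregates features across the frame sequence, and a CNN decoder upsamples to a per-pixel lane map. The following is a minimal PyTorch sketch of that general pattern, not the authors' published model; all layer sizes, channel counts, and names are illustrative assumptions.

# Minimal encoder-sequencer-decoder sketch for sequence-based lane
# detection. Assumed configuration for illustration only.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: LSTM gates computed with convolutions."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One conv produces all four gates (input, forget, cell, output).
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class EncoderSequencerDecoder(nn.Module):
    def __init__(self, hid_ch=64):
        super().__init__()
        # Encoder: two stride-2 conv blocks, features at 1/4 resolution.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, hid_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.sequencer = ConvLSTMCell(hid_ch, hid_ch)
        # Decoder: mirror the encoder back to full resolution, 1 lane channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hid_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, _, hh, ww = frames.shape
        h = frames.new_zeros(b, self.sequencer.hid_ch, hh // 4, ww // 4)
        c = torch.zeros_like(h)
        for step in range(t):  # fold each frame's features into the state
            feat = self.encoder(frames[:, step])
            h, c = self.sequencer(feat, (h, c))
        # Decode the final hidden state into a lane logit map.
        return self.decoder(h)


if __name__ == "__main__":
    model = EncoderSequencerDecoder()
    clip = torch.randn(2, 5, 3, 128, 256)  # 2 clips of 5 RGB frames
    print(model(clip).shape)  # torch.Size([2, 1, 128, 256])

The key design point the abstract motivates is visible in the forward pass: the ConvLSTM state carries lane evidence across frames, so the decoded map can remain stable even when a single frame is degraded by shadows or occlusion.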