Workplace: National School of Computer Science and Systems Analysis (ENSIAS), Mohammed V University in Rabat, Rabat 8007, Morocco
E-mail: Houda.benbrahim@um5.ac.ma
Research Interests: Artificial Intelligence, Data Mining, Data Structures and Algorithms, Analysis of Algorithms, Statistics
Biography
Dr. Houda Benbrahim received a Ph.D. in Computer Science from the University of Portsmouth, UK, in 2008. She joined Mohammed V University the same year as a teacher at the ENSIAS engineering school, Rabat, Morocco, where she teaches Data Mining, Artificial Intelligence, Data Analysis, and Statistics, and supervises researchers in these fields.
By Fenjiro Youssef and Benbrahim Houda
DOI: https://doi.org/10.5815/ijisa.2020.05.02, Pub. Date: 8 Oct. 2020
The self-driving car is one of the most impressive applications of artificial intelligence and one of its most active research areas. It uses end-to-end deep learning models to make steering and speed decisions, relying mainly on Convolutional Neural Networks for computer vision, connected to a fully connected network that outputs control commands. In this paper, we introduce the self-driving car domain and the CARLA simulation environment, with a focus on the lane-keeping task. We then present the two main families of end-to-end models used to solve this problem: first, deep imitation learning (IL), and specifically the Conditional Imitation Learning (COIL) algorithm, which learns from expert-labeled demonstrations by trying to mimic the expert's behavior; and second, Deep Reinforcement Learning (DRL), in particular DQN and DDPG (Deep Q-Network and Deep Deterministic Policy Gradient, respectively), which learn by trial and error under a Markov decision process (MDP) formulation to obtain the best policy for the driving agent. In the last chapter, we compare the IL and DRL algorithms using a new approach, with metrics drawn from deep learning (loss during the training phase) and from self-driving cars (episode duration before a crash, and average distance from the road center during the testing phase). The results of training and testing on the CARLA simulator reveal that the IL algorithm performs better than the DRL algorithms when the agents are evaluated on a circuit they were trained on, but the DRL agents show better adaptability on new roads.
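As background for the trial-and-error learning mentioned in the abstract, the Bellman update underlying DQN can be sketched with a minimal tabular version. This is only an illustration of the update rule, not the paper's network-based implementation; the state names and action set below are hypothetical.

```python
# Minimal tabular Q-learning sketch of the trial-and-error update behind DQN.
# DQN replaces the table with a neural network, but the target is the same:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
from collections import defaultdict

ACTIONS = ["steer_left", "keep", "steer_right"]  # hypothetical action set

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning step: move Q[s][a] toward the bootstrapped target."""
    target = r + gamma * max(Q[s_next][a2] for a2 in ACTIONS)
    Q[s][a] += alpha * (target - Q[s][a])
    return Q[s][a]

# Toy lane-keeping reward: +1 for staying near the lane center.
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})
q_update(Q, "centered", "keep", 1.0, "centered")  # first update: 0.1*(1.0 - 0.0) = 0.1
```

In DQN the `max` over next-state values is computed by a target network over image observations, and updates are sampled from a replay buffer rather than applied online as here.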