Workplace: National School of Computer Science and Systems Analysis (ENSIAS), Mohammed V University, Rabat 8007, Morocco
E-mail: fenjiro@gmail.com
Website:
Research Interests: Engineering
Biography
Fenjiro Youssef was born on 21 December 1978. He received a Master's degree in Industrial Computer Science from ENSEM-INPL Nancy, France, in 2001.
In 2002, he joined Maroc Telecom as an engineer responsible for Internet platforms. He became head of Internet IT in 2006 and project management officer in 2012. In 2016, he enrolled in a doctorate at Mohammed V University, in the IT department of the ENSIAS engineering school in Rabat, Morocco, with Prof. Benbrahim Houda as his thesis supervisor.
By Fenjiro Youssef, Benbrahim Houda
DOI: https://doi.org/10.5815/ijisa.2020.05.02, Pub. Date: 8 Oct. 2020
The self-driving car is one of the most compelling applications and most active research areas of artificial intelligence. It uses end-to-end deep learning models to make steering and speed decisions, relying mainly on Convolutional Neural Networks for computer vision, connected to a fully connected network that outputs control commands. In this paper, we introduce the self-driving car domain and the CARLA simulation environment with a focus on the lane-keeping task. We then present the two main end-to-end approaches used to solve this problem. The first is deep imitation learning (IL), specifically the Conditional Imitation Learning (COIL) algorithm, which learns from expert-labeled demonstrations by trying to mimic the expert's behavior. The second is Deep Reinforcement Learning (DRL), specifically DQN and DDPG (Deep Q-Learning and Deep Deterministic Policy Gradient, respectively), which learn by trial and error while modeling the task as a Markov decision process (MDP) to obtain the best policy for the driving agent. In the last chapter, we compare the IL and DRL algorithms with a new approach, using metrics from deep learning (loss during the training phase) and from self-driving (episode duration before a crash and average distance from the road center during the testing phase). The results of training and testing on the CARLA simulator reveal that the IL algorithm outperforms the DRL algorithm when the agents are already trained on a given circuit, but DRL agents show better adaptability on new roads.
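To make the DRL side of the abstract concrete, the following is a minimal, generic sketch of the one-step Bellman target that DQN bootstraps from, y = r + γ·max_a' Q(s', a'), with y = r for terminal transitions. This is an illustrative example only, not code from the paper: the function name `dqn_targets`, the stand-in Q-values, and the discount value are all assumptions for demonstration.

```python
import numpy as np

def dqn_targets(rewards, q_next, dones, gamma=0.99):
    """Compute one-step TD targets for a batch of transitions.

    rewards: shape (B,)   immediate rewards
    q_next:  shape (B, A) Q-values of the next states (network output stand-in)
    dones:   shape (B,)   1.0 if the transition ended the episode, else 0.0
    """
    # Greedy next-state value: max over the action axis
    max_q_next = q_next.max(axis=1)
    # Terminal states contribute no bootstrapped future value
    return rewards + gamma * max_q_next * (1.0 - dones)

rewards = np.array([1.0, 0.5])
q_next = np.array([[0.2, 0.8],   # Q(s', .) for transition 1
                   [0.4, 0.1]])  # Q(s', .) for transition 2
dones = np.array([0.0, 1.0])     # second transition is terminal
print(dqn_targets(rewards, q_next, dones))  # [1.792, 0.5]
```

In a full DQN agent, these targets would serve as the regression labels for the Q-network's loss; DDPG replaces the max over discrete actions with a learned deterministic policy, which is what makes it suitable for continuous steering and throttle commands.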