Workplace: Signal and Image Processing Laboratory, ENIT, 1002 Tunisia
E-mail: ellouz.noureddine@enit.rnu.tn
Research Interests: Computer Systems and Computational Processes, Pattern Recognition, Image Processing
Biography
Pr. Noureddine Ellouze is currently a senior professor at the High Institute of Engineering of Tunis. He received his Ph.D. degree in 1977 from INP (Toulouse, France). He is also the Director of the Research Laboratory LSTS at ENIT. Pr. Ellouze has supervised numerous Master's and Ph.D. theses and has published more than 300 scientific papers in journals and conference proceedings in the domains of signal processing, speech and image processing, biomedical applications, and pattern recognition.
By Hajer Rahali, Zied Hajaiej, Noureddine Ellouze
DOI: https://doi.org/10.5815/ijigsp.2014.11.03, Pub. Date: 8 Oct. 2014
In this paper we introduce a robust feature extractor, dubbed Modified Function Cepstral Coefficients (MODFCC), based on a gammachirp filterbank, Relative Spectral (RASTA) filtering, and Autoregressive Moving-Average (ARMA) filtering. The goal of this work is to improve the robustness of speech recognition systems in additive noise and real-time reverberant environments. Mel-Frequency Cepstral Coefficients (MFCC) and their RASTA- and ARMA-filtered variants (RASTA-MFCC and ARMA-MFCC) are the three main feature extraction techniques used in speech recognition systems; this paper presents several modifications to the original MFCC method. The effectiveness of the proposed changes to MFCC was tested and compared against the original RASTA-MFCC and ARMA-MFCC features. Prosodic features such as jitter and shimmer are added to the baseline spectral features. The above-mentioned techniques were tested with impulsive signals under various noisy conditions from the AURORA databases.
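As a rough illustration of the ARMA-filtered cepstral features discussed above, the following Python sketch extracts MFCCs and smooths each cepstral trajectory with a small ARMA filter along the frame axis. The filter order, the AR coefficients, and the use of librosa are assumptions made for illustration, not the authors' exact implementation.

```python
# Hedged sketch: MFCC extraction followed by simple ARMA-style smoothing of the
# cepstral trajectories, in the spirit of the ARMA-MFCC features described above.
import numpy as np
import librosa
from scipy.signal import lfilter

def arma_mfcc(wav_path, n_mfcc=13, order=3):
    y, sr = librosa.load(wav_path, sr=None)                  # read the waveform
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, frames)
    # Each cepstral coefficient trajectory is low-pass filtered along the frame
    # (time) axis to suppress fast, noise-like fluctuations while keeping the
    # slower speech dynamics.
    b = np.ones(order) / order        # MA numerator (assumed order)
    a = np.array([1.0, -0.5])         # simple, stable AR denominator (assumed)
    return lfilter(b, a, mfcc, axis=1)

# Example: feats = arma_mfcc("utterance.wav")
```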
By Lamia Bouafif, Noureddine Ellouze
DOI: https://doi.org/10.5815/ijisa.2014.09.02, Pub. Date: 8 Aug. 2014
Auditory models are very useful in many applications such as speech coding and compression, cochlear prostheses, and audio watermarking. In this paper we develop a new auditory model based on the REVCOR method. This technique is based on estimating the impulse response of a filter characterizing the auditory neuron and the cochlea. The first step of our study focuses on the development of a mathematical model based on the gammachirp system. This model is then programmed, implemented, and simulated in Matlab. The obtained results are compared with experimental values (REVCOR experiments) to validate the model and optimize its parameters. Two objective criteria are used to optimize the auditory model estimation: the SNR (signal-to-noise ratio) and the MQE (mean quadratic error). The simulation results show that, for the auditory model, only a reduced number of channels are excited (from 3 to 6). This result is very interesting for auditory implants because only the significant channels need to be stimulated, which also simplifies the electronic implementation and the medical intervention.
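For concreteness, here is a minimal Python sketch of a gammachirp impulse response (in the standard Irino–Patterson form) together with an SNR criterion of the kind used to compare a model output against REVCOR measurements. The parameter values n, b, and c below are common literature defaults, not the optimized values from the paper.

```python
# Hedged sketch of a gammachirp impulse response and an SNR fitting criterion.
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth (Glasberg & Moore) in Hz."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammachirp(fc, fs, dur=0.05, n=4, b=1.019, c=-3.0, phase=0.0):
    """g(t) = t^(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t + c*ln(t) + phase)."""
    t = np.arange(1, int(dur * fs)) / fs                      # start at 1/fs to avoid ln(0)
    env = t ** (n - 1) * np.exp(-2 * np.pi * b * erb(fc) * t)  # gamma envelope
    carrier = np.cos(2 * np.pi * fc * t + c * np.log(t) + phase)  # chirping carrier
    g = env * carrier
    return g / np.max(np.abs(g))                              # peak-normalized response

def snr_db(ref, est):
    """SNR (dB) between a measured REVCOR response and the model estimate."""
    return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - est) ** 2))
```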
By Imen Trabelsi, Dorra Ben Ayed, Noureddine Ellouze
DOI: https://doi.org/10.5815/ijigsp.2013.09.02, Pub. Date: 8 Jul. 2013
The purpose of a speech emotion recognition system is to classify a speaker's utterances into different emotional states such as disgust, boredom, sadness, neutrality, and happiness.
Speech features commonly used in speech emotion recognition (SER) rely on global, utterance-level prosodic features. In our work, we evaluate the impact of frame-level feature extraction. The speech samples come from the Berlin emotional database, and the features extracted from these utterances are energy, different variants of mel-frequency cepstral coefficients (MFCC), and velocity and acceleration features. The idea is to adapt the GMM-UBM approach, which has proved successful in speaker recognition, to emotion identification tasks. In addition, we propose a classification scheme for labeling emotions on a continuous, dimension-based scale.
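A simplified Python sketch of frame-level GMM scoring in the spirit of the GMM-UBM approach is given below: one diagonal-covariance GMM is trained per emotion on MFCC, velocity, and acceleration frames, and a test utterance is assigned to the model with the highest average frame log-likelihood. The MAP-adaptation step of a full GMM-UBM system is omitted, and the feature toolchain and hyperparameters are assumptions made for illustration.

```python
# Hedged sketch: per-emotion GMMs over frame-level MFCC + delta + delta-delta features.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def frame_features(y, sr, n_mfcc=13):
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    delta = librosa.feature.delta(mfcc)             # velocity features
    delta2 = librosa.feature.delta(mfcc, order=2)   # acceleration features
    return np.vstack([mfcc, delta, delta2]).T       # shape (frames, 3 * n_mfcc)

def train_emotion_gmms(train_data, n_components=16):
    """train_data: dict mapping an emotion label to a list of (waveform, sr) tuples."""
    models = {}
    for label, utterances in train_data.items():
        frames = np.vstack([frame_features(y, sr) for y, sr in utterances])
        models[label] = GaussianMixture(n_components, covariance_type="diag").fit(frames)
    return models

def classify(models, y, sr):
    frames = frame_features(y, sr)
    # score() returns the average per-frame log-likelihood under each emotion model.
    return max(models, key=lambda label: models[label].score(frames))
```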