Work place: EEDIS Labo, University of Sidi Bel Abbes, Algeria
E-mail: elhir@univ-sba.dz
Research Interests: Computational Science and Engineering, Artificial Intelligence
Biography
Ahmed Lehireche completed his ING diploma at ESI Algiers (1981), with his final curriculum project at IMAG (France), his MAGISTER diploma at USTOran (1993), and his DOCTORAT D'ETAT diploma at UDL Sidi Bel Abbes (2005). He works as a Director of Research and head of the Knowledge Engineering Team at the EEDIS laboratory, and is a full Professor in the computer science department of UDL Sidi Bel Abbes. His main concerns are AI, computer science theory, and semantics in IT.
Ahmed has authored many scientific papers in his research areas: knowledge engineering, web engineering, and artificial intelligence.
By Yasser Yahiaoui, Ahmed Lehireche, Djelloul Bouchiha
DOI: https://doi.org/10.5815/ijisa.2016.05.01, Pub. Date: 8 May 2016
The most familiar concept in artificial intelligence is knowledge representation. It aims to find an explicit symbolization covering all semantic aspects of knowledge, and to make it possible to use this representation to produce intelligent behavior such as reasoning.
The most important constraint is the usability of the representation; this is why the structures used must be well defined, so as to ease their manipulation by reasoning algorithms and thereby simplify implementation.
In this paper we propose a new approach based on the description logics formalism, with the goal of simplifying the implementation of description logic systems. This approach can reduce the complexity of reasoning algorithms through the vectorisation of concept definitions based on the subsumption hierarchy.
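One way to picture the vectorisation idea (a hypothetical sketch, not the paper's actual encoding) is to give every atomic concept an index, encode each concept as a bit vector of all its subsumers in the hierarchy, and reduce subsumption checking to a bitwise inclusion test:

```python
# Hypothetical sketch: vectorised subsumption checking over a concept
# hierarchy. Each concept is encoded as a bit vector of the concepts
# that subsume it (its ancestors, including itself).

def build_vectors(hierarchy):
    """hierarchy maps each concept name to a list of its direct parents.
    Returns a dict mapping concept -> int bit vector of all subsumers."""
    index = {c: i for i, c in enumerate(hierarchy)}
    vectors = {}

    def vec(c):
        if c in vectors:
            return vectors[c]
        v = 1 << index[c]          # the concept subsumes itself
        for parent in hierarchy[c]:
            v |= vec(parent)       # inherit all ancestors' bits
        vectors[c] = v
        return v

    for c in hierarchy:
        vec(c)
    return vectors

def subsumes(vectors, general, specific):
    # 'general' subsumes 'specific' iff general's subsumer bits are
    # included in specific's subsumer bits.
    return vectors[specific] & vectors[general] == vectors[general]

hierarchy = {
    "Thing": [],
    "Animal": ["Thing"],
    "Dog": ["Animal"],
    "Plant": ["Thing"],
}
vectors = build_vectors(hierarchy)
print(subsumes(vectors, "Animal", "Dog"))   # True
print(subsumes(vectors, "Plant", "Dog"))    # False
```

With this encoding, a subsumption query costs a single bitwise AND and comparison, instead of a hierarchy traversal.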
By Noureddine Doumi, Ahmed Lehireche, Denis Maurel, Ahmed Abdelali
DOI: https://doi.org/10.5815/ijitcs.2016.02.01, Pub. Date: 8 Feb. 2016
This work presents a method that enables the Arabic NLP community to build scalable lexical resources. The proposed method is low-cost and time-efficient, in addition to being scalable and extendible. The latter is reflected in the method's ability to be incremental in both respects: processing resources and generating lexicons. Using a corpus, tokens are first drawn from the corpus and lemmatized. Secondly, finite-state transducers (FSTs) are generated semi-automatically. Finally, the FSTs are used to produce all possible inflected verb forms with their full morphological features. Among the algorithm's strengths is its ability to generate transducers with 184 transitions, which would be very cumbersome to design manually. A second strength is a new inflection scheme for Arabic verbs, which increases the efficiency of the FST generation algorithm. The experimentation uses a representative corpus of Modern Standard Arabic. The number of semi-automatically generated transducers is 171. The coverage of the resulting open lexical resources is high: they cover more than 70% of Arabic verbs. The built resources contain 16,855 verb lemmas and 11,080,355 fully, partially, and non-vocalized inflected verb forms. All these resources are being made public and are currently used as an open package in the Unitex framework, available under the LGPL license.
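The generation step can be illustrated with a toy transducer (purely illustrative; the affixes, features, and structure here are invented and not the paper's actual Arabic transducers): each path from the start state concatenates output affixes around a lemma and collects the morphological features read along the way.

```python
# Toy illustration of FST-based inflected-form generation: each path
# through the transducer emits one surface form plus its features.
# The affixes and feature labels below are made up for the example.

# transitions: state -> list of (output_affix, feature, next_state);
# the special affix "LEMMA" is replaced by the input lemma.
TOY_FST = {
    "start": [("", "3sg.masc", "stem"), ("t", "3sg.fem", "stem")],
    "stem":  [("LEMMA", None, "end")],
    "end":   [],   # final state: no outgoing transitions
}

def generate(fst, lemma, state="start", prefix="", feats=()):
    """Yield (surface_form, features) for every path through the FST."""
    if not fst[state]:                       # final state reached
        yield prefix, feats
        return
    for affix, feat, nxt in fst[state]:
        out = prefix + (lemma if affix == "LEMMA" else affix)
        new_feats = feats + (feat,) if feat else feats
        yield from generate(fst, lemma, nxt, out, new_feats)

for form, features in generate(TOY_FST, "ktb"):
    print(form, features)
```

A real transducer of the kind the paper describes would have many more states and transitions (up to 184 in their case), which is exactly why semi-automatic generation pays off.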
By Adil Toumouh, Dominic Widdows, Ahmed Lehireche
DOI: https://doi.org/10.5815/ijieeb.2016.01.05, Pub. Date: 8 Jan. 2016
In this paper we explore two paradigms: firstly, paradigmatic representation via the native HAL model, including a model enriched with word order information using the permutation technique of Sahlgren et al. [21]; and secondly, syntagmatic representation via a words-by-documents model constructed using the Random Indexing method. We demonstrate that these kinds of word space models, initially dedicated to extracting similarity, can also be efficient for extracting relatedness from Arabic corpora. For a given word, the proposed models search for the words related to it. A result is qualified as a failure when the number of related words returned by a model is less than or equal to 4; otherwise it is considered a success. To decide whether one word is related to another, we rely on an expert of the economic domain and use a glossary of the domain. We first compare a native HAL model with a term-document model. The simple HAL model records the better result, with a success rate of 72.92%. In a second stage, we boost the HAL model's results by adding word order information via the permutation technique of Sahlgren et al. [21]. The success rate of the enriched HAL model reaches 79.2%.
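The combination of Random Indexing with order-encoding permutations can be sketched as follows (a minimal toy, assuming a cyclic-shift permutation and tiny parameters; the paper's actual dimensions, corpus, and configuration differ): each word gets a sparse random index vector, and a word's context vector accumulates its neighbours' index vectors, permuted by their relative position.

```python
# Illustrative sketch of Random Indexing with Sahlgren-style
# permutations for word order. Parameters and corpus are toy values.
import numpy as np

rng = np.random.default_rng(0)
DIM, NONZERO = 300, 10   # vector dimensionality and number of nonzeros

def index_vector():
    """Sparse ternary random index vector: a few +1/-1 entries."""
    v = np.zeros(DIM)
    positions = rng.choice(DIM, NONZERO, replace=False)
    v[positions] = rng.choice([-1.0, 1.0], NONZERO)
    return v

def train(corpus, window=2):
    vocab = {w for sent in corpus for w in sent}
    index = {w: index_vector() for w in vocab}
    context = {w: np.zeros(DIM) for w in vocab}
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j == i:
                    continue
                # encode word order: permute (cyclically shift) the
                # neighbour's index vector by its relative position
                context[w] += np.roll(index[sent[j]], j - i)
    return context

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

corpus = [["the", "bank", "lends", "money"],
          ["the", "bank", "invests", "money"]]
vectors = train(corpus)
print(cosine(vectors["lends"], vectors["invests"]))
```

Because "lends" and "invests" occur in identical ordered contexts here, their context vectors end up highly similar, which is the paradigmatic relatedness the models above exploit.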