Workplace: Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh
E-mail: rdebnath@cseku.ac.bd
Website:
Research Interests: Detection Theory, Image Processing, Pattern Recognition, Computational Learning Theory
Biography
Rameswar Debnath is a Professor at the Computer Science and Engineering Discipline, Khulna University, Bangladesh. He received his Ph.D. from the University of Electro-Communications (UEC), Tokyo, Japan, in 2005. His research interests include statistical machine learning (in particular supervised learning, support vector machines, and kernel methods) and its applications to pattern recognition, image processing, bioinformatics, and natural language analysis.
By S.M. Mohidul Islam, Rameswar Debnath
DOI: https://doi.org/10.5815/ijigsp.2020.06.03, Pub. Date: 8 Dec. 2020
Content-based image retrieval is a popular approach to image searching because the search process analyses the actual contents of the image rather than the metadata associated with it. Prior research does not make clear which features or which similarity measures perform best among the many available alternatives, nor which combinations of them work best for content-based image retrieval. We performed a systematic and comprehensive evaluation of several visual feature extraction methods and several similarity measurement methods for this task. A feature vector is created after extracting color and/or texture and/or shape features, and similar images are then retrieved using different similarity measures. From the experimental results, we found that color moment and wavelet packet entropy features are the most effective, whereas color autocorrelogram, wavelet moment, and invariant moment features show comparatively weak results. As similarity measures, cosine and correlation are robust in most cases and standardized L2 in a few; on average, the city block measure retrieves more similar images, whereas the L1 and Mahalanobis measures are less effective in most cases. This is the first such system to be informed by a rigorous comparative analysis of six features and twelve similarity measures.
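As a rough illustration of the retrieval step described in this abstract, the sketch below compares a query feature vector against a database of feature vectors using interchangeable distance measures (cosine, correlation, city block, and others). This is an assumption-laden sketch, not the authors' code: the vectors, dimensions, and SciPy metric names are placeholders standing in for the paper's actual feature extraction and similarity definitions.

```python
# Minimal sketch of distance-based image retrieval over precomputed feature
# vectors. Feature extraction (color moments, wavelet packet entropy, etc.)
# is assumed to have been done already; random vectors are used as stand-ins.
import numpy as np
from scipy.spatial.distance import cdist

def retrieve_top_k(query_vec, db_vecs, metric="cosine", k=10):
    """Return indices of the k database images most similar to the query.

    query_vec : (d,) feature vector of the query image
    db_vecs   : (n, d) matrix of database feature vectors
    metric    : any metric supported by scipy's cdist, e.g. 'cosine',
                'correlation', 'cityblock', 'euclidean', 'mahalanobis',
                'seuclidean' (standardized L2)
    """
    query = np.asarray(query_vec, dtype=float).reshape(1, -1)
    db = np.asarray(db_vecs, dtype=float)
    # cdist returns distances; a smaller distance means a more similar image
    dists = cdist(query, db, metric=metric).ravel()
    return np.argsort(dists)[:k]

# Hypothetical usage with random vectors standing in for real image features
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database = rng.random((100, 32))   # 100 images, 32-dim feature vectors
    query = rng.random(32)
    print(retrieve_top_k(query, database, metric="cityblock", k=5))
```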
By Rafflesia Khan, Rameswar Debnath
DOI: https://doi.org/10.5815/ijigsp.2020.02.03, Pub. Date: 8 Apr. 2020
This paper addresses the problem of identifying certain human behaviors, such as distraction, and predicting their patterns. It proposes an artificial emotional intelligence (emotional AI) algorithm to detect changes in an individual's visual attention. In short, the algorithm detects a person's attentive and distracted periods from a video stream. It uses deviations from normal facial alignment to identify changes between attentive and distracted activities, e.g., looking in a different direction, speaking, yawning, sleeping, attention deficit hyperactivity, and so on. Facial landmarks are used to detect facial deviation, but not all landmarks are related to changes in human behavior. This paper therefore proposes an attribute model to identify the relevant attributes that best define distraction using the necessary facial landmark deviations. Once a change in those attributes is identified, the deviations are evaluated against a threshold-based emotional AI model to detect a change in the corresponding behavior. These changes are then evaluated using time constraints to determine attention levels. Finally, another threshold model over the attention level is used to recognize inattentiveness. The proposed algorithm is evaluated on video recordings of classroom learning activity to identify inattentive learners. Experimental results show that the algorithm can successfully identify changes in human attention, so it can be used as a learner or driver distraction detector. It can also be useful for human distraction detection, adaptive learning, and human-computer interaction, and for early detection of attention deficit hyperactivity disorder (ADHD) or dyslexia among patients.
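The sketch below is a minimal, hypothetical illustration of the two-level threshold idea described in the abstract: per-frame landmark deviation from a reference "normal" alignment is thresholded to flag distracted frames, and a second threshold over a time window turns the frame-level flags into an attention-level decision. The threshold values, window logic, and function names are illustrative assumptions, not the paper's actual model.

```python
# Illustrative two-level thresholding over facial landmark deviations.
# Landmark detection itself (e.g., from a video stream) is assumed to be
# handled elsewhere; all numeric thresholds are hypothetical.
import numpy as np

def frame_deviation(landmarks, reference):
    """Mean Euclidean deviation of the landmarks from the reference alignment.
    Both arrays have shape (num_landmarks, 2)."""
    return np.mean(np.linalg.norm(landmarks - reference, axis=1))

def attentive_over_window(frame_landmarks, reference,
                          deviation_threshold=5.0,   # pixels, hypothetical
                          inattentive_ratio=0.6):    # fraction of window, hypothetical
    """Return True if the person is judged attentive over the window of frames."""
    distracted_flags = [
        frame_deviation(lm, reference) > deviation_threshold
        for lm in frame_landmarks
    ]
    # If most frames in the window deviate from the normal alignment,
    # the whole window is classified as inattentive.
    return np.mean(distracted_flags) < inattentive_ratio
```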
By Rafflesia Khan, Rameswar Debnath
DOI: https://doi.org/10.5815/ijigsp.2019.08.01, Pub. Date: 8 Aug. 2019
In this paper, an efficient approach is proposed to localize every clearly visible object, or region of an object, in an image using little memory and computing power. For object detection, every input image is processed to overcome several complexities that are the main obstacles to better results, such as overlap between multiple objects, noise in the image background, and poor resolution. An improved Convolutional Neural Network based classification (recognition) algorithm is also implemented, which has been shown to outperform baseline works. Combining these detection and recognition approaches, we developed a competent multi-class Fruit Detection and Recognition (FDR) model that remains proficient under various limitations such as high or poor image quality, complex backgrounds or lighting conditions, different fruits of the same shape and color, multiple overlapping fruits, the presence of non-fruit objects in the image, and variety in the size, shape, angle, and features of the fruit. The proposed FDR model is also capable of detecting every single fruit separately from a set of overlapping fruits. Another major contribution of the FDR model is that it is not a dataset-oriented model that works well only on a particular dataset: it has been shown to perform well both on real-world images (e.g., our own dataset) and on several state-of-the-art datasets. Taking a number of challenges into consideration, the proposed model detects and recognizes fruits from images with better accuracy and an average precision of about 0.9875.
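As a hedged illustration of the recognition step, the sketch below builds a small CNN image classifier in Keras. The layer sizes, input shape, and number of fruit classes are placeholders; the paper's improved CNN architecture, detection stage, and training procedure are not reproduced here.

```python
# Minimal sketch of a CNN classifier for cropped fruit regions.
# NUM_CLASSES and INPUT_SHAPE are hypothetical placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10           # hypothetical number of fruit classes
INPUT_SHAPE = (64, 64, 3)  # hypothetical image size after localization/cropping

def build_fruit_classifier():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_fruit_classifier().summary()
```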