IJIGSP Vol. 9, No. 10, Oct. 2017
Cover page and Table of Contents: PDF (size: 242KB)
REGULAR PAPERS
Acoustic vibrations of the heart in the time domain correspond to the phonocardiogram (PCG) signal. In the healthy case, a PCG signal consists of two fundamental sounds, s1 and s2, produced by the mechanical functioning of the heart. Abnormalities in the heart valves give rise to cardiac sounds other than s1 and s2, which makes the PCG signal a valuable tool for tracking heart diseases. The characterization and analysis of PCG signals is currently a fertile area of study and investigation. However, most work in this area has focused only on time-frequency analysis, without exploiting the periodic character of the PCG signal, owing to the limitations of existing PCG models. In this work, we propose a coherent mathematical model for PCG signals based on cyclostationarity and the Gabor kernel. The motivation is to define a framework, built on cyclic statistics for their robustness to noise, that provides a full description of PCG signals and leads to easy and efficient early identification of certain heart abnormalities. The validity of the proposed model and its capacity to reflect the functioning of the heart are tested on synthetic and real data sets.
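A minimal illustrative sketch of the modeling idea, not the authors' exact formulation: a synthetic PCG-like signal is built as a periodic (cyclostationary) train of Gabor atoms standing in for the s1 and s2 sounds. The sampling rate, burst frequencies, widths and cardiac period below are assumptions chosen only for illustration.

```python
# Synthetic PCG-like signal as a cyclostationary train of Gabor atoms.
# All numeric values are illustrative assumptions, not taken from the paper.
import numpy as np

def gabor_atom(t, t0, f, sigma):
    """Gaussian-windowed sinusoid (Gabor kernel) centered at t0."""
    return np.exp(-((t - t0) ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * (t - t0))

fs = 2000.0                      # sampling rate (Hz), assumed
t = np.arange(0, 5.0, 1 / fs)    # 5 s of signal
T_cardiac = 0.8                  # cardiac cycle length (s), assumed

pcg = np.zeros_like(t)
for k in range(int(t[-1] / T_cardiac) + 1):
    cycle_start = k * T_cardiac
    pcg += gabor_atom(t, cycle_start + 0.10, f=50.0, sigma=0.02)          # s1-like burst
    pcg += 0.6 * gabor_atom(t, cycle_start + 0.45, f=80.0, sigma=0.015)   # s2-like burst

pcg += 0.05 * np.random.randn(t.size)  # additive measurement noise
```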
Medical data science has been moving from conventional analog to more powerful digital imaging systems for some time; these imaging systems produce images directly in digital form. As digital technology evolves and exceeds the capability of analog imaging devices, the range of applications for image-guided surgical and diagnostic systems expands as well. Bandwidth and storage optimization are major issues in image processing technology. The Compressive Sensing (CS) algorithm can become a prominent tool for these issues because it can sample a signal at a rate much lower than twice its maximum frequency and still reconstruct a signal close to the original. This paper presents a novel scheme, Region-based Mixed-mode Medical Image Compression (RM2IC). Here, the region of interest is compressed with lossless hybrid compression methods and the non-region of interest is compressed with a lossy hybrid CS algorithm. RM2IC is compared with different existing hybrid compression methods and outperforms them in the visual perceptual quality of the reconstructed image and in compression rate. The performance analysis is based on PSNR, MSE and compression ratio.
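A minimal compressive-sensing sketch, separate from the RM2IC pipeline itself: a k-sparse signal is measured with far fewer random projections than its length and recovered with orthogonal matching pursuit. The dimensions and the choice of OMP as the recovery algorithm are illustrative assumptions.

```python
# Compressive sensing toy example: sparse signal, random measurements, OMP recovery.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # signal length, number of measurements, sparsity

x = np.zeros(n)                      # k-sparse ground-truth signal
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                                      # m << n compressive samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```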
Automatic face recognition is a major research area in computer vision that aims to recognize human faces without human intervention. Significant developments in this field have shown that in many face recognition applications the automated techniques outperform humans. The conventional Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) are used in face recognition, where they provide high performance. However, this performance can be improved further by transforming the input into different domains before applying the SIFT and SURF algorithms. Hence, we apply the Discrete Wavelet Transform (DWT) or Gabor Wavelet Transform (GWT) to the input face images, which provides denser and richer information for the conventional SIFT or SURF algorithms. The SIFT or SURF matching scores from each subimage are fused before making the final decision. Simulations show that the proposed wavelet-based approaches using SIFT or SURF provide very high performance compared to the conventional algorithms.
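A minimal sketch of the general idea, not the authors' exact pipeline: a face image is decomposed with a two-dimensional DWT and SIFT is run on each level-1 subband, so the detector sees both the coarse approximation and the detail images. The Haar wavelet and the normalization step are illustrative assumptions.

```python
# SIFT descriptors extracted from the four level-1 DWT subbands of a face image.
import cv2
import numpy as np
import pywt

def dwt_sift_descriptors(gray_face):
    """Return SIFT descriptors pooled over the four level-1 DWT subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_face.astype(np.float32), "haar")
    sift = cv2.SIFT_create()
    all_desc = []
    for band in (cA, cH, cV, cD):
        # Rescale each subband to 8-bit so SIFT can process it.
        band_u8 = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, desc = sift.detectAndCompute(band_u8, None)
        if desc is not None:
            all_desc.append(desc)
    return np.vstack(all_desc) if all_desc else None

# Usage: descriptors of a probe and a gallery face could then be matched with a
# brute-force matcher, and the per-subband matching scores fused before the decision.
```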
The main objective of this assistive framework is to render the textual information in an image captured by a visually challenged person as speech, so that the person can acquire knowledge about the surroundings. The framework can help visually challenged people read books, magazines, warnings, instructions and various displays by taking an image of them along with the surroundings. Optical Character Recognition (OCR) then extracts and recognizes the text in the image and generates a text file, which is converted to speech by Text-to-Speech (TTS) synthesis. The inherent problem with the previous approach is that the acquired image, being captured by a visually challenged person, may suffer from varying lighting conditions, noise, skew and blur; the overall accuracy of the system is then at stake, since inefficient OCR leads to improper speech output from the TTS synthesis. In this paper we introduce two additional stages: deblurring using the blind deconvolution method and a pre-processing operation to remove the effects of noise and blur, which prepare the image so that the framework produces reliable results for the visually challenged. The proposed approach is implemented in Matlab on images captured manually and taken from the internet, and the results, along with the OCR text files and the corresponding speech output, show that our framework outperforms the previous one.
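A minimal sketch of the capture-to-speech pipeline, not the authors' Matlab implementation: deblur the image, run OCR with Tesseract, then speak the recognized text. A non-blind Richardson-Lucy step with an assumed Gaussian point-spread function stands in here for the blind-deconvolution stage described in the paper.

```python
# Image -> deblurring -> OCR -> speech, as a rough stand-in for the described framework.
import numpy as np
import pytesseract
import pyttsx3
from skimage import io, restoration

def image_to_speech(image_path):
    img = io.imread(image_path, as_gray=True)        # float image in [0, 1]

    # Deblurring with an assumed 5x5 Gaussian point-spread function.
    x = np.arange(-2, 3)
    psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
    psf /= psf.sum()
    deblurred = np.clip(restoration.richardson_lucy(img, psf), 0, 1)

    # OCR on the cleaned image, then text-to-speech.
    text = pytesseract.image_to_string((deblurred * 255).astype(np.uint8))
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    return text
```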
Information processing using a neural network counter can result in faster and more accurate computation due to the parallel processing, learning and adaptability of neural networks to various environments. In this paper, a novel 4-bit negative-edge-triggered binary synchronous up/down counter built from artificial neural networks trained with hybrid algorithms is proposed. The counter is first built solely from logic gates and flip-flops, and these are then trained using different evolutionary algorithms with a multi-objective fitness function combined with back-propagation learning. Thus, the device is less prone to error and has a very fast convergence rate. The simulation results of the proposed hybrid algorithms are compared in terms of network weights, bit values, percentage error and variance with respect to the theoretical outputs, and show that the proposed counter produces values close to the theoretical outputs.
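A minimal sketch of the underlying idea, not the authors' hybrid evolutionary training: a small back-propagation network is taught the next-state function of a 4-bit up/down counter from its full truth table. The network size and the rounding of analog outputs to bits are illustrative assumptions.

```python
# Learn the next-state function of a 4-bit up/down counter with a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

def to_bits(v, width=4):
    return [(v >> i) & 1 for i in range(width)]

# Inputs: current 4-bit state + direction bit (1 = count up, 0 = count down).
X, Y = [], []
for state in range(16):
    for up in (0, 1):
        nxt = (state + 1) % 16 if up else (state - 1) % 16
        X.append(to_bits(state) + [up])
        Y.append(to_bits(nxt))
X, Y = np.array(X, float), np.array(Y, float)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, Y)

pred = np.rint(net.predict(X))          # round analog outputs to bits
print("bit error rate:", np.mean(pred != Y))
```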
An automated system for plant species recognition is a present-day need, since manual taxonomy is cumbersome, tedious, time-consuming, expensive, and suffers from perceptual bias as well as the taxonomic impediment. The availability of digitized databases of high-resolution plant images annotated with metadata such as date, time and latitude-longitude information has increased interest in developing automated systems for plant taxonomy. Most existing approaches work only on a particular organ of the plant, such as the leaf, bark or flowers, and utilize only the contextual information stored in the image, which is time-dependent, whereas the other associated metadata should also be considered. Motivated by the need for automated plant species recognition and the availability of digital plant databases, we propose image-based identification of plant species in which the image may depict different plant parts: leaf, stem, flower, fruit, scanned leaf, branch or the entire plant. Besides the image content, our system also uses the metadata associated with the images, such as latitude, longitude and capture date, to ease identification and obtain more accurate results. For a given plant image and its associated metadata, the system recognizes the species and produces an output containing the family, genus and species name. Different recognition methods are used according to the part of the plant to which the image belongs. For the flower category, a fusion of shape, color and texture features is used. For the other categories, such as stem, fruit, leaf and leaf scan, sparsely coded SIFT features pooled with a spatial pyramid matching approach are used. The proposed framework is implemented and tested on ImageClef data with 50 different species classes. A maximum accuracy of 98% is attained in the leaf-scan sub-category, whereas the minimum accuracy, 67.3%, is obtained in the fruit sub-category.
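A minimal sketch of SIFT features pooled over a spatial pyramid, omitting the sparse-coding step used in the paper: descriptors are quantized against a small k-means codebook and histograms are pooled per pyramid cell. The codebook size and pyramid depth are illustrative assumptions.

```python
# Bag-of-visual-words SIFT features pooled over a two-level spatial pyramid.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def spatial_pyramid_bow(gray, codebook, levels=(1, 2)):
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        return np.zeros(codebook.n_clusters * sum(l * l for l in levels))
    words = codebook.predict(desc.astype(np.float64))   # visual-word index per keypoint
    pts = np.array([kp.pt for kp in kps])
    h, w = gray.shape
    feats = []
    for l in levels:                                     # one histogram per pyramid cell
        for i in range(l):
            for j in range(l):
                in_cell = (pts[:, 1] * l // h == i) & (pts[:, 0] * l // w == j)
                hist = np.bincount(words[in_cell], minlength=codebook.n_clusters)
                feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

# Usage (assumed data): fit `codebook = KMeans(n_clusters=100).fit(all_training_descriptors)`
# on descriptors from training images, then feed the pooled vectors to a classifier.
```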
This paper presents a complete image feature representation based on the texton theory proposed by Julesz, called the complete texton matrix (CTM), for texture image classification. The descriptor can be viewed as an improved version of the texton co-occurrence matrix (TCM) [1] and the multi-texton histogram (MTH) [2]. It is specially designed for natural image analysis and achieves a higher classification rate. The CTM expresses the spatial correlation of textons and can be considered a generalized visual attribute descriptor. The original textures are first quantized into 256 colors and the color gradient is computed in RGB vector space. Then the statistical information of eleven derived textons, detected on a 2 x 2 grid in a non-overlapping manner, is computed to describe image features more precisely. To reduce dimensionality, the descriptor is extended to a compact CTM (CCTM). The proposed CTM and CCTM methods are extensively tested on the Brodatz, Outex and UIUC natural image databases. The results demonstrate the superiority of the present descriptor over state-of-the-art representative schemes such as the uniform LBP (ULBP), local ternary pattern (LTP), complete LBP (CLBP), TCM and MTH.
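A minimal sketch of grid-based texton detection, not the paper's exact eleven-texton definition: the image is quantized, scanned in non-overlapping 2 x 2 blocks, and each block is mapped to a texton type according to which of its pixels share the same quantized value. The quantization level and the pattern encoding are illustrative assumptions.

```python
# Histogram of 2x2-block texton patterns on a quantized grayscale image.
import numpy as np

def texton_histogram(gray, n_levels=16):
    q = (gray.astype(np.float64) / 256 * n_levels).astype(np.int32)  # quantize intensities
    h, w = q.shape
    h, w = h - h % 2, w - w % 2                      # crop to even size
    blocks = q[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    blocks = blocks.reshape(-1, 4)                   # one row per non-overlapping 2x2 block

    # Encode which of the six pixel pairs in a block are equal as a 6-bit pattern.
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pattern = np.zeros(len(blocks), dtype=np.int32)
    for bit, (a, b) in enumerate(pairs):
        pattern |= (blocks[:, a] == blocks[:, b]).astype(np.int32) << bit

    hist = np.bincount(pattern, minlength=64)
    return hist / hist.sum()
```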