IJIGSP Vol. 8, No. 10, Oct. 2016
REGULAR PAPERS
Eye blink detection has gained considerable interest in recent years in the field of Human Computer Interaction (HCI). Research is being conducted worldwide on new Natural User Interfaces (NUI) that use eye blinks as an input. This paper presents a comparison of five non-intrusive methods for eye blink detection in low-resolution eye images, using features such as mean intensity, Fisher faces and Histogram of Oriented Gradients (HOG), and classifiers such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN). A comparative study is performed by varying the number of training images and under uncontrolled lighting conditions with low-resolution eye images. The results show that HOG features combined with an SVM classifier outperform all other methods, with an accuracy of 85.62% when tested on images from a completely unseen dataset.
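The HOG-plus-classifier pipeline summarized above can be illustrated in a few lines. Below is a minimal, hedged sketch of a HOG extractor (patch size, cell size and bin count are assumptions for illustration, not the paper's settings; real pipelines also add block normalization before training an SVM on the vectors):

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientations,
    each L2-normalized. Illustrative only; block normalization omitted."""
    img = img.astype(float)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]       # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientations
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i+cell, j:j+cell].ravel()
            a = ang[i:i+cell, j:j+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

eye = np.random.default_rng(0).random((24, 24))  # stand-in for a low-res eye patch
f = hog_features(eye)
print(f.shape)  # (81,): 3x3 cells x 9 bins
```

The resulting fixed-length vectors are what an open/closed-eye classifier such as a linear SVM would be trained on.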
Reconstruction of a sparse signal from few observations requires a compressive sensing based recovery algorithm, which saves memory and storage. Various sparse recovery techniques, including l_1 minimization, greedy pursuit approaches and non-convex optimization, require the sparsity to be known in advance. This article presents a generalized adaptive orthogonal matching pursuit with forward-backward movement under the cumulative coherence property, which removes the need to know the sparsity prior to implementation. In this technique, the forward step increases the size of the support set and the backward step eliminates misidentified elements. Multiple indices are selected on the basis of maximum correlation through the forward-backward movement, with the backward step kept smaller than the forward one. These forward-backward steps iterate adaptively and terminate with a stopping condition that ensures the identification of the significant components. The recovery performance of the proposed algorithm is demonstrated via simulation results, including reconstruction of sparse signals in noisy and noise-free environments. Its major advantage is that, in contrast to earlier reconstruction techniques, it does not require knowledge of the sparsity in advance. The evaluation and comparative analysis show that the algorithm considerably improves recovery performance and efficiency.
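As a rough illustration of the forward-backward idea (not the authors' exact algorithm; the step sizes `alpha`/`beta`, the least-squares refit and the stopping rule are assumptions), each iteration below adds the `alpha` best-correlated atoms and then prunes the `beta` weakest ones, so the support grows without knowing the sparsity in advance:

```python
import numpy as np

def fb_omp(A, y, alpha=3, beta=1, tol=1e-6, max_iter=50):
    """Sketch of forward-backward matching pursuit (illustrative)."""
    m, n = A.shape
    S = np.array([], dtype=int)
    r = y.copy()
    for _ in range(max_iter):
        corr = np.abs(A.T @ r)
        corr[S] = -1                                   # never re-pick chosen atoms
        S = np.union1d(S, np.argsort(corr)[-alpha:])   # forward: add alpha atoms
        x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        drop = np.argsort(np.abs(x_S))[:beta]          # backward: prune the weakest
        S = np.delete(S, drop)
        x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ x_S
        if np.linalg.norm(r) < tol:                    # stopping condition
            break
    x = np.zeros(n)
    x[S] = x_S
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80)) / np.sqrt(40)        # 40 measurements, 80 dims
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]   # 3-sparse signal
y = A @ x_true
x_hat = fb_omp(A, y)
print(np.linalg.norm(y - A @ x_hat) < 1e-5)  # True: measurements are reproduced
```

Note that the net support growth per iteration is `alpha - beta`, which is why `beta` must stay smaller than `alpha`.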
The human speech signal is an acoustic wave that conveys information about the words or message being spoken, the identity of the speaker, the language spoken, the presence and type of speech pathologies, and the physical and emotional state of the speaker. Speech under physical task stress deviates from speech in the neutral state and thus degrades the performance of speech systems. In this paper we characterize voice samples under physical stress and compare their acoustic parameters with those of neutral-state voice. Traditional voice measures, glottal flow parameters, mel frequency cepstrum coefficients and energy in various frequency bands are used for this characterization. A t-test is performed to check the statistical significance of the parameters, and significant variations are observed between the two states. Pitch, intensity and energy values are higher for physically stressed voice, whereas glottal parameter values decrease. Cepstrum coefficients shift up from those of neutral-state voice samples, and energy in the lower frequency bands is more sensitive to physical stress. By analyzing the unwanted effect of physical stress on voice, this study can improve the performance of various speech processing applications.
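The significance testing mentioned above can be illustrated with Welch's t statistic on two synthetic parameter samples (the pitch values below are made-up stand-ins for illustration, not data from the paper):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (a.size - 1) + vb**2 / (b.size - 1))
    return t, df

rng = np.random.default_rng(0)
neutral = 120 + 8 * rng.standard_normal(30)    # hypothetical pitch (Hz), neutral state
stressed = 135 + 8 * rng.standard_normal(30)   # hypothetical pitch under physical stress
t, df = welch_t(stressed, neutral)
print(f"t = {t:.2f}, df = {df:.1f}")           # large positive t: pitch shifts up
```

A t value well above the critical value for the computed degrees of freedom indicates a statistically significant difference between the two states.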
In this paper, an architecture for the Fast Fourier Transform over the Galois Field GF(2^4) is described. The method used is cyclotomic decomposition. Cyclotomic Fast Fourier Transforms (CFFTs) are preferred due to their low multiplicative complexity; the approach decomposes an arbitrary polynomial into a sum of linearized polynomials. In addition, a Common Subexpression Elimination (CSE) algorithm is used to reduce the additive complexity of the architecture, yielding a design with reduced operational complexity.
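Arithmetic in GF(2^4) underlies every multiplication in such a transform. A small sketch of field multiplication follows (the primitive polynomial x^4 + x + 1 is a common choice assumed here, not taken from the paper):

```python
def gf16_mul(a, b, poly=0b10011):
    """Multiply two GF(2^4) elements: carry-less (XOR-based) multiplication
    reduced modulo the field polynomial x^4 + x + 1 (0b10011)."""
    p = 0
    while b:
        if b & 1:
            p ^= a          # XOR is addition in characteristic 2
        b >>= 1
        a <<= 1
        if a & 0b10000:     # degree reached 4: reduce modulo the field polynomial
            a ^= poly
    return p

# alpha = x (value 2) is primitive: it generates all 15 nonzero field elements
powers, x = [], 1
for _ in range(15):
    powers.append(x)
    x = gf16_mul(x, 2)
print(sorted(powers) == list(range(1, 16)))  # True: 2 generates GF(16)*
```

Precomputing such power (log/antilog) tables is a standard way to turn field multiplications into additions of exponents in hardware or software implementations.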
The main target of stereo matching algorithms is to find the three-dimensional (3D) distance, or depth, of objects from a stereo pair of images. Depth information can be derived from a disparity map of the same scene. Computer vision applications include people tracking, gesture recognition, industrial automation and inspection, security and biometrics, three-dimensional modeling, web and cloud services, and aerial surveys. Many categories of stereo algorithms exist for finding the disparity or depth. This paper presents a stereo matching algorithm to obtain, enhance and evaluate the depth map. The hybrid mathematical pipeline of the algorithm consists of color conversion, block matching, guided filtering, minimum disparity assignment, perimeter computation, zero depth assignment, hole filling combined with morphological operators, and finally non-linear spatial filtering. The algorithm produces a low-noise, reliable, smooth and efficient depth map. Results are compared against ground truth images using the Structural Similarity Index Map (SSIM) and Peak Signal-to-Noise Ratio (PSNR).
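The block matching stage of such a pipeline can be illustrated with a naive sum-of-absolute-differences (SAD) search (window size and disparity range are arbitrary choices for the sketch; the guided filtering and post-processing steps are omitted):

```python
import numpy as np

def block_match(left, right, max_disp=8, block=5):
    """Naive SAD block matching: for each left-image pixel, test each candidate
    disparity d by comparing a window against the right image shifted by d,
    and keep the disparity with the lowest absolute-difference cost."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# synthetic pair: the right view is the left view shifted left by 3 pixels
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)     # true disparity = 3 (away from the wrap)
d = block_match(left, right)
print((d[5:15, 15:30] == 3).all())    # True: interior pixels recover the shift
```

Real depth maps are then refined exactly because raw SAD output is noisy at textureless regions and occlusions, which is what the filtering and hole-filling stages address.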
The main contribution of this paper is the use of compressive sensing (CS) theory in a crypto-steganography system to increase both security and capacity while preserving the imperceptibility of the cover image. For the CS implementation, the discrete cosine transform (DCT) is used as the sparse domain and a random sensing matrix as the measurement domain. Seven MRI images are considered as secrets and seven grayscale test images as covers, and three CS sampling rates are used. The performance of seven CS recovery algorithms, in terms of image imperceptibility, achieved peak signal-to-noise ratio (PSNR) and computation time, is compared with other references. We show that the proposed CS-based crypto-steganography system works properly even when the secret image is larger than the cover image.
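The CS front end this scheme builds on can be sketched as follows: a signal sparse in the DCT domain is compressed by a random Gaussian sensing matrix (sizes, sparsity and sampling rate here are illustrative, not the paper's settings; the measurements `y` would then be hidden in the cover image and later recovered with a sparse solver):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, used as the sparsifying basis."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

n, m = 64, 32                          # signal length, number of measurements
rng = np.random.default_rng(2)
C = dct_matrix(n)
coeffs = np.zeros(n)
coeffs[[1, 5, 9, 20]] = [3.0, -2.0, 1.5, 1.0]     # 4-sparse in the DCT domain
signal = C.T @ coeffs                  # inverse DCT (C is orthonormal)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
y = Phi @ signal                       # m = n/2 compressed measurements
print(y.shape)  # (32,)
```

Because the sensing matrix acts as a key, an eavesdropper extracting `y` without it cannot reconstruct the secret, which is the source of the added security.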
Image fusion is a popular image processing application that merges two or more images into one. The merged image has improved visual quality and carries more information content. The present work introduces a new image fusion method in the complex wavelet domain. The proposed fusion rule is based on a level-dependent threshold, where the absolute difference of a wavelet coefficient from the threshold value is taken as the fusion criterion. This absolute difference represents variation in image intensity, which corresponds to the salient features of the image; hence, for fusion, the coefficients farthest from the threshold value are selected. The motivation for using the dual tree complex wavelet transform is the failure of the real-valued wavelet transform in several respects: the good directional selectivity, availability of phase information and approximately shift-invariant nature of the dual tree complex wavelet transform make it suitable for image fusion and help produce a high-quality fused image. To prove the strength of the proposed method, it has been compared with several spatial, pyramidal, wavelet and new-generation wavelet based fusion methods. The experimental results show that the proposed method outperforms the other state-of-the-art methods visually as well as in terms of standard deviation, mutual information, edge strength, fusion factor, sharpness and average gradient.
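A toy version of the fusion rule (keep, at each position, the coefficient farther from a level-dependent threshold) can be written down directly. A one-level 1-D Haar transform stands in for the dual tree complex wavelet transform, which is beyond a short sketch, and the median-based threshold is an assumption:

```python
import numpy as np

def fuse(cA, cB, thr):
    """Fusion rule from the abstract: at each position keep the coefficient
    whose absolute distance from the (level-dependent) threshold is larger."""
    pick_a = np.abs(cA - thr) >= np.abs(cB - thr)
    return np.where(pick_a, cA, cB)

def haar1d(x):
    """One-level Haar analysis: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar1d(a, d):
    """One-level Haar synthesis (perfect reconstruction)."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(3)
imgA = rng.random(16); imgB = rng.random(16)   # 1-D "images" for brevity
aA, dA = haar1d(imgA); aB, dB = haar1d(imgB)
thr = np.median(np.abs(np.concatenate([dA, dB])))  # assumed per-level threshold
fused = ihaar1d(fuse(aA, aB, 0.0),             # approximations: larger magnitude wins
                fuse(dA, dB, thr))             # details: farther from threshold wins
print(fused.shape)  # (16,)
```

In the paper's setting the same selection would be applied per subband and per level of the complex wavelet decomposition before inverting the transform.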