IJIGSP Vol. 9, No. 5, May 2017
Cover page and Table of Contents: PDF (size: 246KB)
REGULAR PAPERS
Automatic segmentation and detection of brain tumors is a notoriously complicated problem in Magnetic Resonance Imaging (MRI). Existing state-of-the-art segmentation methods and techniques are limited in detecting tumors in multimodal brain MRI. This work therefore addresses the accurate segmentation and detection of tumors in multimodal brain MRI, with a focus on improving automatic segmentation results. It analyses the segmentation performance of two existing state-of-the-art methods, the improved Fuzzy C-Means Clustering (FCMC) method and the marker-controlled Watershed method, and proposes a method that amalgamates their segmentation results to carry out accurate brain tumor detection and enhance the segmentation results. The performance of the proposed method is evaluated with assorted performance metrics, viz. segmentation accuracy, sensitivity, and specificity. The comparative performance of the proposed method, the FCMC method, and the Watershed method is demonstrated on real and benchmark multimodal brain MRI datasets (FLAIR, T1, and T2 MRI), and the experimental results show that the proposed method yields better segmentation and detection of tumors in multimodal brain MR images.
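A minimal sketch of the fusion idea the abstract describes: cluster intensities with plain fuzzy c-means, derive watershed markers from the FCM result, and fuse the two masks. This is not the authors' implementation; the cluster count, fuzziness m, the "brightest cluster = tumor" heuristic, and the intersection fusion rule are all illustrative assumptions.

```python
# Hedged sketch: fusing FCM clustering with marker-controlled watershed
# for tumor-mask extraction. Parameters and the fusion rule are assumptions.
import numpy as np
from skimage.segmentation import watershed
from skimage.filters import sobel

def fcm_intensity(img, c=4, m=2.0, iters=50, eps=1e-5):
    """Plain fuzzy c-means on pixel intensities; returns hard labels."""
    x = img.reshape(-1).astype(float)
    u = np.random.dirichlet(np.ones(c), size=x.size)      # memberships
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u_new = 1.0 / (d ** (2 / (m - 1)))
        u_new /= u_new.sum(1, keepdims=True)
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return u.argmax(1).reshape(img.shape), centers

def segment_tumor(img):
    labels, centers = fcm_intensity(img)
    fcm_mask = labels == centers.argmax()      # brightest cluster ~ tumor (FLAIR/T2 assumption)
    # Marker-controlled watershed on the gradient image
    markers = np.zeros(img.shape, dtype=int)
    markers[img < np.percentile(img, 40)] = 1  # background marker
    markers[fcm_mask] = 2                      # foreground marker seeded by FCM
    ws_mask = watershed(sobel(img), markers) == 2
    return fcm_mask & ws_mask                  # fuse: keep pixels both methods agree on
```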
Thresholding in the wavelet domain has shown very high performance in image denoising, particularly for homogeneous scenes. Conversely, for relatively non-homogeneous scenes, it often causes the loss of some true coefficients, smoothing away the details and features of the thresholded image. To overcome this shortcoming, this paper introduces a new alternative that combines the advantages of both spatial filtering and wavelet thresholding, removing the noise while preserving the features of the considered image. First, the degraded image is decomposed into wavelet coefficients via a 2-level 2D-DWT. Then, the finest detail sub-bands, which are most likely due to noise, are thresholded to maximally cancel the noise contribution. The remaining noise shared across the coarse detail sub-bands (LH2, HL2, and HH2) is cleaned by filtering these sub-bands with an adaptive Wiener filter instead of thresholding them, thereby avoiding over-smoothing of the image. Finally, a joint bilateral filter (JBF) is applied to ensure the preservation of the image features. Experimental results show notable performance of the proposed scheme compared to recent state-of-the-art schemes, both visually and in terms of MSE, PSNR, and correlation coefficient.
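A sketch of the described pipeline under stated assumptions: the universal threshold on the finest bands and the plain bilateral filter standing in for the joint bilateral filter (a JBF needs a guidance image and lives in opencv-contrib's cv2.ximgproc) are my simplifications, not the paper's exact choices.

```python
# Hedged sketch: 2-level DWT, soft-threshold the finest detail sub-bands,
# Wiener-filter the coarse detail sub-bands (LH2, HL2, HH2), reconstruct,
# then an edge-preserving pass. Threshold rule and final filter are assumed.
import numpy as np
import pywt
import cv2
from scipy.signal import wiener

def hybrid_denoise(img, wavelet="db8"):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=2)
    cA2, (lh2, hl2, hh2), (lh1, hl1, hh1) = coeffs
    # Noise sigma from the finest diagonal band (robust median estimator)
    sigma = np.median(np.abs(hh1)) / 0.6745
    t = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold
    fine = tuple(pywt.threshold(b, t, mode="soft") for b in (lh1, hl1, hh1))
    # Adaptive Wiener filtering of the coarse detail sub-bands
    coarse = tuple(wiener(b, mysize=3) for b in (lh2, hl2, hh2))
    rec = pywt.waverec2([cA2, coarse, fine], wavelet)
    rec = np.clip(rec, 0, 255).astype(np.uint8)
    # Plain bilateral filter used here as a stand-in for the joint bilateral
    return cv2.bilateralFilter(rec, d=5, sigmaColor=25, sigmaSpace=25)
```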
The aim of a telemedicine system is to diagnose patients remotely, including healthcare provision to patients in far-flung areas. Remote diagnosis depends mainly on bandwidth, which is essential for communication over networks. Such communications have to be carefully monitored, because sending data across the network at random and loading packets as a direct camera stream would choke the bandwidth. It is therefore extremely important to compress the data so that bandwidth usage is minimized, while maintaining the best possible quality. This retention of good quality alongside minimal data usage is achieved by compressing the streams obtained from the cameras and decompressing the data at the other end. This purpose was previously achieved with the H.264 codec. Our major target was to upgrade the existing codec by introducing the latest one, H.265. The libde265 (decoder) and x265 (encoder) libraries have been used to develop the H.265 codec. H.265 is an advanced codec with better quality and the ability to achieve far better compression than its predecessor. It is shown through the algorithm and coding of H.265 that it has better compression ability while quality is maintained during video transmission. This is highly desirable for telemedicine, as it can improve healthcare services by easing the transmission of video data from the patient's end to the doctor's end.
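For a sense of the H.264 vs. H.265 trade-off, here is a hedged usage sketch that drives the x264/x265 encoders through ffmpeg. The paper builds on the x265 and libde265 libraries directly; going through ffmpeg, as well as the file names and CRF values, are my simplifications.

```python
# Hedged sketch: re-encoding a consultation video with H.264 and H.265
# via ffmpeg's libx264/libx265 wrappers. File names are illustrative.
import subprocess

def encode(src, dst, codec="libx265", crf=28):
    """Re-encode a video; lower CRF = higher quality and larger files."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", codec, "-crf", str(crf), dst],
        check=True,
    )

encode("consult.mp4", "consult_h264.mp4", codec="libx264", crf=23)
encode("consult.mp4", "consult_h265.mp4", codec="libx265", crf=28)
# At roughly comparable visual quality, H.265 output is typically much
# smaller, which is what makes it attractive for low-bandwidth telemedicine.
```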
In the digital world, image quality is of widespread importance in several areas of image application, such as the medical field, aerospace and satellite imaging, underwater imaging, etc. The image obtained is required to be sharp and clear, without any artifacts; moreover, on zooming, the image should not lose any of its information. Focusing on these points, the Discrete Wavelet Transform has been applied in combination with different interpolation methodologies to reconstruct images via zooming, and their PSNR values have been obtained. The research gave rise to a novel image zooming and reconstruction technique that improves the quality of the enhanced images. This paper presents an algorithm that enhances a given input image in the wavelet domain, with results validated by PSNR values. The proposed algorithm is further applied to contrast-equalized images, yielding improved PSNR values and enhanced images. The method is compared with existing approaches, verifying that the proposed technique is a better approach for producing good-quality zoomed images.
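A minimal sketch of the general DWT-plus-interpolation zooming idea: the input is treated as the approximation (LL) band of the target image, detail bands are estimated by bicubic upscaling of the input's own details, and the inverse DWT yields the 2x image. The exact interpolation pairing used in the paper is not reproduced; wavelet choice and detail estimation are assumptions.

```python
# Hedged sketch: DWT-based 2x zooming with bicubic detail estimation,
# plus the PSNR metric used for evaluation. Not the paper's exact method.
import numpy as np
import pywt
import cv2

def dwt_zoom2x(img):
    lr = img.astype(float)
    # Estimate detail bands from the LR image's own 1-level decomposition,
    # upscaled to the LL-band size of the target image (= lr's own size).
    _, (ch, cv_, cd) = pywt.dwt2(lr, "db1")
    size = (lr.shape[1], lr.shape[0])                   # (width, height)
    details = tuple(cv2.resize(b, size, interpolation=cv2.INTER_CUBIC)
                    for b in (ch, cv_, cd))
    hr = pywt.idwt2((lr * 2, details), "db1")           # *2 offsets Haar scaling
    return np.clip(hr, 0, 255)

def psnr(ref, test):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```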
Obstacle detection is the process by which upcoming objects in the path are detected and collision with them is avoided by some form of signalling to the visually impaired person. In this review paper we present a comprehensive and critical survey of image processing techniques for detecting obstacles, such as vision-based methods, ground plane detection, and feature extraction. Two types of vision-based techniques, namely (a) the monocular vision based approach and (b) the stereo vision based approach, are discussed, along with their sub-types. The survey analyses the associated work reported in the literature on SURF and SIFT features, monocular vision based approaches, texture features, and ground plane obstacle detection.
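To make the feature-based branch of the survey concrete, here is a textbook-style sketch (not a reimplementation of any surveyed paper): SIFT keypoints matched across consecutive monocular frames, where matched points whose scale grows are flagged as belonging to an approaching obstacle. The growth threshold is an assumption.

```python
# Hedged sketch: scale-expansion cue for approaching obstacles from
# SIFT matches between two monocular frames (grayscale inputs).
import cv2

def approaching_keypoints(prev_gray, cur_gray, growth=1.2):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    hits = []
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.75 * n.distance:          # Lowe's ratio test
            if kp2[m.trainIdx].size > growth * kp1[m.queryIdx].size:
                hits.append(kp2[m.trainIdx].pt)     # expanding -> approaching
    return hits                                     # cluster/alert downstream
```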
Accurate recognition and tracking of human faces are indispensable in applications like face recognition, forensics, etc. The need to enhance low-resolution faces for such applications has gathered more attention in the past few years. To recognize faces from surveillance video footage, the images need to be of a significantly recognizable size. Image Super-Resolution (SR) algorithms aid in enlarging, or super-resolving, a captured low-resolution image into a high-resolution frame, thereby improving the visual quality of the image for recognition. This paper discusses some of the recent methodologies in face super-resolution (FSR), along with an analysis of their performance on some benchmark databases. Learning based methods are by far the most widely used; sparse representation, neighborhood embedding, and Bayesian learning are all different approaches to learning based methods. The review demonstrates that, in general, learning based techniques provide better accuracy and performance even though their computational requirements are high, and that neighbor embedding performs best among them. Future research on learning based techniques, such as neighbor embedding combined with sparse representation, may lead to approaches with reduced complexity and better performance.
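A hedged sketch of the neighbor-embedding idea highlighted above (in the style of Chang et al.'s locally linear embedding formulation): each low-resolution patch is expressed as a weighted combination of its K nearest dictionary patches, and the same weights are applied to the paired high-resolution patches. Dictionary construction and patch blending are omitted; shapes, names, and the regularizer are assumptions.

```python
# Hedged sketch: neighbor-embedding super-resolution of a single patch.
import numpy as np

def ne_super_resolve_patch(lr_patch, lr_dict, hr_dict, k=5, reg=1e-4):
    """lr_dict: (N, d_lr) LR training patches; hr_dict: (N, d_hr) HR pairs."""
    d = np.linalg.norm(lr_dict - lr_patch, axis=1)
    idx = np.argsort(d)[:k]                         # K nearest neighbors
    Z = lr_dict[idx] - lr_patch                     # centered neighborhood
    G = Z @ Z.T + reg * np.eye(k)                   # regularized Gram matrix
    w = np.linalg.solve(G, np.ones(k))              # LLE reconstruction weights
    w /= w.sum()                                    # weights sum to one
    return w @ hr_dict[idx]                         # same weights on HR pairs
```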
Pose variation is one of the main difficulties faced by present automatic face recognition systems. Due to pose variations, the feature vectors of the same person may vary more than those of different identities. This paper aims to generate a virtual frontal view from a corresponding non-frontal face image. The approach presented here is based on the assumption that an approximate mapping exists between a non-frontal posed image and its corresponding frontal view. Once this mapping is calculated, estimating the frontal view becomes a regression problem. In the present approach, a non-linear mapping, kernel extreme learning machine (KELM) regression, is used to generate a virtual frontal face image from its non-frontal counterpart; the kernel compensates for the non-linear shape of the face. The studies are performed on the GTAV database with 5 posed images and compared with a linear regression approach.
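A minimal sketch of KELM regression as it would be used for frontal-view synthesis: inputs are vectorized non-frontal faces, targets the paired frontal faces, and the output weights follow the standard KELM closed form beta = (I/C + Omega)^-1 T. The RBF kernel, gamma, and C are illustrative choices, not the paper's settings.

```python
# Hedged sketch: kernel extreme learning machine (KELM) regression.
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELMRegressor:
    def fit(self, X, T, C=100.0, gamma=1e-3):
        self.X, self.gamma = X, gamma
        omega = rbf_kernel(X, X, gamma)                 # training kernel matrix
        # Output weights: beta = (I/C + Omega)^-1 T  (standard KELM form)
        self.beta = np.linalg.solve(np.eye(len(X)) / C + omega, T)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Usage: rows of X_pose are vectorized posed faces, rows of X_front the
# matching frontal views; predict() maps a new posed face to frontal.
# model = KELMRegressor().fit(X_pose, X_front)
# frontal_hat = model.predict(x_new_pose[None, :])
```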