IJIGSP Vol. 7, No. 11, Oct. 2015
REGULAR PAPERS
Following the "on condition maintenance" approach used for extending service life of an aircraft one of the major tasks is a nondestructive testing of its critical elements. Considering that many of elements of operated aircraft are manufactured from polymeric composites a special attention should be paid for diagnosing these elements due to their high vulnerability to barely visible impact damage. One of the primary testing techniques used for inspection of aircraft composite elements is an ultrasonic C-Scan technique which application results in planar images of emitted/received wave attenuation and a time of flight map. Due to the complex nature of barely visible impact damage occurrence it is difficult to analyze resulting C-Scan images. Therefore, using assistance based on image processing may help with "big–data" analysis of collected images. In this paper the authors proposed the image processing algorithm for semi-automatic evaluation of such damage distribution in aircraft composite structures. The algorithm is based on multilevel Otsu thresholding and morphological processing. Using the proposed algorithm an extraction of damage visualization from a C-Scan image as well as its characterization and 3D representation is possible. The developed approach will allow supporting diagnosing of composite structures with impact damage using C-Scan technique.
Several commercial algorithms have been developed for color enhancement of digital images; however, none of them can process a digital image with complete precision. Therefore, this article focuses on pixel-by-pixel processing, especially in the field of color enhancement of digital images. The enhancement is performed on each pixel by taking information from its neighborhood and is implemented using a clock algorithm. The clock-algorithm enhancement operates on hexagonally sampled pixels, motivated by the human visual system, instead of square ones. Each pixel is enhanced both locally and globally. The local enhancement uses wavelet normalization, which yields different bands of information by localizing the signal in both the time and frequency domains. The global enhancement is obtained through a Gabor filter, which extracts region-based information; the combined information is used to recognize the region of interest, and the Gabor filter also agrees with biological findings on the visual system. The enhancement improves the visibility of fine details, and finally the enhanced image is obtained.
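A rough sketch of the local/global combination described above, on a standard square lattice; the paper's hexagonal sampling and clock algorithm are not reproduced here, and the wavelet, Gabor frequency and blending weights are illustrative assumptions.

```python
# Sketch: local wavelet-normalization enhancement combined with a global
# Gabor-filter response (square-lattice stand-in for the hexagonal scheme).
import numpy as np
import pywt
from skimage import io, filters, exposure

img = io.imread("input.png", as_gray=True).astype(float)  # hypothetical input

# Local step: wavelet decomposition with per-band normalization of details.
coeffs = pywt.wavedec2(img, "db4", level=2)
approx, details = coeffs[0], coeffs[1:]
norm_details = [tuple(d / (np.abs(d).max() + 1e-8) for d in lvl) for lvl in details]
local_enh = pywt.waverec2([approx] + norm_details, "db4")

# Global step: Gabor filtering to emphasize oriented, region-level structure.
gabor_real, _ = filters.gabor(img, frequency=0.2)

# Combine the two cues and rescale for display; the 0.7/0.3 mix is arbitrary.
combined = 0.7 * local_enh[: img.shape[0], : img.shape[1]] + 0.3 * np.abs(gabor_real)
enhanced = exposure.rescale_intensity(combined, out_range=(0, 1))
```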
In many fields, images have become a useful carrier of data, and medical images are one example. Diagnosis depends on the skills of the doctors and on image clarity. In practice, most medical images contain noise and blur, which reduces image quality and causes difficulties for doctors. Most efforts to improve the quality of medical images involve deblurring or denoising. This is a difficult problem in medical image processing because edge features must be preserved and loss of information avoided; the case of a medical image containing noise combined with blur is even more difficult. In this paper, we propose a method for improving the quality of medical images degraded by a combination of blur and noise. The proposed method is divided into two steps: denoising and deblurring. We use the curvelet transform combined with Bayesian thresholding for the denoising step and the augmented Lagrangian method for the deblurring step. To demonstrate the superiority of the proposed method, we compare the results with other recent methods available in the literature.
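A sketch of the two-step denoise-then-deblur pipeline only. The paper's curvelet transform with Bayesian thresholding is stood in by wavelet BayesShrink, and the augmented Lagrangian deblurring by Richardson-Lucy deconvolution; both substitutions, the input file name and the Gaussian blur kernel are assumptions, not the authors' method.

```python
# Sketch: Step 1 denoise, Step 2 deblur (stand-in components, see note above).
import numpy as np
from skimage import io, restoration

noisy_blurred = io.imread("mri_slice.png", as_gray=True).astype(float)  # hypothetical

# Step 1: denoising (BayesShrink wavelet thresholding as a curvelet stand-in).
denoised = restoration.denoise_wavelet(noisy_blurred, method="BayesShrink",
                                       mode="soft", rescale_sigma=True)

# Step 2: deblurring with an assumed Gaussian point-spread function.
x = np.arange(-3, 4)
g = np.exp(-x**2 / 2.0)
psf = np.outer(g, g)
psf /= psf.sum()
deblurred = restoration.richardson_lucy(denoised, psf, 20)
```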
With the recent development of image-manipulation software and the widespread use of the Internet, it has become very difficult to protect precious images that need to be secured so that they survive various image-modification attacks. This paper presents a new technique for robust and efficient twin blind digital watermarking using 2-D Walsh codes and the Discrete Cosine Transform. An authentication matching process is introduced during extraction to provide extra security for the host image. Both watermarks are embedded into the host image through Walsh-code conversion. In this technique, embedding and extraction of the watermarks are simpler than with the other transforms previously used. The proposed algorithm uses the YCbCr colour components of colour images in the DCT domain with low-frequency coefficients. In the first step the principal watermark, i.e. a handwritten signature, is embedded through 2-D Walsh coding, and then the secondary watermark, i.e. a biometric fingerprint, is embedded into the first watermarked image, also through 2-D Walsh coding. De-watermarking is done by checking the authentication through biometric fingerprint matching. The technique is assessed by analyzing performance parameters such as SSIM, PSNR and NC. Further evaluation is made under various attacks using the StirMark tool. The results show that, by utilizing the 2-D Walsh coding technique, better robustness is maintained, and the proposed technique survives attacks such as JPEG compression, median filtering and noise.
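A minimal sketch of Walsh-coded embedding into low-frequency DCT coefficients of the luminance channel, simplified to a single short bit sequence; the embedding positions, gain and single-watermark simplification are illustrative assumptions rather than the paper's exact twin-watermark scheme.

```python
# Sketch: spread watermark bits with Walsh-Hadamard codes and embed them
# into low-frequency DCT coefficients of the Y channel of a YCbCr image.
import numpy as np
from scipy.fftpack import dct, idct
from scipy.linalg import hadamard
from skimage import io, color

def dct2(a):  return dct(dct(a, norm="ortho", axis=0), norm="ortho", axis=1)
def idct2(a): return idct(idct(a, norm="ortho", axis=0), norm="ortho", axis=1)

rgb = io.imread("host.png")[:, :, :3] / 255.0  # hypothetical host image
ycbcr = color.rgb2ycbcr(rgb)
y = ycbcr[:, :, 0]

# Spread 8 watermark bits with rows of an 8x8 Walsh-Hadamard matrix.
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
walsh = hadamard(8)
spread = (2 * bits - 1) @ walsh              # +/-1 spread sequence, length 8

# Embed into 8 low-frequency coefficients of the Y-channel DCT.
alpha = 2.0                                   # embedding strength (assumed)
Y = dct2(y)
Y[1, 1:9] += alpha * spread
ycbcr[:, :, 0] = idct2(Y)
watermarked = color.ycbcr2rgb(ycbcr)
```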
In this paper, we propose a novel image enhancement technique based on M-band wavelets. Conventional image enhancement algorithms opt for contrast enhancement using equalization techniques, and contrast enhancement is one of the most important issues in image enhancement. A large difference in luminance reflected from two adjacent surfaces produces a good-contrast image, making an object more distinguishable from other objects in the background. Often, owing to excessive contrast, minute details of the image are lost, which cannot be tolerated in biomedical images. Moreover, such methods do not account for the noise embedded in the images, and denoising with conventional filters blurs the image. The proposed algorithm not only denoises the image while retaining high-frequency edges, but also increases the contrast and generates a high-resolution image. Parameters such as MSE and PSNR are used to compare the images enhanced by the proposed algorithm with those produced by conventional techniques.
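A sketch of wavelet-domain enhancement in the spirit described above: denoise by soft-thresholding the detail subbands and boost contrast in the approximation subband. A dyadic (2-band) wavelet stands in for the paper's M-band filter bank, and the universal threshold and use of CLAHE for contrast are assumptions.

```python
# Sketch: wavelet-domain denoising plus contrast boost of the approximation.
import numpy as np
import pywt
from skimage import io, exposure

img = io.imread("biomedical.png", as_gray=True).astype(float)  # hypothetical

coeffs = pywt.wavedec2(img, "sym8", level=2)
approx, details = coeffs[0], coeffs[1:]

# Denoise: soft-threshold the high-frequency detail coefficients.
sigma = np.median(np.abs(details[-1][-1])) / 0.6745      # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(img.size))              # universal threshold
details = [tuple(pywt.threshold(d, thr, mode="soft") for d in lvl) for lvl in details]

# Contrast: adaptive histogram equalization of the low-frequency approximation.
a_min, a_max = approx.min(), approx.max()
approx_eq = exposure.equalize_adapthist((approx - a_min) / (a_max - a_min + 1e-8))
approx = approx_eq * (a_max - a_min) + a_min

enhanced = pywt.waverec2([approx] + details, "sym8")[: img.shape[0], : img.shape[1]]
```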
Fingerprints provide one of the finest and cheapest recognition systems because unique features such as bifurcations and terminations are easy to extract. However, the quality of fingerprint data is easily degraded by skin dryness, wetness, wounds and other types of noise. Hence, denoising of fingerprint images is a vital step in an automatic fingerprint recognition system. In this paper, the removal of noise from fingerprint images using the stationary wavelet transform and adaptive thresholding is analysed. The proposed algorithm is developed in MATLAB (R2010b) and tested on fingerprint images collected from the FVC2004 database and an R303A optical scanner. The performance of the method is analysed by computing quality metrics such as Peak Signal-to-Noise Ratio, Universal Quality Index, Structural Similarity and Multi-Scale Structural Similarity (MS-SSIM). The quality of the fingerprint images after noise removal confirms that the proposed method outperforms conventional techniques.
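The paper works in MATLAB; the following is a Python sketch of the same idea, i.e. stationary (undecimated) wavelet decomposition with a subband-adaptive soft threshold. The wavelet choice, level count and BayesShrink-style threshold are assumptions, not the authors' exact settings.

```python
# Sketch: fingerprint denoising with the stationary wavelet transform (SWT)
# and an adaptive soft threshold per detail subband.
import numpy as np
import pywt
from skimage import io

fp = io.imread("fingerprint.png", as_gray=True).astype(float)  # hypothetical

# pywt.swt2 requires dimensions divisible by 2**level; crop accordingly.
level = 2
h, w = (fp.shape[0] // 4) * 4, (fp.shape[1] // 4) * 4
fp = fp[:h, :w]

coeffs = pywt.swt2(fp, "db4", level=level)

# Estimate the noise level from the finest diagonal detail band.
sigma = np.median(np.abs(coeffs[-1][1][-1])) / 0.6745

denoised_coeffs = []
for approx, (cH, cV, cD) in coeffs:
    new_details = []
    for d in (cH, cV, cD):
        # Adaptive (BayesShrink-style) threshold for this subband.
        sigma_x = np.sqrt(max(np.var(d) - sigma**2, 1e-8))
        thr = sigma**2 / sigma_x
        new_details.append(pywt.threshold(d, thr, mode="soft"))
    denoised_coeffs.append((approx, tuple(new_details)))

denoised = pywt.iswt2(denoised_coeffs, "db4")
```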
Ophthalmology is the study of the structures, functions, treatment and disorders of the eye. Computer-aided analysis of retinal images is still an open research area, and numerous efforts have been made to automate it. This paper presents a review of existing research on the detection of anatomical structures in the retina and of lesions for the diagnosis of diabetic retinopathy (DR). The research on detection of anatomical structures is further divided into subcategories: vessel segmentation and vessel centerline extraction, optic disc segmentation and localization, and fovea/macula detection and extraction. Research works in each category are reviewed, highlighting the techniques employed and comparing the reported performance figures, and the shortcomings of the various approaches are brought out. The major observations are as follows. Most vessel detection algorithms fail to extract small, thin vessels with low contrast. It is difficult to detect vessels where close vessels merge, where small vessels are missing, at the optic disc, and in regions of pathology. Machine-learning-based approaches to blood vessel tracing require long processing times. It is difficult to determine the optic disc radius or boundary with simple blood vessel tracing. Automatic detection of the fovea and extraction of the macular region are complicated by non-uniform illumination during imaging and by diseases of the eye. Techniques requiring prior knowledge lead to complexity. Most lesion detection algorithms underperform due to wide variations in the color of fundus images arising from variations in the degree of pigmentation and the presence of the choroid.