IJIGSP Vol. 10, No. 3, 8 Mar. 2018
Index Terms: Target Recognition, Mutual Information, Hough Transform
This paper presents a new automatic target recognition approach based on the Hough transform and mutual information. The Hough transform groups the extracted edge points of the edge images into an appropriate set of lines, which supports the feature extraction and matching processes for both the target and stored database images. This gives an initial indication of the correspondence between the target image and its matching database image. Mutual information is then used to confirm the recognition of the target image and to verify it against its corresponding database image. The proposed recognition approach consists of five stages: edge detection with the Sobel edge detector, thinning as a morphological operation, Hough transformation, a matching process, and finally measurement of the mutual information between the target image and the available database images. The experimental results show that the proposed approach achieves a higher and more accurate recognition rate than other recent recognition techniques based on stable edge-weighted HOG.
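As a rough illustration of the five-stage pipeline described above, the minimal sketch below chains Sobel edge detection, morphological thinning, a probabilistic Hough transform, a crude line-count match, and a histogram-based mutual-information check using scikit-image and NumPy. The specific function choices, thresholds, and the line-matching score are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of the five-stage pipeline (assumptions: scikit-image/NumPy,
# probabilistic Hough, a crude line-count match, histogram-based MI).
import numpy as np
from skimage import io, filters, morphology, transform

def extract_lines(image_gray, edge_thresh=0.1):
    """Stages 1-3: Sobel edge map, morphological thinning, Hough lines."""
    edges = filters.sobel(image_gray) > edge_thresh          # 1. Sobel edges
    thinned = morphology.thin(edges)                          # 2. thinning
    return transform.probabilistic_hough_line(                # 3. Hough transform
        thinned, threshold=10, line_length=30, line_gap=3)

def mutual_information(a, b, bins=64):
    """Stage 5: mutual information between two equal-sized grayscale images."""
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def recognize(target_path, database_paths):
    """Stage 4-5: shortlist database images by line similarity, verify with MI."""
    target = io.imread(target_path, as_gray=True)
    target_lines = extract_lines(target)
    best, best_mi = None, -np.inf
    for path in database_paths:
        candidate = io.imread(path, as_gray=True)
        candidate = transform.resize(candidate, target.shape, anti_aliasing=True)
        cand_lines = extract_lines(candidate)
        # 4. crude line-based matching: reject candidates whose line count
        #    differs too much from the target's (illustrative criterion only)
        if abs(len(cand_lines) - len(target_lines)) > 0.5 * max(len(target_lines), 1):
            continue
        mi = mutual_information(target, candidate)            # 5. verify with MI
        if mi > best_mi:
            best, best_mi = path, mi
    return best, best_mi
```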
Ramy M. Bahy," New Automatic Target Recognition Approach based on Hough Transform and Mutual Information", International Journal of Image, Graphics and Signal Processing(IJIGSP), Vol.10, No.3, pp. 18-24, 2018. DOI: 10.5815/ijigsp.2018.03.03
[1]B. J. Schachter, Automatic Target Recognition, 2nd Edition, Vol. TT113, ISBN: 9781510611276, March 2017.
[2]J.J. Yebes, L.M. Bergasa, M. García-Garrido, “Visual object recognition with 3D-aware features in KITTI urban scenes,” Sensors 15 (4): 9228–9250, 2015.
[3]B. Li, Y. Yao, “An edge-based optimization method for shape recognition using atomic potential function,” Eng. Appl. Artif. Intell. 35, 14–25, 2014.
[4]Ramesh K N, Chandrika N, Omkar S N, M B Meenavathi and Rekha V, “Detection of Rows in Agricultural Crop Images Acquired by Remote Sensing from a UAV,” I.J. Image, Graphics and Signal Processing, vol. 11, pp. 25-31, 2016.
[5]P. K. Mishra and G. P. Saroha, “Study on Classification for Static and Moving Objects in Video Surveillance System,” I.J. Image, Graphics and Signal Processing, vol. 5, pp. 76-82, 2016.
[6]Tunç Alkanat, Emre Tunali and Sinan Öz, “Fully-Automatic Target Detection and Tracking for Real-Time, Airborne Imaging Applications,” International Joint Conference on Computer Vision, Imaging and Computer Graphics, pp. 240-255, VISIGRAPP 2015.
[7]M. Yagimli and H. S. Varol, “Real Time Color Composition Based Moving Target Recognition,” Journal of Naval Science and Engineering, vol. 5, no. 2, pp. 89-97, 2009.
[8]M. Peker and A. Zengin, “Real-Time Motion-Sensitive Image Recognition System,” Scientific Research and Essays, vol. 5, no. 15, pp. 2044-2050, 2010.
[9]S. Saravanakumar, A. Vadivel and C. G. Ahmed, “Multiple Human Object Tracking using Background Subtraction and Shadow Removal Techniques,” Int. Conf. on Signal and Image Processing, pp. 79-84, 2010.
[10]S. Yamamoto, Y. Mae, Y. Shirai and J. Miura, “Real-time Multiple Object Tracking Based on Optical Flows,” Proc. IEEE Int. Conf. on Robotics and Automation, pp. 2328-2333, 1995.
[11]C.-H. Chuang, Y.-L. Chao, Z.-P. Li, “Moving Object Segmentation and Tracking Using Active Contour and Color Classification Models,” IEEE Int. Symposium, pp.1-8, 2010
[12]M. Kass, A. Witkin and D. Terzopoulos, “Snakes: Active Contour Models,” Int. Journal of Computer Vision, vol. 1, no. 4, pp. 321-331, 1988.
[13]M. Maziere, F. Chassaing, L. Garrido and P. Salembier, “Segmentation and Tracking of Video Objects for a Content-Based Video Indexing Context,” IEEE Int. Conference, pp. 1191-1194, 2000.
[14]N. Özgen, Computer Based Target Tracking, M.Sc. Thesis, Gazi University Institute of Science and Technology, August 2008.
[15]L.W. Wang , J.L. Qin, “Study on Moving Object Tracking Algorithm in Video Images,” The Eighth Int. Conference on Electronic Measurement and Instruments, pp.1-4, 2007.
[16]M.S. Dao, F.G. De Natale and A. Massa, “Edge potential functions and genetic algorithms for shape-based image retrieval,” Proceedings of the 2003 International Conference on Image Processing (ICIP 2003), vol. 3, pp. III-729, September 2003.
[17]C. Li and H. Duan, “Target detection approach for UAVs via improved Pigeon-inspired Optimization and Edge Potential Function,” Aerosp. Sci. Technol., vol. 39, pp. 352–360, 2014.
[18]C. Xu and H. Duan, “Artificial bee colony (ABC) optimized edge potential function (EPF) approach to target recognition for low-altitude aircraft,” Pattern Recognition Letters, vol. 31, no. 13, pp. 1759–1772, 2010.
[19]F. Battisti, M. Carli, F.G.B. De Natale and A. Neri, “Ear recognition based on edge potential function,” Proc. SPIE 8295, Image Processing: Algorithms and Systems X; and Parallel Processing for Imaging Applications II, no. 829508, February 2012, http://dx.doi.org/10.1117/12.909082.
[20]Y. Wang and J. Yin, “Intelligent search optimized edge potential function (EPF) approach to synthetic aperture radar (SAR) scene matching,” IEEE Congress on Evolutionary Computation (CEC), pp. 2124–2131, July 2014.
[21]M.S. Dao, F.G. DeNatale, A. Massa, “MPEG-4 video retrieval using video-objects and edge potential functions,” Advances in Multimedia Information Processing-PCM 2004, Springer, Berlin, Heidelberg, pp. 550–557, 2005.
[22]M.S. Dao, F.G. De Natale and A. Massa, “Video retrieval using video object-trajectory and edge potential function,” Proceedings of the 2004 International Symposium on Intelligent Multimedia, Video and Speech Processing, October 2004.
[23]B. Li, H. Cao, M. Hu, C. Zhou, “Shape matching optimization via atomic potential function and artificial bee colony algorithms with various search strategies,” Proceedings of 8th International Symposium on Computational Intelligence and Design (ISCID 2015), vol. 1,pp. 1–4, December 2015.
[24]C. F. Olson and D. P. Huttenlocher, “Automatic Target Recognition by Matching Oriented Edge Pixels,” IEEE Transactions on Image Processing, vol. 6, no. 1, Jan. 1997.
[25]H. Zhu, L. Deng and G. Lu, “Indirect target recognition method for overhead infrared image sequences,” Optik, vol. 126, pp. 1909-1913, 2015.
[26]Y. Weiping, W. Xuezhi, B. Moran, A. Wheaton and N. Cooley, “Efficient registration of optical and infrared images via modified Sobel edging for plant canopy temperature estimation,” Computers and Electrical Engineering, vol. 38, pp. 1213–122, 2012.
[27]J. R. Parker, Algorithms for Image Processing and Computer Vision, 2nd ed., Wiley Computer Publishing, 2010.
[28]R. M. Bahy, G. I. Salama, T. A. Mahmoud, “Registration of Multi-Focus Images Using Hough Transform,” 29th National Radio Science Conference (NRSC), Cairo, Egypt, pp. 279-284, 10-12 Apr. 2012.
[29]https://www.mathworks.com/help/images/ref/normxcorr2
[30]R. Shams, N. Barnes and R. Hartley, “Image Registration in Hough Space Using Gradient of Images,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Glenelg, Australia, Jun. 2007.
[31]http://vision.ucmerced.edu/datasets/landuse.html
[32]https://nationalmap.gov
[33]W. Shen, X. Ding, C. Liu, C. Fang and B. Xiong, “New Method of Ground Target Recognition Based on Stable Edge Weighted HOG,” Asia-Pacific International Symposium on Aerospace Technology, APISAT2014, Published by Elsevier Ltd., 2015.
[34]Akarlari and M. Yagimli, “Target Recognition with Color Components and Sobel Operator,” Int. Journal of Electronics, Mechanical and Mechatronics Engineering, vol. 2, no. 4, pp. 305-310, 2012.
[35]Bai Li, “Atomic Potential Matching: An Evolutionary Target Recognition Approach Based on Edge Features,” Optik, vol. 127, pp. 3162-3168, 2016.