IJIGSP Vol. 10, No. 3, 8 Mar. 2018
Index Terms: Peano scan motif, GLCM features, scan position
To extract local features efficiently, Jhanwar et al. proposed the Motif Co-occurrence Matrix (MCM) [23]. Motifs, or Peano Scan Motifs (PSM), are derived only on a 2×2 grid. The PSM are obtained by fixing the initial scan position, which yields only six PSMs on the 2×2 grid. This paper extends that approach by deriving motifs on a 3×3 neighborhood: the 3×3 neighborhood is divided into cross and diagonal neighborhoods of 2×2 pixels, and complete motifs are derived on each. Complete motifs differ from the initial motifs in that the initial PSM position is not fixed, which yields 24 distinct motifs on a 2×2 grid. From these, the paper derives the cross diagonal complete motif matrix (CD-CMM), which holds the relative frequencies of the cross and diagonal complete motifs. GLCM features are then computed on the cross diagonal complete motif texture matrix for efficient face recognition. The face recognition rate of the proposed CD-CMM is evaluated on four popular face recognition databases and compared with other popular local-feature-based methods. The experimental results indicate the efficacy of the proposed method over the existing methods.
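The pipeline described in the abstract can be sketched as follows. This is a hedged illustration, not the paper's implementation: it assumes the cross 2×2 neighborhood is formed from the N, W, E, S neighbors of each 3×3 window, the diagonal 2×2 neighborhood from the NW, NE, SW, SE neighbors, and that a "complete motif" is the scan order of the four pixels (any starting point, 4! = 24 candidates) minimizing the total absolute intensity change along the traversal; the exact motif definition and tie-breaking in the paper may differ.

```python
# Hypothetical sketch of the CD-CMM pipeline; neighborhood layout, motif
# selection rule, and feature set are assumptions, not the paper's spec.
from itertools import permutations

import numpy as np

MOTIFS = list(permutations(range(4)))  # 24 complete motifs on a 2x2 grid


def complete_motif(block4):
    """Index (0..23) of the scan order minimising intensity variation."""
    costs = [sum(abs(block4[p[i + 1]] - block4[p[i]]) for i in range(3))
             for p in MOTIFS]
    return int(np.argmin(costs))


def cd_cmm(image):
    """24x24 matrix of relative frequencies of (cross, diagonal) motif pairs."""
    img = np.asarray(image, dtype=np.int64)
    mat = np.zeros((24, 24), dtype=np.float64)
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            # Assumed split of the 3x3 window into two 2x2 grids.
            cross = [img[r - 1, c], img[r, c - 1], img[r, c + 1], img[r + 1, c]]
            diag = [img[r - 1, c - 1], img[r - 1, c + 1],
                    img[r + 1, c - 1], img[r + 1, c + 1]]
            mat[complete_motif(cross), complete_motif(diag)] += 1
    total = mat.sum()
    return mat / total if total else mat


def glcm_features(m):
    """Standard Haralick-style GLCM features computed on the motif matrix."""
    i, j = np.indices(m.shape)
    return {
        "energy": float((m ** 2).sum()),
        "contrast": float(((i - j) ** 2 * m).sum()),
        "homogeneity": float((m / (1.0 + np.abs(i - j))).sum()),
        "entropy": float(-(m[m > 0] * np.log2(m[m > 0])).sum()),
    }
```

The resulting feature vector (one entry per GLCM statistic, possibly concatenated over several images or scales) would then feed a nearest-neighbor or similar classifier for recognition.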
A. Mallikarjuna Reddy, V. Venkata Krishna, L. Sumalatha, "Face Recognition based on Cross Diagonal Complete Motif Matrix," International Journal of Image, Graphics and Signal Processing (IJIGSP), vol. 10, no. 3, pp. 59-66, 2018. DOI: 10.5815/ijigsp.2018.03.07
[1]R. Brunelli and T. Poggio, “Face recognition: Features versus templates,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 10, pp. 1042–1052, Oct. 1993.
[2]A. Moeini and H. Moeini, “Real-world and rapid face recognition toward pose and expression variations via feature library matrix,” IEEE Trans. Inf. Forensics Security, vol. 10, no. 5, pp. 969–984, May 2015.
[3]P. J. Phillips et al., “Overview of the face recognition grand challenge,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2005, pp. 947–954.
[4]G. Betta, D. Capriglione, M. Corvino, C. Liguori, and A. Paolillo, “Face based recognition algorithms: A first step toward a metrological characterization,” IEEE Trans. Instrum. Meas., vol. 62, no. 5, pp. 1008–1016, May 2013.
[5]P. N. Belhumeur, J. P. Hespanha, and D. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 711–720, Jul. 1997.
[6]P. Comon, “Independent component analysis: A new concept?” Signal Process., vol. 36, no. 3, pp. 287–314, Apr. 1994.
[7]J. Lu, K. N. Plataniotis, A. N. Venetsanopoulos, and S. Z. Li, “Ensemble-based discriminant learning with boosting for face recognition,” IEEE Trans. Neural Netw., vol. 17, no. 1, pp. 166–178, Jan. 2006.
[8]S. Xie, S. Shan, X. Chen, and J. Chen, “Fusing local patterns of Gabor magnitude and phase for face recognition,” IEEE Trans. Image Process., vol. 19, no. 5, pp. 1349–1361, May 2010.
[9]C. A. R. Behaine and J. Scharcanski, “Enhancing the performance of active shape models in face recognition applications,” IEEE Trans. Instrum. Meas., vol. 61, no. 8, pp. 2330–2333, Aug. 2012.
[10]Z. Xu, H. R. Wu, X. Yu, K. Horadam, and B. Qiu, “Robust shape-feature-vector-based face recognition system,” IEEE Trans. Instrum. Meas., vol. 60, no. 12, pp. 3781–3791, Dec. 2011.
[11]L. Shen, L. Bai, and M. Fairhurst, “Gabor wavelets and general discriminant analysis for face identification and verification,” Image Vis. Comput., vol. 25, no. 5, pp. 553–563, May 2007.
[12]B. Kepenekci, F. B. Tek, and G. B. Akar, “Occluded face recognition based on Gabor wavelets,” in Proc. Int. Conf. Image Process., 2002, pp. I-293–I-296.
[13]S. Anith, D. Vaithiyanathan, and R. Seshasayanan, “Face recognition system based on feature extraction,” in Proc. Int. Conf. Inf. Commun. Embedded Syst. (ICICES), Feb. 2013, pp. 660–664.
[14]B. Jun, I. Choi, and D. Kim, “Local transform features and hybridization for accurate face and human detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1423–1436, Jun. 2013.
[15]G. Kayim, C. Sari, and C. Akgul, “Facial feature selection for gender recognition based on random decision forests,” in Proc. Signal Process. Commun. Appl. Conf. (SIU), Apr. 2013, pp. 1–4.
[16]Y. Gao and Y. Qi, “Robust visual similarity retrieval in single model face databases,” Pattern Recognit., vol. 38, no. 7, pp. 1009–1020, Jul. 2005.
[17]H.-S. Le and H. Li, “Recognizing frontal face images using hidden Markov models with one training image per person,” in Proc. Int. Conf. Pattern Recognit., Aug. 2004, pp. 318–321.
[18]M. Ko and A. Barkana, “A new solution to one sample problem in face recognition using FLDA,” Appl. Math. Comput., vol. 217, no. 24, pp. 10368–10376, Aug. 2011.
[19]T. Ahonen, A. Hadid, and M. Pietikäinen, “Face description with local binary patterns: Application to face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 12, pp. 2037–2041, Dec. 2006.
[20]Y. Sun, X. Wang, and X. Tang, “Deep learning face representation from predicting 10,000 classes,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 1891–1898.
[21]T. Ahonen, E. Rahtu, V. Ojansivu, and J. Heikkila, “Recognition of blurred faces using local phase quantization,” in Proc. Int. Conf. Pattern Recog. (ICPR), Dec. 2008, pp. 1–4.
[22]J. Kannala and E. Rahtu, “BSIF: Binarized statistical image features,” in Proc. Int. Conf. Pattern Recog. (ICPR), Nov. 2012, pp. 1363–1366.
[23]N. Jhanwar, S. Chaudhuri, G. Seetharaman, and B. Zavidovique, “Content based image retrieval using motif cooccurrence matrix,” Image and Vision Computing 22 (2004) 1211–1220.
[24]G. Peano, Sur une courbe, qui remplit toute une aire plane, Mathematische Annalen 36 (1890) 157–160.
[25]D. Hilbert, Über die stetige Abbildung einer Linie auf ein Flächenstück, Mathematische Annalen 38 (1891) 459–461.
[26]A. Lempel, J. Ziv, Compression of two-dimensional data, IEEE Transactions on Information Theory 32 (1) (1986) 2–8.
[27]J. Quinqueton, M. Berthod, A locally adaptive Peano scanning algorithm, IEEE Transactions on PAMI, PAMI-3 (4) (1981) 403–412.
[28]P.T. Nguyen, J. Quinqueton, Space filling curves and texture analysis, IEEE Transactions on PAMI, PAMI-4 (4) (1982).
[29]R. Dafner, D. Cohen-Or, Y. Matias, Context based space filling curves, EUROGRAPHICS Journal 19 (3) (2000).
[30]G. Seetharaman, B. Zavidovique, Image processing in a tree of Peano coded images, in: Proceedings of the IEEE Workshop on Computer Architecture for Machine Perception, Cambridge, MA (1997).
[31]G. Seetharaman, B. Zavidovique, Z-trees: adaptive pyramid algorithms for image segmentation, in: Proceedings of the IEEE International Conference on Image Processing, ICIP98, Chicago, IL, October (1998).
[32]G. Shivaram, G. Seetharaman, Data compression of discrete sequence: a tree based approach using dynamic programming, IEEE Transactions on Image Processing (1997) (in review).
[33]A.R. Butz, Space filling curves and mathematical programming, Information and Control 12 (1968) 314–330.
[34]K. Srinivasa Reddy, V. Vijaya Kumar, B. Eswara Reddy, “Face Recognition Based on Texture Features using Local Ternary Patterns”, IJIGSP, vol.7, no.10, pp.37-46, 2015.
[35]W. Wang, W. Chen and D. Xu, "Pyramid-Based Multi-scale LBP Features for Face Recognition," 2011 International Conference on Multimedia and Signal Processing, Guilin, Guangxi, 2011, pp. 151-155. doi: 10.1109/CMSP.2011.37
[36]Abuobayda M. Shabat and Jules-Raymond Tapamo, in Image Analysis and Recognition: 13th International Conference, ICIAR 2016, July 13–15, 2016, Proceedings, pp. 226–233.
[37]F. S. Samaria and A. C. Harter, “Parameterisation of a stochastic model for human face identification,” in Proc. Second IEEE Workshop on Applications of Computer Vision, 1994, pp. 138–142.
[38]A. Georghiades, P. Belhumeur, and D. Kriegman, “From few to many: Illumination cone models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 6, pp. 643–660, Jun. 2001.
[39]P. Phillips, H. Moon, S. Rizvi, and P. Rauss, “The FERET evaluation methodology for face-recognition algorithms,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 10, pp. 1090–1104, Oct. 2000.