Workplace: Department of Electronics and Communication Engineering, K L University, Green Fields, Vaddeswaram, Guntur, India
E-mail: ananth.gondu@gmail.com
Website:
Research Interests: Computer systems and computational processes, Computer Architecture and Organization, Image Compression, Image Manipulation, Image Processing, Data Structures and Algorithms
Biography
G. Anantha Rao received the B.Tech. degree from GMRIT, JNTU Hyderabad, in 2007, and the M.Tech. degree from STIET, JNTUK Kakinada, India, in 2011. He is currently pursuing the Ph.D. degree in the Department of Electronics and Communication Engineering, KL University, Vijayawada, India. His research interests include signal processing and image and video processing.
By P.V.V. Kishore, G. Anantha Rao, E. Kiran Kumar, M. Teja Kiran Kumar, D. Anil Kumar
DOI: https://doi.org/10.5815/ijisa.2018.10.07, Pub. Date: 8 Oct. 2018
Extraction of complex head and hand movements, along with their constantly changing shapes, for recognition of sign language is considered a difficult problem in computer vision. This paper proposes the recognition of Indian sign language gestures using a powerful artificial intelligence tool, convolutional neural networks (CNN). Selfie-mode continuous sign language video is the capture method used in this work, so that a hearing-impaired person can operate the sign language recognition (SLR) mobile application independently. Because no dataset of mobile selfie sign language was available, we created one with five different subjects performing 200 signs from 5 different viewing angles under various background environments. Each sign occupies 60 frames (images) in a video. CNN training is performed with 3 different sample sizes, each consisting of multiple sets of subjects and viewing angles; the remaining 2 samples are used for testing the trained CNN. Different CNN architectures were designed and tested on our selfie sign language data to obtain better recognition accuracy. We achieved a 92.88% recognition rate, higher than that of other classifier models reported on the same dataset.
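As a rough illustration of the kind of frame-level CNN classifier the abstract describes, the sketch below builds a small convolutional network over individual video frames with a 200-way softmax output. This is a minimal sketch only, assuming Keras/TensorFlow; the input resolution, layer widths, optimizer, and function names are illustrative assumptions and not the authors' published architecture.

```python
# Minimal sketch (assumed, not the authors' architecture): a small Keras CNN
# that classifies single selfie-video frames into one of 200 sign classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SIGNS = 200              # 200 signs, as stated in the abstract
FRAME_SHAPE = (128, 128, 3)  # assumed frame resolution after resizing

def build_sign_cnn():
    """Build and compile a simple frame-level sign classifier (illustrative)."""
    model = models.Sequential([
        layers.Input(shape=FRAME_SHAPE),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_SIGNS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_sign_cnn().summary()
```

Since each sign spans 60 frames of video, per-frame predictions from such a network would still need to be aggregated over the clip (for example, by majority vote or averaging the softmax outputs) to yield a single sign label; the aggregation scheme here is an assumption, not taken from the paper.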