Work place: Department of CSE, KLE College of Engineering and Technology, Chikodi, Dist. Belagavi, Karnataka, India
E-mail: rameshvtu11@gmail.com
Website:
Research Interests: Image Processing, Image and Sound Processing, Image Manipulation, Image Compression, Computer Graphics and Visualization
Biography
Dr. Ramesh M. Kagalkar is an academician with 19 years of experience, 30+ international publications, 15+ papers presented at international conferences, and 6+ published patents (two of which are at the granting stage). He received a research grant totalling Rs. 8 lakh under the TEQIP Competitive Research Grant scheme from VTU, Belagavi, and has authored three academic textbooks. His work has 232 citations, with an h-index of 8 and an i-index of 8. He earned a Bachelor of Engineering (CSE, 2001) from Gulbarga University, Gulbarga, Karnataka; a Master of Technology (M.Tech, 2006) from Visvesvaraya Technological University, Belgaum, Karnataka; and a Doctorate (Ph.D.) in Computer and Information Science (2019) from the same university. He has guided 20+ PG projects and 30+ UG projects, supervises innovative final-year UG major projects, and defines research topics for Ph.D. students. He provides research solutions across domains such as image, video, and audio. He is presently working on technical solutions to social problems: assistive systems for blind, deaf, disabled, and aged individuals, and safety-service systems for women and children. His research interests are in the areas of image, video, and audio processing.
DOI: https://doi.org/10.5815/ijigsp.2022.04.05, Pub. Date: 8 Aug. 2022
This paper presents natural language text description of video content activities. The system analyzes the content of a video to identify the objects it contains, tracks the actions and activities taking place, matches them against stored action models, and on that basis generates a grammatically correct text description in English. It uses two phases: training and testing. In the training phase, a database is maintained in which subject, verb, and object labels are assigned to features extracted from images; in the testing phase, text descriptions are generated automatically from video content. The implemented system translates complex video content of up to one minute in duration, considering three different objects, into text descriptions. For evaluation, a standard YouTube database is used, comprising 250 samples from 50 different domains. The overall system achieves an accuracy of 93%.
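The pipeline the abstract describes (match observed features to an action model, then realize the matched subject-verb-object triple as an English sentence) can be sketched as below. This is a minimal illustrative assumption of how such a stage might look, not the authors' implementation; the feature vectors, model entries, and templating rule are all hypothetical.

```python
# Hypothetical sketch of an S-V-O video-description stage.
# Assumptions (not from the paper): toy 2-D feature vectors, a flat list of
# action models, nearest-neighbour matching, and a present-continuous template.

def match_action(feature, models):
    """Return the action model whose stored feature vector is closest
    to the observed feature (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(models, key=lambda m: dist(feature, m["feature"]))

def describe(subject, verb, obj=None):
    """Assemble a grammatical English sentence from an S-V-O triple
    using a simple present-continuous template."""
    # Naive gerund rule: drop a trailing 'e' before appending '-ing'.
    verb_ing = (verb[:-1] if verb.endswith("e") else verb) + "ing"
    sentence = f"A {subject} is {verb_ing}"
    if obj:
        sentence += f" a {obj}"
    return sentence + "."

# Example: a stored database of (feature -> S-V-O) action models.
models = [
    {"feature": [0.0, 0.0], "svo": ("man", "ride", "horse")},
    {"feature": [5.0, 5.0], "svo": ("dog", "play", None)},
]

best = match_action([0.2, 0.1], models)          # closest to [0.0, 0.0]
print(describe(*best["svo"]))                    # A man is riding a horse.
```

In the full system, `match_action` would operate on features extracted from video frames and `describe` would be replaced by a richer grammar-aware generator; the structure of the two phases, however, mirrors the training (database of S-V-O assignments) and testing (automatic description) split described above.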