Work place: JNTUK College of Engineering, JNTUK University, Kakinada - 533003, Andhra Pradesh, India
E-mail: vamsivihar@gmail.com
Research Interests: Neural Networks, Pattern Recognition, Image Compression, Image Manipulation, Image Processing, Data Mining, Data Structures and Algorithms
Biography
K.V. Ramana received the B.Tech. degree in Electronics and Communication Engineering from JNT University, Hyderabad, Telangana, India in 1986, the M.Tech. degree in Computer Science and Engineering from the University of Hyderabad, Hyderabad, Telangana, India in 1990, and the Ph.D. in Computer Science and Engineering from Rayalaseema University, Kurnool, Andhra Pradesh, India in 2011. He is working as a Professor in the Department of Computer Science and Engineering, JNTUK College of Engineering, JNTUK University, Kakinada, Andhra Pradesh, India. He has published more than 20 papers in international and national conferences and journals. His research interests include Data Warehousing and Mining, Neural Networks, Image Processing, and Pattern Recognition.
By D.T.V. Dharmajee Rao, K.V. Ramana
DOI: https://doi.org/10.5815/ijisa.2019.05.03, Pub. Date: 8 May 2019
The development of fast and efficient training algorithms for Deep Neural Networks has been a subject of interest over the past few years, because the biggest drawback of Deep Neural Networks is their enormous computational cost and the long time required to train their parameters. This has motivated several researchers to focus on recent advances in hardware architectures and parallel programming models for accelerating the training of Deep Neural Networks. We revisited the concepts and mechanisms of typical Deep Neural Network training algorithms, such as the Backpropagation Algorithm and the Boltzmann Machine Algorithm, and observed that matrix multiplication constitutes the major portion of the workload of the training process, because it is carried out a huge number of times during training. With the advent of many-core GPU technologies, matrix multiplication can be performed very efficiently in parallel, which greatly reduces the time needed to train a Deep Neural Network compared with a few years ago. CUDA is one of the high-performance parallel programming models for exploiting the capabilities of modern many-core GPU systems. In this paper, we propose to modify the Backpropagation Algorithm and the Boltzmann Machine Algorithm with CUDA parallel matrix multiplication and test them on a many-core GPU system. Finally, we find that the proposed methods train Deep Neural Networks much faster than the classic methods.
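The abstract does not include the CUDA code itself; as a minimal sketch of the kind of kernel such an approach relies on, the following illustrates a straightforward CUDA matrix multiplication, where one thread computes one output element. The names (matMulKernel, matMul) and launch configuration are illustrative assumptions, not taken from the paper.

```cuda
#include <cuda_runtime.h>

// Sketch: each thread computes one element of C = A * B,
// where A is M x N, B is N x K, and all matrices are stored row-major.
__global__ void matMulKernel(const float *A, const float *B, float *C,
                             int M, int N, int K)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;   // row index into C
    int col = blockIdx.x * blockDim.x + threadIdx.x;   // column index into C
    if (row < M && col < K) {
        float sum = 0.0f;
        for (int i = 0; i < N; ++i)
            sum += A[row * N + i] * B[i * K + col];
        C[row * K + col] = sum;
    }
}

// Hypothetical host-side launch, assuming dA, dB, dC are device pointers.
void matMul(const float *dA, const float *dB, float *dC, int M, int N, int K)
{
    dim3 block(16, 16);
    dim3 grid((K + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    matMulKernel<<<grid, block>>>(dA, dB, dC, M, N, K);
    cudaDeviceSynchronize();
}
```

In a training loop, the weighted-sum computation of each layer (weight matrix times activation matrix) maps onto such a kernel call, which is where the parallel speed-up over a sequential CPU implementation comes from.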
By D.T.V. Dharmajee Rao, K.V. Ramana
DOI: https://doi.org/10.5815/ijisa.2018.06.06, Pub. Date: 8 Jun. 2018
Matrix multiplication is widely used in a variety of applications and is often one of the core components of many scientific computations. This paper examines three algorithms for computing the product of two matrices: the naive algorithm, Strassen's algorithm, and Winograd's algorithm. One of the main factors in determining the efficiency of an algorithm is its execution time, that is, how much time the algorithm takes to accomplish its work. All three algorithms are implemented, their execution times are measured, and we find experimentally that Winograd's algorithm is the fastest method for matrix multiplication. Deep Neural Networks are used for many applications. Training a Deep Neural Network is a time-consuming process, especially when the number of hidden layers and nodes is large. The mechanisms of the Backpropagation Algorithm and the Boltzmann Machine Algorithm for training a Deep Neural Network are revisited, with attention to how the weighted sum of inputs is computed. The computation of the product of the weight and input matrices is carried out for several hundreds of thousands of epochs during the training of a Deep Neural Network. We propose to modify the Backpropagation Algorithm and the Boltzmann Machine Algorithm by using the fast Winograd's algorithm. Finally, we find that the proposed methods reduce the long training time of Deep Neural Networks compared with the existing direct methods.
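Winograd's inner-product algorithm trades roughly half of the scalar multiplications in the inner loop for additions by precomputing per-row and per-column factors. The sketch below is an assumption-laden host-side C++ illustration of that idea (not the authors' implementation); the function name winogradMultiply and the row-major layout are choices made here for clarity.

```cpp
#include <vector>

// Sketch of Winograd's inner-product algorithm for C = A * B,
// with A of size m x n and B of size n x p, stored row-major.
std::vector<float> winogradMultiply(const std::vector<float> &A,
                                    const std::vector<float> &B,
                                    int m, int n, int p)
{
    int half = n / 2;
    std::vector<float> rowFactor(m, 0.0f), colFactor(p, 0.0f), C(m * p, 0.0f);

    // rowFactor[i] = sum over j of A[i][2j] * A[i][2j+1]
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < half; ++j)
            rowFactor[i] += A[i * n + 2 * j] * A[i * n + 2 * j + 1];

    // colFactor[k] = sum over j of B[2j][k] * B[2j+1][k]
    for (int k = 0; k < p; ++k)
        for (int j = 0; j < half; ++j)
            colFactor[k] += B[2 * j * p + k] * B[(2 * j + 1) * p + k];

    // C[i][k] = sum over j of (A[i][2j] + B[2j+1][k]) * (A[i][2j+1] + B[2j][k])
    //           - rowFactor[i] - colFactor[k]   (+ correction if n is odd)
    for (int i = 0; i < m; ++i) {
        for (int k = 0; k < p; ++k) {
            float sum = -rowFactor[i] - colFactor[k];
            for (int j = 0; j < half; ++j)
                sum += (A[i * n + 2 * j] + B[(2 * j + 1) * p + k]) *
                       (A[i * n + 2 * j + 1] + B[2 * j * p + k]);
            if (n % 2 == 1)  // correction term when the shared dimension is odd
                sum += A[i * n + (n - 1)] * B[(n - 1) * p + k];
            C[i * p + k] = sum;
        }
    }
    return C;
}
```

In the training setting described above, A would play the role of a layer's weight matrix and B the matrix of input activations, so this substitution applies wherever the weighted sum of inputs is formed.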