Work place: Nehru Memorial College, Puthanampatti, Trichy, TamilNadu, India-621 007
E-mail: nitishmanik@gmail.com
Website:
Research Interests: Cryptographic Coding, Information-Theoretic Security
Biography
Mani. K received his MCA and M.Tech. from Bharathidasan University, Trichy, India, in Computer Applications and Advanced Information Technology respectively. Since 1989, he has been with the Department of Computer Science at Nehru Memorial College, affiliated to Bharathidasan University, where he currently works as an Associate Professor. He completed his PhD in Cryptography, with primary emphasis on the evolution of a framework for enhancing security and optimizing run time in cryptographic algorithms. He has published and presented around 15 research papers in international journals and at conferences.
DOI: https://doi.org/10.5815/ijmsc.2017.04.05, Pub. Date: 8 Nov. 2017
Association rule mining is a data mining technique used to identify decision-making patterns by analyzing datasets. Many association rule mining techniques exist to find various relationships among itemsets. The techniques proposed in the literature run on a non-distributed platform in which the entire dataset is retained until all transactions are processed and the transactions are scanned sequentially; they require more space and become time consuming when large amounts of data are considered. An efficient technique is therefore needed to find association rules from big datasets while minimizing both space and time. Thus, this paper aims to enhance the efficiency of association rule mining over big transaction databases, both in memory and speed, by processing the transaction database as a distributed file system in the Map-Reduce framework. The proposed method organizes the transactions into clusters, and the clusters are distributed among many parallel processors on a distributed platform. This distribution allows the clusters to be processed simultaneously to find itemsets, which improves performance in both memory and speed. Frequent itemsets are then discovered using a minimum support threshold, associations are generated from the frequent itemsets, and finally interesting rules are selected using a minimum confidence threshold. The efficiency of the proposed method is noticeably improved in both memory and speed.
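As a rough illustration of the counting step only (not the paper's Hadoop/Map-Reduce implementation), the following Python sketch partitions a toy transaction database, counts itemsets per partition in a map step, merges the partial counts in a reduce step, and then filters by minimum support and confidence; all function names and thresholds here are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def count_itemsets(partition, max_len=2):
    """Map step: count itemsets of size 1..max_len within one partition (cluster)."""
    counts = Counter()
    for transaction in partition:
        items = sorted(set(transaction))
        for k in range(1, max_len + 1):
            for itemset in combinations(items, k):
                counts[itemset] += 1
    return counts

def frequent_itemsets(transactions, n_partitions=2, min_support=2):
    """Reduce step: merge partial counts and keep itemsets meeting min_support."""
    partitions = [transactions[i::n_partitions] for i in range(n_partitions)]
    total = Counter()
    for partial in map(count_itemsets, partitions):  # in a real cluster these run in parallel
        total.update(partial)
    return {s: c for s, c in total.items() if c >= min_support}

def rules(freq, min_confidence=0.6):
    """Generate rules A -> B from frequent 2-itemsets with confidence >= min_confidence."""
    out = []
    for itemset, support in freq.items():
        if len(itemset) != 2:
            continue
        for a, b in (itemset, itemset[::-1]):
            conf = support / freq[(a,)]
            if conf >= min_confidence:
                out.append((a, b, conf))
    return out

if __name__ == "__main__":
    db = [["milk", "bread"], ["milk", "butter"], ["bread", "butter"],
          ["milk", "bread", "butter"], ["bread"]]
    freq = frequent_itemsets(db, min_support=2)
    print(rules(freq))
```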
DOI: https://doi.org/10.5815/ijitcs.2017.07.07, Pub. Date: 8 Jul. 2017
Most data mining and machine learning algorithms work better with discrete data than with continuous data, but real data need not always be discrete, so it is necessary to discretize continuous features. Several discretization methods are available in the literature. This paper compares two methods, Median Based Discretization and ChiMerge discretization. The discretized values obtained with both methods are used to find feature relevance using Information Gain. Using this relevance, the original features are ranked by both methods and the top-ranked attributes are selected as the more relevant ones. The selected attributes are then fed into the Naive Bayesian Classifier to determine the predictive accuracy. The experimental results clearly show that the performance of the Naive Bayesian Classifier improves significantly for features selected using Information Gain with Median Based Discretization rather than Information Gain with ChiMerge discretization.
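As a minimal sketch of the two measures discussed, the following Python fragment bins a continuous feature at its median (a simple stand-in for Median Based Discretization, whose exact splitting scheme is not detailed here) and computes the Information Gain of the binned feature with respect to the class labels; the toy data and function names are illustrative assumptions.

```python
import math
from collections import Counter

def median_discretize(values):
    """Split a continuous feature into two bins at its median
    (a simple stand-in for Median Based Discretization)."""
    ordered = sorted(values)
    n = len(ordered)
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2 if n % 2 == 0 else ordered[n // 2]
    return ["low" if v <= median else "high" for v in values]

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(feature, labels):
    """IG(class; feature) = H(class) - H(class | feature)."""
    total = len(labels)
    conditional = 0.0
    for value in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == value]
        conditional += (len(subset) / total) * entropy(subset)
    return entropy(labels) - conditional

if __name__ == "__main__":
    petal_len = [1.4, 1.3, 4.7, 4.5, 5.1, 5.9]   # toy continuous feature
    species   = ["a", "a", "b", "b", "c", "c"]   # toy class labels
    binned = median_discretize(petal_len)
    print(information_gain(binned, species))
```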
DOI: https://doi.org/10.5815/ijmsc.2017.02.04, Pub. Date: 8 Apr. 2017
In many number-theoretic cryptographic algorithms, encryption and decryption take the form x^n mod p, where n and p are integers. Exponentiation normally takes more time than other arithmetic operations. It may be performed by repeated multiplication, and performing fewer multiplications for the same exponentiation, using an addition chain, reduces the computational time further. The problem of determining the correct sequence of multiplications required to perform a modular exponentiation can be elegantly formulated using the concept of addition chains. Several methods for generating optimal addition chains are available in the literature. This paper proposes novel graph-based methods to generate optimal addition chains, in which the vertices of the graph represent the numbers used in the addition chain and the edges represent the moves from one number to another in the chain. Method 1, termed GBAPAC, generates all possible optimum addition chains for a given integer n by considering the edge weights of all possible numbers generated from every number in the addition chain. Method 2, termed GBMAC, generates the minimum number of optimum addition chains by considering mutually exclusive edges starting from every number. Further, the optimal addition chains generated for an integer using the proposed methods are verified against the conjectures on addition chains that already exist in the literature.
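The following Python sketch does not reproduce the proposed GBAPAC or GBMAC methods; it only illustrates why addition chains matter, by evaluating x^n mod p with one modular multiplication per chain step, using a known optimal chain for n = 15 (five multiplications versus six for the binary square-and-multiply method).

```python
def exponentiate_with_chain(x, chain, p):
    """Compute x**n mod p, where n == chain[-1], using one modular
    multiplication per step of the addition chain."""
    powers = {1: x % p}                      # powers[k] holds x**k mod p
    for k in chain[1:]:
        # find two earlier chain members (possibly equal) that sum to k
        i = next(a for a in chain if a in powers and (k - a) in powers)
        powers[k] = (powers[i] * powers[k - i]) % p
    return powers[chain[-1]]

if __name__ == "__main__":
    # an optimal addition chain for 15: 1, 2, 3, 6, 12, 15 (five multiplications)
    chain = [1, 2, 3, 6, 12, 15]
    print(exponentiate_with_chain(7, chain, 101), pow(7, 15, 101))  # both print 39
```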
DOI: https://doi.org/10.5815/ijmsc.2017.01.04, Pub. Date: 8 Jan. 2017
Symmetric-key encryption is a traditional form of cryptography in which a single key is used to encrypt and decrypt a message. In a symmetric-key algorithm, the sender and receiver must know the key value in advance, before any encrypted message is transmitted. Symmetric-key algorithms have several drawbacks. In some algorithms, the size of the key must be the same as the size of the original plaintext, and maintaining and remembering such a key is very difficult. Further, symmetric-key algorithms perform several rounds to produce the ciphertext, often with the same key used in each round, so the subkey generated in the current round depends entirely on the previous round. To avoid these problems, this paper proposes a novel approach to generating the key from a keystream for any symmetric-key algorithm using Primitive Pythagorean Triples (PPT). The main advantage of this method is that the key value generated from the keystream is chosen by both the sender and the receiver. Further, the size of the key sequence is not limited but is arbitrary in length. Since the generated keystream is random, neither the sender nor the receiver needs to remember the keys.
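The sketch below only illustrates the ingredients: it enumerates primitive Pythagorean triples with Euclid's formula (m > n, gcd(m, n) = 1, m - n odd) and derives a toy keystream from them. The actual keystream construction of the paper is not reproduced here; the derivation shown is purely an illustrative assumption.

```python
from math import gcd

def primitive_pythagorean_triples(limit):
    """Yield primitive Pythagorean triples (a, b, c) with a^2 + b^2 = c^2,
    generated from Euclid's formula with m > n, gcd(m, n) = 1, m - n odd."""
    for m in range(2, limit):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                yield (m * m - n * n, 2 * m * n, m * m + n * n)

def keystream(shared_index, length):
    """Toy keystream: both parties agree on shared_index and read off key
    bytes from successive triples (illustrative only, not the paper's scheme)."""
    triples = list(primitive_pythagorean_triples(40))
    stream = []
    i = shared_index
    while len(stream) < length:
        a, b, c = triples[i % len(triples)]
        stream.append((a + b + c) % 256)   # one key byte per triple
        i += 1
    return stream

if __name__ == "__main__":
    print(keystream(shared_index=7, length=8))
```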
DOI: https://doi.org/10.5815/ijitcs.2017.01.07, Pub. Date: 8 Jan. 2017
Association rule mining aims to determine the relations among sets of items in transaction databases and data repositories, generating informative patterns from large databases. The Apriori algorithm is a very popular data mining algorithm for defining the relationships among itemsets. It generates 1, 2, 3, ..., n-item candidate sets and performs many scans over the transactions to find the frequencies of itemsets, which determine the 1, 2, 3, ..., n-item frequent sets. This paper aims to eliminate the generation of candidate itemsets so as to minimize the processing time, the memory and the number of scans of the database. Since only the itemsets that occur in a transaction play a role in determining frequent itemsets, the methodology proposed in this paper extracts only single itemsets from each transaction, then generates the 2, 3, ..., n itemsets from them and calculates their corresponding frequencies. Further, each transaction is scanned only once and no candidate itemsets are generated, which minimizes both the memory space for storing the scanned itemsets and the processing time. Based on the generated itemsets, association rules are generated using minimum support and confidence.
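A minimal Python sketch of the single-scan idea follows: every itemset contained in a transaction is counted directly while that transaction is read, so no separate candidate-generation phase is needed. The helper names and the minimum-support handling are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations
from collections import Counter

def single_scan_itemsets(transactions, min_support):
    """Scan each transaction exactly once and count all of its itemsets
    directly, avoiding a separate candidate-generation phase."""
    counts = Counter()
    for transaction in transactions:          # one pass over the database
        items = sorted(set(transaction))
        for k in range(1, len(items) + 1):
            for itemset in combinations(items, k):
                counts[itemset] += 1
    return {s: c for s, c in counts.items() if c >= min_support}

if __name__ == "__main__":
    db = [["a", "b", "c"], ["a", "b"], ["b", "c"], ["a", "c"]]
    print(single_scan_itemsets(db, min_support=2))
```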
DOI: https://doi.org/10.5815/ijieeb.2016.06.06, Pub. Date: 8 Nov. 2016
Feature selection is an indispensable pre-processing technique for selecting the more relevant features and eliminating redundant attributes. Finding the features most relevant to the target is essential for improving the predictive accuracy of learning algorithms, because irrelevant features in the original feature space cause more classification errors and consume more learning time. Many methods have been proposed for feature relevance analysis, but none has used Bayes' Theorem and Self-Information. This paper therefore introduces a novel integrated approach for feature weighting using two measures, Bayes' Theorem and Self-Information, and picks the highly weighted attributes as the more relevant features using Sequential Forward Selection. The main objective of this approach is to enhance the predictive accuracy of the Naive Bayesian Classifier.
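Since the paper's exact weighting formula is not quoted here, the sketch below only shows one plausible way to combine the two measures: for each feature value it computes the self-information -log2 P(v) and the best class posterior obtained via Bayes' theorem, and averages their product as a feature weight. The combination and the toy data are assumptions, not the paper's method.

```python
import math
from collections import Counter

def feature_weight(feature_values, labels):
    """Toy feature weight combining Bayes' theorem and self-information:
    weight = sum over values v of P(v) * I(v) * max_c P(c | v),
    where I(v) = -log2 P(v).  Illustrative stand-in only."""
    n = len(labels)
    value_counts = Counter(feature_values)
    weight = 0.0
    for v, count in value_counts.items():
        p_v = count / n
        self_info = -math.log2(p_v)
        # posterior P(c | v) of the most probable class given this value
        posterior = max(Counter(lab for f, lab in zip(feature_values, labels)
                                if f == v).values()) / count
        weight += p_v * self_info * posterior
    return weight

if __name__ == "__main__":
    outlook = ["sunny", "sunny", "rain", "rain", "overcast", "overcast"]
    play    = ["no", "no", "yes", "no", "yes", "yes"]
    print(feature_weight(outlook, play))
```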