IJCNIS Vol. 16, No. 3, Jun. 2024
Cover page and Table of Contents: PDF (size: 151KB)
REGULAR PAPERS
Smart cities (SCs) are being built around the large-scale deployment of the Internet of Things (IoT), enabling real-time enhancements to quality of life in terms of comfort and efficiency. The key concerns in most SCs that directly impact network performance are security and privacy. Numerous approaches have been proposed for secure data transmission, but current methods do not provide high accuracy and incur high computational time. To resolve these problems, an Auto-metric Graph Neural Network for Attack Detection and Secure Data Transmission using Optimized Enhanced Identity-Based Encryption in IoT (AGNN-AWHSE-ST-IoT) is proposed. The input data is taken from the NSL-KDD dataset and pre-processed in three steps: crisp data conversion, splitting, and normalization. The pre-processed input is then fed into Colour Harmony Algorithm (CHA) based feature selection to select the important features. The selected features are given to the AGNN classifier. After classification, the data is passed to Enhanced Identity-Based Encryption (EIBE), which is optimized using the Wild Horse Optimizer (WHO) to transmit the data more securely. The outcomes for normal data are displayed on an LCD monitor. The AGNN-AWHSE-ST-IoT method is implemented in Python. It attains 8.888%, 13.953%, and 19.512% higher accuracy, 2.105%, 6.593%, and 8.988% higher cumulative accuracy, 54.285%, 54.285%, and 52.941% lower encryption time, 8.2%, 3.3%, and 6.9% lower decryption time, 11.627%, 10.344%, and 6.666% higher security level, and 60.869%, 70%, and 64% lower computational time than the existing approaches SBAS-ST-IoT, BDN-GWMNN-ST-IoT, and DNN-LSTM-ST-IoT, respectively.
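A minimal sketch of the pre-processing stage described in this abstract (crisp data conversion, splitting, and normalization of NSL-KDD records) is given below; the file path and the "label" column name are illustrative assumptions, not details taken from the paper.

# Sketch of NSL-KDD pre-processing: crisp conversion, splitting, normalization.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("KDDTrain+.csv")                      # assumed local copy of NSL-KDD

# Crisp data conversion: encode symbolic fields as integers.
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

X, y = df.drop(columns=["label"]), df["label"]         # "label" column name assumed

# Splitting into training and test partitions.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Normalization to the [0, 1] range.
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)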
This paper investigates the impact of propagation delay and channel loss due to the use of multiple LED arrays in visible light communication (VLC) systems based on filter bank multicarrier (FBMC) modulation. FBMC offers greater spectral efficiency and asynchronous transmission, and is a promising alternative to orthogonal frequency division multiplexing (OFDM). The proposed FBMC model is based on 4-quadrature amplitude modulation (QAM) and 16-QAM formats and uses 100 symbols and 600 input bits per symbol. In this paper, the VLC-FBMC system is designed based on the line-of-sight (LOS) model under an additive white Gaussian noise (AWGN) channel. Comparative analyses of different bit rates in terms of bit error rate (BER), best sampling point, and signal-to-noise ratio (SNR) requirement have been carried out to show the effect of delay and loss on communication quality and system performance. The results demonstrate that the proposed FBMC model achieves a bit rate of up to 29.296 Mbit/s with a low BER of 10^-3 and a smaller SNR penalty in high QAM formats, demonstrating its potential as a viable alternative to OFDM for future VLC systems.
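The following is a simplified sketch of the BER-versus-SNR evaluation idea described above, using a 4-QAM constellation over an AWGN line-of-sight channel; the FBMC filter bank itself is omitted, so this is only an illustrative baseline, not the paper's model.

# BER of Gray-mapped 4-QAM over an AWGN channel at a few SNR points.
import numpy as np

rng = np.random.default_rng(0)
n_bits = 2 * 60000                                     # 2 bits per 4-QAM symbol
bits = rng.integers(0, 2, n_bits)

# Unit-energy 4-QAM (QPSK) mapping: I from even bits, Q from odd bits.
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

for snr_db in (4, 8, 12):
    snr = 10 ** (snr_db / 10)
    noise = rng.standard_normal(symbols.size) + 1j * rng.standard_normal(symbols.size)
    rx = symbols + noise * np.sqrt(1 / (2 * snr))      # per-dimension noise power 1/(2*SNR)
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (rx.real > 0).astype(int)         # hard decision on I branch
    bits_hat[1::2] = (rx.imag > 0).astype(int)         # hard decision on Q branch
    print(f"SNR {snr_db} dB  BER {np.mean(bits_hat != bits):.4f}")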
Since wireless signals are broadcast in nature, a transmission intended for a particular destination may also be received by non-intended intermediate stations. Cooperative transmission, which employs intermediate stations to relay overheard data and obtain diversity gains, can substantially improve channel efficiency in wireless systems; cooperation among stations can thus achieve higher throughput and enhanced network lifetime. The proposed work presents a medium access layer model called the Cooperative MAC protocol based on optimal Data Rate (CMAC-DR). In CMAC-DR, stations with a higher data rate aid stations with a lower data rate by relaying their traffic. Using overheard information, potential helper stations with a higher data rate send out Helper Ready To Send (HRTS) frames, and stations with a lower data rate maintain a table of potential helper stations, called the Co-op table, that can aid their transmissions. During communication, a low-data-rate source station either transmits via a helper station, lowering the end-to-end transmission delay and increasing throughput, or opts for direct transmission if no potential helper is found or if CMAC-DR becomes an overhead. Across varied simulated scenarios, CMAC-DR is evaluated for improvement in overall network lifetime and throughput and for reduction in delay. The CMAC-DR protocol is transparent and compatible with legacy 802.11, and compared with it shows improved performance in terms of delay, throughput, and network lifetime, since data rate is used as the relay selection criterion.
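A minimal sketch of the CMAC-DR relay decision described above: a low-rate source consults its Co-op table (built from overheard HRTS frames) and relays through a helper only when the two-hop path is faster than the direct one. The rates and table entries are illustrative assumptions.

# Helper selection based on estimated transmission time.
coop_table = {                 # helper id -> (rate source->helper, rate helper->dest) in Mbit/s
    "H1": (11.0, 11.0),
    "H2": (5.5, 11.0),
}

def choose_path(frame_bits, direct_rate_mbps, coop_table):
    direct_time = frame_bits / (direct_rate_mbps * 1e6)
    best_helper, best_time = None, direct_time
    for helper, (r_sh, r_hd) in coop_table.items():
        two_hop_time = frame_bits / (r_sh * 1e6) + frame_bits / (r_hd * 1e6)
        if two_hop_time < best_time:
            best_helper, best_time = helper, two_hop_time
    return best_helper, best_time      # None means direct transmission

helper, t = choose_path(frame_bits=12000, direct_rate_mbps=1.0, coop_table=coop_table)
print("relay via", helper or "direct", f"({t * 1e3:.2f} ms)")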
Cloud storage environments permit data holders to store their private data on remote cloud servers. Ciphertext Policy Attribute Based Encryption (CP-ABE) is an advanced method that provides fine-grained access control and data confidentiality for accessing cloud data. CP-ABE schemes with a small attribute universe limit the practical application of CP-ABE, as the public parameter length increases linearly with the number of attributes. Further, it is necessary to provide a way to offload complex decryption calculations to outsourced devices. In addition, state-of-the-art techniques find it difficult to trace traitors and revoke their attributes due to the complexity of ciphertext updating. In this paper, a concrete construction of a CP-ABE technique is provided to address the above limitations. The proposed technique supports a large attribute universe, proxy decryption, traitor traceability, attribute revocation, and ciphertext updating. The proposed scheme is proven secure under the random oracle model. Moreover, the experimental outcomes reveal that the scheme is more time efficient than existing schemes in terms of computation cost.
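To illustrate the fine-grained access control idea behind CP-ABE, the sketch below checks whether a user's attribute set satisfies a ciphertext policy; the pairing-based cryptography of the proposed scheme is not reproduced, and the nested-tuple policy format is an assumption made for illustration only.

# Attribute-policy satisfaction check (conceptual illustration of CP-ABE access control).
def satisfies(policy, attributes):
    """policy: nested tuples, e.g. ("AND", "doctor", ("OR", "cardiology", "icu"))."""
    if isinstance(policy, str):
        return policy in attributes
    op, *children = policy
    results = [satisfies(child, attributes) for child in children]
    return all(results) if op == "AND" else any(results)

policy = ("AND", "doctor", ("OR", "cardiology", "icu"))
print(satisfies(policy, {"doctor", "icu"}))      # True  -> decryption would succeed
print(satisfies(policy, {"nurse", "icu"}))       # False -> access denied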
Countering the spread of calls for political extremism through graphic content on online social networks is an increasingly pressing problem that requires new technological solutions, since traditional countermeasures rely on recognizing destructive content only in text messages. Because neural network tools are currently the most effective means of analyzing graphic information, it is advisable to use them for analyzing images and video materials in online social networks while adapting them to the expected conditions of use: wide variability in the size of graphic content, the presence of typical interference, and the limited computing resources of recognition tools. On this basis, a method is proposed for constructing neural network recognition tools adapted to these conditions. Recognition relies on the authors' neural network model, which combines a low-resource convolutional neural network of the MobileNetV2 type with a recurrent neural network of the LSTM type; with well-chosen architectural parameters, it ensures high accuracy in recognizing scenes of political extremism both in static images and in video materials under limited computing resources. A mechanism adapts the input field of the neural network model to the variability of graphic resource sizes by scaling the input graphic resource within acceptable limits and, if necessary, filling the remainder of the input field with zeros. Typical noise is mitigated by applying established techniques for correcting brightness and contrast and eliminating blur in local areas of social network images. Neural network tools developed with the proposed method recognize scenes of political extremism in graphic materials of online social networks with accuracy at the level of the best-known neural network models while reducing resource intensity by more than 10 times. This allows the use of less powerful equipment, increases the speed of content analysis, and opens up prospects for easily scalable recognition tools, ultimately increasing security and reducing the spread of extremist content on online social networks. Further research should focus on introducing an attention mechanism into the neural network model used in the method, which is expected to increase the efficiency of neural network analysis of video materials.
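A hedged sketch of the input-field adaptation mechanism described above: a graphic resource is scaled within acceptable limits and the remainder of the fixed input field is filled with zeros. The 224x224 field size is an assumption matching typical MobileNetV2 inputs, not a figure quoted from the paper.

# Scale an image into a fixed input field and zero-pad the rest.
import numpy as np
from PIL import Image

def adapt_to_input_field(img: Image.Image, field=(224, 224)) -> np.ndarray:
    img = img.convert("RGB")
    # Never upscale; shrink only as far as needed to fit the field.
    scale = min(field[0] / img.height, field[1] / img.width, 1.0)
    resized = img.resize((max(1, int(img.width * scale)),
                          max(1, int(img.height * scale))))
    canvas = np.zeros((field[0], field[1], 3), dtype=np.float32)   # zero-filled input field
    arr = np.asarray(resized, dtype=np.float32) / 255.0
    canvas[:arr.shape[0], :arr.shape[1], :] = arr
    return canvas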
Model-based parameter estimation, identification, and optimisation play a dominant role in many aspects of physical and operational processes in applied sciences, engineering, and other related disciplines. The intricate task involves fitting the most appropriate parametric model, with nonlinear or linear features, to experimental field datasets prior to selecting the best optimisation algorithm with the best configuration; the task is thus geared towards solving a clearly defined optimisation problem. In this paper, a systematic stepwise approach is employed to review and benchmark six numerical optimization algorithms in the MATLAB computational environment: Gradient Descent (GRA), Levenberg-Marquardt (LEM), Quasi-Newton (QAN), Gauss-Newton (GUN), Nelder-Mead (NEM), and Trust-Region-Dogleg (TRD). This is accomplished by applying them to an intricate radio frequency propagation modelling and parametric estimation problem using practical spatial signal data. The spatial signal data were obtained via real-time field drive tests conducted around six eNodeB transmitters, with case studies taken from different terrains where 4G LTE transmitters are operational. Three criteria connected with the rate of convergence were used for benchmarking. Results show that the approximate Hessian-based QAN algorithm, followed by the LEM algorithm, yielded the best results in optimizing and estimating the RF propagation model parameters. The resultant approach and output of this paper will be a valuable asset in assisting end-users to select the most suitable optimization algorithm for their respective intricate problems.
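As an illustration of the parametric fitting task described above, the sketch below fits a log-distance path-loss model PL(d) = PL0 + 10*n*log10(d/d0) to measured signal data with SciPy's Levenberg-Marquardt and trust-region solvers; these are stand-ins for the paper's MATLAB LEM/TRD implementations, and the sample measurements are synthetic assumptions.

# Fit a log-distance path-loss model to (synthetic) drive-test data.
import numpy as np
from scipy.optimize import least_squares

d = np.array([50, 100, 200, 400, 800, 1600], dtype=float)      # distance in metres
pl_meas = np.array([78.0, 88.5, 99.2, 110.4, 121.0, 131.8])    # path loss in dB (synthetic)

def residuals(theta, d, pl):
    pl0, n = theta                                             # reference loss and exponent
    return pl0 + 10.0 * n * np.log10(d / 50.0) - pl

for method in ("lm", "trf"):                                   # Levenberg-Marquardt vs trust-region
    fit = least_squares(residuals, x0=[70.0, 2.0], args=(d, pl_meas), method=method)
    print(method, "PL0=%.2f dB  n=%.2f" % tuple(fit.x))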
Data transport requires substantial security to avoid unauthorized snooping, as data mining yields important and often sensitive information that must be, and can be, secured using one of the many data privacy preservation methods. This study aims to contribute new knowledge to the protection of personal information. The key contributions of the work are an imputation method for filling in missing data before learning item profiles and the optimization of a Deep Auto-encoded NMF with a customizable learning rate. Bayesian inference is used to assess imputation for data with 13%, 26%, and 52% of values missing at random; by correcting inherent biases, the results of the decomposition problems may be enhanced. The mean absolute percentage error (MAPE) is used as the statistical analysis tool. The proposed approach is evaluated on the Wiki dataset and a traffic dataset against state-of-the-art techniques including BATF, BGCP, BCPF, and modified PARAFAC, all of which use Bayesian Gaussian tensor factorization. With this approach, the MAPE index is lower for privacy-protected data than for its corresponding original forms.
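The sketch below illustrates the evaluation idea described above: entries are removed at random, imputed, reconstructed with a plain NMF, and scored with MAPE on the held-out entries. The data and missing rate are synthetic assumptions, and the paper's deep auto-encoded NMF is not reproduced here.

# Mask 26% of entries, impute with column means, reconstruct with NMF, score with MAPE.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((60, 40)) + 0.1                 # synthetic non-negative data
mask = rng.random(X.shape) < 0.26              # 26% missing at random

X_imp = X.copy()
col_means = np.nanmean(np.where(mask, np.nan, X), axis=0)
X_imp[mask] = np.take(col_means, np.where(mask)[1])

model = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X_imp)
X_hat = W @ model.components_

mape = 100 * np.mean(np.abs((X[mask] - X_hat[mask]) / X[mask]))
print(f"MAPE on held-out entries: {mape:.2f}%")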
The use of vehicular ad hoc networks (VANETs), in which two or more vehicles communicate with each other, is increasing. The VANET architecture is vulnerable to various attacks, such as DoS and DDoS attacks. Various strategies have previously been employed to combat these attacks, but end-to-end transparency and the N-to-1 mapping of different IP addresses cause blocking failures and prevent detection of the twelve variants of DDoS attacks. Hence, a novel technique, Encrypted Access Hex-tuple Mapping attack detection, is proposed: it uses triple random hyperbolic encryption, which performs triple random encoding to encrypt traffic signals and obtains the public key by plotting random values on a hyperbola to strengthen access control in the middlebox, and a Deep auto sparse impasse neural network to detect the twelve DDoS attack variants in the VANET architecture. Moreover, to provide immunity against attacks, existing approaches use various artificial immune systems to prevent DDoS attacks, but the selection of positive and negative clusters generates too many indicator packets. Hence, a second novel technique, Stable Automatic Optimized Cache Routing, is proposed: it uses a Deep trust factorization neural network to detect irrational nodes without requiring prior negotiation about the local outlier factor and direct evidence, automatically extracting the trust factors of each node to manage packet flows and to detect the transmission of dangerous malware files in the network, thereby preventing various types of hybrid DDoS attacks on the VANET architecture. The proposed model is implemented in NS-3 to detect and prevent hybrid DDoS attacks.
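As a simple illustrative stand-in for the irrational-node detection idea mentioned above, the sketch below describes each node by a few traffic features and flags suspects with a local outlier factor score; the paper's Deep trust factorization neural network is not reproduced, and the features and values are assumptions.

# Flag suspected irrational nodes from per-node traffic features.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
# Columns: packet rate, drop ratio, duplicate-ACK ratio (per node, synthetic).
normal_nodes = np.column_stack([rng.normal(100, 10, 95),
                                rng.normal(0.02, 0.01, 95),
                                rng.normal(0.01, 0.005, 95)])
attackers = np.column_stack([rng.normal(900, 50, 5),
                             rng.normal(0.4, 0.05, 5),
                             rng.normal(0.2, 0.05, 5)])
features = np.vstack([normal_nodes, attackers])

lof = LocalOutlierFactor(n_neighbors=10)
labels = lof.fit_predict(features)            # -1 marks suspected irrational nodes
print("suspected nodes:", np.where(labels == -1)[0])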
In the modern world, the military sphere occupies a very important place in the life of a country, and it requires fast and accurate decisions. Such decisions can greatly affect the unfolding of events on the battlefield, so they must be made carefully, using all available means. In wartime, the speed and importance of decisions grow sharply, which makes this topic highly relevant. The purpose of the work is to create a comprehensive information system that facilitates the work of commanders of tactical units by providing real-time visualization and classification of aerial objects, classification of objects for radio-technical intelligence, structuring of military information, and easier perception of military information. The object of the research and development is the presence of slowing factors in the command-and-control process carried out by tactical units, which can delay decision-making and affect its correctness. The research and development aim to address emerging bottlenecks in the command-and-control process performed by tactical-unit teams by providing improved visualization, analysis, and handling of military data. The result of the work is an information system for processing military data to assist commanders of tactical units. This system significantly improves on known officer-assistance tools, which are a set of separate programs used in parallel on an as-needed basis. Through modern information technologies and ease of use, the system addresses problems that commanders may face, and each program included in the comprehensive information system has its own degree of innovation. The information system for structuring military information is distinguished by the possibility of use on any device. The information systems for the visualization and clustering of aerial objects and for the classification of objects for radio-technical intelligence are distinguished by their component-based nature: the applications can use external sources of input information and provide an API so that other systems can use the processed information. As for the information system for integration into information materials, it defines largely unknown terms and abbreviations, whereas existing solutions cannot integrate the required data into real documents. Therefore, using this comprehensive information system, the command of tactical units will be able to improve the quality of the command-and-control process.
Transmission control protocol (TCP) is the most common protocol in modern networks for maintaining reliable communication. However, TCP cannot fully utilize network capacity because of the constraints of its conservative congestion control algorithm, which favours reliability over timeliness. Although congestion is the most frequent cause of lost packets, transmission defects can also result in packet loss. In response to packet loss, TCP's end-to-end congestion control mechanism limits the amount of outstanding, unacknowledged data segments permitted in the network. To overcome this drawback, an Optimized Extreme Gradient Boosting algorithm is proposed to predict congestion. Initially, the data is collected and pre-processed to improve data quality: Min-Max normalization scales the data to a particular range, and KNN-based missing value imputation replaces missing values in the original data. The pre-processed data is then fed into the Optimized Extreme Gradient Boosting algorithm to predict congestion. Remora optimization is used in the designed model to select the learning rate optimally, minimizing the error and enhancing prediction accuracy. To validate the proposed model, the performance metrics of the proposed and existing models are compared. The proposed method obtains accuracy, precision, recall, and error values of 96%, 97%, 96%, and 3%, respectively. Thus, the proposed optimized extreme gradient boosting with the Remora algorithm outperforms existing algorithms for congestion prediction in the transport layer.
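A hedged sketch of the congestion-prediction pipeline described above: KNN imputation of missing values, Min-Max normalization, and an extreme gradient boosting classifier whose learning rate is chosen by a simple random search as a stand-in for the Remora optimizer. The data are synthetic, and the xgboost package is assumed to be installed.

# Impute, normalize, and tune the learning rate of an XGBoost classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan     # inject missing values

X = MinMaxScaler().fit_transform(KNNImputer(n_neighbors=5).fit_transform(X))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

best_lr, best_acc = None, 0.0
for lr in np.random.default_rng(1).uniform(0.01, 0.3, size=10):  # stand-in for Remora search
    acc = XGBClassifier(n_estimators=200, learning_rate=lr).fit(X_tr, y_tr).score(X_te, y_te)
    if acc > best_acc:
        best_lr, best_acc = lr, acc
print(f"best learning rate {best_lr:.3f}  accuracy {best_acc:.3f}")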