IJCNIS Vol. 17, No. 2, Apr. 2025
Cover page and Table of Contents: PDF (size: 207KB)
REGULAR PAPERS
Wireless data traffic and the variety of wirelessly connected devices have increased dramatically in the past few years. Millimeter wave (mmWave) technology can serve the primary objectives of 5G networks, which include high data throughput and low latency. However, mmWave signals lack substantial diffraction and are consequently more susceptible to obstruction by physical objects in the environment, which can disrupt communication links and cause congestion. Blockages and path loss in wireless transmission lead to high latency, reduced data rates, and degraded quality of service. To overcome these limitations, a congestion prediction model based on the behaviour of network towers is designed using Rough Set Theory with a hyper-tuned SVM, targeting low latency and high-speed data transmission. Data from the different towers is first collected and assembled into a dataset, and Super MICE is used to impute missing values. Rough Set Theory then clusters the data into equivalence classes based on the behaviour of 5G, 4G, and 3G wireless networks. A hyper-tuned SVM with the Gazelle optimization algorithm predicts the congestion level by accurately selecting the hyperparameters. The proposed approach is evaluated and compared with existing techniques using standard performance metrics: informedness of 91%, Adjusted Rand Index of 0.83, and Jaccard index of 0.737; accuracy, precision, sensitivity, error, F1-score, and NPV reach 93%, 92%, 94%, 7%, 92%, and 90%, respectively. According to this evaluation, the proposed model outperforms the existing methods.
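As a minimal sketch of the equivalence-class step described above (not the authors' implementation), rough set theory partitions records that are indiscernible on a chosen set of condition attributes. The record fields and values below are hypothetical toy data:

```python
from collections import defaultdict

def equivalence_classes(records, attributes):
    """Partition records into rough-set indiscernibility classes:
    two records are equivalent when they agree on every attribute
    in `attributes`."""
    classes = defaultdict(list)
    for rec in records:
        key = tuple(rec[a] for a in attributes)
        classes[key].append(rec)
    return dict(classes)

# Hypothetical tower-behaviour records (illustrative, not from the paper).
towers = [
    {"id": 1, "gen": "5G", "load": "high"},
    {"id": 2, "gen": "4G", "load": "low"},
    {"id": 3, "gen": "5G", "load": "high"},
]
parts = equivalence_classes(towers, ["gen", "load"])
```

Towers 1 and 3 fall into the same class because they are indistinguishable on the selected attributes; a classifier such as the SVM can then operate on class-level behaviour rather than raw rows.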
Due to its high transistor count, a Multi-Processor System-on-Chip (MPSoC) delivers more performance than a uniprocessor system, and Network on Chip (NoC) interconnects in MPSoCs provide scalable connectivity compared to traditional bus-based interconnects. Still, NoC design significantly impacts MPSoC design, as it increases power consumption and network latency. One solution is packet compression, which minimizes data redundancy within NoC packets and reduces the overall power consumption of the network by shrinking packet size. Although packet compression improves NoC performance, the latency and overhead of the compressor and decompressor demand more memory access time, so the problem calls for a simple, lightweight method such as delta compression. Consequently, this research proposes a new delta-difference Hybrid Tree coding (∆DHT-Zip) to de/compress data packets in the NoC framework. In this approach, Delta encoding, Huffman encoding, and DNA (deoxyribonucleic acid) tree coding are hybridized to perform packet de/compression. Moreover, Run Length Encoding (RLE) is used to compress the metadata produced by both the encoding and decoding processes. The proposed ∆DHT-Zip method yields decreased packet loss and significant power savings. Simulation results show that ∆DHT-Zip minimizes packet latency and outperforms existing data compression approaches with a mean Compression Ratio (CR) of 1.2%, which is 79.06% greater than the existing FlitZip algorithm.
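The two lightweight building blocks named above, delta encoding and RLE, can be sketched in a few lines. This is a generic illustration under the usual textbook definitions, not the paper's ∆DHT-Zip pipeline, and the sample payload is invented:

```python
def delta_encode(values):
    """Keep the first value, then store successive differences (deltas);
    similar neighbouring values yield small, repetitive deltas."""
    return values[:1] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    """Invert delta encoding by a running sum."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

def rle_encode(symbols):
    """Run Length Encoding: collapse runs into (symbol, count) pairs."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return [(s, n) for s, n in runs]

payload = [100, 101, 102, 102, 102, 103]   # hypothetical flit values
deltas = delta_encode(payload)             # [100, 1, 1, 0, 0, 1]
runs = rle_encode(deltas[1:])              # [(1, 2), (0, 2), (1, 1)]
```

The point of chaining the two: delta encoding turns slowly varying payloads into runs of near-identical small symbols, which RLE (and entropy coders such as Huffman) then compress well.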
An imbalanced surveillance video dataset consists of majority and minority classes, corresponding to normal and anomalous instances, in a nonlinear, non-Gaussian framework. Under a standard particle filter, normal and anomalous instances yield majority and minority particles associated with high- and low-probability regions, respectively. The minority particles are at high risk of being suppressed by the majority particles, because the proposal probability density function (pdf) favours the highly probable regions of the input data space and thus remains biased. The standard particle filter-based tracker suffers from sample degeneration and sample impoverishment because the biased proposal pdf ignores minority particles, and the difficulty of designing a correct proposal pdf prevents particle filter-based tracking on imbalanced video data. Existing methods do not address this imbalanced nature of particle filter-based tracking. To alleviate this problem and the associated tracking challenges, this paper proposes a novel fractional whale particle filter (FWPF) that fuses a fractional calculus-based whale optimization algorithm (FWOA) with the standard particle filter under weighted-sum rule fusion. Integrating the FWPF with an iterative Gaussian mixture model (GMM) with unbiased sample variance and sample mean makes the proposal pdf adaptive to the imbalanced video data, leading the FWPF to a minimum-variance unbiased estimator that effectively detects and tracks multiple objects. Fractional calculus up to the first four terms makes the FWOA a local and global search operator with an inherent memory property; it oversamples minority particles, diversifying them with multiple imputations to eliminate data distortion with low bias and low variance.
The proposed FWPF also introduces a novel imbalance evaluation metric, tracking distance correlation, for imbalanced tracking over the UCSD surveillance video data, and shows greater efficacy than existing methods in mitigating the effects of the imbalanced nature of video data. The proposed method likewise outperforms existing methods in the precision and accuracy of multi-object tracking, and its consistently near-zero tracking distance correlation indicates efficient imbalance reduction through bias-variance correction.
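The sample-degeneration problem the abstract describes can be made concrete with the standard particle filter machinery: the effective sample size diagnoses degeneration, and systematic resampling shows how low-weight (minority) particles get dropped. This is a generic textbook sketch, not the FWPF itself:

```python
import random

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2); a low value means a few particles carry
    almost all the weight, i.e. sample degeneration."""
    return 1.0 / sum(w * w for w in weights)

def systematic_resample(particles, weights):
    """Standard systematic resampling over normalized weights. Particles
    in low-probability (minority) regions tend to be discarded, which is
    exactly the imbalance the FWPF aims to correct."""
    n = len(particles)
    positions = [(random.random() + i) / n for i in range(n)]
    resampled, i, acc = [], 0, weights[0]
    for p in positions:           # positions are increasing, one pass suffices
        while p > acc:
            i += 1
            acc += weights[i]
        resampled.append(particles[i])
    return resampled
```

For example, with weights `[1.0, 0.0]` the minority particle never survives resampling, and `effective_sample_size` drops toward 1 as the weight distribution becomes more skewed.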
A method for distinguishing original prints from counterfeits has been developed. Security elements are printed using an offset printing method, which we call original printing; in parallel, bitmap security elements are printed on copiers, a process we call fake printing. The rasterisation types were developed so that the difference between an original print and a fake is visible to the naked eye. A method of detecting fake printing has also been developed by measuring the change in raster dot percentage, dot gain, trapping, optical density, ∆Lab, and tonality. Protection of the printed document is created by transforming the image with amplitude-modulated rasterisation based on the mathematical apparatus of Ateb-functions. During rasterisation, we create thin graphic elements of different shapes, calculated according to the developed methods. The size of a single raster dot depends on the chosen rasterisation method and on the tonal gradation value of the corresponding pixel in the image. The resulting raster structure relates the value of each raster element to tonal gradation through the Ateb-function, together with changes in the angle, lines, and curve shapes of a single raster. We offer raster image printing on various paper samples that are in wide use today.
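One of the measured quantities, dot gain, is simply how much larger a printed raster dot measures than its nominal area in the digital file. The sketch below illustrates that comparison with invented numbers and a hypothetical decision threshold; it is not the paper's measurement procedure:

```python
def dot_gain(nominal_percent, measured_percent):
    """Dot gain in percentage points: measured printed dot area minus
    the nominal dot area specified in the digital file."""
    return measured_percent - nominal_percent

# Hypothetical densitometer readings at a 50% nominal patch.
# Offset originals typically show lower, more uniform gain than copier output.
original_gain = dot_gain(50.0, 64.0)   # 14 points of gain
fake_gain = dot_gain(50.0, 76.0)       # 26 points of gain

THRESHOLD = 5.0                        # hypothetical tolerance
suspect_is_fake = abs(fake_gain - original_gain) > THRESHOLD
```

A real workflow would average several patches across the tonal range and combine dot gain with the other listed measurements (trapping, optical density, ∆Lab) before flagging a print.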
The Internet of Things (IoT) is an ever-expanding network that links objects to the web so that they can communicate with one another using standardized protocols. IoT networks are now used extensively in advanced applications such as smart factories, smart homes, smart grids, and smart cities, and can be combined with artificial intelligence (AI) and machine learning to make data collection both simpler and more dynamic. Alongside the services IoT applications provide, various security issues arise: IoT devices are accessed mainly through untrusted networks such as the Internet, leaving them unprotected against a wide range of malicious attacks. The detection performance of current intrusion detection systems (IDSs) is hindered by false alarms, low detection rates, unbalanced datasets, and slow response times. This study proposes a new IDS for the IoT that utilizes the chaotic improved Black Widow Optimization Kernel Extreme Learning Machine (CIBWO-KELM) algorithm to address these problems. First, the dataset is pre-processed with min-max normalization and by converting string values and IP addresses to numerical values. The best-performing feature set is then selected using the information gain method (IGM), and finally intrusion detection is performed by the CIBWO-KELM algorithm. Python is used for testing, and the BoT-IoT dataset for simulation analysis. The proposed model achieves an accuracy of 99.7% on the BoT-IoT dataset, and the results demonstrate that it outperforms other current techniques.
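The two pre-processing steps named above are standard and easy to sketch: min-max scaling of numeric features into [0, 1], and mapping an IPv4 address to a single integer feature. This is a generic illustration, not the authors' exact pipeline:

```python
def min_max_normalize(column):
    """Scale a numeric feature column into [0, 1]."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]   # constant column: no spread
    return [(x - lo) / (hi - lo) for x in column]

def ip_to_int(ip):
    """Map a dotted-quad IPv4 address to one integer feature."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

scaled = min_max_normalize([10, 20, 30])   # [0.0, 0.5, 1.0]
ip_feat = ip_to_int("192.168.0.1")
```

After such conversions every feature is numeric and on a comparable scale, which is what kernel-based learners like KELM expect.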
Small cells are a key enabler of massive connectivity and higher data rates in future generations of cellular communication systems. Key challenges in heterogeneous networks (HetNets) are effective resource utilization and the deployment of an optimal number of small base stations (SBSs) under dynamic mobile traffic patterns. In this paper, we design a traffic-adaptive small cell planning (TASCP) scheme that minimizes the number of deployed SBSs and enhances network energy efficiency without compromising the quality of service (QoS) of user equipments (UEs). The proposed TASCP consists of two phases: small cell formation (SCF) and small cell optimization (SCO). SCF creates the initial association between the UEs and SBSs and operates the modes (active/sleep) of the SBSs according to the dynamic traffic load: an SBS is switched from active to sleep mode when its traffic load can be cooperatively absorbed by neighboring SBSs. The proposed TASCP method is compared with state-of-the-art algorithms, namely the Self-organized SBS Deployment Strategy (SSDS) and the UE Association and SBS On/Off (USOF) algorithm. Network performance is evaluated in terms of energy efficiency, throughput, convergence time, and the number of active SBSs, and the proposed TASCP performs significantly better than the state-of-the-art algorithms.
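The active/sleep decision described above can be sketched as a simple greedy rule: put an SBS to sleep only while neighbors still have spare capacity to absorb its load. The threshold, capacity figure, and station names are hypothetical, and the real TASCP scheme is more involved:

```python
def plan_modes(sbs_loads, neighbor_capacity, threshold=0.3):
    """Greedy sleep planning sketch: an SBS sleeps when its load is
    below `threshold` AND the remaining spare capacity of neighboring
    SBSs can absorb that load; otherwise it stays active."""
    modes, spare = {}, neighbor_capacity
    for sbs, load in sbs_loads.items():
        if load < threshold and spare >= load:
            modes[sbs] = "sleep"
            spare -= load          # neighbors take over this traffic
        else:
            modes[sbs] = "active"
    return modes

# Hypothetical normalized loads; neighbors can absorb 0.25 in total.
modes = plan_modes({"sbs1": 0.1, "sbs2": 0.8, "sbs3": 0.2},
                   neighbor_capacity=0.25)
```

Here `sbs1` sleeps, but `sbs3` stays active even though it is lightly loaded, because the remaining spare capacity (0.15) cannot carry its 0.2 load, which is the cooperative constraint the abstract mentions.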
Data management has been revolutionized by cloud computing technologies, which remove users' barriers of expensive infrastructure and storage limits. These advantages have enabled significant cloud adoption in major businesses. However, the privacy of cloud-based data remains the most crucial problem for data owners due to various security risks. Many researchers have proposed methods to maintain data confidentiality, including attribute-based encryption (ABE), yet the cloud is still dogged by security issues. To protect data privacy, this research introduces a new encryption model, "Advanced Encryption Standard - Improved Quantum Ciphertext Policy and Attribute-based Encryption" (AES-IQCP-ABE). The proposed method encrypts the data twice: the data and its attributes are first encrypted with ABE, and the result is then encrypted with AES before being delivered to authorized users. A dynamic chaotic map function is used to protect user attributes throughout key initialization, data encryption, and data decryption. Both unstructured and structured large-scale medical data serve as inputs to the encryption process. In terms of computational memory and the time taken for cloud data encryption and decryption, the proposed model outperforms previous ABE-based encryption and decryption algorithms.
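To illustrate the chaotic-map idea in isolation (this is a toy sketch, not the AES-IQCP-ABE scheme, and XOR stands in for the real AES layer), a logistic map can be iterated to derive a byte keystream from a seed:

```python
def logistic_keystream(seed, r=3.99, nbytes=16):
    """Derive a byte keystream by iterating the chaotic logistic map
    x -> r*x*(1-x); tiny seed changes quickly diverge, which is why
    chaotic maps are attractive for key material. Toy sketch only,
    NOT cryptographically secure."""
    x = seed
    out = bytearray()
    for _ in range(nbytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_layer(data, key):
    """Stand-in for the second (AES) layer: XOR is involutory, so
    applying it twice recovers the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = logistic_keystream(0.3141)          # hypothetical seed
ct = xor_layer(b"medical record", key)    # "encrypt"
pt = xor_layer(ct, key)                   # "decrypt"
```

A production system would use a vetted AES implementation for the outer layer; the sketch only shows where a chaotic map could feed the key schedule.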
In recent communication systems, multiple-input multiple-output (MIMO), orthogonal frequency division multiplexing (OFDM), and Non-Orthogonal Multiple Access (NOMA) are major sub-system techniques of 5G wireless communications for optimizing latency and Bit Error Rate (BER) and improving throughput. The proposed design manages resource allocation among these techniques to meet the requirements using NOMA and MIMO. An iterative water-filling based power allocation (PA) in MIMO and NOMA is used to improve Quality of Service (QoS), and a NOMA cell-free massive MIMO system is investigated considering the effects of linear and individual channel estimation. The proposed system also optimizes the user-pairing approach for grouped users to maximize the downlink rate per user, so that the PA remains acceptable at the cost of added complexity. Finally, experimental results over different noisy channels demonstrate that BER and latency are minimized without performance degradation compared to the existing PA. The design is validated for single-user and 2-, 4-, and 8-user cases under different noisy channels, and also for uplink transmission under the same channels using the iterative water-filling based PA in MIMO and NOMA. Based on the simulation results, BER is optimized by 8%, while SNR, throughput, and PAPR are optimally obtained by 5.5%, 7%, and 6%, respectively.
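The water-filling principle behind the power allocation above distributes a power budget so that power plus effective noise is level across sub-channels, with the worst channels possibly receiving nothing. This is the classic textbook algorithm, not the paper's exact iterative scheme; noise levels and budget are invented:

```python
def water_filling(noise_levels, total_power):
    """Classic water-filling: find the water level so that each active
    sub-channel gets (level - noise) power and the budget is exhausted;
    channels noisier than the level get zero power."""
    n = sorted(noise_levels)
    active = len(n)
    while active > 0:
        # Candidate level if the `active` least-noisy channels are used.
        level = (total_power + sum(n[:active])) / active
        if level > n[active - 1]:
            break                  # all `active` channels sit below water
        active -= 1
    return [max(level - v, 0.0) for v in noise_levels]

# Hypothetical effective noise levels for three sub-channels.
alloc = water_filling([1.0, 2.0, 5.0], total_power=3.0)
```

With these numbers the level settles at 3.0, so the allocation is [2.0, 1.0, 0.0]: the noisiest channel is switched off and the budget goes to the two better channels, which is why water-filling improves QoS over equal-power allocation.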