The study focuses on improving the Quality of Service (QoS) of Vehicle-to-Vehicle (V2V) communication in Vehicular Ad Hoc Networks (VANETs) by enhancing the Learning Automata-based Ad Hoc On-Demand Distance Vector (LA-AODV) routing protocol. Unlike the standard AODV, a reactive routing protocol, and previous LA-AODV configurations, this research introduces a fine-tuning strategy for the learning automata parameters. This strategy allows the parameters to adapt dynamically to changing network conditions, reducing routing overhead and enhancing transmission stability. Three modified versions of LA-AODV, referred to as setups A, B, and C, are evaluated against the standard AODV and earlier LA-AODV configurations. The performance of each setup is measured using key QoS metrics: flood ID, packet loss ratio (PLR), packet delivery ratio (PDR), average throughput, end-to-end delay, and jitter. These metrics are crucial for evaluating the efficiency, reliability, and performance of V2V communication systems within VANETs. The results demonstrate that the LA-AODV variants significantly reduce flood ID counts, which represent the number of times a packet is broadcast, compared with AODV: setups A and B achieve reductions of 10.24% and 28.74%, respectively, at 200 transmissions, indicating enhanced scalability. Additionally, LA-AODV setup A provides 5.4% higher throughput in high-density scenarios. The modified versions also markedly decrease delay and jitter, achieving reductions of over 99.99% and 99.93%, respectively, at 50 transmissions. These findings underscore the adaptive capabilities of the proposed LA-AODV modifications and attest to the robustness of the system, highlighting the importance of parameter optimization in maintaining reliable V2V communication. Future work will benchmark LA-AODV against other state-of-the-art protocols to further validate its effectiveness.
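The learning-automata mechanism behind LA-AODV can be illustrated with a minimal sketch of a linear reward-penalty automaton biasing next-hop selection. The step sizes `a` and `b` below stand in for the parameters the study fine-tunes; their values here are illustrative assumptions, not the paper's tuned setups A, B, or C.

```python
# Linear reward-penalty (L_RP) learning automaton: the action-probability
# vector over candidate next hops is updated after each delivery outcome.
def update_probabilities(p, chosen, rewarded, a=0.1, b=0.05):
    """Return the updated probability vector after one environment response."""
    n = len(p)
    q = p[:]
    if rewarded:
        # Reward: move probability mass toward the chosen action.
        for i in range(n):
            q[i] = p[i] + a * (1 - p[i]) if i == chosen else (1 - a) * p[i]
    else:
        # Penalty: redistribute mass away from the chosen action.
        for i in range(n):
            q[i] = (1 - b) * p[i] if i == chosen else b / (n - 1) + (1 - b) * p[i]
    return q

# Three candidate next hops, initially equiprobable.
p = [1 / 3, 1 / 3, 1 / 3]
p = update_probabilities(p, chosen=0, rewarded=True)   # route 0 delivered
p = update_probabilities(p, chosen=1, rewarded=False)  # route 1 failed
print([round(x, 4) for x in p])
```

Over repeated transmissions the automaton concentrates probability on routes that deliver, which is what lets the protocol suppress redundant rebroadcasts (the flood ID reductions reported above).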
This paper presents a machine learning-based application intended to make job matching work better in Nigeria, where the jobless rate is consistently high. Businesses and job seekers alike might gain from the app's user-friendly layout, which makes it simple to post jobs and submit resumes. The foundation of the program is the SVM algorithm, which searches job ads and user profiles for appropriate matches based on parameters such as education, experience, and the kind of role. The system learns from user interactions and feedback to produce better matches than conventional job boards, which have significantly lower prediction accuracy. The application is built with React Native on the front end and Node.js on the back end to remain secure and scalable. This article outlines the system architecture, algorithmic implementation, and first testing results, illustrating how machine learning might transform the employment sector in developing countries such as Nigeria.
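The match-classification step described above can be sketched with scikit-learn. The feature encoding (education-level match, years of experience, role-type similarity) and the toy training pairs are assumptions for illustration, not the paper's actual feature set or data.

```python
# Hypothetical sketch: an SVM classifying (candidate, job) feature pairs
# as match / no-match, as the abstract describes.
from sklearn.svm import SVC

# Each row: [education-level match 0-2, years of relevant experience,
#            role-type similarity 0-1]; label 1 = good match.
X = [[2, 5, 0.9], [0, 0, 0.1], [1, 3, 0.7], [0, 1, 0.2],
     [2, 8, 0.8], [1, 0, 0.3], [2, 2, 0.6], [0, 4, 0.1]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

clf = SVC(kernel="rbf").fit(X, y)

# Score a new candidate-job pair.
print("match" if clf.predict([[2, 4, 0.85]])[0] == 1 else "no match")
```

In a deployed system the React Native client would send these feature vectors to a Node.js service, which in turn queries a model served like the one above; retraining on accumulated user feedback is what lets accuracy improve over time.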
For environmental sustainability and energy security, renewable sources must be incorporated into sustainable energy solutions. This study explores machine learning (ML) techniques to optimize the adoption of renewable energy sources in Bangladesh. Specifically, it proposes a three-phase methodology: (1) forecasting demand for nonrenewable energy, (2) predicting renewable energy availability and costs, and (3) analyzing potential savings and environmental benefits. Utilizing decision trees and random forests, the study presents a comparative analysis of energy demand and cost predictions, contributing to a data-driven framework for energy transition. The results indicate that strategic adoption of renewable energy can mitigate Bangladesh's electricity shortages while reducing dependency on fossil fuels. Machine learning plays a crucial role in energy optimization by accurately forecasting energy demand and availability, allowing for better resource allocation. It helps identify patterns and trends in energy consumption, enabling more efficient integration of renewable sources. By using techniques such as decision trees and random forests, ML models can optimize energy production and distribution, ultimately leading to more sustainable and cost-effective energy systems. The findings provide policymakers and energy planners with insights to enhance sustainability efforts.
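Phase (1), the demand-forecasting comparison, can be sketched as follows. The monthly demand figures and feature choice are fabricated for illustration and are not Bangladesh's actual data.

```python
# Comparing a single decision tree with a random forest on toy
# monthly-demand data, mirroring the study's model comparison.
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

# Features: [month index, prior-month demand (GWh)]; target: demand (GWh).
X = [[1, 900], [2, 920], [3, 950], [4, 1000], [5, 1060], [6, 1150],
     [7, 1200], [8, 1180], [9, 1100], [10, 1020], [11, 960], [12, 930]]
y = [920, 950, 1000, 1060, 1150, 1200, 1180, 1100, 1020, 960, 930, 910]

tree = DecisionTreeRegressor(random_state=0).fit(X, y)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Forecast June demand given a prior month of 1140 GWh.
print(tree.predict([[6, 1140]])[0], forest.predict([[6, 1140]])[0])
```

The forest averages many decorrelated trees, which typically smooths the single tree's piecewise-constant forecasts; that variance reduction is the usual reason the two are compared side by side.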
Solar power stands as a pivotal renewable energy source for the twenty-first century. However, the optimal functioning of solar panels is often hindered by various faults, necessitating accurate and early defect detection to maximize energy production. Existing solar panel fault identification models encounter challenges such as low precision, difficulty in distinguishing fault types, and poor generalization due to limited and unbalanced data samples. This paper introduces a novel and effective approach, leveraging a Binary Cascaded Convolutional Classifier augmented with combined visual and thermal images, to address these limitations. The proposed model classifies five distinct types of solar panel faults: single-cell hotspots, diode hotspots, dust/shadow hotspots, multicell hotspots, and Potential-Induced Degradation (PID) hotspots. Image augmentation techniques such as rotation, shifting, shearing, resizing, jittering, and blurring, applied to the visual and thermal images, increase inter-class feature variance. Binary Cascaded Convolutional Neural Network (BCCNN) classifiers are trained on the enriched dataset, each specifically designed to differentiate dust/shadow hotspots from the other fault categories. The binary approach significantly enhances precision, allowing for focused fault identification and classification. The proposed model surpasses the existing literature in precision (99.8%), accuracy (98.5%), and recall (98.4%), underscoring its effectiveness across all five fault classes. In summary, this research marks a substantial advancement in solar panel fault identification, presenting a more precise and effective fault detection methodology with the potential to significantly enhance the maintenance and longevity of solar energy systems.
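Part of the augmentation step can be sketched in plain NumPy. A real pipeline would use a library such as OpenCV or torchvision; shifting, jittering, and a box blur are shown here on a stand-in thermal image, with rotation and shearing omitted for brevity.

```python
# Minimal augmentation sketch: translate, add pixel noise, and blur a
# synthetic 32x32 "thermal image" to enlarge the training set.
import numpy as np

rng = np.random.default_rng(0)
thermal = rng.random((32, 32))  # stand-in for a normalized thermal image

def shift(img, dy, dx):
    """Translate the image, zero-filling the exposed border."""
    out = np.zeros_like(img)
    h, w = img.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def jitter(img, sigma=0.02):
    """Add Gaussian pixel noise, clipped to the valid [0, 1] range."""
    return np.clip(img + rng.normal(0, sigma, img.shape), 0, 1)

def box_blur(img):
    """3x3 mean filter built from shifted copies (zero padding at edges)."""
    acc = sum(shift(img, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return acc / 9.0

augmented = [shift(thermal, 2, -3), jitter(thermal), box_blur(thermal)]
print([a.shape for a in augmented])
```

Each transform yields a new labeled sample of the same size, which is how augmentation counters the limited and unbalanced data the abstract mentions.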
The paper conducted a comprehensive analysis of the time series of stock prices of three leading energy companies, Shell, BP, and ExxonMobil, for the period from January 2021 to January 2025. At the initial stage, data quality was checked: dates were set as indices, the absence of duplicates and missing values was confirmed, and descriptive statistics (mean, variance, skewness, and kurtosis) were calculated. Next, the trends of adjusted closing prices (AdjClose) were analysed using moving averages (SMA14, SMA50), exponential smoothing, rolling volatility (30-day standard deviation), and cumulative returns. It was found that price growth accelerated after 2022 against the background of the energy crisis caused by the war in Ukraine: ExxonMobil's cumulative return reached ≈250% by mid-2022 and ≈350% at the beginning of 2025, while Shell and BP reached ≈220% and ≈200%, respectively, by 2024. Correlation analysis showed that BP and Shell have the most significant interdependence (r = 0.87, R² = 0.75). The autocorrelation method established high non-stationarity of the time series (ACF close to one at low lags). K-Means clustering (k = 2) allowed us to distinguish periods of active growth and relative price consolidation, although the feature selection behind this clustering requires further clarification. The initially reported financial metrics (Sharpe, Sortino, and Calmar ratios) were significantly overstated due to unit errors, specifically the use of percentage values as absolute figures. After applying appropriate annualization and decimal scaling, the following performance indicators were obtained. ExxonMobil: CAGR = 36.84%, Sharpe ≈ 1.24, Sortino ≈ 1.9–2.5, Max Drawdown = 20.51%, Calmar ≈ 1.80. Shell: CAGR = 21.29%, Sharpe ≈ 0.76, Sortino ≈ 1.2–1.5, Max Drawdown = 25.04%, Calmar ≈ 0.85. BP: CAGR = 14.54%, Sharpe ≈ 0.53, Sortino ≈ 0.9–1.2, Max Drawdown = 26.23%, Calmar ≈ 0.55.
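The trend indicators listed above (SMA, rolling volatility, cumulative return) can be sketched in NumPy. The price series below is synthetic, not the actual Shell/BP/ExxonMobil data.

```python
# Trend indicators on a synthetic AdjClose series: SMA14/SMA50,
# 30-day rolling volatility of daily returns, and cumulative return.
import numpy as np

rng = np.random.default_rng(42)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, 500))  # synthetic prices

def sma(x, window):
    """Simple moving average over the trailing `window` observations."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

daily_returns = np.diff(prices) / prices[:-1]
rolling_vol = np.array([daily_returns[i - 30:i].std()
                        for i in range(30, len(daily_returns))])
cumulative_return = prices[-1] / prices[0] - 1

print(len(sma(prices, 14)), len(sma(prices, 50)), round(cumulative_return, 3))
```

Crossovers of the short and long SMAs are the usual short-term trend signal, while the rolling standard deviation tracks the volatility regimes the clustering step later separates.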
The study confirms that ExxonMobil showed the most stable and substantial growth during the examined period, while BP exhibited the highest volatility; Shell demonstrated an intermediate performance level. The close correlation between Shell and BP is attributed to the similarity of their geographical market activity and stock behaviour. These methods of analysis were chosen to assess the behaviour of stocks during a period of increased market volatility caused by the energy crisis, geopolitical risks, and shifting investor priorities. Technical analysis identifies short- and medium-term patterns, clustering automatically separates market phases without the need for subjective hypotheses, and statistical metrics enable comparison of asset performance within the industry. This research contributes to the broader field of financial analysis by demonstrating how machine learning and technical analytics tools can be applied to assess the resilience and relationships of assets during periods of market turmoil. The results can be helpful for institutional investors, financial analysts, and portfolio managers looking to adapt strategies to dynamic energy market conditions.
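The unit fix behind the corrected Sharpe, Sortino, and Calmar figures, keeping returns as decimals (0.01, not 1) and annualizing by trading days, can be made concrete. The daily returns below are synthetic, so the resulting ratios are illustrative, not the paper's values.

```python
# Annualized risk-adjusted metrics from daily returns held as decimals.
import numpy as np

rng = np.random.default_rng(1)
daily = rng.normal(0.0008, 0.015, 252 * 4)  # 4 years of synthetic daily returns

ann_return = (1 + daily).prod() ** (252 / len(daily)) - 1      # CAGR
sharpe = daily.mean() / daily.std() * np.sqrt(252)             # annualized Sharpe
downside = daily[daily < 0].std()                              # downside deviation
sortino = daily.mean() / downside * np.sqrt(252)               # annualized Sortino

equity = (1 + daily).cumprod()
max_drawdown = 1 - (equity / np.maximum.accumulate(equity)).min()
calmar = ann_return / max_drawdown

print(round(sharpe, 2), round(sortino, 2), round(max_drawdown, 3), round(calmar, 2))
```

Feeding percentage values (e.g. 1.2 instead of 0.012) into these formulas inflates the mean-to-deviation ratios exactly as the abstract describes, which is why decimal scaling must precede annualization.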
Scheduling is an NP-hard problem: exact algorithms cannot find optimal solutions within a feasible time frame, so heuristic algorithms are used to approximate them. Efficient Task Scheduling (TS) in Cloud-Fog Computing (CFC) environments is crucial for meeting the diverse resource demands of modern applications. This paper introduces the Sewing Training-Based Optimization (STBO) algorithm, a novel approach to resource-aware task scheduling that effectively balances workloads across cloud and fog resources. STBO categorizes Virtual Machines (VMs) into low, medium, and high resource-utilization queues based on their computational power and availability. By dynamically allocating tasks to these queues, STBO minimizes delays and ensures that tasks with stringent deadlines are executed in optimal environments, enhancing overall system performance. The algorithm leverages processing delays, task deadlines, and VM capabilities to assign tasks intelligently, reducing response times and improving resource utilization. Experimental results demonstrate that STBO outperforms existing scheduling algorithms, reducing makespan by 21.6%, lowering energy usage by 31%, and increasing throughput by 27.8%, making it well-suited for real-time, resource-intensive applications in CFC systems.
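The queueing idea can be sketched as follows. The utilization thresholds, VM fields, and deadline rule below are illustrative assumptions, not STBO's exact formulation.

```python
# Hypothetical sketch: bucket VMs into low/medium/high utilization queues,
# then route tight-deadline tasks to the least-utilized capable VM first.
def categorize(vms, low=0.3, high=0.7):
    """Split VMs into utilization queues by current load."""
    queues = {"low": [], "medium": [], "high": []}
    for vm in vms:
        u = vm["utilization"]
        queues["low" if u < low else "medium" if u < high else "high"].append(vm)
    return queues

def assign(task, queues):
    """Prefer lightly loaded queues for tight deadlines; pick the fastest fit."""
    order = ["low", "medium", "high"] if task["deadline"] < 1.0 \
            else ["medium", "low", "high"]
    for key in order:
        fits = [vm for vm in queues[key] if vm["mips"] >= task["mips_needed"]]
        if fits:
            return max(fits, key=lambda vm: vm["mips"])["name"]
    return None  # no VM can host the task

vms = [{"name": "fog-1", "utilization": 0.2, "mips": 1000},
       {"name": "fog-2", "utilization": 0.5, "mips": 2000},
       {"name": "cloud-1", "utilization": 0.8, "mips": 8000}]
queues = categorize(vms)
print(assign({"deadline": 0.5, "mips_needed": 800}, queues))
print(assign({"deadline": 5.0, "mips_needed": 1500}, queues))
```

STBO additionally adapts these placements with its sewing-training metaheuristic; the static rules above only show the queue structure the search operates over.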
This article introduces a novel variational approach to solving the inverse geodesic problem on a transcendental surface shaped as a cylindrical structure with a cycloidal generatrix, a type of geometry that has not previously been studied in this context. Unlike classical models that rely on symmetric surfaces such as spheres or spheroids, this method formulates the geodesic path as a functional minimization problem. By applying the Euler–Lagrange equation, an analytical integration of the corresponding second-order differential equation is achieved, resulting in a parametric expression that satisfies the boundary conditions. The effectiveness of the proposed method has been rigorously evaluated through a series of numerical experiments: analytical validation was carried out in MathCad, while simulation and three-dimensional visualization were implemented in Python. 3D visualizations of the geodesic lines are presented for multiple point pairs on the surface, demonstrating the accuracy and computational efficiency of the proposed solution. The closed-form analytical representation of the geodesic curve significantly reduces computational complexity compared with existing numerical-heuristic methods.
The obtained results offer clear advantages over existing studies in the field of computational geometry and variational calculus. Specifically, the proposed method enables the construction of geodesic curves on complex transcendental surfaces where traditional methods either fail or require intensive numerical approximation.
The analytical integration of geodesic equations enhances both accuracy and performance, achieving an average computational cost reduction of approximately 27–30% and an accuracy improvement of around 20% compared with previous models utilizing non-polynomial metrics. These enhancements are especially relevant in applications requiring real-time response and precision, such as robotics, CAD systems, computer graphics, and virtual environment simulation. The method's ability to deliver compact and exact solutions for boundary value problems positions it as a valuable contribution to both theoretical and applied sciences.
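The variational setup can be sketched for one plausible parametrization: a cycloid of radius $a$ as generatrix, extruded along the $y$-axis. This parametrization is assumed here for illustration; the paper's exact surface may differ.

```latex
% Cylinder with cycloidal generatrix (parameter t), axis along y:
%   x = a(t - \sin t), \quad z = a(1 - \cos t), \quad y \ \text{free}.
\[
  ds^2 = a^2(2 - 2\cos t)\,dt^2 + dy^2
       = 4a^2 \sin^2\!\tfrac{t}{2}\,dt^2 + dy^2 .
\]
% The geodesic between two points minimizes the arc-length functional
\[
  J[y] = \int_{t_1}^{t_2} \sqrt{\,4a^2 \sin^2\!\tfrac{t}{2} + y'^2\,}\; dt .
\]
% The integrand has no explicit y, so the Euler--Lagrange equation
% reduces to a first integral:
\[
  \frac{y'}{\sqrt{4a^2 \sin^2\!\tfrac{t}{2} + y'^2}} = c
  \quad\Longrightarrow\quad
  y' = \frac{2ac\,\sin\tfrac{t}{2}}{\sqrt{1 - c^2}}
  \qquad (0 \le t \le 2\pi),
\]
% which integrates in closed form:
\[
  y(t) = C_1 - \frac{4ac}{\sqrt{1 - c^2}}\,\cos\tfrac{t}{2},
\]
% with c and C_1 fixed by the two boundary points.
```

The absence of $y$ from the integrand is what makes the second-order geodesic equation reduce to a quadrature, consistent with the closed-form parametric expression the abstract reports.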
Various authors from around the world have extended the fuzzy concept to study uncertainty and to define degrees of certainty in various real-life experiments. At the same time, many authors have discussed the shortcomings of the existing definition of fuzzy sets. However, no author has properly highlighted the problem that fuzzy sets do not logically follow the two main laws of classical set theory. To address this issue, an imprecise set is introduced as an extended definition of fuzzy sets; the new concept applies two parameters, namely the membership and reference functions, instead of one, and defines uncertainty problems more conveniently than the existing approach. In our previous work, we studied imprecise subgroups using this new concept introduced by Baruah. In this paper, using the concept of the complement of an imprecise subgroup, we introduce the anti-imprecise subgroup and present some of its properties with examples. The imprecise subgroup is an extended version of fuzzy group theory developed using Baruah's definition of the imprecise set. In addition, we anticipate that applications developed from anti-imprecise subgroups can be used to resolve various networking problems.
The Specialized Institute of Applied Technology (ISTA) in Fes provides a vocational training course focused on heritage design to protect and promote the richness and diversity of Moroccan heritage. Currently, this course is taught in French. However, English-language resources, including CAD software, AI tools, and online courses, predominantly shape the design and new-technologies fields. This study investigates the attitudes and preferences of ISTA trainees regarding the language of instruction for heritage design training, how they perceive the integration of AI tools into their work, and the relationship between AI use and language preference in this field. The study employed a mixed-methods approach, combining quantitative survey data with qualitative insights from in-depth interviews. Responses from the institution's trainees revealed that approximately 50% do not perceive the current language of instruction (French) as a significant barrier; nonetheless, 70% expressed a preference for English-language instruction. Chi-square and Fisher's exact tests revealed no significant association between language preference and the use of artificial intelligence in heritage-related work for the current sample. Interestingly, actual use of AI software among participants is low, suggesting that while the theoretical value of AI is acknowledged, practical adoption is limited, possibly due to barriers such as lack of access to AI tools or insufficient training.
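The association test reported above can be sketched with SciPy on a 2x2 contingency table. The counts below are fabricated to mimic a no-association result and are not the study's data.

```python
# Chi-square and Fisher's exact tests on a hypothetical 2x2 table
# of language preference versus AI use.
from scipy.stats import chi2_contingency, fisher_exact

#                  uses AI   does not use AI
table = [[12, 23],   # prefers English
         [ 5, 10]]   # prefers French

chi2, p_chi, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(round(p_chi, 3), round(p_fisher, 3))
```

Fisher's exact test is the usual companion to the chi-square test here because several expected cell counts in a small sample can fall below the chi-square approximation's comfort zone.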
The escalating complexity of cybersecurity threats necessitates advanced technological solutions to protect digital infrastructures. This study explores the application of Autoencoder neural networks, a deep learning model, to anomaly detection in network traffic, aiming to enhance real-time identification of cyberattacks. Using the CICIDS2017 dataset, which encompasses diverse attack types such as Distributed Denial of Service (DDoS) and infiltration, the Autoencoder was trained to detect deviations from normal traffic patterns based on reconstruction errors. The model was optimized through preprocessing, feature selection, and hyperparameter tuning, achieving strong performance on metrics including precision, recall, F1-score, accuracy, and ROC-AUC. Despite its effectiveness in distinguishing normal from malicious traffic, challenges arose in detecting stealthy attacks such as slow brute-force attempts. These results underscore the Autoencoder's potential in cybersecurity frameworks and highlight opportunities for improvement through adaptive thresholds and hybrid models. This study contributes to advancing AI-driven anomaly detection, promoting proactive defense against evolving cyber threats.
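The reconstruction-error logic can be sketched with a linear autoencoder (equivalent to PCA) fitted on "normal" traffic only, flagging samples whose error exceeds a percentile threshold. Synthetic features replace the real CICIDS2017 data here, and the deep nonlinear encoder is simplified to a rank-3 linear one.

```python
# Train on normal traffic, score everything by reconstruction error,
# and flag samples above the 95th-percentile threshold of normal error.
import numpy as np

rng = np.random.default_rng(7)
normal = rng.normal(0, 1, (500, 10)) @ rng.normal(0, 1, (10, 10))  # correlated "normal"
attack = rng.normal(0, 6, (20, 10))                                # off-distribution "attacks"

# Rank-3 linear autoencoder fitted on normal traffic via SVD.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]

def reconstruction_error(x):
    """Squared error between each sample and its encode-decode round trip."""
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return ((x - x_hat) ** 2).sum(axis=1)

threshold = np.percentile(reconstruction_error(normal), 95)
flags = reconstruction_error(attack) > threshold
print(f"{flags.mean():.0%} of attack samples flagged")
```

Stealthy attacks such as slow brute-force attempts sit close to the normal manifold, so their reconstruction error stays under the threshold, which is the failure mode the abstract reports and the motivation for adaptive thresholds.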