IJISA Vol. 13, No. 1, Feb. 2021
Cover page and Table of Contents: PDF (size: 283KB)
REGULAR PAPERS
Determining resource requirements at airports, especially for ground-services companies, is essential to successful future planning. These requirements are represented as a resource demand curve derived from the future flight schedule, from which staff schedules are created to cover the workload while ensuring the highest possible quality of service. Because the service level agreements applied to a flight vary with many flight features, estimating resources by assumption makes planning difficult. For instance, flight position is not included in the future flight schedule, yet it strongly influences the resources a flight requires. In this regard, we propose a machine-learning-based model for building a resource demand curve for future flight schedules. The model has two phases: the first uses machine learning to predict the service-level-agreement resources required by flights in the future schedule, and the second applies a resource allocation algorithm to build a demand curve from the predicted resources. The proposal is applicable to airports and yields an efficient and realistic resource demand curve, ensuring that resource planning does not deviate from real-time resource requirements. The model showed good accuracy when one day of flights was used to measure the deviation between the demand curve predicted by the model (with the location feature withheld) and the actual demand curve (with location included).
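As a rough sketch of the two-phase model, the Python fragment below trains a regressor on historical flights to predict per-flight staff requirements and then accumulates the predictions into a time-bucketed demand curve. The feature set, the 15-minute bucket, the 60-minute turnaround, and the synthetic data are illustrative assumptions, not the paper's actual design.

    # Sketch: phase 1 predicts per-flight staff needs, phase 2 builds the curve.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Phase 1: learn resource needs from historical flights. Columns are
    # assumed features known at planning time (aircraft type code,
    # scheduled hour, international flag); the data here is synthetic.
    X_hist = rng.integers(0, 10, size=(500, 3))
    y_hist = X_hist[:, 0] * 2 + X_hist[:, 2] * 3 + rng.integers(0, 3, 500)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_hist, y_hist)

    # Predict staff counts for the future schedule (position feature absent).
    X_future = rng.integers(0, 10, size=(50, 3))
    staff_needed = model.predict(X_future).round().astype(int)

    # Phase 2: allocate predictions onto a demand curve in 15-minute
    # buckets spanning each flight's assumed 60-minute turnaround.
    BUCKET_MIN, TURNAROUND_MIN = 15, 60
    demand = np.zeros(24 * 60 // BUCKET_MIN, dtype=int)
    starts = rng.integers(0, 24 * 60 - TURNAROUND_MIN, size=50)
    for start, staff in zip(starts, staff_needed):
        demand[start // BUCKET_MIN : (start + TURNAROUND_MIN) // BUCKET_MIN + 1] += staff

    print("peak simultaneous staff demand:", demand.max())

Planning rosters against the peak of such a curve, rather than a flat headcount, is what lets staffing track the predicted workload.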
In the present paper, we introduce a generalized measure of 'useful' R-norm inaccuracy with two parameters, and its analogue, a 'useful' R-norm total ambiguity measure, by merging the concepts of probability, fuzziness, R-norm, 'useful' information, and inaccuracy. Along with the basic properties, some other important properties of the two proposed measures are established. These measures generalize several well-known inaccuracy measures. Further, the monotonic behaviour of the proposed 'useful' R-norm inaccuracy measures is studied and a graphical overview is given. The measure of information improvement is obtained for both measures. Lastly, an application of the 'useful' R-norm total ambiguity measure to multi-criteria decision making is presented. All numerical calculations were carried out in R.
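For reference, the classical R-norm information measure of Boekee and Van der Lubbe, which the proposed two-parameter 'useful' measures generalize (the paper's exact two-parameter form is not reproduced here), is

    H_R(P) = \frac{R}{R-1}\Big[1 - \Big(\sum_{i=1}^{n} p_i^{R}\Big)^{1/R}\Big], \qquad R > 0,\ R \neq 1,

which tends to the Shannon entropy -\sum_{i} p_i \log p_i as R \to 1; the Kerridge inaccuracy H(P;Q) = -\sum_{i} p_i \log q_i is the base inaccuracy notion being extended.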
The k-Nearest Neighbor classifier is a simple and widely applied data classification algorithm that performs well in real-world applications. Its overall classification accuracy largely depends on the choice of the number of nearest neighbors (k). A constant k value does not always yield the best solution, especially for real-world datasets with irregular class and density distributions, because it ignores the class and density distribution of each test point's k-neighborhood. One resolution is to choose k dynamically for each test instance to be classified. However, on a large dataset it becomes very costly to maximize k-Nearest Neighbor performance by tuning k. This work proposes Simulated Annealing, a metaheuristic search algorithm, to select an optimal k, eliminating the need for an exhaustive search. Results on four classification tasks demonstrate a significant improvement in computational efficiency over k-Nearest Neighbor methods that search for k exhaustively: accurate nearest neighbors are returned faster, reducing computation time.
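A minimal sketch of the idea follows: simulated annealing walks over candidate values of k, using cross-validated accuracy as the objective. The neighbourhood move, cooling schedule, and iris dataset are illustrative assumptions rather than the paper's exact settings.

    # Sketch: simulated annealing over k with 5-fold CV accuracy as objective.
    import math, random
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    random.seed(0)

    def accuracy(k):
        return cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()

    k = best_k = 5
    score = best_score = accuracy(k)
    temp = 1.0
    while temp > 1e-3:
        # Propose a nearby k, capped so it stays below the training-fold size.
        cand = max(1, min(50, k + random.choice([-3, -2, -1, 1, 2, 3])))
        cand_score = accuracy(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand_score > score or random.random() < math.exp((cand_score - score) / temp):
            k, score = cand, cand_score
            if score > best_score:
                best_k, best_score = k, score
        temp *= 0.9  # geometric cooling

    print(f"selected k = {best_k}, CV accuracy = {best_score:.3f}")

Only a few dozen candidate values of k are ever evaluated, which is where the saving over an exhaustive sweep comes from.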
Nowadays signatures are commonly used for personal verification, which justifies the need for an Automatic Verification System (AVS). Depending on the application, verification can be performed either offline or online. An online system uses the signature's dynamic information, captured at the instant the signature is made; an offline system, on the other hand, uses a scanned image of the signature. In this paper, a set of simple geometric features is used for offline verification of signatures. These features include the Baseline Slant Angle (BSA), Aspect Ratio (AR), Normalized Area (NA), Center of Gravity, and the slope of the line joining the Centers of Gravity of the two halves of the signature image. Before feature extraction, the signature is preprocessed to segregate its parts and eliminate any spurious noise. The system is trained on a set of signatures acquired from the individuals whose signatures are to be authenticated. For each subject, an average signature is computed by combining the aforementioned features extracted from a sample set of the subject's genuine signatures; this average serves as the prototype against which a claimed test signature is authenticated. Similarity between two signatures is measured as the Euclidean distance in feature space. If the distance is below a set threshold (analogous to a minimum acceptable degree of similarity), the test signature is certified as belonging to the claimed subject; otherwise it is flagged as a forgery. Details of the features, preprocessing, implementation, and results are presented in this work.
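The accept/reject rule described above can be sketched as follows; the simplified feature set (omitting the slant-angle features), the threshold value, and the toy data are assumptions for illustration only.

    # Sketch: mean feature vector as prototype, Euclidean distance decision.
    import numpy as np

    def features(img):
        # img: 2-D boolean array, True where signature ink is present.
        ys, xs = np.nonzero(img)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        aspect_ratio = w / h
        norm_area = img.sum() / (h * w)          # ink pixels over bounding box
        cgx, cgy = xs.mean() / w, ys.mean() / h  # normalised centre of gravity
        return np.array([aspect_ratio, norm_area, cgx, cgy])

    def enroll(samples):
        # Prototype = average feature vector over the subject's genuine samples.
        return np.mean([features(s) for s in samples], axis=0)

    def verify(prototype, test_img, threshold=0.5):
        # Accept if the test signature lies close enough to the prototype.
        return np.linalg.norm(features(test_img) - prototype) < threshold

    # Toy usage with random binary "images"; real use loads scanned signatures.
    rng = np.random.default_rng(1)
    genuine = [rng.random((60, 200)) < 0.2 for _ in range(5)]
    proto = enroll(genuine)
    print(verify(proto, genuine[0]))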
Outlier detection is one of the important tasks in data mining, and detecting outliers over streaming data has become important in many applications, such as network analysis, fraud detection, and environment monitoring. Local Outlier Factor (LOF) is one of the best-known outlier detection algorithms, but the original LOF has drawbacks that make it unsuitable for data streams: (1) it needs considerable processing power (CPU) and memory to detect outliers; (2) it operates on static data, so any change in the data forces LOF to recompute the outliers from scratch over the whole dataset. These drawbacks pose big challenges to the accuracy of existing outlier detection algorithms when they are deployed in a streaming environment. In this paper, we propose a new algorithm, GSILOF, which detects outliers in data streams using a genetic algorithm. GSILOF solves the memory problem by operating within a fixed memory bound, and it has two phases: a summarization phase that summarizes the data that has arrived so far, and a detection phase that detects outliers in newly arriving data. The summarization phase uses a genetic algorithm to find a subset of points that can represent the whole original set. Our experiments on real-world streaming datasets confirm the effectiveness of the proposed approach and the high quality of its approximate solutions.
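A hedged sketch of this two-phase structure: a genetic algorithm evolves a fixed-size subset that summarizes past points, and LOF then scores each newly arriving batch against that summary. The fitness function, mutation operator, and all parameters below are assumptions, not the exact GSILOF design.

    # Sketch: GA summarization of past data, then LOF detection on new data.
    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

    rng = np.random.default_rng(0)
    past = rng.normal(size=(1000, 2))  # points already seen on the stream
    SUMMARY_SIZE, POP, GENS = 100, 20, 30

    def fitness(idx):
        # A subset is fitter when every past point lies close to some
        # member of the subset, i.e. the subset represents the whole set.
        nn = NearestNeighbors(n_neighbors=1).fit(past[idx])
        dist, _ = nn.kneighbors(past)
        return -dist.mean()

    def mutate(idx):
        # Swap three subset members for random points outside the subset.
        members = set(idx.tolist())
        outside = np.array(list(set(range(len(past))) - members))
        for victim in rng.choice(idx, 3, replace=False):
            members.remove(int(victim))
        members.update(int(i) for i in rng.choice(outside, 3, replace=False))
        return np.array(sorted(members))

    # Phase 1 (summarization): evolve fixed-size index subsets, keeping the
    # memory bound constant regardless of how much data has streamed past.
    pop = [rng.choice(len(past), SUMMARY_SIZE, replace=False) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: POP // 2] + [mutate(p) for p in pop[: POP // 2]]
    summary = past[max(pop, key=fitness)]

    # Phase 2 (detection): score a newly arriving batch against the summary.
    new_batch = np.vstack([rng.normal(size=(50, 2)), [[6.0, 6.0]]])  # planted outlier
    labels = LocalOutlierFactor(n_neighbors=20).fit_predict(np.vstack([summary, new_batch]))
    print("outlier rows in new batch:", np.where(labels[len(summary):] == -1)[0])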