IJMSC Vol. 2, No. 3, Jul. 2016
Cover page and Table of Contents: PDF (size: 194KB)
REGULAR PAPERS
The main trends in information retrieval, its personalization and semantization, are analyzed. Sources of knowledge about the main subjects and objects of the search process are considered. An ontological model of interaction between Web information resources and information consumers is proposed as a basis for search personalization. Methods for developing, improving, and using this model are defined. User characteristics are supplemented with socio-psycho-physiological properties and ontologically personalized readability criteria. A software realization of semantic search based on this ontological approach is described.
Cross-docks play an important role in goods distribution. In most common models, vehicle capacity is not fully utilized because each node is assumed to be visited by only one vehicle. In addition, owing to the high cost of purchasing high-capacity vehicles, rental vehicles are used in the pickup section. In this paper, a novel mathematical model is presented in which each node may be visited by several vehicles (splitting). The proposed model also allows open routes in the pickup section. A meta-heuristic method based on the simulated annealing algorithm is then developed with two different approaches. To test the performance of the proposed algorithm, the obtained results were compared with exact solutions at both small and large scales. The outcomes show that the algorithm works properly.
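The simulated-annealing skeleton behind such a meta-heuristic can be sketched as follows; this is a minimal illustration, assuming a geometric cooling schedule and user-supplied cost() and neighbour() functions that encode the split-pickup, open-route solution representation (all names and parameter values here are illustrative, not the paper's).

```python
import math
import random

def simulated_annealing(initial_solution, cost, neighbour,
                        t_start=1000.0, t_end=1e-3, alpha=0.95,
                        iters_per_temp=50):
    """Generic simulated-annealing loop; cost() and neighbour() must
    encode the split-pickup, open-route solution representation."""
    current = initial_solution
    best = current
    t = t_start
    while t > t_end:
        for _ in range(iters_per_temp):
            candidate = neighbour(current)
            delta = cost(candidate) - cost(current)
            # Always accept improvements; accept worse moves with
            # probability exp(-delta / t) (Metropolis criterion).
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha  # geometric cooling schedule (an assumption)
    return best
```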
Subgame Perfect Equilibrium (SGPE) is a refined version of Nash equilibrium used in games of a sequential nature. The computational complexity of classical approaches to computing SGPE grows exponentially with the height of the game tree. In this paper, we present a quantum algorithm based on a discrete-time quantum walk to compute Subgame Perfect Equilibrium (SGPE) in a finite two-player sequential game. A full-width game tree of average branching factor b and height h has (b^(h+1) - 1)/(b - 1) nodes in it. The proposed algorithm uses O(b^(h/2)) oracle queries to backtrack to the solution. The resultant speed-up is O(b^(h/2)) times better than the best known classical approach, Zermelo's algorithm.
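For contrast with the quantum approach, the classical baseline named in the abstract, Zermelo's backward induction, evaluates every node of the tree, which is where the exponential cost comes from. A minimal sketch, assuming a simple tuple encoding of the game tree (the encoding and payoffs are made up for illustration):

```python
def sgpe_value(node):
    """Backward induction (Zermelo's algorithm) on a two-player,
    perfect-information game tree.

    A node is ("leaf", (p1, p2)) or ("node", player, children),
    with player in {0, 1}. Returns the payoff pair reached under
    subgame perfect play."""
    if node[0] == "leaf":
        return node[1]
    player, children = node[1], node[2]
    # Each player maximises their own component of the payoff pair.
    return max((sgpe_value(c) for c in children),
               key=lambda payoffs: payoffs[player])

# Illustrative two-level tree: player 0 moves first, then player 1.
tree = ("node", 0, [
    ("node", 1, [("leaf", (3, 1)), ("leaf", (1, 2))]),
    ("node", 1, [("leaf", (2, 1)), ("leaf", (0, 0))]),
])
print(sgpe_value(tree))  # -> (2, 1): player 0 avoids the branch where player 1 would pick (1, 2)
```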
Feature selection is one of the issues raised in machine learning and statistical pattern recognition. We first provide a definition of feature selection and the notions needed to understand the problem. Different methods are then classified according to how candidate feature subsets are generated and how they are evaluated. Since previous studies may not have taken the diversity of the evaluation data into consideration, we propose a new approach for assessing the similarity of data in order to understand the relationship between data diversity and the stability of the selected features. After reviewing and implementing several meta-heuristic algorithms, we found that the clustering-based algorithm offers better performance than the other algorithms for stable feature selection.
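A common way to quantify the stability of feature selection across runs is the average pairwise Jaccard similarity of the chosen subsets; a minimal sketch (the metric choice is an assumption for illustration, not necessarily the measure used in the paper):

```python
from itertools import combinations

def stability(feature_subsets):
    """Average pairwise Jaccard similarity of the feature subsets
    selected across runs; 1.0 means identical selections every run."""
    pairs = list(combinations(feature_subsets, 2))
    if not pairs:
        return 1.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

# e.g. feature indices chosen in three runs of a selector
runs = [{0, 2, 5}, {0, 2, 7}, {0, 2, 5}]
print(round(stability(runs), 3))  # -> 0.667
```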
In this article our main aim is to revisit the definitions of fuzzy point and fuzzy quasi-coincidence in fuzzy topology as accepted in the literature of fuzzy set theory. We analyse some results and prove some propositions with the extended definition of complementation of fuzzy sets on the basis of the reference function, and some new definitions are introduced wherever possible. The main effort in this work is to show that the existing definitions of the complement of a fuzzy point and of fuzzy quasi-coincidence are not acceptable.
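For reference, the standard textbook definitions under revision here can be stated as follows (this restates the widely used definitions due to Pu and Liu, not the paper's extended reference-function versions):

```latex
% Fuzzy point: x_\lambda has membership \lambda at x and 0 elsewhere.
x_{\lambda}(y) =
  \begin{cases}
    \lambda, & y = x,\\
    0,       & y \neq x,
  \end{cases}
  \qquad \lambda \in (0, 1].

% Quasi-coincidence: x_\lambda is quasi-coincident with a fuzzy set A
% (written x_\lambda \, q \, A) if and only if
\lambda + \mu_A(x) > 1 .
```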
A newly developed software system is subjected to rigorous testing before its deployment so as to minimize the probability of early failure. Software for safety-critical and mission-critical application areas needs an even more focused level of testing. The testing process is carried out to build confidence in the software for its use in real-world applications; thus, the reliability of systems is always a matter of concern. As the error detection and correction process is performed on the software, the reliability of the system grows. To model this growth in reliability, many Software Reliability Growth Models (SRGMs) have been proposed, including some based on the Non-Homogeneous Poisson Process (NHPP). Realistic assumptions about human learning behavior and the experiential gain of new skill-sets for better detection and correction of faults are being incorporated and studied in such models. In this paper, a detailed analysis of selected SRGMs with learning effects is presented using seven data sets. Parameter estimation and a comparative analysis based on goodness of fit over the seven data sets are presented, and model comparisons on the basis of the total defects predicted by the selected models are also tabulated.
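As one concrete instance of the NHPP-based SRGMs analysed, the Goel-Okumoto model has mean value function m(t) = a(1 - e^(-bt)), where a is the expected total number of faults and b the detection rate. A minimal fitting sketch with scipy on made-up data (the data and starting values are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """NHPP mean value function: expected cumulative faults by time t."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical testing data: week number vs cumulative faults found.
t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
faults = np.array([12, 21, 28, 33, 37, 40, 42, 43], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, faults, p0=(50.0, 0.3))
print(f"estimated total defects a = {a_hat:.1f}, detection rate b = {b_hat:.2f}")
```

Learning effects are commonly captured by S-shaped variants such as the inflection S-shaped model, m(t) = a(1 - e^(-bt))/(1 + βe^(-bt)), which reduces to Goel-Okumoto as β approaches 0.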