WorldWideScience

Sample records for tabu search based

  1. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    Full Text Available One of the techniques used for Multiple Criteria Decision Making (MCDM) is the permutation method. In its classical form, the weights and decision matrix components are assumed to be crisp. However, when group decision making is under consideration and the decision makers cannot agree on crisp values for the weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of the permutation method is its large computational time, so a Tabu Search (TS) based algorithm is proposed to reduce it. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. Analysis of the results shows the good performance of the proposed method.
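
The TS loop common to the records on this page can be sketched in a few lines: a best-improvement move over a neighborhood, a short-term memory that forbids undoing recent moves, and an aspiration rule that overrides the memory when a move beats the best solution found. The permutation objective below is a hypothetical toy ranking distance, not this paper's fuzzy MCDM scoring:

```python
def tabu_search_permutation(cost, n, iters=200, tenure=5):
    """Minimal tabu search over permutations using adjacent-swap moves.
    Recently reversed swaps are tabu unless they beat the best cost
    found so far (aspiration by objective)."""
    current = list(range(n))
    best, best_cost = current[:], cost(current)
    tabu = {}  # swapped value pair -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n - 1):
            move = (current[i], current[i + 1])
            neighbor = current[:]
            neighbor[i], neighbor[i + 1] = neighbor[i + 1], neighbor[i]
            c = cost(neighbor)
            if tabu.get(move, -1) >= it and c >= best_cost:
                continue  # tabu and not good enough to aspirate
            candidates.append((c, neighbor, move))
        if not candidates:
            continue
        c, current, move = min(candidates, key=lambda t: t[0])
        tabu[(move[1], move[0])] = it + tenure  # forbid undoing the swap
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

# toy objective (not the paper's fuzzy scoring): distance of each item
# from its position in a hypothetical target ranking
target = [3, 1, 4, 0, 2]
cost = lambda p: sum(abs(p.index(v) - i) for i, v in enumerate(target))
perm, c = tabu_search_permutation(cost, 5)
```

Note that, unlike plain hill climbing, the loop always moves to the best admissible neighbor even when it is worse than the current solution; the tabu memory is what prevents cycling back.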

  2. An Improved Tabu Search Algorithm Based on Grid Search Used in the Antenna Parameters Optimization

    OpenAIRE

    He, Di; Hong, Yunlv

    2015-01-01

    In mobile systems covering large areas, many small cells are often used, and the base antenna's azimuth angle, vertical downtilt angle, and transmit power are the most important parameters affecting the coverage of an antenna. This paper builds a mathematical model and analyzes the performance of different algorithms on it. Finally, we propose an improved Tabu search algorithm based on grid search to obtain the best antenna parameters, maximizing the coverage area and minimizing the interferenc...

  3. An Improved Tabu Search Algorithm Based on Grid Search Used in the Antenna Parameters Optimization

    Directory of Open Access Journals (Sweden)

    Di He

    2015-01-01

    Full Text Available In mobile systems covering large areas, many small cells are often used, and the base antenna's azimuth angle, vertical downtilt angle, and transmit power are the most important parameters affecting the coverage of an antenna. This paper builds a mathematical model and analyzes the performance of different algorithms on it. Finally, we propose an improved Tabu search algorithm based on grid search to obtain the best antenna parameters, maximizing the coverage area and minimizing the interference.

  4. Utilization of Tabu search heuristic rules in sampling-based motion planning

    Science.gov (United States)

    Khaksar, Weria; Hong, Tang Sai; Sahari, Khairul Salleh Mohamed; Khaksar, Mansoor

    2015-05-01

    Path planning in unknown environments is one of the most challenging research areas in robotics. In this class of path planning, the robot acquires its information from its sensory system. Sampling-based path planning is one of the well-known approaches, with low memory and computational requirements, that has been studied by many researchers during the past few decades. We propose a sampling-based algorithm for path planning in unknown environments using Tabu search. The Tabu search component of the proposed method guides the sampling toward the most promising areas and makes the sampling procedure more intelligent. The simulation results show the efficient performance of the proposed approach in different types of environments. We also compare the performance of the algorithm with some well-known path planning approaches, including Bug1, Bug2, PRM, RRT and the Visibility Graph. The comparison results support the claim of superiority of the proposed algorithm.

  5. Neural Based Tabu Search method for solving unit commitment problem with cooling-banking constraints

    Directory of Open Access Journals (Sweden)

    Rajan Asir Christober Gnanakkan Charles

    2009-01-01

    Full Text Available This paper presents a new approach to solving the short-term unit commitment problem (UCP) using a Neural Based Tabu Search (NBTS) with cooling and banking constraints. The objective of this paper is to find a generation schedule such that the total operating cost is minimized subject to a variety of constraints; in other words, to find the optimal generating unit commitment in the power system for the next H hours. A 7-unit utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 10, 26 and 34 units. Numerical results are shown comparing the cost solutions against the Tabu Search (TS), Dynamic Programming (DP) and Lagrangian Relaxation (LR) methods in reaching a proper unit commitment.

  6. Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication

    Science.gov (United States)

    Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei

    2018-01-01

    This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and 3-D indoor positioning can be transformed into an optimal solution problem; therefore, the optimal receiver coordinate can be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts a unique identity (ID) and transmits the ID information. When the receiver detects optical signals with ID information from different LEDs, the global optimization of the Tabu search algorithm realizes 3-D high-precision indoor positioning once the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm and the maximum error is 5.88 cm. An extended trajectory-tracking experiment also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from these data that 3-D indoor positioning based on the Tabu search algorithm meets the requirements of centimeter-level indoor positioning. The algorithm is effective and practical, and is superior to other existing methods for visible light indoor positioning.
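
The abstract does not give the paper's exact fitness function; a common formulation, sketched here under that assumption, minimizes the squared mismatch between the LED-to-receiver distances implied by a candidate coordinate and the distances estimated from the received signals. The LED layout, step size, and receiver position below are hypothetical:

```python
import math

def tabu_position_search(leds, dists, start, step=0.1, iters=500, tenure=10):
    """Tabu search over a 3-D coordinate lattice: fitness is the squared
    mismatch between measured LED distances and distances from a candidate
    point. Reverse steps are tabu unless they beat the best fitness."""
    def fitness(p):
        return sum((math.dist(p, l) - d) ** 2 for l, d in zip(leds, dists))
    moves = [(dx, dy, dz) for dx in (-step, 0, step)
             for dy in (-step, 0, step) for dz in (-step, 0, step)
             if (dx, dy, dz) != (0, 0, 0)]
    current, best = start, start
    best_fit = fitness(start)
    tabu = {}  # move -> iteration until which it is tabu
    for it in range(iters):
        cands = []
        for m in moves:
            p = tuple(c + d for c, d in zip(current, m))
            f = fitness(p)
            if tabu.get(m, -1) >= it and f >= best_fit:
                continue  # tabu unless it beats the best (aspiration)
            cands.append((f, p, m))
        if not cands:
            break
        f, current, m = min(cands)
        tabu[tuple(-d for d in m)] = it + tenure  # forbid the reverse step
        if f < best_fit:
            best, best_fit = current, f
    return best, best_fit

# hypothetical ceiling LEDs at 2.5 m and a true receiver at (1.0, 2.0, 0.8);
# noiseless distances stand in for the signal-derived estimates
leds = [(0, 0, 2.5), (4, 0, 2.5), (0, 4, 2.5), (4, 4, 2.5)]
true = (1.0, 2.0, 0.8)
dists = [math.dist(true, l) for l in leds]
pos, fit = tabu_position_search(leds, dists, start=(2.0, 2.0, 1.0))
```

With noiseless distances the search settles near the true coordinate; with real signal noise the fitness threshold mentioned in the abstract would decide when to stop.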

  7. Optimization of fuel cells for BWR based in Tabu modified search

    International Nuclear Information System (INIS)

    Martin del Campo M, C.; Francois L, J.L.; Palomera P, M.A.

    2004-01-01

    Advances in the development of a computational system for the design and optimization of fuel assembly cells for Boiling Water Reactors (BWR) are presented. The optimization method is based on the Tabu Search (TS) technique, implemented in progressive stages designed to accelerate the search and reduce the time used in the optimization process. An algorithm was programmed to create the initial solution. In addition, to diversify the generation of the random numbers required by the TS technique, the Makoto Matsumoto function was used, obtaining excellent results. The objective function has been coded in such a way that it can be adapted to optimize different parameters, such as the average enrichment or the radial power peaking factor. The neutronic evaluation of the cells is carried out in detail by means of the HELIOS simulator. The main characteristics of the system are described, and an application example is presented: the design of a cell of 10x10 fuel rods with 10 different enrichment compositions and gadolinium content. (Author)

  8. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    Science.gov (United States)

    Bagis, Aytekin

    2008-01-01

    This paper presents an approach to fuzzy rule base design using the tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve the structure and the parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for determining the fuzzy rule base parameters, leads to a significant improvement in the performance of the model. To demonstrate the effectiveness of the presented method, several numerical examples from the literature are examined. The results obtained with the identified fuzzy rule bases are compared with those of other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides an effective modeling procedure for fuzzy rule base design in the modeling of nonlinear or complex systems.

  9. Tabu Search-based Synthesis of Digital Microfluidic Biochips with Dynamically Reconfigurable Non-rectangular Devices

    DEFF Research Database (Denmark)

    Maftei, Elena; Pop, Paul; Madsen, Jan

    2010-01-01

    rectangular. In this paper, we present a Tabu Search metaheuristic for the synthesis of digital microfluidic biochips, which, starting from a biochemical application and a given biochip architecture, determines the allocation, resource binding, scheduling and placement of the operations in the application...

  10. Optimized Aircraft Electric Control System Based on Adaptive Tabu Search Algorithm and Fuzzy Logic Control

    Directory of Open Access Journals (Sweden)

    Saifullah Khalid

    2016-09-01

    Full Text Available Three conventional control techniques for extracting reference currents for shunt active power filters, namely constant instantaneous power control, sinusoidal current control, and synchronous reference frame, have been optimized using Fuzzy Logic control and the Adaptive Tabu Search algorithm, and their performances have been compared. A critical comparison of the compensation ability of the different control strategies, based on THD and speed, is carried out, and suggestions are given for the selection of the technique to be used. Simulated results using a MATLAB model are presented, and they clearly prove the value of the proposed control method for the aircraft shunt APF. The waveforms observed after the application of the filter have harmonics within limits, and the power quality is improved.

  11. Optimization of fuel cells for BWR based in Tabu modified search; Optimizacion de celdas de combustible para BWR basada en busqueda Tabu modificada

    Energy Technology Data Exchange (ETDEWEB)

    Martin del Campo M, C.; Francois L, J.L. [Facultad de Ingenieria, UNAM, Laboratorio de Analisis en Ingenieria de Reactores Nucleares, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico); Palomera P, M.A. [Facultad de Ingenieria, UNAM, Posgrado en Ingenieria en Computacion, Circuito exterior s/n, Ciudad Universitaria, Mexico, D.F. (Mexico)]. e-mail: cmcm@fi-b.unam.mx

    2004-07-01

    Advances in the development of a computational system for the design and optimization of fuel assembly cells for Boiling Water Reactors (BWR) are presented. The optimization method is based on the Tabu Search (TS) technique, implemented in progressive stages designed to accelerate the search and reduce the time used in the optimization process. An algorithm was programmed to create the initial solution. In addition, to diversify the generation of the random numbers required by the TS technique, the Makoto Matsumoto function was used, obtaining excellent results. The objective function has been coded in such a way that it can be adapted to optimize different parameters, such as the average enrichment or the radial power peaking factor. The neutronic evaluation of the cells is carried out in detail by means of the HELIOS simulator. The main characteristics of the system are described, and an application example is presented: the design of a cell of 10x10 fuel rods with 10 different enrichment compositions and gadolinium content. (Author)

  12. A heuristic algorithm based on tabu search for vehicle routing problems with backhauls

    Directory of Open Access Journals (Sweden)

    Jhon Jairo Santa Chávez

    2017-07-01

    Full Text Available In this paper, a heuristic algorithm based on a Tabu Search approach for solving the Vehicle Routing Problem with Backhauls (VRPB) is proposed. The problem considers a set of customers divided into two subsets: Linehaul and Backhaul customers. Each Linehaul customer requires the delivery of a given quantity of product from the depot, whereas a given quantity of product must be picked up from each Backhaul customer and transported to the depot. In the proposed algorithm, each route consists of one sub-route in which only the delivery task is done and one sub-route in which only the collection process is performed. The search process obtains a proper order in which to visit all the customers on each sub-route. In addition, the proposed algorithm determines the best connections among the sub-routes in order to obtain a global solution with the minimum traveling cost. The efficiency of the algorithm is evaluated on a set of benchmark instances taken from the literature. The results show that the computing times are greatly reduced while maintaining high-quality solutions. Finally, conclusions and suggestions for future work are presented.
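
The route structure described above, a delivery sub-route followed by a collection sub-route, both anchored at the depot, can be illustrated with a small cost evaluation. The coordinates and Euclidean distance below are illustrative assumptions, not the paper's benchmark data:

```python
import math

def route_cost(depot, linehaul, backhaul, dist):
    """Cost of one VRPB route: depot -> all linehaul (delivery) customers
    -> all backhaul (pickup) customers -> depot, in the given visit order."""
    stops = [depot] + linehaul + backhaul + [depot]
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

# hypothetical planar instance with Euclidean distances
dist = lambda a, b: math.dist(a, b)
depot = (0, 0)
cost = route_cost(depot, [(1, 0), (2, 0)], [(2, 1), (0, 1)], dist)
```

A tabu search over such routes would reorder the customers inside each sub-route and reconnect sub-routes, re-evaluating this cost for each neighbor.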

  13. A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems

    Directory of Open Access Journals (Sweden)

    Xuhao Zhang

    2014-01-01

    Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during successive optimizations of current solutions and then taken as skeletons so as to improve the optimum-seeking ability, speed up the optimization process, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. By adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link split and the roulette approach are employed to choose FLs. Eighteen LSMDVRP instances in three groups are studied, and new optimal solution values are obtained for nine of them, with higher computation speed and reliability.

  14. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    Science.gov (United States)

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy, but this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes this weakness of the classical BBO algorithm on QAP by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them. PMID:26819585

  15. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem.

    Science.gov (United States)

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy, but this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes this weakness of the classical BBO algorithm on QAP by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them.

  16. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    Directory of Open Access Journals (Sweden)

    Wee Loon Lim

    2016-01-01

    Full Text Available The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy, but this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes this weakness of the classical BBO algorithm on QAP by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them.

  17. A tabu search approach for the NMR protein structure-based assignment problem.

    Science.gov (United States)

    Cavuşlar, Gizem; Çatay, Bülent; Apaydın, Mehmet Serkan

    2012-01-01

    Nuclear magnetic resonance (NMR) spectroscopy is an experimental technique which exploits the magnetic properties of specific nuclei and enables the study of proteins in solution. The key bottleneck of NMR studies is mapping the NMR peaks to the corresponding nuclei, also known as the assignment problem. Structure-Based Assignment (SBA) is an approach to solving this computationally challenging problem by using prior information about the protein obtained from a homologous structure. NVR-BIP used the Nuclear Vector Replacement (NVR) framework to model SBA as a binary integer programming problem. In this paper, we prove that this problem is NP-hard and propose a tabu search (TS) algorithm (NVR-TS) equipped with a guided perturbation mechanism to solve it efficiently. NVR-TS uses a quadratic penalty relaxation of NVR-BIP in which violations of the Nuclear Overhauser Effect constraints are penalized in the objective function. Experimental results indicate that our algorithm finds the optimal solution on NVR-BIP's data set, which consists of seven proteins with 25 templates (31 to 126 residues). Furthermore, it achieves relatively high assignment accuracies on two additional large proteins, MBP and EIN (348 and 243 residues, respectively), which NVR-BIP failed to solve. The executable and the input files are available for download at http://people.sabanciuniv.edu/catay/NVR-TS/NVR-TS.html.

  18. Ship Collision Avoidance by Distributed Tabu Search

    Directory of Open Access Journals (Sweden)

    Dong-Gyun Kim

    2015-03-01

    Full Text Available More than 90% of world trade is transported by sea, and the size and speed of ships are rapidly increasing in order to boost economic efficiency. If ships collide, the damage and cost can be astronomical. It is very difficult for officers to ascertain routes that will avoid collisions, especially when multiple ships travel the same waters. There are several ways to prevent ship collisions, such as lookouts, radar, and VHF radio, and more advanced methodologies, such as ship domain, fuzzy theory, and genetic algorithms, have been proposed. These methods work well in one-on-one situations but are more difficult to apply in multiple-ship situations. Therefore, in a previous study we proposed the Distributed Local Search Algorithm (DLSA) to avoid ship collisions. DLSA is a distributed algorithm in which multiple ships communicate with each other within a certain area and compute collision risk based on the information received from neighboring ships. However, DLSA suffers from the Quasi-Local Minimum (QLM), which prevents a ship from changing course even when a collision risk arises. In this study, we developed the Distributed Tabu Search Algorithm (DTSA). DTSA uses a tabu list to escape from QLM, and also exploits a modified cost function and an enlarged domain of next-intended courses to increase its efficiency. We conducted experiments to compare the performance of DLSA and DTSA. The results showed that DTSA outperformed DLSA.

  19. Effective speeding up strategies for tabu search heuristics

    NARCIS (Netherlands)

    Lai, David S.W.; Dullaert, Wout

    2015-01-01

    Tabu search metaheuristics have been developed for decades, making them one of the most widely applied metaheuristic frameworks. Although a standard tabu search framework can offer interesting performance, the effectiveness of a tabu search metaheuristic can be strongly improved by specific

  20. Tabu search for target-radar assignment

    DEFF Research Database (Denmark)

    Hindsberger, Magnus; Vidal, Rene Victor Valqui

    2000-01-01

    In this paper, the problem of assigning air-defense illumination radars to enemy targets is presented. A tabu search metaheuristic solution is described, and the results achieved are compared to those of other heuristic approaches; implementation and experimental aspects are discussed. It is argued

  1. Tabu search algorithms for water network optimization

    OpenAIRE

    Cunha, Maria da Conceição; Ribeiro, Luísa

    2004-01-01

    In this paper we propose a tabu search algorithm to find the least-cost design of looped water distribution networks. The mathematical nature of this optimization problem, a nonlinear mixed integer problem, is at the origin of a multitude of contributions to the literature in the last 25 years. In fact, exact optimization methods have not been found for this type of problem, and, in the past, classical optimization methods, like linear and nonlinear programming, were tried at the cost of dras...

  2. Application of tabu search to deterministic and stochastic optimization problems

    Science.gov (United States)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is

  3. Parallel tabu search for two-dimensional irregular cutting

    Energy Technology Data Exchange (ETDEWEB)

    Blazewicz, J.; Walkowiak, R.

    1994-12-31

    The problem of cutting figures of various shapes out of a rectangular area is considered. The proposed method is based on the tabu search metaheuristic adapted to the problem. It includes defining the neighborhood, various types of moves, and search strategies. An exact algorithm for finding the placement of a polygon among polygons has been used to further enhance the quality of solutions. A parallel version of the algorithm has been proposed for a transputer environment. In this method, independent tabu search processes run concurrently on the transputer network. To use the system effectively, a load balancing mechanism based on an auction-type algorithm has been proposed. The experimental results are accompanied by a discussion and comments concerning future research.

  4. TABU SEARCH WITH ASPIRATION CRITERION FOR THE TIMETABLING PROBLEM

    Directory of Open Access Journals (Sweden)

    Oscar Chávez-Bosquez

    2015-01-01

    Full Text Available The aspiration criterion is an imperative element in Tabu Search, with aspiration-by-default and aspiration-by-objective being the criteria mainly used in the literature. In this paper a new aspiration criterion is proposed that implements a probabilistic function when evaluating an element classified as tabu which improves the current solution; the proposal is called Tabu Search with Probabilistic Aspiration Criterion (BT-CAP). The test case used to evaluate the performance of the proposed Probabilistic Aspiration Criterion consists of the 20 instances of the problem described in the First International Timetabling Competition. The results are compared with 2 additional variants of the Tabu Search algorithm: Tabu Search with Default Aspiration Criterion (BT-CAD) and Tabu Search with Objective Aspiration Criterion (BT-CAO). The Wilcoxon test was applied to the generated results, and it was proved with 99% confidence that the BT-CAP algorithm obtains better solutions than the other two variants of the Tabu Search algorithm.
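
The contrast between aspiration criteria can be sketched as small admission predicates plugged into a tabu loop. The probabilistic rule below is a hypothetical reading of the BT-CAP idea (a tabu move that improves the current solution is admitted with some probability p); the paper's exact probabilistic function is not given in the abstract:

```python
import random

def aspiration_by_objective(cand_cost, best_cost, is_tabu):
    """Classical rule: admit a tabu move only if it beats the best cost found."""
    return (not is_tabu) or cand_cost < best_cost

def probabilistic_aspiration(cand_cost, current_cost, is_tabu, p=0.3, rng=random):
    """Hypothetical BT-CAP-style rule: a tabu move that improves the
    *current* solution is admitted with probability p."""
    if not is_tabu:
        return True
    return cand_cost < current_cost and rng.random() < p
```

Aspiration-by-default (not shown) would simply pick the least-tabu move when every candidate is forbidden.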

  5. An Iterated Tabu Search Approach for the Clique Partitioning Problem

    Directory of Open Access Journals (Sweden)

    Gintaras Palubeckis

    2014-01-01

    all cliques induced by the subsets is as small as possible. We develop an iterated tabu search (ITS) algorithm for solving this problem. The proposed algorithm incorporates tabu search, local search, and solution perturbation procedures. We report computational results on CPP instances with up to 2000 vertices. Performance comparisons of ITS against state-of-the-art methods from the literature demonstrate the competitiveness of our approach.

  6. 3D protein structure prediction with genetic tabu search algorithm.

    Science.gov (United States)

    Zhang, Xiaolong; Wang, Ting; Luo, Huiping; Yang, Jack Y; Deng, Youping; Tang, Jinshan; Yang, Mary Qu

    2010-05-28

    Protein structure prediction (PSP) has important applications in different fields, such as drug design, disease prediction, and so on. In protein structure prediction, there are two important issues: the design of the structure model and the design of the optimization technology. Because of the complexity of realistic protein structures, the structure model adopted in this paper is a simplified model called the off-lattice AB model. Once the structure model is assumed, optimization technology is needed to search for the best conformation of a protein sequence based on the assumed structure model. However, PSP is an NP-hard problem even if the simplest model is assumed. Thus, many algorithms have been developed to solve this global optimization problem. In this paper, a hybrid algorithm combining the genetic algorithm (GA) and the tabu search (TS) algorithm is developed to complete this task. In order to develop an efficient optimization algorithm, several improved strategies are developed for the proposed genetic tabu search algorithm, and their combined use improves the efficiency of the algorithm. Among these strategies, tabu search introduced into the crossover and mutation operators improves the local search capability, the variable population size strategy maintains the diversity of the population, and the ranking selection strategy improves the chance that an individual with a low energy value enters the next generation. Experiments are performed with Fibonacci sequences and real protein sequences. Experimental results show that the lowest energy obtained by the proposed GATS algorithm is lower than that obtained by previous methods. The hybrid algorithm has the advantages of both the genetic algorithm and the tabu search algorithm: it makes use of the multiple search points of the genetic algorithm and can overcome the poor hill-climbing capability of the conventional genetic algorithm.

  7. Tabu search techniques for large high-school timetabling problems

    Energy Technology Data Exchange (ETDEWEB)

    Schaerf, A. [Universita di Roma, Rome (Italy)

    1996-12-31

    The high-school timetabling problem consists of assigning all the lectures of a high school to time periods in such a way that no teacher (or class) is involved in more than one lecture at a time and other side constraints are satisfied. The problem is NP-complete and is usually tackled using heuristic methods. This paper describes a solution algorithm (and its implementation) based on Tabu Search. The algorithm interleaves different types of moves and makes use of an adaptive relaxation of the hard constraints. The implementation of the algorithm has been successfully tested in several large high schools with various kinds of side constraints.

  8. Reconstructing protein structure from solvent exposure using tabu search

    Directory of Open Access Journals (Sweden)

    Winter Pawel

    2006-10-01

    Full Text Available Abstract Background A new, promising solvent exposure measure, called half-sphere exposure (HSE), has recently been proposed. Here, we study the reconstruction of a protein's Cα trace solely from structure-derived HSE information. This problem is of relevance for de novo structure prediction using a predicted HSE measure. For comparison, we also consider the well-established contact number (CN) measure. We define energy functions based on the HSE- or CN-vectors and minimize them using two conformational search heuristics: Monte Carlo simulation (MCS) and tabu search (TS). While MCS has been the dominant conformational search heuristic in the literature, TS has been applied only a few times. To discretize the conformational space, we use lattice models of varying complexity. Results The proposed TS heuristic with a novel tabu definition generally performs better than MCS for this problem. Our experiments show that, at least for small proteins (up to 35 amino acids), it is possible to reconstruct the protein backbone solely from the HSE or CN information. In general, the HSE measure leads to better models than the CN measure, as judged by the RMSD and the angle correlation with the native structure. The angle correlation, a measure of structural similarity, evaluates whether equivalent residues in two structures have the same general orientation. Our results indicate that the HSE measure is potentially very useful for representing solvent exposure in protein structure prediction, design and simulation.

  9. A Group Theoretic Tabu Search Approach to the Traveling Salesman Problem

    National Research Council Canada - National Science Library

    Hall, Shane

    2000-01-01

    This research demonstrates a Group Theoretic Tabu Search (GTTS) Java algorithm for the TSP. The tabu search metaheuristic continuously finds near-optimal solutions to the TSP under various implementations...

  10. The effect of neighborhood structures on tabu search algorithm in solving university course timetabling problem

    Science.gov (United States)

    Shakir, Ali; AL-Khateeb, Belal; Shaker, Khalid; Jalab, Hamid A.

    2014-12-01

    The design of course timetables for academic institutions is a very difficult job due to the huge number of possible feasible timetables with respect to the problem size. This process involves many constraints that must be taken into account and a large search space to be explored, even if the size of the problem input is not significantly large. Different heuristic approaches have been proposed in the literature to solve this kind of problem. One of the efficient solution methods for this problem is tabu search. Different neighborhood structures based on different types of move have been defined in studies using tabu search. In this paper, the effect of different neighborhood structures on the operation of tabu search is examined. The performance of the different neighborhood structures is tested over eleven benchmark datasets, and the results obtained with each neighborhood structure are compared with one another, showing the disparities between the structures in terms of penalty cost.
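
    Two of the neighborhood structures typically compared in such studies are the move (reassign one event to a different timeslot) and the swap (exchange the timeslots of two events). A small sketch, with illustrative event names and data of our own, showing how the two structures generate candidate sets of different size and shape from the same timetable:

```python
def move_neighbors(timetable, n_slots):
    """Neighborhood 1: reassign a single event to another timeslot."""
    out = []
    for event, slot in timetable.items():
        for new_slot in range(n_slots):
            if new_slot != slot:
                nb = dict(timetable)
                nb[event] = new_slot
                out.append(nb)
    return out

def swap_neighbors(timetable):
    """Neighborhood 2: exchange the timeslots of two distinct events."""
    events = list(timetable)
    out = []
    for i in range(len(events)):
        for j in range(i + 1, len(events)):
            a, b = events[i], events[j]
            if timetable[a] != timetable[b]:  # identical slots: a no-op
                nb = dict(timetable)
                nb[a], nb[b] = timetable[b], timetable[a]
                out.append(nb)
    return out

# Three events over four timeslots.
tt = {"math": 0, "physics": 1, "history": 1}
```

    Plugged into a tabu loop, each structure trades exploration breadth against evaluation cost, which is one source of the penalty-cost disparities such studies report.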

  11. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

    Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu Search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
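
    As a hedged sketch of the idea (not Drezner et al.'s actual procedure), tabu-based variable selection can search over fixed-size subsets of regressors with drop/add swap moves, score each subset by the residual sum of squares of an OLS fit, and forbid undoing a recent swap:

```python
import numpy as np

def rss(X, y, subset):
    """Residual sum of squares of an OLS fit on the chosen columns."""
    A = np.c_[np.ones(len(y)), X[:, sorted(subset)]]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

def tabu_select(X, y, k, tenure=3, iters=50, seed=0):
    """Pick k of X's columns by tabu search over drop/add swap moves."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    current = set(rng.choice(p, size=k, replace=False).tolist())
    best, best_rss = set(current), rss(X, y, current)
    tabu = {}
    for it in range(iters):
        cands = []
        for out_v in current:
            for in_v in set(range(p)) - current:
                nxt = (current - {out_v}) | {in_v}
                score = rss(X, y, nxt)
                if tabu.get((out_v, in_v), -1) >= it and score >= best_rss:
                    continue  # tabu, and aspiration does not apply
                cands.append((score, (out_v, in_v), nxt))
        score, move, current = min(cands, key=lambda c: c[0])
        tabu[(move[1], move[0])] = it + tenure  # forbid undoing the swap
        if score < best_rss:
            best, best_rss = set(current), score
    return sorted(best)

# Synthetic data: y depends only on columns 0 and 3 of six predictors.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))
y = 2.0 * X[:, 0] - 3.0 * X[:, 3] + 0.1 * rng.normal(size=120)
```

    Unlike stepwise regression, the search can leave a locally attractive subset, since the best non-tabu swap is taken even when it worsens the fit.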

  12. Tabu search heuristic for university course timetabling problem ...

    African Journals Online (AJOL)

    In this study we have addressed the NP-Hard problem of academic course timetabling. This is the problem of assigning resources such as lecturers, rooms and courses to a fixed time period normally a week, while satisfying a number of problem-specific constraints. This paper describes a Tabu Search algorithm that creates ...

  13. Application of Hybrid Quantum Tabu Search with Support Vector Regression (SVR) for Load Forecasting

    Directory of Open Access Journals (Sweden)

    Cheng-Wen Lee

    2016-10-01

    Full Text Available Hybridizing chaotic evolutionary algorithms with support vector regression (SVR) to improve forecasting accuracy is a hot topic in electricity load forecasting. Trapping at local optima and premature convergence are critical shortcomings of the tabu search (TS) algorithm. This paper investigates potential improvements of the TS algorithm by applying quantum computing mechanics to enhance the search information sharing mechanism (tabu memory) to improve the forecasting accuracy. This article presents an SVR-based load forecasting model that integrates quantum behaviors and the TS algorithm with the support vector regression model (namely SVRQTS) to obtain a more satisfactory forecasting accuracy. Numerical examples demonstrate that the proposed model outperforms the alternatives.

  14. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    Directory of Open Access Journals (Sweden)

    Lei Shi

    2018-01-01

    Full Text Available In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy.

  15. Tabu search methods for multicommodity capacitated fixed charge network design problem

    Energy Technology Data Exchange (ETDEWEB)

    Crainic, T.; Farvolden, J.; Gendreau, M.; Soriano, P.

    1994-12-31

    We address the fixed charge capacitated multicommodity network flow problem with linear costs and no additional side constraints, and present two solution approaches based on the tabu search metaheuristic. In the first case, the search is conducted by exploring the space of the design (integer) variables, while neighbors are evaluated and moves are selected via a capacitated multicommodity minimum cost network flow subproblem. The second approach integrates simplex pivoting rules into a tabu search framework. Here, the search explores the space of flow path variables when all design arcs are open, by using column generation to obtain new variables, and pivoting to determine and evaluate the neighbors of any given solution. Adapting this idea within our tabu search framework represents an interesting challenge, since several of the standard assumptions upon which column generation schemes are based are no longer verified. In particular, the monotonic decrease of the objective function value is no longer ensured, and both variable and fixed costs characterize arcs. On the other hand, the precise definition and generation of the tabu search neighborhoods in a column generation context poses an additional challenge, linked particularly to the description and identification of path variables. We describe the various components, implementation challenges and behaviour of each of the two algorithms, and compare their computational and solution quality performance. Comparisons with known bounding procedures will be presented as well.

  16. Tabu search, a versatile technique for the functions optimization

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    2003-01-01

    The basic elements of the Tabu search technique are presented, with emphasis on the advantages it has over traditional descent-based optimization methods. Some modifications that have been implemented in the technique over time to make it more robust are then outlined. Finally, some areas where this technique has been applied with successful results are described. (Author)

  17. System identification using Nuclear Norm & Tabu Search optimization

    Science.gov (United States)

    Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.

    2018-01-01

    In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors' knowledge, no work has been reported to date that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete-time system is used in this work as the basic model for characterizing the dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual (measured) output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. The TS-based SI is compared with NN-SI based on the iterative Alternating Direction Method of Multipliers (ADMM) line-search optimization. For comparison, several benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.

  18. Ice Cover Prediction of a Power Grid Transmission Line Based on Two-Stage Data Processing and Adaptive Support Vector Machine Optimized by Genetic Tabu Search

    Directory of Open Access Journals (Sweden)

    Xiaomin Xu

    2017-11-01

    Full Text Available With the increase in energy demand, extreme climates have gained increasing attention. Ice disasters on transmission lines can cause gap-discharge and icing-flashover electrical failures, which can lead to mechanical failure of the tower, conductors and insulators, causing significant harm to people's daily life and work. To address this challenge, an intelligent combinational model is proposed based on improved empirical mode decomposition and a support vector machine for short-term forecasting of ice cover thickness. Firstly, in light of the characteristics of ice cover thickness data, fast independent component analysis (FICA) is implemented to smooth abnormal situations in the curve trend of the original data for prediction. Secondly, ensemble empirical mode decomposition (EEMD) decomposes the denoised data into different components from high frequency to low frequency, and a support vector machine (SVM) is introduced to predict the sequence of the different components. Then, some modifications are performed on the standard SVM algorithm to accelerate the convergence speed, and a combination algorithm drawing on the advantages of the genetic algorithm and tabu search is introduced to optimize the parameters of the support vector machine. To improve the prediction accuracy, the kernel function of the support vector machine is adaptively adopted according to the complexity of the different sequences. Finally, the prediction results for each component series are added to obtain the overall ice cover thickness. A 220 kV DC transmission line in the Hunan Region is taken as the case study to verify the practicability and effectiveness of the proposed method. Meanwhile, we select the SVM optimized by a genetic algorithm (GA-SVM) and the traditional SVM algorithm for comparison, and use the error functions of mean absolute percentage error (MAPE), root mean square error (RMSE) and mean absolute error (MAE) to compare prediction accuracy. Finally, we find that these improvements ...

  19. Tabu search, a versatile technique for the functions optimization; Busqueda Tabu, una tecnica versatil para la optimizacion de funciones

    Energy Technology Data Exchange (ETDEWEB)

    Castillo M, J.A. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)

    2003-07-01

    The basic elements of the Tabu search technique are presented, with emphasis on the advantages it has over traditional descent-based optimization methods. Some modifications that have been implemented in the technique over time to make it more robust are then outlined. Finally, some areas where this technique has been applied with successful results are described. (Author)

  20. Implementation of the Metaheuristic Tabu Search in Route Selection for Mobility Analysis Support System

    National Research Council Canada - National Science Library

    Ryer, David

    1999-01-01

    This thesis employs a reactive tabu search heuristic implemented in the Java programming language to solve a real world variation of the vehicle routing problem with the objective of providing quality...

  1. Tabu search approaches for the multi-level warehouse layout problem with adjacency constraints

    Science.gov (United States)

    Zhang, G. Q.; Lai, K. K.

    2010-08-01

    A new multi-level warehouse layout problem, the multi-level warehouse layout problem with adjacency constraints (MLWLPAC), is investigated. The same item type is required to be located in adjacent cells, and horizontal and vertical unit travel costs are product dependent. An integer programming model is proposed to formulate the problem, which is NP-hard. Along with a heuristic based on the cube-per-order index policy, the standard tabu search (TS), a greedy TS, and a dynamic neighbourhood based TS are presented to solve the problem. The computational results show that the proposed approaches can reduce the transportation cost significantly.

  2. Models and Tabu Search Metaheuristics for Service Network Design with Asset-Balance Requirements

    DEFF Research Database (Denmark)

    Pedersen, Michael Berliner; Crainic, T.G.; Madsen, Oli B.G.

    2009-01-01

    This paper focuses on a generic model for service network design, which includes asset positioning and utilization through constraints on asset availability at terminals. We denote these relations as "design-balance constraints" and focus on the design-balanced capacitated multicommodity network design model, a generalization of the capacitated multicommodity network design model generally used in service network design applications. Both arc- and cycle-based formulations for the new model are presented. The paper also proposes a tabu search metaheuristic framework for the arc-based formulation...

  3. Cultural-Based Genetic Tabu Algorithm for Multiobjective Job Shop Scheduling

    Directory of Open Access Journals (Sweden)

    Yuzhen Yang

    2014-01-01

    Full Text Available The job shop scheduling problem, which has been dealt with by various traditional optimization methods over the decades, has proved to be NP-hard and difficult to solve, especially in the multiobjective field. In this paper, we propose a novel quadspace cultural genetic tabu algorithm (QSCGTA) to solve such problems. This algorithm provides a different structure from the original cultural algorithm in containing double belief spaces and population spaces. These spaces deal with different levels of populations globally and locally by applying genetic and tabu searches separately, and exchange information regularly to move the process more effectively towards promising areas, along with modified multiobjective domination and transform functions. Moreover, we present a bidirectional shifting for the decoding process of job shop scheduling. The computational results presented significantly prove the effectiveness and efficiency of the cultural-based genetic tabu algorithm for the multiobjective job shop scheduling problem.

  4. Fuel Management in Candu Reactors Using Tabu Search

    International Nuclear Information System (INIS)

    Chambon, R.; Varin, E.

    2008-01-01

    Meta-heuristic methods are perfectly suited to solving fuel management optimization problems in LWRs. Indeed, they were originally designed for combinatorial or integer-parameter problems, which can represent the reloading pattern of the assemblies. For Candu reactors, however, the problem is completely different, since this type of reactor is refueled online. Thus, for their design at fuel reloading equilibrium, the parameter to optimize is the average exit burnup of each fuel channel (which is related to the frequency at which each channel has to be reloaded). It is therefore a continuous variable that we have to deal with. Originally, this problem was solved using gradient methods. However, their major drawback is the potential local optimum into which they can be trapped, which makes meta-heuristic methods interesting. In this paper, we have successfully implemented the Tabu Search (TS) method in the reactor diffusion code DONJON. The case of an ACR-700 using 7 burnup zones has been tested. The results have been compared to those we obtained previously with gradient methods. Both methods give equivalent results, which validates them both. TS has, however, a major drawback concerning computation time. A problem with the enrichment as an additional parameter has also been tested. In this case, the feasible domain is very narrow, and the optimization process has encountered limitations. Actually, the TS method may not be suitable for finding the exact solution of the fuel management problem, but it may be used in a hybrid method, such as a TS to find the global optimum region coupled with a gradient method to converge faster on the exact solution. (authors)
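
    Applying tabu search to a continuous parameter such as an average exit burnup typically means discretizing the moves (a ± step on one variable at a time) and declaring a small region around recently visited points tabu. A toy sketch under those assumptions, with a simple quadratic objective standing in for the core-physics calculation:

```python
def continuous_tabu(f, x0, step=0.1, radius=0.05, tenure=10, iters=200):
    """Tabu search for continuous variables: candidate moves perturb one
    coordinate by +/- step; points within `radius` (Chebyshev distance)
    of a recently visited solution are tabu unless they improve on the
    best solution found so far (aspiration)."""
    current = list(x0)
    best, best_f = current[:], f(current)
    visited = []  # (point, iteration at which it was visited)
    for it in range(iters):
        visited = [(p, t) for p, t in visited if t + tenure > it]
        cands = []
        for i in range(len(current)):
            for d in (-step, step):
                x = current[:]
                x[i] += d
                val = f(x)
                near = any(max(abs(a - b) for a, b in zip(x, p)) < radius
                           for p, _ in visited)
                if not near or val < best_f:
                    cands.append((val, x))
        if not cands:
            break  # every neighbour is tabu and none improves the best
        val, current = min(cands, key=lambda c: c[0])
        visited.append((current[:], it))
        if val < best_f:
            best, best_f = current[:], val
    return best, best_f

# Stand-in objective with its minimum at (1, -1); start at the origin.
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 1) ** 2
best, val = continuous_tabu(f, [0.0, 0.0])
```

    Coupling such a search with a gradient method, as the authors suggest, would hand over the final convergence once the tabu phase has located the right basin.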

  5. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles

    Directory of Open Access Journals (Sweden)

    Ricardo Soto

    2015-01-01

    Full Text Available The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus own. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods.
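
    One common way to combine alldifferent-style structure with tabu search, sketched here on a 4 × 4 puzzle, is to keep every row a permutation by construction (so the row constraints hold for free) and let tabu swaps inside a row drive the column and subgrid conflicts to zero. The instance and cost function below are our own illustration, not the authors' implementation:

```python
import itertools
import random

def conflicts(grid):
    """Count duplicated values in columns and 2x2 subgrids; rows are
    permutations by construction and never conflict."""
    cost = 0
    for c in range(4):
        cost += 4 - len({grid[r][c] for r in range(4)})
    for br in (0, 2):
        for bc in (0, 2):
            box = {grid[r][c] for r in (br, br + 1) for c in (bc, bc + 1)}
            cost += 4 - len(box)
    return cost

def solve(rows, seed=0, tenure=5, iters=2000):
    """Tabu search over within-row swaps, which preserve the row
    alldifferent property while repairing columns and subgrids."""
    rng = random.Random(seed)
    grid = [list(r) for r in rows]
    best, best_cost = [r[:] for r in grid], conflicts(grid)
    tabu = {}
    for it in range(iters):
        if best_cost == 0:
            break
        cands = []
        for r in range(4):
            for a, b in itertools.combinations(range(4), 2):
                grid[r][a], grid[r][b] = grid[r][b], grid[r][a]
                cost = conflicts(grid)
                grid[r][a], grid[r][b] = grid[r][b], grid[r][a]
                # Tabu unless the move reaches a new overall best.
                if tabu.get((r, a, b), -1) < it or cost < best_cost:
                    cands.append((cost, (r, a, b)))
        cost, (r, a, b) = min(cands, key=lambda c: (c[0], rng.random()))
        grid[r][a], grid[r][b] = grid[r][b], grid[r][a]
        tabu[(r, a, b)] = it + tenure
        if cost < best_cost:
            best, best_cost = [row[:] for row in grid], cost
    return best, best_cost

# Each starting row is already a permutation of 1..4; only the last row
# has column/subgrid conflicts.
start = [[1, 2, 3, 4], [3, 4, 1, 2], [2, 1, 4, 3], [4, 3, 1, 2]]
sol, cost = solve(start)
```

    Because swaps stay inside a row, the row alldifferent constraints never have to be re-checked, which is the kind of filtering burden the global constraint lifts from the tabu search.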

  6. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles.

    Science.gov (United States)

    Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique

    2015-01-01

    The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus own. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods.

  7. A Multi Time Scale Wind Power Forecasting Model of a Chaotic Echo State Network Based on a Hybrid Algorithm of Particle Swarm Optimization and Tabu Search

    Directory of Open Access Journals (Sweden)

    Xiaomin Xu

    2015-11-01

    Full Text Available The uncertainty and irregularity of wind power generation are caused by the intermittency and randomness of wind resources. Such volatility brings severe challenges to the wind power grid. Ultrashort-term and short-term wind power forecasting with high prediction accuracy is therefore of great significance for reducing the phenomenon of abandoned wind power, optimizing the conventional power generation plan, adjusting the maintenance schedule and developing real-time monitoring systems, so accurate forecasting of wind power generation is important in electric load forecasting. The echo state network (ESN) is a new recurrent neural network composed of input, hidden and output layers. It can approximate nonlinear systems well and achieves great results in nonlinear chaotic time series forecasting. Besides, the ESN is simpler and less computationally demanding to train than traditional neural networks, which provides more accurate training results. Aiming to address the disadvantages of the standard ESN, this paper makes some improvements: combining the complementary advantages of particle swarm optimization and tabu search, the generalization of the ESN is improved. To verify the validity and applicability of this method, case studies of multi-time-scale forecasting of wind power output are carried out to reconstruct the chaotic time series of the actual wind power generation data in a certain region and to predict wind power generation. Meanwhile, the influence of seasonal factors on wind power is taken into consideration. Compared with the classical ESN and the conventional Back Propagation (BP) neural network, the results verify the superiority of the proposed method.

  8. Solving a large-scale precedence constrained scheduling problem with elastic jobs using tabu search

    DEFF Research Database (Denmark)

    Pedersen, C.R.; Rasmussen, R.V.; Andersen, Kim Allan

    2007-01-01

    This paper presents a solution method for minimizing the makespan of a practical large-scale scheduling problem with elastic jobs. The jobs are processed on three servers and restricted by precedence constraints, time windows and capacity limitations. We derive a new method for approximating the server exploitation of the elastic jobs and solve the problem using a tabu search procedure. Finding an initial feasible solution is in general NP-complete, but the tabu search procedure includes a specialized heuristic for solving this problem. The solution method has proven to be very efficient and leads...

  9. A hybrid Tabu search-simulated annealing method to solve quadratic assignment problem

    Directory of Open Access Journals (Sweden)

    Mohamad Amin Kaviani

    2014-06-01

    Full Text Available The quadratic assignment problem (QAP) has been considered one of the most complicated problems. The problem is NP-hard and optimal solutions are not available for large-scale instances. This paper presents a hybrid method using tabu search and simulated annealing techniques to solve the QAP, called TABUSA. Using some well-known problems from QAPLIB generated by Burkard et al. (1997) [Burkard, R. E., Karisch, S. E., & Rendl, F. (1997). QAPLIB - a quadratic assignment problem library. Journal of Global Optimization, 10(4), 391-403.], the two methods, TABUSA and TS, are both coded in MATLAB and compared in terms of relative percentage deviation (RPD) for all instances. The performance of the proposed method is examined against tabu search, and the preliminary results indicate that the hybrid method is capable of solving real-world problems efficiently.
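
    One way such a tabu/simulated-annealing hybrid can be wired together (a hedged sketch of the general idea, not necessarily the TABUSA design) is to pick the best non-tabu 2-swap as in plain tabu search, subject uphill moves to a Metropolis-style acceptance test under a cooling temperature, and fall back to a random non-tabu swap when the best move is rejected:

```python
import math
import random

def qap_cost(perm, flow, dist):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabusa(flow, dist, tenure=4, t0=10.0, cooling=0.95, iters=300, seed=0):
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    cur_cost = qap_cost(perm, flow, dist)
    best, best_cost = perm[:], cur_cost
    tabu = {}
    temp = t0
    for it in range(iters):
        moves = []
        for a in range(n):
            for b in range(a + 1, n):
                perm[a], perm[b] = perm[b], perm[a]
                cost = qap_cost(perm, flow, dist)
                perm[a], perm[b] = perm[b], perm[a]
                if tabu.get((a, b), -1) < it or cost < best_cost:
                    moves.append((cost, (a, b)))
        cost, (a, b) = min(moves)
        delta = cost - cur_cost
        if delta > 0 and rng.random() >= math.exp(-delta / temp):
            # SA-style rejection of an uphill best move: diversify with
            # a random non-tabu swap instead of stalling.
            cost, (a, b) = rng.choice(moves)
        perm[a], perm[b] = perm[b], perm[a]
        cur_cost = cost
        tabu[(a, b)] = it + tenure
        if cur_cost < best_cost:
            best, best_cost = perm[:], cur_cost
        temp = max(temp * cooling, 1e-6)
    return best, best_cost

# A tiny symmetric instance: flows between 4 facilities, distances
# between 4 locations on a line; the values are arbitrary illustration.
flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
best, cost = tabusa(flow, dist)
```

    RPD-style comparisons as in the paper would then measure how far `cost` lies above the best known value for each benchmark instance.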

  10. A new method for decoding an encrypted text by genetic algorithms and its comparison with tabu search and simulated annealing

    Directory of Open Access Journals (Sweden)

    Mahdi Sadeghzadeh

    2014-02-01

    Full Text Available The genetic algorithm is a population-based algorithm with which many optimization problems have been solved successfully. With the increase in computer attacks, the demand for a secure, efficient and reliable Internet has grown. Cryptographic systems concern the science of hidden communication and include two categories: encryption and cryptanalysis. In this paper, several cryptanalysis approaches based on genetic algorithms, tabu search and simulated annealing are investigated for text encrypted with a permutation cipher. The study also attempts to evaluate and compare the performance of these algorithms, and the results are compared.

  11. A HYBRID HOPFIELD NEURAL NETWORK AND TABU SEARCH ALGORITHM TO SOLVE ROUTING PROBLEM IN COMMUNICATION NETWORK

    Directory of Open Access Journals (Sweden)

    MANAR Y. KASHMOLA

    2012-06-01

    Full Text Available The development of hybrid algorithms for solving complex optimization problems focuses on enhancing the strengths and compensating for the weaknesses of two or more complementary approaches. The goal is to intelligently combine the key elements of these approaches to find superior solutions to optimization problems. Optimal routing in a communication network is considered a complex optimization problem. In this paper we propose a hybrid Hopfield Neural Network (HNN) and Tabu Search (TS) algorithm, called the hybrid HNN-TS algorithm. The paradigm of this hybridization is embedded: we embed the short-term memory and tabu restriction features from the TS algorithm in the HNN model. The short-term memory and tabu restriction control the neuron selection process in the HNN model in order to get around the local minima problem and find an optimal solution using the HNN model to solve a complex optimization problem. The proposed algorithm is intended to find the optimal path for packet transmission in the network, which falls within the field of routing problems. The optimal path to be selected depends on a 4-tuple (delay, cost, reliability and capacity). Test results show that the proposed algorithm can find a path with optimal cost in a reasonable number of iterations. They also show that the complexity of the network model won't be a problem, since the neuron selection is done heuristically.

  12. A Novel Framework for Medical Web Information Foraging Using Hybrid ACO and Tabu Search.

    Science.gov (United States)

    Drias, Yassine; Kechid, Samir; Pasi, Gabriella

    2016-01-01

    We present in this paper a novel approach based on multi-agent technology for Web information foraging. We propose for this purpose an architecture in which we distinguish two important phases. The first one is a learning process for localizing the most relevant pages that might interest the user. This is performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web. It consists of an incremental learning process starting from the result of the first phase and reshaping the outcomes to take into account the changes that the Web undergoes. The system was implemented using a colony of artificial ants hybridized with tabu search in order to achieve more effectiveness and efficiency. To validate our proposal, experiments were conducted on MedlinePlus, a real website dedicated to research in the domain of health, in contrast to previous works where experiments were performed on web-log datasets. The main results are promising, both for those related to strong Web regularities and for the response time, which is very short and hence complies with the real-time constraint.

  13. A Tabu Search WSN Deployment Method for Monitoring Geographically Irregular Distributed Events

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available In this paper, we address the Wireless Sensor Network (WSN) deployment issue. We assume that the observed area is characterized by the geographical irregularity of the sensed events. Formally, we consider that each point in the deployment area is associated with a differentiated detection probability threshold, which must be satisfied by our deployment method. Our resulting WSN deployment problem is formulated as a Multi-Objective Optimization problem, which seeks to reduce the gap between the generated events' detection probabilities and the required thresholds while minimizing the number of deployed sensors. To overcome the computational complexity of an exact resolution, we propose an original pseudo-random approach based on the Tabu Search heuristic. Simulations show that our proposal achieves better performance than several other approaches proposed in the literature. In the last part of this paper, we generalize the deployment problem by including the wireless communication network connectivity constraint. Thus, we extend our proposal to ensure that the resulting WSN topology is connected even if the sensor communication range takes small values.

  14. A Tabu Search WSN Deployment Method for Monitoring Geographically Irregular Distributed Events.

    Science.gov (United States)

    Aitsaadi, Nadjib; Achir, Nadjib; Boussetta, Khaled; Pujolle, Guy

    2009-01-01

    In this paper, we address the Wireless Sensor Network (WSN) deployment issue. We assume that the observed area is characterized by the geographical irregularity of the sensed events. Formally, we consider that each point in the deployment area is associated with a differentiated detection probability threshold, which must be satisfied by our deployment method. Our resulting WSN deployment problem is formulated as a Multi-Objective Optimization problem, which seeks to reduce the gap between the generated events' detection probabilities and the required thresholds while minimizing the number of deployed sensors. To overcome the computational complexity of an exact resolution, we propose an original pseudo-random approach based on the Tabu Search heuristic. Simulations show that our proposal achieves better performance than several other approaches proposed in the literature. In the last part of this paper, we generalize the deployment problem by including the wireless communication network connectivity constraint. Thus, we extend our proposal to ensure that the resulting WSN topology is connected even if the sensor communication range takes small values.

  15. A hybridised variable neighbourhood tabu search heuristic to increase security in a utility network

    International Nuclear Information System (INIS)

    Janssens, Jochen; Talarico, Luca; Sörensen, Kenneth

    2016-01-01

    We propose a decision model aimed at increasing security in a utility network (e.g., an electricity, gas, water or communication network). The network is modelled as a graph, the edges of which are unreliable. We assume that all edges (e.g., pipes, cables) have a certain, not necessarily equal, probability of failure, which can be reduced by selecting edge-specific security strategies. We develop a mathematical programming model and a metaheuristic approach that uses a greedy randomized adaptive search procedure to find an initial solution, and uses tabu search hybridised with iterated local search and a variable neighbourhood descent heuristic to improve this solution. The main goal is to reduce the risk of service failure between an origin and a destination node by selecting the right combination of security measures for each network edge given a limited security budget. - Highlights: • A decision model aimed at increasing security in a utility network is proposed. • The goal is to reduce the risk of service failure given a limited security budget. • An exact approach and a variable neighbourhood tabu search heuristic are developed. • A generator for realistic networks is built and used to test the solution methods. • The hybridised heuristic reduces the total risk by 32% on average.

  16. Hydro-thermal Commitment Scheduling by Tabu Search Method with Cooling-Banking Constraints

    Science.gov (United States)

    Nayak, Nimain Charan; Rajan, C. Christober Asir

    This paper presents a new approach to developing an algorithm for solving the Unit Commitment Problem (UCP) in a hydro-thermal power system. Unit commitment is a nonlinear optimization problem: determine the minimum-cost turn-on/off schedule of the generating units in a power system while satisfying both the forecasted load demand and the various operating constraints of the generating units. The effectiveness of the proposed hybrid algorithm is demonstrated by numerical results comparing the generation costs and computation times obtained with the Tabu Search algorithm against other methods, such as Evolutionary Programming and Dynamic Programming, in reaching a proper unit commitment.

  17. A hybrid tabu search algorithm for automatically assigning patients to beds.

    Science.gov (United States)

    Demeester, Peter; Souffriau, Wouter; De Causmaecker, Patrick; Vanden Berghe, Greet

    2010-01-01

    We describe a patient admission scheduling algorithm that supports the operational decisions in a hospital. It involves efficiently assigning patients to beds in the appropriate departments, taking into account the medical needs of the patients as well as their preferences, while keeping the number of patients in the different departments balanced. Due to the combinatorial complexity of the admission scheduling problem, there is a need for an algorithm that intelligently assists the admission scheduler in taking decisions fast. To this end, a hybridized tabu search algorithm is developed to tackle the admission scheduling problem. For testing, we use a randomly generated data set. The performance of the algorithm is compared with an integer programming approach. The metaheuristic allows flexible modelling and presents feasible solutions even when interrupted by the user at an early stage of the calculation, whereas the integer programming approach is not able to find a solution within 1 h of calculation time.

  18. Hybrid water flow-like algorithm with Tabu search for traveling salesman problem

    Science.gov (United States)

    Bostamam, Jasmin M.; Othman, Zulaiha

    2016-08-01

    This paper presents a hybrid Water Flow-like Algorithm with Tabu Search for solving the travelling salesman problem (WFA-TS-TSP). WFA has proven its outstanding performance in solving the TSP, while TS is a conventional algorithm that has been used for decades to solve various combinatorial optimization problems, including the TSP. Hybridizing WFA with TS provides a better balance between exploration and exploitation, the key elements determining the performance of a metaheuristic. TS uses two different local searches, 2-opt and 3-opt, separately. The proposed WFA-TS-TSP is tested on 23 well-known benchmark symmetric TSP instances. The results show that WFA-TS-TSP obtains significantly better-quality solutions than WFA, and that the variant with 3-opt obtains the best solutions. From these results, it can be concluded that WFA can be further improved through hybridization or better local search techniques.
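
The 2-opt neighbourhood mentioned above is the classic TSP local search: remove two edges and reconnect the tour with the segment between them reversed. A self-contained sketch of plain 2-opt descent (the four-city instance is a toy example, not one of the 23 benchmarks):

```python
import itertools
import math

def tour_length(tour, pts):
    """Total length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse the segment [i, j) while doing so shortens
    the tour - the 2-opt move used as a local search inside TS above."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            if j - i < 2:
                continue  # reversing a single city changes nothing
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(candidate, pts) < tour_length(tour, pts) - 1e-9:
                tour, improved = candidate, True
    return tour

# Four corners of a unit square: the optimal tour is the perimeter, length 4.
pts = [(0, 0), (1, 1), (0, 1), (1, 0)]
best = two_opt([0, 1, 2, 3], pts)
```

A 3-opt variant would remove three edges instead of two, giving a larger neighbourhood at higher cost per iteration, which matches the paper's finding that 3-opt yields the best solutions.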

  19. Development of Future Rule Curves for Multipurpose Reservoir Operation Using Conditional Genetic and Tabu Search Algorithms

    Directory of Open Access Journals (Sweden)

    Anongrit Kangrang

    2018-01-01

    Optimal rule curves are necessary guidelines in reservoir operation, used to assess the performance of a reservoir in satisfying water supply, irrigation, industrial, hydropower, and environmental conservation requirements. This study applied the conditional genetic algorithm (CGA) and the conditional tabu search algorithm (CTSA), connected to a reservoir simulation model, to search for optimal reservoir rule curves. The Ubolrat Reservoir in the northeast region of Thailand served as an illustrative application, including historic monthly inflow, future inflow generated by the SWAT hydrological model using 50 years of future climate data from the PRECIS regional climate model under the IPCC SRES B2 emission scenario, water demand, hydrologic data, and physical reservoir data. The future and synthetic inflow data were used to simulate the reservoir system and evaluate the water situation; water shortage and excess water situations were expressed in terms of frequency, magnitude and duration. The results show that the optimal rule curves from CGA and CTSA connected with the simulation model mitigate drought and flood situations better than the existing rule curves, and that the optimal future rule curves were more suitable for future situations than the other rule curves.

  20. System network planning expansion using mathematical programming, genetic algorithms and tabu search

    Energy Technology Data Exchange (ETDEWEB)

    Sadegheih, A. [Department of Industrial Engineering, University of Yazd, P.O. Box 89195-741, Yazd (Iran); Drake, P.R. [E-Business and Operations Management Division, University of Liverpool Management School, University of Liverpool, Liverpool (United Kingdom)

    2008-06-15

    In this paper, system network planning expansion is formulated for mixed-integer programming, a genetic algorithm (GA) and tabu search (TS). Compared with other optimization methods, GAs are suitable for traversing large search spaces, since they can do this relatively rapidly and because the use of mutation diverts the method away from the local minima that tend to become more common as the search space grows. GAs give an excellent trade-off between solution quality, computing time, and flexibility for taking into account specific constraints in real situations. TS has emerged as a new, highly efficient search paradigm for finding quality solutions to combinatorial problems. It is characterized by gathering knowledge during the search and subsequently profiting from this knowledge. The attractiveness of the technique comes from its ability to escape local optimality. The cost function of this problem consists of the capital investment cost in discrete form, the cost of transmission losses and the power generation costs. The DC load flow equations for the network are embedded in the constraints of the mathematical model to avoid the sub-optimal solutions that can arise if such constraints are enforced indirectly. The solution of the model gives the best line additions and also provides information regarding the optimal generation at each generation point. The method is demonstrated on the expansion of a 10 bus bar system to 18 bus bars. Finally, a steady-state genetic algorithm with uniform crossover is employed rather than generational replacement. (author)

  2. Automated Energy Calibration and Fitting of LaCl3(Ce) γ-Spectra Using Peak Likelihood and Tabu Search

    Directory of Open Access Journals (Sweden)

    Timothy P. McClanahan

    2008-10-01

    An automated method for γ-emission spectrum calibration and deconvolution is presented for spaceflight applications of a cerium-doped lanthanum chloride (LaCl3(Ce)) γ-ray detector system. This detector will be coupled with a pulsed neutron generator (PNG) to induce and enhance nuclide signal quality and rates, yielding large volumes of spectral information. Automated analytical methods are required to deconvolve and quantify nuclide signals from spectra; this will both reduce human interaction in spectrum analysis and facilitate feedback to automated robotic and operations planning. Initial system tests indicate significant energy calibration drifts (>6%) that must be mitigated before spectrum analysis. A linear energy calibration model with gain and zero factors is presently considered. The deconvolution method incorporates a tabu search heuristic that formulates and optimizes searches using memory structures, and iterative use of a peak-likelihood methodology identifies global calibration minima and peak areas. Compared with manual calibration, the tabu-enhanced method shows superior performance; it is also superior to a similar unoptimized local search. The techniques are applicable to other emission spectroscopies, e.g., X-ray and neutron.
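
The two-parameter calibration model described above maps channel number to energy as E = gain × channel + zero; given identified peak channels and their known line energies, the parameters follow from an ordinary least-squares fit. A sketch (the peak channels and line energies below are made-up illustrative values, not the paper's data):

```python
def fit_linear_calibration(channels, energies):
    """Least-squares fit of E = gain * channel + zero, the two-parameter
    energy calibration model, via the normal equations in pure Python."""
    n = len(channels)
    sx = sum(channels)
    sy = sum(energies)
    sxx = sum(c * c for c in channels)
    sxy = sum(c * e for c, e in zip(channels, energies))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    zero = (sy - gain * sx) / n
    return gain, zero

# Hypothetical peak channels for two known gamma lines (keV).
gain, zero = fit_linear_calibration([256, 730], [511.0, 1461.0])
```

With only two peaks the fit is exact; with more identified peaks the same routine gives the least-squares compromise, and a calibration drift shows up as a change in the fitted gain and zero over time.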

  3. Obtaining control rod patterns for a BWR using tabu search; Obtencion de patrones de barras de control para un BWR usando busqueda Tabu

    Energy Technology Data Exchange (ETDEWEB)

    Castillo, A.; Ortiz, J.J.; Alonso, G. [Instituto Nacional de Investigaciones Nucleares, Km. 36.5 Carretera Mexico-Toluca, Ocoyoacac, Estado de Mexico 52045 (Mexico); Morales, L.B. [UNAM, IIMAS, Ciudad Universitaria, D. F. 04510 (Mexico); Valle, E. del [IPN, ESFM, Unidad Profesional 'Adolfo Lopez Mateos', Col. Lindavista 07738, D. F. (Mexico)]. e-mail: jacm@nuclear.inin.mx

    2004-07-01

    This paper presents the results obtained when implementing the tabu search technique to optimize control rod patterns in a BWR-type reactor, using the CM-PRESTO code. The control rod patterns were obtained for the fuel reload designs produced in a previous work using the same technique. The results correspond to an 18-month cycle using 112 fresh fuel assemblies enriched to 3.53% U-235. The tabu search technique forbids recently visited moves in the positions corresponding to the axial positions of the control rods; additionally, the tiempo_tabu matrix is used to manage a variable-size tabu list, and the objective function is penalized with the frequency of the forbidden moves. The control rod patterns obtained improve the cycle length with respect to the reference values and satisfy the safety constraints. (Author)

  4. A Firefly Algorithm-Tabu Search combination for solving the Traveling Salesman Problem [Kombinasi Firefly Algorithm-Tabu Search untuk Penyelesaian Traveling Salesman Problem]

    Directory of Open Access Journals (Sweden)

    Riyan Naufal Hay's

    2017-07-01

    The Traveling Salesman Problem (TSP) is a classic combinatorial optimization problem with roles in planning, scheduling, and search in engineering and science (Dong, 2012). The TSP is also a good object for testing the performance of optimization methods; several methods, such as the Cooperative Genetic Ant System (CGAS) (Dong, 2012), the Parallelized Genetic Ant Colony System (PGAS), combined Particle Swarm Optimization and Ant Colony Optimization (PSO-ACO) (Elloumi, 2014), and Ant Colony Hyper-Heuristics (ACOHH) (Aziz, 2015), have been developed to solve it. This study therefore implements a new combined method to improve the accuracy of TSP solutions. The Firefly Algorithm (FA) is one of the algorithms that can be used to solve combinatorial optimization problems (Layeb, 2014) and is a strong candidate compared to existing algorithms, including Particle Swarm Optimization (Yang, 2010). However, FA has shortcomings on large-scale optimization problems (Baykasoğlu and Ozsoy, 2014), whereas Tabu Search (TS) has proven effective on such problems (Pedro, 2013). In this study, TS is applied to FA (FATS) to solve the TSP, and the FATS results are compared against previous work on ACOHH. The comparison shows accuracy improvements of 0.89% on the Oliver30 dataset, 0.14% on Eil51, 3.81% on Eil76, and 1.27% on KroA100.

  5. A Novel Prostate Cancer Classification Technique Using Intermediate Memory Tabu Search

    Directory of Open Access Journals (Sweden)

    Abbes Amira

    2005-08-01

    The introduction of multispectral imaging in pathology problems such as the identification of prostatic cancer is recent. Unlike the conventional RGB color space, it allows the acquisition of a large number of spectral bands within the visible spectrum, resulting in feature vectors of size greater than 100. At such high dimensionality, pattern recognition techniques suffer from the well-known curse of dimensionality. The two standard remedies are feature extraction and feature selection. In this paper, a novel feature selection technique using tabu search with an intermediate-term memory is proposed. The cost of a feature subset is measured by the leave-one-out correct-classification rate of a nearest-neighbor (1-NN) classifier. Experiments have been carried out on prostate cancer textured multispectral images, and the results have been compared with a reported classical feature extraction technique, indicating a significant boost in performance both in minimizing the number of features and in maximizing classification accuracy.
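
The cost function described above - leave-one-out correct-classification rate of a 1-NN classifier on a candidate feature subset - combines naturally with a tabu search whose moves toggle one feature at a time. A small self-contained sketch on synthetic data (the dataset, move set, and tenure are illustrative assumptions, not the paper's settings):

```python
import random

def loocv_1nn(X, y, features):
    """Leave-one-out correct-classification rate of a 1-NN classifier
    restricted to the given feature subset (the subset cost above)."""
    if not features:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best_d, best_j = None, None
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in features)
            if best_d is None or d < best_d:
                best_d, best_j = d, j
        correct += y[i] == y[best_j]
    return correct / len(X)

def tabu_feature_selection(X, y, n_feats, tenure=3, iters=30):
    """Tabu search over feature subsets; each move toggles one feature."""
    current = set(range(n_feats))
    best, best_score = set(current), loocv_1nn(X, y, current)
    tabu_until = {}  # feature index -> iteration until which it is tabu
    for it in range(iters):
        moves = []
        for f in range(n_feats):
            cand = current ^ {f}
            s = loocv_1nn(X, y, cand)
            if tabu_until.get(f, -1) < it or s > best_score:  # aspiration
                moves.append((-s, len(cand), f, cand))
        if not moves:
            continue
        neg_s, _, f, current = min(moves)  # best accuracy, then fewest features
        tabu_until[f] = it + tenure
        if -neg_s > best_score or (-neg_s == best_score and len(current) < len(best)):
            best, best_score = set(current), -neg_s
    return best, best_score

# Synthetic data: features 0 and 1 separate the classes, 2 and 3 are noise.
rng = random.Random(1)
X, y = [], []
for label in (0, 1):
    for _ in range(10):
        X.append([label + rng.gauss(0, 0.1), label + rng.gauss(0, 0.1),
                  rng.gauss(0, 1), rng.gauss(0, 1)])
        y.append(label)
feats, score = tabu_feature_selection(X, y, 4)
```

The tie-break on subset size mirrors the paper's twin goals of maximizing accuracy while minimizing the number of retained features; an intermediate-term memory, as in the paper, would additionally bias the search toward features that recur in good subsets.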

  6. Tabu search metaheuristic applied to the transport network design problem [Meta heurística tabu search aplicada ao problema de projeto de redes de transporte]

    Directory of Open Access Journals (Sweden)

    Leonardo Campo DalI'Orto

    2009-12-01

    In classical optimization, the service network design problem is formulated as a mixed-integer problem. This approach yields a formulation with a large number of variables and constraints. Using enumeration techniques to solve it is extremely expensive in computational time, and in a dynamic context the problem is even more acute. Our idea is to decompose the network into several subproblems, each rooted at a terminal (node), and solve them one by one. Each subproblem represents the operation of a dispatcher in a given period and iteration. The solution strategy for each subproblem is to find an initial feasible solution and improve it using a metaheuristic; in our case, we use the ejection-chain and neighbourhood search techniques found in the tabu search metaheuristic. The objective is to quickly find a high-quality solution.

  7. A Hybrid Tabu Search Algorithm for a Real-World Open Vehicle Routing Problem Involving Fuel Consumption Constraints

    Directory of Open Access Journals (Sweden)

    Yunyun Niu

    2018-01-01

    Outsourcing logistics operations to third-party logistics providers has attracted more attention in the past several years, yet very few papers have analyzed fuel consumption models in the context of outsourced logistics. This problem involves more complexity than the traditional open vehicle routing problem (OVRP), because the calculation of fuel emissions depends on many factors, such as vehicle speed, road angle, total load, engine friction, and engine displacement. This paper proposes a green open vehicle routing problem (GOVRP) model with fuel consumption constraints for outsourced logistics operations, together with a hybrid tabu search algorithm to solve it. Experiments were conducted on instances based on realistic road data from Beijing, China, considering that outsourcing logistics plays an increasingly important role in China's freight transportation. Open routes were compared with closed routes through statistical analysis of the cost components. Compared with closed routes, open routes reduce the total cost by 18.5%, with the fuel emissions cost down by nearly 29.1% and the driver cost down by 13.8%. The effect of different vehicle types was also studied: over all the 60- and 120-node instances, the mean total cost using light-duty vehicles is the lowest.

  8. An iterated tabu search heuristic for the Single Source Capacitated Facility Location Problem

    DEFF Research Database (Denmark)

    Ho, Sin C.

    2015-01-01

    This paper discusses the Single Source Capacitated Facility Location Problem (SSCFLP), which consists in determining a subset of capacitated facilities to be opened in order to satisfy the customers' demands such that total costs are minimized. The paper presents an iterated tabu search [...] competitive with other metaheuristic approaches for solving the SSCFLP.

  9. Hybrid Multistarting GA-Tabu Search Method for the Placement of BtB Converters for Korean Metropolitan Ring Grid

    Directory of Open Access Journals (Sweden)

    Remund J. Labios

    2016-01-01

    This paper presents a method to determine the optimal locations for installing back-to-back (BtB) converters in a power grid as a countermeasure to reduce fault current levels. The installation of BtB converters can be regarded as network reconfiguration. A hybrid multistarting GA-tabu search method was used to determine the best locations from a preselected list of candidates. The constraints include circuit breaker fault current limits, proximity of proposed locations, and the ability of the solution to reach power flow convergence. A simple power injection model, applied after line-opening on selected branches, was used to compute power flows with BtB converters, and Kron reduction was applied for network reduction to allow fast evaluation of fault currents for a given topology. Simulations of the search method were performed on the Korean power system, particularly the Seoul metropolitan area.

  10. Solving forest management problems with integer constraints using tabu search [Solução de problemas de planejamento florestal com restrições de inteireza utilizando busca tabu]

    Directory of Open Access Journals (Sweden)

    Flávio Lopes Rodrigues

    2003-10-01

    This work aimed to develop and test an algorithm based on the tabu search (TS) metaheuristic for solving forest management problems with integer constraints. The problems evaluated had between 93 and 423 decision variables, subject to singularity constraints and minimum and maximum periodic production constraints, and all aimed at maximizing the net present value. The TS algorithm was coded in Delphi 5.0, and the tests were performed on an AMD K6-II 500 MHz microcomputer with 64 MB of RAM and a 15 GB hard disk. The performance of TS was evaluated in terms of efficacy and efficiency. Different values or categories of the TS parameters were tested and compared with respect to their effects on the efficacy of the algorithm. The best parameter configuration was selected with the L&O test at 1% probability, and the analyses used descriptive statistics. The best parameter configuration gave TS a mean efficacy of 95.97% of the mathematical optimum, with a minimum of 90.39%, a maximum of 98.84%, and a coefficient of variation of 2.48%. For the largest problem, TS was twice as efficient as the exact branch-and-bound algorithm, making it a very attractive approach for solving important forest management problems.

  11. A Runtime Performance Predictor for Selecting Tabu Tenures

    Science.gov (United States)

    Allen, John A.; Minton, Steven N.

    1997-01-01

    One of the drawbacks of parameter-based systems, such as tabu search, is the difficulty of finding the correct parameter for a particular problem. Often, rule-of-thumb advice is given that may have little or no applicability to the domain or problem instance at hand. This paper describes the application of a general technique, Runtime Performance Predictors (RPPs), which can be used to determine, in an efficient manner, the correct tabu tenure for a particular problem instance. The details of the approach and a demonstration using a variant of GSAT are presented.

  12. Solving the competitive facility location problem considering the reactions of competitor with a hybrid algorithm including Tabu Search and exact method

    Science.gov (United States)

    Bagherinejad, Jafar; Niknam, Azar

    2018-03-01

    In this paper, a leader-follower competitive facility location problem considering the reactions of the competitors is studied. A model for locating new facilities and determining quality levels for the facilities of the leader firm is proposed. Moreover, changes in the location and quality of existing facilities in a competitive market where a competitor offers the same goods or services are taken into account. The competitor could react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share captured by each facility depends on its distance to the customer and its quality, calculated based on the probabilistic Huff model. Each firm aims to maximize its profit subject to constraints on quality levels and the budget for setting up new facilities. The problem is formulated as a bi-level mixed-integer non-linear model and solved using a combination of Tabu Search with an exact method. The performance of the proposed algorithm is compared with an upper bound obtained by applying Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds solutions near the upper bound in a reasonable time.

  14. On metaheuristic "failure modes": a case study in Tabu search for job-shop scheduling.

    Energy Technology Data Exchange (ETDEWEB)

    Watson, Jean-Paul

    2005-06-01

    In this paper, we analyze the relationship between pool maintenance schemes, long-term memory mechanisms, and search space structure, with the goal of placing metaheuristic design on a more concrete foundation.

  15. A tabu-search for minimising the carry-over effects value of a round-robin tournament

    Directory of Open Access Journals (Sweden)

    MP Kidd

    2010-12-01

    A player b in a round-robin sports tournament receives a carry-over effect from another player a if some third player opposes a in round i and b in round i+1. Let γ(a,b) denote the number of times player b receives a carry-over effect from player a during a tournament. Then the carry-over effects value of the entire tournament T on n players is given by Γ(T) = Σ_i Σ_j γ(i,j)². Furthermore, let Γ(n) denote the minimum carry-over effects value over all round-robin tournaments on n players. A strict lower bound on Γ(n) is n(n−1) (in which case there exists a round-robin tournament of order n such that each player receives a carry-over effect from each other player exactly once), and it is known that this bound is attained for n = 2^r or n = 20, 22. It is also known that round-robin tournaments can be constructed from so-called starters; round-robin tournaments constructed in this way are called cyclic. It has previously been shown that cyclic round-robin tournaments have the potential of admitting small values of Γ(T), and in this paper a tabu search is used to find starters which produce cyclic tournaments with small carry-over effects values. The best solutions in the literature are matched for n ≤ 22, and new upper bounds are established on Γ(n) for 24 ≤ n ≤ 40.
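
Under the definitions above, Γ(T) can be computed directly from a schedule. A sketch that builds a round-robin by the standard circle method (not the starter-based cyclic construction the paper searches over) and evaluates its carry-over effects value, counting the wrap-around from the last round back to the first so that the γ counts total n(n−1):

```python
def circle_round_robin(n):
    """Circle-method round robin for even n: n-1 rounds of n/2 pairings."""
    players = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(players[i], players[n - 1 - i]) for i in range(n // 2)])
        # keep players[0] fixed, rotate the rest one step
        players = [players[0], players[-1]] + players[1:-1]
    return rounds

def carry_over_value(rounds, n):
    """Gamma(T) = sum over (a, b) of gamma(a, b)^2, where gamma(a, b) counts
    third players who oppose a in round i and b in round i+1 (cyclically)."""
    opp = [[0] * n for _ in rounds]  # opp[r][p] = opponent of p in round r
    for r, pairs in enumerate(rounds):
        for a, b in pairs:
            opp[r][a], opp[r][b] = b, a
    gamma = [[0] * n for _ in range(n)]
    for r in range(len(rounds)):
        nxt = (r + 1) % len(rounds)
        for third in range(n):
            gamma[opp[r][third]][opp[nxt][third]] += 1
    return sum(g * g for row in gamma for g in row)

rounds = circle_round_robin(8)
value = carry_over_value(rounds, 8)  # lower bound for n = 8 is 8 * 7 = 56
```

Since the γ(a,b) are non-negative integers summing to n(n−1), the sum of squares is at least n(n−1), with equality exactly when every γ(a,b) = 1; the tabu search in the paper explores starters to push Γ(T) toward that bound.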

  16. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    Science.gov (United States)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this

  17. A granular tabu search algorithm for a real case study of a vehicle routing problem with a heterogeneous fleet and time windows

    Directory of Open Access Journals (Sweden)

    Jose Bernal

    2017-10-01

    Purpose: We consider a real case study of a vehicle routing problem with a heterogeneous fleet and time windows (HFVRPTW) for a franchise company bottling Coca-Cola products in Colombia. This study aims to determine the routes to be performed to fulfill the demand of the customers by using a heterogeneous fleet and considering soft time windows. The objective is to minimize the distance traveled by the performed routes. Design/methodology/approach: We propose a two-phase heuristic algorithm in which, after a construction phase (first phase), a granular tabu search is applied during the improvement phase (second phase). Two additional procedures help the algorithm escape local optima when no improvement has been found for a given number of iterations. Findings: Computational experiments on real instances show that the proposed algorithm obtains high-quality solutions within a short computing time compared with the results found by the software the company currently uses to plan its daily routes. Originality/value: We propose a novel metaheuristic algorithm for solving a real routing problem with a heterogeneous fleet and time windows. Its efficiency has been tested on real instances, and the computational experiments show its applicability and performance in solving NP-hard routing problems with similar characteristics. The proposed algorithm improved some of the company's current solutions by reducing route length and the number of vehicles.

  18. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    Science.gov (United States)

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-10-01

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
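
In the 2D HP model used above, a conformation is a self-avoiding walk on the square lattice, and its energy is −1 for every pair of H residues that are lattice neighbours but not adjacent in the chain. A minimal energy-evaluation sketch (the sequence and fold below are toy values, not one of the paper's 20 benchmark structures):

```python
def hp_energy(sequence, moves):
    """Energy of a 2D-lattice HP conformation: -1 per H-H pair that are
    lattice neighbours but not chain neighbours; None if the walk
    self-intersects. `moves` are absolute steps U/D/L/R from (0, 0)."""
    step = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}
    pos = [(0, 0)]
    for m in moves:
        dx, dy = step[m]
        pos.append((pos[-1][0] + dx, pos[-1][1] + dy))
    if len(set(pos)) != len(pos):
        return None  # self-intersecting walk: infeasible conformation
    where = {p: i for i, p in enumerate(pos)}
    energy = 0
    for i, p in enumerate(pos):
        if sequence[i] != 'H':
            continue
        for dx, dy in step.values():
            j = where.get((p[0] + dx, p[1] + dy))
            # j > i + 1 counts each non-bonded neighbour pair exactly once
            if j is not None and j > i + 1 and sequence[j] == 'H':
                energy -= 1
    return energy

# Toy example: HHHH folded into a unit square has one non-bonded H-H contact.
e = hp_energy("HHHH", "RUL")
```

Any of the search methods in the record (HEPSO, hill climbing, greedy, or a tabu table over recent moves) can then be read as minimizing this energy over feasible move strings.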

  19. (AJST) TABU SEARCH HEURISTIC FOR UNIVERSITY COURSE ...

    African Journals Online (AJOL)

    Therefore, further fine tuning of parameters might bring even better results. REFERENCES. 1. Abdennadher S, Michael M. (1999), University. Timetabling using Constraint Handling rule, Journal of Applied Artificial Intelligence, Special issues on. Constraint Handling Rules. 2. Cooper T., Kingston J. (1996). The Complexity of.

  20. Transmission network expansion planning based on hybridization model of neural networks and harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ameli

    2012-01-01

    Full Text Available Transmission Network Expansion Planning (TNEP) is a basic part of power network planning that determines where, when and how many new transmission lines should be added to the network. TNEP is thus an optimization problem in which the expansion objectives are optimized. Artificial Intelligence (AI) tools such as Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS) and Artificial Neural Networks (ANNs) are methods used for solving the TNEP problem. Today, by using hybridization models of AI tools, we can solve the TNEP problem for large-scale systems, which shows the effectiveness of utilizing such models. In this paper, a new approach hybridizing Probabilistic Neural Networks (PNNs) and the Harmony Search Algorithm (HSA) was used to solve the TNEP problem. Finally, by considering the uncertain role of the load based on a scenario technique, this proposed model was tested on the Garver’s 6-bus network.

  1. Tag Based Audio Search Engine

    OpenAIRE

    Parameswaran Vellachu; Sunitha Abburu

    2012-01-01

    The volume of the music database is increasing day by day. Getting the required song as per the listener's choice is a big challenge, and it is hard to manage this huge quantity in terms of searching and filtering through the music database. It is surprising that the audio and music industry still relies on very simplistic metadata to describe music files. However, when searching audio resources, an efficient "Tag Based Audio Search Engine" is necessary. The current researc...

  2. An algorithm based on granular tabu search for the problem of balancing public bikes by using multiple vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Linfati

    2014-01-01

    Full Text Available Public bike-sharing systems have gained great importance in European countries and around the world, creating the need for advanced techniques to support decision making. A public bike-sharing system consists of a set of stations where bikes can be picked up and returned, and a central depot with a fleet of vehicles that collects surplus bikes and transports them to stations with a deficit (i.e., where demand exceeds supply). One of the main problems arising in public bike-sharing systems is balancing, which consists of sending bikes from stations with a surplus (spare bikes) to stations with a shortage (missing bikes). The problem is modeled as an adaptation of the vehicle routing problem with pickup and delivery (VRPPD), allowing each route to make partial deliveries to customers and limiting the number of customers visited per route. This article introduces a mixed-integer linear programming model and a metaheuristic based on granular tabu search to find solutions. Instances with 15 to 500 customers, adapted from the literature, are used. The computational results show that the proposed algorithm finds solutions within bounded computing times.
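    The core idea of a granular tabu search, restricting moves to those that introduce "short" arcs whose length stays below a granularity threshold, can be sketched as follows (a toy 2-opt search on a single route; the threshold rule, tabu tenure and instance are illustrative assumptions, not the paper's implementation):

    ```python
    import itertools
    import math

    def tour_length(tour, pts):
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def granular_tabu_2opt(pts, iters=200, tenure=5, beta=1.5):
        """Tiny granular tabu search for a TSP-like route.

        Only 2-opt moves that would introduce an arc shorter than
        beta * (average arc length) are examined (the 'granular'
        neighbourhood); executed moves are tabu for `tenure` iterations.
        """
        n = len(pts)
        avg = (sum(math.dist(a, b) for a, b in itertools.combinations(pts, 2))
               / (n * (n - 1) / 2))
        threshold = beta * avg
        tour = list(range(n))
        best = tour[:]
        tabu = {}  # move -> iteration until which it is tabu
        for it in range(iters):
            candidate, cand_len, cand_move = None, float("inf"), None
            for i in range(n - 1):
                for j in range(i + 2, n):
                    a, b = tour[i], tour[j]
                    if math.dist(pts[a], pts[b]) > threshold:
                        continue  # outside the granular neighbourhood
                    move = (min(a, b), max(a, b))
                    if tabu.get(move, -1) >= it:
                        continue  # recently used, skip
                    new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                    length = tour_length(new, pts)
                    if length < cand_len:
                        candidate, cand_len, cand_move = new, length, move
            if candidate is None:
                break  # every granular move is tabu
            tour = candidate          # accept best candidate, even if worse
            tabu[cand_move] = it + tenure
            if tour_length(tour, pts) < tour_length(best, pts):
                best = tour[:]
        return best, tour_length(best, pts)

    pts = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 1)]
    best, length = granular_tabu_2opt(pts)
    print(round(length, 2))
    ```

    The granular filter is what keeps the neighbourhood small enough to scale to the 500-customer instances mentioned above.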

  3. Location-based Web Search

    Science.gov (United States)

    Ahlers, Dirk; Boll, Susanne

    In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.

  4. Content Based Searching for INIS

    International Nuclear Information System (INIS)

    Jain, V.; Jain, R.K.

    2016-01-01

    Full text: Whatever a user wants is available on the Internet, but to retrieve the information efficiently, a multilingual and most-relevant document search engine is a must. Most current search engines are word based or pattern based: they do not consider the meaning of the query posed to them, rely purely on its keywords, offer no support for multilingual queries, and do not dismiss non-relevant results. Current information-retrieval techniques either rely on an encoding process, using a certain perspective or classification scheme to describe a given item, or perform a full-text analysis, searching for user-specified words. Neither case guarantees content matching, because an encoded description might reflect only part of the content, and the mere occurrence of a word does not necessarily reflect the document’s content. For general documents, there does not yet seem to be a much better option than lazy full-text analysis and manually going through those endless result pages. In contrast, a new search engine should extract the meaning of the query and then perform the search based on this extracted meaning. A new search engine should also employ Interlingua-based machine translation technology to present information in the language of choice of the user. (author)

  5. Semoogle - An Ontology Based Search Engine

    OpenAIRE

    Aghajani, Nooshin

    2012-01-01

    In this thesis, we present a prototype search engine to show how a semantic search application based on ontology techniques saves time for the user and improves the quality of relevant search results compared to a traditional search engine. The system is built as a query-improvement module, which uses an ontology and sorts the search results into four predefined categories. The first and most important part of the implementation of the search engine prototype is to apply ontology ...

  6. Distributed search engine architecture based on topic specific searches

    Science.gov (United States)

    Abudaqqa, Yousra; Patel, Ahmed

    2015-05-01

    Search engines (SEs) abound, and the number of users performing online searches on the Web continues to grow. Tens of billions of searches are performed every day, typically offering users many irrelevant results that are time consuming and costly to sift through. It has therefore become difficult for existing Web SEs to provide complete, relevant and up-to-date responses to users' search queries. To overcome this problem, we developed the Distributed Search Engine Architecture (DSEA), a new means of smart information query and retrieval for the World Wide Web (WWW). In a DSEA, multiple autonomous search engines, owned by different organizations or individuals, cooperate and act as a single search engine. This paper reports the work in this research focusing on the development of a DSEA based on topic-specific specialized search engines. In a DSEA, the results for a specific query can be provided by any of the participating search engines, without the user being aware of which one. An important design goal of using topic-specific search engines in this research is to build systems that can be used effectively by a large number of users simultaneously. Efficient and effective usage with good response is important, because it involves leveraging the vast amount of searched data from the World Wide Web by categorizing it into condensed, focused, topic-specific results that meet users' queries. The design model and the development of the DSEA adopt a Service Directory (SD) to route queries towards topic-specific document-hosting SEs. The model displays acceptable performance consistent with the requirements of the users, and its evaluation results return a very high priority score associated with each keyword frequency.

  7. Evidence-based Medicine Search: a customizable federated search engine.

    Science.gov (United States)

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  8. Mathematical programming solver based on local search

    CERN Document Server

    Gardi, Frédéric; Darlay, Julien; Estellon, Bertrand; Megel, Romain

    2014-01-01

    This book covers local search for combinatorial optimization and its extension to mixed-variable optimization. Although not yet understood from the theoretical point of view, local search is the paradigm of choice for tackling large-scale real-life optimization problems. Today's end-users demand interactivity with decision support systems. For optimization software, this means obtaining good-quality solutions quickly. Fast iterative improvement methods, like local search, are suited to satisfying such needs. Here the authors show local search in a new light, in particular presenting a new kind of mathematical programming solver, namely LocalSolver, based on neighborhood search. First, an iconoclast methodology is presented to design and engineer local search algorithms. The authors' concern about industrializing local search approaches is of particular interest for practitioners. This methodology is applied to solve two industrial problems with high economic stakes. Software based on local search induces ex...

  9. Complete local search with memory

    NARCIS (Netherlands)

    Ghosh, D.; Sierksma, G.

    2000-01-01

    Neighborhood search heuristics like local search and its variants are some of the most popular approaches to solve discrete optimization problems of moderate to large size. Apart from tabu search, most of these heuristics are memoryless. In this paper we introduce a new neighborhood search heuristic

  10. Algorithm of axial fuel optimization based in progressive steps of turned search

    International Nuclear Information System (INIS)

    Martin del Campo, C.; Francois, J.L.

    2003-01-01

    The development of an algorithm for the axial fuel optimization of boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution at each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as restrictions. As the search advances through the optimization stages, the fineness of the evaluation of the investigated designs increases. The algorithm consists of three stages: Genetic Algorithms are used in the first, and Tabu Search in the two that follow. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), taking as restrictions the average enrichment obtained by the best design of the first stage along with the other restrictions. The third stage, very similar to the previous one, begins with the design of the previous stage but evaluates the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of the Unit 1 reactor of the Laguna Verde power plant (U1-CLV) is presented. The results obtained show an advance in the handling of optimization methods and in the construction of the objective functions that should be used for the different design stages of fuel assemblies. (Author)
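    The staged scheme described, where each stage re-optimizes the previous stage's best solution under a new objective while earlier goals become constraints, can be sketched generically (the toy "enrichment vector" problem, the step size and the steepest-descent inner search are illustrative placeholders, not the reactor-physics codes used in the paper):

    ```python
    def steepest(start, objective, feasible, step=0.25, iters=200):
        """Deterministic inner search used by every stage: repeatedly
        move to the best feasible single-coordinate perturbation."""
        x = list(start)
        for _ in range(iters):
            best_c = None
            for i in range(len(x)):
                for d in (-step, step):
                    c = list(x)
                    c[i] = min(5.0, max(0.0, c[i] + d))  # keep in [0, 5]
                    bar = objective(best_c) if best_c is not None else objective(x)
                    if feasible(c) and objective(c) < bar:
                        best_c = c
            if best_c is None:
                break  # no feasible improving move left
            x = best_c
        return x

    def staged_search(start, stages):
        """Run the stages in sequence; each stage starts from the
        previous stage's best solution, mimicking the serial scheme."""
        x = start
        for objective, feasible in stages:
            x = steepest(x, objective, feasible)
        return x

    # Stage 1: minimise mean "enrichment" while keeping enough "energy".
    stage1 = (lambda x: sum(x) / len(x),
              lambda x: sum(x) >= 8.0)
    # Stage 2: flatten the profile; stage-1 goals survive as constraints.
    stage2 = (lambda x: max(x) - min(x),
              lambda x: sum(x) >= 8.0 and sum(x) / len(x) <= 2.1)
    best = staged_search([3.0, 3.0, 3.0, 3.0], [stage1, stage2])
    print(best)
    ```

    Swapping the inner search for a genetic algorithm in stage 1 and Tabu Search in later stages, as the abstract describes, only changes `steepest`; the staged driver stays the same.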

  11. Module-Based Synthesis of Digital Microfluidic Biochips with Droplet-Aware Operation Execution

    DEFF Research Database (Denmark)

    Maftei, Elena; Pop, Paul; Madsen, Jan

    2013-01-01

    , which means that we know the exact position of droplets inside the modules at each time-step. We propose a Tabu Search-based metaheuristic for the synthesis of digital biochips with droplet-aware operation execution. Experimental results show that our approach can significantly reduce the application...

  12. Comparative Searching of Computer Data Bases

    Science.gov (United States)

    And Others; Beauchamp, R. O., Jr.

    1973-01-01

    Methods for retrieval of information on chemical compounds utilizing several computer data bases are compared to determine scope of data base coverage. Preparation of search questions is outlined and comparative results are reported indicating the yield from each data base. (6 references) (Author/NH)

  13. Space based microlensing planet searches

    Directory of Open Access Journals (Sweden)

    Tisserand Patrick

    2013-04-01

    Full Text Available The discovery of extra-solar planets is arguably the most exciting development in astrophysics during the past 15 years, rivalled only by the detection of dark energy. Two projects unite the communities of exoplanet scientists and cosmologists: the proposed ESA M class mission EUCLID and the large space mission WFIRST, top ranked by the Astronomy 2010 Decadal Survey report. The latter states that: “Space-based microlensing is the optimal approach to providing a true statistical census of planetary systems in the Galaxy, over a range of likely semi-major axes”. They also add: “This census, combined with that made by the Kepler mission, will determine how common Earth-like planets are over a wide range of orbital parameters”. We will present a status report of the results obtained by microlensing on exoplanets and the new objectives of the next generation of ground based wide field imager networks. We will finally discuss the fantastic prospect offered by space based microlensing at the horizon 2020–2025.

  14. Formulation space search approach for the teacher/class timetabling problem

    Directory of Open Access Journals (Sweden)

    Kochetov Yuri

    2008-01-01

    Full Text Available We consider the well-known NP-hard teacher/class timetabling problem. Variable neighborhood search and tabu search heuristics are developed based on the idea of the Formulation Space Search approach. Two types of solution representation are used in the heuristics. For each representation we consider two families of neighborhoods. The first family uses swapping of time periods in a teacher's (class's) timetable. The second family is based on the idea of large Kernighan-Lin neighborhoods. Computational results for difficult random test instances show the high efficiency of the proposed approach.
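    The first neighbourhood family, swapping two time periods within one teacher's timetable, can be sketched as follows (the toy timetable and the conflict-counting objective are illustrative assumptions, not the paper's instance format):

    ```python
    import itertools

    def conflicts(timetable):
        """Count pairs of teachers meeting the same class in the same period."""
        total = 0
        periods = len(next(iter(timetable.values())))
        for p in range(periods):
            seen = {}
            for teacher, slots in timetable.items():
                c = slots[p]
                if c is not None:
                    total += seen.get(c, 0)   # one conflict per earlier meeting
                    seen[c] = seen.get(c, 0) + 1
        return total

    def best_period_swap(timetable):
        """Examine every (teacher, period-pair) swap and return the best
        resulting timetable -- one step in the swap neighbourhood."""
        periods = len(next(iter(timetable.values())))
        best, best_val = timetable, conflicts(timetable)
        for teacher in timetable:
            for p, q in itertools.combinations(range(periods), 2):
                cand = {t: list(s) for t, s in timetable.items()}
                cand[teacher][p], cand[teacher][q] = cand[teacher][q], cand[teacher][p]
                val = conflicts(cand)
                if val < best_val:
                    best, best_val = cand, val
        return best, best_val

    tt = {"T1": ["A", "B"],   # teacher T1 meets class A, then class B
          "T2": ["A", None]}  # T2 also meets class A in period 0 -> conflict
    improved, value = best_period_swap(tt)
    print(value)
    ```

    A variable neighborhood or tabu search layer would iterate such steps, switching to the second (Kernighan-Lin style) family when this one stalls.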

  15. Using Tabu Search Heuristics in Solving the Vehicle Routing ...

    African Journals Online (AJOL)

    Nafiisah

    assumed that the traffic is fluid, in other words, no traffic jam occurs on the road. For each region, the earliest time was calculated by considering the earliest time the first customer of that region needed to be serviced while the latest time to leave that region was obtained by considering the latest time that the last customer in ...

  16. Integrating genetic algorithms and tabu search for unit commitment ...

    African Journals Online (AJOL)

    Optimization is the art of obtaining optimum result under given circumstances. In design, construction and maintenance of any engineering system, Engineers have to take many technological and managerial decisions at several stages. The ultimate goal of all such decisions is to either maximize the desired benefit or to ...

  17. Implementation of a Tabu Search Heuristic for the Examinations ...

    African Journals Online (AJOL)

    This paper reports on the design and implementation of an algorithm for the construction of an examinations timetable. The Examinations Timetabling Problem is the problem of assigning examinations and candidates to time periods and examination rooms while satisfying a set of specific constraints. Every University has a ...

  18. Tabu search techniques for large high-school timetabling problems

    NARCIS (Netherlands)

    A. Schaerf

    1996-01-01

    textabstractThe high-school timetabling problem regards the weekly scheduling for all the lectures of a high school. The problem consists in assigning lectures to periods in such a way that no teacher (or class) is involved in more than one lecture at a time, and other side constraints are

  19. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

    Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool. While GPS information provides reasonably good accuracy, it is not always present in all handheld devices, nor is it accurate in all situations, for example in urban environments. We propose a system to provide location-based services using image searches without requiring GPS. The goal of this system is to assist tourists in cities with additional information using their mobile phones and built-in cameras. Based upon the result of the image search engine and database image location knowledge, the location of the query image is determined and associated data can be presented to the user.

  20. New similarity search based glioma grading

    Energy Technology Data Exchange (ETDEWEB)

    Haegler, Katrin; Brueckmann, Hartmut; Linn, Jennifer [Ludwig-Maximilians-University of Munich, Department of Neuroradiology, Munich (Germany); Wiesmann, Martin; Freiherr, Jessica [RWTH Aachen University, Department of Neuroradiology, Aachen (Germany); Boehm, Christian [Ludwig-Maximilians-University of Munich, Department of Computer Science, Munich (Germany); Schnell, Oliver; Tonn, Joerg-Christian [Ludwig-Maximilians-University of Munich, Department of Neurosurgery, Munich (Germany)

    2012-08-15

    MR-based differentiation between low- and high-grade gliomas is predominately based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information of CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (= probability-based classifier) was defined and used for grading. Biopsy was used as gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone. (orig.)

  2. Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)

    Science.gov (United States)

    Bendig, Regina B.

    2009-01-01

    The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engines), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…

  3. Personalizing Web Search based on User Profile

    OpenAIRE

    Utage, Sharyu; Ahire, Vijaya

    2016-01-01

    Web search engines are the most widely used tools for information retrieval from the World Wide Web, helping users find the most useful information. When different users search for the same information, a search engine provides the same results without understanding who submitted the query. Personalized web search is a search technique for providing more useful results. This paper models the preferences of users as hierarchical user profiles. A framework called UPS is proposed. It generalizes profiles and m...

  4. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    Science.gov (United States)

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and others such as the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems, finding the minimal potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.

  5. DIFFERENTIAL SEARCH ALGORITHM BASED EDGE DETECTION

    Directory of Open Access Journals (Sweden)

    M. A. Gunen

    2016-06-01

    Full Text Available In this paper, a new method is presented for extracting edge information by using the Differential Search optimization algorithm. The proposed method is based on a new heuristic image-thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.

  6. Strategies for searching and managing evidence-based practice resources.

    Science.gov (United States)

    Robb, Meigan; Shellenbarger, Teresa

    2014-10-01

    Evidence-based nursing practice requires the use of effective search strategies to locate relevant resources to guide practice change. Continuing education and staff development professionals can assist nurses to conduct effective literature searches. This article provides suggestions for strategies to aid in identifying search terms. Strategies also are recommended for refining searches by using controlled vocabulary, truncation, Boolean operators, PICOT (Population/Patient Problem, Intervention, Comparison, Outcome, Time) searching, and search limits. Suggestions for methods of managing resources also are identified. Using these approaches will assist in more effective literature searches and may help evidence-based practice decisions. Copyright 2014, SLACK Incorporated.

  7. Top-k Keyword Search Over Graphs Based On Backward Search

    Directory of Open Access Journals (Sweden)

    Zeng Jia-Hui

    2017-01-01

    Full Text Available Keyword search is one of the most user-friendly and intuitive information retrieval methods. Using keyword search to retrieve a connected subgraph has many applications in graph-based cognitive computation, and it is a basic technology. This paper focuses on top-k keyword search over graphs. We implemented a keyword search algorithm that applies the backward search idea: the algorithm first locates the keyword vertices, and then applies backward search to find rooted trees that contain the query keywords. The experiments show that query time is affected by the number of iterations of the algorithm.
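    The backward-search idea, expanding a search from every keyword vertex over reversed edges and reporting a root once it has been reached from all of them, can be sketched as follows (toy adjacency-list graph; ranking roots by the sum of root-to-keyword distances is an illustrative assumption, not necessarily the paper's cost model):

    ```python
    import heapq
    from collections import deque

    def backward_topk(graph, keyword_nodes, k=2):
        """graph: {u: [v, ...]} directed edges u -> v.
        keyword_nodes: iterable of node sets, one set per query keyword.
        Returns up to k roots that reach some node of every keyword set,
        ranked by the sum of root-to-keyword distances."""
        # Build reversed adjacency so we can walk edges backwards.
        rev = {}
        for u, vs in graph.items():
            for v in vs:
                rev.setdefault(v, []).append(u)
        # BFS backwards from each keyword's nodes.
        dist_per_kw = []
        for nodes in keyword_nodes:
            dist = {n: 0 for n in nodes}
            dq = deque(nodes)
            while dq:
                v = dq.popleft()
                for u in rev.get(v, []):
                    if u not in dist:
                        dist[u] = dist[v] + 1
                        dq.append(u)
            dist_per_kw.append(dist)
        # A root is any vertex reached by every backward search.
        roots = set(dist_per_kw[0])
        for d in dist_per_kw[1:]:
            roots &= set(d)
        scored = [(sum(d[r] for d in dist_per_kw), r) for r in roots]
        return heapq.nsmallest(k, scored)

    g = {"a": ["b", "c"], "b": ["x"], "c": ["y"], "d": ["y"]}
    print(backward_topk(g, [{"x"}, {"y"}], k=2))
    ```

    In a real engine the keyword sets come from an inverted index mapping each query keyword to the vertices containing it; the rooted answer tree is recovered by remembering BFS parents.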

  8. Complex Sequencing Problems and Local Search Heuristics

    NARCIS (Netherlands)

    Brucker, P.; Hurink, Johann L.; Osman, I.H.; Kelly, J.P.

    1996-01-01

    Many problems can be formulated as complex sequencing problems. We will present problems in flexible manufacturing that have such a formulation and apply local search methods like iterative improvement, simulated annealing and tabu search to solve these problems. Computational results are reported.

  9. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    Science.gov (United States)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, json, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, sub-setting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.

  10. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether he/she should correct the segment or not. By means of a search engine, the checker also suggests use of other expressions that appear on the Web more often than the expression he/she actually wrote.
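    The underlying heuristic, flagging a phrase whose exact-match hit count is far below that of candidate alternatives, can be sketched as follows (the hit-count table is a stand-in for the counts a search engine would return; the threshold ratio is an illustrative assumption):

    ```python
    # Stand-in for exact-phrase hit counts a Web search engine would return.
    CORPUS_HITS = {
        "depends on the": 1_200_000,
        "depends of the": 3_000,
        "interested in learning": 800_000,
    }

    def looks_suspicious(phrase, alternatives, ratio=100):
        """Warn if some alternative phrasing is `ratio` times more frequent
        on the (simulated) Web than the phrase the user wrote."""
        own = CORPUS_HITS.get(phrase, 0)
        scored = [(CORPUS_HITS.get(alt, 0), alt) for alt in alternatives]
        top_hits, top_alt = max(scored)
        if own == 0 or top_hits / max(own, 1) >= ratio:
            return f"did you mean '{top_alt}'?"
        return None  # the user's phrasing is common enough

    print(looks_suspicious("depends of the", ["depends on the"]))
    ```

    The checker described above goes one step further: rather than deciding for the user, it shows the retrieved Web contexts so the user can judge whether to correct the segment.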

  11. Solving Large Clustering Problems with Meta-Heuristic Search

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Andersen, Kim Allan; Bang-Jensen, Jørgen

In Clustering Problems, groups of similar subjects are to be retrieved from data sets. In this paper, Clustering Problems with the frequently used Minimum Sum-of-Squares Criterion are solved using meta-heuristic search. Tabu search has proved to be a successful methodology for solving optimization problems, but applications to large clustering problems are rare. The simulated annealing heuristic has mainly been applied to relatively small instances. In this paper, we implement tabu search and simulated annealing approaches and compare them to the commonly used k-means approach. We find that the meta...
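    The Minimum Sum-of-Squares Criterion that both the meta-heuristics and k-means try to minimize can be computed as follows (a minimal sketch with a Lloyd-style k-means baseline for comparison; this is not the paper's tabu search or simulated annealing implementation):

    ```python
    import math
    import random

    def ssq(points, assignment, k):
        """Sum of squared distances from each point to its cluster centroid."""
        total = 0.0
        for c in range(k):
            members = [p for p, a in zip(points, assignment) if a == c]
            if not members:
                continue
            cx = sum(x for x, _ in members) / len(members)
            cy = sum(y for _, y in members) / len(members)
            total += sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in members)
        return total

    def kmeans(points, k, iters=20, seed=0):
        """Plain Lloyd's algorithm: alternate assignment and centroid update."""
        rng = random.Random(seed)
        centroids = rng.sample(points, k)
        for _ in range(iters):
            assignment = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                          for p in points]
            for c in range(k):
                members = [p for p, a in zip(points, assignment) if a == c]
                if members:
                    centroids[c] = (sum(x for x, _ in members) / len(members),
                                    sum(y for _, y in members) / len(members))
        return assignment

    pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
    assign = kmeans(pts, 2)
    print(ssq(pts, assign, 2))
    ```

    A tabu search for the same criterion would instead move single points between clusters, forbid recently reversed moves, and accept non-improving moves to escape the local optima in which k-means gets stuck.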

  12. Search and optimization by metaheuristics techniques and algorithms inspired by nature

    CERN Document Server

    Du, Ke-Lin

    2016-01-01

    This textbook provides a comprehensive introduction to nature-inspired metaheuristic methods for search and optimization, including the latest trends in evolutionary algorithms and other forms of natural computing. Over 100 different types of these methods are discussed in detail. The authors emphasize non-standard optimization problems and utilize a natural approach to the topic, moving from basic notions to more complex ones. An introductory chapter covers the necessary biological and mathematical backgrounds for understanding the main material. Subsequent chapters then explore almost all of the major metaheuristics for search and optimization created based on natural phenomena, including simulated annealing, recurrent neural networks, genetic algorithms and genetic programming, differential evolution, memetic algorithms, particle swarm optimization, artificial immune systems, ant colony optimization, tabu search and scatter search, bee and bacteria foraging algorithms, harmony search, biomolecular computin...

  13. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

    Full Text Available The Internet has become for most of us a daily instrument, used for professional or personal reasons. We hardly remember the times when a computer and a broadband connection were luxury items. More and more people rely on the complicated web network to find the information they need. This paper presents an overview of Internet search related issues and search engines, and describes the parties and the basic mechanism embedded in a search for web-based information resources. It also presents ways to increase the efficiency of web searches through a better understanding of what search engines ignore in website content.

  14. Axial fuel optimization algorithm based on progressive stages of neighborhood search; Algoritmo de optimizacion axial de combustible basado en etapas progresivas de busqueda de entorno

    Energy Technology Data Exchange (ETDEWEB)

    Martin del Campo, C.; Francois, J.L. [Laboratorio de Analisis en Ingenieria de Reactores Nucleares, FI-UNAM, Paseo Cuauhnahuac 8532, Jiutepec, Morelos (Mexico)

    2003-07-01

    The development of an algorithm for the axial fuel optimization of boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization stages advance, the fineness of the evaluation of the investigated designs increases. The algorithm consists of three stages: the first uses genetic algorithms and the following two use tabu search. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), taking as constraints the average enrichment obtained by the best design of the first stage together with the other restrictions. The third stage, very similar to the previous one, starts from the design of the previous stage but searches the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of Unit 1 of the Laguna Verde power plant (U1-CLV) is presented. The obtained results show an advance in the handling of optimization methods and in the construction of the objective functions to be used for the different design stages of the fuel assemblies. (Author)

  15. Enhancing Image Retrieval System Using Content Based Search ...

    African Journals Online (AJOL)

    ... performing the search on the entire image database, the image category option directs the retrieval engine to the specified category. Also, there is provision to update or modify the different image categories in the image database as need arise. Keywords: Content-based, Multimedia, Search Engine, Image-based, Texture ...

  16. Considerations for the development of task-based search engines

    DEFF Research Database (Denmark)

    Petcu, Paula; Dragusin, Radu

    2013-01-01

    Based on previous experience from working on a task-based search engine, we present a list of suggestions and ideas for an Information Retrieval (IR) framework that could inform the development of next generation professional search systems. The specific task that we start from is the clinicians' information need in finding rare disease diagnostic hypotheses at the time and place where medical decisions are made. Our experience from the development of a search engine focused on supporting clinicians in completing this task has provided us valuable insights into what aspects should be considered by the developers of vertical search engines.

  17. Flexible Triangle Search Algorithm for Block-Based Motion Estimation

    Directory of Open Access Journals (Sweden)

    Andreas Antoniou

    2007-01-01

    Full Text Available A new fast algorithm for block-based motion estimation, the flexible triangle search (FTS) algorithm, is presented. The algorithm is based on the simplex method of optimization adapted to an integer grid. The proposed algorithm is highly flexible due to its ability to quickly change its search direction and to move towards the target of the search criterion. It is also capable of increasing or decreasing its search step size to allow coarser or finer search. Unlike other fast search algorithms, the FTS can escape from inferior local minima and thus converge to better solutions. The FTS was implemented as part of the H.264 encoder and was compared with several other block matching algorithms. The results obtained show that the FTS can reduce the number of block matching comparisons by around 30–60% with negligible effect on the image quality and compression ratio.
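The abstract describes a simplex-style search adapted to an integer grid, with reflection moves and step-size shrinking. The following is a simplified sketch of such a search, not the authors' exact FTS: the move rules and the synthetic quadratic cost standing in for the SAD block-matching criterion are assumptions.

```python
def flexible_triangle_search(cost, start=(0, 0), max_iter=100):
    """Simplex-style search over an integer motion-vector grid (sketch)."""
    tri = [start, (start[0] + 1, start[1]), (start[0], start[1] + 1)]
    for _ in range(max_iter):
        tri.sort(key=cost)
        best, mid, worst = tri
        # Reflect the worst vertex through the midpoint of the better two,
        # snapping back onto the integer grid.
        cx, cy = (best[0] + mid[0]) / 2.0, (best[1] + mid[1]) / 2.0
        refl = (round(2 * cx - worst[0]), round(2 * cy - worst[1]))
        if cost(refl) < cost(worst) and refl not in tri:
            tri[2] = refl  # move toward the promising direction
        else:
            # Shrink the triangle toward its best vertex (finer search step)
            shrunk = [best] + [
                (round((v[0] + best[0]) / 2.0), round((v[1] + best[1]) / 2.0))
                for v in (mid, worst)
            ]
            if shrunk == tri:
                break  # triangle cannot shrink further: converged
            tri = shrunk
    return min(tri, key=cost)

# Synthetic quadratic cost stands in for the SAD block-matching criterion;
# the true motion vector here is (3, 2).
sad = lambda mv: (mv[0] - 3) ** 2 + (mv[1] - 2) ** 2
print(flexible_triangle_search(sad))  # → (3, 2)
```

The triangle expands through reflections while improvement continues and shrinks when it stalls, mirroring the coarse-to-fine behaviour the abstract describes.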

  18. Attribute-Based Proxy Re-Encryption with Keyword Search

    Science.gov (United States)

    Shi, Yanfeng; Liu, Jiqiang; Han, Zhen; Zheng, Qingji; Zhang, Rui; Qiu, Shuo

    2014-01-01

    Keyword search on encrypted data allows one to issue a search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search, which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with a specific access control policy. To be specific, this notion allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate the keyword search capability to other data users in a fine-grained access control manner by allowing the cloud to re-encrypt the stored encrypted data (embedding some form of access control policy). We formalize the syntax and security definitions for this notion, and propose two concrete constructions: key-policy and ciphertext-policy. In a nutshell, our constructions can be treated as an integration of technologies from the fields of attribute-based cryptography and proxy re-encryption cryptography. PMID:25549257

  19. IBRI-CASONTO: Ontology-based semantic search engine

    Directory of Open Access Journals (Sweden)

    Awny Sayed

    2017-11-01

    Full Text Available The vast availability of information, added at a very fast pace to data repositories, creates a challenge in extracting correct and accurate information, and has increased the competition among developers to gain access to technology that seeks to understand the researcher's intent and the contextual meaning of terms. The competition for developing Arabic semantic search systems is still in its infancy, which can be traced back to the complexity of the Arabic language: it has complex morphological, grammatical and semantic aspects, as it is a highly inflectional and derivational language. In this paper, we highlight and present an ontological search engine called IBRI-CASONTO for the Colleges of Applied Sciences, Oman. Our proposed engine supports both the Arabic and English languages and employs two types of search: a keyword-based search and a semantics-based search. IBRI-CASONTO is based on different technologies such as Resource Description Framework (RDF) data and an ontological graph. The experiments are presented in two sections: first, a comparison between the Entity-Search and the Classical-Search inside IBRI-CASONTO itself; second, a comparison of the Entity-Search of IBRI-CASONTO with currently used search engines, such as Kngine, Wolfram Alpha and the most popular engine nowadays, Google, in order to measure their performance and efficiency.

  20. Stochastic search techniques for post-fault restoration of electrical ...

    Indian Academy of Sciences (India)

    Three stochastic search techniques have been used to find the optimal sequence of operations required to restore supply in an electrical distribution system on the occurrence of a fault. The three techniques are the genetic algorithm,simulated annealing and the tabu search. The performance of these techniques has been ...

  1. Ranking of XML Files by Compiler Based Adaptive Search

    OpenAIRE

    Varun Varma Sangaraju; Mary Posonia

    2014-01-01

    The ranking of XML files can be performed by using adaptive keyword search and reverse indexing of the XML data, within which critical metrics are weighed and assigned to the XML data. A compiler for correcting search words acts as an added benefit. The proposed system can act as an upgrade to the existing XML keyword searching pattern. The search results obtained in LCA-based systems are non-adjustable, and some important features like compactness and size are missed. So this proposed syste...

  2. Search for brown dwarfs in the IRAS data bases

    International Nuclear Information System (INIS)

    Low, F.J.

    1986-01-01

    A report is given on the initial searches for brown dwarf stars in the IRAS data bases. The paper was presented to the workshop on 'Astrophysics of brown dwarfs', Virginia, USA, 1985. To date no brown dwarfs have been discovered in the solar neighbourhood. Opportunities for future searches with greater sensitivity and different wavelengths are outlined. (U.K.)

  3. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. Using only one criticality run to obtain the initial k_eff and the differential coefficients of the concerned parameter, a polynomial estimator of the k_eff response function is solved to obtain the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite inspiring and that the method overcomes the disadvantages of the traditional one. (authors)
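The mechanism described, one criticality run yielding k_eff and its differential coefficients, then a polynomial estimator solved for the critical parameter value, can be sketched as follows (the function name and the made-up coefficients are illustrative, not from the paper):

```python
# Sketch: a single criticality run supplies k_eff at x0 plus derivatives with
# respect to a control parameter x (e.g. boron concentration); the Taylor
# polynomial k_eff(x) is then solved for k_eff = 1 by Newton iteration,
# avoiding many separate criticality calculations.

def critical_value(k0, derivs, x0=0.0, target=1.0):
    """Solve the Taylor-polynomial estimate of k_eff(x) = target near x0."""
    # Polynomial coefficients c_n of k_eff(x0 + dx) = sum_n c_n * dx^n
    coeffs = [k0]
    fact = 1.0
    for n, d in enumerate(derivs, start=1):
        fact *= n
        coeffs.append(d / fact)
    dx = 0.0
    for _ in range(100):
        p = sum(c * dx**n for n, c in enumerate(coeffs)) - target
        dp = sum(n * c * dx**(n - 1) for n, c in enumerate(coeffs) if n > 0)
        if dp == 0:
            break  # flat estimator: stop rather than divide by zero
        step = p / dp
        dx -= step
        if abs(step) < 1e-12:
            break
    return x0 + dx

# Linear example: k_eff(x) = 1.02 - 0.004*x  →  critical at x = 5
print(round(critical_value(k0=1.02, derivs=[-0.004]), 6))  # → 5.0
```

With higher-order derivatives the same routine solves a quadratic or cubic estimator instead of the linear one.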

  4. Evidence-based librarianship: searching for the needed EBL evidence.

    Science.gov (United States)

    Eldredge, J D

    2000-01-01

    This paper discusses the challenges of finding the evidence needed to implement Evidence-Based Librarianship (EBL). Focusing first on database coverage of three health sciences librarianship journals, the article examines the information contents of different databases, the strategies needed to search for relevant evidence in the library literature via these databases, and the problems associated with searching the grey literature of librarianship. Database coverage, plausible search strategies, and the grey literature of library science all pose challenges to finding the research evidence needed for practicing EBL. Health sciences librarians need to ensure that systems are designed to track and provide access to the research evidence needed to support Evidence-Based Librarianship (EBL).

  5. Multi-objective Search-based Mobile Testing

    OpenAIRE

    Mao, K.

    2017-01-01

    Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis...

  6. Footprint: Tourism Information Search based on Mixed Reality

    OpenAIRE

    Vinothini Kasinathan; Aida Mustapha; Yeap Chee Seong; Aida Zamnah Zainal Abidin

    2017-01-01

    In the quest to provide better information seeking experience during travelling, this paper is set to design, build and trial a prototype tourism information search application based on mixed reality. This paper proposes “Footprint”, an android-based tourism information search application in a mixed reality environment, whereby it overlays tourism-related information on the image that the mobile phone camera is focused at. Using Footprint, the user would only need to point the mobile phone ca...

  7. Obtaining control rod patterns for a BWR using tabu search

    International Nuclear Information System (INIS)

    Castillo, A.; Ortiz, J.J.; Alonso, G.; Morales, L.B.; Valle, E. del

    2004-01-01

    The results obtained by implementing the tabu search technique to optimize control rod patterns in a BWR type reactor, using the CM-PRESTO code, are presented. The control rod patterns were obtained for the fuel reload designs obtained in a previous work using the same technique. The results correspond to an 18-month cycle using 112 fresh fuel assemblies enriched to 3.53% U-235. The tabu search technique used forbids recently visited moves in the positions corresponding to the axial positions of the control rods; additionally, a tabu tenure matrix is used to manage a variable tabu list size, and the objective function is penalized with the frequency of the forbidden moves. The obtained control rod patterns improve the cycle length with respect to the reference values and satisfy the safety constraints. (Author)
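The memory structures mentioned above (a tenure that forbids recently visited moves and a frequency penalty on the objective) are the standard ingredients of tabu search. A generic sketch on a toy problem, with a hidden target pattern standing in for the core simulator evaluation; all constants and names are illustrative:

```python
import random

def tabu_search(cost, n, iters=200, tenure=5, freq_weight=0.1, seed=1):
    """Generic tabu search over binary vectors with single-bit-flip moves.

    Short-term memory (tabu tenure) forbids recently flipped positions, a
    frequency matrix penalizes overused moves, and an aspiration criterion
    overrides the tabu status when a move yields a new best solution.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best, best_cost = x[:], cost(x)
    tabu_until = [0] * n           # iteration until which each move is tabu
    freq = [0] * n                 # long-term memory: move usage counts
    for it in range(1, iters + 1):
        candidates = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            penalized = c + freq_weight * freq[i]
            aspiration = c < best_cost
            if it >= tabu_until[i] or aspiration:
                candidates.append((penalized, c, i, y))
        if not candidates:
            continue
        _, c, i, x = min(candidates)
        tabu_until[i] = it + tenure
        freq[i] += 1
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

# Toy objective: Hamming distance to a hidden target pattern (hypothetical
# stand-in for a 3D core simulator evaluating a control rod pattern).
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda x: sum(a != b for a, b in zip(x, target))
print(tabu_search(cost, len(target)))  # → ([1, 0, 1, 1, 0, 0, 1, 0], 0)
```

In the reactor setting the bit vector would be replaced by axial rod positions and the toy cost by a full core simulation.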

  8. Content-based Music Search and Recommendation System

    Science.gov (United States)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data suiting their preferences from such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By exploiting the visualization of the feature space of music and the visualization of the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system gives the user information obtained from the user profile when searching music data and information obtained from the feature space of music when editing the user profile.

  9. Investigating the enhanced Best Performance Algorithm for Annual Crop Planning problem based on economic factors.

    Science.gov (United States)

    Adewumi, Aderemi Oluyinka; Chetty, Sivashan

    2017-01-01

    The Annual Crop Planning (ACP) problem is a recently introduced problem in the literature. This study further expounds on the problem by presenting a new mathematical formulation based on market economic factors. To determine solutions, a new local search metaheuristic algorithm called the enhanced Best Performance Algorithm (eBPA) is investigated. eBPA's results are compared against two well-known local search metaheuristic algorithms: Tabu Search and Simulated Annealing. The results show the potential of the eBPA for continuous optimization problems.

  10. Update on CERN Search based on SharePoint 2013

    Science.gov (United States)

    Alvarez, E.; Fernandez, S.; Lossent, A.; Posada, I.; Silva, B.; Wagner, A.

    2017-10-01

    CERN’s enterprise Search solution “CERN Search” provides a central search solution for users and CERN service providers. A total of about 20 million public and protected documents from a wide range of document collections is indexed, including Indico, TWiki, Drupal, SharePoint, JACOW, E-group archives, EDMS, and CERN Web pages. In spring 2015, CERN Search was migrated to a new infrastructure based on SharePoint 2013. In the context of this upgrade, the document pre-processing and indexing process was redesigned and generalised. The new data feeding framework allows to profit from new functionality and it facilitates the long term maintenance of the system.

  11. Robust object tracking based on self-adaptive search area

    Science.gov (United States)

    Dong, Taihang; Zhong, Sheng

    2018-02-01

    Discriminative correlation filter (DCF) based trackers have recently achieved excellent performance with great computational efficiency. However, DCF based trackers suffer from boundary effects, which result in unstable performance in challenging situations exhibiting fast motion. In this paper, we propose a novel method to mitigate this side-effect in DCF based trackers. We change the search area according to the prediction of target motion. When the object moves fast, a broad search area alleviates boundary effects and preserves the probability of locating the object. When the object moves slowly, a narrow search area prevents the effect of useless background information and improves computational efficiency to attain real-time performance. This strategy can appreciably alleviate boundary effects in situations exhibiting fast motion and motion blur, and it can be used in almost all DCF based trackers. The experiments on the OTB benchmark show that the proposed framework improves performance compared with the baseline trackers.
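The core idea, widening the search window when the predicted motion is fast and narrowing it when slow, can be sketched in a few lines (the constants and the clamping policy are assumptions, not values from the paper):

```python
def adapt_search_radius(prev_pos, pos, base=16, gain=1.5, lo=8, hi=64):
    """Scale the correlation-filter search window with predicted target speed.

    Fast motion → wider window (less boundary-effect truncation); slow motion
    → narrow window (less background clutter, cheaper correlation). All
    constants here are illustrative, not taken from the paper.
    """
    speed = ((pos[0] - prev_pos[0]) ** 2 + (pos[1] - prev_pos[1]) ** 2) ** 0.5
    return max(lo, min(hi, round(base + gain * speed)))

print(adapt_search_radius((100, 100), (100, 102)))  # slow target → 19
print(adapt_search_radius((100, 100), (140, 130)))  # fast target → capped at 64
```

The returned radius would then define the crop fed to the DCF at the next frame.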

  12. SHOP: scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Linusson, Anna; Zamora, Ismael

    2007-01-01

    A new GRID-based method for scaffold hopping (SHOP) is presented. In a fully automatic manner, scaffolds were identified in a database based on three types of 3D-descriptors. SHOP's ability to recover scaffolds was assessed and validated by searching a database spiked with fragments of known...

  13. SEARCH

    International Development Research Centre (IDRC) Digital Library (Canada)

    Chaitali Sinha

    Annex B: Checklist for submitting a concept note under IDRC-SEARCH ....... 17 .... include primary research and/or synthesis of existing studies, to generate new knowledge. The .... of data among different groups of users (community health workers, health officials.

  14. A novel line segment detection algorithm based on graph search

    Science.gov (United States)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To overcome the problem of extracting line segments from an image, a line segment detection method based on a graph search algorithm is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. The adjacency relationships of the candidate segments are depicted by a graph model, based on which the depth-first search algorithm is employed to determine which adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. Comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
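The two post-processing steps described, depth-first search over a segment-adjacency graph to decide which segments to merge, followed by a least-squares line fit, can be sketched as follows (the adjacency encoding is an assumption for illustration):

```python
# Sketch: DFS over an adjacency graph groups candidate segments that should
# be merged, then a closed-form least-squares fit turns each group's edge
# points into one straight line.

def connected_groups(adjacency):
    """DFS over an adjacency dict {segment_id: [neighbour ids]}."""
    seen, groups = set(), []
    for start in adjacency:
        if start in seen:
            continue
        stack, group = [start], []
        while stack:
            s = stack.pop()
            if s in seen:
                continue
            seen.add(s)
            group.append(s)
            stack.extend(adjacency[s])
        groups.append(sorted(group))
    return groups

def fit_line(points):
    """Least-squares fit y = a*x + b for a group's merged edge points."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Two adjacent candidate segments on the same line, plus one isolated segment
adjacency = {0: [1], 1: [0], 2: []}
print(connected_groups(adjacency))          # → [[0, 1], [2]]
print(fit_line([(0, 1), (1, 3), (2, 5)]))   # → (2.0, 1.0)
```

A full detector would also fit near-vertical groups with x = a*y + b to avoid the infinite-slope case.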

  15. Modified Biogeography-Based Optimization with Local Search Mechanism

    Directory of Open Access Journals (Sweden)

    Quanxi Feng

    2013-01-01

    Full Text Available Biogeography-based optimization (BBO) is a new effective population optimization algorithm based on biogeography theory, with inherently insufficient exploration capability. To address this limitation, we propose a modified BBO with a local search mechanism (denoted as MLBBO). In MLBBO, a modified migration operator, which can adopt more information from other habitats, is integrated into BBO to enhance the exploration ability. Then, a local search mechanism is used in BBO to complement the modified migration operator. Extensive experimental tests are conducted on 27 benchmark functions to show the effectiveness of the proposed algorithm. The simulation results are compared with the original BBO, DE, improved BBO algorithms, and other evolutionary algorithms. Finally, the performance of the modified migration operator and the local search mechanism is also discussed.

  16. Solving a chemical batch scheduling problem by local search

    NARCIS (Netherlands)

    Brucker, P.; Hurink, Johann L.

    1999-01-01

    A chemical batch scheduling problem is modelled in two different ways as a discrete optimization problem. Both models are used to solve the batch scheduling problem in a two-phase tabu search procedure. The method is tested on real-world data.

  17. Can social tagged images aid concept-based video search?

    NARCIS (Netherlands)

    Setz, A.T.; Snoek, C.G.M.

    2009-01-01

    This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We

  18. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  19. Constraint-based local search for container stowage slot planning

    DEFF Research Database (Denmark)

    Pacino, Dario; Jensen, Rune Møller; Bebbington, Tom

    2012-01-01

    -sea vessels. This paper describes the constrained-based local search algorithm used in the second phase of this approach where individual containers are assigned to slots in each bay section. The algorithm can solve this problem in an average of 0.18 seconds per bay, corresponding to a 20 seconds runtime...

  20. Pharmacophore alignment search tool: influence of the third dimension on text-based similarity searching.

    Science.gov (United States)

    Hähnke, Volker; Klenner, Alexander; Rippmann, Friedrich; Schneider, Gisbert

    2011-06-01

    Previously (Hähnke et al., J Comput Chem 2010, 31, 2810) we introduced the concept of nonlinear dimensionality reduction for canonization of two-dimensional layouts of molecular graphs as foundation for text-based similarity searching using our Pharmacophore Alignment Search Tool (PhAST), a ligand-based virtual screening method. Here we apply these methods to three-dimensional molecular conformations and investigate the impact of these additional degrees of freedom on virtual screening performance and assess differences in ranking behavior. Best-performing variants of PhAST are compared with 16 state-of-the-art screening methods with respect to significance estimates for differences in screening performance. We show that PhAST sorts new chemotypes on early ranks without sacrificing overall screening performance. We succeeded in combining PhAST with other virtual screening techniques by rank-based data fusion, significantly improving screening capabilities. We also present a parameterization of double dynamic programming for the problem of small molecule comparison, which allows for the calculation of structural similarity between compounds based on one-dimensional representations, opening the door to a holistic approach to molecule comparison based on textual representations. Copyright © 2011 Wiley Periodicals, Inc.
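The rank-based data fusion mentioned above can be illustrated with a simple Borda-style scheme, one common way to fuse rankings (the paper does not specify this exact variant, and the molecule names are hypothetical):

```python
def rank_fusion(rankings):
    """Rank-based data fusion (Borda-style): sum each compound's ranks
    across methods and re-sort; a lower total rank means a stronger
    consensus hit."""
    totals = {}
    for ranking in rankings:
        for rank, compound in enumerate(ranking):
            totals[compound] = totals.get(compound, 0) + rank
    return sorted(totals, key=lambda c: (totals[c], c))

# Hypothetical screening outputs from two methods over the same library
phast = ["mol_A", "mol_C", "mol_B", "mol_D"]
docking = ["mol_C", "mol_A", "mol_D", "mol_B"]
print(rank_fusion([phast, docking]))  # → ['mol_A', 'mol_C', 'mol_B', 'mol_D']
```

Working on ranks rather than raw scores sidesteps the problem that different screening methods produce incomparable score scales.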

  1. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...... information. These features along with additional information such as the URL location and the date of index procedure are stored in a database. The user can access and search this indexed content through the Web with an advanced and user friendly interface. The output of the system is a set of links...

  2. Computer-Assisted Search Of Large Textual Data Bases

    Science.gov (United States)

    Driscoll, James R.

    1995-01-01

    "QA" denotes high-speed computer system for searching diverse collections of documents including (but not limited to) technical reference manuals, legal documents, medical documents, news releases, and patents. Incorporates previously available and emerging information-retrieval technology to help user intelligently and rapidly locate information found in large textual data bases. Technology includes provision for inquiries in natural language; statistical ranking of retrieved information; artificial-intelligence implementation of semantics, in which "surface level" knowledge found in text used to improve ranking of retrieved information; and relevance feedback, in which user's judgements of relevance of some retrieved documents used automatically to modify search for further information.

  3. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

    Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whose quality is evaluated considering the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
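A minimal sketch of the harmony search loop described above: each new "harmony" is recombined from memory, occasionally pitch-adjusted, or improvised at random, and replaces the worst memory entry when better. A smooth toy objective stands in for the Otsu/Kapur thresholding criteria; all parameter values are illustrative:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=7):
    """Minimal harmony search: minimize objective over box-bounded vectors.

    hms: harmony memory size; hmcr: memory-consideration rate; par: pitch
    adjustment rate; bw: pitch adjustment bandwidth.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    memory = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]       # recall a remembered value
                if rng.random() < par:
                    v += rng.uniform(-bw, bw)   # pitch-adjust it slightly
            else:
                v = rng.uniform(*bounds[d])     # improvise a fresh value
            lo, hi = bounds[d]
            new.append(min(hi, max(lo, v)))
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):
            memory[memory.index(worst)] = new   # replace the worst harmony
    return min(memory, key=objective)

# Toy stand-in: for image segmentation the vector would hold threshold
# levels and the objective would be the negated Otsu between-class variance.
best = harmony_search(lambda v: (v[0] - 0.3) ** 2 + (v[1] - 0.7) ** 2,
                      bounds=[(0, 1), (0, 1)])
print([round(x, 2) for x in best])  # close to [0.3, 0.7]
```

For actual multilevel thresholding each dimension would be an integer gray level and the bounds the histogram range.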

  4. Searching for adaptive traits in genetic resources - phenology based approach

    Science.gov (United States)

    Bari, Abdallah

    2015-04-01

    Abdallah Bari, Kenneth Street, Eddy De Pauw, Jalal Eddin Omari, and Chandra M. Biradar (International Center for Agricultural Research in the Dry Areas, Rabat Institutes, Rabat, Morocco). Phenology is an important plant trait, not only for assessing and forecasting food production but also for searching genebanks for adaptive traits. Among the phenological parameters we have been considering in the search for such adaptive and rare traits are the onset (sowing period) and the seasonality (growing period). Currently an application is being developed, as part of the focused identification of germplasm strategy (FIGS) approach, to use climatic data to identify crop growing seasons and characterize them in terms of onset and duration. These approximations of growing-period characteristics can then be used to estimate flowering and maturity dates for dryland crops, such as wheat, barley, faba bean, lentils and chickpea, and to assess, among others, phenology-related traits such as days to heading [dhe] and grain filling period [gfp]. The approach followed here is based on first calculating long-term average daily temperatures by fitting a curve to the monthly data over days from the beginning of the year. Prior to the identification of these phenological stages, the onset is first extracted from onset integer raster GIS layers developed from a model of the growing period that considers both moisture and temperature limitations. The paper presents some examples of real applications of the approach to search for rare and adaptive traits.
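The curve-fitting step described, estimating long-term average daily temperatures from monthly data, can be sketched with a single-harmonic least-squares fit (the sinusoidal model, the mid-month day numbers, and the sample temperatures are assumptions for illustration):

```python
import math

def fit_annual_temperature(monthly_means):
    """Fit T(day) = a + b*cos(w*day) + c*sin(w*day), w = 2*pi/365, to twelve
    monthly mean temperatures by linear least squares, giving a smooth
    long-term daily temperature curve."""
    w = 2 * math.pi / 365.0
    days = [15 + 30.4 * m for m in range(12)]   # approx. mid-month day numbers
    X = [[1.0, math.cos(w * d), math.sin(w * d)] for d in days]
    # Normal equations (X^T X) beta = X^T y, solved by Gaussian elimination
    A = [[sum(X[k][i] * X[k][j] for k in range(12)) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * monthly_means[k] for k in range(12)) for i in range(3)]
    for i in range(3):                          # forward elimination
        p = A[i][i]
        for r in range(i + 1, 3):
            f = A[r][i] / p
            A[r] = [x - f * y for x, y in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * 3
    for i in (2, 1, 0):                         # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, 3))) / A[i][i]
    a, bc, bs = beta
    return lambda day: a + bc * math.cos(w * day) + bs * math.sin(w * day)

# Hypothetical monthly mean temperatures (degrees C) for a Mediterranean site
monthly = [8, 9, 12, 15, 19, 24, 27, 27, 23, 18, 12, 9]
daily_T = fit_annual_temperature(monthly)
print(round(daily_T(200), 1))   # mid-July estimate from the fitted curve
```

The resulting daily curve is what would feed thermal-time calculations of onset, flowering, and maturity dates.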

  5. An improved version of Inverse Distance Weighting metamodel assisted Harmony Search algorithm for truss design optimization

    Directory of Open Access Journals (Sweden)

    Y. Gholipour

    Full Text Available This paper focuses on a metamodel-based design optimization algorithm, with the intention of improving its computational cost and convergence rate. The metamodel-based optimization method introduced here provides the means to reduce the computational cost and improve the convergence rate of the optimization through a surrogate. The algorithm is a combination of a high-quality approximation technique called Inverse Distance Weighting and a meta-heuristic algorithm called Harmony Search. The outcome is then polished by a semi-tabu search algorithm. The algorithm adopts a filtering system to determine the solution vectors to which exact simulation should be applied. The performance of the algorithm is evaluated on standard truss design problems, showing a significant decrease in computational effort and an improvement in convergence rate.
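The Inverse Distance Weighting surrogate at the heart of the method can be sketched as follows (the cached sample data are hypothetical; the real algorithm couples this with harmony search and a filtering step):

```python
def idw_predict(samples, x, power=2.0):
    """Inverse Distance Weighting surrogate: predict the response at x as a
    distance-weighted average of already-simulated designs, so the expensive
    exact simulation (e.g. truss analysis) is only run where the surrogate
    looks promising."""
    num = den = 0.0
    for xi, yi in samples:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        if d2 == 0.0:
            return yi                    # exact match: reuse the simulation
        w = 1.0 / d2 ** (power / 2)      # weight falls off with distance^power
        num += w * yi
        den += w
    return num / den

# Hypothetical cache of (design vector, simulated structural weight) pairs
cache = [((0.0, 0.0), 10.0), ((1.0, 0.0), 14.0), ((0.0, 1.0), 12.0)]
print(idw_predict(cache, (0.0, 0.0)))            # → 10.0 (known design)
print(round(idw_predict(cache, (0.5, 0.5)), 2))  # → 12.0 (equidistant average)
```

Candidate designs whose surrogate prediction passes the filter would then be handed to the exact simulator, and the result appended to the cache.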

  6. Local search heuristics for the probabilistic dial-a-ride problem

    DEFF Research Database (Denmark)

    Ho, Sin C.; Haugland, Dag

    2011-01-01

    evaluation procedure in a pure local search heuristic and in a tabu search heuristic. The quality of the solutions obtained by the two heuristics have been compared experimentally. Computational results confirm that our neighborhood evaluation technique is much faster than the straightforward one...

  7. Supporting inter-topic entity search for biomedical Linked Data based on heterogeneous relationships.

    Science.gov (United States)

    Zong, Nansu; Lee, Sungin; Ahn, Jinhyun; Kim, Hong-Gee

    2017-08-01

    The keyword-based entity search restricts search space based on the preference of search. When the given keywords and preferences are not related to the same biomedical topic, existing biomedical Linked Data search engines fail to deliver satisfactory results. This research aims to tackle this issue by supporting an inter-topic search, that is, improving search when the inputs, keywords and preferences, fall under different topics. This study developed an effective algorithm in which the relations between biomedical entities were used in tandem with a keyword-based entity search, Siren. The algorithm, PERank, which is an adaptation of Personalized PageRank (PPR), uses a pair of inputs, (1) search preferences and (2) entities from a keyword-based entity search with a keyword query, to formalize the search results on-the-fly based on the index of the precomputed Individual Personalized PageRank Vectors (IPPVs). Our experiments were performed over ten linked life datasets for two query sets, one with keyword-preference topic correspondence (intra-topic search), and the other without (inter-topic search). The experiments showed that the proposed method achieved better search results than the baseline keyword-based search engine, for example a 14% increase in precision for the inter-topic search. The proposed method improved the keyword-based biomedical entity search by supporting the inter-topic search without affecting the intra-topic search, based on the relations between different entities. Copyright © 2017 Elsevier Ltd. All rights reserved.
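    Personalized PageRank, which PERank adapts, concentrates the random-walk restart mass on the preferred nodes. A minimal power-iteration sketch, not the paper's indexed IPPV implementation; the three-node entity graph and preference vector are made up:

```python
def personalized_pagerank(adj, preference, alpha=0.85, iters=100):
    """Power iteration for Personalized PageRank: the (1 - alpha) restart
    mass always returns to the preference nodes (preference sums to 1)."""
    n = len(adj)
    p = list(preference)
    for _ in range(iters):
        nxt = [(1 - alpha) * preference[i] for i in range(n)]
        for u, nbrs in enumerate(adj):
            if nbrs:
                share = alpha * p[u] / len(nbrs)
                for v in nbrs:
                    nxt[v] += share
            else:
                # dangling node: redistribute its mass to the preference nodes
                for i in range(n):
                    nxt[i] += alpha * p[u] * preference[i]
        p = nxt
    return p

# Made-up three-entity graph; the search preference is entirely on entity 0
adj = [[1, 2], [2], [0]]
pref = [1.0, 0.0, 0.0]
scores = personalized_pagerank(adj, pref)
```

    Precomputing such vectors per preference node, as the abstract's IPPV index does, lets the ranking be assembled at query time instead of iterated on-the-fly.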

  8. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  9. An ICP algorithm based on block path closest point search

    Science.gov (United States)

    Wang, Kuisheng; Li, Xing; Lei, Hongwei; Zhang, Xiaorui

    2017-08-01

    At present, the traditional ICP algorithm suffers from low efficiency and low precision. To solve these two problems, an ICP algorithm based on block path closest point search is proposed in this paper. The idea of the algorithm is as follows: firstly, the point cloud data is divided into blocks, and the nearest point block corresponding to the target point cloud is searched by the path method. Secondly, once the nearest point block is found, the nearest point itself can be determined within it, completing all the closest-point matches. The experimental results show that the improved ICP algorithm is faster and more precise than the traditional ICP algorithm, and its advantage becomes more obvious for large point clouds.
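    The block idea, searching for the closest point only among nearby blocks instead of scanning the whole cloud, can be sketched on a 2-D grid. This is an illustrative reconstruction, not the paper's path method; it assumes the true nearest neighbour lies within the adjacent cells, and the point cloud and cell size are made up:

```python
import math

def build_blocks(points, cell):
    """Partition the target cloud into grid blocks keyed by cell coordinates."""
    blocks = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        blocks.setdefault(key, []).append(p)
    return blocks

def closest_in_blocks(blocks, q, cell):
    """Search only the block containing q and its 8 neighbours,
    instead of scanning every point in the cloud."""
    kx, ky = int(q[0] // cell), int(q[1] // cell)
    best, best_d = None, float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for p in blocks.get((kx + dx, ky + dy), []):
                d = math.dist(p, q)
                if d < best_d:
                    best, best_d = p, d
    return best

target = [(0.1, 0.1), (0.9, 0.9), (5.0, 5.0)]
blocks = build_blocks(target, cell=1.0)
```

    Each ICP iteration would call the block lookup once per source point, so the per-query cost depends on local density rather than on the full cloud size.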

  10. Underwater Sensor Network Redeployment Algorithm Based on Wolf Search.

    Science.gov (United States)

    Jiang, Peng; Feng, Yang; Wu, Feng

    2016-10-21

    This study addresses the optimization of node redeployment coverage in underwater wireless sensor networks. Given that nodes easily become invalid in a harsh environment, and given the large scale of underwater wireless sensor networks, an underwater sensor network redeployment algorithm was developed based on wolf search. This study applies the wolf search algorithm, combined with crowding-degree control, to the deployment of underwater wireless sensor networks. The proposed algorithm uses the nodes to ensure coverage of the events and avoids premature convergence of the nodes, achieving good coverage. In addition, considering that obstacles exist in the underwater environment, nodes are kept from becoming invalid by imitating the predator-avoidance mechanism, and the energy consumption of the network is thus reduced. Comparative analysis shows that the algorithm is simple and effective in wireless sensor network deployment. Compared with the optimized artificial fish swarm algorithm, the proposed algorithm exhibits advantages in network coverage, energy conservation, and obstacle avoidance.

  11. Disease Related Knowledge Summarization Based on Deep Graph Search.

    Science.gov (United States)

    Wu, Xiaofang; Yang, Zhihao; Li, ZhiHeng; Lin, Hongfei; Wang, Jian

    2015-01-01

    The volume of published biomedical literature on disease related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease related knowledge summarization from biomedical literature. In this approach, first Kullback-Leibler divergence combined with a mutual information metric is used to extract disease salient information. Then a deep search based on depth-first search (DFS) is applied to find hidden (indirect) relations between biomedical entities. Finally, a random walk algorithm is exploited to filter out the weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for carcinoma of the bladder and outperforms the Combo method.
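    The salience-scoring step rests on Kullback-Leibler divergence between a topic language model and a background model. A minimal sketch; the toy distributions are made up, and the paper additionally combines this with mutual information:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions over a shared vocabulary.
    Terms where the topic model puts mass but the background does not
    would need smoothing; here q is assumed strictly positive."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical term distributions: disease-related documents vs. whole corpus
topic = [0.5, 0.3, 0.2]
background = [0.25, 0.25, 0.5]
score = kl_divergence(topic, background)
```

    A term contributing a large positive share of the divergence is over-represented in the disease documents, which is what marks it as salient.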

  12. A DE-Based Scatter Search for Global Optimization Problems

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

    Full Text Available This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems by incorporating the evolution mechanism of differential evolution (DE) into the reference set update procedure of SS to act as the new solution generation method. This hybrid algorithm is called a DE-based SS (SSDE) algorithm. Since different kinds of DE mutation operators have been proposed in the literature and have shown different search abilities for different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator that is most appropriate to the current problem, an adaptive mechanism for the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population and subsequently construct a new reference set whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive or superior to some state-of-the-art algorithms in the literature.
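    The classic DE/rand/1 operator, one of the traditional mutation operators such hybrids typically adopt, can be sketched as follows. This is a generic illustration, not the SSDE code; the population and scale factor F are made up:

```python
import random

def de_rand_1(pop, i, f=0.5):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct indices different from i."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(candidates, 3)
    return [pop[r1][d] + f * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]

random.seed(0)
pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]]
mutant = de_rand_1(pop, 0)
```

    The other traditional variants (e.g. DE/best/1, DE/rand/2) differ only in which vectors anchor the difference terms, which is what makes an adaptive choice among them practical.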

  13. A search for new cobalt-based high temperature superalloys

    Science.gov (United States)

    Nyshadham, Chandramouli; Hansen, Jacob; Curtarolo, Stefano; Hart, Gus L. W.

    2015-03-01

    The discovery of a high temperature Co3(Al,W) superalloy has provided a promising avenue for the further search for other Co-based superalloys. The L12 Co3(Al,W) system is found to have higher strength and melting temperature than common Ni-based alloys. The high strength of superalloys is generally attributed to the stable or metastable austenitic face-centered cubic crystal structure. We performed an extensive series of ab-initio calculations to search for stable or metastable Co-based ternary alloys of the form Co3(A0.5B0.5). A 32-atom-cell special quasirandom structure (SQS-32) is considered to mimic the properties of the alloy at high temperatures. The results from the DFT calculations for over 780 different Co-based ternary systems are presented, along with potential candidates for future high-temperature superalloys. CN, SC and GLWH acknowledge support from ONR (MURI N00014-13-1-0635). JH acknowledges support by NSF (DMR-0908753).

  14. Visual tracking method based on cuckoo search algorithm

    Science.gov (United States)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS is presented to solve the visual tracking problem. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely, particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
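    The Lévy flight step that distinguishes cuckoo search from other swarm methods is commonly generated with Mantegna's algorithm. A generic sketch, not the tracker's code; the step scale alpha and the positions are made up:

```python
import math
import random

def levy_step(beta=1.5):
    """One heavy-tailed Lévy step via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(x, best, alpha=0.01):
    """Move one nest with a Lévy step scaled by its distance to the best nest."""
    return [xi + alpha * levy_step() * (xi - bi) for xi, bi in zip(x, best)]

random.seed(1)
new_pos = cuckoo_move([1.0, 2.0], [0.0, 0.0])
```

    The heavy tail means most steps are small local refinements while occasional long jumps escape local optima, which is what makes the sampler attractive for tracking windows.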

  15. SEARCHING AND TRACKING OF LOCATION BY PROXY BASED APPROACH

    Directory of Open Access Journals (Sweden)

    K. Nirmala

    2017-01-01

    Full Text Available Location based services have become an important class of applications in the modern world. Users can search for a location using smart phones and can also track it; it is also possible for users to retrieve information about the nearest location. The objective of this paper is to reduce the server's response time for user queries. A proxy based approach is proposed to reduce the waiting time of the user and to increase the information available about the location. Many location based services provide details in response to user queries. The proxy based approach creates an Estimated Valid Region, which reduces the number of queries reaching the server, thereby reducing the waiting time of the mobile clients due to server load.

  16. Constraint-based local search for container stowage slot planning

    DEFF Research Database (Denmark)

    Pacino, Dario; Jensen, Rune Møller; Bebbington, Tom

    2012-01-01

    Due to the economic importance of stowage planning, there has recently been an increasing interest in developing optimization algorithms for this problem. We have developed a 2-phase approach that in most cases can generate near optimal stowage plans within a few hundred seconds for large deep-sea...... vessels. This paper describes the constraint-based local search algorithm used in the second phase of this approach, where individual containers are assigned to slots in each bay section. The algorithm can solve this problem in an average of 0.18 seconds per bay, corresponding to a 20 seconds runtime...

  17. Parallel Harmony Search Based Distributed Energy Resource Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ceylan, Oguzhan [ORNL]; Liu, Guodong [ORNL]; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)]

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on voltage profile during a day as photovoltaics (PVs) output or electrical vehicles (EVs) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.

  18. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm (called MCPSO-PSH) with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree is proposed. Previous search history, memorized in the Binary Space Partitioning fitness tree, effectively restrains individuals from revisiting previously explored positions. The whole population is partitioned into several subspecies and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to a local optimum. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation problems). Experimental results show that MCPSO-PSH displays a competitive performance compared to the other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.
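    The dynamic multispecies idea rests on ordinary k-means clustering of particle positions. A minimal sketch using plain k-means on made-up 2-D points, not the MCPSO-PSH implementation:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; in the paper's setting the points would be particle
    positions and each resulting cluster would form one subspecies."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep the old centre if a cluster empties out
                centers[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centers, clusters

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers, clusters = kmeans(pts, 2)
```

    Re-running the clustering every few generations is what makes the species assignment dynamic as particles migrate between basins.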

  19. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  20. Case-Based Reasoning as a Heuristic Selector in a Hyper-Heuristic for Course Timetabling Problems

    OpenAIRE

    Petrovic, Sanja; Qu, Rong

    2002-01-01

    This paper studies Knowledge Discovery (KD) using Tabu Search and Hill Climbing within Case-Based Reasoning (CBR) as a hyper-heuristic method for course timetabling problems. The aim of the hyper-heuristic is to choose the best heuristic(s) for given timetabling problems according to the knowledge stored in the case base. KD in CBR is a 2-stage iterative process on both case representation and the case base. Experimental results are analysed and related research issues for future work are dis...

  1. An Advanced Tabu Search Approach to Solving the Mixed Payload Airlift Load Planning Problem

    Science.gov (United States)

    2009-03-01

    endeavors of this magnitude, airlift comes at a great financial cost; it is therefore imperative to utilize the Air Force’s airlift fleet in an...

  2. Use of Tabu Search in a Solver to Map Complex Networks onto Emulab Testbeds

    National Research Council Canada - National Science Library

    MacDonald, Jason E

    2007-01-01

    The University of Utah's solver for the testbed mapping problem uses a simulated annealing metaheuristic algorithm to map a researcher's experimental network topology onto available testbed resources...

  3. SAFE MOVEMENT OF HAZARDOUS MATERIALS THROUGH HEURISTIC HYBRID APPROACH: TABU SEARCH AND GAME THEORY APPLICATION

    OpenAIRE

    Hakan ASLAN

    2008-01-01

    The safe movement of hazardous materials is receiving increased attention due to growing environmental awareness of the potential health effects of a release-causing incident. A novel approach developed in this paper through a game theory interpretation provides a risk-averse solution to the hazardous materials transportation problem. The dispatcher minimizes the expected maximum disutility subject to the worst possible set of link failure probabilities, assuming that one link in the network fail...

  4. Thermal Unit Commitment Scheduling Problem in Utility System by Tabu Search Embedded Genetic Algorithm Method

    Directory of Open Access Journals (Sweden)

    C. Christober Asir Rajan

    2008-06-01

    Full Text Available The objective of this paper is to find the generation scheduling such that the total operating cost can be minimized, when subjected to a variety of constraints. This also means that it is desirable to find the optimal unit commitment in the power system for the next H hours. A 66-bus utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 24, 57 and 175 buses. Numerical results are shown comparing the cost solutions and computation time obtained by different intelligence and conventional methods.

  5. A Group Theoretic Tabu Search Methodology for Solving the Theater Distribution Vehicle Routing and Scheduling Problem

    National Research Council Canada - National Science Library

    Crino, John

    2002-01-01

    .... This dissertation applies and extends some of Colletti's (1999) seminal work in group theory and metaheuristics in order to solve the theater distribution vehicle routing and scheduling problem (TDVRSP...

  6. Application of Tabu Search to UPFC Stabilizer Adjustment at a Multi Machine Electric Power System

    OpenAIRE

    Hasan Fayazi Boroujeni; Babak Keyvani Boroujeni; Ahmad Memaripour; Meysam Eghtedari

    2012-01-01

    Unified Power Flow Controller (UPFC) is one of the most viable and important Flexible AC Transmission Systems (FACTS) devices. Application of UPFC in single machine and multi machine electric power systems has been investigated with different purposes such as power transfer capability, damping of Low Frequency Oscillations (LFO), voltage support and so forth. But an important issue in UPFC applications is to find optimal parameters of the UPFC controllers. This paper presents the application of ...

  7. A tabu search algorithm for scheduling a single robot in a job-shop environment

    NARCIS (Netherlands)

    Hurink, Johann L.; Knust, S.

    1999-01-01

    We consider a single-machine scheduling problem which arises as a subproblem in a job-shop environment where the jobs have to be transported between the machines by a single transport robot. The robot scheduling problem may be regarded as a generalization of the travelling-salesman problem with time

  8. A tabu search algorithm for scheduling a single robot in a job-shop environment

    NARCIS (Netherlands)

    Hurink, Johann L.; Knust, Sigrid

    2002-01-01

    We consider a single-machine scheduling problem which arises as a subproblem in a job-shop environment where the jobs have to be transported between the machines by a single transport robot. The robot scheduling problem may be regarded as a generalization of the travelling-salesman problem with time

  9. A tabu search algorithm for scheduling a single robot in a job-shop environment

    OpenAIRE

    Hurink, Johann; Knust, Sigrid

    2002-01-01

    We consider a single-machine scheduling problem which arises as a subproblem in a job-shop environment where the jobs have to be transported between the machines by a single transport robot. The robot scheduling problem may be regarded as a generalization of the travelling-salesman problem with time windows, where additionally generalized precedence constraints have to be respected. The objective is to determine a sequence of all nodes and corresponding starting times in the given time window...
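    The tabu search machinery these robot-scheduling records (and most records above) build on can be illustrated on a toy travelling-salesman instance, since the robot problem generalizes the TSP with time windows. This is a generic sketch with a swap neighborhood, fixed tenure, and a made-up distance matrix, not the authors' algorithm:

```python
import itertools

def tour_cost(tour, dist):
    """Length of the closed tour through all cities."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def tabu_search(dist, start, iters=50, tenure=5):
    """Best-improvement tabu search over pairwise swaps with a fixed tenure."""
    best, cur = start[:], start[:]
    tabu = {}  # swap move -> iteration until which it stays forbidden
    for it in range(iters):
        best_cost = tour_cost(best, dist)
        move, cand, cand_cost = None, None, float("inf")
        for i, j in itertools.combinations(range(len(cur)), 2):
            nbr = cur[:]
            nbr[i], nbr[j] = nbr[j], nbr[i]
            c = tour_cost(nbr, dist)
            # skip tabu moves unless they beat the best tour (aspiration)
            if tabu.get((i, j), -1) > it and c >= best_cost:
                continue
            if c < cand_cost:
                move, cand, cand_cost = (i, j), nbr, c
        if cand is None:
            break  # every move is tabu and none aspirates
        cur = cand
        tabu[move] = it + tenure  # forbid repeating this swap for a while
        if cand_cost < best_cost:
            best = cand[:]
    return best

# Made-up symmetric 4-city instance; the cheapest cycle visits 0-1-2-3 in order
dist = [[0, 1, 9, 9], [1, 0, 1, 9], [9, 1, 0, 1], [9, 9, 1, 0]]
best_tour = tabu_search(dist, [0, 2, 1, 3])
```

    Real implementations differ in the neighborhood, tenure policy, and aspiration rules; the records above add time windows, worker skills, or loading constraints on top of this skeleton.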

  10. Solving a manpower scheduling problem for airline catering using tabu search

    DEFF Research Database (Denmark)

    Ho, Sin C.; Leung, Janny M. Y.

    We study a manpower scheduling problem with job time-windows and job-skills compatibility constraints. This problem is motivated by airline catering operations, whereby airline meals and other supplies are delivered to aircrafts on the tarmac just before the flights take off. Jobs (flights) must...... be serviced within a given time-window by a team consisting of a driver and a loader. Each driver/loader has the skills to service some, but not all, of the airline/aircraft/configuration of the jobs. Given the jobs to be serviced and the roster of workers for each shift, the problem is to form teams...

  11. Using Advanced Tabu Search Approaches to Perform Enhanced Air Mobility Command Operational Airlift Analyses

    Science.gov (United States)

    2009-02-28

    provides its clients with a flexible and inclusive architecture to build applications with user friendly graphical user interfaces (GUIs) that embrace...

  12. Solving a static repositioning problem in bike-sharing systems using iterated tabu search

    DEFF Research Database (Denmark)

    Ho, Sin C.; Szeto, W. Y.

    2014-01-01

    In this paper, we study the static bike repositioning problem where the problem consists of selecting a subset of stations to visit, sequencing them, and determining the pick-up/drop-off quantities (associated with each of the visited stations) under the various operational constraints...

  13. A tabu-search for minimising the carry-over effects value of a round ...

    African Journals Online (AJOL)


  14. Information retrieval for children based on the aggregated search paradigm

    NARCIS (Netherlands)

    Duarte Torres, Sergio

    This report presents research to develop information services for children by expanding and adapting current Information retrieval technologies according to the search characteristics and needs of children. Concretely, we will employ the aggregated search paradigm as theoretical framework. The

  15. Personalized Profile Based Search Interface With Ranked and Clustered Display

    National Research Council Canada - National Science Library

    Kumar, Sachin; Oztekin, B. U; Ertoz, Levent; Singhal, Saurabh; Han, Euihong; Kumar, Vipin

    2001-01-01

    We have developed an experimental meta-search engine, which takes the snippets from traditional search engines and presents them to the user either in the form of clusters, indices or re-ranked list...

  16. Biobotic insect swarm based sensor networks for search and rescue

    Science.gov (United States)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach to surviving victims where robotic swarms or other distributed robotic systems might play a great role in achieving this faster. However, current technology falls short of offering centimeter scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  17. Proceedings of the ECIR 2012 Workshop on Task-Based and Aggregated Search (TBAS2012)

    DEFF Research Database (Denmark)

    2012-01-01

    Task-based search aims to understand the user's current task and desired outcomes, and how this may provide useful context for the Information Retrieval (IR) process. An example of task-based search is situations where additional user information on e.g. the purpose of the search or what the user already knows about the topic can provide valuable additional evidence that can significantly improve retrieval performance. Task-based search may be especially useful in cases of aggregated search, also known as integrated search in the digital libraries domain. Aggregated search describes... document and media types, and how to present results to the user. An example of aggregated search is the retrieval of scientific content, which involves searching among different domain-dependent document types and structures (e.g. full articles, short abstracts, tables of content). This workshop aims...

  18. A Systematic Understanding of Successful Web Searches in Information-Based Tasks

    Science.gov (United States)

    Zhou, Mingming

    2013-01-01

    The purpose of this study is to research how Chinese university students solve information-based problems. With the Search Performance Index as the measure of search success, participants were divided into high, medium and low-performing groups. Based on their web search logs, these three groups were compared along five dimensions of the search…

  19. Multi-Robot Searching using Game-Theory Based Approach

    Directory of Open Access Journals (Sweden)

    Yan Meng

    2008-11-01

    Full Text Available This paper proposes a game-theory based approach to multi-target searching using a multi-robot system in a dynamic environment. It is assumed that a rough a priori probability map of the targets' distribution within the environment is given. To consider the interaction between the robots, a dynamic-programming equation is proposed to estimate the utility function for each robot. Based on this utility function, a cooperative nonzero-sum game is generated, where both pure Nash Equilibrium and mixed-strategy Equilibrium solutions are presented to achieve optimal overall robot behavior. Special consideration has been given to improving the real-time performance of the game-theory based approach. Several mechanisms, such as event-driven discretization, one-step dynamic programming, and a decision buffer, have been proposed to reduce the computational complexity. The main advantage of the algorithm lies in its real-time capabilities whilst being efficient and robust to dynamic environments.

  20. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
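    The maximum parsimony criterion that PTree optimizes can be illustrated with Fitch's small-parsimony pass, which counts the minimum number of state changes for one character on a fixed tree. This is a textbook sketch, not PTree's pattern-based algorithm; the tree and leaf states are made up:

```python
def fitch_cost(tree, states):
    """Fitch small parsimony: minimum number of state changes needed
    for one character on a rooted binary tree.

    tree  : internal node -> (left child, right child)
    states: leaf node -> observed character state"""
    def visit(node):
        if node in states:
            return {states[node]}, 0
        left, right = tree[node]
        s1, c1 = visit(left)
        s2, c2 = visit(right)
        common = s1 & s2
        if common:
            return common, c1 + c2
        return s1 | s2, c1 + c2 + 1  # disagreement costs one substitution

    root = next(iter(tree))  # assumes the root was inserted first
    return visit(root)[1]

# Made-up tree ((l1,l2),(l3,l4)) with one divergent leaf state
tree = {"root": ("n1", "n2"), "n1": ("l1", "l2"), "n2": ("l3", "l4")}
states = {"l1": "A", "l2": "A", "l3": "A", "l4": "T"}
```

    Summing this cost over all alignment columns gives the parsimony score that a stochastic tree search, such as PTree's, tries to minimize over topologies.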

  1. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  2. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the huge number of images available on the Internet. There have been a number of efforts to build search engines based on image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, dramatically entered our lives over the last decade. CBIR automatically extracts target images according to the objective visual content of the image itself, for example its shapes, colors and textures, to provide a more accurate ranking of results. Recent CBIR approaches differ in which image features are extracted to serve as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features, addressing efficient retrieval of images in large image collections. A comparative study of recent CBIR techniques is provided. According to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, the thesis presents new image retrieval approaches that achieve higher retrieval accuracy than previous ones. The first proposed image retrieval system uses color, texture and shape descriptors to form a global feature vector, integrating the YCbCr color histogram as a color descriptor, a modified Fourier descriptor as a shape descriptor and a modified Edge Histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global feature vector used in the first approach with the SURF salient-point technique as a local feature. A nearest-neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. 
The second approach
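The feature-fusion idea in this entry can be sketched minimally as follows; the two toy descriptors (an intensity histogram and a gradient-sign histogram) are stand-ins I chose for the thesis's YCbCr, Fourier and edge-histogram descriptors, and the images below are made up:

```python
# Hedged sketch: fuse two simple descriptors into one global feature vector,
# then rank a tiny image database by Euclidean nearest neighbor. These
# descriptors are illustrative substitutes, not the thesis's exact ones.
import math

def intensity_hist(img, bins=4):
    """Normalized histogram of pixel intensities (0..255)."""
    h = [0] * bins
    for row in img:
        for v in row:
            h[min(v * bins // 256, bins - 1)] += 1
    n = sum(h)
    return [c / n for c in h]

def edge_hist(img):
    """Normalized counts of rising/falling/flat horizontal transitions."""
    pos = neg = zero = 0
    for row in img:
        for a, b in zip(row, row[1:]):
            d = b - a
            pos, neg, zero = pos + (d > 0), neg + (d < 0), zero + (d == 0)
    n = (pos + neg + zero) or 1
    return [pos / n, neg / n, zero / n]

def features(img):
    return intensity_hist(img) + edge_hist(img)   # simple concatenation fusion

def rank(query_img, db):
    """Return database image names, nearest feature vector first."""
    qf = features(query_img)
    return sorted(db, key=lambda name: math.dist(qf, features(db[name])))
```

For example, a near-uniform query image ranks a uniform database image above a strongly textured one, because both its color and edge descriptors match.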

  3. Empirical Evidences in Citation-Based Search Engines: Is Microsoft Academic Search dead?

    OpenAIRE

    Orduna-Malea, Enrique; Ayllon, Juan Manuel; Martin-Martin, Alberto; Lopez-Cozar, Emilio Delgado

    2014-01-01

    Purpose - The purpose of this paper is to describe the obsolescence process of Microsoft Academic Search (MAS) as well as the effects of this decline in the coverage of disciplines and journals, and their influence in the representativeness of organizations. Design/methodology/approach - The total number of records and those belonging to the most reputable journals (1,762) and organizations (346) according to the Field Rating indicator in each of the 15 fields and 204 sub-fields of MAS, ...

  4. A Secured Cognitive Agent based Multi-strategic Intelligent Search System

    Directory of Open Access Journals (Sweden)

    Neha Gulati

    2018-04-01

    Full Text Available The Search Engine (SE) is the most preferred and ubiquitously used information retrieval tool. In spite of the vast-scale involvement of users in SEs, their limited capability to understand the user/searcher context and emotions places a high cognitive, perceptual and learning load on the user to maintain the search momentum. In this regard, the present work discusses a Cognitive Agent (CA) based approach to support the user in the Web-based search process. The work suggests a framework called Secured Cognitive Agent based Multi-strategic Intelligent Search System (CAbMsISS) to assist the user in the search process. It helps to reduce the contextual and emotional mismatch between the SE and the user. After implementation of the proposed framework, performance analysis shows that the CAbMsISS framework improves Query Retrieval Time (QRT) and effectiveness in retrieving relevant results as compared to a Present Search Engine (PSE). Supplementary to this, it also provides search suggestions when a user accesses a resource previously tagged with negative emotions. Overall, the goal of the system is to enhance the search experience and keep the user motivated. The framework provides suggestions through a search log that tracks the queries searched, resources accessed and emotions experienced during the search. The implemented framework also considers user security. Keywords: BDI model, Cognitive Agent, Emotion, Information retrieval, Intelligent search, Search Engine

  5. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
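As a minimal illustration of the loop those parameters (HMCR, PAR, BW) control, the following sketch applies harmony search to a toy continuous minimization rather than the paper's compliance-based topology problem; all values are illustrative:

```python
# Hedged sketch: the basic harmony-search improvisation loop. Each new harmony
# is built per dimension from memory (prob. HMCR), optionally pitch-adjusted
# within bandwidth BW (prob. PAR), or drawn at random, and replaces the worst
# stored harmony if it is better.
import random

def harmony_search(f, lo, hi, dim=2, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=1):
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:               # memory consideration
                x = rng.choice(hm)[d]
                if rng.random() < par:            # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                 # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(hm, key=f)
        if f(new) < f(worst):                     # replace worst harmony
            hm[hm.index(worst)] = new
    return min(hm, key=f)

# Toy problem: minimize the 2-D sphere function over [-5, 5]^2.
best = harmony_search(lambda x: sum(v * v for v in x), -5.0, 5.0)
```

In the paper's setting the continuous variables would instead encode element densities and `f` would be the structure's compliance.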

  6. The Cellular Differential Evolution Based on Chaotic Local Search

    Directory of Open Access Journals (Sweden)

    Qingfeng Ding

    2015-01-01

    Full Text Available To avoid immature convergence and tune the selection pressure in the differential evolution (DE) algorithm, a new differential evolution algorithm based on cellular automata and chaotic local search (CLS), termed ccDE, is proposed. To balance the exploration and exploitation tradeoff of differential evolution, the interaction among individuals is limited to cellular neighbors instead of controlling parameters in the canonical DE. To improve the optimizing performance of DE, the CLS helps by exploring a large region to avoid immature convergence in the early evolutionary stage and exploiting a small region to refine the final solutions in the later evolutionary stage. What is more, to improve the convergence characteristics and maintain the population diversity, the binomial crossover operator in the canonical DE may be replaced by the orthogonal crossover operator without a crossover rate. The performance of ccDE is widely evaluated on a set of 14 bound-constrained numerical optimization problems and compared with the canonical DE and several DE variants. The simulation results show that ccDE has better performance in terms of convergence rate and solution accuracy than other optimizers.
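The chaotic-local-search ingredient can be sketched with a logistic-map perturbation around the current best solution; this shows only the CLS step, not ccDE's cellular neighborhood structure, and all parameter values are illustrative:

```python
# Hedged sketch: a logistic-map chaotic local search (CLS) step as commonly
# paired with DE. The chaotic variable z wanders deterministically in (0, 1)
# and is mapped to offsets in [-radius, radius] around the incumbent best.

def chaotic_local_search(f, best, lo, hi, radius, steps=50, z0=0.37):
    z = z0                                        # chaotic variable in (0, 1)
    cand_best = list(best)
    for _ in range(steps):
        cand = []
        for b in best:
            z = 4.0 * z * (1.0 - z)               # logistic map, r = 4 (chaotic)
            cand.append(min(max(b + radius * (2.0 * z - 1.0), lo), hi))
        if f(cand) < f(cand_best):                # keep only improvements
            cand_best = cand
    return cand_best

# Refine a rough solution of the 2-D sphere function.
start = [0.3, -0.2]
refined = chaotic_local_search(lambda x: sum(v * v for v in x),
                               start, -1.0, 1.0, 0.5)
```

In ccDE's scheme a large `radius` would serve early-stage exploration and a small one late-stage refinement.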

  7. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  8. Analysis on the Correlation of Traffic Flow in Hainan Province Based on Baidu Search

    Science.gov (United States)

    Chen, Caixia; Shi, Chun

    2018-03-01

    Internet search data records users’ search attention and consumer demand, providing a necessary database for the Hainan traffic flow model. Based on the Baidu Index and taking Hainan traffic flow as an example, this paper conducts both qualitative and quantitative analyses of the relationship between Baidu Index search keywords and actual Hainan tourist traffic flow, and builds a multiple regression model in SPSS.
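The kind of model the paper fits can be sketched with ordinary least squares on made-up numbers (a single-predictor simplification of the paper's multiple regression; the index and flow values below are invented for illustration):

```python
# Hedged sketch: simple least-squares regression of traffic flow on a search
# index, analogous to (but simpler than) the paper's SPSS multiple regression.
# All numbers are hypothetical.

def linreg(x, y):
    """Closed-form ordinary least squares for y = a + b * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical weekly Baidu-index values vs. tourist traffic flow (thousands).
index = [120, 150, 90, 200, 170]
flow = [310, 380, 240, 490, 430]
a, b = linreg(index, flow)

# Coefficient of determination R^2 as a goodness-of-fit measure.
pred = [a + b * xi for xi in index]
mean_flow = sum(flow) / len(flow)
r2 = 1 - sum((yi - pi) ** 2 for yi, pi in zip(flow, pred)) \
       / sum((yi - mean_flow) ** 2 for yi in flow)
```

With several keyword indices as predictors this generalizes to the multiple-regression form the paper builds in SPSS.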

  9. A modified harmony search based method for optimal rural radial ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 2, No 3 (2010) >. Log in or Register to get access to full text downloads.

  10. Alephweb: a search engine based on the federated structure ...

    African Journals Online (AJOL)

    Revue d'Information Scientifique et Technique. Journal Home · ABOUT THIS JOURNAL · Advanced Search · Current Issue · Archives · Journal Home > Vol 7, No 1 (1997) >. Log in or Register to get access to full text downloads.

  11. OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain

    Science.gov (United States)

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem

    2017-01-01

    Public cloud storage services are becoming prevalent and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of public cloud. To ensure privacy and confidentiality, often encrypted data is outsourced to such services, which further complicates the process of accessing relevant data by using search queries. Search over encrypted data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and search criteria. A few schemes that extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations. PMID:28692697
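A minimal sketch of the bloom-filter half of such a scheme follows; keyed hashing stands in for the scheme's cryptographic machinery, and the homomorphic-encryption ranking layer of OS2 is omitted entirely:

```python
# Hedged sketch: a per-document bloom filter built under a secret key. The
# server can prune documents that certainly lack a keyword without seeing the
# keyword itself; false positives are possible, false negatives are not.
import hashlib
import hmac

def _positions(key, word, m, k=3):
    """k keyed-hash bit positions for a word in a filter of m bits."""
    for i in range(k):
        digest = hmac.new(key, f"{i}:{word}".encode(), hashlib.sha256).digest()
        yield int.from_bytes(digest[:8], "big") % m

def build_filter(key, words, m=256):
    bits = bytearray(m)
    for w in words:
        for p in _positions(key, w, m):
            bits[p] = 1
    return bits                                   # uploaded with the ciphertext

def may_contain(key, bits, word):
    """True if the document may contain the word (no false negatives)."""
    return all(bits[p] for p in _positions(key, word, len(bits)))
```

Only holders of the key can form the bit positions for a query word, so the untrusted server learns which filters matched but not which keyword was asked for.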

  12. A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories

    Science.gov (United States)

    Nguyen, Hung D.; Steele, Gynelle C.

    2016-01-01

    This report demonstrates a full-text-based search engine that works in any Web-based mobile application. The engine can search databases across multiple categories based on a user's queries and identify the most relevant or most similar documents. The search results presented here were found using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.

  13. Visualization for Information Retrieval based on Fast Search Technology

    Directory of Open Access Journals (Sweden)

    Mamoon H. Mamoon

    2013-03-01

    Full Text Available The core of a search engine is its information retrieval technique. An information retrieval system returns many results, some more relevant than others, and some not relevant at all. While the use of search engines to retrieve information has grown very substantially, problems remain with information retrieval systems: their interfaces do not help users perceive the precision of the results. It is therefore not surprising that graphical visualizations have been employed in search engines to assist users. The main objective of Internet users is to find the required information with high efficiency and effectiveness. In this paper we briefly examine information visualization's role in enhancing Web information retrieval systems, through techniques such as tree view, title view, map view, bubble view and cloud view, and tools such as highlighting and colored query results.

  14. Contrasting gist-based and template-based guidance during real-world visual search.

    Science.gov (United States)

    Bahle, Brett; Matsukura, Michi; Hollingworth, Andrew

    2018-03-01

    Visual search through real-world scenes is guided both by a representation of target features and by knowledge of the semantic properties of the scene (derived from scene gist recognition). In 3 experiments, we compared the relative roles of these 2 sources of guidance. Participants searched for a target object in the presence of a critical distractor object. The color of the critical distractor either matched or mismatched (a) the color of an item maintained in visual working memory for a secondary task (Experiment 1), or (b) the color of the target, cued by a picture before search commenced (Experiments 2 and 3). Capture of gaze by a matching distractor served as an index of template guidance. There were 4 main findings: (a) The distractor match effect was observed from the first saccade on the scene, (b) it was independent of the availability of scene-level gist-based guidance, (c) it was independent of whether the distractor appeared in a plausible location for the target, and (d) it was preserved even when gist-based guidance was available before scene onset. Moreover, gist-based, semantic guidance of gaze to target-plausible regions of the scene was delayed relative to template-based guidance. These results suggest that feature-based template guidance is not limited to plausible scene regions after an initial, scene-level analysis. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Assessing ligand efficiencies using template-based molecular ...

    Indian Academy of Sciences (India)

    Keywords. TIBO; structure based drug design (SBDD); template; Tabu-clustering; HIV-1 reverse transcriptase (HIVRT); non-nucleoside reverse transcriptase inhibitor (NNRTI); pIC50; multiple linear regression (MLR); artificial neural network (ANN).

  16. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  17. Development and Evaluation of Thesauri-Based Bibliographic Biomedical Search Engine

    Science.gov (United States)

    Alghoson, Abdullah

    2017-01-01

    Due to the large volume and exponential growth of biomedical documents (e.g., books, journal articles), it has become increasingly challenging for biomedical search engines to retrieve relevant documents based on users' search queries. Part of the challenge is the matching mechanism of free-text indexing that performs matching based on…

  18. Towards ontology based search and knowledge sharing using domain ontologies

    DEFF Research Database (Denmark)

    Zambach, Sine

    This paper reports on work in progress. We present work on domain-specific verbs and their role as relations in domain ontologies. The domain ontology in focus for our research is modeled in cooperation with the Danish biotech company Novo Nordic. Two of the main purposes of domain ontologies for enterprises are as background for search and for knowledge sharing, used for e.g. multilingual product development. Our aim is to use linguistic methods and logic to construct consistent ontologies that can be used both in a search perspective and for knowledge sharing. This focuses on identifying verbs for relations in the ontology modeling. For this work we use frequency lists from a biomedical text corpus of different genres as well as a study of the relations used in other biomedical text-mining tools. In addition, we discuss how these relations can be used in a broader perspective.

  19. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  20. Ontology-Based Information Behaviour to Improve Web Search

    Directory of Open Access Journals (Sweden)

    Silvia Calegari

    2010-10-01

    Full Text Available Web Search Engines provide a huge number of answers in response to a user query, many of which are not relevant, whereas some of the most relevant ones may not be found. In the literature several approaches have been proposed in order to help a user to find the information relevant to his/her real needs on the Web. To achieve this goal the individual Information Behavior can be analyzed to ’keep’ track of the user’s interests. Keeping information is a type of Information Behavior, and in several works researchers have referred to it as the study on what people do during a search on the Web. Generally, the user’s actions (e.g., how the user moves from one Web page to another, or her/his download of a document, etc.) are recorded in Web logs. This paper reports on research activities which aim to exploit the information extracted from Web logs (or query logs) in personalized user ontologies, with the objective to support the user in the process of discovering Web information relevant to her/his information needs. Personalized ontologies are used to improve the quality of Web search by applying two main techniques: query reformulation and re-ranking of query evaluation results. In this paper we analyze various methodologies presented in the literature aimed at using personalized ontologies, defined on the basis of the observation of Information Behaviour, to help the user in finding relevant information.

  1. Optimal fuzzy logic-based PID controller for load-frequency control including superconducting magnetic energy storage units

    International Nuclear Information System (INIS)

    Pothiya, Saravuth; Ngamroo, Issarachai

    2008-01-01

    This paper proposes a new optimal fuzzy-logic-based proportional-integral-derivative (FLPID) controller for load frequency control (LFC) including superconducting magnetic energy storage (SMES) units. Conventionally, the membership functions and control rules of fuzzy logic control are obtained by trial and error or from the experience of designers. To overcome this problem, the multiple tabu search (MTS) algorithm is applied to simultaneously tune the PID gains, membership functions and control rules of the FLPID controller to minimize frequency deviations of the system against load disturbances. The MTS algorithm introduces additional techniques for improving the search process, such as initialization, adaptive search, multiple searches, crossover and a restarting process. Simulation results explicitly show that the performance of the optimum FLPID controller is superior to the conventional PID controller and the non-optimum FLPID controller in terms of overshoot, settling time and robustness against variations of system parameters.
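The tabu-search core that MTS builds on can be sketched as follows on a toy two-gain tuning problem; the recency-based tabu list and aspiration rule are standard textbook components, while MTS's multiple searches, adaptive search, crossover and restarts are omitted:

```python
# Hedged sketch: basic tabu search over a discrete grid of parameter values.
# The tabu list (short-term memory) forbids recently visited points; the
# aspiration criterion re-admits a tabu move if it beats the best found so far.
from collections import deque

def tabu_search(f, start, step=0.1, tenure=5, iters=100):
    current = best = tuple(start)
    tabu = deque(maxlen=tenure)                   # recently visited solutions
    for _ in range(iters):
        neighbors = [tuple(round(v + d, 10) if j == i else v
                           for j, v in enumerate(current))
                     for i in range(len(current)) for d in (-step, step)]
        admissible = [n for n in neighbors
                      if n not in tabu or f(n) < f(best)] or neighbors
        current = min(admissible, key=f)          # best admissible move
        tabu.append(current)
        if f(current) < f(best):
            best = current
    return best

# Toy tuning problem: find the gain pair minimizing a quadratic deviation
# measure (a stand-in for the paper's frequency-deviation objective).
best = tabu_search(lambda g: (g[0] - 0.4) ** 2 + (g[1] + 0.3) ** 2, (0.0, 0.0))
```

MTS runs several such searches in parallel and recombines their results, which is what gives it an edge over a single tabu trajectory.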

  2. Graphics-based intelligent search and abstracting using Data Modeling

    Science.gov (United States)

    Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.

    2002-11-01

    This paper presents an autonomous text and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprised of the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.

  3. OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain.

    Science.gov (United States)

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem; Khan, Wajahat Ali

    2017-01-01

    Public cloud storage services are becoming prevalent and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of public cloud. To ensure privacy and confidentiality, often encrypted data is outsourced to such services, which further complicates the process of accessing relevant data by using search queries. Search over encrypted data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and search criteria. A few schemes that extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations.

  4. Swarm Robots Search for Multiple Targets Based on an Improved Grouping Strategy.

    Science.gov (United States)

    Tang, Qirong; Ding, Lu; Yu, Fangchao; Zhang, Yuan; Li, Yinghao; Tu, Haibo

    2017-03-14

    Swarm robots searching for multiple targets collaboratively in unknown environments is addressed in this paper. An improved grouping strategy based on constriction-factor Particle Swarm Optimization is proposed. Robots are grouped under this strategy after several iterations of stochastic movements, taking into account the influence range of targets and the environmental information they have sensed. The group structure may change dynamically, and each group focuses on searching for one target. All targets are expected to be found eventually. Obstacle avoidance is considered during the search process. Simulations compared with a previous method demonstrate the adaptability, accuracy and efficiency of the proposed strategy in multiple-target searching.
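The constriction-factor velocity update underlying such strategies can be sketched as plain PSO on a toy function; the grouping logic, target influence ranges and obstacle avoidance of the paper are omitted, and the parameter values are the usual textbook ones:

```python
# Hedged sketch: PSO with Clerc's constriction factor chi, which damps
# velocities so the swarm converges without explicit velocity clamping.
import math
import random

def constriction_pso(f, dim=2, n=20, iters=200, c1=2.05, c2=2.05, seed=3):
    phi = c1 + c2                                  # must exceed 4 for chi < 1
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                    # personal bests
    gbest = min(pbest, key=f)[:]                   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = chi * (vel[i][d]
                    + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                    + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy search: the minimum of the sphere function stands in for a target.
found = constriction_pso(lambda x: sum(v * v for v in x))
```

In the paper's setting each group would run an update of this kind toward its own target, with `f` derived from sensed target signals rather than a known function.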

  5. The role of space and time in object-based visual search

    NARCIS (Netherlands)

    Schreij, D.B.B.; Olivers, C.N.L.

    2013-01-01

    Recently we have provided evidence that observers more readily select a target from a visual search display if the motion trajectory of the display object suggests that the observer has dealt with it before. Here we test the prediction that this object-based memory effect on search breaks down if

  6. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    Science.gov (United States)

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  7. Searching for evidence-based geriatrics: Tips and tools for finding evidence in the medical literature

    NARCIS (Netherlands)

    van Munster, B. C.; van de Glind, E. M. M.; Hooft, L.

    2012-01-01

    Introduction: Evidence-based information for the treatment of geriatric patients is hard to find. Recently, a sensitive and a specific search filter to improve searching for literature relevant to geriatric medicine were developed in a research setting. The aim of this study is to determine whether these

  8. The impact of semantic document expansion on cluster-based fusion for microblog search

    NARCIS (Netherlands)

    Liang, S.; Ren, Z.; de Rijke, M.; de Rijke, M.; Kenter, T.; de Vries, A.P.; Zhai, C.X.; de Jong, F.; Radinsky, K.; Hofmann, K.

    2014-01-01

    Searching microblog posts, with their limited length and creative language usage, is challenging. We frame the microblog search problem as a data fusion problem. We examine the effectiveness of a recent cluster-based fusion method on the task of retrieving microblog posts. We find that in the

  9. Effect of Reading Ability and Internet Experience on Keyword-Based Image Search

    Science.gov (United States)

    Lei, Pei-Lan; Lin, Sunny S. J.; Sun, Chuen-Tsai

    2013-01-01

    Image searches are now crucial for obtaining information, constructing knowledge, and building successful educational outcomes. We investigated how reading ability and Internet experience influence keyword-based image search behaviors and performance. We categorized 58 junior-high-school students into four groups of high/low reading ability and…

  10. Improving protein structure similarity searches using domain boundaries based on conserved sequence information

    Directory of Open Access Journals (Sweden)

    Madej Tom

    2009-05-01

    Full Text Available Abstract Background The identification of protein domains plays an important role in protein structure comparison. Domain query size and composition are critical to structure similarity search algorithms such as the Vector Alignment Search Tool (VAST, the method employed for computing related protein structures in NCBI Entrez system. Currently, domains identified on the basis of structural compactness are used for VAST computations. In this study, we have investigated how alternative definitions of domains derived from conserved sequence alignments in the Conserved Domain Database (CDD would affect the domain comparisons and structure similarity search performance of VAST. Results Alternative domains, which have significantly different secondary structure composition from those based on structurally compact units, were identified based on the alignment footprints of curated protein sequence domain families. Our analysis indicates that domain boundaries disagree on roughly 8% of protein chains in the medium redundancy subset of the Molecular Modeling Database (MMDB. These conflicting sequence based domain boundaries perform slightly better than structure domains in structure similarity searches, and there are interesting cases when structure similarity search performance is markedly improved. Conclusion Structure similarity searches using domain boundaries based on conserved sequence information can provide an additional method for investigators to identify interesting similarities between proteins with known structures. Because of the improvement in performance of structure similarity searches using sequence domain boundaries, we are in the process of implementing their inclusion into the VAST search and MMDB resources in the NCBI Entrez system.

  11. Keyword-based Ciphertext Search Algorithm under Cloud Storage

    Directory of Open Access Journals (Sweden)

    Ren Xunyi

    2016-01-01

    Full Text Available With the development of network storage services, cloud storage has the advantages of high scalability, low cost, unrestricted access and easy management. These advantages make more and more small and medium enterprises choose to outsource large quantities of data to a third party. This approach frees small and medium enterprises from the costs of construction and maintenance, so it has broad market prospects. However, many cloud storage service providers cannot yet guarantee data security, which results in leakage of user data, so many users fall back on traditional storage methods. This has become one of the important factors hindering the development of cloud storage. In this article, a keyword index is established by extracting keywords from the ciphertext data. After that, the encrypted data and the encrypted index are uploaded to the cloud server together. Users obtain the related ciphertext by searching the encrypted index, which addresses the data leakage problem.
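The outline of such a scheme can be sketched with a keyed-hash inverted index, where the server matches search trapdoors to document ids without learning the keywords; this is a generic illustration, not the article's exact construction:

```python
# Hedged sketch: searchable index over outsourced data. Keywords are replaced
# by keyed-hash trapdoors before upload, so the index reveals no plaintext
# keywords; document payloads would be encrypted separately (not shown).
import hashlib
import hmac

def trapdoor(key, keyword):
    """Deterministic keyed hash of a keyword; only key holders can form it."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(key, docs):
    """docs: doc_id -> iterable of keywords. Returns trapdoor -> doc-id set."""
    index = {}
    for doc_id, keywords in docs.items():
        for w in keywords:
            index.setdefault(trapdoor(key, w), set()).add(doc_id)
    return index                                   # uploaded with ciphertexts

def search(index, key, keyword):
    """Server-side lookup: returns ids of documents matching the trapdoor."""
    return index.get(trapdoor(key, keyword), set())
```

The server only ever sees opaque trapdoor strings and ciphertext ids, which is the property the article relies on to counter data leakage.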

  12. A Greedy Search Algorithm for Maneuver-Based Motion Planning of Agile Vehicles

    OpenAIRE

    Neas, Charles Bennett

    2010-01-01

    This thesis presents a greedy search algorithm for maneuver-based motion planning of agile vehicles. In maneuver-based motion planning, vehicle maneuvers are solved offline and saved in a library to be used during motion planning. From this library, a tree of possible vehicle states can be generated through the search space. A depth-first, library-based algorithm called AD-Lib is developed and used to quickly provide feasible trajectories along the tree. AD-Lib combines greedy search tech...

  13. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by serial search is faster than minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that produced by minimum set search, and the system can find no appropriate grammar for some CFLs by serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and the search strategies.
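
    For reference, the bottom-up (CYK-style) parsing that Synapse's rule generation builds on can be sketched as a recogniser for a grammar in Chomsky Normal Form; the toy grammar below, for the language a^n b^n, is illustrative and not taken from the paper.

    ```python
    # Minimal CYK recogniser for a CFG in Chomsky Normal Form; table[i][j] holds the
    # set of nonterminals that derive the substring s[i..j].
    def cyk(rules, terminals, s):
        """rules: {(B, C): {A, ...}} for binary rules A -> B C;
        terminals: {a: {A, ...}} for unit rules A -> a."""
        n = len(s)
        table = [[set() for _ in range(n)] for _ in range(n)]
        for i, ch in enumerate(s):
            table[i][i] = set(terminals.get(ch, set()))
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span - 1
                for k in range(i, j):            # try every split point
                    for b in table[i][k]:
                        for c in table[k + 1][j]:
                            table[i][j] |= rules.get((b, c), set())
        return "S" in table[0][n - 1]

    # Toy grammar for a^n b^n (n >= 1): S -> A T | A B, T -> S B, A -> a, B -> b
    rules = {("A", "T"): {"S"}, ("S", "B"): {"T"}, ("A", "B"): {"S"}}
    terminals = {"a": {"A"}, "b": {"B"}}
    print(cyk(rules, terminals, "aabb"), cyk(rules, terminals, "aab"))
    ```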

  14. Modeling Multilevel Supplier Selection Problem Based on Weighted-Directed Network and Its Solution

    Directory of Open Access Journals (Sweden)

    Chia-Te Wei

    2017-01-01

    Full Text Available With the rapid development of the economy, supplier networks are becoming more and more complicated. Choosing the right suppliers is important for improving the efficiency of the supply chain, so how to choose them is one of the important research directions of supply chain management. This paper studies the partner selection problem from the perspective of global supplier network optimization. Firstly, the paper develops an evaluation system that estimates each supplier on the two indicators of risk and greenness and then applies the resulting value as the weight of the edge between two nodes to build a weighted-directed supplier network; secondly, it establishes an optimal combination model of supplier selection from the global network perspective and solves the model by a dynamic programming-tabu search algorithm and an improved ant colony algorithm, respectively; finally, simulation examples of different scales are given to test the efficiency of the two algorithms. The results show that the ant colony algorithm is superior to the tabu search algorithm as a whole, but the latter is slightly better than the former when the network scale is small.
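
    As an illustration of the tabu search side of the comparison, a generic sketch for a multilevel selection problem follows. The tiers, supplier names, and random edge weights are invented, and this is a minimal tabu search over single-tier swaps, not the paper's dynamic programming-tabu search algorithm.

    ```python
    import random

    # Toy multilevel supplier network: tiers[i] lists candidate suppliers at level i, and
    # weight[(u, v)] stands in for the combined risk/greenness score of the edge u -> v.
    random.seed(1)
    tiers = [["s1", "s2"], ["m1", "m2", "m3"], ["d1", "d2"]]
    weight = {(u, v): random.uniform(1, 10)
              for a, b in zip(tiers, tiers[1:]) for u in a for v in b}

    def cost(path):
        return sum(weight[(u, v)] for u, v in zip(path, path[1:]))

    def tabu_search(iters=100, tenure=3):
        current = [t[0] for t in tiers]            # start with the first supplier per tier
        best, tabu = list(current), []
        for _ in range(iters):
            # Neighborhood: change the supplier chosen at a single tier.
            moves = [(i, s) for i, t in enumerate(tiers) for s in t
                     if s != current[i] and (i, s) not in tabu]
            i, s = min(moves, key=lambda m: cost(current[:m[0]] + [m[1]] + current[m[0] + 1:]))
            tabu.append((i, current[i]))           # forbid undoing this move for a while
            if len(tabu) > tenure:
                tabu.pop(0)
            current[i] = s
            if cost(current) < cost(best):
                best = list(current)
        return best, cost(best)

    best, c = tabu_search()
    print(best, round(c, 2))
    ```

    The tabu list lets the search climb out of local optima by temporarily forbidding moves that would revisit a recent solution.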

  15. Pattern Nulling of Linear Antenna Arrays Using Backtracking Search Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kerim Guney

    2015-01-01

    Full Text Available An evolutionary method based on the backtracking search optimization algorithm (BSA) is proposed for linear antenna array pattern synthesis with prescribed nulls at interference directions. Pattern nulling is obtained by controlling only the amplitude, position, and phase of the antenna array elements. BSA is an innovative metaheuristic technique based on an iterative process. Various numerical examples of linear array patterns with prescribed single, multiple, and wide nulls are given to illustrate the performance and flexibility of BSA. The results obtained by BSA are compared with the results of the following seventeen algorithms: particle swarm optimization (PSO), genetic algorithm (GA), modified touring ant colony algorithm (MTACO), quadratic programming method (QPM), bacterial foraging algorithm (BFA), bees algorithm (BA), clonal selection algorithm (CLONALG), plant growth simulation algorithm (PGSA), tabu search algorithm (TSA), memetic algorithm (MA), nondominated sorting GA-2 (NSGA-2), multiobjective differential evolution (MODE), decomposition with differential evolution (MOEA/D-DE), comprehensive learning PSO (CLPSO), harmony search algorithm (HSA), seeker optimization algorithm (SOA), and mean variance mapping optimization (MVMO). The simulation results show that linear antenna array synthesis using BSA provides low side-lobe levels and deep null levels.

  16. The Search for Extension: 7 Steps to Help People Find Research-Based Information on the Internet

    Science.gov (United States)

    Hill, Paul; Rader, Heidi B.; Hino, Jeff

    2012-01-01

    For Extension's unbiased, research-based content to be found by people searching the Internet, it needs to be organized in a way conducive to the ranking criteria of a search engine. With proper web design and search engine optimization techniques, Extension's content can be found, recognized, and properly indexed by search engines and…

  17. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. SciRide Finder: a citation-based paradigm in biomedical literature search.

    Science.gov (United States)

    Volanakis, Adam; Krawczyk, Konrad

    2018-04-18

    There are more than 26 million peer-reviewed biomedical research items according to Medline/PubMed. This breadth of information is indicative of the progress in biomedical sciences on one hand, but an overload for scientists performing literature searches on the other. A major portion of scientific literature search is to find statements, numbers and protocols that can be cited to build an evidence-based narrative for a new manuscript. Because science builds on prior knowledge, such information has likely been written out and cited in an older manuscript. Thus, Cited Statements, pieces of text from scientific literature supported by citing other peer-reviewed publications, carry a significant amount of condensed information on prior art. Based on this principle, we propose a literature search service, SciRide Finder (finder.sciride.org), which constrains the search corpus to such Cited Statements only. We demonstrate that Cited Statements can carry information different from that found in titles/abstracts and full text, giving access to alternative literature search results than traditional search engines provide. We further show how presenting search results as a list of Cited Statements allows researchers to easily find information to build an evidence-based narrative for their own manuscripts.

  19. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search is implemented to find coarse services from the web, and the ontology reasoning is designed to find refined services among the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  20. Q-learning-based adjustable fixed-phase quantum Grover search algorithm

    International Nuclear Information System (INIS)

    Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun

    2017-01-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching algorithm based on the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one. (author)
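
    For context, the iteration/success-probability balance mentioned above can be checked classically for the standard (phase π) Grover algorithm, where the success probability after k iterations is sin²((2k+1)θ) with sin θ = √(M/N). The paper's adjustable-phase QLGA generalizes this baseline; the numbers below only illustrate the standard trade-off.

    ```python
    import math

    # Classical evaluation of standard Grover search: N items, M marked solutions.
    def grover_success(n_items, n_marked, k):
        """Success probability after k Grover iterations (phase = pi)."""
        theta = math.asin(math.sqrt(n_marked / n_items))
        return math.sin((2 * k + 1) * theta) ** 2

    def optimal_iterations(n_items, n_marked):
        # The probability peaks near (pi / (4 theta)) - 1/2 iterations.
        theta = math.asin(math.sqrt(n_marked / n_items))
        return round(math.pi / (4 * theta) - 0.5)

    N, M = 1024, 1
    k = optimal_iterations(N, M)
    print(k, round(grover_success(N, M, k), 6))
    ```

    Running past the optimal k makes the probability fall again, which is exactly the over-rotation problem that phase-adjusted variants aim to avoid.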

  1. Aspiration Levels and R&D Search in Young Technology-Based Firms

    DEFF Research Database (Denmark)

    Candi, Marina; Saemundsson, Rognvaldur; Sigurjonsson, Olaf

    Decisions about allocation of resources to research and development (R&D), referred to here as R&D search, are critically important for competitive advantage. Using panel data collected yearly over a period of nine years, this paper re-visits existing theories of backward-looking and forward-looking decision models for R&D search in the important context of young technology-based firms. Some of the findings confirm existing models, but overall the findings contradict them. Not only are young technology-based firms found to increase search when aspirations are not met, but they do the same when performance surpasses aspirations. Both positive and negative outlooks reinforce the effects of performance feedback. The combined effect is that the more outcomes and expectations deviate from aspirations, the more young technology-based firms invest in R&D search.

  2. Integrating Conflict Driven Clause Learning to Local Search

    Directory of Open Access Journals (Sweden)

    Gilles Audenard

    2009-10-01

    Full Text Available This article introduces SatHyS (SAT HYbrid Solver), a novel hybrid approach for propositional satisfiability. It combines local search and the conflict driven clause learning (CDCL) scheme. Each time the local search part reaches a local minimum, the CDCL is launched. For SAT problems it behaves like a tabu list, whereas for UNSAT ones, the CDCL part tries to focus on a minimum unsatisfiable sub-formula (MUS). Experimental results show good performance on many classes of SAT instances from the last SAT competitions.
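
    To illustrate the tabu-like behaviour on the local search side, here is a minimal WalkSAT-style solver with a small tabu list of recently flipped variables. This is an invented sketch of the general technique, not the SatHyS system, and the tiny formula at the bottom is only an example.

    ```python
    import random

    # Greedy local search for SAT with a tabu list on recently flipped variables.
    def tabu_walksat(clauses, n_vars, max_flips=10000, tenure=5, seed=0):
        """clauses: list of tuples of nonzero ints; a negative int is a negated literal."""
        rng = random.Random(seed)
        assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
        sat = lambda lit: assign[abs(lit)] == (lit > 0)
        tabu = []
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return assign                   # model found
            clause = rng.choice(unsat)
            # Prefer non-tabu variables from the chosen unsatisfied clause.
            candidates = [abs(l) for l in clause if abs(l) not in tabu] or [abs(l) for l in clause]

            def unsat_after(v):                 # unsatisfied clauses if v were flipped
                assign[v] = not assign[v]
                n = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return n

            v = min(candidates, key=unsat_after)
            assign[v] = not assign[v]
            tabu.append(v)
            if len(tabu) > tenure:
                tabu.pop(0)
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    model = tabu_walksat([(1, 2), (-1, 3), (-2, -3)], 3)
    print(model)
    ```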

  3. Optimization of interactive visual-similarity-based search

    NARCIS (Netherlands)

    Nguyen, G.P.; Worring, M.

    2008-01-01

    At one end of the spectrum, research in interactive content-based retrieval concentrates on machine learning methods for effective use of relevance feedback. On the other end, the information visualization community focuses on effective methods for conveying information to the user. What is lacking

  4. Using Artificial Intelligence to Retrieve the Optimal Parameters and Structures of Adaptive Network-Based Fuzzy Inference System for Typhoon Precipitation Forecast Modeling

    Directory of Open Access Journals (Sweden)

    Chien-Lin Huang

    2015-01-01

    Full Text Available This study aims to construct a typhoon precipitation forecast model providing forecasts one to six hours in advance, using optimal model parameters and structures retrieved from a combination of the adaptive network-based fuzzy inference system (ANFIS) and artificial intelligence. To enhance the accuracy of the precipitation forecast, two structures were then used to establish the precipitation forecast model for a specific lead-time: a single-model structure and a dual-model hybrid structure in which the forecast models for higher and lower precipitation were integrated. In order to rapidly, automatically, and accurately retrieve the optimal parameters and structures of the ANFIS-based precipitation forecast model, a tabu search was applied to identify the adjacent radius in subtractive clustering when constructing the ANFIS structure. A coupled structure was also employed to establish a precipitation forecast model across short and long lead-times in order to improve the accuracy of long-term precipitation forecasts. The study area is the Shimen Reservoir, and the analyzed period is from 2001 to 2009. Results showed that the optimal initial ANFIS parameters selected by the tabu search, combined with the dual-model hybrid method and the coupled structure, provided favorable computational efficiency and highly reliable predictions for typhoon precipitation forecasts across short to long lead-time horizons.
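
    The hyperparameter-tuning use of tabu search here (picking a clustering radius) can be sketched as a one-dimensional tabu walk over a grid of candidate radii. The "validation error" below is a made-up toy function standing in for a real ANFIS train/validate run, and the grid and minimum location are invented for illustration.

    ```python
    # Illustrative 1-D tabu search for a clustering-radius hyperparameter.
    def validation_error(radius):
        # Hypothetical stand-in for an ANFIS validation run; minimum near 0.43.
        return (radius - 0.43) ** 2 + 0.05 * abs(radius - 0.43)

    def tabu_search_radius(grid, iters=50, tenure=7):
        current = grid[0]
        best = current
        tabu = [current]
        for _ in range(iters):
            i = grid.index(current)
            # Neighbors are the adjacent grid points that are not tabu.
            neighbors = [grid[j] for j in (i - 1, i + 1)
                         if 0 <= j < len(grid) and grid[j] not in tabu]
            if not neighbors:
                break
            current = min(neighbors, key=validation_error)
            tabu.append(current)
            if len(tabu) > tenure:
                tabu.pop(0)
            if validation_error(current) < validation_error(best):
                best = current
        return best

    grid = [round(0.05 * k, 2) for k in range(1, 20)]   # candidate radii 0.05 .. 0.95
    print(tabu_search_radius(grid))
    ```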

  5. A dichotomous search-based heuristic for the three-dimensional sphere packing problem

    Directory of Open Access Journals (Sweden)

    Mhand Hifi

    2015-12-01

    Full Text Available In this paper, the three-dimensional sphere packing problem is solved by using a dichotomous search-based heuristic. An instance of the problem is defined by a set of n unequal spheres and an object of fixed width and height and unlimited length. Each sphere is characterized by its radius, and the aim of the problem is to optimize the length of the object containing all spheres without overlapping. The proposed method is based upon beam search, in which three complementary phases are combined: (i) a greedy selection phase which determines a series of eligible search subspaces, (ii) a truncated tree search, using a width-beam search, that explores some promising paths, and (iii) a dichotomous search that diversifies the search. The performance of the proposed method is evaluated on benchmark instances taken from the literature, and its results are compared with those reached by recent methods from the literature. The proposed method is competitive and yields promising results.
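
    The dichotomous component can be pictured as a bisection over the object's length: given a feasibility test that says whether all spheres fit within a given length, binary-search the smallest feasible length. The feasibility test below is a crude 1-D stand-in (spheres placed in a row), not the paper's greedy/beam-search placement.

    ```python
    # Bisection over container length, given a monotone feasibility predicate.
    def fits(radii, length):
        # Hypothetical stand-in: spheres lined up along the length axis.
        return 2 * sum(radii) <= length

    def min_length(radii, lo=0.0, hi=1000.0, eps=1e-6):
        while hi - lo > eps:
            mid = (lo + hi) / 2
            if fits(radii, mid):
                hi = mid          # feasible: try a shorter object
            else:
                lo = mid          # infeasible: need more length
        return hi

    print(round(min_length([1.0, 2.0, 0.5]), 3))
    ```

    Any monotone feasibility check (longer objects are never harder to pack) makes the bisection valid, which is why the dichotomous search composes cleanly with a heuristic packing routine.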

  6. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server multi-layer multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Ontology-based Semantic Search Engine for Healthcare Services

    OpenAIRE

    Jotsna Molly Rajan; M. Deepa Lakshmi

    2012-01-01

    With the development of Web Services, the retrieval of relevant services has become a challenge. The keyword-based discovery mechanism using UDDI and WSDL is insufficient due to the retrieval of a large amount of irrelevant information. Keywords are also insufficient for expressing semantic concepts, since a single concept can be referred to using syntactically different terms. Hence, service capabilities need to be manually analyzed, which has led to the development of the Semantic Web for automatic...

  8. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used by users to search the contents. Vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced better search results compared to existing search engines. A health information search engine based on metadata and ontology will provide reliable health information to both information producers and consumers.

  9. Inference-Based Similarity Search in Randomized Montgomery Domains for Privacy-Preserving Biometric Identification.

    Science.gov (United States)

    Wang, Yi; Wan, Jianwu; Guo, Jun; Cheung, Yiu-Ming; C Yuen, Pong

    2017-07-14

    Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives.

  10. Particle Swarm Optimization and harmony search based clustering and routing in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Veena Anand

    2017-01-01

    Full Text Available The limited, non-rechargeable energy resources of nodes in Wireless Sensor Networks (WSNs) create a challenge that has led to the development of various clustering and routing algorithms. The paper proposes an approach for improving network lifetime by using Particle Swarm Optimization based clustering and Harmony Search based routing in WSN. Globally optimal cluster heads are selected, and gateway nodes are introduced to decrease the energy consumption of the cluster heads while sending aggregated data to the Base Station (BS). Next, a harmony search based local search strategy finds the best routing path from the gateway nodes to the Base Station. Finally, the proposed algorithm is presented.
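
    For readers unfamiliar with harmony search, a minimal sketch follows, minimizing a toy objective (the sphere function); in the routing context above, the objective would instead score candidate routes. Parameter names (HMCR, PAR, bandwidth) follow the standard harmony search formulation; everything else is invented for illustration.

    ```python
    import random

    # Minimal harmony search: keep a memory of candidate solutions ("harmonies"),
    # improvise a new one each iteration, and replace the worst if the new is better.
    def harmony_search(obj, dim=3, hm_size=10, hmcr=0.9, par=0.3, bw=0.1,
                       iters=2000, lo=-5.0, hi=5.0, seed=42):
        rng = random.Random(seed)
        memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if rng.random() < hmcr:              # memory consideration (HMCR)
                    x = rng.choice(memory)[d]
                    if rng.random() < par:           # pitch adjustment (PAR, bandwidth bw)
                        x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
                else:                                # random selection
                    x = rng.uniform(lo, hi)
                new.append(x)
            worst = max(memory, key=obj)
            if obj(new) < obj(worst):
                memory[memory.index(worst)] = new
        return min(memory, key=obj)

    best = harmony_search(lambda v: sum(x * x for x in v))
    print([round(x, 3) for x in best])
    ```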

  11. Broad-Based Search for New and Practical Superconductors

    Science.gov (United States)

    2014-10-31

    which are potentially superconductors. In particular, we are exploring BaSnO3 as the base system to look for new superconductors. Doped BaSnO3 is a... doped (A-site and B-site) BaSnO3 composition spreads. Figure 1 is a summary of one spread where we made a continuous spread of (Ba,La)SnO3. X-ray... substitution into BaSnO3. [Fig. 1: summary of (Ba,La)SnO3 spread] We have found that low La doping gives rise to good conductivity. The table on the...

  12. Extracting Communities of Interests for Semantics-Based Graph Searches

    Science.gov (United States)

    Nakatsuji, Makoto; Tanaka, Akimichi; Uchiyama, Toshio; Fujimura, Ko

    Users recently find their interests by checking the contents published or mentioned by their immediate neighbors in social networking services. We propose semantics-based link navigation: links guide the active user to potential neighbors who may provide new interests. Our method first creates a graph that has users as nodes and shared interests as links. It then divides the graph by link pruning to extract a practical number of interest-sharing groups, i.e. communities of interests (COIs), that the active user can navigate. It then attaches a different semantic tag to the link to each representative user, which best reflects the interests of the COIs they are included in, and to the link to each immediate neighbor of the active user. It finally calculates link attractiveness by analyzing the semantic tags on links. The active user can select which link to access by checking the semantic tags and link attractiveness. User interests extracted from large-scale actual blog entries are used to confirm the efficiency of our proposal. Results show that navigation based on link attractiveness and representative users allows the user to find new interests much more accurately than is otherwise possible.

  13. Content-based video indexing and searching with wavelet transformation

    Science.gov (United States)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combination of fingerprints, Iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcing agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audio visual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database results in significant proportion of that individual biometric data.

  14. Nearby Search Indekos Based Android Using A Star (A*) Algorithm

    Science.gov (United States)

    Siregar, B.; Nababan, EB; Rumahorbo, JA; Andayani, U.; Fahmi, F.

    2018-03-01

    Indekos, or rented rooms, are temporary residences rented for months or years. Academics who come from out of town need a temporary residence, such as an Indekos, during their education, teaching, or duties. They often have difficulty finding an Indekos because of a lack of information about them. Moreover, newcomers do not know the area around the campus and want the shortest path from an Indekos to the campus. This problem can be solved by implementing the A Star (A*) algorithm, a shortest-path algorithm, in an Indekos-finding application in which the faculties on campus serve as the starting points of the search. Letting students choose the starting point makes the search more flexible, and the mobile-based application facilitates searching anytime and anywhere. Based on the experimental results, the A* algorithm can find the shortest path with 86.67% accuracy.
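
    A minimal A* sketch on a 4-connected grid with a Manhattan-distance heuristic follows; a real map-based Indekos finder would search road-network edges instead, and the grid below is invented for illustration.

    ```python
    import heapq

    # A* on a grid: f = g (steps so far) + h (admissible Manhattan estimate to goal).
    def a_star(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_heap = [(h(start), 0, start, [start])]
        seen = set()
        while open_heap:
            f, g, node, path = heapq.heappop(open_heap)
            if node == goal:
                return path
            if node in seen:
                continue
            seen.add(node)
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    heapq.heappush(open_heap,
                                   (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],   # 1 = blocked cell
            [0, 0, 0]]
    path = a_star(grid, (0, 0), (2, 0))
    print(path)
    ```

    Because the Manhattan heuristic never overestimates the remaining cost, the first time the goal is popped from the heap its path is guaranteed shortest.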

  15. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    Science.gov (United States)

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them eventually become available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and can therefore help in understanding the driving factors of microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny-based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover's Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is its contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and a ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny-based distance metric, while providing numerous advantages over the current state of the art tool.

  16. General Quantum Meet-in-the-Middle Search Algorithm Based on Target Solution of Fixed Weight

    Science.gov (United States)

    Fu, Xiang-Qun; Bao, Wan-Su; Wang, Xiang; Shi, Jian-Hong

    2016-10-01

    Similar to the classical meet-in-the-middle algorithm, the storage and computation complexity are the key factors that decide the efficiency of the quantum meet-in-the-middle algorithm. Aiming at a target vector of fixed weight, and based on the quantum meet-in-the-middle algorithm, an algorithm for searching all n-product vectors with the same weight is presented, whose complexity is better than that of the exhaustive search algorithm, and which can reduce the storage complexity of the quantum meet-in-the-middle search algorithm. Then, based on this algorithm and the knapsack vector of the Chor-Rivest public-key cryptosystem of fixed weight d, we present a general quantum meet-in-the-middle search algorithm based on a target solution of fixed weight, whose computational complexity is Σ_{j=0}^{d} (O(√(C_{n-k+1}^{d-j})) + O(C_k^j log C_k^j)) with Σ_{i=0}^{d} C_k^i memory cost, and the optimal value of k is given. Compared to the quantum meet-in-the-middle search algorithm for the knapsack problem and the quantum algorithm for searching a target solution of fixed weight, the computational complexity of the algorithm is lower, and its storage complexity is smaller than that of the quantum meet-in-the-middle algorithm. Supported by the National Basic Research Program of China under Grant No. 2013CB338002 and the National Natural Science Foundation of China under Grant No. 61502526
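
    The complexity sums above are built from binomial coefficients C_n^k; they can be evaluated numerically for concrete parameters with `math.comb`. The n, k, d values below are arbitrary examples chosen for illustration, not values from the paper.

    ```python
    import math

    # Memory cost: sum over i of C(k, i).
    def memory_cost(k, d):
        return sum(math.comb(k, i) for i in range(d + 1))

    # Order-of-magnitude time estimate: sum over j of sqrt(C(n-k+1, d-j)) plus
    # C(k, j) * log2(C(k, j)), mirroring the O(.) terms in the stated complexity.
    def time_estimate(n, k, d):
        return sum(math.sqrt(math.comb(n - k + 1, d - j)) +
                   math.comb(k, j) * math.log2(max(2, math.comb(k, j)))
                   for j in range(d + 1))

    print(memory_cost(20, 4), round(time_estimate(64, 20, 4), 1))
    ```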

  17. A GIS-based Quantitative Approach for the Search of Clandestine Graves, Italy.

    Science.gov (United States)

    Somma, Roberta; Cascio, Maria; Silvestro, Massimiliano; Torre, Eliana

    2017-10-30

    Previous research on the RAG color-coded prioritization systems for the discovery of clandestine graves has not considered all the factors influencing the burial site choice within a GIS project. The goal of this technical note was to discuss a GIS-based quantitative approach for the search of clandestine graves. The method is based on cross-referenced RAG maps with cumulative suitability factors to host a burial, leading to the editing of different search scenarios for ground searches showing high-(Red), medium-(Amber), and low-(Green) priority areas. The application of this procedure allowed several outcomes to be determined: If the concealment occurs at night, then the "search scenario without the visibility" will be the most effective one; if the concealment occurs in daylight, then the "search scenario with the DSM-based visibility" will be most appropriate; the different search scenarios may be cross-referenced with offender's confessions and eyewitnesses' testimonies to verify the veracity of their statements. © 2017 American Academy of Forensic Sciences.

  18. Parameter Tuning for Local-Search-Based Matheuristic Methods

    Directory of Open Access Journals (Sweden)

    Guillermo Cabrera-Guerrero

    2017-01-01

    Full Text Available Algorithms that aim to solve optimisation problems by combining heuristics and mathematical programming have attracted researchers’ attention. These methods, also known as matheuristics, have been shown to perform especially well for large, complex optimisation problems that include both integer and continuous decision variables. One common strategy used by matheuristic methods to solve such optimisation problems is to divide the main optimisation problem into several subproblems. While heuristics are used to search for promising subproblems, exact methods are used to solve them to optimality. In general, both mixed integer (non)linear programming problems and combinatorial optimisation problems can be addressed using this strategy. Besides the number of parameters researchers need to adjust when using heuristic methods, additional parameters arise when using matheuristic methods. In this paper we focus on one particular parameter, which determines the size of the subproblem. We show how matheuristic performance varies as this parameter is modified. For our experiments we considered a well-known NP-hard combinatorial optimisation problem, namely, the capacitated facility location problem. Based on the obtained results, we discuss the effects of adjusting the size of the subproblems generated when using matheuristic methods such as the one considered in this paper.

  19. An Analysis of Literature Searching Anxiety in Evidence-Based Medicine Education

    Directory of Open Access Journals (Sweden)

    Hui-Chin Chang

    2014-01-01

    Full Text Available Introduction. Evidence-Based Medicine (EBM) is fast becoming a cornerstone of lifelong learning for healthcare personnel worldwide. This study aims to evaluate literature searching anxiety among graduate students practicing EBM. Method. The study participants were 48 graduate students who enrolled in the EBM course at a medical university in central Taiwan. Student's t-test, Pearson correlation, multivariate regression and interviews were used to evaluate the students' literature searching anxiety in the EBM course. The questionnaire was the Literature Searching Anxiety Rating Scale (LSARS). Results. The sources of anxiety disclosed were uncertainty in database selection, literature evaluation and selection, requests for technical assistance, use of computer programs, English, and EBM education programs. Class performance was negatively related to the LSARS score; however, the correlation was statistically insignificant after adjustment for gender, degree program, age category and experience of publication. Conclusion. This study helps in understanding the causes and extent of anxiety in order to plan a better teaching program that improves users' searching skills and capability of utilizing information, while providing user-friendly facilities for evidence searching. In short, we need to upgrade learners' searching skills and reduce their anxiety, and we also need to stress the auxiliary teaching program for those with prevalent and profound anxiety during literature searching.

  20. CSA: A Credibility Search Algorithm Based on Different Query in Unstructured Peer-to-Peer Networks

    Directory of Open Access Journals (Sweden)

    Hongyan Mei

    2014-01-01

    Full Text Available Efficient resource searching with low network bandwidth consumption has become a challenging task in unstructured peer-to-peer (P2P) networks. The heuristic search mechanism is an effective method that depends on previous searches to guide future ones. In previously proposed methods, searching for high-repetition resources is effective; however, search performance for non-repetition, low-repetition, or rare resources needs to be improved. To address this problem, considering the similarity between social networks and unstructured P2P networks, we present a credibility search algorithm based on different queries, following the trust production principle in sociology and psychology. In this method, queries are divided into familiar queries and unfamiliar queries. For the different query types, we adopt different ways to compute a node's credibility with respect to each of its neighbors; queries are then forwarded to the neighbor nodes with higher credibility. Experimental results show that our method can improve the query hit rate and reduce search delay with low bandwidth consumption in three different network topologies under static and dynamic network environments.

  1. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines are designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines and interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.
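    The reported metrics (precision vs. recall, mean average precision) can be computed as follows. This is the textbook definition, not the paper's evaluation code:

```python
def average_precision(ranked, relevant):
    """Average precision of one ranked result list: mean of the
    precision values at each rank where a relevant document appears."""
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over several queries; runs is a list of (ranking, relevant_set)."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

ap = average_precision(["d1", "d3", "d2"], {"d1", "d2"})
# precision is 1/1 at rank 1 and 2/3 at rank 3, so AP = (1 + 2/3) / 2
```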

  2. Architecture for knowledge-based and federated search of online clinical evidence.

    Science.gov (United States)

    Coiera, Enrico; Walther, Martin; Nguyen, Ken; Lovell, Nigel H

    2005-10-24

    It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federated search system design, including the use of knowledge-based, meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture utilized a universal query language to reformulate queries internally and utilized meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Clinicians performed 1662 searches over the trial. The average search duration was 4.9 +/- 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (ie, system overhead) was 0.12 s. The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability.
Furthermore, despite the additional effort required to incorporate the

  3. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    Directory of Open Access Journals (Sweden)

    Maryam Akbari

    2012-10-01

    Full Text Available The present study investigated the adoption process of search engines and meta search engines by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potentials and effective tools in search engine and meta search engine adoption among University of Isfahan users. The research method was a descriptive survey study. The cases of the study were all of the postgraduate students of the University of Isfahan; 351 students were selected as the sample and categorized by a stratified random sampling method. A questionnaire was used for collecting data. The collected data was analyzed using SPSS 16 in both descriptive and analytic statistics. For descriptive statistics, frequency, percentage and mean were used, while for analytic statistics, the t-test and the Kruskal-Wallis non-parametric test (H-test) were used. The findings of the t-test and Kruskal-Wallis test indicated that the mean of search engine and meta search engine adoption did not show statistical differences across gender, level of education or faculty. The special search engine adoption process was different in terms of gender but not in terms of level of education or faculty. Other results of the research indicated that, among general search engines, Google had the highest adoption rate; among the special search engines, Google Scholar, and among the meta search engines, Mamma had the highest adoption rates. Findings also showed that friends played an important role in how students adopted general search engines, while professors played an important role in how students adopted special search engines and meta search engines. Moreover, results showed that the place where students became most acquainted with search engines and meta search engines was the university. The findings showed that the curve of the adoption rate was not normal, nor was it S-shaped.

  4. A Semidefinite Programming Based Search Strategy for Feature Selection with Mutual Information Measure.

    Science.gov (United States)

    Naghibi, Tofigh; Hoffmann, Sarah; Pfister, Beat

    2015-08-01

    Feature subset selection, as a special case of the general subset selection problem, has been the topic of a considerable number of studies due to the growing importance of data-mining applications. In the feature subset selection problem there are two main issues that need to be addressed: (i) finding an appropriate measure function that can be computed fairly quickly and robustly for high-dimensional data; (ii) a search strategy to optimize the measure over the subset space in a reasonable amount of time. In this article mutual information between features and class labels is considered to be the measure function. Two series expansions for mutual information are proposed, and it is shown that most heuristic criteria suggested in the literature are truncated approximations of these expansions. It is well-known that searching the whole subset space is an NP-hard problem. Here, instead of the conventional sequential search algorithms, we suggest a parallel search strategy based on semidefinite programming (SDP) that can search through the subset space in polynomial time. By exploiting the similarities between the proposed algorithm and an instance of the maximum-cut problem in graph theory, the approximation ratio of this algorithm is derived and is compared with the approximation ratio of the backward elimination method. The experiments show that it can be misleading to judge the quality of a measure solely based on the classification accuracy, without taking the effect of the non-optimum search strategy into account.

  5. A suffix arrays based approach to semantic search in P2P systems

    Science.gov (United States)

    Shi, Qingwei; Zhao, Zheng; Bao, Hu

    2007-09-01

    Building a semantic search system on top of peer-to-peer (P2P) networks is becoming an attractive and promising alternative scheme for reasons of scalability, data freshness and search cost. In this paper, we present a Suffix Arrays based algorithm for Semantic Search (SASS) in P2P systems, which generates a distributed Semantic Overlay Networks (SONs) construction for full-text search in P2P networks. For each node in the P2P network, SASS distributes document indices based on a set of suffix arrays, by which clusters are created depending on words or phrases shared between documents; therefore, the search cost for a given query is decreased by scanning only semantically related documents. In contrast to recently announced SONs schemes designed using metadata or predefined classes, SASS is an unsupervised approach for the decentralized generation of SONs. SASS is also an incremental, linear-time algorithm, which efficiently handles node updates in P2P networks. Our simulation results demonstrate that SASS yields high search efficiency in dynamic environments.

  6. Parameter estimation of activated sludge process based on an improved cuckoo search algorithm.

    Science.gov (United States)

    Du, Xianjun; Wang, Junlu; Jegatheesan, Veeriah; Shi, Guohua

    2018-02-01

    It is essential to use appropriate values for the kinetic parameters in an activated sludge model when the model is applied to wastewater treatment processes under different environments. An improved cuckoo search (ICS) algorithm was proposed in this paper for the estimation of the kinetic parameters used in Activated Sludge Model No. 1 (ASM1). ICS was tested for its speed and accuracy in reaching solutions by searching for the global minima of six standard functions. A cyclical adjustment strategy was applied to the detection probability to increase searching ability. Meanwhile, the searching step was adaptively adjusted based on the optimal nest of the last generation and the current iteration number. Subsequently, ICS is used to estimate 7 sensitive parameters in ASM1 for practical applications. Field data are used to validate the prediction accuracy of ASM1 with the estimated parameters. Predicted results of the model are closer to the actual data with the adjusted parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
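    As a rough illustration of the kind of optimizer being tuned, here is a minimal cuckoo-search-style sketch in Python. The shrinking step rule and the nest-abandonment scheme below are simplifications assumed for the sketch, not the ICS paper's exact cyclical adjustment:

```python
import math
import random

def cuckoo_search(f, dim, bounds, n_nests=15, iters=200, pa=0.25, seed=1):
    """Minimal cuckoo-search-style optimizer: perturb each nest, keep
    improvements, abandon a fraction pa of the worst nests each round.
    The step size shrinks with the iteration count (an assumption that
    stands in for the paper's adaptive step adjustment)."""
    rng = random.Random(seed)
    lo, hi = bounds
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for t in range(1, iters + 1):
        step = 0.5 * (hi - lo) / math.sqrt(t)   # shrinking search step
        for i, nest in enumerate(nests):
            cand = [min(hi, max(lo, x + step * rng.gauss(0, 1))) for x in nest]
            if f(cand) < f(nest):               # greedy replacement
                nests[i] = cand
        nests.sort(key=f)                       # abandon the worst nests
        for i in range(n_nests - int(pa * n_nests), n_nests):
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(best, min(nests, key=f), key=f)
    return best

sphere = lambda x: sum(v * v for v in x)        # standard test function
best = cuckoo_search(sphere, dim=3, bounds=(-5.0, 5.0))
```

In the paper, the objective would instead measure the misfit between ASM1 predictions and field data over the 7 sensitive parameters.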

  7. Addressing special structure in the relevance feedback learning problem through aspect-based image search

    NARCIS (Netherlands)

    M.J. Huiskes (Mark)

    2004-01-01

    In this paper we focus on a number of issues regarding special structure in the relevance feedback learning problem, most notably the effects of image selection based on partial relevance on the clustering behavior of examples. We propose a simple scheme, aspect-based image search, which

  8. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    Science.gov (United States)

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts.

  9. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

    Full Text Available This paper presents the implementation of an optimal search strategy (OSS) in the verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult part of the overall assembly. The favourable influence of vibration and rotation movement on the compensation of tolerance was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameter. Using an optimal search strategy based on the minimal distance path between vibration parameter stage sets (amplitude and frequencies of the robot's grip vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified by using simulation programs that the search strategy is suitable for situations of unexpected events due to uncertainties.

  10. Identification of Fuzzy Inference Systems by Means of a Multiobjective Opposition-Based Space Search Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2013-01-01

    Full Text Available We introduce a new category of fuzzy inference systems with the aid of a multiobjective opposition-based space search algorithm (MOSSA. The proposed MOSSA is essentially a multiobjective space search algorithm improved by using an opposition-based learning that employs a so-called opposite numbers mechanism to speed up the convergence of the optimization algorithm. In the identification of fuzzy inference system, the MOSSA is exploited to carry out the parametric identification of the fuzzy model as well as to realize its structural identification. Experimental results demonstrate the effectiveness of the proposed fuzzy models.

  11. Improving Search in Tag-Based Systems with Automatically Extracted Keywords

    Science.gov (United States)

    Awawdeh, Ruba; Anderson, Terry

    Tag-based systems are used by millions of web users to tag, save and share items. User-defined tags, however, are so variable in quality that searching on these tags alone is unsatisfactory. One way to improve search in bookmarking systems is by adding more metadata to the user-created tags to enhance tag quality. The additional metadata we have used is based on document content and largely avoids the idiosyncratic and ambiguous terms too often evident in user-created tags. Such an approach adds value by incorporating information about the content of the resource while retaining the original user-created tags.

  12. Unsupervised 3D ring template searching as an ideas generator for scaffold hopping: use of the LAMDA, RigFit, and field-based similarity search (FBSS) methods.

    Science.gov (United States)

    Bohl, Martin; Loeprecht, Björn; Wendt, Bernd; Heritage, Trevor; Richmond, Nicola J; Willett, Peter

    2006-01-01

    Crystal structures taken from the Cambridge Structural Database were used to build a ring scaffold database containing 19 050 3D structures, with each such scaffold then being used to generate a centroid connecting path (CCP) representation. The CCP is a novel object that connects ring centroids, ring linker atoms, and other important points on the connection path between ring centroids. Unsupervised searching in the scaffold and CCP data sets was carried out using the atom-based LAMDA and RigFit search methods and the field-based similarity search method. The performance of these methods was tested with three different ring scaffold queries. These searches demonstrated that unsupervised 3D scaffold searching methods can find not only the types of ring systems that might be retrieved in carefully defined pharmacophore searches (supervised approach) but also additional, structurally diverse ring systems that could form the starting point for lead discovery programs or other scaffold-hopping applications. Not only are the methods effective but some are sufficiently rapid to permit scaffold searching in large chemical databases on a routine basis.

  13. COORDINATE-BASED META-ANALYTIC SEARCH FOR THE SPM NEUROIMAGING PIPELINE

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Szewczyk, Marcin; Rasmussen, Peter Mondrup

    2009-01-01

    …of the databases offer so-called coordinate-based searching to the users (e.g. Brede, BrainMap). For such a search, the publications which relate to the brain locations represented by the user coordinates are retrieved. In this paper we present BredeQuery – a plugin for the widely used SPM5 data analytic pipeline. BredeQuery offers a direct link from SPM5 to the Brede Database coordinate-based search engine. It is able to ‘grab’ brain location coordinates from the SPM windows and enter them as a query for the Brede Database. Moreover, results of the query can be displayed in an SPM window and/or exported…

  14. Efficient exploration of large combinatorial chemistry spaces by monomer-based similarity searching.

    Science.gov (United States)

    Yu, Ning; Bakken, Gregory A

    2009-04-01

    In modern drug discovery, 2-D similarity searching is widely employed as a cost-effective way to screen large compound collections and select subsets of molecules that may have interesting biological activity prior to experimental screening. Nowadays, there is a growing interest in applying the existing 2-D similarity searching methods to combinatorial chemistry libraries to search for novel hits or to evolve lead series. A dilemma thus arises when many identical substructures recur in library products and they have to be considered repeatedly in descriptor calculations. The dilemma is exacerbated by the astronomical number of combinatorial products. This problem imposes a major barrier to similarity searching of large combinatorial chemistry spaces. An efficient approach, termed Monomer-based Similarity Searching (MoBSS), is proposed to remedy the problem. MoBSS calculates atom pair (AP) descriptors based on interatomic topological distances, which lend themselves to pair additivity. A fast algorithm is employed in MoBSS to rapidly compute product atom pairs from those of the constituent fragments. The details of the algorithm are presented along with a series of proof-of-concept studies, which demonstrate the speed, accuracy, and utility of the MoBSS approach.

  15. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    Science.gov (United States)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

    Recent years have witnessed a great success of social media websites. Tag-based image search is an important approach to access the image content of interest on these websites. However, the existing ranking methods for tag-based image search frequently return results that are irrelevant or lack of diversity. This chapter presents a diverse relevance ranking scheme which simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both visual information of images and semantic information of associated tags. Then semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm which optimizes Average Diverse Precision (ADP), a novel measure that is extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
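    The greedy ordering the chapter describes can be sketched with an MMR-style stand-in: repeatedly pick the image that best trades off relevance against similarity to images already ranked. The ADP objective itself and the chapter's exact scores are not reproduced here; `lam` and the toy inputs are assumptions:

```python
def diverse_rerank(relevance, similarity, lam=0.5):
    """Greedy diverse reranking: at each step pick the item maximizing
    lam * relevance minus (1 - lam) * max similarity to items already
    chosen. An MMR-style illustration, not the chapter's ADP optimizer."""
    remaining = set(relevance)
    ranking = []
    while remaining:
        def gain(i):
            penalty = max((similarity[i][j] for j in ranking), default=0.0)
            return lam * relevance[i] - (1 - lam) * penalty
        pick = max(remaining, key=gain)
        ranking.append(pick)
        remaining.remove(pick)
    return ranking

rel = {"a": 0.9, "b": 0.85, "c": 0.4}        # query relevance scores
sim = {"a": {"b": 0.95, "c": 0.1},           # tag-based similarities
       "b": {"a": 0.95, "c": 0.2},
       "c": {"a": 0.1, "b": 0.2}}
order = diverse_rerank(rel, sim)
# "a" is ranked first; "c" outranks its near-duplicate "b"
```

A pure relevance sort would return a, b, c; the diversity penalty demotes the near-duplicate, which is exactly the behavior the chapter's ADP measure rewards.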

  16. ELSE: An Ontology-Based System Integrating Semantic Search and E-Learning Technologies

    Science.gov (United States)

    Barbagallo, A.; Formica, A.

    2017-01-01

    ELSE (E-Learning for the Semantic ECM) is an ontology-based system which integrates semantic search methodologies and e-learning technologies. It has been developed within a project of the CME (Continuing Medical Education) program--ECM (Educazione Continua nella Medicina) for Italian participants. ELSE allows the creation of e-learning courses…

  17. Agent-oriented Architecture for Task-based Information Search System

    NARCIS (Netherlands)

    Aroyo, Lora; de Bra, Paul M.E.; De Bra, P.; Hardman, L.

    1999-01-01

    The topic of the reported research discusses an agent-oriented architecture of an educational information search system AIMS - a task-based learner support system. It is implemented within the context of 'Courseware Engineering' on-line course at the Faculty of Educational Science and Technology,

  18. A novel approach towards skill-based search and services of Open Educational Resources

    NARCIS (Netherlands)

    Ha, Kyung-Hun; Niemann, Katja; Schwertel, Uta; Holtkamp, Philipp; Pirkkalainen, Henri; Börner, Dirk; Kalz, Marco; Pitsilis, Vassilis; Vidalis, Ares; Pappa, Dimitra; Bick, Markus; Pawlowski, Jan; Wolpers, Martin

    2011-01-01

    Ha, K.-H., Niemann, K., Schwertel, U., Holtkamp, P., Pirkkalainen, H., Börner, D. et al (2011). A novel approach towards skill-based search and services of Open Educational Resources. In E. Garcia-Barriocanal, A. Öztürk, & M. C. Okur (Eds.), Metadata and Semantics Research: 5th International

  19. Promoting evidence based medicine in preclinical medical students via a federated literature search tool.

    Science.gov (United States)

    Keim, Samuel Mark; Howse, David; Bracke, Paul; Mendoza, Kathryn

    2008-01-01

    Medical educators are increasingly faced with directives to teach Evidence Based Medicine (EBM) skills. Because of its nature, integrating fundamental EBM educational content is a challenge in the preclinical years. To analyse preclinical medical student user satisfaction and feedback regarding a clinical EBM search strategy. The authors introduced a custom EBM search option with a self-contained education structure to first-year medical students. The implementation took advantage of a major curricular change towards case-based instruction. Medical student views and experiences were studied regarding the tool's convenience, problems and the degree to which they used it to answer questions raised by case-based instruction. Surveys were completed by 70% of the available first-year students. Student satisfaction and experiences were strongly positive towards the EBM strategy, especially of the tool's convenience and utility for answering issues raised during case-based learning sessions. About 90% of the students responded that the tool was easy to use, productive and accessed for half or more of their search needs. This study provides evidence that the integration of an educational EBM search tool can be positively received by preclinical medical students.

  20. Exploring Gender Differences in SMS-Based Mobile Library Search System Adoption

    Science.gov (United States)

    Goh, Tiong-Thye

    2011-01-01

    This paper investigates differences in how male and female students perceived a short message service (SMS) library catalog search service when adopting it. Based on a sample of 90 students, the results suggest that there are significant differences in perceived usefulness and intention to use but no significant differences in self-efficacy and…

  1. Web-Based Search and Plot System for Nuclear Reaction Data

    International Nuclear Information System (INIS)

    Otuka, N.; Nakagawa, T.; Fukahori, T.; Katakura, J.; Aikawa, M.; Suda, T.; Naito, K.; Korennov, S.; Arai, K.; Noto, H.; Ohnishi, A.; Kato, K.

    2005-01-01

    A web-based search and plot system for nuclear reaction data has been developed, covering experimental data in EXFOR format and evaluated data in ENDF format. The system is implemented for Linux OS, with Perl and MySQL used for CGI scripts and the database manager, respectively. Two prototypes for experimental and evaluated data are presented

  2. 32 CFR 634.52 - Search incident to impoundment based on criminal activity.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Search incident to impoundment based on criminal activity. 634.52 Section 634.52 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY (CONTINUED) LAW ENFORCEMENT AND CRIMINAL INVESTIGATIONS MOTOR VEHICLE TRAFFIC SUPERVISION Impounding...

  3. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

    The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative searches of similar prostate image patches. Given a representative image patch (sub-image), the algorithm will return a ranked ensemble of image patches throughout the entire whole-slide histology section which exhibit the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in the second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcast to all worker processes. Each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches in the image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens. We have achieved an excellent image retrieval performance. The recall rate within the first 40 ranked retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
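    The annular-histogram idea, one histogram per concentric ring of the patch, so the descriptor keeps coarse spatial layout while tolerating rotation, can be sketched for grayscale patches. Ring layout and bin counts are simplified from the paper:

```python
def annular_histogram(patch, n_rings=3, n_bins=4, max_val=256):
    """Sketch of the hierarchical annular histogram (HAH) idea: one
    intensity histogram per concentric square ring of the patch.
    Ring layout and bin counts are simplified for illustration."""
    n = len(patch)
    center = (n - 1) / 2.0
    hists = [[0] * n_bins for _ in range(n_rings)]
    for r in range(n):
        for c in range(n):
            # Chebyshev distance from the center selects the ring
            ring = int(max(abs(r - center), abs(c - center)) * n_rings / (center + 1))
            ring = min(ring, n_rings - 1)
            bin_ = min(patch[r][c] * n_bins // max_val, n_bins - 1)
            hists[ring][bin_] += 1
    return hists

def l1_distance(h1, h2):
    """Compare two HAH descriptors ring by ring (smaller = more similar)."""
    return sum(abs(a - b) for r1, r2 in zip(h1, h2) for a, b in zip(r1, r2))

patch = [[10, 10, 10, 10], [10, 200, 200, 10],
         [10, 200, 200, 10], [10, 10, 10, 10]]
h = annular_histogram(patch)
```

The hierarchical search then compares only the coarse ring histograms first, refining the candidate set before the more expensive per-segment color comparison described in the abstract.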

  4. Mobile Visual Search Based on Histogram Matching and Zone Weight Learning

    Science.gov (United States)

    Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong

    2018-01-01

In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching and the weighting strategy in compact descriptors for visual search (CDVS). Finally, both the global descriptor matching score and the local descriptor similarity score are summed up to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.

  5. Search for proton decay and supernova neutrino bursts with a lunar base neutron detector

    International Nuclear Information System (INIS)

    Cline, D.B.

    1989-06-01

We describe the current status of the search for proton decay on earth, emphasizing the decay mode p → K⁺ν̄, and discuss the possibility of detecting this mode with a single detector at a lunar base station. The same detector could be used to search for neutrino bursts from distant supernovae using the neutral-current signature ν_μ,τ + N → n + ν_x by detecting the produced neutrons. The key advantage of the lunar experiment is the low neutrino flux and the possibly low radioactive background. (author). 5 refs, 4 tabs, 3 figs

  6. HARD: SUBJECT-BASED SEARCH ENGINE MENGGUNAKAN TF-IDF DAN JACCARD'S COEFFICIENT

    Directory of Open Access Journals (Sweden)

    Rolly Intan

    2006-01-01

Full Text Available This paper proposes a hybridized concept of a search engine based on the subject parameter of High Accuracy Retrieval from Documents (HARD). Tf-Idf and Jaccard's Coefficient are modified and extended to support the concept. Several illustrative examples are given, including their calculation steps, in order to make the proposed concept and formulas clearly understood. Abstract in Bahasa Indonesia (translated): This paper introduces a search engine algorithm based on the HARD (High Accuracy Retrieval from Documents) concept that combines the TF-IDF (Term Frequency-Inverse Document Frequency) method and Jaccard's Coefficient. Both methods, TF-IDF and Jaccard's Coefficient, are modified and extended by introducing several new formulas. To make the introduced algorithm and formulas easier to understand, several example calculations are given. Keywords: HARD, Tf-Idf, Jaccard coefficient, search engine, fuzzy sets.
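As a rough illustration of the two building blocks (plain TF-IDF weighting and Jaccard's coefficient, not the paper's modified and extended formulas):

```python
import math

def tfidf(docs):
    """Plain TF-IDF weights per document (illustration only; the paper's
    modified formulas are not reproduced here)."""
    n = len(docs)
    df = {}
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    return [{t: d.count(t) / len(d) * math.log(n / df[t]) for t in set(d)}
            for d in docs]

def jaccard(a, b):
    """Jaccard's coefficient between two term sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

docs = [["search", "engine", "subject"],
        ["subject", "retrieval"],
        ["fuzzy", "retrieval", "engine"]]
query = ["subject", "engine"]
weights = tfidf(docs)                      # term weights per document
ranked = sorted(range(len(docs)), key=lambda i: -jaccard(docs[i], query))
print(ranked[0])  # 0
```

A HARD-style engine would combine both signals, with TF-IDF scoring term importance and the set overlap scoring subject match.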

  7. Genetic Algorithm-based Dynamic Vehicle Route Search using Car-to-Car Communication

    Directory of Open Access Journals (Sweden)

    KIM, J.

    2010-11-01

Full Text Available Suggesting more efficient driving routes generates benefits not only for individuals, by saving commute time, but also for society as a whole, by reducing accident rates and the social costs of traffic congestion. In this paper, we suggest a new route search algorithm based on a genetic algorithm that is easily installable in mutually communicating car navigation systems, and validate its usefulness through experiments reflecting real-world situations. The proposed algorithm is capable of searching alternative routes dynamically in the event of unexpected system malfunctions or traffic slow-downs due to accidents. Experimental results demonstrate that our algorithm searches the best route more efficiently and evolves with universal adaptability.
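A toy sketch of such a genetic route search, with routes simplified to waypoint visiting orders (the encoding, operators, and parameter values here are illustrative assumptions, not the paper's):

```python
import random

def route_ga(dist, n_gen=200, pop_size=30, seed=3):
    """Toy GA over waypoint visiting orders: tournament selection, order
    crossover, swap mutation, and elitism; fitness is total travel time."""
    rng = random.Random(seed)
    n = len(dist)
    cost = lambda r: sum(dist[r[i]][r[i + 1]] for i in range(n - 1))
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(n_gen):
        nxt = [min(pop, key=cost)]                     # keep the best route
        while len(nxt) < pop_size:
            p1, p2 = (min(rng.sample(pop, 3), key=cost) for _ in range(2))
            cut = rng.randrange(1, n)                  # order crossover
            child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
            if rng.random() < 0.2:                     # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    best = min(pop, key=cost)
    return best, cost(best)

dist = [[0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0]]
route, time_ = route_ga(dist)
print(sorted(route) == list(range(5)))  # True
```

In a car-to-car setting, the travel-time matrix would be refreshed from communicated traffic reports and the population re-evolved when conditions change.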

  8. Project GRACE A grid based search tool for the global digital library

    CERN Document Server

    Scholze, Frank; Vigen, Jens; Prazak, Petra; The Seventh International Conference on Electronic Theses and Dissertations

    2004-01-01

    The paper will report on the progress of an ongoing EU project called GRACE - Grid Search and Categorization Engine (http://www.grace-ist.org). The project participants are CERN, Sheffield Hallam University, Stockholm University, Stuttgart University, GL 2006 and Telecom Italia. The project started in 2002 and will finish in 2005, resulting in a Grid based search engine that will search across a variety of content sources including a number of electronic thesis and dissertation repositories. The Open Archives Initiative (OAI) is expanding and is clearly an interesting movement for a community advocating open access to ETD. However, the OAI approach alone may not be sufficiently scalable to achieve a truly global ETD Digital Library. Many universities simply offer their collections to the world via their local web services without being part of any federated system for archiving and even those dissertations that are provided with OAI compliant metadata will not necessarily be picked up by a centralized OAI Ser...

  9. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    Science.gov (United States)

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.
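The opposition-based initialization step can be sketched as follows (a minimal sketch assuming a real-valued search space [lb, ub] and an error function to be minimized; the names are illustrative):

```python
import random

def opposition_init(pop_size, dim, lb, ub, error):
    """Opposition-based initialization: for each random harmony x, also
    evaluate its opposite lb + ub - x and keep the fitter (lower-error) one."""
    memory = []
    for _ in range(pop_size):
        x = [random.uniform(lb, ub) for _ in range(dim)]
        x_opp = [lb + ub - xi for xi in x]
        memory.append(min(x, x_opp, key=error))
    return memory

random.seed(1)
sphere = lambda v: sum(xi * xi for xi in v)   # stand-in error fitness
hm = opposition_init(pop_size=10, dim=5, lb=-1.0, ub=1.0, error=sphere)
print(len(hm))  # 10
```

The same opposite-point trick is reapplied during the generation-jumping phase, which is what distinguishes OHS from the basic harmony search.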

  10. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    S. K. Saha

    2013-01-01

    Full Text Available In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.

  11. An Efficient Minimum Free Energy Structure-Based Search Method for Riboswitch Identification Based on Inverse RNA Folding.

    Directory of Open Access Journals (Sweden)

    Matan Drory Retwitzer

Full Text Available Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems in the evolutionary timescale. One of the biggest challenges in riboswitch research is to find additional eukaryotic riboswitches since more than 20 riboswitch classes have been found in prokaryotes but only one class has been found in eukaryotes. Moreover, this single known class of eukaryotic riboswitch, namely the TPP riboswitch class, has been found in bacteria, archaea, fungi and plants but not in animals. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods such as a combination of BLAST and pattern matching techniques that incorporate base-pairing considerations. None of these approaches perform energy minimization structure predictions. There is a clear motivation to develop new bioinformatics methods, aside from the ongoing advances in covariance models, that will sample the sequence search space more flexibly using structural guidance while retaining the computational efficiency of sequence-based methods. We present a new energy minimization approach that transforms structure-based search into a sequence-based search, thereby enabling the utilization of well established sequence-based search utilities such as BLAST and FASTA. The transformation to sequence space is obtained by using an extended inverse RNA folding problem solver with sequence and structure constraints, available within RNAfbinv. Examples in applying the new method are presented for the purine and preQ1 riboswitches. The method is described in detail along with its findings in prokaryotes. Potential uses in finding novel eukaryotic riboswitches and optimizing pre-designed synthetic riboswitches based on ligand simulations are discussed. The method components are freely

  12. Nonuniformity correction for an infrared focal plane array based on diamond search block matching.

    Science.gov (United States)

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring severely degrade the correction quality. In this paper, an improved algorithm based on the diamond search block matching algorithm and an adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update the correction parameters. During gradient descent, the local standard deviation and a threshold are utilized to control the learning rate and avoid the accumulation of matching error. Finally, the nonuniformity correction is realized by a linear model with the updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with fewer ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
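A minimal sketch of the diamond search block matching step on grayscale frames (the gradient-descent correction machinery of the paper is omitted; the block position, block size, and test frames are arbitrary assumptions):

```python
import numpy as np

def sad(ref, tgt, y, x, by, bx, bs):
    """Sum of absolute differences between the reference block at (by, bx)
    and a candidate block at (y, x) in the target; inf if out of frame."""
    if y < 0 or x < 0 or y + bs > tgt.shape[0] or x + bs > tgt.shape[1]:
        return np.inf
    return np.abs(ref[by:by + bs, bx:bx + bs].astype(int)
                  - tgt[y:y + bs, x:x + bs].astype(int)).sum()

def diamond_search(ref, tgt, by, bx, bs=8):
    """Large diamond pattern until the centre is best, then one
    small-diamond refinement; returns the motion vector (dy, dx)."""
    ldsp = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
            (-1, -1), (-1, 1), (1, -1), (1, 1)]
    sdsp = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    cy, cx = by, bx
    while True:
        best = min(ldsp, key=lambda d: sad(ref, tgt, cy + d[0], cx + d[1], by, bx, bs))
        if best == (0, 0):
            break
        cy, cx = cy + best[0], cx + best[1]
    best = min(sdsp, key=lambda d: sad(ref, tgt, cy + d[0], cx + d[1], by, bx, bs))
    return cy + best[0] - by, cx + best[1] - bx

rng = np.random.default_rng(0)
tgt = rng.integers(0, 256, (64, 64))
ref = np.roll(tgt, (3, -2), axis=(0, 1))   # ref's content is tgt shifted
mv = diamond_search(ref, tgt, 16, 16)
print(mv)
```

The matched block pairs supply the transform pairs whose errors drive the gradient-descent update of the correction parameters.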

  13. Road Traffic Congestion Management Based on a Search-Allocation Approach

    Directory of Open Access Journals (Sweden)

    Raiyn Jamal

    2017-03-01

Full Text Available This paper introduces a new scheme for road traffic management in smart cities, aimed at reducing road traffic congestion. The scheme is based on a combination of searching, updating, and allocation techniques (SUA). The SUA approach is proposed to reduce the processing time for forecasting the conditions of all road sections in real-time, which is typically considerable and complex. It searches for the shortest route based on historical observations, then computes travel time forecasts based on vehicular location in real-time. Using updated information, which includes travel time forecasts and accident forecasts, the vehicle is allocated to the appropriate section. The novelty of the SUA scheme lies in its updating of vehicle allocations at every time step to reduce traffic congestion. Furthermore, the SUA approach supports autonomy and management by self-regulation, which recommends its use in smart cities that support internet of things (IoT) technologies.
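The shortest-route search step can be illustrated with a plain Dijkstra search over forecast travel times (an illustrative stand-in; the SUA scheme's forecasting and allocation logic is not reproduced, and the road graph is hypothetical):

```python
import heapq

def shortest_route(graph, src, dst):
    """Dijkstra over forecast travel times; 'graph' maps node ->
    [(neighbour, forecast_minutes), ...]."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)], "D": []}
path, eta = shortest_route(roads, "A", "D")
print(path, eta)  # ['A', 'C', 'B', 'D'] 8.0
```

In the SUA scheme the edge weights would be refreshed from real-time forecasts and the route recomputed at each update cycle.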

  14. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, such as data discovery and data publishing. However, the efficiency of geospatial resource discovery remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow responses and poor user experience; (3) users on different browsers and devices may have very different experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various and distributed GCIs; (2) the asynchronous record retrieval mode enhances search performance and user interactivity; (3) the HTML5-based search engine is able to provide unified access capabilities for users with different devices (e.g. tablet and smartphone).

  15. Algorithms for recollection of search terms based on the Wikipedia category structure.

    Science.gov (United States)

    Vandamme, Stijn; De Turck, Filip

    2014-01-01

    The common user interface for a search engine consists of a text field where the user can enter queries consisting of one or more keywords. Keyword query based search engines work well when the users have a clear vision what they are looking for and are capable of articulating their query using the same terms as indexed. For our multimedia database containing 202,868 items with text descriptions, we supplement such a search engine with a category-based interface whose category structure is tailored to the content of the database. This facilitates browsing and offers the users the possibility to look for named entities, even if they forgot their names. We demonstrate that this approach allows users who fail to recollect the name of named entities to retrieve data with little effort. In all our experiments, it takes 1 query on a category and on average 2.49 clicks, compared to 5.68 queries on the database's traditional text search engine for a 68.3% success probability or 6.01 queries when the user also turns to Google, for a 97.1% success probability.

  16. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
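The ranking idea can be sketched in a few lines on hypothetical data (the studies used Web of Science co-citation data; names here are placeholders):

```python
from collections import Counter

def cocitation_scores(citing_refs, known):
    """For every candidate article, count how often it is cited together
    with the 'known' articles; citing_refs maps citing paper -> references."""
    known = set(known)
    scores = Counter()
    for refs in citing_refs.values():
        refs = set(refs)
        hits = len(refs & known)
        if hits:
            for art in refs - known:
                scores[art] += hits
    return scores

citing = {"p1": ["known1", "a", "b"],
          "p2": ["known1", "known2", "a"],
          "p3": ["b", "c"]}
scores = cocitation_scores(citing, ["known1", "known2"])
print(scores.most_common(2))  # [('a', 3), ('b', 1)]
```

Articles scoring above a chosen threshold are then screened for eligibility, as in the two studies described above.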

  17. Category Theory Approach to Solution Searching Based on Photoexcitation Transfer Dynamics

    Directory of Open Access Journals (Sweden)

    Makoto Naruse

    2017-07-01

Full Text Available Solution searching that accompanies combinatorial explosion is one of the most important issues in the age of artificial intelligence. Natural intelligence, which exploits natural processes for intelligent functions, is expected to help resolve or alleviate the difficulties of conventional computing paradigms and technologies. In fact, we have shown that a single-celled organism such as an amoeba can solve constraint satisfaction problems and related optimization problems, and have demonstrated experimental systems based on non-organic processes such as optical energy transfer involving near-field interactions. However, the fundamental mechanisms and limitations behind solution searching based on natural processes have not yet been understood. Herein, we present a theoretical background of solution searching based on optical excitation transfer from a category-theoretic standpoint. One important indication inspired by category theory is that the satisfaction of short exact sequences is critical for an adequate computational operation that determines the flow of time for the system, which is termed “short-exact-sequence-based time.” In addition, the octahedral and braid structures known in triangulated categories provide a clear understanding of the underlying mechanisms, including a quantitative indication of the difficulty of obtaining solutions based on homology dimension. This study contributes a fundamental background of natural intelligence.

  18. Guided search for hybrid systems based on coarse-grained space abstractions.

    Science.gov (United States)

    Bogomolov, Sergiy; Donzé, Alexandre; Frehse, Goran; Grosu, Radu; Johnson, Taylor T; Ladan, Hamed; Podelski, Andreas; Wehrle, Martin

Hybrid systems represent an important and powerful formalism for modeling real-world applications such as embedded systems. A verification tool like SpaceEx is based on the exploration of a symbolic search space (the region space). As a verification tool, it is typically optimized towards proving the absence of errors. In some settings, e.g., when the verification tool is employed in a feedback-directed design cycle, one would like to have the option to call a version that is optimized towards finding an error trajectory in the region space. A recent approach in this direction is based on guided search. Guided search relies on a cost function that indicates which states are promising to be explored, and preferably explores more promising states first. In this paper, we propose an abstraction-based cost function based on coarse-grained space abstractions for guiding the reachability analysis. For this purpose, a suitable abstraction technique that exploits the flexible granularity of modern reachability analysis algorithms is introduced. The new cost function is an effective extension of pattern database approaches that have been successfully applied in other areas. The approach has been implemented in the SpaceEx model checker. The evaluation shows its practical potential.

  19. A Line-Search-Based Partial Proximal Alternating Directions Method for Separable Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yu-hua Zeng

    2014-01-01

Full Text Available We propose an appealing line-search-based partial proximal alternating directions (LSPPAD) method for solving a class of separable convex optimization problems. Problems of this class are common in practice. The proposed method solves two subproblems at each iteration: one is solved by a proximal point method, while the proximal term is absent from the other. Both subproblems admit inexact solutions. A line search technique is used to guarantee the convergence. The convergence of the LSPPAD method is established under suitable conditions. The advantage of the proposed method is that it provides tractability of the subproblem in which the proximal term is absent. Numerical tests show that the LSPPAD method performs better than the existing alternating projection based prediction-correction (APBPC) method when both are employed to solve the described problem.

  20. Multiresolution Search of the Rigid Motion Space for Intensity-Based Registration.

    Science.gov (United States)

    Nasihatkon, Behrooz; Kahl, Fredrik

    2018-01-01

We study the relation between the correlation-based target functions of low-resolution and high-resolution intensity-based registration for the class of rigid transformations. Our results show that low-resolution target values can tightly bound the high-resolution target function in natural images. This can help with analyzing and better understanding the process of multiresolution image registration. It also gives a guideline for designing multiresolution algorithms in which the search space in higher-resolution registration is restricted given the fitness values for lower-resolution image pairs. To demonstrate this, we incorporate our multiresolution technique into a Lipschitz global optimization framework. We show that using the multiresolution scheme can result in large gains in the efficiency of such algorithms. The method is evaluated by applying it to the problems of 2D registration, 3D rotation search, and the detection of reflective symmetry in 2D and 3D images.

  1. Hybrid fuzzy charged system search algorithm based state estimation in distribution networks

    Directory of Open Access Journals (Sweden)

    Sachidananda Prasad

    2017-06-01

Full Text Available This paper proposes a new hybrid charged system search (CSS) algorithm for state estimation in radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with equality and inequality constraints for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and the Indian 85-bus practical radial distribution system. The obtained results have been compared with the conventional CSS algorithm, the weighted least squares (WLS) algorithm, and particle swarm optimization (PSO) to establish the feasibility of the algorithm.

  2. Self-Adaptive Stepsize Search Applied to Optimal Structural Design

    Science.gov (United States)

    Nolle, L.; Bland, J. A.

Structural engineering often involves the design of space frames that are required to resist predefined external forces without exhibiting plastic deformation. The weight of the structure, and hence the weight of its constituent members, has to be as low as possible for economic reasons without violating any of the load constraints. Design spaces are usually vast and the computational costs of analyzing a single design are usually high. Therefore, not every possible design can be evaluated for real-world problems. In this work, a standard structural design problem, the 25-bar problem, has been solved using self-adaptive stepsize search (SASS), a relatively new search heuristic. This algorithm has only one control parameter and therefore overcomes a drawback of modern search heuristics, i.e. the need to first find a set of optimum control parameter settings for the problem at hand. In this work, SASS outperforms simulated annealing, genetic algorithms, tabu search and ant colony optimization.
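A hedged sketch of a hill climber with a single self-adapted step size, in the spirit of SASS but not the published algorithm (the doubling/halving factors, Gaussian proposal, and clamping bounds are assumptions):

```python
import random

def sass_minimize(f, x0, n_iter=2000, seed=5):
    """Self-adaptive stepsize hill climber: the one adapted quantity is the
    step size, doubled after an improving move and halved otherwise."""
    rng = random.Random(seed)
    x, fx, step = list(x0), f(x0), 1.0
    for _ in range(n_iter):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx, step = cand, fc, step * 2.0   # success: be bolder
        else:
            step *= 0.5                          # failure: be more careful
        step = min(max(step, 1e-12), 1e3)        # keep the step size sane
    return x, fx

sphere = lambda v: sum(xi * xi for xi in v)      # stand-in for frame weight
x, fx = sass_minimize(sphere, [5.0, -3.0, 2.0])
print(fx <= sphere([5.0, -3.0, 2.0]))  # True
```

In the structural setting, f would be the frame weight with a large penalty for violated load constraints.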

  3. MinHash-Based Fuzzy Keyword Search of Encrypted Data across Multiple Cloud Servers

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2018-05-01

Full Text Available To enhance the efficiency of data searching, most data owners store their data files on different cloud servers in the form of cipher-text. Thus, efficient search using fuzzy keywords becomes a critical issue in such a cloud computing environment. This paper proposes a method that aims at improving the efficiency of cipher-text retrieval and lowering the storage overhead of fuzzy keyword search. In contrast to traditional approaches, the proposed method reduces the complexity of Min-Hash-based fuzzy keyword search by using Min-Hash fingerprints to avoid constructing the fuzzy keyword set. The method utilizes Jaccard similarity to rank the retrieval results, thus reducing the amount of similarity calculation and saving a great deal of time and space overhead. The method also accommodates multiple user queries through re-encryption technology and updates user permissions dynamically. Security analysis demonstrates that the method provides better privacy preservation, and experimental results show that it improves retrieval time and lowers storage overhead as well.
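A minimal sketch of Min-Hash fingerprinting and Jaccard-based ranking (the encryption, re-encryption, and permission machinery is omitted; the salted SHA-1 hash family is an assumption for illustration):

```python
import hashlib
import random

def _h(salt, token):
    """Deterministic 64-bit salted hash of a token (SHA-1 based)."""
    digest = hashlib.sha1(f"{salt}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash_signature(tokens, n_hashes=64, seed=7):
    """Min-Hash fingerprint: the minimum salted hash of the set, per salt."""
    rnd = random.Random(seed)
    salts = [rnd.getrandbits(32) for _ in range(n_hashes)]
    return [min(_h(s, t) for t in tokens) for s in salts]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of agreeing positions estimates Jaccard similarity."""
    return sum(p == q for p, q in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature({"fuzzy", "keyword", "search", "cloud"})
b = minhash_signature({"fuzzy", "keyword", "search", "server"})
est = estimated_jaccard(a, b)   # true Jaccard is 3/5
print(estimated_jaccard(a, a))  # 1.0
print(est)
```

Because two token sets with high Jaccard overlap yield agreeing signature positions with the same probability, the short fingerprints can rank candidates without materializing the full fuzzy keyword set.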

  4. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  5. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    Science.gov (United States)

    2002-01-01

can pose queries via a simple point-and-click, form-based user interface without ever needing to write SQL queries. Similarity searching also allows ... monotonic scoring functions that has been adopted by the Garlic multimedia information system under development at the IBM Almaden Research Center ... product memory costs [60]. On the other hand, in Garlic, the data items returned by each stream must wait in a temporary file until the completion of the

  6. A peak value searching method of the MCA based on digital logic devices

    International Nuclear Information System (INIS)

    Sang Ziru; Huang Shanshan; Chen Lian; Jin Ge

    2010-01-01

Digital multi-channel analyzers play an increasingly important role in multi-channel pulse-height analysis. The move toward digitalization is characterized by powerful pulse processing ability, high throughput, and improved stability and flexibility. This paper introduces a method for searching the peak value of a waveform based on digital logic in an FPGA. The method reduces the dead time, and offline data correction improves the non-linearity of the MCA. The α energy spectrum of 241Am is given. (authors)
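In software form, the peak-value search amounts to taking the maximum of each above-threshold run of ADC samples (a minimal sketch; the paper implements the equivalent logic in FPGA hardware, and the trace below is invented):

```python
def peak_values(samples, threshold):
    """Within each run of samples above threshold (one pulse),
    record the maximum, i.e. the pulse height."""
    peaks, current = [], None
    for s in samples:
        if s > threshold:
            current = s if current is None else max(current, s)
        elif current is not None:
            peaks.append(current)        # pulse ended: store its peak
            current = None
    if current is not None:
        peaks.append(current)            # trace ended mid-pulse
    return peaks

trace = [0, 1, 5, 9, 7, 2, 0, 0, 3, 8, 12, 6, 1, 0]
print(peak_values(trace, threshold=2))  # [9, 12]
```

Each detected peak would then increment the corresponding channel of the pulse-height histogram.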

  7. The possibilities of searching for new materials based on isocationic analogs of ZnBVI

    Science.gov (United States)

    Kirovskaya, I. A.; Mironova, E. V.; Kosarev, B. A.; Yureva, A. V.; Ekkert, R. V.

    2017-08-01

The acid-base properties of chalcogenides - analogs of ZnBVI - were investigated in detail by modern techniques. The regularities of their composition-dependent changes were established; these regularities correlate with the "bulk physicochemical property - composition" dependencies. The main reason for such correlations was identified, facilitating the search for new materials for the corresponding sensors - in this case, sensors for basic gas impurities.

  8. Expectation violations in sensorimotor sequences: shifting from LTM-based attentional selection to visual search.

    Science.gov (United States)

    Foerster, Rebecca M; Schneider, Werner X

    2015-03-01

    Long-term memory (LTM) delivers important control signals for attentional selection. LTM expectations have an important role in guiding the task-driven sequence of covert attention and gaze shifts, especially in well-practiced multistep sensorimotor actions. What happens when LTM expectations are disconfirmed? Does a sensory-based visual-search mode of attentional selection replace the LTM-based mode? What happens when prior LTM expectations become valid again? We investigated these questions in a computerized version of the number-connection test. Participants clicked on spatially distributed numbered shapes in ascending order while gaze was recorded. Sixty trials were performed with a constant spatial arrangement. In 20 consecutive trials, either numbers, shapes, both, or no features switched position. In 20 reversion trials, participants worked on the original arrangement. Only the sequence-affecting number switches elicited slower clicking, visual search-like scanning, and lower eye-hand synchrony. The effects were neither limited to the exchanged numbers nor to the corresponding actions. Thus, expectation violations in a well-learned sensorimotor sequence cause a regression from LTM-based attentional selection to visual search beyond deviant-related actions and locations. Effects lasted for several trials and reappeared during reversion. © 2015 New York Academy of Sciences.

  9. Fast online and index-based algorithms for approximate search of RNA sequence-structure patterns

    Science.gov (United States)

    2013-01-01

    Background It is well known that the search for homologous RNAs is more effective if both sequence and structure information is incorporated into the search. However, current tools for searching with RNA sequence-structure patterns cannot fully handle mutations occurring on both these levels or are simply not fast enough for searching large sequence databases because of the high computational costs of the underlying sequence-structure alignment problem. Results We present new fast index-based and online algorithms for approximate matching of RNA sequence-structure patterns supporting a full set of edit operations on single bases and base pairs. Our methods efficiently compute semi-global alignments of structural RNA patterns and substrings of the target sequence whose costs satisfy a user-defined sequence-structure edit distance threshold. For this purpose, we introduce a new computing scheme to optimally reuse the entries of the required dynamic programming matrices for all substrings and combine it with a technique for avoiding the alignment computation of non-matching substrings. Our new index-based methods exploit suffix arrays preprocessed from the target database and achieve running times that are sublinear in the size of the searched sequences. To support the description of RNA molecules that fold into complex secondary structures with multiple ordered sequence-structure patterns, we use fast algorithms for the local or global chaining of approximate sequence-structure pattern matches. The chaining step removes spurious matches from the set of intermediate results, in particular of patterns with little specificity. In benchmark experiments on the Rfam database, our improved online algorithm is faster than the best previous method by up to factor 45. Our best new index-based algorithm achieves a speedup of factor 560. Conclusions The presented methods achieve considerable speedups compared to the best previous method. This, together with the expected

  10. Internet-based search of randomised trials relevant to mental health originating in the Arab world

    Directory of Open Access Journals (Sweden)

    Adams Clive E

    2005-07-01

    Full Text Available Abstract Background The internet is becoming a widely used source of accessing medical research through various on-line databases. This instant access to information is of benefit to busy clinicians and service users around the world. The population of the Arab World is comparable to that of the United States, yet it is widely believed to have a greatly contrasting output of randomised controlled trials related to mental health. This study was designed to investigate the existence of such research in the Arab World and also to investigate the availability of this research on-line. Methods Survey of findings from three internet-based potential sources of randomised trials originating from the Arab world and relevant to mental health care. Results A manual search of an Arabic online current contents service identified 3 studies, MEDLINE, EMBASE, and PsycINFO searches identified only 1 study, and a manual search of a specifically indexed, study-based mental health database, PsiTri, revealed 27 trials. Conclusion There genuinely seem to be few trials from the Arab world and accessing these on-line was problematic. Replication of some studies that guide psychiatric/psychological practice in the Arab world would seem prudent.

  11. Simulation Optimization of Search and Rescue in Disaster Relief Based on Distributed Auction Mechanism

    Directory of Open Access Journals (Sweden)

    Jian Tang

    2017-11-01

    Full Text Available In this paper, we optimize the search and rescue (SAR) in disaster relief through agent-based simulation. We simulate rescue teams’ search behaviors with the improved Truncated Lévy walks. Then we propose a cooperative rescue plan based on a distributed auction mechanism, and illustrate it with the case of landslide disaster relief. The simulation is conducted in three scenarios, including “fatal”, “serious” and “normal”. Compared with the non-cooperative rescue plan, the proposed rescue plan in this paper would increase victims’ relative survival probability by 7–15%, increase the ratio of survivors getting rescued by 5.3–12.9%, and decrease the average elapsed time for one site getting rescued by 16.6–21.6%. The robustness analysis shows that the search radius can affect the rescue efficiency significantly, while the scope of cooperation does not. The sensitivity analysis shows that two parameters, the time limit for completing rescue operations at one buried site and the maximum turning angle for the next step, both have a great influence on rescue efficiency, and there exist optimal values for both of them in view of rescue efficiency.

  12. A model independent S/W framework for search-based software testing.

    Science.gov (United States)

    Oh, Jungsup; Baik, Jongmoon; Lim, Sung-Hwa

    2014-01-01

    In the Model-Based Testing (MBT) area, Search-Based Software Testing (SBST) has been employed to generate test cases from the model of a system under test. However, many types of models have been used in MBT. If the type of model changes from one to another, all functions of a search technique must be reimplemented, even if the same search technique is applied, because the model types differ. It requires too much time and effort to implement the same algorithm over and over again. We propose a model-independent software framework for SBST, which can reduce such redundant work. The framework provides a reusable common software platform to reduce time and effort. The software framework not only presents design patterns to find test cases for a target model but also reduces development time by using common functions provided in the framework. We show the effectiveness and efficiency of the proposed framework with two case studies. The framework improves productivity by about 50% when the type of model is changed.

  13. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-01-01

    Full Text Available The harmony search (HS) algorithm is an emerging population-based metaheuristic algorithm, which is inspired by the music improvisation process. The HS method has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimension complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain the proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and a convergence analysis are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and achieves a better balance between space exploration and local exploitation on high-dimension complex optimization problems.
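
As a point of reference for the record above, here is a minimal sketch of the classic harmony search improvisation loop (harmony memory consideration, pitch adjustment, random selection). It is not the paper's HSTL variant with teaching-learning and dynamic parameters; the function name and parameter defaults are illustrative.

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=2000, seed=0):
    """Minimize `objective` over box `bounds` with basic harmony search."""
    rng = random.Random(seed)
    # Initialize the harmony memory with random solutions.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:          # harmony memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:       # local pitch adjusting
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw) * (hi - lo)))
            else:                            # random selection
                x = rng.uniform(lo, hi)
            new.append(x)
        worst = max(range(hms), key=scores.__getitem__)
        s = objective(new)
        if s < scores[worst]:                # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

On a 2-D sphere function, `harmony_search(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 2)` converges close to the origin.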

  14. The Nursing & Allied Health (CINAHL) data base: a guide to effective searching.

    Science.gov (United States)

    Fishel, C C

    1985-01-01

    The Cumulative Index to Nursing & Allied Health Literature is now available online through both BRS and DIALOG. Known as the NURSING & ALLIED HEALTH (CINAHL) file, it is the data base of choice for professionals in these fields. Unlike the National Library of Medicine's Medical Subject Headings (MeSH), CINAHL has a strong nursing orientation and a specific, current nursing vocabulary. Search techniques are similar to those used on MEDLINE since CINAHL has adopted the powerful MeSH tree structure format. The arrival of this data base is a significant advance for the nursing profession.

  15. Handling Conflicts in Depth-First Search for LTL Tableau to Debug Compliance Based Languages

    Directory of Open Access Journals (Sweden)

    Francois Hantry

    2011-09-01

    Full Text Available Providing adequate tools to tackle the problem of inconsistent compliance rules is a critical research topic. This problem is of paramount importance to achieve automatic support for early declarative design and to support the evolution of rules in contract-based or service-based systems. In this paper we investigate the problem of extracting temporal unsatisfiable cores in order to detect the inconsistent part of a specification. We extend a conflict-driven SAT solver to provide a new conflict-driven depth-first search solver for temporal logic. We use this solver to compute LTL unsatisfiable cores without re-exploring the history of the solver.

  16. A Privacy-Preserving Intelligent Medical Diagnosis System Based on Oblivious Keyword Search

    Directory of Open Access Journals (Sweden)

    Zhaowen Lin

    2017-01-01

    Full Text Available One concern people have is how to obtain a diagnosis online without jeopardizing their privacy. In this paper, we propose a privacy-preserving intelligent medical diagnosis system (IMDS), which can efficiently solve this problem. In IMDS, users submit their health examination parameters to the server in a protected form; this submission process is based on the Paillier cryptosystem and does not reveal any information about their data. The server then retrieves the most likely disease (or multiple diseases) from the database and returns it to the users. In the above search process, we use oblivious keyword search (OKS) as a basic framework, which lets the server retain its computational ability while learning no personal information from the users' data. Besides, this paper also provides a preprocessing method for the data stored on the server, to make our protocol more efficient.

  17. Extended-Search, Bézier Curve-Based Lane Detection and Reconstruction System for an Intelligent Vehicle

    Directory of Open Access Journals (Sweden)

    Xiaoyun Huang

    2015-09-01

    Full Text Available To improve the real-time performance and detection rate of a Lane Detection and Reconstruction (LDR) system, an extended-search-based lane detection method and a Bézier curve-based lane reconstruction algorithm are proposed in this paper. The extended-search-based lane detection method is designed to search boundary blocks from the initial position, in an upwards direction and along the lane, with small search areas including continuous search, discontinuous search and bending search in order to detect different lane boundaries. The Bézier curve-based lane reconstruction algorithm is employed to describe a wide range of lane boundary forms with comparatively simple expressions. In addition, two Bézier curves are adopted to reconstruct the lanes' outer boundaries with large curvature variation. The lane detection and reconstruction algorithm — including initial-block determination, extended search, binarization processing and lane-boundary fitting in different scenarios — is verified in road tests. The results show that the algorithm is robust against different shadows and illumination variations; the average processing time per frame is 13 ms. Significantly, it achieves a high detection rate of 88.6% on curved lanes with large or variable curvatures, where the accident rate is higher than that of straight lanes.
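
The reconstruction step above relies on evaluating Bézier curves. A minimal, self-contained evaluation via de Casteljau's algorithm is sketched below; the control points used in the example are hypothetical, not taken from the paper.

```python
def bezier_point(control_points, t):
    """Evaluate a Bézier curve at t in [0, 1] via de Casteljau's algorithm."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

For collinear control points the curve stays on the line, so `bezier_point([(0, 0), (1, 1), (2, 2), (3, 3)], 0.5)` is `(1.5, 1.5)`; the curve always starts and ends at the first and last control points.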

  18. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.
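
PhenomeNET 2 computes semantic similarity over phenotype ontologies; as a much-simplified stand-in, plain Jaccard similarity over sets of phenotype term IDs illustrates the ranking idea. The term IDs and profile names below are invented for illustration, and real ontology-aware measures weight terms by their position in the ontology rather than treating them as flat sets.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of phenotype term IDs."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_by_similarity(query, profiles):
    """Rank (name, term-set) profiles by similarity to the query profile."""
    return sorted(profiles, key=lambda item: jaccard(query, item[1]),
                  reverse=True)
```

A query profile sharing two of three terms with a model-organism profile ranks that profile above one sharing none.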

  19. New Internet search volume-based weighting method for integrating various environmental impacts

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
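
The core idea above, deriving weighting factors for impact categories from relative Internet search volumes, can be sketched as a simple normalization. The impact categories and volume figures below are made-up placeholders, not the paper's data, and the real method involves selecting and aggregating search terms per category.

```python
def search_volume_weights(volumes):
    """Normalize per-category search volumes into weighting factors
    that sum to 1 (a sketch of the volume-based weighting idea)."""
    total = sum(volumes.values())
    return {impact: v / total for impact, v in volumes.items()}
```

With hypothetical volumes `{"global warming": 1200, "acidification": 300, "eutrophication": 500}`, the category with the highest public interest receives the largest weighting factor.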

  20. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    Science.gov (United States)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2018-03-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem that allows operations to be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems, the assignment problem and the scheduling problem. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search in promising regions of the search space and to improve the quality of the NGA final population. The efficiency of our approach stems from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from the intensification technique of the tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
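
The tabu search intensification mentioned above can be illustrated in generic form by a permutation-based tabu search with a swap neighborhood, a short-term tabu list, and an aspiration criterion. This is a textbook skeleton under an arbitrary toy objective, not the paper's holonic multiagent implementation.

```python
import itertools
from collections import deque

def tabu_search(initial, objective, tenure=7, iterations=200):
    """Generic tabu search over permutations with a swap neighborhood."""
    current = list(initial)
    best, best_score = list(current), objective(current)
    tabu = deque(maxlen=tenure)  # short-term memory of recent swap moves
    for _ in range(iterations):
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            neighbor = list(current)
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            score = objective(neighbor)
            # Aspiration: a tabu move is allowed if it beats the best so far.
            if (i, j) not in tabu or score < best_score:
                candidates.append((score, (i, j), neighbor))
        if not candidates:
            break
        score, move, neighbor = min(candidates)
        current = neighbor
        tabu.append(move)
        if score < best_score:
            best, best_score = list(neighbor), score
    return best, best_score
```

Minimizing total displacement from the identity permutation, starting from `[3, 1, 2, 0]`, drives the search to the sorted permutation.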

  1. Experiences using the Tabu search method to solve university organization problems

    Directory of Open Access Journals (Sweden)

    J.M. Tamarit

    2000-01-01

    Full Text Available Over the last 10 years we have worked on large-scale academic organization problems. In the university setting we have addressed the construction of examination timetables, the assignment of students to groups in connection with the self-enrolment problem, and the construction of course timetables. All of these problems are hard (NP-hard), so in every case the solution algorithms implemented have been complex and based on procedures adapted to each problem. In all of them, however, the Tabu Search method has been the essential element in obtaining good solutions. In this paper we present some lessons learned from these experiences: both the elements that have proven useful across the whole set of problems and those whose usefulness has varied among the different cases considered. We also examine different ways of using the elements of Tabu search and present our conclusions.

  2. BSSF: a fingerprint based ultrafast binding site similarity search and function analysis server

    Directory of Open Access Journals (Sweden)

    Jiang Hualiang

    2010-01-01

    Full Text Available Abstract Background Genome sequencing and post-genomics projects such as structural genomics are extending the frontier of the study of the sequence-structure-function relationship of genes and their products. Although many sequence/structure-based methods have been devised with the aim of deciphering this delicate relationship, there still remain large gaps in this fundamental problem, which continuously drives researchers to develop novel methods to extract relevant information from sequences and structures and to infer the functions of newly identified genes by genomics technology. Results Here we present an ultrafast method, named BSSF (Binding Site Similarity & Function), which enables researchers to conduct similarity searches in a comprehensive three-dimensional binding site database extracted from PDB structures. This method utilizes a fingerprint representation of the binding site and a validated statistical Z-score function scheme to judge the similarity between the query and database items, even if their similarity is confined to a sub-pocket. This fingerprint-based similarity measurement was also validated on a known binding site dataset by comparison with geometric hashing, which is a standard 3D similarity method. The comparison clearly demonstrated the utility of this ultrafast method. After the database search, the hit list is further analyzed to provide basic statistical information about the occurrences of Gene Ontology terms and Enzyme Commission numbers, which may benefit researchers by helping them to design further experiments to study the query proteins. Conclusions This ultrafast web-based system will not only help researchers interested in drug design and structural genomics to identify similar binding sites, but also assist them by providing further analysis of the hit list from the database search.

  3. Power and Execution Time Optimization through Hardware Software Partitioning Algorithm for Core Based Embedded System

    Directory of Open Access Journals (Sweden)

    Siwar Ben Haj Hassine

    2017-01-01

    Full Text Available Shortening the marketing cycle of a product and accelerating its development efficiency have become vital concerns in the field of embedded system design. Therefore, hardware/software partitioning has become one of the mainstream technologies of embedded system development, since it affects the overall system performance. Given today's demand for high efficiency necessarily accompanied by high speed, our new algorithm is designed to meet such unprecedented requirements. In fact, we describe in this paper an algorithm based on HW/SW partitioning that aims to find the best tradeoff between the power and latency of a system, taking the dark silicon problem into consideration. Moreover, it has been tested and has shown its efficiency compared to well-known existing heuristic algorithms, namely Simulated Annealing, Tabu search, and Genetic algorithms.

  4. A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling

    DEFF Research Database (Denmark)

    Clausen, Tommy

    We consider a generalized version of the shift design problem, where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.

  5. Linkage-Based Distance Metric in the Search Space of Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Yong-Hyuk Kim

    2015-01-01

    Full Text Available We propose a new distance metric, based on the linkage of genes, in the search space of genetic algorithms. This second-order distance measure is derived from the gene interaction graph and first-order distance, which is a natural distance in chromosomal spaces. We show that the proposed measure forms a metric space and can be computed efficiently. As an example application, we demonstrate how this measure can be used to estimate the extent to which gene rearrangement improves the performance of genetic algorithms.
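
The distinction between the first-order (chromosomal) distance and a linkage-aware second-order measure can be illustrated roughly as follows. The pair-penalty form and the `alpha` weight are illustrative assumptions, not the paper's exact definition derived from the gene interaction graph.

```python
def hamming(x, y):
    """First-order distance: number of gene positions where chromosomes differ."""
    return sum(a != b for a, b in zip(x, y))

def linkage_distance(x, y, interactions, alpha=0.5):
    """Illustrative second-order distance: Hamming distance plus a penalty
    for each interacting gene pair in which both genes differ
    (pair list and weight are hypothetical)."""
    differs = [a != b for a, b in zip(x, y)]
    pair_term = sum(differs[i] and differs[j] for i, j in interactions)
    return hamming(x, y) + alpha * pair_term
```

Two chromosomes differing at two linked genes are thus "further apart" than two differing at two unlinked genes, which is the intuition behind a linkage-based metric.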

  6. Investigation of candidate data structures and search algorithms to support a knowledge based fault diagnosis system

    Science.gov (United States)

    Bosworth, Edward L., Jr.

    1987-01-01

    The focus of this research is the investigation of data structures and associated search algorithms for automated fault diagnosis of complex systems such as the Hubble Space Telescope. Such data structures and algorithms will form the basis of a more sophisticated Knowledge Based Fault Diagnosis System. As a part of the research, several prototypes were written in VAXLISP and implemented on one of the VAX-11/780's at the Marshall Space Flight Center. This report describes and gives the rationale for both the data structures and algorithms selected. A brief discussion of a user interface is also included.

  7. An Experiment and Detection Scheme for Cavity-Based Light Cold Dark Matter Particle Searches

    Directory of Open Access Journals (Sweden)

    Masroor H. S. Bukhari

    2017-01-01

    Full Text Available A resonance detection scheme and some useful ideas for cavity-based searches of light cold dark matter particles (such as axions) are presented, as an effort to aid the ongoing endeavors in this direction as well as future experiments, especially in possibly developing a table-top experiment. The scheme is based on our idea of a resonant detector, incorporating an integrated tunnel diode (TD) and GaAs HEMT/HFET (High-Electron-Mobility Transistor/Heterogeneous FET) transistor amplifier, weakly coupled to a cavity in a strong transverse magnetic field. The TD-amplifier combination is suggested as a sensitive and simple technique to facilitate resonance detection within the cavity while maintaining excellent noise performance, whereas our proposed Halbach magnet array could serve as a low-noise and permanent solution replacing the conventional electromagnet scheme. We present some preliminary test results which demonstrate resonance detection from simulated test signals in a small optimal axion mass range with superior signal-to-noise ratios (SNR). Our suggested design also contains an overview of a simpler on-resonance dc signal read-out scheme replacing the complicated heterodyne read-out. We believe that all these factors and our propositions could possibly improve, or at least simplify, resonance detection and read-out in cavity-based DM particle detection searches (and other spectroscopy applications) and reduce the complications and associated costs, in addition to reducing electromagnetic interference and background.

  8. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long-standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    Science.gov (United States)

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (P<.001) for predicate-based searching than for keyword-based searching, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
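
A predicate-based vector space model treats each document as a bag of subject-predicate-object triples rather than keywords. The following is a minimal tf-idf/cosine sketch over triples, without the paper's adjusted tf-idf and boost function; the example documents are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Each document is a list of predicate triples; build tf-idf vectors
    keyed by triple (a plain tf-idf variant, not the paper's adjusted one)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    """Cosine similarity between two sparse tf-idf vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Because whole triples are matched, two documents must share a full subject-predicate-object statement (not just a word) to be considered similar.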

  10. Image copy-move forgery detection based on polar cosine transform and approximate nearest neighbor searching.

    Science.gov (United States)

    Li, Yuenan

    2013-01-10

    Copy-move is one of the most commonly used image tampering operations, where a part of the image content is copied and then pasted to another part of the same image. In order to make the forgery visually convincing and conceal its trace, the copied part may be subjected to post-processing operations such as rotation and blur. In this paper, we propose a copy-move forgery detection algorithm based on the polar cosine transform and approximate nearest neighbor searching. The algorithm starts by dividing the image into overlapping patches. Robust and compact features are extracted from patches by taking advantage of the rotationally-invariant and orthogonal properties of the polar cosine transform. Potential copy-move pairs are then detected by identifying the patches with similar features, which is formulated as approximate nearest neighbor searching and accomplished by means of locality-sensitive hashing (LSH). Finally, post-verifications are performed on potential pairs to filter out false matches and improve the accuracy of forgery detection. To sum up, the LSH-based similar patch identification and the post-verification methods are the two major novelties of the proposed work. Experimental results reveal that the proposed work can produce accurate detection results, and it exhibits high robustness to various post-processing operations. In addition, the LSH-based similar patch detection scheme is much more effective than the widely used lexicographical sorting. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
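
The patch-matching stage can be sketched with a much-simplified hash on quantized patch intensities, standing in for the paper's polar cosine transform features and LSH. The patch size, step, and quantization step below are arbitrary choices; patches whose hashes collide become candidate copy-move pairs for later verification.

```python
from collections import defaultdict

def find_copy_move(image, patch=4, step=2, q=8):
    """Group overlapping patches of a grayscale image (list of rows) by a
    quantized-intensity hash; colliding patches are candidate copy-move
    pairs (a coarse stand-in for PCT features + locality-sensitive hashing)."""
    h, w = len(image), len(image[0])
    buckets = defaultdict(list)
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            # Quantized patch contents act as the hash key.
            block = tuple(image[y + dy][x + dx] // q
                          for dy in range(patch) for dx in range(patch))
            buckets[block].append((y, x))
    # Emit all pairs of distinct positions whose hashes collide.
    return [(a, b) for locs in buckets.values() if len(locs) > 1
            for i, a in enumerate(locs) for b in locs[i + 1:]]
```

On an image where a 4×4 region is copied elsewhere, the two locations land in the same bucket and appear as a candidate pair; real systems then verify candidates geometrically to discard false matches in flat regions.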

  11. A novel ternary content addressable memory design based on resistive random access memory with high intensity and low search energy

    Science.gov (United States)

    Han, Runze; Shen, Wensheng; Huang, Peng; Zhou, Zheng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    A novel ternary content addressable memory (TCAM) design based on resistive random access memory (RRAM) is presented. Each TCAM cell consists of two parallel RRAM devices to both store and search for ternary data. The cell size of the proposed design is 8F2, enabling a ∼60× cell area reduction compared with the conventional static random access memory (SRAM) based implementation. Simulation results also show that the search delay and energy consumption of the proposed design for a 64-bit word search are 2 ps and 0.18 fJ/bit/search, respectively, at the 22 nm technology node, where significant improvements are achieved compared to previous works. The desired characteristics of RRAM for the implementation of a high-performance TCAM search chip are also discussed.

  12. Turn-Based War Chess Model and Its Search Algorithm per Turn

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2016-01-01

    Full Text Available War chess gaming has so far received insufficient attention but is a significant component of turn-based strategy games (TBS) and is studied in this paper. First, a common game model is proposed based on various existing war chess types. Based on the model, we propose a theoretical frame involving combinatorial optimization on the one hand and game tree search on the other hand. We also discuss a key problem, namely, that the branching factor of each turn in the game tree is huge. Then, we propose two algorithms for searching in one turn to solve the problem: (1) enumeration by order; (2) enumeration by recursion. The main difference between these two is the permutation method used: the former uses the dictionary sequence method, while the latter uses the recursive permutation method. Finally, we prove that both of these algorithms are optimal, and we analyze the difference between their efficiencies. An important factor is the total time taken for a unit to expand until it reaches its reachable positions; this factor, the total number of expansions that each unit makes over its reachable positions, is fixed. The conclusion is stated in terms of this factor: enumeration by recursion is better than enumeration by order in all situations.
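
The two enumeration orders for permutations within one turn can be sketched directly: the dictionary-sequence method as lexicographic enumeration, and the recursive method as fixing each element in turn and recursing on the rest. This is a generic sketch of the two permutation schemes, not the paper's war chess implementation.

```python
import itertools

def permutations_by_order(items):
    """Dictionary-sequence (lexicographic) enumeration of all permutations."""
    return list(itertools.permutations(sorted(items)))

def permutations_by_recursion(items):
    """Recursive enumeration: fix each element in turn, recurse on the rest."""
    if not items:
        return [()]
    result = []
    for i, x in enumerate(items):
        for rest in permutations_by_recursion(items[:i] + items[i + 1:]):
            result.append((x,) + rest)
    return result
```

Both schemes enumerate the same n! permutations; they differ only in the order in which permutations are produced, which is what the paper's efficiency analysis compares.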

  13. Confirmation bias in web-based search: a randomized online study on the effects of expert information and social tags on information search and evaluation.

    Science.gov (United States)

    Schweiger, Stefan; Oeberst, Aileen; Cress, Ulrike

    2014-03-26

    The public typically believes psychotherapy to be more effective than pharmacotherapy for depression treatments. This is not consistent with current scientific evidence, which shows that both types of treatment are about equally effective. The study investigates whether this bias towards psychotherapy guides online information search and whether the bias can be reduced by explicitly providing expert information (in a blog entry) and by providing tag clouds that implicitly reveal experts' evaluations. A total of 174 participants completed a fully automated Web-based study after we invited them via mailing lists. First, participants read two blog posts by experts that either challenged or supported the bias towards psychotherapy. Subsequently, participants searched for information about depression treatment in an online environment that provided more experts' blog posts about the effectiveness of treatments based on alleged research findings. These blogs were organized in a tag cloud; both psychotherapy tags and pharmacotherapy tags were popular. We measured tag and blog post selection, efficacy ratings of the presented treatments, and participants' treatment recommendation after information search. Participants demonstrated a clear bias towards psychotherapy (mean 4.53, SD 1.99) compared to pharmacotherapy (mean 2.73, SD 2.41; t173=7.67, P<.001), which guided biased information search and evaluation. This bias was significantly reduced, however, when participants were exposed to tag clouds with challenging popular tags. Participants facing popular tags challenging their bias (n=61) showed significantly less biased tag selection (F2,168=10.61, P<.001) than those facing bias-supporting tag clouds (n=56) or balanced tag clouds (n=57). Challenging (n=93) explicit expert information as presented in blog posts, compared to supporting expert information (n=81), decreased the bias in information search with regard to blog post selection (F1,168=4.32, P=.04, partial eta squared=0.025). No significant effects were found for treatment recommendation (Ps>.33). We conclude that the psychotherapy bias is most effectively

  14. Confirmation Bias in Web-Based Search: A Randomized Online Study on the Effects of Expert Information and Social Tags on Information Search and Evaluation

    Science.gov (United States)

    Oeberst, Aileen; Cress, Ulrike

    2014-01-01

    Background The public typically believes psychotherapy to be more effective than pharmacotherapy for depression treatments. This is not consistent with current scientific evidence, which shows that both types of treatment are about equally effective. Objective The study investigates whether this bias towards psychotherapy guides online information search and whether the bias can be reduced by explicitly providing expert information (in a blog entry) and by providing tag clouds that implicitly reveal experts’ evaluations. Methods A total of 174 participants completed a fully automated Web-based study after we invited them via mailing lists. First, participants read two blog posts by experts that either challenged or supported the bias towards psychotherapy. Subsequently, participants searched for information about depression treatment in an online environment that provided more experts’ blog posts about the effectiveness of treatments based on alleged research findings. These blogs were organized in a tag cloud; both psychotherapy tags and pharmacotherapy tags were popular. We measured tag and blog post selection, efficacy ratings of the presented treatments, and participants’ treatment recommendation after information search. Results Participants demonstrated a clear bias towards psychotherapy (mean 4.53, SD 1.99) compared to pharmacotherapy (mean 2.73, SD 2.41; t173=7.67, Pbiased information search and evaluation. This bias was significantly reduced, however, when participants were exposed to tag clouds with challenging popular tags. Participants facing popular tags challenging their bias (n=61) showed significantly less biased tag selection (F2,168=10.61, Pbias-supporting tag clouds (n=56) and balanced tag clouds (n=57). Challenging (n=93) explicit expert information as presented in blog posts, compared to supporting expert information (n=81), decreased the bias in information search with regard to blog post selection (F1,168=4.32, P=.04, partial eta

  15. Heuristic algorithms for joint optimization of unicast and anycast traffic in elastic optical network–based large–scale computing systems

    Directory of Open Access Journals (Sweden)

    Markowski Marcin

    2017-09-01

    Full Text Available In recent years, elastic optical networks have been perceived as a prospective choice for future optical networks due to better adjustment and utilization of optical resources than is the case with traditional wavelength division multiplexing networks. In the paper we investigate the elastic architecture as the communication network for distributed data centers. We address the problems of optimizing routing and spectrum assignment for large-scale computing systems based on an elastic optical architecture; in particular, we concentrate on anycast user-to-data-center traffic optimization. We assume that the computational resources of data centers are limited. For this offline problem we formulate an integer linear programming model and propose a few heuristics, including a meta-heuristic algorithm based on a tabu search method. We report computational results, presenting the quality of approximate solutions and the efficiency of the proposed heuristics, and we also analyze and compare some data center allocation scenarios.
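
    The tabu search meta-heuristic mentioned above follows a standard pattern: repeatedly move to the best neighbouring solution that is not on a short-term tabu list, so the search can climb out of local optima. A minimal generic sketch follows; the neighbourhood, cost function, and tenure below are illustrative toys, not the paper's actual routing-and-spectrum model.

```python
def tabu_search(init, neighbors, cost, tenure=7, iters=200):
    """Generic tabu search skeleton: move to the best non-tabu neighbour
    and keep recently visited solutions on a short tabu list (sketch)."""
    current = best = init
    tabu = [init]
    for _ in range(iters):
        # Candidate moves: all neighbours not currently tabu.
        cand = [s for s in neighbors(current) if s not in tabu]
        if not cand:
            break
        current = min(cand, key=cost)   # best admissible move, even if worse
        tabu.append(current)
        if len(tabu) > tenure:          # forget the oldest visited solution
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best

# Toy use: minimise a 1-D integer function with minimum at x = 13.
cost = lambda x: (x - 13) ** 2
neighbors = lambda x: [x - 2, x - 1, x + 1, x + 2]
print(tabu_search(0, neighbors, cost))  # 13
```

    In a real routing/spectrum-assignment setting the "solutions" would be assignments of paths and spectrum slices, and the neighbourhood would reroute one demand at a time; the skeleton stays the same.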

  16. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  17. Reliability Assessment of Power Generation Systems Using Intelligent Search Based on Disparity Theory

    Directory of Open Access Journals (Sweden)

    Athraa Ali Kadhem

    2017-03-01

    Full Text Available The reliability of generating system adequacy is evaluated based on the ability of the system to satisfy the load demand. In this paper, a novel optimization technique named the disparity evolution genetic algorithm (DEGA) is proposed for the reliability assessment of power generation. Disparity evolution is used to enhance the performance of the probability of mutation in a genetic algorithm (GA) by incorporating features from the GA paradigm into disparity theory. The DEGA is based on metaheuristic searching for the truncated sampling of the state space for the reliability assessment of power generation system adequacy. Two reliability test systems (IEEE-RTS-79 and IEEE-RTS-96) are used to demonstrate the effectiveness of the proposed algorithm. The simulation results show that the DEGA can generate a larger variety of individuals in an early stage of the next population generation. It is also able to estimate the reliability indices accurately.

  18. Path Searching Based Crease Detection for Large Scale Scanned Document Images

    Science.gov (United States)

    Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun

    2017-12-01

    Since large-size documents are usually folded for preservation, creases occur in the scanned images. In this paper, a crease detection method is proposed to locate crease pixels for further processing. According to the imaging process of contactless scanners, the shading on the two sides of a crease usually varies considerably. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, a possible crease path can be obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of creases in large-size document images.
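
    The final step, extracting a crease as a minimum-cost path, can be illustrated with a plain Dijkstra search over a 2-D cost grid. This is a generic sketch: the paper's cost image would come from its shading analysis, which is not reproduced here, and the toy grid below is made up.

```python
import heapq

def dijkstra_path_cost(cost, start, goal):
    """Dijkstra over a 2-D cost grid (4-neighbourhood): returns the minimum
    accumulated pixel cost from start to goal -- a sketch of crease-path search."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Toy cost image: low cost along the crease, high cost elsewhere.
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
print(dijkstra_path_cost(grid, (0, 0), (0, 2)))  # 7: the cheap detour wins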

  19. An Efficient Energy Constraint Based UAV Path Planning for Search and Coverage

    Directory of Open Access Journals (Sweden)

    German Gramajo

    2017-01-01

    Full Text Available A path planning strategy is presented for a search and coverage mission of a small UAV that maximizes the area covered subject to stored-energy and maneuverability constraints. The proposed formulation has a high level of autonomy, without requiring an exact choice of optimization parameters, and is appropriate for real-time implementation. The computed trajectory maximizes spatial coverage while closely satisfying terminal constraints on the position of the vehicle and minimizing the time of flight. Comparisons of this formulation to a time-constrained path planning algorithm show equivalent coverage performance but improved prediction of the overall mission duration and accuracy of the terminal position of the vehicle.

  20. SASAgent: an agent based architecture for search, retrieval and composition of scientific models.

    Science.gov (United States)

    Felipe Mendes, Luiz; Silva, Laryssa; Matos, Ely; Braga, Regina; Campos, Fernanda

    2011-07-01

    Scientific computing is a multidisciplinary field that goes beyond the use of the computer as a machine on which researchers write simple texts and presentations or store the analyses and results of their experiments. Because of the huge hardware/software resources invested in experiments and simulations, this new approach to scientific computing currently adopted by research groups is well represented by e-Science. This work proposes a new architecture based on intelligent agents to search, retrieve and compose simulation models generated in the context of research projects related to the biological domain. The SASAgent architecture is described as multi-tier, comprising three main modules, in which the CelO ontology, mainly represented by the semantic knowledge base, satisfies the requirements posed by e-Science projects. Preliminary results suggest that the proposed architecture is promising for meeting the requirements found in e-Science projects, mainly in the biological domain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Cuckoo search based optimal mask generation for noise suppression and enhancement of speech signal

    Directory of Open Access Journals (Sweden)

    Anil Garg

    2015-07-01

    Full Text Available In this paper, an effective noise suppression technique for the enhancement of speech signals using an optimized mask is proposed. Initially, the noisy speech signal is broken down into various time–frequency (TF) units and features are extracted by finding the Amplitude Magnitude Spectrogram (AMS). The signals are then classified into different classes based on quality ratio, to generate the initial set of solutions. Subsequently, the optimal mask for each class is generated based on the Cuckoo search algorithm. In the waveform synthesis stage, filtered waveforms are windowed, multiplied by the optimal mask value, and summed to obtain the enhanced target signal. The proposed technique was evaluated on various datasets, and its performance was compared with that of previous techniques using SNR. The results obtained prove the effectiveness of the proposed technique and its ability to suppress noise and enhance the speech signal.

  2. Infodemiology of status epilepticus: A systematic validation of the Google Trends-based search queries.

    Science.gov (United States)

    Bragazzi, Nicola Luigi; Bacigaluppi, Susanna; Robba, Chiara; Nardone, Raffaele; Trinka, Eugen; Brigo, Francesco

    2016-02-01

    People increasingly use Google looking for health-related information. We previously demonstrated that in English-speaking countries most people use this search engine to obtain information on status epilepticus (SE) definition, types/subtypes, and treatment. Now, we aimed at providing a quantitative analysis of SE-related web queries. This analysis represents an advancement, with respect to what was already previously discussed, in that the Google Trends (GT) algorithm has been further refined and correlational analyses have been carried out to validate the GT-based query volumes. Google Trends-based SE-related query volumes were well correlated with information concerning causes and pharmacological and nonpharmacological treatments. Google Trends can provide both researchers and clinicians with data on realities and contexts that are generally overlooked and underexplored by classic epidemiology. In this way, GT can foster new epidemiological studies in the field and can complement traditional epidemiological tools. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. An ontology-based search engine for protein-protein interactions.

    Science.gov (United States)

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching and ID matching are the most common search methods in a large database of protein-protein interactions. They are purely syntactic methods that retrieve the records in the database containing a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results, or none at all, despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using it to search protein-protein interactions efficiently and in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers lets a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.

  4. The U.S. Online News Coverage of Mammography Based on a Google News Search.

    Science.gov (United States)

    Young Lin, Leng Leng; Rosenkrantz, Andrew B

    2017-12-01

    To characterize online news coverage relating to mammography, including articles' stance toward screening mammography. Google News was used to search U.S. news sites over a 9-year period (2006-2015) based on the search terms "mammography" and "mammogram." The top 100 search results were recorded. Identified articles were manually reviewed. The top 100 news articles were from the following sources: local news outlet (50%), national news outlet (24%), nonimaging medical source (13%), entertainment or culture news outlet (6%), business news outlet (4%), peer-reviewed journal (1%), and radiology news outlet (1%). The most common major themes were the screening mammography controversy (29%), description of a new breast imaging technology (23%), dense breasts (11%), and promotion of a public screening initiative (11%). For the most recent year, article stance toward screening mammography was 59%, favorable; 16%, unfavorable; and 25%, neutral. After 2010, there was an abrupt shift in articles' stances from neutral to both favorable and unfavorable. A wide range of online news sources addressed a range of issues related to mammography. National, rather than local, news sites were more likely to focus on the screening controversy and more likely to take an unfavorable view. The controversial United States Preventive Services Task Force guidelines may have influenced articles to take a stance on screening mammography. Because such online news may impact public perception of the topic, and thus potentially guideline adherence, radiologists are encouraged to maintain awareness of this online coverage and to support the online dissemination of reliable and accurate information. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  5. A simple heuristic for Internet-based evidence search in primary care: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Eberbach A

    2016-08-01

    Full Text Available Andreas Eberbach,1 Annette Becker,1 Justine Rochon,2 Holger Finkemeler,1 Achim Wagner,3 Norbert Donner-Banzhoff1 1Department of Family and Community Medicine, Philipp University of Marburg, Marburg, Germany; 2Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany; 3Department of Sport Medicine, Justus-Liebig-University of Giessen, Giessen, Germany Background: General practitioners (GPs) are confronted with a wide variety of clinical questions, many of which remain unanswered. Methods: In order to assist GPs in finding quick, evidence-based answers, we developed a learning program (LP) with a short interactive workshop based on a simple three-step heuristic to improve their search and appraisal competence (SAC). We evaluated the LP's effectiveness with a randomized controlled trial (RCT). Participants (intervention group [IG], n=20; control group [CG], n=31) rated acceptance and satisfaction and also answered 39 knowledge questions to assess their SAC. We controlled for previous knowledge in content areas covered by the test. Results: Main outcome – SAC: within both groups, the pre–post test shows significant (P=0.00) improvements in correctness (IG 15% vs CG 11%) and confidence (32% vs 26%) in finding evidence-based answers. However, the SAC difference was not significant in the RCT. Other measures: most workshop participants rated “learning atmosphere” (90%), “skills acquired” (90%), and “relevancy to my practice” (86%) as good or very good. The LP recommendations were implemented by 67% of the IG, whereas 15% of the CG already conformed to LP recommendations spontaneously (odds ratio 9.6, P=0.00). After the literature search, the IG showed a (not significantly) higher satisfaction regarding “time spent” (IG 80% vs CG 65%), “quality of information” (65% vs 54%), and “amount of information” (53% vs 47%). Conclusion: Long-standing established GPs have a good SAC. Despite high acceptance, strong

  6. The development of a quality-and-multimedia-based health web information searching tool.

    Science.gov (United States)

    Chang, Hui-Jou; Chang, Polun

    2009-01-01

    People have not been satisfied with existing search tools for web health information. We built a prototype easy-to-use web information search tool using multimedia techniques, with an emphasis on result presentation and content-quality information. Instead of traditional search methods, we provide quality and webpage-source information about websites so that people can find useful information, and we present search results with graphs and animations to give people a better user experience.

  7. Balancing Efficiency and Effectiveness for Fusion-Based Search Engines in the "Big Data" Environment

    Science.gov (United States)

    Li, Jieyu; Huang, Chunlan; Wang, Xiuhong; Wu, Shengli

    2016-01-01

    Introduction: In the big data age, we have to deal with a tremendous amount of information, which can be collected from various types of sources. For information search systems such as Web search engines or online digital libraries, the collection of documents becomes larger and larger. For some queries, an information search system needs to…

  8. A Statistical Ontology-Based Approach to Ranking for Multiword Search

    Science.gov (United States)

    Kim, Jinwoo

    2013-01-01

    Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…

  9. Fuzzy Integral and Cuckoo Search Based Classifier Fusion for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    AYDIN, I.

    2018-02-01

    Full Text Available Human activity recognition is an important issue for sports analysis and health monitoring. Early recognition of human actions is used in areas such as the detection of criminal activities, fall detection, and action recognition in rehabilitation centers. In particular, the detection of falls in elderly people is very important for rapid intervention. Mobile phones can be used for action recognition with their built-in accelerometer sensor. In this study, a new combined method based on fuzzy integral and cuckoo search is proposed for classifying human actions. The signals are acquired from the three axes of a mobile phone's acceleration sensor, and features are extracted by applying signal processing methods. Our approach utilizes linear discriminant analysis (LDA), support vector machine (SVM), and neural network (NN) techniques and aggregates their outputs using a fuzzy integral. The cuckoo search method adjusts the parameters for the assignment of optimal confidence levels to the classifiers. The experimental results show that our model provides better performance than the individual classifiers. In addition, appropriate selection of the confidence levels improves the performance of the combined classifiers.

  10. Reduction rules-based search algorithm for opportunistic replacement strategy of multiple life-limited parts

    Directory of Open Access Journals (Sweden)

    Xuyun FU

    2018-01-01

    Full Text Available The opportunistic replacement of multiple Life-Limited Parts (LLPs) is a problem widely existing in industry. The replacement strategy for LLPs has a great impact on the total maintenance cost of much equipment. This article focuses on finding a quick and effective algorithm for this problem. To improve algorithm efficiency, six reduction rules are suggested from the perspectives of solution feasibility, determination of the replacement of LLPs, determination of the maintenance occasion, and solution optimality. Based on these six reduction rules, a search algorithm is proposed that can identify one or several optimal solutions. A numerical experiment shows that the six reduction rules are effective, and the time consumed by the algorithm is less than 38 s if the total life of the equipment is shorter than 55000 and the number of LLPs is less than 11. A specific case shows that the algorithm can obtain, within 10 s, optimal solutions that are much better than the result of the traditional method, and that it can provide support for determining to-be-replaced LLPs when defining the maintenance workscope of an aircraft engine. Therefore, the algorithm is applicable to engineering applications concerning the opportunistic replacement of multiple LLPs in aircraft engines.

  11. Stardust@home: An Interactive Internet-based Search for Interstellar Dust

    Science.gov (United States)

    Mendez, B. J.; Westphal, A. J.; Butterworth, A. L.; Craig, N.

    2006-12-01

    On January 15, 2006, NASA's Stardust mission returned to Earth after nearly seven years in interplanetary space. During its journey, Stardust encountered comet Wild 2, collecting dust particles from it in a special material called aerogel. At two other times in the mission, aerogel collectors were also opened to collect interstellar dust. The Stardust Interstellar Dust Collector is being scanned by an automated microscope at the Johnson Space Center. There are approximately 700,000 fields of view needed to cover the entire collector, but we expect only a few dozen total grains of interstellar dust were captured within it. Finding these particles is a daunting task. We have recruited many thousands of volunteers from the public to aid in the search for these precious pieces of space dust trapped in the collectors. We call the project Stardust@home. Through Stardust@home, volunteers from the public search fields of view from the Stardust aerogel collector using a web-based Virtual Microscope. Volunteers who discover interstellar dust particles have the privilege of naming them. The interest and response to this project has been extraordinary. Many people from all walks of life are very excited about space science and eager to volunteer their time to contribute to a real research project such as this. We will discuss the progress of the project and the education and outreach activities being carried out for it.

  12. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

    Full Text Available Bionic technology provides new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of a microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
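
    Time delay estimation of the kind used by the hearing robots can be sketched as a cross-correlation peak search: the lag at which two microphone signals correlate best gives the inter-sensor delay. This is the generic TDE recipe, not the paper's exact implementation; the sampling rate and test signal below are made up.

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Estimate how much y lags x (in seconds) from the peak of the
    full cross-correlation -- the classic time delay estimation step."""
    corr = np.correlate(y, x, mode="full")
    lag = int(np.argmax(corr)) - (len(x) - 1)
    return lag / fs

fs = 8000                                        # 8 kHz sampling (illustrative)
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                    # signal at microphone 1
y = np.concatenate([np.zeros(40), x])[:1000]     # same signal, 40 samples later
print(estimate_delay(x, y, fs))                  # 0.005 (= 40 / 8000 s)
```

    With two or more such pairwise delays and the known microphone geometry, the source direction can then be triangulated.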

  13. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc.; we use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of a single such algorithm may not always be effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms unite various algorithms, or identical algorithms with various values of their free parameters; thus the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid algorithm for global optimization based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and the prospects for its development.
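
    The harmony search half of such a hybrid centres on one improvisation step: each new variable is taken from harmony memory with probability HMCR, pitch-adjusted with probability PAR, or drawn uniformly at random. A minimal sketch, with standard HS parameter names but illustrative values and a toy memory (the PSO coupling of the paper's hybrid is not shown):

```python
import random

def improvise(memory, bounds, hmcr=0.9, par=0.3, bw=0.05):
    """One harmony-search improvisation: build a new solution variable by
    variable from memory (prob. HMCR), pitch-adjusted by +/- bw (prob. PAR),
    or uniformly at random within the variable's bounds."""
    new = []
    for j, (lo, hi) in enumerate(bounds):
        if random.random() < hmcr:
            x = random.choice(memory)[j]            # memory consideration
            if random.random() < par:
                x = x + random.uniform(-bw, bw)     # pitch adjustment
                x = min(hi, max(lo, x))             # clip to bounds
        else:
            x = random.uniform(lo, hi)              # random selection
        new.append(x)
    return new

random.seed(1)
memory = [[0.2, 0.8], [0.4, 0.6]]    # two stored harmonies, two variables
bounds = [(0.0, 1.0), (0.0, 1.0)]
print(improvise(memory, bounds))
```

    In a PSO–HS hybrid, harmonies improvised this way would compete with particles updated by the PSO velocity rule, so each mechanism covers the other's weaknesses.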

  14. Recurrent neural network-based modeling of gene regulatory network using elephant swarm water search algorithm.

    Science.gov (United States)

    Mandal, Sudip; Saha, Goutam; Pal, Rajat Kumar

    2017-08-01

    Correct inference of the genetic regulations inside a cell from biological databases such as time-series microarray data is one of the greatest challenges of the post-genomic era for biologists and researchers. The Recurrent Neural Network (RNN) is one of the most popular and simplest approaches to model the dynamics and to infer correct dependencies among genes. Inspired by the behavior of social elephants, we propose a new metaheuristic, namely the Elephant Swarm Water Search Algorithm (ESWSA), to infer Gene Regulatory Networks (GRNs). This algorithm is mainly based on the water search strategy of intelligent and social elephants during drought, utilizing different types of communication techniques. Initially, the algorithm is tested against benchmark small- and medium-scale artificial genetic networks, with and without the presence of different noise levels, and its efficiency is observed in terms of parametric error, minimum fitness value, execution time, accuracy of prediction of true regulations, etc. Next, the proposed algorithm is tested against real gene expression data of the Escherichia coli SOS network, and the results are compared with other state-of-the-art optimization methods. The experimental results suggest that ESWSA is very efficient for the GRN inference problem and performs better than other methods in many ways.

  15. CLUSTERING CATEGORICAL DATA USING k-MODES BASED ON CUCKOO SEARCH OPTIMIZATION ALGORITHM

    Directory of Open Access Journals (Sweden)

    Lakshmi K

    2017-10-01

    Full Text Available Cluster analysis is the unsupervised learning technique that finds interesting patterns in data objects without knowing class labels. Many real-world datasets consist of categorical data. For example, social media analysis may involve categorical attributes such as gender (male or female). The k-modes clustering algorithm is the most widely used for grouping categorical data, because it is easy to implement and efficient in handling large amounts of data. However, due to its random selection of initial centroids, it provides only a locally optimal solution. A number of optimization algorithms have been developed to obtain a globally optimal solution; Cuckoo Search is a population-based metaheuristic optimization algorithm that provides a global optimum. Methods: In this paper, the k-modes clustering algorithm is combined with the Cuckoo Search algorithm to obtain a globally optimal solution. Results: Experiments are conducted with benchmark datasets, and the results are compared with k-modes and with Particle Swarm Optimization combined with k-modes to prove the efficiency of the proposed algorithm.
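
    The two k-modes primitives that any such hybrid optimizes around are easy to state: the simple-matching dissimilarity between categorical objects and the per-attribute majority update of a cluster's mode. A small sketch with toy data; the cuckoo search layer that chooses the initial modes is not shown here.

```python
from collections import Counter

def dissimilarity(a, b):
    """Simple-matching distance used by k-modes: number of attributes
    on which the two categorical objects disagree."""
    return sum(x != y for x, y in zip(a, b))

def update_mode(cluster):
    """New mode of a cluster: per attribute, the most frequent category."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

cluster = [("red", "small"), ("red", "medium"), ("blue", "medium")]
print(update_mode(cluster))                                 # ('red', 'medium')
print(dissimilarity(("red", "small"), ("blue", "medium")))  # 2
```

    k-modes then alternates assignment (nearest mode by this distance) and mode update, exactly as k-means does with Euclidean distance and centroids; the metaheuristic only supplies better starting modes.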

  16. FACC: A Novel Finite Automaton Based on Cloud Computing for the Multiple Longest Common Subsequences Search

    Directory of Open Access Journals (Sweden)

    Yanni Li

    2012-01-01

    Full Text Available Searching for the multiple longest common subsequences (MLCS) has significant applications in areas such as bioinformatics, information processing, and data mining. Although a few parallel MLCS algorithms have been proposed, their efficiency and effectiveness are not satisfactory given the increasing complexity and size of biological data. To overcome the shortcomings of the existing MLCS algorithms, and considering that the MapReduce parallel framework of cloud computing is a promising technology for cost-effective high-performance parallel computing, a novel finite automaton (FA) based on cloud computing, called FACC, is proposed under the MapReduce parallel framework so as to provide a more efficient and effective general parallel MLCS algorithm. FACC adopts the ideas of matched pairs and finite automata, preprocessing the sequences and constructing successor tables and a common-subsequence finite automaton to search for the MLCS. Simulation experiments on a set of benchmarks from both real DNA and amino acid sequences have been conducted, and the results show that the proposed FACC algorithm outperforms the current leading parallel MLCS algorithm FAST-MLCS.

  17. Biclustering of Gene Expression Data by Correlation-Based Scatter Search

    Science.gov (United States)

    2011-01-01

    Background The analysis of data generated by microarray technology is very useful for understanding how genetic information becomes functional gene products. Biclustering algorithms can determine a group of genes which are co-expressed under a set of experimental conditions. Recently, new biclustering methods based on metaheuristics have been proposed. Most of them use the Mean Squared Residue as a merit function, but patterns that are interesting and relevant from a biological point of view, such as shifting and scaling patterns, may not be detected using this measure. It is important to discover this type of pattern, since genes commonly present similar behavior although their expression levels vary in different ranges or magnitudes. Methods Scatter Search is an evolutionary technique that is based on the evolution of a small set of solutions chosen according to quality and diversity criteria. This paper presents a Scatter Search with the aim of finding biclusters from gene expression data. In this algorithm the proposed fitness function is based on the linear correlation among genes, to detect shifting and scaling patterns, and an improvement method is included in order to select only positively correlated genes. Results The proposed algorithm has been tested on three real data sets, the Yeast Cell Cycle dataset, the human B-cell lymphoma dataset and the Yeast Stress dataset, finding a remarkable number of biclusters with shifting and scaling patterns. In addition, the performance of the proposed method and fitness function is compared to that of CC, OPSM, ISA, BiMax, xMotifs and Samba using the Gene Ontology Database. PMID:21261986

  18. Reducing a Knowledge-Base Search Space When Data Are Missing

    Science.gov (United States)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
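    The 1 + 2^n search-space size quoted above can be checked directly; the growth is what makes the unreduced problem intractable even for modest rule counts.

```python
# The 1 + 2**n scenario count from the text: for n = 16 rules this is 65,537.
def scenario_count(n_rules):
    return 1 + 2 ** n_rules

for n in (4, 8, 16, 32):
    print(n, scenario_count(n))
# n = 16 yields 65537, matching the figure quoted in the record.
```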

  19. Development and empirical user-centered evaluation of semantically-based query recommendation for an electronic health record search engine.

    Science.gov (United States)

    Hanauer, David A; Wu, Danny T Y; Yang, Lei; Mei, Qiaozhu; Murkowski-Steffy, Katherine B; Vydiswaran, V G Vinod; Zheng, Kai

    2017-03-01

    The utility of biomedical information retrieval environments can be severely limited when users lack expertise in constructing effective search queries. To address this issue, we developed a computer-based query recommendation algorithm that suggests semantically interchangeable terms based on an initial user-entered query. In this study, we assessed the value of this approach, which has broad applicability in biomedical information retrieval, by demonstrating its application as part of a search engine that facilitates retrieval of information from electronic health records (EHRs). The query recommendation algorithm utilizes MetaMap to identify medical concepts from search queries and indexed EHR documents. Synonym variants from UMLS are used to expand the concepts along with a synonym set curated from historical EHR search logs. The empirical study involved 33 clinicians and staff who evaluated the system through a set of simulated EHR search tasks. User acceptance was assessed using the widely used technology acceptance model. The search engine's performance was rated consistently higher with the query recommendation feature turned on vs. off. The relevance of computer-recommended search terms was also rated high, and in most cases the participants had not thought of these terms on their own. The questions on perceived usefulness and perceived ease of use received overwhelmingly positive responses. A vast majority of the participants wanted the query recommendation feature to be available to assist in their day-to-day EHR search tasks. Challenges persist for users to construct effective search queries when retrieving information from biomedical documents including those from EHRs. This study demonstrates that semantically-based query recommendation is a viable solution to addressing this challenge. Published by Elsevier Inc.

  20. Trail-Based Search for Efficient Event Report to Mobile Actors in Wireless Sensor and Actor Networks.

    Science.gov (United States)

    Xu, Zhezhuang; Liu, Guanglun; Yan, Haotian; Cheng, Bin; Lin, Feilong

    2017-10-27

    In wireless sensor and actor networks, when an event is detected, the sensor node needs to transmit an event report to inform the actor. Since the actor moves in the network to execute missions, its location is always unavailable to the sensor nodes. A popular solution is a search strategy that can forward the data to a node without its location information. However, most existing works have not considered the mobility of the node, and thus generate significant energy consumption or transmission delay. In this paper, we propose the trail-based search (TS) strategy, which takes advantage of the actor's mobility to improve search efficiency. The main idea of TS is that, as the actor moves in the network, it leaves a trail composed of continuous footprints. The search packet with the event report is transmitted through the network to search for the actor or its footprints. Once an effective footprint is discovered, the packet is forwarded along the trail until it is received by the actor. Moreover, we derive the condition that guarantees trail connectivity, and propose a redundancy reduction scheme based on TS (TS-R) to reduce the nontrivial transmission redundancy generated by the trail. Theoretical and numerical analysis is provided to prove the efficiency of TS. Compared with the well-known expanding ring search (ERS), TS significantly reduces the energy consumption and search delay.

  1. SHOP: receptor-based scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Liljefors, Tommy; Sørensen, Morten D

    2009-01-01

    A new field-derived 3D method for receptor-based scaffold hopping, implemented in the software SHOP, is presented. Information from a protein-ligand complex is utilized to substitute a fragment of the ligand with another fragment from a database of synthetically accessible scaffolds. A GRID...

  2. Function Optimization and Parameter Performance Analysis Based on Gravitation Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-12-01

    Full Text Available The gravitational search algorithm (GSA) is a swarm intelligence optimization algorithm based on the law of gravitation. Parameter initialization has an important influence on the global optimization ability of all swarm intelligence optimization algorithms. From the basic principle of GSA, its convergence rate is determined by the gravitational constant and the acceleration of the particles. The optimization performance is verified by simulation experiments on six typical test functions. The simulation results show that the convergence speed of GSA is quite sensitive to the setting of the algorithm parameters, and that these parameters can be tuned to improve the algorithm's convergence velocity and the accuracy of the solutions.

  3. PSO-based support vector machine with cuckoo search technique for clinical disease diagnoses.

    Science.gov (United States)

    Liu, Xiaoyong; Fu, Hui

    2014-01-01

    In this work, disease diagnosis is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes the support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: first, a CS-based approach for parameter optimization of the SVM is developed to find good initial kernel-function parameters; then PSO is applied to continue SVM training and find the best SVM parameters. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. Therefore, we can conclude that our proposed method is very efficient compared to the previously reported algorithms.
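    The CS stage above can be sketched as standard cuckoo search with Lévy flights (Yang and Deb's scheme). To keep the sketch self-contained, a toy quadratic stands in for the SVM cross-validation error over the kernel parameters (C, gamma); the objective, bounds, and settings are ours, not the paper's.

```python
# Hedged cuckoo search sketch; the objective is a stand-in for CV error.
import math
import random

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, bounds, n_nests=15, iters=300, pa=0.25, seed=7):
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda p: [min(hi, max(lo, c)) for c in p]
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    for _ in range(iters):
        i = min(range(n_nests), key=fit.__getitem__)   # current best nest
        for k in range(n_nests):
            step = 0.01 * levy_step(rng)
            cand = clip([c + step * (c - b) for c, b in zip(nests[k], nests[i])])
            fc = f(cand)
            j = rng.randrange(n_nests)                 # compare with a random nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        for k in range(n_nests):                       # abandon a fraction pa
            if rng.random() < pa and k != i:
                nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[k] = f(nests[k])
    i = min(range(n_nests), key=fit.__getitem__)
    return nests[i], fit[i]

# Stand-in for SVM CV error, with its minimum at (C, gamma) = (1.0, 0.1)
err = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.1) ** 2
pbest, ebest = cuckoo_search(err, dim=2, bounds=(0.0, 10.0))
print(pbest, ebest)
```

    In the paper's pipeline the returned point would seed the PSO stage rather than being used directly.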

  4. Multiple-optima search method based on a metamodel and mathematical morphology

    Science.gov (United States)

    Li, Yulin; Liu, Li; Long, Teng; Chen, Xin

    2016-03-01

    This article investigates a non-population-based optimization method using mathematical morphology and the radial basis function (RBF) for multimodal, computationally intensive functions. To obtain several feasible solutions, mathematical morphology is employed to search promising regions. Sequential quadratic programming is used to exploit the candidate areas and determine the exact positions of the potential optima. To relieve the computational burden, metamodelling techniques are employed. Because the RBF metamodel varies considerably between iterations, the positions of the potential optima move during optimization; a tolerance is therefore introduced to match corresponding potential optima between the latest two iterations. Furthermore, to ensure that all the output minima are global or local optima, an optimality judgement criterion is introduced.
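    The RBF metamodel step can be sketched independently of the morphology search: fit a Gaussian radial basis interpolant to a handful of expensive samples, then query it cheaply. The solver and names below are ours; the key property used by such methods is that the metamodel reproduces the sampled values exactly.

```python
# Sketch of a Gaussian RBF metamodel: fit by solving Phi w = y, then predict.
import math

def rbf_fit(xs, ys, eps=1.0):
    """Solve Phi w = y with Phi_ij = exp(-(eps * |x_i - x_j|)^2)."""
    n = len(xs)
    phi = [[math.exp(-(eps * math.dist(xs[i], xs[j])) ** 2) for j in range(n)]
           for i in range(n)]
    # Gaussian elimination with partial pivoting on the augmented matrix
    a = [row[:] + [y] for row, y in zip(phi, ys)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            fctr = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= fctr * a[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (a[r][n] - sum(a[r][c] * w[c] for c in range(r + 1, n))) / a[r][r]
    return w

def rbf_predict(xs, w, x, eps=1.0):
    return sum(wi * math.exp(-(eps * math.dist(xi, x)) ** 2)
               for wi, xi in zip(w, xs))

# Interpolation property: the metamodel reproduces the sampled values.
samples = [(0.0,), (0.5,), (1.0,), (1.5,), (2.0,)]
values = [math.sin(3 * x[0]) for x in samples]
w = rbf_fit(samples, values)
print(rbf_predict(samples, w, (1.0,)))  # ~ sin(3.0)
```

    The optimization method then runs its cheap search (morphology plus SQP) against `rbf_predict` instead of the expensive function.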

  5. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    Science.gov (United States)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be a highly efficient algorithm for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. CSA gives better results than the genetic algorithm (GA) both without and with the SVC.

  6. Application of 3D Zernike descriptors to shape-based ligand similarity searching

    Directory of Open Access Journals (Sweden)

    Venkatraman Vishwesh

    2009-12-01

    Full Text Available Abstract Background The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. Results In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. Performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under conditions that simulate actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparably to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and failure of the shape-based methods in specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. Conclusion The 3DZD has a unique ability for fast comparison of the three-dimensional shape of compounds. The examples analyzed illustrate the advantages of, and the room for improvement in, the 3DZD.

  7. Semi-supervised weighted kernel clustering based on gravitational search for fault diagnosis.

    Science.gov (United States)

    Li, Chaoshun; Zhou, Jianzhong

    2014-09-01

    Supervised learning methods, such as the support vector machine (SVM), have been widely applied to diagnosing known faults; however, such methods fail to work correctly when a new or unknown fault occurs. Traditional unsupervised kernel clustering can be used for unknown fault diagnosis, but it cannot make use of historical classification information to improve diagnosis accuracy. In this paper, a semi-supervised kernel clustering model is designed to diagnose known and unknown faults. First, a novel semi-supervised weighted kernel clustering algorithm based on gravitational search (SWKC-GS) is proposed for clustering a dataset composed of labeled and unlabeled fault samples. The clustering model of SWKC-GS is defined based on the misclassification rate of the labeled samples and a fuzzy clustering index over the whole dataset. The gravitational search algorithm (GSA) is used to solve the clustering model, with the cluster centers, feature weights, and kernel-function parameter selected as optimization variables. New fault samples are then identified and diagnosed by calculating the weighted kernel distance between them and the fault cluster centers. If the fault samples are unknown, they are added to the historical dataset and SWKC-GS is used to partition the mixed dataset and update the clustering results for diagnosing the new fault. In experiments, the proposed method has been applied to fault diagnosis for rotatory bearings, and SWKC-GS has been compared not only with traditional clustering methods but also with SVM and neural networks for known fault diagnosis. In addition, the proposed method has been applied to unknown fault diagnosis. The results show the effectiveness of the proposed method in achieving the expected diagnosis accuracy for both known and unknown faults of rotatory bearings. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
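    The diagnosis step described above (assign a new sample to the fault cluster with the nearest center under a feature-weighted kernel distance) can be sketched briefly. The centers, weights, and fault names below are toy values for illustration; in the paper they are learned by the GSA-driven clustering.

```python
# Hedged sketch of weighted-kernel-distance diagnosis. Toy centres/weights.
import math

def wkernel(x, y, w, sigma=1.0):
    s = sum(wi * (a - b) ** 2 for wi, a, b in zip(w, x, y))
    return math.exp(-s / (2 * sigma ** 2))

def kernel_distance(x, c, w, sigma=1.0):
    # distance in feature space: k(x,x) - 2 k(x,c) + k(c,c) = 2 - 2 k(x,c)
    return math.sqrt(2.0 - 2.0 * wkernel(x, c, w, sigma))

def diagnose(x, centers, w):
    dists = {name: kernel_distance(x, c, w) for name, c in centers.items()}
    return min(dists, key=dists.get)

centers = {"inner-race fault": (0.8, 0.1), "outer-race fault": (0.1, 0.9)}
weights = (1.0, 0.5)   # feature weights, learned by GSA in the paper's method
print(diagnose((0.7, 0.2), centers, weights))  # "inner-race fault"
```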

  8. Development and Testing of a Literature Search Protocol for Evidence Based Nursing: An Applied Student Learning Experience

    OpenAIRE

    Andy Hickner; Christopher R. Friese; Margaret Irwin

    2011-01-01

    Objective – The study aimed to develop a search protocol and evaluate reviewers' satisfaction with an evidence-based practice (EBP) review by embedding a library science student in the process.Methods – The student was embedded in one of four review teams overseen by a professional organization for oncology nurses (ONS). A literature search protocol was developed by the student following discussion and feedback from the review team. Organization staff provided process feedback. Reviewers from...

  9. A CULTURAL DIMENSION OF THE LANDSCAPE: ENVIRONMENTAL HISTORY AND THE BIOGEOGRAPHICAL ASPECTS OF A TABOO

    Directory of Open Access Journals (Sweden)

    Rita de Cássia de Paula Freitas Svorc

    2012-12-01

    Full Text Available In the structure and composition of the Atlantic Forest, especially in areas of secondary forest, the presence of large trees of the genus Ficus (Moraceae) is notable; they are spared from felling by traditional populations for cultural reasons. The structure of stretches of secondary forest located in the south of the State of Rio de Janeiro was determined in the vicinity of large fig trees by means of 20 x 5 m plots. In three areas a total of 105 tree species was sampled, with the fig trees reaching the highest cover value, accounting on average for 43.4% of the basal area. The presence of these specimens can be attributed to a shared cultural taboo, spread across extensive regions of the country, which imposes changes on the forest landscape.

  10. Home-Explorer: Ontology-Based Physical Artifact Search and Hidden Object Detection System

    Directory of Open Access Journals (Sweden)

    Bin Guo

    2008-01-01

    Full Text Available A new system named Home-Explorer that searches for and finds physical artifacts in a smart indoor environment is proposed. The view on which it is based is artifact-centered and uses sensors attached to everyday artifacts (called smart objects) in the real world. This paper makes two main contributions. First, it addresses the robustness of the embedded sensors, which is seldom discussed in previous smart-artifact research. Because sensors may sometimes be broken or fail to work under certain conditions, smart objects become hidden ones; however, current systems provide no mechanism to detect and manage objects when this problem occurs. Second, there is no common context infrastructure for building smart-artifact systems, which makes it difficult for separately developed applications to interact with each other and hard for them to share and reuse knowledge. Unlike previous systems, Home-Explorer builds on an ontology-based knowledge infrastructure named Sixth-Sense, which makes it easy for the system to interact with other applications or agents also based on this ontology. The hidden object problem is also reflected in our ontology, which enables Home-Explorer to deal with both smart objects and hidden objects. A set of rules for deducing an object's status or location information and for locating hidden objects is described and evaluated.

  11. Fine-grained Database Field Search Using Attribute-Based Encryption for E-Healthcare Clouds.

    Science.gov (United States)

    Guo, Cheng; Zhuang, Ruhan; Jie, Yingmo; Ren, Yizhi; Wu, Ting; Choo, Kim-Kwang Raymond

    2016-11-01

    An effectively designed e-healthcare system can significantly enhance the quality of access and experience of healthcare users, including facilitating medical and healthcare providers in ensuring a smooth delivery of services. Ensuring the security of patients' electronic health records (EHRs) in the e-healthcare system is an active research area. EHRs may be outsourced to a third party, such as a community healthcare cloud service provider, for storage as a cost-saving measure. Generally, encrypting the EHRs when they are stored in the system (i.e. data-at-rest) or prior to outsourcing the data is used to ensure data confidentiality. The searchable encryption (SE) scheme is a promising technique that can ensure the protection of private information without compromising performance. In this paper, we propose a novel framework for controlling access to EHRs stored in semi-trusted cloud servers (e.g. a private cloud or a community cloud). To achieve fine-grained access control for EHRs, we leverage the ciphertext-policy attribute-based encryption (CP-ABE) technique to encrypt tables published by hospitals, including patients' EHRs, and the table is stored in the database with the primary key being the patient's unique identity. Our framework enables different users with different privileges to search on different database fields. Differing from previous attempts to secure outsourced data, we emphasize control over searches of the fields within the database. We demonstrate the utility of the scheme by evaluating it using datasets from the University of California, Irvine.

  12. Development and Testing of a Literature Search Protocol for Evidence Based Nursing: An Applied Student Learning Experience

    Directory of Open Access Journals (Sweden)

    Andy Hickner

    2011-09-01

    Full Text Available Objective – The study aimed to develop a search protocol and evaluate reviewers' satisfaction with an evidence-based practice (EBP review by embedding a library science student in the process.Methods – The student was embedded in one of four review teams overseen by a professional organization for oncology nurses (ONS. A literature search protocol was developed by the student following discussion and feedback from the review team. Organization staff provided process feedback. Reviewers from both case and control groups completed a questionnaire to assess satisfaction with the literature search phases of the review process. Results – A protocol was developed and refined for use by future review teams. The collaboration and the resulting search protocol were beneficial for both the student and the review team members. The questionnaire results did not yield statistically significant differences regarding satisfaction with the search process between case and control groups. Conclusions – Evidence-based reviewers' satisfaction with the literature searching process depends on multiple factors and it was not clear that embedding an LIS specialist in the review team improved satisfaction with the process. Future research with more respondents may elucidate specific factors that may impact reviewers' assessment.

  13. ASCOT: a text mining-based web-service for efficient search and assisted creation of clinical trials.

    Science.gov (United States)

    Korkontzelos, Ioannis; Mu, Tingting; Ananiadou, Sophia

    2012-04-30

    Clinical trials are mandatory protocols describing medical research on humans and are among the most valuable sources of medical practice evidence. Searching for trials relevant to some query is laborious due to the immense number of existing protocols. Apart from search, writing new trials includes composing detailed eligibility criteria, which might be time-consuming, especially for new researchers. In this paper we present ASCOT, an efficient search application customised for clinical trials. ASCOT uses text mining and data mining methods to enrich clinical trials with metadata that in turn serve as effective tools to narrow down searches. In addition, ASCOT integrates a component for recommending eligibility criteria based on a set of selected protocols.

  14. ASCOT: a text mining-based web-service for efficient search and assisted creation of clinical trials

    Science.gov (United States)

    2012-01-01

    Clinical trials are mandatory protocols describing medical research on humans and are among the most valuable sources of medical practice evidence. Searching for trials relevant to some query is laborious due to the immense number of existing protocols. Apart from search, writing new trials includes composing detailed eligibility criteria, which might be time-consuming, especially for new researchers. In this paper we present ASCOT, an efficient search application customised for clinical trials. ASCOT uses text mining and data mining methods to enrich clinical trials with metadata that in turn serve as effective tools to narrow down searches. In addition, ASCOT integrates a component for recommending eligibility criteria based on a set of selected protocols. PMID:22595088

  15. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  16. Support patient search on pathology reports with interactive online learning based data extraction

    Directory of Open Access Journals (Sweden)

    Shuai Zheng

    2015-01-01

    Full Text Available Background: Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. Methods: We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on extracted structured data. Results: We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographical data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of tests.

  17. Support patient search on pathology reports with interactive online learning based data extraction.

    Science.gov (United States)

    Zheng, Shuai; Lu, James J; Appin, Christina; Brat, Daniel; Wang, Fusheng

    2015-01-01

    Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on extracted structured data. We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographical data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of tests. 
Extracting data from pathology reports could enable

  18. Fast neutron counting in a mobile, trailer-based search platform

    Directory of Open Access Journals (Sweden)

    Hayward Jason P.

    2017-01-01

    Full Text Available Trailer-based search platforms for detection of radiological and nuclear threats are often based upon coded aperture gamma-ray imaging, because this method can be rendered insensitive to local variations in gamma background while still localizing the source well. Since gamma source emissions are rather easily shielded, in this work we consider the addition of fast neutron counting to a mobile platform for detection of sources containing Pu. A proof-of-concept system capable of combined gamma and neutron coded-aperture imaging was built inside of a trailer and used to detect a 252Cf source while driving along a roadway. Neutron detector types employed included EJ-309 in a detector plane and EJ-299-33 in a front mask plane. While the 252Cf gamma emissions were not readily detectable while driving by at 16.9 m standoff, the neutron emissions can be detected while moving. Mobile detection performance for this system and a scaled-up system design are presented, along with implications for threat sensing.

  19. Augmented Reality for Searching Potential Assets in Medan using GPS based Tracking

    Science.gov (United States)

    Muchtar, M. A.; Syahputra, M. F.; Syahputra, N.; Ashrafia, S.; Rahmat, R. F.

    2017-01-01

    Every city is required to introduce its variety of potential assets so that people know how to utilize or develop their area. Potential assets include infrastructure, facilities, people, communities, organizations, and customs that affect the character and way of life in Medan. Due to a lack of socialization and information, most people in Medan know only a few of these assets. Many mobile apps now provide location search and mapping features that can be used to find potential assets in the user's area. However, the available information, such as text and digital maps, often does not help the user clearly and dynamically. Therefore, Augmented Reality technology, which can overlay information on a real-world view, is implemented in this research so that the information is more interactive and easily understood by the user. The technology is implemented in a mobile app using GPS-based tracking, with the coordinates of the user's smartphone defined as a marker, so that it can help people dynamically and easily find the locations of potential assets in the nearest area based on the direction of the user's camera view.

  20. Fast neutron counting in a mobile, trailer-based search platform

    Science.gov (United States)

    Hayward, Jason P.; Sparger, John; Fabris, Lorenzo; Newby, Robert J.

    2017-12-01

    Trailer-based search platforms for detection of radiological and nuclear threats are often based upon coded aperture gamma-ray imaging, because this method can be rendered insensitive to local variations in gamma background while still localizing the source well. Since gamma source emissions are rather easily shielded, in this work we consider the addition of fast neutron counting to a mobile platform for detection of sources containing Pu. A proof-of-concept system capable of combined gamma and neutron coded-aperture imaging was built inside of a trailer and used to detect a 252Cf source while driving along a roadway. Neutron detector types employed included EJ-309 in a detector plane and EJ-299-33 in a front mask plane. While the 252Cf gamma emissions were not readily detectable while driving by at 16.9 m standoff, the neutron emissions can be detected while moving. Mobile detection performance for this system and a scaled-up system design are presented, along with implications for threat sensing.

  1. A pattern-based nearest neighbor search approach for promoter prediction using DNA structural profiles.

    Science.gov (United States)

    Gan, Yanglan; Guan, Jihong; Zhou, Shuigeng

    2009-08-15

    Identification of core promoters is a key clue in understanding gene regulation. However, due to the diverse nature of promoter sequences, the accuracy of existing prediction approaches for promoters not related to CpG islands (CGIs) is not as high as that for CGI-related promoters. This consequently leads to low genome-wide promoter prediction accuracy. In this article, we first systematically analyze the similarities and differences between the two types of promoters (CGI- and non-CGI-related) from a novel structural perspective, and then devise a unified framework, called PNNP (Pattern-based Nearest Neighbor search for Promoter), to predict both CGI- and non-CGI-related promoters based on their structural features. Our comparative analysis of the structural characteristics of promoters reveals two interesting facts: (i) the structural values of CGI- and non-CGI-related promoters are quite different, but they exhibit nearly similar structural patterns; (ii) the structural patterns of promoters are clearly different from those of non-promoter sequences, even though the sequences have almost similar structural values. Extensive experiments demonstrate that the proposed PNNP approach is effective in capturing the structural patterns of promoters, and can significantly improve the genome-wide performance of promoter prediction, especially for non-CGI-related promoters. The implementation of the program PNNP is available at http://admis.tongji.edu.cn/Projects/pnnp.aspx.
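    The nearest-neighbor classification step at the heart of such an approach can be sketched briefly: a candidate sequence's structural profile is compared against labelled profiles, and the candidate takes the majority label of its k closest neighbors. The profiles below are toy vectors, not real DNA structural features, and the helper names are ours.

```python
# Hedged k-NN sketch over structural-profile vectors (toy data).
import math
from collections import Counter

def knn_label(query, labelled, k=3):
    ranked = sorted(labelled, key=lambda lv: math.dist(lv[1], query))
    votes = Counter(label for label, _ in ranked[:k])
    return votes.most_common(1)[0][0]

labelled = [
    ("promoter", (0.9, 0.8, 0.7)),
    ("promoter", (0.8, 0.9, 0.8)),
    ("promoter", (0.85, 0.75, 0.9)),
    ("non-promoter", (0.2, 0.1, 0.3)),
    ("non-promoter", (0.3, 0.2, 0.1)),
]
print(knn_label((0.8, 0.8, 0.8), labelled))  # "promoter"
```

    The paper's contribution lies in which structural patterns feed this comparison, so that CGI- and non-CGI-related promoters can share one framework.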

  2. A surrogate-based metaheuristic global search method for beam angle selection in radiation treatment planning.

    Science.gov (United States)

    Zhang, H H; Gao, S; Chen, W; Shi, L; D'Souza, W D; Meyer, R R

    2013-03-21

    An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality.
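
    The surrogate-scoring-plus-sampling idea described above might be sketched as follows (a toy stand-in: simple random sampling replaces the paper's nested-partitions framework, and `single_beam_score` is an assumed precomputed table of per-beam quality estimates):

```python
import random

def surrogate_score(beam_set, single_beam_score):
    """Additive surrogate: estimate the quality of a beam set cheaply from
    precomputed single-beam scores (in the paper, extracted from plans with
    equally-spaced beams), instead of solving the full dose optimization."""
    return sum(single_beam_score[b] for b in beam_set)

def best_beam_set(angles, k, single_beam_score, n_samples=1000, seed=0):
    """Randomly sample k-angle subsets and keep the surrogate-best one.
    Toy stand-in for the nested-partitions global search."""
    rng = random.Random(seed)
    best, best_val = None, float('-inf')
    for _ in range(n_samples):
        cand = tuple(sorted(rng.sample(angles, k)))
        val = surrogate_score(cand, single_beam_score)
        if val > best_val:
            best, best_val = cand, val
    return best
```

    Only the surrogate winner would then be evaluated with the expensive full dose optimization.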

  3. A surrogate-based metaheuristic global search method for beam angle selection in radiation treatment planning

    International Nuclear Information System (INIS)

    Zhang, H H; D’Souza, W D; Gao, S; Shi, L; Chen, W; Meyer, R R

    2013-01-01

    An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality. (paper)

  4. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique is proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical properties and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.
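
    As a rough illustration of the GSA technique named above, here is a minimal gravitational search loop for a generic continuous objective (all parameters and the test objective are hypothetical; the paper's deposition model is not reproduced):

```python
import random

def gsa_minimize(f, dim, bounds, n_agents=20, iters=100, g0=100.0, seed=1):
    """Minimal Gravitational Search Algorithm sketch. Agents attract each
    other with forces proportional to fitness-derived masses; the
    gravitational constant G decays linearly over iterations."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    V = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float('inf')
    for t in range(iters):
        fit = [f(x) for x in X]
        for x, fx in zip(X, fit):
            if fx < best_f:
                best_x, best_f = list(x), fx
        worst, bst = max(fit), min(fit)
        m = [(worst - fi) / (worst - bst + 1e-12) for fi in fit]
        M = [mi / (sum(m) + 1e-12) for mi in m]     # normalized masses
        G = g0 * (1 - t / iters)                    # decaying gravity
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = sum((X[j][d] - X[i][d]) ** 2
                           for d in range(dim)) ** 0.5 + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / dist
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return best_x, best_f
```

    In the paper's setting, `f` would map a candidate parameter combination to a measure of how far the simulated film properties are from the target.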

  5. NESSiE: The Experimental Sterile Neutrino Search in Short-Base-Line at CERN

    CERN Document Server

    Kose, Umut

    2013-01-01

    Several different experimental results indicate the existence of anomalies in the neutrino sector. Models beyond the standard model have been developed to explain these results and involve one or more additional neutrinos that do not interact weakly. A new experimental program is therefore needed to study this potential new physics with a possible new Short-Base-Line neutrino beam at CERN. CERN is currently promoting the start-up of a New Neutrino Facility in the North Area site, which may host two complementary detectors, one based on LAr technology and one corresponding to a muon spectrometer. The system is doubled in two different sites. With regard to the latter option, NESSiE, the Neutrino Experiment with Spectrometers in Europe, has been proposed for the search for sterile neutrinos by studying Charged Current (CC) muon neutrino and antineutrino interactions. The detector consists of two magnetic spectrometers to be located in two sites: "Near" and "Far" from the proton target of the CERN-SPS beam. Each sp...

  6. A Sustainable City Planning Algorithm Based on TLBO and Local Search

    Science.gov (United States)

    Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang

    2017-09-01

    Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development, and it has provided a broad stage for the application of artificial intelligence theories and methods. Because the design of a sustainable city is essentially a constraint optimization problem, extensively researched swarm intelligence algorithms are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a new swarm intelligence algorithm. Its inspiration comes from the "teaching" and "learning" behavior of a classroom: the evolution of the population is realized by simulating the teacher's "teaching" and the students "learning" from each other. It has few parameters, is efficient and conceptually simple, and is easy to implement. It has been successfully applied to scheduling, planning, configuration and other fields, achieving good results, and has received growing attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement the random generation algorithm and evaluation model of the urban planning problem. Experiments on small and medium-sized randomly generated problems showed that our proposed algorithm has obvious advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
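
    A minimal sketch of the classical TLBO teacher/learner phases on a generic continuous objective (illustrative only; the paper's TLBO_LS additionally applies a local-search step and a city-planning evaluation model):

```python
import random

def tlbo_minimize(f, dim, bounds, n_pop=20, iters=50, seed=0):
    """Classical TLBO sketch: a 'teacher' phase pulls learners toward the
    best solution, then a 'learner' phase lets pairs of learners teach each
    other. Both phases accept a candidate only if it improves (greedy)."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(hi, max(lo, v))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    for _ in range(iters):
        fit = [f(x) for x in pop]
        teacher = pop[fit.index(min(fit))]
        mean = [sum(x[d] for x in pop) / n_pop for d in range(dim)]
        for i in range(n_pop):
            tf = rng.choice([1, 2])                   # teaching factor
            cand = [clip(pop[i][d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < fit[i]:                      # teacher phase
                pop[i], fit[i] = cand, f(cand)
            j = rng.randrange(n_pop)
            if j != i:
                sign = 1 if fit[i] < fit[j] else -1   # move relative to peer
                cand = [clip(pop[i][d] + sign * rng.random() * (pop[i][d] - pop[j][d]))
                        for d in range(dim)]
                if f(cand) < fit[i]:                  # learner phase
                    pop[i], fit[i] = cand, f(cand)
    best = min(pop, key=f)
    return best, f(best)
```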

  7. Predicting relevance based on assessor disagreement: analysis and practical applications for search evaluation

    NARCIS (Netherlands)

    Demeester, Thomas; Aly, Robin; Hiemstra, Djoerd; Nguyen, Dong-Phuong; Develder, Chris

    Evaluation of search engines relies on assessments of search results for selected test queries, from which we would ideally like to draw conclusions in terms of relevance of the results for general (e.g., future, unknown) users. In practice however, most evaluation scenarios only allow us to

  8. Design of personalized search engine based on user-webpage dynamic model

    Science.gov (United States)

    Li, Jihan; Li, Shanglin; Zhu, Yingke; Xiao, Bo

    2013-12-01

    Personalized search engine focuses on establishing a user-webpage dynamic model. In this model, users' personalized factors are introduced so that the search engine is better able to provide the user with targeted feedback. This paper constructs user and webpage dynamic vector tables, introduces singular value decomposition analysis in the processes of topic categorization, and extends the traditional PageRank algorithm.
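
    Since the paper extends PageRank, a baseline power-iteration PageRank may help fix ideas (this is the standard algorithm, not the paper's personalized extension; a personalized variant would bias the uniform teleport term toward user-specific weights):

```python
def pagerank(links, d=0.85, iters=50):
    """Classic PageRank by power iteration. links: dict node -> out-links.
    The (1 - d)/n teleport term is uniform here; personalization replaces
    it with user-specific weights."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:                       # dangling node: spread uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in out:
                    new[v] += d * rank[u] / len(out)
        rank = new
    return rank
```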

  9. Optimizing Online Suicide Prevention: A Search Engine-Based Tailored Approach.

    Science.gov (United States)

    Arendt, Florian; Scherr, Sebastian

    2017-11-01

    Search engines are increasingly used to seek suicide-related information online, which can serve both harmful and helpful purposes. Google acknowledges this fact and presents a suicide-prevention result for particular search terms. Unfortunately, the result is only presented to a limited number of visitors. Hence, Google is missing the opportunity to provide help to vulnerable people. We propose a two-step approach to a tailored optimization: First, research will identify the risk factors. Second, search engines will reweight algorithms according to the risk factors. In this study, we show that the query share of the search term "poisoning" on Google shows substantial peaks corresponding to peaks in actual suicidal behavior. Accordingly, thresholds for showing the suicide-prevention result should be set to the lowest levels during the spring, on Sundays and Mondays, on New Year's Day, and on Saturdays following Thanksgiving. Search engines can help to save lives globally by utilizing a more tailored approach to suicide prevention.

  10. Searching for answers to clinical questions using google versus evidence-based summary resources: a randomized controlled crossover study.

    Science.gov (United States)

    Kim, Sarang; Noveck, Helaine; Galt, James; Hogshire, Lauren; Willett, Laura; O'Rourke, Kerry

    2014-06-01

    To compare the speed and accuracy of answering clinical questions using Google versus summary resources. In 2011 and 2012, 48 internal medicine interns from two classes at Rutgers University Robert Wood Johnson Medical School, who had been trained to use three evidence-based summary resources, performed four-minute computer searches to answer 10 clinical questions. Half were randomized to initiate searches for answers to questions 1 to 5 using Google; the other half initiated searches using a summary resource. They then crossed over and used the other resource for questions 6 to 10. They documented the time spent searching and the resource where the answer was found. Time to correct response and percentage of correct responses were compared between groups using t test and generalized estimating equations. Of 480 questions administered, interns found answers for 393 (82%). Interns initiating searches in Google used a wider variety of resources than those starting with summary resources. No significant difference was found in mean time to correct response (138.5 seconds for Google versus 136.1 seconds for summary resource; P = .72). Mean correct response rate was 58.4% for Google versus 61.5% for summary resource (mean difference -3.1%; 95% CI -10.3% to 4.2%; P = .40). The authors found no significant differences in speed or accuracy between searches initiated using Google versus summary resources. Although summary resources are considered to provide the highest quality of evidence, improvements to allow for better speed and accuracy are needed.

  11. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  12. Pep-3D-Search: a method for B-cell epitope prediction based on mimotope analysis.

    Science.gov (United States)

    Huang, Yan Xin; Bao, Yong Li; Guo, Shu Yan; Wang, Yan; Zhou, Chun Guang; Li, Yu Xin

    2008-12-16

    The prediction of conformational B-cell epitopes is one of the most important goals in immunoinformatics. The solution to this problem, even if approximate, would help in designing experiments to precisely map the residues of interaction between an antigen and an antibody. Consequently, this area of research has received considerable attention from immunologists, structural biologists and computational biologists. Phage-displayed random peptide libraries are powerful tools used to obtain mimotopes that are selected by binding to a given monoclonal antibody (mAb) in a similar way to the native epitope. These mimotopes can be considered as functional epitope mimics. Mimotope analysis based methods can predict not only linear but also conformational epitopes and this has been the focus of much research in recent years. Though some algorithms based on mimotope analysis have been proposed, the precise localization of the interaction site mimicked by the mimotopes is still a challenging task. In this study, we propose a method for B-cell epitope prediction based on mimotope analysis called Pep-3D-Search. Given the 3D structure of an antigen and a set of mimotopes (or a motif sequence derived from the set of mimotopes), Pep-3D-Search can be used in two modes: mimotope or motif. To evaluate the performance of Pep-3D-Search to predict epitopes from a set of mimotopes, 10 epitopes defined by crystallography were compared with the predicted results from a Pep-3D-Search: the average Matthews correlation coefficient (MCC), sensitivity and precision were 0.1758, 0.3642 and 0.6948. Compared with other available prediction algorithms, Pep-3D-Search showed comparable MCC, specificity and precision, and could provide novel, rational results. To verify the capability of Pep-3D-Search to align a motif sequence to a 3D structure for predicting epitopes, 6 test cases were used. 
The predictive performance of Pep-3D-Search was demonstrated to be superior to that of other similar programs.

  13. A Global-best Harmony Search based Gradient Descent Learning FLANN (GbHS-GDL-FLANN) for data classification

    Directory of Open Access Journals (Sweden)

    Bighnaraj Naik

    2016-03-01

    While dealing with real-world data for classification using ANNs, it is often difficult to determine the optimal ANN classification model with fast convergence. Also, it is laborious to adjust the set of weights of ANNs using an appropriate learning algorithm to obtain better classification accuracy. In this paper, a variant of Harmony Search (HS), called Global-best Harmony Search, along with Gradient Descent Learning, is used with a Functional Link Artificial Neural Network (FLANN) for the classification task in data mining. The Global-best Harmony Search (GbHS) uses concepts of Particle Swarm Optimization from Swarm Intelligence to improve the quality of harmonies. The problem-solving strategies of Global-best Harmony Search, along with the searching capabilities of Gradient Descent Search, are used to obtain an optimal set of weights for FLANN. The proposed method (GbHS-GDL-FLANN) is implemented in MATLAB and compared with other alternatives (FLANN, GA-based FLANN, PSO-based FLANN, HS-based FLANN, Improved HS-based FLANN, Self-Adaptive HS-based FLANN, MLP, SVM and FSN). GbHS-GDL-FLANN is tested on benchmark datasets from the UCI Machine Learning repository using the 5-fold cross-validation technique. The proposed method is analyzed under the null hypothesis using the Friedman Test, the Holm and Hochberg Procedure and Post-Hoc ANOVA Statistical Analysis (Tukey Test & Dunnett Test) for statistical analysis and validity of results. Simulation results reveal that the performance of the proposed GbHS-GDL-FLANN is better than, and statistically significantly different from, the other alternatives.
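
    The GbHS idea — classic harmony search whose pitch adjustment copies from the best harmony, PSO-style — can be sketched for a generic continuous objective (hypothetical parameter values; the gradient-descent FLANN training the paper couples it with is omitted):

```python
import random

def gbhs_minimize(f, dim, bounds, hms=10, hmcr=0.9, parcr=0.3, iters=500, seed=0):
    """Global-best Harmony Search sketch. As in classic HS, each new harmony
    draws values from memory with probability hmcr, but the pitch adjustment
    copies the corresponding value of the best harmony (the PSO-inspired
    'global best' idea)."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(x) for x in memory]
    for _ in range(iters):
        best = memory[fit.index(min(fit))]
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]        # memory consideration
                if rng.random() < parcr:
                    v = best[d]                  # global-best pitch adjustment
            else:
                v = rng.uniform(lo, hi)          # random consideration
            new.append(v)
        fn = f(new)
        worst = fit.index(max(fit))
        if fn < fit[worst]:                      # replace worst harmony
            memory[worst], fit[worst] = new, fn
    return memory[fit.index(min(fit))], min(fit)
```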

  14. Earthdata Search: How Usability Drives Innovation To Enable A Broad User Base

    Science.gov (United States)

    Reese, M.; Siarto, J.; Lynnes, C.; Shum, D.

    2017-12-01

    Earthdata Search (https://search.earthdata.nasa.gov) is a modern web application allowing users to search, discover, visualize, refine, and access NASA Earth Observation data using a wide array of service offerings. Its goal is to ease the technical burden on data users by providing a high-quality application that makes it simple to interact with NASA Earth observation data, freeing them to spend more effort on innovative endeavors. This talk details how we put end users first in our design and development process, focusing on usability and letting usability needs drive requirements for the underlying technology. As just a few examples of how this plays out in practice: Earthdata Search is backed by a lightning-fast metadata repository, allowing an extremely responsive UI that updates as the user changes criteria, not only at the dataset level but also at the file level. This results in a better exploration experience, as the time penalty is greatly reduced. Also, since Earthdata Search uses metadata from over 35,000 datasets that are managed by different data providers, metadata standards, quality and consistency vary. We found that this was negatively impacting users' search and exploration experience. We resolved this problem with the introduction of "humanizers", a community-driven process to both "smooth out" metadata values and provide non-jargonistic representations of some content within the Earthdata Search UI. This is helpful both for the experienced data scientist and for users who are brand new to the discipline.

  15. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    Directory of Open Access Journals (Sweden)

    Shyamala Devi Munisamy

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through internet and facilitates third party infrastructure and applications. While customers have no visibility on how their data is stored on service provider’s premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make effective data access through several fuzzy searching techniques. In this paper, we have discussed the existing fuzzy searching techniques and focused on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides classifier search server that creates universal keyword classifier for the multiple keyword request which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  16. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    Science.gov (United States)

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through internet and facilitates third party infrastructure and applications. While customers have no visibility on how their data is stored on service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make effective data access through several fuzzy searching techniques. In this paper, we have discussed the existing fuzzy searching techniques and focused on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides classifier search server that creates universal keyword classifier for the multiple keyword request which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.
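
    The typo-resolution goal of fuzzy keyword search can be illustrated with a plain edit-distance matcher (illustrative only; the paper's scheme works over encrypted data with a BTree index and a keyword classifier, none of which is shown here):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one rolling row)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(query, keywords, max_dist=1):
    """Return indexed keywords within max_dist edits of the query, closest
    first — resolving typos such as 'netwrk' -> 'network'."""
    hits = [(edit_distance(query, k), k) for k in keywords]
    return [k for d, k in sorted(hits) if d <= max_dist]
```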

  17. BredeQuery: Coordinate-Based Meta-analytic Search of Neuroscientific Literature from the SPM Environment

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Szewczyk, Marcin Marek; Rasmussen, Peter Mondrup

    2010-01-01

    ...of the databases offer so-called coordinate-based searching to the users (e.g. Brede, BrainMap). For such a search, the publications which relate to the brain locations represented by the user coordinates are retrieved. We present BredeQuery – a plugin for the widely used SPM data analytic pipeline. BredeQuery offers a direct link from SPM to the Brede Database coordinate-based search engine. BredeQuery is able to ‘grab’ brain location coordinates from the SPM windows and enter them as a query for the Brede Database. Moreover, results of the query can be displayed in a MATLAB window and/or exported directly...

  18. A new model of information behaviour based on the Search Situation Transition schema

    Directory of Open Access Journals (Sweden)

    Nils Pharo

    2004-01-01

    This paper presents a conceptual model of information behaviour. The model is part of the Search Situation Transition method schema. The method schema is developed to discover and analyse interplay between phenomena traditionally analysed as factors influencing either information retrieval or information seeking. In this paper the focus is on the model's five main categories: the work task, the searcher, the social/organisational environment, the search task, and the search process. In particular, the search process and its sub-categories search situation and transition and the relationship between these are discussed. To justify the method schema an empirical study was designed according to the schema's specifications. In the paper a subset of the study is presented analysing the effects of work tasks on Web information searching. Findings from this small-scale study indicate a strong relationship between the work task goal and the level of relevance used for judging resources during search processes.

  19. A compact ADPLL based on symmetrical binary frequency searching with the same circuit

    Science.gov (United States)

    Li, Hangbiao; Zhang, Bo; Luo, Ping; Liao, Pengfei; Liu, Junjie; Li, Zhaoji

    2015-03-01

    A compact all-digital phase-locked loop (C-ADPLL) based on symmetrical binary frequency searching (BFS) with the same circuit is presented in this paper. The minimising relative frequency variation error Δη (MFE) rule is derived as guidance for design and is used to weigh the accuracy of the digitally controlled oscillator (DCO) clock frequency. The symmetrical BFS is used in both the coarse-tuning and the fine-tuning process of the DCO clock frequency to achieve the minimum Δη of the locked DCO clock, which simplifies the circuit architecture and saves die area. The C-ADPLL is implemented in a 0.13 μm one-poly-eight-metal (1P8M) CMOS process and the on-chip area is only 0.043 mm², much smaller than that of comparable designs. The measurement results show that the peak-to-peak (Pk-Pk) jitter and the root-mean-square jitter of the DCO clock frequency are 270 ps at 72.3 MHz and 42 ps at 79.4 MHz, respectively, while the power consumption of the proposed ADPLL is only 2.7 mW (at 115.8 MHz) with a 1.2 V power supply. The measured Δη is not more than 1.14%. Compared with other ADPLLs, the proposed C-ADPLL has simpler architecture, smaller size and lower Pk-Pk jitter.
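
    The binary frequency searching idea — settling a DCO control word bit by bit against a target frequency — can be sketched as follows (hypothetical model: `freq_of_code` stands in for the real DCO and is assumed monotonic in the control code; this is not the paper's circuit):

```python
def bfs_tune(target_hz, freq_of_code, n_bits=10):
    """Binary frequency search sketch: try each DCO control bit from MSB to
    LSB, keeping a bit only if the resulting clock frequency does not exceed
    the target. Converges in n_bits comparisons to the largest code whose
    frequency is <= target (for a monotonic DCO model)."""
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        if freq_of_code(trial) <= target_hz:
            code = trial
    return code
```

    With a toy linear DCO of 100 kHz per code step, tuning toward 72.3 MHz settles on code 723 after ten comparisons.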

  20. Debris search around (486958) 2014 MU69: Results from SOFIA and ground-based occultation campaigns

    Science.gov (United States)

    Young, Eliot F.; Buie, Marc W.; Porter, Simon Bernard; Zangari, Amanda Marie; Stern, S. Alan; Ennico, Kimberly; Reach, William T.; Pfueller, Enrico; Wiedemann, Manuel; Fraser, Wesley Cristopher; Camargo, Julio; Young, Leslie; Wasserman, Lawrence H.; New Horizons MU69 Occultation Team

    2017-10-01

    The New Horizons spacecraft is scheduled to fly by the cold classical KBO 2014 MU69 on 1-Jan-2019. The spacecraft speed relative to MU69 will be in excess of 14 km/s. At these encounter velocities, impact with debris could be fatal to the spacecraft. We report on searches for debris in the neighborhood of MU69 conducted from SOFIA and ground-based sites. SOFIA observed the star field around MU69 on 10-Jul-2017 (UT) with their Focal Plane Imager (FPI+), operating at 20 Hz from 7:25 to 8:10 UT, spanning the time of the predicted occultation. Several large fixed telescopes observed the 3-Jun-2017, 10-Jul-2017 and/or the 17-Jul-2017 occultation events, including the 4-meter SOAR telescope, the 8-meter Gemini South telescope, and many 16-inch portable telescopes that were arranged in picket fences in South Africa and Argentina. We report on the light curves from these observing platforms and constraints on the optical depth due to debris or rings within the approximate Hill sphere (about 60,000 km across) of MU69. This work was supported by the New Horizons mission and NASA, with astrometric support from the Gaia mission and logistical support from Argentina and the US embassies in Buenos Aires and Cape Town. At SOAR, data acquisition has been done with a Raptor camera (visitor instrument) funded by the Observatorio Nacional/MCTIC.

  1. Spermatozoa motion detection and trajectory tracking algorithm based on orthogonal search

    Science.gov (United States)

    Chacon Murguia, Mario I.; Valdez Martinez, Antonio

    1999-10-01

    This paper presents a new algorithm for object motion detection and trajectory tracking. The method was developed as part of a machine vision system for human fertility analysis. Fertility analysis is based on the number of spermatozoa in semen samples and their type of movement. Two approaches were tested to detect the movement of the spermatozoa: image subtraction and optical flow. Image subtraction is a simple and fast method, but it has complications detecting individual motion when large numbers of objects are present. The optical flow method is able to detect motion, but it turns out to be computationally expensive; it does not generate a specific trajectory for each spermatozoon, and it does not detect static spermatozoa. The algorithm developed detects object motion through an orthogonal search of blocks in consecutive frames. Matching of two blocks in consecutive frames is scored by squared differences. A dynamic control array is used to store the trajectory of each spermatozoon and to handle the different situations in the trajectories, such as new spermatozoa entering a frame, spermatozoa leaving the frame, and spermatozoa collisions. The algorithm developed turns out to be faster than the optical flow algorithm and solves the problems of the image subtraction method. It also detects static spermatozoa, and generates a motion vector for each spermatozoon that describes its trajectory.
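
    The block-matching core — an orthogonal search with a squared-difference criterion — might look like the following toy version (illustrative; the block size, step schedule and list-of-lists frame format are assumptions, and the dynamic-control-array trajectory bookkeeping is omitted):

```python
def ssd(frame, x, y, block, size):
    """Sum of squared differences between a stored block and the frame
    patch whose top-left corner is (x, y)."""
    return sum((frame[y + r][x + c] - block[r][c]) ** 2
               for r in range(size) for c in range(size))

def orthogonal_search(prev_frame, next_frame, x, y, size=4, step=4):
    """Orthogonal search block matching: at each step size, probe horizontal
    neighbors, then vertical neighbors, keeping the lowest-SSD position;
    halve the step and repeat. Returns the motion vector (dx, dy)."""
    block = [row[x:x + size] for row in prev_frame[y:y + size]]
    h, w = len(next_frame), len(next_frame[0])
    bx, by = x, y
    while step >= 1:
        for ax, ay in ((1, 0), (0, 1)):       # horizontal pass, vertical pass
            cx, cy = bx, by
            best = ssd(next_frame, cx, cy, block, size)
            for s in (-step, step):
                nx, ny = cx + ax * s, cy + ay * s
                if 0 <= nx <= w - size and 0 <= ny <= h - size:
                    cur = ssd(next_frame, nx, ny, block, size)
                    if cur < best:
                        best, bx, by = cur, nx, ny
        step //= 2
    return bx - x, by - y
```

    A bright 4x4 object moved by (3, 1) between two otherwise empty frames is recovered exactly.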

  2. State Recognition of High Voltage Isolation Switch Based on Background Difference and Iterative Search

    Science.gov (United States)

    Xu, Jiayuan; Yu, Chengtao; Bo, Bin; Xue, Yu; Xu, Changfu; Chaminda, P. R. Dushantha; Hu, Chengbo; Peng, Kai

    2018-03-01

    The automatic recognition of the high voltage isolation switch state by remote video monitoring is an effective means to ensure the safety of personnel and equipment. Existing methods mainly follow two routes: improving monitoring accuracy through equipment transformation, and adopting target detection technology. Such methods are often applied to specific scenarios, with limited application scope and high cost. To solve this problem, a high voltage isolation switch state recognition method based on background difference and iterative search is proposed in this paper. The initial position of the switch is detected in real time through the background difference method. When the switch starts to open or close, a target tracking algorithm is used to track the motion trajectory of the switch. The opening and closing state of the switch is determined according to the angle variation between the switch tracking point and the center line. The effectiveness of the method is verified by experiments on video frames of different switching states. Compared with traditional methods, this method is more robust and effective.
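
    The background-difference step can be illustrated with a simple per-pixel threshold on grayscale frames (toy sketch; the threshold and pixel-count values are assumptions, and the iterative-search tracking stage is not shown):

```python
def changed_pixels(frame, background, threshold=25):
    """Background difference: flag pixels whose absolute deviation from a
    reference background frame exceeds a threshold. Frames are lists of
    rows of grayscale values."""
    return [[int(abs(p - b) > threshold) for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def motion_detected(frame, background, threshold=25, min_pixels=5):
    """Declare motion (e.g. the switch arm leaving its rest position) when
    enough pixels differ from the background model."""
    mask = changed_pixels(frame, background, threshold)
    return sum(map(sum, mask)) >= min_pixels
```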

  3. The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies

    Science.gov (United States)

    Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza

    2009-09-01

    The problem in question in this paper, faced by broadcasting companies, is how to benefit from a limited advertising space. This problem arises from the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a constrained nonlinear multi-period mathematical model which incorporates cancellation and overbooking. The objective function is to maximize the total expected revenue, and our numerical method does so by determining the sales limits for each class of customer, yielding the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern, and we consider it as a constraint in our model. In this paper an algorithm based on Scatter Search is developed to acquire a good feasible solution. The method uses simulation of customer arrivals over a continuous finite time horizon [0, T]. Several sensitivity analyses are conducted in the computational results to depict the effectiveness of the proposed method. They also provide insight into the better results of the revenue management control policy compared to a "no sales limit" policy, in which earlier demand is served first.

  4. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    Science.gov (United States)

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-08-05

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to be suitable for such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching the rotation angle in a single gyrator transform. Since gyrator transform is the foundation of image encryption in gyrator transform domains, the research on the method of searching the rotation angle in a single gyrator transform is useful for further study on the security of such image encryption algorithms.

  5. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Jun Sang

    2015-08-01

    Full Text Available Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the properties of severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to be suitable for such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, the research on the method of searching the rotation angle in a single gyrator transform is useful for further study on the security of such image encryption algorithms.

  6. Evidence-based practice: extending the search to find material for the systematic review

    OpenAIRE

    Helmer, Diane; Savoie, Isabelle; Green, Carolyn; Kazanjian, Arminée

    2001-01-01

    Background: Cochrane-style systematic reviews increasingly require the participation of librarians. Guidelines on the appropriate search strategy to use for systematic reviews have been proposed. However, research evidence supporting these recommendations is limited.

  7. Decentralized cooperative unmanned aerial vehicles conflict resolution by neural network-based tree search method

    Directory of Open Access Journals (Sweden)

    Jian Yang

    2016-09-01

    Full Text Available In this article, a tree search algorithm is proposed to find near-optimal conflict avoidance solutions for unmanned aerial vehicles. In a dynamic environment, unmodeled elements such as wind would make UAVs deviate from their nominal traces, which creates difficulties for conflict detection and resolution. Back-propagation neural networks are utilized to approximate the unmodeled dynamics of the environment. To satisfy the online planning requirement, the search length of the tree search algorithm is limited, so the algorithm may not be able to reach the goal states within the search process. A midterm reward function for assessing each node is therefore devised, with consideration given to two factors, namely the safe separation requirement and the mission of each unmanned aerial vehicle. Simulation examples and comparisons with previous approaches are provided to illustrate the smooth and convincing behaviours of the proposed algorithm.
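    The core idea, a depth-limited tree search whose leaves are scored by a midterm reward combining safe separation and mission progress, can be sketched as follows; the 2-D grid moves, penalty weight and positions are invented for illustration, and the neural-network approximation of unmodeled dynamics is omitted entirely.

```python
import math

def midterm_reward(pos, goal, intruder, min_sep=2.0):
    """Reward for a non-terminal node: progress toward the goal plus a
    heavy penalty when the safe-separation requirement is violated."""
    progress = -math.dist(pos, goal)
    penalty = -100.0 if math.dist(pos, intruder) < min_sep else 0.0
    return progress + penalty

def limited_tree_search(pos, goal, intruder, depth=3, step=1.0):
    """Depth-limited search over four heading choices; since the goal may
    lie beyond the horizon, leaves are scored with the midterm reward."""
    moves = [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]

    def best_value(p, d):
        if d == 0:
            return midterm_reward(p, goal, intruder)
        return max(best_value((p[0] + dx, p[1] + dy), d - 1)
                   for dx, dy in moves)

    # Return the first move leading to the best-scored leaf.
    return max(moves,
               key=lambda m: best_value((pos[0] + m[0], pos[1] + m[1]),
                                        depth - 1))

move = limited_tree_search(pos=(0.0, 0.0), goal=(5.0, 0.0),
                           intruder=(2.0, 0.0))
```

    A real planner would also penalize separation violations at interior nodes, not only at leaves; this sketch keeps only the horizon-limited scoring the abstract describes.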

  8. Collab-Analyzer: An Environment for Conducting Web-Based Collaborative Learning Activities and Analyzing Students' Information-Searching Behaviors

    Science.gov (United States)

    Wu, Chih-Hsiang; Hwang, Gwo-Jen; Kuo, Fan-Ray

    2014-01-01

    Researchers have found that students might get lost or feel frustrated while searching for information on the Internet to deal with complex problems without real-time guidance or support. To address this issue, a web-based collaborative learning system, Collab-Analyzer, is proposed in this paper. It is not only equipped with a collaborative…

  9. Scatter search based metaheuristic for robust optimization of the deploying of "DWDM" technology on optical networks with survivability

    Directory of Open Access Journals (Sweden)

    Moreno-Pérez José A.

    2005-01-01

    Full Text Available In this paper we discuss the application of a metaheuristic approach based on Scatter Search to deal with robust optimization of the planning problem in the deploying of the Dense Wavelength Division Multiplexing (DWDM) technology on an existing optical fiber network, taking into account, in addition to the forecasted demands, the uncertainty in the survivability requirements.

  10. Design considerations for a large-scale image-based text search engine in historical manuscript collections

    NARCIS (Netherlands)

    Schomaker, Lambertus

    2016-01-01

    This article gives an overview of design considerations for a handwriting search engine based on pattern recognition and high-performance computing, “Monk”. In order to satisfy multiple and often conflicting technological requirements, an architecture is used which heavily relies on high-performance

  11. An improved formalism for quantum computation based on geometric algebra—case study: Grover's search algorithm

    Science.gov (United States)

    Chappell, James M.; Iqbal, Azhar; Lohe, M. A.; von Smekal, Lorenz; Abbott, Derek

    2013-04-01

    The Grover search algorithm is one of the two key algorithms in the field of quantum computing, and hence it is desirable to represent it in the simplest and most intuitive formalism possible. We show firstly that Clifford's geometric algebra provides a significantly simpler representation than the conventional bra-ket notation, and secondly that the basis defined by the states of maximum and minimum weight in the Grover search space allows a simple visualization of the Grover search analogous to the precession of a spin-1/2 particle. Using this formalism we efficiently solve the exact search problem, as well as easily represent more general search situations. We do not claim the development of an improved algorithm, but show in a tutorial paper that geometric algebra provides extremely compact and elegant expressions with improved clarity for the Grover search algorithm. Being a key algorithm in quantum computing and one of the most studied, it forms an ideal basis for a tutorial on how to elucidate quantum operations in terms of geometric algebra; this is then of interest in extending the applicability of geometric algebra to more complicated problems in fields of quantum computing, quantum decision theory, and quantum information.
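    Independent of the geometric-algebra formalism, the Grover iteration itself is easy to simulate classically with a real amplitude vector. The sketch below is the generic textbook construction (not the paper's notation): the oracle flips the sign of the marked amplitude, the diffusion step inverts about the mean, and the optimal iteration count is ⌊(π/4)√N⌋.

```python
import math

def grover_success_prob(n_items, n_iters):
    """Simulate Grover's search on a classical amplitude vector with a
    single marked item; returns the probability of measuring it."""
    marked = 0
    amps = [1.0 / math.sqrt(n_items)] * n_items  # uniform superposition
    for _ in range(n_iters):
        amps[marked] = -amps[marked]             # oracle: flip marked phase
        mean = sum(amps) / n_items
        amps = [2.0 * mean - a for a in amps]    # diffusion: invert about mean
    return amps[marked] ** 2

n = 16
k = math.floor(math.pi / 4 * math.sqrt(n))  # optimal iteration count, here 3
p = grover_success_prob(n, k)
```

    For N = 16, three iterations already push the success probability above 96%, against 1/16 for a random guess.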

  12. Search Help

    Science.gov (United States)

    Guidance and search help resource listing examples of common queries that can be used in the Google Search Appliance search request, including examples of special characters or query term separators that Google Search Appliance recognizes.

  13. Large Neighborhood Search

    DEFF Research Database (Denmark)

    Pisinger, David; Røpke, Stefan

    2010-01-01

    Heuristics based on large neighborhood search have recently shown outstanding results in solving various transportation and scheduling problems. Large neighborhood search methods explore a complex neighborhood by use of heuristics. Using large neighborhoods makes it possible to find better...... candidate solutions in each iteration and hence traverse a more promising search path. Starting from the large neighborhood search method,we give an overview of very large scale neighborhood search methods and discuss recent variants and extensions like variable depth search and adaptive large neighborhood...... search....
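    The destroy-and-repair loop that defines large neighborhood search can be sketched on a toy traveling-salesman instance; the random-removal destroy step, cheapest-insertion repair and improve-only acceptance below are one common configuration, and the instance and all parameters are invented for illustration.

```python
import math
import random

def tour_len(tour, pts):
    """Length of a closed tour over the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def lns_tsp(pts, iters=200, destroy_k=3, seed=0):
    """LNS sketch: destroy removes k random cities, repair reinserts each
    at its cheapest position, and only improving candidates are kept."""
    rng = random.Random(seed)
    best = list(range(len(pts)))
    best_len = tour_len(best, pts)
    for _ in range(iters):
        cur = best[:]
        removed = rng.sample(cur, destroy_k)      # destroy step
        for c in removed:
            cur.remove(c)
        for c in removed:                         # repair: cheapest insertion
            pos = min(range(len(cur) + 1),
                      key=lambda i: tour_len(cur[:i] + [c] + cur[i:], pts))
            cur.insert(pos, c)
        cand_len = tour_len(cur, pts)
        if cand_len < best_len:                   # accept improvements only
            best, best_len = cur, cand_len
    return best, best_len

# Six points on a 2x1 grid; the optimal tour is the perimeter of length 6.
pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour, length = lns_tsp(pts)
```

    Adaptive LNS, which the survey also covers, would additionally maintain weights over several destroy/repair pairs and bias selection toward the operators that have recently produced improvements.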

  14. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Searching for Binary Systems Among Nearby Dwarfs Based on Pulkovo Observations and SDSS Data

    Science.gov (United States)

    Khovrichev, M. Yu.; Apetyan, A. A.; Roshchina, E. A.; Izmailov, I. S.; Bikulova, D. A.; Ershova, A. P.; Balyaev, I. A.; Kulikova, A. M.; Petyur, V. V.; Shumilov, A. A.; Os'kina, K. I.; Maksimova, L. A.

    2018-02-01

    Our goal is to find previously unknown binary systems among low-mass dwarfs in the solar neighborhood and to test the search technique. The basic ideas are to reveal the images of stars with significant ellipticities and/or asymmetries compared to the background stars on CCD frames and to subsequently determine the spatial parameters of the binary system and the magnitude difference between its components. For its realization we have developed a method based on an image shapelet decomposition. All of the comparatively faint stars with large proper motions (V > 13^m, μ > 300 mas yr^-1) for which the "duplicate source" flag in the Gaia DR1 catalogue is equal to one have been included in the list of objects for our study. As a result, we have selected 702 stars. To verify our results, we have performed additional observations of 65 stars from this list with the Pulkovo 1-m "Saturn" telescope (2016-2017). We have revealed a total of 138 binary candidates (nine of them from the "Saturn" telescope and SDSS data). Six program stars are known binaries. The images of the primaries of the comparatively wide pairs WDS 14519+5147, WDS 11371+6022, and WDS 15404+2500 are shown to be resolved into components; therefore, we can talk about the detection of triple systems. The angular separation ρ, position angle, and component magnitude difference Δm have been estimated for almost all of the revealed binary systems. For most stars 1.5'' < ρ < 2.5'', while Δm < 1.5^m.

  16. Neurally and ocularly informed graph-based models for searching 3D environments

    Science.gov (United States)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.

  17. Searching for the definition of macrosomia through an outcome-based approach.

    Science.gov (United States)

    Ye, Jiangfeng; Zhang, Lin; Chen, Yan; Fang, Fang; Luo, ZhongCheng; Zhang, Jun

    2014-01-01

    Macrosomia has been defined in various ways by obstetricians and researchers. The purpose of the present study was to search for a definition of macrosomia through an outcome-based approach. In a study of 30,831,694 singleton term live births and 38,053 stillbirths in the U.S. Linked Birth-Infant Death Cohort datasets (1995-2004), we compared the occurrence of stillbirth, neonatal death, and 5-min Apgar score less than four in subgroups of birthweight (4000-4099 g, 4100-4199 g, 4200-4299 g, 4300-4399 g, 4400-4499 g, 4500-4999 g vs. reference group 3500-4000 g) and birthweight percentile for gestational age (90th-94th percentile, 95th-96th, and ≥ 97th percentile, vs. reference group 75th-90th percentile). There was no significant increase in adverse perinatal outcomes until birthweight exceeded the 97th percentile. Weight-specific odds ratios (ORs) rose substantially, approaching 2, when birthweight exceeded 4500 g in Whites. In Blacks and Hispanics, the aORs exceeded 2 for 5-min Apgar less than four when birthweight exceeded 4300 g. For vaginal deliveries, the aORs of perinatal morbidity and mortality were larger for most of the subgroups, but the patterns remained the same. A birthweight greater than 4500 g in Whites, or 4300 g in Blacks and Hispanics regardless of gestational age is the optimal threshold to define macrosomia. A birthweight greater than the 97th percentile for a given gestational age, irrespective of race, is also reasonable to define macrosomia. The former may be more clinically useful and simpler to apply.

  18. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    Science.gov (United States)

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.

  19. Evaluation of the efficiency of computer-aided spectra search systems based on information theory

    International Nuclear Information System (INIS)

    Schaarschmidt, K.

    1979-01-01

    Application of information theory allows objective evaluation of the efficiency of computer-aided spectra search systems. For this purpose, a significant number of search processes must be analyzed. The amount of information gained by computer application is considered as the difference between the entropy of the data bank and a conditional entropy depending on the proportion of unsuccessful search processes and ballast. The influence of the following factors can be estimated: volume, structure, and quality of the spectra collection stored, efficiency of the encoding instruction and the comparing algorithm, and subjective errors involved in the encoding of spectra. The relations derived are applied to two published storage and retrieval systems for infrared spectra. (Auth.)
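    The entropy argument can be made concrete with a small, heavily simplified sketch. The uniform data bank, the failure probability and the fixed "ballast" hit-list size below are invented illustrative numbers, and the decomposition of the conditional entropy is one possible reading of the abstract, not the paper's exact formula.

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a two-outcome event with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def search_info_gain(n_spectra, p_fail, ballast):
    """Hypothetical reading of the entropy argument: a uniform data bank of
    n_spectra has H = log2(n) bits; a search that fails with probability
    p_fail and, on success, still leaves `ballast` candidate spectra
    gains H minus the remaining uncertainty."""
    h_bank = math.log2(n_spectra)
    # Residual uncertainty: outcome uncertainty, plus the leftover
    # candidates on success, plus full uncertainty on failure.
    h_residual = (binary_entropy(p_fail)
                  + (1 - p_fail) * math.log2(ballast)
                  + p_fail * h_bank)
    return h_bank - h_residual

gain = search_info_gain(n_spectra=4096, p_fail=0.1, ballast=4)
```

    On these invented numbers a 12-bit data bank yields roughly 8.5 bits per search; shrinking the ballast or the failure rate raises the gain, which is exactly the kind of comparison the paper applies to the two retrieval systems.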

  20. Collating the knowledge base for core outcome set development: developing and appraising the search strategy for a systematic review.

    Science.gov (United States)

    Gargon, Elizabeth; Williamson, Paula R; Clarke, Mike

    2015-03-29

    The COMET (Core Outcome Measures in Effectiveness Trials) Initiative is developing a publicly accessible online resource to collate the knowledge base for core outcome set (COS) development and the applied work from different health conditions. Ensuring that the database is as comprehensive as possible and keeping it up to date are key to its value for users. This requires the development and application of an optimal, multi-faceted search strategy to identify relevant material. This paper describes the challenges of designing and implementing such a search, outlining the development of the search strategy for studies of COS development, and, in turn, the process for establishing a database of COS. We investigated the performance characteristics of this strategy including sensitivity, precision and numbers needed to read. We compared the contribution of databases towards identifying included studies to identify the best combination of methods to retrieve all included studies. Recall of the search strategies ranged from 4% to 87%, and precision from 0.77% to 1.13%. MEDLINE performed best in terms of recall, retrieving 216 (87%) of the 250 included records, followed by Scopus (44%). The Cochrane Methodology Register found just 4% of the included records. MEDLINE was also the database with the highest precision. The number needed to read varied between 89 (MEDLINE) and 130 (Scopus). We found that two databases and hand searching were required to locate all of the studies in this review. MEDLINE alone retrieved 87% of the included studies, but actually 97% of the included studies were indexed on MEDLINE. The Cochrane Methodology Register did not contribute any records that were not found in the other databases, and will not be included in our future searches to identify studies developing COS. Scopus had the lowest precision rate (0.77%) and the highest number needed to read (130). In future COMET searches for COS a balance needs to be struck between the work involved in
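    The performance measures reported above (recall, precision, number needed to read) are simple ratios and can be recomputed directly. In the sketch below, the total of 19,224 MEDLINE-retrieved records is an assumed figure chosen only to be consistent with the reported NNR of 89; the 216 and 250 come from the record above.

```python
def search_metrics(n_retrieved, n_relevant_retrieved, n_relevant_total):
    """Recall, precision and number-needed-to-read for one database."""
    recall = n_relevant_retrieved / n_relevant_total
    precision = n_relevant_retrieved / n_retrieved
    nnr = 1.0 / precision  # records screened per relevant record found
    return recall, precision, nnr

# MEDLINE figures: 216 of 250 included records retrieved; the total
# retrieved count of 19,224 is an assumption implied by NNR = 89.
recall, precision, nnr = search_metrics(19224, 216, 250)
```

    Recall here works out to 86.4% (reported rounded as 87%), precision to about 1.12%, and the number needed to read to 89, illustrating the trade-off the review describes: high recall bought at the cost of screening roughly ninety records per relevant hit.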

  1. PRESENTING SEARCH RESULT WITH REDUCED UNWANTED WEB ADDRESSES USING FUZZY BASED APPROACH

    Directory of Open Access Journals (Sweden)

    Nancy Jasmine Goldena

    2017-07-01

    Full Text Available Big Data is now one of the most discussed research subjects. Over the years, with the expansion of the internet and of storage capacity, vast swaths of data have become available to searchers. About a decade ago, when content was searched, the limited amount of material often meant an accurate set of results. Nowadays, however, much of the retrieved data is vague and sometimes does not even pertain to the area of search it was intended for. Hence, a novel approach is presented here that performs data cleaning using a simple but effective fuzzy rule to weed out data that will not produce accurate results.

  2. Novel search space updating heuristics-based genetic algorithm for optimizing medium-scale airline crew pairing problems

    Directory of Open Access Journals (Sweden)

    Nihan Cetin Demirel

    2017-01-01

    Full Text Available This study examines the crew pairing problem, which is one of the most comprehensive problems encountered in airline planning, to generate a set of crew pairings that has minimal cost, covers all flight legs and fulfils legal criteria. In addition, this study examines current research related to crew pairing optimization. The contribution of this study is developing heuristics based on an improved dynamic-based genetic algorithm, a deadhead-minimizing pairing search and a partial solution approach (less-costly alternative pairing search). This study proposes genetic algorithm variants and a memetic algorithm approach. In addition, computational results based on real-world data from a local airline company in Turkey are presented. The results demonstrate that the proposed approach can successfully handle medium sets of crew pairings and generate higher-quality solutions than previous methods.

  3. Aspiration Levels and R&D Search in Young Technology-Based Firms

    DEFF Research Database (Denmark)

    Candi, Marina; Saemundsson, Rognvaldur; Sigurjonsson, Olaf

    Decisions about allocation of resources to research and development (R&D), referred to here as R&D search, are critically important for competitive advantage. Using panel data collected yearly over a period of nine years, this paper re-visits existing theories of backward-looking and forward-look...

  4. A Distributed Election and Spanning Tree Algorithm Based on Depth First Search Traversals

    DEFF Research Database (Denmark)

    Skyum, Sven

    The existence of an effective distributed traversal algorithm for a class of graphs has proven useful in connection with election problems for those classes. In this paper we show how a general traversal algorithm, such as depth first search, can be turned into an effective election algorithm using...... modular techniques. The presented method also constructs a spanning tree for the graph....

  5. Model-Based Systems Engineering in the Execution of Search and Rescue Operations

    Science.gov (United States)

    2015-09-01

    [Figure-list residue: "SAR DRM Activity Simulation Gantt Chart" (p. 82); Figure 19, "SAR DRM Action Utilization Simulation Output".] ...characteristics of the search object. Simulation software, such as the U.S. Coast Guard's SAROPS, calculates Monte Carlo probability distributions for... The construct tends to be somewhat software-centric, so the International Council on Systems Engineering decided in 2001 to customize UML for systems...

  6. A Slicing Tree Representation and QCP-Model-Based Heuristic Algorithm for the Unequal-Area Block Facility Layout Problem

    Directory of Open Access Journals (Sweden)

    Mei-Shiang Chang

    2013-01-01

    Full Text Available The facility layout problem is a typical combinatorial optimization problem. In this research, a slicing tree representation and a quadratically constrained program model are combined with harmony search to develop a heuristic method for solving the unequal-area block layout problem. Because of the characteristics of the slicing tree structure, we propose a regional structure of harmony memory to memorize facility layout solutions and two kinds of harmony improvisation to enhance the global search ability of the proposed heuristic method. The proposed harmony-search-based heuristic is tested on 10 well-known unequal-area facility layout problems from the literature. The results are compared with the previously best-known solutions obtained by genetic algorithm, tabu search, and ant system as well as exact methods. For problems O7, O9, vC10Ra, M11*, and Nug12, new best solutions are found. For other problems, the proposed approach can find solutions that are very similar to previous best-known solutions.

  7. Multi-Agent Based Beam Search for Real-Time Production Scheduling and Control Method, Software and Industrial Application

    CERN Document Server

    Kang, Shu Gang

    2013-01-01

    The Multi-Agent Based Beam Search (MABBS) method systematically integrates four major requirements of manufacturing production - representation capability, solution quality, computation efficiency, and implementation difficulty - within a unified framework to deal with the many challenges of complex real-world production planning and scheduling problems. Multi-agent Based Beam Search for Real-time Production Scheduling and Control introduces this method, together with its software implementation and industrial applications.  This book connects academic research with industrial practice, and develops a practical solution to production planning and scheduling problems. To simplify implementation, a reusable software platform is developed to build the MABBS method into a generic computation engine.  This engine is integrated with a script language, called the Embedded Extensible Application Script Language (EXASL), to provide a flexible and straightforward approach to representing complex real-world problems. ...

  8. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    Science.gov (United States)

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance and intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.

  9. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Science.gov (United States)

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize chaotic maps such as the Logistic map to generate pseudo-random numbers mapped as the design variables for global optimization. Many existing studies have indicated that COA can more easily escape from local minima than classical stochastic optimization algorithms. This paper reveals the inherent mechanism of the high efficiency and superior performance of COA from a new perspective: both the probability distribution property and the search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performances of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents are compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of the chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COA. To achieve high efficiency of COA, it is recommended to adopt an appropriate chaotic map generating chaotic sequences with a uniform or nearly uniform probability distribution and a large Lyapunov exponent.
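    The two quantities the comparison rests on, the distribution of the chaotic sequence and its Lyapunov exponent, can be illustrated with the Logistic map at r = 4, whose exact Lyapunov exponent is ln 2. The sketch below is a generic construction, not the paper's hybrid chaos-BFGS code; the seeds and sample sizes are arbitrary.

```python
import math

def logistic_sequence(x0, n, r=4.0, burn_in=100):
    """Chaotic sequence from the Logistic map x_{k+1} = r*x_k*(1-x_k)."""
    x = x0
    for _ in range(burn_in):  # discard the transient
        x = r * x * (1.0 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def lyapunov_logistic(x0, n=100000, r=4.0):
    """Estimate the Lyapunov exponent as the average of log|f'(x_k)|,
    where f'(x) = r*(1 - 2x) for the Logistic map."""
    x = x0
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n

lam = lyapunov_logistic(0.1234)  # should approach ln 2 ≈ 0.693
```

    The Logistic map at r = 4 is in fact a cautionary example for the paper's recommendation: its sequence has the non-uniform arcsine-shaped density 1/(π√(x(1-x))) despite its large Lyapunov exponent, which is why maps with flatter invariant densities are preferred.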

  10. [Evidence-based medicine in surgical practice - locating clinical studies and systematic reviews by searching the Medline database].

    Science.gov (United States)

    Grummich, K; Jensen, K; Obst, O; Seiler, C M; Diener, M K

    2014-12-01

    Every day approximately 75 clinical trials and 11 systematic reviews are published in the health-care intervention and medical field. Due to this growing number of publications it is a challenge for every practicing clinician to keep track of the latest research. The implementation of new and effective diagnostic and therapeutic interventions into daily clinical routine may thus be delayed. Conversely, ineffective or even harmful interventions might still be in use. Decision-making in evidence-based medicine (EBM) requires consideration of the most recent high-quality evidence. Randomised controlled trials (RCTs) are regarded as the "gold standard" to prove the efficacy of surgical interventions in patient-oriented research. Systematic reviews combine results from RCTs by summarising single RCTs which answer a particular clinical question. Some basic knowledge of systematic literature searching is required and helpful for detecting relevant publications. This article shows various possibilities for locating clinical studies and systematic reviews in the Medline database on the basis of illustrative step-by-step instructions. Results and conclusion: Depending on the aim and topic of the literature search, the time required for the task may vary. In routine practice, a systematic literature search is unrealistic in most cases. Clinicians in need of a quick update of current evidence on a certain clinical topic may make use of up-to-date systematic reviews. During a systematic literature search, different approaches and strategies might be necessary.

  11. Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews.

    Science.gov (United States)

    Rathbone, John; Albarqouni, Loai; Bakhit, Mina; Beller, Elaine; Byambasuren, Oyungerel; Hoffmann, Tammy; Scott, Anna Mae; Glasziou, Paul

    2017-11-25

    Citation screening for scoping searches and rapid review is time-consuming and inefficient, often requiring days or sometimes months to complete. We examined the reliability of PICo-based title-only screening using keyword searches based on the PICo elements (Participants, Interventions and Comparators), but not the Outcomes. A convenience sample of 10 datasets, derived from the literature searches of completed systematic reviews, was used to test PICo-based title-only screening. Search terms for screening were generated from the inclusion criteria of each review, specifically the PICo elements (Participants, Interventions and Comparators). Synonyms for the PICo terms were sought, including alternatives for clinical conditions, trade names of generic drugs and abbreviations for clinical conditions, interventions and comparators. The MeSH database, Wikipedia, Google searches and online thesauri were used to assist in generating terms. Title-only screening was performed by five reviewers independently in Endnote X7 reference management software using the Boolean OR operator. Outcome measures were recall of included studies and the reduction in screening effort. Recall is the proportion of included studies retrieved using PICo title-only screening out of the total number of included studies in the original reviews. The percentage reduction in screening effort is the proportion of records not needing screening because the method eliminates them from the screen set. Across the 10 reviews, the reduction in screening effort ranged from 11 to 78% with a median reduction of 53%. In nine systematic reviews, the recall of included studies was 100%. In one review (oxygen therapy), four of five reviewers missed the same included study (median recall 67%). A post hoc analysis was performed on the dataset with the lowest reduction in screening effort (11%), and it was rescreened using only the intervention and comparator keywords and omitting keywords for participants. The reduction in
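    The screening mechanics are straightforward to emulate outside a reference manager. The sketch below applies a Boolean-OR keyword screen to a tiny invented title set and recomputes recall and the reduction in screening effort as defined in the record above; all titles and keywords are illustrative assumptions.

```python
def pico_title_screen(titles, keywords):
    """Keep any title containing at least one PICo keyword (Boolean OR),
    mirroring the keyword screen the review ran in a reference manager."""
    kws = [k.lower() for k in keywords]
    return [t for t in titles if any(k in t.lower() for k in kws)]

# Toy corpus: two relevant trial titles and two irrelevant records.
titles = [
    "Metformin versus placebo in type 2 diabetes: a randomised trial",
    "Exercise therapy compared with usual care for low back pain",
    "A qualitative study of nurses' shift handover practices",
    "Annual report of the hospital ethics committee",
]
relevant = set(titles[:2])
kept = pico_title_screen(titles, ["metformin", "placebo",
                                  "exercise", "usual care"])

# Metrics as defined in the abstract.
recall = len(relevant & set(kept)) / len(relevant)
reduction = 1 - len(kept) / len(titles)  # share of records never screened
```

    On this toy set the screen keeps both relevant titles (recall 100%) while halving the screening load, which is the trade-off the study quantifies at scale; the oxygen-therapy result shows how a single missing synonym can cost recall.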

  12. Expediting citation screening using PICo-based title-only screening for identifying studies in scoping searches and rapid reviews

    Directory of Open Access Journals (Sweden)

    John Rathbone

    2017-11-01

    Full Text Available Abstract Background Citation screening for scoping searches and rapid reviews is time-consuming and inefficient, often requiring days or sometimes months to complete. We examined the reliability of PICo-based title-only screening using keyword searches based on the PICo elements—Participants, Interventions, and Comparators, but not the Outcomes. Methods A convenience sample of 10 datasets, derived from the literature searches of completed systematic reviews, was used to test PICo-based title-only screening. Search terms for screening were generated from the inclusion criteria of each review, specifically the PICo elements—Participants, Interventions and Comparators. Synonyms for the PICo terms were sought, including alternatives for clinical conditions, trade names of generic drugs and abbreviations for clinical conditions, interventions and comparators. The MeSH database, Wikipedia, Google searches and online thesauri were used to assist in generating terms. Title-only screening was performed by five reviewers independently in EndNote X7 reference management software using the OR Boolean operator. Outcome measures were the recall of included studies and the reduction in screening effort. Recall is the proportion of included studies retrieved using PICo title-only screening out of the total number of included studies in the original reviews. The percentage reduction in screening effort is the proportion of records not needing screening because the method eliminates them from the screen set. Results Across the 10 reviews, the reduction in screening effort ranged from 11 to 78%, with a median reduction of 53%. In nine systematic reviews, the recall of included studies was 100%. In one review (oxygen therapy), four of five reviewers missed the same included study (median recall 67%). A post hoc analysis was performed on the dataset with the lowest reduction in screening effort (11%), and it was rescreened using only the intervention and comparator keywords, omitting the keywords for participants.
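The two outcome measures above (recall and reduction in screening effort) are simple to compute. A minimal sketch, with illustrative titles and keywords that are not from the study: a title is retained if any PICo keyword matches, mirroring the OR Boolean screening done in EndNote.

```python
# Hypothetical sketch of PICo-based title-only screening (titles and keywords
# are illustrative, not from the study): a title is retained if ANY keyword
# matches, case-insensitively, mirroring OR Boolean screening.

def screen_titles(titles, keywords):
    """Return the subset of titles containing at least one keyword."""
    kws = [k.lower() for k in keywords]
    return [t for t in titles if any(k in t.lower() for k in kws)]

def recall(retained, included):
    """Proportion of truly included studies that survive screening."""
    hits = sum(1 for t in included if t in retained)
    return hits / len(included)

def screening_effort_reduction(retained, total):
    """Proportion of records eliminated, i.e. not needing manual screening."""
    return 1 - len(retained) / total

titles = [
    "Oxygen therapy for acute asthma in adults",
    "Statins for primary prevention",
    "A trial of home oxygen in COPD",
    "Gardening and wellbeing",
]
included = ["Oxygen therapy for acute asthma in adults",
            "A trial of home oxygen in COPD"]
retained = screen_titles(titles, ["oxygen", "asthma"])

print(recall(retained, included))                         # 1.0
print(screening_effort_reduction(retained, len(titles)))  # 0.5
```

With these toy data both included studies match a keyword (recall 1.0) and half the records never need manual screening.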

  13. Application of GIS-based models for delineating the UAV flight region to support Search and Rescue activities

    Science.gov (United States)

    Jurecka, Miroslawa; Niedzielski, Tomasz

    2017-04-01

    The objective of the approach presented in this paper is to demonstrate the potential of combining two GIS-based models - a mobility model and a ring model - for delineating the region above which an Unmanned Aerial Vehicle (UAV) should fly to support Search and Rescue (SAR) activities. The procedure is based on two concepts, both describing a possible distance/path that a lost person could travel from the initial planning point (being either the point last seen or the point last known). The first approach (the ring model) takes into account the crow's-flight distance traveled by a lost person and its probability distribution. The second concept (the mobility model) is based on the estimated travel speed and the associated features of the geographical environment of the search area. In contrast to the ring model, which covers the global (hence more general) SAR perspective, the mobility model represents a regional viewpoint by taking local impedance into consideration. The two models working together can serve well as a starting point for UAV flight planning to strengthen SAR procedures. We present the method of combining the two above-mentioned models in order to delineate the UAV's flight region and increase the Probability of Success for future SAR missions. The procedure is part of a larger Search and Rescue (SAR) system which is being developed at the University of Wrocław, Poland (research project no. IP2014 032773, financed by the Ministry of Science and Higher Education of Poland). The mobility and ring models have been applied to the Polish territory, and they act in concert to provide the UAV operator with the optimal search region. This is attained in real time so that the UAV-based SAR mission can be initiated quickly.

  14. Rapid Optimization of Multiple Isocenters Using Computer Search for Linear Accelerator-based Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Suh, Tae Suk; Yoon, Sei Chul; Kim, Moon Chan; Bahk, Yong Whee; Shinn, Kyung Sub; Park, Charn Il; Ha, Sung Whan

    1994-01-01

    The purpose of this paper is to develop an efficient method for the quick determination of multiple-isocenter plans that provide an optimal dose distribution in stereotactic radiosurgery. A spherical dose model was developed through a fit to the exact dose data calculated in an 18 cm diameter spherical head phantom. It computes the dose quickly for each spherical part and is useful for estimating the dose distribution for multiple isocenters. An automatic computer search algorithm was developed using the relationship between isocenter moves and changes in dose shape, and was combined with the spherical dose model to determine isocenter separations and collimator sizes quickly and automatically. The spherical dose model shows an isodose distribution comparable to the exact dose data and permits rapid calculation of 3-D isodoses. The computer search can provide reasonable isocenter settings more quickly than trial-and-error planning, while producing a steep dose gradient around the target boundary. A spherical dose model can thus be used, together with an automatic computer search, for the quick determination of multiple-isocenter plans. Our guideline is useful for determining the initial multiple-isocenter plans.

  15. SemantGeo: Powering Ecological and Environment Data Discovery and Search with Standards-Based Geospatial Reasoning

    Science.gov (United States)

    Seyed, P.; Ashby, B.; Khan, I.; Patton, E. W.; McGuinness, D. L.

    2013-12-01

    Recent efforts to create and leverage standards for geospatial data specification and inference include the GeoSPARQL standard, geospatial OWL ontologies (e.g., GAZ, Geonames), and RDF triple stores that support GeoSPARQL (e.g., AllegroGraph, Parliament) and use RDF instance data for geospatial features of interest. However, there remains a gap in how best to fuse software engineering best practices and GeoSPARQL within semantic web applications to enable flexible search driven by geospatial reasoning. In this abstract we introduce the SemantGeo module for the SemantEco framework, which helps fill this gap, enabling scientists to find data using geospatial semantics and reasoning. SemantGeo provides multiple types of geospatial reasoning for SemantEco modules. The server-side implementation uses the Parliament SPARQL endpoint accessed via a Tomcat servlet. SemantGeo uses the Google Maps API for user-specified polygon construction and JsTree for providing containment and categorical hierarchies for search. SemantGeo uses GeoSPARQL for spatial reasoning alone and in concert with RDFS/OWL reasoning capabilities to determine, e.g., which geofeatures are within, partially overlap with, or lie within a certain distance from a given polygon. We also leverage qualitative relationships defined by the Gazetteer ontology that are composites of spatial relationships as well as administrative designations or geophysical phenomena. We provide multiple mechanisms for exploring data, such as polygon (map-based) and named-feature (hierarchy-based) selection, that enable flexible search constraints using boolean combinations of selections. JsTree-based hierarchical search facets present named features and include a 'part of' hierarchy (e.g., measurement-site-01, Lake George, Adirondack Region, NY State) and type hierarchies (e.g., nodes in the hierarchy for WaterBody, Park, MeasurementSite), depending on the 'axis of choice' option selected. Using GeoSPARQL and aforementioned ontology

  16. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  17. Analysis of Taboo Words and Their Classification in Eminem's Song Lyrics on the Album "The Marshall Mathers LP"

    Directory of Open Access Journals (Sweden)

    Laily Nur Affini

    2017-04-01

    Full Text Available This research is intended for readers in the field of linguistics and aims to identify taboo words and their classifications on the basis of a particular theory. The taboo words analysed appear in Eminem's album The Marshall Mathers LP. The analysis employs Timothy Jay's theory, in which taboo words are divided into seven classifications: cursing, profanity, blasphemy, obscenity, sexual harassment, vulgar language, and name-calling and insults. The data were taken from two sources: the primary data are the lyrics themselves, and the secondary data are taken from books, articles and dictionaries. The result of the analysis reveals the classifications of the taboo words, presented in a table.

  18. A systematic search and narrative review of radiographic definitions of hand osteoarthritis in population-based studies.

    Science.gov (United States)

    Marshall, M; Dziedzic, K S; van der Windt, D A; Hay, E M

    2008-02-01

    Currently there is no agreed "gold standard" definition of radiographic hand osteoarthritis (RHOA) for use in epidemiological studies. We therefore undertook a systematic search and narrative review of community-based epidemiological studies of hand osteoarthritis (OA) to identify (1) grading systems used, (2) definitions of radiographic OA for individual joints and (3) definitions of overall RHOA. The following electronic databases were searched: Medline, Embase, Science Citation Index and Ageline (inception to Dec 2006). The search strategy combined terms for "hand" and specific joint sites, OA and radiography. Inclusion and exclusion criteria were applied. Data were extracted from each paper covering: hand joints studied, grading system used, and definitions applied for OA at individual joints and for overall RHOA. Titles and abstracts of 829 publications were reviewed and the full texts of 399 papers were obtained. One hundred fifty-two met the inclusion criteria and 24 additional papers were identified from screening references. Kellgren and Lawrence (K&L) was the most frequently applied grading system, used in 80% (n=141) of studies. Of 71 studies defining OA at the individual joint level, 69 (97%) used a definition of K&L grade ≥ 2. Only 53 publications defined overall RHOA, using 21 different definitions based on five grading systems. The K&L scheme remains the most frequently used grading system. There is consistency in defining OA in a single hand joint as K&L grade ≥ 2. However, there are substantial variations in the definitions of overall RHOA in epidemiological studies.
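The consensus case definition above is easy to operationalise. A minimal sketch, where the overall-RHOA rule ("OA in at least one joint") is only one of the 21 definitions the review found and the joint labels are illustrative:

```python
# Illustrative sketch (not taken from the review): applying the most common
# case definition -- a joint is radiographically osteoarthritic if its
# Kellgren & Lawrence (K&L) grade is 2 or higher.

KL_OA_THRESHOLD = 2

def joint_has_oa(kl_grade: int) -> bool:
    """K&L grades run 0-4; grade >= 2 is the usual cut-off for OA."""
    if not 0 <= kl_grade <= 4:
        raise ValueError("K&L grade must be 0-4")
    return kl_grade >= KL_OA_THRESHOLD

def hand_has_rhoa(grades_by_joint: dict) -> bool:
    """One hypothetical overall-RHOA definition: OA in at least one joint."""
    return any(joint_has_oa(g) for g in grades_by_joint.values())

grades = {"DIP2": 3, "PIP3": 1, "CMC1": 0}
print(joint_has_oa(grades["DIP2"]))  # True
print(hand_has_rhoa(grades))         # True
```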

  19. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding.

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. A continued search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  20. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Directory of Open Access Journals (Sweden)

    Yong-Bi Fu

    2017-07-01

    Full Text Available Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. In the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. A continued search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  1. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

    Full Text Available The major challenge with the fractal image/video coding technique is that it requires a long encoding time; how to reduce the encoding time therefore remains an open research question in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the process of encoding. The objective of the proposed work is to develop an approach for video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e. the displacement of pixels, and WFA is used for the coding as it behaves like fractal coding (FC). WFA represents an image (frame) or motion-compensated prediction error based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on a combination of rectangular and hexagonal search patterns and is compared with the existing New Three-Step Search (NTSS), Three-Step Search (TSS), and Efficient Three-Step Search (ETSS) block matching estimation algorithms. The performance of the proposed MTSS block matching algorithm is evaluated on the basis of the performance evaluation parameters, i.e. mean absolute difference (MAD) and average search points required per frame. The mean absolute difference (MAD) distortion function is used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS and WFA, MTSS and FC, and plain FC (applied on every frame), are compared with each other. The experiments are carried out on standard uncompressed video databases, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice, etc.
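The block distortion measure named in the abstract is straightforward. A minimal sketch (the frame layout and helper names are illustrative, not the paper's implementation): MAD between a block in the current frame and a candidate block displaced by a motion vector in the reference frame.

```python
# Sketch of the mean absolute difference (MAD) block distortion measure used
# by three-step-search style block matching. Frames are plain 2-D lists of
# pixel intensities; names and data are illustrative.

def mad(cur, ref, bx, by, dx, dy, n):
    """MAD between the n-by-n block at (bx, by) in `cur` and the block
    displaced by motion vector (dx, dy) in `ref`."""
    total = 0
    for y in range(n):
        for x in range(n):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total / (n * n)

cur = [[10, 10], [10, 10]]
ref = [[10, 10, 0], [10, 10, 0], [0, 0, 0]]
print(mad(cur, ref, 0, 0, 0, 0, 2))  # 0.0 -> zero displacement matches exactly
print(mad(cur, ref, 0, 0, 1, 1, 2))  # 7.5 -> larger MAD for a shifted candidate
```

A three-step search would evaluate this MAD at a shrinking set of candidate displacements and keep the minimiser as the motion vector.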

  2. GeoSearch: a new virtual globe application for the submission, storage, and sharing of point-based ecological data

    Science.gov (United States)

    Cardille, J. A.; Gonzales, R.; Parrott, L.; Bai, J.

    2009-12-01

    of ecological measurements in forests; we expect to extend the approach to a Quebec lake research network encompassing decades of lake measurements. In this session, we will describe and present four related components of the new system: GeoSearch’s globe-based searching and display of scientific data; prefuse-based visualization of social connections among members of a scientific research network; geolocation of research projects using Google Spreadsheets, KML, and Google Earth/Maps; and collaborative construction of a geolocated database of research articles. Each component is designed to have applications for scientists themselves as well as the general public. Although each implementation is in its infancy, we believe they could be useful to other researcher networks.

  3. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, in which prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that assessments carried out on these systems report low precision and recall measures, and in some systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method which solves these problems.
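A hedged sketch of the ant-colony idea behind such a recommender (the function names, deposit/evaporation rule and data are assumptions, not taken from the paper): each time a programmer's search ends at a code snippet, pheromone is deposited on that query-to-snippet trail; trails evaporate over time, so recommendations reflect recent collective behaviour.

```python
# Generic ant-colony style recommendation sketch (assumed rule, not the
# CodeRAnts algorithm itself): reinforce trails on successful searches,
# evaporate all trails periodically, recommend by pheromone level.

def reinforce(pheromone, query, snippet, deposit=1.0):
    pheromone[(query, snippet)] = pheromone.get((query, snippet), 0.0) + deposit

def evaporate(pheromone, rho=0.1):
    for trail in pheromone:
        pheromone[trail] *= (1 - rho)

def recommend(pheromone, query, k=1):
    """Top-k snippets for a query, ranked by accumulated pheromone."""
    trails = [(s, p) for (q, s), p in pheromone.items() if q == query]
    return [s for s, _ in sorted(trails, key=lambda t: -t[1])[:k]]

pher = {}
reinforce(pher, "parse json", "snippet_A")
reinforce(pher, "parse json", "snippet_A")
reinforce(pher, "parse json", "snippet_B")
evaporate(pher)
print(recommend(pher, "parse json"))  # ['snippet_A']
```

The evaporation step is what lets the system "learn from the collaboration of programmers": stale trails fade unless the community keeps reinforcing them.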

  4. The Rise of Market-Based Job Search Institutions and Job Niches for Low-Skilled Chinese Immigrants

    Directory of Open Access Journals (Sweden)

    Zai Liang

    2018-01-01

    Full Text Available Increasingly, market-based job search institutions, such as employment agencies and ethnic media, are playing a more important role than migrant networks for low-skilled Chinese immigrants searching for jobs. We argue that two major factors are driving this trend: the diversification of Chinese immigrants’ provinces of origin, and the spatial diffusion of businesses in the United States owned by Chinese immigrants. We also identify some new niche jobs for Chinese immigrants and assess the extent to which this development is driven by China’s growing prosperity. We use data from multiple sources, including a survey of employment agencies in Manhattan’s Chinatown, job advertisements in Chinese-language newspapers, and information on Chinese immigrant hometown associations in the United States.

  5. A Nonlinearity Mitigation Method for a Broadband RF Front-End in a Sensor Based on Best Delay Searching.

    Science.gov (United States)

    Zhao, Wen; Ma, Hong; Zhang, Hua; Jin, Jiang; Dai, Gang; Hu, Lin

    2017-09-28

    The cognitive radio wireless sensor network (CR-WSN) is attracting more and more attention for its capacity to automatically extract broadband instantaneous radio environment information. Obtaining sufficient linearity and spurious-free dynamic range (SFDR) is an important prerequisite for guaranteeing sensing performance, which, however, usually suffers from the nonlinear distortion introduced by the broadband radio frequency (RF) front-end in the sensor node. Moreover, unlike other existing methods, the joint effect of non-constant group delay distortion and nonlinear distortion is discussed, and its corresponding solution is provided in this paper. After that, a nonlinearity mitigation architecture based on best delay searching is proposed. Finally, verification experiments, both on simulated signals and on signals from real-world measurements, are conducted and discussed. The achieved results demonstrate that with best delay searching, nonlinear distortion can be alleviated significantly and, in this way, spectrum sensing performance becomes more reliable and accurate.

  6. A Nonlinearity Mitigation Method for a Broadband RF Front-End in a Sensor Based on Best Delay Searching

    Directory of Open Access Journals (Sweden)

    Wen Zhao

    2017-09-01

    Full Text Available The cognitive radio wireless sensor network (CR-WSN) is attracting more and more attention for its capacity to automatically extract broadband instantaneous radio environment information. Obtaining sufficient linearity and spurious-free dynamic range (SFDR) is an important prerequisite for guaranteeing sensing performance, which, however, usually suffers from the nonlinear distortion introduced by the broadband radio frequency (RF) front-end in the sensor node. Moreover, unlike other existing methods, the joint effect of non-constant group delay distortion and nonlinear distortion is discussed, and its corresponding solution is provided in this paper. After that, a nonlinearity mitigation architecture based on best delay searching is proposed. Finally, verification experiments, both on simulated signals and on signals from real-world measurements, are conducted and discussed. The achieved results demonstrate that with best delay searching, nonlinear distortion can be alleviated significantly and, in this way, spectrum sensing performance becomes more reliable and accurate.
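A minimal sketch of the "best delay searching" idea (the alignment criterion and all names are assumptions, not the paper's architecture): slide one signal against the other and keep the integer delay that minimises the residual energy, compensating for group-delay mismatch before a distortion term is estimated and subtracted.

```python
# Assumed best-delay-search sketch: exhaustive search over integer delays,
# minimising the energy of the residual between the received signal and a
# delayed reference.

def residual_energy(x, y, d):
    """Energy of x[n] - y[n - d] over the overlapping samples."""
    n = len(x)
    pairs = [(x[i], y[i - d]) for i in range(n) if 0 <= i - d < n]
    return sum((a - b) ** 2 for a, b in pairs) / len(pairs)

def best_delay(x, y, max_delay):
    return min(range(-max_delay, max_delay + 1),
               key=lambda d: residual_energy(x, y, d))

ref = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
rx  = [0.5, 0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0]  # ref delayed by one sample
print(best_delay(rx, ref, 3))  # 1
```

In practice the delay grid would be finer than one sample and the residual would be computed against a modelled nonlinear term, but the search structure is the same.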

  7. Cooperative Search and Rescue with Artificial Fishes Based on Fish-Swarm Algorithm for Underwater Wireless Sensor Networks

    Science.gov (United States)

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The movement of the mobile nodes in the mission space is similar to the behavior of a fish swarm in water, so we treat each mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm based algorithm is designed that requires only local information at each fish node and maximizes the joint detection probability of distress signals. The formation of the nodes is also considered in the searching control approach and is optimized by the fish-swarm algorithm. Simulation results cover two schemes, a preset route and random walks, and show that the control scheme has adaptive and effective properties. PMID:24741341

  8. Quasi-steady State Reduction of Molecular Motor-Based Models of Directed Intermittent Search

    KAUST Repository

    Newby, Jay M.

    2010-02-19

    We present a quasi-steady state reduction of a linear reaction-hyperbolic master equation describing the directed intermittent search for a hidden target by a motor-driven particle moving on a one-dimensional filament track. The particle is injected at one end of the track and randomly switches between stationary search phases and mobile nonsearch phases that are biased in the anterograde direction. There is a finite possibility that the particle fails to find the target due to an absorbing boundary at the other end of the track. Such a scenario is exemplified by the motor-driven transport of vesicular cargo to synaptic targets located on the axon or dendrites of a neuron. The reduced model is described by a scalar Fokker-Planck (FP) equation, which has an additional inhomogeneous decay term that takes into account absorption by the target. The FP equation is used to compute the probability of finding the hidden target (hitting probability) and the corresponding conditional mean first passage time (MFPT) in terms of the effective drift velocity V, diffusivity D, and target absorption rate λ of the random search. The quasi-steady state reduction determines V, D, and λ in terms of the various biophysical parameters of the underlying motor transport model. We first apply our analysis to a simple 3-state model and show that our quasi-steady state reduction yields results that are in excellent agreement with Monte Carlo simulations of the full system under physiologically reasonable conditions. We then consider a more complex multiple motor model of bidirectional transport, in which opposing motors compete in a "tug-of-war", and use this to explore how ATP concentration might regulate the delivery of cargo to synaptic targets. © 2010 Society for Mathematical Biology.

  9. Search of protein kinase CK2 inhibitors based on purine-2,6-diones derivatives

    Directory of Open Access Journals (Sweden)

    M. V. Protopopov

    2017-10-01

    Full Text Available This work is aimed at the search for protein kinase CK2 inhibitors among purine-2,6-dione derivatives by molecular docking and biochemical tests. It was found that the most active compound, 8-[2-[(3-methoxyphenyl)methylidene]hydrazine-1-yl]-3-methyl-7-(3-phenoxypropyl)-2,3,6,7-tetrahydro-1H-purine-2,6-dione, inhibited protein kinase CK2 with an IC50 value of 8.5 µM in an in vitro kinase assay. The biochemical tests and computer simulation results made it possible to determine the binding mode of the most active compound and the structure-activity relationships.

  10. An Analysis of the Applicability of Federal Law Regarding Hash-Based Searches of Digital Media

    Science.gov (United States)

    2014-06-01

    trash. In addition, the trash is placed outside so that a third party can collect it. As such, the Court concluded that society would not accept an...media with reasonable suspicion. Howard Cotterman was stopped at the U.S.–Mexico border after a search in a database returned a hit for a fifteen-year...border in a suburb of San Diego, CA. Taylor usually makes a trip to Mexico about once a week. He spends a couple of days there and returns to the United

  11. Template-based searches for gravitational waves: efficient lattice covering of flat parameter spaces

    International Nuclear Information System (INIS)

    Prix, Reinhard

    2007-01-01

    The construction of optimal template banks for matched-filtering searches is an example of the sphere covering problem. For parameter spaces with constant-coefficient metrics a (near-) optimal template bank is achieved by the A*n lattice, which is the best lattice covering in dimensions n ≤ 5, and is close to the best covering known for dimensions n ≤ 16. Generally, this provides a substantially more efficient covering than the simpler hyper-cubic lattice. We present an algorithm for generating lattice template banks for constant-coefficient metrics and we illustrate its implementation by generating A*n template banks in n = 2, 3, 4 dimensions.
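In two dimensions the A*n lattice is a hexagonal lattice, which makes the covering idea easy to sketch. A hedged illustration (the normalisation below is a standard geometric fact, not taken from the paper): a hexagonal lattice with nearest-neighbour spacing a has covering radius a/sqrt(3), so choosing a = sqrt(3)·R guarantees every point of a flat 2-D parameter space lies within mismatch radius R of some template.

```python
import math

# Illustrative hexagonal (2-D A*n) template bank over a rectangular parameter
# region; the scaling a = sqrt(3) * R makes the lattice covering radius equal
# to the desired maximal-mismatch radius R.

def hexagonal_bank(width, height, R):
    a = math.sqrt(3) * R              # lattice constant for covering radius R
    dx, dy = a, a * math.sqrt(3) / 2  # column spacing, row spacing
    templates, row, y = [], 0, 0.0
    while y <= height:
        x = (a / 2) if row % 2 else 0.0  # offset every other row
        while x <= width:
            templates.append((x, y))
            x += dx
        y += dy
        row += 1
    return templates

bank = hexagonal_bank(10.0, 10.0, 0.5)
print(len(bank) > 0)  # True: bank populated
```

Every interior point of the region is then within R = 0.5 of some template, with markedly fewer templates than a square grid of the same covering radius would need.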

  12. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

    Directory of Open Access Journals (Sweden)

    Yukai Yao

    2015-01-01

    Full Text Available We propose an optimized support vector machine classifier, named PMSVM, in which system normalization, PCA, and multilevel grid search methods are comprehensively considered for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, specificity, precision, the ROC curve, and other measures are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has relatively better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
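A hedged sketch of the "multilevel grid search" idea (the SVM itself is omitted; the quadratic objective below merely stands in for cross-validated error over an SVM parameter grid such as (C, gamma)): evaluate a coarse grid first, then repeatedly refine the grid around the best cell.

```python
# Coarse-to-fine (multilevel) grid search sketch; the objective and window
# sizes are illustrative, not PMSVM's actual configuration.

def multilevel_grid_search(objective, lo, hi, levels=4, points=5):
    """Minimise objective(x, y) by repeated grid refinement."""
    (x_lo, y_lo), (x_hi, y_hi) = lo, hi
    best = None
    for _ in range(levels):
        xs = [x_lo + i * (x_hi - x_lo) / (points - 1) for i in range(points)]
        ys = [y_lo + i * (y_hi - y_lo) / (points - 1) for i in range(points)]
        best = min((objective(x, y), x, y) for x in xs for y in ys)
        _, bx, by = best
        # shrink the window to one grid cell around the current best point
        wx, wy = (x_hi - x_lo) / (points - 1), (y_hi - y_lo) / (points - 1)
        x_lo, x_hi = bx - wx, bx + wx
        y_lo, y_hi = by - wy, by + wy
    return best

f = lambda x, y: (x - 2) ** 2 + (y + 1) ** 2  # stand-in objective, optimum (2, -1)
val, x, y = multilevel_grid_search(f, (-10, -10), (10, 10))
print(abs(x - 2) < 0.5 and abs(y + 1) < 0.5)  # True
```

Each level costs only points² evaluations, so four levels here examine 100 candidate points instead of the thousands a single fine grid over the same range would require.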

  13. A novel field search and rescue system based on SIM card location

    Science.gov (United States)

    Zhang, Huihui; Guo, Shutao; Cui, Dejing

    2017-06-01

    Nowadays, the rapid development of outdoor sports and adventure activities has led to an increase in the frequency of missing-person accidents. On the other hand, new technologies make it much more convenient and efficient for criminals to escape. We have therefore developed a field search and rescue system, targeted at long-distance pursuit, which utilizes RSSI ranging and a Kalman filtering algorithm to realize remote positioning and dynamic supervision and management with only a mobile phone containing a SIM card, without any additional terminal equipment.
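The two ingredients named in the abstract can be sketched briefly. A hedged illustration (the path-loss constants and noise figures are assumptions, not the system's calibration): RSSI is converted to a distance estimate with the log-distance path-loss model, and a scalar Kalman filter smooths the noisy estimates over time.

```python
# Assumed log-distance path-loss model: RSSI = RSSI(1 m) - 10 * n * log10(d),
# followed by a 1-D Kalman filter with a static-state model.

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance model to get distance in metres."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def kalman_smooth(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter: process noise q, measurement noise r."""
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p += q                  # predict step (state assumed static)
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with measurement z
        p *= (1 - k)
        out.append(x)
    return out

dists = [rssi_to_distance(r) for r in (-60.0, -61.0, -59.0, -60.5)]
smoothed = kalman_smooth(dists)
print(round(smoothed[-1], 1))  # smoothed distance estimate, close to 10 m
```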

  14. Random searching

    International Nuclear Information System (INIS)

    Shlesinger, Michael F

    2009-01-01

    There are a wide variety of searching problems, from molecules seeking receptor sites to predators seeking prey. The optimal search strategy can depend on constraints on time, energy, supplies or other variables. We discuss a number of cases and especially remark on the usefulness of Lévy walk search patterns when the targets of the search are scarce.
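The defining feature of a Lévy walk is its power-law step-length distribution. A minimal sketch (the exponent and cut-off are illustrative): inverse-transform sampling from p(l) ∝ l^(-mu) for l ≥ l_min, with 1 < mu ≤ 3, produces the occasional very long relocation characteristic of Lévy search patterns.

```python
import random

# Inverse-transform sampling of Levy-walk step lengths: for u uniform on (0,1),
# l = l_min * u**(-1/(mu-1)) follows the power law p(l) ~ l**(-mu), l >= l_min.

def levy_step(rng, mu=2.0, l_min=1.0):
    u = rng.random()
    return l_min * u ** (-1.0 / (mu - 1.0))

rng = random.Random(0)
steps = [levy_step(rng) for _ in range(10000)]
print(min(steps) >= 1.0)  # True: steps never fall below l_min
print(max(steps) > 50.0)  # True: the heavy tail yields rare, very long steps
```

Most steps are short, keeping the searcher local, while the rare long jumps relocate it to unexplored regions; this is why Lévy patterns pay off when targets are scarce.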

  15. Searches for the Higgs Boson at the LHC based on its couplings to Vector Bosons

    CERN Document Server

    Hackstein, C

One of the primary goals of the Large Hadron Collider (LHC) is the search for the Higgs Boson. All Higgs searches rely heavily on Monte Carlo predictions of both the signal and background processes. These simulations necessarily include models and assumptions not derived from first principles. Especially the process of hadronization and the underlying event are only partially understood and differ strongly between different generators. As a result, the predictions can be wrong for special regions of phase space. Therefore, predictions by several programs should be compared to gain an estimate of the uncertainty of the observables considered. In this work, two different Monte Carlo generators were compared in their predictions for a Higgs search in the Vector Boson Fusion (VBF) Higgs production channel with subsequent decay into W bosons that decay leptonically in turn. A significant difference in the description of both signal and background was found between the two generators. As the Monte...

  16. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

    Full Text Available In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC) algorithm and the evolutionary harmony search (HS) method. With this combination, the proposed method adopts a different sampling strategy from RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
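A hedged sketch of the harmony-search improvisation step that replaces RANSAC's pure random sampling (the rates, bounds and data are illustrative, not the paper's settings): each new candidate either recalls a value from the harmony memory, possibly with a small pitch adjustment, or draws a fresh random value.

```python
import random

# Generic harmony-search improvisation sketch: hmcr is the harmony memory
# considering rate, par the pitch-adjusting rate, bw the adjustment bandwidth.

def improvise(memory, bounds, hmcr=0.9, par=0.3, bw=0.1, rng=random):
    """Create one new candidate solution from the harmony memory."""
    new = []
    for i, (lo, hi) in enumerate(bounds):
        if rng.random() < hmcr:                  # memory consideration
            value = rng.choice(memory)[i]
            if rng.random() < par:               # pitch adjustment
                value += rng.uniform(-bw, bw)
        else:                                    # fresh random selection
            value = rng.uniform(lo, hi)
        new.append(min(max(value, lo), hi))      # clamp to the feasible range
    return new

rng = random.Random(42)
memory = [[0.2, 0.8], [0.25, 0.75]]
candidate = improvise(memory, [(0.0, 1.0), (0.0, 1.0)], rng=rng)
print(all(0.0 <= v <= 1.0 for v in candidate))  # True
```

In the RANSAC hybrid, each improvised vector would index a minimal sample of correspondences, so sampling is steered toward sets that previously produced good models.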

  17. Edge-Based Efficient Search over Encrypted Data Mobile Cloud Storage.

    Science.gov (United States)

    Guo, Yeting; Liu, Fang; Cai, Zhiping; Xiao, Nong; Zhao, Ziming

    2018-04-13

    Smart sensor-equipped mobile devices sense, collect, and process data generated by the edge network to achieve intelligent control, but such mobile devices usually have limited storage and computing resources. Mobile cloud storage provides a promising solution owing to its rich storage resources, great accessibility, and low cost. But it also brings a risk of information leakage. The encryption of sensitive data is the basic step to resist this risk. However, deploying a high-complexity encryption and decryption algorithm on mobile devices would greatly increase the burden of terminal operation and the difficulty of implementing the necessary privacy protection algorithms. In this paper, we propose ENSURE (EfficieNt and SecURE), an efficient and secure encrypted search architecture over mobile cloud storage. ENSURE is inspired by edge computing. It allows mobile devices to offload computation-intensive tasks onto the edge server to achieve high efficiency. Besides, to protect data security, it reduces the information acquisition of the untrusted cloud by hiding the relevance between query keywords and search results from the cloud. Experiments on a real data set show that ENSURE reduces computation time by 15% to 49% and energy consumption by 38% to 69% per query.

  18. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    Science.gov (United States)

    Jiang, Y.; Xing, H. L.

    2016-12-01

    Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative; their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a category of nonlinear methods, possess a very high convergence speed and a good capacity to overcome local minima, and have been well applied in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very little literature exists addressing this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (angles of strike, dip and rake) and source locations in three dimensions. Unlike traditional inversion methods, the heuristic algorithm inversion does not require the approximation of the Green's function. The method directly interacts with a CPU-parallelized finite difference forward modelling engine, updating the model parameters according to GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate GSA can be well applied to WMI and has its unique advantages. Keywords: Micro-seismicity, Waveform matching inversion, gravitational search algorithm, parallel computation
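A minimal gravitational search algorithm can be sketched as follows. This toy version minimises a simple test function rather than the paper's waveform-matching misfit, and the decay schedule for the gravitational constant is an illustrative assumption.

```python
import math
import random

def gsa(objective, bounds, agents=15, iterations=100, g0=100.0, seed=3):
    """Minimal gravitational search algorithm (GSA) sketch: agents attract
    each other with 'gravity' proportional to fitness-based masses, and the
    gravitational constant decays over the iterations."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best, best_f = None, float("inf")
    for t in range(iterations):
        fit = [objective(p) for p in pos]
        fbest, fworst = min(fit), max(fit)
        if fbest < best_f:
            best_f = fbest
            best = list(pos[fit.index(fbest)])
        # masses normalised so that better fitness -> larger mass
        raw = [(fworst - f) / (fworst - fbest + 1e-12) for f in fit]
        mass = [m / (sum(raw) + 1e-12) for m in raw]
        g = g0 * math.exp(-20.0 * t / iterations)      # decaying constant
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                dist = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * g * mass[j] \
                              * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
    return best, best_f

best, best_f = gsa(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
```

In the paper's setting, `objective` would call the finite-difference forward model and return the waveform misfit for a candidate focal mechanism and location.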

  19. Search Patterns

    CERN Document Server

    Morville, Peter

    2010-01-01

    What people are saying about Search Patterns "Search Patterns is a delight to read -- very thoughtful and thought provoking. It's the most comprehensive survey of designing effective search experiences I've seen." --Irene Au, Director of User Experience, Google "I love this book! Thanks to Peter and Jeffery, I now know that search (yes, boring old yucky who cares search) is one of the coolest ways around of looking at the world." --Dan Roam, author, The Back of the Napkin (Portfolio Hardcover) "Search Patterns is a playful guide to the practical concerns of search interface design. It cont

  20. From the hilarious to the sinister: advertising and the use of humor to deal with the taboo of death

    Directory of Open Access Journals (Sweden)

    Pereira, Iranilton Marcolino

    2015-01-01

    Full Text Available Taking advertising pieces for the Morada da Paz cemetery as its object of study, this article reflects on how death is approached in the advertising intended to sell the Grupo Vila's funeral plans. The text addresses the conflict between the seductive character of advertising and the characteristics of the funeral market, which deals with a subject that is taboo in Western societies: death. In light of the ideas of Bauman, Lipovetsky, Kovács, among others, the article seeks to relate the sensations that advertising awakens in consumers, the engines of the capitalist world, to the evolution of customs and conventions concerning death, especially in Western society.

  1. Memetic Algorithm with Local Search as Modified Swine Influenza Model-Based Optimization and Its Use in ECG Filtering

    Directory of Open Access Journals (Sweden)

    Devidas G. Jadhav

    2014-01-01

    Full Text Available The Swine Influenza Model Based Optimization (SIMBO) family is a newly introduced speedy optimization technique having adaptive features in its mechanism. In this paper, the authors modify SIMBO to make the algorithm even quicker. As the SIMBO family is fast, it is a better option for searching the basin; thus, it is utilized for the local searches in developing the proposed memetic algorithms (MAs). The MA has a faster speed compared to SIMBO with a balance between exploration and exploitation, so MAs have small tradeoffs in convergence velocity when comprehensively optimizing a standard numerical benchmark test bed containing functions with different properties. The utilization of SIMBO in the local search inherently exploits the better characteristics of the algorithms employed for the hybridization. The developed MA is applied to eliminate the power line interference (PLI) from the biomedical signal ECG with the use of an adaptive filter whose weights are optimized by the MA. The reference signal required for the adaptive filter is obtained using selective reconstruction of the ECG from the intrinsic mode functions (IMFs) of empirical mode decomposition (EMD).
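The memetic pattern described above (a population-based search whose offspring are refined by a fast local search) can be sketched generically. Here plain hill climbing stands in for SIMBO, the objective is a toy function rather than adaptive-filter weights, and all parameter values are illustrative assumptions.

```python
import random

def memetic_minimise(objective, bounds, pop_size=20, generations=40,
                     local_steps=10, seed=7):
    """Memetic algorithm sketch: a small genetic algorithm whose offspring
    are refined by hill climbing (the role SIMBO plays in the paper)."""
    rng = random.Random(seed)
    span = [hi - lo for lo, hi in bounds]

    def clamp(x, d):
        lo, hi = bounds[d]
        return min(max(x, lo), hi)

    def local_search(ind):
        # greedy coordinate perturbation: exploit the basin around `ind`
        best, best_f = ind, objective(ind)
        for _ in range(local_steps):
            cand = [clamp(x + rng.gauss(0, 0.05 * span[d]), d)
                    for d, x in enumerate(best)]
            f = objective(cand)
            if f < best_f:
                best, best_f = cand, f
        return best

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[:pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # crossover
            children.append(local_search(child))                 # memetic step
        pop = parents + children
    return min(pop, key=objective)

best = memetic_minimise(lambda v: sum(x * x for x in v), [(-10, 10)] * 3)
```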

  2. A Memory Hierarchy Model Based on Data Reuse for Full-Search Motion Estimation on High-Definition Digital Videos

    Directory of Open Access Journals (Sweden)

    Alba Sandyra Bezerra Lopes

    2012-01-01

    Full Text Available The motion estimation is the most complex module in a video encoder, requiring a high processing throughput and high memory bandwidth, mainly when the focus is high-definition videos. The throughput problem can be solved by increasing the parallelism in the internal operations. The external memory bandwidth may be reduced using a memory hierarchy. This work presents a memory hierarchy model for a full-search motion estimation core. The proposed memory hierarchy model is based on a data reuse scheme considering the features of the full-search algorithm. The proposed memory hierarchy expressively reduces the external memory bandwidth required for the motion estimation process, and it provides a very high data throughput for the ME core. This throughput is necessary to achieve real time when processing high-definition videos. When considering the worst bandwidth scenario, this memory hierarchy is able to reduce the external memory bandwidth by 578 times. A case study for the proposed hierarchy, using a 32×32 search window and an 8×8 block size, was implemented and prototyped on a Virtex 4 FPGA. The results show that it is possible to reach 38 frames per second when processing full HD frames (1920×1080 pixels) using nearly 299 Mbytes per second of external memory bandwidth.
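The full-search kernel that the memory hierarchy must feed is itself compact; a toy sketch (pure Python, tiny frames, sum of absolute differences as the matching cost, sizes unrelated to the paper's 32×32/8×8 case study) illustrates the data-access pattern whose reuse the hierarchy exploits:

```python
def full_search_sad(ref, cur, block_xy, block=4, radius=2):
    """Exhaustive (full-search) block matching: find the motion vector in a
    (2*radius+1)^2 window minimising the sum of absolute differences (SAD).
    `ref`/`cur` are 2-D lists of luma samples; all sizes here are toy values."""
    bx, by = block_xy
    h, w = len(ref), len(ref[0])
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if not (0 <= by + dy and by + dy + block <= h
                    and 0 <= bx + dx and bx + dx + block <= w):
                continue  # candidate block falls outside the reference frame
            sad = sum(abs(cur[by + r][bx + c] - ref[by + dy + r][bx + dx + c])
                      for r in range(block) for c in range(block))
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

# toy frames: `cur` is `ref` shifted right by one pixel
ref = [[(3 * x + 7 * y) % 16 for x in range(12)] for y in range(12)]
cur = [[ref[y][max(x - 1, 0)] for x in range(12)] for y in range(12)]
mv, sad = full_search_sad(ref, cur, block_xy=(4, 4))
```

Every candidate offset re-reads heavily overlapping reference pixels, which is exactly the redundancy a data-reuse hierarchy removes.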

  3. Algoritmo Tabú para un problema de distribución de espacios || Tabu search algorithm for a room allocation problem

    Directory of Open Access Journals (Sweden)

    Molina Luque, Julián

    2006-06-01

    Full Text Available Room allocation is a problem that commonly arises in real settings when several sets of spaces (offices, rooms, halls, etc.) distributed across buildings and/or floors must be assigned simultaneously to several groups of people, in such a way that the distances between the spaces assigned to each group and that group's headquarters are minimised. This gives rise to a combinatorial problem with a quadratic objective function, which makes its exact solution extremely difficult. For this reason, we propose a metaheuristic based on Tabu Search with two clearly differentiated groups of moves: exchanging offices and reassigning headquarters. Finally, we apply the algorithm to a real case at the Universidad Pablo de Olavide in Seville (Spain).
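The tabu scheme described (swap moves, a short-term tabu list, and an aspiration criterion) can be sketched on a simplified allocation problem. The linear room-preference cost below is an illustrative stand-in for the paper's quadratic group-to-headquarters objective, and the tenure and iteration counts are assumptions.

```python
import random
from collections import deque

def tabu_search(cost, iterations=200, tenure=7, seed=5):
    """Tabu search sketch for an assignment problem: `cost[g][r]` is the
    penalty of giving room r to group g. Neighbourhood = pairwise room
    swaps; recently swapped pairs are tabu unless they improve on the best
    solution found so far (aspiration criterion)."""
    rng = random.Random(seed)
    n = len(cost)
    assign = list(range(n))
    rng.shuffle(assign)
    total = lambda a: sum(cost[g][a[g]] for g in range(n))
    best, best_f = list(assign), total(assign)
    tabu = deque(maxlen=tenure)          # short-term memory of recent moves
    for _ in range(iterations):
        move, move_f = None, float("inf")
        for i in range(n):
            for j in range(i + 1, n):
                assign[i], assign[j] = assign[j], assign[i]
                f = total(assign)
                assign[i], assign[j] = assign[j], assign[i]
                if ((i, j) not in tabu or f < best_f) and f < move_f:
                    move, move_f = (i, j), f
        if move is None:
            break
        i, j = move
        assign[i], assign[j] = assign[j], assign[i]
        tabu.append((i, j))
        if move_f < best_f:
            best, best_f = list(assign), move_f
    return best, best_f

# toy instance: group g prefers room g (zero cost on the diagonal)
n = 6
cost = [[abs(g - r) for r in range(n)] for g in range(n)]
best, best_f = tabu_search(cost)
```

The tabu list lets the search accept non-improving swaps without immediately cycling back, which is what distinguishes it from plain steepest descent.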

  4. An Adaptive Tabu Search Heuristic for the Location Routing Pickup and Delivery Problem with Time Windows with a Theater Distribution Application

    National Research Council Canada - National Science Library

    Burks, Jr, Robert E

    2006-01-01

    .... The location routing problem (LRP) is an extension of the vehicle routing problem where the solution identifies the optimal location of the depots and provides the vehicle schedules and distribution routes...

  5. Muscle forces during running predicted by gradient-based and random search static optimisation algorithms.

    Science.gov (United States)

    Miller, Ross H; Gillette, Jason C; Derrick, Timothy R; Caldwell, Graham E

    2009-04-01

    Muscle forces during locomotion are often predicted using static optimisation with sequential quadratic programming (SQP). SQP has been criticised for over-estimating force magnitudes and under-estimating co-contraction. These problems may be related to SQP's difficulty in locating the global minimum of complex optimisation problems. Algorithms designed to locate the global minimum may be useful in addressing these problems. Muscle forces for 18 flexors and extensors of the lower extremity were predicted for 10 subjects during the stance phase of running. Static optimisation using SQP and two random search (RS) algorithms (a genetic algorithm and simulated annealing) estimated muscle forces by minimising the sum of cubed muscle stresses. The RS algorithms predicted smaller peak forces (42% smaller on average) and smaller muscle impulses (46% smaller on average) than SQP, and located solutions with smaller cost function scores. Results suggest that RS may be a more effective tool than SQP for minimising the sum of cubed muscle stresses in static optimisation.
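The random-search formulation can be sketched with simulated annealing: choose non-negative muscle forces that reproduce a joint moment while minimising the sum of cubed stresses, with the moment constraint handled by a quadratic penalty. The moment arms, cross-sectional areas, and penalty weight below are illustrative assumptions, not the study's data.

```python
import math
import random

def anneal_forces(r, area, moment, iterations=5000, seed=11):
    """Simulated-annealing sketch of static optimisation: forces f >= 0
    that reproduce a joint moment (sum r_i * f_i = M) while minimising
    the sum of cubed stresses (f_i / area_i)**3."""
    rng = random.Random(seed)
    n = len(r)

    def cost(f):
        stress = sum((fi / a) ** 3 for fi, a in zip(f, area))
        penalty = (sum(ri * fi for ri, fi in zip(r, f)) - moment) ** 2
        return stress + 1000.0 * penalty          # assumed penalty weight

    f = [moment / (n * ri) for ri in r]           # moment-balanced start
    best, best_c = list(f), cost(f)
    for t in range(iterations):
        temp = 1.0 * (1 - t / iterations) + 1e-6  # linear cooling schedule
        i = rng.randrange(n)
        cand = list(f)
        cand[i] = max(0.0, cand[i] + rng.gauss(0, 5.0))
        dc = cost(cand) - cost(f)
        if dc < 0 or rng.random() < math.exp(-dc / temp):
            f = cand
            if cost(f) < best_c:
                best, best_c = list(f), cost(f)
    return best, best_c

r = [0.04, 0.05, 0.03]        # moment arms (m), illustrative
area = [12.0, 20.0, 8.0]      # cross-sectional areas (cm^2), illustrative
best, best_c = anneal_forces(r, area, moment=6.0)
```

Because annealing occasionally accepts uphill moves, it can escape the local minima that a gradient-based solver such as SQP may settle into.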

  6. Two Kinds of Classifications Based on Improved Gravitational Search Algorithm and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hongping Hu

    2017-01-01

    Full Text Available The Gravitational Search Algorithm (GSA) is a widely used metaheuristic algorithm. Although GSA has few parameters to adjust, it suffers from a slow convergence rate. In this paper, we change the constant acceleration coefficients to exponential functions on the basis of the combination of GSA and PSO (PSO-GSA) and propose an improved PSO-GSA algorithm (written as I-PSO-GSA) for solving two classification problems: surface water quality and the moving direction of robots. I-PSO-GSA is employed to optimize the weights and biases of a backpropagation (BP) neural network. The experimental results show that, compared with the combination of PSO and GSA (PSO-GSA), single PSO, and single GSA for optimizing the parameters of the BP neural network, I-PSO-GSA outperforms PSO-GSA, PSO, and GSA and has better classification accuracy for these two actual problems.

  7. MUSiC. A model unspecific search in CMS based on 2010 LHC data

    Energy Technology Data Exchange (ETDEWEB)

    Pieta, Holger

    2012-06-20

    A Model Unspecific Search in CMS (MUSiC) is presented in this work, along with its results on the data taken in 2010 by the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC). This analysis shows a sensitivity to various models for new physics and provides a broad view of the data, due to its minimal theoretical bias. Events are classified with respect to their reconstructed objects: muons, electrons, photons, jets and missing transverse energy. Up to three kinematic variables in each of these classes are systematically scanned for contiguous bin regions deviating significantly from the predictions of the Standard Model of particle physics. No deviations beyond expected fluctuations are observed when taking systematic uncertainties into account. The sensitivity of the analysis to certain models beyond the Standard Model is demonstrated.

  8. CSLM: Levenberg Marquardt based Back Propagation Algorithm Optimized with Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Nazri Mohd. Nawi

    2014-11-01

    Full Text Available Training an artificial neural network is an optimization task, since it is desired to find optimal weight sets for a neural network during the training process. Traditional training algorithms such as back propagation have some drawbacks, such as getting stuck in local minima and a slow speed of convergence. This study combines the best features of two algorithms, i.e. Levenberg-Marquardt back propagation (LMBP) and Cuckoo Search (CS), for improving the convergence speed of artificial neural network (ANN) training. The proposed CSLM algorithm is trained on XOR and OR datasets. The experimental results show that the proposed CSLM algorithm has better performance than other similar hybrid variants used in this study.
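The cuckoo-search half of the hybrid can be sketched as follows. This generic version minimises a toy function rather than training a network, and the step-size constant, abandonment fraction, and Lévy exponent are the common textbook defaults, assumed here for illustration only.

```python
import math
import random

def cuckoo_search(objective, bounds, nests=15, iterations=100,
                  pa=0.25, beta=1.5, seed=2):
    """Cuckoo search sketch: Levy-flight moves plus abandonment of a
    fraction `pa` of the worst nests each generation."""
    rng = random.Random(seed)
    clamp = lambda x, lo, hi: min(max(x, lo), hi)
    # Mantegna's method for Levy-distributed step lengths
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)

    def levy():
        u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(nests)]
    fit = [objective(p) for p in pop]
    for _ in range(iterations):
        gbest = pop[fit.index(min(fit))]
        for i in range(nests):
            # Levy flight around the current nest, scaled by distance to best
            cand = [clamp(x + 0.01 * levy() * (x - g), lo, hi)
                    for x, g, (lo, hi) in zip(pop[i], gbest, bounds)]
            f = objective(cand)
            j = rng.randrange(nests)
            if f < fit[j]:                        # replace a random worse nest
                pop[j], fit[j] = cand, f
        # abandon a fraction pa of the worst nests
        order = sorted(range(nests), key=lambda k: fit[k], reverse=True)
        for k in order[:int(pa * nests)]:
            pop[k] = [rng.uniform(lo, hi) for lo, hi in bounds]
            fit[k] = objective(pop[k])
    return min(pop, key=objective)

best = cuckoo_search(lambda v: sum(x * x for x in v), [(-5, 5)] * 2)
```

In the paper's hybrid, the objective would be the network's training error, with the Levenberg-Marquardt step refining the solutions CS proposes.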

  9. The Process Synthesis Pyramid: Conceptual design of a Liquefied Energy Chain using Pinch Analysis,Exergy Analysis,Deterministic Optimization and Metaheuristic Searches

    Energy Technology Data Exchange (ETDEWEB)

    Aspelund, Audun

    2012-07-01

    Process Synthesis (PS) is a term used to describe a class of general and systematic methods for the conceptual design of processing plants and energy systems. The term also refers to the development of the process flowsheet (structure or topology), the selection of unit operations and the determination of the most important operating conditions. In this thesis an attempt is made to characterize some of the most common methodologies in a PS pyramid and discuss their advantages and disadvantages, as well as where in the design phase they could be used most efficiently. The thesis shows how design tools have been developed for subambient processes by combining and expanding PS methods such as Heuristic Rules, sequential modular Process Simulations, Pinch Analysis, Exergy Analysis, Mathematical Programming using Deterministic Optimization methods, and optimization using Stochastic Optimization methods. The most important contributions to the process design community are three new methodologies that include pressure as an important variable in heat exchanger network synthesis (HENS). The methodologies have been used to develop a novel and efficient energy chain based on stranded natural gas, including power production with carbon capture and sequestration (CCS). This Liquefied Energy Chain consists of an offshore process, a combined gas carrier and an onshore process. This energy chain is capable of efficiently exploiting, with minor CO2 emissions, resources that cannot be utilized economically today. Finally, a new Stochastic Optimization approach based on Tabu Search (TS), the Nelder-Mead method or Downhill Simplex Method (NMDS), and the sequential process simulator HYSYS is used to search for better solutions for the Liquefied Energy Chain with respect to minimum cost or maximum profit. (au)

  10. Searching for document contents in an IHE-XDS EHR architecture via archetype-based indexing of document types.

    Science.gov (United States)

    Rinner, Christoph; Kohler, Michael; Saboor, Samrend; Huebner-Bloder, Gudrun; Ammenwerth, Elske; Duftschmid, Georg

    2013-01-01

    The shared EHR (electronic health record) system architecture IHE XDS is widely adopted internationally. It ensures a high level of data privacy via distributed storage of EHR documents. Its standard search capabilities, however, are limited; it only allows retrieval of complete documents by querying a restricted set of document metadata. Existing approaches that aim to extend XDS queries to document contents typically employ a central index of document contents, thereby undermining XDS' basic characteristic of distributed data storage. To avoid data privacy concerns, we propose querying EHR contents in XDS by indexing document types based on Archetypes instead. We successfully tested our approach within the ISO/EN 13606 standard.

  11. A class-based search for the in-core fuel management optimization of a pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga de Moura Meneses, Anderson, E-mail: ameneses@lmp.ufrj.b [Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, CP 68509, CEP 21.941-972, Rio de Janeiro, RJ (Brazil); Rancoita, Paola [IDSIA (Dalle Molle Institute for Artificial Intelligence), Galleria 2, 6982 Manno-Lugano, TI (Switzerland); Mathematics Department, Universita degli Studi di Milano (Italy); Schirru, Roberto [Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, CP 68509, CEP 21.941-972, Rio de Janeiro, RJ (Brazil); Gambardella, Luca Maria [IDSIA (Dalle Molle Institute for Artificial Intelligence), Galleria 2, 6982 Manno-Lugano, TI (Switzerland)

    2010-11-15

    The In-Core Fuel Management Optimization (ICFMO) is a prominent problem in nuclear engineering, with high complexity, that has been studied for more than 40 years. Besides manual optimization and knowledge-based methods, optimization metaheuristics such as Genetic Algorithms, Ant Colony Optimization and Particle Swarm Optimization have yielded outstanding results for the ICFMO. In the present article, the Class-Based Search (CBS) is presented for application to the ICFMO. It is a novel metaheuristic approach that performs the search based on the main nuclear characteristics of the fuel assemblies, such as reactivity. The CBS is then compared to one of the state-of-the-art algorithms applied to the ICFMO, Particle Swarm Optimization. Experiments were performed for the optimization of the Angra 1 Nuclear Power Plant, located in the Southeast of Brazil. The CBS presented notable performance, providing Loading Patterns that yield a higher average of Effective Full Power Days in the simulation of Angra 1 NPP operation, according to our methodology.

  12. A class-based search for the in-core fuel management optimization of a pressurized water reactor

    International Nuclear Information System (INIS)

    Alvarenga de Moura Meneses, Anderson; Rancoita, Paola; Schirru, Roberto; Gambardella, Luca Maria

    2010-01-01

    The In-Core Fuel Management Optimization (ICFMO) is a prominent problem in nuclear engineering, with high complexity, that has been studied for more than 40 years. Besides manual optimization and knowledge-based methods, optimization metaheuristics such as Genetic Algorithms, Ant Colony Optimization and Particle Swarm Optimization have yielded outstanding results for the ICFMO. In the present article, the Class-Based Search (CBS) is presented for application to the ICFMO. It is a novel metaheuristic approach that performs the search based on the main nuclear characteristics of the fuel assemblies, such as reactivity. The CBS is then compared to one of the state-of-the-art algorithms applied to the ICFMO, Particle Swarm Optimization. Experiments were performed for the optimization of the Angra 1 Nuclear Power Plant, located in the Southeast of Brazil. The CBS presented notable performance, providing Loading Patterns that yield a higher average of Effective Full Power Days in the simulation of Angra 1 NPP operation, according to our methodology.

  13. OmniSearch: a semantic search system based on the Ontology for MIcroRNA Target (OMIT) for microRNA-target gene interaction data.

    Science.gov (United States)

    Huang, Jingshan; Gutierrez, Fernando; Strachan, Harrison J; Dou, Dejing; Huang, Weili; Smith, Barry; Blake, Judith A; Eilbeck, Karen; Natale, Darren A; Lin, Yu; Wu, Bin; Silva, Nisansa de; Wang, Xiaowei; Liu, Zixing; Borchert, Glen M; Tan, Ming; Ruttenberg, Alan

    2016-01-01

    As a special class of non-coding RNAs (ncRNAs), microRNAs (miRNAs) perform important roles in numerous biological and pathological processes. The realization of miRNA functions depends largely on how miRNAs regulate specific target genes. It is therefore critical to identify, analyze, and cross-reference miRNA-target interactions to better explore and delineate miRNA functions. Semantic technologies can help in this regard. We previously developed a miRNA domain-specific application ontology, the Ontology for MIcroRNA Target (OMIT), whose goal was to serve as a foundation for semantic annotation, data integration, and semantic search in the miRNA field. In this paper we describe our continuing effort to develop the OMIT, and demonstrate its use within a semantic search system, OmniSearch, designed to facilitate knowledge capture of miRNA-target interaction data. Important changes in the current version of OMIT are summarized as: (1) following a modularized ontology design (with 2559 terms imported from the NCRO ontology); (2) encoding all 1884 human miRNAs (vs. 300 in previous versions); and (3) setting up a GitHub project site along with an issue tracker for more effective community collaboration on the ontology development. The OMIT ontology is free and open to all users, accessible at: http://purl.obolibrary.org/obo/omit.owl. The OmniSearch system is also free and open to all users, accessible at: http://omnisearch.soc.southalabama.edu/index.php/Software.

  14. Developing a Data Discovery Tool for Interdisciplinary Science: Leveraging a Web-based Mapping Application and Geosemantic Searching

    Science.gov (United States)

    Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.

    2015-12-01

    The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share its data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URIs) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users the ability to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for JavaScript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship

  15. Differential search algorithm-based parametric optimization of electrochemical micromachining processes

    Directory of Open Access Journals (Sweden)

    Debkalpa Goswami

    2014-01-01

    Full Text Available Electrochemical micromachining (EMM) appears to be a very promising micromachining process for having a higher machining rate, better precision and control, reliability, flexibility, environmental acceptability, and the capability of machining a wide range of materials. It permits machining of chemically resistant materials, like titanium, copper alloys, super alloys and stainless steel, to be used in biomedical, electronic, micro-electromechanical system and nano-electromechanical system applications. Therefore, the optimal use of an EMM process for achieving an enhanced machining rate and improved profile accuracy demands selection of its various machining parameters. Various optimization tools, primarily Derringer's desirability function approach, have been employed by past researchers for deriving the best parametric settings of EMM processes, which inherently lead to sub-optimal or near-optimal solutions. In this paper, an attempt is made to apply an almost new optimization tool, i.e. the differential search algorithm (DSA), for parametric optimization of three EMM processes. A comparative study of optimization performance between DSA, genetic algorithm and the desirability function approach proves the wide acceptability of DSA as a global optimization tool.

  16. Gradient Compressive Sensing for Image Data Reduction in UAV Based Search and Rescue in the Wild

    Directory of Open Access Journals (Sweden)

    Josip Musić

    2016-01-01

    Full Text Available Search and rescue operations usually require significant resources, personnel, equipment, and time. In order to optimize the resources and expenses and to increase the efficiency of operations, the use of unmanned aerial vehicles (UAVs) and aerial photography is considered for fast reconnaissance of large and unreachable terrains. The images are then transmitted to the control center for automatic processing and pattern recognition. Furthermore, due to the limited transmission capacities and significant battery consumption for recording high resolution images, in this paper we consider the use of a smart acquisition strategy with a decreased number of image pixels, following the compressive sensing paradigm. The images are completely reconstructed in the control center prior to the application of image processing for suspicious object detection. The efficiency of this combined approach depends on the amount of acquired data and also on the complexity of the scenery observed. The proposed approach is tested on various high resolution aerial images, while the achieved results are analyzed using different quality metrics and validation tests. Additionally, a user study is performed on the original images to provide the baseline object detection performance.

  17. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    Science.gov (United States)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode for enhancing model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate over the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. Then, the weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. A numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which can correctly predict both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
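The pattern-search half of the hybrid optimiser can be sketched as a simple compass search; the toy quadratic objective below stands in for the paper's weighted frequency/strain-mode residual and is purely illustrative.

```python
def pattern_search(objective, x0, step=1.0, shrink=0.5, tol=1e-6,
                   max_iters=500):
    """Compass pattern search: poll +/- step along each coordinate, move to
    the first improving point, and halve the step when no poll improves."""
    x = list(x0)
    fx = objective(x)
    iters = 0
    while step > tol and iters < max_iters:
        iters += 1
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                cand = list(x)
                cand[d] += delta
                fc = objective(cand)
                if fc < fx:
                    x, fx = cand, fc
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= shrink       # refine the mesh around the current point
    return x, fx

# toy residual with minimum at parameters (2, -3); weights are illustrative
objective = lambda p: (p[0] - 2.0) ** 2 + 3.0 * (p[1] + 3.0) ** 2
x, fx = pattern_search(objective, [0.0, 0.0])
```

In the hybrid scheme, a genetic algorithm would explore globally and hand its best candidates to a derivative-free refinement of this kind.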

  18. Personalized Search

    CERN Document Server

    AUTHOR|(SzGeCERN)749939

    2015-01-01

    As the volume of electronically available information grows, relevant items become harder to find. This work presents an approach to personalizing search results in scientific publication databases, focusing on re-ranking search results from existing search engines like Solr or ElasticSearch. This work also includes the development of Obelix, a new recommendation system used to re-rank search results. The project was proposed and performed at CERN, using the scientific publications available on the CERN Document Server (CDS). This work experiments with re-ranking using offline and online evaluation of users and documents in CDS. The experiments conclude that the personalized search results outperform both latest-first and word-similarity rankings in terms of click position in the search results for global search in CDS.

  19. Involuntary top-down control by search-irrelevant features: Visual working memory biases attention in an object-based manner.

    Science.gov (United States)

    Foerster, Rebecca M; Schneider, Werner X

    2018-03-01

    Many everyday tasks involve successive visual-search episodes with changing targets. Converging evidence suggests that these targets are retained in visual working memory (VWM) and bias attention from there. It is unknown whether all or only search-relevant features of a VWM template bias attention during search. Bias signals might be configured exclusively to task-relevant features so that only search-relevant features bias attention. Alternatively, VWM might maintain objects in the form of bound features. Then, all template features will bias attention in an object-based manner, so that biasing effects are ranked by feature relevance. Here, we investigated whether search-irrelevant VWM template features bias attention. Participants had to saccade to a target opposite a distractor. A colored cue depicted the target prior to each search trial. The target was predefined only by its identity, while its color was irrelevant. When target and cue matched not only in identity (search-relevant) but also in color (search-irrelevant), saccades went more often and faster directly to the target than without any color match (Experiment 1). When introducing a cue-distractor color match (Experiment 2), direct target saccades were most likely when target and cue matched in the search-irrelevant color and least likely in case of a cue-distractor color match. When cue and target were never colored the same (Experiment 3), cue-colored distractors still captured the eyes more often than different-colored distractors despite color being search-irrelevant. As participants were informed about the misleading color, the result argues against a strategical and voluntary usage of color. Instead, search-irrelevant features biased attention obligatorily arguing for involuntary top-down control by object-based VWM templates. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Installation Restoration Program Records Search for Richards-Gebaur Air Force Base, Missouri.

    Science.gov (United States)

    1983-03-01

    City Aviation Department. These operations have been primarily involved in the routine maintenance of assigned aircraft and associated ground support...transferred to base supply (Talley Services, Inc.) for disposal off-base through contract. C. Kansas City Aviation Department (KCAD) i. Vehicle...Fixed-Base Operation (FBO) Talley Services operates the FBO for light aircraft out of Building 821. Small quantities (less than 60 gallons a year

  1. Effects of Discipline-based Career Course on Nursing Students' Career Search Self-efficacy, Career Preparation Behavior, and Perceptions of Career Barriers.

    Science.gov (United States)

    Park, Soonjoo

    2015-09-01

    The purpose of this study was to investigate the effectiveness of a discipline-based career course on perceptions of career barriers, career search self-efficacy, and career preparation behavior of nursing students. Differences in career search self-efficacy and career preparation behavior by the students' levels of career barriers were also examined. The study used a modified one-group, pretest-posttest design. The convenience sample consisted of 154 undergraduate nursing students in a university. The discipline-based career course consisted of eight sessions, and was implemented for 2 hours per session over 8 weeks. The data were collected from May to June in 2012 and 2013 using the following instruments: the Korean Career Indecision Inventory, the Career Search Efficacy Scale, and the Career Preparation Behavior Scale. Descriptive statistics, paired t test, and analysis of covariance were used to analyze the data. Upon the completion of the discipline-based career course, students' perceptions of career barriers decreased and career search self-efficacy and career preparation behavior increased. Career search self-efficacy and career preparation behavior increased in students with both low and high levels of career barriers. The difference between the low and high groups was significant for career search self-efficacy but not for career preparation behavior. The discipline-based career course was effective in decreasing perceptions of career barriers and increasing career search self-efficacy and career preparation behavior among nursing students. Copyright © 2015. Published by Elsevier B.V.

  2. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.

    Science.gov (United States)

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of a time-series forecasting method, the Autoregressive Integrated Moving Average (ARIMA) model, to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing it with conventional sales prediction techniques. The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words.
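
    The combination step described in this record can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the feature names (`arima_forecast`, `popularity`) and all sizes are assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's code): a tiny backpropagation network
# that learns to combine an ARIMA point forecast with a title-popularity
# score into a sales prediction. All data below is synthetic.
rng = np.random.default_rng(0)
n = 200
arima_forecast = rng.uniform(50, 150, n)   # assumed ARIMA output per issue
popularity = rng.uniform(0, 1, n)          # assumed normalized search index
sales = arima_forecast * (0.8 + 0.4 * popularity) + rng.normal(0, 2, n)

X = np.column_stack([arima_forecast, popularity])
X = (X - X.mean(0)) / X.std(0)             # standardize inputs
y = (sales - sales.mean()) / sales.std()   # standardize target

# One hidden layer with tanh activation, trained by plain backpropagation.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8); b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / n; gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / n; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 3))  # well below 1.0, the variance of the target
```

    In the paper's setting the ARIMA forecast would come from a fitted time-series model and the popularity score from search-index data; here both are simulated only to exercise the combination network.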

  4. SEARCHES FOR SUPERSYMMETRY IN ATLAS

    CERN Document Server

    Xu, Da; The ATLAS collaboration

    2017-01-01

    A wide range of supersymmetric searches are presented. All searches are based on the proton-proton collision dataset collected by the ATLAS experiment during the 2015 and 2016 (before summer) runs with a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 36.1 (36.7) fb-1. The searches are categorized into inclusive gluino and squark searches, third-generation searches, electroweak searches, prompt RPV searches and long-lived particle searches. No evidence of new physics is observed. The results are interpreted in various models and expressed in terms of limits on the masses of new particles.

  5. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the matching function ability to distinguish between good and bad models is recommended, before launching exhaustive computations. However, different runs of a parameter search method may result in the same optimal parameter set or in different parameter sets (the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values are not affecting the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the Simplex method and Least-squares Linear Taylor Differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods are resetting the search direction when it is likely to get stuck in presumably a local minimum. Deterministic methods based on
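
    One standard remedy for the local-minimum problem described in this record is a multi-start strategy: repeat a local descent from many random starting points and keep the best result. The sketch below is illustrative only; the objective function and step schedule are our assumptions, not the survey's benchmarks.

```python
import random

# Multi-start local search: restarting from random points reduces the
# chance of mistaking a local minimum for the global one.
def objective(x):
    # Illustrative 1-D landscape: a local minimum near x = 3 (value 1)
    # and the global minimum at x = 0 (value 0).
    return min((x - 3.0) ** 2 + 1.0, x ** 2)

def local_descent(x, step=0.5, tol=1e-6):
    """Derivative-free descent: poll both directions, shrink the step."""
    while step > tol:
        for cand in (x - step, x + step):
            if objective(cand) < objective(x):
                x = cand
                break
        else:
            step *= 0.5          # no improvement: refine the step size
    return x

random.seed(1)
starts = [random.uniform(-5, 5) for _ in range(20)]
best = min((local_descent(s) for s in starts), key=objective)
print(round(objective(best), 6))  # 0.0 -> the global minimum was found
```

    A single descent started above the basin boundary would settle at the local minimum with value 1; the restarts make that failure mode unlikely.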

  6. SearchResultFinder: federated search made easy

    NARCIS (Netherlands)

    Trieschnigg, Rudolf Berend; Tjin-Kam-Jet, Kien; Hiemstra, Djoerd

    Building a federated search engine based on a large number of existing web search engines is a challenge: implementing the programming interface (API) for each search engine is an exacting and time-consuming job. In this demonstration we present SearchResultFinder, a browser plugin which speeds up

  7. Searching for transiting circumbinary planets in CoRoT and ground-based data using CB-BLS

    Science.gov (United States)

    Ofir, A.; Deeg, H. J.; Lacy, C. H. S.

    2009-10-01

    Aims: Already from the initial discoveries of extrasolar planets it was apparent that their population and environments are far more diverse than initially postulated. Discovering circumbinary (CB) planets will have many implications, and in this context it will again substantially diversify the environments that produce and sustain planets. We search for transiting CB planets around eclipsing binaries (EBs). Methods: CB-BLS is a recently-introduced algorithm for the detection of transiting CB planets around EBs. We describe progress in search sensitivity, generality and capability of CB-BLS, and detection tests of CB-BLS on simulated data. We also describe an analytical approach for the determination of CB-BLS detection limits, and a method for the correct detrending of intrinsically-variable stars. Results: We present some blind tests with simulated planets injected into real CoRoT data. The presented upgrades to CB-BLS allowed it to detect all the blind tests successfully, and these detections were in line with the detection-limits analysis. We also correctly detrend bright eclipsing binaries from observations by the TrES planet search, and present some of the first results of applying CB-BLS to multiple real light curves from a wide-field survey. Conclusions: CB-BLS is now mature enough for its application to real data, and the presented processing scheme will serve as the template for our future applications of CB-BLS to data from wide-field surveys such as CoRoT. Being able to put constraints even on non-detections will help to determine the correct frequency of CB planets, contributing to the understanding of planet formation in general. Searching for transiting CB planets is still a learning experience, similar to the state of transiting planets around single stars only a few years ago. The recent rapid progress on this front, coupled with the exquisite quality of space-based photometry, allows one to realistically expect that if transiting CB planets

  8. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    Science.gov (United States)

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) based on designing a number of self-adequate autonomous sub-MGs via adopting MGs clustering thinking. In doing so, a multi-objective optimization problem is developed where power losses reduction, voltage profile improvement and reliability enhancement are considered as the objective functions. To solve the optimization problem a hybrid algorithm, named HS-GA, is provided, based on genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
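
    For readers unfamiliar with the harmony search half of the hybrid, a generic HS kernel on a toy objective is sketched below. This is not the paper's HS-GA code; the parameter values (memory size, HMCR, PAR, bandwidth) are common textbook defaults, assumed here for illustration.

```python
import random

# Illustrative harmony search (HS) kernel minimizing the sphere function.
def sphere(x):
    return sum(v * v for v in x)

def harmony_search(f, dim=2, lo=-5.0, hi=5.0, hms=10, hmcr=0.9,
                   par=0.3, bw=0.2, iters=5000, seed=7):
    rng = random.Random(seed)
    # Harmony memory: hms candidate solutions.
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                    # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new                      # replace worst harmony
    return min(memory, key=f)

best = harmony_search(sphere)
print(round(sphere(best), 4))  # close to the global minimum at the origin
```

    In the paper's hybrid, a genetic algorithm is combined with this kind of kernel and the objective is the multi-objective MG clustering model rather than a test function.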

  9. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  10. Maximize Minimum Utility Function of Fractional Cloud Computing System Based on Search Algorithm Utilizing the Mittag-Leffler Sum

    Directory of Open Access Journals (Sweden)

    Rabha W. Ibrahim

    2018-01-01

    Full Text Available The maximum min utility function (MMUF) problem is an important representative of a large class of cloud computing systems (CCS), with numerous applications in practice, especially in economics and industry. This paper introduces an effective solution-based search (SBS) algorithm for solving the MMUF problem. First, we suggest a new formula for the utility function in terms of the capacity of the cloud. We formulate the capacity in CCS by using a fractional diffeo-integral equation. This equation usually describes the flow of CCS. The new formula of the utility function modifies recent active utility functions. The suggested technique first creates a high-quality initial solution by eliminating the less promising components, and then improves the quality of the achieved solution by the summation search solution (SSS). This method uses the Mittag-Leffler sum as a hash function to determine the position of the agent. Experimental results on instances commonly utilized in the literature demonstrate that the proposed algorithm competes approvingly with the state-of-the-art algorithms both in terms of solution quality and computational efficiency.
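
    The Mittag-Leffler sum used as the summation kernel can be approximated by truncating its series. The sketch below shows the one-parameter form; the truncation level is our choice for illustration, not the paper's setting.

```python
import math

# Partial-sum approximation of the one-parameter Mittag-Leffler function
#   E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
def mittag_leffler(z, alpha, terms=50):
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# Sanity checks against known special cases:
#   E_1(z) = exp(z)  and  E_2(z) = cosh(sqrt(z)) for z >= 0.
print(abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-9)          # True
print(abs(mittag_leffler(4.0, 2.0) - math.cosh(2.0)) < 1e-9)  # True
```

    For fractional orders 0 < alpha < 1, the same partial sum converges for moderate z, which is the regime a fractional diffeo-integral model would exercise.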

  11. Education and Public Outreach for Stardust@home: An Interactive Internet-based Search for Interstellar Dust

    Science.gov (United States)

    Mendez, Bryan J.; Westphal, A. J.; Butterworth, A. L.; Craig, N.

    2006-12-01

    On January 15, 2006, NASA’s Stardust mission returned to Earth after nearly seven years in interplanetary space. During its journey, Stardust encountered comet Wild 2, collecting dust particles from it in a special material called aerogel. At two other times in the mission, aerogel collectors were also opened to collect interstellar dust. The Stardust Interstellar Dust Collector is being scanned by an automated microscope at the Johnson Space Center. There are approximately 700,000 fields of view needed to cover the entire collector, but we expect only a few dozen total grains of interstellar dust were captured within it. Finding these particles is a daunting task. We have recruited many thousands of volunteers from the public to aid in the search for these precious pieces of space dust trapped in the collectors. We call the project Stardust@home. Through Stardust@home, volunteers from the public search fields of view from the Stardust aerogel collector using a web-based Virtual Microscope. Volunteers who discover interstellar dust particles have the privilege of naming them. The interest and response to this project has been extraordinary. Many people from all walks of life are very excited about space science and eager to volunteer their time to contribute to a real research project such as this. We will discuss the progress of the project and the education and outreach activities being carried out for it.

  12. Integration Policies in Europe--A Web-Based Search for Consensus

    Science.gov (United States)

    Öttl, Ulrich Franz Josef; Pichler, Bernhard; Schultze-Naumburg, Jonas; Wadispointner, Sabine

    2014-01-01

    Purpose: The purpose of the present paper is to describe a web-based consensus-finding procedure, resulting in an agreement among the group of participants representing global stakeholders regarding the interdisciplinary topic in a university master's seminar on "Global Studies". The result of the collectively elaborated solution…

  13. An API-based search system for one click access to information

    NARCIS (Netherlands)

    Ionita, Dan; Tax, Niek; Hiemstra, Djoerd

    This paper proposes a prototype One Click access system, based on previous work in the field and the related 1CLICK-2@NTCIR10 task. The proposed solution integrates several methods into a three-tier algorithm: query categorization, information extraction and output generation, and offers suggestions on

  14. webPRC: the profile comparer for alignment-based searching of public domain databases.

    NARCIS (Netherlands)

    Brandt, B.W.; Heringa, J.

    2009-01-01

    Profile-profile methods are well suited to detect remote evolutionary relationships between protein families. Profile Comparer (PRC) is an existing stand-alone program for scoring and aligning hidden Markov models (HMMs), which are based on multiple sequence alignments. Since PRC compares profile

  15. In Search of Museum Professional Knowledge Base: Mapping the Professional Knowledge Debate onto Museum Work

    Science.gov (United States)

    Tlili, Anwar

    2016-01-01

    Museum professionalism remains an unexplored area in museum studies, particularly with regard to what is arguably the core generic question of a "sui generis" professional knowledge base, and its necessary and sufficient conditions. The need to examine this question becomes all the more important with the increasing expansion of the…

  16. Mapping Student Search Paths Through Immunology Problems by Computer Based Testing

    Science.gov (United States)

    Stevens, Ronald H.; Kwak, Anthony R.; McCoy, J. Michael

    1990-01-01

    The development and use of uncued computer based testing in Immunology has encouraged second year UCLA medical students to become more independent, active learners and problem solvers. We have used the Windows-based IMMEX software this year to show that these problem solving exercises are a valid form of testing and that student performance did not correlate with computer anxiety or performance on objective examinations. More importantly, we have developed a graphical display of the student's solution path through the problems which allows a visualization of the problem solving processes associated with successful or unsuccessful solutions. This approach provides an analysis of the student's reasoning about complex concepts in immunology and will make it possible in the future to specifically and personally address each student's educational needs.

  17. Facet-Based Search and Navigation With LCSH: Problems and Opportunities

    Directory of Open Access Journals (Sweden)

    Kelley McGrath

    2007-12-01

    Full Text Available Facet-based interfaces demonstrate some limitations of Library of Congress Subject Headings (LCSH), which were designed to deal with constraints that do not exist in the current computerized environment. This paper discusses some challenges for using LCSH for faceted browsing and navigation in library catalogs. Ideas are provided for improving results through system design, changes to LCSH practice, and LCSH structure.

  18. Installation Restoration Program Records Search for Twin Cities Air Force Reserve Base, Minnesota.

    Science.gov (United States)

    1983-03-01

    Sandstone, white, medium, poorly sorted and silty. 0-80+ Galesville Sandstone: Sandstone, yellow to whi grained, poorly cemented. Eau Claire Sandstone 0-150...Quality Data All potable water for Twin Cities AFRB is obtained from the City of Minneapolis. The potable water is supplied to the base by one 12-inch main... Exploration drilling, testing, and design of well fields for potable water supply with an installed capacity of over 65 mgd. Determination of

  19. Installation Restoration Program. Phase I. Records Search. Seymour Johnson Air Force Base, North Carolina.

    Science.gov (United States)

    1982-07-01

    SQUADRON 1 Closed Circuit TV Shop 2904 Computer Maintenance 3500 Crypto Maintenance 2904 (1) Aerospace Ground Equipment 4-9 TABLE 4.2 INDUSTRIAL...Disposal Name (Bldg. No) Materials Wastes Method(s) 2012 CS (Continued) Computer Maintenance 3500 no no Crypto Maintenance 2904 no no Navig. Aids 4745...Area DATE OF OPERATION OR OCCURRENCE 1971 to Present OWNER/OPERATOR Seymour Johnson AFB Received Base Refuse through 1978, still open

  20. Scalable gamma-ray camera for wide-area search based on silicon photomultipliers array

    Science.gov (United States)

    Jeong, Manhee; Van, Benjamin; Wells, Byron T.; D'Aries, Lawrence J.; Hammig, Mark D.

    2018-03-01

    Portable coded-aperture imaging systems based on scintillators and semiconductors have found use in a variety of radiological applications. For stand-off detection of weakly emitting materials, large volume detectors can facilitate the rapid localization of emitting materials. We describe a scalable coded-aperture imaging system based on 5.02 × 5.02 cm2 CsI(Tl) scintillator modules, each partitioned into 4 × 4 × 20 mm3 pixels that are optically coupled to 12 × 12 pixel silicon photo-multiplier (SiPM) arrays. The 144 pixels per module are read-out with a resistor-based charge-division circuit that reduces the readout outputs from 144 to four signals per module, from which the interaction position and total deposited energy can be extracted. All 144 CsI(Tl) pixels are readily distinguishable with an average energy resolution, at 662 keV, of 13.7% FWHM, a peak-to-valley ratio of 8.2, and a peak-to-Compton ratio of 2.9. The detector module is composed of a SiPM array coupled with a 2 cm thick scintillator and modified uniformly redundant array mask. For the image reconstruction, cross correlation and maximum likelihood expectation maximization methods are used. The system shows a field of view of 45° and an angular resolution of 4.7° FWHM.
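
    The reduction from 144 pixel signals to four outputs relies on the interaction position being recoverable from ratios of the corner signals, in the style of classic Anger arithmetic. The sketch below shows that decoding step; the signal naming (a..d) and orientation are our assumptions, not the module's specification.

```python
# Decoding a four-output resistive charge-division readout (Anger-style):
# the ratios of the corner sums give the (x, y) interaction position and
# their total gives the deposited charge (hence energy after calibration).
def decode(a, b, c, d):
    """Return (x, y) in [-1, 1] and the total collected charge."""
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total   # right minus left, normalized
    y = ((a + b) - (c + d)) / total   # top minus bottom, normalized
    return x, y, total

# An event splitting its charge equally lands at the array center:
x, y, e = decode(25.0, 25.0, 25.0, 25.0)
print((x, y, e))  # (0.0, 0.0, 100.0)
```

    In the real module the decoded (x, y) is then binned into the 12 × 12 pixel grid, and `total` feeds the energy spectrum from which the quoted 13.7% FWHM resolution at 662 keV is measured.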

  1. A fuzzy method for improving the functionality of search engines based on user's web interactions

    Directory of Open Access Journals (Sweden)

    Farzaneh Kabirbeyk

    2015-04-01

    Full Text Available Web mining has been widely used to discover knowledge from various sources on the web. One of the important tools in web mining is mining of web users' behavior, which is considered a way to discover the potential knowledge hidden in users' interactions. Nowadays, website personalization is a popular phenomenon among web users, and it plays an important role in facilitating user access and in providing information that matches users' requirements and interests. Extracting important features of web user behavior plays a significant role in web usage mining. Such features include page visit frequency in each session, visit duration, and the dates on which certain pages are visited. This paper presents a method to predict users' interests and to propose a list of pages based on those interests by identifying user behavior with a fuzzy clustering method. Because users have different interests and may pursue one or more interests at a time, a user's interests may belong to several clusters, and fuzzy clustering allows for this overlap. The resulting clusters are used to extract fuzzy rules, which help detect users' movement patterns; a neural network then provides a list of suggested pages to the users.
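
    The fuzzy clustering step can be sketched with a compact fuzzy c-means implementation. The session features (visit frequency, duration) and the cluster count below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Compact fuzzy c-means: soft memberships allow overlapping interests."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # rows sum to 1
    for _ in range(iters):
        W = U ** m                                 # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM update
    return U, centers

# Two well-separated groups of "sessions" (visit frequency, duration):
X = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [7.8, 8.2]])
U, centers = fuzzy_cmeans(X)
print(U.round(2))   # each row: soft membership of one session in each cluster
```

    A session sitting between the two groups would receive substantial membership in both clusters, which is exactly the overlap the abstract describes; fuzzy rules can then be read off the membership matrix.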

  2. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    Full Text Available Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both of them have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO strongly complement each other. HS has strong global exploration power but low convergence speed. Conversely, TLBO has much faster convergence speed but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore the unknown regions and TLBO aims to rapidly exploit high-precision solutions in the known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants and show better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. The experiment on portfolio optimization problems also demonstrates that HSTLBO is effective in solving complex real-world applications.
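
    For context, the TLBO component consists of a teacher phase (pull the class toward the best learner) and a learner phase (pairs of learners improve from each other). The sketch below is a generic TLBO on a toy objective; population size, iteration count and bounds are assumptions, not the HSTLBO settings.

```python
import numpy as np

# Generic Teaching-Learning-Based Optimization (TLBO) sketch.
def tlbo(f, dim=2, lo=-5.0, hi=5.0, pop=20, iters=200, seed=2):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        teacher = X[fit.argmin()]              # best learner teaches
        mean = X.mean(axis=0)
        for i in range(pop):
            Tf = rng.integers(1, 3)            # teaching factor in {1, 2}
            new = np.clip(X[i] + rng.random(dim) * (teacher - Tf * mean),
                          lo, hi)
            if f(new) < f(X[i]):
                X[i] = new                     # teacher phase (greedy accept)
            j = rng.integers(pop)
            while j == i:
                j = rng.integers(pop)
            step = X[i] - X[j] if f(X[i]) < f(X[j]) else X[j] - X[i]
            new = np.clip(X[i] + rng.random(dim) * step, lo, hi)
            if f(new) < f(X[i]):
                X[i] = new                     # learner phase (greedy accept)
    return X[np.apply_along_axis(f, 1, X).argmin()]

best = tlbo(lambda v: float(np.sum(v ** 2)))
print(round(float(np.sum(best ** 2)), 6))      # near the minimum at 0
```

    The rapid shrinkage of the learner-phase steps is what gives TLBO its fast convergence, and also why it can stall in local search, motivating the HS hybridization described above.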

  3. Slice correspondence estimation using SURF descriptors and context-based search for prostate whole-mount histology MRI registration.

    Science.gov (United States)

    Guzman, Lina; Commandeur, Frederic; Acosta, Oscar; Simon, Antoine; Fautrel, Alain; Rioux-Leclercq, Nathalie; Romero, Eduardo; Mathieu, Romain; de Crevoisier, Renaud

    2016-08-01

    Registration of histopathology volumes to Magnetic Resonance Images (MRI) is a crucial step for finding correlations in Prostate Cancer (PCa) and assessing tumor aggressiveness. This paper proposes a two-stage framework aimed at registering both modalities. First, the Speeded-Up Robust Features (SURF) algorithm and a context-based search are used to automatically determine slice correspondences between MRI and histology volumes. This step initializes a multimodal nonrigid registration strategy, which allows histology slices to be propagated to MRI. Evaluation was performed on 5 prospective studies using a slice index score and landmark distances. With respect to a manual ground truth, the first stage of the framework exhibited an average error of 1.54 slice indices and 3.51 mm in the prostate specimen. The reconstruction of a three-dimensional Whole-Mount Histology (WMH) shows promising results aimed at later PCa pattern detection and staging.

  4. Search for Potent and Selective Aurora A Inhibitors Based on General Ser/Thr Kinase Pharmacophore Model

    Directory of Open Access Journals (Sweden)

    Natalya I. Vasilevich

    2016-04-01

    Full Text Available Based on data for compounds known from the literature to be active against various types of Ser/Thr kinases, a general pharmacophore model for these types of kinases was developed. The search for molecules fitting this pharmacophore in the ASINEX proprietary library revealed a number of compounds, which were tested and appeared to possess some activity against Ser/Thr kinases such as Aurora A, Aurora B and Haspin. Our work on the optimization of these molecules against Aurora A kinase allowed us to achieve several hits in the 3–5 nM range of activity with rather good selectivity and Absorption, Distribution, Metabolism, and Excretion (ADME) properties, and cytotoxicity against 16 cancer cell lines. Thus, we showed the possibility of fine-tuning the general Ser/Thr pharmacophore to design active and selective compounds against desired types of kinases.

  5. An improved Pattern Search based algorithm to solve the Dynamic Economic Dispatch problem with valve-point effect

    International Nuclear Information System (INIS)

    Alsumait, J.S.; Qasem, M.; Sykulski, J.K.; Al-Othman, A.K.

    2010-01-01

    In this paper, an improved algorithm based on the Pattern Search (PS) method to solve the Dynamic Economic Dispatch (DED) problem is proposed. The algorithm maintains the essential unit ramp rate constraint, along with all other necessary constraints, not only for the time horizon of operation (24 h), but also preserves these constraints through the transition period to the next time horizon (next day) in order to avoid discontinuity of the power system operation. The Dynamic Economic and Emission Dispatch (DEED) problem is also considered. The load balance constraints, operating limits, valve-point loading and network losses are included in the models of both DED and DEED. The numerical results clarify the significance of the improved algorithm and verify its performance.
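
    The PS family this record builds on can be illustrated with a minimal compass search on an unconstrained toy function; the dispatch constraints themselves (ramp rates, load balance, valve-point loading) are not modeled in this sketch.

```python
# Minimal compass-style pattern search: poll each coordinate direction,
# accept improving moves, and halve the mesh when no poll improves.
def pattern_search(f, x, step=1.0, tol=1e-8):
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s                 # poll one coordinate direction
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5                       # shrink the mesh and re-poll
    return x

f = lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
x = pattern_search(f, [0.0, 0.0])
print([round(c, 4) for c in x])  # [2.0, -1.0]
```

    Because PS only compares function values, it tolerates the non-smooth valve-point cost terms mentioned above; the paper's contribution lies in how constraints are maintained across time horizons, not in the polling kernel itself.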

  6. Waste Load Allocation Based on Total Maximum Daily Load Approach Using the Charged System Search (CSS Algorithm

    Directory of Open Access Journals (Sweden)

    Elham Faraji

    2016-03-01

    Full Text Available In this research, the capability of the charged system search (CSS) algorithm in handling water management optimization problems is investigated. First, two complex mathematical problems are solved by CSS and the results are compared with those obtained from other metaheuristic algorithms. Then, the optimization model developed with the CSS algorithm is applied to waste load allocation in rivers based on the total maximum daily load (TMDL) concept. The results are presented in tables and figures for easy comparison. The study indicates the superiority of the CSS algorithm in terms of speed and performance over the other metaheuristic algorithms, while its precision in water management optimization problems is verified.

  7. Determinants of Pro-Environmental Consumption: Multicountry Comparison Based upon Big Data Search

    Directory of Open Access Journals (Sweden)

    Donghyun Lee

    2017-01-01

    Full Text Available The Korean government has promoted a variety of environmental policies to revitalize pro-environmental consumption, and the government’s budget for this purpose has increased. However, there is a lack of quantitative data and analysis regarding the effects of education and changing public awareness of the environment on pro-environmental consumption. In addition, to improve pro-environmental consumption, the determinants of and hindrances to pro-environmental consumption should be analyzed in advance. Accordingly, herein we suggest a pro-environmental consumption index that represents the condition of pro-environmental consumption based on big data queries and use the index to analyze determinants of and hindrances to pro-environmental consumption. To verify the reliability of the proposed indicator, we examine the correlation between the proposed indicator and Greendex, an existing survey-based indicator. In addition, we conduct an analysis of the determinants of pro-environmental consumption across 13 countries based upon the proposed indicator. The index is highest for Argentina and average for Korea. An analysis of the determinants shows that the levels of health expenditure, the ratio of the population aged over 65 years, and past orientation are significantly negatively related to the pro-environmental consumption index, but the level of preprimary education is significantly positively related with it. We also find that high-GDP countries have a significantly positive relationship between economic growth and pro-environmental consumption, but low-GDP countries do not have this relationship.

  8. Search for lost or orphan radioactive sources based on NaI gamma spectrometry

    International Nuclear Information System (INIS)

    Aage, H.K.; Korsbech, U.

    2003-01-01

    Within recent decades many radioactive sources have been lost, stolen, or abandoned, and some have caused contamination or irradiation of people. Therefore reliable methods for source recovery are needed. The use of car-borne NaI(Tl) detectors is discussed. Standard processing of spectra in general can disclose strong and medium level signals from manmade nuclides, but methods for detecting low level signals from weak, distant or shielded sources can be improved. New methods for source detection and identification based on noise-adjusted singular value decomposition and on area-specific stripping of spectra are presented.
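
    The noise-adjusted SVD idea can be sketched as follows: scale each channel by its estimated Poisson noise, keep only the leading singular components, and transform back. The synthetic rank-one spectra, the rank choice and the scaling are illustrative assumptions, not the paper's survey data or settings.

```python
import numpy as np

# Noise-adjusted SVD (NASVD) style denoising for gamma-ray spectra.
def nasvd_denoise(S, rank=2):
    """S: (n_spectra, n_channels) array of channel counts."""
    mean = S.mean(axis=0)
    scale = np.sqrt(np.maximum(mean, 1.0))     # Poisson noise ~ sqrt(mean)
    A = S / scale                              # noise-adjusted spectra
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[rank:] = 0.0                             # drop noise-dominated modes
    return (U * s) @ Vt * scale                # reconstruct and rescale

rng = np.random.default_rng(3)
# Synthetic survey: one spectral shape at varying intensity (rank one).
clean = np.outer(rng.uniform(50, 150, 40), rng.uniform(1, 5, 64))
noisy = rng.poisson(clean).astype(float)
den = nasvd_denoise(noisy, rank=1)
print(np.abs(den - clean).mean() < np.abs(noisy - clean).mean())  # True
```

    After this kind of denoising, weak man-made signals stand out more clearly against the smoothed natural background, which is the basis of the detection methods the record describes.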

  9. Search for lost or orphan radioactive sources based on Nal gamma spectrometry

    DEFF Research Database (Denmark)

    Aage, Helle Karina; Korsbech, Uffe C C

    2003-01-01

    Within recent decades many radioactive sources have been lost, stolen, or abandoned, and some have caused contamination or irradiation of people. Therefore reliable methods for source recovery are needed. The use of car-borne NaI(Tl) detectors is discussed. Standard processing of spectra in general...... can disclose strong and medium level signals from manmade nuclides. But methods for detecting low level signals from weak, distant or shielded sources can be improved. New methods for source detection and identification based on noise adjusted singular value decomposition and on area specific...

  10. Constraints on the Interstellar Dust Flux Based on Stardust at Home Search Results

    Science.gov (United States)

    Zolensky, Michael E.; Westphal, J.; Allen, C.; Anderson, D.; Bajt, S.; Bechtel, H. A.; Borg, J.; Brenker, F.; Bridges, J.; Brownlee, D. E.; hide

    2011-01-01

    Recent advances in active particle selection in the Heidelberg Van de Graaff (VdG) dust accelerator have led to high-fidelity, low-background calibrations of track sizes in aerogel as a function of particle size and velocity in the difficult regime above 10 km s⁻¹ and submicron sizes. To the extent that the VdG shots are analogs for interstellar dust (ISD) impacts, these new measurements enable us to place preliminary constraints on the ISD flux based on Stardust@home data.

  11. AR-RBFS: Aware-Routing Protocol Based on Recursive Best-First Search Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Farzad Kiani

    2016-01-01

    Full Text Available The energy issue is one of the most important problems in wireless sensor networks. Such networks consist of low-power sensor nodes and a few base station nodes, and they must be adaptive and efficient in transmitting data to the sink across various areas. This paper proposes an aware-routing protocol based on clustering and recursive search approaches. The paper focuses on energy efficiency, with measures such as prolonging network lifetime, reducing energy consumption in the sensor nodes, and increasing system reliability. The proposed protocol consists of two phases. In the first phase (network development), the sensors are placed into virtual layers. The second phase (data transmission) covers route discovery and data transfer: it is based on the virtual-layer Classic-RBFS algorithm in environments free of energy problems, whereas in non-rechargeable environments all nodes in each layer are modeled as a random graph and managed by the duty-cycle method. Additionally, the protocol uses new topology control, data aggregation, and sleep/wake-up schemes for energy saving in the network. The simulation results show that the proposed protocol is optimal in the network lifetime and packet delivery parameters compared with existing protocols.
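
    The route-discovery phase builds on classic recursive best-first search (RBFS). As a rough, generic sketch (the graph, cost model, and names below are my own illustration, not the paper's protocol), RBFS expands the best frontier node while backing up f-values, achieving best-first behaviour in linear space:

```python
import math

def rbfs(neighbors, h, start, goal):
    """Recursive best-first search (Korf, 1993).

    neighbors(n) yields (successor, edge_cost) pairs; h(n) is an
    admissible estimate of the remaining cost to `goal`.
    Returns the path as a list of nodes, or None if unreachable.
    """
    def search(node, g, f_node, f_limit, path):
        if node == goal:
            return path, f_node
        succs = []
        for nxt, cost in neighbors(node):
            if nxt in path:                       # no cycles on this path
                continue
            # inherit the parent's backed-up f so estimates stay monotone
            succs.append([max(g + cost + h(nxt), f_node), g + cost, nxt])
        if not succs:
            return None, math.inf
        while True:
            succs.sort(key=lambda s: s[0])
            best = succs[0]
            if best[0] > f_limit or math.isinf(best[0]):
                return None, best[0]              # too costly: back up
            alternative = succs[1][0] if len(succs) > 1 else math.inf
            result, best[0] = search(best[2], best[1], best[0],
                                     min(f_limit, alternative),
                                     path + [best[2]])
            if result is not None:
                return result, best[0]

    found, _ = search(start, 0.0, h(start), math.inf, [start])
    return found
```

    The linear-space property is what makes this family of algorithms attractive on memory-constrained sensor nodes.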

  12. Nutrigenomics-based personalised nutritional advice: in search of a business model?

    Science.gov (United States)

    Ronteltap, Amber; van Trijp, Hans; Berezowska, Aleksandra; Goossens, Jo

    2013-03-01

    Nutritional advice has mainly focused on population-level recommendations. Recent developments in nutrition, communication, and marketing sciences have enabled potential deviations from this dominant business model in the direction of personalisation of nutrition advice. Such personalisation efforts can take on many forms, but these have in common that they can only be effective if they are supported by a viable business model. The present paper takes an inventory of approaches to personalised nutrition currently available in the market place as its starting point to arrive at an identification of their underlying business models. This analysis is presented as a unifying framework against which the potential of nutrigenomics-based personalised advice can be assessed. It has uncovered nine archetypical approaches to personalised nutrition advice in terms of their dominant underlying business models. Differentiating features among such business models are the type of information that is used as a basis for personalisation, the definition of the target group, the communication channels that are being adopted, and the partnerships that are built as a part of the business model. Future research should explore the consumer responses to the diversity of "archetypical" business models for personalised nutrition advice as a source of market information on which the delivery of nutrigenomics-based personalised nutrition advice may further build.

  13. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
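
    The two-portion feature is easy to prototype in software. A minimal sketch follows; the function name and the use of summed absolute amplitude as the "area" are my assumptions for illustration, not necessarily the paper's exact definition:

```python
import numpy as np

def peak_split_features(spike):
    """Split a spike waveform at its peak (peak search) and use the area
    of each portion as a two-dimensional feature for spike sorting.

    spike: 1-D array of waveform samples.
    Returns (left_area, right_area): summed |amplitude| up to and including
    the peak sample, and after it.
    """
    spike = np.asarray(spike, dtype=float)
    peak = int(np.argmax(np.abs(spike)))         # peak search
    left_area = np.abs(spike[:peak + 1]).sum()   # area before/at the peak
    right_area = np.abs(spike[peak + 1:]).sum()  # area after the peak
    return left_area, right_area
```

    Because both areas are running sums, they map naturally onto accumulators that update sample-by-sample, which is what lets the hardware compute peak position and areas concurrently.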

  14. Connection Setup Signaling Scheme with Flooding-Based Path Searching for Diverse-Metric Network

    Science.gov (United States)

    Kikuta, Ko; Ishii, Daisuke; Okamoto, Satoru; Oki, Eiji; Yamanaka, Naoaki

    Connection setup on various computer networks is now achieved by GMPLS. This technology is based on the source-routing approach, which requires the source node to store metric information of the entire network prior to computing a route. Thus all metric information must be distributed to all network nodes and kept up-to-date. However, as metric information becomes more diverse and generalized, it is hard to update all information due to the huge update overhead. Emerging network services and applications require the network to support diverse metrics for achieving various communication qualities. Increasing the number of metrics supported by the network causes excessive processing of metric update messages. To reduce the number of metric update messages, another scheme is required. This paper proposes a connection setup scheme that uses flooding-based signaling rather than the distribution of metric information. The proposed scheme requires only flooding of signaling messages with the requested metric information; no routing protocol is required. Evaluations confirm that the proposed scheme achieves connection establishment without excessive overhead. Our analysis shows that the proposed scheme greatly reduces the number of control messages compared to the conventional scheme, while their blocking probabilities are comparable.
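
    The flooding-based setup can be sketched in a toy model (delay as the single metric; all names are my own, not the paper's). Each copy of the setup message carries its own path and accumulated metric, and a node forwards the copy only while the requested constraint still holds, so no node needs a global metric database:

```python
from collections import deque

def flood_setup(links, src, dst, max_delay):
    """Flood a connection-setup message from src toward dst.

    links[n]: list of (neighbor, link_delay) pairs at node n.
    Each message copy carries (node, path_so_far, accumulated_delay);
    copies exceeding the requested delay bound are dropped in-network.
    Returns the best (path, delay) reaching dst, or None if blocked.
    """
    queue = deque([(src, [src], 0.0)])
    best = None
    while queue:
        node, path, delay = queue.popleft()
        if node == dst:
            if best is None or delay < best[1]:
                best = (path, delay)
            continue
        for nxt, d in links.get(node, []):
            if nxt in path:                 # suppress loops per message copy
                continue
            if delay + d <= max_delay:      # forward only if still feasible
                queue.append((nxt, path + [nxt], delay + d))
    return best
```

    Pruning infeasible copies at each hop is what keeps the signaling overhead bounded even though no metric information is ever distributed in advance.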

  15. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  16. Faceted Search

    CERN Document Server

    Tunkelang, Daniel

    2009-01-01

    We live in an information age that requires us, more than ever, to represent, access, and use information. Over the last several decades, we have developed a modern science and technology for information retrieval, relentlessly pursuing the vision of a "memex" that Vannevar Bush proposed in his seminal article, "As We May Think." Faceted search plays a key role in this program. Faceted search addresses weaknesses of conventional search approaches and has emerged as a foundation for interactive information retrieval. User studies demonstrate that faceted search provides more

  17. Perspective Intercultural Bioethics and Human Rights: the search for instruments for resolving ethical conflicts culturally based.

    Directory of Open Access Journals (Sweden)

    Aline ALBUQUERQUE

    2015-10-01

    Full Text Available This article aims to contribute to a deeper reflection on intercultural conflicts within the scope of bioethics, and to point out the problem of using human rights as a theoretical normative mediator of the conflicts in bioethics that bear elements of interculturalism. The methodological steps adopted in this inquiry were: analysis of the concept of intercultural conflict in bioethics, from the perception developed by Colectivo Amani; study of human rights as tools of the culture of human beings, based on Bauman’s and Beauchamp’s theories; investigation of the tools that human rights offer so as to solve intercultural conflicts in bioethics. It was concluded that intercultural bioethics must incorporate to its prescriptive and descriptive tasks norms and institutions of human rights that ensure the participation and social integration of the individuals from communities that are in cultural conflict. Such measures will act as instruments for the solution of intercultural conflicts.

  18. A gravitational wave burst search method based on the S transform

    International Nuclear Information System (INIS)

    Clapson, Andre-Claude; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelberg, Stephane; Varvella, Monica

    2005-01-01

    The detection of burst-type events in the output of ground-based gravitational wave observatories is particularly challenging due to the expected variety of astrophysical waveforms and the issue of discriminating them from instrumental noise. Robust methods, that achieve reasonable detection performances over a wide range of signals, would be most useful. We present a burst-detection pipeline based on a time-frequency transform, the S transform. This transform offers good time-frequency localization of energy without requiring prior knowledge of the event structure. We set up a simple (and robust) event extraction chain. Results are provided for a variety of signals injected in simulated Gaussian statistics data (from the LIGO-Virgo joint working group). Indications are that detection is robust with respect to event type and that efficiency compares reasonably with reference methods. The time-frequency representation is shown to be affected by spectral features such as resonant lines. This emphasizes the role of pre-processing.
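
    The S transform at the heart of such a pipeline admits a compact FFT-based discrete form. The sketch below is a generic implementation of the Stockwell transform, not the authors' pipeline: it is a short-time Fourier transform whose Gaussian window width scales inversely with frequency, which is what gives the frequency-dependent time-frequency localization the abstract describes.

```python
import numpy as np

def s_transform(x):
    """Discrete Stockwell (S) transform of a real signal x of length N.

    Returns an (N//2 + 1, N) complex array: row n is the time series of
    the voice at frequency index n, computed as the inverse FFT of the
    frequency-shifted spectrum multiplied by a Gaussian window whose
    width in frequency grows with n (i.e. narrows in time).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    X = np.fft.fft(x)
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0] = x.mean()                              # zero-frequency row
    m = np.arange(N)
    for n in range(1, N // 2 + 1):
        # Gaussian voice window in the shift variable m, with wrap-around
        gauss = np.exp(-2.0 * np.pi**2 * m**2 / n**2)
        gauss += np.exp(-2.0 * np.pi**2 * (m - N)**2 / n**2)
        S[n] = np.fft.ifft(np.roll(X, -n) * gauss)
    return S
```

    For a burst search, one would threshold |S|² to extract time-frequency pixels of excess energy and cluster them into candidate events.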

  19. Arts-based research and the search for didactical potentials in haiku poems

    DEFF Research Database (Denmark)

    Knudsen, Lars Emmerik Damgaard

    the researcher, the informants and the audience can be perceived in ways that transcend quantitative and qualitative research. Arts based research enjoys more attention in North America and Southern Europe than in the Nordic countries even though it is not entirely ignored in a Danish context. In my research I...... Material Culture Didactics that I teach at the department of Education in Faculty of Arts at Aarhus University, Denmark. Material Culture Didactics celebrates its 10th anniversary but compared to the parallel subjects of Danish, Math and Music the didactical literature and research on Material Culture...... (2012). (Expected) conclusions/findings The key potential to develop didactical literature and research on Material Culture Didactics are the students’ creativity and backgrounds as artists, craftsmen, designers, art teachers, teachers etc. I made the students participate in the exploration of how...

  20. Human memory search

    NARCIS (Netherlands)

    Davelaar, E.J.; Raaijmakers, J.G.W.; Hills, T.T.; Robbins, T.W.; Todd, P.M.

    2012-01-01

    The importance of understanding human memory search is hard to exaggerate: we build and live our lives based on what we remember. This chapter explores the characteristics of memory search, with special emphasis on the use of retrieval cues. We introduce the dependent measures that are obtained