WorldWideScience

Sample records for making search application

  1. Searching Choices: Quantifying Decision-Making Processes Using Search Engine Data.

    Science.gov (United States)

    Moat, Helen Susannah; Olivola, Christopher Y; Chater, Nick; Preis, Tobias

    2016-07-01

    When making a decision, humans consider two types of information: information they have acquired through their prior experience of the world, and further information they gather to support the decision in question. Here, we present evidence that data from search engines such as Google can help us model both sources of information. We show that statistics from search engines on the frequency of content on the Internet can help us estimate the statistical structure of prior experience; and, specifically, we outline how such statistics can inform psychological theories concerning the valuation of human lives, or choices involving delayed outcomes. Turning to information gathering, we show that search query data might help measure human information gathering, and it may predict subsequent decisions. Such data enable us to compare information gathered across nations, where analyses suggest, for example, a greater focus on the future in countries with a higher per capita GDP. We conclude that search engine data constitute a valuable new resource for cognitive scientists, offering a fascinating new tool for understanding the human decision-making process. Copyright © 2016 The Authors. Topics in Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.

  2. Information Retrieval for Education: Making Search Engines Language Aware

    Science.gov (United States)

    Ott, Niels; Meurers, Detmar

    2010-01-01

    Search engines have been a major factor in making the web the successful and widely used information source it is today. Generally speaking, they make it possible to retrieve web pages on a topic specified by the keywords entered by the user. Yet web searching currently does not take into account which of the search results are comprehensible for…

  3. An Improved Botanical Search Application for Middle-and High-School Students

    Science.gov (United States)

    Kajiyama, Tomoko

    2016-01-01

    A previously reported botanical data retrieval application has been improved to make it better suited for use in middle-and high-school science classes. This search interface is ring-structured and treats multi-faceted metadata intuitively, enabling students not only to search for plant names but also to learn about the morphological features and…

  4. Short-term Internet search using makes people rely on search engines when facing unknown issues.

    Science.gov (United States)

    Wang, Yifan; Wu, Lingdan; Luo, Liang; Zhang, Yifen; Dong, Guangheng

    2017-01-01

    Internet search engines, with their powerful search/sort functions and ease of use, have become an indispensable tool for many individuals. The current study tested whether short-term Internet search training can make people more dependent on search engines. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit a search impulse. Compared to the pre-test, subjects in the post-test reported a stronger impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activation in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that a simple six-day Internet search training regimen can make people dependent on search tools when facing unknown issues; people become reliant on Internet search engines easily.

  5. Short-term Internet search using makes people rely on search engines when facing unknown issues.

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    Full Text Available Internet search engines, with their powerful search/sort functions and ease of use, have become an indispensable tool for many individuals. The current study tested whether short-term Internet search training can make people more dependent on search engines. Thirty-one of forty subjects completed the search training study, which included a pre-test, six days of Internet search training, and a post-test. During the pre- and post-tests, subjects were asked to search online for the answers to 40 unusual questions, remember the answers, and recall them in the scanner. Unlearned questions were randomly presented at the recall stage in order to elicit a search impulse. Compared to the pre-test, subjects in the post-test reported a stronger impulse to use search engines to answer unlearned questions. Consistently, subjects showed higher brain activation in the dorsolateral prefrontal cortex and anterior cingulate cortex in the post-test than in the pre-test. In addition, there were significant positive correlations between self-reported search impulse and brain responses in the frontal areas. The results suggest that a simple six-day Internet search training regimen can make people dependent on search tools when facing unknown issues; people become reliant on Internet search engines easily.

  6. Heuristic Search Theory and Applications

    CERN Document Server

    Edelkamp, Stefan

    2011-01-01

    Search has been vital to artificial intelligence from the very beginning as a core technique in problem solving. The authors present a thorough overview of heuristic search with a balance of discussion between theoretical analysis and efficient implementation and application to real-world problems. Current developments in search such as pattern databases and search with efficient use of external memory and parallel processing units on main boards and graphics cards are detailed. Heuristic search as a problem solving tool is demonstrated in applications for puzzle solving, game playing, constraint…

  7. Monte Carlo Tree Search for Continuous and Stochastic Sequential Decision Making Problems

    International Nuclear Information System (INIS)

    Couetoux, Adrien

    2013-01-01

    In this thesis, I studied sequential decision making problems, with a focus on the unit commitment problem. Traditionally solved by dynamic programming methods, this problem is still a challenge, due to its high dimensionality and to the sacrifices made in model accuracy in order to apply state-of-the-art methods. I investigated the applicability of Monte Carlo Tree Search methods for this problem, and for other single-player, stochastic, and continuous sequential decision making problems. In doing so, I obtained a consistent and anytime algorithm that can easily be combined with existing strong heuristic solvers. (author)

  8. Search and Classification Using Multiple Autonomous Vehicles Decision-Making and Sensor Management

    CERN Document Server

    Wang, Yue

    2012-01-01

    Search and Classification Using Multiple Autonomous Vehicles provides a comprehensive study of decision-making strategies for domain search and object classification using multiple autonomous vehicles (MAV) under both deterministic and probabilistic frameworks. It serves as a first discussion of the problem of effective resource allocation using MAV with sensing limitations, i.e., for search and classification missions over large-scale domains, or when there are far more objects to be found and classified than there are autonomous vehicles available. Under such scenarios, search and classification compete for limited sensing resources. This is because search requires vehicle mobility while classification restricts the vehicles to the vicinity of any objects found. The authors develop decision-making strategies to choose between these competing tasks and vehicle-motion-control laws to achieve the proposed management scheme. Deterministic Lyapunov-based, probabilistic Bayesian-based, and risk-based decision-mak...

  9. GIS TECHNOLOGY AND TERRAIN ORTHOPHOTOMAP MAKING FOR MILITARY APPLICATION

    Directory of Open Access Journals (Sweden)

    Elshan Hashimov

    2017-11-01

    Full Text Available In this paper, it is shown that GIS and photogrammetry technologies, and the determination of target coordinates for operational decision making, are very important for military applications and for combat control. With the aim of producing an orthophotomap of the terrain and supporting terrain supervision, a 3D model of a chosen mountainous terrain of the Azerbaijan Republic was constructed using GIS technology. Based on this model, a terrain profile was obtained and mapping was carried out. Using ArcGIS software, the possibility of maintaining control over observable and unobservable parts of the terrain along the supervision line from the supervision point to the target point was investigated.

  10. QUEST: A model to quantify uncertain emergency search techniques, theory and application

    International Nuclear Information System (INIS)

    Johnson, M.M.; Goldsby, M.E.; Plantenga, T.D.; Wilcox, W.B.; Hensley, W.K.

    1996-01-01

    As recent world events show, criminal and terrorist access to nuclear materials is a growing national concern. The national laboratories are taking the lead in developing technologies to counter these potential threats to our national security. Sandia National Laboratories, with support from Pacific Northwest Laboratory and the Remote Sensing Laboratory, has developed QUEST (a model to Quantify Uncertain Emergency Search Techniques) to enhance the performance of organizations in the search for lost or stolen nuclear material. In addition, QUEST supports a wide range of other applications, such as environmental monitoring, nuclear facilities inspections, and searcher training. QUEST simulates the search for nuclear materials and calculates detector response for various source types and locations. The probability of detecting a radioactive source during a search is a function of many different variables. Through calculation of dynamic detector response, QUEST makes possible quantitative comparisons of various sensor technologies and search patterns. The QUEST model can be used to examine the impact of new detector technologies, explore alternative search concepts, and provide interactive search/inspector training

  11. NATO Advanced Research Institute on Search Theory and Applications

    CERN Document Server

    Stone, Lawrence

    1980-01-01

    The NATO Advanced Research Institute on Search Theory and Applications was held at the Hotel Algarve in Praia Da Rocha, Portugal, from March 26 through March 30, 1979, and was sponsored by the NATO Special Programme Panel on Systems Science. There were forty-one participants representing a wide range of backgrounds and interests. The purpose of the institute was to bring together people working in search theory and applications with potential users of search techniques to stimulate the increased application of recently developed search technology to civilian problems such as search and rescue, mineral exploration, surveillance, and fishing. Conversely, it was felt that by exposing search analysts to potential applications and new problems, they would be stimulated to develop new techniques for these applications and problems. The exchange of ideas and problems necessary to accomplish these goals was provided in the meeting workshops. There were three workshops: Search and Rescue, Exploration, and Sur...

  12. Retrieval of publications addressing shared decision making: an evaluation of full-text searches on medical journal websites.

    Science.gov (United States)

    Blanc, Xavier; Collet, Tinh-Hai; Auer, Reto; Iriarte, Pablo; Krause, Jan; Légaré, France; Cornuz, Jacques; Clair, Carole

    2015-04-07

    Full-text searches of articles increase the recall, defined by the proportion of relevant publications that are retrieved. However, this method is rarely used in medical research due to resource constraints. For the purpose of a systematic review of publications addressing shared decision making, a full-text search method was required to retrieve publications where shared decision making does not appear in the title or abstract. The objective of our study was to assess the efficiency and reliability of full-text searches in major medical journals for identifying shared decision making publications. A full-text search was performed on the websites of 15 high-impact journals in general internal medicine to look up publications of any type from 1996-2011 containing the phrase "shared decision making". The search method was compared with a PubMed search of titles and abstracts only. The full-text search was further validated by requesting all publications from the same time period from the individual journal publishers and searching through the collected dataset. The full-text search for "shared decision making" on journal websites identified 1286 publications in 15 journals compared to 119 through the PubMed search. The search within the publisher-provided publications of 6 journals identified 613 publications compared to 646 with the full-text search on the respective journal websites. The concordance rate was 94.3% between both full-text searches. Full-text searching on medical journal websites is an efficient and reliable way to identify relevant articles in the field of shared decision making for review or other purposes. It may be more widely used in biomedical research in other fields in the future, with the collaboration of publishers and journals toward open-access data.

  13. Cuckoo search and firefly algorithm theory and applications

    CERN Document Server

    2014-01-01

    Nature-inspired algorithms such as cuckoo search and firefly algorithm have become popular and widely used in recent years in many applications. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments of cuckoo search and firefly algorithm and their diverse applications. This book will review both theoretical studies and applications with detailed algorithm analysis, implementation and case studies so that readers can benefit most from this book.  Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, travelling salesman problem, neural network, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web service, shape optimization, and others.   This book can serve as an ideal reference for both graduates and researchers in computer scienc...

  14. Age and self-relevance effects on information search during decision making.

    Science.gov (United States)

    Hess, Thomas M; Queen, Tara L; Ennis, Gilda E

    2013-09-01

    We investigated how information search strategies used to support decision making were influenced by self-related implications of the task to the individual. Consistent with the notion of selective engagement, we hypothesized that increased self-relevance would result in more adaptive search behaviors and that this effect would be stronger in older adults than in younger adults. We examined search behaviors in 79 younger and 81 older adults using a process-tracing procedure with 2 different decision tasks. The impact of motivation (i.e., self-related task implications) was examined by manipulating social accountability and the age-related relevance of the task. Although age differences in search strategies were not great, older adults were more likely than younger adults to use simpler strategies in contexts with minimal self-implications. Contrary to expectations, young and old alike were more likely to use noncompensatory than compensatory strategies, even when engaged in systematic search, with education being the most important determinant of search behavior. The results support the notion that older adults are adaptive decision makers and that factors other than age may be more important determinants of performance in situations where knowledge can be used to support performance.

  15. Searching Process with Raita Algorithm and its Application

    Science.gov (United States)

    Rahim, Robbi; Saleh Ahmar, Ansari; Abdullah, Dahlan; Hartama, Dedy; Napitupulu, Darmawan; Putera Utama Siahaan, Andysah; Hasan Siregar, Muhammad Noor; Nasution, Nurliana; Sundari, Siti; Sriadhi, S.

    2018-04-01

    Searching is a common process performed by many computer users. The Raita algorithm is one algorithm that can be used to match and find information in accordance with the patterns entered. The Raita algorithm was applied to a file search application using the Java programming language, and testing showed that the application performs file searches quickly, gives accurate results, and supports many data types.
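
    The record above describes the application only in general terms; as a rough illustration of the matching scheme it names, here is a minimal Python sketch of Raita-style search (a Boyer-Moore-Horspool bad-character shift combined with last/first/middle character tests before the full comparison). It is written for this summary, not taken from the cited application.

```python
def raita_search(text: str, pattern: str):
    """Return start indices of all occurrences of `pattern` in `text`
    using the Raita variant of Boyer-Moore-Horspool matching."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Horspool-style bad-character shift table (last pattern char excluded).
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    first, last, middle = pattern[0], pattern[-1], pattern[m // 2]
    hits, pos = [], 0
    while pos <= n - m:
        window = text[pos:pos + m]
        # Raita's ordering: test last char, then first, then middle, then the rest.
        if window[-1] == last and window[0] == first \
                and window[m // 2] == middle and window == pattern:
            hits.append(pos)
        pos += shift.get(text[pos + m - 1], m)
    return hits


if __name__ == "__main__":
    print(raita_search("the quick brown fox jumps over the lazy dog", "the"))  # [0, 31]
```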

  16. Losing a dime with a satisfied mind: positive affect predicts less search in sequential decision making.

    Science.gov (United States)

    von Helversen, Bettina; Mata, Rui

    2012-12-01

    We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. 2013 APA, all rights reserved

  17. Stochastic local search foundations and applications

    CERN Document Server

    Hoos, Holger H; Stutzle, Thomas

    2004-01-01

    Stochastic local search (SLS) algorithms are among the most prominent and successful techniques for solving computationally difficult problems in many areas of computer science and operations research, including propositional satisfiability, constraint satisfaction, routing, and scheduling. SLS algorithms have also become increasingly popular for solving challenging combinatorial problems in many application areas, such as e-commerce and bioinformatics. Hoos and Stützle offer the first systematic and unified treatment of SLS algorithms. In this groundbreaking new book, they examine the general concepts and specific instances of SLS algorithms and carefully consider their development, analysis and application. The discussion focuses on the most successful SLS methods and explores their underlying principles, properties, and features. This book gives hands-on experience with some of the most widely used search techniques, and provides readers with the necessary understanding and skills to use this powerful too...

  18. Search without Boundaries Using Simple APIs

    Science.gov (United States)

    Tong, Qi

    2009-01-01

    The U.S. Geological Survey (USGS) Library, where the author serves as the digital services librarian, is increasingly challenged to make it easier for users to find information from many heterogeneous information sources. Information is scattered throughout different software applications (i.e., library catalog, federated search engine, link resolver, and vendor websites), and each specializes in one thing. How could the library integrate the functionalities of one application with another and provide a single point of entry for users to search across? To improve the user experience, the library launched an effort to integrate the federated search engine into the library's intranet website. The result is a simple search box that leverages the federated search engine's built-in application programming interfaces (APIs). In this article, the author describes how this project demonstrated the power of APIs and their potential to be used by other enterprise search portals inside or outside of the library.
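
    The article describes the outcome only at a high level. As a rough sketch of the pattern it reports (a single search box that proxies user queries to a search engine's web API and renders the results), the snippet below queries a hypothetical JSON endpoint; the URL, parameter names, and response shape are invented placeholders, not the USGS Library's actual API.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint: stands in for whatever JSON search API the
# federated search engine exposes.
SEARCH_API = "https://search.example.org/api/search"

def simple_search_box(query: str, limit: int = 10):
    """Forward a query from a single search box to a backend search API
    and return (title, url) pairs for display on the intranet page."""
    params = urllib.parse.urlencode({"q": query, "rows": limit})
    with urllib.request.urlopen(f"{SEARCH_API}?{params}") as resp:
        data = json.load(resp)
    return [(doc.get("title", ""), doc.get("url", ""))
            for doc in data.get("results", [])]
```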

  19. Learning search-driven application development with SharePoint 2013

    CERN Document Server

    Tordgeman, Johnny

    2013-01-01

    A fast-paced, practical guide, filled with code examples and demonstrations of enterprise search using SharePoint 2013. This book is written for SharePoint and JavaScript developers who want to get started with SharePoint search and create search-driven applications. The book assumes working knowledge of previous versions of SharePoint and some experience with JavaScript and client-side development.

  20. Secret Shoppers: The Stealth Applicant Search for Higher Education

    Science.gov (United States)

    Dupaul, Stephanie; Harris, Michael S.

    2012-01-01

    Stealth applicants who do not flow through the traditional admission funnel now represent nearly one-third of the national applicant pool. This study employs a consumer behavior framework to examine the behaviors of stealth applicants at a private university. The findings provide a rich illustration of how stealth applicants search for college.…

  1. An Elite Decision Making Harmony Search Algorithm for Optimization Problem

    Directory of Open Access Journals (Sweden)

    Lipu Zhang

    2012-01-01

    Full Text Available This paper describes a new variant of the harmony search algorithm inspired by a well-known idea, "elite decision making." In the new algorithm, the good information captured in the current global best and second best solutions is utilized to generate new solutions, following a probability rule. The generated new solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems, including continuous design variable and integer variable minimization problems from the literature. The computational results show that the proposed new algorithm is competitive with state-of-the-art harmony search variants in finding solutions.
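
    The abstract spells out the mechanism (draw new harmonies from the best and second-best vectors, replace the worst member only when the new fitness is better), so a compact Python sketch of that loop is given below. Parameter names and values such as p_best and bandwidth are illustrative assumptions, not settings from the paper.

```python
import random

def elite_harmony_search(objective, dim, bounds, memory_size=20,
                         iterations=5000, p_best=0.5, bandwidth=0.05):
    """Minimal sketch of an 'elite decision making' harmony search:
    new harmonies are drawn from the best and second-best solutions with a
    small random adjustment, and replace the worst member only if better."""
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(memory_size)]
    fitness = [objective(x) for x in memory]

    for _ in range(iterations):
        order = sorted(range(memory_size), key=lambda i: fitness[i])
        best, second = memory[order[0]], memory[order[1]]
        worst = order[-1]
        # Each component comes from the best or second-best harmony,
        # then gets a small pitch adjustment and is clipped to the bounds.
        new = []
        for d in range(dim):
            source = best if random.random() < p_best else second
            value = source[d] + random.uniform(-bandwidth, bandwidth) * (hi - lo)
            new.append(min(hi, max(lo, value)))
        new_fit = objective(new)
        if new_fit < fitness[worst]:
            memory[worst], fitness[worst] = new, new_fit

    i_best = min(range(memory_size), key=lambda i: fitness[i])
    return memory[i_best], fitness[i_best]

# Example: minimize the sphere function in 5 dimensions.
if __name__ == "__main__":
    x, fx = elite_harmony_search(lambda v: sum(t * t for t in v), dim=5, bounds=(-5.0, 5.0))
    print(x, fx)
```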

  2. Application of Tabu Search Algorithm in Job Shop Scheduling

    Directory of Open Access Journals (Sweden)

    Betrianis Betrianis

    2010-10-01

    Full Text Available Tabu Search is one of the local search methods used to solve combinatorial optimization problems. The method aims to make the search for the best solution in a complex combinatorial optimization problem (NP-hard, e.g., the job shop scheduling problem) more effective, requiring less computational time but with no guarantee of an optimal solution. In this paper, Tabu Search is used to solve job shop scheduling problems consisting of 3 (three) cases, namely the ordering packages of September, October, and November, with the objective of minimizing makespan (Cmax). For each ordering package, a combination of initial solution and tabu list length is tested. These results are then compared with 4 (four) other methods using basic dispatching rules: Shortest Processing Time (SPT), Earliest Due Date (EDD), Most Work Remaining (MWKR), and First Come First Served (FCFS). Scheduling using the Tabu Search algorithm is sensitive to variable changes and gives a shorter makespan than the four other methods.
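
    As a companion to the description above, here is a generic tabu search skeleton in Python for a permutation problem such as job sequencing: a swap neighborhood, a fixed-length tabu list, and an aspiration rule. It is a sketch of the general method, not the scheduling code evaluated in the paper, and the makespan function is left to the caller.

```python
import itertools
from collections import deque

def tabu_search(jobs, makespan, tabu_length=7, iterations=200):
    """Generic tabu search over job permutations. `makespan(sequence)`
    must return the schedule length for a given job order; the neighborhood
    is all pairwise swaps, and recently swapped job pairs are tabu."""
    current = list(jobs)
    best, best_cost = list(current), makespan(current)
    tabu = deque(maxlen=tabu_length)

    for _ in range(iterations):
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            neighbor = list(current)
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            cost = makespan(neighbor)
            move = (current[i], current[j])
            # Aspiration: a tabu move is still allowed if it improves the global best.
            if move not in tabu or cost < best_cost:
                candidates.append((cost, neighbor, move))
        if not candidates:
            break
        cost, current, move = min(candidates, key=lambda c: c[0])
        tabu.append(move)
        if cost < best_cost:
            best, best_cost = list(current), cost
    return best, best_cost
```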

  3. Information search and decision making: effects of age and complexity on strategy use.

    Science.gov (United States)

    Queen, Tara L; Hess, Thomas M; Ennis, Gilda E; Dowd, Keith; Grühn, Daniel

    2012-12-01

    The impact of task complexity on information search strategy and decision quality was examined in a sample of 135 young, middle-aged, and older adults. We were particularly interested in the competing roles of fluid cognitive ability and domain knowledge and experience, with the former being a negative influence and the latter being a positive influence on older adults' performance. Participants utilized 2 decision matrices, which varied in complexity, regarding a consumer purchase. Using process tracing software and an algorithm developed to assess decision strategy, we recorded search behavior, strategy selection, and final decision. Contrary to expectations, older adults were not more likely than the younger age groups to engage in information-minimizing search behaviors in response to increases in task complexity. Similarly, adults of all ages used comparable decision strategies and adapted their strategies to the demands of the task. We also examined decision outcomes in relation to participants' preferences. Overall, it seems that older adults utilize simpler sets of information primarily reflecting the most valued attributes in making their choice. The results of this study suggest that older adults are adaptive in their approach to decision making and that this ability may benefit from accrued knowledge and experience. 2013 APA, all rights reserved

  4. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    Full Text Available One of the techniques used for Multiple Criteria Decision Making (MCDM) is permutation. In the classical form of permutation, it is assumed that the weights and decision matrix components are crisp. However, when group decision making is under consideration and decision makers cannot agree on a crisp value for the weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of permutation is its large computational time, so a Tabu Search (TS) based algorithm is proposed to reduce the computational time. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. The analyses of the results show the proper performance of the proposed method.

  5. The application of foraging theory to the information searching behaviour of general practitioners.

    Science.gov (United States)

    Dwairy, Mai; Dowell, Anthony C; Stahl, Jean-Claude

    2011-08-23

    General Practitioners (GPs) employ strategies to identify and retrieve medical evidence for clinical decision making which take workload and time constraints into account. Optimal Foraging Theory (OFT), initially developed to study animal foraging for food, is used to explore the information searching behaviour of General Practitioners. This study is the first to apply foraging theory within this context. Study objectives were: 1. To identify the sequence and steps deployed in identifying and retrieving evidence for clinical decision making. 2. To utilise Optimal Foraging Theory to assess the effectiveness and efficiency of General Practitioner information searching. GPs from the Wellington region of New Zealand were asked to document in a pre-formatted logbook the steps and outcomes of an information search linked to their clinical decision making, and fill in a questionnaire about their personal, practice and information-searching backgrounds. A total of 115/155 eligible GPs returned a background questionnaire, and 71 completed their information search logbook. GPs spent an average of 17.7 minutes addressing their search for clinical information. Their preferred information sources were discussions with colleagues (38% of sources) and books (22%). These were the two most profitable information foraging sources (15.9 min and 9.5 min search time per answer, compared to 34.3 minutes in databases). GPs nearly always accessed another source when unsuccessful (95% after 1st source), and frequently when successful (43% after 2nd source). Use of multiple sources accounted for 41% of searches, and increased search success from 70% to 89%. By consulting in foraging terms the most 'profitable' sources of information (colleagues, books), rapidly switching sources when unsuccessful, and frequently double checking, GPs achieve an efficient trade-off between maximizing search success and information reliability, and minimizing searching time. As predicted by foraging theory, GPs

  6. The database search problem: a question of rational decision making.

    Science.gov (United States)

    Gittelson, S; Biedermann, A; Bozza, S; Taroni, F

    2012-10-10

    This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
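
    A compact way to restate the decision-theoretic point above (a paraphrase for this summary, not a formula quoted from the paper): with probability p that the selected person is the source, loss l_1 for a false individualization, and loss l_2 for failing to individualize the true source, the expected losses and the resulting decision rule are

```latex
\mathrm{EL}(\text{individualize}) = (1-p)\,l_1,
\qquad
\mathrm{EL}(\text{do not individualize}) = p\,l_2,
\qquad\text{so individualize when}\quad
(1-p)\,l_1 < p\,l_2
\;\Longleftrightarrow\;
p > \frac{l_1}{l_1 + l_2}.
```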

  7. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2004-10-01

    Full Text Available Abstract Background Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. Results We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into

  8. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.

    Science.gov (United States)

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-10-28

    Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Used together
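
    To make the query segmentation-search idea concrete, the sketch below splits a multi-sequence FASTA query into roughly equal chunks (a crude form of load balancing) and builds one search command per chunk. It is an illustration written for this summary, not SS-Wrapper's code; search_tool and its flags are placeholders for whatever wrapped program (BLAST, HMMPFAM, etc.) would actually be run on each cluster node.

```python
def split_fasta(path, n_chunks):
    """Split a multi-sequence FASTA query file into n_chunks files,
    assigning sequences round-robin so each chunk gets a similar load."""
    chunks = [[] for _ in range(n_chunks)]
    with open(path) as fh:
        record, idx = [], 0
        for line in fh:
            if line.startswith(">") and record:
                chunks[idx % n_chunks].append("".join(record))
                record, idx = [], idx + 1
            record.append(line)
        if record:
            chunks[idx % n_chunks].append("".join(record))
    names = []
    for i, chunk in enumerate(chunks):
        name = f"query.part{i}.fasta"
        with open(name, "w") as out:
            out.writelines(chunk)
        names.append(name)
    return names

def build_commands(chunk_files, template="search_tool --query {q} --out {q}.hits"):
    """Map each query chunk to a search command; the real wrapper would
    schedule these across cluster nodes and merge the outputs."""
    return [template.format(q=f) for f in chunk_files]
```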

  9. The application of foraging theory to the information searching behaviour of general practitioners

    Directory of Open Access Journals (Sweden)

    Dowell Anthony C

    2011-08-01

    Full Text Available Abstract Background General Practitioners (GPs) employ strategies to identify and retrieve medical evidence for clinical decision making which take workload and time constraints into account. Optimal Foraging Theory (OFT), initially developed to study animal foraging for food, is used to explore the information searching behaviour of General Practitioners. This study is the first to apply foraging theory within this context. Study objectives were: 1. To identify the sequence and steps deployed in identifying and retrieving evidence for clinical decision making. 2. To utilise Optimal Foraging Theory to assess the effectiveness and efficiency of General Practitioner information searching. Methods GPs from the Wellington region of New Zealand were asked to document in a pre-formatted logbook the steps and outcomes of an information search linked to their clinical decision making, and fill in a questionnaire about their personal, practice and information-searching backgrounds. Results A total of 115/155 eligible GPs returned a background questionnaire, and 71 completed their information search logbook. GPs spent an average of 17.7 minutes addressing their search for clinical information. Their preferred information sources were discussions with colleagues (38% of sources) and books (22%). These were the two most profitable information foraging sources (15.9 min and 9.5 min search time per answer, compared to 34.3 minutes in databases). GPs nearly always accessed another source when unsuccessful (95% after 1st source), and frequently when successful (43% after 2nd source). Use of multiple sources accounted for 41% of searches, and increased search success from 70% to 89%. Conclusions By consulting in foraging terms the most 'profitable' sources of information (colleagues, books), rapidly switching sources when unsuccessful, and frequently double checking, GPs achieve an efficient trade-off between maximizing search success and information reliability, and

  10. The application of system dynamics modelling to environmental health decision-making and policy - a scoping review.

    Science.gov (United States)

    Currie, Danielle J; Smith, Carl; Jagals, Paul

    2018-03-27

    Policy and decision-making processes are routinely challenged by the complex and dynamic nature of environmental health problems. System dynamics modelling has demonstrated considerable value across a number of different fields to help decision-makers understand and predict the dynamic behaviour of complex systems in support of the development of effective policy actions. In this scoping review we investigate if, and in what contexts, system dynamics modelling is being used to inform policy or decision-making processes related to environmental health. Four electronic databases and the grey literature were systematically searched to identify studies that intersect the areas of environmental health, system dynamics modelling, and decision-making. Studies identified in the initial screening were further screened for their contextual, methodological and application-related relevancy. Studies deemed 'relevant' or 'highly relevant' according to all three criteria were included in this review. Key themes related to the rationale, impact and limitations of using system dynamics in the context of environmental health decision-making and policy were analysed. We identified a limited number of relevant studies (n = 15), two-thirds of which were conducted between 2011 and 2016. The majority of applications occurred in non-health related sectors (n = 9) including transportation, public utilities, water, housing, food, agriculture, and urban and regional planning. Applications were primarily targeted at micro-level (local, community or grassroots) decision-making processes (n = 9), with macro-level (national or international) decision-making to a lesser degree. There was significant heterogeneity in the stated rationales for using system dynamics and the intended impact of the system dynamics model on decision-making processes. A series of user-related, technical and application-related limitations and challenges were identified. None of the reported limitations or challenges

  11. Block Architecture Problem with Depth First Search Solution and Its Application

    Science.gov (United States)

    Rahim, Robbi; Abdullah, Dahlan; Simarmata, Janner; Pranolo, Andri; Saleh Ahmar, Ansari; Hidayat, Rahmat; Napitupulu, Darmawan; Nurdiyanto, Heri; Febriadi, Bayu; Zamzami, Z.

    2018-01-01

    Searching is a common process performed by many computer users. The Raita algorithm is one algorithm that can be used to match and find information in accordance with the patterns entered. The Raita algorithm was applied to a file search application using the Java programming language, and testing showed that the application performs file searches quickly, gives accurate results, and supports many data types.

  12. Cognitive biases and heuristics in medical decision making: a critical review using a systematic search strategy.

    Science.gov (United States)

    Blumenthal-Barby, J S; Krieger, Heather

    2015-05-01

    The role of cognitive biases and heuristics in medical decision making is of growing interest. The purpose of this study was to determine whether studies on cognitive biases and heuristics in medical decision making are based on actual or hypothetical decisions and are conducted with populations that are representative of those who typically make the medical decision; to categorize the types of cognitive biases and heuristics found and whether they are found in patients or in medical personnel; and to critically review the studies based on standard methodological quality criteria. Data sources were original, peer-reviewed, empirical studies on cognitive biases and heuristics in medical decision making found in Ovid Medline, PsycINFO, and the CINAHL databases published in 1980-2013. Predefined exclusion criteria were used to identify 213 studies. During data extraction, information was collected on type of bias or heuristic studied, respondent population, decision type, study type (actual or hypothetical), study method, and study conclusion. Of the 213 studies analyzed, 164 (77%) were based on hypothetical vignettes, and 175 (82%) were conducted with representative populations. Nineteen types of cognitive biases and heuristics were found. Only 34% of studies (n = 73) investigated medical personnel, and 68% (n = 145) confirmed the presence of a bias or heuristic. Each methodological quality criterion was satisfied by more than 50% of the studies, except for sample size and validated instruments/questions. Limitations are that existing terms were used to inform search terms, and study inclusion criteria focused strictly on decision making. Most of the studies on biases and heuristics in medical decision making are based on hypothetical vignettes, raising concerns about applicability of these findings to actual decision making. Biases and heuristics have been underinvestigated in medical personnel compared with patients. © The Author(s) 2014.

  13. A Novel Personalized Web Search Model

    Institute of Scientific and Technical Information of China (English)

    ZHU Zhengyu; XU Jingqiu; TIAN Yunyan; REN Xiang

    2007-01-01

    A novel personalized Web search model is proposed. The new system, as a middleware between a user and a Web search engine, is set up on the client machine. It can learn a user's preference implicitly and then generate the user profile automatically. When the user inputs query keywords, the system can automatically generate a few personalized expansion words by computing the term-term associations according to the current user profile, and then these words together with the query keywords are submitted to a popular search engine such as Yahoo or Google. These expansion words help to express accurately the user's search intention. The new Web search model can make a common search engine personalized, that is, the search engine can return different search results to different users who input the same keywords. The experimental results show the feasibility and applicability of the presented work.
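
    The abstract outlines the expansion mechanism (term-term associations computed from the user profile, top associated terms appended to the query), and the following Python sketch mirrors that idea in a much-simplified form. The co-occurrence counting and the choice of two expansion words are assumptions made for illustration, not the paper's actual model.

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_profile(documents):
    """Count term co-occurrence within each document the user has viewed;
    this plays the role of the implicitly learned user profile."""
    assoc = defaultdict(Counter)
    for doc in documents:
        terms = set(doc.lower().split())
        for a, b in combinations(terms, 2):
            assoc[a][b] += 1
            assoc[b][a] += 1
    return assoc

def expand_query(keywords, assoc, k=2):
    """Add up to k expansion words most strongly associated with the
    query keywords in the user profile, then submit the longer query."""
    scores = Counter()
    for word in keywords:
        scores.update(assoc.get(word.lower(), Counter()))
    expansions = [w for w, _ in scores.most_common() if w not in keywords][:k]
    return list(keywords) + expansions

# Example: a user who mostly reads about espresso machines.
profile = build_profile([
    "espresso machine grinder portafilter",
    "espresso beans grinder burr",
])
print(expand_query(["grinder"], profile, k=2))
```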

  14. Making Patron Data Work Harder: User Search Terms as Access Points?

    Directory of Open Access Journals (Sweden)

    Jason A. Clark

    2008-06-01

    Full Text Available Montana State University (MSU) Libraries are experimenting with re-using patron-generated data to create browseable access points for the Electronic Theses and Dissertations (ETD) collection. A beta QueryCatcher module logs recent search terms and the number of associated hits. These terms are used to create browseable lists and tagclouds which enhance access to the ETD collection. Gathering and reusing information about user behavior is an emerging trend in web application development. This article outlines MSU Libraries' reasoning for moving towards a user-generated model and provides a complete walkthrough of the steps in building the application and example code.
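
    As an illustration of the QueryCatcher idea described above (log recent search terms and their hit counts, then reuse them as weighted browse points), here is a small Python sketch. The class name, the rule of ignoring zero-hit searches, and the font-size scaling are invented for this example; MSU's implementation and schema may differ.

```python
from collections import Counter

class QueryLog:
    """Toy stand-in for a query-catching module: record recent search terms
    (keeping only searches that returned results) and expose them as
    tag-cloud weights."""
    def __init__(self):
        self.counts = Counter()

    def log(self, term, hits):
        if hits > 0:                      # ignore failed searches
            self.counts[term.strip().lower()] += 1

    def tag_cloud(self, min_size=80, max_size=220):
        """Scale each term's frequency into a CSS font-size percentage."""
        if not self.counts:
            return {}
        lo, hi = min(self.counts.values()), max(self.counts.values())
        span = (hi - lo) or 1
        return {t: min_size + (c - lo) * (max_size - min_size) // span
                for t, c in self.counts.items()}

log = QueryLog()
for term, hits in [("soil moisture", 12), ("geothermal", 3), ("geothermal", 7), ("xyzzy", 0)]:
    log.log(term, hits)
print(log.tag_cloud())
```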

  15. Information search in health care decision-making: a study of word-of-mouth and internet information users.

    Science.gov (United States)

    Snipes, Robin L; Ingram, Rhea; Jiang, Pingjun

    2005-01-01

    This paper investigates how individual consumers may differ in their information search behavior in health care decision-making. Results indicate that most consumers still use word-of-mouth as a primary information source for health care decisions. However, usage of the Internet is increasing. The results of this study indicate that consumers who are most likely to use the Internet for health care information are single, younger, and less educated, whereas consumers who are most likely to use word-of-mouth are middle-aged, married, with higher income and higher education. Surprisingly, no significant gender difference was found in information search behavior for health care decision-making. The results also suggest that consumers with the highest tendency to use word-of-mouth are also the lowest users of the Internet in health care decision-making. Implications of these findings are discussed.

  16. Making a search engine for Indocean - A database of abstracts: An experience

    Digital Repository Service at National Institute of Oceanography (India)

    Tapaswi, M.P.; Haravu, L.J.

    Information Management: Trends and Issues (Festschrift in honour of Prof S. Seetharama). Making a Search Engine for Indocean - A Database of Abstracts: An Experience. Murari P Tapaswi* and L J Haravu** (*Documentation Officer, National Information...)

  17. Developing a Process Model for the Forensic Extraction of Information from Desktop Search Applications

    Directory of Open Access Journals (Sweden)

    Timothy Pavlic

    2008-03-01

    Full Text Available Desktop search applications can contain cached copies of files that were deleted from the file system. Forensic investigators see this as a potential source of evidence, as documents deleted by suspects may still exist in the cache. Whilst there have been attempts at recovering data collected by desktop search applications, there is no methodology governing the process, nor discussion on the most appropriate means to do so. This article seeks to address this issue by developing a process model that can be applied when developing an information extraction application for desktop search applications, discussing preferred methods and the limitations of each. This work represents a more structured approach than other forms of current research.

  18. Fuzzy multiple attribute decision making methods and applications

    CERN Document Server

    Chen, Shu-Jen

    1992-01-01

    This monograph is intended for an advanced undergraduate or graduate course as well as for researchers, who want a compilation of developments in this rapidly growing field of operations research. This is a sequel to our previous works: "Multiple Objective Decision Making--Methods and Applications: A state-of-the-Art Survey" (No.164 of the Lecture Notes); "Multiple Attribute Decision Making--Methods and Applications: A State-of-the-Art Survey" (No.186 of the Lecture Notes); and "Group Decision Making under Multiple Criteria--Methods and Applications" (No.281 of the Lecture Notes). In this monograph, the literature on methods of fuzzy Multiple Attribute Decision Making (MADM) has been reviewed thoroughly and critically, and classified systematically. This study provides readers with a capsule look into the existing methods, their characteristics, and applicability to the analysis of fuzzy MADM problems. The basic concepts and algorithms from the classical MADM methods have been used in the development of the f...

  19. Making Patron Data Work Harder: User Search Terms as Access Points?

    OpenAIRE

    Jason A. Clark

    2008-01-01

    Montana State University (MSU) Libraries are experimenting with re-using patron-generated data to create browseable access points for the Electronic Theses and Dissertations (ETD) collection. A beta QueryCatcher module logs recent search terms and the number of associated hits. These terms are used to create browseable lists and tagclouds which enhance access to the ETD collection. Gathering and reusing information about user behavior is an emerging trend in web application development. This ...

  20. Search without Boundaries Using Simple APIs

    Science.gov (United States)

    Tong, Qi (Helen)

    2009-01-01

    The U.S. Geological Survey (USGS) Library, where the author serves as the digital services librarian, is increasingly challenged to make it easier for users to find information from many heterogeneous information sources. Information is scattered throughout different software applications (i.e., library catalog, federated search engine, link…

  1. Decision making in family medicine: randomized trial of the effects of the InfoClinique and Trip database search engines.

    Science.gov (United States)

    Labrecque, Michel; Ratté, Stéphane; Frémont, Pierre; Cauchon, Michel; Ouellet, Jérôme; Hogg, William; McGowan, Jessie; Gagnon, Marie-Pierre; Njoya, Merlin; Légaré, France

    2013-10-01

    To compare the ability of users of 2 medical search engines, InfoClinique and the Trip database, to provide correct answers to clinical questions and to explore the perceived effects of the tools on the clinical decision-making process. Randomized trial. Three family medicine units of the family medicine program of the Faculty of Medicine at Laval University in Quebec city, Que. Fifteen second-year family medicine residents. Residents generated 30 structured questions about therapy or preventive treatment (2 questions per resident) based on clinical encounters. Using an Internet platform designed for the trial, each resident answered 20 of these questions (their own 2, plus 18 of the questions formulated by other residents, selected randomly) before and after searching for information with 1 of the 2 search engines. For each question, 5 residents were randomly assigned to begin their search with InfoClinique and 5 with the Trip database. The ability of residents to provide correct answers to clinical questions using the search engines, as determined by third-party evaluation. After answering each question, participants completed a questionnaire to assess their perception of the engine's effect on the decision-making process in clinical practice. Of 300 possible pairs of answers (1 answer before and 1 after the initial search), 254 (85%) were produced by 14 residents. Of these, 132 (52%) and 122 (48%) pairs of answers concerned questions that had been assigned an initial search with InfoClinique and the Trip database, respectively. Both engines produced an important and similar absolute increase in the proportion of correct answers after searching (26% to 62% for InfoClinique, for an increase of 36%; 24% to 63% for the Trip database, for an increase of 39%; P = .68). For all 30 clinical questions, at least 1 resident produced the correct answer after searching with either search engine. The mean (SD) time of the initial search for each question was 23.5 (7

  2. Harmony Search Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    X. Z. Gao

    2015-01-01

    Full Text Available The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.

  3. Effective Image Database Search via Dimensionality Reduction

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Aanæs, Henrik

    2008-01-01

    Image search using the bag-of-words image representation is investigated further in this paper. This approach has shown promising results for large scale image collections, making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image. It is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that we substantially reduce the dimensionality of the features by the use of PCA and addition of color. Building of the visual vocabulary is typically done using k-means. We investigate a clustering algorithm based on the leader follower principle (LF-clustering), in which the number of clusters is not fixed. The adaptive nature of LF-clustering is shown to improve the quality of the visual vocabulary using this…
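
    The distinctive step in the abstract is vocabulary building with leader-follower clustering rather than k-means, so the sketch below shows that step on already PCA-reduced descriptors. It is a generic illustration with an assumed distance threshold, not the authors' implementation.

```python
import numpy as np

def leader_follower(descriptors, radius):
    """Leader-follower vocabulary building sketch: each descriptor joins the
    nearest existing cluster centre if it lies within `radius`, otherwise it
    becomes a new centre, so the vocabulary size is not fixed in advance
    (unlike k-means). `descriptors` is an (n, d) array of reduced features."""
    centres, counts = [], []
    for x in descriptors:
        if centres:
            dists = np.linalg.norm(np.asarray(centres) - x, axis=1)
            j = int(np.argmin(dists))
            if dists[j] <= radius:
                counts[j] += 1
                # Move the centre towards the new member (running mean).
                centres[j] = centres[j] + (x - centres[j]) / counts[j]
                continue
        centres.append(x.astype(float))
        counts.append(1)
    return np.asarray(centres)

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 8))        # pretend PCA-reduced descriptors
vocabulary = leader_follower(features, radius=2.5)
print(vocabulary.shape)
```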

  4. Making Temporal Search More Central in Spatial Data Infrastructures

    Science.gov (United States)

    Corti, P.; Lewis, B.

    2017-10-01

    A temporally enabled Spatial Data Infrastructure (SDI) is a framework of geospatial data, metadata, users, and tools intended to provide an efficient and flexible way to use spatial information which includes the historical dimension. One of the key software components of an SDI is the catalogue service which is needed to discover, query, and manage the metadata. A search engine is a software system capable of supporting fast and reliable search, which may use any means necessary to get users to the resources they need quickly and efficiently. These techniques may include features such as full text search, natural language processing, weighted results, temporal search based on enrichment, visualization of patterns in distributions of results in time and space using temporal and spatial faceting, and many others. In this paper we will focus on the temporal aspects of search which include temporal enrichment using a time miner - a software engine able to search for date components within a larger block of text, the storage of time ranges in the search engine, handling historical dates, and the use of temporal histograms in the user interface to display the temporal distribution of search results.
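
    As a toy illustration of the temporal enrichment described above (a "time miner" that pulls date components out of free text so results can be faceted on a temporal histogram), the Python sketch below extracts four-digit years and buckets them by decade. A production time miner would of course also handle ranges, historical calendars, and fuzzier expressions.

```python
import re
from collections import Counter

def mine_years(text, earliest=1000, latest=2100):
    """Very small stand-in for a time miner: pull four-digit years out of
    free-text metadata and keep only the plausible ones."""
    years = [int(y) for y in re.findall(r"\b(1\d{3}|20\d{2})\b", text)]
    return [y for y in years if earliest <= y <= latest]

def temporal_histogram(records, bin_size=10):
    """Bucket mined years by decade so a UI can draw a temporal facet."""
    hist = Counter()
    for rec in records:
        for y in mine_years(rec):
            hist[(y // bin_size) * bin_size] += 1
    return dict(sorted(hist.items()))

docs = ["Boundary survey of 1854, revised 1901", "Aerial photos 1948-1952"]
print(temporal_histogram(docs))
```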

  5. Open Search Environments: The Free Alternative to Commercial Search Services

    Directory of Open Access Journals (Sweden)

    Adrian O'Riordan

    2014-06-01

    Full Text Available Open search systems present a free and less restricted alternative to commercial search services. This paper explores the space of open search technology, looking in particular at the issue of interoperability. A description of current protocols and formats for engineering open search applications is presented. The suitability of these technologies and issues around their adoption and operation are discussed. This open search approach is proving an especially fitting choice in applications involving the harvesting of resources and information integration. Principal among the technological solutions are OpenSearch and SRU. OpenSearch and SRU implement a federated model to enable existing and new search engines and search clients to communicate. Applications and instances where OpenSearch and SRU can be combined are presented. Other relevant technologies such as OpenURL, Apache Solr, and OAI-PMH are also discussed. The deployment of these freely licensed open standards in digital library applications is now a genuine alternative to commercial or proprietary systems.
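
    To make the SRU side concrete, the snippet below builds a searchRetrieve request URL using the standard SRU parameter names (operation, version, query, startRecord, maximumRecords); the base URL is a placeholder, and the CQL query shown is only an example.

```python
from urllib.parse import urlencode

def sru_search_url(base_url, cql_query, start=1, maximum=10, version="1.2"):
    """Build an SRU searchRetrieve request URL. The parameter names follow
    the SRU specification; the base URL is a placeholder for whichever SRU
    endpoint a library actually exposes."""
    params = {
        "operation": "searchRetrieve",
        "version": version,
        "query": cql_query,          # CQL, e.g. 'dc.title = "search theory"'
        "startRecord": start,
        "maximumRecords": maximum,
    }
    return f"{base_url}?{urlencode(params)}"

print(sru_search_url("https://sru.example.org/catalog", 'dc.title = "open search"'))
```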

  6. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    Science.gov (United States)

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes support for multi-component compounds (mixtures), import and export of SD-files, and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework

  7. Institutionalizing telemedicine applications: the challenge of legitimizing decision-making.

    Science.gov (United States)

    Zanaboni, Paolo; Lettieri, Emanuele

    2011-09-28

    During the last decades a variety of telemedicine applications have been trialed worldwide. However, telemedicine is still an example of major potential benefits that have not been fully attained. Health care regulators are still debating why institutionalizing telemedicine applications on a large scale has been so difficult and why health care professionals are often averse or indifferent to telemedicine applications, thus preventing them from becoming part of everyday clinical routines. We believe that the lack of consolidated procedures for supporting decision making by health care regulators is a major weakness. We aim to further the current debate on how to legitimize decision making about the institutionalization of telemedicine applications on a large scale. We discuss (1) three main requirements--rationality, fairness, and efficiency--that should underpin decision making so that the relevant stakeholders perceive them as being legitimate, and (2) the domains and criteria for comparing and assessing telemedicine applications--benefits and sustainability. According to these requirements and criteria, we illustrate a possible reference process for legitimate decision making about which telemedicine applications to implement on a large scale. This process adopts the health care regulators' perspective and is made up of 2 subsequent stages, in which a preliminary proposal and then a full proposal are reviewed.

  8. Approximate search for Big Data with applications in information security – A survey

    Directory of Open Access Journals (Sweden)

    Slobodan Petrović

    2015-04-01

    Full Text Available This paper is a survey of approximate search techniques in very large data sets (so-called Big Data). After a short introduction, techniques for speeding up approximate search in such data sets by exploiting the inherent bit-parallelism of computers are described. Applications of such search to information security problems (digital forensics, malware detection, intrusion detection) are then reviewed. Finally, the need for constraints in approximate search regarding the number of so-called elementary edit operations and the run lengths of particular elementary edit operations is explained, and the status of ongoing research on the efficient implementation of approximate search algorithms with various constraints is given.
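
    The bit-parallel idea the survey refers to can be sketched with the classic Shift-And/bitap recurrence. The version below reports occurrences of a pattern with at most k mismatches (Hamming distance); it is a minimal illustration, not one of the constrained algorithms discussed in the paper.

```python
def bitap_hamming(text, pattern, k):
    """Return start positions where `pattern` occurs in `text` with at most k
    character mismatches, using one bit-parallel state word per error level."""
    m = len(pattern)
    masks = {}
    for i, ch in enumerate(pattern):
        masks[ch] = masks.get(ch, 0) | (1 << i)
    R = [0] * (k + 1)              # R[d]: bit i set <=> pattern[:i+1] matched with <= d mismatches
    accept = 1 << (m - 1)
    hits = []
    for j, ch in enumerate(text):
        prev_above = 0             # previous value of R[d-1]
        for d in range(k + 1):
            old = R[d]
            R[d] = ((old << 1) | 1) & masks.get(ch, 0)      # extend with a matching character
            if d > 0:
                R[d] |= (prev_above << 1) | 1               # or spend one mismatch on `ch`
            prev_above = old
        if R[k] & accept:
            hits.append(j - m + 1)
    return hits

# Example: bitap_hamming("malware signature scan", "signatvre", 1) -> [8]
```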

  9. Search on Rugged Landscapes

    DEFF Research Database (Denmark)

    Billinger, Stephan; Stieglitz, Nils; Schumacher, Terry

    2014-01-01

    This paper presents findings from a laboratory experiment on human decision-making in a complex combinatorial task. We find strong evidence for a behavioral model of adaptive search. Success narrows down search to the neighborhood of the status quo, while failure promotes gradually more explorative...... for local improvements too early. We derive stylized decision rules that generate the search behavior observed in the experiment and discuss the implications of our findings for individual decision-making and organizational search....

  10. Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application

    Science.gov (United States)

    Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.

    2017-12-01

    This abstract presents a review of work related to moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimation of key source parameters provides important insights into the nature of failure in the earth. For example, if the recovered source parameters are indicative of a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. However, an important follow-up question in this application is: does an alternative hypothesis, such as a deeper source with a large double-couple component, explain the data approximately as well as the best solution? Here we address the issue of both finding the most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015), we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. One benefit of the grid search is that we can quantitatively assess the extent to which model parameters are resolved, which provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well resolved. Another benefit of the grid search is that it proves to be a flexible framework into which different pieces of information can easily be incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we focus primarily on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space. We then focus on application to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and
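
    Stripped of the seismological detail, the exhaustive strategy described above amounts to evaluating a misfit function over a discretized parameter grid and keeping the full table so that the flatness of the minimum can be inspected afterwards. The sketch below shows that skeleton; the parameter axes and misfit function are placeholders, not the paper's discretization.

```python
import itertools

def grid_search(misfit, axes):
    """Exhaustive grid search: `axes` maps parameter name -> list of trial values,
    `misfit` maps a parameter dict to a scalar. Returns the best point and the full
    table of (misfit, parameters) pairs for later uncertainty assessment."""
    names = list(axes)
    table = []
    for combo in itertools.product(*(axes[name] for name in names)):
        params = dict(zip(names, combo))
        table.append((misfit(params), params))
    best_value, best_params = min(table, key=lambda row: row[0])
    return best_params, best_value, table

# Illustrative axes only (depth in km, scalar moment in N*m):
# grid_search(my_misfit, {"depth": [1, 5, 10, 20], "moment": [1e15, 1e16, 1e17]})
```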

  11. Mental workload while driving: effects on visual search, discrimination, and decision making.

    Science.gov (United States)

    Recarte, Miguel A; Nunes, Luis M

    2003-06-01

    The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as performance criteria. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.

  12. The EBI search engine: EBI search as a service—making biological data accessible for all

    Science.gov (United States)

    Park, Young M.; Squizzato, Silvano; Buso, Nicola; Gur, Tamer

    2017-01-01

    Abstract We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL–EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing ‘Search as a Service’ capabilities, which are the main topic of this article. PMID:28472374

  13. Application of decision-making methodology to certificate-of-need applications for CT scanners

    International Nuclear Information System (INIS)

    Gottinger, H.W.; Shapiro, P.

    1985-01-01

    This paper describes a case study and application of decision-making methodology to two competing Certificate of Need (CON) applications for CT body scanners. We demonstrate the use of decision-making methodology by evaluating the CON applications. Explicit value judgements reflecting the monetary equivalent of the different categories of benefit are introduced to facilitate this comparison. The difference between the benefits (measured in monetary terms) and costs is called the net social value. Any alternative with positive net social value is judged economically justifiable, and the alternative with the greatest net social value is judged the most attractive. (orig.)

  14. The EBI search engine: EBI search as a service-making biological data accessible for all.

    Science.gov (United States)

    Park, Young M; Squizzato, Silvano; Buso, Nicola; Gur, Tamer; Lopez, Rodrigo

    2017-07-03

    We present an update of the EBI Search engine, an easy-to-use fast text search and indexing system with powerful data navigation and retrieval capabilities. The interconnectivity that exists between data resources at EMBL-EBI provides easy, quick and precise navigation and a better understanding of the relationship between different data types that include nucleotide and protein sequences, genes, gene products, proteins, protein domains, protein families, enzymes and macromolecular structures, as well as the life science literature. EBI Search provides a powerful RESTful API that enables its integration into third-party portals, thus providing 'Search as a Service' capabilities, which are the main topic of this article. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
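
    Both records above point to the RESTful interface that underpins the 'Search as a Service' use of EBI Search. A minimal client call could look like the sketch below; the endpoint path, domain name, and query parameters shown are assumptions for illustration rather than the documented interface.

```python
import requests

BASE = "https://www.ebi.ac.uk/ebisearch/ws/rest"  # assumed base URL for the REST service

def ebi_search(domain, query, size=10):
    """Query one EBI Search domain and return the entry hits from the JSON response."""
    response = requests.get(
        f"{BASE}/{domain}",
        params={"query": query, "size": size, "format": "json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("entries", [])

# Example (hypothetical domain and query): ebi_search("uniprot", "p53 AND organism:human")
```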

  15. Brazilian academic search filter: application to the scientific literature on physical activity.

    Science.gov (United States)

    Sanz-Valero, Javier; Ferreira, Marcos Santos; Castiel, Luis David; Wanden-Berghe, Carmina; Guilam, Maria Cristina Rodrigues

    2010-10-01

    To develop a search filter in order to retrieve scientific publications on physical activity from Brazilian academic institutions. The academic search filter consisted of the descriptor "exercise" associated through the term AND, to the names of the respective academic institutions, which were connected by the term OR. The MEDLINE search was performed with PubMed on 11/16/2008. The institutions were selected according to the classification from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for interuniversity agreements. A total of 407 references were retrieved, corresponding to about 0.9% of all articles about physical activity and 0.5% of the Brazilian academic publications indexed in MEDLINE on the search date. When compared with the manual search undertaken, the search filter (descriptor + institutional filter) showed a sensitivity of 99% and a specificity of 100%. The institutional search filter showed high sensitivity and specificity, and is applicable to other areas of knowledge in health sciences. It is desirable that every Brazilian academic institution establish its "standard name/brand" in order to efficiently retrieve their scientific literature.
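
    The filter structure described above (a descriptor ANDed with an OR-chain of institution names) is easy to assemble programmatically. The sketch below builds such a PubMed query string; the institution names and the field tags are illustrative assumptions, not the filter published by the authors.

```python
# Hypothetical institution list; the real filter used the CAPES classification.
institutions = [
    "Universidade de Sao Paulo",
    "Universidade Federal do Rio de Janeiro",
    "Fundacao Oswaldo Cruz",
]

# OR the institutions together, then AND the block with the descriptor.
institutional_filter = " OR ".join(f'"{name}"[Affiliation]' for name in institutions)
query = f'"exercise"[MeSH Terms] AND ({institutional_filter})'

print(query)
# "exercise"[MeSH Terms] AND ("Universidade de Sao Paulo"[Affiliation] OR ...)
```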

  16. Needle Custom Search: Recall-oriented search on the Web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon; de Rijke, Maarten; Kenter, Tom; de Vries, A.P.; Zhai, Chen Xiang; de Jong, Franciska M.G.; Radinsky, Kira; Hofmann, Katja

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and thereby increases the efficiency

  17. Needle Custom Search : Recall-oriented search on the web using semantic annotations

    NARCIS (Netherlands)

    Kaptein, Rianne; Koot, Gijs; Huis in 't Veld, Mirjam A.A.; van den Broek, Egon L.

    2014-01-01

    Web search engines are optimized for early precision, which makes it difficult to perform recall-oriented tasks using these search engines. In this article, we present our tool Needle Custom Search. This tool exploits semantic annotations of Web search results and thereby increases the efficiency

  18. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG.

    Science.gov (United States)

    Younger, Paula; Boddy, Kate

    2009-06-01

    The researchers involved in this study work at Exeter Health library and at the Complementary Medicine Unit, Peninsula School of Medicine and Dentistry (PCMD). Within this collaborative environment it is possible to access the electronic resources of three institutions. This includes access to AMED and other databases using different interfaces. The aim of this study was to investigate whether searching different interfaces to the AMED allied health and complementary medicine database produced the same results when using identical search terms. The following Internet-based AMED interfaces were searched: DIALOG DataStar; EBSCOhost and OVID SP_UI01.00.02. Search results from all three databases were saved in an endnote database to facilitate analysis. A checklist was also compiled comparing interface features. In our initial search, DIALOG returned 29 hits, OVID 14 and Ebsco 8. If we assume that DIALOG returned 100% of potential hits, OVID initially returned only 48% of hits and EBSCOhost only 28%. In our search, a researcher using the Ebsco interface to carry out a simple search on AMED would miss over 70% of possible search hits. Subsequent EBSCOhost searches on different subjects failed to find between 21 and 86% of the hits retrieved using the same keywords via DIALOG DataStar. In two cases, the simple EBSCOhost search failed to find any of the results found via DIALOG DataStar. Depending on the interface, the number of hits retrieved from the same database with the same simple search can vary dramatically. Some simple searches fail to retrieve a substantial percentage of citations. This may result in an uninformed literature review, research funding application or treatment intervention. In addition to ensuring that keywords, spelling and medical subject headings (MeSH) accurately reflect the nature of the search, database users should include wildcards and truncation and adapt their search strategy substantially to retrieve the maximum number of appropriate

  19. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  

  20. Uncertain multi-attribute decision making methods and applications

    CERN Document Server

    Xu, Zeshui

    2015-01-01

    This book introduces methods for uncertain multi-attribute decision making including uncertain multi-attribute group decision making and their applications to supply chain management, investment decision making, personnel assessment, redesigning products, maintenance services, military system efficiency evaluation. Multi-attribute decision making, also known as multi-objective decision making with finite alternatives, is an important component of modern decision science. The theory and methods of multi-attribute decision making have been extensively applied in engineering, economics, management and military contexts, such as venture capital project evaluation, facility location, bidding, development ranking of industrial sectors and so on. Over the last few decades, great attention has been paid to research on multi-attribute decision making in uncertain settings, due to the increasing complexity and uncertainty of supposedly objective aspects and the fuzziness of human thought. This book can be used as a ref...

  1. Large Neighborhood Search

    DEFF Research Database (Denmark)

    Pisinger, David; Røpke, Stefan

    2010-01-01

    Heuristics based on large neighborhood search have recently shown outstanding results in solving various transportation and scheduling problems. Large neighborhood search methods explore a complex neighborhood by use of heuristics. Using large neighborhoods makes it possible to find better...... candidate solutions in each iteration and hence traverse a more promising search path. Starting from the large neighborhood search method, we give an overview of very large scale neighborhood search methods and discuss recent variants and extensions like variable depth search and adaptive large neighborhood...
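
    The destroy-and-repair loop at the core of large neighborhood search can be written down in a few lines. The skeleton below is a generic, improvement-only sketch with user-supplied destroy and repair heuristics; it is not the adaptive variant discussed in the chapter.

```python
import random

def large_neighborhood_search(initial, cost, destroy, repair, iterations=1000, seed=0):
    """Generic LNS skeleton: destroy part of the incumbent, repair it heuristically,
    and accept the candidate only if it improves the current solution."""
    rng = random.Random(seed)
    best = current = initial
    for _ in range(iterations):
        partial = destroy(current, rng)       # remove a subset of the solution
        candidate = repair(partial, rng)      # reinsert the removed elements heuristically
        if cost(candidate) < cost(current):   # improvement-only acceptance criterion
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best
```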

  2. Applications of Modern Analysis Techniques in Searching back Ancient Art Ceramic Technologies

    Directory of Open Access Journals (Sweden)

    Nguyen Quang Liem

    2011-12-01

    Full Text Available This report highlights the promising applications of modern analysis techniques such as Scanning Electron Microsopy, X-ray fluorescence, X-ray diffraction, Raman scattering spectroscopy, and thermal expansion measurement in searching back the ancient art ceramics technologies.

  3. Eagle-i: Making Invisible Resources, Visible

    Science.gov (United States)

    Haendel, M.; Wilson, M.; Torniai, C.; Segerdell, E.; Shaffer, C.; Frost, R.; Bourges, D.; Brownstein, J.; McInnerney, K.

    2010-01-01

    RP-134 The eagle-i Consortium – Dartmouth College, Harvard Medical School, Jackson State University, Morehouse School of Medicine, Montana State University, Oregon Health and Science University (OHSU), the University of Alaska, the University of Hawaii, and the University of Puerto Rico – aims to make invisible resources for scientific research visible by developing a searchable network of resource repositories at research institutions nationwide. Now in early development, it is hoped that the system will scale beyond the consortium at the end of the two-year pilot. Data Model & Ontology: The eagle-i ontology development team at the OHSU Library is generating the data model and ontologies necessary for resource indexing and querying. Our indexing system will enable cores and research labs to represent resources within a defined vocabulary, leading to more effective searches and better linkage between data types. This effort is being guided by active discussions within the ontology community (http://RRontology.tk) bringing together relevant preexisting ontologies in a logical framework. The goal of these discussions is to provide context for interoperability and domain-wide standards for resource types used throughout biomedical research. Research community feedback is welcomed. Architecture Development, led by a team at Harvard, includes four main components: tools for data collection, management and curation; an institutional resource repository; a federated network; and a central search application. Each participating institution will populate and manage their repository locally, using data collection and curation tools. To help improve search performance, data tools will support the semi-automatic annotation of resources. A central search application will use a federated protocol to broadcast queries to all repositories and display aggregated results. The search application will leverage the eagle-i ontologies to help guide users to valid queries via auto

  4. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  5. Practical fulltext search in medical records

    Directory of Open Access Journals (Sweden)

    Vít Volšička

    2015-09-01

    Full Text Available Performing a search through previously existing documents, including medical reports, is an integral part of acquiring new information and of educational processes. Unfortunately, finding relevant information is not always easy, since many documents are saved in free-text formats, making it difficult to search through them. Full-text search is a viable solution: it makes it possible to search large numbers of documents efficiently and to find those that contain specific search phrases in a short time. All leading database systems currently offer full-text search, but some do not support the complex morphology of the Czech language. Apache Solr provides full support options and several full-text libraries. The program provides good support for the Czech language in the basic installation, along with a wide range of settings and options for deployment on any platform. The library was satisfactorily tested using real data from hospitals, and Solr provided useful, fast, and accurate searches. However, adjustments are still needed to obtain effective search results, particularly correcting typographical errors made not only in the text but also when entering words in the search box, and creating a list of frequently used abbreviations and synonyms for more accurate results.
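
    For readers who want to see what such a deployment looks like from the application side, the sketch below issues a simple full-text query against Solr's select handler; the core name and field name are hypothetical, and only the common q/rows/fl/wt parameters are used.

```python
import requests

SOLR_URL = "http://localhost:8983/solr/medical_reports/select"  # core name is hypothetical

def search_reports(phrase, rows=10):
    """Run a phrase query against a Solr core and return matching document ids and scores."""
    params = {
        "q": f'report_text:"{phrase}"',  # field name is an assumption
        "rows": rows,
        "fl": "id,score",
        "wt": "json",
    }
    response = requests.get(SOLR_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()["response"]["docs"]

# Example: search_reports("chronic bronchitis")
```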

  6. Identification of risk conditions for the development of adrenal disorders: how optimized PubMed search strategies makes the difference.

    Science.gov (United States)

    Guaraldi, Federica; Parasiliti-Caprino, Mirko; Goggi, Riccardo; Beccuti, Guglielmo; Grottoli, Silvia; Arvat, Emanuela; Ghizzoni, Lucia; Ghigo, Ezio; Giordano, Roberta; Gori, Davide

    2014-12-01

    The exponential growth of scientific literature available through electronic databases (namely PubMed) has increased the chance of finding interesting articles. At the same time, search has become more complicated, time consuming, and at risk of missing important information. Therefore, optimized strategies have to be adopted to maximize searching impact. The aim of this study was to formulate efficient strings to search PubMed for etiologic associations between adrenal disorders (ADs) and other conditions. A comprehensive list of terms identifying endogenous conditions primarily affecting adrenals was compiled. An ad hoc analysis was performed to find the best way to express each term in order to find the highest number of potentially pertinent articles in PubMed. A predefined number of retrieved abstracts were read to assess their association with ADs' etiology. A more sensitive (providing the largest literature coverage) and a more specific (including only those terms retrieving >40 % of potentially pertinent articles) string were formulated. Various researches were performed to assess strings' ability to identify articles of interest in comparison with non-optimized literature searches. We formulated optimized, ready applicable tools for the identification of the literature assessing etiologic associations in the field of ADs using PubMed, and demonstrated the advantages deriving from their application. Detailed description of the methodological process is also provided, so that this work can easily be translated to other fields of practice.

  7. Improved Multiobjective Harmony Search Algorithm with Application to Placement and Sizing of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2014-01-01

    Full Text Available To solve the comprehensive multiobjective optimization problem, this study proposes an improved metaheuristic search algorithm that combines harmony search with the fast non-dominated sorting approach, yielding a novel intelligent optimization algorithm for multiobjective harmony search (MOHS). The algorithm is described and formulated in detail. Taking the optimal placement and sizing of distributed generation (DG) in a distribution power system as an example, the solution procedure of the proposed method is given. Simulation results on the modified IEEE 33-bus test system and a comparison with the NSGA-II algorithm show that the proposed MOHS can obtain promising results for engineering applications.
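
    For orientation, the single-objective harmony search that MOHS builds on can be sketched as below: a memory of candidate "harmonies" is improvised from, pitch-adjusted, and updated. This is a minimal sketch under the standard parameter names (HMS, HMCR, PAR), not the paper's multiobjective algorithm with non-dominated sorting.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=1):
    """Minimize `objective` over the box `bounds` with a basic harmony search."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # memory consideration
                value = memory[rng.randrange(hms)][d]
                if rng.random() < par:                 # pitch adjustment
                    value += rng.uniform(-bw, bw) * (hi - lo)
            else:                                      # random selection
                value = rng.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, new_score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Example: harmony_search(lambda x: (x[0] - 1) ** 2 + x[1] ** 2, [(-5, 5), (-5, 5)])
```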

  8. Fuzzy multiple objective decision making methods and applications

    CERN Document Server

    Lai, Young-Jou

    1994-01-01

    In the last 25 years, the fuzzy set theory has been applied in many disciplines such as operations research, management science, control theory, artificial intelligence/expert system, etc. In this volume, methods and applications of crisp, fuzzy and possibilistic multiple objective decision making are first systematically and thoroughly reviewed and classified. This state-of-the-art survey provides readers with a capsule look into the existing methods, and their characteristics and applicability to analysis of fuzzy and possibilistic programming problems. To realize practical fuzzy modelling, it presents solutions for real-world problems including production/manufacturing, location, logistics, environment management, banking/finance, personnel, marketing, accounting, agriculture economics and data analysis. This book is a guided tour through the literature in the rapidly growing fields of operations research and decision making and includes the most up-to-date bibliographical listing of literature on the topi...

  9. ElasticSearch cookbook

    CERN Document Server

    Paro, Alberto

    2015-01-01

    If you are a developer who implements ElasticSearch in your web applications and want to sharpen your understanding of the core elements and applications, this is the book for you. It is assumed that you've got working knowledge of JSON and, if you want to extend ElasticSearch, of Java and related technologies.

  10. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  11. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-01-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  12. Quantify uncertain emergency search techniques (QUEST) -- Theory and user's guide

    International Nuclear Information System (INIS)

    Johnson, M.M.; Goldsby, M.E.; Plantenga, T.D.; Porter, T.L.; West, T.H.; Wilcox, W.B.; Hensley, W.K.

    1998-01-01

    As recent world events show, criminal and terrorist access to nuclear materials is a growing national concern. The national laboratories are taking the lead in developing technologies to counter these potential threats to the national security. Sandia National laboratories, with support from Pacific Northwest National Laboratory and the Bechtel Nevada, Remote Sensing Laboratory, has developed QUEST (a model to Quantify Uncertain Emergency Search Techniques), to enhance the performance of organizations in the search for lost or stolen nuclear material. In addition, QUEST supports a wide range of other applications, such as environmental monitoring, nuclear facilities inspections, and searcher training. QUEST simulates the search for nuclear materials and calculates detector response for various source types and locations. The probability of detecting a radioactive source during a search is a function of many different variables, including source type, search location and structure geometry (including shielding), search dynamics (path and speed), and detector type and size. Through calculation of dynamic detector response, QUEST makes possible quantitative comparisons of various sensor technologies and search patterns. The QUEST model can be used as a tool to examine the impact of new detector technologies, explore alternative search concepts, and provide interactive search/inspector training

  13. Fast and accurate protein substructure searching with simulated annealing and GPUs

    Directory of Open Access Journals (Sweden)

    Stivala Alex D

    2010-09-01

    Full Text Available Abstract Background Searching a database of protein structures for matches to a query structure, or for occurrences of a structural motif, is an important task in structural biology and bioinformatics. While there are many existing methods for structural similarity searching, faster and more accurate approaches are still required, and few current methods are capable of substructure (motif) searching. Results We developed an improved heuristic for tableau-based protein structure and substructure searching using simulated annealing that is as fast as or faster than, and comparable in accuracy with, some widely used existing methods. Furthermore, we created a parallel implementation on a modern graphics processing unit (GPU). Conclusions The GPU implementation achieves up to a 34-times speedup over the CPU implementation of tableau-based structure search with simulated annealing, making it one of the fastest available methods. To the best of our knowledge, this is the first application of a GPU to the protein structural search problem.
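
    The search heuristic named above is simulated annealing; its generic accept/reject loop is shown below for a maximization problem. This is a toy stand-in with user-supplied scoring and neighbor functions, not the tableau-matching objective or the GPU implementation from the paper.

```python
import math
import random

def simulated_annealing(initial, score, neighbor, t0=1.0, cooling=0.995, steps=10000, seed=0):
    """Maximize `score` by accepting worse neighbors with a temperature-dependent probability."""
    rng = random.Random(seed)
    current, current_score = initial, score(initial)
    best, best_score = current, current_score
    temperature = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        candidate_score = score(candidate)
        delta = candidate_score - current_score
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta >= 0 or rng.random() < math.exp(delta / temperature):
            current, current_score = candidate, candidate_score
            if current_score > best_score:
                best, best_score = current, current_score
        temperature *= cooling
    return best, best_score
```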

  14. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    Full Text Available In order to deliver the maximum available power to the load under varying solar irradiation and ambient temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among MPPT schemes, the chaos method has been one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, an improved logistic mapping with better ergodicity is used as the first carrier process. After the current optimal solution has been located with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method tracks changes quickly and accurately and also achieves better optimization results. The proposed method provides a new, efficient way to track the maximum power point of a PV array.
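
    In miniature, the two-stage idea reads as follows: a logistic-map sequence acts as the chaotic carrier sweeping the search interval, and a second sweep over a shrunken interval refines around the best point found. The sketch below is a generic one-dimensional version under that simplification (it reuses the logistic map for the second stage instead of the paper's power-function carrier) and is not an MPPT controller.

```python
def chaotic_search(objective, lo, hi, coarse_iters=200, fine_iters=200, shrink=0.1, x0=0.345):
    """Maximize `objective` on [lo, hi] with a two-stage chaotic (logistic-map) sweep."""
    best_p, best_v = lo, objective(lo)

    def sweep(a, b, iters, x):
        nonlocal best_p, best_v
        for _ in range(iters):
            x = 4.0 * x * (1.0 - x)      # logistic map in its chaotic regime
            p = a + (b - a) * x          # map the chaotic variable onto the interval
            v = objective(p)
            if v > best_v:
                best_p, best_v = p, v
        return x

    x = sweep(lo, hi, coarse_iters, x0)                       # first carrier stage
    half = shrink * (hi - lo) / 2.0
    a, b = max(lo, best_p - half), min(hi, best_p + half)     # reduced search space
    sweep(a, b, fine_iters, x)                                # second (refining) stage
    return best_p, best_v

# Example: chaotic_search(lambda p: -(p - 17.3) ** 2, 0.0, 40.0)  # peak near p = 17.3
# The seed x0 avoids the map's fixed points and short cycles (0, 0.25, 0.5, 0.75, 1).
```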

  15. Evidence-based Medicine Search: a customizable federated search engine.

    Science.gov (United States)

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  16. Spin formalism and applications to new physics searches

    Energy Technology Data Exchange (ETDEWEB)

    Haber, H.E. [Univ. of California, Santa Cruz, CA (United States)

    1994-12-01

    An introduction to spin techniques in particle physics is given. Among the topics covered are: helicity formalism and its applications to the decay and scattering of spin-1/2 and spin-1 particles, techniques for evaluating helicity amplitudes (including projection operator methods and the spinor helicity method), and density matrix techniques. The utility of polarization and spin correlations for untangling new physics beyond the Standard Model at future colliders such as the LHC and a high energy e⁺e⁻ linear collider is then considered. A number of detailed examples are explored including the search for low-energy supersymmetry, a non-minimal Higgs boson sector, and new gauge bosons beyond the W± and Z.

  17. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments.

    Science.gov (United States)

    Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J

    2015-04-01

    To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.
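
    The two effectiveness measures reported above reduce to simple ratios; the sketch below makes them explicit, with the example figures being made-up placeholders rather than the study's counts.

```python
def precision(relevant_retrieved, total_retrieved):
    """Share of retrieved records that actually used the instrument."""
    return relevant_retrieved / total_retrieved if total_retrieved else 0.0

def sensitivity(relevant_retrieved, total_relevant):
    """Share of all instrument-using studies that the search managed to retrieve."""
    return relevant_retrieved / total_relevant if total_relevant else 0.0

# Example: a search returning 20 records, 18 of which used the CPS, out of 100 known CPS studies:
# precision(18, 20) -> 0.9 ; sensitivity(18, 100) -> 0.18
```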

  18. Analytical group decision making in natural resources: methodology and application

    Science.gov (United States)

    Daniel L. Schmoldt; David L. Peterson

    2000-01-01

    Group decision making is becoming increasingly important in natural resource management and associated scientific applications, because multiple values are treated coincidentally in time and space, multiple resource specialists are needed, and multiple stakeholders must be included in the decision process. Decades of social science research on decision making in groups...

  19. Top-k Keyword Search Over Graphs Based On Backward Search

    Directory of Open Access Journals (Sweden)

    Zeng Jia-Hui

    2017-01-01

    Full Text Available Keyword search is one of the most user-friendly and intuitive information retrieval methods. Using keyword search to obtain connected subgraphs has many applications in graph-based cognitive computation and is a basic enabling technology. This paper focuses on top-k keyword search over graphs. We implemented a keyword search algorithm that applies the backward search idea: the algorithm first locates the keyword vertices and then applies backward search to find rooted trees that contain the query keywords. The experiments show that query time is affected by the number of iterations of the algorithm.
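
    A stripped-down version of the backward search step is shown below: starting from the vertices that contain each keyword, the graph is expanded against the edge direction, and any vertex reached from every keyword set is a candidate root of an answer tree. This is a plain-BFS sketch without the ranking or top-k pruning of the paper's algorithm.

```python
from collections import deque

def candidate_roots(predecessors, keyword_vertices):
    """predecessors: dict mapping node -> list of nodes with an edge into it (so we walk backward).
    keyword_vertices: list of sets, one set of matching vertices per query keyword.
    Returns the vertices that can reach at least one vertex of every keyword."""
    reached_sets = []
    for seeds in keyword_vertices:
        seen = set(seeds)
        queue = deque(seeds)
        while queue:
            v = queue.popleft()
            for u in predecessors.get(v, []):   # expand backward along edge u -> v
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        reached_sets.append(seen)
    return set.intersection(*reached_sets) if reached_sets else set()
```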

  20. QCD processes and search for supersymmetry at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Schum, Torben

    2012-07-15

    In this thesis, a data-driven method to estimate the number of QCD background events in a multijet search for supersymmetry at the LHC was developed. The method makes use of two models which predict the correlation of two key search variables, the missing transverse momentum and an angular variable, in order to extrapolate from a QCD dominated control region to the signal region. A good performance of the method was demonstrated by its application to 36 pb⁻¹ data, taken by the CMS experiment in 2010, and by the comparison with an alternative method. Comparing the number of data events to a combined background expectation of QCD and data-driven estimates of the electroweak and top background, no statistically significant excess was observed for three pre-defined search regions. Limits were calculated for the (m₀, m₁/₂) parameter space of the cMSSM, exceeding previous measurements. The expected sensitivity for further refined search regions was investigated.

  1. QCD processes and search for supersymmetry at the LHC

    International Nuclear Information System (INIS)

    Schum, Torben

    2012-07-01

    In this thesis, a data-driven method to estimate the number of QCD background events in a multijet search for supersymmetry at the LHC was developed. The method makes use of two models which predict the correlation of two key search variables, the missing transverse momentum and an angular variable, in order to extrapolate from a QCD dominated control region to the signal region. A good performance of the method was demonstrated by its application to 36 pb⁻¹ data, taken by the CMS experiment in 2010, and by the comparison with an alternative method. Comparing the number of data events to a combined background expectation of QCD and data-driven estimates of the electroweak and top background, no statistically significant excess was observed for three pre-defined search regions. Limits were calculated for the (m₀, m₁/₂) parameter space of the cMSSM, exceeding previous measurements. The expected sensitivity for further refined search regions was investigated.

  2. Search and seizure law; practical advice and interpretation for nuclear protective force persons

    Energy Technology Data Exchange (ETDEWEB)

    Cadwell, J.J.

    1983-07-06

    Recent Supreme Court decisions, which interpret the 200-year-old Fourth Amendment of the US Constitution, are used to provide a brief overview of some search and seizure subjects important to management and officers responsible for physical protection of nuclear facilities. The overview is framed in practical terms in order to make the comments applicable to the everyday activity of nuclear-protective-force persons. The Supreme Court has described several exceptions where searches and seizures (arrests) are permitted without a warrant, despite the Fourth Amendment which states that warrants are always required. The seven exceptions briefly discussed are search incidents to a lawful arrest, the automobile-search exception, the suitcase or container exception, the hot-pursuit or emergency exception, the stop-and-frisk exception, the plain-view exception, and consent to be searched.

  3. Search and seizure law; practical advice and interpretation for nuclear protective force persons

    International Nuclear Information System (INIS)

    Cadwell, J.J.

    1983-01-01

    Recent Supreme Court decisions, which interpret the 200-year-old Fourth Amendment of the US Constitution, are used to provide a brief overview of some search and seizure subjects important to management and officers responsible for physical protection of nuclear facilities. The overview is framed in practical terms in order to make the comments applicable to the everyday activity of nuclear-protective-force persons. The Supreme Court has described several exceptions where searches and seizures (arrests) are permitted without a warrant, despite the Fourth Amendment which states that warrants are always required. The seven exceptions briefly discussed are search incidents to a lawful arrest, the automobile-search exception, the suitcase or container exception, the hot-pursuit or emergency exception, the stop-and-frisk exception, the plain-view exception, and consent to be searched

  4. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology.

    Science.gov (United States)

    Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been completed without regard to the influence that an individual's emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual's perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings.

  5. Analysis of Decision Making and Incentives in Danish Green Web Applications

    DEFF Research Database (Denmark)

    Scheele, Christian Elling

    2013-01-01

    Traditional information campaigns aimed at incentivising the kind of behaviour change that will lead to more sustainable levels of energy consumption have been proven inefficient. Politicians and government bodies could consider using green web applications as an alternative. However, there is li...... normative or behavioural gains. The third approach is based on a socio-psychological decision model in which values, attitudes and norms affect the choices we make. All three theoretical approaches aim at explaining decision-making in the context of energy consumption......., there is little research documenting how such applications actually motivate behaviour change. There is a need for a better understanding of how such applications work and whether they are effective. This paper addresses the first question by demonstrating how three Danish green web applications employ different...

  6. Construction of FuzzyFind Dictionary using Golay Coding Transformation for Searching Applications

    Science.gov (United States)

    Kowsari, Kamram

    2015-03-01

    Searching through a large volume of data is critical for companies, scientists, and search engine applications because of time and memory complexity. In this paper, a new technique for generating a FuzzyFind Dictionary for text mining is introduced. We map words onto 23 bits covering the English alphabet (or onto more than 23 bits by using additional FuzzyFind Dictionaries), reflecting the presence or absence of particular letters. This representation preserves the closeness of word distortions in terms of the closeness of the resulting binary vectors, within a Hamming distance of 2 deviations. The paper describes the Golay Coding Transformation Hash Table and how it can be used with a FuzzyFind Dictionary as a new technology for searching through big data. Generating the dictionary takes linear time, accessing the data takes constant time, and updating with new data sets takes time linear in the number of new data points. The technique is based on searching only over the letters of English, with each segment comprising 23 bits; it could also work with more segments as a reference table.
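
    The letter-presence encoding described above is easy to reproduce in miniature: each word becomes a bit vector over a 23-letter segment, and distorted spellings stay close in Hamming distance. The choice of which 23 letters form the segment is an assumption here, and the sketch omits the Golay coding transformation itself.

```python
import string

ALPHABET = string.ascii_lowercase[:23]  # one 23-bit segment; the exact letter subset is assumed

def letter_vector(word):
    """23-bit vector with bit i set when ALPHABET[i] occurs anywhere in the word."""
    bits = 0
    for i, letter in enumerate(ALPHABET):
        if letter in word.lower():
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Number of differing bits between two vectors."""
    return bin(a ^ b).count("1")

# Distorted spellings stay close:
# hamming(letter_vector("search"), letter_vector("serach"))  -> 0  (same letter set)
# hamming(letter_vector("search"), letter_vector("searth"))  -> 2  (c replaced by t)
```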

  7. Project Lefty: More Bang for the Search Query

    Science.gov (United States)

    Varnum, Ken

    2010-01-01

    This article describes the Project Lefty, a search system that, at a minimum, adds a layer on top of traditional federated search tools that will make the wait for results more worthwhile for researchers. At best, Project Lefty improves search queries and relevance rankings for web-scale discovery tools to make the results themselves more relevant…

  8. Decision-making in healthcare: a practical application of partial least square path modelling to coverage of newborn screening programmes.

    Science.gov (United States)

    Fischer, Katharina E

    2012-08-02

    Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome and appraisal criteria. Accounting for decision processes extends the complexity, as they are multidimensional and process elements need to be regarded as latent constructs (composites) that are not observed directly. The objective of this study was to present a practical application of partial least square path modelling (PLS-PM) to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. After modification by dropping two indicators that showed poor measures in the measurement models' quality assessment and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making; and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of 'transparency', 'participation', 'scientific rigour' and 'reasonableness'. The structural equation model was among the first applications of PLS-PM to coverage decision-making. It allowed testing of hypotheses in situations where there

  9. The application of alpha-card measurement to search underground water resources in bedrock

    International Nuclear Information System (INIS)

    Xiong Fulin; Liang Jinhua

    1986-01-01

    The alpha-card measurement is a new kind of nuclear technique for evaluating the concentration of radium and thorium emanations with a short accumulation time. It has shown higher sensitivity, greater detection depth, less interference and a shorter work period when applied to the search for underground water resources in bedrock. The satisfactory result of its application in the Pingyin limestone area is described. The principle of the alpha-card measurement and the foundation of its application are briefly discussed.

  10. Analytical group decision making in natural resources: Methodology and application

    Science.gov (United States)

    Schmoldt, D.L.; Peterson, D.L.

    2000-01-01

    Group decision making is becoming increasingly important in natural resource management and associated scientific applications, because multiple values are treated coincidentally in time and space, multiple resource specialists are needed, and multiple stakeholders must be included in the decision process. Decades of social science research on decision making in groups have provided insights into the impediments to effective group processes and on techniques that can be applied in a group context. Nevertheless, little integration and few applications of these results have occurred in resource management decision processes, where formal groups are integral, either directly or indirectly. A group decision-making methodology is introduced as an effective approach for temporary, formal groups (e.g., workshops). It combines the following three components: (1) brainstorming to generate ideas; (2) the analytic hierarchy process to produce judgments, manage conflict, enable consensus, and plan for implementation; and (3) a discussion template (straw document). Resulting numerical assessments of alternative decision priorities can be analyzed statistically to indicate where group member agreement occurs and where priority values are significantly different. An application of this group process to fire research program development in a workshop setting indicates that the process helps focus group deliberations; mitigates groupthink, nondecision, and social loafing pitfalls; encourages individual interaction; identifies irrational judgments; and provides a large amount of useful quantitative information about group preferences. This approach can help facilitate scientific assessments and other decision-making processes in resource management.
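
    The numerical assessments mentioned above come from the analytic hierarchy process, whose core step is turning a pairwise comparison matrix into a priority vector (approximately its principal eigenvector). The sketch below does this by power iteration; the comparison values are illustrative only.

```python
def ahp_priorities(matrix, iterations=50):
    """Approximate the principal eigenvector of a positive reciprocal matrix by power iteration,
    normalized so the priorities sum to one."""
    n = len(matrix)
    weights = [1.0 / n] * n
    for _ in range(iterations):
        product = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
        total = sum(product)
        weights = [value / total for value in product]
    return weights

# Three alternatives compared pairwise on the usual 1-9 scale (reciprocal matrix):
comparisons = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]
print(ahp_priorities(comparisons))  # roughly [0.65, 0.23, 0.12]
```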

  11. Value Application in Radiation Protection Decision-Making

    International Nuclear Information System (INIS)

    Bombaerts, G.; Hardeman, F.; Braeckman, J.

    2001-01-01

    Full text: The radiation protection (RP) community is looking for a sound philosophical basis to support their actions. The utilitarianism-egalitarianism debate of the nineties is a reference here. These ethical theories are well elaborated in philosophy throughout the past ages, making it very difficult for RP decision-makers to use these systems in all their sensitivities. Furthermore, the debate's polarisation obstructs dealing with the essence: how can concerned people be given their say? A theory of core values gives a firm basis. Cost Benefit Analysis is another common decision-making tool in RP, often reducing values to economic numbers when it is used. A 'collective willingness to pay (CWP) principle' is proposed and it avoids this reduction. The new principle can be brought in accordance with risk perception analysis. The existing cost-effectiveness differences in RP measures will be illustrated and explained. Nevertheless, there are limitations to CWP applications. A third philosophical foundation (post-materialism) is presented to state that decision-making procedures have to balance between top-down functionality and bottom-up participation. To make sure the latter has full play, the RP officers have to put out special receptors to detect the societal important core values. This mechanism is illustrated with a few examples (Doel, Belgium; ...). (author)

  12. Measuring Personalization of Web Search

    DEFF Research Database (Denmark)

    Hannak, Aniko; Sapiezynski, Piotr; Kakhki, Arash Molavi

    2013-01-01

    are simply unable to access information that the search engines’ algorithm decides is irrelevant. Despite these concerns, there has been little quantification of the extent of personalization in Web search today, or the user attributes that cause it. In light of this situation, we make three contributions...... as a result of searching with a logged in account and the IP address of the searching user. Our results are a first step towards understanding the extent and effects of personalization on Web search engines today....

  13. 24 CFR 55.11 - Applicability of subpart C decision making process.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false Applicability of subpart C decision making process. 55.11 Section 55.11 Housing and Urban Development Office of the Secretary, Department of Housing and Urban Development FLOODPLAIN MANAGEMENT Application of Executive Order on Floodplain Management § 55.11 Applicability of subpart C...

  14. Making the most of the relic density for dark matter searches at the LHC 14 TeV Run

    International Nuclear Information System (INIS)

    Busoni, Giorgio; Simone, Andrea De; Jacques, Thomas; Morgante, Enrico; Riotto, Antonio

    2015-01-01

    As the LHC continues to search for new weakly interacting particles, it is important to remember that the search is strongly motivated by the existence of dark matter. In view of a possible positive signal, it is essential to ask whether the newly discovered weakly interacting particle can be assigned the label 'dark matter'. Within a given set of simplified models and modest working assumptions, we reinterpret the relic abundance bound as a relic abundance range, and compare the parameter space yielding the correct relic abundance with projections of the Run II exclusion regions. Assuming that dark matter is within the reach of the LHC, we also make the comparison with the potential 5σ discovery regions. Reversing the logic, relic density calculations can be used to optimize dark matter searches by motivating choices of parameters where the LHC can probe most deeply into the dark matter parameter space. In the event that DM is seen outside of the region giving the correct relic abundance, we will learn that either thermal relic DM is ruled out in that model, or the DM-quark coupling is suppressed relative to the DM coupling strength to other SM particles.

  15. Intelligent decision-making models for production and retail operations

    CERN Document Server

    Guo, Zhaoxia

    2016-01-01

    This book provides an overview of intelligent decision-making techniques and discusses their application in production and retail operations. Manufacturing and retail enterprises have stringent standards for using advanced and reliable techniques to improve decision-making processes, since these processes have significant effects on the performance of relevant operations and the entire supply chain. In recent years, researchers have been increasingly focusing attention on using intelligent techniques to solve various decision-making problems. The opening chapters provide an introduction to several commonly used intelligent techniques, such as genetic algorithm, harmony search, neural network and extreme learning machine. The book then explores the use of these techniques for handling various production and retail decision-making problems, such as production planning and scheduling, assembly line balancing, and sales forecasting.

  16. Application of PSA in risk informed decision making

    International Nuclear Information System (INIS)

    Hari Prasad, M.; Vinod, Gopika; Saraf, R.K.; Ghosh, A.K.; Kushwaha, H.S.

    2006-01-01

    Probabilistic Safety Assessment (PSA) models have been successfully employed during design evaluation to assess weak links and carry out design modifications to improve system reliability and safety. Recently, studies are directed towards applying PSA in various decision making issues concerned with plant operations and safety regulations. This necessitates development of software tools like Living PSA, Risk Monitor etc. Risk Monitor is a PC based tool developed to assess the risk, based on the actual status of systems and components. Such tools find wide application with plant personnel and regulatory authorities since they can provide solutions to various plant issues and regulatory decision making issues respectively. (author)

  17. Search Analytics: Automated Learning, Analysis, and Search with Open Source

    Science.gov (United States)

    Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.

    2016-12-01

    The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from the extractions and visualizations of relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a users accuracy of 82 to 84%." Without reading a single paper we can use Search Analytics to automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection users accuracy of 82
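    The key extraction step pairs domain-specific measurements (e.g. a spatial resolution value) with subjects and accuracy figures found nearby in the same document. The production capability builds on Apache Tika, Solr and DeepDive; the fragment below is only a hedged, self-contained sketch of the proximity idea, using plain regular expressions and the example sentence quoted above:

      import re

      TEXT = ("In this study, hyperspectral images with high spatial resolution (1 m) "
              "were analyzed to detect cutleaf teasel in two areas. Classification of "
              "cutleaf teasel reached a users accuracy of 82 to 84%.")

      # Hypothetical vocabularies; a real system would derive these from an ontology or model.
      MEASUREMENTS = re.compile(r"(\d+(?:\.\d+)?)\s*m\b")             # e.g. "1 m"
      SUBJECTS     = re.compile(r"cutleaf teasel|invasive species", re.I)
      ACCURACY     = re.compile(r"accuracy of (\d+)\s*to\s*(\d+)%", re.I)

      def extract_relations(text, window=200):
          """Pair each measurement with subjects and accuracies found within `window` characters."""
          relations = []
          for m in MEASUREMENTS.finditer(text):
              lo, hi = max(0, m.start() - window), m.end() + window
              span = text[lo:hi]
              subjects = {s.group(0).lower() for s in SUBJECTS.finditer(span)}
              accuracy = ACCURACY.search(span)
              relations.append((m.group(1) + " m", subjects,
                                accuracy.groups() if accuracy else None))
          return relations

      print(extract_relations(TEXT))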

  18. Risk aversion in medical decision making: a survey

    OpenAIRE

    Liliana Chicaíza; Mario García; Giancarlo Romano

    2011-01-01

    This article surveys the literature on risk aversion in medical decision making. The search covered Econlit, Jstor Science Direct and Springer Link since 1985. The results are classified in three topics: Risk aversion in the frameworks of Expected Utility and Rank Dependent Expected Utility theories, and the methodologies for measuring risk aversion and its applications to clinical situations from the points of view of economics and psychology. It was found that, despite conceptual and method...

  19. ElasticSearch cookbook

    CERN Document Server

    Paro, Alberto

    2013-01-01

    Written in an engaging, easy-to-follow style, the recipes will help you to extend the capabilities of ElasticSearch to manage your data effectively. If you are a developer who implements ElasticSearch in your web applications, manages data, or has decided to start using ElasticSearch, this book is ideal for you. This book assumes that you have a working knowledge of JSON and Java.
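    For readers who have not used ElasticSearch before, the core workflow the book's recipes build on is indexing JSON documents and querying them. A hedged sketch with the official Python client is shown below; the exact keyword arguments (document= vs. body=) vary between client versions, and the index name "articles" is invented:

      from elasticsearch import Elasticsearch

      es = Elasticsearch("http://localhost:9200")   # assumes a local node is running

      # Index a couple of JSON documents into a hypothetical "articles" index.
      es.index(index="articles", id=1, document={"title": "Search basics", "tags": ["search"]})
      es.index(index="articles", id=2, document={"title": "Managing data", "tags": ["data"]})
      es.indices.refresh(index="articles")          # make the documents searchable immediately

      # Full-text match query on the title field.
      resp = es.search(index="articles", query={"match": {"title": "search"}})
      for hit in resp["hits"]["hits"]:
          print(hit["_score"], hit["_source"]["title"])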

  20. Dialogic Consensus In Clinical Decision-Making.

    Science.gov (United States)

    Walker, Paul; Lovat, Terry

    2016-12-01

    This paper is predicated on the understanding that clinical encounters between clinicians and patients should be seen primarily as inter-relations among persons and, as such, are necessarily moral encounters. It aims to relocate the discussion to be had in challenging medical decision-making situations, including, for example, as the end of life comes into view, onto a more robust moral philosophical footing than is currently commonplace. In our contemporary era, those making moral decisions must be cognizant of the existence of perspectives other than their own, and be attuned to the demands of inter-subjectivity. Applicable to clinical practice, we propose and justify a Habermasian approach as one useful means of achieving what can be described as dialogic consensus. The Habermasian approach builds around, first, his discourse theory of morality as universalizable to all and, second, communicative action as a cooperative search for truth. It is a concrete way to ground the discourse which must be held in complex medical decision-making situations, in its actual reality. Considerations about the theoretical underpinnings of the application of dialogic consensus to clinical practice, and potential difficulties, are explored.

  1. Incorporating uncertainty regarding applicability of evidence from meta-analyses into clinical decision making.

    Science.gov (United States)

    Kriston, Levente; Meister, Ramona

    2014-03-01

    Judging applicability (relevance) of meta-analytical findings to particular clinical decision-making situations remains challenging. We aimed to describe an evidence synthesis method that accounts for possible uncertainty regarding applicability of the evidence. We conceptualized uncertainty regarding applicability of the meta-analytical estimates to a decision-making situation as the result of uncertainty regarding applicability of the findings of the trials that were included in the meta-analysis. This trial-level applicability uncertainty can be directly assessed by the decision maker and allows for the definition of trial inclusion probabilities, which can be used to perform a probabilistic meta-analysis with unequal probability resampling of trials (adaptive meta-analysis). A case study with several fictitious decision-making scenarios was performed to demonstrate the method in practice. We present options to elicit trial inclusion probabilities and perform the calculations. The result of an adaptive meta-analysis is a frequency distribution of the estimated parameters from traditional meta-analysis that provides individually tailored information according to the specific needs and uncertainty of the decision maker. The proposed method offers a direct and formalized combination of research evidence with individual clinical expertise and may aid clinicians in specific decision-making situations. Copyright © 2014 Elsevier Inc. All rights reserved.
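    The core of the proposed adaptive meta-analysis is resampling trials with unequal inclusion probabilities and recomputing the pooled estimate on each draw, which yields a distribution rather than a single number. A hedged numerical sketch follows (fixed-effect, inverse-variance pooling; all trial data and inclusion probabilities are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical trials: effect estimate, standard error, and the decision maker's
      # judged probability that each trial is applicable to the situation at hand.
      effects   = np.array([0.30, 0.10, 0.55, 0.20])
      ses       = np.array([0.10, 0.15, 0.20, 0.12])
      p_include = np.array([0.9, 0.5, 0.2, 0.8])

      def pooled_fixed_effect(eff, se):
          w = 1.0 / se**2                        # inverse-variance weights
          return np.sum(w * eff) / np.sum(w)

      draws = []
      for _ in range(5000):
          keep = rng.random(len(effects)) < p_include
          if keep.sum() == 0:
              continue                            # skip draws in which no trial is included
          draws.append(pooled_fixed_effect(effects[keep], ses[keep]))

      draws = np.array(draws)
      print("median:", np.median(draws), "95% interval:", np.percentile(draws, [2.5, 97.5]))

    The resulting frequency distribution is what the authors describe: a pooled estimate tailored to the decision maker's own applicability judgements.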

  2. Making the MagIC (Magnetics Information Consortium) Web Application Accessible to New Users and Useful to Experts

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.

    2017-12-01

    Challenges are faced by both new and experienced users interested in contributing their data to community repositories, in data discovery, or engaged in potentially transformative science. The Magnetics Information Consortium (https://earthref.org/MagIC) has recently simplified its data model and developed a new containerized web application to reduce the friction in contributing, exploring, and combining valuable and complex datasets for the paleo-, geo-, and rock magnetic scientific community. The new data model more closely reflects the hierarchical workflow in paleomagnetic experiments to enable adequate annotation of scientific results and ensure reproducibility. The new open-source (https://github.com/earthref/MagIC) application includes an upload tool that is integrated with the data model to provide early data validation feedback and ease the friction of contributing and updating datasets. The search interface provides a powerful full text search of contributions indexed by ElasticSearch and a wide array of filters, including specific geographic and geological timescale filtering, to support both novice users exploring the database and experts interested in compiling new datasets with specific criteria across thousands of studies and millions of measurements. The datasets are not large, but they are complex, with many results from evolving experimental and analytical approaches. These data are also extremely valuable due to the cost in collecting or creating physical samples and the, often, destructive nature of the experiments. MagIC is heavily invested in encouraging young scientists as well as established labs to cultivate workflows that facilitate contributing their data in a consistent format. This eLightning presentation includes a live demonstration of the MagIC web application, developed as a configurable container hosting an isomorphic Meteor JavaScript application, MongoDB database, and ElasticSearch search engine. Visitors can explore the Mag

  3. The IBM PC as an Online Search Machine. Part 5: Searching through Crosstalk.

    Science.gov (United States)

    Kolner, Stuart J.

    1985-01-01

    This last of a five-part series on using the IBM personal computer for online searching highlights a brief review, search process, making the connection, switching between screens and modes, online transaction, capture buffer controls, coping with options, function keys, script files, processing downloaded information, note to TELEX users, and…

  4. On the application of the expected log-likelihood gain to decision making in molecular replacement.

    Science.gov (United States)

    Oeffner, Robert D; Afonine, Pavel V; Millán, Claudia; Sammito, Massimo; Usón, Isabel; Read, Randy J; McCoy, Airlie J

    2018-04-01

    Molecular-replacement phasing of macromolecular crystal structures is often fast, but if a molecular-replacement solution is not immediately obtained the crystallographer must judge whether to pursue molecular replacement or to attempt experimental phasing as the quickest path to structure solution. The introduction of the expected log-likelihood gain [eLLG; McCoy et al. (2017), Proc. Natl Acad. Sci. USA, 114, 3637-3641] has given the crystallographer a powerful new tool to aid in making this decision. The eLLG is the log-likelihood gain on intensity [LLGI; Read & McCoy (2016), Acta Cryst. D72, 375-387] expected from a correctly placed model. It is calculated as a sum over the reflections of a function dependent on the fraction of the scattering for which the model accounts, the estimated model coordinate error and the measurement errors in the data. It is shown how the eLLG may be used to answer the question `can I solve my structure by molecular replacement?'. However, this is only the most obvious of the applications of the eLLG. It is also discussed how the eLLG may be used to determine the search order and minimal data requirements for obtaining a molecular-replacement solution using a given model, and for decision making in fragment-based molecular replacement, single-atom molecular replacement and likelihood-guided model pruning.

  5. LHCb Exotica and Higgs searches

    CERN Multimedia

    Lucchesi, Donatella

    2016-01-01

    The unique phase space coverage and features of the LHCb detector at the LHC makes it an ideal environment to probe complementary New Physics parameter regions. In particular, recently developed jet tagging algorithms are ideal for searches involving $b$ and $c$ jets. This poster will review different jet-related exotica searches together with the efforts in the search for a Higgs boson decaying to a pair of heavy quarks.

  6. Decision-making in healthcare: a practical application of partial least square path modelling to coverage of newborn screening programmes

    Directory of Open Access Journals (Sweden)

    Fischer Katharina E

    2012-08-01

    Full Text Available Abstract Background Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome and appraisal criteria. Accounting for decision processes extends the complexity, as they are multidimensional and process elements need to be regarded as latent constructs (composites that are not observed directly. The objective of this study was to present a practical application of partial least square path modelling (PLS-PM to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Methods Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. Results After modification by dropping two indicators that showed poor measures in the measurement models’ quality assessment and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making; and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of ‘transparency’, ‘participation’, ‘scientific rigour’ and ‘reasonableness’. Conclusions The structural equation model was among the first applications of PLS-PM to

  7. Earthdata Search: How Usability Drives Innovation To Enable A Broad User Base

    Science.gov (United States)

    Reese, M.; Siarto, J.; Lynnes, C.; Shum, D.

    2017-12-01

    Earthdata Search (https://search.earthdata.nasa.gov) is a modern web application allowing users to search, discover, visualize, refine, and access NASA Earth Observation data using a wide array of service offerings. Its goal is to ease the technical burden on data users by providing a high-quality application that makes it simple to interact with NASA Earth observation data, freeing them to spend more effort on innovative endeavors. This talk will detail how we put end users first in our design and development process, focusing on usability and letting usability needs drive requirements for the underlying technology. As just one example of how this plays out practically, Earthdata Search pairs with a lightning-fast metadata repository, allowing it to present an extremely responsive UI that updates as the user changes criteria not only at the dataset level, but also at the file level. This results in a better exploration experience as the time penalty is greatly reduced. Also, since Earthdata Search uses metadata from over 35,000 datasets that are managed by different data providers, metadata standards, quality and consistency will vary. We found that this was negatively impacting users' search and exploration experience. We have resolved this problem with the introduction of "humanizers", which is a community-driven process to both "smooth out" metadata values and provide non-jargonistic representations of some content within the Earthdata Search UI. This is helpful for both the experienced data scientist and our users who are brand new to the discipline.

  8. Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task

    Directory of Open Access Journals (Sweden)

    Cristóbal Moënne-Loccoz

    2017-09-01

    Full Text Available Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants who master the task.
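    One way to read the modelling idea is that each stereotyped strategy defines its own distribution over choice sequences, and a participant's trial is assigned to the strategy under which it is most likely. The authors use a Hidden Markov Model-based approach; the sketch below is a deliberately simplified stand-in that scores binary choice sequences against first-order Markov "strategy" models (all strategy names and numbers are invented):

      import numpy as np

      # Hypothetical strategies for a binary decision tree: P(next choice = 1 | previous choice),
      # plus the probability that the first choice is 1.
      STRATEGIES = {
          "repeat_last": {"first": 0.5, "trans": {0: 0.1, 1: 0.9}},
          "alternate":   {"first": 0.5, "trans": {0: 0.9, 1: 0.1}},
          "prefer_left": {"first": 0.1, "trans": {0: 0.2, 1: 0.2}},
      }

      def log_likelihood(seq, model):
          p = model["first"] if seq[0] == 1 else 1 - model["first"]
          ll = np.log(p)
          for prev, cur in zip(seq, seq[1:]):
              p1 = model["trans"][prev]
              ll += np.log(p1 if cur == 1 else 1 - p1)
          return ll

      def classify(seq):
          scores = {name: log_likelihood(seq, m) for name, m in STRATEGIES.items()}
          return max(scores, key=scores.get), scores

      print(classify([1, 1, 1, 0]))   # a mostly repeating sequence is assigned to "repeat_last"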

  9. Feasible Application Area Study for Linear Laser Cutting in Paper Making Processes

    Science.gov (United States)

    Happonen, A.; Stepanov, A.; Piili, H.

    Traditional industry sectors, like the paper making industry, tend to stay with well-known technology rather than move towards promising but still quite new technical solutions and applications. This study analyses the feasibility of laser cutting in large-scale industrial paper making processes. The aim was to reveal development- and process-related challenges and improvement potential in paper making processes utilizing laser technology. The study was carried out because there still seem to be only a few large-scale industrial laser processing applications in paper converting processes worldwide, even at the beginning of the 2010s. A consequence of this small-scale use of lasers in the paper manufacturing industry is a shortage of well-known and widely available published research articles and measurement data (e.g. actual achieved cut speeds with high quality cut edges, set-up times and so on). It was concluded that laser cutting has strong potential in industrial applications for the paper making industries. This potential includes quality improvements and a competitive advantage for paper machine manufacturers and industry. The innovations also add potential when developing new paper products. An example of such products is paper with printed intelligence, which could be a new business opportunity for paper industries all around the world.

  10. Randomized Search Strategies With Imperfect Sensors

    National Research Council Canada - National Science Library

    Gage, Douglas W

    1993-01-01

    .... An important class of coverage applications are those that involve a search, in which a number of searching elements move about within a prescribed search area in order to find one or more target...

  11. Economic Decision Making: Application of the Theory of Complex Systems

    Science.gov (United States)

    Kitt, Robert

    In this chapter complex systems are discussed in the context of economic and business policy and decision making. It is shown and argued that social systems are typically chaotic, non-linear and/or far from equilibrium, and are therefore complex systems. It is discussed that a rapid change in global consumer behaviour is underway, which further increases the complexity of business and management. For policy making under complexity, the following principles are offered: openness and international competition, tolerance and variety of ideas, self-reliability and low dependence on external help. The chapter contains four applications that build on the theoretical motivation of complexity in social systems. The first application demonstrates that small economies have good prospects of gaining from the global processes underway, if they can demonstrate production flexibility, reliable business ethics and good risk management. The second application elaborates on and discusses the opportunities and challenges in decision making under complexity from macro- and micro-economic perspectives. In this environment, the challenges for corporate management are also constantly changing: a balance must be found between short-term noise and long-term chaos, whose attractor includes customers, shareholders and employees. The emergence of chaos in economic relationships is demonstrated by a simple system of differential equations that relates these stakeholders. The chapter concludes with two financial applications, concerning debt and risk management. A non-equilibrium economy creates additional problems when excessive borrowing is used; unexpected downturns in the economy can more easily kill companies. Finally, the demand for quantitative improvements in risk management is postulated. Development of the financial markets has caused non-linearity to spike in the prices of various production articles such as agricultural and other commodities, which has added market
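    The abstract mentions that chaos emerges from a simple system of differential equations linking the stakeholders, but does not reproduce the equations. Purely as a hedged illustration, and not the chapter's actual model, the following integrates a generic three-variable nonlinear system and shows the sensitive dependence on initial conditions that characterises chaotic dynamics:

      import numpy as np
      from scipy.integrate import solve_ivp

      def stakeholder_system(t, y, a=10.0, b=28.0, c=8.0 / 3.0):
          """Generic coupled nonlinear system (Lorenz-type), used only as an illustration."""
          x1, x2, x3 = y
          return [a * (x2 - x1), x1 * (b - x3) - x2, x1 * x2 - c * x3]

      t_span, t_eval = (0.0, 20.0), np.linspace(0.0, 20.0, 2000)
      sol_a = solve_ivp(stakeholder_system, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
      sol_b = solve_ivp(stakeholder_system, t_span, [1.0, 1.0, 1.0001], t_eval=t_eval)

      # Sensitive dependence on initial conditions: a tiny perturbation leads to large divergence.
      divergence = np.abs(sol_a.y - sol_b.y).max(axis=0)
      print("max divergence over time:", divergence.max())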

  12. TECHNIQUES USED IN SEARCH ENGINE MARKETING

    OpenAIRE

    Assoc. Prof. Liviu Ion Ciora Ph. D; Lect. Ion Buligiu Ph. D

    2010-01-01

    Search engine marketing (SEM) is a generic term covering a variety of marketing techniques intended for attracting web traffic in search engines and directories. SEM is a popular tool since it has the potential of substantial gains with minimum investment. On the one side, most search engines and directories offer free or extremely cheap listing. On the other side, the traffic coming from search engines and directories tends to be motivated for acquisitions, making these visitors some of the ...

  13. Concurrent Memory Load Can Make RSVP Search More Efficient

    Science.gov (United States)

    Gil-Gomez de Liano, Beatriz; Botella, Juan

    2011-01-01

    The detrimental effect of increased memory load on selective attention has been demonstrated in many situations. However, in search tasks over time using RSVP methods, it is not clear how memory load affects attentional processes; no effects as well as beneficial and detrimental effects of memory load have been found in these types of tasks. The…

  14. Group decision-making techniques for natural resource management applications

    Science.gov (United States)

    Coughlan, Beth A.K.; Armour, Carl L.

    1992-01-01

    This report is an introduction to decision analysis and problem-solving techniques for professionals in natural resource management. Although these managers are often called upon to make complex decisions, their training in the natural sciences seldom provides exposure to the decision-making tools developed in management science. Our purpose is to begin to fill this gap. We present a general analysis of the pitfalls of group problem solving, and suggestions for improved interactions, followed by the specific techniques. Selected techniques are illustrated. The material is easy to understand and apply without previous training or excessive study and is applicable to natural resource management issues.

  15. Modified harmony search

    Science.gov (United States)

    Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt

    2017-09-01

    The metaheuristic algorithm called Harmony Search (HS) is widely applied to parameter optimization in many areas. HS is a derivative-free, real-parameter optimization algorithm that draws its inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper proposes a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization for generating new solution vectors, enhancing the performance of the HS algorithm. The performances of MHS and HS are investigated on ten benchmark optimization problems in order to compare them and reflect the efficiency of MHS in terms of final accuracy, convergence speed and robustness.
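    For readers unfamiliar with the baseline algorithm that MHS modifies, a compact, hedged sketch of standard Harmony Search on a simple benchmark function is given below. The parameter names HMS, HMCR and PAR follow common usage; the genetic-algorithm and particle-swarm modifications introduced by MHS are not reproduced here:

      import numpy as np

      def harmony_search(obj, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          dim = len(bounds)
          # Initialise the harmony memory with random solution vectors.
          memory = rng.uniform(lo, hi, size=(hms, dim))
          scores = np.apply_along_axis(obj, 1, memory)
          for _ in range(iters):
              new = np.empty(dim)
              for d in range(dim):
                  if rng.random() < hmcr:                      # memory consideration
                      new[d] = memory[rng.integers(hms), d]
                      if rng.random() < par:                   # pitch adjustment
                          new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)
                  else:                                        # random selection
                      new[d] = rng.uniform(lo[d], hi[d])
              new = np.clip(new, lo, hi)
              worst = np.argmax(scores)
              if (s := obj(new)) < scores[worst]:              # replace the worst harmony
                  memory[worst], scores[worst] = new, s
          best = np.argmin(scores)
          return memory[best], scores[best]

      sphere = lambda x: float(np.sum(x**2))                   # simple benchmark function
      print(harmony_search(sphere, [(-5, 5)] * 3))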

  16. Reviewing model application to support animal health decision making.

    Science.gov (United States)

    Singer, Alexander; Salman, Mo; Thulke, Hans-Hermann

    2011-04-01

    Animal health is of societal importance as it affects human welfare, and anthropogenic interests shape decision making to assure animal health. Scientific advice to support decision making is manifold. Modelling, as one piece of the scientific toolbox, is appreciated for its ability to describe and structure data, to give insight into complex processes and to predict future outcomes. In this paper we study the application of scientific modelling to support practical animal health decisions. We reviewed the 35 animal health related scientific opinions adopted by the Animal Health and Animal Welfare Panel of the European Food Safety Authority (EFSA). Thirteen of these documents were based on the application of models. The review took two viewpoints, the decision maker's need and the modeller's approach. In the reviewed material three types of modelling questions were addressed by four specific model types. The correspondence between tasks and models underpinned the importance of the modelling question in triggering the modelling approach. End point quantifications were the dominating request from decision makers, implying that prediction of risk is a major need. However, due to knowledge gaps the corresponding modelling studies often shied away from providing exact numbers. Instead, comparative scenario analyses were performed, furthering the understanding of the decision problem and the effects of alternative management options. In conclusion, the most adequate scientific support for decision making - including available modelling capacity - might be expected if the required advice is clearly stated. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Searching for recyclability of modified clays for an environmental application

    Science.gov (United States)

    Del Hoyo Martínez, Carmen; Solange Lozano García, Marina; Sánchez Escribano, Vicente; Antequera, Jorge

    2014-05-01

    Thanks to developments in food science and technology over the last 50 years, several new substances that can fulfil beneficial functions in food have become available, and these substances, named food additives, are today within everyone's reach. Food additives play a very important role in the complex food supply. They fulfil several useful functions in food which we often take for granted. Nevertheless, the widespread use of food additives in food production also affects public health. The food industries, which are very important for the economy, discharge residues from their activity that have to be controlled in order to evaluate the environmental impact and to provide the information needed for a quantitative assessment of the chemical risk that the use of food additives poses to public health. Clay materials have found numerous applications in the field of public health (del Hoyo, 2007; Volzone, 2007), their effectiveness as adsorbents of all kinds of contaminants having been demonstrated. Some biodegradable materials are used for the adsorption of chemical contaminants: lignins (Valderrabano et al., 2008) and also clays and clay minerals, whose colloidal properties, ease of generating structural changes, abundance in nature and low cost make them very suitable for this kind of application. Among the strategies currently used to preserve water quality and thus reduce the environmental risk posed by chemical pollution, the use of low-cost adsorbents, whether natural or modified, stands out as a way to immobilise these compounds and avoid contamination of the water, with the consequent reduction in environmental and economic costs. We have studied the adsorption of several contaminants related to the food industry by natural or modified clays, searching for their interaction mechanisms and the possible recycling of these materials for environmental purposes and

  18. Search and imperative programming

    OpenAIRE

    Apt, Krzysztof; Schaerf, A.

    1996-01-01

    We augment the expressive power of imperative programming in order to make it a more attractive vehicle for problems that involve search. The proposed additions are limited yet powerful and are inspired by the logic programming paradigm. We illustrate their use by presenting solutions to a number of classical problems, including the straight search problem, the knapsack problem, and the 8 queens problem. These solutions are substantially simpler than their counterparts written in th...
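    The 8 queens problem cited in the abstract is the canonical example of embedding search (backtracking over choice points) in an otherwise imperative program. A minimal hedged sketch in plain Python, not the paper's proposed language extension:

      def solve_queens(n=8):
          """Return one placement of n non-attacking queens as a list of column indices."""
          placement = []

          def safe(col):
              row = len(placement)
              return all(c != col and abs(c - col) != row - r
                         for r, c in enumerate(placement))

          def backtrack():
              if len(placement) == n:
                  return True
              for col in range(n):            # choice point: try each column in this row
                  if safe(col):
                      placement.append(col)
                      if backtrack():
                          return True
                      placement.pop()         # undo the choice and try the next alternative
              return False

          return placement if backtrack() else None

      print(solve_queens())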

  19. How to improve your PubMed/MEDLINE searches: 3. advanced searching, MeSH and My NCBI.

    Science.gov (United States)

    Fatehi, Farhad; Gray, Leonard C; Wootton, Richard

    2014-03-01

    Although the basic PubMed search is often helpful, the results may sometimes be non-specific. For more control over the search process you can use the Advanced Search Builder interface. This allows a targeted search in specific fields, with the convenience of being able to select the intended search field from a list. It also provides a history of your previous searches. The search history is useful to develop a complex search query by combining several previous searches using Boolean operators. For indexing the articles in MEDLINE, the NLM uses a controlled vocabulary system called MeSH. This standardised vocabulary solves the problem of authors, researchers and librarians who may use different terms for the same concept. To be efficient in a PubMed search, you should start by identifying the most appropriate MeSH terms and use them in your search where possible. My NCBI is a personal workspace facility available through PubMed and makes it possible to customise the PubMed interface. It provides various capabilities that can enhance your search performance.
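    As a worked example of the advice above, a MeSH-based query combined with Boolean operators can also be issued programmatically. The sketch below uses Biopython's Entrez utilities (an assumption on my part, not something discussed in the article); the e-mail address is a placeholder that NCBI requires you to replace with your own:

      from Bio import Entrez

      Entrez.email = "your.name@example.org"     # required by NCBI usage policy

      # A targeted query built from MeSH terms and a Boolean operator,
      # mirroring what the Advanced Search Builder would generate.
      query = '"Decision Making"[MeSH Terms] AND "Information Seeking Behavior"[MeSH Terms]'

      handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
      record = Entrez.read(handle)
      handle.close()

      print(record["Count"], "matching records; first PMIDs:", record["IdList"][:5])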

  20. The effectiveness of tools used to evaluate successful critical decision making skills for applicants to healthcare graduate educational programs: a systematic review.

    Science.gov (United States)

    Benham, Brian; Hawley, Diane

    2015-05-15

    Students leave healthcare academic programs for a variety of reasons. When they attrite, it is disappointing for the student as well as their faculty. Advanced practice nursing and other healthcare professions require not only extensive academic preparation, but also the ability to critically evaluate patient care situations. The ability to critically evaluate a situation is not innate. Critical decision making skills are high level skills that are difficult to assess. For the purpose of this review, critical decision making and critical thinking skills refer to the same constructs and will be referred to globally as critical decision making skills. The objective of this review was to identify the effectiveness of tools used to evaluate critical decision making skills for applicants to healthcare graduate educational programs. Adult (18 years of age or older) applicants, students enrolled and/or recent graduates (within one year from completion) of healthcare graduate educational programs. Types of interventions: This review considered studies that evaluated the utilization of unique tools as well as standard tools, such as the Graduate Record Exam or grade point average, to evaluate critical decision making skills in graduate healthcare program applicants. Types of studies: Experimental and non-experimental studies were considered for inclusion. Types of outcomes: Successful quantitative evaluations based on specific field of study standards. The search strategy aimed to find both published and unpublished studies. Studies published in English after 1969 were considered for inclusion in this review. Databases that included both published and unpublished (grey) literature were searched. Additionally, reference lists from all articles retrieved were examined for articles for inclusion. Selected papers were assessed by two independent reviewers using standardized critical appraisal instruments from Joanna Briggs Institute. Any disagreement between reviewers was

  1. Finding people, papers, and posts: Vertical search algorithms and evaluation

    NARCIS (Netherlands)

    Berendsen, R.W.

    2015-01-01

    There is a growing diversity of information access applications. While general web search has been dominant in the past few decades, a wide variety of so-called vertical search tasks and applications have come to the fore. Vertical search is an often used term for search that targets specific

  2. Supporting complex search tasks

    DEFF Research Database (Denmark)

    Gäde, Maria; Hall, Mark; Huurdeman, Hugo

    2015-01-01

    , is fragmented at best. The workshop addressed the many open research questions: What are the obvious use cases and applications of complex search? What are essential features of work tasks and search tasks to take into account? And how do these evolve over time? With a multitude of information, varying from...

  3. Optimizing Vector-Quantization Processor Architecture for Intelligent Query-Search Applications

    Science.gov (United States)

    Xu, Huaiyu; Mita, Yoshio; Shibata, Tadashi

    2002-04-01

    The architecture of a very large scale integration (VLSI) vector-quantization processor (VQP) has been optimized to develop a general-purpose intelligent query-search agent. The agent performs a similarity-based search in a large-volume database. Although similarity-based search processing is computationally very expensive, latency-free searches have become possible due to the highly parallel maximum-likelihood search architecture of the VQP chip. Three architectures of the VQP chip have been studied and their performances are compared. In order to give reasonable searching results according to the different policies, the concept of penalty function has been introduced into the VQP. An E-commerce real-estate agency system has been developed using the VQP chip implemented in a field-programmable gate array (FPGA) and the effectiveness of such an agency system has been demonstrated.

  4. Automatic sorting of toxicological information into the IUCLID (International Uniform Chemical Information Database) endpoint-categories making use of the semantic search engine Go3R.

    Science.gov (United States)

    Sauer, Ursula G; Wächter, Thomas; Hareng, Lars; Wareing, Britta; Langsch, Angelika; Zschunke, Matthias; Alvers, Michael R; Landsiedel, Robert

    2014-06-01

    The knowledge-based search engine Go3R, www.Go3R.org, has been developed to assist scientists from industry and regulatory authorities in collecting comprehensive toxicological information with a special focus on identifying available alternatives to animal testing. The semantic search paradigm of Go3R makes use of expert knowledge on 3Rs methods and regulatory toxicology, laid down in the ontology, a network of concepts, terms, and synonyms, to recognize the contents of documents. Search results are automatically sorted into a dynamic table of contents presented alongside the list of documents retrieved. This table of contents allows the user to quickly filter the set of documents by topics of interest. Documents containing hazard information are automatically assigned to a user interface following the endpoint-specific IUCLID5 categorization scheme required, e.g. for REACH registration dossiers. For this purpose, complex endpoint-specific search queries were compiled and integrated into the search engine (based upon a gold standard of 310 references that had been assigned manually to the different endpoint categories). Go3R sorts 87% of the references concordantly into the respective IUCLID5 categories. Currently, Go3R searches in the 22 million documents available in the PubMed and TOXNET databases. However, it can be customized to search in other databases including in-house databanks. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Lunar and Planetary Science XXXV: Astrobiology: Analogs and Applications to the Search for Life

    Science.gov (United States)

    2004-01-01

    The session "Astrobiology: Analogs and Applications to the Search for Life" included the folowing reports:The Search for Life on Mars Using Macroscopically Visible Microbial Mats (Stromatolites) in 3.5/3.3 Ga Cherts from the Pilbara in Australia and Barberton in South Africa as Analogues; Life in a Mars Analog: Microbial Activity Associated with Carbonate Cemented Lava Breccias from NW Spitsbergen; Groundwater-fed Iron-rich Microbial Mats in a Freshwater Creek: Growth Cycles and Fossilization Potential of Microbial Features; Episodic Fossilization of Microorganisms on an Annual Timescale in an Anthropogenically Modified Natural Environment: Geochemical Controls and Implications for Astrobiology; Proterozoic Microfossils and Their Implications for Recognizing Life on Mars; Microbial Alteration of Volcanic Glass in Modern and Ancient Oceanic Crust as a Proxy for Studies of Extraterrestrial Material ; Olivine Alteration on Earth and Mars; Searching for an Acidic Aquifer in the R!o Tinto Basin. First Geobiology Results of MARTE Project; In-Field Testing of Life Detection Instruments and Protocols in a Mars Analogue Arctic Environment; Habitability of the Shallow Subsurface on Mars: Clues from the Meteorites; Mars Analog Rio Tinto Experiment (MARTE): 2003 Drilling Campaign to Search for a Subsurface Biosphere at Rio Tinto Spain; Characterization of the Organic Matter in an Archean Chert (Warrawoona, Australia); and The Solfatara Crater, Italy: Characterization of Hydrothermal Deposits, Biosignatures and Their Astrobiological Implication.

  6. Intelligent Search Optimization using Artificial Fuzzy Logics

    OpenAIRE

    Manral, Jai

    2015-01-01

    Information on the web is prodigious; searching for relevant information is difficult, making web users rely on search engines to find relevant information on the web. Search engines index and categorize web pages according to their contents using crawlers and rank them accordingly. For a given user query they retrieve millions of webpages and display them to users according to web-page rank. Every search engine has its own algorithms based on certain parameters for ranking web-pages. Searc...

  7. Adaptive Large Neighbourhood Search

    DEFF Research Database (Denmark)

    Røpke, Stefan

    Large neighborhood search is a metaheuristic that has gained popularity in recent years. The heuristic repeatedly moves from solution to solution by first partially destroying the solution and then repairing it. The best solution observed during this search is presented as the final solution....... This tutorial introduces the large neighborhood search metaheuristic and the variant adaptive large neighborhood search that dynamically tunes parameters of the heuristic while it is running. Both heuristics belong to a broader class of heuristics that are searching a solution space using very large...... neighborhoods. The tutorial also present applications of the adaptive large neighborhood search, mostly related to vehicle routing problems for which the heuristic has been extremely successful. We discuss how the heuristic can be parallelized and thereby take advantage of modern desktop computers...
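    The destroy-and-repair loop with adaptive operator weights can be summarised in a few lines. The sketch below is a hedged, generic skeleton on a toy ordering problem (the problem, operators and acceptance rule are invented for illustration and are much cruder than those in the tutorial):

      import random

      random.seed(1)

      # Toy problem: order ten items to minimise the summed gaps between neighbours.
      ITEMS = list(range(10))
      cost = lambda s: sum(abs(a - b) for a, b in zip(s, s[1:]))

      def destroy_random(s):                      # remove three items chosen at random
          removed = random.sample(s, 3)
          return [x for x in s if x not in removed], removed

      def destroy_segment(s):                     # remove a contiguous segment of three items
          i = random.randrange(len(s) - 2)
          return s[:i] + s[i + 3:], s[i:i + 3]

      def repair_greedy(partial, removed):        # reinsert each removed item at its cheapest position
          for x in removed:
              best_i = min(range(len(partial) + 1),
                           key=lambda i: cost(partial[:i] + [x] + partial[i:]))
              partial.insert(best_i, x)
          return partial

      destroyers = [destroy_random, destroy_segment]
      weights = [1.0, 1.0]                        # adaptive weights, one per destroy operator

      current = best = ITEMS[:]
      for _ in range(2000):
          i = random.choices(range(len(destroyers)), weights)[0]   # roulette-wheel selection
          candidate = repair_greedy(*destroyers[i](current[:]))
          if cost(candidate) <= cost(current):                     # simple acceptance criterion
              current = candidate
          if cost(candidate) < cost(best):
              best, weights[i] = candidate[:], weights[i] + 0.5    # reward the successful operator
          weights[i] = 0.95 * weights[i] + 0.05                    # decay weights toward a baseline
      print(best, cost(best))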

  8. Multi-Agent Based Beam Search for Real-Time Production Scheduling and Control Method, Software and Industrial Application

    CERN Document Server

    Kang, Shu Gang

    2013-01-01

    The Multi-Agent Based Beam Search (MABBS) method systematically integrates four major requirements of manufacturing production - representation capability, solution quality, computation efficiency, and implementation difficulty - within a unified framework to deal with the many challenges of complex real-world production planning and scheduling problems. Multi-agent Based Beam Search for Real-time Production Scheduling and Control introduces this method, together with its software implementation and industrial applications.  This book connects academic research with industrial practice, and develops a practical solution to production planning and scheduling problems. To simplify implementation, a reusable software platform is developed to build the MABBS method into a generic computation engine.  This engine is integrated with a script language, called the Embedded Extensible Application Script Language (EXASL), to provide a flexible and straightforward approach to representing complex real-world problems. ...
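    Stripped of the multi-agent machinery, the beam search at the heart of MABBS keeps only the k most promising partial schedules at each level of the search tree. A hedged, single-machine illustration (minimising total tardiness; the jobs and beam width are invented):

      # Each job: (processing_time, due_date). Beam search over job orderings.
      JOBS = [(4, 6), (2, 4), (6, 14), (3, 7)]
      BEAM_WIDTH = 3

      def tardiness(order):
          t = total = 0
          for j in order:
              p, d = JOBS[j]
              t += p
              total += max(0, t - d)
          return total

      beam = [((), 0)]                                   # (partial order, tardiness so far)
      for _ in range(len(JOBS)):
          candidates = []
          for order, _ in beam:
              for j in range(len(JOBS)):
                  if j not in order:
                      new_order = order + (j,)
                      candidates.append((new_order, tardiness(new_order)))
          # Keep only the BEAM_WIDTH most promising partial schedules.
          beam = sorted(candidates, key=lambda c: c[1])[:BEAM_WIDTH]

      best_order, best_tardiness = beam[0]
      print([JOBS[j] for j in best_order], best_tardiness)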

  9. Search Techniques for the Web of Things: A Taxonomy and Survey

    Science.gov (United States)

    Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus

    2016-01-01

    The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly; especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things while challenging by nature in this context, e.g., mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, efficient indexing for historical and real time data. The research community has developed numerous techniques and methods to tackle these problems as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to Web of Things are discussed, and an outlook to the future research is presented. PMID:27128918

  10. Medical student appraisal: searching on smartphones.

    Science.gov (United States)

    Khalifian, S; Markman, T; Sampognaro, P; Mitchell, S; Weeks, S; Dattilo, J

    2013-01-01

    The rapidly growing industry for mobile medical applications provides numerous smartphone resources designed for healthcare professionals. However, not all applications are equally useful in addressing the questions of early medical trainees. Three popular, free, mobile healthcare applications were evaluated along with a Google(TM) web search on both Apple(TM) and Android(TM) devices. Six medical students at a large academic hospital evaluated each application for a one-week period while on various clinical rotations. Google(TM) was the most frequently used search method and presented multimedia resources but was inefficient for obtaining clinical management information. Epocrates(TM) Pill ID feature was praised for its clinical utility. Medscape(TM) had the highest satisfaction of search and excelled through interactive educational features. Micromedex(TM) offered both FDA and off-label dosing for drugs. Google(TM) was the preferred search method for questions related to basic disease processes and multimedia resources, but was inadequate for clinical management. Caution should also be exercised when using Google(TM) in front of patients. Medscape(TM) was the most appealing application due to a broad scope of content and educational features relevant to medical trainees. Students should also be cognizant of how mobile technology may be perceived by their evaluators to avoid false impressions.

  11. Google Patents: The global patent search engine

    OpenAIRE

    Noruzi, Alireza; Abdekhoda, Mohammadhiwa

    2014-01-01

    Google Patents (www.google.com/patents) includes over 8 million full-text patents. Google Patents works in the same way as the Google search engine. Google Patents is the global patent search engine that lets users search through patents from the USPTO (United States Patent and Trademark Office), EPO (European Patent Office), etc. This study begins with an overview of how to use Google Patent and identifies advanced search techniques not well-documented by Google Patent. It makes several sug...

  12. Indexing Bibliographic Database Content Using MariaDB and Sphinx Search Server

    Directory of Open Access Journals (Sweden)

    Arie Nugraha

    2014-07-01

    Full Text Available Fast retrieval of digital content has become mandatory for library and archive information systems. Many software applications have emerged to handle the indexing of digital content, from low-level ones such as Apache Lucene, to more RESTful and web-services-ready ones such as Apache Solr and ElasticSearch. Solr’s popularity among library software developers makes it the “de-facto” standard software for indexing digital content. For content (full-text content or bibliographic descriptions) already stored inside a relational DBMS such as MariaDB (a fork of MySQL) or PostgreSQL, Sphinx Search Server (Sphinx) is a suitable alternative. This article covers an introduction to using Sphinx with MariaDB databases to index database content, as well as some examples of Sphinx API usage.
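    Once the Sphinx daemon is running, it can be queried over the MySQL wire protocol (SphinxQL), so the same client libraries used for MariaDB can also talk to the index. A hedged sketch follows, assuming a Sphinx instance listening on the default SphinxQL port 9306 and a hypothetical index named "biblio" built from the bibliographic table; details of the query syntax may differ between Sphinx versions:

      import pymysql

      # SphinxQL speaks the MySQL protocol, so a plain MySQL client can connect to searchd.
      conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
      try:
          with conn.cursor() as cur:
              # Full-text MATCH() against the hypothetical 'biblio' index of catalogue records.
              cur.execute("SELECT id, WEIGHT() FROM biblio WHERE MATCH(%s) LIMIT 10",
                          ("digital library indexing",))
              for record_id, relevance in cur.fetchall():
                  print(record_id, relevance)
      finally:
          conn.close()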

  13. Application of improved topsis method to accident emergency decision-making at nuclear power station

    International Nuclear Information System (INIS)

    Zhang Jin; Cai Qi; Zhang Fan; Chang Ling

    2009-01-01

    Given the complexity of multi-attribute decision-making in nuclear accident emergencies, and by integrating the subjective and objective weights of each evaluating index, a decision-making model for emergency plans at nuclear power stations is established through the application of an improved TOPSIS model. The testing results indicated that the improved TOPSIS-based multi-attribute decision-making gives better assessment results. (authors)
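    TOPSIS itself is straightforward to express: normalise the decision matrix, apply the index weights, and rank alternatives by their relative closeness to the ideal solution. A hedged sketch of the standard (unimproved) method, with invented emergency-plan data and weights, is shown below; the paper's improvement to the weighting step is not reproduced:

      import numpy as np

      # Rows: candidate emergency plans; columns: evaluating indices (invented numbers).
      X = np.array([[0.7, 120.0, 3.0],
                    [0.9, 200.0, 2.0],
                    [0.6,  80.0, 4.0]])
      weights = np.array([0.5, 0.3, 0.2])        # combined subjective/objective weights (hypothetical)
      benefit = np.array([True, False, True])    # True = larger is better, False = cost-type index

      # 1. Vector normalisation and weighting.
      V = weights * X / np.linalg.norm(X, axis=0)

      # 2. Ideal and anti-ideal solutions per criterion.
      ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
      anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

      # 3. Distances to the ideal points and relative closeness coefficient.
      d_plus  = np.linalg.norm(V - ideal, axis=1)
      d_minus = np.linalg.norm(V - anti_ideal, axis=1)
      closeness = d_minus / (d_plus + d_minus)

      print("ranking (best plan first):", np.argsort(-closeness))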

  14. Policy implications for familial searching

    OpenAIRE

    Kim, Joyce; Mammo, Danny; Siegel, Marni B; Katsanis, Sara H

    2011-01-01

    Abstract In the United States, several states have made policy decisions regarding whether and how to use familial searching of the Combined DNA Index System (CODIS) database in criminal investigations. Familial searching pushes DNA typing beyond merely identifying individuals to detecting genetic relatedness, an application previously reserved for missing persons identifications and custody battles. The intentional search of CODIS for partial matches to an item of evidence offers law enforce...

  15. OpenSearch technology for geospatial resources discovery

    Science.gov (United States)

    Papeschi, Fabrizio; Enrico, Boldrini; Mazzetti, Paolo

    2010-05-01

    In 2005, the term Web 2.0 was coined by Tim O'Reilly to describe a quickly growing set of Web-based applications that share a common philosophy of "mutually maximizing collective intelligence and added value for each participant by formalized and dynamic information sharing". Around this same period, OpenSearch, a new Web 2.0 technology, was developed. More properly, OpenSearch is a collection of technologies that allow publishing of search results in a format suitable for syndication and aggregation. It is a way for websites and search engines to publish search results in a standard and accessible format. Due to its strong impact on the way the Web is perceived by users, and also due to its relevance for businesses, Web 2.0 has attracted the attention of both mass media and the scientific community. This explosive growth in popularity of Web 2.0 technologies like OpenSearch, and practical applications of Service Oriented Architecture (SOA), resulted in an increased interest in similarities, convergence, and a potential synergy of these two concepts. SOA is considered as the philosophy of encapsulating application logic in services with a uniformly defined interface and making these publicly available via discovery mechanisms. Service consumers may then retrieve these services, compose and use them according to their current needs. A great degree of similarity between SOA and Web 2.0 may be leading to a convergence between the two paradigms. They also expose divergent elements, such as Web 2.0's support for human interaction as opposed to the typical SOA machine-to-machine interaction. According to these considerations, the Geospatial Information (GI) domain is also taking its first steps towards a new approach to data publishing and discovery, in particular taking advantage of the OpenSearch technology. A specific GI niche is represented by the OGC Catalog Service for Web (CSW) that is part of the OGC Web Services (OWS) specifications suite, which provides a

  16. Mental fatigue impairs soccer-specific decision-making skill.

    Science.gov (United States)

    Smith, Mitchell R; Zeuwts, Linus; Lenoir, Matthieu; Hens, Nathalie; De Jong, Laura M S; Coutts, Aaron J

    2016-07-01

    This study aimed to investigate the impact of mental fatigue on soccer-specific decision-making. Twelve well-trained male soccer players performed a soccer-specific decision-making task on two occasions, separated by at least 72 h. The decision-making task was preceded in a randomised order by 30 min of the Stroop task (mental fatigue) or 30 min of reading from magazines (control). Subjective ratings of mental fatigue were measured before and after treatment, and mental effort (referring to treatment) and motivation (referring to the decision-making task) were measured after treatment. Performance on the soccer-specific decision-making task was assessed using response accuracy and time. Visual search behaviour was also assessed throughout the decision-making task. Subjective ratings of mental fatigue and effort were almost certainly higher following the Stroop task compared to the magazines. Motivation for the upcoming decision-making task was possibly higher following the Stroop task. Decision-making accuracy was very likely lower and response time likely higher in the mental fatigue condition. Mental fatigue had unclear effects on most visual search behaviour variables. The results suggest that mental fatigue impairs accuracy and speed of soccer-specific decision-making. These impairments are not likely related to changes in visual search behaviour.

  17. Improving Web Search for Difficult Queries

    Science.gov (United States)

    Wang, Xuanhui

    2009-01-01

    Search engines have now become essential tools in all aspects of our life. Although a variety of information needs can be served very successfully, there are still a lot of queries that search engines can not answer very effectively and these queries always make users feel frustrated. Since it is quite often that users encounter such "difficult…

  18. The role of emotion in clinical decision making: an integrative literature review.

    Science.gov (United States)

    Kozlowski, Desirée; Hutchinson, Marie; Hurley, John; Rowley, Joanne; Sutherland, Joanna

    2017-12-15

    Traditionally, clinical decision making has been perceived as a purely rational and cognitive process. Recently, a number of authors have linked emotional intelligence (EI) to clinical decision making (CDM) and calls have been made for an increased focus on EI skills for clinicians. The objective of this integrative literature review was to identify and synthesise the empirical evidence for a role of emotion in CDM. A systematic search of the bibliographic databases PubMed, PsychINFO, and CINAHL (EBSCO) was conducted to identify empirical studies of clinician populations. Search terms were focused to identify studies reporting clinician emotion OR clinician emotional intelligence OR emotional competence AND clinical decision making OR clinical reasoning. Twenty-three papers were retained for synthesis. These represented empirical work from qualitative, quantitative, and mixed-methods approaches and comprised work with a focus on experienced emotion and on skills associated with emotional intelligence. The studies examined nurses (10), physicians (7), occupational therapists (1), physiotherapists (1), mixed clinician samples (3), and unspecified infectious disease experts (1). We identified two main themes in the context of clinical decision making: the subjective experience of emotion; and, the application of emotion and cognition in CDM. Sub-themes under the subjective experience of emotion were: emotional response to contextual pressures; emotional responses to others; and, intentional exclusion of emotion from CDM. Under the application of emotion and cognition in CDM, sub-themes were: compassionate emotional labour - responsiveness to patient emotion within CDM; interdisciplinary tension regarding the significance and meaning of emotion in CDM; and, emotion and moral judgement. Clinicians' experienced emotions can and do affect clinical decision making, although acknowledgement of that is far from universal. Importantly, this occurs in the absence of a

  19. Cognitive factors predicting intentions to search for health information: an application of the theory of planned behaviour.

    Science.gov (United States)

    Austvoll-Dahlgren, Astrid; Falk, Ragnhild S; Helseth, Sølvi

    2012-12-01

    People's ability to obtain health information is a precondition for their effective participation in decision making about health. However, there is limited evidence describing which cognitive factors can predict the intention of people to search for health information. To test the utility of a questionnaire in predicting intentions to search for health information, and to identify important predictors associated with this intention such that these could be targeted in an intervention. A questionnaire was developed based on the Theory of Planned Behaviour and tested on both a mixed population sample (n=30) and a sample of parents (n=45). The questionnaire was explored by testing for internal consistency, calculating inter-correlations between theoretically-related constructs, and by using multiple regression analysis. The reliability and validity of the questionnaire were found to be satisfactory and consistent across the two samples. The questionnaire's direct measures predicted intention well, accounting for 47% and 55% of the variance in behavioural intentions. Attitudes and perceived behavioural control were identified as important predictors of the intention to search for health information. The questionnaire may be a useful tool for understanding and evaluating behavioural intentions and beliefs related to searches for health information. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.

  20. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.

    Science.gov (United States)

    Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-14

    Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate first time as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multioption user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F1,19=37.3, P<…). The results favour natural language search interfaces for variable search, supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance

  1. Deep web search: an overview and roadmap

    NARCIS (Netherlands)

    Tjin-Kam-Jet, Kien; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    2011-01-01

    We review the state-of-the-art in deep web search and propose a novel classification scheme to better compare deep web search systems. The current binary classification (surfacing versus virtual integration) hides a number of implicit decisions that must be made by a developer. We make these

  2. New particle searches at e+e- machines

    International Nuclear Information System (INIS)

    Haissinski, J.

    1985-09-01

    Recent results on new particle searches with e+e- colliding beam rings are reported. This brief review makes no attempt at comprehensiveness, but some emphasis is placed on the searches for new heavy leptons and for supersymmetric particles, and on the question of particle compositeness. 37 refs.; 15 figs.

  3. Real-Time Search in Clouds

    OpenAIRE

    Uddin, Misbah; Skinner, Amy; Stadler, Rolf; Clemm, Alexander

    2013-01-01

    We developed a novel approach for management of networks/networked systems based on network search [4]. Network search provides a simple, uniform interface, through which human administrators and management applications can obtain network information, configuration or operational, without knowing its schema and location. We believe that the capability of network search will spur the development of new tools for human administrators and enable the rapid development of new classes of network co...

  4. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    Science.gov (United States)

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the Internet and facilitates third-party infrastructure and applications. While customers have no visibility on how their data is stored on the service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the Internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make effective data access through several fuzzy searching techniques. In this paper, we have discussed the existing fuzzy searching techniques and focused on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using the BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.
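    As a hedged illustration of the typo-tolerance idea behind fuzzy keyword search, the Python sketch below matches a possibly misspelled query against a keyword set using edit distance; the classifier search server and BTree index described in the paper are not reproduced here, and all names and thresholds are illustrative.

```python
# Minimal sketch of dictionary-based fuzzy keyword matching using edit
# distance; only the "tolerate typos" idea is shown, not the paper's
# classifier server or BTree searchable index.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def fuzzy_match(query: str, keyword_set: list[str], max_dist: int = 1) -> list[str]:
    """Return keywords within max_dist edits of the (possibly misspelled) query."""
    return [kw for kw in keyword_set
            if edit_distance(query.lower(), kw.lower()) <= max_dist]

if __name__ == "__main__":
    keywords = ["encryption", "cloud", "fuzzy", "classifier"]
    print(fuzzy_match("encrypton", keywords))  # ['encryption'] despite the typo
```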

  5. Search for Dark Matter at ATLAS

    CERN Document Server

    Conventi, Francesco; The ATLAS collaboration

    2017-01-01

    Dark Matter composes almost 25% of our Universe, but its identity is still unknown, which makes it a major challenge for current fundamental physics. Many approaches are being used to uncover the identity of Dark Matter, and one of them, collider searches, is discussed in this talk. The latest results on Dark Matter searches at ATLAS using 2015 and 2016 data are presented. Results from searches for new physics in events with final states containing large missing transverse energy + X (photons, jets, bosons) are shown. Higgs to invisible and dijet searches are used in a complementary way to constrain the properties of Dark Matter.

  6. Search Techniques for the Web of Things: A Taxonomy and Survey

    Directory of Open Access Journals (Sweden)

    Yuchao Zhou

    2016-04-01

    Full Text Available The Web of Things aims to make physical world objects and their data accessible through standard Web technologies to enable intelligent applications and sophisticated data analytics. Due to the amount and heterogeneity of the data, it is challenging to perform data analysis directly, especially when the data is captured from a large number of distributed sources. However, the size and scope of the data can be reduced and narrowed down with search techniques, so that only the most relevant and useful data items are selected according to the application requirements. Search is fundamental to the Web of Things, yet inherently challenging in this context owing to, e.g., mobility of the objects, opportunistic presence and sensing, continuous data streams with changing spatial and temporal properties, and the need for efficient indexing of historical and real-time data. The research community has developed numerous techniques and methods to tackle these problems, as reported by a large body of literature in the last few years. A comprehensive investigation of the current and past studies is necessary to gain a clear view of the research landscape and to identify promising future directions. This survey reviews the state-of-the-art search methods for the Web of Things, which are classified according to three different viewpoints: basic principles, data/knowledge representation, and contents being searched. Experiences and lessons learned from the existing work and some EU research projects related to the Web of Things are discussed, and an outlook on future research is presented.

  7. Semantic Search of Web Services

    Science.gov (United States)

    Hao, Ke

    2013-01-01

    This dissertation addresses semantic search of Web services using natural language processing. We first survey various existing approaches, focusing on the fact that the expensive costs of current semantic annotation frameworks result in limited use of semantic search for large scale applications. We then propose a vector space model based service…

  8. Global search in photoelectron diffraction structure determination using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Viana, M L [Departamento de Fisica, Icex, UFMG, Belo Horizonte, Minas Gerais (Brazil); Muino, R Diez [Donostia International Physics Center DIPC, Paseo Manuel de Lardizabal 4, 20018 San Sebastian (Spain); Soares, E A [Departamento de Fisica, Icex, UFMG, Belo Horizonte, Minas Gerais (Brazil); Hove, M A Van [Department of Physics and Materials Science, City University of Hong Kong, Hong Kong (China); Carvalho, V E de [Departamento de Fisica, Icex, UFMG, Belo Horizonte, Minas Gerais (Brazil)

    2007-11-07

    Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. Therefore, the fitting procedure is, indeed, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes tough, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface through the use of energy-scanned experimental curves; the Ag(110)-c(2 x 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method to search for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
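    The sketch below illustrates the kind of GA search described above on a toy problem: individuals are small parameter vectors and a synthetic quadratic objective stands in for the real R-factor, which would require multiple-scattering simulations. The crossover, mutation, and elitism settings are generic assumptions, not the authors' configuration.

```python
import random

# Toy GA sketch: minimise a stand-in "R-factor" over a 3-parameter space.
def r_factor(params):
    # Hypothetical smooth objective with a known minimum at (0.2, -0.1, 0.05).
    target = (0.2, -0.1, 0.05)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def genetic_search(n_params=3, pop_size=40, generations=100,
                   mutation_rate=0.2, elite=2):
    pop = [[random.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=r_factor)                      # rank by fitness (lower is better)
        next_pop = pop[:elite]                      # elitism: keep the best individuals
        while len(next_pop) < pop_size:
            mum, dad = random.sample(pop[:pop_size // 2], 2)
            cut = random.randrange(1, n_params)     # one-point crossover
            child = mum[:cut] + dad[cut:]
            if random.random() < mutation_rate:     # mutation: small random perturbation
                i = random.randrange(n_params)
                child[i] += random.gauss(0, 0.05)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=r_factor)

if __name__ == "__main__":
    best = genetic_search()
    print(best, r_factor(best))
```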

  9. SoftSearch: integration of multiple sequence features to identify breakpoints of structural variations.

    Directory of Open Access Journals (Sweden)

    Steven N Hart

    Full Text Available BACKGROUND: Structural variation (SV) represents a significant, yet poorly understood contribution to an individual's genetic makeup. Advanced next-generation sequencing technologies are widely used to discover such variations, but there is no single detection tool that is considered a community standard. In an attempt to fulfil this need, we developed an algorithm, SoftSearch, for discovering structural variant breakpoints in Illumina paired-end next-generation sequencing data. SoftSearch combines multiple strategies for detecting SV including split-read, discordant read-pair, and unmated pairs. Co-localized split-reads and discordant read pairs are used to refine the breakpoints. RESULTS: We developed and validated SoftSearch using real and synthetic datasets. SoftSearch's key features are 1) not requiring secondary (or exhaustive primary) alignment, 2) portability into established sequencing workflows, and 3) applicability to any DNA-sequencing experiment (e.g. whole genome, exome, custom capture, etc.). SoftSearch identifies breakpoints from a small number of soft-clipped bases from split reads and a few discordant read-pairs, which on their own would not be sufficient to make an SV call. CONCLUSIONS: We show that SoftSearch can identify more true SVs by combining multiple sequence features. SoftSearch was able to call clinically relevant SVs in the BRCA2 gene not reported by other tools while offering significantly improved overall performance.

  10. Generalizing Backtrack-Free Search: A Framework for Search-Free Constraint Satisfaction

    Science.gov (United States)

    Jonsson, Ari K.; Frank, Jeremy

    2000-01-01

    Tractable classes of constraint satisfaction problems are of great importance in artificial intelligence. Identifying and taking advantage of such classes can significantly speed up constraint problem solving. In addition, tractable classes are utilized in applications where strict worst-case performance guarantees are required, such as constraint-based plan execution. In this work, we present a formal framework for search-free (backtrack-free) constraint satisfaction. The framework is based on general procedures, rather than specific propagation techniques, and thus generalizes existing techniques in this area. We also relate search-free problem solving to the notion of decision sets and use the result to provide a constructive criterion that is sufficient to guarantee search-free problem solving.

  11. Application of fuzzy decision-making method in nuclear emergency

    International Nuclear Information System (INIS)

    Xu Zhixin; Xi Shuren; Qu Jingyuan

    2005-01-01

    Protective actions such as evacuation, sheltering and iodine administration can be taken to mitigate the radiological consequences in the event of an accidental release. In general, decision-making on countermeasures involves both quantitative and qualitative criteria. The conventional approaches to assessing these criteria tend to be less effective when dealing with those qualitative criteria that are imprecise or vague. In this regard, the fuzzy set method is an alternative tool that copes better with vague assessments. This paper presents the application of fuzzy methodology to decision-making on protective actions in nuclear emergencies. In this method, linguistic terms and fuzzy triangular numbers are used to represent the decision-maker's subjective assessment of the decision criteria considered and of the decision alternatives against those criteria. Following the assessment performed by specialists, the corresponding evaluations can be synthesized and ranked. Finally, the optimal strategy for implementing protective actions can be recommended. (authors)
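    A minimal sketch of the general idea follows, assuming a hypothetical three-term linguistic scale, hypothetical criterion weights, and centroid defuzzification; the authors' actual term set, weighting, and ranking rule may differ.

```python
# Illustrative only: linguistic ratings mapped to triangular fuzzy numbers
# (a, b, c), aggregated per alternative with criterion weights, defuzzified
# by the centroid, and ranked.

LINGUISTIC = {                       # hypothetical term set
    "low":    (0.0, 0.1, 0.3),
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.7, 0.9, 1.0),
}

def weighted_sum(ratings, weights):
    """Weighted sum of triangular fuzzy numbers (component-wise)."""
    return tuple(sum(w * LINGUISTIC[r][k] for r, w in zip(ratings, weights))
                 for k in range(3))

def centroid(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0

# Hypothetical criteria: averted dose, feasibility, public acceptance.
weights = [0.5, 0.3, 0.2]
alternatives = {
    "evacuation":            ["high", "low", "medium"],
    "sheltering":            ["medium", "high", "high"],
    "iodine administration": ["medium", "high", "medium"],
}

scores = {name: centroid(weighted_sum(r, weights)) for name, r in alternatives.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```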

  12. Journal of Computer Science and Its Application: Advanced Search

    African Journals Online (AJOL)

    Search tips: Search terms are case-insensitive; Common words are ignored; By default only articles containing all terms in the query are returned (i.e., AND is implied); Combine multiple words with OR to find articles containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ...

  13. PubMed Interact: an Interactive Search Application for MEDLINE/PubMed

    Science.gov (United States)

    Muin, Michael; Fontelo, Paul; Ackerman, Michael

    2006-01-01

    Online search and retrieval systems are important resources for medical literature research. Progressive Web 2.0 technologies provide opportunities to improve search strategies and user experience. Using PHP, Document Object Model (DOM) manipulation and Asynchronous JavaScript and XML (Ajax), PubMed Interact allows greater functionality so users can refine search parameters with ease and interact with the search results to retrieve and display relevant information and related articles. PMID:17238658

  14. Search in Real-Time Video Games

    OpenAIRE

    Cowling, Peter I.; Buro, Michael; Bida, Michal; Botea, Adi; Bouzy, Bruno; Butz, Martin V.; Hingston, Philip; Muñoz-Avila, Hector; Nau, Dana; Sipper, Moshe

    2013-01-01

    This chapter arises from the discussions of an experienced international group of researchers interested in the potential for creative application of algorithms for searching finite discrete graphs, which have been highly successful in a wide range of application areas, to address a broad range of problems arising in video games. The chapter first summarises the state of the art in search algorithms for games. It then considers the challenges in implementing these algorithms in video games (p...

  15. Risk-based decision making for terrorism applications.

    Science.gov (United States)

    Dillon, Robin L; Liebe, Robert M; Bestafka, Thomas

    2009-03-01

    This article describes the anti-terrorism risk-based decision aid (ARDA), a risk-based decision-making approach for prioritizing anti-terrorism measures. The ARDA model was developed as part of a larger effort to assess investments for protecting U.S. Navy assets at risk and determine whether the most effective anti-terrorism alternatives are being used to reduce the risk to the facilities and war-fighting assets. With ARDA and some support from subject matter experts, we examine thousands of scenarios composed of 15 attack modes against 160 facility types on two installations and hundreds of portfolios of 22 mitigation alternatives. ARDA uses multiattribute utility theory to solve some of the commonly identified challenges in security risk analysis. This article describes the process and documents lessons learned from applying the ARDA model for this application.
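    As a rough illustration of the multiattribute utility scoring that ARDA builds on, the sketch below ranks hypothetical mitigation alternatives with an additive utility; the attributes, weights, and scores are invented for the example and are not the Navy model.

```python
# Additive multiattribute utility sketch with made-up attributes and scores.
weights = {"casualty_risk_reduction": 0.5,
           "asset_value_protected":   0.3,
           "implementation_cost":     0.2}   # cost scored so that higher = cheaper

alternatives = {
    "perimeter barriers": {"casualty_risk_reduction": 0.6, "asset_value_protected": 0.7, "implementation_cost": 0.8},
    "harbor patrols":     {"casualty_risk_reduction": 0.8, "asset_value_protected": 0.6, "implementation_cost": 0.4},
    "sensor upgrades":    {"casualty_risk_reduction": 0.5, "asset_value_protected": 0.9, "implementation_cost": 0.6},
}

def utility(scores):
    return sum(weights[a] * scores[a] for a in weights)

for name, scores in sorted(alternatives.items(), key=lambda kv: -utility(kv[1])):
    print(f"{name}: {utility(scores):.2f}")
```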

  16. Search engines, the new bottleneck for content access

    NARCIS (Netherlands)

    van Eijk, N.; Preissl, B.; Haucap, J.; Curwen, P.

    2009-01-01

    The core function of a search engine is to make content and sources of information easily accessible (although the search results themselves may actually include parts of the underlying information). In an environment with unlimited amounts of information available on open platforms such as the

  17. Dark Matter searches with the ATLAS Detector

    CERN Document Server

    Suchek, Stanislav; The ATLAS collaboration

    2017-01-01

    Dark Matter composes almost 25% of our Universe, but its identity is still unknown, which makes it a major challenge for current fundamental physics. Many approaches are being used to uncover the identity of Dark Matter, and one of them, collider searches, is discussed in this talk. The latest results on Dark Matter searches at ATLAS using 2015 and 2016 data are presented. Results from searches for new physics in events with final states containing large missing transverse energy and a single photon or Higgs boson are shown. Higgs to invisible and dijet searches are used in a complementary way to constrain the properties of Dark Matter. Results and perspectives for all these searches are presented.

  18. Searching Algorithm Using Bayesian Updates

    Science.gov (United States)

    Caudle, Kyle

    2010-01-01

    In late October 1967, the USS Scorpion was lost at sea, somewhere between the Azores and Norfolk, Virginia. Dr. Craven of the U.S. Navy's Special Projects Division is credited with using Bayesian Search Theory to locate the submarine. Bayesian Search Theory is a straightforward and interesting application of Bayes' theorem, which involves searching…
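    The core update of Bayesian Search Theory can be stated in a few lines: each grid cell carries a prior probability of containing the wreck, and an unsuccessful search of a cell with detection probability d down-weights that cell by (1 - d) before renormalising. The priors and detection probability below are made up for illustration.

```python
# Bayesian search update: posterior after an unsuccessful look in one cell.
def update_after_miss(prior, searched_cell, detect_prob):
    posterior = prior.copy()
    posterior[searched_cell] *= (1.0 - detect_prob)   # P(miss | present) = 1 - d
    total = sum(posterior)
    return [p / total for p in posterior]

prior = [0.4, 0.3, 0.2, 0.1]          # four candidate cells (illustrative)
prior = update_after_miss(prior, searched_cell=0, detect_prob=0.8)
print([round(p, 3) for p in prior])   # cell 0 drops sharply; the others rise
```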

  19. Searching for confining hidden valleys at LHCb, ATLAS, and CMS

    Science.gov (United States)

    Pierce, Aaron; Shakya, Bibhushan; Tsai, Yuhsin; Zhao, Yue

    2018-05-01

    We explore strategies for probing hidden valley scenarios exhibiting confinement. Such scenarios lead to a moderate multiplicity of light hidden hadrons for generic showering and hadronization similar to QCD. Their decays are typically soft and displaced, making them challenging to probe with traditional LHC searches. We show that the low trigger requirements and excellent track and vertex reconstruction at LHCb provide a favorable environment to search for such signals. We propose novel search strategies in both muonic and hadronic channels. We also study existing ATLAS and CMS searches and compare them with our proposals at LHCb. We find that the reach at LHCb is generically better in the parameter space we consider here, even with optimistic background estimations for ATLAS and CMS searches. We discuss potential modifications at ATLAS and CMS that might make these experiments competitive with the LHCb reach. Our proposed searches can be applied to general hidden valley models as well as exotic Higgs boson decays, such as in twin Higgs models.

  20. Investigation on the improvement of genetic algorithm for PWR loading pattern search and its benchmark verification

    International Nuclear Information System (INIS)

    Li Qianqian; Jiang Xiaofeng; Zhang Shaohong

    2009-01-01

    In this study, the age technique and the concepts of relativeness degree and worth function are exploited to improve the performance of the genetic algorithm (GA) for PWR loading pattern search. Among them, the age technique enables the algorithm to learn from previous search 'experience' and guides it to search better in the vicinity of a local optimum; the introduction of the relativeness degree checks the relativeness of two loading patterns before performing crossover between them, which can significantly reduce the possibility of premature convergence of the algorithm; and the application of the worth function enables the algorithm to generate new loading patterns based on the statistics of common features of evaluated good loading patterns. Numerical verification against a loading pattern search benchmark problem of a two-loop reactor demonstrates that the adoption of these techniques significantly enhances the efficiency of the genetic algorithm while also improving the quality of the final solution. (authors)

  1. Making the Most of Libraries in the Search for Academic Excellence.

    Science.gov (United States)

    Breivik, Patricia Senn

    1987-01-01

    The role of libraries in the search for quality education was addressed in the Carnegie Foundation's report, "College," and at the first higher education conference on academic libraries. Information literacy and policy, campus organizational issues, and programs in economic development support, active learning, and faculty development…

  2. Deliberation versus automaticity in decision making: Which presentation format features facilitate automatic decision making?

    Directory of Open Access Journals (Sweden)

    Anke Soellner

    2013-05-01

    Full Text Available The idea of automatic decision making approximating normatively optimal decisions without necessitating much cognitive effort is intriguing. Whereas recent findings support the notion that such fast, automatic processes explain empirical data well, little is known about the conditions under which such processes are selected rather than more deliberate stepwise strategies. We investigate the role of the format of information presentation, focusing explicitly on the ease of information acquisition and its influence on information integration processes. In a probabilistic inference task, the standard matrix employed in prior research was contrasted with a newly created map presentation format and additional variations of both presentation formats. Across three experiments, a robust presentation format effect emerged: Automatic decision making was more prevalent in the matrix (with high information accessibility), whereas sequential decision strategies prevailed when the presentation format demanded more information acquisition effort. Further scrutiny of the effect showed that it is not driven by the presentation format as such, but rather by the extent of information search induced by a format. Thus, if information is accessible with minimal need for information search, information integration is likely to proceed in a perception-like, holistic manner. In turn, a moderate demand for information search decreases the likelihood of behavior consistent with the assumptions of automatic decision making.

  3. Some applications of fuzzy sets and the analytical hierarchy process to decision making

    OpenAIRE

    Castro, Alberto Rosas

    1984-01-01

    Approved for public release; distribution unlimited. This thesis examines the use of fuzzy set theory and the analytic hierarchy process in decision making. It begins by reviewing the insights of psychologists, social scientists and computer scientists into the decision making process. The Operations Research-Systems Analysis approach is discussed, followed by a presentation of the basis of fuzzy set theory and the analytic hierarchy process. Two applications of these meth...

  4. Optimal Path Determination for Flying Vehicle to Search an Object

    Science.gov (United States)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object, focusing on the optimal path used for the search. An optimal control model is used so that the vehicle moves along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here, the cost functional makes the air vehicle reach the object as quickly as possible. The vehicle's axis reference uses the N-E-D (North-East-Down) coordinate system. The main results are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along the optimal path. A further result shows that the cost functional used is convex; this convexity guarantees the existence of an optimal control. The paper also presents simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and optimal path is the Pontryagin Minimum Principle.

  5. Resource Selection for Federated Search on the Web

    NARCIS (Netherlands)

    Nguyen, Dong-Phuong; Demeester, Thomas; Trieschnigg, Rudolf Berend; Hiemstra, Djoerd

    A publicly available dataset for federated search reflecting a real web environment has long been absent, making it difficult for researchers to test the validity of their federated search algorithms for the web setting. We present several experiments and analyses on resource selection on the web

  6. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    Science.gov (United States)

    Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.

    2015-12-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use-case is indexed separately in ElasticSearch and we setup a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS level information gathered through the API is sent to the MySQL database through an ad-hoc developed RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication to the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the Root plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII
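    A hedged sketch of the application-level flow described above: pushing one accounting record into Elasticsearch and running a dashboard-style aggregation. The index name, field names, and local endpoint are assumptions for illustration; the Torino setup actually feeds Elasticsearch from the MySQL back-end through a custom Logstash plugin rather than indexing directly like this.

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch   # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local test instance

record = {
    "tenant": "alice-tier2",                  # hypothetical tenant name
    "vm_id": "one-4242",                      # hypothetical VM identifier
    "cpu_hours": 12.5,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Each use case gets its own index, mirroring the per-use-case indexing above.
es.index(index="iaas-metering", document=record)
es.indices.refresh(index="iaas-metering")     # make the document searchable now

# A Kibana-dashboard-style query: total CPU hours per tenant.
resp = es.search(index="iaas-metering", size=0, aggs={
    "cpu_by_tenant": {"terms": {"field": "tenant.keyword"},
                      "aggs": {"cpu": {"sum": {"field": "cpu_hours"}}}}
})
print(resp["aggregations"]["cpu_by_tenant"]["buckets"])
```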

  7. A preference aggregation model and application in AHP-group decision making

    Science.gov (United States)

    Yang, Taiyi; Yang, De; Chao, Xiangrui

    2018-04-01

    A group decision making process integrates individual preferences into a group preference by applying aggregation rules and preference relations. The two most common approaches, the aggregation of individual judgements and the aggregation of individual priorities, are traditionally employed in the Analytic Hierarchy Process to deal with group decision making problems. In both cases, it is assumed that the group preference is an approximate weighted mathematical expectation of the individual judgements or individual priorities. We propose new preference aggregation methods based on optimization models in order to obtain a group preference that is close to all individual priorities. Illustrative examples are finally examined to demonstrate the proposed models in application.
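    For contrast with the proposed optimization models, the sketch below shows the conventional aggregation-of-individual-priorities baseline via a weighted geometric mean; the decision-maker weights and priority vectors are illustrative.

```python
import math

# Aggregation of individual priorities (AIP) with a weighted geometric mean.
def aggregate_priorities(priorities, dm_weights):
    """priorities: list of individual priority vectors (each sums to 1)."""
    n = len(priorities[0])
    raw = [math.prod(p[i] ** w for p, w in zip(priorities, dm_weights))
           for i in range(n)]
    total = sum(raw)
    return [r / total for r in raw]          # renormalise to a group priority vector

individual = [
    [0.5, 0.3, 0.2],    # decision maker 1
    [0.4, 0.4, 0.2],    # decision maker 2
    [0.6, 0.2, 0.2],    # decision maker 3
]
group = aggregate_priorities(individual, dm_weights=[0.5, 0.3, 0.2])
print([round(g, 3) for g in group])
```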

  8. Search and imperative programming

    NARCIS (Netherlands)

    K.R. Apt (Krzysztof); A. Schaerf

    1996-01-01

    We augment the expressive power of imperative programming in order to make it a more attractive vehicle for problems that involve search. The proposed additions are limited yet powerful and are inspired by the logic programming paradigm. We illustrate their use by presenting solutions to a

  9. In the making

    DEFF Research Database (Denmark)

    2005-01-01

    disciplines and includes other research areas with common interest in how people shape and make sense of things in an increasingly man-made world. The conference directs its interest towards the diversity, challenges, emerging practices and understanding of design. Rather than searching for common definitions...

  10. Target-present guessing as a function of target prevalence and accumulated information in visual search.

    Science.gov (United States)

    Peltier, Chad; Becker, Mark W

    2017-05-01

    Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121--124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability for prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response to test if these sources of evidence are used to inform target present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response allowing them to make an informed and statistically driven guess about the target's presence.

  11. Spatial search by quantum walk

    International Nuclear Information System (INIS)

    Childs, Andrew M.; Goldstone, Jeffrey

    2004-01-01

    Grover's quantum search algorithm provides a way to speed up combinatorial search, but is not directly applicable to searching a physical database. Nevertheless, Aaronson and Ambainis showed that a database of N items laid out in d spatial dimensions can be searched in time of order √(N) for d>2, and in time of order √(N) poly(log N) for d=2. We consider an alternative search algorithm based on a continuous-time quantum walk on a graph. The case of the complete graph gives the continuous-time search algorithm of Farhi and Gutmann, and other previously known results can be used to show that √(N) speedup can also be achieved on the hypercube. We show that full √(N) speedup can be achieved on a d-dimensional periodic lattice for d>4. In d=4, the quantum walk search algorithm takes time of order √(N) poly(log N), and in d<4, the algorithm does not provide substantial speedup

  12. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search

    Science.gov (United States)

    Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-01

    Background Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these “experts.” Such interfaces hark back to a time when searches needed to be accurate first time as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. Objective The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the “Google generation” than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Methods Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is “Google-like,” enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multioption user interface. Results Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F 1,19=37.3, P<…), an effect of task (F 3,57=6.3, P<…), an effect of interface (F 1,19=18.0, P<…), and an effect of task (F 2,38=4.1, P=.025, Greenhouse-Geisser

  13. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    Directory of Open Access Journals (Sweden)

    Shyamala Devi Munisamy

    2015-01-01

    Full Text Available Cloud computing has pioneered the emerging world by manifesting itself as a service through the Internet and facilitates third-party infrastructure and applications. While customers have no visibility on how their data is stored on the service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the Internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make effective data access through several fuzzy searching techniques. In this paper, we have discussed the existing fuzzy searching techniques and focused on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for multiple-keyword requests, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using the BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  14. Developing a Data Discovery Tool for Interdisciplinary Science: Leveraging a Web-based Mapping Application and Geosemantic Searching

    Science.gov (United States)

    Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.

    2015-12-01

    The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share their data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URI's) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users the ability to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for Javascript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship

  15. Advances in the application of decision theory to test-based decision making

    NARCIS (Netherlands)

    van der Linden, Willem J.

    This paper reviews recent research in the Netherlands on the application of decision theory to test-based decision making about personnel selection and student placement. The review is based on an earlier model proposed for the classification of decision problems, and emphasizes an empirical

  16. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    Science.gov (United States)

    Dong, Peng; Wong, Ling Ling; Ng, Sarah; Loh, Marie; Mondry, Adrian

    2004-01-01

    Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast-growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0) (p<…), favouring the meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians. PMID:15588311
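    The two retrieval metrics used in the evaluation are easy to state in code; the sets below are toy examples rather than the CAT Crawler data.

```python
# Recall and precision over sets of retrieved and relevant document IDs.
def recall(retrieved, relevant):
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

def precision(retrieved, relevant):
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

retrieved = {"cat1", "cat2", "cat3", "cat4"}   # what the engine returned
relevant  = {"cat1", "cat2", "cat5"}           # what the assessors judged relevant
print(f"recall={recall(retrieved, relevant):.2f}, "
      f"precision={precision(retrieved, relevant):.2f}")
```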

  17. Decision making.

    Science.gov (United States)

    Chambers, David W

    2011-01-01

    A decision is a commitment of resources under conditions of risk in expectation of the best future outcome. The smart decision is always the strategy with the best overall expected value, that is, the best combination of facts and values. Some of the special circumstances involved in decision making are discussed, including decisions where there are multiple goals, those where more than one person is involved in making the decision, using trigger points, framing decisions correctly, commitments to lost causes, and expert decision makers. A complex example of deciding about removal of asymptomatic third molars, with and without an EBD search, is discussed.
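    The "best overall expected value" rule can be made concrete with a small sketch: each strategy is a list of (probability, utility) outcomes and the smart decision maximises the probability-weighted utility. The numbers are purely illustrative, not clinical recommendations.

```python
# Expected-value comparison of two hypothetical strategies.
strategies = {
    "extract third molars now": [(0.90, 80), (0.10, -40)],   # (probability, utility)
    "watchful waiting":         [(0.70, 100), (0.30, -20)],
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in strategies.items():
    print(f"{name}: EV = {expected_value(outcomes):.1f}")
print("choose:", max(strategies, key=lambda s: expected_value(strategies[s])))
```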

  18. Application of silicon carbide to synchrotron-radiation mirrors

    International Nuclear Information System (INIS)

    Takacs, P.Z.; Hursman, T.L.; Williams, J.T.

    1983-09-01

    Damage to conventional mirror materials exposed to the harsh synchrotron radiation (SR) environment has prompted the SR user community to search for more suitable materials. Next-generation insertion devices, with their attendant flux increases, will make the problem of mirror design even more difficult. A parallel effort in searching for better materials has been underway within the laser community for several years. The technology for dealing with high thermal loads is highly developed among laser manufacturers. Performance requirements for laser heat exchangers are remarkably similar to SR mirror requirements. We report on the application of laser heat exchanger technology to the solution of typical SR mirror design problems. The superior performance of silicon carbide for laser applications is illustrated by various material trade studies, and its superior performance for SR applications is illustrated by means of model calculations

  19. To Stretch and Search for Better Ways

    Science.gov (United States)

    Moore, John W.

    2000-06-01

    Ambassadors. The response has been wonderful. Many people are willing and eager to show others what JCE has to offer and encourage them to subscribe. The program began in the latter half of 1999, and there were 37 Journal Ambassadors by year's end. Some are located as far away as South America and Europe, and requests for information packets for meetings and workshops now arrive several times a week. We thank everyone who has been involved in this program for getting it off to a great start. Our authors and reviewers actively search for better ways to teach chemistry and for better ways to communicate to other teachers what they have learned. This enriches their own classes first and then a much wider audience. Others have volunteered to help make JCE articles easier to find and more accessible on the Web. The ACS student affiliates at one college have taken on the project of assigning keywords to articles published in some of the years before 1995. We will add these to the JCE Index online, making it an even more effective means for finding articles on specified topics. There are many possibilities for collaboration with JCE. If you would like to contribute to an ongoing project or would like to initiate a new one, please let us know. We welcome anyone who would like to help us make this Journal better. It is important that students learn how to stretch and search for better ways. This will not happen unless we challenge them within a humane and supportive learning environment. We should expect more than memorization or unthinking application of algorithmic solutions to exercises. We should provide means by which those who do not succeed at first can try again and again. And we should provide an intellectual scaffold for those whose climb toward understanding is difficult. These are not easy goals to achieve, but the more we try and the more we communicate with others who are attempting similar tasks, the more likely we are to be successful. Most important of all is that

  20. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    Science.gov (United States)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, json, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, sub-setting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.
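    A generic sketch of the kind of HTTP workflow described: a keyword-, temporally-, and spatially-constrained search against a REST/OpenSearch endpoint. The URL, parameter names, and response fields below are placeholders for illustration, not the documented ECHO API; consult the ECHO/ESIP OpenSearch documentation for the real ones.

```python
import requests

ENDPOINT = "https://example.gov/opensearch/granules"   # placeholder endpoint

params = {
    "keyword": "sea surface temperature",
    "startTime": "2011-01-01T00:00:00Z",
    "endTime": "2011-12-31T23:59:59Z",
    "bbox": "-10,35,5,45",     # west,south,east,north spatial constraint
    "format": "json",
}

resp = requests.get(ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
# Field names here ("entries", "title", "downloadUrl") are assumed for the sketch.
for entry in resp.json().get("entries", []):
    print(entry.get("title"), entry.get("downloadUrl"))
```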

  1. Nuclearites search with the TLS detector

    International Nuclear Information System (INIS)

    Liang, S.; Wada, T.; Nakatsuka, T.; Okei, K.; Saavedra, O.; Takahashi, N.; Tsuji, S.; Yamamoto, I.; Yamashita, Y.; Lan, S.; Okita, M.; Takahashi, N.; Morita, T.; Ishii, R.; Goto, Y.; Iyono, A.; Matsumoto, H.; Nakagawa, M.; Noda, C.; Masuda, M.

    2008-01-01

    It is shown that a thermoluminescent sheet stack (TLS) detector, consisting of TL sheets and medical x-ray films, is an effective nuclearite detector. The TLS can be used to search for lighter nuclearites at sea level because ordinary relativistic particles cannot leave tracks in the TLS unless they carry a charge larger than 50. We will report recent results of searching for lighter nuclearites at sea level

  2. Application of multi-criteria decision making to sustainable energy planning - a review

    Energy Technology Data Exchange (ETDEWEB)

    Pohekar, S.D.; Ramachandram, M. [Birla Inst. of Technology and Science, Pilani (India)

    2004-08-01

    Multi-Criteria Decision Making (MCDM) techniques are gaining popularity in sustainable energy management. The techniques provide solutions to problems involving conflicting and multiple objectives. Several methods based on weighted averages, priority setting, outranking, fuzzy principles and their combinations are employed for energy planning decisions. A review of more than 90 published papers is presented here to analyze the applicability of the various methods discussed. A classification by application area and year of application is presented to highlight the trends. It is observed that the Analytical Hierarchy Process is the most popular technique, followed by the outranking techniques PROMETHEE and ELECTRE. Validation of results with multiple methods, development of interactive decision support systems and application of fuzzy methods to tackle uncertainties in the data are observed in the published literature. (author)

  3. Far-infrared contraband-detection-system development for personnel-search applications

    International Nuclear Information System (INIS)

    Schellenbaum, R.L.

    1982-09-01

    Experiments have been conducted toward the development of an active near-millimeter-wave, far infrared, personnel search system for the detection of contraband. These experiments employed a microwave hybrid tee interferometer/radiometer scanning system and quasi-optical techniques at 3.3-mm wavelength to illuminate and detect the reflection from target objects against a human body background. Clothing and other common concealing materials are transparent at this wavelength. Retroreflector arrays, in conjunction with a Gunn diode radiation source, were investigated to provide all-angle illumination and detection of specular reflections from unaligned and irregularly shaped objects. Results indicate that, under highly controlled search conditions, metal objects greater than or equal to 25 cm² can be detected in an enclosure lined with retroreflectors. Further development is required to produce a practical personnel search system. The investigation and feasibility of alternate far infrared search techniques are presented. 23 figures, 2 tables

  4. Using Google Search Appliance (GSA) to search digital library collections: A Case Study of the INIS Collection Search

    International Nuclear Information System (INIS)

    Savic, Dobrica

    2014-01-01

    Google Search has established a new standard for information retrieval which did not exist with previous generations of library search facilities. The INIS hosts one of the world’s largest collections of published information on the peaceful uses of nuclear science and technology. It offers on-line access to a unique collection of 3.6 million bibliographic records and 483,000 full texts of non-conventional (grey) literature. This large digital library collection suffered from most of the well-known shortcomings of the classic library catalogue. Searching was complex and complicated, it required training in Boolean logic, full-text searching was not an option, and response time was slow. An opportune moment to improve the system came with the retirement of the previous catalogue software and the adoption of GSA as an organization-wide search engine standard. INIS was quick to realize the potential of using such a well-known application to replace its on-line catalogue. This paper presents the advantages and disadvantages encountered during three years of GSA use. Based on specific INIS-based practice and experience, this paper also offers some guidelines on ways to improve classic collections of millions of bibliographic and full-text documents, while reaping multiple benefits, such as increased use, accessibility, usability, expandability and improving user search and retrieval experiences. (author)

  5. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation.

    Science.gov (United States)

    Rognes, Torbjørn

    2011-06-01

    The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.
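    For reference, a plain (non-SIMD) computation of the Smith-Waterman local alignment score is sketched below; SWIPE's inter-sequence SIMD layout, BLOSUM scoring, and affine gaps are not reproduced.

```python
# Scalar Smith-Waterman local alignment score with simple match/mismatch/gap costs.
def smith_waterman_score(query, subject, match=2, mismatch=-1, gap=-2):
    cols = len(subject) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(query) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if query[i - 1] == subject[j - 1] else mismatch)
            # Local alignment: scores are floored at zero.
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))
```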

  6. Children searching information on the Internet : Performance on children's interfaces compared to Google

    NARCIS (Netherlands)

    Jochmann-Mannak, Hanna; Huibers, Theo W.C.; Lentz, Leo; Sanders, Ted

    2010-01-01

    Children frequently make use of the Internet to search for information. However, research shows that children experience many problems with searching and browsing the web. The last decade numerous search environments have been developed, especially for children. Do these search interfaces support

  7. Job search monitoring and assistance for the unemployed

    OpenAIRE

    Marinescu, Ioana E.

    2017-01-01

    In many countries, reducing unemployment is among the most important policy goals. In this context, monitoring job search by the unemployed and providing job search assistance can play a crucial role. However, more and more stringent monitoring and sanctions are not a panacea. Policymakers must consider possible downsides, such as unemployed people accepting less stable and lower-paying jobs. Tying “moderate” monitoring to job search assistance may be the essential ingredient to make this app...

  8. Learning Search Algorithms: An Educational View

    Directory of Open Access Journals (Sweden)

    Ales Janota

    2014-12-01

    Full Text Available Artificial intelligence methods find practical use in many applications, including the maritime industry. The paper concentrates on the methods of uninformed and informed search, potentially usable in solving complex problems based on a state-space representation. The problem of introducing search algorithms to newcomers has both technical and psychological dimensions. The authors show how it is possible to cope with both of them through the design and use of specialized authoring systems. A typical example of searching for a path through a maze is used to demonstrate how to test, observe and compare the properties of various search strategies. The performance of the search methods is evaluated using common criteria.
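    The maze example lends itself to a short sketch: breadth-first search, the simplest uninformed strategy, is shown below; an informed strategy such as A* would replace the FIFO queue with a priority queue ordered by path cost plus a heuristic. The maze layout is illustrative.

```python
from collections import deque

MAZE = ["S.#.",
        ".##.",
        "...G"]          # S = start, G = goal, # = wall

def bfs(maze):
    rows, cols = len(maze), len(maze[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if maze[r][c] == "S")
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if maze[r][c] == "G":
            return path                      # shortest path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and maze[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None                              # goal unreachable

print(bfs(MAZE))
```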

  9. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
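    A hedged sketch of the ARGO idea follows: regress current flu activity on its own recent lags plus a contemporaneous search-query signal. ARGO itself uses L1-regularised regression over many Google terms and lags; here ordinary least squares on synthetic data keeps the example short.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 120
flu = np.sin(np.linspace(0, 12, T)) + 0.1 * rng.standard_normal(T)   # synthetic flu activity
queries = flu + 0.2 * rng.standard_normal(T)                         # noisy search-query signal

lags = 3
X = np.column_stack([flu[i:T - lags + i] for i in range(lags)]        # autoregressive terms
                    + [queries[lags:]])                               # exogenous search term
X = np.column_stack([np.ones(len(X)), X])                             # intercept
y = flu[lags:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)                          # fit the ARX model
pred = X @ coef
print("in-sample RMSE:", float(np.sqrt(np.mean((pred - y) ** 2))))
```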

  10. Can electronic search engines optimize screening of search results in systematic reviews: an empirical study.

    Science.gov (United States)

    Sampson, Margaret; Barrowman, Nicholas J; Moher, David; Clifford, Tammy J; Platt, Robert W; Morrison, Andra; Klassen, Terry P; Zhang, Li

    2006-02-24

    Most electronic search efforts directed at identifying primary studies for inclusion in systematic reviews rely on the optimal Boolean search features of search interfaces such as DIALOG and Ovid. Our objective is to test the ability of an Ultraseek search engine to rank MEDLINE records of the included studies of Cochrane reviews within the top half of all the records retrieved by the Boolean MEDLINE search used by the reviewers. Collections were created using the MEDLINE bibliographic records of included and excluded studies listed in the review and all records retrieved by the MEDLINE search. Records were converted to individual HTML files. Collections of records were indexed and searched through a statistical search engine, Ultraseek, using review-specific search terms. Our data sources, systematic reviews published in the Cochrane library, were included if they reported using at least one phase of the Cochrane Highly Sensitive Search Strategy (HSSS), provided citations for both included and excluded studies and conducted a meta-analysis using a binary outcome measure. Reviews were selected if they yielded between 1000-6000 records when the MEDLINE search strategy was replicated. Nine Cochrane reviews were included. Included studies within the Cochrane reviews were found within the first 500 retrieved studies more often than would be expected by chance. Across all reviews, recall of included studies into the top 500 was 0.70. There was no statistically significant difference in ranking when comparing included studies with just the subset of excluded studies listed as excluded in the published review. The relevance ranking provided by the search engine was better than expected by chance and shows promise for the preliminary evaluation of large results from Boolean searches. A statistical search engine does not appear to be able to make fine discriminations concerning the relevance of bibliographic records that have been pre-screened by systematic reviewers.

  11. The application of a selection of decision-making techniques by employees in a transport work environment in conjunction with their perceived decision-making success and practice

    OpenAIRE

    Theuns F.J. Oosthuizen

    2014-01-01

    A lack of optimum selection and application of decision-making techniques, in conjunction with suitable decision-making practice and perception of employees in a transport work environment demands attention to improve overall performance. Although multiple decision-making techniques exist, five prevalent techniques were considered in this article, namely the Kepner-Tregoe, Delphi, stepladder, nominal group and brainstorming techniques. A descriptive research design was followed, using an empi...

  12. Application of pattern search method to power system security constrained economic dispatch with non-smooth cost function

    International Nuclear Information System (INIS)

    Al-Othman, A.K.; El-Naggar, K.M.

    2008-01-01

    Direct search (DS) methods are derivative-free algorithms used to solve optimization problems; they do not require any information about the gradient of the objective function while searching for an optimum solution. One such method is the Pattern Search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve a security constrained power system economic dispatch (SCED) problem with a non-smooth cost function. Operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time paying attention to economic aspects. A pattern recognition technique is used first to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The pattern search method is then applied to solve the constrained optimization formulation. In particular, the method is tested on three different test systems, and simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and shows that pattern search is well suited to solving the security constrained power system economic dispatch problem. In addition, valve-point loading effects and total system losses are considered to further investigate the potential of the PS technique. Based on the results, it can be concluded that PS has demonstrated its ability to handle the highly nonlinear, discontinuous, non-smooth cost function of the SCED. (author)
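
    A minimal unconstrained pattern (compass) search is sketched below in Python to illustrate the polling-and-mesh-shrinking idea; the test cost function is invented for the example, and the constrained SCED formulation, security classifiers, and valve-point model are not reproduced.

        import numpy as np

        # Minimal compass-style pattern search on an unconstrained test function
        # (a sketch of the general PS idea only, not the constrained SCED problem).
        def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                improved = False
                for i in range(len(x)):                 # poll along each coordinate direction
                    for sign in (+1.0, -1.0):
                        trial = x.copy()
                        trial[i] += sign * step
                        ft = f(trial)
                        if ft < fx:                     # accept the first improving poll point
                            x, fx, improved = trial, ft, True
                            break
                    if improved:
                        break
                if not improved:
                    step *= 0.5                         # shrink the mesh when polling fails
                    if step < tol:
                        break
            return x, fx

        # Example: minimise a simple non-smooth cost (absolute-value, valve-point flavour).
        cost = lambda x: (x[0] - 3) ** 2 + abs(np.sin(5 * x[1])) + (x[1] - 1) ** 2
        print(pattern_search(cost, [0.0, 0.0]))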

  13. Obstacles to prior art searching by the trilateral patent offices: empirical evidence from International Search Reports.

    Science.gov (United States)

    Wada, Tetsuo

    Despite many empirical studies having been carried out on examiner patent citations, few have scrutinized the obstacles to prior art searching when adding patent citations during patent prosecution at patent offices. This analysis takes advantage of the longitudinal gap between an International Search Report (ISR) as required by the Patent Cooperation Treaty (PCT) and subsequent national examination procedures. We investigate whether several kinds of distance actually affect the probability that prior art is detected at the time of an ISR; this occurs much earlier than in national phase examinations. Based on triadic PCT applications between 2002 and 2005 for the trilateral patent offices (the European Patent Office, the US Patent and Trademark Office, and the Japan Patent Office) and their family-level citations made by the trilateral offices, we find evidence that geographical distance negatively affects the probability of capture of prior patents in an ISR. In addition, the technological complexity of an application negatively affects the probability of capture, whereas the volume of forward citations of prior art affects it positively. These results demonstrate the presence of obstacles to searching at patent offices, and suggest ways to design work sharing by patent offices, such that the duplication of search costs arises only when patent office search horizons overlap.

  14. Chemical Search Web Utility

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical Search Web Utility is an intuitive web application that allows the public to easily find the chemical that they are interested in using, and which...

  15. 2nd International Conference on Harmony Search Algorithm

    CERN Document Server

    Geem, Zong

    2016-01-01

    The Harmony Search Algorithm (HSA) is one of the most well-known techniques in the field of soft computing, an important paradigm in the science and engineering community.  This volume, the proceedings of the 2nd International Conference on Harmony Search Algorithm 2015 (ICHSA 2015), brings together contributions describing the latest developments in the field of soft computing with a special focus on HSA techniques. It includes coverage of new methods that have potentially immense application in various fields. Contributed articles cover aspects of the following topics related to the Harmony Search Algorithm: analytical studies; improved, hybrid and multi-objective variants; parameter tuning; and large-scale applications.  The book also contains papers discussing recent advances on the following topics: genetic algorithms; evolutionary strategies; the firefly algorithm and cuckoo search; particle swarm optimization and ant colony optimization; simulated annealing; and local search techniques.   This book ...

  16. Applicability of internet search index for asthma admission forecast using machine learning.

    Science.gov (United States)

    Luo, Li; Liao, Chengcheng; Zhang, Fengyi; Zhang, Wei; Li, Chunyang; Qiu, Zhixin; Huang, Debin

    2018-04-15

    This study aimed to determine whether a search index could provide insight into trends in asthma admissions in China. An Internet search index is a powerful tool for monitoring and predicting epidemic outbreaks. However, whether using an Internet search index can significantly improve asthma admission forecasts remains unknown. The long-term goal is to develop a surveillance system that supports early detection of and intervention for asthma and helps avoid shortages of asthma health care resources in advance. In this study, we used a search index combined with air pollution data, weather data, and historical admissions data to forecast asthma admissions using machine learning. Results demonstrated that the best area under the curve achieved in the test set was 0.832, using all of the predictors mentioned earlier. A search index is a powerful predictor in asthma admission forecasting, and a recent search index can, to a certain extent, reflect current asthma admissions with a lag effect. The addition of a real-time, easily accessible search index improves forecasting capability and demonstrates the predictive potential of the search index. Copyright © 2018 John Wiley & Sons, Ltd.
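
    A hedged sketch of this kind of pipeline is shown below: a gradient-boosting classifier predicts high-admission days from a search index plus pollution and weather features and is scored by AUC. All data are synthetic, and the feature set and model choice are assumptions for illustration, not the study's actual configuration.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Hypothetical setup: predict high-admission days from a search index plus
        # air-pollution and weather features (all data synthetic).
        rng = np.random.default_rng(1)
        n = 500
        search_index = rng.gamma(2.0, 1.0, n)
        pm25 = rng.gamma(3.0, 10.0, n)
        temperature = rng.normal(15, 8, n)
        risk = 0.8 * search_index + 0.02 * pm25 - 0.03 * temperature
        high_admissions = (risk + rng.normal(0, 1, n) > np.median(risk)).astype(int)

        X = np.column_stack([search_index, pm25, temperature])
        X_tr, X_te, y_tr, y_te = train_test_split(X, high_admissions, random_state=0)

        model = GradientBoostingClassifier().fit(X_tr, y_tr)
        print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))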

  17. Search Engines: Gateway to a New ``Panopticon''?

    Science.gov (United States)

    Kosta, Eleni; Kalloniatis, Christos; Mitrou, Lilian; Kavakli, Evangelia

    Nowadays, Internet users depend on various search engines in order to find the information they request on the Web. Although most users feel that they are and remain anonymous when they place their search queries, reality proves otherwise. The increasing importance of search engines for locating desired information on the Internet usually leads to considerable inroads into the privacy of users. The scope of this paper is to study the main privacy issues with regard to search engines, such as the anonymisation of search logs and their retention period, and to examine the applicability of European data protection legislation to non-EU search engine providers. Ixquick, a privacy-friendly meta search engine, is presented as an alternative to the privacy-intrusive existing practices of search engines.

  18. Pounding the Payment. [A Job-Search Gaming-Simulation].

    Science.gov (United States)

    Aiken, Rebecca; Lutrick, Angie; Kirk, James J.; Nickerson, Lisa; Wilder, Ginny

    This manual is a gaming simulation that career development professionals can use to promote awareness of and sensitivity to the job search experience encountered by their clientele. Goals of the simulation are to approximate a real life job search experience from different perspectives, while at the same time making it fun and interactive. Players…

  19. Optimal Taxation with On-the-Job Search

    DEFF Research Database (Denmark)

    Bagger, Jesper; Moen, Espen R.; Vejlin, Rune Majlund

    We study the optimal taxation of labor income in the presence of search frictions. Heterogeneous workers undertake costly search off- and on-the-job in order to locate more productive jobs that pay higher wages. More productive workers search harder, resulting in equilibrium sorting where low-type workers are overrepresented in low-wage jobs while high-type workers are overrepresented in high-wage jobs. Absent taxes, worker search effort is efficient, because the social and private gains from search coincide. The optimal tax system balances efficiency and equity concerns at the margin. Equity concerns make it desirable to levy low taxes on (or indeed, subsidize) low-wage jobs including unemployment, and levy high taxes on high-wage jobs. Efficiency concerns limit how much tax an optimal tax system levies on high-paid jobs, as high taxes distort the workers' incentives to search. The model...

  20. Search and the Aging Mind: The Promise and Limits of the Cognitive Control Hypothesis of Age Differences in Search.

    Science.gov (United States)

    Mata, Rui; von Helversen, Bettina

    2015-07-01

    Search is a prerequisite for successful performance in a broad range of tasks ranging from making decisions between consumer goods to memory retrieval. How does aging impact search processes in such disparate situations? Aging is associated with structural and neuromodulatory brain changes that underlie cognitive control processes, which in turn have been proposed as a domain-general mechanism controlling search in external environments as well as memory. We review the aging literature to evaluate the cognitive control hypothesis that suggests that age-related change in cognitive control underlies age differences in both external and internal search. We also consider the limits of the cognitive control hypothesis and propose additional mechanisms such as changes in strategy use and affect that may be necessary to understand how aging affects search. Copyright © 2015 Cognitive Science Society, Inc.

  1. 25 CFR 26.15 - What makes an applicant eligible for Job Placement and Training services?

    Science.gov (United States)

    2010-04-01

    ... SERVICES JOB PLACEMENT AND TRAINING PROGRAM General Applicability § 26.15 What makes an applicant eligible for Job Placement and Training services? You are eligible for services if: (a) You meet the definition... show a need for job training or placement services in order to become gainfully and meaningfully...

  2. Use of a "Balance-Sheet" Procedure to Improve the Quality of Personal Decision Making: A Field Experiment with College Applicants

    Science.gov (United States)

    Mann, Leon

    1972-01-01

    This study tested the effectiveness of a tallying procedure to induce high school seniors to think carefully about considerations relevant to their college choice. The procedure appears to make salient the importance of the decision, helps clarify the merits of the choice and stimulates a search for feasible alternatives. (Author)

  3. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    International Nuclear Information System (INIS)

    Carter, Joshua A.; Agol, Eric

    2013-01-01

    We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance ('smearing'), as would be incurred with traditional algorithms; however, this is at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.

  4. Quantitative evaluation of recall and precision of CAT Crawler, a search engine specialized on retrieval of Critically Appraised Topics

    Directory of Open Access Journals (Sweden)

    Loh Marie

    2004-12-01

    Full Text Available Abstract Background Critically Appraised Topics (CATs) are a useful tool that helps physicians to make clinical decisions as healthcare moves towards the practice of Evidence-Based Medicine (EBM). The fast growing World Wide Web has provided a place for physicians to share their appraised topics online, but an increasing amount of time is needed to find a particular topic within such a rich repository. Methods A web-based application, namely the CAT Crawler, was developed by Singapore's Bioinformatics Institute to allow physicians to adequately access available appraised topics on the Internet. A meta-search engine, as the core component of the application, finds relevant topics following keyword input. The primary objective of the work presented here is to evaluate the quantity and quality of search results obtained from the meta-search engine of the CAT Crawler by comparing them with those obtained from two individual CAT search engines. From the CAT libraries at these two sites, all possible keywords were extracted using a keyword extractor. Of those common to both libraries, ten were randomly chosen for evaluation. All ten were submitted to the two search engines individually, and through the meta-search engine of the CAT Crawler. Search results were evaluated for relevance both by medical amateurs and professionals, and the respective recall and precision were calculated. Results While achieving an identical recall, the meta-search engine showed a precision of 77.26% (±14.45) compared to the individual search engines' 52.65% (±12.0) (p …). Conclusion The results demonstrate the validity of the CAT Crawler meta-search engine approach. The improved precision due to inherent filters underlines the practical usefulness of this tool for clinicians.
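
    The recall and precision comparison reported above can be made concrete with a small helper; the document identifiers below are hypothetical.

        # Recall and precision as used to compare the search engines (toy example;
        # the document IDs below are hypothetical).
        def precision_recall(retrieved, relevant):
            retrieved, relevant = set(retrieved), set(relevant)
            hits = retrieved & relevant
            precision = len(hits) / len(retrieved) if retrieved else 0.0
            recall = len(hits) / len(relevant) if relevant else 0.0
            return precision, recall

        relevant_cats = {"cat01", "cat02", "cat03", "cat04"}
        meta_engine_results = ["cat01", "cat02", "cat03", "cat09"]
        single_engine_results = ["cat01", "cat05", "cat06", "cat02", "cat07", "cat08"]

        print("meta  :", precision_recall(meta_engine_results, relevant_cats))
        print("single:", precision_recall(single_engine_results, relevant_cats))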

  5. Making SharePoint® Chemically Aware™.

    Science.gov (United States)

    Tallapragada, Kartik; Chewning, Joseph; Kombo, David; Ludwick, Beverly

    2012-01-12

    The use of SharePoint® collaboration software for content management has become a critical part of today's drug discovery process. SharePoint 2010 software has laid a foundation that enables researchers to collaborate and search across various content. The amount of data generated as a single compound moves from preclinical discovery to commercialization can easily run to terabytes, so there is a growing demand for a chemically aware search algorithm that supplements SharePoint and enables researchers to query for information in a more intuitive and effective way. Supplementing SharePoint with Chemically Aware™ features thus provides great value to pharmaceutical and biotech companies and makes drug discovery more efficient. Using several tools, we have integrated SharePoint with chemical, compound, and reaction databases, thereby improving the traditional search engine capability and enhancing the user experience. This paper describes the implementation of a Chemically Aware™ system to supplement SharePoint. A Chemically Aware SharePoint (CASP) allows users to tag documents by drawing a structure and associating it with the related content. It also allows the user to search SharePoint software content and internal/external databases by carrying out substructure, similarity, SMILES, and IUPAC name searches. Building on traditional search, CASP takes SharePoint one step further by providing an intuitive GUI that lets researchers base their search on their knowledge of chemistry rather than on textual search. CASP also provides a way to integrate with other systems; for example, a researcher can perform a substructure search on PDF documents with embedded molecular entities. A Chemically Aware™ system supplementing SharePoint is a step towards making the drug discovery process more efficient and also helps researchers to search for information in a more intuitive way. It also helps researchers to find information that was once difficult to find.

  6. Making SharePoint® Chemically Aware™

    Directory of Open Access Journals (Sweden)

    Tallapragada Kartik

    2012-01-01

    Full Text Available Abstract Background The use of SharePoint® collaboration software for content management has become a critical part of today's drug discovery process. SharePoint 2010 software has laid a foundation that enables researchers to collaborate and search across various content. The amount of data generated as a single compound moves from preclinical discovery to commercialization can easily run to terabytes, so there is a growing demand for a chemically aware search algorithm that supplements SharePoint and enables researchers to query for information in a more intuitive and effective way. Supplementing SharePoint with Chemically Aware™ features thus provides great value to pharmaceutical and biotech companies and makes drug discovery more efficient. Using several tools, we have integrated SharePoint with chemical, compound, and reaction databases, thereby improving the traditional search engine capability and enhancing the user experience. Results This paper describes the implementation of a Chemically Aware™ system to supplement SharePoint. A Chemically Aware SharePoint (CASP) allows users to tag documents by drawing a structure and associating it with the related content. It also allows the user to search SharePoint software content and internal/external databases by carrying out substructure, similarity, SMILES, and IUPAC name searches. Building on traditional search, CASP takes SharePoint one step further by providing an intuitive GUI that lets researchers base their search on their knowledge of chemistry rather than on textual search. CASP also provides a way to integrate with other systems; for example, a researcher can perform a substructure search on PDF documents with embedded molecular entities. Conclusion A Chemically Aware™ system supplementing SharePoint is a step towards making the drug discovery process more efficient and also helps researchers to search for information in a more intuitive way. It also helps the
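
    The kind of substructure query CASP performs can be sketched with an open-source cheminformatics toolkit such as RDKit; note that the papers do not state which chemistry engine backs CASP, and the SMILES/SMARTS strings and file names below are hypothetical.

        from rdkit import Chem

        # Substructure matching of the kind a chemically aware search layer performs
        # (RDKit is used here only as an illustrative toolkit; CASP's actual backend
        # is not specified in the record above).
        documents = {
            "doc_aspirin.pdf": "CC(=O)Oc1ccccc1C(=O)O",   # tagged structures (hypothetical)
            "doc_phenol.pdf":  "Oc1ccccc1",
            "doc_ethanol.pdf": "CCO",
        }

        query = Chem.MolFromSmarts("c1ccccc1O")            # phenol-like substructure query

        for name, smiles in documents.items():
            mol = Chem.MolFromSmiles(smiles)
            if mol is not None and mol.HasSubstructMatch(query):
                print("substructure hit:", name)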

  7. The application of the assessment of nuclear accident status in emergency decision-making during nuclear accident

    International Nuclear Information System (INIS)

    Yang Ling

    2011-01-01

    Nuclear accident assessment is one of the bases for emergency decision-making in the event of a nuclear accident at an NPP. Usually, the assessment includes both accident status assessment and consequence assessment. Accident status assessment, and its application in emergency decision-making, is introduced here. (author)

  8. Clinical leadership and nursing explored: A literature search.

    Science.gov (United States)

    Stanley, David; Stanley, Karen

    2017-10-27

    clinical leaders contribute to the health service is central to the application of values-based practice and how clinical leaders impact on innovation, change and making care better. © 2017 John Wiley & Sons Ltd.

  9. An intuitive graphical webserver for multiple-choice protein sequence search.

    Science.gov (United States)

    Banky, Daniel; Szalkai, Balazs; Grolmusz, Vince

    2014-04-10

    Every day tens of thousands of sequence searches and sequence alignment queries are submitted to webservers. The capitalized word "BLAST" has become a verb, describing the act of performing sequence search and alignment. However, if one needs to search for sequences that contain, for example, two hydrophobic and three polar residues at five given positions, forming the query on the most frequently used webservers is difficult. Some servers support the formation of queries with regular expressions, but most users are unfamiliar with their syntax. Here we present an intuitive, easily applicable webserver, the Protein Sequence Analysis server, that allows the formation of multiple-choice queries by simply drawing the residues to their positions; if more than one residue is drawn to the same position, the residues are stacked on the user interface, indicating the multiple choice at the given position. This computer-game-like interface is natural and intuitive, and the coloring of the residues makes it possible to form queries requiring not just certain amino acids at the given positions, but also small nonpolar, negatively charged, hydrophobic, positively charged, or polar ones. The webserver is available at http://psa.pitgroup.org. Copyright © 2014 Elsevier B.V. All rights reserved.
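
    The example query from the abstract, two hydrophobic and three polar residues at five positions, corresponds to a regular expression of the following shape, assuming for simplicity that the five positions are consecutive; the residue class definitions and test sequences are illustrative assumptions.

        import re

        # A multiple-choice position query written as a regular expression:
        # two hydrophobic residues followed by three polar residues.
        hydrophobic = "[AVLIMFWYC]"     # assumed hydrophobic residue class
        polar = "[STNQ]"                # assumed polar residue class
        pattern = re.compile(hydrophobic * 2 + polar * 3)

        sequences = {
            "seq1": "MKTAVILQSTNQA",    # hypothetical sequences
            "seq2": "MKGGGGGGGGGGG",
        }
        for name, seq in sequences.items():
            match = pattern.search(seq)
            print(name, "match at", match.start() if match else None)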

  10. GRASP (Greedy Randomized Adaptive Search Procedures) applied to optimization of petroleum products distribution in pipeline networks; GRASP (Greedy Randomized Adaptative Search Procedures) aplicado ao 'scheduling' de redes de distribuicao de petroleo e derivados

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Viviane Cristhyne Bini; Arruda, Lucia Valeria Ramos de; Yamamoto, Lia [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2008-07-01

    Planning and scheduling of pipeline network operations aim at the most efficient use of resources, resulting in better performance of the network. A petroleum distribution pipeline network is composed of refineries, sources and/or storage parks connected by a set of pipelines, which carry out the transportation of petroleum and derivatives among adjacent areas. In real scenarios this is a hard combinatorial problem, which makes solution methodologies with low computational time necessary. This work aims to obtain solutions that meet the demands and minimize the number of batch fragmentations in the operations that send products through the pipelines, in a simplified model of a real network, through application of the local search metaheuristic GRASP. GRASP does not depend on solutions from previous iterations and works in a randomized way, so it allows the search to explore a broader and more diversified search space. Using GRASP does not demand complex calculations, even in the construction stage, which requires the most computational effort; this allows good solutions to be obtained relatively quickly. Application of GRASP to the scheduling of the operations of this network produced feasible solutions in low computational time. (author)
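
    The GRASP loop itself, greedy randomized construction with a restricted candidate list followed by local search, can be sketched on a toy selection problem as below; the item data, capacity, and neighbourhood move are invented, and the real pipeline scheduling model is far richer.

        import random

        # Skeleton of GRASP: randomized greedy construction plus local search,
        # repeated from multiple starts, shown on a toy value-vs-cost selection problem.
        ITEMS = {f"batch{i}": (random.Random(i).randint(1, 9),        # value
                               random.Random(i + 100).randint(1, 5))  # cost
                 for i in range(12)}
        CAPACITY, ALPHA = 15, 0.3

        def construct(rng):
            chosen, cost = set(), 0
            while True:
                candidates = [(n, v) for n, (v, c) in ITEMS.items()
                              if n not in chosen and cost + c <= CAPACITY]
                if not candidates:
                    return chosen
                best = max(v for _, v in candidates)
                rcl = [n for n, v in candidates if v >= (1 - ALPHA) * best]  # restricted candidate list
                pick = rng.choice(rcl)
                chosen.add(pick)
                cost += ITEMS[pick][1]

        def value(sol):
            return sum(ITEMS[n][0] for n in sol)

        def local_search(sol):
            improved = True
            while improved:
                improved = False
                for out in list(sol):
                    for n, (v, c) in ITEMS.items():
                        new_cost = sum(ITEMS[m][1] for m in sol) - ITEMS[out][1] + c
                        if n not in sol and new_cost <= CAPACITY and v > ITEMS[out][0]:
                            sol.remove(out)
                            sol.add(n)
                            improved = True
                            break
                    if improved:
                        break
            return sol

        rng = random.Random(42)
        best_solution = max((local_search(construct(rng)) for _ in range(20)), key=value)
        print(sorted(best_solution), value(best_solution))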

  11. The Use of Web Search Engines in Information Science Research.

    Science.gov (United States)

    Bar-Ilan, Judit

    2004-01-01

    Reviews the literature on the use of Web search engines in information science research, including: ways users interact with Web search engines; social aspects of searching; structure and dynamic nature of the Web; link analysis; other bibliometric applications; characterizing information on the Web; search engine evaluation and improvement; and…

  12. Search Engine Liability for Copyright Infringement

    Science.gov (United States)

    Fitzgerald, B.; O'Brien, D.; Fitzgerald, A.

    The chapter provides a broad overview to the topic of search engine liability for copyright infringement. In doing so, the chapter examines some of the key copyright law principles and their application to search engines. The chapter also provides a discussion of some of the most important cases to be decided within the courts of the United States, Australia, China and Europe regarding the liability of search engines for copyright infringement. Finally, the chapter will conclude with some thoughts for reform, including how copyright law can be amended in order to accommodate and realise the great informative power which search engines have to offer society.

  13. Utilization of a radiology-centric search engine.

    Science.gov (United States)

    Sharpe, Richard E; Sharpe, Megan; Siegel, Eliot; Siddiqui, Khan

    2010-04-01

    Internet-based search engines have become a significant component of medical practice. Physicians increasingly rely on information available from search engines as a means to improve patient care, provide better education, and enhance research. Specialized search engines have emerged to more efficiently meet the needs of physicians. Details about the ways in which radiologists utilize search engines have not been documented. The authors categorized every 25th search query in a radiology-centric vertical search engine by radiologic subspecialty, imaging modality, geographic location of access, time of day, use of abbreviations, misspellings, and search language. Musculoskeletal and neurologic imaging were the most frequently searched subspecialties. The least frequently searched were breast imaging, pediatric imaging, and nuclear medicine. Magnetic resonance imaging and computed tomography were the most frequently searched modalities. A majority of searches were initiated in North America, but all continents were represented. Searches occurred 24 h/day in converted local times, with a majority occurring during the normal business day. Misspellings and abbreviations were common. Almost all searches were performed in English. Search engine utilization trends are likely to mirror trends in diagnostic imaging in the region from which searches originate. Internet searching appears to function as a real-time clinical decision-making tool, a research tool, and an educational resource. A more thorough understanding of search utilization patterns can be obtained by analyzing phrases as actually entered as well as the geographic location and time of origination. This knowledge may contribute to the development of more efficient and personalized search engines.

  14. MuZeeker - Adapting a music search engine for mobile phones

    DEFF Research Database (Denmark)

    Larsen, Jakob Eg; Halling, Søren Christian; Sigurdsson, Magnus Kristinn

    2010-01-01

    We describe MuZeeker, a search engine with domain knowledge based on Wikipedia. MuZeeker enables the user to refine a search in multiple steps by means of category selection. In the present version we focus on multimedia search related to music and we present two prototype search applications (web-based and mobile) and discuss the issues involved in adapting the search engine for mobile phones. A category based filtering approach enables the user to refine a search through relevance feedback by category selection instead of typing additional text, which is hypothesized to be an advantage in the mobile MuZeeker application. We report from two usability experiments using the think aloud protocol, in which N=20 participants performed tasks using MuZeeker and a customized Google search engine. In both experiments web-based and mobile user interfaces were used. The experiment shows that participants are capable

  15. Mastering data-intensive collaboration and decision making research and practical applications in the dicode project

    CERN Document Server

    2014-01-01

    This book reports on cutting-edge research carried out within the context of the EU-funded Dicode project, which aims at facilitating and augmenting collaboration and decision making in data-intensive and cognitively complex settings. Whenever appropriate, Dicode builds on prominent high-performance computing paradigms and large data processing technologies to meaningfully search, analyze, and aggregate data from diverse, extremely large, and rapidly evolving sources. The Dicode approach and services are fully explained, and particular emphasis is placed on deepening insights regarding the exploitation of big data, as well as on collaboration and issues relating to sense-making support. Building on current advances, the solution developed in the Dicode project brings together the reasoning capabilities of both the machine and humans. It can be viewed as an innovative “workbench” incorporating and orchestrating a set of interoperable services that reduce the data intensiveness and complexity overload at cr...

  16. Automatic examination of nuclear reactor vessels with focused search units. Status and typical application to inspections performed in accordance with ASME code

    International Nuclear Information System (INIS)

    Verger, B.; Saglio, R.

    1981-05-01

    The use of focused search units in nuclear reactor vessel examinations has significantly increased the capability of flaw indication detection and characterization. These search units especially allow a more accurate sizing of indications and a more efficient follow up of their history. In this aspect, they are a unique tool in the area of safety and reliability of installations. It was this type of search unit which was adopted to perform the examinations required within the scope of inservice inspections of all P.W.R. reactors of the French nuclear program. This paper summarizes the results gathered through the 41 examinations performed over the last five years. A typical application of focused search units in automated inspections performed in accordance with ASME code requirements on P.W.R. nuclear reactor vessels is then described.

  17. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  18. Quantum-circuit model of Hamiltonian search algorithms

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    We analyze three different quantum search algorithms, namely, the traditional circuit-based Grover's algorithm, its continuous-time analog by Hamiltonian evolution, and the quantum search by local adiabatic evolution. We show that these algorithms are closely related in the sense that they all perform a rotation, at a constant angular velocity, from a uniform superposition of all states to the solution state. This makes it possible to implement the two Hamiltonian-evolution algorithms on a conventional quantum circuit, while keeping the quadratic speedup of Grover's original algorithm. It also clarifies the link between the adiabatic search algorithm and Grover's algorithm
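
    The "rotation at constant angular velocity" picture can be made concrete with the standard textbook relations for Grover search over N items with a single marked state (these are generic results, not formulas taken from this paper):

        \sin\theta = \frac{1}{\sqrt{N}}, \qquad
        |\psi_k\rangle = \cos\bigl((2k+1)\theta\bigr)\,|\text{non-target}\rangle + \sin\bigl((2k+1)\theta\bigr)\,|\text{target}\rangle, \qquad
        k_{\text{opt}} \approx \frac{\pi}{4}\sqrt{N}.

    Each iteration advances the state by an angle 2\theta towards the target, so roughly (\pi/4)\sqrt{N} iterations suffice; this quadratic speedup is what the Hamiltonian-evolution and adiabatic variants preserve.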

  19. Search and Recommendation

    DEFF Research Database (Denmark)

    Bogers, Toine

    2014-01-01

    In just a little over half a century, the field of information retrieval has experienced spectacular growth and success, with IR applications such as search engines becoming a billion-dollar industry in the past decades. Recommender systems have seen an even more meteoric rise to success with wide...

  20. Robots for hazardous duties: Military, space, and nuclear facility applications. (Latest citations from the NTIS bibliographic database). Published Search

    International Nuclear Information System (INIS)

    1993-09-01

    The bibliography contains citations concerning the design and application of robots used in place of humans where the environment could be hazardous. Military applications include autonomous land vehicles, robotic howitzers, and battlefield support operations. Space operations include docking, maintenance, mission support, and intra-vehicular and extra-vehicular activities. Nuclear applications include operations within the containment vessel, radioactive waste operations, fueling operations, and plant security. Many of the articles reference control techniques and the use of expert systems in robotic operations. Applications involving industrial manufacturing, walking robots, and robot welding are cited in other published searches in this series. (Contains a minimum of 183 citations and includes a subject term index and title list.)

  1. CONFORMATIONAL SEARCH BY POTENTIAL-ENERGY ANNEALING - ALGORITHM AND APPLICATION TO CYCLOSPORINE-A

    NARCIS (Netherlands)

    VANSCHAIK, RC; VANGUNSTEREN, WF; BERENDSEN, HJC

    A major problem in modelling (biological) macromolecules is the search for low-energy conformations. The complexity of a conformational search problem increases exponentially with the number of degrees of freedom, which means that a systematic search can only be performed for very small structures.

  2. Using a Google Search Appliance (GSA) to search digital library collections: a case study of the INIS Collection Search

    Directory of Open Access Journals (Sweden)

    Dobrica Savic

    2014-05-01

    The International Nuclear Information System (INIS) hosts one of the world's largest collections of published information on the peaceful uses of nuclear science and technology. It offers online access to a unique collection of 3.6 million bibliographic records and 320,000 full-texts of non-conventional (grey) literature. This large digital library collection suffered from most of the well-known shortcomings of the classic library catalogue. Searching was complex and complicated, required some training in using Boolean logic, full-text searching was not an option, and the response time was slow. An opportune moment came with the retirement of the previous catalogue software and with the adoption of Google Search Appliance (GSA) as an organization-wide search engine standard. INIS was quick to realize the great potential of using such a well-known application as a replacement for its online catalogue, and this paper presents the advantages and disadvantages encountered during three years of GSA use. Based on specific INIS-based practice and experience, this paper also offers some guidelines on ways to improve classic collections of millions of bibliographic and full-text documents, while achieving multiple benefits such as increased use, accessibility, usability, expandability and improving the user search and retrieval experience.

  3. Anonymous Search Histories Featuring Personalized Advertisement - Balancing Privacy with Economic Interests

    OpenAIRE

    Thorben Burghardt; Klemens Bohm; Achim Guttmann; Chris Clifton

    2011-01-01

    Search engines are key to finding information on the web. Search is presently free for users and is financed by targeted advertising. Today, the current search terms determine ad placement. In the near future, search-engine providers will make use of detailed user profiles for better ad placement. This puts user privacy at risk. Anonymizing search histories, which is a solution in principle, gives rise to a trade-off between privacy and the usability of the data for ad placement. This paper stu...

  4. A new linguistic aggregation operator and its application to multiple attribute decision making

    Directory of Open Access Journals (Sweden)

    Jibin Lan

    2015-12-01

    Full Text Available In this paper, a new linguistic aggregation operator for a linguistic environment is established and its desirable properties (monotonicity, focus effect, idempotency, commutativity and boundedness) are studied. Then, a new restricted ordering relation on n-dimensional linguistic scales is proposed which satisfies strict Pareto dominance and is restricted by a weighting vector. A practical multiple attribute decision making methodology for an uncertain linguistic environment is developed based on the proposed operator. An example is given to illustrate the rationality and validity of the new approach in a decision making application.

  5. Handbook on Decision Making Vol 2 Risk Management in Decision Making

    CERN Document Server

    Lu, Jie; Zhang, Guangquan

    2012-01-01

    This book presents innovative theories, methodologies, and techniques in the field of risk management and decision making. It introduces new research developments and provides a comprehensive image of their potential applications to readers interested in the area. The collection includes: computational intelligence applications in decision making, multi-criteria decision making under risk, risk modelling,forecasting and evaluation, public security and community safety, risk management in supply chain and other business decision making, political risk management and disaster response systems. The book is directed to academic and applied researchers working on risk management, decision making, and management information systems.

  6. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF-enhanced audio system (3D) and a panning audio system … with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  7. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF-enhanced audio system (3D) and a panning audio system … with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  8. GPU Based N-Gram String Matching Algorithm with Score Table Approach for String Searching in Many Documents

    Science.gov (United States)

    Srinivasa, K. G.; Shree Devi, B. N.

    2017-10-01

    String searching in documents has become a tedious task with the evolution of Big Data. The generation of large data sets demands a high-performance search algorithm in areas such as text mining, information retrieval and many others. The popularity of GPUs for general-purpose computing has been increasing for various applications, so it is of great interest to exploit the thread-level parallelism of a GPU to provide a high-performance search algorithm. This paper proposes an optimized new approach to the N-gram model for string search across a number of lengthy documents, together with its GPU implementation. The algorithm exploits GPGPUs for searching strings in many documents, employing character-level N-gram matching with a parallel Score Table approach and search using the CUDA API. The new Score Table approach, used to store N-gram frequencies per document, makes the search independent of the document's length and allows faster access to the frequency values, thus decreasing the search complexity. The extensive threading capability of a GPU is exploited to enable parallel pre-processing of trigrams in a document for Score Table creation and parallel search over a huge number of documents, thus speeding up the whole search process even for a large pattern size. Experiments were carried out on many documents of varied length, with search strings from the standard Lorem Ipsum text, on NVIDIA's GeForce GT 540M GPU with 96 cores. The results show that the parallel approach to Score Table creation and searching gives a good speedup over the same approach executed serially.
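
    A serial Python sketch of the score-table idea is given below: build a character-trigram frequency table per document, then score a query by its trigram hits. The documents and query are toy data; the paper's contribution is the CUDA-parallel version of these two steps.

        from collections import Counter

        # Serial sketch of the score-table idea: index character trigrams per document,
        # then score a query by how many of its trigrams each document contains.
        def trigrams(text):
            return [text[i:i + 3] for i in range(len(text) - 2)]

        documents = {
            "doc1": "lorem ipsum dolor sit amet",
            "doc2": "consectetur adipiscing elit sed do",
        }
        score_tables = {name: Counter(trigrams(text)) for name, text in documents.items()}

        query = "ipsum dolor"
        for name, table in score_tables.items():
            score = sum(table[g] for g in trigrams(query))   # frequency-weighted trigram hits
            print(name, score)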

  9. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation

    Directory of Open Access Journals (Sweden)

    Rognes Torbjørn

    2011-06-01

    Full Text Available Abstract Background The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. Results A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Conclusions Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.

  10. Cooperative mobile agents search using beehive partitioned structure and Tabu Random search algorithm

    Science.gov (United States)

    Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.

    2013-05-01

    In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has developed many applications recently that would benefit from the use of the approach presented in this work, including: search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm method in combination with hexagonal partitioning is simulated, analyzed, and advantages of this approach are presented and discussed.
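
    A toy version of the movement rule, a tabu-constrained random walk on a hexagonal grid in axial coordinates, is sketched below; the coordinates, tabu-list length, and single-agent setting are simplifying assumptions, and the cooperative multi-agent coordination is not modelled.

        import random

        # Toy tabu random walk on a hexagonal (axial-coordinate) grid: the agent moves to a
        # random neighbouring cell that is not on its tabu list of recently visited cells.
        HEX_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

        def tabu_random_walk(start, target, tabu_size=10, max_steps=2000, seed=0):
            rng = random.Random(seed)
            cell, tabu = start, [start]
            for step in range(max_steps):
                if cell == target:
                    return step
                neighbours = [(cell[0] + dq, cell[1] + dr) for dq, dr in HEX_DIRECTIONS]
                candidates = [c for c in neighbours if c not in tabu] or neighbours
                cell = rng.choice(candidates)
                tabu.append(cell)
                tabu = tabu[-tabu_size:]          # keep only the most recent cells tabu
            return None                           # target not reached within max_steps

        print("steps to target:", tabu_random_walk((0, 0), (5, -3)))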

  11. Optimal Semi-Adaptive Search With False Targets

    Science.gov (United States)

    2017-12-01

    … constraints on employment of physical search assets will involve discrete approximations to the continuous solutions given by these techniques. … We optimize in the continuous case, to be able then to make the best possible discrete approximations if needed, given the constraints of a

  12. Disciplined Decision Making in an Interdisciplinary Environment: Some Implications for Clinical Applications of Statistical Process Control.

    Science.gov (United States)

    Hantula, Donald A.

    1995-01-01

    Clinical applications of statistical process control (SPC) in human service organizations are considered. SPC is seen as providing a standard set of criteria that serves as a common interface for data-based decision making, which may bring decision making under the control of established contingencies rather than the immediate contingencies of…

  13. Reader error, object recognition, and visual search

    Science.gov (United States)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  14. Second Workshop on Supporting Complex Search Tasks

    NARCIS (Netherlands)

    Belkin, Nicholas J.; Bogers, Toine; Kamps, Jaap; Kelly, Diane; Koolen, Marijn; Yilmaz, Emine

    2017-01-01

    There is broad consensus in the field of IR that search is complex in many use cases and applications, both on the Web and in domain specific collections, and both professionally and in our daily life. Yet our understanding of complex search tasks, in comparison to simple look up tasks, is

  15. Searching for Heavy Photons with Detached Verices in the Heavy Photon Search Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Szumila-Vance, Holly [Old Dominion Univ., Norfolk, VA (United States); Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States)

    2017-08-01

    The Jefferson Lab Heavy Photon Search (HPS) experiment is searching for a hypothetical massive particle called the heavy photon which could mediate a dark electromagnetic-type force. If heavy photons kinetically mix with Standard Model photons, they may be radiated by electrons scattering from a heavy nucleus and then decay to e+e- pairs. HPS uniquely searches for heavy photons that either decay at the target or a measurable distance after. The experiment utilizes a silicon vertex tracker (SVT) for momentum and vertex reconstruction, together with an electromagnetic calorimeter for measuring particle energies and triggering events. The HPS experiment took its first data during the spring 2015 engineering run using a 1 GeV electron beam incident on a tungsten target and its second data in the spring of 2016 at a beam energy of 2.3 GeV. The 2015 run obtained two days of production data that was used for the first physics results. The analysis of the data was conducted as a blinded analysis by tuning cuts on 10% of the data. This dissertation discusses the displaced vertex search for heavy photons in the 2015 engineering run. It describes the theoretical motivation for looking for heavy photons and provides an overview of the HPS experimental design and performance. The performance details of the experiment are primarily derived from the 2015 engineering run with some discussion from the higher energy running in 2016. This dissertation further discusses the cuts used to optimize the displaced vertex search and the results of the search. The displaced vertex search did not set a limit on the heavy photon but did validate the methodology for conducting the search. Finally, we used the full data set to make projections and guide future analyses.

  16. An automated full-symmetry Patterson search method

    International Nuclear Information System (INIS)

    Rius, J.; Miravitlles, C.

    1987-01-01

    A full-symmetry Patterson search method is presented that performs a molecular coarse rotation search in vector space and orientation refinement using the σ function. The oriented molecule is positioned using the fast translation function τ0, which is based on the automated interpretation of τ projections using the sum function. This strategy reduces the number of Patterson-function values to be stored in the rotation search, and the use of the τ0 function minimizes the required time for the development of all probable rotation search solutions. The application of this method to five representative test examples is shown. (orig.)

  17. The Role of Libraries in the Search for Educational Excellence.

    Science.gov (United States)

    Breivik, Patricia Senn

    1987-01-01

    Discusses ways in which libraries can make a major contribution to the search for educational excellence and urges librarians to make a concerted effort to capture the attention of educational leaders. Four references are listed. (MES)

  18. The use of a genetic algorithm-based search strategy in geostatistics: application to a set of anisotropic piezometric head data

    Science.gov (United States)

    Abedini, M. J.; Nasseri, M.; Burn, D. H.

    2012-04-01

    In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.

  19. PRINCIPLE OF POINT MAKING OF MUTUALLY ACCEPTABLE MULTIPROJECTION DECISION

    Directory of Open Access Journals (Sweden)

    Olga N. Lapaeva

    2015-01-01

    Full Text Available The principle of point-wise construction of a mutually acceptable multi-projection decision in economics is set forth in the article. The principle envisages each stakeholder searching for its best variant, with the final decision formed by intersecting the individual sets.

  20. Policy implications for familial searching.

    Science.gov (United States)

    Kim, Joyce; Mammo, Danny; Siegel, Marni B; Katsanis, Sara H

    2011-11-01

    In the United States, several states have made policy decisions regarding whether and how to use familial searching of the Combined DNA Index System (CODIS) database in criminal investigations. Familial searching pushes DNA typing beyond merely identifying individuals to detecting genetic relatedness, an application previously reserved for missing persons identifications and custody battles. The intentional search of CODIS for partial matches to an item of evidence offers law enforcement agencies a powerful tool for developing investigative leads, apprehending criminals, revitalizing cold cases and exonerating wrongfully convicted individuals. As familial searching involves a range of logistical, social, ethical and legal considerations, states are now grappling with policy options for implementing familial searching to balance crime fighting with its potential impact on society. When developing policies for familial searching, legislators should take into account the impact of familial searching on select populations and the need to minimize personal intrusion on relatives of individuals in the DNA database. This review describes the approaches used to narrow a suspect pool from a partial match search of CODIS and summarizes the economic, ethical, logistical and political challenges of implementing familial searching. We examine particular US state policies and the policy options adopted to address these issues. The aim of this review is to provide objective background information on the controversial approach of familial searching to inform policy decisions in this area. Herein we highlight key policy options and recommendations regarding effective utilization of familial searching that minimize harm to and afford maximum protection of US citizens.

  1. Geometric Models for Collaborative Search and Filtering

    Science.gov (United States)

    Bitton, Ephrat

    2011-01-01

    This dissertation explores the use of geometric and graphical models for a variety of information search and filtering applications. These models provide an intuitive understanding of the problem domains as well as computational efficiencies for our solution approaches. We begin by considering a search and rescue scenario where both…

  2. Simulation Optimization by Genetic Search: A Comprehensive Study with Applications to Production Management

    National Research Council Canada - National Science Library

    Yunker, James

    2003-01-01

    In this report, a relatively new simulation optimization technique, the genetic search, is compared to two more established simulation techniques: the pattern search and the response surface methodology search...

  3. Decision-making in nursing practice: An integrative literature review.

    Science.gov (United States)

    Nibbelink, Christine W; Brewer, Barbara B

    2018-03-01

    To identify and summarise factors and processes related to registered nurses' patient care decision-making in medical-surgical environments. A secondary goal of this literature review was to determine whether medical-surgical decision-making literature included factors that appeared to be similar to concepts and factors in naturalistic decision making (NDM). Decision-making in acute care nursing requires an evaluation of many complex factors. While decision-making research in acute care nursing is prevalent, errors in decision-making continue to lead to poor patient outcomes. Naturalistic decision making may provide a framework for further exploring decision-making in acute care nursing practice. A better understanding of the literature is needed to guide future research to more effectively support acute care nurse decision-making. PubMed and CINAHL databases were searched, and research meeting criteria was included. Data were identified from all included articles, and themes were developed based on these data. Key findings in this review include nursing experience and associated factors; organisation and unit culture influences on decision-making; education; understanding patient status; situation awareness; and autonomy. Acute care nurses employ a variety of decision-making factors and processes and informally identify experienced nurses to be important resources for decision-making. Incorporation of evidence into acute care nursing practice continues to be a struggle for acute care nurses. This review indicates that naturalistic decision making may be applicable to decision-making nursing research. Experienced nurses bring a broad range of previous patient encounters to their practice influencing their intuitive, unconscious processes which facilitates decision-making. Using naturalistic decision making as a conceptual framework to guide research may help with understanding how to better support less experienced nurses' decision-making for enhanced patient

  4. Environmental applications of biosurfactants: recent advances.

    Science.gov (United States)

    Pacwa-Płociniczak, Magdalena; Płaza, Grażyna A; Piotrowska-Seget, Zofia; Cameotra, Swaranjit Singh

    2011-01-18

    Increasing public awareness of environmental pollution influences the search and development of technologies that help in clean up of organic and inorganic contaminants such as hydrocarbons and metals. An alternative and eco-friendly method of remediation technology of environments contaminated with these pollutants is the use of biosurfactants and biosurfactant-producing microorganisms. The diversity of biosurfactants makes them an attractive group of compounds for potential use in a wide variety of industrial and biotechnological applications. The purpose of this review is to provide a comprehensive overview of advances in the applications of biosurfactants and biosurfactant-producing microorganisms in hydrocarbon and metal remediation technologies.

  5. SUSY searches in early CMS data

    International Nuclear Information System (INIS)

    Tricomi, A

    2008-01-01

    In the first year of data taking at LHC, the CMS experiment expects to collect about 1 fb-1 of data, which makes the first searches for new phenomena possible. All such searches, however, require measurement of the SM background and a detailed understanding of the detector performance, reconstruction algorithms and triggering. CMS efforts are therefore directed at designing a realistic analysis plan in preparation for data taking. In this paper, the CMS perspectives and analysis strategies for Supersymmetry (SUSY) discovery with early data are presented.

  6. Neutral Supersymmetric Higgs Boson Searches

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, Stephen Luke [Imperial College, London (United Kingdom)

    2008-07-01

    In some Supersymmetric extensions of the Standard Model, including the Minimal Supersymmetric Standard Model (MSSM), the coupling of Higgs bosons to b-quarks is enhanced. This enhancement makes the associated production of the Higgs with b-quarks an interesting search channel for the Higgs and Supersymmetry at D0. The identification of b-quarks, both online and offline, is essential to this search effort. This thesis describes the author's involvement in the development of both types of b-tagging and in the application of these techniques to the MSSM Higgs search. Work was carried out on the Level-3 trigger b-tagging algorithms. The impact parameter (IP) b-tagger was retuned and the effects of increased instantaneous luminosity on the tagger were studied. An extension of the IP-tagger to use the z-tracking information was developed. A new b-tagger using secondary vertices was developed and commissioned. A tool was developed to allow the use of large multi-run samples for trigger studies involving b-quarks. Offline, a neural network (NN) b-tagger was trained combining the existing offline lifetime based b-tagging tools. The efficiency and fake rate of the NN b-tagger were measured in data and MC. This b-tagger was internally reviewed and certified by the Collaboration and now provides the official b-tagging for all analyses using the Run IIa dataset at D0. A search was performed for neutral MSSM Higgs bosons decaying to a bb̄ pair and produced in association with one or more b-quarks. Limits are set on the cross-section times the branching ratio for such a process. The limits were interpreted in various MSSM scenarios. This analysis uses the NN b-tagger and was the first to use this tool. The analysis also relies on triggers using the Level-3 IP b-tagging tool described previously. A likelihood discriminant was used to improve the analysis and a neural network was developed to cross-check this technique. The result of the analysis has been submitted to PRL

  7. Application of fuzzy inference system to increase efficiency of management decision-making in agricultural enterprises

    OpenAIRE

    Balanovskаya, Tetiana Ivanovna; Boretska, Zoreslava Petrovna

    2014-01-01

    Application of a fuzzy inference system to increase the efficiency of management decision-making in agricultural enterprises. Theoretical and methodological issues and practical recommendations on improving management decision-making in agricultural enterprises to increase their competitiveness are elaborated and developed in the article. A simulation example of a quality management system for agricultural products based on the theory of fuzzy sets and fuzzy logic is proposed...

  8. Ultrasonic inspection technology development and search units design examples of practical applications

    CERN Document Server

    Brook, Mark V

    2012-01-01

    "Ultrasonic testing is a relatively new branch of science and industry. The development of ultrasonic testing started in the late 1920s. At the beginning, the fundamentals of this method were borrowed from basic physics, geometrical and wave optics, acoustics and seismology. Later it became clear that some of these theories and calculation methods could not always explain the phenomena observed in many specific cases of ultrasonic testing. Without knowing the nuances of the ultrasonic wave propagation in the test object it is impossible to design effective inspection technique and search units for it realization. This book clarifies the theoretical differences of ultrasonics from the other wave propagation theories presenting both basics of physics in the wave propagation, elementary mathematic and advanced practical applications. Almost every specific technique presented in this book is proofed by actual experimental data and examples of calculations"--

  9. Content-based Music Search and Recommendation System

    Science.gov (United States)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has raised the cost to users of finding music data that suits their preferences in such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By exploiting the visualization of the feature space of music and the visualization of the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system shows the user information obtained from the user profile when searching music data and information obtained from the feature space of music when editing the user profile.

  10. Geochemical Exploration Techniques Applicable in the Search for Copper Deposits

    Science.gov (United States)

    Chaffee, Maurice A.

    1975-01-01

    media. Samples of ice and snow have been used for limited geochemical surveys. Both geobotanical and biogeochemical surveys have been successful in locating copper deposits in many parts of the world. Micro-organisms, including bacteria and algae, are other unproved media that should be studied. Animals can be used in geochemical-prospecting programs. Dogs have been used quite successfully to sniff out hidden and exposed sulfide minerals. Termite mounds are commonly composed of subsurface material, but have not as yet proved to be useful in locating buried mineral deposits. Animal tissue and waste products are essentially unproved but potentially valuable sampling media. Knowledge of the location of areas where trace-element-associated diseases in animals and man are endemic, as well as a better understanding of these diseases, may aid in identifying regions that are enriched in or depleted of various elements, including copper. Results of analyses of gases in the atmosphere are proving valuable in mineral-exploration surveys. Studies involving metallic compounds exhaled by plants into the atmosphere and of particulate matter suspended in the atmosphere are reviewed; these methods may become important in the future. Remote-sensing techniques are useful for making indirect measurements of geochemical responses. Two techniques applicable to geochemical exploration are neutron-activation analysis and gamma-ray spectrometry. Aerial photography is especially useful in vegetation surveys. Radar imagery is an unproved but potentially valuable method for use in studies of vegetation in perpetually clouded regions. With the advent of modern computers, many new techniques, such as correlation analysis, regression analysis, discriminant analysis, factor analysis, cluster analysis, trend-surface analysis, and moving-average analysis can be applied to geochemical data sets. Selective use of these techniques can provide new insights into the interpretatio

  11. Application of Neural Networks to Higgs Boson Search

    Czech Academy of Sciences Publication Activity Database

    Hakl, František; Hlaváček, M.; Kalous, R.

    2003-01-01

    Roč. 502, - (2003), s. 489-491 ISSN 0168-9002 R&D Projects: GA MPO RP-4210/69/97 Institutional research plan: AV0Z1030915 Keywords: neural networks * Higgs search * genetic optimization Subject RIV: BA - General Mathematics Impact factor: 1.166, year: 2003

  12. Information search with situation-specific reward functions

    Directory of Open Access Journals (Sweden)

    Bjorn Meder

    2012-03-01

    Full Text Available can strongly conflict with the goal of obtaining information for improving payoffs. Two environments with such a conflict were identified through computer optimization. Three subsequent experiments investigated people's search behavior in these environments. Experiments 1 and 2 used a multiple-cue probabilistic category-learning task to convey environmental probabilities. In a subsequent search task subjects could query only a single feature before making a classification decision. The crucial manipulation concerned the search-task reward structure. The payoffs corresponded either to accuracy, with equal rewards associated with the two categories, or to an asymmetric payoff function, with different rewards associated with each category. In Experiment 1, in which learning-task feedback corresponded to the true category, people later preferentially searched the accuracy-maximizing feature, whether or not this would improve monetary rewards. In Experiment 2, an asymmetric reward structure was used during learning. Subjects searched the reward-maximizing feature when asymmetric payoffs were preserved in the search task. However, if search-task payoffs corresponded to accuracy, subjects preferentially searched a feature that was suboptimal for reward and accuracy alike. Importantly, this feature would have been most useful, under the learning-task payoff structure. Experiment 3 found that, if words and numbers are used to convey environmental probabilities, neither reward nor accuracy consistently predicts search. These findings emphasize the necessity of taking into account people's goals and search-and-decision processes during learning, thereby challenging current models of information search.

  13. A Variable Neighborhood Search Algorithm for the Leather Nesting Problem

    Directory of Open Access Journals (Sweden)

    Cláudio Alves

    2012-01-01

    Full Text Available The leather nesting problem is a cutting and packing optimization problem that consists in finding the best layout for a set of irregular pieces within a natural leather hide with an irregular surface and contour. In this paper, we address a real application of this problem related to the production of car seats in the automotive industry. The high quality requirements imposed on these products combined with the heterogeneity of the leather hides make the problem very complex to solve in practice. Very few results are reported in the literature for the leather nesting problem. Furthermore, the majority of the approaches impose some additional constraints to the layouts related to the particular application that is considered. In this paper, we describe a variable neighborhood search algorithm for the general leather nesting problem. To evaluate the performance of our approaches, we conducted an extensive set of computational experiments on real instances. The results of these experiments are reported at the end of the paper.
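
    The abstract does not reproduce the algorithm itself, so the following is a generic reduced-VNS skeleton (shaking across a list of progressively wider neighborhoods, accepting only improvements). The toy permutation problem and its cost function are invented stand-ins for the real leather nesting problem, which involves irregular pieces and hide quality zones.

        # Generic reduced variable neighborhood search skeleton (illustrative only).
        import random

        def vns(initial, neighborhoods, cost, max_iters=200):
            """neighborhoods: functions mapping a solution to a random neighbour."""
            best, best_cost = initial, cost(initial)
            for _ in range(max_iters):
                k = 0
                while k < len(neighborhoods):
                    candidate = neighborhoods[k](best)        # shake in neighbourhood k
                    candidate_cost = cost(candidate)
                    if candidate_cost < best_cost:            # improvement: restart at k = 0
                        best, best_cost = candidate, candidate_cost
                        k = 0
                    else:                                     # no improvement: widen neighbourhood
                        k += 1
            return best, best_cost

        # Toy usage: order items so that adjacent values are close (stand-in cost).
        items = list(range(20))
        random.shuffle(items)

        def swap_two(s):
            s = s[:]
            i, j = random.sample(range(len(s)), 2)
            s[i], s[j] = s[j], s[i]
            return s

        def reverse_segment(s):
            s = s[:]
            i, j = sorted(random.sample(range(len(s)), 2))
            s[i:j] = reversed(s[i:j])
            return s

        cost = lambda s: sum(abs(a - b) for a, b in zip(s, s[1:]))
        print(vns(items, [swap_two, reverse_segment], cost))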

  14. Discover yourself - Making your online information searchable

    KAUST Repository

    Martin, Jose

    2015-01-01

    The slides used during the presentation in which KAUST Library shows two different approaches to making the information available on the Library websites searchable via the Catalog. This enables users to search for information not only about resources, but also about the services provided by the Library. The first approach is based on using Encore and the OAI-PMH protocol, and the second one uses Google's Custom Search Engine.

  15. GeoSearch: a new virtual globe application for the submission, storage, and sharing of point-based ecological data

    Science.gov (United States)

    Cardille, J. A.; Gonzales, R.; Parrott, L.; Bai, J.

    2009-12-01

    How should researchers store and share data? For most of history, scientists with results and data to share have been mostly limited to books and journal articles. In recent decades, the advent of personal computers and shared data formats has made it feasible, though often cumbersome, to transfer data between individuals or among small groups. Meanwhile, the use of automatic samplers, simulation models, and other data-production techniques has increased greatly. The result is that there is more and more data to store, and a greater expectation that they will be available at the click of a button. In 10 or 20 years, will we still send emails to each other to learn about what data exist? The development and widespread familiarity with virtual globes like Google Earth and NASA WorldWind has created the potential, in just the last few years, to revolutionize the way we share data, search for and search through data, and understand the relationship between individual projects in research networks, where sharing and dissemination of knowledge is encouraged. For the last two years, we have been building the GeoSearch application, a cutting-edge online resource for the storage, sharing, search, and retrieval of data produced by research networks. Linking NASA’s WorldWind globe platform, the data browsing toolkit prefuse, and SQL databases, GeoSearch’s version 1.0 enables flexible searches and novel geovisualizations of large amounts of related scientific data. These data may be submitted to the database by individual researchers and processed by GeoSearch’s data parser. Ultimately, data from research groups gathered in a research network would be shared among users via the platform. Access is not limited to the scientists themselves; administrators can determine which data can be presented publicly and which require group membership. Under the auspices of the Canada’s Sustainable Forestry Management Network of Excellence, we have created a moderate-sized database

  16. Search query data to monitor interest in behavior change: application for public health.

    Science.gov (United States)

    Carr, Lucas J; Dunsiger, Shira I

    2012-01-01

    There is a need for effective interventions and policies that target the leading preventable causes of death in the U.S. (e.g., smoking, overweight/obesity, physical inactivity). Such efforts could be aided by the use of publicly available, real-time search query data that illustrate times and locations of high and low public interest in behaviors related to preventable causes of death. This study explored patterns of search query activity for the terms 'weight', 'diet', 'fitness', and 'smoking' using Google Insights for Search. Search activity for 'weight', 'diet', 'fitness', and 'smoking' conducted within the United States via Google between January 4th, 2004 (the first date data were available) and November 28th, 2011 (the date of data download and analysis) was analyzed. Using a generalized linear model, we explored the effects of time (month) on mean relative search volume for all four terms. Models suggest a significant effect of month on mean search volume for all four terms. Search activity for all four terms was highest in January with observable declines throughout the remainder of the year. These findings demonstrate discernable temporal patterns of search activity for four areas of behavior change. These findings could be used to inform the timing, location and messaging of interventions, campaigns and policies targeting these behaviors.
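
    A hedged re-creation of the month-effect analysis: a linear model with month as a categorical predictor of relative search volume, followed by an F test for the overall month effect. The weekly series below is synthetic and merely mimics a January peak; the actual Google Insights for Search data are not reproduced here.

        # Month-effect sketch with statsmodels on a synthetic weekly series.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        weeks = pd.date_range("2004-01-04", "2011-11-28", freq="W")
        january_boost = np.where(weeks.month == 1, 15.0, 0.0)
        df = pd.DataFrame({
            "volume": 50 + january_boost + rng.normal(0, 5, len(weeks)),
            "month": weeks.month,
        })

        model = smf.ols("volume ~ C(month)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))        # F test of the month effect
        print(df.groupby("month")["volume"].mean())   # January should be highest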

  17. Multi-objective Search-based Mobile Testing

    OpenAIRE

    Mao, K.

    2017-01-01

    Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis...

  18. Impediments for the application of risk-informed decision making in nuclear safety

    International Nuclear Information System (INIS)

    Hahn, L.

    2001-01-01

    A broad application of risk-informed decision making in the regulation of safety of nuclear power plants is hindered by the lack of quantitative risk and safety standards as well as of precise instruments to demonstrate adequate safety. An additional severe problem is the difficulty of harmonizing deterministic design requirements and probabilistic safety assessment. The problem is compounded by the vulnerability of PSA to subjective influences and the potential for misuse. Despite this scepticism, the nuclear community is encouraged to intensify its efforts to improve the quality standards for probabilistic safety assessments and their quality assurance. A prerequisite for reliable risk-informed decision-making processes is also a well-defined and transparent relationship between deterministic and probabilistic safety approaches. (author)

  19. Home-Explorer: Ontology-Based Physical Artifact Search and Hidden Object Detection System

    Directory of Open Access Journals (Sweden)

    Bin Guo

    2008-01-01

    Full Text Available A new system named Home-Explorer that searches and finds physical artifacts in a smart indoor environment is proposed. The view on which it is based is artifact-centered and uses sensors attached to the everyday artifacts (called smart objects) in the real world. This paper makes two main contributions: First, it addresses the robustness of the embedded sensors, which is seldom discussed in previous smart artifact research. Because sensors may sometimes be broken or fail to work under certain conditions, smart objects become hidden ones. However, current systems provide no mechanism to detect and manage objects when this problem occurs. Second, there is no common context infrastructure for building smart artifact systems, which makes it difficult for separately developed applications to interact with each other and hard for them to share and reuse knowledge. Unlike previous systems, Home-Explorer builds on an ontology-based knowledge infrastructure named Sixth-Sense, which makes it easy for the system to interact with other applications or agents also based on this ontology. The hidden object problem is also reflected in our ontology, which enables Home-Explorer to deal with both smart objects and hidden objects. A set of rules for deducing an object's status or location information and for locating hidden objects is described and evaluated.

  20. MAKING USE OF INFORMATION TECHNOLOGIES IN THE STUDY OF CLASSICAL TURKISH LITERATURE AND E-LIBRARY APPLICATIONS / KLÂSİK TÜRK EDEBİYATI ÇALIŞMALARINDA BİLİŞİM TEKNOLOJİSİNDEN YARARLANMA VE E-KÜTÜPHÂNE UYGULAMALARI

    Directory of Open Access Journals (Sweden)

    Dr. İlyas YAZAR

    2007-08-01

    Full Text Available The traditional methods and applications used in Classical Turkish literary studies remain of great importance today, while the facilities provided by the information age are reshaping those methods and applications. Progress in information technologies and e-library applications contributes to work in Classical Turkish Literature and to research techniques in this field, and it also provides alternative options. In addition to advancing research in Classical Turkish Literature, software applications developed for or by means of computers, projects that make use of digital technologies, and in particular e-library work and computer-assisted applications are leading to a search for new methods. This article discusses making use of information technologies in the study of Classical Turkish Literature, including projects, applications, methods and e-library applications.

  1. Search for extraterrestrial life: recent developments. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Papagiannis, M D [ed.

    1985-01-01

    Seventy experts from 20 different countries discuss the many interrelated aspects of the search for extraterrestrial life, including the search for other planetary systems where life may originate and evolve, the widespread presence of complex prebiotic molecules in our Solar System and in interstellar space which could be precursors of life, and the universal aspects of the biological evolution on Earth. They also discuss the nearly 50 radio searches that were undertaken in the last 25 years, the technological progress that has occurred in this period, and the plans for the future including the comprehensive SETI search program that NASA is now preparing for the 1990's. Extensive introductions by the Editor to each of the 8 sections make this volume accessible even to the non-specialist who has a genuine interest in this new field. 549 refs.; 84 figs.; 21 tabs.

  2. Searches for squarks and gluinos with ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00394440; The ATLAS collaboration

    2017-01-01

    One of the most versatile and attractive extensions to the successful yet incomplete Standard Model of particle physics is Supersymmetry - a theory the ATLAS experiment at the Large Hadron Collider is looking for in its recorded data. Due to the nature of proton-proton collisions, the recorded physics events are mainly produced via the strong force. This fact makes searches for the superpartners of the gluon and the quarks particularly promising. This document provides an overview of searches for squarks and gluinos using the ATLAS experiment and describes two of the major analyses in detail. The analysis strategies are outlined, the results discussed and interpreted. Finally, an outlook onto other searches for strongly produced Supersymmetry with ATLAS is given.

  3. Combinatorial search from algorithms to systems

    CERN Document Server

    Hamadi, Youssef

    2013-01-01

    This book details key techniques in constraint networks, dealing in particular with constraint satisfaction, search, satisfiability, and applications in machine learning and constraint programming. Includes case studies.

  4. Interval neutrosophic sets and their application in multicriteria decision making problems.

    Science.gov (United States)

    Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong

    2014-01-01

    As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. And interval neutrosophic sets (INSs) have been proposed exactly to address issues with a set of numbers in the real unit interval, not just a specific number. However, there are fewer reliable operations for INSs, as well as the INS aggregation operators and decision making method. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method.

  5. A novel directional asymmetric sampling search algorithm for fast block-matching motion estimation

    Science.gov (United States)

    Li, Yue-e.; Wang, Qiang

    2011-11-01

    This paper proposes a novel directional asymmetric sampling search (DASS) algorithm for video compression. Making full use of the error information (block distortions) of the search patterns, eight directional search patterns are designed for different situations. A local sampling search strategy is employed for large motion vectors. To further speed up the search, an early termination strategy is adopted in the DASS procedure. Compared to conventional fast algorithms, the proposed method achieves the most satisfactory PSNR values for all test sequences.
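
    For orientation, the sketch below shows the baseline that fast algorithms of this kind try to beat: block matching by exhaustive search over a small window using the sum of absolute differences (SAD). The paper's eight directional asymmetric patterns and early-termination rules are not reproduced here; the synthetic frames and the 16x16 block size are illustrative choices.

        # Baseline SAD block-matching sketch (full search within +/- radius).
        import numpy as np

        def sad(block_a, block_b):
            return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

        def full_search(ref, cur, top, left, block=16, radius=7):
            """Best motion vector for the block of `cur` at (top, left)."""
            target = cur[top:top + block, left:left + block]
            best_mv, best_cost = (0, 0), float("inf")
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                        continue
                    cost = sad(target, ref[y:y + block, x:x + block])
                    if cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
            return best_mv, best_cost

        rng = np.random.default_rng(2)
        ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        cur = np.roll(ref, shift=(3, -2), axis=(0, 1))   # impose a known shift
        print(full_search(ref, cur, 24, 24))             # recovers (dy, dx) = (-3, 2)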

  6. International patent applications for non-injectable naloxone for opioid overdose reversal: Exploratory search and retrieve analysis of the PatentScope database.

    Science.gov (United States)

    McDonald, Rebecca; Danielsson Glende, Øyvind; Dale, Ola; Strang, John

    2018-02-01

    Non-injectable naloxone formulations are being developed for opioid overdose reversal, but only limited data have been published in the peer-reviewed domain. Through examination of a hitherto-unsearched database, we expand public knowledge of non-injectable formulations, tracing their development and novelty, with the aim to describe and compare their pharmacokinetic properties. (i) The PatentScope database of the World Intellectual Property Organization was searched for relevant English-language patent applications; (ii) Pharmacokinetic data were extracted, collated and analysed; (iii) PubMed was searched using Boolean search query '(nasal OR intranasal OR nose OR buccal OR sublingual) AND naloxone AND pharmacokinetics'. Five hundred and twenty-two PatentScope and 56 PubMed records were identified: three published international patent applications and five peer-reviewed papers were eligible. Pharmacokinetic data were available for intranasal, sublingual, and reference routes. Highly concentrated formulations (10-40 mg/mL) had been developed and tested. Sublingual bioavailability was very low (1%; relative to intravenous). Non-concentrated intranasal spray (1 mg/mL; 1 mL per nostril) had low bioavailability (11%). Concentrated intranasal formulations (≥10 mg/mL) had bioavailability of 21-42% (relative to intravenous) and 26-57% (relative to intramuscular), with peak concentrations (dose-adjusted Cmax = 0.8-1.7 ng/mL) reached in 19-30 min (tmax). Exploratory analysis identified intranasal bioavailability as associated positively with dose and negatively with volume. We find consistent direction of development of intranasal sprays to high-concentration, low-volume formulations with bioavailability in the 20-60% range. These have potential to deliver a therapeutic dose in 0.1 mL volume. [McDonald R, Danielsson Glende Ø, Dale O, Strang J. International patent applications for non-injectable naloxone for opioid overdose reversal

  7. Analysis and Design of Web-Based Database Application for Culinary Community

    OpenAIRE

    Huda, Choirul; Awang, Osel Dharmawan; Raymond, Raymond; Raynaldi, Raynaldi

    2017-01-01

    This research is motivated by the rapid development of the culinary industry and of information technology. The difficulty of communicating with culinary experts and of documenting recipes makes proper media support very important. Therefore, a web-based database application for the public is needed to help the culinary community with communication, searching and recipe management. The aim of the research was to design a web-based database application that could be used as social media for the cu...

  8. Discover yourself - Making your online information searchable

    KAUST Repository

    Martin, Jose

    2015-11-08

    The slides used during the presentation in which KAUST Library shows two different approaches to making the information available on the Library websites searchable via the Catalog. This enables users to search for information not only about resources, but also about the services provided by the Library. The first approach is based on using Encore and the OAI-PMH protocol, and the second one uses Google's Custom Search Engine.

  9. Predicting consumer behavior with Web search.

    Science.gov (United States)

    Goel, Sharad; Hofman, Jake M; Lahaie, Sébastien; Pennock, David M; Watts, Duncan J

    2010-10-12

    Recent work has demonstrated that Web search volume can "predict the present," meaning that it can be used to accurately track outcomes such as unemployment levels, auto and home sales, and disease prevalence in near real time. Here we show that what consumers are searching for online can also predict their collective future behavior days or even weeks in advance. Specifically we use search query volume to forecast the opening weekend box-office revenue for feature films, first-month sales of video games, and the rank of songs on the Billboard Hot 100 chart, finding in all cases that search counts are highly predictive of future outcomes. We also find that search counts generally boost the performance of baseline models fit on other publicly available data, where the boost varies from modest to dramatic, depending on the application in question. Finally, we reexamine previous work on tracking flu trends and show that, perhaps surprisingly, the utility of search data relative to a simple autoregressive model is modest. We conclude that in the absence of other data sources, or where small improvements in predictive performance are material, search queries provide a useful guide to the near future.
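
    A hedged sketch of the modelling idea described here: compare a lagged-outcome baseline with a model that adds search-query volume as an extra predictor and check whether the fit improves. All data below are synthetic stand-ins; none of the paper's box-office, sales or chart series are reproduced.

        # Compare a naive AR(1) baseline with an AR(1) + search-volume regression.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(3)
        n = 200
        search_volume = rng.gamma(2.0, 10.0, n)
        outcome = 5.0 + 0.8 * search_volume + rng.normal(0, 5, n)  # partly driven by search
        lagged_outcome = np.r_[outcome[0], outcome[:-1]]           # simple AR(1) feature

        X_ar = lagged_outcome[1:].reshape(-1, 1)
        X_full = np.column_stack([lagged_outcome[1:], search_volume[1:]])
        y = outcome[1:]

        baseline = LinearRegression().fit(X_ar, y)
        augmented = LinearRegression().fit(X_full, y)
        print("AR baseline R^2: ", r2_score(y, baseline.predict(X_ar)))
        print("AR + search R^2: ", r2_score(y, augmented.predict(X_full)))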

  10. Teaching AI Search Algorithms in a Web-Based Educational System

    Science.gov (United States)

    Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis

    2013-01-01

    In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…
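
    As a concrete illustration of the kind of algorithm such systems animate step by step, the sketch below runs a breadth-first search over a small graph and prints each expansion; the graph itself is an arbitrary example, not taken from the cited system.

        # Step-by-step breadth-first search over a toy graph.
        from collections import deque

        graph = {
            "A": ["B", "C"],
            "B": ["D", "E"],
            "C": ["F"],
            "D": [],
            "E": ["F"],
            "F": [],
        }

        def bfs(start, goal):
            frontier = deque([[start]])
            visited = {start}
            while frontier:
                path = frontier.popleft()
                node = path[-1]
                print("expanding:", node, "frontier:", [p[-1] for p in frontier])
                if node == goal:
                    return path
                for neighbour in graph[node]:
                    if neighbour not in visited:
                        visited.add(neighbour)
                        frontier.append(path + [neighbour])
            return None

        print("path found:", bfs("A", "F"))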

  11. An introduction to harmony search optimization method

    CERN Document Server

    Wang, Xiaolei; Zenger, Kai

    2014-01-01

    This brief provides a detailed introduction, discussion and bibliographic review of the nature-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants and also their drawbacks. The authors show how weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researche
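
    A minimal sketch of the basic Harmony Search loop for a continuous minimisation problem (the sphere function), using the usual parameter names (harmony memory considering rate HMCR, pitch adjusting rate PAR, bandwidth). It is an illustrative baseline under these assumptions, not one of the book's implementations.

        # Minimal harmony search for continuous minimisation (illustrative).
        import random

        def harmony_search(cost, dim, bounds, hm_size=20, hmcr=0.9, par=0.3,
                           bandwidth=0.1, iterations=2000):
            lo, hi = bounds
            memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hm_size)]
            for _ in range(iterations):
                new = []
                for d in range(dim):
                    if random.random() < hmcr:                 # memory consideration
                        value = random.choice(memory)[d]
                        if random.random() < par:              # pitch adjustment
                            value += random.uniform(-bandwidth, bandwidth)
                    else:                                      # random consideration
                        value = random.uniform(lo, hi)
                    new.append(min(hi, max(lo, value)))
                worst = max(range(hm_size), key=lambda i: cost(memory[i]))
                if cost(new) < cost(memory[worst]):            # replace the worst harmony
                    memory[worst] = new
            return min(memory, key=cost)

        sphere = lambda x: sum(v * v for v in x)
        print(harmony_search(sphere, dim=5, bounds=(-5.0, 5.0)))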

  12. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

    Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool. While GPS information provides reasonably good accuracy, it is not always present in all handheld devices nor accurate in all situat...... of the image search engine and database image location knowledge, the location of the query image is determined and associated data can be presented to the user....

  13. A Systematic Review of Healthcare Applications for Smartphones

    Directory of Open Access Journals (Sweden)

    Mosa Abu Saleh

    2012-07-01

    Full Text Available Abstract Background Advanced mobile communications and portable computation are now combined in handheld devices called “smartphones”, which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category. Methods In April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches. Results A total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students. Conclusions Many medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also

  14. A systematic review of healthcare applications for smartphones.

    Science.gov (United States)

    Mosa, Abu Saleh Mohammad; Yoo, Illhoi; Sheets, Lincoln

    2012-07-10

    Advanced mobile communications and portable computation are now combined in handheld devices called "smartphones", which are also capable of running third-party software. The number of smartphone users is growing rapidly, including among healthcare professionals. The purpose of this study was to classify smartphone-based healthcare technologies as discussed in academic literature according to their functionalities, and summarize articles in each category. In April 2011, MEDLINE was searched to identify articles that discussed the design, development, evaluation, or use of smartphone-based software for healthcare professionals, medical or nursing students, or patients. A total of 55 articles discussing 83 applications were selected for this study from 2,894 articles initially obtained from the MEDLINE searches. A total of 83 applications were documented: 57 applications for healthcare professionals focusing on disease diagnosis (21), drug reference (6), medical calculators (8), literature search (6), clinical communication (3), Hospital Information System (HIS) client applications (4), medical training (2) and general healthcare applications (7); 11 applications for medical or nursing students focusing on medical education; and 15 applications for patients focusing on disease management with chronic illness (6), ENT-related (4), fall-related (3), and two other conditions (2). The disease diagnosis, drug reference, and medical calculator applications were reported as most useful by healthcare professionals and medical or nursing students. Many medical applications for smartphones have been developed and widely used by health professionals and patients. The use of smartphones is getting more attention in healthcare day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communication. Also, smartphones can play a very important role in patient education

  15. Making sense of the future: The information search strategies of construction practitioners in exploring the risk landscape

    DEFF Research Database (Denmark)

    Stingl, Verena; Maytorena-Sanchez, Eunice

    This paper explores the cognitive strategies that construction practitioners rely on when searching to identify risks in a simulated project. By using the active information search methodology in interviews with 45 industry practitioners, we were able to distinguish three stereotypical information...

  16. Active Path Planning for Drones in Object Search

    OpenAIRE

    Wang, Zeyangyi

    2017-01-01

    Object searching is one of the most popular applications of unmanned aerial vehicles. Low-cost small drones are particularly suited to surveying tasks in difficult conditions. Given their limited on-board processing power and battery life, there is a need for more efficient search algorithms. The proposed path planning algorithm utilizes AZ-net, a deep learning network, to process images captured by drones for adaptive flight path planning. Search simulation based on videos and actual experimen...

  17. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    Directory of Open Access Journals (Sweden)

    Piotr Minkiewicz

    2016-12-01

    Full Text Available Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also introduced as part of chemistry or enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate the introduction of secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or for a structure annotated with chemical codes or drawn using molecule-editing software. Data mining options may be enhanced by navigating through a network of links and cross-links between databases. Exemplary databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used for calculation of daily intake of bioactive compounds, prediction of metabolism of food components and their biological activity, as well as for prediction of interactions between food components and drugs.
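
    As an example of the name-based lookups such courses introduce, the sketch below queries PubChem's PUG REST interface for basic properties of a compound. The endpoint layout follows the public documentation at the time of writing, and quercetin is just an arbitrary food-related compound chosen for illustration.

        # Name-based compound lookup against PubChem PUG REST (illustrative).
        import requests

        compound = "quercetin"
        url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
               f"{compound}/property/MolecularFormula,MolecularWeight,CanonicalSMILES/JSON")

        response = requests.get(url, timeout=30)
        response.raise_for_status()
        properties = response.json()["PropertyTable"]["Properties"][0]
        print(properties["MolecularFormula"], properties["MolecularWeight"])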

  18. 78 FR 13624 - Proposed Information Collection; Comment Request; Age Search Service

    Science.gov (United States)

    2013-02-28

    ...- 3434; or: [email protected] . SUPPLEMENTARY INFORMATION I. Abstract Age Search is a service... necessary information to conduct a search of historical population decennial census records in order to... Found'', advises the applicant that the search for information from the census records was unsuccessful...

  19. 75 FR 12174 - Proposed Information Collection; Comment Request; AGE Search Service

    Science.gov (United States)

    2010-03-15

    ... DEPARTMENT OF COMMERCE Census Bureau Proposed Information Collection; Comment Request; AGE Search... provide the Census Bureau with the necessary information to conduct a search of historical population... applicant that search for information from the census records was unsuccessful. The BC-658(L), is sent to...

  20. Internet search and krokodil in the Russian Federation: an infoveillance study.

    Science.gov (United States)

    Zheluk, Andrey; Quinn, Casey; Meylakhs, Peter

    2014-09-18

    through traditional survey methods. Our analysis suggests it is plausible that Yandex search behavior served as a proxy for patterns of krokodil production and use during the date range we investigated. More generally, this study demonstrates the application of novel methods recently used by policy makers to both monitor illicit drug use and influence drug policy decision making.

  1. Visual search in barn owls: Task difficulty and saccadic behavior.

    Science.gov (United States)

    Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2018-01-01

    How do we find what we are looking for? A target can be in plain view, but it may be detected only after extensive search. During a search we make directed attentional deployments like saccades to segment the scene until we detect the target. Depending on difficulty, the search may be fast with few attentional deployments or slow with many, shorter deployments. Here we study visual search in barn owls by tracking their overt attentional deployments-that is, their head movements-with a camera. We conducted a low-contrast feature search, a high-contrast orientation conjunction search, and a low-contrast orientation conjunction search, each with set sizes varying from 16 to 64 items. The barn owls were able to learn all of these tasks and showed serial search behavior. In a subsequent step, we analyzed how search behavior of owls changes with search complexity. We compared the search mechanisms in these three serial searches with results from pop-out searches our group had reported earlier. Saccade amplitude shortened and fixation duration increased in difficult searches. Also, in conjunction search saccades were guided toward items with shared target features. These data suggest that during visual search, barn owls utilize mechanisms similar to those that humans use.

  2. Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems

    Directory of Open Access Journals (Sweden)

    Hong-yu Zhang

    2014-01-01

    Full Text Available As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. And interval neutrosophic sets (INSs) have been proposed exactly to address issues with a set of numbers in the real unit interval, not just a specific number. However, there are fewer reliable operations for INSs, as well as the INS aggregation operators and decision making method. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method.

  3. Manipulating Google's Knowledge Graph Box to Counter Biased Information Processing During an Online Search on Vaccination: Application of a Technological Debiasing Strategy.

    Science.gov (United States)

    Ludolph, Ramona; Allam, Ahmed; Schulz, Peter J

    2016-06-02

    One of people's major motives for going online is the search for health-related information. Most consumers start their search with a general search engine but are unaware of the fact that its sorting and ranking criteria do not mirror information quality. This misconception can lead to distorted search outcomes, especially when the information processing is characterized by heuristic principles and resulting cognitive biases instead of a systematic elaboration. As vaccination opponents are vocal on the Web, the chance of encountering their non-evidence-based views on immunization is high. Therefore, biased information processing in this context can cause subsequent impaired judgment and decision making. A technological debiasing strategy could counter this by changing people's search environment. This study aims at testing a technological debiasing strategy to reduce the negative effects of biased information processing when using a general search engine on people's vaccination-related knowledge and attitudes. This strategy is to manipulate the content of Google's knowledge graph box, which is integrated in the search interface and provides basic information about the search topic. A full 3x2 factorial, posttest-only design was employed with availability of basic factual information (comprehensible vs hardly comprehensible vs not present) as the first factor and a warning message as the second factor of experimental manipulation. Outcome variables were the evaluation of the knowledge graph box, vaccination-related knowledge, as well as beliefs and attitudes toward vaccination, as represented by three latent variables that emerged from an exploratory factor analysis. Two-way analysis of variance revealed a significant main effect of availability of basic information in the knowledge graph box on participants' vaccination knowledge scores (F(2,273) = 4.86, P = .01), skepticism/fear of vaccination side effects (F(2,273) = 3.5, P = .03), and perceived information quality (F2

  4. Decision-Making Theories and Models: A Discussion of Rational and Psychological Decision-Making Theories and Models: The Search for a Cultural-Ethical Decision-Making Model

    OpenAIRE

    Oliveira, Arnaldo

    2007-01-01

    This paper examines rational and psychological decision-making models. Descriptive and normative methodologies such as attribution theory, schema theory, prospect theory, ambiguity model, game theory, and expected utility theory are discussed. The definition of culture is reviewed, and the relationship between culture and decision making is also highlighted as many organizations use a cultural-ethical decision-making model.

  5. Application of preference selection index method for decision making over the design stage of production system life cycle

    Directory of Open Access Journals (Sweden)

    Rajesh Attri

    2015-07-01

    Full Text Available The life cycle of a production system shows its progress from inception to termination. During each stage, mainly the design stage, certain strategic decisions have to be taken. These decisions are complex because the decision makers have to assess a wide range of alternatives against a set of conflicting criteria. As the decision-making process is unstructured and characterized by domain-dependent knowledge, there is a need for an efficient multi-criteria decision making (MCDM) tool to help decision makers make correct decisions. This paper explores the application of a novel MCDM method, the preference selection index (PSI) method, to solve various decision-making problems that are generally encountered in the design stage of the production system life cycle. To demonstrate the potential, applicability and accuracy of the PSI method in solving decision-making problems during the design stage of the production system life cycle, five examples are cited from the literature and compared with the results obtained by past researchers.
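
    A compact numpy sketch of the PSI steps as they are commonly described in the MCDM literature (normalise the decision matrix, derive criterion weights from the preference variation values, then score each alternative). The 4-alternative, 3-criterion matrix and the assumed criterion directions are invented for illustration, not taken from the paper's examples.

        # Preference selection index (PSI) sketch on an invented decision matrix.
        import numpy as np

        X = np.array([[75.0, 3.2, 12.0],      # alternatives x criteria
                      [60.0, 2.8, 15.0],
                      [90.0, 4.1, 10.0],
                      [82.0, 3.9, 14.0]])
        beneficial = np.array([True, False, True])   # assumed criterion directions

        # 1. normalise (larger-is-better vs smaller-is-better criteria)
        R = np.where(beneficial, X / X.max(axis=0), X.min(axis=0) / X)

        # 2-4. preference variation and its deviation per criterion
        phi = ((R - R.mean(axis=0)) ** 2).sum(axis=0)
        omega = np.abs(1.0 - phi)

        # 5. criterion weights and 6. preference selection index per alternative
        weights = omega / omega.sum()
        psi = R @ weights
        print("ranking (best first):", np.argsort(-psi))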

  6. Environmental Applications of Biosurfactants: Recent Advances

    Directory of Open Access Journals (Sweden)

    Swaranjit Singh Cameotra

    2011-01-01

    Full Text Available Increasing public awareness of environmental pollution influences the search and development of technologies that help in clean up of organic and inorganic contaminants such as hydrocarbons and metals. An alternative and eco-friendly method of remediation technology of environments contaminated with these pollutants is the use of biosurfactants and biosurfactant-producing microorganisms. The diversity of biosurfactants makes them an attractive group of compounds for potential use in a wide variety of industrial and biotechnological applications. The purpose of this review is to provide a comprehensive overview of advances in the applications of biosurfactants and biosurfactant-producing microorganisms in hydrocarbon and metal remediation technologies.

  7. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    INTAN S. AHMAD

    2008-04-01

    Full Text Available This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches as it involves a novel non-monotone line search procedure, which is based on the use of a standard penalty function as the merit function for the line search. The crucial novel concept is the discretisation of the penalty parameter over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented with capabilities for large scale application. Case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.
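
    The reformulation the abstract refers to is the standard epigraph trick: min_x max_i f_i(x) becomes min_{x,z} z subject to f_i(x) <= z for all i. The sketch below illustrates it with a generic SLSQP solver standing in for the paper's primal-dual interior point method; the three objective functions are toy examples.

        # Epigraph reformulation of a minimax problem, solved with scipy (illustrative).
        import numpy as np
        from scipy.optimize import minimize

        fs = [lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2,
              lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2,
              lambda x: x[0] ** 2 + (x[1] - 2.0) ** 2]

        def objective(v):                     # v = (x1, x2, z); minimise the bound z
            return v[2]

        # f_i(x) <= z  is written as the nonnegativity constraint  z - f_i(x) >= 0.
        constraints = [{"type": "ineq", "fun": (lambda v, f=f: v[2] - f(v[:2]))} for f in fs]
        result = minimize(objective, x0=np.zeros(3), method="SLSQP", constraints=constraints)
        print("minimax point:", result.x[:2], "worst-case value:", result.x[2])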

  8. Search engine imaginary: Visions and values in the co-production of search technology and Europe.

    Science.gov (United States)

    Mager, Astrid

    2017-04-01

    This article discusses the co-production of search technology and a European identity in the context of the EU data protection reform. The negotiations of the EU data protection legislation ran from 2012 until 2015 and resulted in a unified data protection legislation directly binding for all European member states. I employ a discourse analysis to examine EU policy documents and Austrian media materials related to the reform process. Using the concept 'sociotechnical imaginary', I show how a European imaginary of search engines is forming in the EU policy domain, how a European identity is constructed in the envisioned politics of control, and how national specificities contribute to the making and unmaking of a European identity. I discuss the roles that national technopolitical identities play in shaping both search technology and Europe, taking as an example Austria, a small country with a long history in data protection and a tradition of restrained technology politics.

  9. 75 FR 59718 - US Search, Inc. And US Search, LLC; Analysis of Proposed Consent Order to Aid Public Comment

    Science.gov (United States)

    2010-09-28

    ..., aliases, maiden name, death records, address history, information about friends, associates, and relatives... prohibits US Search from making any representations concerning the effectiveness of its "PrivacyLock" service... representation covered by the order all advertisements and promotional materials containing the representation...

  10. BIOMedical Search Engine Framework: Lightweight and customized implementation of domain-specific biomedical search engines.

    Science.gov (United States)

    Jácome, Alberto G; Fdez-Riverola, Florentino; Lourenço, Anália

    2016-07-01

    Text mining and semantic analysis approaches can be applied to the construction of biomedical domain-specific search engines and provide an attractive alternative to create personalized and enhanced search experiences. Therefore, this work introduces the new open-source BIOMedical Search Engine Framework for the fast and lightweight development of domain-specific search engines. The rationale behind this framework is to incorporate core features typically available in search engine frameworks with flexible and extensible technologies to retrieve biomedical documents, annotate meaningful domain concepts, and develop highly customized Web search interfaces. The BIOMedical Search Engine Framework integrates taggers for major biomedical concepts, such as diseases, drugs, genes, proteins, compounds and organisms, and enables the use of domain-specific controlled vocabulary. Technologies from the Typesafe Reactive Platform, the AngularJS JavaScript framework and the Bootstrap HTML/CSS framework support the customization of the domain-oriented search application. Moreover, the RESTful API of the BIOMedical Search Engine Framework allows the integration of the search engine into existing systems or a complete web interface personalization. The construction of the Smart Drug Search is described as proof-of-concept of the BIOMedical Search Engine Framework. This public search engine catalogs scientific literature about antimicrobial resistance, microbial virulence and topics alike. The keyword-based queries of the users are transformed into concepts and search results are presented and ranked accordingly. The semantic graph view portraits all the concepts found in the results, and the researcher may look into the relevance of different concepts, the strength of direct relations, and non-trivial, indirect relations. The number of occurrences of the concept shows its importance to the query, and the frequency of concept co-occurrence is indicative of biological relations

  11. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    CERN Document Server

    Bagnasco, S; Guarise, A; Lusso, S; Masera, M; Vallero, S

    2015-01-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monit...
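
    As a minimal sketch of how such ELK-based monitoring data can be queried, the snippet below sends an aggregation query to Elasticsearch's standard _search REST endpoint. The host, index pattern and field names (cloud-monitoring-*, tenant, cpu_usage) are hypothetical placeholders, not details of the INFN deployment.

```python
import json
import urllib.request

# Hypothetical monitoring index and fields; adjust to the actual deployment.
ES_URL = "http://localhost:9200/cloud-monitoring-*/_search"

query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-24h"}}},   # last 24 hours
    "aggs": {
        "per_tenant": {
            "terms": {"field": "tenant.keyword"},
            "aggs": {"cpu": {"avg": {"field": "cpu_usage"}}}   # mean CPU per tenant
        }
    },
}

req = urllib.request.Request(ES_URL, data=json.dumps(query).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

for bucket in result["aggregations"]["per_tenant"]["buckets"]:
    print(bucket["key"], round(bucket["cpu"]["value"], 2))
```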

  12. Routine development of objectively derived search strategies

    Directory of Open Access Journals (Sweden)

    Hausner Elke

    2012-02-01

    Full Text Available Abstract Background Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Methods Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG". This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. Results We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set were used. The validation set was also completely identified. Discussion Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a
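
    The validation idea is easy to express in code: check how many references from the development and held-out validation sets a candidate strategy actually retrieves. The sketch below is a minimal illustration under that assumption; the record IDs are placeholders and the retrieval step (running the Boolean strategy against a database) is assumed to have happened elsewhere.

```python
def evaluate_strategy(retrieved_ids, development_ids, validation_ids):
    """Report sensitivity of a candidate search strategy on two reference sets.

    retrieved_ids   -- set of record IDs returned by the candidate strategy
    development_ids -- relevant records used to build the strategy
    validation_ids  -- relevant records held back for independent validation
    """
    def sensitivity(relevant):
        return len(retrieved_ids & relevant) / len(relevant)

    return {
        "development_sensitivity": sensitivity(set(development_ids)),
        "validation_sensitivity": sensitivity(set(validation_ids)),
        "number_retrieved": len(retrieved_ids),   # proxy for screening workload
    }

# Placeholder IDs standing in for bibliographic records
retrieved = {f"rec{i}" for i in range(1, 200)}
development = [f"rec{i}" for i in (3, 17, 42, 90, 120)]
validation = [f"rec{i}" for i in (5, 60, 300)]    # rec300 would be a missed record
print(evaluate_strategy(retrieved, development, validation))
```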

  13. An ontology-based search engine for protein-protein interactions.

    Science.gov (United States)

    Park, Byungkyu; Han, Kyungsook

    2010-01-18

    Keyword matching or ID matching is the most common searching method in a large database of protein-protein interactions. They are purely syntactic methods, and retrieve the records in the database that contain a keyword or ID specified in a query. Such syntactic search methods often retrieve too few search results or no results despite many potential matches present in the database. We have developed a new method for representing protein-protein interactions and the Gene Ontology (GO) using modified Gödel numbers. This representation is hidden from users but enables a search engine using the representation to efficiently search protein-protein interactions in a biologically meaningful way. Given a query protein with optional search conditions expressed in one or more GO terms, the search engine finds all the interaction partners of the query protein by unique prime factorization of the modified Gödel numbers representing the query protein and the search conditions. Representing the biological relations of proteins and their GO annotations by modified Gödel numbers makes a search engine efficiently find all protein-protein interactions by prime factorization of the numbers. Keyword matching or ID matching search methods often miss the interactions involving a protein that has no explicit annotations matching the search condition, but our search engine retrieves such interactions as well if they satisfy the search condition with a more specific term in the ontology.
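
    The divisibility idea behind this encoding can be shown in a toy example: each GO term is assigned a distinct prime, a protein's annotation set becomes the product of its primes, and a conjunctive query matches whenever the query's product divides the protein's number. The GO term IDs, primes and annotations below are made up, and the published system additionally folds the ontology hierarchy and the interaction partners into the encoding, which is omitted here.

```python
# Small prime pool; a real system would generate primes on demand.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
go_terms = ["GO:0005515", "GO:0016301", "GO:0005634", "GO:0006468"]
term_prime = dict(zip(go_terms, PRIMES))

def encode(annotations):
    """Encode a set of GO annotations as a product of distinct primes."""
    code = 1
    for term in annotations:
        code *= term_prime[term]
    return code

proteins = {
    "P1": encode(["GO:0005515", "GO:0016301", "GO:0006468"]),
    "P2": encode(["GO:0005515", "GO:0005634"]),
}

def matches(protein_code, required_terms):
    """A protein satisfies the query iff the query product divides its code."""
    return protein_code % encode(required_terms) == 0

query = ["GO:0016301", "GO:0006468"]   # e.g. kinase activity AND phosphorylation
print([name for name, code in proteins.items() if matches(code, query)])  # -> ['P1']
```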

  14. Factors influencing smallholder cocoa production : a management analysis of behavioural decision-making processes of technology adoption and application

    NARCIS (Netherlands)

    Taher, S.

    1996-01-01

    The objectives of the study were to expand present knowledge on the technology adoption and application rates for production inputs and fermentation processing related to farmers' decision-making, and to formulate an optimal technology application policy, particularly for smallholder cocoa

  15. Phylogenetic search through partial tree mixing

    Science.gov (United States)

    2012-01-01

    Background Recent advances in sequencing technology have created large data sets upon which phylogenetic inference can be performed. Current research is limited by the prohibitive time necessary to perform tree search on a reasonable number of individuals. This research develops new phylogenetic algorithms that can operate on tens of thousands of species in a reasonable amount of time through several innovative search techniques. Results When compared to popular phylogenetic search algorithms, better trees are found much more quickly for large data sets. These algorithms are incorporated in the PSODA application available at http://dna.cs.byu.edu/psoda Conclusions The use of Partial Tree Mixing in a partition based tree space allows the algorithm to quickly converge on near optimal tree regions. These regions can then be searched in a methodical way to determine the overall optimal phylogenetic solution. PMID:23320449

  16. Application of 3D Zernike descriptors to shape-based ligand similarity searching.

    Science.gov (United States)

    Venkatraman, Vishwesh; Chakravarthy, Padmasini Ramji; Kihara, Daisuke

    2009-12-17

    The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. In this study, we have examined the efficacy of the alignment independent three-dimensional Zernike descriptor (3DZD) for fast shape based similarity searching. Performance of this approach was compared with several other methods including the statistical moments based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under the situation that simulates actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparable to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and the failure of the shape based methods for specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. The 3DZD has unique ability for fast comparison of three-dimensional shape of compounds. Examples analyzed illustrate the advantages and the room for improvements for the 3DZD.
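
    Once descriptors have been computed, shape-based screening reduces to ranking a library by descriptor distance. The sketch below shows that retrieval step only; the 121-element vectors are random placeholders standing in for precomputed rotation-invariant descriptors (roughly the size of a 3DZD vector), not real compound data, and the actual 3DZD computation is not reproduced here.

```python
import numpy as np

def rank_by_shape(query_desc, library_descs, top_k=5):
    """Rank library compounds by Euclidean distance between shape descriptors.

    query_desc    -- 1-D descriptor vector (e.g. a rotation-invariant 3DZD)
    library_descs -- dict: compound id -> descriptor vector of the same length
    """
    dists = {cid: float(np.linalg.norm(query_desc - d))
             for cid, d in library_descs.items()}
    return sorted(dists.items(), key=lambda kv: kv[1])[:top_k]

# Placeholder descriptors standing in for precomputed shape invariants
rng = np.random.default_rng(0)
library = {f"CHEM{i:04d}": rng.random(121) for i in range(1000)}
query = library["CHEM0042"] + rng.normal(0, 0.01, 121)   # a near-duplicate shape
print(rank_by_shape(query, library))
```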

  17. Process for making cylindrical ceramic tubes with local impressions and device for the application of this process

    International Nuclear Information System (INIS)

    1982-01-01

    Immediately after the pipes have been formed, they are placed on equipment that rotates them about their axis of symmetry, parallel to the axis of the pipe sections; pressure-application means are then applied so that the pipe sections are enclosed between the rotational gear and the pressure means. The pressure means or the rotational means are fitted with at least one tool for making impressions. Means to bring about a second relative movement between the pipe and the tool(s) are applied so that no appreciable slip occurs. Application to the fabrication of pipes for enrichment by gas diffusion [fr]

  18. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention via increasing alertness, vigilance, and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory-cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools to navigate our surrounding world and the current findings showing that affective sounds could influence visual attention provide evidence that we make use of affective information during perceptual processing.

  19. Search for ideal metal hydrides for PEMFC applications

    International Nuclear Information System (INIS)

    Perng, T.-P.; Shen, C.-C.

    2004-01-01

    'Full text:' Previously, an LmNi5-based alloy was prepared and its hydrogenation properties were studied. In order to make use of such a type of metal hydride for application in PEMFC, the room-temperature desorption pressure has to be adjusted to 1-2 atm and the cyclic stability has to be maintained. In this study, the same alloy was partially substituted with Al and cyclic hydrogenation was conducted with different hydrogen loadings up to 3000 cycles at room temperature. The saturated hydrogen loadings in equilibrium were controlled at H/M = 0.75 and 1.0. The P-C-T curves after 1000, 2000, and 3000 cycles of test were collected at T = 30, 50, and 70 °C. After 3000 cycles, it is observed that the maximum hydrogenation capacities of the samples for the loadings of 0.75 and 1.0 are reduced to 0.93 and 0.91, respectively. The plateaus do not change much for T = 30 and 50 °C, but become little sloped without observable split at 70 °C. X-ray diffraction analysis shows that the strains associated with repeated hydrogenation are isotropic for all samples. Both unsubstituted and Al-substituted alloys were then used to store hydrogen in a small cylinder with a diameter of 10 mm and length of 40 mm. The cylinder was connected to a small PEMFC for discharge test at room temperature. More than 540 ml H2 was released at below 2 atm and discharged to a capacity of 1200 mAh. The hydrogenation properties of the alloys and design of the hydrogen storage cylinder for application in small portable PEMFCs for electronic devices are evaluated. The effect of Al substitution and hydrogen loading on cyclic hydrogenation property of the LmNi5-based alloy is also discussed. (author)

  20. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  1. How Users Search the Mobile Web: A Model for Understanding the Impact of Motivation and Context on Search Behaviors

    Directory of Open Access Journals (Sweden)

    Dan Wu

    2016-03-01

    Full Text Available Purpose: This study explores how search motivation and context influence mobile Web search behaviors. Design/methodology/approach: We studied 30 experienced mobile Web users via questionnaires, semi-structured interviews, and an online diary tool that participants used to record their daily search activities. SQLite Developer was used to extract data from the users' phone logs for correlation analysis in Statistical Product and Service Solutions (SPSS). Findings: One quarter of mobile search sessions were driven by two or more search motivations. It was especially difficult to distinguish curiosity from time killing in particular user reporting. Multi-dimensional contexts and motivations influenced mobile search behaviors, and among the context dimensions, gender, place, activities they engaged in while searching, task importance, portal, and interpersonal relations (whether accompanied or alone when searching) correlated with each other. Research limitations: The sample comprised entirely college students, so our findings may not generalize to other populations. More participants and a longer experimental duration will improve the accuracy and objectivity of the research. Practical implications: Motivation analysis and search context recognition can help mobile service providers design applications and services for particular mobile contexts and usages. Originality/value: Most current research focuses on specific contexts, such as studies on place, or other contextual influences on mobile search, and lacks a systematic analysis of mobile search context. Based on analysis of the impact of mobile search motivations and search context on search behaviors, we built a multi-dimensional model of mobile search behaviors.

  2. Further investigation on adaptive search

    Directory of Open Access Journals (Sweden)

    Ming Hong Pi

    2014-05-01

    Full Text Available Adaptive search is one of the fastest fractal compression algorithms and has gained great success in many industrial applications. By substituting the luminance offset by the range block mean, the authors create a completely new version for both the encoding and decoding algorithms. In this paper, theoretically, they prove that the proposed decoding algorithm converges at least as fast as the existing decoding algorithms using the luminance offset. In addition, they prove that the attractor of the decoding algorithm can be represented by a linear combination of range-averaged images. These theorems are very important contributions to the theory and applications of fractal image compression. As a result, the decoding image can be represented as the sum of the DC and AC component images, which is similar with discrete cosine transform or wavelet transform. To further speed up this algorithm and reduce the complexity of range and domain blocks matching, they propose two improvements in this paper, that is, employing the post-quantisation and geometric neighbouring local search to replace the currently used pre-quantisation and the global search, respectively. The corresponding experimental results show the proposed encoding and decoding algorithms can provide a better performance compared with the existing algorithms.

  3. On the Interpretation of Top Partners Searches

    CERN Document Server

    Matsedonskyi, Oleksii; Wulzer, Andrea

    2014-01-01

    Relatively light Top Partners are unmistakable signatures of reasonably Natural Composite Higgs models and as such they are worth searching for at the LHC. Their phenomenology is characterized by a certain amount of model-dependence, which makes the interpretation of Top Partner experimental searches not completely straightforward especially if one is willing to take also single production into account. We describe a model-independent strategy by which the interpretation is provided on the parameter space of a Simplified Model that captures the relevant features of all the explicit constructions. The Simplified Model limits are easy to interpret within explicit models, in a way that requires no recasting and no knowledge of the experimental details of the analyses. We illustrate the method by concrete examples, among which the searches for a charge 5/3 Partner in same-sign dileptons and the searches for a charge 2/3 singlet. In each case we perform a theory recasting of the available 8 TeV Run-1 results and a...

  4. Optimal Search for an Astrophysical Gravitational-Wave Background

    Science.gov (United States)

    Smith, Rory; Thrane, Eric

    2018-04-01

    Roughly every 2-10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals) for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both "safe" and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources), simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.

  5. Application of evidence-based dentistry: from research to clinical periodontal practice.

    Science.gov (United States)

    Kwok, Vivien; Caton, Jack G; Polson, Alan M; Hunter, Paul G

    2012-06-01

    Dentists need to make daily decisions regarding patient care, and these decisions should essentially be scientifically sound. Evidence-based dentistry is meant to empower clinicians to provide the most contemporary treatment. The benefits of applying the evidence-based method in clinical practice include application of the most updated treatment and stronger reasoning to justify the treatment. A vast amount of information is readily accessible with today's digital technology, and a standardized search protocol can be developed to ensure that a literature search is valid, specific and repeatable. It involves developing a preset question (population, intervention, comparison and outcome; PICO) and search protocol. It is usually used academically to perform commissioned reviews, but it can also be applied to answer simple clinical queries. The scientific evidence thus obtained can then be considered along with patient preferences and values, clinical patient circumstances and the practitioner's experience and judgment in order to make the treatment decision. This paper describes how clinicians can incorporate evidence-based methods into patient care and presents a clinical example to illustrate the process. © 2012 John Wiley & Sons A/S.

  6. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    Science.gov (United States)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of Open Source libraries.
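
    To make the keyword/spatial/temporal indexing concrete, the sketch below issues the kind of request a search client might send to Solr's standard /select endpoint, with a keyword query and range filter queries. The core name and field names (metadata, temporal_start, spatial_north) are hypothetical, not the NSIDC/ACADIS schema.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical Solr core and schema fields
SOLR_SELECT = "http://localhost:8983/solr/metadata/select"

params = {
    "q": "sea ice extent",                               # keyword query
    "fq": [
        "temporal_start:[2007-01-01T00:00:00Z TO *]",    # temporal range filter
        "spatial_north:[60.0 TO 90.0]",                  # crude bounding-box filter
    ],
    "rows": 20,
    "wt": "json",
}

url = SOLR_SELECT + "?" + urllib.parse.urlencode(params, doseq=True)
with urllib.request.urlopen(url) as resp:
    docs = json.load(resp)["response"]["docs"]

for doc in docs:
    print(doc.get("id"), doc.get("title"))
```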

  7. Inclusive search for b → sg

    International Nuclear Information System (INIS)

    1998-07-01

    The authors describe an inclusive search for flavor changing neutral current decays of the type b → s gluon in the SLD experiment. Models of b → sg indicate that the production of high momentum kaons is enhanced over background from standard B decays. If the branching ratio for b → sg is ∼ 10%, then such an enhancement should have a good signal to background ratio. The analysis makes use of the particle identification and high precision vertexing capabilities of SLD to search for such an enhancement. The data sample consists of 300K hadronic Z0 decays collected between 1993 and 1997

  8. Library Search Prefilters for Vehicle Manufacturers to Assist in the Forensic Examination of Automotive Paints.

    Science.gov (United States)

    Lavine, Barry K; White, Collin G; Ding, Tao

    2018-03-01

    Pattern recognition techniques have been applied to the infrared (IR) spectral libraries of the Paint Data Query (PDQ) database to differentiate between nonidentical but similar IR spectra of automotive paints. To tackle the problem of library searching, search prefilters were developed to identify the vehicle make from IR spectra of the clear coat, surfacer-primer, and e-coat layers. To develop these search prefilters with the appropriate degree of accuracy, IR spectra from the PDQ database were preprocessed using the discrete wavelet transform to enhance subtle but significant features in the IR spectral data. Wavelet coefficients characteristic of vehicle make were identified using a genetic algorithm for pattern recognition and feature selection. Search prefilters to identify automotive manufacturer through IR spectra obtained from a paint chip recovered at a crime scene were developed using 1596 original manufacturer's paint systems spanning six makes (General Motors, Chrysler, Ford, Honda, Nissan, and Toyota) within a limited production year range (2000-2006). Search prefilters for vehicle manufacturer that were developed as part of this study were successfully validated using IR spectra obtained directly from the PDQ database. Information obtained from these search prefilters can serve to quantify the discrimination power of original automotive paint encountered in casework and further efforts to succinctly communicate trace evidential significance to the courts.
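
    The preprocessing idea (discrete wavelet transform of an IR spectrum, then a classifier on the coefficients) can be sketched generically. The example below is not the PDQ pipeline: it omits the genetic-algorithm feature selection, uses synthetic "spectra" for two hypothetical vehicle makes, and assumes PyWavelets and scikit-learn are available.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(spectrum, wavelet="db4", level=4):
    """Concatenate DWT coefficients of an IR spectrum into one feature vector."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return np.concatenate(coeffs)

# Placeholder data: synthetic "spectra" for two vehicle makes
rng = np.random.default_rng(1)
spectra_a = rng.normal(0.0, 1.0, (40, 512)) + np.sin(np.linspace(0, 8, 512))
spectra_b = rng.normal(0.0, 1.0, (40, 512)) + np.cos(np.linspace(0, 8, 512))
X = np.array([wavelet_features(s) for s in np.vstack([spectra_a, spectra_b])])
y = np.array(["make_A"] * 40 + ["make_B"] * 40)

# Simple classifier standing in for the GA-selected pattern recognition model
clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```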

  9. Searching for Information Online: Using Big Data to Identify the Concerns of Potential Army Recruits

    Science.gov (United States)

    2016-01-01

    ...software. For instance, Internet search engines such as Google or Yahoo! often gather anonymized data regarding the topics that people search for, as well as the date and... suggesting that these and other information needs may be further reflected in usage of online search engines. Google makes aggregated and anonymized...

  10. MINOS Sterile Neutrino Search

    Energy Technology Data Exchange (ETDEWEB)

    Koskinen, David Jason [Univ. College London, Bloomsbury (United Kingdom)

    2009-02-01

    The Main Injector Neutrino Oscillation Search (MINOS) is a long-baseline accelerator neutrino experiment designed to measure properties of neutrino oscillation. Using a high intensity muon neutrino beam, produced by the Neutrinos at Main Injector (NuMI) complex at Fermilab, MINOS makes two measurements of neutrino interactions. The first measurement is made using the Near Detector situated at Fermilab and the second is made using the Far Detector located in the Soudan Underground laboratory in northern Minnesota. The primary goal of MINOS is to verify, and measure the properties of, neutrino oscillation between the two detectors using the νμ → ντ transition. A complementary measurement can be made to search for the existence of sterile neutrinos: an oft-theorized but experimentally unvalidated particle. The following thesis will show the results of a sterile neutrino search using MINOS RunI and RunII data totaling ~2.5 × 10^20 protons on target. Due to the theoretical nature of sterile neutrinos, complete formalism that covers transition probabilities for the three known active states with the addition of a sterile state is also presented.

  11. How to Implement a New Search System and Make Friends While Doing It!

    Science.gov (United States)

    Powers, Michelle M.; Contreras, Rhonda

    2013-01-01

    What could bring more change to an online library than going from a federated to a discovery search system? Not much! This paper will share what was learned about initiating and implementing an important change that ultimately impacts a variety of stakeholders, and the importance of collaboration between library personnel at all levels of an…

  12. Searching Online Chemical Data Repositories via the ChemAgora Portal.

    Science.gov (United States)

    Zanzi, Antonella; Wittwehr, Clemens

    2017-12-26

    ChemAgora, a web application designed and developed in the context of the "Data Infrastructure for Chemical Safety Assessment" (diXa) project, provides search capabilities for chemical data from resources available online, enabling users to cross-reference their search results with both regulatory chemical information and public chemical databases. ChemAgora, through an on-the-fly search, informs whether a chemical is known or not in each of the external data sources and provides clickable links leading to the third-party web site pages containing the information. The original purpose of the ChemAgora application was to correlate studies stored in the diXa data warehouse with available chemical data. Since the end of the diXa project, ChemAgora has evolved into an independent portal, currently accessible directly through the ChemAgora home page, with improved search capabilities across online data sources.

  13. Make Yourself At Home! Adolescents in Search of the Queer Spaces of Home

    Directory of Open Access Journals (Sweden)

    Kokkola, Lydia

    2014-09-01

    Full Text Available Home is often assumed to be a safe place, a place to which children can return after their adventures Away. For many gay and lesbian teens, both fictional and in real life, however, the space they share with their family of origin is not a place where they can feel at home. The heterosexual family home is often so hostile to queerly desiring teens that they are forced to leave in search of a place where they can feel at home. The queer spaces they enter in their search are usually considered risky spaces – public spaces, urban spaces, the bar and the street – unhomely spaces. In these temporary, in-between spaces, the queerly desiring teens in the novels examined in this paper form new family structures. Although all the Anglophone novels discussed in this article end on moments of uplift and hope for the future, the association of the queerly desiring youngster with risky spaces suggests that the queer teens are themselves unheimlich (uncanny).

  14. Enabling Searches on Wavelengths in a Hyperspectral Indices Database

    Science.gov (United States)

    Piñuela, F.; Cerra, D.; Müller, R.

    2017-10-01

    Spectral indices derived from hyperspectral reflectance measurements are powerful tools to estimate physical parameters in a non-destructive and precise way for several fields of application, among them vegetation health analysis, coastal and deep water constituents, geology, and atmosphere composition. In recent years, several micro-hyperspectral sensors have appeared, with both full-frame and push-broom acquisition technologies, while in the near future several hyperspectral spaceborne missions are planned to be launched. This is fostering the use of hyperspectral data in basic and applied research, causing a large number of spectral indices to be defined and used in various applications. Ad hoc search engines are therefore needed to retrieve the most appropriate indices for a given application. In traditional systems, query input parameters are limited to alphanumeric strings, while characteristics such as spectral range/bandwidth are not used in any existing search engine. Such information would be relevant, as it enables an inverse type of search: given the spectral capabilities of a given sensor or a specific spectral band, find all indices which can be derived from it. This paper describes a tool which enables such a search, by using the central wavelength or spectral range used by a given index as a search parameter. This offers the ability to manage numeric wavelength ranges in order to select indices which work best in a given set of wavelengths or wavelength ranges.
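
    The inverse search itself is a simple coverage test: given a sensor's spectral ranges, return the indices whose required wavelengths are all measurable. The sketch below illustrates this; the index definitions use approximate, illustrative band centres rather than entries from the actual database, and the sensor range is hypothetical.

```python
# Each index lists the central wavelengths (nm) it needs; values are approximate.
INDICES = {
    "NDVI": [660.0, 860.0],     # red, near-infrared
    "NDWI": [860.0, 1240.0],    # near-infrared, shortwave-infrared
    "PRI":  [531.0, 570.0],     # photochemical reflectance index bands
}

def covered(wavelength, sensor_ranges):
    """True if a wavelength falls inside any of the sensor's spectral ranges."""
    return any(lo <= wavelength <= hi for lo, hi in sensor_ranges)

def indices_for_sensor(sensor_ranges):
    """Return indices whose required wavelengths are all measurable by the sensor."""
    return [name for name, bands in INDICES.items()
            if all(covered(w, sensor_ranges) for w in bands)]

# Hypothetical VNIR micro-hyperspectral sensor covering 400-1000 nm
print(indices_for_sensor([(400.0, 1000.0)]))   # -> ['NDVI', 'PRI']
```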

  15. EIA application in China's expressway infrastructure: clarifying the decision-making hierarchy.

    Science.gov (United States)

    Zhou, Kai-Yi; Sheate, William R

    2011-06-01

    China's EIA Law came into effect in 2003 and formally requires road transport infrastructure development actions to be subject to Environmental Impact Assessment (EIA). EIAs (including project EIA and plan EIA, or strategic environmental impact assessment, SEA) have been widely applied in the expressway infrastructure planning field. Among those applications, SEA is applied to provincial level expressway network (PLEI) plans, and project EIA is applied to expressway infrastructure development 'projects' under PLEI plans. Three case studies (one expressway project EIA and two PLEI plan SEAs) were examined to understand how EIAs are currently applied to expressway infrastructure development planning. Through the studies, a number of problems that significantly influence the quality of EIA application in the field were identified. The reasons for those problems are analyzed, and possible solutions aimed at enhancing EIA practice are suggested, helping deliver better decision-making and ultimately improving the environmental performance of expressway infrastructure. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. The "Make or Buy" Decision: Five Main Points to Consider

    Science.gov (United States)

    Archer, Mary Ann E.

    1978-01-01

    Five points which should be considered when making decisions about whether to purchase magnetic tapes for in-house searching by batch processing, purchase terminals and contract with on-line vendors, or contract with information brokers for retrospective searching or SDI are availability of information in the most useful form, hardware and…

  17. WWER core pattern enhancement using adaptive improved harmony search

    International Nuclear Information System (INIS)

    Nazari, T.; Aghaie, M.; Zolfaghari, A.; Minuchehr, A.; Norouzi, A.

    2013-01-01

    Highlights: ► The classical and improved harmony search algorithms are introduced. ► The advantage of IHS is demonstrated on Shekel's Foxholes. ► CHS and IHS are compared with other heuristic algorithms. ► The adaptive improved harmony search is applied to two cases. ► Two WWER core cases are optimized for the BOC FA pattern. - Abstract: The efficient operation and fuel management of PWRs are of utmost importance. Core performance analysis constitutes an essential phase in core fuel management optimization. Finding an optimum core arrangement for loading of fuel assemblies (FAs) in a nuclear core is a complex problem. This paper describes the application of classical harmony search (HS) and adaptive improved harmony search (IHS) to loading pattern (LP) design for pressurized water reactors. The main objective is to find the core pattern that attains the maximum multiplication factor, k_eff, while respecting the maximum allowable power peaking factors (PPF). An HS-based LP optimization code was therefore prepared, and the CITATION neutronic calculation code was applied to obtain the effective multiplication factor, neutron fluxes and power density in the candidate cores. The resulting LP optimization code, combining adaptive improved harmony search with the neutronic code, is applicable to PWR cores with large numbers of FAs. As a first step, the efficiencies of HS and IHS are compared with those of some other heuristic algorithms on the Shekel's Foxholes problem, and the capability of the adaptive improved harmony search is demonstrated; the results show the efficient application of IHS. As a second step, two WWER cases are studied, and IHS yields improved core patterns with regard to the stated objective functions.
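
    For readers unfamiliar with harmony search, the loop is small enough to sketch. The example below is the classical algorithm applied to a toy continuous test function; in the loading-pattern application the objective would instead be a neutronics evaluation (e.g. a CITATION run) over a discrete encoding of fuel-assembly positions, which is not reproduced here. Parameter names follow common HS conventions (HMS, HMCR, PAR, bandwidth).

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=2000):
    """Classical harmony search minimizing `objective` over box `bounds`."""
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                       # memory consideration
                value = random.choice(memory)[j]
                if random.random() < par:                    # pitch adjustment
                    value += random.uniform(-bandwidth, bandwidth) * (hi - lo)
            else:                                            # random selection
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        score = objective(new)
        if score < scores[worst]:                            # replace worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy objective: sphere function (minimum 0 at the origin)
best_x, best_f = harmony_search(lambda x: sum(v * v for v in x),
                                bounds=[(-5.0, 5.0)] * 4)
print(best_f)
```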

  18. Optimal search behavior and classic foraging theory

    International Nuclear Information System (INIS)

    Bartumeus, F; Catalan, J

    2009-01-01

    Random walk methods and diffusion theory pervaded ecological sciences as methods to analyze and describe animal movement. Consequently, statistical physics was mostly seen as a toolbox rather than as a conceptual framework that could contribute to theory on evolutionary biology and ecology. However, the existence of mechanistic relationships and feedbacks between behavioral processes and statistical patterns of movement suggests that, beyond movement quantification, statistical physics may prove to be an adequate framework to understand animal behavior across scales from an ecological and evolutionary perspective. Recently developed random search theory has served to critically re-evaluate classic ecological questions on animal foraging. For instance, during the last few years, there has been a growing debate on whether search behavior can include traits that improve success by optimizing random (stochastic) searches. Here, we stress the need to bring together the general encounter problem within foraging theory, as a means of making progress in the biological understanding of random searching. By sketching the assumptions of optimal foraging theory (OFT) and by summarizing recent results on random search strategies, we pinpoint ways to extend classic OFT, and integrate the study of search strategies and its main results into the more general theory of optimal foraging.

  19. Search for macroscopic dark matter in the halo of the milky way through microlensing. A feasibility study

    International Nuclear Information System (INIS)

    Moniez, M.

    1990-05-01

    The possibility of searching for non-visible massive compact objects in the galactic halo is discussed here. The discovery of such objects would solve the problem of the missing mass in the galaxies, and the experiments which investigate for weakly interacting particles assuming a diffuse cloud of dark matter would have to revise their limits. The non-discovery of these objects would exclude the last possibility left for baryonic dark matter, providing good evidence that the galactic halo has to be made of new particles. The description of the general-relativistic microlensing effect and its application to the search of massive compact objects are given here. A feasibility study shows that it is possible to monitor the luminosity of several million stars in the Large Magellanic Cloud with the required precision, in order to detect a possible microlensing phenomenon induced by heavy compact objects (10^-4 - 10^-1 solar mass units). A CCD-based experimental setup is described, which would make it possible to search for compact objects in the 10^-6 - 10^-4 solar mass unit domain

  20. Application of Static Var Compensator (SVC) With PI Controller for Grid Integration of Wind Farm Using Harmony Search

    Science.gov (United States)

    Keshta, H. E.; Ali, A. A.; Saied, E. M.; Bendary, F. M.

    2016-10-01

    Large-scale integration of wind turbine generators (WTGs) may have significant impacts on power system operation with respect to system frequency and bus voltages. This paper studies the effect of a Static Var Compensator (SVC) connected to a wind energy conversion system (WECS) on the voltage profile and the power generated by the induction generator (IG) in a wind farm. The paper also presents dynamic reactive power compensation using an SVC at the point of interconnection of the wind farm in cases where static compensation (a fixed capacitor bank) is unable to prevent voltage collapse. Moreover, it shows that advanced optimization techniques based on artificial intelligence (AI), such as the Harmony Search Algorithm (HS) and the Self-Adaptive Global Harmony Search Algorithm (SGHS), can be used instead of a conventional control method to tune the parameters of the PI controllers for the SVC and the pitch angle, and it illustrates that the performance of the system with AI-based controllers is improved under different operating conditions. MATLAB/Simulink-based simulation is used to demonstrate the application of the SVC in wind farm integration, and to investigate the enhancement in WECS performance achieved with a PI controller tuned by the Harmony Search Algorithm compared with a conventional control method.

  1. Assessing the performance of methodological search filters to improve the efficiency of evidence information retrieval: five literature reviews and a qualitative study.

    Science.gov (United States)

    Lefebvre, Carol; Glanville, Julie; Beale, Sophie; Boachie, Charles; Duffy, Steven; Fraser, Cynthia; Harbour, Jenny; McCool, Rachael; Smith, Lynne

    2017-11-01

    Effective study identification is essential for conducting health research, developing clinical guidance and health policy and supporting health-care decision-making. Methodological search filters (combinations of search terms to capture a specific study design) can assist in searching to achieve this. This project investigated the methods used to assess the performance of methodological search filters, the information that searchers require when choosing search filters and how that information could be better provided. Five literature reviews were undertaken in 2010/11: search filter development and testing; comparison of search filters; decision-making in choosing search filters; diagnostic test accuracy (DTA) study methods; and decision-making in choosing diagnostic tests. We conducted interviews and a questionnaire with experienced searchers to learn what information assists in the choice of search filters and how filters are used. These investigations informed the development of various approaches to gathering and reporting search filter performance data. We acknowledge that there has been a regrettable delay between carrying out the project, including the searches, and the publication of this report, because of serious illness of the principal investigator. The development of filters most frequently involved using a reference standard derived from hand-searching journals. Most filters were validated internally only. Reporting of methods was generally poor. Sensitivity, precision and specificity were the most commonly reported performance measures and were presented in tables. Aspects of DTA study methods are applicable to search filters, particularly in the development of the reference standard. There is limited evidence on how clinicians choose between diagnostic tests. No published literature was found on how searchers select filters. Interviewing and questioning searchers via a questionnaire found that filters were not appropriate for all tasks but were

  2. The application of a selection of decision-making techniques by employees in a transport work environment in conjunction with their perceived decision-making success and practice

    Directory of Open Access Journals (Sweden)

    Theuns F.J. Oosthuizen

    2014-03-01

    Full Text Available A lack of optimum selection and application of decision-making techniques, in conjunction with suitable decision-making practice and the perceptions of employees in a transport work environment, demands attention to improve overall performance. Although multiple decision-making techniques exist, five prevalent techniques were considered in this article, namely the Kepner-Tregoe, Delphi, stepladder, nominal group and brainstorming techniques. A descriptive research design was followed, using an empirical survey which was conducted among 210 workers employed in a transport work environment and studying in the field of transport management. The purpose was to establish to what extent the five decision-making techniques are used in their work environment and furthermore how the decision-making practice of using gut-feel and/or a step-by-step decision-making process and their perception of their decision-making success relate. The research confirmed that the use of decision-making techniques is correlated with perceived decision-making success. Furthermore, the Kepner-Tregoe, stepladder, Delphi and brainstorming techniques are associated with a step-by-step decision-making process. No significant association was confirmed between the use of gut-feel and decision-making techniques. Brainstorming was found to be the technique most frequently used by transport employees; however, it has limitations as a comprehensive decision-making technique. Employees working in a transport work environment need training in order to select and use the four comprehensive decision-making techniques.

  3. Searching for preeclampsia genes : the current position

    NARCIS (Netherlands)

    Lachmeijer, AMA; Dekker, GA; Pals, G; Aarnoudse, JG; ten Kate, LP; Arngrimsson, R

    2002-01-01

    Although there is substantial evidence that preeclampsia has a genetic background, the complexity of the processes involved and the fact that preeclampsia is a maternal-fetal phenomenon does not make the search for the molecular basis of preeclampsia genes easy. It is possible that the single

  4. 'Meatball searching' - The adversarial approach to online information retrieval

    Science.gov (United States)

    Jack, R. F.

    1985-01-01

    It is proposed that the different styles of online searching can be described as either formal (highly precise) or informal, with the needs of the client dictating which is most applicable at a particular moment. The background and personality of the searcher also come into play. Particular attention is focused on meatball searching, which is a form of online searching characterized by deliberate vagueness. It requires generally comprehensive searches, often on unusual topics and with tight deadlines. It is most likely to occur in search centers serving many different disciplines and levels of client information sophistication. Various information needs are outlined as well as the laws of meatball searching and the adversarial approach. Traits and characteristics important to successful searching include: (1) concept analysis, (2) flexibility of thinking, (3) ability to think in synonyms and (4) anticipation of variant word forms and spellings.

  5. Manipulating Google’s Knowledge Graph Box to Counter Biased Information Processing During an Online Search on Vaccination: Application of a Technological Debiasing Strategy

    Science.gov (United States)

    Allam, Ahmed; Schulz, Peter J

    2016-01-01

    Background One of people’s major motives for going online is the search for health-related information. Most consumers start their search with a general search engine but are unaware of the fact that its sorting and ranking criteria do not mirror information quality. This misconception can lead to distorted search outcomes, especially when the information processing is characterized by heuristic principles and resulting cognitive biases instead of a systematic elaboration. As vaccination opponents are vocal on the Web, the chance of encountering their non-evidence-based views on immunization is high. Therefore, biased information processing in this context can cause subsequent impaired judgment and decision making. A technological debiasing strategy could counter this by changing people’s search environment. Objective This study aims at testing a technological debiasing strategy to reduce the negative effects of biased information processing when using a general search engine on people’s vaccination-related knowledge and attitudes. This strategy is to manipulate the content of Google’s knowledge graph box, which is integrated in the search interface and provides basic information about the search topic. Methods A full 3×2 factorial, posttest-only design was employed with availability of basic factual information (comprehensible vs hardly comprehensible vs not present) as the first factor and a warning message as the second factor of experimental manipulation. Outcome variables were the evaluation of the knowledge graph box, vaccination-related knowledge, as well as beliefs and attitudes toward vaccination, as represented by three latent variables that emerged from an exploratory factor analysis. Results Two-way analysis of variance revealed a significant main effect of availability of basic information in the knowledge graph box on participants’ vaccination knowledge scores (F(2,273)=4.86, P=.01), skepticism/fear of vaccination side effects (F(2,273)=3.5, P=.03

  6. Adult patient decision-making regarding implantation of complex cardiac devices: a scoping review.

    Science.gov (United States)

    Malecki-Ketchell, Alison; Marshall, Paul; Maclean, Joan

    2017-10-01

    Complex cardiac rhythm management device (CRMD) therapy provides an important treatment option for people at risk of sudden cardiac death. Despite the survival benefit, device implantation is associated with significant physical and psychosocial concerns, presenting considerable challenges for the decision-making process surrounding CRMD implantation for patients and physicians. The purpose of this scoping review was to explore what is known about how adult (>16 years) patients make decisions regarding implantation of CRMD therapy. Published, peer reviewed, English language studies from 2000 to 2016 were identified in a search across eight healthcare databases. Eligible studies were concerned with patient decision-making for first time device implantation. Quality assessment was completed using the mixed methods appraisal tool for all studies meeting the inclusion criteria. The findings of eight qualitative and seven quantitative studies, including patients who accepted or declined primary or secondary sudden cardiac death prevention devices, were clustered into two themes: knowledge acquisition and the process of decision-making, exposing similarities and distinctions with the treatment decision-making literature. The review revealed some insight into the way patients approach decision-making but also exposed a lack of clarity and research activity specific to CRMD patients. Further research is recommended to support the development and application of targeted decision support mechanisms.

  7. Exploration and exploitation during information search and experiential choice

    Directory of Open Access Journals (Sweden)

    Cleotilde Gonzalez

    2016-07-01

    Full Text Available Before making a choice, we often search and explore the options available. For example, we try clothes on before selecting the one to buy, and we search for career options before deciding on a career to pursue. Although the exploration process, in which one is free to sample available options, is pervasive, we know little about how and why humans explore an environment before making choices. This research contributes to the clarification of some of the phenomena that describe how people perform search during free sampling: we find a gradual decrease of exploration and, in parallel, a tendency to explore and choose options of high value. These patterns provide support for the existence of learning and an exploration-exploitation tradeoff that may occur during free sampling. Thus, exploration in free sampling is not driven purely by the epistemic value of the available options. Rather, exploration during free sampling is a learning process that is influenced by memory effects and by the value of the options available, where participants pursue options of high value more frequently. These parallel processes predict the consequential choice.

  8. Introduction to Search with Sphinx From installation to relevance tuning

    CERN Document Server

    Aksyonoff, Andrew

    2011-01-01

    This concise introduction to Sphinx shows you how to use this free software to index an enormous number of documents and provide fast results to both simple and complex searches. Written by the creator of Sphinx, this authoritative book is short and to the point. Topics covered: understand the particular way Sphinx conducts searches; install and configure Sphinx, and run a few basic tests; issue basic queries to Sphinx at the application level; learn the syntax of search text and the effects of various search options; get strategies for dealing with large data sets, such as multi-index searching; apply relevance and r...
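
    At the application level, Sphinx's searchd exposes an SQL-like interface (SphinxQL) over the MySQL wire protocol, listening on port 9306 by default, so a basic full-text query can be issued with any MySQL client library. The sketch below assumes a locally running searchd and pymysql being installed; the index name (articles) and its fields are hypothetical.

```python
import pymysql  # any MySQL-protocol client will do

# Connect to the SphinxQL listener (default port 9306); credentials are ignored.
conn = pymysql.connect(host="127.0.0.1", port=9306, user="", password="")
try:
    with conn.cursor() as cur:
        # Full-text match against a hypothetical 'articles' index,
        # ranked by Sphinx's relevance weight.
        cur.execute(
            "SELECT id, WEIGHT() AS w FROM articles "
            "WHERE MATCH(%s) ORDER BY w DESC LIMIT 10",
            ("search engine relevance",),
        )
        for doc_id, weight in cur.fetchall():
            print(doc_id, weight)
finally:
    conn.close()
```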

  9. Solving Large Clustering Problems with Meta-Heuristic Search

    DEFF Research Database (Denmark)

    Turkensteen, Marcel; Andersen, Kim Allan; Bang-Jensen, Jørgen

    In Clustering Problems, groups of similar subjects are to be retrieved from data sets. In this paper, Clustering Problems with the frequently used Minimum Sum-of-Squares Criterion are solved using meta-heuristic search. Tabu search has proved to be a successful methodology for solving optimization problems, but applications to large clustering problems are rare. The simulated annealing heuristic has mainly been applied to relatively small instances. In this paper, we implement tabu search and simulated annealing approaches and compare them to the commonly used k-means approach. We find that the meta-heuristic
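
    As a compact stand-in for the paper's meta-heuristics (the actual tabu search implementation is more involved), the sketch below applies a simulated-annealing reassignment heuristic directly to the minimum sum-of-squares objective on synthetic two-dimensional data; all parameter values are illustrative.

```python
import math
import random
import numpy as np

def sse(X, labels, k):
    """Minimum sum-of-squares objective: within-cluster squared distance to centroids."""
    total = 0.0
    for c in range(k):
        pts = X[labels == c]
        if len(pts):
            total += ((pts - pts.mean(axis=0)) ** 2).sum()
    return total

def anneal_clustering(X, k, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Simulated annealing over single-point cluster reassignments."""
    rnd = random.Random(seed)
    labels = np.array([rnd.randrange(k) for _ in range(len(X))])
    cost, t = sse(X, labels, k), t0
    for _ in range(steps):
        i, new_c = rnd.randrange(len(X)), rnd.randrange(k)
        if new_c == labels[i]:
            continue
        old_c = labels[i]
        labels[i] = new_c
        new_cost = sse(X, labels, k)
        # Accept improvements always, deteriorations with Boltzmann probability
        if new_cost <= cost or rnd.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
        else:
            labels[i] = old_c
        t *= cooling
    return labels, cost

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0, 6.0)])
labels, cost = anneal_clustering(X, k=3)
print("final sum-of-squares:", round(cost, 2))
```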

  10. The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching.

    Science.gov (United States)

    Haddaway, Neal Robert; Collins, Alexandra Mary; Coughlin, Deborah; Kirk, Stuart

    2015-01-01

    Google Scholar (GS), a commonly used web-based academic search engine, catalogues between 2 and 100 million records of both academic and grey literature (articles not formally published by commercial academic publishers). Google Scholar collates results from across the internet and is free to use. As a result it has received considerable attention as a method for searching for literature, particularly in searches for grey literature, as required by systematic reviews. The reliance on GS as a standalone resource has been greatly debated, however, and its efficacy in grey literature searching has not yet been investigated. Using systematic review case studies from environmental science, we investigated the utility of GS in systematic reviews and in searches for grey literature. Our findings show that GS results contain moderate amounts of grey literature, with the majority found on average at page 80. We also found that, when searched for specifically, the majority of literature identified using Web of Science was also found using GS. However, our findings showed moderate/poor overlap in results when similar search strings were used in Web of Science and GS (10-67%), and that GS missed some important literature in five of six case studies. Furthermore, a general GS search failed to find any grey literature from a case study that involved manual searching of organisations' websites. If used in systematic reviews for grey literature, we recommend that searches of article titles focus on the first 200 to 300 results. We conclude that whilst Google Scholar can find much grey literature and specific, known studies, it should not be used alone for systematic review searches. Rather, it forms a powerful addition to other traditional search methods. In addition, we advocate the use of tools to transparently document and catalogue GS search results to maintain high levels of transparency and the ability to be updated, critical to systematic reviews.

  11. Expertise in complex decision making: the role of search in chess 70 years after de Groot.

    Science.gov (United States)

    Connors, Michael H; Burns, Bruce D; Campitelli, Guillermo

    2011-01-01

    One of the most influential studies in all expertise research is de Groot's (1946) study of chess players, which suggested that pattern recognition, rather than search, was the key determinant of expertise. Many changes have occurred in the chess world since de Groot's study, leading some authors to argue that the cognitive mechanisms underlying expertise have also changed. We decided to replicate de Groot's study to empirically test these claims and to examine whether the trends in the data have changed over time. Six Grandmasters, five International Masters, six Experts, and five Class A players completed the think-aloud procedure for two chess positions. Findings indicate that Grandmasters and International Masters search more quickly than Experts and Class A players, and that both groups today search substantially faster than players in previous studies. The findings, however, support de Groot's overall conclusions and are consistent with predictions made by pattern recognition models. Copyright © 2011 Cognitive Science Society, Inc.

  12. Making Statistical Data More Easily Accessible on the Web Results of the StatSearch Case Study

    CERN Document Server

    Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M

    2005-01-01

    In this paper we present the results of the StatSearch case study that aimed at providing an enhanced access to statistical data available on the Web. In the scope of this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques exploiting the hierarchical structuring of the available data. This tool enables a better control of the information retrieval, improving the quality and ease of the access to statistical information. The central part of the presented StatSearch tool consists in the design of an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation, or, alternatively, are the support for the automated navigation process. Several appro...

  13. Description and search labor for information retrieval

    OpenAIRE

    Warner, Julian

    2007-01-01

    Selection power is taken as the fundamental value for information retrieval systems. Selection power is regarded as produced by selection labor, which itself separates historically into description and search labor. As forms of mental labor, description and search labor participate in the conditions for labor and for mental labor. Concepts and distinctions applicable to physical and mental labor are indicated, introducing the necessity of labor for survival, the idea of technology as a human ...

  14. Generating crop calendars with Web search data

    International Nuclear Information System (INIS)

    Van der Velde, Marijn; See, Linda; Fritz, Steffen; Khabarov, Nikolay; Obersteiner, Michael; Verheijen, Frank G A

    2012-01-01

    This paper demonstrates the potential of using Web search volumes for generating crop specific planting and harvesting dates in the USA integrating climatic, social and technological factors affecting crop calendars. Using Google Insights for Search, clear peaks in volume occur at times of planting and harvest at the national level, which were used to derive corn specific planting and harvesting dates at a weekly resolution. Disaggregated to state level, search volumes for corn planting generally are in agreement with planting dates from a global crop calendar dataset. However, harvest dates were less discriminatory at the state level, indicating that peaks in search volume may be blurred by broader searches on harvest as a time of cultural events. The timing of other agricultural activities such as purchase of seed and response to weed and pest infestation was also investigated. These results highlight the future potential of using Web search data to derive planting dates in countries where the data are sparse or unreliable, once sufficient search volumes are realized, as well as the potential for monitoring in real time the response of farmers to climate change over the coming decades. Other potential applications of search volume data of relevance to agronomy are also discussed. (letter)
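
    A rough illustration of reading planting and harvest dates off weekly search-volume peaks is sketched below in Python; the window size and the input series are assumptions of this sketch, not the paper's actual Google Insights for Search processing.

        def local_peaks(volume, window=4):
            # Indices of weeks whose value is the unique maximum within +/- `window` weeks,
            # a crude stand-in for picking planting/harvest peaks out of a weekly series.
            peaks = []
            for i, v in enumerate(volume):
                lo, hi = max(0, i - window), min(len(volume), i + window + 1)
                if v > 0 and v == max(volume[lo:hi]) and volume[lo:hi].count(v) == 1:
                    peaks.append(i)
            return peaks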

  15. Time analysis in astronomy: Tools for periodicity searches

    International Nuclear Information System (INIS)

    Buccheri, R.; Sacco, B.

    1985-01-01

    The authors discuss periodicity searches in radio and gamma-ray astronomy with special considerations for pulsar searches. The basic methodologies of fast Fourier transform, Rayleigh test, and epoch folding are reviewed with the main objective to compare cost and sensitivities in different applications. It is found that FFT procedures are convenient in unbiased searches for periodicity in radio astronomy, while in spark chamber gamma-ray astronomy, where the measurements are spread over a long integration time, unbiased searches are very difficult with the existing computing facilities and analyses with a-priori knowledge on the period values to look for are better done using the Rayleigh test with harmonics folding (Z_n test)
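
    For concreteness, a minimal Python sketch of the Rayleigh-type Z_n^2 statistic mentioned above is given below; it assumes a list of event arrival times and a single trial frequency, and it is not the authors' original analysis code. Scanning this statistic over a grid of trial frequencies gives an epoch-folding style periodicity search.

        import math

        def zn_squared(times, frequency, n_harmonics=2):
            # Z_n^2 statistic: the Rayleigh test (n_harmonics = 1) generalized to n harmonics.
            # times: event arrival times; frequency: trial frequency in matching units.
            phases = [2.0 * math.pi * ((t * frequency) % 1.0) for t in times]
            n_events = len(phases)
            total = 0.0
            for k in range(1, n_harmonics + 1):
                c = sum(math.cos(k * p) for p in phases)
                s = sum(math.sin(k * p) for p in phases)
                total += c * c + s * s
            return 2.0 * total / n_events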

  16. Visual search of Mooney faces

    Directory of Open Access Journals (Sweden)

    Jessica Emeline Goold

    2016-02-01

    Full Text Available Faces spontaneously capture attention. However, it is unclear which special attributes of a face underlie this effect. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating the rapidness of human observers to detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention towards a face. (2) Several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection. (3) By providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention.

  17. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  18. Dense Plasma Focus: A question in search of answers, a technology in search of applications

    International Nuclear Information System (INIS)

    Auluck, S.K.H.

    2014-01-01

    Diagnostic information accumulated over four decades of research suggests a directionality of toroidal motion for energetic ions responsible for fusion neutron production in the Dense Plasma Focus (DPF) and existence of an axial component of magnetic field even under conditions of azimuthal symmetry. This is at variance with the traditional view of Dense Plasma Focus as a purely irrotational compressive flow. The difficulty in understanding the experimental situation from a theoretical standpoint arises from polarity of the observed solenoidal state: three independent experiments confirm existence of a fixed polarity of the axial magnetic field or related azimuthal current. Since the equations governing plasma dynamics do not have a built-in direction, the fixed polarity must be related with initial conditions: the plasma dynamics must interact with an external physical vector in order to generate a solenoidal state of fixed polarity. Only four such external physical vectors can be identified: the earth's magnetic field, earth's angular momentum, direction of current flow and the direction of the plasma accelerator. How interaction of plasma dynamics with these fields can generate observed solenoidal state is a question still in search of answers; this paper outlines one possible answer. The importance of this question goes beyond scientific curiosity into technological uses of the energetic ions and the high-power-density plasma environment. However, commercial utilization of such technologies faces reliability concerns, which can be met only by first-principles integrated design of globally-optimized industrial-quality DPF hardware. Issues involved in the emergence of the Dense Plasma Focus as a technology platform for commercial applications in the not-too-distant future are discussed. (author)

  19. Application of 3D Zernike descriptors to shape-based ligand similarity searching

    Directory of Open Access Journals (Sweden)

    Venkatraman Vishwesh

    2009-12-01

    Full Text Available Abstract. Background: The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. Results: In this study, we have examined the efficacy of the alignment independent three-dimensional Zernike descriptor (3DZD) for fast shape based similarity searching. Performance of this approach was compared with several other methods including the statistical moments based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under the situation that simulates actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparable to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and the failure of the shape based methods for specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. Conclusion: The 3DZD has unique ability for fast comparison of three-dimensional shape of compounds. Examples analyzed illustrate the advantages and the room for improvements for the 3DZD.
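
    To make the comparison concrete, here is a hedged Python sketch of the USR-style comparator mentioned above (moments of atomic distances to four reference points); the exact moment conventions vary between USR variants, and this is not the 3DZD implementation evaluated in the paper.

        import math

        def _three_moments(dists):
            # First three moments of a distance distribution: mean, standard deviation,
            # and signed cube root of the third central moment (conventions vary).
            n = len(dists)
            mean = sum(dists) / n
            m2 = sum((d - mean) ** 2 for d in dists) / n
            m3 = sum((d - mean) ** 3 for d in dists) / n
            return [mean, math.sqrt(m2), math.copysign(abs(m3) ** (1.0 / 3.0), m3)]

        def usr_descriptor(coords):
            # USR-style shape descriptor: moments of distances to four reference points
            # (centroid; atom closest to it; atom farthest from it; atom farthest from
            # that one), giving a 12-number vector per molecule. coords: 3D tuples.
            n = len(coords)
            ctd = tuple(sum(c[i] for c in coords) / n for i in range(3))
            d_ctd = [math.dist(p, ctd) for p in coords]
            cst = coords[min(range(n), key=d_ctd.__getitem__)]
            fct = coords[max(range(n), key=d_ctd.__getitem__)]
            d_fct = [math.dist(p, fct) for p in coords]
            ftf = coords[max(range(n), key=d_fct.__getitem__)]
            descriptor = []
            for ref in (ctd, cst, fct, ftf):
                descriptor += _three_moments([math.dist(p, ref) for p in coords])
            return descriptor

        def usr_similarity(d1, d2):
            # Manhattan-based similarity in [0, 1] between two descriptors.
            return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(d1, d2)) / len(d1))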

  20. An Analysis of the Applicability of Federal Law Regarding Hash-Based Searches of Digital Media

    Science.gov (United States)

    2014-06-01

    Report keywords: similarity matching, Fourth Amendment, federal law, search and seizure, warrant search, consent search, border search. Text fragments from the report body: ...containing a white powdery substance labeled flour [53]. 3.3.17 United States v. Heckenkamp, 482 F.3d 1142 (9th Circuit 2007): people have a reasonable...

  1. What Major Search Engines Like Google, Yahoo and Bing Need to Know about Teachers in the UK?

    Science.gov (United States)

    Seyedarabi, Faezeh

    2014-01-01

    This article briefly outlines the current major search engines' approach to teachers' web searching. The aim of this article is to make Web searching easier for teachers when searching for relevant online teaching materials, in general, and UK teacher practitioners at primary, secondary and post-compulsory levels, in particular. Therefore, major…

  2. SEARCH FOR WORKERS AS SYMBOLIC CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Sergey Yurevich Alasheev

    2013-08-01

    Full Text Available The labour market is seen as a field of symbolic exchange where the main actors are employers and job applicants, whereas the objects of exchange are workplaces and professional competence of employees. The analysis is based on the observed behavioural practices and verbal expressions. An attempt has been made to consider the area of interaction between employers and jobseekers as a field of symbolic production and consumption and to describe methods of construction and perception of representations in the labour market. The analysis of several interviews has revealed significant characteristics of the image of an employee, the employer’s expectations and the specificity of perception of a job applicant. Search and recruitment is a communication process which forms an image of the profession. The use of various search channels imposes restrictions on the construction of the image of a required worker by the employer and determines the specificity of perception of the vacancy by job applicants. DOI: http://dx.doi.org/10.12731/2218-7405-2013-7-3

  3. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available The Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental in many graph processing applications and metrics when a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherent irregular memory access patterns. However, new parallel hardware could provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. Each core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary search algorithm for address mapping to unify all processor addresses. Within the limits permitted by the Graph500 test bench after 1D parallel hybrid BFS algorithm testing, our 8-core and 8-thread-per-core system achieved superior performance and efficiency compared with the prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
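
    For reference, the single-threaded BFS kernel that such systems parallelize can be sketched in a few lines of Python; this is a plain level-by-level traversal, not the FPGA-based hybrid 1D-parallel implementation described above.

        from collections import deque

        def bfs_levels(graph, source):
            # Level-synchronous breadth-first search over an adjacency list {node: [neighbours]}.
            # Returns the hop distance (BFS level) of every node reachable from `source`.
            level = {source: 0}
            frontier = deque([source])
            while frontier:
                u = frontier.popleft()
                for v in graph.get(u, ()):
                    if v not in level:
                        level[v] = level[u] + 1
                        frontier.append(v)
            return level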

  4. Mobile applications in children with cerebral palsy.

    Science.gov (United States)

    Rodríguez Mariblanca, M; Cano de la Cuerda, R

    2017-12-21

    Cerebral palsy (CP) is one of the most common developmental disorders. Technological development has enabled a transformation of the healthcare sector, which can offer more individualised, participatory, and preventive services. Within this context of new technology applied to the healthcare sector, mobile applications, or apps, constitute a very promising tool for the management of children with CP. The purpose of this article is to perform a systematic review of the information published about various mobile applications either directly related to CP or with potential to be useful in the context of the disease, and to describe, analyse, and classify these applications. A literature search was carried out to gather articles published in English or Spanish between 2011 and 2017 which presented, analysed, or validated applications either specifically designed or potentially useful for CP. Furthermore, a search for mobile applications was conducted in the main mobile application markets. A total of 63 applications were found in biomedical databases and mobile application markets, of which 40 were potentially useful for CP and 23 were specifically designed for the condition (11 for information, 3 for evaluation, and 9 for treatment). There are numerous mobile applications either specifically designed for or with potential to be useful in the field of CP. However, despite the existing scientific evidence, the low methodological quality of scientific articles makes it impossible to generalise the use of these tools. Copyright © 2017 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  5. Short-Term Wind Speed Forecasting Study and Its Application Using a Hybrid Model Optimized by Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2015-01-01

    Full Text Available The support vector regression (SVR and neural network (NN are both new tools from the artificial intelligence field, which have been successfully exploited to solve various problems especially for time series forecasting. However, traditional SVR and NN cannot accurately describe intricate time series with the characteristics of high volatility, nonstationarity, and nonlinearity, such as wind speed and electricity price time series. This study proposes an ensemble approach on the basis of 5-3 Hanning filter (5-3H and wavelet denoising (WD techniques, in conjunction with artificial intelligence optimization based SVR and NN model. So as to confirm the validity of the proposed model, two applicative case studies are conducted in terms of wind speed series from Gansu Province in China and electricity price from New South Wales in Australia. The computational results reveal that cuckoo search (CS outperforms both PSO and GA with respect to convergence and global searching capacity, and the proposed CS-based hybrid model is effective and feasible in generating more reliable and skillful forecasts.
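
    A generic cuckoo search sketch (in the Yang and Deb style, with Levy flights and nest abandonment) is given below in Python for orientation; the parameter values are placeholders, and this is not the paper's CS-optimized SVR/NN hybrid or its 5-3H/wavelet preprocessing.

        import math
        import random

        def cuckoo_search(objective, bounds, n_nests=15, pa=0.25, iters=500,
                          alpha=0.01, beta=1.5, seed=0):
            # Simplified cuckoo search minimizing `objective` over box bounds = [(lo, hi), ...].
            rng = random.Random(seed)
            # Mantegna's algorithm scale for Levy-stable step lengths.
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

            def clip(x, d):
                lo, hi = bounds[d]
                return min(max(x, lo), hi)

            def levy():
                u, v = rng.gauss(0.0, sigma), rng.gauss(0.0, 1.0)
                return u / abs(v) ** (1 / beta)

            nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
            scores = [objective(n) for n in nests]
            best = min(range(n_nests), key=scores.__getitem__)
            for _ in range(iters):
                for i in range(n_nests):  # Levy flight biased towards the current best nest
                    new = [clip(x + alpha * levy() * (x - nests[best][d]), d)
                           for d, x in enumerate(nests[i])]
                    s = objective(new)
                    if s < scores[i]:
                        nests[i], scores[i] = new, s
                for i in range(n_nests):  # abandon a fraction pa of the nests
                    if i != best and rng.random() < pa:
                        nests[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
                        scores[i] = objective(nests[i])
                best = min(range(n_nests), key=scores.__getitem__)
            return nests[best], scores[best]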

  6. Optimal Search for an Astrophysical Gravitational-Wave Background

    Directory of Open Access Journals (Sweden)

    Rory Smith

    2018-04-01

    Full Text Available Roughly every 2–10 min, a pair of stellar-mass black holes merge somewhere in the Universe. A small fraction of these mergers are detected as individually resolvable gravitational-wave events by advanced detectors such as LIGO and Virgo. The rest contribute to a stochastic background. We derive the statistically optimal search strategy (producing minimum credible intervals for a background of unresolved binaries. Our method applies Bayesian parameter estimation to all available data. Using Monte Carlo simulations, we demonstrate that the search is both “safe” and effective: it is not fooled by instrumental artifacts such as glitches and it recovers simulated stochastic signals without bias. Given realistic assumptions, we estimate that the search can detect the binary black hole background with about 1 day of design sensitivity data versus ≈40 months using the traditional cross-correlation search. This framework independently constrains the merger rate and black hole mass distribution, breaking a degeneracy present in the cross-correlation approach. The search provides a unified framework for population studies of compact binaries, which is cast in terms of hyperparameter estimation. We discuss a number of extensions and generalizations, including application to other sources (such as binary neutron stars and continuous-wave sources, simultaneous estimation of a continuous Gaussian background, and applications to pulsar timing.

  7. Searching for Single Pulses Using Heimdall

    Science.gov (United States)

    Walsh, Gregory; Lynch, Ryan

    2018-01-01

    In radio pulsar surveys, the interstellar medium causes a frequency dependent dispersive delay of a pulsed signal across the observing band. If not corrected, this delay substantially lowers S/N and makes most pulses undetectable. The delay is proportional to an unknown dispersion measure (DM), which must be searched over with many trial values. A number of new, GPU-accelerated codes are now available to optimize this dedispersion task, and to search for transient pulsed radio emission. We report on the use of Heimdall, one such GPU-accelerated tree dedispersion utility, to search for transient radio sources in a Green Bank Telescope survey of the Cygnus Region and North Galactic Plane. The survey is carried out at a central frequency of 820 MHz with a goal of finding Fast Radio Bursts, Rotating Radio Transients, young pulsars, and millisecond pulsars. We describe the survey, data processing pipeline, and follow-up of candidate sources.
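
    The dispersive delay and the brute-force (incoherent) dedispersion it motivates can be sketched as follows in Python; the constant and data layout are standard textbook assumptions, not Heimdall's GPU tree-dedispersion code. A search then repeats this over a grid of trial DM values and looks for significant peaks in each dedispersed time series.

        K_DM = 4.149e3  # dispersion constant, s * MHz^2 * (pc cm^-3)^-1 (approximate)

        def dispersion_delay(dm, freq_mhz, ref_freq_mhz):
            # Cold-plasma dispersive delay (seconds) of `freq_mhz` relative to `ref_freq_mhz`
            # for a trial dispersion measure `dm` in pc cm^-3.
            return K_DM * dm * (freq_mhz ** -2 - ref_freq_mhz ** -2)

        def dedisperse(channels, freqs_mhz, dt_s, dm):
            # Incoherent dedispersion: shift every frequency channel by its DM delay and sum.
            # channels: list of per-channel sample lists; freqs_mhz: channel centre frequencies.
            ref = max(freqs_mhz)
            n_samp = len(channels[0])
            series = [0.0] * n_samp
            for chan, f in zip(channels, freqs_mhz):
                shift = int(round(dispersion_delay(dm, f, ref) / dt_s))
                for i in range(max(0, n_samp - shift)):
                    series[i] += chan[i + shift]
            return series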

  8. Searching Trajectories by Regions of Interest

    KAUST Repository

    Shang, Shuo

    2017-03-22

    With the increasing availability of moving-object tracking data, trajectory search is increasingly important. We propose and investigate a novel query type named trajectory search by regions of interest (TSR query). Given an argument set of trajectories, a TSR query takes a set of regions of interest as a parameter and returns the trajectory in the argument set with the highest spatial-density correlation to the query regions. This type of query is useful in many popular applications such as trip planning and recommendation, and location based services in general. TSR query processing faces three challenges: how to model the spatial-density correlation between query regions and data trajectories, how to effectively prune the search space, and how to effectively schedule multiple so-called query sources. To tackle these challenges, a series of new metrics are defined to model spatial-density correlations. An efficient trajectory search algorithm is developed that exploits upper and lower bounds to prune the search space and that adopts a query-source selection strategy, as well as integrates a heuristic search strategy based on priority ranking to schedule multiple query sources. The performance of TSR query processing is studied in extensive experiments based on real and synthetic spatial data.

  9. Searching Trajectories by Regions of Interest

    KAUST Repository

    Shang, Shuo; chen, Lisi; Jensen, Christian S.; Wen, Ji-Rong; Kalnis, Panos

    2017-01-01

    With the increasing availability of moving-object tracking data, trajectory search is increasingly important. We propose and investigate a novel query type named trajectory search by regions of interest (TSR query). Given an argument set of trajectories, a TSR query takes a set of regions of interest as a parameter and returns the trajectory in the argument set with the highest spatial-density correlation to the query regions. This type of query is useful in many popular applications such as trip planning and recommendation, and location based services in general. TSR query processing faces three challenges: how to model the spatial-density correlation between query regions and data trajectories, how to effectively prune the search space, and how to effectively schedule multiple so-called query sources. To tackle these challenges, a series of new metrics are defined to model spatial-density correlations. An efficient trajectory search algorithm is developed that exploits upper and lower bounds to prune the search space and that adopts a query-source selection strategy, as well as integrates a heuristic search strategy based on priority ranking to schedule multiple query sources. The performance of TSR query processing is studied in extensive experiments based on real and synthetic spatial data.

  10. Application of fast orthogonal search to linear and nonlinear stochastic systems

    DEFF Research Database (Denmark)

    Chon, K H; Korenberg, M J; Holstein-Rathlou, N H

    1997-01-01

    Standard deterministic autoregressive moving average (ARMA) models consider prediction errors to be unexplainable noise sources. The accuracy of the estimated ARMA model parameters depends on producing minimum prediction errors. In this study, an accurate algorithm is developed for estimating linear and nonlinear stochastic ARMA model parameters by using a method known as fast orthogonal search, with an extended model containing prediction errors as part of the model estimation process. The extended algorithm uses fast orthogonal search in a two-step procedure in which deterministic terms...

  11. Pharmacophore definition and 3D searches.

    Science.gov (United States)

    Langer, T; Wolber, G

    2004-12-01

    The most common pharmacophore building concepts based on either 3D structure of the target or ligand information are discussed together with the application of such models as queries for 3D database search. An overview of the key techniques available on the market is given and differences with respect to algorithms used and performance obtained are highlighted. Pharmacophore modelling and 3D database search are shown to be successful tools for enriching screening experiments aimed at the discovery of novel bio-active compounds. © 2004 Elsevier Ltd. All rights reserved.

  12. Search for Directed Networks by Different Random Walk Strategies

    Science.gov (United States)

    Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long

    2012-03-01

    A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed based on several typical complex networks. Due to the difference in search efficiency of the strategies rooted in network clustering, the clustering coefficient in a random walker's eye on directed networks is defined and computed to be half of that of the corresponding undirected networks. The search processes are performed on the directed networks based on the Erdös-Rényi model, Watts-Strogatz model, Barabási-Albert model and clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, which differs from the case of undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
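
    A minimal sketch of the self-avoiding random walk search on a directed graph, the strategy found best above, might look like the following Python; the fallback step taken when the walker is trapped is an assumption of this sketch.

        import random

        def self_avoiding_search(graph, start, target, max_steps=10000, seed=0):
            # Self-avoiding random walk on a directed graph {node: [successors]}.
            # Returns the number of steps needed to reach `target`, or None on failure.
            rng = random.Random(seed)
            visited = {start}
            node = start
            for step in range(1, max_steps + 1):
                unvisited = [v for v in graph.get(node, []) if v not in visited]
                successors = unvisited or graph.get(node, [])  # fall back to any successor if trapped
                if not successors:
                    return None
                node = rng.choice(successors)
                visited.add(node)
                if node == target:
                    return step
            return None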

  13. Method of Improving Personal Name Search in Academic Information Service

    Directory of Open Access Journals (Sweden)

    Heejun Han

    2012-12-01

    Full Text Available All academic information on the web or elsewhere has its creator, that is, a subject who has created the information. The subject can be an individual, a group, or an institution, and can be a nation depending on the nature of the relevant information. Most information is composed of a title, an author, and contents. An essay which is under the academic information category has metadata including a title, an author, keyword, abstract, data about publication, place of publication, ISSN, and the like. A patent has metadata including the title, an applicant, an inventor, an attorney, IPC, number of application, and claims of the invention. Most web-based academic information services enable users to search the information by processing the meta-information. An important element is to search information by using the author field which corresponds to a personal name. This study suggests a method of efficient indexing and using the adjacent operation result ranking algorithm to which phrase search-based boosting elements are applied, and thus improving the accuracy of the search results of personal names. It also describes a method for providing the results of searching co-authors and related researchers in searching personal names. This method can be effectively applied to providing accurate and additional search results in the academic information services.
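
    The idea of boosting adjacent (phrase) matches when ranking author-name hits can be illustrated with a toy scoring rule like the one below; the weighting and tokenization are purely illustrative assumptions, not the indexing scheme used by the service described above.

        def score_name_match(query, author_field, phrase_boost=2.0):
            # Toy ranking rule: token overlap between the query and the author field,
            # boosted when the query tokens also occur adjacently as an exact phrase.
            q_tokens = query.lower().split()
            a_tokens = author_field.lower().split()
            if not q_tokens:
                return 0.0
            overlap = sum(1 for t in q_tokens if t in a_tokens)
            score = overlap / len(q_tokens)
            if " ".join(q_tokens) in " ".join(a_tokens):  # adjacency (phrase) match
                score *= phrase_boost
            return score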

  14. Application of Conformational Space Search in Drug Action | Adikwu ...

    African Journals Online (AJOL)

    The role of conformational space in drug action is presented. Two examples of molecules in different therapeutic groups are presented. Conformational space search will lead to isolating the exact conformation with the desired medicinal properties. Many conformations of a plant isolate may exist which are active, weakly ...

  15. WWER core pattern enhancement using adaptive improved harmony search

    Energy Technology Data Exchange (ETDEWEB)

    Nazari, T. [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Aghaie, M., E-mail: M_Aghaie@sbu.ac.ir [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Zolfaghari, A.; Minuchehr, A.; Norouzi, A. [Nuclear Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of)

    2013-01-15

    Highlights: • The classical and improved harmony search algorithms are introduced. • The advantage of IHS is demonstrated in Shekel's Foxholes. • The CHS and IHS are compared with other heuristic algorithms. • The adaptive improved harmony search is applied for two cases. • Two cases of WWER core are optimized in BOC FA pattern. - Abstract: The efficient operation and fuel management of PWRs are of utmost importance. Core performance analysis constitutes an essential phase in core fuel management optimization. Finding an optimum core arrangement for loading of fuel assemblies, FAs, in a nuclear core is a complex problem. In this paper, application of classical harmony search (HS) and adaptive improved harmony search (IHS) in loading pattern (LP) design, for pressurized water reactors, is described. In this analysis, finding the best core pattern, which attains the maximum multiplication factor, k_eff, while respecting the maximum allowable power peaking factors (PPF), is the main objective. Therefore an HS-based LP optimization code is prepared, and the CITATION code, a neutronic calculation code, is applied to obtain the effective multiplication factor, neutron fluxes and power density in the desired cores. The LP optimization code generated using the adaptive improved harmony search and the neutronic code is applicable to PWR cores with large numbers of FAs. In this work, in the first step, HS and IHS efficiencies are compared with some other heuristic algorithms on Shekel's Foxholes problem and the capability of the adaptive improved harmony search is demonstrated. Results show efficient application of IHS. In the second step, two WWER cases are studied, and IHS then proffered improved core patterns with regard to the mentioned objective functions.
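
    For readers unfamiliar with the method, a classical harmony search loop (memory consideration, pitch adjustment, random selection) is sketched below in Python; it optimizes a generic box-constrained objective and stands in for neither the adaptive IHS variant nor the coupled CITATION neutronics evaluation.

        import random

        def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                           iters=1000, seed=0):
            # Classical harmony search minimizing `objective` over box constraints
            # bounds = [(lo, hi), ...]; hms = harmony memory size.
            rng = random.Random(seed)
            memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
            scores = [objective(h) for h in memory]
            for _ in range(iters):
                new = []
                for d, (lo, hi) in enumerate(bounds):
                    if rng.random() < hmcr:            # memory consideration
                        x = memory[rng.randrange(hms)][d]
                        if rng.random() < par:         # pitch adjustment
                            x += rng.uniform(-bw, bw) * (hi - lo)
                    else:                              # random selection
                        x = rng.uniform(lo, hi)
                    new.append(min(max(x, lo), hi))
                worst = max(range(hms), key=scores.__getitem__)
                s = objective(new)
                if s < scores[worst]:                  # replace worst harmony if improved
                    memory[worst], scores[worst] = new, s
            best = min(range(hms), key=scores.__getitem__)
            return memory[best], scores[best]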

  16. The Role of Google Scholar in Evidence Reviews and Its Applicability to Grey Literature Searching

    Science.gov (United States)

    Haddaway, Neal Robert; Collins, Alexandra Mary; Coughlin, Deborah; Kirk, Stuart

    2015-01-01

    Google Scholar (GS), a commonly used web-based academic search engine, catalogues between 2 and 100 million records of both academic and grey literature (articles not formally published by commercial academic publishers). Google Scholar collates results from across the internet and is free to use. As a result it has received considerable attention as a method for searching for literature, particularly in searches for grey literature, as required by systematic reviews. The reliance on GS as a standalone resource has been greatly debated, however, and its efficacy in grey literature searching has not yet been investigated. Using systematic review case studies from environmental science, we investigated the utility of GS in systematic reviews and in searches for grey literature. Our findings show that GS results contain moderate amounts of grey literature, with the majority found on average at page 80. We also found that, when searched for specifically, the majority of literature identified using Web of Science was also found using GS. However, our findings showed moderate/poor overlap in results when similar search strings were used in Web of Science and GS (10–67%), and that GS missed some important literature in five of six case studies. Furthermore, a general GS search failed to find any grey literature from a case study that involved manual searching of organisations’ websites. If used in systematic reviews for grey literature, we recommend that searches of article titles focus on the first 200 to 300 results. We conclude that whilst Google Scholar can find much grey literature and specific, known studies, it should not be used alone for systematic review searches. Rather, it forms a powerful addition to other traditional search methods. In addition, we advocate the use of tools to transparently document and catalogue GS search results to maintain high levels of transparency and the ability to be updated, critical to systematic reviews. PMID:26379270

  17. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    OpenAIRE

    Wang, J. Z.

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wave...

  18. The effect of query complexity on Web searching results

    Directory of Open Access Journals (Sweden)

    B.J. Jansen

    2000-01-01

    Full Text Available This paper presents findings from a study of the effects of query structure on retrieval by Web search services. Fifteen queries were selected from the transaction log of a major Web search service in simple query form with no advanced operators (e.g., Boolean operators, phrase operators, etc.) and submitted to 5 major search engines - Alta Vista, Excite, FAST Search, Infoseek, and Northern Light. The results from these queries became the baseline data. The original 15 queries were then modified using the various search operators supported by each of the 5 search engines for a total of 210 queries. Each of these 210 queries was also submitted to the applicable search service. The results obtained were then compared to the baseline results. A total of 2,768 search results were returned by the set of all queries. In general, increasing the complexity of the queries had little effect on the results, with a greater than 70% overlap in results, on average. Implications for the design of Web search services and directions for future research are discussed.

  19. Axion experiment makes its debut

    CERN Multimedia

    Dumé, Belle

    2004-01-01

    An experiment built from components recycled from other experiments has put new limits on the properties of particles that might be the "dark matter" that makes up about 25% of the universe. The CERN Axion Solar telescope (CAST) was built to search for exotic particles called axions that might be produced inside the sun (1 page)

  20. Constraint programming and decision making

    CERN Document Server

    Kreinovich, Vladik

    2014-01-01

    In many application areas, it is necessary to make effective decisions under constraints. Several area-specific techniques are known for such decision problems; however, because these techniques are area-specific, it is not easy to apply each technique to other application areas. Cross-fertilization between different application areas is one of the main objectives of the annual International Workshops on Constraint Programming and Decision Making. Those workshops, held in the US (El Paso, Texas), in Europe (Lyon, France), and in Asia (Novosibirsk, Russia), from 2008 to 2012, have attracted researchers and practitioners from all over the world. This volume presents extended versions of selected papers from those workshops. These papers deal with all stages of decision making under constraints: (1) formulating the problem of multi-criteria decision making in precise terms, (2) determining when the corresponding decision problem is algorithmically solvable; (3) finding the corresponding algorithms, and making...

  1. Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation

    International Nuclear Information System (INIS)

    Gao, Y; Guo, Y; Chi, Y L; Qin, S R

    2006-01-01

    Order tracking plays an important role in non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order tracking of rotating machinery is introduced, in which a peak search algorithm applied to the spectrogram of a time-frequency analysis is employed to obtain the IFE of vibrations. An improvement to the peak search is proposed, which prevents strong non-order components or noise from disturbing the peak search. Compared with traditional methods of order tracking, IFE based order tracking is simpler to apply and depends only on software. Tests verify the validity of the method. This method is an effective supplement to traditional methods, and its application in condition monitoring and diagnosis of rotating machinery is readily conceivable.
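
    The restricted peak search described above can be illustrated with the following Python sketch, where each frame's peak is only accepted from a band around the previous estimate; the band width and data layout are assumptions of the sketch, not the authors' implementation.

        def track_order_frequency(spectrogram, freqs, f_start, band=5.0):
            # Robust peak search for instantaneous frequency estimation: in each time frame,
            # the spectral peak is searched only within +/- `band` of the previous estimate,
            # so strong unrelated components or noise cannot hijack the order track.
            # spectrogram: list of magnitude spectra (one per frame); freqs: bin frequencies.
            track, f_prev = [], f_start
            for spectrum in spectrogram:
                window = [(mag, f) for mag, f in zip(spectrum, freqs) if abs(f - f_prev) <= band]
                if window:
                    _, f_prev = max(window)  # keep the strongest in-band bin
                track.append(f_prev)
            return track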

  2. Search Engine Optimization for Flash Best Practices for Using Flash on the Web

    CERN Document Server

    Perkins, Todd

    2009-01-01

    Search Engine Optimization for Flash dispels the myth that Flash-based websites won't show up in a web search by demonstrating exactly what you can do to make your site fully searchable -- no matter how much Flash it contains. You'll learn best practices for using HTML, CSS and JavaScript, as well as SWFObject, for building sites with Flash that will stand tall in search rankings.

  3. Cooperative Search with Autonomous Vehicles in a 3D Aquatic Testbed

    Science.gov (United States)

    2012-01-01

    Cooperative Search with Autonomous Vehicles in a 3D Aquatic Testbed. Matthew Keeter, Daniel Moore, Ryan Muller, Eric Nieters, Jennifer... Many applications for autonomous vehicles involve three-dimensional domains, notably aerial and aquatic environments. Such applications include mon...

  4. Spatial Indexing for Data Searching in Mobile Sensing Environments.

    Science.gov (United States)

    Zhou, Yuchao; De, Suparna; Wang, Wei; Moessner, Klaus; Palaniswami, Marimuthu S

    2017-06-18

    Data searching and retrieval is one of the fundamental functionalities in many Web of Things applications, which need to collect, process and analyze huge amounts of sensor stream data. The problem in fact has been well studied for data generated by sensors that are installed at fixed locations; however, challenges emerge along with the popularity of opportunistic sensing applications in which mobile sensors keep reporting observation and measurement data at variable intervals and changing geographical locations. To address these challenges, we develop the Geohash-Grid Tree, a spatial indexing technique specially designed for searching data integrated from heterogeneous sources in a mobile sensing environment. Results of the experiments on a real-world dataset collected from the SmartSantander smart city testbed show that the index structure allows efficient search based on spatial distance, range and time windows in a large time series database.
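
    The geohash cells underlying a geohash-based grid index can be computed with the standard public encoding sketched below in Python; this is an illustration of the cell identifiers only, not the authors' Geohash-Grid Tree structure.

        _BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

        def geohash_encode(lat, lon, precision=7):
            # Standard geohash: alternately bisect longitude and latitude (longitude first)
            # and pack the resulting bits, 5 per base-32 character.
            lat_lo, lat_hi = -90.0, 90.0
            lon_lo, lon_hi = -180.0, 180.0
            code, even = [], True
            ch, bit_count = 0, 0
            while len(code) < precision:
                if even:  # longitude bit
                    mid = (lon_lo + lon_hi) / 2.0
                    if lon >= mid:
                        ch = (ch << 1) | 1
                        lon_lo = mid
                    else:
                        ch <<= 1
                        lon_hi = mid
                else:     # latitude bit
                    mid = (lat_lo + lat_hi) / 2.0
                    if lat >= mid:
                        ch = (ch << 1) | 1
                        lat_lo = mid
                    else:
                        ch <<= 1
                        lat_hi = mid
                even = not even
                bit_count += 1
                if bit_count == 5:
                    code.append(_BASE32[ch])
                    ch, bit_count = 0, 0
            return "".join(code)

        cell = geohash_encode(57.64911, 10.40744)  # a 7-character cell identifier

    Observations that share a geohash prefix fall in the same coarse cell, which is what makes prefix-based grid indexing of mobile sensor readings efficient for range queries.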

  5. Community ambulation: influences on therapists and clients reasoning and decision making.

    Science.gov (United States)

    Corrigan, Rosemary; McBurney, Helen

    2008-01-01

    Community ambulation is an important element of a rehabilitation training programme and its achievement is a goal shared by rehabilitation professionals and clients. The factors that influence a physiotherapist's or other health professional's decision making around the preparation of a client for community ambulation, and the factors that influence a client's decision to return to walking in their community, are unclear. To review the available literature about the factors that have influenced the reasoning and decision making of rehabilitation therapists and clients around the topic of ambulation in the community. Three separate searches of the available literature were undertaken using Ovid, Cinahl, ProQuest, Medline and Ebscohost databases. Databases were searched from 1966 to October 2006. The first search explored the literature for factors that influence the clinical reasoning of rehabilitation therapists. The second search explored the literature for factors that influence clients' decisions to ambulate in the community. A third search was undertaken to explore the literature for the demands of community ambulation in rural communities. Very few studies were found that explored community ambulation in the context of clinical reasoning and decision making, the facilitators and barriers to a client's return to ambulation in their community, or the demands of ambulation in a rural community. Consideration of the environment is key to the successful return to walking in the community of clients with mobility problems, yet little literature has been found to guide physiotherapists' decision making about preparing a client to return to walking in the community. An individual's participation in their society is also a result of the interaction between their personal characteristics and their environment. The influence of these characteristics may vary from one individual to another, yet the factors that influence a person's decision to return to walking in their community

  6. Search for continuous and single day emission from ultra-high-energy sources

    International Nuclear Information System (INIS)

    Chen, Mei-Li.

    1993-01-01

    Data from the CYGNUS experiment has been used to search the northern sky for point sources of continuous ultra-high-energy gamma radiation and to examine 51 candidate sources on a daily basis to search for episodic emission. In this paper, we make use of our most recent data to update our previously published results from these searches. The data sample is approximately twice as large as the published data set for continuous emission, and contains an additional year for the daily search. The latest results, up to the time of the conference, will be presented at the meeting

  7. Shared Decision-Making for Nursing Practice: An Integrative Review.

    Science.gov (United States)

    Truglio-Londrigan, Marie; Slyer, Jason T

    2018-01-01

    Shared decision-making has received national and international interest by providers, educators, researchers, and policy makers. The literature on shared decision-making is extensive, dealing with the individual components of shared decision-making rather than a comprehensive process. This view of shared decision-making leaves healthcare providers to wonder how to integrate shared decision-making into practice. To understand shared decision-making as a comprehensive process from the perspective of the patient and provider in all healthcare settings. An integrative review was conducted applying a systematic approach involving a literature search, data evaluation, and data analysis. The search included articles from PubMed, CINAHL, the Cochrane Central Register of Controlled Trials, and PsycINFO from 1970 through 2016. Articles included quantitative experimental and non-experimental designs, qualitative, and theoretical articles about shared decision-making between all healthcare providers and patients in all healthcare settings. Fifty-two papers were included in this integrative review. Three categories emerged from the synthesis: (a) communication/ relationship building; (b) working towards a shared decision; and (c) action for shared decision-making. Each major theme contained sub-themes represented in the proposed visual representation for shared decision-making. A comprehensive understanding of shared decision-making between the nurse and the patient was identified. A visual representation offers a guide that depicts shared decision-making as a process taking place during a healthcare encounter with implications for the continuation of shared decisions over time offering patients an opportunity to return to the nurse for reconsiderations of past shared decisions.

  8. An application of the value tree analysis methodology within the integrated risk informed decision making for the nuclear facilities

    International Nuclear Information System (INIS)

    Borysiewicz, Mieczysław; Kowal, Karol; Potempski, Sławomir

    2015-01-01

    A new framework of integrated risk informed decision making (IRIDM) has been recently developed in order to improve the risk management of the nuclear facilities. IRIDM is a process in which qualitatively different inputs, corresponding to different types of risk, are jointly taken into account. However, the relative importance of the IRIDM inputs and their influence on the decision to be made is difficult to be determined quantitatively. An improvement of this situation can be achieved by application of the Value Tree Analysis (VTA) methods. The aim of this article is to present the VTA methodology in the context of its potential usage in the decision making on nuclear facilities. The benefits of the VTA application within the IRIDM process were identified while making the decision on fuel conversion of the research reactor MARIA. - Highlights: • New approach to risk informed decision making on nuclear facilities was postulated. • Value tree diagram was developed for decision processes on nuclear installations. • An experiment was performed to compare the new approach with the standard one. • Benefits of the new approach were reached in fuel conversion of a research reactor. • The new approach makes the decision making process more transparent and auditable

  9. INTERACTION OF SEARCH CAPABILITIES OF ELECTRONIC AND TRADITIONAL (CARD) CATALOGS

    Directory of Open Access Journals (Sweden)

    Л. В. Головко

    2017-10-01

    Full Text Available Purpose. Interaction of search capabilities of electronic and traditional (card) catalogs. Subject: search capabilities of electronic and traditional (card) catalogs and their interaction. Goal: creating an efficient search system for library information services, updating and improving the information retrieval system. To reach this goal, the following tasks are set: – to determine the possibility of parallel functioning of electronic and traditional card catalogs, and to reveal the interaction of their search capabilities by conducting a survey via a questionnaire titled «Interaction of search capabilities of electronic and traditional (card) catalogs»; – to find out which search systems are preferred by users; – to estimate the actual condition of search capabilities of electronic and traditional (card) catalogs in the library. Methodology. At various stages of the survey the following methods were used: analysis and synthesis, comparison, generalization, primary sources search; sociological method (survey). These methods allowed determining, processing and analyzing the whole complex of available sources, which became an important factor of research objectivity. Finding. Survey results allowed us to analyze the dynamics of changes and the new needs of the readers, and to make a decision regarding the quality improvement of information search services. Practical value. Creating a theoretical foundation for implementation of the set tasks is the practical value of the acquired findings. Conclusions and results of the research can be used in university students', postgraduates' and professors' information search activities. Certain results of the research are used and implemented in the practice of the library of Kryvyi Rih State Pedagogical University, namely at workshops on the basics of information culture (using the bibliographic reference unit, and information search by keywords, authors and titles via the electronic catalogue). Guides for users were created. Duty...

  10. Human Detection and Classification of Landing Sites for Search and Rescue Drones

    NARCIS (Netherlands)

    N. Martins, Felipe; de Groot, Marc; Stokkel, Xeryus; Wiering, Marco

    2016-01-01

    Search and rescue is often time and labour intensive. We present a system to be used in drones to make search and rescue operations more effective. The system uses a drone's downward-facing camera to detect people and to evaluate potential sites as being safe or not for the drone to land. Histogram of

  11. Technology makes life better

    Institute of Scientific and Technical Information of China (English)

    李红

    2015-01-01

    There are many theories about the relationship between technology and society. With the development of the world economy, technology has made great progress, and many changes have taken place in our daily life, especially with the appearance of the computer. Sending emails, chatting with others online, searching for the information we need to learn, and meeting many other demands of daily life: computers make all of this possible.

  12. GeoSearch: A lightweight broking middleware for geospatial resources discovery

    Science.gov (United States)

    Gui, Z.; Yang, C.; Liu, K.; Xia, J.

    2012-12-01

    With petabytes of geodata, thousands of geospatial web services available over the Internet, it is critical to support geoscience research and applications by finding the best-fit geospatial resources from the massive and heterogeneous resources. Past decades' developments witnessed the operation of many service components to facilitate geospatial resource management and discovery. However, efficient and accurate geospatial resource discovery is still a big challenge due to the following reasons: 1) The entry barriers (also called "learning curves") hinder the usability of discovery services to end users. Different portals and catalogues always adopt various access protocols, metadata formats and GUI styles to organize, present and publish metadata. It is hard for end users to learn all these technical details and differences. 2) The cost for federating heterogeneous services is high. To provide sufficient resources and facilitate data discovery, many registries adopt periodic harvesting mechanism to retrieve metadata from other federated catalogues. These time-consuming processes lead to network and storage burdens, data redundancy, and also the overhead of maintaining data consistency. 3) The heterogeneous semantics issues in data discovery. Since keyword matching is still the primary search method in many operational discovery services, the search accuracy (precision and recall) is hard to guarantee. Semantic technologies (such as semantic reasoning and similarity evaluation) offer a solution to solve these issues. However, integrating semantic technologies with existing services is challenging due to the expandability limitations on the service frameworks and metadata templates. 4) The capabilities to help users make final selection are inadequate. Most of the existing search portals lack intuitive and diverse information visualization methods and functions (sort, filter) to present, explore and analyze search results. Furthermore, the presentation of the value

  13. Improved magnetic encoding device and method for making the same. [Patent application

    Science.gov (United States)

    Fox, R.J.

    A magnetic encoding device and method for making the same are provided for use as magnetic storage media in identification control applications that give output signals from a reader that are of shorter duration and substantially greater magnitude than those of the prior art. Magnetic encoding elements are produced by uniformly bending wire or strip stock of a magnetic material longitudinally about a common radius to exceed the elastic limit of the material and subsequently mounting the material so that it is restrained in an unbent position on a substrate of nonmagnetic material. The elements are spot weld attached to a substrate to form a binary coded array of elements according to a desired binary code. The coded substrate may be enclosed in a plastic laminate structure. Such devices may be used for security badges, key cards, and the like and may have many other applications. 7 figures.

  14. The Search Performance Evaluation and Prediction in Exploratory Search

    OpenAIRE

    LIU, FEI

    2016-01-01

    The exploratory search for complex search tasks requires an effective search behavior model to evaluate and predict user search performance. Few studies have investigated the relationship between user search behavior and search performance in exploratory search. This research adopts a mixed approach combining search system development, user search experiment, search query log analysis, and multivariate regression analysis to resolve the knowledge gap. Through this study, it is shown that expl...

  15. Searches over graphs representing geospatial-temporal remote sensing data

    Science.gov (United States)

    Brost, Randolph; Perkins, David Nikolaus

    2018-03-06

    Various technologies pertaining to identifying objects of interest in remote sensing images by searching over geospatial-temporal graph representations are described herein. Graphs are constructed by representing objects in remote sensing images as nodes, and connecting nodes with undirected edges representing either distance or adjacency relationships between objects and directed edges representing changes in time. Geospatial-temporal graph searches are made computationally efficient by taking advantage of characteristics of geospatial-temporal data in remote sensing images through the application of various graph search techniques.

  16. Natural Language Processing Approach for Searching the Quran: Quick and Intuitive

    Directory of Open Access Journals (Sweden)

    Zainal Abidah

    2017-01-01

    Full Text Available The Quran is a scripture that acts as the main reference to people whose religion is Islam. It covers information from politics to science, with a vast amount of information that requires effort to uncover the knowledge behind it. Today, the emergence of smartphones has led to the development of a wide range of applications for enhancing knowledge-seeking activities. This project proposes a mobile application that takes a natural language approach to searching topics in the Quran based on keyword searching. The benefit of the application is two-fold; it is intuitive and it saves time.

  17. Search performance is better predicted by tileability than presence of a unique basic feature

    Science.gov (United States)

    Chang, Honghua; Rosenholtz, Ruth

    2016-01-01

    Traditional models of visual search such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty consists of whether or not the search target contains a “basic feature” not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified, and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search. PMID:27548090

  18. Intelligent techniques in engineering management theory and applications

    CERN Document Server

    Onar, Sezi

    2015-01-01

    This book presents recently developed intelligent techniques with applications and theory in the area of engineering management. The applications of intelligent techniques such as neural networks, fuzzy sets, Tabu search, genetic algorithms, etc. will be useful for engineering managers, postgraduate students, researchers, and lecturers. The book has been written with the contents of a classical engineering management book in mind, but intelligent techniques are used for handling the engineering management problem areas. This comprehensive coverage makes the book an excellent reference for the solution of complex problems in engineering management. The chapter authors are well-known researchers with previous work in the area of engineering management.

  19. Foraging in Semantic Fields: How We Search Through Memory.

    Science.gov (United States)

    Hills, Thomas T; Todd, Peter M; Jones, Michael N

    2015-07-01

    When searching for concepts in memory--as in the verbal fluency task of naming all the animals one can think of--people appear to explore internal mental representations in much the same way that animals forage in physical space: searching locally within patches of information before transitioning globally between patches. However, the definition of the patches being searched in mental space is not well specified. Do we search by activating explicit predefined categories (e.g., pets) and recall items from within that category (categorical search), or do we activate and recall a connected sequence of individual items without using categorical information, with each item recalled leading to the retrieval of an associated item in a stream (associative search), or both? Using semantic representations in a search of associative memory framework and data from the animal fluency task, we tested competing hypotheses based on associative and categorical search models. Associative, but not categorical, patch transitions took longer to make than position-matched productions, suggesting that categorical transitions were not true transitions. There was also clear evidence of associative search even within categorical patch boundaries. Furthermore, most individuals' behavior was best explained by an associative search model without the addition of categorical information. Thus, our results support a search process that does not use categorical information, but for which patch boundaries shift with each recall and local search is well described by a random walk in semantic space, with switches to new regions of the semantic space when the current region is depleted. Copyright © 2015 Cognitive Science Society, Inc.
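
    The patch-switching idea can be made concrete with a toy walk over a similarity matrix: take the most similar unrecalled item while local similarity stays high, and jump elsewhere when it is depleted. The Python sketch below uses a random toy similarity matrix and an arbitrary switch threshold; both are assumptions for illustration, not the semantic space or model fitted in the cited study.

        # Toy associative memory search: step to the most similar unrecalled item,
        # "switch patches" by jumping to a random remote item when local similarity
        # is depleted. Similarity matrix and threshold are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_items = 12
        sim = rng.random((n_items, n_items))
        sim = (sim + sim.T) / 2                      # symmetric toy similarity matrix
        np.fill_diagonal(sim, 0.0)

        def forage(start, n_recalls, switch_threshold=0.55):
            recalled = [start]
            current = start
            for _ in range(n_recalls - 1):
                candidates = [i for i in range(n_items) if i not in recalled]
                if not candidates:
                    break
                local_best = max(candidates, key=lambda i: sim[current, i])
                if sim[current, local_best] >= switch_threshold:
                    nxt = local_best                 # local, associative step
                else:
                    nxt = rng.choice(candidates)     # patch switch: jump elsewhere
                recalled.append(int(nxt))
                current = nxt
            return recalled

        print(forage(start=0, n_recalls=8))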

  20. Blog feed search with a post index

    NARCIS (Netherlands)

    Weerkamp, W.; Balog, K.; de Rijke, M.

    2011-01-01

    User generated content forms an important domain for mining knowledge. In this paper, we address the task of blog feed search: to find blogs that are principally devoted to a given topic, as opposed to blogs that merely happen to mention the topic in passing. The large number of blogs makes the

  1. Towards Efficient Search for Activity Trajectories

    DEFF Research Database (Denmark)

    Zheng, Kai; Shang, Shuo; Yuan, Jing

    2013-01-01

    , recent proliferation in location-based web applications (e.g., Foursquare, Facebook) has given rise to large amounts of trajectories associated with activity information, called activity trajectory. In this paper, we study the problem of efficient similarity search on activity trajectory database. Given...

  2. Direct Tests on Individual Behaviour in Small Decision-Making Problems

    Directory of Open Access Journals (Sweden)

    Takemi Fujikawa

    2007-10-01

    Full Text Available This paper provides an empirical and experimental analysis of individual decision making in small decision-making problems with a series of laboratory experiments. Two experimental treatments with binary small decision-making problems are implemented: (1) the search treatment, with the payoff distribution unknown to the decision makers, and (2) the choice treatment, with the payoff distribution known. A first observation is that, in the search treatment, the tendency to select the best reply to past performances and misestimation of the payoff distribution can lead to robust deviations from expected value maximisation. A second observation concerns choice problems with two options of the same expected value: one option is more risky with larger payoff variability; the other option is moderate with less payoff variability. Experimental results show that the more the decision makers choose the risky option, the higher the points they are likely to achieve ex post. Finally, I investigate the exploration tendency. Comparison of results between the search treatment and the choice treatment reveals that the additional information given to the decision makers enhances expected value maximisation.

  3. Application of artificial intelligence to search ground-state geometry of clusters

    International Nuclear Information System (INIS)

    Lemes, Mauricio Ruv; Marim, L.R.; Dal Pino, A. Jr.

    2002-01-01

    We introduce a global optimization procedure, the neural-assisted genetic algorithm (NAGA). It combines the power of an artificial neural network (ANN) with the versatility of the genetic algorithm. This method is suitable to solve optimization problems that depend on some kind of heuristics to limit the search space. If a reasonable amount of data is available, the ANN can 'understand' the problem and provide the genetic algorithm with a selected population of elements that will speed up the search for the optimum solution. We tested the method in a search for the ground-state geometry of silicon clusters. We trained the ANN with information about the geometry and energetics of small silicon clusters. Next, the ANN learned how to restrict the configurational space for larger silicon clusters. For Si10 and Si20, we noticed that the NAGA is at least three times faster than the 'pure' genetic algorithm. As the size of the cluster increases, it is expected that the gain in terms of time will increase as well
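
    The general pattern, a cheap learned predictor screening random candidates so the genetic algorithm starts from a promising population, can be sketched in a few lines. The toy one-dimensional objective and the nearest-neighbour "surrogate" below are stand-ins chosen for brevity; the cited work uses an ANN trained on cluster geometries and energetics.

        # Sketch of a surrogate-assisted genetic algorithm in the spirit of NAGA.
        # The objective, the surrogate and all parameters are illustrative assumptions.
        import random

        def energy(x):                       # toy objective to minimise
            return (x - 2.0) ** 2 + 0.3 * abs(x)

        # "Training data": a few cheap evaluations the surrogate interpolates from.
        known = [(x, energy(x)) for x in [-4.0, -1.0, 0.5, 3.0, 5.0]]

        def surrogate(x):
            """Crude stand-in for the ANN: predict via the nearest known sample."""
            nearest = min(known, key=lambda pair: abs(pair[0] - x))
            return nearest[1]

        def seeded_population(size, pool_factor=10):
            """Screen a large random pool with the surrogate, keep the best."""
            pool = [random.uniform(-6, 6) for _ in range(size * pool_factor)]
            return sorted(pool, key=surrogate)[:size]

        def genetic_algorithm(generations=30, size=20):
            pop = seeded_population(size)
            for _ in range(generations):
                pop.sort(key=energy)
                parents = pop[: size // 2]
                children = [
                    (random.choice(parents) + random.choice(parents)) / 2
                    + random.gauss(0, 0.2)           # crossover + mutation
                    for _ in range(size - len(parents))
                ]
                pop = parents + children
            return min(pop, key=energy)

        print(genetic_algorithm())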

  4. 76 FR 82279 - Electronic Delivery of Search Results From the United States Patent and Trademark Office to the...

    Science.gov (United States)

    2011-12-30

    ...] Electronic Delivery of Search Results From the United States Patent and Trademark Office to the European... United States Patent and Trademark Office (USPTO) has recently begun electronic delivery of search... the search results from a previously filed patent application to which the European patent application...

  5. Searching for the right fit: development of applicant person-organization fit perceptions during the recruitment process.

    Science.gov (United States)

    Swider, Brian W; Zimmerman, Ryan D; Barrick, Murray R

    2015-05-01

    Numerous studies link applicant fit perceptions measured at a single point in time to recruitment outcomes. Expanding upon this prior research by incorporating decision-making theory, this study examines how applicants develop these fit perceptions over the duration of the recruitment process, showing meaningful changes in fit perceptions across and within organizations over time. To assess the development of applicant fit perceptions, eight assessments of person-organization (PO) fit with up to four different organizations across 169 applicants for 403 job choice decisions were analyzed. Results showed the presence of initial levels and changes in differentiation of applicant PO fit perceptions across organizations, which significantly predicted future job choice. In addition, changes in within-organizational PO fit perceptions across two stages of recruitment predicted applicant job choices among multiple employers. The implications of these results for accurately understanding the development of fit perceptions, relationships between fit perceptions and key recruiting outcomes, and possible limitations of past meta-analytically derived estimates of these relationships are discussed. (c) 2015 APA, all rights reserved.

  6. One visual search, many memory searches: An eye-tracking investigation of hybrid search.

    Science.gov (United States)

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2017-09-01

    Suppose you go to the supermarket with a shopping list of 10 items held in memory. Your shopping expedition can be seen as a combination of visual search and memory search. This is known as "hybrid search." There is a growing interest in understanding how hybrid search tasks are accomplished. We used eye tracking to examine how manipulating the number of possible targets (the memory set size [MSS]) changes how observers (Os) search. We found that dwell time on each distractor increased with MSS, suggesting a memory search was being executed each time a new distractor was fixated. Meanwhile, although the rate of refixation increased with MSS, it was not nearly enough to suggest a strategy that involves repeatedly searching visual space for subgroups of the target set. These data provide a clear demonstration that hybrid search tasks are carried out via a "one visual search, many memory searches" heuristic in which Os examine items in the visual array once with a very low rate of refixations. For each item selected, Os activate a memory search that produces logarithmic response time increases with increased MSS. Furthermore, the percentage of distractors fixated was strongly modulated by the MSS: More items in the MSS led to a higher percentage of fixated distractors. Searching for more potential targets appears to significantly alter how Os approach the task, ultimately resulting in more eye movements and longer response times.

  7. Investigating User Search Tactic Patterns and System Support in Using Digital Libraries

    Science.gov (United States)

    Joo, Soohyung

    2013-01-01

    This study aims to investigate users' search tactic application and system support in using digital libraries. A user study was conducted with sixty digital library users. The study was designed to answer three research questions: 1) How do users engage in a search process by applying different types of search tactics while conducting different…

  8. Application of Regulatory Focus Theory to Search Advertising.

    Science.gov (United States)

    Mowle, Elyse N; Georgia, Emily J; Doss, Brian D; Updegraff, John A

    The purpose of this paper is to test the utility of regulatory focus theory principles in a real-world setting; specifically, Internet hosted text advertisements. Effect of compatibility of the ad text with the regulatory focus of the consumer was examined. Advertisements were created using Google AdWords. Data were collected for the number of views and clicks each ad received. Effect of regulatory fit was measured using logistic regression. Logistic regression analyses demonstrated that there was a strong main effect for keyword, such that users were almost six times as likely to click on a promotion advertisement as a prevention advertisement, as well as a main effect for compatibility, such that users were twice as likely to click on an advertisement with content that was consistent with their keyword. Finally, there was a strong interaction of these two variables, such that the effect of consistent advertisements was stronger for promotion searches than for prevention searches. The effect of ad compatibility had medium to large effect sizes, suggesting that individuals' state may have more influence on advertising response than do individuals' traits (e.g. personality traits). Measurement of regulatory fit was limited by the constraints of Google AdWords. The results of this study provide a possible framework for ad creation for Internet advertisers. This paper is the first study to demonstrate the utility of regulatory focus theory in online advertising.
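
    A regulatory-fit analysis of this kind reduces to a logistic regression of clicks on keyword focus, ad compatibility, and their interaction. The sketch below fits that model on synthetic click data with made-up coefficients, so the numbers it prints have no relation to the study's results; it only illustrates the analysis setup.

        # Illustrative logistic regression with a keyword x compatibility interaction.
        # The click data is synthetic; coefficients and rates are assumptions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 2000
        promotion = rng.integers(0, 2, n)            # 1 = promotion keyword, 0 = prevention
        compatible = rng.integers(0, 2, n)           # 1 = ad text matches keyword focus
        logit = -3.0 + 1.8 * promotion + 0.7 * compatible + 0.6 * promotion * compatible
        clicked = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([promotion, compatible, promotion * compatible])
        model = LogisticRegression().fit(X, clicked)
        print("coefficients (promotion, compatible, interaction):", model.coef_[0].round(2))
        print("odds ratios:", np.exp(model.coef_[0]).round(2))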

  9. Development and tuning of an original search engine for patent libraries in medicinal chemistry.

    Science.gov (United States)

    Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick

    2014-01-01

    The large increase in the size of patent collections has led to the need for efficient search strategies. But the development of advanced text-mining applications dedicated to patents of the biomedical field remains rare, in particular to address the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental for search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web-application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that a proper tuning of the system to adapt to the various search tasks

  10. Supporting Communication and Decision Making in Finnish Intensive Care with Language Technology

    Directory of Open Access Journals (Sweden)

    Hanna J. Suominen

    2010-01-01

    Full Text Available A fluent flow of health information is critical for health communication and decision making. However, the flow is fragmented by the large amount of textual records and their specific jargon. This creates risks for both patient safety and cost-effective health services. Language technology for the automated processing of textual health records is emerging. In this paper, we describe method development for building topical overviews in Finnish intensive care. Our topical search methods are based on supervised multi-label classification and regression, as well as supervised and unsupervised multi-class classification. Our linguistic analysis methods are based on rule-based and statistical parsing, as well as tailoring of a commercial morphological analyser. According to our experimental results, the supervised methods generalise for multiple topics and human annotators, and the unsupervised method enables an ad hoc information search. Tailored linguistic analysis improves performance in the experiments and, in addition, improves text comprehensibility for health professionals and laypeople. In conclusion, the performance of our methods is promising for real-life applications.

  11. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    Science.gov (United States)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  12. Searches for New Physics in Multijet Final States

    Directory of Open Access Journals (Sweden)

    Vuosalo Carl

    2013-11-01

    Full Text Available A variety of new physics models predict heavy resonances that decay to multiple hadronic jets. These models include axigluons, colorons, diquarks, excited quarks, Randall-Sundrum gravitons, string resonances, and Z’ models, among others. Other models make the prediction that high-pT jets will be suppressed, resulting in jet extinction. Using the data collected in 2012 at a center-of-mass energy of 8 TeV, the CMS collaboration has made a baseline inclusive jet cross section measurement for comparison with new-physics searches, and then performed searches for jet extinction and resonances that decay to two hadronic jets. The results of these searches will be presented. No evidence of new physics has been observed, and these results set new limits on the parameters of these models.

  13. Fast radio burst search: cross spectrum vs. auto spectrum method

    Science.gov (United States)

    Liu, Lei; Zheng, Weimin; Yan, Zhen; Zhang, Juan

    2018-06-01

    The search for fast radio bursts (FRBs) is a hot topic in current radio astronomy studies. In this work, we carry out a single pulse search with a very long baseline interferometry (VLBI) pulsar observation data set using both auto spectrum and cross spectrum search methods. The cross spectrum method, first proposed in Liu et al., maximizes the signal power by fully utilizing the fringe phase information of the baseline cross spectrum. The auto spectrum search method is based on the popular pulsar software package PRESTO, which extracts single pulses from the auto spectrum of each station. According to our comparison, the cross spectrum method is able to enhance the signal power and therefore extract single pulses from data contaminated by high levels of radio frequency interference (RFI), which makes it possible to carry out a search for FRBs in regular VLBI observations when RFI is present.
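
    The advantage of combining stations can be illustrated with a loose time-domain analogy: a detection statistic built from one station is fooled by local RFI, while a statistic built from the product of two stations suppresses it, because the interference is incoherent between sites. The Python sketch below uses synthetic data and is not the cross-spectrum pipeline of the cited work.

        # Toy contrast between single-station (auto) and two-station (cross) pulse
        # detection. All signals are synthetic; parameters are arbitrary assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 8000
        t = np.arange(n)
        pulse = 3.0 * np.exp(-0.5 * ((t - 5000) / 10.0) ** 2)     # common astrophysical burst

        a = pulse + rng.normal(size=n)                             # station A: burst + noise
        b = pulse + rng.normal(size=n)                             # station B: burst + noise
        a[2000:2040] += 6.0 * rng.normal(size=40)                  # impulsive RFI at station A only

        window = 50
        def boxcar(x):                                             # smooth a detection statistic
            return np.convolve(x, np.ones(window) / window, mode="same")

        auto_stat = boxcar(a ** 2)                                 # single-station statistic: RFI dominates
        cross_stat = boxcar(a * b)                                 # two-station statistic: RFI averages down

        print("auto  statistic peaks at sample", int(np.argmax(auto_stat)))
        print("cross statistic peaks at sample", int(np.argmax(cross_stat)))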

  14. Engineering Bacteria to Search for Specific Concentrations of Molecules by a Systematic Synthetic Biology Design Method.

    Science.gov (United States)

    Tien, Shin-Ming; Hsu, Chih-Yuan; Chen, Bor-Sen

    2016-01-01

    Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella's rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the "brake component" in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each "brake component" were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the "brake component". Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate "brake component" in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains.

  15. Engineering Bacteria to Search for Specific Concentrations of Molecules by a Systematic Synthetic Biology Design Method.

    Directory of Open Access Journals (Sweden)

    Shin-Ming Tien

    Full Text Available Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella's rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the "brake component" in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each "brake component" were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the "brake component". Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate "brake component" in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains.

  16. SearchResultFinder: federated search made easy

    NARCIS (Netherlands)

    Trieschnigg, Rudolf Berend; Tjin-Kam-Jet, Kien; Hiemstra, Djoerd

    Building a federated search engine based on a large number of existing web search engines is a challenge: implementing the programming interface (API) for each search engine is an exacting and time-consuming job. In this demonstration we present SearchResultFinder, a browser plugin which speeds up

  17. Learning Behavior Characterizations for Novelty Search

    DEFF Research Database (Denmark)

    Meyerson, Elliot; Lehman, Joel Anthony; Miikulainen, Risto

    2016-01-01

    Novelty search and related diversity-driven algorithms provide a promising approach to overcoming deception in complex domains. The behavior characterization (BC) is a critical choice in the application of such algorithms. The BC maps each evaluated individual to a behavior, i.e., some vector...

  18. Spatial Indexing for Data Searching in Mobile Sensing Environments

    Directory of Open Access Journals (Sweden)

    Yuchao Zhou

    2017-06-01

    Full Text Available Data searching and retrieval is one of the fundamental functionalities in many Web of Things applications, which need to collect, process and analyze huge amounts of sensor stream data. The problem in fact has been well studied for data generated by sensors that are installed at fixed locations; however, challenges emerge along with the popularity of opportunistic sensing applications in which mobile sensors keep reporting observation and measurement data at variable intervals and changing geographical locations. To address these challenges, we develop the Geohash-Grid Tree, a spatial indexing technique specially designed for searching data integrated from heterogeneous sources in a mobile sensing environment. Results of the experiments on a real-world dataset collected from the SmartSantander smart city testbed show that the index structure allows efficient search based on spatial distance, range and time windows in a large time series database.
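
    The spatial key underlying such an index is the geohash, a standard prefix code in which nearby points share leading characters, so grid cells map to string prefixes. A minimal encoder is sketched below; the Geohash-Grid Tree structure itself and its time-window handling are not reproduced here.

        # Minimal geohash encoder (the standard public algorithm), illustrating the
        # kind of spatial key a geohash-based grid index is built on.
        BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

        def geohash(lat, lon, precision=7):
            lat_lo, lat_hi = -90.0, 90.0
            lon_lo, lon_hi = -180.0, 180.0
            even, code = True, []
            bit_count, ch = 0, 0
            while len(code) < precision:
                if even:                                   # even bits refine longitude
                    mid = (lon_lo + lon_hi) / 2
                    if lon > mid:
                        ch = (ch << 1) | 1
                        lon_lo = mid
                    else:
                        ch = ch << 1
                        lon_hi = mid
                else:                                      # odd bits refine latitude
                    mid = (lat_lo + lat_hi) / 2
                    if lat > mid:
                        ch = (ch << 1) | 1
                        lat_lo = mid
                    else:
                        ch = ch << 1
                        lat_hi = mid
                even = not even
                bit_count += 1
                if bit_count == 5:                         # five bits -> one base32 character
                    code.append(BASE32[ch])
                    bit_count, ch = 0, 0
            return "".join(code)

        # Points sharing a prefix fall in the same grid cell, so range queries
        # can be answered by prefix lookups.
        print(geohash(43.4623, -3.8098))                   # coordinates of Santander (SmartSantander testbed)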

  19. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

  20. Stop searches in flavourful supersymmetry

    CERN Document Server

    Crivellin, Andreas; Tunstall, Lewis C.

    2016-01-01

    Natural realisations of supersymmetry require light stops ${\tilde t}_1$, making them a prime target of LHC searches for physics beyond the Standard Model. Depending on the kinematic region, the main search channels are ${\tilde t_1}\to t \tilde \chi^0_1$, ${\tilde t_1}\to W b \tilde \chi^0_1$ and ${\tilde t_1}\to c \tilde \chi^0_1$. We first examine the interplay of these decay modes with ${\tilde c_1}\to c \tilde \chi^0_1$ in a model-independent fashion, revealing the existence of large regions in parameter space which are excluded for any ${\tilde t_1}\to c \tilde \chi^0_1$ branching ratio. This effect is then illustrated for scenarios with stop-scharm mixing in the right-handed sector, where it has previously been observed that the stop mass limits can be significantly weakened for large mixing. Our analysis shows that once the LHC bounds from ${\tilde c_1}\to c \tilde \chi^0_1$ searches are taken into account, non-zero stop-scharm mixing leads only to a modest increase in the allowed regions of parameter...

  1. Application of Bayesian Decision Theory Based on Prior Information in the Multi-Objective Optimization Problem

    Directory of Open Access Journals (Sweden)

    Xia Lei

    2010-12-01

    Full Text Available It is hard for general multi-objective optimization methods to obtain prior information, and how to utilize prior information has been a challenge. This paper analyzes the characteristics of Bayesian decision-making based on the maximum entropy principle and prior information, and in particular how to effectively improve decision-making reliability when reference samples are deficient. The paper demonstrates the effectiveness of the proposed method in a real application, multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results demonstrate that Bayesian decision-making based on prior information has better global searching capability when sampling data is deficient.

  2. Missing Links in Middle School: Developing Use of Disciplinary Relatedness in Evaluating Internet Search Results.

    Directory of Open Access Journals (Sweden)

    Frank C Keil

    Full Text Available In the "digital native" generation, internet search engines are a commonly used source of information. However, adolescents may fail to recognize relevant search results when they are related in discipline to the search topic but lack other cues. Middle school students, high school students, and adults rated simulated search results for relevance to the search topic. The search results were designed to contrast deep discipline-based relationships with lexical similarity to the search topic. Results suggest that the ability to recognize disciplinary relatedness without supporting cues may continue to develop into high school. Despite frequent search engine usage, younger adolescents may require additional support to make the most of the information available to them.

  3. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as searching indexing feature. As the applications of video cameras have great increase in recent years, face recognition makes a perfect fit for searching targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of face images recorded in the video signals. Since the surveillance video cameras record videos without fixed postures for the object, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA), and reconstructs the human faces with the available information. Experimental results show that the system has very high efficiency in processing the real life videos, and it is very robust to various kinds of face occlusions. Hence it can relieve people reviewers from the front of the monitors and greatly enhances the efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solving real cases.
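
    The reconstruction step can be illustrated with ordinary PCA: project an occluded vector onto a subspace learned from unoccluded training data and map it back, which fills the missing region from the correlations captured by the model. The sketch below uses synthetic vectors and plain PCA rather than the fuzzy PCA (FPCA) of the cited system; all names and sizes are illustrative.

        # Standard-PCA reconstruction of a partially occluded vector, as a simplified
        # stand-in for FPCA-based face reconstruction. Data is synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_train, n_pixels, n_components = 200, 64, 10

        # Synthetic training data living on a low-dimensional subspace plus noise.
        basis = rng.normal(size=(n_components, n_pixels))
        coeffs = rng.normal(size=(n_train, n_components))
        train = coeffs @ basis + 0.05 * rng.normal(size=(n_train, n_pixels))

        mean = train.mean(axis=0)
        U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
        components = Vt[:n_components]                      # principal directions

        test = rng.normal(size=n_components) @ basis
        occluded = test.copy()
        occluded[:16] = 0.0                                 # simulate an occluded region

        proj = (occluded - mean) @ components.T             # project onto the learned subspace
        reconstruction = mean + proj @ components           # map back to pixel space

        print("error before:", float(np.linalg.norm(occluded - test)))
        print("error after :", float(np.linalg.norm(reconstruction - test)))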

  4. A systematic review of decision support needs of parents making child health decisions

    Science.gov (United States)

    Jackson, Cath; Cheater, Francine M.; Reid, Innes

    2008-01-01

    Abstract Objective  To identify the decision support needs of parents attempting to make an informed health decision on behalf of a child. Context  The first step towards implementing patient decision support is to assess patients’ information and decision‐making needs. Search strategy  A systematic search of key bibliographic databases for decision support studies was performed in 2005. Reference lists of relevant review articles and key authors were searched. Three relevant journals were hand searched. Inclusion criteria  Non‐intervention studies containing data on decision support needs of parents making child health decisions. Data extraction and synthesis  Data were extracted on study characteristics, decision focus and decision support needs. Studies were quality assessed using a pre‐defined set of criteria. Data synthesis used the UK Evidence for Policy and Practice Information and Co‐ordinating Centre approach. Main results  One‐hundred and forty nine studies were included across various child health decisions, settings and study designs. Thematic analysis of decision support needs indicated three key issues: (i) information (including suggestions about the content, delivery, source, timing); (ii) talking to others (including concerns about pressure from others); and (iii) feeling a sense of control over the process that could be influenced by emotionally charged decisions, the consultation process, and structural or service barriers. These were consistent across decision type, study design and whether or not the study focused on informed decision making. PMID:18816320

  5. Supported Decision-Making: Implications from Positive Psychology for Assessment and Intervention in Rehabilitation and Employment.

    Science.gov (United States)

    Uyanik, Hatice; Shogren, Karrie A; Blanck, Peter

    2017-12-01

    Purpose This article reviews existing literature on positive psychology, supported decision-making (SDM), employment, and disability. It examines interventions and assessments that have been empirically evaluated for the enhancement of decision-making and overall well-being of people with disabilities. Additionally, conceptual themes present in the literature were explored. Methods A systematic review was conducted across two databases (ERIC and PsychINFO) using various combinations of the keyword 'disabilit*', work rehabilitation and employment terms, positive psychology terms, and SDM components. Seven database searches were conducted with diverse combinations of keywords, which identified 1425 results in total to be screened for relevance using their titles and abstracts. The database search was supplemented with hand searches of oft-cited journals, ancestral search, and supplemental search from grey literature. Results Only four studies were identified in the literature targeting SDM and positive psychology related constructs in the employment and job development context. Results across the studies indicated small to moderate impacts of the assessment and interventions on decision-making and engagement outcomes. Conceptually there are thematic areas of potential overlap, although they are limited in the explicit integration of theory in supported decision-making, positive psychology, disability, and employment. Conclusion Results suggest a need for additional scholarship in this area that focuses on theory development and integration as well as empirical work. Such work should examine the potential utility of considering positive psychological interventions when planning for SDM in the context of career development activities to enhance positive outcomes related to decision-making, self-determination, and other positive psychological constructs.

  6. Report on the Second Workshop on Supporting Complex Search Tasks

    DEFF Research Database (Denmark)

    Koolen, Marijn; Kamps, Jaap; Bogers, Toine

    2017-01-01

    There is broad consensus in the field of IR that search is complex in many use cases and applications, both on the Web and in domain-specific collections, and both in our professional and in our daily life. Yet our understanding of complex search tasks, in comparison to simple look up tasks...

  7. Algorithms for Regular Tree Grammar Network Search and Their Application to Mining Human-viral Infection Patterns.

    Science.gov (United States)

    Smoly, Ilan; Carmel, Amir; Shemer-Avni, Yonat; Yeger-Lotem, Esti; Ziv-Ukelson, Michal

    2016-03-01

    Network querying is a powerful approach to mine molecular interaction networks. Most state-of-the-art network querying tools either confine the search to a prespecified topology in the form of some template subnetwork, or do not specify any topological constraints at all. Another approach is grammar-based queries, which are more flexible and expressive as they allow for expressing the topology of the sought pattern according to some grammar-based logic. Previous grammar-based network querying tools were confined to the identification of paths. In this article, we extend the patterns identified by grammar-based query approaches from paths to trees. For this, we adopt a higher order query descriptor in the form of a regular tree grammar (RTG). We introduce a novel problem and propose an algorithm to search a given graph for the k highest scoring subgraphs matching a tree accepted by an RTG. Our algorithm is based on the combination of dynamic programming with color coding, and includes an extension of previous k-best parsing optimization approaches to avoid isomorphic trees in the output. We implement the new algorithm and exemplify its application to mining viral infection patterns within molecular interaction networks. Our code is available online.
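
    The colour-coding ingredient can be shown on the simpler case of searching for a simple path of a fixed size: vertices receive random colours and dynamic programming extends only "colourful" partial solutions, with several random colourings tried so the sought pattern is hit with high probability. The toy graph and parameters below are assumptions; the cited algorithm generalises this to trees accepted by a regular tree grammar and adds scoring and k-best enumeration.

        # Colour-coding sketch for finding simple paths of k vertices in a toy graph.
        # The graph, k, and trial count are illustrative assumptions.
        import random

        graph = {
            "a": ["b", "c"], "b": ["a", "c", "d"],
            "c": ["a", "b", "e"], "d": ["b", "e"], "e": ["c", "d"],
        }
        k = 3

        def colourful_paths(colour):
            """All simple paths of k vertices whose colours are pairwise distinct."""
            dp = {(v, frozenset([colour[v]])): [v] for v in graph}
            for _ in range(k - 1):
                new_dp = {}
                for (v, used), path in dp.items():
                    for u in graph[v]:
                        if colour[u] not in used:          # extend only with unused colours
                            new_dp.setdefault((u, used | {colour[u]}), path + [u])
                dp = new_dp
            return list(dp.values())

        random.seed(0)
        found = []
        for trial in range(20):                            # repeat trials; a single colouring may miss
            colour = {v: random.randrange(k) for v in graph}
            found = colourful_paths(colour)
            if found:
                break
        print(found[:3])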

  8. 9 CFR 381.198 - Importer to make application for inspection of poultry products offered for entry.

    Science.gov (United States)

    2010-01-01

    ... MEAT AND POULTRY PRODUCTS INSPECTION AND VOLUNTARY INSPECTION AND CERTIFICATION POULTRY PRODUCTS... 9 Animals and Animal Products 2 2010-01-01 2010-01-01 false Importer to make application for inspection of poultry products offered for entry. 381.198 Section 381.198 Animals and Animal Products FOOD...

  9. Source Security Program in the Philippines: a lost source search experience

    International Nuclear Information System (INIS)

    Romallosa, Kristine M.; Salabit, Maria T.; Caseria, Estrella; Valdezco, Eulinia

    2008-01-01

    The Philippine Nuclear Research Institute (PNRI), the national agency for the licensing and regulation of radioactive materials in the country, is strengthening its capabilities in the security of radioactive sources. Part of this program is the PNRI's participation in the Regional Security of Radioactive Sources (RSRS) Project of the Australian Nuclear Science and Technology Organization (ANSTO). The project has provided equipment and methods training, assistance in the development of PNRI's own training program, and support for actual orphan source search activities. In May 2007, a source search for the two lost Cs-137 level gauges of a steel manufacturing company was conducted by the PNRI and ANSTO. The source search consisted of: a) development of instrument and source search training for the team, namely the National Training Workshop on Orphan Source Searches, which was organized and conducted as a result of a train-the-trainers fellowship under the RSRS project; and b) planning and implementation of the lost source search activity. The conduct of the actual search on the warehouses, product yard, canals, dust storage, steel-making building, scrap yards and nearby junk shops of the steel plant took one week. The week-long search did not find the lost sources. However, naturally occurring radioactive materials, identified as thorium, were found on the sand, brick and sack piles that are stored and/or generally present in the warehouses, yard and steel-making building. The search activity therefore cleared the facility of the lost sources and their corresponding hazards. The NORM found on the plant's premises, on the other hand, brought to the management's attention the measures needed to ensure the safety of staff from the possible hazards of these materials. Currently, the course syllabus that was developed is continuously enhanced to accommodate the training needs of the PNRI staff, particularly for emergency response and preparedness. This component of the source

  10. Application of Regulatory Focus Theory to Search Advertising

    Science.gov (United States)

    Mowle, Elyse N.; Georgia, Emily J.; Doss, Brian D.; Updegraff, John A.

    2015-01-01

    Purpose The purpose of this paper is to test the utility of regulatory focus theory principles in a real-world setting; specifically, Internet hosted text advertisements. Effect of compatibility of the ad text with the regulatory focus of the consumer was examined. Design/methodology/approach Advertisements were created using Google AdWords. Data were collected for the number of views and clicks each ad received. Effect of regulatory fit was measured using logistic regression. Findings Logistic regression analyses demonstrated that there was a strong main effect for keyword, such that users were almost six times as likely to click on a promotion advertisement as a prevention advertisement, as well as a main effect for compatibility, such that users were twice as likely to click on an advertisement with content that was consistent with their keyword. Finally, there was a strong interaction of these two variables, such that the effect of consistent advertisements was stronger for promotion searches than for prevention searches. Research limitations/implications The effect of ad compatibility had medium to large effect sizes, suggesting that individuals’ state may have more influence on advertising response than do individuals’ traits (e.g. personality traits). Measurement of regulatory fit was limited by the constraints of Google AdWords. Practical implications The results of this study provide a possible framework for ad creation for Internet advertisers. Originality/value This paper is the first study to demonstrate the utility of regulatory focus theory in online advertising. PMID:26430293

  11. PhAST: pharmacophore alignment search tool.

    Science.gov (United States)

    Hähnke, Volker; Hofmann, Bettina; Grgat, Tomislav; Proschak, Ewgenij; Steinhilber, Dieter; Schneider, Gisbert

    2009-04-15

    We present a ligand-based virtual screening technique (PhAST) for rapid hit and lead structure searching in large compound databases. Molecules are represented as strings encoding the distribution of pharmacophoric features on the molecular graph. In contrast to other text-based methods using SMILES strings, we introduce a new form of text representation that describes the pharmacophore of molecules. This string representation opens the opportunity for revealing functional similarity between molecules by sequence alignment techniques in analogy to homology searching in protein or nucleic acid sequence databases. We favorably compared PhAST with other current ligand-based virtual screening methods in a retrospective analysis using the BEDROC metric. In a prospective application, PhAST identified two novel inhibitors of 5-lipoxygenase product formation with minimal experimental effort. This outcome demonstrates the applicability of PhAST to drug discovery projects and provides an innovative concept of sequence-based compound screening with substantial scaffold hopping potential. 2008 Wiley Periodicals, Inc.
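
    The core idea, reducing molecules to one-dimensional feature strings and comparing them with sequence-alignment machinery, can be sketched with a basic Needleman-Wunsch global alignment. The feature alphabet, scoring parameters and candidate strings below are invented for illustration and are far simpler than the PhAST encoding.

        # Toy string-based pharmacophore comparison via global sequence alignment.
        # Alphabet and scores are assumptions, not the PhAST scheme.
        def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
            n, m = len(a), len(b)
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap
            for j in range(1, m + 1):
                score[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            return score[n][m]

        # Hypothetical feature strings: A = H-bond acceptor, D = donor, R = aromatic,
        # L = lipophilic. A higher alignment score suggests a more similar pharmacophore.
        query = "RRLADAR"
        candidates = {"mol_1": "RRLADAR", "mol_2": "RRLLDAR", "mol_3": "DDLLAAA"}
        for name, seq in sorted(candidates.items(),
                                key=lambda kv: -needleman_wunsch(query, kv[1])):
            print(name, needleman_wunsch(query, seq))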

  12. Recent developments in MrBUMP: better search-model preparation, graphical interaction with search models, and solution improvement and assessment.

    Science.gov (United States)

    Keegan, Ronan M; McNicholas, Stuart J; Thomas, Jens M H; Simpkin, Adam J; Simkovic, Felix; Uski, Ville; Ballard, Charles C; Winn, Martyn D; Wilson, Keith S; Rigden, Daniel J

    2018-03-01

    Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case.

  13. Adaptive switching gravitational search algorithm: an attempt to ...

    Indian Academy of Sciences (India)

    Nor Azlina Ab Aziz

    An adaptive gravitational search algorithm (GSA) that switches between synchronous and ... genetic algorithm (GA), bat-inspired algorithm (BA) and grey wolf optimizer (GWO). ... heuristic with applications in applied electromagnetics. Prog.

  14. ISART: A Generic Framework for Searching Books with Social Information.

    Science.gov (United States)

    Yin, Xu-Cheng; Zhang, Bo-Wen; Cui, Xiao-Ping; Qu, Jiao; Geng, Bin; Zhou, Fang; Song, Li; Hao, Hong-Wei

    2016-01-01

    Effective book search has been discussed for decades and remains relevant in areas as diverse as computer science, informatics, e-commerce and even culture and arts. A variety of social information content (e.g., ratings, tags and reviews) emerges with the huge number of books on the Web, but how it can be utilized for searching and finding books is seldom investigated. Here we develop an Integrated Search And Recommendation Technology (IsArt), which breaks new ground by providing a generic framework for searching books with rich social information. IsArt comprises a search engine to rank books with book contents and professional metadata, a Generalized Content-based Filtering model to thereafter rerank books with user-generated social contents, and a learning-to-rank technique to finally combine a wide range of diverse reranking results. Experiments show that this technology permits embedding social information to promote book search effectiveness, and IsArt, by making use of it, has the best performance on the CLEF/INEX Social Book Search Evaluation datasets of all 4 years (from 2011 to 2014), compared with some other state-of-the-art methods.
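
    A minimal picture of the rerank stage is a weighted fusion of a retrieval score with a score derived from user-generated content such as ratings. The sketch below uses made-up scores and an arbitrary weight; IsArt's Generalized Content-based Filtering model and learning-to-rank combination are considerably richer.

        # Toy rerank: fuse a retrieval score with a rating-based score.
        # Scores, weights and book names are illustrative assumptions.
        books = {
            "book_a": {"retrieval_score": 7.2, "ratings": [5, 4, 5]},
            "book_b": {"retrieval_score": 8.1, "ratings": [2, 3]},
            "book_c": {"retrieval_score": 6.5, "ratings": [5, 5, 4, 5]},
        }

        def fused_score(b, alpha=0.7):
            rating = sum(b["ratings"]) / len(b["ratings"]) / 5.0    # normalise to [0, 1]
            retrieval = b["retrieval_score"] / 10.0
            return alpha * retrieval + (1 - alpha) * rating

        for name in sorted(books, key=lambda k: -fused_score(books[k])):
            print(name, round(fused_score(books[name]), 3))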

  15. 9 CFR 590.920 - Importer to make application for inspection of imported eggs and egg products.

    Science.gov (United States)

    2010-01-01

    ... inspection of imported eggs and egg products. 590.920 Section 590.920 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION INSPECTION OF EGGS AND EGG PRODUCTS (EGG PRODUCTS INSPECTION ACT) Imports § 590.920 Importer to make application for inspection of...

  16. Protein structural similarity search by Ramachandran codes

    Directory of Open Access Journals (Sweden)

    Chang Chih-Hung

    2007-08-01

    Full Text Available Abstract Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structure similarity search. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, the accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Then, classical sequence similarity search methods can be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented into a web service and a stand-alone Java program that is able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated and high-throughput functional annotations or predictions for the ever-increasing number of published protein structures in this post-genomic era.
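
    The encoding idea, mapping each residue's backbone (phi, psi) angles to a letter so that structures become searchable strings, can be illustrated with a crude hand-drawn binning of the Ramachandran plot. The four regions below are assumptions chosen for readability; SARST derives its alphabet from nearest-neighbour clustering and learns substitution matrices rather than using fixed bins.

        # Toy "Ramachandran code": bin (phi, psi) angles into letters so that ordinary
        # sequence-search tools can index structures. Bins are illustrative only.
        def ramachandran_code(phi, psi):
            if phi < 0 and -100 <= psi <= 45:
                return "H"          # roughly helical region
            if phi < 0 and (psi > 45 or psi < -150):
                return "E"          # roughly extended/sheet region
            if phi >= 0:
                return "L"          # left-handed region
            return "X"              # everything else

        def encode(angles):
            return "".join(ramachandran_code(phi, psi) for phi, psi in angles)

        helix_like = [(-60, -45), (-63, -42), (-58, -47), (-61, -44)]
        sheet_like = [(-120, 130), (-135, 140), (-110, 125)]
        print(encode(helix_like))   # "HHHH"
        print(encode(sheet_like))   # "EEE"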

  17. Open meta-search with OpenSearch: a case study

    OpenAIRE

    O'Riordan, Adrian P.

    2007-01-01

    The goal of this project was to demonstrate the possibilities of open source search engine and aggregation technology in a Web environment by building a meta-search engine which employs free open search engines and open protocols. In contrast many meta-search engines on the Internet use proprietary search systems. The search engines employed in this case study are all based on the OpenSearch protocol. OpenSearch-compliant systems support XML technologies such as RSS and Atom for aggregation a...

  18. The Open Spectral Database: an open platform for sharing and searching spectral data.

    Science.gov (United States)

    Chalk, Stuart J

    2016-01-01

    A number of websites make spectral data available for download (typically as JCAMP-DX text files), and one of them (ChemSpider) also allows users to contribute spectral files. As a result, searching and retrieving such spectral data can be time-consuming, and the data can be difficult to reuse if it is compressed in the JCAMP-DX file. What is needed is a single resource that allows submission of JCAMP-DX files, export of the raw data in multiple formats, searching based on multiple chemical identifiers, and is open in terms of license and access. To address these issues a new online resource called the Open Spectral Database (OSDB) http://osdb.info/ has been developed and is now available. Built using open source tools, using open code (hosted on GitHub), providing open data, and open to community input about design and functionality, the OSDB is available for anyone to submit spectral data, making it searchable and available to the scientific community. This paper details the concept and coding, internal architecture, export formats, Representational State Transfer (REST) Application Programming Interface and options for submission of data. The OSDB website went live in November 2015. Concurrently, the GitHub repository was made available at https://github.com/stuchalk/OSDB/, and is open for collaborators to join the project, submit issues, and contribute code. The combination of a scripting environment (PHPStorm), a PHP Framework (CakePHP), a relational database (MySQL) and a code repository (GitHub) provides all the capabilities to easily develop REST-based websites for ingestion, curation and exposure of open chemical data to the community at all levels. It is hoped this software stack (or equivalent ones in other scripting languages) will be leveraged to make more chemical data available for both humans and computers.
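
    A client would typically query such a service over HTTP and parse the JSON response. The snippet below shows the shape of such a call with the requests library, but the endpoint path, parameter name and response fields are placeholders, not the documented OSDB routes; the actual REST API should be checked at http://osdb.info/.

        # Hypothetical REST client sketch. The URL path, parameter and fields are
        # placeholders (assumptions), not the documented OSDB API.
        import requests

        def search_spectra(inchikey):
            url = "http://osdb.info/api/spectra"          # placeholder path (assumption)
            response = requests.get(url, params={"inchikey": inchikey}, timeout=10)
            response.raise_for_status()
            return response.json()

        # Example usage (InChIKey for caffeine), left commented out because the
        # endpoint above is illustrative:
        # spectra = search_spectra("RYYVLZVUVIJVGH-UHFFFAOYSA-N")
        # for s in spectra:
        #     print(s.get("technique"), s.get("id"))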

  19. Introducing StatHand: A Cross-Platform Mobile Application to Support Students' Statistical Decision Making.

    Science.gov (United States)

    Allen, Peter J; Roberts, Lynne D; Baughman, Frank D; Loxton, Natalie J; Van Rooy, Dirk; Rock, Adam J; Finlay, James

    2016-01-01

    Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer) are known to facilitate this decision making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students' statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand.

  20. Hesitant Probabilistic Fuzzy Linguistic Sets with Applications in Multi-Criteria Group Decision Making Problems

    Directory of Open Access Journals (Sweden)

    Dheeraj Kumar Joshi

    2018-03-01

    Full Text Available Uncertainties due to randomness and fuzziness comprehensively exist in control and decision support systems. In the present study, we introduce the notion of the occurring probability of possible values into the hesitant fuzzy linguistic element (HFLE) and define the hesitant probabilistic fuzzy linguistic set (HPFLS) for ill-structured and complex decision-making problems. HPFLS provides a single framework where both stochastic and non-stochastic uncertainties can be efficiently handled along with hesitation. We have also proposed expected mean, variance, score and accuracy functions and basic operations for HPFLS. Weighted and ordered weighted aggregation operators for HPFLS are also defined in the present study for their applications in multi-criteria group decision making (MCGDM) problems. We propose an MCGDM method with HPFL information, which is illustrated by an example. A real case study is also presented to rank State Bank of India, InfoTech Enterprises, I.T.C., H.D.F.C. Bank, Tata Steel, Tata Motors and Bajaj Finance using real data. The proposed HPFLS-based MCGDM method is also compared with two HFL-based decision-making methods.

  1. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
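
    The ranking step amounts to counting, for every reference, how often it is cited together with the known articles and screening the most frequently co-cited ones first. The sketch below does this over a made-up citation table; it is not the Web of Science extraction used in the study.

        # Sketch of co-citation ranking: articles frequently cited alongside the
        # "known" articles are screened first. Citation data is a toy assumption.
        from collections import Counter

        # citing_paper -> set of references it cites
        citations = {
            "p1": {"known_A", "x", "y"},
            "p2": {"known_A", "known_B", "x"},
            "p3": {"known_B", "z"},
            "p4": {"y", "z"},
        }
        known = {"known_A", "known_B"}

        co_citation_counts = Counter()
        for refs in citations.values():
            if refs & known:                       # this paper cites at least one known article
                for ref in refs - known:
                    co_citation_counts[ref] += 1   # ...so its other references are co-cited

        # Screen candidates above a threshold, most frequently co-cited first.
        threshold = 1
        candidates = [a for a, c in co_citation_counts.most_common() if c >= threshold]
        print(candidates)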

  2. Simplified Models for LHC New Physics Searches

    CERN Document Server

    Alves, Daniele; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Buckley, Matthew; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R.Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Evans, Jared A.; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto; Freitas, Ayres; Gainer, James S.; Gershtein, Yuri; Gray, Richard; Gregoire, Thomas; Gripaios, Ben; Gunion, Jack; Han, Tao; Haas, Andy; Hansson, Per; Hewett, JoAnne; Hits, Dmitry; Hubisz, Jay; Izaguirre, Eder; Kaplan, Jared; Katz, Emanuel; Kilic, Can; Kim, Hyung-Do; Kitano, Ryuichiro; Koay, Sue Ann; Ko, Pyungwon; Krohn, David; Kuflik, Eric; Lewis, Ian; Lisanti, Mariangela; Liu, Tao; Liu, Zhen; Lu, Ran; Luty, Markus; Meade, Patrick; Morrissey, David; Mrenna, Stephen; Nojiri, Mihoko; Okui, Takemichi; Padhi, Sanjay; Papucci, Michele; Park, Michael; Park, Myeonghun; Perelstein, Maxim; Peskin, Michael; Phalen, Daniel; Rehermann, Keith; Rentala, Vikram; Roy, Tuhin; Ruderman, Joshua T.; Sanz, Veronica; Schmaltz, Martin; Schnetzer, Stephen; Schuster, Philip; Schwaller, Pedro; Schwartz, Matthew D.; Schwartzman, Ariel; Shao, Jing; Shelton, Jessie; Shih, David; Shu, Jing; Silverstein, Daniel; Simmons, Elizabeth; Somalwar, Sunil; Spannowsky, Michael; Spethmann, Christian; Strassler, Matthew; Su, Shufang; Tait, Tim; Thomas, Brooks; Thomas, Scott; Toro, Natalia; Volansky, Tomer; Wacker, Jay; Waltenberger, Wolfgang; Yavin, Itay; Yu, Felix; Zhao, Yue; Zurek, Kathryn

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the "Topologies for Early LHC Searches" workshop, held at SLAC in September of 2010, the purpose of which was to develop a...

  3. Making Individual Prognoses in Psychiatry Using Neuroimaging and Machine Learning.

    Science.gov (United States)

    Janssen, Ronald J; Mourão-Miranda, Janaina; Schnack, Hugo G

    2018-04-22

    Psychiatric prognosis is a difficult problem. Making a prognosis requires looking far into the future, as opposed to making a diagnosis, which is concerned with the current state. During the follow-up period, many factors will influence the course of the disease. Combined with the usually scarcer longitudinal data and the variability in the definition of outcomes/transition, this makes prognostic predictions a challenging endeavor. Employing neuroimaging data in this endeavor introduces the additional hurdle of high dimensionality. Machine-learning techniques are especially suited to tackle this challenging problem. This review starts with a brief introduction to machine learning in the context of its application to clinical neuroimaging data. We highlight a few issues that are especially relevant for prediction of outcome and transition using neuroimaging. We then review the literature that discusses the application of machine learning for this purpose. Critical examination of the studies and their results with respect to the relevant issues revealed the following: 1) there is growing evidence for the prognostic capability of machine-learning-based models using neuroimaging; and 2) reported accuracies may be too optimistic owing to small sample sizes and the lack of independent test samples. Finally, we discuss options to improve the reliability of (prognostic) prediction models. These include new methodologies and multimodal modeling. Paramount, however, is our conclusion that future work will need to provide properly (cross-)validated accuracy estimates of models trained on sufficiently large datasets. Nevertheless, with the technological advances enabling acquisition of large databases of patients and healthy subjects, machine learning represents a powerful tool in the search for psychiatric biomarkers. Copyright © 2018 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. Fuzzy statistical decision-making theory and applications

    CERN Document Server

    Kabak, Özgür

    2016-01-01

    This book offers a comprehensive reference guide to fuzzy statistics and fuzzy decision-making techniques. It provides readers with all the necessary tools for making statistical inference in the case of incomplete information or insufficient data, where classical statistics cannot be applied. The respective chapters, written by prominent researchers, explain a wealth of both basic and advanced concepts including: fuzzy probability distributions, fuzzy frequency distributions, fuzzy Bayesian inference, fuzzy mean, mode and median, fuzzy dispersion, fuzzy p-value, and many others. To foster a better understanding, all the chapters include relevant numerical examples or case studies. Taken together, they form an excellent reference guide for researchers, lecturers and postgraduate students pursuing research on fuzzy statistics. Moreover, by extending all the main aspects of classical statistical decision-making to its fuzzy counterpart, the book presents a dynamic snapshot of the field that is expected to stimu...

  5. Smartphones, tablets and mobile applications for radiology.

    Science.gov (United States)

    Székely, András; Talanow, Roland; Bágyi, Péter

    2013-05-01

    Smartphones are phone devices that may also be used for browsing, navigation and running smaller computer programs called applications. One may consider them as compact personal computers which are primarily to be used for making phone calls. Tablets or "tablet PCs" are fully functioning standalone computers the size of a thin LCD monitor that use the screen itself for control and data input. Both of these devices may be categorized based on the mobile operating system that they use. The aim of this study is to illustrate how smartphones and tablets can be used by diagnostic imaging professionals, radiographers and residents, and to introduce relevant applications that are available for their field. A search was performed on iTunes, Android Market, Blackberry App World, and Windows Phone Marketplace for mobile applications pertinent to the field of diagnostic imaging. The following terms were applied for the search strategy: (1) radiology, (2) X-ray, (3) ultrasound, (4) MRI, (5) CT, (6) radiographer, (7) nuclear medicine. Two radiologists and one radiology resident reviewed the results. Our review was limited to English-language software. Additional applications were identified by reviewing the list of similar software provided in the description of each application. We downloaded and installed all applications that appeared relevant to an appropriate mobile phone or tablet device. We identified and reviewed a total of 102 applications. We ruled out 1 non-English application and 20 other applications that were created for entertainment purposes. Thus our final list includes 81 applications in the following five categories: diagnostic reading, decision support applications, medical books, interactive encyclopedias, and journal reading programs. Smartphones and tablets offer new opportunities for diagnostic imaging practitioners; these easy-to-use devices equipped with excellent display may be used for diagnostic reading, reference, learning, consultation, and for

  6. Smartphones, tablets and mobile applications for radiology

    Energy Technology Data Exchange (ETDEWEB)

    Székely, András, E-mail: andras.szekely@gmail.com [Kenézy Hospital Department of Radiology, 4043 Debrecen, Bartók Béla út 2-26 (Hungary); Talanow, Roland, E-mail: roland@talanow.info [P.O. Box 1570, Lincoln, CA 95648 (United States); Bágyi, Péter [Kenézy Hospital Department of Radiology, 4043 Debrecen, Bartók Béla út 2-26 (Hungary)

    2013-05-15

    Background: Smartphones are phone devices that may also be used for browsing, navigation and running smaller computer programs called applications. One may consider them as compact personal computers which are primarily to be used for making phone calls. Tablets or “tablet PCs” are fully functioning standalone computers the size of a thin LCD monitor that use the screen itself for control and data input. Both of these devices may be categorized based on the mobile operating system that they use. The aim of this study is to illustrate how smartphones and tablets can be used by diagnostic imaging professionals, radiographers and residents, and to introduce relevant applications that are available for their field. Materials and methods: A search was performed on iTunes, Android Market, Blackberry App World, and Windows Phone Marketplace for mobile applications pertinent to the field of diagnostic imaging. The following terms were applied for the search strategy: (1) radiology, (2) X-ray, (3) ultrasound, (4) MRI, (5) CT, (6) radiographer, (7) nuclear medicine. Two radiologists and one radiology resident reviewed the results. Our review was limited to English-language software. Additional applications were identified by reviewing the list of similar software provided in the description of each application. We downloaded and installed all applications that appeared relevant to an appropriate mobile phone or tablet device. Results: We identified and reviewed a total of 102 applications. We ruled out 1 non-English application and 20 other applications that were created for entertainment purposes. Thus our final list includes 81 applications in the following five categories: diagnostic reading, decision support applications, medical books, interactive encyclopedias, and journal reading programs. Conclusion: Smartphones and tablets offer new opportunities for diagnostic imaging practitioners; these easy-to-use devices equipped with excellent display may be used for

  7. Smartphones, tablets and mobile applications for radiology

    International Nuclear Information System (INIS)

    Székely, András; Talanow, Roland; Bágyi, Péter

    2013-01-01

    Background: Smartphones are phone devices that may also be used for browsing, navigation and running smaller computer programs called applications. One may consider them as compact personal computers which are primarily to be used for making phone calls. Tablets or “tablet PCs” are fully functioning standalone computers the size of a thin LCD monitor that use the screen itself for control and data input. Both of these devices may be categorized based on the mobile operating system that they use. The aim of this study is to illustrate how smartphones and tablets can be used by diagnostic imaging professionals, radiographers and residents, and to introduce relevant applications that are available for their field. Materials and methods: A search was performed on iTunes, Android Market, Blackberry App World, and Windows Phone Marketplace for mobile applications pertinent to the field of diagnostic imaging. The following terms were applied for the search strategy: (1) radiology, (2) X-ray, (3) ultrasound, (4) MRI, (5) CT, (6) radiographer, (7) nuclear medicine. Two radiologists and one radiology resident reviewed the results. Our review was limited to English-language software. Additional applications were identified by reviewing the list of similar software provided in the description of each application. We downloaded and installed all applications that appeared relevant to an appropriate mobile phone or tablet device. Results: We identified and reviewed a total of 102 applications. We ruled out 1 non-English application and 20 other applications that were created for entertainment purposes. Thus our final list includes 81 applications in the following five categories: diagnostic reading, decision support applications, medical books, interactive encyclopedias, and journal reading programs. Conclusion: Smartphones and tablets offer new opportunities for diagnostic imaging practitioners; these easy-to-use devices equipped with excellent display may be used for

  8. Budget constraints and optimization in sponsored search auctions

    CERN Document Server

    Yang, Yanwu

    2013-01-01

    The Intelligent Systems Series publishes reference works and handbooks in three core sub-topic areas: Intelligent Automation, Intelligent Transportation Systems, and Intelligent Computing. They include theoretical studies, design methods, and real-world implementations and applications. The series' readership is broad, but focuses on engineering, electronics, and computer science. Budget Constraints and Optimization in Sponsored Search Auctions considers the entire life cycle of advertising campaigns, and is intended for researchers and developers working on search systems and ROI maximization.

  9. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  10. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  11. Uncertain Fuzzy Preference Relations and Their Applications

    CERN Document Server

    Gong, Zaiwu; Yao, Tianxiang

    2013-01-01

    On the basis of fuzzy sets and some of their relevant generalizations, this book systematically presents the fundamental principles and applications of group decision making under different scenarios of preference relations. Using intuitionistic knowledge as the field of discourse, it investigates, by innovative research means, the fundamental principles and methods of group decision making with various intuitionistic preferences: mathematical reasoning is employed to study the consistency of group decision making; methods of fusing information are applied to the aggregation of multiple preferences; and techniques of soft computing and optimization are utilized to search for satisfactory decision alternatives. Each chapter follows a structurally clear format of presentation: literature review, development of basic theory, verification and reasoning of principles, construction of models and computational schemes, and numerical examples, which ...

  12. The Interplay of Episodic and Semantic Memory in Guiding Repeated Search in Scenes

    Science.gov (United States)

    Vo, Melissa L.-H.; Wolfe, Jeremy M.

    2013-01-01

    It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers…

  13. Do I Buy? Rent? Or Make My Own?

    Science.gov (United States)

    Sawin, Philip, Jr.; Hartz, Roger

    1977-01-01

    Selection of and search for appropriate instructional materials should begin while writing instructional objectives during the curriculum planning process. A five-phase system to help the teacher make selection decisions, in consultation with the school media specialist, is presented. (MF)

  14. Sundanese ancient manuscripts search engine using probability approach

    Science.gov (United States)

    Suryani, Mira; Hadi, Setiawan; Paulus, Erick; Nurma Yulita, Intan; Supriatna, Asep K.

    2017-10-01

    Information and Communication Technology (ICT) now touches every aspect of life, including culture and heritage. Sundanese ancient manuscripts, an important part of Sundanese heritage, are in a damaged condition, and so is the information they contain. To preserve the information in these manuscripts and make it easier to search, a search engine has been developed, and it must offer good retrieval performance. To find the best configuration for the search engine, this study compared three probabilistic approaches: the Bayesian Networks Model, Divergence from Randomness with the PL2 distribution (DFR-PL2), and DFR-PL2F, a derivative of DFR-PL2. The three approaches are supported by a document index and three different weighting methods: term occurrence, term frequency, and TF-IDF. The experiment involved 12 Sundanese ancient manuscripts containing 474 distinct terms. The search engine was tested with 50 random queries of three query types. The results showed that, for both single and multiple queries, the best search performance was given by the combination of the PL2F approach and TF-IDF weighting, with an average response time of about 0.08 seconds and a Mean Average Precision (MAP) of about 0.33.
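
    For reference, the Mean Average Precision figure reported above can be computed as in the following sketch (standard definition of MAP, not code from the study; document identifiers are invented).

```python
def average_precision(ranked_ids, relevant_ids):
    """Average of precision values at the rank of each relevant document retrieved."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked_ids, relevant_ids) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

if __name__ == "__main__":
    runs = [
        (["m3", "m7", "m1"], {"m3", "m1"}),   # AP = (1/1 + 2/3) / 2 = 0.833
        (["m2", "m5", "m9"], {"m9"}),         # AP = (1/3) / 1     = 0.333
    ]
    print(round(mean_average_precision(runs), 3))  # 0.583
```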

  15. Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS.

    Science.gov (United States)

    De-Arteaga, Maria; Eggel, Ivan; Do, Bao; Rubin, Daniel; Kahn, Charles E; Müller, Henning

    2015-08-01

    underrepresented, therefore RadLex was considered to be the best option for this task. The results show a surprising similarity between the usage behaviour in the two systems, but several subtle differences can also be noted. The average number of terms per query is 2.21 for GoldMiner and 2.07 for radTF, the used axes of RadLex (anatomy, pathology, findings, …) have almost the same distribution with clinical findings being the most frequent and the anatomical entity the second; also, combinations of RadLex axes are extremely similar between the two systems. Differences include a longer length of the sessions in radTF than in GoldMiner (3.4 and 1.9 queries per session on average). Several frequent search terms overlap but some strong differences exist in the details. In radTF the term "normal" is frequent, whereas in GoldMiner it is not. This makes intuitive sense, as in the literature normal cases are rarely described whereas in clinical work the comparison with normal cases is often a first step. The general similarity in many points is likely due to the fact that users of the two systems are influenced by their daily behaviour in using standard web search engines and follow this behaviour in their professional search. This means that many results and insights gained from standard web search can likely be transferred to more specialized search systems. Still, specialized log files can be used to find out more on reformulations and detailed strategies of users to find the right content. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. The Development of Mobile Application to Introduce Historical Monuments in Manado

    Directory of Open Access Journals (Sweden)

    Markhasi Rupilu Moshe

    2018-01-01

    Learning the historical value of a monument is important because it preserves cultural and historical values and expands our personal insight. In Indonesia, and particularly in Manado, North Sulawesi, there are many monuments. They were erected to commemorate history, religion, culture and past wars, yet this information is not written in detail on the monuments themselves; to learn about a specific monument, a manual search was required, i.e. asking related people or sources. To address this problem, an application was needed that uses the LBS (Location Based Service) method and algorithms designed for mobile devices such as smartphones, so that information on every monument in Manado can be displayed in detail using GPS coordinates. The application combines the KNN method with the K-means algorithm and collaborative filtering to recommend monument information to tourists, who receive options filtered by distance. The same method is also used to find the monument closest to the user: the KNN algorithm determines the closest location by comparing the longitudes and latitudes of the monuments a tourist wants to visit. With this application, tourists can find information on monuments in Manado easily and quickly, because monument information is recommended directly to the user without a manual selection step. Moreover, tourists can view recommended monument information and search several monuments in Manado in real time.

  17. The Development of Mobile Application to Introduce Historical Monuments in Manado

    Science.gov (United States)

    Rupilu, Moshe Markhasi; Suyoto; Santoso, Albertus Joko

    2018-02-01

    Learning the historical value of a monument is important because it preserves cultural and historical values and expands our personal insight. In Indonesia, and particularly in Manado, North Sulawesi, there are many monuments. They were erected to commemorate history, religion, culture and past wars, yet this information is not written in detail on the monuments themselves; to learn about a specific monument, a manual search was required, i.e. asking related people or sources. To address this problem, an application was needed that uses the LBS (Location Based Service) method and algorithms designed for mobile devices such as smartphones, so that information on every monument in Manado can be displayed in detail using GPS coordinates. The application combines the KNN method with the K-means algorithm and collaborative filtering to recommend monument information to tourists, who receive options filtered by distance. The same method is also used to find the monument closest to the user: the KNN algorithm determines the closest location by comparing the longitudes and latitudes of the monuments a tourist wants to visit. With this application, tourists can find information on monuments in Manado easily and quickly, because monument information is recommended directly to the user without a manual selection step. Moreover, tourists can view recommended monument information and search several monuments in Manado in real time.
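
    The nearest-monument step described in both versions of this record reduces to a distance comparison on latitude/longitude pairs. A minimal sketch using the haversine formula is shown below; the coordinates and names are invented for illustration, and the original application's exact distance calculation is not specified.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def k_nearest(user_pos, monuments, k=3):
    """Return the k monuments closest to user_pos = (lat, lon)."""
    return sorted(
        monuments,
        key=lambda m: haversine_km(*user_pos, m["lat"], m["lon"]),
    )[:k]

if __name__ == "__main__":
    # Hypothetical monument coordinates around Manado, for illustration only.
    monuments = [
        {"name": "Monument A", "lat": 1.4748, "lon": 124.8421},
        {"name": "Monument B", "lat": 1.4931, "lon": 124.8413},
        {"name": "Monument C", "lat": 1.4500, "lon": 124.8300},
    ]
    for m in k_nearest((1.4800, 124.8400), monuments, k=2):
        print(m["name"])
```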

  18. Decision making about healthcare-related tests and diagnostic test strategies. Paper 2: a review of methodological and practical challenges.

    Science.gov (United States)

    Mustafa, Reem A; Wiercioch, Wojtek; Cheung, Adrienne; Prediger, Barbara; Brozek, Jan; Bossuyt, Patrick; Garg, Amit X; Lelgemann, Monika; Büehler, Diedrich; Schünemann, Holger J

    2017-12-01

    In this first of a series of five articles, we provide an overview of how and why healthcare-related tests and diagnostic strategies are currently applied. We also describe how our findings can be integrated with existing frameworks for making decisions that guide the use of healthcare-related tests and diagnostic strategies. We searched MEDLINE, references of identified articles, chapters in relevant textbooks, and identified articles citing classic literature on this topic. We provide updated frameworks for the potential roles and applications of tests with suggested definitions and practical examples. We also discuss study designs that are commonly used to assess tests' performance and the effects of tests on people's health. These designs include diagnostic randomized controlled trials and retrospective validation. We describe the utility of these and other currently suggested designs, which questions they can answer and which ones they cannot. In addition, we summarize the challenges unique to decision-making resulting from the use of tests. This overview highlights current challenges in the application of tests in decision-making in healthcare, provides clarifications, and informs the proposed solutions. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. QLIKVIEW APPLICATION - SUPPORT IN DECISION MAKING

    Directory of Open Access Journals (Sweden)

    Luminita SERBANESCU

    2017-12-01

    Control over the company, an objective that any manager wants, can only be exercised on the basis of real and complex business data. This requires a higher class of application: Business Intelligence applications, which provide information faster than anything else can. The time gained can be used to identify other issues related to the available information, or activities that add value to the company. After all, management is about making decisions in time, and a decision is valuable only if it is made at the right time. This article presents the benefits of implementing a business intelligence solution in a company, as well as how to design analytical reports using the QlikView application.

  20. NASA Taxonomies for Searching Problem Reports and FMEAs

    Science.gov (United States)

    Malin, Jane T.; Throop, David R.

    2006-01-01

    Many types of hazard and risk analyses are used during the life cycle of complex systems, including Failure Modes and Effects Analysis (FMEA), Hazard Analysis, Fault Tree and Event Tree Analysis, Probabilistic Risk Assessment, Reliability Analysis and analysis of Problem Reporting and Corrective Action (PRACA) databases. The success of these methods depends on the availability of input data and the analysts' knowledge. Standard nomenclature can increase the reusability of hazard, risk and problem data. When nomenclature in the source texts is not standard, taxonomies with mapping words (sets of rough synonyms) can be combined with semantic search to identify items and tag them with metadata based on a rich standard nomenclature. Semantic search uses word meanings in the context of parsed phrases to find matches. The NASA taxonomies provide the word meanings. Spacecraft taxonomies and ontologies (generalization hierarchies with attributes and relationships, based on terms' meanings) are being developed for types of subsystems, functions, entities, hazards and failures. The ontologies are broad and general, covering hardware, software and human systems. Semantic search of Space Station texts was used to validate and extend the taxonomies. The taxonomies have also been used to extract system connectivity (interaction) models and functions from requirements text. Now the Reconciler semantic search tool and the taxonomies are being applied to improve search in the Space Shuttle PRACA database, to discover recurring patterns of failure. Usual methods of string search and keyword search fall short because the entries are terse and have numerous shortcuts (irregular abbreviations, nonstandard acronyms, cryptic codes) and modifier words cannot be used in sentence context to refine the search. The limited and fixed FMEA categories associated with the entries do not make the fine distinctions needed in the search. The approach assigns PRACA report titles to problem classes in

  1. Neutral Supersymmetric Higgs Boson Searches at D0

    International Nuclear Information System (INIS)

    Robinson, S.

    2009-01-01

    In some Supersymmetric extensions of the Standard Model, including the Minimal Supersymmetric Standard Model (MSSM), the coupling of Higgs bosons to b-quarks is enhanced. This enhancement makes the associated production of the Higgs with b-quarks an interesting search channel for the Higgs and Supersymmetry at D0. The identification of b-quarks, both online and offline, is essential to this search effort. This thesis describes the author's involvement in the development of both types of b-tagging and in the application of these techniques to the MSSM Higgs search. Work was carried out on the Level-3 trigger b-tagging algorithms. The impact parameter (IP) b-tagger was retuned and the effects of increased instantaneous luminosity on the tagger were studied. An extension of the IP-tagger to use the z-tracking information was developed. A new b-tagger using secondary vertices was developed and commissioned. A tool was developed to allow the use of large multi-run samples for trigger studies involving b-quarks. Offline, a neural network (NN) b-tagger was trained combining the existing offline lifetime based b-tagging tools. The efficiency and fake rate of the NN b-tagger were measured in data and MC. This b-tagger was internally reviewed and certified by the Collaboration and now provides the official b-tagging for all analyses using the Run IIa dataset at D0. A search was performed for neutral MSSM Higgs bosons decaying to a bb̄ pair and produced in association with one or more b-quarks. Limits are set on the cross-section times the branching ratio for such a process. The limits were interpreted in various MSSM scenarios. This analysis uses the NN b-tagger and was the first to use this tool. The analysis also relies on triggers using the Level-3 IP b-tagging tool described previously. A likelihood discriminant was used to improve the analysis and a neural network was developed to cross-check this technique. The result of the analysis has been submitted to PRL and

  2. A Review of Previous Studies on Information Processing in Career Decision Making among University Students

    OpenAIRE

    池田, 智子; Satoko, Ikeda

    2018-01-01

    This review of research on the career choices of Japanese university students focused on studies of decision-making theory conducted in Japan. The review suggests the need to examine the effect of self-efficacy for career information search on the process of career choice. The relationship between specific self-efficacy for career information search, career decision-making self-efficacy and, moreover, general self-efficacy also needs to be examined.

  3. Localized saddle-point search and application to temperature-accelerated dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yunsic; Amar, Jacques G. [Department of Physics and Astronomy, University of Toledo, Toledo, Ohio 43606 (United States); Callahan, Nathan B. [Department of Physics and Astronomy, University of Toledo, Toledo, Ohio 43606 (United States); Department of Physics, Indiana University, Bloomington, Indiana 47405 (United States)

    2013-03-07

    We present a method for speeding up temperature-accelerated dynamics (TAD) simulations by carrying out a localized saddle-point (LSAD) search. In this method, instead of using the entire system to determine the energy barriers of activated processes, the calculation is localized by only including a small chunk of atoms around the atoms directly involved in the transition. Using this method, we have obtained N-independent scaling for the computational cost of the saddle-point search as a function of system size N. The error arising from localization is analyzed using a variety of model systems, including a variety of activated processes on Ag(100) and Cu(100) surfaces, as well as multiatom moves in Cu radiation damage and metal heteroepitaxial growth. Our results show significantly improved performance of TAD with the LSAD method, for the case of Ag/Ag(100) annealing and Cu/Cu(100) growth, while maintaining a negligibly small error in energy barriers.
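
    The core of the localized search described above is carving out the "small chunk of atoms" around those directly involved in a transition. One simple way to do this is a distance cutoff around the active atoms, as in the sketch below; the cutoff value and data layout are assumptions for illustration, and the paper's actual selection scheme (and any periodic-boundary handling) may differ.

```python
import numpy as np

def local_chunk(positions, active_indices, cutoff):
    """Indices of all atoms within `cutoff` of any atom directly involved
    in the transition (the 'active' atoms), including the active atoms.
    No periodic boundary conditions are handled in this toy example."""
    active = positions[list(active_indices)]                 # (m, 3)
    # distance of every atom to its nearest active atom
    d = np.linalg.norm(positions[:, None, :] - active[None, :, :], axis=-1).min(axis=1)
    return np.flatnonzero(d <= cutoff)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    positions = rng.uniform(0.0, 20.0, size=(500, 3))        # toy configuration
    chunk = local_chunk(positions, active_indices=[0, 1], cutoff=5.0)
    print(len(chunk), "atoms kept out of", len(positions))
```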

  4. Examining Decision-Making Regarding Environmental Information

    Energy Technology Data Exchange (ETDEWEB)

    Marble, Julie Lynne; Medema, Heather Dawne; Hill, Susan Gardiner

    2001-10-01

    Eight participants were asked to view a computer-based multimedia presentation on an environmental phenomenon. Participants were asked to play a role as a senior aide to a national legislator. In this role, they were told that the legislator had asked them to review a multimedia presentation regarding the hypoxic zone phenomenon in the Gulf of Mexico. Their task in assuming the role of a senior aide was to decide how important a problem this issue was to the United States as a whole, and the proportion of the legislator’s research budget that should be devoted to study of the problem. The presentation was divided into 7 segments, each containing some new information not contained in the previous segments. After viewing each segment, participants were asked to indicate how close they were to making a decision and how certain they were that their current opinion would be their final decision. After indicating their current state of decision-making, participants were interviewed regarding the factors affecting their decision-making. Of interest was the process by which participants moved toward a decision. This experiment revealed a number of possible directions for future research. There appeared to be two approaches to decision-making: some decision-makers moved steadily toward a decision, and occasionally reversed decisions after viewing information, while others abruptly reached a decision after a certain time period spent reviewing the information. Although estimates of distance to a decision did not differ statistically between these two groups, a difference was reflected in the participants’ estimates of confidence that their current opinion would be their final decision. The interviews revealed that the primary difference between these two groups was in their trade-offs between willingness to spend time in information search and the acquisition of new information. Participants who were less confident about their final decision tended to be

  5. Children's Visual Scanning of Textual Documents: Effects of Document Organization, Search Goals, and Metatextual Knowledge

    Science.gov (United States)

    Potocki, Anna; Ros, Christine; Vibert, Nicolas; Rouet, Jean-François

    2017-01-01

    This study examines children's strategies when scanning a document to answer a specific question. More specifically, we wanted to know whether they make use of organizers (i.e., headings) when searching and whether strategic search is related to their knowledge of reading strategies. Twenty-six French fifth graders were asked to search single-page…

  6. The effectiveness of search dogs compared with humans in searching difficult terrain at turbine sites for bat fatalities

    Energy Technology Data Exchange (ETDEWEB)

    Mathews, Fiona

    2011-07-01

    Full text: Many wind farms in the UK and elsewhere in northern Europe are situated in habitat with dense tall vegetation such as arable fields and upland heaths. This makes surveying for bat fatalities extremely difficult. To facilitate a multi-centre study of the effects of wind turbines on British bats, we have therefore conducted controlled trials of the relative success of trained search dogs and ecologists in retrieving bat carcasses. Although dogs have been used previously in ecological surveys for bats, this is the first time they have been specifically trained for use in 'difficult to survey' habitats. Two ecologists and two Labrador dogs with handlers were each given the opportunity to retrieve up to 45 bat carcasses in a range of habitat types. Their efficiency in terms of overall search time, costs, and retrieval abilities was evaluated. Our results indicate that high rates of retrieval can be achieved by dogs, even in dense vegetation up to 75 cm high. Further, a typical 100 m² search area can be surveyed in less than half the time taken by humans. The limitations of using search dogs, and their ability to detect the presence of bats that have been scavenged, are also presented (presentation supported with video footage). (Author)

  7. Search strategies on the Internet: general and specific.

    Science.gov (United States)

    Bottrill, Krys

    2004-06-01

    Some of the most up-to-date information on scientific activity is to be found on the Internet; for example, on the websites of academic and other research institutions and in databases of currently funded research studies provided on the websites of funding bodies. Such information can be valuable in suggesting new approaches and techniques that could be applicable in a Three Rs context. However, the Internet is a chaotic medium, not subject to the meticulous classification and organisation of classical information resources. At the same time, Internet search engines do not match the sophistication of search systems used by database hosts. Also, although some offer relatively advanced features, user awareness of these tends to be low. Furthermore, much of the information on the Internet is not accessible to conventional search engines, giving rise to the concept of the "Invisible Web". General strategies and techniques for Internet searching are presented, together with a comparative survey of selected search engines. The question of how the Invisible Web can be accessed is discussed, as well as how to keep up-to-date with Internet content and improve searching skills.

  8. Prospect Theory and the Risks Involved in Decision-Making: Content Analysis in ProQuest Articles

    Directory of Open Access Journals (Sweden)

    Sady Darcy da Silva-Junior

    2016-04-01

    The objective of this study is to perform a content analysis of articles from a reliable database dealing with prospect theory and the risks involved in the decision-making process, evaluating criteria for the theoretical and methodological approaches that allow a joint, comparative analysis. A search of the ProQuest database was therefore performed, which returned 15 articles; these were submitted to a content analysis process based on the evaluation of nine factors identified by the researchers. The results highlight a critical attitude toward prospect theory, in contrast to assertions of its capacity to represent real situations and its applicability in a variety of settings.

  9. Search Patterns

    CERN Document Server

    Morville, Peter

    2010-01-01

    What people are saying about Search Patterns "Search Patterns is a delight to read -- very thoughtful and thought provoking. It's the most comprehensive survey of designing effective search experiences I've seen." --Irene Au, Director of User Experience, Google "I love this book! Thanks to Peter and Jeffery, I now know that search (yes, boring old yucky who cares search) is one of the coolest ways around of looking at the world." --Dan Roam, author, The Back of the Napkin (Portfolio Hardcover) "Search Patterns is a playful guide to the practical concerns of search interface design. It cont

  10. How social information affects information search and choice in probabilistic inferences.

    Science.gov (United States)

    Puskaric, Marin; von Helversen, Bettina; Rieskamp, Jörg

    2018-01-01

    When making decisions, people are often exposed to relevant information stemming from qualitatively different sources. For instance, when making a choice between two alternatives people can rely on the advice of other people (i.e., social information) or search for factual information about the alternatives (i.e., non-social information). Prior research in categorization has shown that social information is given special attention when both social and non-social information is available, even when the social information has no additional informational value. The goal of the current work is to investigate whether framing information as social or non-social also influences information search and choice in probabilistic inferences. In a first study, we found that framing cues (i.e., the information used to make a decision) with medium validity as social increased the probability that they were searched for compared to a task where the same cues were framed as non-social information, but did not change the strategy people relied on. A second and a third study showed that framing a cue with high validity as social information facilitated learning to rely on a non-compensatory decision strategy. Overall, the results suggest that social in comparison to non-social information is given more attention and is learned faster than non-social information. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Application of neural network to multi-dimensional design window search in reactor core design

    International Nuclear Information System (INIS)

    Kugo, Teruhiko; Nakagawa, Masayuki

    1999-01-01

    In reactor core design, many parametric survey calculations must be carried out to decide an optimal set of basic design parameter values; done conventionally, they consume a large amount of computation time and labor. To support design work, we investigate a procedure for efficiently searching a design window, defined as the feasible design parameter ranges satisfying design criteria and requirements, in a multi-dimensional space composed of several basic design parameters. The method is applied to the neutronics and thermal hydraulics fields. Its principle is to train a multilayer neural network to quickly simulate the response of an analysis code, and then to reduce computation time by using the neural network in place of parametric studies with the analysis codes. To verify the applicability of the method to neutronics and thermal hydraulics design, we applied it to high conversion water reactors and examined the effects of the network structure and the number of teaching patterns on the accuracy of the design window estimated by the neural network. From the results of these applications, a guideline for applying the method is proposed; by following it, the method can predict an appropriate design window in a reasonable computation time. (author)
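
    A minimal sketch of the surrogate idea described in this record, with a generic regressor standing in for the reactor analysis code; the "analysis code" below is a toy function and the feasibility criterion is invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def analysis_code(x):
    """Stand-in for an expensive neutronics/thermal-hydraulics calculation."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

# 1. Train a neural-network surrogate on a modest number of code runs.
rng = np.random.default_rng(1)
X_train = rng.uniform(-2.0, 2.0, size=(200, 2))     # two basic design parameters
y_train = analysis_code(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=1)
surrogate.fit(X_train, y_train)

# 2. Scan a fine parameter grid with the cheap surrogate instead of the code.
g = np.linspace(-2.0, 2.0, 101)
grid = np.array([(a, b) for a in g for b in g])
pred = surrogate.predict(grid)

# 3. The "design window" is the region where the predicted response meets a criterion.
design_window = grid[pred < 0.5]                     # illustrative feasibility limit
print(f"{len(design_window)} of {len(grid)} grid points fall inside the window")
```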

  12. Personalized Search

    CERN Document Server

    AUTHOR|(SzGeCERN)749939

    2015-01-01

    As the volume of electronically available information grows, relevant items become harder to find. This work presents an approach to personalizing search results in scientific publication databases, focusing on re-ranking search results from existing search engines such as Solr or ElasticSearch. It also includes the development of Obelix, a new recommendation system used to re-rank search results. The project was proposed and performed at CERN, using the scientific publications available on the CERN Document Server (CDS), and experiments with re-ranking through offline and online evaluation of users and documents in CDS. The experiments conclude that the personalized search results outperform both latest-first and word-similarity ranking in terms of click position in the results of global searches in CDS.

  13. Mastering Search Analytics Measuring SEO, SEM and Site Search

    CERN Document Server

    Chaters, Brent

    2011-01-01

    Many companies still approach Search Engine Optimization (SEO) and paid search as separate initiatives. This in-depth guide shows you how to use these programs as part of a comprehensive strategy: not just to improve your site's search rankings, but to attract the right people and increase your conversion rate. Learn how to measure, test, analyze, and interpret all of your search data with a wide array of analytic tools. Gain the knowledge you need to determine the strategy's return on investment. Ideal for search specialists, webmasters, and search marketing managers, Mastering Search Analyt

  14. An information search model for online social Networks - MOBIRSE

    Directory of Open Access Journals (Sweden)

    Miguel Angel Niño Zambrano

    2015-09-01

    Online Social Networks (OSNs) have been gaining great importance among Internet users in recent years. These are sites where it is possible to meet people and to publish and share content easily and free of charge. As a result, the volume of information contained in these websites has grown exponentially, and web search has consequently become an important tool for users to easily find information relevant to their social networking objectives. Making use of ontologies and user profiles can make these searches more effective. This article presents a model for Information Retrieval in OSNs (MOBIRSE) based on user profiles and ontologies, which aims to improve the relevance of the retrieved information on these websites. The social network Facebook was chosen as the case study and as the instance of the proposed model. The model was validated using measures such as precision at k and the Kappa statistic to assess its efficiency.
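
    For reference, the precision-at-k measure mentioned above can be computed as in this short sketch (standard definition, not the MOBIRSE implementation; the identifiers are invented).

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = ranked_ids[:k]
    relevant = set(relevant_ids)
    return sum(1 for doc in top_k if doc in relevant) / k if k else 0.0

if __name__ == "__main__":
    retrieved = ["p4", "p1", "p9", "p2", "p7"]
    relevant = {"p1", "p2", "p3"}
    print(precision_at_k(retrieved, relevant, k=3))   # 1 relevant in top 3 -> 0.333...
```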

  15. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  16. Assessment of hydrogen fuel cell applications using fuzzy multiple-criteria decision making method

    International Nuclear Information System (INIS)

    Chang, Pao-Long; Hsu, Chiung-Wen; Lin, Chiu-Yue

    2012-01-01

    Highlights: ► This study uses the fuzzy MCDM method to assess hydrogen fuel cell applications. ► We evaluate seven different hydrogen fuel cell applications based on 14 criteria. ► Results show that fuel cell backup power systems should be chosen for development in Taiwan. -- Abstract: Assessment is an essential process in framing government policy. It is critical to select the appropriate targets to meet the needs of national development. This study aimed to develop an assessment model for evaluating hydrogen fuel cell applications and thus provide a screening tool for decision makers. This model operates by selecting evaluation criteria, determining criteria weights, and assessing the performance of hydrogen fuel cell applications for each criterion. The fuzzy multiple-criteria decision making method was used to select the criteria and the preferred hydrogen fuel cell products based on information collected from a group of experts. Survey questionnaires were distributed to collect opinions from experts in different fields. After the survey, the criteria weights and a ranking of alternatives were obtained. The study first defined the evaluation criteria in terms of the stakeholders, so that comprehensive influence criteria could be identified. These criteria were then classified as environmental, technological, economic, or social to indicate the purpose of each criterion in the assessment process. The selected criteria included 14 indicators, such as energy efficiency and CO2 emissions, as well as seven hydrogen fuel cell applications, such as forklifts and backup power systems. The results show that fuel cell backup power systems rank the highest, followed by household fuel cell electric-heat composite systems. The model provides a screening tool for decision makers to select hydrogen-related applications.
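
    The assessment procedure described above (criteria weights plus per-criterion performance ratings aggregated by a fuzzy MCDM method) can be illustrated with a small sketch using triangular fuzzy numbers and centroid defuzzification. The ratings, weights and the specific aggregation are illustrative assumptions, not the study's actual model.

```python
# Triangular fuzzy number represented as (low, mid, high).

def tfn_scale(t, w):
    return (t[0] * w, t[1] * w, t[2] * w)

def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def defuzzify(t):
    """Centroid of a triangular fuzzy number."""
    return sum(t) / 3.0

def fuzzy_weighted_score(ratings, weights):
    """Weighted sum of triangular fuzzy ratings, then defuzzified to a crisp score."""
    total = (0.0, 0.0, 0.0)
    for r, w in zip(ratings, weights):
        total = tfn_add(total, tfn_scale(r, w))
    return defuzzify(total)

if __name__ == "__main__":
    # Two hypothetical applications rated on three criteria (0-10 linguistic scale).
    weights = [0.5, 0.3, 0.2]
    backup_power = [(7, 8, 9), (6, 7, 8), (5, 6, 7)]
    forklift     = [(5, 6, 7), (7, 8, 9), (6, 7, 8)]
    for name, ratings in [("backup power", backup_power), ("forklift", forklift)]:
        print(name, round(fuzzy_weighted_score(ratings, weights), 2))
```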

  17. New Searching Capability and OpenURL Linking in the ADS

    Science.gov (United States)

    Eichhorn, Guenther; Accomazzi, A.; Grant, C. S.; Henneken, E.; Kurtz, M. J.; Thompson, D. M.; Murray, S. S.

    2006-12-01

    The ADS is the search system of choice for the astronomical community. It also covers a large part of the physics and physics/astronomy education literature. In order to make access to this system as easy as possible, we developed a Google-like interface version of our search form. This one-field search parses the user input and automatically detects author names and year ranges. Firefox users can set up their browser to have this search field installed in the top right corner search field to have even easier access to the ADS search capability. The basic search is available from the ADS Homepage at: http://adsabs.harvard.edu To aid with access to subscription journals the ADS now supports OpenURL linking. If your library supports an OpenURL server, you can specify this server in the ADS preference settings. All links to journal articles will then automatically be directed to the OpenURL with the appropriate link information. We provide a selection of known OpenURL servers to choose from. If your server is not in this list, please send the necessary information to ads@cfa.harvard.edu and we will include it in our list. The ADS is funded by NASA grant NNG06GG68G.
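
    The one-field search described here hinges on picking author names and year ranges out of free text. A toy parser along those lines is sketched below; the heuristics are ours and are certainly cruder than the actual ADS parser.

```python
import re

def parse_query(q):
    """Split a free-text query into a year range, likely author names and keywords.

    Heuristics (illustrative only): 'YYYY-YYYY' or a lone 'YYYY' is a year range;
    'Last, F.' patterns are authors; everything else is treated as keywords.
    """
    years = None
    m = re.search(r"\b(1[89]\d{2}|20\d{2})(?:\s*-\s*(1[89]\d{2}|20\d{2}))?\b", q)
    if m:
        years = (int(m.group(1)), int(m.group(2) or m.group(1)))
        q = q[:m.start()] + q[m.end():]

    authors = re.findall(r"\b([A-Z][a-z]+),\s*[A-Z]\.?", q)
    for a in authors:
        q = re.sub(r"\b" + a + r",\s*[A-Z]\.?", " ", q)

    keywords = [w for w in re.split(r"[\s,]+", q) if w]
    return {"years": years, "authors": authors, "keywords": keywords}

if __name__ == "__main__":
    print(parse_query("Kurtz, M. citation statistics 2004-2006"))
    # {'years': (2004, 2006), 'authors': ['Kurtz'], 'keywords': ['citation', 'statistics']}
```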

  18. OERScout Technology Framework: A Novel Approach to Open Educational Resources Search

    OpenAIRE

    Ishan Sudeera Abeywardena; Chee Seng Chan; Choy Yoong Tham

    2013-01-01

    The open educational resources (OER) movement has gained momentum in the past few years. With this new drive towards making knowledge open and accessible, a large number of OER repositories have been established and made available online throughout the world. However, the inability of existing search engines such as Google, Yahoo!, and Bing to effectively search for useful OER which are of acceptable academic standard for teaching purposes is a major factor contributing to the slow uptake of ...

  19. Searches for new resonances decaying into bosons with the ATLAS detector

    CERN Document Server

    Shaw, Savanna Marie; The ATLAS collaboration

    2016-01-01

    Many extensions of the Standard Model predict new particles decaying into two bosons (WW, WZ, ZZ, W/Zγ, W/ZH and HH), making this a smoking-gun signature. Searches for such diboson resonances have been performed in final states with different numbers of leptons, photons and jets, using new identification techniques to disentangle the decay products in highly boosted configurations. This talk summarizes ATLAS searches for diboson resonances with LHC Run 2 data.

  20. Search Help

    Science.gov (United States)

    Guidance and search help resource listing examples of common queries that can be used in a Google Search Appliance search request, including examples of special characters and query term separators that the Google Search Appliance recognizes.

  1. Search route decision of environmental monitoring at emergency time

    International Nuclear Information System (INIS)

    Aoyama, Isao

    1979-01-01

    The search route decision method is reviewed, in particular the time-dependent deployment of monitors moved across a horizontal area to gather information after an abnormal release of radioactive material has been confirmed. The history of search theory is outlined, from the naval anti-submarine operations of World War II to salvage activities and search problems at sea. The kinematics of search, the probability theory of detection and the optimum distribution of search effort are the aspects of search theory most relevant to emergency environmental monitoring. A search model combines the characteristics of the targets, the characteristics of the observers and the criterion of optimality. Target characteristics comprise the search space, the number of targets, the way targets appear and their motion. Observer characteristics comprise the number of observers, the divisibility of search effort, the credibility of search information and the search process. The optimality criterion may be the maximum probability of detection, the minimum expected risk or other measures. Each element of the search model is explained. In formulating the search model, the theoretical equations for detection probability, discovery potential and instantaneous detection probability density are derived, evaluated and explained. The future plan is to advance this search technology so as to evaluate the detection potential when deciding the route of a monitoring car around a nuclear power plant under accident conditions. (Nakai, Y.)
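
    The abstract does not reproduce the equations themselves, but the classical random-search result from this body of theory gives their flavor. Assuming a sensor of sweep width W moving at speed v for time t over a region of area A in which the target is uniformly distributed, the standard (Koopman) detection probability and instantaneous detection rate are as follows; this is a textbook result quoted for illustration, not necessarily the exact formulation derived in the reviewed work.

```latex
% Classical random-search detection probability (standard result, cited here for
% illustration; not necessarily the formulation used in the reviewed paper).
\[
  P_{\mathrm{det}}(t) \;=\; 1 - \exp\!\left(-\frac{W\,v\,t}{A}\right),
  \qquad
  \gamma \;=\; \frac{W\,v}{A} \;\;\text{(instantaneous detection rate)}.
\]
```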

  2. Searches for very rare decays of kaons

    International Nuclear Information System (INIS)

    Lang, K.

    1997-01-01

    The physics motivation for searches for very rare kaon decays, either forbidden or suppressed within the Standard Model, is briefly discussed. Simple arguments conclude that such searches probe possible new forces at a 200 TeV mass scale or constitute a precision test of the electroweak model. Examples of such processes are the decays K_L^0 → μ±e∓, K+ → π+μ+e-, K_L^0 → μ+μ-, and K+ → π+ν ν̄. We present the current experimental status and describe the new efforts to reach sensitivities down to one part in 10^12. The discussion is focused on the experimental program at the Alternating Gradient Synchrotron at Brookhaven National Laboratory, where intense beams make such studies possible.

  3. Searches for very rare decays of kaons

    International Nuclear Information System (INIS)

    Lang, K.

    1995-01-01

    The physics motivation for searches for very rare kaon decays, either forbidden or suppressed within the Standard Model, is briefly discussed. Simple arguments conclude that such searches probe possible new forces at a 200 TeV mass scale or constitute a precision test of the electroweak model. The examples of such processes are the decays K_L^0 → μ±e∓, K+ → π+μ+e-, K_L^0 → μ+μ-, and K+ → π+ν ν̄. We present the current experimental status and describe the new efforts to reach sensitivities down to 1 part in 10^12. The discussion is focused on the experimental program at the Alternating Gradient Synchrotron at Brookhaven National Laboratory, where intense beams make such studies possible.

  4. Quantifying the semantics of search behavior before stock market moves.

    Science.gov (United States)

    Curme, Chester; Preis, Tobias; Stanley, H Eugene; Moat, Helen Susannah

    2014-08-12

    Technology is becoming deeply interwoven into the fabric of society. The Internet has become a central source of information for many people when making day-to-day decisions. Here, we present a method to mine the vast data Internet users create when searching for information online, to identify topics of interest before stock market moves. In an analysis of historic data from 2004 until 2012, we draw on records from the search engine Google and online encyclopedia Wikipedia as well as judgments from the service Amazon Mechanical Turk. We find evidence of links between Internet searches relating to politics or business and subsequent stock market moves. In particular, we find that an increase in search volume for these topics tends to precede stock market falls. We suggest that extensions of these analyses could offer insight into large-scale information flow before a range of real-world events.

  5. Fast Multi-blind Modification Search through Tandem Mass Spectrometry*

    Science.gov (United States)

    Na, Seungjin; Bandeira, Nuno; Paek, Eunok

    2012-01-01

    With great biological interest in post-translational modifications (PTMs), various approaches have been introduced to identify PTMs using MS/MS. Recent developments for PTM identification have focused on an unrestrictive approach that searches MS/MS spectra for all known and possibly even unknown types of PTMs at once. However, the resulting expanded search space requires much longer search time and also increases the number of false positives (incorrect identifications) and false negatives (missed true identifications), thus creating a bottleneck in high throughput analysis. Here we introduce MODa, a novel “multi-blind” spectral alignment algorithm that allows for fast unrestrictive PTM searches with no limitation on the number of modifications per peptide while featuring over an order of magnitude speedup in relation to existing approaches. We demonstrate the sensitivity of MODa on human shotgun proteomics data where it reveals multiple mutations, a wide range of modifications (including glycosylation), and evidence for several putative novel modifications. Based on the reported findings, we argue that the efficiency and sensitivity of MODa make it the first unrestrictive search tool with the potential to fully replace conventional restrictive identification of proteomics mass spectrometry data. PMID:22186716

  6. Introducing StatHand: A cross-platform mobile application to support students’ statistical decision making

    Directory of Open Access Journals (Sweden)

    Peter James Allen

    2016-02-01

    Full Text Available Although essential to professional competence in psychology, quantitative research methods are a known area of weakness for many undergraduate psychology students. Students find selecting appropriate statistical tests and procedures for different types of research questions, hypotheses and data types particularly challenging, and these skills are not often practiced in class. Decision trees (a type of graphic organizer) are known to facilitate this decision making process, but extant trees have a number of limitations. Furthermore, emerging research suggests that mobile technologies offer many possibilities for facilitating learning. It is within this context that we have developed StatHand, a free cross-platform application designed to support students’ statistical decision making. Developed with the support of the Australian Government Office for Learning and Teaching, StatHand guides users through a series of simple, annotated questions to help them identify a statistical test or procedure appropriate to their circumstances. It further offers the guidance necessary to run these tests and procedures, then interpret and report their results. In this Technology Report we will overview the rationale behind StatHand, before describing the feature set of the application. We will then provide guidelines for integrating StatHand into the research methods curriculum, before concluding by outlining our road map for the ongoing development and evaluation of StatHand.

  7. Search and Tracking of an Unknown Number of Targets by a Team of Autonomous Agents Utilizing Time-evolving Partition Classification

    OpenAIRE

    Wood, Jared Gregory

    2011-01-01

    The advancement of computing technology has enabled the practical development of intelligent autonomous systems. Intelligent autonomous systems can be used to perform difficult sensing tasks. One such sensing task is to search for and track targets over large geographic areas. Searching for and tracking targets over geographic areas has important applications. These applications include search and rescue, border patrol, and reconnaissance. Inherent in applications such as these is the need ...

  8. Searching for better plasmonic materials

    DEFF Research Database (Denmark)

    West, P.; Ishii, S.; Naik, G.

    2010-01-01

    Plasmonics is a research area merging the fields of optics and nanoelectronics by confining light with relatively large free-space wavelength to the nanometer scale - thereby enabling a family of novel devices. Current plasmonic devices at telecommunication and optical frequencies face significan...... for realizing optimal plasmonic material properties for specific frequencies and applications, thereby providing a reference for those searching for better plasmonic materials....

  9. Tales from the Field: Search Strategies Applied in Web Searching

    Directory of Open Access Journals (Sweden)

    Soohyung Joo

    2010-08-01

    Full Text Available In their web search processes users apply multiple types of search strategies, which consist of different search tactics. This paper identifies eight types of information search strategies with associated cases based on sequences of search tactics during the information search process. Thirty-one participants representing the general public were recruited for this study. Search logs and verbal protocols offered rich data for the identification of different types of search strategies. Based on the findings, the authors further discuss how to enhance web-based information retrieval (IR systems to support each type of search strategy.

  10. Pose estimation for augmented reality applications using genetic algorithm.

    Science.gov (United States)

    Yu, Ying Kin; Wong, Kin Hong; Chang, Michael Ming Yuen

    2005-12-01

    This paper describes a genetic algorithm that tackles the pose-estimation problem in computer vision. Our genetic algorithm can find the rotation and translation of an object accurately when the three-dimensional structure of the object is given. In our implementation, each chromosome encodes both the pose and the indexes to the selected point features of the object. Instead of only searching for the pose as in the existing work, our algorithm, at the same time, searches for a set containing the most reliable feature points in the process. This mismatch filtering strategy successfully makes the algorithm more robust under the presence of point mismatches and outliers in the images. Our algorithm has been tested with both synthetic and real data with good results. The accuracy of the recovered pose is compared to that of existing algorithms. Our approach outperformed Lowe's method and the other two genetic algorithms under the presence of point mismatches and outliers. In addition, it has been used to estimate the pose of a real object. It is shown that the proposed method is applicable to augmented reality applications.
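
    As an illustration of the chromosome layout described above (pose genes plus a selection mask over the feature points), the sketch below shows one way such a fitness function could look. It is a hypothetical reconstruction, not the authors' implementation; the Euler-angle convention, focal length and minimum-inlier count are assumptions.

        import numpy as np

        def euler_to_rot(rx, ry, rz):
            """Rotation matrix from Euler angles (radians), Z*Y*X convention."""
            cx, sx = np.cos(rx), np.sin(rx)
            cy, sy = np.cos(ry), np.sin(ry)
            cz, sz = np.cos(rz), np.sin(rz)
            Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            return Rz @ Ry @ Rx

        def fitness(chromosome, model_pts, image_pts, focal=800.0, min_inliers=4):
            """Chromosome = 6 pose genes (rx, ry, rz, tx, ty, tz) followed by one
            selection bit per feature point.  Fitness is the mean reprojection error
            over the selected ("trusted") points only, so mismatched points and
            outliers can be dropped by the genetic search."""
            pose, select = chromosome[:6], chromosome[6:].astype(bool)
            if select.sum() < min_inliers:
                return np.inf
            R = euler_to_rot(*pose[:3])
            t = pose[3:6]
            cam = (R @ model_pts[select].T).T + t        # model -> camera frame
            proj = focal * cam[:, :2] / cam[:, 2:3]      # pinhole projection
            return np.mean(np.linalg.norm(proj - image_pts[select], axis=1))

    A genetic algorithm would then evolve a population of such chromosomes, mutating both the pose genes and the selection bits.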

  11. Nuclear weapons decision-making; an application of organization theory to the mini-nuke case

    International Nuclear Information System (INIS)

    Kangas, J.L.

    1985-01-01

    This dissertation addresses the problem of constructing and developing normative theory responsive to the need for improving the quality of decision-making in nuclear weapons policy-making. Against the background of a critical evaluation of various paradigms in the literature (systems analysis and opposed-systems design, the bureaucratic politics model, and the cybernetic theory of decision), an attempt is made to design an alternative analytic framework based on the writings of numerous organization theorists such as Herbert Simon and Kenneth Arrow. The framework is applied to the case of mini-nukes, i.e., proposals in the mid-1970s to develop and deploy tens of thousands of very low-yield (sub-kiloton), miniaturized fission weapons in NATO. The term heuristic case study identifies the type of study undertaken in the dissertation, in contrast to the more familiar paradigmatic studies identified, for example, with the Harvard Weapons Project. Application of the analytic framework to the mini-nuke case resulted in an empirical understanding of why decision-making concerning tactical nuclear weapons has been such a complex task and why force modernization issues in particular have been so controversial and lacking in policy resolution

  12. Internet Search Engines

    OpenAIRE

    Fatmaa El Zahraa Mohamed Abdou

    2004-01-01

    A general study of Internet search engines. The study addresses seven main points: the difference between search engines and search directories, the components of search engines, the percentage of sites covered by search engines, the cataloguing of sites, the time needed for sites to appear in search engines, search capabilities, and types of search engines.

  13. Pressurized water reactor in-core nuclear fuel management by tabu search

    International Nuclear Information System (INIS)

    Hill, Natasha J.; Parks, Geoffrey T.

    2015-01-01

    Highlights: • We develop a tabu search implementation for PWR reload core design. • We conduct computational experiments to find optimal parameter values. • We test the performance of the algorithm on two representative PWR geometries. • We compare this performance with that given by established optimization methods. • Our tabu search implementation outperforms these methods in all cases. - Abstract: Optimization of the arrangement of fuel assemblies and burnable poisons when reloading pressurized water reactors has, in the past, been performed with many different algorithms in an attempt to make reactors more economic and fuel efficient. The use of the tabu search algorithm in tackling reload core design problems is investigated further here after limited, but promising, previous investigations. The performance of the tabu search implementation developed was compared with established genetic algorithm and simulated annealing optimization routines. Tabu search outperformed these existing programs for a number of different objective functions on two different representative core geometries
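
    For readers unfamiliar with the method, the sketch below shows the generic structure of a tabu search (tabu list, neighborhood moves, aspiration criterion) applied to a toy permutation problem. It is illustrative only and does not reproduce the reload-core implementation developed in the paper; the tenure and iteration counts are arbitrary assumptions.

        import random

        def tabu_search(initial, objective, neighbors, tabu_tenure=7, max_iters=200):
            """Generic tabu search sketch (minimization).

            initial   -- starting solution (e.g. a loading-pattern permutation)
            objective -- function to minimize
            neighbors -- function yielding (move, candidate) pairs for a solution
            A move stays tabu for `tabu_tenure` iterations; a tabu move is still
            accepted if it improves on the best solution found so far (aspiration).
            """
            best = current = initial
            best_val = objective(best)
            tabu = {}  # move -> last iteration at which it is still tabu
            for it in range(max_iters):
                candidates = []
                for move, cand in neighbors(current):
                    val = objective(cand)
                    if tabu.get(move, -1) >= it and val >= best_val:
                        continue  # tabu and no aspiration
                    candidates.append((val, move, cand))
                if not candidates:
                    break
                val, move, current = min(candidates, key=lambda c: c[0])
                tabu[move] = it + tabu_tenure
                if val < best_val:
                    best, best_val = current, val
            return best, best_val

        # Toy usage: sort a permutation by pairwise swaps, minimizing misplaced items.
        def swap_neighbors(perm):
            for i in range(len(perm)):
                for j in range(i + 1, len(perm)):
                    cand = list(perm)
                    cand[i], cand[j] = cand[j], cand[i]
                    yield (i, j), tuple(cand)

        target = tuple(range(8))
        start = tuple(random.sample(range(8), 8))
        best, val = tabu_search(start, lambda p: sum(a != b for a, b in zip(p, target)),
                                swap_neighbors)
        print(best, val)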

  14. University Students' Online Information Searching Strategies in Different Search Contexts

    Science.gov (United States)

    Tsai, Meng-Jung; Liang, Jyh-Chong; Hou, Huei-Tse; Tsai, Chin-Chung

    2012-01-01

    This study investigates the role of search context played in university students' online information searching strategies. A total of 304 university students in Taiwan were surveyed with questionnaires in which two search contexts were defined as searching for learning, and searching for daily life information. Students' online search strategies…

  15. Application of multiple tabu search algorithm to solve dynamic economic dispatch considering generator constraints

    International Nuclear Information System (INIS)

    Pothiya, Saravuth; Ngamroo, Issarachai; Kongprawechnon, Waree

    2008-01-01

    This paper presents a new optimization technique based on a multiple tabu search algorithm (MTS) to solve the dynamic economic dispatch (ED) problem with generator constraints. In the constrained dynamic ED problem, the load demand and spinning reserve capacity as well as practical operating constraints of generators, such as ramp rate limits and prohibited operating zones, are taken into consideration. The MTS algorithm introduces additional mechanisms such as initialization, adaptive searches, multiple searches, crossover and a restarting process. To show its efficiency, the MTS algorithm is applied to solve constrained dynamic ED problems for power systems with 6 and 15 units. The results obtained from the MTS algorithm are compared to those achieved with conventional approaches, such as simulated annealing (SA), the genetic algorithm (GA), the tabu search (TS) algorithm and particle swarm optimization (PSO). The experimental results show that the proposed MTS algorithm is able to obtain higher-quality solutions more efficiently and with less computational time than the conventional approaches
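
    The constraint handling described above can be made concrete with a small penalized objective. The sketch below is a hypothetical illustration of how a dynamic ED cost with power-balance, generation-limit, prohibited-zone and ramp-rate penalties might be evaluated; the field names, quadratic cost form and penalty scheme are assumptions, not the paper's exact formulation.

        def dispatch_cost(schedule, units, demand, penalty=1e6):
            """Penalized objective for dynamic economic dispatch (illustrative only).

            schedule -- list of dispatch vectors, one per time interval (MW per unit)
            units    -- per-unit dicts with cost coefficients a, b, c, limits pmin/pmax,
                        a ramp limit and optional prohibited operating zones
            demand   -- demanded load per interval (MW)
            """
            total = 0.0
            prev = None
            for t, p in enumerate(schedule):
                # quadratic fuel cost a + b*P + c*P^2 for each unit
                total += sum(u["a"] + u["b"] * pi + u["c"] * pi * pi
                             for u, pi in zip(units, p))
                # power balance violation
                total += penalty * abs(sum(p) - demand[t])
                for u, pi in zip(units, p):
                    # generation limits
                    total += penalty * max(0.0, u["pmin"] - pi, pi - u["pmax"])
                    # prohibited operating zones
                    for lo, hi in u.get("zones", []):
                        if lo < pi < hi:
                            total += penalty * min(pi - lo, hi - pi)
                if prev is not None:
                    # ramp-rate limits between consecutive intervals
                    for u, pi, pj in zip(units, p, prev):
                        total += penalty * max(0.0, abs(pi - pj) - u["ramp"])
                prev = p
            return total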

  16. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature.

    Science.gov (United States)

    Song, Michael M; Simonsen, Cheryl K; Wilson, Joanna D; Jenkins, Marjorie R

    2016-02-01

    An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH.

  17. User Oriented Trajectory Search for Trip Recommendation

    KAUST Repository

    Ding, Ruogu

    2012-07-08

    Trajectory sharing and searching have received significant attention in recent years. In this thesis, we propose and investigate the methods to find and recommend the best trajectory to the traveler, and mainly focus on a novel technique named User Oriented Trajectory Search (UOTS) query processing. In contrast to conventional trajectory search by locations (spatial domain only), we consider both spatial and textual domains in the new UOTS query. Given a trajectory data set, the query input contains a set of intended places given by the traveler and a set of textual attributes describing the traveler’s preference. If a trajectory is connecting/close to the specified query locations, and the textual attributes of the trajectory are similar to the traveler’s preference, it will be recommended to the traveler. This type of query can enable many popular applications such as trip planning and recommendation. There are two challenges in UOTS query processing, (i) how to constrain the searching range in two domains and (ii) how to schedule multiple query sources effectively. To overcome the challenges and answer the UOTS query efficiently, a novel collaborative searching approach is developed. Conceptually, the UOTS query processing is conducted in the spatial and textual domains alternately. A pair of upper and lower bounds are devised to constrain the searching range in two domains. In the meantime, a heuristic searching strategy based on priority ranking is adopted for scheduling the multiple query sources, which can further reduce the searching range and enhance the query efficiency notably. Furthermore, the devised collaborative searching approach can be extended to situations where the query locations are ordered. Extensive experiments are conducted on both real and synthetic trajectory data in road networks. Our approach is verified to be effective in reducing both CPU time and disk I/O time.

  18. User oriented trajectory search for trip recommendation

    KAUST Repository

    Shang, Shuo

    2012-01-01

    Trajectory sharing and searching have received significant attention in recent years. In this paper, we propose and investigate a novel problem called User Oriented Trajectory Search (UOTS) for trip recommendation. In contrast to conventional trajectory search by locations (spatial domain only), we consider both spatial and textual domains in the new UOTS query. Given a trajectory data set, the query input contains a set of intended places given by the traveler and a set of textual attributes describing the traveler's preference. If a trajectory is connecting/close to the specified query locations, and the textual attributes of the trajectory are similar to the traveler's preference, it will be recommended to the traveler for reference. This type of query can bring significant benefits to travelers in many popular applications such as trip planning and recommendation. There are two challenges in the UOTS problem, (i) how to constrain the searching range in two domains and (ii) how to schedule multiple query sources effectively. To overcome the challenges and answer the UOTS query efficiently, a novel collaborative searching approach is developed. Conceptually, the UOTS query processing is conducted in the spatial and textual domains alternately. A pair of upper and lower bounds are devised to constrain the searching range in the two domains. In the meantime, a heuristic searching strategy based on priority ranking is adopted for scheduling the multiple query sources, which can further reduce the searching range and enhance the query efficiency notably. Furthermore, the devised collaborative searching approach can be extended to situations where the query locations are ordered. The performance of the proposed UOTS query is verified by extensive experiments based on real and synthetic trajectory data in road networks. © 2012 ACM.
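
    To make the spatial-textual combination concrete, the toy scoring function below ranks a trajectory by the distance of the query places to their nearest trajectory points plus a keyword-dissimilarity term. It only illustrates the query semantics; the papers' actual contribution is the bound-based collaborative search that avoids scoring every trajectory exhaustively, and the weighting scheme here is an assumption.

        import math

        def uots_score(trajectory, query_points, query_terms, alpha=0.5):
            """Toy spatial-textual ranking score in the spirit of a UOTS query
            (smaller is better; not the authors' bound-based algorithm).

            trajectory   -- dict with "points" [(x, y), ...] and "terms" {keywords}
            query_points -- intended places given by the traveler
            query_terms  -- textual attributes describing the traveler's preference
            alpha        -- weight balancing the spatial and textual domains
            """
            # spatial part: each query place to its nearest trajectory point
            spatial = sum(
                min(math.dist(q, p) for p in trajectory["points"]) for q in query_points
            )
            # textual part: 1 - Jaccard overlap of keyword sets
            overlap = len(trajectory["terms"] & query_terms)
            union = len(trajectory["terms"] | query_terms) or 1
            textual = 1.0 - overlap / union
            return alpha * spatial + (1.0 - alpha) * textual

        traj = {"points": [(0, 0), (2, 1), (5, 3)],
                "terms": {"seafood", "museum", "budget"}}
        print(uots_score(traj, [(1, 1), (4, 3)], {"museum", "budget", "hiking"}))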

  19. Multiple Signal Classification for Gravitational Wave Burst Search

    Science.gov (United States)

    Cao, Junwei; He, Zhengqi

    2013-01-01

    This work is mainly focused on the application of the multiple signal classification (MUSIC) algorithm for gravitational wave burst search. This algorithm extracts important gravitational wave characteristics from signals coming from detectors with arbitrary position, orientation and noise covariance. In this paper, the MUSIC algorithm is described in detail along with the necessary adjustments required for gravitational wave burst search. The algorithm's performance is measured using simulated signals and noise. MUSIC is compared with the Q-transform for signal triggering and with Bayesian analysis for direction of arrival (DOA) estimation, using the Ω-pipeline. Experimental results show that MUSIC has a lower resolution but is faster. MUSIC is a promising tool for real-time gravitational wave search for multi-messenger astronomy.
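
    The sketch below shows the textbook narrowband MUSIC pseudospectrum for a uniform linear array (sample covariance, eigendecomposition, projection onto the noise subspace). It illustrates only the core idea of the algorithm; the detector geometries, noise covariances and signal models of the gravitational-wave application differ from this simplified setting, and the array parameters here are assumptions.

        import numpy as np

        def music_spectrum(X, num_sources, d_over_lambda=0.5,
                           angles_deg=np.linspace(-90, 90, 361)):
            """Classic narrowband MUSIC pseudospectrum for a uniform linear array.

            X -- complex array data, shape (num_sensors, num_snapshots)
            Peaks of the returned spectrum indicate estimated directions of arrival.
            """
            m, n = X.shape
            R = X @ X.conj().T / n                    # sample covariance matrix
            _, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
            En = eigvecs[:, : m - num_sources]        # noise-subspace eigenvectors
            k = np.arange(m)
            spectrum = []
            for theta in np.deg2rad(angles_deg):
                a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
                denom = np.linalg.norm(En.conj().T @ a) ** 2
                spectrum.append(1.0 / denom)
            return angles_deg, np.array(spectrum)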

  20. A bio-inspired swarm robot coordination algorithm for multiple target searching

    Science.gov (United States)

    Meng, Yan; Gan, Jing; Desai, Sachi

    2008-04-01

    The coordination of a multi-robot system searching for multiple targets is challenging in a dynamic environment, since the multi-robot system demands group coherence (agents need to have the incentive to work together faithfully) and group competence (agents need to know how to work together well). In our previously proposed bio-inspired coordination method, Local Interaction through Virtual Stigmergy (LIVS), one problem is the considerable randomness of the robot movement during coordination, which may lead to more power consumption and longer searching time. To address these issues, an adaptive LIVS (ALIVS) method is proposed in this paper, which not only considers the travel cost and target weight, but also predicts the target/robot ratio and potential robot redundancy with respect to the detected targets. Furthermore, a dynamic weight adjustment is also applied to improve the searching performance. This new method is a truly distributed method where each robot makes its own decision based on its local sensing information and the information from its neighbors. Basically, each robot only communicates with its neighbors through a virtual stigmergy mechanism and makes its local movement decision based on a Particle Swarm Optimization (PSO) algorithm. The proposed ALIVS algorithm has been implemented on the embodied robot simulator, Player/Stage, in a target searching task. The simulation results demonstrate its efficiency and robustness in a power-efficient manner under real-world constraints.
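
    The local movement decision mentioned above is based on PSO. The sketch below shows one standard PSO iteration (inertia, cognitive and social terms); it is a generic version for illustration, and the stigmergy-based information sharing that distinguishes ALIVS is not included. The coefficient values are conventional defaults, not the paper's settings.

        import random

        def pso_step(positions, velocities, personal_best, global_best, fitness,
                     w=0.7, c1=1.5, c2=1.5):
            """One iteration of the standard PSO update (minimization).

            Each particle keeps its own best position; the swarm shares a global best.
            """
            for i, (x, v) in enumerate(zip(positions, velocities)):
                new_v = [w * vj
                         + c1 * random.random() * (pb - xj)     # cognitive pull
                         + c2 * random.random() * (gb - xj)     # social pull
                         for vj, xj, pb, gb in zip(v, x, personal_best[i], global_best)]
                new_x = [xj + vj for xj, vj in zip(x, new_v)]
                velocities[i], positions[i] = new_v, new_x
                if fitness(new_x) < fitness(personal_best[i]):
                    personal_best[i] = new_x
                    if fitness(new_x) < fitness(global_best):
                        global_best[:] = new_x
            return positions, velocities, personal_best, global_best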

  1. Simplified Models for LHC New Physics Searches

    International Nuclear Information System (INIS)

    Alves, Daniele; Arkani-Hamed, Nima; Arora, Sanjay; Bai, Yang; Baumgart, Matthew; Berger, Joshua; Butler, Bart; Chang, Spencer; Cheng, Hsin-Chia; Cheung, Clifford; Chivukula, R. Sekhar; Cho, Won Sang; Cotta, Randy; D'Alfonso, Mariarosaria; El Hedri, Sonia; Essig, Rouven; Fitzpatrick, Liam; Fox, Patrick; Franceschini, Roberto

    2012-01-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼50-500 pb^-1 of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  2. Simplified Models for LHC New Physics Searches

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Daniele; /SLAC; Arkani-Hamed, Nima; /Princeton, Inst. Advanced Study; Arora, Sanjay; /Rutgers U., Piscataway; Bai, Yang; /SLAC; Baumgart, Matthew; /Johns Hopkins U.; Berger, Joshua; /Cornell U., Phys. Dept.; Buckley, Matthew; /Fermilab; Butler, Bart; /SLAC; Chang, Spencer; /Oregon U. /UC, Davis; Cheng, Hsin-Chia; /UC, Davis; Cheung, Clifford; /UC, Berkeley; Chivukula, R.Sekhar; /Michigan State U.; Cho, Won Sang; /Tokyo U.; Cotta, Randy; /SLAC; D' Alfonso, Mariarosaria; /UC, Santa Barbara; El Hedri, Sonia; /SLAC; Essig, Rouven, (ed.); /SLAC; Evans, Jared A.; /UC, Davis; Fitzpatrick, Liam; /Boston U.; Fox, Patrick; /Fermilab; Franceschini, Roberto; /LPHE, Lausanne /Pittsburgh U. /Argonne /Northwestern U. /Rutgers U., Piscataway /Rutgers U., Piscataway /Carleton U. /CERN /UC, Davis /Wisconsin U., Madison /SLAC /SLAC /SLAC /Rutgers U., Piscataway /Syracuse U. /SLAC /SLAC /Boston U. /Rutgers U., Piscataway /Seoul Natl. U. /Tohoku U. /UC, Santa Barbara /Korea Inst. Advanced Study, Seoul /Harvard U., Phys. Dept. /Michigan U. /Wisconsin U., Madison /Princeton U. /UC, Santa Barbara /Wisconsin U., Madison /Michigan U. /UC, Davis /SUNY, Stony Brook /TRIUMF; /more authors..

    2012-06-01

    This document proposes a collection of simplified models relevant to the design of new-physics searches at the LHC and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ∼50-500 pb^-1 of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

  3. Efficient privacy-preserving string search and an application in genomics.

    Science.gov (United States)

    Shimizu, Kana; Nuida, Koji; Rätsch, Gunnar

    2016-06-01

    Personal genomes carry inherent privacy risks and protecting privacy poses major social and technological challenges. We consider the case where a user searches for genetic information (e.g. an allele) on a server that stores a large genomic database and aims to receive allele-associated information. The user would like to keep the query and result private, and the server the database. We propose a novel approach that combines efficient string data structures such as the Burrows-Wheeler transform with cryptographic techniques based on additive homomorphic encryption. We assume that the sequence data is searchable in efficient iterative query operations over a large indexed dictionary, for instance, from large genome collections and employing the (positional) Burrows-Wheeler transform. We use a technique called oblivious transfer that is based on additive homomorphic encryption to conceal the sequence query and the genomic region of interest in positional queries. We designed and implemented an efficient algorithm for searching sequences of SNPs in large genome databases. During search, the user can only identify the longest match while the server does not learn which sequence of SNPs the user queried. In an experiment based on 2184 aligned haploid genomes from the 1000 Genomes Project, our algorithm was able to perform typical queries within approximately 4.6 s and 10.8 s for the client and server side, respectively, on laptop computers. The presented algorithm is at least one order of magnitude faster than an exhaustive baseline algorithm. Availability: https://github.com/iskana/PBWT-sec and https://github.com/ratschlab/PBWT-sec. Contact: shimizu-kana@aist.go.jp or Gunnar.Ratsch@ratschlab.org. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
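
    As background on the data structure involved, the sketch below shows a plain (non-private) Burrows-Wheeler transform and an FM-index-style backward search, the kind of iterative query operation that the protocol wraps in oblivious transfer. The encryption layer is omitted, and this is a naive demo implementation (rotation sorting, linear-time occurrence counts) rather than the optimized positional BWT used in the paper.

        def bwt(text):
            """Burrows-Wheeler transform via sorted rotations (fine for small demos)."""
            text += "$"
            rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
            return "".join(rot[-1] for rot in rotations)

        def backward_search(bwt_str, pattern):
            """Count occurrences of `pattern` using FM-index style backward search."""
            counts = {}
            for ch in bwt_str:
                counts[ch] = counts.get(ch, 0) + 1
            # C[c]: number of characters in the text strictly smaller than c
            C, total = {}, 0
            for ch in sorted(counts):
                C[ch] = total
                total += counts[ch]

            def occ(ch, i):                    # occurrences of ch in bwt_str[:i]
                return bwt_str[:i].count(ch)

            lo, hi = 0, len(bwt_str)           # half-open suffix-array interval
            for ch in reversed(pattern):
                if ch not in C:
                    return 0
                lo = C[ch] + occ(ch, lo)
                hi = C[ch] + occ(ch, hi)
                if lo >= hi:
                    return 0
            return hi - lo

        b = bwt("ACACGTACAT")
        print(backward_search(b, "ACA"))       # -> 2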

  4. Applying an efficient K-nearest neighbor search to forest attribute imputation

    Science.gov (United States)

    Andrew O. Finley; Ronald E. McRoberts; Alan R. Ek

    2006-01-01

    This paper explores the utility of an efficient nearest neighbor (NN) search algorithm for applications in multi-source kNN forest attribute imputation. The search algorithm reduces the number of distance calculations between a given target vector and each reference vector, thereby, decreasing the time needed to discover the NN subset. Results of five trials show gains...

  5. Motion Vector Estimation Using Line-Square Search Block Matching Algorithm for Video Sequences

    Directory of Open Access Journals (Sweden)

    Guo Bao-long

    2004-09-01

    Full Text Available Motion estimation and compensation techniques are widely used for video coding applications, but real-time motion estimation is not easily achieved due to its enormous computational load. In this paper, a new fast motion estimation algorithm based on line search is presented, in which computation complexity is greatly reduced by using the line search strategy and a parallel search pattern. Moreover, accurate search is achieved because the small square search pattern is used. It has a best-case scenario of only 9 search points, which is 4 search points less than the diamond search algorithm. Simulation results show that, compared with the previous techniques, the LSPS algorithm significantly reduces the computational requirements for finding motion vectors, and also produces close performance in terms of motion compensation errors.
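
    As a baseline for what fast patterns such as diamond search or the proposed line-square search try to approximate with far fewer candidate points, the sketch below shows exhaustive block matching with a sum-of-absolute-differences cost. The block size and search range are illustrative assumptions, not parameters from the paper.

        import numpy as np

        def sad(block_a, block_b):
            """Sum of absolute differences between two equally sized blocks."""
            return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

        def best_motion_vector(ref, cur, top, left, block=16, search_range=7):
            """Exhaustive block matching: try every displacement in the search window
            and keep the one with the lowest SAD (fast patterns sample only a few
            of these candidate points)."""
            target = cur[top:top + block, left:left + block]
            best, best_mv = None, (0, 0)
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                        continue
                    cost = sad(ref[y:y + block, x:x + block], target)
                    if best is None or cost < best:
                        best, best_mv = cost, (dy, dx)
            return best_mv, best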

  6. FDRAnalysis: a tool for the integrated analysis of tandem mass spectrometry identification results from multiple search engines.

    Science.gov (United States)

    Wedge, David C; Krishna, Ritesh; Blackhurst, Paul; Siepen, Jennifer A; Jones, Andrew R; Hubbard, Simon J

    2011-04-01

    Confident identification of peptides via tandem mass spectrometry underpins modern high-throughput proteomics. This has motivated considerable recent interest in the postprocessing of search engine results to increase confidence and calculate robust statistical measures, for example through the use of decoy databases to calculate false discovery rates (FDR). FDR-based analyses allow for multiple testing and can assign a single confidence value for both sets and individual peptide spectrum matches (PSMs). We recently developed an algorithm for combining the results from multiple search engines, integrating FDRs for sets of PSMs made by different search engine combinations. Here we describe a web-server and a downloadable application that makes this routinely available to the proteomics community. The web server offers a range of outputs including informative graphics to assess the confidence of the PSMs and any potential biases. The underlying pipeline also provides a basic protein inference step, integrating PSMs into protein ambiguity groups where peptides can be matched to more than one protein. Importantly, we have also implemented full support for the mzIdentML data standard, recently released by the Proteomics Standards Initiative, providing users with the ability to convert native formats to mzIdentML files, which are available to download.
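
    The decoy-based FDR estimation mentioned above can be summarized in a few lines: the FDR at a score threshold is approximated by the ratio of decoy to target matches scoring above that threshold. The sketch below is a simplified illustration; it omits the q-value monotonization and the multi-engine combination that the tool actually performs.

        def fdr_at_threshold(psms, threshold):
            """Decoy-based FDR estimate for peptide-spectrum matches.

            psms -- iterable of (score, is_decoy) pairs
            FDR ~ (#decoy hits) / (#target hits) among PSMs scoring >= threshold.
            """
            targets = sum(1 for s, d in psms if s >= threshold and not d)
            decoys = sum(1 for s, d in psms if s >= threshold and d)
            return (decoys / targets) if targets else 0.0

        def threshold_for_fdr(psms, max_fdr=0.01):
            """Most permissive score cut-off whose estimated FDR is <= max_fdr
            (a simplification: real pipelines enforce monotone q-values)."""
            for threshold in sorted({s for s, _ in psms}):
                if fdr_at_threshold(psms, threshold) <= max_fdr:
                    return threshold
            return None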

  7. Search and localisation of CERN buildings

    CERN Multimedia

    TS/FM - Patrimony and Site Information Section

    2005-01-01

    A new, updated version of the web application allowing search and localisation of CERN buildings has been available since 20 May 2005. You can now find a specific building, road, car-park and other information regarding the CERN sites. For comments or enquiries please contact SIG.Support@cern.ch TS/FM - Patrimony and Site Information Section (ISP)

  8. Application of the ant colony search algorithm to reactive power pricing in an open electricity market

    International Nuclear Information System (INIS)

    Ketabi, Abbas; Alibabaee, Ahmad; Feuillet, R.

    2010-01-01

    Reactive power management is essential to transfer real energy and support power system security. Developing an accurate and feasible method for reactive power pricing is important in the electricity market. In conventional optimal power flow models the production cost of reactive power was ignored. In this paper, the production cost of reactive power and the investment cost of capacitor banks were included in the objective function of the OPF problem. Then, using the ant colony search algorithm, the optimization problem was solved. Marginal price theory was used for calculation of the cost of active and reactive power at each bus in competitive electricity markets. Application of the proposed method to the IEEE 14-bus system confirms its validity and effectiveness. Results from several case studies clearly show the effects of various factors on the reactive power price. (author)

  9. Automated search method for AFM and profilers

    Science.gov (United States)

    Ray, Michael; Martin, Yves C.

    2001-08-01

    A new automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip obtained from a chip-design database or an image of the area. The user interface requires a user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.

  10. Searches for massive neutrinos in nuclear beta decay

    International Nuclear Information System (INIS)

    Jaros, J.A.

    1992-10-01

    The status of searches for massive neutrinos in nuclear beta decay is reviewed. The claim by an ITEP group that the electron antineutrino mass is > 17 eV has been disputed by all the subsequent experiments. Current measurements of the tritium beta spectrum limit m(ν̄_e) < 10 eV. The status of the 17 keV neutrino is reviewed. The strong null results from INS Tokyo and Argonne, and deficiencies in the experiments which reported positive effects, make it unreasonable to ascribe the spectral distortions seen by Simpson, Hime, and others to a 17 keV neutrino. Several new ideas on how to search for massive neutrinos in nuclear beta decay are discussed

  11. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    Science.gov (United States)

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  12. Application of an integrated multi-criteria decision making AHP-TOPSIS methodology for ETL software selection.

    Science.gov (United States)

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Currently, a range of ETL (Extract, Transform and Load) software is available, constituting a major investment market. Each ETL tool uses its own techniques for extracting, transforming and loading data into a data warehouse, which makes the task of evaluating ETL software very difficult. However, choosing the right ETL software is critical to the success or failure of any Business Intelligence project. As there are many factors impacting the selection of ETL software, the selection process is considered a complex multi-criteria decision-making (MCDM) problem. In this study, a decision-making methodology that employs two well-known MCDM techniques, namely the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), is designed. In this respect, the aim of using AHP is to analyze the structure of the ETL software selection problem and obtain the weights of the selected criteria. Then, the TOPSIS technique is used to calculate the alternatives' ratings. An example is given to illustrate the proposed methodology. Finally, a software prototype demonstrating both methods is implemented.
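
    The two steps of the methodology can be sketched compactly: AHP derives criteria weights from a pairwise-comparison matrix (here via the row geometric mean, a common approximation of the principal eigenvector), and TOPSIS ranks alternatives by closeness to the ideal solution. The ETL criteria and numbers below are hypothetical, chosen only to show the mechanics, and are not the paper's data.

        import numpy as np

        def ahp_weights(pairwise):
            """Criteria weights from an AHP pairwise-comparison matrix
            (row geometric mean approximation of the principal eigenvector)."""
            gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
            return gm / gm.sum()

        def topsis(decision, weights, benefit):
            """TOPSIS closeness coefficients (higher = better alternative).

            decision -- alternatives x criteria matrix
            benefit  -- True for criteria to maximize, False for cost criteria
            """
            norm = decision / np.linalg.norm(decision, axis=0)   # vector normalization
            v = norm * weights                                   # weighted matrix
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_best = np.linalg.norm(v - ideal, axis=1)
            d_worst = np.linalg.norm(v - anti, axis=1)
            return d_worst / (d_best + d_worst)

        # Hypothetical ETL evaluation: 3 tools, 3 criteria (functionality, usability, cost).
        pairwise = np.array([[1, 3, 5], [1 / 3, 1, 3], [1 / 5, 1 / 3, 1]])
        w = ahp_weights(pairwise)
        scores = topsis(np.array([[8, 7, 300], [6, 9, 200], [9, 5, 400]], dtype=float),
                        w, benefit=np.array([True, True, False]))
        print(w, scores)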

  13. Observed Variability and Values Matter : Toward a Better Understanding of Information Search and Decisions from Experience

    NARCIS (Netherlands)

    Mehlhorn, Katja; Ben-Asher, Noam; Dutt, Varun; Gonzalez, Cleotilde

    2014-01-01

    The search for different options before making a consequential choice is a central aspect of many important decisions, such as mate selection or purchasing a house. Despite its importance, surprisingly little is known about how search and choice are affected by the observed and objective properties

  14. INTERFACING GOOGLE SEARCH ENGINE TO CAPTURE USER WEB SEARCH BEHAVIOR

    OpenAIRE

    Fadhilah Mat Yamin; T. Ramayah

    2013-01-01

    The behaviour of the searcher when using a search engine, especially during query formulation, is crucial. Search engines capture users’ activities in the search log, which is stored at the search engine server. Due to the difficulty of obtaining this search log, this paper proposes and develops an interface framework to interface the Google search engine. This interface captures users’ queries before redirecting them to Google. The analysis of the search log will show that users are utili...

  15. ATLAS searches for New Physics with Boosted Objects

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    With the increase of energy and luminosity at the LHC, searches for new physics are focusing on the multi-TeV mass range. Decays of heavy resonances associated with new physics in this mass range often result in highly boosted very massive objects such as W/Z bosons or Top quarks. New reconstruction techniques, based on jet sub-structure algorithms, are needed to efficiently reconstruct such decay signatures. We will review recent ATLAS developments of jet sub-structure reconstruction tools, and their application to searches for physics beyond the Standard Model.

  16. MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics.

    Science.gov (United States)

    Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola

    2012-01-01

    Computational biology and bioinformatics are gradually gaining grounds in Africa and other developing nations of the world. However, in these countries, some of the challenges of computational biology and bioinformatics education are inadequate infrastructures, and lack of readily-available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers from aspiring to undertake future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible user-friendly tool to search for, define and describe the meanings of keyterms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of the users. This tool also has the capability of achieving visualization of results on a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.

  17. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds they make. Developing a technique to localize sound sources amid loud noise will therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for sounds too faint to be perceived by ear in loud noise environments. Two loudspeakers simultaneously played generator noise and a voice attenuated by 20 dB (= 1/100 of the power) relative to the generator noise, in an outdoor space where cicadas were making noise. The sound signal was received by a horizontally placed linear microphone array, 1.05 m in length and consisting of 15 microphones. Using array signal processing, the direction and distance of the voice were computed, and the sound of the voice was extracted and played back as an audible sound.
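
    A heavily simplified two-microphone version of the array processing described above estimates the bearing of a source from the inter-channel time delay obtained by cross-correlation. The sketch below is a toy stand-in for the 15-element array processing used in the experiment; the microphone spacing, far-field assumption and speed of sound are illustrative values.

        import numpy as np

        def estimate_bearing(sig_a, sig_b, fs, mic_spacing_m, c=343.0):
            """Estimate a source bearing from the delay between two microphone signals
            found by plain cross-correlation (toy two-channel illustration only)."""
            corr = np.correlate(sig_a, sig_b, mode="full")
            # lag (in samples) at which sig_a best aligns with sig_b
            lag = int(np.argmax(corr)) - (len(sig_b) - 1)
            delay = lag / fs
            # Far-field model: delay = (d / c) * sin(theta); clip to a valid range.
            s = np.clip(delay * c / mic_spacing_m, -1.0, 1.0)
            return np.degrees(np.arcsin(s))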

  18. [Advanced online search techniques and dedicated search engines for physicians].

    Science.gov (United States)

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.

  19. SANSparallel: interactive homology search against Uniprot.

    Science.gov (United States)

    Somervuo, Panu; Holm, Liisa

    2015-07-01

    Proteins evolve by mutations and natural selection. The network of sequence similarities is a rich source for mining homologous relationships that inform on protein structure and function. There are many servers available to browse the network of homology relationships but one has to wait up to a minute for results. The SANSparallel webserver provides protein sequence database searches with immediate response and professional alignment visualization by third-party software. The output is a list, pairwise alignment or stacked alignment of sequence-similar proteins from Uniprot, UniRef90/50, Swissprot or Protein Data Bank. The stacked alignments are viewed in Jalview or as sequence logos. The database search uses the suffix array neighborhood search (SANS) method, which has been re-implemented as a client-server, improved and parallelized. The method is extremely fast and as sensitive as BLAST above 50% sequence identity. Benchmarks show that the method is highly competitive compared to previously published fast database search programs: UBLAST, DIAMOND, LAST, LAMBDA, RAPSEARCH2 and BLAT. The web server can be accessed interactively or programmatically at http://ekhidna2.biocenter.helsinki.fi/cgi-bin/sans/sans.cgi. It can be used to make protein functional annotation pipelines more efficient, and it is useful in interactive exploration of the detailed evidence supporting the annotation of particular proteins of interest. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. PARAMETRIC IDENTIFICATION OF STOCHASTIC SYSTEM BY NON-GRADIENT RANDOM SEARCHING

    Directory of Open Access Journals (Sweden)

    A. A. Lobaty

    2017-01-01

    Full Text Available A great variety of identification objects, tasks and methods is known at present, and their significance is constantly increasing in various fields of science and technology. The identification problem depends on the a priori information available about the identification object; moreover, the existing approaches and methods of identification are determined by the form of the mathematical model (deterministic, stochastic, frequency-domain, time-domain, spectral, etc.). The paper considers the problem of determining the parameters of a system (the identification object) that is described by a stochastic mathematical model including random functions of time. It is shown that, when optimizing stochastic systems subject to random actions, deterministic methods can be applied only for a limited, approximate optimization of the system that accounts for average random effects and a fixed structure of the system. The paper proposes an algorithm for identifying the parameters of a mathematical model of a stochastic system by non-gradient random searching. A specific feature of the algorithm is that it is applicable to mathematical models of practically any type, because it does not depend on linearization or differentiability of the functions included in the mathematical model of the system. The proposed algorithm ensures the search for an extremum of the specified quality criteria under external uncertainties and limitations, using random searching of the parameters of the mathematical model of the system. The paper presents results of investigations of the operational capability of the considered identification method, obtained by mathematical simulation of a hypothetical control system with a priori unknown parameter values of the mathematical model. The presented results of the mathematical simulation clearly demonstrate the operational capability of the proposed identification method.
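
    The idea of a non-gradient random search can be illustrated as follows: the current parameter estimate is perturbed at random and the perturbation is kept only when the fit criterion improves, so no linearization or derivatives of the model are needed. The step-size schedule and the toy linear model in the sketch below are assumptions for illustration, not the algorithm proposed in the paper.

        import random

        def random_search_identify(loss, initial, step=0.5, shrink=0.99, iters=2000, seed=0):
            """Non-gradient random search for parameter identification:
            accept a random perturbation only if it reduces the loss."""
            rng = random.Random(seed)
            best = list(initial)
            best_loss = loss(best)
            for _ in range(iters):
                cand = [p + rng.uniform(-step, step) for p in best]
                cand_loss = loss(cand)
                if cand_loss < best_loss:
                    best, best_loss = cand, cand_loss
                else:
                    step *= shrink   # gradually focus the search after failures
            return best, best_loss

        # Toy usage: identify (a, b) in y = a*x + b from noisy observations.
        data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]
        mse = lambda p: sum((y - (p[0] * x + p[1])) ** 2 for x, y in data) / len(data)
        print(random_search_identify(mse, [0.0, 0.0]))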