WorldWideScience

Sample records for based search tool

  1. Developing a Grid-based search and categorization tool

    CERN Document Server

    Haya, Glenn; Vigen, Jens

    2003-01-01

    Grid technology has the potential to improve the accessibility of digital libraries. The participants in Project GRACE (Grid Search And Categorization Engine) are in the process of developing a search engine that will allow users to search through heterogeneous resources stored in geographically distributed digital collections. What differentiates this project from current search tools is that GRACE will be run on the European Data Grid, a large distributed network, and will not have a single centralized index as current web search engines do. In some cases, the distributed approach offers advantages over the centralized approach since it is more scalable, can be used on otherwise inaccessible material, and can provide advanced search options customized for each data source.
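
    A distributed engine of this kind is essentially scatter-gather: the query is broadcast to per-collection indexes and the ranked results are merged, so no central index is ever built. A minimal sketch of the pattern (the source names, scores, and search functions below are invented for illustration, not part of GRACE):

      # Scatter-gather federated search: each collection keeps its own
      # index; the engine queries all of them in parallel and merges the
      # ranked hits, instead of consulting one centralized index.
      from concurrent.futures import ThreadPoolExecutor

      SOURCES = {  # hypothetical per-collection search functions
          "cern-docs":    lambda q: [("CERN-TH-1234", 0.91)],
          "ethesis-repo": lambda q: [("thesis-5678", 0.84), ("thesis-9999", 0.40)],
      }

      def federated_search(query, top_k=3):
          with ThreadPoolExecutor() as pool:
              result_lists = pool.map(lambda search: search(query), SOURCES.values())
          merged = [hit for hits in result_lists for hit in hits]
          return sorted(merged, key=lambda hit: -hit[1])[:top_k]

      print(federated_search("grid search engine"))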

  2. Supervised learning of tools for content-based search of image databases

    Science.gov (United States)

    Delanoy, Richard L.

    1996-03-01

    A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.

  3. MetaboSearch: tool for mass-based metabolite identification using multiple databases.

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    Searching metabolites against databases according to their masses is often the first step in metabolite identification for a mass spectrometry-based untargeted metabolomics study. Major metabolite databases include the Human Metabolome DataBase (HMDB), Madison Metabolomics Consortium Database (MMCD), Metlin, and LIPID MAPS. Since each one of these databases covers only a fraction of the metabolome, integration of the search results from these databases is expected to yield a more comprehensive coverage. However, the manual combination of multiple search results is generally difficult when identification of hundreds of metabolites is desired. We have implemented a web-based software tool that enables simultaneous mass-based search against the four major databases, and the integration of the results. In addition, more complete chemical identifier information for the metabolites is retrieved by cross-referencing multiple databases. The search results are merged based on IUPAC International Chemical Identifier (InChI) keys. Besides a simple list of m/z values, the software can accept ion annotation information as input for enhanced metabolite identification. The performance of the software is demonstrated on mass spectrometry data acquired in both positive and negative ionization modes. Compared with search results from individual databases, MetaboSearch provides better coverage of the metabolome and more complete chemical identifier information. The software tool is available at http://omics.georgetown.edu/MetaboSearch.html.
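
    The merging step described above is simple to illustrate: hits from each database are keyed on their InChIKey, so the same compound found in several sources collapses into one record. A minimal sketch (not the MetaboSearch implementation; the database contents below are illustrative only):

      # Merge mass-based metabolite hits from several databases by
      # InChIKey, accumulating names and source annotations per compound.
      from collections import defaultdict

      def merge_hits(hits_per_db):
          """hits_per_db: {db_name: [(inchikey, metabolite_name), ...]}"""
          merged = defaultdict(lambda: {"names": set(), "sources": set()})
          for db, hits in hits_per_db.items():
              for inchikey, name in hits:
                  merged[inchikey]["names"].add(name)
                  merged[inchikey]["sources"].add(db)
          return dict(merged)

      # Hypothetical hits for one m/z query.
      hits = {
          "HMDB":   [("WQZGKKKJIJFFOK-GASJEMHNSA-N", "D-Glucose")],
          "Metlin": [("WQZGKKKJIJFFOK-GASJEMHNSA-N", "Glucose")],
      }
      for key, record in merge_hits(hits).items():
          print(key, sorted(record["names"]), sorted(record["sources"]))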

  4. SA-Search: a web tool for protein structure mining based on a Structural Alphabet.

    Science.gov (United States)

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-07-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits fast 3D similarity searches such as the extraction of exact words using a suffix tree approach, and the search for fuzzy words viewed as a simple 1D sequence alignment problem. SA-Search is available at http://bioserv.rpbs.jussieu.fr/cgi-bin/SA-Search.
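
    The key idea, compressing 3D conformations into strings over a small alphabet so that ordinary text methods apply, can be sketched in a few lines. The toy alphabet and database below are invented; the real tool uses HMM-derived prototype letters and a suffix tree rather than naive scanning:

      # Once structures are encoded as structural-alphabet strings, an
      # "exact word" 3D similarity search reduces to substring matching.
      def find_exact_words(database, query_word):
          """database: {protein_id: SA string}; returns (id, position) hits."""
          hits = []
          for pid, sa_string in database.items():
              start = sa_string.find(query_word)
              while start != -1:
                  hits.append((pid, start))
                  start = sa_string.find(query_word, start + 1)
          return hits

      db = {"1abc": "AABDCCABDA", "2xyz": "DCABDACBBD"}  # toy SA strings
      print(find_exact_words(db, "ABD"))  # [('1abc', 1), ('1abc', 6), ('2xyz', 2)]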

  5. SA-Search: a web tool for protein structure mining based on a Structural Alphabet

    OpenAIRE

    Guyon, Frédéric; Camproux, Anne-Claude; Hochez, Joëlle; Tufféry, Pierre

    2004-01-01

    SA-Search is a web tool that can be used to mine for protein structures and extract structural similarities. It is based on a hidden Markov model derived Structural Alphabet (SA) that allows the compression of three-dimensional (3D) protein conformations into a one-dimensional (1D) representation using a limited number of prototype conformations. Using such a representation, classical methods developed for amino acid sequences can be employed. Currently, SA-Search permits the performance of f...

  6. Promoting evidence based medicine in preclinical medical students via a federated literature search tool.

    Science.gov (United States)

    Keim, Samuel Mark; Howse, David; Bracke, Paul; Mendoza, Kathryn

    2008-01-01

    Medical educators are increasingly faced with directives to teach Evidence Based Medicine (EBM) skills. Because of its nature, integrating fundamental EBM educational content is a challenge in the preclinical years. This study analysed preclinical medical student user satisfaction and feedback regarding a clinical EBM search strategy. The authors introduced a custom EBM search option with a self-contained education structure to first-year medical students. The implementation took advantage of a major curricular change towards case-based instruction. Medical student views and experiences were studied regarding the tool's convenience, problems and the degree to which they used it to answer questions raised by case-based instruction. Surveys were completed by 70% of the available first-year students. Student satisfaction and experiences were strongly positive towards the EBM strategy, especially of the tool's convenience and utility for answering issues raised during case-based learning sessions. About 90% of the students responded that the tool was easy to use, productive and accessed for half or more of their search needs. This study provides evidence that the integration of an educational EBM search tool can be positively received by preclinical medical students.

  7. The multi-copy simultaneous search methodology: a fundamental tool for structure-based drug design.

    Science.gov (United States)

    Schubert, Christian R; Stultz, Collin M

    2009-08-01

    Fragment-based ligand design approaches, such as the multi-copy simultaneous search (MCSS) methodology, have proven to be useful tools in the search for novel therapeutic compounds that bind pre-specified targets of known structure. MCSS offers a variety of advantages over more traditional high-throughput screening methods, and has been applied successfully to challenging targets. The methodology is quite general and can be used to construct functionality maps for proteins, DNA, and RNA. In this review, we describe the main aspects of the MCSS method and outline the general use of the methodology as a fundamental tool to guide the design of de novo lead compounds. We focus our discussion on the evaluation of MCSS results and the incorporation of protein flexibility into the methodology. In addition, we demonstrate on several specific examples how the information arising from the MCSS functionality maps has been successfully used to predict ligand binding to protein targets and RNA.

  8. Development of a PubMed Based Search Tool for Identifying Sex and Gender Specific Health Literature.

    Science.gov (United States)

    Song, Michael M; Simonsen, Cheryl K; Wilson, Joanna D; Jenkins, Marjorie R

    2016-02-01

    An effective literature search strategy is critical to achieving the aims of Sex and Gender Specific Health (SGSH): to understand sex and gender differences through research and to effectively incorporate the new knowledge into the clinical decision making process to benefit both male and female patients. The goal of this project was to develop and validate an SGSH literature search tool that is readily and freely available to clinical researchers and practitioners. PubMed, a freely available search engine for the Medline database, was selected as the platform to build the SGSH literature search tool. Combinations of Medical Subject Heading terms, text words, and title words were evaluated for optimal specificity and sensitivity. The search tool was then validated against reference bases compiled for two disease states, diabetes and stroke. Key sex and gender terms and limits were bundled to create a search tool to facilitate PubMed SGSH literature searches. During validation, the search tool retrieved 50 of 94 (53.2%) stroke and 62 of 95 (65.3%) diabetes reference articles selected for validation. A general keyword search of stroke or diabetes combined with sex difference retrieved 33 of 94 (35.1%) stroke and 22 of 95 (23.2%) diabetes reference base articles, with lower sensitivity and specificity for SGSH content. The Texas Tech University Health Sciences Center SGSH PubMed Search Tool provides higher sensitivity and specificity to sex and gender specific health literature. The tool will facilitate research, clinical decision-making, and guideline development relevant to SGSH.
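
    The validation figures quoted above are plain recall computations: sensitivity is the fraction of reference-base articles a search retrieves. A small sketch reproducing the arithmetic (the function name is ours; the counts are from the abstract):

      # Sensitivity of a search filter against a hand-built reference base.
      def sensitivity(retrieved_relevant, total_relevant):
          return retrieved_relevant / total_relevant

      print(f"SGSH tool, stroke:   {sensitivity(50, 94):.1%}")  # 53.2%
      print(f"SGSH tool, diabetes: {sensitivity(62, 95):.1%}")  # 65.3%
      print(f"keywords,  stroke:   {sensitivity(33, 94):.1%}")  # 35.1%
      print(f"keywords,  diabetes: {sensitivity(22, 95):.1%}")  # 23.2%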

  9. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  10. Project GRACE A grid based search tool for the global digital library

    CERN Document Server

    Scholze, Frank; Vigen, Jens; Prazak, Petra; The Seventh International Conference on Electronic Theses and Dissertations

    2004-01-01

    The paper will report on the progress of an ongoing EU project called GRACE - Grid Search and Categorization Engine (http://www.grace-ist.org). The project participants are CERN, Sheffield Hallam University, Stockholm University, Stuttgart University, GL 2006 and Telecom Italia. The project started in 2002 and will finish in 2005, resulting in a Grid based search engine that will search across a variety of content sources including a number of electronic thesis and dissertation repositories. The Open Archives Initiative (OAI) is expanding and is clearly an interesting movement for a community advocating open access to ETD. However, the OAI approach alone may not be sufficiently scalable to achieve a truly global ETD Digital Library. Many universities simply offer their collections to the world via their local web services without being part of any federated system for archiving and even those dissertations that are provided with OAI compliant metadata will not necessarily be picked up by a centralized OAI Ser...

  11. A Web-based Tool for SDSS and 2MASS Database Searches

    Science.gov (United States)

    Hendrickson, M. A.; Uomoto, A.; Golimowski, D. A.

    We have developed a web site using HTML, PHP, Python, and MySQL that extracts, processes, and displays data from the Sloan Digital Sky Survey (SDSS) and the Two-Micron All-Sky Survey (2MASS). The goal is to locate brown dwarf candidates in the SDSS database by looking at color cuts; however, this site could also be useful for targeted searches of other databases. MySQL databases are created from broad searches of SDSS and 2MASS data. Broad queries on the SDSS and 2MASS database servers are run weekly so that observers have the most up-to-date information from which to select candidates for observation. Observers can look at detailed information about specific objects, including finding charts, images, and available spectra. In addition, updates from previous observations can be added by any collaborators; this format makes observational collaboration simple. Observers can also restrict the database search, just before or during an observing run, to select objects of special interest.
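
    Color-cut candidate selection of the kind described usually comes down to a single SQL predicate over photometric columns. A hypothetical sketch (the table, columns, and thresholds are illustrative, not the site's actual SDSS/2MASS schema; sqlite3 stands in for MySQL so the example runs anywhere):

      # Select red, brown-dwarf-like candidates with i-z and z-J color
      # cuts from a local mirror of cross-matched SDSS/2MASS photometry.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE photometry (objid TEXT, i REAL, z REAL, j REAL)")
      conn.execute("INSERT INTO photometry VALUES ('obj1', 19.8, 17.9, 15.6)")
      conn.execute("INSERT INTO photometry VALUES ('obj2', 18.1, 17.8, 16.9)")

      cuts = """
          SELECT objid, i - z AS i_z, z - j AS z_j
          FROM photometry
          WHERE i - z > 1.5 AND z - j > 1.8
      """
      print(conn.execute(cuts).fetchall())  # only obj1 passes both cuts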

  12. When does a protein become an allergen? Searching for a dynamic definition based on most advanced technology tools.

    Science.gov (United States)

    Mari, A

    2008-07-01

    Since the early days of allergology as a science, considerable efforts have been made by clinicians and researchers to identify and characterize allergic triggers as raw allergenic materials, allergenic sources and tissues, and more recently basic allergenic structures defined as molecules. The last 15-20 years have witnessed many centres focusing on the identification and characterization of allergenic molecules, leading to an expanding wealth of knowledge. The need to organize this information leads to the most important question: 'when does a protein become an allergen?' In this article, I try to address this question by reviewing a few basic concepts of the immunology of IgE-mediated diseases, reporting on the current diagnostic and epidemiological tools used for allergic disease studies and discussing the usefulness of novel biotechnology tools (i.e. proteomics and molecular biology approaches), information technology tools (i.e. Internet-based resources) and microtechnology tools (i.e. proteomic microarray for IgE testing on molecular allergens). A step-wise staging of the identification and characterization process, including bench, clinical and epidemiological aspects, is proposed, in order to classify allergenic molecules dynamically. This proposal reflects the application and use of all the new tools available from current technologies.

  13. Developing a Data Discovery Tool for Interdisciplinary Science: Leveraging a Web-based Mapping Application and Geosemantic Searching

    Science.gov (United States)

    Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.

    2015-12-01

    The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share their data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URI's) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users the ability to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for Javascript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship

  14. Evidence-based Medicine Search: a customizable federated search engine.

    Science.gov (United States)

    Bracke, Paul J; Howse, David K; Keim, Samuel M

    2008-04-01

    This paper reports on the development of a tool by the Arizona Health Sciences Library (AHSL) for searching clinical evidence that can be customized for different user groups. The AHSL provides services to the University of Arizona's (UA's) health sciences programs and to the University Medical Center. Librarians at AHSL collaborated with UA College of Medicine faculty to create an innovative search engine, Evidence-based Medicine (EBM) Search, that provides users with a simple search interface to EBM resources and presents results organized according to an evidence pyramid. EBM Search was developed with a web-based configuration component that allows the tool to be customized for different specialties. Informal and anecdotal feedback from physicians indicates that EBM Search is a useful tool with potential in teaching evidence-based decision making. While formal evaluation is still being planned, a tool such as EBM Search, which can be configured for specific user populations, may help lower barriers to information resources in an academic health sciences center.

  15. Design and Testing of BACRA, a Web-Based Tool for Middle Managers at Health Care Facilities to Lead the Search for Solutions to Patient Safety Incidents

    Science.gov (United States)

    Mira, José Joaquín; Vicente, Maria Asuncion; Fernandez, Cesar; Guilabert, Mercedes; Ferrús, Lena; Zavala, Elena; Silvestre, Carmen; Pérez-Pérez, Pastora

    2016-01-01

    Background Lack of time, lack of familiarity with root cause analysis, or suspicion that the reporting may result in negative consequences hinder involvement in the analysis of safety incidents and the search for preventive actions that can improve patient safety. Objective The aim was to develop a tool that enables hospitals and primary care professionals to immediately analyze the causes of incidents and to propose and implement measures intended to prevent their recurrence. Methods The design of the Web-based tool (BACRA) considered research on the barriers for reporting, review of incident analysis tools, and the experience of eight managers from the field of patient safety. BACRA’s design was improved in successive versions (BACRA v1.1 and BACRA v1.2) based on feedback from 86 middle managers. BACRA v1.1 was used by 13 frontline professionals to analyze safety incidents; 59 professionals used BACRA v1.2 and assessed the respective usefulness and ease of use of both versions. Results BACRA contains seven tabs that guide the user through the process of analyzing a safety incident and proposing preventive actions for similar future incidents. BACRA does not identify the person completing each analysis, since the password introduced to hide each analysis is linked only to the information concerning the incident and not to any personal data. The tool was used by 72 professionals from hospitals and primary care centers. BACRA v1.2 was assessed more favorably than BACRA v1.1, both in terms of its usefulness (z=2.2, P=.03) and its ease of use (z=3.0, P=.003). Conclusions BACRA helps to analyze safety incidents and to propose preventive actions. BACRA guarantees anonymity of the analysis and reduces the reluctance of professionals to carry out this task. BACRA is useful and easy to use. PMID:27678308

  16. Design and Testing of BACRA, a Web-Based Tool for Middle Managers at Health Care Facilities to Lead the Search for Solutions to Patient Safety Incidents.

    Science.gov (United States)

    Carrillo, Irene; Mira, José Joaquín; Vicente, Maria Asuncion; Fernandez, Cesar; Guilabert, Mercedes; Ferrús, Lena; Zavala, Elena; Silvestre, Carmen; Pérez-Pérez, Pastora

    2016-09-27

    Lack of time, lack of familiarity with root cause analysis, or suspicion that the reporting may result in negative consequences hinder involvement in the analysis of safety incidents and the search for preventive actions that can improve patient safety. The aim was to develop a tool that enables hospitals and primary care professionals to immediately analyze the causes of incidents and to propose and implement measures intended to prevent their recurrence. The design of the Web-based tool (BACRA) considered research on the barriers for reporting, review of incident analysis tools, and the experience of eight managers from the field of patient safety. BACRA's design was improved in successive versions (BACRA v1.1 and BACRA v1.2) based on feedback from 86 middle managers. BACRA v1.1 was used by 13 frontline professionals to analyze safety incidents; 59 professionals used BACRA v1.2 and assessed the respective usefulness and ease of use of both versions. BACRA contains seven tabs that guide the user through the process of analyzing a safety incident and proposing preventive actions for similar future incidents. BACRA does not identify the person completing each analysis, since the password introduced to hide each analysis is linked only to the information concerning the incident and not to any personal data. The tool was used by 72 professionals from hospitals and primary care centers. BACRA v1.2 was assessed more favorably than BACRA v1.1, both in terms of its usefulness (z=2.2, P=.03) and its ease of use (z=3.0, P=.003). BACRA helps to analyze safety incidents and to propose preventive actions. BACRA guarantees anonymity of the analysis and reduces the reluctance of professionals to carry out this task. BACRA is useful and easy to use.

  17. PhAST: pharmacophore alignment search tool.

    Science.gov (United States)

    Hähnke, Volker; Hofmann, Bettina; Grgat, Tomislav; Proschak, Ewgenij; Steinhilber, Dieter; Schneider, Gisbert

    2009-04-15

    We present a ligand-based virtual screening technique (PhAST) for rapid hit and lead structure searching in large compound databases. Molecules are represented as strings encoding the distribution of pharmacophoric features on the molecular graph. In contrast to other text-based methods using SMILES strings, we introduce a new form of text representation that describes the pharmacophore of molecules. This string representation opens the opportunity for revealing functional similarity between molecules by sequence alignment techniques in analogy to homology searching in protein or nucleic acid sequence databases. We favorably compared PhAST with other current ligand-based virtual screening methods in a retrospective analysis using the BEDROC metric. In a prospective application, PhAST identified two novel inhibitors of 5-lipoxygenase product formation with minimal experimental effort. This outcome demonstrates the applicability of PhAST to drug discovery projects and provides an innovative concept of sequence-based compound screening with substantial scaffold hopping potential. 2008 Wiley Periodicals, Inc.
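
    The string representation is easy to picture with a toy example: each position carries a pharmacophoric feature letter (say D for hydrogen-bond donor, A for acceptor, L for lipophilic, R for aromatic), and functional similarity becomes string similarity. The encoding and scorer below are illustrative stand-ins, not PhAST's actual alphabet or alignment parameters:

      # Compare molecules encoded as pharmacophore strings; a standard
      # sequence matcher stands in for a real alignment algorithm.
      from difflib import SequenceMatcher

      def pharmacophore_similarity(s1, s2):
          return SequenceMatcher(None, s1, s2).ratio()

      query = "RRLLDAAL"  # hypothetical encoding of a query ligand
      print(pharmacophore_similarity(query, "RRLLDAAL"))  # identical: 1.0
      print(pharmacophore_similarity(query, "RRLDAAL"))   # close analog
      print(pharmacophore_similarity(query, "DDDDAAAA"))  # dissimilar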

  18. The Theory of Planned Behaviour Applied to Search Engines as a Learning Tool

    Science.gov (United States)

    Liaw, Shu-Sheng

    2004-01-01

    Search engines have been developed to help learners seek online information. Based on the theory of planned behaviour, this research investigates the behaviour of using search engines as a learning tool. After factor analysis, the results suggest that perceived satisfaction of search engine, search engines as an information…

  19. GOKaRT: Graphical Online Search Tool for Maps

    Directory of Open Access Journals (Sweden)

    Mechthild Schüler

    2008-09-01

    The map department of the Staats- und Universitätsbibliothek Göttingen together with the Berlin State Library propose a project to develop a web-based graphic cataloguing and search system for maps, to be funded by the German Research Foundation. This tool shall be made available to all map holdings in archives, libraries, university departments and museums in Germany as a comfortable means for the administration of map holdings and as a search tool. Sheets belonging to map series as well as single maps (old and new) will be registered cooperatively by the participants with simple tools. This cooperation in data maintenance will facilitate the work especially for understaffed map holdings. Depending on the type of map there are four different mechanisms for map reference. For map series, electronic index sheets are used which will show information regarding the various issues of the map sheets. Due to the intuitive graphic search entry, GOKaRT users will easily find the required maps of a certain region available in a chosen holding. User administration modules ensure comfortable handling. GOKaRT is being developed on the basis of licence-free open source programmes. If financing is provided by the German Research Foundation, GOKaRT can be used free of charge internationally. This would require a contract stipulating data exchange between the partners as well as permanent storage and usability of the data.

  20. Patient-Centered Tools for Medication Information Search.

    Science.gov (United States)

    Wilcox, Lauren; Feiner, Steven; Elhadad, Noémie; Vawdrey, David; Tran, Tran H

    2014-05-20

    Recent research focused on online health information seeking highlights a heavy reliance on general-purpose search engines. However, current general-purpose search interfaces do not necessarily provide adequate support for non-experts in identifying suitable sources of health information. Popular search engines have recently introduced search tools in their user interfaces for a range of topics. In this work, we explore how such tools can support non-expert, patient-centered health information search. Scoping the current work to medication-related search, we report on findings from a formative study focused on the design of patient-centered, medication-information search tools. Our study included qualitative interviews with patients, family members, and domain experts, as well as observations of their use of Remedy, a technology probe embodying a set of search tools. Post-operative cardiothoracic surgery patients and their visiting family members used the tools to find information about their hospital medications and were interviewed before and after their use. Domain experts conducted similar search tasks and provided qualitative feedback on their preferences and recommendations for designing these tools. Findings from our study suggest the importance of four valuation principles underlying our tools: credibility, readability, consumer perspective, and topical relevance.

  1. Challenging Google, Microsoft Unveils a Search Tool for Scholarly Articles

    Science.gov (United States)

    Carlson, Scott

    2006-01-01

    Microsoft has introduced a new search tool to help people find scholarly articles online. The service, which includes journal articles from prominent academic societies and publishers, puts Microsoft in direct competition with Google Scholar. The new free search tool, which should work on most Web browsers, is called Windows Live Academic Search…

  2. IRSS: a web-based tool for automatic layout and analysis of IRES secondary structure prediction and searching system in silico

    Directory of Open Access Journals (Sweden)

    Hong Jun-Jie

    2009-05-01

    Background Internal ribosomal entry sites (IRESs) provide alternative, cap-independent translation initiation sites in eukaryotic cells. IRES elements are important factors in viral genomes and are also useful tools for bi-cistronic expression vectors. Most existing RNA structure prediction programs are unable to deal with IRES elements. Results We designed an IRES search system, named IRSS, to obtain better results for IRES prediction. RNA secondary structure prediction and comparison software programs were implemented to construct our two-stage strategy for IRSS. Two software programs formed the backbone of IRSS: the RNAL fold program, used to predict local RNA secondary structures by the minimum free energy method, and the RNA Align program, used to compare predicted structures. After a complete viral genome database search, IRSS has a low error rate and up to 72.3% sensitivity with appropriate parameters. Conclusion IRSS is freely available at this website http://140.135.61.9/ires/. In addition, all source codes, precompiled binaries, examples and documentations are downloadable for local execution. This new search approach for IRES elements will provide a useful research tool on IRES related studies.

  3. Content Based Searching for INIS

    International Nuclear Information System (INIS)

    Jain, V.; Jain, R.K.

    2016-01-01

    Whatever a user wants is available on the Internet, but to retrieve the information efficiently, a multilingual and most-relevant document search engine is a must. Most current search engines are word based or pattern based: they do not consider the meaning of the query posed to them, rely purely on its keywords, offer no support for multilingual queries, and do not dismiss non-relevant results. Current information-retrieval techniques either rely on an encoding process, using a certain perspective or classification scheme, to describe a given item, or perform a full-text analysis, searching for user-specified words. Neither case guarantees content matching, because an encoded description might reflect only part of the content and the mere occurrence of a word does not necessarily reflect the document's content. For general documents, there does not yet seem to be a much better option than lazy full-text analysis, manually going through those endless results pages. In contrast to this, a new search engine should extract the meaning of the query and then perform the search based on this extracted meaning. The new search engine should also employ Interlingua-based machine translation technology to present information in the language of choice of the user. (author)

  4. Improving e-book access via a library-developed full-text search tool.

    Science.gov (United States)

    Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N

    2007-01-01

    This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.

  5. Improving e-book access via a library-developed full-text search tool*

    Science.gov (United States)

    Foust, Jill E.; Bergen, Phillip; Maxeiner, Gretchen L.; Pawlowski, Peter N.

    2007-01-01

    Purpose: This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. Setting: The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. Brief Description: The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single “Google-style” query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. Results/Evaluation: A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. Conclusion: This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products. PMID:17252065

  6. The use of magnetic susceptibility as a forensic search tool.

    Science.gov (United States)

    Pringle, Jamie K; Giubertoni, Matteo; Cassidy, Nigel J; Wisniewski, Kristopher D; Hansen, James D; Linford, Neil T; Daniels, Rebecca M

    2015-01-01

    There are various techniques available for forensic search teams to employ to successfully detect a buried object. Near-surface geophysical search methods have been dominated by ground penetrating radar but recently other techniques, such as electrical resistivity, have become more common. This paper discusses magnetic susceptibility as a simple surface search tool illustrated by various research studies. These suggest magnetic susceptibility to be a relatively low cost, quick and effective tool, compared to other geophysical methods, to determine disturbed ground above buried objects and burnt surface remains in a variety of soil types. Further research should collect datasets over objects of known burial ages for comparison purposes and used in forensic search cases to validate the technique. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  7. The impact of PICO as a search strategy tool on literature search quality

    DEFF Research Database (Denmark)

    Eriksen, Mette Brandt; Frandsen, Tove Faber

    2018-01-01

    Objective: This review aimed to determine if the use of the PICO model (Patient, Intervention, Comparison, Outcome) as a search strategy tool affects the quality of the literature search. Methods: A comprehensive literature search was conducted in: PubMed, Embase, CINAHL, PsycInfo, Cochrane Library... and three studies were included, data was extracted, risk of bias was assessed and a qualitative analysis was conducted. The included studies compared PICO to PIC or link to related articles in PubMed; PICOS and SPIDER. One study compared PICO to unguided searching. Due to differences in intervention...

  8. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    The present investigation concerns the evaluation, comparison and analysis of search options existing within web-based meta-search engines. 64 meta-search engines were identified. 19 meta-search engines that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied used the AND operator, phrase search, number of results displayed setting, previous search query storage and help tutorials. Nevertheless, none of them demonstrated any search options for hypertext searching or for displaying the size of the pages searched. 94.7% supported features such as truncation, keywords in title and URL search, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries and other internet search tools.

  9. Utilizing AFIS searching tools to reduce errors in fingerprint casework.

    Science.gov (United States)

    Langenburg, Glenn; Hall, Carey; Rosemarie, Quincy

    2015-12-01

    Fifty-six (56) adjudicated property crime cases involving fingerprint evidence were reviewed using a case-specific AFIS database tool. This tool allowed fingerprint experts to search latent prints in the cases against a database of friction ridge exemplars limited to only the individuals specific to that particular case. We utilized three different methods to encode and search the latent prints: automatic feature extraction, manual encoding performed by a student intern, and manual encoding performed by a fingerprint expert. Performance in the study was strongest when the encoding was conducted by the fingerprint expert. The results of the study showed that while the AFIS tools failed to locate all of the identifications originally reported by the initial fingerprint expert that worked the case, the AFIS tools helped to identify 7 additional latent prints that were not reported by the initial fingerprint expert. We conclude that this technology, when combined with fingerprint expertise, will reduce the number of instances where an erroneous exclusion could occur, increase the efficiency of a fingerprint unit, and be a useful tool for reviewing active or cold cases for missed opportunities to report identifications. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. SpolSimilaritySearch - A web tool to compare and search similarities between spoligotypes of Mycobacterium tuberculosis complex.

    Science.gov (United States)

    Couvin, David; Zozio, Thierry; Rastogi, Nalin

    2017-07-01

    Spoligotyping is one of the most commonly used polymerase chain reaction (PCR)-based methods for identification and study of genetic diversity of the Mycobacterium tuberculosis complex (MTBC). Despite its known limitations if used alone, the methodology is particularly useful when used in combination with other methods such as mycobacterial interspersed repetitive units - variable number of tandem DNA repeats (MIRU-VNTRs). At a worldwide scale, spoligotyping has allowed identification of information on 103,856 MTBC isolates (corresponding to 98,049 clustered strains plus 5,807 unique isolates from 169 countries of patient origin) contained within the SITVIT2 proprietary database of the Institut Pasteur de la Guadeloupe. The SpolSimilaritySearch web-tool described herein (available at: http://www.pasteur-guadeloupe.fr:8081/SpolSimilaritySearch) incorporates a similarity search algorithm allowing users to get a complete overview of similar spoligotype patterns (with information on presence or absence of 43 spacers) in the aforementioned worldwide database. This tool allows one to analyze the spread and evolutionary patterns of MTBC by comparing similar spoligotype patterns, to distinguish between widespread, specific and/or confined patterns, as well as to pinpoint patterns with large deleted blocks, which play an intriguing role in the genetic epidemiology of M. tuberculosis. Finally, the SpolSimilaritySearch tool also provides the country distribution patterns for each queried spoligotype. Copyright © 2017 Elsevier Ltd. All rights reserved.
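
    Since a spoligotype is a 43-spacer presence/absence pattern, similar patterns can be ranked by simple positional agreement. A minimal sketch (our own scoring, not the SITVIT2 algorithm; the patterns below are illustrative):

      # Rank spoligotype patterns (43-spacer presence/absence strings)
      # by positional agreement with a query pattern.
      def similarity(a, b):
          """a, b: 43-character strings of '1' (present) / '0' (absent)."""
          assert len(a) == len(b) == 43
          return sum(x == y for x, y in zip(a, b)) / 43.0

      query = "1" * 33 + "00" + "1" * 8
      candidates = {  # hypothetical database patterns
          "pattern-A": "1" * 33 + "00" + "1" * 8,  # identical to the query
          "pattern-B": "0" * 34 + "1" * 9,         # large deleted block
      }
      ranked = sorted(candidates, key=lambda name: -similarity(query, candidates[name]))
      for name in ranked:
          print(name, f"{similarity(query, candidates[name]):.2f}")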

  11. Global search tool for the Advanced Photon Source Integrated Relational Model of Installed Systems (IRMIS) database

    International Nuclear Information System (INIS)

    Quock, D.E.R.; Cianciarulo, M.B.

    2007-01-01

    The Integrated Relational Model of Installed Systems (IRMIS) is a relational database tool that has been implemented at the Advanced Photon Source to maintain an updated account of approximately 600 control system software applications, 400,000 process variables, and 30,000 control system hardware components. To effectively display this large amount of control system information to operators and engineers, IRMIS was initially built with nine Web-based viewers: Applications Organizing Index, IOC, PLC, Component Type, Installed Components, Network, Controls Spares, Process Variables, and Cables. However, since each viewer is designed to provide details from only one major category of the control system, the necessity for a one-stop global search tool for the entire database became apparent. The user requirements for extremely fast database search time and ease of navigation through search results led to the choice of Asynchronous JavaScript and XML (AJAX) technology in the implementation of the IRMIS global search tool. Unique features of the global search tool include a two-tier level of displayed search results, and a database data integrity validation and reporting mechanism.

  12. RAG-3D: a search tool for RNA 3D substructures

    Science.gov (United States)

    Zahran, Mai; Sevim Bayrak, Cigdem; Elmetwaly, Shereef; Schlick, Tamar

    2015-01-01

    To address many challenges in RNA structure/function prediction, the characterization of RNA's modular architectural units is required. Using the RNA-As-Graphs (RAG) database, we have previously explored the existence of secondary structure (2D) submotifs within larger RNA structures. Here we present RAG-3D—a dataset of RNA tertiary (3D) structures and substructures plus a web-based search tool—designed to exploit graph representations of RNAs for the goal of searching for similar 3D structural fragments. The objects in RAG-3D consist of 3D structures translated into 3D graphs, cataloged based on the connectivity between their secondary structure elements. Each graph is additionally described in terms of its subgraph building blocks. The RAG-3D search tool then compares a query RNA 3D structure to those in the database to obtain structurally similar structures and substructures. This comparison reveals conserved 3D RNA features and thus may suggest functional connections. Though RNA search programs based on similarity in sequence, 2D, and/or 3D structural elements are available, our graph-based search tool may be advantageous for illuminating similarities that are not obvious; using motifs rather than sequence space also reduces search times considerably. Ultimately, such substructuring could be useful for RNA 3D structure prediction, structure/function inference and inverse folding. PMID:26304547
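
    Searching for substructures on graph representations reduces to subgraph matching over coarse-grained graphs. A sketch with networkx (both graphs and their vertex names for secondary-structure elements are invented; RAG-3D's own cataloging is more elaborate):

      # Represent RNAs as graphs of secondary-structure elements, then
      # test whether a query motif occurs as a subgraph of a database RNA.
      import networkx as nx
      from networkx.algorithms import isomorphism

      query = nx.Graph([("h1", "loop"), ("loop", "h2")])   # helix-loop-helix
      target = nx.Graph([("H1", "J"), ("H2", "J"), ("H3", "J"),
                         ("H4", "J"), ("H4", "hairpin")])  # 4-way junction

      matcher = isomorphism.GraphMatcher(target, query)
      print(matcher.subgraph_is_isomorphic())            # True
      print(next(matcher.subgraph_isomorphisms_iter()))  # one concrete mapping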

  13. Mathematical programming solver based on local search

    CERN Document Server

    Gardi, Frédéric; Darlay, Julien; Estellon, Bertrand; Megel, Romain

    2014-01-01

    This book covers local search for combinatorial optimization and its extension to mixed-variable optimization. Although not yet understood from the theoretical point of view, local search is the paradigm of choice for tackling large-scale real-life optimization problems. Today's end-users demand interactivity with decision support systems. For optimization software, this means obtaining good-quality solutions quickly. Fast iterative improvement methods, like local search, are suited to satisfying such needs. Here the authors show local search in a new light, in particular presenting a new kind of mathematical programming solver, namely LocalSolver, based on neighborhood search. First, an iconoclast methodology is presented to design and engineer local search algorithms. The authors' concern about industrializing local search approaches is of particular interest for practitioners. This methodology is applied to solve two industrial problems with high economic stakes. Software based on local search induces ex...

  14. A sensitive short read homology search tool for paired-end read sequencing data.

    Science.gov (United States)

    Techa-Angkoon, Prapaporn; Sun, Yanni; Lei, Jikai

    2017-10-16

    Homology search is still a significant step in functional analysis for genomic data. Profile Hidden Markov Model-based homology search has been widely used in protein domain analysis in many different species. In particular, with the fast accumulation of transcriptomic data of non-model species and metagenomic data, profile homology search is widely adopted in integrated pipelines for functional analysis. While the state-of-the-art tool HMMER has achieved high sensitivity and accuracy in domain annotation, the sensitivity of HMMER on short reads declines rapidly. The low sensitivity on short read homology search can lead to inaccurate domain composition and abundance computation. Our experimental results showed that half of the reads were missed by HMMER for a RNA-Seq dataset. Thus, there is a need for better methods to improve the homology search performance for short reads. We introduce a profile homology search tool named Short-Pair that is designed for short paired-end reads. By using an approximate Bayesian approach employing distribution of fragment lengths and alignment scores, Short-Pair can retrieve the missing end and determine true domains. In particular, Short-Pair increases the accuracy in aligning short reads that are part of remote homologs. We applied Short-Pair to a RNA-Seq dataset and a metagenomic dataset and quantified its sensitivity and accuracy on homology search. The experimental results show that Short-Pair can achieve better overall performance than the state-of-the-art methodology of profile homology search. Short-Pair is best used for next-generation sequencing (NGS) data that lack reference genomes. It provides a complementary paired-end read homology search tool to HMMER. The source code is freely available at https://sourceforge.net/projects/short-pair/ .
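
    The rescue idea, using the paired-end constraint to recover an end the profile search missed, can be loosely illustrated as a score combining a fragment-length prior with the alignment evidence. This is our simplification with assumed parameters, not Short-Pair's actual Bayesian model:

      # If read1 of a pair hits a domain but read2 is missed, score a
      # candidate placement of read2 by how plausible the implied
      # fragment length is together with its (weak) alignment score.
      import math

      def fragment_length_logpdf(length, mean=300.0, sd=50.0):
          """Gaussian prior over fragment lengths (assumed parameters)."""
          return (-0.5 * ((length - mean) / sd) ** 2
                  - math.log(sd * math.sqrt(2.0 * math.pi)))

      def rescue_score(alignment_bits, implied_fragment_length):
          # log-style combination: alignment evidence plus length prior
          return alignment_bits * math.log(2.0) + fragment_length_logpdf(implied_fragment_length)

      # A weak hit at a plausible spacing outranks a stronger hit at an
      # implausible one; the pairing information breaks the tie.
      print(rescue_score(4.0, 310.0))  # plausible fragment length
      print(rescue_score(6.0, 900.0))  # implausible fragment length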

  15. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

    Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool. While GPS information provides reasonably good accuracy, it is not always present in all handheld devices nor accurate in all situat... of the image search engine and database image location knowledge, the location of the query image is determined and associated data can be presented to the user.

  16. Raising Reliability of Web Search Tool Research through Replication and Chaos Theory

    OpenAIRE

    Nicholson, Scott

    1999-01-01

    Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to the Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replica...

  17. Assessing teamwork performance in obstetrics: A systematic search and review of validated tools.

    Science.gov (United States)

    Fransen, Annemarie F; de Boer, Liza; Kienhorst, Dieneke; Truijens, Sophie E; van Runnard Heimel, Pieter J; Oei, S Guid

    2017-09-01

    Teamwork performance is an essential component for the clinical efficiency of multi-professional teams in obstetric care. As patient safety is related to teamwork performance, it has become an important learning goal in simulation-based education. In order to improve teamwork performance, reliable assessment tools are required. These can be used to provide feedback during training courses, or to compare learning effects between different types of training courses. The aim of the current study is to (1) identify the available assessment tools to evaluate obstetric teamwork performance in a simulated environment, and (2) evaluate their psychometric properties in order to identify the most valuable tool(s) to use. We performed a systematic search in PubMed, MEDLINE, and EMBASE to identify articles describing assessment tools for the evaluation of obstetric teamwork performance in a simulated environment. In order to evaluate the quality of the identified assessment tools the standards and grading rules have been applied as recommended by the Accreditation Council for Graduate Medical Education (ACGME) Committee on Educational Outcomes. The included studies were also assessed according to the Oxford Centre for Evidence Based Medicine (OCEBM) levels of evidence. This search resulted in the inclusion of five articles describing the following six tools: Clinical Teamwork Scale, Human Factors Rating Scale, Global Rating Scale, Assessment of Obstetric Team Performance, Global Assessment of Obstetric Team Performance, and the Teamwork Measurement Tool. Based on the ACGME guidelines we assigned a Class 3, level C of evidence, to all tools. Regarding the OCEBM levels of evidence, a level 3b was assigned to two studies and a level 4 to four studies. The Clinical Teamwork Scale demonstrated the most comprehensive validation, and the Teamwork Measurement Tool demonstrated promising results, however it is recommended to further investigate its reliability. Copyright © 2017

  18. Google vs. the Library (Part II): Student Search Patterns and Behaviors When Using Google and a Federated Search Tool

    Science.gov (United States)

    Georgas, Helen

    2014-01-01

    This study examines the information-seeking behavior of undergraduate students within a research context. Student searches were recorded while the participants used Google and a library (federated) search tool to find sources (one book, two articles, and one other source of their choosing) for a selected topic. The undergraduates in this study…

  19. Search Engine : an effective tool for exploring the Internet

    OpenAIRE

    Ranasinghe, W.M. Tharanga Dilruk

    2006-01-01

    The Internet has become the largest source of information. Today, millions of websites exist and this number continues to grow. Finding the right information at the right time is the challenge of the Internet age. A search engine is a searchable database which allows locating information on the Internet by submitting keywords. Search engines can be divided into two categories: individual and meta search engines. This article discusses the features of these search engines in detail.

  20. Space based microlensing planet searches

    Directory of Open Access Journals (Sweden)

    Tisserand Patrick

    2013-04-01

    The discovery of extra-solar planets is arguably the most exciting development in astrophysics during the past 15 years, rivalled only by the detection of dark energy. Two projects unite the communities of exoplanet scientists and cosmologists: the proposed ESA M-class mission EUCLID and the large space mission WFIRST, top ranked by the Astronomy 2010 Decadal Survey report. The latter states that: “Space-based microlensing is the optimal approach to providing a true statistical census of planetary systems in the Galaxy, over a range of likely semi-major axes”. It also adds: “This census, combined with that made by the Kepler mission, will determine how common Earth-like planets are over a wide range of orbital parameters”. We will present a status report of the results obtained by microlensing on exoplanets and the new objectives of the next generation of ground-based wide field imager networks. We will finally discuss the fantastic prospect offered by space-based microlensing at the horizon 2020–2025.

  1. Data Albums: An Event Driven Search, Aggregation and Curation Tool for Earth Science

    Science.gov (United States)

    Ramachandran, Rahul; Kulkarni, Ajinkya; Maskey, Manil; Bakare, Rohan; Basyal, Sabin; Li, Xiang; Flynn, Shannon

    2014-01-01

    Approaches used in Earth science research such as case study analysis and climatology studies involve discovering and gathering diverse data sets and information to support the research goals. Gathering relevant data and information for case studies and climatology analysis is both tedious and time consuming. Current Earth science data systems are designed with the assumption that researchers access data primarily by instrument or geophysical parameter. In cases where researchers are interested in studying a significant event, they have to manually assemble a variety of datasets relevant to it by searching the different distributed data systems. This paper presents a specialized search, aggregation and curation tool for Earth science to address these challenges. The search tool automatically creates curated 'Data Albums', aggregated collections of information related to a specific event, containing links to relevant data files [granules] from different instruments, tools and services for visualization and analysis, and information about the event contained in news reports, images or videos to supplement research analysis. Curation in the tool is driven via an ontology-based relevancy ranking algorithm to filter out non-relevant information and data.

  2. Web Usage Mining Analysis of Federated Search Tools for Egyptian Scholars

    Science.gov (United States)

    Mohamed, Khaled A.; Hassan, Ahmed

    2008-01-01

    Purpose: This paper aims to examine the behaviour of the Egyptian scholars while accessing electronic resources through two federated search tools. The main purpose of this article is to provide guidance for federated search tool technicians and support teams about user issues, including the need for training. Design/methodology/approach: Log…

  3. Search for bright nearby M dwarfs with virtual observatory tools

    Energy Technology Data Exchange (ETDEWEB)

    Aberasturi, M.; Caballero, J. A.; Montesinos, B.; Gálvez-Ortiz, M. C.; Solano, E.; Martín, E. L. [Centro de Astrobiología (CSIC-INTA), Departamento de Astrofísica, P.O. Box 78, E-28691 Villanueva de la Cañada, Madrid (Spain)

    2014-08-01

    Using Virtual Observatory tools, we cross-matched the Carlsberg Meridian 14 and the 2MASS Point Source catalogs to select candidate nearby bright M dwarfs distributed over ∼25,000 deg². Here, we present reconnaissance low-resolution optical spectra for 27 candidates that were observed with the Intermediate Dispersion Spectrograph at the 2.5 m Isaac Newton Telescope (R ≈ 1600). We derived spectral types from a new spectral index, R, which measures the ratio of fluxes at 7485-7015 Å and 7120-7150 Å. We also used VOSA, a Virtual Observatory tool for spectral energy distribution fitting, to derive effective temperatures and surface gravities for each candidate. The resulting 27 targets were M dwarfs brighter than J = 10.5 mag, 16 of which were completely new in the Northern hemisphere and 7 of which were located at less than 15 pc. For all of them, we also measured Hα and Na I pseudo-equivalent widths, determined photometric distances, and identified the most active stars. The targets with the weakest sodium absorption, namely, J0422+2439 (with X-ray and strong Hα emissions), J0435+2523, and J0439+2333, are new members in the young Taurus-Auriga star-forming region based on proper motion, spatial distribution, and location in the color-magnitude diagram, which reopens the discussion on the deficit of M2-4 Taurus stars. Finally, based on proper motion diagrams, we report on a new wide M dwarf binary system in the field, LSPM J0326+3929EW.
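
    The spectral index described is a ratio of mean fluxes in two wavelength windows, which takes only a few lines to compute from a 1D spectrum. A sketch with an invented toy spectrum (window edges as quoted in the abstract; the helper name is ours):

      # Compute a spectral index as the ratio of mean fluxes in two
      # wavelength windows (Angstroms), as used to assign spectral types.
      import numpy as np

      def spectral_index_R(wavelength, flux,
                           band_num=(7015.0, 7485.0), band_den=(7120.0, 7150.0)):
          num = flux[(wavelength >= band_num[0]) & (wavelength <= band_num[1])].mean()
          den = flux[(wavelength >= band_den[0]) & (wavelength <= band_den[1])].mean()
          return num / den

      wl = np.linspace(6900.0, 7600.0, 701)  # toy spectrum with flux
      fx = 1.0 + (wl - 6900.0) / 700.0       # rising toward the red
      print(f"R = {spectral_index_R(wl, fx):.2f}")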

  4. Time analysis in astronomy: Tools for periodicity searches

    International Nuclear Information System (INIS)

    Buccheri, R.; Sacco, B.

    1985-01-01

    The authors discuss periodicity searches in radio and gamma-ray astronomy, with special consideration of pulsar searches. The basic methodologies of the fast Fourier transform, the Rayleigh test, and epoch folding are reviewed with the main objective of comparing costs and sensitivities in different applications. It is found that FFT procedures are convenient in unbiased searches for periodicity in radio astronomy, while in spark chamber gamma-ray astronomy, where the measurements are spread over a long integration time, unbiased searches are very difficult with the existing computing facilities, and analyses with a priori knowledge of the period values to search for are better done using the Rayleigh test with harmonic folding (Z_n test)
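    The Rayleigh/Z_n statistic folds event times at a trial period and sums the Rayleigh power over the first n harmonics. A minimal sketch of the standard formula (the Rayleigh test is the n_harmonics=1 case):

        import numpy as np

        def z_n_squared(event_times: np.ndarray, period: float, n_harmonics: int = 2) -> float:
            # Fold arrival times at the trial period to get phases in [0, 2*pi).
            phases = 2.0 * np.pi * (event_times % period) / period
            z = 0.0
            for k in range(1, n_harmonics + 1):
                # Rayleigh power of the k-th harmonic.
                z += np.cos(k * phases).sum() ** 2 + np.sin(k * phases).sum() ** 2
            return 2.0 * z / len(event_times)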

  5. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata-based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used for generating dynamic query controls such as sliders and tick boxes, which are ...

  6. Using Search Engine Data as a Tool to Predict Syphilis.

    Science.gov (United States)

    Young, Sean D; Torrone, Elizabeth A; Urata, John; Aral, Sevgi O

    2018-07-01

    Researchers have suggested that social media and online search data might be used to monitor and predict syphilis and other sexually transmitted diseases. Because people at risk for syphilis might seek sexual health and risk-related information on the internet, we investigated associations between state-level internet search query data (e.g., Google Trends) and reported weekly syphilis cases. We obtained weekly counts of reported primary and secondary syphilis for 50 states from 2012 to 2014 from the US Centers for Disease Control and Prevention. We collected weekly internet search query data regarding 25 risk-related keywords from 2012 to 2014 for 50 states using Google Trends. We joined 155 weeks of Google Trends data with a 1-week lag to weekly syphilis data for a total of 7750 data points. Using the least absolute shrinkage and selection operator (LASSO), we trained three linear mixed models on the first 10 weeks of each year. We validated the 2012 and 2013 models on the following 52 weeks and the 2014 model on the following 42 weeks. The models, consisting of different sets of keyword predictors for each year, accurately predicted 144 weeks of primary and secondary syphilis counts for each state, with an overall average R of 0.9 and an overall average root mean squared error of 4.9. We used Google Trends search data from the prior week to predict cases of syphilis in the following weeks for each state. Further research could explore how search data could be integrated into public health monitoring systems.
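    The modeling idea, regressing this week's counts on last week's keyword search volumes with an L1 penalty, can be sketched as follows. The study used LASSO-selected linear mixed models; this toy uses plain LASSO on synthetic data, so the numbers are placeholders.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        X = rng.random((155, 25))   # 155 weeks x 25 keyword volumes, lagged one week
        y = 5 + 20 * X[:, 0] + rng.normal(0, 2, size=155)  # synthetic weekly counts

        model = Lasso(alpha=0.1).fit(X[:10], y[:10])  # train on the first 10 weeks
        predicted = model.predict(X[10:])             # predict the remaining weeks
        rmse = float(np.sqrt(np.mean((predicted - y[10:]) ** 2)))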

  7. Exploratory search in an audio-visual archive: evaluating a professional search tool for non-professional users

    NARCIS (Netherlands)

    Bron, M.; van Gorp, J.; Nack, F.; de Rijke, M.

    2011-01-01

    As archives are opening up and publishing their content online, the general public can now directly access archive collections. To support access, archives typically provide the public with their internal search tools that were originally intended for professional archivists. We conduct a…

  8. Analysis of semantic search within the domains of uncertainty: using Keyword Effectiveness Indexing as an evaluation tool.

    Science.gov (United States)

    Lorence, Daniel; Abraham, Joanna

    2006-01-01

    Medical and health-related searches pose a special case of risk when using the web as an information resource. Uninsured consumers, lacking access to a trained provider, will often rely on information from the internet for self-diagnosis and treatment. In areas where treatments are uncertain or controversial, most consumers lack the knowledge to make an informed decision. This exploratory technology assessment examines the use of Keyword Effectiveness Indexing (KEI) analysis as a potential tool for profiling information search and keyword retrieval patterns. Results demonstrate that the KEI methodology can be useful in identifying e-health search patterns, but is limited by semantic or text-based web environments.

  9. Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)

    Science.gov (United States)

    Bendig, Regina B.

    2009-01-01

    The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engines), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…

  10. Process planning optimization on turning machine tool using a hybrid genetic algorithm with local search approach

    Directory of Open Access Journals (Sweden)

    Yuliang Su

    2015-04-01

    Full Text Available A turning machine tool is a new type of machine tool equipped with more than one spindle and turret. The distinctive simultaneous and parallel processing abilities of a turning machine tool increase the complexity of process planning. The operations must not only be sequenced to satisfy precedence constraints, but also scheduled against multiple objectives, such as minimizing machining cost and maximizing utilization of the turning machine tool. To solve this problem, a hybrid genetic algorithm is proposed that generates optimal process plans based on a mixed 0-1 integer programming model. An operation precedence graph is used to represent precedence constraints and to help generate a feasible initial population for the hybrid genetic algorithm. An encoding strategy based on the data structure was developed to represent process plans digitally and form the solution space. In addition, a local search approach for optimizing the assignment of available turrets is added to incorporate scheduling into process planning. A real-world case is used to show that the proposed approach avoids infeasible solutions and effectively generates a globally optimal process plan.
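    The hybrid (memetic) scheme, a genetic algorithm whose individuals are refined by local search under precedence constraints, can be sketched as below. This toy omits crossover and the turret-assignment step, and its operations, precedence pairs and cost function are hypothetical stand-ins for the paper's model.

        import random

        OPS = ["op1", "op2", "op3", "op4"]
        PREC = {("op1", "op2"), ("op1", "op3"), ("op3", "op4")}  # (a, b): a before b
        TIME = {"op1": 3, "op2": 2, "op3": 4, "op4": 1}

        def feasible(seq):
            pos = {op: i for i, op in enumerate(seq)}
            return all(pos[a] < pos[b] for a, b in PREC)

        def cost(seq):
            # Toy objective (weighted completion time), standing in for the
            # paper's machining-cost and utilization objectives.
            return sum((i + 1) * TIME[op] for i, op in enumerate(seq))

        def random_feasible():
            while True:
                seq = random.sample(OPS, len(OPS))
                if feasible(seq):
                    return seq

        def local_search(seq):
            # Memetic step: try feasible adjacent swaps, keep improvements.
            best = seq
            for i in range(len(seq) - 1):
                cand = best[:]
                cand[i], cand[i + 1] = cand[i + 1], cand[i]
                if feasible(cand) and cost(cand) < cost(best):
                    best = cand
            return best

        population = [random_feasible() for _ in range(20)]
        for _ in range(50):
            survivors = sorted(population, key=cost)[:10]           # selection
            offspring = [local_search(random_feasible()) for _ in range(10)]
            population = survivors + offspring
        best_plan = min(population, key=cost)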

  11. Ovid MEDLINE Instruction can be Evaluated Using a Validated Search Assessment Tool. A Review of: Rana, G. K., Bradley, D. R., Hamstra, S. J., Ross, P. T., Schumacher, R. E., Frohna, J. G., & Lypson, M. L. (2011). A validated search assessment tool: Assessing practice-based learning and improvement in a residency program. Journal of the Medical Library Association, 99(1), 77-81. doi:10.3163/1536-5050.99.1.013

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-01-01

    Full Text Available Objective – To determine the construct validity of a search assessment instrument that is used to evaluate search strategies in Ovid MEDLINE. Design – Cross-sectional, cohort study. Setting – The Academic Medical Center of the University of Michigan. Subjects – All 22 first-year residents in the Department of Pediatrics in 2004 (cohort 1); 10 senior pediatric residents in 2005 (cohort 2); and 9 faculty members who taught evidence-based medicine (EBM) and published on EBM topics. Methods – Two methods were employed to determine whether the University of Michigan MEDLINE Search Assessment Instrument (UMMSA) could show differences between searchers' construction of a MEDLINE search strategy. The first method tested the search skills of all 22 incoming pediatrics residents (cohort 1) after they received MEDLINE training in 2004, and again upon graduation in 2007. Only 15 of these residents were tested upon graduation; seven were either no longer in the residency program or had left the institution shortly after graduation. The search test asked study participants to read a clinical scenario, identify the search question in the scenario, and perform an Ovid MEDLINE search. Two librarians scored the blinded search strategies. The second method compared the scores of the 22 residents with the scores of ten senior residents (cohort 2) and nine faculty volunteers. Unlike the first cohort, the ten senior residents had not received any MEDLINE training. The faculty members' search strategies were used as the gold-standard comparison for scoring the search skills of the two cohorts. Main Results – The search strategy scores of the 22 first-year residents, who received training, improved from 2004 to 2007 (mean improvement: 51.7 to 78.7; t(14) = 5.43, P …). Conclusion – According to the authors, “the results of this study provide evidence for the validity of an instrument to evaluate MEDLINE search strategies” (p. 81), since the instrument under…

  12. Searching in the Context of a Task: A Review of Methods and Tools

    Directory of Open Access Journals (Sweden)

    Ana Maguitman

    2018-04-01

    Full Text Available Contextual information extracted from the user task can help to better target retrieval to task-relevant content. In particular, topical context can be exploited to identify the subject of the information needs, helping to reduce the information overload problem. A great number of methods exist to extract raw context data and contextual interaction patterns from the user task and to model this information using higher-level representations. Context can then be used as a source for automatic query generation, or as a means to refine or disambiguate user-generated queries. It can also be used to filter and rank results as well as to select domain-specific search engines with better capabilities to satisfy specific information requests. This article reviews methods that have been applied to deal with the problem of reflecting the current and long-term interests of a user in the search process. It discusses major difficulties encountered in the research area of context-based information retrieval and presents an overview of tools proposed since the mid-nineties to deal with the problem of context-based search.
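    One simple instance of the context-to-query family the review surveys is to promote the highest-weighted terms of the user's working text into a query. A minimal sketch with scikit-learn; the context strings and top-5 cutoff are made up for illustration.

        from sklearn.feature_extraction.text import TfidfVectorizer

        task_context = (
            "The patient presents with progressive muscle weakness, elevated "
            "creatine kinase and a family history of neuromuscular disease."
        )
        background = [
            "weather forecast for the weekend",
            "stock market update and analysis",
            task_context,
        ]

        vectorizer = TfidfVectorizer(stop_words="english")
        tfidf = vectorizer.fit_transform(background)
        terms = vectorizer.get_feature_names_out()
        row = tfidf[len(background) - 1].toarray().ravel()      # weights of the task text
        query = " ".join(terms[i] for i in row.argsort()[-5:])  # top-5 context terms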

  13. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    2000-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  14. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    1999-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  15. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    1998-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  16. A literature search tool for intelligent extraction of disease-associated genes.

    Science.gov (United States)

    Jung, Jae-Yoon; DeLuca, Todd F; Nelson, Tristan H; Wall, Dennis P

    2014-01-01

    To extract disorder-associated genes from the scientific literature in PubMed with greater sensitivity for literature-based support than existing methods. We developed a PubMed query to retrieve disorder-related, original research articles. Then we applied a rule-based text-mining algorithm with keyword matching to extract target disorders, genes with significant results, and the type of study described by the article. We compared our resulting candidate disorder genes and supporting references with existing databases. We demonstrated that our candidate gene set covers nearly all genes in manually curated databases, and that the references supporting the disorder-gene link are more extensive and accurate than those in other general-purpose gene-to-disorder association databases. We implemented a novel publication search tool to find target articles, specifically focused on links between disorders and genotypes. Through comparison against gold-standard, manually updated gene-disorder databases and against automated databases of similar functionality, we show that our tool can search through the entirety of PubMed to extract the main gene findings for human diseases rapidly and accurately.
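    Rule-based extraction by keyword matching can be pictured as below; the gene-symbol pattern and association cues are hypothetical simplifications of the paper's rules.

        import re

        GENE = re.compile(r"\b[A-Z][A-Z0-9]{2,6}\b")          # crude gene-symbol pattern
        ASSOC = re.compile(r"associated with|linked to|risk of", re.I)

        def extract(abstract: str, disorder: str):
            # Return candidate gene symbols from sentences that mention the
            # disorder together with an association cue.
            hits = set()
            for sentence in abstract.split(". "):
                if disorder.lower() in sentence.lower() and ASSOC.search(sentence):
                    hits.update(GENE.findall(sentence))
            return hits

        example = ("Variants in SHANK3 were significantly associated with autism. "
                   "No association was found for BDNF.")
        print(extract(example, "autism"))  # {'SHANK3'}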

  17. PLAST: parallel local alignment search tool for database comparison

    Directory of Open Access Journals (Sweden)

    Lavenier Dominique

    2009-10-01

    Full Text Available Abstract Background Sequence similarity searching is an important and challenging task in molecular biology and next-generation sequencing should further strengthen the need for faster algorithms to process such vast amounts of data. At the same time, the internal architecture of current microprocessors is tending towards more parallelism, leading to the use of chips with two, four and more cores integrated on the same die. The main purpose of this work was to design an effective algorithm to fit with the parallel capabilities of modern microprocessors. Results A parallel algorithm for comparing large genomic banks and targeting middle-range computers has been developed and implemented in PLAST software. The algorithm exploits two key parallel features of existing and future microprocessors: the SIMD programming model (SSE instruction set and the multithreading concept (multicore. Compared to multithreaded BLAST software, tests performed on an 8-processor server have shown speedup ranging from 3 to 6 with a similar level of accuracy. Conclusion A parallel algorithmic approach driven by the knowledge of the internal microprocessor architecture allows significant speedup to be obtained while preserving standard sensitivity for similarity search problems.
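    PLAST's speedups come from SIMD instructions plus multicore multithreading; only the coarse-grained, partition-per-core part is easy to sketch in Python. A toy version with a stand-in k-mer similarity score instead of real alignment:

        from concurrent.futures import ProcessPoolExecutor

        def score(query, subject, k=4):
            # Toy similarity: count of subject k-mers shared with the query
            # (a stand-in for PLAST's seed-and-extend alignment).
            qk = {query[i:i + k] for i in range(len(query) - k + 1)}
            return sum(subject[i:i + k] in qk for i in range(len(subject) - k + 1))

        def search_chunk(args):
            query, chunk = args
            return [(name, score(query, seq)) for name, seq in chunk]

        def parallel_search(query, bank, workers=4):
            # Partition the bank and score the partitions on separate cores.
            chunks = [bank[i::workers] for i in range(workers)]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                parts = pool.map(search_chunk, [(query, c) for c in chunks])
            return sorted((hit for part in parts for hit in part), key=lambda h: -h[1])

        if __name__ == "__main__":
            bank = [("s1", "ACGTACGTACGT"), ("s2", "TTTTGGGGCCCC")]
            print(parallel_search("ACGTACGT", bank, workers=2))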

  18. Searching for Sentient Design Tools for Game Development

    DEFF Research Database (Denmark)

    Liapis, Antonios

    Over the last twenty years, computer games have grown from a niche market targeting young adults to an important player in the global economy, engaging millions of people from different cultural backgrounds. As both the number and the size of computer games continue to rise, game companies handle… …their generative algorithms) and to their human users (who must take all design decisions), respectively. This thesis argues that computers can be creative partners to human designers rather than mere slaves; game design tools can be aware of designer intentions, preferences and routines, and can accommodate them… or even subvert them. This thesis presents Sentient Sketchbook, a tool for designing game level abstractions of different game genres, which assists the level designer as it automatically tests maps for playability constraints, evaluates and displays the map's gameplay properties and creates alternatives…

  19. Personalizing Web Search based on User Profile

    OpenAIRE

    Utage, Sharyu; Ahire, Vijaya

    2016-01-01

    Web search engines are the most widely used tools for information retrieval from the World Wide Web, helping users find the most useful information. When different users search for the same information, a search engine provides the same results without understanding who submitted the query. Personalized web search is a technique for providing more useful results. This paper models user preferences as hierarchical user profiles. A framework called UPS is proposed. It generalizes profiles and m...

  20. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  1. A tool for model based diagnostics of the AGS Booster

    International Nuclear Information System (INIS)

    Luccio, A.

    1993-01-01

    A model-based algorithmic tool was developed to search for lattice errors by a systematic analysis of orbit data in the AGS Booster synchrotron. The algorithm employs transfer matrices calculated with MAD between points in the ring. Iterative model fitting of the data allows one to find and eventually correct magnet displacements and angles or field errors. The tool, implemented on an HP-Apollo workstation system, has proved very general and of immediate physical interpretation.

  2. Parallel Harmony Search Based Distributed Energy Resource Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ceylan, Oguzhan [ORNL]; Liu, Guodong [ORNL]; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)]

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
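    The harmony search metaheuristic improvises new candidate solutions from a memory of good ones. A minimal serial sketch on a toy objective; the paper's objective is voltage deviation on a real feeder and its search is parallelized, both of which are omitted here.

        import random

        def objective(x):
            return sum(v * v for v in x)  # stand-in for total voltage deviation

        DIM, HMS, HMCR, PAR, BW, ITERS = 3, 10, 0.9, 0.3, 0.1, 500
        memory = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(HMS)]

        for _ in range(ITERS):
            new = []
            for d in range(DIM):
                if random.random() < HMCR:              # draw from harmony memory
                    v = random.choice(memory)[d]
                    if random.random() < PAR:           # pitch adjustment
                        v += random.uniform(-BW, BW)
                else:                                   # random improvisation
                    v = random.uniform(-1, 1)
                new.append(v)
            worst = max(range(HMS), key=lambda i: objective(memory[i]))
            if objective(new) < objective(memory[worst]):
                memory[worst] = new                     # replace the worst harmony

        best = min(memory, key=objective)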

  3. A Secured Cognitive Agent based Multi-strategic Intelligent Search System

    Directory of Open Access Journals (Sweden)

    Neha Gulati

    2018-04-01

    Full Text Available Search Engines (SEs) are the most preferred and ubiquitously used information retrieval tools. Despite the vast scale of user involvement in SEs, their limited capability to understand the searcher's context and emotions places a high cognitive, perceptual and learning load on the user to maintain search momentum. In this regard, the present work discusses a Cognitive Agent (CA) based approach to support the user in the Web-based search process. The work suggests a framework called Secured Cognitive Agent based Multi-strategic Intelligent Search System (CAbMsISS) to assist the user in the search process. It helps to reduce the contextual and emotional mismatch between SEs and users. After implementation of the proposed framework, performance analysis shows that the CAbMsISS framework improves Query Retrieval Time (QRT) and the effectiveness of retrieving relevant results compared to a Present Search Engine (PSE). Additionally, it provides search suggestions when a user accesses a resource previously tagged with negative emotions. Overall, the goal of the system is to enhance the search experience and keep the user motivated. The framework provides suggestions through a search log that tracks the queries searched, resources accessed and emotions experienced during the search. The implemented framework also considers user security. Keywords: BDI model, Cognitive Agent, Emotion, Information retrieval, Intelligent search, Search Engine

  4. Top-k Keyword Search Over Graphs Based On Backward Search

    Directory of Open Access Journals (Sweden)

    Zeng Jia-Hui

    2017-01-01

    Full Text Available Keyword search is one of the most user-friendly and intuitive information retrieval methods. Using keyword search to retrieve connected subgraphs has many applications in graph-based cognitive computation and is a basic underlying technology. This paper focuses on top-k keyword search over graphs. We implemented a keyword search algorithm that applies the backward search idea: it first locates the keyword vertices and then applies backward search to find rooted trees that contain the query keywords. Experiments show that query time is affected by the number of iterations of the algorithm.
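    The backward search idea can be sketched as one reverse BFS per keyword: a vertex reached by every search roots a candidate answer tree, scored by total depth. The graph below is a toy example; the paper's top-k ranking and iteration strategy are omitted.

        from collections import deque

        GRAPH = {"a": ["b"], "b": ["c", "d"], "c": [], "d": []}

        def reverse(graph):
            rev = {v: [] for v in graph}
            for u, outs in graph.items():
                for v in outs:
                    rev[v].append(u)
            return rev

        def backward_search(graph, keyword_nodes):
            rev, dist = reverse(graph), []
            for start in keyword_nodes:            # one backward BFS per keyword
                d, queue = {start: 0}, deque([start])
                while queue:
                    u = queue.popleft()
                    for p in rev[u]:
                        if p not in d:
                            d[p] = d[u] + 1
                            queue.append(p)
                dist.append(d)
            # Roots reach every keyword; rank them by total path length.
            roots = set.intersection(*(set(d) for d in dist))
            return sorted((sum(d[r] for d in dist), r) for r in roots)

        print(backward_search(GRAPH, ["c", "d"]))  # [(2, 'b'), (4, 'a')]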

  5. BEST: Next-Generation Biomedical Entity Search Tool for Knowledge Discovery from Biomedical Literature.

    Directory of Open Access Journals (Sweden)

    Sunwon Lee

    Full Text Available As the volume of publications rapidly increases, searching for relevant information from the literature becomes more challenging. To complement standard search engines such as PubMed, it is desirable to have an advanced search tool that directly returns relevant biomedical entities such as targets, drugs, and mutations rather than a long list of articles. Some existing tools submit a query to PubMed and process retrieved abstracts to extract information at query time, resulting in a slow response time and limited coverage of only a fraction of the PubMed corpus. Other tools preprocess the PubMed corpus to speed up the response time; however, they are not constantly updated, and thus produce outdated results. Further, most existing tools cannot process sophisticated queries such as searches for mutations that co-occur with query terms in the literature. To address these problems, we introduce BEST, a biomedical entity search tool. BEST returns, as a result, a list of 10 different types of biomedical entities including genes, diseases, drugs, targets, transcription factors, miRNAs, and mutations that are relevant to a user's query. To the best of our knowledge, BEST is the only system that processes free text queries and returns up-to-date results in real time including mutation information in the results. BEST is freely accessible at http://best.korea.ac.kr.

  6. Scintillating bolometers: A promising tool for rare decays search

    Energy Technology Data Exchange (ETDEWEB)

    Pattavina, L., E-mail: luca.pattavina@mib.infn.it

    2013-12-21

    The idea of using a scintillating bolometer was first suggested for solar neutrino experiments in 1989. After many years of development, we are now able to exploit this experimental technique, based on the calorimetric approach with cryogenic particle detectors, to investigate rare events such as Neutrinoless Double Beta Decay and interactions of Dark Matter candidates. The possibility of having high-resolution detectors in which a very large part of the natural background can be discriminated from the weak expected signal is very appealing. The goal of distinguishing the different types of interactions in the detector can be achieved by means of scintillating bolometers. The simultaneous read-out of the heat and scintillation signals, made with two independent bolometers, enables this precious feature, leading to a possibly background-free experiment. Within the LUCIFER project, we report on how this technique can be exploited to investigate Double Beta Decay for different candidate isotopes. Moreover, we demonstrate how scintillating bolometers are suited to investigating other rare events such as α decays of long-lived isotopes of lead and bismuth.

  7. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    Science.gov (United States)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, JSON, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, subsetting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.
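    An OpenSearch-style granule query is just an HTTP GET with keyword, temporal and spatial parameters. A hedged sketch: the endpoint URL and parameter names below are placeholders, not ECHO's documented API; the real values come from the service's OpenSearch descriptor document.

        import requests

        ENDPOINT = "https://example.gov/opensearch/granules.atom"  # hypothetical
        params = {
            "keyword": "sea surface temperature",
            "temporal": "2012-01-01T00:00:00Z,2012-12-31T23:59:59Z",
            "boundingBox": "-180,-90,180,90",
            "numberOfResults": 20,
        }
        response = requests.get(ENDPOINT, params=params, timeout=30)
        response.raise_for_status()
        print(response.text[:500])  # Atom feed of matching granules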

  8. Utilization and perceived problems of online medical resources and search tools among different groups of European physicians.

    Science.gov (United States)

    Kritz, Marlene; Gschwandtner, Manfred; Stefanov, Veronika; Hanbury, Allan; Samwald, Matthias

    2013-06-26

    There is a large body of research suggesting that medical professionals have unmet information needs during their daily routines. To investigate which online resources and tools different groups of European physicians use to gather medical information, and to identify barriers that prevent the successful retrieval of medical information from the Internet. A detailed Web-based questionnaire was sent out to approximately 15,000 physicians across Europe and disseminated through partner websites. 500 European physicians of different levels of academic qualification and medical specialization were included in the analysis. Self-reported frequency of use of different types of online resources, perceived importance of search tools, and perceived search barriers were measured. Comparisons were made across different levels of qualification (qualified physicians vs physicians in training, medical specialists without professorships vs medical professors) and specialization (general practitioners vs specialists). Most participants were Internet-savvy, came from Austria (43%, 190/440) and Switzerland (31%, 137/440), were above 50 years old (56%, 239/430), stated high levels of medical work experience, had regular patient contact and were employed in nonacademic health care settings (41%, 177/432). All groups reported frequent use of general search engines and cited "restricted accessibility to good quality information" as a dominant barrier to finding medical information on the Internet. Physicians in training reported the most frequent use of Wikipedia (56%, 31/55). Specialists were more likely than general practitioners to use medical research databases (68%, 185/274 vs 27%, 24/88; χ²₂ = 44.905, P …). … resources on the Internet and frequent reliance on general search engines and social media among physicians require further attention. Possible solutions may be increased governmental support for the development and popularization of user-tailored medical search tools and open…

  9. PubMed and beyond: a survey of web tools for searching biomedical literature

    Science.gov (United States)

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search PMID:21245076

  10. PubMed and beyond: a survey of web tools for searching biomedical literature.

    Science.gov (United States)

    Lu, Zhiyong

    2011-01-01

    The past decade has witnessed the modern advances of high-throughput technology and rapid growth of research capacity in producing large-scale biological data, both of which were concomitant with an exponential growth of biomedical literature. This wealth of scholarly knowledge is of significant importance for researchers in making scientific discoveries and healthcare professionals in managing health-related matters. However, the acquisition of such information is becoming increasingly difficult due to its large volume and rapid growth. In response, the National Center for Biotechnology Information (NCBI) is continuously making changes to its PubMed Web service for improvement. Meanwhile, different entities have devoted themselves to developing Web tools for helping users quickly and efficiently search and retrieve relevant publications. These practices, together with maturity in the field of text mining, have led to an increase in the number and quality of various Web tools that provide comparable literature search service to PubMed. In this study, we review 28 such tools, highlight their respective innovations, compare them to the PubMed system and one another, and discuss directions for future development. Furthermore, we have built a website dedicated to tracking existing systems and future advances in the field of biomedical literature search. Taken together, our work serves information seekers in choosing tools for their needs and service providers and developers in keeping current in the field. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/search.

  11. (Re)interpreting LHC New Physics Search Results : Tools and Methods, 3rd Workshop

    CERN Document Server

    The quest for new physics beyond the SM is arguably the driving topic for LHC Run2. LHC collaborations are pursuing searches for new physics in a vast variety of channels. Although collaborations provide various interpretations for their search results, the full understanding of these results requires a much wider interpretation scope involving all kinds of theoretical models. This is a very active field, with close theory-experiment interaction. In particular, development of dedicated methodologies and tools is crucial for such scale of interpretation. Recently, a Forum was initiated to host discussions among LHC experimentalists and theorists on topics related to the BSM (re)interpretation of LHC data, and especially on the development of relevant interpretation tools and infrastructure: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/InterpretingLHCresults Two meetings were held at CERN, where active discussions and concrete work on (re)interpretation methods and tools took place, with valuable cont...

  12. Three dimensional pattern recognition using feature-based indexing and rule-based search

    Science.gov (United States)

    Lee, Jae-Kyu

    In flexible automated manufacturing, robots can perform routine operations as well as recover from atypical events, provided that process-relevant information is available to the robot controller. Real-time vision is among the most versatile sensing tools, yet the reliability of machine-based scene interpretation can be questionable. The effort described here is focused on the development of machine-based vision methods to support autonomous nuclear fuel manufacturing operations in hot cells. This thesis presents a method to efficiently recognize 3D objects from 2D images based on feature-based indexing. Object recognition is the identification of correspondences between parts of a current scene and stored views of known objects, using chains of segments or indexing vectors. To create indexed object models, characteristic model image features are extracted during preprocessing. Feature vectors representing model object contours are acquired from several points of view around each object and stored. Recognition is the process of matching stored views with features or patterns detected in a test scene. Two sets of algorithms were developed: one for preprocessing and indexed database creation, and one for pattern searching and matching during recognition. At recognition time, the indexing vectors with the highest match probability are retrieved from the model image database using a nearest-neighbor search algorithm, which predicts the best possible match candidates. Extended searches are guided by a search strategy that employs knowledge-based (KB) selection criteria. The knowledge-based system simplifies the recognition process and minimizes the number of iterations and memory usage. Novel contributions include the use of a feature-based indexing data structure together with a knowledge base. Both components improve the efficiency of the recognition process through improved structuring of the database of object features and reduced database size.
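    The indexing-plus-nearest-neighbor retrieval step can be sketched with a KD-tree over stored feature vectors. This is illustrative only: the thesis's indexing vectors encode contour segment chains, and its rule base guides extended searches, both of which are omitted.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        model_features = rng.random((500, 16))   # 500 stored model-view feature vectors
        labels = rng.integers(0, 10, 500)        # object ID for each stored view

        index = cKDTree(model_features)          # build the feature index offline

        scene_feature = rng.random(16)           # feature vector from the test scene
        dist, idx = index.query(scene_feature, k=3)  # 3 best match candidates
        candidates = list(zip(labels[idx], dist))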

  13. An end user evaluation of query formulation and results review tools in three medical meta-search engines.

    Science.gov (United States)

    Leroy, Gondy; Xu, Jennifer; Chung, Wingyan; Eggers, Shauna; Chen, Hsinchun

    2007-01-01

    Retrieving sufficient relevant information online is difficult for many people because they use too few keywords to search and search engines do not provide many support tools. To further complicate the search, users often ignore support tools when available. Our goal is to evaluate in a realistic setting when users use support tools and how they perceive these tools. We compared three medical search engines with support tools that require more or less effort from users to form a query and evaluate results. We carried out an end user study with 23 users who were asked to find information, i.e., subtopics and supporting abstracts, for a given theme. We used a balanced within-subjects design and report on the effectiveness, efficiency and usability of the support tools from the end user perspective. We found significant differences in efficiency but did not find significant differences in effectiveness between the three search engines. Dynamic user support tools requiring less effort led to higher efficiency. Fewer searches were needed and more documents were found per search when both query reformulation and result review tools dynamically adjust to the user query. The query reformulation tool that provided a long list of keywords, dynamically adjusted to the user query, was used most often and led to more subtopics. As hypothesized, the dynamic result review tools were used more often and led to more subtopics than static ones. These results were corroborated by the usability questionnaires, which showed that support tools that dynamically optimize output were preferred.

  14. Computer-based literature search in medical institutions in India

    Directory of Open Access Journals (Sweden)

    Kalita Jayantee

    2007-01-01

    Full Text Available Aim: To study the use of computer-based literature search and its application in clinical training and patient care as a surrogate marker of evidence-based medicine. Materials and Methods: A questionnaire comprising questions on purpose (presentation, patient management, research), realm (site accessed), nature and frequency of search, effect, infrastructure, formal training in computer-based literature search, and suggestions for further improvement was sent to residents and faculty of a Postgraduate Medical Institute (PGI) and a Medical College. The responses were compared among different subgroups of respondents. Results: Out of 300 subjects approached, 194 responded, of whom 103 were from the PGI and 91 from the Medical College. There were 97 specialty residents, 58 super-specialty residents and 39 faculty members. Computer-based literature search was done at least once a month by 89%, though there was marked variability in frequency and extent. The motivation for computer-based literature search was presentation in 90%, research in 65% and patient management in 60.3%. The benefit of the search was acknowledged in learning and teaching by 80%, research by 65% and patient care by 64.4% of respondents. Formal training in computer-based literature search had been received by 41%, of whom 80% were residents. Residents from the PGI did more frequent and more extensive computer-based literature searches, which was attributed to better infrastructure and training. Conclusion: Training and infrastructure are both crucial for computer-based literature search, which may translate into evidence-based medicine.

  15. In Silico PCR Tools for a Fast Primer, Probe, and Advanced Searching.

    Science.gov (United States)

    Kalendar, Ruslan; Muterko, Alexandr; Shamekova, Malika; Zhambakin, Kabyl

    2017-01-01

    The polymerase chain reaction (PCR) is fundamental to molecular biology and is the most important practical molecular technique for the research laboratory. The principle of this technique has been further applied in many other simple or complex nucleic acid amplification technologies (NAAT). In parallel to laboratory "wet bench" experiments for nucleic acid amplification technologies, in silico or virtual (bioinformatics) approaches have been developed, among them in silico PCR analysis. In silico NAAT analysis is a useful and efficient complementary method to ensure the specificity of primers or probes for an extensive range of PCR applications, from homology gene discovery and molecular diagnosis to DNA fingerprinting and repeat searching. Predicting the sensitivity and specificity of primers and probes requires a search to determine whether they match a database with an optimal number of mismatches, similarity, and stability. In the development of in silico bioinformatics tools for nucleic acid amplification technologies, the prospects for new NAAT or similar approaches should be taken into account, including forward-looking and comprehensive analysis that is not limited to only one PCR technique variant. The software FastPCR and the online Java web tool are integrated tools for in silico PCR of linear and circular DNA, multiple primer or probe searches in large or small databases, and advanced searches. These tools are suitable for processing batch files, which is essential for automation when working with large amounts of data. The FastPCR software is available for download at http://primerdigital.com/fastpcr.html and the online Java version at http://primerdigital.com/tools/pcr.html.
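    The core of in silico primer matching is a mismatch-tolerant scan of the template. A minimal sketch, with no thermodynamics or 3'-end rules (which real tools such as FastPCR also evaluate):

        def find_binding_sites(template: str, primer: str, max_mismatches: int = 1):
            # Slide the primer along the template; report sites within the
            # allowed mismatch budget.
            sites = []
            for i in range(len(template) - len(primer) + 1):
                window = template[i:i + len(primer)]
                mismatches = sum(a != b for a, b in zip(primer, window))
                if mismatches <= max_mismatches:
                    sites.append((i, mismatches))
            return sites

        template = "ATGCGTACGTTAGCATGCGTACGATCG"
        print(find_binding_sites(template, "ATGCGTACG", max_mismatches=1))
        # [(0, 0), (14, 0)] — two perfect sites on the forward strand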

  16. Accelerator-based neutrino oscillation searches

    International Nuclear Information System (INIS)

    Whitehouse, D.A.; Rameika, R.; Stanton, N.

    1993-01-01

    This paper attempts to summarize the neutrino oscillation section of the Workshop on Future Directions in Particle and Nuclear Physics at Multi-GeV Hadron Beam Facilities. There were very lively discussions about the merits of the different oscillation channels, experiments, and facilities, but we believe a substantial consensus emerged. First, the next decade is one of great potential for discovery in neutrino physics, but it is also one of great peril. The possibility that neutrino oscillations explain the solar neutrino and atmospheric neutrino experiments, and the indirect evidence that Hot Dark Matter (HDM) in the form of light neutrinos might make up 30% of the mass of the universe, point to areas where accelerator-based experiments could play a crucial role in piecing together the puzzle. At the same time, the field faces a very uncertain future. The LSND experiment at LAMPF is the only funded neutrino oscillation experiment in the United States, and it is threatened by the abrupt shutdown of LAMPF proposed for fiscal 1994. The future of neutrino physics at the Brookhaven National Laboratory AGS depends on the continuation of High Energy Physics (HEP) funding after the RHIC startup. Most proposed neutrino oscillation searches at Fermilab depend on the completion of the Main Injector project and on the construction of a new neutrino beamline, which is uncertain at this point. The proposed KAON facility at TRIUMF would provide a neutrino beam similar to that at the AGS but with much increased intensity. The future of KAON is also uncertain. Despite these difficult obstacles, there is a real possibility that we are on the verge of understanding the masses and mixings of the neutrinos. The physics importance of such a discovery cannot be overstated. The current experimental status and future possibilities are discussed below.

  17. Footprints: A Visual Search Tool that Supports Discovery and Coverage Tracking.

    Science.gov (United States)

    Isaacs, Ellen; Domico, Kelly; Ahern, Shane; Bart, Eugene; Singhal, Mudita

    2014-12-01

    Searching a large document collection to learn about a broad subject involves the iterative process of figuring out what to ask, filtering the results, identifying useful documents, and deciding when one has covered enough material to stop searching. We are calling this activity "discoverage," discovery of relevant material and tracking coverage of that material. We built a visual analytic tool called Footprints that uses multiple coordinated visualizations to help users navigate through the discoverage process. To support discovery, Footprints displays topics extracted from documents that provide an overview of the search space and are used to construct searches visuospatially. Footprints allows users to triage their search results by assigning a status to each document (To Read, Read, Useful), and those status markings are shown on interactive histograms depicting the user's coverage through the documents across dates, sources, and topics. Coverage histograms help users notice biases in their search and fill any gaps in their analytic process. To create Footprints, we used a highly iterative, user-centered approach in which we conducted many evaluations during both the design and implementation stages and continually modified the design in response to feedback.

  18. Google vs. the Library: Student Preferences and Perceptions when Doing Research Using Google and a Federated Search Tool

    Science.gov (United States)

    Georgas, Helen

    2013-01-01

    Federated searching was once touted as the library world's answer to Google, but ten years since federated searching technology's inception, how does it actually compare? This study focuses on undergraduate student preferences and perceptions when doing research using both Google and a federated search tool. Students were asked about their…

  19. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is its use of an Internet search engine. Since the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the search engine looks for contexts that can help the user decide whether to correct the segment or not. By means of the search engine, the checker also suggests other expressions that appear on the Web more often than the one the user actually wrote.
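    The web-as-corpus idea reduces to comparing hit counts: flag a segment the web (almost) never attests and rank better-attested rewordings as suggestions. In this sketch, web_hit_count is a hypothetical stand-in for a real search-engine API, stubbed with canned counts so it runs offline.

        # Canned counts standing in for live search-engine results.
        CANNED = {'"I am agree"': 3, '"I agree"': 1_000_000}

        def web_hit_count(query: str) -> int:
            return CANNED.get(query, 0)  # replace with a real API call

        def check_segment(segment, alternatives, threshold=10):
            # Flag segments with (almost) no exact-phrase hits and rank
            # better-attested alternatives as suggestions.
            if web_hit_count(f'"{segment}"') >= threshold:
                return None  # common enough; assume well-formed
            ranked = sorted(alternatives, key=lambda a: -web_hit_count(f'"{a}"'))
            return {"suspect": segment, "suggestions": ranked}

        print(check_segment("I am agree", ["I agree"]))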

  20. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

    Full Text Available The Internet has become a daily instrument for most of us, for professional or personal reasons. We hardly remember the times when a computer and a broadband connection were luxury items. More and more people rely on the complex web network to find the information they need. This paper presents an overview of Internet search issues and search engines, and describes the parties and the basic mechanisms involved in a search for web-based information resources. It also presents ways to increase the efficiency of web searches through a better understanding of what search engines ignore in website content.

  1. Signature-based global searches at CDF

    International Nuclear Information System (INIS)

    Hocker, James Andrew

    2008-01-01

    Data collected in Run II of the Fermilab Tevatron are searched for indications of new electroweak-scale physics. Rather than focusing on particular new physics scenarios, CDF data are analyzed for discrepancies with respect to the Standard Model prediction. Gross features of the data, mass bumps, and significant excesses of events with large summed transverse momentum are examined in a model-independent and quasi-model-independent approach. This global search for new physics in over three hundred exclusive final states in 2 fb⁻¹ of pp̄ collisions at √s = 1.96 TeV reveals no significant indication of physics beyond the Standard Model.

  2. Object-based target templates guide attention during visual search

    OpenAIRE

    Berggren, Nick; Eimer, Martin

    2018-01-01

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target f...

  3. Impact of Glaucoma and Dry Eye on Text-Based Searching

    Science.gov (United States)

    Sun, Michelle J.; Rubin, Gary S.; Akpek, Esen K.; Ramulu, Pradeep Y.

    2017-01-01

    Purpose We determine if visual field loss from glaucoma and/or measures of dry eye severity are associated with difficulty searching, as judged by slower search times on a text-based search task. Methods Glaucoma patients with bilateral visual field (VF) loss, patients with clinically significant dry eye, and normally sighted controls were enrolled from the Wilmer Eye Institute clinics. Subjects searched three Yellow Pages excerpts for a specific phone number, and search time was recorded. Results A total of 50 glaucoma subjects, 40 dry eye subjects, and 45 controls completed study procedures. On average, glaucoma patients exhibited 57% longer search times compared to controls (95% confidence interval [CI], 26%–96%; P …). Dry eye subjects demonstrated similar search times compared to controls, though worse Ocular Surface Disease Index (OSDI) vision-related subscores were associated with longer search times (P …), but not clinical measures of dry eye (P > 0.08 for Schirmer's testing without anesthesia, corneal fluorescein staining, and tear film breakup time). Conclusions Text-based visual search is slower for glaucoma patients with greater levels of VF loss and dry eye patients with greater self-reported visual difficulty, and these difficulties may contribute to decreased quality of life in these groups. Translational Relevance Visual search is impaired in glaucoma and dry eye groups compared to controls, highlighting the need for compensatory strategies and tools to assist individuals in overcoming their deficiencies. PMID:28670502

  4. Simrank: Rapid and sensitive general-purpose k-mer search tool

    Energy Technology Data Exchange (ETDEWEB)

    DeSantis, T.Z.; Keller, K.; Karaoz, U.; Alekseyenko, A.V; Singh, N.N.S.; Brodie, E.L; Pei, Z.; Andersen, G.L; Larsen, N.

    2011-04-01

    Terabyte-scale collections of string-encoded data are expected from consortia efforts such as the Human Microbiome Project (http://nihroadmap.nih.gov/hmp). Intra- and inter-project data similarity searches are enabled by rapid k-mer matching strategies. Software applications for sequence database partitioning, guide tree estimation, molecular classification and alignment acceleration have benefited from embedded k-mer searches as sub-routines. However, a rapid, general-purpose, open-source, flexible, stand-alone k-mer tool has not been available. Here we present a stand-alone utility, Simrank, which allows users to rapidly identify the database strings most similar to query strings. Performance testing of Simrank and related tools against DNA, RNA, protein and human-language datasets found Simrank 10× to 928× faster, depending on the dataset. Simrank provides molecular ecologists with a high-throughput, open-source choice for comparing large sequence sets to find similarity.
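    The underlying idea is to rank database strings by the fraction of the query's k-mers they share. A minimal sketch in that spirit; Simrank itself relies on optimized, bit-packed indexing for its speed, which this toy omits.

        def kmers(seq: str, k: int = 7) -> set:
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def rank_by_kmer_similarity(query: str, database: dict, k: int = 7):
            # Score = fraction of the query's k-mers found in each subject.
            qk = kmers(query, k)
            scores = {name: len(qk & kmers(seq, k)) / len(qk)
                      for name, seq in database.items()}
            return sorted(scores.items(), key=lambda kv: -kv[1])

        db = {"seqA": "ACGTACGTGGCATTTACGGA", "seqB": "TTTTTTTTTTTTTTTTTTTT"}
        print(rank_by_kmer_similarity("ACGTACGTGGCATT", db))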

  5. METHODOLOGICAL TOOLS FOR ORGANIZING THE PROCESS OF STAFF SEARCH AND SELECTION

    Directory of Open Access Journals (Sweden)

    T. Bilorus

    2015-08-01

    Full Text Available The relevance of, and the basic stages in, meeting personnel demand in the contemporary economy are determined. The necessity of, and requirements for, competency maps as tools for personnel selection are substantiated and generalized. An algorithm for choosing the sources in which to search for candidates for a vacant position is developed. Practical recommendations for the organization's candidate selection procedure are provided using the "ideal point" method.
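    The "ideal point" method ranks candidates by weighted distance from an ideal profile over normalized criteria. A minimal sketch; the criteria, weights and candidate scores below are hypothetical.

        import numpy as np

        criteria = ["experience", "education", "language", "interview"]
        ideal = np.array([1.0, 1.0, 1.0, 1.0])      # best attainable, normalized
        weights = np.array([0.4, 0.2, 0.1, 0.3])

        candidates = {
            "A": np.array([0.9, 0.7, 0.8, 0.6]),
            "B": np.array([0.6, 0.9, 0.9, 0.9]),
        }

        def distance_to_ideal(profile):
            # Weighted Euclidean distance to the ideal point; lower is better.
            return float(np.sqrt(np.sum(weights * (ideal - profile) ** 2)))

        best = min(candidates, key=lambda name: distance_to_ideal(candidates[name]))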

  6. Considerations for the development of task-based search engines

    DEFF Research Database (Denmark)

    Petcu, Paula; Dragusin, Radu

    2013-01-01

    Based on previous experience from working on a task-based search engine, we present a list of suggestions and ideas for an Information Retrieval (IR) framework that could inform the development of next-generation professional search systems. The specific task that we start from is the clinicians' information need in finding rare disease diagnostic hypotheses at the time and place where medical decisions are made. Our experience from the development of a search engine focused on supporting clinicians in completing this task has provided us with valuable insights into what aspects should be considered by the developers of vertical search engines.

  7. Attribute-Based Proxy Re-Encryption with Keyword Search

    Science.gov (United States)

    Shi, Yanfeng; Liu, Jiqiang; Han, Zhen; Zheng, Qingji; Zhang, Rui; Qiu, Shuo

    2014-01-01

    Keyword search on encrypted data allows one to issue the search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with the specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate the keyword search capability to other data users in a fine-grained access control manner, by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedding some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as the integration of technologies from the fields of attribute-based cryptography and proxy re-encryption cryptography. PMID:25549257

  8. Attribute-based proxy re-encryption with keyword search.

    Science.gov (United States)

    Shi, Yanfeng; Liu, Jiqiang; Han, Zhen; Zheng, Qingji; Zhang, Rui; Qiu, Shuo

    2014-01-01

    Keyword search on encrypted data allows one to issue the search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with the specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate the keyword search capability to other data users in a fine-grained access control manner, by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedding some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as the integration of technologies from the fields of attribute-based cryptography and proxy re-encryption cryptography.

  9. IBRI-CASONTO: Ontology-based semantic search engine

    Directory of Open Access Journals (Sweden)

    Awny Sayed

    2017-11-01

    Full Text Available The vast amount of information added at a very fast pace to data repositories creates a challenge in extracting correct and accurate information. This has increased the competition among developers for technology that understands the searcher's intent and the contextual meaning of terms. Arabic semantic search systems, however, are still in their infancy, which can be traced back to the complexity of the Arabic language: it has complex morphological, grammatical and semantic aspects, being a highly inflectional and derivational language. In this paper, we highlight and present an ontological search engine called IBRI-CASONTO for the Colleges of Applied Sciences, Oman. Our proposed engine supports both the Arabic and English languages. It employs two types of search: keyword-based and semantics-based. IBRI-CASONTO is based on technologies such as Resource Description Framework (RDF) data and an ontological graph. The experiments are presented in two sections: the first compares Entity-Search with Classical-Search within IBRI-CASONTO itself; the second compares the Entity-Search of IBRI-CASONTO with currently used search engines, such as Kngine, Wolfram Alpha and the most popular engine nowadays, Google, in order to measure their performance and efficiency.
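    The keyword-based versus semantics-based contrast can be illustrated over RDF data: string matching on literals versus a structured query over typed entities. A minimal sketch with rdflib; the tiny ontology below is a toy, not IBRI-CASONTO's schema.

        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.ahmed, RDF.type, EX.Lecturer))
        g.add((EX.ahmed, EX.teaches, EX.databases))
        g.add((EX.ahmed, EX.name, Literal("Ahmed Said")))

        # Keyword-based: naive string matching over literals.
        keyword_hits = [s for s, p, o in g
                        if isinstance(o, Literal) and "Ahmed" in str(o)]

        # Semantics-based: a structured SPARQL query over typed entities.
        results = g.query("""
            PREFIX ex: <http://example.org/>
            SELECT ?name WHERE {
                ?person a ex:Lecturer ;
                        ex:teaches ex:databases ;
                        ex:name ?name .
            }
        """)
        for row in results:
            print(row.name)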

  10. RASOnD - A comprehensive resource and search tool for RAS superfamily oncogenes from various species

    Directory of Open Access Journals (Sweden)

    Singh Tej P

    2011-07-01

    Full Text Available Abstract Background The Ras superfamily plays an important role in the control of cell signalling and division. Mutations in the Ras genes convert them into active oncogenes. The Ras oncogenes form a major thrust of global cancer research as they are involved in the development and progression of tumors. This has resulted in the exponential growth of data on the Ras superfamily across different public databases and in the literature. However, no dedicated public resource is currently available for data mining and analysis on this family. The present database was developed to facilitate straightforward accession, retrieval and analysis of information available on Ras oncogenes from one particular site. Description We have developed the RAS Oncogene Database (RASOnD) as a comprehensive knowledgebase that provides integrated and curated information on a single platform for oncogenes of the Ras superfamily. RASOnD encompasses exhaustive genomics and proteomics data existing across diverse publicly accessible databases. This resource presently includes 199,046 entries from 101 different species overall. It provides a search tool to generate information about their nucleotide and amino acid sequences, single nucleotide polymorphisms, chromosome positions, orthologies, motifs, structures, related pathways and associated diseases. We have implemented a number of user-friendly search interfaces and sequence analysis tools. At present the user can (i) browse the data, (ii) search any field through a simple or advanced search interface, and (iii) perform a BLAST search and subsequently a CLUSTALW multiple sequence alignment by selecting sequences of Ras oncogenes. The generic genome browser GBrowse, JMOL for structural visualization and TREEVIEW for phylograms have been integrated for clear perception of the retrieved data. External links to related databases have been included in RASOnD. Conclusions This database is a resource and search tool dedicated to Ras oncogenes. It has…

  11. Search for brown dwarfs in the IRAS data bases

    International Nuclear Information System (INIS)

    Low, F.J.

    1986-01-01

    A report is given on the initial searches for brown dwarf stars in the IRAS data bases. The paper was presented to the workshop on 'Astrophysics of brown dwarfs', Virginia, USA, 1985. To date no brown dwarfs have been discovered in the solar neighbourhood. Opportunities for future searches with greater sensitivity and different wavelengths are outlined. (U.K.)

  12. Knowledge base, information search and intention to adopt innovation

    NARCIS (Netherlands)

    Rijnsoever, van F.J.; Castaldi, C.

    2008-01-01

    Innovation is a process that involves searching for new information. This paper builds upon theoretical insights on individual and organizational learning and proposes a knowledge based model of how actors search for information when confronted with innovation. The model takes into account different

  13. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  14. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available under: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  15. Research on perturbation based Monte Carlo reactor criticality search

    International Nuclear Information System (INIS)

    Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang

    2013-01-01

    Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. By using only one criticality run to obtain the initial k_eff and the differential coefficients of the concerned parameter, a polynomial estimator of the k_eff response function is solved to get the critical value of the concerned parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation-based criticality search method are quite inspiring and that the method overcomes the disadvantages of the traditional one. (authors)
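
    The abstract describes fitting a polynomial model of k_eff as a function of the searched parameter from a single run's value and differential coefficients, then solving for the critical value. A minimal sketch of that final step, assuming a second-order Taylor model and invented example numbers (the paper's actual estimator and coefficient values are not given here):

    ```python
    # Sketch: solve a quadratic model k_eff(p) = k0 + k1*(p-p0) + 0.5*k2*(p-p0)^2
    # for the parameter value where k_eff = 1. All coefficient values are invented.
    import numpy as np

    p0 = 600.0      # reference parameter value (e.g., boron concentration, ppm)
    k0 = 1.02       # k_eff from the single criticality run at p0
    k1 = -1.0e-4    # dk/dp differential coefficient
    k2 = 2.0e-8     # d2k/dp2 differential coefficient

    # Roots of 0.5*k2*x^2 + k1*x + (k0 - 1) = 0, where x = p - p0.
    roots = np.roots([0.5 * k2, k1, k0 - 1.0])
    real = roots[np.isreal(roots)].real
    x = real[np.argmin(np.abs(real))]   # pick the root nearest the reference point
    print(f"estimated critical parameter: {p0 + x:.1f}")
    ```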

  16. Tools for model-independent bounds in direct dark matter searches

    DEFF Research Database (Denmark)

    Cirelli, M.; Del Nobile, E.; Panci, P.

    2013-01-01

    We discuss a framework (based on non-relativistic operators) and a self-contained set of numerical tools to derive the bounds from some current direct detection experiments on virtually any arbitrary model of Dark Matter elastically scattering on nuclei.

  17. Federated Search Tools in Fusion Centers: Bridging Databases in the Information Sharing Environment

    Science.gov (United States)

    2012-09-01

    Fusion centers are encouraged to explore all available information sources to enhance the intelligence analysis process. It follows then that fusion... WSIC also utilizes ACCURINT, a web-based subscription service. ACCURINT searches open source information and is able to collect and collate

  18. Evidence-based librarianship: searching for the needed EBL evidence.

    Science.gov (United States)

    Eldredge, J D

    2000-01-01

    This paper discusses the challenges of finding the evidence needed to implement Evidence-Based Librarianship (EBL). Focusing first on database coverage of three health sciences librarianship journals, the article examines the information content of different databases, the strategies needed to search for relevant evidence in the library literature via these databases, and the problems associated with searching the grey literature of librarianship. Database coverage, plausible search strategies, and the grey literature of library science all pose challenges to finding the research evidence needed for practicing EBL. Health sciences librarians need to ensure that systems are designed to track and provide access to the research evidence needed to support EBL.

  19. Google Sets, Google Suggest, and Google Search History: Three More Tools for the Reference Librarian's Bag of Tricks

    OpenAIRE

    Cirasella, Jill

    2008-01-01

    This article examines the features, quirks, and uses of Google Sets, Google Suggest, and Google Search History and argues that these three lesser-known Google tools warrant inclusion in the resourceful reference librarian’s bag of tricks.

  20. In search of the true universe the tools, shaping, and cost of cosmological thought

    CERN Document Server

    Harwit, Martin

    2013-01-01

    Astrophysicist and scholar Martin Harwit examines how our understanding of the cosmos advanced rapidly during the twentieth century and identifies the factors contributing to this progress. Astronomy, whose tools were largely imported from physics and engineering, benefited mid-century from the US policy of coupling basic research with practical national priorities. This strategy, initially developed for military and industrial purposes, provided astronomy with powerful tools yielding access - at virtually no cost - to radio, infrared, X-ray, and gamma-ray observations. Today, astronomers are investigating the new frontiers of dark matter and dark energy, critical to understanding the cosmos but of indeterminate socio-economic promise. Harwit addresses these current challenges in view of competing national priorities and proposes alternative new approaches in search of the true Universe. This is an engaging read for astrophysicists, policy makers, historians, and sociologists of science looking to learn and a...

  1. Using risk based tools in emergency response

    International Nuclear Information System (INIS)

    Dixon, B.W.; Ferns, K.G.

    1987-01-01

    Probabilistic Risk Assessment (PRA) techniques are used by the nuclear industry to model the potential response of a reactor subjected to unusual conditions. The knowledge contained in these models can aid emergency response decision making. This paper presents the requirements developed to date for a PRA-based emergency response support system. A brief discussion of published work provides background for a detailed description of recent developments. A rapid deep-assessment capability for specific portions of full plant models is presented. The program uses a screening rule base to control search-space expansion in a combinatorial algorithm

  2. Multi-objective Search-based Mobile Testing

    OpenAIRE

    Mao, K.

    2017-01-01

    Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis...

  3. Neurophysiological Based Methods of Guided Image Search

    National Research Council Canada - National Science Library

    Marchak, Frank

    2003-01-01

    We developed a model of visual feature detection, the Neuronal Synchrony Model, based on neurophysiological models of temporal neuronal processing, to improve the accuracy of automatic detection...

  4. Process-Based Quality (PBQ) Tools Development

    Energy Technology Data Exchange (ETDEWEB)

    Cummins, J.L.

    2001-12-03

    The objective of this effort is to benchmark the development of process-based quality tools for application in CAD (computer-aided design) model-based applications. The processes of interest are design, manufacturing, and quality process applications. A study was commissioned addressing the impact, current technologies, and known problem areas in the application of 3D MCAD (3-dimensional mechanical computer-aided design) models and model integrity on downstream manufacturing and quality processes. The downstream manufacturing and product quality processes are profoundly influenced by and dependent on model quality and modeling process integrity. The goal is to illustrate and expedite the modeling and downstream model-based technologies for available or conceptual methods and tools to achieve maximum economic advantage and advance process-based quality concepts.

  5. HDAPD: a web tool for searching the disease-associated protein structures

    Science.gov (United States)

    2010-01-01

    Background The protein structures of disease-associated proteins are important for proceeding with structure-based drug design against a particular disease. Up until now, protein structures have usually been searched through a PDB id or some sequence information. However, in the HDAPD database presented here the protein structure of a disease-associated protein can be directly searched through the associated disease name keyed in. Description The search in HDAPD can be easily initiated by keying some key words of a disease, protein name, protein type, or PDB id. The protein sequence can be presented in FASTA format and directly copied for a BLAST search. HDAPD is also interfaced with Jmol so that users can observe and operate a protein structure with Jmol. The gene ontological data such as cellular components, molecular functions, and biological processes are provided once a hyperlink to Gene Ontology (GO) is clicked. Further, HDAPD provides a link to the KEGG map such that where the protein is placed and its relationship with other proteins in a metabolic pathway can be found from the map. The latest literature, namely titles, journals, authors, and abstracts searched from PubMed for the protein, is also presented as a length-controllable list. Conclusions Since the HDAPD data content can be routinely updated through the PHP-MySQL web page built, the new database presented is useful for searching the structures of disease-associated proteins that may play important roles in the disease-development process, for performing structure-based drug design against the diseases. PMID:20158919

  6. Tool-Based Curricula and Visual Learning

    Directory of Open Access Journals (Sweden)

    Dragica Vasileska

    2013-12-01

    Full Text Available In the last twenty years nanotechnology has revolutionized the world of information theory, computers and other important disciplines, such as medicine, where it has contributed significantly to the creation of more sophisticated diagnostic tools. Therefore, it is important for people working in nanotechnology to better understand basic concepts to be more creative and productive. To further foster the progress of nanotechnology in the USA, the National Science Foundation has created the Network for Computational Nanotechnology (NCN), and the dissemination of all the information from member and non-member participants of the NCN is enabled by the community website www.nanoHUB.org. nanoHUB's signature service is online simulation, which enables the operation of sophisticated research and educational simulation engines with a common browser. No software installation or local computing power is needed. The simulation tools as well as nano-concepts are augmented by educational materials, assignments, and tool-based curricula, which are assemblies of tools that help students excel in a particular area. As elaborated later in the text, it is the visual mode of learning that we are exploiting in achieving faster and better results with students who go through simulation tool-based curricula. There are several tool-based curricula already developed on the nanoHUB and undergoing further development, out of which five are directly related to nanoelectronics. They are: ABACUS – device simulation module; ACUTE – Computational Electronics module; ANTSY – bending toolkit; and AQME – quantum mechanics module. The methodology behind tool-based curricula is discussed in detail. Then, the current status of each module is presented, including user statistics and student learning indicators. A particular simulation tool is explored further to demonstrate the ease with which students can grasp information. Representative of ABACUS is PN-Junction Lab; representative of AQME is PCPBT tool; and

  7. Content-based Music Search and Recommendation System

    Science.gov (United States)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost of finding music data suiting their preference in such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By exploiting the visualization of the feature space of music and the visualization of the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information which can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system gives the user information obtained from the user profile when searching music data, and information obtained from the feature space of music when editing the user profile.

  8. In Search of...Brain-Based Education.

    Science.gov (United States)

    Bruer, John T.

    1999-01-01

    Debunks two ideas appearing in brain-based education articles: the educational significance of brain laterality (right brain versus left brain) and claims for a sensitive period of brain development in young children. Brain-based education literature provides a popular but misleading mix of fact, misinterpretation, and fantasy. (47 references) (MLH)

  9. Internet-based tools for behaviour change

    Energy Technology Data Exchange (ETDEWEB)

    Bottrill, Catherine [Environmental Change Inst., Oxford Unversity Centre for the Environment (United Kingdom)

    2007-07-01

    Internet-based carbon calculators have the potential to be powerful tools for helping people to understand their personal energy use derived from fossil fuels and to take action to reduce the related carbon emissions. This paper reviews twenty-three calculators concluding that in most cases this environmental learning tool is falling short of giving people the ability to accurately monitor their energy use; to receive meaningful feedback and guidance for altering their energy use; or to connect with others also going through the same learning process of saving energy and conserving carbon. This paper presents the findings of research into the accuracy and effectiveness of carbon calculators. Based on the assessment of the calculators the paper discusses the opportunities Internet technology could be offering for engagement, communication, encouragement and guidance on low-carbon lifestyle choices. Finally, recommendations are made for the development of accurate, informative and social Internet-based carbon calculators.

  10. Update on CERN Search based on SharePoint 2013

    Science.gov (United States)

    Alvarez, E.; Fernandez, S.; Lossent, A.; Posada, I.; Silva, B.; Wagner, A.

    2017-10-01

    CERN’s enterprise Search solution “CERN Search” provides a central search solution for users and CERN service providers. A total of about 20 million public and protected documents from a wide range of document collections is indexed, including Indico, TWiki, Drupal, SharePoint, JACOW, E-group archives, EDMS, and CERN Web pages. In spring 2015, CERN Search was migrated to a new infrastructure based on SharePoint 2013. In the context of this upgrade, the document pre-processing and indexing process was redesigned and generalised. The new data feeding framework allows to profit from new functionality and it facilitates the long term maintenance of the system.

  11. Quantum signature scheme based on a quantum search algorithm

    International Nuclear Information System (INIS)

    Yoon, Chun Seok; Kang, Min Sung; Lim, Jong In; Yang, Hyung Jin

    2015-01-01

    We present a quantum signature scheme based on a two-qubit quantum search algorithm. For secure transmission of signatures, we use a quantum search algorithm that has not been used in previous quantum signature schemes. A two-step protocol secures the quantum channel, and a trusted center guarantees non-repudiation that is similar to other quantum signature schemes. We discuss the security of our protocol. (paper)
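
    The scheme builds on a two-qubit quantum search (Grover) algorithm, which finds a marked item among four basis states with certainty in a single iteration. A minimal numerical sketch of that search primitive (a state-vector simulation in numpy, not the signature protocol itself):

    ```python
    # Sketch: one Grover iteration on 2 qubits finds the marked state with certainty.
    # This simulates only the search primitive, not the signature protocol.
    import numpy as np

    marked = 2                      # index of the marked basis state |10>
    s = np.full(4, 0.5)             # uniform superposition over the 4 basis states

    oracle = np.eye(4)
    oracle[marked, marked] = -1     # oracle flips the sign of the marked amplitude

    diffusion = 2 * np.outer(s, s) - np.eye(4)   # inversion about the mean

    psi = diffusion @ (oracle @ s)
    print(np.round(np.abs(psi) ** 2, 6))  # -> [0. 0. 1. 0.]: measurement yields |10>
    ```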

  12. A search for new hot subdwarf stars by means of Virtual Observatory tools

    Science.gov (United States)

    Oreiro, R.; Rodríguez-López, C.; Solano, E.; Ulla, A.; Østensen, R.; García-Torres, M.

    2011-06-01

    Context. Recent massive sky surveys in different bandwidths are providing new opportunities to modern astronomy. The Virtual Observatory (VO) provides the adequate framework to handle the huge amount of information available and to filter out data according to specific requirements. Aims: Hot subdwarf stars are faint, blue objects, and are the main contributors to the far-UV excess observed in elliptical galaxies. They offer an excellent laboratory to study close and wide binary systems, and to scrutinize their interiors through asteroseismology, since some of them undergo stellar oscillations. However, their origins are still uncertain, and increasing the number of detections is crucial to undertake statistical studies. In this work, we aim at defining a strategy to find new, uncatalogued hot subdwarfs. Methods: Making use of VO tools we thoroughly search stellar catalogues to retrieve multi-colour photometry and astrometric information for a known sample of blue objects, including hot subdwarfs, white dwarfs, cataclysmic variables and main-sequence OB stars. We define a procedure to distinguish among these spectral classes, which is particularly designed to obtain a hot subdwarf sample with a low contamination factor. To check the validity of the method, this procedure is then applied to two test sky regions: to the Kepler FoV and to a test region of 300 deg² around (α:225, δ:5) deg. Results: As a result of the procedure we obtained 38 hot subdwarf candidates, 23 of which already had a spectral classification. We have acquired spectroscopy for three other targets, and four additional ones have an available SDSS spectrum, which we used to determine their spectral type. A temperature estimate is provided for the candidates based on their spectral energy distribution, considering a two-atmosphere fit for objects with a clear infrared excess as a signature of the presence of a cool companion. Eventually, out of 30 candidates with spectral classification, 26 objects were

  13. Surveillance Tools Emerging From Search Engines and Social Media Data for Determining Eye Disease Patterns.

    Science.gov (United States)

    Deiner, Michael S; Lietman, Thomas M; McLeod, Stephen D; Chodosh, James; Porco, Travis C

    2016-09-01

    Internet-based search engine and social media data may provide a novel complementary source for better understanding the epidemiologic factors of infectious eye diseases, which could better inform eye health care and disease prevention. To assess whether data from internet-based social media and search engines are associated with objective clinic-based diagnoses of conjunctivitis. Data from encounters of 4143 patients diagnosed with conjunctivitis from June 3, 2012, to April 26, 2014, at the University of California San Francisco (UCSF) Medical Center, were analyzed using Spearman rank correlation of each weekly observation to compare demographics and seasonality of nonallergic conjunctivitis with allergic conjunctivitis. Data for patient encounters with diagnoses for glaucoma and influenza were also obtained for the same period and compared with conjunctivitis. Temporal patterns of Twitter and Google web search data, geolocated to the United States and associated with these clinical diagnoses, were compared with the clinical encounters. The a priori hypothesis was that weekly internet-based searches and social media posts about conjunctivitis may reflect the true weekly clinical occurrence of conjunctivitis. Weekly total clinical diagnoses at UCSF of nonallergic conjunctivitis, allergic conjunctivitis, glaucoma, and influenza were compared using Spearman rank correlation with equivalent weekly data on Tweets related to disease or disease-related keyword searches obtained from Google Trends. Seasonality of clinical diagnoses of nonallergic conjunctivitis among the 4143 patients (2364 females [57.1%] and 1776 males [42.9%]) with 5816 conjunctivitis encounters at UCSF correlated strongly with results of Google searches in the United States for the term pink eye (ρ, 0.68 [95% CI, 0.52 to 0.78]; P < .001) and correlated moderately with Twitter results about pink eye (ρ, 0.38 [95% CI, 0.16 to 0.56]; P < .001) and with clinical diagnosis of influenza (ρ, 0

  14. Robust object tracking based on self-adaptive search area

    Science.gov (United States)

    Dong, Taihang; Zhong, Sheng

    2018-02-01

    Discriminative correlation filter (DCF) based trackers have recently achieved excellent performance with great computational efficiency. However, DCF based trackers suffer from boundary effects, which result in unstable performance in challenging situations exhibiting fast motion. In this paper, we propose a novel method to mitigate this side effect in DCF based trackers. We change the search area according to a prediction of the target's motion. When the object moves fast, a broad search area alleviates boundary effects and preserves the probability of locating the object. When the object moves slowly, a narrow search area prevents useless background information from interfering and improves computational efficiency, helping to attain real-time performance. This strategy can substantially reduce boundary effects in situations exhibiting fast motion and motion blur, and it can be used in almost all DCF based trackers. The experiments on the OTB benchmark show that the proposed framework improves performance compared with the baseline trackers.
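
    A minimal sketch of the core idea, scaling the search window with the predicted target speed; the constant-velocity predictor, thresholds and scale factors are invented for illustration and are not taken from the paper:

    ```python
    # Sketch: self-adaptive search area driven by a constant-velocity motion model.
    # Thresholds and scale factors are illustrative, not from the paper.
    import numpy as np

    def search_scale(prev_center, cur_center, base_scale=2.5,
                     slow_thresh=5.0, fast_thresh=20.0):
        """Return the search-area scale (relative to target size) for the next frame."""
        speed = np.linalg.norm(np.subtract(cur_center, prev_center))  # px/frame
        if speed < slow_thresh:      # slow target: shrink window, save computation
            return base_scale * 0.8
        if speed > fast_thresh:      # fast target: enlarge window, fight boundary effects
            return base_scale * 1.6
        return base_scale

    # Example: target jumped ~30 px between frames -> enlarged search area.
    print(search_scale(prev_center=(100, 100), cur_center=(121, 121)))  # 4.0
    ```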

  15. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information on the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely to be submerged in the ocean of search results from those tools. By automatically clustering the results into different groups based on subjects, a search engine with a clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate the snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results show that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.
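
    The pipeline's first step, LCS extraction, can be illustrated in a heavily simplified form: the sketch below uses difflib as a stand-in for the generalised suffix tree and greedily groups snippets that share a long common substring; the HowNet similarity and Chameleon clustering stages are omitted, and the snippets and length threshold are invented.

    ```python
    # Simplified sketch of the pipeline's first step: find longest common substrings
    # (LCSs) between snippet pairs and group snippets that share a long LCS.
    # difflib stands in for the generalised suffix tree used in the paper.
    from difflib import SequenceMatcher

    snippets = [
        "open source search engine for web documents",
        "a web search engine based on open source tools",
        "recipe for tomato soup with fresh basil",
    ]

    def lcs(a: str, b: str) -> str:
        m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
        return a[m.a:m.a + m.size]

    # Greedy grouping: put two snippets in one cluster if their LCS is long enough.
    clusters: list[list[int]] = []
    for i, s in enumerate(snippets):
        for cluster in clusters:
            if len(lcs(s, snippets[cluster[0]]).strip()) >= 10:
                cluster.append(i)
                break
        else:
            clusters.append([i])

    print(clusters)  # -> [[0, 1], [2]]: the two search-engine snippets group together
    ```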

  16. Software tools for microprocessor based systems

    International Nuclear Information System (INIS)

    Halatsis, C.

    1981-01-01

    After a short review of the hardware and/or software tools for the development of single-chip, fixed-instruction-set microprocessor-based systems, we focus on the software tools for designing systems based on microprogrammed bit-sliced microprocessors. Emphasis is placed on meta-microassemblers and simulation facilities at the register-transfer level and architecture level. We review available meta-microassemblers, giving their most important features, advantages and disadvantages. We also cover extensions to higher-level microprogramming languages and associated systems specifically developed for bit-slices. In the area of simulation facilities we first discuss the simulation objectives and the criteria for choosing the right simulation language. We concentrate on simulation facilities already used in bit-slice projects and discuss the experience gained. We conclude by describing the way the Signetics meta-microassembler and the ISPS simulation tool have been employed in the design of a fast microprogrammed machine, called MICE, made out of ECL bit-slices. (orig.)

  17. IPeak: An open source tool to combine results from multiple MS/MS search engines.

    Science.gov (United States)

    Wen, Bo; Du, Chaoqin; Li, Guilin; Ghali, Fawaz; Jones, Andrew R; Käll, Lukas; Xu, Shaohang; Zhou, Ruo; Ren, Zhe; Feng, Qiang; Xu, Xun; Wang, Jun

    2015-09-01

    Liquid chromatography coupled tandem mass spectrometry (LC-MS/MS) is an important technique for detecting peptides in proteomics studies. Here, we present an open source software tool, termed IPeak, a peptide identification pipeline that is designed to combine the Percolator post-processing algorithm and multi-search strategy to enhance the sensitivity of peptide identifications without compromising accuracy. IPeak provides a graphical user interface (GUI) as well as a command-line interface, which is implemented in JAVA and can work on all three major operating system platforms: Windows, Linux/Unix and OS X. IPeak has been designed to work with the mzIdentML standard from the Proteomics Standards Initiative (PSI) as an input and output, and also been fully integrated into the associated mzidLibrary project, providing access to the overall pipeline, as well as modules for calling Percolator on individual search engine result files. The integration thus enables IPeak (and Percolator) to be used in conjunction with any software packages implementing the mzIdentML data standard. IPeak is freely available and can be downloaded under an Apache 2.0 license at https://code.google.com/p/mzidentml-lib/. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A novel line segment detection algorithm based on graph search

    Science.gov (United States)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To address the problem of extracting line segments from an image, a method of line segment detection based on a graph search algorithm is proposed. After obtaining the edge detection result of the image, candidate straight line segments are obtained in four directions. For the candidate straight line segments, their adjacency relationships are depicted by a graph model, based on which a depth-first search algorithm is employed to determine how many adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. The comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
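
    A minimal sketch of the merge-and-fit stage: depth-first search over a segment adjacency graph collects chains of adjacent candidate segments, and each chain's points are fitted with a least-squares line. The adjacency graph and point data are invented toy inputs; the paper's candidate-extraction stage is not reproduced.

    ```python
    # Sketch: DFS over a segment-adjacency graph, then least-squares line fitting.
    # The adjacency graph and segment points are toy inputs for illustration.
    import numpy as np

    # Candidate segments as point arrays (x, y); segments 0 and 1 are collinear.
    segments = {
        0: np.array([[0, 0], [1, 1], [2, 2]], float),
        1: np.array([[3, 3], [4, 4]], float),
        2: np.array([[0, 5], [1, 5]], float),
    }
    adjacency = {0: [1], 1: [0], 2: []}   # which segments touch / align

    def dfs_chains(adjacency):
        """Group segment ids into chains of mutually adjacent segments."""
        seen, chains = set(), []
        for start in adjacency:
            if start in seen:
                continue
            stack, chain = [start], []
            while stack:
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                chain.append(node)
                stack.extend(adjacency[node])
            chains.append(chain)
        return chains

    for chain in dfs_chains(adjacency):
        pts = np.vstack([segments[i] for i in chain])
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)  # least-squares fit
        print(chain, f"y = {slope:.2f}x + {intercept:.2f}")
    ```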

  19. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    Science.gov (United States)

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them eventually become available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed Nearest Neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is the contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny based distance metric, while providing numerous advantages over the current state of the art tool.
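
    The engine's distance computation combines weighted UniFrac with an Earth Mover Distance (EMD) formulation. As a heavily simplified stand-in, the sketch below computes a 1-D EMD between two abundance profiles placed on a linear coordinate with scipy's wasserstein_distance; real weighted UniFrac works over a phylogenetic tree, which this toy example does not model, and all numbers are invented.

    ```python
    # Toy stand-in for an EMD-based sample distance: taxa are placed on a 1-D
    # coordinate (a crude proxy for phylogenetic position) and relative abundances
    # act as the mass to be moved. Real weighted UniFrac uses a tree, not a line.
    from scipy.stats import wasserstein_distance

    taxa_positions = [0.0, 1.0, 2.0, 5.0]   # hypothetical phylogenetic coordinates
    sample_a = [0.70, 0.20, 0.10, 0.00]     # relative abundances, sample A
    sample_b = [0.05, 0.15, 0.30, 0.50]     # relative abundances, sample B

    d = wasserstein_distance(taxa_positions, taxa_positions,
                             u_weights=sample_a, v_weights=sample_b)
    print(f"EMD-based distance: {d:.3f}")
    ```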

  20. Towards the Development of Web-based Business intelligence Tools

    DEFF Research Database (Denmark)

    Georgiev, Lachezar; Tanev, Stoyan

    2011-01-01

    This paper focuses on using web search techniques in examining the co-creation strategies of technology driven firms. It does not focus on the co-creation results but describes the implementation of a software tool using data mining techniques to analyze the content on firms’ websites. The tool...

  1. Web-based drug repurposing tools: a survey.

    Science.gov (United States)

    Sam, Elizabeth; Athri, Prashanth

    2017-10-06

    Drug repurposing (a.k.a. drug repositioning) is the search for new indications or molecular targets distinct from a drug's putative activity, pharmacological effect or binding specificities. With the ever-increasing rates of termination of drugs in clinical trials, drug repositioning has risen as one of the effective solutions against the risk of drug failures. Repositioning finds a way to reverse the grim but real trend that Eroom's law portends for the pharmaceutical and biotech industry, and drug discovery in general. Further, the advent of high-throughput technologies to explore biological systems has enabled the generation of zettabytes of data and a massive collection of databases that store them. Computational analytics and mining are frequently used as effective tools to explore this byzantine series of biological and biomedical data. However, advanced computational tools are often difficult to understand or use, thereby limiting their accessibility to scientists without a strong computational background. Hence it is of great importance to build user-friendly interfaces to extend the user-base beyond computational scientists, to include life scientists who may have deeper chemical and biological insights. This survey is focused on systematically presenting the available Web-based tools that aid in repositioning drugs. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Generating "fragment-based virtual library" using pocket similarity search of ligand-receptor complexes.

    Science.gov (United States)

    Khashan, Raed S

    2015-01-01

    As the number of available ligand-receptor complexes is increasing, researchers are becoming more dedicated to mining these complexes to aid the drug design and development process. We present free software developed as a tool for performing similarity searches across ligand-receptor complexes to identify binding pockets that are similar to that of a target receptor. The search is based on the 3D-geometric and chemical similarity of the atoms forming the binding pocket. For each match identified, the ligand fragment(s) corresponding to that binding pocket are extracted, thus forming a virtual library of fragments (FragVLib) that is useful for structure-based drug design.

  3. Differential Search Coils Based Magnetometers: Conditioning, Magnetic Sensitivity, Spatial Resolution

    Directory of Open Access Journals (Sweden)

    Timofeeva Maria

    2012-03-01

    Full Text Available A theoretical and experimental comparison of optimized search-coil-based magnetometers, operating either in the Flux mode or in the classical Lenz-Faraday mode, is presented. The improvements provided by the Flux mode in terms of bandwidth and measuring range of the sensor are detailed. Theory, SPICE model and measurements are in good agreement. The spatial resolution of the sensor, an important parameter for applications in non-destructive evaluation, is studied. A general expression for the magnetic sensitivity of search-coil sensors is derived. Solutions are proposed to design magnetometers with reduced weight and volume without degrading the magnetic sensitivity. An original differential search-coil-based magnetometer, made of coupled coils, operating in Flux mode and connected to a differential transimpedance amplifier is proposed. It is shown that this structure is better in terms of volume occupancy than magnetometers using two separated coils, without any degradation in magnetic sensitivity. Experimental results are in good agreement with calculations.

  4. Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation

    International Nuclear Information System (INIS)

    Gao, Y; Guo, Y; Chi, Y L; Qin, S R

    2006-01-01

    Order tracking plays an important role in non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order tracking method for rotating machinery is introduced, in which a peak search algorithm applied to the spectrogram of a time-frequency analysis is employed to obtain the IFE of the vibrations. An improvement to the peak search is proposed, which prevents strong non-order components or noise from disturbing the peak search. Compared with traditional methods of order tracking, IFE based order tracking is simpler to apply and depends only on software. Tests verify the validity of the method. This method is an effective supplement to traditional methods, and its application in condition monitoring and diagnosis of rotating machinery is imaginable
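
    A minimal sketch of peak-search IF estimation on a spectrogram, with the simple robustness trick of restricting each frame's search to a band around the previous peak so that unrelated strong components cannot capture the track; the chirp test signal, band width and all parameters are invented for illustration and are not the paper's algorithm:

    ```python
    # Sketch: instantaneous-frequency estimation by constrained peak search on a
    # spectrogram. The band constraint keeps the track from jumping to strong
    # non-order components. Signal and parameter values are illustrative.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 2000.0
    t = np.arange(0, 4, 1 / fs)
    x = np.sin(2 * np.pi * (20 * t + 10 * t**2 / 2))   # run-up: 20 Hz -> 60 Hz chirp
    x += 0.8 * np.sin(2 * np.pi * 250 * t)             # strong unrelated component

    f, tt, S = spectrogram(x, fs=fs, nperseg=512, noverlap=448)

    band = 10.0                             # max allowed jump per frame (Hz)
    prev = 20.0                             # approximate initial order frequency
    track = []
    for k in range(S.shape[1]):
        mask = np.abs(f - prev) <= band     # search only near the previous peak
        idx = np.flatnonzero(mask)
        prev = f[idx[np.argmax(S[idx, k])]]
        track.append(prev)

    print(track[:5], "...", track[-5:])     # rising IF estimate of the order
    ```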

  5. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

    Full Text Available Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.
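
    As a hypothetical sketch of search-based test generation, the genetic-algorithm loop below evolves test cases (an initial condition plus a fault time) toward a higher "degree of challenge"; the fitness function is a toy stand-in for the closed-loop simulation score used in the paper, and all constants are invented.

    ```python
    # Sketch: GA-style generation of challenging test cases. A test case is an
    # initial condition plus a fault time; the fitness function is a toy stand-in
    # for the closed-loop simulation score.
    import random

    random.seed(1)

    def fitness(case):
        x0, fault_t = case
        # Toy "degree of challenge": faults near t=2.0 with extreme x0 are hardest.
        return abs(x0) - (fault_t - 2.0) ** 2

    def random_case():
        return (random.uniform(-5, 5), random.uniform(0, 10))

    def mutate(case, sigma=0.5):
        x0, t = case
        return (min(5, max(-5, x0 + random.gauss(0, sigma))),
                min(10, max(0, t + random.gauss(0, sigma))))

    pop = [random_case() for _ in range(30)]
    for _ in range(50):                       # evolve toward harder test cases
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                    # truncation selection
        pop = parents + [mutate(random.choice(parents)) for _ in range(20)]

    best = max(pop, key=fitness)
    print(f"hardest case: x0={best[0]:.2f}, fault at t={best[1]:.2f}")
    ```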

  6. A new generation of tools for search, recovery and quality evaluation of World Wide Web medical resources.

    Science.gov (United States)

    Aguillo, I

    2000-01-01

    Although the Internet is already a valuable information resource in medicine, there are important challenges to be faced before physicians and general users will have extensive access to this information. As a result of a research effort to compile a health-related Internet directory, new tools and strategies have been developed to solve key problems derived from the explosive growth of medical information on the Net and the great concern over the quality of such critical information. The current Internet search engines lack some important capabilities. We suggest using second generation tools (client-side based) able to deal with large quantities of data and to increase the usability of the records recovered. We tested the capabilities of these programs to solve health-related information problems, recognising six groups according to the kind of topics addressed: Z39.50 clients, downloaders, multisearchers, tracing agents, indexers and mappers. The evaluation of the quality of health information available on the Internet could require a large amount of human effort. A possible solution may be to use quantitative indicators based on the hypertext visibility of the Web sites. The cybermetric measures are valid for quality evaluation if they are derived from indirect peer review by experts with Web pages citing the site. The hypertext links acting as citations need to be extracted from a controlled sample of quality super-sites.

  7. Can social tagged images aid concept-based video search?

    NARCIS (Netherlands)

    Setz, A.T.; Snoek, C.G.M.

    2009-01-01

    This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We

  8. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  9. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  10. Constraint-based local search for container stowage slot planning

    DEFF Research Database (Denmark)

    Pacino, Dario; Jensen, Rune Møller; Bebbington, Tom

    2012-01-01

    -sea vessels. This paper describes the constraint-based local search algorithm used in the second phase of this approach, where individual containers are assigned to slots in each bay section. The algorithm can solve this problem in an average of 0.18 seconds per bay, corresponding to a 20-second runtime...

  11. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    Directory of Open Access Journals (Sweden)

    Maryam Akbari

    2012-10-01

    Full Text Available The present study analyzed the search engine and meta search engine adoption process by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potentials and effective tools in search engine and meta search engine adoption among University of Isfahan users. The research method was a descriptive survey study. The cases of the study were all of the postgraduate students of the University of Isfahan; 351 students were selected as the sample and categorized by a stratified random sampling method. A questionnaire was used for collecting data. The collected data were analyzed using SPSS 16 with both descriptive and analytic statistics. For descriptive statistics, frequency, percentage and mean were used, while for analytic statistics the t-test and the Kruskal-Wallis non-parametric test (H-test) were used. The findings of the t-test and Kruskal-Wallis test indicated that the mean of search engine and meta search engine adoption did not show statistical differences in terms of gender, level of education or faculty. The special search engine adoption process was different in terms of gender but not in terms of the level of education or the faculty. Other results of the research indicated that among general search engines, Google had the highest adoption rate. In addition, among the special search engines Google Scholar, and among the meta search engines Mamma, had the highest adoption rate. Findings also showed that friends played an important role in how students adopted general search engines, while professors played an important role in how students adopted special search engines and meta search engines. Moreover, results showed that the place where students got the most acquaintance with search engines and meta search engines was the university. The findings showed that the curve of the adoption rate was not normal and was not S-shaped. Moreover

  12. Computer-Assisted Search Of Large Textual Data Bases

    Science.gov (United States)

    Driscoll, James R.

    1995-01-01

    "QA" denotes high-speed computer system for searching diverse collections of documents including (but not limited to) technical reference manuals, legal documents, medical documents, news releases, and patents. Incorporates previously available and emerging information-retrieval technology to help user intelligently and rapidly locate information found in large textual data bases. Technology includes provision for inquiries in natural language; statistical ranking of retrieved information; artificial-intelligence implementation of semantics, in which "surface level" knowledge found in text used to improve ranking of retrieved information; and relevance feedback, in which user's judgements of relevance of some retrieved documents used automatically to modify search for further information.

  13. XSemantic: An Extension of LCA Based XML Semantic Search

    Science.gov (United States)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

    One of the most convenient ways to query XML data is a keyword search because it does not require any knowledge of the XML structure or learning a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords called XSemantic. On the one hand, we give three definitions to complete the semantics. First, with semantic term expansion, our system is robust against ambiguous keywords by using the domain ontology. Second, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Third, we present semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that the search results with the higher relevance are presented to the users first. On the other hand, as in the LCA and the proximity search approaches, we investigated the problem of the information included in the search results. Therefore, we introduce the notion of the Lowest Common Element Ancestor (LCEA) and define our simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.

  14. THE METHOD OF APPLICATION OF A COLLECTIVE SEARCH ACTIVITY AS A TOOL DEVELOPING METHODOLOGICAL THINKING OF A TEACHER

    Directory of Open Access Journals (Sweden)

    Ibragimova Luiza Vahaevna

    2013-02-01

    Full Text Available To put any pedagogical theory into practice it is necessary to transform its theoretical concepts into teaching methods. The development of all abilities, including thinking, occurs only in activity, which is specially organized by creating the required pedagogical conditions; in this case they are (a) the application of enhanced mental activity in teacher training courses and vocational training, (b) the establishment of a "virtual university" for teachers in an institute of professional training, and (c) the organization of interdisciplinary interaction of teachers, based on the conditions of nonlinear didactics (training teachers of different subjects). The presented method is implemented over two years and consists of three phases: motivational and educational, intellectual and developmental, and innovative and reflective. At the motivational and educational stage, the possibilities of collective search activity are actualized during the training course, group goals are set and methods of achieving them are chosen using the first pedagogical condition. At the intellectual and developmental stage, the development of skills for the collective search for effective teaching decisions is carried out during training under the first and second pedagogical conditions. The innovative step is the promotion of teachers to self-determination of techniques and tools that improve the quality of the educational process, providing assistance to each other in the development of teaching manuals, which is achieved with the help of all three pedagogical conditions.

  15. THE METHOD OF APPLICATION OF A COLLECTIVE SEARCH ACTIVITY AS A TOOL DEVELOPING METHODOLOGICAL THINKING OF A TEACHER

    Directory of Open Access Journals (Sweden)

    Луиза Вахаевна Ибрагимова

    2013-04-01

    Full Text Available To put any pedagogical theory into practice it is necessary to transform its theoretical concepts into teaching methods. The development of all abilities, including thinking, occurs only in activity, which is specially organized by creating the required pedagogical conditions; in this case they are (a) the application of enhanced mental activity in teacher training courses and vocational training, (b) the establishment of a "virtual university" for teachers in an institute of professional training, and (c) the organization of interdisciplinary interaction of teachers, based on the conditions of nonlinear didactics (training teachers of different subjects). The presented method is implemented over two years and consists of three phases: motivational and educational, intellectual and developmental, and innovative and reflective. At the motivational and educational stage, the possibilities of collective search activity are actualized during the training course, group goals are set and methods of achieving them are chosen using the first pedagogical condition. At the intellectual and developmental stage, the development of skills for the collective search for effective teaching decisions is carried out during training under the first and second pedagogical conditions. The innovative step is the promotion of teachers to self-determination of techniques and tools that improve the quality of the educational process, providing assistance to each other in the development of teaching manuals, which is achieved with the help of all three pedagogical conditions. DOI: http://dx.doi.org/10.12731/2218-7405-2013-2-17

  16. Object-based target templates guide attention during visual search.

    Science.gov (United States)

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

    Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whereas their quality is evaluated considering the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
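
    A bare-bones harmony search loop, shown as a sketch: harmony memory consideration, pitch adjustment and random improvisation, with replacement of the worst harmony. The objective here is a stand-in sphere function rather than the paper's Otsu/Kapur criterion over threshold vectors, and all parameter values are illustrative.

    ```python
    # Sketch: a bare-bones harmony search loop. The objective is a stand-in
    # (sphere function); the paper instead evaluates Otsu's or Kapur's criterion
    # over candidate threshold vectors drawn from the image histogram.
    import numpy as np

    rng = np.random.default_rng(0)

    def harmony_search(objective, dim, low, high, hms=10, hmcr=0.9, par=0.3,
                       bw=0.05, iters=2000):
        memory = rng.uniform(low, high, size=(hms, dim))   # harmony memory
        costs = np.apply_along_axis(objective, 1, memory)
        for _ in range(iters):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                    # consider harmony memory
                    new[j] = memory[rng.integers(hms), j]
                    if rng.random() < par:                 # pitch adjustment
                        new[j] += bw * rng.uniform(-1, 1) * (high - low)
                else:                                      # random improvisation
                    new[j] = rng.uniform(low, high)
            new = np.clip(new, low, high)
            worst = np.argmax(costs)
            c = objective(new)
            if c < costs[worst]:                           # replace worst harmony
                memory[worst], costs[worst] = new, c
        return memory[np.argmin(costs)], costs.min()

    best, cost = harmony_search(lambda x: np.sum((x - 3.0) ** 2), dim=3,
                                low=0.0, high=10.0)
    print(np.round(best, 2), round(float(cost), 6))   # ~[3. 3. 3.], ~0
    ```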

  18. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    Science.gov (United States)

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts.

  19. Architecture for knowledge-based and federated search of online clinical evidence.

    Science.gov (United States)

    Coiera, Enrico; Walther, Martin; Nguyen, Ken; Lovell, Nigel H

    2005-10-24

    It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federated search system design, including the use of knowledge-based, meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture utilized a universal query language to reformulate queries internally and utilized meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Clinicians performed 1662 searches over the trial. The average search duration was 4.9 +/- 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (i.e., system overhead) was 0.12 s. The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability. Furthermore, despite the additional effort required to incorporate the
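    A minimal sketch, not the Quick Clinical implementation, of the wrapper-mediator pattern with per-source timing: each wrapper reformulates a universal query into source-specific syntax, sources are queried in parallel, and per-source durations are recorded. The two sources and their query syntaxes are hypothetical.

        import time
        from concurrent.futures import ThreadPoolExecutor

        def make_wrapper(name, reformulate, fetch):
            def wrapper(query):
                start = time.perf_counter()
                results = fetch(reformulate(query))        # source-specific syntax
                elapsed = time.perf_counter() - start
                return name, elapsed, results
            return wrapper

        def federated_search(query, wrappers):
            t0 = time.perf_counter()
            with ThreadPoolExecutor(max_workers=len(wrappers)) as pool:
                per_source = list(pool.map(lambda w: w(query), wrappers))
            # overhead = total wall time minus the slowest source's duration
            overhead = time.perf_counter() - t0 - max(t for _, t, _ in per_source)
            return per_source, overhead

        # Hypothetical sources: a guideline store and a PubMed-like index.
        wrappers = [
            make_wrapper("guidelines", lambda q: q.lower(), lambda q: [f"GL hit for {q}"]),
            make_wrapper("pubmed", lambda q: q + "[tiab]", lambda q: [f"PM hit for {q}"]),
        ]
        results, overhead = federated_search("asthma therapy", wrappers)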

  20. Transmission network expansion planning based on hybridization model of neural networks and harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ameli

    2012-01-01

    Full Text Available Transmission Network Expansion Planning (TNEP) is a basic part of power network planning that determines where, when and how many new transmission lines should be added to the network. TNEP is thus an optimization problem in which the expansion objectives are optimized. Artificial Intelligence (AI) tools such as Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS) and Artificial Neural Networks (ANNs) are methods used for solving the TNEP problem. Today, hybridization models of AI tools make it possible to solve the TNEP problem for large-scale systems, which shows the effectiveness of such models. In this paper, a new approach based on the hybridization of Probabilistic Neural Networks (PNNs) and the Harmony Search Algorithm (HSA) was used to solve the TNEP problem. Finally, accounting for load uncertainty through a scenario technique, the proposed model was tested on Garver's 6-bus network.

  1. COREMIC: a web-tool to search for a niche associated CORE MICrobiome

    Directory of Open Access Journals (Sweden)

    Richard R. Rodrigues

    2018-02-01

    Full Text Available Microbial diversity on earth is extraordinary, and soils alone harbor thousands of species per gram of soil. Understanding how this diversity is sorted and selected into habitat niches is a major focus of ecology and biotechnology, but remains only vaguely understood. A systems-biology approach was used to mine information from databases to show how it can be used to answer questions related to the core microbiome of habitat-microbe relationships. By making use of the burgeoning growth of information from databases, our tool "COREMIC" meets a great need in the search for understanding niche partitioning and habitat-function relationships. The work is unique, furthermore, because it provides a user-friendly, statistically robust web-tool (http://coremic2.appspot.com or http://core-mic.com), developed using Google App Engine, to help in the process of database mining to identify the "core microbiome" associated with a given habitat. A case study is presented using data from 31 switchgrass rhizosphere community habitats across a diverse set of soil and sampling environments. The methodology utilizes an outgroup of 28 non-switchgrass habitats (other grasses and forbs) to identify a core switchgrass microbiome. Even across a diverse set of soils (five environments) and under conservative statistical criteria (presence in more than 90% of samples and FDR q-value < 0.05 for Fisher's exact test), a core set of bacteria associated with switchgrass was observed. These included, among others, closely related taxa from Lysobacter spp., Mesorhizobium spp., and Chitinophagaceae. These bacteria have been shown to have functions related to the production of bacterial and fungal antibiotics and plant growth promotion. COREMIC can be used as a hypothesis generating or confirmatory tool that shows great potential for identifying taxa that may be important to the functioning of a habitat (e.g., a host plant). The case study, in conclusion, shows that COREMIC can identify key habitat
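    A minimal sketch, with illustrative inputs, of COREMIC-style screening: each taxon's presence in target versus outgroup samples is tested with Fisher's exact test (SciPy), p-values are Benjamini-Hochberg adjusted, and a taxon must also be present in more than 90% of target samples.

        import numpy as np
        from scipy.stats import fisher_exact

        def core_taxa(target, outgroup, presence_cutoff=0.90, q_cutoff=0.05):
            # target, outgroup: binary numpy matrices, samples x taxa (1 = present)
            n_t, n_o = target.shape[0], outgroup.shape[0]
            pvals, keep = [], []
            for j in range(target.shape[1]):
                a = int(target[:, j].sum())          # present in target samples
                c = int(outgroup[:, j].sum())        # present in outgroup samples
                _, p = fisher_exact([[a, n_t - a], [c, n_o - c]], alternative="greater")
                pvals.append(p)
                keep.append(a / n_t > presence_cutoff)
            # Benjamini-Hochberg adjustment, from largest p downward
            p = np.asarray(pvals)
            order = np.argsort(p)
            q = np.empty_like(p)
            m = len(p)
            prev = 1.0
            for rank, idx in enumerate(order[::-1]):
                prev = min(prev, p[idx] * m / (m - rank))
                q[idx] = prev
            return [j for j in range(m) if keep[j] and q[j] < q_cutoff]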

  2. Supporting inter-topic entity search for biomedical Linked Data based on heterogeneous relationships.

    Science.gov (United States)

    Zong, Nansu; Lee, Sungin; Ahn, Jinhyun; Kim, Hong-Gee

    2017-08-01

    The keyword-based entity search restricts the search space based on the preference of the search. When the given keywords and preferences are not related to the same biomedical topic, existing biomedical Linked Data search engines fail to deliver satisfactory results. This research aims to tackle this issue by supporting inter-topic search: improving search when the inputs, keywords and preferences, fall under different topics. This study developed an effective algorithm in which the relations between biomedical entities are used in tandem with a keyword-based entity search engine, Siren. The algorithm, PERank, an adaptation of Personalized PageRank (PPR), takes a pair of inputs, (1) search preferences and (2) entities from a keyword-based entity search with a keyword query, and formalizes the search results on the fly based on an index of precomputed Individual Personalized PageRank Vectors (IPPVs). Our experiments were performed over ten linked life-science datasets for two query sets, one with keyword-preference topic correspondence (intra-topic search) and the other without (inter-topic search). The experiments showed that the proposed method achieved better search results, for example a 14% increase in precision for the inter-topic search over the baseline keyword-based search engine. The proposed method improved keyword-based biomedical entity search by supporting inter-topic search without affecting intra-topic search, based on the relations between different entities. Copyright © 2017 Elsevier Ltd. All rights reserved.
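    A minimal sketch of the Personalized PageRank idea underlying PERank (not the PERank code itself): the teleport distribution is concentrated on the preference entities, so scores reflect graph proximity to the preferred topic. The entity graph is a toy example.

        def personalized_pagerank(graph, prefs, alpha=0.85, iters=100, tol=1e-10):
            # graph: dict node -> list of neighbour nodes; prefs: preferred nodes
            nodes = list(graph)
            restart = {n: (1.0 / len(prefs) if n in prefs else 0.0) for n in nodes}
            rank = dict(restart)
            for _ in range(iters):
                nxt = {n: (1 - alpha) * restart[n] for n in nodes}
                for n in nodes:
                    out = graph[n]
                    if not out:                      # dangling node: teleport
                        for m in nodes:
                            nxt[m] += alpha * rank[n] * restart[m]
                    else:
                        share = alpha * rank[n] / len(out)
                        for m in out:
                            nxt[m] += share
                if sum(abs(nxt[n] - rank[n]) for n in nodes) < tol:
                    rank = nxt
                    break
                rank = nxt
            return rank

        # Toy biomedical entity graph: drug, gene and symptom nodes.
        g = {"aspirin": ["ptgs1"], "ptgs1": ["aspirin", "pain"], "pain": ["ptgs1"]}
        scores = personalized_pagerank(g, prefs={"pain"})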

  3. Livestock associated epidemiological information profiling in New Sandwip Island (Jahajerchar of the Meghna estuary, Noakhali using participatory disease searching tool

    Directory of Open Access Journals (Sweden)

    SK Shaheenur Islam

    2017-09-01

    Conclusion: The area has more potential for sheep and buffalo raising than for cattle. The study validated the value of adopting a participatory disease-searching tool to capture voluntarily submitted epidemiological data, towards establishing a cost-effective, unique national disease surveillance system in Bangladesh. [J Adv Vet Anim Res 2017; 4(3): 267-273]

  4. Search-based model identification of smart-structure damage

    Science.gov (United States)

    Glass, B. J.; Macalou, A.

    1991-01-01

    This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart structure example is representative of a class of structures commonly found in aerospace systems and next generation space structures. Artificial intelligence techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input. Newly encountered models can be incorporated into the model space. This adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of the AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.
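    A minimal sketch, with hypothetical children and mismatch functions, of the best-first model search described above: candidate models are expanded in order of how closely their stored responses match the measured response.

        import heapq

        def best_first_identify(root, children, mismatch, budget=1000):
            # children(model) -> list of refined candidate models (hashable)
            # mismatch(model) -> error between stored and measured modal response
            frontier = [(mismatch(root), 0, root)]   # (score, tiebreak, model)
            best, tick = frontier[0], 0
            seen = {root}
            while frontier and budget > 0:
                score, _, model = heapq.heappop(frontier)
                budget -= 1
                if score < best[0]:
                    best = (score, 0, model)
                for child in children(model):        # expand most promising model
                    if child not in seen:
                        seen.add(child)
                        tick += 1
                        heapq.heappush(frontier, (mismatch(child), tick, child))
            return best[2], best[0]                  # identified model, output error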

  5. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  6. Development of a Google-based search engine for data mining radiology reports.

    Science.gov (United States)

    Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul

    2009-08-01

    The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. A total of 2.9 million text reports, comprising 7.9 GB of disk space, were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.

  7. Artificial Intelligence Based Selection of Optimal Cutting Tool and Process Parameters for Effective Turning and Milling Operations

    Science.gov (United States)

    Saranya, Kunaparaju; John Rozario Jegaraj, J.; Ramesh Kumar, Katta; Venkateshwara Rao, Ghanta

    2016-06-01

    With the increased trend toward automation in modern manufacturing industry, human intervention in routine, repetitive and data-specific activities is greatly reduced. In this paper, an attempt has been made to reduce human intervention in the selection of the optimal cutting tool and process parameters for metal cutting applications, using Artificial Intelligence techniques. Generally, the selection of an appropriate cutting tool and parameters in metal cutting is carried out by an experienced technician or cutting-tool expert, based on their knowledge or an extensive search of a huge cutting-tool database. The proposed approach replaces the existing practice of physically searching for tools in databooks/tool catalogues with an intelligent knowledge-based selection system. This system employs artificial-intelligence-based techniques such as artificial neural networks, fuzzy logic and genetic algorithms for decision making and optimization. This intelligence-based optimal tool selection strategy was developed and implemented using Mathworks Matlab Version 7.11.0. The cutting-tool database was obtained from the tool catalogues of different tool manufacturers. This paper discusses in detail the methodology and strategies employed for selection of the appropriate cutting tool and optimization of process parameters based on multi-objective optimization criteria considering material removal rate, tool life and tool cost.

  8. A web-based endodontic case difficulty assessment tool.

    Science.gov (United States)

    Shah, P K; Chong, B S

    2018-01-25

    To develop a web-based tool to facilitate identification, evaluation and management of teeth requiring endodontic treatment. Following a literature search and thorough analysis of existing case difficulty assessment forms, the web-based tool was developed using an online survey builder (Qualtrics, Qualtrics Lab, UT, USA). Following feedback from a pilot study, it was refined and improved. A study was performed, using the updated version (EndoApp) on a cohort (n = 53) of dental professionals and dental students. The participants were e-mailed instructions detailing the assessment of five test cases using EndoApp, followed by completion of a structured feedback form. Analysis of the EndoApp responses was used to evaluate usage times, whereas the results of the feedback forms were used to assess user experience and relevance, other potential applications and comments on further improvement/s. The average usage time was 2 min 7 s; the average times needed for the last three (Cases 3-5) were significantly less than the preceding two (Cases 1 & 2) test cases. An overwhelming majority of participants expressed favourable views on user experience and relevance of the web-based case difficulty assessment tool. Only two participants (4%) were unlikely or very unlikely to use EndoApp again. The potential application of EndoApp as an 'educational tool' and for 'primary care triage' was deemed the most popular features and of greater importance than the secondary options of 'fee setting' and as a 'dento-legal justification tool'. Within the study limitations, owing to its ability to quantify the level of difficulty and provide guidance, EndoApp was considered user-friendly and helped facilitate endodontic case difficulty assessment. From the feedback, further improvements and the development of a Smartphone App version are in progress. EndoApp may facilitate treatment planning, improve treatment cost-effectiveness and reduce frequency of procedural errors by providing

  9. Web-based information search and retrieval: effects of strategy use and age on search success.

    Science.gov (United States)

    Stronge, Aideen J; Rogers, Wendy A; Fisk, Arthur D

    2006-01-01

    The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

  10. Are Job Search Programs a Promising Tool? : A Microeconometric Evaluation for Austria

    OpenAIRE

    Weber, Andrea Maria; Hofer, Helmut

    2004-01-01

    In Austria, job search programs were introduced on a large scale in 1999. These programs aim at activating unemployed individuals at an early stage and bringing them back to work by training job-search-related skills. We evaluate the impact of active labour market programs in Austria on individual unemployment durations, allowing program effects to vary between job search programs and formal training programs. We use the timing-of-events method, which estimates the program effect as a shift in the transit...

  11. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    Science.gov (United States)

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective searching of genomes for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum-valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that applying the alignment algorithm to genome search yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based search tool is free upon request and can be downloaded at http://www.uga.edu/RNA-informatics/software/index.php.

  12. SHOP: scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Linusson, Anna; Zamora, Ismael

    2007-01-01

    A new GRID-based method for scaffold hopping (SHOP) is presented. In a fully automatic manner, scaffolds were identified in a database based on three types of 3D-descriptors. SHOP's ability to recover scaffolds was assessed and validated by searching a database spiked with fragments of known [...] scaffolds were in the 31 top-ranked scaffolds. SHOP also identified new scaffolds with substantially different chemotypes from the queries. Docking analysis indicated that the new scaffolds would have similar binding modes to those of the respective query scaffolds observed in X-ray structures.

  13. Web-Based Tools in Education

    Directory of Open Access Journals (Sweden)

    Lupasc Adrian

    2016-07-01

    Full Text Available Technology is advancing at a rapid pace, and what we knew a year ago is likely to no longer apply today. With it, technology brings new ways of transmitting, processing and storing information, and of socializing. The continuous development of information technologies contributes more than ever to increased access to information in any field of activity, including education. For this reason, education must help young people (pupils and students) to select from the sheer volume of information available, to access it, and to learn how to use it. Education must therefore constantly adapt to social change; it must pass on the achievements and richness of human experience. At the same time, technology supports didactic activity because it takes learning beyond the classroom, involves all actors in the school community, and prepares young people for their profession. Web tools available for education can yield added benefits, which is why, especially at higher levels of the education system, their integration is becoming more common and the results are soon to be seen. Information technologies also change the classic way of learning, which is undergoing rapid and profound transformation. In addition, current information technologies offer many types of applications, which argues for a new system of providing education and for building knowledge. In this regard, the paper aims to highlight the impact and benefits of current information technologies, particularly web-based ones, on the educational process.

  14. Give Them a Tool Kit: Demystifying the Job Search Process for Marketing Students

    Science.gov (United States)

    Morris, Paula T.; LeBaron, David; Arvi, Leonard

    2015-01-01

    Few, if any, marketing students are adequately prepared to conduct a thorough job search that will lead them to enjoyable, meaningful employment upon graduation. We present a method we have used in several classes that helps demystify the job search process for students. Using our approach, students have been able to discover their career passions…

  15. Tool-Body Assimilation Model Based on Body Babbling and Neurodynamical System

    Directory of Open Access Journals (Sweden)

    Kuniyuki Takahashi

    2015-01-01

    Full Text Available We propose a new method of tool use based on a tool-body assimilation model that combines body babbling with a neurodynamical system, enabling robots to use tools. Almost all existing studies of robot tool use require predetermined motions and tool features, so the motion patterns are limited and the robots cannot use novel tools. Other studies exhaustively search all available parameters for novel tools, but this leads to massive amounts of calculation. To solve these problems, we took the following approach: we used a humanoid robot model to generate random motions based on human body babbling. These rich motion experiences were used to train recurrent and deep neural networks for modeling a body image. Tool features were self-organized in parametric bias, modulating the body image according to the tool in use. Finally, we designed a neural network for the robot to generate motion only from the target image. Experiments were conducted with multiple tools for manipulating a cylindrical target object. The results show that the tool-body assimilation model is capable of motion generation.

  16. A DE-Based Scatter Search for Global Optimization Problems

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

    Full Text Available This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems, incorporating the evolution mechanism of differential evolution (DE) into the reference-set update procedure of SS to act as the new solution generation method. This hybrid algorithm is called a DE-based SS (SSDE) algorithm. Since different kinds of DE mutation operators have been proposed in the literature, showing different search abilities for different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator most appropriate to the current problem, an adaptive mechanism for the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population and subsequently construct a new reference set whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive with or superior to some state-of-the-art algorithms in the literature.
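    A minimal sketch, under illustrative settings, of the two ingredients named above: four classical DE mutation operators and an adaptive mechanism that samples operators in proportion to their smoothed success rates. This is not the SSDE code itself.

        import numpy as np

        rng = np.random.default_rng(1)

        def _pick(pop, k):
            idx = rng.choice(len(pop), size=k, replace=False)
            return [pop[i] for i in idx]

        # Four classical DE mutation operators (F is the scale factor).
        def rand_1(x, pop, best, F=0.5):
            a, b, c = _pick(pop, 3); return a + F * (b - c)
        def best_1(x, pop, best, F=0.5):
            a, b = _pick(pop, 2); return best + F * (a - b)
        def current_to_best_1(x, pop, best, F=0.5):
            a, b = _pick(pop, 2); return x + F * (best - x) + F * (a - b)
        def rand_2(x, pop, best, F=0.5):
            a, b, c, d, e = _pick(pop, 5); return a + F * (b - c) + F * (d - e)

        OPS = [rand_1, best_1, current_to_best_1, rand_2]

        def select_op(successes, trials):
            # Adaptive choice: sample in proportion to Laplace-smoothed success rate.
            rates = np.array([(s + 1) / (t + 2) for s, t in zip(successes, trials)])
            return rng.choice(len(OPS), p=rates / rates.sum())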

  17. Applying Cuckoo Search for analysis of LFSR based cryptosystem

    Directory of Open Access Journals (Sweden)

    Maiya Din

    2016-09-01

    Full Text Available Cryptographic techniques are employed to minimize security hazards to sensitive information. To make systems more robust, the ciphers being used need to be analysed, for which cryptanalysts require ways to automate the process so that cryptographic systems can be tested more efficiently. Evolutionary algorithms provide one such resort, as they are capable of finding globally optimal solutions very quickly. The Cuckoo Search (CS) algorithm has been used effectively in the cryptanalysis of conventional systems such as Vigenère and transposition ciphers. The Linear Feedback Shift Register (LFSR) is a crypto primitive used extensively in the design of cryptosystems. In this paper, we analyse LFSR-based cryptosystems using Cuckoo Search to find the correct initial states of the LFSRs used. Primitive polynomials of degree 11, 13, 17 and 19 are considered to analyse ciphertexts of length 200, 300 and 400 characters. Optimal solutions were obtained for the following CS parameters: Levy distribution parameter β = 1.5 and alien-egg discovery probability pa = 0.25.
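    A minimal sketch of the search target rather than of Cuckoo Search itself: a Fibonacci-style LFSR generates a keystream from a candidate initial state, and a fitness function scores how plausible the resulting decrypt is. The tap positions and the plaintext-scoring function are illustrative placeholders.

        def lfsr_stream(state, taps, n):
            # state: list of 0/1 bits (length = register degree); taps: tap indices
            bits = []
            state = list(state)
            for _ in range(n):
                bits.append(state[-1])                 # output bit
                fb = 0
                for t in taps:
                    fb ^= state[t]                     # feedback = XOR of tapped bits
                state = [fb] + state[:-1]              # shift the register
            return bits

        def fitness(candidate_state, taps, ciphertext_bits, score_plaintext):
            ks = lfsr_stream(candidate_state, taps, len(ciphertext_bits))
            plain = [c ^ k for c, k in zip(ciphertext_bits, ks)]
            return score_plaintext(plain)   # e.g. language statistics of the decrypt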

  18. Querying archetype-based EHRs by search ontology-based XPath engineering.

    Science.gov (United States)

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs on XML databases. Querying documents on these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false-positive results, precision can be increased by constraining the search to certain parts of documents. A search-ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. This query specification method is introduced in practice and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search-ontology-based XPath engineering, which reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by search-ontology-based XPath engineering. Deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that applying the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search-ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.
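    A minimal sketch of the core idea: binding search concepts to locations in the XML document structure and answering a concept-level query with a compiled XPath expression. It assumes the lxml package; the element names are a toy stand-in, not an actual archetype or EHR schema.

        from lxml import etree

        # Search-ontology fragment: concept -> XPath into the document structure.
        CONCEPT_PATHS = {
            "diagnosis": "//entry[@type='diagnosis']/value/text()",
            "lab_value": "//entry[@type='lab']/result/text()",
        }

        def query(doc_xml, concept):
            doc = etree.fromstring(doc_xml)
            return doc.xpath(CONCEPT_PATHS[concept])   # scoped, not free-text, search

        ehr = b"""<ehr>
          <entry type='diagnosis'><value>pneumonia</value></entry>
          <entry type='lab'><result>CRP 120 mg/L</result></entry>
        </ehr>"""
        print(query(ehr, "diagnosis"))   # ['pneumonia']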

  19. Supporting ontology-based keyword search over medical databases.

    Science.gov (United States)

    Kementsietsidis, Anastasios; Lim, Lipyeow; Wang, Min

    2008-11-06

    The proliferation of medical terms poses a number of challenges in the sharing of medical information among different stakeholders. Ontologies are commonly used to establish relationships between different terms, yet their role in querying has not been investigated in detail. In this paper, we study the problem of supporting ontology-based keyword search queries on a database of electronic medical records. We present several approaches to support this type of queries, study the advantages and limitations of each approach, and summarize the lessons learned as best practices.

  20. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Directory of Open Access Journals (Sweden)

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm (called MCPSO-PSH) is proposed, with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree. Previous search history, memorized in the Binary Space Partitioning fitness tree, can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to local optima. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.

  1. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    2000-01-01

    ... In the first two years we carried out two experiments. The first equated experience by comparing the perceptual skills of expert radiologists with those of lay people searching non-medical pictorial scenes for hidden targets...

  2. WCSTools 3.0: More Tools for Image Astrometry and Catalog Searching

    Science.gov (United States)

    Mink, Douglas J.

    For five years, WCSTools has provided image astrometry for astronomers who need accurate positions for objects they wish to observe. Other functions have been added and improved since the package was first released. Support has been added for new catalogs, such as the GSC-ACT, 2MASS Point Source Catalog, and GSC II, as they have been published. A simple command line interface can search any supported catalog, returning information in several standard formats, whether the catalog is on a local disk or searchable over the World Wide Web. The catalog searching routine can be located on either end (or both ends!) of such a web connection, and the output from one catalog search can be used as the input to another search.

  3. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  4. Develop risk-based procurement management tools for SMEs

    NARCIS (Netherlands)

    Staal, Anne; Hagelaar, Geoffrey; Walhof, Gert; Holman, Richard

    2016-01-01

    This paper provides guidance for developing risk-based management tools to improve the procurement (purchasing) performance of SMEs. Extant academic literature offers only limited support for developing such tools and does not consider the wide variety of SMEs. The paper defines a procurement tool for

  5. Gains Based Remedies: the misguided search for a doctrine

    Directory of Open Access Journals (Sweden)

    Tom Stafford

    2016-12-01

    Full Text Available In this article Tom Stafford (Paralegal at Clyde & Co LLP) examines the phenomenon of "Gains Based Remedies". These are awards that, unlike classical damages awards, which are calculated by reference to the loss suffered by the claimant, correlate to the gain made by the defendant. Common examples include an account of profits for breach of trust claims, or the "disgorgement" damages awarded in AG v Blake. These awards are, however, available for a spectrum of varied wrongs. Their seeming lack of unity has often baffled commentators who have tried to search for an underpinning doctrine. One particularly renowned commentary is that of Professor Edelman, who suggests that these wrongs can be understood by being broken down into one of two categories: awards which seek to deter wrongdoing, and awards which reverse a wrongful transfer of value. The purpose of this article is to discuss the flaws of this view of the law, and to suggest that, in fact, any search for a doctrinal underpinning of Gains Based Remedies is misguided. The cases in which these awards are granted have only one feature common to all: the claimant's loss is, for whatever reason, difficult or impossible to assess. For that reason, the courts use the only other measure of the wrong available: the defendant's gain.

  6. A Dynamic Neighborhood Learning-Based Gravitational Search Algorithm.

    Science.gov (United States)

    Zhang, Aizhu; Sun, Genyun; Ren, Jinchang; Li, Xiaodong; Wang, Zhenjie; Jia, Xiuping

    2018-01-01

    Balancing exploration and exploitation according to evolutionary states is crucial to meta-heuristic search (M-HS) algorithms. Owing to its simplicity in theory and effectiveness in global optimization, the gravitational search algorithm (GSA) has attracted increasing attention in recent years. However, the tradeoff between exploration and exploitation in GSA is achieved mainly by adjusting the size of an archive that stores the superior agents retained after fitness sorting in each iteration. Since the global property of this archive remains unchanged throughout the evolutionary process, GSA emphasizes exploitation over exploration and suffers from rapid loss of diversity and premature convergence. To address these problems, in this paper we propose a dynamic neighborhood learning (DNL) strategy to replace the archive model and thereby present a DNL-based GSA (DNLGSA). The method incorporates local and global neighborhood topologies to enhance exploration and obtain an adaptive balance between exploration and exploitation. The local neighborhoods are dynamically formed based on evolutionary states. To delineate the evolutionary states, two convergence criteria, named limit value and population diversity, are introduced. Moreover, a mutation operator is designed for escaping from local optima on the basis of evolutionary states. The proposed algorithm was evaluated on 27 benchmark problems with different characteristics and various difficulties. The results reveal that DNLGSA exhibits competitive performance compared with a variety of state-of-the-art M-HS algorithms. Moreover, the incorporation of the local neighborhood topology reduces the number of gravitational force calculations and thus alleviates the high computational cost of GSA.

  7. A quality assessment tool for markup-based clinical guidelines.

    Science.gov (United States)

    Shalom, Erez; Shahar, Yuval; Taieb-Maimon, Meirav; Lunenfeld, Eitan

    2008-11-06

    We introduce a tool for quality assessment of procedural and declarative knowledge, developed for evaluating the specification of mark-up-based clinical guidelines (GLs). Using this graphical tool, the expert physician and knowledge engineer collaborate to score, on a predefined scale, each of the knowledge roles of the mark-ups against a gold standard. The tool enables different users to score the mark-ups simultaneously at different locations.

  8. Memoryless cooperative graph search based on the simulated annealing algorithm

    International Nuclear Information System (INIS)

    Hou Jian; Yan Gang-Feng; Fan Zhen

    2011-01-01

    We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies, the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter (the radius of the graph), ensuring that the agents spend much less time finding a globally optimal segment. (interdisciplinary physics and related areas of science and technology)
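    A minimal sketch, with hypothetical neighbour and cost functions, of the simulated-annealing-like rule described above: the agent always accepts a better neighbouring segment and accepts a worse one with a temperature-dependent Metropolis probability.

        import math
        import random

        def anneal_on_graph(start, neighbours, cost, T0=1.0, cooling=0.995, steps=5000):
            current, c_cur = start, cost(start)
            best, c_best = current, c_cur
            T = T0
            for _ in range(steps):
                cand = random.choice(neighbours(current))
                c_new = cost(cand)
                if c_new <= c_cur or random.random() < math.exp((c_cur - c_new) / T):
                    current, c_cur = cand, c_new       # Metropolis acceptance
                    if c_cur < c_best:
                        best, c_best = current, c_cur
                T *= cooling                           # geometric cooling schedule
            return best, c_best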

  9. Information retrieval for children based on the aggregated search paradigm

    NARCIS (Netherlands)

    Duarte Torres, Sergio

    This report presents research to develop information services for children by expanding and adapting current information retrieval technologies according to the search characteristics and needs of children. Concretely, we employ the aggregated search paradigm as the theoretical framework. The

  10. Managing Online Search Statistics with dBASE III Plus.

    Science.gov (United States)

    Speer, Susan C.

    1987-01-01

    Describes a computer program designed to manage statistics about online searches which reports the number of searches by vendor, purpose, and librarian; calculates charges to departments and individuals; and prints monthly invoices to users with standing accounts. (CLB)

  11. Personalized Profile Based Search Interface With Ranked and Clustered Display

    National Research Council Canada - National Science Library

    Kumar, Sachin; Oztekin, B. U; Ertoz, Levent; Singhal, Saurabh; Han, Euihong; Kumar, Vipin

    2001-01-01

    We have developed an experimental meta-search engine, which takes the snippets from traditional search engines and presents them to the user either in the form of clusters, indices or re-ranked list...

  12. Biobotic insect swarm based sensor networks for search and rescue

    Science.gov (United States)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, tremendous effort is spent for days to reach surviving victims, and robotic swarms or other distributed robotic systems might play a great role in achieving this faster. However, current technology falls short of offering centimeter scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  13. A Framework for IT-based Design Tools

    DEFF Research Database (Denmark)

    Hartvig, Susanne C

    The thesis presents a new approach to developing design tools that can be integrated, by presenting a framework consisting of a set of guidelines for design tools, an integration and communication scheme, and a set of design tool schemes. This framework has been based on analysis of requirements to integrated design environments and analysis of engineering design and design problem solving methods. The developed framework has been tested by applying it to the development of prototype design tools for realistic design scenarios.

  14. Writing for the Robot: How Employer Search Tools Have Influenced Resume Rhetoric and Ethics

    Science.gov (United States)

    Amare, Nicole; Manning, Alan

    2009-01-01

    To date, business communication scholars and textbook writers have encouraged resume rhetoric that accommodates technology, for example, recommending keyword-enhancing techniques to attract the attention of searchbots: customized search engines that allow companies to automatically scan resumes for relevant keywords. However, few scholars have…

  15. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    Full Text Available The permutation method is one of the techniques used for Multiple Criteria Decision Making (MCDM). In the classical form of permutation, it is assumed that weights and decision matrix components are crisp. However, when group decision making is under consideration and decision makers cannot agree on crisp values for weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of permutation is its long computation time, so a Tabu Search (TS) based algorithm is proposed to reduce it. A numerical example illustrates the proposed approach; some benchmark instances extracted from the literature are then solved by the proposed TS. Analysis of the results shows the good performance of the proposed method.
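    A minimal sketch, with an illustrative objective, of the Tabu Search component: a swap-based neighbourhood over permutations, a short-term tabu list of recent swaps, and aspiration that admits a tabu move when it beats the best value found so far.

        import itertools
        import random

        def tabu_search(n, score, iters=500, tenure=15):
            # score(perm) -> value to maximize (e.g. a fuzzy concordance measure)
            perm = list(range(n))
            random.shuffle(perm)
            best, best_val = perm[:], score(perm)
            tabu = {}                                  # swap -> iteration it expires
            for it in range(iters):
                candidates = []
                for i, j in itertools.combinations(range(n), 2):
                    perm[i], perm[j] = perm[j], perm[i]
                    val = score(perm)
                    if tabu.get((i, j), -1) < it or val > best_val:   # aspiration
                        candidates.append((val, i, j))
                    perm[i], perm[j] = perm[j], perm[i]   # undo trial swap
                if not candidates:
                    continue
                val, i, j = max(candidates)
                perm[i], perm[j] = perm[j], perm[i]       # commit best admissible move
                tabu[(i, j)] = it + tenure
                if val > best_val:
                    best, best_val = perm[:], val
            return best, best_val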

  16. ASCOT: a text mining-based web-service for efficient search and assisted creation of clinical trials

    Science.gov (United States)

    2012-01-01

    Clinical trials are mandatory protocols describing medical research on humans and among the most valuable sources of medical practice evidence. Searching for trials relevant to some query is laborious due to the immense number of existing protocols. Apart from search, writing new trials includes composing detailed eligibility criteria, which might be time-consuming, especially for new researchers. In this paper we present ASCOT, an efficient search application customised for clinical trials. ASCOT uses text mining and data mining methods to enrich clinical trials with metadata, that in turn serve as effective tools to narrow down search. In addition, ASCOT integrates a component for recommending eligibility criteria based on a set of selected protocols. PMID:22595088

  17. search.bioPreprint: a discovery tool for cutting edge, preprint biomedical research articles [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Carrie L. Iwema

    2016-07-01

    Full Text Available The time it takes for a completed manuscript to be published traditionally can be extremely lengthy. Article publication delay, which occurs in part due to constraints associated with peer review, can prevent the timely dissemination of critical and actionable data associated with new information on rare diseases or developing health concerns such as Zika virus. Preprint servers are open access online repositories housing preprint research articles that enable authors (1 to make their research immediately and freely available and (2 to receive commentary and peer review prior to journal submission. There is a growing movement of preprint advocates aiming to change the current journal publication and peer review system, proposing that preprints catalyze biomedical discovery, support career advancement, and improve scientific communication. While the number of articles submitted to and hosted by preprint servers are gradually increasing, there has been no simple way to identify biomedical research published in a preprint format, as they are not typically indexed and are only discoverable by directly searching the specific preprint server websites. To address this issue, we created a search engine that quickly compiles preprints from disparate host repositories and provides a one-stop search solution. Additionally, we developed a web application that bolsters the discovery of preprints by enabling each and every word or phrase appearing on any web site to be integrated with articles from preprint servers. This tool, search.bioPreprint, is publicly available at http://www.hsls.pitt.edu/resources/preprint.

  18. Mutagenesis Objective Search and Selection Tool (MOSST: an algorithm to predict structure-function related mutations in proteins

    Directory of Open Access Journals (Sweden)

    Asenjo Juan A

    2011-04-01

    Full Text Available Background: Functionally relevant artificial or natural mutations are difficult to assess or predict if no structure-function information is available for a protein. This is especially important for correctly identifying functionally significant non-synonymous single nucleotide polymorphisms (nsSNPs) or for designing a site-directed mutagenesis strategy for a target protein. A new and powerful methodology is proposed to guide these two decision strategies, based only on conservation rules for physicochemical properties of amino acids extracted from a multiple alignment of the protein family to which the target protein belongs, with no need of explicit structure-function relationships. Results: A statistical analysis is performed over each amino acid position in the multiple protein alignment, based on different amino acid physical or chemical characteristics, including hydrophobicity, side-chain volume, charge and protein conformational parameters. The variances of each of these properties at each position are combined to obtain a global statistical indicator of the conservation degree of each property. Different types of physicochemical conservation are defined to characterize relevant and irrelevant positions. The differences between statistical variances are taken together as the basis of hypothesis tests at each position to search for functionally significant mutable sites and to identify specific mutagenesis targets. The outcome is used to statistically predict physicochemical consensus sequences based on different properties and to calculate the amino acid propensities at each position in a given protein. Hence, amino acid positions are identified that are putatively responsible for function, specificity, stability or binding interactions in a family of proteins. Once these key functional positions are identified, position-specific statistical distributions are applied to divide the 20 common protein amino acids in each position of the protein
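    A minimal sketch, under stated assumptions, of the statistical core described above: for each alignment column, the variance of a physicochemical property is computed over the observed residues, and near-zero variance flags a physicochemically conserved, putatively functional position. The hydrophobicity values are illustrative stand-ins for a real scale.

        import statistics

        # Illustrative per-residue hydrophobicity values (not a complete scale).
        HYDRO = {"A": 1.8, "V": 4.2, "L": 3.8, "D": -3.5, "K": -3.9, "G": -0.4}

        def column_variances(alignment, prop=HYDRO):
            # alignment: list of equal-length sequences; unknown residues/gaps skipped
            ncols = len(alignment[0])
            out = []
            for j in range(ncols):
                vals = [prop[s[j]] for s in alignment if s[j] in prop]
                out.append(statistics.pvariance(vals) if len(vals) > 1 else 0.0)
            return out

        aln = ["AVLD", "AVLK", "AVGD"]
        print(column_variances(aln))   # near-zero variance marks conserved columns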

  19. Proceedings of the ECIR 2012 Workshop on Task-Based and Aggregated Search (TBAS2012)

    DEFF Research Database (Denmark)

    2012-01-01

    Task-based search aims to understand the user's current task and desired outcomes, and how this may provide useful context for the Information Retrieval (IR) process. An example of task-based search is situations where additional user information on e.g. the purpose of the search or what the user...

  20. A Systematic Understanding of Successful Web Searches in Information-Based Tasks

    Science.gov (United States)

    Zhou, Mingming

    2013-01-01

    The purpose of this study is to research how Chinese university students solve information-based problems. With the Search Performance Index as the measure of search success, participants were divided into high, medium and low-performing groups. Based on their web search logs, these three groups were compared along five dimensions of the search…

  1. Noesis: Ontology based Scoped Search Engine and Resource Aggregator for Atmospheric Science

    Science.gov (United States)

    Ramachandran, R.; Movva, S.; Li, X.; Cherukuri, P.; Graves, S.

    2006-12-01

    The goal for search engines is to return results that are both accurate and complete: they should find only what you really want, and everything you really want. Search engines (even meta search engines) lack semantics. The basis for search is simple string matching between the user's query term and the resource database; the semantics associated with the search string are not captured. For example, if an atmospheric scientist searches for "pressure" related web resources, most search engines return inaccurate results such as web resources related to blood pressure. This presentation describes Noesis, a meta-search engine and resource aggregator that uses domain ontologies to provide scoped search capabilities. Noesis uses domain ontologies to help the user scope the search query and thereby ensure that the search results are both accurate and complete. The domain ontologies guide the user in refining the search query and so reduce the user's burden of experimenting with different search strings. Semantics are captured by refining the query terms to cover synonyms, specializations, generalizations and related concepts. Noesis also serves as a resource aggregator. It categorizes the search results from different online resources, such as education materials, publications, datasets and web search engines, that might be of interest to the user.
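    A minimal sketch, over a toy ontology, of scoped search by query expansion: before the query reaches a search engine, it is expanded with synonyms and specializations drawn from a domain ontology, steering "pressure" toward its atmospheric-science sense. The ontology fragment and relation names are illustrative.

        TOY_ONTOLOGY = {
            "pressure": {
                "synonyms": ["atmospheric pressure", "barometric pressure"],
                "narrower": ["sea level pressure", "vapor pressure"],
                "related": ["geopotential height"],
            },
        }

        def scoped_query(term, ontology=TOY_ONTOLOGY, relations=("synonyms", "narrower")):
            expansions = [term]
            for rel in relations:
                expansions += ontology.get(term, {}).get(rel, [])
            # OR-combined phrase query understood by most web search engines
            return " OR ".join(f'"{t}"' for t in expansions)

        print(scoped_query("pressure"))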

  2. Part-based deep representation for product tagging and search

    Science.gov (United States)

    Chen, Keqing

    2017-06-01

    Despite previous studies, tagging and indexing product images remain challenging due to the large inner-class variation of the products. In traditional methods, quantized hand-crafted features such as SIFT are extracted as the representation of the product images, which are not discriminative enough to handle the inner-class variation. For discriminative image representation, this paper first presents a novel deep convolutional neural network (DCNN) architecture pre-trained on a large-scale general image dataset. Compared to traditional features, our DCNN representation has more discriminative power with fewer dimensions. Moreover, we incorporate a part-based model into the framework to overcome the negative effects of bad alignment and cluttered background, further enhancing the descriptive ability of the deep representation. Finally, we collect and contribute a well-labeled shoe image database, i.e., the TBShoes, on which we apply the part-based deep representation for product image tagging and search, respectively. The experimental results highlight the advantages of the proposed part-based deep representation.

  3. FASTERp: A Feature Array Search Tool for Estimating Resemblance of Protein Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Macklin, Derek; Egan, Rob; Wang, Zhong

    2014-03-14

    Metagenome sequencing efforts have provided a large pool of billions of genes for identifying enzymes with desirable biochemical traits. However, homology search against billions of genes in a rapidly growing database has become increasingly computationally impractical. Here we present our pilot efforts to develop a novel alignment-free algorithm for homology search. Specifically, we represent individual proteins as feature vectors that denote the presence or absence of short kmers in the protein sequence. Similarity between feature vectors is then computed using the Tanimoto score, a distance metric that can be rapidly computed on bit-string representations of feature vectors. Preliminary results indicate good correlation with optimal alignment algorithms (Spearman r of 0.87, ~1,000,000 proteins from Pfam), as well as with heuristic algorithms such as BLAST (Spearman r of 0.86, ~1,000,000 proteins). Furthermore, a prototype of FASTERp implemented in Python runs approximately four times faster than BLAST on a small-scale dataset (~1000 proteins). We are optimizing and scaling FASTERp to enable rapid homology searches against billion-protein databases, thereby enabling more comprehensive gene annotation efforts.
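    A minimal sketch, not the FASTERp code, of the representation described above: each protein is reduced to a bit vector marking which hashed k-mers occur, and similarity is the Tanimoto score computed with integer popcounts (int.bit_count, Python 3.10+). The hash-based feature indexing is illustrative.

        def kmer_bitvector(seq, k=3, nbits=4096):
            v = 0
            for i in range(len(seq) - k + 1):
                v |= 1 << (hash(seq[i:i + k]) % nbits)   # hashed k-mer feature bit
            return v

        def tanimoto(a, b):
            inter = (a & b).bit_count()                  # shared features
            union = (a | b).bit_count()
            return inter / union if union else 0.0

        p1 = kmer_bitvector("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
        p2 = kmer_bitvector("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA")
        print(round(tanimoto(p1, p2), 3))                # near 1.0 for close homologs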

  4. Web-Based Tools for Text-Based Patient-Provider Communication in Chronic Conditions: Scoping Review.

    Science.gov (United States)

    Voruganti, Teja; Grunfeld, Eva; Makuwaza, Tutsirai; Bender, Jacqueline L

    2017-10-27

    Patients with chronic conditions require ongoing care which not only necessitates support from health care providers outside appointments but also self-management. Web-based tools for text-based patient-provider communication, such as secure messaging, allow for sharing of contextual information and personal narrative in a simple accessible medium, empowering patients and enabling their providers to address emerging care needs. The objectives of this study were to (1) conduct a systematic search of the published literature and the Internet for Web-based tools for text-based communication between patients and providers; (2) map tool characteristics, their intended use, contexts in which they were used, and by whom; (3) describe the nature of their evaluation; and (4) understand the terminology used to describe the tools. We conducted a scoping review using the MEDLINE (Medical Literature Analysis and Retrieval System Online) and EMBASE (Excerpta Medica Database) databases. We summarized information on the characteristics of the tools (structure, functions, and communication paradigm), intended use, context and users, evaluation (study design and outcomes), and terminology. We performed a parallel search of the Internet to compare with tools identified in the published literature. We identified 54 papers describing 47 unique tools from 13 countries studied in the context of 68 chronic health conditions. The majority of tools (77%, 36/47) had functions in addition to communication (eg, viewable care plan, symptom diary, or tracker). Eight tools (17%, 8/47) were described as allowing patients to communicate with the team or multiple health care providers. Most of the tools were intended to support communication regarding symptom reporting (49%, 23/47), and lifestyle or behavior modification (36%, 17/47). The type of health care providers who used tools to communicate with patients were predominantly allied health professionals of various disciplines (30%, 14

  5. JAVA based LCD Reconstruction and Analysis Tools

    International Nuclear Information System (INIS)

    Bower, G.

    2004-01-01

    We summarize the current status and future developments of the North American Group's Java-based system for studying physics and detector design issues at a linear collider. The system is built around Java Analysis Studio (JAS), an experiment-independent Java-based utility for data analysis. Although the system is an integrated package running in JAS, many parts of it are also standalone Java utilities.

  6. Java based LCD reconstruction and analysis tools

    International Nuclear Information System (INIS)

    Bower, Gary; Cassell, Ron; Graf, Norman; Johnson, Tony; Ronan, Mike

    2001-01-01

    We summarize the current status and future developments of the North American Group's Java-based system for studying physics and detector design issues at a linear collider. The system is built around Java Analysis Studio (JAS), an experiment-independent Java-based utility for data analysis. Although the system is an integrated package running in JAS, many parts of it are also standalone Java utilities.

  7. OPUS: A Comprehensive Search Tool for Remote Sensing Observations of the Outer Planets. Now with Enhanced Geometric Metadata for Cassini and New Horizons Optical Remote Sensing Instruments.

    Science.gov (United States)

    Gordon, M. K.; Showalter, M. R.; Ballard, L.; Tiscareno, M.; French, R. S.; Olson, D.

    2017-06-01

    The PDS RMS Node hosts OPUS - an accurate, comprehensive search tool for spacecraft remote sensing observations. OPUS supports Cassini: CIRS, ISS, UVIS, VIMS; New Horizons: LORRI, MVIC; Galileo SSI; Voyager ISS; and Hubble: ACS, STIS, WFC3, WFPC2.

  8. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.

  9. Harmony Search Based Parameter Ensemble Adaptation for Differential Evolution

    Directory of Open Access Journals (Sweden)

    Rammohan Mallipeddi

    2013-01-01

    In the differential evolution (DE) algorithm, depending on the characteristics of the problem at hand and the available computational resources, different strategies combined with a different set of parameters may be effective. In addition, a single, well-tuned combination of strategies and parameters may not guarantee optimal performance because different strategies combined with different parameter settings can be appropriate during different stages of the evolution. Therefore, various adaptive/self-adaptive techniques have been proposed to adapt the DE strategies and parameters during the course of evolution. In this paper, we propose a new parameter adaptation technique for DE based on an ensemble approach and the harmony search algorithm (HS). In the proposed method, an ensemble of parameters is randomly sampled, which forms the initial harmony memory. The parameter ensemble evolves during the course of the optimization process by the HS algorithm. Each parameter combination in the harmony memory is evaluated by testing it on the DE population. The performance of the proposed adaptation method is evaluated using two recently proposed strategies (DE/current-to-pbest/bin and DE/current-to-gr_best/bin) as basic DE frameworks. Numerical results demonstrate the effectiveness of the proposed adaptation technique compared to the state-of-the-art DE based algorithms on a set of challenging test problems (CEC 2005).
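
    As a rough illustration of the improvisation step described above, the sketch below maintains a harmony memory of (F, CR) parameter pairs and applies the HMCR/PAR/BW rules. The scoring function is a stand-in of my own; in the paper each pair is evaluated by testing it on the actual DE population.

```python
# Compact sketch of HS-driven parameter adaptation: a harmony memory holds
# candidate (F, CR) pairs for DE, new pairs are improvised with HMCR/PAR
# rules, and better-scoring pairs replace the worst memory entry.
import random

HMCR, PAR, BW = 0.9, 0.3, 0.05
memory = [(random.uniform(0.1, 1.0), random.random()) for _ in range(10)]

def improvise():
    """Build a new (F, CR) harmony from memory or at random."""
    new = []
    for dim, (lo, hi) in enumerate([(0.1, 1.0), (0.0, 1.0)]):
        if random.random() < HMCR:
            v = random.choice(memory)[dim]
            if random.random() < PAR:               # pitch adjustment
                v = min(hi, max(lo, v + random.uniform(-BW, BW)))
        else:
            v = random.uniform(lo, hi)
        new.append(v)
    return tuple(new)

def score(params):                                  # stand-in evaluation:
    F, CR = params                                  # the paper instead tests
    return -abs(F - 0.5) - abs(CR - 0.9)            # each pair on the DE population

for _ in range(200):
    cand = improvise()
    worst = min(range(len(memory)), key=lambda i: score(memory[i]))
    if score(cand) > score(memory[worst]):
        memory[worst] = cand
print(max(memory, key=score))
```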

  10. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the huge amount of images available on the Internet. There have been a number of efforts to build search engines based on image content in order to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR allows automatically extracting target images according to objective visual contents of the image itself, for example its shapes, colors and textures, to provide more accurate ranking of the results. Recent approaches to CBIR differ in terms of which image features are extracted to be used as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features. The framework addresses efficient retrieval of images in large image collections. A comparative study between recent CBIR techniques is provided. According to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide more accurate retrieval accuracy than previous approaches. The first proposed image retrieval system uses color, texture and shape descriptors to form a global features vector. This approach integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor and the modified Edge Histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global features vector, which is used in the first approach, with the SURF salient point technique as a local feature. The nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. The second approach
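
    A hedged sketch of the fusion-and-rank flow of the first approach follows. The descriptors here (plain per-channel color histograms and a gradient-orientation histogram) are simplified stand-ins for the YCbCr histogram, modified Fourier descriptor and modified Edge Histogram used in the thesis; only the concatenate-then-rank structure is illustrated.

```python
# Sketch of global-feature fusion for CBIR: per-channel color histograms and
# a crude edge-orientation histogram are concatenated into one vector, and
# query results are ranked by nearest-neighbour distance.
import numpy as np

def color_hist(img: np.ndarray, bins: int = 8) -> np.ndarray:
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 255), density=True)[0]
        for c in range(3)])

def edge_hist(img: np.ndarray, bins: int = 8) -> np.ndarray:
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)                 # simple gradient, not the EHD
    angles = np.arctan2(gy, gx).ravel()
    return np.histogram(angles, bins=bins, range=(-np.pi, np.pi), density=True)[0]

def features(img):
    return np.concatenate([color_hist(img), edge_hist(img)])

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (32, 32, 3)).astype(float) for _ in range(50)]
db_feats = np.stack([features(im) for im in db])
query = features(db[7])
rank = np.argsort(np.linalg.norm(db_feats - query, axis=1))
print(rank[:5])  # image 7 should rank first (distance zero to itself)
```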

  11. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    Science.gov (United States)

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  12. Evidence-based medicine - searching the medical literature. Part 1.

    African Journals Online (AJOL)

    Ann Burgess

    If you have a password for access via HINARI, use that to log in. Then you can retrieve articles from the 6000 journals that will be available to you. You cannot retrieve the full text from journals that do not allow free access or HINARI access. How to search the literature on the internet: before you start your search, take a moment to think ...

  13. On the network-based emulation of human visual search

    NARCIS (Netherlands)

    Gerrissen, J.F.

    1991-01-01

    We describe the design of a computer emulator of human visual search. The emulator mechanism is eventually meant to support ergonomic assessment of the effect of display structure and protocol on search performance. As regards target identification and localization, it mimics a number of

  14. Tools for the Knowledge-Based Organization

    DEFF Research Database (Denmark)

    Ravn, Ib

    2002-01-01

    exist. They include a Master’s degree in knowledge management, a web- or print-based intelligence hub for the knowledge society, collaboration with the Danish intellectual capital reporting project, ongoing research on expertise and ethics in knowledge workers, a comparative study of competence...

  15. Evidence Based Management as a Tool for Special Libraries

    Directory of Open Access Journals (Sweden)

    Bill Fisher

    2007-12-01

    Objective ‐ To examine the evidence based management literature, as an example of evidence based practice, and determine how applicable evidence based management might be in the special library environment. Methods ‐ Recent general management literature and the subject‐focused literature of evidence based management were reviewed; likewise recent library/information science management literature and the subject‐focused literature of evidence based librarianship were reviewed to identify relevant examples of the introduction and use of evidence based practice in organizations. Searches were conducted in major business/management databases, major library/information science databases, and relevant Web sites, blogs and wikis. Citation searches on key articles and follow‐up searches on cited references were also conducted. Analysis of the retrieved literature was conducted to find similarities and/or differences between the management literature and the library/information science literature, especially as it related to special libraries. Results ‐ The barriers to introducing evidence based management into most organizations were found to apply to many special libraries and are similar to issues involved with evidence based practice in librarianship in general. Despite these barriers, a set of resources to assist special librarians in accessing research‐based information to help them use principles of evidence based management is identified. Conclusion ‐ While most special librarians are faced with a number of barriers to using evidence based management, resources do exist to help overcome these obstacles.

  16. World Wide Web-based system for the calculation of substituent parameters and substituent similarity searches.

    Science.gov (United States)

    Ertl, P

    1998-02-01

    Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. In Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.

  17. Infodemiology of status epilepticus: A systematic validation of the Google Trends-based search queries.

    Science.gov (United States)

    Bragazzi, Nicola Luigi; Bacigaluppi, Susanna; Robba, Chiara; Nardone, Raffaele; Trinka, Eugen; Brigo, Francesco

    2016-02-01

    People increasingly use Google looking for health-related information. We previously demonstrated that in English-speaking countries most people use this search engine to obtain information on status epilepticus (SE) definition, types/subtypes, and treatment. Now, we aimed at providing a quantitative analysis of SE-related web queries. This analysis represents an advancement, with respect to what was already previously discussed, in that the Google Trends (GT) algorithm has been further refined and correlational analyses have been carried out to validate the GT-based query volumes. Google Trends-based SE-related query volumes were well correlated with information concerning causes and pharmacological and nonpharmacological treatments. Google Trends can provide both researchers and clinicians with data on realities and contexts that are generally overlooked and underexplored by classic epidemiology. In this way, GT can foster new epidemiological studies in the field and can complement traditional epidemiological tools.

  18. Agent Based Modeling as an Educational Tool

    Science.gov (United States)

    Fuller, J. H.; Johnson, R.; Castillo, V.

    2012-12-01

    Motivation is a key element in high school education. One way to improve motivation and provide content, while helping address critical thinking and problem solving skills, is to have students build and study agent based models in the classroom. This activity visually connects concepts with their applied mathematical representation. "Engaging students in constructing models may provide a bridge between frequently disconnected conceptual and mathematical forms of knowledge." (Levy and Wilensky, 2011) We wanted to discover the feasibility of implementing a model based curriculum in the classroom given current and anticipated core and content standards. (Figure captions: simulation using California GIS data; simulation of high school student lunch popularity using an aerial photograph on top of a terrain value map.)

  19. Analysis on the Correlation of Traffic Flow in Hainan Province Based on Baidu Search

    Science.gov (United States)

    Chen, Caixia; Shi, Chun

    2018-03-01

    Internet search data records users' search attention and consumer demand, providing the necessary database for the Hainan traffic flow model. Based on the Baidu Index, with Hainan traffic flow as an example, this paper conducts both qualitative and quantitative analyses of the relationship between search keywords from the Baidu Index and the actual Hainan tourist traffic flow, and builds a multiple regression model in SPSS.

  20. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
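
    The re-ranking step lends itself to a short sketch. Assuming each result has already been assigned coordinates (in GeoSearcher this is done dynamically from the URL), results can be re-ordered by great-circle distance to the query location; the URLs and coordinates below are hypothetical.

```python
# Illustrative sketch of location-based re-ranking: each result carries
# coordinates, and results are re-ordered by great-circle (haversine)
# distance to the user's location of interest.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# hypothetical results: (url, lat, lon) as the algorithm would assign them
results = [("http://a.example.ca", 44.65, -63.57),
           ("http://b.example.us", 40.71, -74.01),
           ("http://c.example.uk", 51.51, -0.13)]
query_loc = (45.42, -75.70)  # user's location of interest (Ottawa)

ranked = sorted(results, key=lambda r: haversine_km(*query_loc, r[1], r[2]))
print([r[0] for r in ranked])  # nearest site first
```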

  1. Omicseq: a web-based search engine for exploring omics datasets

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S.; Xu, Tianlei; Chen, Li; Zwick, Michael E.; Jiang, Xiaoqian; Wang, Fusheng

    2017-01-01

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve ‘findability’ of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org. PMID:28402462

  2. Omicseq: a web-based search engine for exploring omics datasets.

    Science.gov (United States)

    Sun, Xiaobo; Pittard, William S; Xu, Tianlei; Chen, Li; Zwick, Michael E; Jiang, Xiaoqian; Wang, Fusheng; Qin, Zhaohui S

    2017-07-03

    The development and application of high-throughput genomics technologies has resulted in massive quantities of diverse omics data that continue to accumulate rapidly. These rich datasets offer unprecedented and exciting opportunities to address long standing questions in biomedical research. However, our ability to explore and query the content of diverse omics data is very limited. Existing dataset search tools rely almost exclusively on the metadata. A text-based query for gene name(s) does not work well on datasets wherein the vast majority of their content is numeric. To overcome this barrier, we have developed Omicseq, a novel web-based platform that facilitates the easy interrogation of omics datasets holistically to improve 'findability' of relevant data. The core component of Omicseq is trackRank, a novel algorithm for ranking omics datasets that fully uses the numerical content of the dataset to determine relevance to the query entity. The Omicseq system is supported by a scalable and elastic, NoSQL database that hosts a large collection of processed omics datasets. In the front end, a simple, web-based interface allows users to enter queries and instantly receive search results as a list of ranked datasets deemed to be the most relevant. Omicseq is freely available at http://www.omicseq.org.

  3. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  4. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  5. The Cellular Differential Evolution Based on Chaotic Local Search

    Directory of Open Access Journals (Sweden)

    Qingfeng Ding

    2015-01-01

    To avoid premature convergence and tune the selection pressure in the differential evolution (DE) algorithm, a new differential evolution algorithm based on cellular automata and chaotic local search (CLS), called ccDE, is proposed. To balance the exploration and exploitation tradeoff of differential evolution, the interaction among individuals is limited to cellular neighbors instead of being governed by the control parameters of the canonical DE. To improve the optimizing performance of DE, the CLS helps by exploring a large region to avoid premature convergence in the early evolutionary stage and exploiting a small region to refine the final solutions in the later evolutionary stage. Furthermore, to improve the convergence characteristics and maintain the population diversity, the binomial crossover operator of the canonical DE may be replaced by the orthogonal crossover operator, which requires no crossover rate. The performance of ccDE is widely evaluated on a set of 14 bound constrained numerical optimization problems and compared with the canonical DE and several DE variants. The simulation results show that ccDE has better performance in terms of convergence rate and solution accuracy than other optimizers.
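
    The chaotic local search ingredient can be sketched compactly. The illustrative Python below uses a logistic map to perturb the current best solution inside a neighbourhood that shrinks across stages, mirroring the explore-then-refine behaviour described above; the sphere objective and all constants are placeholders of my own.

```python
# Minimal sketch of chaotic local search (CLS): a logistic map generates a
# chaotic sequence that perturbs the best solution inside a shrinking
# neighbourhood, keeping only improvements.
import random

def sphere(x):
    return sum(v * v for v in x)

def chaotic_local_search(best, radius, steps=50):
    """Perturb `best` with logistic-map noise of amplitude `radius`."""
    z = [random.uniform(0.01, 0.99) for _ in best]   # chaotic state per dim
    for _ in range(steps):
        z = [4.0 * zi * (1.0 - zi) for zi in z]      # logistic map, mu = 4
        cand = [b + radius * (2.0 * zi - 1.0) for b, zi in zip(best, z)]
        if sphere(cand) < sphere(best):
            best = cand
    return best

best = [random.uniform(-5, 5) for _ in range(5)]
for radius in [1.0, 0.1, 0.01]:      # explore widely early, refine later
    best = chaotic_local_search(best, radius)
print(sphere(best))
```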

  6. Database with web interface and search engine as a diagnostics tool for electromagnetic calorimeter

    CERN Document Server

    Paluoja, Priit

    2017-01-01

    During the 2016 data collection, the Compact Muon Solenoid Data Acquisition (CMS DAQ) system showed very good reliability. Nevertheless, the high complexity of the hardware and software involved is, by its nature, prone to occasional problems. As a CMS subdetector, the electromagnetic calorimeter (ECAL) is affected in the same way. Some of the issues are not predictable and can appear more than once during the year, such as components becoming noisy, power shortcuts or failing communication between machines. The chain detection-diagnosis-intervention must be as fast as possible to minimise the downtime of the detector. The aim of this project was to create diagnostic software for the ECAL crew, consisting of a database and a web interface that allows users to search, add and edit the contents of the database.

  7. Microcantilever-based platforms as biosensing tools.

    Science.gov (United States)

    Alvarez, Mar; Lechuga, Laura M

    2010-05-01

    The fast and progressive growth of the biotechnology and pharmaceutical fields forces the development of new and powerful sensing techniques for process optimization and detection of biomolecules at very low concentrations. During the last years, the simplest MEMS structures, i.e. microcantilevers, have become an emerging and promising technology for biosensing applications, due to their small size, fast response, high sensitivity and their compatible integration into "lab-on-a-chip" devices. This article provides an overview of some of the most interesting bio-detections carried out during the last 2-3 years with the microcantilever-based platforms, which highlight the continuous expansion of this kind of sensor in the medical diagnosis field, reaching limits of detection at the single molecule level.

  8. Building and evaluating an informatics tool to facilitate analysis of a biomedical literature search service in an academic medical center library.

    Science.gov (United States)

    Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha

    2013-01-01

    This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.

  9. OS2: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain

    Science.gov (United States)

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of the public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data via search queries. Search-over-encrypted-data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes which extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search (OS2) for encrypted data. It enables authorized users to model their own encrypted search queries, which are resilient to typographical errors. Unlike conventional methodologies, OS2 ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted-bloom-filter-based search enables OS2 to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of OS2 is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations. PMID:28692697
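
    A minimal sketch of the bloom-filter side of such a scheme is shown below. The encryption layer is omitted, the filter length and hash count are arbitrary choices, and the bit-level Dice score stands in for the paper's exact similarity construction; whole words are hashed for brevity, whereas a typo-resilient variant would hash character n-grams.

```python
# Rough sketch of similarity search over bloom filters: keywords are hashed
# into fixed-length filters and candidate matches are ranked by a bit-level
# similarity score instead of exact equality.
import hashlib

M, HASHES = 256, 4  # filter length in bits, hash functions per term

def bloom(terms):
    bits = 0
    for t in terms:
        for i in range(HASHES):
            h = int(hashlib.sha256(f"{i}:{t}".encode()).hexdigest(), 16)
            bits |= 1 << (h % M)
    return bits

def dice_similarity(a, b):
    """Bit-level Dice score between two filters (1.0 = identical)."""
    inter = bin(a & b).count("1")
    return 2 * inter / (bin(a).count("1") + bin(b).count("1"))

index = {doc: bloom(words.split()) for doc, words in {
    "doc1": "cloud storage encryption",
    "doc2": "cloud storage archive",
    "doc3": "genome assembly"}.items()}
q = bloom("cloud storage encryption".split())
print(sorted(index, key=lambda d: dice_similarity(index[d], q), reverse=True))
```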

  10. Alephweb: a search engine based on the federated structure ...

    African Journals Online (AJOL)

    Revue d'Information Scientifique et Technique, Vol 7, No 1 (1997).

  11. A modified harmony search based method for optimal rural radial ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology, Vol 2, No 3 (2010).

  12. Pep-3D-Search: a method for B-cell epitope prediction based on mimotope analysis.

    Science.gov (United States)

    Huang, Yan Xin; Bao, Yong Li; Guo, Shu Yan; Wang, Yan; Zhou, Chun Guang; Li, Yu Xin

    2008-12-16

    The prediction of conformational B-cell epitopes is one of the most important goals in immunoinformatics. The solution to this problem, even if approximate, would help in designing experiments to precisely map the residues of interaction between an antigen and an antibody. Consequently, this area of research has received considerable attention from immunologists, structural biologists and computational biologists. Phage-displayed random peptide libraries are powerful tools used to obtain mimotopes that are selected by binding to a given monoclonal antibody (mAb) in a similar way to the native epitope. These mimotopes can be considered as functional epitope mimics. Mimotope analysis based methods can predict not only linear but also conformational epitopes and this has been the focus of much research in recent years. Though some algorithms based on mimotope analysis have been proposed, the precise localization of the interaction site mimicked by the mimotopes is still a challenging task. In this study, we propose a method for B-cell epitope prediction based on mimotope analysis called Pep-3D-Search. Given the 3D structure of an antigen and a set of mimotopes (or a motif sequence derived from the set of mimotopes), Pep-3D-Search can be used in two modes: mimotope or motif. To evaluate the performance of Pep-3D-Search to predict epitopes from a set of mimotopes, 10 epitopes defined by crystallography were compared with the predicted results from Pep-3D-Search: the average Matthews correlation coefficient (MCC), sensitivity and precision were 0.1758, 0.3642 and 0.6948. Compared with other available prediction algorithms, Pep-3D-Search showed comparable MCC, specificity and precision, and could provide novel, rational results. To verify the capability of Pep-3D-Search to align a motif sequence to a 3D structure for predicting epitopes, 6 test cases were used. The predictive performance of Pep-3D-Search was demonstrated to be superior to that of other similar programs.

  13. Chapter 11: Web-based Tools - VO Region Inventory Service

    Science.gov (United States)

    Good, J. C.

    As the size and number of datasets available through the VO grows, it becomes increasingly critical to have services that aid in locating and characterizing data pertinent to a particular scientific problem. At the same time, this same increase makes that goal more and more difficult to achieve. With a small number of datasets, it is feasible to simply retrieve the data itself (as the NVO DataScope service does). At intermediate scales, "count" DBMS searches (searches of the actual datasets which return record counts rather than full data subsets) sent to each data provider will work. However, neither of these approaches scale as the number of datasets expands into the hundreds or thousands. Dealing with the same problem internally, IRSA developed a compact and extremely fast scheme for determining source counts for positional catalogs (and in some cases image metadata) over arbitrarily large regions for multiple catalogs in a fraction of a second. To show applicability to the VO in general, this service has been extended with indices for all 4000+ catalogs in CDS Vizier (essentially all published catalogs and source tables). In this chapter, we will briefly describe the architecture of this service, and then describe how this can be used in a distributed system to retrieve rapid inventories of all VO holdings in a way that places an insignificant load on any data supplier. Further, we show how this tool can be used in conjunction with VO Registries and catalog services to zero in on those datasets that are appropriate to the user's needs. The initial implementation of this service consolidates custom binary index file structures (external to any DBMS and therefore portable) at a single site to minimize search times and implements the search interface as a simple CGI program. However, the architecture is amenable to distribution. The next phase of development will focus on metadata harvesting from data archives through a standard program interface and distribution

  14. A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories

    Science.gov (United States)

    Nguyen, Hung D.; Steele, Gynelle C.

    2016-01-01

    This report demonstrates a full-text-based search engine that works on any Web-based mobile application. The engine has the capability to search databases across multiple categories based on a user's queries and identify the most relevant or similar documents. The search results presented here were found using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.

  15. Content-based video indexing and searching with wavelet transformation

    Science.gov (United States)

    Stumpf, Florian; Al-Jawad, Naseer; Du, Hongbo; Jassim, Sabah

    2006-05-01

    Biometric databases form an essential tool in the fight against international terrorism, organised crime and fraud. Various government and law enforcement agencies have their own biometric databases consisting of combination of fingerprints, Iris codes, face images/videos and speech records for an increasing number of persons. In many cases personal data linked to biometric records are incomplete and/or inaccurate. Besides, biometric data in different databases for the same individual may be recorded with different personal details. Following the recent terrorist atrocities, law enforcing agencies collaborate more than before and have greater reliance on database sharing. In such an environment, reliable biometric-based identification must not only determine who you are but also who else you are. In this paper we propose a compact content-based video signature and indexing scheme that can facilitate retrieval of multiple records in face biometric databases that belong to the same person even if their associated personal data are inconsistent. We shall assess the performance of our system using a benchmark audio visual face biometric database that has multiple videos for each subject but with different identity claims. We shall demonstrate that retrieval of relatively small number of videos that are nearest, in terms of the proposed index, to any video in the database results in significant proportion of that individual biometric data.

  16. Cluster-based DBMS Management Tool with High-Availability

    Directory of Open Access Journals (Sweden)

    Jae-Woo Chang

    2005-02-01

    Management tools for monitoring and managing cluster-based DBMSs have been little studied. We therefore design and implement a cluster-based DBMS management tool with high availability that monitors the status of the nodes in a cluster system as well as the status of the DBMS instances in each node. The tool enables users to recognize a single virtual system image and provides them with the status of all the nodes and resources in the system through a graphical user interface (GUI). By using a load balancer, our management tool can increase the performance of a cluster-based DBMS and overcome the limitations of existing parallel DBMSs.

  17. Recode-2: new design, new search tools, and many more genes.

    LENUS (Irish Health Repository)

    Bekaert, Michaël

    2010-01-01

    'Recoding' is a term used to describe non-standard read-out of the genetic code, and encompasses such phenomena as programmed ribosomal frameshifting, stop codon readthrough, selenocysteine insertion and translational bypassing. Although only a small proportion of genes utilize recoding in protein synthesis, accurate annotation of 'recoded' genes lags far behind annotation of 'standard' genes. In order to address this issue, provide a service to researchers in the field, and offer training data for developers of gene-annotation software, we have gathered together known cases of recoding within the Recode database. Recode-2 is an improved and updated version of the database. It provides access to detailed information on genes known to utilize translational recoding and allows complex search queries, browsing of recoding data and enhanced visualization of annotated sequence elements. At present, the Recode-2 database stores information on approximately 1500 genes that are known to utilize recoding in their expression--an approximately three-fold increase over the previous version of the database. Recode-2 is available at http://recode.ucc.ie.

  18. HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.

    Science.gov (United States)

    O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D

    2015-04-01

    The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples.
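
    The "virtual partitioning" strategy can be illustrated without Hadoop: split both the query set and the database into chunks, treat each (query chunk, database chunk) pair as one map task, and merge hits per query in the reduce step. The sketch below uses a stub in place of the real BLAST call, and the chunk sizes are arbitrary.

```python
# Plain-Python sketch of two-sided ("virtual") partitioning for parallel
# alignment: every (query chunk, database chunk) pair becomes one map task;
# the reduce step merges hits per query. In HBlast these tasks would run
# under Hadoop MapReduce with a real aligner.
from itertools import product

def chunks(items, n):
    return [items[i:i + n] for i in range(0, len(items), n)]

queries = [f"query_{i}" for i in range(6)]
database = [f"db_seq_{i}" for i in range(9)]

tasks = list(product(chunks(queries, 2), chunks(database, 3)))
print(f"{len(tasks)} map tasks")            # 3 query chunks x 3 db chunks

def blast_stub(qs, db):                     # stand-in for the real aligner
    return [(q, db[0]) for q in qs]

# "map" phase: align each pair; "reduce" phase: merge hits per query
hits = {}
for qs, db in tasks:
    for q, hit in blast_stub(qs, db):
        hits.setdefault(q, []).append(hit)
print(hits["query_0"])
```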

  19. Search Method Based on Figurative Indexation of Folksonomic Features of Graphic Files

    Directory of Open Access Journals (Sweden)

    Oleg V. Bisikalo

    2013-11-01

    Full Text Available In this paper the search method based on usage of figurative indexation of folksonomic characteristics of graphical files is described. The method takes into account extralinguistic information, is based on using a model of figurative thinking of humans. The paper displays the creation of a method of searching image files based on their formal, including folksonomical clues.

  20. PMD2HD--a web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection.

    Science.gov (United States)

    Bohne-Lang, Andreas; Lang, Elke; Taube, Anke

    2005-06-27

    Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request. The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and therefore can process Z39.50 interactive requests. However, the programmed query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, it is necessary to implement the protocol within a tool that hides and handles Z39.50 syntax, presenting a comfortable user interface. PMD2HD is a web tool implementing Z39.50 to provide an appropriately functional and usable interface to integrate into the typical workflow that follows an initial PubMed literature search, providing users with an immediate asset to assist in the most tedious step in literature retrieval, checking for subscription holdings against a local online catalogue. PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems.

  1. New Tools to Search for Data in the European Space Agency's Planetary Science Archive

    Science.gov (United States)

    Grotheer, E.; Macfarlane, A. J.; Rios, C.; Arviset, C.; Heather, D.; Fraga, D.; Vallejo, F.; De Marchi, G.; Barbarisi, I.; Saiz, J.; Barthelemy, M.; Docasal, R.; Martinez, S.; Besse, S.; Lim, T.

    2016-12-01

    The European Space Agency's (ESA) Planetary Science Archive (PSA), which can be accessed at http://archives.esac.esa.int/psa, provides public access to the archived data of Europe's missions to our neighboring planets. These datasets are compliant with the Planetary Data System (PDS) standards. Recently, a new interface has been released, which includes upgrades to make PDS4 data available from newer missions such as ExoMars and BepiColombo. Additionally, the PSA development team has been working to ensure that the legacy PDS3 data will be more easily accessible via the new interface as well. In addition to a new querying interface, the new PSA also allows access via the EPN-TAP and PDAP protocols. This makes the PSA data sets compatible with other archive-related tools and projects, such as the Virtual European Solar and Planetary Access (VESPA) project for creating a virtual observatory.
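
    Since the new PSA speaks EPN-TAP, it can in principle be queried programmatically with generic Virtual Observatory clients. The sketch below uses the pyvo library; the endpoint URL, schema and column names are assumptions for illustration and should be checked against the PSA documentation.

```python
# Hedged sketch of programmatic access over EPN-TAP using pyvo.
import pyvo

PSA_TAP = "https://archives.esac.esa.int/psa/epn-tap/tap"  # assumed endpoint
service = pyvo.dal.TAPService(PSA_TAP)

# EPN-TAP services expose a standard `epn_core` view; the schema name and
# columns below follow the EPN-TAP convention but should be verified
# against the live service before use.
query = """
SELECT TOP 10 granule_uid, instrument_name, time_min, time_max
FROM mex.epn_core
WHERE target_name = 'Mars'
"""
results = service.search(query)
for row in results:
    print(row["granule_uid"], row["instrument_name"])
```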

  2. Performance Evaluation of Java Based Object Relational Mapping Tools

    Directory of Open Access Journals (Sweden)

    Shoaib Mahmood Bhatti

    2013-04-01

    Object persistency in the form of ORM (Object Relational Mapping) tools is a hot issue in industry, as developers use these tools during software development. This paper presents a performance evaluation of Java-based ORM tools. For this purpose, Hibernate, Ebean and TopLink have been selected as popular, open-source ORM tools. Their performance has been measured from the execution point of view. The results show that ORM tools are a good option for developers when considering system throughput, and that they can be used efficiently and effectively for mapping objects into the relationally dominated world of databases, thus creating hope for a better future for this technology.

  3. Cross entropy-based memetic algorithms: An application study over the tool switching problem

    Directory of Open Access Journals (Sweden)

    Jhon Edgar Amaya

    2013-05-01

    This paper presents a parameterized schema for building memetic algorithms based on cross-entropy (CE) methods. This novel schema is general in nature, and features multiple probability mass functions and Lamarckian learning. The applicability of the approach is assessed by considering the Tool Switching Problem, a complex combinatorial problem in the field of Flexible Manufacturing Systems. An exhaustive evaluation (including techniques ranging from local search and evolutionary algorithms to constructive methods) provides evidence of the effectiveness of CE-based memetic algorithms.
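
    To make the schema concrete, the following toy sketch applies the CE skeleton to a permutation problem in the spirit of tool switching: per-position probability mass functions are sampled, elite samples update the distributions, and a swap-based local search supplies the Lamarckian step. The cost function and all constants are stand-ins of my own, not the benchmark or exact schema used in the paper.

```python
# Sketch of a cross-entropy memetic loop for a permutation problem: a matrix
# of per-position probability mass functions is sampled, elite samples update
# the matrix, and a local search improves each sample (Lamarckian learning).
import random

N, POP, ELITE, ALPHA = 6, 60, 10, 0.7
target = list(range(N))                       # toy optimum: identity order

def cost(perm):                               # stand-in for switching cost
    return sum(p != t for p, t in zip(perm, target))

def sample(P):
    perm, free = [], set(range(N))
    for pos in range(N):
        w = [(P[pos][j] if j in free else 0.0) for j in range(N)]
        j = random.choices(range(N), weights=w)[0]
        perm.append(j); free.discard(j)
    return perm

def local_search(perm):                       # Lamarckian improvement step
    for a in range(N - 1):
        p = perm[:]; p[a], p[a + 1] = p[a + 1], p[a]
        if cost(p) < cost(perm):
            perm = p
    return perm

P = [[1.0 / N] * N for _ in range(N)]         # one PMF per position
for _ in range(40):
    pop = sorted((local_search(sample(P)) for _ in range(POP)), key=cost)[:ELITE]
    for pos in range(N):
        for j in range(N):
            freq = sum(p[pos] == j for p in pop) / ELITE
            P[pos][j] = ALPHA * freq + (1 - ALPHA) * P[pos][j]
best = min(pop, key=cost)
print(best, cost(best))
```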

  4. Data-base tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    The authors use a commercial data-base software package to create several data-base products that enhance the ability of experimental physicists to analyze data from the TMX-U experiment. This software resides on a Dec-20 computer in M-Division's user service center (USC), where data can be analyzed separately from the main acquisition computers. When these data-base tools are combined with interactive data analysis programs, physicists can perform automated (batch-style) processing or interactive data analysis on the computers in the USC or on the supercomputers of the NMFECC, in addition to the normal processing done on the acquisition system. One data-base tool provides highly reduced data for searching and correlation analysis of several diagnostic signals for a single shot or many shots. A second data-base tool provides retrieval and storage of unreduced data for detailed analysis of one or more diagnostic signals. The authors report how these data-base tools form the core of an evolving off-line data-analysis environment on the USC computers.

  5. The Search Engine for Multi-Proteoform Complexes: An Online Tool for the Identification and Stoichiometry Determination of Protein Complexes.

    Science.gov (United States)

    Skinner, Owen S; Schachner, Luis F; Kelleher, Neil L

    2016-12-08

    Recent advances in top-down mass spectrometry using native electrospray now enable the analysis of intact protein complexes with relatively small sample amounts in an untargeted mode. Here, we describe how to characterize both homo- and heteropolymeric complexes with high molecular specificity using input data produced by tandem mass spectrometry of whole protein assemblies. The tool described is a "search engine for multi-proteoform complexes," (SEMPC) and is available for free online. The output is a list of candidate multi-proteoform complexes and scoring metrics, which are used to define a distinct set of one or more unique protein subunits, their overall stoichiometry in the intact complex, and their pre- and post-translational modifications. Thus, we present an approach for the identification and characterization of intact protein complexes from native mass spectrometry data.
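
    One step such a search engine performs, determining a subunit stoichiometry consistent with the intact mass, can be illustrated with a brute-force enumeration. The subunit masses, tolerance and search bounds below are invented for the example, and the real engine also scores candidates against fragment-level evidence.

```python
# Illustrative sketch of stoichiometry determination: given candidate subunit
# masses and the measured intact-complex mass, enumerate small integer
# stoichiometries whose total matches within a tolerance.
from itertools import product

subunits = {"alpha": 14300.2, "beta": 22851.7}     # hypothetical proteoforms (Da)
intact_mass = 97155.5                              # observed complex mass (Da)
tol = 2.0                                          # mass tolerance (Da)

candidates = []
for counts in product(range(7), repeat=len(subunits)):
    total = sum(n * m for n, m in zip(counts, subunits.values()))
    if any(counts) and abs(total - intact_mass) <= tol:
        candidates.append(dict(zip(subunits, counts)))
print(candidates)   # e.g. [{'alpha': 2, 'beta': 3}] for these toy masses
```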

  6. Finding research information on the web: how to make the most of Google and other free search tools.

    Science.gov (United States)

    Blakeman, Karen

    2013-01-01

    The Internet and the World Wide Web has had a major impact on the accessibility of research information. The move towards open access and development of institutional repositories has resulted in increasing amounts of information being made available free of charge. Many of these resources are not included in conventional subscription databases and Google is not always the best way to ensure that one is picking up all relevant material on a topic. This article will look at how Google's search engine works, how to use Google more effectively for identifying research information, alternatives to Google and will review some of the specialist tools that have evolved to cope with the diverse forms of information that now exist in electronic form.

  7. ART-Ada: An Ada-based expert system tool

    Science.gov (United States)

    Lee, S. Daniel; Allen, Bradley P.

    1991-01-01

    The Department of Defense mandate to standardize on Ada as the language for software systems development has resulted in increased interest in making expert systems technology readily available in Ada environments. NASA's Space Station Freedom is an example of the large Ada software development projects that will require expert systems in the 1990's. Another large scale application that can benefit from Ada-based expert system tool technology is the Pilot's Associate (PA) expert system project for military combat aircraft. Automated Reasoning Tool (ART) Ada, an Ada-based expert system tool, is described. ART-Ada allows applications of a C-based expert system tool called ART-IM to be deployed in various Ada environments. ART-Ada is being used to implement several prototype expert systems for NASA's Space Station Freedom Program and the U.S. Air Force.

  8. Web-based tools from AHRQ's National Resource Center.

    Science.gov (United States)

    Cusack, Caitlin M; Shah, Sapna

    2008-11-06

    The Agency for Healthcare Research and Quality (AHRQ) has made an investment of over $216 million in research around health information technology (health IT). As part of their investment, AHRQ has developed the National Resource Center for Health IT (NRC) which includes a public domain Web site. New content for the web site, such as white papers, toolkits, lessons from the health IT portfolio and web-based tools, is developed as needs are identified. Among the tools developed by the NRC are the Compendium of Surveys and the Clinical Decision Support (CDS) Resources. The Compendium of Surveys is a searchable repository of health IT evaluation surveys made available for public use. The CDS Resources contains content which may be used to develop clinical decision support tools, such as rules, reminders and templates. This live demonstration will show the access, use, and content of both these freely available web-based tools.

  9. Model based methods and tools for process systems engineering

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    Process systems engineering (PSE) provides means to solve a wide range of problems in a systematic and efficient manner. This presentation will give a perspective on model based methods and tools needed to solve a wide range of problems in product-process synthesis-design. These methods and tools need to be integrated with work-flows and data-flows for specific product-process synthesis-design problems within a computer-aided framework. The framework therefore should be able to manage knowledge-data, models and the associated methods and tools needed by specific synthesis-design work... The use of model based methods and tools within a computer aided framework for product-process synthesis-design will be highlighted.

  10. SearchSmallRNA: a graphical interface tool for the assemblage of viral genomes using small RNA libraries data.

    Science.gov (United States)

    de Andrade, Roberto R S; Vaslin, Maite F S

    2014-03-07

    Next-generation parallel sequencing (NGS) allows the identification of viral pathogens by sequencing the small RNAs of infected hosts. Thus, viral genomes may be assembled from host immune response products without prior virus enrichment, amplification or purification. However, mapping the vast information obtained presents a bioinformatics challenge. In order to bypass the need for command-line skills and basic bioinformatics knowledge, we developed mapping software with a graphical interface for the assembly of viral genomes from small RNA datasets obtained by NGS. SearchSmallRNA was developed in Java version 7 using the NetBeans IDE 7.1 software. The program also allows analysis of the viral small interfering RNA (vsRNA) profile, providing an overview of the size distribution and other features of the vsRNAs produced in infected cells. The program performs comparisons between each sequenced read present in a library and a chosen reference genome. Reads showing Hamming distances smaller than or equal to an allowed number of mismatches are selected as positives and used in the assembly of a long nucleotide genome sequence. In order to validate the software, distinct analyses using NGS datasets obtained from HIV and two plant viruses were used to reconstruct whole viral genomes. The SearchSmallRNA program was able to reconstruct viral genomes from small RNA NGS datasets with a high degree of reliability, making it a valuable tool for virus sequencing and discovery. It is accessible and free to all research communities and has the advantage of an easy-to-use graphical interface. SearchSmallRNA was written in Java and is freely available at http://www.microbiologia.ufrj.br/ssrna/.
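
    The core mapping rule, accepting a read at a reference position when the Hamming distance is within the allowed mismatches, is easy to sketch. The reference and reads below are toy stand-ins for real NGS data, and the consensus step is a simplified illustration of the assembly idea.

```python
# Minimal sketch of Hamming-distance read mapping: every read is slid along
# the reference, positions within the allowed mismatches count as hits, and
# hits vote on a consensus base per reference position.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def map_reads(reference, reads, max_mm=1):
    cover = [dict() for _ in reference]          # per-position base votes
    for read in reads:
        for i in range(len(reference) - len(read) + 1):
            if hamming(read, reference[i:i + len(read)]) <= max_mm:
                for j, base in enumerate(read):
                    votes = cover[i + j]
                    votes[base] = votes.get(base, 0) + 1
    return "".join(max(v, key=v.get) if v else "N" for v in cover)

ref = "ACGTACGTTAGCCGTA"                         # stand-in viral reference
reads = ["ACGTA", "CGTTA", "TAGCC", "GCCGTA", "CGTAC"]
print(map_reads(ref, reads))                     # consensus; N = no coverage
```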

  11. MATT: Multi Agents Testing Tool Based Nets within Nets

    Directory of Open Access Journals (Sweden)

    Sara Kerraoui

    2016-12-01

    As part of this effort, we propose a model-based testing approach for multi-agent systems built on such a model, the Reference net, and develop a tool that aims to provide a uniform and automated approach. The feasibility and the advantages of the proposed approach are shown through a short case study.

  12. Enhancing Image Retrieval System Using Content Based Search ...

    African Journals Online (AJOL)

    The output shows greater efficiency in retrieval because, instead of performing the search on the entire image database, the image category option directs the retrieval engine to the specified category. Also, there is provision to update or modify the different image categories in the image database as needs arise. Keywords: ...

  13. A multi-variate discrimination technique based on range-searching

    International Nuclear Information System (INIS)

    Carli, T.; Koblitz, B.

    2003-01-01

    We present a fast and transparent multi-variate event classification technique, called PDE-RS, which is based on sampling the signal and background densities in a multi-dimensional phase space using range-searching. The employed algorithm is presented in detail and its behaviour is studied with simple toy examples representing basic patterns of problems often encountered in High Energy Physics data analyses. In addition, an example relevant for the search for instanton-induced processes in deep-inelastic scattering at HERA is discussed. For all studied examples, the new method performs as well as artificial Neural Networks and furthermore has the advantage of needing less computation time. This makes it possible to carefully select the best combination of observables which optimally separate the signal and background and for which the simulations describe the data best. Moreover, the systematic and statistical uncertainties can be easily evaluated. The method is therefore a powerful tool to find a small number of signal events in the large data samples expected at future particle colliders.
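
    The discriminant itself is straightforward to prototype. In the sketch below, a k-d tree performs the range search (with a box neighbourhood via the infinity norm) over toy Gaussian samples, and the classifier output is the local signal fraction D = n_s / (n_s + n_b); the dimensions, sample sizes and box size are illustrative choices, not values from the paper.

```python
# Compact sketch of a range-searching density discriminant: count signal and
# background training events inside a box around each test point and return
# the signal fraction as the classifier output.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
signal = rng.normal(+1.0, 1.0, size=(5000, 3))    # toy 3-dimensional samples
background = rng.normal(-1.0, 1.0, size=(5000, 3))
tree_s, tree_b = cKDTree(signal), cKDTree(background)

def discriminant(points, half_width=0.5):
    """Signal fraction among training events in a box around each point."""
    n_s = np.array([len(ix) for ix in
                    tree_s.query_ball_point(points, half_width, p=np.inf)])
    n_b = np.array([len(ix) for ix in
                    tree_b.query_ball_point(points, half_width, p=np.inf)])
    total = n_s + n_b
    return np.where(total > 0, n_s / np.maximum(total, 1), 0.5)

test = np.array([[1.0, 1.0, 1.0], [-1.0, -1.0, -1.0], [0.0, 0.0, 0.0]])
print(discriminant(test))   # near 1 for signal-like, near 0 for background-like
```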

  14. Assessing teamwork performance in obstetrics : a systematic search and review of validated tools

    NARCIS (Netherlands)

    Fransen, Annemarie F.; de Boer, Liza; Kienhorst, Dieneke; Truijens, Sophie E.; van Runnard Heimel, Pieter J.; Oei, S. Guid

    2017-01-01

    Teamwork performance is an essential component for the clinical efficiency of multi-professional teams in obstetric care. As patient safety is related to teamwork performance, it has become an important learning goal in simulation-based education. In order to improve teamwork performance, reliable

  15. Development and Evaluation of Thesauri-Based Bibliographic Biomedical Search Engine

    Science.gov (United States)

    Alghoson, Abdullah

    2017-01-01

    Due to the large volume and exponential growth of biomedical documents (e.g., books, journal articles), it has become increasingly challenging for biomedical search engines to retrieve relevant documents based on users' search queries. Part of the challenge is the matching mechanism of free-text indexing that performs matching based on…

  16. Teaching AI Search Algorithms in a Web-Based Educational System

    Science.gov (United States)

    Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis

    2013-01-01

    In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…

  17. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  18. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation based Monte Carlo criticality search method is proposed. • The method obtains accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performance of this method. • The relationship between the results' accuracy and the perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may give unstable results because of the uncertainties of criticality results. In this paper, a new perturbation based Monte Carlo criticality search method is proposed and discussed. This method needs only one individual criticality calculation with perturbation tallies to estimate the k-eff response function, using the initial k-eff and differential coefficient results, and solves polynomial equations to obtain the criticality search results. The new perturbation based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality searches in density, enrichment and concentration are carried out. Results show that this method is quite promising in accuracy and efficiency, and has advantages compared with other criticality search methods.
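
    The abstract's key step is that a single run with perturbation tallies yields a Taylor expansion of k-eff in the search parameter, so the criticality search reduces to solving a polynomial. A sketch of that final step follows, with entirely hypothetical coefficients standing in for the tally output.

    ```python
    import numpy as np

    # hypothetical output of one perturbation run: nominal k_eff at boron
    # concentration x0, plus first- and second-order differential coefficients
    x0, k0 = 1000.0, 1.02000        # ppm, nominal k_eff
    dk, d2k = -8.0e-5, 1.0e-8       # dk/dx (per ppm), d2k/dx2 (per ppm^2)

    # k_eff(x) ~ k0 + dk*(x - x0) + 0.5*d2k*(x - x0)^2 ; solve k_eff(x) = 1
    coeffs = [0.5 * d2k, dk, k0 - 1.0]            # in powers of u = x - x0
    roots = np.roots(coeffs)
    u = min((r.real for r in roots if abs(r.imag) < 1e-9), key=abs)
    print(f"estimated critical concentration: {x0 + u:.1f} ppm")
    ```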

  19. Reduction of inequalities in health: assessing evidence-based tools

    Directory of Open Access Journals (Sweden)

    Shea Beverley

    2006-09-01

    Full Text Available Abstract Background The reduction of health inequalities is a focus of many national and international health organisations. The need for pragmatic evidence-based approaches has led to the development of a number of evidence-based equity initiatives. This paper describes a new program that focuses upon evidence-based tools, which are useful for policy initiatives that reduce inequities. Methods This paper is based on a presentation that was given at the "Regional Consultation on Policy Tools: Equity in Population Health Reports," held in Toronto, Canada in June 2002. Results Five assessment tools were presented. 1. A database of systematic reviews on the effects of educational, legal, social, and health interventions to reduce unfair inequalities is being established through the Cochrane and Campbell Collaborations. 2. Decision aids and shared decision making can be facilitated in disadvantaged groups by 'health coaches' who help people become better decision makers, negotiators, and navigators of the health system; a pilot study in Chile has provided proof of this concept. 3. The CIET Cycle: combining adapted cluster survey techniques with qualitative methods, CIET's population-based applications support evidence-based decision making at local and national levels. The CIET map generates maps directly from survey or routine institutional data, to be used as evidence-based decision aids. Complex data can be displayed attractively, providing an important tool for studying and comparing health indicators among and between different populations. 4. The Ottawa Equity Gauge is applying the Global Equity Gauge Alliance framework to an industrialised country setting. 5. The Needs-Based Health Assessment Toolkit, established to assemble information on which clinical and health policy decisions can be based, is being expanded to ensure a focus on distribution as well as average health indicators. Conclusion Evidence-based planning tools have much to offer the

  20. A Study on Control System Design Based on ARM Sea Target Search System

    Directory of Open Access Journals (Sweden)

    Lin Xinwei

    2015-01-01

    Full Text Available The infrared detector is used for sea target search; it can assist humans in searching for suspicious objects at night and under poor visibility conditions, improving search efficiency. This paper applies interrupt and stack technology to solve the problem of data loss that may be caused by one-to-many multi-byte protocol communication. Meanwhile, the hardware and software design of the system is implemented based on an industrial-grade ARM control chip and the uC/OS-II embedded operating system. The control system in the sea target search system is the information exchange and control center of the whole system; it solves the problem of controlling the shooting angle of the infrared detector in the process of target search. After testing, the control system operates stably and reliably, and realizes the rotation and control functions of the pan/tilt platform during automatic search, manual search and tracking.

  1. FDRAnalysis: a tool for the integrated analysis of tandem mass spectrometry identification results from multiple search engines.

    Science.gov (United States)

    Wedge, David C; Krishna, Ritesh; Blackhurst, Paul; Siepen, Jennifer A; Jones, Andrew R; Hubbard, Simon J

    2011-04-01

    Confident identification of peptides via tandem mass spectrometry underpins modern high-throughput proteomics. This has motivated considerable recent interest in the postprocessing of search engine results to increase confidence and calculate robust statistical measures, for example through the use of decoy databases to calculate false discovery rates (FDR). FDR-based analyses allow for multiple testing and can assign a single confidence value for both sets and individual peptide spectrum matches (PSMs). We recently developed an algorithm for combining the results from multiple search engines, integrating FDRs for sets of PSMs made by different search engine combinations. Here we describe a web server and a downloadable application that make this routinely available to the proteomics community. The web server offers a range of outputs, including informative graphics to assess the confidence of the PSMs and any potential biases. The underlying pipeline also provides a basic protein inference step, integrating PSMs into protein ambiguity groups where peptides can be matched to more than one protein. Importantly, we have also implemented full support for the mzIdentML data standard, recently released by the Proteomics Standards Initiative, providing users with the ability to convert native formats to mzIdentML files, which are available to download.
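
    The decoy-database FDR idea underlying the pipeline is standard: PSMs are ranked by score, and at any threshold the decoy count estimates the number of false target matches. A minimal sketch of computing q-values this way is below; it illustrates the general target-decoy calculation, not the tool's specific algorithm for combining multiple search engines.

    ```python
    import numpy as np

    def qvalues(scores, is_decoy):
        """q-values from a target-decoy search: at each score threshold
        FDR ~ (#decoys)/(#targets); take the running minimum from the tail."""
        order = np.argsort(scores)[::-1]             # best score first
        decoy = np.asarray(is_decoy, dtype=bool)[order]
        fdr = np.cumsum(decoy) / np.maximum(np.cumsum(~decoy), 1)
        q = np.minimum.accumulate(fdr[::-1])[::-1]   # enforce monotonicity
        out = np.empty_like(q)
        out[order] = q                               # back to input order
        return out

    scores = [9.1, 8.7, 8.2, 7.9, 7.5, 7.1]
    is_decoy = [False, False, True, False, False, True]
    print(qvalues(scores, is_decoy).round(3))   # [0. 0. 0.25 0.25 0.25 0.5]
    ```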

  2. Image based method for aberration measurement of lithographic tools

    Science.gov (United States)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information on the lens aberration of lithographic tools is important, as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to the advantages of lower cost and easier implementation, image based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not lend itself to a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
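
    Since the abstract reduces aberration retrieval to linear matrix relations between the two measured images and the Zernike coefficients, the final step is a linear least-squares solve. The sketch below assumes such a system matrix A and stacked intensity vector b are already available; A, b and the dimensions here are synthetic placeholders, not the paper's actual formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels, n_zernike = 2000, 9
    A = rng.normal(size=(n_pixels, n_zernike))        # stand-in system matrix
    z_true = rng.normal(scale=0.05, size=n_zernike)   # "unknown" aberrations
    b = A @ z_true + rng.normal(scale=1e-3, size=n_pixels)  # noisy measurement

    z_est, *_ = np.linalg.lstsq(A, b, rcond=None)     # non-iterative solution
    print(np.abs(z_est - z_true).max())               # small residual error
    ```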

  3. HASSP and HEAVY: Tools for automated heavy atom searches and refinement

    International Nuclear Information System (INIS)

    Terwilliger, T.T.

    1994-06-01

    In this tutorial, a simple example using model data for one derivative with anomalous information is used to demonstrate the use of HASSP and HEAVY in heavy atom determination and refinement. The data used here are actually based on model MAD data that has been converted to MIR format using MADMRG, but the treatment is identical to that for any other SIR+anomalous data. The data and most of the programs discussed here can be obtained by e-mail from terwil@prov2.lanl.gov, along with VAX-specific command files to run the data through.

  4. IBRI-CASONTO: Ontology-based semantic search engine

    OpenAIRE

    Awny Sayed; Amal Al Muqrishi

    2017-01-01

    The vast availability of information, added at a very fast pace to data repositories, creates a challenge in extracting correct and accurate information, and has increased the competition among developers to gain access to technology that seeks to understand the researcher's intent and the contextual meaning of terms. The development of Arabic semantic search systems is still in its infancy, and the reason could be traced back to the complexity of Arabic ...

  5. Semiconductor-based experiments for neutrinoless double beta decay search

    International Nuclear Information System (INIS)

    Barnabé Heider, Marik

    2012-01-01

    Three experiments are employing semiconductor detectors in the search for neutrinoless double beta (0νββ) decay: COBRA, Majorana and GERDA. COBRA is studying the prospects of using CdZnTe detectors in terms of achievable energy resolution and background suppression. These detectors contain several ββ emitters, and the most promising for 0νββ-decay search is ¹¹⁶Cd. Majorana and GERDA will use isotopically enriched high-purity Ge detectors to search for 0νββ decay of ⁷⁶Ge. Their aim is to achieve a background ⩽10⁻³ counts/(kg⋅y⋅keV) at the Q-value, an improvement compared to the present state of the art. Majorana will operate Ge detectors in electroformed-Cu vacuum cryostats. A first cryostat housing a natural-Ge detector array is currently under preparation. In contrast, GERDA is operating bare Ge detectors submerged in liquid argon. The construction of the GERDA experiment is completed and a commissioning run started in June 2010. A string of natural-Ge detectors is operated to test the complete experimental setup and to determine the background before submerging the detectors enriched in ⁷⁶Ge. An overview and a comparison of these three experiments are presented together with the latest results and developments.

  6. Content Based Retrieval Database Management System with Support for Similarity Searching and Query Refinement

    National Research Council Canada - National Science Library

    Ortega-Binderberger, Michael

    2002-01-01

    ... as a critical area of research. This thesis explores how to enhance database systems with content based search over arbitrary abstract data types in a similarity based framework with query refinement...

  7. Cost Based Value Stream Mapping as a Sustainable Construction Tool for Underground Pipeline Construction Projects

    Directory of Open Access Journals (Sweden)

    Murat Gunduz

    2017-11-01

    Full Text Available This paper deals with the application of Value Stream Mapping (VSM) as a sustainable construction tool on a real construction project involving the installation of underground pipelines. VSM was adapted to reduce the high percentage of non-value-added activities and time wastes during each construction stage, and the paper searched for an effective way to account for cost in the studied underground pipeline construction. This paper is unique in that it adopts a cost-based implementation of VSM to improve productivity in underground pipeline projects. The data were observed and collected on site during construction, indicating the cycle time and the value-added and non-value-added portions of each construction stage. The current state was built based on these details; this was an eye-opening exercise and served as a process management tool and trigger for improvement. After the current-state assessment, a future state was developed with the Value Stream Mapping tool, balancing the resources using a Line of Balance (LOB) technique. Moreover, a sustainable cost estimation model was developed for the current state and future state to calculate the cost of underground pipeline construction. The result shows a cost reduction of 20.8% between the current and future states. This reflects the importance of cost-based Value Stream Mapping in construction as a sustainable measurement tool. This new tool could be utilized in the construction industry to add sustainability and effective cost management.

  8. A Critical Study of Effect of Web-Based Software Tools in Finding and Sharing Digital Resources--A Literature Review

    Science.gov (United States)

    Baig, Muntajeeb Ali

    2010-01-01

    The purpose of this paper is to review the effect of web-based software tools for finding and sharing digital resources. A positive correlation between learning and studying through online tools has been found in recent research. In the traditional classroom, resource searches are limited to the library and sharing of resources is limited to the…

  9. Reverse Asteroids: Searching for an Effective Tool to Combat Asteroid Belt Misconceptions

    Science.gov (United States)

    Summers, F.; Eisenhamer, B.

    2014-12-01

    The public 'knows' that asteroid belts are densely packed and dangerous for spaceships to cross. Visuals from "Star Wars" to, unfortunately, the recent "Cosmos" TV series have firmly established this astronomical misconception. However, even scientifically correct graphics, such as the Minor Planet Center's plot of the inner solar system, reinforce that view. Each pixel in the image is more than a million kilometers in width, making an accurate representation of the object density impossible. To address this widespread misconception, we are investigating an educational exercise built around a computer interactive that we call "Reverse Asteroids". In the arcade classic video game, the asteroids came to the player's spaceship; for our reverse implementation, we consider an inquiry-based activity in which the spaceship must go hunting for the asteroids, using a database of real objects in our solar system. Both 3D data visualization and basic statistical analysis play crucial roles in bringing out the true space density within the asteroid belt, and perhaps a reconciliation between imagination and reality. We also emphasize that a partnership of scientists and educators is fundamental to the success of such projects.

  10. [Formula: see text]: Oblivious similarity based searching for encrypted data outsourced to an untrusted domain.

    Science.gov (United States)

    Pervez, Zeeshan; Ahmad, Mahmood; Khattak, Asad Masood; Ramzan, Naeem; Khan, Wajahat Ali

    2017-01-01

    Public cloud storage services are becoming prevalent, and myriad data sharing, archiving and collaborative services have emerged which harness the pay-as-you-go business model of the public cloud. To ensure privacy and confidentiality, encrypted data is often outsourced to such services, which further complicates the process of accessing relevant data via search queries. Search-over-encrypted-data schemes solve this problem by exploiting cryptographic primitives and secure indexing to identify outsourced data that satisfy the search criteria. Almost all of these schemes rely on exact matching between the encrypted data and the search criteria. The few schemes which extend the notion of exact matching to similarity-based search lack realism, as they rely on trusted third parties or incur increased storage and computational complexity. In this paper we propose Oblivious Similarity based Search ([Formula: see text]) for encrypted data. It enables authorized users to model their own encrypted search queries, which are resilient to typographical errors. Unlike conventional methodologies, [Formula: see text] ranks the search results by using a similarity measure, offering a better search experience than exact matching. It utilizes an encrypted bloom filter and probabilistic homomorphic encryption to enable authorized users to access relevant data without revealing the results of the search query evaluation process to the untrusted cloud service provider. Encrypted bloom filter based search enables [Formula: see text] to reduce the search space to potentially relevant encrypted data, avoiding unnecessary computation on the public cloud. The efficacy of [Formula: see text] is evaluated on Google App Engine for various bloom filter lengths on different cloud configurations.
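
    A bloom filter answers keyword-membership queries probabilistically, which is what makes it attractive as a compact searchable index. The toy sketch below shows a plaintext HMAC-keyed bloom filter; the scheme in the paper additionally encrypts the filter and uses probabilistic homomorphic encryption, which this sketch does not attempt.

    ```python
    import hashlib, hmac

    class BloomFilter:
        """Toy keyword bloom filter keyed with an HMAC secret."""
        def __init__(self, n_bits=1024, n_hashes=3, key=b"shared-secret"):
            self.bits = bytearray(n_bits)
            self.n_hashes, self.key = n_hashes, key

        def _positions(self, word):
            for i in range(self.n_hashes):
                digest = hmac.new(self.key, f"{i}:{word}".encode(),
                                  hashlib.sha256).digest()
                yield int.from_bytes(digest[:4], "big") % len(self.bits)

        def add(self, word):
            for p in self._positions(word):
                self.bits[p] = 1

        def might_contain(self, word):
            return all(self.bits[p] for p in self._positions(word))

    bf = BloomFilter()
    for w in ["genome", "cloud", "privacy"]:
        bf.add(w)
    print(bf.might_contain("cloud"), bf.might_contain("grover"))  # True False
    ```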

  11. Towards ontology based search and knowledgesharing using domain ontologies

    DEFF Research Database (Denmark)

    Zambach, Sine

    verbs for relations in the ontology modeling. For this work we use frequency lists from a biomedical text corpus of different genres, as well as a study of the relations used in other biomedical text mining tools. In addition, we discuss how these relations can be used in a broader perspective.

  12. Risk based decision tool for space exploration missions

    Science.gov (United States)

    Meshkat, Leila; Cornford, Steve; Moran, Terrence

    2003-01-01

    This paper presents an approach and corresponding tool to assess and analyze the risks involved in a mission during the pre-phase A design process. This approach is based on creating a risk template for each subsystem expert involved in the mission design process and defining appropriate interactions between the templates.

  13. An interactive, web-based tool for genealogical entity resolution

    NARCIS (Netherlands)

    Efremova, I.; Ranjbar-Sahraei, B.; Oliehoek, F.A.; Calders, T.G.K.; Tuyls, K.P.

    2013-01-01

    We demonstrate an interactive, web-based tool which helps historians to do Genealogical Entity Resolution. This work has two main goals. First, it uses Machine Learning (ML) algorithms to assist humanities researchers in performing Genealogical Entity Resolution. Second, it facilitates the generation

  14. Graphics-based intelligent search and abstracting using Data Modeling

    Science.gov (United States)

    Jaenisch, Holger M.; Handley, James W.; Case, Carl T.; Songy, Claude G.

    2002-11-01

    This paper presents an autonomous text- and context-mining algorithm that converts text documents into point clouds for visual search cues. This algorithm is applied to the task of data-mining a scriptural database comprising the Old and New Testaments from the Bible and the Book of Mormon, Doctrine and Covenants, and the Pearl of Great Price. Results are generated which graphically show the scripture that represents the average concept of the database and the mining of the documents down to the verse level.

  15. IBES: A Tool for Creating Instructions Based on Event Segmentation

    Directory of Open Access Journals (Sweden)

    Katharina Mura

    2013-12-01

    Full Text Available Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, twenty participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, ten and twelve participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool.

  16. IBES: a tool for creating instructions based on event segmentation.

    Science.gov (United States)

    Mura, Katharina; Petersen, Nils; Huff, Markus; Ghose, Tandra

    2013-12-26

    Receiving informative, well-structured, and well-designed instructions supports performance and memory in assembly tasks. We describe IBES, a tool with which users can quickly and easily create multimedia, step-by-step instructions by segmenting a video of a task into segments. In a validation study we demonstrate that the step-by-step structure of the visual instructions created by the tool corresponds to the natural event boundaries, which are assessed by event segmentation and are known to play an important role in memory processes. In one part of the study, 20 participants created instructions based on videos of two different scenarios by using the proposed tool. In the other part of the study, 10 and 12 participants respectively segmented videos of the same scenarios yielding event boundaries for coarse and fine events. We found that the visual steps chosen by the participants for creating the instruction manual had corresponding events in the event segmentation. The number of instructional steps was a compromise between the number of fine and coarse events. Our interpretation of results is that the tool picks up on natural human event perception processes of segmenting an ongoing activity into events and enables the convenient transfer into meaningful multimedia instructions for assembly tasks. We discuss the practical application of IBES, for example, creating manuals for differing expertise levels, and give suggestions for research on user-oriented instructional design based on this tool.

  17. Process-Based Quality (PBQ) Tools Development

    International Nuclear Information System (INIS)

    Cummins, J.L.

    2001-01-01

    The objective of this effort is to benchmark the development of process-based quality tools for application in CAD (computer-aided design) model-based applications. The processes of interest are design, manufacturing, and quality process applications. A study was commissioned addressing the impact, current technologies, and known problem areas in the application of 3D MCAD (3-dimensional mechanical computer-aided design) models and model integrity on downstream manufacturing and quality processes. The downstream manufacturing and product quality processes are profoundly influenced by, and dependent on, model quality and modeling process integrity. The goal is to illustrate and expedite the modeling and downstream model-based technologies, for available or conceptual methods and tools, to achieve maximum economic advantage and advance process-based quality concepts.

  18. Nucleonica. Web-based software tools for simulation and analysis

    International Nuclear Information System (INIS)

    Magill, J.; Dreher, R.; Soti, Z.

    2014-01-01

    The authors present a description of the Nucleonica web-based portal for simulation and analysis for a wide range of commonly encountered nuclear science applications. Advantages of a web-based approach include availability wherever there is internet access, an intuitive user-friendly interface, remote access to high-power computing resources, and continual maintenance, improvement, and addition of tools and techniques common to the nuclear science industry. A description of the nuclear data resources and some applications is given.

  19. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    Science.gov (United States)

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  20. The role of space and time in object-based visual search

    NARCIS (Netherlands)

    Schreij, D.B.B.; Olivers, C.N.L.

    2013-01-01

    Recently we have provided evidence that observers more readily select a target from a visual search display if the motion trajectory of the display object suggests that the observer has dealt with it before. Here we test the prediction that this object-based memory effect on search breaks down if

  1. Effect of Reading Ability and Internet Experience on Keyword-Based Image Search

    Science.gov (United States)

    Lei, Pei-Lan; Lin, Sunny S. J.; Sun, Chuen-Tsai

    2013-01-01

    Image searches are now crucial for obtaining information, constructing knowledge, and building successful educational outcomes. We investigated how reading ability and Internet experience influence keyword-based image search behaviors and performance. We categorized 58 junior-high-school students into four groups of high/low reading ability and…

  2. Keyword-based Ciphertext Search Algorithm under Cloud Storage

    Directory of Open Access Journals (Sweden)

    Ren Xunyi

    2016-01-01

    Full Text Available With the development of network storage services, cloud storage has the advantages of high scalability, low cost, unrestricted access and easy management. These advantages make more and more small or medium enterprises choose to outsource large quantities of data to a third party. This frees many small and medium enterprises from the costs of construction and maintenance, so it has broad market prospects. But at present many cloud storage service providers cannot protect data security, which results in leakage of user data, so many users have to fall back on traditional storage methods. This has become one of the important factors hindering the development of cloud storage. In this article, a keyword index is established by extracting keywords from the ciphertext data. After that, the encrypted data and the encrypted index are uploaded to the cloud server together. Users obtain the related ciphertext by searching the encrypted index, so the scheme responds to the data leakage problem.
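
    The index-then-search flow described above can be illustrated with keyed keyword tokens: the client derives a deterministic token per keyword, so the server can match queries against the index without ever seeing plaintext keywords. This is a generic searchable-encryption sketch under assumed names, not the specific construction of the paper.

    ```python
    import hashlib, hmac

    KEY = b"user-search-key"      # held by authorized users, not by the cloud

    def token(keyword):
        """Deterministic keyword token; the server only ever sees tokens."""
        return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

    # client side: extract keywords and index encrypted documents by token
    index = {}                                   # token -> list of document ids
    docs = {"doc1": ["cloud", "storage"], "doc2": ["cloud", "security"]}
    for doc_id, keywords in docs.items():
        for kw in keywords:
            index.setdefault(token(kw), []).append(doc_id)

    # search: the client sends token("cloud"); the server answers from the index
    print(index.get(token("cloud"), []))         # ['doc1', 'doc2']
    ```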

  3. Multi Sector Planning Tools for Trajectory-Based Operations

    Science.gov (United States)

    Prevot, Thomas; Mainini, Matthew; Brasil, Connie

    2010-01-01

    This paper discusses a suite of multi sector planning tools for trajectory-based operations that were developed and evaluated in the Airspace Operations Laboratory (AOL) at the NASA Ames Research Center. The toolset included tools for traffic load and complexity assessment as well as trajectory planning and coordination. The situation assessment tools included an integrated suite of interactive traffic displays, load tables, load graphs, and dynamic aircraft filters. The planning toolset allowed for single and multi aircraft trajectory planning and data communication-based coordination of trajectories between operators. Also newly introduced was a real-time computation of sector complexity into the toolset that operators could use in lieu of aircraft count to better estimate and manage sector workload, especially in situations with convective weather. The tools were used during a joint NASA/FAA multi sector planner simulation in the AOL in 2009 that had multiple objectives with the assessment of the effectiveness of the tools being one of them. Current air traffic control operators who were experienced as area supervisors and traffic management coordinators used the tools throughout the simulation and provided their usefulness and usability ratings in post simulation questionnaires. This paper presents these subjective assessments as well as the actual usage data that was collected during the simulation. The toolset was rated very useful and usable overall. Many elements received high scores by the operators and were used frequently and successfully. Other functions were not used at all, but various requests for new functions and capabilities were received that could be added to the toolset.

  4. Model-based setup assistant for progressive tools

    Science.gov (United States)

    Springer, Robert; Gräler, Manuel; Homberg, Werner; Henke, Christian; Trächtler, Ansgar

    2018-05-01

    In the field of production systems, globalization and technological progress lead to increasing requirements regarding part quality, delivery time and costs. Hence, today's production is challenged much more than a few years ago: it has to be very flexible and produce small batch sizes economically to satisfy consumers' demands and avoid unnecessary stock. Furthermore, a trend towards increasing functional integration continues to lead to an ongoing miniaturization of sheet metal components. In the electric connectivity industry, for example, miniaturized connectors are manufactured by progressive tools, which are usually used for very large batches. These tools are installed in mechanical presses and then set up by a technician, who has to manually adjust a wide range of punch-bending operations. Disturbances like material thickness, temperatures, lubrication or tool wear complicate the setup procedure. In view of the increasing demand for production flexibility, this time-consuming process has to be carried out more and more often. In this paper, a new approach for a model-based setup assistant is proposed as a solution, exemplarily applied in combination with a progressive tool. First, progressive tools, and more specifically their setup process, are described and the challenges are pointed out. As a result, a systematic process to set up the machines is introduced. Following this, the process is investigated with an FE analysis regarding the effects of the disturbances. In the next step, design of experiments is used to systematically develop a regression model of the system's behaviour. This model is integrated within an optimization in order to calculate optimal machine parameters and the necessary adjustment of the progressive tool in response to the disturbances. Finally, the assistant is tested in a production environment and the results are discussed.
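
    The regression-plus-optimization step can be pictured with a one-variable toy: fit a second-order model of a setup quantity over a design-of-experiments grid, then evaluate it at the measured disturbance to propose a correction. The data and variable names below are invented for illustration and do not come from the paper.

    ```python
    import numpy as np

    # hypothetical DoE data: incoming material thickness x (mm) versus the
    # punch adjustment y (µm) that produced good parts for one bending stage
    x = np.array([0.28, 0.30, 0.32, 0.34, 0.36])
    y = np.array([41.0, 30.5, 24.0, 21.5, 23.0])

    coeffs = np.polyfit(x, y, deg=2)        # second-order regression model y(x)

    x_meas = 0.33                           # measured disturbance on the press
    y_suggested = np.polyval(coeffs, x_meas)
    print(f"suggested punch adjustment: {y_suggested:.1f} µm")
    ```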

  5. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search for rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that from the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.

  6. Patient-completed or symptom-based screening tools for endometriosis: a scoping review.

    Science.gov (United States)

    Surrey, Eric; Carter, Cathryn M; Soliman, Ahmed M; Khan, Shahnaz; DiBenedetti, Dana B; Snabes, Michael C

    2017-08-01

    The objective of this review was to evaluate existing patient-completed screening questionnaires and/or symptom-based predictive models with respect to their potential for use as screening tools for endometriosis in adult women. Validated instruments were of particular interest. We conducted structured searches of PubMed and targeted searches of the gray literature to identify studies reporting on screening instruments used in endometriosis. Studies were screened according to inclusion and exclusion criteria that followed the PICOS (population, intervention, comparison, outcomes, study design) framework. A total of 16 studies were identified, of which 10 described measures for endometriosis in general, 2 described measures for endometriosis at specific sites, and 4 described measures for deep-infiltrating endometriosis. Only 1 study evaluated a questionnaire that was solely patient-completed. Most measures required physician, imaging, or laboratory assessments in addition to patient-completed questionnaires, and several measures relied on complex scoring. Validation for use as a screening tool in adult women with potential endometriosis was lacking in all studies, as most studies focused on diagnosis versus screening. This literature review did not identify any fully validated, symptom-based, patient-reported questionnaires for endometriosis screening in adult women.

  7. Electronic tools for health information exchange: an evidence-based analysis.

    Science.gov (United States)

    2013-01-01

    As patients experience transitions in care, there is a need to share information between care providers in an accurate and timely manner. With the push towards electronic medical records and other electronic tools (eTools) (and away from paper-based health records) for health information exchange, there remains uncertainty around the impact of eTools as a form of communication. To examine the impact of eTools for health information exchange in the context of care coordination for individuals with chronic disease in the community. A literature search was performed on April 26, 2012, using OVID MEDLINE, OVID MEDLINE In-Process and Other Non-Indexed Citations, OVID EMBASE, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), the Wiley Cochrane Library, and the Centre for Reviews and Dissemination database, for studies published until April 26, 2012 (no start date limit was applied). A systematic literature search was conducted, and meta-analysis conducted where appropriate. Outcomes of interest fell into 4 categories: health services utilization, disease-specific clinical outcomes, process-of-care indicators, and measures of efficiency. The quality of the evidence was assessed individually for each outcome. Expert panels were assembled for stakeholder engagement and contextualization. Eleven articles were identified (4 randomized controlled trials and 7 observational studies). There was moderate quality evidence of a reduction in hospitalizations, hospital length of stay, and emergency department visits following the implementation of an electronically generated laboratory report with recommendations based on clinical guidelines. The evidence showed no difference in disease-specific outcomes; there was no evidence of a positive impact on process-of-care indicators or measures of efficiency. A limited body of research specifically examined eTools for health information exchange in the population and setting of interest. This evidence included a

  8. Designing the Search Service for Enterprise Portal based on Oracle Universal Content Management

    Science.gov (United States)

    Bauer, K. S.; Kuznetsov, D. Y.; Pominov, A. D.

    2017-01-01

    Enterprise Portal is an important part of an organization in the informative and innovative space. The portal provides collaboration between employees and the organization. This article gives valuable background on Enterprise Portal and its technologies. The paper presents Oracle WebCenter Portal and UCM Server integration in detail. The focus is on tools for the Enterprise Portal, and on the Search Service in particular. The paper also presents several UML diagrams to describe the use cases for the Search Service and the main components of this application.

  9. The Search for Extension: 7 Steps to Help People Find Research-Based Information on the Internet

    Science.gov (United States)

    Hill, Paul; Rader, Heidi B.; Hino, Jeff

    2012-01-01

    For Extension's unbiased, research-based content to be found by people searching the Internet, it needs to be organized in a way conducive to the ranking criteria of a search engine. With proper web design and search engine optimization techniques, Extension's content can be found, recognized, and properly indexed by search engines and…

  10. The Design of Tools for Sketching Sensor-Based Interaction

    DEFF Research Database (Denmark)

    Brynskov, Martin; Lunding, Rasmus; Vestergaard, Lasse Steenbock

    2012-01-01

    In this paper we motivate, present, and give an initial evaluation of DUL Radio, a small wireless toolkit for sketching sensor-based interaction. In the motivation, we discuss the purpose of this specific platform, which aims to balance ease of use (learning, setup, initialization), size, speed, flexibility and cost, aimed at wearable and ultra-mobile prototyping where fast reaction is needed (e.g. in controlling sound), and we discuss the general issues facing this category of embodied interaction design tools. We then present the platform in more detail, regarding both hardware and software. ... In the brief evaluation, we present our initial experiences with the platform both in design projects and in teaching. We conclude that DUL Radio does seem to be a relatively easy-to-use tool for sketching sensor-based interaction compared to other solutions, but that there are many ways to improve it. Target ...

  11. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    OpenAIRE

    Wang, J. Z.

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wave...

  12. Internet MEMS design tools based on component technology

    Science.gov (United States)

    Brueck, Rainer; Schumer, Christian

    1999-03-01

    The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium sized enterprises specialized in products that solve problems in specific domains like medicine, automotive sensor technology, etc. In this field of business, the technology-driven design approach known from microelectronics is not appropriate. Instead, each design problem calls for its own specific technology to be used for the solution. The variety of technologies at hand, like Si-surface, Si-bulk, LIGA, laser and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible manner of offering software access, along with methodologies for flexible licensing, e.g. on a pay-per-use basis. New communication technologies like ADSL, TV cable or satellites as carriers promise to offer bandwidth sufficient even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies, to be accessed via the Internet. The first version provides a Java implementation including a graphical editor for process specification. Currently, a new version based on JavaBeans component technology is being brought into operation. JavaBeans offers the possibility to realize independent interactive design assistants, like a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, a graphical editor assistant, etc., that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own dedicated version of a design tool set tailored to the requirements of the current problem to be solved.

  13. Voice and gesture-based 3D multimedia presentation tool

    Science.gov (United States)

    Fukutake, Hiromichi; Akazawa, Yoshiaki; Okada, Yoshihiro

    2007-09-01

    This paper proposes a 3D multimedia presentation tool that the user can manipulate intuitively using only voice and gesture input, without a standard keyboard or mouse device. The authors developed this system as a presentation tool to be used in a presentation room equipped with a large screen, like an exhibition room in a museum, because in such a presentation environment it is better to use voice commands and gesture-pointing input than a keyboard or a mouse device. This system was developed using IntelligentBox, which is a component-based 3D graphics software development system. IntelligentBox has already provided various types of 3D visible, reactive functional components called boxes, e.g., a voice input component and various multimedia handling components. IntelligentBox also provides a dynamic data linkage mechanism called slot-connection that allows the user to develop 3D graphics applications by combining already existing boxes through direct manipulations on a computer screen. Using IntelligentBox, the 3D multimedia presentation tool proposed in this paper was likewise developed as combined components only through direct manipulations on a computer screen. The authors have already proposed a 3D multimedia presentation tool using a stage metaphor and its voice input interface. This time, we extended the system to accept user gesture input in addition to voice commands. This paper explains the details of the proposed 3D multimedia presentation tool and especially describes its component-based voice and gesture input interfaces.

  14. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
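
    Matching functional genomic regions rather than text ultimately rests on scoring how well two sets of intervals overlap. The sketch below computes a simple base-pair overlap fraction between sorted BED-like intervals; it is a generic illustration of region matching, not GeNemo's Markov Chain Monte Carlo maximization.

    ```python
    def overlap_score(query, reference):
        """Fraction of query bases covered by reference regions; intervals
        are (start, end) half-open pairs on one chromosome, sorted by start."""
        covered, j = 0, 0
        for qs, qe in query:
            while j < len(reference) and reference[j][1] <= qs:
                j += 1                      # skip regions entirely before qs
            k = j
            while k < len(reference) and reference[k][0] < qe:
                covered += min(qe, reference[k][1]) - max(qs, reference[k][0])
                k += 1
        return covered / sum(e - s for s, e in query)

    query = [(100, 200), (400, 450)]
    reference = [(150, 300), (420, 430)]
    print(overlap_score(query, reference))  # (50 + 10) / 150 = 0.4
    ```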

  15. Application of a faith-based integration tool to assess mental and physical health interventions.

    Science.gov (United States)

    Saunders, Donna M; Leak, Jean; Carver, Monique E; Smith, Selina A

    2017-01-01

    To build on current research involving faith-based interventions (FBIs) for addressing mental and physical health, this study a) reviewed the extent to which relevant publications integrate faith concepts with health and b) initiated analysis of the degree of FBI integration with intervention outcomes. Derived from a systematic search of articles published between 2007 and 2017, 36 studies were assessed with a Faith-Based Integration Assessment Tool (FIAT) to quantify faith-health integration. Basic statistical procedures were employed to determine the association of faith-based integration with intervention outcomes. The assessed studies possessed (on average) moderate, inconsistent integration because of poor use of faith measures, and moderate, inconsistent use of faith practices. Analysis procedures for determining the effect of FBI integration on intervention outcomes were inadequate for formulating practical conclusions. Regardless of integration, interventions were associated with beneficial outcomes. To determine the link between FBI integration and intervention outcomes, additional analyses are needed.

  16. A Web-Based Validation Tool for GEWEX

    Science.gov (United States)

    Smith, R. A.; Gibson, S.; Heckert, E.; Minnis, P.; Sun-Mack, S.; Chen, Y.; Stubenrauch, C.; Kinne, S. A.; Ackerman, S. A.; Baum, B. A.; Chepfer, H.; Di Girolamo, L.; Heidinger, A. K.; Getzewich, B. J.; Guignard, A.; Maddux, B. C.; Menzel, W. P.; Platnick, S. E.; Poulsen, C.; Raschke, E. A.; Riedi, J.; Rossow, W. B.; Sayer, A. M.; Walther, A.; Winker, D. M.

    2011-12-01

    The Global Energy and Water Cycle Experiment (GEWEX) Cloud assessment was initiated by the GEWEX Radiation Panel (GRP) in 2005 to evaluate the variability of available, global, long-term cloud data products. Since then, eleven cloud data records have been established from various instruments, mostly onboard polar orbiting satellites. Cloud properties under study include cloud amount, cloud pressure, cloud temperature, cloud infrared (IR) emissivity and visible (VIS) optical thickness, cloud thermodynamic phase, as well as bulk microphysical properties. The volume of data and variations in parameters, spatial, and temporal resolution for the different datasets constitute a significant challenge for understanding the differences and the value of having more than one dataset. To address this issue, this paper presents a NASA Langley web-based tool to facilitate comparisons among the different cloud data sets. With this tool, the operator can choose to view numeric or graphic presentations to allow comparison between products. Multiple records are displayed in time series graphs, global maps, or zonal plots. The tool has been made flexible so that additional teams can easily add their data sets to the record selection list for use in their own analyses. This tool has possible applications to other climate and weather datasets.

  17. Correction tool for Active Shape Model based lumbar muscle segmentation.

    Science.gov (United States)

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide faster corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The developed method has been implemented in a software tool and has been evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result within an average Dice coefficient of 0.92±0.03.
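
    The Dice coefficient reported above is a standard overlap measure between two segmentations: twice the shared volume divided by the sum of the volumes. A short sketch of computing it on binary masks follows; the masks are toy data.

    ```python
    import numpy as np

    def dice(a, b):
        """Dice coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True        # initial
    corrected = np.zeros((8, 8), dtype=bool); corrected[2:6, 3:7] = True
    print(round(dice(auto, corrected), 3))  # 0.75: 12 shared of 16 + 16 voxels
    ```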

  18. Integrated environmental decision support tool based on GIS technology

    International Nuclear Information System (INIS)

    Doctor, P.G.; O'Neil, T.K.; Sackschewsky, M.R.; Becker, J.M.; Rykiel, E.J.; Walters, T.B.; Brandt, C.A.; Hall, J.A.

    1995-01-01

    Environmental restoration and management decisions facing the US Department of Energy require balancing trade-offs between diverse land uses and impacts over multiple spatial and temporal scales. Many types of environmental data have been collected for the Hanford Site and the Columbia River in Washington State over the past fifty years. Pacific Northwest National Laboratory (PNNL) is integrating these data into a Geographic Information System (GIS) based computer decision support tool. This tool provides a comprehensive and concise description of the current environmental landscape that can be used to evaluate the ecological and monetary trade-offs between future land use, restoration and remediation options before action is taken. Ecological impacts evaluated include effects to individual species of concern and habitat loss and fragmentation. Monetary impacts include those associated with habitat mitigation. The tool is organized as both a browsing tool for educational purposes, and as a framework that leads a project manager through the steps needed to be in compliance with environmental requirements

  19. SciRide Finder: a citation-based paradigm in biomedical literature search.

    Science.gov (United States)

    Volanakis, Adam; Krawczyk, Konrad

    2018-04-18

    There are more than 26 million peer-reviewed biomedical research items according to Medline/PubMed. This breadth of information is indicative of the progress in biomedical sciences on one hand, but an overload for scientists performing literature searches on the other. A major portion of scientific literature search is finding statements, numbers and protocols that can be cited to build an evidence-based narrative for a new manuscript. Because science builds on prior knowledge, such information has likely been written out and cited in an older manuscript. Thus, Cited Statements, pieces of text from the scientific literature supported by citing other peer-reviewed publications, carry a significant amount of condensed information on prior art. Based on this principle, we propose a literature search service, SciRide Finder (finder.sciride.org), which constrains the search corpus to such Cited Statements only. We demonstrate that Cited Statements can carry different information from that found in titles/abstracts and full text, giving access to alternative literature search results than those returned by traditional search engines. We further show how presenting search results as a list of Cited Statements allows researchers to easily find information to build an evidence-based narrative for their own manuscripts.

  20. Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG

    Science.gov (United States)

    Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu

    2016-12-01

    Applying path searching based on the distribution network topology has proved effective in setting software, and a path searching method that accounts for DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies a path searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and according to the line load traversed by the search path and the important load integrated along the optimized path, an optimized division scheme of planned islands is formed that uses each DG as a power source and is balanced against the local important load. Finally, the COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of the automatic restoration scheme.
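
    The island-division step starts a path search at the fault-isolation switch and stops at each reachable power source. A toy breadth-first search over an invented feeder graph is sketched below; the paper's actual scheme additionally weighs traversed line loads and important loads, which this sketch omits.

    ```python
    from collections import deque

    # toy feeder graph after fault isolation: 'S' is the substation,
    # 'DG1'/'DG2' are distributed generators, other nodes are buses/switches
    grid = {"S": ["n1"], "n1": ["S", "n2"], "n2": ["n1", "n3", "DG1"],
            "n3": ["n2", "DG2"], "DG1": ["n2"], "DG2": ["n3"]}

    def paths_to_sources(start, sources):
        """BFS from the fault-isolation switch; returns a shortest path to
        every reachable power source (candidate island backbones)."""
        parent, seen, found = {}, {start}, {}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if node in sources:
                path, cur = [node], node
                while cur != start:
                    cur = parent[cur]
                    path.append(cur)
                found[node] = path[::-1]
            for nxt in grid[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    parent[nxt] = node
                    queue.append(nxt)
        return found

    print(paths_to_sources("n1", {"S", "DG1", "DG2"}))
    ```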

  1. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search is implemented to find coarse services from the web, and the ontology reasoning is designed to refine the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  2. Q-learning-based adjustable fixed-phase quantum Grover search algorithm

    International Nuclear Information System (INIS)

    Guo Ying; Shi Wensha; Wang Yijun; Hu, Jiankun

    2017-01-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between iterations and success probabilities of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, is used to perform a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with the conventional Grover algorithms, it avoids local optima, thereby enabling success probabilities to approach one. (author)
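
    For reference, the trade-off the learned policy navigates can be seen in the standard (phase = π) Grover baseline, where the success probability after k iterations is sin²((2k+1)·arcsin√λ). The sketch below computes the usual iteration count and its success probability; the paper's fixed-phase variant tunes the rotation phase beyond this baseline.

    ```python
    import numpy as np

    def grover_success(lam, k):
        """Success probability of standard Grover search after k iterations
        for a marked-item fraction lam (rotation phase fixed at pi)."""
        theta = np.arcsin(np.sqrt(lam))
        return np.sin((2 * k + 1) * theta) ** 2

    lam = 1 / 64                                  # one marked item among 64
    k_opt = int(np.floor(np.pi / (4 * np.arcsin(np.sqrt(lam)))))
    print(k_opt, round(grover_success(lam, k_opt), 4))  # 6 iterations, ~0.9966
    ```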

  3. Computer-Based Tools for Evaluating Graphical User Interfaces

    Science.gov (United States)

    Moore, Loretta A.

    1997-01-01

    The user interface is the component of a software system that connects two very complex systems: humans and computers. Each of these two systems imposes certain requirements on the final product. The user is the judge of the usability and utility of the system; the computer software and hardware are the tools with which the interface is constructed. Mistakes are sometimes made in designing and developing user interfaces because the designers and developers have limited knowledge about human performance (e.g., problem solving, decision making, planning, and reasoning). Even those trained in user interface design make mistakes because they are unable to address all of the known requirements and constraints on design. Evaluation of the user interface is therefore a critical phase of the user interface development process. Evaluation should not be considered the final phase of design; it should be part of an iterative design cycle, with the output of evaluation fed back into design. The goal of this research was to develop a set of computer-based tools for objectively evaluating graphical user interfaces. The research was organized into three phases. The first phase resulted in the development of an embedded evaluation tool which evaluates the usability of a graphical user interface based on a user's performance. An expert system to assist in the design and evaluation of user interfaces based upon rules and guidelines was developed during the second phase. During the final phase of the research an automatic layout tool to be used in the initial design of graphical interfaces was developed. The research was coordinated with NASA Marshall Space Flight Center's Mission Operations Laboratory's efforts in developing onboard payload display specifications for the Space Station.

  4. Simulation tools for guided wave based structural health monitoring

    Science.gov (United States)

    Mesnil, Olivier; Imperiale, Alexandre; Demaldent, Edouard; Baronian, Vahan; Chapuis, Bastien

    2018-04-01

    Structural Health Monitoring (SHM) is a thematic derived from Non Destructive Evaluation (NDE) based on the integration of sensors onto or into a structure in order to monitor its health without disturbing its regular operating cycle. Guided wave based SHM relies on the propagation of guided waves in plate-like or extruded structures. Using piezoelectric transducers to generate and receive guided waves is one of the most widely accepted paradigms due to the low cost and low weight of those sensors. A wide range of techniques for flaw detection based on the aforementioned setup is available in the literature, but very few of these techniques have found industrial applications yet. A major difficulty comes from the sensitivity of guided waves to a substantial number of parameters, such as temperature or geometrical singularities, making guided wave measurements difficult to analyze. In order to apply guided wave based SHM techniques to a wider spectrum of applications and to transfer those techniques to industry, the CEA LIST develops novel numerical methods. These methods facilitate the evaluation of the robustness of SHM techniques for multiple applicative cases and ease the analysis of the influence of various parameters, such as sensor positioning or environmental conditions. The first numerical tool is the guided wave module integrated into the commercial software CIVA, relying on a hybrid modal-finite element formulation to compute the guided wave response of perturbations (cavities, flaws…) in extruded structures of arbitrary cross section such as rails or pipes. The second numerical tool is based on the spectral element method [2] and simulates guided waves in both isotropic (metals) and orthotropic (composites) plate-like structures. This tool is designed to match the widely accepted sparse piezoelectric transducer array SHM configuration, in which each embedded sensor acts as both emitter and receiver of guided waves. This tool is under development.

  5. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    Directory of Open Access Journals (Sweden)

    Filistea Naude

    2010-08-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The results of this study show that academics have indeed accepted the open Web as a useful information resource and Web search engines as retrieval tools when seeking information for academic and research work. The majority of respondents used the open Web and Web search engines on a daily or weekly basis to source academic and research information. The main obstacles presented by using the open Web and Web search engines included lack of time to search and browse the Web, information overload, poor network speed and the slow downloading speed of webpages.

  7. Empirical comparison of web-based antimicrobial peptide prediction tools.

    Science.gov (United States)

    Gabere, Musa Nur; Noble, William Stafford

    2017-07-01

    Antimicrobial peptides (AMPs) are innate immune molecules that exhibit activities against a range of microbes, including bacteria, fungi, viruses and protozoa. Recent increases in microbial resistance against current drugs have led to a concomitant increase in the need for novel antimicrobial agents. Over the last decade, a number of AMP prediction tools have been designed and made freely available online. These AMP prediction tools show potential to discriminate AMPs from non-AMPs, but the relative quality of the predictions produced by the various tools is difficult to quantify. We compiled two sets of AMP and non-AMP peptides, separated into three categories: antimicrobial, antibacterial and bacteriocins. Using these benchmark data sets, we carried out a systematic evaluation of ten publicly available AMP prediction methods. Among the six general AMP prediction tools (ADAM, CAMPR3(RF), CAMPR3(SVM), MLAMP, DBAASP and MLAMP), we find that CAMPR3(RF) provides a statistically significant improvement in performance, as measured by the area under the receiver operating characteristic (ROC) curve, relative to the other five methods. Surprisingly, for antibacterial prediction, the original AntiBP method significantly outperforms its successor, AntiBP2, on one benchmark dataset. The two bacteriocin prediction tools, BAGEL3 and BACTIBASE, both provide very good performance, and BAGEL3 outperforms its predecessor, BACTIBASE, on the larger of the two benchmarks. Supplementary data are available at Bioinformatics online.

  8. Experience of Developing a Meta-Semantic Search Engine

    OpenAIRE

    Mukhopadhyay, Debajyoti; Sharma, Manoj; Joshi, Gajanan; Pagare, Trupti; Palwe, Adarsha

    2013-01-01

    Today's web search scenario, which is mainly keyword based, leads to the need for the effective and meaningful search provided by the Semantic Web. Existing search engines struggle to provide relevant answers to users' queries due to their dependency on the simple data available in web pages. On the other hand, semantic search engines provide efficient and relevant results, as the semantic web manages information with well-defined meaning using ontology. A Meta-Search engine is a search tool that ...

  9. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server multi-layer multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and the robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes.
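
    The abstract mentions a semantic variant of PageRank but does not reproduce its formulas. The sketch below shows only the standard power-iteration PageRank computation that such a variant builds on, with the understanding that a semantic version would derive the edge weights from concept similarity rather than raw hyperlinks:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Power-iteration PageRank on a (possibly weighted) adjacency matrix."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                      # dangling nodes: avoid divide-by-zero
    m = adj / out                          # row-stochastic transition matrix
    rank = np.full(n, 1 / n)
    while True:
        new = (1 - damping) / n + damping * m.T @ rank
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# Toy link graph; a semantic variant would replace 0/1 links with similarities.
links = np.array([[0, 1, 1],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)
print(pagerank(links))
```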

  10. Web-based CERES Clouds QC Property Viewing Tool

    Science.gov (United States)

    Smith, R. A.; Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Minnis, P.

    2014-12-01

    This presentation will display the capabilities of a web-based CERES cloud property viewer, with Terra data chosen for the examples. It will demonstrate viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Users can manipulate images to narrow the boundaries of the map, adjust color bars and value ranges, compare datasets, view data values, and more. Other atmospheric research groups will be encouraged to put their data into the underlying NetCDF data format and view it with the tool. A laptop will hopefully be available to allow conference attendees to try navigating the tool.

  11. An Infrastructure for UML-Based Code Generation Tools

    Science.gov (United States)

    Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.

    The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance as a means to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects, which have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.

  12. Design tool for TOF and SL based 3D cameras.

    Science.gov (United States)

    Bouquet, Gregory; Thorstensen, Jostein; Bakke, Kari Anne Hestnes; Risholm, Petter

    2017-10-30

    Active illumination 3D imaging systems based on Time-of-flight (TOF) and Structured Light (SL) projection are in rapid development, and are constantly finding new areas of application. In this paper, we present a theoretical design tool that allows prediction of 3D imaging precision. Theoretical expressions are developed for both TOF and SL imaging systems. The expressions contain only physically measurable parameters and no fitting parameters. We perform 3D measurements with both TOF and SL imaging systems, showing excellent agreement between theoretical and measured distance precision. The theoretical framework can be a powerful 3D imaging design tool, as it allows for prediction of 3D measurement precision already in the design phase.

  13. Compression-Based Tools for Navigation with an Image Database

    Directory of Open Access Journals (Sweden)

    Giovanni Motta

    2012-01-01

    We present tools that can be used within a larger system referred to as a passive assistant. The system receives information from a mobile device, as well as information from an image database such as Google Street View, and employs image processing to provide useful information about a local urban environment to a user who is visually impaired. The first stage acquires and computes accurate location information, the second stage performs texture and color analysis of a scene, and the third stage provides specific object recognition and navigation information. These second and third stages rely on compression-based tools (dimensionality reduction, vector quantization, and coding) that are enhanced by knowledge of the (approximate) location of objects.

  14. Sim-based detection tools to minimize motorcycle theft

    Science.gov (United States)

    Triansyah, F. A.; Mudhafar, Z.; Lestari, C.; Amilia, S.; Ruswana, N. D.; Junaeti, E.

    2018-05-01

    The number of motorcycles in Indonesia has spurred an increase in motorcycle theft. In addition, the growing number of motorcycles increases the number of traffic accidents caused by unqualified riders. The purpose of this research is to build METEOR (SIM Detector), a tool that checks the validity of the SIM (driver license) used to operate a motorcycle and protects the motorcycle against theft. METEOR was created through assembly, coding, testing, and sequencing stages on the motorcycle. Based on the research conducted, the resulting METEOR device can detect a SIM fitted with an additional RFID chip and can be mounted on the motorcycle. Without a valid SIM, a motorcycle equipped with METEOR cannot be started. It can thus be concluded that a motorcycle with METEOR installed gains a safety device against theft as well as a tool to test the eligibility of its rider.

  15. Development of IFC based fire safety assesment tools

    DEFF Research Database (Denmark)

    Taciuc, Anca; Karlshøj, Jan; Dederichs, Anne

    2016-01-01

    Due to the impact that fire safety design has on a building's layout and on complementary systems, such as installations, it is important to continuously evaluate the safety level of the building during the conceptual design stage. If this task is carried out too late, additional changes need to be implemented, involving supplementary work and costs with a negative impact on the client. The aim of this project is to create a set of automatic compliance-checking rules for prescriptive design and to develop a web application tool for performance-based design that retrieves data from Building Information Models (BIM) to evaluate the safety level of the building during the conceptual design stage. The findings show that the developed tools can be useful in the AEC industry. Integrating BIM from the conceptual design stage for analyzing the fire safety level can ensure precision in further...

  16. Smartphone based face recognition tool for the blind.

    Science.gov (United States)

    Kramer, K M; Hedin, D S; Rolkosky, D J

    2010-01-01

    The inability to identify people during group meetings is a disadvantage for blind people in many professional and educational situations. To explore the efficacy of face recognition using smartphones in these settings, we have prototyped and tested a face recognition tool for blind users. The tool utilizes smartphone technology in conjunction with a wireless network to provide audio feedback identifying the people in front of the blind user. Testing indicated that the face recognition technology can tolerate up to a 40-degree angle between the direction a person is facing and the camera's axis, achieving a 96% success rate with no false positives. Future work will further develop the technology for local face recognition on the smartphone in addition to remote server-based face recognition.

  17. Structure-Based Search for New Inhibitors of Cholinesterases

    Directory of Open Access Journals (Sweden)

    Barbara Malawska

    2013-03-01

    Cholinesterases are important biological targets responsible for regulation of cholinergic transmission, and their inhibitors are used for the treatment of Alzheimer's disease. To design new cholinesterase inhibitors, a number of structure-based design strategies were followed, including the modification of compounds from a previously developed library and a fragment-based design approach. This led to the selection of heterodimeric structures as potential inhibitors. Synthesis and biological evaluation of selected candidates confirmed that the designed compounds were acetylcholinesterase inhibitors with IC50 values in the mid-nanomolar to low micromolar range, and some of them were also butyrylcholinesterase inhibitors.

  18. Development of health information search engine based on metadata and ontology.

    Science.gov (United States)

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on the vocabularies used to index the contents generated by information producers as well as those used by users to search the contents. Vocabulary for the health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced better search results compared to existing search engines. A health information search engine based on metadata and ontology will provide reliable health information to both information producers and consumers.

  19. Inference-Based Similarity Search in Randomized Montgomery Domains for Privacy-Preserving Biometric Identification.

    Science.gov (United States)

    Wang, Yi; Wan, Jianwu; Guo, Jun; Cheung, Yiu-Ming; C Yuen, Pong

    2017-07-14

    Similarity search is essential to many important applications and often involves searching at scale on high-dimensional data based on their similarity to a query. In biometric applications, recent vulnerability studies have shown that adversarial machine learning can compromise biometric recognition systems by exploiting the biometric similarity information. Existing methods for biometric privacy protection are in general based on pairwise matching of secured biometric templates and have inherent limitations in search efficiency and scalability. In this paper, we propose an inference-based framework for privacy-preserving similarity search in Hamming space. Our approach builds on an obfuscated distance measure that can conceal Hamming distance in a dynamic interval. Such a mechanism enables us to systematically design statistically reliable methods for retrieving most likely candidates without knowing the exact distance values. We further propose to apply Montgomery multiplication for generating search indexes that can withstand adversarial similarity analysis, and show that information leakage in randomized Montgomery domains can be made negligibly small. Our experiments on public biometric datasets demonstrate that the inference-based approach can achieve a search accuracy close to the best performance possible with secure computation methods, but the associated cost is reduced by orders of magnitude compared to cryptographic primitives.
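
    Montgomery multiplication, the primitive the authors apply when generating search indexes, can be illustrated in a few lines. The sketch below shows the standard REDC reduction on toy numbers; it is not the paper's randomized-domain index construction, and the modulus and bit width are arbitrary choices for the example (requires Python 3.8+ for the modular inverse via pow):

```python
def montgomery_setup(n, k):
    """Precompute constants for Montgomery arithmetic mod odd n, with R = 2**k."""
    r = 1 << k
    n_inv = pow(-n, -1, r)        # n' = -n^{-1} mod R
    return r, n_inv

def mont_mul(a, b, n, r, n_inv, k):
    """REDC: return a*b*R^{-1} mod n without an explicit division by n."""
    t = a * b
    m = (t * n_inv) & (r - 1)     # (t mod R) * n' mod R
    u = (t + m * n) >> k          # exact division by R
    return u - n if u >= n else u

n, k = 97, 8                       # toy odd modulus, R = 256 > n
r, n_inv = montgomery_setup(n, k)
a_bar = (29 * r) % n               # convert operands into Montgomery form
b_bar = (53 * r) % n
c_bar = mont_mul(a_bar, b_bar, n, r, n_inv, k)
print((c_bar * pow(r, -1, n)) % n, (29 * 53) % n)   # both 82 = 29*53 mod 97
```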

  20. Optimization of interactive visual-similarity-based search

    NARCIS (Netherlands)

    Nguyen, G.P.; Worring, M.

    2008-01-01

    At one end of the spectrum, research in interactive content-based retrieval concentrates on machine learning methods for effective use of relevance feedback. On the other end, the information visualization community focuses on effective methods for conveying information to the user. What is lacking

  1. Creation of a Web-Based GIS Server and Custom Geoprocessing Tools for Enhanced Hydrologic Applications

    Science.gov (United States)

    Welton, B.; Chouinard, K.; Sultan, M.; Becker, D.; Milewski, A.; Becker, R.

    2010-12-01

    Rising populations in the arid and semi-arid parts of the world are increasing the demand for fresh water supplies worldwide. Many data sets needed for the assessment of hydrologic applications across vast regions of the world are expensive, unpublished, difficult to obtain, or at varying scales, which complicates their use. Fortunately, this situation is changing with the development of global remote sensing datasets and web-based platforms such as GIS Server. GIS provides a cost-effective vehicle for comparing, analyzing, and querying a variety of spatial datasets as geographically referenced layers. We have recently constructed a web-based GIS that incorporates all relevant geological, geochemical, geophysical, and remote sensing data sets that were readily used to identify reservoir types and potential well locations on local and regional scales in various tectonic settings, including: (1) an extensional environment (Red Sea rift), (2) a transcurrent fault system (Najd Fault in the Arabian-Nubian Shield), and (3) compressional environments (Himalayas). The web-based GIS could also be used to detect spatial and temporal trends in precipitation, recharge, and runoff in large watersheds on local, regional, and continental scales. These applications were enabled through the construction of a web-based ArcGIS Server with a Google Maps interface and the development of customized geoprocessing tools. ArcGIS Server provides out-of-the-box setups that are generic in nature. This platform includes all of the standard web-based GIS tools (e.g. pan, zoom, identify, search, data querying, and measurement). In addition to the standard suite of tools provided by ArcGIS Server, an additional set of advanced data manipulation and display tools was also developed to allow for a more complete and customizable view of the area of interest. The most notable addition to the standard GIS Server tools is the custom on-demand geoprocessing tools (e.g., graph, statistical functions, custom raster

  2. ONTOLOGY BASED MEANINGFUL SEARCH USING SEMANTIC WEB AND NATURAL LANGUAGE PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    K. Palaniammal

    2013-10-01

    The semantic web extends the current World Wide Web by adding facilities for machine-understood descriptions of meaning. The ontology-based search model is used to enhance the efficiency and accuracy of information retrieval. Ontology is the core technology of the semantic web and the mechanism for representing formal and shared domain descriptions. In this paper, we propose ontology-based meaningful search using semantic web and Natural Language Processing (NLP) techniques in the educational domain. First we build the educational ontology, then we present the semantic search system. The search model consists of three parts: embedded spell-checking, finding synonyms using the WordNet API, and querying the ontology using the SPARQL language. The results are sensitive to both spelling and synonymous context. This approach provides more accurate results and the complete details for the selected field in a single page.
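
    Two of the three stages (synonym expansion and ontology querying) can be approximated with off-the-shelf libraries. The following sketch assumes a hypothetical educational ontology file education.owl and uses nltk's WordNet interface plus rdflib for the SPARQL step; the spell-check stage is omitted:

```python
from nltk.corpus import wordnet as wn   # pip install nltk; nltk.download('wordnet')
from rdflib import Graph, Literal

def synonyms(term):
    """Expand a query term with WordNet synonyms (the 'synonymous context' step)."""
    names = {term}
    for synset in wn.synsets(term):
        names.update(l.name().replace("_", " ") for l in synset.lemmas())
    return names

g = Graph()
g.parse("education.owl", format="xml")  # hypothetical educational ontology

query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?subject ?label WHERE {
        ?subject rdfs:label ?label .
        FILTER (CONTAINS(LCASE(STR(?label)), LCASE(STR(?kw))))
    }"""
for kw in synonyms("course"):
    for row in g.query(query, initBindings={"kw": Literal(kw)}):
        print(row.subject, row.label)
```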

  3. Support patient search on pathology reports with interactive online learning based data extraction.

    Science.gov (United States)

    Zheng, Shuai; Lu, James J; Appin, Christina; Brat, Daniel; Wang, Fusheng

    2015-01-01

    Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on the extracted structured data. We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographical data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of tests. Extracting data from pathology reports could enable

  4. Support patient search on pathology reports with interactive online learning based data extraction

    Directory of Open Access Journals (Sweden)

    Shuai Zheng

    2015-01-01

    Background: Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. Methods: We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on the extracted structured data. Results: We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographical data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of

  5. Particle Swarm Optimization and harmony search based clustering and routing in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Veena Anand

    2017-01-01

    The limited and non-rechargeable energy resources of Wireless Sensor Networks (WSNs) create a challenge that has led to the development of various clustering and routing algorithms. This paper proposes an approach for improving network lifetime by using Particle Swarm Optimization (PSO) based clustering and Harmony Search based routing in WSNs. Globally optimal cluster heads are selected, and gateway nodes are introduced to decrease the energy consumption of the cluster heads (CHs) while sending aggregated data to the Base Station (BS). Next, a Harmony Search based local search strategy finds the best routing path from the gateway nodes to the Base Station. Finally, the proposed algorithm is presented.
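
    As an illustration of the PSO clustering stage only (the harmony-search routing stage is omitted), the toy sketch below optimizes continuous cluster-head coordinates against a simple energy proxy. The node layout, fitness weights, and PSO coefficients are invented for the example, and the paper itself selects actual sensor nodes as cluster heads rather than free coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, (60, 2))      # sensor positions
bs = np.array([50.0, 120.0])              # base station
K, P, ITERS = 5, 20, 100                  # cluster heads, particles, iterations

def fitness(ch):
    """Energy proxy: node-to-nearest-CH distance plus CH-to-BS distance."""
    ch = ch.reshape(K, 2)
    d = np.linalg.norm(nodes[:, None, :] - ch[None, :, :], axis=2)
    return d.min(axis=1).sum() + np.linalg.norm(ch - bs, axis=1).sum()

pos = rng.uniform(0, 100, (P, K * 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best cluster-head layout:\n", gbest.reshape(K, 2).round(1))
```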

  6. A product feature-based user-centric product search model

    OpenAIRE

    Ben Jabeur, Lamjed; Soulier, Laure; Tamine, Lynda; Mousset, Paul

    2016-01-01

    During the online shopping process, users search for interesting products and want to quickly access those that fit their needs among a long tail of similar or closely related products. Our contribution addresses head queries that are frequently submitted on e-commerce Web sites. Head queries usually target featured products with several variations, accessories, and complementary products. We present in this paper a product feature-based user-centric model for product search involving in a...

  7. Prospects for SUSY discovery based on inclusive searches with the ATLAS detector

    International Nuclear Information System (INIS)

    Ventura, Andrea

    2009-01-01

    The search for Supersymmetry (SUSY) among the possible scenarios of new physics is one of the most relevant goals of the ATLAS experiment running at CERN's Large Hadron Collider. In the present work the expected prospects for discovering SUSY with the ATLAS detector are reviewed, in particular for the first fb⁻¹ of collected integrated luminosity. All studies and results reported here are based on inclusive search analyses realized with Monte Carlo signal and background data simulated through the ATLAS apparatus.

  8. Computer-aided tool for the teaching of relational algebra in data base courses

    Directory of Open Access Journals (Sweden)

    Johnny Villalobos Murillo

    2016-03-01

    This article describes the design and implementation of a computer-aided tool called Relational Algebra Translator (RAT), used in database courses for the teaching of relational algebra. A problem arose when introducing the relational algebra topic in course EIF 211, Design and Implementation of Databases, which belongs to the Engineering in Information Systems program of the National University of Costa Rica: students attending the course lacked the deep mathematical knowledge required, which led to a learning problem in a subject that is important for understanding how database searches and requests work. RAT was created to enhance this teaching-learning process. The article introduces the architectural and design principles required for its implementation, such as the language symbol table, the grammatical rules and the basic algorithms that RAT uses to translate from relational algebra to the SQL language. The tool has been used for one term and has proved to be effective in the teaching-learning process. This urged the investigators to publish it on the web site www.slinfo.una.ac.cr so that it can be used in other university courses.
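
    RAT's actual symbol table and grammar are not reproduced in the abstract, but the core idea of translating relational algebra operators into SQL can be sketched with a few composable functions. This is a hypothetical, much simplified illustration (DISTINCT mirrors the set semantics of projection; aliases satisfy SQL's derived-table rules):

```python
def base(name):
    """A base relation as a SQL query string."""
    return f"SELECT * FROM {name}"

def sigma(cond, rel):
    """Relational selection sigma_cond(rel) -> SQL WHERE clause."""
    return f"SELECT * FROM ({rel}) AS s WHERE {cond}"

def pi(attrs, rel):
    """Relational projection pi_attrs(rel) -> SQL column list (set semantics)."""
    return f"SELECT DISTINCT {', '.join(attrs)} FROM ({rel}) AS p"

def join(left, right):
    """Natural join of two relations."""
    return f"SELECT * FROM ({left}) AS a NATURAL JOIN ({right}) AS b"

# pi_name(sigma_{grade > 80}(students |x| enrolled))
print(pi(["name"], sigma("grade > 80", join(base("students"), base("enrolled")))))
```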

  9. Ontology-based Semantic Search Engine for Healthcare Services

    OpenAIRE

    Jotsna Molly Rajan; M. Deepa Lakshmi

    2012-01-01

    With the development of Web Services, the retrieval of relevant services has become a challenge. The keyword-based discovery mechanism using UDDI and WSDL is insufficient due to the retrieval of a large amount of irrelevant information. Also, keywords are insufficient for expressing semantic concepts, since a single concept can be referred to using syntactically different terms. Hence, service capabilities need to be manually analyzed, which has led to the development of the Semantic Web for automatic...

  10. A GIS-based Quantitative Approach for the Search of Clandestine Graves, Italy.

    Science.gov (United States)

    Somma, Roberta; Cascio, Maria; Silvestro, Massimiliano; Torre, Eliana

    2018-05-01

    Previous research on RAG color-coded prioritization systems for the discovery of clandestine graves has not considered all the factors influencing the burial site choice within a GIS project. The goal of this technical note is to discuss a GIS-based quantitative approach for the search of clandestine graves. The method is based on RAG maps cross-referenced with cumulative suitability factors for hosting a burial, leading to the editing of different search scenarios for ground searches showing high- (Red), medium- (Amber), and low- (Green) priority areas. The application of this procedure allowed several outcomes to be determined: if the concealment occurs at night, the "search scenario without the visibility" will be the most effective one; if the concealment occurs in daylight, the "search scenario with the DSM-based visibility" will be most appropriate; and the different search scenarios may be cross-referenced with offenders' confessions and eyewitnesses' testimonies to verify the veracity of their statements.
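
    A minimal numerical sketch of the cross-referencing idea, with invented suitability factors and tercile thresholds standing in for the paper's criteria, could combine factor rasters and classify them into the three priority colors like this:

```python
import numpy as np

# Hypothetical 0-1 suitability factors on a common grid: soil diggability,
# vegetation screening, vehicle access, and visibility from roads.
rng = np.random.default_rng(1)
diggability, screening, access, visibility = rng.random((4, 200, 200))

# Cumulative suitability; drop the visibility term for a night-time scenario.
day = diggability + screening + access + (1 - visibility)
night = diggability + screening + access

def rag(score):
    """Classify a suitability surface into priority classes by terciles:
    0 = Green (low priority), 1 = Amber, 2 = Red (high priority)."""
    lo, hi = np.quantile(score, [1 / 3, 2 / 3])
    return np.digitize(score, [lo, hi])

print(np.bincount(rag(day).ravel()))      # cells per priority class, day scenario
```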

  11. Using the Wii Fit as a tool for balance assessment and neurorehabilitation: the first half decade of "Wii-search".

    Science.gov (United States)

    Goble, Daniel J; Cone, Brian L; Fling, Brett W

    2014-02-08

    The Nintendo Wii Fit was released just over five years ago as a means of improving basic fitness and overall well-being. Despite this broad mission, the Wii Fit has generated specific interest in the domain of neurorehabilitation as a biobehavioral measurement and training device for balance ability. Growing interest in Wii Fit technology is likely due to the ubiquitous nature of poor balance and catastrophic falls, which are commonly seen in older adults and various disability conditions. The present review provides the first comprehensive summary of Wii Fit balance research, giving specific insight into the system's use for the assessment and training of balance. Overall, at the time of the fifth anniversary, work in the field showed that custom applications using the Wii Balance Board as a proxy for a force platform have great promise as a low cost and portable way to assess balance. On the other hand, use of Wii Fit software-based balance metrics has been far less effective in determining balance status. As an intervention tool, positive balance outcomes have typically been obtained using Wii Fit balance games, advocating their use for neurorehabilitative training. Despite this, limited sample sizes and few randomized control designs indicate that research regarding use of the Wii Fit system for balance intervention remains subject to improvement. Future work aimed at conducting studies with larger scale randomized control designs and a greater mechanistic focus is recommended to further advance the efficacy of this impactful neurorehabilitation tool.

  12. PACS project management utilizing web-based tools

    Science.gov (United States)

    Patel, Sunil; Levin, Brad; Gac, Robert J., Jr.; Harding, Douglas, Jr.; Chacko, Anna K.; Radvany, Martin; Romlein, John R.

    2000-05-01

    As Picture Archiving and Communications Systems (PACS) implementations become more widespread, the management of deploying large, multi-facility PACS will become a more frequent occurrence. The tools and usability of the World Wide Web for disseminating project management information overcome constraints of time, distance, participant availability, and data format, allowing for the effective collection and dissemination of PACS planning and implementation information for a potentially limitless number of concurrent PACS sites. This paper will speak to tools such as (1) a topic-specific discussion board and (2) a 'restricted' Intranet within a 'project' Intranet. We will also discuss project-specific methods currently in use in a leading-edge regional PACS implementation concerning the sharing of project schedules, physical drawings, images of implementations, site-specific data, point-of-contact lists, project milestones, and a general project overview. The individual benefits realized by the end user from each tool will also be covered. These details will be presented, balanced with a spotlight on communication as a critical component of any project management undertaking. Using today's technology, the web arguably provides the most cost- and resource-effective vehicle to facilitate the broad-based, interactive sharing of project information.

  13. MTK: An AI tool for model-based reasoning

    Science.gov (United States)

    Erickson, William K.; Schwartz, Mary R.

    1987-01-01

    A 1988 goal for the Systems Autonomy Demonstration Project Office of the NASA Ames Research Center is to apply model-based representation and reasoning techniques in a knowledge-based system that will provide monitoring, fault diagnosis, control and trend analysis of the space station Thermal Management System (TMS). A number of issues raised during the development of the first prototype system inspired the design and construction of a model-based reasoning tool called MTK, which was used in the building of the second prototype. These issues are outlined, along with examples from the thermal system to highlight the motivating factors behind them. An overview of the capabilities of MTK is given.

  14. Haystack, a web-based tool for metabolomics research.

    Science.gov (United States)

    Grace, Stephen C; Embry, Stephen; Luo, Heng

    2014-01-01

    Liquid chromatography coupled to mass spectrometry (LCMS) has become a widely used technique in metabolomics research for differential profiling, the broad screening of biomolecular constituents across multiple samples to diagnose phenotypic differences and elucidate relevant features. However, a significant limitation in LCMS-based metabolomics is the high-throughput data processing required for robust statistical analysis and data modeling for large numbers of samples with hundreds of unique chemical species. To address this problem, we developed Haystack, a web-based tool designed to visualize, parse, filter, and extract significant features from LCMS datasets rapidly and efficiently. Haystack runs in a browser environment with an intuitive graphical user interface that provides both display and data processing options. Total ion chromatograms (TICs) and base peak chromatograms (BPCs) are automatically displayed, along with time-resolved mass spectra and extracted ion chromatograms (EICs) over any mass range. Output files in the common .csv format can be saved for further statistical analysis or customized graphing. Haystack's core function is a flexible binning procedure that converts the mass dimension of the chromatogram into a set of interval variables that can uniquely identify a sample. Binned mass data can be analyzed by exploratory methods such as principal component analysis (PCA) to model class assignment and identify discriminatory features. The validity of this approach is demonstrated by comparison of a dataset from plants grown at two light conditions with manual and automated peak detection methods. Haystack successfully predicted class assignment based on PCA and cluster analysis, and identified discriminatory features based on analysis of EICs of significant bins. Haystack, a new online tool for rapid processing and analysis of LCMS-based metabolomics data is described. It offers users a range of data visualization options and supports non
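
    Haystack's binning procedure converts the m/z axis into interval variables suitable for PCA. The sketch below imitates that pipeline on synthetic data; the bin width, mass range, and random "runs" are assumptions for the example, not Haystack's defaults:

```python
import numpy as np
from sklearn.decomposition import PCA

def bin_masses(mz, intensity, lo=100.0, hi=1000.0, width=1.0):
    """Haystack-style binning sketch: collapse the m/z dimension into
    fixed-width interval variables by summing intensity per bin."""
    edges = np.arange(lo, hi + width, width)
    binned, _ = np.histogram(mz, bins=edges, weights=intensity)
    return binned

rng = np.random.default_rng(0)
samples = []
for _ in range(12):                        # 12 hypothetical LCMS runs
    mz = rng.uniform(100, 1000, 5000)
    inten = rng.exponential(1.0, 5000)
    samples.append(bin_masses(mz, inten))

# Exploratory analysis: project the binned profiles onto 2 principal components.
scores = PCA(n_components=2).fit_transform(np.vstack(samples))
print(scores.round(2))
```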

  15. Nearby Search Indekos Based Android Using A Star (A*) Algorithm

    Science.gov (United States)

    Siregar, B.; Nababan, EB; Rumahorbo, JA; Andayani, U.; Fahmi, F.

    2018-03-01

    Indekos, or rented rooms, are temporary residences rented for months or years. Members of the academic community who come from out of town need a temporary residence such as an Indekos during their studies, teaching, or duties, and they often find it difficult to locate one because of a lack of information. Moreover, newcomers do not know the area around the campus and want the shortest path from the Indekos to the campus. This problem can be solved by implementing the A Star (A*) algorithm, a shortest-path algorithm used here to find the shortest route between the campus and an Indekos, with the campus faculties serving as starting points of the search. Determining the starting point in this way allows students to choose where their search begins. The mobile-based application facilitates searching anytime and anywhere. Based on the experimental results, the A* algorithm can find the shortest path with 86.67% accuracy.
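
    For reference, a compact A* implementation over a toy campus map (the node names, coordinates, and edge weights are invented; straight-line distance to the goal is the usual admissible heuristic) looks like this:

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* shortest path over a weighted graph with 2-D node coordinates."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)   # admissible heuristic

    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nxt, w in graph.get(node, []):
            ng = g + w
            if ng < best_g.get(nxt, math.inf):
                best_g[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, math.inf

# Toy map: a faculty building, two junctions, and two Indekos (boarding houses).
coords = {"faculty": (0, 0), "j1": (1, 0), "j2": (1, 1),
          "kos_A": (2, 0), "kos_B": (2, 2)}
graph = {"faculty": [("j1", 1.0)],
         "j1": [("kos_A", 1.0), ("j2", 1.0)],
         "j2": [("kos_B", 1.4)]}
print(a_star(graph, coords, "faculty", "kos_A"))
```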

  16. Knowledge Based Product Configuration - a documentatio tool for configuration projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Malis, Martin

    2003-01-01

    A lot of knowledge is put into these systems and many domain experts are involved. This calls for an effective documentation system in order to structure this knowledge in a way that fits the systems. Standard configuration systems do not support this kind of documentation. The chapter deals with the development of a Lotus Notes application that serves as a knowledge-based documentation tool for configuration projects. A prototype has been developed and tested empirically in an industrial case company. It has proved to be a success.

  17. PV-WEB: internet-based PV information tool

    Energy Technology Data Exchange (ETDEWEB)

    Cowley, P

    2003-07-01

    This report gives details of a project to create a web-based information system on photovoltaic (PV) systems for the British PV Association (PV-UK), for use by decision makers in government, the utilities, and the housing and construction sectors. The project, which aims to provide an easily accessible tool for UK companies, promote PV technology, increase competitiveness, and identify market opportunities, is described. The design of the web site, its implementation and evolution are discussed, along with the maintenance of the site by PV-UK and the opportunities offered to PV-UK Members.

  18. PV-WEB: internet-based PV information tool

    International Nuclear Information System (INIS)

    Cowley, P.

    2003-01-01

    This report gives details of a project to create a web-based information system on photovoltaic (PV) systems for the British PV Association (PV-UK), for use by decision makers in government, the utilities, and the housing and construction sectors. The project, which aims to provide an easily accessible tool for UK companies, promote PV technology, increase competitiveness, and identify market opportunities, is described. The design of the web site, its implementation and evolution are discussed, along with the maintenance of the site by PV-UK and the opportunities offered to PV-UK Members.

  19. Molecular tools for the construction of peptide-based materials.

    Science.gov (United States)

    Ramakers, B E I; van Hest, J C M; Löwik, D W P M

    2014-04-21

    Proteins and peptides are fundamental components of living systems where they play crucial roles at both functional and structural level. The versatile biological properties of these molecules make them interesting building blocks for the construction of bio-active and biocompatible materials. A variety of molecular tools can be used to fashion the peptides necessary for the assembly of these materials. In this tutorial review we shall describe five of the main techniques, namely solid phase peptide synthesis, native chemical ligation, Staudinger ligation, NCA polymerisation, and genetic engineering, that have been used to great effect for the construction of a host of peptide-based materials.

  20. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines are designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers the evaluation of different search engines and the interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas declared in semantic web languages for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  1. An Analysis of Literature Searching Anxiety in Evidence-Based Medicine Education

    Directory of Open Access Journals (Sweden)

    Hui-Chin Chang

    2014-01-01

    Introduction. Evidence-Based Medicine (EBM) is becoming a cornerstone of lifelong learning for healthcare personnel worldwide. This study aims to evaluate literature searching anxiety among graduate students practicing EBM. Method. The study participants were 48 graduate students enrolled in the EBM course at a medical university in central Taiwan. Student's t-test, Pearson correlation, multivariate regression, and interviews were used to evaluate the students' literature searching anxiety in the EBM course. The questionnaire used was the Literature Searching Anxiety Rating Scale (LSARS). Results. The sources of anxiety disclosed were uncertainty in database selection, literature evaluation and selection, requests for technical assistance, use of computer programs, English, and EBM education programs. Class performance was negatively related to the LSARS score; however, the correlation was statistically insignificant after adjustment for gender, degree program, age category and publication experience. Conclusion. This study helps in understanding the causes and extent of anxiety in order to plan a better teaching program that improves users' searching skills and their capability of utilizing information, while providing user-friendly facilities for evidence searching. In short, we need to upgrade learners' searching skills and reduce their anxiety, and we need to stress auxiliary teaching programs for those with prevalent and profound anxiety during literature searching.

  2. A Semidefinite Programming Based Search Strategy for Feature Selection with Mutual Information Measure.

    Science.gov (United States)

    Naghibi, Tofigh; Hoffmann, Sarah; Pfister, Beat

    2015-08-01

    Feature subset selection, as a special case of the general subset selection problem, has been the topic of a considerable number of studies due to the growing importance of data-mining applications. In the feature subset selection problem there are two main issues that need to be addressed: (i) finding an appropriate measure function that can be computed fairly quickly and robustly for high-dimensional data, and (ii) a search strategy to optimize the measure over the subset space in a reasonable amount of time. In this article, mutual information between features and class labels is considered to be the measure function. Two series expansions for mutual information are proposed, and it is shown that most heuristic criteria suggested in the literature are truncated approximations of these expansions. It is well known that searching the whole subset space is an NP-hard problem. Here, instead of the conventional sequential search algorithms, we suggest a parallel search strategy based on semidefinite programming (SDP) that can search through the subset space in polynomial time. By exploiting the similarities between the proposed algorithm and an instance of the maximum-cut problem in graph theory, the approximation ratio of this algorithm is derived and is compared with the approximation ratio of the backward elimination method. The experiments show that it can be misleading to judge the quality of a measure solely based on the classification accuracy, without taking the effect of the non-optimum search strategy into account.
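
    The SDP relaxation itself is beyond a short sketch, but the measure function is easy to demonstrate. The snippet below computes per-feature mutual information with the class labels and applies a greedy top-k pick, i.e. the kind of simple baseline the SDP search is compared against, not the SDP method itself:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

# I(X_i; Y) for every feature, estimated nonparametrically; the paper's
# subset measure I(subset; Y) is expanded in series built from such terms.
mi = mutual_info_classif(X, y, random_state=0)

# Greedy top-k selection: a sequential stand-in, not the SDP relaxation.
k = 5
print("selected feature indices:", np.argsort(mi)[::-1][:k])
```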

  3. Research on the optimization strategy of web search engine based on data mining

    Science.gov (United States)

    Chen, Ronghua

    2018-04-01

    With the wide application of search engines, web sites have become an important way for people to obtain information, but their content has grown in an increasingly explosive manner, making it very difficult for people to find the information they need. Current search engines cannot fully meet this need, so there is an urgent need for websites to provide personalized information services; data mining technology offers a breakthrough for this new challenge. In order to improve the accuracy with which people find information on websites, a website search engine optimization strategy based on data mining is proposed and verified by a search engine optimization experiment. The results show that the proposed strategy improves the accuracy with which people find information and reduces the time needed to find it; it thus has important practical value.

  4. Using the open Web as an information resource and scholarly Web search engines as retrieval tools for academic and research purposes

    OpenAIRE

    Filistea Naude; Chris Rensleigh; Adeline S.A. du Toit

    2010-01-01

    This study provided insight into the significance of the open Web as an information resource and Web search engines as research tools amongst academics. The academic staff establishment of the University of South Africa (Unisa) was invited to participate in a questionnaire survey and included 1188 staff members from five colleges. This study culminated in a PhD dissertation in 2008. One hundred and eighty-seven respondents participated in the survey, which gave a response rate of 15.7%. The re...

  5. Addressing special structure in the relevance feedback learning problem through aspect-based image search

    NARCIS (Netherlands)

    M.J. Huiskes (Mark)

    2004-01-01

    In this paper we focus on a number of issues regarding special structure in the relevance feedback learning problem, most notably the effects of image selection based on partial relevance on the clustering behavior of examples. We propose a simple scheme, aspect-based image search, which

  6. A data-based conservation planning tool for Florida panthers

    Science.gov (United States)

    Murrow, Jennifer L.; Thatcher, Cindy A.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    Habitat loss and fragmentation are the greatest threats to the endangered Florida panther (Puma concolor coryi). We developed a data-based habitat model and user-friendly interface so that land managers can objectively evaluate Florida panther habitat. We used a geographic information system (GIS) and the Mahalanobis distance statistic (D2) to develop a model based on broad-scale landscape characteristics associated with panther home ranges. Variables in our model were Euclidean distance to natural land cover, road density, distance to major roads, human density, amount of natural land cover, amount of semi-natural land cover, amount of permanent or semi-permanent flooded area–open water, and a cost–distance variable. We then developed a Florida Panther Habitat Estimator tool, which automates and replicates the GIS processes used to apply the statistical habitat model. The estimator can be used by persons with moderate GIS skills to quantify effects of land-use changes on panther habitat at local and landscape scales. Example applications of the tool are presented.
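
    The Mahalanobis distance statistic at the heart of the model is straightforward to compute. The sketch below uses synthetic stand-ins for the eight landscape variables; real inputs would be GIS raster values sampled inside panther home ranges:

```python
import numpy as np

def mahalanobis_d2(cells, used):
    """D^2 of every landscape cell from the mean of cells inside panther
    home ranges; low D^2 = conditions similar to known panther habitat."""
    mu = used.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(used, rowvar=False))
    diff = cells - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # per-row quadratic form

rng = np.random.default_rng(0)
used = rng.normal(0, 1, (500, 8))    # 8 variables sampled in home ranges
cells = rng.normal(0, 1.5, (10, 8))  # candidate landscape cells
print(mahalanobis_d2(cells, used).round(2))
```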

  7. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    As port performance becomes correlated with national competitiveness, the issue of port performance evaluation has gained significance. Port performance can be indicated by the port's service levels to ships (e.g., throughput, waiting time for berthing) as well as by the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop policies for making the port more effective and efficient. However, the evaluation is frequently conducted with a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is developed in MATLAB and Simulink based on queuing theory.
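
    A stochastic microsimulation of berth queuing in the spirit described (random arrival and service variation, multiple berths) can be prototyped briefly. The exponential distributions, rates, berth count, and performance indicators below are illustrative assumptions, and the authors' own model is built in MATLAB/Simulink:

```python
import numpy as np

def berth_simulation(n_ships=10000, berths=3, mean_iat=4.0,
                     mean_service=10.0, seed=0):
    """Microsimulation of ship arrivals at a multi-berth terminal with
    exponential inter-arrival and service times (data-fitted in practice)."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(mean_iat, n_ships))
    service = rng.exponential(mean_service, n_ships)
    free_at = np.zeros(berths)            # time each berth becomes free
    wait = np.empty(n_ships)
    for i, t in enumerate(arrivals):
        b = free_at.argmin()              # earliest-free berth
        start = max(t, free_at[b])
        wait[i] = start - t
        free_at[b] = start + service[i]
    utilization = service.sum() / (berths * free_at.max())
    return wait.mean(), utilization

print(berth_simulation())                 # (mean waiting time, berth utilization)
```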

  8. Development of a tool for knowledge base verification of expert system based on Design/CPN

    International Nuclear Information System (INIS)

    Kim, Jong Hyun

    1998-02-01

    Verification is necessary work in developing a reliable expert system; it is a process aimed at demonstrating whether a system meets its specified requirements. As expert systems are used in various applications, the verification of their knowledge bases takes an important position. The conventional Petri net approach, studied recently for verifying knowledge bases, has been found inadequate for the knowledge bases of large and complex systems, such as the alarm processing system of a nuclear power plant. Thus, we propose an improved method that models the knowledge base as an enhanced colored Petri net. In this study, we analyze the reachability and error characteristics of the knowledge base. Generally, the verification process requires computational support by automated tools. For this reason, this study developed a tool for knowledge base verification based on Design/CPN, a tool for editing, modeling, and simulating colored Petri nets. This tool uses the enhanced colored Petri net as its modeling method. By applying the tool to the knowledge base of a nuclear power plant, we found that it can successfully check most of the anomalies that can occur in a knowledge base.

  9. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

This paper presents the implementation of an optimal search strategy (OSS) for verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult step of the overall assembly, and a favourable influence of vibration and rotation movement on tolerance compensation was observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameter. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitude and frequencies of the robot gripper's vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible mating. We have verified through simulation that the search strategy is suitable for situations with unexpected events due to uncertainties.

  10. Composition-based statistics and translated nucleotide searches: Improving the TBLASTN module of BLAST

    Directory of Open Access Journals (Sweden)

    Schäffer Alejandro A

    2006-12-01

Background: TBLASTN is a mode of operation for BLAST that aligns protein sequences to a nucleotide database translated in all six frames. We present the first description of the modern implementation of TBLASTN, focusing on new techniques that were used to implement composition-based statistics for translated nucleotide searches. Composition-based statistics use the composition of the sequences being aligned to generate more accurate E-values, which allows for a more accurate distinction between true and false matches. Until recently, composition-based statistics were available only for protein-protein searches. They are now available as a command line option for recent versions of TBLASTN and as an option for TBLASTN on the NCBI BLAST web server. Results: We evaluate the statistical and retrieval accuracy of the E-values reported by a baseline version of TBLASTN and by two variants that use different types of composition-based statistics. To test the statistical accuracy of TBLASTN, we ran 1000 searches using scrambled proteins from the mouse genome and a database of human chromosomes. To test retrieval accuracy, we modernize and adapt to translated searches a test set previously used to evaluate the retrieval accuracy of protein-protein searches. We show that composition-based statistics greatly improve the statistical accuracy of TBLASTN, at a small cost to the retrieval accuracy. Conclusion: TBLASTN is widely used, as it is common to wish to compare proteins to chromosomes or to libraries of mRNAs. Composition-based statistics improve the statistical accuracy, and therefore the reliability, of TBLASTN results. The algorithms used by TBLASTN are not widely known, and some of the most important are reported here. The data used to test TBLASTN are available for download and may be useful in other studies of translated search algorithms.
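The composition-based adjustments themselves are intricate, but what they adjust is easy to state: the Karlin-Altschul E-value that BLAST reports. A small illustration in Python, with lambda and K set to typical published values for BLOSUM62 with standard gap costs (the composition-adjusted variants effectively rescale these parameters per sequence pair); the function name and example lengths are illustrative only:

```python
import math

def evalue(score, m, n, lam=0.267, K=0.041):
    """Karlin-Altschul expected number of chance alignments scoring >= score,
    for a query of effective length m against a database of effective length n."""
    return K * m * n * math.exp(-lam * score)

# Raw score 80, 300-residue query, 10 Mb of translated database:
print(evalue(80, m=300, n=10_000_000))  # about 0.07 expected chance hits
```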

  11. Older Cancer Patients' User Experiences With Web-Based Health Information Tools: A Think-Aloud Study.

    Science.gov (United States)

    Bolle, Sifra; Romijn, Geke; Smets, Ellen M A; Loos, Eugene F; Kunneman, Marleen; van Weert, Julia C M

    2016-07-25

Health information is increasingly presented on the Internet. Several Web design guidelines for older Web users have been proposed; however, these guidelines are often not applied in website development. Furthermore, although we know that older individuals use the Internet to search for health information, we lack knowledge on how they use and evaluate Web-based health information. This study evaluates user experiences with existing Web-based health information tools among older (≥ 65 years) cancer patients and survivors and their partners. The aim was to gain insight into usability issues and the perceived usefulness of cancer-related Web-based health information tools. We conducted video-recorded think-aloud observations for 7 Web-based health information tools, specifically 3 websites providing cancer-related information, 3 Web-based question prompt lists (QPLs), and 1 values clarification tool, with colorectal cancer patients or survivors (n=15) and their partners (n=8) (median age: 73; interquartile range 70-79). Participants were asked to think aloud while performing search, evaluation, and application tasks using the Web-based health information tools. Overall, participants perceived Web-based health information tools as highly useful and indicated a willingness to use such tools. However, they experienced problems in terms of usability and perceived usefulness due to difficulties in using navigational elements, shortcomings in the layout, a lack of instructions on how to use the tools, difficulties with comprehensibility, and a large amount of variety in terms of the preferred amount of information. Although participants frequently commented that it was easy for them to find requested information, we observed that the large majority of the participants were not able to find it. Overall, older cancer patients appreciate and are able to use cancer information websites. However, this study shows the importance of maintaining awareness of age-related problems.

  12. Older Cancer Patients’ User Experiences With Web-Based Health Information Tools: A Think-Aloud Study

    Science.gov (United States)

    Romijn, Geke; Smets, Ellen M A; Loos, Eugene F; Kunneman, Marleen; van Weert, Julia C M

    2016-01-01

Background: Health information is increasingly presented on the Internet. Several Web design guidelines for older Web users have been proposed; however, these guidelines are often not applied in website development. Furthermore, although we know that older individuals use the Internet to search for health information, we lack knowledge on how they use and evaluate Web-based health information. Objective: This study evaluates user experiences with existing Web-based health information tools among older (≥ 65 years) cancer patients and survivors and their partners. The aim was to gain insight into usability issues and the perceived usefulness of cancer-related Web-based health information tools. Methods: We conducted video-recorded think-aloud observations for 7 Web-based health information tools, specifically 3 websites providing cancer-related information, 3 Web-based question prompt lists (QPLs), and 1 values clarification tool, with colorectal cancer patients or survivors (n=15) and their partners (n=8) (median age: 73; interquartile range 70-79). Participants were asked to think aloud while performing search, evaluation, and application tasks using the Web-based health information tools. Results: Overall, participants perceived Web-based health information tools as highly useful and indicated a willingness to use such tools. However, they experienced problems in terms of usability and perceived usefulness due to difficulties in using navigational elements, shortcomings in the layout, a lack of instructions on how to use the tools, difficulties with comprehensibility, and a large amount of variety in terms of the preferred amount of information. Although participants frequently commented that it was easy for them to find requested information, we observed that the large majority of the participants were not able to find it. Conclusions: Overall, older cancer patients appreciate and are able to use cancer information websites. However, this study shows the importance of maintaining awareness of age-related problems.

  13. Identification of Fuzzy Inference Systems by Means of a Multiobjective Opposition-Based Space Search Algorithm

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2013-01-01

We introduce a new category of fuzzy inference systems designed with the aid of a multiobjective opposition-based space search algorithm (MOSSA). The proposed MOSSA is essentially a multiobjective space search algorithm improved by opposition-based learning, which employs a so-called opposite-numbers mechanism to speed up the convergence of the optimization algorithm. In the identification of the fuzzy inference system, the MOSSA is exploited to carry out the parametric identification of the fuzzy model as well as to realize its structural identification. Experimental results demonstrate the effectiveness of the proposed fuzzy models.
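The opposite-numbers mechanism at the heart of opposition-based learning is simple to state: for a variable x in [a, b], its opposite is a + b - x, and evaluating both a candidate and its opposite roughly doubles the chance of landing near an optimum early on. A minimal sketch with a toy objective (names and population are invented; this is not the full MOSSA):

```python
import random

def sphere(x):
    """Toy objective to minimize."""
    return sum(v * v for v in x)

def opposition_step(pop, bounds, f=sphere):
    """For each candidate, also evaluate its opposite a + b - x per
    dimension, and keep whichever of the pair scores better."""
    out = []
    for x in pop:
        opp = [a + b - v for v, (a, b) in zip(x, bounds)]
        out.append(min((x, opp), key=f))
    return out

bounds = [(-5.0, 5.0)] * 3
pop = [[random.uniform(a, b) for a, b in bounds] for _ in range(4)]
print([round(v, 3) for v in map(sphere, opposition_step(pop, bounds))])
```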

  14. Interactive text mining with Pipeline Pilot: a bibliographic web-based tool for PubMed.

    Science.gov (United States)

    Vellay, S G P; Latimer, N E Miller; Paillard, G

    2009-06-01

Text mining has become an integral part of all research in the medical field. Many text-analysis software platforms support only particular use cases. We show an example of a bibliographic tool that can be used to support virtually any use case in an agile manner. Here we focus on a Pipeline Pilot web-based application that interactively analyzes and reports on PubMed search results. This will be of interest to any scientist wishing to identify the most relevant papers in a topical area more quickly and to evaluate the results of query refinement. Links with Entrez databases help the biologist and the chemist alike. We illustrate this application with Leishmaniasis, a neglected tropical disease, as a case study.

  15. Selecting a risk-based tool to aid in decision making

    Energy Technology Data Exchange (ETDEWEB)

    Bendure, A.O.

    1995-03-01

Selecting a risk-based tool to aid in decision making is as much of a challenge as properly using the tool once it has been selected. Failure to consider customer and stakeholder requirements, and the technical bases and differences among risk-based decision-making tools, will produce confounding and/or politically unacceptable results when the tool is used. Selecting a risk-based decision-making tool must therefore be undertaken with the same, if not greater, rigor than the use of the tool once it is selected. This paper presents a process for selecting a risk-based tool appropriate to a set of prioritization or resource allocation tasks, discusses the results of applying the process to four risk-based decision-making tools, and identifies the "musts" for successful selection and implementation of a risk-based tool to aid in decision making.

  16. MAST – A Mobile Agent-based Security Tool

    Directory of Open Access Journals (Sweden)

    Marco Carvalho

    2004-08-01

One of the chief computer security problems is not the long list of viruses and other potential vulnerabilities, but the vast number of systems that continue to be easy prey, as their system administrators or owners simply are not able to keep up with all of the available patches, updates, or configuration changes needed to protect them from known vulnerabilities. Even up-to-date systems can become vulnerable to attacks due to inappropriate configuration or the combined use of applications and services. Our mobile agent-based security tool (MAST) is designed to bridge this gap and provide automated methods to make sure that all of the systems in a specific domain or network are secured and up-to-date with all patches and updates. The tool is also designed to check systems for misconfigurations that make them vulnerable. Additionally, the user interface is presented in a domain knowledge model known as a Concept Map that provides a continuous learning experience for the system administrator.

  17. SNPversity: a web-based tool for visualizing diversity

    Science.gov (United States)

    Schott, David A; Vinnakota, Abhinav G; Portwood, John L; Andorf, Carson M

    2018-01-01

Many stand-alone desktop software suites exist to visualize single nucleotide polymorphism (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualization tool that can be implemented on a Unix-like machine and served through a web browser accessible worldwide. SNPversity consists of an HDF5 database back-end for SNPs, a data exchange layer powered by TASSEL libraries that represents data in JSON format, and an interface layer using PHP to visualize SNP information. SNPversity displays data in real-time through a web browser in grids that are color-coded according to a given SNP's allelic status and mutational state. SNPversity is currently available at MaizeGDB, the maize community's database, and will soon be available at GrainGenes, the clade-oriented database for Triticeae and Avena species, including wheat, barley, rye, and oat. The code and documentation are uploaded to GitHub, and they are freely available to the public. We expect that the tool will be highly useful for other biological databases with a similar need to display SNP diversity through their web interfaces. Database URL: https://www.maizegdb.org/snpversity PMID:29688387

  18. Power law-based local search in spider monkey optimisation for lower order system modelling

    Science.gov (United States)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm, to obtain a better approximation for lower order systems that reflects almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO; the proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. The PLSMO algorithm is then applied to solve the lower order system modelling problem.

  19. Using Web-Based Technologies for Network Management Tools

    National Research Council Canada - National Science Library

    Agami, Arie

    1997-01-01

New solutions to the problems of current network management tools may be found in the increasingly popular World Wide Web, Internet tools such as Java, and remote database access through the Internet...

  20. TENTube: A Video-based Connection Tool Supporting Competence Development

    Directory of Open Access Journals (Sweden)

    Albert A Angehrn

    2008-07-01

The vast majority of knowledge management initiatives fail because they do not take sufficiently into account the emotional, psychological and social needs of individuals. Only if users see real value for themselves will they actively use and contribute their own knowledge to the system, and engage with other users. Connection dynamics can make this easier, and even enjoyable, by connecting people and bringing them closer through shared experiences such as playing a game together. A higher connectedness of people to other people, and to relevant knowledge assets, will motivate them to participate more actively and increase system usage. In this paper, we describe the design of TENTube, a video-based connection tool we are developing to support competence development. TENTube integrates rich profiling and network visualization and navigation with agent-enhanced game-like connection dynamics.

  1. GOMA: functional enrichment analysis tool based on GO modules

    Institute of Scientific and Technical Information of China (English)

    Qiang Huang; Ling-Yun Wu; Yong Wang; Xiang-Sun Zhang

    2013-01-01

Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology. A variety of enrichment analysis tools have been developed in recent years, but most output a long list of significantly enriched terms that are often redundant, making it difficult to extract the most meaningful functions. In this paper, we present GOMA, a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules. With this method, we systematically revealed functional GO modules, i.e., groups of functionally similar GO terms, via an optimization model and then ranked them by enrichment scores. Our new method simplifies enrichment analysis results by reducing redundancy, thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results.

2. Students are Confident Using Federated Search Tools as much as Single Databases. A Review of: Armstrong, A. (2009). Student perceptions of federated searching vs. single database searching. Reference Services Review, 37(3), 291-303. doi:10.1108/00907320910982785

    Directory of Open Access Journals (Sweden)

    Deena Yanofsky

    2011-09-01

Objective – To measure students' perceptions of the ease of use and efficacy of a federated search tool versus a single multidisciplinary database. Design – An evaluation worksheet employing a combination of quantitative and qualitative questions. Setting – A required, first-year English composition course taught at the University of Illinois at Chicago (UIC). Subjects – Thirty-one undergraduate students completed and submitted the worksheet. Methods – Students attended two library instruction sessions. The first session introduced participants to basic Boolean searching (using AND only), selecting appropriate keywords, and searching for books in the library catalogue. In the second library session, students were handed an evaluation worksheet and, with no introduction to the process of searching article databases, were asked to find relevant articles on a research topic of their own choosing using both a federated search tool and a single multidisciplinary database. The evaluation worksheet was divided into four sections: step-by-step instructions for accessing the single multidisciplinary database and the federated search tool; space to record search strings in both resources; space to record the titles of up to five relevant articles; and a series of quantitative and qualitative questions regarding ease of use, relevancy of results, overall preference (if any) between the two resources, likeliness of future use, and other preferred research tools. Half of the participants received a worksheet with instructions to search the federated search tool before the single database; the order was reversed for the other half of the students. The evaluation worksheet was designed to be completed in one hour. Participant responses to qualitative questions were analyzed, codified, and grouped into thematic categories. If a student mentioned more than one factor in responding to a question, their response was recorded in multiple categories.

  3. The Arabidopsis co-expression tool (act): a WWW-based tool and database for microarray-based gene expression analysis

    DEFF Research Database (Denmark)

    Jen, C. H.; Manfield, I. W.; Michalopoulos, D. W.

    2006-01-01

We present a new WWW-based tool for plant gene analysis, the Arabidopsis Co-Expression Tool (act), based on a large Arabidopsis thaliana microarray data set obtained from the Nottingham Arabidopsis Stock Centre. The co-expression analysis tool allows users to identify genes whose expression patterns are correlated; these lists can be examined using the novel clique finder tool to determine the sets of genes most likely to be regulated in a similar manner. In combination, these tools offer three levels of analysis: creation of correlation lists of co-expressed genes, refinement of these lists using two-dimensional scatter plots, and examination of the refined gene sets with the clique finder.

  4. Genomic-based-breeding tools for tropical maize improvement.

    Science.gov (United States)

    Chakradhar, Thammineni; Hindu, Vemuri; Reddy, Palakolanu Sudhakar

    2017-12-01

Maize has traditionally been the main staple diet in Southern Asia and Sub-Saharan Africa and is widely grown by millions of resource-poor small-scale farmers. Approximately 35.4 million hectares are sown to tropical maize, constituting around 59% of the maize area of the developing world. Tropical maize encounters tremendous challenges besides poor agro-climatic conditions, with average yields recorded below 3 tonnes/hectare, far less than the average of developed countries. Despite the poor yields, the demand for maize as food, feed, and fuel is continuously increasing in these regions. Heterosis breeding, introduced in the early 90s, improved maize yields significantly, but genetic gain is still a mirage, particularly for crops growing under marginal environments. Application of molecular markers has accelerated the pace of maize breeding to some extent. The availability of an array of sequencing and genotyping technologies offers unrivalled scope to improve precision in maize-breeding programs through modern approaches such as genomic selection, genome-wide association studies, and bulk segregant analysis-based sequencing. Superior alleles underlying complex traits can easily be identified and introgressed efficiently using these sequence-based approaches. Integration of genomic tools and techniques with advanced genetic resources such as nested association mapping and backcross nested association mapping could certainly address the genetic issues in maize improvement programs in developing countries. The huge diversity in tropical maize and its inherent capacity for doubled haploid technology make it well suited to next-generation genomic tools for accelerating production in marginal environments of the tropical and subtropical world. Precision in phenotyping is the key to the success of any molecular-breeding approach. This article reviews genomic technologies and their application to improving agronomic traits in tropical maize breeding.

  5. Exploring personalized searches using tag-based user profiles and resource profiles in folksonomy.

    Science.gov (United States)

    Cai, Yi; Li, Qing; Xie, Haoran; Min, Huaqin

    2014-10-01

With the increase in resource-sharing websites such as YouTube and Flickr, many shared resources have arisen on the Web. Personalized searches have become more important and challenging since users demand higher retrieval quality. To achieve this goal, personalized searches need to take users' personalized profiles and information needs into consideration. Collaborative tagging (also known as folksonomy) systems allow users to annotate resources with their own tags, which provides a simple but powerful way for organizing, retrieving and sharing different types of social resources. In this article, we examine the limitations of previous tag-based personalized searches. To handle these limitations, we propose a new method to model user profiles and resource profiles in collaborative tagging systems. We use a normalized term frequency to indicate the preference degree of a user on a tag. A novel search method using such profiles of users and resources is proposed to facilitate the desired personalization in resource searches. In our framework, instead of the keyword matching or similarity measurement used in previous works, the relevance measurement between a resource and a user query (termed the query relevance) is treated as a fuzzy satisfaction problem of a user's query requirements. We implement a prototype system called the Folksonomy-based Multimedia Retrieval System (FMRS). Experiments using the FMRS data set and the MovieLens data set show that our proposed method outperforms baseline methods.
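A minimal sketch of the profile idea in Python (the function names are invented, and min() here is only a stand-in for the paper's fuzzy satisfaction formulation): a user's preference degree for a tag is its usage count normalized by the user's most-used tag, and a query's relevance to a resource is the degree to which its weakest requirement is satisfied.

```python
from collections import Counter

def tag_profile(annotations):
    """Preference degree per tag: count normalized by the most-used tag."""
    counts = Counter(annotations)
    top = max(counts.values())
    return {tag: c / top for tag, c in counts.items()}

user = tag_profile(["jazz", "jazz", "live", "jazz", "piano"])
resource = tag_profile(["jazz", "piano", "piano"])

def query_relevance(query_tags, user_prof, res_prof):
    """Stand-in fuzzy satisfaction: each query tag is satisfied to the degree
    both profiles support it; overall relevance is the weakest requirement."""
    return min(min(user_prof.get(t, 0.0), res_prof.get(t, 0.0))
               for t in query_tags)

print(query_relevance(["jazz", "piano"], user, resource))
```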

  6. Efficient Multi-keyword Ranked Search over Outsourced Cloud Data based on Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Nie Mengxi

    2016-01-01

With the development of cloud computing, more and more data owners are motivated to outsource their data to cloud servers for greater flexibility and lower expenditure. Because the security of outsourced data must be guaranteed, encryption methods must be used, which makes traditional data utilization based on plaintext, e.g., keyword search, obsolete. Some schemes have been proposed for searching encrypted data, e.g., top-k single- or multiple-keyword retrieval, but their efficiency is not high enough to be practical in cloud computing. In this paper, we propose a new scheme based on homomorphic encryption to solve the challenging problem of privacy-preserving, efficient multi-keyword ranked search over outsourced cloud data. In our scheme, the inner product is adopted to measure relevance scores, and the technique of relevance feedback is used to reflect the search preferences of the data users. Security analysis shows that the proposed scheme can meet strict privacy requirements for such a secure cloud data utilization system. Performance evaluation demonstrates that the proposed scheme achieves low overhead in both computation and communication.
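Encryption machinery aside, the relevance model itself is a plain inner product between keyword-weight vectors. The sketch below shows only that plaintext scoring step with invented weights; in the actual scheme both vectors would be encrypted and the inner product evaluated homomorphically by the cloud server.

```python
import heapq

def inner_product(doc, query):
    """Relevance score: sum of weights for keywords shared with the query."""
    return sum(w * query.get(k, 0.0) for k, w in doc.items())

docs = {
    "d1": {"cloud": 0.9, "security": 0.7},
    "d2": {"cloud": 0.4, "pricing": 0.8},
    "d3": {"security": 0.9, "encryption": 0.6},
}
query = {"cloud": 1.0, "security": 0.5}

# Top-k ranked retrieval by relevance score:
top_k = heapq.nlargest(2, docs, key=lambda d: inner_product(docs[d], query))
print(top_k)  # ['d1', 'd3']
```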

  7. PLACNETw: a web-based tool for plasmid reconstruction from bacterial genomes.

    Science.gov (United States)

    Vielva, Luis; de Toro, María; Lanza, Val F; de la Cruz, Fernando

    2017-12-01

PLACNET is a graph-based tool for reconstruction of plasmids from next generation sequence paired-end datasets. PLACNET graphs contain two types of nodes (assembled contigs and reference genomes) and two types of edges (scaffold links and homology to references). Manual pruning of the graphs is a necessary requirement in PLACNET, but this is difficult for users without a solid bioinformatic background. PLACNETw, a webtool based on PLACNET, provides an interactive graphic interface, automates BLAST searches, and extracts the relevant information for decision making. It allows a user with domain expertise to visualize the scaffold graphs and related information of contigs as well as reference sequences, so that the pruning operations can be done interactively from a personal computer without the need for additional tools. After successful pruning, each plasmid becomes a separate connected component subgraph. The resulting data are automatically downloaded by the user. PLACNETw is freely available at https://castillo.dicom.unican.es/upload/. Contact: delacruz@unican.es. A tutorial video and several solved examples are available at https://castillo.dicom.unican.es/placnetw_video/ and https://castillo.dicom.unican.es/examples/.

  8. GENECODIS-Grid: An online grid-based tool to predict functional information in gene lists

    International Nuclear Information System (INIS)

    Nogales, R.; Mejia, E.; Vicente, C.; Montes, E.; Delgado, A.; Perez Griffo, F. J.; Tirado, F.; Pascual-Montano, A.

    2007-01-01

In this work we introduce GeneCodis-Grid, a grid-based alternative to the bioinformatics tool GeneCodis, which integrates different sources of biological information to search for biological features (annotations) that frequently co-occur in a set of genes and ranks them by statistical significance. GeneCodis-Grid is a web-based application that takes advantage of two independent grid networks and a computer cluster, managed by a meta-scheduler and a web server that hosts the application. The mining of concurrent biological annotations provides significant information for the functional analysis of gene lists obtained from high-throughput experiments in biology. Given the large popularity of this tool, which has registered more than 13000 visits since its publication in January 2007, there is a strong need to let users from different sites access the system simultaneously. In addition, the complexity of some of the statistical tests used in this approach makes the technique a good candidate for implementation in an opportunistic grid environment.

  9. Development of Nylon Based FDM Filament for Rapid Tooling Application

    Science.gov (United States)

    Singh, R.; Singh, S.

    2014-04-01

There has been a critical need for the development of a cost-effective nylon-based wire to be used as feedstock filament for fused deposition modelling (FDM) machines. Hitherto, very little work has been reported on developing an alternative to the acrylonitrile butadiene styrene (ABS) wire presently used in most FDM machines. The present research work is focused on the development of a nylon-based wire as an alternative to ABS wire (to be used as feedstock filament on FDM) without changing any hardware or software of the machine. For the present study, aluminium oxide (Al2O3) in different proportions was used as an additive with nylon fibre. A single-screw extruder was used for wire preparation, and the wire thus produced was tested on FDM. The mechanical properties, i.e. tensile strength and percentage elongation, of the finally developed wire were optimized by the Taguchi L9 technique. The work represents a major development in reducing cost and time in rapid tooling applications.

  10. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    Science.gov (United States)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

Recent years have witnessed a great success of social media websites. Tag-based image search is an important approach to access the image content of interest on these websites. However, the existing ranking methods for tag-based image search frequently return results that are irrelevant or lacking in diversity. This chapter presents a diverse relevance ranking scheme which simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both visual information of images and semantic information of associated tags. Then semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm which optimizes Average Diverse Precision (ADP), a novel measure that is extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
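The greedy ordering idea can be sketched compactly: at each step, pick the image whose relevance, discounted by its similarity to images already ranked, is highest. The Python fragment below is an illustration with invented scores and a simple penalty term, not the chapter's exact ADP-optimizing algorithm:

```python
def greedy_diverse_rank(relevance, similarity, lam=0.5):
    """At each step, select the item maximizing relevance minus
    lam * (max similarity to anything already ranked)."""
    remaining = set(relevance)
    ranked = []
    while remaining:
        def gain(i):
            penalty = max((similarity[i][j] for j in ranked), default=0.0)
            return relevance[i] - lam * penalty
        best = max(remaining, key=gain)
        ranked.append(best)
        remaining.remove(best)
    return ranked

rel = {"a": 0.9, "b": 0.85, "c": 0.5}
sim = {"a": {"b": 0.95, "c": 0.1},
       "b": {"a": 0.95, "c": 0.1},
       "c": {"a": 0.1, "b": 0.1}}
print(greedy_diverse_rank(rel, sim))  # near-duplicate 'b' drops below 'c'
```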

  11. EARS: An Online Bibliographic Search and Retrieval System Based on Ordered Explosion.

    Science.gov (United States)

    Ramesh, R.; Drury, Colin G.

    1987-01-01

    Provides overview of Ergonomics Abstracts Retrieval System (EARS), an online bibliographic search and retrieval system in the area of human factors engineering. Other online systems are described, the design of EARS based on inverted file organization is explained, and system expansions including a thesaurus are discussed. (Author/LRW)

  12. Agent-oriented Architecture for Task-based Information Search System

    NARCIS (Netherlands)

    Aroyo, Lora; de Bra, Paul M.E.; De Bra, P.; Hardman, L.

    1999-01-01

The reported research discusses an agent-oriented architecture for an educational information search system, AIMS, a task-based learner support system. It is implemented within the context of the 'Courseware Engineering' on-line course at the Faculty of Educational Science and Technology, University of Twente.

  13. COORDINATE-BASED META-ANALYTIC SEARCH FOR THE SPM NEUROIMAGING PIPELINE

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Szewczyk, Marcin; Rasmussen, Peter Mondrup

    2009-01-01

BredeQuery offers a direct link from SPM5 to the Brede Database coordinate-based search engine. BredeQuery is able to 'grab' brain location coordinates from the SPM windows and enter them as a query for the Brede Database. Moreover, results of the query can be displayed in an SPM window and/or exported.

  14. A novel approach towards skill-based search and services of Open Educational Resources

    NARCIS (Netherlands)

    Ha, Kyung-Hun; Niemann, Katja; Schwertel, Uta; Holtkamp, Philipp; Pirkkalainen, Henri; Börner, Dirk; Kalz, Marco; Pitsilis, Vassilis; Vidalis, Ares; Pappa, Dimitra; Bick, Markus; Pawlowski, Jan; Wolpers, Martin

    2011-01-01

    Ha, K.-H., Niemann, K., Schwertel, U., Holtkamp, P., Pirkkalainen, H., Börner, D. et al (2011). A novel approach towards skill-based search and services of Open Educational Resources. In E. Garcia-Barriocanal, A. Öztürk, & M. C. Okur (Eds.), Metadata and Semantics Research: 5th International

  15. Web-Based Search and Plot System for Nuclear Reaction Data

    International Nuclear Information System (INIS)

    Otuka, N.; Nakagawa, T.; Fukahori, T.; Katakura, J.; Aikawa, M.; Suda, T.; Naito, K.; Korennov, S.; Arai, K.; Noto, H.; Ohnishi, A.; Kato, K.

    2005-01-01

A web-based search and plot system for nuclear reaction data has been developed, covering experimental data in EXFOR format and evaluated data in ENDF format. The system is implemented for Linux OS, with Perl and MySQL used for the CGI scripts and the database manager, respectively. Two prototypes, for experimental and evaluated data, are presented.

  16. MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.

    Science.gov (United States)

    Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn

    2013-12-01

We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in the presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type of visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users (researchers working on human motion synthesis and analysis), including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.

  17. Parallel content-based sub-image retrieval using hierarchical searching.

    Science.gov (United States)

    Yang, Lin; Qi, Xin; Xing, Fuyong; Kurc, Tahsin; Saltz, Joel; Foran, David J

    2014-04-01

The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative searches of similar prostate image patches. Given a representative image patch (sub-image), the algorithm returns a ranked ensemble of image patches throughout the entire whole-slide histology section which exhibit the most similar morphologic characteristics. This is accomplished by first performing hierarchical searching based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then further refined in the second stage of processing by computing a color histogram from eight equally divided segments within each square annular bin defined in the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the searching procedure. Using this strategy, the query patch is broadcast to all worker processes, and each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches in the image. The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens, and we achieved excellent image retrieval performance: the recall rate within the first 40 retrieved image patches is ∼90%. Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
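The first-stage descriptor can be sketched briefly. The fragment below is a simplified grayscale stand-in for the hierarchical annular histogram (the published HAH and its second-stage color refinement are richer, and the function name is invented): intensity histograms are collected over concentric square rings around the patch center and concatenated, so the descriptor preserves coarse spatial layout as well as intensity distribution.

```python
import numpy as np

def annular_histogram(patch, n_rings=4, n_bins=8):
    """Concatenated intensity histograms over concentric square rings."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    # Chebyshev distance yields square (annular) rings around the center.
    ring = np.minimum(
        (np.maximum(np.abs(ys - cy), np.abs(xs - cx)) / (max(h, w) / 2)
         * n_rings).astype(int), n_rings - 1)
    feats = []
    for r in range(n_rings):
        hist, _ = np.histogram(patch[ring == r], bins=n_bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))  # normalize each ring
    return np.concatenate(feats)

patch = np.random.default_rng(0).integers(0, 256, (64, 64))
print(annular_histogram(patch).shape)  # (32,) = 4 rings x 8 bins
```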

  18. Open-Source Tools for Enhancing Full-Text Searching of OPACs: Use of Koha, Greenstone and Fedora

    Science.gov (United States)

    Anuradha, K. T.; Sivakaminathan, R.; Kumar, P. Arun

    2011-01-01

    Purpose: There are many library automation packages available as open-source software, comprising two modules: staff-client module and online public access catalogue (OPAC). Although the OPAC of these library automation packages provides advanced features of searching and retrieval of bibliographic records, none of them facilitate full-text…

  19. Theoretical and Empirical Analyses of an Improved Harmony Search Algorithm Based on Differential Mutation Operator

    Directory of Open Access Journals (Sweden)

    Longquan Yong

    2012-01-01

Harmony search (HS) is an emerging metaheuristic optimization algorithm. In this paper, an improved harmony search method based on a differential mutation operator (IHSDE) is proposed to deal with optimization problems. Since population diversity plays an important role in the behavior of evolutionary algorithms, the aim of this paper is to calculate the expected population mean and variance of IHSDE from a theoretical viewpoint. Numerical results, compared with HSDE and NGHS, show that the IHSDE method has good convergence properties over a test suite of well-known benchmark functions.
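A compact sketch of the core idea under the usual HS template (parameter values and operator placement are illustrative, one plausible reading rather than the paper's exact construction): a new harmony is improvised per dimension either from memory or at random, with the memory-consideration step replaced by a differential mutation x_r1 + F(x_r2 - x_r3) over three randomly chosen harmonies, which injects the extra population diversity the paper analyzes.

```python
import random

def ihsde(f, dim, bounds, hms=10, hmcr=0.9, F=0.5, iters=2000, seed=0):
    """Harmony search whose memory step uses a differential mutation."""
    random.seed(seed)
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                # Differential mutation over three distinct random harmonies.
                r1, r2, r3 = random.sample(memory, 3)
                v = r1[d] + F * (r2[d] - r3[d])
            else:
                v = random.uniform(lo, hi)       # random consideration
            new.append(min(max(v, lo), hi))      # clamp to bounds
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new                  # replace worst harmony
    return min(memory, key=f)

sphere = lambda x: sum(v * v for v in x)
print(sphere(ihsde(sphere, dim=5, bounds=(-10, 10))))  # near 0
```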

  20. Mobile Visual Search Based on Histogram Matching and Zone Weight Learning

    Science.gov (United States)

    Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong

    2018-01-01

In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging tf-idf weighted histogram matching with the weighting strategy of compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.

  1. New generation of the multimedia search engines

    Science.gov (United States)

    Mijes Cruz, Mario Humberto; Soto Aldaco, Andrea; Maldonado Cano, Luis Alejandro; López Rodríguez, Mario; Rodríguez Vázqueza, Manuel Antonio; Amaya Reyes, Laura Mariel; Cano Martínez, Elizabeth; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Flores Secundino, Jesús Abimelek; Rivera Martínez, José Luis; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Sánchez Valenzuela, Juan Carlos; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro

    2016-09-01

Current search engines are based upon search methods that involve the combination of words (text-based search), which has been efficient until now. However, the Internet's growing demand indicates that there is more diversity on it with each passing day. Text-based searches are becoming limited, as most of the information on the Internet is found in different types of content, denominated multimedia content (images, audio files, video files). Indeed, what needs to be improved in current search engines is search content and precision, as well as an accurate display of the search results expected by the user. Any search can be more precise if it uses more text parameters, but that does not help improve the content or speed of the search itself. One solution is to improve search engines through the characterization of the content of multimedia files. In this article, an analysis of new generation multimedia search engines is presented, focusing on the needs arising from new technologies. Multimedia content has become a central part of the flow of information in our daily life, which reflects the necessity of having multimedia search engines as well as knowing the real tasks they must fulfill. Through this analysis, it is shown that there are not many search engines that can perform content searches. Research on new generation multimedia search engines is a multidisciplinary area in constant growth, generating tools that satisfy the different needs of new generation systems.

  2. Web based educational tool for neural network robot control

    Directory of Open Access Journals (Sweden)

    Jure Čas

    2007-05-01

This paper describes an application for teleoperation of the SCARA robot via the internet. The SCARA robot is used by students of mechatronics at the University of Maribor as a remote educational tool. The developed software consists of two parts, i.e., the continuous neural network sliding mode controller (CNNSMC) and the graphical user interface (GUI). The application is based on two well-known commercially available software packages, i.e., MATLAB/Simulink and LabVIEW. MATLAB/Simulink and the DSP2 Library for Simulink are used for control algorithm development, simulation and executable code generation. While this code executes on the DSP-2 Roby controller and drives the real process through the analog and digital I/O lines, a LabVIEW virtual instrument (VI) running on the PC is used as the user front end. The LabVIEW VI provides the ability for on-line parameter tuning, signal monitoring, on-line analysis and, via Remote Panels technology, also teleoperation. The main advantage of a CNNSMC is the exploitation of its self-learning capability. When friction or an unexpected impediment occurs, for example, the user of a remote application has no information about the changed robot dynamics and thus is unable to compensate for it manually. This is no longer a control problem because, when a CNNSMC is used, any approximation of changed robot dynamics is estimated independently of the remote user.

  3. An expert system based software sizing tool, phase 2

    Science.gov (United States)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.

  4. A hybrid firefly algorithm and pattern search technique for SSSC based power oscillation damping controller design

    Directory of Open Access Journals (Sweden)

    Srikanta Mahapatra

    2014-12-01

In this paper, a novel hybrid Firefly Algorithm and Pattern Search (h-FAPS) technique is proposed for designing a Static Synchronous Series Compensator (SSSC)-based power oscillation damping controller. The proposed h-FAPS technique takes advantage of the global search capability of FA and the local search facility of PS. In order to tackle the drawback of using a remote signal that may impact the reliability of the controller, a modified signal equivalent to the remote speed deviation signal is constructed from local measurements. The performances of the proposed controllers are evaluated in single-machine infinite-bus (SMIB) and multi-machine power systems subjected to various transient disturbances. To show the effectiveness and robustness of the proposed design approach, simulation results are presented and compared with some recently published approaches such as Differential Evolution (DE) and Particle Swarm Optimization (PSO). It is observed that the proposed approach yields superior damping performance compared to some recently reported approaches.

  5. Three hybridization models based on local search scheme for job shop scheduling problem

    Science.gov (United States)

    Balbi Fraga, Tatiana

    2015-05-01

This work presents three different hybridization models based on the general schema of local search heuristics, named Hybrid Successive Application, Hybrid Neighborhood, and Hybrid Improved Neighborhood. Although similar approaches may already have been presented in the literature in other contexts, in this work these models are applied to analyze the solution of the job shop scheduling problem with the heuristics Taboo Search and Particle Swarm Optimization. Besides, we investigate some aspects that must be considered in order to achieve better solutions than those obtained by the original heuristics. The results demonstrate that the algorithms derived from these three hybrid models are more robust than the original algorithms and able to obtain better results than those found by the single Taboo Search.

  6. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    Science.gov (United States)

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-10-01

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
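The objective such an algorithm optimizes is easy to state: in the 2D HP model, a fold is a self-avoiding walk on the square lattice, and the energy to minimize awards -1 for every pair of hydrophobic (H) residues that are lattice neighbours but not chain neighbours. A minimal scorer in Python (illustrative only, not the paper's implementation):

```python
def hp_energy(sequence, moves):
    """Energy of a 2D HP fold: -1 for each H-H pair adjacent on the lattice
    but not consecutive in the chain. Returns None for self-intersections."""
    step = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}
    pos, coords = (0, 0), [(0, 0)]
    for m in moves:
        dx, dy = step[m]
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in coords:
            return None  # not a self-avoiding walk
        coords.append(pos)
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip chain neighbours
            if sequence[i] == "H" == sequence[j]:
                (xa, ya), (xb, yb) = coords[i], coords[j]
                if abs(xa - xb) + abs(ya - yb) == 1:
                    energy -= 1
    return energy

# A 2x2 turn brings the first and last H into lattice contact:
print(hp_energy("HPPH", "RUL"))  # -1
```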

  7. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    Science.gov (United States)

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364

  8. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    Science.gov (United States)

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals.
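The BST and Haar-wavelet machinery is what makes the proposed framework fast; the underlying question it accelerates is the classic one answered by the two-sample Kolmogorov-Smirnov statistic. For contrast, here is the naive brute-force scan that such a framework avoids (the function name is invented; assumes scipy is available):

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_change_point(x, min_seg=30):
    """Scan candidate split points and return the split where the two sides
    differ most under the two-sample Kolmogorov-Smirnov statistic. Each
    candidate costs a full KS test, which is the expense BSTKS sidesteps."""
    best_t, best_stat = None, 0.0
    for t in range(min_seg, len(x) - min_seg):
        stat = ks_2samp(x[:t], x[t:]).statistic
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
print(ks_change_point(x))  # detected split near index 200
```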

  9. Construction of web-based nutrition education contents and searching engine for usage of healthy menu of children

    Science.gov (United States)

    Lee, Tae-Kyong; Chung, Hea-Jung; Park, Hye-Kyung; Lee, Eun-Ju; Nam, Hye-Seon; Jung, Soon-Im; Cho, Jee-Ye; Lee, Jin-Hee; Kim, Gon; Kim, Min-Chan

    2008-01-01

A diet habit developed in childhood lasts for a lifetime; in this sense, nutrition education and early exposure to healthy menus in childhood are important. Children these days have easy access to the internet, so a web-based nutrition education program is an effective tool for the nutrition education of children. This site provides nutrition education material for children featuring characters that are personified nutrients. The 151 menus are stored in the site together with video scripts of the cooking process. The menus are classified by criteria based on age, menu type, and the ethnic origin of the menu. The site provides a search function with three kinds of search conditions: keywords, menu type, and 'between' expressions on nutrient values such as calories and other nutrients. The site is developed with the Windows 2003 Server operating system, the ZEUS 5 web server, the development language JSP, and the database management system Oracle 10g. PMID:20126375

  10. Integration issues of information engineering based I-CASE tools

    OpenAIRE

    Kurbel, Karl; Schnieder, Thomas

    1994-01-01

    Problems and requirements regarding integration of methods and tools across phases of the software-development life cycle are discussed. Information engineering (IE) methodology and I-CASE (integrated CASE) tools supporting IE claim to have an integrated view across major stages of enterprise-wide information-system development: information strategy planning, business area analysis, system design, and construction. In the main part of this paper, two comprehensive I-CASE tools, ADW (Applicati...

  11. A Novel Chaos-Based Voice Controlled FTP Tool

    Directory of Open Access Journals (Sweden)

    Muhammed Maruf Ozturk

    2015-08-01

Various tools have so far been developed to manage file transfer operations. However, these tools cannot adequately support secure transfers, and little work has yet been done on encrypted voice-controlled systems. Addressing this gap, we investigate how to build a useful and secure tool. This work presents a novel, improved voice-controlled FTP tool, Wb-CFTP, based on a chaotic system. A chaotic system called the logistic map is associated with Wb-FTP, designed on the basis of Asp.Net and C. Here we depict the prominence of encryption in voice-controlled systems.
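The logistic map mentioned here is the textbook recurrence x_{n+1} = r * x_n * (1 - x_n), which behaves chaotically for r near 4. As an illustration only (a toy sketch, not the paper's construction, and a bare XOR stream like this is not secure practice), a keystream can be drawn from the map's orbit and XORed with command data:

```python
def logistic_keystream(n, x0=0.7, r=3.99):
    """Byte stream from the chaotic logistic map x_{n+1} = r*x_n*(1 - x_n)."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_crypt(data: bytes, key_params=(0.7, 3.99)) -> bytes:
    """Symmetric XOR cipher keyed by the map's initial condition and r."""
    ks = logistic_keystream(len(data), *key_params)
    return bytes(b ^ k for b, k in zip(data, ks))

msg = b"put report.pdf"
enc = xor_crypt(msg)
print(xor_crypt(enc) == msg)  # True: XOR with the same keystream decrypts
```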

  12. A novel framework for diagnosing automatic tool changer and tool life based on cloud computing

    Directory of Open Access Journals (Sweden)

    Shang-Liang Chen

    2016-03-01

Tool change is one of the most frequently performed machining processes, and if there is improper percussion as the tool's position is changed, the spindle bearing can be damaged. A spindle malfunction can cause problems such as a dropped tool or bias in a machined hole. The measures currently available on machine tools to avoid such issues only involve determining whether the tool-clamping state is correct, using the spindle and the air adhesion method, which is also used to satisfy the high precision required of mechanical components. Such measures cannot be applied to every type of machine tool, and improper tapping of the spindle during an automatic tool change cannot be detected. Therefore, this study proposes a new type of diagnostic framework that combines cloud computing and vibration sensors, in which tool changes are automatically diagnosed by an architecture that identifies abnormalities, thereby enhancing the reliability and productivity of the machine and equipment.

  13. An informatics supported web-based data annotation and query tool to expedite translational research for head and neck malignancies

    International Nuclear Information System (INIS)

    Amin, Waqas; Kang, Hyunseok P; Egloff, Ann Marie; Singh, Harpreet; Trent, Kerry; Ridge-Hetrick, Jennifer; Seethala, Raja R; Grandis, Jennifer; Parwani, Anil V

    2009-01-01

The Specialized Program of Research Excellence (SPORE) in Head and Neck Cancer neoplasm virtual biorepository is a bioinformatics-supported system that incorporates data from various clinical, pathological, and molecular systems into a single architecture based on a set of common data elements (CDEs) that provide semantic and syntactic interoperability of data sets. The components of this annotation tool include the common data elements, derived from the College of American Pathologists (CAP) checklist and North American Association of Central Cancer Registries (NAACCR) standards; the data entry tool, a portable and flexible Oracle-based device that is an easily mastered web-based tool; and the data query tool, which helps investigators and researchers search de-identified information within the warehouse/resource through a 'point and click' interface, enabling only the selected data elements to be copied into a data mart using a multidimensional model derived from the warehouse's relational structure. The SPORE Head and Neck Neoplasm Database contains multimodal datasets that are accessible to investigators via an easy-to-use query tool. The database currently holds 6553 cases and 10607 tumor accessions, among which there are 965 metastatic, 4227 primary, 1369 recurrent, and 483 new primary cases. Data disclosure is strictly regulated by the user's authorization. The SPORE Head and Neck Neoplasm Virtual Biorepository is a robust translational biomedical informatics tool that can facilitate basic science, clinical, and translational research. The data query tool acts as a central source, providing a mechanism for researchers to efficiently find clinically annotated datasets and biospecimens relevant to their research areas. The tool protects patient privacy by revealing only de-identified data in accordance with regulations and the approvals of the IRB and scientific review committee.

  14. Allen Brain Atlas-Driven Visualizations: a web-based gene expression energy visualization tool.

    Science.gov (United States)

    Zaldivar, Andrew; Krichmar, Jeffrey L

    2014-01-01

    The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers its own search engine and software for researchers to view its growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of its tools limit the number of genes and brain structures researchers can view at once. To complement this work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows for easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental details. ABADV is currently supported on modern web browsers and is compatible with in situ hybridization expression energy data from the Allen Mouse Brain Atlas. By creating this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analyses. In the future, we hope to extend ABADV across multiple data resources.

  16. Evidence-based decision making and asthma in the internet age: the tools of the trade.

    Science.gov (United States)

    Jadad, A R

    2002-01-01

    At the dawn of the Information Age, the practice of evidence-based decision making (EBDM) is still hindered by many important barriers related to the decision makers, to the evidence per se, or to the health system. Some of these barriers, particularly those related to the distillation, dissemination and packaging of research evidence, could be overcome by recent and ongoing developments in portable/wearable computers, internet appliances, multimedia and wireless broadband internet traffic. This article describes specific EBDM-related tools, with emphasis on internet-enabled "how-to" books and on tools to improve the quality of research reporting, formulate questions, search for evidence, access journals, systematic reviews and guidelines, interact with organizations promoting EBDM, and tailor evidence to individual cases. However, thinking that all barriers to the practice of EBDM could be solved by fancy information technology is naïve. Barriers related to the generation, interpretation, integration and use of the evidence demand more complex and perhaps unfeasible solutions, as overcoming them will require substantial changes in the structure of the health system, in the politics of science and in the way in which humans think and behave.

  17. GPU-BSM: a GPU-based tool to map bisulfite-treated reads.

    Directory of Open Access Journals (Sweden)

    Andrea Manconi

    Cytosine DNA methylation is an epigenetic mark implicated in several biological processes. Bisulfite treatment of DNA is acknowledged as the gold-standard technique to study methylation. This technique introduces changes in the genomic DNA by converting cytosines to uracils while 5-methylcytosines remain nonreactive. During PCR amplification, 5-methylcytosines are amplified as cytosines, whereas uracils and thymines are amplified as thymines. To detect methylation levels, bisulfite-treated reads must be aligned against a reference genome. Mapping these reads to a reference genome represents a significant computational challenge, mainly due to the increased search space and the loss of information introduced by the treatment. To deal with this computational challenge we devised GPU-BSM, a tool based on modern Graphics Processing Units (GPUs), hardware accelerators that are increasingly being used to speed up general-purpose scientific applications. GPU-BSM maps bisulfite-treated reads from both whole genome bisulfite sequencing and reduced representation bisulfite sequencing, and estimates methylation levels. Due to the massive parallelization obtained by exploiting graphics cards, GPU-BSM aligns bisulfite-treated reads faster than other cutting-edge solutions, while outperforming most of them in terms of uniquely mapped reads.
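
    A minimal sketch of the standard "three-letter" conversion trick that bisulfite mappers rely on (the read and reference fragments are invented, and GPU-BSM's actual GPU pipeline is considerably more involved):

      # Standard three-letter trick used by bisulfite mappers: convert C->T in
      # reads and in the forward reference so that bisulfite-induced C->T
      # changes no longer count as mismatches. Illustrative only.

      def bisulfite_convert(seq: str, strand: str = "+") -> str:
          """Collapse the base affected by bisulfite conversion."""
          seq = seq.upper()
          if strand == "+":             # C->T on the forward strand
              return seq.replace("C", "T")
          return seq.replace("G", "A")  # G->A mimics conversion on the reverse strand

      read = "ACGTTGCTTACG"             # hypothetical bisulfite read
      ref  = "ACGTCGCTTACG"             # hypothetical reference fragment

      # After conversion the read matches the reference exactly, so a standard
      # aligner can place it; methylation is then called by re-comparing the
      # original read Cs against reference Cs at the mapped position.
      assert bisulfite_convert(read) == bisulfite_convert(ref)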

  18. Molecular Imaging: A Useful Tool for the Development of Natural Killer Cell-Based Immunotherapies

    Directory of Open Access Journals (Sweden)

    Prakash Gangadaran

    2017-09-01

    Molecular imaging is a relatively new discipline that allows visualization, characterization, and measurement of biological processes in living subjects, including humans, at a cellular and molecular level. The interaction between cancer cells and natural killer (NK) cells is complex and incompletely understood. Despite our limited knowledge, progress in the search for immune cell therapies against cancer could be significantly improved by dynamic and non-invasive visualization and tracking of immune cells and by visualization of the response of cancer cells to therapies in preclinical and clinical studies. Molecular imaging is an essential tool for these studies, and a multimodal molecular imaging approach can be applied to monitor immune cells in vivo, for instance, to visualize therapeutic effects. In this review, we discuss the role of NK cells in cancer therapies and the preclinical and clinical usefulness of molecular imaging in NK cell-based therapies. Furthermore, we discuss different molecular imaging modalities for use with NK cell-based therapies, and their preclinical and clinical applications in animal and human subjects. Molecular imaging has contributed to the development of NK cell-based therapies against cancers in animal models and to the refinement of current cell-based cancer immunotherapies. Developing sensitive and reproducible non-invasive molecular imaging technologies for in vivo NK cell monitoring and for real-time assessment of therapeutic effects will accelerate the development of NK cell therapies.

  19. A Novel Quad Harmony Search Algorithm for Grid-Based Path Finding

    Directory of Open Access Journals (Sweden)

    Saso Koceski

    2014-09-01

    A novel approach to the problem of grid-based path finding is introduced. The method is a block-based search algorithm founded on two algorithms: the quad-tree algorithm, which greatly decreases the time needed to compute a solution, and the harmony search (HS) algorithm, a meta-heuristic used to obtain the optimal solution. This quad HS algorithm uses the quad-tree decomposition of free space in the grid to mark free areas and treat each of them as a single node, which greatly improves execution time. The results of the quad HS algorithm were compared with those of other meta-heuristic algorithms, i.e., ant colony, genetic algorithm, particle swarm optimization and simulated annealing, and it obtained the best results in terms of time while still producing the optimal path.
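
    A minimal sketch of the quad-tree decomposition step, assuming a binary occupancy grid (the harmony-search layer that then plans over the resulting block graph is not shown):

      # Recursively split the occupancy grid until each block is uniformly free
      # or uniformly blocked, so a whole free block can be treated as a single
      # search node.

      def quadtree(grid, x, y, size):
          """Return (x, y, size, free) uniform blocks covering the region."""
          cells = [grid[j][i] for j in range(y, y + size) for i in range(x, x + size)]
          if all(c == 0 for c in cells) or all(c == 1 for c in cells):
              return [(x, y, size, cells[0] == 0)]
          half = size // 2
          blocks = []
          for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
              blocks += quadtree(grid, x + dx, y + dy, half)
          return blocks

      grid = [[0, 0, 0, 0],   # 0 = free, 1 = obstacle (hypothetical 4x4 map)
              [0, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]]
      print(quadtree(grid, 0, 0, 4))  # the top-left 2x2 stays one free node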

  20. Road Traffic Congestion Management Based on a Search-Allocation Approach

    Directory of Open Access Journals (Sweden)

    Raiyn Jamal

    2017-03-01

    This paper introduces a new scheme for road traffic management in smart cities, aimed at reducing road traffic congestion. The scheme is based on a combination of searching, updating, and allocation techniques (SUA). The SUA approach is proposed to reduce the processing time for forecasting the conditions of all road sections in real time, which is typically considerable and complex. It searches for the shortest route based on historical observations, then computes travel time forecasts based on vehicular location in real time. Using updated information, which includes travel time forecasts and accident forecasts, the vehicle is allocated the appropriate road section. The novelty of the SUA scheme lies in its updating of vehicle information at every time step to reduce traffic congestion. Furthermore, the SUA approach supports autonomy and management by self-regulation, which recommends its use in smart cities that support internet of things (IoT) technologies.
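
    The "search" step amounts to a shortest-path computation over a road graph weighted by forecast travel times; a minimal sketch with an invented graph (the update and allocation steps are not shown):

      # Dijkstra over a road graph whose edge weights are forecast travel
      # times in minutes. Graph and weights are illustrative.
      import heapq

      def shortest_route(graph, src, dst):
          dist, prev = {src: 0.0}, {}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  break
              if d > dist.get(u, float("inf")):
                  continue                      # stale queue entry
              for v, w in graph[u]:
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          path, node = [], dst
          while node != src:                    # walk predecessors back to src
              path.append(node)
              node = prev[node]
          return [src] + path[::-1], dist[dst]

      roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
               "C": [("B", 1), ("D", 8)], "D": []}
      print(shortest_route(roads, "A", "D"))    # (['A', 'C', 'B', 'D'], 8.0)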

  1. Hidden policy ciphertext-policy attribute-based encryption with keyword search against keyword guessing attack

    Institute of Scientific and Technical Information of China (English)

    Shuo QIU; Jiqiang LIU; Yanfeng SHI; Rui ZHANG

    2017-01-01

    Attribute-based encryption with keyword search (ABKS) enables data owners to grant their search capabilities to other users by enforcing an access control policy over the outsourced encrypted data. However, existing ABKS schemes cannot guarantee the privacy of the access structures, which may contain sensitive private information. Furthermore, resulting from the exposure of the access structures, ABKS schemes are susceptible to an off-line keyword guessing attack if the keyword space has a polynomial size. To solve these problems, we propose a novel primitive named hidden policy ciphertext-policy attribute-based encryption with keyword search (HP-CPABKS). With our primitive, the data user is unable to search on encrypted data or learn any information about the access structure if his/her attribute credentials cannot satisfy the access control policy specified by the data owner. We present a rigorous selective security analysis of the proposed HP-CPABKS scheme, which simultaneously keeps the indistinguishability of the keywords and the access structures. Finally, the performance evaluation verifies that our proposed scheme is efficient and practical.

  2. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

    With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, such as data discovery and data publishing. However, efficient discovery of geospatial resources remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users on different browsers and devices may have very different user experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various and distributed GCIs; (2) an asynchronous record retrieval mode enhances search performance and user interactivity; (3) being based on HTML5, the search engine provides unified access capabilities for users with different devices (e.g. tablet and smartphone).
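
    The brokering approach can be pictured as fanning one query out to several catalogues in parallel and merging the answers as they arrive; a hedged sketch in which the endpoint URLs and the JSON response shape are placeholders, not the paper's actual GCIs:

      import json
      from concurrent.futures import ThreadPoolExecutor, as_completed
      from urllib.parse import urlencode
      from urllib.request import urlopen

      ENDPOINTS = [
          "https://catalog-a.example.org/search",   # placeholder catalogues
          "https://catalog-b.example.org/search",
      ]

      def query(endpoint, keyword):
          with urlopen(endpoint + "?" + urlencode({"q": keyword}), timeout=10) as r:
              return json.load(r).get("records", [])

      def broker_search(keyword):
          records = []
          with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
              futures = [pool.submit(query, ep, keyword) for ep in ENDPOINTS]
              for fut in as_completed(futures):     # merge as each GCI answers
                  try:
                      records.extend(fut.result())
                  except OSError:
                      pass                          # a slow or failed catalogue is skipped
          return records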

  3. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected "known" articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
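
    A toy version of the ranking step: score every article by how often it appears in reference lists together with the seed ("known") articles, then screen the highest-scoring candidates (data invented):

      from collections import Counter

      def cocitation_scores(citing_papers, seeds):
          """citing_papers: iterable of reference-ID sets; seeds: known IDs."""
          scores = Counter()
          for refs in citing_papers:
              hits = len(refs & seeds)
              if hits == 0:
                  continue
              for art in refs - seeds:
                  scores[art] += hits        # weight by number of seeds co-cited
          return scores

      papers = [{"s1", "a", "b"}, {"s1", "s2", "a"}, {"s2", "c"}, {"b", "c"}]
      print(cocitation_scores(papers, {"s1", "s2"}).most_common())
      # [('a', 3), ('b', 1), ('c', 1)] -> screen 'a' first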

  4. Is Internet search better than structured instruction for web-based health education?

    Science.gov (United States)

    Finkelstein, Joseph; Bedra, McKenzie

    2013-01-01

    The Internet provides access to vast amounts of comprehensive information regarding any health-related subject. Patients increasingly use this information for health education, typically relying on a search engine to identify education materials. An alternative approach to health education via the Internet is based on a verified web site that provides structured, interactive education guided by adult learning theories. A systematic comparison of these two approaches in older patients had not been performed. The aim of this study was to compare the efficacy of a web-based computer-assisted education (CO-ED) system versus searching the Internet for learning about hypertension. Sixty hypertensive older adults (age 45+) were randomized into control or intervention groups. The control patients spent 30 to 40 minutes searching the Internet using a search engine for information about hypertension. The intervention patients spent 30 to 40 minutes using the CO-ED system, which provided computer-assisted instruction about major hypertension topics. Analysis of pre- and post-test knowledge scores indicated a significant improvement among CO-ED users (14.6%) as opposed to Internet users (2%). Additionally, patients using the CO-ED program rated their learning experience more positively than those using the Internet.

  5. A Spreadsheet-based GIS tool for planning aerial photography

    Science.gov (United States)

    The U.S. EPA's Pacific Coastal Ecology Branch has developed a tool that facilitates planning aerial photography missions. This tool is an Excel spreadsheet which accepts various input parameters, such as desired photo-scale and boundary coordinates of the study area, and compiles ...
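
    The core arithmetic such a planning spreadsheet performs follows from the photo-scale relation scale = f/H: for a desired scale 1:S, the flying height is H = S·f and the ground footprint of a frame is S times the frame size. A sketch with illustrative camera values (not taken from the tool itself):

      # Classic 152 mm mapping camera with a 230 mm square frame is assumed.
      def photo_plan(scale_denom, focal_m=0.152, frame_m=0.23):
          flying_height = scale_denom * focal_m   # metres above ground
          footprint = scale_denom * frame_m       # metres on a side
          return flying_height, footprint

      h, g = photo_plan(10_000)                   # desired photo scale 1:10,000
      print(f"fly at {h:.0f} m; each frame covers {g:.0f} m x {g:.0f} m")
      # fly at 1520 m; each frame covers 2300 m x 2300 m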

  6. Simulation Tools for Power Electronics Courses Based on Java Technologies

    Science.gov (United States)

    Canesin, Carlos A.; Goncalves, Flavio A. S.; Sampaio, Leonardo P.

    2010-01-01

    This paper presents interactive power electronics educational tools. These interactive tools make use of the benefits of the Java language to provide a dynamic and interactive approach to simulating steady-state ideal rectifiers (uncontrolled and controlled; single-phase and three-phase). Additionally, this paper discusses the development and use of…

  7. Estimation of toxicity using a Java based software tool

    Science.gov (United States)

    A software tool has been developed that allows a user to estimate the toxicity for a variety of endpoints (such as acute aquatic toxicity). The software tool is coded in Java and can be accessed using a web browser (or alternatively downloaded and run as a stand-alone applic...

  8. Ceramic tool insert assessment based on Vickers indentation methodology

    Science.gov (United States)

    Husni; Rizal, Muhammad; Aziz M, M.; Wahyu, M.

    2018-05-01

    In interrupted cutting, the risk of tool chipping or fracture is higher than in continuous cutting. The selection of suitable ceramic tools for interrupted cutting applications is therefore an important issue in ensuring that the cutting process runs effectively. At present, the performance of ceramic tools is assessed by conducting cutting tests, which is time- and cost-consuming. In this study, the performance of ceramic tools was evaluated using a hardness tester. The technique has certain advantages over more conventional methods: the experiment is straightforward, involves minimal specimen preparation, and requires only a small amount of material. Three types of ceramic tool inserts, AS10, CC650 and K090, were used; each tool was polished and Vickers indentation tests were performed at loads of 0.2, 0.5, 1, 2.5, 5 and 10 kgf. The results revealed that, among the loads used, an indentation load of 5 kgf always produced well-formed cracks. Among the inserts tested, AS10 produced the shortest crack length, followed by CC650 and K090, indicating that AS10 has the highest dynamic load resistance of the inserts tested.
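
    One common way to turn Vickers crack lengths into a toughness figure is the Anstis relation K_c = 0.016·sqrt(E/H)·P/c^1.5; whether this study used that exact relation is an assumption, and the material constants below are illustrative values typical of cutting-tool ceramics:

      from math import sqrt

      def anstis_kc(load_n, crack_m, e_pa, h_pa):
          """Fracture toughness (Pa*sqrt(m)) from load and crack length."""
          return 0.016 * sqrt(e_pa / h_pa) * load_n / crack_m**1.5

      P = 5 * 9.81                       # 5 kgf indentation load in newtons
      E, H = 400e9, 17e9                 # Young's modulus and hardness (assumed)
      for name, c in [("AS10", 120e-6), ("CC650", 150e-6), ("K090", 180e-6)]:
          kc = anstis_kc(P, c, E, H) / 1e6   # convert to MPa*sqrt(m)
          print(f"{name}: c = {c*1e6:.0f} um -> K_c ~ {kc:.1f} MPa*sqrt(m)")
      # shorter cracks -> higher toughness, matching the ranking in the text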

  9. Category Theory Approach to Solution Searching Based on Photoexcitation Transfer Dynamics

    Directory of Open Access Journals (Sweden)

    Makoto Naruse

    2017-07-01

    Solution searching that is accompanied by combinatorial explosion is one of the most important issues in the age of artificial intelligence. Natural intelligence, which exploits natural processes for intelligent functions, is expected to help resolve or alleviate the difficulties of conventional computing paradigms and technologies. In fact, we have shown that a single-celled organism such as an amoeba can solve constraint satisfaction problems and related optimization problems, and we have demonstrated experimental systems based on non-organic processes such as optical energy transfer involving near-field interactions. However, the fundamental mechanisms and limitations behind solution searching based on natural processes have not yet been understood. Herein, we present a theoretical background for solution searching based on optical excitation transfer from a category-theoretic standpoint. One important indication inspired by category theory is that the satisfaction of short exact sequences is critical for an adequate computational operation that determines the flow of time for the system, termed "short-exact-sequence-based time". In addition, the octahedral and braid structures known in triangulated categories provide a clear understanding of the underlying mechanisms, including a quantitative indication of the difficulty of obtaining solutions based on homology dimension. This study contributes a fundamental background for natural intelligence.

  10. Enterprise KM System: IT based Tool for Nuclear Malaysia

    International Nuclear Information System (INIS)

    Mohamad Safuan Sulaiman; Siti Nurbahyah Hamdan; Mohd Dzul Aiman Aslan

    2014-01-01

    Implementing the right tool for an enterprise knowledge management (KM) system in an organization is not an easy task. Everything needs to be taken into account before implementation can succeed. One requirement is the full cooperation of the entire organization in building a knowledge-sharing culture around the tool. From the selection of potential tools to deployment and implementation strategies, everything must be thoroughly and carefully organized. A study of suitable tools and of these strategies was carried out in Nuclear Malaysia as a result of the Process Oriented Knowledge Management (POKM) project. As far as an enterprise KM system is concerned, Microsoft SharePoint technology is one of the potential tools in this context. This paper articulates the approach and methodology for choosing the technology, including its planning, deployment and implementation strategies. (author)

  11. Search for magnetic minerals in Martian rocks: Overview of the Rock Abrasion Tool (RAT) magnet investigation on Spirit and Opportunity

    DEFF Research Database (Denmark)

    Goetz, W.; Leer, K.; Gunnlaugsson, H.P.

    2008-01-01

    The Rock Abrasion Tool (RAT) on board the Mars Exploration Rovers (MER) is a grinding tool designed to remove dust coatings and/or weathering rinds from rocks and expose fresh rock material. Four magnets of different strengths that are built into the structure of the RAT have been attracting...... is interpreted as magnetite. The amount of abraded rock material adhering to the magnets varied strongly during the mission and is correlated in a consistent way to the amount of magnetite inferred from Mossbauer spectra for the corresponding rock. The RAT magnet experiment as performed on Opportunity also...

  12. A smarter way to search, share and utilize open-spatial online data for energy R&D - Custom machine learning and GIS tools in U.S. DOE's virtual data library & laboratory, EDX

    Science.gov (United States)

    Rose, K.; Bauer, J.; Baker, D.; Barkhurst, A.; Bean, A.; DiGiulio, J.; Jones, K.; Jones, T.; Justman, D.; Miller, R., III; Romeo, L.; Sabbatino, M.; Tong, A.

    2017-12-01

    As spatial datasets become increasingly accessible through open, online systems, the opportunity to use these resources to address a range of Earth system questions grows. Simultaneously, there is a need for better infrastructure and tools to find and utilize these resources. We will present examples of advanced online computing capabilities, hosted in the U.S. DOE's Energy Data eXchange (EDX), that address these needs for earth-energy research and development. In one study the computing team developed a custom, machine-learning, big-data tool designed to parse the web and return priority datasets to appropriate servers to develop an open-source global oil and gas infrastructure database. The results of this spatial smart-search approach were validated against expert-driven, manual search results, which had required a team of seven spatial scientists three months to produce. The custom machine-learning tool parsed online, open systems, including zip files, ftp sites and other web-hosted resources, in a matter of days. The resulting resources were integrated into a geodatabase now hosted for open access via EDX. Beyond identifying and accessing authoritative, open spatial data resources, there is also a need for more efficient tools to ingest, perform, and visualize multi-variate, spatial data analyses. Within the EDX framework, there is a growing suite of processing, analytical and visualization capabilities that allow multi-user teams to work more efficiently in private, virtual workspaces. An example of these capabilities is a set of five custom spatio-temporal models and data tools that form NETL's Offshore Risk Modeling suite, which can be used to quantify oil spill risks and impacts. Coupling the data and advanced functions from EDX with these advanced spatio-temporal models has culminated in an integrated web-based decision-support tool. This platform has capabilities to identify and combine data across scales and disciplines, evaluate potential environmental

  13. Designing and Implementing Web-Based Scaffolding Tools for Technology-Enhanced Socioscientific Inquiry

    Science.gov (United States)

    Shin, Suhkyung; Brush, Thomas A.; Glazewski, Krista D.

    2017-01-01

    This study explores how web-based scaffolding tools provide instructional support while implementing a socio-scientific inquiry (SSI) unit in a science classroom. This case study focused on how students used web-based scaffolding tools during SSI activities, and how students perceived the SSI unit and the scaffolding tools embedded in the SSI…

  14. In Search of Samoan Research Approaches to Education: Tofa'a'Anolasi and the Foucauldian Tool Box

    Science.gov (United States)

    Galuvao, Akata Sisigafu'aapulematumua

    2018-01-01

    This article introduces Tofa'a'anolasi, a novel Samoan research framework created by drawing on the work of other Samoan and Pacific education researchers, in combination with adapting the 'Foucauldian tool box' to use for research carried out from a Samoan perspective. The article starts with an account and explanation of the process of…

  15. The Bristol Radiology Report Assessment Tool (BRRAT): developing a workplace-based assessment tool for radiology reporting skills.

    Science.gov (United States)

    Wallis, A; Edey, A; Prothero, D; McCoubrie, P

    2013-11-01

    To review the development of a workplace-based assessment tool to assess the quality of written radiology reports and assess its reliability, feasibility, and validity. A comprehensive literature review and rigorous Delphi study enabled the development of the Bristol Radiology Report Assessment Tool (BRRAT), which consists of 19 questions and a global assessment score. Three assessors applied the assessment tool to 240 radiology reports provided by 24 radiology trainees. The reliability coefficient for the 19 questions was 0.79 and the equivalent coefficient for the global assessment scores was 0.67. Generalizability coefficients demonstrate that higher numbers of assessors and assessments are needed to reach acceptable levels of reliability for summative assessments, owing to assessor subjectivity. The study methodology gives good validity and a strong foundation in best practice. The assessment tool developed for radiology reporting is reliable and most suited to formative assessments.

  16. The Bristol Radiology Report Assessment Tool (BRRAT): Developing a workplace-based assessment tool for radiology reporting skills

    International Nuclear Information System (INIS)

    Wallis, A.; Edey, A.; Prothero, D.; McCoubrie, P.

    2013-01-01

    Aim: To review the development of a workplace-based assessment tool to assess the quality of written radiology reports and assess its reliability, feasibility, and validity. Materials and methods: A comprehensive literature review and rigorous Delphi study enabled the development of the Bristol Radiology Report Assessment Tool (BRRAT), which consists of 19 questions and a global assessment score. Three assessors applied the assessment tool to 240 radiology reports provided by 24 radiology trainees. Results: The reliability coefficient for the 19 questions was 0.79 and the equivalent coefficient for the global assessment scores was 0.67. Generalizability coefficients demonstrate that higher numbers of assessors and assessments are needed to reach acceptable levels of reliability for summative assessments, owing to assessor subjectivity. Conclusion: The study methodology gives good validity and a strong foundation in best practice. The assessment tool developed for radiology reporting is reliable and most suited to formative assessments.

  17. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing high system performance over a wide operating range is an important issue in the design of automatic control systems of successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains new material on fault isolation and identification and on fault detection in feedback control loops; an extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; the addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and an enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  18. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    Science.gov (United States)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content-addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit the amount of capacity that can be deployed in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
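
    The operation being accelerated is longest-prefix matching; a minimal binary-trie version shows what the forwarding engine must compute, though the paper's RAM-based scheme is far more compact and parallel than this sketch:

      # Find the longest matching prefix for a destination address.
      class PrefixTrie:
          def __init__(self):
              self.root = {}

          def insert(self, prefix_bits: str, next_hop: str):
              node = self.root
              for b in prefix_bits:
                  node = node.setdefault(b, {})
              node["hop"] = next_hop

          def lookup(self, addr_bits: str):
              node, best = self.root, None
              for b in addr_bits:
                  if "hop" in node:
                      best = node["hop"]      # remember longest match so far
                  if b not in node:
                      break
                  node = node[b]
              else:
                  best = node.get("hop", best)
              return best

      fib = PrefixTrie()                      # made-up forwarding table
      fib.insert("1100", "if0")
      fib.insert("110010", "if1")
      print(fib.lookup("11001011"))           # 'if1' - the longer prefix wins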

  19. Spatial planning via extremal optimization enhanced by cell-based local search

    International Nuclear Information System (INIS)

    Sidiropoulos, Epaminondas

    2014-01-01

    A new treatment is presented for land use planning problems by means of extremal optimization in conjunction with cell-based neighborhood local search. Extremal optimization, inspired by self-organized critical models of evolution, has been applied mainly to the solution of classical combinatorial optimization problems. Cell-based local search has been employed by the author elsewhere in problems of spatial resource allocation, in combination with genetic algorithms and simulated annealing. In this paper it complements extremal optimization in order to enhance its capacity for a spatial optimization problem. The hybrid method thus formed is compared to methods from the literature on a specific characteristic problem. It yields better results both in terms of objective function values and in terms of compactness. The latter is an important quantity for spatial planning. The present treatment yields significant compactness values as emergent results.
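
    The extremal-optimization core is simple: repeatedly re-randomize the worst-fitness component and keep the best configuration seen. A bare-bones sketch with an invented compactness-style fitness (the paper's cell-based local search is not shown):

      import random

      def extremal_optimization(n_cells, n_uses, cell_fitness, steps=1000, seed=1):
          rng = random.Random(seed)
          sol = [rng.randrange(n_uses) for _ in range(n_cells)]
          best = sol[:]
          best_score = sum(cell_fitness(sol, i) for i in range(n_cells))
          for _ in range(steps):
              worst = min(range(n_cells), key=lambda i: cell_fitness(sol, i))
              sol[worst] = rng.randrange(n_uses)   # mutate only the worst cell
              score = sum(cell_fitness(sol, i) for i in range(n_cells))
              if score > best_score:
                  best, best_score = sol[:], score
          return best, best_score

      # Toy fitness: a cell "likes" agreeing with its right-hand neighbour,
      # a one-dimensional stand-in for spatial compactness.
      fit = lambda s, i: 1.0 if i + 1 < len(s) and s[i] == s[i + 1] else 0.0
      print(extremal_optimization(20, 3, fit))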

  20. Using fuzzy rule-based knowledge model for optimum plating conditions search

    Science.gov (United States)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.

    2018-03-01

    The paper discusses existing approaches to modeling the plating process with the aim of decreasing the variation in thickness of the plated surface coating. However, these approaches do not take into account the experience, knowledge, and intuition of decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal conditions for applying electroplated coatings is proposed, which uses a rule-based knowledge model and makes it possible to reduce uneven product thickness distribution. Block diagrams of a conventional control system for a galvanic process, as well as of a system based on the production model of knowledge, are considered. It is shown that a fuzzy production model of knowledge in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.

  1. Hybrid fuzzy charged system search algorithm based state estimation in distribution networks

    Directory of Open Access Journals (Sweden)

    Sachidananda Prasad

    2017-06-01

    This paper proposes a new hybrid charged system search (CSS) algorithm for state estimation in radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and an Indian 85-bus practical radial distribution system. The obtained results have been compared with the conventional CSS algorithm, the weighted least squares (WLS) algorithm and particle swarm optimization (PSO) to establish the feasibility of the algorithm.
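
    The objective stated above is ordinary weighted least squares, J(x) = sum_i w_i (z_i - h_i(x))^2, where z are measurements and h(x) maps the state (bus voltage magnitudes and angles) to predicted measurements. A minimal sketch with a stand-in measurement model (a real feeder model would replace h):

      import numpy as np

      def wls_objective(x, z, h, w):
          residual = z - h(x)
          return float(residual @ (w * residual))   # sum of weighted squares

      z = np.array([1.02, 0.98, 0.15])              # made-up measurements
      w = np.array([100.0, 100.0, 25.0])            # inverse-variance weights
      h = lambda x: np.array([x[0], x[1], x[0] - x[1]])   # toy measurement model
      print(wls_objective(np.array([1.01, 0.97]), z, h, w))   # 0.3225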

  2. Search for vector-like T' quarks using tools for the analysis of jet substructure with the CMS experiment

    CERN Document Server

    Höing, Rebekka Sophie; Haller, Johannes

    2015-01-01

    A search for pairs of vector-like T' quarks produced in proton-proton collisions recorded with the CMS experiment at √s = 8 TeV is presented. The search is optimized for decays of T' quarks to top quarks and Higgs bosons, where the top quarks and Higgs bosons decay hadronically. The T'-quark mass range between 500 and 1000 GeV is investigated. The top quarks and Higgs bosons produced in decays of the heavy T' quarks acquire large Lorentz boosts. The signatures of these particles in the detector can overlap and are therefore difficult to resolve using classical jet reconstruction methods. Large-radius jets are reconstructed and subjets formed from their constituents. The decay products of particles with large Lorentz boosts are highly collimated and can all be found within a single one of these large-radius jets. Top jets containing hadronic top-quark decays are identified with a top-tagging algorithm that analyzes the jet substructure. A b-tagging algorithm is applied to the reconstructed subjets in order to find bo...

  3. Development of a multivariate tool to reject background in a WZ diboson search for the CDF experiment

    Energy Technology Data Exchange (ETDEWEB)

    Cremonesi, Matteo [Univ. of Rome Tor Vergata (Italy)]

    2015-08-27

    Within the frame of the strong ongoing data analysis effort of the CDF collaboration at Fermilab, a method was developed by the candidate to improve the background rejection efficiency in the search for associated pair production of electroweak W, Z bosons. The performances of the method for vetoing the tt background in a WZ/ZZ → ℓνq$\bar{q}$ diboson search are reported. The method was developed in the inclusive 2-jet sample and applied to the "tag-2 jets" region, the subsample defined by the requirement that the two jets carry beauty flavor. In this region, tt production is one of the largest backgrounds. The tt veto proceeds in two steps: first, a set of pre-selection cuts is applied in a candidate sample where up to two leptons are accepted in addition to a jet pair, so that the ZZ component of the signal is preserved; next, a Neural Network is trained to indicate the probability that the event is top-pair production. To validate the method as developed in the inclusive 2-jet sample, it is applied to the veto region, providing a significant rejection of this important background.

  4. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  5. MinHash-Based Fuzzy Keyword Search of Encrypted Data across Multiple Cloud Servers

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2018-05-01

    To enhance the efficiency of data searching, most data owners store their data files on different cloud servers in the form of ciphertext. Efficient search using fuzzy keywords thus becomes a critical issue in such a cloud computing environment. This paper proposes a method that aims at improving the efficiency of ciphertext retrieval and lowering the storage overhead of fuzzy keyword search. In contrast to traditional approaches, the proposed method reduces the complexity of MinHash-based fuzzy keyword search by using MinHash fingerprints to avoid constructing the fuzzy keyword set. The method utilizes Jaccard similarity to rank the retrieval results, reducing the amount of similarity calculation and saving considerable time and space overhead. The method also accommodates multiple user queries through re-encryption technology and updates user permissions dynamically. Security analysis demonstrates that the method provides better privacy preservation, and experimental results show that it improves retrieval time and lowers storage overhead as well.
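
    The MinHash idea in miniature: keep the per-hash minima over a keyword's n-grams, so the fraction of matching minima estimates the Jaccard similarity of the n-gram sets, and near-miss (fuzzy) keywords collide without enumerating a fuzzy keyword set. Hash choice and parameters below are illustrative, not the paper's:

      import hashlib

      def ngrams(word, n=2):
          return {word[i:i + n] for i in range(len(word) - n + 1)}

      def minhash(word, k=32):
          grams = ngrams(word)
          return [min(int(hashlib.md5(f"{seed}:{g}".encode()).hexdigest(), 16)
                      for g in grams)
                  for seed in range(k)]          # k independent "hash functions"

      def estimated_jaccard(sig_a, sig_b):
          return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

      print(estimated_jaccard(minhash("privacy"), minhash("privaci")))  # high (~0.7)
      print(estimated_jaccard(minhash("privacy"), minhash("storage")))  # near zero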

  7. A BLE-Based Pedestrian Navigation System for Car Searching in Indoor Parking Garages.

    Science.gov (United States)

    Wang, Sheng-Shih

    2018-05-05

    The continuous global increase in the number of cars has led to an increase in parking issues, particularly with respect to searching for available parking spaces and finding parked cars. In this paper, we propose a navigation system for car owners to find their cars in indoor parking garages. The proposed system comprises a car-searching mobile app and a positioning-assisting subsystem. The app guides car owners to their cars based on a “turn-by-turn” navigation strategy and has the ability to correct the user's heading orientation. The subsystem uses beacon technology for indoor positioning, supporting the self-guidance of the car-searching mobile app. This study also designed a local coordinate system to support the identification of the locations of parking spaces and beacon devices. We used Android as the platform to implement the proposed car-searching mobile app, and used Bytereal HiBeacon devices to implement the proposed positioning-assisting subsystem. We also deployed the system in a parking lot on our campus for testing. The experimental results verified that the proposed system not only works well, but also provides the car owner with correct route guidance information.
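
    Beacon ranging of this kind typically starts from the log-distance path-loss model RSSI(d) = RSSI(1 m) - 10·n·log10(d); inverting it gives a rough beacon distance. Whether this system uses exactly this model is an assumption, and the constants below are illustrative values that would be calibrated per garage:

      from math import log10

      def distance_m(rssi_dbm, rssi_at_1m=-59.0, path_loss_n=2.2):
          """Rough distance to a beacon from a single RSSI reading."""
          return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_n))

      for rssi in (-59, -70, -85):
          print(f"RSSI {rssi} dBm -> ~{distance_m(rssi):.1f} m")
      # RSSI -59 dBm -> ~1.0 m; -70 -> ~3.2 m; -85 -> ~15.2 m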

  8. cFS-based Autonomous Requirements Testing Tool, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The S&K Team proposes design of a tool suite, Autonomy Requirements Tester (ART), to address the difficulty of stating autonomous requirements and the links to...

  9. Simulation-Based Tool for Traffic Management Training, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Both the current NAS, as well as NextGen, need successful use of advanced tools. Successful training is required today because more information gathering and...

  10. Simulation-Based Tool for Traffic Management Training, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Both the current NAS, as well as NextGen, need successful use of advanced tools. Successful training is required today because more information gathering and...

  11. Integrated knowledge base tool for acquisition and verification of NPP alarm systems

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    1998-01-01

    Knowledge acquisition and knowledge base verification are important activities in developing knowledge-based systems such as alarm processing systems. In this work, we developed an integrated tool for knowledge acquisition and verification of NPP alarm processing systems using the G2 tool. The tool integrates a document analysis method for knowledge acquisition and an ECPN matrix analysis method for knowledge verification. This tool enables knowledge engineers to perform their tasks consistently, from knowledge acquisition through knowledge verification.

  12. A Web-based Tool Combining Different Type Analyses

    DEFF Research Database (Denmark)

    Henriksen, Kim Steen; Gallagher, John Patrick

    2006-01-01

    of both, and they can be goal-dependent or goal-independent. We describe a prototype tool that can be accessed from a web browser, allowing various type analyses to be run. The first goal of the tool is to allow the analysis results to be examined conveniently by clicking on points in the original program...... the minimal "domain model" of the program with respect to the corresponding pre-interpretation, which can give more precise information than the original descriptive type....

  13. The possibilities of searching for new materials based on isocationic analogs of ZnBVI

    Science.gov (United States)

    Kirovskaya, I. A.; Mironova, E. V.; Kosarev, B. A.; Yureva, A. V.; Ekkert, R. V.

    2017-08-01

    The acid-base properties of chalcogenides - analogs of ZnBVI - were investigated in detail by modern techniques. Regularities in their composition-dependent changes were established, and these regularities correlate with the "bulk physicochemical property - composition" dependencies. The main reason for such correlations was identified, facilitating the search for new materials for the corresponding sensors - in this case, sensors for impurities of basic gases.

  14. A peak value searching method of the MCA based on digital logic devices

    International Nuclear Information System (INIS)

    Sang Ziru; Huang Shanshan; Chen Lian; Jin Ge

    2010-01-01

    Digital multi-channel analyzers play an increasingly important role in multi-channel pulse-height analysis. The move to digital systems is characterized by powerful pulse-processing ability, high throughput, and improved stability and flexibility. This paper introduces a method of searching for the peak value of a waveform based on digital logic implemented in an FPGA. The method reduces dead time, and offline data correction can improve the non-linearity of the MCA. The α energy spectrum of 241Am is given as an example. (authors)
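
    In software terms, such peak-searching logic reduces to a peak-hold comparison: track the running maximum while a pulse rises and latch it once the signal falls a fixed hysteresis below that maximum. A sketch on a synthetic pulse train (thresholds are illustrative, not the paper's FPGA design):

      def peak_values(samples, threshold=10, hysteresis=3):
          peaks, current, armed = [], None, True
          for s in samples:
              if s < threshold:
                  armed, current = True, None    # pulse over: re-arm detector
              elif armed:
                  if current is None or s > current:
                      current = s                # rising edge: track maximum
                  elif s < current - hysteresis:
                      peaks.append(current)      # falling edge: latch the peak
                      armed, current = False, None
          return peaks

      pulse_train = [0, 2, 15, 40, 62, 58, 30, 5, 0, 12, 33, 47, 41, 20, 3]
      print(peak_values(pulse_train))            # [62, 47]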

  15. An Efficient Energy Constraint Based UAV Path Planning for Search and Coverage

    OpenAIRE

    Gramajo, German; Shankar, Praveen

    2017-01-01

    A path planning strategy for a search and coverage mission for a small UAV that maximizes the area covered based on stored energy and maneuverability constraints is presented. The proposed formulation has a high level of autonomy, without requiring an exact choice of optimization parameters, and is appropriate for real-time implementation. The computed trajectory maximizes spatial coverage while closely satisfying terminal constraints on the position of the vehicle and minimizing the time of ...

  16. Tool Wear Detection Based on Duffing-Holmes Oscillator

    Directory of Open Access Journals (Sweden)

    Wanqing Song

    2008-01-01

    The cutting sound in the audible range includes plenty of tool wear information. The sound is sampled by an acoustic emission (AE) sensor as a short-time sequence, and tool wear can then be detected by the Duffing-Holmes oscillator. A novel engineering method is proposed for determining the chaotic threshold of the Duffing-Holmes oscillator. First, a rough threshold value is calculated from local Lyapunov exponents with a step size of 0.1. Second, the exact threshold value is calculated by the Duffing-Holmes system using the golden-section law. The advantage of the method is its low computation cost. Feasibility for tool condition detection is demonstrated by turning experiments under 27 kinds of cutting conditions with both sharp and worn tools. The 54 groups of noisy sampled data are embedded into the Duffing-Holmes oscillator. Finally, a single chaotic threshold is conveniently determined that can distinguish between a worn tool and a sharp tool.
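
    The detection principle can be reproduced numerically: drive a Duffing-Holmes oscillator just below its chaotic threshold and watch for the change of regime when a weak periodic component (standing in for the sensed wear signature) is added to the drive. A sketch with commonly used parameter values; the paper's exact settings are not given here:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Duffing-Holmes form: x'' + k x' - x + x^3 = f cos(t) + s(t)
      def duffing_holmes(t, y, k, f, signal):
          x, v = y
          drive = f * np.cos(t) + signal(t)     # reference drive + measured input
          return [v, -k * v + x - x**3 + drive]

      weak_tone = lambda t: 0.002 * np.cos(t)   # stand-in for the wear signal
      sol = solve_ivp(duffing_holmes, (0, 200), [0.0, 0.0],
                      args=(0.5, 0.825, weak_tone), max_step=0.05)
      # With f chosen just below the chaotic threshold, even a weak added tone
      # can tip the trajectory sol.y[0] into the other regime, which is the
      # detection criterion.
      print(sol.y[0][-5:])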

  17. Afghanistan, history and beyond - GIS based application tool

    Science.gov (United States)

    Swamy, Rahul Chidananda

    The emphasis of this tool is to provide insight into the history of Afghanistan. Afghanistan has been a warring nation for decades; this tool provides a brief account of the reasons behind Afghanistan's strategic importance, which led to its invasion by Britain, Russia and the USA. The timeline for this thesis was set from 1879 to 1990, ranging from the Barakzai Dynasty to the Soviet invasion. Maps are used judiciously to show battles during the British invasion, and maps showing roads, rivers, lakes and provinces are incorporated into the tool to provide an overview of the present situation. The user has options to filter this data using the timeline and a filtering tool. To quench the user's thirst for more information, HTML pages are embedded in key events to provide detailed insight into those events with the help of pictures and videos. An intuitive slider is used to show the people who played a significant role in Afghanistan. The user interface was made intuitive and easy to use, keeping the novice user in mind, and a help menu is provided to guide the user through the tool. Spending time researching Afghanistan has helped me gain a new perspective on Afghanistan and its people. With this tool, I hope I can provide a valuable channel for people to understand Afghanistan and gain a fresh perspective into this war-ridden nation.

  18. Search Engines for Tomorrow's Scholars

    Science.gov (United States)

    Fagan, Jody Condit

    2011-01-01

    Today's scholars face a wide array of choices when selecting search tools: Google Scholar, discipline-specific abstract and index databases, library discovery tools, and more recently, Microsoft's re-launch of their academic search tool, now dubbed Microsoft Academic Search. What are these tools' strengths for the emerging needs of…

  19. [Advanced online search techniques and dedicated search engines for physicians].

    Science.gov (United States)

    Nahum, Yoav

    2008-02-01

    In recent years search engines have become an essential tool in the work of physicians. This article will review advanced search techniques from the world of information specialists, as well as some advanced search engine operators that may help physicians improve their online search capabilities, and maximize the yield of their searches. This article also reviews popular dedicated scientific and biomedical literature search engines.

  20. Search Advertising

    OpenAIRE

    Cornière (de), Alexandre

    2016-01-01

    Search engines enable advertisers to target consumers based on the query they have entered. In a framework with horizontal product differentiation, imperfect product information and in which consumers incur search costs, I study a game in which advertisers have to choose a price and a set of relevant keywords. The targeting mechanism brings about three kinds of efficiency gains, namely lower search costs, better matching, and more intense product market price-competition. A monopolistic searc...

  1. Expectation violations in sensorimotor sequences: shifting from LTM-based attentional selection to visual search.

    Science.gov (United States)

    Foerster, Rebecca M; Schneider, Werner X

    2015-03-01

    Long-term memory (LTM) delivers important control signals for attentional selection. LTM expectations have an important role in guiding the task-driven sequence of covert attention and gaze shifts, especially in well-practiced multistep sensorimotor actions. What happens when LTM expectations are disconfirmed? Does a sensory-based visual-search mode of attentional selection replace the LTM-based mode? What happens when prior LTM expectations become valid again? We investigated these questions in a computerized version of the number-connection test. Participants clicked on spatially distributed numbered shapes in ascending order while gaze was recorded. Sixty trials were performed with a constant spatial arrangement. In 20 consecutive trials, either numbers, shapes, both, or no features switched position. In 20 reversion trials, participants worked on the original arrangement. Only the sequence-affecting number switches elicited slower clicking, visual search-like scanning, and lower eye-hand synchrony. The effects were neither limited to the exchanged numbers nor to the corresponding actions. Thus, expectation violations in a well-learned sensorimotor sequence cause a regression from LTM-based attentional selection to visual search beyond deviant-related actions and locations. Effects lasted for several trials and reappeared during reversion. © 2015 New York Academy of Sciences.

  2. Chapter 18: Web-based Tools - NED VO Services

    Science.gov (United States)

    Mazzarella, J. M.; NED Team

    The NASA/IPAC Extragalactic Database (NED) is a thematic, web-based research facility in widespread use by scientists, educators, space missions, and observatory operations for observation planning, data analysis, discovery, and publication of research about objects beyond our Milky Way galaxy. NED is a portal into a systematic fusion of data from hundreds of sky surveys and tens of thousands of research publications. The contents and services span the entire electromagnetic spectrum from gamma rays through radio frequencies, and are continuously updated to reflect the current literature and releases of large-scale sky survey catalogs. NED has been on the Internet since 1990, growing in content, automation and services with the evolution of information technology. NED is the world's largest database of cross-identified extragalactic objects. As of December 2006, the system contains approximately 10 million objects and 15 million multi-wavelength cross-IDs. Over 4 thousand catalogs and published lists covering the entire electromagnetic spectrum have had their objects cross-identified or associated, with fundamental data parameters federated for convenient queries and retrieval. This chapter describes the interoperability of NED services with other components of the Virtual Observatory (VO). Section 1 is a brief overview of the primary NED web services. Section 2 provides a tutorial for using NED services currently available through the NVO Registry. The "name resolver" provides VO portals and related internet services with celestial coordinates for objects specified by catalog identifier (name); any alias can be queried because this service is based on the source cross-IDs established by NED. All major services have been updated to provide output in VOTable (XML) format that can be accessed directly from the NED web interface or using the NVO registry. These include access to images via SIAP, Cone Search queries, and services providing fundamental, multi
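
    An IVOA cone search of the kind mentioned above is an HTTP GET carrying RA, DEC (degrees) and a radius SR, returning a VOTable. A hedged sketch with a placeholder endpoint; NED's actual service URL should be taken from its registry entry:

      from urllib.parse import urlencode
      from urllib.request import urlopen

      BASE = "https://example.org/ned/conesearch"   # placeholder endpoint

      def cone_search(ra_deg, dec_deg, radius_deg):
          query = urlencode({"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg})
          with urlopen(f"{BASE}?{query}", timeout=30) as resp:
              return resp.read()                    # VOTable (XML) bytes

      votable = cone_search(187.70593, 12.39112, 0.1)   # region around M87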

  3. Usability of Browser-Based Tools for Web-Search Privacy

    Science.gov (United States)

    2010-03-01

  4. Searching PubMed for molecular epidemiology studies: the case of chromosome aberrations

    DEFF Research Database (Denmark)

    Ugolini, Donatella; Neri, Monica; Knudsen, Lisbeth E

    2006-01-01

    to environmental pollutants. The search, done on the PubMed/MedLine database, was based on a strategy combining descriptors listed in the PubMed Medical Subject Headings (MeSH) Thesaurus and other available tools (free text or phrase search tools). 178 articles were retrieved by searching the period from January 1...

  5. Neural network based pattern matching and spike detection tools and services--in the CARMEN neuroinformatics project.

    Science.gov (United States)

    Fletcher, Martyn; Liang, Bojian; Smith, Leslie; Knowles, Alastair; Jackson, Tom; Jessop, Mark; Austin, Jim

    2008-10-01

    In the study of information flow in the nervous system, component processes can be investigated using a range of electrophysiological and imaging techniques. Although data are difficult and expensive to produce, they are rarely shared and collaboratively exploited. The Code Analysis, Repository and Modelling for e-Neuroscience (CARMEN) project addresses this challenge through the provision of a virtual neuroscience laboratory: an infrastructure for sharing data, tools and services. Central to the CARMEN concept are federated CARMEN nodes, which provide data and metadata storage; new, third-party and legacy services; and tools. In this paper, we describe the CARMEN project as well as the node infrastructure and an associated thick-client tool for pattern visualisation and searching, the Signal Data Explorer (SDE). We also discuss new spike detection methods, which are central to the services provided by CARMEN. The SDE is a client application which can be used to explore data in the CARMEN repository, providing data visualization, signal processing and a pattern matching capability. It performs extremely fast pattern matching and can be used to search for complex conditions composed of many different patterns across the large datasets that are typical in neuroinformatics. Searches can also be constrained by specifying text-based metadata filters. Spike detection services which use wavelet and morphology techniques are discussed and have been shown to outperform traditional thresholding and template-based systems. A number of different spike detection and sorting techniques will be deployed as services within the CARMEN infrastructure, to allow users to benchmark their performance against a wide range of reference datasets.
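
    As a baseline for the detection services described above, a thresholding detector with a robust (median-absolute-deviation) noise estimate can be written in a few lines; whether CARMEN's reference implementations use this exact estimator is an assumption:

      import numpy as np

      def threshold_spikes(x, fs, k=4.5, refractory_ms=1.0):
          sigma = np.median(np.abs(x)) / 0.6745     # robust noise estimate
          above = np.where(np.abs(x) > k * sigma)[0]
          spikes, dead = [], int(fs * refractory_ms / 1000)
          for i in above:
              if not spikes or i - spikes[-1] > dead:   # enforce refractory gap
                  spikes.append(i)
          return np.array(spikes), sigma

      fs = 24_000
      rng = np.random.default_rng(0)
      x = rng.normal(0, 1, fs)                      # 1 s of synthetic noise
      x[[5_000, 12_000]] += 12.0                    # two injected "spikes"
      idx, sigma = threshold_spikes(x, fs)
      print(idx, round(float(sigma), 2))            # ~[5000 12000]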

  6. SPECTRa-T: machine-based data extraction and semantic searching of chemistry e-theses.

    Science.gov (United States)

    Downing, Jim; Harvey, Matt J; Morgan, Peter B; Murray-Rust, Peter; Rzepa, Henry S; Stewart, Diana C; Tonge, Alan P; Townsend, Joe A

    2010-02-22

    The SPECTRa-T project has developed text-mining tools to extract named chemical entities (NCEs), such as chemical names and terms, and chemical objects (COs), e.g., experimental spectral assignments and physical chemistry properties, from electronic theses (e-theses). Although NCEs were readily identified within the two major document formats studied, only the use of structured documents enabled identification of chemical objects and their association with the relevant chemical entity (e.g., systematic chemical name). A corpus of theses was analyzed and it is shown that a high degree of semantic information can be extracted from structured documents. This integrated information has been deposited in a persistent Resource Description Framework (RDF) triple-store that allows users to conduct semantic searches. The strengths and weaknesses of several document formats are reviewed.

  7. DWFS: A Wrapper Feature Selection Tool Based on a Parallel Genetic Algorithm

    KAUST Repository

    Soufan, Othman

    2015-02-26

    Many scientific problems can be formulated as classification tasks. Data that harbor relevant information are usually described by a large number of features. Frequently, many of these features are irrelevant for the class prediction. The efficient implementation of classification models requires identification of suitable combinations of features. The smaller number of features reduces the problem's dimensionality and may result in higher classification performance. We developed DWFS, a web-based tool that allows for efficient selection of features for a variety of problems. DWFS follows the wrapper paradigm and applies a search strategy based on Genetic Algorithms (GAs). A parallel GA implementation examines and evaluates simultaneously a large number of candidate collections of features. DWFS also integrates various filtering methods that may be applied as a pre-processing step in the feature selection process. Furthermore, weights and parameters in the fitness function of the GA can be adjusted according to the application requirements. Experiments using heterogeneous datasets from different biomedical applications demonstrate that DWFS is fast and leads to a significant reduction of the number of features without sacrificing performance as compared to several widely used existing methods. DWFS can be accessed online at www.cbrc.kaust.edu.sa/dwfs.
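
    The wrapper idea is easy to make concrete: binary feature masks evolve under a genetic algorithm whose fitness is cross-validated classifier accuracy on the selected columns. The sketch below is a generic, single-process illustration of that paradigm, not the DWFS implementation, which parallelises the GA and adds filtering methods and adjustable fitness weights.

```python
# GA-based wrapper feature selection sketch (scikit-learn assumed installed).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(1)
n_feat, pop_size, generations = X.shape[1], 20, 15

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum() / n_feat    # mild penalty on mask size

pop = rng.random((pop_size, n_feat)) < 0.5     # random initial masks
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-pop_size // 2:]   # keep the best half
    cuts = rng.integers(1, n_feat, size=pop_size // 2)
    children = np.array([np.concatenate((parents[i][:c], parents[-i - 1][c:]))
                         for i, c in enumerate(cuts)])   # one-point crossover
    children ^= rng.random(children.shape) < 1.0 / n_feat  # bit-flip mutation
    pop = np.vstack((parents, children))

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {int(best.sum())} of {n_feat} features")
```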

  8. DWFS: A Wrapper Feature Selection Tool Based on a Parallel Genetic Algorithm

    KAUST Repository

    Soufan, Othman; Kleftogiannis, Dimitrios A.; Kalnis, Panos; Bajic, Vladimir B.

    2015-01-01

    Many scientific problems can be formulated as classification tasks. Data that harbor relevant information are usually described by a large number of features. Frequently, many of these features are irrelevant for the class prediction. The efficient implementation of classification models requires identification of suitable combinations of features. The smaller number of features reduces the problem's dimensionality and may result in higher classification performance. We developed DWFS, a web-based tool that allows for efficient selection of features for a variety of problems. DWFS follows the wrapper paradigm and applies a search strategy based on Genetic Algorithms (GAs). A parallel GA implementation examines and evaluates simultaneously a large number of candidate collections of features. DWFS also integrates various filtering methods that may be applied as a pre-processing step in the feature selection process. Furthermore, weights and parameters in the fitness function of the GA can be adjusted according to the application requirements. Experiments using heterogeneous datasets from different biomedical applications demonstrate that DWFS is fast and leads to a significant reduction of the number of features without sacrificing performance as compared to several widely used existing methods. DWFS can be accessed online at www.cbrc.kaust.edu.sa/dwfs.

  9. The Malaria System MicroApp: A New, Mobile Device-Based Tool for Malaria Diagnosis.

    Science.gov (United States)

    Oliveira, Allisson Dantas; Prats, Clara; Espasa, Mateu; Zarzuela Serrat, Francesc; Montañola Sales, Cristina; Silgado, Aroa; Codina, Daniel Lopez; Arruda, Mercia Eliane; I Prat, Jordi Gomez; Albuquerque, Jones

    2017-04-25

    Malaria is a public health problem that affects remote areas worldwide. Climate change has contributed to the problem by allowing for the survival of Anopheles in previously uninhabited areas. As such, several groups have made developing new systems for the automated diagnosis of malaria a priority. The objective of this study was to develop a new, automated, mobile device-based diagnostic system for malaria. The system uses Giemsa-stained peripheral blood samples combined with light microscopy to identify the Plasmodium falciparum species in the ring stage of development. The system uses image processing and artificial intelligence techniques as well as a known face detection algorithm to identify Plasmodium parasites. The algorithm is based on the integral image and Haar-like feature concepts, and makes use of weak classifiers with adaptive boosting learning. The search scope of the learning algorithm is reduced in the preprocessing step by removing the background around blood cells. As a proof of concept experiment, the tool was used on 555 malaria-positive and 777 malaria-negative previously-made slides. The accuracy of the system was, on average, 91%, meaning that for every 100 parasite-infected samples, 91 were identified correctly. Accessibility barriers of low-resource countries can be addressed with low-cost diagnostic tools. Our system, developed for mobile devices (mobile phones and tablets), addresses this by enabling access to health centers in remote communities, and importantly, not depending on extensive malaria expertise or expensive diagnostic detection equipment. ©Allisson Dantas Oliveira, Clara Prats, Mateu Espasa, Francesc Zarzuela Serrat, Cristina Montañola Sales, Aroa Silgado, Daniel Lopez Codina, Mercia Eliane Arruda, Jordi Gomez i Prat, Jones Albuquerque. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 25.04.2017.
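
    Two ingredients named in the abstract, the integral image and Haar-like features, can be made concrete with a few lines of code; the sketch below is illustrative only, since the published system couples such features with AdaBoost-trained weak classifiers and a background-removal preprocessing step.

```python
# Integral image and a two-rectangle Haar-like feature, from scratch.
import numpy as np

def integral_image(img):
    # ii is padded with a zero row/column so rectangle sums need no edge cases;
    # ii[r, c] holds the sum of img[:r, :c].
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    # Sum of img[r:r+h, c:c+w] in O(1) via four corner lookups.
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_vertical(ii, r, c, h, w):
    # Haar-like feature: intensity of the top half minus the bottom half.
    return rect_sum(ii, r, c, h // 2, w) - rect_sum(ii, r + h // 2, c, h // 2, w)

img = np.zeros((24, 24))
img[12:, :] = 1.0                  # dark top half (0), bright bottom half (1)
ii = integral_image(img)
print(haar_two_rect_vertical(ii, 0, 0, 24, 24))  # -288.0: strong edge response
```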

  10. Emerging Network-Based Tools in Movement Ecology.

    Science.gov (United States)

    Jacoby, David M P; Freeman, Robin

    2016-04-01

    New technologies have vastly increased the available data on animal movement and behaviour. Consequently, new methods deciphering the spatial and temporal interactions between individuals and their environments are vital. Network analyses offer a powerful suite of tools to disentangle the complexity within these dynamic systems, and we review these tools, their application, and how they have generated new ecological and behavioural insights. We suggest that network theory can be used to model and predict the influence of ecological and environmental parameters on animal movement, focusing on spatial and social connectivity, with fundamental implications for conservation. Refining how we construct and randomise spatial networks at different temporal scales will help to establish network theory as a prominent, hypothesis-generating tool in movement ecology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. MSP-Tool: a VBA-based software tool for the analysis of multispecimen paleointensity data

    Science.gov (United States)

    Monster, Marilyn; de Groot, Lennart; Dekkers, Mark

    2015-12-01

    The multispecimen protocol (MSP) is a method to estimate the Earth's magnetic field's past strength from volcanic rocks or archeological materials. By reducing the amount of heating steps and aligning the specimens parallel to the applied field, thermochemical alteration and multi-domain effects are minimized. We present a new software tool, written for Microsoft Excel 2010 in Visual Basic for Applications (VBA), that evaluates paleointensity data acquired using this protocol. In addition to the three ratios (standard, fraction-corrected and domain-state-corrected) calculated following Dekkers and Böhnel (2006) and Fabian and Leonhardt (2010) and a number of other parameters proposed by Fabian and Leonhardt (2010), it also provides several reliability criteria. These include an alteration criterion, whether or not the linear regression intersects the y axis within the theoretically prescribed range, and two directional checks. Overprints and misalignment are detected by isolating the remaining natural remanent magnetization (NRM) and the partial thermoremanent magnetization (pTRM) gained and comparing their declinations and inclinations. The NRM remaining and pTRM gained are then used to calculate alignment-corrected multispecimen plots. Data are analyzed using bootstrap statistics. The program was tested on lava samples that were given a full TRM and that acquired their pTRMs at angles of 0, 15, 30 and 90° with respect to their NRMs. MSP-Tool adequately detected and largely corrected these artificial alignment errors.

  12. Real-Time Search in Clouds

    OpenAIRE

    Uddin, Misbah; Skinner, Amy; Stadler, Rolf; Clemm, Alexander

    2013-01-01

    We developed a novel approach for management of networks/networked systems based on network search [4]. Network search provides a simple, uniform interface, through which human administrators and management applications can obtain network information, configuration or operational, without knowing its schema and location. We believe that the capability of network search will spur the development of new tools for human administrators and enable the rapid development of new classes of network co...

  13. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-01-01

    Full Text Available Harmony search (HS) is an emerging population-based metaheuristic algorithm inspired by the music improvisation process. The HS method has been developed rapidly and applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimension complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain the proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence analysis are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and a better balance between space exploration and local exploitation on high-dimension complex optimization problems.
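
    A plain harmony search loop makes the mechanics concrete: with probability HMCR a decision variable is drawn from the harmony memory (and pitch-adjusted with probability PAR), otherwise it is improvised at random. The sketch below minimises the sphere function with illustrative parameter values; HSTL layers the teaching-learning strategy and dynamic parameter control on top of this basic scheme.

```python
# Basic harmony search on the sphere function (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
dim, hms, iters = 10, 20, 5000        # dimension, harmony memory size, iterations
hmcr, par, bw = 0.9, 0.3, 0.05        # memory-consideration rate, pitch rate, bandwidth
lo, hi = -5.0, 5.0

f = lambda x: float(np.sum(x ** 2))   # sphere: global minimum 0 at the origin
hm = rng.uniform(lo, hi, (hms, dim))  # harmony memory
cost = np.array([f(x) for x in hm])

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                      # take note j from memory...
            new[j] = hm[rng.integers(hms), j]
            if rng.random() < par:                   # ...and maybe pitch-adjust it
                new[j] = np.clip(new[j] + bw * rng.uniform(-1, 1), lo, hi)
        else:                                        # ...or improvise it randomly
            new[j] = rng.uniform(lo, hi)
    worst = int(np.argmax(cost))
    if f(new) < cost[worst]:                         # replace the worst harmony
        hm[worst], cost[worst] = new, f(new)

print("best cost found:", cost.min())
```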

  14. Parameter Search Algorithms for Microwave Radar-Based Breast Imaging: Focal Quality Metrics as Fitness Functions.

    Science.gov (United States)

    O'Loughlin, Declan; Oliveira, Bárbara L; Elahi, Muhammad Adnan; Glavin, Martin; Jones, Edward; Popović, Milica; O'Halloran, Martin

    2017-12-06

    Inaccurate estimation of average dielectric properties can have a tangible impact on microwave radar-based breast images. Despite this, recent patient imaging studies have used a fixed estimate although this is known to vary from patient to patient. Parameter search algorithms are a promising technique for estimating the average dielectric properties from the reconstructed microwave images themselves without additional hardware. In this work, qualities of accurately reconstructed images are identified from point spread functions. As the qualities of accurately reconstructed microwave images are similar to the qualities of focused microscopic and photographic images, this work proposes the use of focal quality metrics for average dielectric property estimation. The robustness of the parameter search is evaluated using experimental dielectrically heterogeneous phantoms on the three-dimensional volumetric image. Based on a very broad initial estimate of the average dielectric properties, this paper shows how these metrics can be used as suitable fitness functions in parameter search algorithms to reconstruct clear and focused microwave radar images.

  15. Simulation Optimization of Search and Rescue in Disaster Relief Based on Distributed Auction Mechanism

    Directory of Open Access Journals (Sweden)

    Jian Tang

    2017-11-01

    Full Text Available In this paper, we optimize search and rescue (SAR) in disaster relief through agent-based simulation. We simulate rescue teams' search behaviors with improved truncated Lévy walks. Then we propose a cooperative rescue plan based on a distributed auction mechanism, and illustrate it with the case of landslide disaster relief. The simulation is conducted in three scenarios: “fatal”, “serious” and “normal”. Compared with the non-cooperative rescue plan, the proposed rescue plan would increase victims’ relative survival probability by 7–15%, increase the ratio of survivors getting rescued by 5.3–12.9%, and decrease the average elapsed time for one site getting rescued by 16.6–21.6%. The robustness analysis shows that the search radius can affect rescue efficiency significantly, while the scope of cooperation cannot. The sensitivity analysis shows that two parameters, the time limit for completing rescue operations at one buried site and the maximum turning angle for the next step, both have a great influence on rescue efficiency, and there exist optimal values for both in view of rescue efficiency.
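
    The movement model is straightforward to sample: step lengths follow a power law truncated at a maximum length (drawn here by inverse-CDF sampling), and the heading changes by at most a fixed turning angle per step. The sketch below uses illustrative parameters, not the values calibrated in the paper.

```python
# Truncated Levy walk sampler (illustrative parameters).
import numpy as np

rng = np.random.default_rng(42)
mu, l_min, l_max = 2.0, 1.0, 50.0     # Levy exponent and truncation bounds
max_turn = np.pi / 3                  # maximum turning angle per step (assumed)

def levy_step(u):
    # Inverse-CDF draw from p(l) ~ l^(-mu) truncated to [l_min, l_max].
    a, b = l_min ** (1 - mu), l_max ** (1 - mu)
    return (a + u * (b - a)) ** (1.0 / (1 - mu))

pos, heading = np.zeros(2), 0.0
track = [pos.copy()]
for _ in range(200):
    heading += rng.uniform(-max_turn, max_turn)
    pos = pos + levy_step(rng.random()) * np.array([np.cos(heading),
                                                    np.sin(heading)])
    track.append(pos.copy())

track = np.array(track)
print("net displacement:", round(float(np.linalg.norm(track[-1])), 1),
      "after", len(track) - 1, "steps")
```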

  16. uVis: A Formula-Based Visualization Tool

    DEFF Research Database (Denmark)

    Pantazos, Kostas; Xu, Shangjin; Kuhail, Mohammad Amin

    Several tools use programming approaches for developing advanced visualizations. Others can create simple visualizations with built-in patterns in a few steps, and users with limited IT experience can use them. However, it is programming- and time-demanding to create and customize these visualizations. We introduce uVis, a tool that allows users with advanced spreadsheet-like IT knowledge and basic database understanding to create simple as well as advanced visualizations. These users construct visualizations by combining building blocks (i.e. controls, shapes). They specify spreadsheet

  17. A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling

    DEFF Research Database (Denmark)

    Clausen, Tommy

    We consider a generalized version of the shift design problem where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.

  18. A Privacy-Preserving Intelligent Medical Diagnosis System Based on Oblivious Keyword Search

    Directory of Open Access Journals (Sweden)

    Zhaowen Lin

    2017-01-01

    Full Text Available One of the concerns people have is how to get a diagnosis online without their privacy being jeopardized. In this paper, we propose a privacy-preserving intelligent medical diagnosis system (IMDS), which can efficiently solve this problem. In IMDS, users submit their health examination parameters to the server in a protected form; this submission process is based on the Paillier cryptosystem and will not reveal any information about their data. The server then retrieves the most likely disease (or multiple diseases) from the database and returns it to the users. In the above search process, we use oblivious keyword search (OKS) as a basic framework, which lets the server keep its computational ability while learning no personal information from the data of users. Besides, this paper also provides a preprocessing method for data stored on the server, to make our protocol more efficient.
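
    The protected-submission step rests on the additive homomorphism of the Paillier cryptosystem: multiplying ciphertexts yields an encryption of the sum, so a server can aggregate values it cannot read. The toy implementation below uses deliberately tiny primes for illustration; it is nothing like secure parameters, and the OKS retrieval layer of the paper is not shown.

```python
# Toy Paillier cryptosystem (educational only; requires Python 3.9+).
import math
import random

p, q = 293, 433                     # toy primes (real keys use ~1024-bit+ primes)
n, n2 = p * q, (p * q) ** 2
g = n + 1                           # standard generator choice
lam = math.lcm(p - 1, q - 1)
# mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:      # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

blood_pressure, heart_rate = 120, 72          # example health parameters
c1, c2 = encrypt(blood_pressure), encrypt(heart_rate)
assert decrypt(c1) == blood_pressure
# Additive homomorphism: ciphertext product decrypts to the plaintext sum.
assert decrypt((c1 * c2) % n2) == blood_pressure + heart_rate
print("homomorphic aggregation verified")
```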

  19. Search-free license plate localization based on saliency and local variance estimation

    Science.gov (United States)

    Safaei, Amin; Tang, H. L.; Sanei, S.

    2015-02-01

    In recent years, the performance and accuracy of automatic license plate number recognition (ALPR) systems have greatly improved; however, the increasing number of applications for such systems has made ALPR research more challenging than ever. The inherent computational complexity of search-dependent algorithms remains a major problem for current ALPR systems. This paper proposes a novel search-free method of localization based on the estimation of saliency and local variance. Gabor functions are then used to validate the choice of candidate license plate. The algorithm was applied to three image datasets with different levels of complexity and the results compared with a number of benchmark methods, particularly in terms of speed. The proposed method outperforms state-of-the-art methods and can be used for real-time applications.
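
    The local-variance cue is simple to sketch: plate regions are texture-rich, so a windowed variance map peaks on them. The toy example below illustrates only that cue and omits the saliency estimation and Gabor-based validation stages of the paper.

```python
# Local variance map as a texture cue (SciPy assumed installed).
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=15):
    # var = E[x^2] - (E[x])^2 over a size-by-size neighbourhood.
    img = img.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.maximum(mean_sq - mean ** 2, 0.0)

rng = np.random.default_rng(0)
scene = rng.normal(0.5, 0.01, (240, 320))        # smooth background
scene[100:130, 140:220] = rng.random((30, 80))   # high-variance "plate" patch

var_map = local_variance(scene)
r, c = np.unravel_index(int(np.argmax(var_map)), var_map.shape)
print(f"variance peak near row {r}, col {c}")    # lands inside the patch
```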

  20. Neural Based Tabu Search method for solving unit commitment problem with cooling-banking constraints

    Directory of Open Access Journals (Sweden)

    Rajan Asir Christober Gnanakkan Charles

    2009-01-01

    Full Text Available This paper presents a new approach to solve the short-term unit commitment problem (UCP) using Neural Based Tabu Search (NBTS) with cooling and banking constraints. The objective of this paper is to find the generation scheduling such that the total operating cost can be minimized, when subjected to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. A 7-unit utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 10, 26 and 34 units. Numerical results are shown to compare the superiority of the cost solutions obtained using the Tabu Search (TS) method, Dynamic Programming (DP) and Lagrangian Relaxation (LR) methods in reaching proper unit commitment.

  1. Key characteristics for tool choice in indicator-based sustainability assessment at farm level

    Directory of Open Access Journals (Sweden)

    Fleur Marchand

    2014-09-01

    Full Text Available Although the literature on sustainability assessment tools to support decision making in agriculture is rapidly growing, little attention has been paid to the actual tool choice. We focused on the choice of more complex integrated indicator-based tools at the farm level. The objective was to determine key characteristics as criteria for tool choice. This was done with an in-depth comparison of 2 cases: the Monitoring Tool for Integrated Farm Sustainability and the Public Goods Tool. They differ in characteristics that may influence tool choice: data, time, and budgetary requirements. With an enhanced framework, we derived 11 key characteristics to describe differences between the case tools. Based on the key characteristics, we defined 2 types of indicator-based tools: full sustainability assessment (FSA) and rapid sustainability assessment (RSA). RSA tools are more oriented toward communicating and learning. They are therefore more suitable for use by a larger group of farmers, can help to raise awareness, trigger farmers to become interested in sustainable farming, and highlight areas of good or bad performance. If and when farmers increase their commitment to on-farm sustainability, they can gain additional insight by using an FSA tool. Based on complementary and modular use of the tools, practical recommendations for the different end users, i.e., researchers, farmers, advisers, and so forth, have been suggested.

  2. Integration of ROOT Notebooks as an ATLAS analysis web-based tool in outreach and public data release

    CERN Document Server

    Sanchez, Arturo; The ATLAS collaboration

    2016-01-01

    The integration of the ROOT data analysis framework with the Jupyter Notebook technology presents considerable potential for enhancing and expanding educational and training programs: from university students in their early years, to new ATLAS PhD students and postdoctoral researchers, to senior analysers and professors who want to resume hands-on data analysis or to bring a friendly yet very powerful open-source tool into the classroom. Such tools have already been tested in several environments, and a fully web-based integration together with Open Access Data repositories makes it possible to go a step further in ATLAS's effort to integrate several CERN projects in the field of education and training, developing new computing solutions along the way.

  3. UV-POSIT: Web-Based Tools for Rapid and Facile Structural Interpretation of Ultraviolet Photodissociation (UVPD) Mass Spectra

    Science.gov (United States)

    Rosenberg, Jake; Parker, W. Ryan; Cammarata, Michael B.; Brodbelt, Jennifer S.

    2018-04-01

    UV-POSIT (Ultraviolet Photodissociation Online Structure Interrogation Tools) is a suite of web-based tools designed to facilitate the rapid interpretation of data from native mass spectrometry experiments making use of 193 nm ultraviolet photodissociation (UVPD). The suite includes four separate utilities which assist in the calculation of fragment ion abundances as a function of backbone cleavage sites and sequence position; the localization of charge sites in intact proteins; the calculation of hydrogen elimination propensity for a-type fragment ions; and mass-offset searching of UVPD spectra to identify unknown modifications and assess false positive fragment identifications. UV-POSIT is implemented as a Python/Flask web application hosted at http://uv-posit.cm.utexas.edu. UV-POSIT is available under the MIT license, and the source code is available at https://github.com/jarosenb/UV_POSIT.

  4. UV-POSIT: Web-Based Tools for Rapid and Facile Structural Interpretation of Ultraviolet Photodissociation (UVPD) Mass Spectra.

    Science.gov (United States)

    Rosenberg, Jake; Parker, W Ryan; Cammarata, Michael B; Brodbelt, Jennifer S

    2018-04-06

    UV-POSIT (Ultraviolet Photodissociation Online Structure Interrogation Tools) is a suite of web-based tools designed to facilitate the rapid interpretation of data from native mass spectrometry experiments making use of 193 nm ultraviolet photodissociation (UVPD). The suite includes four separate utilities which assist in the calculation of fragment ion abundances as a function of backbone cleavage sites and sequence position; the localization of charge sites in intact proteins; the calculation of hydrogen elimination propensity for a-type fragment ions; and mass-offset searching of UVPD spectra to identify unknown modifications and assess false positive fragment identifications. UV-POSIT is implemented as a Python/Flask web application hosted at http://uv-posit.cm.utexas.edu. UV-POSIT is available under the MIT license, and the source code is available at https://github.com/jarosenb/UV_POSIT.

  5. Implementing iRound: A Computer-Based Auditing Tool.

    Science.gov (United States)

    Brady, Darcie

    Many hospitals use rounding or auditing as a tool to help identify gaps and needs in quality and process performance. Some hospitals are also using rounding to help improve patient experience. It is known that purposeful rounding helps improve Hospital Consumer Assessment of Healthcare Providers and Systems scores by helping manage patient expectations, provide service recovery, and recognize quality caregivers. Rounding works when a standard method is used across the facility, where data are comparable and trustworthy. This facility had a pen-and-paper process in place that made data reporting difficult and created a silo culture between departments; in addition, most audits and rounds were completed differently on each unit. It was recognized that this facility needed to standardize the rounding and auditing process. The tool created by the Advisory Board, called iRound, was chosen as the tool this facility would use for patient experience rounds as well as process and quality rounding. The success of the iRound tool in this facility depended on several factors, spanning from many months before implementation to current everyday usage.

  6. Advanced Tools for Smartphone-Based Experiments: Phyphox

    Science.gov (United States)

    Staacks, S.; Hütz, S.; Stampfer, C.; Heinke, H.

    2018-01-01

    The sensors in modern smartphones are a promising and cost-effective tool for experimentation in physics education, but many experiments face practical problems. Often the phone is inaccessible during the experiment and the data usually needs to be analyzed subsequently on a computer. We address both problems by introducing a new app, called phyphox.

  7. A tool to ascertain taxonomic relatedness based on features derived ...

    Indian Academy of Sciences (India)

    The tool uses features derived from the 16S rRNA gene to investigate evolutionary relationships; it undergoes unsupervised learning.

  8. The Design of Tools for Sketching Sensor-Based Interaction

    DEFF Research Database (Denmark)

    Brynskov, Martin; Lunding, Rasmus; Vestergaard, Lasse Steenbock

    2012-01-01

    , flexibility and cost, aimed at wearable and ultra-mobile prototyping where fast reaction is needed (e.g. in controlling sound), and we discuss the general issues facing this category of embodied interaction design tools. We then present the platform in more detail, both regarding hardware and software

  9. Grip Strength Survey Based on Hand Tool Usage

    Directory of Open Access Journals (Sweden)

    Erman ÇAKIT

    2016-12-01

    Full Text Available Hand grip strength is broadly used for performing tasks involving equipment in production and processing activities. Most professionals in this field rely on grip strength to perform their tasks. There were three main aims of this study: (i) determining various hand grip strength measurements for the group of hand tool users, (ii) investigating the effects of height, weight, age, hand dominance, body mass index, previous Cumulative Trauma Disorder (CTD) diagnosis, and hand tool usage experience on hand grip strength, and (iii) comparing the obtained results with existing data for other populations. The study group comprised 71 healthy male facility workers. Subjects' ages ranged from 26 to 74 years. The data were statistically analyzed to assess the normality of the data and the percentile values of grip strength. The results of this study demonstrate that there were no significant differences between dominant and non-dominant hands. However, there were highly significant differences between the CTD group and the other group. Hand grip strength for the dominant hand was positively correlated to height, weight, and body mass index, and negatively correlated to age and tool usage experience. Hand dominance, height, weight, body mass index, age and tool usage experience should be considered when establishing normal values for grip strength.
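
    On synthetic stand-in data, the reported analysis reduces to percentile tables and Pearson correlations, as in the sketch below; every number here is fabricated for illustration and does not reproduce the study's measurements.

```python
# Percentiles and correlations for a synthetic grip-strength sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 71                                            # sample size as in the study
age = rng.uniform(26, 74, n)
experience = age - 20 + rng.normal(0, 3, n)       # years of hand tool usage
grip = 55 - 0.25 * age - 0.1 * experience + rng.normal(0, 4, n)  # kgf

print("grip percentiles (5th/50th/95th):",
      np.percentile(grip, [5, 50, 95]).round(1))
for name, x in [("age", age), ("experience", experience)]:
    r, pval = stats.pearsonr(grip, x)
    print(f"grip vs {name}: r = {r:.2f}, p = {pval:.3g}")  # negative, as reported
```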

  10. Image edge detection based tool condition monitoring with morphological component analysis.

    Science.gov (United States)

    Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng

    2017-07-01

    The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on monitored image edge detection. Image edge detection has been a fundamental tool for obtaining features of images. This approach extracts the tool edge with morphological component analysis. Through the decomposition of the original tool wear image, the approach reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared to the celebrated algorithms developed in the literature, this approach improves the integrity and connectivity of edges, and the results have shown that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
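
    One building block behind such edge-based monitoring, the morphological gradient (dilation minus erosion), is sketched below on a synthetic image; the published method additionally separates texture and cartoon components via morphological component analysis before extracting the wear edge, which this sketch does not attempt.

```python
# Morphological gradient edge extraction (SciPy assumed installed).
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_gradient(img, size=3):
    # Edge strength = local maximum minus local minimum; boundaries light up.
    return grey_dilation(img, size=size) - grey_erosion(img, size=size)

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                       # a bright square "tool" region
edges = morphological_gradient(img)
print("edge pixels found:", int((edges > 0.5).sum()))  # a thin square outline
```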

  11. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    Science.gov (United States)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    General strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14 as well as IEEE 30 bus systems and the performance is compared against differential evolution-based strategic bidding, genetic algorithm-based strategic bidding and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  12. GIS based application tool -- history of East India Company

    Science.gov (United States)

    Phophaliya, Sudhir

    The emphasis of the thesis is to build an intuitive and robust GIS (Geographic Information Systems) tool which gives in-depth information on the history of the East India Company. The GIS tool also incorporates various achievements of the East India Company which helped to establish its business all over the world, especially in India. The user has the option to select these movements and acts by clicking on any of the marked states on the World map. The World map also incorporates key features for the East India Company, like the landing of the East India Company in India, the Darjeeling Tea Establishment, the East India Company Stock Redemption Act, etc. The user can learn more about these features simply by clicking on each of them. The primary focus of the tool is to give the user a unique insight into the East India Company; for this the tool has several HTML (Hypertext Markup Language) pages which the user can select. These HTML pages give information on various topics like the first Voyage, Trade with China, the 1857 Revolt, etc. The tool has been developed in JAVA. For the Indian map MOJO (Map Objects Java Objects) is used. MOJO is developed by ESRI. The major features shown on the World map were designed using MOJO. MOJO made it easy to incorporate the statistical data with these features. The user interface was intentionally kept simple and easy to use. To keep the user engaged, key aspects are explained using HTML pages. The idea is that pictures will help the user garner interest in the history of the East India Company.

  13. ANN Based Tool Condition Monitoring System for CNC Milling Machines

    Directory of Open Access Journals (Sweden)

    Mota-Valtierra G.C.

    2011-10-01

    Full Text Available Most companies aim to manufacture high-quality products, which is possible by optimizing costs and by reducing and controlling the variations in their production processes. Within manufacturing industries a very important issue is tool condition monitoring, since the tool state will determine the quality of products. Besides, a good monitoring system will protect the machinery from severe damage. For determining the state of the cutting tools in a milling machine, there is a great variety of systems on the industrial market; however, these systems are not available to all companies because of their high costs and the requirement of modifying the machining tool in order to attach the system sensors. This paper presents an intelligent classification system which determines the status of cutters in a Computer Numerical Control (CNC) milling machine. The tool state is mainly detected through the analysis of the cutting forces drawn from the spindle motor currents. This monitoring system does not need sensors, so it is not necessary to modify the machine. The correct classification is made by advanced digital signal processing techniques. Just after acquiring a signal, an FIR digital filter is applied to the data to eliminate undesired noisy components and to extract the embedded force components. A wavelet transformation is applied to the filtered signal in order to compress the data amount and to optimize the classifier structure. Then a multilayer perceptron-type neural network is responsible for carrying out the classification of the signal. Achieving a reliability of 95%, the system is capable of detecting breakage and a worn cutter.

  14. What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context

    Science.gov (United States)

    French, Beverley; Thomas, Lois H; Baker, Paula; Burton, Christopher R; Pennington, Lindsay; Roddam, Hazel

    2009-01-01

    Background Given the current emphasis on networks as vehicles for innovation and change in health service delivery, the ability to conceptualise and measure organisational enablers for the social construction of knowledge merits attention. This study aimed to develop a composite tool to measure the organisational context for evidence-based practice (EBP) in healthcare. Methods A structured search of the major healthcare and management databases for measurement tools from four domains: research utilisation (RU), research activity (RA), knowledge management (KM), and organisational learning (OL). Included studies were reports of the development or use of measurement tools that included organisational factors. Tools were appraised for face and content validity, plus development and testing methods. Measurement tool items were extracted, merged across the four domains, and categorised within a constructed framework describing the absorptive and receptive capacities of organisations. Results Thirty measurement tools were identified and appraised. Eighteen tools from the four domains were selected for item extraction and analysis. The constructed framework consists of seven categories relating to three core organisational attributes of vision, leadership, and a learning culture, and four stages of knowledge need, acquisition of new knowledge, knowledge sharing, and knowledge use. Measurement tools from RA or RU domains had more items relating to the categories of leadership, and acquisition of new knowledge; while tools from KM or learning organisation domains had more items relating to vision, learning culture, knowledge need, and knowledge sharing. There was equal emphasis on knowledge use in the different domains. Conclusion If the translation of evidence into knowledge is viewed as socially mediated, tools to measure the organisational context of EBP in healthcare could be enhanced by consideration of related concepts from the organisational and management sciences.

  15. What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context

    Directory of Open Access Journals (Sweden)

    Pennington Lindsay

    2009-05-01

    Full Text Available Abstract Background Given the current emphasis on networks as vehicles for innovation and change in health service delivery, the ability to conceptualise and measure organisational enablers for the social construction of knowledge merits attention. This study aimed to develop a composite tool to measure the organisational context for evidence-based practice (EBP) in healthcare. Methods A structured search of the major healthcare and management databases for measurement tools from four domains: research utilisation (RU), research activity (RA), knowledge management (KM), and organisational learning (OL). Included studies were reports of the development or use of measurement tools that included organisational factors. Tools were appraised for face and content validity, plus development and testing methods. Measurement tool items were extracted, merged across the four domains, and categorised within a constructed framework describing the absorptive and receptive capacities of organisations. Results Thirty measurement tools were identified and appraised. Eighteen tools from the four domains were selected for item extraction and analysis. The constructed framework consists of seven categories relating to three core organisational attributes of vision, leadership, and a learning culture, and four stages of knowledge need, acquisition of new knowledge, knowledge sharing, and knowledge use. Measurement tools from RA or RU domains had more items relating to the categories of leadership, and acquisition of new knowledge; while tools from KM or learning organisation domains had more items relating to vision, learning culture, knowledge need, and knowledge sharing. There was equal emphasis on knowledge use in the different domains. Conclusion If the translation of evidence into knowledge is viewed as socially mediated, tools to measure the organisational context of EBP in healthcare could be enhanced by consideration of related concepts from the organisational

  16. What can management theories offer evidence-based practice? A comparative analysis of measurement tools for organisational context.

    Science.gov (United States)

    French, Beverley; Thomas, Lois H; Baker, Paula; Burton, Christopher R; Pennington, Lindsay; Roddam, Hazel

    2009-05-19

    Given the current emphasis on networks as vehicles for innovation and change in health service delivery, the ability to conceptualize and measure organisational enablers for the social construction of knowledge merits attention. This study aimed to develop a composite tool to measure the organisational context for evidence-based practice (EBP) in healthcare. A structured search of the major healthcare and management databases for measurement tools from four domains: research utilisation (RU), research activity (RA), knowledge management (KM), and organisational learning (OL). Included studies were reports of the development or use of measurement tools that included organisational factors. Tools were appraised for face and content validity, plus development and testing methods. Measurement tool items were extracted, merged across the four domains, and categorised within a constructed framework describing the absorptive and receptive capacities of organisations. Thirty measurement tools were identified and appraised. Eighteen tools from the four domains were selected for item extraction and analysis. The constructed framework consists of seven categories relating to three core organisational attributes of vision, leadership, and a learning culture, and four stages of knowledge need, acquisition of new knowledge, knowledge sharing, and knowledge use. Measurement tools from RA or RU domains had more items relating to the categories of leadership, and acquisition of new knowledge; while tools from KM or learning organisation domains had more items relating to vision, learning culture, knowledge need, and knowledge sharing. There was equal emphasis on knowledge use in the different domains. If the translation of evidence into knowledge is viewed as socially mediated, tools to measure the organisational context of EBP in healthcare could be enhanced by consideration of related concepts from the organisational and management sciences. Comparison of measurement tools across

  17. Extended-Search, Bézier Curve-Based Lane Detection and Reconstruction System for an Intelligent Vehicle

    Directory of Open Access Journals (Sweden)

    Xiaoyun Huang

    2015-09-01

    Full Text Available To improve the real-time performance and detection rate of a Lane Detection and Reconstruction (LDR) system, an extended-search-based lane detection method and a Bézier curve-based lane reconstruction algorithm are proposed in this paper. The extended-search-based lane detection method is designed to search boundary blocks from the initial position, in an upwards direction and along the lane, with small search areas including continuous search, discontinuous search and bending search in order to detect different lane boundaries. The Bézier curve-based lane reconstruction algorithm is employed to describe a wide range of lane boundary forms with comparatively simple expressions. In addition, two Bézier curves are adopted to reconstruct the lanes' outer boundaries with large curvature variation. The lane detection and reconstruction algorithm, including initial-block determination, extended search, binarization processing and lane-boundary fitting in different scenarios, is verified in road tests. The results show that this algorithm is robust against different shadows and illumination variations; the average processing time per frame is 13 ms. Significantly, it achieves a high detection rate of 88.6% on curved lanes with large or variable curvatures, where the accident rate is higher than that of straight lanes.
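
    The reconstruction step can be sketched as a linear least-squares fit of cubic Bézier control points to detected boundary points under a chord-length parameterisation; this standalone illustration is not the paper's exact formulation, which also joins two curves for strongly bending boundaries.

```python
# Least-squares cubic Bezier fit to noisy boundary points.
import numpy as np

def bernstein_matrix(t):
    # Rows hold [B0(t), B1(t), B2(t), B3(t)] for a cubic Bezier curve.
    t = t[:, None]
    return np.hstack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
                      3 * t ** 2 * (1 - t), t ** 3])

rng = np.random.default_rng(3)
x = np.linspace(0, 30, 25)                    # synthetic curved lane boundary
y = 0.02 * x ** 2 + rng.normal(0, 0.05, x.size)
pts = np.column_stack([x, y])

# Chord-length parameterisation, then solve B @ P = pts for control points P.
t = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
t /= t[-1]
B = bernstein_matrix(t)
ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)

residual = np.abs(B @ ctrl - pts).max()
print("control points:\n", ctrl.round(2), "\nmax fit error:", round(residual, 3))
```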

  18. Web-Based Undergraduate Chemistry Problem-Solving: The Interplay of Task Performance, Domain Knowledge and Web-Searching Strategies

    Science.gov (United States)

    She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua

    2012-01-01

    This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of the Web-searching strategies students used and their correlation with task performance and domain knowledge also…

  19. GLIDERS - A web-based search engine for genome-wide linkage disequilibrium between HapMap SNPs

    Directory of Open Access Journals (Sweden)

    Broxholme John

    2009-10-01

    Full Text Available Abstract Background A number of tools for the examination of linkage disequilibrium (LD) patterns between nearby alleles exist, but none are available for quickly and easily investigating LD at longer ranges (>500 kb). We have developed a web-based query tool (GLIDERS: Genome-wide LInkage DisEquilibrium Repository and Search engine) that enables the retrieval of pairwise associations with r2 ≥ 0.3 across the human genome for any SNP genotyped within HapMap phase 2 and 3, regardless of distance between the markers. Description GLIDERS is an easy to use web tool that only requires the user to enter rs numbers of SNPs they want to retrieve genome-wide LD for (both nearby and long-range). The intuitive web interface handles both manual entry of SNP IDs as well as allowing users to upload files of SNP IDs. The user can limit the resulting inter-SNP associations with easy to use menu options. These include MAF limit (5-45%), distance limits between SNPs (minimum and maximum), r2 (0.3 to 1), HapMap population sample (CEU, YRI and JPT+CHB combined) and HapMap build/release. All resulting genome-wide inter-SNP associations are displayed on a single output page, which has a link to a downloadable tab-delimited text file. Conclusion GLIDERS is a quick and easy way to retrieve genome-wide inter-SNP associations and to explore LD patterns for any number of SNPs of interest. GLIDERS can be useful in identifying SNPs with long-range LD. This can highlight mis-mapping or other potential association signal localisation problems.

  20. The Surface Extraction from TIN based Search-space Minimization (SETSM) algorithm

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2017-07-01

    Digital Elevation Models (DEMs) provide critical information for a wide range of scientific, navigational and engineering activities. Submeter resolution, stereoscopic satellite imagery with high geometric and radiometric quality, and wide spatial coverage are becoming increasingly accessible for generating stereo-photogrammetric DEMs. However, low contrast and repeatedly-textured surfaces, such as snow and glacial ice at high latitudes, and mountainous terrains challenge existing stereo-photogrammetric DEM generation techniques, particularly without a-priori information such as existing seed DEMs or the manual setting of terrain-specific parameters. To utilize these data for fully-automatic DEM extraction at a large scale, we developed the Surface Extraction from TIN-based Search-space Minimization (SETSM) algorithm. SETSM is fully automatic (i.e. no search parameter settings are needed) and uses only the sensor model Rational Polynomial Coefficients (RPCs). SETSM adopts a hierarchical, combined image- and object-space matching strategy utilizing weighted normalized cross-correlation with both original distorted and geometrically corrected images for overcoming ambiguities caused by foreshortening and occlusions. In addition, SETSM optimally minimizes search-spaces to extract optimal matches over problematic terrains by iteratively updating object surfaces within a Triangulated Irregular Network, and utilizes a geometric-constrained blunder and outlier detection in object space. We prove the ability of SETSM to mitigate typical stereo-photogrammetric matching problems over a range of challenging terrains. SETSM is the primary DEM generation software for the US National Science Foundation's ArcticDEM project.
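
    Stripped of its hierarchy, RPC geometry and weighting, the matching core reduces to normalized cross-correlation between a template and candidate image patches, as in the exhaustive-search sketch below; SETSM itself evaluates weighted NCC inside search spaces that are iteratively minimised on the TIN surface.

```python
# Plain normalized cross-correlation template matching.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(5)
image = rng.random((60, 60))
template = image[20:31, 33:44].copy()      # an 11 x 11 patch to re-locate

best, best_rc = -1.0, None
h, w = template.shape
for r in range(image.shape[0] - h + 1):
    for c in range(image.shape[1] - w + 1):
        score = ncc(image[r:r + h, c:c + w], template)
        if score > best:
            best, best_rc = score, (r, c)
print("best match at", best_rc, "score", round(best, 3))  # (20, 33), ~1.0
```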

  1. Optimization of fuel cells for BWR based in Tabu modified search

    International Nuclear Information System (INIS)

    Martin del Campo M, C.; Francois L, J.L.; Palomera P, M.A.

    2004-01-01

    Advances in the development of a computational system for the design and optimization of fuel assembly cells for Boiling Water Reactors (BWR) are presented. The optimization method is based on the Tabu Search (TS) technique, implemented in progressive stages designed to accelerate the search and to reduce the time used in the optimization process. An algorithm was programmed to create the first solution. Also, to diversify the generation of random numbers required by the TS technique, the Makoto Matsumoto function was used, obtaining excellent results. The objective function has been coded in such a way that it can be adapted to optimize different parameters, such as the average enrichment or the radial power peaking factor. The neutronic evaluation of the cells is carried out in detail by means of the HELIOS simulator. The main characteristics of the system are described, and an application example is presented for the design of a cell of 10x10 fuel rods with 10 different enrichment compositions and gadolinium content. (Author)
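
    The TS mechanics, a neighbourhood of moves plus a tabu list that blocks recently used moves, can be sketched on a toy binary layout problem, as below; the actual system scores candidate cells with the HELIOS simulator and stages the search progressively, which the sketch does not attempt.

```python
# Generic tabu search skeleton on a toy binary layout problem.
import random

random.seed(1)
n = 20
target = [random.randint(0, 1) for _ in range(n)]        # hidden "optimal" layout
cost = lambda s: sum(a != b for a, b in zip(s, target))  # mismatches to minimise

current = [0] * n
best_cost = cost(current)
tabu, tenure = {}, 7                  # move index -> iteration when it frees up

for it in range(200):
    candidates = []
    for j in range(n):                # neighbourhood: flip one position
        if tabu.get(j, -1) > it:
            continue                  # move j is currently tabu
        neigh = current[:]
        neigh[j] ^= 1
        candidates.append((cost(neigh), j, neigh))
    c, j, neigh = min(candidates)     # best admissible move (may worsen cost)
    current, tabu[j] = neigh, it + tenure
    best_cost = min(best_cost, c)

print("best cost reached:", best_cost)   # 0: the hidden layout was recovered
```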

  2. New Internet search volume-based weighting method for integrating various environmental impacts

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    2016-01-15

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
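
    The weighting computation itself is small, as the sketch below shows: relative search volumes for impact-category terms are normalised into weighting factors and compared with an existing weight set via a Pearson correlation; all category names, volumes and weights here are invented placeholders, not the paper's data.

```python
# Search-volume-based weighting factors and their correlation with an
# existing weight set (all numbers are hypothetical placeholders).
import numpy as np

categories = ["global warming", "ozone depletion", "acidification",
              "eutrophication", "photochemical smog", "abiotic depletion"]
search_volume = np.array([880.0, 120.0, 95.0, 70.0, 60.0, 40.0])
existing_weights = np.array([0.68, 0.09, 0.07, 0.06, 0.06, 0.04])

new_weights = search_volume / search_volume.sum()   # normalise to sum to 1
r = np.corrcoef(new_weights, existing_weights)[0, 1]

for cat, wgt in zip(categories, new_weights):
    print(f"{cat:20s} {wgt:.3f}")
print("Pearson r vs existing weights:", round(float(r), 4))
```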

  3. New Internet search volume-based weighting method for integrating various environmental impacts

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlight: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.

  4. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

    Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  5. Pulsar Timing Array Based Search for Supermassive Black Hole Binaries in the Square Kilometer Array Era.

    Science.gov (United States)

    Wang, Yan; Mohanty, Soumya D

    2017-04-14

    The advent of next generation radio telescope facilities, such as the Square Kilometer Array (SKA), will usher in an era where a pulsar timing array (PTA) based search for gravitational waves (GWs) will be able to use hundreds of well timed millisecond pulsars rather than the few dozens in existing PTAs. A realistic assessment of the performance of such an extremely large PTA must take into account the data analysis challenge posed by an exponential increase in the parameter space volume due to the large number of so-called pulsar phase parameters. We address this problem and present such an assessment for isolated supermassive black hole binary (SMBHB) searches using a SKA era PTA containing 10^3 pulsars. We find that an all-sky search will be able to confidently detect nonevolving sources with a redshifted chirp mass of 10^10 M_⊙ out to a redshift of about 28 (corresponding to a rest-frame chirp mass of 3.4×10^8 M_⊙). We discuss the important implications that the large distance reach of a SKA era PTA has on GW observations from optically identified SMBHB candidates. If no SMBHB detections occur, a highly unlikely scenario in the light of our results, the sky-averaged upper limit on strain amplitude will be improved by about 3 orders of magnitude over existing limits.

  6. Applying Russian Search Engines to market Finnish Corporates

    OpenAIRE

    Pankratovs, Vladimirs

    2013-01-01

    The goal of this thesis work is to provide basic knowledge of the marketing capabilities of Russia-based search engines. After reading this material, the reader is able to distinguish different kinds of Search Engine Marketing tools and can perform advertising campaigns. This study includes information about the majority of tools available to the user and provides up-to-date screenshots of Russian search engines' front-ends, which can be useful in further work. The study discusses the main principles and ba...

  7. NDT-based bridge condition assessment supported by expert tools

    Science.gov (United States)

    Bień, J.; KuŻawa, M.

    2016-06-01

    This paper is focused on progress in the application of Expert Tools supporting the integration of inspection and NDT testing findings in order to enable effective decision making by bridge owners. Possibilities of knowledge representation in intelligent computer Expert Tools by means of multi-level hybrid network technology are described. These multi-level hybrid networks can be built of neural, fuzzy and functional components depending on the problem that needs to be solved and on the type of available information. Application of the technology is illustrated by an example of the Bridge Evaluation Expert Function (BEEF) implemented in the Railway Bridge Management System "SMOK" operated by the Polish State Railways.

  8. Development of hydrogeological modelling tools based on NAMMU

    Energy Technology Data Exchange (ETDEWEB)

    Marsic, N. [Kemakta Konsult AB, Stockholm (Sweden); Hartley, L.; Jackson, P.; Poole, M. [AEA Technology, Harwell (United Kingdom); Morvik, A. [Bergen Software Services International AS, Bergen (Norway)

    2001-09-01

    A number of relatively sophisticated hydrogeological models were developed within the SR 97 project to handle issues such as nesting of scales and the effects of salinity. However, these issues and others are considered of significant importance and generality to warrant further development of the hydrogeological methodology. Several such developments based on the NAMMU package are reported here:
    - Embedded grid: nesting of the regional- and site-scale models within the same numerical model has given greater consistency in the structural model representation and in the flow between scales. Since there is a continuous representation of the regional and site scales, the modelling of pathways from the repository no longer has to be contained wholly by the site-scale region. This allows greater choice in the size of the site-scale model.
    - Implicit Fracture Zones (IFZ): this method of incorporating the structural model is very efficient and allows changes to either the mesh or the fracture zones to be implemented quickly. It also supports great flexibility in the properties of the structures and rock mass.
    - Stochastic fractures: new functionality has been added to IFZ to allow arbitrary combinations of stochastic or deterministic fracture zones with the rock mass. Whether a fracture zone is modelled deterministically or stochastically, its statistical properties can be defined independently.
    - Stochastic modelling: efficient methods for Monte Carlo simulation of stochastic permeability fields have been implemented and tested on SKB's computers.
    - Visualisation: the visualisation tool Avizier for NAMMU has been enhanced so that it is efficient for checking models and for presentation.
    - PROPER interface: NAMMU outputs pathlines in PROPER format so that they can be included in the PA workflow.
    The developed methods are illustrated by application to stochastic nested modelling of the Beberg site using data from SR 97. The model properties were in accordance with the regional- and site

  9. Development of hydrogeological modelling tools based on NAMMU

    International Nuclear Information System (INIS)

    Marsic, N.; Hartley, L.; Jackson, P.; Poole, M.; Morvik, A.

    2001-09-01

    A number of relatively sophisticated hydrogeological models were developed within the SR 97 project to handle issues such as nesting of scales and the effects of salinity. However, these issues and others are considered of significant importance and generality to warrant further development of the hydrogeological methodology. Several such developments based on the NAMMU package are reported here:
    - Embedded grid: nesting of the regional- and site-scale models within the same numerical model has given greater consistency in the structural model representation and in the flow between scales. Since there is a continuous representation of the regional and site scales, the modelling of pathways from the repository no longer has to be contained wholly by the site-scale region. This allows greater choice in the size of the site-scale model.
    - Implicit Fracture Zones (IFZ): this method of incorporating the structural model is very efficient and allows changes to either the mesh or the fracture zones to be implemented quickly. It also supports great flexibility in the properties of the structures and rock mass.
    - Stochastic fractures: new functionality has been added to IFZ to allow arbitrary combinations of stochastic or deterministic fracture zones with the rock mass. Whether a fracture zone is modelled deterministically or stochastically, its statistical properties can be defined independently.
    - Stochastic modelling: efficient methods for Monte Carlo simulation of stochastic permeability fields have been implemented and tested on SKB's computers.
    - Visualisation: the visualisation tool Avizier for NAMMU has been enhanced so that it is efficient for checking models and for presentation.
    - PROPER interface: NAMMU outputs pathlines in PROPER format so that they can be included in the PA workflow.
    The developed methods are illustrated by application to stochastic nested modelling of the Beberg site using data from SR 97. The model properties were in accordance with the regional- and site

  10. Automated speech quality monitoring tool based on perceptual evaluation

    OpenAIRE

    Vozňák, Miroslav; Rozhon, Jan

    2010-01-01

    The paper deals with a speech quality monitoring tool which we have developed in accordance with PESQ (Perceptual Evaluation of Speech Quality) and which runs automatically, calculating the MOS (Mean Opinion Score). Results are stored in a database and used in a research project investigating how meteorological conditions influence speech quality in a GSM network. The meteorological station, which is located on our university campus, provides information about temperature,...
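
    As a rough illustration of such a pipeline, the sketch below computes a PESQ-derived MOS for one reference/degraded recording pair and logs it to a database. It assumes the third-party Python "pesq" package (a wrapper around ITU-T P.862) and SciPy; the file names and table layout are invented, and this is not the authors' actual tool.

        import sqlite3
        from scipy.io import wavfile
        from pesq import pesq  # third-party ITU-T P.862 wrapper (assumed available)

        def measure_and_store(ref_wav, deg_wav, db_path="mos_log.db"):
            # PESQ compares a clean reference recording against its degraded copy
            fs, ref = wavfile.read(ref_wav)
            fs_deg, deg = wavfile.read(deg_wav)
            assert fs == fs_deg == 8000, "narrowband PESQ expects 8 kHz audio"
            mos = pesq(fs, ref.astype("float32"), deg.astype("float32"), "nb")
            with sqlite3.connect(db_path) as db:  # log for later correlation studies
                db.execute("CREATE TABLE IF NOT EXISTS mos_log("
                           "ts DATETIME DEFAULT CURRENT_TIMESTAMP, score REAL)")
                db.execute("INSERT INTO mos_log(score) VALUES (?)", (mos,))
            return mos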

  11. Advanced tools for smartphone-based experiments: phyphox

    OpenAIRE

    Staacks, Sebastian; Hütz, Simon; Heinke, Heidrun; Stampfer, Christoph

    2018-01-01

    The sensors in modern smartphones are a promising and cost-effective tool for experimentation in physics education, but many experiments face practical problems. Often the phone is inaccessible during the experiment and the data usually needs to be analyzed subsequently on a computer. We address both problems by introducing a new app, called "phyphox", which is specifically designed for utilizing experiments in physics teaching. The app is free and designed to offer the same set of features o...

  12. A Statistically Based Training Diagnostic Tool for Marine Aviation

    Science.gov (United States)

    2014-06-01

    the behavioral and social sciences in pursuit of an assessment tool to measure the tactical cognitive skills of officers in the combat arms... presence, flow and interaction, media for mood and arousal, media to attract and persuade, media for emotional effects, media to attract attention, and... [expletive] better know how to read and execute checklist items, or they shouldn't have made it out of the FRS. I could rant, but I won't. Suffice it

  13. Tool-based requirement traceability between requirement and design artifacts

    CERN Document Server

    Turban, Bernhard

    2013-01-01

    Processes for developing safety-critical systems impose special demands on ensuring requirements traceability. Achieving valuable traceability information, however, is especially difficult concerning the transition from requirements to design. Bernhard Turban analyzes systems and software engineering theories cross-cutting the issue (embedded systems development, systems engineering, software engineering, requirements engineering and management, design theory and processes for safety-critical systems). As a solution, the author proposes a new tool approach to support designers in their thinking...

  14. Category-based guidance of spatial attention during visual search for feature conjunctions.

    Science.gov (United States)

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. An Analysis of the Applicability of Federal Law Regarding Hash-Based Searches of Digital Media

    Science.gov (United States)

    2014-06-01

    Keywords: similarity matching, Fourth Amendment, federal law, search and seizure, warrant search, consent search, border search. ...containing a white powdery substance labeled flour [53]. 3.3.17 United States v. Heckenkamp, 482 F.3d 1142 (9th Cir. 2007): People have a reasonable

  16. Algorithm of axial fuel optimization based in progressive steps of turned search

    International Nuclear Information System (INIS)

    Martin del Campo, C.; Francois, J.L.

    2003-01-01

    The development of an algorithm for the axial optimization of fuel for boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution at each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as constraints. As the optimization stages advance, the fineness of the evaluation of the investigated designs increases. The algorithm consists of three stages: the first uses genetic algorithms and the following two use tabu search. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle, without violating any of the design-basis limits. In the following stages the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), with the average enrichment obtained for the best design of the first stage, and the other restrictions, as constraints. The third stage, very similar to the previous one, begins with the design from the previous stage but searches over the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application to the design of the fresh assembly for the fourth fuel reload of the Unit 1 reactor of the Laguna Verde power plant (U1-CLV) is presented. The results obtained show an advance in the handling of optimization methods and in the construction of the objective functions to be used for the different design stages of the fuel assemblies. (Author)
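
    The staged pattern itself is easy to sketch. In the toy below, a design is just a list of axial enrichment levels and the objective functions are placeholders for the core-simulator evaluations; a genetic-algorithm stage produces a seed design that a tabu-search stage then refines. This illustrates the serial GA-then-tabu structure, not the authors' implementation.

        import random

        LEVELS = [1.8, 2.4, 3.2, 4.0]   # candidate axial enrichments (toy values)
        N_NODES = 8                     # axial nodes per assembly

        def avg_enrichment(design):     # stage-1 objective (placeholder)
            return sum(design) / len(design)

        def mutate(design):
            d = design[:]
            d[random.randrange(len(d))] = random.choice(LEVELS)
            return d

        def ga_stage(evaluate, pop_size=20, generations=100):
            pop = [[random.choice(LEVELS) for _ in range(N_NODES)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=evaluate)                      # lower is better
                survivors = pop[:pop_size // 2]
                pop = survivors + [mutate(random.choice(survivors))
                                   for _ in survivors]
            return min(pop, key=evaluate)

        def tabu_stage(start, evaluate, iters=200, tabu_len=15):
            best = current = start
            tabu = []
            for _ in range(iters):
                moves = [mutate(current) for _ in range(10)]
                moves = [m for m in moves if m not in tabu] or moves
                current = min(moves, key=evaluate)
                tabu = (tabu + [current])[-tabu_len:]
                if evaluate(current) < evaluate(best):
                    best = current
            return best

        # Stage 1 minimizes enrichment; stages 2-3 would swap in PPF/SDM
        # objectives while holding the stage-1 enrichment as a constraint.
        seed = ga_stage(avg_enrichment)
        refined = tabu_stage(seed, avg_enrichment)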

  17. BSSF: a fingerprint based ultrafast binding site similarity search and function analysis server

    Directory of Open Access Journals (Sweden)

    Jiang Hualiang

    2010-01-01

    Full Text Available Abstract Background Genome sequencing and post-genomics projects such as structural genomics are extending the frontier of the study of the sequence-structure-function relationship of genes and their products. Although many sequence/structure-based methods have been devised with the aim of deciphering this delicate relationship, large gaps remain in this fundamental problem, which continuously drives researchers to develop novel methods to extract relevant information from sequences and structures and to infer the functions of genes newly identified by genomics technology. Results Here we present an ultrafast method, named BSSF (Binding Site Similarity & Function), which enables researchers to conduct similarity searches in a comprehensive three-dimensional binding site database extracted from PDB structures. This method utilizes a fingerprint representation of the binding site and a validated statistical Z-score function scheme to judge the similarity between the query and database items, even if their similarities are only constrained to a sub-pocket. This fingerprint-based similarity measurement was also validated on a known binding site dataset by comparison with geometric hashing, a standard 3D similarity method. The comparison clearly demonstrated the utility of this ultrafast method. After the database search, the hit list is further analyzed to provide basic statistical information about the occurrences of Gene Ontology terms and Enzyme Commission numbers, which may benefit researchers by helping them to design further experiments to study the query proteins. Conclusions This ultrafast web-based system will not only help researchers interested in drug design and structural genomics to identify similar binding sites, but will also assist them by providing further analysis of the hit list from the database search.
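
    The core of such a search can be sketched in a few lines: binding sites become bit-vector fingerprints, pairwise similarity is computed, and hits are ranked by a Z-score against the distribution of similarities. The Tanimoto coefficient below is a generic stand-in for BSSF's validated scoring function, and the fingerprints are random toys.

        import numpy as np

        def tanimoto(a, b):
            # similarity of two boolean fingerprints: |a AND b| / |a OR b|
            union = np.logical_or(a, b).sum()
            return np.logical_and(a, b).sum() / union if union else 0.0

        def z_scores(query, database):
            sims = np.array([tanimoto(query, fp) for fp in database])
            return (sims - sims.mean()) / sims.std()

        rng = np.random.default_rng(0)
        db = rng.integers(0, 2, size=(1000, 256)).astype(bool)  # toy fingerprints
        hits = np.argsort(z_scores(db[0], db))[::-1][:10]       # top-10 ranked hits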

  18. A web-based tool for the Comprehensive Unit-based Safety Program (CUSP).

    Science.gov (United States)

    Pronovost, Peter J; King, Jay; Holzmueller, Christine G; Sawyer, Melinda; Bivens, Shauna; Michael, Michelle; Haig, Kathy; Paine, Lori; Moore, Dana; Miller, Marlene

    2006-03-01

    An organization's ability to change is driven by its culture, which in turn has a significant impact on safety. The six-step Comprehensive Unit-Based Safety Program (CUSP) is intended to improve local culture and safety. A Web-based project management tool for CUSP was developed and then pilot tested at two hospitals. HOW ECUSP WORKS: Once a patient safety concern is identified (step 3), a unit-level interdisciplinary safety committee determines issue criticality and starts up the projects (step 4), which are managed using project management tools within eCUSP (step 5). On a project's completion, the results are disseminated through a shared story (step 6). OSF St. Joseph's Medical Center-The Medical Birthing Center (Bloomington, Illinois) identified 11 safety issues, implemented 11 projects, and created 9 shared stories--including one for its Armband Project. The Johns Hopkins Hospital (Baltimore) Medical Progressive Care (MPC4) Unit identified 5 safety issues and implemented 4 ongoing projects, including the intravenous (IV) Tubing Compliance Project. The eCUSP tool's success depends on an organizational commitment to creating a culture of safety.

  19. A novel ternary content addressable memory design based on resistive random access memory with high density and low search energy

    Science.gov (United States)

    Han, Runze; Shen, Wensheng; Huang, Peng; Zhou, Zheng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    A novel ternary content addressable memory (TCAM) design based on resistive random access memory (RRAM) is presented. Each TCAM cell consists of two parallel RRAM devices that both store and search ternary data. The cell size of the proposed design is 8F2, enabling a ∼60× cell area reduction compared with the conventional static random access memory (SRAM) based implementation. Simulation results also show that the search delay and energy consumption of the proposed design for a 64-bit word search are 2 ps and 0.18 fJ/bit/search, respectively, at the 22 nm technology node; these are significant improvements over previous works. The desired characteristics of RRAM for implementation of a high-performance TCAM search chip are also discussed.
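
    Functionally, a TCAM lookup reduces to the ternary match below: each stored symbol is 0, 1 or X (don't care), and a word matches when every non-X symbol equals the corresponding search bit. This toy captures only the semantics that the RRAM cell pair implements, not the circuit itself.

        def tcam_match(stored_word, search_bits):
            # X matches either bit; 0/1 must match exactly
            return all(s == "X" or s == b for s, b in zip(stored_word, search_bits))

        table = ["10X1", "0XX0", "1111"]
        query = "1011"
        matches = [i for i, w in enumerate(table) if tcam_match(w, query)]  # [0]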

  20. A searching and reporting system for relational databases using a graph-based metadata representation.

    Science.gov (United States)

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
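
    One way to realize such a system, sketched below under invented table names, is to store foreign-key links as a graph and derive the JOIN clause between two tables from the shortest path between them, so that the user never writes SQL directly. This illustrates the idea, not Anadys' implementation.

        from collections import deque

        FK = {  # (table_a, table_b) -> join condition (illustrative schema)
            ("compound", "assay_result"): "compound.id = assay_result.compound_id",
            ("assay_result", "assay"): "assay_result.assay_id = assay.id",
        }
        GRAPH = {}
        for a, b in FK:
            GRAPH.setdefault(a, []).append(b)
            GRAPH.setdefault(b, []).append(a)

        def join_path(src, dst):
            # breadth-first search over the schema graph
            prev, queue = {src: None}, deque([src])
            while queue:
                node = queue.popleft()
                if node == dst:
                    break
                for nxt in GRAPH[node]:
                    if nxt not in prev:
                        prev[nxt] = node
                        queue.append(nxt)
            path, node = [], dst
            while prev[node] is not None:
                path.append((prev[node], node))
                node = prev[node]
            return list(reversed(path))

        def build_sql(src, dst, columns):
            joins = " ".join(f"JOIN {b} ON {FK.get((a, b)) or FK[(b, a)]}"
                             for a, b in join_path(src, dst))
            return f"SELECT {', '.join(columns)} FROM {src} {joins}"

        print(build_sql("compound", "assay", ["compound.name", "assay.target"]))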

  1. A survey and comparison of transformation tools based on the transformation tool contest

    NARCIS (Netherlands)

    Jakumeit, E.; Van Gorp, P.; Buchwald, S.; Rose, L.; Wagelaar, D.; Dan, L.; Hegedüs, Á; Hermannsdörfer, M.; Horn, T.; Kalnina, E.; Krause, C.; Lano, K.; Lepper, M.; Rensink, Arend; Rose, L.M.; Wätzoldt, S.; Mazanek, S.

    Model transformation is one of the key tasks in model-driven engineering and relies on the efficient matching and modification of graph-based data structures; its sibling graph rewriting has been used to successfully model problems in a variety of domains. Over the last years, a wide range of graph

  2. A TOOL FOR EMOTIONAL USER EXPERIENCE ASSESSMENT OF WEB-BASED MEDICAL SERVICES

    OpenAIRE

    Alexander Nikov; Tramaine Alaina Gumaia

    2016-01-01

    Emotional User Experience Design (eUXD) has become increasingly important for web-based services. The primary objective of this study is to give users websites that are easy to understand and operate and pleasing to use. A checklist tool for emotional user experience (eUX) assessment that supports web-based medical services is proposed. This tool measures user moods while medical services' websites are in use. The tool locates emotive design-oriented problems and thus defines relevan...

  3. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    Science.gov (United States)

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf weighting and a boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher for the predicate-based approach, indicating that predicates support more precise searching than keywords and laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
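
    The representational shift can be illustrated compactly: documents become bags of (subject, predicate, object) triples, each weighted by tf-idf over the triple vocabulary, and query-document similarity is a cosine in that space. The triples below are invented, and the paper's adjusted tf-idf and boost function are replaced here by the textbook formulas.

        import math
        from collections import Counter

        docs = [
            [("p53", "INHIBITS", "tumor_growth"), ("p53", "ISA", "protein")],
            [("aspirin", "TREATS", "inflammation"), ("p53", "ISA", "protein")],
        ]
        N = len(docs)
        df = Counter(t for doc in docs for t in set(doc))  # document frequency

        def tfidf_vector(triples):
            tf = Counter(triples)
            return {t: (1 + math.log(c)) * math.log(N / df.get(t, 1))
                    for t, c in tf.items()}

        def cosine(u, v):
            dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
            nu = math.sqrt(sum(x * x for x in u.values()))
            nv = math.sqrt(sum(x * x for x in v.values()))
            return dot / (nu * nv) if nu and nv else 0.0

        query = tfidf_vector([("p53", "INHIBITS", "tumor_growth")])
        scores = [cosine(query, tfidf_vector(d)) for d in docs]  # doc 0 wins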

  4. An image-based search for pulsars among Fermi unassociated LAT sources

    Science.gov (United States)

    Frail, D. A.; Ray, P. S.; Mooley, K. P.; Hancock, P.; Burnett, T. H.; Jagannathan, P.; Ferrara, E. C.; Intema, H. T.; de Gasperin, F.; Demorest, P. B.; Stovall, K.; McKinnon, M. M.

    2018-03-01

    We describe an image-based method that uses two radio criteria, compactness and spectral index, to identify promising pulsar candidates among Fermi Large Area Telescope (LAT) unassociated sources. These criteria are applied to those radio sources from the Giant Metrewave Radio Telescope all-sky survey at 150 MHz (TGSS ADR1) found within the error ellipses of unassociated sources from the 3FGL catalogue and a preliminary source list based on 7 yr of LAT data. After follow-up interferometric observations to identify extended or variable sources, a list of 16 compact, steep-spectrum candidates is generated. An ongoing search for pulsations in these candidates, in gamma rays and radio, has found six millisecond pulsars and one normal pulsar. A comparison of this method with existing selection criteria based on gamma-ray spectral and variability properties suggests that the pulsar discovery space using Fermi may be larger than previously thought. Radio imaging is a hitherto underutilized source selection method that can be used, as with other multiwavelength techniques, in the search for Fermi pulsars.
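
    The two radio cuts can be expressed as a simple filter over catalogue columns, as in the toy below; the thresholds are placeholders, not the cuts actually applied to the TGSS ADR1 data.

        import numpy as np

        # columns: total/peak flux ratio (compactness proxy), spectral index alpha
        catalogue = np.array([
            [1.05, -1.9],   # compact and steep-spectrum: pulsar-like
            [2.40, -0.7],   # extended source
            [1.10, -0.3],   # flat-spectrum source
        ])
        compact = catalogue[:, 0] < 1.25   # unresolved at 150 MHz (assumed cut)
        steep = catalogue[:, 1] < -1.4     # steep spectrum (assumed cut)
        candidates = np.where(compact & steep)[0]  # -> array([0])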

  5. An Experiment and Detection Scheme for Cavity-Based Light Cold Dark Matter Particle Searches

    Directory of Open Access Journals (Sweden)

    Masroor H. S. Bukhari

    2017-01-01

    Full Text Available A resonance detection scheme and some useful ideas for cavity-based searches of light cold dark matter particles (such as axions) are presented, as an effort to aid the ongoing endeavors in this direction as well as future experiments, especially in possibly developing a table-top experiment. The scheme is based on our idea of a resonant detector, incorporating an integrated tunnel diode (TD) and GaAs HEMT/HFET (High-Electron Mobility Transistor/Heterogeneous FET) transistor amplifier, weakly coupled to a cavity in a strong transverse magnetic field. The TD-amplifier combination is suggested as a sensitive and simple technique to facilitate resonance detection within the cavity while maintaining excellent noise performance, whereas our proposed Halbach magnet array could serve as a low-noise and permanent solution replacing the conventional electromagnet scheme. We present some preliminary test results which demonstrate resonance detection from simulated test signals in a small optimal axion mass range with superior signal-to-noise ratios (SNR). Our suggested design also contains an overview of a simpler on-resonance dc signal read-out scheme replacing the complicated heterodyne read-out. We believe that all these factors and our propositions could possibly improve or at least simplify the resonance detection and read-out in cavity-based DM particle detection searches (and other spectroscopy applications) and reduce the complications (and associated costs), in addition to reducing the electromagnetic interference and background.

  6. QUALITY SERVICES EVALUATION MODEL BASED ON DEDICATED SOFTWARE TOOL

    Directory of Open Access Journals (Sweden)

    ANDREEA CRISTINA IONICĂ

    2012-10-01

    Full Text Available In this paper we introduce a new model, called Service Quality (SQ), which combines the QFD and SERVQUAL methods. This model takes from the SERVQUAL method the five dimensions of requirements and three of characteristics, and from the QFD method the application methodology. The originality of the SQ model consists in computing a global index that reflects the degree to which the quality characteristics fulfil the customers' requirements. In order to prove the viability of the SQ model, a software tool was developed and applied to the evaluation of a health care services provider.

  7. The ship-borne infrared searching and tracking system based on the inertial platform

    Science.gov (United States)

    Li, Yan; Zhang, Haibo

    2011-08-01

    In modern electronic warfare, when the radar system is jammed or forced into a semi-silent state, guidance precision can degrade badly, so equipment that depends on electronic guidance cannot strike incoming targets accurately. Electro-optical devices are then needed to compensate for these shortcomings; however, when interference occurs while the radar is leading, and especially when the electro-optical equipment is affected by roll, pitch and yaw rotation, the target can remain outside the field of view of the electro-optical devices for a long time. The infrared electro-optical equipment therefore cannot exert its superiority, and the weapon-control system cannot "reverse bring" the missile against incoming targets. A conventional ship-borne infrared system is thus unable to track incoming targets quickly, and its electro-optical countermeasure capability declines heavily. Here we provide a new control algorithm for semi-automatic searching and infrared tracking based on an inertial navigation platform, which is now working well in our XX infrared electro-optical searching and tracking system. The algorithm is divided into two steps. First, manual mode switches to auto-search when the guidance deviation exceeds the current field of view during radar leading. Second, when the contrast of the target in the search field satisfies the image pick-up threshold, the speed computed by the constant-acceleration (CA) model least-squares method is fed back to the speed loop; combined with the infrared information, this accomplishes closed-loop control of infrared electro-optical tracking. The algorithm is verified by experiment: using the algorithm, the target capture distance is 22.3 kilometers under large guidance deviation, whereas without it the capture distance declines by 12 kilometers. The algorithm advances the infrared electro-optical countermeasure capability and reduces the target capturing
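
    The rate-estimation step can be sketched as an ordinary least-squares fit of a constant-acceleration model to recent angle samples, with the fitted rate fed to the speed loop; the numbers below are illustrative only.

        import numpy as np

        t = np.array([0.00, 0.02, 0.04, 0.06, 0.08])      # sample times (s)
        theta = np.array([0.10, 0.14, 0.19, 0.25, 0.32])  # measured angle (deg)

        # fit theta(t) = a0 + a1*t + 0.5*a2*t^2 (constant-acceleration model)
        A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
        coef, *_ = np.linalg.lstsq(A, theta, rcond=None)  # [angle, rate, accel]
        rate_cmd = coef[1] + coef[2] * t[-1]  # current angular rate for speed loop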

  8. ARBOOK: Development and Assessment of a Tool Based on Augmented Reality for Anatomy

    Science.gov (United States)

    Ferrer-Torregrosa, J.; Torralba, J.; Jimenez, M. A.; García, S.; Barcia, J. M.

    2015-01-01

    The evolution of technologies and the development of new tools for educational purposes continue to grow. This work presents the experience of a new tool based on augmented reality (AR) focusing on the anatomy of the lower limb. ARBOOK was constructed and developed based on CT and MRI images, dissections and drawings. For ARBOOK evaluation, a…

  9. Delivering Electronic Resources with Web OPACs and Other Web-based Tools: Needs of Reference Librarians.

    Science.gov (United States)

    Bordeianu, Sever; Carter, Christina E.; Dennis, Nancy K.

    2000-01-01

    Describes Web-based online public access catalogs (Web OPACs) and other Web-based tools as gateway methods for providing access to library collections. Addresses solutions for overcoming barriers to information, such as through the implementation of proxy servers and other authentication tools for remote users. (Contains 18 references.)…

  10. Using Web-Based Technologies and Tools in Future Choreographers' Training: British Experience

    Science.gov (United States)

    Bidyuk, Dmytro

    2016-01-01

    In the paper the problem of using effective web-based technologies and tools in teaching choreography in British higher education institutions has been discussed. Research on the usage of web-based technologies and tools for practical dance courses in choreographers' professional training at British higher education institutions by such British…

  11. Principles and tools for collaborative entity-based intelligence analysis.

    Science.gov (United States)

    Bier, Eric A; Card, Stuart K; Bodnar, John W

    2010-01-01

    Software tools that make it easier for analysts to collaborate as a natural part of their work will lead to better analysis that is informed by more perspectives. We are interested to know if software tools can be designed that support collaboration even as they allow analysts to find documents and organize information (including evidence, schemas, and hypotheses). We have modified the Entity Workspace system, described previously, to test such designs. We have evaluated the resulting design in both a laboratory study and a study where it is situated with an analysis team. In both cases, effects on collaboration appear to be positive. Key aspects of the design include an evidence notebook optimized for organizing entities (rather than text characters), information structures that can be collapsed and expanded, visualization of evidence that emphasizes events and documents (rather than emphasizing the entity graph), and a notification system that finds entities of mutual interest to multiple analysts. Long-term tests suggest that this approach can support both top-down and bottom-up styles of analysis.

  12. Evidence-based medicine - an appropriate tool for evidence-based health policy? A case study from Norway.

    Science.gov (United States)

    Malterud, Kirsti; Bjelland, Anne Karen; Elvbakken, Kari Tove

    2016-03-05

    Evidence-based policy (EBP), a concept modelled on the principles of evidence-based medicine (EBM), is widely used in different areas of policymaking. Systematic reviews (SRs) with meta-analyses gradually became the methods of choice for synthesizing research evidence about interventions and judgements about quality of evidence and strength of recommendations. Critics have argued that the relation between research evidence and service policies is weak, and that the notion of EBP rests on a misunderstanding of policy processes. Having explored EBM standards and knowledge requirements for health policy decision-making, we present an empirical point of departure for discussing the relationship between EBM and EBP. In a case study exploring the Norwegian Knowledge Centre for the Health Services (NOKC), an independent government unit, we first searched for information about the background and development of the NOKC to establish a research context. We then identified, selected and organized official NOKC publications as an empirical sample of typical top-of-the-line knowledge delivery adhering to EBM standards. Finally, we explored conclusions in this type of publication, specifically addressing their potential as policy decision tools. From a total sample of 151 SRs published by the NOKC in the period 2004-2013, a purposive subsample from 2012 (14 publications) advised major caution about their conclusions because of the quality or relevance of the underlying documentation. Although the case study did not include a systematic investigation of uptake and policy consequences, SRs were found to be inappropriate as universal tools for health policy decision-making. The case study demonstrates that EBM is not necessarily suited to knowledge provision for every kind of policy decision-making. Our analysis raises the question of whether the evidence-based movement, represented here by an independent government organization, undertakes too broad a range of commissions using

  13. An opposition-based harmony search algorithm for engineering optimization problems

    Directory of Open Access Journals (Sweden)

    Abhik Banerjee

    2014-03-01

    Full Text Available Harmony search (HS) is a derivative-free real parameter optimization algorithm. It draws inspiration from the musical improvisation process of searching for a perfect state of harmony. The proposed opposition-based HS (OHS) of the present work employs opposition-based learning for harmony memory initialization and also for generation jumping. The concept of the opposite number is utilized in OHS to improve the convergence rate of the HS algorithm. The potential of the proposed algorithm is assessed by means of an extensive comparative study of the numerical results on sixteen benchmark test functions. Additionally, the effectiveness of the proposed algorithm is tested for reactive power compensation of an autonomous power system. For real-time reactive power compensation of the studied model, Takagi-Sugeno fuzzy logic (TSFL) is employed. Time-domain simulation reveals that the proposed OHS-TSFL yields on-line, off-nominal model parameters, resulting in a real-time incremental change in the terminal voltage response profile.
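
    The two opposition-based ingredients are small additions to plain harmony search, as the sketch below shows on a toy objective: the opposite of x in [a, b] is a + b - x, used once at memory initialization and then probabilistically as generation jumping. The jumping rate and the simplified improvisation step are assumptions, not the paper's exact settings.

        import random

        LO, HI, DIM, HMS = -5.0, 5.0, 2, 10   # bounds, dimension, harmony memory size

        def sphere(x):                        # toy objective to minimize
            return sum(v * v for v in x)

        def opposite(x):
            return [LO + HI - v for v in x]

        # opposition-based initialization: keep the better of x and its opposite
        memory = []
        for _ in range(HMS):
            x = [random.uniform(LO, HI) for _ in range(DIM)]
            memory.append(min((x, opposite(x)), key=sphere))

        JR = 0.3                              # jumping rate (assumed value)
        for _ in range(500):
            # simplified improvisation: memory consideration + pitch adjustment
            new = [random.choice(memory)[d] + random.gauss(0, 0.1)
                   for d in range(DIM)]
            new = [min(HI, max(LO, v)) for v in new]
            worst = max(range(HMS), key=lambda i: sphere(memory[i]))
            if sphere(new) < sphere(memory[worst]):
                memory[worst] = new
            if random.random() < JR:          # generation jumping
                for i in range(HMS):
                    cand = opposite(memory[i])
                    if sphere(cand) < sphere(memory[i]):
                        memory[i] = cand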

  14. Forecasting Energy CO2 Emissions Using a Quantum Harmony Search Algorithm-Based DMSFE Combination Model

    Directory of Open Access Journals (Sweden)

    Xingsheng Gu

    2013-03-01

    Full Text Available The accurate forecasting of carbon dioxide (CO2) emissions from fossil fuel energy consumption is a key requirement for making energy policy and environmental strategy. In this paper, a novel quantum harmony search (QHS) algorithm-based discounted mean square forecast error (DMSFE) combination model is proposed. In DMSFE combination forecasting models, almost all investigations assign the discounting factor (β) arbitrarily, since β varies between 0 and 1, and adopt one value for all individual models and forecasting periods. The original method does not consider the influences of the individual model and the forecasting period. This work contributes by changing β from one value to a matrix, taking the different models and forecasting periods into consideration, and by presenting a way of searching for the optimal β values using the QHS algorithm through optimizing the mean absolute percent error (MAPE) objective function. The QHS algorithm-based optimized DMSFE combination forecasting model is established and tested by forecasting the CO2 emissions of the world's top-5 CO2 emitters. Evaluation indexes such as MAPE, root mean squared error (RMSE) and mean absolute error (MAE) are employed to test the performance of the presented approach. The empirical analyses confirm the validity of the presented method, and the forecasting accuracy can be increased to a certain degree.
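
    For reference, the classical DMSFE weights that the paper generalizes can be computed as below, assuming the usual form w_i proportional to 1 / (sum_t beta^(T-t+1) * e_it^2) with a single scalar discount factor beta; the per-model, per-period beta matrix and its QHS tuning are not shown, and the error values are invented.

        import numpy as np

        errors = np.array([   # e_it: forecast errors of 3 models over 5 periods
            [0.8, 0.5, 0.4, 0.3, 0.2],
            [0.6, 0.6, 0.5, 0.5, 0.5],
            [1.2, 0.9, 0.7, 0.4, 0.1],
        ])
        beta, T = 0.9, errors.shape[1]
        discount = beta ** (T - np.arange(1, T + 1) + 1)  # beta^(T-t+1), t = 1..T
        dmsfe = (discount * errors**2).sum(axis=1)        # discounted MSFE per model
        weights = (1 / dmsfe) / (1 / dmsfe).sum()         # normalized weights
        forecasts = np.array([10.2, 9.8, 10.5])           # next-period forecasts (toy)
        combined = weights @ forecasts                    # combination forecast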

  15. Turn-Based War Chess Model and Its Search Algorithm per Turn

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2016-01-01

    Full Text Available War chess gaming has so far received insufficient attention but is a significant component of turn-based strategy games (TBS); it is studied in this paper. First, a common game model is proposed covering various existing war chess types. Based on the model, we propose a theoretical frame involving combinatorial optimization on the one hand and game tree search on the other. We also discuss a key problem, namely, that the number of branching factors in each turn of the game tree is huge. We then propose two algorithms for searching within one turn to solve the problem: (1) enumeration by order; (2) enumeration by recursion. The main difference between these two is the permutation method used: the former uses the dictionary sequence method, while the latter uses the recursive permutation method. Finally, we prove that both of these algorithms are optimal, and we analyze the difference between their efficiencies. An important factor is the total time taken for a unit to expand until it achieves its reachable positions; this factor, the total number of expansions that each unit makes over its reachable positions, is set. The conclusion, in terms of this factor, is that enumeration by recursion is better than enumeration by order in all situations.
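
    The contrast between the two per-turn enumeration schemes can be made concrete with a toy: "by order" walks the joint move assignments of all units in dictionary sequence, while "by recursion" generates unit activation orders by recursive permutation. Units and moves here are placeholders, not the paper's game model.

        from itertools import product

        units = ["A", "B"]
        moves = ["stay", "north", "east"]

        # enumeration by order: dictionary-sequence walk over joint assignments
        by_order = list(product(moves, repeat=len(units)))

        # enumeration by recursion: recursive permutation of unit activation order
        def activation_orders(remaining, prefix, out):
            if not remaining:
                out.append(tuple(prefix))
                return
            for i, u in enumerate(remaining):
                activation_orders(remaining[:i] + remaining[i + 1:],
                                  prefix + [u], out)

        orders = []
        activation_orders(units, [], orders)  # [('A', 'B'), ('B', 'A')]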

  16. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  17. KNOWLEDGE MANAGEMENT TOOLS FOR THE EUROPEAN KNOWLEDGE BASED SOCIETY

    Directory of Open Access Journals (Sweden)

    Ramona – Diana Leon

    2011-12-01

    Full Text Available Increasingly, the literature mentions that in the current competitive environment knowledge has become the main source of competitive advantage, while recent research regarding economic growth and development has defined knowledge as the most critical resource of emerging countries. Therefore, organizations' interest in knowledge has increased, knowledge management being defined as the process of meeting existing needs, identifying and exploiting existing and/or acquired knowledge, and developing new opportunities. In other words, knowledge management facilitates productive information usage, intelligence growth, storing intellectual capital, strategic planning, flexible acquisition, collection of best practices, increasing the likelihood of being successful, as well as more productive collaboration within the company. In order to benefit from all these advantages, the usage of specific tools is required, including models and systems to stimulate the creation, dissemination and use of knowledge held by each employee and by the organization as a whole.

  18. A Visualization-Based Tutoring Tool for Engineering Education

    Science.gov (United States)

    Nguyen, Tang-Hung; Khoo, I.-Hung

    2010-06-01

    In engineering disciplines, students usually have a hard time visualizing different aspects of engineering analysis and design, which inherently are too complex or abstract to fully understand without the aid of visual explanations or visualizations. As an example, when learning materials and the sequence of the construction process, students need to visualize how all components of a constructed facility are assembled. Such visualization cannot be achieved in a textbook or a traditional lecturing environment. In this paper, the authors present the development of computer tutoring software in which different visualization tools, including video clips, 3-dimensional models, drawings, and pictures/photos, together with complementary texts, are used to assist students in deeply understanding and effectively mastering materials. The paper also discusses the implementation and effectiveness evaluation of the proposed tutoring software, which was used to teach a construction engineering management course offered at California State University, Long Beach.

  19. HLRmon: a role-based grid accounting report web tool

    International Nuclear Information System (INIS)

    Pra, S D; Fattibene, E; Misurelli, G; Pescarmona, F; Gaido, L

    2008-01-01

    Both Grid users and Grid operators need ways to get CPU usage statistics about jobs executed in a given time period at various levels of detail, depending on their specific Grid role and rights. While a Grid user is interested in reports about their own jobs and should not get access to others' data, a Site, Virtual Organization (VO) or Regional Operation Centre (ROC) manager would also like to see how resources are used across the Grid on a per-Site or per-VO basis, or both. The whole set of different reports turns out to be quite large, and the various existing tools for creating them tend to satisfy a single user category well, at the expense of the others. HLRmon results from our efforts to generate suitable reports for all existing categories and has been designed to serve them within a unified layout. Thanks to its ability to authenticate clients through certificates and the related authorization rights, it can restrict a priori the range of selectable items offered to the web user, so that sensitive information is only provided to specifically enabled people. Information is gathered by HLRmon from a Home Location Register (HLR), which stores complete accounting data on a per-job basis. Depending on the kind of reports to be generated, it either queries the HLR server directly using an ad hoc Distributed Grid Accounting System (DGAS) query tool (typically for user-level detail), or a local RDBMS table with daily aggregate information on a per-day, per-Site, per-VO basis, thus saving connection delay time and needless load on the HLR server

  20. Confirmation bias in web-based search: a randomized online study on the effects of expert information and social tags on information search and evaluation.

    Science.gov (United States)

    Schweiger, Stefan; Oeberst, Aileen; Cress, Ulrike

    2014-03-26

    The public typically believes psychotherapy to be more effective than pharmacotherapy for depression treatments. This is not consistent with current scientific evidence, which shows that both types of treatment are about equally effective. The study investigates whether this bias towards psychotherapy guides online information search and whether the bias can be reduced by explicitly providing expert information (in a blog entry) and by providing tag clouds that implicitly reveal experts' evaluations. A total of 174 participants completed a fully automated Web-based study after we invited them via mailing lists. First, participants read two blog posts by experts that either challenged or supported the bias towards psychotherapy. Subsequently, participants searched for information about depression treatment in an online environment that provided more experts' blog posts about the effectiveness of treatments based on alleged research findings. These blogs were organized in a tag cloud; both psychotherapy tags and pharmacotherapy tags were popular. We measured tag and blog post selection, efficacy ratings of the presented treatments, and participants' treatment recommendation after information search. Participants demonstrated a clear bias towards psychotherapy (mean 4.53, SD 1.99) compared to pharmacotherapy (mean 2.73, SD 2.41; t173=7.67, P<.001), and this bias guided information search and evaluation. The bias was significantly reduced, however, when participants were exposed to tag clouds with challenging popular tags. Participants facing popular tags challenging their bias (n=61) showed significantly less biased tag selection (F2,168=10.61, P<.001). Challenging expert information as presented in blog posts, compared to supporting expert information (n=81), decreased the bias in information search with regard to blog post selection (F1,168=4.32, P=.04, partial eta squared=0.025). No significant effects were found for treatment recommendation (Ps>.33). We conclude that the psychotherapy bias is most effectively