WorldWideScience

Sample records for methods library search

  1. Flexible digital library search

    NARCIS (Netherlands)

    Windhouwer, M.; Schmidt, A.; Zwol, van R.; Petkovic, M.; Blok, H.E.; Dahanayake, A.; Gerhardt, W.

    2003-01-01

    In this chapter the development of a specialised search engine for a digital library is described. The proposed system architecture consists of three levels: the conceptual, the logical and the physical level. The conceptual level schema enables by its exposure of a domain specific schema

  2. Search features of digital libraries

    Directory of Open Access Journals (Sweden)

    Alastair G. Smith

    2000-01-01

    Traditional on-line search services such as Dialog, DataStar and Lexis provide a wide range of search features (Boolean and proximity operators, truncation, etc.). This paper discusses the use of these features for effective searching, and argues that these features are required, regardless of advances in search engine technology. The literature on on-line searching is reviewed, identifying features that searchers find desirable for effective searching. A selective survey of current digital libraries available on the Web was undertaken, identifying which search features are present. The survey indicates that current digital libraries do not implement a wide range of search features. For instance, under half of the examples included controlled vocabulary, under half had proximity searching, only one enabled browsing of term indexes, and none of the digital libraries enabled searchers to refine an initial search. Suggestions are made for enhancing the search effectiveness of digital libraries, for instance by providing a full range of search operators, enabling browsing of search terms, enhancing records with controlled vocabulary, and enabling the refining of initial searches.
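
    As a rough illustration of the classic search features surveyed above (implied Boolean AND, explicit OR, and right-hand truncation), the sketch below evaluates such queries against a toy in-memory catalogue. The documents, function names, and the '*' truncation syntax are illustrative assumptions, not drawn from the paper.

    ```python
    # Minimal sketch of classic search features (Boolean AND/OR and right-hand
    # truncation) applied to a tiny in-memory "catalogue". All data here are toy.
    import re

    documents = {
        1: "digital library search interfaces",
        2: "education research in academic libraries",
        3: "online searching and controlled vocabulary",
    }

    def matches(term: str, text: str) -> bool:
        """Match a single term; a trailing '*' acts as right-hand truncation."""
        if term.endswith("*"):
            pattern = r"\b" + re.escape(term[:-1]) + r"\w*"
        else:
            pattern = r"\b" + re.escape(term) + r"\b"
        return re.search(pattern, text, flags=re.IGNORECASE) is not None

    def boolean_and(terms, text):
        return all(matches(t, text) for t in terms)   # AND is implied

    def boolean_or(terms, text):
        return any(matches(t, text) for t in terms)   # explicit OR

    # "librar*" AND "search" -- truncation picks up library/libraries
    print([doc_id for doc_id, text in documents.items()
           if boolean_and(["librar*", "search"], text)])
    ```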

  3. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries

    Directory of Open Access Journals (Sweden)

    Han Bucong

    2012-11-01

    Background: Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and the appearance of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. Results: We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%–95.01% of inhibitors and 99.81%–99.90% of non-inhibitors in 5-fold cross-validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem compounds, 1,496 (0.89%) of 168K MDDR compounds, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. Conclusions: SVM showed comparable yield and reduced false-hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.

  4. Development and experimental test of support vector machines virtual screening method for searching Src inhibitors from large compound libraries.

    Science.gov (United States)

    Han, Bucong; Ma, Xiaohua; Zhao, Ruiying; Zhang, Jingxian; Wei, Xiaona; Liu, Xianghui; Liu, Xin; Zhang, Cunlong; Tan, Chunyan; Jiang, Yuyang; Chen, Yuzong

    2012-11-23

    Src plays various roles in tumour progression, invasion, metastasis, angiogenesis and survival. It is one of the multiple targets of multi-target kinase inhibitors in clinical uses and trials for the treatment of leukemia and other cancers. These successes and the appearance of drug resistance in some patients have raised significant interest and efforts in discovering new Src inhibitors. Various in-silico methods have been used in some of these efforts. It is desirable to explore additional in-silico methods, particularly those capable of searching large compound libraries at high yields and reduced false-hit rates. We evaluated support vector machines (SVM) as virtual screening tools for searching Src inhibitors from large compound libraries. SVM trained and tested by 1,703 inhibitors and 63,318 putative non-inhibitors correctly identified 93.53%–95.01% of inhibitors and 99.81%–99.90% of non-inhibitors in 5-fold cross-validation studies. SVM trained by 1,703 inhibitors reported before 2011 and 63,318 putative non-inhibitors correctly identified 70.45% of the 44 inhibitors reported since 2011, and predicted as inhibitors 44,843 (0.33%) of 13.56M PubChem compounds, 1,496 (0.89%) of 168K MDDR compounds, and 719 (7.73%) of 9,305 MDDR compounds similar to the known inhibitors. SVM showed comparable yield and reduced false-hit rates in searching large compound libraries compared to the similarity-based and other machine-learning VS methods developed from the same set of training compounds and molecular descriptors. We tested three virtual hits of the same novel scaffold from in-house chemical libraries not reported as Src inhibitors, one of which showed moderate activity. SVM may be potentially explored for searching Src inhibitors from large compound libraries at low false-hit rates.
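
    To make the screening workflow above concrete, here is a minimal sketch of SVM-based virtual screening with 5-fold cross-validation using scikit-learn. The random descriptor matrices stand in for molecular descriptors of inhibitors and putative non-inhibitors; the paper's actual descriptor set, kernel settings, and compound data are not reproduced.

    ```python
    # Sketch of SVM virtual screening: train on descriptors of known inhibitors
    # vs. putative non-inhibitors, check 5-fold CV accuracy, then flag library
    # compounds predicted as inhibitors. Descriptors are random stand-ins.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_descriptors = 100                                          # e.g., topological/physicochemical descriptors
    X_inhib = rng.normal(loc=0.5, size=(300, n_descriptors))     # known inhibitors (toy)
    X_nonin = rng.normal(loc=-0.5, size=(3000, n_descriptors))   # putative non-inhibitors (toy)
    X = np.vstack([X_inhib, X_nonin])
    y = np.array([1] * len(X_inhib) + [0] * len(X_nonin))

    clf = SVC(kernel="rbf", C=1.0, gamma="scale", class_weight="balanced")
    scores = cross_val_score(clf, X, y, cv=5)                    # 5-fold cross-validation
    print("5-fold CV accuracy:", scores.mean())

    # "Screen" a new compound library: keep only compounds predicted as inhibitors.
    clf.fit(X, y)
    library = rng.normal(size=(10000, n_descriptors))            # stand-in for a large library
    virtual_hits = library[clf.predict(library) == 1]
    print("virtual hits:", len(virtual_hits), "of", len(library))
    ```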

  5. How Users Search the Library from a Single Search Box

    Science.gov (United States)

    Lown, Cory; Sierra, Tito; Boyer, Josh

    2013-01-01

    Academic libraries are turning increasingly to unified search solutions to simplify search and discovery of library resources. Unfortunately, very little research has been published on library user search behavior in single search box environments. This study examines how users search a large public university library using a prominent, single…

  6. Nigerian Libraries: Advanced Search

    African Journals Online (AJOL)


  7. A Search method for Scientific Data in Digital Libraries, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Unlike the world wide web or general libraries, digital libraries typically serve a specialized community of experts sharing a relatively narrow focus, such as some...

  8. An Exploration of Retrieval-Enhancing Methods for Integrated Search in a Digital Library

    DEFF Research Database (Denmark)

    Sørensen, Diana Ransgaard; Bogers, Toine; Larsen, Birger

    2012-01-01

    Integrated search is defined as searching across different document types and representations simultaneously, with the goal of presenting the user with a single ranked result list containing the optimal mix of document types. In this paper, we compare various approaches to integrating three diffe...

  9. The North Carolina State University Libraries Search Experience: Usability Testing Tabbed Search Interfaces for Academic Libraries

    Science.gov (United States)

    Teague-Rector, Susan; Ballard, Angela; Pauley, Susan K.

    2011-01-01

    Creating a learnable, effective, and user-friendly library Web site hinges on providing easy access to search. Designing a search interface for academic libraries can be particularly challenging given the complexity and range of searchable library collections, such as bibliographic databases, electronic journals, and article search silos. Library…

  10. The Library as Search Engine

    Science.gov (United States)

    Chronicle of Higher Education, 2007

    2007-01-01

    This article presents a Technology Forum that focuses on online archives and their role in academe. The forum brought together Daniel Greenstein, associate vice provost for scholarly information and university librarian at the California Digital Library of the University of California; Adam Smith, group business-product manager for the Google Book…

  11. Ghana Library Journal: Advanced Search

    African Journals Online (AJOL)


  12. Combination of Multiple Spectral Libraries Improves the Current Search Methods Used to Identify Missing Proteins in the Chromosome-Centric Human Proteome Project.

    Science.gov (United States)

    Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Kim, Kwang-Youl; Kwon, Kyung-Hoon; Yoo, Jong Shin; Omenn, Gilbert S; Baker, Mark S; Hancock, William S; Paik, Young-Ki

    2015-12-04

    Approximately 2.9 billion base pairs of the human reference genome are known to encode some 20,000 representative proteins. However, about 3,000 proteins, that is, ~15% of all proteins, have no or very weak proteomic evidence and are still missing. Missing proteins may be present in rare samples in very low abundance or be only temporarily expressed, causing problems in their detection and protein profiling. In particular, some technical limitations cause missing proteins to remain unassigned. For example, current mass spectrometry techniques have high detection limits and error rates for complex biological samples. Insufficient proteome coverage in a reference sequence database and spectral library also raises major issues. Thus, the development of a better strategy that results in greater sensitivity and accuracy in the search for missing proteins is necessary. To this end, we used a new strategy, which combines a reference spectral library search and a simulated spectral library search, to identify missing proteins. We built the human iRefSPL, which contains the original human reference spectral library and additional peptide sequence-spectrum match entries from other species. We also constructed the human simSPL, which contains the simulated spectra of 173,907 human tryptic peptides determined by MassAnalyzer (version 2.3.1). To prove the enhanced analytical performance of the combination of the human iRefSPL and simSPL methods for the identification of missing proteins, we attempted to reanalyze the placental tissue data set (PXD000754). The data from each experiment were analyzed using PeptideProphet, and the results were combined using iProphet. For quality control, we applied the class-specific false-discovery rate filtering method. All of the results were filtered at a fixed class-specific false-discovery rate threshold. The two spectral libraries, iRefSPL and simSPL, were designed to ensure no overlap of the proteome coverage. They were shown to be complementary to spectral library
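
    The class-specific false-discovery-rate filtering step mentioned above can be sketched as follows: peptide-spectrum matches are split by the library that produced them, and a simple target-decoy cutoff is applied per class. The data layout, scores, and threshold logic are illustrative assumptions, not the authors' PeptideProphet/iProphet pipeline.

    ```python
    # Toy class-specific FDR filter: PSMs are grouped by class (here, the spectral
    # library that produced them) and a score threshold is found per class so that
    # the target-decoy FDR estimate stays below the cutoff.
    from collections import defaultdict

    # Each PSM: (class_label, score, is_decoy)
    psms = [
        ("iRefSPL", 0.95, False), ("iRefSPL", 0.90, False), ("iRefSPL", 0.70, True),
        ("simSPL",  0.88, False), ("simSPL",  0.60, True),  ("simSPL",  0.85, False),
    ]

    def class_specific_fdr_filter(psms, fdr_cutoff=0.01):
        by_class = defaultdict(list)
        for cls, score, is_decoy in psms:
            by_class[cls].append((score, is_decoy))
        accepted = []
        for cls, hits in by_class.items():
            hits.sort(key=lambda h: h[0], reverse=True)      # best scores first
            decoys = targets = 0
            for score, is_decoy in hits:
                decoys += int(is_decoy)
                targets += int(not is_decoy)
                fdr = decoys / max(targets, 1)               # simple target-decoy estimate
                if fdr > fdr_cutoff:
                    break
                if not is_decoy:
                    accepted.append((cls, score))
        return accepted

    print(class_specific_fdr_filter(psms))
    ```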

  13. Flexible and scalable digital library search

    NARCIS (Netherlands)

    M.A. Windhouwer (Menzo); A.R. Schmidt; R. van Zwol; M. Petkovic; H.E. Blok

    2001-01-01

    textabstractIn this report the development of a specialised search engine for a digital library is described. The proposed system architecture consists of three levels: the conceptual, the logical and the physical level. The conceptual level schema enables by its exposure of a domain specific

  14. Evaluation of Federated Searching Options for the School Library

    Science.gov (United States)

    Abercrombie, Sarah E.

    2008-01-01

    Three hosted federated search tools, Follett One Search, Gale PowerSearch Plus, and WebFeat Express, were configured and implemented in a school library. Databases from five vendors and the OPAC were systematically searched. Federated search results were compared with each other and to the results of the same searches in the database's native…

  15. Library search with regular reflectance IR spectra

    International Nuclear Information System (INIS)

    Staat, H.; Korte, E.H.; Lampen, P.

    1989-01-01

    In situ characterisation of coatings and other surface layers is generally favourable, but for precious items such as art objects it is a prerequisite. In infrared spectroscopy only reflection techniques are applicable here. However, for attenuated total reflection (ATR) it is difficult to obtain the necessary optical contact of the crystal with the sample when the latter is not perfectly plane or flexible. The measurement of diffuse reflectance demands a scattering sample, and usually the reflectance is very poor. Therefore in most cases one is left with regular reflectance. Such spectra consist of dispersion-like features instead of bands, impeding their interpretation in the way the analyst is used to. Furthermore, for computer search in common spectral libraries compiled from transmittance or absorbance spectra, a transformation of the reflectance spectra is needed. The correct conversion is based on the Kramers-Kronig transformation. This somewhat time-consuming procedure can be speeded up by using appropriate approximations. A coarser conversion may be obtained from the first derivative of the reflectance spectrum, which resembles the second derivative of a transmittance spectrum. The resulting distorted spectra can still be used successfully for the search in peak table libraries. Experiences with both transformations are presented. (author)
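
    The coarse conversion described above (taking the first derivative of a regular reflectance spectrum before searching peak-table libraries) can be illustrated numerically with a short sketch. The synthetic dispersion-shaped band and its position are invented for illustration, and no Kramers-Kronig transform is attempted.

    ```python
    # First-derivative conversion of a regular (specular) reflectance spectrum,
    # yielding a peak-table entry for library search. The band is synthetic.
    import numpy as np

    wavenumbers = np.linspace(400.0, 4000.0, 2000)           # cm^-1 grid
    # Dispersion-like feature centred at 1700 cm^-1 (toy stand-in for a measured band).
    x = (wavenumbers - 1700.0) / 15.0
    reflectance = 0.05 + 0.02 * x / (1.0 + x**2)

    # Coarse conversion: first derivative with respect to wavenumber.
    first_derivative = np.gradient(reflectance, wavenumbers)

    # Position of the strongest derivative feature, usable in a peak-table search.
    peak_position = wavenumbers[np.argmax(np.abs(first_derivative))]
    print(f"derivative extremum near {peak_position:.0f} cm^-1")
    ```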

  16. Flexible and scalable digital library search

    NARCIS (Netherlands)

    H.E. Blok; M.A. Windhouwer (Menzo); R. van Zwol; M. Petkovic; P.M.G. Apers; M.L. Kersten (Martin); W. Jonker

    2001-01-01

    textabstractThe everlasting search for new methods to explore the Inter- or Intranet is still going on. In this demo we present the combined effort of the AMIS and DMW research projects, each covering significant parts of this problem. The contribution of this demo is twofold. Firstly, we

  17. Spectrum-to-Spectrum Searching Using a Proteome-wide Spectral Library*

    Science.gov (United States)

    Yen, Chia-Yu; Houel, Stephane; Ahn, Natalie G.; Old, William M.

    2011-01-01

    The unambiguous assignment of tandem mass spectra (MS/MS) to peptide sequences remains a key unsolved problem in proteomics. Spectral library search strategies have emerged as a promising alternative for peptide identification, in which MS/MS spectra are directly compared against a reference library of confidently assigned spectra. Two problems relate to library size. First, reference spectral libraries are limited to rediscovery of previously identified peptides and are not applicable to new peptides, because of their incomplete coverage of the human proteome. Second, problems arise when searching a spectral library the size of the entire human proteome. We observed that traditional dot product scoring methods do not scale well with spectral library size, showing reduced sensitivity when library size is increased. We show that this problem can be addressed by optimizing scoring metrics for spectrum-to-spectrum searches with large spectral libraries. MS/MS spectra for the 1.3 million predicted tryptic peptides in the human proteome are simulated using a kinetic fragmentation model (MassAnalyzer version 2.1) to create a proteome-wide simulated spectral library. Searches of the simulated library increase MS/MS assignments by 24% compared with Mascot when using probabilistic and rank-based scoring methods. The proteome-wide coverage of the simulated library leads to an 11% increase in unique peptide assignments compared with parallel searches of a reference spectral library. Further improvement is attained when reference spectra and simulated spectra are combined into a hybrid spectral library, yielding 52% more MS/MS assignments than Mascot searches. Our study demonstrates the advantages of using probabilistic and rank-based scores to improve the performance of spectrum-to-spectrum search strategies. PMID:21532008
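
    The conventional dot-product score that the paper takes as its starting point can be written in a few lines: both spectra are binned on a common m/z grid, square-root transformed, normalised, and compared by a dot product. The bin width, transform, and toy peak lists below are common defaults chosen for illustration rather than the paper's exact settings, and the probabilistic and rank-based scores it proposes are not reproduced.

    ```python
    # Conventional binned dot-product similarity between two MS/MS spectra.
    import numpy as np

    def binned_vector(mz, intensity, bin_width=1.0, mz_max=2000.0):
        bins = np.zeros(int(mz_max / bin_width) + 1)
        for m, i in zip(mz, intensity):
            if m < mz_max:
                bins[int(m / bin_width)] += i
        bins = np.sqrt(bins)                       # soften dominance of tall peaks
        norm = np.linalg.norm(bins)
        return bins / norm if norm > 0 else bins

    def dot_product_score(query, reference):
        return float(np.dot(query, reference))     # 1.0 means identical binned spectra

    # Toy query and library spectra as (m/z, intensity) lists.
    query = binned_vector([175.1, 263.1, 376.2, 489.3], [100, 80, 60, 40])
    ref_a = binned_vector([175.1, 263.1, 376.2, 489.3], [90, 85, 55, 45])
    ref_b = binned_vector([147.1, 248.2, 355.1, 472.3], [100, 80, 60, 40])
    print("match:", dot_product_score(query, ref_a),
          "mismatch:", dot_product_score(query, ref_b))
    ```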

  18. Revitalizing the Library OPAC: Interface, Searching, and Display Challenges

    Directory of Open Access Journals (Sweden)

    Jia Mi

    2008-03-01

    The behavior of academic library users has drastically changed in recent years. Internet search engines have become the preferred tool over the library online public access catalog (OPAC) for finding information. Libraries are losing ground to online search engines. In this paper, two aspects of OPAC use are studied: (1) the current OPAC interface and searching capabilities, and (2) the OPAC bibliographic display. The purpose of the study is to find answers to the following questions: Why is the current OPAC ineffective? What can libraries and librarians do to deliver an OPAC that is as good as search engines to better serve our users? Revitalizing the library OPAC is one of the pressing issues that must be addressed.

  19. Federated Search and the Library Web Site: A Study of Association of Research Libraries Member Web Sites

    Science.gov (United States)

    Williams, Sarah C.

    2010-01-01

    The purpose of this study was to investigate how federated search engines are incorporated into the Web sites of libraries in the Association of Research Libraries. In 2009, information was gathered for each library in the Association of Research Libraries with a federated search engine. This included the name of the federated search service and…

  20. Using a Google Search Appliance (GSA) to search digital library collections: a case study of the INIS Collection Search

    OpenAIRE

    Savic, Dobrica

    2014-01-01

    Libraries are facing many challenges today. In addition to diminishing funding and increased user expectations, the use of classic library catalogues is becoming an additional challenge. Library users require fast and easy access to information resources, regardless of whether the format used is paper or electronic. Google search, with its speed and simplicity, has set a new standard for information retrieval which is hard to achieve with the previous generation of library search facilities. Put i...

  1. Developing a new search engine and browser for libraries to search and organize the World Wide Web library resources

    OpenAIRE

    Sreenivasulu, V.

    2000-01-01

    Internet Granthalaya urges worldwide advocates and targets the task of creating a new search engine and dedicated browser. Internet Granthalaya may be the ultimate search engine exclusively dedicated for every library use to search and organize the World Wide Web library resources

  2. The Role of Libraries in the Search for Educational Excellence.

    Science.gov (United States)

    Breivik, Patricia Senn

    1987-01-01

    Discusses ways in which libraries can make a major contribution to the search for educational excellence and urges librarians to make a concerted effort to capture the attention of educational leaders. Four references are listed. (MES)

  3. Understanding the foundation: the state of generalist search education in library schools as related to the needs of expert searchers in medical libraries.

    Science.gov (United States)

    Nicholson, Scott

    2005-01-01

    The paper explores the current state of generalist search education in library schools and considers that foundation with respect to the Medical Library Association's statement on expert searching. Syllabi from courses with significant searching components were examined from ten of the top library schools, as determined by the U.S. News & World Report rankings. Mixed methods were used, primarily quantitative bibliometric methods. The educational focus in these searching components was on understanding the generalist searching resources and typical users and on performing a reflective search through application of search strategies, controlled vocabulary, and logic appropriate to the search tool. There is a growing emphasis on Web-based search tools and a movement away from traditional set-based searching and toward free-text search strategies. While a core set of authors is used in these courses, no core set of readings is used. While library schools provide a strong foundation, future medical librarians still need to take courses that introduce them to the resources, settings, and users associated with medical libraries. In addition, as more emphasis is placed on Web-based search tools and free-text searching, instructors of the specialist medical informatics courses will need to focus on teaching traditional search methods appropriate for common tools in the medical domain.

  4. Ethnographic Methods in Academic Libraries: A Review

    Science.gov (United States)

    Ramsden, Bryony

    2016-01-01

    Research in academic libraries has recently seen an increase in the use of ethnographic-based methods to collect data. Primarily used to learn about library users and their interaction with spaces and resources, the methods are proving particularly useful to academic libraries. The data ethnographic methods retrieve is rich, context specific, and…

  5. Virtual screening methods as tools for drug lead discovery from large chemical libraries.

    Science.gov (United States)

    Ma, X H; Zhu, F; Liu, X; Shi, Z; Zhang, J X; Yang, S Y; Wei, Y Q; Chen, Y Z

    2012-01-01

    Virtual screening methods have been developed and explored as useful tools for searching drug lead compounds from chemical libraries, including large libraries that have become publicly available. In this review, we discuss new developments in exploring virtual screening methods for enhanced performance in searching large chemical libraries, their applications in screening libraries of ~1 million or more compounds in the last five years, the difficulties in their applications, and the strategies for further improving these methods.

  6. Usability Testing of a Large, Multidisciplinary Library Database: Basic Search and Visual Search

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2006-09-01

    Visual search interfaces have been shown by researchers to assist users with information search and retrieval. Recently, several major library vendors have added visual search interfaces or functions to their products. For public service librarians, perhaps the most critical area of interest is the extent to which visual search interfaces and text-based search interfaces support research. This study presents the results of eight full-scale usability tests of both the EBSCOhost Basic Search and Visual Search in the context of a large liberal arts university.

  7. Hunting for unexpected post-translational modifications by spectral library searching with tier-wise scoring.

    Science.gov (United States)

    Ma, Chun Wai Manson; Lam, Henry

    2014-05-02

    Discovering novel post-translational modifications (PTMs) to proteins and detecting specific modification sites on proteins is one of the last frontiers of proteomics. At present, hunting for post-translational modifications remains challenging in widely practiced shotgun proteomics workflows due to the typically low abundance of modified peptides and the greatly inflated search space as more potential mass shifts are considered by the search engines. Moreover, most popular search methods require that the user specifies the modification(s) for which to search; therefore, unexpected and novel PTMs will not be detected. Here a new algorithm is proposed to apply spectral library searching to the problem of open modification searches, namely, hunting for PTMs without prior knowledge of what PTMs are in the sample. The proposed tier-wise scoring method intelligently looks for unexpected PTMs by allowing mass-shifted peak matches but only when the number of matches found is deemed statistically significant. This allows the search engine to search for unexpected modifications while maintaining its ability to identify unmodified peptides effectively at the same time. The utility of the method is demonstrated using three different data sets, in which the numbers of spectrum identifications to both unmodified and modified peptides were substantially increased relative to a regular spectral library search as well as to another open modification spectral search method, pMatch.
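
    A loose illustration of the open-modification idea described above: unshifted peak matches are counted first, and matches that share a common mass offset are admitted only when enough of them co-occur. The tolerance, the crude count-based significance gate, and the toy peak lists are assumptions made for illustration; this is neither pMatch nor the authors' tier-wise scoring function.

    ```python
    # Toy open-modification matcher: count direct peak matches, then look for one
    # common mass offset (a candidate PTM mass) shared by enough peak pairs.
    from collections import Counter

    def match_with_possible_shift(query_peaks, library_peaks, tol=0.02, min_shifted=3):
        direct = sum(any(abs(q - p) <= tol for p in library_peaks) for q in query_peaks)
        # Count candidate offsets between every query/library peak pair on a 0.01 Da grid.
        offsets = Counter(round(q - p, 2) for q in query_peaks for p in library_peaks)
        shift, shifted = max(
            ((s, c) for s, c in offsets.items() if abs(s) > tol),
            key=lambda sc: sc[1],
            default=(0.0, 0),
        )
        accepted_shifted = shifted if shifted >= min_shifted else 0   # crude "significance" gate
        return {"direct": direct, "shift": shift, "shifted": accepted_shifted}

    library = [175.12, 263.14, 376.22, 489.31, 547.30]
    query = [175.12, 263.14, 456.25, 569.34, 627.33]   # last three peaks offset by +80.03 (toy shift)
    print(match_with_possible_shift(query, library))
    ```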

  8. Data Analysis Methods for Library Marketing

    Science.gov (United States)

    Minami, Toshiro; Kim, Eunja

    Our society is rapidly changing into an information society, where people's needs and requests for information access differ widely from person to person. A library's mission is to provide its users, or patrons, with the most appropriate information. Libraries have to know the profiles of their patrons in order to achieve such a role. The aim of library marketing is to develop methods based on library data, such as circulation records, book catalogs, book-usage data, and others. In this paper we first discuss the methodology and importance of library marketing. Then we demonstrate its usefulness through some examples of analysis methods applied to the circulation records of Kyushu University and Guacheon Library, and some implications obtained from the results of these methods. Our research is a beginning towards a future in which library marketing is an indispensable tool.

  9. Searching for a New Way to Reach Patrons: A Search Engine Optimization Pilot Project at Binghamton University Libraries

    Science.gov (United States)

    Rushton, Erin E.; Kelehan, Martha Daisy; Strong, Marcy A.

    2008-01-01

    Search engine use is one of the most popular online activities. According to a recent OCLC report, nearly all students start their electronic research using a search engine instead of the library Web site. Instead of viewing search engines as competition, however, librarians at Binghamton University Libraries decided to employ search engine…

  10. Manually Classifying User Search Queries on an Academic Library Web Site

    Science.gov (United States)

    Chapman, Suzanne; Desai, Shevon; Hagedorn, Kat; Varnum, Ken; Mishra, Sonali; Piacentine, Julie

    2013-01-01

    The University of Michigan Library wanted to learn more about the kinds of searches its users were conducting through the "one search" search box on the Library Web site. Library staff conducted two investigations. A preliminary investigation in 2011 involved the manual review of the 100 most frequently occurring queries conducted…

  11. Semantically Enriching the Search System of a Music Digital Library

    Science.gov (United States)

    de Juan, Paloma; Iglesias, Carlos

    Traditional search systems are usually based on keywords, a very simple and convenient mechanism to express a need for information. This is the most popular way of searching the Web, although it is not always an easy task to accurately summarize a natural language query in a few keywords. Working with keywords means losing the context, which is the only thing that can help us deal with ambiguity. This is the biggest problem of keyword-based systems. Semantic Web technologies seem a perfect solution to this problem, since they make it possible to represent the semantics of a given domain. In this chapter, we present three projects, Harmos, Semusici and Cantiga, whose aim is to provide access to a music digital library. We will describe two search systems, a traditional one and a semantic one, developed in the context of these projects and compare them in terms of usability and effectiveness.

  12. Optimization of search algorithms for a mass spectra library

    International Nuclear Information System (INIS)

    Domokos, L.; Henneberg, D.; Weimann, B.

    1983-01-01

    The SISCOM mass spectra library search is mainly an interpretative system producing a "hit list" of similar spectra based on six comparison factors. This paper deals with an extension of the system; the aim is exact identification (retrieval) of those reference spectra in the SISCOM hit list that correspond to the unknown compounds or components of the mixture. Thus, instead of a similarity measure, a decision (retrieval) function is needed to establish the identity of reference and unknown compounds by comparison of their spectra. To facilitate estimation of the weightings of the different variables in the retrieval function, pattern recognition algorithms were applied. Numerous statistical evaluations of three different library collections were made to check the quality of the data bases and to derive appropriate variables for the retrieval function. (Auth.)

  13. Using Google Search Appliance (GSA) to search digital library collections: A Case Study of the INIS Collection Search

    International Nuclear Information System (INIS)

    Savic, Dobrica

    2014-01-01

    Google Search has established a new standard for information retrieval which did not exist with previous generations of library search facilities. The INIS hosts one of the world’s largest collections of published information on the peaceful uses of nuclear science and technology. It offers on-line access to a unique collection of 3.6 million bibliographic records and 483,000 full texts of non-conventional (grey) literature. This large digital library collection suffered from most of the well-known shortcomings of the classic library catalogue. Searching was complex and complicated, it required training in Boolean logic, full-text searching was not an option, and response time was slow. An opportune moment to improve the system came with the retirement of the previous catalogue software and the adoption of GSA as an organization-wide search engine standard. INIS was quick to realize the potential of using such a well-known application to replace its on-line catalogue. This paper presents the advantages and disadvantages encountered during three years of GSA use. Based on specific INIS-based practice and experience, this paper also offers some guidelines on ways to improve classic collections of millions of bibliographic and full-text documents, while reaping multiple benefits, such as increased use, accessibility, usability, expandability and improving user search and retrieval experiences. (author)

  14. Fast parallel tandem mass spectral library searching using GPU hardware acceleration.

    Science.gov (United States)

    Baumgardner, Lydia Ashleigh; Shanmugam, Avinash Kumar; Lam, Henry; Eng, Jimmy K; Martin, Daniel B

    2011-06-03

    Mass spectrometry-based proteomics is a maturing discipline of biologic research that is experiencing substantial growth. Instrumentation has steadily improved over time with the advent of faster and more sensitive instruments collecting ever larger data files. Consequently, the computational process of matching a peptide fragmentation pattern to its sequence, traditionally accomplished by sequence database searching and more recently also by spectral library searching, has become a bottleneck in many mass spectrometry experiments. In both of these methods, the main rate-limiting step is the comparison of an acquired spectrum with all potential matches from a spectral library or sequence database. This is a highly parallelizable process because the core computational element can be represented as a simple but arithmetically intense multiplication of two vectors. In this paper, we present a proof of concept project taking advantage of the massively parallel computing available on graphics processing units (GPUs) to distribute and accelerate the process of spectral assignment using spectral library searching. This program, which we have named FastPaSS (for Fast Parallelized Spectral Searching), is implemented in CUDA (Compute Unified Device Architecture) from NVIDIA, which allows direct access to the processors in an NVIDIA GPU. Our efforts demonstrate the feasibility of GPU computing for spectral assignment, through implementation of the validated spectral searching algorithm SpectraST in the CUDA environment.
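
    The observation that the core computation is an arithmetically intense multiplication of two vectors can be illustrated with a small NumPy sketch: once spectra are binned into fixed-length unit vectors, scoring a query against an entire library is a single matrix-vector product, which is the kind of operation that maps well onto GPU hardware. The shapes and random data are placeholders; this is not the FastPaSS/CUDA implementation.

    ```python
    # Scoring one query spectrum against every library spectrum as a single
    # matrix-vector product over unit-normalised binned vectors (random toy data).
    import numpy as np

    n_library, n_bins = 10_000, 1024
    rng = np.random.default_rng(1)

    library_matrix = rng.random((n_library, n_bins), dtype=np.float32)
    library_matrix /= np.linalg.norm(library_matrix, axis=1, keepdims=True)

    query = rng.random(n_bins, dtype=np.float32)
    query /= np.linalg.norm(query)

    scores = library_matrix @ query            # one dot product per library spectrum
    top5 = np.argsort(scores)[-5:][::-1]       # top-5 candidate matches
    print(top5, scores[top5])
    ```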

  15. Investigating User Search Tactic Patterns and System Support in Using Digital Libraries

    Science.gov (United States)

    Joo, Soohyung

    2013-01-01

    This study aims to investigate users' search tactic application and system support in using digital libraries. A user study was conducted with sixty digital library users. The study was designed to answer three research questions: 1) How do users engage in a search process by applying different types of search tactics while conducting different…

  16. Discovering How Students Search a Library Web Site: A Usability Case Study.

    Science.gov (United States)

    Augustine, Susan; Greene, Courtney

    2002-01-01

    Discusses results of a usability study at the University of Illinois Chicago that investigated whether Internet search engines have influenced the way students search library Web sites. Results show students use the Web site's internal search engine rather than navigating through the pages; have difficulty interpreting library terminology; and…

  17. Systematic reviews in Library and Information Science: analysis and evaluation of the search process

    Directory of Open Access Journals (Sweden)

    José Antonio Salvador-Oliván

    2018-05-01

    Objective: An essential component of a systematic review is the development and execution of a literature search to identify all available and relevant published studies. The main objective of this study is to analyse and evaluate whether systematic reviews in Library and Information Science (LIS) provide complete information on all the elements that make up the search process. Methods: A search was run in the WOS, Scopus, LISTA, Library Science Database, and Medline databases, as well as a wiki, covering publications from 2000 to February 2017, in order to find and identify systematic reviews. The search was designed to find those records whose titles included the words "systematic review" and/or "meta-analysis". A list was created with the twelve items recommended by the main publication guides, to assess the degree of information provided on each of them. Results and conclusions: Most of the reviews in LIS are created by information professionals. Of the 94 systematic reviews selected for analysis, it was found that only 4.3% provided complete reporting on the search method. The most frequently included item is the name of the database (95.6%) and the least frequently included is the name of the host (35.8%). It is necessary to improve and complete the information about the search processes in the full reports of LIS systematic reviews, in order to improve reproducibility, updating, and quality assessment.

  18. Digging in the Mines: Mining Course Syllabi in Search of the Library

    Directory of Open Access Journals (Sweden)

    Keven M. Jeffery

    2017-03-01

    Objective - The purpose of this study was to analyze a syllabus collection at a large, public university to identify how the university's library was represented within the syllabi. Specifically, this study was conducted to see which library spaces, resources, and people were included in course syllabi and to identify possible opportunities for library engagement. Methods - A text analysis software called QDA Miner was used to search using keywords and analyze 1,226 syllabi across eight colleges at both the undergraduate and graduate levels from the Fall 2014 semester. Results - Of the 1,226 syllabi analyzed, 665 did not mention the library's services, spaces, or resources, nor did they mention projects requiring research. Of the remaining 561, the text analysis revealed that the highest relevant keyword matches were related to Citation Management (286), Resource Intensive Projects (262), and Library Spaces (251). Relationships between categories were mapped using Sorensen's coefficient of similarity. Library Space and Library Resources (coefficient = .500) and Library Space and Library Services (coefficient = .457) were most likely to appear in the same syllabi, with Citation Management and Resource Intensive Projects (coefficient = .445) the next most likely to co-occur. Conclusion - The text analysis proved to be effective at identifying how and where the library was mentioned in course syllabi. This study revealed instructional and research engagement opportunities for the library's liaisons, and it revealed the ways in which the library's space was presented to students. Additionally, the faculty's research expectations for students in their disciplines were better understood.

  19. Using a Google Search Appliance (GSA) to search digital library collections: a case study of the INIS Collection Search

    Directory of Open Access Journals (Sweden)

    Dobrica Savic

    2014-05-01

    The International Nuclear Information System (INIS) hosts one of the world’s largest collections of published information on the peaceful uses of nuclear science and technology. It offers online access to a unique collection of 3.6 million bibliographic records and 320,000 full-texts of non-conventional (grey) literature. This large digital library collection suffered from most of the well-known shortcomings of the classic library catalogue. Searching was complex and complicated, required some training in using Boolean logic, full-text searching was not an option, and the response time was slow. An opportune moment came with the retirement of the previous catalogue software and with the adoption of Google Search Appliance (GSA) as an organization-wide search engine standard. INIS was quick to realize the great potential of using such a well-known application as a replacement for its online catalogue, and this paper presents the advantages and disadvantages encountered during three years of GSA use. Based on specific INIS-based practice and experience, this paper also offers some guidelines on ways to improve classic collections of millions of bibliographic and full-text documents, while achieving multiple benefits such as increased use, accessibility, usability, and expandability, and improving the user search and retrieval experience.

  20. Undergraduate Students with Strong Tendencies Towards Critical Thinking Experience Less Library Anxiety. A Review of: Kwon, Nahyun. “A Mixed-Methods Investigation of the Relationship between Critical Thinking and Library Anxiety among Undergraduate Students in their Information Search Process.” College & Research Libraries 69.2 (2008): 117-31.

    Directory of Open Access Journals (Sweden)

    Cari Merkley

    2009-12-01

    Objective – To investigate the nature of the association between a student’s critical thinking disposition and the extent to which they suffer from library anxiety. Design – Standardized quantitative survey instruments and a qualitative content analysis of student essays. Setting – A state (publicly funded) research university located in the southeast United States. Subjects – 137 undergraduate students enrolled in the Library and Research Skills course. Methods – Undergraduate students enrolled in the three-credit course Library and Research Skills during the spring 2006 semester were invited to participate in the study. Of 180 students registered in the course, 137 volunteered to take part. Data collection took place in the first two weeks of the semester. Participants were asked to complete two standardized survey instruments: the California Critical Thinking Disposition Inventory (CCTDI) and the Library Anxiety Scale (LAS). The purpose of the CCTDI is to “measure a person’s disposition to use critical thinking” (119). The instrument consists of seven scales: “truth-seeking”, “open-mindedness”, “analyticity”, “systematicity”, “critical thinking self-confidence”, “inquisitiveness”, and “maturity” (119). “Truth-seeking” is a commitment to seeking answers even if the process proves difficult or reveals information outside of one’s belief system, “systematicity” is defined as an organized approach to problem solving, and “maturity” is the ability to make “reflective decisions when facing ill-structured problem situations” (119). “Analyticity” refers to a subject’s ability to anticipate possible outcomes, “open-mindedness” to being open to different points of view, “critical thinking self-confidence” to a belief in one’s own critical thinking skills, and “inquisitiveness” to “intellectual curiosity” (119). Participants scored 75 items using a six

  1. The Impact of On-line Searching on Document Supply Services at Individual Libraries

    Science.gov (United States)

    Hosono, Kimio; Tanaka, Isao; Fukazawa, Yoshiko

    As the use of on-line searching services has progressed in libraries, requests for primary materials have increased much more than before. For the purpose of clarifying this trend and countermeasures against it, a survey by questionnaire was conducted in 1985. The respondents, from a total of 112 libraries, are as follows: 60 industrial libraries, 41 academic libraries, and 11 libraries of research institutes and laboratories. It was shown that industrial libraries have received more requests for primary materials, mostly resulting from on-line searching, while the requests have not increased remarkably in academic libraries. Regardless of the type of library, almost all libraries cannot fully meet the requests with their own collections, so industrial libraries have to rely on external information services and academic libraries utilize the interlibrary loan system. Requests are sent on-line from industrial libraries, and by mail from academic libraries. In fact, libraries are not likely to review their material-collecting policies. It is therefore urgent to establish a system which enables greater use of external information services and the interlibrary loan system.

  2. Paths of discovery: Comparing the search effectiveness of EBSCO Discovery Service, Summon, Google Scholar, and conventional library resources.

    Directory of Open Access Journals (Sweden)

    Müge Akbulut

    2015-09-01

    It is becoming hard for users to select significant sources among many others as the number of scientific publications increases (Henning and Gunn, 2012). Search engines that use cloud computing methods, such as Google, can successfully list related documents that answer user requirements (Johnson, Levine and Smith, 2009). In order to meet users’ increasing demands, libraries started to use systems which enable users to access printed and electronic sources through a single interface. This study uses quantitative and qualitative methods to compare search effectiveness between the Serial Solutions Summon and EBSCO Discovery Service (EDS) web discovery tools, Google Scholar (GS), and conventional library databases among users from Bucknell University and Illinois Wesleyan University.

  3. Efficient searching in meshfree methods

    Science.gov (United States)

    Olliff, James; Alford, Brad; Simkins, Daniel C.

    2018-04-01

    Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods have an added computational cost over FEM that comes from at least two sources: the increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods; to discuss available techniques for computing the various adjacency graphs; to propose a new search algorithm and data structure; and finally to compare the memory and run-time performance of the methods.
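
    The adjacency problem described above can be made concrete with a small baseline: for each particle, find every particle lying within its kernel support radius using a k-d tree radius query. The uniform support radius and random particle cloud are assumptions for illustration; this is a standard baseline, not the new search algorithm or data structure the authors propose.

    ```python
    # Baseline adjacency construction for meshfree kernels: a k-d tree radius
    # query returns, for each particle, the particles within its support radius.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    particles = rng.random((5000, 2))          # 2-D particle cloud
    support_radius = 0.05                      # uniform kernel support size (simplification)

    tree = cKDTree(particles)
    # adjacency[i] = indices of particles within the support radius of particle i
    adjacency = tree.query_ball_point(particles, r=support_radius)

    avg_neighbours = np.mean([len(a) for a in adjacency])
    print(f"average neighbours per particle: {avg_neighbours:.1f}")
    ```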

  4. Epsilon-Q: An Automated Analyzer Interface for Mass Spectral Library Search and Label-Free Protein Quantification.

    Science.gov (United States)

    Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki

    2017-12-01

    Mass spectrometry (MS) is a widely used proteome analysis tool for biomedical science. In an MS-based bottom-up proteomic approach to protein identification, sequence database (DB) searching has been routinely used because of its simplicity and convenience. However, searching a sequence DB with multiple variable modification options can increase processing time and false-positive errors in large and complicated MS data sets. Spectral library searching is an alternative solution, avoiding the limitations of sequence DB searching and allowing the detection of more peptides with high sensitivity. Unfortunately, this technique has lower proteome coverage, resulting in limitations in the detection of novel and whole peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which uses multiple reference and simulated spectral library searches to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance the functions of both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins by supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec search method, called the Epsilon-Q system, it shows a synergistic function by outperforming other sequence DB search engines in identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.

  5. Development and tuning of an original search engine for patent libraries in medicinal chemistry.

    Science.gov (United States)

    Pasche, Emilie; Gobeill, Julien; Kreim, Olivier; Oezdemir-Zaech, Fatma; Vachon, Therese; Lovis, Christian; Ruch, Patrick

    2014-01-01

    The large increase in the size of patent collections has led to the need for efficient search strategies. However, the development of advanced text-mining applications dedicated to patents in the biomedical field remains rare, in particular applications addressing the needs of the pharmaceutical & biotech industry, which intensively uses patent libraries for competitive intelligence and drug development. We describe here the development of an advanced retrieval engine to search information in patent collections in the field of medicinal chemistry. We investigate and combine different strategies and evaluate their respective impact on the performance of the search engine applied to various search tasks, which cover the putatively most frequent search behaviours of intellectual property officers in medicinal chemistry: 1) a prior art search task; 2) a technical survey task; and 3) a variant of the technical survey task, sometimes called a known-item search task, where a single patent is targeted. The optimal tuning of our engine resulted in a top-precision of 6.76% for the prior art search task, 23.28% for the technical survey task and 46.02% for the variant of the technical survey task. We observed that co-citation boosting was an appropriate strategy to improve prior art search tasks, while IPC classification of queries improved retrieval effectiveness for technical survey tasks. Surprisingly, the use of the full body of the patent was always detrimental to search effectiveness. It was also observed that normalizing biomedical entities using curated dictionaries had simply no impact on the search tasks we evaluated. The search engine was finally implemented as a web-application within Novartis Pharma. The application is briefly described in the report. We have presented the development of a search engine dedicated to patent search, based on state-of-the-art methods applied to patent corpora. We have shown that a proper tuning of the system to adapt to the various search tasks

  6. Google vs. the Library (Part II): Student Search Patterns and Behaviors When Using Google and a Federated Search Tool

    Science.gov (United States)

    Georgas, Helen

    2014-01-01

    This study examines the information-seeking behavior of undergraduate students within a research context. Student searches were recorded while the participants used Google and a library (federated) search tool to find sources (one book, two articles, and one other source of their choosing) for a selected topic. The undergraduates in this study…

  7. Libraries in Online Elementary Schools: A Mixed-Methods Study

    Science.gov (United States)

    Hibbard, Laura; Franklin, Teresa

    2015-01-01

    School libraries serve an important role; however, elementary students who attend schools online typically do not have a school library. This study followed an online school's inaugural year in instituting a library. A mixed methods approach examined data from focus groups, interviews, surveys, library-use records and oral reading fluency scores.…

  8. Paths of Discovery: Comparing the Search Effectiveness of EBSCO Discovery Service, Summon, Google Scholar, and Conventional Library Resources

    Science.gov (United States)

    Asher, Andrew D.; Duke, Lynda M.; Wilson, Suzanne

    2013-01-01

    In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serial Solutions Summon, EBSCO Discovery Service, Google Scholar, and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students' usage of these tools. Regardless of the…

  9. Virtual Reference Services through Web Search Engines: Study of Academic Libraries in Pakistan

    Directory of Open Access Journals (Sweden)

    Rubia Khan

    2017-03-01

    Web search engines (WSE) are powerful and popular tools in the field of information service management. This study is an attempt to examine the impact and usefulness of web search engines in providing virtual reference services (VRS) within academic libraries in Pakistan. The study also attempts to investigate the relevant expertise and skills of library professionals in providing digital reference services (DRS) efficiently using web search engines. The methodology used in this study is quantitative in nature. The data were collected from fifty public and private sector universities in Pakistan using a structured questionnaire. Microsoft Excel and SPSS were used for data analysis. The study concludes that web search engines are commonly used by librarians to help users (especially research scholars) by providing digital reference services. The study also finds a positive correlation between use of web search engines and the quality of digital reference services provided to library users. It is concluded that although search engines have increased the expectations of users and are strong competitors to a library’s reference desk, they are not an alternative to reference service. Findings reveal that search engines pose numerous challenges for librarians, and the study also attempts to bring together possible remedial measures. This study is useful for library professionals to understand the importance of search engines in providing VRS. The study also provides an intellectual comparison among different search engines, their capabilities, limitations, challenges, and opportunities to provide VRS effectively in libraries.

  10. MetaSearching and Beyond: Implementation Experiences and Advice from an Academic Library

    Directory of Open Access Journals (Sweden)

    Gail Herrera

    2007-06-01

    In March 2003 the University of Mississippi Libraries made our MetaSearch tool publicly available. After a year of working with this product and integrating it into the library Web site, a wide variety of libraries interested in our implementation process and experiences began to call. Libraries interested in this product have included consortia, public, and academic libraries in the United States, Mexico, and Europe. This article was written in an effort to share the recommendations and concerns given. Much of the advice is general and could be applied to many of the MetaSearch tools available. Google Scholar and other open Web initiatives that could impact the future of MetaSearching are also discussed.

  11. Making the Most of Libraries in the Search for Academic Excellence.

    Science.gov (United States)

    Breivik, Patricia Senn

    1987-01-01

    The role of libraries in the search for quality education was addressed in the Carnegie Foundation's report, "College," and at the first higher education conference on academic libraries. Information literacy and policy, campus organizational issues, and programs in economic development support, active learning, and faculty development…

  12. TRANSFORMATION OF INNOVATIVE AND METHODICAL ACTIVITY OF THE UNIVERSITY LIBRARY

    Directory of Open Access Journals (Sweden)

    О. О. Скаченко

    2015-09-01

    The purpose of our article is to analyse the innovative and methodical activity of university libraries, which are developing as information centers today. The subject of the research is the methodical, publishing and innovative activity of the Scientific Library of the Kiev National University of Culture and Arts. We observe the process of introducing technological innovations into library service, which allows the library to improve the quality of the services provided to readers. New projects are being developed, cultural, educational and information services are being extended, and work methods are improving. The structure of the library is also being improved, with new sectors being created. The main finding of the work consists in the systematization of the various aspects and directions of the library's innovative activity. The research findings have practical value for library workers, university teachers, students, library users, and anyone who is interested in library science.

  13. Library Catalogue Users Are Influenced by Trends in Web Searching Search Strategies. A review of: Novotny, Eric. “I Don’t Think I Click: A Protocol Analysis Study of Use of a Library Online Catalog in the Internet Age.” College & Research Libraries 65.6 (Nov. 2004): 525-37.

    Directory of Open Access Journals (Sweden)

    Susan Haigh

    2006-09-01

    Objective – To explore how Web-savvy users think about and search an online catalogue. Design – Protocol analysis study. Setting – Academic library (Pennsylvania State University Libraries). Subjects – Eighteen users (17 students, 1 faculty member) of an online public access catalog, divided into two groups of nine first-time and nine experienced users. Method – The study team developed five tasks that represented a range of activities commonly performed by library users, such as searching for a specific item, identifying a library location, and requesting a copy. Seventeen students and one faculty member, divided evenly between novice and experienced searchers, were recruited to “think aloud” through the performance of the tasks. Data were gathered through audio recordings, screen capture software, and investigator notes. The time taken for each task was recorded, and investigators rated task completion as “successful,” “partially successful,” “fail,” or “search aborted.” After the searching session, participants were interviewed to clarify their actions and provide further commentary on the catalogue search. Main results – Participants in both test groups were relatively unsophisticated subject searchers. They made minimal use of Boolean operators, and tended not to repair failed searches by rethinking the search vocabulary and using synonyms. Participants did not have a strong understanding of library catalogue contents or structure and showed little curiosity in developing an understanding of how to utilize the catalogue. Novice users were impatient both in choosing search options and in evaluating their search results. They assumed search results were sorted by relevance, and thus would not typically browse past the initial screen. They quickly followed links, fearlessly tried different searches and options, and rapidly abandoned false trails. Experienced users were more effective and efficient searchers than

  14. Performance of Ruecking's Word-compression Method When Applied to Machine Retrieval from a Library Catalog

    Directory of Open Access Journals (Sweden)

    Ben-Ami Lipetz

    1969-12-01

    Full Text Available F. H. Ruecking's word-compression algorithm for retrieval of bibliographic data from computer stores was tested for performance in matching user-supplied, unedited bibliographic data to the bibliographic data contained in a library catalog. The algorithm was tested by manual simulation, using data derived from 126 case studies of successful manual searches of the card catalog at Sterling Memorial Library, Yale University. The algorithm achieved 70% recall in comparison to conventional searching. Its acceptability as a substitute for conventional catalog searching methods is questioned unless recall performance can be improved, either by use of the algorithm alone or in combination with other algorithms.

  15. Improving e-book access via a library-developed full-text search tool.

    Science.gov (United States)

    Foust, Jill E; Bergen, Phillip; Maxeiner, Gretchen L; Pawlowski, Peter N

    2007-01-01

    This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single "Google-style" query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products.

  16. Improving e-book access via a library-developed full-text search tool*

    Science.gov (United States)

    Foust, Jill E.; Bergen, Phillip; Maxeiner, Gretchen L.; Pawlowski, Peter N.

    2007-01-01

    Purpose: This paper reports on the development of a tool for searching the contents of licensed full-text electronic book (e-book) collections. Setting: The Health Sciences Library System (HSLS) provides services to the University of Pittsburgh's medical programs and large academic health system. Brief Description: The HSLS has developed an innovative tool for federated searching of its e-book collections. Built using the XML-based Vivísimo development environment, the tool enables a user to perform a full-text search of over 2,500 titles from the library's seven most highly used e-book collections. From a single “Google-style” query, results are returned as an integrated set of links pointing directly to relevant sections of the full text. Results are also grouped into categories that enable more precise retrieval without reformulation of the search. Results/Evaluation: A heuristic evaluation demonstrated the usability of the tool and a web server log analysis indicated an acceptable level of usage. Based on its success, there are plans to increase the number of online book collections searched. Conclusion: This library's first foray into federated searching has produced an effective tool for searching across large collections of full-text e-books and has provided a good foundation for the development of other library-based federated searching products. PMID:17252065

  17. Encounters with the OPAC: On-Line Searching in Public Libraries.

    Science.gov (United States)

    Slone, Deborah J.

    2000-01-01

    Reports on a qualitative study that explored strategies and behaviors of public library users during interaction with an online public access catalog, and users' confidence in finding needed information online. Discusses results of questionnaires, interviews, and observations that examined unknown-item searches, area searches, and known-item…

  18. An Analysis of the Impact of Federated Search Products on Library Instruction Using the ACRL Standards

    Science.gov (United States)

    Cox, Christopher

    2006-01-01

    Federated search products are becoming more and more prevalent in academic libraries. What are the implications of this phenomenon for instruction librarians? An analysis of federated search products using the "Information Literacy Competency Standards for Higher Education" and a thorough review of the literature offer insight concerning whether…

  19. Lagos Journal of Library and Information Science: Advanced Search

    African Journals Online (AJOL)

  20. Statistic methods for searching inundated radioactive entities

    International Nuclear Information System (INIS)

    Dubasov, Yu.V.; Krivokhatskij, A.S.; Khramov, N.N.

    1993-01-01

    The problem of searching for a flooded radioactive object within a given area is considered. Various models for plotting the search route are discussed. It is shown that a spiral route through random points starting from the centre of the examined area is the most efficient. It is concluded that, when searching for flooded radioactive objects, it is advisable to use multidimensional statistical methods of classification

  1. Digital Libraries--Methods and Applications

    Science.gov (United States)

    Huang, Kuo Hung, Ed.

    2011-01-01

    Digital library is commonly seen as a type of information retrieval system which stores and accesses digital content remotely via computer networks. However, the vision of digital libraries is not limited to technology or management, but user experience. This book is an attempt to share the practical experiences of solutions to the operation of…

  2. Novel search algorithms for a mid-infrared spectral library of cotton contaminants.

    Science.gov (United States)

    Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A

    2008-06-01

    During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify
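
    The standard algorithms listed above are simple vector comparisons between a query spectrum and each library spectrum, and the voting schemes aggregate their individual rankings. A minimal sketch of three of the metrics and a frequency-based vote is given below; the spectra and category names are synthetic placeholders, not data from the USDA library, and the exact weighting of the paper's group voting scheme is not reproduced.

```python
import numpy as np

def dot_product(q, r):
    """Cosine-style dot product between spectra (higher = more similar)."""
    return float(np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)))

def correlation(q, r):
    """Pearson correlation between spectra (higher = more similar)."""
    return float(np.corrcoef(q, r)[0, 1])

def sum_abs_diff(q, r):
    """Sum of absolute differences (lower = more similar), returned as a negative score."""
    return -float(np.sum(np.abs(q - r)))

def rank_library(query, library, metric):
    """Return library entry names sorted from most to least similar under one metric."""
    scores = {name: metric(query, spectrum) for name, spectrum in library.items()}
    return sorted(scores, key=scores.get, reverse=True)

def frequency_vote(query, library, metrics, top_n=3):
    """Count how often each library entry appears in the top-n of each metric's ranking."""
    votes = {name: 0 for name in library}
    for metric in metrics:
        for name in rank_library(query, library, metric)[:top_n]:
            votes[name] += 1
    return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 200-point spectra standing in for leaf and stem library entries.
    library = {f"leaf_{i}": rng.random(200) for i in range(3)}
    library.update({f"stem_{i}": rng.random(200) for i in range(3)})
    query = library["leaf_1"] + 0.05 * rng.random(200)   # a noisy copy of one entry
    print(frequency_vote(query, library, [dot_product, correlation, sum_abs_diff]))
```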

  3. Analytical Methods in Search Theory

    Science.gov (United States)

    1979-11-01

  4. Library

    OpenAIRE

    Dulaney, Ronald E. Jr.

    1997-01-01

    This study began with the desire to design a public town library of the future and became a search for an inkling of what is essential to Architecture. It is murky and full of contradictions. It asks more than it proposes, and the traces of its windings are better ordered through collage than logical synthesis. This study is neither a thesis nor a synthesis. When drawing out the measure of this study it may be beneficial to state what it attempts to place at the ...

  5. Using internet search engines and library catalogs to locate toxicology information.

    Science.gov (United States)

    Wukovitz, L D

    2001-01-12

    The increasing importance of the Internet demands that toxicologists become acquainted with its resources. To find information, researchers must be able to effectively use Internet search engines, directories, subject-oriented websites, and library catalogs. The article will explain these resources, explore their benefits and weaknesses, and identify skills that help the researcher to improve search results and critically evaluate sources for their relevancy, validity, accuracy, and timeliness.

  6. Method for construction of normalized cDNA libraries

    Science.gov (United States)

    Soares, Marcelo B.; Efstratiadis, Argiris

    1998-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to appropriate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library. This invention also provides normalized cDNA libraries generated by the above-described method and uses of the generated libraries.

  7. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    Science.gov (United States)

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web based and library-based online information retrieval systems. The content, ease of use, and required search…

  8. Many Libraries Have Gone to Federated Searching to Win Users Back from Google. Is It Working?

    Science.gov (United States)

    King, Douglas

    2008-01-01

    In the last issue, this journal asked a question on many librarians' minds, and it was pleased with the depth and variety of responses. As suggested by this journal editorial board member Oliver Pesch, readers were asked, "Many libraries have gone to federated searching to win users back from Google. Is it working?" Respondents approached the…

  9. Delivering a MOOC for literature searching in health libraries: evaluation of a pilot project.

    Science.gov (United States)

    Young, Gil; McLaren, Lisa; Maden, Michelle

    2017-12-01

    In an era when library budgets are being reduced, Massive Open Online Courses (MOOCs) can offer practical and viable alternatives to the delivery of costly face-to-face training courses. In this study, guest writers Gil Young from Health Care Libraries Unit - North, Lisa McLaren from Brighton and Sussex Medical School and Liverpool University PhD student Michelle Maden describe the outcomes of a funded project they led to develop a MOOC to deliver literature search training for health librarians. Funded by Health Education England, the MOOC was developed by the Library and Information Health Network North West as a pilot project that ran for six weeks. In particular, the MOOC target audience is discussed, how content was developed for the MOOC, promotion and participation, cost-effectiveness, evaluation, the impact of the MOOC and recommendations for future development. H. S. © 2017 Health Libraries Group.

  10. Phonetic search methods for large speech databases

    CERN Document Server

    Moyal, Ami; Tetariy, Ella; Gishri, Michal

    2013-01-01

    “Phonetic Search Methods for Large Databases” focuses on Keyword Spotting (KWS) within large speech databases. The brief will begin by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It will then continue by highlighting the various market segments in need of KWS solutions, as well as the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus will be on the Phonetic Search method and its efficient implementation. This will include a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors’ own research which entails a comparative analysis of the Phonetic Search method which includes algorithmic details. This brief is useful for resea...

  11. Bayesian approach to peak deconvolution and library search for high resolution gas chromatography - Mass spectrometry.

    Science.gov (United States)

    Barcaru, A; Mol, H G J; Tienstra, M; Vivó-Truyols, G

    2017-08-29

    A novel probabilistic Bayesian strategy is proposed to resolve highly coeluting peaks in high-resolution GC-MS (Orbitrap) data. As opposed to a deterministic approach, we propose to solve the problem probabilistically, using a complete pipeline. First, the retention time(s) for a (probabilistic) number of compounds for each mass channel are estimated. The statistical dependency between m/z channels was implied by including penalties in the model objective function. Second, the Bayesian Information Criterion (BIC) is used as Occam's razor for the probabilistic assessment of the number of components. Third, a probabilistic set of resolved spectra, and their associated retention times, are estimated. Finally, a probabilistic library search is proposed, computing the spectral match with a high-resolution library. More specifically, a correlative measure was used that included the uncertainties in the least-squares fitting, as well as the probability of the different proposals for the number of compounds in the mixture. The method was tested on simulated high-resolution data, as well as on a set of pesticides injected in a GC-Orbitrap with high coelution. The proposed pipeline was able to detect accurately the retention times and the spectra of the peaks. In our case, with an extremely high degree of coelution, 5 out of the 7 compounds present in the selected region of interest were correctly assessed. Finally, the comparison with the classical methods of deconvolution (i.e., MCR and AMDIS) indicates a better performance of the proposed algorithm in terms of the number of correctly resolved compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
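
    The BIC step of the pipeline can be illustrated in isolation. The sketch below is not the authors' algorithm: it simply treats a simulated elution profile as a sample of retention times, fits Gaussian mixtures with an increasing number of components using scikit-learn's GaussianMixture, and lets BIC choose the component count.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulate retention times drawn from two coeluting peaks (toy data, not GC-Orbitrap output).
rng = np.random.default_rng(1)
retention_times = np.concatenate([
    rng.normal(12.30, 0.05, 800),   # compound A
    rng.normal(12.42, 0.05, 500),   # compound B, strongly coeluting with A
]).reshape(-1, 1)

# Fit mixtures with 1..5 components and record the BIC of each fit.
bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(retention_times)
    bics[k] = gm.bic(retention_times)

best_k = min(bics, key=bics.get)   # Occam's razor: lowest BIC wins
print("BIC per component count:", {k: round(v, 1) for k, v in bics.items()})
print("Selected number of coeluting components:", best_k)
```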

  12. An introduction to harmony search optimization method

    CERN Document Server

    Wang, Xiaolei; Zenger, Kai

    2014-01-01

    This brief provides a detailed introduction, discussion and bibliographic review of the nature-inspired optimization algorithm called Harmony Search. It uses a large number of simulation results to demonstrate the advantages of Harmony Search and its variants and also their drawbacks. The authors show how weaknesses can be amended by hybridization with other optimization methods. The Harmony Search Method with Applications will be of value to researchers in computational intelligence in demonstrating the state of the art of research on an algorithm of current interest. It also helps researche

  13. The Searchbench - Combining Sentence-semantic, Full-text and Bibliographic Search in Digital Libraries

    Directory of Open Access Journals (Sweden)

    Ulrich Schäfer

    2013-02-01

    Full Text Available We describe a novel approach to precise searching in the full content of digital libraries. The Searchbench (for search workbench) is based on sentence-wise syntactic and semantic natural language processing (NLP) of both born-digital and scanned publications in PDF format. The term born-digital means natively digital, i.e. prepared electronically using typesetting systems such as LaTeX, OpenOffice, and the like. In the Searchbench, queries can be formulated as (possibly underspecified) statements, consisting of simple subject-predicate-object constructs such as ‘algorithm improves word alignment’. This reduces the number of false hits in large document collections when the search words happen to appear close to each other, but are not semantically related. The method also abstracts from passive voice and predicate synonyms. Moreover, negated statements can be excluded from the search results, and negated antonym predicates again count as synonyms (e.g. not include = exclude). In the Searchbench, a sentence-semantic search can be combined with search filters for classical full-text, bibliographic metadata and automatically computed domain terms. Auto-suggest fields facilitate text input. Queries can be bookmarked or emailed. Furthermore, a novel citation browser in the Searchbench allows graphical navigation in citation networks. These have been extracted automatically from metadata and paper texts. The citation browser displays short phrases from citation sentences at the edges in the citation graph and thus allows students and researchers to quickly browse publications and immerse themselves in a new research field. By clicking on a citation edge, the original citation sentence is shown in context, and optionally also in the original PDF layout. To showcase the usefulness of our research, we have applied it to a collection of currently approx. 25,000 open access research papers in the field of computational linguistics and language technology, the ACL

  14. Library Search Prefilters for Vehicle Manufacturers to Assist in the Forensic Examination of Automotive Paints.

    Science.gov (United States)

    Lavine, Barry K; White, Collin G; Ding, Tao

    2018-03-01

    Pattern recognition techniques have been applied to the infrared (IR) spectral libraries of the Paint Data Query (PDQ) database to differentiate between nonidentical but similar IR spectra of automotive paints. To tackle the problem of library searching, search prefilters were developed to identify the vehicle make from IR spectra of the clear coat, surfacer-primer, and e-coat layers. To develop these search prefilters with the appropriate degree of accuracy, IR spectra from the PDQ database were preprocessed using the discrete wavelet transform to enhance subtle but significant features in the IR spectral data. Wavelet coefficients characteristic of vehicle make were identified using a genetic algorithm for pattern recognition and feature selection. Search prefilters to identify automotive manufacturer through IR spectra obtained from a paint chip recovered at a crime scene were developed using 1596 original manufacturer's paint systems spanning six makes (General Motors, Chrysler, Ford, Honda, Nissan, and Toyota) within a limited production year range (2000-2006). Search prefilters for vehicle manufacturer that were developed as part of this study were successfully validated using IR spectra obtained directly from the PDQ database. Information obtained from these search prefilters can serve to quantify the discrimination power of original automotive paint encountered in casework and further efforts to succinctly communicate trace evidential significance to the courts.
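
    The preprocessing and classification stages can be sketched as follows. The example uses PyWavelets for the discrete wavelet transform but substitutes a univariate filter and a logistic-regression classifier for the genetic-algorithm feature selection and pattern-recognition method described in the paper; the spectra and the two manufacturer classes are synthetic, not PDQ data.

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def dwt_features(spectrum, wavelet="db4", level=4):
    """Concatenate discrete wavelet transform coefficients of an IR spectrum."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return np.concatenate(coeffs)

# Synthetic stand-ins for clear-coat IR spectra from two vehicle makes.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 512)
make_a = lambda: np.exp(-((x - 0.30) ** 2) / 0.002) + 0.05 * rng.standard_normal(512)
make_b = lambda: np.exp(-((x - 0.35) ** 2) / 0.002) + 0.05 * rng.standard_normal(512)
spectra = [make_a() for _ in range(40)] + [make_b() for _ in range(40)]
labels = np.array([0] * 40 + [1] * 40)          # 0 = make A, 1 = make B

features = np.array([dwt_features(s) for s in spectra])

# Keep the most class-discriminating wavelet coefficients, then classify the make.
prefilter = make_pipeline(SelectKBest(f_classif, k=30),
                          LogisticRegression(max_iter=1000))
print("Cross-validated accuracy:", cross_val_score(prefilter, features, labels, cv=5).mean())
```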

  15. Evaluating ranking methods on heterogeneous digital library collections

    CERN Document Server

    Canévet, Olivier; Marian, Ludmila; Chonavel, Thierry

    In the frame of research in particle physics, CERN has been developing its own web-based software, Invenio, to run the digital library of all the documents related to CERN and fundamental physics. The documents (articles, photos, news, theses, ...) can be retrieved through a search engine. The results matching the query of the user can be displayed in several ways: sorted by latest first, author, title and also ranked by word similarity. The purpose of this project is to study and implement a new ranking method in Invenio: distributed-ranking (D-Rank). This method aims at aggregating several ranking scores coming from different ranking methods into a new score. In addition to query-related scores such as word similarity, the goal of the work is to take into account non-query-related scores such as citations, journal impact factor and in particular scores related to the document access frequency in the database. The idea is that for two equally query-relevant documents, if one has been more downloaded for inst...
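
    The core idea of D-Rank, aggregating query-related and non-query-related scores into a single ranking, can be sketched as a weighted combination of normalized scores. The score names and weights below are illustrative assumptions, not Invenio's actual configuration.

```python
from typing import Dict, List

def normalize(scores: Dict[str, float]) -> Dict[str, float]:
    """Scale a score dictionary to the [0, 1] range so different scores are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def aggregate_rank(score_sets: Dict[str, Dict[str, float]],
                   weights: Dict[str, float]) -> List[str]:
    """Combine several per-document score sets into a single ranked document list."""
    combined: Dict[str, float] = {}
    for name, scores in score_sets.items():
        for doc, s in normalize(scores).items():
            combined[doc] = combined.get(doc, 0.0) + weights[name] * s
    return sorted(combined, key=combined.get, reverse=True)

# Illustrative query-related and non-query-related scores for three documents.
score_sets = {
    "word_similarity": {"doc1": 0.91, "doc2": 0.88, "doc3": 0.40},
    "citations":       {"doc1": 12,   "doc2": 150,  "doc3": 3},
    "downloads":       {"doc1": 40,   "doc2": 900,  "doc3": 10},
}
weights = {"word_similarity": 0.5, "citations": 0.3, "downloads": 0.2}
print(aggregate_rank(score_sets, weights))   # e.g. ['doc2', 'doc1', 'doc3']
```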

  16. FRBRization of a Library Catalog: Better Collocation of Records, Leading to Enhanced Search, Retrieval, and Display

    Directory of Open Access Journals (Sweden)

    Timothy J. Dickey

    2008-03-01

    Full Text Available The Functional Requirements for Bibliographic Records (FRBR) hierarchical system defines families of bibliographic relationships between records and collocates them better than most extant bibliographic systems. Certain library materials (especially audio-visual formats) pose notable challenges to search and retrieval; the first benefits of a FRBRized system would be felt in music libraries, but research already has proven its advantages for fine arts, theology, and literature—the bulk of the non-science, technology, and mathematics collections. This report will summarize the benefits of FRBR to next-generation library catalogs and OPACs, and will review the handful of ILS and catalog systems currently operating with its theoretical structure.

  17. Google vs. the Library: Student Preferences and Perceptions when Doing Research Using Google and a Federated Search Tool

    Science.gov (United States)

    Georgas, Helen

    2013-01-01

    Federated searching was once touted as the library world's answer to Google, but ten years since federated searching technology's inception, how does it actually compare? This study focuses on undergraduate student preferences and perceptions when doing research using both Google and a federated search tool. Students were asked about their…

  18. A rational workflow for sequential virtual screening of chemical libraries on searching for new tyrosinase inhibitors.

    Science.gov (United States)

    Le-Thi-Thu, Huong; Casanola-Martín, Gerardo M; Marrero-Ponce, Yovani; Rescigno, Antonio; Abad, Concepcion; Khan, Mahmud Tareq Hassan

    2014-01-01

    Tyrosinase is a bifunctional, copper-containing enzyme widely distributed in the phylogenetic tree. This enzyme is involved in the production of melanin and some other pigments in humans, animals and plants, including skin pigmentation in mammals and the browning process in plants and vegetables. Tyrosinase inhibitors have therefore attracted the attention of the scientific community, due to their broad applications in the food, cosmetic, agricultural and medicinal fields, to avoid the undesirable effects of abnormal melanin overproduction. However, the search for novel chemicals with antityrosinase activity demands more efficient tools to speed up the tyrosinase inhibitor discovery process. This chapter focuses on the different components of a predictive modeling workflow for the identification and prioritization of potential new compounds with activity against the tyrosinase enzyme. Two chemical structure libraries, the Spectrum Collection and DrugBank, are used in this attempt to combine different virtual screening data mining techniques in a sequential manner, helping to avoid the usually expensive and time-consuming traditional methods. Some of the sequential steps summarized here comprise the use of drug-likeness filters, similarity searching, classification and potency QSAR multiclassifier systems, modeling of molecular interaction systems, and similarity/diversity analysis. Finally, the methodologies shown here provide a rational workflow for virtual screening hit analysis and selection as a promising drug discovery strategy for use in the target identification phase.
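
    One of the listed steps, similarity searching against a screening library, can be sketched with RDKit fingerprints. The SMILES strings, the choice of Morgan fingerprints and the Tanimoto cutoff below are illustrative assumptions, not the compounds, descriptors or thresholds used in the chapter.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

def fingerprint(smiles):
    """Morgan (ECFP-like) fingerprint for a SMILES string, or None if it fails to parse."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048) if mol else None

# Known tyrosinase inhibitor used as the similarity query (kojic acid, illustrative choice).
query_fp = fingerprint("OCC1=CC(=O)C(O)=CO1")

# A tiny stand-in for a screening library such as the Spectrum Collection.
library = {
    "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O",
    "maltol":  "CC1=C(O)C(=O)C=CO1",
    "phenol":  "Oc1ccccc1",
}

cutoff = 0.3   # illustrative Tanimoto threshold for keeping a hit
for name, smi in library.items():
    sim = DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi))
    flag = "HIT" if sim >= cutoff else "   "
    print(f"{flag} {name:10s} Tanimoto = {sim:.2f}")
```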

  19. Optimization of axial enrichment distribution for BWR fuels using scoping libraries and block coordinate descent method

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Wu-Hsiung, E-mail: wstong@iner.gov.tw; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung

    2017-03-15

    Highlights: • An optimization method for axial enrichment distribution in a BWR fuel was developed. • Block coordinate descent method is employed to search for optimal solution. • Scoping libraries are used to reduce computational effort. • Optimization search space consists of enrichment difference parameters. • Capability of the method to find optimal solution is demonstrated. - Abstract: An optimization method has been developed to search for the optimal axial enrichment distribution in a fuel assembly for a boiling water reactor core. The optimization method features: (1) employing the block coordinate descent method to find the optimal solution in the space of enrichment difference parameters, (2) using scoping libraries to reduce the amount of CASMO-4 calculation, and (3) integrating a core critical constraint into the objective function that is used to quantify the quality of an axial enrichment design. The objective function consists of the weighted sum of core parameters such as shutdown margin and critical power ratio. The core parameters are evaluated by using SIMULATE-3, and the cross section data required for the SIMULATE-3 calculation are generated by using CASMO-4 and scoping libraries. The application of the method to a 4-segment fuel design (with the highest allowable segment enrichment relaxed to 5%) demonstrated that the method can obtain an axial enrichment design with improved thermal limit ratios and objective function value while satisfying the core design constraints and core critical requirement through the use of an objective function. The use of scoping libraries effectively reduced the number of CASMO-4 calculations from 85 to 24 in the 4-segment optimization case. An exhaustive search was performed to examine the capability of the method in finding the optimal solution for a 4-segment fuel design. The results show that the method found a solution very close to the optimum obtained by the exhaustive search. The number of
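
    Block coordinate descent itself is easy to illustrate: the parameter vector is partitioned into blocks, and each block is optimized in turn while the others are held fixed. The sketch below applies it to a toy quadratic objective standing in for the weighted sum of core parameters; it does not call CASMO-4 or SIMULATE-3.

```python
import numpy as np

def block_coordinate_descent(objective, x0, blocks, candidates, sweeps=20):
    """Minimize `objective` by discrete line search over one block of variables at a time.

    blocks     -- list of index arrays, one per block (e.g. groups of axial segments)
    candidates -- discrete values each variable may take (e.g. allowed enrichment steps)
    """
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for block in blocks:
            for i in block:
                # Try every candidate value for variable i and keep the best one.
                trials = [(objective(np.where(np.arange(x.size) == i, c, x)), c)
                          for c in candidates]
                x[i] = min(trials)[1]
    return x, objective(x)

# Toy objective standing in for the weighted sum of core parameters:
# stay close to a target axial shape while penalizing the assembly-average enrichment.
target = np.array([3.2, 4.1, 4.4, 2.5])
objective = lambda e: np.sum((e - target) ** 2) + 0.1 * abs(e.mean() - 3.5)

blocks = [np.array([0, 1]), np.array([2, 3])]        # two blocks of axial segments
candidates = np.arange(1.0, 5.01, 0.1)               # allowed segment enrichments (wt%)
best_x, best_val = block_coordinate_descent(objective, [3.0, 3.0, 3.0, 3.0], blocks, candidates)
print("optimal segment enrichments:", np.round(best_x, 2), "objective:", round(best_val, 4))
```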

  20. Combining Fragment-Ion and Neutral-Loss Matching during Mass Spectral Library Searching: A New General Purpose Algorithm Applicable to Illicit Drug Identification.

    Science.gov (United States)

    Moorthy, Arun S; Wallace, William E; Kearsley, Anthony J; Tchekhovskoi, Dmitrii V; Stein, Stephen E

    2017-12-19

    A mass spectral library search algorithm that identifies compounds that differ from library compounds by a single "inert" structural component is described. This algorithm, the Hybrid Similarity Search, generates a similarity score based on matching both fragment ions and neutral losses. It employs the parameter DeltaMass, defined as the mass difference between query and library compounds, to shift neutral loss peaks in the library spectrum to match corresponding neutral loss peaks in the query spectrum. When the spectra being compared differ by a single structural feature, these matching neutral loss peaks should contain that structural feature. This method extends the scope of the library to include spectra of "nearest-neighbor" compounds that differ from library compounds by a single chemical moiety. Additionally, determination of the structural origin of the shifted peaks can aid in the determination of the chemical structure and fragmentation mechanism of the query compound. A variety of examples are presented, including the identification of designer drugs and chemical derivatives not present in the library.
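
    The central step, letting a query peak match either the library fragment position or that position shifted by DeltaMass (the neutral-loss match), can be sketched on stick spectra as follows. The peak lists, tolerance and plain cosine score are invented for illustration and are not the NIST implementation.

```python
import numpy as np

def hybrid_library_peaks(library_peaks, delta_mass):
    """Each library peak contributes twice: at its own m/z (fragment-ion match) and,
    shifted by delta_mass, at the position where a query fragment that retained the
    extra moiety (i.e. with the same neutral loss) would appear."""
    return [(mz, inten) for mz, inten in library_peaks] + \
           [(mz + delta_mass, inten) for mz, inten in library_peaks]

def hybrid_similarity(query_peaks, library_peaks, delta_mass, tol=0.01):
    """Cosine-style score where each query peak may match the direct or shifted position."""
    candidates = hybrid_library_peaks(library_peaks, delta_mass)
    matched_q, matched_l = [], []
    for mz_q, int_q in query_peaks:
        hits = [(abs(mz_q - mz_l), int_l) for mz_l, int_l in candidates
                if abs(mz_q - mz_l) <= tol]
        if hits:
            matched_q.append(int_q)
            matched_l.append(min(hits)[1])          # closest candidate wins
    if not matched_q:
        return 0.0
    num = np.dot(matched_q, matched_l)
    den = (np.linalg.norm([i for _, i in query_peaks]) *
           np.linalg.norm([i for _, i in library_peaks]))
    return float(num / den)

# Illustrative spectra of two compounds differing by one CH2 group (DeltaMass = 14.0157).
library_spectrum = [(58.065, 40.0), (91.054, 100.0), (120.081, 65.0)]
query_spectrum   = [(58.065, 38.0), (91.054, 100.0), (134.097, 60.0)]   # last peak kept the CH2
delta_mass = 14.0157

print("hybrid similarity:", round(hybrid_similarity(query_spectrum, library_spectrum, delta_mass), 3))
```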

  1. Automated search method for AFM and profilers

    Science.gov (United States)

    Ray, Michael; Martin, Yves C.

    2001-08-01

    New automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip obtained from a chip-design database or an image of the area. The user interface requires a user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists of a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high-speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.

  2. Project GRACE A grid based search tool for the global digital library

    CERN Document Server

    Scholze, Frank; Vigen, Jens; Prazak, Petra; The Seventh International Conference on Electronic Theses and Dissertations

    2004-01-01

    The paper will report on the progress of an ongoing EU project called GRACE - Grid Search and Categorization Engine (http://www.grace-ist.org). The project participants are CERN, Sheffield Hallam University, Stockholm University, Stuttgart University, GL 2006 and Telecom Italia. The project started in 2002 and will finish in 2005, resulting in a Grid based search engine that will search across a variety of content sources including a number of electronic thesis and dissertation repositories. The Open Archives Initiative (OAI) is expanding and is clearly an interesting movement for a community advocating open access to ETD. However, the OAI approach alone may not be sufficiently scalable to achieve a truly global ETD Digital Library. Many universities simply offer their collections to the world via their local web services without being part of any federated system for archiving and even those dissertations that are provided with OAI compliant metadata will not necessarily be picked up by a centralized OAI Ser...

  3. Beyond Failure: Potentially Mitigating Failed Author Searches in the Online Library Catalog through the Use of Linked Data

    Science.gov (United States)

    Moulaison, Heather Lea; Stanley, Susan Nicole

    2013-01-01

    Linked data stores house vetted content that can supplement the information available through online library catalogs, potentially mitigating failed author searches if information about the author exists in linked data formats. In this case study, a total of 689 failed author index queries from a large Midwestern academic library's online library…

  4. Employed and unemployed job search methods: Australian evidence on search duration, wages and job stability

    OpenAIRE

    Colin Green

    2012-01-01

    This paper examines the use and impact of job search methods of both unemployed and employed job seekers. Informal job search methods are associated with relatively high levels of job exit and shorter search duration. Job exits through the public employment agency (PEA) display positive duration dependence for the unemployed. This may suggest that the PEA is used as a job search method of last resort. Informal job search methods have lower associated duration in search and higher wages than th...

  5. Development of user-centered interfaces to search the knowledge resources of the Virginia Henderson International Nursing Library.

    Science.gov (United States)

    Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane

    2003-01-01

    This poster describes the development of user-centered interfaces in order to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from library to web based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.

  6. The method of search of tendencies

    International Nuclear Information System (INIS)

    Reuss, Paul.

    1981-08-01

    The search of tendencies is an application of the least-squares method. Its objective is the best possible evaluation of the basic data used in the calculations, based on the comparison between measurements of integral characteristics and the corresponding theoretical results. This report presents the minimization which allows the estimation of the basic data and, above all, the methods which are necessary for the critical analysis of the obtained results [fr

  7. Methods of Usability Testing in Libraries Web Sites

    Directory of Open Access Journals (Sweden)

    Eman Fawzy

    2006-03-01

    Full Text Available A study of the evaluation of library web sites, that is, their usability. The study discusses methods of usability testing and defines usability and its importance in web site evaluation, then details the usability methods: questionnaires, focus groups, prototype testing, card sorting, and combined evaluation.

  8. Analysis of Users' Searches of CD-ROM Databases in the National and University Library in Zagreb.

    Science.gov (United States)

    Jokic, Maja

    1997-01-01

    Investigates the search behavior of CD-ROM database users in Zagreb (Croatia) libraries: one group needed a minimum of technical assistance, and the other was completely independent. Highlights include the use of questionnaires and transaction log analysis and the need for end-user education. The questionnaire and definitions of search process…

  9. Generating "fragment-based virtual library" using pocket similarity search of ligand-receptor complexes.

    Science.gov (United States)

    Khashan, Raed S

    2015-01-01

    As the number of available ligand-receptor complexes is increasing, researchers are becoming more dedicated to mine these complexes to aid in the drug design and development process. We present free software which is developed as a tool for performing similarity search across ligand-receptor complexes for identifying binding pockets which are similar to that of a target receptor. The search is based on 3D-geometric and chemical similarity of the atoms forming the binding pocket. For each match identified, the ligand's fragment(s) corresponding to that binding pocket are extracted, thus forming a virtual library of fragments (FragVLib) that is useful for structure-based drug design. The program provides a very useful tool to explore available databases.

  10. Detecting atypical examples of known domain types by sequence similarity searching: the SBASE domain library approach.

    Science.gov (United States)

    Dhir, Somdutta; Pacurar, Mircea; Franklin, Dino; Gáspári, Zoltán; Kertész-Farkas, Attila; Kocsor, András; Eisenhaber, Frank; Pongor, Sándor

    2010-11-01

    SBASE is a project initiated to detect known domain types and predict domain architectures using sequence similarity searching (Simon et al., Protein Seq Data Anal, 5: 39-42, 1992; Pongor et al., Nucl. Acids Res. 21:3111-3115, 1992). The current approach uses a curated collection of domain sequences - the SBASE domain library - and standard similarity search algorithms, followed by postprocessing based on simple statistics of the domain similarity network (http://hydra.icgeb.trieste.it/sbase/). It is especially useful in detecting rare, atypical examples of known domain types which are sometimes missed even by more sophisticated methodologies. This approach does not require multiple alignment or machine learning techniques, and can be a useful complement to other domain detection methodologies. This article gives an overview of the project history as well as of the concepts and principles developed within the project.

  11. Harmony Search Method: Theory and Applications

    Directory of Open Access Journals (Sweden)

    X. Z. Gao

    2015-01-01

    Full Text Available The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As an example of a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.
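
    As described, HS keeps a memory of candidate solutions and builds new ones by recalling values from memory (with probability HMCR), optionally adjusting their pitch (with probability PAR), or drawing random values. A minimal sketch on a standard test function follows; the parameter values are typical defaults from the HS literature, not the modified Pareto-ranking variant presented in the paper.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=5000, seed=42):
    """Basic Harmony Search minimization of `objective` over the box `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize the harmony memory with random solutions and their scores.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]

    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                     # memory consideration
                value = memory[rng.randrange(hms)][d]
                if rng.random() < par:                  # pitch adjustment
                    value += bandwidth * (hi - lo) * rng.uniform(-1, 1)
            else:                                       # random selection
                value = rng.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        new_score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if new_score < scores[worst]:                   # replace the worst harmony
            memory[worst], scores[worst] = new, new_score

    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Example: minimize the 2-D Rosenbrock function on [-2, 2] x [-2, 2].
rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
solution, value = harmony_search(rosenbrock, [(-2, 2), (-2, 2)])
print("best solution:", [round(v, 3) for v in solution], "objective:", round(value, 5))
```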

  12. Bridging the Gulf: Mixed Methods and Library Service Evaluation

    Science.gov (United States)

    Haynes, Abby

    2004-01-01

    This paper explores library evaluation in Australia and proposes a return to research fundamentals in which evaluators are asked to consider the centrality of philosophical issues and the role of different research methods. A critique of current evaluation examples demonstrates a system-centred, quantitative, input/output focus which fails to…

  13. Heuristic method for searching global maximum of multimodal unknown function

    Energy Technology Data Exchange (ETDEWEB)

    Kamei, K; Araki, Y; Inoue, K

    1983-06-01

    The method is composed of three kinds of searches. They are called g (grasping)-mode search, f (finding)-mode search and c (confirming)-mode search. In the g-mode search and the c-mode search, a heuristic method is used which was extracted from the search behaviors of human subjects. In the f-mode search, the simplex method is used, which is well known as a search method for unimodal unknown functions. Each mode search and its transitions are shown in the form of a flowchart. The numerical results for one-dimensional through six-dimensional multimodal functions prove the proposed search method to be an effective one. 11 references.
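
    The division of labour in the abstract, a global heuristic stage feeding a local simplex stage, can be approximated with random restarts refined by SciPy's Nelder-Mead simplex. The human-derived g-mode and c-mode heuristics are not reproduced here, so uniform random sampling stands in for them.

```python
import numpy as np
from scipy.optimize import minimize

def multimodal(x):
    """A 2-D test function with several local maxima (we minimize its negative)."""
    x, y = x
    return -(np.sin(3 * x) * np.cos(2 * y) + 0.5 * np.exp(-((x - 1) ** 2 + (y + 0.5) ** 2)))

def restart_simplex_search(objective, bounds, n_starts=30, seed=3):
    """Global-maximum search: random starting points (stand-in for the g-mode heuristic),
    each refined by the Nelder-Mead simplex (the f-mode of the abstract)."""
    rng = np.random.default_rng(seed)
    best_x, best_val = None, np.inf
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        res = minimize(objective, x0, method="Nelder-Mead")
        if res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return best_x, -best_val          # report the maximum of the original function

x_best, f_best = restart_simplex_search(multimodal, [(-3, 3), (-3, 3)])
print("approximate global maximum at", np.round(x_best, 3), "value", round(f_best, 4))
```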

  14. Research Library

    Science.gov (United States)

    Los Alamos National Laboratory Research Library: delivering essential knowledge services for national security sciences since 1947.

  15. Combining history of medicine and library instruction: an innovative approach to teaching database searching to medical students.

    Science.gov (United States)

    Timm, Donna F; Jones, Dee; Woodson, Deidra; Cyrus, John W

    2012-01-01

    Library faculty members at the Health Sciences Library at the LSU Health Shreveport campus offer a database searching class for third-year medical students during their surgery rotation. For a number of years, students completed "ten-minute clinical challenges," but the instructors decided to replace the clinical challenges with innovative exercises using The Edwin Smith Surgical Papyrus to emphasize concepts learned. The Surgical Papyrus is an online resource that is part of the National Library of Medicine's "Turning the Pages" digital initiative. In addition, vintage surgical instruments and historic books are displayed in the classroom to enhance the learning experience.

  16. Design of combinatorial libraries for the exploration of virtual hits from fragment space searches with LoFT.

    Science.gov (United States)

    Lessel, Uta; Wellenzohn, Bernd; Fischer, J Robert; Rarey, Matthias

    2012-02-27

    A case study is presented illustrating the design of a focused CDK2 library. The scaffold of the library was detected by a feature trees search in a fragment space based on reactions from combinatorial chemistry. For the design the software LoFT (Library optimizer using Feature Trees) was used. The special feature called FTMatch was applied to restrict the parts of the queries where the reagents are permitted to match. This way a 3D scoring function could be simulated. Results were compared with alternative designs by GOLD docking and ROCS 3D alignments.

  17. Methods library of embedded R functions at Statistics Norway

    Directory of Open Access Journals (Sweden)

    Øyvind Langsrud

    2017-11-01

    Full Text Available Statistics Norway is modernising its production processes. An important element in this work is a library of functions for statistical computations. In principle, the functions in such a methods library can be programmed in several languages. A modernised production environment demands that these functions can be reused for different statistics products, and that they are embedded within a common IT system. The embedding should be done in such a way that the users of the methods do not need to know the underlying programming language. As a proof of concept, Statistics Norway has established a methods library offering a limited number of methods for macro-editing, imputation and confidentiality. This is done within an area of municipal statistics with R as the only programming language. This paper presents the details and experiences from this work. The problem of fitting real-world applications to simple and strict standards is discussed and exemplified by the development of solutions to regression imputation and table suppression.

  18. Processing methods for temperature-dependent MCNP libraries

    International Nuclear Information System (INIS)

    Li Songyang; Wang Kan; Yu Ganglin

    2008-01-01

    In this paper, the processing method of NJOY, which transfers ENDF files to ACE (A Compact ENDF) files (the point-wise cross-section files used by the MCNP program), is discussed. Temperatures that cover the range for reactor design and operation are considered. Three benchmarks are used for testing the method: the Jezebel Benchmark, the 28 cm-thick Slab Core Benchmark and the LWR Benchmark with Burnable Absorbers. The calculation results showed the precision of the neutron cross-section library and verified the correctness of the NJOY processing methods. (authors)

  19. The Use of OPAC in a Large Academic Library: A Transactional Log Analysis Study of Subject Searching

    Science.gov (United States)

    Villen-Rueda, Luis; Senso, Jose A.; de Moya-Anegon, Felix

    2007-01-01

    The analysis of user searches in catalogs has been the topic of research for over four decades, involving numerous studies and diverse methodologies. The present study looks at how different types of users effect queries in the catalog of a university library. For this purpose, we analyzed log files to determine which was the most frequent type of…

  20. Detection and identification of 700 drugs by multi-target screening with a 3200 Q TRAP LC-MS/MS system and library searching.

    Science.gov (United States)

    Dresen, S; Ferreirós, N; Gnann, H; Zimmermann, R; Weinmann, W

    2010-04-01

    The multi-target screening method described in this work allows the simultaneous detection and identification of 700 drugs and metabolites in biological fluids using a hybrid triple-quadrupole linear ion trap mass spectrometer in a single analytical run. After standardization of the method, the retention times of 700 compounds were determined and transitions for each compound were selected by a "scheduled" survey MRM scan, followed by an information-dependent acquisition using the sensitive enhanced product ion scan of a Q TRAP hybrid instrument. The identification of the compounds in the samples analyzed was accomplished by searching the tandem mass spectrometry (MS/MS) spectra against the library we developed, which contains electrospray ionization-MS/MS spectra of over 1,250 compounds. The multi-target screening method together with the library was included in a software program for routine screening and quantitation to achieve automated acquisition and library searching. With the help of this software application, the time for evaluation and interpretation of the results could be drastically reduced. This new multi-target screening method has been successfully applied for the analysis of postmortem and traffic offense samples as well as proficiency testing, and complements screening with immunoassays, gas chromatography-mass spectrometry, and liquid chromatography-diode-array detection. Other possible applications are analysis in clinical toxicology (for intoxication cases), in psychiatry (antidepressants and other psychoactive drugs), and in forensic toxicology (drugs and driving, workplace drug testing, oral fluid analysis, drug-facilitated sexual assault).

  1. Comparison tomography relocation hypocenter grid search and guided grid search method in Java island

    International Nuclear Information System (INIS)

    Nurdian, S. W.; Adu, N.; Palupi, I. R.; Raharjo, W.

    2016-01-01

    The main data in this research are earthquake records from 1952 to 2012, comprising 9162 P-wave arrivals from 2426 events recorded by 30 stations located around Java island. Hypocenter relocation was performed using the grid search and guided grid search methods. The relocated hypocenters then become input for the pseudo-bending tomographic inversion, which can be used to identify the velocity distribution in the subsurface. The results of hypocenter relocation by the grid search and guided grid search methods after the tomography process are shown both locally and globally. In the local area, the grid search method gives better results than the guided grid search according to the geology of the research area, but globally the guided grid search method is better over a broad area because it captures more diverse velocity variation, in accordance with the local geological conditions. (paper)
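
    The grid search relocation itself reduces to scanning candidate hypocentres and keeping the one that minimizes the travel-time residuals at the recording stations. The sketch below assumes a homogeneous velocity model and synthetic stations, not the Java catalogue or the pseudo-bending ray tracing used in the study.

```python
import numpy as np

V_P = 6.0   # assumed homogeneous P-wave velocity (km/s), for illustration only

def travel_time(hypo, station):
    """Straight-ray P travel time from a hypocentre (x, y, z) to a surface station (x, y, 0)."""
    return np.linalg.norm(np.asarray(hypo) - np.asarray(station)) / V_P

def grid_search_relocate(stations, observed_tt, grid_step=2.0, extent=40.0, depth_max=40.0):
    """Scan a regular 3-D grid and return the hypocentre with minimum RMS residual."""
    xs = np.arange(-extent, extent + grid_step, grid_step)
    zs = np.arange(0.0, depth_max + grid_step, grid_step)
    best, best_rms = None, np.inf
    for x in xs:
        for y in xs:
            for z in zs:
                calc = np.array([travel_time((x, y, z), s) for s in stations])
                # Demean the residuals so the (unknown) origin time drops out.
                resid = (observed_tt - calc) - np.mean(observed_tt - calc)
                rms = np.sqrt(np.mean(resid ** 2))
                if rms < best_rms:
                    best, best_rms = (x, y, z), rms
    return best, best_rms

# Synthetic example: 6 surface stations, true hypocentre at (10, -6, 18) km.
stations = [(0, 0, 0), (30, 5, 0), (-25, 12, 0), (8, -30, 0), (-15, -20, 0), (20, 25, 0)]
true_hypo = (10.0, -6.0, 18.0)
observed = np.array([travel_time(true_hypo, s) for s in stations]) + 5.0   # +5 s origin time

hypo, rms = grid_search_relocate(stations, observed)
print("relocated hypocentre (km):", hypo, " RMS residual (s):", round(rms, 3))
```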

  2. Representation Methods in AI. Searching by Graphs

    Directory of Open Access Journals (Sweden)

    Angel GARRIDO

    2012-12-01

    Full Text Available The historical origin of Artificial Intelligence (AI) is usually established at the Dartmouth Conference of 1956. But we can find many more arcane origins [1]. Also, we can consider, in more recent times, very great thinkers, such as Janos Neumann (then John von Neumann, after arriving in the USA), Norbert Wiener, Alan Mathison Turing, or Lotfi Zadeh, for instance [6, 7]. Frequently AI requires logic. But its classical version shows too many insufficiencies. So, it was necessary to introduce more sophisticated tools, such as fuzzy logic, modal logic, non-monotonic logic and so on [2]. Among the things that AI needs to represent are: categories, objects, properties, relations between objects, situations, states, time, events, causes and effects, knowledge about knowledge, and so on. The problems in AI can be classified in two general types [3, 4]: search problems and representation problems. In this last "mountain", there exist different ways to reach the summit. So, we have [3]: logics, rules, frames, associative nets, scripts and so on, many times connected among them. We attempt, in this paper, a panoramic vision of the scope of application of such Representation Methods in AI. The two most disputable questions of both modern philosophy of mind and AI are the Turing Test and the Chinese Room Argument. To elucidate these very difficult questions, see both final Appendices.

  3. A novel method of providing a library of n-mers or biopolymers

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention relates to a method of providing a library of n-mer sequences, wherein the library is composed of an n-mer sequence. Also the invention concerns a method of providing a library of biopolymer sequences having one or more n-mers in common. Further provided are specific primers...

  4. Engineering Bacteria to Search for Specific Concentrations of Molecules by a Systematic Synthetic Biology Design Method.

    Science.gov (United States)

    Tien, Shin-Ming; Hsu, Chih-Yuan; Chen, Bor-Sen

    2016-01-01

    Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella's rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the "brake component" in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each "brake component" were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the "brake component". Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate "brake component" in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains.

  5. Engineering Bacteria to Search for Specific Concentrations of Molecules by a Systematic Synthetic Biology Design Method.

    Directory of Open Access Journals (Sweden)

    Shin-Ming Tien

    Full Text Available Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella's rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the "brake component" in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each "brake component" were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the "brake component". Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate "brake component" in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains.

  6. Enhancing the Acquisition Methods of School Library ...

    African Journals Online (AJOL)

    User

    2010-10-17

    Oct 17, 2010 ... Sometimes, the school libraries receive cash donations for the acquisition of library materials. ... To keep financial records or book budget. (v). To keep records .... management systems to knowledge-based systems provides a.

  7. Monte Carlo Library Least Square (MCLLS) Method for Multiple Radioactive Particle Tracking in BPR

    Science.gov (United States)

    Wang, Zhijian; Lee, Kyoung; Gardner, Robin

    2010-03-01

    In this work, a new method of radioactive particle tracking is proposed. Accurate Detector Response Functions (DRFs) were developed from MCNP5 to generate a library for NaI detectors with a significant speed-up factor of 200. This makes possible the MCLLS method, which is used for locating and tracking the radioactive particle in a modular Pebble Bed Reactor (PBR) by searching for minimum chi-square values. The method was tested and worked well under our laboratory conditions with an array of only six 2" X 2" NaI detectors. The method was introduced in both forward and inverse ways. A single radioactive particle tracking system with three collimated 2" X 2" NaI detectors is used for benchmarking purposes.
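
    The inverse step, finding the grid position whose library-predicted detector counts best explain the measurement, amounts to a chi-square search over the pre-computed library. The sketch below uses randomly generated detector responses in place of the MCNP5-derived DRF library.

```python
import numpy as np

def chi_square(measured, expected):
    """Poisson-weighted chi-square between measured and library-predicted detector counts."""
    expected = np.maximum(expected, 1e-9)          # avoid division by zero
    return float(np.sum((measured - expected) ** 2 / expected))

def locate_particle(measured_counts, response_library):
    """Return the library grid position whose predicted response minimizes chi-square."""
    chi2 = {pos: chi_square(measured_counts, counts)
            for pos, counts in response_library.items()}
    best = min(chi2, key=chi2.get)
    return best, chi2[best]

# Build a toy library: predicted counts in 6 NaI detectors for each grid position.
rng = np.random.default_rng(7)
grid_positions = [(x, y, z) for x in range(5) for y in range(5) for z in range(3)]
response_library = {pos: rng.uniform(50, 500, size=6) for pos in grid_positions}

# Simulate a measurement at one known position (library prediction plus Poisson noise).
true_pos = (2, 3, 1)
measured = rng.poisson(response_library[true_pos]).astype(float)

estimate, chi2 = locate_particle(measured, response_library)
print("true position:", true_pos, " estimated:", estimate, " chi-square:", round(chi2, 2))
```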

  8. Efficient protein structure search using indexing methods.

    Science.gov (United States)

    Kim, Sungchul; Sael, Lee; Yu, Hwanjo

    2013-01-01

    Understanding the functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, thus finding structurally similar proteins accurately and efficiently from a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the searching process. However, computing the similarity of two protein structures is still computationally expensive, thus it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time to find structurally similar proteins. In particular, we first exploit two indexing techniques, i.e., iDistance and iKernel, on the 3DZDs. After that, we extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and utilize a reduced index constructed from the first few attributes of the 3DZDs of protein structures. To retrieve the top-k similar structures, the top-10 × k similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the searching speed. In top-k nearest neighbor search, the searching time is reduced by 69.6%, 77%, 77.4% and 87.9%, respectively, using iDistance, iKernel, the extended iDistance, and the extended iKernel. In θ-based nearest neighbor search, the searching time is reduced by 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively.
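
    The reported two-stage strategy, prefiltering with an index built from the first few 3DZD attributes and then re-ranking the 10 × k survivors with the full descriptor, can be sketched with plain NumPy distance computations standing in for iDistance and iKernel.

```python
import numpy as np

def two_stage_knn(query, descriptors, k=5, reduced_dims=12, oversample=10):
    """Approximate k-NN over 3DZD-like vectors using a reduced-dimension prefilter.

    Stage 1: rank all structures by distance in the first `reduced_dims` attributes
             and keep the top oversample*k candidates (the 'reduced index' step).
    Stage 2: re-rank those candidates with the full descriptor and return the top k.
    """
    reduced = descriptors[:, :reduced_dims]
    d1 = np.linalg.norm(reduced - query[:reduced_dims], axis=1)
    candidates = np.argsort(d1)[: oversample * k]

    d2 = np.linalg.norm(descriptors[candidates] - query, axis=1)
    return candidates[np.argsort(d2)[:k]]

def theta_search(query, descriptors, theta, reduced_dims=12):
    """theta-based search: return indices of structures within distance theta of the query.

    The distance over a subset of coordinates is a lower bound on the full distance, so
    structures farther than theta in the reduced space can be discarded without a full check.
    """
    reduced = descriptors[:, :reduced_dims]
    d1 = np.linalg.norm(reduced - query[:reduced_dims], axis=1)
    survivors = np.where(d1 <= theta)[0]
    d2 = np.linalg.norm(descriptors[survivors] - query, axis=1)
    return survivors[d2 <= theta]

# Toy data: 10,000 structures described by 121-dimensional 3DZD-like vectors.
rng = np.random.default_rng(11)
descriptors = rng.random((10_000, 121))
query = descriptors[42] + 0.01 * rng.random(121)    # near-duplicate of structure 42

print("top-5 similar structures:", two_stage_knn(query, descriptors, k=5))
print("structures within theta=0.5:", len(theta_search(query, descriptors, theta=0.5)))
```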

  9. Direct glycan structure determination of intact N-linked glycopeptides by low-energy collision-induced dissociation tandem mass spectrometry and predicted spectral library searching.

    Science.gov (United States)

    Pai, Pei-Jing; Hu, Yingwei; Lam, Henry

    2016-08-31

    Intact glycopeptide MS analysis to reveal site-specific protein glycosylation is an important frontier of proteomics. However, computational tools for analyzing MS/MS spectra of intact glycopeptides are still limited and not well-integrated into existing workflows. In this work, a new computational tool which combines the spectral library building/searching tool, SpectraST (Lam et al. Nat. Methods 2008, 5, 873-875), and the glycopeptide fragmentation prediction tool, MassAnalyzer (Zhang et al. Anal. Chem. 2010, 82, 10194-10202) for intact glycopeptide analysis has been developed. Specifically, this tool enables the determination of the glycan structure directly from low-energy collision-induced dissociation (CID) spectra of intact glycopeptides. Given a list of possible glycopeptide sequences as input, a sample-specific spectral library of MassAnalyzer-predicted spectra is built using SpectraST. Glycan identification from CID spectra is achieved by spectral library searching against this library, in which both m/z and intensity information of the possible fragmentation ions are taken into consideration for improved accuracy. We validated our method using a standard glycoprotein, human transferrin, and evaluated its potential to be used in site-specific glycosylation profiling of glycoprotein datasets from LC-MS/MS. In addition, we further applied our method to reveal, for the first time, the site-specific N-glycosylation profile of recombinant human acetylcholinesterase expressed in HEK293 cells. For maximum usability, SpectraST is developed as part of the Trans-Proteomic Pipeline (TPP), a freely available and open-source software suite for MS data analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
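
    Spectral library searching of the kind described here scores an experimental spectrum against each predicted library spectrum using both m/z and intensity. One common formulation is a normalized dot product over binned, square-root-scaled intensities; the sketch below uses that generic score and hypothetical peak lists, and is not SpectraST's actual scoring function.

      import numpy as np

      def binned_dot_product(spec_a, spec_b, bin_width=1.0, max_mz=2000.0):
          """Score two peak lists [(m/z, intensity), ...] by the normalized dot
          product of their binned, sqrt-scaled intensity vectors."""
          nbins = int(max_mz / bin_width)
          def to_vector(peaks):
              v = np.zeros(nbins)
              for mz, inten in peaks:
                  b = int(mz / bin_width)
                  if 0 <= b < nbins:
                      v[b] += np.sqrt(inten)     # sqrt damps dominant peaks
              norm = np.linalg.norm(v)
              return v / norm if norm > 0 else v
          return float(to_vector(spec_a) @ to_vector(spec_b))

      def search_library(query, library):
          """Return library entries ranked by similarity to the query spectrum."""
          scores = {name: binned_dot_product(query, spec) for name, spec in library.items()}
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      # Hypothetical predicted spectra for two candidate glycoforms.
      library = {"HexNAc4Hex5": [(204.09, 100), (366.14, 60), (1200.5, 30)],
                 "HexNAc4Hex5NeuAc1": [(204.09, 80), (274.09, 50), (657.23, 40)]}
      query = [(204.09, 95), (366.14, 55), (1200.5, 25)]
      print(search_library(query, library))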

  10. Multi-level computational methods for interdisciplinary research in the HathiTrust Digital Library.

    Science.gov (United States)

    Murdock, Jaimie; Allen, Colin; Börner, Katy; Light, Robert; McAlister, Simon; Ravenscroft, Andrew; Rose, Robert; Rose, Doori; Otsuka, Jun; Bourget, David; Lawrence, John; Reed, Chris

    2017-01-01

    We show how faceted search using a combination of traditional classification systems and mixed-membership topic models can go beyond keyword search to inform resource discovery, hypothesis formulation, and argument extraction for interdisciplinary research. Our test domain is the history and philosophy of scientific work on animal mind and cognition. The methods can be generalized to other research areas and ultimately support a system for semi-automatic identification of argument structures. We provide a case study for the application of the methods to the problem of identifying and extracting arguments about anthropomorphism during a critical period in the development of comparative psychology. We show how a combination of classification systems and mixed-membership models trained over large digital libraries can inform resource discovery in this domain. Through a novel approach of "drill-down" topic modeling, simultaneously reducing both the size of the corpus and the unit of analysis, we are able to reduce a large collection of full-text volumes to a much smaller set of pages within six focal volumes containing arguments of interest to historians and philosophers of comparative psychology. The volumes identified in this way did not appear among the first ten results of the keyword search in the HathiTrust digital library, and the pages reward the kind of "close reading" needed to generate original interpretations that is at the heart of scholarly work in the humanities. Zooming back out, we provide a way to place the books onto a map of science originally constructed from very different data and for different purposes. The multilevel approach advances understanding of the intellectual and societal contexts in which writings are interpreted.
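
    The "drill-down" strategy described above can be approximated in two stages: fit a topic model over whole volumes to select focal volumes, then refit over the pages of those volumes to localize the passages of interest. The sketch below uses scikit-learn's LDA on placeholder texts; the corpus, topic counts, and the choice of topic 0 as the topic of interest are illustrative assumptions, not the authors' HathiTrust pipeline.

      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      def topic_scores(texts, n_topics=2, seed=0):
          """Fit LDA on a list of texts and return the document-topic matrix."""
          X = CountVectorizer(stop_words="english").fit_transform(texts)
          lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
          return lda.fit_transform(X)

      # Stage 1: model whole volumes and pick those loading highest on a topic of interest.
      volumes = ["animal cognition anthropomorphism instinct behavior",
                 "railway engineering steam locomotion signalling",
                 "comparative psychology animal mind behavior instinct"]
      vol_theta = topic_scores(volumes)
      focal = np.argsort(vol_theta[:, 0])[-2:]        # two volumes highest on topic 0

      # Stage 2: drill down to the pages of the focal volumes and model them again.
      pages = ["anthropomorphism in early psychology", "morgan canon and instinct",
               "experiments on animal learning", "index and bibliography"]
      page_theta = topic_scores(pages)
      print(focal, page_theta.round(2))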

  11. Evidential significance of automotive paint trace evidence using a pattern recognition based infrared library search engine for the Paint Data Query Forensic Database.

    Science.gov (United States)

    Lavine, Barry K; White, Collin G; Allen, Matthew D; Fasasi, Ayuba; Weakley, Andrew

    2016-10-01

    A prototype library search engine has been further developed to search the infrared spectral libraries of the paint data query database to identify the line and model of a vehicle from the clear coat, surfacer-primer, and e-coat layers of an intact paint chip. For this study, search prefilters were developed from 1181 automotive paint systems spanning 3 manufacturers: General Motors, Chrysler, and Ford. The best match between each unknown and the spectra in the hit list generated by the search prefilters was identified using a cross-correlation library search algorithm that performed both a forward and backward search. In the forward search, spectra were divided into intervals and further subdivided into windows (which corresponds to the time lag for the comparison) within those intervals. The top five hits identified in each search window were compiled; a histogram was computed that summarized the frequency of occurrence for each library sample, with the IR spectra most similar to the unknown flagged. The backward search computed the frequency and occurrence of each line and model without regard to the identity of the individual spectra. Only those lines and models with a frequency of occurrence greater than or equal to 20% were included in the final hit list. If there was agreement between the forward and backward search results, the specific line and model common to both hit lists was always the correct assignment. Samples assigned to the same line and model by both searches are always well represented in the library and correlate well on an individual basis to specific library samples. For these samples, one can have confidence in the accuracy of the match. This was not the case for the results obtained using commercial library search algorithms, as the hit quality index scores for the top twenty hits were always greater than 99%. Copyright © 2016 Elsevier B.V. All rights reserved.
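
    The forward/backward voting scheme described above can be sketched as follows: the spectrum is split into windows, the library spectra best correlated with the unknown in each window receive votes, and only lines/models whose frequency of occurrence reaches the 20% threshold survive the backward tally. The code uses plain Pearson correlation on synthetic spectra as a stand-in for the authors' cross-correlation search prefilters and the PDQ library, so all names and data are hypothetical.

      import numpy as np
      from collections import Counter

      def forward_search(unknown, library, n_windows=10, top=1):
          """Vote, window by window, for the library spectra best correlated
          with the unknown spectrum inside that window."""
          edges = np.linspace(0, len(unknown), n_windows + 1, dtype=int)
          votes = Counter()
          for lo, hi in zip(edges[:-1], edges[1:]):
              corr = {name: np.corrcoef(unknown[lo:hi], spec[lo:hi])[0, 1]
                      for name, spec in library.items()}
              for name, _ in sorted(corr.items(), key=lambda kv: kv[1], reverse=True)[:top]:
                  votes[name] += 1
          return votes

      def backward_filter(votes, models, n_windows=10, threshold=0.2):
          """Keep only the vehicle lines/models whose windowed frequency of
          occurrence reaches the threshold."""
          model_votes = Counter()
          for name, v in votes.items():
              model_votes[models[name]] += v
          return {m: v for m, v in model_votes.items() if v / n_windows >= threshold}

      # Hypothetical clear-coat library: two spectra from one line, one from another.
      rng = np.random.default_rng(2)
      x = np.linspace(0, 1, 500)
      library = {"gm_01": np.sin(6 * x),
                 "gm_02": np.sin(6 * x) + 0.05 * rng.normal(size=500),
                 "ford_01": np.cos(9 * x)}
      models = {"gm_01": "GM line A", "gm_02": "GM line A", "ford_01": "Ford line B"}
      unknown = np.sin(6 * x) + 0.02 * rng.normal(size=500)
      print(backward_filter(forward_search(unknown, library), models))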

  12. Through the Students’ Lens: Photographic Methods for Research in Library Spaces

    Directory of Open Access Journals (Sweden)

    Shailoo Bedi

    2017-06-01

    Full Text Available Abstract Objective – As librarians and researchers, we are deeply curious about how our library users navigate and experience our library spaces. Although we have some data about users’ experiences and wayfinding strategies at our libraries, including anecdotal evidence, statistics, surveys, and focus group discussions, we lacked more in-depth information that reflected students’ real-time experiences as they move through our library spaces. Our objective is to address that gap by using photographic methods for studying library spaces. Methods – We present two studies conducted in two academic libraries that used participant-driven photo-elicitation (PDPE methods. Described simply, photo-elicitation methods involve the use of photographs as discussion prompts in interviews. In both studies presented here, we asked participants to take photographs that reflected their experiences using and navigating our library spaces. We then met with participants for an interview using their photos as prompts to discuss their experiences. Results – Our analysis of students’ photos and interviews provided rich descriptions of student experiences in library spaces. This analysis resulted in new insights into the ways that students navigate the library as well as the ways that signage, furniture, technology, and artwork in the library can shape student experiences in library spaces. The results have proven productive in generating answers to our research questions and supporting practical improvements to our libraries. Additionally, when comparing the results from our two studies we identified the importance of detailed spatial references for understanding student experiences in library spaces, which has implications beyond our institutions. Conclusion – We found that photographic methods were very productive in helping us to understand library users’ experiences and supporting decision-making related to library spaces. In addition, engaging with

  13. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    Science.gov (United States)

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  14. An automated full-symmetry Patterson search method

    International Nuclear Information System (INIS)

    Rius, J.; Miravitlles, C.

    1987-01-01

    A full-symmetry Patterson search method is presented that performs a molecular coarse rotation search in vector space and orientation refinement using the σ function. The oriented molecule is positioned using the fast translation function τ0, which is based on the automated interpretation of τ projections using the sum function. This strategy reduces the number of Patterson-function values to be stored in the rotation search, and the use of the τ0 function minimizes the required time for the development of all probable rotation search solutions. The application of this method to five representative test examples is shown. (orig.)

  15. Fast radio burst search: cross spectrum vs. auto spectrum method

    Science.gov (United States)

    Liu, Lei; Zheng, Weimin; Yan, Zhen; Zhang, Juan

    2018-06-01

    The search for fast radio bursts (FRBs) is a hot topic in current radio astronomy studies. In this work, we carry out a single pulse search with a very long baseline interferometry (VLBI) pulsar observation data set using both auto spectrum and cross spectrum search methods. The cross spectrum method, first proposed in Liu et al., maximizes the signal power by fully utilizing the fringe phase information of the baseline cross spectrum. The auto spectrum search method is based on the popular pulsar software package PRESTO, which extracts single pulses from the auto spectrum of each station. According to our comparison, the cross spectrum method is able to enhance the signal power and therefore extract single pulses from data contaminated by high levels of radio frequency interference (RFI), which makes it possible to carry out a search for FRBs in regular VLBI observations when RFI is present.

  16. Real-time earthquake monitoring using a search engine method.

    Science.gov (United States)

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.

  17. Building maps to search the web: the method Sewcom

    Directory of Open Access Journals (Sweden)

    Corrado Petrucco

    2002-01-01

    Full Text Available Seeking information on the Internet is becoming a necessity at school, at work and in every social sphere. Unfortunately, the difficulties inherent in the use of search engines and the unconscious use of inefficient cognitive approaches limit their effectiveness. In this respect, a method called SEWCOM is presented that lets users create conceptual maps through interaction with search engines.

  18. Job Search as Goal-Directed Behavior: Objectives and Methods

    Science.gov (United States)

    Van Hoye, Greet; Saks, Alan M.

    2008-01-01

    This study investigated the relationship between job search objectives (finding a new job/turnover, staying aware of job alternatives, developing a professional network, and obtaining leverage against an employer) and job search methods (looking at job ads, visiting job sites, networking, contacting employment agencies, contacting employers, and…

  19. Teaching Literacy: Methods for Studying and Improving Library Instruction

    Directory of Open Access Journals (Sweden)

    Meggan Houlihan

    2012-12-01

    Full Text Available Objective – The aim of this paper is to evaluate teaching effectiveness in one-shot information literacy (IL) instruction sessions. The authors used multiple methods, including plus/delta forms, peer evaluations, and instructor feedback surveys, in an effort to improve student learning, individual teaching skill, and the overall IL program at the American University in Cairo. Methods – Researchers implemented three main evaluation tools to gather data in this study. Librarians collected both quantitative and qualitative data using student plus/delta surveys, peer evaluation, and faculty feedback in order to draw overall conclusions about the effectiveness of one-shot IL sessions. By designing a multi-method study, and gathering information from students, faculty, and instruction librarians, results represented the perspectives of multiple stakeholders. Results – The data collected using the three evaluation tools provided insight into the needs and perspectives of three stakeholder groups. Individual instructors benefit from the opportunity to improve teaching through informed reflection, and are eager for feedback. Faculty members want their students to have more hands-on experience, but are pleased overall with instruction. Students need less lecturing and more authentic learning opportunities to engage with new knowledge. Conclusion – Including evaluation techniques in overall information literacy assessment plans is valuable, as instruction librarians gain opportunities for self-reflection and improvement, and administrators gather information about teaching skill levels. The authors gathered useful data that informed administrative decision making related to the IL program at the American University in Cairo. The findings discussed in this paper, both practical and theoretical, can help other college and university librarians think critically about their own IL programs, and influence how library instruction sessions might be evaluated and

  20. Methods for the preparation of large quantities of complex single-stranded oligonucleotide libraries.

    Science.gov (United States)

    Murgha, Yusuf E; Rouillard, Jean-Marie; Gulari, Erdogan

    2014-01-01

    Custom-defined oligonucleotide collections have a broad range of applications in fields of synthetic biology, targeted sequencing, and cytogenetics. Also, they are used to encode information for technologies like RNA interference, protein engineering and DNA-encoded libraries. High-throughput parallel DNA synthesis technologies developed for the manufacture of DNA microarrays can produce libraries of large numbers of different oligonucleotides, but in very limited amounts. Here, we compare three approaches to prepare large quantities of single-stranded oligonucleotide libraries derived from microarray synthesized collections. The first approach, alkaline melting of double-stranded PCR amplified libraries with a biotinylated strand captured on streptavidin coated magnetic beads, results in little or no non-biotinylated ssDNA. The second method, wherein the phosphorylated strand of PCR amplified libraries is nucleolytically hydrolyzed, is recommended when small amounts of libraries are needed. The third method, combining in vitro transcription of PCR amplified libraries with reverse transcription of the RNA product into single-stranded cDNA, is our recommended method to produce large amounts of oligonucleotide libraries. Finally, we propose a method to remove any primer binding sequences introduced during library amplification.

  1. Library Computing

    Science.gov (United States)

    Library Computing, 1985

    1985-01-01

    Special supplement to "Library Journal" and "School Library Journal" covers topics of interest to school, public, academic, and special libraries planning for automation: microcomputer use, readings in automation, online searching, databases of microcomputer software, public access to microcomputers, circulation, creating a…

  2. Is There a Standard Default Keyword Operator? A Bibliometric Analysis of Processing Options Chosen by Libraries To Execute Keyword Searches in Online Public Access Catalogs.

    Science.gov (United States)

    Klein, Gary M.

    1994-01-01

    Online public access catalogs from 67 libraries using NOTIS software were searched using Internet connections to determine the positional operators selected as the default keyword operator on each catalog. Results indicate the lack of a processing standard for keyword searches. Five tables provide information. (Author/AEF)

  3. Method of Improving Personal Name Search in Academic Information Service

    Directory of Open Access Journals (Sweden)

    Heejun Han

    2012-12-01

    Full Text Available All academic information on the web or elsewhere has its creator, that is, a subject who has created the information. The subject can be an individual, a group, or an institution, and can be a nation depending on the nature of the relevant information. Most information is composed of a title, an author, and contents. An essay in the academic information category has metadata including a title, an author, keywords, an abstract, data about publication, place of publication, ISSN, and the like. A patent has metadata including the title, an applicant, an inventor, an attorney, IPC, number of application, and claims of the invention. Most web-based academic information services enable users to search the information by processing this meta-information. An important element is searching information by using the author field, which corresponds to a personal name. This study suggests a method of efficient indexing and an adjacency-operation result ranking algorithm to which phrase-search-based boosting elements are applied, thus improving the accuracy of personal name search results. It also describes a method for providing the results of searching co-authors and related researchers when searching personal names. This method can be effectively applied to providing accurate and additional search results in academic information services.
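
    The ranking idea described here (index the author field, score candidates on token overlap, and boost exact adjacent "phrase" matches of the name) can be sketched as below. The scoring formula, field names and records are illustrative assumptions, not the service's actual algorithm.

      def score_name(query, indexed_name, phrase_boost=2.0):
          """Score a candidate author name against a query: token overlap forms
          the base score, and an exact adjacent (phrase) match gets an extra boost."""
          q_tokens, n_tokens = query.lower().split(), indexed_name.lower().split()
          overlap = len(set(q_tokens) & set(n_tokens)) / max(len(q_tokens), 1)
          phrase = " ".join(q_tokens) in " ".join(n_tokens)      # adjacency check
          return overlap + (phrase_boost if phrase else 0.0)

      def search_authors(query, records):
          """Rank author records by the boosted name score (co-authors carried along)."""
          return sorted(records, key=lambda r: score_name(query, r["name"]), reverse=True)

      records = [{"name": "Han Heejun", "coauthors": ["Lee J.", "Kim S."]},
                 {"name": "Heejun Han", "coauthors": ["Park M."]},
                 {"name": "Han Ji Heejun", "coauthors": []}]
      for r in search_authors("Heejun Han", records):
          print(r["name"], score_name("Heejun Han", r["name"]), r["coauthors"])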

  4. All roads lead to Rome - New search methods for the optimal triangulation problem

    Czech Academy of Sciences Publication Activity Database

    Ottosen, T. J.; Vomlel, Jiří

    2012-01-01

    Roč. 53, č. 9 (2012), s. 1350-1366 ISSN 0888-613X R&D Projects: GA MŠk 1M0572; GA ČR GEICC/08/E010; GA ČR GA201/09/1891 Grant - others:GA MŠk(CZ) 2C06019 Institutional support: RVO:67985556 Keywords : Bayesian networks * Optimal triangulation * Probabilistic inference * Cliques in a graph Subject RIV: BD - Theory of Information Impact factor: 1.729, year: 2012 http://library.utia.cas.cz/separaty/2012/MTR/vomlel-all roads lead to rome - new search methods for the optimal triangulation problem.pdf

  5. Exploiting Chemical Libraries, Structure, and Genomics in the Search for Kinase Inhibitors

    NARCIS (Netherlands)

    Gray, Nathanael S.; Wodicka, Lisa; Thunnissen, Andy-Mark W.H.; Norman, Thea C.; Kwon, Soojin; Espinoza, F. Hernan; Morgan, David O.; Barnes, Georjana; LeClerc, Sophie; Meijer, Laurent; Kim, Sung-Hou; Lockhart, David J.; Schultz, Peter G.

    1998-01-01

    Selective protein kinase inhibitors were developed on the basis of the unexpected binding mode of 2,6,9-trisubstituted purines to the adenosine triphosphate-binding site of the human cyclin-dependent kinase 2 (CDK2). By iterating chemical library synthesis and biological screening, potent inhibitors

  6. Building and evaluating an informatics tool to facilitate analysis of a biomedical literature search service in an academic medical center library.

    Science.gov (United States)

    Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha

    2013-01-01

    This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.

  7. End-User Searching in a Large Library Network: A Case Study of Patent Attorneys.

    Science.gov (United States)

    Vollaro, Alice J.; Hawkins, Donald T.

    1986-01-01

    Reports results of study of a group of end users (patent attorneys) doing their own online searching at AT&T Bell Laboratories. Highlights include DIALOG databases used by the attorneys, locations and searching modes, characteristics of patent attorney searchers, and problem areas. Questionnaire is appended. (5 references) (EJS)

  8. Search Strategy Development in a Flipped Library Classroom: A Student-Focused Assessment

    Science.gov (United States)

    Goates, Michael C.; Nelson, Gregory M.; Frost, Megan

    2017-01-01

    Librarians at Brigham Young University compared search statement development between traditional lecture and flipped instruction sessions. Students in lecture sessions scored significantly higher on developing search statements than those in flipped sessions. However, student evaluations show a strong preference for pedagogies that incorporate…

  9. Libraries of Synthetic TALE-Activated Promoters: Methods and Applications.

    Science.gov (United States)

    Schreiber, T; Tissier, A

    2016-01-01

    The discovery of proteins with programmable DNA-binding specificities triggered a whole array of applications in synthetic biology, including genome editing, regulation of transcription, and epigenetic modifications. Among those, transcription activator-like effectors (TALEs) due to their natural function as transcription regulators, are especially well-suited for the development of orthogonal systems for the control of gene expression. We describe here the construction and testing of libraries of synthetic TALE-activated promoters which are under the control of a single TALE with a given DNA-binding specificity. These libraries consist of a fixed DNA-binding element for the TALE, a TATA box, and variable sequences of 19 bases upstream and 43 bases downstream of the DNA-binding element. These libraries were cloned using a Golden Gate cloning strategy making them usable as standard parts in a modular cloning system. The broad range of promoter activities detected and the versatility of these promoter libraries make them valuable tools for applications in the fine-tuning of expression in metabolic engineering projects or in the design and implementation of regulatory circuits. © 2016 Elsevier Inc. All rights reserved.

  10. Remarks on search methods for stable, massive, elementary particles

    International Nuclear Information System (INIS)

    Perl, Martin L.

    2001-01-01

    This paper was presented at the 69th birthday celebration of Professor Eugene Commins, honoring his research achievements. These remarks are about the experimental techniques used in the search for new stable, massive particles, particles at least as massive as the electron. A variety of experimental methods such as accelerator experiments, cosmic ray studies, searches for halo particles in the galaxy and searches for exotic particles in bulk matter are described. A summary is presented of the measured limits on the existence of new stable, massive particles.

  11. ARSTEC, Nonlinear Optimization Program Using Random Search Method

    International Nuclear Information System (INIS)

    Rasmuson, D. M.; Marshall, N. H.

    1979-01-01

    1 - Description of problem or function: The ARSTEC program was written to solve nonlinear, mixed integer, optimization problems. An example of such a problem in the nuclear industry is the allocation of redundant parts in the design of a nuclear power plant to minimize plant unavailability. 2 - Method of solution: The technique used in ARSTEC is the adaptive random search method. The search is started from an arbitrary point in the search region and every time a point that improves the objective function is found, the search region is centered at that new point. 3 - Restrictions on the complexity of the problem: Presently, the maximum number of independent variables allowed is 10. This can be changed by increasing the dimension of the arrays
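
    A minimal sketch of the adaptive random search idea described in the solution method, for continuous variables only (ARSTEC itself handles mixed-integer problems): sample candidates around the current best point, recenter the search region whenever the objective improves, and contract it otherwise. The objective below is a toy stand-in for plant unavailability, not an ARSTEC model.

      import numpy as np

      def adaptive_random_search(f, x0, radius, n_iter=2000, shrink=0.99, seed=0):
          """Minimize f by random sampling: the search region is re-centered on any
          improving point and gradually contracted, as in an adaptive random search."""
          rng = np.random.default_rng(seed)
          best_x, best_f = np.asarray(x0, float), f(x0)
          r = float(radius)
          for _ in range(n_iter):
              candidate = best_x + rng.uniform(-r, r, size=best_x.shape)
              fc = f(candidate)
              if fc < best_f:                    # improvement: recenter the region here
                  best_x, best_f = candidate, fc
              else:
                  r *= shrink                    # otherwise contract the region slightly
          return best_x, best_f

      # Toy objective: a shifted quadratic standing in for plant unavailability.
      objective = lambda x: float(np.sum((np.asarray(x) - 3.0) ** 2) + 1.0)
      print(adaptive_random_search(objective, x0=[0.0, 0.0], radius=5.0))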

  12. Collection evaluation in University libraries (II. Methods based on collection use

    Directory of Open Access Journals (Sweden)

    Àngels Massísimo i Sánchez de Boado

    2004-01-01

    Full Text Available This is our second paper devoted to collection evaluation in university libraries. Seven methods based on collection use are described. Their advantages and disadvantages are discussed, as well as their usefulness for a range of library types.

  13. Application of pulse pile-up correction spectrum to the library least-squares method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Hoon [Kyungpook National Univ., Daegu (Korea, Republic of)

    2006-12-15

    The Monte Carlo simulation code CEARPPU has been developed and updated to provide pulse pile-up correction spectra for high counting rate cases. For neutron activation analysis, CEARPPU correction spectra were used in the library least-squares method to give better isotopic activity results than the conventional library least-squares fitting with uncorrected spectra.

  14. Usability Testing as a Method to Refine a Health Sciences Library Website.

    Science.gov (United States)

    Denton, Andrea H; Moody, David A; Bennett, Jason C

    2016-01-01

    User testing, a method of assessing website usability, can be a cost-effective and easily administered process to collect information about a website's effectiveness. A user experience (UX) team at an academic health sciences library has employed user testing for over three years to help refine the library's home page. Test methodology used in-person testers using the "think aloud" method to complete tasks on the home page. Review of test results revealed problem areas of the design and redesign; further testing was effective in refining the page. User testing has proved to be a valuable method to engage users and provide feedback to continually improve the library's home page.

  15. Information Retrieval Methods in Libraries and Information Centers ...

    African Journals Online (AJOL)

    African Research Review.

  16. Electronic Book Usage Patterns as Observed at an Academic Library: Searches and Viewings

    Directory of Open Access Journals (Sweden)

    Alain R. Lamothe

    2010-06-01

    Full Text Available In 2009, e-book usage statistics were evaluated at Laurentian University, Canada, to provide a better understanding of how the e-book collection has been utilized as well as to give direction for further collection development. The number of e-books, the number of viewings and the number of searches were examined. The size of the collection grew from a single book in 2002 to more than 60,000 in 2008. The pattern of purchase varied from bulk purchasing of large e-book collections to more selective purchasing from 2005 to 2007 and then back to bulk purchasing in 2008. Both viewings and searches have increased from year to year at a greater pace than the size of the e-book collection. The number of searches also appeared to provide a viable means to measure the use of an e-book collection rather than relying entirely on viewings or downloads. Ratios were calculated comparing viewings and searches to the size of the collection. The largest viewings-per-e-book and searches-per-e-book ratios were observed in those years when purchasing was done more selectively. It is also clear that the electronic reference collection has seen far greater use than the electronic monographs. Furthermore, usage of electronic monographs also appeared to be directly proportional to the size of the collection.

  17. Multi-line split DNA synthesis: a novel combinatorial method to make high quality peptide libraries

    Directory of Open Access Journals (Sweden)

    Ueno Shingo

    2004-09-01

    Full Text Available Abstract Background We developed a method to make various high-quality random peptide libraries for evolutionary protein engineering based on combinatorial DNA synthesis. Results A split synthesis in codon units was performed with mixtures of bases optimally designed by using a Genetic Algorithm program. It required only standard DNA synthetic reagents and standard DNA synthesizers in three lines. This multi-line split DNA synthesis (MLSDS) is simply realized by adding a mix-and-split process to the normal DNA synthesis protocol. The superiority of the MLSDS method over other methods was shown. We demonstrated the synthesis of oligonucleotide libraries with 10^16 diversity, and the construction of a library with random sequences coding 120 amino acids containing few stop codons. Conclusions Owing to the flexibility of the MLSDS method, it will be possible to design various "rational" libraries by using bioinformatics databases.

  18. The search for new amphiphiles: synthesis of a modular, high-throughput library

    Directory of Open Access Journals (Sweden)

    George C. Feast

    2014-07-01

    Full Text Available Amphiphilic compounds are used in a variety of applications due to their lyotropic liquid-crystalline phase formation, however only a limited number of compounds, in a potentially limitless field, are currently in use. A library of organic amphiphilic compounds was synthesised consisting of glucose, galactose, lactose, xylose and mannose head groups and double and triple-chain hydrophobic tails. A modular, high-throughput approach was developed, whereby head and tail components were conjugated using the copper-catalysed azide–alkyne cycloaddition (CuAAC reaction. The tails were synthesised from two core alkyne-tethered intermediates, which were subsequently functionalised with hydrocarbon chains varying in length and degree of unsaturation and branching, while the five sugar head groups were selected with ranging substitution patterns and anomeric linkages. A library of 80 amphiphiles was subsequently produced, using a 24-vial array, with the majority formed in very good to excellent yields. A preliminary assessment of the liquid-crystalline phase behaviour is also presented.

  19. 5e$^{x+y}$: Searching over Mathematical Content in Digital Libraries

    CERN Document Server

    Oviedo, Arthur; Aberer, Karl

    2015-01-01

    This paper presents 5e$^{x+y}$, a system that is able to extract, index and query mathematical content expressed as mathematical expressions, complementing the CERN Document Server (CDS) [5]. We present the most important aspects of its design, our approach to model the relevant features of the mathematical content, and provide a demonstration of its searching capabilities.

  20. ENSURING HIGH-QUALITY THEMATIC SEARCH IN THE ELECTRONIC CATALOGUE (FROM EXPERIENCE OF THE SCIENTIFIC LIBRARY OF THE ODESSA I. I. MECHNIKOV NATIONAL UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Т. М. Бикова

    2015-09-01

    Full Text Available The purpose of our article is to describe the electronic subject analysis of documents and the creation of a dictionary of subject headings in the electronic catalog. The subject of the research is the electronic catalog of the Scientific Library of the Odessa I. I. Mechnikov National University, and the aim of the work is to develop a technique for editing sections of the thesaurus. In 2012 the Scientific Library of the Odessa I. I. Mechnikov National University moved to the new software ABIS Absotheque Unicode, which made it possible to improve and simplify search in the electronic catalog. The department of scientific processing and catalog organization edited sections of the thesaurus, drew up instructions, developed an editing technique, and studied the editing process. The thesaurus of subject headings is compiled by borrowing from the full BBC tables for scientific libraries. Correctly constructed subject headings simplify thematic search for the reader and make it more effective, even when the reader has no clear idea of or deep knowledge about the search subject. The main finding of the work is the need for continuous editing of subject headings, which makes search in the electronic catalog simpler and more comfortable for the library user. The research findings have practical value for library employees.

  1. An improved yeast transformation method for the generation of very large human antibody libraries.

    Science.gov (United States)

    Benatuil, Lorenzo; Perez, Jennifer M; Belk, Jonathan; Hsieh, Chung-Ming

    2010-04-01

    Antibody library selection by yeast display technology is an efficient and highly sensitive method to identify binders to target antigens. This powerful selection tool, however, is often hampered by the typically modest size of yeast libraries (approximately 10^7) due to the limited yeast transformation efficiency, and the full potential of the yeast display technology for antibody discovery and engineering can only be realized if it can be coupled with a means to generate very large yeast libraries. We describe here a yeast transformation method by electroporation that allows for the efficient generation of large antibody libraries up to 10^10 in size. Multiple components and conditions including CaCl2, MgCl2, sucrose, sorbitol, lithium acetate, dithiothreitol, electroporation voltage, DNA input and cell volume have been tested to identify the best combination. By applying this developed protocol, we have constructed a 1.4 x 10^10 human spleen antibody library essentially in 1 day with a transformation efficiency of 1-1.5 x 10^8 transformants/microg vector DNA. Taken together, we have developed a highly efficient yeast transformation method that enables the generation of very large and productive human antibody libraries for antibody discovery, and we are now routinely making 10^9 libraries in a day for antibody engineering purposes.

  2. A library based fitting method for visual reflectance spectroscopy of human skin

    Energy Technology Data Exchange (ETDEWEB)

    Verkruysse, Wim [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States); Zhang Rong [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States); Choi, Bernard [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States); Lucassen, Gerald [Personal Care Institute, Philips Research, Prof Holstlaan 4, Eindhoven (Netherlands); Svaasand, Lars O [Department of Physical Electronics Norwegian University of Science and Technology, N-7491 Trondheim (Norway); Nelson, J Stuart [Beckman Laser Institute, University of California, Irvine, CA 92612 (United States)

    2005-01-07

    The diffuse reflectance spectrum of human skin in the visible region (400-800 nm) contains information on the concentrations of chromophores such as melanin and haemoglobin. This information may be extracted by fitting the reflectance spectrum with an optical diffusion based analytical expression applied to a layered skin model. With the use of the analytical expression, it is assumed that light transport is dominated by scattering. For port wine stain (PWS) and highly pigmented human skin, however, this assumption may not be valid resulting in a potentially large error in visual reflectance spectroscopy (VRS). Monte Carlo based techniques can overcome this problem but are currently too computationally intensive to be combined with previously used fitting procedures. The fitting procedure presented herein is based on a library search which enables the use of accurate reflectance spectra based on forward Monte Carlo simulations or diffusion theory. This allows for accurate VRS to characterize chromophore concentrations in PWS and highly pigmented human skin. The method is demonstrated using both simulated and measured reflectance spectra. An additional advantage of the method is that the fitting procedure is very fast.
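
    The library-based fitting procedure described above replaces repeated forward-model evaluations during fitting with a search through a precomputed table of reflectance spectra, returning the chromophore concentrations of the closest library entry. The sketch below illustrates only that search step; the spectral model is a purely made-up toy, not a Monte Carlo or diffusion-theory skin model, and the parameter names are illustrative.

      import numpy as np

      def build_library(wavelengths, melanin_grid, blood_grid):
          """Precompute a toy reflectance library over a grid of chromophore
          concentrations (a stand-in for forward Monte Carlo or diffusion spectra)."""
          entries = []
          for m in melanin_grid:
              for b in blood_grid:
                  # Purely illustrative spectral shape, NOT a physical skin model.
                  refl = (np.exp(-m * (wavelengths / 500.0) ** -3)
                          * (1 - b * np.exp(-((wavelengths - 560) / 40.0) ** 2)))
                  entries.append(((m, b), refl))
          return entries

      def library_fit(measured, library):
          """Return the (melanin, blood) parameters of the closest library spectrum."""
          errors = [np.sum((measured - refl) ** 2) for _, refl in library]
          return library[int(np.argmin(errors))][0]

      wl = np.linspace(400, 800, 200)
      lib = build_library(wl, np.linspace(0.01, 0.3, 30), np.linspace(0.0, 0.5, 26))
      true_params, true_refl = lib[412]
      measured = true_refl + np.random.default_rng(3).normal(scale=0.002, size=wl.size)
      print("true:", true_params, "fitted:", library_fit(measured, lib))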

  3. A library based fitting method for visual reflectance spectroscopy of human skin

    International Nuclear Information System (INIS)

    Verkruysse, Wim; Zhang Rong; Choi, Bernard; Lucassen, Gerald; Svaasand, Lars O; Nelson, J Stuart

    2005-01-01

    The diffuse reflectance spectrum of human skin in the visible region (400-800 nm) contains information on the concentrations of chromophores such as melanin and haemoglobin. This information may be extracted by fitting the reflectance spectrum with an optical diffusion based analytical expression applied to a layered skin model. With the use of the analytical expression, it is assumed that light transport is dominated by scattering. For port wine stain (PWS) and highly pigmented human skin, however, this assumption may not be valid resulting in a potentially large error in visual reflectance spectroscopy (VRS). Monte Carlo based techniques can overcome this problem but are currently too computationally intensive to be combined with previously used fitting procedures. The fitting procedure presented herein is based on a library search which enables the use of accurate reflectance spectra based on forward Monte Carlo simulations or diffusion theory. This allows for accurate VRS to characterize chromophore concentrations in PWS and highly pigmented human skin. The method is demonstrated using both simulated and measured reflectance spectra. An additional advantage of the method is that the fitting procedure is very fast

  4. A library based fitting method for visual reflectance spectroscopy of human skin

    Science.gov (United States)

    Verkruysse, Wim; Zhang, Rong; Choi, Bernard; Lucassen, Gerald; Svaasand, Lars O.; Nelson, J. Stuart

    2005-01-01

    The diffuse reflectance spectrum of human skin in the visible region (400-800 nm) contains information on the concentrations of chromophores such as melanin and haemoglobin. This information may be extracted by fitting the reflectance spectrum with an optical diffusion based analytical expression applied to a layered skin model. With the use of the analytical expression, it is assumed that light transport is dominated by scattering. For port wine stain (PWS) and highly pigmented human skin, however, this assumption may not be valid resulting in a potentially large error in visual reflectance spectroscopy (VRS). Monte Carlo based techniques can overcome this problem but are currently too computationally intensive to be combined with previously used fitting procedures. The fitting procedure presented herein is based on a library search which enables the use of accurate reflectance spectra based on forward Monte Carlo simulations or diffusion theory. This allows for accurate VRS to characterize chromophore concentrations in PWS and highly pigmented human skin. The method is demonstrated using both simulated and measured reflectance spectra. An additional advantage of the method is that the fitting procedure is very fast.

  5. Implementation Of Haversine Formula And Best First Search Method In Searching Of Tsunami Evacuation Route

    Science.gov (United States)

    Anisya; Yoga Swara, Ganda

    2017-12-01

    Padang is one of the cities prone to earthquake and tsunami disasters due to its position at the meeting of two active plates, a source of potentially powerful earthquakes and tsunamis. The central government and most offices are located in the red zone (vulnerable area), which also affects the evacuation of the population during an earthquake and tsunami disaster. In this study, the researchers produced a system for finding the nearest shelter using the best-first search method. This method uses a heuristic function combining the cost incurred and an estimated value based on travel time, path length and population density. To calculate the length of the path, the researchers used the haversine formula. The values obtained from the calculation process are implemented in a web-based system. Some alternative paths and some of the closest shelters will be displayed in the system.
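
    The two ingredients named in the abstract can be sketched directly: the haversine formula for great-circle path length, and a greedy best-first search that expands the node with the smallest estimated distance to a shelter. The road graph, coordinates and distance-only heuristic below are illustrative assumptions; the actual system also weights travel time and population density.

      import heapq
      from math import radians, sin, cos, asin, sqrt

      def haversine_km(lat1, lon1, lat2, lon2, r_earth=6371.0):
          """Great-circle distance between two points in kilometres."""
          p1, p2 = radians(lat1), radians(lat2)
          dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
          return 2 * r_earth * asin(sqrt(a))

      def best_first_search(start, goals, graph, coords):
          """Greedy best-first search: always expand the node whose straight-line
          (haversine) distance to the nearest shelter is smallest."""
          h = lambda n: min(haversine_km(*coords[n], *coords[g]) for g in goals)
          frontier, seen = [(h(start), start, [start])], set()
          while frontier:
              _, node, path = heapq.heappop(frontier)
              if node in goals:
                  return path
              if node in seen:
                  continue
              seen.add(node)
              for nxt in graph.get(node, []):
                  if nxt not in seen:
                      heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
          return None

      # Hypothetical mini road network with two shelters S1 and S2.
      coords = {"A": (-0.947, 100.417), "B": (-0.944, 100.420), "C": (-0.940, 100.425),
                "S1": (-0.936, 100.430), "S2": (-0.950, 100.428)}
      graph = {"A": ["B"], "B": ["C", "S2"], "C": ["S1"]}
      print(best_first_search("A", {"S1", "S2"}, graph, coords))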

  6. Undergraduates Prefer Federated Searching to Searching Databases Individually. A Review of: Belliston, C. Jeffrey, Jared L. Howland, & Brian C. Roberts. "Undergraduate Use of Federated Searching: A Survey of Preferences and Perceptions of Value-Added Functionality." College & Research Libraries 68.6 (Nov. 2007): 472-86.

    Directory of Open Access Journals (Sweden)

    Genevieve Gore

    2008-09-01

    Full Text Available Objective – To determine whether use of federated searching by undergraduates saves time, meets their information needs, is preferred over searching databases individually, and provides results of higher quality. Design – Crossover study. Setting – Three American universities, all members of the Consortium of Church Libraries & Archives (CCLA): BYU (Brigham Young University), a large research university; BYUH (Brigham Young University – Hawaii), a small baccalaureate college; and BYUI (Brigham Young University – Idaho), a large baccalaureate college. Subjects – Ninety-five participants recruited via e-mail invitations sent to a random sample of currently enrolled undergraduates at BYU, BYUH, and BYUI. Methods – Participants were given written directions to complete a literature search for journal articles on two biology-related topics using two search methods: 1. federated searching with WebFeat® (implemented in the same way for this study at the three universities) and 2. a hyperlinked list of databases to search individually. Both methods used the same set of seven databases. Each topic was assigned in random order to one of the two search methods, also assigned in random order, for a total of two searches per participant. The time to complete the searches was recorded. Students compiled their lists of citations, which were later normalized and graded. To analyze the quality of the citations, one quantitative rubric was created by librarians and one qualitative rubric was approved by a faculty member at BYU. The librarian-created rubric included the journal impact factor (from ISI's Journal Citation Reports®), the proportion of citations from peer-reviewed journals (determined from Ulrichsweb.com™) to total citations, and the timeliness of the articles. The faculty-approved rubric included three criteria: relevance to the topic, quality of the individual citations (good quality: primary research results, peer-reviewed sources, and

  7. The commission errors search and assessment (CESA) method

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V. N

    2007-05-15

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  8. The commission errors search and assessment (CESA) method

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V. N.

    2007-05-01

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  9. Studying the Night Shift: A Multi-method Analysis of Overnight Academic Library Users

    Directory of Open Access Journals (Sweden)

    David Schwieder

    2017-09-01

    Full Text Available Abstract Objective – This paper reports on a study which assessed the preferences and behaviors of overnight library users at a major state university. The findings were used to guide the design and improvement of overnight library resources and services, and the selection of a future overnight library site. Methods – A multi-method design used descriptive and correlational statistics to analyze data produced by a multi-sample survey of overnight library users. These statistical methods included rankings, percentages, and multiple regression. Results – Results showed a strong consistency across statistical methods and samples. Overnight library users consistently prioritized facilities like power outlets for electronic devices, and group and quiet study spaces, and placed far less emphasis on assistance from library staff. Conclusions – By employing more advanced statistical and sampling procedures than had been found in previous research, this paper strengthens the validity of findings on overnight user preferences and behaviors. The multi-method research design can also serve to guide future work in this area.

  10. Fuzzy Search Method for Hi Education Information Security

    Directory of Open Access Journals (Sweden)

    Grigory Grigorevich Novikov

    2016-03-01

    Full Text Available The main purpose of the research is to show how a fuzzy search method can be used for the information security of higher education or similar purposes. Many sensitive information leaks occur through the legal publishing of non-classified documents. That is why many intelligence services are so fond of the «mosaic» information collection method. This article is about how to prevent it.

  11. Geometrical Fuzzy Search Method for the Business Information Security Systems

    Directory of Open Access Journals (Sweden)

    Grigory Grigorievich Novikov

    2014-12-01

    Full Text Available The main purpose of the article is to show how one of the new fuzzy search methods can be used for the information security of business or other purposes. Many sensitive information leaks occur through the legal publishing of non-classified documents. That is why many intelligence services like to use the "mosaic" information collection method so much. This article is about how to prevent it.

  12. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Randriantsizafy, R D; Ramanandraibe, M J [Madagascar Institut National des Sciences et Techniques Nucleaires, Antananarivo (Madagascar); Raboanary, R [Institut of astro and High-Energy Physics Madagascar, University of Antananarivo, Antananarivo (Madagascar)

    2007-07-01

    The Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment-time calculation for a prescribed dose is made manually. A Monte-Carlo method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs is set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan image for individual and more accurate treatment-time calculation for a prescribed dose.

  13. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.

    2007-01-01

    The Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment-time calculation for a prescribed dose is made manually. A Monte-Carlo method Python library written at the Madagascar INSTN is experimentally used to calculate the dose distribution on the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs is set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient's CT scan image for individual and more accurate treatment-time calculation for a prescribed dose.

  14. A rapid method for screening arrayed plasmid cDNA library by PCR

    International Nuclear Information System (INIS)

    Hu Yingchun; Zhang Kaitai; Wu Dechang; Li Gang; Xiang Xiaoqiong

    1999-01-01

    Objective: To develop a PCR-based method for rapid and effective screening of an arrayed plasmid cDNA library. Methods: The plasmid cDNA library was arrayed and screened by PCR with a particular set of primers. Results: Four positive clones were obtained in about one week. Conclusion: This method can be applied to screening not only normal cDNA clones, but also cDNA clones containing small-size fragments. This method offers significant advantages over the traditional screening method in terms of sensitivity, specificity and efficiency.

  15. Study on boundary search method for DFM mesh generation

    Directory of Open Access Journals (Sweden)

    Li Ri

    2012-08-01

    Full Text Available The boundary mesh of the casting model was determined by direct calculation on the triangular facets extracted from the STL file of the 3D model. Then the inner and outer grids of the model were identified by an algorithm which we named the Inner Seed Grid Method. Finally, a program to automatically generate a 3D FDM mesh was compiled. In the paper, a method named the Triangle Contraction Search Method (TCSM) was put forward to ensure that no boundary grids are lost, while an algorithm that searches for inner seed grids to identify the inner/outer grids of the casting model was also presented. Our algorithm is simple, clear and easy to implement as a program. Three examples of casting mesh generation testified to the validity of the program.
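
    The inner/outer grid identification step can be illustrated with a small flood-fill sketch in 2-D: starting from an inner seed cell, every cell reachable without crossing a boundary cell is marked as inside, in the spirit of the Inner Seed Grid Method (the actual method works on a 3-D FDM grid built from the STL facets, so this toy grid is only an assumption for illustration).

      from collections import deque

      def flood_fill_inside(boundary, seed):
          """Mark every grid cell reachable from `seed` without crossing a boundary
          cell; with an interior seed this labels the inner grids of the model."""
          rows, cols = len(boundary), len(boundary[0])
          inside, queue = set(), deque([seed])
          while queue:
              r, c = queue.popleft()
              if not (0 <= r < rows and 0 <= c < cols):
                  continue
              if boundary[r][c] or (r, c) in inside:
                  continue
              inside.add((r, c))
              queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
          return inside

      # 2-D stand-in for a voxelized casting: 1 marks a boundary cell.
      grid = [[0, 1, 1, 1, 0],
              [1, 0, 0, 0, 1],
              [1, 0, 0, 0, 1],
              [0, 1, 1, 1, 0]]
      print(sorted(flood_fill_inside(grid, seed=(1, 1))))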

  16. Electronic Book Usage Patterns as Observed at an Academic Library: Searches and Viewings

    OpenAIRE

    Alain R. Lamothe

    2010-01-01

    In 2009, e-book usage statistics were evaluated at Laurentian University, Canada, to provide a better understanding of how the e-book collection has been utilized as well as to give direction for further collection development. The number of e-books, the number of viewings and the number of searches were examined. The size of the collection grew from a single book in 2002 to more than 60,000 in 2008. The pattern of purchase varied from bulk purchasing of large e-book collections to a ...

  17. The Use of Resistivity Methods in Terrestrial Forensic Searches

    Science.gov (United States)

    Wolf, R. C.; Raisuddin, I.; Bank, C.

    2013-12-01

    The increasing use of near-surface geophysical methods in forensic searches has demonstrated the need for further studies to identify the ideal physical, environmental and temporal settings for each geophysical method. Previous studies using resistivity methods have shown promising results, but additional work is required to more accurately interpret and analyze survey findings. The Ontario Provincial Police's UCRT (Urban Search and Rescue; Chemical, Biological, Radiological, Nuclear and Explosives; Response Team) is collaborating with the University of Toronto and two additional universities in a multi-year study investigating the applications of near-surface geophysical methods to terrestrial forensic searches. In the summer of 2012, on a test site near Bolton, Ontario, the OPP buried weapons, drums and pigs (naked, tarped, and clothed) to simulate clandestine graves and caches. Our study aims to conduct repeat surveys using an IRIS Syscal Junior resistivity meter with a 48-electrode switching system. These surveys will monitor changes in resistivity reflecting decomposition of the object since burial, and identify the strengths and weaknesses of resistivity when used in a rural, clandestine burial setting. Our initial findings indicate the usefulness of this method, as prominent resistivity changes have been observed. We anticipate our results will help to assist law enforcement agencies in determining the type of resistivity results to expect based on time since burial, depth of burial and state of dress of the body.

  18. Exploration of Stellarator Configuration Space with Global Search Methods

    International Nuclear Information System (INIS)

    Mynick, H.E.; Pomphrey, N.; Ethier, S.

    2001-01-01

    An exploration of stellarator configuration space z for quasi-axisymmetric stellarator (QAS) designs is discussed, using methods which provide a more global view of that space. To this end, we have implemented a "differential evolution" (DE) search algorithm in an existing stellarator optimizer, which is much less prone to become trapped in local, suboptimal minima of the cost function chi than the local search methods used previously. This search algorithm is complemented by mapping studies of chi over z aimed at gaining insight into the results of the automated searches. We find that a wide range of the attractive QAS configurations previously found fall into a small number of classes, with each class corresponding to a basin of chi(z). We develop maps on which these earlier stellarators can be placed, the relations among them seen, and understanding gained into the physics differences between them. It is also found that, while still large, the region of z space containing practically realizable QAS configurations is much smaller than earlier supposed.
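
    To make the search strategy concrete, the following minimal sketch runs a differential evolution global search with SciPy on a toy multimodal cost function standing in for chi(z); the bounds, dimensionality and cost function are illustrative assumptions and have nothing to do with the actual stellarator optimizer.

```python
# A minimal sketch of a differential-evolution global search, assuming a toy
# multimodal surrogate cost in place of the real stellarator cost function.
import numpy as np
from scipy.optimize import differential_evolution

def chi(z):
    # Rastrigin-like surrogate: many local basins, single global minimum at 0
    z = np.asarray(z)
    return 10 * len(z) + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))

bounds = [(-5.12, 5.12)] * 4        # 4 configuration-space parameters (toy)
result = differential_evolution(chi, bounds, seed=1, tol=1e-8, maxiter=2000)
print("best chi:", result.fun, "at z =", result.x)
```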

  19. Evaluation of academic library collection using a check-list method

    Directory of Open Access Journals (Sweden)

    Kornelija Petr Balog

    2015-04-01

    Full Text Available The purpose of this paper is to evaluate the quality of the ILS library collection of the Faculty of Humanities and Social Sciences (FHSS) in Osijek, Croatia and its congruence with the curriculum. The quality of the collection is measured using the check-list method. The required and optional reading lists of the Department of Information Sciences at the FHSS (academic year 2011/2012) are used as standard lists that the library holdings are compared to. The results show that the library does not hold 30.8 per cent of the titles on the reading lists. A further 33.9 per cent of the titles are accessible in the library, 28.5 per cent are free electronic resources, and 6.8 per cent of titles are accessible to students through the Department's Moodle learning management system. The study provides data about the titles available and not available in the FHSS library. However, it does not differentiate between the titles on the required and optional reading lists. This study provides the FHSS librarians with the list of titles that should be obtained in the near future. In Croatia, very few papers on collection assessment have been published so far, and this is the first study about the quality of a library collection at the University of Osijek. The paper attempts to fill that gap and contribute to a deeper understanding of the quality of library collections in the Croatian academic setting.

  20. SearchSmallRNA: a graphical interface tool for the assemblage of viral genomes using small RNA libraries data.

    Science.gov (United States)

    de Andrade, Roberto R S; Vaslin, Maite F S

    2014-03-07

    Next-generation parallel sequencing (NGS) allows the identification of viral pathogens by sequencing the small RNAs of infected hosts. Thus, viral genomes may be assembled from host immune response products without prior virus enrichment, amplification or purification. However, mapping the vast amount of information obtained presents a bioinformatics challenge. In order to bypass the need for command-line and basic bioinformatics knowledge, we developed mapping software with a graphical interface for the assembly of viral genomes from small RNA datasets obtained by NGS. SearchSmallRNA was developed in the Java language, version 7, using the NetBeans IDE 7.1 software. The program also allows analysis of the viral small interfering RNA (vsRNA) profile, providing an overview of the size distribution and other features of the vsRNAs produced in infected cells. The program performs comparisons between each sequenced read present in a library and a chosen reference genome. Reads showing Hamming distances smaller than or equal to an allowed number of mismatches are selected as positives and used for the assembly of a long nucleotide genome sequence. In order to validate the software, distinct analyses using NGS datasets obtained from HIV and two plant viruses were used to reconstruct whole viral genomes. The SearchSmallRNA program was able to reconstruct viral genomes from small RNA NGS datasets with a high degree of reliability, so it will be a valuable tool for virus sequencing and discovery. It is accessible and free to all research communities and has the advantage of an easy-to-use graphical interface. SearchSmallRNA was written in Java and is freely available at http://www.microbiologia.ufrj.br/ssrna/.
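
    As an illustration of the core mapping rule described above (not the SearchSmallRNA code itself), the sketch below keeps a read if it matches the reference somewhere with a Hamming distance no larger than the allowed number of mismatches; the toy reference and reads are invented.

```python
# Illustrative sketch: accept a small-RNA read if it matches the reference
# genome with a Hamming distance no larger than the allowed mismatch count.
def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def map_read(read: str, reference: str, max_mismatches: int):
    """Return (position, distance) of the best match, or None if none qualifies."""
    best = None
    for pos in range(len(reference) - len(read) + 1):
        d = hamming(read, reference[pos:pos + len(read)])
        if d <= max_mismatches and (best is None or d < best[1]):
            best = (pos, d)
    return best

reference = "ACGTACGTTAGGCTAACGGTACCGT"   # toy reference genome
reads = ["TAGGCTAA", "TAGGCTTA", "GGGGGGGG"]
for r in reads:
    print(r, "->", map_read(r, reference, max_mismatches=1))
```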

  1. State Virtual Libraries

    Science.gov (United States)

    Pappas, Marjorie L.

    2003-01-01

    Virtual library? Electronic library? Digital library? Online information network? These all apply to the growing number of Web-based resource collections managed by consortiums of state library entities. Some, like "INFOhio" and "KYVL" ("Kentucky Virtual Library"), have been available for a few years, but others are just starting. Searching for…

  2. Information Literacy for Users at the National Medical Library of Cuba: Cochrane Library Course for the Search of Best Evidence for Clinical Decisions

    Science.gov (United States)

    Santana Arroyo, Sonia; del Carmen Gonzalez Rivero, Maria

    2012-01-01

    The National Medical Library of Cuba is currently developing an information literacy program to train users in the use of biomedical databases. This paper describes the experience with the course "Cochrane Library: Evidence-Based Medicine," which aims to teach users how to make the best use of this database, as well as the evidence-based…

  3. Assessment of the effectiveness of uranium deposit searching methods

    International Nuclear Information System (INIS)

    Suran, J.

    1998-01-01

    The following groups of uranium deposit searching methods are described: radiometric review of foreign work; aerial radiometric survey; automobile radiometric survey; emanation survey up to 1 m; emanation survey up to 2 m; ground radiometric survey; radiometric survey in pits; deep radiometric survey; combination of the above methods; and other methods (drilling survey). For vein-type deposits, the majority of Czech deposits were discovered in 1945-1965 by radiometric review of foreign work, automobile radiometric survey, and emanation survey up to 1 m. The first significant indications of sandstone-type uranium deposits were observed in the mid-1960s by aerial radiometric survey and confirmed later by drilling. (P.A.)

  4. Combined use of ESI-QqTOF-MS and ESI-QqTOF-MS/MS with mass-spectral library search for qualitative analysis of drugs.

    Science.gov (United States)

    Pavlic, Marion; Libiseller, Kathrin; Oberacher, Herbert

    2006-09-01

    The potential of the combined use of ESI-QqTOF-MS and ESI-QqTOF-MS/MS with mass-spectral library search for the identification of therapeutic and illicit drugs has been evaluated. Reserpine was used for standardizing experimental conditions and for characterization of the performance of the applied mass spectrometric system. Experiments revealed that because of the mass accuracy, the stability of calibration, and the reproducibility of fragmentation, the QqTOF mass spectrometer is an appropriate platform for establishment of a tandem-mass-spectral library. Three hundred and nineteen substances were used as reference samples to build the spectral library. For each reference compound, product-ion spectra were acquired at ten different collision-energy values between 5 eV and 50 eV. For identification of unknown compounds, a library search algorithm was developed. The closeness of matching between a measured product-ion spectrum and a spectrum stored in the library was characterized by a value called "match probability", which took into account the number of matched fragment ions, the number of fragment ions observed in the two spectra, and the sum of the intensity differences calculated for matching fragments. A large value for the match probability indicated a close match between the measured and the reference spectrum. A unique feature of the library search algorithm, an implemented spectral purification option, enables characterization of multi-contributor fragment-ion spectra. With the aid of this software feature, substances comprising only 1.0% of the total amount of binary mixtures were unequivocally assigned, in addition to the isobaric main contributors. The spectral library was successfully applied to the characterization of 39 forensic casework samples.
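
    The exact match-probability formula is not given in the record, so the following sketch only illustrates the ingredients it names: counting fragment ions matched within an m/z tolerance and accumulating their intensity differences, combined into a placeholder score. The spectra, tolerance and scoring formula are assumptions made for the example.

```python
# Hedged sketch of matching a measured product-ion spectrum against library
# spectra. The paper's "match probability" formula is not reproduced here;
# the score below is an illustrative placeholder combining the number of
# matched fragments and their intensity differences.
def match_score(measured, reference, mz_tol=0.02):
    """Spectra are lists of (m/z, relative intensity in [0, 1]) pairs."""
    matched, intensity_diff = 0, 0.0
    used = set()
    for mz_m, int_m in measured:
        for j, (mz_r, int_r) in enumerate(reference):
            if j not in used and abs(mz_m - mz_r) <= mz_tol:
                matched += 1
                intensity_diff += abs(int_m - int_r)
                used.add(j)
                break
    if matched == 0:
        return 0.0
    total = len(measured) + len(reference)
    # more matched fragments and smaller intensity differences -> larger score
    return (2 * matched / total) * (1 - intensity_diff / matched)

library = {
    "compound A": [(105.07, 1.0), (182.08, 0.4), (212.09, 0.2)],
    "compound B": [(91.05, 0.8), (119.05, 1.0), (167.07, 0.3)],
}
measured = [(105.08, 0.9), (182.07, 0.5), (250.10, 0.05)]
for name, ref in library.items():
    print(name, round(match_score(measured, ref), 3))
```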

  5. Bayesian methods in the search for MH370

    CERN Document Server

    Davey, Sam; Holland, Ian; Rutten, Mark; Williams, Jason

    2016-01-01

    This book demonstrates how nonlinear/non-Gaussian Bayesian time series estimation methods were used to produce a probability distribution of potential MH370 flight paths. It provides details of how the probabilistic models of aircraft flight dynamics, satellite communication system measurements, environmental effects and radar data were constructed and calibrated. The probability distribution was used to define the search zone in the southern Indian Ocean. The book describes particle-filter based numerical calculation of the aircraft flight-path probability distribution and validates the method using data from several of the involved aircraft’s previous flights. Finally it is shown how the Reunion Island flaperon debris find affects the search probability distribution.

  6. A method of searching LDAP directories using XQuery

    International Nuclear Information System (INIS)

    Hesselroth, Ted

    2011-01-01

    A method by which an LDAP directory can be searched using XQuery is described. The strategy behind the tool consists of four steps. First the XQuery script is examined and relevant XPath expressions are extracted, determined to be sufficient to define all information needed to perform the query. Then the XPath expressions are converted into their equivalent LDAP search filters by use of the published LDAP schema of the service, and search requests are made to the LDAP host. The search results are then merged and converted to an XML document that conforms to the hierarchy of the LDAP schema. Finally, the XQuery script is executed on the working XML document by conventional means. Examples are given of application of the tool in the Open Science Grid, which for discovery purposes operates an LDAP server that contains Glue schema-based information on site configuration and authorization policies. The XQuery scripts compactly replace hundreds of lines of custom python code that relied on the unix ldapsearch utility. Installation of the tool is available through the Virtual Data Toolkit.
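
    A toy illustration of the second step (converting XPath expressions into LDAP search filters) is given below; it handles only a restricted XPath form with attribute predicates and does not contact an LDAP host (a real search could then be issued with a client such as ldap3). The element and attribute names are invented examples, not the actual Glue schema mapping.

```python
# Toy sketch (not the actual tool): convert a restricted XPath expression with
# attribute predicates into an LDAP search filter string. Issuing the filter
# against the LDAP host, merging results and running the XQuery are not shown.
import re

def xpath_to_ldap_filter(xpath: str) -> str:
    """Convert e.g. "//GlueSite[@Name='MySite'][@Country='US']" into
    the LDAP filter "(&(objectClass=GlueSite)(Name=MySite)(Country=US))"."""
    m = re.fullmatch(r"//(\w+)((?:\[@\w+='[^']*'\])*)", xpath)
    if not m:
        raise ValueError("unsupported XPath form: " + xpath)
    object_class, predicates = m.group(1), m.group(2)
    clauses = ["(objectClass=%s)" % object_class]
    clauses += ["(%s=%s)" % (attr, value)
                for attr, value in re.findall(r"\[@(\w+)='([^']*)'\]", predicates)]
    return clauses[0] if len(clauses) == 1 else "(&" + "".join(clauses) + ")"

print(xpath_to_ldap_filter("//GlueSite[@Name='MySite'][@Country='US']"))
# -> (&(objectClass=GlueSite)(Name=MySite)(Country=US))
```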

  7. Cumulative query method for influenza surveillance using search engine data.

    Science.gov (United States)

    Seo, Dong-Woo; Jo, Min-Woo; Sohn, Chang Hwan; Shin, Soo-Yong; Lee, JaeHo; Yu, Maengsoo; Kim, Won Young; Lim, Kyoung Soo; Lee, Sang-Il

    2014-12-16

    Internet search queries have become an important data source in syndromic surveillance systems. However, there is currently no syndromic surveillance system using Internet search query data in South Korea. The objective of this study was to examine correlations between our cumulative query method and national influenza surveillance data. Our study was based on the local search engine, Daum (approximately 25% market share), and influenza-like illness (ILI) data from the Korea Centers for Disease Control and Prevention. A quota sampling survey was conducted with 200 participants to obtain popular queries. We divided the study period into two sets: Set 1 (the 2009/10 epidemiological year for development set 1 and 2010/11 for validation set 1) and Set 2 (2010/11 for development Set 2 and 2011/12 for validation Set 2). Pearson's correlation coefficients were calculated between the Daum data and the ILI data for the development set. We selected the combined queries for which the correlation coefficients were .7 or higher and listed them in descending order. Then, we created cumulative query methods, with n representing the number of combined queries accumulated in descending order of the correlation coefficient. In validation set 1, 13 cumulative query methods were applied, and 8 had higher correlation coefficients (min=.916, max=.943) than that of the highest single combined query. Further, 11 of 13 cumulative query methods had an r value of ≥.7, but 4 of 13 combined queries had an r value of ≥.7. In validation set 2, 8 of 15 cumulative query methods showed higher correlation coefficients (min=.975, max=.987) than that of the highest single combined query. All 15 cumulative query methods had an r value of ≥.7, but 6 of 15 combined queries had an r value of ≥.7. The cumulative query method showed relatively higher correlation with national influenza surveillance data than the combined queries in both the development and validation sets.
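
    The following sketch reproduces the cumulative query idea on made-up numbers: candidate query series are ranked by Pearson correlation with the ILI series, and the cumulative sum of the top-n series is correlated with ILI for increasing n. All series and query names are invented; only the r >= .7 cut-off is taken from the record.

```python
# Minimal sketch of the cumulative query method with synthetic data: rank
# candidate query series by Pearson correlation with ILI, then correlate the
# cumulative sum of the top-n series with ILI.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
ili = np.sin(np.linspace(0, 3 * np.pi, weeks)) + 1.2          # toy ILI curve
queries = {                                                    # toy query volumes
    "flu symptoms": ili * 100 + rng.normal(0, 10, weeks),
    "fever":        ili * 60 + rng.normal(0, 20, weeks),
    "cough":        ili * 30 + rng.normal(0, 30, weeks),
    "weather":      rng.normal(50, 10, weeks),                 # unrelated query
}

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

# keep queries with r >= 0.7 and sort them in descending order of r
ranked = sorted(((pearson(v, ili), name) for name, v in queries.items()), reverse=True)
selected = [(r, name) for r, name in ranked if r >= 0.7]

cumulative = np.zeros(weeks)
for n, (r, name) in enumerate(selected, start=1):
    cumulative += queries[name]
    print("cumulative query method n=%d (adding %s): r=%.3f"
          % (n, name, pearson(cumulative, ili)))
```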

  8. Comprehensive evaluation and optimization of amplicon library preparation methods for high-throughput antibody sequencing.

    Science.gov (United States)

    Menzel, Ulrike; Greiff, Victor; Khan, Tarik A; Haessler, Ulrike; Hellmann, Ina; Friedensohn, Simon; Cook, Skylar C; Pogson, Mark; Reddy, Sai T

    2014-01-01

    High-throughput sequencing (HTS) of antibody repertoire libraries has become a powerful tool in the field of systems immunology. However, numerous sources of bias in HTS workflows may affect the obtained antibody repertoire data. A crucial step in antibody library preparation is the addition of short platform-specific nucleotide adapter sequences. As of yet, the impact of the method of adapter addition on experimental library preparation and the resulting antibody repertoire HTS datasets has not been thoroughly investigated. Therefore, we compared three standard library preparation methods by performing Illumina HTS on antibody variable heavy genes from murine antibody-secreting cells. Clonal overlap and rank statistics demonstrated that the investigated methods produced equivalent HTS datasets. PCR-based methods were experimentally superior to ligation with respect to speed, efficiency, and practicality. Finally, using a two-step PCR based method we established a protocol for antibody repertoire library generation, beginning from inputs as low as 1 ng of total RNA. In summary, this study represents a major advance towards a standardized experimental framework for antibody HTS, thus opening up the potential for systems-based, cross-experiment meta-analyses of antibody repertoires.

  9. Research Methods and Techniques in Spanish Library and Information Science Journals (2012-2014)

    Science.gov (United States)

    Ferran-Ferrer, Núria; Guallar, Javier; Abadal, Ernest; Server, Adan

    2017-01-01

    Introduction. This study examines the research methods and techniques used in Spanish journals of library and information science, the topics addressed by papers in these journals and their authorship affiliation. Method. The researchers selected 580 papers published in the top seven Spanish LIS journals indexed in Web of Science and Scopus and…

  10. A method to incorporate interstitial components into the TPS gynecologic rigid applicator library

    Directory of Open Access Journals (Sweden)

    Antonio Otal

    2017-01-01

    Full Text Available Purpose: T2 magnetic resonance imaging (MRI) is recommended as the imaging modality for image-guided brachytherapy. In locally advanced cervical carcinoma, combined endocavitary and interstitial applicators (Vienna or Utrecht) are appropriate. To cover extensive disease, the Template Benidorm (TB) was developed. Treatment planning system applicator libraries are currently unavailable for the Utrecht applicator or the TB. The purpose of this work is to develop an applicator library for both applicators. Material and methods: The library developed in this work has been used in the Oncentra Brachytherapy TPS, version 4.3.0, which has a brachytherapy module that includes a library of rigid applicators. To add the needles of the Utrecht applicator and to model the TB, we used FreeCAD and MeshLab. The reconstruction process was based on the points that the rigid section and the interstitial part have in common. This, together with the free length, allowed us to ascertain the position of the tip. Results: In the case of the Utrecht applicator, one of the sources of uncertainty in the reconstruction was determining the distance of the needle tip from the ovoid. In the case of the TB, the large number of needles involved made their identification time consuming. The developed library resolved both issues. Conclusions: The developed library for the Utrecht applicator and the TB is feasible and efficient, improving accuracy. It allows all the required treatment planning to proceed using just a T2 MRI sequence. The additional use of specific freely available software applications makes it possible to add this information to the already existing library of the Oncentra Brachytherapy TPS. Specific details not included in this manuscript are available upon request. This library is also currently being implemented in the Sagiplan v 2.0 TPS.

  11. Information Retrieval Methods in Libraries and Information Centers ...

    African Journals Online (AJOL)

    The volumes of information created, generated and stored are so immense that, without adequate knowledge of information retrieval methods, the retrieval process would be cumbersome and frustrating for an information user. Studies have further revealed that information retrieval methods are essential in information centers ...

  12. 108 Information Retrieval Methods in Libraries and Information ...

    African Journals Online (AJOL)


    without adequate knowledge of information retrieval methods, the retrieval process for an ... discusses the concept of information retrieval, the various information ... Other advantages of automatic indexing are the maintenance of consistency.

  13. A Fast Radio Burst Search Method for VLBI Observation

    Science.gov (United States)

    Liu, Lei; Tong, Fengxian; Zheng, Weimin; Zhang, Juan; Tong, Li

    2018-02-01

    We introduce the cross-spectrum-based fast radio burst (FRB) search method for Very Long Baseline Interferometer (VLBI) observation. This method optimizes the fringe fitting scheme in geodetic VLBI data post-processing, which fully utilizes the cross-spectrum fringe phase information and therefore maximizes the power of single-pulse signals. Working with cross-spectrum greatly reduces the effect of radio frequency interference compared with using auto-power spectrum. Single-pulse detection confidence increases by cross-identifying detections from multiple baselines. By combining the power of multiple baselines, we may improve the detection sensitivity. Our method is similar to that of coherent beam forming, but without the computational expense to form a great number of beams to cover the whole field of view of our telescopes. The data processing pipeline designed for this method is easy to implement and parallelize, which can be deployed in various kinds of VLBI observations. In particular, we point out that VGOS observations are very suitable for FRB search.

  14. An extensible and successful method of identifying collaborators for National Library of Medicine informationist projects.

    Science.gov (United States)

    Williams, Jeff D; Rambo, Neil H

    2015-07-01

    The New York University (NYU) Health Sciences Library used a new method to arrange in-depth discussions with basic science researchers. The objective was to identify collaborators for a new National Library of Medicine administrative supplement. The research took place at the NYU Health Sciences Library. Using the National Institutes of Health (NIH) RePORTER, forty-four researchers were identified and later contacted through individualized emails. Nine researchers responded to the email followed by six in-person or phone discussions. At the conclusion of this process, two researchers submitted applications for supplemental funding, and both of these applications were successful. This method confirmed these users could benefit from the skills and knowledge of health sciences librarians, but they are largely unaware of this.

  15. IMPROVING NEAREST NEIGHBOUR SEARCH IN 3D SPATIAL ACCESS METHOD

    Directory of Open Access Journals (Sweden)

    A. Suhaibaha

    2016-10-01

    Full Text Available Nearest Neighbour (NN) search is one of the important queries and analyses for spatial applications. In normal practice, a spatial access method structure is used during Nearest Neighbour query execution to retrieve information from the database. However, most spatial access method structures still face unresolved issues such as overlapping among nodes and repetitive data entries. This situation leads to excessive Input/Output (I/O) operations, which is inefficient for data retrieval, and it becomes even more critical when dealing with 3D data. The size of 3D data is usually large due to its detailed geometry and other attached information. In this research, a clustered 3D hierarchical structure is introduced as a 3D spatial access method structure. The structure is expected to improve the retrieval of Nearest Neighbour information for 3D objects. Several tests were performed answering Single Nearest Neighbour searches and k Nearest Neighbour (kNN) searches. The tests indicate that the clustered hierarchical structure is efficient in handling Nearest Neighbour queries compared with its competitor. From the results, the clustered hierarchical structure reduced repetitive data entries and the number of accessed pages. The proposed structure also produced minimal Input/Output operations, and its query response time outperformed that of the competitor. As a future outlook of this research, several possible applications are discussed and summarized.
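
    For comparison purposes only, the snippet below answers the single nearest neighbour and kNN queries mentioned above with a standard k-d tree from SciPy over random 3D points; it is a baseline illustration, not the clustered 3D hierarchical structure proposed in the paper.

```python
# Baseline sketch (not the proposed clustered structure): single NN and kNN
# queries over 3D points using a standard k-d tree.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
points = rng.uniform(0, 100, size=(10_000, 3))   # toy 3D object centroids
tree = cKDTree(points)

query = np.array([50.0, 50.0, 50.0])
dist1, idx1 = tree.query(query, k=1)             # single nearest neighbour
dists, idxs = tree.query(query, k=5)             # k nearest neighbours (k=5)

print("nearest neighbour:", idx1, "at distance", round(float(dist1), 3))
print("5-NN indices:", idxs.tolist())
```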

  16. New procedure for criticality search using coarse mesh nodal methods

    International Nuclear Information System (INIS)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S.

    2011-01-01

    The primary goal of coarse mesh nodal methods is to calculate the neutron flux inside the reactor core. Many computer systems use a specific form of calculation, called the nodal method. In classical computing systems, the criticality search is carried out only after complete convergence of the iterative process that calculates the neutron flux. In this paper, we propose a new method in which the criticality condition is updated throughout the iterative calculation of the neutron flux. As a result, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)

  17. New procedure for criticality search using coarse mesh nodal methods

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Wanderson F.; Silva, Fernando C. da; Martinez, Aquilino S., E-mail: wneto@con.ufrj.b, E-mail: fernando@con.ufrj.b, E-mail: Aquilino@lmp.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    The primary goal of coarse mesh nodal methods is to calculate the neutron flux inside the reactor core. Many computer systems use a specific form of calculation, called the nodal method. In classical computing systems, the criticality search is carried out only after complete convergence of the iterative process that calculates the neutron flux. In this paper, we propose a new method in which the criticality condition is updated throughout the iterative calculation of the neutron flux. As a result, the processing time for calculating the neutron flux was reduced by half compared with the procedure developed by the Nuclear Engineering Program of COPPE/UFRJ (PEN/COPPE/UFRJ). (author)
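
    A toy illustration of the idea, under strong simplifying assumptions (one energy group, a 1-D slab discretized by finite differences rather than a nodal method, made-up cross sections), is shown below: the criticality eigenvalue k_eff is updated at every outer iteration of the flux calculation instead of waiting for the flux to converge first.

```python
# Toy illustration (one-group, 1-D slab finite differences, not a nodal code)
# of updating k_eff inside every outer flux iteration.
import numpy as np

D, sig_a, nu_sig_f = 1.0, 0.07, 0.08       # cm, 1/cm, 1/cm (made-up values)
L, n = 100.0, 200                          # slab width and number of cells
h = L / n

# finite-difference diffusion operator with zero-flux boundaries
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2 * D / h**2 + sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2

phi = np.ones(n)
k = 1.0
for it in range(500):
    source = nu_sig_f * phi / k
    phi_new = np.linalg.solve(A, source)
    # update the criticality eigenvalue immediately, inside the flux iteration
    k_new = k * np.sum(nu_sig_f * phi_new) / np.sum(nu_sig_f * phi)
    if abs(k_new - k) < 1e-7:
        k = k_new
        break
    phi, k = phi_new / np.max(phi_new), k_new

print("k_eff ~", round(k, 5), "after", it + 1, "iterations")
```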

  18. Methods for Measuring Productivity in Libraries and Information Centres

    OpenAIRE

    Mohammad Alaaei

    2009-01-01

      Within Information centers, productivity is the result of optimal and effective use of information resources, service quality improvement, increased user satisfaction, pleasantness of working environment, increased motivation and enthusiasm of staff to work better. All contribute to the growth and development of information centers. Thus these centers would need to be familiar with methods employed in productivity measurement. Productivity is one of the criteria for evaluating system perfor...

  19. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    Science.gov (United States)

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on their structure-derived structural and physicochemical properties. Increasing attention has been directed at these methods because of their capability in predicting compounds of diverse structures and complex structure-activity relationships without requiring the knowledge of target 3D structure. This article reviews current progresses in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility to improve the performance of machine learning methods in screening large libraries is discussed.

  20. PMD2HD--a web tool aligning a PubMed search results page with the local German Cancer Research Centre library collection.

    Science.gov (United States)

    Bohne-Lang, Andreas; Lang, Elke; Taube, Anke

    2005-06-27

    Web-based searching is the accepted contemporary mode of retrieving relevant literature, and retrieving as many full text articles as possible is a typical prerequisite for research success. In most cases only a proportion of references will be directly accessible as digital reprints through displayed links. A large number of references, however, have to be verified in library catalogues and, depending on their availability, are accessible as print holdings or by interlibrary loan request. The problem of verifying local print holdings from an initial retrieval set of citations can be solved using Z39.50, an ANSI protocol for interactively querying library information systems. Numerous systems include Z39.50 interfaces and therefore can process Z39.50 interactive requests. However, the programmed query interaction command structure is non-intuitive and inaccessible to the average biomedical researcher. For the typical user, it is necessary to implement the protocol within a tool that hides and handles Z39.50 syntax, presenting a comfortable user interface. PMD2HD is a web tool implementing Z39.50 to provide an appropriately functional and usable interface to integrate into the typical workflow that follows an initial PubMed literature search, providing users with an immediate asset to assist in the most tedious step in literature retrieval, checking for subscription holdings against a local online catalogue. PMD2HD can facilitate literature access considerably with respect to the time and cost of manual comparisons of search results with local catalogue holdings. The example presented in this article is related to the library system and collections of the German Cancer Research Centre. However, the PMD2HD software architecture and use of common Z39.50 protocol commands allow for transfer to a broad range of scientific libraries using Z39.50-compatible library information systems.

  1. Methods for Measuring Productivity in Libraries and Information Centres

    Directory of Open Access Journals (Sweden)

    Mohammad Alaaei

    2009-04-01

    Full Text Available Within information centers, productivity is the result of the optimal and effective use of information resources, service quality improvement, increased user satisfaction, a pleasant working environment, and increased motivation and enthusiasm of staff to work better. All of these contribute to the growth and development of information centers, so these centers need to be familiar with the methods employed in productivity measurement. Productivity is one of the criteria for evaluating system performance. In the past decades, particular emphasis has been placed on the measurement and improvement of human resources, creativity, innovation and expert analysis. Reflection and effort directed towards identifying problems and issues, and finding new means of managing resources more usefully and effectively, lie at the very heart of productivity. Simply put, productivity is the relationship between system outputs and the elements used to produce those outputs. The causality between the variables and factors impacting productivity is very complex. In information centers, given the large number of elements involved, it seems necessary to increase efficiency and productivity.

  2. Innovative methods of knowledge transfer by multimedia library

    Science.gov (United States)

    Goanta, A. M.

    2016-08-01

    The present situation of teaching and learning new knowledge in the classroom varies considerably depending on the specific topics concerned. If we analyze the manifold ways of teaching and learning at university level, we notice a very good combination of classical and modern methods. The first category includes classic chalk-and-blackboard teaching, together with the equally classical learning based on paper reference material. The second category includes books published as PDF or PPT [1], which are distributed on CD/DVD media. Since 2006 the author has been concerned with the transfer of information and knowledge through video files such as AVI, FLV or MPEG, using various means of transfer, starting with free ones (via the Internet) and continuing with those involving minimal costs, i.e. CD/DVD media. Encouraged by the students' interest in this kind of teaching material, as proved by monitoring [2] the site http://www.cursuriuniversitarebraila.ugal.ro, the author has managed to publish, with an ISBN, the first video book in Romania, whose nonconformist content is organized not by page numbers but by the hour and minute marks of the recording at which each chapter was made.

  3. Potential Fit to the Department Outweighs Professional Criteria in the Hiring Process in Academic Libraries. A Review of: Wang, Z. & Guarria, C. (2010). Unlocking the mystery: What academic library search committees look for in filling faculty positions. Technical Services Quarterly, 27, 66–86.

    Directory of Open Access Journals (Sweden)

    Yvonne Hultman Özek

    2010-12-01

    Full Text Available Objective – To identify key factors affecting the probability of obtaining an interview and being hired for an academic library position. Design – An online survey was distributed via the following electronic mail lists: ACRL, LITA, COLLIB, METRO, ACQNET, COLLDV, ULS, EQUILIBR, and ALF. The questionnaire was posted via StudentVoice, an assessment survey provider. Setting – Academic libraries in the United States. Subjects – The 242 academic library search committees that responded to the online survey. Methods – The authors reviewed the literature on the hiring process in academic libraries. A questionnaire for an online survey was developed. The instrument contained closed questions with the option to add comments. The survey was available for completion June 3 to June 15, 2008. Main Results – Skills and performance of job requirements were rated as the most important criteria by 90% of the 242 academic library search committees that responded to the survey. Previous academic library experience was rated as essential by 38%. The findings also showed that committees are positive towards hiring recent graduates, and over 90% check references. In addition, 75% of the respondents emphasized the importance of skills in bibliographic instruction (BI), particularly when choosing staff for public services. Furthermore, 47.52% of the respondents answering the corresponding question indicated that a relevant cover letter, correct spelling, and declaration of the candidate's activities over all time periods are crucial aspects. Those in favour of using a weighted scoring system, 37% of 218 respondents, felt that it served as a tool to level the playing field for gathering accurate information, and that it also helped to improve the efficiency as well as the speed of the hiring process. However, 62.84% of the respondents commented that a weighted scoring system is too prescribed, and some universities did not allow the use of this method. Of 218

  4. Three looks at users: a comparison of methods for studying digital library use. Keywords: user studies, digital libraries, digital music libraries, music, information use, information science, contextual inquiry, contextual design, user research, questionnaires, log file analysis

    Directory of Open Access Journals (Sweden)

    Mark Notess

    2004-01-01

    Full Text Available Compares three user research methods of studying real-world digital library usage within the context of the Variations and Variations2 digital music libraries at Indiana University. After a brief description of both digital libraries, each method is described and illustrated with findings from the studies. User satisfaction questionnaires were used in two studies, one of Variations (n=30) and the other of Variations2 (n=12). Second, session activity log files were examined for 175 Variations2 sessions using both quantitative and qualitative methods. The third method, contextual inquiry, is illustrated with results from field observations of four voice students' information usage patterns. The three methods are compared in terms of expertise required; time required to set up, conduct, and analyse resulting data; and the benefits derived. Further benefits are achieved with a mixed-methods approach, combining the strengths of the methods to answer questions lingering as a result of other methods.

  5. Comparison of Iranian National Medical Library with digital libraries of selected countries.

    Science.gov (United States)

    Zare-Farashbandi, Firoozeh; Najafi, Nayere Sadat Soleimanzade; Atashpour, Bahare

    2014-01-01

    The important role of information and communication technologies and their influence on methods of storing and retrieving information in digital libraries has not only changed the meaning of classic library activities but has also created great changes in their services. However, it seems that not all digital libraries provide their users with similar services and only some of them are successful in fulfilling their role in the digital environment. The Iranian National Medical Library is among those that appear to fall short compared with other digital libraries around the world. By knowing the different services provided by digital libraries worldwide, one can evaluate the services provided by the Iranian National Medical Library. The goal of this study is a comparison between the Iranian National Medical Library and the digital libraries of selected countries. This is an applied study that uses a descriptive survey method. The statistical population is the digital libraries around the world that were actively providing library services between October and December 2011 and were selected by using the keyword "Digital Library" in the Google search engine. The data-gathering tool was direct access to the websites of these digital libraries. The statistical analysis is descriptive, and Excel software was used for data analysis and plotting of the charts. The findings showed that among the 33 digital libraries investigated worldwide, most provided Browse (87.87%), Search (84.84%), and Electronic information retrieval (57.57%) services. "Help" had the highest frequency among public services (48.48%) and "Interlibrary Loan" among traditional services (27.27%). The Iranian National Medical Library provides more digital services than the other libraries but fewer classic and public services, offering less than half of the possible public services. Other than the Iranian National Medical Library, among the 33 libraries investigated, the leaders in providing different services are Library of

  6. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  7. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
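
    The sketch below shows the Harmony Search loop itself, with HMCR, PAR and BW playing the roles described above, applied to a simple continuous test function rather than to compliance-based topology optimization; all parameter values are illustrative assumptions.

```python
# Compact sketch of the Harmony Search metaheuristic on a toy objective.
import random

def harmony_search(objective, bounds, hmcr=0.9, par=0.3, bw=0.05,
                   memory_size=20, iterations=5000, seed=1):
    random.seed(seed)
    # initialise harmony memory with random solutions
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(memory_size)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                      # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:                   # pitch adjustment
                    value += random.uniform(-bw, bw) * (hi - lo)
            else:                                           # random selection
                value = random.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        worst = max(range(memory_size), key=lambda i: scores[i])
        s = objective(new)
        if s < scores[worst]:                               # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(memory_size), key=lambda i: scores[i])
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = harmony_search(sphere, bounds=[(-5, 5)] * 3)
print("best:", [round(v, 4) for v in best_x], "f =", round(best_f, 6))
```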

  8. Evaluation of library preparation methods for Illumina next generation sequencing of small amounts of DNA from foodborne parasites.

    Science.gov (United States)

    Nascimento, Fernanda S; Wei-Pridgeon, Yuping; Arrowood, Michael J; Moss, Delynn; da Silva, Alexandre J; Talundzic, Eldin; Qvarnstrom, Yvonne

    2016-11-01

    Illumina library preparation methods for ultra-low input amounts were compared using genomic DNA from two foodborne parasites (Angiostrongylus cantonensis and Cyclospora cayetanensis) as examples. The Ovation Ultralow method resulted in libraries with the highest concentration and produced quality sequencing data, even when the input DNA was in the picogram range.

  9. Development of Pulsar Detection Methods for a Galactic Center Search

    Science.gov (United States)

    Thornton, Stephen; Wharton, Robert; Cordes, James; Chatterjee, Shami

    2018-01-01

    Finding pulsars within the inner parsec of the galactic center would be incredibly beneficial: for pulsars sufficiently close to Sagittarius A*, extremely precise tests of general relativity in the strong field regime could be performed through measurement of post-Keplerian parameters. Binary pulsar systems with sufficiently short orbital periods could provide the same laboratories with which to test existing theories. Fast and efficient methods are needed to parse large sets of time-domain data from different telescopes to search for periodicity in signals and differentiate radio frequency interference (RFI) from pulsar signals. Here we demonstrate several techniques to reduce red noise (low-frequency interference), generate signals from pulsars in binary orbits, and create plots that allow for fast detection of both RFI and pulsars.
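
    As a conceptual illustration of two of the steps mentioned (red-noise reduction and periodicity detection), the sketch below suppresses a slow random-walk trend with a running median and then searches for the pulse period by epoch folding over trial periods; the data, pulse parameters and window sizes are synthetic assumptions, not the group's actual pipeline.

```python
# Synthetic sketch: running-median red-noise suppression followed by an
# epoch-folding periodicity search over trial periods.
import numpy as np

rng = np.random.default_rng(11)
n, true_period, pulse_amp = 8192, 128, 3.0
signal = np.zeros(n)
signal[::true_period] = pulse_amp                 # periodic pulses
red_noise = np.cumsum(rng.normal(0, 0.05, n))     # slow random-walk drift
data = signal + red_noise + rng.normal(0, 1.0, n)

# red-noise suppression: subtract a running median (257-sample window)
win = 257
padded = np.pad(data, win // 2, mode="edge")
running_median = np.array([np.median(padded[i:i + win]) for i in range(n)])
white = data - running_median

def folded_contrast(series, period):
    """Peak-to-mean contrast of the profile folded at a trial period."""
    bins = np.zeros(period)
    counts = np.zeros(period)
    idx = np.arange(series.size) % period
    np.add.at(bins, idx, series)
    np.add.at(counts, idx, 1)
    profile = bins / counts
    return (profile.max() - profile.mean()) / (profile.std() + 1e-12)

trials = range(100, 161)
scores = [folded_contrast(white, p) for p in trials]
best = list(trials)[int(np.argmax(scores))]
print("detected period:", best, "samples (true period:", true_period, ")")
```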

  10. A Robust and Versatile Method of Combinatorial Chemical Synthesis of Gene Libraries via Hierarchical Assembly of Partially Randomized Modules

    Science.gov (United States)

    Popova, Blagovesta; Schubert, Steffen; Bulla, Ingo; Buchwald, Daniela; Kramer, Wilfried

    2015-01-01

    A major challenge in gene library generation is to guarantee a large functional size and diversity that significantly increases the chances of selecting different functional protein variants. The use of trinucleotides mixtures for controlled randomization results in superior library diversity and offers the ability to specify the type and distribution of the amino acids at each position. Here we describe the generation of a high diversity gene library using tHisF of the hyperthermophile Thermotoga maritima as a scaffold. Combining various rational criteria with contingency, we targeted 26 selected codons of the thisF gene sequence for randomization at a controlled level. We have developed a novel method of creating full-length gene libraries by combinatorial assembly of smaller sub-libraries. Full-length libraries of high diversity can easily be assembled on demand from smaller and much less diverse sub-libraries, which circumvent the notoriously troublesome long-term archivation and repeated proliferation of high diversity ensembles of phages or plasmids. We developed a generally applicable software tool for sequence analysis of mutated gene sequences that provides efficient assistance for analysis of library diversity. Finally, practical utility of the library was demonstrated in principle by assessment of the conformational stability of library members and isolating protein variants with HisF activity from it. Our approach integrates a number of features of nucleic acids synthetic chemistry, biochemistry and molecular genetics to a coherent, flexible and robust method of combinatorial gene synthesis. PMID:26355961

  11. Assessment of statistical methods used in library-based approaches to microbial source tracking.

    Science.gov (United States)

    Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D

    2003-12-01

    Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.

  12. Evaluation of a new method for librarian-mediated literature searches for systematic reviews

    NARCIS (Netherlands)

    W.M. Bramer (Wichor); Rethlefsen, M.L. (Melissa L.); F. Mast (Frans); J. Kleijnen (Jos)

    2017-01-01

    textabstractObjective: To evaluate and validate the time of completion and results of a new method of searching for systematic reviews, the exhaustive search method (ESM), using a pragmatic comparison. Methods: Single-line search strategies were prepared in a text document. Term completeness was

  13. Methods of experimental settlement of contradicting data in evaluated nuclear data libraries

    Directory of Open Access Journals (Sweden)

    V. A. Libman

    2016-12-01

    Full Text Available The latest versions of the evaluated nuclear data libraries (ENDLs) contain contradictory data on neutron cross sections. To resolve these contradictions we propose a method of experimental verification. The method is based on the use of filtered neutron beams and the subsequent measurement of appropriate samples. The basic idea is to tailor a suitable filtered neutron beam so that the differences between the neutron cross sections given by different ENDLs become measurable. The method is demonstrated with the example of cerium, whose total neutron cross section differs significantly among the latest versions of four ENDLs.

  14. In search of new methods. Qigong in stuttering therapy

    Directory of Open Access Journals (Sweden)

    Paweł Półrola

    2013-10-01

    Full Text Available Introduction: Even though stuttering is probably as old a phenomenon as human speech itself, stuttering therapy is still a challenge for the therapist and requires a constant search for new methods. Qigong may prove to be one of them. Aim of the research: The paper presents the results of an experimental investigation evaluating the usefulness of qigong practice in stuttering therapy. Material and methods: Two groups of stuttering adults underwent 6-month therapy. In group I (the experimental group, n = 11) the therapy consisted of speech fluency training, psychotherapy and qigong practice. In group II (the control group, n = 12) it included speech fluency training and psychotherapy. In both groups, 2-hour sessions of speech fluency training and psychotherapy were conducted twice a week. Two-hour qigong sessions took place once a week. Results: After 6 months the therapy results were compared with regard to the basic stuttering parameters, such as the degree of speech disfluency, the level of logophobia and speech disfluency symptoms. Improvement was observed in both groups, with the beneficial effects being more prominent in the qigong-practising group. Conclusions: Qigong exercises used in the therapy of stuttering people, along with speech fluency training and psychotherapy, give beneficial effects.

  15. Library fingerprints: a novel approach to the screening of virtual libraries.

    Science.gov (United States)

    Klon, Anthony E; Diller, David J

    2007-01-01

    We propose a novel method to prioritize libraries for combinatorial synthesis and high-throughput screening that assesses the viability of a particular library on the basis of the aggregate physical-chemical properties of the compounds using a naïve Bayesian classifier. This approach prioritizes collections of related compounds according to the aggregate values of their physical-chemical parameters in contrast to single-compound screening. The method is also shown to be useful in screening existing noncombinatorial libraries when the compounds in these libraries have been previously clustered according to their molecular graphs. We show that the method used here is comparable or superior to the single-compound virtual screening of combinatorial libraries and noncombinatorial libraries and is superior to the pairwise Tanimoto similarity searching of a collection of combinatorial libraries.
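
    A hedged sketch of the idea follows: each library is reduced to an aggregate descriptor vector (here simply the mean of per-compound descriptors), a naive Bayesian classifier is trained on past libraries labelled by screening outcome, and candidate libraries are ranked by predicted probability of success. The descriptors, labels and the choice of Gaussian naive Bayes are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: score whole libraries with a naive Bayes classifier trained on
# aggregate (mean) physicochemical descriptors; all values are made up.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def aggregate(library_descriptors):
    """Aggregate per-compound descriptors (MW, logP, HBD, ...) into one vector."""
    return np.asarray(library_descriptors).mean(axis=0)

# training data: aggregate vectors of past libraries, labelled by whether the
# library yielded screening hits (1) or not (0)
good = [aggregate(rng.normal([350, 2.5, 2], [40, 0.5, 1], size=(50, 3))) for _ in range(30)]
bad  = [aggregate(rng.normal([520, 4.5, 4], [60, 0.8, 1], size=(50, 3))) for _ in range(30)]
X = np.vstack(good + bad)
y = np.array([1] * len(good) + [0] * len(bad))

model = GaussianNB().fit(X, y)

# prioritise two candidate virtual libraries by predicted probability of success
candidates = {
    "library A": aggregate(rng.normal([360, 2.6, 2], [40, 0.5, 1], size=(100, 3))),
    "library B": aggregate(rng.normal([500, 4.2, 4], [60, 0.8, 1], size=(100, 3))),
}
for name, vec in candidates.items():
    print(name, "P(success) =", round(model.predict_proba([vec])[0, 1], 3))
```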

  16. [Progress in the spectral library based protein identification strategy].

    Science.gov (United States)

    Yu, Derui; Ma, Jie; Xie, Zengyan; Bai, Mingze; Zhu, Yunping; Shu, Kunxian

    2018-04-25

    Mass spectrometry (MS) data have grown exponentially as MS-based proteomics has developed rapidly. It is a great challenge to develop quick, accurate and reproducible methods to identify peptides and proteins. Nowadays, spectral library searching has become a mature strategy for tandem-mass-spectrum-based protein identification in proteomics: experimental spectra are searched against a collection of confidently identified MS/MS spectra that have been observed previously, making full use of peak abundances, peaks from non-canonical fragment ions, and other features. This review provides a comprehensive overview of the implementation of the spectral library search strategy and of its two key steps, spectral library construction and spectral library searching, and discusses the progress and challenges of the strategy.

  17. Illuminating choices for library prep: a comparison of library preparation methods for whole genome sequencing of Cryptococcus neoformans using Illumina HiSeq.

    Directory of Open Access Journals (Sweden)

    Johanna Rhodes

    Full Text Available The industry of next-generation sequencing is constantly evolving, with novel library preparation methods and new sequencing machines being released by the major sequencing technology companies annually. The Illumina TruSeq v2 library preparation method was the most widely used kit and the market leader; however, it has now been discontinued, and in 2013 was replaced by the TruSeq Nano and TruSeq PCR-free methods, leaving a gap in knowledge regarding which is the most appropriate library preparation method to use. Here, we used isolates from the pathogenic fungi Cryptococcus neoformans var. grubii and sequenced them using the existing TruSeq DNA v2 kit (Illumina), along with two new kits: the TruSeq Nano DNA kit (Illumina) and the NEBNext Ultra DNA kit (New England Biolabs) to provide a comparison. Compared to the original TruSeq DNA v2 kit, both newer kits gave equivalent or better sequencing data, with increased coverage. When comparing the two newer kits, we found little difference in cost and workflow, with the NEBNext Ultra both slightly cheaper and faster than the TruSeq Nano. However, the quality of data generated using the TruSeq Nano DNA kit was superior due to higher coverage at regions of low GC content, and more SNPs identified. Researchers should therefore evaluate their resources and the type of application (and hence data quality) being considered when ultimately deciding on which library prep method to use.

  18. Illuminating choices for library prep: a comparison of library preparation methods for whole genome sequencing of Cryptococcus neoformans using Illumina HiSeq.

    Science.gov (United States)

    Rhodes, Johanna; Beale, Mathew A; Fisher, Matthew C

    2014-01-01

    The industry of next-generation sequencing is constantly evolving, with novel library preparation methods and new sequencing machines being released by the major sequencing technology companies annually. The Illumina TruSeq v2 library preparation method was the most widely used kit and the market leader; however, it has now been discontinued, and in 2013 was replaced by the TruSeq Nano and TruSeq PCR-free methods, leaving a gap in knowledge regarding which is the most appropriate library preparation method to use. Here, we used isolates from the pathogenic fungi Cryptococcus neoformans var. grubii and sequenced them using the existing TruSeq DNA v2 kit (Illumina), along with two new kits: the TruSeq Nano DNA kit (Illumina) and the NEBNext Ultra DNA kit (New England Biolabs) to provide a comparison. Compared to the original TruSeq DNA v2 kit, both newer kits gave equivalent or better sequencing data, with increased coverage. When comparing the two newer kits, we found little difference in cost and workflow, with the NEBNext Ultra both slightly cheaper and faster than the TruSeq Nano. However, the quality of data generated using the TruSeq Nano DNA kit was superior due to higher coverage at regions of low GC content, and more SNPs identified. Researchers should therefore evaluate their resources and the type of application (and hence data quality) being considered when ultimately deciding on which library prep method to use.

  19. Library holdings for Bermuda: Search for Deep Water Caves 2009 on the R/V Endurance between 20090905 and 20090930

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Library Catalog may include: Data Management Plans, Cruise Plans, Cruise Summary Reports, Scientific "Quick Look Reports", Video Annotation Logs, Image Collections,...

  20. Next-generation sequencing library preparation method for identification of RNA viruses on the Ion Torrent Sequencing Platform.

    Science.gov (United States)

    Chen, Guiqian; Qiu, Yuan; Zhuang, Qingye; Wang, Suchun; Wang, Tong; Chen, Jiming; Wang, Kaicheng

    2018-05-09

    Next generation sequencing (NGS) is a powerful tool for the characterization, discovery, and molecular identification of RNA viruses. Multiple NGS library preparation methods have been published for strand-specific RNA-seq, but some are not suitable for identifying and characterizing RNA viruses. In this study, we report an NGS library preparation method to identify RNA viruses using the Ion Torrent PGM platform. The NGS sequencing adapters were directly inserted into the sequencing library through reverse transcription and polymerase chain reaction, without fragmentation and ligation of nucleic acids. The results show that this method is simple to perform and able to identify multiple species of RNA viruses in clinical samples.

  1. The "SAFARI" Method of Collection Study and Cooperative Acquisition for a Multi-Library Cooperative. A Manual of Procedures.

    Science.gov (United States)

    Sinclair, Dorothy

    This document examines the importance and difficulties in resource sharing and acquisition by libraries and introduces the procedures of the Site Appraisal for Area Resources Inventory (SAFARI) system as a method of comparative evaluation of subject collections among a group of libraries. Resource, or collection, sharing offers specific…

  2. Libraries for spectrum identification: Method of normalized coordinates versus linear correlation

    International Nuclear Information System (INIS)

    Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.

    2008-01-01

    In this work, a simple solution based directly on linear algebra is proposed in order to obtain the relation between a spectrum and a spectral base. This solution rests on the algebraic determination of the coordinates of an unknown spectrum with respect to a spectral library base. The identification capacities of this algebraic method and of the linear correlation method are compared using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method makes it possible to detect quantitatively the existence of a mixture of several substances in a sample and, consequently, to take impurities into account when improving the identification.
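
    As a hedged illustration of the contrast described above (not the authors' exact algebra), the sketch below expresses an unknown spectrum as least-squares coordinates with respect to a library of reference spectra and compares this with plain linear (Pearson) correlation against each reference; all names and the toy spectra are assumptions for demonstration.

    ```python
    # Minimal sketch: coordinates of an unknown spectrum in a spectral library base
    # (least squares) versus linear correlation against each library entry.
    import numpy as np

    def coordinates_in_library(library, unknown):
        """Least-squares coordinates of `unknown` w.r.t. the columns of `library`."""
        coords, *_ = np.linalg.lstsq(library, unknown, rcond=None)
        return coords

    def correlation_scores(library, unknown):
        """Pearson correlation of `unknown` against each library spectrum."""
        return np.array([np.corrcoef(library[:, j], unknown)[0, 1]
                         for j in range(library.shape[1])])

    # Toy example: two reference spectra and an unknown that is a 70/30 mixture.
    rng = np.random.default_rng(0)
    ref_a, ref_b = rng.random(200), rng.random(200)
    library = np.column_stack([ref_a, ref_b])
    unknown = 0.7 * ref_a + 0.3 * ref_b

    print(coordinates_in_library(library, unknown))  # ~[0.7, 0.3]: the mixture is quantified
    print(correlation_scores(library, unknown))      # correlations alone do not give the ratio
    ```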

  3. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    OpenAIRE

    Zheluk, Andrey; Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's a...

  4. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    Science.gov (United States)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems, owing to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches in given conditions. Theoretically, the proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.

  5. Quantification of massively parallel sequencing libraries - a comparative study of eight methods

    DEFF Research Database (Denmark)

    Hussing, Christian; Kampmann, Marie-Louise; Mogensen, Helle Smidt

    2018-01-01

    Quantification of massively parallel sequencing libraries is important for acquisition of monoclonal beads or clusters prior to clonal amplification and to avoid large variations in library coverage when multiple samples are included in one sequencing analysis. No gold standard for quantification...... estimates followed by Qubit and electrophoresis-based instruments (Bioanalyzer, TapeStation, GX Touch, and Fragment Analyzer), while SYBR Green and TaqMan based qPCR assays gave the lowest estimates. qPCR gave more accurate predictions of sequencing coverage than Qubit and TapeStation did. Costs, time......-consumption, workflow simplicity, and ability to quantify multiple samples are discussed. Technical specifications, advantages, and disadvantages of the various methods are pointed out....

  6. Systematic hybrid LOH: a new method to reduce false positives and negatives during screening of yeast gene deletion libraries

    DEFF Research Database (Denmark)

    Alvaro, D.; Sunjevaric, I.; Reid, R. J.

    2006-01-01

    We have developed a new method, systematic hybrid loss of heterozygosity, to facilitate genomic screens utilizing the yeast gene deletion library. Screening is performed using hybrid diploid strains produced through mating the library haploids with strains from a different genetic background......, to minimize the contribution of unpredicted recessive genetic factors present in the individual library strains. We utilize a set of strains where each contains a conditional centromere construct on one of the 16 yeast chromosomes that allows the destabilization and selectable loss of that chromosome. After...... complementation of any spurious recessive mutations in the library strain, facilitating attribution of the observed phenotype to the documented gene deletion and dramatically reducing false positive results commonly obtained in library screens. The systematic hybrid LOH method can be applied to virtually any...

  7. The Weaknesses of Full-Text Searching

    Science.gov (United States)

    Beall, Jeffrey

    2008-01-01

    This paper provides a theoretical critique of the deficiencies of full-text searching in academic library databases. Because full-text searching relies on matching words in a search query with words in online resources, it is an inefficient method of finding information in a database. This matching fails to retrieve synonyms, and it also retrieves…

  8. A modified harmony search based method for optimal rural radial ...

    African Journals Online (AJOL)

  9. Searching for Truth: Internet Search Patterns as a Method of Investigating Online Responses to a Russian Illicit Drug Policy Debate

    Science.gov (United States)

    Gillespie, James A; Quinn, Casey

    2012-01-01

    Background This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. Objective This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. Methods A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. Results We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r_s = 0.88, P < .001), “Bychkov” (r_s = .78, P < .001) and “Khimki” (r_s = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for “Bychkov” and

  10. Searching for truth: internet search patterns as a method of investigating online responses to a Russian illicit drug policy debate.

    Science.gov (United States)

    Zheluk, Andrey; Gillespie, James A; Quinn, Casey

    2012-12-13

    This is a methodological study investigating the online responses to a national debate over an important health and social problem in Russia. Russia is the largest Internet market in Europe, exceeding Germany in the absolute number of users. However, Russia is unusual in that the main search provider is not Google, but Yandex. This study had two main objectives. First, to validate Yandex search patterns against those provided by Google, and second, to test this method's adequacy for investigating online interest in a 2010 national debate over Russian illicit drug policy. We hoped to learn what search patterns and specific search terms could reveal about the relative importance and geographic distribution of interest in this debate. A national drug debate, centering on the anti-drug campaigner Egor Bychkov, was one of the main Russian domestic news events of 2010. Public interest in this episode was accompanied by increased Internet search. First, we measured the search patterns for 13 search terms related to the Bychkov episode and concurrent domestic events by extracting data from Google Insights for Search (GIFS) and Yandex WordStat (YaW). We conducted Spearman Rank Correlation of GIFS and YaW search data series. Second, we coded all 420 primary posts from Bychkov's personal blog between March 2010 and March 2012 to identify the main themes. Third, we compared GIFS and Yandex policies concerning the public release of search volume data. Finally, we established the relationship between salient drug issues and the Bychkov episode. We found a consistent pattern of strong to moderate positive correlations between Google and Yandex for the terms "Egor Bychkov" (r(s) = 0.88, P < .001), "Bychkov" (r(s) = .78, P < .001) and "Khimki"(r(s) = 0.92, P < .001). Peak search volumes for the Bychkov episode were comparable to other prominent domestic political events during 2010. Monthly search counts were 146,689 for "Bychkov" and 48,084 for "Egor Bychkov", compared to 53
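
    As a small illustration of the correlation step reported in this record, the sketch below computes Spearman's rank correlation between two monthly search-volume series (e.g., one term on Google and on Yandex); the numbers are invented for demonstration.

    ```python
    # Minimal sketch: Spearman rank correlation of two search-volume time series.
    from scipy.stats import spearmanr

    google_series = [120, 340, 290, 1500, 800, 410, 260, 230, 210, 190, 180, 175]
    yandex_series = [200, 560, 480, 2600, 1300, 700, 450, 400, 380, 330, 320, 300]

    rho, p_value = spearmanr(google_series, yandex_series)
    print(f"r_s = {rho:.2f}, P = {p_value:.3g}")
    ```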

  11. Library and information services: impact on patient care quality.

    Science.gov (United States)

    Marshall, Joanne Gard; Morgan, Jennifer Craft; Thompson, Cheryl A; Wells, Amber L

    2014-01-01

    The purpose of this paper is to explore library and information service impact on patient care quality. A large-scale critical incident survey of physicians and residents at 56 library sites serving 118 hospitals in the USA and Canada. Respondents were asked to base their answers on a recent incident in which they had used library resources to search for information related to a specific clinical case. Of 4,520 respondents, 75 percent said that they definitely or probably handled patient care differently using information obtained through the library. In a multivariate analysis, three summary clinical outcome measures were used as value and impact indicators: first, time saved; second, patient care changes; and third, adverse events avoided. The outcomes were examined in relation to four information access methods: first, asking librarian for assistance; second, performing search in a physical library; third, searching library's web site; or fourth, searching library resources on an institutional intranet. All library access methods had consistently positive relationships with the clinical outcomes, providing evidence that library services have a positive impact on patient care quality. Electronic collections and services provided by the library and the librarian contribute to patient care quality.

  12. A human-machine interface evaluation method: A difficulty evaluation method in information searching (DEMIS)

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2009-01-01

    A human-machine interface (HMI) evaluation method, named the 'difficulty evaluation method in information searching' (DEMIS), is proposed and demonstrated with an experimental study. The DEMIS is based on a human performance model and two measures of attentional-resource effectiveness in monitoring and detection tasks in nuclear power plants (NPPs). Operator competence and HMI design are modeled as the most significant factors affecting human performance. One of the two effectiveness measures is the fixation-to-importance ratio (FIR), which represents the attentional resource (eye fixations) spent on an information source relative to the importance of that information source. The other measure is selective attention effectiveness (SAE), which incorporates the FIRs for all information sources. The underlying principle of the measures is that an information source should be selectively attended to according to its informational importance. In this study, poor performance in information searching tasks is modeled as being coupled with difficulties caused by poor operator mental models and/or poor HMI design. Human performance in information searching tasks is evaluated by analyzing the FIR and the SAE. Operator mental models are evaluated by a questionnaire-based method. Difficulties caused by a poor HMI design are then evaluated by a focused interview based on the FIR evaluation, and the root causes leading to poor performance are identified in a systematic way.
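
    The record does not give the exact formulas, so the sketch below assumes one plausible reading: FIR as the share of fixations on a source divided by its share of importance, and SAE as an importance-weighted measure of how close the FIRs are to one. Both definitions and all numbers are assumptions for illustration only.

    ```python
    # Hedged sketch of FIR/SAE-style measures (assumed formulas, not the paper's).
    def fir(fixations, importances):
        total_fix, total_imp = sum(fixations), sum(importances)
        return [(f / total_fix) / (w / total_imp) for f, w in zip(fixations, importances)]

    def sae(fixations, importances):
        ratios = fir(fixations, importances)
        weights = [w / sum(importances) for w in importances]
        # 1.0 means fixations are distributed exactly in proportion to importance.
        return sum(w * min(r, 1.0 / r) for w, r in zip(weights, ratios))

    print(fir([30, 50, 20], [0.5, 0.3, 0.2]))  # >1 over-attended, <1 under-attended
    print(sae([30, 50, 20], [0.5, 0.3, 0.2]))
    ```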

  13. Modern Library Facilities to Enhance Learning in a Teachers' College

    African Journals Online (AJOL)

    unique firstlady

    Modern facilities/Equipments in the library that can aid effective utilization of ... we acquire the education by imitation or learning through the process of “doing it .... spent while searching for materials compared to traditional services method.

  14. Search Method Based on Figurative Indexation of Folksonomic Features of Graphic Files

    Directory of Open Access Journals (Sweden)

    Oleg V. Bisikalo

    2013-11-01

    Full Text Available In this paper, a search method based on the figurative indexation of folksonomic characteristics of graphic files is described. The method takes extralinguistic information into account and is based on a model of human figurative thinking. The paper presents the creation of a method for searching image files based on their formal, including folksonomic, clues.

  15. Activity-Based Costing (ABC and Time-Driven Activity-Based Costing (TDABC: Applicable Methods for University Libraries?

    Directory of Open Access Journals (Sweden)

    Kate-Riin Kont

    2011-01-01

    Full Text Available Objective – This article provides an overview of how university libraries research and adapt new cost accounting models, such as “activity-based costing” (ABC) and “time-driven activity-based costing” (TDABC), focusing on the strengths and weaknesses of both methods to determine which of these two is suitable for application in university libraries. Methods – This paper reviews and summarizes the literature on cost accounting and costing practices of university libraries. A brief overview of the history of cost accounting, costing, and time and motion studies in libraries is also provided. The ABC and the TDABC method, designed as a revised and easier version of the ABC by Kaplan and Anderson (Kaplan & Anderson, 2004) at the beginning of the 21st century, as well as the adoption and adaptation of these methods by university libraries, are described, and their strengths and weaknesses, as well as their suitability for university libraries, are analyzed. Results – Cost accounting and costing studies in libraries have a long history, the first of these dating back to 1877. The development of cost accounting and time and motion studies can be seen as a natural evolution of techniques which were created to solve management problems. The ABC method is the best-known management accounting innovation of the last 20 years, and is already widely used in university libraries around the world. However, setting up an ABC system can be very costly, and the system needs to be regularly updated, which further increases its costs. The TDABC system can not only be implemented more quickly (and thus more cheaply), but also can be updated more easily than the traditional ABC, which makes the TDABC the more suitable method for university libraries. Conclusion – Both methods are suitable for university libraries. However, the ABC method can only be implemented in collaboration with an accounting department. The TDABC method can be tested and implemented by
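
    To make the TDABC idea concrete, the sketch below costs a single library activity as (capacity cost rate per minute) x (estimated minutes per transaction) x (number of transactions); the workflow and all figures are invented for illustration.

    ```python
    # Minimal TDABC-style calculation (illustrative figures only).
    def tdabc_cost(total_capacity_cost, practical_capacity_minutes, minutes_per_unit, units):
        cost_rate = total_capacity_cost / practical_capacity_minutes  # cost per minute
        return cost_rate * minutes_per_unit * units

    # E.g., an interlibrary-loan workflow: 60,000 EUR of staff capacity,
    # 80,000 practical minutes per year, 12 minutes per request, 3,000 requests.
    print(tdabc_cost(60_000, 80_000, 12, 3_000))  # 27,000 EUR attributed to the activity
    ```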

  16. Searching methods for biometric identification systems: Fundamental limits

    NARCIS (Netherlands)

    Willems, F.M.J.

    2009-01-01

    We study two-stage search procedures for biometric identification systems in an information-theoretical setting. Our main conclusion is that clustering based on vector-quantization achieves the optimum trade-off between the number of clusters (cluster rate) and the number of individuals within a
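
    The record is information-theoretic rather than algorithmic, but the two-stage idea it studies can be sketched as below: quantize enrolled templates into clusters, then search exhaustively only within the probe's cluster. The use of k-means and all data are assumptions for illustration.

    ```python
    # Minimal sketch of two-stage biometric identification: cluster, then search within cluster.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    enrolled = rng.normal(size=(1000, 16))            # enrolled biometric feature vectors
    kmeans = KMeans(n_clusters=32, n_init=10, random_state=1).fit(enrolled)

    def identify(probe):
        cluster = kmeans.predict(probe.reshape(1, -1))[0]          # stage 1: pick a cluster
        members = np.flatnonzero(kmeans.labels_ == cluster)        # candidates in that cluster
        dists = np.linalg.norm(enrolled[members] - probe, axis=1)  # stage 2: exhaustive search
        return members[np.argmin(dists)]

    print(identify(enrolled[42] + 0.01 * rng.normal(size=16)))     # expected: 42
    ```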

  17. Methods for transforming and expression screening of filamentous fungal cells with a DNA library

    Science.gov (United States)

    Teter, Sarah; Lamsa, Michael; Cherry, Joel; Ward, Connie

    2015-06-02

    The present invention relates to methods for expression screening of filamentous fungal transformants, comprising: (a) isolating single colony transformants of a DNA library introduced into E. coli; (b) preparing DNA from each of the single colony E. coli transformants; (c) introducing a sample of each of the DNA preparations of step (b) into separate suspensions of protoplasts of a filamentous fungus to obtain transformants thereof, wherein each transformant contains one or more copies of an individual polynucleotide from the DNA library; (d) growing the individual filamentous fungal transformants of step (c) on selective growth medium, thereby permitting growth of the filamentous fungal transformants, while suppressing growth of untransformed filamentous fungi; and (e) measuring activity or a property of each polypeptide encoded by the individual polynucleotides. The present invention also relates to isolated polynucleotides encoding polypeptides of interest obtained by such methods, to nucleic acid constructs, expression vectors, and recombinant host cells comprising the isolated polynucleotides, and to methods of producing the polypeptides encoded by the isolated polynucleotides.

  18. Enterprise Reference Library

    Science.gov (United States)

    Bickham, Grandin; Saile, Lynn; Havelka, Jacque; Fitts, Mary

    2011-01-01

    Introduction: Johnson Space Center (JSC) offers two extensive libraries that contain journals, research literature and electronic resources. Searching capabilities are available to those individuals residing onsite or through a librarian's search. Many individuals have rich collections of references, but no mechanisms exist to share reference libraries across researchers, projects, or directorates. Likewise, information regarding which references are provided to which individuals is not available, resulting in duplicate requests, redundant labor costs and associated copying fees. In addition, this tends to limit collaboration between colleagues and promotes the establishment of individual, unshared silos of information. The Integrated Medical Model (IMM) team has utilized a centralized reference management tool during the development, test, and operational phases of this project. The Enterprise Reference Library project expands the capabilities developed for IMM to address the above issues and enhance collaboration across JSC. Method: After significant market analysis for a multi-user reference management tool, no available commercial tool was found to meet this need, so a software program was built around a commercial tool, Reference Manager 12 by The Thomson Corporation. A use case approach guided the requirements development phase. The premise of the design is that individuals use their own reference management software and export to SharePoint when their library is incorporated into the Enterprise Reference Library. This results in a searchable user-specific library application. An accompanying share folder will warehouse the electronic full-text articles, which allows the global user community to access full-text articles. Discussion: An enterprise reference library solution can provide a multidisciplinary collection of full-text articles. This approach improves efficiency in obtaining and storing reference material while greatly reducing labor, purchasing and

  19. Pep-3D-Search: a method for B-cell epitope prediction based on mimotope analysis.

    Science.gov (United States)

    Huang, Yan Xin; Bao, Yong Li; Guo, Shu Yan; Wang, Yan; Zhou, Chun Guang; Li, Yu Xin

    2008-12-16

    The prediction of conformational B-cell epitopes is one of the most important goals in immunoinformatics. The solution to this problem, even if approximate, would help in designing experiments to precisely map the residues of interaction between an antigen and an antibody. Consequently, this area of research has received considerable attention from immunologists, structural biologists and computational biologists. Phage-displayed random peptide libraries are powerful tools used to obtain mimotopes that are selected by binding to a given monoclonal antibody (mAb) in a similar way to the native epitope. These mimotopes can be considered as functional epitope mimics. Mimotope analysis based methods can predict not only linear but also conformational epitopes and this has been the focus of much research in recent years. Though some algorithms based on mimotope analysis have been proposed, the precise localization of the interaction site mimicked by the mimotopes is still a challenging task. In this study, we propose a method for B-cell epitope prediction based on mimotope analysis called Pep-3D-Search. Given the 3D structure of an antigen and a set of mimotopes (or a motif sequence derived from the set of mimotopes), Pep-3D-Search can be used in two modes: mimotope or motif. To evaluate the performance of Pep-3D-Search to predict epitopes from a set of mimotopes, 10 epitopes defined by crystallography were compared with the predicted results from a Pep-3D-Search: the average Matthews correlation coefficient (MCC), sensitivity and precision were 0.1758, 0.3642 and 0.6948. Compared with other available prediction algorithms, Pep-3D-Search showed comparable MCC, specificity and precision, and could provide novel, rational results. To verify the capability of Pep-3D-Search to align a motif sequence to a 3D structure for predicting epitopes, 6 test cases were used. The predictive performance of Pep-3D-Search was demonstrated to be superior to that of other similar programs
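
    For readers unfamiliar with the evaluation metrics quoted above, the sketch below computes MCC, sensitivity, and precision from a confusion matrix of predicted versus true epitope residues; the counts are invented for illustration.

    ```python
    # Minimal sketch: MCC, sensitivity, and precision from confusion-matrix counts.
    import math

    def epitope_metrics(tp, fp, tn, fn):
        sensitivity = tp / (tp + fn)
        precision = tp / (tp + fp)
        mcc = ((tp * tn - fp * fn) /
               math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return mcc, sensitivity, precision

    print(epitope_metrics(tp=12, fp=5, tn=180, fn=20))
    ```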

  20. A protein-dependent side-chain rotamer library.

    KAUST Repository

    Bhuyan, M.S.

    2011-12-14

    Protein side-chain packing problem has remained one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function and a search algorithm. Rotamer libraries summarize the existing knowledge of the experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent rotamer libraries and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since in the side-chain prediction problem, the backbone structure is given, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call it protein-dependent rotamer libraries. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by the inference algorithms, without doing global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of the side-chain prediction accuracy and the rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to the global-search-based side-chain prediction methods.

  1. A protein-dependent side-chain rotamer library.

    KAUST Repository

    Bhuyan, M.S.; Gao, Xin

    2011-01-01

    Protein side-chain packing problem has remained one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function and a search algorithm. Rotamer libraries summarize the existing knowledge of the experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent rotamer libraries and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since in the side-chain prediction problem, the backbone structure is given, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call it protein-dependent rotamer libraries. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by the inference algorithms, without doing global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of the side-chain prediction accuracy and the rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to the global-search-based side-chain prediction methods.

  2. Compare the user interface of digital libraries' websites between the developing and developed countries in content analysis method

    Directory of Open Access Journals (Sweden)

    Gholam Abbas Mousavi

    2017-03-01

    Full Text Available Purpose: This study was performed with the goals of determining the items involved in designing and developing the user interfaces of digital libraries' websites, identifying the best digital library websites and discussing their advantages and disadvantages, and analyzing and comparing digital library websites in developing countries with those in developed countries. Methodology: To do so, 50 digital library websites were selected by the purposive sampling method. By analyzing the level of development of the countries in the sample, 12 websites were classified as belonging to developing countries and 38 to developed countries. Their content was then studied using qualitative content analysis. The study was conducted using a researcher-constructed checklist containing 12 main categories and 44 items, whose validity was established by the content validity method. The data were analyzed in SPSS (version 16). Findings: The results showed that in terms of “online resources”, “library collection,” and “navigation”, there is a significant relationship between the digital library user interface designs in the two types of countries. Results: The items “online public access catalogue (OPAC)” and “visit statistics” were observed in more of the developing countries' digital library websites, whereas the item “menu and submenus to introduce library sections” was present in more of the developed countries' digital library websites. Moreover, by analyzing the number of items in the selected websites, “American Memory” with 44 items, “International Children's Digital Library” with 40 items, and “California” with 39 items were the best websites, and “Berkeley Sun Site” with 10 items was the worst. Despite the greater number and higher quality of digital libraries in developed countries, the quality of digital library websites in developing countries is considerable. In general, some of the newly established

  3. A study of certain Monte Carlo search and optimisation methods

    International Nuclear Information System (INIS)

    Budd, C.

    1984-11-01

    Studies are described which might lead to the development of a search and optimisation facility for the Monte Carlo criticality code MONK. The facility envisaged could be used to maximise a function of k-effective with respect to certain parameters of the system or, alternatively, to find the system (in a given range of systems) for which that function takes a given value. (UK)

  4. Rapid identification and quantitation of compounds with forensic interest using fast liquid chromatography-ion trap mass spectrometry and library searching.

    Science.gov (United States)

    Pihlainen, Katja; Sippola, Erkki; Kostiainen, Risto

    2003-04-25

    A fast liquid chromatography-electrospray tandem mass spectrometric (LC-ESI-MS-MS) method using a monolithic column, gradient elution and an ion trap mass spectrometer was developed for 14 forensically interesting and chemically different compounds. All compounds were eluted within 2.5 min, and the total analysis time was 5 min, including the stabilisation time required for the next injection. All the compounds (basic, neutral, and acidic) were efficiently ionised by positive ion ESI. A laboratory library including MS-MS spectra and retention times was developed and tested. Results with 476 standard samples and 50 authentic samples showed that the compounds studied can be unambiguously identified with the library. A quantitative method was developed for the compounds using external calibration. The evaluation process showed good linearity of the method and reasonable repeatability. Limits of detection ranged from 10.0 to 50.0 ng/ml.

  5. USE OF METRIC METHODS OF RESEARCH IN THE LIBRARY OF VINNYTSIA STATE PEDAGOGICAL UNIVERSITY NAMED AFTER MYKHAILO KOTSIUBYNSKY

    Directory of Open Access Journals (Sweden)

    В. С. Білоус

    2017-10-01

    Full Text Available Subject, theme and aim of the work. The level of development of science and technology is crucial to the progress of society, hence the need to increase the presence of Ukrainian science in the global scientific information space and its influence in the world, and to use metric methods of research in the library of a higher school. Methods. The use of the scientific methods of analysis, synthesis, analogy, comparison and forecasting allowed us to examine the results of the implementation of innovative communication initiatives in the university library. Results. Current and future activities of the higher school library in integrating scientific publications into the international information space are highlighted. The practical implementation of these activities is discussed using the example of the library of Vinnytsia State Mykhailo Kotsiubynskyi Pedagogical University. Scientific novelty. The role of the university library in increasing the representation of Ukrainian science in the world of scholarly communication is shown, and a strategy is proposed whose implementation should characterize the modern librarian as a «role model» for the university community in the adoption of electronic models of scientific communication. Conclusions. Innovative transformations in the content, forms and methods of library activities, using metric measurements, affect the improvement of the scientific activities of the institution and give significant social results. Introducing certain areas into the practice of library activities will prevent the «dissipation» of the documentary scientific information resources of the university, will contribute to their consolidation, and will increase the importance of scientific publications and the authority of Ukrainian science in general. The article reflects the innovative activities of the library of Vinnytsia Mykhailo Kotsiubynskyi State Pedagogical University and the development of library service using the research work of the

  6. Methods Of Using Chemical Libraries To Search For New Kinase Inhibitors

    Science.gov (United States)

    Gray, Nathanael S. , Schultz, Peter , Wodicka, Lisa , Meijer, Laurent , Lockhart, David J.

    2003-06-03

    The generation of selective inhibitors for specific protein kinases would provide new tools for analyzing signal transduction pathways and possibly new therapeutic agents. We have invented an approach to the development of selective protein kinase inhibitors based on the unexpected binding mode of 2,6,9-trisubstituted purines to the ATP binding site of human CDK2. The most potent inhibitor, purvalanol B (IC.sub.50 =6 nM), binds with a 30-fold greater affinity than the known CDK2 inhibitor, flavopiridol. The cellular effects of this class of compounds were examined and compared to those of flavopiridol by monitoring changes in mRNA expression levels for all genes in treated cells of Saccharomyces cerevisiae using high-density oligonucleotide probe arrays.

  7. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Full Text Available Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route-choice preferences, and the urban road network is decomposed effectively into multiple layers. Using generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. In sensitivity experiments, we compare the proposed path search method with current advanced optimal path algorithms. The results demonstrate that the proposed method increases road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers. This method therefore represents a significant advance in the field of urban road network guidance.
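
    The paper's parallel multilayer algorithm is not reproduced here; the sketch below only illustrates the underlying idea of routing on generalized travel time, where a traveler-preference weight converts tolls into equivalent minutes before a standard shortest-path search. The tiny network, the weight, and the attribute name gtt are assumptions for illustration.

    ```python
    # Minimal sketch: shortest path with generalized travel time as edge impedance.
    import networkx as nx

    def generalized_travel_time(time_min, toll, value_of_time=1.0):
        return time_min + value_of_time * toll   # preference enters via value_of_time

    G = nx.DiGraph()
    edges = [("A", "B", 10, 0), ("B", "D", 12, 0), ("A", "C", 6, 8), ("C", "D", 5, 8)]
    for u, v, t, toll in edges:
        G.add_edge(u, v, gtt=generalized_travel_time(t, toll))

    # With this toll-averse weighting, the toll-free route A-B-D wins over A-C-D.
    print(nx.shortest_path(G, "A", "D", weight="gtt"))
    ```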

  8. A simple and efficient method for assembling TALE protein based on plasmid library.

    Science.gov (United States)

    Zhang, Zhiqiang; Li, Duo; Xu, Huarong; Xin, Ying; Zhang, Tingting; Ma, Lixia; Wang, Xin; Chen, Zhilong; Zhang, Zhiying

    2013-01-01

    DNA binding domain of the transcription activator-like effectors (TALEs) from Xanthomonas sp. consists of tandem repeats that can be rearranged according to a simple cipher to target new DNA sequences with high DNA-binding specificity. This technology has been successfully applied in varieties of species for genome engineering. However, assembling long TALE tandem repeats remains a big challenge precluding wide use of this technology. Although several new methodologies for efficiently assembling TALE repeats have been recently reported, all of them require either sophisticated facilities or skilled technicians to carry them out. Here, we described a simple and efficient method for generating customized TALE nucleases (TALENs) and TALE transcription factors (TALE-TFs) based on TALE repeat tetramer library. A tetramer library consisting of 256 tetramers covers all possible combinations of 4 base pairs. A set of unique primers was designed for amplification of these tetramers. PCR products were assembled by one step of digestion/ligation reaction. 12 TALE constructs including 4 TALEN pairs targeted to mouse Gt(ROSA)26Sor gene and mouse Mstn gene sequences as well as 4 TALE-TF constructs targeted to mouse Oct4, c-Myc, Klf4 and Sox2 gene promoter sequences were generated by using our method. The construction routines took 3 days and parallel constructions were available. The rate of positive clones during colony PCR verification was 64% on average. Sequencing results suggested that all TALE constructs were performed with high successful rate. This is a rapid and cost-efficient method using the most common enzymes and facilities with a high success rate.

  9. Searching for cellular partners of hantaviral nonstructural protein NSs: Y2H screening of mouse cDNA library and analysis of cellular interactome.

    Science.gov (United States)

    Rönnberg, Tuomas; Jääskeläinen, Kirsi; Blot, Guillaume; Parviainen, Ville; Vaheri, Antti; Renkonen, Risto; Bouloy, Michele; Plyusnin, Alexander

    2012-01-01

    Hantaviruses (Bunyaviridae) are negative-strand RNA viruses with a tripartite genome. The small (S) segment encodes the nucleocapsid protein and, in some hantaviruses, also the nonstructural protein (NSs). The aim of this study was to find potential cellular partners for the hantaviral NSs protein. Toward this aim, yeast two-hybrid (Y2H) screening of mouse cDNA library was performed followed by a search for potential NSs protein counterparts via analyzing a cellular interactome. The resulting interaction network was shown to form logical, clustered structures. Furthermore, several potential binding partners for the NSs protein, for instance ACBD3, were identified and, to prove the principle, interaction between NSs and ACBD3 proteins was demonstrated biochemically.

  10. Reverse screening methods to search for the protein targets of chemopreventive compounds

    Science.gov (United States)

    Huang, Hongbin; Zhang, Guigui; Zhou, Yuquan; Lin, Chenru; Chen, Suling; Lin, Yutong; Mai, Shangkang; Huang, Zunnan

    2018-05-01

    This article is a systematic review of reverse screening methods used to search for the protein targets of chemopreventive compounds or drugs. Typical chemopreventive compounds include components of traditional Chinese medicine, natural compounds and Food and Drug Administration (FDA)-approved drugs. Such compounds are somewhat selective but are predisposed to bind multiple protein targets distributed throughout diverse signaling pathways in human cells. In contrast to conventional virtual screening, which identifies the ligands of a targeted protein from a compound database, reverse screening is used to identify the potential targets or unintended targets of a given compound from a large number of receptors by examining their known ligands or crystal structures. This method, also known as in silico or computational target fishing, is highly valuable for discovering the target receptors of query molecules from terrestrial or marine natural products, exploring the molecular mechanisms of chemopreventive compounds, finding alternative indications of existing drugs by drug repositioning, and detecting adverse drug reactions and drug toxicity. Reverse screening can be divided into three major groups: shape screening, pharmacophore screening and reverse docking. Several large software packages, such as Schrödinger and Discovery Studio; typical software/network services such as ChemMapper, PharmMapper, idTarget and INVDOCK; and practical databases of known target ligands and receptor crystal structures, such as ChEMBL, BindingDB and the Protein Data Bank (PDB), are available for use in these computational methods. Different programs, online services and databases have different applications and constraints. Here, we conducted a systematic analysis and multilevel classification of the computational programs, online services and compound libraries available for shape screening, pharmacophore screening and reverse docking to enable non-specialist users to quickly learn and

  11. Google Scholar Out-Performs Many Subscription Databases when Keyword Searching. A Review of: Walters, W. H. (2009). Google Scholar search performance: Comparative recall and precision. portal: Libraries and the Academy, 9(1), 5-24.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2010-09-01

    Full Text Available Objective – To compare the search performance (i.e., recall and precision) of Google Scholar with that of 11 other bibliographic databases when using a keyword search to find references on later-life migration. Design – Comparative database evaluation. Setting – Not stated in the article. It appears from the author’s affiliation that this research took place in an academic institution of higher learning. Subjects – Twelve databases were compared: Google Scholar, Academic Search Elite, AgeLine, ArticleFirst, EconLit, Geobase, Medline, PAIS International, Popline, Social Sciences Abstracts, Social Sciences Citation Index, and SocIndex. Methods – The relevant literature on later-life migration was pre-identified as a set of 155 journal articles published from 1990 to 2000. The author selected these articles from database searches, citation tracking, journal scans, and consultations with social sciences colleagues. Each database was evaluated with regard to its performance in finding references to these 155 papers. Elderly and migration were the keywords used to conduct the searches in each of the 12 databases, since these were the words that were the most frequently used in the titles of the 155 relevant articles. The search was performed in the most basic search interface of each database that allowed limiting results by the needed publication dates (1990-2000). Search results were sorted by relevance when possible (for 9 out of the 12 databases), and by date when the relevance sorting option was not available. Recall and precision statistics were then calculated from the search results. Recall is the number of relevant results obtained in the database for a search topic, divided by all the potential results which can be obtained on that topic (in this case, 155 references). Precision is the number of relevant results obtained in the database for a search topic, divided by the total number of results that were obtained in the database on
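
    As a small worked example of the recall and precision definitions used in the review, the sketch below evaluates one hypothetical database search against the 155 known relevant articles; the retrieval counts are invented.

    ```python
    # Minimal sketch: recall and precision for a single database search.
    def recall_precision(relevant_retrieved, total_relevant, total_retrieved):
        return relevant_retrieved / total_relevant, relevant_retrieved / total_retrieved

    recall, precision = recall_precision(relevant_retrieved=93,
                                         total_relevant=155,
                                         total_retrieved=1200)
    print(f"recall = {recall:.2f}, precision = {precision:.3f}")
    ```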

  12. Text mining for search term development in systematic reviewing: A discussion of some methods and challenges.

    Science.gov (United States)

    Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James

    2017-09-01

    Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
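
    One of the techniques mentioned above, term ranking by TF-IDF, can be sketched as follows: candidate search terms are scored by their average TF-IDF weight over a small set of known relevant abstracts. The toy abstracts and the use of scikit-learn are assumptions for illustration.

    ```python
    # Minimal sketch: ranking candidate search terms by TF-IDF over relevant abstracts.
    from sklearn.feature_extraction.text import TfidfVectorizer

    relevant_abstracts = [
        "text mining supports search strategy development for systematic reviews",
        "term frequency and clustering help identify candidate search terms",
        "database search strings benefit from objectively derived terminology",
    ]
    vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    tfidf = vectorizer.fit_transform(relevant_abstracts)

    # Average TF-IDF weight of each term across the relevant abstracts, highest first.
    scores = tfidf.mean(axis=0).A1
    terms = vectorizer.get_feature_names_out()
    for term, score in sorted(zip(terms, scores), key=lambda x: -x[1])[:10]:
        print(f"{term:30s} {score:.3f}")
    ```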

  13. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    Science.gov (United States)

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameters βk, of the DY method and the HS method, and put forward a hybrid method of DY and HS. We also propose a hybrid of FR and PRP by the same means. Additionally, to accompany the two hybrid methods, we generalize the Wolfe line search to compute the step size αk of the two hybrid methods. With the new Wolfe line search, the descent property and the global convergence of the two hybrid methods can be proved.
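
    The paper's generalized Wolfe line search and exact βk are not reproduced here; the sketch below only illustrates the general shape of a hybrid DY/HS conjugate gradient iteration, using SciPy's standard (strong) Wolfe line search as a stand-in and a hypothetical fixed mixing parameter theta.

    ```python
    # Minimal sketch of a hybrid DY/HS conjugate gradient iteration (not the paper's method).
    import numpy as np
    from scipy.optimize import line_search, rosen, rosen_der

    def hybrid_dy_hs_cg(f, grad, x, theta=0.5, tol=1e-6, max_iter=500):
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:                  # line search failed: restart along -gradient
                d, alpha = -g, 1e-4
            x_new = x + alpha * d
            g_new = grad(x_new)
            y = g_new - g
            denom = d @ y
            beta_hs = (g_new @ y) / denom      # Hestenes-Stiefel
            beta_dy = (g_new @ g_new) / denom  # Dai-Yuan
            beta = (1 - theta) * beta_hs + theta * beta_dy
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    print(hybrid_dy_hs_cg(rosen, rosen_der, np.array([-1.2, 1.0])))  # ~[1, 1]
    ```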

  14. Search for new and improved radiolabeling methods for monoclonal antibodies

    International Nuclear Information System (INIS)

    Hiltunen, J.V.

    1993-01-01

    In this review the selection of different radioisotopes is discussed, as well as the various traditional and newer methods of introducing the radiolabel into the antibody structure. Labeling methods for radiohalogens, for technetium and rhenium isotopes, and for trivalent cation radiometals are reviewed. Some of the newer methods offer simplified labeling procedures, but usually the new methods are more complicated than the earlier ones. However, new labeling methods are available for almost any radioelement group, and they may better preserve the original nature of the antibody and lead to better clinical results. (orig./MG)

  15. Combining a Deconvolution and a Universal Library Search Algorithm for the Nontarget Analysis of Data-Independent Acquisition Mode Liquid Chromatography-High-Resolution Mass Spectrometry Results.

    Science.gov (United States)

    Samanipour, Saer; Reid, Malcolm J; Bæk, Kine; Thomas, Kevin V

    2018-04-17

    Nontarget analysis is considered one of the most comprehensive tools for the identification of unknown compounds in a complex sample analyzed via liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS). Due to the complexity of the data generated via LC-HRMS, the data-dependent acquisition mode, which produces the MS2 spectra of a limited number of the precursor ions, has been one of the most common approaches used during nontarget screening. However, data-independent acquisition mode produces highly complex spectra that require proper deconvolution and library search algorithms. We have developed a deconvolution algorithm and a universal library search algorithm (ULSA) for the analysis of complex spectra generated via data-independent acquisition. These algorithms were validated and tested using both semisynthetic and real environmental data. A total of 6000 randomly selected spectra from MassBank were introduced across the total ion chromatograms of 15 sludge extracts at three levels of background complexity for the validation of the algorithms via semisynthetic data. The deconvolution algorithm successfully extracted more than 60% of the added ions in the analytical signal for 95% of processed spectra (i.e., 3 complexity levels multiplied by 6000 spectra). The ULSA ranked the correct spectra among the top three for more than 95% of cases. We further tested the algorithms with 5 wastewater effluent extracts for 59 artificial unknown analytes (i.e., their presence or absence was confirmed via target analysis). These algorithms did not produce any cases of false identifications while correctly identifying ∼70% of the total inquiries. The implications, capabilities, and the limitations of both algorithms are further discussed.
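
    The ULSA scoring itself is not reproduced here; the sketch below only illustrates the general library-search step of scoring a deconvoluted spectrum against library spectra by cosine similarity on a shared m/z grid, with invented intensity vectors.

    ```python
    # Minimal sketch: rank library spectra by cosine similarity to a query spectrum.
    import numpy as np

    def cosine_score(query, reference):
        return float(query @ reference / (np.linalg.norm(query) * np.linalg.norm(reference)))

    def rank_library(query, library):
        """library: dict of name -> intensity vector binned on the same m/z grid as query."""
        scores = {name: cosine_score(query, spec) for name, spec in library.items()}
        return sorted(scores.items(), key=lambda kv: -kv[1])

    query = np.array([0.0, 5.0, 100.0, 0.0, 40.0, 2.0])
    library = {"compound_A": np.array([0.0, 4.0, 95.0, 1.0, 35.0, 0.0]),
               "compound_B": np.array([80.0, 0.0, 10.0, 60.0, 0.0, 5.0])}
    print(rank_library(query, library))
    ```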

  16. Modification of the Armijo line search to satisfy the convergence properties of HS method

    Directory of Open Access Journals (Sweden)

    Mohammed Belloufi

    2013-07-01

    Full Text Available The Hestenes-Stiefel (HS) conjugate gradient algorithm is a useful tool of unconstrained numerical optimization, which has good numerical performance but no global convergence result under traditional line searches. This paper proposes a line search technique that guarantees the global convergence of the Hestenes-Stiefel (HS) conjugate gradient method. Numerical tests are presented to validate the different approaches.
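
    The specific modification proposed in the record is not reproduced here; the sketch below shows a standard Armijo backtracking line search of the kind being modified, applied to a simple quadratic for illustration.

    ```python
    # Minimal sketch: standard Armijo backtracking line search.
    import numpy as np

    def armijo_backtracking(f, grad, x, d, alpha0=1.0, c1=1e-4, shrink=0.5, max_backtracks=50):
        """Return a step size a with f(x + a*d) <= f(x) + c1 * a * grad(x)^T d."""
        fx, slope = f(x), grad(x) @ d
        alpha = alpha0
        for _ in range(max_backtracks):
            if f(x + alpha * d) <= fx + c1 * alpha * slope:
                return alpha
            alpha *= shrink
        return alpha

    f = lambda x: x @ x
    g = lambda x: 2 * x
    x0 = np.array([3.0, -4.0])
    print(armijo_backtracking(f, g, x0, d=-g(x0)))  # accepted step along the negative gradient
    ```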

  17. Program for searching for semiempirical parameters by the MNDO method

    International Nuclear Information System (INIS)

    Bliznyuk, A.A.; Voityuk, A.A.

    1987-01-01

    The authors describe a program for optimizing atomic models constructed using the MNDO method, which varies not only the parameters but also allows simple changes in the calculation scheme. The target function is built from properties such as formation enthalpies, dipole moments, ionization potentials, and geometrical parameters. The software used to minimize the target function is based on the simplex method (the Nelder-Mead algorithm) and on the Fletcher variable-metric method. The program is written in FORTRAN IV and implemented on the ES computer
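
    As a hedged illustration of the parameter-search idea (the MNDO property calculations are replaced by a toy model), the sketch below minimizes a weighted squared error over reference properties with the Nelder-Mead simplex method; all targets, weights, and the toy model are assumptions.

    ```python
    # Minimal sketch: fitting semiempirical-style parameters with Nelder-Mead.
    import numpy as np
    from scipy.optimize import minimize

    reference = {"formation_enthalpy": -17.9, "dipole_moment": 1.85}   # made-up targets
    weights = {"formation_enthalpy": 1.0, "dipole_moment": 10.0}

    def model_properties(params):
        a, b = params            # toy stand-in for a quantum-chemical calculation
        return {"formation_enthalpy": -20.0 * a + b, "dipole_moment": 2.0 * a * b}

    def target_function(params):
        calc = model_properties(params)
        return sum(weights[k] * (calc[k] - reference[k]) ** 2 for k in reference)

    result = minimize(target_function, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
    print(result.x, result.fun)
    ```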

  18. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai; Chen, Zonghai; Liu, Chengfei; Huang, He; Zhang, Xiangliang

    2016-01-01

    This paper solves the problem of providing high-quality suggestions for user keyword queries over databases. With the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate

  19. Search and foraging behaviors from movement data: A comparison of methods.

    Science.gov (United States)

    Bennison, Ashley; Bearhop, Stuart; Bodey, Thomas W; Votier, Stephen C; Grecian, W James; Wakefield, Ewan D; Hamer, Keith C; Jessopp, Mark

    2018-01-01

    Search behavior is often used as a proxy for foraging effort within studies of animal movement, despite it being only one part of the foraging process, which also includes prey capture. While methods for validating prey capture exist, many studies rely solely on behavioral annotation of animal movement data to identify search and infer prey capture attempts. However, the degree to which search correlates with prey capture is largely untested. This study applied seven behavioral annotation methods to identify search behavior from GPS tracks of northern gannets (Morus bassanus), and compared outputs to the occurrence of dives recorded by simultaneously deployed time-depth recorders. We tested how behavioral annotation methods vary in their ability to identify search behavior leading to dive events. There was considerable variation in the number of dives occurring within search areas across methods. Hidden Markov models proved to be the most successful, with 81% of all dives occurring within areas identified as search. k-Means clustering and first passage time had the highest rates of dives occurring outside identified search behavior. First passage time and hidden Markov models had the lowest rates of false positives, identifying fewer search areas with no dives. All behavioral annotation methods had advantages and drawbacks in terms of the complexity of analysis and ability to reflect prey capture events while minimizing the number of false positives and false negatives. We used these results, with consideration of analytical difficulty, to provide advice on the most appropriate methods for use where prey capture behavior is not available. This study highlights a need to critically assess and carefully choose a behavioral annotation method suitable for the research question being addressed, or resulting species management frameworks established.
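
    A hidden-Markov-model annotation of the kind found most successful above can be sketched as follows, assuming the third-party hmmlearn package and a two-state model over step length and turning angle; the simulated track and the post hoc state labels are assumptions for illustration.

    ```python
    # Minimal sketch: two-state HMM annotation of movement data (search vs. transit).
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(3)
    # Toy track: short steps with large turns ("search"), then long straight steps ("transit").
    search = np.column_stack([rng.gamma(1.0, 50, 300), rng.uniform(-np.pi, np.pi, 300)])
    transit = np.column_stack([rng.gamma(5.0, 200, 300), rng.normal(0, 0.2, 300)])
    features = np.vstack([search, transit])     # columns: step length (m), turning angle (rad)

    model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100, random_state=3)
    model.fit(features)
    states = model.predict(features)            # 0/1 labels interpreted post hoc
    print(np.bincount(states))
    ```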

  20. The Librarian Leading the Machine: A Reassessment of Library Instruction Methods

    Science.gov (United States)

    Greer, Katie; Hess, Amanda Nichols; Kraemer, Elizabeth W.

    2016-01-01

    This article builds on the 2007 College and Research Libraries article, "The Librarian, the Machine, or a Little of Both." Since that time, Oakland University Libraries implemented changes to its instruction program that reflect larger trends in teaching and assessment throughout the profession; following these revisions, librarians…

  1. Library designs for generic C++ sparse matrix computations of iterative methods

    Energy Technology Data Exchange (ETDEWEB)

    Pozo, R.

    1996-12-31

    A new library design is presented for generic sparse matrix C++ objects for use in iterative algorithms and preconditioners. This design extends previous work on C++ numerical libraries by providing a framework in which efficient algorithms can be written *independent* of the matrix layout or format. That is, rather than supporting different codes for each (element type) / (matrix format) combination, only one version of the algorithm need be maintained. This not only reduces the effort for library developers, but also simplifies the calling interface seen by library users. Furthermore, the underlying matrix library can be naturally extended to support user-defined objects, such as hierarchical block-structured matrices, or application-specific preconditioners. By utilizing optimized kernels whenever possible, the performance of such a framework can be shown to be competitive with optimized Fortran programs.

  2. Comprehensive evaluation of SNP identification with the Restriction Enzyme-based Reduced Representation Library (RRL method

    Directory of Open Access Journals (Sweden)

    Du Ye

    2012-02-01

    Full Text Available Abstract Background The Restriction Enzyme-based Reduced Representation Library (RRL) method represents a relatively feasible and flexible strategy for Single Nucleotide Polymorphism (SNP) identification in different species. It has the remarkable advantage of reducing the complexity of the genome by orders of magnitude. However, a comprehensive evaluation of the actual efficacy of SNP identification by this method has been unavailable. Results In order to evaluate the efficacy of the Restriction Enzyme-based RRL method, we selected the Tsp45I enzyme, which covers a 266 Mb flanking region of the enzyme recognition site according to an in silico simulation on the human reference genome. We then sequenced the YH RRL after Tsp45I treatment and obtained reads of which 80.8% were mapped to the target region with a 20-fold average coverage; about 96.8% of the target region was covered by at least one read, and 257 K SNPs were identified in the region using the SOAPsnp software. Compared with whole-genome resequencing data, we observed a false discovery rate (FDR) of 13.95% and a false negative rate (FNR) of 25.90%. The concordance rate of homozygous loci was over 99.8%, but that of heterozygous loci was only 92.56%. Repeat sequences and base quality were shown to have a great effect on the accuracy of SNP calling, and SNPs in recognition sites contributed markedly to the high FNR and the low concordance rate of heterozygous loci. Our results indicated that repeat masking and highly stringent filter criteria could significantly decrease both the FDR and the FNR. Conclusions This study demonstrates that the Restriction Enzyme-based RRL method is effective for SNP identification. The results highlight the important role of the bias and method-derived defects present in this method and emphasize that they deserve special attention.
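
    As a small worked example of the FDR and FNR definitions used above, the sketch below treats whole-genome resequencing calls as the truth set and compares a toy set of RRL calls against it; all positions are invented.

    ```python
    # Minimal sketch: false discovery rate and false negative rate of SNP calls.
    def fdr_fnr(rrl_calls, truth_calls):
        rrl, truth = set(rrl_calls), set(truth_calls)
        fdr = len(rrl - truth) / len(rrl)       # called by RRL but not confirmed
        fnr = len(truth - rrl) / len(truth)     # confirmed sites missed by RRL
        return fdr, fnr

    print(fdr_fnr(rrl_calls=["chr1:100", "chr1:250", "chr2:40"],
                  truth_calls=["chr1:100", "chr2:40", "chr2:90", "chr3:7"]))
    ```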

  3. Short Term Gain, Long Term Pain:Informal Job Search Methods and Post-Displacement Outcomes

    OpenAIRE

    Green, Colin

    2012-01-01

    This paper examines the role of informal job search methods on the labour market outcomes of displaced workers. Informal job search methods could alleviate short-term labour market difficulties of displaced workers by providing information on job opportunities, allowing them to signal their productivity and may mitigate wage losses through better post-displacement job matching. However if displacement results from reductions in demand for specific sectors/skills, the use of informal job searc...

  4. Wide Binaries in TGAS: Search Method and First Results

    Science.gov (United States)

    Andrews, Jeff J.; Chanamé, Julio; Agüeros, Marcel A.

    2018-04-01

    Half of all stars reside in binary systems, many of which have orbital separations in excess of 1000 AU. Such binaries are typically identified in astrometric catalogs by matching the proper motion vectors of close stellar pairs. We present a fully Bayesian method that properly takes into account positions, proper motions, parallaxes, and their correlated uncertainties to identify widely separated stellar binaries. After applying our method to the >2 × 10^6 stars in the Tycho-Gaia astrometric solution from Gaia DR1, we identify over 6000 candidate wide binaries. For those pairs with separations less than 40,000 AU, we determine the contamination rate to be ~5%. This sample has an orbital separation (a) distribution that is roughly flat in log space for separations less than ~5000 AU and follows a power law of a^(-1.6) at larger separations.

  5. Searching for Suicide Methods: Accessibility of Information About Helium as a Method of Suicide on the Internet.

    Science.gov (United States)

    Gunnell, David; Derges, Jane; Chang, Shu-Sen; Biddle, Lucy

    2015-01-01

    Helium gas suicides have increased in England and Wales; easy-to-access descriptions of this method on the Internet may have contributed to this rise. Our aim was to investigate the availability of information on using helium as a method of suicide and trends in searching about this method on the Internet. We analyzed trends in (a) Google searching (2004-2014) and (b) hits on a Wikipedia article describing helium as a method of suicide (2013-2014). We also investigated the extent to which helium was described as a method of suicide on web pages and discussion forums identified via Google. We found no evidence of rises in Internet searching about suicide using helium. News stories about helium suicides were associated with increased search activity. The Wikipedia article may have been temporarily altered to increase awareness of suicide using helium around the time of a celebrity suicide. Approximately one third of the links retrieved using Google searches for suicide methods mentioned helium. Information about helium as a suicide method is readily available on the Internet; the Wikipedia article describing its use was highly accessed following celebrity suicides. Availability of online information about this method may contribute to rises in helium suicides.

  6. COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES

    DEFF Research Database (Denmark)

    2017-01-01

    A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating … candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...

  7. An effective suggestion method for keyword search of databases

    KAUST Repository

    Huang, Hai

    2016-09-09

    This paper addresses the problem of providing high-quality suggestions for user keyword queries over databases. Under the assumption that the returned suggestions are independent, existing query suggestion methods over databases score candidate suggestions individually and return the top-k best of them. However, the top-k suggestions have high redundancy with respect to the topics. To provide informative suggestions, the returned k suggestions are expected to be diverse, i.e., to simultaneously maximize the relevance to the user query and the diversity with respect to topics that the user might be interested in. In this paper, an objective function considering both factors is defined for evaluating a suggestion set. We show that maximizing the objective function is a submodular function maximization problem subject to n matroid constraints, which is an NP-hard problem. A greedy approximate algorithm with a provable approximation ratio (formula given in the paper) is also proposed. Experimental results show that our method outperforms other methods in providing relevant and diverse suggestions. © 2016 Springer Science+Business Media New York
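    The greedy selection described above can be illustrated with a small marginal-gain loop. The sketch below is not the paper's algorithm or objective; it uses a simple MMR-style score (relevance minus redundancy) and word-overlap similarity purely as placeholders.

```python
def greedy_diverse_suggestions(candidates, relevance, similarity, k, lam=0.5):
    """Pick k suggestions greedily, trading relevance to the query against
    redundancy with suggestions already selected."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        def marginal_gain(s):
            redundancy = max((similarity(s, t) for t in selected), default=0.0)
            return lam * relevance[s] - (1 - lam) * redundancy
        best = max(remaining, key=marginal_gain)
        selected.append(best)
        remaining.remove(best)
    return selected


def jaccard(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

cands = ["cheap hotel paris", "hotel paris center", "paris museums", "paris weather"]
rel = dict(zip(cands, [0.9, 0.85, 0.55, 0.5]))
print(greedy_diverse_suggestions(cands, rel, jaccard, k=2))
# -> ['cheap hotel paris', 'hotel paris center']
```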

  8. Beam angle optimization for intensity-modulated radiation therapy using a guided pattern search method

    International Nuclear Information System (INIS)

    Rocha, Humberto; Dias, Joana M; Ferreira, Brígida C; Lopes, Maria C

    2013-01-01

    Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, beam directions continue, most of the time, to be selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam's-eye-view dose ray tracing metrics within a pattern search method framework for the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require only a few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search, since it allows searches away from the neighborhood of the current iterate. Beam's-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework to furnish a priori knowledge of the problem, so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach for the optimization of the BAO problem. (paper)
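    The poll step described above can be sketched with a basic coordinate pattern search; the search step guided by beam's-eye-view dose metrics and the actual BAO objective are not reproduced here, and the toy objective is an assumption.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal derivative-free pattern search (poll step only).

    Polls the 2n coordinate directions around the current point; if no
    polled point improves the objective, the mesh size is halved."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5   # refine the mesh
    return x, fx


# Toy objective: a shifted quadratic with its minimum at (2, -3).
print(pattern_search(lambda v: (v[0] - 2)**2 + (v[1] + 3)**2, [0.0, 0.0]))
```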

  9. A Polymerase Chain Reaction-Based Method for Isolating Clones from a Complementary DNA Library in Sheep

    Science.gov (United States)

    Friis, Thor Einar; Stephenson, Sally; Xiao, Yin; Whitehead, Jon

    2014-01-01

    The sheep (Ovis aries) is favored by many musculoskeletal tissue engineering groups as a large animal model because of its docile temperament and ease of husbandry. The size and weight of sheep are comparable to those of humans, which allows for the use of implants and fixation devices used in human clinical practice. The construction of a complementary DNA (cDNA) library can capture the expression of genes in both a tissue- and time-specific manner. cDNA libraries have been a consistent source of gene discovery ever since the technology became commonplace more than three decades ago. Here, we describe the construction of a cDNA library using cells derived from sheep bones based on the pBluescript cDNA kit. Thirty clones were picked at random and sequenced. This led to the identification of a novel gene, C12orf29, which our initial experiments indicate is involved in skeletal biology. We also describe a polymerase chain reaction-based cDNA clone isolation method that allows the isolation of genes of interest from a cDNA library pool. The techniques outlined here can be applied in-house by smaller tissue engineering groups to generate tools for biomolecular research for large preclinical animal studies, and they highlight the power of standard cDNA library protocols to uncover novel genes. PMID:24447069

  10. Investigating Implementation Methods and Perceived Learning Outcomes of Children’s Library Instruction Programs: A Case of Parent-child Doctors’ Mailbox in National Library

    Directory of Open Access Journals (Sweden)

    Yu-Hua Chang

    2017-06-01

    Full Text Available This study aimed to investigate the implementation methods, process and perceived learning outcomes of children's library instruction programs. This study adopted a qualitative approach with the Parent-child Doctors' Mailbox program in the National Library of Public Information. Observation (including thinking aloud), interviews and documents were used for data collection in order to elicit the perspectives of 31 children, 26 parents and 3 librarians. The main findings derived from this study can be summarized as follows: (1) Parent-child Doctors' Mailbox integrated play (e.g., prize quizzes and reading guides) into the program design, which was based upon the development of different age groups. Children needed to go to the circulation desk in person in order to get designated books and answer sheets. Children earned points to redeem for prizes by answering questions correctly. (2) Motivations for children's participation in the program were categorized as external (e.g., prizes, recommendations from friends, and serendipity) and internal (e.g., cultivating habits of reading and writing, and siblings' company). (3) Children's perceived learning outcomes of participation in the program included improving children's attention span, the positive influence of messages delivered by books on children, and the positive progress of children's reading, writing, logical thinking and interpersonal skills. (4) Parents' roles in children's participation in the program included accompanying children and providing reactive assistance. The roles of librarians involved administrative work, encouragement and befriending children. [Article content in Chinese]

  11. CORALINA: a universal method for the generation of gRNA libraries for CRISPR-based screening.

    Science.gov (United States)

    Köferle, Anna; Worf, Karolina; Breunig, Christopher; Baumann, Valentin; Herrero, Javier; Wiesbeck, Maximilian; Hutter, Lukas H; Götz, Magdalena; Fuchs, Christiane; Beck, Stephan; Stricker, Stefan H

    2016-11-14

    The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need for complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, demonstrate its reproducibility in covering the most relevant genomic features (including regulatory, coding and non-coding sequences), and confirm the functionality of CORALINA-generated gRNAs. Its simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary prerequisite for less biased large-scale genomic and epigenomic screens.

  12. Designing Focused Chemical Libraries Enriched in Protein-Protein Interaction Inhibitors using Machine-Learning Methods

    Science.gov (United States)

    Reynès, Christelle; Host, Hélène; Camproux, Anne-Claude; Laconde, Guillaume; Leroux, Florence; Mazars, Anne; Deprez, Benoit; Fahraeus, Robin; Villoutreix, Bruno O.; Sperandio, Olivier

    2010-01-01

    Protein-protein interactions (PPIs) may represent one of the next major classes of therapeutic targets. So far, only a minute fraction of the estimated 650,000 PPIs that comprise the human interactome are known, with a tiny number of complexes being drugged. Such intricate biological systems cannot be cost-efficiently tackled using conventional high-throughput screening methods. Rather, the time has come to design new strategies that will maximize the chance of hit identification through a rationalization of the PPI inhibitor chemical space and the design of PPI-focused compound libraries (global or target-specific). Here, we train machine-learning-based models, mainly decision trees, using a dataset of known PPI inhibitors and of regular drugs in order to determine a global physico-chemical profile for putative PPI inhibitors. This statistical analysis unravels two important molecular descriptors for PPI inhibitors, characterizing specific molecular shapes and the presence of a privileged number of aromatic bonds. The best model has been transposed into a computer program, PPI-HitProfiler, that can output from any drug-like compound collection a focused chemical library enriched in putative PPI inhibitors. Our PPI inhibitor profiler is challenged on the experimental screening results of 11 different PPIs, among which the p53/MDM2 interaction was screened within our own CDithem platform; in addition to validating our concept, this led to the identification of 4 novel p53/MDM2 inhibitors. Collectively, our tool shows robust behavior on the 11 experimental datasets by correctly profiling 70% of the experimentally identified hits while removing 52% of the inactive compounds from the initial compound collections. We strongly believe that this new tool can be used as a global PPI inhibitor profiler prior to screening assays to reduce the size of the compound collections to be experimentally screened while keeping most of the true PPI inhibitors. PPI-HitProfiler is
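    A minimal sketch of the kind of decision-tree profiling described above, not PPI-HitProfiler itself: a tree is trained on two hypothetical descriptors (a molecular-shape score and an aromatic-bond count) and then used to filter a compound collection. All descriptor values are invented.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented descriptor values purely for illustration:
# [shape score, number of aromatic bonds]
X = [
    [0.82, 18], [0.79, 16], [0.88, 20], [0.75, 17],   # known PPI inhibitors
    [0.35,  6], [0.41,  8], [0.28,  5], [0.50,  9],   # regular drugs
]
y = [1, 1, 1, 1, 0, 0, 0, 0]

profiler = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Profile a new compound collection and keep only the putative PPI binders.
collection = [[0.80, 19], [0.33, 7], [0.71, 15]]
keep = [c for c, label in zip(collection, profiler.predict(collection)) if label == 1]
print(keep)   # the two compounds with inhibitor-like descriptor profiles
```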

  13. Designing focused chemical libraries enriched in protein-protein interaction inhibitors using machine-learning methods.

    Directory of Open Access Journals (Sweden)

    Christelle Reynès

    2010-03-01

    Full Text Available Protein-protein interactions (PPIs) may represent one of the next major classes of therapeutic targets. So far, only a minute fraction of the estimated 650,000 PPIs that comprise the human interactome are known, with a tiny number of complexes being drugged. Such intricate biological systems cannot be cost-efficiently tackled using conventional high-throughput screening methods. Rather, the time has come to design new strategies that will maximize the chance of hit identification through a rationalization of the PPI inhibitor chemical space and the design of PPI-focused compound libraries (global or target-specific). Here, we train machine-learning-based models, mainly decision trees, using a dataset of known PPI inhibitors and of regular drugs in order to determine a global physico-chemical profile for putative PPI inhibitors. This statistical analysis unravels two important molecular descriptors for PPI inhibitors, characterizing specific molecular shapes and the presence of a privileged number of aromatic bonds. The best model has been transposed into a computer program, PPI-HitProfiler, that can output from any drug-like compound collection a focused chemical library enriched in putative PPI inhibitors. Our PPI inhibitor profiler is challenged on the experimental screening results of 11 different PPIs, among which the p53/MDM2 interaction was screened within our own CDithem platform; in addition to validating our concept, this led to the identification of 4 novel p53/MDM2 inhibitors. Collectively, our tool shows robust behavior on the 11 experimental datasets by correctly profiling 70% of the experimentally identified hits while removing 52% of the inactive compounds from the initial compound collections. We strongly believe that this new tool can be used as a global PPI inhibitor profiler prior to screening assays to reduce the size of the compound collections to be experimentally screened while keeping most of the true PPI inhibitors. PPI

  14. Designing focused chemical libraries enriched in protein-protein interaction inhibitors using machine-learning methods.

    Science.gov (United States)

    Reynès, Christelle; Host, Hélène; Camproux, Anne-Claude; Laconde, Guillaume; Leroux, Florence; Mazars, Anne; Deprez, Benoit; Fahraeus, Robin; Villoutreix, Bruno O; Sperandio, Olivier

    2010-03-05

    Protein-protein interactions (PPIs) may represent one of the next major classes of therapeutic targets. So far, only a minute fraction of the estimated 650,000 PPIs that comprise the human interactome are known, with a tiny number of complexes being drugged. Such intricate biological systems cannot be cost-efficiently tackled using conventional high-throughput screening methods. Rather, the time has come to design new strategies that will maximize the chance of hit identification through a rationalization of the PPI inhibitor chemical space and the design of PPI-focused compound libraries (global or target-specific). Here, we train machine-learning-based models, mainly decision trees, using a dataset of known PPI inhibitors and of regular drugs in order to determine a global physico-chemical profile for putative PPI inhibitors. This statistical analysis unravels two important molecular descriptors for PPI inhibitors, characterizing specific molecular shapes and the presence of a privileged number of aromatic bonds. The best model has been transposed into a computer program, PPI-HitProfiler, that can output from any drug-like compound collection a focused chemical library enriched in putative PPI inhibitors. Our PPI inhibitor profiler is challenged on the experimental screening results of 11 different PPIs, among which the p53/MDM2 interaction was screened within our own CDithem platform; in addition to validating our concept, this led to the identification of 4 novel p53/MDM2 inhibitors. Collectively, our tool shows robust behavior on the 11 experimental datasets by correctly profiling 70% of the experimentally identified hits while removing 52% of the inactive compounds from the initial compound collections. We strongly believe that this new tool can be used as a global PPI inhibitor profiler prior to screening assays to reduce the size of the compound collections to be experimentally screened while keeping most of the true PPI inhibitors. PPI-HitProfiler is

  15. XML in Libraries.

    Science.gov (United States)

    Tennant, Roy, Ed.

    This book presents examples of how libraries are using XML (eXtensible Markup Language) to solve problems, expand services, and improve systems. Part I contains papers on using XML in library catalog records: "Updating MARC Records with XMLMARC" (Kevin S. Clarke, Stanford University) and "Searching and Retrieving XML Records via the…

  16. A review of the scientific rationale and methods used in the search for other planetary systems

    Science.gov (United States)

    Black, D. C.

    1985-01-01

    Planetary systems appear to be one of the crucial links in the chain leading from simple molecules to living systems, particularly complex (intelligent?) living systems. Although there is currently no observational proof of the existence of any planetary system other than our own, techniques are now being developed which will permit a comprehensive search for other planetary systems. The scientific rationale for and methods used in such a search effort are reviewed here.

  17. The Personal Virtual Library

    CERN Document Server

    Le Meur, Jean-Yves

    1998-01-01

    Looking for "library" in the usual search engines of the World Wide Web gives: "Infoseek found 3,593,126 pages containing the word library" and it nicely proposes: "Search only within these 3,59 3,126 pages ?" "Yahoo! Found 1299 categories and 8669 sites for library" "LycOs: 1-10 von 512354 relevanten Ergebnissen" "AltaVista: About 14830527 documents match your query" and at the botto m: "Word count: library: 15466897" ! Excite: Top 10 matches and it does not say how many can be browsed... "Library" on the World Wide Web is really popular. At least fiveteen million pages ar e supposed to contain this word. Half of them may have disappeared by now but one more hit will be added once the search robots will have indexed this document ! The notion of Personal Library i s a modest attempt, in a small environment like a library, to give poor users lost in cyber-libraries the opportunity to keep their own private little shelves - virtually. In this paper, we will l ook at the usual functionalities of library systems...

  18. Low-Mode Conformational Search Method with Semiempirical Quantum Mechanical Calculations: Application to Enantioselective Organocatalysis.

    Science.gov (United States)

    Kamachi, Takashi; Yoshizawa, Kazunari

    2016-02-22

    A conformational search program for finding low-energy conformations of large noncovalent complexes has been developed. A quantitatively reliable semiempirical quantum mechanical method, PM6-DH+, which is able to accurately describe noncovalent interactions at a low computational cost, was employed, in contrast to conventional conformational search programs in which molecular mechanical methods are usually adopted. Our approach is based on the low-mode method, whereby an initial structure is perturbed along one of its low-mode eigenvectors to generate new conformations. This method was applied to determine the most stable transition-state conformations for enantioselective alkylation by the Maruoka and cinchona alkaloid catalysts and for Hantzsch ester hydrogenation of imines by a chiral phosphoric acid. Besides successfully reproducing the previously reported most stable DFT conformations, the conformational search with semiempirical quantum mechanical calculations discovered a new, more stable conformation at a low computational cost.
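    The core low-mode move, displacing a geometry along a low-frequency eigenvector of the Hessian, can be sketched as follows. The Hessian itself would come from an energy model (the paper uses PM6-DH+ calculations); here it is simply an input, and the mode count and displacement amplitude are assumptions.

```python
import numpy as np

def low_mode_perturbations(coords, hessian, n_modes=3, amplitude=0.5):
    """Generate trial conformations by displacing along low-frequency modes.

    coords  : (N, 3) Cartesian coordinates
    hessian : (3N, 3N) symmetric second-derivative matrix from some energy model

    The six lowest eigenvalues (overall translations/rotations) are skipped
    and the next `n_modes` eigenvectors are used as perturbation directions."""
    eigvals, eigvecs = np.linalg.eigh(hessian)
    trials = []
    for k in range(6, 6 + n_modes):
        direction = eigvecs[:, k].reshape(coords.shape)
        for sign in (+1.0, -1.0):
            trials.append(coords + sign * amplitude * direction)
    return trials
```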

  19. A new method for the construction of a mutant library with a predictable occurrence rate using Poisson distribution.

    Science.gov (United States)

    Seong, Ki Moon; Park, Hweon; Kim, Seong Jung; Ha, Hyo Nam; Lee, Jae Yung; Kim, Joon

    2007-06-01

    A yeast transcriptional activator, Gcn4p, induces the expression of genes that are involved in amino acid and purine biosynthetic pathways under amino acid starvation. Gcn4p has an acidic activation domain in the central region and a bZIP domain in the C-terminus that is divided into the DNA-binding motif and the dimerization leucine zipper motif. In order to identify amino acids in the DNA-binding motif of Gcn4p that are involved in transcriptional activation, we constructed mutant libraries of the DNA-binding motif through an innovative application of random mutagenesis. A mutant library made from oligonucleotides that were randomly mutated according to the Poisson distribution showed that the actual mutation frequency was in good agreement with the expected values. This method can save the time and effort needed to create a mutant library with a predictable mutation frequency. Based on studies using the mutant libraries constructed by the new method, specific residues of the DNA-binding domain of Gcn4p appear to be involved in its transcriptional activity at a conserved binding site.
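    The predictable occurrence rate comes from the Poisson model: if each position in a randomized window mutates with a small probability, the number of mutations per clone follows a Poisson distribution with mean lambda = window length x per-base rate. The numbers in the sketch below are illustrative, not those of the study.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(exactly k mutations) for a Poisson-distributed mutation count."""
    return lam**k * exp(-lam) / factorial(k)

# Illustrative numbers: a 30-base randomized window doped so that each base
# mutates with probability 0.05 gives an expected lambda of 1.5 mutations.
length, per_base_rate = 30, 0.05
lam = length * per_base_rate

for k in range(4):
    print(f"clones with {k} mutation(s): {poisson_pmf(k, lam):.1%}")
print(f"clones with at least one mutation: {1 - poisson_pmf(0, lam):.1%}")
```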

  20. Best methods for evaluating educational impact: a comparison of the efficacy of commonly used measures of library instruction.

    Science.gov (United States)

    Schilling, Katherine; Applegate, Rachel

    2012-10-01

    Libraries are increasingly called upon to demonstrate student learning outcomes and the tangible benefits of library educational programs. This study reviewed and compared the efficacy of traditionally used measures for assessing library instruction, examining the benefits and drawbacks of assessment measures and exploring the extent to which knowledge, attitudes, and behaviors actually paralleled demonstrated skill levels. An overview of recent literature on the evaluation of information literacy education addressed these questions: (1) What evaluation measures are commonly used for evaluating library instruction? (2) What are the pros and cons of popular evaluation measures? (3) What are the relationships between measures of skills versus measures of attitudes and behavior? Research outcomes were used to identify relationships between measures of attitudes, behaviors, and skills, which are typically gathered via attitudinal surveys, written skills tests, or graded exercises. Results provide useful information about the efficacy of instructional evaluation methods, including showing significant disparities between attitudes, skills, and information usage behaviors. This information can be used by librarians to implement the most appropriate evaluation methods for measuring important variables that accurately demonstrate students' attitudes, behaviors, or skills.

  1. Using Digital Libraries Non-Visually: Understanding the Help-Seeking Situations of Blind Users

    Science.gov (United States)

    Xie, Iris; Babu, Rakesh; Joo, Soohyung; Fuller, Paige

    2015-01-01

    Introduction: This study explores blind users' unique help-seeking situations in interacting with digital libraries. In particular, help-seeking situations were investigated at both the physical and cognitive levels. Method: Fifteen blind participants performed three search tasks, including known-item search, specific information search, and…

  2. A cross-correlation method to search for gravitational wave bursts with AURIGA and Virgo

    NARCIS (Netherlands)

    Bignotto, M.; Bonaldi, M.; Camarda, M.; Cerdonio, M.; Conti, L.; Drago, M.; Falferi, P.; Liguori, N.; Longo, S.; Mezzena, R.; Mion, A.; Ortolan, A.; Prodi, G. A.; Re, V.; Salemi, F.; Taffarello, L.; Vedovato, G.; Vinante, A.; Vitale, S.; Zendri, J. -P.; Acernese, F.; Alshourbagy, Mohamed; Amico, Paolo; Antonucci, Federica; Aoudia, S.; Astone, P.; Avino, Saverio; Baggio, L.; Ballardin, G.; Barone, F.; Barsotti, L.; Barsuglia, M.; Bauer, Th. S.; Bigotta, Stefano; Birindelli, Simona; Boccara, Albert-Claude; Bondu, F.; Bosi, Leone; Braccini, Stefano; Bradaschia, C.; Brillet, A.; Brisson, V.; Buskulic, D.; Cagnoli, G.; Calloni, E.; Campagna, Enrico; Carbognani, F.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cesarini, E.; Chassande-Mottin, E.; Clapson, A-C; Cleva, F.; Coccia, E.; Corda, C.; Corsi, A.; Cottone, F.; Coulon, J. -P.; Cuoco, E.; D'Antonio, S.; Dari, A.; Dattilo, V.; Davier, M.; Rosa, R.; Del Prete, M.; Di Fiore, L.; Di Lieto, A.; Emilio, M. Di Paolo; Di Virgilio, A.; Evans, M.; Fafone, V.; Ferrante, I.; Fidecaro, F.; Fiori, I.; Flaminio, R.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Gammaitoni, L.; Garufi, F.; Genin, E.; Gennai, A.; Giazotto, A.; Giordano, L.; Granata, V.; Greverie, C.; Grosjean, D.; Guidi, G.; Hamdani, S.U.; Hebri, S.; Heitmann, H.; Hello, P.; Huet, D.; Kreckelbergh, S.; La Penna, P.; Laval, M.; Leroy, N.; Letendre, N.; Lopez, B.; Lorenzini, M.; Loriette, V.; Losurdo, G.; Mackowski, J. -M.; Majorana, E.; Man, C. N.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marque, J.; Martelli, F.; Masserot, A.; Menzinger, F.; Milano, L.; Minenkov, Y.; Moins, C.; Moreau, J.; Morgado, N.; Mosca, S.; Mours, B.; Neri, I.; Nocera, F.; Pagliaroli, G.; Palomba, C.; Paoletti, F.; Pardi, S.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Piergiovanni, F.; Pinard, L.; Poggiani, R.; Punturo, M.; Puppo, P.; Rapagnani, P.; Regimbau, T.; Remillieux, A.; Ricci, F.; Ricciardi, I.; Rocchi, A.; Rolland, L.; Romano, R.; Ruggi, P.; Russo, G.; Solimeno, S.; Spallicci, A.; Swinkels, B. L.; Tarallo, M.; Terenzi, R.; Toncelli, A.; Tonelli, M.; Tournefier, E.; Travasso, F.; Vajente, G.; van den Brand, J. F. J.; van der Putten, S.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinet, J. -Y.; Vocca, H.; Yvert, M.

    2008-01-01

    We present a method to search for transient gravitational waves using a network of detectors with different spectral and directional sensitivities: the interferometer Virgo and the bar detector AURIGA. The data analysis method is based on the measurements of the correlated energy in the network by

  3. A three-term conjugate gradient method under the strong-Wolfe line search

    Science.gov (United States)

    Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa

    2017-08-01

    Recently, numerous studies have been devoted to conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method is proposed for unconstrained optimization which always satisfies the sufficient descent condition; it is named Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL). Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.
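    The overall structure of a conjugate gradient iteration under a strong-Wolfe line search can be sketched as below. This is a generic Polak-Ribiere-Polyak (PRP+) loop built on SciPy's line search routine, not the TTRMIL three-term formula itself, whose coefficients are not reproduced here.

```python
import numpy as np
from scipy.optimize import line_search

def cg_prp(f, grad, x0, tol=1e-6, max_iter=500):
    """Generic nonlinear conjugate gradient with a strong-Wolfe line search.

    Uses the classical PRP+ beta as a stand-in; the TTRMIL method in the
    paper uses a three-term search direction with its own coefficients."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:          # line search failed; restart with a tiny steepest-descent step
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ restart rule
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Rosenbrock test function; the iterates should approach (1, 1).
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2*(1 - v[0]) - 400*v[0]*(v[1] - v[0]**2),
                           200*(v[1] - v[0]**2)])
print(cg_prp(f, grad, [-1.2, 1.0]))
```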

  4. Featured Library: Parrish Library

    OpenAIRE

    Kirkwood, Hal P, Jr

    2015-01-01

    The Roland G. Parrish Library of Management & Economics is located within the Krannert School of Management at Purdue University. Between 2005 and 2007, work was completed on a white paper that focused on a student-centered vision for the Management & Economics Library. The next step was a massive collection reduction and a re-envisioning of both the services and the space of the library. Thus began a 3-phase renovation from a 2-floor standard, collection-focused library into a single-floor, 18,000s...

  5. Dual-mode nested search method for categorical uncertain multi-objective optimization

    Science.gov (United States)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  6. New Internet search volume-based weighting method for integrating various environmental impacts

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    2016-01-15

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts into a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinions of a limited group of people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts, using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficients between the new and existing weighting factors ranged from 0.8743 to 0.9889. It turned out that the new weighting method produces reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining weighting factors. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can produce reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
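    The mechanics of such a weighting scheme are straightforward: normalize the search volumes of terms representing each impact category into weights that sum to one, then check agreement with an existing weighting set via Pearson's correlation. All volumes and reference weights below are invented for illustration and are not the values from the study.

```python
from scipy.stats import pearsonr

# Invented yearly search volumes for terms representing six impact categories.
search_volume = {
    "global warming":       1_200_000,
    "ozone depletion":        150_000,
    "acidification":           90_000,
    "eutrophication":          60_000,
    "photochemical smog":     110_000,
    "resource depletion":     300_000,
}

total = sum(search_volume.values())
new_weights = {cat: v / total for cat, v in search_volume.items()}

# Hypothetical weighting factors from an existing panel-based method.
existing_weights = {
    "global warming": 0.55, "ozone depletion": 0.08, "acidification": 0.05,
    "eutrophication": 0.04, "photochemical smog": 0.07, "resource depletion": 0.21,
}

cats = list(search_volume)
r, _ = pearsonr([new_weights[c] for c in cats], [existing_weights[c] for c in cats])
print({c: round(new_weights[c], 3) for c in cats})
print(f"Pearson correlation with the existing weights: {r:.3f}")
```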

  7. New Internet search volume-based weighting method for integrating various environmental impacts

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts into a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinions of a limited group of people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts, using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficients between the new and existing weighting factors ranged from 0.8743 to 0.9889. It turned out that the new weighting method produces reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining weighting factors. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can produce reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.

  8. A Fragment-Based Method of Creating Small-Molecule Libraries to Target the Aggregation of Intrinsically Disordered Proteins.

    Science.gov (United States)

    Joshi, Priyanka; Chia, Sean; Habchi, Johnny; Knowles, Tuomas P J; Dobson, Christopher M; Vendruscolo, Michele

    2016-03-14

    The aggregation process of intrinsically disordered proteins (IDPs) has been associated with a wide range of neurodegenerative disorders, including Alzheimer's and Parkinson's diseases. Currently, however, no drug in clinical use targets IDP aggregation. To facilitate drug discovery programs in this important and challenging area, we describe a fragment-based approach for generating small-molecule libraries that target specific IDPs. The method is based on the use of molecular fragments extracted from compounds reported in the literature to inhibit the aggregation of IDPs. These fragments are used to screen existing large generic libraries of small molecules to form smaller libraries specific for given IDPs. We illustrate this approach by describing three distinct small-molecule libraries to target Aβ, tau, and α-synuclein, three IDPs implicated in Alzheimer's and Parkinson's diseases. The strategy described here offers novel opportunities for the identification of effective molecular scaffolds for drug discovery for neurodegenerative disorders and provides insights into the mechanism of small-molecule binding to IDPs.
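    The general fragment-screening idea, keeping compounds from a generic library that contain at least one fragment found in reported aggregation inhibitors, can be sketched with RDKit substructure matching. The SMARTS fragments and SMILES library below are illustrative assumptions, not the fragment set or libraries from the paper.

```python
from rdkit import Chem

# Illustrative fragments (as SMARTS), not the actual fragment set from the paper.
inhibitor_fragments = [Chem.MolFromSmarts(s) for s in (
    "c1ccc2ccccc2c1",      # naphthalene-like ring system
    "c1ccc(O)cc1",         # phenol
    "C(=O)Nc1ccccc1",      # anilide
)]

def matches_any_fragment(smiles):
    """True if the compound contains at least one inhibitor-derived fragment."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return any(mol.HasSubstructMatch(frag) for frag in inhibitor_fragments)

# Screen a toy "generic library" and keep the fragment-matching compounds.
generic_library = ["CCO", "Oc1ccc(CCN)cc1", "CC(=O)Nc1ccc(O)cc1", "CCCCCC"]
focused_library = [s for s in generic_library if matches_any_fragment(s)]
print(focused_library)   # the phenol- and anilide-containing entries are retained
```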

  9. A Study on Librarian Service Providers' Awareness and Perceptions of Library Services for the Disabled

    Directory of Open Access Journals (Sweden)

    Younghee Noh

    2011-12-01

    Full Text Available The purpose of this study is to improve library promotional marketing for the disabled by identifying the requirements of public library disability services. This study aimed to investigate librarian service providers' awareness of library programs for the disabled in order to prepare a systematic plan for promoting such library services. The research methods used are a literature analysis and a survey. First, the ratio of respondents with experience promoting activities and services for the disabled was less than 50%. Second, regarding methods for promoting library disability services, the respondents used library homepages, press releases, library user guides, library newsletters, and library pamphlets, in that order. Third, when asked what kind of PR media the library disability service providers had experience with and how often they use it, library boards and banners were the most common response. Fourth, suggested improvements to the current design and content of PR materials included clearer word choice (or greater understandability), more detailed descriptions, simpler layouts, and more interesting or eye-catching content, in that order. Fifth, the library disability services most in need of public relations were guide information for library disability services, Library and Information Service (DOI) services and search services, using alternative materials and the library collection, and aiding the information search. Overall, when evaluating the promotion of disability services in Korea, the library's public relations for disabled services needs to improve because currently neither librarians nor the disabled community they are targeting has frequent or quality experience with it. Thus, the policy department for library disability services must develop a variety of promotional strategies adjusted for each type of disability and distribute PR materials to service providers individually, making sure to utilize effective PR

  10. A conjugate gradient method with descent properties under strong Wolfe line search

    Science.gov (United States)

    Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the optimization methods that are often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to the existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under the strong Wolfe line search. Overall, this new method performs efficiently and is comparable to the other well-known methods.

  11. Surfing for suicide methods and help: content analysis of websites retrieved with search engines in Austria and the United States.

    Science.gov (United States)

    Till, Benedikt; Niederkrotenthaler, Thomas

    2014-08-01

    The Internet provides a variety of resources for individuals searching for suicide-related information. Structured content-analytic approaches to assess intercultural differences in web contents retrieved with method-related and help-related searches are scarce. We used the 2 most popular search engines (Google and Yahoo/Bing) to retrieve US-American and Austrian search results for the term suicide, method-related search terms (e.g., suicide methods, how to kill yourself, painless suicide, how to hang yourself), and help-related terms (e.g., suicidal thoughts, suicide help) on February 11, 2013. In total, 396 websites retrieved with US search engines and 335 websites from Austrian searches were analyzed with content analysis on the basis of current media guidelines for suicide reporting. We assessed the quality of websites and compared findings across search terms and between the United States and Austria. In both countries, protective outweighed harmful website characteristics by approximately 2:1. Websites retrieved with method-related search terms (e.g., how to hang yourself) contained more harmful (United States: P search engines generally had more protective characteristics (P search engines. Resources with harmful characteristics were better ranked than those with protective characteristics (United States: P < .01, Austria: P < .05). The quality of suicide-related websites obtained depends on the search terms used. Preventive efforts to improve the ranking of preventive web content, particularly regarding method-related search terms, seem necessary. © Copyright 2014 Physicians Postgraduate Press, Inc.

  12. Reading for Education: the role of libraries | Dadzie | Ghana Library ...

    African Journals Online (AJOL)

    Reading for Education: the role of libraries.

  13. Distributed Cooperative Search Control Method of Multiple UAVs for Moving Target

    Directory of Open Access Journals (Sweden)

    Chang-jian Ru

    2015-01-01

    Full Text Available To reduce the impact of uncertainties caused by unknown motion parameters on the search plan for moving targets and to improve the search efficiency of UAVs, a novel distributed multi-UAV cooperative search control method for moving targets is proposed in this paper. Based on the detection results of onboard sensors, the target probability map is updated using Bayesian theory. A Gaussian distribution of the target transition probability density function is introduced to calculate the prediction probability of moving target existence, and then the target probability map can be further updated in real time. A performance index function combining target cost, environment cost, and cooperative cost is constructed, and the cooperative search problem can be transformed into a central optimization problem. To improve computational efficiency, the distributed model predictive control method is presented, and thus the control command of each UAV can be obtained. The simulation results verify that the proposed method can reduce the blindness of UAV searching and effectively improve the overall efficiency of the team.
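    The Bayesian update of the target probability map can be illustrated for a single negative sensor observation; the prediction step with the Gaussian transition model and the cooperative cost function are omitted, and the detection and false-alarm probabilities below are assumptions.

```python
import numpy as np

def update_probability_map(prob_map, cell, detected, p_d=0.9, p_f=0.05):
    """Bayesian update of a target-existence probability map after one
    sensor observation of `cell` (row, col).

    p_d: probability of detecting the target when it is in the cell
    p_f: probability of a false alarm when it is not"""
    p = prob_map[cell]
    if detected:
        likelihood_target, likelihood_empty = p_d, p_f
    else:
        likelihood_target, likelihood_empty = 1.0 - p_d, 1.0 - p_f
    posterior = (likelihood_target * p) / (likelihood_target * p +
                                           likelihood_empty * (1.0 - p))
    prob_map[cell] = posterior
    return prob_map

# Uniform prior over a 4x4 search area; one negative scan of cell (1, 2).
grid = np.full((4, 4), 1.0 / 16)
update_probability_map(grid, (1, 2), detected=False)
print(grid[1, 2])   # drops below the 0.0625 prior
```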

  14. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when an information system is deployed over the Internet, the amount of information on the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely to be submerged in the ocean of search results from those tools. By automatically clustering the results into different groups based on subjects, a search engine with a clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use HowNet to calculate the similarities of the words derived from the LCSs and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate the snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster the snippets. Extensive experimental results show that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.

  15. The search conference as a method in planning community health promotion actions

    Directory of Open Access Journals (Sweden)

    Eva Magnus

    2016-08-01

    Full Text Available Aims: The aim of this article is to describe and discuss how the search conference can be used as a method for planning health promotion actions in local communities. Design and methods: The article draws on experiences with using the method for an innovative project in health promotion in three Norwegian municipalities. The method is described both in general and how it was specifically adopted for the project. Results and conclusions: The search conference as a method was used to develop evidence-based health promotion action plans. With its use of both bottom-up and top-down approaches, this method is a relevant strategy for involving a community in the planning stages of health promotion actions in line with political expectations of participation, ownership, and evidence-based initiatives.

  16. Parallel Computation on Multicore Processors Using Explicit Form of the Finite Element Method and C++ Standard Libraries

    Directory of Open Access Journals (Sweden)

    Rek Václav

    2016-11-01

    Full Text Available In this paper, the form of modifications to existing sequential code written in the C or C++ programming language for the calculation of various kinds of structures using the explicit form of the Finite Element Method (Dynamic Relaxation Method, Explicit Dynamics) in the NEXX system is introduced. The NEXX system is the core of the engineering software NEXIS, Scia Engineer, RFEM and RENEX. It supports multithreaded execution, which can now be implemented at the level of the native C++ programming language using standard libraries. Thanks to the high degree of abstraction that the contemporary C++ programming language provides, a library created in this way can be generalized for other uses of parallelism in computational mechanics.

  17. A new greedy search method for the design of digital IIR filter

    Directory of Open Access Journals (Sweden)

    Ranjit Kaur

    2015-07-01

    Full Text Available A new greedy search method is applied in this paper to design optimal digital infinite impulse response (IIR) filters. The greedy search method is based on binary successive approximation (BSA) and evolutionary search (ES). The suggested greedy search method optimizes the magnitude response and the phase response simultaneously and also finds the lowest order of the filter. The order of the filter is controlled by a control gene whose value is optimized along with the filter coefficients to obtain the optimum order of the designed IIR filter. The stability constraints of the IIR filter are taken care of during the design procedure. To determine the trade-off relationship between conflicting objectives in the non-inferior domain, the weighting method is exploited. The proposed approach is effectively applied to solve the multiobjective optimization problems of designing digital low-pass (LP), high-pass (HP), bandpass (BP), and bandstop (BS) filters. It is demonstrated that this technique not only fulfills all types of filter performance requirements but also finds the lowest order of the filter. The computational experiments show that the proposed approach gives better digital IIR filters than the existing evolutionary algorithm (EA) based methods.

  18. Introducing PALETTE: an iterative method for conducting a literature search for a review in palliative care.

    Science.gov (United States)

    Zwakman, Marieke; Verberne, Lisa M; Kars, Marijke C; Hooft, Lotty; van Delden, Johannes J M; Spijker, René

    2018-06-02

    In the rapidly developing specialty of palliative care, literature reviews have become increasingly important for informing and improving the field. When applying widely used literature review methods developed for intervention studies to palliative care, challenges are encountered, such as the heterogeneity of palliative care in practice (a wide range of domains in patient characteristics, stages of illness and stakeholders), the explorative character of review questions, and poorly defined keywords and concepts. To overcome these challenges and to provide guidance for researchers conducting a literature search for a review in palliative care, the Palliative cAre Literature rEview iTeraTive mEthod (PALETTE), a pragmatic framework, was developed. We describe PALETTE in detail. PALETTE consists of four phases: developing the review question, building the search strategy, validating the search strategy and performing the search. The framework incorporates different information retrieval techniques (contacting experts, pearl growing, citation tracking and Boolean searching) in a transparent way to maximize the retrieval of literature relevant to the topic of interest. The different components and techniques are repeated until no new articles qualify for inclusion. The phases within PALETTE are interconnected by a recurrent process of validation on 'golden bullets' (articles that undoubtedly should be part of the review), citation tracking and concept terminology reflecting the review question. To give insight into the value of PALETTE, we compared it with the recommended search method for reviews of intervention studies. By using PALETTE on two palliative care literature reviews, we were able to improve our review questions and search strategies. Moreover, in comparison with the recommended search for intervention reviews, the number of articles that needed to be screened was decreased while more relevant articles were retrieved. Overall, PALETTE

  19. Automated recycling of chemistry for virtual screening and library design.

    Science.gov (United States)

    Vainio, Mikko J; Kogej, Thierry; Raubacher, Florian

    2012-07-23

    An early stage drug discovery project needs to identify a number of chemically diverse and attractive compounds. These hit compounds are typically found through high-throughput screening campaigns. The diversity of the chemical libraries used in screening is therefore important. In this study, we describe a virtual high-throughput screening system called Virtual Library. The system automatically "recycles" validated synthetic protocols and available starting materials to generate a large number of virtual compound libraries, and allows for fast searches in the generated libraries using a 2D fingerprint based screening method. Virtual Library links the returned virtual hit compounds back to experimental protocols to quickly assess the synthetic accessibility of the hits. The system can be used as an idea generator for library design to enrich the screening collection and to explore the structure-activity landscape around a specific active compound.
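    A 2D-fingerprint similarity search of the kind described above reduces to computing Tanimoto coefficients between a query fingerprint and every virtual compound. The sketch below represents fingerprints as sets of "on" bits and uses invented identifiers; the actual system's fingerprint type and its bookkeeping of synthetic protocols are not reproduced.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient for two fingerprints stored as sets of 'on' bits."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def screen_virtual_library(query_fp, library, threshold=0.7):
    """Return (id, similarity) for virtual compounds similar to the query,
    so the hits can be traced back to their synthetic protocols."""
    hits = [(cid, tanimoto(query_fp, fp)) for cid, fp in library.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

# Toy fingerprints: sets of hashed substructure indices (invented).
library = {
    "VL-000001": {3, 17, 42, 77, 91},
    "VL-000002": {3, 17, 42, 77, 90, 91},
    "VL-000003": {5, 8, 13, 21},
}
query = {3, 17, 42, 77, 91}
print(screen_virtual_library(query, library))
# -> [('VL-000001', 1.0), ('VL-000002', 0.833...)]
```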

  20. Comparison of several chemometric methods of libraries and classifiers for the analysis of expired drugs based on Raman spectra.

    Science.gov (United States)

    Gao, Qun; Liu, Yan; Li, Hao; Chen, Hui; Chai, Yifeng; Lu, Feng

    2014-06-01

    Some expired drugs are difficult to detect by conventional means. If they are repackaged and sold back into the market, they constitute a new public health challenge. For the detection of repackaged expired drugs that are still within specification, paracetamol tablets from one manufacturer were used as a model drug in this study to compare Raman spectra-based library verification and classification methods. Raman spectra of different batches of paracetamol tablets were collected, and a library including standard spectra of unexpired batches of tablets was established. The Raman spectrum of each sample was identified by cosine and correlation similarity with the standard spectrum. The average hit quality index (HQI) between the suspicious samples and the standard spectrum was calculated. The optimum threshold values were 0.997 and 0.998, respectively, as determined by ROC analysis and four evaluation criteria, for which the accuracy was up to 97%. Three supervised classifiers, PLS-DA, SVM and k-NN, were chosen to establish two-class classification models and were subsequently compared. They were used to discriminate expired batches from an unexpired batch and to predict the suspect samples. The average accuracies were 90.12%, 96.80% and 89.37%, respectively. Different pre-processing techniques were tried, showing that the first derivative was optimal for the library methods and max-min normalization was optimal for the classifiers. The results obtained from these studies indicate that both the library and classifier methods can detect expired drugs effectively, and they should be used complementarily in fast screening. Copyright © 2014 Elsevier B.V. All rights reserved.
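    A hit quality index check against a spectral library can be sketched with the commonly used squared-cosine definition; the exact HQI variant, threshold, and spectra in the study may differ, and all values below are invented.

```python
import numpy as np

def hit_quality_index(spectrum, reference):
    """Squared-cosine HQI between a sample spectrum and a library spectrum
    (1.0 = identical shape).  A common definition; the study also uses a
    correlation-based variant."""
    s, r = np.asarray(spectrum, float), np.asarray(reference, float)
    return float(np.dot(s, r) ** 2 / (np.dot(s, s) * np.dot(r, r)))

def verify_against_library(spectrum, library, threshold=0.997):
    """Pass the sample if its best HQI against the unexpired-batch library
    exceeds a threshold chosen from an ROC analysis."""
    best = max(hit_quality_index(spectrum, ref) for ref in library)
    return best >= threshold, best

# Invented spectra: one library entry and a slightly perturbed sample.
reference = np.array([0.1, 0.5, 1.0, 0.4, 0.2])
sample = reference + np.random.default_rng(0).normal(0, 0.005, reference.shape)
print(verify_against_library(sample, [reference]))
```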

  1. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    Science.gov (United States)

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  2. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  3. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  4. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, teaching it requires explanations suited to students with different major backgrounds. By supposing sample sequences and using a simple storage system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
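    For reference, a compact textbook implementation of Needleman-Wunsch global alignment is sketched below; the scoring scheme is chosen arbitrarily, and the article's exhaustive-search comparison and storage system are not reproduced.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment of sequences a and b with linear gap penalties."""
    n, m = len(a), len(b)
    # Dynamic-programming score matrix.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)

    # Traceback to recover one optimal alignment.
    ai, bi, i, j = [], [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)):
            ai.append(a[i-1]); bi.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            ai.append(a[i-1]); bi.append("-"); i -= 1
        else:
            ai.append("-"); bi.append(b[j-1]); j -= 1
    return "".join(reversed(ai)), "".join(reversed(bi)), score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))
```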

  5. Library resources on the Internet

    Science.gov (United States)

    Buchanan, Nancy L.

    1995-07-01

    Library resources are prevalent on the Internet. Library catalogs, electronic books, electronic periodicals, periodical indexes, reference sources, and U.S. Government documents are available by telnet, Gopher, World Wide Web, and FTP. Comparatively few copyrighted library resources are available freely on the Internet. Internet implementations of library resources can add useful features, such as full-text searching. There are discussion lists, Gophers, and World Wide Web pages to help users keep up with new resources and changes to existing ones. The future will bring more library resources, more types of library resources, and more integrated implementations of such resources to the Internet.

  6. Mathematical programming models for solving unequal-sized facilities layout problems. A genetic search method

    International Nuclear Information System (INIS)

    Tavakkoli-Moghaddam, R.

    1999-01-01

    This paper presents unequal-sized facilities layout solutions generated by a genetic search program named Layout Design using a Genetic Algorithm. The generalized quadratic assignment problem, requiring pre-determined distance and material flow matrices as the input data, and the continuous plane model, employing a dynamic distance measure and a material flow matrix, are discussed. Computational results on test problems are reported and compared with layout solutions generated by the branch-and-bound algorithm, a hybrid method merging simulated annealing and local search techniques, and an optimization process of an enveloped block.

  7. An Efficient Hybrid Conjugate Gradient Method with the Strong Wolfe-Powell Line Search

    Directory of Open Access Journals (Sweden)

    Ahmad Alhawarat

    2015-01-01

    Full Text Available The conjugate gradient (CG) method is an interesting tool for solving optimization problems in many fields, such as design, economics, physics, and engineering. In this paper, we present a new hybrid CG method which relates to the famous Polak-Ribière-Polyak (PRP) formula. It provides a solution for the PRP case, which is not globally convergent with the strong Wolfe-Powell (SWP) line search. The new formula possesses the sufficient descent condition and the global convergence properties. In addition, we further explain the cases where the PRP method fails with the SWP line search. Furthermore, we provide numerical computations for the new hybrid CG method, which is almost always better than other related PRP formulas in both the number of iterations and the CPU time on some standard test functions.

  8. A peak value searching method of the MCA based on digital logic devices

    International Nuclear Information System (INIS)

    Sang Ziru; Huang Shanshan; Chen Lian; Jin Ge

    2010-01-01

    Digital multi-channel analyzers play an increasingly important role in multi-channel pulse-height analysis. The move toward digitization is characterized by powerful pulse processing ability, high throughput, and improved stability and flexibility. This paper introduces a method of searching for the peak value of a waveform based on digital logic implemented in an FPGA. This method reduces the dead time, and offline data correction can improve the non-linearity of the MCA. The α energy spectrum of 241Am is given as an example. (authors)
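    A software model of the peak-searching logic that such digital MCAs implement in FPGA fabric might look like the sketch below: track the maximum sample between a rising and a falling threshold crossing, then histogram the peak heights into channels. The threshold, channel count, and waveform are assumptions.

```python
def find_pulse_peaks(samples, threshold=50):
    """Return the peak value of every pulse in a digitized waveform.

    A pulse starts when the signal rises above `threshold` and ends when it
    falls back below it; the largest sample in between is the peak height,
    which an FPGA-based MCA would histogram into a channel."""
    peaks, in_pulse, current_max = [], False, 0
    for s in samples:
        if s > threshold:
            in_pulse = True
            current_max = max(current_max, s)
        elif in_pulse:                 # falling edge: close the pulse
            peaks.append(current_max)
            in_pulse, current_max = False, 0
    return peaks

def histogram_channels(peaks, n_channels=1024, full_scale=4096):
    """Bin peak heights into MCA channels to build an energy spectrum."""
    spectrum = [0] * n_channels
    for p in peaks:
        channel = min(n_channels - 1, p * n_channels // full_scale)
        spectrum[channel] += 1
    return spectrum

waveform = [2, 3, 60, 400, 900, 850, 300, 40, 5, 3, 70, 500, 1200, 600, 30, 2]
print(find_pulse_peaks(waveform))   # -> [900, 1200]
```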

  9. Local Path Planning of Driverless Car Navigation Based on Jump Point Search Method Under Urban Environment

    Directory of Open Access Journals (Sweden)

    Kaijun Zhou

    2017-09-01

    Full Text Available The Jump Point Search (JPS) algorithm is adopted for local path planning of the driverless car under urban environment; it is a fast search method applied in path planning. Firstly, a vector Geographic Information System (GIS) map, including Global Positioning System (GPS) position, direction, and lane information, is built for global path planning. Secondly, the GIS map database is utilized in global path planning for the driverless car. Then, the JPS algorithm is adopted to avoid the front obstacle and to find an optimal local path for the driverless car in the urban environment. Finally, 125 different simulation experiments in the urban environment demonstrate that JPS can successfully search out an optimal and safe path, and meanwhile it has a lower time complexity compared with the Vector Field Histogram (VFH), the Rapidly Exploring Random Tree (RRT), A*, and the Probabilistic Roadmaps (PRM) algorithms. Furthermore, JPS is validated as useful in the structured urban environment.
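
    JPS is an optimization of A* on uniform-cost grids that skips over symmetric intermediate nodes. The JPS jump and pruning rules are not reproduced here; the sketch below is only the plain grid A* baseline against which JPS is compared in this record, with an assumed 4-connected grid, Manhattan heuristic, and illustrative map.

      import heapq, itertools

      def astar_grid(grid, start, goal):
          """Plain A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
          JPS speeds this search up by 'jumping' over symmetric straight-line nodes;
          those pruning rules are omitted here for brevity."""
          rows, cols = len(grid), len(grid[0])
          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])    # Manhattan heuristic
          tie = itertools.count()                                    # tie-breaker for the heap
          open_heap = [(h(start), next(tie), 0, start, None)]
          came_from, g_cost = {}, {start: 0}
          while open_heap:
              _, _, g, node, parent = heapq.heappop(open_heap)
              if node in came_from:                                  # already expanded
                  continue
              came_from[node] = parent
              if node == goal:                                       # reconstruct the path
                  path = []
                  while node is not None:
                      path.append(node)
                      node = came_from[node]
                  return path[::-1]
              r, c = node
              for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  nr, nc = nb
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                      ng = g + 1
                      if ng < g_cost.get(nb, float('inf')):
                          g_cost[nb] = ng
                          heapq.heappush(open_heap, (ng + h(nb), next(tie), ng, nb, node))
          return None                                                # no path exists

      grid = [[0, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 0]]
      print(astar_grid(grid, (0, 0), (2, 0)))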

  10. An adaptive bin framework search method for a beta-sheet protein homopolymer model

    Directory of Open Access Journals (Sweden)

    Hoos Holger H

    2007-04-01

    Full Text Available Abstract Background The problem of protein structure prediction consists of predicting the functional or native structure of a protein given its linear sequence of amino acids. This problem has played a prominent role in the fields of biomolecular physics and algorithm design for over 50 years. Additionally, its importance increases continually as a result of an exponential growth over time in the number of known protein sequences in contrast to a linear increase in the number of determined structures. Our work focuses on the problem of searching an exponentially large space of possible conformations as efficiently as possible, with the goal of finding a global optimum with respect to a given energy function. This problem plays an important role in the analysis of systems with complex search landscapes, and particularly in the context of ab initio protein structure prediction. Results In this work, we introduce a novel approach for solving this conformation search problem based on the use of a bin framework for adaptively storing and retrieving promising locally optimal solutions. Our approach provides a rich and general framework within which a broad range of adaptive or reactive search strategies can be realized. Here, we introduce adaptive mechanisms for choosing which conformations should be stored, based on the set of conformations already stored in memory, and for biasing choices when retrieving conformations from memory in order to overcome search stagnation. Conclusion We show that our bin framework combined with a widely used optimization method, Monte Carlo search, achieves significantly better performance than state-of-the-art generalized ensemble methods for a well-known protein-like homopolymer model on the face-centered cubic lattice.
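
    The published bin framework adaptively stores and retrieves promising locally optimal lattice conformations; none of those rules are reproduced here. As a heavily hedged toy illustration of the general store-and-retrieve idea, the sketch below runs Metropolis Monte Carlo on an assumed one-dimensional rugged energy function and keeps the best state seen in each crude "bin", occasionally restarting from a stored state to escape stagnation. Every function name and parameter is an illustrative assumption.

      import math, random

      def rugged_energy(x):
          """Toy 1-D energy landscape standing in for the lattice protein energy function."""
          return 0.05 * x * x + math.sin(3 * x) + math.cos(5 * x)

      def bin_framework_mc(steps=20000, beta=2.0, step_size=0.5, restart_prob=0.02, n_bins=10):
          """Metropolis Monte Carlo with a crude 'bin' memory of promising states: the best
          state seen in each bin is stored, and the walk occasionally restarts from one."""
          x, e = 0.0, rugged_energy(0.0)
          bins = {}                                                # bin index -> (energy, state)
          best = (e, x)
          for _ in range(steps):
              if bins and random.random() < restart_prob:
                  e, x = random.choice(list(bins.values()))        # retrieve a stored promising state
              x_new = x + random.uniform(-step_size, step_size)
              e_new = rugged_energy(x_new)
              if e_new < e or random.random() < math.exp(-beta * (e_new - e)):   # Metropolis acceptance
                  x, e = x_new, e_new
                  b = int(x) % n_bins                              # crude structural "bin"
                  if b not in bins or e < bins[b][0]:
                      bins[b] = (e, x)
                  if e < best[0]:
                      best = (e, x)
          return best

      print(bin_framework_mc())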

  11. A comparison of two search methods for determining the scope of systematic reviews and health technology assessments.

    Science.gov (United States)

    Forsetlund, Louise; Kirkehei, Ingvild; Harboe, Ingrid; Odgaard-Jensen, Jan

    2012-01-01

    This study aims to compare two different search methods for determining the scope of a requested systematic review or health technology assessment. The first method (called the Direct Search Method) included performing direct searches in the Cochrane Database of Systematic Reviews (CDSR), Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessments (HTA). Using the comparison method (called the NHS Search Engine) we performed searches by means of the search engine of the British National Health Service, NHS Evidence. We used an adapted cross-over design with a random allocation of fifty-five requests for systematic reviews. The main analyses were based on repeated measurements adjusted for the order in which the searches were conducted. The Direct Search Method generated on average fewer hits (48 percent [95 percent confidence interval {CI} 6 percent to 72 percent]), had a higher precision (0.22 [95 percent CI, 0.13 to 0.30]) and more unique hits than when searching by means of the NHS Search Engine (50 percent [95 percent CI, 7 percent to 110 percent]). On the other hand, the Direct Search Method took longer (14.58 minutes [95 percent CI, 7.20 to 21.97]) and was perceived as somewhat less user-friendly than the NHS Search Engine (-0.60 [95 percent CI, -1.11 to -0.09]). Although the Direct Search Method had some drawbacks such as being more time-consuming and less user-friendly, it generated more unique hits than the NHS Search Engine, retrieved on average fewer references and fewer irrelevant results.

  12. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    Science.gov (United States)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods to improve the reliability of a system, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into the optimum redundancy configuration for design optimization, in which reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of an aircraft critical system are computed. The results show that this method is convenient and workable and, upon appropriate modifications, applicable to the redundancy configurations and optimization of various designs; it therefore has good practical value.

  13. Utilisation and prevalence of mixed methods research in library and information research in South Africa 2002-2008

    Directory of Open Access Journals (Sweden)

    Patrick Ngulube

    2009-01-01

    Full Text Available This article explores the use of mixed methods research (MMR) in library and information science (LIS) research in South Africa from 2000 to 2008. The authors contrast the mixed methods research debate in the general methodological literature to how this method was practiced within the LIS scientific community. They reviewed 613 research articles published in six peer-reviewed LIS journals in South Africa, finding the research methods in these journals to be surveys drawing on positivistic assumptions and cross-sectional designs, and historical research based on constructivist knowledge claims. Mixed methods approaches that the authors identified in the methodological literature have had little impact on LIS research in South Africa. Given these limitations, the authors argue for greater methodological pluralism in conducting research in LIS and recommend the use of mixed methods research.

  14. An improved method for RNA isolation and cDNA library construction from immature seeds of Jatropha curcas L

    Directory of Open Access Journals (Sweden)

    Kaur Jatinder

    2010-05-01

    Full Text Available Abstract Background RNA quality and quantity is sometimes unsuitable for cDNA library construction from plant seeds rich in oil, polysaccharides and other secondary metabolites. Seeds of jatropha (Jatropha curcas L.) are rich in fatty acids/lipids, storage proteins, polysaccharides, and a number of other secondary metabolites that could either bind and/or co-precipitate with RNA, making it unsuitable for downstream applications. Existing RNA isolation methods and commercial kits often fail to deliver high-quality total RNA from immature jatropha seeds for poly(A)+ RNA purification and cDNA synthesis. Findings A protocol has been developed for isolating good quality total RNA from immature jatropha seeds, whereby a combination of the CTAB based RNA extraction method and a silica column of a commercial plant RNA extraction kit is used. The extraction time was reduced from two days to about 3 hours and the RNA was suitable for poly(A)+ RNA purification, cDNA synthesis, cDNA library construction, RT-PCR, and Northern hybridization. Based on sequence information from selected clones and amplified PCR product, the cDNA library seems to be a good source of full-length jatropha genes. The method was equally effective for isolating RNA from mustard and rice seeds. Conclusions This is a simple CTAB + silica column method to extract high quality RNA from oil rich immature jatropha seeds that is suitable for several downstream applications. This method takes less time for RNA extraction and is equally effective for other tissues where the quality and quantity of RNA is highly interfered with by the presence of fatty acids, polysaccharides and polyphenols.

  15. A fast tomographic method for searching the minimum free energy path

    International Nuclear Information System (INIS)

    Chen, Changjun; Huang, Yanzhao; Xiao, Yi; Jiang, Xuewei

    2014-01-01

    Minimum Free Energy Path (MFEP) provides a lot of important information about the chemical reactions, like the free energy barrier, the location of the transition state, and the relative stability between reactant and product. With MFEP, one can study the mechanisms of the reaction in an efficient way. Due to a large number of degrees of freedom, searching the MFEP is a very time-consuming process. Here, we present a fast tomographic method to perform the search. Our approach first calculates the free energy surfaces in a sequence of hyperplanes perpendicular to a transition path. Based on an objective function and the free energy gradient, the transition path is optimized in the collective variable space iteratively. Applications of the present method to model systems show that our method is practical. It can be an alternative approach for finding the state-to-state MFEP

  16. Active Search on Carcasses versus Pitfall Traps: a Comparison of Sampling Methods.

    Science.gov (United States)

    Zanetti, N I; Camina, R; Visciarelli, E C; Centeno, N D

    2016-04-01

    The study of insect succession in cadavers and the classification of arthropods have mostly been done by placing a carcass in a cage, protected from vertebrate scavengers, which is then visited periodically. An alternative is to use specific traps. Few studies on carrion ecology and forensic entomology involving the carcasses of large vertebrates have employed pitfall traps. The aims of this study were to compare both sampling methods (active search on a carcass and pitfall trapping) for each coleopteran family, and to establish whether there is a discrepancy (underestimation and/or overestimation) in the presence of each family by either method. A great discrepancy was found for almost all families, with some of them being more abundant in samples obtained through active search on carcasses and others in samples from traps, whereas two families did not show any bias towards a given sampling method. The fact that families may be underestimated or overestimated by the type of sampling technique highlights the importance of combining both methods, active search on carcasses and pitfall traps, in order to obtain more complete information on decomposition, carrion habitat and cadaveric families or species. Furthermore, a hypothesis is advanced on the reasons why either sampling method shows biases towards certain families. Information about the sampling techniques, indicating which would be more appropriate to detect or find a particular family, is provided.

  17. Study of old ecological hazards, oil seeps and contaminations using earth observation methods – spectral library for oil seep

    Directory of Open Access Journals (Sweden)

    Smejkalová Eva

    2017-03-01

    Full Text Available The possibilities of remote sensing techniques are described for monitoring and protecting the Earth's surface, specifically for problems caused by petroleum contamination, for mapping insufficiently plugged and abandoned old oil wells, and for analysing onshore oil seeps. The methodology for analysing and detecting potential hydrocarbon contamination using Earth observation is explained for the areas of interest in Slovakia (Korňa) and in the Czech Republic (Nesyt), mainly the building and calibration of the spectral library for oil seeps. The acquisition of in-situ field data (ASD and Cropscan spectroradiometers) for this purpose, the successful building and verification of the hydrocarbon spectral library, the application of hydrocarbon indexes, the use of the shift in the red-edge part of the electromagnetic spectrum, and the spectral analysis of input data are clarified in the paper. An approach is described which could innovate the routine methods for investigating the occurrence of hydrocarbons and can assist in mapping and locating potential oil seep sites. An important outcome is the successful establishment of a spectral library (a database with calibration data) suitable for further application in data classification for identifying the occurrence of hydrocarbons.
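
    The record does not state which matching algorithm is applied to the calibrated spectral library. As a hedged illustration of how such a library is commonly queried, the sketch below ranks library spectra against a measured spectrum by spectral angle; the library structure, band sampling, and function names are assumptions and not necessarily the authors' method.

      import numpy as np

      def spectral_angle(spectrum, reference):
          """Spectral angle (radians) between a measured spectrum and a library reference;
          smaller angles mean a closer spectral match."""
          s = np.asarray(spectrum, float)
          r = np.asarray(reference, float)
          cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
          return float(np.arccos(np.clip(cos, -1.0, 1.0)))

      def best_library_match(spectrum, library):
          """Rank the entries of a spectral library (name -> reflectance array sampled on the
          same bands as the measurement) and return the closest one with all scores."""
          scores = {name: spectral_angle(spectrum, ref) for name, ref in library.items()}
          return min(scores, key=scores.get), scores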

  18. Novel citation-based search method for scientific literature: application to meta-analyses.

    Science.gov (United States)

    Janssens, A Cecile J W; Gwinn, M

    2015-10-13

    Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of co-citation with one or more "known" articles before reviewing their eligibility. In two independent studies, we aimed to reproduce the results of literature searches for sets of published meta-analyses (n = 10 and n = 42). For each meta-analysis, we extracted co-citations for the randomly selected 'known' articles from the Web of Science database, counted their frequencies and screened all articles with a score above a selection threshold. In the second study, we extended the method by retrieving direct citations for all selected articles. In the first study, we retrieved 82% of the studies included in the meta-analyses while screening only 11% as many articles as were screened for the original publications. Articles that we missed were published in non-English languages, published before 1975, published very recently, or available only as conference abstracts. In the second study, we retrieved 79% of included studies while screening half the original number of articles. Citation searching appears to be an efficient and reasonably accurate method for finding articles similar to one or more articles of interest for meta-analysis and reviews.
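
    The ranking step described in this record (scoring candidate articles by how often they are co-cited with one or more known articles) can be sketched in a few lines. The sketch below assumes citation data have already been exported as a mapping from citing articles to their reference sets; all identifiers and the example data are hypothetical.

      from collections import Counter

      def rank_by_cocitation(known_ids, citing_map):
          """Rank candidate articles by how often they are co-cited with the 'known' articles.
          citing_map maps each citing article to the set of identifiers it references
          (e.g. parsed from a Web of Science export)."""
          known = set(known_ids)
          scores = Counter()
          for refs in citing_map.values():
              if refs & known:                        # this article cites at least one known article
                  for other in refs - known:          # every other reference earns a co-citation count
                      scores[other] += 1
          return scores.most_common()

      # Toy example with hypothetical identifiers.
      citing_map = {"P1": {"known1", "A", "B"},
                    "P2": {"known1", "A"},
                    "P3": {"B", "C"}}
      print(rank_by_cocitation(["known1"], citing_map))    # A co-cited twice, B once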

  19. Wide-scope screening of pesticides in fruits and vegetables using information-dependent acquisition employing UHPLC-QTOF-MS and automated MS/MS library searching.

    Science.gov (United States)

    Wang, Zhibin; Cao, Yanzhong; Ge, Na; Liu, Xiaomao; Chang, Qiaoying; Fan, Chunlin; Pang, Guo-Fang

    2016-11-01

    This paper presents an application of ultra-high performance liquid chromatography-quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS) for simultaneous screening and identification of 427 pesticides in fresh fruit and vegetable samples. Both a full MS scan mode for quantification and an artificial-intelligence-based product ion scan mode with information-dependent acquisition (IDA), providing automatic MS to MS/MS switching of product ion spectra for identification, were conducted in one injection. An in-house collision-induced-dissociation all-product-ion accurate mass spectral library containing more than 1700 spectra was developed prior to actual application. Both qualitative and quantitative validations of the method were carried out. The results showed that 97.4 % of the pesticides had a screening detection limit (SDL) of less than 50 μg kg⁻¹ and more than 86.7 % could be confirmed by accurate MS/MS spectra embodied in the in-house library. Meanwhile, calibration curves covering two orders of magnitude were performed, and they were linear over the concentration range studied for the selected matrices (from 5 to 500 μg kg⁻¹ for most of the pesticides). The proportion of pesticides with recoveries between 80 and 110 % in four matrices (apple, orange, tomato, and spinach) at the two spiked levels of 10 and 100 μg kg⁻¹ was 88.7 or 86.8 %. Furthermore, the overall relative standard deviation (RSD, n = 12) was less than 20 % for 94.3 % of the pesticides at the 10 μg kg⁻¹ spiked level and for 98.1 % of the pesticides at the 100 μg kg⁻¹ level. In order to validate the suitability for routine analysis, the method was applied to 448 fruit and vegetable samples purchased in different local markets. The results show 83.3 % of the analyzed samples have positive findings (higher than the limits of identification and quantification), and 412 commodity-pesticide combinations are identified in our scope. The approach proved to be a cost-effective, time-saving and powerful strategy for routine large
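
    The vendor software's scoring algorithm is not described in the record. As a hedged illustration of the underlying idea of automated MS/MS library searching, the sketch below scores a query spectrum against library entries with a simple binned cosine (dot-product) similarity; the bin width, m/z range, score threshold, and data structures are assumptions.

      import numpy as np

      def binned_cosine(spec_a, spec_b, bin_width=0.01, max_mz=1000.0):
          """Cosine (dot-product) similarity between two centroided MS/MS spectra given as
          lists of (m/z, intensity) pairs; peaks are binned on the m/z axis first."""
          n_bins = int(max_mz / bin_width)

          def to_vector(spec):
              v = np.zeros(n_bins)
              for mz, intensity in spec:
                  v[min(int(mz / bin_width), n_bins - 1)] += intensity
              return v

          a, b = to_vector(spec_a), to_vector(spec_b)
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom else 0.0

      def search_library(query_spectrum, library, threshold=0.7):
          """Return library compounds whose reference spectrum matches the query above the threshold."""
          scores = {name: binned_cosine(query_spectrum, ref) for name, ref in library.items()}
          return {name: s for name, s in scores.items() if s >= threshold}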

  20. Fast optimization of binary clusters using a novel dynamic lattice searching method

    International Nuclear Information System (INIS)

    Wu, Xia; Cheng, Wen

    2014-01-01

    Global optimization of binary clusters has been a difficult task despite much effort and many efficient methods. Addressing the two types of elements in binary clusters (i.e., the homotop problem), two classes of virtual dynamic lattices are constructed and a modified dynamic lattice searching (DLS) method, i.e., the binary DLS (BDLS) method, is developed. However, it was found that the BDLS can only be utilized for the optimization of binary clusters with small sizes, because the homotop problem is hard to solve without an atomic exchange operation. Therefore, the iterated local search (ILS) method is adopted to solve the homotop problem, and an efficient method based on the BDLS method and ILS, named BDLS-ILS, is presented for global optimization of binary clusters. In order to assess the efficiency of the proposed method, binary Lennard-Jones clusters with up to 100 atoms are investigated. Results show that the method is efficient. Furthermore, the BDLS-ILS method is also adopted to study the geometrical structures of (AuPd)79 clusters with DFT-fit parameters of the Gupta potential.

  1. Evaluation of methods to produce an image library for automatic patient model localization for dose mapping during fluoroscopically guided procedures

    Science.gov (United States)

    Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.

    2017-03-01

    The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic-image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
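
    The record names MATLAB's Corr2 as the similarity measure. The sketch below is a NumPy equivalent of that 2-D correlation coefficient, plus a small helper that picks the best-matching library projection; the 50 % usability threshold is taken from the record, while the data structures, view keys, and function names are assumptions.

      import numpy as np

      def corr2(a, b):
          """2-D correlation coefficient equivalent to MATLAB's corr2: the Pearson correlation
          between two equally sized grey-level images."""
          a = np.asarray(a, float) - np.mean(a)
          b = np.asarray(b, float) - np.mean(b)
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return float((a * b).sum() / denom) if denom else 0.0

      def best_library_view(fluoro_image, library_images, min_corr=0.5):
          """Pick the library projection that best matches a fluoroscopic frame; a correlation
          of at least ~0.5 is treated as sufficient for localization, as stated in the record."""
          scores = {view: corr2(fluoro_image, img) for view, img in library_images.items()}
          best = max(scores, key=scores.get)
          return (best, scores[best]) if scores[best] >= min_corr else (None, scores[best])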

  2. Recruiting and Retaining LGBTQ-Identified Staff in Academic Libraries Through Ordinary Methods

    Directory of Open Access Journals (Sweden)

    Elliott Kuecker

    2017-03-01

    Full Text Available In Brief While the American academic library field works hard to include all patrons and materials that represent less dominant populations, it should be more mindful of inclusivity in its own workforce. Particularly, the field does nothing to explicitly recruit or retain LGBTQ-identified librarians. The author proposes practical remedies to these problems that directly respond to workplace studies on interpersonal difficulties LGBTQ-identified librarians and others have cited as barriers to happiness in the workplace, and argues toward more inclusive LIS education and financial support.

  3. Setting up Information Literacy Workshops in School Libraries: Imperatives, Principles and Methods

    Directory of Open Access Journals (Sweden)

    Reza Mokhtarpour

    2010-09-01

    Full Text Available While much professional literature has talked at length about the importance of dealing with information literacy in school libraries in the ICT-dominated era, few works have dealt with the nature and mode of implementation or offered a road map. The strategy emphasized in this paper is to hold information literacy sessions through effective workshops. While explaining the reasons why such workshops are essential in enhancing information literacy skills, the most important principles and stages for setting up such workshops are offered in a step-by-step manner.

  4. Frequency domain optical tomography using a conjugate gradient method without line search

    International Nuclear Information System (INIS)

    Kim, Hyun Keol; Charette, Andre

    2007-01-01

    A conjugate gradient method without line search (CGMWLS) is presented. This method is used to retrieve the local maps of absorption and scattering coefficients inside the tissue-like test medium, with the synthetic data. The forward problem is solved with a discrete-ordinates finite-difference method based on the frequency domain formulation of radiative transfer equation. The inversion results demonstrate that the CGMWLS can retrieve simultaneously the spatial distributions of optical properties inside the medium within a reasonable accuracy, by reducing cross-talk between absorption and scattering coefficients

  5. Non-contact method of search and analysis of pulsating vessels

    Science.gov (United States)

    Avtomonov, Yuri N.; Tsoy, Maria O.; Postnov, Dmitry E.

    2018-04-01

    Despite the variety of existing methods for recording the human pulse and a solid history of their development, there is still considerable interest in this topic. The development of new non-contact methods, based on advanced image processing, has caused a new wave of interest in this issue. We present a simple but quite effective method for analyzing the mechanical pulsations of blood vessels lying close to the surface of the skin. Our technique is a modification of imaging (or remote) photoplethysmography (i-PPG). We supplemented this method with the addition of a laser light source, which made it possible to use other methods of searching for the proposed pulsation zone. During the testing of the method, several series of experiments were carried out both with artificial oscillating objects and with the target signal source (a human wrist). The obtained results show that our method allows correct interpretation of complex data. To summarize, we proposed and tested an alternative method for the search and analysis of pulsating vessels.
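
    The laser-assisted zone search and the authors' processing chain are not given in the record; as a hedged illustration of the basic imaging-photoplethysmography idea it builds on, the sketch below tracks the mean intensity of a skin region of interest across video frames and reads the pulse rate from the dominant spectral peak. The region of interest, frame rate, frequency band, and function name are assumptions.

      import numpy as np

      def pulse_rate_bpm(frames, fps, roi):
          """Estimate pulse rate (beats per minute) from a stack of grey-level video frames by
          tracking the mean intensity of a skin region of interest and locating the dominant
          spectral peak within a plausible heart-rate band."""
          r0, r1, c0, c1 = roi
          signal = np.array([frame[r0:r1, c0:c1].mean() for frame in frames])
          signal -= signal.mean()                                  # remove the DC component
          spectrum = np.abs(np.fft.rfft(signal))
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
          band = (freqs > 0.7) & (freqs < 4.0)                     # roughly 42-240 beats per minute
          return 60.0 * freqs[band][np.argmax(spectrum[band])]

      # Usage (hypothetical): frames is a list of 2-D numpy arrays from a camera running at 30 fps.
      # print(pulse_rate_bpm(frames, fps=30, roi=(100, 150, 200, 260)))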

  6. An Efficient Method to Search Real-Time Bulk Data for an Information Processing System

    International Nuclear Information System (INIS)

    Kim, Seong Jin; Kim, Jong Myung; Suh, Yong Suk; Keum, Jong Yong; Park, Heui Youn

    2005-01-01

    The Man Machine Interface System (MMIS) of the System-integrated Modular Advanced ReacTor (SMART) is designed with fully digitalized features. The Information Processing System (IPS) of the MMIS acquires and processes plant data from other systems. In addition, the IPS provides plant operation information to operators in the control room. The IPS is required to process bulk data in real time, so it is necessary to consider a special processing method with regard to flexibility and performance, because more than a few thousand plant information points converge on the IPS. Among other things, the processing time for searching the bulk data consumes much more than the other processing times. Thus, this paper explores an efficient method for the search and examines its feasibility.

  7. Neural Based Tabu Search method for solving unit commitment problem with cooling-banking constraints

    Directory of Open Access Journals (Sweden)

    Rajan Asir Christober Gnanakkan Charles

    2009-01-01

    Full Text Available This paper presents a new approach to solve the short-term unit commitment problem (UCP) using Neural Based Tabu Search (NBTS) with cooling and banking constraints. The objective of this paper is to find the generation scheduling such that the total operating cost can be minimized, when subjected to a variety of constraints. This also means that it is desirable to find the optimal generating unit commitment in the power system for the next H hours. A 7-unit utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 10, 26 and 34 units. Numerical results are shown comparing the superiority of the cost solutions obtained using the Tabu Search (TS) method with those of the Dynamic Programming (DP) and Lagrangian Relaxation (LR) methods in reaching proper unit commitment.

  8. The Search Conference as a Method in Planning Community Health Promotion Actions

    Science.gov (United States)

    Magnus, Eva; Knudtsen, Margunn Skjei; Wist, Guri; Weiss, Daniel; Lillefjell, Monica

    2016-01-01

    Aims: The aim of this article is to describe and discuss how the search conference can be used as a method for planning health promotion actions in local communities. Design and methods: The article draws on experiences with using the method for an innovative project in health promotion in three Norwegian municipalities. The method is described both in general and as it was specifically adopted for the project. Results and conclusions: The search conference as a method was used to develop evidence-based health promotion action plans. With its use of both bottom-up and top-down approaches, this method is a relevant strategy for involving a community in the planning stages of health promotion actions, in line with political expectations of participation, ownership, and evidence-based initiatives. Significance for public health: This article describes and discusses how the search conference can be used as a method when working with knowledge-based health promotion actions in local communities. The article describes the sequences of the conference and shows how these have been adapted when planning and prioritizing health promotion actions in three Norwegian municipalities. The significance of the article is that it shows how central elements in the planning of health promotion actions, such as participation and involvement as well as evidence, were fundamental to how the conference was carried out. The article goes on to discuss how the method functions as both a top-down and a bottom-up strategy, and in what ways working evidence-based can be in conflict with a bottom-up strategy. The experiences described can be used as guidance for planning knowledge-based health promotion actions in communities. PMID:27747199

  9. NEIC Library Services

    Science.gov (United States)

    The National Enforcement Investigation Center (NEIC) Environmental Forensic Library partners with NEIC's forensic scientists to retrieve, validate and deliver information to develop methods, defensible regulations, and environmental measurements.

  10. Outsourcing in libraries

    Directory of Open Access Journals (Sweden)

    Matjaž Žaucer

    1999-01-01

    Full Text Available Like other organisations, libraries tend to be more flexible and to conform to the changing environment, as this is the only way to be successful and effective. They are expected to offer "more for less", and they are reorganising and searching for ways to reduce costs. Outsourcing is one of the possible solutions. The article deals with the possibilities of outsourcing in libraries and the higher quality of their work concentrated on principal activities, and gives some experiences in this field.

  11. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    Science.gov (United States)

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves the prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on the traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge weight based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into the suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match the known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate as compared to other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into the suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the

  12. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  13. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  14. The Retrieval of Information in an Elementary School Library Media Center: An Alternative Method of Classification in the Common School Library, Amherst, Massachusetts.

    Science.gov (United States)

    Cooper, Linda

    1997-01-01

    Discusses the problems encountered by elementary school children in retrieving information from a library catalog, either the traditional card catalog or an OPAC (online public access catalog). An alternative system of classification using colors and symbols is described that was developed in the Common School (Amherst, Massachusetts). (Author/LRW)

  15. Fee-based services in sci-tech libraries

    CERN Document Server

    Mount, Ellis

    2013-01-01

    This timely and important book explores how fee-based services have developed in various types of sci-tech libraries. The authoritative contributors focus on the current changing financial aspects of the sci-tech library operation and clarify for the reader how these changes have brought about conditions in which traditional methods of funding are no longer adequate. What new options are open and how they are best being applied in today's sci-tech libraries is fully and clearly explained and illustrated. Topics explored include cost allocation and cost recovery, fees for computer searching, an

  16. An efficient search method for finding the critical slip surface using the compositional Monte Carlo technique

    International Nuclear Information System (INIS)

    Goshtasbi, K.; Ahmadi, M; Naeimi, Y.

    2008-01-01

    Locating the critical slip surface and the associated minimum factor of safety are two complementary parts in a slope stability analysis. A large number of computer programs exist to solve slope stability problems. Most of these programs, however, have used inefficient and unreliable search procedures to locate the global minimum factor of safety. This paper presents an efficient and reliable method to determine the global minimum factor of safety coupled with a modified version of the Monte Carlo technique. Examples are presented to illustrate the reliability of the proposed method.

  17. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    Science.gov (United States)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.
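
    The tailored poll sets that conform to the nondifferentiability hyperplanes are specific to the paper and are not reproduced here. As a hedged illustration of the generic pattern-search idea the method builds on, the sketch below implements a basic compass search that polls the coordinate directions and halves the step after an unsuccessful poll; the test function, initial step, and tolerances are assumptions.

      import numpy as np

      def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=100000):
          """Basic compass (coordinate) pattern search: poll the +/- coordinate directions,
          move to any improving point, otherwise halve the step until it falls below tol."""
          x = np.asarray(x0, dtype=float)
          fx = f(x)
          while step >= tol and max_iter > 0:
              max_iter -= 1
              improved = False
              for i in range(len(x)):
                  for sign in (1.0, -1.0):
                      trial = x.copy()
                      trial[i] += sign * step
                      ft = f(trial)
                      if ft < fx:                    # successful poll: accept and keep the step
                          x, fx, improved = trial, ft, True
                          break
                  if improved:
                      break
              if not improved:                       # unsuccessful poll: refine the mesh
                  step *= 0.5
          return x, fx

      # Example on the nonsmooth function f(x) = |x1| + |x2 - 1|.
      print(compass_search(lambda v: abs(v[0]) + abs(v[1] - 1.0), [3.0, -2.0]))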

  18. Searching in the Context of a Task: A Review of Methods and Tools

    Directory of Open Access Journals (Sweden)

    Ana Maguitman

    2018-04-01

    Full Text Available Contextual information extracted from the user task can help to better target retrieval to task-relevant content. In particular, topical context can be exploited to identify the subject of the information needs, contributing to reduce the information overload problem. A great number of methods exist to extract raw context data and contextual interaction patterns from the user task and to model this information using higher-level representations. Context can then be used as a source for automatic query generation, or as a means to refine or disambiguate user-generated queries. It can also be used to filter and rank results as well as to select domain-specific search engines with better capabilities to satisfy specific information requests. This article reviews methods that have been applied to deal with the problem of reflecting the current and long-term interests of a user in the search process. It discusses major difficulties encountered in the research area of context-based information retrieval and presents an overview of tools proposed since the mid-nineties to deal with the problem of context-based search.

  19. Search method for long-duration gravitational-wave transients from neutron stars

    International Nuclear Information System (INIS)

    Prix, R.; Giampanis, S.; Messenger, C.

    2011-01-01

    We introduce a search method for a new class of gravitational-wave signals, namely, long-duration O(hours-weeks) transients from spinning neutron stars. We discuss the astrophysical motivation from glitch relaxation models and we derive a rough estimate for the maximal expected signal strength based on the superfluid excess rotational energy. The transient signal model considered here extends the traditional class of infinite-duration continuous-wave signals by a finite start-time and duration. We derive a multidetector Bayes factor for these signals in Gaussian noise using F-statistic amplitude priors, which simplifies the detection statistic and allows for an efficient implementation. We consider both a fully coherent statistic, which is computationally limited to directed searches for known pulsars, and a cheaper semicoherent variant, suitable for wide parameter-space searches for transients from unknown neutron stars. We have tested our method by Monte-Carlo simulation, and we find that it outperforms orthodox maximum-likelihood approaches both in sensitivity and in parameter-estimation quality.

  20. Utilizing mixed methods research in analyzing Iranian researchers’ informarion search behaviour in the Web and presenting current pattern

    Directory of Open Access Journals (Sweden)

    Maryam Asadi

    2015-12-01

    Full Text Available Using a mixed methods research design, the current study analyzed Iranian researchers’ information searching behaviour on the Web. Then, based on the extracted concepts, a model of their information searching behaviour was derived. Forty-four participants, including academic staff from universities and research centers, were recruited for this study, selected by purposive sampling. Data were gathered from a questionnaire including ten questions and from semi-structured interviews. Each participant’s memos were analyzed using grounded theory methods adapted from Strauss & Corbin (1998). Results showed that the main objectives of the subjects in using the Web were doing research, writing a paper, studying, doing assignments, downloading files and acquiring public information. The most important ways of learning how to search and retrieve information among the subjects were trial and error and getting help from friends. Information resources are identified by searching in information resources (e.g. search engines, references in papers, and online databases…), communication facilities and tools (e.g. contact with colleagues, seminars and workshops, social networking…), and information services (e.g. RSS, alerting, and SDI). Findings also indicated that searching with search engines, reviewing references, searching in online databases, contacting colleagues and studying the latest issues of electronic journals were the most important approaches to searching. The most important strategies were using search engines and scientific tools such as Google Scholar. In addition, the simple (quick) search method was the most common among the subjects. Topic, keywords and paper title were the most important elements used for retrieving information. Analysis of the interviews showed that there were nine stages in researchers’ information searching behaviour: topic selection, initiating search, formulating search query, information retrieval, access to information

  1. (Re)interpreting LHC New Physics Search Results: Tools and Methods, 3rd Workshop

    CERN Document Server

    The quest for new physics beyond the SM is arguably the driving topic for LHC Run2. LHC collaborations are pursuing searches for new physics in a vast variety of channels. Although collaborations provide various interpretations for their search results, the full understanding of these results requires a much wider interpretation scope involving all kinds of theoretical models. This is a very active field, with close theory-experiment interaction. In particular, development of dedicated methodologies and tools is crucial for such scale of interpretation. Recently, a Forum was initiated to host discussions among LHC experimentalists and theorists on topics related to the BSM (re)interpretation of LHC data, and especially on the development of relevant interpretation tools and infrastructure: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/InterpretingLHCresults Two meetings were held at CERN, where active discussions and concrete work on (re)interpretation methods and tools took place, with valuable cont...

  2. Electricity price forecast using Combinatorial Neural Network trained by a new stochastic search method

    International Nuclear Information System (INIS)

    Abedinia, O.; Amjady, N.; Shafie-khah, M.; Catalão, J.P.S.

    2015-01-01

    Highlights: • Presenting a Combinatorial Neural Network. • Suggesting a new stochastic search method. • Adapting the suggested method as a training mechanism. • Proposing a new forecast strategy. • Testing the proposed strategy on real-world electricity markets. - Abstract: Electricity price forecast is key information for successful operation of electricity market participants. However, the time series of electricity price has nonlinear, non-stationary and volatile behaviour and so its forecast method should have high learning capability to extract the complex input/output mapping function of electricity price. In this paper, a Combinatorial Neural Network (CNN) based forecasting engine is proposed to predict the future values of price data. The CNN-based forecasting engine is equipped with a new training mechanism for optimizing the weights of the CNN. This training mechanism is based on an efficient stochastic search method, which is a modified version of chemical reaction optimization algorithm, giving high learning ability to the CNN. The proposed price forecast strategy is tested on the real-world electricity markets of Pennsylvania–New Jersey–Maryland (PJM) and mainland Spain and its obtained results are extensively compared with the results obtained from several other forecast methods. These comparisons illustrate effectiveness of the proposed strategy.

  3. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

    Full Text Available In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC algorithm and the evolutionary method harmony search (HS. With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely random as it is the case of RANSAC. The rules for the generation of candidate solutions (samples are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
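
    The harmony-search sampling rules of the proposed method are not given in this record. The sketch below shows only the plain RANSAC skeleton (uniform random minimal samples and consensus counting) on a 2-D line-fitting toy problem, which is the baseline whose sampling step the HS-guided variant replaces; the thresholds, iteration count, and function names are assumptions.

      import numpy as np

      def ransac_line(points, n_iter=500, inlier_tol=1.0, seed=None):
          """Plain RANSAC for 2-D line fitting: uniform random minimal samples and consensus
          counting. This is the baseline whose sampling step the harmony-search variant replaces."""
          rng = np.random.default_rng(seed)
          pts = np.asarray(points, float)
          best_inliers, best_model = 0, None
          for _ in range(n_iter):
              i, j = rng.choice(len(pts), size=2, replace=False)
              (x1, y1), (x2, y2) = pts[i], pts[j]
              a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2    # line a*x + b*y + c = 0 through the sample
              norm = np.hypot(a, b)
              if norm == 0:                                    # degenerate sample (identical points)
                  continue
              dist = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / norm
              n_inliers = int((dist < inlier_tol).sum())
              if n_inliers > best_inliers:                     # keep the model with the largest consensus
                  best_inliers, best_model = n_inliers, (a / norm, b / norm, c / norm)
          return best_model, best_inliers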

  4. Search method optimization technique for thermal design of high power RFQ structure

    International Nuclear Information System (INIS)

    Sharma, N.K.; Joshi, S.C.

    2009-01-01

    RRCAT has taken up the development of 3 MeV RFQ structure for the low energy part of 100 MeV H - ion injector linac. RFQ is a precision machined resonating structure designed for high rf duty factor. RFQ structural stability during high rf power operation is an important design issue. The thermal analysis of RFQ has been performed using ANSYS finite element analysis software and optimization of various parameters is attempted using Search Method optimization technique. It is an effective optimization technique for the systems governed by a large number of independent variables. The method involves examining a number of combinations of values of independent variables and drawing conclusions from the magnitude of the objective function at these combinations. In these methods there is a continuous improvement in the objective function throughout the course of the search and hence these methods are very efficient. The method has been employed in optimization of various parameters (called independent variables) of RFQ like cooling water flow rate, cooling water inlet temperatures, cavity thickness etc. involved in RFQ thermal design. The temperature rise within RFQ structure is the objective function during the thermal design. Using ANSYS Programming Development Language (APDL), various multiple iterative programmes are written and the analysis are performed to minimize the objective function. The dependency of the objective function on various independent variables is established and the optimum values of the parameters are evaluated. The results of the analysis are presented in the paper. (author)

  5. Sliding surface searching method for slopes containing a potential weak structural surface

    Directory of Open Access Journals (Sweden)

    Aijun Yao

    2014-06-01

    Full Text Available A weak structural surface is one of the key factors controlling the stability of slopes. The stability of rock slopes is in general governed by sets of discontinuities; however, in soft rocks, failure can occur along surfaces approaching a circular failure surface. To better understand the position of the potential sliding surface, a new method called the simplex-finite stochastic tracking method is proposed. This method basically divides the sliding surface into two parts: one is described by a smooth curve obtained by random searching, the other is a polyline formed by the weak structural surface. Single or multiple sliding surfaces can be considered, and consequently several types of combined sliding surfaces can be simulated. The paper adopts the arc-polyline to simulate the potential sliding surface and analyzes the searching process of the sliding surface. Accordingly, software for slope stability analysis using this method was developed and applied in real cases. The results show that, using the simplex-finite stochastic tracking method, it is possible to locate the position of a potential sliding surface in the slope.

  6. Establish an automated flow injection ESI-MS method for the screening of fragment based libraries: Application to Hsp90.

    Science.gov (United States)

    Riccardi Sirtori, Federico; Caronni, Dannica; Colombo, Maristella; Dalvit, Claudio; Paolucci, Mauro; Regazzoni, Luca; Visco, Carlo; Fogliatto, Gianpaolo

    2015-08-30

    ESI-MS is a well established technique for the study of biopolymers (nucleic acids, proteins) and their non-covalent adducts, due to its capacity to detect ligand-target complexes in the gas phase, which allows inference of ligand-target binding in solution. In this article we used this approach to investigate the interaction of ligands with Heat Shock Protein 90 (Hsp90). This enzyme is a molecular chaperone involved in the folding and maturation of several proteins, which has been subjected in recent years to intensive drug discovery efforts due to its key role in cancer. In particular, reference compounds with a broad range of dissociation constants, from 40 pM to 100 μM, were tested to assess the reliability of ESI-MS for the study of protein-ligand complexes. Good agreement was found between the values measured with a fluorescence polarization displacement assay and those determined by mass spectrometry. After this validation step we describe the setup of a medium-throughput screening method, based on ESI-MS, suitable for exploring the interactions of therapeutically relevant biopolymers with chemical libraries. Our approach is based on an automated flow injection ESI-MS method (AFI-MS) and has been applied to screen the Nerviano Medical Sciences proprietary fragment library of about 2000 fragments against Hsp90. In order to discard false positive hits and to discriminate those interacting with the N-terminal ATP binding site, competition experiments were performed using a reference inhibitor. Gratifyingly, this group of hits matches the ligands previously identified by NMR FAXS techniques and confirmed by X-ray co-crystallization experiments. These results support the use of AFI-MS for the screening of medium-size libraries, including libraries of small molecules with low affinity typically used in fragment-based drug discovery. AFI-MS is a valid alternative to other techniques, with the additional opportunity to identify compounds interacting with

  7. Searching for rigour in the reporting of mixed methods population health research: a methodological review.

    Science.gov (United States)

    Brown, K M; Elliott, S J; Leatherdale, S T; Robertson-Wilson, J

    2015-12-01

    The environments in which population health interventions occur shape both their implementation and outcomes. Hence, when evaluating these interventions, we must explore both intervention content and context. Mixed methods (integrating quantitative and qualitative methods) provide this opportunity. However, although criteria exist for establishing rigour in quantitative and qualitative research, there is poor consensus regarding rigour in mixed methods. Using the empirical example of school-based obesity interventions, this methodological review examined how mixed methods have been used and reported, and how rigour has been addressed. Twenty-three peer-reviewed mixed methods studies were identified through a systematic search of five databases and appraised using the guidelines for Good Reporting of a Mixed Methods Study. In general, more detailed description of data collection and analysis, integration, inferences and justifying the use of mixed methods is needed. Additionally, improved reporting of methodological rigour is required. This review calls for increased discussion of practical techniques for establishing rigour in mixed methods research, beyond those for quantitative and qualitative criteria individually. A guide for reporting mixed methods research in population health should be developed to improve the reporting quality of mixed methods studies. Through improved reporting, mixed methods can provide strong evidence to inform policy and practice. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  8. SCHOOL COMMUNITY PERCEPTION OF LIBRARY APPS AGAINST LIBRARY EMPOWERMENT

    Directory of Open Access Journals (Sweden)

    Achmad Riyadi Alberto

    2017-07-01

    Full Text Available Abstract. This research is motivated by the rapid development of information and communication technology (ICT) in the library world, which allows libraries today to develop their services into digital-based services. This study aims to find out the school community’s perception of library apps developed by Riche Cynthia Johan, Hana Silvana, and Holin Sulistyo and its influence on library empowerment at the library of SD Laboratorium Percontohan UPI Bandung. Library apps in this research belong to the context of m-libraries, which is a library that meets the needs of its users by using mobile platforms such as smartphones, computers, and other mobile devices. Empowerment of the library is the utilization of all aspects of the implementation of libraries to the best in order to achieve the expected goals. An analysis of the school community’s perception of library apps using the Technology Acceptance Model (TAM) includes: ease of use, usefulness, usability, usage trends, and real-use conditions. While the empowerment of the library includes the aspects of: information empowerment, empowerment of learning resources, empowerment of human resources, empowerment of library facilities, and library promotion. The research method used in this research is the descriptive method with a quantitative approach. The population and sample in this research are the school community at SD Laboratorium Percontohan UPI Bandung. Sample criteria were determined by using disproportionate stratified random sampling with a sample of 83 respondents. Data analysis used simple linear regression to measure the influence of school community perception about library apps on library empowerment. The result of the data analysis shows that there is an influence of the school community perception about library apps on library empowerment at the library of SD Laboratorium Percontohan UPI Bandung, which is proved by the library acceptance level and the improvement in library empowerment.

  9. The status of health librarianship and libraries in the Republic of Ireland (SHELLI): a mixed methods review to inform future strategy and sustainability.

    Science.gov (United States)

    Harrison, Janet; Creaser, Claire; Greenwood, Helen

    2013-06-01

    This paper summarises the main points of a review of the Status of Health Librarianship & Libraries in Ireland (SHELLI). The review was commissioned to gain a broad understanding of what was happening in practice in Ireland, acquire knowledge about international best practice, and inform strategic plans to develop and sustain health libraries and librarianship in Ireland. A mixed methods approach was used: a literature review; an online survey distributed to health librarians; semi-structured interviews with key stakeholders; and a focus group drawing participants from the survey. All evidence was triangulated. New roles for health librarians needed development, and the changing educational needs of health librarians warranted attention. Increased collaboration across institutional boundaries needed more consideration, especially in relation to access to e-resources. Marketing of library services was crucial. Irish health library standards needed to be updated and enforced, and a proper evidence base established. The literature provided a number of examples of potentially useful initiatives. A strategic plan of action was drawn up in three areas: (i) to identify champions and promote visibility of health service libraries, (ii) to establish a body of evidence and (iii) to support service development and staff mentoring. © 2013 The authors. Health Information and Libraries Journal © 2013 Health Libraries Group.

  10. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    Science.gov (United States)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

    Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods because of its efficiency and its ability to handle the ambiguity of images. However, the success of FCM is not guaranteed because it is easily trapped in local optima. Cuckoo search (CS) is a recent evolutionary algorithm that has been tested on several optimization problems and shown to be highly efficient. Therefore, a new segmentation technique that combines FCM with the CS algorithm is put forward in this paper. The proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust and adaptive and performs better than the other methods considered in the paper.
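    As an illustration of the approach described in this record, the sketch below wires a minimal cuckoo-search loop around the standard fuzzy C-means objective for 1-D gray-level clustering. It is not the authors' implementation: the population size, the abandonment fraction, and the simplified heavy-tailed (Cauchy) step standing in for a full Levy flight are all assumptions.

```python
import numpy as np

def fcm_objective(pixels, centers, m=2.0, eps=1e-9):
    # Fuzzy C-means objective: memberships follow in closed form from the
    # current cluster centers, then J = sum(u^m * d^2).
    d = np.abs(pixels[:, None] - centers[None, :]) + eps      # (N, C) distances for 1-D gray levels
    u = 1.0 / (d ** (2.0 / (m - 1.0)))
    u /= u.sum(axis=1, keepdims=True)                         # fuzzy memberships
    return np.sum((u ** m) * d ** 2)

def cuckoo_search_fcm(pixels, n_clusters=3, n_nests=15, iters=100, pa=0.25, seed=0):
    """Minimal cuckoo-search wrapper around the FCM objective (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = pixels.min(), pixels.max()
    nests = rng.uniform(lo, hi, size=(n_nests, n_clusters))   # each nest = one set of centers
    fit = np.array([fcm_objective(pixels, n) for n in nests])
    for _ in range(iters):
        # Levy-flight-like move, simplified here as a heavy-tailed Cauchy step.
        step = rng.standard_cauchy(size=nests.shape) * 0.01 * (hi - lo)
        new = np.clip(nests + step, lo, hi)
        new_fit = np.array([fcm_objective(pixels, n) for n in new])
        better = new_fit < fit
        nests[better], fit[better] = new[better], new_fit[better]
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        worst = np.argsort(fit)[-int(pa * n_nests):]
        nests[worst] = rng.uniform(lo, hi, size=(len(worst), n_clusters))
        fit[worst] = [fcm_objective(pixels, n) for n in nests[worst]]
    return np.sort(nests[np.argmin(fit)])

# Usage: centers = cuckoo_search_fcm(image.ravel().astype(float)); then label each
# pixel with its nearest center to obtain the segmentation.
```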

  11. MacroEvoLution: A New Method for the Rapid Generation of Novel Scaffold-Diverse Macrocyclic Libraries.

    Science.gov (United States)

    Saupe, Jörn; Kunz, Oliver; Haustedt, Lars Ole; Jakupovic, Sven; Mang, Christian

    2017-09-04

    Macrocycles are a structural class bearing great promise for future challenges in medicinal chemistry. Nevertheless, there are few flexible approaches for the rapid generation of structurally diverse macrocyclic compound collections. Here, an efficient method for the generation of novel macrocyclic peptide-based scaffolds is reported. The process, named here "MacroEvoLution", is based on a cyclization screening approach that gives reliable access to novel macrocyclic architectures. Classification of building blocks into specific pools ensures that scaffolds with orthogonally addressable functionalities are generated, which can easily be used for the generation of structurally diverse compound libraries. The method grants rapid access to novel scaffolds with scalable synthesis (multigram scale) and the introduction of further diversity at a late stage. Despite being developed for peptidic systems, the approach can easily be extended to the synthesis of systems with a decreased peptidic character. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  12. A comparison of methods for gravitational wave burst searches from LIGO and Virgo

    International Nuclear Information System (INIS)

    Beauville, F; Buskulic, D; Grosjean, D; Bizouard, M-A; Cavalier, F; Clapson, A-C; Hello, P; Blackburn, L; Katsavounidis, E; Bosi, L; Brocco, L; Brown, D A; Chatterji, S; Christensen, N; Knight, M; Fairhurst, S; Guidi, G; Heng, S; Hewitson, M; Klimenko, S

    2008-01-01

    The search procedure for burst gravitational waves has been studied using 24 h of simulated data in a network of three interferometers (Hanford 4 km, Livingston 4 km and Virgo 3 km are the example interferometers). Several methods to detect burst events developed in the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration have been studied and compared. We have performed coincidence analysis of the triggers obtained in the different interferometers, with and without simulated signals added to the data. The benefits of having multiple interferometers of similar sensitivity are demonstrated by comparing the detection performance of the joint coincidence analysis with LSC-only and Virgo-only burst searches. Adding Virgo to the LIGO detector network can increase the detection efficiency for this search by 50%. Another advantage of a joint LIGO-Virgo network is the ability to reconstruct the source sky position. The reconstruction accuracy depends on the timing measurement accuracy of the events in each interferometer, and is illustrated in this paper with a fixed source position example.

  13. A comparison of methods for gravitational wave burst searches from LIGO and Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Beauville, F; Buskulic, D; Grosjean, D [Laboratoire d' Annecy-le-Vieux de Physique des Particules, Chemin de Bellevue, BP 110, 74941 Annecy-le-Vieux Cedex (France); Bizouard, M-A; Cavalier, F; Clapson, A-C; Hello, P [Laboratoire de l' Accelerateur Lineaire, IN2P3/CNRS-Universite de Paris XI, BP 34, 91898 Orsay Cedex (France); Blackburn, L; Katsavounidis, E [LIGO-Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Bosi, L [INFN Sezione di Perugia and/or Universita di Perugia, Via A Pascoli, I-06123 Perugia (Italy); Brocco, L [INFN Sezione di Roma and/or Universita ' La Sapienza' , P le A Moro 2, I-00185 Roma (Italy); Brown, D A; Chatterji, S [LIGO-California Institute of Technology, Pasadena, CA 91125 (United States); Christensen, N; Knight, M [Carleton College, Northfield, MN 55057 (United States); Fairhurst, S [University of Wisconsin-Milwaukee, Milwaukee, WI 53201 (United States); Guidi, G [INFN Sezione Firenze/Urbino Via G Sansone 1, I-50019 Sesto Fiorentino (Italy); and/or Universita di Firenze, Largo E Fermi 2, I-50125 Firenze and/or Universita di Urbino, Via S Chiara 27, I-61029 Urbino (Italy); Heng, S; Hewitson, M [University of Glasgow, Glasgow, G12 8QQ (United Kingdom); Klimenko, S [University of Florida-Gainesville, FL 32611 (United States)] (and others)

    2008-02-21

    The search procedure for burst gravitational waves has been studied using 24 h of simulated data in a network of three interferometers (Hanford 4 km, Livingston 4 km and Virgo 3 km are the example interferometers). Several methods to detect burst events developed in the LIGO Scientific Collaboration (LSC) and Virgo Collaboration have been studied and compared. We have performed coincidence analysis of the triggers obtained in the different interferometers with and without simulated signals added to the data. The benefits of having multiple interferometers of similar sensitivity are demonstrated by comparing the detection performance of the joint coincidence analysis with LSC and Virgo only burst searches. Adding Virgo to the LIGO detector network can increase by 50% the detection efficiency for this search. Another advantage of a joint LIGO-Virgo network is the ability to reconstruct the source sky position. The reconstruction accuracy depends on the timing measurement accuracy of the events in each interferometer, and is displayed in this paper with a fixed source position example.

  14. The Role of School Libraries | Wali | Nigerian School Library Journal

    African Journals Online (AJOL)


  15. Pre-Service Teachers' Use of Library Databases: Some Insights

    Science.gov (United States)

    Lamb, Janeen; Howard, Sarah; Easey, Michael

    2014-01-01

    The aim of this study is to investigate if providing mathematics education pre-service teachers with animated library tutorials on library and database searches changes their searching practices. This study involved the completion of a survey by 138 students and seven individual interviews before and after library search demonstration videos were…

  16. Phase boundary estimation in electrical impedance tomography using the Hooke and Jeeves pattern search method

    International Nuclear Information System (INIS)

    Khambampati, Anil Kumar; Kim, Kyung Youn; Ijaz, Umer Zeeshan; Lee, Jeong Seong; Kim, Sin

    2010-01-01

    In industrial processes, monitoring of heterogeneous phases is crucial to the safety and operation of engineering structures; in particular, the visualization of voids and air bubbles is advantageous. As a result, many studies have appeared in the literature that offer varying degrees of functionality. Electrical impedance tomography (EIT) has already proved to be a hallmark for process monitoring: it offers not only visualization of the resistivity profile for a given flow mixture but is also used for detection of phase boundaries. Iterative image reconstruction algorithms, such as the modified Newton–Raphson (mNR) method, are commonly used as inverse solvers. However, their utility is limited in that they require an initial solution in close proximity to the ground truth, and they also rely on the gradient information of the objective function to be minimized. Therefore, in this paper, we address these issues by employing a direct search algorithm, namely the Hooke and Jeeves pattern search method, to estimate the phase boundaries; it minimizes the cost function directly and does not require gradient information. It is assumed that the resistivity profile is known a priori, so the unknowns are the size and location of the object. The boundary coefficients are parameterized using a truncated Fourier series and are estimated using the relationship between the measured voltages and injected currents. Through extensive simulation and experimental results, and by comparison with mNR, we show that the Hooke and Jeeves pattern search method offers a promising prospect for process monitoring.
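    The Hooke and Jeeves pattern search itself is a standard derivative-free routine; a compact, generic sketch is given below with a toy misfit function standing in for the EIT boundary-estimation cost (the step sizes and the two-parameter example are illustrative, not the authors' setup).

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Generic Hooke-Jeeves pattern search: exploratory moves along each
    coordinate, followed by a pattern move in the improving direction."""
    def explore(base, s):
        x, fx = base.copy(), f(base)
        for i in range(len(x)):
            for delta in (+s, -s):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x_best = np.asarray(x0, dtype=float)
    f_best = f(x_best)
    for _ in range(max_iter):
        x_new, f_new = explore(x_best, step)
        if f_new < f_best:
            # Pattern move: jump further along the successful direction,
            # then explore again around the extrapolated point.
            x_pat = x_new + (x_new - x_best)
            x_best, f_best = x_new, f_new
            x_try, f_try = explore(x_pat, step)
            if f_try < f_best:
                x_best, f_best = x_try, f_try
        else:
            step *= shrink          # no improvement: refine the mesh
            if step < tol:
                break
    return x_best, f_best

# Example: recover two boundary parameters of a hypothetical misfit function.
misfit = lambda p: (p[0] - 1.2) ** 2 + 10 * (p[1] + 0.3) ** 2
print(hooke_jeeves(misfit, x0=[0.0, 0.0]))
```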

  17. The Library Macintosh at SCIL [Small Computers in Libraries]'88.

    Science.gov (United States)

    Valauskas, Edward J.; And Others

    1988-01-01

    The first of three papers describes the role of Macintosh workstations in a library. The second paper explains why the Macintosh was selected for end-user searching in an academic library, and the third discusses advantages and disadvantages of desktop publishing for librarians. (8 references) (MES)

  18. Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management

    Science.gov (United States)

    Chiu, Y.; Nishikawa, T.; Martin, P.

    2008-12-01

    Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g., minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond
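    A minimal sketch of the general hybrid idea (global genetic search followed by local gradient polishing) is shown below; it uses a toy cost surrogate and a finite-difference gradient, and makes no attempt to reproduce the actual MINLP formulation, constraints, or cost data of the Warren basin study.

```python
import numpy as np

def hybrid_ga_local(cost, bounds, pop=40, gens=60, seed=1):
    """Sketch of a hybrid scheme: a simple real-coded GA finds a near-optimal
    point, then a finite-difference gradient descent polishes it."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.array([cost(x) for x in X])
        parents = X[np.argsort(fit)[: pop // 2]]          # truncation selection
        kids = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            w = rng.random(len(lo))
            child = w * a + (1 - w) * b                   # blend crossover
            child += rng.normal(0, 0.05 * (hi - lo))      # Gaussian mutation
            kids.append(np.clip(child, lo, hi))
        X = np.vstack([parents, kids])
    x = X[np.argmin([cost(x) for x in X])]

    # Local polishing: steepest descent with a central-difference gradient
    # (fixed step length, purely illustrative).
    h, lr = 1e-5, 0.01
    for _ in range(200):
        g = np.array([(cost(x + h * e) - cost(x - h * e)) / (2 * h)
                      for e in np.eye(len(x))])
        x = np.clip(x - lr * g, lo, hi)
    return x, cost(x)

# Example with a toy two-variable "delivery cost" surrogate (purely illustrative).
toy_cost = lambda x: (x[0] - 3) ** 2 + (x[1] - 1) ** 4 + 5
print(hybrid_ga_local(toy_cost, bounds=[(0, 10), (0, 10)]))
```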

  19. PMSVM: An Optimized Support Vector Machine Classification Algorithm Based on PCA and Multilevel Grid Search Methods

    Directory of Open Access Journals (Sweden)

    Yukai Yao

    2015-01-01

    Full Text Available We propose an optimized Support Vector Machine classifier, named PMSVM, in which System Normalization, PCA, and Multilevel Grid Search methods are comprehensively considered for data preprocessing and parameter optimization, respectively. The main goals of this study are to improve the classification efficiency and accuracy of SVM. Sensitivity, Specificity, Precision, the ROC curve, and so forth are adopted to appraise the performance of PMSVM. Experimental results show that PMSVM has better accuracy and remarkably higher efficiency compared with traditional SVM algorithms.
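    A rough scikit-learn analogue of the described pipeline (normalization, PCA, then a coarse-to-fine, i.e. multilevel, grid search over the SVM hyperparameters) is sketched below; the dataset, the number of PCA components, and the grid values are placeholders rather than the settings used in the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),    # normalization
                 ("pca", PCA(n_components=10)),  # dimensionality reduction
                 ("svm", SVC(kernel="rbf"))])

# Level 1: coarse logarithmic grid over C and gamma.
coarse = GridSearchCV(pipe, {"svm__C": 10.0 ** np.arange(-2, 4),
                             "svm__gamma": 10.0 ** np.arange(-4, 1)}, cv=5)
coarse.fit(X_tr, y_tr)
C0, g0 = coarse.best_params_["svm__C"], coarse.best_params_["svm__gamma"]

# Level 2: finer grid centred on the coarse optimum.
fine = GridSearchCV(pipe, {"svm__C": C0 * 2.0 ** np.arange(-2, 3),
                           "svm__gamma": g0 * 2.0 ** np.arange(-2, 3)}, cv=5)
fine.fit(X_tr, y_tr)
print(fine.best_params_, fine.score(X_te, y_te))
```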

  20. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing k-eff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been developed to optimize reactor core loading patterns. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions: it uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, in the first step, the GSA method is compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, for finding the best core, the GSA algorithm has been applied to three PWR test cases including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimizations with the following goals are considered: increasing the multiplication factor (k-eff), decreasing the power peaking factor (PPF) and flattening the power density. It is notable that the PARCS (Purdue Advanced Reactor Core Simulator) code is used for the neutronic calculations. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
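    For reference, a minimal continuous Gravitational Search Algorithm is sketched below on a generic multimodal test function. The constants G0 and alpha, the absence of a shrinking "Kbest" attractor set, and the test function itself are assumptions; the paper's actual application is a combinatorial loading-pattern problem evaluated with the PARCS code.

```python
import numpy as np

def gsa(f, dim=2, n_agents=20, iters=200, G0=100.0, alpha=20.0, lb=-5.0, ub=5.0, seed=0):
    """Minimal continuous Gravitational Search Algorithm (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        if fit.min() < best_f:
            best_f, best_x = fit.min(), X[fit.argmin()].copy()
        # Masses: best agent gets mass 1, worst gets 0, then normalize.
        m = (fit - fit.max()) / (fit.min() - fit.max() + 1e-12)
        M = m / (m.sum() + 1e-12)
        G = G0 * np.exp(-alpha * t / iters)        # decaying gravitational constant
        A = np.zeros_like(X)
        for i in range(n_agents):
            diff = X - X[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            w = rng.random(n_agents)               # random weights on each attracting agent
            # F_ij ~ G * M_i * M_j / R_ij * (x_j - x_i); dividing by M_i gives acceleration.
            A[i] = np.sum((w * G * M / dist)[:, None] * diff, axis=0)
        V = rng.random(X.shape) * V + A
        X = np.clip(X + V, lb, ub)
    return best_x, best_f

# Example on a simple multimodal test function (a stand-in for Shekel's Foxholes).
rastrigin = lambda x: 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
print(gsa(rastrigin))
```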

  1. Evidence Based Management as a Tool for Special Libraries

    Directory of Open Access Journals (Sweden)

    Bill Fisher

    2007-12-01

    Full Text Available Objective ‐ To examine the evidence based management literature, as an example of evidence based practice, and determine how applicable evidence based management might be in the special library environment. Methods ‐ Recent general management literature and the subject‐focused literature of evidence based management were reviewed; likewise recent library/information science management literature and the subject‐focused literature of evidence based librarianship were reviewed to identify relevant examples of the introduction and use of evidence based practice in organizations. Searches were conducted in major business/management databases, major library/information science databases, and relevant Web sites, blogs and wikis. Citation searches on key articles and follow‐up searches on cited references were also conducted. Analysis of the retrieved literature was conducted to find similarities and/or differences between the management literature and the library/information science literature, especially as it related to special libraries. Results ‐ The barriers to introducing evidence based management into most organizations were found to apply to many special libraries and are similar to issues involved with evidence based practice in librarianship in general. Despite these barriers, a set of resources to assist special librarians in accessing research‐based information to help them use principles of evidence based management is identified. Conclusion ‐ While most special librarians are faced with a number of barriers to using evidence based management, resources do exist to help overcome these obstacles.

  2. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database.

    Science.gov (United States)

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens Co. (EBSCO), which offers different search methods that can be used based on the needs of the users; visual search and MeSH-controlled search are among the most common. The goal of this research was to compare the precision of the sources retrieved from the EBSCO Medline host using the MeSH-controlled and visual search methods. This research was a semi-empirical study. Through training workshops, 70 students of higher education in different educational departments of Kashan University of Medical Sciences were taught the MeSH-controlled and visual search methods in 2012. Then, the precision of 300 searches made by these students was calculated based on the Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent samples t-test; the three precisions obtained with the three precision formulas were studied for the two search methods. The mean precision of the visual method was greater than that of the MeSH-controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and the mean precisions differed significantly. Fifty-three percent of the participants in the research also mentioned that the use of a combination of the two methods produced better results. For users, it is more appropriate to use a natural-language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual method allowed them more freedom of action.

  3. Libraries and the Search for Academic Excellence. Proceedings of the Arden House Symposium (New York, New York, March 15-17, 1987).

    Science.gov (United States)

    1987

    In the introductory paper Patricia Senn Breivik provides background information on and an overview of a national symposium. This introduction is followed by the full text of nine papers presented at the symposium: (1) "The Academic Library and Education for Leadership" (Major R. Owens, U.S. House of Representatives); (2) "Academic…

  4. A dynamic lattice searching method with rotation operation for optimization of large clusters

    International Nuclear Information System (INIS)

    Wu Xia; Cai Wensheng; Shao Xueguang

    2009-01-01

    Global optimization of large clusters has been a difficult task, though much effort has been paid and many efficient methods have been proposed. In this work, a rotation operation (RO) is designed to realize the structural transformation from decahedra to icosahedra for the optimization of large clusters, by rotating the atoms below the center atom through a definite angle around the fivefold axis. Based on the RO, a development of the previous dynamic lattice searching method with constructed core (DLSc), named DLSc-RO, is presented. With an investigation of the method for the optimization of Lennard-Jones (LJ) clusters, i.e., LJ500, LJ561, LJ600, LJ665-667, LJ670, LJ685, and LJ923, Morse clusters, silver clusters with the Gupta potential, and aluminum clusters with the NP-B potential, it was found that the global minima with both icosahedral and decahedral motifs can be obtained, and the method is proved to be efficient and universal.

  5. MRS algorithm: a new method for searching myocardial region in SPECT myocardial perfusion images.

    Science.gov (United States)

    He, Yuan-Lie; Tian, Lian-Fang; Chen, Ping; Li, Bin; Mao, Zhong-Yuan

    2005-10-01

    First, the necessity of automatically segmenting the myocardium from myocardial SPECT images is discussed in Section 1. To eliminate the influence of the background, the optimal threshold segmentation method modified for the MRS algorithm is explained in Section 2. Then, the image erosion structure is applied to identify the myocardium region and the liver region. The contour tracing method is introduced to extract the myocardial contour. To locate the centroid of the myocardium, the myocardial centroid searching method is developed. The protocol of the MRS algorithm is summarized in Section 6. The performance of the MRS algorithm is investigated and the conclusion is drawn in Section 7. Finally, the importance of the MRS algorithm and possible improvements to it are discussed.

  6. The CERN Library

    CERN Multimedia

    Hester, Alec G

    1968-01-01

    Any advanced research centre needs a good Library. It can be regarded as a piece of equipment as vital as any machine. At the present time, the CERN Library is undergoing a number of modifications to adjust it to the changing scale of CERN's activities and to the ever increasing flood of information. This article, by A.G. Hester, former Editor of CERN COURIER who now works in the Scientific Information Service, describes the purposes, methods and future of the CERN Library.

  7. Slovene specialized text corpus of Library and Information Science – An advanced lexicographic tool for library terminology research

    OpenAIRE

    Kanič, Ivan

    2013-01-01

    To support research in the field of library and information science terminology and dictionary construction in the Slovene language, a specialized text corpus has been designed and constructed. The corpus has reached 3.6 million words extracted from 625 Slovene technical and scientific texts of the field. It supports a variety of specialized search methods, display of search results, and their statistical computation. The web-based application is in open public access.

  8. SWATH Mass Spectrometry Performance Using Extended Peptide MS/MS Assay Libraries*

    Science.gov (United States)

    Wu, Jemma X.; Song, Xiaomin; Pascovici, Dana; Zaw, Thiri; Care, Natasha; Krisp, Christoph; Molloy, Mark P.

    2016-01-01

    The use of data-independent acquisition methods such as SWATH for mass spectrometry based proteomics is usually performed with peptide MS/MS assay libraries which enable identification and quantitation of peptide peak areas. Reference assay libraries can be generated locally through information dependent acquisition, or obtained from community data repositories for commonly studied organisms. However, there have been no studies performed to systematically evaluate how locally generated or repository-based assay libraries affect SWATH performance for proteomic studies. To undertake this analysis, we developed a software workflow, SwathXtend, which generates extended peptide assay libraries by integration with a local seed library and delivers statistical analysis of SWATH-quantitative comparisons. We designed test samples using peptides from a yeast extract spiked into peptides from human K562 cell lysates at three different ratios to simulate protein abundance change comparisons. SWATH-MS performance was assessed using local and external assay libraries of varying complexities and proteome compositions. These experiments demonstrated that local seed libraries integrated with external assay libraries achieve better performance than local assay libraries alone, in terms of the number of identified peptides and proteins and the specificity to detect differentially abundant proteins. Our findings show that the performance of extended assay libraries is influenced by the MS/MS feature similarity of the seed and external libraries, while statistical analysis using multiple testing corrections increases the statistical rigor needed when searching against large extended assay libraries. PMID:27161445

  9. An R-peak detection method that uses an SVD filter and a search back system.

    Science.gov (United States)

    Jung, Woo-Hyuk; Lee, Sang-Goog

    2012-12-01

    In this paper, we present a method for detecting the R-peak of an ECG signal by using a singular value decomposition (SVD) filter and a search back system. The ECG signal was processed in two phases: the pre-processing phase and the decision phase. The pre-processing phase consisted of the stages for the SVD filter, Butterworth high-pass filter (HPF), moving average (MA), and squaring, whereas the decision phase consisted of a single stage that detected the R-peak. In the pre-processing phase, the SVD filter removed noise while the Butterworth HPF eliminated baseline wander. The MA removed the remaining noise of the signal that had gone through the SVD filter to make the signal smooth, and squaring played a role in strengthening the signal. In the decision phase, a threshold was used to set the interval before detecting the R-peak. When the latest R-R interval (RRI), as suggested by Hamilton et al., was greater than 150% of the previous RRI, the method was modified to search for the R-peak within an interval of 150% or more of the smaller of the two most recent RRIs. When the modified search back system was used, the error rate of the peak detection decreased to 0.29%, compared to 1.34% when the modified search back system was not used. Consequently, the sensitivity was 99.47%, the positive predictivity was 99.47%, and the detection error was 1.05%. Furthermore, the quality of the signal in data with a substantial amount of noise was improved, and thus the R-peak was detected effectively. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
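    The overall pipeline (filtering, moving average, squaring, adaptive thresholding with search back) resembles classical QRS detectors; the sketch below follows that generic recipe with a band-pass filter standing in for the paper's SVD denoising stage. The sampling rate, thresholds, and the 1.5x-RR search-back rule are illustrative assumptions, and a reasonably clean multi-beat recording is assumed.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def detect_r_peaks(ecg, fs=360):
    """Simplified R-peak detector: band-pass -> moving average -> squaring ->
    adaptive threshold, followed by a basic search-back pass over long RR gaps."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    x = filtfilt(b, a, ecg)                                   # remove baseline wander and high-freq noise
    w = int(0.08 * fs)
    x = np.convolve(np.abs(x), np.ones(w) / w, mode="same") ** 2   # smooth, then emphasize peaks

    thr = 0.5 * x[: 2 * fs].max()                             # initial threshold from the first 2 s
    refractory = int(0.2 * fs)
    peaks, last = [], -refractory
    for i in range(1, len(x) - 1):
        if x[i] > thr and x[i] >= x[i - 1] and x[i] >= x[i + 1] and i - last > refractory:
            peaks.append(i)
            last = i
            thr = 0.125 * x[i] + 0.875 * thr                  # adapt threshold to signal level

    # Search-back pass: re-scan any RR interval longer than 1.5x the median RR
    # with half the (final) threshold.
    extra = []
    if len(peaks) > 2:
        rr = np.diff(peaks)
        med = np.median(rr)
        for k, gap in enumerate(rr):
            if gap > 1.5 * med:
                lo_i, hi_i = peaks[k] + refractory, peaks[k + 1] - refractory
                if hi_i > lo_i:
                    j = lo_i + int(np.argmax(x[lo_i:hi_i]))
                    if x[j] > 0.5 * thr:
                        extra.append(j)
    return np.array(sorted(peaks + extra))
```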

  10. Multilevel library instruction for emerging nursing roles.

    Science.gov (United States)

    Francis, B W; Fisher, C C

    1995-10-01

    As new nursing roles emerge that involve greater decision making than in the past, added responsibility for outcomes and cost control, and increased emphasis on primary care, the information-seeking skills needed by nurses change. A search of library and nursing literature indicates that there is little comprehensive library instruction covering all levels of nursing programs: undergraduate, returning registered nurses, and graduate students. The University of Florida is one of the few places that has such a multilevel, course-integrated curriculum in place for all entrants into the nursing program. Objectives have been developed for each stage of learning. The courses include instruction in the use of the online public access catalog, printed resources, and electronic databases. A library classroom equipped with the latest technology enables student interaction with electronic databases. This paper discusses the program and several methods used to evaluate it.

  11. The impact of PICO as a search strategy tool on literature search quality

    DEFF Research Database (Denmark)

    Eriksen, Mette Brandt; Frandsen, Tove Faber

    2018-01-01

    Objective: This review aimed to determine, if the use of the PICO model (Patient Intervention Comparison Outcome) as a search strategy tool affects the quality of the literature search. Methods: A comprehensive literature search was conducted in: PubMed, Embase, CINAHL, PsycInfo, Cochrane Library...... and three studies were included, data was extracted, risk of bias was assessed and a qualitative analysis was conducted. The included studies compared PICO to PIC or link to related articles in PubMed; PICOS and SPIDER. One study compared PICO to unguided searching. Due to differences in intervention...

  12. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for the optimization of complex systems rely on developing and updating mathematical models of those systems by solving appropriate inverse problems. The input data needed for the solution are obtained from the analysis of experimentally determined characteristics of a system or process; the sought quantities include the coefficients of the model equations, boundary conditions, and so on. The optimization approach is one of the main routes to solving such inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in identification and computational diagnostics, as well as in optimal control, computed tomography, image restoration, neural network training, and other intelligent technologies. The increasingly complex systems optimized over recent decades lead to more complicated mathematical models and thereby make the corresponding extremal problems significantly more difficult. Practical applications may also impose conditions that restrict modeling; as a consequence, the criterion functions in inverse problems can be noisy and not everywhere differentiable. The presence of noise makes calculating derivatives difficult and unreliable, which motivates optimization methods that do not require derivatives. The efficiency of deterministic global optimization algorithms is significantly restricted by their dependence on the dimension of the extremal problem; when the number of variables is large, stochastic global optimization algorithms are used instead. However, stochastic algorithms can yield overly expensive solutions, and this drawback restricts their application. This motivates hybrid algorithms that combine a stochastic algorithm for scanning the variable space with deterministic local search

  13. Methods to filter out spurious disturbances in continuous-wave searches from gravitational-wave detectors

    International Nuclear Information System (INIS)

    Leaci, Paola

    2015-01-01

    Semicoherent all-sky searches over year-long observation times for continuous gravitational wave signals produce many thousands of potential periodic source candidates. Efficient methods able to discard false candidate events are crucial in order to put all the effort into a computationally intensive follow-up analysis of the remaining most promising candidates (Shaltev et al 2014 Phys. Rev. D 89 124030). In this paper we present a set of techniques able to fulfill such requirements, identifying and eliminating false candidate events and thus reducing the bulk of candidate sets that need to be further investigated. Some of these techniques were also used to streamline the candidate sets returned by the Einstein@Home hierarchical searches presented in (Aasi J et al (The LIGO Scientific Collaboration and the Virgo Collaboration) 2013 Phys. Rev. D 87 042001). These powerful methods, and the benefits originating from their application to both simulated data and detector data from the fifth LIGO science run, are illustrated and discussed. (paper)

  14. Validation of LWR calculation methods and JEF-1 based data libraries by TRX and BAPL critical experiments

    International Nuclear Information System (INIS)

    Pelloni, S.; Grimm, P.; Mathews, D.; Paratte, J.M.

    1989-06-01

    In this report the capability of various code systems widely used at PSI (such as WIMS-D, BOXER, and the AARE modules TRAMIX and MICROX-2 in connection with the one-dimensional transport code ONEDANT) and JEF-1 based nuclear data libraries to compute LWR lattices is analysed by comparing results from thermal reactor benchmarks TRX and BAPL with experiment and with previously published values. It is shown that with the JEF-1 evaluation eigenvalues are generally well predicted within 8 mk (1 mk = 0.001) or less by all code systems, and that all methods give reasonable results for the measured reaction rate within or not too far from the experimental uncertainty. This is consistent with previous similar studies. (author) 7 tabs., 36 refs

  15. Reporting Quality of Search Methods in Systematic Reviews of HIV Behavioral Interventions (2000–2010): Are the Searches Clearly Explained, Systematic and Reproducible?

    Science.gov (United States)

    Mullins, Mary M.; DeLuca, Julia B.; Crepaz, Nicole; Lyles, Cynthia M.

    2018-01-01

    Systematic reviews are an essential tool for researchers, prevention providers and policy makers who want to remain current with the evidence in the field. Systematic reviews must adhere to strict standards, as their results can provide a more objective appraisal of evidence for making scientific decisions than traditional narrative reviews. An integral component of a systematic review is the development and execution of a comprehensive systematic search to collect available and relevant information. A number of reporting guidelines have been developed to ensure quality publications of systematic reviews. These guidelines provide the essential elements to include in the review process and to report in the final publication for complete transparency. We identified the common elements of reporting guidelines and examined the reporting quality of search methods in the HIV behavioral intervention literature. Consistent with the findings from previous evaluations of the reporting of search methods in systematic reviews in other fields, our review shows a lack of full and transparent reporting within systematic reviews even though a plethora of guidelines exist. This review underscores the need for promoting the completeness of and adherence to transparent systematic search reporting within systematic reviews. PMID:26052651

  16. LITERATURE SEARCH FOR METHODS FOR HAZARD ANALYSES OF AIR CARRIER OPERATIONS.

    Energy Technology Data Exchange (ETDEWEB)

    MARTINEZ - GURIDI,G.; SAMANTA,P.

    2002-07-01

    Representatives of the Federal Aviation Administration (FAA) and several air carriers under Title 14 of the Code of Federal Regulations (CFR) Part 121 developed a system-engineering model of the functions of air-carrier operations. Their analyses form the foundation or basic architecture upon which other task areas are based: hazard analyses, performance measures, and risk indicator design. To carry out these other tasks, models may need to be developed using the basic architecture of the Air Carrier Operations System Model (ACOSM). Since ACOSM encompasses various areas of air-carrier operations and can be used to address different task areas with differing but interrelated objectives, the modeling needs are broad. A literature search was conducted to identify and analyze the existing models that may be applicable for pursuing the task areas in ACOSM. The intent of the literature search was not necessarily to identify a specific model that can be directly used, but rather to identify relevant ones that have similarities with the processes and activities defined within ACOSM. Such models may provide useful inputs and insights in structuring ACOSM models. ACOSM simulates processes and activities in air-carrier operation, but, in a general framework, it has similarities with other industries where attention also has been paid to hazard analyses, emphasizing risk management, and in designing risk indicators. To assure that efforts in other industries are adequately considered, the literature search includes publications from other industries, e.g., chemical, nuclear, and process industries. This report discusses the literature search, the relevant methods identified and provides a preliminary assessment of their use in developing the models needed for the ACOSM task areas. A detailed assessment of the models has not been made. Defining those applicable for ACOSM will need further analyses of both the models and tools identified. The report is organized in four chapters

  17. Hybrid Direct and Iterative Solver with Library of Multi-criteria Optimal Orderings for h Adaptive Finite Element Method Computations

    KAUST Repository

    AbouEisha, Hassan M.

    2016-06-02

    In this paper we present a multi-criteria optimization of element partition trees and the resulting orderings for multi-frontal solver algorithms executed for the two-dimensional h-adaptive finite element method. In particular, the problem of optimal ordering of the elimination of rows in the sparse matrices resulting from adaptive finite element method computations is reduced to the problem of finding optimal element partition trees. Given a two-dimensional h-refined mesh, we find all optimal element partition trees by using a dynamic programming approach. An element partition tree defines a prescribed order of elimination of degrees of freedom over the mesh. We utilize three different metrics to estimate the quality of an element partition tree. As the first criterion we consider the number of floating point operations (FLOPs) performed by the multi-frontal solver. As the second criterion we consider the number of memory transfers (MEMOPS) performed by the multi-frontal solver algorithm. As the third criterion we consider the memory usage (NONZEROS) of the multi-frontal direct solver. We show the optimization results for FLOPs vs MEMOPS as well as for the execution time, estimated as FLOPs + 100*MEMOPS, vs NONZEROS. We obtain Pareto fronts with multiple optimal trees for each mesh and for each refinement level. We generate a library of optimal elimination trees for small grids with local singularities. We also propose an algorithm for a large mesh with identified local sub-grids, each containing a local singularity: we compute Schur complements over the sub-grids using the optimal trees from the library, and we submit the sequence of Schur complements to the iterative solver ILUPCG.

  18. SPOT-ligand 2: improving structure-based virtual screening by binding-homology search on an expanded structural template library.

    Science.gov (United States)

    Litfin, Thomas; Zhou, Yaoqi; Yang, Yuedong

    2017-04-15

    The high cost of drug discovery motivates the development of accurate virtual screening tools. Binding-homology, which takes advantage of known protein-ligand binding pairs, has emerged as a powerful discrimination technique. In order to exploit all available binding data, modelled structures of ligand-binding sequences may be used to create an expanded structural binding template library. SPOT-Ligand 2 has demonstrated significantly improved screening performance over its previous version by expanding the template library 15 times over the previous one. It also performed better than or similar to other binding-homology approaches on the DUD and DUD-E benchmarks. The server is available online at http://sparks-lab.org . yaoqi.zhou@griffith.edu.au or yuedong.yang@griffith.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  19. The PSIMECX medium-energy neutron activation cross-section library. Part III: Calculational methods for heavy nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Atchison, F.

    1998-09-01

    The PSIMECX library contains calculated nuclide production cross-sections from neutron-induced reactions in the energy range about 2 to 800 MeV in the following 72 stable isotopes of 24 elements: 12C, 13C, 16O, 17O, 18O, 23Na, 24Mg, 25Mg, 26Mg, 27Al, 28Si, 29Si, 30Si, 31P, 32S, 33S, 34S, 36S, 35Cl, 37Cl, 39K, 40K, 41K, 40Ca, 42Ca, 43Ca, 44Ca, 46Ca, 48Ca, 46Ti, 47Ti, 48Ti, 49Ti, 50Ti, 50V, 51V, 50Cr, 52Cr, 53Cr, 54Cr, 55Mn, 54Fe, 56Fe, 57Fe, 58Fe, 58Ni, 60Ni, 61Ni, 62Ni, 64Ni, 63Cu, 65Cu, 64Zn, 66Zn, 67Zn, 68Zn, 70Zn, 92Mo, 94Mo, 95Mo, 96Mo, 97Mo, 98Mo, 100Mo, 121Sb, 123Sb, 204Pb, 206Pb, 207Pb, 208Pb, 232Th and 238U. The energy range covers essentially all transmutation channels other than capture. The majority of the selected elements are main constituents of normal materials of construction used in and around accelerator facilities and the library is, first and foremost, designed to be a tool for the estimation of their activation in wide-band neutron fields. This third report describes and discusses the calculational methods used for the heavy nuclei. The library itself has been described in the first report of this series and the treatment for the medium and light mass nuclei is given in the second. (author)

  20. Merchandising Techniques and Libraries.

    Science.gov (United States)

    Green, Sylvie A.

    1981-01-01

    Proposes that libraries employ modern booksellers' merchandising techniques to improve circulation of library materials. Using displays in various ways, the methods and reasons for weeding out books, replacing worn book jackets, and selecting new books are discussed. Suggestions for learning how to market and 11 references are provided. (RBF)

  1. Search methods that people use to find owners of lost pets.

    Science.gov (United States)

    Lord, Linda K; Wittum, Thomas E; Ferketich, Amy K; Funk, Julie A; Rajala-Schultz, Päivi J

    2007-06-15

    To characterize the process by which people who find lost pets search for the owners. Cross-sectional study. Sample Population-188 individuals who found a lost pet in Dayton, Ohio, between March 1 and June 30, 2006. Procedures-Potential participants were identified as a result of contact with a local animal agency or placement of an advertisement in the local newspaper. A telephone survey was conducted to identify methods participants used to find the pets' owners. 156 of 188 (83%) individuals completed the survey. Fifty-nine of the 156 (38%) pets were reunited with their owners; median time to reunification was 2 days (range, 0.5 to 45 days). Only 1 (3%) cat owner was found, compared with 58 (46%) dog owners. Pet owners were found as a result of information provided by an animal agency (25%), placement of a newspaper advertisement (24%), walking the neighborhood (19%), signs in the neighborhood (15%), information on a pet tag (10%), and other methods (7%). Most finders (87%) considered it extremely important to find the owner, yet only 13 (8%) initially surrendered the found pet to an animal agency. The primary reason people did not surrender found pets was fear of euthanasia (57%). Only 97 (62%) individuals were aware they could run a found-pet advertisement in the newspaper at no charge, and only 1 person who was unaware of the no-charge policy placed an advertisement. Veterinarians and shelters can help educate people who find lost pets about methods to search for the pets' owners.

  2. SALP, a new single-stranded DNA library preparation method especially useful for the high-throughput characterization of chromatin openness states.

    Science.gov (United States)

    Wu, Jian; Dai, Wei; Wu, Lin; Wang, Jinke

    2018-02-13

    Next-generation sequencing (NGS) is fundamental to current biological and biomedical research. Construction of the sequencing library is a key step of NGS. Therefore, various library construction methods have been explored. However, the current methods are still limited by some shortcomings. This study developed a new NGS library construction method, Single strand Adaptor Library Preparation (SALP), by using a novel single strand adaptor (SSA). SSA is a double-stranded oligonucleotide with a 3' overhang of 3 random nucleotides, which can be efficiently ligated to the 3' end of single strand DNA by T4 DNA ligase. SALP can be started with any denatured DNA fragments such as those sheared by Tn5 tagmentation, enzyme digestion and sonication. When started with Tn5-tagmented chromatin, SALP can overcome a key limitation of ATAC-seq and become a high-throughput NGS library construction method, SALP-seq, which can be used to comparatively characterize the chromatin openness states of multiple cell types without bias. In this way, this study successfully characterized the comparative chromatin openness states of four different cell lines, including GM12878, HepG2, HeLa and 293T, with SALP-seq. Similarly, this study also successfully characterized the chromatin openness states of HepG2 cells with SALP-seq using 10^5 to 500 cells. This study developed a new NGS library construction method, SALP, based on a novel kind of single strand adaptor (SSA), which should have wide applications in the future due to its unique performance.

  3. Exploring genomic dark matter: A critical assessment of the performance of homology search methods on noncoding RNA

    DEFF Research Database (Denmark)

    Freyhult, E.; Bollback, J. P.; Gardner, P. P.

    2006-01-01

    Homology search is one of the most ubiquitous bioinformatic tasks, yet it is unknown how effective the currently available tools are for identifying noncoding RNAs (ncRNAs). In this work, we use reliable ncRNA data sets to assess the effectiveness of methods such as BLAST, FASTA, HMMer, and Infernal. Surprisingly, the most popular homology search methods are often the least accurate. As a result, many studies have used inappropriate tools for their analyses. On the basis of our results, we suggest homology search strategies using the currently available tools and some directions for future...

  4. Quality evaluation of tandem mass spectral libraries.

    Science.gov (United States)

    Oberacher, Herbert; Weinmann, Wolfgang; Dresen, Sebastian

    2011-06-01

    Tandem mass spectral libraries are gaining more and more importance for the identification of unknowns in different fields of research, including metabolomics, forensics, toxicology, and environmental analysis. Particularly, the recent invention of reliable, robust, and transferable libraries has increased the general acceptance of these tools. Herein, we report on results obtained from thorough evaluation of the match reliabilities of two tandem mass spectral libraries: the MSforID library established by the Oberacher group in Innsbruck and the Weinmann library established by the Weinmann group in Freiburg. Three different experiments were performed: (1) Spectra of the libraries were searched against their corresponding library after excluding either this single compound-specific spectrum or all compound-specific spectra prior to searching; (2) the libraries were searched against each other using either library as reference set or sample set; (3) spectra acquired on different mass spectrometric instruments were matched to both libraries. Almost 13,000 tandem mass spectra were included in this study. The MSforID search algorithm was used for spectral matching. Statistical evaluation of the library search results revealed that principally both libraries enable the sensitive and specific identification of compounds. Due to higher mass accuracy of the QqTOF compared with the QTrap instrument, matches to the MSforID library were more reliable when comparing spectra with both libraries. Furthermore, only the MSforID library was shown to be efficiently transferable to different kinds of tandem mass spectrometers, including "tandem-in-time" instruments; this is due to the coverage of a large range of different collision energy settings-including the very low range-which is an outstanding characteristics of the MSforID library.

  5. Resonance self-shielding method using resonance interference factor library for practical lattice physics computations of LWRs

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Khassenov, Azamat; Lee, Deokjung

    2016-01-01

    This paper presents a new method of resonance interference effect treatment using resonance interference factors for high-fidelity analysis of light water reactors (LWRs). Although there have been significant improvements in lattice physics calculations over the past several decades, relatively large errors still exist in the resonance interference treatment, on the order of ∼300 pcm in the reactivity prediction of LWRs. In the newly developed method, the impact of resonance interference on the multi-group cross-sections has been quantified and tabulated in a library which can be used in lattice physics calculations as adjustment factors for multi-group cross-sections. The verification of the new method has been performed with the Mosteller benchmark, UO2 and MOX pin-cell depletion problems, and a 17×17 fuel assembly loaded with gadolinia burnable poison; significant improvements were demonstrated in the accuracy of reactivity and pin power predictions, with reactivity errors down to the order of ∼100 pcm. (author)

  6. ChIPnorm: a statistical method for normalizing and identifying differential regions in histone modification ChIP-seq libraries.

    Science.gov (United States)

    Nair, Nishanth Ulhas; Sahu, Avinash Das; Bucher, Philipp; Moret, Bernard M E

    2012-01-01

    The advent of high-throughput technologies such as ChIP-seq has made possible the study of histone modifications. A problem of particular interest is the identification of regions of the genome where different cell types from the same organism exhibit different patterns of histone enrichment. This problem turns out to be surprisingly difficult, even in simple pairwise comparisons, because of the significant level of noise in ChIP-seq data. In this paper we propose a two-stage statistical method, called ChIPnorm, to normalize ChIP-seq data, and to find differential regions in the genome, given two libraries of histone modifications of different cell types. We show that the ChIPnorm method removes most of the noise and bias in the data and outperforms other normalization methods. We correlate the histone marks with gene expression data and confirm that histone modifications H3K27me3 and H3K4me3 act as respectively a repressor and an activator of genes. Compared to what was previously reported in the literature, we find that a substantially higher fraction of bivalent marks in ES cells for H3K27me3 and H3K4me3 move into a K27-only state. We find that most of the promoter regions in protein-coding genes have differential histone-modification sites. The software for this work can be downloaded from http://lcbb.epfl.ch/software.html.

  7. Optimal correction and design parameter search by modern methods of rigorous global optimization

    International Nuclear Information System (INIS)

    Makino, K.; Berz, M.

    2011-01-01

    Frequently the design of schemes for correction of aberrations or the determination of possible operating ranges for beamlines and cells in synchrotrons exhibit multitudes of possibilities for their correction, usually appearing in disconnected regions of parameter space which cannot be directly qualified by analytical means. In such cases, frequently an abundance of optimization runs are carried out, each of which determines a local minimum depending on the specific chosen initial conditions. Practical solutions are then obtained through an often extended interplay of experienced manual adjustment of certain suitable parameters and local searches by varying other parameters. However, in a formal sense this problem can be viewed as a global optimization problem, i.e. the determination of all solutions within a certain range of parameters that lead to a specific optimum. For example, it may be of interest to find all possible settings of multiple quadrupoles that can achieve imaging; or to find ahead of time all possible settings that achieve a particular tune; or to find all possible manners to adjust nonlinear parameters to achieve correction of high order aberrations. These tasks can easily be phrased in terms of such an optimization problem; but while mathematically this formulation is often straightforward, it has been common belief that it is of limited practical value since the resulting optimization problem cannot usually be solved. However, recent significant advances in modern methods of rigorous global optimization make these methods feasible for optics design for the first time. The key ideas of the method lie in an interplay of rigorous local underestimators of the objective functions, and by using the underestimators to rigorously iteratively eliminate regions that lie above already known upper bounds of the minima, in what is commonly known as a branch-and-bound approach. Recent enhancements of the Differential Algebraic methods used in particle
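    The core branch-and-bound idea (a rigorous lower bound per box, and pruning of boxes whose bound exceeds the best known upper bound) can be illustrated with plain interval arithmetic on a one-dimensional polynomial, as in the sketch below; this is only a toy stand-in for the Differential Algebraic machinery referred to in the abstract, and the objective and tolerance are arbitrary.

```python
import heapq

def interval_pow(lo, hi, k):
    """Tight enclosure of x**k over [lo, hi] for integer k >= 1."""
    cands = [lo ** k, hi ** k]
    if k % 2 == 0 and lo < 0 < hi:
        cands.append(0.0)
    return min(cands), max(cands)

def f(x):                                  # toy objective: x^4 - 3x^2 + x
    return x ** 4 - 3 * x ** 2 + x

def f_bounds(lo, hi):
    """Rigorous (if crude) lower/upper bounds of f over the interval [lo, hi]."""
    l4, u4 = interval_pow(lo, hi, 4)
    l2, u2 = interval_pow(lo, hi, 2)
    return l4 - 3 * u2 + lo, u4 - 3 * l2 + hi

def branch_and_bound(lo, hi, tol=1e-6):
    best_ub = min(f(lo), f(hi), f(0.5 * (lo + hi)))   # any sample gives a valid upper bound
    heap = [(f_bounds(lo, hi)[0], lo, hi)]
    boxes = []                                         # enclosures of candidate global minimizers
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_ub + tol:
            continue                                   # box cannot contain the global minimum
        if b - a < tol:
            boxes.append((a, b))
            continue
        m = 0.5 * (a + b)
        best_ub = min(best_ub, f(m))                   # refine the upper bound at the midpoint
        for c, d in ((a, m), (m, b)):
            lb_cd = f_bounds(c, d)[0]
            if lb_cd <= best_ub + tol:
                heapq.heappush(heap, (lb_cd, c, d))
    return best_ub, boxes

print(branch_and_bound(-2.0, 2.0))
```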

  8. A meta-heuristic method for solving scheduling problem: crow search algorithm

    Science.gov (United States)

    Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi

    2018-04-01

    Scheduling is one of the most important processes in industry, in both manufacturing and services. Scheduling is the process of assigning resources to perform operations on tasks; resources can be machines, people, tasks, jobs or operations. The selection of an optimum sequence of jobs from a permutation is an essential issue in scheduling research, since the optimum sequence is the optimum solution of the scheduling problem. Scheduling becomes an NP-hard problem when the number of jobs in the sequence exceeds what exact algorithms can process. In order to obtain optimum results, a method is needed that is capable of solving complex scheduling problems in an acceptable time. Meta-heuristics are the methods usually used to solve scheduling problems. The recently published method called the Crow Search Algorithm (CSA) is adopted in this research to solve the scheduling problem. CSA is an evolutionary meta-heuristic method based on the behavior of flocks of crows. The calculation results of CSA for solving the scheduling problem are compared with other algorithms. From the comparison, it is found that CSA has better performance in terms of optimum solution and calculation time than the other algorithms.
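    A minimal continuous version of the crow search update described above is sketched below; the awareness probability, flight length, and the toy objective are assumptions, and for an actual job-sequencing problem the real-valued positions would still have to be decoded into permutations (e.g., by a random-keys argsort).

```python
import numpy as np

def crow_search(cost, dim, n_crows=20, iters=200, fl=2.0, ap=0.1, lb=-10, ub=10, seed=0):
    """Minimal crow search algorithm: each crow follows another crow's memorised
    best position unless that crow is 'aware' (probability ap), in which case
    the follower is sent to a random position instead."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_crows, dim))       # current positions
    M = X.copy()                                  # memories (best position per crow)
    M_cost = np.array([cost(m) for m in M])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)             # crow i follows crow j
            if rng.random() >= ap:
                X[i] = X[i] + rng.random() * fl * (M[j] - X[i])
            else:
                X[i] = rng.uniform(lb, ub, dim)   # crow j was aware: random relocation
            X[i] = np.clip(X[i], lb, ub)
            c = cost(X[i])
            if c < M_cost[i]:                     # update memory when improved
                M[i], M_cost[i] = X[i].copy(), c
    best = np.argmin(M_cost)
    return M[best], M_cost[best]

# Toy usage on a continuous surrogate objective.
sphere = lambda x: float(np.sum(x ** 2))
print(crow_search(sphere, dim=5))
```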

  9. Inverse atmospheric radiative transfer problems - A nonlinear minimization search method of solution. [aerosol pollution monitoring

    Science.gov (United States)

    Fymat, A. L.

    1976-01-01

    The paper studies the inversion of the radiative transfer equation describing the interaction of electromagnetic radiation with atmospheric aerosols. The interaction can be considered as the propagation in the aerosol medium of two light beams: the direct beam in the line-of-sight attenuated by absorption and scattering, and the diffuse beam arising from scattering into the viewing direction, which propagates more or less in random fashion. The latter beam has single scattering and multiple scattering contributions. In the former case and for single scattering, the problem is reducible to first-kind Fredholm equations, while for multiple scattering it is necessary to invert partial integrodifferential equations. A nonlinear minimization search method, applicable to the solution of both types of problems has been developed, and is applied here to the problem of monitoring aerosol pollution, namely the complex refractive index and size distribution of aerosol particles.

  10. Adjusting the Parameters of Metal Oxide Gapless Surge Arresters’ Equivalent Circuits Using the Harmony Search Method

    Directory of Open Access Journals (Sweden)

    Christos A. Christodoulou

    2017-12-01

    Full Text Available The appropriate circuit modeling of metal oxide gapless surge arresters is critical for insulation coordination studies. Metal oxide arresters present a dynamic behavior for fast-front surges; namely, their residual voltage depends on the peak value, as well as the duration, of the injected impulse current, and they should therefore not be represented by non-linear elements alone. The aim of the current work is to adjust the parameters of the most frequently used surge arrester circuit models by considering the magnitude of the residual voltage as well as the energy dissipated for given pulses. To this end, the harmony search method is implemented to adjust the parameter values of the arrester equivalent circuit models, by minimizing a defined objective function that compares the simulation outcomes with the manufacturer's data and with the results obtained from previous methodologies.
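    A generic harmony search loop of the kind the record describes is sketched below, fitting two placeholder parameters to a made-up residual-voltage/energy target; the HMCR, PAR, and bandwidth values and the surrogate arrester model are assumptions, not the authors' circuit models or manufacturer data.

```python
import numpy as np

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
    """Generic harmony search: build a new harmony by drawing each variable
    from the harmony memory (prob. hmcr), optionally pitch-adjusting it
    (prob. par), or sampling it at random; replace the worst memory entry
    whenever the new harmony is better."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    HM = rng.uniform(lo, hi, (hms, dim))              # harmony memory
    scores = np.array([objective(h) for h in HM])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:
                new[d] = HM[rng.integers(hms), d]     # memory consideration
                if rng.random() < par:
                    new[d] += bw * (hi[d] - lo[d]) * rng.uniform(-1, 1)   # pitch adjustment
            else:
                new[d] = rng.uniform(lo[d], hi[d])    # random selection
        new = np.clip(new, lo, hi)
        s = objective(new)
        worst = np.argmax(scores)
        if s < scores[worst]:
            HM[worst], scores[worst] = new, s
    best = np.argmin(scores)
    return HM[best], scores[best]

# Illustrative objective: squared error between a surrogate model output and a
# "measured" residual-voltage/energy pair for hypothetical parameters p.
target = np.array([823.0, 5.2])                       # made-up reference values (kV, kJ)
model = lambda p: np.array([700.0 + 50.0 * p[0] + 20.0 * p[1], 3.0 + 0.8 * p[0] * p[1]])
err = lambda p: float(np.sum((model(p) - target) ** 2))
print(harmony_search(err, bounds=[(0.0, 5.0), (0.0, 5.0)]))
```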

  11. A method in search of a theory: peer education and health promotion.

    Science.gov (United States)

    Turner, G; Shepherd, J

    1999-04-01

    Peer education has grown in popularity and practice in recent years in the field of health promotion. However, advocates of peer education rarely make reference to theories in their rationale for particular projects. In this paper the authors review a selection of commonly cited theories, and examine to what extent they have value and relevance to peer education in health promotion. Beginning from an identification of 10 claims made for peer education, each theory is examined in terms of the scope of the theory and evidence to support it in practice. The authors conclude that, whilst most theories have something to offer towards an explanation of why peer education might be effective, most theories are limited in scope and there is little empirical evidence in health promotion practice to support them. Peer education would seem to be a method in search of a theory rather than the application of theory to practice.

  12. Statistical Measures Alone Cannot Determine Which Database (BNI, CINAHL, MEDLINE, or EMBASE) Is the Most Useful for Searching Undergraduate Nursing Topics. A Review of: Stokes, P., Foster, A., & Urquhart, C. (2009). Beyond relevance and recall: Testing new user-centred measures of database performance. Health Information and Libraries Journal, 26(3), 220-231.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-03-01

    Full Text Available Objective – The research project sought to determine which of four databases was the most useful for searching undergraduate nursing topics. Design – Comparative database evaluation. Setting – Nursing and midwifery students at Homerton School of Health Studies (now part of Anglia Ruskin University), Cambridge, United Kingdom, in 2005-2006. Subjects – The subjects were four databases: British Nursing Index (BNI), CINAHL, MEDLINE, and EMBASE. Methods – This was a comparative study using title searches to compare BNI (British Nursing Index), CINAHL, MEDLINE and EMBASE. According to the authors, this is the first study to compare BNI with other databases. BNI is a database produced by British libraries that indexes the nursing and midwifery literature. It covers over 240 British journals, and includes references to articles from health sciences journals that are relevant to nurses and midwives (British Nursing Index, n.d.). The researchers performed keyword searches in the title field of the four databases for the dissertation topics of nine nursing and midwifery students enrolled in undergraduate dissertation modules. The lists of titles of journal articles on their topics were given to the students, who were asked to judge the relevancy of the citations. The title searches were evaluated in each of the databases using the following criteria: • precision (the number of relevant results obtained in the database for a search topic, divided by the total number of results obtained in the database search); • recall (the number of relevant results obtained in the database for a search topic, divided by the total number of relevant results obtained on that topic from all four database searches); • novelty (the number of relevant results that were unique in the database search, calculated as a percentage of the total number of relevant results found in the database); • originality (the number of unique relevant results obtained in the
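
    The measures above lend themselves to simple set arithmetic. The sketch below computes precision, recall and novelty as defined in the abstract; originality is omitted because its definition is cut off in the record, and the example numbers are invented for illustration.

        def database_metrics(results, relevant, all_relevant, other_db_relevant):
            """Per-database measures as defined in the abstract (originality omitted
            because its definition is truncated in the record).

            results           -- set of hits returned by this database for a topic
            relevant          -- subset of `results` judged relevant by the student
            all_relevant      -- union of the relevant hits from all four databases
            other_db_relevant -- union of the relevant hits found by the other databases
            """
            precision = len(relevant) / len(results) if results else 0.0
            recall = len(relevant) / len(all_relevant) if all_relevant else 0.0
            unique = relevant - other_db_relevant            # relevant hits only this database found
            novelty = len(unique) / len(relevant) if relevant else 0.0
            return precision, recall, novelty

        # Invented numbers for one topic: 40 hits, 25 relevant, 60 relevant overall
        p, r, n = database_metrics(results=set(range(40)), relevant=set(range(25)),
                                   all_relevant=set(range(60)),
                                   other_db_relevant=set(range(10, 60)))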

  13. Children's Search Engines from an Information Search Process Perspective.

    Science.gov (United States)

    Broch, Elana

    2000-01-01

    Describes cognitive and affective characteristics of children and teenagers that may affect their Web searching behavior. Reviews literature on children's searching in online public access catalogs (OPACs) and using digital libraries. Profiles two Web search engines. Discusses some of the difficulties children have searching the Web, in the…

  14. Assessment of Library Instruction and Library Literacy Skills of First ...

    African Journals Online (AJOL)

    This study investigated the effectiveness and impact of library instruction (GST 111 – the use of library) course on library literacy skills of first year undergraduate students. The study adopted the descriptive survey research method and questionnaire was used as the research instrument. First year undergraduate students of ...

  15. Evaluating Public Libraries Using Standard Scores: The Library Quotient.

    Science.gov (United States)

    O'Connor, Daniel O.

    1982-01-01

    Describes a method for assessing the performance of public libraries using a standardized scoring system and provides an analysis of public library data from New Jersey as an example. Library standards and the derivation of measurement ratios are also discussed. A 33-item bibliography and three data tables are included. (JL)
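
    The abstract does not give O'Connor's exact Library Quotient formula, but the standard-score idea can be sketched as follows: convert each raw measure to a z-score and, as an assumed convention, rescale it to a mean of 100. The figures and the rescaling below are illustrative only.

        import statistics

        def standard_scores(values, scale_mean=100.0, scale_sd=15.0):
            """Convert raw per-library measures (e.g. circulation per capita) into
            standard scores; the rescaling to mean 100 / SD 15 is an assumption made
            for illustration, not the published Library Quotient formula."""
            mu = statistics.mean(values)
            sd = statistics.stdev(values)
            return [scale_mean + scale_sd * (v - mu) / sd for v in values]

        # Hypothetical circulation-per-capita figures for five libraries
        print(standard_scores([3.1, 4.8, 2.2, 6.0, 4.1]))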

  16. E-Library and Traditional Library Resources Usage: A Comparative ...

    African Journals Online (AJOL)

    A comparative usage of e-library and traditional resources in Nigerian libraries was examined in this study. A descriptive survey method was adopted and a purposive sampling technique was used to select the sample and the process produced 125 academic, research, and public libraries in Nigeria. A total of 116 cases ...

  17. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    Science.gov (United States)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Computational results also show that the proposed method outperforms other existing CG methods.
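
    For orientation, a classical Polak-Ribiere-Polyak conjugate gradient loop with a Wolfe line search can be sketched as below, using SciPy's line_search (which enforces Wolfe conditions). The paper's new family of PRP formulas is not reproduced; the beta update shown is the standard non-negative PRP rule.

        import numpy as np
        from scipy.optimize import line_search, rosen, rosen_der   # Wolfe line search + test problem

        def prp_cg(f, grad, x0, tol=1e-6, max_iter=500):
            """Classical Polak-Ribiere-Polyak CG with a Wolfe line search (PRP+ variant)."""
            x = np.asarray(x0, float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
                if alpha is None:                  # line search failed: restart with a small step
                    d, alpha = -g, 1e-3
                x_new = x + alpha * d
                g_new = grad(x_new)
                beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # non-negative PRP beta
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

        # Example: minimise the Rosenbrock function from a standard starting point
        print(prp_cg(rosen, rosen_der, np.array([-1.2, 1.0])))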

  18. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    International Nuclear Information System (INIS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-01-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. - Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: ► Integration of first-principles methods and database mining. ► Minor structural families with desirable functional properties. ► Survey of polar entries in the Inorganic Crystal Structural Database.

  19. A cost-effective method for Illumina small RNA-Seq library preparation using T4 RNA ligase 1 adenylated adapters

    Directory of Open Access Journals (Sweden)

    Chen Yun-Ru

    2012-09-01

    Full Text Available Abstract Background Deep sequencing is a powerful tool for novel small RNA discovery. Illumina small RNA sequencing library preparation requires a pre-adenylated 3’ end adapter containing a 5’,5’-adenyl pyrophosphoryl moiety. In the absence of ATP, this adapter can be ligated to the 3’ hydroxyl group of small RNA, while RNA self-ligation and concatenation are repressed. Pre-adenylated adapters are one of the most essential and costly components required for library preparation, and few are commercially available. Results We demonstrate that DNA oligo with 5’ phosphate and 3’ amine groups can be enzymatically adenylated by T4 RNA ligase 1 to generate customized pre-adenylated adapters. We have constructed and sequenced a small RNA library for tomato (Solanum lycopersicum) using the T4 RNA ligase 1 adenylated adapter. Conclusion We provide an efficient and low-cost method for small RNA sequencing library preparation, which takes two days to complete and costs around $20 per library. This protocol has been tested in several plant species for small RNA sequencing including sweet potato, pepper, watermelon, and cowpea, and could be readily applied to any RNA samples.

  20. Evolutionary Policy Transfer and Search Methods for Boosting Behavior Quality: RoboCup Keep-Away Case Study

    Directory of Open Access Journals (Sweden)

    Geoff Nitschke

    2017-11-01

    Full Text Available This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup) keep-away tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.

  1. Demonstrating the financial impact of clinical libraries: a systematic review.

    Science.gov (United States)

    Madden, Anne; Collins, Pamela; McGowan, Sondhaya; Stevenson, Paul; Castelli, David; Hyde, Loree; DeSanto, Kristen; O'Brien, Nancy; Purdon, Michelle; Delgado, Diana

    2016-09-01

    The purpose of this review is to evaluate the tools used to measure the financial value of libraries in a clinical setting. Searches were carried out on ten databases for the years 2003-2013, with a final search before completion to identify any recent papers. Eleven papers met the final inclusion criteria. There was no evidence of a single 'best practice', and many metrics used to measure financial impact of clinical libraries were developed on an ad hoc basis locally. The most common measures of financial impact were value of time saved, value of resource collection against cost of alternative sources, cost avoidance and revenue generated through assistance on grant submissions. Few papers provided an insight into the longer term impact on the library service resulting from submitting return on investment (ROI) or other financial impact statements. There are limited examples of metrics which clinical libraries can use to measure explicit financial impact. The methods highlighted in this literature review are generally implicit in the measures used and lack robustness. There is a need for future research to develop standardised, validated tools that clinical libraries can use to demonstrate their financial impact. © 2016 Health Libraries Group.

  2. Investigations on search methods for speech recognition using weighted finite state transducers

    OpenAIRE

    Rybach, David

    2014-01-01

    The search problem in the statistical approach to speech recognition is to find the most likely word sequence for an observed speech signal using a combination of knowledge sources, i.e. the language model, the pronunciation model, and the acoustic models of phones. The resulting search space is enormous. Therefore, an efficient search strategy is required to compute the result with a feasible amount of time and memory. The structured statistical models as well as their combination, the searc...

  3. Elliptical tiling method to generate a 2-dimensional set of templates for gravitational wave search

    International Nuclear Information System (INIS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-01-01

    Searching for a signal depending on unknown parameters in a noisy background with matched filtering techniques always requires an analysis of the data with several templates in parallel in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method that allows one to cover the corresponding physical parameter space by a set of ellipses, each of them being associated with a given template. After the description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis, for the search of damped sine signals. Such waveforms are expected to be produced during the deexcitation phase of black holes - the so-called 'ringdown' signals - and are also encountered in some numerically computed supernova signals. First, the number of templates N computed by the method is similar to its analytical estimation, despite the overlaps between neighbor templates and the border effects. Moreover, N is small enough to test for the first time the performances of the set of templates for different choices of the minimal match MM, the parameter used to define the maximal allowed loss of signal-to-noise ratio (SNR) due to the mismatch between real signals and templates. The main result of this analysis is that the fraction of SNR recovered is on average much higher than MM, which dramatically decreases the mean percentage of false dismissals. Indeed, it goes well below its estimated value of 1-MM^3 used as input of the algorithm. Thus, as this feature should be common to any tiling algorithm, it seems possible to reduce the constraint on the value of MM - and indeed the number of templates and the computing power - without losing as many events as expected on average. This should be of great

  4. Intelligent Search Method Based ACO Techniques for a Multistage Decision Problem EDP/LFP

    Directory of Open Access Journals (Sweden)

    Mostefa RAHLI

    2006-07-01

    Full Text Available The implementation of a numerical optimization library for electrical supply networks is at the centre of current research orientations; our project is accordingly centred on the development of the platform NMSS. It is a software environment intended to save considerable effort in load calculations, curve smoothing, loss calculation and economic planning of generated power [23]. Operational research [17] on the one hand and industrial practice on the other show that simulation tools and processes have reached a very appreciable level of reliability and mathematical confidence [4, 5, 14]. It is on the basis of this observation that many processes place confidence in simulation results. The handicap of this approach is that it bases its judgments on simplified assumptions and constraints whose influence is deliberately neglected rather than added to the cost to be paid [14]. By combining simulation methods with artificial intelligence techniques, the assembled set of numerical methods acquires a reliability that leaves little room for doubt. The NMSS software environment [23] brings together simulation techniques and electric network calculation via a graphical interface; the same software integrates an AI capability via an expert system module. Our problem is a multistage case in which the stages are completely dependent and cannot be performed separately. For a multistage problem [21, 22], the results obtained from a realistic (large-size) problem calculation raise the following question: can the choice of a set of numerical methods make the calculation of a complete problem, using more than two treatment levels, yield the smallest possible total error? It is well known that, from an algorithmic point of view, each treatment can be characterized by a function called its mathematical complexity. This complexity is in fact a cost (a weight overloading

  5. Application of a heuristic search method for generation of fuel reload configurations

    International Nuclear Information System (INIS)

    Galperin, A.; Nissan, E.

    1988-01-01

    A computerized heuristic search method for the generation and optimization of fuel reload configurations is proposed and investigated. The heuristic knowledge is expressed modularly in the form of "IF-THEN" production rules. The method was implemented in a program coded in the Franz LISP programming language and executed under the UNIX operating system. A test problem was formulated, based on a typical light water reactor reload problem with a few simplifications assumed, in order to allow formulation of the reload strategy into a relatively small number of rules. A computer run of the problem was performed with a VAX-780 machine. A set of 312 solutions was generated in approximately 20 min of execution time. Testing of a few arbitrarily chosen configurations demonstrated reasonably good performance for the computer-generated solutions. A computerized generator of reload configurations may be used for the fast generation or modification of reload patterns and as a tool for the formulation, tuning, and testing of the heuristic knowledge rules used by an "expert" fuel manager
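
    The flavour of encoding reload knowledge as IF-THEN rules can be conveyed with a toy generate-and-test sketch; the two rules, the five-position core and the exhaustive enumeration are invented for illustration (the actual system was written in Franz LISP and searched heuristically rather than enumerating all permutations).

        from itertools import permutations

        # Hypothetical rules in IF-THEN spirit: each rule accepts or rejects a candidate
        # loading pattern (a tuple assigning fuel batches, labelled by burnup, to positions).
        RULES = [
            ("fresh fuel not in the centre position",
             lambda pat: pat[0] != "fresh"),
            ("no two fresh assemblies adjacent",
             lambda pat: all(not (a == b == "fresh") for a, b in zip(pat, pat[1:]))),
        ]

        def generate_configurations(batches):
            """Enumerate loading patterns and keep those satisfying every rule."""
            accepted = []
            for pat in set(permutations(batches)):
                if all(test(pat) for _, test in RULES):
                    accepted.append(pat)
            return accepted

        # Toy 5-position core with two fresh and three partially burnt batches
        print(len(generate_configurations(["fresh", "fresh", "once", "once", "twice"])))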

  6. Gravity Search Algorithm hybridized Recursive Least Square method for power system harmonic estimation

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Singh

    2017-06-01

    Full Text Available This paper presents a new hybrid method based on the Gravity Search Algorithm (GSA) and Recursive Least Square (RLS), known as GSA-RLS, to solve harmonic estimation problems for time-varying power signals in the presence of different noises. GSA is based on Newton’s law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other using Newton’s laws of gravity and motion. The basic GSA strategy is combined sequentially with the RLS algorithm in an adaptive way to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are carried out using real-time data obtained from a heavy paper industry. The performance of the proposed algorithm is compared with other recently reported algorithms, such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Squares (LS), and BFO hybridized with RLS, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence and computational time.
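
    The RLS half of the hybrid can be sketched independently: with the fundamental frequency assumed known, each harmonic contributes a sine and a cosine regressor, and the standard RLS recursion tracks their weights. The GSA part, which in the paper tunes parameters that enter nonlinearly, is not shown; all names and settings below are illustrative.

        import numpy as np

        def rls_harmonics(t, y, f0, n_harmonics=3, lam=0.99):
            """Track harmonic amplitudes and phases with recursive least squares,
            assuming the fundamental frequency f0 is known."""
            w = 2 * np.pi * f0
            n = 2 * n_harmonics
            theta = np.zeros(n)                    # [a1, b1, a2, b2, ...]
            P = np.eye(n) * 1e4                    # large initial covariance
            for tk, yk in zip(t, y):
                phi = np.empty(n)
                for k in range(1, n_harmonics + 1):
                    phi[2*k-2] = np.sin(k * w * tk)
                    phi[2*k-1] = np.cos(k * w * tk)
                K = P @ phi / (lam + phi @ P @ phi)
                theta += K * (yk - phi @ theta)
                P = (P - np.outer(K, phi) @ P) / lam
            amps = np.hypot(theta[0::2], theta[1::2])      # amplitude of each harmonic
            phases = np.arctan2(theta[1::2], theta[0::2])  # phase of each harmonic
            return amps, phases

        # Example: 50 Hz signal with a small third harmonic, sampled at 5 kHz
        t = np.arange(0, 0.2, 1 / 5000)
        y = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(3 * 2 * np.pi * 50 * t + 0.5)
        print(rls_harmonics(t, y, f0=50.0)[0])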

  7. Search for the top quark at D0 using multivariate methods

    International Nuclear Information System (INIS)

    Bhat, P.C.

    1995-07-01

    We report on the search for the top quark in p anti-p collisions at the Fermilab Tevatron (√s = 1.8 TeV) in the di-lepton and lepton+jets channels using multivariate methods. An H-matrix analysis of the eμ data corresponding to an integrated luminosity of 13.5±1.6 pb^-1 yields one event whose likelihood to be a top quark event, assuming m_top = 180 GeV/c^2, is ten times more than that of WW and eighteen times more than that of Z → ττ. A neural network analysis of the e+jets channel using a data sample corresponding to an integrated luminosity of 47.9±5.7 pb^-1 shows an excess of events in the signal region and yields a cross-section for t anti-t production of 6.7±2.3 (stat.) pb, assuming a top mass of 200 GeV/c^2. An analysis of the e+jets data using the probability density estimation method yields a cross-section that is consistent with the above result

  8. Validation of a search strategy to identify nutrition trials in PubMed using the relative recall method.

    Science.gov (United States)

    Durão, Solange; Kredo, Tamara; Volmink, Jimmy

    2015-06-01

    To develop, assess, and maximize the sensitivity of a search strategy to identify diet and nutrition trials in PubMed using relative recall. We developed a search strategy to identify diet and nutrition trials in PubMed. We then constructed a gold standard reference set to validate the identified trials using the relative recall method. Relative recall was calculated by dividing the number of references from the gold standard our search strategy identified by the total number of references in the gold standard. Our gold standard comprised 298 trials, derived from 16 included systematic reviews. The initial search strategy identified 242 of 298 references, with a relative recall of 81.2% [95% confidence interval (CI): 76.3%, 85.5%]. We analyzed titles and abstracts of the 56 missed references for possible additional terms. We then modified the search strategy accordingly. The relative recall of the final search strategy was 88.6% (95% CI: 84.4%, 91.9%). We developed a search strategy to identify diet and nutrition trials in PubMed with a high relative recall (sensitivity). This could be useful for establishing a nutrition trials register to support the conduct of future research, including systematic reviews. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
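
    Relative recall itself is a one-line computation, sketched below with the figures quoted in the abstract (242 of 298 gold-standard trials retrieved); the helper function and variable names are illustrative.

        def relative_recall(retrieved_ids, gold_standard_ids):
            """Relative recall = gold-standard records retrieved / all gold-standard records."""
            hits = set(retrieved_ids) & set(gold_standard_ids)
            return len(hits) / len(gold_standard_ids)

        # Figures quoted in the abstract: 242 of the 298 gold-standard trials were found,
        # i.e. a relative recall of roughly 0.812 (81.2%) for the initial strategy.
        print(242 / 298)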

  9. Mainstreaming the New Library.

    Science.gov (United States)

    Keeler, Elizabeth

    1982-01-01

    This discussion of methods of integrating the corporate library into the mainstream of affairs highlights three major elements of the process: marketing, production, and advertising. Professionalism and the information seeking behavior of clients are noted. Five references are provided. (EJS)

  10. Enhanced Catalogue Records Positively Impact Circulation but Are Not Used to Their Potential in Patron Searching. A Review of: Tosaka, Y., & Weng, C. (2011). Reexamining content-enriched access: Its effect on usage and discovery. College & Research Libraries, 72(5), 412-427.

    Directory of Open Access Journals (Sweden)

    Cari Merkley

    2012-09-01

    Full Text Available Objective – To determine how content-enriched catalogue records impact the circulation rates of print resources in four subject areas, and to investigate how this additional metadata influences OPAC searching and item retrieval. Design – Analysis of circulation data, bibliographic records, and OPAC search logs. Setting – A library at a four-year undergraduate residential college in the North-eastern United States. Subjects – Bibliographic records for 88,538 titles; data from 7,782 circulation transactions; and 130 OPAC search strings and related circulation data. Methods – In the first part of the study, bibliographic records for print items published since 1990 were extracted from the library’s integrated library system (ILS) in the following Library of Congress (LC) classes: D, E, F, H, J, L, P, Q, R, S, and T. It is assumed that electronic books were excluded from this study because their usage is not tracked in the ILS. These LC classes were chosen to correspond to the subject areas targeted by the researchers for comparison – “history, social sciences, language and literature, and science and technology” (p. 416). The data file included the publication date of the title, as well as values for the MARC fields identified by the researchers as containing content-enriched data. These fields were MARC 505 (an item’s table of contents or list of works included), MARC 520 (summaries or annotations), and MARC 856 (URL to electronic location of related material or electronic copy) (p. 416; Library of Congress Network Development and MARC Standards Office, 2003, 2008a, 2008b). The authors analyzed records for 88,538 titles and determined the total number of records containing each of the MARC fields either singly or in combination. Data relating to circulation transactions for items located in these LC classes from January to May 2009 was also identified. Like the bibliographic records, circulation data was pulled for print items only. The researchers identified 7

  11. Public libraries, as an infrastructure for a sustainable public sphere: A systematic review of research: A preliminary paper

    DEFF Research Database (Denmark)

    Audunson, Ragnar; Aabø, Svandhild; Blomgren, Roger

    This paper is based on a systematic literature search aiming at identifying research on the role of libraries as institutions underpinning a sustainable public sphere in a digital age. The major research questions are: 1. Is systematic literature search a fruitful method when it comes to a social... Some of the major findings are: research on libraries as public sphere institutions covers a wide range of topics, the dominant one being freedom of access to information, often related to social inclusion, empowerment and justice. Contributions are often normative and non-empirical, but the proportion of empirically based research is increasing. This paper focuses on contributions related to public libraries.

  12. ESPRIT: an automated, library-based method for mapping and soluble expression of protein domains from challenging targets.

    Science.gov (United States)

    Yumerefendi, Hayretin; Tarendeau, Franck; Mas, Philippe J; Hart, Darren J

    2010-10-01

    Expression of sufficient quantities of soluble protein for structural biology and other applications is often a very difficult task, especially when multimilligram quantities are required. In order to improve yield, solubility or crystallisability of a protein, it is common to subclone shorter genetic constructs corresponding to single- or multi-domain fragments. However, it is not always clear where domain boundaries are located, especially when working on novel targets with little or no sequence similarity to other proteins. Several methods have been described employing aspects of directed evolution to the recombinant expression of challenging proteins. These combine the construction of a random library of genetic constructs of a target with a screening or selection process to identify solubly expressing protein fragments. Here we review several datasets from the ESPRIT (Expression of Soluble Proteins by Random Incremental Truncation) technology to provide a view on its capabilities. Firstly, we demonstrate how it functions using the well-characterised NF-kappaB p50 transcription factor as a model system. Secondly, application of ESPRIT to the challenging PB2 subunit of influenza polymerase has led to several novel atomic resolution structures; here we present an overview of the screening phase of that project. Thirdly, analysis of the human kinase TBK1 is presented to show how the ESPRIT technology rapidly addresses the compatibility of challenging targets with the Escherichia coli expression system.

  13. The method to set up file-6 in neutron data library of light nuclei below 20 MeV

    International Nuclear Information System (INIS)

    Zhang Jingshang; Han Yinlu

    2001-01-01

    So far there is no file-6 (double differential cross section data, DDX) for light nuclei in the main evaluated neutron nuclear data libraries in the world. Therefore, a proper description of the double differential cross sections of all kinds of outgoing particles from neutron-induced light-nucleus reactions below 20 MeV is needed. The motivation for this work is to introduce a way to set up file-6 in the neutron data library

  14. INTERACTION OF SEARCH CAPABILITIES OF ELECTRONIC AND TRADITIONAL (CARD) CATALOGS

    Directory of Open Access Journals (Sweden)

    Л. В. Головко

    2017-10-01

    Full Text Available Purpose. Interaction of the search capabilities of electronic and traditional (card) catalogs. Subject: the search capabilities of electronic and traditional (card) catalogs and their interaction. Goal: creating an efficient search system for library information services, and updating and improving the information retrieval system. To reach this goal, the following tasks were set: – to determine the possibility of parallel functioning of electronic and traditional card catalogs, and to reveal the interaction of their search capabilities by conducting a survey via a questionnaire titled «Interaction of search capabilities of electronic and traditional (card) catalogs»; – to find out which search systems are preferred by users; – to estimate the actual condition of the search capabilities of electronic and traditional (card) catalogs in the library. Methodology. At various stages of the survey the following methods were used: analysis and synthesis, comparison, generalization, primary-source search, and the sociological method (survey). These methods allowed determining, processing and analyzing the whole complex of available sources, which became an important factor in the objectivity of the research. Findings. The survey results allowed us to analyze the dynamics of changes and the new needs of readers, and to make a decision regarding the quality improvement of information search services. Practical value. Creating a theoretical foundation for the implementation of the set tasks is the practical value of the findings. The conclusions and results of the research can be used in the information search activities of university students, postgraduates and professors. Certain results of the research are used and implemented in the practice of the library of Kryvyi Rih State Pedagogical University, namely at workshops on the basics of information culture (using the bibliographic reference unit, and information search by keywords, authors and titles via the electronic catalogue). Guides for users were created. Duty

  15. Novel citation-based search method for scientific literature: application to meta-analyses

    NARCIS (Netherlands)

    Janssens, A.C.J.W.; Gwinn, M.

    2015-01-01

    Background: Finding eligible studies for meta-analysis and systematic reviews relies on keyword-based searching as the gold standard, despite its inefficiency. Searching based on direct citations is not sufficiently comprehensive. We propose a novel strategy that ranks articles on their degree of

  16. About the Library - Betty Petersen Memorial Library

    Science.gov (United States)

    Betty Petersen Memorial Library is a branch library of the NOAA Central Library. The library serves the NOAA Science Center in Camp Springs, Maryland. History and mission: the library began as a reading room in the NOAA Science Center; Science Center staff advise the library on all aspects of the library program.

  17. Female Public Library Patrons Value the Library for Services, Programs, and Technology. A Review of: Fidishun, Dolores. “Women and the Public Library: Using Technology, Using the Library.” Library Trends 56.2 (2007): 328-43.

    Directory of Open Access Journals (Sweden)

    Virginia Wilson

    2009-03-01

    Full Text Available Objective – This study attempts to give insight into why and how women use the public library and information technology, and how they learned to use the technology. Design – Qualitative survey. Setting – The research took place at the Chester County Library in Exton, Pennsylvania, USA. Subjects – One hundred and eighty-four female library patrons 18 years and older. Methods – An anonymous qualitative survey was handed out to all patrons at the Chester County Library 18 years of age and older who came into the library on four separate days and times. Times were chosen to obtain a good representation of library patrons, and included daytime, evening, and weekend hours. The survey consisted of questions about library use, information sought, information seeking behaviour, technology used, and how the respondents learned to use the technology. The surveys were collated and spreadsheets were created that reported answers to yes/no and other data questions. Word documents facilitated the listing of more qualitative answers. The data were analyzed using a thematic content analysis to find themes and patterns that emerged to create grounded theory. In thematic content analysis, “the coding scheme is based on categories designed to capture the dominant themes in a text” (Franzosi 184). There is no universal coding scheme, and this method requires extensive pre-testing of the scheme (Franzosi 184). Grounded theory “uses a prescribed set of procedures for analyzing data and constructing a theoretical model” from the data (Leedy and Ormrod 154). Main Results – The survey asked questions about library use, reasons for library use, using technology, finding information, and learning to use online resources. A total of 465 surveys were distributed and 329 were returned. From the surveys returned, 184 were from female patrons, 127 from male patrons, and 18 did not report gender. The data for this article are primarily taken from the 184 female

  18. Accessing Digital Libraries: A Study of ARL Members' Digital Projects

    Science.gov (United States)

    Kahl, Chad M.; Williams, Sarah C.

    2006-01-01

    To ensure efficient access to and integrated searching capabilities for their institution's new digital library projects, the authors studied Web sites of the Association of Research Libraries' (ARL) 111 academic, English-language libraries. Data were gathered on 1117 digital projects, noting library Web site and project access, metadata, and…

  19. A Tabu Search WSN Deployment Method for Monitoring Geographically Irregular Distributed Events

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available In this paper, we address the Wireless Sensor Network (WSN) deployment issue. We assume that the observed area is characterized by the geographical irregularity of the sensed events. Formally, we consider that each point in the deployment area is associated with a differentiated detection probability threshold, which must be satisfied by our deployment method. Our resulting WSN deployment problem is formulated as a Multi-Objective Optimization problem, which seeks to reduce the gap between the generated event detection probabilities and the required thresholds while minimizing the number of deployed sensors. To overcome the computational complexity of an exact resolution, we propose an original pseudo-random approach based on the Tabu Search heuristic. Simulations show that our proposal achieves better performance than several other approaches proposed in the literature. In the last part of this paper, we generalize the deployment problem by including the wireless communication network connectivity constraint. Thus, we extend our proposal to ensure that the resulting WSN topology is connected even if the sensor communication range takes small values.

  20. A Tabu Search WSN Deployment Method for Monitoring Geographically Irregular Distributed Events.

    Science.gov (United States)

    Aitsaadi, Nadjib; Achir, Nadjib; Boussetta, Khaled; Pujolle, Guy

    2009-01-01

    In this paper, we address the Wireless Sensor Network (WSN) deployment issue. We assume that the observed area is characterized by the geographical irregularity of the sensed events. Formally, we consider that each point in the deployment area is associated with a differentiated detection probability threshold, which must be satisfied by our deployment method. Our resulting WSN deployment problem is formulated as a Multi-Objective Optimization problem, which seeks to reduce the gap between the generated event detection probabilities and the required thresholds while minimizing the number of deployed sensors. To overcome the computational complexity of an exact resolution, we propose an original pseudo-random approach based on the Tabu Search heuristic. Simulations show that our proposal achieves better performance than several other approaches proposed in the literature. In the last part of this paper, we generalize the deployment problem by including the wireless communication network connectivity constraint. Thus, we extend our proposal to ensure that the resulting WSN topology is connected even if the sensor communication range takes small values.
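
    A minimal single-objective tabu search over deploy/no-deploy decisions is sketched below to illustrate the heuristic used in the two records above; the real method is multi-objective and adds a connectivity constraint, so the cost function here is assumed to fold coverage shortfall, sensor count and any penalties into one value. The toy cost and all parameter values are invented.

        import random

        def tabu_search(cost, n_sites, iters=500, tabu_tenure=15, seed=0):
            """Minimal single-objective tabu search over binary deploy/no-deploy vectors."""
            rng = random.Random(seed)
            current = [rng.random() < 0.5 for _ in range(n_sites)]
            best, best_cost = list(current), cost(current)
            tabu = {}                                   # site index -> iteration until which it is tabu
            for it in range(iters):
                candidates = []
                for i in range(n_sites):                # neighbourhood: flip one site
                    neighbour = list(current)
                    neighbour[i] = not neighbour[i]
                    c = cost(neighbour)
                    # aspiration criterion: a tabu move is allowed if it beats the best so far
                    if tabu.get(i, -1) < it or c < best_cost:
                        candidates.append((c, i, neighbour))
                if not candidates:
                    break
                c, i, current = min(candidates)
                tabu[i] = it + tabu_tenure
                if c < best_cost:
                    best, best_cost = list(current), c
            return best, best_cost

        # Toy cost: prefer deployments that use about 10 of the 30 candidate sites
        print(tabu_search(lambda x: abs(sum(x) - 10), n_sites=30)[1])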

  1. Searching for beyond the Standard Model physics using direct and indirect methods at LHCb

    CERN Document Server

    Hall, Samuel C P; Golutvin, Andrey

    It is known that the Standard Model of particle physics is incomplete in its description of nature at a fundamental level. For example, the Standard Model can neither incorporate dark matter nor explain the matter dominated nature of the Universe. This thesis presents three analyses undertaken using data collected by the LHCb detector. Each analysis searches for indications of physics beyond the Standard Model in different decays of B mesons, using different techniques. Notably, two analyses look for indications of new physics using indirect methods, and one uses a direct approach. The first analysis shows evidence for the rare decay $B^{+} \rightarrow D^{+}_{s}\phi$ with greater than 3 $\sigma$ significance; this also constitutes the first evidence for a fully hadronic annihilation-type decay of a $B^{+}$ meson. A measurement of the branching fraction of the decay $B^{+} \rightarrow D^{+}_{s}\phi$ is seen to be higher than, but still compatible with, Standard Model predictions. The CP-asymmetry of the decay is also ...

  2. Minimization of municipal solid waste transportation route in West Jakarta using Tabu Search method

    Science.gov (United States)

    Chaerul, M.; Mulananda, A. M.

    2018-04-01

    Indonesia still adopts the collect-haul-dispose concept for municipal solid waste handling, which leads to queues of waste trucks at the final disposal site (TPA). This study aims to minimize the total distance of the waste transportation system by applying a transshipment model. In this case, the analogue of a transshipment point is a compaction facility (SPA). Small-capacity trucks collect the waste from temporary collection points (TPS) and deliver it to the compaction facility, which is located near the waste generators. After compaction, the waste is transported by large-capacity trucks to the final disposal site, which is located far from the city. The waste transportation problem can be solved as a Vehicle Routing Problem (VRP). In this study, the shortest routes from the truck pool to TPS, from TPS to SPA, and from SPA to TPA were determined using a meta-heuristic method, namely two-phase Tabu Search. The TPS studied are of the container type, with 43 units throughout West Jakarta City served by 38 Armroll trucks with a capacity of 10 m3 each. The result determines the assignment of each truck from the pool to the selected TPS, SPA and TPA, with a minimum total distance of 2,675.3 km. Minimizing the distance also minimizes the total waste transportation cost to be borne by the government.

  3. Library Use

    DEFF Research Database (Denmark)

    Konzack, Lars

    2012-01-01

    A seminar paper about a survey of role-playing games in public libraries combined with three cases and a presentation of a model.

  4. Pathway Detection from Protein Interaction Networks and Gene Expression Data Using Color-Coding Methods and A* Search Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2012-01-01

    Full Text Available With the wide availability of protein interaction networks and supporting microarray data, identifying linear paths with biological significance in the search for a potential pathway is a challenging issue. We proposed a color-coding method based on the characteristics of biological network topology and applied heuristic search to speed up the color-coding method. In the experiments, we tested our methods on two datasets: yeast and human prostate cancer networks with gene expression data. Comparisons of our method with other existing methods on known yeast MAPK pathways in terms of precision and recall show that we can find the maximum number of proteins and perform comparably well. Moreover, our method is more efficient than previous ones and detects paths of length 10 within 40 seconds on an Intel 1.73 GHz CPU with 1 GB of main memory running the Windows operating system.
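
    The core color-coding step can be sketched as follows: vertices are randomly coloured with k colours and a dynamic program looks for a "colourful" path that uses each colour once, repeating over several colourings because any fixed k-vertex path becomes colourful with probability k!/k^k. The A* speed-up and the biological scoring used in the paper are not reproduced; the toy graph is invented.

        import random

        def colorful_path_exists(adj, k, trials=50, seed=0):
            """Colour-coding test for a simple path on k vertices (Alon-Yuster-Zwick style).
            `adj` maps each vertex to the set of its neighbours (undirected graph assumed)."""
            rng = random.Random(seed)
            vertices = list(adj)
            for _ in range(trials):                      # each trial uses a fresh random colouring
                colour = {v: rng.randrange(k) for v in vertices}
                # dp[v] = colour sets achievable by a colourful path ending at v
                dp = {v: {frozenset([colour[v]])} for v in vertices}
                for _ in range(k - 1):                   # grow paths one vertex at a time
                    new = {v: set() for v in vertices}
                    for v in vertices:
                        for u in adj[v]:
                            for used in dp[u]:
                                if colour[v] not in used:
                                    new[v].add(used | {colour[v]})
                    dp = new
                if any(dp[v] for v in vertices):
                    return True
            return False

        # Toy path graph on 5 vertices: a path on 4 vertices clearly exists
        adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
        print(colorful_path_exists(adj, k=4))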

  5. Application of pattern search method to power system security constrained economic dispatch with non-smooth cost function

    International Nuclear Information System (INIS)

    Al-Othman, A.K.; El-Naggar, K.M.

    2008-01-01

    Direct search (DS) methods are evolutionary algorithms used to solve optimization problems. DS methods do not require any information about the gradient of the objective function at hand while searching for an optimum solution. One such method is the Pattern Search (PS) algorithm. This paper presents a new approach based on a constrained pattern search algorithm to solve the security constrained power system economic dispatch (SCED) problem with a non-smooth cost function. The operation of power systems demands a high degree of security to keep the system operating satisfactorily when subjected to disturbances, while at the same time paying attention to economic aspects. A pattern recognition technique is first used to assess dynamic security. Linear classifiers that determine the stability of the electric power system are presented and added to the other system stability and operational constraints. The problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. The pattern search method is then applied to solve the constrained optimization formulation. In particular, the method is tested using three different test systems. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and proves that pattern search (PS) is very applicable for solving the security constrained power system economic dispatch problem (SCED). In addition, valve-point effect loading and total system losses are considered to further investigate the potential of the PS technique. Based on the results, it can be concluded that the PS has demonstrated ability in handling the highly nonlinear, discontinuous, non-smooth cost function of the SCED. (author)
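
    A bare-bones compass-style pattern search is sketched below: poll plus and minus steps along each coordinate, accept any improvement, and shrink the mesh when no poll improves. The SCED constraints would have to be folded into the cost as penalties; the toy valve-point-like cost is invented.

        import numpy as np

        def pattern_search(cost, x0, step=1.0, shrink=0.5, tol=1e-6, max_evals=10000):
            """Derivative-free compass/pattern search: poll +/- each coordinate direction,
            accept the first improvement, otherwise shrink the mesh."""
            x = np.asarray(x0, float)
            fx = cost(x)
            evals = 0
            while step > tol and evals < max_evals:
                improved = False
                for d in range(x.size):
                    for sign in (+1.0, -1.0):
                        trial = x.copy()
                        trial[d] += sign * step
                        ft = cost(trial)
                        evals += 1
                        if ft < fx:
                            x, fx, improved = trial, ft, True
                            break
                    if improved:
                        break
                if not improved:
                    step *= shrink                  # no polling direction improved: refine the mesh
            return x, fx

        # Toy non-smooth cost (a valve-point-like ripple added to a quadratic bowl)
        toy = lambda p: float(np.sum((p - 3) ** 2) + 0.5 * abs(np.sin(5 * np.sum(p))))
        print(pattern_search(toy, np.zeros(2))[1])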

  6. Searching for Innovations and Methods of Using the Cultural Heritage on the Example of Upper Silesia

    Science.gov (United States)

    Wagner, Tomasz

    2017-10-01

    The basic subject of this paper is the historical and cultural heritage of parts of Upper Silesia, bound by a common history and facing similar problems today. The paper presents selected historical phenomena that have influenced the contemporary space of this area, as well as contemporary issues of heritage protection in Upper Silesia. Since 1989, the interpretation of Silesian architecture has been strongly coloured by ideological and national ideas. The last 25 years constitute a further stage of development, marked by rapid transformation of the space caused by successive economic transformations. In this period we can observe landscape transformations, demolition of buildings and historical structures, loss of regional features, spontaneous adaptation of buildings, and many different forms of protection and use of cultural resources. Upheavals linked to changes of state borders and to political, economic and ethnic transformations mean that the former Upper Silesian border area concentrates phenomena found in other, similar European areas where cultures and traditions meet. The latest period in the history of Upper Silesia gives us occasion to reflect on the character of changes in the architecture and city planning of the area and to appraise the effectiveness of practices connected with cultural heritage preservation. The phenomena of the last decades include: decline of regional features; elimination of buildings that were a key part of the regional cultural heritage; deformation of historically shaped forms; and attempts to use those elements of cultural heritage that are widely recognized as cultural values. In this situation, it is important to seek creative solutions that will neutralize adverse processes resulting from poor law and practice. The most important task in contemporary space is the search for innovative fields and methods for the use of cultural resources. An important part of the article is

  7. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    Science.gov (United States)

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical

  8. Assessment of methods for computing the closest point projection, penetration, and gap functions in contact searching problems

    Czech Academy of Sciences Publication Activity Database

    Kopačka, Ján; Gabriel, Dušan; Plešek, Jiří; Ulbin, M.

    2016-01-01

    Roč. 105, č. 11 (2016), s. 803-833 ISSN 0029-5981 R&D Projects: GA ČR(CZ) GAP101/12/2315; GA MŠk(CZ) ME10114 Institutional support: RVO:61388998 Keywords : closest point projection * local contact search * quadratic elements * Newtons methods * geometric iteration methods * simplex method Subject RIV: JC - Computer Hardware ; Software Impact factor: 2.162, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/nme.4994/abstract

  9. FRDS.Broker Library

    DEFF Research Database (Denmark)

    2018-01-01

    The FRDS.Broker library is a teaching-oriented implementation of the Broker architectural pattern for distributed remote method invocation. It defines the central roles of the pattern and provides implementations of those roles that are not domain/use-case specific. It provides a JSON-based (GSon library) Requestor implementation, and implementations of the ClientRequestHandler and ServerRequestHandler roles in both a Java socket based and an HTTP/URI tunneling based variant. The latter is based upon the UniRest and Spark-Java libraries. The Broker pattern and the source code are explained ...

  10. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Full Text Available Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single color light source is used, and an equal step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore, a rapid search method must be applied to reduce their number. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality considering a mixed-light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which have N-size inputs of driving voltage and one output of image quality. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
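
    The steepest-ascent variant can be sketched as below, with the image-quality score treated as a black box evaluated once per probe image and its gradient estimated by finite differences over the N channel voltages. The quality model, bounds and step sizes are stand-ins, and the paper's conjugate-gradient variant is not shown.

        import numpy as np

        def auto_light(quality, v0, v_min=0.0, v_max=10.0, step=0.5, eps=0.05, iters=50):
            """Steepest-ascent tuning of N driving voltages of a mixed light source.
            `quality(v)` is assumed to capture an image at voltages v and return a score."""
            v = np.asarray(v0, float)
            q = quality(v)
            for _ in range(iters):
                grad = np.zeros_like(v)
                for i in range(v.size):                      # finite-difference gradient estimate
                    probe = v.copy()
                    probe[i] = min(v_max, probe[i] + eps)
                    grad[i] = (quality(probe) - q) / (probe[i] - v[i] + 1e-12)
                norm = np.linalg.norm(grad)
                if norm < 1e-9:
                    break
                v_new = np.clip(v + step * grad / norm, v_min, v_max)
                q_new = quality(v_new)
                if q_new <= q:
                    step *= 0.5                              # overshoot: reduce the step length
                else:
                    v, q = v_new, q_new
            return v, q

        # Stand-in quality model whose optimum sits at channel voltages (6, 3, 4)
        model = lambda v: -float(np.sum((v - np.array([6.0, 3.0, 4.0])) ** 2))
        print(auto_light(model, v0=np.array([1.0, 1.0, 1.0]))[0])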

  11. An automated and efficient conformation search of L-cysteine and L,L-cystine using the scaled hypersphere search method

    Science.gov (United States)

    Kishimoto, Naoki; Waizumi, Hiroki

    2017-10-01

    Stable conformers of L-cysteine and L,L-cystine were explored using an automated and efficient conformational searching method. The Gibbs energies of the stable conformers of L-cysteine and L,L-cystine were calculated with G4 and MP2 methods, respectively, at 450, 298.15, and 150 K. By assuming thermodynamic equilibrium and the barrier energies for the conformational isomerization pathways, the estimated ratios of the stable conformers of L-cysteine were compared with those determined by microwave spectroscopy in a previous study. Equilibrium structures of 1:1 and 2:1 cystine-Fe complexes were also calculated, and the energy of insertion of Fe into the disulfide bond was obtained.

  12. Decentralized cooperative unmanned aerial vehicles conflict resolution by neural network-based tree search method

    Directory of Open Access Journals (Sweden)

    Jian Yang

    2016-09-01

    Full Text Available In this article, a tree search algorithm is proposed to find near-optimal conflict avoidance solutions for unmanned aerial vehicles. In a dynamic environment, unmodeled elements such as wind make UAVs deviate from their nominal traces, which creates difficulties for conflict detection and resolution. Back-propagation neural networks are utilized to approximate the unmodeled dynamics of the environment. To satisfy the online planning requirement, the search length of the tree search algorithm is limited, so the algorithm may not be able to reach the goal states during the search. A midterm reward function for assessing each node is therefore devised, with consideration given to two factors, namely the safe separation requirement and the mission of each unmanned aerial vehicle. Simulation examples and comparisons with previous approaches are provided to illustrate the smooth and convincing behaviour of the proposed algorithm.
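
    The decision step can be illustrated with a depth-limited tree search in which a learned transition model and a midterm reward stand in for the neural-network dynamics and the separation/mission reward described above; the one-dimensional toy dynamics and reward are invented.

        def best_action(state, actions, transition, reward, depth):
            """Depth-limited tree search: expand every action sequence to `depth` steps and
            return the first action of the highest-scoring branch."""
            def value(s, d):
                if d == 0:
                    return 0.0
                return max(reward(s, a) + value(transition(s, a), d - 1) for a in actions)

            return max(actions,
                       key=lambda a: reward(state, a) + value(transition(state, a), depth - 1))

        # Toy one-dimensional example: two headings, reward prefers staying near x = 0
        acts = [-1.0, +1.0]
        trans = lambda s, a: s + a                  # stand-in for the learned dynamics model
        rew = lambda s, a: -abs(s + a)              # stand-in for the midterm reward
        print(best_action(5.0, acts, trans, rew, depth=3))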

  13. A borderless Library

    CERN Multimedia

    CERN Library

    2010-01-01

    The CERN Library has a large collection of documents in online or printed format in all disciplines needed by physicists, engineers and technicians. However,  users sometimes need to read documents not available at CERN. But don’t worry! Thanks to its Interlibrary loan and document delivery service, the CERN Library can still help you. Just fill in the online form or email us. We will then locate the document in other institutions and order it for you free of charge. The CERN Library cooperates with the largest libraries in Europe, such as ETH (Eidgenössische Technische Hochschule) in Zurich, TIB (Technische Informationsbibliothek) in Hanover and the British Library in London. Thanks to our network and our expertise in document search, most requests are satisfied in record time: articles are usually served in .pdf version a few hours after the order, and books or other printed materials are delivered within a few days. It is possible to ask for all types of documents suc...

  14. The first report: An analysis of bacterial flora of the first voided urine specimens of patients with male urethritis using the 16S ribosomal RNA gene-based clone library method.

    Science.gov (United States)

    You, Chunlin; Hamasuna, Ryoichi; Ogawa, Midori; Fukuda, Kazumasa; Hachisuga, Toru; Matsumoto, Tetsuro; Taniguchi, Hatsumi

    2016-06-01

    To analyse the bacterial flora of urine from patients with male urethritis using the clone library method. Urine specimens from patients with urethritis were used. The bacterial flora was analysed according to the 16S ribosomal RNA gene-based clone library method. In addition, Neisseria gonorrhoeae, Chlamydia trachomatis, Mycoplasma genitalium, Mycoplasma hominis, Ureaplasma urealyticum or Ureaplasma parvum were detected by the conventional PCR methods (TMA or real-time PCR) and data from the clone library and conventional PCR methods were compared. Among 58 urine specimens, 38 were successfully analysed using the clone library method. From the specimens, 2427 clones were evaluated and 95 bacterial phylotypes were detected. N. gonorrhoeae was detected from 6 specimens and as the predominant bacterial species in 5 specimens. M. genitalium was detected from 5 specimens and as the predominant bacterial species in 3 specimens. C. trachomatis was detected from 15 specimens using the TMA method, but was detected from only 1 specimen using the clone library method. U. parvum was detected from only 2 specimens using the clone library method. In addition, Haemophilus influenzae and Neisseria meningitidis were also detected in 8 and 1 specimens, respectively. Gardnerella vaginalis, which is a potential pathogen for bacterial vaginitis in women, was detected in 10 specimens. The clone library method can detect the occupancy rate of each bacteria species among the bacterial flora and may be a new method for bacterial analyses in male urethritis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Criticality and safety parameter studies for upgrading 3 MW TRIGA MARK II research reactor and validation of generated cross section library and computational method

    International Nuclear Information System (INIS)

    Bhuiyan, S.I.; Mondal, M.A.W.; Sarker, M.M.; Rahman, M.; Shahdatullah, M.S.; Huda, M.Q.; Chakrroborty, T.K.; Khan, M.J.H.

    2000-01-01

    This study deals with the neutronic and thermal hydraulic analysis of the 3MW TRIGA MARK II research reactor to upgrade it to a higher flux. The upgrading will need a major reshuffling and reconfiguration of the current core. To reshuffle the current core configuration, the chain of NJOY94.10 - WIMSD-5A - CITATION - PARET - MCNP4B2 codes has been used for the overall analysis. The computational methods, tools and techniques, customisation of cross section libraries, various models for cells and super cells, and many associated utilities have been standardised and established/validated for the overall core analysis. Analyses using the 4-group and 7-group libraries of macroscopic cross sections generated from the 69-group WIMSD-5 library showed that a 7-group structure is more suitable for TRIGA calculations considering its LEU fuel composition. The MCNP calculations established that the CITATION calculations and the generated cross section library are reasonably good for neutronic analysis of TRIGA reactors. Results obtained from PARET demonstrated that the flux upgrade will not cause the temperature limit on the fuel to be exceeded. Also, the maximum power density remains, by a substantial margin, below the level at which departure from nucleate boiling could occur. A possible core with two additional irradiation channels around the CT is projected, in which thermal fluxes almost identical to those in the CT are obtained. The reconfigured core also shows a 7.25% thermal flux increase in the Lazy Susan. (author)

  16. Search without Boundaries Using Simple APIs

    Science.gov (United States)

    Tong, Qi

    2009-01-01

    The U.S. Geological Survey (USGS) Library, where the author serves as the digital services librarian, is increasingly challenged to make it easier for users to find information from many heterogeneous information sources. Information is scattered throughout different software applications (i.e., library catalog, federated search engine, link resolver, and vendor websites), and each specializes in one thing. How could the library integrate the functionalities of one application with another and provide a single point of entry for users to search across? To improve the user experience, the library launched an effort to integrate the federated search engine into the library's intranet website. The result is a simple search box that leverages the federated search engine's built-in application programming interfaces (APIs). In this article, the author describes how this project demonstrated the power of APIs and their potential to be used by other enterprise search portals inside or outside of the library.
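
    The integration pattern the author describes (a single search box whose back end calls the federated search engine's built-in APIs) can be sketched in a few lines. The endpoint URL, query parameters, and response fields below are hypothetical placeholders for illustration, not the USGS Library's actual API, which the article does not specify.

      # Minimal sketch of a single-search-box back end that forwards a query to a
      # federated search engine's HTTP API and returns a simplified hit list.
      # The endpoint and field names are assumptions for illustration only.
      import requests

      FEDERATED_SEARCH_URL = "https://federated-search.example.org/api/search"  # hypothetical

      def simple_search(query, max_hits=10):
          response = requests.get(
              FEDERATED_SEARCH_URL,
              params={"q": query, "limit": max_hits, "format": "json"},
              timeout=10,
          )
          response.raise_for_status()
          hits = response.json().get("results", [])
          # Keep only the fields a simple intranet results page needs.
          return [{"title": h.get("title"), "source": h.get("source"), "url": h.get("link")}
                  for h in hits]

      if __name__ == "__main__":
          for hit in simple_search("groundwater quality"):
              print(hit["title"], "-", hit["source"], "-", hit["url"])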

  17. Marketing and health libraries

    OpenAIRE

    Wakeham, Maurice

    2004-01-01

    AIM: To present an overview of the concepts of marketing and to examine ways in which they can be applied to health libraries. METHODS: A review was carried out of literature relating to health libraries using LISA, CINAHL, BNI and Google. RESULTS: Marketing is seen as a strategic management activity aimed at developing customer relationships. Concepts such as the 'four Ps' (product, price, place and promotion), marketing plans, the marketing mix, segmentation, promotion and evaluation ...

  18. Developing online research strategies for library users at Sokoine ...

    African Journals Online (AJOL)

    Wanyenda

    SNAL develop effective online search strategies that help them make effective use of ... “trained in the application of information resources of their work”. ... However, the library has now migrated to an integrated web-based library automation.

  19. Library Education in the ASEAN Countries.

    Science.gov (United States)

    Atan, H. B.; Havard-Williams, P.

    1987-01-01

    Identifies the hierarchy of library development in Southeast Asian countries that results in the neglect of public and school libraries. Developing local library school curricula which focus on the specific needs of each country and cooperation among library schools are suggested as methods of correcting this situation. (CLB)

  20. Climate change on the Colorado River: a method to search for robust management strategies

    Science.gov (United States)

    Keefe, R.; Fischbach, J. R.

    2010-12-01

    The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 million acre-feet (maf) per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) are in danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios given climate change uncertainty. We also generate different scenarios of parametric consumptive use growth in the Upper Basin and evaluate alternate management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term or long-term management strategies across an ensemble of plausible future scenarios with the goal of identifying one or more approaches that are robust to alternate assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and characterize key tradeoffs between strategies under different scenarios.

  1. Libraries Today, Libraries Tomorrow: Contemporary Library Practices and the Role of Library Space in the L

    Directory of Open Access Journals (Sweden)

    Ana Vogrinčič Čepič

    2013-09-01

    Full Text Available ABSTRACT Purpose: The article uses sociological concepts in order to rethink the changes in library practices. Contemporary trends are discussed with regard to the changing nature of working habits, referring mostly to the new technology, and the emergence of the third space phenomenon. The author does not regard libraries only as concrete public service institutions, but rather as complex cultural forms, taking into consideration the wider social context with a stress on users’ practices in relation to space. Methodology/approach: The article is based on the (self-)observation of public library use, and on the (discourse) analysis of internal library documents (i.e. annual reports and plans) and secondary sociological literature. As such, the cultural form approach represents a classic method of sociology of culture. Results: The study of relevant material in combination with direct personal experiences reveals socio-structural causes for the change of users’ needs and habits, and points at the difficulty of spatial redefinition of libraries as well as at the power of the discourse. Research limitations: The article is limited to an observation of users’ practices in some of the public libraries in Ljubljana and examines only a small number of annual reports – the discoveries are then further debated from the sociological perspective. Originality/practical implications: The article offers sociological insight into the current issues of library science and tries to suggest a wider explanation that could answer some of the challenges of contemporary librarianship.

  2. Best, Useful and Objective Precisions for Information Retrieval of Three Search Methods in PubMed and iPubMed

    Directory of Open Access Journals (Sweden)

    Somayyeh Nadi Ravandi

    2016-10-01

    Full Text Available MEDLINE is one of the valuable sources of medical information on the Internet. Among the different open access sites of MEDLINE, PubMed is the best-known site. In 2010, iPubMed was established with an interaction-fuzzy search method for MEDLINE access. In the present work, we aimed to compare the precision of the retrieved sources (Best, Useful and Objective precision) in PubMed and iPubMed using two search methods (simple and MeSH search) in PubMed and the interaction-fuzzy method in iPubMed. During our semi-empirical study period, we held training workshops for 61 students of higher education to teach them the Simple Search, MeSH Search, and Fuzzy-Interaction Search methods. Then, the precision of 305 searches for each method prepared by the students was calculated on the basis of the Best precision, Useful precision, and Objective precision formulas. Analyses were done in SPSS version 11.5 using the Friedman and Wilcoxon tests, and the three precisions obtained with the three precision formulas were studied for the three search methods. The mean precision of the interaction-fuzzy search method was higher than that of the simple search and MeSH search for all three types of precision, i.e., Best precision, Useful precision, and Objective precision; the simple search method was in the next rank, and their mean precisions were significantly different (P < 0.001). The precision of the interaction-fuzzy search method in iPubMed was investigated for the first time. Also for the first time, three types of precision were evaluated in PubMed and iPubMed. The results showed that the interaction-fuzzy search method is more precise than the natural language search (simple search) and MeSH search, and users of this method found papers that were more related to their queries; even though searching in PubMed is useful, it is important that users apply new search methods to obtain the best results.

  3. Hybrid Direct and Iterative Solver with Library of Multi-criteria Optimal Orderings for h Adaptive Finite Element Method Computations

    KAUST Repository

    AbouEisha, Hassan M.; Jopek, Konrad; Medygrał, Bartłomiej; Moshkov, Mikhail; Nosek, Szymon; Paszyńska, Anna; Paszyński, Maciej; Pingali, Keshav

    2016-01-01

    trees, for each mesh, and for each refinement level. We generate a library of optimal elimination trees for small grids with local singularities. We also propose an algorithm that for a given large mesh with identified local sub-grids, each one

  4. Synthesis of a drug-like focused library of trisubstituted pyrrolidines using integrated flow chemistry and batch methods.

    Science.gov (United States)

    Baumann, Marcus; Baxendale, Ian R; Kuratli, Christoph; Ley, Steven V; Martin, Rainer E; Schneider, Josef

    2011-07-11

    A combination of flow and batch chemistries has been successfully applied to the assembly of a series of trisubstituted drug-like pyrrolidines. This study demonstrates the efficient preparation of a focused library of these pharmaceutically important structures using microreactor technologies, as well as classical parallel synthesis techniques, and thus exemplifies the impact of integrating innovative enabling tools within the drug discovery process.

  5. Nigerian Libraries

    African Journals Online (AJOL)

    Bridging the digital divide: the potential role of the National Library of Nigeria · Juliana Obiageri Akidi, Joy Chituru Onyenachi, 11-19 ...

  6. Library Locations

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Carnegie Library of Pittsburgh locations including address, coordinates, phone number, square footage, and standard operating hours.

  7. academic libraries

    African Journals Online (AJOL)

    Information Impact: Journal of Information and Knowledge Management

    Information Impact: Journal of Information and Knowledge Management ... Key words: academic libraries, open access, research, researchers, technology ... European commission (2012) reports that affordable and easy access to the results ...

  8. In vitro detection of circulating tumor cells compared by the CytoTrack and CellSearch methods

    DEFF Research Database (Denmark)

    Hillig, T.; Horn, P.; Nygaard, Ann-Britt

    2015-01-01

    Comparison of two methods to detect circulating tumor cells (CTC), CytoTrack and CellSearch, through recovery of MCF-7 breast cancer cells spiked into blood collected from healthy donors. Spiking of a fixed number of EpCAM and pan-cytokeratin positive MCF-7 cells into 7.5 mL donor blood was performed by FACSAria flow sorting. The samples were shipped to either CytoTrack or CellSearch research facilities within 48 h, where evaluation of MCF-7 recovery was performed. CytoTrack and CellSearch analyses were performed simultaneously. Recoveries of MCF-7 single cells, cells in clusters, and clusters ... .23/p = 0.09). Overall, the recovery of CytoTrack and CellSearch was 68.8 +/- 3.9 %/71.1 +/- 2.9 %, respectively (p = 0.58). In spite of different methodologies, CytoTrack and CellSearch found similar numbers of CTCs when spiking was performed with the EpCAM and pan cytokeratin-positive cell line MCF-7.

  9. Search Engines for Tomorrow's Scholars

    Science.gov (United States)

    Fagan, Jody Condit

    2011-01-01

    Today's scholars face an outstanding array of choices when choosing search tools: Google Scholar, discipline-specific abstracts and index databases, library discovery tools, and more recently, Microsoft's re-launch of their academic search tool, now dubbed Microsoft Academic Search. What are these tools' strengths for the emerging needs of…

  10. The Search for Meaning of Life in Mitch Albom's Tuesdays with Morrie

    OpenAIRE

    Rahmahwati, Ummu

    2014-01-01

    People often realize that they need to search for the meaning of their life. The main character in Mitch Albom's novel Tuesdays with Morrie, Morrie, exemplifies this process of searching for the meaning of life. This study is designed to analyze how Morrie searches for meaning in life and how this process also influences the supporting character, Mitch, who tries to grasp the meaning in his life. The methods used in this study are library research and a psychological approach that relate to ...

  11. Constructing Effective Search Strategies for Electronic Searching.

    Science.gov (United States)

    Flanagan, Lynn; Parente, Sharon Campbell

    Electronic databases have grown tremendously in both number and popularity since their development during the 1960s. Access to electronic databases in academic libraries was originally offered primarily through mediated search services by trained librarians; however, the advent of CD-ROM and end-user interfaces for online databases has shifted the…

  12. Library cooperation among academic libraries in Katsina state ...

    African Journals Online (AJOL)

    This study examined library cooperation among academic libraries in Katsina state. A qualitative research method was adopted in carrying out this study. Interviews were used as the instrument for data collection. The population comprised 7 acquisition librarians from the schools under study. A descriptive method of data ...

  13. Multi-Agent Based Beam Search for Real-Time Production Scheduling and Control Method, Software and Industrial Application

    CERN Document Server

    Kang, Shu Gang

    2013-01-01

    The Multi-Agent Based Beam Search (MABBS) method systematically integrates four major requirements of manufacturing production - representation capability, solution quality, computation efficiency, and implementation difficulty - within a unified framework to deal with the many challenges of complex real-world production planning and scheduling problems. Multi-agent Based Beam Search for Real-time Production Scheduling and Control introduces this method, together with its software implementation and industrial applications.  This book connects academic research with industrial practice, and develops a practical solution to production planning and scheduling problems. To simplify implementation, a reusable software platform is developed to build the MABBS method into a generic computation engine.  This engine is integrated with a script language, called the Embedded Extensible Application Script Language (EXASL), to provide a flexible and straightforward approach to representing complex real-world problems. ...
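
    The record above describes MABBS at a high level; the underlying beam search idea can be shown in isolation. The sketch below is a generic beam search (keep only the best `beam_width` partial solutions at every step), not the MABBS implementation from the book, and the `expand`, `score`, and `is_complete` callbacks stand in for problem-specific scheduling logic.

      # Generic beam search: expand every state in the beam, score the children,
      # keep the best `beam_width` partial states, and track the best complete one.
      def beam_search(initial_state, expand, score, is_complete, beam_width=5, max_steps=100):
          beam = [initial_state]
          best_state, best_score = None, float("inf")
          for _ in range(max_steps):
              candidates = []
              for state in beam:
                  for child in expand(state):
                      if is_complete(child):
                          s = score(child)
                          if s < best_score:
                              best_state, best_score = child, s
                      else:
                          candidates.append(child)
              if not candidates:
                  break
              candidates.sort(key=score)          # lower score = more promising
              beam = candidates[:beam_width]
          return best_state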

  14. Method and electronic database search engine for exposing the content of an electronic database

    NARCIS (Netherlands)

    Stappers, P.J.

    2000-01-01

    The invention relates to an electronic database search engine comprising an electronic memory device suitable for storing and releasing elements from the database, a display unit, a user interface for selecting and displaying at least one element from the database on the display unit, and control

  15. Uncovering Web search strategies in South African higher education

    Directory of Open Access Journals (Sweden)

    Surika Civilcharran

    2016-11-01

    Full Text Available Background: In spite of the enormous amount of information available on the Web and the fact that search engines are continuously evolving to enhance the search experience, students are nevertheless faced with the difficulty of effectively retrieving information. It is, therefore, imperative for the interaction between students and search tools to be understood and search strategies to be identified, in order to promote successful information retrieval. Objectives: This study identifies the Web search strategies used by postgraduate students and forms part of a wider study into information retrieval strategies used by postgraduate students at the University of KwaZulu-Natal (UKZN), Pietermaritzburg campus, South Africa. Method: Largely underpinned by Thatcher’s cognitive search strategies, the mixed-methods approach was utilised for this study, in which questionnaires were employed in Phase 1 and structured interviews in Phase 2. This article reports and reflects on the findings of Phase 2, which focus on identifying the Web search strategies employed by postgraduate students. The Phase 1 results were reported in Civilcharran, Hughes and Maharaj (2015). Results: Findings reveal the Web search strategies used for academic information retrieval. In spite of easy access to the invisible Web and the advent of meta-search engines, Web search engines still remain the preferred search tool. The UKZN online library databases and especially the UKZN online library, Online Public Access Catalogue system, are being underutilised. Conclusion: Being ranked in the top three percent of the world’s universities, UKZN is investing in search tools that are not being used to their full potential. This evidence suggests an urgent need for students to be trained in Web searching and to have a greater exposure to a variety of search tools. This article is intended to further contribute to the design of undergraduate training programmes in order to deal

  16. Promoter Boundaries for the luxCDABE and betIBA-proXWV Operons in Vibrio harveyi Defined by the Method Rapid Arbitrary PCR Insertion Libraries (RAIL).

    Science.gov (United States)

    Hustmyer, Christine M; Simpson, Chelsea A; Olney, Stephen G; Rusch, Douglas B; Bochman, Matthew L; van Kessel, Julia C

    2018-06-01

    Experimental studies of transcriptional regulation in bacteria require the ability to precisely measure changes in gene expression, often accomplished through the use of reporter genes. However, the boundaries of promoter sequences required for transcription are often unknown, thus complicating the construction of reporters and genetic analysis of transcriptional regulation. Here, we analyze reporter libraries to define the promoter boundaries of the luxCDABE bioluminescence operon and the betIBA-proXWV osmotic stress operon in Vibrio harveyi. We describe a new method called rapid arbitrary PCR insertion libraries (RAIL) that combines the power of arbitrary PCR and isothermal DNA assembly to rapidly clone promoter fragments of various lengths upstream of reporter genes to generate large libraries. To demonstrate the versatility and efficiency of RAIL, we analyzed the promoters driving expression of the luxCDABE and betIBA-proXWV operons and created libraries of DNA fragments from these loci fused to fluorescent reporters. Using flow cytometry sorting and deep sequencing, we identified the DNA regions necessary and sufficient for maximum gene expression for each promoter. These analyses uncovered previously unknown regulatory sequences and validated known transcription factor binding sites. We applied this high-throughput method to gfp, mCherry, and lacZ reporters and multiple promoters in V. harveyi. We anticipate that the RAIL method will be easily applicable to other model systems for genetic, molecular, and cell biological applications. IMPORTANCE Gene reporter constructs have long been essential tools for studying gene regulation in bacteria, particularly following the recent advent of fluorescent gene reporters. We developed a new method that enables efficient construction of promoter fusions to reporter genes to study gene regulation. We demonstrate the versatility of this technique in the model bacterium Vibrio harveyi by constructing promoter libraries

  17. Building a Better Fragment Library for De Novo Protein Structure Prediction

    Science.gov (United States)

    de Oliveira, Saulo H. P.; Shi, Jiye; Deane, Charlotte M.

    2015-01-01

    Fragment-based approaches are the current standard for de novo protein structure prediction. These approaches rely on accurate and reliable fragment libraries to generate good structural models. In this work, we describe a novel method for structure fragment library generation and its application in fragment-based de novo protein structure prediction. The importance of correct testing procedures in assessing the quality of fragment libraries is demonstrated. In particular, homologs of the target must be excluded from the libraries to correctly simulate a de novo protein structure prediction scenario, something which surprisingly is not always done. We demonstrate that fragments presenting different predominant predicted secondary structures should be treated differently during the fragment library generation step and that exhaustive and random search strategies should both be used. This information was used to develop a novel method, Flib. On a validation set of 41 structurally diverse proteins, Flib libraries present both higher precision and coverage than two of the state-of-the-art methods, NNMake and HHFrag. Flib also achieves better precision and coverage on the set of 275 protein domains used in the two previous experiments of the Critical Assessment of Structure Prediction (CASP9 and CASP10). We compared Flib libraries against NNMake libraries in a structure prediction context. Of the 13 cases in which a correct answer was generated, Flib models were more accurate than NNMake models for 10. “Flib is available for download at: http://www.stats.ox.ac.uk/research/proteins/resources”. PMID:25901595

  18. Building a better fragment library for de novo protein structure prediction.

    Directory of Open Access Journals (Sweden)

    Saulo H P de Oliveira

    Full Text Available Fragment-based approaches are the current standard for de novo protein structure prediction. These approaches rely on accurate and reliable fragment libraries to generate good structural models. In this work, we describe a novel method for structure fragment library generation and its application in fragment-based de novo protein structure prediction. The importance of correct testing procedures in assessing the quality of fragment libraries is demonstrated. In particular, homologs of the target must be excluded from the libraries to correctly simulate a de novo protein structure prediction scenario, something which surprisingly is not always done. We demonstrate that fragments presenting different predominant predicted secondary structures should be treated differently during the fragment library generation step and that exhaustive and random search strategies should both be used. This information was used to develop a novel method, Flib. On a validation set of 41 structurally diverse proteins, Flib libraries present both higher precision and coverage than two of the state-of-the-art methods, NNMake and HHFrag. Flib also achieves better precision and coverage on the set of 275 protein domains used in the two previous experiments of the Critical Assessment of Structure Prediction (CASP9 and CASP10). We compared Flib libraries against NNMake libraries in a structure prediction context. Of the 13 cases in which a correct answer was generated, Flib models were more accurate than NNMake models for 10. "Flib is available for download at: http://www.stats.ox.ac.uk/research/proteins/resources".

  19. The Weakest Link: Library Catalogs.

    Science.gov (United States)

    Young, Terrence E., Jr.

    2002-01-01

    Describes methods of correcting MARC records in online public access catalogs in school libraries. Highlights include in-house methods; professional resources; conforming to library cataloging standards; vendor services, including Web-based services; software specifically developed for record cleanup; and outsourcing. (LRW)

  20. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the matching function's ability to distinguish between good and bad models is recommended, before launching exhaustive computations. However, different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets (the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate things like the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the Simplex method and the Least-squares Linear Taylor Differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods reset the search direction when the search is likely to get stuck in what is presumably a local minimum. Deterministic methods based on
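
    The point above about downhill methods being unable to tell a local minimum from the global one can be illustrated with a toy objective: a single Simplex (Nelder-Mead) run from a poor starting point gets trapped, while a simple multistart loop usually recovers the global minimum. The function below is an arbitrary multimodal example chosen for illustration, not a compartmental neural model.

      # Single-start Nelder-Mead versus a multistart loop on a multimodal function.
      import numpy as np
      from scipy.optimize import minimize

      def objective(x):
          # Global minimum at x = 0 (value -1); the cosine term adds local minima.
          return 0.1 * x[0] ** 2 - np.cos(3.0 * x[0])

      single = minimize(objective, x0=[4.0], method="Nelder-Mead")

      rng = np.random.default_rng(0)
      multistart = min(
          (minimize(objective, x0=[x0], method="Nelder-Mead")
           for x0 in rng.uniform(-10.0, 10.0, size=20)),
          key=lambda res: res.fun,
      )

      print("single start:", single.x, single.fun)          # often a local minimum
      print("multistart  :", multistart.x, multistart.fun)  # close to the global one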

  1. Identification of toxic cyclopeptides based on mass spectral library matching

    Directory of Open Access Journals (Sweden)

    Boris L. Milman

    2014-08-01

    Full Text Available To gain perspective on the use of tandem mass spectral libraries for identification of toxic cyclic peptides, the new library was built from 263 mass spectra (mainly MS2 spectra) of 59 compounds of that group, such as microcystins, amatoxins, and some related compounds. Mass spectra were extracted from the literature or specially acquired on ESI-Q-ToF and MALDI-ToF/ToF tandem instruments. ESI-MS2 product-ion mass spectra appeared to be rather close to MALDI-ToF/ToF fragment spectra which are uncommon for mass spectral libraries. Testing of the library was based on searches where reference spectra were in turn cross-compared. The percentage of 1st rank correct identifications (true positives) was 70% in a general case and 88–91% without including knowingly defective (‘one-dimension’) spectra as test ones. The percentage of 88–91% is the principal estimate for the overall performance of this library that can be used in a method of choice for identification of individual cyclopeptides and also for group recognition of individual classes of such peptides. The approach to identification of cyclopeptides based on mass spectral library matching proved to be the most effective for abundant toxins. That was confirmed by analysis of extracts from two cyanobacterial strains.
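
    Library matching of the kind described above is, at its core, a nearest-neighbour search over spectra. A common generic scheme, sketched below, bins each MS2 peak list into a fixed-width m/z vector and ranks library entries by cosine similarity; the scoring actually used for the cyclopeptide library is not reproduced here, and the peak lists are made-up toy data.

      # Generic spectral library matching: bin peak lists and rank by cosine similarity.
      import numpy as np

      def bin_spectrum(peaks, mz_max=2000.0, bin_width=1.0):
          """peaks: iterable of (m/z, intensity) -> L2-normalised intensity vector."""
          vec = np.zeros(int(mz_max / bin_width) + 1)
          for mz, intensity in peaks:
              if 0.0 <= mz <= mz_max:
                  vec[int(mz / bin_width)] += intensity
          norm = np.linalg.norm(vec)
          return vec / norm if norm > 0 else vec

      def library_search(query_peaks, library):
          """library: dict of name -> peak list; returns (name, score) pairs, best first."""
          q = bin_spectrum(query_peaks)
          scores = {name: float(np.dot(q, bin_spectrum(peaks))) for name, peaks in library.items()}
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      reference = {"compound_A": [(135.1, 40.0), (213.2, 100.0)],   # toy spectra, not real data
                   "compound_B": [(155.1, 80.0), (265.2, 60.0)]}
      print(library_search([(135.1, 35.0), (213.2, 90.0)], reference))  # compound_A ranks first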

  2. Apps and Mobile Support Services in Canadian Academic Medical Libraries

    Directory of Open Access Journals (Sweden)

    Tess Grynoch

    2016-12-01

    Full Text Available Objective: To examine how Canadian academic medical libraries are supporting mobile apps, what apps are currently being provided by these libraries, and what types of promotion are being used. Methods: A survey of the library websites for the 17 medical schools in Canada was completed. For each library website surveyed, the medical apps listed on the website, any services mentioned through this medium, and any type of app promotion events were noted. When Facebook and Twitter accounts were evident, the tweets were searched and the past two years of Facebook posts scanned for mention of medical apps or mobile services/events. Results: All seventeen academic medical libraries had lists of mobile medical apps, with a large range in the number of medically relevant apps (average = 31, median = 23). A total of 275 different apps were noted and the apps covered a wide range of subjects. Five of the 14 Facebook accounts scanned had posts about medical apps in the past two years, while 11 of the 15 Twitter accounts had tweets about medical apps. Social media was only one of the many promotional methods noted. Outside of the app lists and mobile resources guides, Canadian academic medical libraries are providing workshops, presentations, and drop-in sessions for mobile medical apps. Conclusion: While librarians cannot simply compare mobile services and resources between academic medical libraries without factoring in a number of other circumstances, librarians can learn from mobile resources strategies employed at other libraries, such as using research guides to increase medical app literacy.

  3. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, in which prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that assessments carried out on these systems report low precision and recall measures, and in some of these systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that addresses these problems.

  4. Hybridization of Sensing Methods of the Search Domain and Adaptive Weighted Sum in the Pareto Approximation Problem

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    Full Text Available We consider a relatively new and rapidly developing class of methods for solving a multi-objective optimization problem, based on a preliminarily built finite-dimensional approximation of the Pareto set, and thereby of the Pareto front of this problem as well. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed in the paper of Ryu, Kim and Wan (J.H. Ryu, S. Kim, H. Wan), is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses a quadratic approximation of the objective functions in the current sub-domain of the search space (the trust region), based on the gradient and Hessian matrix of the objective functions. To build the (quadratic) meta objective functions, this work uses methods of the theory of experimental design, which involves calculating the values of these functions at the grid nodes covering the trust region (a sensing method of the search domain). Two groups of sensing methods are considered: hypercube-based and hypersphere-based methods. For each of these groups, a number of test multi-objective optimization tasks have been used to study the efficiency of the following grids: the "Latin Hypercube"; a grid that is uniformly random in each dimension; and a grid based on the LPτ sequences.
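
    The sensing-plus-meta-model step discussed above can be illustrated generically: draw a Latin Hypercube design over the current trust region and fit a quadratic surrogate of one objective by least squares. This is a sketch under those assumptions, not the authors' AWS code, and the toy objective is arbitrary.

      # Latin Hypercube sensing of a 2-D trust region plus a quadratic least-squares fit.
      import numpy as np

      def latin_hypercube(n_points, lower, upper, rng):
          # One sample per stratum in every dimension, with the strata randomly permuted.
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          dim = lower.size
          strata = np.array([rng.permutation(n_points) for _ in range(dim)]).T
          u = (strata + rng.random((n_points, dim))) / n_points
          return lower + u * (upper - lower)

      def fit_quadratic(x, y):
          # Least-squares fit of y ~ 1 + x1 + x2 + x1^2 + x2^2 + x1*x2 (2-D case).
          x1, x2 = x[:, 0], x[:, 1]
          design = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
          coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
          return coeffs

      rng = np.random.default_rng(42)
      pts = latin_hypercube(30, lower=[-1.0, -1.0], upper=[1.0, 1.0], rng=rng)
      f = pts[:, 0] ** 2 + 2.0 * pts[:, 1] ** 2 + 0.5 * pts[:, 0] * pts[:, 1]   # toy objective
      print(fit_quadratic(pts, f))   # approximately [0, 0, 0, 1, 2, 0.5]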

  5. Towards a suite of test cases and a pycomodo library to assess and improve numerical methods in ocean models

    Science.gov (United States)

    Garnier, Valérie; Honnorat, Marc; Benshila, Rachid; Boutet, Martial; Cambon, Gildas; Chanut, Jérome; Couvelard, Xavier; Debreu, Laurent; Ducousso, Nicolas; Duhaut, Thomas; Dumas, Franck; Flavoni, Simona; Gouillon, Flavien; Lathuilière, Cyril; Le Boyer, Arnaud; Le Sommer, Julien; Lyard, Florent; Marsaleix, Patrick; Marchesiello, Patrick; Soufflet, Yves

    2016-04-01

    bed to continue research in numerical approaches as well as an efficient tool to maintain any oceanic code and assure the users a stamped model in a certain range of hydrodynamical regimes. Thanks to a common netCDF format, this suite is completed with a python library that encompasses all the tools and metrics used to assess the efficiency of the numerical methods. References - Couvelard X., F. Dumas, V. Garnier, A.L. Ponte, C. Talandier, A.M. Treguier (2015). Mixed layer formation and restratification in presence of mesoscale and submesoscale turbulence. Ocean Modelling, Vol 96-2, p 243-253. doi:10.1016/j.ocemod.2015.10.004. - Soufflet Y., P. Marchesiello, F. Lemarié, J. Jouanno, X. Capet, L. Debreu , R. Benshila (2016). On effective resolution in ocean models. Ocean Modelling, in press. doi:10.1016/j.ocemod.2015.12.004

  6. Coherent search of continuous gravitational wave signals: extension of the 5-vectors method to a network of detectors

    International Nuclear Information System (INIS)

    Astone, P; Colla, A; Frasca, S; Palomba, C; D'Antonio, S

    2012-01-01

    We describe the extension to multiple datasets of a coherent method for the search of continuous gravitational wave signals, based on the computation of 5-vectors. In particular, we show how to coherently combine different datasets belonging to the same detector or to different detectors. In the latter case the coherent combination is the way to have the maximum increase in signal-to-noise ratio. If the datasets belong to the same detector the advantage comes mainly from the properties of a quantity called coherence which is helpful (in both cases, in fact) in rejecting false candidates. The method has been tested searching for simulated signals injected in Gaussian noise and the results of the simulations are discussed.

  7. A Simple Time Domain Collocation Method to Precisely Search for the Periodic Orbits of Satellite Relative Motion

    Directory of Open Access Journals (Sweden)

    Xiaokui Yue

    2014-01-01

    Full Text Available A numerical approach for obtaining periodic orbits of satellite relative motion is proposed, based on using the time domain collocation (TDC method to search for the periodic solutions of an exact J2 nonlinear relative model. The initial conditions for periodic relative orbits of the Clohessy-Wiltshire (C-W equations or Tschauner-Hempel (T-H equations can be refined with this approach to generate nearly bounded orbits. With these orbits, a method based on the least-squares principle is then proposed to generate projected closed orbit (PCO, which is a reference for the relative motion control. Numerical simulations reveal that the presented TDC searching scheme is effective and simple, and the projected closed orbit is very fuel saving.
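
    The starting point mentioned above, a bounded solution of the Clohessy-Wiltshire equations, can be reproduced with the classical no-drift condition ydot0 = -2*n*x0. The sketch below only propagates the standard linear C-W model; the TDC refinement against the exact J2 nonlinear model and the projected closed orbit are not reproduced here, and the mean motion value is an arbitrary LEO-like assumption.

      # Propagate the linear Clohessy-Wiltshire equations with a no-drift initial state.
      import numpy as np
      from scipy.integrate import solve_ivp

      n = 0.0011  # mean motion of the reference orbit [rad/s], roughly low Earth orbit

      def cw_rhs(t, s):
          # State s = [x, y, z, xdot, ydot, zdot] in the local orbital frame.
          x, y, z, xd, yd, zd = s
          return [xd, yd, zd,
                  3.0 * n ** 2 * x + 2.0 * n * yd,
                  -2.0 * n * xd,
                  -(n ** 2) * z]

      x0 = 100.0                                            # 100 m radial offset
      state0 = [x0, 0.0, 0.0, 0.0, -2.0 * n * x0, 0.0]      # no-drift condition on ydot
      period = 2.0 * np.pi / n
      sol = solve_ivp(cw_rhs, (0.0, 3.0 * period), state0, max_step=10.0)

      # With the no-drift condition the along-track motion stays bounded over many periods.
      print("max |y| over three periods [m]:", float(np.max(np.abs(sol.y[1]))))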

  8. Methods and pitfalls in searching drug safety databases utilising the Medical Dictionary for Regulatory Activities (MedDRA).

    Science.gov (United States)

    Brown, Elliot G

    2003-01-01

    The Medical Dictionary for Regulatory Activities (MedDRA) is a unified standard terminology for recording and reporting adverse drug event data. Its introduction is widely seen as a significant improvement on the previous situation, where a multitude of terminologies of widely varying scope and quality were in use. However, there are some complexities that may cause difficulties, and these will form the focus for this paper. Two methods of searching MedDRA-coded databases are described: searching based on term selection from all of MedDRA and searching based on terms in the safety database. There are several potential traps for the unwary in safety searches. There may be multiple locations of relevant terms within a system organ class (SOC) and lack of recognition of appropriate group terms; the user may think that group terms are more inclusive than is the case. MedDRA may distribute terms relevant to one medical condition across several primary SOCs. If the database supports the MedDRA model, it is possible to perform multiaxial searching: while this may help find terms that might have been missed, it is still necessary to consider the entire contents of the SOCs to find all relevant terms and there are many instances of incomplete secondary linkages. It is important to adjust for multiaxiality if data are presented using primary and secondary locations. Other sources for errors in searching are non-intuitive placement and the selection of terms as preferred terms (PTs) that may not be widely recognised. Some MedDRA rules could also result in errors in data retrieval if the individual is unaware of these: in particular, the lack of multiaxial linkages for the Investigations SOC, Social circumstances SOC and Surgical and medical procedures SOC and the requirement that a PT may only be present under one High Level Term (HLT) and one High Level Group Term (HLGT) within any single SOC. Special Search Categories (collections of PTs assembled from various SOCs by
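
    The multiaxial retrieval issue described above (a preferred term has one primary SOC plus optional secondary SOC links, so an SOC-level search must follow the secondary linkages and then deduplicate cases) can be shown with a toy coded dataset. The term and SOC names below are illustrative placeholders, not actual MedDRA content.

      # Toy multiaxial search over MedDRA-style coded cases, with deduplication.
      cases = [
          {"case_id": 1, "pt": "PT_chest_pain", "primary_soc": "SOC_general",
           "secondary_socs": ["SOC_cardiac"]},
          {"case_id": 2, "pt": "PT_arrhythmia", "primary_soc": "SOC_cardiac",
           "secondary_socs": []},
          {"case_id": 3, "pt": "PT_chest_pain", "primary_soc": "SOC_general",
           "secondary_socs": ["SOC_cardiac"]},
      ]

      def search_by_soc(cases, soc, include_secondary=True):
          """Return the unique case IDs whose PT is linked to `soc`."""
          hits = set()
          for case in cases:
              linked = [case["primary_soc"]]
              if include_secondary:
                  linked += case["secondary_socs"]
              if soc in linked:
                  hits.add(case["case_id"])
          return sorted(hits)

      print(search_by_soc(cases, "SOC_cardiac", include_secondary=False))  # primary only: [2]
      print(search_by_soc(cases, "SOC_cardiac", include_secondary=True))   # multiaxial: [1, 2, 3]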

  9. Parallel metaheuristics in computational biology: an asynchronous cooperative enhanced scatter search method

    OpenAIRE

    Penas, David R.; González, Patricia; Egea, José A.; Banga, Julio R.; Doallo, Ramón

    2015-01-01

    Metaheuristics are gaining increased attention as efficient solvers for hard global optimization problems arising in bioinformatics and computational systems biology. Scatter Search (SS) is one of the recent outstanding algorithms in that class. However, its application to very hard problems, like those considering parameter estimation in dynamic models of systems biology, still results in excessive computation times. In order to reduce the computational cost of the SS and improve its success...

  10. NEW METHOD FOR REACHING CONSUMERS OVER THE INTERNET: "SEARCH ENGINE MARKETING”

    OpenAIRE

    Ergezer, Çağrı

    2018-01-01

    The Internet has become a platform that reaches millions of users instantly and, with its increasing use, has also become a place where people spend much of their day, taking on the identity of consumers and potential customers in addition to being ordinary Internet users. Search engines have also earned the distinction of being the preferred reference point for users in the sea of the Internet, drawing attention with their usage rates and allowing the sought-after content to be reached easily among the millions of content...

  11. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
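
    The interface described above (a search space built from hp expressions, an objective to minimize, and fmin driving the TPE algorithm) is compact enough to show directly. A toy analytic objective stands in for an expensive model-training loss; the space and evaluation budget below are arbitrary choices for illustration.

      # Minimal Hyperopt run: define a search space and minimize a toy objective with TPE.
      from hyperopt import fmin, tpe, hp, Trials

      space = {
          "x": hp.uniform("x", -5.0, 5.0),
          "scale": hp.choice("scale", [1.0, 2.0, 4.0]),
      }

      def objective(params):
          # Stand-in for a validation loss; minimum near x = 2 with the smallest scale.
          return params["scale"] * (params["x"] - 2.0) ** 2

      trials = Trials()
      best = fmin(fn=objective, space=space, algo=tpe.suggest,
                  max_evals=100, trials=trials)
      print(best)  # note: for hp.choice, fmin reports the index of the chosen option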

  12. Danish Post‐Secondary Students Use Public Libraries for Study Purposes. A review of: Pors, Niels Ole. “The Public Library and Students’ Information Needs.” New Library World 107.1226/1227 (2006): 275‐85.

    Directory of Open Access Journals (Sweden)

    Julie McKenna

    2007-09-01

    Full Text Available Objective – To determine whether and how Danish university and higher education students use public libraries for study purposes. Design – Online survey. Setting – Post‐secondary programs in Denmark. Subjects – 1,575 students in university‐level programs or other higher education programs (vocational three‐to‐four‐year programs) in Denmark. Methods – A sample of students was drawn from the national database of students by selecting every student born on the 15th of every month (approximately 4,900 students). A letter describing the study, with an invitation to fill out an online questionnaire, was sent to all students in the sample. There were 1,694 valid responses (approximately 35% response rate). Students following short vocational programs were deemed to be under‐represented and these subjects were omitted from the analysis of this report, which reflects the responses of 1,575 students. The online questionnaire gathered demographic details (gender, age, educational institution, study topic, study year, geographical location, access to the Internet, etc.) and used 110 questions or statements to gather information about student information‐seeking behaviour related to study purposes. These included use of the physical library and satisfaction with services, use of search engines, awareness and use of library Web‐based services, study behaviour, and participation in information literacy activities. Main results – For the purposes of this study, “academic library is used as a generic term covering university libraries, research libraries, educational libraries and all other kind of libraries outside the field of public libraries” (p. 278). The survey results confirmed many of the previous international reports of student information‐seeking behaviour: 85% of students use the academic library for study purposes; fewer than 10% of all students are able to cope without any library use; students in technology and engineering

  13. A Comparison of Local Search Methods for the Multicriteria Police Districting Problem on Graph

    Directory of Open Access Journals (Sweden)

    F. Liberatore

    2016-01-01

    Full Text Available In the current economic climate, law enforcement agencies are facing resource shortages. The effective and efficient use of scarce resources is therefore of the utmost importance to provide a high standard public safety service. Optimization models specifically tailored to the necessity of police agencies can help to ameliorate their use. The Multicriteria Police Districting Problem (MC-PDP on a graph concerns the definition of sound patrolling sectors in a police district. The objective of this problem is to partition a graph into convex and continuous subsets, while ensuring efficiency and workload balance among the subsets. The model was originally formulated in collaboration with the Spanish National Police Corps. We propose for its solution three local search algorithms: a Simple Hill Climbing, a Steepest Descent Hill Climbing, and a Tabu Search. To improve their diversification capabilities, all the algorithms implement a multistart procedure, initialized by randomized greedy solutions. The algorithms are empirically tested on a case study on the Central District of Madrid. Our experiments show that the solutions identified by the novel Tabu Search outperform the other algorithms. Finally, research guidelines for future developments on the MC-PDP are given.
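
    The two simplest schemes compared above differ only in how a move is chosen: Simple Hill Climbing accepts the first improving neighbour, while Steepest Descent Hill Climbing scans the whole neighbourhood and takes the best improvement. The sketch below shows that difference on a placeholder one-dimensional objective; it is not the MC-PDP districting model, and the Tabu Search and multistart layers are omitted.

      # First-improvement versus best-improvement local search on a toy objective.
      import random

      def simple_hill_climbing(start, neighbours, cost, max_iters=1000):
          current = start
          for _ in range(max_iters):
              improved = False
              for cand in neighbours(current):
                  if cost(cand) < cost(current):   # accept the first improving move
                      current, improved = cand, True
                      break
              if not improved:
                  return current                   # local optimum reached
          return current

      def steepest_descent(start, neighbours, cost, max_iters=1000):
          current = start
          for _ in range(max_iters):
              best = min(neighbours(current), key=cost)
              if cost(best) >= cost(current):
                  return current
              current = best                       # accept the best improving move
          return current

      cost = lambda x: (x - 7) ** 2                # toy objective, minimum at 7
      neighbours = lambda x: [x - 1, x + 1]        # +/- 1 moves on the integers
      start = random.randint(-50, 50)
      print(simple_hill_climbing(start, neighbours, cost),
            steepest_descent(start, neighbours, cost))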

  14. Development of library preparation method able to correct gene expression levels in rice anther and isolate a trace expression gene mediated in cold-resistance

    International Nuclear Information System (INIS)

    Yamaguchi, Tomoya; Koike, Setsuo

    2000-01-01

    When a cDNA library is prepared by previously developed methods, genes whose expression level is high are apt to be cloned at high frequency, whereas genes whose expression level is low are difficult to clone. A low-expression gene has been cloned at very low frequency. Therefore, the gene encoding the key enzyme that is involved in growth disturbance of rice pollen has not been identified. In this study, development of a library preparation method able to correct the expression level was attempted using a highly sensitive detection method with radioisotope, and some genes related to cold-resistance of rice were isolated. Double-stranded DNAs were synthesized using mRNA extracted from rice anthers and annealed following heat-denaturation. It is known that single-stranded DNA molecules that exist abundantly in a DNA solution can easily aggregate to form double-stranded DNA, but single-stranded DNA molecules that are scarce in the solution tend to remain single-stranded after annealing. Thus, the amounts of single-stranded DNA from abundant and rare DNA species become balanced in the solution. The authors succeeded in preparing a gene library including low- and high-expression genes at similar proportions. Moreover, a spin trap method that allows RI labeling of DNA bound to latex particles was developed for highly sensitive detection, especially of genes that are expressed at low levels. The present method could be used for recovery, detection and quantitative analysis of radiolabeled single-stranded DNA. Thus, it was demonstrated that the stage from tetrad sperm to small sperm might be easily affected by cold stress. The present results suggest that the expression of β-1 and β-3 glucanase, which are involved in the release of small sperms following meiosis in pollen formation, might be easily affected by cold stress. (M.N.)

  15. Development of library preparation method able to correct gene expression levels in rice anther and isolate a trace expression gene mediated in cold-resistance

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, Tomoya; Koike, Setsuo [Tohoku National Agricultural Experiment Station, Morioka (Japan)]

    2000-02-01

    When a cDNA library is prepared by previously developed methods, genes whose expression level is high are apt to be cloned at high frequency, whereas genes whose expression level is low are difficult to clone. A low-expression gene has been cloned at very low frequency. Therefore, the gene encoding the key enzyme that is involved in growth disturbance of rice pollen has not been identified. In this study, development of a library preparation method able to correct the expression level was attempted using a highly sensitive detection method with radioisotope, and some genes related to cold-resistance of rice were isolated. Double-stranded DNAs were synthesized using mRNA extracted from rice anthers and annealed following heat-denaturation. It is known that single-stranded DNA molecules that exist abundantly in a DNA solution can easily aggregate to form double-stranded DNA, but single-stranded DNA molecules that are scarce in the solution tend to remain single-stranded after annealing. Thus, the amounts of single-stranded DNA from abundant and rare DNA species become balanced in the solution. The authors succeeded in preparing a gene library including low- and high-expression genes at similar proportions. Moreover, a spin trap method that allows RI labeling of DNA bound to latex particles was developed for highly sensitive detection, especially of genes that are expressed at low levels. The present method could be used for recovery, detection and quantitative analysis of radiolabeled single-stranded DNA. Thus, it was demonstrated that the stage from tetrad sperm to small sperm might be easily affected by cold stress. The present results suggest that the expression of β-1 and β-3 glucanase, which are involved in the release of small sperms following meiosis in pollen formation, might be easily affected by cold stress. (M.N.)

  16. The National Cryptologic Museum Library

    Science.gov (United States)

    2010-09-01

    telegrams. Modern communications and encryption methods have made them obsolete and mainly of historical interest. The library is also home to a...interpretations. Cross References The National Cryptologic Museum Library Eugene Becker Last year, a widely published German technical author, Klaus...Schmeh, e-mailed the library of the National Cryptologic Museum from his home in Gelsenkirchen, Germany. He needed information for an article on the

  17. MyLibrary: A Web Personalized Digital Library.

    Science.gov (United States)

    Rocha, Catarina; Xexeo, Geraldo; da Rocha, Ana Regina C.

    With the increasing availability of information on Internet information providers, like search engines, digital libraries and online databases, it becomes more important to have personalized systems that help users to find relevant information. One type of personalization that is growing in use is recommender systems. This paper presents…

  18. The library

    International Nuclear Information System (INIS)

    1980-01-01

    A specialized library is essential for conducting the research work of the Uranium Institute. The need was recognized at the foundation of the Institute and a full-time librarian was employed in 1976 to establish the necessary systems and begin the task of building up the collection. A brief description is given of the services offered by the library which now contains books, periodicals, pamphlets and press cuttings, focussed on uranium and nuclear energy, but embracing economics, politics, trade, legislation, geology, mining and mineral processing, environmental protection and nuclear technology. (author)

  19. The Safari E-Book Route through the ICT Jungle: Experiences at Hillingdon Libraries

    Science.gov (United States)

    Fernandes, Derrick

    2007-01-01

    Purpose: The paper seeks to describe the provision of access to the Safari Tech Books collection of e-books at Hillingdon Libraries. Design/methodology/approach: Details are given of how the e-books collection is part of a broader range of e-information services provided by Hillingdon. Methods of searching for e-books are described, with…

  20. Optimal generation and reserve dispatch in a multi-area competitive market using a hybrid direct search method

    International Nuclear Information System (INIS)

    Chen, C.-L.

    2005-01-01

    With restructuring of the power industry, competitive bidding for energy and ancillary services are increasingly recognized as an important part of electricity markets. It is desirable to optimize not only the generator's bid prices for energy and for providing minimized ancillary services but also the transmission congestion costs. In this paper, a hybrid approach of combining sequential dispatch with a direct search method is developed to deal with the multi-product and multi-area electricity market dispatch problem. The hybrid direct search method (HDSM) incorporates sequential dispatch into the direct search method to facilitate economic sharing of generation and reserve across areas and to minimize the total market cost in a multi-area competitive electricity market. The effects of tie line congestion and area spinning reserve requirement are also consistently reflected in the marginal price in each area. Numerical experiments are included to understand the various constraints in the market cost analysis and to provide valuable information for market participants in a pool oriented electricity market

  1. Optimal generation and reserve dispatch in a multi-area competitive market using a hybrid direct search method

    International Nuclear Information System (INIS)

    Chun Lung Chen

    2005-01-01

    With restructuring of the power industry, competitive bidding for energy and ancillary services are increasingly recognized as an important part of electricity markets. It is desirable to optimize not only the generator's bid prices for energy and for providing minimized ancillary services but also the transmission congestion costs. In this paper, a hybrid approach of combining sequential dispatch with a direct search method is developed to deal with the multi-product and multi-area electricity market dispatch problem. The hybrid direct search method (HDSM) incorporates sequential dispatch into the direct search method to facilitate economic sharing of generation and reserve across areas and to minimize the total market cost in a multi-area competitive electricity market. The effects of tie line congestion and area spinning reserve requirement are also consistently reflected in the marginal price in each area. Numerical experiments are included to understand the various constraints in the market cost analysis and to provide valuable information for market participants in a pool oriented electricity market. (author)
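
    The "direct search" component named above refers to a derivative-free pattern search; a generic compass search illustrates the idea. The sketch below minimizes a placeholder two-variable cost and is not the HDSM dispatch code or a model of the multi-area market.

      # Generic compass (pattern) search: probe +/- each coordinate, shrink the step on failure.
      import numpy as np

      def compass_search(cost, x0, step=1.0, tol=1e-6, max_iters=10000):
          x = np.asarray(x0, dtype=float)
          directions = np.vstack([np.eye(x.size), -np.eye(x.size)])
          for _ in range(max_iters):
              improved = False
              for d in directions:
                  trial = x + step * d
                  if cost(trial) < cost(x):
                      x, improved = trial, True
                      break
              if not improved:
                  step *= 0.5                      # no improving direction: shrink the pattern
                  if step < tol:
                      break
          return x

      cost = lambda v: (v[0] - 3.0) ** 2 + 2.0 * (v[1] + 1.0) ** 2   # toy cost, minimum at (3, -1)
      print(compass_search(cost, x0=[10.0, 10.0]))                   # approximately [3, -1]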

  2. Libraries on the MOVE.

    Science.gov (United States)

    Edgar, Jim; And Others

    1986-01-01

    Presents papers from Illinois State Library and Shawnee Library System's "Libraries on the MOVE" conference focusing on how libraries can impact economic/cultural climate of an area. Topics addressed included information services of rural libraries; marketing; rural library development; library law; information access; interagency…

  3. Personal Virtual Libraries

    Science.gov (United States)

    Pappas, Marjorie L.

    2004-01-01

    Virtual libraries are becoming more and more common. Most states have a virtual library. A growing number of public libraries have a virtual presence on the Web. Virtual libraries are a growing addition to school library media collections. The next logical step would be personal virtual libraries. A personal virtual library (PVL) is a collection…

  4. America's Star Libraries

    Science.gov (United States)

    Lyons, Ray; Lance, Keith Curry

    2009-01-01

    "Library Journal"'s new national rating of public libraries, the "LJ" Index of Public Library Service, identifies 256 "star" libraries. It rates 7,115 public libraries. The top libraries in each group get five, four, or three Michelin guide-like stars. All included libraries, stars or not, can use their scores to learn from their peers and improve…

  5. Exposing Library Services with AngularJS

    OpenAIRE

    Jakob Voß; Moritz Horn

    2014-01-01

    This article provides an introduction to the JavaScript framework AngularJS and specific AngularJS modules for accessing library services. It shows how information such as search suggestions, additional links, and availability can be embedded in any website. The ease of reuse may encourage more libraries to expose their services via standard APIs to allow usage in different contexts.

  6. Helping Students Use Virtual Libraries Effectively.

    Science.gov (United States)

    Fitzgerald, Mary Ann; Galloway, Chad

    2001-01-01

    Describes a study in which online behavior of high school and undergraduate students using GALILEO (Georgia Library Learning Online), a virtual library, were observed. Topics include cognitive demands; technology literacy; domain knowledge; search strategies; relevance; evaluation of information; information literacy standards; and suggestions to…

  7. Can Library Use Enhance Intercultural Education?

    Science.gov (United States)

    Pihl, Joron

    2012-01-01

    This article explores the questions to what extent educational research addresses library use in education and how the library can contribute to intercultural education. The focus is primarily on elementary education in Europe. Analysis of research publications was based on searches for peer-reviewed journals in international databases, literary…

  8. Library of Alexandria's New Web Site

    Directory of Open Access Journals (Sweden)

    2004-06-01

    Full Text Available A review of the new version of the Library of Alexandria web site, which launched in May 2004. The review gives a general introduction to the new version, describes the site's six main sections, shows some features of the new site, and finally focuses on the library's online catalog and its search capabilities.

  9. Google Scholar Usage: An Academic Library's Experience

    Science.gov (United States)

    Wang, Ya; Howard, Pamela

    2012-01-01

    Google Scholar is a free service that provides a simple way to broadly search for scholarly works and to connect patrons with the resources libraries provide. The researchers in this study analyzed Google Scholar usage data from 2006 for three library tools at San Francisco State University: SFX link resolver, Web Access Management proxy server,…

  10. Assessing the search for information on three Rs methods, and their subsequent implementation: a national survey among scientists in the Netherlands

    NARCIS (Netherlands)

    Luijk, J. van; Cuijpers, Y.M.; Vaart, L. van der; Leenaars, M; Ritskes-Hoitinga, M.

    2011-01-01

    A local survey conducted among scientists into the current practice of searching for information on Three Rs (i.e. Replacement, Reduction and Refinement) methods has highlighted the gap between the statutory requirement to apply Three Rs methods and the lack of criteria to search for them. To verify

  11. Assessing the Search for Information on Three Rs Methods, and their Subsequent Implementation: A National Survey among Scientists in The Netherlands.

    NARCIS (Netherlands)

    Luijk, J. van; Cuijpers, Y.M.; Vaart, L. van der; Leenaars, M.; Ritskes-Hoitinga, M.

    2011-01-01

    A local survey conducted among scientists into the current practice of searching for information on Three Rs (i.e. Replacement, Reduction and Refinement) methods has highlighted the gap between the statutory requirement to apply Three Rs methods and the lack of criteria to search for them. To verify

  12. Optimizing use of library technology.

    Science.gov (United States)

    Wink, Diane M; Killingsworth, Elizabeth K

    2011-01-01

    In this bimonthly series, the author examines how nurse educators can use the Internet and Web-based computer technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes optimizing the use of library technology.

  13. Charging Users for Library Service.

    Science.gov (United States)

    Cooper, Michael D.

    1978-01-01

    Examines the question of instituting direct charges for library service, using on-line bibliographic searching as an example, and contrasts this with the current indirect charging system where services are paid for by taxes. Information, as a merit good, should be supplied with or without direct charges, depending upon user status. (CWM)

  14. Library Networks and Electronic Publishing.

    Science.gov (United States)

    Olvey, Lee D.

    1995-01-01

    Provides a description of present and proposed plans and strategies of OCLC (Online Computer Library Center) and their relationship to electronic publishing. FirstSearch (end-user access to secondary information), GUIDON (electronic journals online) and FastDoc (document delivery) are emphasized. (JKP)

  15. Improvement Of Search Process In Electronic Catalogues

    Directory of Open Access Journals (Sweden)

    Titas Savickas

    2014-05-01

    Full Text Available The paper presents an investigation of search in electronic catalogues. The chosen problem domain is the search system of the electronic catalogue of Lithuanian Academic Libraries. The catalogue uses the ALEPH system with the MARC21 bibliographic format. The article presents an analysis of problems pertaining to the current search engine and of user expectations related to the search system of the electronic catalogue of academic libraries. Following this analysis, the paper presents an architecture for a semantic search system in the electronic catalogue that uses a search process designed to improve search results for users.

  16. An Investigation of the ‘Creative Consultation’ Process and Methods to Capture and Transfer Good Practice in Public Libraries

    OpenAIRE

    Sung, Hui-Yun; Ragsdell, Gillian; Hepworth, Mark

    2008-01-01

    This paper is based on early reflections from Sung’s Master’s dissertation and current Ph.D. research regarding the present ‘creative consultation’ practices in public libraries. Since the Ph.D. is in its early stages, this paper is an opportunity to offer a review of the main literature related to consultation practices and theories. An awareness of the importance of effective consultation is increasing. This paper aims to discuss the key features of community consultation in public libr...

  17. PROPOSAL OF METHOD FOR AN AUTOMATIC COMPLEMENTARITIES SEARCH BETWEEN COMPANIES' R&D

    OpenAIRE

    PAULO VINÍCIUS MARCONDES CORDEIRO; DARIO EDUARDO AMARAL DERGINT; KAZUO HATAKEYAMA

    2014-01-01

    The open innovation model is the best choice for firms that cannot afford R&D costs but intend to continue playing the innovation game. This model offers any firm the possibility of having companies spread worldwide, across all research fields, as partners in R&D. However, the possible partnerships can be restricted by the manager's know-who. Patent documents can be a source of rich information about technical development and innovation from a huge number of firms. Search through all these da...

  18. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  19. Preliminary exploration and thought of promoting library science Indigenization

    International Nuclear Information System (INIS)

    Liu Wenping; Du Jingling

    2014-01-01

    The article explains the significance of Library Science Indigenization, answers some misunderstandings about it, reveals its forms of expression, discusses its criteria, and finally gives some suggestions and methods for promoting it. (authors)

  20. Strategic marketing planning in library

    Directory of Open Access Journals (Sweden)

    Karmen Štular-Sotošek

    2000-01-01

    Full Text Available The article is based on the idea that every library can design instruments for creating events and managing the important resources of today's world, especially for managing change. This process can only be successful if libraries use adequate marketing methods. Strategic marketing planning starts with an analysis of the library's mission, its objectives, goals and corporate culture. By analysing the public environment, the competitive environment and the macro environment, libraries recognise their opportunities and threats. These analyses are the foundations for the library's definitions: What does the library represent? What does it aspire to? Which goals does it want to reach? What kind of marketing strategy will it use for its target market?

  1. Normalized cDNA libraries

    Science.gov (United States)

    Soares, Marcelo B.; Efstratiadis, Argiris

    1997-01-01

    This invention provides a method to normalize a directional cDNA library constructed in a vector that allows propagation in single-stranded circle form comprising: (a) propagating the directional cDNA library in single-stranded circles; (b) generating fragments complementary to the 3' noncoding sequence of the single-stranded circles in the library to produce partial duplexes; (c) purifying the partial duplexes; (d) melting and reassociating the purified partial duplexes to moderate Cot; and (e) purifying the unassociated single-stranded circles, thereby generating a normalized cDNA library.

  2. Multivariate methods and the search for single top-quark production in association with a $W$ boson in ATLAS

    CERN Document Server

    Kovesarki, Peter; Dingfelder, Jochen

    This thesis describes three machine learning algorithms that can be used for physics analyses. The first is a density estimator that was derived from the Green’s function identity of the Laplace operator and is capable of tagging data samples according to the signal purity. The latter task can also be performed with regression methods, and such an algorithm was implemented based on fast multi-dimensional polynomial regression. The accuracy was improved with a decision tree using smooth boundaries. Both methods apply rigorous checks against overtraining to make sure the results are drawn from statistically significant features. These two methods were applied in the search for single top-quark production with a $W$ boson. Their separation powers differ greatly in favour of the regression method, mainly because it can exploit the extra information available during training. The third method is an unsupervised learning algorithm that offers a way to find an optimal coordinate system for a sample in the sense of m...

  3. Search for a transport method for the calculation of the PWR control and safety clusters

    International Nuclear Information System (INIS)

    Bruna, G.B.; Van Frank, C.; Vergain, M.L.; Chauvin, J.P.; Palmiotti, G.; Nobile, M.

    1990-01-01

    The project studies of power reactors rely mainly on diffusion calculations, but transport calculations are often needed to assess fine effects that are intimately linked to geometry and spectrum heterogeneities. Accurate transport computations are necessary, in particular, for shielded cross section generation, and when homogenization and dishomogenization processes are involved. Transport codes generally offer the user a variety of computational options, related to different approximation levels. In every case, it is obviously desirable to be able to choose the degree of approximation that can reliably be accepted in any particular computational circumstance of the project. The search for such adapted procedures is to be made on the basis of critical experiments. In our studies, this task was made possible by the availability of suitable results from the CAMELEON critical experiment, carried out in the EOLE facility at the CEA Center of Cadarache. In this paper, we summarize some of the work in progress at FRAMATOME on the definition of an assembly-based transport calculation scheme to be used for PWR control and safety cluster computations. Two main items, devoted to the search for the optimum computational procedures, are presented here: a parametric study of computational options, made in an infinite-medium assembly geometry, and a series of comparisons between calculated and experimental values of the pin power distribution

  4. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic global optimization search algorithms which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of a single such algorithm may not always be effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms combine different algorithms, or identical algorithms with different values of their free parameters, so that the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, a software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO HS, present the results of computing experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
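
    The hybridization idea can be illustrated with a toy Python sketch in which a harmony-search style memory step injects new candidates into an otherwise standard particle swarm. The test function, parameter values and the way the two algorithms are coupled are assumptions made for illustration only; the sketch does not reproduce the PSO HS algorithm developed in the paper.

    import random

    def sphere(x):
        """Simple benchmark objective: sum of squares, minimum at the origin."""
        return sum(v * v for v in x)

    def hybrid_pso_hs(dim=5, n_particles=20, iters=200,
                      w=0.7, c1=1.5, c2=1.5, hmcr=0.9, par=0.3, bw=0.1):
        lo, hi = -5.0, 5.0
        X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        V = [[0.0] * dim for _ in range(n_particles)]
        P = [x[:] for x in X]                      # personal bests
        pf = [sphere(x) for x in X]
        g = min(range(n_particles), key=lambda i: pf[i])
        gbest, gf = P[g][:], pf[g]

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):               # standard PSO velocity/position update
                    V[i][d] = (w * V[i][d]
                               + c1 * random.random() * (P[i][d] - X[i][d])
                               + c2 * random.random() * (gbest[d] - X[i][d]))
                    X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                f = sphere(X[i])
                if f < pf[i]:
                    P[i], pf[i] = X[i][:], f
                    if f < gf:
                        gbest, gf = X[i][:], f

            # Harmony-search style step: improvise a new vector from the
            # personal-best "memory" and let it replace the worst particle.
            new = []
            for d in range(dim):
                if random.random() < hmcr:          # memory consideration
                    v = random.choice(P)[d]
                    if random.random() < par:       # pitch adjustment
                        v += random.uniform(-bw, bw)
                else:                               # random consideration
                    v = random.uniform(lo, hi)
                new.append(min(hi, max(lo, v)))
            fnew = sphere(new)
            worst = max(range(n_particles), key=lambda i: pf[i])
            if fnew < pf[worst]:
                X[worst], P[worst], pf[worst] = new[:], new[:], fnew
                if fnew < gf:
                    gbest, gf = new[:], fnew

        return gbest, gf

    if __name__ == "__main__":
        best, fbest = hybrid_pso_hs()
        print("best value found:", fbest)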

  5. Critical Features of Fragment Libraries for Protein Structure Prediction.

    Science.gov (United States)

    Trevizani, Raphael; Custódio, Fábio Lima; Dos Santos, Karina Baptista; Dardenne, Laurent Emmanuel

    2017-01-01

    The use of fragment libraries is a popular approach among protein structure prediction methods and has proven to substantially improve the quality of predicted structures. However, some vital aspects of a fragment library that influence the accuracy of modeling a native structure remain to be determined. This study investigates some of these features. In particular, we analyze the effect of using secondary structure prediction to guide fragment selection, of different fragment sizes, and of structural clustering of fragments within libraries. To have a clearer view of how these factors affect protein structure prediction, we isolated the process of model building by fragment assembly from some common limitations associated with prediction methods, e.g., imprecise energy functions and optimization algorithms, by employing an exact structure-based objective function under a greedy algorithm. Our results indicate that shorter fragments reproduce the native structure more accurately than longer ones. Libraries composed of multiple fragment lengths generate even better structures, where longer fragments prove to be more useful at the beginning of the simulations. The use of many different fragment sizes shows little improvement when compared to predictions carried out with libraries that comprise only three different fragment sizes. Models obtained from libraries built using only sequence similarity are, on average, better than those built with a secondary structure prediction bias. However, we found that the use of secondary structure prediction allows a greater reduction of the search space, which is invaluable for prediction methods. The results of this study can serve as critical guidelines for the use of fragment libraries in protein structure prediction.
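
    The benchmark idea of isolating fragment assembly by scoring against an exact, structure-based objective under a greedy algorithm can be caricatured in Python on a one-dimensional "angle string" model, as below. The library construction, fragment length and scoring are invented for illustration and are far simpler than real fragment assembly on backbone torsions and 3-D coordinates.

    import random

    def make_library(native, frag_len, n_frags, noise=30.0):
        """One candidate list per insertion window; the native fragment is
        hidden among noisy decoys, mimicking an imperfect fragment library."""
        lib = []
        for start in range(len(native) - frag_len + 1):
            true_frag = native[start:start + frag_len]
            decoys = [[a + random.uniform(-noise, noise) for a in true_frag]
                      for _ in range(n_frags - 1)]
            lib.append(decoys + [true_frag])
        return lib

    def objective(model, native):
        """Exact structure-based score: mean absolute angle error vs. the native."""
        return sum(abs(m - n) for m, n in zip(model, native)) / len(native)

    def greedy_assembly(native, lib, frag_len):
        model = [0.0] * len(native)                 # extended-chain start
        for start, candidates in enumerate(lib):    # sweep the insertion windows once
            best_frag = min(
                candidates,
                key=lambda f: objective(
                    model[:start] + f + model[start + frag_len:], native))
            model[start:start + frag_len] = best_frag
        return model

    if __name__ == "__main__":
        random.seed(1)
        native = [random.uniform(-180.0, 180.0) for _ in range(30)]
        frag_len = 3                                # shorter fragments, finer moves
        lib = make_library(native, frag_len, n_frags=25)
        model = greedy_assembly(native, lib, frag_len)
        print("final error (degrees):", round(objective(model, native), 2))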

  6. Dai-Kou type conjugate gradient methods with a line search only using gradient.

    Science.gov (United States)

    Huang, Yuanyuan; Liu, Changhe

    2017-01-01

    In this paper, Dai-Kou type conjugate gradient methods are developed to solve the optimality condition of an unconstrained optimization problem. They utilize only gradient information and therefore have a broader application scope. Under suitable conditions, the developed methods are globally convergent. Numerical tests and comparisons with the PRP+ conjugate gradient method, which also uses only the gradient, show that the methods are efficient.
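
    A generic nonlinear conjugate gradient iteration whose line search evaluates only gradients (it tests the directional derivative g(x + a*d)^T d rather than function values) can be sketched as follows. The Hestenes-Stiefel beta is used as a stand-in; the specific Dai-Kou update and line-search conditions of the paper are not reproduced here.

    import numpy as np

    def cg_gradient_only(grad, x0, sigma=0.1, tol=1e-6, max_iter=500):
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # Line search on phi'(a) = grad(x + a*d)^T d, using gradients only:
            # bracket a sign change, then bisect until |phi'(a)| <= sigma*|phi'(0)|.
            dphi0 = g @ d
            a_lo, a_hi, a = 0.0, 1.0, 1.0
            while grad(x + a_hi * d) @ d < 0:
                a_hi *= 2.0
            for _ in range(30):
                a = 0.5 * (a_lo + a_hi)
                dphi = grad(x + a * d) @ d
                if abs(dphi) <= sigma * abs(dphi0):
                    break
                if dphi < 0:
                    a_lo = a
                else:
                    a_hi = a
            x_new = x + a * d
            g_new = grad(x_new)
            y = g_new - g
            beta = (g_new @ y) / (d @ y + 1e-16)    # Hestenes-Stiefel beta
            d = -g_new + max(beta, 0.0) * d         # non-negative beta (restart)
            if g_new @ d >= 0:                      # safeguard: keep a descent direction
                d = -g_new
            x, g = x_new, g_new
        return x

    if __name__ == "__main__":
        # Gradient of the convex quadratic f(x) = 0.5 x^T A x - b^T x.
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, 1.0])
        sol = cg_gradient_only(lambda x: A @ x - b, x0=[0.0, 0.0])
        print("solution:", sol, "exact:", np.linalg.solve(A, b))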

  7. Library security better communication, safer facilities

    CERN Document Server

    Albrecht, Steve

    2015-01-01

    Through the methods outlined in this book, Albrecht demonstrates that effective communication not only makes library users feel more comfortable but also increases staff morale, ensuring the library is place where everyone feels welcome.

  8. Selecting and Using Information Sources: Source Preferences and Information Pathways of Israeli Library and Information Science Students

    Science.gov (United States)

    Bronstein, Jenny

    2010-01-01

    Introduction: The study investigated the source preference criteria of library and information science students for their academic and personal information needs. Method: The empirical study was based on two methods of data collection. Eighteen participants wrote a personal diary for four months in which they recorded search episodes and answered…

  9. Performance for the hybrid method using stochastic and deterministic searching for shape optimization of electromagnetic devices

    International Nuclear Information System (INIS)

    Yokose, Yoshio; Noguchi, So; Yamashita, Hideo

    2002-01-01

    Stochastic methods and deterministic methods are both used for the optimization of electromagnetic devices. Genetic Algorithms (GAs) are used as a stochastic method for multivariable designs, while the deterministic method is a gradient method that uses the sensitivity of the objective function. Each of these two techniques has benefits and faults. In this paper, the characteristics of the two techniques are described, and the technique in which the two methods are used together is evaluated. The results of the comparison, obtained by applying each method to electromagnetic devices, are then described. (Author)
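
    The "use both" idea can be sketched in Python: a small genetic algorithm first explores a multimodal test function globally, and a gradient (sensitivity-based) descent then refines the best individual. The test function and all parameters are illustrative assumptions, not the electromagnetic shape-optimization problem of the paper.

    import math
    import random

    def f(x):
        """Multimodal 1-D test objective with several local minima."""
        return x * x + 10.0 * math.sin(3.0 * x)

    def df(x):
        """Analytic derivative of f, playing the role of the sensitivity."""
        return 2.0 * x + 30.0 * math.cos(3.0 * x)

    def ga_stage(pop_size=30, gens=40, lo=-6.0, hi=6.0):
        """Global exploration with a tiny real-coded GA."""
        pop = [random.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=f)
            parents = pop[:pop_size // 2]                 # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = 0.5 * (a + b)                     # arithmetic crossover
                child += random.gauss(0.0, 0.3)           # Gaussian mutation
                children.append(min(hi, max(lo, child)))
            pop = parents + children
        return min(pop, key=f)

    def gradient_stage(x, step=0.01, iters=200):
        """Local refinement with plain gradient descent."""
        for _ in range(iters):
            x -= step * df(x)
        return x

    if __name__ == "__main__":
        x_ga = ga_stage()
        x_opt = gradient_stage(x_ga)
        print(f"GA estimate {x_ga:.4f} -> refined {x_opt:.4f}, f = {f(x_opt):.4f}")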

  10. MARKETING LIBRARY SERVICES IN ACADEMIC LIBRARIES: A ...

    African Journals Online (AJOL)

    MARKETING LIBRARY SERVICES IN ACADEMIC LIBRARIES: A TOOL FOR SURVIVAL IN THE ... This article discusses the concept of marketing library and information services as an ...

  11. Science photo library

    CERN Document Server

    1999-01-01

    SPL [Science Photo Library] holds a wide range of pictures on all aspects of science, medicine and technology. The pictures come with detailed captions and are available as high quality transparencies in medium or 35mm format. Digital files can be made available on request. Our website provides low resolution files of the pictures in this catalogue, which you can search and download for layout presentation use once you have registered. High resolution files or reproduction are available on request and can be delivered to you by disk or ISDN. Visit the online catalog: www.sciencephoto.com

  12. Comparisons of peak-search and photopeak-integration methods in the computer analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Baedecker, P.A.

    1980-01-01

    Myriad methods have been devised for extracting quantitative information from gamma-ray spectra by means of a computer, and a critical evaluation of the relative merits of the various programs that have been written would represent a Herculean, if not an impossible, task. The results from the International Atomic Energy Agency (IAEA) intercomparison, which may represent the most straightforward approach to making such an evaluation, showed a wide range in the quality of the results - even among laboratories where similar methods were used. The most clear-cut way of differentiating between programs is by the method used to evaluate peak areas: by the iterative fitting of the spectral features to an often complex model, or by a simple summation procedure. Previous comparisons have shown that relatively simple algorithms can compete favorably with fitting procedures, although fitting holds the greatest promise for the detection and measurement of complex peaks. However, fitting algorithms, which are generally complex and time consuming, are often ruled out by practical limitations based on the type of computing equipment available, cost limitations, the number of spectra to be processed in a given time period, and the ultimate goal of the analysis. Comparisons of methods can be useful, however, in helping to illustrate the limitations of the various algorithms that have been devised. This paper presents a limited review of some of the more common peak-search and peak-integration methods, along with peak-search procedures
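
    The simple-summation end of this spectrum of algorithms can be sketched in Python: a smoothed second-difference test flags peak channels, and net areas are then obtained by summing a peak region and subtracting a linear baseline. The thresholds, window widths and the simulated spectrum are assumptions chosen for illustration; the sketch does not correspond to any specific published program.

    import numpy as np

    def find_peaks(counts, window=3, significance=3.0):
        """Flag local maxima where the channel exceeds the average of its
        neighbourhood by `significance` times the statistical uncertainty."""
        peaks = []
        for i in range(window, len(counts) - window):
            left = counts[i - window:i].mean()
            right = counts[i + 1:i + window + 1].mean()
            excess = counts[i] - 0.5 * (left + right)      # ~ negative second difference
            sigma = np.sqrt(counts[i] + 0.25 * (left + right) / window + 1e-9)
            if (excess > significance * sigma
                    and counts[i] >= counts[i - 1] and counts[i] >= counts[i + 1]):
                peaks.append(i)
        return peaks

    def net_area(counts, center, half_width=4, bg_points=3):
        """Sum a peak region and subtract a straight-line background estimated
        from a few channels on either side of it."""
        lo, hi = center - half_width, center + half_width
        bg_left = counts[lo - bg_points:lo].mean()
        bg_right = counts[hi + 1:hi + 1 + bg_points].mean()
        gross = counts[lo:hi + 1].sum()
        background = 0.5 * (bg_left + bg_right) * (hi - lo + 1)
        return gross - background

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        channels = np.arange(512)
        truth = 50.0 + 400.0 * np.exp(-0.5 * ((channels - 200) / 3.0) ** 2)
        counts = rng.poisson(truth).astype(float)
        for p in find_peaks(counts):
            print(f"peak near channel {p}, net area ~ {net_area(counts, p):.0f}")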

  13. Comparison of Decisions Quality of Heuristic Methods with Limited Depth-First Search Techniques in the Graph Shortest Path Problem

    Directory of Open Access Journals (Sweden)

    Vatutin Eduard

    2017-12-01

    Full Text Available The article deals with the analysis of the effectiveness of heuristic methods with limited depth-first search techniques for obtaining decisions in the test problem of finding the shortest path in a graph. The article briefly describes the group of methods, used to solve the problem, that are based on limiting the number of branches of the combinatorial search tree and limiting the depth of the analyzed subtree. The methodology for comparing experimental data to estimate the quality of solutions, based on computational experiments with samples of graphs with pseudo-random structure and selected numbers of vertices and arcs using the BOINC platform, is considered. The article also describes the experimental results obtained, which make it possible to identify the areas of preferable usage of the selected subset of heuristic methods depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective in the selected problem and significantly inferior, in solution quality, to the ant colony optimization method and its modification with combinatorial returns.
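
    The restricted search discussed above can be sketched in Python as a depth-first search that expands only the cheapest few outgoing edges at each vertex and never descends past a fixed depth, compared against an exact Dijkstra solution. The random test graph, limits and parameter names are illustrative assumptions, not the experimental setup of the paper.

    import heapq
    import random

    def limited_dfs(graph, src, dst, branch_limit, depth_limit):
        """Depth- and branch-limited DFS; may return inf when the limits are
        too tight, which is exactly the quality trade-off being studied."""
        best = [float("inf")]

        def dfs(v, cost, depth, visited):
            if cost >= best[0]:
                return                      # prune: already worse than best found
            if v == dst:
                best[0] = cost
                return
            if depth == depth_limit:
                return
            cheapest = sorted(graph[v].items(), key=lambda kv: kv[1])[:branch_limit]
            for u, w in cheapest:
                if u not in visited:
                    dfs(u, cost + w, depth + 1, visited | {u})

        dfs(src, 0.0, 0, {src})
        return best[0]

    def dijkstra(graph, src, dst):
        """Exact reference solution."""
        dist, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, v = heapq.heappop(heap)
            if v == dst:
                return d
            if d > dist.get(v, float("inf")):
                continue
            for u, w in graph[v].items():
                nd = d + w
                if nd < dist.get(u, float("inf")):
                    dist[u] = nd
                    heapq.heappush(heap, (nd, u))
        return float("inf")

    if __name__ == "__main__":
        random.seed(3)
        n = 40
        graph = {v: {} for v in range(n)}
        for v in range(n):                 # pseudo-random weighted digraph
            for u in random.sample(range(n), 6):
                if u != v:
                    graph[v][u] = random.uniform(1.0, 10.0)
        exact = dijkstra(graph, 0, n - 1)
        approx = limited_dfs(graph, 0, n - 1, branch_limit=3, depth_limit=10)
        print(f"exact {exact:.2f} vs limited DFS {approx:.2f}")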

  14. Comparison of Decisions Quality of Heuristic Methods with Limited Depth-First Search Techniques in the Graph Shortest Path Problem

    Science.gov (United States)

    Vatutin, Eduard

    2017-12-01

    The article deals with the analysis of the effectiveness of heuristic methods with limited depth-first search techniques for obtaining decisions in the test problem of finding the shortest path in a graph. The article briefly describes the group of methods, used to solve the problem, that are based on limiting the number of branches of the combinatorial search tree and limiting the depth of the analyzed subtree. The methodology for comparing experimental data to estimate the quality of solutions, based on computational experiments with samples of graphs with pseudo-random structure and selected numbers of vertices and arcs using the BOINC platform, is considered. The article also describes the experimental results obtained, which make it possible to identify the areas of preferable usage of the selected subset of heuristic methods depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective in the selected problem and significantly inferior, in solution quality, to the ant colony optimization method and its modification with combinatorial returns.

  15. DISCOVERY OF NINE GAMMA-RAY PULSARS IN FERMI LARGE AREA TELESCOPE DATA USING A NEW BLIND SEARCH METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H. [Albert-Einstein-Institut, Max-Planck-Institut fuer Gravitationsphysik, D-30167 Hannover (Germany); Guillemot, L.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.; Freire, P. C. C. [Max-Planck-Institut fuer Radioastronomie, Auf dem Huegel 69, D-53121 Bonn (Germany); Ray, P. S. [Space Science Division, Naval Research Laboratory, Washington, DC 20375-5352 (United States); Belfiore, A.; Dormody, M. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Camilo, F. [Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027 (United States); Caraveo, P. A. [INAF-Istituto di Astrofisica Spaziale e Fisica Cosmica, I-20133 Milano (Italy); Celik, Oe.; Ferrara, E. C. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Hessels, J. W. T. [Astronomical Institute ' Anton Pannekoek' , University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands); Keith, M. [CSIRO Astronomy and Space Science, Australia Telescope National Facility, Epping NSW 1710 (Australia); Kerr, M., E-mail: holger.pletsch@aei.mpg.de, E-mail: guillemo@mpifr-bonn.mpg.de [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); and others

    2012-01-10

    We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative, and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10^35 erg s^-1 and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| > 10°). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10^33 erg s^-1) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.

  16. Discovery of Nine Gamma-Ray Pulsars in Fermi-Lat Data Using a New Blind Search Method

    Science.gov (United States)

    Celik-Tinmaz, Ozlem; Ferrara, E. C.; Pletsch, H. J.; Allen, B.; Aulbert, C.; Fehrmann, H.; Kramer, M.; Barr, E. D.; Champion, D. J.; Eatough, R. P.; hide

    2011-01-01

    We report the discovery of nine previously unknown gamma-ray pulsars in a blind search of data from the Fermi Large Area Telescope (LAT). The pulsars were found with a novel hierarchical search method originally developed for detecting continuous gravitational waves from rapidly rotating neutron stars. Designed to find isolated pulsars spinning at up to kHz frequencies, the new method is computationally efficient, and incorporates several advances, including a metric-based gridding of the search parameter space (frequency, frequency derivative and sky location) and the use of photon probability weights. The nine pulsars have spin frequencies between 3 and 12 Hz, and characteristic ages ranging from 17 kyr to 3 Myr. Two of them, PSRs J1803-2149 and J2111+4606, are young and energetic Galactic-plane pulsars (spin-down power above 6 × 10^35 erg per second and ages below 100 kyr). The seven remaining pulsars, PSRs J0106+4855, J0622+3749, J1620-4927, J1746-3239, J2028+3332, J2030+4415, and J2139+4716, are older and less energetic; two of them are located at higher Galactic latitudes (|b| greater than 10 degrees). PSR J0106+4855 has the largest characteristic age (3 Myr) and the smallest surface magnetic field (2 × 10^11 G) of all LAT blind-search pulsars. PSR J2139+4716 has the lowest spin-down power (3 × 10^33 erg per second) among all non-recycled gamma-ray pulsars ever found. Despite extensive multi-frequency observations, only PSR J0106+4855 has detectable pulsations in the radio band. The other eight pulsars belong to the increasing population of radio-quiet gamma-ray pulsars.

  17. Hybrid Multistarting GA-Tabu Search Method for the Placement of BtB Converters for Korean Metropolitan Ring Grid

    Directory of Open Access Journals (Sweden)

    Remund J. Labios

    2016-01-01

    Full Text Available This paper presents a method to determine the optimal locations for installing back-to-back (BtB) converters in a power grid as a countermeasure to reduce fault current levels. The installation of BtB converters can be regarded as a network reconfiguration. For this purpose, a hybrid multistarting GA-tabu search method was used to determine the best locations from a preselected list of candidate locations. The constraints used in determining the best locations include circuit breaker fault current limits, proximity of proposed locations, and the capability of the solution to reach power flow convergence. A simple power injection model, applied after line-opening on selected branches, was used to compute power flows with the BtB converters. Kron reduction was also applied as a network reduction method for fast evaluation of fault currents with a given topology. Simulations of the search method were performed on the Korean power system, particularly the Seoul metropolitan area.
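
    A toy Python sketch of a multistarting GA with a tabu list is given below, here over subsets of candidate locations. The surrogate "fault level" objective, the way the tabu list is used (never re-generating an already evaluated placement) and all parameters are simplifying assumptions for illustration; the actual method evaluates network fault currents, breaker limits and power-flow convergence.

    import random

    CANDIDATES = list(range(12))   # hypothetical candidate branches for BtB converters
    K = 3                          # number of converters to place

    def fault_level(solution):
        """Made-up surrogate objective: lower is better, favouring well-spread sites."""
        s = sorted(solution)
        spread = min(b - a for a, b in zip(s, s[1:]))
        return 100.0 - 7.0 * spread + 0.5 * sum(s) / K

    def ga_tabu(starts=5, pop_size=20, gens=30, mut_rate=0.3):
        tabu, best, best_f = set(), None, float("inf")
        for _ in range(starts):                               # multistarting
            pop = [frozenset(random.sample(CANDIDATES, K)) for _ in range(pop_size)]
            for _ in range(gens):
                scored = sorted(pop, key=fault_level)
                if fault_level(scored[0]) < best_f:
                    best, best_f = scored[0], fault_level(scored[0])
                parents = scored[:pop_size // 2]
                children, attempts = [], 0
                while len(children) < pop_size and attempts < 10 * pop_size:
                    attempts += 1
                    a, b = random.sample(parents, 2)
                    child = set(random.sample(list(a | b), K))   # subset crossover
                    if random.random() < mut_rate:               # mutation: swap one site
                        child.discard(random.choice(list(child)))
                        child.add(random.choice(CANDIDATES))
                    child = frozenset(child)
                    if len(child) != K or child in tabu:         # tabu: skip repeats
                        continue
                    tabu.add(child)
                    children.append(child)
                while len(children) < pop_size:                  # top up if tabu blocks too much
                    children.append(frozenset(random.sample(CANDIDATES, K)))
                pop = children
        return best, best_f

    if __name__ == "__main__":
        random.seed(0)
        placement, level = ga_tabu()
        print("best placement:", sorted(placement), "surrogate fault level:", round(level, 2))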

  18. Search for transient ultralight dark matter signatures with networks of precision measurement devices using a Bayesian statistics method

    Science.gov (United States)

    Roberts, B. M.; Blewitt, G.; Dailey, C.; Derevianko, A.

    2018-04-01

    We analyze the prospects of employing a distributed global network of precision measurement devices as a dark matter and exotic physics observatory. In particular, we consider the atomic clocks of the global positioning system (GPS), consisting of a constellation of 32 medium-Earth orbit satellites equipped with either Cs or Rb microwave clocks and a number of Earth-based receiver stations, some of which employ highly stable H-maser atomic clocks. High-accuracy timing data is available for almost two decades. By analyzing the satellite and terrestrial atomic clock data, it is possible to search for transient signatures of exotic physics, such as "clumpy" dark matter and dark energy, effectively transforming the GPS constellation into a 50 000 km aperture sensor array. Here we characterize the noise of the GPS satellite atomic clocks, describe the search method based on Bayesian statistics, and test the method using simulated clock data. We present the projected discovery reach of our method, and demonstrate that it can surpass the existing constraints by several orders of magnitude for certain models. Our method is not limited in scope to GPS or atomic clock networks, and can also be applied to other networks of precision measurement devices.
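
    The kind of Bayesian statistic described here can be illustrated with a toy Python sketch that computes a log odds ratio for a transient of known shape in a simulated white-noise "clock bias" series, marginalizing over an unknown amplitude. The data model, template and prior grid are assumptions made for illustration, not the actual GPS-network analysis pipeline.

    import numpy as np

    def log_odds(data, template, sigma, amplitudes):
        """log of P(data | signal) / P(data | noise), with the unknown amplitude
        marginalized over a uniform prior grid (Gaussian white noise assumed)."""
        log_l0 = -0.5 * np.sum(data ** 2) / sigma ** 2
        log_l1 = np.array([-0.5 * np.sum((data - a * template) ** 2) / sigma ** 2
                           for a in amplitudes])
        m = log_l1.max()                               # log-mean-exp for stability
        log_evidence1 = m + np.log(np.mean(np.exp(log_l1 - m)))
        return log_evidence1 - log_l0

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        n, sigma = 500, 1.0
        template = np.zeros(n)
        template[240:260] = 1.0                        # transient of known shape
        amplitudes = np.linspace(0.0, 2.0, 41)         # prior grid on the amplitude

        noise_only = rng.normal(0.0, sigma, n)
        with_signal = noise_only + 0.8 * template
        print("log-odds, noise only :", round(log_odds(noise_only, template, sigma, amplitudes), 2))
        print("log-odds, with signal:", round(log_odds(with_signal, template, sigma, amplitudes), 2))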

  19. Evaluated nuclear data library

    International Nuclear Information System (INIS)

    Howerton, R.J.; Dye, R.E.; Perkins, S.T.

    1981-01-01

    The Lawrence Livermore National Laboratory (LLNL) collection of evaluated data for neutron-, photon-, and charged-particle-induced reactions is maintained in a computer-oriented system. In this report we recount the history of the Evaluated Nuclear Data Library, describe the methods of evaluation, and give examples of input and output representation of the data

  20. Earthquake effect on volcano and the geological structure in central java using tomography travel time method and relocation hypocenter by grid search method

    International Nuclear Information System (INIS)

    Suharsono; Nurdian, S. W; Palupi, I. R.

    2016-01-01

    Relocating hypocenters is a way to improve the velocity model of the subsurface, and one of the methods for doing so is grid search. To map the velocity distribution in the subsurface by the tomography method, the relocated hypocenters are used as a reference for the analysis of volcanic and major structural patterns, such as those in Central Java. The main data of this study are earthquakes recorded from 1952 to 2012, comprising 9162 P-wave arrivals from 2426 events recorded by 30 stations located in the vicinity of Central Java. The grid search method has the advantage that it can relocate hypocenters more accurately, because it divides the model space into blocks and each grid block can be occupied by only one hypocenter. The tomography is carried out on the travel time data of the relocated events using the pseudo-bending inversion method. The grid search relocation shows that the hypocenter depths are shallower than before and shifted towards the south, and the hypocenter distribution delineates the subduction zone between the Eurasian and Indo-Australian plates with an average dip angle of 14°. The tomography results show low-velocity anomalies of -8% to -10% beneath the volcanoes, while the main fault structures in Central Java are delineated by high-velocity anomalies of 8% to 10% trending northwest and northeast-southwest. (paper)
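
    Grid-search hypocenter relocation can be sketched in Python as a scan over trial (x, y, z) blocks that keeps the position minimizing the RMS travel-time residual at the stations. The constant P velocity, station geometry and synthetic picks below are assumptions for illustration only; the study itself works with a layered velocity model and real catalogue picks.

    import numpy as np

    VP = 6.0   # assumed constant P-wave speed, km/s

    def travel_times(hypo, stations):
        """Straight-ray P travel times from a trial hypocenter to each station."""
        return np.linalg.norm(stations - hypo, axis=1) / VP

    def grid_search(observed, stations, x_range, y_range, z_range, step):
        """Exhaustive scan of the block grid; one trial hypocenter per block."""
        best, best_rms = None, np.inf
        for x in np.arange(*x_range, step):
            for y in np.arange(*y_range, step):
                for z in np.arange(*z_range, step):
                    calc = travel_times(np.array([x, y, z]), stations)
                    resid = observed - calc
                    resid -= resid.mean()          # absorb the unknown origin time
                    rms = np.sqrt(np.mean(resid ** 2))
                    if rms < best_rms:
                        best, best_rms = (x, y, z), rms
        return best, best_rms

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        stations = rng.uniform([-50, -50, 0], [50, 50, 0], size=(8, 3))  # surface stations
        true_hypo = np.array([12.0, -8.0, 15.0])
        observed = travel_times(true_hypo, stations) + rng.normal(0.0, 0.05, 8)
        hypo, rms = grid_search(observed, stations,
                                (-30, 30), (-30, 30), (5, 30), step=1.0)
        print("relocated hypocenter (km):", hypo, " rms residual (s):", round(rms, 3))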