WorldWideScience

Sample records for base published search

  1. Publishing studies: the search for an elusive academic object

    Directory of Open Access Journals (Sweden)

    Sophie Noël

    2015-07-01

Full Text Available This paper questions the validity of the so-called “publishing studies” as an academic discipline, while trying to situate them within the field of social sciences and to contextualize their success. It argues that a more appropriate frame could be adopted to describe what people studying the transformations of book publishing do – or should do – both at a theoretical and methodological level. The paper begins by providing an overview of the scholarly and academic context in France as far as book publishing is concerned, highlighting its genesis and current development. It goes on to underline the main pitfalls that a sub-field such as publishing studies is faced with, before making suggestions as to the bases for a stimulating analysis of publishing and making a case for an interdisciplinary approach nurtured by social sciences. The paper is based on a long-term field study on independent presses in France, together with a survey of literature on the subject.

  2. Copyright over Works Reproduced and Published Online by Search Engines

    Directory of Open Access Journals (Sweden)

    Ernesto Rengifo García

    2016-12-01

Full Text Available Search engines are an important technological tool that facilitates the dissemination of, and access to, information on the Internet. However, when it comes to works protected by author's rights, in the continental-law tradition, or by copyright, in the Anglo-Saxon tradition, it is difficult to determine whether search engines infringe the rights of the owners of these works. Faced with this situation, the US and Europe have applied the exceptions to author's rights and the Fair Use doctrine to decide whether search engines infringe owners' rights. This article carries out a comparative analysis of the different judicial decisions in the US and Europe on search engines and protected works.

  3. Web-Based Computing Resource Agent Publishing

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

Web-based Computing Resource Publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources on the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.

  4. Enrich the E-publishing Community Website with Search Engine Optimization Technique

    Directory of Open Access Journals (Sweden)

    Vadivel Rangasamy

    2011-09-01

Full Text Available The Internet plays a vital role in online business: every business needs to present its information to clients and end users, and search engines hold millions of indexed pages. Search engine optimization (SEO) techniques have to be applied to both static and dynamic web applications. Static content poses no particular difficulty, since it does not change until the site is re-hosted, so it can simply follow the usual SEO rules and conventions; dynamic content, however, poses a few significant challenges. The goal is to overcome these challenges so that a fully functional dynamic site is optimized as well as a static site can be, and users can quickly find the information they search for. In that context we use a few SEO methods for dynamic web applications, such as user-friendly URLs, a URL redirector, and generic HTML. Both internal and external elements of a site affect the way it is ranked in any given search engine, so all of these elements should be taken into consideration. We apply these concepts to an E-publishing Community Website, a site with a large number of dynamic fields and dynamic validations, with the help of XML, XSL, and JavaScript. A database plays a major role in accomplishing this functionality; we use a 3D (static, dynamic, and meta) database structure. One of the advantages of the XML/XSLT combination is the ability to separate content from presentation: a data source can return an XML document, and an XSLT can then transform the data into whatever HTML is needed, based on the data in the XML document. The flexibility of XML/XSLT can be combined with the power of ASP.NET server/client controls by using an XSLT to generate the server/client controls dynamically, thus leveraging the best of both worlds.
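The user-friendly-URL technique named in the abstract can be sketched briefly; the function names and URL scheme below are illustrative, not taken from the paper:

```python
import re

def slugify(title: str) -> str:
    """Turn an article title into a search-engine-friendly URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())   # collapse non-alphanumerics
    return slug.strip("-")

def friendly_url(article_id: int, title: str) -> str:
    """Map a dynamic query-string URL onto a static-looking, crawlable path."""
    return f"/articles/{article_id}/{slugify(title)}"

# A dynamic URL such as /article.aspx?id=42 is exposed instead as:
print(friendly_url(42, "Enrich the E-publishing Community Website"))
# /articles/42/enrich-the-e-publishing-community-website
```

A URL redirector then maps the friendly path back to the underlying dynamic resource on the server side.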

  5. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  6. A dynamic knowledge base based search engine

    Institute of Scientific and Technical Information of China (English)

    WANG Hui-jin; HU Hua; LI Qing

    2005-01-01

Search engines have greatly helped us to find the desired information on the Internet. Most search engines use a keyword-matching technique. This paper discusses a Dynamic Knowledge Base based Search Engine (DKBSE), which can expand the user's query using the keywords' concepts or meanings. To do this, the DKBSE constructs and maintains its knowledge base dynamically from the system's search results and the user's feedback. The DKBSE expands the user's initial query using the knowledge base, and returns the information retrieved with the expanded query.
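The query-expansion idea behind the DKBSE can be illustrated with a minimal sketch; the knowledge-base contents here are invented stand-ins for the dynamically maintained base the abstract describes:

```python
# Toy knowledge base mapping keywords to related concepts. In the DKBSE
# this mapping is built and updated dynamically from search results and
# user feedback; here it is hard-coded for illustration.
KNOWLEDGE_BASE = {
    "car": {"automobile", "vehicle"},
    "laptop": {"notebook"},
}

def expand_query(query: str) -> set[str]:
    """Expand each query keyword with its related concepts, if any."""
    terms: set[str] = set()
    for word in query.lower().split():
        terms.add(word)
        terms |= KNOWLEDGE_BASE.get(word, set())
    return terms

print(sorted(expand_query("car rental")))
# ['automobile', 'car', 'rental', 'vehicle']
```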

  7. Quantum searching application in search based software engineering

    Science.gov (United States)

    Wu, Nan; Song, FangMin; Li, Xiangdong

    2013-05-01

The Search Based Software Engineering (SBSE) approach is widely used in software engineering for identifying optimal solutions. However, the traditional algorithms used in SBSE have no polynomial-time solution, which makes them very costly. In this paper, we analyze and compare several quantum search algorithms that could be applied to SBSE: the quantum adiabatic evolution search algorithm, fixed-point quantum search (FPQS), quantum walks, and a rapid modified Grover quantum search method. Grover's algorithm is considered the best choice for large-scale unstructured data search, and in theory it is applicable to any search-space structure and any type of search problem.
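Grover's amplitude amplification, the algorithm singled out above, can be simulated classically for intuition; this sketch tracks the state amplitudes directly rather than running on quantum hardware:

```python
import math

def grover_success_probability(n_states: int, marked: int) -> float:
    """Classically simulate Grover's amplitude amplification over an
    unstructured search space of n_states items with one marked item."""
    amps = [1 / math.sqrt(n_states)] * n_states        # uniform superposition
    iterations = math.floor(math.pi / 4 * math.sqrt(n_states))
    for _ in range(iterations):
        amps[marked] = -amps[marked]                   # oracle: flip marked sign
        mean = sum(amps) / n_states
        amps = [2 * mean - a for a in amps]            # inversion about the mean
    return amps[marked] ** 2

# For 16 items, 3 Grover iterations raise the marked item's probability
# from 1/16 to over 95%, versus ~O(N) classical probes.
print(grover_success_probability(16, marked=7) > 0.95)   # True
```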

  8. Tag Based Audio Search Engine

    Directory of Open Access Journals (Sweden)

    Parameswaran Vellachu

    2012-03-01

Full Text Available The volume of music databases is increasing day by day, and getting the required song as per the listener's choice is a big challenge. Hence, it is hard to manage this huge quantity in terms of searching and filtering through a music database. It is surprising that the audio and music industry still relies on very simplistic metadata to describe music files. For searching audio resources, an efficient tag-based audio search engine is therefore necessary. The current research focuses on two aspects of musical databases: 1. semantic annotation generation using the tag-based approach; 2. an audio search engine with which the user can retrieve songs based on his or her choice. The proposed method can be used to annotate and retrieve songs based on the musical instruments used, the mood of the song, its theme, singer, music director, artist, film director, genre or style, and so on.
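At its core, a tag-based audio search engine reduces to an inverted index from tags to songs; the sketch below is a minimal illustration with invented song and tag names:

```python
from collections import defaultdict

class TagAudioIndex:
    """Minimal tag-based audio search: an inverted index from tags
    (mood, genre, instrument, artist, ...) to song identifiers."""
    def __init__(self):
        self.index = defaultdict(set)

    def annotate(self, song: str, tags: list[str]) -> None:
        for tag in tags:
            self.index[tag.lower()].add(song)

    def search(self, *tags: str) -> set[str]:
        """Return the songs carrying every requested tag."""
        sets = [self.index.get(t.lower(), set()) for t in tags]
        return set.intersection(*sets) if sets else set()

idx = TagAudioIndex()
idx.annotate("song_a", ["sad", "piano", "ballad"])
idx.annotate("song_b", ["happy", "piano"])
print(idx.search("piano", "sad"))   # {'song_a'}
```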

  9. Chemical and biological warfare: General studies. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    The bibliography contains citations concerning federally sponsored and conducted studies into chemical and biological warfare operations and planning. These studies cover areas not addressed in other parts of this series. The topics include production and storage of agents, delivery techniques, training, military and civil defense, general planning studies, psychological reactions to chemical warfare, evaluations of materials exposed to chemical agents, and studies on banning or limiting chemical warfare. Other published searches in this series on chemical warfare cover detection and warning, defoliants, protection, and biological studies, including chemistry and toxicology. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  10. Chemical and biological warfare: General studies. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

The bibliography contains citations concerning federally sponsored and conducted studies into chemical and biological warfare operations and planning. These studies cover areas not addressed in other parts of this series. The topics include production and storage of agents, delivery techniques, training, military and civil defense, general planning studies, psychological reactions to chemical warfare, evaluations of materials exposed to chemical agents, and studies on banning or limiting chemical warfare. Other published searches in this series on chemical warfare cover detection and warning, defoliants, protection, and biological studies, including chemistry and toxicology. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  11. A Quantitative Analysis of Published Skull Base Endoscopy Literature.

    Science.gov (United States)

    Hardesty, Douglas A; Ponce, Francisco A; Little, Andrew S; Nakaji, Peter

    2016-02-01

Objectives Skull base endoscopy allows for minimal access approaches to the sinonasal contents and cranial base. Advances in endoscopic technique and applications have been published rapidly in recent decades. Setting We utilized an Internet-based scholarly database (Web of Science, Thomson Reuters) to query broad-based phrases regarding skull base endoscopy literature. Participants All skull base endoscopy publications. Main Outcome Measures Standard bibliometrics outcomes. Results We identified 4,082 relevant skull base endoscopy English-language articles published between 1973 and 2014. The 50 top-cited publications (n = 51, due to articles with equal citation counts) ranged in citation count from 397 to 88. Most of the articles were clinical case series or technique descriptions. Most (96% [49/51]) were published in journals specific to either neurosurgery or otolaryngology. Conclusions A relatively small number of institutions and individuals have published a large amount of the literature. Most of the publications consisted of case series and technical advances, with a lack of randomized trials.

  12. Distributed search engine architecture based on topic specific searches

    Science.gov (United States)

    Abudaqqa, Yousra; Patel, Ahmed

    2015-05-01

Search engines (SEs) abound, and the monumental growth in the number of users performing online searches on the Web is a pressing contemporary issue. Tens of billions of searches are performed every day, and they typically offer users many irrelevant results, which is time-consuming and costly for the user. Given this problem, it has become very difficult for existing Web SEs to provide complete, relevant, and up-to-date responses to users' search queries. To overcome this problem, we developed the Distributed Search Engine Architecture (DSEA), a new means of smart information query and retrieval for the World Wide Web (WWW). In a DSEA, multiple autonomous search engines, owned by different organizations or individuals, cooperate and act as a single search engine. This paper reports the part of this research focusing on the development of a DSEA based on topic-specific specialised search engines. In a DSEA, the results for a specific query can be provided by any of the participating search engines, and the user is unaware of which one. An important design goal of using topic-specific search engines in this research is to build systems that can be used effectively by a large number of users simultaneously. Efficient and effective usage with good response times is important, because it involves leveraging the vast amount of searched data from the World Wide Web by categorising it into condensed, focused, topic-specific results that meet the user's queries. The design model and development of the DSEA adopt a Service Directory (SD) to route queries towards the SEs hosting topic-specific documents. It displays performance consistent with the requirements of the users. The evaluation results of the model return a very high priority score, which is associated with each keyword frequency.
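The Service Directory routing step can be sketched as follows; the topic names, keyword lists, and engine names are all hypothetical, not taken from the paper:

```python
# Hypothetical service directory: maps each topic to the specialised
# search engine that hosts documents for it.
SERVICE_DIRECTORY = {
    "medicine": "medline_engine",
    "law": "lexis_engine",
    "music": "audio_engine",
}

# Keyword profiles describing each topic-specific engine's coverage.
TOPIC_KEYWORDS = {
    "medicine": {"disease", "drug", "therapy"},
    "law": {"copyright", "court", "statute"},
    "music": {"song", "genre", "instrument"},
}

def route_query(query: str) -> str:
    """Score each topic by keyword overlap with the query and route the
    query to the engine registered for the best-matching topic."""
    words = set(query.lower().split())
    best = max(TOPIC_KEYWORDS, key=lambda t: len(words & TOPIC_KEYWORDS[t]))
    return SERVICE_DIRECTORY[best]

print(route_query("copyright of works indexed by search engines"))
# lexis_engine
```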

  13. In Search of Signature Pedagogy for PDS Teacher Education: A Review of Articles Published in "School-University Partnerships"

    Science.gov (United States)

    Yendol-Hoppey, Diane; Franco, Yvonne

    2014-01-01

"In Search of Signature Pedagogy for PDS Teacher Education" is a review of articles published in "School-University Partnerships" which emerged in response to Shulman's critique that we do not possess powerful, consistent models of practice that we can define and have deeply studied. To these ends, we searched…

  14. Proposal of Tabu Search Algorithm Based on Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Ahmed T. Sadiq Al-Obaidi

    2014-03-01

Full Text Available This paper presents a new version of Tabu Search (TS) based on Cuckoo Search (CS), called Tabu-Cuckoo Search (TCS), which reduces the effect of TS's known problems. The proposed algorithm brings more diversity to the candidate solutions of TS. Two case studies, the 4-Color Map and the Traveling Salesman Problem, have been solved using the proposed algorithm. The proposed algorithm gives good results compared with the original TS: fewer iterations are needed, and fewer local-minimum (non-optimal) solutions occur.
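The plain Tabu Search that TCS extends can be sketched on a toy problem; the cuckoo-style diversification is only noted in a comment, since the abstract gives no pseudocode:

```python
import random

def tabu_search(start, neighbors, cost, iters=100, tabu_len=5, seed=0):
    """Plain Tabu Search: move to the best non-tabu neighbor each step,
    keeping a short-term memory of recently visited solutions. (The
    paper's TCS variant additionally injects cuckoo-search style random
    candidates for extra diversity.)"""
    rng = random.Random(seed)          # rng available for random neighborhoods
    best = current = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current, rng) if n not in tabu]
        if not candidates:
            continue                   # all neighbors tabu: skip this step
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy problem: minimise (x - 7)^2 over the integers; neighbors are x±1, x±2.
sol = tabu_search(0, lambda x, r: [x - 2, x - 1, x + 1, x + 2],
                  cost=lambda x: (x - 7) ** 2)
print(sol)   # 7
```

The tabu list lets the search climb out of local minima instead of immediately revisiting them, which is exactly the behavior the diversification mechanism strengthens.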

  15. Mathematical programming solver based on local search

    CERN Document Server

    Gardi, Frédéric; Darlay, Julien; Estellon, Bertrand; Megel, Romain

    2014-01-01

This book covers local search for combinatorial optimization and its extension to mixed-variable optimization. Although not yet understood from the theoretical point of view, local search is the paradigm of choice for tackling large-scale real-life optimization problems. Today's end-users demand interactivity with decision support systems. For optimization software, this means obtaining good-quality solutions quickly. Fast iterative improvement methods, like local search, are suited to satisfying such needs. Here the authors show local search in a new light, in particular presenting a new kind of mathematical programming solver, namely LocalSolver, based on neighborhood search. First, an iconoclastic methodology is presented to design and engineer local search algorithms. The authors' concern about industrializing local search approaches is of particular interest for practitioners. This methodology is applied to solve two industrial problems with high economic stakes. Software based on local search induces ex...

  16. ArraySearch: A Web-Based Genomic Search Engine.

    Science.gov (United States)

    Wilson, Tyler J; Ge, Steven X

    2012-01-01

    Recent advances in microarray technologies have resulted in a flood of genomics data. This large body of accumulated data could be used as a knowledge base to help researchers interpret new experimental data. ArraySearch finds statistical correlations between newly observed gene expression profiles and the huge source of well-characterized expression signatures deposited in the public domain. A search query of a list of genes will return experiments on which the genes are significantly up- or downregulated collectively. Searches can also be conducted using gene expression signatures from new experiments. This resource will empower biological researchers with a statistical method to explore expression data from their own research by comparing it with expression signatures from a large public archive.
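Signature search of the kind ArraySearch performs amounts to ranking archived profiles by correlation with a query profile; the expression values below are invented for illustration:

```python
from math import sqrt

# Hypothetical archive of expression signatures (one value per gene).
ARCHIVE = {
    "heat_shock":  [2.1, 1.8, -0.3, 0.1],
    "cold_stress": [-1.9, -1.5, 0.2, 0.4],
    "control":     [0.1, -0.2, 0.0, 0.1],
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_experiments(query_signature):
    """Rank archived experiments by correlation with the query profile,
    the core statistic behind an expression-signature search."""
    scored = [(name, pearson(query_signature, profile))
              for name, profile in ARCHIVE.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(rank_experiments([2.0, 1.6, -0.1, 0.2])[0][0])   # heat_shock
```

A real system would add significance testing and operate over thousands of archived experiments rather than three.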

  17. How to search, write, prepare and publish the scientific papers in the biomedical journals.

    Science.gov (United States)

    Masic, Izet

    2011-06-01

This article describes the methodology of preparing, writing, and publishing scientific papers in biomedical journals. A concise overview is given of the concept and structure of the system of biomedical scientific and technical information and of the way biomedical literature is retrieved from worldwide biomedical databases. The scientific and professional medical journals currently published in Bosnia and Herzegovina are described. Also given is a comparative review of the number and structure of papers published in indexed journals in Bosnia and Herzegovina listed in the Medline database. Three B&H journals indexed in the MEDLINE database are analyzed for 2010: Medical Archives (Medicinski Arhiv), Bosnian Journal of Basic Medical Sciences, and Medical Gazette (Medicinski Glasnik). The largest number of original papers was published in Medical Archives. There is a statistically significant difference in the number of papers published by local authors relative to international journals in favor of Medical Archives. Admittedly, the Bosnian Journal of Basic Medical Sciences does not categorize its articles, so this comparison could not be made for it. Medical Archives and the Bosnian Journal of Basic Medical Sciences published the largest percentage of articles by authors from Sarajevo and Tuzla, the two oldest and largest university medical centers in Bosnia and Herzegovina. The author believes that qualitative changes are necessary in the reception and reviewing of papers for publication in biomedical journals published in Bosnia and Herzegovina, and that this should be the responsibility of a separate scientific authority/committee composed of experts in the field of medicine at the state level.

  18. Smart Agent Learning based Hotel Search System- Android Environment

    Directory of Open Access Journals (Sweden)

    Wayne Lawrence

    2012-08-01

Full Text Available The process of finding the finest hotel in a central location is time-consuming and overwhelming, suffers from information overload, and in some cases poses a security risk to the client. Over time, with competition in the market among travel agents and hotels, the process of hotel search and booking has improved with advances in technology. Various web sites allow a user to select a destination from a pull-down list along with several categories to suit one's preference. Some of the more advanced web sites allow the destination to be searched via a map, for example hotelguidge.com and jamaica.hotels.hu. Recently, a good amount of work has been carried out on the use of intelligent agents for hotel search on J2ME-based mobile handsets, which still has some weaknesses. The proposed system uses smart software agents that overcome the weaknesses of the previous system by collaborating among themselves and searching Google Maps based on criteria selected by the user, returning results to the client that are precise and best suit the user's requirements. In addition, the agents possess a learning capability for hotel search based on past search experience. Hotel booking involving cryptography is not incorporated in this paper and has been published elsewhere. The system runs on an Android 2.2-enabled mobile phone using the JADE-LEAP agent development kit.

  19. Xerox trails: a new web-based publishing technology

    Science.gov (United States)

    Rao, Venkatesh G.; Vandervort, David; Silverstein, Jesse

    2010-02-01

Xerox Trails is a new digital publishing model developed at the Xerox Research Center, Webster. The primary purpose of the technology is to allow Web users and publishers to collect, organize and present information in the form of a useful annotated narrative (possibly non-sequential) with editorial content and metadata that can be consumed both online and offline. The core concept is a trail: a digital object that improves online content production, consumption and navigation user experiences. When appropriate, trails can also be easily sequenced and transformed into printable documents, thereby bridging the gap between online and offline content experiences. The model is partly inspired by Vannevar Bush's influential idea of the "Memex" [1], which has inspired several generations of Web technology [2]. Xerox Trails is a realization of selected elements from the idea of the Memex, along with several original design ideas. It is based on a primitive data construct, the trail. In Xerox Trails, the idea of a trail is used to support the architecture of a Web 2.0 product suite called Trailmeme, which includes a destination Web site, plugins for major content management systems, and a browser toolbar.

  20. A web based Publish-Subscribe framework for mobile computing

    Directory of Open Access Journals (Sweden)

    Cosmina Ivan

    2014-05-01

Full Text Available The growing popularity of mobile devices is permanently changing the Internet user's computing experience. Smartphones and tablets are beginning to replace the desktop as the primary means of interacting with various information technology and web resources. While mobile devices facilitate consuming web resources in the form of web services, the growing demand for consuming services on mobile devices is introducing a complex ecosystem in the mobile environment. This research addresses the communication challenges involved in mobile distributed networks and proposes an event-driven communication approach for information dissemination. It investigates different communication techniques, such as polling, long-polling, and server-side push, as client-server interaction mechanisms, and the latest web technology standard, WebSocket, as the communication protocol within a publish/subscribe paradigm. Finally, this paper introduces and evaluates the proposed framework, a hybrid approach of WebSocket and event-based publish/subscribe for operating in mobile environments.
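The topic-based publish/subscribe core of such a framework can be sketched independently of the transport; here an in-process callback stands in for the WebSocket push the abstract describes:

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal topic-based publish/subscribe broker. In the paper's
    framework the transport would be a WebSocket connection per client;
    here the 'push' is a plain callback to keep the sketch self-contained."""
    def __init__(self):
        self.subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        for callback in self.subscribers[topic]:   # server-side push
            callback(message)

inbox = []
broker = Broker()
broker.subscribe("news/mobile", inbox.append)
broker.publish("news/mobile", "WebSocket framework released")
print(inbox)   # ['WebSocket framework released']
```

The decoupling the paradigm provides is visible here: the publisher never learns who, if anyone, receives the message.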

  1. Publishing FAIR Data: an exemplar methodology utilizing PHI-base

    Directory of Open Access Journals (Sweden)

Alejandro Rodríguez Iglesias

    2016-05-01

Full Text Available Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species versus the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be FAIR: Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences, the Pathogen-Host Interaction Database (PHI-base), to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.

  2. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base

    Science.gov (United States)

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G.; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E.; Wilkinson, Mark D.

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be “FAIR”—Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences—the Pathogen-Host Interaction Database (PHI-base)—to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings. PMID:27433158

  3. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base.

    Science.gov (United States)

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E; Wilkinson, Mark D

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be "FAIR"-Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences-the Pathogen-Host Interaction Database (PHI-base)-to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.

  4. Semantic Map Based Web Search Result Visualization

    OpenAIRE

    2007-01-01

    The problem of information overload has become more pressing with the emergence of the increasingly more popular Internet services. The main information retrieval mechanisms provided by the prevailing Internet Web software are based on either keyword search (e.g., Google and Yahoo) or hypertext browsing (e.g., Internet Explorer and Netscape). The research presented in this paper is aimed at providing an alternative concept-based categorization and search capability based on a combination of m...

  5. Indoor radon pollution: Control and mitigation. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-05-01

    The bibliography contains citations concerning the control and mitigation of radon pollution in homes and commercial buildings. Citations cover radon transport studies in buildings and soils, remedial action proposals on contaminated buildings, soil venting, building ventilation, sealants, filtration systems, water degassing, reduction of radon sources in building materials, and evaluation of existing radon mitigation programs, including their cost effectiveness. Analysis and detection of radon and radon toxicity are covered in separate published bibliographies. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  6. Model-based Tomographic Reconstruction Literature Search

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Lehman, S K

    2005-11-30

    In the process of preparing a proposal for internal research funding, a literature search was conducted on the subject of model-based tomographic reconstruction (MBTR). The purpose of the search was to ensure that the proposed research would not replicate any previous work. We found that the overwhelming majority of work on MBTR which used parameterized models of the object was theoretical in nature. Only three researchers had applied the technique to actual data. In this note, we summarize the findings of the literature search.

  7. Search Result Diversification Based on Query Facets

    Institute of Scientific and Technical Information of China (English)

    胡莎; 窦志成; 王晓捷; 继荣

    2015-01-01

In search engines, different users may search for different information by issuing the same query. To satisfy more users with limited search results, search result diversification re-ranks the results to cover as many user intents as possible. Most existing intent-aware diversification algorithms recognize user intents as subtopics, each of which is usually a word, a phrase, or a piece of description. In this paper, we leverage query facets to understand user intents in diversification, where each facet contains a group of words or phrases that explain an underlying intent of a query. We generate subtopics based on query facets and propose faceted diversification approaches. Experimental results on the public TREC 2009 dataset show that our faceted approaches outperform state-of-the-art diversification models.
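A facet-aware diversification step can be sketched as a greedy cover of query facets; the documents and facet labels below are invented, and the strategy is a simplified IA-Select-style heuristic rather than the authors' exact model:

```python
# Candidate results for an ambiguous query such as "apple", annotated
# with the query facets (underlying intents) each result covers.
RESULTS = {
    "doc1": {"company"},
    "doc2": {"company"},
    "doc3": {"fruit"},
    "doc4": {"fruit", "company"},
}

def diversify(results, k=2):
    """Greedy re-ranking: at each step select the result that covers the
    most still-uncovered facets, so limited results satisfy more intents."""
    ranked, covered = [], set()
    pool = dict(results)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda d: len(pool[d] - covered))
        ranked.append(best)
        covered |= pool.pop(best)
    return ranked

print(diversify(RESULTS)[0])   # doc4: it covers both intents at once
```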

  8. Modeling and Implementing Ontology-Based Publish/Subscribe Using Semantic Web Technologies

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk; Hansen, Klaus Marius

    2010-01-01

Publish/subscribe is a communication paradigm for distributed interaction. The paradigm provides decoupling in time, space, and synchronization for interacting entities, and several variants of publish/subscribe exist, including topic-based, subject-based, and type-based publish/subscribe. A centr...

  9. Chemical and biological warfare: Protection, decontamination, and disposal. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

The bibliography contains citations concerning the means to defend against chemical and biological agents used in military operations, and to eliminate the effects of such agents on personnel, equipment, and grounds. Protection is accomplished through protective clothing and masks, and in buildings and shelters through filtration. Elimination of effects includes decontamination and removal of the agents from clothing, equipment, buildings, grounds, and water, using chemical deactivation, incineration, and controlled disposal of material in injection wells and ocean dumping. Other Published Searches in this series cover chemical warfare detection; defoliants; general studies; biochemistry and therapy; and biology, chemistry, and toxicology associated with chemical warfare agents. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  10. Chemical and biological warfare: Protection, decontamination, and disposal. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    The bibliography contains citations concerning the means to defend against chemical and biological agents used in military operations, and to eliminate the effects of such agents on personnel, equipment, and grounds. Protection is accomplished through protective clothing and masks, and in buildings and shelters through filtration. Elimination of effects includes decontamination and removal of the agents from clothing, equipment, buildings, grounds, and water, using chemical deactivation, incineration, and controlled disposal of material in injection wells and ocean dumping. Other Published Searches in this series cover chemical warfare detection; defoliants; general studies; biochemistry and therapy; and biology, chemistry, and toxicology associated with chemical warfare agents. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  11. Water pollution analysis and detection. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-08-01

    The bibliography contains citations concerning water pollution analysis, detection, monitoring, and regulation. Citations review online systems, bioassay monitoring, laser-based detection, sensor and biosensor systems, metabolic analyzers, and microsystem techniques. References cover fiber-optic portable detection instruments and rapid detection of toxicants in drinking water. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  12. Ceramic heat exchangers. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-08-01

    The bibliography contains citations concerning the development, fabrication, and performance of ceramic heat exchangers. References discuss applications in coal-fired gas turbine power plants. Topics cover high temperature corrosion resistance, fracture properties, nondestructive evaluations, thermal shock and fatigue, silicon carbide-based ceramics, and composite joining. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  13. Coal gasification. (Latest citations from the EI compendex*plus database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    The bibliography contains citations concerning the development and assessment of coal gasification technology. Combined-cycle gas turbine power plants are reviewed. References also discuss dry-feed gasification, gas turbine interface, coal gasification pilot plants, underground coal gasification, gasification with nuclear heat, and molten bath processes. Clean-coal based electric power generation and environmental issues are examined. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  14. Battery electrolytes. (Latest citations from the EI Compendex*plus database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The bibliography contains citations concerning the design, construction, and applications of solid, liquid, and gaseous battery electrolytes. Most recent citations focus on solid state battery electrolytes based on lithium or lithium-related chemistry. Some attention is given to the composition of the electrodes associated with solid state batteries. Electrolyte properties and battery performance, maintenance, and safety are also considered. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  15. An Ontology Based Personalised Mobile Search Engine

    Directory of Open Access Journals (Sweden)

    Mrs. Rashmi A. Jolhe

    2014-02-01

Full Text Available As the amount of Web information grows rapidly, search engines must be able to retrieve information according to the user's preference. In this paper, we propose an Ontology Based Personalised Mobile Search Engine (OBPMSE) that captures the user's interests and preferences in the form of concepts by mining search results and their clickthroughs. OBPMSE profiles the user's interests and personalises the search results according to the user's profile. OBPMSE classifies these concepts into content concepts and location concepts. In addition, users' locations (positioned by GPS) are used to supplement the location concepts in OBPMSE. The user preferences are organized in an ontology-based, multifacet user profile, which is used to adapt a personalized ranking function that in turn performs rank adaptation of future search results. We propose to define personalization effectiveness based on entropies and use it to balance the weights between the content and location facets. In our design, the client collects and stores clickthrough data locally to protect privacy, whereas heavy tasks such as concept extraction, training, and reranking are performed at the OBPMSE server. OBPMSE provides a client-server architecture and distributes tasks to individual components to decrease complexity.

  16. Partial evolution based local adiabatic quantum search

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Lu Song-Feng; Liu Fang; Yang Li-Ping

    2012-01-01

Recently, Zhang and Lu provided a quantum search algorithm based on partial adiabatic evolution, which beats the time bound of local adiabatic search when the number of marked items in the unsorted database is larger than one. Later, they found that the above two adiabatic search algorithms have the same time complexity when there is only one marked item in the database. In the present paper, following the idea of Roland and Cerf [Roland J and Cerf N J 2002 Phys. Rev. A 65 042308], we show that if a local adiabatic evolution is performed instead of the original “global” one within the small symmetric evolution interval defined by Zhang et al., the resulting algorithm exhibits slightly better performance, although the two become progressively equivalent as M increases. In addition, a proof of the optimality of this partial evolution based local adiabatic search when M = 1 is also presented. Two other special cases of the adiabatic algorithm, obtained by appropriately tuning the evolution interval of the partial adiabatic evolution based quantum search and found to exhibit the same phenomenon, are also discussed.
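For orientation, the local adiabatic schedule these algorithms build on can be summarized as follows (a sketch of the standard Roland–Cerf construction for Grover-type search, not text from the paper itself):

```latex
% H(s) = (1-s)H_0 + sH_1, with M marked items out of N; the gap along the path is
g(s) = \sqrt{\,1 - 4\Bigl(1 - \tfrac{M}{N}\Bigr)s(1-s)\,},
\qquad
\Bigl|\frac{ds}{dt}\Bigr| = \varepsilon\, g(s)^2
\;\Longrightarrow\;
T \approx \frac{\pi}{2\varepsilon}\sqrt{\frac{N}{M}} \quad (N \gg M),
% i.e. the quadratic Grover speedup, versus T = O(N) for a linear ("global") schedule.
```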

  17. Space based microlensing planet searches

    Directory of Open Access Journals (Sweden)

    Tisserand Patrick

    2013-04-01

Full Text Available The discovery of extra-solar planets is arguably the most exciting development in astrophysics during the past 15 years, rivalled only by the detection of dark energy. Two projects unite the communities of exoplanet scientists and cosmologists: the proposed ESA M-class mission EUCLID and the large space mission WFIRST, top ranked by the Astronomy 2010 Decadal Survey report. The latter states that: “Space-based microlensing is the optimal approach to providing a true statistical census of planetary systems in the Galaxy, over a range of likely semi-major axes”. They also add: “This census, combined with that made by the Kepler mission, will determine how common Earth-like planets are over a wide range of orbital parameters”. We will present a status report of the results obtained by microlensing on exoplanets and the new objectives of the next generation of ground-based wide-field imager networks. We will finally discuss the fantastic prospect offered by space-based microlensing at the 2020–2025 horizon.

  18. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool, and while GPS information provides reasonably good accuracy, it is not always present in all handheld devices, nor is it accurate in all situations, for example in urban environments. We propose a system to provide location-based services using image searches without requiring GPS. The goal of this system is to assist tourists in cities with additional information using their mobile phones and built-in cameras. Based upon the result of the image search engine and database image location knowledge, the location of the query image is determined and associated data can be presented to the user.

  19. Ontology-Based Search of Genomic Metadata.

    Science.gov (United States)

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, search of relevant datasets for knowledge discovery is only partially supported: metadata describing ENCODE datasets are quite simple and incomplete, and not described by a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of found datasets to the biologists' queries.
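The gain of ontology-backed search over purely syntactic matching can be illustrated with a toy sketch (the dataset ids and the tiny ontology fragment below are invented for illustration; S.O.S. GeM itself builds on UMLS-integrated ontologies and proper indexing):

```python
# Toy semantic metadata search: expand query terms through an ontology of
# synonym/subclass relations before matching dataset metadata keywords.
ONTOLOGY = {  # hypothetical fragment; real systems use UMLS-scale vocabularies
    "cancer": {"neoplasm", "tumor", "carcinoma"},
}

DATASETS = {  # hypothetical dataset id -> metadata keywords
    "ENC001": {"neoplasm", "h3k4me3", "chip-seq"},
    "ENC002": {"liver", "rna-seq"},
}

def expand(term):
    """A query term matches itself plus everything the ontology links it to."""
    return {term} | ONTOLOGY.get(term, set())

def search(query_terms, syntactic_only=False):
    terms = set()
    for t in query_terms:
        terms |= {t} if syntactic_only else expand(t)
    return sorted(d for d, meta in DATASETS.items() if terms & meta)

miss = search(["cancer"], syntactic_only=True)  # no literal keyword match
hit = search(["cancer"])                        # found via the 'neoplasm' link
```

The semantic expansion finds `ENC001` (tagged `neoplasm`) for the query `cancer`, which a literal keyword match misses entirely.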

  20. New similarity search based glioma grading

    Energy Technology Data Exchange (ETDEWEB)

    Haegler, Katrin; Brueckmann, Hartmut; Linn, Jennifer [Ludwig-Maximilians-University of Munich, Department of Neuroradiology, Munich (Germany); Wiesmann, Martin; Freiherr, Jessica [RWTH Aachen University, Department of Neuroradiology, Aachen (Germany); Boehm, Christian [Ludwig-Maximilians-University of Munich, Department of Computer Science, Munich (Germany); Schnell, Oliver; Tonn, Joerg-Christian [Ludwig-Maximilians-University of Munich, Department of Neurosurgery, Munich (Germany)

    2012-08-15

    MR-based differentiation between low- and high-grade gliomas is predominately based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information of CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (= probability-based classifier) was defined and used for grading. Biopsy was used as gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone. (orig.)

  1. Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)

    Science.gov (United States)

    Bendig, Regina B.

    2009-01-01

    The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engines), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…

  2. Activity based video indexing and search

    Science.gov (United States)

    Chen, Yang; Jiang, Qin; Medasani, Swarup; Allen, David; Lu, Tsai-ching

    2010-04-01

We describe a method for searching videos in large video databases based on the activity content present in the videos. Being able to search videos by content (such as human activities) has many applications in security, surveillance, and commercial settings such as online video search. Conventional video content-based retrieval (CBR) systems are either feature based or semantics based, with the former trying to model the dynamic video contents using the statistics of image features, and the latter relying on automated scene understanding of the video contents. Neither approach has been successful. Our approach is inspired by the success of the visual vocabulary of "Video Google" by Sivic and Zisserman, and the work of Nister and Stewenius, who showed that building a visual vocabulary tree can improve both scalability and retrieval accuracy for 2-D images. We apply the visual vocabulary and vocabulary tree approach to spatio-temporal video descriptors for video indexing, taking advantage of the discrimination power of these descriptors as well as the scalability of the vocabulary tree. Furthermore, this approach does not rely on any model-based activity recognition; training of the vocabulary tree is done off-line on unlabeled data with unsupervised learning, so the approach is widely applicable. Experimental results on standard human activity recognition videos are presented that demonstrate the feasibility of this approach.
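The vocabulary-based indexing idea can be sketched in miniature (2-D "descriptors" and a hand-picked flat vocabulary stand in for real spatio-temporal features and an unsupervised-trained tree; all names and data below are invented):

```python
import math

# Hand-picked "visual vocabulary": in practice these centroids come from
# unsupervised clustering (e.g. hierarchical k-means over training descriptors).
VOCAB = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def quantize(desc):
    """Map a descriptor to the index of its nearest visual word."""
    return min(range(len(VOCAB)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, VOCAB[i])))

def histogram(descriptors):
    """L2-normalized bag-of-visual-words histogram for one video."""
    h = [0.0] * len(VOCAB)
    for d in descriptors:
        h[quantize(d)] += 1.0
    norm = math.sqrt(sum(x * x for x in h)) or 1.0
    return [x / norm for x in h]

def rank(query_descs, database):
    """Rank database videos by cosine similarity to the query's histogram."""
    q = histogram(query_descs)
    scores = {name: sum(a * b for a, b in zip(q, histogram(descs)))
              for name, descs in database.items()}
    return sorted(scores, key=scores.get, reverse=True)

videos = {
    "walking": [(0.1, 0.0), (0.0, 0.1), (0.9, 0.1)],
    "waving":  [(0.9, 0.9), (1.0, 0.8), (0.1, 0.9)],
}
best = rank([(0.05, 0.05), (0.95, 0.0)], videos)[0]  # -> "walking"
```

A vocabulary tree replaces the flat nearest-centroid scan with a coarse-to-fine descent, which is what makes the scheme scale to large databases.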

  3. Iris Localization Based on Edge Searching Strategies

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Han Jiuqiang

    2005-01-01

An iris localization scheme based on edge searching strategies is presented. First, the Laplacian-of-Gaussian (LoG) edge detection operator is applied to the original iris image to locate its inner boundary. Then, a circle detection operator, invariant to translation, rotation, and scale, is introduced to locate the outer boundary and its center. Finally, a curve-fitting method is developed for eyelid localization. The performance of the proposed method is tested on 756 iris images from 108 different classes in the CASIA Iris Database and compared with the conventional Hough transform method. The experimental results show that, without loss of localization accuracy, the proposed iris localization algorithm is markedly faster than the Hough transform.
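The circle-detection baseline the paper compares against (the classical circular Hough transform) can be sketched as follows (a deliberately coarse toy on synthetic edge points, not the authors' faster operator):

```python
import math
from collections import Counter

def hough_circles(edge_points, radii, angle_step=10):
    """Classical circular Hough transform: every edge point votes for all
    candidate centres lying at distance r from it; the best-supported
    (cx, cy, r) accumulator cell wins."""
    votes = Counter()
    for (x, y) in edge_points:
        for r in radii:
            for deg in range(0, 360, angle_step):
                cx = x - r * math.cos(math.radians(deg))
                cy = y - r * math.sin(math.radians(deg))
                votes[(round(cx), round(cy), r)] += 1
    return votes.most_common(1)[0][0]

# Synthetic "iris boundary": edge points on a circle of radius 10 at (50, 40).
edges = [(round(50 + 10 * math.cos(math.radians(d))),
          round(40 + 10 * math.sin(math.radians(d))))
         for d in range(0, 360, 10)]
centre = hough_circles(edges, radii=[8, 9, 10, 11, 12])  # -> (50, 40, 10)
```

The accumulator grows with the product of centre positions and radii, which is exactly the cost that edge-searching schemes like the one above try to avoid.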

  4. Strategies for searching and managing evidence-based practice resources.

    Science.gov (United States)

    Robb, Meigan; Shellenbarger, Teresa

    2014-10-01

    Evidence-based nursing practice requires the use of effective search strategies to locate relevant resources to guide practice change. Continuing education and staff development professionals can assist nurses to conduct effective literature searches. This article provides suggestions for strategies to aid in identifying search terms. Strategies also are recommended for refining searches by using controlled vocabulary, truncation, Boolean operators, PICOT (Population/Patient Problem, Intervention, Comparison, Outcome, Time) searching, and search limits. Suggestions for methods of managing resources also are identified. Using these approaches will assist in more effective literature searches and may help evidence-based practice decisions.
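The retrieval features the article recommends (Boolean operators and truncation) behave as in this small sketch (a toy matcher over plain text, not any particular database's query syntax):

```python
import re

def term_matches(term, words):
    """A trailing '*' truncates the term: 'nurs*' matches nursing, nurses, ..."""
    if term.endswith("*"):
        return any(w.startswith(term[:-1].lower()) for w in words)
    return term.lower() in words

def query_matches(query, text):
    """Evaluate 'a AND b OR c AND d' with AND binding tighter than OR."""
    words = set(re.findall(r"[a-z0-9]+", text.lower()))
    return any(all(term_matches(t, words) for t in clause.split(" AND "))
               for clause in query.split(" OR "))

title = "Strategies for searching evidence-based nursing resources"
query_matches("nurs* AND evidence", title)    # True: truncation plus AND
query_matches("midwif* AND evidence", title)  # False: first term fails
query_matches("midwif* OR search*", title)    # True: second clause matches
```

Controlled vocabulary, PICOT framing, and search limits then narrow which `text` fields a real database evaluates such queries against.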

  5. How are established, subscription-based publishers making the transition to open access?

    Directory of Open Access Journals (Sweden)

    Victoria Gardner

    2014-03-01

    Full Text Available To some, the publishing industry seems to move at pace measured in geological ages, especially when compared to the fast-moving digital and technology industries. Many have described how publishers were caught on the back foot by the move of open access (OA from the fringes of publishing to the mainstream, particularly within the UK following the publication of the ‘Finch’ Report. OA has had a marked influence on the publishing industry, leading publishers to reflect on current practices, to have a much more granular approach to systems and processes, and to be engaging even more than previously with other players in the publishing landscape. OA is both a strategic challenge and an opportunity. But for a publisher like Taylor & Francis with a significant number of subscription-based journals, OA creates new levels of complexity, and requires the ability to adapt to new requirements within short timeframes.

  6. A Feedback-Based Web Search Engine

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei-feng; XU Bao-wen; ZHOU Xiao-yu

    2004-01-01

Web search engines are very useful information service tools on the Internet. Current web search engines produce search results relating to the search terms and the actual information collected by them. Since users' selections of the search results cannot affect future ones, the results may not cover most people's interests. In this paper, feedback information produced by users' access lists is represented by a rough set and can reconstruct the query string and influence the search results. The search engines can thus provide self-adaptability.
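The feedback loop described here can be illustrated with a minimal reranker (a sketch of the general idea with an invented blended score, not the paper's rough-set formulation):

```python
from collections import defaultdict

class FeedbackRanker:
    """Blend a base relevance score with accumulated click feedback so that
    results users actually open drift toward the top of later searches."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha             # weight of feedback vs. base score
        self.clicks = defaultdict(int)
        self.impressions = defaultdict(int)

    def record(self, doc, clicked):
        self.impressions[doc] += 1
        if clicked:
            self.clicks[doc] += 1

    def rank(self, scored_docs):
        """scored_docs: {doc: base score in [0, 1]} -> list, best first."""
        def blended(doc):
            shown = self.impressions[doc]
            ctr = self.clicks[doc] / shown if shown else 0.0
            return (1 - self.alpha) * scored_docs[doc] + self.alpha * ctr
        return sorted(scored_docs, key=blended, reverse=True)

r = FeedbackRanker()
for _ in range(5):
    r.record("b.html", clicked=True)   # users keep choosing the 2nd result
    r.record("a.html", clicked=False)
order = r.rank({"a.html": 0.9, "b.html": 0.8})  # feedback flips the order
```

With five unanimous clicks, `b.html` overtakes the nominally better-scored `a.html`, which is the self-adaptability the abstract describes.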

  7. Content-Based Publish/Subscribe System for Web Syndication

    Institute of Scientific and Technical Information of China (English)

    Zeinab Hmedeh; Harry Kourdounakis; Vassilis Christophides; Cedric du Mouza; Michel Scholl; Nicolas Travers

    2016-01-01

    Content syndication has become a popular way for timely delivery of frequently updated information on the Web. Today, web syndication technologies such as RSS or Atom are used in a wide variety of applications spreading from large-scale news broadcasting to medium-scale information sharing in scientific and professional communities. However, they exhibit serious limitations for dealing with information overload in Web 2.0. There is a vital need for efficient real-time filtering methods across feeds, to allow users to effectively follow personally interesting information. We investigate in this paper three indexing techniques for users’ subscriptions based on inverted lists or on an ordered trie for exact and partial matching. We present analytical models for memory requirements and matching time and we conduct a thorough experimental evaluation to exhibit the impact of critical parameters of realistic web syndication workloads.
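The count-based inverted-list matching the paper analyzes can be sketched as a minimal broker (my simplification; the paper's variants add ordered-key and trie indexes plus analytical memory/matching-time models):

```python
from collections import defaultdict

class SubscriptionIndex:
    """Inverted-list matching for keyword subscriptions: an item matches a
    subscription when it contains every one of the subscription's terms."""

    def __init__(self):
        self.postings = defaultdict(list)  # term -> subscription ids
        self.sizes = {}                    # subscription id -> number of terms

    def subscribe(self, sub_id, terms):
        terms = {t.lower() for t in terms}
        self.sizes[sub_id] = len(terms)
        for t in terms:
            self.postings[t].append(sub_id)

    def match(self, item_terms):
        """Count, per subscription, how many of its terms the item hits;
        a full count means the subscription fires."""
        hits = defaultdict(int)
        for t in {t.lower() for t in item_terms}:
            for sid in self.postings.get(t, ()):
                hits[sid] += 1
        return sorted(s for s, c in hits.items() if c == self.sizes[s])

idx = SubscriptionIndex()
idx.subscribe("s1", ["quantum", "search"])
idx.subscribe("s2", ["rss", "atom"])
matched = idx.match(["Quantum", "search", "algorithm"])  # -> ['s1']
```

Only postings for terms actually present in the incoming item are touched, which is what makes this scheme viable for high-rate feed filtering.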

  8. Music Publishing

    OpenAIRE

    A.Manuel B. Simoes; J.Joao Dias De Almeida

    2003-01-01

Current music publishing on the Internet is mainly concerned with sound publishing. We claim that music publishing is not only making sound available but also defining relations between a set of music objects such as music scores, guitar chords, lyrics, and their metadata. We want an easy way to publish music on the Internet, to make high-quality paper booklets, and even to create Audio CDs. In this document we present a workbench for music publishing based on open formats, using open-source t...

  9. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification.

    Science.gov (United States)

    Sager, Jennifer E; Yu, Jingjing; Ragueneau-Majlessi, Isabelle; Isoherranen, Nina

    2015-11-01

Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms "PBPK" and "physiologically based pharmacokinetic model" to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines.

  10. [Searching for evidence-based data].

    Science.gov (United States)

    Dufour, J-C; Mancini, J; Fieschi, M

    2009-08-01

    The foundation of evidence-based medicine is critical analysis and synthesis of the best data available concerning a given health problem. These factual data are accessible because of the availability on the Internet of web tools specialized in research for scientific publications. A bibliographic database is a collection of bibliographic references describing the documents indexed. Such a reference includes at least the title, summary (or abstract), a set of keywords, and the type of publication. To conduct a strategically effective search, it is necessary to formulate the question - clinical, diagnostic, prognostic, or related to treatment or prevention - in a form understandable by the research engine. Moreover, it is necessary to choose the specific database or databases, which may have particular specificity, and to analyze the results rapidly to refine the strategy. The search for information is facilitated by the knowledge of the standardized terms commonly used to describe the desired information. These come from a specific thesaurus devoted to document indexing. The most frequently used is MeSH (Medical Subject Heading). The principal bibliographic database whose references include a set of describers from the MeSH thesaurus is Medical Literature Analysis and Retrieval System Online (Medline), which has in turn become a subpart of a still more vast bibliography called PubMed, which indexes an additional 1.4 million references. Numerous other databases are maintained by national or international entities. These include the Cochrane Library, Embase, and the PASCAL and FRANCIS databases.

  11. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    Science.gov (United States)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, json, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, sub-setting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.

  12. Mashup Based Content Search Engine for Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-05-01

Full Text Available A mashup-based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with Yahoo!JAPAN Web SearchAPI, Yahoo!JAPAN Image searchAPI, YouTube Data API, and Amazon Product Advertising API. The retrieved results are merged and linked to each other, so the different types of content can be referred to once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches across a variety of content types: images, documents, PDF files, and moving pictures.

  13. Community Colleges, school data base attribute, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Community Colleges dataset, was produced all or in part from Published Reports/Deeds information as of 2006. It is described as 'school data base attribute'....

  14. Building Permits, permits plus data base, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Building Permits dataset, was produced all or in part from Published Reports/Deeds information as of 2006. It is described as 'permits plus data base'. Data by...

  15. View-Based Searching Systems--Progress Towards Effective Disintermediation.

    Science.gov (United States)

    Pollitt, A. Steven; Smith, Martin P.; Treglown, Mark; Braekevelt, Patrick

    This paper presents the background and then reports progress made in the development of two view-based searching systems--HIBROWSE for EMBASE, searching Europe's most important biomedical bibliographic database, and HIBROWSE for EPOQUE, improving access to the European Parliament's Online Query System. The HIBROWSE approach to searching promises…

  16. Study of a Quantum Framework for Search Based Software Engineering

    Science.gov (United States)

    Wu, Nan; Song, Fangmin; Li, Xiangdong

    2013-06-01

    The Search Based Software Engineering (SBSE) is widely used in the software engineering to identify optimal solutions. The traditional methods and algorithms used in SBSE are criticized due to their high costs. In this paper, we propose a rapid modified-Grover quantum searching method for SBSE, and theoretically this method can be applied to any search-space structure and any type of searching problems.
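For reference, the standard Grover iteration that modified-Grover methods start from can be simulated classically in a few lines (a toy statevector sketch of plain Grover search, not the authors' modified algorithm):

```python
import math

def grover(n_items, marked, iterations=None):
    """Toy statevector simulation of Grover search over n_items basis states
    with a set of marked indices."""
    if iterations is None:  # near-optimal count: ~ (pi/4) * sqrt(N/M)
        iterations = round((math.pi / 4) * math.sqrt(n_items / len(marked)))
    amp = [1.0 / math.sqrt(n_items)] * n_items  # uniform superposition
    for _ in range(iterations):
        for m in marked:                        # oracle: phase-flip marked items
            amp[m] = -amp[m]
        mean = sum(amp) / n_items               # diffusion: inversion about mean
        amp = [2.0 * mean - a for a in amp]
    return amp

amp = grover(16, {3})        # 3 iterations for N=16 with one marked item
p_success = amp[3] ** 2      # ~0.96 after O(sqrt(N)) steps
```

Measuring after roughly the square root of N iterations finds the marked item with high probability, versus the ~N/2 probes an unstructured classical search needs.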

  17. Empirical Evidences in Citation-Based Search Engines: Is Microsoft Academic Search dead?

    OpenAIRE

    Orduna-Malea, Enrique; Ayllon, Juan Manuel; Martin-Martin, Alberto; Lopez-Cozar, Emilio Delgado

    2014-01-01

The goal of this working paper is to summarize the main empirical evidence provided by the scientific community as regards the comparison between the two main citation-based academic search engines: Google Scholar and Microsoft Academic Search, paying special attention to the following issues: coverage, correlations between journal rankings, and usage of these academic search engines. Additionally, self-elaborated data is offered, which is intended to provide current evidence about the popul...

  18. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

Full Text Available The present investigation concerns the evaluation, comparison, and analysis of search options existing within web-based meta-search engines. 64 meta-search engines were identified; 19 meta-search engines that were free, accessible, and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied used the AND operator, phrase search, setting of the number of results displayed, previous search query storage, and help tutorials. Nevertheless, none of them offered search options for hypertext searching or displaying the size of the pages searched. 94.7% support features such as truncation, keywords in title and URL search, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries, and other internet search tools.

  19. Distributing Knight. Using Type-Based Publish/Subscribe for Building Distributed Collaboration Tools

    DEFF Research Database (Denmark)

    Damm, Christian Heide; Hansen, Klaus Marius

    2002-01-01

Distributed applications are hard to understand, build, and evolve. The need for decoupling, flexibility, and heterogeneity in distributed collaboration tools presents particular problems; for such applications, having the right abstractions and primitives for distributed communication becomes even more important. We present Distributed Knight, an extension to the Knight tool, for distributed, collaborative, and gesture-based object-oriented modelling. Distributed Knight was built using the type-based publish/subscribe paradigm. Based on this case, we argue that type-based publish/subscribe provides a natural and effective abstraction for developing distributed collaboration tools.

  20. Mashup Based Content Search Engine for Mobile Devices

    OpenAIRE

    Kohei Arai

    2013-01-01

A mashup-based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web Search API, Yahoo!JAPAN Image Search API, YouTube Data API, and Amazon Product Advertising API. The retrieved results are merged and linked to each other, so that the different types of contents can be referred to once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show usefulness and effectiv...

  1. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

Full Text Available The Internet has become a daily-used instrument for most of us, for professional or personal reasons. We do not even remember the times when a computer and a broadband connection were luxury items. More and more people are relying on the complicated web network to find the needed information. This paper presents an overview of Internet search related issues and search engines, and describes the parties and the basic mechanism embedded in a search for web-based information resources. It also presents ways to increase the efficiency of web searches through a better understanding of what search engines ignore in website content.

  2. Semantic Web Based Efficient Search Using Ontology and Mathematical Model

    Directory of Open Access Journals (Sweden)

    K.Palaniammal

    2014-01-01

Full Text Available The semantic web is the forthcoming technology in the world of search engines. It focuses mainly on search that is meaningful rather than the syntactic search prevailing now. The proposed work concerns semantic search in the educational domain. In this paper, we propose a semantic web based efficient search using ontology and a mathematical model that takes into account misleading and unmatched service information, lack of relevant domain knowledge, and wrong service queries. To solve these issues, the framework is designed to make three major contributions: an ontology knowledge base, Natural Language Processing (NLP) techniques, and a search model. The ontology knowledge base stores domain-specific service ontologies and service description entity (SDE) metadata. The search model, which includes the mathematical model, retrieves SDE metadata efficiently for education learners. The NLP techniques provide spell-check and synonym-based search. The results are retrieved and stored in an ontology, which in turn prevents data redundancy. The results are more accurate, and the search is sensitive to spelling and synonymous context. This approach reduces the user's time and effort in finding the correct results for his/her search text, and our model provides more accurate results. A series of experiments is conducted to evaluate the mechanism and the employed mathematical model.
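The synonym-based part of such a search can be illustrated with a minimal sketch in plain Python. The synonym table and the token-overlap ranking below are illustrative stand-ins for the paper's ontology-backed NLP components, not its actual implementation:

```python
def expand_query(terms, synonyms):
    # expand each query term with its synonym set before matching
    expanded = set()
    for term in terms:
        expanded.add(term)
        expanded.update(synonyms.get(term, ()))
    return expanded

def search(docs, terms, synonyms):
    # rank documents by how many expanded query terms they contain
    query = expand_query(terms, synonyms)
    scored = [(len(query & set(doc.split())), doc) for doc in docs]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]
```

With a synonym table mapping "retrieve" to "search" and "lookup", a query for "retrieve model" also matches documents that only mention "lookup".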

  3. Considerations for the development of task-based search engines

    DEFF Research Database (Denmark)

    Petcu, Paula; Dragusin, Radu

    2013-01-01

Based on previous experience from working on a task-based search engine, we present a list of suggestions and ideas for an Information Retrieval (IR) framework that could inform the development of next generation professional search systems. The specific task that we start from is the clinicians' information need in finding rare disease diagnostic hypotheses at the time and place where medical decisions are made. Our experience from the development of a search engine focused on supporting clinicians in completing this task has provided us valuable insights into what aspects should be considered by the developers of vertical search engines.

  4. Decomposition During Search for Propagation-Based Constraint Solvers

    CERN Document Server

    Mann, Martin; Will, Sebastian

    2007-01-01

    We describe decomposition during search (DDS), an integration of and/or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have implemented DDS for the Gecode constraint programming library. Two applications, solution counting in graph coloring and protein structure prediction, exemplify the benefits of DDS in practice.

  5. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  6. Accelerator-based neutrino oscillation searches

    Science.gov (United States)

    Whitehouse, D. A.; Rameika, R.; Stanton, N.

    This paper attempts to summarize the neutrino oscillation section of the Workshop on Future Directions in Particle and Nuclear Physics at Multi-GeV Hadron Beam Facilities. There were very lively discussions about the merits of the different oscillation channels, experiments, and facilities, but we believe a substantial consensus emerged. First, the next decade is one of great potential for discovery in neutrino physics, but it is also one of great peril. The possibility that neutrino oscillations explain the solar neutrino and atmospheric neutrino experiments, and the indirect evidence that Hot Dark Matter (HDM) in the form of light neutrinos might make up 30% of the mass of the universe, point to areas where accelerator-based experiments could play a crucial role in piecing together the puzzle. At the same time, the field faces a very uncertain future. The LSND experiment at LAMPF is the only funded neutrino oscillation experiment in the United States and it is threatened by the abrupt shutdown of LAMPF proposed for fiscal 1994. The future of neutrino physics at the Brookhaven National Laboratory AGS depends on the continuation of High Energy Physics (HEP) funding after the RHIC startup. Most proposed neutrino oscillation searches at Fermilab depend on the completion of the Main Injector project and on the construction of a new neutrino beamline, which is uncertain at this point. The proposed KAON facility at TRIUMF would provide a neutrino beam similar to that at the AGS but with a much increased intensity. The future of KAON is also uncertain. Despite the difficult obstacles present, there is a real possibility that we are on the verge of understanding the masses and mixings of the neutrinos. The physics importance of such a discovery cannot be overstated. The current experimental status and future possibilities are discussed.

  7. Archetype based search in an IHE XDS environment.

    Science.gov (United States)

    Rinner, Christoph; Kohler, Michael; Hübner-Bloder, Gudrun; Saboor, Samrend; Ammenwerth, Elske; Duftschmid, Georg

    2012-01-01

    To prevent information overload of physicians when accessing EHRs we introduce a method to extend the IHE XDS profile metadata-based search towards a content-based search. Detailed queries are created based on predefined information needs mapped to ISO/EN 13606 Archetypes. They are aggregated to a metadata-based query to retrieve all relevant documents, which are then analyzed for the desired contents. The results are presented in a tabular form. The content-based search in IHE-XDS could be implemented efficiently and was found helpful by the evaluating physicians.

  8. Grover quantum searching algorithm based on weighted targets

    Institute of Scientific and Technical Information of China (English)

    Li Panchi; Li Shiyong

    2008-01-01

The current Grover quantum searching algorithm cannot identify the difference in importance of the search targets when it is applied to an unsorted quantum database, and the probability for each search target is equal. To solve this problem, a Grover searching algorithm based on weighted targets is proposed. First, each target is endowed with a weight coefficient according to its importance. Applying these different weight coefficients, the targets are represented as quantum superposition states. Second, the novel Grover searching algorithm based on the quantum superposition of the weighted targets is constructed. Using this algorithm, the probability of getting each target can be approximated to the corresponding weight coefficient, which shows the flexibility of this algorithm. Finally, the validity of the algorithm is proved by a simple searching example.
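The unweighted baseline that this paper generalizes can be simulated classically: apply the sign-flip oracle to the target amplitude, then invert all amplitudes about their mean. This is a toy simulation of standard Grover search, not the weighted variant itself:

```python
import math

def grover_probabilities(n_items, target, iterations):
    # classical simulation of Grover's algorithm on an unsorted database:
    # sign-flip oracle on the target, then inversion about the mean
    amps = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amps[target] = -amps[target]             # oracle marks the target
        mean = sum(amps) / n_items
        amps = [2.0 * mean - a for a in amps]    # diffusion operator
    return [a * a for a in amps]                 # measurement probabilities
```

For 8 items and 2 iterations (about pi/4 times sqrt(8)), the target's measurement probability rises from 1/8 to roughly 0.94; the weighted variant instead steers these probabilities toward the weight coefficients.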

  9. Weblog Search Engine Based on Quality Criteria

    Directory of Open Access Journals (Sweden)

    F. Azimzadeh,

    2011-01-01

Full Text Available Nowadays, an increasing amount of human knowledge is placed in computerized repositories such as the World Wide Web. This gives rise to the problem of how to locate specific pieces of information in these often quite unstructured repositories; search engines are the best available solution. Some studies show that almost half of the traffic to blog servers comes from search engines. The more outgoing and informal social nature of the blogosphere opens the opportunity for exploiting more socially-oriented features. The nature of blogs, which are usually personal and informal, dynamic, and constructed on new relational links, requires new quality measurements for blog search engines. Link analysis algorithms that exploit the Web graph may not work well in the blogosphere in general: Gonçalves et al. (2010) indicated that most of the popular blogs in their dataset (70%) have a PageRank value equal to -1, being thus almost invisible to the search engine. We expect that incorporating blog-specific quality criteria would make such blogs more desirably retrieved by search engines.

  10. Spaced-based search coil magnetometers

    Science.gov (United States)

    Hospodarsky, George B.

    2016-12-01

Search coil magnetometers are one of the primary tools used to study the magnetic component of low-frequency electromagnetic waves in space. Their relatively small size, mass, and power consumption, coupled with a good frequency range and sensitivity, make them ideal for spaceflight applications. The basic design of a search coil magnetometer consists of many thousands of turns of wire wound on a high permeability core. When a time-varying magnetic field passes through the coil, a time-varying voltage is induced due to Faraday's law of magnetic induction. The output of the coil is usually attached to a preamplifier, which amplifies the induced voltage and conditions the signal for transmission to the main electronics (usually a low-frequency radio receiver). Search coil magnetometers are usually used in conjunction with electric field antennas to measure electromagnetic plasma waves in the frequency range of a few hertz to a few tens of kilohertz. Search coil magnetometers are used to determine the properties of waves, such as comparing the relative electric and magnetic field amplitudes of the waves, or to investigate wave propagation parameters, such as Poynting flux and wave normal vectors. On a spinning spacecraft, they are also sometimes used to determine the background magnetic field. This paper presents some of the basic design criteria of search coil magnetometers and discusses design characteristics of sensors flown on a number of spacecraft.

  11. INFORMATION TECHNOLOGY "KEY TO TEXT" FOR SEMANTIC SEARCH AND INDEXING OF TEXTUAL INFORMATION - AN ESSENTIAL TOOL FOR ELECTRONIC PUBLISHING

    OpenAIRE

    M. Kreines

    2000-01-01

Electronic editions give essentially new features of structure and organization for searching information, both to the reader and to information service providers. Before the computer revolution, any edition on a library shelf, or under a veil of dust on a desk, meant no more than was written in its catalogue card until the reader took it in his hands. (Certainly, we do not speak here about editions surrounded with the light of legends.) Only the electronic edition is capable to speak at t...

  12. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether he/she should correct the segment or not. By means of a search engine, the checker also suggests use of other expressions that appear on the Web more often than the expression he/she actually wrote.
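The checker's core hypothesis, that a text segment with few or no web hits is probably badly written, reduces to a small filter. In this sketch `web_hit_count` is a hypothetical stand-in for querying a real search engine, which the paper does through an Internet search API:

```python
def flag_odd_segments(segments, web_hit_count, threshold=1):
    # flag segments the web has essentially "never seen";
    # web_hit_count(s) stands in for a real search-engine hit count
    return [s for s in segments if web_hit_count(s) < threshold]
```

In practice the checker would then show the user the web contexts retrieved for each flagged segment, so the user can decide whether to correct it.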

  13. Semantic Search among Heterogeneous Biological Databases Based on Gene Ontology

    Institute of Scientific and Technical Information of China (English)

    Shun-Liang CAO; Lei QIN; Wei-Zhong HE; Yang ZHONG; Yang-Yong ZHU; Yi-Xue LI

    2004-01-01

Semantic search is a key issue in the integration of heterogeneous biological databases. In this paper, we present a methodology for implementing semantic search in BioDW, an integrated biological data warehouse. Two tables are presented: the DB2GO table, to correlate Gene Ontology (GO) annotated entries from BioDW data sources with GO, and the semantic similarity table, to record similarity scores derived from any pair of GO terms. Based on the two tables, multifarious ways for semantic search are provided, and the corresponding entries in heterogeneous biological databases can be expediently searched in semantic terms.

  14. Routing Optimization Based on Taboo Search Algorithm for Logistic Distribution

    Directory of Open Access Journals (Sweden)

    Hongxue Yang

    2014-04-01

Full Text Available Along with the widespread application of electronic commerce in modern business, logistic distribution has become increasingly important. More and more enterprises recognize that logistic distribution plays an important role in the process of production and sales. A good routing for logistic distribution can cut transport cost and improve efficiency. To that end, a routing optimization based on taboo search for logistic distribution is proposed in this paper. Taboo search is a metaheuristic method that performs local search, used here for logistic optimization. The taboo search is employed to accelerate convergence, and the aspiration criterion is combined with the heuristic algorithm to solve the routing optimization. Simulation results demonstrate that the optimal routing in logistic distribution can be quickly obtained by the taboo search algorithm.
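The abstract's main ingredients, a move neighbourhood, a tabu list, and an aspiration criterion that overrides tabu status for a new global best, can be sketched as follows. This is a minimal single-vehicle routing version with a swap neighbourhood; the depot convention, tenure, and distance matrix are illustrative choices, not the paper's:

```python
import itertools

def route_cost(route, dist):
    # cost of serving customers in order, starting and ending at depot 0
    path = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def tabu_search(customers, dist, iters=200, tenure=5):
    current = list(customers)
    best = current[:]
    tabu = {}  # move -> last iteration at which it is still forbidden
    for it in range(iters):
        move_found = None
        for i, j in itertools.combinations(range(len(current)), 2):
            neigh = current[:]
            neigh[i], neigh[j] = neigh[j], neigh[i]
            cost = route_cost(neigh, dist)
            move = tuple(sorted((current[i], current[j])))
            # aspiration criterion: a tabu move is allowed if it beats the best route
            if tabu.get(move, -1) >= it and cost >= route_cost(best, dist):
                continue
            if move_found is None or cost < move_found[2]:
                move_found = (move, neigh, cost)
        if move_found is None:
            break  # every move is tabu and none improves the incumbent
        move, current, cost = move_found
        tabu[move] = it + tenure
        if cost < route_cost(best, dist):
            best = current[:]
    return best, route_cost(best, dist)
```

On tiny instances this matches the brute-force optimum; the tabu list is what keeps the search from cycling between the same pair swaps.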

  15. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines are designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines and interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections, and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  16. LAHS: A novel harmony search algorithm based on learning automata

    Science.gov (United States)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
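A bare-bones harmony search loop looks like the sketch below; HMCR, PAR and bw appear exactly as the fixed parameters that the paper's learning automata would adapt on the fly. The sphere objective and bounds in the usage are illustrative:

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000):
    lo, hi = bounds
    # harmony memory: a pool of candidate solutions
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:            # harmony memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:         # pitch adjustment within bandwidth bw
                    v += random.uniform(-bw, bw)
            else:                                 # random re-initialization
                v = random.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        cost = f(new)
        worst = max(range(hms), key=costs.__getitem__)
        if cost < costs[worst]:                   # replace the worst harmony
            memory[worst], costs[worst] = new, cost
    best = min(range(hms), key=costs.__getitem__)
    return memory[best], costs[best]
```

Because performance hinges on the hand-set HMCR, PAR and bw values, replacing them with values selected by learning automata, as LAHS does, removes the manual tuning step.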

  17. Evidence-based librarianship: searching for the needed EBL evidence.

    Science.gov (United States)

    Eldredge, J D

    2000-01-01

    This paper discusses the challenges of finding evidence needed to implement Evidence-Based Librarianship (EBL). Focusing first on database coverage for three health sciences librarianship journals, the article examines the information contents of different databases. Strategies are needed to search for relevant evidence in the library literature via these databases, and the problems associated with searching the grey literature of librarianship. Database coverage, plausible search strategies, and the grey literature of library science all pose challenges to finding the needed research evidence for practicing EBL. Health sciences librarians need to ensure that systems are designed that can track and provide access to needed research evidence to support Evidence-Based Librarianship (EBL).

  18. A web-based rapid assessment tool for production publishing solutions

    Science.gov (United States)

    Sun, Tong

    2010-02-01

Solution assessment is a critical first step in understanding and measuring the business process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task which involves extensive domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of various estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. This tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.

  19. Realizing IoT service's policy privacy over publish/subscribe-based middleware.

    Science.gov (United States)

    Duan, Li; Zhang, Yang; Chen, Shiping; Wang, Shiyao; Cheng, Bo; Chen, Junliang

    2016-01-01

    The publish/subscribe paradigm makes IoT service collaborations more scalable and flexible, due to the space, time and control decoupling of event producers and consumers. Thus, the paradigm can be used to establish large-scale IoT service communication infrastructures such as Supervisory Control and Data Acquisition systems. However, preserving IoT service's policy privacy is difficult in this paradigm, because a classical publisher has little control of its own event after being published; and a subscriber has to accept all the events from the subscribed event type with no choice. Few existing publish/subscribe middleware have built-in mechanisms to address the above issues. In this paper, we present a novel access control framework, which is capable of preserving IoT service's policy privacy. In particular, we adopt the publish/subscribe paradigm as the IoT service communication infrastructure to facilitate the protection of IoT services policy privacy. The key idea in our policy-privacy solution is using a two-layer cooperating method to match bi-directional privacy control requirements: (a) data layer for protecting IoT events; and (b) application layer for preserving the privacy of service policy. Furthermore, the anonymous-set-based principle is adopted to realize the functionalities of the framework, including policy embedding and policy encoding as well as policy matching. Our security analysis shows that the policy privacy framework is Chosen-Plaintext Attack secure. We extend the open source Apache ActiveMQ broker by building into a policy-based authorization mechanism to enforce the privacy policy. The performance evaluation results indicate that our approach is scalable with reasonable overheads.
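The application-layer half of the idea, a publisher-supplied policy deciding which subscribers may receive an event, can be sketched with a toy in-process broker. This deliberately omits the paper's anonymous-set encoding, encryption, and ActiveMQ integration; the topic and attribute names are illustrative:

```python
class Broker:
    # minimal topic-based publish/subscribe broker with per-event
    # policy predicates: a toy version of application-layer policy matching
    def __init__(self):
        self.subs = {}  # topic -> list of (attributes, callback)

    def subscribe(self, topic, attributes, callback):
        self.subs.setdefault(topic, []).append((attributes, callback))

    def publish(self, topic, event, policy):
        # deliver only to subscribers whose attributes satisfy the
        # publisher's policy predicate
        for attributes, callback in self.subs.get(topic, []):
            if policy(attributes):
                callback(event)
```

This restores some publisher control after publication: in the paper's framework the policy itself is additionally kept private from the broker and subscribers, which a plain predicate like this does not do.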

  20. Text-based plagiarism in scientific publishing: issues, developments and education.

    Science.gov (United States)

    Li, Yongyan

    2013-09-01

    Text-based plagiarism, or copying language from sources, has recently become an issue of growing concern in scientific publishing. Use of CrossCheck (a computational text-matching tool) by journals has sometimes exposed an unexpected amount of textual similarity between submissions and databases of scholarly literature. In this paper I provide an overview of the relevant literature, to examine how journal gatekeepers perceive textual appropriation, and how automated plagiarism-screening tools have been developed to detect text matching, with the technique now available for self-check of manuscripts before submission; I also discuss issues around English as an additional language (EAL) authors and in particular EAL novices being the typical offenders of textual borrowing. The final section of the paper proposes a few educational directions to take in tackling text-based plagiarism, highlighting the roles of the publishing industry, senior authors and English for academic purposes professionals.

  1. METADATA EXPANDED SEMANTICALLY BASED RESOURCE SEARCH IN EDUCATION GRID

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

With the rapid increase of educational resources, how to search for a necessary educational resource quickly is one of the most important issues. Educational resources are distributed and heterogeneous, the same characteristics as Grid resources; therefore, the technology of Grid resource search was adopted to implement educational resource search. Motivated by the insufficiency of current resource search methods based on metadata, a method of extracting semantic relations between the words constituting metadata is proposed. We mainly focus on acquiring synonymy, hyponymy, hypernymy and parataxis relations. In our schema, we extract texts related to the metadata to be expanded from the text space through text extraction templates. Next, metadata are obtained through metadata extraction templates. Finally, we compute semantic similarity to eliminate false relations and construct a semantic expansion knowledge base. The proposed method has been applied on the education grid.

  2. Entropy-Based Search Algorithm for Experimental Design

    CERN Document Server

    Malakar, N K

    2010-01-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. ...
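The selection rule described here is compact: score each candidate experiment by the Shannon entropy of the outcomes the probable models predict for it, and run the highest-scoring one. The threshold models in the test are illustrative, and this greedy scorer ignores the nested-sampling machinery the paper adds for high-dimensional experiment spaces:

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    # Shannon entropy (in bits) of the empirical outcome distribution
    counts = Counter(outcomes)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def most_informative_experiment(models, experiments):
    # pick the experiment whose predicted outcomes across the probable
    # models are maximally uncertain, i.e. where the models disagree most
    return max(experiments, key=lambda x: shannon_entropy([m(x) for m in models]))
```

Three threshold models that agree far from their thresholds lead the rule to probe exactly where they disagree, which is the point of maximum expected information gain.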

  3. Free Search Algorithm Based Estimation in WSN Location

    Institute of Scientific and Technical Information of China (English)

    ZHOU Hui; LI Dan-mei; SHAO Shi-huang; XU Chen

    2009-01-01

This paper proposes a novel intelligent estimation algorithm for Wireless Sensor Network node localization based on Free Search, which converts parameter estimation into on-line optimization of a nonlinear function and estimates the coordinates of sensor nodes using Free Search optimization. Compared to least-squares estimation algorithms, the localization accuracy is increased significantly, which has been verified by the simulation results.

  4. Variable Neighborhood Search Based Algorithm for University Course Timetabling Problem

    OpenAIRE

    Kralev, Velin; Kraleva, Radoslava

    2016-01-01

In this paper a variable neighborhood search approach is presented as a method for solving combinatorial optimization problems. A variable neighborhood search based algorithm for solving the university course timetabling problem has been developed. This algorithm is used to solve a real problem in university course timetable design. It is compared with other algorithms that are tested on the same sets of input data. The object and the methodology of study are p...

  5. An Index Based Skip Search Multiple Pattern Matching Algorithm

    OpenAIRE

    Raju Bhukya; Balram Parmer,; Anand Kulkarni

    2011-01-01

DNA pattern matching, the problem of finding subsequences within a long DNA sequence, has many applications in computational biology. As the sequences can be long, matching can be an expensive operation, especially when approximate matching is allowed. Searching DNA related data is a common activity for molecular biologists. In this paper we explore the applicability of a new pattern matching technique called the Index based Skip Search Multiple Pattern matching algorithm (ISMPM) for DNA sequences...
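The single-pattern core of a skip-search approach (in the style of Charras and Lecroq's Skip Search, which an index-based multiple-pattern variant would extend) buckets each character's positions in the pattern and then probes the text only every m characters; the DNA strings below are illustrative:

```python
def skip_search(pattern, text):
    # bucket the positions of each character in the pattern,
    # then probe the text once per window of length m
    m, n = len(pattern), len(text)
    buckets = {}
    for i, ch in enumerate(pattern):
        buckets.setdefault(ch, []).append(i)
    matches = []
    j = m - 1
    while j < n:
        # every length-m window contains exactly one probe position,
        # so each occurrence is tried at least once
        for i in buckets.get(text[j], []):
            start = j - i
            if 0 <= start <= n - m and text[start:start + m] == pattern:
                matches.append(start)
        j += m
    return sorted(set(matches))
```

Because most probes hit characters that never occur in the pattern, on four-letter DNA alphabets the average number of comparisons stays well below one per text character.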

  6. Beacon-Based Service Publishing Framework in Multiservice Wi-Fi Hotspots

    Directory of Open Access Journals (Sweden)

    Di Sorte Dario

    2007-01-01

    Full Text Available In an expected future multiaccess and multiservice IEEE 802.11 environment, the problem of providing users with useful service-related information to support a correct rapid network selection is expected to become a very important issue. A feasible short-term 802.11-tailored working solution, compliant with existing equipment, is to publish service information encoded within the SSID information element within beacon frames. This makes it possible for an operator to implement service publishing in 802.11 networks while waiting for a standardized mechanism. Also, this straightforward approach has allowed us to evaluate experimentally the performance of a beacon-based service publishing solution. In fact, the main focus of the paper is indeed to present a quantitative comparison of service discovery times between the legacy scenario, where the user is forced to associate and authenticate with a network point of access to check its service offer, and the enhanced scenario where the set of service-related information is broadcasted within beacons. These discovery times are obtained by processing the results of a measurement campaign performed in a multiaccess/service 802.11 environment. This analysis confirms the effectiveness of the beacon-based approach. We also show that the cost in terms of wireless bandwidth consumption of such solution is low.

  7. Architectural Analysis of Systems Based on the Publisher-Subscriber Style

    Science.gov (United States)

    Ganesun, Dharmalingam; Lindvall, Mikael; Ruley, Lamont; Wiegand, Robert; Ly, Vuong; Tsui, Tina

    2010-01-01

Architectural styles impose constraints on both the topology and the interaction behavior of involved parties. In this paper, we propose an approach for analyzing implemented systems based on the publisher-subscriber architectural style. From the style definition, we derive a set of reusable questions and show that some of them can be answered statically whereas others are best answered using dynamic analysis. The paper explains how the results of static analysis can be used to orchestrate dynamic analysis. The proposed method was successfully applied to NASA's Goddard Mission Services Evolution Center (GMSEC) software product line. The results show that the GMSEC has a) a novel reusable vendor-independent middleware abstraction layer that allows NASA's missions to configure the middleware of interest without changing the publishers' or subscribers' source code, and b) some high-priority bugs due to behavioral discrepancies among different implementations of the same APIs for different vendors, which had eluded testing and code reviews.

  8. In Search of...Brain-Based Education.

    Science.gov (United States)

    Bruer, John T.

    1999-01-01

    Debunks two ideas appearing in brain-based education articles: the educational significance of brain laterality (right brain versus left brain) and claims for a sensitive period of brain development in young children. Brain-based education literature provides a popular but misleading mix of fact, misinterpretation, and fantasy. (47 references (MLH)

  9. A Shape Based Image Search Technique

    Directory of Open Access Journals (Sweden)

    Aratrika Sarkar

    2014-08-01

    Full Text Available This paper describes an interactive application we have developed based on a shape-based image retrieval technique. The key concepts described in the project are: (i) matching of images based on contour matching; (ii) matching of images based on edge matching; (iii) matching of images based on pixel matching of colours. Further, the application facilitates matching of images invariant of transformations such as (i) translation; (ii) rotation; (iii) scaling. The key feature of the system is that it graphically shows the percentage mismatch between the uploaded image and the images already in the database, while its integrity rests on the unique matching techniques used for optimum results, which increases the accuracy of the system. For example, when a user uploads an image of, say, a mango leaf, the application shows all mango leaves present in the database as well as other leaves matching the colour and shape of the uploaded leaf.

  10. Personalized Search Based on Context-Centric Model

    Directory of Open Access Journals (Sweden)

    Mingyang Liu

    2013-07-01

    Full Text Available With the rapid development of the World Wide Web, a huge amount of data has been growing exponentially in our daily life. Users spend much more time than before searching for the information they really need. Even when they submit exactly the same search input, different users may have different goals. Moreover, users commonly annotate information resources or formulate search queries according to their own behaviors. In practice, this process yields fuzzy results and is time-consuming. To address these problems, we propose a methodology that combines users' context, profiles, and folksonomies to optimize personalized search. At the end of this paper, we present an experiment evaluating our methodology, from which we conclude that our approach performs better than other samples.

  11. A Domain Specific Ontology Based Semantic Web Search Engine

    CERN Document Server

    Mukhopadhyay, Debajyoti; Mukherjee, Sreemoyee; Bhattacharya, Jhilik; Kim, Young-Chon

    2011-01-01

    Since its emergence in the 1990s, the World Wide Web (WWW) has rapidly evolved into a huge mine of global information, and it is growing in size every day. The presence of a huge amount of resources on the Web thus poses a serious problem of accurate search. This is mainly because today's Web is a human-readable Web where information cannot be easily processed by machines. The highly sophisticated, efficient keyword-based search engines that have evolved today have not been able to bridge this gap. This gives rise to the concept of the Semantic Web, envisioned by Tim Berners-Lee as a Web of machine-interpretable information that expresses information in a machine-processable form. Based on Semantic Web technologies, we present in this paper the design methodology and development of a semantic Web search engine which provides exact search results for a domain-specific search. This search engine was developed for an agricultural Website which hosts agricultural information about the state of West Bengal.

  12. Entropy-Based Search Algorithm for Experimental Design

    Science.gov (United States)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
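    The selection criterion this abstract describes, picking the experiment whose predicted outcome distribution has maximal Shannon entropy, can be sketched in a few lines. This is an illustrative toy, not the authors' nested entropy sampling algorithm; the candidate experiments and their outcome distributions are invented:

    ```python
    import math

    def shannon_entropy(probs):
        """Shannon entropy (in bits) of a discrete outcome distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def most_informative(experiments):
        """Pick the experiment whose predicted outcomes are most uncertain."""
        return max(experiments, key=lambda e: shannon_entropy(experiments[e]))

    # Invented candidate experiments: each maps to the outcome distribution
    # predicted by the current set of probable models.
    candidates = {
        "exp_A": [0.90, 0.05, 0.05],  # outcome nearly certain -> low entropy
        "exp_B": [0.50, 0.50, 0.00],
        "exp_C": [1/3, 1/3, 1/3],     # maximally uncertain -> high entropy
    }
    print(most_informative(candidates))  # -> exp_C
    ```

    Nested entropy sampling would replace the exhaustive `max` with threshold-driven sampling of the experiment space, which is what pays off when that space is high-dimensional.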

  13. Aquaculture: Algae. (Latest citations from the Life Sciences Collection data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-05-01

    The bibliography contains citations concerning the commercial cultivation of algae as a facet of aquaculture. Topics include descriptions and characteristics of algal species, environmental variables affecting productivity, nutritional aspects, infestation and disease, genetic manipulation, and production technology. End product applications examine algae as biomass for energy production, food source for humans, animal feed source, and a source for chemical by-products such as chlorophylls. Harvesting of algae as a source of single-celled protein is referenced in a related bibliography. (Contains a minimum of 171 citations and includes a subject term index and title list.)

  14. Ecosystem models. (latest citations from the NTIS data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-04-01

    The bibliography contains citations concerning the design and application of ecosystem models. Ecosystem simulation and characterization models, together with models for marine biology, plants, microorganisms, and food chains, are described. Models that assess the effect of pollutants on specific environments and habitat suitability index models are also included. (Contains 250 citations and includes a subject term index and title list.)

  15. Carbon monoxide toxicity. (Latest citations from the Life Sciences Collection data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-01

    The bibliography contains citations concerning the mechanism and clinical manifestations of carbon monoxide (CO) exposure, including the effects on the liver, cardiovascular, and nervous systems. Topics include studies of the carbon monoxide binding affinity with hemoglobin, measurement of carboxyhemoglobin in humans and various animal species, carbon monoxide levels resulting from tobacco and marijuana smoke, occupational exposure and the NIOSH (National Institute for Occupational Safety and Health) biological exposure index, symptomology and percent of blood CO, and intrauterine exposure. Air pollution, tobacco smoking, and occupational exposure are discussed as primary sources of carbon monoxide exposure. The effects of cigarette smoking on fetal development and health are excluded and examined in a separate bibliography. (Contains a minimum of 172 citations and includes a subject term index and title list.)

  16. Genetically engineered microorganisms for improved crop production. (Latest citations from the Biobusiness data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-05-01

    The bibliography contains citations concerning the use of genetically altered bacteria and viruses to improve and increase crop production. The uses of microorganisms to transport desirable genes into the subject plant, and the external applications of microorganisms for frost protection, insect repellent properties, or conversion of nitrogen to fertilizer are among the topics discussed. (Contains 250 citations and includes a subject term index and title list.)

  17. A Theoretical and Empirical Evaluation of Software Component Search Engines, Semantic Search Engines and Google Search Engine in the Context of COTS-Based Development

    CERN Document Server

    Yanes, Nacim; Ghezala, Henda Hajjami Ben

    2012-01-01

    COTS-based development is a component reuse approach promising to reduce costs and risks and ensure higher quality. The growing availability of COTS components on the Web has made these objectives achievable in practice. In this multitude, a recurrent problem is the identification of the COTS components that best satisfy the user requirements. Finding an adequate COTS component implies searching among heterogeneous descriptions of the components within a broad search space. Thus, search engines are required to make COTS component identification more efficient. In this paper, we investigate, theoretically and empirically, the COTS component search performance of eight software component search engines, nine semantic search engines and a conventional search engine (Google). Our empirical evaluation is conducted with respect to precision and normalized recall. We defined ten queries for the assessed search engines. These queries were carefully selected to evaluate the capability of e...

  18. Multiparty Quantum Key Agreement Based on Quantum Search Algorithm.

    Science.gov (United States)

    Cao, Hao; Ma, Wenping

    2017-03-23

    Quantum key agreement is an important topic: the shared key must be negotiated equally by all participants, and no nontrivial subset of participants can fully determine the shared key. To date, the subkey embedding modes of all previously proposed quantum key agreement protocols are based on either BB84 or entangled states; research on quantum key agreement protocols based on quantum search algorithms remains blank. In this paper, after investigating the properties of quantum search algorithms, we propose the first quantum key agreement protocol whose subkey embedding mode is based on a quantum search algorithm, namely Grover's algorithm. A novel example of a five-party protocol is presented. The efficiency analysis shows that our protocol outperforms existing MQKA protocols. Furthermore, it is secure against both external and internal attacks.

  19. A knowledge based search tool for performance measures in health care systems.

    Science.gov (United States)

    Beyan, Oya D; Baykal, Nazife

    2012-02-01

    Performance measurement is vital for improving health care systems. However, we are still far from having accepted performance measurement models. Researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in a database. A semantic network was designed for representing domain knowledge and supporting reasoning. We applied knowledge-based decision support techniques to cope with uncertainty problems. As a result, we designed a tool which simplifies the performance indicator search process and provides the most relevant indicators by employing knowledge-based systems.

  20. ARPHA-BioDiv: A toolbox for scholarly publication and dissemination of biodiversity data based on the ARPHA Publishing Platform

    Directory of Open Access Journals (Sweden)

    Lyubomir Penev

    2017-04-01

    Full Text Available The ARPHA-BioDiv Toolbox for Scholarly Publishing and Dissemination of Biodiversity Data is a set of standards, guidelines, recommendations, tools, workflows, journals and services, based on the ARPHA Publishing Platform of Pensoft, designed to ease scholarly publishing of biodiversity and biodiversity-related data that are of primary interest to the EU BON and GEO BON networks. ARPHA-BioDiv is based on the infrastructure, knowledge and experience gathered in Pensoft's years-long research, development and publishing activities, upgraded with novel tools and workflows that resulted from the FP7 project EU BON.

  1. Android Based Effective Search Engine Retrieval System Using Ontology

    Directory of Open Access Journals (Sweden)

    A. Praveena

    2014-05-01

    Full Text Available In the proposed model, users search for a query based either on a specified area or on the user's location; the server retrieves all the data to the user's computer, where ontology is applied. After applying the ontology, results are classified into two concepts: location based or content based. The user's PC sends all relevant keywords to the user's mobile so that the user can select the exact requirement. The client collects and stores clickthrough data locally to protect privacy, whereas tasks such as concept extraction, training, and reranking are performed at the search engine server. Ranking occurs, and finally the exactly mapped information is delivered to the user's mobile; the privacy problem is addressed by restricting the information in the user profile exposed to the search engine server with two privacy parameters. Finally, the UDD algorithm is applied to eliminate duplicate records, which helps minimize the number of URLs listed to the user.

  2. Fractal image encoding based on adaptive search

    Institute of Scientific and Technical Information of China (English)

    Kya Berthe; Yang Yang; Huifang Bi

    2003-01-01

    Finding the optimal trade-off between an efficient encoding process and rate distortion is the main research problem in fractal image compression theory. A new method is proposed based on optimization of the least-square error and the orthogonal projection. A large number of domain blocks can be eliminated in order to speed up fractal image compression. Moreover, since the rate-distortion performance of most fractal image coders is not satisfactory, an efficient bit allocation algorithm to improve the rate distortion is also proposed. Implementation and comparison with the feature extraction method have been carried out to prove the efficiency of the proposed method.
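    The least-square error minimization this abstract builds on typically refers to fitting, for each range block R and candidate (downsampled) domain block D, the contrast s and brightness o minimizing Σ(s·dᵢ + o − rᵢ)². A minimal sketch of that standard closed-form fit; the block values are invented for illustration:

    ```python
    # Closed-form least-squares fit of contrast s and brightness o mapping a
    # domain block D onto a range block R, minimizing sum((s*d_i + o - r_i)^2).
    def contrast_brightness(D, R):
        n = len(D)
        sum_d, sum_r = sum(D), sum(R)
        sum_dd = sum(d * d for d in D)
        sum_dr = sum(d * r for d, r in zip(D, R))
        denom = n * sum_dd - sum_d * sum_d
        if denom == 0:                 # flat domain block: only brightness matters
            return 0.0, sum_r / n
        s = (n * sum_dr - sum_d * sum_r) / denom
        o = (sum_r - s * sum_d) / n
        return s, o

    D = [1.0, 2.0, 3.0, 4.0]
    R = [2.5, 4.5, 6.5, 8.5]           # exactly R = 2*D + 0.5
    s, o = contrast_brightness(D, R)
    print(s, o)  # -> 2.0 0.5
    ```

    Domain-block elimination schemes like the one in the abstract prune candidates whose residual error under this optimal (s, o) cannot beat the current best match.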

  3. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

    With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, including data discovery and data publishing. However, the efficiency of geospatial resource discovery remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users with different browsers and devices may have very different user experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various distributed GCIs; (2) the asynchronous record retrieval mode enhances search performance and user interactivity; (3) the HTML5-based search engine is able to provide unified access capabilities for users with different devices (e.g. tablet and smartphone).

  4. Can social tagged images aid concept-based video search?

    NARCIS (Netherlands)

    Setz, A.T.; Snoek, C.G.M.

    2009-01-01

    This paper seeks to unravel whether commonly available social tagged images can be exploited as a training resource for concept-based video search. Since social tags are known to be ambiguous, overly personalized, and often error prone, we place special emphasis on the role of disambiguation. We pre

  5. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong; Trieschnigg, Dolf; Develder, Chris; Hiemstra, Djoerd

    2013-01-01

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  6. Constraint-based local search for container stowage slot planning

    DEFF Research Database (Denmark)

    Pacino, Dario; Jensen, Rune Møller; Bebbington, Tom

    2012-01-01

    -sea vessels. This paper describes the constrained-based local search algorithm used in the second phase of this approach where individual containers are assigned to slots in each bay section. The algorithm can solve this problem in an average of 0.18 seconds per bay, corresponding to a 20 seconds runtime...

  7. Producing an Online Undergraduate Literary Magazine: A Guide to Using Problem-Based Learning in the Writing and Publishing Classroom

    Science.gov (United States)

    Persichetti, Amy L.

    2016-01-01

    This article will illustrate how a problem-based learning (PBL) course (Savery, 2006) can be used in a writing program as a vehicle for both creative and preprofessional learning. English 420: Writing, Publishing, and Editing is offered every fall, and its counterpart, English 423: Writing, Publishing, and Editing is offered each spring. The…

  8. XSemantic: An Extension of LCA Based XML Semantic Search

    Science.gov (United States)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

    One of the most convenient ways to query XML data is keyword search, because it does not require any knowledge of the XML structure or learning a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose a keyword-based XML semantic search called XSemantic. On the one hand, we give three definitions to complete the search in terms of semantics. First, with semantic term expansion, our system is robust to ambiguous keywords by using the domain ontology. Second, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Third, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that the search results with higher relevance are presented to the users first. On the other hand, as in the LCA and proximity search approaches, we investigated the problem of what information to include in the search results. We therefore introduce the notion of the Lowest Common Element Ancestor (LCEA) and define a simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact, meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.
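    The LCA computation that LCEA-style notions extend can be sketched as follows: the lowest common ancestor of the keyword-match nodes is the deepest element whose subtree contains all matches. A minimal plain-LCA sketch over an invented XML document (not the paper's LCEA rule):

    ```python
    import xml.etree.ElementTree as ET

    # Invented toy document; the first book contains both keyword matches.
    doc = ET.fromstring(
        "<bib><book><title>XML Search</title>"
        "<author><name>Ann</name></author></book>"
        "<book><title>Databases</title></book></bib>")

    parent = {c: p for p in doc.iter() for c in p}  # child -> parent map

    def path_to_root(node):
        path = [node]
        while path[-1] in parent:
            path.append(parent[path[-1]])
        return list(reversed(path))                 # root ... node

    def lca(nodes):
        """Deepest element whose subtree contains all given nodes."""
        anc = None
        for level in zip(*(path_to_root(n) for n in nodes)):
            if all(n is level[0] for n in level):   # root paths still agree
                anc = level[0]
            else:
                break
        return anc

    matches = [e for e in doc.iter() if e.text in ("XML Search", "Ann")]
    print(lca(matches).tag)  # -> book
    ```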

  9. An Algorithm Based on Tabu Search for Satisfiability Problem

    Institute of Scientific and Technical Information of China (English)

    黄文奇; 张德富; 汪厚祥

    2002-01-01

    In this paper, a computationally effective algorithm based on tabu search for solving the satisfiability problem (TSSAT) is proposed. Some novel and efficient heuristic strategies for generating the candidate neighborhood of the current assignment and selecting variables to be flipped are presented. In particular, the aspiration criterion and tabu list structure of TSSAT differ from those of traditional tabu search. Computational experiments on a class of problem instances show that TSSAT, in a reasonable amount of computer time, yields better results than Novelty, which is currently among the fastest known solvers. Therefore, TSSAT is feasible and effective.
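    A generic tabu-search loop for SAT, of the kind TSSAT refines, can be sketched as follows. This is a plain textbook variant, not the paper's TSSAT heuristics; the clause encoding (signed integers for literals), tenure value, and instance are illustrative:

    ```python
    import random

    random.seed(0)

    def unsat_count(clauses, assign):
        """Number of clauses with no satisfied literal under the assignment."""
        return sum(1 for c in clauses
                   if not any(assign[abs(l)] == (l > 0) for l in c))

    def tabu_sat(clauses, n_vars, tenure=3, max_iters=2000):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        tabu = {}  # variable -> iteration until which flipping it is tabu
        best_cost = unsat_count(clauses, assign)
        for it in range(max_iters):
            if unsat_count(clauses, assign) == 0:
                return assign
            moves = []
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]          # tentative flip
                cost = unsat_count(clauses, assign)
                assign[v] = not assign[v]          # undo
                # aspiration: a tabu move is allowed if it beats the best cost
                if tabu.get(v, -1) < it or cost < best_cost:
                    moves.append((cost, v))
            if not moves:                          # everything tabu: reset list
                tabu.clear()
                continue
            cost, v = min(moves)
            assign[v] = not assign[v]              # commit the best flip
            tabu[v] = it + tenure
            best_cost = min(best_cost, cost)
        return None

    clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]  # toy satisfiable instance
    model = tabu_sat(clauses, n_vars=3)
    print(model)
    ```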

  10. Research on Quantum Searching Algorithms Based on Phase Shifts

    Institute of Scientific and Technical Information of China (English)

    ZHONG Pu-Cha; BAO Wan-Su

    2008-01-01

    One iteration of Grover's original quantum search algorithm consists of two Hadamard-Walsh transformations, a selective amplitude inversion and a diffusion amplitude inversion. We concentrate on the relation among the probability of success of the algorithm, the phase shifts, the number of target items and the number of iterations, replacing the two amplitude inversions by phase shifts of an arbitrary φ = ψ (0 ≤ φ, ψ ≤ 2π). Then, according to this relation, we find the optimal phase shifts when the number of iterations is given. We present a new quantum search algorithm based on the optimal phase shift of 1.018 after 0.5π/√(M/N) iterations. The new algorithm can obtain either a single target item or multiple target items in the search space with a probability of success of at least 93.43%.
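    The interplay of iteration count and success probability can be illustrated numerically with the standard phase-π Grover iteration (a simplified stand-in for the arbitrary phase shifts φ = ψ studied in the paper): since all marked amplitudes stay equal, tracking two real numbers suffices.

    ```python
    import math

    def grover_success(N, M, iterations):
        """Probability of measuring one of M marked items among N items."""
        a = 1 / math.sqrt(N)  # amplitude of each marked item
        b = 1 / math.sqrt(N)  # amplitude of each unmarked item
        for _ in range(iterations):
            a = -a                                 # oracle: selective inversion
            mean = (M * a + (N - M) * b) / N
            a, b = 2 * mean - a, 2 * mean - b      # diffusion: invert about mean
        return M * a * a

    N, M = 1024, 1
    k = round((math.pi / 4) * math.sqrt(N / M))    # standard iteration count
    print(k, grover_success(N, M, k))              # success prob close to 1
    ```

    With other phase shifts the rotation per iteration changes, which is exactly the trade-off between iteration count and worst-case success probability that the abstract quantifies.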

  11. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

    Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whereas their quality is evaluated considering the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
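    A toy version of the scheme, one threshold on a synthetic histogram with Otsu's between-class variance as the objective, can be sketched as follows. The HMS/HMCR/PAR values are illustrative defaults, not the paper's settings, and the multilevel case would use a vector of thresholds:

    ```python
    import random

    random.seed(1)

    def otsu_variance(hist, t):
        """Between-class variance when splitting the histogram below index t."""
        total = sum(hist)
        w0 = sum(hist[:t])
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            return 0.0
        m0 = sum(i * hist[i] for i in range(t)) / w0
        m1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
        return (w0 / total) * (w1 / total) * (m0 - m1) ** 2

    def harmony_search(hist, hms=6, hmcr=0.9, par=0.3, iters=200):
        levels = len(hist)
        memory = [random.randrange(1, levels) for _ in range(hms)]  # harmony memory
        for _ in range(iters):
            if random.random() < hmcr:             # pick from memory...
                t = random.choice(memory)
                if random.random() < par:          # ...with optional pitch adjust
                    t = min(levels - 1, max(1, t + random.choice((-1, 1))))
            else:                                  # or improvise randomly
                t = random.randrange(1, levels)
            worst = min(range(hms), key=lambda i: otsu_variance(hist, memory[i]))
            if otsu_variance(hist, t) > otsu_variance(hist, memory[worst]):
                memory[worst] = t                  # replace the worst harmony
        return max(memory, key=lambda t: otsu_variance(hist, t))

    # Invented bimodal histogram with modes near intensity 2 and 12.
    hist = [1, 5, 9, 5, 1, 0, 0, 0, 0, 0, 1, 5, 9, 5, 1]
    t = harmony_search(hist)
    print(t)  # a threshold in the empty valley between the two modes
    ```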

  12. Music-Based Training for Pediatric CI Recipients: A Systematic Analysis of Published Studies

    Science.gov (United States)

    Gfeller, Kate

    2016-01-01

    In recent years, there has been growing interest in the use of music-based training to enhance speech and language development in children with normal hearing and some forms of communication disorders, including pediatric CI users. The use of music training for CI users may initially seem incongruous given that signal processing for CIs presents a degraded version of pitch and timbre, both key elements in music. Furthermore, empirical data from systematic studies of music training, particularly in relation to transfer to speech skills, are limited. This study describes the rationale for music training of CI users, describes key features of published studies of music training with CI users, and highlights some developmental and logistical issues that should be taken into account when interpreting or planning studies of music training and speech outcomes with pediatric CI recipients. PMID:27246744

  13. Gradient-Based Cuckoo Search for Global Optimization

    Directory of Open Access Journals (Sweden)

    Seif-Eddeen K. Fateen

    2014-01-01

    Full Text Available One of the major advantages of stochastic global optimization methods is that they do not need the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we propose a gradient-based modification to the cuckoo search algorithm, a nature-inspired swarm-based stochastic global optimization method. We introduce the gradient-based cuckoo search (GBCS) and evaluate its performance vis-à-vis the original algorithm in solving twenty-four benchmark functions. The use of GBCS improved the reliability and effectiveness of the algorithm in all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
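    The core idea, orienting a cuckoo-style heavy-tailed random step along the negative gradient, can be sketched in one dimension. This is a simplified illustration with invented parameters, not the paper's GBCS implementation:

    ```python
    import math
    import random

    random.seed(3)

    def levy_step(scale=0.1):
        """Heavy-tailed step length (ratio-of-normals approximation)."""
        return scale * abs(random.gauss(0, 1)) / max(abs(random.gauss(0, 1)), 1e-6)

    def gbcs(f, grad, n_nests=8, pa=0.25, iters=300, lo=-5.0, hi=5.0):
        nests = [random.uniform(lo, hi) for _ in range(n_nests)]
        for _ in range(iters):
            for i, x in enumerate(nests):
                # Levy-sized step oriented along the negative gradient sign
                cand = x - math.copysign(levy_step(), grad(x))
                if f(cand) < f(x):                 # greedy replacement
                    nests[i] = cand
            nests.sort(key=f)                      # abandon the worst fraction pa
            for i in range(int(n_nests * (1 - pa)), n_nests):
                nests[i] = random.uniform(lo, hi)
        return min(nests, key=f)

    best = gbcs(lambda x: (x - 2.0) ** 2, lambda x: 2 * (x - 2.0))
    print(best)  # converges near the minimum at x = 2
    ```

    The gradient only fixes the step's direction; the Lévy-distributed magnitude and nest abandonment preserve the global exploration of plain cuckoo search.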

  14. Publish-Subscribe Systems via Gossip: a Study based on Complex Networks

    CERN Document Server

    Ferretti, Stefano

    2011-01-01

    This paper analyzes the adoption of unstructured P2P overlay networks to build publish-subscribe systems. We consider a very simple distributed communication protocol, based on gossip and on the local knowledge each node has about subscriptions made by its neighbours. In particular, upon reception (or generation) of a novel event, a node sends it to those neighbours whose subscriptions match that event. Moreover, the node gossips the event to its "non-interested" neighbours, so that the event can be spread through the overlay. A mathematical analysis is provided to estimate the number of nodes receiving the event, based on the network topology, the amount of subscribers and the gossip probability. These outcomes are compared to those obtained via simulation. Results show that even when the amount of subscribers represents a very small (yet non-negligible) portion of network nodes, by tuning the gossip probability the event can percolate through the overlay. Hence, the use of unstructured networks, coupled with sim...
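    The protocol is easy to simulate: forward the event to every interested neighbour, gossip it to the rest with probability p, and measure the fraction of subscribers reached. A toy sketch on an invented Erdős-Rényi overlay (the paper's analysis covers general topologies and gives the analytical estimate):

    ```python
    import random

    random.seed(7)

    def spread(n, edge_p, sub_p, gossip_p):
        """Fraction of subscribers reached by one event under gossip forwarding."""
        nbrs = {v: set() for v in range(n)}        # random (Erdos-Renyi) overlay
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < edge_p:
                    nbrs[u].add(v)
                    nbrs[v].add(u)
        subscriber = [random.random() < sub_p for _ in range(n)]
        reached, frontier = {0}, [0]               # node 0 generates the event
        while frontier:
            u = frontier.pop()
            for v in nbrs[u]:
                if v in reached:
                    continue
                # always forward to subscribers; gossip to the rest with prob p
                if subscriber[v] or random.random() < gossip_p:
                    reached.add(v)
                    frontier.append(v)
        subs = [v for v in range(n) if subscriber[v]]
        return sum(v in reached for v in subs) / max(1, len(subs))

    low = spread(200, 0.04, 0.10, 0.05)    # sparse gossip: partial coverage
    high = spread(200, 0.04, 0.10, 1.00)   # flooding: whole component reached
    print(low, high)
    ```

    Sweeping `gossip_p` between these extremes reproduces the percolation effect the abstract describes: above a topology-dependent threshold, nearly all subscribers receive the event.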

  15. TOA estimation algorithm based on multi-search

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A new time of arrival (TOA) estimation algorithm is proposed. The algorithm computes the optimal sub-correlation length based on SNR theory, so the robustness of TOA acquisition is well guaranteed. Then, according to the actual transmission environment and network system, the multi-search method is given. Simulation results show that the algorithm has very high application value in the realization of wireless location systems (WLS).

  16. A new classification algorithm based on RGH-tree search

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we put forward a new classification algorithm based on RGH-tree search and perform a classification analysis and comparison study. This algorithm can save computing resources and increase classification efficiency. Experiments show that this algorithm achieves better results in dealing with three-dimensional multi-class data. We find that the algorithm has better generalization ability for small training sets and large test sets.

  17. Line Search-Based Inverse Lithography Technique for Mask Design

    Directory of Open Access Journals (Sweden)

    Xin Zhao

    2012-01-01

    Full Text Available As feature sizes are much smaller than the wavelength of the illumination source of lithography equipment, resolution enhancement technology (RET) has been increasingly relied upon to minimize image distortions. In advanced process nodes, a pixelated mask becomes essential for RET to achieve an acceptable resolution. In this paper, we investigate the problem of pixelated binary mask design in a partially coherent imaging system. Similar to previous approaches, the mask design problem is formulated as a nonlinear program and is solved by gradient-based search. Our contributions are four novel techniques to achieve significantly better image quality. First, to transform the original bound-constrained formulation into an unconstrained optimization problem, we propose a new noncyclic transformation of mask variables to replace the well-known cyclic one. As our transformation is monotonic, it enables better control in flipping pixels. Second, based on this new transformation, we propose a highly efficient line search-based heuristic technique to solve the resulting unconstrained optimization. Third, to simplify the optimization, instead of using the discretization regularization penalty technique, we directly round the optimized gray mask into a binary mask for pattern error evaluation. Fourth, we introduce a jump technique in order to escape local minima and continue the search.

  18. Complete Boolean Satisfiability Solving Algorithms Based on Local Search

    Institute of Scientific and Technical Information of China (English)

    Wen-Sheng Guo; Guo-Wu Yang; William N.N.Hung; Xiaoyu Song

    2013-01-01

    Boolean satisfiability (SAT) is a well-known problem in computer science, artificial intelligence, and operations research. This paper focuses on the satisfiability problem of the Model RB structure, which is similar to graph coloring problems and others. We propose a translation method and three effective complete SAT solving algorithms based on the characterization of the Model RB structure. We translate clauses into a graph with exclusive sets and relative sets. In order to reduce search depth, we determine the search order using vertex weights and cliques in the graph. The results show that our algorithms are much more effective than the best SAT solvers on numerous Model RB benchmarks, especially on large benchmark instances.

  19. A Multiple-Neighborhood-Based Parallel Composite Local Search Algorithm for Timetable Problem

    Institute of Scientific and Technical Information of China (English)

    颜鹤; 郁松年

    2004-01-01

    This paper presents a parallel composite local search algorithm based on multiple search neighborhoods to solve a special kind of timetable problem. The new algorithm can also effectively solve those problems that can be solved by general local search algorithms. Experimental results show that the new algorithm can generate better solutions than general local search algorithms.

  20. Target searching based on modified implicit ROI encoding scheme

    Institute of Scientific and Technical Information of China (English)

    Bai Xu; Zhang Zhongzhao

    2008-01-01

    An EBCOT-based method is proposed to reduce the priority of background coefficients in the ROI code block without increasing algorithm complexity. The region of interest is encoded to a higher quality level than the background, and the target searching time in a video-guided penetrating missile can be shortened. Three kinds of coding schemes based on EBCOT are discussed. Experimental results demonstrate that the proposed method shows higher compression efficiency, lower complexity, and good reconstructed ROI image quality at lower channel capacity.

  1. Intelligent Agent based Flight Search and Booking System

    Directory of Open Access Journals (Sweden)

    Floyd Garvey

    2012-07-01

    Full Text Available The word "globalization" is widely used, and there are several definitions that may fit this one word. However, the reality remains that globalization has impacted, and is impacting, each individual on this planet. It is defined as the greater movement of people, goods, capital and ideas due to increased economic integration, which in turn is propelled by increased trade and investment. It is like moving towards living in a borderless world. With the reality of globalization, the travel industry has benefited significantly; it could equally be said that globalization is benefiting from the flight industry. Regardless of the way one looks at it, more people are traveling each day and exploring places that were once merely distant points on a map. Equally, technology has been growing at an increasingly rapid pace and is being utilized by people all over the world. With the combination of globalization, the increase in technology and the frequency of travel, there is a need for an intelligent application capable of meeting the needs of travelers who use mobile phones everywhere. It is a solution that fits perfectly into a user's busy lifestyle, offering ease of use and enough intelligence to make the user's experience worthwhile. Having recognized this need, an agent-based Mobile Airline Search and Booking System has been developed that is built to work on Android and to perform airline search and booking using biometrics. The system also possesses agent learning capability to search for airlines based on previous search patterns. Development was carried out using the JADE-LEAP agent development kit on Android.

  2. An ontology-based search engine for digital reconstructions of neuronal morphology.

    Science.gov (United States)

    Polavaram, Sridevi; Ascoli, Giorgio A

    2017-03-23

    Neuronal morphology is extremely diverse across and within animal species, developmental stages, brain regions, and cell types. This diversity is functionally important because neuronal structure strongly affects synaptic integration, spiking dynamics, and network connectivity. Digital reconstructions of axonal and dendritic arbors are thus essential to quantify and model information processing in the nervous system. NeuroMorpho.Org is an established repository containing tens of thousands of digitally reconstructed neurons shared by several hundred laboratories worldwide. Each neuron is annotated with specific metadata based on the published references and additional details provided by data owners. The number of represented metadata concepts has grown over the years in parallel with the increase of available data. Until now, however, the lack of standardized terminologies and of an adequately structured metadata schema limited the effectiveness of user searches. Here we present a new organization of NeuroMorpho.Org metadata grounded on a set of interconnected hierarchies focusing on the main dimensions of animal species, anatomical regions, and cell types. We have comprehensively mapped each metadata term in NeuroMorpho.Org to this formal ontology, explicitly resolving all ambiguities caused by synonymy and homonymy. Leveraging this consistent framework, we introduce OntoSearch, a powerful functionality that seamlessly enables retrieval of morphological data based on expert knowledge and logical inferences through an intuitive string-based user interface with auto-complete capability. In addition to returning the data directly matching the search criteria, OntoSearch also identifies a pool of possible hits by taking into consideration incomplete metadata annotation.
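The synonym and subclass resolution that OntoSearch performs can be sketched as a query-expansion step over an ontology graph. The terms and relations below are invented for illustration; they are not NeuroMorpho.Org's actual vocabulary or schema.

```python
# Sketch of ontology-backed search: a query term is expanded through
# synonym and subclass links before matching record metadata.
# ONTOLOGY entries here are hypothetical examples.

ONTOLOGY = {
    "pyramidal cell": {"synonyms": {"pyramidal neuron"},
                       "subclasses": {"CA1 pyramidal cell"}},
    "CA1 pyramidal cell": {"synonyms": set(), "subclasses": set()},
}

def expand(term):
    """Transitive closure of a term over synonym and subclass relations."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        entry = ONTOLOGY.get(t, {})
        stack.extend(entry.get("synonyms", ()))
        stack.extend(entry.get("subclasses", ()))
    return seen

records = [("neuron-001", "CA1 pyramidal cell"), ("neuron-002", "granule cell")]
matches = [name for name, cell_type in records
           if cell_type in expand("pyramidal cell")]
# A search for "pyramidal cell" also retrieves the CA1 subclass record.
```

Resolving synonymy and homonymy up front, as the abstract describes, means this expansion can happen once per query rather than being re-derived from free-text metadata.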

  3. Searching WormBase for information about Caenorhabditis elegans.

    Science.gov (United States)

    Schwarz, Erich M; Sternberg, Paul W

    2006-07-01

    WormBase is the major public biological database for the nematode Caenorhabditis elegans. It is meant to be useful to any biologist who wants to use C. elegans, whatever his or her specialty. WormBase contains information about the genomic sequence of C. elegans, its genes and their products, and its higher-level traits such as gene expression patterns and neuronal connectivity. WormBase also contains genomic sequences and gene structures of C. briggsae and C. remanei, two closely related worms. These data are interconnected, so that a search beginning with one object (such as a gene) can be directed to related objects of a different type (e.g., the DNA sequence of the gene or the cells in which the gene is active). One can also perform searches for complex data sets. The WormBase developers group actively invites suggestions for improvements from the database users. WormBase's source code and underlying database are freely available for local installation and modification.

  4. Bibliotherapy and information prescriptions: a summary of the published evidence-base and recommendations from past and ongoing Books on Prescription projects.

    Science.gov (United States)

    Chamberlain, D; Heaps, D; Robert, I

    2008-01-01

    This paper summarizes the published evidence and reports from ongoing and completed projects that used Bibliotherapy and Information Prescriptions to deliver patient care. A literature search was conducted and relevant papers were summarized by type of study, type of Bibliotherapy, client group and recommendations. In total, 65 papers were considered, with 57 reviewed. A survey was also sent to Library Authorities subscribing to national survey standards, asking for details about the delivery of Information Prescription projects; 21 surveys were returned. The experiences and recommendations were then summarized. The aim of the paper is to collate the evidence base of written research and the experience and recommendations of projects into an accessible format, so that practitioners interested in using Bibliotherapy, Information Prescriptions or Books on Prescription understand what they are and the extent of the evidence base informing practice, and to highlight gaps in the research.

  5. Complications rates of non-oncologic urologic procedures in population-based data: a comparison to published series

    Directory of Open Access Journals (Sweden)

    David S. Aaronson

    2010-10-01

    Full Text Available PURPOSE: Published single-institution case series are often performed by one or more surgeons with considerable expertise in specific procedures. The incidence of complications reported in these series may not accurately reflect community-based practice. We sought to compare complication and mortality rates following urologic procedures derived from population-based data to those of published single-institution case series. MATERIALS AND METHODS: In-hospital mortality and complications of common urologic procedures (percutaneous nephrostomy, ureteropelvic junction obstruction repair, ureteroneocystostomy, urethral repair, artificial urethral sphincter implantation, urethral suspension, transurethral resection of the prostate, and penile prosthesis implantation) reported in the U.S. National Inpatient Sample of the Healthcare Cost and Utilization Project were identified. Rates were then compared statistically to those of published single-institution series. RESULTS: For 7 of the 8 procedures examined, there was no significant difference in rates of complication or mortality between published studies and our population-based data. However, for percutaneous nephrostomy, two published single-center series had significantly lower mortality rates (p < 0.001). The overall rate of complications in the population-based data was higher than in published single- or select multi-institutional data for percutaneous nephrostomy performed for urinary obstruction (p < 0.001). CONCLUSIONS: If one assumes that administrative data does not suffer from underreporting of complications, then for some common urological procedures, complication rates between population-based data and published case series seem comparable. Mandatory collection of clinical outcomes is likely the best way to appropriately counsel patients about the risks of these common urologic procedures.
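A comparison of this kind typically reduces to a test of two proportions. The sketch below shows one hedged way to compare a complication rate from population data against a published series; the counts are invented for illustration and the paper's exact statistical method is not specified here.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions, using the
    pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    pval = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, pval

# Hypothetical counts: 30/500 complications in population-based data
# versus 12/400 in a published single-institution series.
z, p = two_proportion_z(30, 500, 12, 400)
```

A p-value above the chosen threshold would correspond to the paper's "no significant difference" finding for 7 of the 8 procedures.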

  6. Shape based indexing for faster search of RNA family databases

    Directory of Open Access Journals (Sweden)

    Reeder Jens

    2008-02-01

    Full Text Available Abstract Background Most non-coding RNA families exert their function by means of a conserved, common secondary structure. The Rfam database contains more than five hundred structurally annotated RNA families. Unfortunately, searching for new family members using covariance models (CMs) is very time consuming. Filtering approaches that use sequence conservation to reduce the number of CM searches are fast, but it is unknown how much sensitivity they sacrifice. Results We present a new filtering approach, which exploits the family-specific secondary structure and significantly reduces the number of CM searches. The filter eliminates approximately 85% of the queries and discards only 2.6% of true positives when evaluating Rfam against itself. First results also capture previously undetected non-coding RNAs in a recent human RNAz screen. Conclusion The RNA shape index filter (RNAsifter) is based on the following rationale: an RNA family is characterised by structure much more succinctly than by sequence content. Structures of individual family members, which naturally have different lengths and sequence compositions, may exhibit structural variation in detail, but overall they have a common shape in a more abstract sense. Given a fixed release of the Rfam database, we can compute these abstract shapes for all families. This is called a shape index. If a query sequence belongs to a certain family, it must be able to fold into the family shape with reasonable free energy. Therefore, rather than matching the query against all families in the database, we can first (and quickly) compute its feasible shape(s), and use the shape index to access only those families where a good match is possible due to a common shape with the query.
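The shape-index lookup described in the conclusion can be sketched as a dictionary from abstract shapes to families. Computing abstract shapes from sequences is outside the scope of this sketch; the shape strings and family names below are illustrative placeholders, not real Rfam annotations.

```python
# Minimal sketch of a shape index: abstract shapes (dot-bracket-style
# placeholders) map to the families that exhibit them. Only families
# sharing a shape with the query then need a costly CM search.
from collections import defaultdict

def build_shape_index(family_shapes):
    """Map each abstract shape to the set of families exhibiting it."""
    index = defaultdict(set)
    for family, shapes in family_shapes.items():
        for s in shapes:
            index[s].add(family)
    return index

def candidate_families(index, query_shapes):
    """Families whose shape overlaps the query's feasible shapes."""
    hits = set()
    for s in query_shapes:
        hits |= index.get(s, set())
    return hits

index = build_shape_index({
    "tRNA-like":  {"[[][][]]"},   # cloverleaf-style placeholder
    "miRNA-like": {"[]"},         # single-hairpin placeholder
})
candidates = candidate_families(index, {"[]"})
# A query folding only into a single hairpin skips the cloverleaf family.
```

This is exactly the filtering economics the abstract reports: most CM searches are skipped because the query's feasible shapes touch only a few index entries.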

  7. Applying Cuckoo Search for analysis of LFSR based cryptosystem

    Directory of Open Access Journals (Sweden)

    Maiya Din

    2016-09-01

    Full Text Available Cryptographic techniques are employed to minimize security hazards to sensitive information. To make systems more robust, the cyphers or crypts in use need to be analysed, for which cryptanalysts require ways to automate the process so that cryptographic systems can be tested more efficiently. Evolutionary algorithms provide one such resort, as they are capable of finding globally optimal solutions very quickly. The Cuckoo Search (CS) algorithm has been used effectively in the cryptanalysis of conventional systems such as Vigenère and transposition cyphers. The Linear Feedback Shift Register (LFSR) is a crypto primitive used extensively in the design of cryptosystems. In this paper, we analyse an LFSR-based cryptosystem using Cuckoo Search to find the correct initial states of the LFSRs used. Primitive polynomials of degree 11, 13, 17 and 19 are considered to analyse text crypts of length 200, 300 and 400 characters. Optimal solutions were obtained for the following CS parameters: Lévy distribution parameter (β = 1.5) and alien-egg discovery probability (pa = 0.25).
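The search target here is the LFSR's initial state, so the fitness of a candidate state is how well its keystream reproduces the observed one. The sketch below uses a toy degree-4 register and an invented tap set rather than the paper's degree 11 to 19 primitive polynomials, and shows only the objective a Cuckoo Search would maximise, not the full attack.

```python
def lfsr_stream(state, taps, n):
    """Generate n keystream bits from an LFSR: output the last register
    bit, feed back the XOR of the tapped positions."""
    state = list(state)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def fitness(candidate, taps, observed):
    """Fraction of observed keystream bits reproduced by a candidate
    initial state -- the objective a Cuckoo Search maximises."""
    stream = lfsr_stream(candidate, taps, len(observed))
    return sum(a == b for a, b in zip(stream, observed)) / len(observed)

true_state = [1, 0, 1, 1]                 # toy 4-bit register (illustrative)
observed = lfsr_stream(true_state, (0, 3), 16)
```

A candidate equal to the true initial state scores 1.0; the CS algorithm then only has to navigate the 2^degree state space toward that maximum.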

  8. Visual tracking method based on cuckoo search algorithm

    Science.gov (United States)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species, combined with the Lévy-flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied comparatively, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, mean-shift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
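The CS core that both this record and the LFSR record rely on can be sketched as Lévy flights around the best nest plus random abandonment. This is a bare-bones minimiser on a toy objective, with Mantegna's algorithm for the Lévy step; the step scale, nest count and bounds are assumptions, not the paper's tracking configuration.

```python
import math
import random

def levy_step(beta=1.5):
    """Lévy-distributed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=200, lo=-5, hi=5):
    """Minimise f over [lo, hi]^dim with a bare-bones Cuckoo Search."""
    random.seed(0)
    nests = [[random.uniform(lo, hi) for _ in range(dim)]
             for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Lévy flight biased toward the current best nest
            new = [min(hi, max(lo, x + 0.01 * levy_step() * (x - b)))
                   for x, b in zip(nest, best)]
            if f(new) < f(nest):
                nests[i] = new
            if random.random() < pa:      # abandon a fraction of nests
                nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=f)   # best never worsens
    return best

sphere = lambda x: sum(v * v for v in x)
best = cuckoo_search(sphere, dim=2)
```

In the tracking setting, `f` would instead score a candidate target location against an appearance model, with each nest encoding a position hypothesis.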

  9. Performance Oriented Query Processing In GEO Based Location Search Engines

    CERN Document Server

    Umamaheswari, M

    2010-01-01

    Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable geographic search engines. Query processing is a major bottleneck in standard web search engines, and the main reason for the thousands of machines used by the major engines. Geographic search engine query processing is different in that it requires a combination of text and spatial data processing techniques. We propose several algorithms for efficient query processing in geographic search engines, integrate them into an existing web search query processor, and evaluate them on large sets of real data and query traces.

  10. Publisher's Note: Search for ultrahigh energy neutrinos in highly inclined events at the Pierre Auger Observatory [Phys. Rev. D 84, 122005 (2011)

    Science.gov (United States)

    Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Anticic, T.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Badescu, A. M.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; Benzvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Bohácová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caccianiga, B.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chirinos Diaz, J.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; de Domenico, M.; de Donato, C.; de Jong, S. J.; de La Vega, G.; de Mello, W. J. M., Jr.; de Mello Neto, J. R. T.; de Mitri, I.; de Souza, V.; de Vries, K. D.; Del Peral, L.; Del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; di Giulio, C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Fajardo Tapia, I.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. 
P.; Fick, B.; Filevich, A.; Filipcic, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; Garcia Roca, S. T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gouffon, P.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Guzman, A.; Hansen, P.; Harari, D.; Harmsma, S.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lauer, R.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Micanovic, S.; Micheletti, M. 
I.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Newton, D.; Nhung, P. T.; Niechciol, M.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; Pekala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Pfendner, C.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Porcelli, A.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-D'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saftoiu, A.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Sima, O.; Smialkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Srivastava, Y. 
N.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tapia, A.; Tartare, M.; Tascau, O.; Tavera Ruiz, C. G.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tkaczyk, W.; Todero Peixoto, C. J.; Toma, G.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberic, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Westerhoff, S.; Whelan, B. J.; Widom, A.; Wieczorek, G.; Wiencke, L.; Wilczynska, B.; Wilczynski, H.; Will, M.; Williams, C.; Winchen, T.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.

    2012-01-01

    The Surface Detector of the Pierre Auger Observatory is sensitive to neutrinos of all flavours above 0.1 EeV. These interact through charged and neutral currents in the atmosphere, giving rise to extensive air showers. When interacting deeply in the atmosphere at nearly horizontal incidence, neutrinos can be distinguished from regular hadronic cosmic rays by the broad time structure of their shower signals in the water-Cherenkov detectors. In this paper we present for the first time an analysis based on down-going neutrinos. We describe the search procedure, the possible sources of background, the method to compute the exposure and the associated systematic uncertainties. No candidate neutrinos were found in data collected from 1 January 2004 to 31 May 2010. Assuming an E^-2 differential energy spectrum, the limit on the single-flavour neutrino flux is E^2 * dN/dE < 1.74x10^-7 GeV cm^-2 s^-1 sr^-1 at 90% C.L. in the energy range 1x10^17 eV < E < 1x10^20 eV.

  11. Parallel Harmony Search Based Distributed Energy Resource Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ceylan, Oguzhan [ORNL; Liu, Guodong [ORNL; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that, by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
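The harmony search metaheuristic at the core of the paper can be sketched in a few lines: improvise a new harmony coordinate-by-coordinate from memory (with occasional pitch adjustment or random improvisation) and replace the worst memory entry when the new one is better. The toy objective, bounds and parameter values below are assumptions, not the paper's three-phase power-flow formulation.

```python
import random

def harmony_search(f, dim, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=500, lo=-5, hi=5):
    """Bare-bones Harmony Search minimising f over [lo, hi]^dim.
    hmcr: harmony-memory consideration rate; par: pitch-adjustment
    rate; bw: pitch bandwidth."""
    random.seed(1)
    memory = [[random.uniform(lo, hi) for _ in range(dim)]
              for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:
                x = random.choice(memory)[d]       # take from memory
                if random.random() < par:
                    x += random.uniform(-bw, bw)   # pitch adjustment
            else:
                x = random.uniform(lo, hi)         # random improvisation
            new.append(min(hi, max(lo, x)))
        worst = max(memory, key=f)
        if f(new) < f(worst):                      # replace worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=f)

# Stand-in objective: squared deviation from a 1.0 p.u. voltage target.
deviation = lambda v: sum((x - 1.0) ** 2 for x in v)
best = harmony_search(deviation, dim=3)
```

In the paper's setting each harmony would encode DR set-points, and the parallelism would come from evaluating harmonies (power-flow runs) concurrently.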

  12. Web Image Retrieval Search Engine based on Semantically Shared Annotation

    Directory of Open Access Journals (Sweden)

    Alaa Riad

    2012-03-01

    Full Text Available This paper presents a new majority-voting technique that combines the two basic modalities of Web images, the textual and visual features of an image, in a re-annotation and search-based framework. The proposed framework considers each web page as a voter that votes on the relatedness of a keyword to the web image. The approach is not a pure combination of low-level image features and textual features; it also takes into consideration the semantic meaning of each keyword, which is expected to enhance retrieval accuracy. The proposed approach is used not only to enhance the retrieval accuracy of web images but is also able to annotate unlabeled images.
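The page-as-voter idea can be sketched with a simple tally: each page hosting the image contributes its candidate keywords, and keywords endorsed by enough voters are kept as annotations. The threshold and the example keywords are assumptions for illustration; the paper's semantic weighting is not reproduced here.

```python
from collections import Counter

def vote_keywords(page_votes, min_share=0.5):
    """Each web page hosting the image votes for candidate keywords;
    keep keywords endorsed by at least min_share of the voters."""
    tally = Counter(kw for votes in page_votes for kw in set(votes))
    n = len(page_votes)
    return {kw for kw, count in tally.items() if count / n >= min_share}

pages = [{"sunset", "beach"}, {"beach", "sea"}, {"beach", "sunset"}]
kept = vote_keywords(pages)
# "sea" appears on only one of three pages and is voted out.
```

The paper's refinement would additionally weight each vote by the keyword's semantic relatedness rather than counting pages equally.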

  13. Axion Search by Laser-based Experiment OSQAR

    OpenAIRE

    Sulc, Miroslav; Pugnat, Pierre; Ballou, Rafik; Deferne, Guy; Duvillaret, Lionel; Flekova, Lucie; Finger, Michael; Finger Jr, Michael; Hošek, Jan; Husek, Thomas; Jost, Rémy; Kral, Miroslav; Kunc, Štěpán; Macuchova, Karolina; Meissner, Krzysztof,

    2012-01-01

    International audience; The laser-based experiment OSQAR at CERN is aimed at the search for axions by two methods. The photon-regeneration experiment uses two LHC dipole magnets of length 14.3 m and magnetic field 9.5 T, equipped with an optical barrier at the end of the first magnet; it looks for light shining through a wall. No excess of events above the background was detected in this arrangement. Nevertheless, this result extends the exclusion region for the axion mass. The second me...

  14. SHOP: scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Linusson, Anna; Zamora, Ismael

    2007-01-01

    A new GRID-based method for scaffold hopping (SHOP) is presented. In a fully automatic manner, scaffolds were identified in a database based on three types of 3D-descriptors. SHOP's ability to recover scaffolds was assessed and validated by searching a database spiked with fragments of known ligands of three different protein targets relevant for drug discovery, using a rational approach based on statistical experimental design. Five out of eight and seven out of eight thrombin scaffolds and all seven HIV protease scaffolds were recovered within the top 10, and 31 out of 31 neuraminidase scaffolds were in the 31 top-ranked scaffolds. SHOP also identified new scaffolds with substantially different chemotypes from the queries. Docking analysis indicated that the new scaffolds would have similar binding modes to those of the respective query scaffolds observed in X-ray structures.

  15. Proceedings of the ECIR 2012 Workshop on Task-Based and Aggregated Search (TBAS2012)

    DEFF Research Database (Denmark)

    2012-01-01

    Task-based search aims to understand the user's current task and desired outcomes, and how this may provide useful context for the Information Retrieval (IR) process. An example of task-based search is situations where additional user information on e.g. the purpose of the search or what the user...

  16. An analysis of search-based user interaction on the Semantic Web

    NARCIS (Netherlands)

    Hildebrand, M.; Ossenbruggen, J.R. van; Hardman, L.

    2007-01-01

    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of

  17. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  18. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA DFIRM preliminary map out now, Published in 2009, 1:24000 (1in=2000ft) scale, Brown County, WI.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  19. Memoryless cooperative graph search based on the simulated annealing algorithm

    Institute of Scientific and Technical Information of China (English)

    Hou Jian; Yan Gang-Feng; Fan Zhen

    2011-01-01

    We have studied the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies the agent eventually converges to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
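A single-agent simulated-annealing walk of the kind the abstract describes can be sketched as follows: propose a random neighbour, always accept cost decreases, accept increases with probability exp(-d/T), and cool geometrically. The graph, costs and schedule below are invented for illustration; the paper's convergence-with-probability-1 schedule is not reproduced.

```python
import math
import random

def sa_graph_search(adj, cost, start, t0=5.0, cooling=0.995, steps=2000):
    """Simulated-annealing-like walk on a graph: propose a random
    neighbour of the current node; accept cost increases with
    probability exp(-d/T) under a geometric cooling schedule."""
    random.seed(2)
    node, t, best = start, t0, start
    for _ in range(steps):
        nxt = random.choice(adj[node])
        d = cost(nxt) - cost(node)
        if d <= 0 or random.random() < math.exp(-d / t):
            node = nxt                  # accept the move
        if cost(node) < cost(best):
            best = node                 # remember the best segment seen
        t *= cooling
    return best

# Toy ring graph with a local cost minimum at node 2 and the global
# minimum at node 5:
adj = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
costs = [9, 6, 3, 7, 8, 1, 5, 8]
best_node = sa_graph_search(adj, lambda n: costs[n], start=0)
```

Early on, the high temperature lets the walk climb out of the local minimum; the multi-agent version in the paper runs several such walks over graph partitions and shares the radius parameter by gossip consensus.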

  20. Efficient mining of association rules based on gravitational search algorithm

    Directory of Open Access Journals (Sweden)

    Fariba Khademolghorani

    2011-07-01

    Full Text Available Association rule mining is one of the most widely used tools to discover relationships among attributes in a database. Many algorithms have been introduced for discovering these rules, and they must mine association rules in two separate stages. Most of them mine occurrence rules that are easily predictable by users. This paper therefore discusses the application of the gravitational search algorithm to discovering interesting association rules. This evolutionary algorithm is based on Newtonian gravity and the laws of motion. Furthermore, contrary to previous methods, the method proposed in this study is able to mine the best association rules without generating frequent itemsets and is independent of the minimum support and confidence values. The results of applying this method, compared with mining association rules based on particle swarm optimization, show that our method is successful.
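Whatever fitness the gravitational search assigns to a candidate rule ultimately rests on measures such as support and confidence, which can be sketched directly. The transactions below are invented for illustration, and the paper's exact fitness function is not reproduced.

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent over
    a list of transactions, each represented as a set of items."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions)            # antecedent hits
    both = sum((antecedent | consequent) <= t for t in transactions)
    support = both / n
    confidence = both / a if a else 0.0
    return support, confidence

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
s, c = rule_metrics(transactions, {"bread"}, {"milk"})
# {bread} -> {milk}: support 0.5, confidence 2/3
```

A search-based miner like the one in the paper treats each candidate rule as a particle and scores it with a combination of such measures, rather than enumerating frequent itemsets first.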

  1. A Detection Scheme for Cavity-based Dark Matter Searches

    CERN Document Server

    Bukhari, M H S

    2016-01-01

    We present here a proposed scheme and some useful ideas for resonant-cavity-based detection of cold dark matter axions, with the hope of improving on existing endeavors. The scheme is based upon our idea of a detector which incorporates an integrated tunnel diode and a GaAs HEMT or HFET (High Electron Mobility Transistor or Heterostructure FET) for resonance detection and amplification from a resonant cavity (in a strong transverse magnetic field from a cylindrical array of Halbach magnets). The TD-oscillator-amplifier combination could possibly serve as a more sensitive and viable resonance-detection regime while maintaining excellent performance with low noise temperature, whereas the Halbach magnet array may offer a compact and permanent solution replacing the conventional electromagnet scheme. We believe that all these factors could increase the sensitivity and accuracy of axion detection searches and reduce complications (and associated costs) in the experiments, in addition to help re...

  2. Development of a semantic-based search system for immunization knowledge.

    Science.gov (United States)

    Lee, Li-Hui; Chu, Hsing-Yi; Liou, Der-Ming

    2013-01-01

    This study developed and implemented a children's immunization management system with an English and Traditional Chinese immunization ontology for semantic-based search of immunization knowledge, so that parents and guardians can search for vaccination-related information effectively. The Jena Java Application Programming Interface (API) was used to search for synonyms and associated classes in this domain, which were then used for searching via the Google Search API. The search results contain not only suggested web links but also a basic introduction to the vaccine and the related preventable diseases. Compared with Google keyword-based search, over half of the 31 trial users preferred the semantic-based search of this system. Although the search runtime of this system is not as fast as that of well-known search engines such as Google or Yahoo, it can accurately focus the search on child vaccination information and provide results that better conform to the needs of users. Furthermore, the system is one of the few health knowledge platforms that support Traditional Chinese semantic-based search.

  3. Biobotic insect swarm based sensor networks for search and rescue

    Science.gov (United States)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach surviving victims, and robotic swarms or other distributed robotic systems could play a great role in achieving this faster. However, current technology falls short of offering centimeter-scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies, enabling biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include the development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  4. An Efficient Annotation of Search Results Based on Feature

    Directory of Open Access Journals (Sweden)

    A. Jebha

    2015-10-01

    Full Text Available With the increasing number of web databases, a major part of the deep web consists of structured databases. In several search engines, the encoded data in the returned result pages often comes from structured databases, referred to as Web databases (WDBs). A result page returned from a WDB contains multiple search records (SRRs). Data units obtained from these databases are encoded into the dynamic result pages for manual browsing. To make these units machine-processable, relevant information must be extracted and the data units assigned meaningful labels. In this paper, feature ranking is proposed to extract the relevant information from features of WDBs. Feature ranking is a practical way to improve understanding of the data and to identify relevant features. This research explores the performance of the feature ranking process by using linear support vector machines with various features of Web databases for annotation of relevant results. Experimental results show that the proposed system performs better than earlier methods.

  5. A Content-Based Search Algorithm for Motion Estimation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The basic search algorithm used to implement Motion Estimation (ME) in the H.263 encoder is the full search. It is simple but time-consuming. Traditional fast search algorithms reduce computation, but may degrade image quality or increase bit-rate in low bit-rate applications. A fast search algorithm for ME that takes image content into consideration is proposed in this paper. Experiments show that the proposed algorithm can offer up to 70 percent savings in execution time with almost no sacrifice in PSNR or bit-rate, compared with the full search.
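
    The exhaustive baseline the abstract compares against can be sketched in a few lines. The frame data, block size, and search radius below are illustrative assumptions, not the paper's H.263 configuration:

```python
def sad(a, b, y0, x0, y1, x1, n):
    """Sum of absolute differences between an n-by-n block of frame a at
    (y0, x0) and an n-by-n block of frame b at (y1, x1)."""
    return sum(
        abs(a[y0 + i][x0 + j] - b[y1 + i][x1 + j])
        for i in range(n) for j in range(n)
    )

def full_search(ref, cur, by, bx, n=8, radius=4):
    """Exhaustive block matching: evaluate every offset in a
    (2*radius+1)^2 window and keep the lowest-SAD motion vector."""
    h, w = len(ref), len(ref[0])
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > h or x + n > w:
                continue  # candidate block would fall outside the frame
            cost = sad(cur, ref, by, bx, y, x, n)
            if cost < best_sad:
                best_sad, best_mv = cost, (dy, dx)
    return best_mv, best_sad

# Toy 16x16 frame pair in which the bright block moved by (dy, dx) = (1, 2):
ref = [[200 if 5 <= y < 13 and 6 <= x < 14 else 0 for x in range(16)] for y in range(16)]
cur = [[200 if 4 <= y < 12 and 4 <= x < 12 else 0 for x in range(16)] for y in range(16)]
mv, cost = full_search(ref, cur, 4, 4)
```

    The quadratic number of SAD evaluations per block is exactly the cost that content-aware and multi-step methods try to avoid.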

  6. Making the road by searching - A search engine based on Swarm Information Foraging

    CERN Document Server

    Gayo-Avello, Daniel

    2009-01-01

    Search engines are nowadays one of the most important entry points for Internet users and a central tool for solving most of their information needs. Still, a substantial fraction of users' searches obtain unsatisfactory results, and several lines of research aim to increase the relevance of the results users retrieve. In this paper the authors frame this problem within the much broader (and older) one of information overload. They argue that users' dissatisfaction with search engines is a common current manifestation of that problem, and propose a different angle from which to tackle it. As will be discussed, their approach shares goals with a currently hot research topic (namely, learning to rank for information retrieval) but, unlike the techniques commonly applied in that field, their technique cannot strictly be considered machine learning; additionally, it can be used to change the search engine's response in real time, driven by user behavior. Their proposal ...

  7. Professional Microsoft search fast search, Sharepoint search, and search server

    CERN Document Server

    Bennett, Mark; Kehoe, Miles; Voskresenskaya, Natalya

    2010-01-01

    Use Microsoft's latest search-based technology, FAST search, to plan, customize, and deploy your search solution. FAST is Microsoft's latest intelligent search-based technology that boasts robustness and an ability to integrate business intelligence with search. This in-depth guide provides you with advanced coverage of FAST search and shows you how to use it to plan, customize, and deploy your search solution, with an emphasis on SharePoint 2010 and Internet-based search solutions. With a particular appeal for anyone responsible for implementing and managing enterprise search, this book presents t

  8. Video Image Block-matching Motion Estimation Algorithm Based on Two-step Search

    Institute of Scientific and Technical Information of China (English)

    Wei-qi JIN; Yan CHEN; Ling-xue WANG; Bin LIU; Chong-liang LIU; Ya-zhong SHEN; Gui-qing ZHANG

    2010-01-01

    Aiming at the shortcoming that certain existing block-matching algorithms, such as the full search, three-step search, and diamond search algorithms, usually cannot keep a good balance between high accuracy and low computational complexity, a block-matching motion estimation algorithm based on a two-step search is proposed in this paper. Based on the fact that the gray values of adjacent pixels do not vary quickly, the algorithm employs an interlaced search pattern in the search window to estimate the motion vector of the object block. Simulation and actual experiments demonstrate that the proposed algorithm greatly outperforms the well-known three-step search and diamond search algorithms, whether the motion vector is large or small. Compared with the full search algorithm, the proposed one achieves similar performance but requires much less computation; the algorithm is therefore well suited for real-time video image processing.

  9. A Harmony Search Based Algorithm for Detecting Distributed Predicates

    Directory of Open Access Journals (Sweden)

    Eslam Al Maghayreh

    2012-10-01

    Full Text Available Detection of distributed predicates (also referred to as runtime verification) can be used to verify that a particular run of a given distributed program satisfies certain properties (represented as predicates). Consequently, distributed predicate detection techniques can be used to effectively improve the dependability of a given distributed application. Due to concurrency, the detection of distributed predicates can incur significant overhead. Most of the effective techniques developed to solve this problem work efficiently only for certain classes of predicates, like conjunctive predicates. In this paper, we present a technique based on harmony search to efficiently detect the satisfaction of a predicate under the possibly modality. We have implemented the proposed technique and conducted several experiments to demonstrate its effectiveness.

  10. Axion search by laser-based experiment OSQAR

    Science.gov (United States)

    Sulc, M.; Pugnat, P.; Ballou, R.; Deferne, G.; Duvillaret, L.; Flekova, L.; Finger, M.; Finger, M.; Hosek, J.; Husek, T.; Jost, R.; Kral, M.; Kunc, S.; Macuchova, K.; Meissner, K. A.; Morville, J.; Romanini, D.; Schott, M.; Siemko, A.; Slunecka, M.; Vitrant, G.; Zicha, J.

    2013-08-01

    The laser-based experiment OSQAR at CERN searches for axions by two methods. The photon regeneration experiment uses two LHC dipole magnets, 14.3 m long with a 9.5 T magnetic field, equipped with an optical barrier at the end of the first magnet, in a "light shining through a wall" arrangement. No excess of events above the background was detected with this arrangement; nevertheless, the result extends the exclusion region for the axion mass. The second method aims to measure the ultra-fine vacuum magnetic birefringence for the first time. An optical scheme with an electro-optic modulator has been proposed, validated, and subsequently improved, and the Cotton-Mouton constant of air was determined with this experimental setup.

  11. Axion search by laser-based experiment OSQAR

    Energy Technology Data Exchange (ETDEWEB)

    Sulc, M., E-mail: miroslav.sulc@tul.cz [Technical University of Liberec (Czech Republic); Pugnat, P. [LNCMI-G, CNRS-UJF-UPS-INSA, BP 166, 38042 Grenoble Cedex-9 (France); Ballou, R. [Institut Néel, CNRS and Université Joseph Fourier, BP 166, 38042 Grenoble Cedex-9 (France); Deferne, G. [CERN, CH-1211 Geneva-23 (Switzerland); Duvillaret, L. [IMEP-LAHC, UMR CNRS 5130, Minatec-INPG, 3 parvis Louis Néel, BP 257, 38016 Grenoble Cedex-1 (France); Flekova, L. [Czech Technical University, Faculty of Mechanical Engineering, Prague (Czech Republic); Finger, M.; Finger, M. [Charles University, Faculty of Mathematics and Physics, Prague (Czech Republic); Hosek, J. [Czech Technical University, Faculty of Mechanical Engineering, Prague (Czech Republic); Husek, T. [Charles University, Faculty of Mathematics and Physics, Prague (Czech Republic); Jost, R. [LSP, UMR CNRS 5588, Université Joseph Fourier, BP 87, 38402 Saint-Martin d' Hères (France); Kral, M. [Czech Technical University, Faculty of Mechanical Engineering, Prague (Czech Republic); Kunc, S. [Technical University of Liberec (Czech Republic); Macuchova, K. [Czech Technical University, Faculty of Mechanical Engineering, Prague (Czech Republic); Meissner, K.A. [Institute of Theoretical Physics, University of Warsaw (Poland); Morville, J. [LASIM, UMR CNRS 5579, Université Claude Bernard Lyon-1, 69622 Villeurbanne (France); Romanini, D. [LSP, UMR CNRS 5588, Université Joseph Fourier, BP 87, 38402 Saint-Martin d' Hères (France); Schott, M.; Siemko, A. [CERN, CH-1211 Geneva-23 (Switzerland); Slunecka, M. [Charles University, Faculty of Mathematics and Physics, Prague (Czech Republic); and others

    2013-08-01

    The laser-based experiment OSQAR at CERN searches for axions by two methods. The photon regeneration experiment uses two LHC dipole magnets, 14.3 m long with a 9.5 T magnetic field, equipped with an optical barrier at the end of the first magnet, in a "light shining through a wall" arrangement. No excess of events above the background was detected with this arrangement; nevertheless, the result extends the exclusion region for the axion mass. The second method aims to measure the ultra-fine vacuum magnetic birefringence for the first time. An optical scheme with an electro-optic modulator has been proposed, validated, and subsequently improved, and the Cotton–Mouton constant of air was determined with this experimental setup.

  12. Novel cued search strategy based on information gain for phased array radar

    Institute of Scientific and Technical Information of China (English)

    Lu Jianbin; Hu Weidong; Xiao Hui; Yu Wenxian

    2008-01-01

    A search strategy based on the maximal information gain principle is presented for the cued search of phased array radars. First, the methods for determining the cued search region, arranging the beam positions, and calculating the prior probability distribution of each beam position are discussed. Then, two search algorithms based on information gain are proposed, using Shannon entropy and Kullback-Leibler entropy, respectively. With the proposed strategy, the information gain of each beam position is predicted before radar detection, and the observation is made in the beam position with the maximal information gain. Compared with the conventional method of sequential search and confirm search, simulation results show that the proposed search strategy can markedly improve the search performance and save radar time resources for the same given detection probability.
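
    The Shannon-entropy variant of this selection rule can be illustrated with a small sketch: pick the beam position whose observation yields the largest expected reduction in uncertainty about target presence. The detection and false-alarm probabilities below are assumed values, not the paper's radar model:

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli 'target present?' variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def next_beam(priors, pd=0.9, pfa=0.05):
    """Pick the beam position whose observation gives the largest expected
    information gain (prior entropy minus expected posterior entropy)."""
    def expected_gain(p):
        # Probability the detector reports 'detection' at this position.
        p_det = pd * p + pfa * (1 - p)
        # Posterior presence probabilities after each report (Bayes rule).
        post_hit = pd * p / p_det if p_det > 0 else 0.0
        post_miss = ((1 - pd) * p / (1 - p_det)) if p_det < 1 else 0.0
        h_after = p_det * binary_entropy(post_hit) + (1 - p_det) * binary_entropy(post_miss)
        return binary_entropy(p) - h_after
    gains = [expected_gain(p) for p in priors]
    return max(range(len(priors)), key=gains.__getitem__)

# The most uncertain position (prior nearest 0.5) is the most informative to observe.
beam = next_beam([0.05, 0.45, 0.9, 0.2])
```

    The expected gain computed here is the mutual information between target presence and the detector's report, which is what the strategy maximizes at each step.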

  13. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
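
    A minimal sketch of the re-ranking idea, assuming each result has already been assigned coordinates (the system described derives them dynamically from the URL, a step omitted here):

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def georank(results, query_loc):
    """Re-rank search results by distance from the query's reference location.
    Each result is (url, (lat, lon)); the URLs here are hypothetical."""
    return sorted(results, key=lambda r: haversine_km(r[1], query_loc))

results = [
    ("example.org/berlin", (52.52, 13.40)),
    ("example.org/paris", (48.86, 2.35)),
    ("example.org/madrid", (40.42, -3.70)),
]
# Query issued with Lyon, France as the reference location:
ranked = georank(results, (45.76, 4.84))
```

    This produces the alternative, geographically ordered ranking the record describes, independent of the engine's original relevance order.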

  14. A Bayesian Based Search and Classification System for Product Information of Agricultural Logistics Information Technology

    OpenAIRE

    2011-01-01

    Part 1: Decision Support Systems, Intelligent Systems and Artificial Intelligence Applications; International audience; In order to meet the needs of users who search for agricultural products logistics information technology, this paper introduces a search and classification system for agricultural products logistics information technology. Firstly, a dictionary of field concept words was built based on analyzing the characteristics of agricultural products logistics in...

  15. A Semantic Query Transformation Approach Based on Ontology for Search Engine

    Directory of Open Access Journals (Sweden)

    SAJENDRA KUMAR

    2012-05-01

    Full Text Available These days, popular web search engines such as Google, Yahoo!, and Live Search are used for information retrieval in all areas to obtain initial helpful information. The information retrieved via a search engine may not be relevant to the search target in the user's mind, and when relevant information is not found, the user has to sift through the results. These search engines offer a traditional search service based on "static keywords", which requires users to type in the exact keywords. This approach clearly puts users in the critical situation of having to guess the exact keyword. Users may want to refine their search by using attributes of the search target, but the relevance of the results in most cases may not be satisfactory, and users may not be patient enough to browse through a complete list of pages to find a relevant result. The reason is that these search engines perform search based on syntax, not semantics, and so are less able to understand the relationships between keywords, which adversely affects the results they produce. Semantic search engines, which return concepts rather than documents matched to the user query, are a solution to this. In this paper we propose a semantic query interface that creates a semantic query from the user's input, and we survey current semantic search engine techniques.

  16. Search Engines and Search Technologies for Web-based Text Data

    Institute of Scientific and Technical Information of China (English)

    李勇

    2001-01-01

    This paper describes the functions, characteristics and operating principles of search engines based on Web text, and the search and data mining technologies for Web-based text information. Methods of computer-aided text clustering and abstracting are also given. Finally, it gives some guidelines for the assessment of search quality.
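
    The core indexing-and-ranking step of such a text search engine can be illustrated with a toy TF-IDF/cosine-similarity sketch. This is a simplification under assumed whitespace tokenization; production engines add inverted indexes, stemming, and link analysis:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a small corpus (a toy version of the
    indexing step a Web text search engine performs)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(toks).items()} for toks in tokenized], idf

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def search(query, docs):
    """Rank documents by cosine similarity to the TF-IDF query vector."""
    vecs, idf = tfidf_vectors(docs)
    q = {t: c * idf.get(t, 0.0) for t, c in Counter(query.lower().split()).items()}
    scores = [(cosine(q, v), i) for i, v in enumerate(vecs)]
    return [i for s, i in sorted(scores, reverse=True) if s > 0]

docs = [
    "web search engines index text data",
    "soil erosion and mulching studies",
    "text clustering groups similar web documents",
]
hits = search("web text search", docs)
```

    Documents sharing no query terms score zero and are dropped, which is the simplest form of the relevance filtering the paper surveys.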

  17. Developing a Grid-based search and categorization tool

    CERN Document Server

    Haya, Glenn; Vigen, Jens

    2003-01-01

    Grid technology has the potential to improve the accessibility of digital libraries. The participants in Project GRACE (Grid Search And Categorization Engine) are in the process of developing a search engine that will allow users to search through heterogeneous resources stored in geographically distributed digital collections. What differentiates this project from current search tools is that GRACE will be run on the European Data Grid, a large distributed network, and will not have a single centralized index as current web search engines do. In some cases, the distributed approach offers advantages over the centralized approach since it is more scalable, can be used on otherwise inaccessible material, and can provide advanced search options customized for each data source.

  18. The effects of mulching on soil erosion by water. A review based on published data

    Science.gov (United States)

    Prosdocimi, Massimo; Jordán, Antonio; Tarolli, Paolo; Cerdà, Artemi

    2016-04-01

    lands, post-fire affected areas and anthropic sites. Data published in literature have been collected. The results proved the beneficial effects of mulching on soil erosion by water in all the contexts considered, with reduction rates in average sediment concentration, soil loss and runoff volume that, in some cases, exceeded 90%. Furthermore, in most cases, mulching confirmed to be a relatively inexpensive soil conservation practice that allowed to reduce soil erodibility and surface immediately after its application. References Cerdà, A., 1994. The response of abandoned terraces to simulated rain, in: Rickson, R.J., (Ed.), Conserving Soil Resources: European Perspective, CAB International, Wallingford, pp. 44-55. Cerdà, A., Flanagan, D.C., Le Bissonnais, Y., Boardman, J., 2009. Soil erosion and agriculture. Soil & Tillage Research 106, 107-108. Cerdan, O., Govers, G., Le Bissonnais, Y., Van Oost, K., Poesen, J., Saby, N., Gobin, A., Vacca, A., Quinton, J., Auerwald, K., Klik, A., Kwaad, F.J.P.M., Raclot, D., Ionita, I., Rejman, J., Rousseva, S., Muxart, T., Roxo, M.J., Dostal, T., 2010. Rates and spatial variations of soil erosion in Europe: A study based on erosion plot data. Geomorphology 122, 167-177. García-Orenes, F., Roldán A., Mataix-Solera, J, Cerdà, A., Campoy M, Arcenegui, V., Caravaca F. 2009. Soil structural stability and erosion rates influenced by agricultural management practices in a semi-arid Mediterranean agro-ecosystem. Soil Use and Management 28: 571-579. Hayes, S.A., McLaughlin, R.A., Osmond, D.L., 2005. Polyacrylamide use for erosion and turbidity control on construction sites. Journal of soil and water conservation 60(4):193-199. Jordán, A., Zavala, L.M., Muñoz-Rojas, M., 2011. Mulching, effects on soil physical properties. In: Gliński, J., Horabik, J., Lipiec, J. (Eds.), Encyclopedia of Agrophysics. Springer, Dordrecht, pp. 492-496. Montgomery, D.R., 2007. Soil erosion and agricultural sustainability. PNAS 104, 13268-13272. Prats, S

  19. Location-Based Search Engines Tasks and Capabilities: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Hossein Vakili Mofrad

    2007-12-01

    Full Text Available Location-based web searching is one of the popular tasks expected from the search engines. A location-based query consists of a topic and a reference location. Unlike general web search, in location-based search it is expected to find and rank documents which are not only related to the query topic but also geographically related to the location which the query is associated with. There are several issues for developing effective geographic search engines and so far, no global location-based search engine has been reported. Location ambiguity, lack of geographic information on web pages, language-based and country-dependent addressing styles, and multiple locations related to a single web resource are notable difficulties. Search engine companies have started to develop and offer location-based services. However, they are still geographically limited and have not become as successful and popular as general search engines. This paper reviews the architecture and tasks of location-based search engines and compares the capabilities, functionalities and coverage of the current geographic search engines with a user-oriented approach.

  20. Self-Adapting Routing Overlay Network for Frequently Changing Application Traffic in Content-Based Publish/Subscribe System

    Directory of Open Access Journals (Sweden)

    Meng Chi

    2014-01-01

    Full Text Available In the large-scale distributed simulation area, the topology of the overlay network cannot always adapt rapidly to frequently changing application traffic to reduce the overall traffic cost. In this paper, we propose a self-adapting routing strategy for frequently changing application traffic in a content-based publish/subscribe system. The strategy first trains on the traffic information and then uses this training information to predict future application traffic. Finally, the strategy reconfigures the topology of the overlay network based on this prediction to reduce the overall traffic cost. A predicting path is also introduced in this paper to reduce the number of reconfigurations. Compared to other strategies, the experimental results show that the strategy proposed in this paper can reduce the overall traffic cost of the publish/subscribe system with fewer reconfigurations.

  1. A Full-Text-Based Search Engine for Finding Highly Matched Documents Across Multiple Categories

    Science.gov (United States)

    Nguyen, Hung D.; Steele, Gynelle C.

    2016-01-01

    This report demonstrates a full-text-based search engine that works with any Web-based mobile application. The engine has the capability to search databases across multiple categories based on a user's queries and identify the most relevant or similar documents. The search results presented here were found using an Android (Google Co.) mobile device; however, the engine is also compatible with other mobile phones.

  2. Content-based microarray search using differential expression profiles

    Directory of Open Access Journals (Sweden)

    Thathoo Rahul

    2010-12-01

    Full Text Available Abstract Background With the expansion of public repositories such as the Gene Expression Omnibus (GEO, we are rapidly cataloging cellular transcriptional responses to diverse experimental conditions. Methods that query these repositories based on gene expression content, rather than textual annotations, may enable more effective experiment retrieval as well as the discovery of novel associations between drugs, diseases, and other perturbations. Results We develop methods to retrieve gene expression experiments that differentially express the same transcriptional programs as a query experiment. Avoiding thresholds, we generate differential expression profiles that include a score for each gene measured in an experiment. We use existing and novel dimension reduction and correlation measures to rank relevant experiments in an entirely data-driven manner, allowing emergent features of the data to drive the results. A combination of matrix decomposition and p-weighted Pearson correlation proves the most suitable for comparing differential expression profiles. We apply this method to index all GEO DataSets, and demonstrate the utility of our approach by identifying pathways and conditions relevant to transcription factors Nanog and FoxO3. Conclusions Content-based gene expression search generates relevant hypotheses for biological inquiry. Experiments across platforms, tissue types, and protocols inform the analysis of new datasets.
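
    The profile-comparison step can be illustrated with a plain weighted Pearson correlation between two differential-expression profiles. The profiles and the |score|-based weighting below are illustrative assumptions, not the paper's exact p-weighting scheme:

```python
import math

def weighted_pearson(x, y, w):
    """Weighted Pearson correlation between two differential-expression
    profiles; w puts more weight on strongly differential genes."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / sw
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / sw
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / sw
    return cov / math.sqrt(vx * vy)

# Two profiles that agree on the strongly differential genes (high weight)
# and disagree only on genes near zero (low weight):
query = [2.5, -1.8, 0.1, -0.2]
hit = [2.1, -2.0, -0.3, 0.4]
w = [abs(v) for v in query]  # assumed weighting: |score| of the query profile
r = weighted_pearson(query, hit, w)
```

    Weighting by the query's own differential scores makes the correlation insensitive to disagreement on non-differential genes, which is the thresholding-free behavior the method aims for.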

  3. Online Discovery of Search Objectives for Test-Based Problems.

    Science.gov (United States)

    Liskowski, Paweł; Krawiec, Krzysztof

    2016-03-08

    In test-based problems, commonly approached with competitive coevolutionary algorithms, the fitness of a candidate solution is determined by the outcomes of its interactions with multiple tests. Usually, fitness is a scalar aggregate of interaction outcomes, and as such imposes a complete order on the candidate solutions. However, passing different tests may require unrelated "skills," and candidate solutions may vary with respect to such capabilities. In this study, we provide theoretical evidence that scalar fitness, inherently incapable of capturing such differences, is likely to lead to premature convergence. To mitigate this problem, we propose DISCO, a method that automatically identifies the groups of tests for which the candidate solutions behave similarly and define the above skills. Each such group gives rise to a derived objective, and these objectives together guide the search algorithm in multi-objective fashion. When applied to several well-known test-based problems, the proposed approach significantly outperforms the conventional two-population coevolution. This opens the door to efficient and generic countermeasures to premature convergence for both coevolutionary and evolutionary algorithms applied to problems featuring aggregating fitness functions.
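
    The grouping idea can be illustrated with a simplified sketch that clusters tests whose outcome vectors are identical across all candidate solutions and derives one objective per group. DISCO itself uses a statistical grouping rather than exact duplicates, so this is only a toy version of the technique:

```python
from collections import defaultdict

def derive_objectives(outcomes):
    """Group tests with identical outcome columns across candidate solutions
    (each group approximating one 'skill'), then score each solution per
    group as the number of tests passed in that group."""
    n_solutions = len(outcomes)
    n_tests = len(outcomes[0])
    columns = defaultdict(list)
    for t in range(n_tests):
        col = tuple(outcomes[s][t] for s in range(n_solutions))
        columns[col].append(t)
    groups = list(columns.values())
    scores = [
        tuple(sum(outcomes[s][t] for t in group) for group in groups)
        for s in range(n_solutions)
    ]
    return groups, scores

# 3 candidate solutions x 4 tests; tests 0,1 exercise one skill, tests 2,3 another.
outcomes = [
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
]
groups, scores = derive_objectives(outcomes)
```

    The per-group score tuples then serve as the derived objectives guiding a multi-objective search: solutions 0 and 1 are incomparable here, while a scalar aggregate would rank one above the other and lose that distinction.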

  4. Search Profiles Based on User to Cluster Similarity

    Directory of Open Access Journals (Sweden)

    Saša Bošnjak

    2009-06-01

    Full Text Available Privacy of web users' query search logs has, since the AOL dataset release a few years ago, been treated as one of the central issues concerning privacy on the Internet. Therefore, the question of privacy preservation has also attracted a lot of attention in the different communities surrounding search engines. The use of clustering methods to provide low-level contextual search while retaining a high privacy-utility tradeoff is examined in this paper. By using only the user's cluster membership, the search query terms need no longer be retained, reducing privacy concerns for both users and companies. The paper presents a lightweight framework for combining query words, user similarities and clustering in order to provide a meaningful way of mining user searches while protecting their privacy. This differs from previous privacy-preservation attempts, which anonymize the queries instead of the users.

  5. SEARCH PROFILES BASED ON USER TO CLUSTER SIMILARITY

    Directory of Open Access Journals (Sweden)

    Ilija Subasic

    2007-12-01

    Full Text Available Privacy of web users' query search logs has, since last year's AOL dataset release, been treated as one of the central issues concerning privacy on the Internet. Therefore, the question of privacy preservation has also attracted a lot of attention in the different communities surrounding search engines. The use of clustering methods to provide low-level contextual search while retaining high privacy/utility is examined in this paper. By using only the user's cluster membership, the search query terms need no longer be retained, reducing privacy concerns for both users and companies. The paper presents a lightweight framework for combining query words, user similarities and clustering in order to provide a meaningful way of mining user searches while protecting their privacy. This differs from previous privacy-preservation attempts, which anonymize the queries instead of the users.

  6. Search Method Based on Figurative Indexation of Folksonomic Features of Graphic Files

    Directory of Open Access Journals (Sweden)

    Oleg V. Bisikalo

    2013-11-01

    Full Text Available In this paper, a search method based on figurative indexation of the folksonomic characteristics of graphic files is described. The method takes extralinguistic information into account and is based on a model of human figurative thinking. The paper describes the creation of a method for searching image files based on their formal features, including folksonomic clues.

  7. Extracting Communities of Interests for Semantics-Based Graph Searches

    Science.gov (United States)

    Nakatsuji, Makoto; Tanaka, Akimichi; Uchiyama, Toshio; Fujimura, Ko

    Users recently find their interests by checking the contents published or mentioned by their immediate neighbors in social networking services. We propose semantics-based link navigation: links guide the active user to potential neighbors who may provide new interests. Our method first creates a graph that has users as nodes and shared interests as links. Then it divides the graph by link pruning to extract a practical number of interest-sharing groups, i.e. communities of interests (COIs), that the active user can navigate. It then attaches a different semantic tag to the link to each representative user, which best reflects the interests of the COIs they are included in, and to the link to each immediate neighbor of the active user. It finally calculates link attractiveness by analyzing the semantic tags on links. The active user can select the link to access by checking the semantic tags and link attractiveness. User interests extracted from large-scale actual blog entries are used to confirm the efficiency of our proposal. Results show that navigation based on link attractiveness and representative users allows the user to find new interests much more accurately than is otherwise possible.

  8. Affordances of students' using the World Wide Web as a publishing medium in project-based learning environments

    Science.gov (United States)

    Bos, Nathan Daniel

    This dissertation investigates the emerging affordance of the World Wide Web as a place for high school students to become authors and publishers of information. Two empirical studies lay the groundwork for student publishing by examining learning issues related to audience adaptation in writing, motivation and engagement with hypermedia, design, problem-solving, and critical evaluation. Two models of student publishing on the World Wide Web were investigated over the course of two 11th-grade project-based science curriculums. In the first curricular model, students worked in pairs to design informative hypermedia projects about infectious diseases that were published on the Web. Four case studies were written, drawing on both product- and process-related data sources. Four theoretically important findings are illustrated through these cases: (1) multimedia, especially graphics, seemed to catalyze some students' design processes by affecting the sequence of their design process and by providing a connection between the science content and their personal interest areas, (2) hypermedia design can demand high levels of analysis and synthesis of science content, (3) students can learn to think about science content representation through engagement with challenging design tasks, and (4) students' consideration of an outside audience can be facilitated by teacher-given design principles. The second Web-publishing model examines how students critically evaluate scientific resources on the Web, and how students can contribute to the Web's organization and usability by publishing critical reviews. Students critically evaluated Web resources using a four-part scheme: summarization of content, evaluation of credibility, evaluation of organizational structure, and evaluation of appearance. Content analyses comparing students' reviews and reviewed Web documents showed that students were proficient at summarizing the content of Web documents, identifying their publishing

  9. Explicit Context Matching in Content-Based Publish/Subscribe Systems

    Directory of Open Access Journals (Sweden)

    Miguel Jiménez

    2013-03-01

    Full Text Available Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on the area of context-aware matching in P/S systems and have explored its implications in domains with highly dynamic context like wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context agnostic or do not differentiate context from the other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on the minimization of network overhead in domains with recurrent context changes related, for example, to mobile ad hoc networks (MANETs). Our approach represents a solution that helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced and leveraging contextual scoping in the filtering process notably reduces (un)subscription cost per node, while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes.

  10. Explicit context matching in content-based publish/subscribe systems.

    Science.gov (United States)

    Vavassori, Sergio; Soriano, Javier; Lizcano, David; Jiménez, Miguel

    2013-03-01

    Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on the area of context-aware matching in P/S systems and have explored its implications in domains with highly dynamic context like wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context agnostic or do not differentiate context from the other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on the minimization of network overhead in domains with recurrent context changes related, for example, to mobile ad hoc networks (MANETs). Our approach represents a solution that helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced and leveraging contextual scoping in the filtering process notably reduces (un)subscription cost per node, while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes.
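
    The baseline content-based matching step that context-aware models such as SilboPS build upon can be sketched as a per-subscription constraint check. This is a generic P/S matcher under assumed attribute names, not SilboPS's actual context-scoping algorithm:

```python
def matches(event, subscription):
    """Content-based match: every (attribute, operator, value) constraint in
    the subscription must hold on the event's attributes."""
    ops = {
        "=": lambda a, b: a == b,
        "<": lambda a, b: a < b,
        ">": lambda a, b: a > b,
    }
    for attr, op, value in subscription:
        if attr not in event or not ops[op](event[attr], value):
            return False
    return True

def deliver(event, subscriptions):
    """Return the ids of subscribers whose filters match the event."""
    return [sid for sid, sub in subscriptions.items() if matches(event, sub)]

subscriptions = {
    "alice": [("type", "=", "temperature"), ("value", ">", 30)],
    "bob": [("type", "=", "humidity")],
    "carol": [("type", "=", "temperature"), ("value", "<", 10)],
}
event = {"type": "temperature", "value": 35, "room": "lab-2"}
receivers = deliver(event, subscriptions)
```

    A context-aware model would additionally scope this filtering by the publisher's and subscribers' current context, so that context changes do not require rewriting every subscription.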

  11. A fast block-matching algorithm based on variable shape search

    Institute of Scientific and Technical Information of China (English)

    LIU Hao; ZHANG Wen-jun; CAI Jun

    2006-01-01

    Block-matching motion estimation plays an important role in video coding. The simple and efficient fast block-matching algorithm using Variable Shape Search (VSS) proposed in this paper is based on diamond search and hexagon search. The initial big diamond search is designed to fit the directional centre-biased characteristics of real-world video sequences, and the directional hexagon search is designed to identify a small region where the best motion vector is expected to be located. Finally, the small diamond search is used to select the best motion vector in the located small region. Experimental results showed that the proposed VSS algorithm can significantly reduce the computational complexity and provide competitive computational speedup with similar distortion performance compared with the popular Diamond-based Search (DS) algorithm in the MPEG-4 Simple Profile.
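The coarse-to-fine diamond pattern that VSS builds on can be illustrated with a minimal sketch; the cost function below is a toy distortion surface rather than real frame data, and the directional hexagon stage of VSS is omitted:

```python
# Minimal diamond-search sketch for block-matching motion estimation.
# The large diamond is walked until the centre wins, then a small
# diamond refines the result.

LARGE = [(0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SMALL = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def diamond_search(cost, start=(0, 0)):
    """`cost` maps a candidate motion vector to a distortion value."""
    centre = start
    while True:
        best = min([centre] + [(centre[0] + dx, centre[1] + dy) for dx, dy in LARGE],
                   key=cost)
        if best == centre:          # centre wins: stop the coarse stage
            break
        centre = best
    return min([centre] + [(centre[0] + dx, centre[1] + dy) for dx, dy in SMALL],
               key=cost)

# Toy distortion surface whose minimum sits at motion vector (3, -2).
mv = diamond_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2)
```

In a real encoder the lambda would compute a sum of absolute differences between the current block and the candidate reference block.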

  12. Multiobjective Optimization Method Based on Adaptive Parameter Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    P. Sabarinath

    2015-01-01

    Full Text Available The present trend in industries is to improve the techniques currently used in the design and manufacture of products in order to meet the challenges of the competitive market. The crucial task nowadays is to find the optimal design and machining parameters so as to minimize the production costs. Design optimization involves many design variables with multiple and conflicting objectives, subject to complex nonlinear constraints. The complexity of optimal design of machine elements creates the requirement for increasingly effective algorithms. Solving a nonlinear multiobjective optimization problem requires significant computing effort. From the literature it is evident that metaheuristic algorithms perform better in dealing with multiobjective optimization. In this paper, we extend the recently developed parameter adaptive harmony search algorithm to solve multiobjective design optimization problems using the weighted sum approach. To determine the best weightage set for this analysis, a performance index based on least average error is computed for each weightage set. The proposed approach is applied to solve a biobjective design optimization of a disc brake problem and a newly formulated biobjective design optimization of a helical spring problem. The results reveal that the proposed approach performs better than other algorithms.
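The weighted-sum approach described above can be sketched as follows; the two objectives, the candidate grid, and the brute-force minimizer are illustrative stand-ins for the parameter-adaptive harmony search machinery:

```python
# Weighted-sum scalarisation sketch: each weightage set turns the
# biobjective problem into a single-objective one, whose minimiser is
# one point of an approximate Pareto front.

def weighted_sum_front(f1, f2, candidates, weights):
    """Return the best candidate for each (w1, w2) weightage set."""
    front = []
    for w1, w2 in weights:
        best = min(candidates, key=lambda x: w1 * f1(x) + w2 * f2(x))
        front.append((w1, w2, best))
    return front

# Toy conflicting objectives over a 1-D design variable.
f1 = lambda x: x * x                 # favours x = 0
f2 = lambda x: (x - 2.0) ** 2        # favours x = 2
xs = [i / 10.0 for i in range(21)]
front = weighted_sum_front(f1, f2, xs, [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)])
```

Sweeping the weights trades one objective against the other, which is how a single-objective optimizer is reused for multiobjective design.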

  13. Evaluating Search Engine Relevance with Click-Based Metrics

    Science.gov (United States)

    Radlinski, Filip; Kurup, Madhu; Joachims, Thorsten

    Automatically judging the quality of retrieval functions based on observable user behavior holds promise for making retrieval evaluation faster, cheaper, and more user centered. However, the relationship between observable user behavior and retrieval quality is not yet fully understood. In this chapter, we expand upon Radlinski et al. ("How does clickthrough data reflect retrieval quality?", in Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), 43-52, 2008), presenting a sequence of studies investigating this relationship for an operational search engine on the arXiv.org e-print archive. We find that none of the eight absolute usage metrics we explore (including the number of clicks observed, the frequency with which users reformulate their queries, and how often result sets are abandoned) reliably reflect retrieval quality for the sample sizes we consider. However, we find that paired experiment designs adapted from sensory analysis produce accurate and reliable statements about the relative quality of two retrieval functions. In particular, we investigate two paired comparison tests that analyze clickthrough data from an interleaved presentation of ranking pairs, and find that both give accurate and consistent results. We conclude that both paired comparison tests give substantially more accurate and sensitive evaluation results than the absolute usage metrics in our domain.
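One widely used interleaving scheme for such paired comparisons is team-draft interleaving; the sketch below is a simplified, assumed variant for illustration, not the exact procedure from the chapter:

```python
import random

# Team-draft interleaving sketch: two rankings are merged so that each
# result on the interleaved list is credited to the ranker ("team")
# that contributed it; clicks can then be tallied per team.

def team_draft_interleave(ranking_a, ranking_b, rng):
    interleaved, team = [], {}
    count = {"A": 0, "B": 0}
    pools = {"A": list(ranking_a), "B": list(ranking_b)}
    while pools["A"] or pools["B"]:
        if not pools["B"]:
            label = "A"
        elif not pools["A"]:
            label = "B"
        elif count["A"] != count["B"]:
            label = "A" if count["A"] < count["B"] else "B"
        else:
            label = "A" if rng.random() < 0.5 else "B"  # coin-flip tie break
        doc = pools[label].pop(0)
        interleaved.append(doc)
        team[doc] = label
        count[label] += 1
        other = "B" if label == "A" else "A"
        if doc in pools[other]:            # a placed doc leaves both pools
            pools[other].remove(doc)
    return interleaved, team

result, team = team_draft_interleave(["d1", "d2", "d3"], ["d2", "d1", "d3"],
                                     random.Random(0))
```

Given a click log over the interleaved list, the ranker whose team collects more clicks wins the paired comparison.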

  14. The Cellular Differential Evolution Based on Chaotic Local Search

    Directory of Open Access Journals (Sweden)

    Qingfeng Ding

    2015-01-01

    Full Text Available To avoid immature convergence and tune the selection pressure in the differential evolution (DE) algorithm, a new differential evolution algorithm based on cellular automata and chaotic local search (CLS), called ccDE, is proposed. To balance the exploration and exploitation tradeoff of differential evolution, the interaction among individuals is limited to cellular neighbors instead of being governed by the control parameters of the canonical DE. To improve the optimizing performance of DE, the CLS helps by exploring a large region to avoid immature convergence in the early evolutionary stage and exploiting a small region to refine the final solutions in the later evolutionary stage. Moreover, to improve the convergence characteristics and maintain the population diversity, the binomial crossover operator in the canonical DE may be replaced by the orthogonal crossover operator without a crossover rate. The performance of ccDE is widely evaluated on a set of 14 bound-constrained numerical optimization problems in comparison with the canonical DE and several DE variants. The simulation results show that ccDE has better performance in terms of convergence rate and solution accuracy than other optimizers.
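The chaotic-local-search idea can be sketched with a logistic map driving perturbations around the current best solution; the shrinking radius, the greedy acceptance rule, and the sphere test function below are illustrative assumptions, not ccDE itself:

```python
# Chaotic local search sketch: a logistic map (fully chaotic at r = 4)
# generates a deterministic, non-repeating sequence in (0, 1) that
# perturbs the current best inside a shrinking radius.

def chaotic_local_search(f, best, radius, iters=50, z=0.7):
    x = list(best)
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                 # logistic-map update
        step = radius * (2.0 * z - 1.0)         # map (0, 1) to (-r, r)
        candidate = [c + step for c in x]
        if f(candidate) < f(x):                 # greedy acceptance
            x = candidate
        radius *= 0.95                          # exploit a smaller region
    return x

sphere = lambda v: sum(c * c for c in v)
refined = chaotic_local_search(sphere, [0.8, -0.6], radius=1.0)
```

Early iterations explore widely (large radius), later ones refine locally, mirroring the two-stage role the abstract assigns to CLS.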

  15. There are Discipline-Based Differences in Authors’ Perceptions Towards Open Access Publishing. A Review of: Coonin, B., & Younce, L. M. (2010). Publishing in open access education journals: The authors’ perspectives. Behavioral & Social Sciences Librarian, 29, 118-132. doi:10.1080/01639261003742181

    Directory of Open Access Journals (Sweden)

    Lisa Shen

    2011-09-01

    searches for publishing opportunities (40.4%), and professional societies (29.3%) for raising their awareness of OA. Moreover, based on voluntary general comments left at the end of the survey, researchers observed that some authors viewed the terms open access and electronic “synonymously” and thought of OA publishing only as a “format change” (p. 125). Conclusion – The study revealed some discipline-based differences in authors’ attitudes toward scholarly publishing and the concept of OA. The majority of authors publishing in education viewed author fees, a common OA publishing practice in the life and medical sciences, as undesirable. On the other hand, citation impact, a major determinant for life and medical sciences publishing, was only a minor factor for authors in education. These findings provide useful insights for future research on discipline-based publication differences. The findings also indicated peer review is the primary determinant for authors publishing in education. Moreover, while the majority of authors surveyed considered both print and e-journal formats to be equally acceptable, almost one third viewed OA journals as less prestigious than subscription-based publications. Some authors also seemed to confuse the concepts of OA and electronic publishing. These findings could generate fresh discussion points between academic librarians and faculty members regarding OA publishing.

  16. Publisher's Announcement

    Science.gov (United States)

    McGlashan, Yasmin

    2008-01-01

    Important changes for 2008 As a result of reviewing several aspects of our content, both in print and online, we have made some changes for 2008. These changes are described below: Article numbering Plasma Physics and Controlled Fusion has moved from sequential page numbering to an article numbering system, offering important advantages and flexibility by speeding up the publication process. Papers in different issues or sections can be published online as soon as they are ready, without having to wait for a whole issue or section to be allocated page numbers. The bibliographic citation will change slightly. Articles should be referenced using the six-digit article number in place of a page number, and this number must include any leading zeros. For instance, from this issue: Z Y Chen et al 2008 Plasma Phys. Control. Fusion 50 015001 Articles will continue to be published on the web in advance of the print edition. A new look and feel We have also taken the opportunity to refresh the design of the journal cover, in order to modernise the typography and create a consistent look and feel across our range of publications. We hope you like the new cover. If you have any questions or comments about any of these changes, please contact us at ppcf@iop.org.

  17. A Picture is Worth a Thousand Keywords: Exploring Mobile Image-Based Web Searching

    Directory of Open Access Journals (Sweden)

    Konrad Tollmar

    2008-01-01

    Full Text Available Using images of objects as queries is a new approach to searching for information on the Web. Image-based information retrieval goes beyond only matching images, as information in other modalities can also be extracted from data collections using an image search. We have developed a new system that uses images to search for web-based information. This paper focuses on exploring users' experience of general mobile image-based web searches to identify the issues and phenomena involved. This was achieved in a multipart study in which respondents tested prototypes of mobile image-based search systems while data were collected using interviews, observations, video observations, and questionnaires. We observed that searching for information based only on visual similarity and without any assistance is sometimes difficult, especially on mobile devices with limited interaction bandwidth. Most of our subjects preferred a search tool that guides the users through the search result based on contextual information, compared to presenting the search result as a plain ranked list.

  18. Scanned Hardcopy Maps, legato data base; public works, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Scanned Hardcopy Maps dataset, was produced all or in part from Hardcopy Maps information as of 2006. It is described as 'legato data base; public works'. Data...

  19. Effectiveness of school-based interventions in Europe to promote healthy nutrition in children and adolescents: systematic review of published and 'grey' literature.

    Science.gov (United States)

    Van Cauwenberghe, Eveline; Maes, Lea; Spittaels, Heleen; van Lenthe, Frank J; Brug, Johannes; Oppert, Jean-Michel; De Bourdeaudhuij, Ilse

    2010-03-01

    The objective of the present review was to summarise the existing European published and 'grey' literature on the effectiveness of school-based interventions to promote a healthy diet in children (6-12 years old) and adolescents (13-18 years old). Eight electronic databases, websites and contents of key journals were systematically searched, reference lists were screened, and authors and experts in the field were contacted for studies evaluating school-based interventions promoting a healthy diet and aiming at primary prevention of obesity. The studies were included if they were published between 1 January 1990 and 31 December 2007 and reported effects on dietary behaviour or on anthropometrics. Finally, forty-two studies met the inclusion criteria: twenty-nine in children and thirteen in adolescents. In children, strong evidence of effect was found for multicomponent interventions on fruit and vegetable intakes. Limited evidence of effect was found for educational interventions on behaviour, and for environmental interventions on fruit and vegetable intakes. Interventions that specifically targeted children from lower socio-economic status groups showed limited evidence of effect on behaviour. In adolescents, moderate evidence of effect was found for educational interventions on behaviour and limited evidence of effect for multicomponent programmes on behaviour. In children and adolescents, effects on anthropometrics were often not measured, and therefore evidence was lacking or delivered inconclusive evidence. To conclude, evidence was found for the effectiveness of especially multicomponent interventions promoting a healthy diet in school-aged children in European Union countries on self-reported dietary behaviour. Evidence for effectiveness on anthropometrical obesity-related measures is lacking.

  20. Risk-based scheduling of multiple search passes for UUVs

    Science.gov (United States)

    Baylog, John G.; Wettergren, Thomas A.

    2016-05-01

    This paper addresses selected computational aspects of collaborative search planning when multiple search agents seek to find hidden objects (i.e. mines) in operating environments where the detection process is prone to false alarms. A Receiver Operator Characteristic (ROC) analysis is applied to construct a Bayesian cost objective function that weighs and combines missed detection and false alarm probabilities. It is shown that for fixed ROC operating points and a validation criterion consisting of a prerequisite number of detection outcomes, an interval exists in the number of conducted search passes over which the risk objective function is supermodular. We show that this property is not retained beyond validation criterion boundaries. We investigate the use of greedy algorithms for distributing search effort and, in particular, examine the double greedy algorithm for its applicability under conditions of varying criteria. Numerical results are provided to demonstrate the effectiveness of the approach.
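The greedy distribution of search effort can be sketched as follows, under the simplifying (assumed) model that each pass over a cell independently detects a present object with probability `pd`, so an extra pass over cell i reduces its missed-detection risk by `priors[i] * (1 - pd)**k * pd`:

```python
# Greedy pass-allocation sketch: passes are assigned one at a time to
# the cell offering the largest marginal risk reduction. The per-cell
# priors and the single ROC operating point are illustrative, and the
# false-alarm term and validation criterion of the paper are omitted.

def greedy_pass_allocation(priors, pd, total_passes):
    """priors[i]: probability the object is in cell i;
    pd: per-pass detection probability. Returns passes per cell."""
    passes = [0] * len(priors)
    for _ in range(total_passes):
        # marginal gain of one more pass over each cell
        gain = [p * (1 - pd) ** k * pd for p, k in zip(priors, passes)]
        i = max(range(len(priors)), key=gain.__getitem__)
        passes[i] += 1
    return passes

alloc = greedy_pass_allocation([0.6, 0.3, 0.1], pd=0.5, total_passes=4)
```

The diminishing-returns structure of the gains is what makes greedy allocation attractive on the supermodular interval the abstract describes.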

  1. A Trustability Metric for Code Search based on Developer Karma

    CERN Document Server

    Gysin, Florian S

    2010-01-01

    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it, thus trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma value of all its developers a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric and we discuss preliminary results from an evaluation of the prototype.
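A karma-style metric of this general shape might be sketched as below; the weights and the linear combination are assumptions for illustration, not JBender's actual formula:

```python
# Trustability sketch: a developer's "karma" combines user votes and
# cross-project activity; a project's trustability aggregates the
# karma of its developers.

def developer_karma(votes, projects_active, w_votes=1.0, w_activity=0.5):
    # hypothetical weights: votes count fully, activity at half weight
    return w_votes * votes + w_activity * projects_active

def project_trustability(developers):
    """developers: list of (votes, projects_active) pairs."""
    if not developers:
        return 0.0
    return sum(developer_karma(v, a) for v, a in developers) / len(developers)

score = project_trustability([(10, 4), (2, 1)])
```

Projects would then be ranked on this scale alongside ordinary relevance scores in the search results.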

  2. Demeter, persephone, and the search for emergence in agent-based models.

    Energy Technology Data Exchange (ETDEWEB)

    North, M. J.; Howe, T. R.; Collier, N. T.; Vos, J. R.; Decision and Information Sciences; Univ. of Chicago; PantaRei Corp.; Univ. of Illinois

    2006-01-01

    In Greek mythology, the earth goddess Demeter was unable to find her daughter Persephone after Persephone was abducted by Hades, the god of the underworld. Demeter is said to have embarked on a long and frustrating, but ultimately successful, search to find her daughter. Unfortunately, long and frustrating searches are not confined to Greek mythology. In modern times, agent-based modelers often face similar troubles when searching for agents that are to be connected to one another and when seeking appropriate target agents while defining agent behaviors. The result is a 'search for emergence' in that many emergent or potentially emergent behaviors in agent-based models of complex adaptive systems either implicitly or explicitly require search functions. This paper considers a new nested querying approach to simplifying such agent-based modeling and multi-agent simulation search problems.

  3. Searches for physics beyond the Standard Model using jet-based resonances with the ATLAS Detector

    CERN Document Server

    Frate, Meghan; The ATLAS collaboration

    2016-01-01

    Run 2 of the LHC, with its increased center-of-mass energy, offers an unprecedented opportunity to discover physics beyond the Standard Model. One interesting possibility for conducting such searches is to use resonances based on jets. The latest search results from the ATLAS experiment, based on either inclusive or heavy-flavour jets, will be presented.

  4. A New Tool for Collaborative Video Search via Content-based Retrieval and Visual Inspection

    NARCIS (Netherlands)

    Hürst, W.O.; Ip Vai Ching, Algernon; Hudelist, Marco A.; Primus, Manfred J.; Schoeffmann, Klaus; Beecks, Chrisitan

    2016-01-01

    We present a new approach for collaborative video search and video browsing relying on a combination of traditional, indexbased video retrieval complemented with large-scale human-based visual inspection. In particular, a traditional PC interface is used for query-based search using advanced indexin

  5. A New Cross Based Gradient Descent Search Algorithm for Block Matching in MPEG-4 Encoder

    Institute of Scientific and Technical Information of China (English)

    WANG Zhenzhou; LI Guiling

    2003-01-01

    Motion estimation is an important part of the Moving Pictures Expert Group-4 (MPEG-4) encoder, due to its significant impact on the bit rate and the output quality of the encoded sequence. Unfortunately, this feature occupies a significant part of the encoding time, especially when using the straightforward Full Search algorithm. For frame-based video encoding, many fast algorithms have been proposed, which have proved to be efficient in encoding. In this paper we propose a new algorithm named Cross-based gradient descent search (CBGDS), which is significantly faster than FS and gives similar quality of the output sequence. At the same time, we compare our algorithm with some other algorithms, such as Three step search (TSS), Improved three step search (ITSS), New three step search (NTSS), Four step search (4SS), Diamond search (DS), Block based gradient descent search (BBGDS) and Cellular search (CS). As the experimental results show, our algorithm has advantages over the others. For object-based video encoding, most of the existing fast algorithms are not suitable because arbitrarily shaped objects have more local minima. So we incorporate the alpha information and propose a new algorithm, which is compatible with the previously proposed efficient motion estimation method for arbitrarily shaped video objects.

  6. A quantum search algorithm based on partial adiabatic evolution

    Institute of Scientific and Technical Information of China (English)

    Zhang Ying-Yu; Hu He-Ping; Lu Song-Feng

    2011-01-01

    This paper presents and implements a specified partial adiabatic search algorithm on a quantum circuit. It studies the minimum energy gap between the first excited state and the ground state of the system Hamiltonian and it finds that, in the case of M=1, the algorithm has the same performance as the local adiabatic algorithm. However, the algorithm evolves globally only within a small interval, which implies that it keeps the advantages of global adiabatic algorithms without losing the speedup of the local adiabatic search algorithm.

  7. A Community-Based Event Delivery Protocol in Publish/Subscribe Systems for Delay Tolerant Sensor Networks

    Directory of Open Access Journals (Sweden)

    Haigang Gong

    2009-09-01

    Full Text Available The basic operation of a Delay Tolerant Sensor Network (DTSN) is to finish pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extension of Pub/Sub systems in DTSNs has become a promising research topic. However, due to the unique frequent partitioning characteristic of DTSNs, extension of a Pub/Sub system in a DTSN is a considerably difficult and challenging problem, and there are no good solutions to this problem in published works. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several unchanged communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events in a community are delivered to mobile subscribers once a subscriber comes into the community, for improving the data delivery ratio. The queue management employs both the event successful delivery time and the event survival time to decide whether an event should be delivered or dropped for minimizing the transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.
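The drop-or-deliver decision in such queue management might be sketched as a simple deadline test; the field names and the timing model are illustrative assumptions, not CED's actual policy:

```python
# Deadline-based drop decision sketch: an event is kept while its
# expected delivery time still fits within its remaining survival
# time; otherwise it is dropped to save transmissions.

def manage_queue(queue, now):
    """queue: events with an expected delivery delay and an expiry time."""
    kept, dropped = [], []
    for event in queue:
        if now + event["expected_delivery"] <= event["expires_at"]:
            kept.append(event)
        else:
            dropped.append(event)          # would expire before delivery
    return kept, dropped

events = [
    {"id": 1, "expected_delivery": 5.0, "expires_at": 20.0},
    {"id": 2, "expected_delivery": 30.0, "expires_at": 25.0},
]
kept, dropped = manage_queue(events, now=10.0)
```

In a DTSN the expected delivery delay would itself be estimated from observed community contact patterns.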

  8. A community-based event delivery protocol in publish/subscribe systems for delay tolerant sensor networks.

    Science.gov (United States)

    Liu, Nianbo; Liu, Ming; Zhu, Jinqi; Gong, Haigang

    2009-01-01

    The basic operation of a Delay Tolerant Sensor Network (DTSN) is to finish pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extension of Pub/Sub systems in DTSNs has become a promising research topic. However, due to the unique frequent partitioning characteristic of DTSNs, extension of a Pub/Sub system in a DTSN is a considerably difficult and challenging problem, and there are no good solutions to this problem in published works. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several unchanged communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events in a community are delivered to mobile subscribers once a subscriber comes into the community, for improving the data delivery ratio. The queue management employs both the event successful delivery time and the event survival time to decide whether an event should be delivered or dropped for minimizing the transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.

  9. Capacitated Dynamic Facility Location Problem Based on Tabu Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    KUANG Yi-jun; ZHU Ke-jun

    2007-01-01

    The facility location problem is a kind of NP-hard combinatorial problem. Considering ever-changing demand sites, demand quantities and releasing costs, we formulate a model combining tabu search and FCM (fuzzy clustering method) to solve the capacitated dynamic facility location problem. Results show that the proposed method is effective.
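A generic tabu-search skeleton for a toy facility-location instance (without the FCM clustering component the paper adds) might look like this; the cost matrix and swap-move neighbourhood are illustrative:

```python
# Tabu-search skeleton: choose k facility sites minimising total
# assignment cost, forbidding recently visited site sets for a few
# iterations so the search can escape local minima.

def total_cost(open_sites, demand_cost):
    # each demand point is served by its cheapest open facility
    return sum(min(row[j] for j in open_sites) for row in demand_cost)

def tabu_search(demand_cost, k, iters=20, tenure=3):
    n = len(demand_cost[0])
    current = frozenset(range(k))
    best, best_cost = current, total_cost(current, demand_cost)
    tabu = []
    for _ in range(iters):
        moves = [frozenset((current - {i}) | {j})
                 for i in current for j in range(n) if j not in current]
        moves = [m for m in moves if m not in tabu]
        if not moves:
            break
        current = min(moves, key=lambda m: total_cost(m, demand_cost))
        tabu.append(current)                 # recently visited sets are tabu
        tabu = tabu[-tenure:]
        c = total_cost(current, demand_cost)
        if c < best_cost:
            best, best_cost = current, c
    return sorted(best), best_cost

# Rows are demand points, columns are candidate sites.
demand_cost = [[1, 9, 9],
               [9, 1, 9],
               [9, 9, 1]]
sites, cost = tabu_search(demand_cost, k=2)
```

The tabu list is what distinguishes this from plain hill climbing: the best non-tabu move is always taken, even if it worsens the current solution.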

  10. Balancing thread based navigation for targeted video search

    NARCIS (Netherlands)

    de Rooij, O.; Snoek, C.G.M.; Worring, M.

    2008-01-01

    Various query methods for video search exist. Because of the semantic gap each method has its limitations. We argue that for effective retrieval query methods need to be combined at retrieval time. However, switching query methods often involves a change in query and browsing interface, which puts a

  11. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    information. These features along with additional information such as the URL location and the date of index procedure are stored in a database. The user can access and search this indexed content through the Web with an advanced and user friendly interface. The output of the system is a set of links...

  12. Swarm Robots Search for Multiple Targets Based on an Improved Grouping Strategy.

    Science.gov (United States)

    Tang, Qirong; Ding, Lu; Yu, Fangchao; Zhang, Yuan; Li, Yinghao; Tu, Haibo

    2017-03-14

    Collaborative search for multiple targets by swarm robots in unknown environments is addressed in this paper. An improved grouping strategy based on constriction-factor Particle Swarm Optimization is proposed. Robots are grouped under this strategy after several iterations of stochastic movements, taking into account the influence range of targets and the environmental information they have sensed. The group structure may change dynamically, and each group focuses on searching for one target. All targets are expected to be found eventually. Obstacle avoidance is considered during the search process. Simulations compared with a previous method demonstrate the adaptability, accuracy and efficiency of the proposed strategy in searching for multiple targets.
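The constriction-factor velocity update at the heart of such a strategy can be sketched for a small 1-D swarm; the grouping and obstacle-avoidance logic are omitted, and the quadratic target function is a toy stand-in for a sensed target signal:

```python
import random

# Constriction-factor PSO sketch: with c1 = c2 = 2.05 (phi > 4), the
# constriction coefficient chi ~ 0.7298 damps velocities so the swarm
# converges without an explicit velocity clamp.

def constriction_chi(c1=2.05, c2=2.05):
    phi = c1 + c2                          # formula requires phi > 4
    return 2.0 / abs(2.0 - phi - (phi * phi - 4.0 * phi) ** 0.5)

def pso_step(x, v, pbest, gbest, rng, c1=2.05, c2=2.05):
    chi = constriction_chi(c1, c2)
    v = chi * (v + c1 * rng.random() * (pbest - x)
                 + c2 * rng.random() * (gbest - x))
    return x + v, v

rng = random.Random(1)
f = lambda x: (x - 5.0) ** 2               # toy target at x = 5
xs = [rng.uniform(-10.0, 10.0) for _ in range(5)]
vs = [0.0] * 5
pbest = xs[:]
gbest = min(xs, key=f)
start_err = f(gbest)
for _ in range(200):
    for i in range(5):
        xs[i], vs[i] = pso_step(xs[i], vs[i], pbest[i], gbest, rng)
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=f)
```

In the robot-swarm setting, each group would run an update of this shape toward its own target estimate rather than a shared global best.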

  13. Designing a soft preference based search interface for the housing market

    NARCIS (Netherlands)

    Oudshoorn, S.K.

    2011-01-01

    Websites aiming to assist users in finding a new house are becoming increasingly popular. Finding a potential relevant house is based on the users’ search criteria and the ability to define these criteria in an easy-to-use search user interface. Due to hard constraints, over-specification is one of

  14. New Diamond Block Based Gradient Descent Search Algorithm for Motion Estimation in the MPEG-4 Encoder

    Institute of Scientific and Technical Information of China (English)

    WANG Zhen-zhou; LI Gui-ling

    2003-01-01

    Motion estimation is an important part of the MPEG-4 encoder, due to its significant impact on the bit rate and the output quality of the encoded sequence. Unfortunately, this feature takes a significant part of the encoding time, especially when the straightforward full search (FS) algorithm is used. In this paper, a new algorithm named diamond block based gradient descent search (DBBGDS), which is significantly faster than FS and gives similar quality of the output sequence, is proposed. At the same time, some other algorithms, such as three step search (TSS), improved three step search (ITSS), new three step search (NTSS), four step search (4SS), cellular search (CS), diamond search (DS) and block based gradient descent search (BBGDS), are adopted and compared with DBBGDS. As the experimental results show, DBBGDS has its own advantages. Although DS has been adopted by the MPEG-4 VM, its output sequence quality is worse than that of the proposed algorithm, while its complexity is similar. Compared with BBGDS, the proposed algorithm can achieve better output quality.

  15. Cellular Phone Towers, parcel data base attribute, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cellular Phone Towers dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It is...

  16. Commercial Properties, parcel data base attribute, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Commercial Properties dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It is...

  17. Biomedical phantoms. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-10-01

    The bibliography contains citations concerning the design, development, construction, and evaluation of various anthropomorphic phantoms: mathematical or physical models or constructs simulating human tissue which are used in radiotherapy and diagnostic radiology. The radiation characteristics of phantom materials are addressed, simulating human body tissue, muscles, organs, bones, and skin. (Contains a minimum of 112 citations and includes a subject term index and title list.)

  18. Radioactive waste disposal: Waste Isolation Pilot Plants (WIPP). (Latest citations from the NTIS data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-04-01

    The bibliography contains citations concerning the Waste Isolation Pilot Plant (WIPP), a geologic repository located in New Mexico for transuranic wastes generated by the U.S. Government. Articles follow the development of the program from initial site selection and characterization through construction and testing, and examine research programs on environmental impacts, structural design, and radionuclide landfill gases. Existing plants and facilities, pilot plants, migration, rock mechanics, economics, regulations, and transport of wastes to the site are also included. The Salt Repository Project and the Crystalline Repository Project are referenced in related bibliographies. (Contains 250 citations and includes a subject term index and title list.)

  19. The Search for Extension: 7 Steps to Help People Find Research-Based Information on the Internet

    Science.gov (United States)

    Hill, Paul; Rader, Heidi B.; Hino, Jeff

    2012-01-01

    For Extension's unbiased, research-based content to be found by people searching the Internet, it needs to be organized in a way conducive to the ranking criteria of a search engine. With proper web design and search engine optimization techniques, Extension's content can be found, recognized, and properly indexed by search engines and…

  20. A Fast Measurement based fixed-point Quantum Search Algorithm

    CERN Document Server

    Mani, Ashish

    2011-01-01

    The generic quantum search algorithm searches for a target entity in an unsorted database by repeatedly applying the canonical Grover quantum rotation transform to reach the vicinity of the target entity, represented by a basis state in the Hilbert space associated with the qubits. Thus, when the qubits are measured, there is a high probability of finding the target entity. However, the number of times the quantum rotation transform is to be applied to reach the vicinity of the target is a function of the number of target entities present in the unsorted database, which is generally unknown. A wrong estimate of the number of target entities can lead to overshooting or undershooting the targets, thus reducing the success probability. Some proposals have been made to overcome this limitation. These proposals either employ quantum counting to estimate the number of solutions or fixed-point schemes. This paper proposes a new scheme for stopping the application of quantum rotation transformation on reaching near the ...
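The dependence on the target count M can be made concrete with the standard Grover iteration count; this is textbook material, not the paper's fixed-point or measurement-based scheme:

```python
import math

# Grover iteration-count sketch: with N items and M targets, the
# rotation angle per iteration is theta = asin(sqrt(M / N)), and the
# success probability after k iterations is sin((2k + 1) * theta)^2.

def grover_iterations(n_items, n_targets):
    theta = math.asin(math.sqrt(n_targets / n_items))
    return round(math.pi / (4.0 * theta) - 0.5)   # (2k + 1) * theta ~ pi / 2

def success_probability(n_items, n_targets, iterations):
    theta = math.asin(math.sqrt(n_targets / n_items))
    return math.sin((2 * iterations + 1) * theta) ** 2

k = grover_iterations(1024, 1)
p = success_probability(1024, 1, k)
```

Because k depends on M, a wrong estimate of M shifts the rotation past (or short of) the peak of the sine, which is exactly the over/undershooting problem the abstract describes.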

  1. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  2. Defense Waste Processing Facility (DWPF): The vitrification of high-level nuclear waste. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-06-01

    The bibliography contains citations concerning a production-scale facility and the world's largest plant for the vitrification of high-level radioactive nuclear wastes (HLW) located in the United States. Initially based on the selection of borosilicate glass as the reference waste form, the citations present the history of the development including R&D projects and the actual construction of the production facility at the DOE Savannah River Plant (SRP). (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  3. Ontology-Based Information Behaviour to Improve Web Search

    Directory of Open Access Journals (Sweden)

    Silvia Calegari

    2010-10-01

    Full Text Available Web Search Engines provide a huge number of answers in response to a user query, many of which are not relevant, whereas some of the most relevant ones may not be found. In the literature several approaches have been proposed in order to help a user find the information relevant to his/her real needs on the Web. To achieve this goal the individual Information Behavior can be analyzed to 'keep track' of the user's interests. Keeping information is a type of Information Behavior, and in several works researchers have referred to it as the study of what people do during a search on the Web. Generally, the user's actions (e.g., how the user moves from one Web page to another, or her/his download of a document, etc.) are recorded in Web logs. This paper reports on research activities which aim to exploit the information extracted from Web logs (or query logs) in personalized user ontologies, with the objective of supporting the user in the process of discovering Web information relevant to her/his information needs. Personalized ontologies are used to improve the quality of Web search by applying two main techniques: query reformulation and re-ranking of query evaluation results. In this paper we analyze various methodologies presented in the literature aimed at using personalized ontologies, defined on the basis of the observation of Information Behaviour, to help the user find relevant information.

  4. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA Base Flood Elevations - line shapefile, Published in 2010, 1:2400 (1in=200ft) scale, Effingham County Board Of Commissioners.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:2400 (1in=200ft) scale, was produced all or in part from Published...

  5. A Fast and Adaptive Search Algorithm Based on Rood Pattern and Gradient Descent

    Science.gov (United States)

    Lin, Mu-Long; Yi, Qing-Ming; Shi, Min

    In order to achieve real-time performance in video coding, a fast and adaptive algorithm based on starting-search-point prediction and an early-termination strategy is proposed. It analyzes the center-bias and spatial-correlation properties of the motion vector field, and utilizes the respective characteristics of the block-based gradient descent search (BBGDS) and adaptive rood pattern search (ARPS) algorithms. The proposed algorithm adaptively chooses different search strategies according to the type of image, makes full use of the cross-image motion vector distribution characteristics, and optimizes the traditional ARPS algorithm. The experimental results show that the proposed algorithm is about 2.3∼9.2 times faster than Diamond Search (DS) and 1.2∼4.0 times faster than ARPS. The algorithm can meet the real-time demand without reducing image quality.
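The search-pattern logic can be illustrated with a toy block-matching routine: starting from the zero motion vector, probe a four-point rood (cross) around the current best and shrink the step when no probe improves the SAD cost. This is a simplified sketch in the spirit of the BBGDS/ARPS family, not the paper's exact adaptive algorithm:

```python
def sad(cur, ref, cx, cy, mvx, mvy, bs):
    """Sum of absolute differences between a block and its displaced match."""
    return sum(abs(cur[cy + y][cx + x] - ref[cy + y + mvy][cx + x + mvx])
               for y in range(bs) for x in range(bs))

def rood_search(cur, ref, cx, cy, bs, max_step):
    """Rood-pattern gradient descent: probe a cross around the current best
    motion vector, halving the step size whenever no probe improves SAD."""
    best = (0, 0)
    best_cost = sad(cur, ref, cx, cy, 0, 0, bs)
    step = max_step
    while step >= 1:
        moved = False
        for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            mvx, mvy = best[0] + dx, best[1] + dy
            # keep the displaced block inside the reference frame
            if not (0 <= cx + mvx and cx + mvx + bs <= len(ref[0])
                    and 0 <= cy + mvy and cy + mvy + bs <= len(ref)):
                continue
            cost = sad(cur, ref, cx, cy, mvx, mvy, bs)
            if cost < best_cost:
                best, best_cost, moved = (mvx, mvy), cost, True
        if not moved:
            step //= 2
    return best, best_cost
```

On a synthetic gradient frame the routine walks down the SAD surface to a zero-cost match in a handful of probes, which is the center-bias property the abstract exploits.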

  6. Information regarding structure and lightness based on phenomenal transparency influences the efficiency of visual search.

    Science.gov (United States)

    Mitsudo, Hiroyuki

    2003-01-01

    Phenomenal transparency reflects a process which makes it possible to recover the structure and lightness of overlapping objects from a fragmented image. This process was investigated using the visual-search paradigm. In three experiments, observers searched for a target that consisted of gray patches among a variable number of distractors, and the search efficiency was assessed. Experiments 1 and 2 showed that search efficiency was greatly improved when the target was distinctive with regard to structure, based on transparency. Experiment 3 showed that search efficiency was impaired when a target was not distinctive with regard to lightness (i.e. perceived reflectance), based on transparency. These results suggest that the shape and reflectance of overlapping objects, when accompanied by transparency, can be calculated in parallel across the visual field, and can be used as a guide for visual attention.

  7. Constructing Virtual Documents for Keyword Based Concept Search in Web Ontology

    Directory of Open Access Journals (Sweden)

    Sapna Paliwal

    2013-04-01

    Full Text Available Web ontologies are structural frameworks for organizing information on the Semantic Web and provide shared concepts. An ontology formally represents knowledge about a particular entity as a set of concepts within a particular domain on the Semantic Web. A web ontology helps to describe concepts within a domain and also enables semantic interoperability between two different applications through the Falcons concept search, which facilitates concept searching and ontology reuse. Constructing virtual documents enables keyword-based search in ontologies. The proposed method examines how a search engine helps users find ontologies in less time, so that their needs are satisfied. It combines supportive technologies with a new technique that constructs virtual documents of concepts for keyword-based search; based on a population scheme, it ranks the concepts and ontologies and generates structured snippets according to the query. We also report user feedback and a usability evaluation.

  8. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called ``bridging,'' which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that produced by the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and the search strategies.
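The bottom-up parsing that drives Synapse's rule generation is essentially CYK-style recognition over (extended) Chomsky normal form rules. A minimal CYK recognizer for plain CNF, with a hand-written grammar for {aⁿbⁿ} as the test case (the bridging and rule-set-search machinery are not reproduced):

```python
def cyk_parse(grammar, start, s):
    """CYK bottom-up recognition. grammar is a list of (lhs, rhs) pairs in
    CNF: rhs is either a 1-tuple terminal or a 2-tuple of nonterminals."""
    n = len(s)
    # table[i][j] holds the nonterminals deriving the substring s[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(s):
        for lhs, rhs in grammar:
            if rhs == (ch,):
                table[i][0].add(lhs)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(1, span):          # split point inside the span
                for lhs, rhs in grammar:
                    if (len(rhs) == 2 and rhs[0] in table[i][k - 1]
                            and rhs[1] in table[i + k][span - k - 1]):
                        table[i][span - 1].add(lhs)
    return start in table[0][n - 1]
```

The CNF grammar S→AY | AB, Y→SB, A→a, B→b generates exactly aⁿbⁿ; a rule learner in Synapse's style would have to synthesize rules until positive strings like "aabb" parse while negatives like "abab" do not.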

  9. A geometry-based image search engine for advanced RADARSAT-1/2 GIS applications

    Science.gov (United States)

    Kotamraju, Vinay; Rabus, Bernhard; Busler, Jennifer

    2012-06-01

    Space-borne Synthetic Aperture Radar (SAR) sensors, such as RADARSAT-1 and -2, enable a multitude of defense and security applications owing to their unique capabilities of cloud penetration, day/night imaging and multi-polarization imaging. As a result, advanced SAR image time series exploitation techniques such as Interferometric SAR (InSAR) and Radargrammetry are now routinely used in applications such as underground tunnel monitoring, infrastructure monitoring and DEM generation. Imaging geometry, as determined by the satellite orbit and imaged terrain, plays a critical role in the success of such techniques. This paper describes the architecture and the current status of development of a geometry-based search engine that allows the search and visualization of archived and future RADARSAT-1 and -2 images appropriate for a variety of advanced SAR techniques and applications. Key features of the search engine's scalable architecture include (a) interactive GIS-based visualization of the search results; (b) a client-server architecture for online access that produces up-to-date searches of the archive images and that can, in future, be extended to acquisition planning; (c) a technique-specific search mode, wherein an expert user explicitly sets search parameters to find appropriate images for advanced SAR techniques such as InSAR and Radargrammetry; (d) a future application-specific search mode, wherein all search parameters implicitly default to preset values according to the application of choice, such as tunnel monitoring, DEM generation and deformation mapping; (e) accurate baseline calculations for InSAR searches and optimum beam configuration for Radargrammetric searches; and (f) simulated quick-look images and technique-specific sensitivity maps in the future.

  10. Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.

    Science.gov (United States)

    Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min

    2013-12-01

    Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be semantically annotated sufficiently, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the time spent watching videos, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.

  11. Road and Street Centerlines, Originally based on TIGER then updated/improved from multiple sources, Published in 2007, Churchill County, NV.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — as of 2007. It is described as 'Originally based on TIGER then updated/improved from multiple sources'. Data by this publisher are often provided in State Plane...

  12. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, From FEMA, Published in 2007, 1:1200 (1in=100ft) scale, Town of Cary NC.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from LIDAR...

  13. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, flood plains, Published in 2008, 1:24000 (1in=2000ft) scale, Box Elder County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  14. Sequential search based on kriging: convergence analysis of some algorithms

    CERN Document Server

    Vazquez, Emmanuel

    2011-01-01

    Let $\mathcal{F}$ be a set of real-valued functions on a set $\mathcal{X}$ and let $S:\mathcal{F}\to\mathcal{G}$ be an arbitrary mapping. We consider the problem of making inference about $S(f)$, with $f\in\mathcal{F}$ unknown, from a finite set of pointwise evaluations of $f$. We are mainly interested in the problems of approximation and optimization. In this article, we make a brief review of results concerning average error bounds of Bayesian search methods that use a random process prior about $f$.

  15. Validation of metabolic pathway databases based on chemical substructure search.

    Science.gov (United States)

    Félix, Liliana; Valiente, Gabriel

    2007-09-01

    Metabolic pathway databases such as KEGG contain information on thousands of biochemical reactions drawn from the biomedical literature. Ensuring consistency of such large metabolic pathways is essential to their proper use. In this paper, we present a new method to determine the consistency of an important class of biochemical reactions. Our method exploits knowledge of the atomic rearrangement pattern in biochemical reactions to reduce the automatic atom mapping problem to a series of chemical substructure searches between the substrate and the product of a biochemical reaction. As an illustrative application, we describe the exhaustive validation of a substantial portion of the latest release of the KEGG LIGAND database.

  16. B-tree search reinforcement learning for model based intelligent agent

    Science.gov (United States)

    Bhuvaneswari, S.; Vignashwaran, R.

    2013-03-01

    Agents trained by learning techniques provide a powerful approximation of exact solutions compared with naive approaches. In this study, using B-trees with reinforcement learning, the data search for information retrieval is moderated to achieve accuracy with minimum search time. The impact of the variables and tactics applied in training is determined using reinforcement learning. Agents based on these techniques perform a satisfactory baseline and act as finite agents based on the predetermined model against competitors from the course.

  17. Ontological Approach for Effective Generation of Concept Based User Profiles to Personalize Search Results

    Directory of Open Access Journals (Sweden)

    R. S.D. Wahidabanu

    2012-01-01

    Full Text Available Problem statement: Ontological user profile generation is a semantic approach to deriving richer concept-based user profiles; it depends on the semantic relationships among concepts. This study focuses on using ontology to derive concept-oriented user profiles from user search queries and clicked documents. It proposes a topic ontology from which concept-based user profiles are derived more independently, making it possible to run search engine processes more efficiently. Approach: The process covers individual users' interests, the topical categories of those interests, and the relationships among the concepts. The proposed approach is based on a topic ontology for concept-based user profile generation from search engine logs. A spreading activation algorithm is used to optimize the relevance of search engine results. The topic ontology is constructed to identify user interests by assigning activation values and to explore the topical similarity of user preferences. Results: A spreading activation algorithm is proposed to update and maintain the interest scores. User interests may change over time, and this is reflected in the user profiles. According to profile changes, the search engine is personalized by assigning interest scores and weights to the topics. Conclusion: Experiments illustrate the efficacy of the proposed approach; with the help of the topic ontology, user preferences can be identified correctly. The approach improves the quality of search engine personalization by identifying the user's precise needs.
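The spreading-activation step can be sketched in a few lines: interest scores start at seed topics taken from the search log and diffuse along ontology links with a decay factor, so related topics accumulate smaller, derived scores. A toy version with an invented topic graph (the paper's exact update rule is not reproduced):

```python
def spread_activation(graph, seeds, decay=0.5, iterations=3):
    """Propagate interest scores from seed topics through an ontology graph.
    Each node passes a decayed, evenly split share of its activation to its
    neighbors on every iteration."""
    activation = dict(seeds)
    for _ in range(iterations):
        new = dict(activation)
        for node, score in activation.items():
            neighbors = graph.get(node, [])
            if not neighbors:
                continue
            share = decay * score / len(neighbors)
            for nb in neighbors:
                new[nb] = new.get(nb, 0.0) + share
        activation = new
    return activation
```

Seeding "sports" with score 1.0 lets its subtopics accumulate activation, and a second-hop topic such as a specific league receives a still smaller derived score, which is the ranking signal a personalized re-ranker would use.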

  18. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-08

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundreds of bases to hundreds of thousands of bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org.

  19. A Framework for Hierarchical Clustering Based Indexing in Search Engines

    Directory of Open Access Journals (Sweden)

    Parul Gupta

    2011-01-01

    Full Text Available Granting efficient and fast access to the index is a key issue for the performance of Web Search Engines. In order to enhance memory utilization and favor fast query resolution, WSEs use Inverted File (IF) indexes that consist of an array of posting lists, where each posting list is associated with a term and contains the term as well as the identifiers of the documents containing the term. Since the document identifiers are stored in sorted order, they can be stored as the differences between successive documents so as to reduce the size of the index. This paper describes a clustering algorithm that aims at partitioning the set of documents into ordered clusters so that the documents within the same cluster are similar and are assigned closer document identifiers. Thus the average value of the differences between successive documents is minimized and storage space is saved. The paper further presents the extension of this clustering algorithm to hierarchical clustering, in which similar clusters are clubbed to form a mega cluster and similar mega clusters are then combined to form a super cluster. The paper thus describes different levels of clustering, which optimize the search process by directing the search along a specific path from the higher levels of clustering to the lower levels, i.e. from super clusters to mega clusters, then to clusters and finally to the individual documents, so that the user gets the best possible matching results in the minimum possible time.
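The storage argument is easy to make concrete: with docIDs stored as gaps and a variable-byte code, the index shrinks exactly when similar documents (which share terms) receive nearby identifiers, which is what the clustering aims for. A minimal sketch:

```python
def gap_encode(doc_ids):
    """Store a sorted posting list as gaps between successive docIDs."""
    gaps, prev = [], 0
    for d in doc_ids:
        gaps.append(d - prev)
        prev = d
    return gaps

def vbyte(n):
    """Variable-byte encode one gap: 7 payload bits per byte, so smaller
    gaps take fewer bytes. The high bit marks the final byte."""
    out = []
    while True:
        out.insert(0, n & 0x7F)
        n >>= 7
        if n == 0:
            break
    out[-1] |= 0x80  # stop bit on the last byte
    return bytes(out)
```

A gap below 128 costs one byte while a gap of 300 costs two, so reassigning docIDs to keep similar documents adjacent directly reduces the encoded posting-list size.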

  20. Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG

    Science.gov (United States)

    Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu

    2016-12-01

    Applying path searching based on distribution network topology in setting software has shown good results, and a path searching method that accounts for DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies a path searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and according to the line load traversed by the search path and the important load integrated along the optimized search path, an optimized division scheme of planned islands is formed that uses each DG as a power source and is balanced against the local important load. Finally, COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of the automatic restoration program.
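The island-division step can be sketched as a breadth-first search outward from a DG bus that absorbs neighboring load nodes for as long as the DG capacity can still supply the accumulated load. A toy sketch with an invented feeder topology (the paper's load-balancing optimization is richer than this greedy rule):

```python
from collections import deque

def plan_island(grid, loads, source, capacity):
    """Greedy BFS island division: starting at a DG source bus, absorb
    adjacent load nodes while the DG can still carry the accumulated load."""
    island, supplied = [source], 0
    seen, queue = {source}, deque(grid.get(source, []))
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        if supplied + loads.get(node, 0) > capacity:
            continue  # skip nodes the DG cannot supply
        supplied += loads.get(node, 0)
        island.append(node)
        queue.extend(grid.get(node, []))
    return island, supplied
```

On a simple feeder chain the search stops exactly where the next load would exceed the DG capacity, yielding a self-sufficient planned island.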

  1. RSECM: Robust Search Engine using Context-based Mining for Educational Big Data

    Directory of Open Access Journals (Sweden)

    D. Pratiba

    2016-12-01

    Full Text Available With accelerating growth in the educational sector, aided by ICT and cloud-based services, there is a consistent rise in educational big data, for which storage and processing become the prime challenges. Although many recent attempts have used open-source frameworks such as Hadoop for storage, there are still reported issues in security management and data analysis. Hence, mining techniques have limited applicability for an upcoming search engine operating over unstructured educational data. The proposed system introduces a technique called RSECM, i.e. Robust Search Engine using Context-based Mining, which presents a novel archival and search engine. RSECM generates its own massive stream of educational big data and performs efficient search over it. Outcomes show that RSECM outperforms SQL-based approaches in faster retrieval of dynamic user-defined queries.

  2. Q-Learning-Based Adjustable Fixed-Phase Quantum Grover Search Algorithm

    Science.gov (United States)

    Guo, Ying; Shi, Wensha; Wang, Yijun; Hu, Jiankun

    2017-02-01

    We demonstrate that the rotation phase can be suitably chosen to increase the efficiency of the phase-based quantum search algorithm, leading to a dynamic balance between the number of iterations and the success probability of the fixed-phase quantum Grover search algorithm with Q-learning for a given number of solutions. In this search algorithm, the proposed Q-learning algorithm, which is in essence a model-free reinforcement learning strategy, performs a matching between the fraction of marked items λ and the rotation phase α. After establishing the policy function α = π(λ), we complete the fixed-phase Grover algorithm, where the phase parameter is selected via the learned policy. Simulation results show that the Q-learning-based Grover search algorithm (QLGA) requires fewer iterations and yields higher success probabilities. Compared with conventional Grover algorithms, it avoids locally optimal situations, thereby enabling the success probability to approach one.

  3. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search is implemented to find coarse services from the web, and the ontology reasoning is designed to refine those coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  4. A Comparison of Publishing Groups Based on Their Publishing Business in 2012-2014: Taking China South Publishing and Media and Northern United Publishing and Media as Examples

    Institute of Scientific and Technical Information of China (English)

    曹红梅

    2015-01-01

    At present, China is in a period of maintaining and further developing the achievements of its enterprise transformation and system reform. Comparing and analyzing the publishing groups after this transformation, so as to identify their respective strengths and shortcomings, has important value for promoting the further development of publishing groups in China. This paper therefore compares and analyzes two publishing groups, Northern United Publishing and Media and China South Publishing and Media, in terms of their general situation and their operation and management, and puts forward targeted suggestions for their operation and management.

  5. Keyword-based Ciphertext Search Algorithm under Cloud Storage

    Directory of Open Access Journals (Sweden)

    Ren Xunyi

    2016-01-01

    Full Text Available With the development of network storage services, cloud storage has the advantages of high scalability, low cost, unrestricted access and easy management. These advantages make more and more small and medium enterprises choose to outsource large quantities of data to a third party. This frees small and medium enterprises from the costs of construction and maintenance, so it has broad market prospects. But at present many cloud storage service providers cannot protect data security, which results in leakage of user data, so many users have to fall back on traditional storage methods. This has become one of the important factors hindering the development of cloud storage. In this article, a keyword index is established by extracting keywords from the ciphertext data. After that, the encrypted data and the encrypted index are uploaded to the cloud server together. Users obtain the related ciphertext by searching the encrypted index, which addresses the data leakage problem.
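The encrypted-index idea can be sketched with standard primitives: keywords are stored only as keyed HMAC tags, the client derives the same tag as a search trapdoor, and the server matches tags without ever seeing the plaintext words. A deliberately simplified sketch (real searchable-encryption schemes additionally encrypt the documents themselves and hide access patterns):

```python
import hmac
import hashlib

def _tag(key, word):
    """Deterministic keyed tag for a keyword; only the key holder can derive it."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(docs, key):
    """Build a tag -> doc-id index, so the server hosting the index
    never sees the plaintext keywords."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(_tag(key, word), set()).add(doc_id)
    return index

def search(index, key, word):
    """The client recomputes the tag (the trapdoor); the server matches it blindly."""
    return index.get(_tag(key, word), set())
```

Without the key, the stored tags reveal nothing about the keywords, yet a keyed query still retrieves the matching ciphertext identifiers.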

  6. Colorize magnetic nanoparticles using a search coil based testing method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kai; Wang, Yi; Feng, Yinglong; Yu, Lina; Wang, Jian-Ping, E-mail: jpwang@umn.edu

    2015-04-15

    Different magnetic nanoparticles (MNPs) possess unique spectral responses to an AC magnetic field, and we can use this specific magnetic property of MNPs as “colors” in detection. In this paper, a detection scheme for magnetic nanoparticle size distribution is demonstrated using an integrated MNP and search-coil detection system. A low-frequency (50 Hz) sinusoidal magnetic field is applied to drive the MNPs into the saturated region. Then a high-frequency sinusoidal field sweeping from 5 kHz to 35 kHz is applied in order to generate mixing-frequency signals, which are collected by a pair of balanced search coils. These harmonics are highly specific to the nonlinearity of the magnetization curve of the MNPs. Previous work focused on the amplitude and phase of the 3rd harmonic, or on the amplitude ratio of the 5th harmonic over the 3rd. Here we demonstrate the use of the amplitude and phase information of both the 3rd and 5th harmonics as magnetic “colors” of MNPs. It is found that this method effectively reduces the magnetic colorization error. - Highlights: • We demonstrate the use of the amplitude and phase information of both the 3rd and 5th harmonics as magnetic “colors” of magnetic nanoparticles (MNPs). • An easier and simpler way to calibrate amounts of MNPs was developed. • At the same concentration, an MNP solution with a larger average particle size induces a higher amplitude, and its amplitude changes greatly as the high frequency is swept. • At lower sweeping frequencies, the 5 samples have almost the same phase lag. As the sweeping frequency increases, the phase lag of large particles drops faster.
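Extracting the amplitude and phase of a chosen harmonic from the coil signal is a single-bin DFT at that frequency. A minimal sketch with a synthetic signal standing in for the search-coil voltage (the mixing-frequency demodulation of the real instrument is not reproduced):

```python
import math

def harmonic(signal, fs, f0, k):
    """Amplitude and phase of the k-th harmonic of fundamental f0 (Hz),
    estimated by correlating the sampled signal with a single DFT bin."""
    n = len(signal)
    w = 2 * math.pi * k * f0 / fs          # bin frequency in rad/sample
    re = sum(s * math.cos(w * i) for i, s in enumerate(signal))
    im = -sum(s * math.sin(w * i) for i, s in enumerate(signal))
    amp = 2 * math.hypot(re, im) / n
    phase = math.atan2(im, re)
    return amp, phase
```

Applied to a synthetic waveform containing 3rd and 5th harmonics of a 50 Hz drive, the routine recovers their amplitudes, which is the raw material for the amplitude/phase "color" comparison in the abstract.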

  7. The Library Publishing Coalition: organizing libraries to enhance scholarly publishing

    Directory of Open Access Journals (Sweden)

    Sarah Kalikman Lippincott

    2016-07-01

    Full Text Available Library-based publishing efforts are gaining traction in academic and research libraries across the world, primarily in response to perceived gaps in the scholarly publishing system. Though publishing is a new area of work for libraries, it is often a natural outgrowth of their existing infrastructure and skill sets, leveraging the institutional repository as publishing platform and repositioning librarians’ skills as information managers. For decades, these initiatives were primarily ad hoc and local, limiting the potential for library publishing to effect significant change. In 2013, over 60 academic and research libraries collectively founded the Library Publishing Coalition (LPC), a professional association expressly charged with facilitating knowledge sharing, collaboration and advocacy for this growing field. This article offers an overview of library publishing activity, primarily in the US, followed by an account of the creation and mission of the LPC, the first professional association dedicated wholly to the support of library publishers.

  8. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine is based on a client-server multi-layer multi-agent architecture and the principle of semantic web services, acquiring accurate and reliable HMSR information dynamically through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As results, semantic web service descriptions are presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices are presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine shows the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes.

  9. A dichotomous search-based heuristic for the three-dimensional sphere packing problem

    Directory of Open Access Journals (Sweden)

    Mhand Hifi

    2015-12-01

    Full Text Available In this paper, the three-dimensional sphere packing problem is solved by using a dichotomous search-based heuristic. An instance of the problem is defined by a set of $n$ unequal spheres and an object of fixed width and height and unlimited length. Each sphere is characterized by its radius, and the aim of the problem is to minimize the length of the object containing all spheres without overlapping. The proposed method is based upon beam search, in which three complementary phases are combined: (i) a greedy selection phase which determines a series of eligible search subspaces, (ii) a truncated tree search, using a width-beam search, that explores some promising paths, and (iii) a dichotomous search that diversifies the search. The performance of the proposed method is evaluated on benchmark instances taken from the literature, where its results are compared to those reached by some recent methods in the literature. The proposed method is competitive and yields promising results.
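The dichotomous phase is, at heart, a bisection over the container length with a feasibility oracle, relying on feasibility being monotone in the length. A miniature sketch with a deliberately trivial oracle (in the paper, the oracle is the greedy/beam packing itself):

```python
def min_length(is_feasible, lo, hi, eps=1e-6):
    """Dichotomous (bisection) search for the smallest container length L
    with is_feasible(L) true, assuming feasibility is monotone in L."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if is_feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

# Toy oracle (an assumption, not the paper's packing routine): spheres laid
# greedily in a single row fit iff the length covers the sum of diameters.
def row_feasible(radii, length):
    return sum(2 * r for r in radii) <= length
```

Each bisection step halves the interval, so the minimal feasible length is located to precision eps with logarithmically many calls to the (expensive) packing oracle.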

  10. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    OpenAIRE

    K.S. Kuppusamy,; Aghila, G.

    2012-01-01

    The results rendered by search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages, there is a need for enhanced result lists from search engines in order to cope up with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through user profile during the can...

  11. Multi-leg Searching by Adopting Graph-based Knowledge Representation

    Directory of Open Access Journals (Sweden)

    Siti Zarinah Mohd Yusof

    2011-01-01

    Full Text Available This research explores the development of a multi-leg searching concept by adopting graph-based knowledge representation. The research is aimed at proposing a searching concept that is capable of providing advanced information by retrieving not only directly but also continuously related information from a point. It applies the maximal join concept to merge multiple information networks for supporting the multi-leg searching process. Node and edge similarity concepts are also applied to determine transit nodes and alternative edges of the same route. A working prototype in the flight networks domain is developed to present an overview of the research.

  12. A nearest neighbor search algorithm of high-dimensional data based on sequential NPsim matrix

    Institute of Scientific and Technical Information of China (English)

    李文法

    2016-01-01

    Problems exist in similarity measurement and index tree construction which affect the performance of nearest neighbor search of high-dimensional data. The equidistance problem is solved using the NPsim function to calculate similarity, and a sequential NPsim matrix is built to improve indexing performance. Combining these innovations, a nearest neighbor search algorithm for high-dimensional data based on the sequential NPsim matrix is proposed and compared with nearest neighbor search algorithms based on KD-tree or SR-tree on the Munsell spectral data set. Experimental results show that the similarity of the proposed algorithm is better than that of the other algorithms, and its searching speed is thousands of times faster. In addition, the slow construction of the sequential NPsim matrix can be accelerated by parallel computing.

  13. Formalizing dependency directed backtracking and explanation based learning in refinement search

    Energy Technology Data Exchange (ETDEWEB)

    Kambhampati, S. [Arizona State Univ., Tempe, AZ (United States)

    1996-12-31

    The ideas of dependency directed backtracking (DDB) and explanation based learning (EBL) have developed independently in constraint satisfaction, planning and problem solving communities. In this paper, I formalize and unify these ideas under the task-independent framework of refinement search, which can model the search strategies used in both planning and constraint satisfaction. I show that both DDB and EBL depend upon the common theory of explaining search failures, and regressing them to higher levels of the search tree. The relevant issues of importance include (a) how the failures are explained and (b) how many failure explanations are remembered. This task-independent understanding of DDB and EBL helps support cross-fertilization of ideas among Constraint Satisfaction, Planning and Explanation-Based Learning communities.

  14. A self-adaptive step Cuckoo search algorithm based on dimension by dimension improvement

    Directory of Open Access Journals (Sweden)

    Lu REN

    2015-10-01

    Full Text Available The choice of step length plays an important role in the convergence speed and precision of the Cuckoo search algorithm. In this paper, a self-adaptive step Cuckoo search algorithm based on dimension-by-dimension improvement is presented. First, since the step in the original self-adaptive step Cuckoo search algorithm is not updated when the current position of the nest is the optimal position, a simple modification of the step update is made. Second, an evaluation strategy based on dimension-by-dimension update is introduced into the modified self-adaptive step Cuckoo search algorithm. The experimental results show that the algorithm can balance the contradiction between global convergence ability and optimization precision. Moreover, the proposed algorithm has better convergence speed.
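    The dimension-by-dimension evaluation strategy mentioned in this record can be sketched as a per-coordinate greedy update with a shrinking step length. This is a minimal illustration under assumed details (uniform perturbations, geometric step decay, a sphere objective), not the paper's algorithm, which also uses Cuckoo search's nest abandonment and Lévy flights.

```python
import random

def dim_by_dim_update(x, f, step):
    """Perturb one coordinate at a time and keep the change only if it
    improves the objective f; `step` is the current step length."""
    best = f(x)
    for d in range(len(x)):
        trial = list(x)
        trial[d] += step * random.uniform(-1.0, 1.0)
        ft = f(trial)
        if ft < best:
            x, best = trial, ft
    return x, best

random.seed(0)
sphere = lambda v: sum(c * c for c in v)   # toy objective, optimum at 0
x, fx = [3.0, -2.0], sphere([3.0, -2.0])
step = 1.0
for _ in range(200):
    x, fx = dim_by_dim_update(x, sphere, step)
    step *= 0.97          # self-adaptive flavor: shrink step as search converges
```

Since changes are accepted only on improvement, the objective value is non-increasing across iterations.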

  15. Minimum Distortion Direction Prediction-based Fast Half-pixel Motion Vector Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Hai-yan; ZHANG Qi-shan

    2005-01-01

    A novel fast half-pixel motion vector search algorithm based on minimum distortion direction prediction is proposed, which can considerably reduce the computation load of half-pixel search. Based on the single-valley characteristic of the half-pixel error matching function inside the search grid, the minimum distortion direction is predicted with the help of comparisons among the sum of absolute difference (SAD) values of the four integer-pixel points around the integer-pixel motion vector. The experimental results reveal that, for all kinds of video sequences, the proposed algorithm obtains almost the same video quality as the half-pixel full search algorithm while decreasing the computation cost by more than 66%.

  16. A reliability measure of protein-protein interactions and a reliability measure-based search engine.

    Science.gov (United States)

    Park, Byungkyu; Han, Kyungsook

    2010-02-01

    Many methods developed for estimating the reliability of protein-protein interactions are based on the topology of protein-protein interaction networks. This paper describes a new reliability measure for protein-protein interactions, which does not rely on the topology of protein interaction networks, but expresses biological information on functional roles, sub-cellular localisations and protein classes as a scoring schema. The new measure is useful for filtering many spurious interactions, as well as for estimating the reliability of protein interaction data. In particular, the reliability measure can be used to search protein-protein interactions with the desired reliability in databases. The reliability-based search engine is available at http://yeast.hpid.org. We believe this is the first search engine for interacting proteins made available to the public. The search engine and the reliability measure of protein interactions should provide useful information for determining proteins to focus on.

  17. An Integer Programming-based Local Search for Large-scale Maximal Covering Problems

    Directory of Open Access Journals (Sweden)

    Junha Hwang

    2011-02-01

    Full Text Available The maximal covering problem (MCP) is classified as a linear integer optimization problem which can be effectively solved by integer programming techniques. However, as the problem size grows, integer programming requires excessive time to get an optimal solution. This paper suggests a method for applying integer programming-based local search (IPbLS) to solve large-scale maximal covering problems. IPbLS, a hybrid technique combining integer programming and local search, is a kind of local search using integer programming for neighbor generation. IPbLS itself is very effective for MCP. In addition, we improve the performance of IPbLS for MCP through problem reduction based on the current solution. Experimental results show that the proposed method considerably outperforms other local search techniques and plain integer programming.
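    For orientation on the underlying problem, the sketch below is a plain greedy baseline for maximal covering: pick k sets so as to cover as many elements as possible. It is only a minimal reference heuristic; IPbLS as described in the record generates neighbors with an integer program instead.

```python
def greedy_cover(sets, k):
    """Greedy baseline for the maximal covering problem: repeatedly
    pick the set that adds the most uncovered elements.
    Returns the chosen set indices and the covered elements."""
    chosen, covered = [], set()
    for _ in range(k):
        # gain of each unchosen set = number of newly covered elements
        gains = [len(s - covered) if i not in chosen else -1
                 for i, s in enumerate(sets)]
        best = gains.index(max(gains))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_cover(sets, 2)
```

On this toy instance two sets suffice to cover all six elements; a local search such as IPbLS would then try to improve on a start solution like this one.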

  18. Book on CPC Published

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    A book that answers 13 questions about how the Communist Party of China(CPC) works in China and why the Party has made great achievements in the past decades has been recently published by the Beijing-based New World Press.

  19. Model-based systems engineering in the execution of search and rescue operations

    OpenAIRE

    Hunt, Spencer S.

    2015-01-01

    Approved for public release; distribution is unlimited Complex systems engineering problems require robust modeling early in the design process in order to analyze crucial design requirements and interactions. This thesis emphasizes the need for such modeling through multiple model-based systems engineering techniques as they apply to the execution of search and rescue. Through the development of a design reference mission, this thesis illustrates how a search and rescue architecture can u...

  20. ProThes: Thesaurus-based Meta-Search Engine for a Specific Application Domain

    OpenAIRE

    Braslavski, P.; Alshanski, G.; Shishkin, A.; П.И. Браславский

    2004-01-01

    In this poster we introduce ProThes, a pilot meta-search engine (MSE) for a specific application domain. ProThes combines three approaches: meta-search, graphical user interface (GUI) for query specification, and thesaurus-based query techniques. ProThes attempts to employ domain-specific knowledge, which is represented by both a conceptual thesaurus and results ranking heuristics. Since the knowledge representation is separated from the MSE core, adjusting the system to a specific domain is ...

  1. Research of the test generation algorithm based on search state dominance for combinational circuit

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    On the basis of the EST (Equivalent STate hashing) algorithm, this paper investigates a test generation algorithm based on search state dominance for combinational circuits. Using the dominance relation of the E-frontier (evaluation frontier), we prove through examples that this algorithm can terminate unnecessary test-pattern search steps earlier than the EST algorithm, and so reduce test generation time. Simulation shows that the calculated test patterns detect the given faults.

  2. A study of the disciplinary structure of mechanics based on the titles of published journal articles in mechanics

    Institute of Scientific and Technical Information of China (English)

    CHEN; Lixin; LIU; Zeyuan; LIANG; Liming

    2010-01-01

    Scientometrics is an emerging academic field for the exploration of the structure of science through journal citation relations. However, this article studies the contents of subject-relevant journals rather than the citations contained therein, with the purpose of discovering the disciplinary structure of a given science, such as mechanics in our case. Based on the title wordings of 68,075 articles published in 66 mechanics journals, and using research tools such as word frequency analysis, multidimensional scaling analysis and factor analysis, this article analyzes the similarities and distinctions of those journals' contents in the subject field of mechanics. We first convert the complex internal relations of these mechanics journals into a small number of independent indicators. The group of selected mechanics journals is then classified by a cluster analysis. This article demonstrates that the relations of the research contents of mechanics can be shown in an intuitively recognizable map, and analyzed from a perspective that takes into account how the major branches of mechanics, such as solid mechanics, fluid mechanics, rational mechanics (including mathematical methods in mechanics), sound and vibration mechanics, and computational mechanics, are related to the main thematic tenet of our study. It is hoped that such an approach, buttressed with this new perspective, will enrich our means to explore the disciplinary structure of science and technology in general and mechanics in particular.

  3. SHOP: receptor-based scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Liljefors, Tommy; Sørensen, Morten D

    2009-01-01

    A new field-derived 3D method for receptor-based scaffold hopping, implemented in the software SHOP, is presented. Information from a protein-ligand complex is utilized to substitute a fragment of the ligand with another fragment from a database of synthetically accessible scaffolds. A GRID-based interaction profile of the receptor and geometrical descriptions of a ligand scaffold are used to obtain new scaffolds with different structural features that are able to replace the original scaffold in the protein-ligand complex. An enrichment study was successfully performed, verifying the ability of SHOP to find known active CDK2 scaffolds in a database. Additionally, SHOP was used for suggesting new inhibitors of p38 MAP kinase. Four p38 complexes were used to perform six scaffold searches. Several new scaffolds were suggested, and the resulting compounds were successfully docked into the query proteins.

  4. A Nonhomogeneous Cuckoo Search Algorithm Based on Quantum Mechanism for Real Parameter Optimization.

    Science.gov (United States)

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2017-02-01

    Cuckoo search (CS) algorithm is a nature-inspired search algorithm, in which all the individuals have identical search behaviors. However, this simple homogeneous search behavior is not always optimal to find the potential solution to a special problem, and it may trap the individuals into local regions leading to premature convergence. To overcome the drawback, this paper presents a new variant of CS algorithm with nonhomogeneous search strategies based on quantum mechanism to enhance search ability of the classical CS algorithm. Featured contributions in this paper include: 1) quantum-based strategy is developed for nonhomogeneous update laws and 2) we, for the first time, present a set of theoretical analyses on CS algorithm as well as the proposed algorithm, respectively, and conclude a set of parameter boundaries guaranteeing the convergence of the CS algorithm and the proposed algorithm. On 24 benchmark functions, we compare our method with five existing CS-based methods and other ten state-of-the-art algorithms. The numerical results demonstrate that the proposed algorithm is significantly better than the original CS algorithm and the rest of compared methods according to two nonparametric tests.

  5. Proposing LT based Search in PDM Systems for Better Information Retrieval

    CERN Document Server

    Ahmed, Zeeshan

    2011-01-01

    PDM systems contain and manage large amounts of data, but the search mechanism of most systems is not intelligent enough to process users' natural language queries to extract desired information. Currently available search mechanisms in almost all PDM systems are not very efficient, being based on old ways of searching for information: entering the relevant information into the respective fields of search forms to find specific information in attached repositories. Targeting this issue, thorough research was conducted in the fields of PDM systems and Language Technology. Concerning PDM systems, the research provides detailed information about PDM and PDM systems. Concerning Language Technology, it helps in implementing a search mechanism for PDM systems that finds the user's needed information by analyzing the user's natural language requests. The accomplished goal of this research was to support the field of PDM with a new proposition of a conceptual model for the imp...

  6. The NeuARt II system: a viewing tool for neuroanatomical data based on published neuroanatomical atlases

    Directory of Open Access Journals (Sweden)

    Cheng Wei-Cheng

    2006-12-01

    Full Text Available Abstract Background Anatomical studies of neural circuitry describing the basic wiring diagram of the brain produce intrinsically spatial, highly complex data of great value to the neuroscience community. Published neuroanatomical atlases provide a spatial framework for these studies. We have built an informatics framework based on these atlases for the representation of neuroanatomical knowledge. This framework not only captures current methods of anatomical data acquisition and analysis, it allows these studies to be collated, compared and synthesized within a single system. Results We have developed an atlas-viewing application ('NeuARt II') in the Java language with unique functional properties. These include the ability to use copyrighted atlases as templates within which users may view, save and retrieve data-maps and annotate them with volumetric delineations. NeuARt II also permits users to view multiple levels on multiple atlases at once. Each data-map in this system is simply a stack of vector images with one image per atlas level, so any set of accurate drawings made onto a supported atlas (in vector graphics format) could be uploaded into NeuARt II. Presently the database is populated with a corpus of high-quality neuroanatomical data from the laboratory of Dr Larry Swanson (consisting of 64 highly detailed maps of PHAL tract-tracing experiments, made up of 1039 separate drawings that were published in 27 primary research publications over 17 years). Herein we take selective examples from these data to demonstrate the features of NeuARt II. Our informatics tool permits users to browse, query and compare these maps. The NeuARt II tool operates within a bioinformatics knowledge management platform (called 'NeuroScholar') either as a standalone or a plug-in application. Conclusion Anatomical localization is fundamental to neuroscientific work and atlases provide an easily-understood framework that is widely used by neuroanatomists and non

  7. INTELLIGENT SEARCH ENGINE-BASED UNIVERSAL DESCRIPTION, DISCOVERY AND INTEGRATION FOR WEB SERVICE DISCOVERY

    Directory of Open Access Journals (Sweden)

    Tamilarasi Karuppiah

    2014-01-01

    Full Text Available The Web Services standard has been broadly acknowledged by industry and academic research along with the progress of web technology and e-business. An increasing number of web applications have been bundled as web services that can be published, located and invoked across the web. The importance of the issues regarding their publication and discovery attains a maximum as web services multiply and become more advanced and mutually dependent. With the intention of discovering web services in an effective manner within a minimum time period, this study proposes a UDDI with an intelligent search engine. To publish and discover web services, the web services are first published in the UDDI registry, and the published web services are then indexed. To improve the efficiency of web service discovery, the indexed web services are saved in an index database. The search query is compared with the index database to discover web services, and the discovered web services are delivered to the service customer. The way the web services are accessed is stored in a log file, which is then utilized to provide personalized web services to the user. The discovery of web services is enhanced significantly by the efficient exploring capability provided by the proposed system, and it is capable of providing the most appropriate web service. Universal Description, Discovery and Integration (UDDI).

  8. An Analysis of Literature Searching Anxiety in Evidence-Based Medicine Education

    Directory of Open Access Journals (Sweden)

    Hui-Chin Chang

    2014-01-01

    Full Text Available Introduction. Evidence-Based Medicine (EBM) is becoming a cornerstone of lifelong learning for healthcare personnel worldwide. This study aims to evaluate literature searching anxiety in graduate students practicing EBM. Method. The study participants were 48 graduate students who enrolled in the EBM course at a medical university in central Taiwan. Student's t-test, Pearson correlation, multivariate regression and interviews were used to evaluate the students' literature searching anxiety in the EBM course. The questionnaire was the Literature Searching Anxiety Rating Scale (LSARS). Results. The sources of anxiety disclosed were uncertainty in database selection, literature evaluation and selection, requests for technical assistance, use of computer programs, English, and EBM education programs. Class performance was negatively related to the LSARS score; however, the correlation was statistically insignificant after adjustment for gender, degree program, age category and experience of publication. Conclusion. This study helps in understanding the causes and the extent of anxiety in order to plan a better teaching program that improves users' searching skills and their capability of utilizing information, while providing user-friendly facilities for evidence searching. In short, we need to upgrade learners' searching skills and reduce their anxiety. We also need to stress the auxiliary teaching program for those with prevalent and profound anxiety during literature searching.

  9. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    Directory of Open Access Journals (Sweden)

    Maryam Akbari

    2012-10-01

    Full Text Available The present study analyzed the adoption process of search engines and meta search engines by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potentials and effective tools in search engine and meta search engine adoption among University of Isfahan users. The research method was a descriptive survey. The population comprised all postgraduate students of the University of Isfahan; 351 students were selected as the sample and categorized by a stratified random sampling method. A questionnaire was used for collecting data. The collected data were analyzed using SPSS 16 with both descriptive and analytic statistics. For descriptive statistics, frequency, percentage and mean were used, while for analytic statistics the t-test and the Kruskal-Wallis non-parametric test (H-test) were used. The findings of the t-test and Kruskal-Wallis test indicated that the mean of search engine and meta search engine adoption did not show statistical differences across gender, level of education and faculty. The special search engine adoption process differed in terms of gender but not in terms of level of education or faculty. Other results indicated that among general search engines, Google had the highest adoption rate; among special search engines, Google Scholar, and among meta search engines, Mamma had the highest adoption rates. Findings also showed that friends played an important role in how students adopted general search engines, while professors played an important role in how students adopted special search engines and meta search engines. Moreover, results showed that the place where students became most acquainted with search engines and meta search engines was the university. The findings showed that the adoption-rate curve was not normal and was not S-shaped. Moreover

  10. Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

    CERN Document Server

    Guez, Arthur; Dayan, Peter

    2012-01-01

    Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty. In this setting, a Bayes-optimal policy captures the ideal trade-off between exploration and exploitation. Unfortunately, finding Bayes-optimal policies is notoriously taxing due to the enormous search space in the augmented belief-state MDP. In this paper we exploit recent advances in sample-based planning, based on Monte-Carlo tree search, to introduce a tractable method for approximate Bayes-optimal planning. Unlike prior work in this area, we avoid expensive applications of Bayes rule within the search tree, by lazily sampling models from the current beliefs. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems.
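    The lazy model-sampling idea in this record, drawing a complete model from the current beliefs instead of applying Bayes' rule inside the search tree, can be illustrated in a deliberately tiny setting. The sketch below uses a hypothetical Bernoulli-arm example with a Beta posterior; it shows only the root-sampling step, not the paper's Monte-Carlo tree search.

```python
import random

def bayes_sampled_value(alpha, beta_, pulls, rng):
    """Estimate an arm's value by root sampling: draw a success
    probability from the Beta(alpha, beta_) posterior once, then
    simulate `pulls` Bernoulli outcomes under that sampled model."""
    p = rng.betavariate(alpha, beta_)     # one model sample from the beliefs
    return sum(1 for _ in range(pulls) if rng.random() < p)

rng = random.Random(42)
# posterior Beta(8, 2), e.g. 7 successes in 8 pulls under a Beta(1, 1) prior
estimates = [bayes_sampled_value(8, 2, 10, rng) for _ in range(500)]
mean_value = sum(estimates) / len(estimates)
```

Averaging over many sampled models approximates the Bayes value of the arm (here about 10 x 0.8 = 8 successes), which is the quantity a Bayes-adaptive planner trades off against exploration.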

  11. Sensitive Ground-based Search for Sulfuretted Species on Mars

    Science.gov (United States)

    Khayat, Alain; Villanueva, G. L.; Mumma, M. J.; Riesen, T. E.; Tokunaga, A. T.

    2012-10-01

    We searched for active release of gases on Mars during mid Northern Spring and early Northern Summer seasons, between Ls= 34° and Ls= 110°. The targeted volcanic areas, Tharsis and Syrtis Major, were observed during the interval 23 Nov. 2011 to 13 May 2012, using the high resolution infrared spectrometer (CSHELL) on NASA's Infrared Telescope Facility (NASA/IRTF) and the ultra-high resolution heterodyne receiver (Barney) at the Caltech Submillimeter Observatory (CSO). The two main reservoirs of atmospheric sulfur on Mars are expected to be SO2 and H2S. Because these two species have relatively short photochemical lifetimes, 160 and 9 days respectively (Wong et al. 2004), they stand as powerful indicators of recent activity. Carbonyl sulfide (OCS) is the expected end-product of the reactions between sulfuretted species and other molecules in the Martian atmosphere. Our multi-band survey targeted SO2, SO and H2S at their rotational transitions at 346.523 GHz, 304.078 GHz and 300.505 GHz respectively, and OCS in its combination band (ν1+ν3) at 3.42 µm and its fundamental band (ν3) centered at 4.85 µm. The radiative transfer model used to derive abundance ratios for these species was validated by performing line-inversion retrievals on the carbon monoxide (CO) strong rotational (3-2) line at sub-mm wavelengths (rest frequency 345.796 GHz). Preliminary results and abundance ratios for SO2, H2S, SO, OCS and CO will be presented. We gratefully acknowledge support from the NASA Planetary Astronomy Program (AK, ATT, MJM), NASA Astrobiology Institute (MJM), NASA Planetary Atmospheres Program (GLV), and NSF grant number AST-0838261 to support graduate students at the CSO (AK). References: Wong, A.S., Atreya, S. K., Formisano, V., Encrenaz, T., Ignatiev, N.I., "Atmospheric photochemistry above possible martian hot spots", Advances in Space Research, 33 (2004) 2236-2239.

  12. Magnetic Flux Leakage Signal Inversion of Corrosive Flaws Based on Modified Genetic Local Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    HAN Wen-hua; FANG Ping; XIA Fei; XUE Fang

    2009-01-01

    In this paper, a modified genetic local search algorithm (MGLSA) is proposed. The proposed algorithm results from employing the simulated annealing technique to regulate the variance of the Gaussian mutation of the genetic local search algorithm (GLSA). Then, an MGLSA-based inverse algorithm is proposed for magnetic flux leakage (MFL) signal inversion of corrosive flaws, in which the MGLSA is used to solve the optimization problem in the MFL inverse problem. Experimental results demonstrate that the MGLSA-based inverse algorithm is more robust than the GLSA-based inverse algorithm in the presence of noise in the measured MFL signals.

  13. Query sensitive comparative summarization of search results using concept based segmentation

    CERN Document Server

    Chitra, P; Sarukesi, K

    2012-01-01

    Query sensitive summarization aims at providing the users with the summary of the contents of single or multiple web pages based on the search query. This paper proposes a novel idea of generating a comparative summary from a set of URLs from the search result. User selects a set of web page links from the search result produced by search engine. Comparative summary of these selected web sites is generated. This method makes use of HTML DOM tree structure of these web pages. HTML documents are segmented into set of concept blocks. Sentence score of each concept block is computed with respect to the query and feature keywords. The important sentences from the concept blocks of different web pages are extracted to compose the comparative summary on the fly. This system reduces the time and effort required for the user to browse various web sites to compare the information. The comparative summary of the contents would help the users in quick decision making.
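    The sentence-scoring step this record describes, scoring each sentence of a concept block against the query and feature keywords, can be sketched with a simple overlap score. The weights and tokenization below are illustrative assumptions, not values from the paper.

```python
def sentence_score(sentence, query_terms, feature_terms):
    """Score a sentence by term overlap with the query and with
    feature keywords; query matches are weighted higher (assumed
    weights 2.0 and 1.0, chosen for illustration only)."""
    words = set(sentence.lower().split())
    q = len(words & {t.lower() for t in query_terms})
    f = len(words & {t.lower() for t in feature_terms})
    return 2.0 * q + 1.0 * f

sents = [
    "Beam search explores promising paths",
    "Sphere packing minimizes container length",
]
# rank candidate sentences for the comparative summary, best first
ranked = sorted(sents, key=lambda s: -sentence_score(s, ["beam", "search"], ["paths"]))
```

The highest-scoring sentences from each page's concept blocks would then be concatenated to form the comparative summary on the fly.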

  14. POLYNOMIAL MODEL BASED FAST FRACTIONAL PIXEL SEARCH ALGORITHM FOR H.264/AVC

    Institute of Scientific and Technical Information of China (English)

    Xi Yinglai; Hao Chongyang; Lai Changcai

    2006-01-01

    This paper proposes a novel fast fractional pixel search algorithm based on a polynomial model. From an analysis of the distribution characteristics of the motion compensation error surface inside the fractional pixel search window, the matching error is fitted with a parabola along the horizontal and vertical directions respectively. The proposed search strategy needs to check only 6 points rather than the 16 or 24 points used in the Hierarchical Fractional Pel Search (HFPS) algorithm for 1/4-pel and 1/8-pel Motion Estimation (ME). The experimental results show that the proposed algorithm keeps the rate-distortion performance while reducing the computation load to a large extent compared with the HFPS algorithm.
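    The per-axis parabola fit this record relies on has a standard closed form: given the matching error at three equally spaced positions, the minimum of the fitted parabola gives the fractional offset. This is the generic interpolation formula, not code from the paper.

```python
def parabola_min(sad_left, sad_center, sad_right):
    """Fit a parabola through three equally spaced SAD samples at
    offsets -1, 0, +1 (in pixels) and return the fractional offset
    of its minimum, clamped to the half-pel range."""
    denom = sad_left - 2.0 * sad_center + sad_right
    if denom <= 0:            # degenerate or non-convex fit: stay at center
        return 0.0
    offset = 0.5 * (sad_left - sad_right) / denom
    return max(-0.5, min(0.5, offset))

# asymmetric errors pull the estimated minimum toward the smaller side
offset = parabola_min(4.0, 3.0, 8.0)
```

Applying this once horizontally and once vertically is what lets a fractional-pel search check a handful of points instead of a full sub-pixel grid.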

  15. Improved methods for scheduling flexible manufacturing systems based on Petri nets and heuristic search

    Institute of Scientific and Technical Information of China (English)

    Bo HUANG; Yamin SUN

    2005-01-01

    This paper proposes and evaluates two improved Petri net (PN)-based hybrid search strategies and their applications to flexible manufacturing system (FMS) scheduling. The algorithms proposed in some previous papers, which combine PN simulation capabilities with A* heuristic search within the PN reachability graph, may not find an optimum solution even with an admissible heuristic function. To remedy these defects, an improved heuristic search strategy is proposed, which adopts a different method for selecting the promising markings and preserves the admissibility of the algorithm. To speed up the search process, another algorithm is also proposed, which invokes faster termination conditions and still guarantees that the solution found is optimum. The scheduling results are compared between our algorithms and the previous methods on a simple FMS. They are also applied and evaluated on a set of randomly generated FMSs with such characteristics as multiple resources and alternative routes.

  16. Algorithm Based on Taboo Search and Shifting Bottleneck for Job Shop Scheduling

    Institute of Scientific and Technical Information of China (English)

    Wen-Qi Huang; Zhi Huang

    2004-01-01

    In this paper, a computational effective heuristic method for solving the minimum makespan problem of job shop scheduling is presented. It is based on taboo search procedure and on the shifting bottleneck procedure used to jump out of the trap of the taboo search procedure. A key point of the algorithm is that in the taboo search procedure two taboo lists are used to forbid two kinds of reversals of arcs, which is a new and effective way in taboo search methods for job shop scheduling. Computational experiments on a set of benchmark problem instances show that, in several cases, the approach, in reasonable time, yields better solutions than the other heuristic procedures discussed in the literature.
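    The core taboo-search loop underlying this record can be sketched generically: keep a short-term memory of recently visited solutions and always move to the best non-taboo neighbor, which allows escaping local minima. The skeleton below is illustrative only; the paper's method additionally uses two taboo lists over arc reversals and a shifting-bottleneck restart.

```python
def tabu_search(cost, start, neighbors, iters=50, tenure=5):
    """Generic taboo-search loop: `tabu` holds recently visited
    solutions, which may not be revisited until they expire."""
    current, best = start, start
    tabu = []
    for _ in range(iters):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)   # best admissible move, even if worse
        tabu.append(current)
        if len(tabu) > tenure:           # expire the oldest taboo entry
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best

# toy 1-D landscape: minimize (x - 7)^2 over the integers via +/-1 moves
f = lambda x: (x - 7) ** 2
best = tabu_search(f, 0, lambda x: [x - 1, x + 1])
```

Note that the search keeps moving even after reaching the optimum; the taboo list merely prevents it from cycling, while `best` records the best solution seen.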

  17. Structure-Based Local Search Heuristics for Circuit-Level Boolean Satisfiability

    CERN Document Server

    Belov, Anton

    2011-01-01

    This work focuses on improving the state of the art in stochastic local search (SLS) for solving Boolean satisfiability (SAT) instances arising from real-world industrial SAT application domains. The recently introduced SLS method CRSat has been shown to noticeably improve on previously suggested SLS techniques in solving such real-world instances by combining justification-based local search with limited Boolean constraint propagation on the non-clausal formula representation of Boolean circuits. In this work, we study possibilities of further improving the performance of CRSat by exploiting circuit-level structural knowledge to develop new search heuristics. To this end, we introduce and experimentally evaluate a variety of search heuristics, many of which are motivated by circuit-level heuristics originally developed in completely different contexts, e.g., for electronic design automation applications. To the best of our knowledge, most of the heuristics are novel in the context of SLS for S...

  18. Two-grade search mechanism based motion planning of a three-limbed robot

    Institute of Scientific and Technical Information of China (English)

    Pang Ming; Zang Xizhe; Yan Jihong; Zhao Jie

    2008-01-01

    A novel three-limbed robot was described and its motion planning method discussed. After an introduction to the robot's mechanical structure and the human-robot interface, a motion planning method based on a two-grade search mechanism was proposed. The first-grade search, using a genetic algorithm, tries to find an optimized target position and orientation for the three-limbed robot. The second-grade search, using virtual compliance, tries to avoid collisions between the three-limbed robot and obstacles in a dynamic environment. Experiments show the feasibility of the two-grade search mechanism and prove that the proposed motion planning method can solve the motion planning problem of the redundant three-limbed robot without the deficiencies of the traditional genetic algorithm.

  19. PADB : Published Association Database

    Directory of Open Access Journals (Sweden)

    Lee Jin-Sung

    2007-09-01

    Full Text Available Abstract Background Although molecular pathway information and the International HapMap Project data can help biomedical researchers to investigate the aetiology of complex diseases more effectively, such information is missing or insufficient in current genetic association databases. In addition, only a few of the environmental risk factors are included as gene-environment interactions, and the risk measures of associations are not indexed in any association databases. Description We have developed a published association database (PADB; http://www.medclue.com/padb) that includes both the genetic associations and the environmental risk factors available in the PubMed database. Each genetic risk factor is linked to a molecular pathway database and the HapMap database through human gene symbols identified in the abstracts. The risk measures, such as odds ratios or hazard ratios, are extracted automatically from the abstracts when available. Thus, users can review the association data sorted by the risk measures, and genetic associations can be grouped by human genes or molecular pathways. The search results can also be saved to tab-delimited text files for further sorting or analysis. Currently, PADB indexes more than 1,500,000 PubMed abstracts that include 3442 human genes, 461 molecular pathways and about 190,000 risk measures ranging from 0.00001 to 4878.9. Conclusion PADB is a unique online database of published associations that will serve as a novel and powerful resource for reviewing and interpreting the huge association datasets of complex human diseases.

  20. Web-based Image Search Engines

    Institute of Scientific and Technical Information of China (English)

    陈立娜

    2001-01-01

    The operating principle of Web-based image search engines is briefly described, and a detailed evaluation of several image search engines is made. Finally, the paper points out the deficiencies of present image search engines and their development trend.

  1. Design and Implementation of the Personalized Search Engine Based on the Improved Behavior of User Browsing

    Directory of Open Access Journals (Sweden)

    Wei-Chao Li

    2013-02-01

    Full Text Available An improved user profile based on user browsing behavior is proposed in this study. The user profile takes into account the user's web page browsing behaviors, the level of interest in keywords, and the user's short-term and long-term interests. The improved user profile is embedded in a personalized search engine system, whose basic framework and functional modules are described in detail in this study. A demonstration system, IUBPSES, was developed on the .NET platform. The results of simulation experiments indicate that retrieval with IUBPSES, based on the improved user profile, surpasses the current mainstream search engines. Directions for improvement and further research are proposed in the conclusion.

  2. Greedy-search based service location in P2P networks

    Institute of Scientific and Technical Information of China (English)

    Zhu Cheng; Liu Zhong; Zhang Weiming; Yang Dongsheng

    2005-01-01

    A model is built to analyze the performance of service location based on greedy search in P2P networks. Hops and the relative QoS index of the node found in a service location process are used to evaluate the performance, as well as the probability of locating the top 5% of nodes with the highest QoS level. Both model and simulation results show that the performance of greedy-search based service location improves significantly as the average degree of the network increases. It is found that, if changes of both the overlay topology and the QoS level of nodes can be ignored during a location process, greedy-search based service location has a high probability of finding nodes with relatively high QoS in a small number of hops in a large overlay network. Model extension under an arbitrary network degree distribution is also studied.
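    A small simulation sketch of the idea (an assumption-laden toy, not the paper's analytical model): each node forwards the query to its highest-QoS neighbour until no neighbour improves on the current node, and the overlay is a random graph with a chosen average degree.

```python
import random

def greedy_locate(adj, qos, start, max_hops=200):
    """Greedy service location: from `start`, repeatedly forward the
    query to the neighbour with the highest QoS; stop at a node whose
    QoS no neighbour exceeds (a local maximum).  Returns (node, hops)."""
    node, hops = start, 0
    for _ in range(max_hops):
        best = max(adj[node], key=lambda v: qos[v], default=node)
        if qos[best] <= qos[node]:
            break  # local maximum reached
        node, hops = best, hops + 1
    return node, hops

def random_network(n, degree, seed=0):
    """G(n, p)-style overlay with the given expected average degree,
    plus a random QoS level per node."""
    rng = random.Random(seed)
    p = degree / (n - 1)
    adj = {u: set() for u in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    qos = {u: rng.random() for u in range(n)}
    return adj, qos
```

    Re-running `greedy_locate` over networks of increasing average degree is one way to reproduce the qualitative trend the abstract reports.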

  3. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

    Full Text Available This paper presents the implementation of an optimal search strategy (OSS) in the verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult part of the overall assembly. The favourable influence of vibration and rotation movement on the compensation of tolerances was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameter. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitudes and frequencies of the robot gripper's vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified through simulation that the search strategy is suitable for situations with unexpected events due to uncertainties.

  4. Secondary eclipses in the CoRoT light curves: A homogeneous search based on Bayesian model selection

    CERN Document Server

    Parviainen, Hannu; Belmonte, Juan Antonio

    2012-01-01

    We aim to identify and characterize secondary eclipses in the original light curves of all published CoRoT planets using uniform detection and evaluation criteria. Our analysis is based on a Bayesian model selection between two competing models: one with and one without an eclipse signal. The search is carried out by mapping the Bayes factor in favor of the eclipse model as a function of the eclipse center time, after which the characterization of plausible eclipse candidates is done by estimating the posterior distributions of the eclipse model parameters using Markov Chain Monte Carlo. We discover statistically significant eclipse events for two planets, CoRoT-6b and CoRoT-11b, and for one brown dwarf, CoRoT-15b. We also find marginally significant eclipse events passing our plausibility criteria for CoRoT-3b, 13b, 18b, and 21b. The previously published CoRoT-1b and CoRoT-2b eclipses are also confirmed.

  5. The optimal time-frequency atom search based on a modified ant colony algorithm

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-feng; LI Yan-jun; YU Rui-xing; ZHANG Ke

    2008-01-01

    In this paper, a new optimal time-frequency atom search method based on a modified ant colony algorithm is proposed to improve the precision of the traditional methods. First, the discretization formula of the finite-length time-frequency atom is derived in detail. Second, a modified ant colony algorithm in continuous space is proposed. Finally, the optimal time-frequency atom search algorithm based on the modified ant colony algorithm is described in detail and a simulation experiment is carried out. The results indicate that the developed algorithm is valid and stable, and that its precision is higher than that of the traditional method.

  6. A Feature-Weighted Instance-Based Learner for Deep Web Search Interface Identification

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2013-02-01

    Full Text Available Determining whether a site has a search interface is a crucial priority for further research of deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experiment results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based learner gives better results than classical algorithms such as C4.5, random forest and KNN.

  7. A "cluster" based search scheme in peer-to-peer network

    Institute of Scientific and Technical Information of China (English)

    李振武; 杨舰; 史旭东; 白英彩

    2003-01-01

    This paper presents a "cluster" based search scheme in peer-to-peer networks. The idea is based on the fact that data distribution in an information society has a structured feature. We designed an algorithm to cluster peers that have similar interests. When receiving a query request, a peer will preferentially forward it to another peer which belongs to the same cluster and shares more similar interests. In this way search efficiency is remarkably improved, while good resilience against peer failure (the ability to withstand peer failure) is preserved.

  9. Free Energy-Based Conformational Search Algorithm Using the Movable Type Sampling Method.

    Science.gov (United States)

    Pan, Li-Li; Zheng, Zheng; Wang, Ting; Merz, Kenneth M

    2015-12-08

    In this article, we extend the movable type (MT) sampling method to molecular conformational searches (MT-CS) on the free energy surface of the molecule in question. Differing from traditional systematic and stochastic searching algorithms, this method uses Boltzmann energy information to facilitate the selection of the best conformations. The generated ensembles provided good coverage of the available conformational space including available crystal structures. Furthermore, our approach directly provides the solvation free energies and the relative gas and aqueous phase free energies for all generated conformers. The method is validated by a thorough analysis of thrombin ligands as well as against structures extracted from both the Protein Data Bank (PDB) and the Cambridge Structural Database (CSD). An in-depth comparison between OMEGA and MT-CS is presented to illustrate the differences between the two conformational searching strategies, i.e., energy-based versus free energy-based searching. These studies demonstrate that our MT-based ligand conformational search algorithm is a powerful approach to delineate the conformational ensembles of molecular species on free energy surfaces.

  10. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    Science.gov (United States)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or the task is solved unseen by the users while they are working with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms’ effectiveness is searching speed. The larger the shift of the pattern relative to the string in case of a mismatch between pattern and string characters, the higher the algorithm’s running speed. This article offers a combined algorithm developed on the basis of the well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching: Knuth-Morris-Pratt is based upon forward pattern matching, and Boyer-Moore upon backward pattern matching. By uniting these two algorithms, the combined algorithm acquires the larger of the two shifts in case of a character mismatch. The article provides an example which illustrates the results of the Boyer-Moore, Knuth-Morris-Pratt and combined algorithms’ work and shows the advantage of the latter in solving the string searching problem.
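    A hedged sketch of one plausible reading of the combination (not necessarily the article's exact algorithm): scan left to right as in Knuth-Morris-Pratt, and on a mismatch take the larger of the KMP failure-function shift and a Boyer-Moore-style bad-character shift. Both shifts are safe lower bounds on the next feasible alignment, so their maximum is also safe.

```python
def combined_search(pattern, text):
    """Find all occurrences of `pattern` in `text`, shifting by the
    larger of a KMP shift and a bad-character shift on each mismatch."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # KMP failure function: fail[j] = length of the longest proper
    # prefix of pattern[:j+1] that is also a suffix of it.
    fail = [0] * m
    k = 0
    for j in range(1, m):
        while k and pattern[j] != pattern[k]:
            k = fail[k - 1]
        if pattern[j] == pattern[k]:
            k += 1
        fail[j] = k
    # Bad-character table: rightmost position of each pattern character.
    last = {c: i for i, c in enumerate(pattern)}
    hits, s = [], 0
    while s <= n - m:
        j = 0
        while j < m and text[s + j] == pattern[j]:
            j += 1
        if j == m:
            hits.append(s)
            s += m - fail[m - 1]  # standard KMP restart after a match
            continue
        kmp_shift = j - fail[j - 1] if j > 0 else 1
        i = last.get(text[s + j], -1)
        bc_shift = j - i if i < j else 1  # Horspool-style simplification
        s += max(kmp_shift, bc_shift)
    return hits
```

    Since each shift is individually admissible, correctness follows from taking their maximum; the speed-up over plain KMP comes from the extra bad-character jumps on rare characters.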

  11. Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.

    Science.gov (United States)

    Cao, Xiang; Zhu, Daqi; Yang, Simon X

    2016-11-01

    Target search in 3-D underwater environments is a challenge in multi-AUV (multiple autonomous underwater vehicle) exploration. This paper focuses on an effective strategy for multi-AUV target search in 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract information about the environment from the sonar data to build a grid map of the underwater environment. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collision. Finally, the AUVs plan their search path to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static target search, dynamic target search, and the breakdown of one or several AUVs in 3-D underwater environments with obstacles. The simulation results show that the proposed algorithm is capable of guiding multiple AUVs to achieve the search task of multiple targets with higher efficiency and adaptability compared with other algorithms.

  12. How Users Search the Library from a Single Search Box

    Science.gov (United States)

    Lown, Cory; Sierra, Tito; Boyer, Josh

    2013-01-01

    Academic libraries are turning increasingly to unified search solutions to simplify search and discovery of library resources. Unfortunately, very little research has been published on library user search behavior in single search box environments. This study examines how users search a large public university library using a prominent, single…

  13. A grammar based methodology for structural motif finding in ncRNA database search.

    Science.gov (United States)

    Quest, Daniel; Tapprich, William; Ali, Hesham

    2007-01-01

    In recent years, sequence database searching has been conducted through local alignment heuristics, pattern-matching, and comparison of short statistically significant patterns. While these approaches have unlocked many clues as to sequence relationships, they are limited in that they do not provide context-sensitive searching capabilities (e.g. considering pseudoknots, protein binding positions, and complementary base pairs). Stochastic grammars (hidden Markov models HMMs and stochastic context-free grammars SCFG) do allow for flexibility in terms of local context, but the context comes at the cost of increased computational complexity. In this paper we introduce a new grammar based method for searching for RNA motifs that exist within a conserved RNA structure. Our method constrains computational complexity by using a chain of topology elements. Through the use of a case study we present the algorithmic approach and benchmark our approach against traditional methods.

  14. COORDINATE-BASED META-ANALYTIC SEARCH FOR THE SPM NEUROIMAGING PIPELINE

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Szewczyk, Marcin; Rasmussen, Peter Mondrup;

    2009-01-01

    … of the databases offer so-called coordinate-based searching to the users (e.g. Brede, BrainMap). For such a search, the publications which relate to the brain locations represented by the user coordinates are retrieved. In this paper we present BredeQuery – a plugin for the widely used SPM5 data analytic pipeline. BredeQuery offers a direct link from SPM5 to the Brede Database coordinate-based search engine. BredeQuery is able to ‘grab’ brain location coordinates from the SPM windows and enter them as a query for the Brede Database. Moreover, results of the query can be displayed in an SPM window and/or exported…

  15. A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.

    Science.gov (United States)

    Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R

    2008-04-01

    Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.

  16. Ontology-based Semantic Search Engine for Healthcare Services

    Directory of Open Access Journals (Sweden)

    Jotsna Molly Rajan

    2012-04-01

    Full Text Available With the development of Web Services, the retrieval of relevant services has become a challenge. The keyword-based discovery mechanism using UDDI and WSDL is insufficient due to the retrieval of a large amount of irrelevant information. Also, keywords are insufficient for expressing semantic concepts, since a single concept can be referred to using syntactically different terms. Hence, service capabilities need to be manually analyzed, which led to the development of the Semantic Web for automatic service discovery and retrieval of relevant services and resources. This work proposes the incorporation of a semantic matching methodology in the Semantic Web for improving the efficiency and accuracy of the discovery mechanism.

  17. Developing Chinese Scientists' Skills for Publishing in English: Evaluating Collaborating-Colleague Workshops Based on Genre Analysis

    Science.gov (United States)

    Cargill, Margaret; O'Connor, Patrick

    2006-01-01

    Getting papers published in the (largely English-language) international literature is important for individual researchers, their institutions, and the academic community, and the resulting pressure is being felt increasingly in China as a result of top-down policy initiatives. For many researchers, reaching this goal involves two intersecting…

  18. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information over the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find the information from the Internet. Valuable information, however, is still likely submerged in the ocean of search results from those tools. By clustering the results into different groups based on subjects automatically, a search engine with the clustering feature allows users to select most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use the HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results have shown that the proposed algorithm has outperformed over the suffix tree clustering method and other traditional clustering methods.
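    As a rough illustration of the first step described above, under an assumed simplification (a classic quadratic-time dynamic programme stands in for the generalised suffix tree), the following extracts longest common substrings from snippet pairs as candidate clustering features:

```python
def longest_common_substring(a, b):
    """Classic O(len(a) * len(b)) dynamic programme; a simple stand-in
    for the generalised-suffix-tree extraction step in the abstract."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

def shared_phrases(snippets, min_len=3):
    """Collect candidate cluster features: the longest common substring
    of every snippet pair, kept when long enough to be meaningful."""
    phrases = set()
    for i in range(len(snippets)):
        for j in range(i + 1, len(snippets)):
            s = longest_common_substring(snippets[i], snippets[j])
            if len(s) >= min_len:
                phrases.add(s.strip())
    return phrases
```

    In the paper's pipeline these raw substrings would then be mapped to semantic features via HowNet similarities before clustering; that stage is omitted here.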

  19. Exploring personalized searches using tag-based user profiles and resource profiles in folksonomy.

    Science.gov (United States)

    Cai, Yi; Li, Qing; Xie, Haoran; Min, Huaqin

    2014-10-01

    With the increase in resource-sharing websites such as YouTube and Flickr, many shared resources have arisen on the Web. Personalized searches have become more important and challenging since users demand higher retrieval quality. To achieve this goal, personalized searches need to take users' personalized profiles and information needs into consideration. Collaborative tagging (also known as folksonomy) systems allow users to annotate resources with their own tags, which provides a simple but powerful way for organizing, retrieving and sharing different types of social resources. In this article, we examine the limitations of previous tag-based personalized searches. To handle these limitations, we propose a new method to model user profiles and resource profiles in collaborative tagging systems. We use a normalized term frequency to indicate the preference degree of a user on a tag. A novel search method using such profiles of users and resources is proposed to facilitate the desired personalization in resource searches. In our framework, instead of the keyword matching or similarity measurement used in previous works, the relevance measurement between a resource and a user query (termed the query relevance) is treated as a fuzzy satisfaction problem of a user's query requirements. We implement a prototype system called the Folksonomy-based Multimedia Retrieval System (FMRS). Experiments using the FMRS data set and the MovieLens data set show that our proposed method outperforms baseline methods.
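    A minimal sketch of the profile-building idea, under our own assumptions (the article's exact preference weighting and fuzzy-satisfaction operator are not reproduced here): tag preference as normalised term frequency, and query relevance as a fuzzy AND over per-tag satisfaction degrees.

```python
from collections import Counter

def tag_profile(tag_lists):
    """Normalised term frequency: the preference degree of each tag is
    its count divided by the count of the most frequent tag, so all
    degrees lie in (0, 1]."""
    counts = Counter(t for tags in tag_lists for t in tags)
    if not counts:
        return {}
    top = max(counts.values())
    return {t: c / top for t, c in counts.items()}

def query_relevance(query_tags, resource_profile, user_profile):
    """Fuzzy-satisfaction style scoring (a simplified reading of the
    abstract): each query tag is satisfied to the degree the resource
    carries it, weighted by the user's preference for that tag; the
    overall score is the minimum satisfaction (fuzzy AND)."""
    sats = []
    for t in query_tags:
        degree = resource_profile.get(t, 0.0)
        weight = user_profile.get(t, 0.5)  # neutral weight for unseen tags
        sats.append(degree * (0.5 + 0.5 * weight))
    return min(sats) if sats else 0.0
```

    The same `tag_profile` construction serves both users (tags they assigned) and resources (tags assigned to them), which mirrors the symmetric treatment of user and resource profiles in the article.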

  20. Efficient Multi-keyword Ranked Search over Outsourced Cloud Data based on Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Nie Mengxi

    2016-01-01

    Full Text Available With the development of cloud computing, more and more data owners are motivated to outsource their data to the cloud server for greater flexibility and lower expenditure. Because the security of outsourced data must be guaranteed, encryption methods must be used, which obsoletes traditional data utilization based on plaintext, e.g. keyword search. To enable search over encrypted data, some schemes have been proposed, e.g. top-k single or multiple keyword retrieval. However, the efficiency of these schemes is too low to be practical in cloud computing. In this paper, we propose a new scheme based on homomorphic encryption to solve the challenging problem of privacy-preserving, efficient multi-keyword ranked search over outsourced cloud data. In our scheme, the inner product is adopted to measure the relevance scores, and the technique of relevance feedback is used to reflect the search preferences of the data users. Security analysis shows that the proposed scheme can meet strict privacy requirements for such a secure cloud data utilization system. Performance evaluation demonstrates that the proposed scheme achieves low overhead on both computation and communication.

  1. A DIRECT SEARCH FRAME-BASED CONJUGATE GRADIENTS METHOD

    Institute of Scientific and Technical Information of China (English)

    I.D. Coope; C.J. Price

    2004-01-01

    A derivative-free frame-based conjugate gradients algorithm is presented. Convergence is shown for C1 functions, and this is verified in numerical trials. The algorithm is tested on a variety of low-dimensional problems, some of which are ill-conditioned, and is also tested on problems of high dimension. Numerical results show that the algorithm is effective on both classes of problems. The results are compared with those from a discrete quasi-Newton method, showing that the conjugate gradients algorithm is competitive. The algorithm exhibits the conjugate gradients speed-up on problems for which the Hessian at the solution has repeated or clustered eigenvalues. The algorithm is easily parallelizable.

  2. Selective Search and Intensity Context Based Retina Vessel Image Segmentation.

    Science.gov (United States)

    Tang, Zhaohui; Zhang, Jin; Gui, Weihua

    2017-03-01

    In the framework of computer-aided diagnosis of eye disease, a new contextual image feature named the influence degree of average intensity is proposed for retinal vessel image segmentation. This new feature evaluates the degree to which the currently detected pixel decreases the average intensity of the local row where that pixel is located. First, the Hessian matrix is introduced to detect candidate regions, in order to accelerate segmentation. Then, the influence degree of average intensity of each pixel is extracted. Next, a contextual feature vector for each pixel is constructed by concatenating the features of its 8 neighbors. Finally, a classifier is built to classify each pixel as vessel or non-vessel based on its contextual feature. The effectiveness of the proposed method is demonstrated through receiver operating characteristic analysis on the benchmark databases DRIVE and STARE. Experimental results show that our method is comparable with the state-of-the-art methods. For example, the average accuracy, sensitivity and specificity achieved on the DRIVE and STARE databases are 0.9611, 0.8174, 0.9747 and 0.9547, 0.7768, 0.9751, respectively.

  3. Eugene Garfield, Francis Narin, and PageRank: The Theoretical Bases of the Google Search Engine

    CERN Document Server

    Bensman, Stephen J

    2013-01-01

    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.
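    For reference, the PageRank computation the paper discusses can be sketched as a short power iteration (standard textbook form with uniform teleportation and dangling-node handling; this is not code from the paper):

```python
def pagerank(links, d=0.85, iters=100):
    """Minimal PageRank power iteration over a dict mapping each page to
    the pages it links to (every linked page must appear as a key).
    Dangling pages spread their rank uniformly, the standard fix."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}  # teleportation term
        for p in pages:
            outs = links[p]
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling node: distribute rank over all pages
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank
```

    The analogy the paper draws is that a link here plays the same role a citation plays in Garfield's citation indexing: both transfer a share of the source's standing to the target.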

  4. Aspiration Levels and R&D Search in Young Technology-Based Firms

    DEFF Research Database (Denmark)

    Candi, Marina; Saemundsson, Rognvaldur; Sigurjonsson, Olaf

    … the same when performance surpasses aspirations. Both positive and negative outlooks reinforce the effects of performance feedback. The combined effect is that the more outcomes and expectations deviate from aspirations, the more young technology-based firms invest in R&D search.

  5. A novel approach towards skill-based search and services of Open Educational Resources

    NARCIS (Netherlands)

    Ha, Kyung-Hun; Niemann, Katja; Schwertel, Uta; Holtkamp, Philipp; Pirkkalainen, Henri; Börner, Dirk; Kalz, Marco; Pitsilis, Vassilis; Vidalis, Ares; Pappa, Dimitra; Bick, Markus; Pawlowski, Jan; Wolpers, Martin

    2011-01-01

    Ha, K.-H., Niemann, K., Schwertel, U., Holtkamp, P., Pirkkalainen, H., Börner, D. et al (2011). A novel approach towards skill-based search and services of Open Educational Resources. In E. Garcia-Barriocanal, A. Öztürk, & M. C. Okur (Eds.), Metadata and Semantics Research: 5th International Confere

  6. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA Product, Published in 2009, 1:600 (1in=50ft) scale, Jefferson County Land Information Office.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:600 (1in=50ft) scale, was produced all or in part from Published...

  7. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA FIRM Boundary, Published in 2010, 1:2400 (1in=200ft) scale, Effingham County Board Of Commissioners.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:2400 (1in=200ft) scale, was produced all or in part from Published...

  8. Design of a JXTA-Based Super-peer Search Method

    Institute of Scientific and Technical Information of China (English)

    李歆海; 李善平

    2003-01-01

    Efficient resource search methods have a significant impact on the scalability and availability of P2P networks. Generally there are two search methods: the pure peer-to-peer method and the central index method. Recently, search methods built on the super-peer concept have appeared; they are a compromise between those two methods and have favorable scalability and availability. In this paper, we compare the advantages and deficiencies of these three kinds of search methods, and design a super-peer search method based on the JXTA platform.

  9. A Method for Detecting the Real Location of Agency Website Based On Search Engine

    Directory of Open Access Journals (Sweden)

    Chou Xiao-Hui

    2016-01-01

    Full Text Available This paper provides a method to detect the real location of an agency website based on search engines. We analyze and process the target agency website to obtain the server routing information, extract critical feature information from the web content, combine it with a search engine to acquire web data, and calculate word frequencies. Through named entity recognition, web text matching calculation and syntactic analysis, we can infer the real location of the target agency website in the real world. The experimental results show that our approach is reliable, correct and effective.

  10. A New Genetic Algorithm Based on Niche Technique and Local Search Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The genetic algorithm has been widely used in many fields as an easy, robust global search and optimization method. In this paper, a new genetic algorithm based on the niche technique and a local search method is presented, in view of the inadequacies of the simple genetic algorithm. In order to prove the adaptability and validity of the improved genetic algorithm, optimization problems of multimodal functions with equal peaks, unequal peaks and complicated peak distributions are discussed. The simulation results show that, compared to other niching methods, this improved genetic algorithm has obvious potential in many respects, such as convergence speed, solution accuracy and ability of global optimization.
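    A toy sketch of the combination described above, with all details chosen by us for illustration (fitness sharing as the niche technique, a few hill-climbing steps as the local search, and a five-equal-peak test function on [0, 1]):

```python
import random

def niche_ga(f, share_radius=0.1, pop_size=60, gens=60, seed=3):
    """Toy GA on [0, 1] combining fitness sharing (the niche technique)
    with a small hill-climbing local search applied to each offspring."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]

    def shared_fitness(pop):
        # Divide raw fitness by the niche count so crowded peaks are
        # penalised and several niches can coexist.
        return [f(x) / sum(max(0.0, 1.0 - abs(x - y) / share_radius)
                           for y in pop)
                for x in pop]

    def local_search(x, step=0.01, tries=5):
        for _ in range(tries):  # a few random uphill probes
            y = min(1.0, max(0.0, x + rng.uniform(-step, step)))
            if f(y) > f(x):
                x = y
        return x

    for _ in range(gens):
        sf = shared_fitness(pop)
        new = []
        for _ in range(pop_size):
            # binary tournament selection on the shared fitness
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            parent = pop[i] if sf[i] > sf[j] else pop[j]
            child = min(1.0, max(0.0, parent + rng.gauss(0.0, 0.02)))
            new.append(local_search(child))
        pop = new
    return pop
```

    On a function with equal peaks, plain selection tends to collapse the whole population onto one peak; the sharing term keeps several peaks populated, which is the behaviour the abstract credits to the niche technique.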

  11. GeNemo: a search engine for web-based functional genomic data

    OpenAIRE

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-01-01

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of E...

  12. A Beam Search-based Algorithm for Flexible Manufacturing System Scheduling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bing-hai; ZHOU Xiao-jun; CAI Jian-guo; FENG Kun

    2002-01-01

    A new algorithm is proposed for the flexible manufacturing system (FMS) scheduling problem in this paper. The proposed algorithm is a heuristic based on filtered beam search. It considers the machines and automated guided vehicles (AGVs) as the primary resources, and utilizes system constraints and related manufacturing and processing information to generate machine and AGV schedules. The generated schedules can cover an entire scheduling horizon as well as various lengths of scheduling periods. The proposed algorithm is also compared with other well-known dispatching-rules-based FMS scheduling methods. The results indicate that the beam search algorithm is a simple, valid and promising algorithm that deserves further research in the FMS scheduling field.
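    The abstract leaves the heuristic's details out; as a hedged illustration of the filtered-beam-search idea, the sketch below schedules jobs on a single machine to minimise total tardiness, keeping only the `width` best partial sequences at each level. The objective and instance format are stand-ins, not the paper's FMS model with machines and AGVs:

```python
from itertools import permutations

def total_tardiness(seq):
    # Jobs are (processing_time, due_date); sum tardiness over the sequence
    t = cost = 0
    for p, d in seq:
        t += p
        cost += max(0, t - d)
    return cost

def beam_search_schedule(jobs, width=3):
    """Filtered beam search: at each level, extend every sequence in the
    beam by one unscheduled job, then keep only the `width` best partial
    sequences as ranked by the tardiness of the partial schedule."""
    beam = [()]
    for _ in range(len(jobs)):
        candidates = []
        for seq in beam:
            remaining = list(jobs)
            for j in seq:
                remaining.remove(j)
            for j in remaining:
                candidates.append(seq + (j,))
        beam = sorted(candidates, key=total_tardiness)[:width]
    return min(beam, key=total_tardiness)
```

    With `width` at least the number of candidate sequences per level, the search becomes exhaustive; smaller widths trade optimality for speed.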

  13. Publishing with XML structure, enter, publish

    CERN Document Server

    Prost, Bernard

    2015-01-01

    XML is now at the heart of book publishing techniques: it provides the industry with a robust, flexible format which is relatively easy to manipulate. Above all, it preserves the future: the XML text becomes a genuine tactical asset enabling publishers to respond quickly to market demands. When new publishing media appear, it will be possible to very quickly make your editorial content available at a lower cost. On the downside, XML can become a bottomless pit for publishers attracted by its possibilities. There is a strong temptation to switch to audiovisual production and to add video and a

  14. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    Science.gov (United States)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI data-sets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
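    A minimal sketch of a binary harmony search for feature selection, assuming a caller-supplied merit function in place of the paper's multi-fractal-dimension criterion; the harmony memory size and the HMCR/PAR rates are illustrative defaults, not the paper's improved variant:

```python
import random

def harmony_search_select(n_feats, merit, hms=10, hmcr=0.9, par=0.3,
                          iters=200, seed=0):
    """Binary harmony search: each harmony is a 0/1 feature mask and
    `merit` scores a mask (higher is better)."""
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(hms)]
    scores = [merit(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_feats):
            if rng.random() < hmcr:                  # draw bit from memory
                bit = memory[rng.randrange(hms)][j]
                if rng.random() < par:               # pitch adjustment: flip
                    bit ^= 1
            else:                                    # random re-initialisation
                bit = rng.randint(0, 1)
            new.append(bit)
        s = merit(new)
        worst = min(range(hms), key=scores.__getitem__)
        if s > scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = max(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

    The number of selected features falls out of the mask rather than being fixed in advance, mirroring the abstract's point that the evaluation criterion determines the subset size.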

  15. On the importance of graph search algorithms for DRGEP-based mechanism reduction methods

    CERN Document Server

    Niemeyer, Kyle E

    2016-01-01

    The importance of graph search algorithm choice to the directed relation graph with error propagation (DRGEP) method is studied by comparing basic and modified depth-first search, basic and R-value-based breadth-first search (RBFS), and Dijkstra's algorithm. By using each algorithm with DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only Dijkstra's algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal mechanisms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and equivalence ratio. Dijkstra's algorithm combined with a coefficient scaling approach is demonstrated to produce the most compact skeletal mechanism with a similar performance compared to larger skeletal mechanisms resulting from the other algorithms. The computational efficiency of each algorithm is also compared by applying the DRG...
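    The R-values DRGEP propagates, defined as the maximum over paths of the product of direct-interaction coefficients, can be computed with the max-product variant of Dijkstra's algorithm that the study found to be species-order independent. The sketch below assumes a generic adjacency-list graph with edge weights in [0, 1]:

```python
import heapq

def drgep_rvalues(graph, target):
    """Max-product Dijkstra: weights multiply along a path instead of
    adding, and the largest tentative value is popped first. Returns the
    R-value of every species reachable from `target`."""
    R = {target: 1.0}
    heap = [(-1.0, target)]          # negate values for a max-heap
    while heap:
        neg_r, u = heapq.heappop(heap)
        r = -neg_r
        if r < R.get(u, 0.0):
            continue                 # stale heap entry
        for v, w in graph.get(u, []):
            rv = r * w
            if rv > R.get(v, 0.0):
                R[v] = rv
                heapq.heappush(heap, (-rv, v))
    return R
```

    Because edge weights never exceed 1, popping the largest tentative value first gives the same greedy-optimality guarantee as classic shortest-path Dijkstra.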

  16. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    Directory of Open Access Journals (Sweden)

    K.S. Kuppusamy

    2011-03-01

    Full Text Available The results rendered by search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages, there is a need for enhanced result lists from search engines in order to cope with the expectations of users. This paper proposes a model for dynamic construction of a resultant page from the various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions of various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.

  17. Project GRACE A grid based search tool for the global digital library

    CERN Document Server

    Scholze, Frank; Vigen, Jens; Prazak, Petra; The Seventh International Conference on Electronic Theses and Dissertations

    2004-01-01

    The paper will report on the progress of an ongoing EU project called GRACE - Grid Search and Categorization Engine (http://www.grace-ist.org). The project participants are CERN, Sheffield Hallam University, Stockholm University, Stuttgart University, GL 2006 and Telecom Italia. The project started in 2002 and will finish in 2005, resulting in a Grid based search engine that will search across a variety of content sources including a number of electronic thesis and dissertation repositories. The Open Archives Initiative (OAI) is expanding and is clearly an interesting movement for a community advocating open access to ETD. However, the OAI approach alone may not be sufficiently scalable to achieve a truly global ETD Digital Library. Many universities simply offer their collections to the world via their local web services without being part of any federated system for archiving and even those dissertations that are provided with OAI compliant metadata will not necessarily be picked up by a centralized OAI Ser...

  18. A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.

    Science.gov (United States)

    Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2017-03-13

    The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.

  19. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The results rendered by search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages, there is a need for enhanced result lists from search engines in order to cope with the expectations of users. This paper proposes a model for dynamic construction of a resultant page from the various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions of various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.

  20. Differential Evolution Based Intelligent System State Search Method for Composite Power System Reliability Evaluation

    Science.gov (United States)

    Bakkiyaraj, Ashok; Kumarappan, N.

    2015-09-01

    This paper presents a new approach for evaluating the reliability indices of a composite power system that adopts the binary differential evolution (BDE) algorithm in the search mechanism to select system states. These states, also called dominant states, have large state probability and high loss-of-load curtailment necessary to maintain real power balance. A chromosome of the BDE algorithm represents a system state. BDE is not applied in its traditional role of optimizing a non-linear objective function, but is used as a tool for exploring more dominant states by producing new chromosomes, mutant vectors and trial vectors based on the fitness function. The searched system states are used to evaluate annualized system and load-point reliability indices. The proposed search methodology is applied to the RBTS and IEEE-RTS test systems and the results are compared with other approaches. This approach evaluates indices similar to existing methods while analyzing fewer system states.

  1. KRBKSS: a keyword relationship based keyword-set search system for peer-to-peer networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Liang; ZOU Fu-tai; MA Fan-yuan

    2005-01-01

    Distributed inverted index technology is used in many peer-to-peer (P2P) systems to help rapidly find the documents in which a given word appears. A distributed inverted index by keywords may incur significant bandwidth for executing more complicated search queries such as multiple-attribute queries. In order to reduce query overhead, KSS (keyword-set search) by Gnawali partitions the index by a set of keywords. However, a KSS index is considerably larger than a standard inverted index, since there are more word sets than there are individual words, and the insert and storage overheads are obviously unacceptable for full-text search on a collection of documents even if KSS uses the distance window technology. In this paper, we extract relationship information between query keywords from websites' query logs to improve the performance of the KSS system. Experimental results clearly demonstrate that the improved keyword relationship based keyword-set search system (KRBKSS) is more efficient than the KSS index in insert and storage overhead, and more efficient than a standard inverted index in terms of communication costs for queries.
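    To make the size argument concrete, here is a toy two-keyword-set index; it is a deliberate simplification (the real KSS also limits which word combinations are indexed via a distance window), and the helper names are ours, not KSS's:

```python
from itertools import combinations
from collections import defaultdict

def build_kss_index(docs, set_size=2):
    """Keyword-set index: map each sorted combination of `set_size`
    distinct terms appearing in a document to the ids of the documents
    that contain all of them."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        words = sorted(set(text.lower().split()))
        for combo in combinations(words, set_size):
            index[combo].add(doc_id)
    return index

def query(index, terms):
    # A multi-keyword query becomes a single key lookup, the point of KSS
    key = tuple(sorted(t.lower() for t in terms))
    return index.get(key, set())
```

    For a document with w distinct words, a standard inverted index stores w postings while this index stores w*(w-1)/2, which is exactly the growth in insert and storage overhead the abstract calls unacceptable.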

  2. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    Science.gov (United States)

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt changes in large-scale bioelectric signals. Currently, most existing methods, like the Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. The studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is very helpful for inspecting useful information in all kinds of bioelectric time series signals.
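    A brute-force baseline makes clear what BSTKS accelerates: slide a split point through the series and keep the split that maximises the two-sample KS statistic between the halves. The sketch below is that slow baseline, not the paper's wavelet/BST method:

```python
import bisect

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the two samples."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:
        fa = bisect.bisect_right(a, x) / len(a)
        fb = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d

def detect_change(series):
    """Brute-force change-point detection: evaluate every split point
    and keep the one maximising the KS statistic. This quadratic scan
    is the cost that BSTKS's tree search is designed to avoid."""
    best_k, best_d = None, -1.0
    for k in range(2, len(series) - 1):
        d = ks_stat(series[:k], series[k:])
        if d > best_d:
            best_k, best_d = k, d
    return best_k, best_d
```
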

  3. Comics, Copyright and Academic Publishing

    Directory of Open Access Journals (Sweden)

    Ronan Deazley

    2014-05-01

    Full Text Available This article considers the extent to which UK-based academics can rely upon the copyright regime to reproduce extracts and excerpts from published comics and graphic novels without having to ask the copyright owner of those works for permission. In doing so, it invites readers to engage with a broader debate about the nature, demands and process of academic publishing.

  4. Genealogical Information Search by Using Parent Bidirectional Breadth Algorithm and Rule Based Relationship

    CERN Document Server

    Nuanmeesri, Sumitra; Meesad, Payung

    2010-01-01

    Genealogical information is among the best historical resources for the study of culture and cultural heritage. Genealogical research generally presents family information and depicts it as a tree diagram. This paper presents the Parent Bidirectional Breadth Algorithm (PBBA) to find the consanguine relationship between two persons. In addition, the paper utilizes a rule-based system in order to identify consanguine relationships. The study reveals that PBBA solves the genealogical information search problem quickly, and that the rule-based relationship provides further benefits in blood relationship identification.
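    A plausible sketch of the bidirectional ancestor search the abstract describes: expand the parent links of both persons breadth-first and stop as soon as the two frontiers share a person. The data layout and the returned pair of person-to-ancestor paths are our assumptions, not PBBA's actual interface:

```python
from collections import deque

def consanguine_path(parents, a, b):
    """Bidirectional BFS through parent links. Returns a pair of paths
    (from the side whose frontier met, from the other side), each running
    from a starting person up to the shared ancestor, or None if the two
    persons share no recorded ancestor."""
    seen_a, seen_b = {a: [a]}, {b: [b]}
    qa, qb = deque([a]), deque([b])
    while qa or qb:
        # Alternate one expansion step on each frontier
        for q, seen, other in ((qa, seen_a, seen_b), (qb, seen_b, seen_a)):
            if not q:
                continue
            person = q.popleft()
            if person in other:          # frontiers meet at a common ancestor
                return seen[person], other[person]
            for p in parents.get(person, []):
                if p not in seen:
                    seen[p] = seen[person] + [p]
                    q.append(p)
    return None
```

    Searching from both ends keeps each frontier shallow, which is the usual reason a bidirectional breadth-first search beats a one-sided ancestor scan.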

  5. An Efficient Minimum Free Energy Structure-Based Search Method for Riboswitch Identification Based on Inverse RNA Folding.

    Science.gov (United States)

    Drory Retwitzer, Matan; Kifer, Ilona; Sengupta, Supratim; Yakhini, Zohar; Barash, Danny

    2015-01-01

    Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems in the evolutionary timescale. One of the biggest challenges in riboswitch research is to find additional eukaryotic riboswitches, since more than 20 riboswitch classes have been found in prokaryotes but only one class has been found in eukaryotes. Moreover, this single known class of eukaryotic riboswitch, namely the TPP riboswitch class, has been found in bacteria, archaea, fungi and plants but not in animals. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods such as a combination of BLAST and pattern matching techniques that incorporate base-pairing considerations. None of these approaches performs energy minimization structure predictions. There is a clear motivation to develop new bioinformatics methods, aside from the ongoing advances in covariance models, that will sample the sequence search space more flexibly using structural guidance while retaining the computational efficiency of sequence-based methods. We present a new energy minimization approach that transforms structure-based search into a sequence-based search, thereby enabling the utilization of well-established sequence-based search utilities such as BLAST and FASTA. The transformation to sequence space is obtained by using an extended inverse RNA folding problem solver with sequence and structure constraints, available within RNAfbinv. Examples of applying the new method are presented for the purine and preQ1 riboswitches. The method is described in detail along with its findings in prokaryotes. Potential uses in finding novel eukaryotic riboswitches and optimizing pre-designed synthetic riboswitches based on ligand simulations are discussed. The method components are freely available for use.

  6. An Efficient Minimum Free Energy Structure-Based Search Method for Riboswitch Identification Based on Inverse RNA Folding.

    Directory of Open Access Journals (Sweden)

    Matan Drory Retwitzer

    Full Text Available Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems in the evolutionary timescale. One of the biggest challenges in riboswitch research is to find additional eukaryotic riboswitches since more than 20 riboswitch classes have been found in prokaryotes but only one class has been found in eukaryotes. Moreover, this single known class of eukaryotic riboswitch, namely the TPP riboswitch class, has been found in bacteria, archaea, fungi and plants but not in animals. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods such as a combination of BLAST and pattern matching techniques that incorporate base-pairing considerations. None of these approaches perform energy minimization structure predictions. There is a clear motivation to develop new bioinformatics methods, aside of the ongoing advances in covariance models, that will sample the sequence search space more flexibly using structural guidance while retaining the computational efficiency of sequence-based methods. We present a new energy minimization approach that transforms structure-based search into a sequence-based search, thereby enabling the utilization of well established sequence-based search utilities such as BLAST and FASTA. The transformation to sequence space is obtained by using an extended inverse RNA folding problem solver with sequence and structure constraints, available within RNAfbinv. Examples in applying the new method are presented for the purine and preQ1 riboswitches. The method is described in detail along with its findings in prokaryotes. Potential uses in finding novel eukaryotic riboswitches and optimizing pre-designed synthetic riboswitches based on ligand simulations are discussed. The method components are freely

  7. Web Image Search Re-ranking with Click-based Similarity and Typicality.

    Science.gov (United States)

    Yang, Xiaopeng; Mei, Tao; Zhang, Yong Dong; Liu, Jie; Satoh, Shin'ichi

    2016-07-20

    In image search re-ranking, besides the well-known semantic gap, the intent gap, which is the gap between the representation of a user's query and the user's real intent, is becoming a major problem restricting the development of image retrieval. To reduce human effort, in this paper we use image click-through data, which can be viewed as implicit feedback from users, to help overcome the intent gap and further improve image search performance. Generally, the hypothesis that visually similar images should be close in a ranking list, and the strategy that images with higher relevance should be ranked higher than others, are widely accepted. To obtain satisfying search results, image similarity and the level of relevance typicality are therefore the determining factors. However, when measuring image similarity and typicality, conventional re-ranking approaches consider only visual information and the initial ranks of images, while overlooking the influence of click-through data. This paper presents a novel re-ranking approach, named spectral clustering re-ranking with click-based similarity and typicality (SCCST). First, to learn an appropriate similarity measurement, we propose a click-based multi-feature similarity learning algorithm (CMSL), which conducts metric learning based on click-based triplet selection and integrates multiple features into a unified similarity space via multiple kernel learning. Then, based on the learnt click-based image similarity measure, we conduct spectral clustering to group visually and semantically similar images into the same clusters, and obtain the final re-ranked list by calculating click-based cluster typicality and within-cluster click-based image typicality in descending order.
    Our experiments conducted on two real-world query-image datasets with diverse representative queries show that our proposed re-ranking approach can significantly improve initial search results, and outperforms several existing re-ranking approaches.

  8. Is Internet search better than structured instruction for web-based health education?

    Science.gov (United States)

    Finkelstein, Joseph; Bedra, McKenzie

    2013-01-01

    The Internet provides access to vast amounts of comprehensive information regarding any health-related subject. Patients increasingly use this information for health education, using a search engine to identify education materials. An alternative approach to health education via the Internet is based on utilizing a verified web site which provides structured interactive education guided by adult learning theories. These two approaches had not been systematically compared in older patients. The aim of this study was to compare the efficacy of a web-based computer-assisted education (CO-ED) system versus searching the Internet for learning about hypertension. Sixty hypertensive older adults (age 45+) were randomized into control or intervention groups. The control patients spent 30 to 40 minutes searching the Internet using a search engine for information about hypertension. The intervention patients spent 30 to 40 minutes using the CO-ED system, which provided computer-assisted instruction about major hypertension topics. Analysis of pre- and post-knowledge scores indicated a significant improvement among CO-ED users (14.6%) as opposed to Internet users (2%). Additionally, patients using the CO-ED program rated their learning experience more positively than those using the Internet.

  9. Algorithms for Recollection of Search Terms Based on the Wikipedia Category Structure

    Directory of Open Access Journals (Sweden)

    Stijn Vandamme

    2014-01-01

    Full Text Available The common user interface for a search engine consists of a text field where the user can enter queries consisting of one or more keywords. Keyword-query-based search engines work well when users have a clear vision of what they are looking for and are capable of articulating their query using the same terms as indexed. For our multimedia database containing 202,868 items with text descriptions, we supplement such a search engine with a category-based interface whose category structure is tailored to the content of the database. This facilitates browsing and offers users the possibility to look for named entities even if they have forgotten their names. We demonstrate that this approach allows users who fail to recollect the name of a named entity to retrieve data with little effort. In all our experiments, it takes 1 query on a category and on average 2.49 clicks, compared to 5.68 queries on the database’s traditional text search engine for a 68.3% success probability, or 6.01 queries when the user also turns to Google, for a 97.1% success probability.

  10. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    Science.gov (United States)

    Wang, J Z

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wavelets and the IRM (Integrated Region Matching) distance. Experiments with a database of 70,000 pathology image fragments have demonstrated high retrieval accuracy and high speed. The algorithm can be combined with our previously developed wavelet-based progressive pathology image transmission and browsing algorithm and is expandable for medical image databases.

  11. Knowledge revision in systems based on an informed tree search strategy : application to cartographic generalisation

    CERN Document Server

    Taillandier, Patrick; Drogoul, Alexis

    2012-01-01

    Many real-world problems can be expressed as optimisation problems. Solving this kind of problem means finding, among all possible solutions, the one that maximises an evaluation function. One approach to solving such problems is to use an informed search strategy. The principle of this kind of strategy is to use problem-specific knowledge beyond the definition of the problem itself to find solutions more efficiently than with an uninformed strategy. This kind of strategy requires problem-specific knowledge (heuristics) to be defined, and the efficiency and effectiveness of systems based on it directly depend on the quality of the knowledge used. Unfortunately, acquiring and maintaining such knowledge can be tedious. The objective of the work presented in this paper is to propose an automatic knowledge revision approach for systems based on an informed tree search strategy. Our approach consists in analysing the system execution logs and revising knowledge based on these logs by modelling the revision problem as...

  12. Visualizing the search for radiation-damaged DNA bases in real time

    Science.gov (United States)

    Lee, Andrea J.; Wallace, Susan S.

    2016-11-01

    The Base Excision Repair (BER) pathway removes the vast majority of damages produced by ionizing radiation, including the plethora of radiation-damaged purines and pyrimidines. The first enzymes in the BER pathway are DNA glycosylases, which are responsible for finding and removing the damaged base. Although much is known about the biochemistry of DNA glycosylases, how these enzymes locate their specific damage substrates among an excess of undamaged bases has long remained a mystery. Here we describe the use of single molecule fluorescence to observe the bacterial DNA glycosylases, Nth, Fpg and Nei, scanning along undamaged and damaged DNA. We show that all three enzymes randomly diffuse on the DNA molecule and employ a wedge residue to search for and locate damage. The search behavior of the Escherichia coli DNA glycosylases likely provides a paradigm for their homologous mammalian counterparts.

  13. ONLINE PUBLISHING CURRENT SCENARIO

    Directory of Open Access Journals (Sweden)

    Balasubramanian Thiagarajan

    2012-12-01

    Full Text Available This article attempts to unravel the current scenario in online publishing. The advent of the internet has brought tremendous changes to the publishing industry. What was hitherto an industry dominated by publishers has been thrown open to all and sundry. Online publishing has brought with it a reach that could never have been imagined before. Previously it would take at least a year to publish a manuscript; online publishing has brought this time down to a few weeks, or at most a month. This article discusses the positives and perils of the online publishing scenario.

  14. Sharing our data—An overview of current (2016) USGS policies and practices for publishing data on ScienceBase and an example interactive mapping application

    Science.gov (United States)

    Chase, Katherine J.; Bock, Andrew R.; Sando, Roy

    2017-01-05

    This report provides an overview of current (2016) U.S. Geological Survey policies and practices related to publishing data on ScienceBase, and an example interactive mapping application to display those data. ScienceBase is an integrated data sharing platform managed by the U.S. Geological Survey. This report describes resources that U.S. Geological Survey Scientists can use for writing data management plans, formatting data, and creating metadata, as well as for data and metadata review, uploading data and metadata to ScienceBase, and sharing metadata through the U.S. Geological Survey Science Data Catalog. Because data publishing policies and practices are evolving, scientists should consult the resources cited in this paper for definitive policy information.An example is provided where, using the content of a published ScienceBase data release that is associated with an interpretive product, a simple user interface is constructed to demonstrate how the open source capabilities of the R programming language and environment can interact with the properties and objects of the ScienceBase item and be used to generate interactive maps.

  15. Artificial neural network-based merging score for Meta search engine

    Institute of Scientific and Technical Information of China (English)

    P Vijaya; G Raju; Santosh Kumar Ray

    2016-01-01

    Many users rely on metasearch engines, directly or indirectly, to access and gather data from more than one data source. The effectiveness of a metasearch engine is largely determined by the quality of the results it returns in response to user queries. The rank aggregation methods which have been proposed until now exploit a very limited set of parameters, such as the total number of used resources and the rankings achieved from each individual resource. In this work, we use a neural network to merge the score computation module effectively. Initially, we submit a query to different search engines, and the top-n list from each search engine is chosen for further processing by our technique. We then merge the top-n lists based on unique links and perform several parameter calculations: title-based, snippet-based, content-based, domain, position and co-occurrence calculations. We feed the results of these calculations, together with the user-given ranking of links, to the neural network to train the system. The system then ranks and merges the links obtained from the different search engines for a given query. Experimental results report a retrieval effectiveness of about 80%, and a precision of about 79% for user queries and about 72% for benchmark queries. The proposed technique has a response time of about 76 ms for 50 links and 144 ms for 100 links.
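    As a stand-in for the paper's neural-network score fusion, the sketch below merges top-n lists by unique link using a weighted positional (Borda-style) score; the weighting scheme is an assumption, not the trained model described in the abstract:

```python
def merge_results(rankings, weights=None):
    """Merge several engines' top-n lists into one ranking. Each link
    earns (list_length - position) from every list it appears in, scaled
    by that engine's weight; ties break lexicographically."""
    if weights is None:
        weights = [1.0] * len(rankings)
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for pos, link in enumerate(ranking):
            scores[link] = scores.get(link, 0.0) + w * (n - pos)
    return sorted(scores, key=lambda link: (-scores[link], link))
```

    In the paper's setting, the per-link parameter calculations (title, snippet, content, domain, position, co-occurrence) would replace this fixed positional score as inputs to the trained network.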

  16. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    OpenAIRE

    Jie-sheng Wang; Shu-xia Li; Jiang-di Song

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the birds' nest locations. In order to select a reasonable repeat-...

  17. A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-03-27

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points into a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure, and a statistical shape prior model is incorporated into the segmentation. To preserve the smoothness of the shape, a smoothness constraint is imposed on the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
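
    The least-Mahalanobis-distance criterion for updating target points can be illustrated in miniature. The 2-D profiles, mean, and covariance below are invented for the example; a real ASM uses trained gray-level profile statistics:

```python
# Among candidate gray-level profiles, pick the one closest to the trained
# mean profile g_mean under covariance S (least Mahalanobis distance).

def mat_inv_2x2(S):
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return [[ S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det,  S[0][0] / det]]

def mahalanobis_sq(x, mean, S_inv):
    d = [x[0] - mean[0], x[1] - mean[1]]
    return (d[0] * (S_inv[0][0] * d[0] + S_inv[0][1] * d[1]) +
            d[1] * (S_inv[1][0] * d[0] + S_inv[1][1] * d[1]))

g_mean = [0.0, 1.0]                    # toy trained mean profile
S = [[1.0, 0.2], [0.2, 2.0]]           # toy trained covariance
S_inv = mat_inv_2x2(S)

candidates = [[0.5, 0.5], [0.1, 1.1], [2.0, 0.0]]
best = min(candidates, key=lambda g: mahalanobis_sq(g, g_mean, S_inv))
print(best)  # → [0.1, 1.1]
```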

  18. Construction of Powerful Online Search Expert System Based on Semantic Web

    Directory of Open Access Journals (Sweden)

    Yasser A. Nada

    2013-01-01

    Full Text Available In this paper we intend to build an expert system, based on the semantic web and using XML, for online search, to help users find the desired software and read about its features and specifications. The expert system saves the user's time and effort in web searching or in buying software from available libraries. Building an online search expert system is ideal for capturing support knowledge to produce interactive on-line systems that provide searching details and situation-specific advice, exactly like a session with an expert. Any person can access this interactive system from a web browser to get answers to questions, in addition to the precise advice provided by an expert. The system can provide troubleshooting diagnoses, find the right products, etc. The proposed system combines aspects of three research topics (Semantic Web, Expert Systems, and XML). The semantic web ontology is considered a set of directed graphs where each node represents an item and the edges denote terms related to other terms. Organizations can now optimize their most valuable expert knowledge through a powerful interactive Web-enabled knowledge automation expert system. Online sessions emulate a conversation with a human expert, asking focused questions and producing customized recommendations and advice. Hence, the main strength of the proposed expert system is that the skills of any domain expert become available to everyone.

  19. An Experimentally Based Computer Search Identifies Unstructured Membrane-binding Sites in Proteins

    Science.gov (United States)

    Brzeska, Hanna; Guag, Jake; Remmert, Kirsten; Chacko, Susan; Korn, Edward D.

    2010-01-01

    Programs exist for searching protein sequences for potential membrane-penetrating segments (hydrophobic regions) and for lipid-binding sites with highly defined tertiary structures, such as PH, FERM, C2, ENTH, and other domains. However, a rapidly growing number of membrane-associated proteins (including cytoskeletal proteins, kinases, GTP-binding proteins, and their effectors) bind lipids through less structured regions. Here, we describe the development and testing of a simple computer search program that identifies unstructured potential membrane-binding sites. Initially, we found that both basic and hydrophobic amino acids, irrespective of sequence, contribute to the binding to acidic phospholipid vesicles of synthetic peptides that correspond to the putative membrane-binding domains of Acanthamoeba class I myosins. Based on these results, we modified a hydrophobicity scale giving Arg- and Lys-positive, rather than negative, values. Using this basic and hydrophobic scale with a standard search algorithm, we successfully identified previously determined unstructured membrane-binding sites in all 16 proteins tested. Importantly, basic and hydrophobic searches identified previously unknown potential membrane-binding sites in class I myosins, PAKs and CARMIL (capping protein, Arp2/3, myosin I linker; a membrane-associated cytoskeletal scaffold protein), and synthetic peptides and protein domains containing these newly identified sites bound to acidic phospholipids in vitro. PMID:20018884
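
    A minimal sketch of such a search, assuming a sliding-window scan with a hydrophobicity scale modified so that Arg (R) and Lys (K) score positive. The scale values, window length, and threshold are illustrative, not the calibrated scale from the study:

```python
# Sliding-window search with a "basic and hydrophobic" scale: basic residues
# (R, K) and hydrophobic residues both raise the window score, so unstructured
# membrane-binding stretches stand out. Scale values are illustrative.

SCALE = {"R": 2.0, "K": 2.0, "L": 1.8, "I": 1.8, "F": 1.8, "V": 1.5,
         "W": 1.5, "M": 1.2, "A": 0.6, "C": 0.3, "G": 0.0, "T": -0.2,
         "S": -0.3, "P": -0.5, "Y": 0.3, "H": 0.0, "N": -1.0, "Q": -1.0,
         "D": -2.0, "E": -2.0}

def find_sites(seq, window=8, threshold=1.0):
    """Return (start, window, mean_score) for windows above threshold."""
    hits = []
    for i in range(len(seq) - window + 1):
        win = seq[i:i + window]
        mean = sum(SCALE[aa] for aa in win) / window
        if mean >= threshold:
            hits.append((i, win, round(mean, 2)))
    return hits

seq = "MDEEGKKLLRKIRKFFGL"   # toy sequence with a basic/hydrophobic tail
hits = find_sites(seq)
print(hits)                  # windows starting at positions 3 through 10
```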

  20. A minimal path searching approach for active shape model (ASM)-based segmentation of the lung

    Science.gov (United States)

    Guo, Shengwen; Fei, Baowei

    2009-02-01

    We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points into a fan-shaped region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure, and a statistical shape prior model is incorporated into the segmentation. To preserve the smoothness of the shape, a smoothness constraint is imposed on the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 digitized lung radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.

  1. Search Using N-gram Technique Based Statistical Analysis for Knowledge Extraction in Case Based Reasoning Systems

    OpenAIRE

    Karthik, M. N.; Davis, Moshe

    2004-01-01

    Searching techniques for Case Based Reasoning systems involve extensive methods of elimination. In this paper, we look at a new method of arriving at the right solution by performing a series of transformations upon the data. These involve N-gram based comparison and deduction of the input data against the case data, using morphemes and phonemes as the deciding parameters. A similar technique that eliminates possible errors using a noise-removal function is also applied. The error tracking and elim...
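
    The n-gram comparison step can be sketched with character bigrams and the Dice coefficient; the record's morpheme/phoneme parameters are replaced here by plain character bigrams for brevity:

```python
# Retrieve the stored case most similar to the input by comparing character
# bigram sets with the Dice coefficient. Tolerates typos and small edits.

def ngrams(text, n=2):
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def dice(a, b, n=2):
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return 2 * len(ga & gb) / (len(ga) + len(gb))

cases = ["printer jam", "paper jam in tray", "network timeout"]
query = "printr jam"   # note the typo
best = max(cases, key=lambda c: dice(query, c))
print(best)  # → printer jam
```

    A noise-removal pass (e.g. dropping bigrams that occur in almost every case) could be layered on top of the same comparison.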

  2. Local search methods based on variable focusing for random K -satisfiability

    Science.gov (United States)

    Lemoy, Rémi; Alava, Mikko; Aurell, Erik

    2015-01-01

    We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses; the methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered in which variables are selected uniformly at random or with a bias towards picking variables participating in several unsatisfied clauses. These are studied for the random 3-SAT problem, together with an alternative energy definition: the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed.
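
    A minimal sketch of variable-focused local search, assuming a satisfiable toy instance (two-literal clauses for brevity, not random 3-SAT). Drawing one "ticket" per occurrence in unsatisfied clauses implements the bias towards variables appearing in several unsatisfied clauses:

```python
import random

def unsat_clauses(clauses, assign):
    # A clause is satisfied if any literal agrees with the assignment.
    return [c for c in clauses
            if not any((lit > 0) == assign[abs(lit)] for lit in c)]

def solve(clauses, n_vars, biased=True, max_flips=10000, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assign)
        if not unsat:
            return assign
        # Pool every occurrence of a variable in an unsatisfied clause:
        # drawing from it biases towards variables in several unsat clauses.
        pool = [abs(lit) for c in unsat for lit in c]
        var = rng.choice(pool) if biased else rng.choice(list(set(pool)))
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x1 or x3)
clauses = [(1, 2), (-1, 3), (-2, -3), (1, 3)]
model = solve(clauses, 3)
print(model is not None and not unsat_clauses(clauses, model))  # → True
```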

  3. FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine

    Science.gov (United States)

    Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo

    Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit the capacity that can be deployed in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
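
    The operation FPS-RAM accelerates, longest-prefix match, has a classic software analogue in a binary trie. The sketch below is purely illustrative and unrelated to the record's hardware design:

```python
# Longest-prefix match with a binary trie: insert routing prefixes as bit
# strings, then walk the destination address remembering the deepest hit.

class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node["hop"] = next_hop

    def lookup(self, addr_bits):
        node, best = self.root, None
        for b in addr_bits:
            if "hop" in node:
                best = node["hop"]      # remember deepest prefix seen so far
            if b not in node:
                break
            node = node[b]
        else:
            if "hop" in node:
                best = node["hop"]
        return best

t = Trie()
t.insert("10", "A")          # prefix 10/2
t.insert("1011", "B")        # prefix 1011/4
print(t.lookup("10110001"))  # → B (longest matching prefix wins)
print(t.lookup("10000000"))  # → A
```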

  4. Resource discovery algorithm based on hierarchical model and Conscious search in Grid computing system

    Directory of Open Access Journals (Sweden)

    Nasim Nickbakhsh

    2017-03-01

    Full Text Available The distributed Grid system shares non-homogeneous resources on a vast scale in a dynamic manner, and the resource discovery method strongly influences the efficiency and quality of system functionality. The "Bitmap" model is a hierarchical, conscious-search model that produces less traffic and fewer messages than other methods. The proposed method builds on this hierarchical, conscious-search model and enhances the Bitmap method, with the objectives of reducing traffic, reducing the load of resource-management processing, reducing the number of messages generated by resource discovery, and increasing the speed of resource access. The proposed method and the Bitmap method are simulated with the Arena tool. The proposed model is abbreviated as RNTL.

  5. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    Science.gov (United States)

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

    Wilderness search and rescue entails performing a wide range of work in complex environments and large regions. Given the limited distribution of rescue resources across such large regions, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  6. Knowledge search for new product development: a multi-agent based methodology

    OpenAIRE

    2011-01-01

    Manufacturers are the leaders in developing new products to drive productivity. Higher productivity means more products from the same materials, energy, labour, and capital. New product development plays a critical role in the success of manufacturing firms, and activities in the product development process depend on the knowledge of new product development team members. Increasingly, many enterprises consider effective knowledge search to be a source of competitive advantage. Th...

  7. Entity-based Stochastic Analysis of Search Results for Query Expansion and Results Re-Ranking

    Science.gov (United States)

    2015-11-20

    Entity-based Stochastic Analysis of Search Results for Query Expansion and Results Re-Ranking. Pavlos Fafalios and Yannis Tzitzikas, Institute of... The entities appearing in a set of search results are mined dynamically and analyzed stochastically using a Random Walk method. The result of this analysis is exploited in two different contexts: for automatically re-ranking a list of results, as well as for query expansion.
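
    A stand-in for the stochastic analysis step: power iteration of PageRank-style random-walk scores over a small entity graph. The entities and edges below are invented for illustration and are not from the record:

```python
# Score entities mined from search results by a random walk: repeatedly
# redistribute probability mass along co-occurrence edges (power iteration).

def random_walk_scores(edges, d=0.85, iters=50):
    nodes = sorted({n for e in edges for n in e})
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    score = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes          # dangling node: spread everywhere
            for t in targets:
                nxt[t] += d * score[n] / len(targets)
        score = nxt
    return score

edges = [("crete", "greece"), ("heraklion", "greece"),
         ("greece", "crete"), ("santorini", "greece")]
scores = random_walk_scores(edges)
top = max(scores, key=scores.get)
print(top)  # → greece (the most central entity in this toy graph)
```

    The resulting scores could then weight candidate terms for query expansion or re-rank the documents that mention the top entities.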

  8. Characterization of single layer anti-reflective coatings for bolometer-based rare event searches

    CERN Document Server

    Hansen, E V

    2016-01-01

    A photon signal added to the existing phonon signal can powerfully reduce backgrounds for bolometer-based rare event searches. Anti-reflective coatings can significantly increase the performance of the secondary light sensing bolometer in these experiments. Coatings of SiO2, HfO2, and TiO2 on Ge and Si were fabricated and characterized at room temperature and all angles of incidence.

  9. A Neotropical Miocene Pollen Database Employing Image-Based Search and Semantic Modeling

    Directory of Open Access Journals (Sweden)

    Jing Ginger Han

    2014-08-01

    Full Text Available Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.

  10. Chemical compound navigator: a web-based chem-BLAST, chemical taxonomy-based search engine for browsing compounds.

    Science.gov (United States)

    Prasanna, M D; Vondrasek, Jiri; Wlodawer, Alexander; Rodriguez, H; Bhat, T N

    2006-06-01

    A novel technique to annotate, query, and analyze chemical compounds has been developed and is illustrated using the inhibitor data on HIV protease-inhibitor complexes. In this method, all chemical compounds are annotated in terms of standard chemical structural fragments. These standard fragments are defined using criteria such as chemical classification; structural, chemical, or functional groups; and commercial, scientific, or common names or synonyms. These fragments are then organized into a data tree based on their chemical substructures. Search engines have been developed that use this data tree to enable queries on inhibitors of HIV protease (http://xpdb.nist.gov/hivsdb/hivsdb.html). These search engines use a novel technique, the Chemical Block Layered Alignment of Substructure Technique (Chem-BLAST), to search the fragments of an inhibitor for its chemical structural neighbors. This technique for annotating and querying compounds lays the foundation for applying the Semantic Web concept to chemical compounds, allowing end users to group, sort, and search structural neighbors accurately and efficiently. During annotation, it enables the attachment of "meaning" (i.e., semantics) to data in a manner that far exceeds the current practice of associating "metadata" with data, by creating a knowledge base (or ontology) associated with compounds. The intended users of the technique are the research community and the pharmaceutical industry, for which it provides a new tool to better identify novel chemical structural neighbors to aid drug discovery.

  11. What are the personal and professional characteristics that distinguish the researchers who publish in high- and low-impact journals? A multi-national web-based survey

    Science.gov (United States)

    Paiva, Carlos Eduardo; Araujo, Raphael L C; Paiva, Bianca Sakamoto Ribeiro; de Pádua Souza, Cristiano; Cárcano, Flavio Mavignier; Costa, Marina Moreira; Serrano, Sérgio Vicente; Lima, João Paulo Nogueira

    2017-01-01

    Purpose This study identifies the personal and professional profiles of researchers with a greater potential to publish high-impact academic articles. Method The study involved conducting an international survey of journal authors using a web-based questionnaire. The survey examined personal characteristics, funding, and the perceived barriers of research quality, work-life balance, and satisfaction and motivation in relation to career. The processes of manuscript writing and journal publication were measured using an online questionnaire developed for this study. The responses were compared between the two groups of researchers using logistic regression models. Results A total of 269 questionnaires were analysed. The researchers shared some common perceptions; both groups reported that they were seeking recognition (or to be leaders in their areas) rather than financial remuneration. Furthermore, both groups identified time and funding constraints as the main obstacles to their scientific activities. The amount of time spent on research activities, having >5 graduate students under supervision, never using text-editing services prior to the publication of articles, and living in a developed, English-speaking country were the independent variables associated with a greater chance of publishing in a high-impact journal. In contrast, using one's own resources to perform studies decreased the chance of publishing in high-impact journals. Conclusions The researchers who publish in high-impact journals have distinct profiles compared with those who publish in low-impact journals. English language abilities, the actual amount of time dedicated to research and scientific writing, and the availability of financial resources are the factors associated with a successful researcher's profile. PMID:28194230

  12. Web-Based Undergraduate Chemistry Problem-Solving: The Interplay of Task Performance, Domain Knowledge and Web-Searching Strategies

    Science.gov (United States)

    She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua

    2012-01-01

    This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of Web-searching strategies students used and its correlation with task performance and domain knowledge also…

  13. Darwin and his publisher.

    Science.gov (United States)

    McClay, David

    2009-01-01

    Charles Darwin's publisher John Murray played an important, if often underrated, role in bringing his theories to the public. As their letters and publishing archives show, they had a friendly, businesslike, and successful relationship, despite fundamental scientific and religious differences between the men. In addition to publishing Darwin, Murray also published many of the critical and supportive works and reviews that Darwin's own works excited.

  14. CHINA INTERNATIONAL PUBLISHING GROUP

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The China International Publishing Group (CIPG) specializes in international communications. Its operations encompass reporting, editing, translation, publishing, printing, distribution, and the Internet. It incorporates seven publishing companies, five magazines and 19 periodicals, published in over 20 languages. The China International Book Trading Corporation, another group facet, distributes all of these to over 180 countries and

  15. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-01-01

    Full Text Available Harmony search (HS) is an emerging population-based metaheuristic algorithm inspired by the music improvisation process. The HS method has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimension complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain the proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence analysis are also studied. The experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness, and a better balance between space exploration and local exploitation on high-dimension complex optimization problems.
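
    For reference, the core moves that HS variants build on can be sketched as follows. This is plain harmony search (memory consideration, pitch adjustment, random selection), not HSTL, and all parameter values are illustrative:

```python
import random

def harmony_search(f, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=3000):
    rng = random.Random(42)
    memory = [rng.uniform(lo, hi) for _ in range(hms)]   # harmony memory
    for _ in range(iters):
        if rng.random() < hmcr:                  # harmony memory consideration
            x = rng.choice(memory)
            if rng.random() < par:               # pitch adjustment
                x += rng.uniform(-bw, bw)
        else:                                    # random selection
            x = rng.uniform(lo, hi)
        x = min(max(x, lo), hi)
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(x) < f(memory[worst]):              # replace the worst harmony
            memory[worst] = x
    return min(memory, key=f)

best = harmony_search(lambda x: x * x + 4 * x, -10, 10)  # minimum at x = -2
print(round(best, 2))  # prints a value close to -2
```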

  16. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound-feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  17. The Open Data Repository's Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Bristow, T.; Blake, D. F.; Fonda, M.; Pires, A.

    2015-12-01

    Data management and data publication are becoming increasingly important components of research workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software (http://www.opendatarepository.org) strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity. We gratefully acknowledge the support for this study by the Science-Enabling Research Activity (SERA), and NASA NNX11AP82A

  18. The Open Data Repository's Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.

    2015-01-01

    Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.

  19. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    Science.gov (United States)

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach to contrast enhancement. The generalized masking formulation uses static scale-value selection, which limits the contrast gain. In this paper, we propose Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using BrainWeb and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results as compared with other reported literature.

  20. A local search algorithm based on chromatic classes for university course timetabling problem

    Directory of Open Access Journals (Sweden)

    Velin Kralev

    2016-12-01

    Full Text Available This paper presents a study of a local search algorithm based on chromatic classes for the university course timetabling problem. Several models and approaches to resolving the problem are discussed. The main idea of the approach is to use a heuristic algorithm to determine the chromatic classes of a graph in which the events of the timetable correspond to the vertices and the set of edges represents the possible conflicts between events. The chromatic classes are then sorted according to a specific sort criterion (the total weight or the total count of events in each class), and finally the local search algorithm starts. The aim of the experiments is to determine the best criterion for sorting the chromatic classes. The results showed that the algorithm generates better solutions when the chromatic classes are sorted by the total-weight criterion.
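
    The preprocessing the abstract describes, splitting events into chromatic classes and sorting them by total weight, can be sketched as follows. The events, weights, and conflicts below are invented for the example, and the greedy coloring stands in for the record's heuristic:

```python
# Greedily partition events into chromatic classes (independent sets of the
# conflict graph), then sort the classes by total weight, heaviest first.

def chromatic_classes(events, conflicts):
    """events: {name: weight}; conflicts: set of frozenset pairs."""
    classes = []
    for e in events:
        for cls in classes:
            # Place e in the first class it conflicts with nobody in.
            if all(frozenset((e, other)) not in conflicts for other in cls):
                cls.append(e)
                break
        else:
            classes.append([e])     # no compatible class: open a new one
    return classes

events = {"math": 3, "physics": 2, "chem": 4, "bio": 1}
conflicts = {frozenset(p) for p in [("math", "physics"), ("math", "chem"),
                                    ("physics", "bio")]}
classes = chromatic_classes(events, conflicts)
classes.sort(key=lambda c: sum(events[e] for e in c), reverse=True)
print(classes)  # → [['physics', 'chem'], ['math', 'bio']]
```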

  1. The multi-copy simultaneous search methodology: a fundamental tool for structure-based drug design.

    Science.gov (United States)

    Schubert, Christian R; Stultz, Collin M

    2009-08-01

    Fragment-based ligand design approaches, such as the multi-copy simultaneous search (MCSS) methodology, have proven to be useful tools in the search for novel therapeutic compounds that bind pre-specified targets of known structure. MCSS offers a variety of advantages over more traditional high-throughput screening methods, and has been applied successfully to challenging targets. The methodology is quite general and can be used to construct functionality maps for proteins, DNA, and RNA. In this review, we describe the main aspects of the MCSS method and outline the general use of the methodology as a fundamental tool to guide the design of de novo lead compounds. We focus our discussion on the evaluation of MCSS results and the incorporation of protein flexibility into the methodology. In addition, we demonstrate on several specific examples how the information arising from the MCSS functionality maps has been successfully used to predict ligand binding to protein targets and RNA.

  2. Neural Based Tabu Search method for solving unit commitment problem with cooling-banking constraints

    Directory of Open Access Journals (Sweden)

    Rajan Asir Christober Gnanakkan Charles

    2009-01-01

    Full Text Available This paper presents a new approach to solving the short-term unit commitment problem (UCP) using a Neural Based Tabu Search (NBTS) with cooling and banking constraints. The objective is to find a generation schedule such that the total operating cost is minimized, subject to a variety of constraints; that is, to find the optimal unit commitment in the power system for the next H hours. A 7-unit utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 10, 26 and 34 units. Numerical results compare the superiority of the cost solutions obtained using the Tabu Search (TS) method against the Dynamic Programming (DP) and Lagrangian Relaxation (LR) methods in reaching proper unit commitment.

  3. Based on A* and Q-Learning Search and Rescue Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ruiyuan Fan

    2012-11-01

    Full Text Available For search and rescue robot navigation in unknown environments, a bionic self-learning algorithm based on A* and Q-Learning is put forward. The algorithm utilizes the Growing Self-Organizing Map (GSOM) to build a topological cognitive map of the environment. The heuristic A* algorithm is used for global path planning, and when the local environment changes, Q-Learning is used for local path planning. The robot can thereby acquire self-learning skills through study and training, much as a human or animal does, and find a free path from the initial state to the target state in an unknown environment. A theoretical analysis proves the validity of the method, and simulation results show that the robot obtains the navigation capability.
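
    The global-planning half of the scheme is standard A*. A minimal grid version with a Manhattan-distance heuristic is sketched below; the grid and unit step costs are illustrative, not the record's GSOM-derived map:

```python
import heapq

# A* on a 4-connected grid: cells with 1 are obstacles, moves cost 1, and the
# Manhattan distance to the goal is an admissible heuristic.

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = set()
    while open_set:
        _, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # → 6 (steps needed to go around the obstacle wall)
```

    In the record's scheme, replanning after a local map change would fall to the Q-Learning component instead of rerunning A* globally.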

  4. Towards a Complexity Theory of Randomized Search Heuristics: Ranking-Based Black-Box Complexity

    CERN Document Server

    Doerr, Benjamin

    2011-01-01

    Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field, and a useful complexity theory for such algorithms would be a big step forward. We enrich the two existing black-box complexity notions due to Wegener and other authors with the restriction that the algorithm may take into account only the relative quality of previously evaluated solutions, not their actual objective values. Many randomized search heuristics belong to this class of algorithms. We show that the new ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold.

  5. A Novel Dynamic Clustering Algorithm Based on Immune Network and Tabu Search

    Institute of Scientific and Technical Information of China (English)

    ZHONGJiang; WUZhongfu; WUKaigui; YANGQiang

    2005-01-01

    It is usually difficult to determine a reasonable number of partitions in a data set before clustering, and the problem cannot be solved by traditional clustering algorithms such as k-means or its variants. This paper proposes a novel Dynamic clustering algorithm based on the artificial immune network and tabu search (DCBIT), which optimizes the number and the locations of the clusters at the same time. The algorithm has two phases: it begins by running the immune network algorithm to find a clustering feasible solution (CFS), and then employs tabu search on the CFS to obtain the optimal cluster number and cluster centers. The probability of acquiring a CFS through the immune network algorithm is also discussed. Experimental results show that the new algorithm achieves satisfactory convergence probability and convergence speed.

  6. Case and Relation (CARE) based Page Rank Algorithm for Semantic Web Search Engines

    Directory of Open Access Journals (Sweden)

    N. Preethi

    2012-05-01

    Full Text Available Web information retrieval deals with finding relevant web pages for a given query from a collection of documents. Search engines have become the most helpful tool for obtaining useful information from the Internet. The next-generation Web architecture, represented by the Semantic Web, provides a layered architecture that allows data to be reused across applications. The proposed architecture uses a hybrid methodology named the Case and Relation (CARE) based Page Rank algorithm, which uses past problem-solving experience maintained in the case base to form best-matching relations, and then uses them to generate graphs and spanning forests that assign a relevance score to the pages.

  7. Handling Conflicts in Depth-First Search for LTL Tableau to Debug Compliance Based Languages

    Directory of Open Access Journals (Sweden)

    Francois Hantry

    2011-09-01

    Full Text Available Providing adequate tools to tackle the problem of inconsistent compliance rules is a critical research topic. This problem is of paramount importance for achieving automatic support for early declarative design and for supporting the evolution of rules in contract-based or service-based systems. In this paper we investigate the problem of extracting temporal unsatisfiable cores in order to detect the inconsistent part of a specification. We extend a conflict-driven SAT solver to obtain a new conflict-driven depth-first-search solver for temporal logic, and use it to compute LTL unsatisfiable cores without re-exploring the history of the solver.

  8. MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2013-10-01

    Full Text Available Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes by power iteration and has a high computational cost. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computation of the HITS ranking substantially reduces computing resources while keeping high accuracy, and significantly outperforms related work.
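For reference, the exact power-iteration HITS that the paper's Monte Carlo method approximates looks like this in miniature (the Monte Carlo variant replaces the exact sums with random-walk sampling; the tiny graph and names are ours):

```python
def hits(edges, n, iters=50):
    """Classical HITS by power iteration: authorities are pointed at by
    good hubs, hubs point at good authorities, with L1 normalisation."""
    hub = [1.0] * n
    auth = [0.0] * n
    for _ in range(iters):
        auth = [sum(hub[u] for u, v in edges if v == j) for j in range(n)]
        s = sum(auth) or 1.0
        auth = [x / s for x in auth]
        hub = [sum(auth[v] for u, v in edges if u == i) for i in range(n)]
        s = sum(hub) or 1.0
        hub = [x / s for x in hub]
    return hub, auth

edges = [(0, 2), (1, 2)]          # pages 0 and 1 both link to page 2
hub, auth = hits(edges, n=3)
print(max(range(3), key=lambda j: auth[j]))   # page 2 is the top authority
```

The nested sums are the expensive part on a large network, which is what motivates sampling-based approximations such as MCHITS.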

  9. A New Retrieval Model Based on TextTiling for Document Similarity Search

    Institute of Scientific and Technical Information of China (English)

    Xiao-Jun Wan; Yu-Xin Peng

    2005-01-01

    Document similarity search finds documents similar to a given query document and returns a ranked list of similar documents to users; it is widely used in many text and web systems, such as digital libraries and search engines. Traditional retrieval models, including Okapi's BM25 model and Smart's vector space model with length normalization, can handle this problem to some extent by treating the query document as a long query. In practice, the Cosine measure is considered the best model for document similarity search because of its good ability to measure similarity between two documents. In this paper, the quantitative performance of the above models is compared experimentally. Because the Cosine measure does not reflect the structural similarity between documents, a new retrieval model based on TextTiling is proposed that takes into account the subtopic structure of documents. It first splits the documents into text segments with TextTiling and calculates the similarities for different pairs of text segments in the documents; the overall similarity between the documents is then obtained by combining the segment-pair similarities with an optimal matching method. Experiments show that: 1) the popular retrieval models (Okapi's BM25 model and Smart's vector space model with length normalization) do not perform well for document similarity search; 2) the proposed model based on TextTiling is effective and outperforms the other models, including the Cosine measure; 3) the methods chosen for the three components of the proposed model are appropriate.
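The segment-pair similarity at the heart of the model can be illustrated with a bag-of-words cosine and a greedy pairing. TextTiling segmentation and the optimal matching step are simplified away here, and all names and the toy documents are ours:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two whitespace-tokenised bags of words."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def doc_similarity(segs1, segs2):
    """Greedy stand-in for the paper's optimal-matching step: pair each
    segment of the first document with its most similar segment of the
    second and average the scores (real segments would come from TextTiling)."""
    return sum(max(cosine(s, t) for t in segs2) for s in segs1) / len(segs1)

d1 = ["the cat sat on the mat", "stock prices rose sharply"]
d2 = ["a cat on a mat", "markets and stock prices rose"]
print(round(doc_similarity(d1, d2), 3))
```

Because each subtopic segment is matched separately, two documents covering the same subtopics in different proportions score higher than a single whole-document cosine would suggest.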

  10. Publishing Digitalization: a Qualitative Analysis Based on Three Publishing Companies

    Institute of Scientific and Technical Information of China (English)

    李世娟; 张鹏翼; 黄文彬

    2016-01-01

    Three publishing companies of different types were selected to study the digitalization process of publishing houses in China. The results of a qualitative analysis show that each company has a different strategy for setting up a digital publishing center, but they face similar difficulties and challenges in choosing digital publishing systems, training digital editors, and so on. Challenges and opportunities coexist: publishing houses not only need to change their mindset and explore novel profit models, but also need to investigate how to restructure the publishing workflow. Other issues, such as copyright protection, digital editor training and certification of digital editors, should be given attention as well.

  11. Etiquette in scientific publishing.

    Science.gov (United States)

    Krishnan, Vinod

    2013-10-01

    Publishing a scientific article in a journal with a high impact factor and a good reputation is considered prestigious among one's peer group and an essential achievement for career progression. In the drive to get their work published, researchers can forget, either intentionally or unintentionally, the ethics that should be followed in scientific publishing. In an environment where "publish or perish" rules the day, some authors might be tempted to bend or break rules. This special article is intended to raise awareness among orthodontic journal editors, authors, and readers about the types of scientific misconduct in the current publishing scenario and to provide insight into the ways these misconducts are managed by the Committee on Publication Ethics. Case studies are presented, and various plagiarism detection software programs used by publishing companies are briefly described.

  12. PUBLISHER'S ANNOUNCEMENT: Refereeing standards

    Science.gov (United States)

    Bender, C.; Scriven, N.

    2004-08-01

    submitting papers to J. Phys. A. In addition to the office staff, the journal has two assets of enormous value. First, there is the pool of referees. It is impossible to have an academic system based on publication of original ideas without peer review. I believe that when one submits papers for publication in journals, one assumes a moral responsibility to participate in the peer review system. A published author has an obligation to referee papers and thereby to keep the scientific quality of published work as high as possible. In general, referees' reports that are submitted to scientific journals vary in quality. Some referees reply quickly and write detailed, careful, and helpful reports; other referees write cursory reports that are not so useful. Over the years J. Phys. A has amassed an amazingly talented and sedulous group of referees. I thank the referees of the journal who have worked so hard and have contributed their time without any expectation of financial compensation. I emphasize that the office tries hard to avoid overburdening referees. Sending back a quick and detailed response does not increase the likelihood of the referee receiving another paper to evaluate. (A number of people have told me that they sit on and delay the refereeing of papers in hopes of reducing the number of papers per year that they receive to referee. The office at J. Phys. A works to make this sort of strategy unnecessary.) The second asset is the Board of Editors and the Advisory Panel. For some journals membership on the Board of Editors is a sinecure. However, the 37 members of the Board of Editors and the 50 members of the Advisory Panel of J. Phys. A have been chosen not only because they are distinguished mathematical physicists but also because of their demonstrated willingness to work hard. 
Six members of the Board of Editors are designated as Section Editors: H Nishimori, Tokyo Institute of Technology, Japan (Statistical Physics); P Grassberger, Bergische Universität GH

  13. Road and Street Centerlines, Centerlines based on newly platted subdivisions, Published in Not Provided, City of Aurora.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This dataset was produced all or in part from Hardcopy Maps information as of Not Provided. It is described as 'Centerlines based on newly platted subdivisions'. Data by this...

  14. Going digital: a guide for book publishers

    OpenAIRE

    2008-01-01

    This project report, structured as a guide, strives to inspire and assist small-to-mid-sized Canadian trade publishers to develop their digital strategies. The need for digitization in a period of transition within the publishing industry is explored, as well as the different steps to be taken to create a successful digital strategy. This guide first explores the goals and motivations of digitization, specifically looking at websites, viral marketing, book browsing and searching, and e-books....

  15. Sagace: A web-based search engine for biomedical databases in Japan

    Directory of Open Access Journals (Sweden)

    Morita Mizuki

    2012-10-01

    Full Text Available Abstract Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/.

  16. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.

  17. Pulsar timing array based search for supermassive black hole binaries in the SKA era

    CERN Document Server

    Wang, Yan

    2016-01-01

    The advent of next-generation radio telescope facilities, such as the Square Kilometer Array (SKA), will usher in an era in which a Pulsar Timing Array (PTA) based search for gravitational waves (GWs) will be able to use hundreds of well-timed millisecond pulsars rather than the few dozen in existing PTAs. A realistic assessment of the performance of such an extremely large PTA must take into account the data analysis challenge posed by an exponential increase in the parameter space volume due to the large number of so-called pulsar phase parameters. We address this problem and present such an assessment for isolated supermassive black hole binary (SMBHB) searches using an SKA era PTA containing $10^3$ pulsars. We find that an all-sky search will be able to confidently detect non-evolving sources with redshifted chirp mass of $10^{10}$ $M_\\odot$ out to a redshift of about $28$. The detection of GW signals from optically identified SMBHB candidates similar to PSO J334+01 is assured. If no SMBHB detections occur, ...

  18. Model-based Layer Estimation using a Hybrid Genetic/Gradient Search Optimization Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D; Lehman, S; Dowla, F

    2007-05-17

    A particle swarm optimization (PSO) algorithm is combined with a gradient search method in a model-based approach for extracting interface positions in a one-dimensional multilayer structure from acoustic or radar reflections. The basic approach is to predict the reflection measurement using a simulation of one-dimensional wave propagation in a multilayer structure, evaluate the error between prediction and measurement, and then update the simulation parameters to minimize the error. Gradient search methods alone fail due to the number of local minima in the error surface close to the desired global minimum. The PSO approach avoids this problem by randomly sampling the region of the error surface around the global minimum, but at the cost of a large number of evaluations of the simulator. The hybrid approach uses PSO at the beginning to locate the general area around the global minimum, then switches to the gradient search method to zero in on it. Examples of the algorithm applied to the detection of interior walls of a building from reflected ultra-wideband radar signals are shown. Other possible applications are optical inspection of coatings and ultrasonic measurement of multilayer structures.
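The two-stage idea can be sketched on a one-dimensional double-well error surface: PSO locates the basin of the global minimum, then gradient descent refines it. The coefficients, the evenly spaced initial swarm, and the toy surface are our assumptions, not the authors' settings:

```python
import random

def pso_then_gradient(f, grad, lo, hi, n=20, pso_iters=40, gd_iters=100, lr=0.01):
    """Hybrid global/local search: a PSO phase to find the right basin,
    followed by plain gradient descent to zero in on the minimum."""
    random.seed(1)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]   # evenly spaced swarm
    vs = [0.0] * n
    pbest = xs[:]                          # each particle's best position so far
    gbest = min(xs, key=f)                 # swarm-wide best position
    for _ in range(pso_iters):
        for i in range(n):
            vs[i] = (0.7 * vs[i]
                     + 1.4 * random.random() * (pbest[i] - xs[i])
                     + 1.4 * random.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    x = gbest                              # hand over to local gradient refinement
    for _ in range(gd_iters):
        x -= lr * grad(x)
    return x

# double-well toy error surface: local minimum near +1, global minimum near -1
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
grad = lambda x: 4 * x * (x * x - 1) + 0.3
x = pso_then_gradient(f, grad, -2.0, 2.0)
print(round(x, 3))   # should land near the global minimum around x = -1
```

Gradient descent started from a random point would settle in whichever basin it began in; the PSO phase is what makes the final answer reliably the global one.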

  19. Energy transmission modes based on Tabu search and particle swarm hybrid optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    LI xiang; CUI Ji-feng; QI Jian-xun; YANG Shang-dong

    2007-01-01

    In China, economic centers are far from energy storage bases, so selecting a proper energy transfer mode is significant for improving the efficiency of energy usage. To solve this problem, an optimal allocation model for energy transfer modes was proposed after an objective function for optimizing energy-use efficiency was established, and a hybrid Tabu search and particle swarm optimization algorithm was applied to energy and power transmission. Based on the above discussion, some proposals were put forward for the optimal allocation of energy transfer modes in China. Compared with three traditional methods based on regional price differences, freight rates and annual cost, the proposed method enhances the economic efficiency of energy transfer by 3.14%, 5.78% and 6.01%, respectively.

  20. High dimensional search-based software engineering: finding tradeoffs among 15 objectives for automating software refactoring using NSGA-III

    OpenAIRE

    Mkaouer, Wiem; Kessentini, Marouane; Bechikh, Slim; Deb, Kalyanmoy; Ó Cinnéide, Mel

    2014-01-01

    peer-reviewed There is a growing need for scalable search-based software engineering approaches that address software engineering problems where a large number of objectives are to be optimized. Software refactoring is one of these problems where a refactoring sequence is sought that optimizes several software metrics. Most of the existing refactoring work uses a large set of quality metrics to evaluate the software design after applying refactoring operations, but current search-based sof...

  1. Investigation of candidate data structures and search algorithms to support a knowledge based fault diagnosis system

    Science.gov (United States)

    Bosworth, Edward L., Jr.

    1987-01-01

    The focus of this research is the investigation of data structures and associated search algorithms for automated fault diagnosis of complex systems such as the Hubble Space Telescope. Such data structures and algorithms will form the basis of a more sophisticated Knowledge Based Fault Diagnosis System. As a part of the research, several prototypes were written in VAXLISP and implemented on one of the VAX-11/780's at the Marshall Space Flight Center. This report describes and gives the rationale for both the data structures and algorithms selected. A brief discussion of a user interface is also included.

  2. A method of characterizing network topology based on the breadth-first search tree

    Science.gov (United States)

    Zhou, Bin; He, Zhe; Wang, Nianxin; Wang, Bing-Hong

    2016-05-01

    A method based on the breadth-first search tree is proposed in this paper to characterize the hierarchical structure of a network. In this method, a similarity coefficient is defined to quantitatively distinguish networks and to quantitatively measure the topological stability of the network generated by a model. Applications of the method are discussed for the ER random network, the WS small-world network and the BA scale-free network. The method will be helpful for describing network topology in depth and provides a starting point for researching the topological similarity and isomorphism of networks.
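The breadth-first search tree the method starts from is simple to build, and comparing the resulting level profiles already separates topologies, as the star-versus-path toy below shows (the similarity coefficient itself is defined in the paper and not reproduced here):

```python
from collections import deque

def bfs_tree(adj, root):
    """Breadth-first search tree of a graph: parent pointers plus the list
    of nodes discovered at each depth level (the tree's hierarchy)."""
    parent = {root: None}
    levels = [[root]]
    q = deque([root])
    while q:
        nxt = []
        for _ in range(len(q)):            # consume exactly one level
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:        # first discovery fixes the parent
                    parent[v] = u
                    q.append(v)
                    nxt.append(v)
        if nxt:
            levels.append(nxt)
    return parent, levels

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
_, star_levels = bfs_tree(star, 0)
_, path_levels = bfs_tree(path, 0)
print([len(l) for l in star_levels])   # [1, 3]
print([len(l) for l in path_levels])   # [1, 1, 1, 1]
```

The same four nodes yield a shallow, wide tree for the star and a deep, narrow one for the path, which is the kind of hierarchical signature a similarity coefficient can compare.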

  3. Color Octet Electron Search Potential of the FCC Based e-p Colliders

    CERN Document Server

    Acar, Y C; Oner, B B; Sultansoy, S

    2016-01-01

    Resonant production of the color octet electron, e_{8}, at the FCC based ep colliders has been analyzed. It is shown that the e-FCC will cover a much wider region of e_{8} masses compared to the LHC. Moreover, with the highest electron beam energy, the e_{8} search potential of the e-FCC exceeds that of the FCC pp collider. If e_{8} is discovered earlier by the FCC pp collider, the e-FCC will provide very important additional information; for example, the compositeness scale can be probed up to the hundreds-of-TeV region.

  4. Query Intent Disambiguation of Keyword-Based Semantic Entity Search in Dataspaces

    Institute of Scientific and Technical Information of China (English)

    Dan Yang; De-Rong Shen; Ge Yu; Yue Kou; Tie-Zheng Nie

    2013-01-01

    Keyword query has attracted much research attention due to its simplicity and wide applications. However, the inherent ambiguity of keyword queries often leads to unsatisfactory query results. Moreover, some existing techniques for Web queries and for keyword queries in relational and XML databases cannot be directly applied to keyword queries in dataspaces. We therefore propose KeymanticES, a novel keyword-based semantic entity search mechanism for dataspaces that combines keyword query and semantic query features. We focus on the query intent disambiguation problem and propose a novel three-step approach to resolve it. Extensive experimental results show the effectiveness and correctness of our proposed approach.

  5. A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling

    DEFF Research Database (Denmark)

    Clausen, Tommy

    We consider a generalized version of the shift design problem where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.

  6. Example-Based Sequence Diagrams to Colored Petri Nets Transformation Using Heuristic Search

    Science.gov (United States)

    Kessentini, Marouane; Bouchoucha, Arbi; Sahraoui, Houari; Boukadoum, Mounir

    Dynamic UML models like sequence diagrams (SD) lack sufficient formal semantics, making it difficult to build automated tools for their analysis, simulation and validation. A common approach to circumvent the problem is to map these models to more formal representations. In this context, many works propose a rule-based approach to automatically translate SD into colored Petri nets (CPN). However, finding the rules for such SD-to-CPN transformations may be difficult, as the transformation rules are sometimes difficult to define and the produced CPN may be subject to state explosion. We propose a solution that starts from the hypothesis that examples of good transformation traces of SD-to-CPN can be useful to generate the target model. To this end, we describe an automated SD-to-CPN transformation method which finds the combination of transformation fragments that best covers the SD model, using heuristic search in a base of examples. To achieve our goal, we combine two algorithms for global and local search, namely Particle Swarm Optimization (PSO) and Simulated Annealing (SA). Our empirical results show that the new approach allows deriving the sought CPNs with at least equal performance, in terms of size and correctness, to that obtained by a transformation rule-based tool.

  7. Exploring Multidisciplinary Data Sets through Database Driven Search Capabilities and Map-Based Web Services

    Science.gov (United States)

    O'Hara, S.; Ferrini, V.; Arko, R.; Carbotte, S. M.; Leung, A.; Bonczkowski, J.; Goodwillie, A.; Ryan, W. B.; Melkonian, A. K.

    2008-12-01

    Relational databases containing geospatially referenced data enable the construction of robust data access pathways that can be customized to suit the needs of a diverse user community. Web-based search capabilities driven by radio buttons and pull-down menus can be generated on-the-fly leveraging the power of the relational database and providing specialists a means of discovering specific data and data sets. While these data access pathways are sufficient for many scientists, map-based data exploration can also be an effective means of data discovery and integration by allowing users to rapidly assess the spatial co- registration of several data types. We present a summary of data access tools currently provided by the Marine Geoscience Data System (www.marine-geo.org) that are intended to serve a diverse community of users and promote data integration. Basic search capabilities allow users to discover data based on data type, device type, geographic region, research program, expedition parameters, personnel and references. In addition, web services are used to create database driven map interfaces that provide live access to metadata and data files.

  8. An Experiment and Detection Scheme for Cavity-Based Light Cold Dark Matter Particle Searches

    Directory of Open Access Journals (Sweden)

    Masroor H. S. Bukhari

    2017-01-01

    Full Text Available A resonance detection scheme and some useful ideas for cavity-based searches of light cold dark matter particles (such as axions) are presented, as an effort to aid the ongoing endeavors in this direction as well as future experiments, especially the possible development of a table-top experiment. The scheme is based on our idea of a resonant detector, incorporating an integrated tunnel diode (TD) and GaAs HEMT/HFET (High-Electron Mobility Transistor/Heterogeneous FET) transistor amplifier, weakly coupled to a cavity in a strong transverse magnetic field. The TD-amplifier combination is suggested as a sensitive and simple technique to facilitate resonance detection within the cavity while maintaining excellent noise performance, whereas our proposed Halbach magnet array could serve as a low-noise and permanent solution replacing the conventional electromagnet scheme. We present some preliminary test results which demonstrate resonance detection from simulated test signals in a small optimal axion mass range with superior signal-to-noise ratios (SNR). Our suggested design also contains an overview of a simpler on-resonance dc signal read-out scheme replacing the complicated heterodyne read-out. We believe that all these factors and our propositions could improve, or at least simplify, resonance detection and read-out in cavity-based DM particle searches (and other spectroscopy applications) and reduce the complications and associated costs, in addition to reducing electromagnetic interference and background.

  9. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    Science.gov (United States)

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher for the predicate-based approach, which provides more precise searching than keywords, laying the foundation for rich and sophisticated information search.

  10. Speeding up tandem mass spectrometry-based database searching by longest common prefix

    Directory of Open Access Journals (Sweden)

    Wang Le-Heng

    2010-11-01

    Full Text Available Abstract Background Tandem mass spectrometry-based database searching has become an important technology for peptide and protein identification. One of the key challenges in database searching is the remarkable increase in computational demand, brought about by the expansion of protein databases, semi- or non-specific enzymatic digestion, post-translational modifications and other factors. Some software tools use peptide indexing to accelerate processing; however, peptide indexing requires a large amount of time and space for construction, especially for non-specific digestion, and it is not flexible to use. Results We developed an algorithm based on the longest common prefix (ABLCP) to efficiently organize a protein sequence database. The longest common prefix is a data structure that is always coupled to the suffix array. It eliminates redundant candidate peptides in databases and reduces the corresponding peptide-spectrum matching times, thereby decreasing the identification time. The algorithm is based on a property of the longest common prefix; enzymatic digestion poses a challenge to this property, but some adjustments can be made to the algorithm to ensure that no candidate peptides are omitted. Compared with peptide indexing, ABLCP requires much less time and space for construction and is subject to fewer restrictions. Conclusions The ABLCP algorithm can help to improve data analysis efficiency. A software tool implementing this algorithm is available at http://pfind.ict.ac.cn/pfind2dot5/index.htm
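The core trick can be illustrated with a toy sketch: after sorting, candidate peptides sharing a long common prefix sit next to each other, so a single linear pass over neighbour pairs finds the redundancy that a longest-common-prefix approach exploits. The threshold k, the function names and the sample peptides are ours, and the real suffix-array machinery is omitted:

```python
def lcp(a, b):
    """Length of the longest common prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def dedup_by_prefix(peptides, k):
    """After sorting, peptides sharing a prefix of length >= k are adjacent,
    so one pass over neighbour pairs flags them without all-pairs comparison."""
    peptides = sorted(set(peptides))
    kept, skipped = [peptides[0]], []
    for prev, cur in zip(peptides, peptides[1:]):
        (skipped if lcp(prev, cur) >= k else kept).append(cur)
    return kept, skipped

peps = ["PEPTIDE", "PEPTIDER", "PEPSIN", "KINASE"]
kept, skipped = dedup_by_prefix(peps, k=5)
print(kept, skipped)
```

In the real algorithm the shared prefixes let peptide-spectrum scoring over a common prefix be reused rather than recomputed, which is where the identification-time saving comes from.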

  11. Publishing studies: what else?

    Directory of Open Access Journals (Sweden)

    Bertrand Legendre

    2015-07-01

    Full Text Available This paper intends to reposition "publishing studies" within the long process that runs from the beginnings of book history to current research on cultural industries. It raises questions about interdisciplinarity and the possibility of considering publishing independently of other sectors of media and cultural offerings. Publishing is now included in a large range of industries and, at the same time, analyses tend to become more and more segmented according to production sectors and scientific fields. In addition to the problems this double movement creates from the professional point of view, it requires a questioning of the concept of "publishing studies".

  12. The ship-borne infrared searching and tracking system based on the inertial platform

    Science.gov (United States)

    Li, Yan; Zhang, Haibo

    2011-08-01

    In modern electronic warfare, a radar system that is jammed or forced into a half-silent state suffers a severe loss of guidance precision, so equipment that depends on electronic guidance may fail to strike incoming targets accurately. Electro-optical devices can compensate for this weakness, but when interference occurs during radar guidance, and especially when the electro-optical equipment is disturbed by roll, pitch, and yaw rotation, the target can remain outside the field of view of the electro-optical devices for a long time. The infrared equipment therefore cannot exploit its advantages, and the weapon-control system cannot "reverse bring" a missile onto the incoming target. A conventional ship-borne infrared system is thus unable to acquire and track an incoming target quickly, and its electro-optical countermeasure capability declines heavily. Here we provide a new control algorithm for semi-automatic searching and infrared tracking based on an inertial navigation platform, which is now working well in our XX infrared electro-optical searching and tracking system. The algorithm has two main steps. First, the manual mode switches to auto-searching when the guidance deviation exceeds the current scene during radar leading. Second, when the contrast of the target in the search scene satisfies the image pick-up threshold, the speed computed with a constant-acceleration (CA) model least-squares method is fed back to the speed loop, and this is combined with the infrared information to close the tracking loop of the infrared electro-optical system. The algorithm was verified experimentally: with the algorithm, the target capture distance is 22.3 kilometers under a large lead deviation, whereas without it the capture distance drops to 12 kilometers.
The algorithm improves the infrared electro-optical countermeasure capability and reduces the target capture…
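
The CA-model least-squares speed estimate mentioned in the abstract can be illustrated with a short sketch (the sample values and filter details below are invented for illustration, not taken from the paper): fit position samples to a constant-acceleration model and extrapolate the velocity to the newest measurement, which would then feed the speed loop.

```python
import numpy as np

def ca_velocity_estimate(t, z):
    """Least-squares fit of the CA model z = p0 + v*t + 0.5*a*t^2,
    returning the velocity extrapolated to the latest sample time."""
    H = np.column_stack([np.ones_like(t), t, 0.5 * t ** 2])
    p0, v, a = np.linalg.lstsq(H, z, rcond=None)[0]
    return v + a * t[-1]

# Hypothetical angular position samples (degrees) over 1 second.
t = np.linspace(0.0, 1.0, 11)
z = 3.0 + 2.0 * t + 0.5 * 4.0 * t ** 2   # true v = 2 deg/s, a = 4 deg/s^2
v_now = ca_velocity_estimate(t, z)       # velocity estimate at t = 1 s
```

On noiseless data the fit is exact, so `v_now` recovers v + a·t = 6 deg/s; with noisy measurements the least-squares fit smooths the estimate.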

  13. Ligand-Based Virtual Screening in a Search for Novel Anti-HIV-1 Chemotypes.

    Science.gov (United States)

    Kurczyk, Agata; Warszycki, Dawid; Musiol, Robert; Kafel, Rafał; Bojarski, Andrzej J; Polanski, Jaroslaw

    2015-10-26

    In a search for new anti-HIV-1 chemotypes, we developed a multistep ligand-based virtual screening (VS) protocol combining machine learning (ML) methods with the privileged structures (PS) concept. In its learning step, the VS protocol was based on HIV integrase (IN) inhibitors fetched from the ChEMBL database. The performance of various ML methods and of the PS weighting scheme was evaluated and applied as VS filtering criteria. Finally, a database of 1.5 million commercially available compounds was virtually screened using a multistep ligand-based cascade, and 13 selected unique structures were tested by measuring the inhibition of HIV replication in infected cells. This approach resulted in the discovery of two novel chemotypes with moderate antiretroviral activity, which, together with their topological diversity, makes them good candidates as lead structures for future optimization.

  14. THE TYPES OF PUBLISHING SLOGANS

    Directory of Open Access Journals (Sweden)

    Ryzhov Konstantin Germanovich

    2015-03-01

    The author of the article focuses on publishing slogans posted on the official websites of 100 present-day Russian publishing houses, which have not yet been studied in the specialized literature. Based on the results of his analysis, and taking current scientific views on the classification of slogans into account, the author has developed his own classification of publishing slogans. The examined items are classified as autonomous or text-dependent according to their relationship to an advertising text; marketable, corporative, or mixed according to the presentation subject; rational, emotional, or complex depending on the method of influence upon the recipient; slogan-presentation, slogan-assurance, slogan-identifier, slogan-appraisal, or slogan-appeal depending on the communicative strategy; slogans consisting of one sentence or of two or more sentences; and Russian or foreign ones. The analysis of all kinds of slogans in the material allowed the author to determine the dominant features of the Russian publishing slogan, which is a sentence autonomous in relation to the advertising text: it presents the publishing output, influences the recipient emotionally, actualizes the communicative strategy of presenting the publishing house's distinguishing features, gives assurance to the target audience, and distinguishes the advertised subject from its competitors.

  15. The Future of Scholarly Journal Publishing.

    Science.gov (United States)

    Oppenheim, Charles; Greenhalgh, Clare; Rowland, Fytton

    2000-01-01

    Surveys the recent literature on scholarly publishing and its conversion to the electronic medium. Presents results of a questionnaire survey of the United Kingdom-based scholarly publishing industry. Results suggest publishers are moving quickly towards use of the Internet as a major medium for distribution, though they do not expect an early…

  16. Trade Publishing: A Report from the Front.

    Science.gov (United States)

    Fister, Barbara

    2001-01-01

    Reports on the current condition of trade publishing and its future prospects based on interviews with editors, publishers, agents, and others. Discusses academic libraries and the future of trade publishing, including questions relating to electronic books, intellectual property, and social and economic benefits of sharing information…

  17. Metabolic Pathways Associated with Kimchi, a Traditional Korean Food, Based on In Silico Modeling of Published Data.

    Science.gov (United States)

    Shin, Ga Hee; Kang, Byeong-Chul; Jang, Dai Ja

    2016-12-01

    Kimchi is a traditional Korean food prepared by fermenting vegetables, such as Chinese cabbage and radishes, which are seasoned with various ingredients, including red pepper powder, garlic, ginger, green onion, fermented seafood (Jeotgal), and salt. The various unique microorganisms and bioactive components in kimchi show antioxidant activity and have been associated with an enhanced immune response, as well as anti-cancer and anti-diabetic effects. Red pepper inhibits decay due to microorganisms and prevents food from spoiling. The vast amount of biological information generated by academic and industrial research groups is reflected in a rapidly growing body of scientific literature and expanding data resources. However, the genome, biological pathway, and related disease data are insufficient to explain the health benefits of kimchi because of the varied and heterogeneous data types. Therefore, we have constructed an appropriate semantic data model based on an integrated food knowledge database and analyzed the functional and biological processes associated with kimchi in silico. This complex semantic network of several entities and connections was generalized to answer complex questions, and we demonstrated how specific disease pathways are related to kimchi consumption.

  18. Metabolic Pathways Associated with Kimchi, a Traditional Korean Food, Based on In Silico Modeling of Published Data

    Science.gov (United States)

    Shin, Ga Hee; Kang, Byeong-Chul

    2016-01-01

    Kimchi is a traditional Korean food prepared by fermenting vegetables, such as Chinese cabbage and radishes, which are seasoned with various ingredients, including red pepper powder, garlic, ginger, green onion, fermented seafood (Jeotgal), and salt. The various unique microorganisms and bioactive components in kimchi show antioxidant activity and have been associated with an enhanced immune response, as well as anti-cancer and anti-diabetic effects. Red pepper inhibits decay due to microorganisms and prevents food from spoiling. The vast amount of biological information generated by academic and industrial research groups is reflected in a rapidly growing body of scientific literature and expanding data resources. However, the genome, biological pathway, and related disease data are insufficient to explain the health benefits of kimchi because of the varied and heterogeneous data types. Therefore, we have constructed an appropriate semantic data model based on an integrated food knowledge database and analyzed the functional and biological processes associated with kimchi in silico. This complex semantic network of several entities and connections was generalized to answer complex questions, and we demonstrated how specific disease pathways are related to kimchi consumption. PMID:28154515

  19. Publishing for Impact

    NARCIS (Netherlands)

    Gerritsma, W.

    2015-01-01

    The starting point of my presentation is that you have carried out the most valuable, relevant and exciting research. This presentation is to point out to you some publishing tips that should be part of your publishing strategy. My goal is to make you think about a publication strategy. Your publica…

  20. PublisherPartners webshop

    OpenAIRE

    Piferrer Torres, Enric

    2013-01-01

    Project carried out in collaboration with Fontys University of Applied Sciences and the company PublisherPartners. The main goal of the project was to build a website where the company PublisherPartners could sell and offer its products online to its customers.

  1. International Astronomical Search Collaboration: An Online Student-Based Discovery Program in Astronomy (Invited)

    Science.gov (United States)

    Pennypacker, C.; Miller, P.

    2009-12-01

    The past 15 years have seen the development of affordable small telescopes, advanced digital cameras, high-speed Internet access, and widely available image analysis software. With these tools it is possible to provide student programs in which participants make original astronomical discoveries. High-school-aged students, and even younger ones, have discovered Main Belt asteroids (MBAs), near-Earth objects (NEOs), comets, supernovae, and Kuiper Belt objects (KBOs). Student-based discovery is a truly innovative way to generate enthusiasm for learning science. The International Astronomical Search Collaboration (IASC = “Isaac”) is an online program in which high school and college students make original MBA discoveries and important NEO observations. MBA discoveries are reported to the Minor Planet Center (Harvard) and the International Astronomical Union. The NEO observations are included as part of the NASA Near-Earth Object Program (JPL). Provided at no cost to participating schools, IASC is centered at Hardin-Simmons University (Abilene, TX). It is a collaboration of the University, the Lawrence Hall of Science (University of California, Berkeley), the Astronomical Research Institute (ARI; Charleston, IL), the Global Hands-On Universe Association (Portugal), and Astrometrica (Austria). Started in Fall 2006, IASC has reached 135 schools in 14 countries. There are 9 campaigns per year, each with 15 schools and lasting 45 days. Students have discovered 150 MBAs and made >1,000 NEO observations. One notable discovery was 2009 BD81, discovered by two high school teachers and a graduate student at the Bulgarian Academy of Science. This object, about the size of 3 football fields, crosses Earth’s orbit and poses a serious impact risk. Each night with clear skies and no Moon, the ARI Observatory uses its 24" and 32" prime-focus telescopes to take images along the ecliptic. Three images are taken of the same field of view (FOV) over a period of 30 minutes. These are bundled together and placed online at…

  2. Efficient Transmit Beamspace Design for Search-Free Based DOA Estimation in MIMO Radar

    Science.gov (United States)

    Khabbazibasmenj, Arash; Hassanien, Aboulnasr; Vorobyov, Sergiy A.; Morency, Matthew W.

    2014-03-01

    In this paper, we address the problem of transmit beamspace design for multiple-input multiple-output (MIMO) radar with colocated antennas in application to direction-of-arrival (DOA) estimation. A new method for designing the transmit beamspace matrix that enables the use of search-free DOA estimation techniques at the receiver is introduced. The essence of the proposed method is to design the transmit beamspace matrix based on minimizing the difference between a desired transmit beampattern and the actual one under the constraint of uniform power distribution across the transmit array elements. The desired transmit beampattern can be of arbitrary shape and is allowed to consist of one or more spatial sectors. The number of transmit waveforms is even but otherwise arbitrary. To allow for simple search-free DOA estimation algorithms at the receive array, the rotational invariance property is established at the transmit array by imposing a specific structure on the beamspace matrix. Semi-definite relaxation is used to transform the proposed formulation into a convex problem that can be solved efficiently. We also propose a spatial-division based design (SDD) by dividing the spatial domain into several subsectors and assigning a subset of the transmit beams to each subsector. The transmit beams associated with each subsector are designed separately. Simulation results demonstrate the improvement in the DOA estimation performance offered by using the proposed joint and SDD transmit beamspace design methods as compared to the traditional MIMO radar technique.
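
The core of the design problem above, matching a desired sector beampattern, can be illustrated in a heavily simplified form: an unconstrained least-squares fit of a single transmit beam for a hypothetical 10-element uniform linear array. The paper's actual method additionally enforces uniform power across elements and solves a semidefinite relaxation, both of which this sketch omits.

```python
import numpy as np

def steering_matrix(n_elements, angles_deg, spacing=0.5):
    """ULA steering vectors (one column per angle), element spacing in wavelengths."""
    angles = np.deg2rad(angles_deg)
    n = np.arange(n_elements)[:, None]
    return np.exp(2j * np.pi * spacing * n * np.sin(angles)[None, :])

n_elements = 10
grid = np.linspace(-90, 90, 181)          # 1-degree angular grid
A = steering_matrix(n_elements, grid)

# Desired beampattern: unit gain in the sector [-20, 20] degrees, zero elsewhere.
d = ((grid >= -20) & (grid <= 20)).astype(float)

# Unconstrained least squares: minimize ||A^H w - d|| over the weight vector w.
w, *_ = np.linalg.lstsq(A.conj().T, d, rcond=None)
pattern = np.abs(A.conj().T @ w)          # realized beampattern magnitude
```

Even this crude fit concentrates energy in the desired sector; the constrained SDP formulation in the paper shapes the pattern while keeping per-element transmit power uniform.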

  3. Turn-Based War Chess Model and Its Search Algorithm per Turn

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2016-01-01

    War chess gaming has so far received insufficient attention but is a significant component of turn-based strategy (TBS) games, and it is studied in this paper. First, a common game model is proposed covering the various existing war chess types. Based on the model, we propose a theoretical frame involving combinatorial optimization on the one hand and game tree search on the other. We also discuss a key problem, namely, that the branching factor of each turn in the game tree is huge. We then propose two algorithms for searching within one turn to address this problem: (1) enumeration by order; (2) enumeration by recursion. The main difference between the two is the permutation method used: the former uses the dictionary-sequence method, while the latter uses the recursive permutation method. Finally, we prove that both algorithms are optimal, and we analyze the difference between their efficiencies. An important factor is the total number of expansions each unit makes over its reachable positions; the conclusion is stated in terms of this factor: enumeration by recursion is better than enumeration by order in all situations.
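
The two one-turn enumeration schemes contrasted in the abstract can be sketched minimally in Python (the unit names are hypothetical): dictionary-sequence enumeration via `itertools.permutations`, which yields permutations in lexicographic order of the input, versus an explicit recursive permutation generator that fixes each unit first and recurses on the rest.

```python
from itertools import permutations

def enumerate_by_order(units):
    """Dictionary-sequence enumeration: lexicographic permutations of the units."""
    return list(permutations(units))

def enumerate_by_recursion(units):
    """Recursive enumeration: fix each unit in turn, permute the remaining units."""
    if len(units) <= 1:
        return [tuple(units)]
    result = []
    for i, unit in enumerate(units):
        rest = units[:i] + units[i + 1:]
        for tail in enumerate_by_recursion(rest):
            result.append((unit,) + tail)
    return result

orders = enumerate_by_order(["archer", "knight", "mage"])
recursive = enumerate_by_recursion(["archer", "knight", "mage"])
```

Both produce the same 3! = 6 move orderings; the paper's efficiency comparison concerns how the two traversals expand units, which this toy sketch does not model.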

  4. EMBANKS: Towards Disk Based Algorithms For Keyword-Search In Structured Databases

    CERN Document Server

    Gupta, Nitin

    2011-01-01

    In recent years, there has been much interest in keyword querying of relational databases, and a variety of systems such as DBXplorer [ACD02], Discover [HP02], and ObjectRank [BHP04] have been proposed. Another such system is BANKS, which enables data and schema browsing together with keyword-based search for relational databases. It models tuples as nodes in a graph, connected by links induced by foreign-key and other relationships. The size of the database graph that BANKS uses is proportional to the sum of the number of nodes and edges in the graph. Systems such as SPIN, which search over Personal Information Networks and use BANKS as the backend, maintain a lot of information about the users' data. Since these systems run on user workstations, which have other demands on memory, such heavy memory use is unreasonable and should be avoided where possible. To alleviate this problem, we introduce EMBANKS (an acronym for External Memory BANKS), a framework for an optimized disk-based BANKS sy...

  5. Ovid MEDLINE Instruction can be Evaluated Using a Validated Search Assessment Tool. A Review of: Rana, G. K., Bradley, D. R., Hamstra, S. J., Ross, P. T., Schumacher, R. E., Frohna, J. G., & Lypson, M. L. (2011). A validated search assessment tool: Assessing practice-based learning and improvement in a residency program. Journal of the Medical Library Association, 99(1), 77-81. doi:10.3163/1536-5050.99.1.013

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-01-01

    Objective – To determine the construct validity of a search assessment instrument that is used to evaluate search strategies in Ovid MEDLINE. Design – Cross-sectional, cohort study. Setting – The Academic Medical Center of the University of Michigan. Subjects – All 22 first-year residents in the Department of Pediatrics in 2004 (cohort 1); 10 senior pediatric residents in 2005 (cohort 2); and 9 faculty members who taught evidence-based medicine (EBM) and published on EBM topics. Methods – Two methods were employed to determine whether the University of Michigan MEDLINE Search Assessment Instrument (UMMSA) could show differences between searchers' construction of a MEDLINE search strategy. The first method tested the search skills of all 22 incoming pediatrics residents (cohort 1) after they received MEDLINE training in 2004, and again upon graduation in 2007. Only 15 of these residents were tested upon graduation; seven were either no longer in the residency program or had left the institution soon after graduation. The search test asked study participants to read a clinical scenario, identify the search question in the scenario, and perform an Ovid MEDLINE search. Two librarians scored the blinded search strategies. The second method compared the scores of the 22 residents with the scores of ten senior residents (cohort 2) and nine faculty volunteers. Unlike the first cohort, the ten senior residents had not received any MEDLINE training. The faculty members' search strategies were used as the gold-standard comparison for scoring the search skills of the two cohorts. Main Results – The search strategy scores of the 22 first-year residents, who received training, improved from 2004 to 2007 (mean improvement: 51.7 to 78.7; t(14)=5.43, P…). Conclusion – According to the authors, “the results of this study provide evidence for the validity of an instrument to evaluate MEDLINE search strategies” (p. 81), since the instrument under…

  6. Semantic snippet construction for search engine results based on segment evaluation

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The result listing from search engines includes a link and a snippet from the web page for each result item. The snippet in the result listing plays a vital role in assisting the user's decision to click on it. This paper proposes a novel approach to constructing snippets based on a semantic evaluation of the segments in the page. The target segment(s) is/are identified by applying a model that evaluates the segments present in the page and selecting the segments with the top scores. The proposed model makes it easier for the user to judge whether to click on a result item, since the snippet is constructed semantically, after a critical evaluation based on multiple factors. A prototype implementation of the proposed model provides empirical validation.
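
The segment-scoring idea above can be sketched as: score every segment of the page against the query, rank, and join the top segments into the snippet. The scoring function below is invented for illustration (simple term coverage with a mild brevity bonus); the paper's actual semantic model uses multiple factors not shown here.

```python
def score_segment(segment, query_terms):
    """Toy semantic score: query-term coverage, mildly favoring shorter segments."""
    words = segment.lower().replace(".", " ").replace(",", " ").split()
    hits = sum(1 for term in query_terms if term.lower() in words)
    return (hits / len(query_terms)) / (1.0 + len(words) / 50.0)

def build_snippet(segments, query_terms, top_k=2):
    """Rank the page's segments and join the best ones into a snippet."""
    ranked = sorted(segments, key=lambda s: score_segment(s, query_terms), reverse=True)
    return " … ".join(ranked[:top_k])

# Hypothetical segments of one web page.
page_segments = [
    "Contact us for pricing details.",
    "Search engines list a link and a snippet for each result.",
    "Snippet quality strongly influences which result a user clicks.",
]
snippet = build_snippet(page_segments, ["snippet", "result"])
```

Here the navigational "Contact us" segment scores zero and is excluded, so the snippet is built only from the query-relevant segments.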

  7. Population Scalability Analysis of Abstract Population-based Random Search: Spectral Radius

    CERN Document Server

    He, Jun

    2011-01-01

    Population-based Random Search (RS) algorithms, such as Evolutionary Algorithms (EAs), Ant Colony Optimization (ACO), Artificial Immune Systems (AIS), and Particle Swarm Optimization (PSO), have been widely applied to solving discrete optimization problems. A common belief in this area is that the performance of a population-based RS algorithm may improve as its population size increases. The term population scalability is used to describe the relationship between the performance of RS algorithms and their population size. Although understanding population scalability is important for designing efficient RS algorithms, few theoretical results about it exist so far, and most of those are case studies, e.g., simple RS algorithms for simple problems. In contrast, this paper aims to provide a general study. A large family of RS algorithms, called ARS, is investigated. The main contribution of this paper is to introduce a novel appro...
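
The spectral radius named in the title is the largest absolute eigenvalue of a matrix, e.g. of the transition matrix when a population-based search is modelled as a Markov chain. A minimal numpy sketch with a hypothetical 3-state column-stochastic transition matrix:

```python
import numpy as np

def spectral_radius(matrix):
    """Largest absolute eigenvalue of a square matrix."""
    return float(max(abs(np.linalg.eigvals(matrix))))

# Hypothetical column-stochastic transition matrix of a 3-state search process
# (each column sums to 1, so its spectral radius is exactly 1).
P = np.array([
    [0.9, 0.2, 0.0],
    [0.1, 0.7, 0.3],
    [0.0, 0.1, 0.7],
])
rho = spectral_radius(P)
```

For a stochastic matrix the spectral radius is 1; scalability analyses of the kind the paper describes typically study subdominant eigenvalues or radii of sub-matrices, which the same helper computes.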

  8. Cuckoo search based optimal mask generation for noise suppression and enhancement of speech signal

    Directory of Open Access Journals (Sweden)

    Anil Garg

    2015-07-01

    In this paper, an effective noise suppression technique for the enhancement of speech signals using an optimized mask is proposed. Initially, the noisy speech signal is broken down into various time–frequency (TF) units and features are extracted by computing the Amplitude Magnitude Spectrogram (AMS). The signals are then classified based on quality ratio into different classes to generate the initial set of solutions. Subsequently, the optimal mask for each class is generated using the Cuckoo search algorithm. In the waveform synthesis stage, filtered waveforms are windowed, multiplied by the optimal mask value, and summed up to obtain the enhanced target signal. The proposed technique was evaluated on various datasets, and its performance was compared with previous techniques using SNR. The results obtained prove the effectiveness of the proposed technique and its ability to suppress noise and enhance the speech signal.
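
The Cuckoo search optimizer used here for mask generation can be sketched in its generic form. The sketch below minimizes a simple sphere function rather than a mask-quality objective, and all parameter values (nest count, abandonment fraction, step scale) are illustrative, not the paper's settings.

```python
import math
import numpy as np

def levy_step(rng, beta=1.5, size=2):
    """Mantegna's algorithm for Levy-flight step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, dim=2, n_nests=15, iters=200, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fitness = np.array([objective(n) for n in nests])
    for _ in range(iters):
        best = nests[fitness.argmin()]
        for i in range(n_nests):
            # Levy-flight move biased toward the current best nest.
            candidate = nests[i] + 0.01 * levy_step(rng, size=dim) * (nests[i] - best)
            f = objective(candidate)
            if f < fitness[i]:                 # greedy replacement
                nests[i], fitness[i] = candidate, f
        # Abandon a fraction pa of the worst nests and re-seed them randomly.
        n_abandon = int(pa * n_nests)
        worst = fitness.argsort()[-n_abandon:]
        nests[worst] = rng.uniform(-5, 5, (n_abandon, dim))
        fitness[worst] = [objective(n) for n in nests[worst]]
    i_best = fitness.argmin()
    return nests[i_best], float(fitness[i_best])

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = cuckoo_search(sphere)
```

In the paper's setting the objective would score a candidate TF mask (e.g. by SNR of the resynthesized speech) instead of the sphere function.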

  9. Infodemiology of status epilepticus: A systematic validation of the Google Trends-based search queries.

    Science.gov (United States)

    Bragazzi, Nicola Luigi; Bacigaluppi, Susanna; Robba, Chiara; Nardone, Raffaele; Trinka, Eugen; Brigo, Francesco

    2016-02-01

    People increasingly use Google to look for health-related information. We previously demonstrated that in English-speaking countries most people use this search engine to obtain information on status epilepticus (SE) definition, types/subtypes, and treatment. Here, we aimed to provide a quantitative analysis of SE-related web queries. This analysis represents an advancement with respect to what was previously discussed, in that the Google Trends (GT) algorithm has been further refined and correlational analyses have been carried out to validate the GT-based query volumes. Google Trends-based SE-related query volumes were well correlated with information concerning causes and pharmacological and nonpharmacological treatments. Google Trends can provide both researchers and clinicians with data on realities and contexts that are generally overlooked and underexplored by classic epidemiology. In this way, GT can foster new epidemiological studies in the field and can complement traditional epidemiological tools.
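
The correlational validation of query volumes described above amounts to computing a correlation coefficient between two time series, e.g. monthly GT relative query volume against an epidemiological reference series. A minimal sketch (all numbers below are invented for illustration):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical monthly series: GT relative query volume vs. recorded case counts.
gt_volume = np.array([42, 55, 61, 48, 70, 66, 59, 73])
case_counts = np.array([12, 15, 18, 13, 21, 19, 16, 22])
r = pearson_r(gt_volume, case_counts)
```

A high r on such aligned series is what "well correlated" means in the abstract; a real validation would also report significance and control for seasonality.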

  10. FloPSy - Search-Based Floating Point Constraint Solving for Symbolic Execution

    Science.gov (United States)

    Lakhotia, Kiran; Tillmann, Nikolai; Harman, Mark; de Halleux, Jonathan

    Recently there has been an upsurge of interest in both Search-Based Software Testing (SBST) and Dynamic Symbolic Execution (DSE). The two approaches have complementary strengths and weaknesses, making it a natural choice to explore the degree to which the strengths of one can be exploited to offset the weaknesses of the other. This paper introduces an augmented version of DSE that uses an SBST-based approach to handling floating-point computations, which are known to be problematic for vanilla DSE. The approach has been implemented as a plug-in for the Microsoft Pex DSE testing tool. The paper presents results from both standard evaluation benchmarks and two open-source programs.
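
The search-based handling of floating-point constraints can be sketched as pattern search minimizing a branch distance, in the spirit of the alternating-variable method used in SBST. The constraint (x² = 2) and the step schedule below are illustrative, not Pex's or FloPSy's actual implementation.

```python
def branch_distance(x):
    """Distance to satisfying the floating-point path condition x*x == 2.0."""
    return abs(x * x - 2.0)

def alternating_variable_search(x0, step=1.0, min_step=1e-12):
    """AVM-style pattern search: probe both directions, accelerate on success,
    shrink the step when stuck, until the step underflows min_step."""
    x, best = x0, branch_distance(x0)
    while step > min_step:
        improved = False
        for direction in (+1.0, -1.0):
            candidate = x + direction * step
            d = branch_distance(candidate)
            if d < best:
                x, best = candidate, d
                improved = True
                break
        if improved:
            step *= 2.0      # accelerate while improving
        else:
            step /= 4.0      # shrink when no neighbor improves
    return x, best

x, dist = alternating_variable_search(0.0)
```

Starting from x = 0 the search converges to x ≈ √2, i.e. it finds a concrete floating-point input satisfying the branch condition that a vanilla constraint solver treating floats symbolically might fail on.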

  11. Creative Engineering Based Education with Autonomous Robots Considering Job Search Support

    Science.gov (United States)

    Takezawa, Satoshi; Nagamatsu, Masao; Takashima, Akihiko; Nakamura, Kaeko; Ohtake, Hideo; Yoshida, Kanou

    The Robotics Course in our Mechanical Systems Engineering Department offers “Robotics Exercise Lessons” as one of its Problem-Solution Based Specialized Subjects. These are intended to motivate students' learning, help them acquire fundamental knowledge and skills in mechanical engineering, and improve their understanding of basic robotics theory. Our current curriculum was established to accomplish this objective based on two pieces of research in 2005: an evaluation questionnaire on the education of our Mechanical Systems Engineering Department given to graduates, and a survey of the kind of human resources companies are seeking and their expectations for our department. This paper reports the academic results and reflections on job search support in recent years, as inherited and developed from the previous curriculum.

  12. Improved segmentation of abnormal cervical nuclei using a graph-search based approach

    Science.gov (United States)

    Zhang, Ling; Liu, Shaoxiong; Wang, Tianfu; Chen, Siping; Sonka, Milan

    2015-03-01

    Reliable segmentation of abnormal nuclei in cervical cytology is of paramount importance in automation-assisted screening techniques. This paper presents a general method for improving the segmentation of abnormal nuclei using a graph-search based approach. More specifically, the proposed method focuses on the improvement of the coarse (initial) segmentation. The improvement relies on a transform that maps round-like borders in the Cartesian coordinate system into lines in the polar coordinate system. Costs consisting of nucleus-specific edge and region information are assigned to the nodes. The globally optimal path in the constructed graph is then identified by dynamic programming. We have tested the proposed method on abnormal nuclei from two cervical cell image datasets, Herlev and H&E-stained liquid-based cytology (HELBC), and comparative experiments with recent state-of-the-art approaches demonstrate the superior performance of the proposed method.

  13. Aggregate and Mineral Resources, parcel data base attribute; property code, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Aggregate and Mineral Resources dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006....

  14. Television Transmitter Locations, parcel data base attribute; building type, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Television Transmitter Locations dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of...

  15. Radio Transmitters and Tower Locations, parcel data base attribute; building type, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Radio Transmitters and Tower Locations dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as...

  16. Gaming Facilities, facility data base; casino/hotel, Published in 2006, 1:600 (1in=50ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Gaming Facilities dataset, published at 1:600 (1in=50ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It is described...

  17. Generalized Voronoi Partition Based Multi-Agent Search using Heterogeneous Sensors

    CERN Document Server

    Guruprasad, K R

    2009-01-01

    In this paper we propose search strategies for heterogeneous multi-agent systems. Multiple agents, equipped with communication devices, computational capability, and sensors of heterogeneous capabilities, are deployed in the search space to gather information such as the presence of targets. Lack of information about the search space is modeled as an uncertainty density distribution, and the uncertainty is reduced as the search agents collect information. We propose a generalization of the Voronoi partition incorporating the heterogeneity in sensor capabilities, and design optimal deployment strategies for multiple agents, maximizing single-step search effectiveness. The optimal deployment forms the basis for two search strategies, namely, “heterogeneous sequential deploy and search” and “heterogeneous combined deploy and search”. We prove that the proposed strategies can reduce the uncertainty density to an arbitrarily low level under ideal conditions. We provide a few formal analysis results related ...

  18. Content-Based Search on a Database of Geometric Models: Identifying Objects of Similar Shape

    Energy Technology Data Exchange (ETDEWEB)

    XAVIER, PATRICK G.; HENRY, TYSON R.; LAFARGE, ROBERT A.; MEIRANS, LILITA; RAY, LAWRENCE P.

    2001-11-01

    The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.
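
The key-generation-plus-search idea can be sketched as: reduce each model to a small scale-free signature ("key") and answer similarity queries by nearest neighbor over the keys. The signature below (volume and surface area normalized by the bounding-box diagonal) and all model values are invented for illustration; the report's actual key generation is not specified in this abstract.

```python
import math

def shape_key(volume, surface_area, bbox_diagonal):
    """Toy scale-invariant shape signature from global geometric properties."""
    return (volume / bbox_diagonal ** 3, surface_area / bbox_diagonal ** 2)

def most_similar(target_key, database):
    """Return the database entry whose key is closest in Euclidean distance."""
    return min(database, key=lambda item: math.dist(item[1], target_key))

# Hypothetical database of CAD-derived models: (name, key).
database = [
    ("bracket", shape_key(4.0, 22.0, 5.0)),
    ("sphere",  shape_key(33.5, 50.3, 6.93)),   # radius-2 sphere in its bounding box
    ("plate",   shape_key(2.0, 41.0, 9.0)),
]

# Query with a key computed from noisy range data of a roughly spherical part.
name, _ = most_similar(shape_key(32.0, 49.0, 6.9), database)
```

Because the key is scale-free, a range-scanned part matches the CAD sphere despite measurement noise; a production system would use a richer signature and an index instead of a linear scan.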

  19. HERA signature-based searches I: events with isolated leptons and missing transverse momentum at HERA

    CERN Document Server

    Ferrando, J

    2008-01-01

    Recent results of searches for events containing isolated leptons and missing transverse momentum at HERA are presented. Searches in this final state have yielded notable excesses over Standard Model expectations in the past. Searches for isolated leptons in channels corresponding to all three generations of leptons over the full HERA running period are now available. The combined H1+ZEUS results for searches for electrons and muons are compatible with the Standard Model.

  20. A simple heuristic for Internet-based evidence search in primary care: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Eberbach A

    2016-08-01

    Andreas Eberbach,1 Annette Becker,1 Justine Rochon,2 Holger Finkemeler,1 Achim Wagner,3 Norbert Donner-Banzhoff1 1Department of Family and Community Medicine, Philipp University of Marburg, Marburg, Germany; 2Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany; 3Department of Sport Medicine, Justus-Liebig-University of Giessen, Giessen, Germany Background: General practitioners (GPs) are confronted with a wide variety of clinical questions, many of which remain unanswered. Methods: In order to assist GPs in finding quick, evidence-based answers, we developed a learning program (LP) with a short interactive workshop based on a simple three-step heuristic to improve their search and appraisal competence (SAC). We evaluated the LP's effectiveness with a randomized controlled trial (RCT). Participants (intervention group [IG], n=20; control group [CG], n=31) rated acceptance and satisfaction and also answered 39 knowledge questions to assess their SAC. We controlled for previous knowledge in the content areas covered by the test. Results: Main outcome, SAC: within both groups, the pre–post test shows significant (P=0.00) improvements in correctness (IG 15% vs CG 11%) and confidence (32% vs 26%) in finding evidence-based answers. However, the SAC difference was not significant in the RCT. Other measures: most workshop participants rated “learning atmosphere” (90%), “skills acquired” (90%), and “relevancy to my practice” (86%) as good or very good. The LP recommendations were implemented by 67% of the IG, whereas 15% of the CG already conformed to the LP recommendations spontaneously (odds ratio 9.6, P=0.00). After the literature search, the IG showed a (not significantly) higher satisfaction regarding “time spent” (IG 80% vs CG 65%), “quality of information” (65% vs 54%), and “amount of information” (53% vs 47%). Conclusion: Long-established GPs have a good SAC. Despite high acceptance, strong…

  1. Balancing Efficiency and Effectiveness for Fusion-Based Search Engines in the "Big Data" Environment

    Science.gov (United States)

    Li, Jieyu; Huang, Chunlan; Wang, Xiuhong; Wu, Shengli

    2016-01-01

    Introduction: In the big data age, we have to deal with a tremendous amount of information, which can be collected from various types of sources. For information search systems such as Web search engines or online digital libraries, the collection of documents becomes larger and larger. For some queries, an information search system needs to…
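
A common baseline for the fusion-based search engines discussed above is CombSUM: normalize each engine's scores and sum them per document. A minimal sketch (engine names and scores invented for illustration):

```python
def combsum(rankings):
    """Fuse several engines' result lists by summing max-normalized scores (CombSUM)."""
    fused = {}
    for ranking in rankings:
        if not ranking:
            continue
        top = max(ranking.values())
        for doc, score in ranking.items():
            # Normalize per engine so differently scaled scores are comparable.
            fused[doc] = fused.get(doc, 0.0) + score / top
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical score lists from two component engines.
engine_a = {"d1": 0.9, "d2": 0.5, "d3": 0.1}
engine_b = {"d2": 8.0, "d4": 6.0, "d1": 2.0}
fused = combsum([engine_a, engine_b])
```

Documents retrieved by both engines (d1, d2) accumulate evidence and rise in the fused ranking; the efficiency/effectiveness trade-off the article studies concerns how many component engines and how many results per engine such fusion should consume.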

  2. A Statistical Ontology-Based Approach to Ranking for Multiword Search

    Science.gov (United States)

    Kim, Jinwoo

    2013-01-01

    Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…

  3. Elearning and digital publishing

    CERN Document Server

    Ching, Hsianghoo Steve; Mc Naught, Carmel

    2006-01-01

    ""ELearning and Digital Publishing"" will occupy a unique niche in the literature accessed by library and publishing specialists, and by university teachers and planners. It examines the interfaces between the work done by four groups of university staff who have been in the past quite separate from, or only marginally related to, each other - library staff, university teachers, university policy makers, and staff who work in university publishing presses. All four groups are directly and intimately connected with the main functions of universities - the creation, management and dissemination

  4. Data Sharing & Publishing at Nature Publishing Group

    Science.gov (United States)

    VanDecar, J. C.; Hrynaszkiewicz, I.; Hufton, A. L.

    2015-12-01

    In recent years, the research community has come to recognize that upon-request data sharing has important limitations [1,2]. The Nature-titled journals feel that researchers have a duty to share data without undue qualifications, in a manner that allows others to replicate and build upon their published findings. Historically, the Nature journals have been strong supporters of data deposition in communities with existing data mandates, and have required data sharing upon request in all other cases. To help address some of the limitations of upon-request data sharing, the Nature titles have strengthened their existing data policies and forged a new partnership with Scientific Data, to promote wider data sharing in discoverable, citeable and reusable forms, and to ensure that scientists get appropriate credit for sharing [3]. Scientific Data is a new peer-reviewed journal for descriptions of research datasets, which works with a wide range of public data repositories [4]. Articles at Scientific Data may either expand on research publications at other journals or may be used to publish new datasets. The Nature Publishing Group has also signed the Joint Declaration of Data Citation Principles [5], and Scientific Data is our first journal to include formal data citations. We are currently in the process of adding data citation support to our various journals. [1] Wicherts, J. M., Borsboom, D., Kats, J. & Molenaar, D. The poor availability of psychological research data for reanalysis. Am. Psychol. 61, 726-728, doi:10.1037/0003-066x.61.7.726 (2006). [2] Vines, T. H. et al. Mandated data archiving greatly improves access to research data. FASEB J. 27, 1304-1308, doi:10.1096/fj.12-218164 (2013). [3] Data-access practices strengthened. Nature 515, 312, doi:10.1038/515312a (2014). [4] More bang for your byte. Sci. Data 1, 140010, doi:10.1038/sdata.2014.10 (2014). [5] Data Citation Synthesis Group: Joint Declaration of Data Citation Principles. (FORCE11, San Diego, CA, 2014).

  5. Web Search Engines

    OpenAIRE

    Rajashekar, TB

    1998-01-01

    The World Wide Web is emerging as an all-in-one information source. Tools for searching Web-based information include search engines, subject directories and meta search tools. We take a look at key features of these tools and suggest practical hints for effective Web searching.

  6. A web-based search engine for triplex-forming oligonucleotide target sequences.

    Science.gov (United States)

    Gaddis, Sara S; Wu, Qi; Thames, Howard D; DiGiovanni, John; Walborg, Earl F; MacLeod, Michael C; Vasquez, Karen M

    2006-01-01

    Triplex technology offers a useful approach for site-specific modification of gene structure and function both in vitro and in vivo. Triplex-forming oligonucleotides (TFOs) bind to their target sites in duplex DNA, thereby forming triple-helical DNA structures via Hoogsteen hydrogen bonding. TFO binding has been demonstrated to site-specifically inhibit gene expression, enhance homologous recombination, induce mutation, inhibit protein binding, and direct DNA damage, thus providing a tool for gene-specific manipulation of DNA. We have developed a flexible web-based search engine to find and annotate TFO target sequences within the human and mouse genomes. Descriptive information about each site, including sequence context and gene region (intron, exon, or promoter), is provided. The engine assists the user in finding highly specific TFO target sequences by eliminating or flagging known repeat sequences and flagging overlapping genes. A convenient way to check for the uniqueness of a potential TFO binding site is provided via NCBI BLAST. The search engine may be accessed at spi.mdanderson.org/tfo.
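    As a rough illustration of the kind of scan such an engine performs, the sketch below flags candidate target sites as long purine runs. The 15-base minimum, the regex criterion, and the example sequence are illustrative assumptions, not the published engine's actual rules (which also handle repeats, gene regions, and BLAST checks).

```python
import re

# Assumed simplified criterion: a candidate TFO target site is a run of
# at least MIN_RUN consecutive purines (A/G) on one strand.
MIN_RUN = 15

def find_tfo_targets(seq):
    """Return (start, end, subsequence) for each maximal purine run."""
    pattern = re.compile(r"[AG]{%d,}" % MIN_RUN)
    return [(m.start(), m.end(), m.group()) for m in pattern.finditer(seq.upper())]

# Illustrative sequence: two purine tracts separated by pyrimidine-rich bases
dna = "TTTT" + "GAAGGAGGAGAGGAA" + "CCTA" + "AGGGAGGGAGGGAGGG" + "TT"
for start, end, site in find_tfo_targets(dna):
    print(start, end, site)
```

A real engine would then annotate each hit with its gene region and check uniqueness, as the abstract describes.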

  7. Improving the Ranking Capability of the Hyperlink Based Search Engines Using Heuristic Approach

    Directory of Open Access Journals (Sweden)

    Haider A. Ramadhan

    2006-01-01

    Full Text Available To evaluate the informative content of a Web page, the Web structure has to be carefully analyzed. Hyperlink analysis, which is capable of measuring the potential information contained in a Web page with respect to the Web space, is gaining more attention. The links to and from Web pages are an important resource that has largely gone unused in existing search engines. Web pages differ from general text in that they possess external and internal structure. The Web links between documents can provide useful information in finding pages for a given set of topics. Making use of the Web link information would allow the construction of more powerful tools for answering user queries. Google has been among the first search engines to utilize hyperlinks in page ranking. Still, two main flaws in Google need to be tackled. First, all the backlinks to a page are assigned equal weights. Second, less content-rich pages, such as intermediate and transient pages, are not differentiated from more content-rich pages. To overcome these pitfalls, this paper proposes a heuristic-based solution to differentiate the significance of various backlinks by assigning a different weight factor to them depending on their location in the directory tree of the Web space.
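    The weighting idea can be illustrated with a toy scorer. The 1/(1+depth) weight and the link data below are assumed for illustration; they are not the paper's exact weight factors.

```python
# Hypothetical sketch: score pages by backlinks, weighting each backlink
# by the directory depth of the linking page (shallower pages count more),
# rather than treating all backlinks equally.

def depth(url_path):
    # Number of directory levels, e.g. "/a/b/page.html" -> 2
    return url_path.strip("/").count("/")

def weighted_backlink_score(links):
    """links: iterable of (source_path, target_path) hyperlinks."""
    scores = {}
    for src, dst in links:
        w = 1.0 / (1 + depth(src))  # assumed weight factor
        scores[dst] = scores.get(dst, 0.0) + w
    return scores

links = [
    ("/index.html", "/a/page.html"),     # depth 0 -> weight 1.0
    ("/a/b/deep.html", "/a/page.html"),  # depth 2 -> weight 1/3
    ("/a/b/deep.html", "/other.html"),
]
print(weighted_backlink_score(links))
```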

  8. FACC: A Novel Finite Automaton Based on Cloud Computing for the Multiple Longest Common Subsequences Search

    Directory of Open Access Journals (Sweden)

    Yanni Li

    2012-01-01

    Full Text Available Searching for the multiple longest common subsequences (MLCS) has significant applications in areas such as bioinformatics, information processing, and data mining. Although a few parallel MLCS algorithms have been proposed, their efficiency and effectiveness are not satisfactory given the increasing complexity and size of biological data. To overcome the shortcomings of the existing MLCS algorithms, and considering that the MapReduce parallel framework of cloud computing is a promising technology for cost-effective high-performance parallel computing, a novel finite automaton (FA) based on cloud computing, called FACC, is proposed under the MapReduce parallel framework, so as to obtain a more efficient and effective general parallel MLCS algorithm. FACC adopts the ideas of matched pairs and finite automata by preprocessing sequences, constructing successor tables, and building a common-subsequence finite automaton to search for the MLCS. Simulation experiments on a set of benchmarks from both real DNA and amino acid sequences have been conducted, and the results show that the proposed FACC algorithm outperforms the current leading parallel MLCS algorithm, FAST-MLCS.

  9. Spectrum-based method to generate good decoy libraries for spectral library searching in peptide identifications.

    Science.gov (United States)

    Cheng, Chia-Ying; Tsai, Chia-Feng; Chen, Yu-Ju; Sung, Ting-Yi; Hsu, Wen-Lian

    2013-05-01

    As spectral library searching has received increasing attention for peptide identification, constructing good decoy spectra from the target spectra is the key to correctly estimating the false discovery rate in searching against the concatenated target-decoy spectral library. Several methods have been proposed to construct decoy spectral libraries. Most of them construct decoy peptide sequences and then generate theoretical spectra accordingly. In this paper, we propose a method, called precursor-swap, which constructs decoy spectral libraries directly at the "spectrum level", without generating decoy peptide sequences, by swapping the precursors of two spectra selected according to a very simple rule. Our spectrum-based method does not require additional efforts to deal with ion types (e.g., a, b or c ions), fragmentation mechanism (e.g., CID or ETD), or unannotated peaks, but preserves many spectral properties. The precursor-swap method is evaluated on different spectral libraries, and the obtained decoy ratios show that it is comparable to other methods. Notably, it is efficient in time and memory usage for constructing decoy libraries. A software tool called Precursor-Swap-Decoy-Generation (PSDG) is publicly available for download at http://ms.iis.sinica.edu.tw/PSDG/.
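    The core swap operation is simple to sketch. The dictionary layout for a spectrum below is an assumed representation for illustration, not PSDG's file format, and the pair-selection rule from the paper is not reproduced here.

```python
import copy

def precursor_swap(spec_a, spec_b):
    """Return two decoy spectra obtained by exchanging precursor m/z
    values; the fragment peaks (and thus most spectral properties)
    are left untouched."""
    decoy_a, decoy_b = copy.deepcopy(spec_a), copy.deepcopy(spec_b)
    decoy_a["precursor_mz"], decoy_b["precursor_mz"] = (
        spec_b["precursor_mz"], spec_a["precursor_mz"])
    return decoy_a, decoy_b

# Illustrative spectra: precursor m/z plus (fragment m/z, intensity) peaks
a = {"precursor_mz": 523.77, "peaks": [(175.1, 40.0), (262.6, 100.0)]}
b = {"precursor_mz": 611.32, "peaks": [(147.1, 55.0), (304.2, 100.0)]}
da, db = precursor_swap(a, b)
print(da["precursor_mz"], db["precursor_mz"])
```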

  10. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

    Full Text Available Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
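    The TDE step can be illustrated with a plain cross-correlation over candidate lags: the estimated delay is the lag that maximizes the correlation between the two microphone signals. The synthetic signals below are illustrative, and the sketch ignores sampling rate and microphone geometry (which convert the lag into a bearing).

```python
# Minimal time delay estimation (TDE) sketch.
def estimate_delay(x, y, max_lag):
    """Return the lag (in samples) maximizing the cross-correlation."""
    def xcorr(lag):
        return sum(x[i] * y[i + lag] for i in range(len(x))
                   if 0 <= i + lag < len(y))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# y is x delayed by 3 samples
x = [0, 0, 1, 3, 2, 1, 0, 0, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 1, 3, 2, 1, 0, 0, 0]
print(estimate_delay(x, y, 5))
```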

  11. Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer

    Directory of Open Access Journals (Sweden)

    Mauro Castelli

    2015-01-01

    Full Text Available Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting the energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce an accurate forecasting also on unseen data.
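    The "geometric semantic" part has a simple geometric reading: if individuals are identified with their semantics (the vector of outputs on the training cases), a convex combination of two parents lies on the segment between them, so its Euclidean error cannot exceed the worse parent's. A sketch of that crossover, with the semantics-vector representation assumed for illustration:

```python
import random

random.seed(3)

def error(sem, target):
    """Euclidean distance between a semantics vector and the target."""
    return sum((s - t) ** 2 for s, t in zip(sem, target)) ** 0.5

def gs_crossover(p1, p2):
    """Geometric semantic crossover: a random convex combination."""
    r = random.random()
    return [r * a + (1 - r) * b for a, b in zip(p1, p2)]

target = [1.0, 2.0, 3.0]
p1, p2 = [0.5, 2.5, 2.0], [1.5, 1.0, 4.5]
child = gs_crossover(p1, p2)
# the offspring is never worse than the worse parent
print(error(child, target) <= max(error(p1, target), error(p2, target)))
```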

  12. Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer.

    Science.gov (United States)

    Castelli, Mauro; Trujillo, Leonardo; Vanneschi, Leonardo

    2015-01-01

    Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting the energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce an accurate forecasting also on unseen data.

  13. Neural-Based Cuckoo Search of Employee Health and Safety (HS)

    Directory of Open Access Journals (Sweden)

    Koffka Khan

    2013-01-01

    Full Text Available A study using the cuckoo search algorithm to evaluate the effects of using computer-aided workstations on employee health and safety (HS) is conducted. We collected data on HS risk for employees at their workplaces, analyzed the data, and proposed corrective measures applying our methodology. It includes a checklist with nine HS dimensions: work organization, displays, input devices, furniture, work space, environment, software, health hazards, and satisfaction. With the checklist, data on HS risk factors are collected. For the calculation of an HS risk index, a neural-swarm cuckoo search (NSCS) algorithm has been employed. Based on the HS risk index, four groups of HS risk severity are determined: low, moderate, high, and extreme HS risk. By this index, HS problems are located and corrective measures can be applied. This approach is illustrated and validated by a case study. An important advantage of the approach is its ease of use, with the HS index methodology quickly pointing out individual, employee-specific HS risks.

  14. Olfaction and hearing based mobile robot navigation for odor/sound source search.

    Science.gov (United States)

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
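    The heading-deviation rule lends itself to a small sketch: wrap the difference between the expected heading (from the odor/sound localizer) and the current heading (from the magnetoresistive sensor) into (-180, 180], then steer proportionally. The gain K is an assumed illustrative value, not the paper's controller.

```python
K = 0.5  # proportional steering gain (assumed)

def heading_error(current_deg, expected_deg):
    """Signed deviation, wrapped into (-180, 180] degrees."""
    err = (expected_deg - current_deg) % 360.0
    return err - 360.0 if err > 180.0 else err

def steering_command(current_deg, expected_deg):
    # steer in proportion to the wrapped deviation
    return K * heading_error(current_deg, expected_deg)

print(heading_error(350.0, 10.0))
print(steering_command(0.0, 90.0))
```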

  15. Semantic Search-Based Genetic Programming and the Effect of Intron Deletion.

    Science.gov (United States)

    Castelli, Mauro; Vanneschi, Leonardo; Silva, Sara

    2014-01-01

    The concept of semantics (in the sense of input-output behavior of solutions on training data) has been the subject of a noteworthy interest in the genetic programming (GP) research community over the past few years. In this paper, we present a new GP system that uses the concept of semantics to improve search effectiveness. It maintains a distribution of different semantic behaviors and biases the search toward solutions that have similar semantics to the best solutions that have been found so far. We present experimental evidence of the fact that the new semantics-based GP system outperforms the standard GP and the well-known bacterial GP on a set of test functions, showing particularly interesting results for noncontinuous (i.e., generally harder to optimize) test functions. We also observe that the solutions generated by the proposed GP system often have a larger size than the ones returned by standard GP and bacterial GP and contain an elevated number of introns, i.e., parts of code that do not have any effect on the semantics. Nevertheless, we show that the deletion of introns during the evolution does not affect the performance of the proposed method.

  16. A trust-based sensor allocation algorithm in cooperative space search problems

    Science.gov (United States)

    Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2011-06-01

    Sensor allocation is an important and challenging problem within the field of multi-agent systems. The sensor allocation problem involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the used sensor resource, and the POS is the target tracking performance. Usually, POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search and tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. Then we model the sensor allocation optimization problem with trust-in-loop negotiation game and solve it using a sub-game perfect equilibrium. Numerical simulations are performed to demonstrate the trust-based sensor allocation algorithm in cooperative space situation awareness (SSA) search problems.

  17. Pattern Recognition based Lexi-Search Approach to the Variant Multi- Dimensional Assignment Problem

    Directory of Open Access Journals (Sweden)

    Purusotham, S.

    2011-08-01

    Full Text Available The Multi-Dimensional Assignment Problem (MDAP) is a combinatorial optimization problem that is known to be NP-hard. In this paper we discuss a problem with four dimensions. N jobs can be executed on N machines, at k facilities, using l concessions. Every job is to be scheduled on some machine at one of the facilities, using some concession. No two jobs can run on the same machine, at the same facility, using the same concession. Furthermore, there is a specified maximum number of jobs that can be run at a given facility, and there is a maximum number of jobs that can avail of a given concession. Let C(i, j, k, l) be the cost of allocating job 'i' to machine 'j' at facility 'k' using concession 'l'; this is provided as a 4-dimensional array. The objective is to schedule the jobs in such a way that the constraints are met and the cost is minimized. For this problem we developed a Pattern Recognition Technique based Lexi-Search Algorithm, which comes under the exact methods. The concepts and the algorithm involved in this problem are discussed with a suitable numerical example. We programmed the proposed Lexi-Search algorithm in C. This algorithm takes less CPU run time than existing methods, and hence is suggested for solving higher-dimensional problems.
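    For contrast with the lexi-search, a brute-force baseline at toy size makes the constraint structure concrete. The cost function and capacity values below are invented for illustration; a lexi-search would prune this exponential enumeration.

```python
from itertools import permutations, product

# Tiny brute-force baseline for the variant 4-D assignment problem.
# cost[i][j][k][l]: job i on machine j at facility k with concession l.
N, K, L = 3, 2, 2
cost = [[[[(i + 1) * (j + 1) + 2 * k + 3 * l
           for l in range(L)] for k in range(K)]
         for j in range(N)] for i in range(N)]
fac_cap = [2, 2]  # max jobs per facility (assumed)
con_cap = [2, 2]  # max jobs per concession (assumed)

def solve():
    best = None
    for machines in permutations(range(N)):        # distinct machines
        for fk in product(range(K), repeat=N):     # facility per job
            if any(fk.count(k) > fac_cap[k] for k in range(K)):
                continue
            for cl in product(range(L), repeat=N): # concession per job
                if any(cl.count(l) > con_cap[l] for l in range(L)):
                    continue
                c = sum(cost[i][machines[i]][fk[i]][cl[i]] for i in range(N))
                if best is None or c < best[0]:
                    best = (c, machines, fk, cl)
    return best

print(solve())
```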

  18. About EBSCO Publishing

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    EBSCO Publishing, headquartered in Ipswich, Massachusetts[1], is an aggregator of premium full-text content. EBSCO Publishing’s core business is providing online databases via EBSCOhost to libraries worldwide.

  19. A Publisher's Perspective.

    Science.gov (United States)

    McElderry, Margaret K.

    1988-01-01

    Compares the publishing industry of forty years ago to that of today, noting that the earlier market was less demanding and allowed the pursuit of excellence as well as the backlisting of high quality books. (ARH)

  20. Publishing for Impact

    OpenAIRE

    Gerritsma, W.

    2015-01-01

    The starting point of my presentation is that you have carried out the most valuable, relevant and exciting research. This presentation points out some publishing tips that should be part of your publishing strategy. My goal is to make you think about a publication strategy. Your publication strategy. And to ensure that your research finds the best possible publication venue and is presented in the most optimal way.

  1. Reducing a Knowledge-Base Search Space When Data Are Missing

    Science.gov (United States)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
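    The growth, and the kind of reduction intended, can be made concrete. The "branch only on rules whose premises mention a missing datum" heuristic below is an assumed illustration of the idea, not this software's actual pruning strategy.

```python
# Without heuristics, n rules yield 1 + 2**n candidate scenarios.
def scenario_count(n_rules):
    return 1 + 2 ** n_rules

def pruned_scenario_count(rules, missing):
    """Assumed heuristic: only rules whose premises touch a missing
    datum need to be branched on.

    rules: list of premise sets; missing: set of unavailable data."""
    m = sum(1 for premises in rules if premises & missing)
    return 1 + 2 ** m

rules = [{"a", "b"}, {"c"}, {"d", "e"}, {"a"}]
print(scenario_count(16))              # the 65,537 figure from the text
print(pruned_scenario_count(rules, {"a"}))
```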

  2. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2015-01-01

    Full Text Available In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on the repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the birds' nest locations. In order to select a reasonable repeat-cycle disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments; meanwhile, the algorithm of this paper is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.

  3. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    Science.gov (United States)

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on the repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the birds' nest locations. In order to select a reasonable repeat-cycle disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments; meanwhile, the algorithm of this paper is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.
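    A minimal cuckoo search sketch is given below: Lévy flights toward the best nest, random abandonment of a fraction of nests, plus a small Gaussian perturbation around the best nest standing in for the paper's repeat-cycle disturbance operator. All parameter values, the test function, and the disturbance form are illustrative assumptions.

```python
import math
import random

random.seed(1)

DIM, NESTS, ITERS, PA = 2, 15, 200, 0.25  # illustrative parameters

def sphere(x):
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Lévy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

nests = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NESTS)]
best = min(nests, key=sphere)
for _ in range(ITERS):
    for i, nest in enumerate(nests):
        # Lévy flight relative to the best nest, greedy acceptance
        cand = [x + 0.01 * levy_step() * (x - b) for x, b in zip(nest, best)]
        if sphere(cand) < sphere(nest):
            nests[i] = cand
        # abandon a fraction PA of nests at random
        if random.random() < PA:
            nests[i] = [random.uniform(-5, 5) for _ in range(DIM)]
    # "disturbance": a small extra search around the current best
    cand = [b + random.gauss(0, 0.05) for b in best]
    if sphere(cand) < sphere(best):
        best = cand
    best = min(nests + [best], key=sphere)

print(sphere(best))
```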

  4. Development and Testing of a Literature Search Protocol for Evidence Based Nursing: An Applied Student Learning Experience

    Directory of Open Access Journals (Sweden)

    Andy Hickner

    2011-09-01

    Full Text Available Objective – The study aimed to develop a search protocol and evaluate reviewers' satisfaction with an evidence-based practice (EBP) review by embedding a library science student in the process. Methods – The student was embedded in one of four review teams overseen by a professional organization for oncology nurses (ONS). A literature search protocol was developed by the student following discussion and feedback from the review team. Organization staff provided process feedback. Reviewers from both case and control groups completed a questionnaire to assess satisfaction with the literature search phases of the review process. Results – A protocol was developed and refined for use by future review teams. The collaboration and the resulting search protocol were beneficial for both the student and the review team members. The questionnaire results did not yield statistically significant differences regarding satisfaction with the search process between case and control groups. Conclusions – Evidence-based reviewers' satisfaction with the literature searching process depends on multiple factors and it was not clear that embedding an LIS specialist in the review team improved satisfaction with the process. Future research with more respondents may elucidate specific factors that may impact reviewers' assessment.

  5. A Comparison of Multi-Parametric Programming, Mixed-Integer Programming, Gradient Descent Based, and the Embedding Approach on Four Published Hybrid Optimal Control Examples

    CERN Document Server

    Meyer, Richard; DeCarlo, Raymond A

    2012-01-01

    This paper compares the embedding approach for solving hybrid optimal control problems to multi-parameter programming, mixed-integer programming, and gradient-descent based methods in the context of four published examples. The four examples include a spring-mass system, moving-target tracking for a mobile robot, two-tank filling, and a DC-DC boost converter. Numerical advantages of the embedding approach are set forth and validated for each example: significantly faster solution time, no ad hoc assumptions (such as predetermined mode sequences) or control models, lower performance index costs, and algorithm convergence when other methods fail. Specific (theoretical) advantages of the embedding approach over the other methods are also described: guaranteed existence of a solution under mild conditions, convexity of the embedded optimization problem solvable with traditional techniques such as sequential quadratic programming with no need for any mixed-integer programming, applicability to nonlinear systems, e...

  6. A NEW SYSTEM DYNAMIC EXTREMUM SELF-SEARCHING METHOD BASED ON CORRELATION ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    李嘉; 刘文江; 胡军; 袁廷奇

    2003-01-01

    Objective To propose a new dynamic extremum self-searching method, which can be used in extremum optimum control systems for industrial processes, to overcome the disadvantages of the traditional method. Methods This algorithm is based on correlation analysis. A pseudo-random binary signal m-sequence u(t) is added as a probe signal to the system input; a cross-correlation function between system input and output is constructed, and the next hunting direction is judged by the sign of its differential. Results Compared with traditional algorithms such as the step-forward hunting method, the iteration efficiency, hunting precision, and anti-interference ability of the correlation analysis method are clearly superior. The computer simulation experiments given illustrate these points. Conclusion The correlation analysis method can settle on the optimum operating point of a device's process. It has the advantages of mild applicability conditions and a simple calculation process.
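    The idea can be sketched on a toy plant: add a pseudo-random binary probe to the input, correlate the mean-centred output with the probe, and hunt in the direction given by the correlation's sign. The quadratic plant, gains, probe amplitude, and probe length below are all assumptions; a plain random ±1 sequence stands in for a true m-sequence.

```python
import random

random.seed(0)

def f(u):
    return -(u - 2.0) ** 2  # unknown plant, maximum at u = 2

u, amp, step = 0.0, 0.05, 0.1
for _ in range(100):
    # random binary probe (stand-in for an m-sequence)
    probes = [random.choice((-1.0, 1.0)) for _ in range(16)]
    outputs = [f(u + amp * p) for p in probes]
    ybar = sum(outputs) / len(outputs)
    # sign of the input/output cross-correlation gives the hunting direction
    corr = sum(p * (y - ybar) for p, y in zip(probes, outputs))
    u += step if corr > 0 else -step
print(u)
```

Centring the outputs removes the large probe-independent term f(u), so the correlation's sign tracks the local slope; the input then climbs to the extremum and oscillates within one step of it.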

  7. Multiple-optima search method based on a metamodel and mathematical morphology

    Science.gov (United States)

    Li, Yulin; Liu, Li; Long, Teng; Chen, Xin

    2016-03-01

    This article investigates a non-population-based optimization method using mathematical morphology and the radial basis function (RBF) for multimodal, computationally intensive functions. To obtain several feasible solutions, mathematical morphology is employed to search promising regions. Sequential quadratic programming is used to exploit the possible areas to determine the exact positions of the potential optima. To relieve the computational burden, metamodelling techniques are employed. The RBF metamodel varies considerably between iterations, so the positions of the potential optima move during optimization. To find the pairs of corresponding potential optima between the latest two iterations, a tolerance is introduced. Furthermore, to ensure that all the output minima are global or local optima, an optimality judgement criterion is introduced.
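    The tolerance-based pairing of potential optima between two iterations reduces to a small matching routine. The tolerance value and the points below are illustrative assumptions, not the paper's data.

```python
TOL = 0.5  # pairing tolerance (assumed)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pair_optima(prev, curr):
    """Greedily match each previous optimum with its nearest current
    optimum, accepting the pair only if the distance is within TOL."""
    pairs, unmatched = [], list(curr)
    for p in prev:
        if not unmatched:
            break
        q = min(unmatched, key=lambda c: dist(p, c))
        if dist(p, q) <= TOL:
            pairs.append((p, q))
            unmatched.remove(q)
    return pairs

prev = [(0.0, 0.0), (3.0, 3.0)]
curr = [(0.2, 0.1), (3.1, 2.9), (9.0, 9.0)]  # (9, 9) is a new optimum
print(pair_optima(prev, curr))
```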

  8. Optimal fuzzy PID controller with adjustable factors based on flexible polyhedron search algorithm

    Institute of Scientific and Technical Information of China (English)

    谭冠政; 肖宏峰; 王越超

    2002-01-01

    A new kind of optimal fuzzy PID controller is proposed, which contains two parts. One is an on-line fuzzy inference system, and the other is a conventional PID controller. In the fuzzy inference system, three adjustable factors xp, xi, and xd are introduced. Their function is to further modify and optimize the result of the fuzzy inference so as to give the controller the optimal control effect on a given object. The optimal values of these adjustable factors are determined based on the ITAE criterion and Nelder and Mead's flexible polyhedron search algorithm. This optimal fuzzy PID controller has been used to control the executive motor of the intelligent artificial leg designed by the authors. The results of computer simulation indicate that this controller is very effective and can be widely used to control different kinds of objects and processes.
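    Tuning gains against the ITAE criterion is a derivative-free optimization. The sketch below substitutes a simple compass (pattern) search for the Nelder–Mead flexible polyhedron, and an assumed first-order plant with an ordinary PID loop for the fuzzy controller; everything here is illustrative.

```python
DT, STEPS = 0.05, 200  # simulation step and horizon (assumed)

def itae(gains):
    """ITAE of a unit-step response for an assumed plant dy/dt = -y + u."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for n in range(STEPS):
        err = 1.0 - y
        integ += err * DT
        deriv = (err - prev_err) / DT
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += DT * (-y + u)                # explicit Euler plant update
        cost += (n * DT) * abs(err) * DT  # integral of time * |error|
    return cost

def compass_search(f, x, step=0.5, tol=1e-3):
    """Coordinate-wise pattern search (stand-in for Nelder-Mead)."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                if cand[i] >= 0 and f(cand) < fx:
                    x, fx, improved = cand, f(cand), True
        if not improved:
            step /= 2
    return x, fx

gains, cost = compass_search(itae, [1.0, 0.1, 0.0])
print(gains, cost)
```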

  9. Handwritten Japanese Address Recognition Technique Based on Improved Phased Search of Candidate Rectangle Lattice

    Directory of Open Access Journals (Sweden)

    Hidehisa NAKAYAMA

    2004-08-01

    Full Text Available In the field of handwritten Japanese address recognition, it is common to recognize place-name strings from place-name images. In practice, however, it is necessary to recognize place-name strings from address images. Therefore, we have proposed a post-processing system which checks the list of place-name strings in two stages for recognizing place-name images. In this paper, we propose a new technique based on a phased search of a candidate rectangle lattice, and improve the technique with the detection of key characters for the final output. Applying our proposal to the IPTP 1840 image data of address strings, the results of experiments clearly show the efficiency of our system in handwritten Japanese address recognition.

  10. PSO-based support vector machine with cuckoo search technique for clinical disease diagnoses.

    Science.gov (United States)

    Liu, Xiaoyong; Fu, Hui

    2014-01-01

    Disease diagnosis is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: firstly, a CS based approach for parameter optimization of SVM is developed to find the better initial parameters of kernel function, and then PSO is applied to continue SVM training and find the best parameters of SVM. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. Therefore, we can conclude that our proposed method is very efficient compared to the previously reported algorithms.

  11. Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.

    Science.gov (United States)

    Bhandari, A K; Soni, V; Kumar, A; Singh, G K

    2014-07-01

    This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT); the CS algorithm is used to optimize each DWT subband, the singular value matrix of the thresholded low-low (LL) subband image is obtained, and finally the enhanced image is reconstructed by applying the IDWT. The singular value matrix carries the intensity information of the image, and any modification of the singular values changes the intensity of the given image. The experimental results show the superiority of the proposed method in terms of PSNR, MSE, mean and standard deviation over conventional and state-of-the-art techniques.
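The DWT-plus-SVD step can be illustrated with a one-level Haar transform written in plain NumPy (a stand-in for the paper's DWT) and a fixed singular-value gain in place of the CS-optimized parameter:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def enhance(img, gain):
    """Scale the singular values of the LL band, then reconstruct."""
    ll, lh, hl, hh = haar2d(img.astype(float))
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    ll2 = u @ np.diag(s * gain) @ vt
    return ihaar2d(ll2, lh, hl, hh)
```

With `gain=1.0` the image is reconstructed exactly; gains above 1 brighten the low-frequency content, which is what the CS algorithm tunes in the paper.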

  12. PSO-Based Support Vector Machine with Cuckoo Search Technique for Clinical Disease Diagnoses

    Directory of Open Access Journals (Sweden)

    Xiaoyong Liu

    2014-01-01

    Full Text Available Disease diagnosis is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes support vector machine (SVM, particle swarm optimization (PSO, and cuckoo search (CS. The new method consists of two stages: firstly, a CS based approach for parameter optimization of SVM is developed to find the better initial parameters of kernel function, and then PSO is applied to continue SVM training and find the best parameters of SVM. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. Therefore, we can conclude that our proposed method is very efficient compared to the previously reported algorithms.

  13. Privacy-preserving location-based query using location indexes and parallel searching in distributed networks.

    Science.gov (United States)

    Zhong, Cheng; Liu, Lei; Zhao, Jing

    2014-01-01

    An efficient location-based query algorithm for protecting user privacy in distributed networks is given. The algorithm uses the users' location indexes and multiple parallel threads to quickly search for and select candidate anonymity sets that contain more users and more uniformly distributed location information, accelerating the execution of the temporal-spatial anonymization operations, and it allows users to configure custom privacy-preserving location query requests. The simulated experiment results show that the proposed algorithm can offer location query services for more users simultaneously, improve the performance of the anonymity server, and satisfy the users' anonymous location requests.
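The core anonymization idea (hide the querier inside a region covering at least k users) can be sketched minimally; the data layout and a plain sort standing in for the paper's location indexes and parallel search are assumptions:

```python
def cloak(users, uid, k):
    """Return the bounding box of the k users nearest the querier as the
    anonymized query region. `users` maps user id -> (x, y); a sort over
    squared distances stands in for indexed, parallel candidate search."""
    x0, y0 = users[uid]
    near = sorted(users,
                  key=lambda u: (users[u][0] - x0) ** 2 + (users[u][1] - y0) ** 2)[:k]
    xs = [users[u][0] for u in near]
    ys = [users[u][1] for u in near]
    return min(xs), min(ys), max(xs), max(ys)
```

The location server then answers the query for the whole box, so the querier is indistinguishable from the other users inside it.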

  14. Orthogonal search-based rule extraction for modelling the decision to transfuse.

    Science.gov (United States)

    Etchells, T A; Harrison, M J

    2006-04-01

    Data from an audit relating to transfusion decisions during intermediate or major surgery were analysed to determine the strengths of certain factors in the decision making process. The analysis, using orthogonal search-based rule extraction (OSRE) from a trained neural network, demonstrated that the risk of tissue hypoxia (ROTH) assessed using a 100-mm visual analogue scale, the haemoglobin value (Hb) and the presence or absence of on-going haemorrhage (OGH) were able to reproduce the transfusion decisions with a joint specificity of 0.96 and sensitivity of 0.93 and a positive predictive value of 0.9. The rules indicating transfusion were: 1. ROTH > 32 mm and Hb 13 mm and Hb 38 mm, Hb < 102 g x l(-1) and OGH; 4. Hb < 78 g x l(-1).

  15. SEGMENTATION ALGORITHM BASED ON EDGE-SEARCHING FOR MULTI-LINEAR STRUCTURED LIGHT IMAGES

    Institute of Scientific and Technical Information of China (English)

    LIU Baohua; LI Bing; JIANG Zhuangde

    2006-01-01

    Aiming at the problem that disturbances on the edges of the light-stripes make segmentation of the light-stripe images difficult, a new segmentation algorithm based on edge-searching is presented. It first calculates every edge pixel's horizontal coordinate gradient to produce the corresponding gradient-edge, then uses a designed length-variable 1D template to scan the light-stripes' gradient-edges. The template is able to find disturbances of different widths by exploiting the distribution character of the edge disturbances. The found disturbances are finally eliminated. The algorithm not only segments the light-stripe images smoothly, but also eliminates most disturbances on the light-stripes' edges without damaging the 3D information in the light-stripe images. A practical example of using the proposed algorithm is given at the end. Comparison shows that the efficiency of the algorithm is obviously improved.

  16. WEB SEARCH ENGINE BASED SEMANTIC SIMILARITY MEASURE BETWEEN WORDS USING PATTERN RETRIEVAL ALGORITHM

    Directory of Open Access Journals (Sweden)

    Pushpa C N

    2013-02-01

    Full Text Available Semantic similarity measures play an important role in information retrieval, natural language processing and various web tasks such as relation extraction, community mining, document clustering, and automatic meta-data extraction. In this paper, we propose a Pattern Retrieval Algorithm (PRA) to compute the semantic similarity measure between words by combining both the page count method and the web snippets method. Four association measures are used to find semantic similarity between words in the page count method using web search engines. We use Sequential Minimal Optimization (SMO) support vector machines (SVM) to find the optimal combination of page-counts-based similarity scores and top-ranking patterns from the web snippets method. The SVM is trained to classify synonymous word-pairs and non-synonymous word-pairs. The proposed approach improves correlation values, precision, recall, and F-measures compared to the existing methods, achieving a correlation value of 89.8%.
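Two page-count association measures commonly used for this task (WebJaccard and a web variant of pointwise mutual information) can be sketched as follows; the noise threshold `c` and the assumed total page count are illustrative values, not the paper's:

```python
import math

def web_jaccard(n_p, n_q, n_pq, c=5):
    """WebJaccard score from page counts for P, Q, and "P AND Q".
    Co-occurrence counts below the threshold c are treated as noise."""
    if n_pq < c:
        return 0.0
    return n_pq / (n_p + n_q - n_pq)

def web_pmi(n_p, n_q, n_pq, n_total=10**10, c=5):
    """Pointwise mutual information estimated from page counts,
    with n_total an assumed number of indexed pages."""
    if n_pq < c:
        return 0.0
    return math.log2((n_pq / n_total) / ((n_p / n_total) * (n_q / n_total)))
```

Scores like these form the page-count part of the feature vector that the SVM combines with snippet patterns.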

  17. Optimized Aircraft Electric Control System Based on Adaptive Tabu Search Algorithm and Fuzzy Logic Control

    Directory of Open Access Journals (Sweden)

    Saifullah Khalid

    2016-09-01

    Full Text Available Three conventional techniques for extracting reference currents for shunt active power filters (constant instantaneous power control, sinusoidal current control, and the synchronous reference frame method) have been optimized using Fuzzy Logic control and the Adaptive Tabu Search Algorithm, and their performances have been compared. The compensation ability of the different control strategies is critically analyzed in terms of THD and speed, and suggestions are given for selecting the technique to be used. Simulated results using a MATLAB model are presented and clearly demonstrate the value of the proposed control method for the aircraft shunt APF. After the filter is applied, the observed waveforms keep harmonics within the limits and power quality is improved.

  18. An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering

    CERN Document Server

    Aickelin, Uwe; Li, Jingpeng

    2007-01-01

    This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is ...

  19. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on literature review of information-seeking models. The General Systems Theory's (GST) prepositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  20. SAM: String-based sequence search algorithm for mitochondrial DNA database queries

    Science.gov (United States)

    Röck, Alexander; Irwin, Jodi; Dür, Arne; Parsons, Thomas; Parson, Walther

    2011-01-01

    The analysis of the haploid mitochondrial (mt) genome has numerous applications in forensic and population genetics, as well as in disease studies. Although mtDNA haplotypes are usually determined by sequencing, they are rarely reported as a nucleotide string. Traditionally they are presented in a difference-coded position-based format relative to the corrected version of the first sequenced mtDNA. This convention requires recommendations for standardized sequence alignment that is known to vary between scientific disciplines, even between laboratories. As a consequence, database searches that are vital for the interpretation of mtDNA data can suffer from biased results when query and database haplotypes are annotated differently. In the forensic context that would usually lead to underestimation of the absolute and relative frequencies. To address this issue we introduce SAM, a string-based search algorithm that converts query and database sequences to position-free nucleotide strings and thus eliminates the possibility that identical sequences will be missed in a database query. The mere application of a BLAST algorithm would not be a sufficient remedy as it uses a heuristic approach and does not address properties specific to mtDNA, such as phylogenetically stable but also rapidly evolving insertion and deletion events. The software presented here provides additional flexibility to incorporate phylogenetic data, site-specific mutation rates, and other biologically relevant information that would refine the interpretation of mitochondrial DNA data. The manuscript is accompanied by freeware and example data sets that can be used to evaluate the new software (http://stringvalidation.org). PMID:21056022
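The core idea of expanding a difference-coded haplotype into a position-free nucleotide string can be illustrated with a minimal sketch (this is not the SAM software; the encoding below is hypothetical and omits insertions):

```python
def apply_differences(reference, diffs):
    """Expand a difference-coded mtDNA haplotype (relative to a reference
    sequence such as the rCRS) into a plain nucleotide string.
    `diffs` is a list of (1-based position, base) pairs; the marker
    "del" denotes a deletion at that position."""
    seq = list(reference)
    for pos, base in diffs:
        if base == "del":
            seq[pos - 1] = ""
        else:
            seq[pos - 1] = base
    return "".join(seq)
```

Once both the query and every database haplotype are expanded this way, equality of strings no longer depends on how the original alignments annotated the differences.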

  1. Semi-supervised weighted kernel clustering based on gravitational search for fault diagnosis.

    Science.gov (United States)

    Li, Chaoshun; Zhou, Jianzhong

    2014-09-01

    Supervised learning methods, like the support vector machine (SVM), have been widely applied in diagnosing known faults; however, such methods fail to work correctly when a new or unknown fault occurs. Traditional unsupervised kernel clustering can be used for unknown fault diagnosis, but it cannot make use of historical classification information to improve diagnosis accuracy. In this paper, a semi-supervised kernel clustering model is designed to diagnose known and unknown faults. First, a novel semi-supervised weighted kernel clustering algorithm based on gravitational search (SWKC-GS) is proposed for clustering a dataset composed of labeled and unlabeled fault samples. The clustering model of SWKC-GS is defined based on the misclassification rate of labeled samples and a fuzzy clustering index on the whole dataset. The gravitational search algorithm (GSA) is used to solve the clustering model, with the cluster centers, feature weights and kernel function parameter selected as optimization variables. New fault samples are then identified and diagnosed by calculating the weighted kernel distance between them and the fault cluster centers. If the fault samples are unknown, they are added to the historical dataset and SWKC-GS is used to partition the mixed dataset and update the clustering results for diagnosing the new fault. In experiments, the proposed method has been applied to fault diagnosis of rotary bearings, and SWKC-GS has been compared not only with traditional clustering methods, but also with SVM and neural networks, for known fault diagnosis. In addition, the proposed method has also been applied to unknown fault diagnosis. The results show the effectiveness of the proposed method in achieving the expected diagnosis accuracy for both known and unknown bearing faults.
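The diagnosis step (assign a sample to the nearest cluster center under a weighted kernel distance) can be sketched with a weighted Gaussian kernel; the kernel choice and parameter values here are assumptions, not the paper's tuned ones:

```python
import numpy as np

def weighted_kernel_distance(x, center, w, sigma=1.0):
    """Feature-space distance induced by a weighted Gaussian kernel:
    d^2 = k(x,x) + k(c,c) - 2 k(x,c), with k(a,b) = exp(-||w*(a-b)||^2 / (2 sigma^2)),
    so k(x,x) = k(c,c) = 1 and d = sqrt(2 - 2 k(x,c))."""
    k = np.exp(-np.sum((w * (x - center)) ** 2) / (2 * sigma ** 2))
    return np.sqrt(2.0 - 2.0 * k)

def diagnose(x, centers, w, sigma=1.0):
    """Label a new sample with the index of the nearest fault cluster."""
    d = [weighted_kernel_distance(x, c, w, sigma) for c in centers]
    return int(np.argmin(d))
```

In the paper, the weights `w`, the centers, and the kernel parameter are what the gravitational search optimizes; here they are simply given.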

  2. Application of 3D Zernike descriptors to shape-based ligand similarity searching

    Directory of Open Access Journals (Sweden)

    Venkatraman Vishwesh

    2009-12-01

    Full Text Available Abstract Background The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. Results In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape based similarity searching. Performance of this approach was compared with several other methods including the statistical moments based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under the situation that simulates actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparably to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and the failure of the shape based methods for specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. Conclusion The 3DZD has a unique ability for fast comparison of the three-dimensional shape of compounds. The examples analyzed illustrate the advantages and the room for improvement for the 3DZD.
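For contrast with the 3DZD, the moments-based USR scheme mentioned above is simple enough to sketch; USR variants differ in which moments they report, so the raw third central moment used below is an assumption:

```python
import numpy as np

def usr_descriptor(coords):
    """USR-style shape descriptor: three moments of the atomic distance
    distributions to four reference points (the centroid, the atoms
    closest to and farthest from it, and the atom farthest from that
    farthest atom). Alignment-free by construction."""
    coords = np.asarray(coords, float)
    ctd = coords.mean(axis=0)                      # centroid
    d_ctd = np.linalg.norm(coords - ctd, axis=1)
    cst = coords[d_ctd.argmin()]                   # closest to centroid
    fct = coords[d_ctd.argmax()]                   # farthest from centroid
    d_fct = np.linalg.norm(coords - fct, axis=1)
    ftf = coords[d_fct.argmax()]                   # farthest from farthest
    feats = []
    for ref in (ctd, cst, fct, ftf):
        d = np.linalg.norm(coords - ref, axis=1)
        feats += [d.mean(), d.std(), float(((d - d.mean()) ** 3).mean())]
    return np.array(feats)
```

Two conformers are then compared by a similarity on these 12-dimensional vectors, which is why USR screens large libraries so quickly.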

  3. A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems

    Directory of Open Access Journals (Sweden)

    Xuhao Zhang

    2014-01-01

    Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during continuous optimization of current solutions and then taken as skeletons so as to improve the optimum-seeking ability, speed up the optimization process, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. Through adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link split and the roulette approach are employed to choose FLs. 18 LSMDVRP instances in three groups are studied and new optimal solution values for nine of them are obtained, with higher computation speed and reliability.

  4. Tales from the Field: Search Strategies Applied in Web Searching

    Directory of Open Access Journals (Sweden)

    Soohyung Joo

    2010-08-01

    Full Text Available In their web search processes users apply multiple types of search strategies, which consist of different search tactics. This paper identifies eight types of information search strategies with associated cases based on sequences of search tactics during the information search process. Thirty-one participants representing the general public were recruited for this study. Search logs and verbal protocols offered rich data for the identification of different types of search strategies. Based on the findings, the authors further discuss how to enhance web-based information retrieval (IR) systems to support each type of search strategy.

  5. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA Floodway and Flood Boundary Maps, Published in 2005, 1:24000 (1in=2000ft) scale, Lafayette County Land Records.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  6. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, DFIRM's from NC Floodplain Mapping Program, Published in 2009, 1:12000 (1in=1000ft) scale, Iredell County GIS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from LIDAR...

  7. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, Chattahoochee-Flint Regional Data Q3 Flood Data, Published in 2006, 1:12000 (1in=1000ft) scale, Chattahoochee-Flint Regional Development.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from LIDAR...

  8. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA Flood Insurance Rate Maps, Published in 2005, 1:24000 (1in=2000ft) scale, Lafayette County Land Records.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  9. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, Federal Emergency Management Agency (FEMA) - Flood Insurance Rate Maps (FIRM), Published in 2011, 1:1200 (1in=100ft) scale, Polk County, Wisconsin.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Other...

  10. Cellular Phone Towers, Serve as base information for use in GIS systems for general planning, analytical, and research purposes., Published in 2007, 1:24000 (1in=2000ft) scale, Louisiana State University.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cellular Phone Towers dataset, published at 1:24000 (1in=2000ft) scale as of 2007. It is described as 'Serve as base information for use in GIS systems for...

  11. Contours, Elevation contour data are a fundamental base map layer for large scale mapping and GIS analysis., Published in 2001, 1:24000 (1in=2000ft) scale, Louisiana State University.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Contours dataset, published at 1:24000 (1in=2000ft) scale as of 2001. It is described as 'Elevation contour data are a fundamental base map layer for large...

  12. Fairness in scientific publishing

    Science.gov (United States)

    Matthews, Philippa C.

    2017-01-01

    Major changes are afoot in the world of academic publishing, exemplified by innovations in publishing platforms, new approaches to metrics, improvements in our approach to peer review, and a focus on developing and encouraging open access to scientific literature and data. The FAIR acronym recommends that authors and publishers should aim to make their output Findable, Accessible, Interoperable and Reusable. In this opinion article, I explore the parallel view that we should take a collective stance on making the dissemination of scientific data fair in the conventional sense, by being mindful of equity and justice for patients, clinicians, academics, publishers, funders and academic institutions. The views I represent are founded on oral and written dialogue with clinicians, academics and the publishing industry. Further progress is needed to improve collaboration and dialogue between these groups, to reduce misinterpretation of metrics, to minimise inequity that arises as a consequence of geographic setting, to improve economic sustainability, and to broaden the spectrum, scope, and diversity of scientific publication.

  13. Large Neighborhood Search

    DEFF Research Database (Denmark)

    Pisinger, David; Røpke, Stefan

    2010-01-01

    Heuristics based on large neighborhood search have recently shown outstanding results in solving various transportation and scheduling problems. Large neighborhood search methods explore a complex neighborhood by use of heuristics. Using large neighborhoods makes it possible to find better candidate solutions in each iteration and hence traverse a more promising search path. Starting from the large neighborhood search method, we give an overview of very large scale neighborhood search methods and discuss recent variants and extensions like variable depth search and adaptive large neighborhood search.
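The destroy-and-repair loop at the heart of large neighborhood search can be sketched generically; the concrete `cost`, `destroy`, and `repair` functions are problem-specific and the ones used in the test below are toy assumptions:

```python
import random

def lns(solution, cost, destroy, repair, iters=1000, seed=0):
    """Generic large neighborhood search: repeatedly destroy part of the
    incumbent solution, repair it with a heuristic, and accept the
    candidate when it improves the best cost seen so far."""
    rng = random.Random(seed)
    best = solution
    best_cost = cost(best)
    for _ in range(iters):
        candidate = repair(destroy(best, rng), rng)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost
```

In a routing problem, `destroy` would remove a handful of customer visits and `repair` would reinsert them with a greedy or regret heuristic; adaptive LNS additionally learns which destroy/repair pairs to favor.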

  14. Publishers and repositories

    CERN Document Server

    CERN. Geneva

    2007-01-01

    The impact of self-archiving on journals and publishers is an important topic for all those involved in scholarly communication. There is some evidence that the physics arXiv has had no impact on physics journals, while 'economic common sense' suggests that some impact is inevitable. I shall review recent studies of librarian attitudes towards repositories and journals, and place this in the context of IOP Publishing's experiences with arXiv. I shall offer some possible reasons for the mis-match between these perspectives and then discuss how IOP has linked with arXiv and experimented with OA publishing. As well as launching OA journals we have co-operated with Cornell and the arXiv on Eprintweb.org, a platform that offers new features to repository users.

  15. Ethics in Scientific Publishing

    Science.gov (United States)

    Sage, Leslie J.

    2012-08-01

    We all learn in elementary school not to turn in other people's writing as if it were our own (plagiarism), and in high school science labs not to fake our data. But there are many other practices in scientific publishing that are depressingly common and almost as unethical. At about the 20 percent level, authors are deliberately hiding recent work -- by themselves as well as by others -- so as to enhance the apparent novelty of their most recent paper. Some people lie about the dates the data were obtained, to cover up conflicts of interest, or inappropriate use of privileged information. Others will publish the same conference proceeding in multiple volumes, or publish the same result in multiple journals with only trivial additions of data or analysis (self-plagiarism). These shady practices should be roundly condemned and stopped. I will discuss these and other unethical actions I have seen over the years, and steps editors are taking to stop them.

  16. A Software Agent Based Searching Approach for Constructivist Learning Over the Internet

    OpenAIRE

    Pan, Weidong; University of Technology; Huang, Mao Lin; University of Technology; Hawryszkiewycz, Igor; University of Technology

    2004-01-01

    Finding appropriate learning resources on the Internet is an important step in constructivist learning over the Internet. Because the information available on the Internet grows rapidly, it is often difficult for a learner to find a particular learning resource by navigating this large information sea. The use of commercial search engines can make the search much easier, but it is still difficult for ordinary learners. This paper proposes the use of software ag...

  17. Home-Explorer: Ontology-Based Physical Artifact Search and Hidden Object Detection System

    Directory of Open Access Journals (Sweden)

    Bin Guo

    2008-01-01

    Full Text Available A new system named Home-Explorer that searches for and finds physical artifacts in a smart indoor environment is proposed. The view on which it is based is artifact-centered and uses sensors attached to everyday artifacts (called smart objects) in the real world. This paper makes two main contributions: First, it addresses the robustness of the embedded sensors, which is seldom discussed in previous smart artifact research. Because sensors may sometimes be broken or fail to work under certain conditions, smart objects become hidden ones. However, current systems provide no mechanism to detect and manage objects when this problem occurs. Second, there is no common context infrastructure for building smart artifact systems, which makes it difficult for separately developed applications to interact with each other and hard for them to share and reuse knowledge. Unlike previous systems, Home-Explorer builds on an ontology-based knowledge infrastructure named Sixth-Sense, which makes it easy for the system to interact with other applications or agents also based on this ontology. The hidden object problem is also reflected in our ontology, which enables Home-Explorer to deal with both smart objects and hidden objects. A set of rules for deducing an object's status or location information and for locating hidden objects is described and evaluated.

  18. Web publishing of plant operation monitoring system based on SVG/AJAX/Internet

    Institute of Scientific and Technical Information of China (English)

    胡冰; 章坚民; 马国梁; 方文道; 郭明泽

    2011-01-01

    In order to solve the real-time and convenience problems of the operating status and monitoring indexes of a power plant operation monitoring system, a Web publishing scheme based on AJAX is presented. SVG is used as the Web publishing and display format of the graphics system, and an asynchronous communication mechanism with AJAX at its core effectively solves the problem of real-time dynamic refreshing of information. Practical application shows that the monitoring system implemented with this method saves network bandwidth, reduces transmission delay, and updates pages without flicker. Operators can conveniently check the operating status of the thermal power plant units through a Web browser, which helps coordinate the different concerns and interests of the government, the grid company, and the power plant.

  19. A Water Conservancy Metadata Searching and Sharing Platform Based on ElasticSearch

    Institute of Scientific and Technical Information of China (English)

    姜康; 冯钧; 唐志贤; 王超

    2015-01-01

    With the development of informatization in the water conservancy industry, the sharing and discovery of massive, multi-source, heterogeneous data has become a research focus. This paper presents the design and implementation of a water conservancy metadata searching and sharing platform based on ElasticSearch, puts forward a solution for heterogeneous water conservancy data, and builds indexes over massive data. A multi-tenant access control policy is used to ensure the consistency and security of users' index data. REST services encapsulate the index resources to provide searching and multi-granularity sharing. Application shows that the platform enables users to obtain water conservancy industry data accurately and efficiently, saving water conservancy organizations the cost of building their own search systems.
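A tenant-scoped full-text search of the kind described can be expressed with Elasticsearch's standard bool query DSL; the field names below (`metadata_text`, `tenant_id`) are hypothetical placeholders for the platform's actual mapping:

```python
def tenant_search_body(tenant_id, keyword):
    """Build an Elasticsearch query body that matches the keyword in the
    metadata text while a non-scoring term filter restricts results to
    one tenant, enforcing multi-tenant access control at query time."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"metadata_text": keyword}}],
                "filter": [{"term": {"tenant_id": tenant_id}}],
            }
        }
    }
```

Putting the tenant restriction in `filter` rather than `must` keeps it out of relevance scoring and lets Elasticsearch cache it across queries.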

  20. Cluster based hierarchical resource searching model in P2P network

    Institute of Scientific and Technical Information of China (English)

    Yang Ruijuan; Liu Jian; Tian Jingwen

    2007-01-01

    For the problem of the large network load generated by the Gnutella resource-searching model in Peer to Peer (P2P) networks, an improved model to decrease the network expense is proposed, which establishes clusters in the P2P network, self-organizes logical layers, and applies a hybrid mechanism of directional searching and flooding. The performance analysis and simulation results show that the proposed hierarchical searching model effectively reduces the generated message load and that its search-response time is comparable to that of the Gnutella model.

  1. Semantic Based Efficient Retrieval of Relevant Resources and its Services using Search Engines

    Directory of Open Access Journals (Sweden)

    Pradeep Gurunathan

    2014-05-01

    Full Text Available The main objective of this paper is to propose an efficient mechanism for the retrieval of resources using a semantic approach and to exchange information using a Service Oriented Architecture. A framework has been developed to empower users in locating relevant resources and associated services through meaningful semantics. The resources are retrieved efficiently by a Modified Matchmaking Algorithm and dynamic ranking, which shows an improvement over existing search techniques. The retrieval performance of the proposed search mechanism is computed and compared with existing popular search engines such as Google and Yahoo, showing a significant improvement.

  2. Aircraft gas-turbine engines: Noise reduction and vibration control. (Latest citations from Information Services in Mechanical Engineering data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-06-01

    The bibliography contains citations concerning the design and analysis of aircraft gas turbine engines with respect to noise and vibration control. Included are studies regarding the measurement and reduction of noise at its source, within the aircraft, and on the ground. Inlet, nozzle and core aerodynamic studies are cited. Propfan, turbofan, turboprop engines, and applications in short take-off and landing (STOL) aircraft are included. (Contains a minimum of 202 citations and includes a subject term index and title list.)

  3. Superstring theories and models: Cosmological implications. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-09-01

    The bibliography contains citations concerning the use of superstrings in studies of such relativistic phenomena as space-time extension and supergravity. Primordial magnetic monopoles, local cosmic strings, and studies of preon models are among the topics discussed. Calabi-Yau manifolds and supersymmetric Kaluza-Klein theories are also considered. Citations relating specifically to particle studies are included in a separate bibliography. (Contains a minimum of 103 citations and includes a subject term index and title list.)

  4. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    Science.gov (United States)

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method using the best objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems are used in medical applications to assist physicians in clinical decision support and in research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure; similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall, and the systems built on HSA-based Otsu MLT and conventional Otsu MLT are compared. Precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
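The similarity-matching step this record describes (rank database images by the Euclidean distance between feature vectors, then score the ranking with precision and recall) can be sketched as below; the feature vectors and image names are illustrative placeholders, not the paper's retinal features.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, k):
    """Rank (name, feature_vector) pairs by distance to the query
    features and return the k closest image names."""
    ranked = sorted(database, key=lambda item: euclidean(query, item[1]))
    return [name for name, _ in ranked[:k]]

def precision_recall(retrieved, relevant):
    """Precision = hits / retrieved, recall = hits / relevant."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)
```

The same three pieces (distance, ranking, precision/recall) are the skeleton of any CBIR evaluation, whatever the underlying segmentation method.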

  5. Hprints - Licence to publish

    DEFF Research Database (Denmark)

    Rabow, Ingegerd; Sikström, Marjatta; Drachen, Thea Marie;

    2010-01-01

    realised the potential advantages for them. The universities have a role here as well as the libraries that manage the archives and support scholars in various aspects of the publishing processes. Libraries are traditionally service providers with a mission to facilitate the knowledge production...

  6. Scholars | Digital Representation | Publishing

    Science.gov (United States)

    Hodgson, Justin

    2014-01-01

    Understanding the current state of digital publishing means that writers can now do more and say more in more ways than ever before in human history. As modes, methods, media and mechanisms of expression mutate into newer and newer digital forms, writers find themselves at a moment when they can create, critique, collaborate, and comment according…

  7. Searching for first-degree familial relationships in California's offender DNA database: validation of a likelihood ratio-based approach.

    Science.gov (United States)

    Myers, Steven P; Timken, Mark D; Piucci, Matthew L; Sims, Gary A; Greenwald, Michael A; Weigand, James J; Konzak, Kenneth C; Buoncristiani, Martin R

    2011-11-01

    A validation study was performed to measure the effectiveness of using a likelihood ratio-based approach to search for possible first-degree familial relationships (full-sibling and parent-child) by comparing an evidence autosomal short tandem repeat (STR) profile to California's ∼1,000,000-profile State DNA Index System (SDIS) database. Test searches used autosomal STR and Y-STR profiles generated for 100 artificial test families. When the test sample and the first-degree relative in the database were characterized at the 15 Identifiler® (Applied Biosystems®, Foster City, CA) STR loci, the search procedure included 96% of the fathers and 72% of the full-siblings. When the relative profile was limited to the 13 Combined DNA Index System (CODIS) core loci, the search procedure included 93% of the fathers and 61% of the full-siblings. These results, combined with those of functional tests using three real families, support the effectiveness of this tool. Based upon these results, the validated approach was implemented as a key, pragmatic and demonstrably practical component of the California Department of Justice's Familial Search Program. An investigative lead created through this process recently led to an arrest in the Los Angeles Grim Sleeper serial murders.
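As a toy illustration of the parent-child case (a deliberate simplification, not the validated likelihood-ratio procedure of the study): a true parent shares at least one allele with the child at every autosomal STR locus, barring mutation, so candidates failing that identity-by-state test can be excluded before any likelihood ratio is computed. The profiles below are invented.

```python
def shares_allele(locus_a, locus_b):
    """True if two genotypes (pairs of alleles) share at least one allele."""
    return bool(set(locus_a) & set(locus_b))

def candidate_parents(evidence, database):
    """Toy identity-by-state screen: keep only database profiles that
    share an allele with the evidence profile at every locus. A real
    familial search instead ranks candidates by a likelihood ratio that
    weighs population allele frequencies; this is only the exclusion step."""
    return [name for name, profile in database
            if all(shares_allele(e, p) for e, p in zip(evidence, profile))]
```

Ranking survivors by a per-locus likelihood ratio, as the validated approach does, is what turns this crude filter into a usable investigative lead generator.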

  8. Agent based simulation on the process of human flesh search-From perspective of knowledge and emotion

    Science.gov (United States)

    Zhu, Hou; Hu, Bin

    2017-03-01

    Human flesh search, a new form of networked crowd behavior, can on the one hand help to find specific information, and on the other hand may lead to privacy leaks and violations of human rights. In order to study the mechanism of human flesh search, this paper proposes a simulation model based on agent-based modeling and complex networks. The computational experiments show some useful results. The quantity of discovered information and the ratio of participants are highly correlated, and most net citizens either take part in the human flesh search or stay out of it entirely. Knowledge quantity does not influence the participation ratio, but it does influence whether the search can find the target person. When knowledge is concentrated on hub nodes, the discovered information quantity is either complete or almost zero. The emotion of net citizens influences both the discovered information quantity and the participation ratio: when net citizens are calm about the search topic, the target is rarely found, but when they are agitated, the target is found easily.
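A minimal sketch of the kind of agent-based cascade this record studies, under assumed rules (participation spreads when emotionally aroused citizens see enough active neighbours, and participants pool their knowledge); the threshold rule and parameters are illustrative, not the authors' model.

```python
import random

def hfs_round(adj, knowledge, emotion, threshold, seed=0):
    """Toy human-flesh-search cascade: a net citizen joins when the
    share of already-active neighbours, scaled by his or her emotional
    arousal, reaches a threshold; every participant contributes his or
    her knowledge to the pool of discovered information."""
    rng = random.Random(seed)
    active = {rng.randrange(len(adj))}   # the citizen who starts the topic
    changed = True
    while changed:
        changed = False
        for node in range(len(adj)):
            if node in active or not adj[node]:
                continue
            share = sum(nb in active for nb in adj[node]) / len(adj[node])
            if emotion[node] * share >= threshold:
                active.add(node)
                changed = True
    info = sum(knowledge[n] for n in active)
    return active, info
```

Even this toy reproduces the paper's qualitative pattern: with low arousal (high effective threshold) the cascade dies at the initiator, while agitated populations tip into near-total participation.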

  9. A surrogate-based metaheuristic global search method for beam angle selection in radiation treatment planning

    Science.gov (United States)

    Zhang, H. H.; Gao, S.; Chen, W.; Shi, L.; D'Souza, W. D.; Meyer, R. R.

    2013-03-01

    An important element of radiation treatment planning for cancer therapy is the selection of beam angles (out of all possible coplanar and non-coplanar angles in relation to the patient) in order to maximize the delivery of radiation to the tumor site and minimize radiation damage to nearby organs-at-risk. This category of combinatorial optimization problem is particularly difficult because direct evaluation of the quality of treatment corresponding to any proposed selection of beams requires the solution of a large-scale dose optimization problem involving many thousands of variables that represent doses delivered to volume elements (voxels) in the patient. However, if the quality of angle sets can be accurately estimated without expensive computation, a large number of angle sets can be considered, increasing the likelihood of identifying a very high quality set. Using a computationally efficient surrogate beam set evaluation procedure based on single-beam data extracted from plans employing equally-spaced beams (eplans), we have developed a global search metaheuristic process based on the nested partitions framework for this combinatorial optimization problem. The surrogate scoring mechanism allows us to assess thousands of beam set samples within a clinically acceptable time frame. Tests on difficult clinical cases demonstrate that the beam sets obtained via our method are of superior quality.
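The core idea above, scoring many beam sets cheaply before any full dose optimization, can be sketched as follows, assuming (as a simplification, not the paper's exact surrogate derived from equally-spaced-beam plans) that a beam set's surrogate score is just the sum of precomputed single-beam scores.

```python
import itertools

def surrogate_score(beam_set, single_beam_score):
    """Cheap additive proxy for plan quality: sum of precomputed
    single-beam scores, standing in for the expensive full dose
    optimisation over thousands of voxel variables."""
    return sum(single_beam_score[b] for b in beam_set)

def best_beam_sets(angles, k, single_beam_score, keep=3):
    """Rank every k-beam subset by surrogate score (higher is better)
    and keep only the top few for exact evaluation afterwards."""
    sets = itertools.combinations(angles, k)
    return sorted(sets, key=lambda s: surrogate_score(s, single_beam_score),
                  reverse=True)[:keep]
```

In the paper the exhaustive `combinations` scan is replaced by a nested-partitions metaheuristic, which is what makes the approach scale to non-coplanar angle spaces.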

  10. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    Science.gov (United States)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as an optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed using Java. The model is based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results have turned out to be promising, and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-inspired algorithms and test them with various sets of data.
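The textbook GSA update rules (mass from normalised fitness, a gravitational constant that decays over iterations, force-derived acceleration added to a randomly damped velocity) can be sketched as below for a generic minimisation problem. This is a generic illustration under those standard equations, not the authors' Java engine or their four sputtering parameters.

```python
import math
import random

def gsa(objective, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=1):
    """Compact Gravitational Search Algorithm for minimisation.
    Fitter agents receive larger masses and pull the swarm towards
    themselves; the gravitational constant decays over time to shift
    from exploration to exploitation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [objective(p) for p in pos]
        fbest, fworst = min(fit), max(fit)
        if fbest < best_f:
            best_f, best_x = fbest, pos[fit.index(fbest)][:]
        # normalised masses: better fitness -> heavier agent
        raw = [(fworst - f) / (fworst - fbest + 1e-12) for f in fit]
        total = sum(raw) + 1e-12
        mass = [m / total for m in raw]
        g = g0 * math.exp(-alpha * t / iters)   # decaying gravitational "constant"
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                r = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / r
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
    return best_x, best_f
```

In the sputtering setting, `objective` would be a measured or modelled film-quality score over (RF power, deposition time, oxygen flow rate, substrate temperature), with `bounds` set to the equipment's feasible ranges.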

  11. Augmented Reality for Searching Potential Assets in Medan using GPS based Tracking

    Science.gov (United States)

    Muchtar, M. A.; Syahputra, M. F.; Syahputra, N.; Ashrafia, S.; Rahmat, R. F.

    2017-01-01

    Every city needs to introduce its variety of potential assets so that people know how to utilize or develop their area. Potential assets include infrastructure, facilities, people, communities, organizations and customs that shape the character and way of life in Medan. Due to a lack of socialization and information, most people in Medan know only a few of these assets. Recently, many mobile apps have provided location search and mapping features used to find potential assets in the user's area. However, the available information, such as text and digital maps, does not always help the user clearly and dynamically. Therefore, Augmented Reality technology, which can display information overlaid on a real-world view, is implemented in this research so that the information becomes more interactive and easily understood. The technology is implemented in a mobile app using GPS-based tracking, with the coordinates of the user's smartphone defined as a marker, so that people can dynamically and easily find the locations of potential assets in the nearest area based on the direction of the user's camera view.
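The GPS step of such an app, finding which assets lie within some radius of the user's fix and ordering them by great-circle distance, can be sketched with the haversine formula; the coordinates below are illustrative placeholders, not data from the paper.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0                                  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_assets(user, assets, radius_km):
    """Return names of (name, lat, lon) assets within the radius of the
    user's (lat, lon) fix, nearest first."""
    hits = [(haversine_km(*user, lat, lon), name) for name, lat, lon in assets]
    return [name for d, name in sorted(hits) if d <= radius_km]
```

An AR layer would then project each returned asset onto the camera view using the bearing between the user's fix and the asset.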

  12. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    Full Text Available This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. The process requires an optimized combination of deposition parameters in order to obtain the desirable thin film. The conventional method of optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique is proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desirable electrical and optical properties of the thin film. The performance of GSA was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical properties and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.

  13. NESSiE: The Experimental Sterile Neutrino Search in Short-Base-Line at CERN

    CERN Document Server

    Kose, Umut

    2013-01-01

    Several different experimental results indicate the existence of anomalies in the neutrino sector. Models beyond the Standard Model have been developed to explain these results and involve one or more additional neutrinos that do not interact weakly. A new experimental program is therefore needed to study this potential new physics with a possibly new Short-Base-Line neutrino beam at CERN. CERN is currently promoting the start-up of a New Neutrino Facility in the North Area site, which may host two complementary detectors, one based on LAr technology and one corresponding to a muon spectrometer. The system is doubled in two different sites. With regard to the latter option, NESSiE, the Neutrino Experiment with Spectrometers in Europe, has been proposed for the search for sterile neutrinos by studying Charged Current (CC) muon neutrino and antineutrino interactions. The detector consists of two magnetic spectrometers to be located in two sites, "Near" and "Far" from the proton target of the CERN-SPS beam. Each sp...

  14. Searching chromosomal landmarks in Indian lentils through EMA-based Giemsa staining method.

    Science.gov (United States)

    Jha, Timir Baran; Halder, Mihir

    2016-09-01

    Lentil is one of the oldest protein-rich food crops, with only one cultivated and six wild species. India is an important cultivator, producer and consumer of lentils and possesses a large number of germplasms. All species of lentil show 2n = 14 chromosomes. The primary objective of the present paper is to search for chromosomal landmarks through an enzymatic maceration and air drying (EMA)-based Giemsa staining method in five Indian lentil species, which has not been reported elsewhere for all five at once. Additionally, gametic chromosome analysis, tendril formation and seed morphology have been studied to ascertain interspecific relationships in lentils. Chromosome analysis in Lens culinaris, Lens orientalis and Lens odemensis revealed that they contain an intercalary sat chromosome and similar karyotypic formulae, while Lens nigricans and Lens lamottei showed the presence of terminal sat chromosomes, not reported earlier. This distinct morphological feature of L. nigricans and L. lamottei may be considered a chromosomal landmark. Meiotic analysis showed n = 7 bivalents in L. culinaris, L. nigricans and L. lamottei. No tendril formation was observed in L. culinaris, L. orientalis and L. odemensis, while L. nigricans and L. lamottei developed very prominent tendrils. Based on chromosomal analysis, tendril formation and seed morphology, the five lentil species can be separated into two distinct groups. The outcome of this research may enrich conventional and biotechnological breeding programmes in lentil and may facilitate an easy and alternative method for the identification of interspecific hybrids.

  15. [Evidence-based clinical practice. Part II--Searching evidence databases].

    Science.gov (United States)

    Bernardo, Wanderley Marques; Nobre, Moacyr Roberto Cuce; Jatene, Fábio Biscegli

    2004-01-01

    Most traditional sources of medical information, such as textbooks and review articles, do not sustain clinical decisions based on the best evidence currently available, exposing the patient to unnecessary risk. Although not integrated around clinical problem areas in the convenient way of textbooks, current best evidence from specific studies of clinical problems can be found in a growing number of Internet and electronic databases. Sources that have already undergone rigorous critical appraisal are classified as secondary information sources; others that provide access to original articles or abstracts are primary information sources, where the quality assessment of the article relies on the clinician. The most useful primary information sources are SciELO, the online collection of Brazilian scientific journals, and Medline, the comprehensive database of the US National Library of Medicine, where the search may start with keywords obtained during construction of the structured question (P.I.C.O.), combined with the Boolean operators "AND", "OR" and "NOT". Among the secondary information sources, some provide critically appraised articles, like ACP Journal Club, Evidence-Based Medicine and InfoPOEMs; others provide evidence organized as online texts, such as Clinical Evidence and UpToDate; finally, the Cochrane Library is composed of systematic reviews of randomized controlled trials. Retrieving studies that can answer the clinical question is part of a mindful practice that is becoming ever quicker and more dynamic with the use of PDAs, palmtops and notebooks.
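The keyword step described above can be sketched as a small helper that turns the four P.I.C.O. concept lists into a Medline-style Boolean string (synonyms OR-ed within a concept, concepts AND-ed together); the helper name and example terms are illustrative, not from the article.

```python
def pico_query(population, intervention, comparison, outcome):
    """Combine P.I.C.O. concept synonym lists into a Boolean search
    string of the kind accepted by Medline/PubMed: synonyms are joined
    with OR inside a concept, concepts are joined with AND, and empty
    concepts (e.g. no comparison) are skipped."""
    groups = [population, intervention, comparison, outcome]
    clauses = ["(" + " OR ".join(terms) + ")" for terms in groups if terms]
    return " AND ".join(clauses)
```

The NOT operator would be applied afterwards to exclude unwanted study types or populations from the combined result.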

  16. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem.

    Science.gov (United States)

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them.

  17. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    Directory of Open Access Journals (Sweden)

    Wee Loon Lim

    2016-01-01

    Full Text Available The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them.
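The tabu-search component that both records describe plugging into BBO can be sketched on its own: evaluate the QAP cost of every pairwise swap, take the best non-tabu move (with aspiration for moves that beat the incumbent), and forbid the swapped pair for a fixed tenure. This is a minimal generic version, not the hybrid algorithm of the paper.

```python
import itertools

def qap_cost(flow, dist, perm):
    """Cost of assigning facility i to location perm[i]:
    sum over facility pairs of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_search(flow, dist, perm, iters=50, tenure=5):
    """Minimal swap-neighbourhood tabu search: each iteration commits
    the best pairwise swap that is not tabu (aspiration: a tabu move is
    allowed if it beats the best cost seen so far), then makes the
    swapped pair tabu for `tenure` iterations."""
    cur = list(perm)
    best = list(perm)
    best_cost = qap_cost(flow, dist, best)
    tabu = {}                 # (i, j) -> last iteration at which the pair is tabu
    for t in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(len(cur)), 2):
            cur[i], cur[j] = cur[j], cur[i]       # try the swap
            c = qap_cost(flow, dist, cur)
            cur[i], cur[j] = cur[j], cur[i]       # undo it
            if tabu.get((i, j), -1) < t or c < best_cost:
                candidates.append((c, i, j))
        if not candidates:    # everything tabu and no aspiration: stop
            break
        c, i, j = min(candidates)
        cur[i], cur[j] = cur[j], cur[i]           # commit the best move
        tabu[(i, j)] = t + tenure
        if c < best_cost:
            best, best_cost = list(cur), c
    return best, best_cost
```

In the hybrid of the paper, this local-search routine replaces BBO's mutation operator, so each island's solution is improved rather than randomly perturbed.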

  18. R-Hadron Search at ATLAS

    DEFF Research Database (Denmark)

    Heisterkamp, Simon Johann Franz

    In this thesis I motivate and present a search for long-lived massive R-hadrons using the data collected by the ATLAS detector in 2011. Both ionisation- and time-of-flight-based methods are described. Since no signal was found, a lower limit on the mass of such particles is set. The analysis was also... published by the ATLAS collaboration in Phys. Lett. B, titled 'Searches for heavy long-lived sleptons and R-Hadrons with the ATLAS detector in pp collisions at sqrt(s) = 7 TeV'....

  19. Effective access to digital assets: An XML-based EAD search system

    NARCIS (Netherlands)

    Zhang, J.; Fachry, K.N.; Kamps, J.; Tibbo, H.R.; Hank, C.; Lee, C.A.; Clemens, R.

    2009-01-01

    This paper focuses on the question of effective access methods, by developing novel search tools that will be crucial on the massive scale of digital asset repositories. We illustrate concretely why XML matters in digital curation by describing an implementation of a baseline digital asset search sy

  20. Identifying the Impact of Domain Knowledge and Cognitive Style on Web-Based Information Search Behavior

    Science.gov (United States)

    Park, Young; Black, John B.

    2007-01-01

    Although information searching in hypermedia environments has become a new important problem solving capability, there is not much known about what types of individual characteristics constitute a successful information search behavior. This study mainly investigated which of the 2 factors, 1) natural characteristics (cognitive style), and 2)…

  1. Predicting relevance based on assessor disagreement: analysis and practical applications for search evaluation

    NARCIS (Netherlands)

    Demeester, Thomas; Aly, Robin; Hiemstra, Djoerd; Nguyen, Dong-Phuong; Develder, Chris

    2015-01-01

    Evaluation of search engines relies on assessments of search results for selected test queries, from which we would ideally like to draw conclusions in terms of relevance of the results for general (e.g., future, unknown) users. In practice however, most evaluation scenarios only allow us to conclus

  2. A Strategic Analysis of Search Engine Advertising in Web based-commerce

    Directory of Open Access Journals (Sweden)

    Ela Kumar

    2007-08-01

    Full Text Available The endeavor of this paper is to explore the role of search engines in the online business industry. The paper discusses search engine advertising programs and provides insight into the revenue generated online via search engines. It explores the growth of the online business industry in India and emphasizes the role of the search engine as the major advertising vehicle. A case study on the revolution of the Indian advertising industry has been conducted and its impact on online revenue evaluated. Search engine advertising strategies are discussed in detail, and the impact of search engines on the Indian advertising industry is analyzed. The paper also provides an analytical and competitive study of online advertising strategies versus traditional advertising tools, evaluating their efficiencies against important advertising parameters. It concludes with a brief discussion of the malpractices that adversely affect the efficiency of the search engine advertising model, and highlights the key hurdles the search engine industry faces in the Indian business scenario.

  3. Design of personalized search engine based on user-webpage dynamic model

    Science.gov (United States)

    Li, Jihan; Li, Shanglin; Zhu, Yingke; Xiao, Bo

    2013-12-01

    Personalized search engine focuses on establishing a user-webpage dynamic model. In this model, users' personalized factors are introduced so that the search engine is better able to provide the user with targeted feedback. This paper constructs user and webpage dynamic vector tables, introduces singular value decomposition analysis in the processes of topic categorization, and extends the traditional PageRank algorithm.
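The extension mentioned above builds on the classic PageRank power iteration, which can be sketched as below; personalisation would replace the uniform teleport term with a user-preference vector. This sketch is the standard algorithm, not the authors' extended version.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over {page: [outgoing links]}; every
    page must appear as a key. The uniform teleport term (1-d)/n is
    what a personalised variant would replace with a user-specific
    preference vector."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                      # dangling page: spread rank uniformly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

With a preference vector, pages related to topics the user-webpage model marks as interesting receive extra teleport mass and therefore rank higher for that user.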

  4. Marine Planning and Service Platform: specific ontology based semantic search engine serving data management and sustainable development

    Science.gov (United States)

    Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea

    2016-04-01

    The MAPS (Marine Planning and Service Platform) project aims at building a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that gathers, classifies and structures marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost of data collection is very high, and the new paradigm is based on the concept of collecting once and re-using many times (for re-analysis, marine environment assessment, studies on trends, etc.). This concept requires access to quality-controlled data and to information that is provided in reports (grey literature) and/or in the relevant scientific literature. Hence, new technology is needed that integrates several disciplines such as data management, information systems and knowledge management. In one of the most important EC projects on data management, namely SeaDataNet (www.seadatanet.org), an initial example of knowledge management is provided through the Common Data Index, which provides links to data and (eventually) to papers. There are efforts to develop search engines to find authors' contributions to scientific literature or publications. This implies the use of persistent identifiers (such as DOIs), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited or used, or to data that may be of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; in order to fix this problem, the MAPS project suggests using different semantic technologies for retrieving text and data and thus getting much more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text

  5. Optimizing Online Suicide Prevention: A Search Engine-Based Tailored Approach.

    Science.gov (United States)

    Arendt, Florian; Scherr, Sebastian

    2016-10-14

    Search engines are increasingly used to seek suicide-related information online, which can serve both harmful and helpful purposes. Google acknowledges this fact and presents a suicide-prevention result for particular search terms. Unfortunately, the result is only presented to a limited number of visitors. Hence, Google is missing the opportunity to provide help to vulnerable people. We propose a two-step approach to a tailored optimization: First, research will identify the risk factors. Second, search engines will reweight algorithms according to the risk factors. In this study, we show that the query share of the search term "poisoning" on Google shows substantial peaks corresponding to peaks in actual suicidal behavior. Accordingly, thresholds for showing the suicide-prevention result should be set to the lowest levels during the spring, on Sundays and Mondays, on New Year's Day, and on Saturdays following Thanksgiving. Search engines can help to save lives globally by utilizing a more tailored approach to suicide prevention.

  6. Collab-Analyzer: An Environment for Conducting Web-Based Collaborative Learning Activities and Analyzing Students' Information-Searching Behaviors

    Science.gov (United States)

    Wu, Chih-Hsiang; Hwang, Gwo-Jen; Kuo, Fan-Ray

    2014-01-01

    Researchers have found that students might get lost or feel frustrated while searching for information on the Internet to deal with complex problems without real-time guidance or supports. To address this issue, a web-based collaborative learning system, Collab-Analyzer, is proposed in this paper. It is not only equipped with a collaborative…

  7. Design considerations for a large-scale image-based text search engine in historical manuscript collections

    NARCIS (Netherlands)

    Schomaker, Lambertus

    2016-01-01

    This article gives an overview of design considerations for a handwriting search engine based on pattern recognition and high-performance computing, “Monk”. In order to satisfy multiple and often conflicting technological requirements, an architecture is used which heavily relies on high-performance

  8. BredeQuery: Coordinate-Based Meta-analytic Search of Neuroscientific Literature from the SPM Environment

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Szewczyk, Marcin Marek; Rasmussen, Peter Mondrup

    2010-01-01

    Query offers a direct link from SPM to the Brede Database coordinate-based search engine. BredeQuery is able to ‘grab’ brain location coordinates from the SPM windows and enter them as a query for the Brede Database. Moreover, results of the query can be displayed in a MATLAB window and/or exported directly...

  9. A new model of information behaviour based on the Search Situation Transition schema

    Directory of Open Access Journals (Sweden)

    Nils Pharo

    2004-01-01

    Full Text Available This paper presents a conceptual model of information behaviour. The model is part of the Search Situation Transition method schema. The method schema is developed to discover and analyse the interplay between phenomena traditionally analysed as factors influencing either information retrieval or information seeking. In this paper the focus is on the model's five main categories: the work task, the searcher, the social/organisational environment, the search task, and the search process. In particular, the search process and its sub-categories, search situation and transition, and the relationships between these are discussed. To justify the method schema, an empirical study was designed according to the schema's specifications. In the paper a subset of the study is presented, analysing the effects of work tasks on Web information searching. Findings from this small-scale study indicate a strong relationship between the work task goal and the level of relevance used for judging resources during search processes.

  10. Publishing translated works: Examining the process

    OpenAIRE

    Garby, Taisha Mary

    2015-01-01

    Greystone Books Ltd., based in Vancouver, publishes many translated works. This report is intended to examine the benefits of publishing translated works and compare that to publishing original English language works. This report will analyze two translated works: Gut: The Inside Story of Our Body’s Most Underrated Organ by Giulia Enders, which was translated from German to English, and 1000 Lashes: Because I Say What I Think by Raif Badawi, which was translated from Arabic to English. Greyst...

  11. Prepare to publish.

    Science.gov (United States)

    Price, P M

    2000-01-01

    "I couldn't possibly write an article." "I don't have anything worthwhile to write about." "I am not qualified to write for publication." Do any of these statements sound familiar? This article is intended to dispel these beliefs. You can write an article. You care for the most complex patients in the health care system, so you do have something worthwhile to write about. Besides correct spelling and grammar, there are no special skills, certificates or diplomas required for publishing. You are qualified to write for publication. The purpose of this article is to take the mystique out of the publication process. Each step of publishing an article will be explained, from idea formation to framing your first article. Practical examples and recommendations will be presented. The essential components of the APA format necessary for Dynamics: The Official Journal of the Canadian Association of Critical Care Nurses will be outlined, and resources to assist you will be provided.

  12. Support open access publishing

    DEFF Research Database (Denmark)

    Ekstrøm, Jeannette

    2013-01-01

    The Support Open Access Publishing project aims to update the Sherpa/Romeo database (www.sherpa.ac.uk/romeo) with professionally relevant Danish journals. The project will also investigate the possibilities of developing a database in which researchers can quickly gain an overview across relevant journal information (subject discipline, BFI level, Impact Factor, Open Access), enabling them to make a qualified choice about where and how to publish their research results.

  13. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    Science.gov (United States)

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted into a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved using a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
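The Dice coefficient used above to validate automated against manual contours is straightforward to compute from two binary masks; a minimal NumPy sketch (the mask names and toy 2D example are illustrative, not the study's data):

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D example: two 4x4 squares with a 3x3 overlap
auto = np.zeros((10, 10));   auto[2:6, 2:6] = 1
manual = np.zeros((10, 10)); manual[3:7, 3:7] = 1
print(dice_coefficient(auto, manual))  # 2*9 / (16+16) = 0.5625
```

Values near 1.0, like the 0.971 reported for the aorta, indicate near-complete voxel-wise agreement, while elongated thin structures such as the spinal cord (0.740) are penalized heavily by small boundary offsets.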

  14. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud

    Directory of Open Access Journals (Sweden)

    Shyamala Devi Munisamy

    2015-01-01

    Full Text Available Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet and facilitates third-party infrastructure and applications. While customers have no visibility into how their data is stored on the service provider’s premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make data access effective through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for the multiple-keyword request, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.
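A core ingredient of such fuzzy search schemes is the fuzzy keyword set: every variant of a keyword within a small edit distance is enumerated so that typos still resolve to the canonical keyword. A minimal sketch of that idea for edit distance 1 (the function names are illustrative, not the paper's construction, and the real scheme would index encrypted trapdoors rather than plaintext):

```python
import string

def edit_distance_one_variants(word: str) -> set[str]:
    """All strings within edit distance 1 of `word`
    (deletions, substitutions, and insertions over a-z)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    substitutes = {l + c + r[1:] for l, r in splits if r for c in letters}
    inserts = {l + c + r for l, r in splits for c in letters}
    return deletes | substitutes | inserts | {word}

def build_fuzzy_index(keywords: list[str]) -> dict[str, str]:
    """Map every fuzzy variant back to its canonical keyword."""
    index: dict[str, str] = {}
    for kw in keywords:
        for variant in edit_distance_one_variants(kw):
            index.setdefault(variant, kw)
    return index

index = build_fuzzy_index(["cloud", "search"])
print(index["cloyd"])  # substitution typo resolves to "cloud"
print(index["searh"])  # deleted letter resolves to "search"
```

The variant sets grow quickly with keyword length and edit distance, which is exactly why the paper organizes them in a BTree-style searchable index instead of a flat table.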

  15. Universal Keyword Classifier on Public Key Based Encrypted Multikeyword Fuzzy Search in Public Cloud.

    Science.gov (United States)

    Munisamy, Shyamala Devi; Chokkalingam, Arun

    2015-01-01

    Cloud computing has pioneered the emerging world by manifesting itself as a service through the internet and facilitates third-party infrastructure and applications. While customers have no visibility into how their data is stored on the service provider's premises, it offers greater benefits in lowering infrastructure costs and delivering more flexibility and simplicity in managing private data. The opportunity to use cloud services on a pay-per-use basis provides comfort for private data owners in managing costs and data. With the pervasive usage of the internet, the focus has now shifted towards effective data utilization on the cloud without compromising security concerns. In the pursuit of increasing data utilization on public cloud storage, the key is to make data access effective through several fuzzy searching techniques. In this paper, we discuss the existing fuzzy searching techniques and focus on reducing the searching time on the cloud storage server for effective data utilization. Our proposed Asymmetric Classifier Multikeyword Fuzzy Search method provides a classifier search server that creates a universal keyword classifier for the multiple-keyword request, which greatly reduces the searching time by learning the search path pattern for all the keywords in the fuzzy keyword set. The objective of using a BTree fuzzy searchable index is to resolve typos and representation inconsistencies and also to facilitate effective data utilization.

  16. Towards Hypermedia Electronic Publishing

    OpenAIRE

    Konstantas, Dimitri; Morin, Jean-Henry

    1995-01-01

    The most important problem that decision makers face in today's ever-increasing information flux is how to find useful information quickly and efficiently. Hypermedia Electronic Publishing systems, supporting active information distribution and offering hypertext browsing facilities, provide a promising solution to this problem. Nevertheless, several issues, like value-added services, retrieval and access mechanisms, information marketing, as well as financial and security aspects, should be res...

  17. Reclaiming Society Publishing

    Directory of Open Access Journals (Sweden)

    Philip E. Steinberg

    2015-07-01

    Full Text Available Learned societies have become aligned with commercial publishers, who have increasingly taken over the societies' former function as independent providers of scholarly information. Using the example of geographical societies, the advantages and disadvantages of this trend are examined. It is argued that in an era of digital publication, learned societies can offer leadership with a new model of open access that can guarantee high-quality scholarly material whose publication costs are supported by society membership dues.

  18. Health risk assessment of polycyclic aromatic hydrocarbons in the source water and drinking water of China: Quantitative analysis based on published monitoring data.

    Science.gov (United States)

    Wu, Bing; Zhang, Yan; Zhang, Xu-Xiang; Cheng, Shu-Pei

    2011-12-01

    A carcinogenic risk assessment of polycyclic aromatic hydrocarbons (PAHs) in the source water and drinking water of China was conducted using probabilistic techniques from a national perspective. The published monitoring data of PAHs were gathered and converted into BaP equivalent (BaP(eq)) concentrations. Based on the transformed data, a comprehensive risk assessment was performed by considering different age groups and exposure pathways. Monte Carlo simulation and sensitivity analysis were applied to quantify the uncertainties of the risk estimation. The risk analysis indicated that the risk values for children and teens were lower than the accepted value (1.00E-05), indicating no significant carcinogenic risk. The probability of risk values above 1.00E-05 was 5.8% and 6.7% for the adult and lifetime groups, respectively. Overall, the carcinogenic risks of PAHs in source water and drinking water of China were mostly acceptable. However, specific regions, such as the Lanzhou reach of the Yellow River and the Qiantang River, deserve more attention. Notwithstanding the uncertainties inherent in the risk assessment, this study is the first attempt to provide information on the carcinogenic risk of PAHs in source water and drinking water of China, and it might be useful for potential strategies of carcinogenic risk management and reduction.
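The probabilistic core of such an assessment, propagating uncertainty in concentration, intake, and body weight through the standard carcinogenic-risk calculation via Monte Carlo simulation, can be sketched as follows. All distributions and parameter values here are illustrative placeholders, not the study's data; the structure simply mirrors the usual risk = daily dose × slope factor formula:

```python
import random

def simulate_carcinogenic_risk(n: int = 100_000, seed: int = 42) -> float:
    """Monte Carlo estimate of P(risk > 1e-5) for ingestion of a
    BaP-equivalent contaminant in drinking water. Illustrative only."""
    rng = random.Random(seed)
    slope_factor = 7.3  # oral slope factor for BaP, (mg/kg/day)^-1 (assumed)
    exceed = 0
    for _ in range(n):
        # BaP_eq concentration in water (mg/L), lognormal, median ~1e-5
        conc = rng.lognormvariate(-11.5, 1.0)
        intake = rng.uniform(1.0, 2.5)               # water intake (L/day)
        body_weight = max(rng.normalvariate(60, 10), 30.0)  # adult (kg)
        daily_dose = conc * intake / body_weight     # mg/kg/day
        risk = daily_dose * slope_factor
        if risk > 1e-5:
            exceed += 1
    return exceed / n

print(f"P(risk > 1e-5) ≈ {simulate_carcinogenic_risk():.3f}")
```

A sensitivity analysis like the study's would then vary one input distribution at a time to see which parameter dominates the spread of the output risk.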

  19. Custom Search Engines: Tools & Tips

    Science.gov (United States)

    Notess, Greg R.

    2008-01-01

    Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine or Yahoo! Search Builder or any of the other similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…

  20. Sensitivity Comparison of Searches for Binary Black Hole Coalescences with Ground-based Gravitational-Wave Detectors

    CERN Document Server

    Mohapatra, Satya; Caudill, Sarah; Clark, James; Hanna, Chad; Klimenko, Sergey; Pankow, Chris; Vaulin, Ruslan; Vedovato, Gabriele; Vitale, Salvatore

    2014-01-01

    Searches for gravitational-wave transients from binary black hole coalescences typically rely on one of two approaches: matched filtering with templates and morphology-independent excess power searches. Multiple algorithmic implementations in the analysis of data from the first generation of ground-based gravitational wave interferometers have used different strategies for the suppression of non-Gaussian noise transients, and targeted different regions of the binary black hole parameter space. In this paper we compare the sensitivity of three such algorithms: matched filtering with full coalescence templates, matched filtering with ringdown templates, and a morphology-independent excess power search. The comparison is performed at a fixed false alarm rate and relies on Monte Carlo simulations of binary black hole coalescences for spinning, non-precessing systems with total mass 25-350 solar masses, which covers the parameter space of stellar-mass and intermediate-mass black hole binaries. We find that in the mas...
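Matched filtering, the template-based strategy in this comparison, is at heart a noise-normalized correlation of the data stream against a known waveform. A toy time-domain sketch with NumPy, assuming white Gaussian noise of unit variance (real pipelines work in the frequency domain against a measured noise power spectrum; the chirp-like template and injection point below are purely illustrative):

```python
import numpy as np

def matched_filter_snr(data: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Sliding correlation of the data against a unit-norm template,
    an SNR time series when the noise is white Gaussian with sigma = 1."""
    tmpl = template / np.linalg.norm(template)
    return np.correlate(data, tmpl, mode="valid")

rng = np.random.default_rng(0)
fs = 1024                                  # sample rate (Hz)
t = np.arange(0, 0.25, 1 / fs)             # 256-sample template
template = np.sin(2 * np.pi * (50 + 200 * t) * t)  # toy chirp-like waveform

data = rng.normal(0.0, 1.0, 4096)          # white Gaussian noise
# Inject the template at sample 1000, scaled to matched-filter SNR ~ 8
data[1000:1000 + template.size] += 8 * template / np.linalg.norm(template)

snr = matched_filter_snr(data, template)
print(int(np.argmax(np.abs(snr))))         # peak lands near sample 1000
```

An excess power search, by contrast, would look for clusters of loud time-frequency pixels without assuming any template, which is what makes the sensitivity comparison in the paper non-trivial.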