WorldWideScience

Sample records for base published search

  1. PUBLISHING WEB SERVICES WHICH ENABLE SYSTEMATIC SEARCH

    Directory of Open Access Journals (Sweden)

    Vitezslav Nezval

    2012-01-01

    Full Text Available Web Services (WS) are used for the development of distributed applications containing native code assembled with references to remote Web Services. There are thousands of Web Services available on the Web, but the problem is how to find an appropriate WS by discovering its details, i.e. the description of the functionality of the object methods exposed for public use. Several models have been suggested, and some of them implemented, but so far none of them allows systematic, publicly available search. This paper suggests a model for publishing WS in a flexible way that allows the desired Web Service to be found automatically by category and/or functionality, without having to access any dedicated servers. The search results in a narrow set of Web Services suited to the problem at hand according to the user's specification.

  2. Publishing studies: the search for an elusive academic object

    Directory of Open Access Journals (Sweden)

    Sophie Noël

    2015-07-01

    Full Text Available This paper questions the validity of the so-called “publishing studies” as an academic discipline, while trying to situate them within the field of social sciences and to contextualize their success. It argues that a more appropriate frame could be adopted to describe what people studying the transformations of book publishing do – or should do – both at a theoretical and methodological level. The paper begins by providing an overview of the scholarly and academic context in France as far as book publishing is concerned, highlighting its genesis and current development. It goes on to underline the main pitfalls that such a sub-field as publishing studies is faced with, before making suggestions as to the bases for a stimulating analysis of publishing, making a case for an interdisciplinary approach nurtured by social sciences. The paper is based on a long-term field study on independent presses in France, together with a survey of literature on the subject.

  3. Web-Based Computing Resource Agent Publishing

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Web-based Computing Resource Publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources on the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework are proposed and discussed in detail.

  4. Heat pumps: Industrial applications. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    The bibliography contains citations concerning design, development, and applications of heat pumps for industrial processes. Included are thermal energy exchanges based on air-to-air, ground-coupled, air-to-water, and water-to-water systems. Specific applications include industrial process heat, drying, district heating, and waste processing plants. Other Published Searches in this series cover heat pump technology and economics, and heat pumps for residential and commercial applications. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  5. Enrich the E-publishing Community Website with Search Engine Optimization Technique

    Directory of Open Access Journals (Sweden)

    Vadivel Rangasamy

    2011-09-01

    Full Text Available The Internet plays a vital role in online business: every business needs to present its information to clients and end users, and search engines hold millions of indexed pages. Search engine optimization (SEO) techniques have to be implemented for both static and dynamic web applications. Producing SEO-compliant content poses no particular issue for static sites (whose content does not change until the site is re-hosted), but dynamic content poses a few significant challenges. Overcoming these challenges yields a fully functional dynamic site that is optimized as much as a static site can be, so that whatever users search for, they find the information quickly. In this context we use a few SEO methods for dynamic web applications, such as user-friendly URLs, a URL redirector, and generic HTML. Both internal and external elements of the site affect the way it is ranked in any given search engine, so all of these elements should be taken into consideration. These concepts are applied to an e-publishing community website, a site with a large number of dynamic fields and dynamic validations, built with the help of XML, XSL, and JavaScript. A database plays a major role in accomplishing this functionality, using a three-part (static, dynamic, and meta) database structure. One of the advantages of the XML/XSLT combination is the ability to separate content from presentation: a data source can return an XML document, and an XSLT can then transform the data into whatever HTML is needed, based on the data in the XML document. The flexibility of XML/XSLT can be combined with the power of ASP.NET server/client controls by using an XSLT to generate the server/client controls dynamically, thus leveraging the best of both worlds.
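
    The "user-friendly URLs" technique this abstract names can be illustrated with a short sketch. This is not the paper's implementation: the slug rules and the `/articles/...` route are assumptions made for the example.

```python
import re

def slugify(title: str) -> str:
    """Turn an article title into a search-engine-friendly URL segment."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics to hyphens
    return slug.strip("-")

def friendly_url(article_id: int, title: str) -> str:
    """Build a crawlable URL instead of an opaque query string like ?id=42."""
    return f"/articles/{article_id}/{slugify(title)}"

print(friendly_url(42, "Enrich the E-publishing Community Website!"))
# -> /articles/42/enrich-the-e-publishing-community-website
```

    A URL redirector would then map legacy query-string URLs onto these friendly paths so that search engines index only one canonical form.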

  6. A dynamic knowledge base based search engine

    Institute of Scientific and Technical Information of China (English)

    WANG Hui-jin; HU Hua; LI Qing

    2005-01-01

    Search engines have greatly helped us to find the desired information on the Internet. Most search engines use a keyword-matching technique. This paper discusses a Dynamic Knowledge Base based Search Engine (DKBSE), which can expand the user's query using the keywords' concepts or meanings. To do this, the DKBSE constructs and maintains its knowledge base dynamically from the system's search results and the user's feedback. The DKBSE expands the user's initial query using the knowledge base, and returns the information retrieved for the expanded query.
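
    The query-expansion idea can be sketched in a few lines. The knowledge base below is a hand-built toy standing in for the DKBSE's dynamically maintained one, with a feedback hook showing how it could grow; all terms are hypothetical.

```python
# Toy knowledge base: keyword -> related concepts (hypothetical data).
knowledge_base = {
    "car": {"automobile", "vehicle"},
    "laptop": {"notebook", "portable computer"},
}

def expand_query(query: str) -> set:
    """Expand each query keyword with its related concepts."""
    terms = set(query.lower().split())
    for term in list(terms):
        terms |= knowledge_base.get(term, set())
    return terms

def record_feedback(term: str, related: str) -> None:
    """Grow the knowledge base from a user's relevance feedback."""
    knowledge_base.setdefault(term, set()).add(related)

record_feedback("car", "sedan")          # feedback refines future expansions
print(sorted(expand_query("car rental")))
# -> ['automobile', 'car', 'rental', 'sedan', 'vehicle']
```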

  7. Tag Based Audio Search Engine

    Directory of Open Access Journals (Sweden)

    Parameswaran Vellachu

    2012-03-01

    Full Text Available The volume of music databases is increasing day by day, and getting the required song as per the listener's choice is a big challenge; it is hard to manage this huge quantity in terms of searching and filtering through the music database. It is surprising that the audio and music industry still relies on very simplistic metadata to describe music files. When searching audio resources, an efficient tag-based audio search engine is therefore necessary. The current research focuses on two aspects of musical databases: 1. semantic annotation generation using the tag-based approach; 2. an audio search engine with which the user can retrieve songs based on their choice. The proposed method can be used to annotate and retrieve songs based on the musical instruments used, the mood of the song, its theme, the singer, the music director, the artist, the film director, the genre or style, and so on.
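
    Tag-based retrieval of this kind is essentially an inverted index from tags to songs. A minimal sketch with a made-up catalogue (not the paper's system):

```python
from collections import defaultdict

# Hypothetical catalogue: song -> descriptive tags (instrument, mood, genre...).
songs = {
    "Song A": {"guitar", "happy", "rock"},
    "Song B": {"piano", "sad", "ballad"},
    "Song C": {"guitar", "sad", "folk"},
}

# Build the inverted index once: tag -> set of songs carrying that tag.
index = defaultdict(set)
for song, tags in songs.items():
    for tag in tags:
        index[tag].add(song)

def search(*tags: str) -> set:
    """Return songs annotated with every requested tag."""
    results = [index.get(t, set()) for t in tags]
    return set.intersection(*results) if results else set()

print(sorted(search("guitar", "sad")))  # -> ['Song C']
```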

  8. Quantum searching application in search based software engineering

    Science.gov (United States)

    Wu, Nan; Song, FangMin; Li, Xiangdong

    2013-05-01

    Search Based Software Engineering (SBSE) is widely used in software engineering for identifying optimal solutions. However, the traditional algorithms used for SBSE have no polynomial-time solution, which makes their cost very high. In this paper, we analyze and compare several quantum search algorithms that could be applied to SBSE: the quantum adiabatic evolution search algorithm, fixed-point quantum search (FPQS), quantum walks, and a rapid modified Grover quantum search method. Grover's algorithm is considered the best choice for large-scale unstructured data search, and theoretically it is applicable to any search-space structure and any type of search problem.
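
    Grover's quadratic speed-up can be demonstrated by classically simulating the state vector: after roughly ⌊(π/4)√N⌋ oracle calls the target's measurement probability approaches 1. A small simulation (the item count and target index are arbitrary choices for illustration):

```python
import math

def grover_success_probability(n_items: int, target: int) -> float:
    """Classically simulate Grover amplitude amplification on a state vector."""
    amps = [1.0 / math.sqrt(n_items)] * n_items        # uniform superposition
    iterations = int(math.pi / 4 * math.sqrt(n_items))  # ~O(sqrt(N)) oracle calls
    for _ in range(iterations):
        amps[target] = -amps[target]                    # oracle: flip target phase
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]             # diffusion: invert about mean
    return amps[target] ** 2

# For N = 64 items, only 6 iterations are needed versus ~32 classical probes.
print(round(grover_success_probability(64, 7), 3))
```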

  9. Location-based Web Search

    Science.gov (United States)

    Ahlers, Dirk; Boll, Susanne

    In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.
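
    The last step described here, attaching a geo-context to pages so a spatial search can filter them by distance, can be sketched as follows. The page URLs and coordinates are invented; the distance is the standard haversine great-circle formula.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical geo-index: URL -> (lat, lon) extracted from the page.
pages = {
    "example.org/cafe": (53.146, 8.181),     # near Oldenburg
    "example.org/museum": (52.520, 13.405),  # Berlin
}

def nearby(lat, lon, radius_km):
    """Spatial search: pages whose geo-context lies within the radius."""
    return [url for url, (plat, plon) in pages.items()
            if haversine_km(lat, lon, plat, plon) <= radius_km]

print(nearby(53.14, 8.21, 25))  # -> ['example.org/cafe']
```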

  10. Chemical and biological warfare: General studies. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    The bibliography contains citations concerning federally sponsored and conducted studies into chemical and biological warfare operations and planning. These studies cover areas not addressed in other parts of this series. The topics include production and storage of agents, delivery techniques, training, military and civil defense, general planning studies, psychological reactions to chemical warfare, evaluations of materials exposed to chemical agents, and studies on banning or limiting chemical warfare. Other published searches in this series on chemical warfare cover detection and warning, defoliants, protection, and biological studies, including chemistry and toxicology. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  11. Chemical and biological warfare: General studies. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    The bibliography contains citations concerning federally sponsored and conducted studies into chemical and biological warfare operations and planning. These studies cover areas not addressed in other parts of this series. The topics include production and storage of agents, delivery techniques, training, military and civil defense, general planning studies, psychological reactions to chemical warfare, evaluations of materials exposed to chemical agents, and studies on banning or limiting chemical warfare. Other published searches in this series on chemical warfare cover detection and warning, defoliants, protection, and biological studies, including chemistry and toxicology. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  12. A Quantitative Analysis of Published Skull Base Endoscopy Literature.

    Science.gov (United States)

    Hardesty, Douglas A; Ponce, Francisco A; Little, Andrew S; Nakaji, Peter

    2016-02-01

    Objectives Skull base endoscopy allows for minimal access approaches to the sinonasal contents and cranial base. Advances in endoscopic technique and applications have been published rapidly in recent decades. Setting We utilized an Internet-based scholarly database (Web of Science, Thomson Reuters) to query broad-based phrases regarding skull base endoscopy literature. Participants All skull base endoscopy publications. Main Outcome Measures Standard bibliometrics outcomes. Results We identified 4,082 relevant skull base endoscopy English-language articles published between 1973 and 2014. The 50 top-cited publications (n = 51, due to articles with equal citation counts) ranged in citation count from 397 to 88. Most of the articles were clinical case series or technique descriptions. Most (96% [49/51]) were published in journals specific to either neurosurgery or otolaryngology. Conclusions A relatively small number of institutions and individuals have published a large amount of the literature. Most of the publications consisted of case series and technical advances, with a lack of randomized trials. PMID:26949585

  13. Search Based Software Project Management

    OpenAIRE

    Ren, J

    2013-01-01

    This thesis investigates the application of Search Based Software Engineering (SBSE) approach in the field of Software Project Management (SPM). With SBSE approaches, a pool of candidate solutions to an SPM problem is automatically generated and gradually evolved to be increasingly more desirable. The thesis is motivated by the observation from industrial practice that it is much more helpful to the project manager to provide insightful knowledge than exact solutions. We investigate whether S...

  14. Distributed search engine architecture based on topic specific searches

    Science.gov (United States)

    Abudaqqa, Yousra; Patel, Ahmed

    2015-05-01

    Indisputably, search engines (SEs) abound, and the monumental growth of users performing online searches on the Web is a contending issue in the contemporary world. Tens of billions of searches are performed every day, typically offering users many irrelevant results, which is time consuming and costly for the user. Given this problem, it has become a herculean task for existing Web SEs to provide complete, relevant, and up-to-date responses to users' search queries. To overcome this problem, we developed the Distributed Search Engine Architecture (DSEA), a new means of smart information query and retrieval for the World Wide Web (WWW). In a DSEA, multiple autonomous search engines, owned by different organizations or individuals, cooperate and act as a single search engine. This paper reports the work in this research focused on the development of a DSEA based on topic-specific specialised search engines. In a DSEA, the results for a specific query can be provided by any of the participating search engines, without the user being aware of which one. An important design goal of using topic-specific search engines in this research is to build systems that can be used effectively by a large number of users simultaneously. Efficient and effective usage with good response times is important, because it involves leveraging the vast amount of searched data from the World Wide Web by categorising it into condensed, focused, topic-specific results that meet the users' queries. The design model and the development of the DSEA adopt a Service Directory (SD) to route queries towards topic-specific document-hosting SEs. The architecture displays performance consistent with the requirements of the users, and the evaluation of the model returns a very high priority score associated with each keyword frequency.
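
    The Service Directory's routing role can be sketched with simple keyword overlap. The topics, keyword sets, and endpoint URLs below are invented for illustration and are not part of the DSEA described above.

```python
# Hypothetical service directory: topic -> specialised search-engine endpoint.
service_directory = {
    "medicine": "https://medsearch.example/api",
    "law": "https://lawsearch.example/api",
    "music": "https://musicsearch.example/api",
}

# Hypothetical topic vocabularies used to classify incoming queries.
topic_keywords = {
    "medicine": {"disease", "drug", "therapy"},
    "law": {"contract", "court", "statute"},
    "music": {"song", "album", "concert"},
}

def route_query(query: str) -> str:
    """Route a query to the topic engine whose vocabulary best overlaps it."""
    terms = set(query.lower().split())
    best = max(topic_keywords, key=lambda t: len(terms & topic_keywords[t]))
    if not terms & topic_keywords[best]:
        return "https://general.example/api"  # fallback general-purpose engine
    return service_directory[best]

print(route_query("new drug therapy trials"))
# -> https://medsearch.example/api
```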

  15. Developing a Comprehensive Search Strategy for Evidence Based Systematic Reviews

    Directory of Open Access Journals (Sweden)

    Sekhar Thadiparthi

    2008-03-01

    Full Text Available Objective ‐ Within the health care field it becomes ever more critical to conduct systematic reviews of the research literature to guide programmatic activities, policy‐making decisions, and future research. Conducting systematic reviews requires a comprehensive search of behavioural, social, and policy research to identify relevant literature. As a result, the validity of the systematic review findings and recommendations is partly a function of the quality of the systematic search of the literature. Therefore, a carefully thought out and organized plan for developing and testing a comprehensive search strategy should be followed. This paper uses the HIV/AIDS prevention literature to provide a framework for developing, testing, and conducting a comprehensive search strategy looking beyond RCTs. Methods ‐ Comprehensive search strategies, including automated and manual search techniques, were developed, tested, and implemented to locate published and unpublished citations in order to build a database of HIV/AIDS and sexually transmitted diseases (STD) literature. The search incorporated various automated and manual search methods to decrease the chance of missing pertinent information. The automated search was implemented in MEDLINE, EMBASE, PsycINFO, Sociological Abstracts and AIDSLINE. These searches utilized both index terms as well as keywords, including truncation, proximity, and phrases. The manual search method included physically examining journals (hand searching), reference list checks, and researching key authors. Results ‐ Using automated and manual search components, the search strategy retrieved 17,493 articles about prevention of HIV/AIDS and STDs for the years 1988‐2005. The automated search found 91%, and the manual search contributed 9%, of the articles reporting on HIV/AIDS or STD interventions with behavioural/biologic outcomes. Among the citations located with automated searches, 48% were found in only one database (20

  16. A web based Publish-Subscribe framework for mobile computing

    Directory of Open Access Journals (Sweden)

    Cosmina Ivan

    2014-05-01

    Full Text Available The growing popularity of mobile devices is permanently changing the Internet user's computing experience. Smartphones and tablets are beginning to replace the desktop as the primary means of interacting with various information technology and web resources. While mobile devices facilitate consuming web resources in the form of web services, the growing demand for consuming services on mobile devices is introducing a complex ecosystem in the mobile environment. This research addresses the communication challenges involved in mobile distributed networks and proposes an event-driven communication approach for information dissemination. It investigates different communication techniques such as polling, long-polling and server-side push as client-server interaction mechanisms, and the latest web-technology standard, WebSocket, as the communication protocol within a publish/subscribe paradigm. Finally, this paper introduces and evaluates the proposed framework, a hybrid approach combining WebSocket and event-based publish/subscribe for operating in mobile environments.
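
    Independent of the transport (WebSocket, long-polling, or server push), the publish/subscribe core reduces to a topic-to-subscribers mapping. A single-process Python sketch, not the paper's framework:

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish/subscribe broker (single-process sketch)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register interest in a topic; decouples sender from receiver."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Push the event to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("news/tech", received.append)
broker.publish("news/tech", "hello")
broker.publish("news/sport", "ignored")  # no subscribers on this topic
print(received)  # -> ['hello']
```

    In a real mobile deployment the callback would instead write the message to the subscriber's WebSocket connection, which is what gives the server-push behaviour the paper evaluates.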

  17. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base.

    Science.gov (United States)

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E; Wilkinson, Mark D

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be "FAIR"-Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences-the Pathogen-Host Interaction Database (PHI-base)-to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings. PMID:27433158

  18. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base

    Science.gov (United States)

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G.; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E.; Wilkinson, Mark D.

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be “FAIR”—Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences—the Pathogen-Host Interaction Database (PHI-base)—to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings. PMID:27433158

  19. Publishing FAIR Data: an exemplar methodology utilizing PHI-base

    Directory of Open Access Journals (Sweden)

    Alejandro eRodríguez Iglesias

    2016-05-01

    Full Text Available Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species versus the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be FAIR - Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences - the Pathogen-Host Interaction Database (PHI-base) - to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.

  1. Proposal of Tabu Search Algorithm Based on Cuckoo Search

    OpenAIRE

    Ahmed T.Sadiq Al-Obaidi; Ahmed Badre Al-Deen Majeed

    2014-01-01

    This paper presents a new version of Tabu Search (TS) based on Cuckoo Search (CS), called Tabu-Cuckoo Search (TCS), which reduces the effect of TS's known problems. The proposed algorithm provides more diversity among the candidate solutions of TS. Two case studies have been solved using the proposed algorithm: the 4-Color Map and the Traveling Salesman Problem. The proposed algorithm gives good results compared with the original: fewer iterations are needed, and fewer local-minimum (non-optimal) solutions occur.

  2. Proposal of Tabu Search Algorithm Based on Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Ahmed T. Sadiq Al-Obaidi

    2014-03-01

    Full Text Available This paper presents a new version of Tabu Search (TS) based on Cuckoo Search (CS), called Tabu-Cuckoo Search (TCS), which reduces the effect of TS's known problems. The proposed algorithm provides more diversity among the candidate solutions of TS. Two case studies have been solved using the proposed algorithm: the 4-Color Map and the Traveling Salesman Problem. The proposed algorithm gives good results compared with the original: fewer iterations are needed, and fewer local-minimum (non-optimal) solutions occur.
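
    The underlying tabu search loop, best admissible neighbour plus a short-term memory of forbidden moves, can be sketched as below. The random jump in the neighbourhood loosely plays the diversification role that Cuckoo Search's Lévy flights provide; the toy objective and parameters are invented, not the paper's benchmarks.

```python
import random

def tabu_search(start, neighbours, cost, iterations=200, tabu_size=10, seed=1):
    """Generic tabu search: move to the best non-tabu neighbour each step."""
    random.seed(seed)
    current = best = start
    tabu = [start]                       # short-term memory of visited states
    for _ in range(iterations):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best admissible move, even if worse
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                  # forget the oldest forbidden state
        if cost(current) < cost(best):
            best = current
    return best

# Toy problem: minimise |x - 37| over integers, stepping +-1 or a random jump
# (the jump is a crude stand-in for Cuckoo-style diversification).
def neighbours(x):
    return [x - 1, x + 1, x + random.randint(-5, 5)]

print(tabu_search(0, neighbours, lambda x: abs(x - 37)))  # -> 37
```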

  3. Mathematical programming solver based on local search

    CERN Document Server

    Gardi, Frédéric; Darlay, Julien; Estellon, Bertrand; Megel, Romain

    2014-01-01

    This book covers local search for combinatorial optimization and its extension to mixed-variable optimization. Although not yet understood from the theoretical point of view, local search is the paradigm of choice for tackling large-scale real-life optimization problems. Today's end-users demand interactivity with decision support systems. For optimization software, this means obtaining good-quality solutions quickly. Fast iterative improvement methods, like local search, are suited to satisfying such needs. Here the authors show local search in a new light, in particular presenting a new kind of mathematical programming solver, namely LocalSolver, based on neighborhood search. First, an iconoclastic methodology is presented to design and engineer local search algorithms. The authors' concern about industrializing local search approaches is of particular interest for practitioners. This methodology is applied to solve two industrial problems with high economic stakes. Software based on local search induces ex...

  4. An Ontology Based Personalised Mobile Search Engine

    OpenAIRE

    Mrs. Rashmi A. Jolhe; Dr. Sudhir D. Sawarkar

    2014-01-01

    As the amount of Web information grows rapidly, search engines must be able to retrieve information according to the user's preferences. In this paper, we propose an Ontology Based Personalised Mobile Search Engine (OBPMSE) that captures the user's interests and preferences in the form of concepts by mining search results and their clickthroughs. OBPMSE profiles the user's interests and personalises the search results according to the user's profile. OBPMSE classifies these concepts into content concepts an...

  5. ArraySearch: A Web-Based Genomic Search Engine

    OpenAIRE

    Wilson, Tyler J; Ge, Steven X

    2012-01-01

    Recent advances in microarray technologies have resulted in a flood of genomics data. This large body of accumulated data could be used as a knowledge base to help researchers interpret new experimental data. ArraySearch finds statistical correlations between newly observed gene expression profiles and the huge source of well-characterized expression signatures deposited in the public domain. A search query of a list of genes will return experiments on which the genes are significantly up- or...

  6. ArraySearch: A Web-Based Genomic Search Engine

    Directory of Open Access Journals (Sweden)

    Tyler J. Wilson

    2012-01-01

    Full Text Available Recent advances in microarray technologies have resulted in a flood of genomics data. This large body of accumulated data could be used as a knowledge base to help researchers interpret new experimental data. ArraySearch finds statistical correlations between newly observed gene expression profiles and the huge source of well-characterized expression signatures deposited in the public domain. A search query of a list of genes will return experiments on which the genes are significantly up- or downregulated collectively. Searches can also be conducted using gene expression signatures from new experiments. This resource will empower biological researchers with a statistical method to explore expression data from their own research by comparing it with expression signatures from a large public archive.
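
    Ranking archived experiments by similarity to a new expression profile can be sketched with plain Pearson correlation. The archive and query values below are fabricated for illustration, not ArraySearch's actual data or statistics.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two expression profiles over the same genes."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical archive: experiment -> log-ratio signature over the same genes.
archive = {
    "exp1": [2.1, 0.3, -1.2, 0.8],
    "exp2": [-2.0, -0.1, 1.5, -0.9],
}

query = [2.0, 0.4, -1.0, 0.7]  # new experiment's expression profile

# Rank archived experiments by correlation with the query signature.
ranked = sorted(archive, key=lambda e: pearson(query, archive[e]), reverse=True)
print(ranked)  # -> ['exp1', 'exp2']
```

    A production system would additionally report significance (e.g. a p-value per correlation) so that only experiments with statistically meaningful matches are returned.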

  7. Modeling and Implementing Ontology-Based Publish/Subscribe Using Semantic Web Technologies

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk; Hansen, Klaus Marius

    2010-01-01

    Publish/subscribe is a communication paradigm for distributed interaction. The paradigm provides decoupling in time, space, and synchronization for interacting entities, and several variants of publish/subscribe exist, including topic-based, subject-based, and type-based publish/subscribe. A centr...

  8. Systematic search and evaluation of published scientific research: implications for schizophrenia research

    OpenAIRE

    Mäkinen, J.

    2010-01-01

    Abstract The aim of this doctoral thesis is to present methods of search, evaluation and analysis of a specific research domain (schizophrenia) from four perspectives: bibliometric analysis of 1) Finnish doctoral theses and 2) Finnish journal articles on schizophrenia, and meta-analysis to determine the prevalence of 3) alcohol use disorders and 4) cannabis use disorders in schizophrenia. Over the years, the number of Finnish articles on schizophrenia has increased, as well as the amou...

  9. Indoor radon pollution: Control and mitigation. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-05-01

    The bibliography contains citations concerning the control and mitigation of radon pollution in homes and commercial buildings. Citations cover radon transport studies in buildings and soils, remedial action proposals on contaminated buildings, soil venting, building ventilation, sealants, filtration systems, water degassing, reduction of radon sources in building materials, and evaluation of existing radon mitigation programs, including their cost effectiveness. Analysis and detection of radon and radon toxicity are covered in separate published bibliographies. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  10. Chemical and biological warfare: Protection, decontamination, and disposal. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-11-01

    The bibliography contains citations concerning the means to defend against chemical and biological agents used in military operations, and to eliminate the effects of such agents on personnel, equipment, and grounds. Protection is accomplished through protective clothing and masks, and in buildings and shelters through filtration. Elimination of effects includes decontamination and removal of the agents from clothing, equipment, buildings, grounds, and water, using chemical deactivation, incineration, and controlled disposal of material in injection wells and ocean dumping. Other Published Searches in this series cover chemical warfare detection; defoliants; general studies; biochemistry and therapy; and biology, chemistry, and toxicology associated with chemical warfare agents. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  11. Chemical and biological warfare: Protection, decontamination, and disposal. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    The bibliography contains citations concerning the means to defend against chemical and biological agents used in military operations, and to eliminate the effects of such agents on personnel, equipment, and grounds. Protection is accomplished through protective clothing and masks, and in buildings and shelters through filtration. Elimination of effects includes decontamination and removal of the agents from clothing, equipment, buildings, grounds, and water, using chemical deactivation, incineration, and controlled disposal of material in injection wells and ocean dumping. Other Published Searches in this series cover chemical warfare detection; defoliants; general studies; biochemistry and therapy; and biology, chemistry, and toxicology associated with chemical warfare agents. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  12. Search Result Diversification Based on Query Facets

    Institute of Scientific and Technical Information of China (English)

    胡莎; 窦志成; 王晓捷; 继荣

    2015-01-01

    In search engines, different users may search for different information by issuing the same query. To satisfy more users with limited search results, search result diversification re-ranks the results to cover as many user intents as possible. Most existing intent-aware diversification algorithms recognize user intents as subtopics, each of which is usually a word, a phrase, or a piece of description. In this paper, we leverage query facets to understand user intents in diversification, where each facet contains a group of words or phrases that explain an underlying intent of a query. We generate subtopics based on query facets and propose faceted diversification approaches. Experimental results on the public TREC 2009 dataset show that our faceted approaches outperform state-of-the-art diversification models.
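The intent-coverage idea described here can be sketched as a greedy re-ranking that trades relevance against covering new facets. This is a generic illustration with invented data and an invented trade-off weight, not the authors' faceted algorithm:

```python
def diversify(results, k, facet_weight=0.3):
    """Greedy diversification sketch.
    results: list of (doc_id, relevance, facets_covered) tuples."""
    selected, covered = [], set()
    pool = list(results)
    while pool and len(selected) < k:
        # Marginal gain: relevance plus a bonus per newly covered facet.
        best = max(pool, key=lambda r: r[1] + facet_weight * len(set(r[2]) - covered))
        selected.append(best[0])
        covered |= set(best[2])
        pool.remove(best)
    return selected

results = [("d1", 0.9, {"price"}),
           ("d2", 0.8, {"price"}),          # redundant with d1's facet
           ("d3", 0.5, {"reviews", "specs"})]
top2 = diversify(results, 2)   # d2 is skipped: it adds no new facet
```

With these numbers, the less relevant but facet-novel "d3" displaces the redundant "d2" from the top two.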

  13. Water pollution analysis and detection. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-08-01

    The bibliography contains citations concerning water pollution analysis, detection, monitoring, and regulation. Citations review online systems, bioassay monitoring, laser-based detection, sensor and biosensor systems, metabolic analyzers, and microsystem techniques. References cover fiber-optic portable detection instruments and rapid detection of toxicants in drinking water. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  14. Ceramic heat exchangers. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-08-01

    The bibliography contains citations concerning the development, fabrication, and performance of ceramic heat exchangers. References discuss applications in coal-fired gas turbine power plants. Topics cover high temperature corrosion resistance, fracture properties, nondestructive evaluations, thermal shock and fatigue, silicon carbide-based ceramics, and composite joining. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  15. Coal gasification. (Latest citations from the EI compendex*plus database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    The bibliography contains citations concerning the development and assessment of coal gasification technology. Combined-cycle gas turbine power plants are reviewed. References also discuss dry-feed gasification, gas turbine interface, coal gasification pilot plants, underground coal gasification, gasification with nuclear heat, and molten bath processes. Clean-coal based electric power generation and environmental issues are examined. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  16. Battery electrolytes. (Latest citations from the EI Compendex*plus database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The bibliography contains citations concerning the design, construction, and applications of solid, liquid, and gaseous battery electrolytes. Most recent citations focus on solid state battery electrolytes based on lithium or lithium-related chemistry. Some attention is given to the composition of the electrodes associated with solid state batteries. Electrolyte properties and battery performance, maintenance, and safety are also considered. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  17. Partial evolution based local adiabatic quantum search

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Lu Song-Feng; Liu Fang; Yang Li-Ping

    2012-01-01

    Recently, Zhang and Lu provided a quantum search algorithm based on partial adiabatic evolution, which beats the time bound of local adiabatic search when the number of marked items in the unsorted database is larger than one. Later, they found that the above two adiabatic search algorithms have the same time complexity when there is only one marked item in the database. In the present paper, following the idea of Roland and Cerf [Roland J and Cerf N J 2002 Phys. Rev. A 65 042308], if within the small symmetric evolution interval defined by Zhang et al. a local adiabatic evolution is performed instead of the original "global" one, this "new" algorithm exhibits slightly better performance, although the two become progressively equivalent as M increases. In addition, a proof of the optimality of this partial evolution based local adiabatic search when M = 1 is also presented. Two other special cases of the adiabatic algorithm, obtained by appropriately tuning the evolution interval of partial adiabatic evolution based quantum search, are found to exhibit the same phenomenon and are also discussed.

  18. An Ontology Based Personalised Mobile Search Engine

    Directory of Open Access Journals (Sweden)

    Mrs. Rashmi A. Jolhe

    2014-02-01

    Full Text Available As the amount of Web information grows rapidly, search engines must be able to retrieve information according to the user's preference. In this paper, we propose an Ontology Based Personalised Mobile Search Engine (OBPMSE) that captures users' interests and preferences in the form of concepts by mining search results and their clickthroughs. OBPMSE profiles the user's interests and personalises the search results according to the user's profile. OBPMSE classifies these concepts into content concepts and location concepts. In addition, users' locations (positioned by GPS) are used to supplement the location concepts in OBPMSE. The user preferences are organized in an ontology-based, multifacet user profile, which is used to adapt a personalized ranking function that in turn ranks future search results. We propose to define personalization effectiveness based on the entropies and use it to balance the weights between the content and location facets. In our design, the client collects and stores the clickthrough data locally to protect privacy, whereas heavy tasks such as concept extraction, training, and reranking are performed at the OBPMSE server. OBPMSE provides a client-server architecture and distributes the tasks to individual components to decrease the complexity.

  19. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

    Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool. While GPS information provides reasonably good accuracy, it is not always present in all hand-held devices, nor is it accurate in all...... situations, for example in urban environments. We propose a system to provide location-based services using image searches without requiring GPS. The goal of this system is to assist tourists in cities with additional information using their mobile phones and built-in cameras. Based upon the result...... of the image search engine and database image location knowledge, the location of the query image is determined and associated data can be presented to the user....

  20. Space based microlensing planet searches

    Directory of Open Access Journals (Sweden)

    Tisserand Patrick

    2013-04-01

    Full Text Available The discovery of extra-solar planets is arguably the most exciting development in astrophysics during the past 15 years, rivalled only by the detection of dark energy. Two projects unite the communities of exoplanet scientists and cosmologists: the proposed ESA M class mission EUCLID and the large space mission WFIRST, top ranked by the Astronomy 2010 Decadal Survey report. The latter states that: “Space-based microlensing is the optimal approach to providing a true statistical census of planetary systems in the Galaxy, over a range of likely semi-major axes”. They also add: “This census, combined with that made by the Kepler mission, will determine how common Earth-like planets are over a wide range of orbital parameters”. We will present a status report of the results obtained by microlensing on exoplanets and the new objectives of the next generation of ground based wide field imager networks. We will finally discuss the fantastic prospect offered by space based microlensing on the 2020–2025 horizon.

  1. Ontology-Based Search of Genomic Metadata.

    Science.gov (United States)

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, search of relevant datasets for knowledge discovery is limitedly supported: metadata describing ENCODE datasets are quite simple and incomplete, and not described by a coherent underlying ontology. Here, we show how to overcome this limitation, by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of found datasets to the biologists' queries. PMID:26529777

  2. New similarity search based glioma grading

    Energy Technology Data Exchange (ETDEWEB)

    Haegler, Katrin; Brueckmann, Hartmut; Linn, Jennifer [Ludwig-Maximilians-University of Munich, Department of Neuroradiology, Munich (Germany); Wiesmann, Martin; Freiherr, Jessica [RWTH Aachen University, Department of Neuroradiology, Aachen (Germany); Boehm, Christian [Ludwig-Maximilians-University of Munich, Department of Computer Science, Munich (Germany); Schnell, Oliver; Tonn, Joerg-Christian [Ludwig-Maximilians-University of Munich, Department of Neurosurgery, Munich (Germany)

    2012-08-15

    MR-based differentiation between low- and high-grade gliomas is predominantly based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences such as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information of CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (= probability-based classifier) was defined and used for grading. Biopsy was used as gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone. (orig.)

  3. New similarity search based glioma grading

    International Nuclear Information System (INIS)

    MR-based differentiation between low- and high-grade gliomas is predominantly based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences such as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information of CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (= probability-based classifier) was defined and used for grading. Biopsy was used as gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone. (orig.)

  4. Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)

    Science.gov (United States)

    Bendig, Regina B.

    2009-01-01

    The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engines), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…

  5. A Hybrid Metaheuristic for Biclustering Based on Scatter Search and Genetic Algorithms

    Science.gov (United States)

    Nepomuceno, Juan A.; Troncoso, Alicia; Aguilar–Ruiz, Jesús S.

    In this paper a hybrid metaheuristic for biclustering based on Scatter Search and Genetic Algorithms is presented. A general scheme of Scatter Search has been used to obtain high-quality biclusters, but a way of generating the initial population and a method of combination based on Genetic Algorithms have been chosen. Experimental results from yeast cell cycle and human B-cell lymphoma are reported. Finally, the performance of the proposed hybrid algorithm is compared with a genetic algorithm recently published.

  6. Ontology-based prior art search

    OpenAIRE

    Bondarenok, A.

    2003-01-01

    This article describes a method of prior art document search based on semantic similarities of a user query and indexed documents. The ontology-based technology of knowledge extraction and representation is used to build document and query images, which are compared using the semantic similarity technique. Documents are ranked according to their semantic similarities to the query, and the top results are shown to the user.

  7. Activity based video indexing and search

    Science.gov (United States)

    Chen, Yang; Jiang, Qin; Medasani, Swarup; Allen, David; Lu, Tsai-ching

    2010-04-01

    We describe a method for searching videos in large video databases based on the activity contents present in the videos. Being able to search videos based on the contents (such as human activities) has many applications such as security, surveillance, and other commercial applications such as on-line video search. Conventional video content-based retrieval (CBR) systems are either feature based or semantics based, with the former trying to model the dynamics video contents using the statistics of image features, and the latter relying on automated scene understanding of the video contents. Neither approach has been successful. Our approach is inspired by the success of visual vocabulary of "Video Google" by Sivic and Zisserman, and the work of Nister and Stewenius who showed that building a visual vocabulary tree can improve the performance in both scalability and retrieval accuracy for 2-D images. We apply visual vocabulary and vocabulary tree approach to spatio-temporal video descriptors for video indexing, and take advantage of the discrimination power of these descriptors as well as the scalability of vocabulary tree for indexing. Furthermore, this approach does not rely on any model-based activity recognition. In fact, training of the vocabulary tree is done off-line using unlabeled data with unsupervised learning. Therefore the approach is widely applicable. Experimental results using standard human activity recognition videos will be presented that demonstrate the feasibility of this approach.
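The visual-vocabulary indexing idea in this abstract can be sketched with a flat toy vocabulary (the cited works use a learned vocabulary tree for scalability; the centroids, descriptors, and video names below are all invented):

```python
import math
from collections import Counter, defaultdict

# Fixed "visual word" centroids; real systems learn these from training
# descriptors with (hierarchical) k-means.
VOCAB = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def quantize(descriptor):
    # Id of the nearest centroid = the visual word for this descriptor.
    return min(range(len(VOCAB)), key=lambda w: math.dist(descriptor, VOCAB[w]))

def index_videos(videos):
    inverted = defaultdict(set)   # word id -> set of videos containing it
    bags = {}                     # video -> word histogram (bag of words)
    for name, descriptors in videos.items():
        words = [quantize(d) for d in descriptors]
        bags[name] = Counter(words)
        for w in words:
            inverted[w].add(name)
    return inverted, bags

def query(descriptors, inverted, bags):
    qbag = Counter(quantize(d) for d in descriptors)
    candidates = set().union(*(inverted[w] for w in qbag))
    # Rank candidates by histogram intersection with the query bag.
    score = lambda v: sum(min(qbag[w], bags[v][w]) for w in qbag)
    return sorted(candidates, key=score, reverse=True)

videos = {"walking": [(0.1, 0.0), (0.0, 0.2)],
          "waving":  [(0.9, 1.1), (1.0, 0.9)]}
inverted, bags = index_videos(videos)
best = query([(0.05, 0.1)], inverted, bags)[0]   # nearest to "walking"
```

Replacing the 2-D points with spatio-temporal video descriptors, and the flat `VOCAB` with a vocabulary tree, gives the scalable scheme the abstract describes.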

  8. Music Publishing

    OpenAIRE

    A.Manuel B. Simoes; J.Joao Dias De Almeida

    2003-01-01

    Current music publishing on the Internet is mainly concerned with sound publishing. We claim that music publishing is not only making sound available but also defining relations between a set of music objects such as music scores, guitar chords, lyrics, and their meta-data. We want an easy way to publish music on the Internet, to make high-quality paper booklets, and even to create audio CDs. In this document we present a workbench for music publishing based on open formats, using open-source t...

  9. Physiologically Based Pharmacokinetic (PBPK) Modeling and Simulation Approaches: A Systematic Review of Published Models, Applications, and Model Verification.

    Science.gov (United States)

    Sager, Jennifer E; Yu, Jingjing; Ragueneau-Majlessi, Isabelle; Isoherranen, Nina

    2015-11-01

    Modeling and simulation of drug disposition has emerged as an important tool in drug development, clinical study design and regulatory review, and the number of physiologically based pharmacokinetic (PBPK) modeling related publications and regulatory submissions has risen dramatically in recent years. However, the extent of use of PBPK modeling by researchers, and the public availability of models, has not been systematically evaluated. This review evaluates PBPK-related publications to 1) identify the common applications of PBPK modeling; 2) determine ways in which models are developed; 3) establish how model quality is assessed; and 4) provide a list of publicly available PBPK models for sensitive P450 and transporter substrates as well as selective inhibitors and inducers. PubMed searches were conducted using the terms "PBPK" and "physiologically based pharmacokinetic model" to collect published models. Only papers on PBPK modeling of pharmaceutical agents in humans published in English between 2008 and May 2015 were reviewed. A total of 366 PBPK-related articles met the search criteria, with the number of articles published per year rising steadily. Published models were most commonly used for drug-drug interaction predictions (28%), followed by interindividual variability and general clinical pharmacokinetic predictions (23%), formulation or absorption modeling (12%), and predicting age-related changes in pharmacokinetics and disposition (10%). In total, 106 models of sensitive substrates, inhibitors, and inducers were identified. An in-depth analysis of the model development and verification revealed a lack of consistency in model development and quality assessment practices, demonstrating a need for development of best-practice guidelines.

  10. Building Permits, permits plus data base, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Building Permits dataset, was produced all or in part from Published Reports/Deeds information as of 2006. It is described as 'permits plus data base'. Data by...

  11. Community Colleges, school data base attribute, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Community Colleges dataset, was produced all or in part from Published Reports/Deeds information as of 2006. It is described as 'school data base attribute'....

  12. Skip List Data Structure Based New Searching Algorithm and Its Applications: Priority Search

    Directory of Open Access Journals (Sweden)

    Mustafa Aksu

    2016-02-01

    Full Text Available Our new algorithm, priority search, was created with the help of the skip list data structure and its algorithms. A skip list consists of linked lists formed in layers, which are linked in a pyramidal way. The time complexity of the search algorithm is O(lg N) in an N-element skip list data structure. The newly developed search algorithm is based on the hit search number of each searched datum. If a datum has a greater hit search number, it is upgraded to an upper level of the skip list data structure. That is, the most frequently searched data are located in the upper levels of the skip list and rarely searched data are located in the lower levels. The pyramidal structure of the data is constructed using the hit search numbers, in other words, the frequency of each datum. Thus, the time complexity of searching is almost Θ(1) for an N-record data set. In this paper, searching algorithms such as linear search, binary search, and priority search were implemented, and the obtained results were compared. The results demonstrated that the priority search algorithm was better than the binary search algorithm.
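A simplified sketch of the promotion idea in this abstract (not the thesis' actual skip-list implementation): each successful search raises the key one level, so frequently searched keys end up near the top and are found after very few probes. All names and the promotion rule are illustrative:

```python
class PriorityLevels:
    """Layered search structure: every key starts at the bottom level and
    is promoted one level per successful search, mimicking the hit-count
    idea of priority search."""

    def __init__(self, keys, num_levels=4):
        self.levels = [[] for _ in range(num_levels)]
        self.levels[-1] = list(keys)          # everything starts at the bottom
        self.hits = {k: 0 for k in self.levels[-1]}

    def search(self, key):
        """Scan levels top to bottom; return the number of probes, or -1."""
        probes = 0
        for depth, level in enumerate(self.levels):
            for i, k in enumerate(level):
                probes += 1
                if k == key:
                    self.hits[key] += 1
                    if depth > 0:             # promote toward the top level
                        level.pop(i)
                        self.levels[depth - 1].append(key)
                    return probes
        return -1                             # key not present

pl = PriorityLevels(range(100))
first = pl.search(99)    # cold: scanned from the bottom level
second = pl.search(99)   # hot: found immediately in an upper level
```

A real skip list keeps each level sorted with forward pointers, so even cold searches cost O(lg N) rather than the linear scan used in this toy.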

  13. Strategies for searching and managing evidence-based practice resources.

    Science.gov (United States)

    Robb, Meigan; Shellenbarger, Teresa

    2014-10-01

    Evidence-based nursing practice requires the use of effective search strategies to locate relevant resources to guide practice change. Continuing education and staff development professionals can assist nurses to conduct effective literature searches. This article provides suggestions for strategies to aid in identifying search terms. Strategies also are recommended for refining searches by using controlled vocabulary, truncation, Boolean operators, PICOT (Population/Patient Problem, Intervention, Comparison, Outcome, Time) searching, and search limits. Suggestions for methods of managing resources also are identified. Using these approaches will assist in more effective literature searches and may help evidence-based practice decisions. PMID:25221988

  14. Iris Localization Based on Edge Searching Strategies

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Han Jiuqiang

    2005-01-01

    An iris localization scheme based on edge searching strategies is presented. First, the edge detection operator Laplacian-of-Gaussian (LoG) is applied to the original iris image to search for its inner boundary. Then, a circle detection operator, which is invariant to translation, rotation, and scale, is introduced to locate the outer boundary and its center. Finally, a curve-fitting method is developed for eyelid localization. The performance of the proposed method is tested with 756 iris images from 108 different classes in the CASIA Iris Database and compared with the conventional Hough transform method. The experimental results show that, without loss of localization accuracy, the proposed iris localization algorithm is apparently faster than the Hough transform.

  15. SEMANTIC BASED MULTIPLE WEB SEARCH ENGINE

    Directory of Open Access Journals (Sweden)

    MS.S.LATHA SHANMUGAVADIVU,

    2010-08-01

    Full Text Available With the tremendous growth of information available to end users through the Web, search engines come to play an ever more critical role. Nevertheless, because of their general-purpose approach, it is increasingly common that the obtained result sets carry a burden of useless pages. The next-generation Web architecture, represented by the Semantic Web, provides a layered architecture that may allow this limitation to be overcome. Several search engines have been proposed which increase information retrieval accuracy by exploiting a key content of Semantic Web resources, that is, relations. To make the Semantic Web work, well-structured data and rules are necessary for agents to roam the Web [2]. XML and RDF are two important technologies: we can create our own structures with XML without indicating what they mean; RDF uses sets of triples which express basic concepts [2]. DAML is an extension of XML and RDF. The aim of this project is to develop a search engine based on ontology matching within the Semantic Web. It uses data in Semantic Web form such as DAML or RDF. When the user inputs a query, the program accepts the query and transfers it to a machine learning agent. The agent then measures the similarity between different ontologies and feeds the matched items back to the user.
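The triple model mentioned in this abstract can be illustrated in a few lines: facts are (subject, predicate, object) triples, and a query is a triple pattern with wildcards. The data and matching helper are invented for illustration; real systems query RDF stores with SPARQL:

```python
# A tiny in-memory set of RDF-style triples (made-up example data).
TRIPLES = {
    ("ex:Dog", "rdf:type", "ex:Animal"),
    ("ex:Cat", "rdf:type", "ex:Animal"),
    ("ex:Dog", "ex:sound", "ex:Bark"),
}

def match(pattern, triples):
    """Return triples matching a (subject, predicate, object) pattern,
    where None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

animals = match((None, "rdf:type", "ex:Animal"), TRIPLES)   # both animals
dog_facts = match(("ex:Dog", None, None), TRIPLES)          # all facts on ex:Dog
```

Chaining such patterns and sharing variables between them is essentially what a SPARQL basic graph pattern does.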

  16. A Feedback-Based Web Search Engine

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei-feng; XU Bao-wen; ZHOU Xiao-yu

    2004-01-01

    Web search engines are very useful information service tools in the Internet. The current web search engines produce search results relating to the search terms and the actual information collected by them. Since the selections of the search results cannot affect the future ones, they may not cover most people's interests. In this paper, feedback information produced by the users' accessing lists will be represented by the rough set and can reconstruct the query string and influence the search results. And thus the search engines can provide self-adaptability.

  17. Differential Search Algorithm Based Edge Detection

    Science.gov (United States)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for extracting edge information using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image-thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.

  18. Decomposition During Search for Propagation-Based Constraint Solvers

    OpenAIRE

    Mann, Martin; Tack, Guido; Will, Sebastian

    2007-01-01

    We describe decomposition during search (DDS), an integration of And/Or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have impl...

  19. Search-based software test data generation using evolutionary computation

    OpenAIRE

    Maragathavalli, P.

    2011-01-01

    Search-based Software Engineering has been utilized for a number of software engineering activities. One area where Search-Based Software Engineering has seen much application is test data generation. Evolutionary testing designates the use of metaheuristic search methods for test case generation. The search space is the input domain of the test object, with each individual, or potential solution, being an encoded set of inputs to that test object. The fitness function is tailored to find...

  20. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    Science.gov (United States)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, json, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, sub-setting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.

  1. Mashup Based Content Search Engine for Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-05-01

    Full Text Available A mashup-based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web Search API, the Yahoo!JAPAN Image Search API, the YouTube Data API, and the Amazon Product Advertising API. The retrieved results are merged and linked to each other, so different types of content can be referred to once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches across a variety of content types: images, documents, PDF files and moving pictures.

  2. Study of a Quantum Framework for Search Based Software Engineering

    Science.gov (United States)

    Wu, Nan; Song, Fangmin; Li, Xiangdong

    2013-06-01

    Search Based Software Engineering (SBSE) is widely used in software engineering to identify optimal solutions. The traditional methods and algorithms used in SBSE are criticized for their high costs. In this paper, we propose a rapid modified-Grover quantum searching method for SBSE; theoretically, this method can be applied to any search-space structure and any type of searching problem.
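
The abstract does not spell out the modified-Grover routine, but the standard Grover iteration it builds on can be simulated classically. A minimal statevector sketch (a plain textbook Grover simulation, not the paper's method):

```python
import math

def grover_search(n_items, marked, iterations=None):
    """Classical statevector simulation of Grover's algorithm with one
    marked item; returns the final probability of each basis state."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    if iterations is None:
        # Optimal iteration count ~ (pi/4) * sqrt(N).
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        amp[marked] = -amp[marked]               # oracle: phase flip
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]        # diffusion: invert about mean
    return [a * a for a in amp]

probs = grover_search(8, marked=3)
print(round(probs[3], 3))  # → 0.945: the marked item carries almost all probability
```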

  3. Search Relevance based on the Semantic Web

    OpenAIRE

    Bicer, Veli

    2012-01-01

    In this thesis, we explore the challenge of search relevance in the context of semantic search. Specifically, the notion of semantic relevance can be distinguished from the other types of relevance in Information Retrieval (IR) in terms of employing an underlying semantic model. We propose the emerging Semantic Web data on the Web which is represented in RDF graph structures as an important candidate to become such a semantic model in a search process.

  4. The OGC Publish/Subscribe specification in the context of sensor-based applications

    Science.gov (United States)

    Bigagli, Lorenzo

    2014-05-01

    The Open Geospatial Consortium Publish/Subscribe Standards Working Group (in short, OGC PubSub SWG) was chartered in 2010 to specify a mechanism to support publish/subscribe requirements across OGC service interfaces and data types (coverage, feature, etc.) The Publish/Subscribe Interface Standard 1.0 - Core (13-131) defines an abstract description of the basic mandatory functionality, along with several optional, extended capabilities. The Core is independent of the underlying binding, for which two extensions are currently considered in the PubSub SWG scope: a SOAP binding and RESTful binding. Two primary parties characterize the publish/subscribe model: a Publisher, which is publishing information, and a Subscriber, which expresses an interest in all or part of the published information. In many cases, the Subscriber and the entity to which data is to be delivered (the Receiver) are one and the same. However, they are distinguished in PubSub to allow for these roles to be segregated. This is useful, for example, in event-based systems, where system entities primarily react to incoming information and may emit new information to other interested entities. The Publish/Subscribe model is distinguished from the typical request/response model, where a client makes a request and the server responds with either the requested information or a failure. This provides relatively immediate feedback, but can be insufficient in cases where the client is waiting for a specific event (such as data arrival, server changes, or data updates). In fact, while waiting for an event, a client must repeatedly request the desired information (polling). This has undesirable side effects: if a client polls frequently this can increase server load and network traffic, and if a client polls infrequently it may not receive a message when it is needed. 
These issues are accentuated when event occurrences are unpredictable, or when the delay between event occurrence and client notification must

  5. Empirical Evidences in Citation-Based Search Engines: Is Microsoft Academic Search dead?

    OpenAIRE

    Orduna-Malea, Enrique; Ayllon, Juan Manuel; Martin-Martin, Alberto; Lopez-Cozar, Emilio Delgado

    2014-01-01

    The goal of this working paper is to summarize the main empirical evidence provided by the scientific community regarding the comparison between the two main citation-based academic search engines: Google Scholar and Microsoft Academic Search, paying special attention to the following issues: coverage, correlations between journal rankings, and usage of these academic search engines. Additionally, self-elaborated data is offered, which is intended to provide current evidence about the popul...

  6. A cluster-based simulation of facet-based search

    OpenAIRE

    Urruty, T.; Hopfgartner, F.; Villa, R.; Gildea, N.; Jose, J.M.

    2008-01-01

    The recent increase of online video has challenged the research in the field of video information retrieval. Video search engines are becoming more and more interactive, helping the user to easily find what he or she is looking for. In this poster, we present a new approach of using an iterative clustering algorithm on text and visual features to simulate users creating new facets in a facet-based interface. Our experimental results prove the usefulness of such an approach.

  7. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

    Full Text Available The Internet has become a daily instrument for most of us, for professional or personal reasons. We hardly remember the times when a computer and a broadband connection were luxury items. More and more people rely on the complicated web network to find the information they need. This paper presents an overview of Internet search related issues and search engines, and describes the parties and the basic mechanism embedded in a search for web-based information resources. It also presents ways to increase the efficiency of web searches through a better understanding of what search engines ignore in website content.

  8. Multi-agent based cooperative search in combinatorial optimisation

    OpenAIRE

    Martin, Simon

    2013-01-01

    Cooperative search provides a class of strategies to design more effective search methodologies by combining (meta-) heuristics for solving combinatorial optimisation problems. This area has been little explored in operational research. This thesis proposes a general agent-based distributed framework where each agent implements a (meta-) heuristic. An agent continuously adapts itself during the search process using a cooperation protocol based on reinforcement learning and pattern matching. G...

  9. A web-based rapid assessment tool for production publishing solutions

    Science.gov (United States)

    Sun, Tong

    2010-02-01

    Solution assessment is a critical first step in understanding and measuring the business process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task which involves much domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of various estimated performance metrics (e.g. throughput, turnaround time, resource utilization) and operational cost. By integrating the digital publishing workflow ontology and an activity-based costing model with a Petri-net based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side-by-side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. This tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.

  10. Semantic Web Based Efficient Search Using Ontology and Mathematical Model

    Directory of Open Access Journals (Sweden)

    K.Palaniammal

    2014-01-01

    Full Text Available The semantic web is the forthcoming technology in the world of search engines. It is mainly focused towards search that is meaningful, rather than the syntactic search prevailing now. This work concerns semantic search with respect to the educational domain. In this paper, we propose a semantic web based efficient search using ontology and a mathematical model that takes into account misleading and unmatched service information, lack of relevant domain knowledge, and wrong service queries. To solve these issues, the framework is designed to make three major contributions: an ontology knowledge base, Natural Language Processing (NLP) techniques, and a search model. The ontology knowledge base stores domain-specific service ontologies and service description entity (SDE) metadata. The search model, which includes the mathematical model, retrieves SDE metadata efficiently for education lenders. The NLP techniques provide spell-check and synonym-based search. The results are retrieved and stored in an ontology, which prevents data redundancy. The results are more accurate, sensitive to spelling, and aware of synonymous context. This approach reduces the user's time and complexity in finding correct results for his/her search text, and our model provides more accurate results. A series of experiments is conducted to evaluate the mechanism and the employed mathematical model.
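
A drastically simplified illustration of the spell-check and synonym-based search the abstract describes might look as follows; the miniature "ontology", the documents and the 0.8 similarity cutoff are all invented for the example:

```python
import difflib

# Invented miniature "ontology": canonical term -> synonyms.
ONTOLOGY = {
    "tuition": ["fee", "fees"],
    "scholarship": ["grant", "bursary"],
}
DOCS = {
    "doc1": "university tuition payment deadlines",
    "doc2": "how to apply for a scholarship",
}
VOCAB = list(ONTOLOGY) + [s for syns in ONTOLOGY.values() for s in syns]

def normalise(word):
    """Spell-correct a word against the vocabulary, then map any
    synonym onto its canonical ontology term."""
    match = difflib.get_close_matches(word.lower(), VOCAB, n=1, cutoff=0.8)
    if not match:
        return word.lower()
    hit = match[0]
    for canon, syns in ONTOLOGY.items():
        if hit == canon or hit in syns:
            return canon
    return hit

def search(query):
    query_terms = {normalise(w) for w in query.split()}
    return sorted(doc_id for doc_id, text in DOCS.items()
                  if query_terms & {normalise(w) for w in text.split()})

print(search("scholarsip grant"))  # ['doc2'] — misspelling and synonym both resolved
```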

  11. Realizing IoT service's policy privacy over publish/subscribe-based middleware.

    Science.gov (United States)

    Duan, Li; Zhang, Yang; Chen, Shiping; Wang, Shiyao; Cheng, Bo; Chen, Junliang

    2016-01-01

    The publish/subscribe paradigm makes IoT service collaborations more scalable and flexible, due to the space, time and control decoupling of event producers and consumers. Thus, the paradigm can be used to establish large-scale IoT service communication infrastructures such as Supervisory Control and Data Acquisition systems. However, preserving IoT service's policy privacy is difficult in this paradigm, because a classical publisher has little control over its own events after they are published, and a subscriber has to accept all events of the subscribed event type with no choice. Few existing publish/subscribe middleware have built-in mechanisms to address the above issues. In this paper, we present a novel access control framework, which is capable of preserving IoT service's policy privacy. In particular, we adopt the publish/subscribe paradigm as the IoT service communication infrastructure to facilitate the protection of IoT service policy privacy. The key idea in our policy-privacy solution is to use a two-layer cooperating method to match bi-directional privacy control requirements: (a) a data layer for protecting IoT events; and (b) an application layer for preserving the privacy of the service policy. Furthermore, the anonymous-set-based principle is adopted to realize the functionalities of the framework, including policy embedding and policy encoding as well as policy matching. Our security analysis shows that the policy privacy framework is Chosen-Plaintext Attack secure. We extend the open source Apache ActiveMQ broker by building in a policy-based authorization mechanism to enforce the privacy policy. The performance evaluation results indicate that our approach is scalable with reasonable overheads. PMID:27652188
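
The two-layer policy enforcement is specific to the paper, but the basic pattern — a broker that checks a publisher-supplied policy against subscriber attributes before delivering an event — can be sketched as follows (a toy stand-in, not the ActiveMQ extension described above):

```python
class Broker:
    """Toy topic-based publish/subscribe broker with per-topic policy
    checks; real middleware such as ActiveMQ is far richer."""
    def __init__(self):
        self.subs = {}      # topic -> list of (callback, subscriber attributes)
        self.policies = {}  # topic -> predicate over subscriber attributes

    def set_policy(self, topic, predicate):
        self.policies[topic] = predicate

    def subscribe(self, topic, callback, **attrs):
        self.subs.setdefault(topic, []).append((callback, attrs))

    def publish(self, topic, event):
        allowed = self.policies.get(topic, lambda attrs: True)
        for callback, attrs in self.subs.get(topic, []):
            if allowed(attrs):   # policy enforced inside the broker
                callback(event)

broker = Broker()
received = []
broker.set_policy("sensor/temp", lambda a: a.get("role") == "operator")
broker.subscribe("sensor/temp", received.append, role="operator")
broker.subscribe("sensor/temp", received.append, role="guest")
broker.publish("sensor/temp", 21.5)
print(received)  # [21.5] — only the authorised subscriber gets the event
```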

  13. Text-based plagiarism in scientific publishing: issues, developments and education.

    Science.gov (United States)

    Li, Yongyan

    2013-09-01

    Text-based plagiarism, or copying language from sources, has recently become an issue of growing concern in scientific publishing. Use of CrossCheck (a computational text-matching tool) by journals has sometimes exposed an unexpected amount of textual similarity between submissions and databases of scholarly literature. In this paper I provide an overview of the relevant literature, to examine how journal gatekeepers perceive textual appropriation, and how automated plagiarism-screening tools have been developed to detect text matching, with the technique now available for self-check of manuscripts before submission; I also discuss issues around English as an additional language (EAL) authors and in particular EAL novices being the typical offenders of textual borrowing. The final section of the paper proposes a few educational directions to take in tackling text-based plagiarism, highlighting the roles of the publishing industry, senior authors and English for academic purposes professionals.

  15. Text-Based Plagiarism in Scientific Publishing: Issues, Developments and Education

    OpenAIRE

    Li, Yongyan

    2012-01-01

    Text-based plagiarism, or copying language from sources, has recently become an issue of growing concern in scientific publishing. Use of CrossCheck (a computational text-matching tool) by journals has sometimes exposed an unexpected amount of textual similarity between submissions and databases of scholarly literature. In this paper I provide an overview of the relevant literature, to examine how journal gatekeepers perceive textual appropriation, and how automated plagiarism-screening tools...

  16. Grover quantum searching algorithm based on weighted targets

    Institute of Scientific and Technical Information of China (English)

    Li Panchi; Li Shiyong

    2008-01-01

    The current Grover quantum searching algorithm cannot identify differences in the importance of search targets when it is applied to an unsorted quantum database, and the probability of obtaining each search target is equal. To solve this problem, a Grover searching algorithm based on weighted targets is proposed. First, each target is assigned a weight coefficient according to its importance. Applying these weight coefficients, the targets are represented as quantum superposition states. Second, a novel Grover searching algorithm based on the quantum superposition of the weighted targets is constructed. Using this algorithm, the probability of obtaining each target approximates its corresponding weight coefficient, which shows the flexibility of the algorithm. Finally, the validity of the algorithm is demonstrated with a simple searching example.

  17. Beacon-Based Service Publishing Framework in Multiservice Wi-Fi Hotspots

    Directory of Open Access Journals (Sweden)

    Di Sorte Dario

    2007-01-01

    Full Text Available In an expected future multiaccess and multiservice IEEE 802.11 environment, the problem of providing users with useful service-related information to support correct, rapid network selection is expected to become very important. A feasible short-term 802.11-tailored working solution, compliant with existing equipment, is to publish service information encoded within the SSID information element of beacon frames. This makes it possible for an operator to implement service publishing in 802.11 networks while waiting for a standardized mechanism. This straightforward approach has also allowed us to evaluate experimentally the performance of a beacon-based service publishing solution. In fact, the main focus of the paper is to present a quantitative comparison of service discovery times between the legacy scenario, where the user is forced to associate and authenticate with a network point of access to check its service offer, and the enhanced scenario, where the set of service-related information is broadcast within beacons. These discovery times are obtained by processing the results of a measurement campaign performed in a multiaccess/service 802.11 environment. This analysis confirms the effectiveness of the beacon-based approach. We also show that the cost of such a solution in terms of wireless bandwidth consumption is low.
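
A rough sketch of the SSID-encoding idea, assuming a made-up `operator#service,service` layout (the paper does not prescribe this format); the one hard constraint is the 32-byte SSID limit from IEEE 802.11:

```python
def encode_ssid(operator, services):
    """Pack an operator name plus service flags into an SSID-like string;
    IEEE 802.11 limits SSIDs to 32 bytes, so the encoding must be terse."""
    ssid = operator + "#" + ",".join(sorted(services))
    if len(ssid.encode("utf-8")) > 32:
        raise ValueError("SSID exceeds the 32-byte limit")
    return ssid

def decode_ssid(ssid):
    """Recover the operator name and service flags on the client side."""
    operator, _, flags = ssid.partition("#")
    return operator, flags.split(",") if flags else []

ssid = encode_ssid("OpA", ["voip", "web"])
print(decode_ssid(ssid))  # ('OpA', ['voip', 'web'])
```

The client can thus learn the service offer from a passively received beacon, without the associate-and-authenticate round trip of the legacy scenario.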

  18. Attribute-based proxy re-encryption with keyword search.

    Directory of Open Access Journals (Sweden)

    Yanfeng Shi

    Full Text Available Keyword search on encrypted data allows one to issue a search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with a specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate keyword search capability to other data users in a fine-grained access control manner, by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedding some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as an integration of techniques from attribute-based cryptography and proxy re-encryption cryptography.
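
The pairing-based ABRKS constructions are well beyond a few lines, but the basic searchable-encryption intuition — the server matches opaque keyword tags against a search token without learning the keyword itself — can be illustrated with HMAC (a drastic simplification, not the paper's scheme, and with no re-encryption or attribute policies):

```python
import hmac, hashlib

def keyword_tag(key, keyword):
    """Deterministic pseudorandom tag for a keyword; without the key,
    the tag reveals nothing useful about the keyword."""
    return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

def encrypt_index(key, doc_id, keywords):
    """The index the owner uploads: document id plus keyword tags."""
    return doc_id, {keyword_tag(key, w) for w in keywords}

def server_search(index, token):
    """The cloud sees only tags and the token, never plaintext keywords."""
    return sorted(doc for doc, tags in index if token in tags)

owner_key = b"data-owner-secret"
index = [
    encrypt_index(owner_key, "report.pdf", ["cloud", "privacy"]),
    encrypt_index(owner_key, "notes.txt", ["search"]),
]
token = keyword_tag(owner_key, "privacy")   # issued to an authorised user
print(server_search(index, token))  # ['report.pdf']
```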

  19. Mragyati : A System for Keyword-based Searching in Databases

    OpenAIRE

    Sarda, N. L.; Jain, Ankur

    2001-01-01

    The web, through many search engine sites, has popularized the keyword-based search paradigm, where a user can specify a string of keywords and expect to retrieve relevant documents, possibly ranked by their relevance to the query. Since a lot of information is stored in databases (and not as HTML documents), it is important to provide a similar search paradigm for databases, where users can query a database without knowing the database schema and database query languages such as SQL. In this...

  20. A Survey on Keyword Based Search over Outsourced Encrypted Data

    Directory of Open Access Journals (Sweden)

    S. Evangeline Sharon

    2013-04-01

    Full Text Available To ensure security, encryption techniques play a major role when data are outsourced to the cloud. The problem of retrieving data from cloud servers is considered. Many searching techniques are used for retrieving the data. This study focuses on a set of keyword-based search algorithms that provide secure data retrieval with high efficiency. It concludes that the Ranked Searchable Symmetric Encryption (RSSE) scheme is the best methodology for searching encrypted data.

  1. Search-Based Software Test Data Generation Using Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    P. Maragathavalli

    2011-02-01

    Full Text Available Search-based Software Engineering has been utilized for a number of software engineering activities. One area where Search-Based Software Engineering has seen much application is test data generation. Evolutionary testing designates the use of metaheuristic search methods for test case generation. The search space is the input domain of the test object, with each individual, or potential solution, being an encoded set of inputs to that test object. The fitness function is tailored to find test data for the type of test that is being undertaken. Evolutionary Testing (ET) uses optimizing search techniques such as evolutionary algorithms to generate test data. The effectiveness of a GA-based testing system is compared with a random testing system. For simple programs both testing systems work fine, but as the complexity of the program or of the input domain grows, the GA-based testing system significantly outperforms random testing.
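
A toy version of GA-based test data generation, using branch distance to a hypothetical target branch `x == 4242` as the fitness function (population size, operators and all constants are illustrative, not from the paper):

```python
import random

def branch_distance(x):
    """Distance to satisfying the target branch `x == 4242`,
    a standard fitness shape in evolutionary testing (0 = covered)."""
    return abs(x - 4242)

def evolve(pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [rng.randint(0, 10000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)          # best individuals first
        if branch_distance(pop[0]) == 0:
            break                              # branch covered
        parents = pop[: pop_size // 2]         # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2               # crossover: averaging
            if rng.random() < 0.3:
                child += rng.randint(-50, 50)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=branch_distance)

print(evolve())  # an input at or very near the target branch
```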

  2. Weblog Search Engine Based on Quality Criteria

    Directory of Open Access Journals (Sweden)

    F. Azimzadeh,

    2011-01-01

    Full Text Available Nowadays, an increasing amount of human knowledge is placed in computerized repositories such as the World Wide Web. This gives rise to the problem of how to locate specific pieces of information in these often quite unstructured repositories; search engines are the best available solution. Some studies show that almost half of the traffic to blog servers comes from search engines. The outgoing and informal social nature of the blogosphere opens the opportunity for exploiting more socially-oriented features. The nature of blogs, which are usually characterized by their personal and informal style, their dynamism and their constantly renewed link structure, requires new quality measurements for blog search engines. Link analysis algorithms that exploit the Web graph may not work well in the blogosphere in general: Gonçalves et al. (2010) indicated that most of the popular blogs in their dataset (70%) have a PageRank value equal to -1, being thus almost invisible to the search engine. We expect that blogs incorporating the special blog quality criteria would be more desirably retrieved by search engines.

  3. The Evolution of Web Searching.

    Science.gov (United States)

    Green, David

    2000-01-01

    Explores the interrelation between Web publishing and information retrieval technologies and lists new approaches to Web indexing and searching. Highlights include Web directories; search engines; portalisation; Internet service providers; browser providers; meta search engines; popularity based analysis; natural language searching; links-based…

  4. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is the use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether he/she should correct the segment or not. By means of a search engine, the checker also suggests use of other expressions that appear on the Web more often than the expression he/she actually wrote.
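
The checker's core heuristic — prefer the phrasing that occurs more often on the Web, and flag phrasings with no occurrences at all — can be sketched with a stubbed-out hit counter standing in for the real search-engine API (all frequencies below are invented):

```python
def pick_phrasing(candidates, hit_count):
    """Rank alternative phrasings by how often each occurs on the Web;
    `hit_count` stands in for a real search-engine API call."""
    scored = sorted(candidates, key=hit_count, reverse=True)
    best = scored[0]
    flagged = [c for c in candidates if hit_count(c) == 0]  # probably ill-formed
    return best, flagged

# Stub frequencies imitating search-engine result counts.
FAKE_HITS = {
    "depends on the": 9_200_000,
    "depends of the": 12_000,
    "depend from the": 0,
}
best, flagged = pick_phrasing(list(FAKE_HITS), FAKE_HITS.get)
print(best, flagged)  # depends on the ['depend from the']
```

The checker described above goes further by also retrieving the contexts in which each phrasing appears, so the user can judge the correction instead of accepting it blindly.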

  5. Prospects for Supersymmetry Discovery Based on Inclusive Searches

    CERN Document Server

    The ATLAS collaboration

    2008-01-01

    This note describes searches for generic SUSY models with R-parity conservation with the ATLAS detector at the CERN Large Hadron Collider. SUSY particles would be produced in pairs and decay to the lightest SUSY particle (chi^0_1), which escapes the detector, giving signatures involving jets, possibly leptons, and missing transverse energy (ETmiss). The integrated luminosity simulated is 1 fb^-1. This article relies on work published elsewhere in this collection, where the Standard Model backgrounds for SUSY searches are discussed.

  6. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    There has been huge growth in the volume of published biomedical research in recent years. Many medical search engines have been designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the utilized ontologies and the overall retrieval process hampers evaluating different search engines, and interoperability between them, under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine.

  7. Semantic Search among Heterogeneous Biological Databases Based on Gene Ontology

    Institute of Scientific and Technical Information of China (English)

    Shun-Liang CAO; Lei QIN; Wei-Zhong HE; Yang ZHONG; Yang-Yong ZHU; Yi-Xue LI

    2004-01-01

    Semantic search is a key issue in the integration of heterogeneous biological databases. In this paper, we present a methodology for implementing semantic search in BioDW, an integrated biological data warehouse. Two tables are presented: the DB2GO table, to correlate Gene Ontology (GO) annotated entries from BioDW data sources with GO, and the semantic similarity table, to record similarity scores derived from any pair of GO terms. Based on the two tables, multifarious ways for semantic search are provided, and the corresponding entries in heterogeneous biological databases can be expediently searched in semantic terms.
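
One simple way to fill a pairwise semantic-similarity table like the one described is ancestor-set overlap over the ontology graph, sketched here with a toy child-to-parent fragment (real GO similarity measures are typically information-content based, so this is only an illustration):

```python
# Toy fragment of an ontology: child term -> parent terms.
PARENTS = {
    "mitochondrion": ["organelle"],
    "nucleus": ["organelle"],
    "organelle": ["cellular_component"],
    "cellular_component": [],
}

def ancestors(term):
    """The term itself plus every ancestor reachable in the DAG."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, []))
    return seen

def similarity(a, b):
    """Jaccard overlap of ancestor sets, in [0, 1]; one cheap way to
    precompute a pairwise semantic-similarity table."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

print(similarity("mitochondrion", "nucleus"))  # 0.5
```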

  8. Routing Optimization Based on Taboo Search Algorithm for Logistic Distribution

    Directory of Open Access Journals (Sweden)

    Hongxue Yang

    2014-04-01

    Full Text Available Along with the widespread application of electronic commerce in modern business, logistic distribution has become increasingly important. More and more enterprises recognize that logistic distribution plays an important role in the process of production and sales. A good routing for logistic distribution can cut down transport cost and improve efficiency. To this end, a routing optimization based on taboo search for logistic distribution is proposed in this paper. Taboo search is a metaheuristic local search method, employed here to accelerate convergence; the aspiration criterion is combined with the heuristic algorithm to solve the routing optimization. Simulation results demonstrate that the optimal routing in logistic distribution can be quickly obtained by the taboo search algorithm.
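
A minimal tabu search for a small routing instance, with a swap neighbourhood, a fixed tabu tenure and the aspiration criterion mentioned above (the distance matrix and all parameters are invented for the example):

```python
import itertools

DIST = {  # symmetric distances between a depot (0) and three customers
    (0, 1): 4, (0, 2): 7, (0, 3): 3,
    (1, 2): 2, (1, 3): 5, (2, 3): 6,
}

def d(a, b):
    return 0 if a == b else DIST[(min(a, b), max(a, b))]

def route_cost(route):
    tour = [0] + list(route) + [0]        # depot -> customers -> depot
    return sum(d(x, y) for x, y in zip(tour, tour[1:]))

def tabu_search(customers, iters=20, tenure=3):
    """Swap-neighbourhood tabu search; a tabu move is still accepted
    if it beats the best cost seen (aspiration criterion)."""
    current, best = list(customers), list(customers)
    tabu = {}  # move -> iteration until which it stays forbidden
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            neigh = current[:]
            neigh[i], neigh[j] = neigh[j], neigh[i]
            cost = route_cost(neigh)
            if tabu.get((i, j), -1) <= it or cost < route_cost(best):
                candidates.append((cost, (i, j), neigh))
        cost, move, current = min(candidates)
        tabu[move] = it + tenure
        if cost < route_cost(best):
            best = current[:]
    return best, route_cost(best)

best, cost = tabu_search([2, 1, 3])
print(cost)  # → 15, the optimal tour cost for this instance
```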

  9. Gossip-based Search in Multipeer Communication Networks

    CERN Document Server

    Jaho, Eva; Tang, Siyu; Stavrakakis, Ioannis; Van Mieghem, Piet

    2009-01-01

    We study a gossip-based algorithm for searching data objects in a multipeer communication network. All of the nodes in the network are able to communicate with each other. There exists an initiator node that starts a round of searches by randomly querying one or more of its neighbors for a desired object. The queried nodes can also be activated and look for the object. We examine several behavioural patterns of nodes with respect to their willingness to cooperate in the search. We derive mathematical models for the search process based on the balls and bins model, as well as known approximations for the rumour-spreading problem. All models are validated with simulations. We also evaluate the performance of the algorithm and examine the impact of search parameters.
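
The search process described above can be simulated in a few lines. The fanout, round limit and seed below are assumed parameters for illustration, not the paper's balls-and-bins analysis:

```python
import random


def gossip_search(n_nodes, holders, fanout=2, max_rounds=50, rng=None):
    """Return the round at which a gossip search first queries a node
    holding the object (None if max_rounds is exhausted). Every informed
    node queries `fanout` uniformly random peers per round, and queried
    nodes become informed and keep searching."""
    rng = rng or random.Random(0)
    informed = {0}  # node 0 is the initiator
    for rnd in range(1, max_rounds + 1):
        queried = set()
        for node in informed:
            peers = [p for p in range(n_nodes) if p != node]
            queried.update(rng.sample(peers, fanout))
        if queried & holders:
            return rnd
        informed |= queried
    return None
```

Since the informed set roughly multiplies each round, the object is typically found in a number of rounds logarithmic in the network size, which is the behaviour the rumour-spreading approximations capture.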

  10. LAHS: A novel harmony search algorithm based on learning automata

    Science.gov (United States)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
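
For reference, the basic HS loop that LAHS builds on can be sketched as follows. The parameter values are illustrative defaults, and the learning-automata adaptation of HMCR, PAR and bw is deliberately omitted:

```python
import random


def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=5000, rng=None):
    """Minimise f over box bounds with the basic HS loop. Parameters are
    fixed here; LAHS would adapt hmcr, par and bw online."""
    rng = rng or random.Random(0)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    memory.sort(key=f)
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:              # harmony memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:           # pitch adjustment within bandwidth
                    x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
            else:                                # random re-initialisation
                x = rng.uniform(lo, hi)
            new.append(x)
        if f(new) < f(memory[-1]):               # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]
```

The sensitivity of this loop to hmcr, par and bw is exactly what motivates LAHS: a poor fixed choice slows convergence, whereas adapting them from feedback removes the manual tuning step.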

  11. Stochastic Models for Budget Optimization in Search-Based Advertising

    OpenAIRE

    Muthukrishnan, S.; Pal, Martin; Svitkina, Zoya

    2006-01-01

    Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and ...

  12. Content-based Music Search and Recommendation System

    Science.gov (United States)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

Recently, the volume of music data on the Internet has increased rapidly. This has raised the user's cost of finding music data that suits their preferences within such a large data set. We propose a content-based music search and recommendation system. The system has an interface for searching and finding music data and an interface for editing the user profile that music recommendation requires. By visualizing the feature space of the music and visualizing the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system gives the user information obtained from the user profile when searching music data, and information obtained from the feature space of the music when editing the user profile.

  13. Music-based training for pediatric CI recipients: A systematic analysis of published studies.

    Science.gov (United States)

    Gfeller, K

    2016-06-01

    In recent years, there has been growing interest in the use of music-based training to enhance speech and language development in children with normal hearing and some forms of communication disorders, including pediatric CI users. The use of music training for CI users may initially seem incongruous given that signal processing for CIs presents a degraded version of pitch and timbre, both key elements in music. Furthermore, empirical data of systematic studies of music training, particularly in relation to transfer to speech skills are limited. This study describes the rationale for music training of CI users, describes key features of published studies of music training with CI users, and highlights some developmental and logistical issues that should be taken into account when interpreting or planning studies of music training and speech outcomes with pediatric CI recipients. PMID:27246744

  14. Music-Based Training for Pediatric CI Recipients: A Systematic Analysis of Published Studies

    Science.gov (United States)

    Gfeller, Kate

    2016-01-01

    In recent years, there has been growing interest in the use of music-based training to enhance speech and language development in children with normal hearing and some forms of communication disorders, including pediatric CI users. The use of music training for CI users may initially seem incongruous given that signal processing for CIs presents a degraded version of pitch and timbre, both key elements in music. Furthermore, empirical data of systematic studies of music training, particularly in relation to transfer to speech skills are limited. This study describes the rationale for music training of CI users, describes key features of published studies of music training with CI users, and highlights some developmental and logistical issues that should be taken into account when interpreting or planning studies of music training and speech outcomes with pediatric CI recipients. PMID:27246744

  15. Entropy-Based Search Algorithm for Experimental Design

    CERN Document Server

    Malakar, N K

    2010-01-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. ...

  16. METADATA EXPANDED SEMANTICALLY BASED RESOURCE SEARCH IN EDUCATION GRID

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

With the rapid increase of educational resources, how to search for necessary educational resources quickly is one of the most important issues. Educational resources are distributed and heterogeneous, the same characteristics as Grid resources; therefore, Grid resource search technology was adopted to implement educational resource search. Motivated by the insufficiency of current metadata-based resource search methods, a method for extracting semantic relations between the words constituting metadata is proposed. We mainly focus on acquiring synonymy, hyponymy, hypernymy and parataxis relations. In our schema, we extract texts related to the metadata to be expanded from the text space through text extraction templates. Next, metadata are obtained through metadata extraction templates. Finally, we compute semantic similarity to eliminate false relations and construct a semantic expansion knowledge base. The proposed method has been applied in the education grid.

  17. Carbon monoxide toxicity. (Latest citations from the Life Sciences Collection data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-01

    The bibliography contains citations concerning the mechanism and clinical manifestations of carbon monoxide (CO) exposure, including the effects on the liver, cardiovascular, and nervous systems. Topics include studies of the carbon monoxide binding affinity with hemoglobin, measurement of carboxyhemoglobin in humans and various animal species, carbon monoxide levels resulting from tobacco and marijuana smoke, occupational exposure and the NIOSH (National Institute for Occupational Safety and Health) biological exposure index, symptomology and percent of blood CO, and intrauterine exposure. Air pollution, tobacco smoking, and occupational exposure are discussed as primary sources of carbon monoxide exposure. The effects of cigarette smoking on fetal development and health are excluded and examined in a separate bibliography. (Contains a minimum of 172 citations and includes a subject term index and title list.)

  18. Human gene therapy: methods and materials. (latest citations from the biobusiness data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-06-01

    The bibliography contains citations concerning the evolution of technologies for genetic identification and treatment of diseases such as cancer, immune deficiencies, anemias, hemophilias, muscular dystrophy, and diabetes. Emphasis is placed upon development and application of genetic engineering techniques for the production of medicinal biological preparations. Other topics include the use of DNA (deoxyribonucleic acid) probes for gene isolation and disease marker identification, methods for replacing missing or defective genetic material, and mapping of the human genome. Governmental regulation, and moral and ethical implications are briefly reviewed. (Contains 250 citations and includes a subject term index and title list.)

  19. Network-Based Electronic Publishing of Scholarly Works: A Selective Bibliography

    OpenAIRE

    Bailey, Jr., Charles W.

    1995-01-01

    This bibliography presents selected articles, books, electronic documents, and other sources that are useful in understanding scholarly electronic publishing efforts on the Internet and other networks. Most sources have been published between 1990 and the present; however, a limited number of key sources published prior to 1990 are also included. Where possible, links are provided to sources that are available via the Internet. Version 26 is the final update of this bibliography. For more cu...

  20. A knowledge based search tool for performance measures in health care systems.

    Science.gov (United States)

    Beyan, Oya D; Baykal, Nazife

    2012-02-01

Performance measurement is vital for improving health care systems. However, we are still far from having accepted performance measurement models. Researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in a database. A semantic network was designed to represent domain knowledge and support reasoning. We applied knowledge-based decision support techniques to cope with uncertainty problems. As a result, we designed a tool that simplifies the performance indicator search process and provides the most relevant indicators by employing knowledge-based systems.

  1. A Domain Specific Ontology Based Semantic Web Search Engine

    CERN Document Server

    Mukhopadhyay, Debajyoti; Mukherjee, Sreemoyee; Bhattacharya, Jhilik; Kim, Young-Chon

    2011-01-01

Since its emergence in the 1990s, the World Wide Web (WWW) has rapidly evolved into a huge mine of global information, and it is growing in size every day. The presence of a huge amount of resources on the Web thus poses a serious problem for accurate search. This is mainly because today's Web is a human-readable Web, where information cannot easily be processed by machines. The highly sophisticated, efficient keyword-based search engines that have evolved today have not been able to bridge this gap. Hence the concept of the Semantic Web, envisioned by Tim Berners-Lee as a Web of machine-interpretable information expressed in a machine-processable form. Based on Semantic Web technologies, we present in this paper the design methodology and development of a semantic Web search engine which provides exact search results for a domain-specific search. This search engine is developed for an agricultural Website which hosts agricultural information about the state of West Bengal.

  2. Entropy-Based Search Algorithm for Experimental Design

    Science.gov (United States)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
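
The entropy criterion at the heart of the method is easy to sketch: score each candidate experiment by the Shannon entropy of the outcomes predicted by the current set of probable models, and pick the maximum. The models and experiments below are toy stand-ins (nested entropy sampling itself, with its rising threshold, is not reproduced here):

```python
import math
from collections import Counter


def shannon_entropy(outcomes):
    """Entropy (in bits) of the empirical distribution of predicted outcomes."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def most_informative(experiments, models):
    """Select the experiment whose predicted outcomes (one per model)
    have maximum entropy, i.e. where the probable models disagree most."""
    return max(experiments, key=lambda e: shannon_entropy([m(e) for m in models]))
```

An experiment on which all probable models agree has zero entropy and teaches us nothing; the maximally informative experiment is the one that best discriminates between them.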

  3. SHOP: scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Linusson, Anna; Zamora, Ismael

    2007-01-01

    A new GRID-based method for scaffold hopping (SHOP) is presented. In a fully automatic manner, scaffolds were identified in a database based on three types of 3D-descriptors. SHOP's ability to recover scaffolds was assessed and validated by searching a database spiked with fragments of known...

  4. A Shape Based Image Search Technique

    Directory of Open Access Journals (Sweden)

    Aratrika Sarkar

    2014-08-01

Full Text Available This paper describes an interactive application we have developed based on a shape-based image retrieval technique. The key concepts described in the project are: (i) matching of images based on contour matching; (ii) matching of images based on edge matching; (iii) matching of images based on pixel matching of colours. Further, the application facilitates matching of images invariant to transformations such as (i) translation; (ii) rotation; (iii) scaling. The key feature of the system is that it graphically shows the percentage of mismatch between the uploaded image and the images already in the database, while the integrity of the system lies in the unique matching techniques used for optimum results. This increases the accuracy of the system. For example, when a user uploads an image, say an image of a mango leaf, the application shows all mango leaves present in the database as well as other leaves matching the colour and shape of the uploaded mango leaf.

  5. FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM

    OpenAIRE

    VIPINKUMAR TIWARI

    2012-01-01

Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy and redundant data, thus leading to more accurate recognition of faces in the database. The Cuckoo algorithm is one of the recent optimization algorithms in the league of nature-based algorithms. Its optimization results are better than those of the PSO and ACO optimization algorithms. The proposal of applying the Cuckoo algorithm for feature selection in the process of face ...

  6. Personalized Web Search Using Trust Based Hubs And Authorities

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2014-07-01

Full Text Available In this paper a method has been proposed to improve the precision of Personalized Web Search (PWS) using Trust-based Hubs and Authorities (HA), where Hubs are high-quality resource pages and Authorities are high-quality content pages on a specific topic, generated using Hyperlink-Induced Topic Search (HITS). Trust is used in HITS to increase its reliability in identifying good hubs and authorities for effective web search and to overcome the topic-drift problem found in HITS. An experimental study was conducted on a data set of web query sessions to test the effectiveness of PWS with Trust-based HA in the domains Academics, Entertainment and Sport. The experimental results were compared on the basis of improvement in average precision using PWS with HA (with/without Trust). The results, verified statistically, show a significant improvement in precision using PWS with HA (with Trust).
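
For context, the underlying hub/authority iteration (plain HITS, without the trust weighting this paper adds) looks like this on a toy link graph:

```python
def hits(links, iters=50):
    """Plain HITS iteration: `links` maps each page to the pages it links to.
    Returns (hub, authority) score dictionaries, L2-normalised."""
    pages = set(links) | {q for targets in links.values() for q in targets}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iters):
        auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
        hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
        for scores in (auth, hub):  # L2-normalise to keep values bounded
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth
```

Because authority flows to whatever good hubs point at, an off-topic but well-linked page can accumulate score, which is the topic-drift problem the trust weighting is meant to counter.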

  7. Android Based Effective Search Engine Retrieval System Using Ontology

    Directory of Open Access Journals (Sweden)

    A. Praveena

    2014-05-01

Full Text Available In the proposed model, users search for a query either on a specified area or on the user's location; the server retrieves all the data to the user's computer, where ontology is applied. After applying the ontology, the query is classified into two concepts: location-based or content-based. The user's PC displays all the relevant keywords on the user's mobile so that the user can select the exact requirement. The client collects and stores clickthrough data locally to protect privacy, whereas tasks such as concept extraction, training, and reranking are performed at the search engine server. Ranking occurs, and finally the exactly mapped information is delivered to the user's mobile; the privacy problem is addressed by restricting the information in the user profile exposed to the search engine server with two privacy parameters. Finally, the UDD algorithm is applied to eliminate duplicate records, which helps to minimize the number of URLs listed to the user.

  8. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, such as data discovery and data publishing. However, the efficiency of geospatial resource discovery is still challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users with different browsers and devices may have very different user experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed, HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various distributed GCIs; (2) the asynchronous record retrieval mode enhances search performance and user interactivity; (3) the HTML5-based search engine is able to provide unified access capabilities for users with different devices (e.g. tablet and smartphone).

  9. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

Full Text Available Autonomous systems must successfully operate in complex, time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing) and the method most commonly used today (Monte Carlo testing). The search-based testing techniques demonstrated better performance than Monte Carlo testing on both test-case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.

  10. FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM

    Directory of Open Access Journals (Sweden)

    VIPINKUMAR TIWARI

    2012-07-01

Full Text Available Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy and redundant data, thus leading to more accurate recognition of faces in the database. The Cuckoo algorithm is one of the recent optimization algorithms in the league of nature-based algorithms. Its optimization results are better than those of the PSO and ACO optimization algorithms. The proposal of applying the Cuckoo algorithm for feature selection in the process of face recognition is presented in this paper.
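
A hedged sketch of binary cuckoo search applied to feature selection follows. Random bit-flips stand in for the Lévy flights of the full algorithm, and every name and parameter below is illustrative rather than taken from the paper:

```python
import random


def cuckoo_feature_select(n_features, fitness, n_nests=8, pa=0.25, iters=100, rng=None):
    """Binary cuckoo-search sketch for feature selection. Each nest is a
    boolean feature mask; new eggs are produced by random bit flips (a crude
    stand-in for Levy flights), and a fraction pa of the worst nests is
    abandoned and re-seeded each generation."""
    rng = rng or random.Random(0)
    nests = [[rng.random() < 0.5 for _ in range(n_features)] for _ in range(n_nests)]
    for _ in range(iters):
        for i, nest in enumerate(nests):
            egg = [bit ^ (rng.random() < 0.1) for bit in nest]  # random flips
            if fitness(egg) > fitness(nest):
                nests[i] = egg
        nests.sort(key=fitness, reverse=True)
        for i in range(int(pa * n_nests)):  # abandon the worst nests
            nests[-1 - i] = [rng.random() < 0.5 for _ in range(n_features)]
    return max(nests, key=fitness)
```

In a face recognition pipeline the fitness function would be the classifier's accuracy on masks of the extracted features; the mask that survives selects the relevant, non-redundant features.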

  11. IRPPS Editoria Elettronica: an electronic publishing web portal based on Open Journal Systems (OJS)

    OpenAIRE

    Nobile, Marianna; Pecoraro, Fabrizio; GreyNet, Grey Literature Network Service

    2013-01-01

This paper presents IRPPS Editoria Elettronica, an e-publishing service developed by the Institute for Research on Population and Social Policies (IRPPS) of the Italian National Research Council (CNR). Its aim is to reorganize the Institute's scientific editorial activity, manage its in-house publications and diffuse its scientific results. In particular, this paper focuses on: the IRPPS editorial activities, the platform used to develop the service, the publishing process and the web portal develo...

  12. A Compound Object Authoring and Publishing Tool for Literary Scholars based on the IFLA-FRBR

    OpenAIRE

    Anna Gerber; Jane Hunter

    2009-01-01

    This paper presents LORE (Literature Object Re-use and Exchange), a light-weight tool which is designed to allow literature scholars and teachers to author, edit and publish compound information objects encapsulating related digital resources and bibliographic records. LORE enables users to easily create OAI-ORE-compliant compound objects, which build on the IFLA FRBR model, and also enables them to describe and publish them to an RDF repository as Named Graphs. Using the tool, literary schol...

  13. Review of Online Evidence-based Practice Point-of-Care Information Summary Providers: Response by the Publisher of DynaMed

    OpenAIRE

    Alper, Brian S.

    2010-01-01

In response to the review by Banzi et al of online evidence-based practice point-of-care resources, published in the Journal of Medical Internet Research, the publisher of DynaMed clarifies his evidence-based methodology.

  14. Complications rates of non-oncologic urologic procedures in population-based data: a comparison to published series

    Directory of Open Access Journals (Sweden)

    David S. Aaronson

    2010-10-01

Full Text Available PURPOSE: Published single-institutional case series are often performed by one or more surgeons with considerable expertise in specific procedures. The reported incidence of complications in these series may not accurately reflect community-based practice. We sought to compare complication and mortality rates following urologic procedures derived from population-based data to those of published single-institutional case series. MATERIALS AND METHODS: In-hospital mortality and complications of common urologic procedures (percutaneous nephrostomy, ureteropelvic junction obstruction repair, ureteroneocystostomy, urethral repair, artificial urethral sphincter implantation, urethral suspension, transurethral resection of the prostate, and penile prosthesis implantation) reported in the U.S. National Inpatient Sample of the Healthcare Cost and Utilization Project were identified. Rates were then compared to those of published single-institution series using statistical analysis. RESULTS: For 7 of the 8 procedures examined, there was no significant difference in rates of complication or mortality between published studies and our population-based data. However, for percutaneous nephrostomy, two published single-center series had significantly lower mortality rates (p < 0.001). The overall rate of complications in the population-based data was higher than in published single- or select multi-institutional data for percutaneous nephrostomy performed for urinary obstruction (p < 0.001). CONCLUSIONS: If one assumes that administrative data do not suffer from underreporting of complications, then for some common urological procedures, complication rates between population-based data and published case series seem comparable. Endorsement of mandatory collection of clinical outcomes is likely the best way to appropriately counsel patients about the risks of these common urologic procedures.

  15. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images...

  16. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong; Trieschnigg, Dolf; Develder, Chris; Hiemstra, Djoerd

    2013-01-01

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  17. An analysis of search-based user interaction on the semantic web

    OpenAIRE

    Hildebrand, M; Ossenbruggen, van, Jacco; Hardman, HL Lynda

    2007-01-01

    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of semantic search features that are used during query construction, the core search process, the presentation of the search results and user feedback on query and results. For each of these, we cons...

  18. An Algorithm Based on Tabu Search for Satisfiability Problem

    Institute of Scientific and Technical Information of China (English)

    黄文奇; 张德富; 汪厚祥

    2002-01-01

In this paper, a computationally effective algorithm based on tabu search for solving the satisfiability problem (TSSAT) is proposed. Some novel and efficient heuristic strategies for generating the candidate neighborhood of the current assignment and selecting variables to be flipped are presented. In particular, the aspiration criterion and tabu list structure of TSSAT differ from those of traditional tabu search. Computational experiments on a class of problem instances show that TSSAT, in a reasonable amount of computer time, yields better results than Novelty, which is currently among the fastest known solvers. Therefore, TSSAT is feasible and effective.
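
The flavour of a tabu-based SAT local search (greedy flip selection, tabu list, aspiration criterion) can be sketched as follows. This is a generic reconstruction under assumed parameters, not the TSSAT heuristics themselves:

```python
import random


def tabu_sat(clauses, n_vars, tenure=3, max_flips=10000, rng=None):
    """Tabu-style local search for SAT (a generic sketch, not TSSAT itself).
    Clauses are lists of signed integers; literal k means variable |k| with
    polarity k > 0. Flips the non-tabu variable that minimises the number of
    unsatisfied clauses; a tabu flip is allowed if it satisfies everything."""
    rng = rng or random.Random(0)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def n_unsat():
        return sum(1 for c in clauses
                   if not any(assign[abs(l)] == (l > 0) for l in c))

    tabu_until = [0] * (n_vars + 1)
    for step in range(max_flips):
        if n_unsat() == 0:
            return assign
        scores = []
        for v in range(1, n_vars + 1):
            assign[v] = not assign[v]
            score = n_unsat()
            assign[v] = not assign[v]
            if tabu_until[v] <= step or score == 0:  # aspiration criterion
                scores.append((score, v))
        v = min(scores)[1] if scores else rng.randrange(1, n_vars + 1)
        assign[v] = not assign[v]
        tabu_until[v] = step + tenure
    return None
```

The tabu list keeps the search from immediately re-flipping the same variable, while the aspiration criterion ensures a flip that satisfies the whole formula is never blocked.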

  19. XSemantic: An Extension of LCA Based XML Semantic Search

    Science.gov (United States)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

One of the most convenient ways to query XML data is a keyword search, because it requires neither knowledge of the XML structure nor learning a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose a keyword-based XML semantic search called XSemantic. On the one hand, we address semantics in three ways. First, through semantic term expansion, our system is robust to ambiguous keywords by using the domain ontology. Second, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Third, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that search results with higher relevance are presented to users first. On the other hand, for the LCA and proximity-search approaches, we investigated the problem of how much information should be included in the search results. We therefore introduce the notion of the Lowest Common Element Ancestor (LCEA) and define a simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact, meaningful results. The benefits of our proposed semantics are demonstrated by the second experiment.
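
With a Dewey-style labelling of XML nodes, the LCA that LCEA refines reduces to a longest-common-prefix computation. The encoding below is a common convention for such systems, not necessarily XSemantic's own:

```python
def dewey_lca(path_a, path_b):
    """Lowest common ancestor of two XML nodes, each given as a Dewey-style
    root-to-node label path (e.g. [1, 1, 2] for /bib[1]/book[1]/title[2]).
    The LCA is simply the longest common prefix of the two paths."""
    common = []
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common.append(a)
    return common
```

The LCA of the nodes matching two keywords gives the smallest subtree containing both matches, which is the usual starting point for deciding how much of the document to return.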

  20. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whereas their quality is evaluated using the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
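
The Otsu objective that guides the harmony search can be sketched directly. The histogram representation below is an illustrative simplification; in the paper, HSA explores threshold vectors and scores each one with this kind of function:

```python
def between_class_variance(hist, thresholds):
    """Otsu objective for multilevel thresholding: the weighted variance of
    class means around the global mean, for a grey-level histogram given as
    {grey_level: pixel_count} and a list of threshold values."""
    total = sum(hist.values())
    mu_global = sum(g * c for g, c in hist.items()) / total
    cuts = [min(hist) - 1] + sorted(thresholds) + [max(hist)]
    var = 0.0
    for lo, hi in zip(cuts, cuts[1:]):
        cls = {g: c for g, c in hist.items() if lo < g <= hi}
        if not cls:
            continue
        w = sum(cls.values()) / total
        mu = sum(g * c for g, c in cls.items()) / sum(cls.values())
        var += w * (mu - mu_global) ** 2
    return var
```

Maximising this quantity pushes the thresholds between the histogram's modes, which is what makes exhaustive evaluation expensive for many thresholds and a metaheuristic like HSA attractive.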

  1. Gradient-Based Cuckoo Search for Global Optimization

    OpenAIRE

    Fateen, Seif-Eddeen K.; Adrián Bonilla-Petriciolet

    2014-01-01

    One of the major advantages of stochastic global optimization methods is that they do not need the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we proposed a gradient-based modification to the cuckoo search algorithm, which is a nature-inspired swarm-based stochastic global opt...

  2. Gradient-Based Cuckoo Search for Global Optimization

    Directory of Open Access Journals (Sweden)

    Seif-Eddeen K. Fateen

    2014-01-01

    Full Text Available One of the major advantages of stochastic global optimization methods is that they do not need the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we proposed a gradient-based modification to the cuckoo search algorithm, which is a nature-inspired swarm-based stochastic global optimization method. We introduced the gradient-based cuckoo search (GBCS) and evaluated its performance vis-à-vis the original algorithm in solving twenty-four benchmark functions. The use of GBCS improved the reliability and effectiveness of the algorithm in all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
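
The core idea, orienting the Lévy-flight steps of cuckoo search using gradient information, can be sketched as below. This is a minimal reconstruction under assumptions: the sphere function stands in for a benchmark, Mantegna's algorithm generates the Lévy steps, and the exact GBCS update rules may differ from this sketch:

```python
import math
import random

rng = random.Random(0)

def f(x):     # sphere function (assumed stand-in benchmark)
    return sum(v * v for v in x)

def grad(x):  # its analytical gradient
    return [2 * v for v in x]

def levy_step(beta=1.5):
    """Mantegna's algorithm for a Levy-stable step length."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def gbcs(dim=2, n=15, pa=0.25, iters=300, alpha=0.1):
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i, x in enumerate(nests):
            # Levy flight, but stepped against the gradient sign: the GBCS
            # idea of using gradient information to orient the random walk.
            g = grad(x)
            step = [abs(levy_step()) * alpha * (-1 if gi > 0 else 1) for gi in g]
            trial = [xi + si for xi, si in zip(x, step)]
            if f(trial) < f(x):
                nests[i] = trial
        # Abandon a fraction pa of the worst nests (cuckoo egg discovery).
        nests.sort(key=f)
        for i in range(int(n * (1 - pa)), n):
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best

best = gbcs()
print(f(best))
```

In the original (gradient-free) cuckoo search, the step direction is purely random; here the sign of each gradient component biases the walk downhill, which is the modification the abstract describes.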

  3. The Value of Interdisciplinarity: A Study Based on the Design of Internet Search Engines.

    Science.gov (United States)

    Herring, Susan Davis

    1999-01-01

    Examines whether search engine design shows a pattern of interdisciplinarity, focusing on two disciplines: computer science and library/information science. A citation analysis measured levels of interdisciplinary research and publishing in search engine design and development. Results show a higher level of interdisciplinarity among library and…

  4. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA DFIRM preliminary map out now, published in 2009, Published in 2009, 1:24000 (1in=2000ft) scale, Brown County, WI.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  5. Context Disambiguation Based Semantic Web Search for Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    M. Barathi

    2011-01-01

    Full Text Available Problem statement: Search queries are short and ambiguous and are insufficient for specifying precise user needs. To overcome this problem, some search engines suggest terms that are semantically related to the submitted queries, so that users can choose from the suggestions based on their information needs. Approach: In this study, we introduce an effective approach that captures the user’s specific context by using the WordNet-based semantic relatedness measure and the measures of joint keyword occurrences in the web page. Results: The context of the user query is identified and formulated. The user query is enriched to get more relevant web pages that the user needs. Conclusion: Experimental results show that our approach has better precision and recall than the existing methods.

  6. A new classification algorithm based on RGH-tree search

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we put forward a new classification algorithm based on RGH-Tree search and perform a classification analysis and comparison study. This algorithm saves computing resources and increases classification efficiency. Experiments show that the algorithm performs well on three-dimensional, multi-class data, and that it has good generalization ability for small training sets and large test sets.

  7. User-Based Information Search across Multiple Social Media

    OpenAIRE

    Gåre, Marte Lise

    2015-01-01

    Most of today's Internet users are registered to one or more social media applications. As so many are registered to multiple applications, it has become difficult to locate friends, former colleagues, peers and acquaintances. Reasons for this include private profiles, name collisions, multiple usernames, and missing profile attributes or profile pictures. The system designed and implemented in this thesis enables automatic user-based information search across multiple social media without relyi...

  8. Approximation Error Based Suitable Domain Search for Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Vijayshri Chaurasia

    2010-02-01

    Full Text Available Fractal image compression is a very advantageous technique in the field of image compression. The coding phase of this technique is very time consuming because of the computational expense of the suitable-domain search. In this paper we propose an approximation-error-based speed-up technique that uses feature extraction. The proposed scheme reduces the number of range-domain comparisons by a significant amount and improves time performance.

  9. Ground-Based Photometric Searches for Transiting Planets

    OpenAIRE

    Mazeh, Tsevi

    2009-01-01

    This paper reviews the basic technical characteristics of the ground-based photometric searches for transiting planets, and discusses a possible observational selection effect. I suggest that additional photometric observations of the already observed fields might discover new transiting planets with periods around 4-6 days. The set of known transiting planets supports the intriguing correlation between the planetary mass and the orbital period suggested already in 2005.

  10. Where Is It? How Deaf Adolescents Complete Fact-Based Internet Search Tasks

    Science.gov (United States)

    Smith, Chad E.

    2007-01-01

    An exploratory study was designed to describe Internet search behaviors of deaf adolescents who used Internet search engines to complete fact-based search tasks. The study examined search behaviors of deaf high school students such as query formation, query modification, Web site identification, and Web site selection. Consisting of two fact-based…

  11. A Multiple-Neighborhood-Based Parallel Composite Local Search Algorithm for Timetable Problem

    Institute of Scientific and Technical Information of China (English)

    颜鹤; 郁松年

    2004-01-01

    This paper presents a parallel composite local search algorithm based on multiple search neighborhoods to solve a special kind of timetable problem. The new algorithm can also effectively solve those problems that can be solved by general local search algorithms. Experimental results show that the new algorithm can generate better solutions than general local search algorithms.

  12. Target searching based on modified implicit ROI encoding scheme

    Institute of Scientific and Technical Information of China (English)

    Bai Xu; Zhang Zhongzhao

    2008-01-01

    An EBCOT-based method is proposed to reduce the priority of background coefficients in the ROI code block without compromising algorithm complexity. The region of interest is encoded to a higher quality level than the background, and the target searching time in video-guided penetrating missiles can be shortened. Three kinds of coding schemes based on EBCOT are discussed. Experimental results demonstrate that the proposed method shows higher compression efficiency, lower complexity, and good reconstructed ROI image quality at lower channel capacity.

  13. Complete Boolean Satisfiability Solving Algorithms Based on Local Search

    Institute of Scientific and Technical Information of China (English)

    Wen-Sheng Guo; Guo-Wu Yang; William N.N.Hung; Xiaoyu Song

    2013-01-01

    Boolean satisfiability (SAT) is a well-known problem in computer science, artificial intelligence, and operations research. This paper focuses on the satisfiability problem of the Model RB structure, which is similar to graph coloring and other problems. We propose a translation method and three effective complete SAT solving algorithms based on the characterization of the Model RB structure. We translate clauses into a graph with exclusive sets and relative sets. In order to reduce search depth, we determine the search order using vertex weights and cliques in the graph. The results show that our algorithms are much more effective than the best SAT solvers on numerous Model RB benchmarks, especially on large benchmark instances.
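
The paper's complete algorithms are specific to Model RB instances; for contrast, the generic local-search approach that such work builds on can be sketched in the WalkSAT style (this is the standard incomplete algorithm, not the authors' method; clauses are lists of signed 1-indexed literals):

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """WalkSAT: repeatedly pick an unsatisfied clause and flip one of its
    variables, either at random (probability p) or greedily."""
    rng = random.Random(seed)
    assign = [rng.choice([True, False]) for _ in range(n_vars + 1)]  # index 0 unused

    def sat(lit):  # literal is +v or -v, 1-indexed
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign  # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:        # random-walk move
            v = abs(rng.choice(clause))
        else:                       # greedy move: flip the variable that
            def breaks(v):          # breaks the fewest clauses
                assign[v] = not assign[v]
                b = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return b
            v = min((abs(l) for l in clause), key=breaks)
        assign[v] = not assign[v]
    return None  # no model found within the flip budget

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = walksat([[1, 2], [-1, 3], [-2, -3]], 3)
print(model is not None)
```

Complete solvers like the ones proposed in the paper additionally prove unsatisfiability, which an incomplete local search like this cannot do.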

  14. Intelligent Agent based Flight Search and Booking System

    Directory of Open Access Journals (Sweden)

    Floyd Garvey

    2012-07-01

    Full Text Available The word "globalization" is widely used, and there are several definitions that may fit this one word. However, the reality remains that globalization has impacted, and is impacting, each individual on this planet. It is defined as the greater movement of people, goods, capital and ideas due to increased economic integration, which in turn is propelled by increased trade and investment. It is like moving towards living in a borderless world. With the reality of globalization, the travel industry has benefited significantly, and it could equally be said that globalization is benefiting from the flight industry. Regardless of the way one looks at it, more people are traveling each day and exploring places that were once only distant points on a map. Equally, technology has been growing at an increasingly rapid pace and is being utilized by people all over the world. With the combination of globalization, the increase in technology and the frequency of travel, there is a need for an intelligent application capable of meeting the needs of travelers who use mobile phones. It is a solution that fits a user's busy lifestyle, offers ease of use, and has enough intelligence to make the user's experience worthwhile. Having recognized this need, the Agent-based Mobile Airline Search and Booking System has been developed; built to work on Android, it performs airline search and booking using biometrics. The system also possesses agent-learning capability, performing airline searches based on previous search patterns. Development was carried out using the JADE-LEAP agent development kit on Android.

  15. The effects of mulching on soil erosion by water. A review based on published data

    Science.gov (United States)

    Prosdocimi, Massimo; Jordán, Antonio; Tarolli, Paolo; Cerdà, Artemi

    2016-04-01

    lands, post-fire affected areas and anthropic sites. Data published in literature have been collected. The results proved the beneficial effects of mulching on soil erosion by water in all the contexts considered, with reduction rates in average sediment concentration, soil loss and runoff volume that, in some cases, exceeded 90%. Furthermore, in most cases, mulching confirmed to be a relatively inexpensive soil conservation practice that allowed to reduce soil erodibility and surface immediately after its application. References Cerdà, A., 1994. The response of abandoned terraces to simulated rain, in: Rickson, R.J., (Ed.), Conserving Soil Resources: European Perspective, CAB International, Wallingford, pp. 44-55. Cerdà, A., Flanagan, D.C., Le Bissonnais, Y., Boardman, J., 2009. Soil erosion and agriculture. Soil & Tillage Research 106, 107-108. Cerdan, O., Govers, G., Le Bissonnais, Y., Van Oost, K., Poesen, J., Saby, N., Gobin, A., Vacca, A., Quinton, J., Auerwald, K., Klik, A., Kwaad, F.J.P.M., Raclot, D., Ionita, I., Rejman, J., Rousseva, S., Muxart, T., Roxo, M.J., Dostal, T., 2010. Rates and spatial variations of soil erosion in Europe: A study based on erosion plot data. Geomorphology 122, 167-177. García-Orenes, F., Roldán A., Mataix-Solera, J, Cerdà, A., Campoy M, Arcenegui, V., Caravaca F. 2009. Soil structural stability and erosion rates influenced by agricultural management practices in a semi-arid Mediterranean agro-ecosystem. Soil Use and Management 28: 571-579. Hayes, S.A., McLaughlin, R.A., Osmond, D.L., 2005. Polyacrylamide use for erosion and turbidity control on construction sites. Journal of soil and water conservation 60(4):193-199. Jordán, A., Zavala, L.M., Muñoz-Rojas, M., 2011. Mulching, effects on soil physical properties. In: Gliński, J., Horabik, J., Lipiec, J. (Eds.), Encyclopedia of Agrophysics. Springer, Dordrecht, pp. 492-496. Montgomery, D.R., 2007. Soil erosion and agricultural sustainability. PNAS 104, 13268-13272. Prats, S

  17. Web Search Result Clustering based on Cuckoo Search and Consensus Clustering

    OpenAIRE

    Alam, Mansaf; Sadaf, Kishwar

    2015-01-01

    Clustering of web search result documents has emerged as a promising tool for improving the retrieval performance of an Information Retrieval (IR) system. Search results are often plagued by problems like synonymy, polysemy, and high volume. Besides resolving these problems, clustering also makes it easier for the user to locate his or her desired information. In this paper, a method called WSRDC-CSCC is introduced to cluster web search results using the cuckoo search meta-heuristic method and Consensus...

  18. A DE-Based Scatter Search for Global Optimization Problems

    Directory of Open Access Journals (Sweden)

    Kun Li

    2015-01-01

    Full Text Available This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems by incorporating the evolution mechanism of differential evolution (DE) into the reference set update procedure of SS to act as the new solution generation method. This hybrid algorithm is called a DE-based SS (SSDE) algorithm. Since different kinds of DE mutation operators have been proposed in the literature, showing different search abilities for different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator most appropriate to the current problem, an adaptive mechanism for the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population and subsequently construct a new reference set whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive with or superior to some state-of-the-art algorithms in the literature.
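
The DE mechanism that SSDE borrows can be sketched with the classic DE/rand/1/bin operator, one of the traditional mutation operators the abstract mentions. The sphere objective and parameter values (F=0.5, CR=0.9) are conventional defaults assumed for illustration, not taken from the paper:

```python
import random

rng = random.Random(42)

def sphere(x):
    return sum(v * v for v in x)

def de_rand_1_bin(pop, i, F=0.5, CR=0.9):
    """Classic DE/rand/1/bin: mutate with three distinct random vectors,
    then apply binomial crossover with the target vector."""
    a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
    target = pop[i]
    dim = len(target)
    jrand = rng.randrange(dim)  # guarantee at least one mutated gene
    return [a[d] + F * (b[d] - c[d])
            if (rng.random() < CR or d == jrand) else target[d]
            for d in range(dim)]

# Minimal steady-state DE loop using the operator.
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(200):
    for i in range(len(pop)):
        trial = de_rand_1_bin(pop, i)
        if sphere(trial) <= sphere(pop[i]):  # greedy one-to-one selection
            pop[i] = trial
best = min(pop, key=sphere)
print(round(sphere(best), 4))
```

In SSDE, a step like this would replace the standard linear-combination solution generation inside the scatter search reference set update, with the adaptive mechanism choosing among several such operators.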

  19. Self-Adapting Routing Overlay Network for Frequently Changing Application Traffic in Content-Based Publish/Subscribe System

    Directory of Open Access Journals (Sweden)

    Meng Chi

    2014-01-01

    Full Text Available In the large-scale distributed simulation area, the topology of the overlay network cannot always rapidly adapt to frequently changing application traffic to reduce the overall traffic cost. In this paper, we propose a self-adapting routing strategy for frequently changing application traffic in content-based publish/subscribe systems. The strategy first trains on the traffic information and then uses this training information to predict future application traffic. Finally, the strategy reconfigures the topology of the overlay network based on this prediction to reduce the overall traffic cost. A predicting path is also introduced in this paper to reduce the number of reconfigurations. Compared to other strategies, the experimental results show that the proposed strategy reduces the overall traffic cost of the publish/subscribe system with fewer reconfigurations.
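
The train-then-predict-then-reconfigure cycle can be sketched per overlay link. The paper does not specify its training model, so an exponential moving average stands in for it here, and the capacity-threshold trigger is a hypothetical policy for illustration:

```python
def ema_forecast(history, alpha=0.5):
    """One-step-ahead traffic forecast via exponential moving average
    (a stand-in for the paper's unspecified training model)."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def should_reconfigure(history, link_capacity, threshold=0.8):
    """Trigger overlay reconfiguration when predicted traffic on a link
    approaches its capacity (hypothetical policy, for illustration)."""
    return ema_forecast(history) > threshold * link_capacity

# Rising traffic on a small link triggers a reconfiguration; flat,
# low traffic on a large link does not.
print(should_reconfigure([10, 12, 30, 55, 80], 60))
print(should_reconfigure([5, 5, 5], 100))
```

A real implementation would run such a forecaster per broker link and feed the predictions into the topology reconfiguration step the abstract describes.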

  20. Visual tracking method based on cuckoo search algorithm

    Science.gov (United States)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS is presented to solve the visual tracking problem. The relationship between optimization and visual tracking is comparatively studied, and the parameters' sensitivity and adjustment of CS in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker with six state-of-the-art trackers, namely, particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.

  1. Similarity preserving snippet-based visualization of web search results.

    Science.gov (United States)

    Gomez-Nieto, Erick; San Roman, Frizzi; Pagliosa, Paulo; Casaca, Wallace; Helou, Elias S; de Oliveira, Maria Cristina F; Nonato, Luis Gustavo

    2014-03-01

    Internet users are very familiar with the results of a search query displayed as a ranked list of snippets. Each textual snippet shows a content summary of the referred document (or webpage) and a link to it. This display has many advantages, for example, it affords easy navigation and is straightforward to interpret. Nonetheless, any user of search engines could possibly report some experience of disappointment with this metaphor. Indeed, it has limitations in particular situations, as it fails to provide an overview of the document collection retrieved. Moreover, depending on the nature of the query--for example, it may be too general, or ambiguous, or ill expressed--the desired information may be poorly ranked, or results may contemplate varied topics. Several search tasks would be easier if users were shown an overview of the returned documents, organized so as to reflect how related they are, content wise. We propose a visualization technique to display the results of web queries aimed at overcoming such limitations. It combines the neighborhood preservation capability of multidimensional projections with the familiar snippet-based representation by employing a multidimensional projection to derive two-dimensional layouts of the query search results that preserve text similarity relations, or neighborhoods. Similarity is computed by applying the cosine similarity over a "bag-of-words" vector representation of the collection built from the snippets. If the snippets are displayed directly according to the derived layout, they will overlap considerably, producing a poor visualization. We overcome this problem by defining an energy functional that considers both the overlapping among snippets and the preservation of the neighborhood structure as given in the projected layout. Minimizing this energy functional provides a neighborhood preserving two-dimensional arrangement of the textual snippets with minimum overlap. The resulting visualization conveys both a global
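
The similarity computation described, cosine similarity over bag-of-words vectors built from the snippets, can be sketched as below (the snippet texts are made up for illustration; a real system would also apply stop-word removal and weighting such as tf-idf):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a sparse term -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

s1 = bow("fast web search engine for web documents")
s2 = bow("web search results for documents")
s3 = bow("soil erosion and mulching effects")
print(cosine(s1, s2) > cosine(s1, s3))  # related snippets score higher
```

These pairwise similarities are what the multidimensional projection would then map to 2-D distances, before the overlap-removal energy minimization is applied.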

  2. Self-Adapting Routing Overlay Network for Frequently Changing Application Traffic in Content-Based Publish/Subscribe System

    OpenAIRE

    Meng Chi; Shufen Liu; Changhong Hu

    2014-01-01

    In the large-scale distributed simulation area, the topology of the overlay network cannot always rapidly adapt to frequently changing application traffic to reduce the overall traffic cost. In this paper, we propose a self-adapting routing strategy for frequently changing application traffic in content-based publish/subscribe systems. The strategy first trains on the traffic information and then uses this training information to predict future application traffic. Finally, the strat...

  3. Performance Oriented Query Processing In GEO Based Location Search Engines

    CERN Document Server

    Umamaheswari, M

    2010-01-01

    Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable geographic search engines. Query processing is a major bottleneck in standard web search engines, and the main reason for the thousands of machines used by the major engines. Geographic search engine query processing is different in that it requires a combination of text and spatial data processing techniques. We propose several algorithms for efficient query processing in geographic search engines, integrate them into an existing web search query processor, and evaluate them on large sets of real data and query traces.

  4. Library Publishing is Special: Selection and Eligibility in Library Publishing

    OpenAIRE

    Royster, Paul

    2014-01-01

    Traditional publishing is based on ownership, commerce, paid exchanges, and scholarship as a commodity, while library activities are based on a service model of sharing resources and free exchange. I believe library publishing should be based on those values and should not duplicate or emulate traditional publishing. University presses have mixed views of library publishing, and libraries should not adopt those attitudes. Library publishers are not gatekeepers; their mission is dissemination....

  5. Web Image Retrieval Search Engine based on Semantically Shared Annotation

    Directory of Open Access Journals (Sweden)

    Alaa Riad

    2012-03-01

    Full Text Available This paper presents a new majority voting technique that combines the two basic modalities of Web images, textual and visual features, in a re-annotation and search-based framework. The proposed framework considers each web page as a voter that votes on the relatedness of a keyword to the web image. The approach is not a pure combination of low-level image features and textual features; it also takes into consideration the semantic meaning of each keyword, which is expected to enhance retrieval accuracy. The proposed approach not only enhances the retrieval accuracy of web images but is also able to annotate unlabeled images.
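
The voting step, each page that references an image voting for its candidate keywords, can be sketched as below. The vote threshold and the data are illustrative assumptions; the paper's framework additionally weighs visual features and keyword semantics, which this sketch omits:

```python
from collections import Counter

def vote_annotations(page_keywords, min_votes=2):
    """Each web page referencing an image 'votes' for its keywords;
    keep the keywords that reach a minimum vote count."""
    votes = Counter(k for page in page_keywords for k in set(page))
    return [k for k, v in votes.most_common() if v >= min_votes]

# Three hypothetical pages embedding the same image.
pages = [["sunset", "beach"], ["beach", "sea"], ["beach", "sunset", "car"]]
print(vote_annotations(pages))
```

Keywords backed by a majority of the referring pages survive as the image's re-annotation; one-off terms like "sea" and "car" are filtered out.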

  6. Constraint-based local search for container stowage slot planning

    DEFF Research Database (Denmark)

    Pacino, Dario; Jensen, Rune Møller; Bebbington, Tom

    2012-01-01

    Due to the economic importance of stowage planning, there has recently been increasing interest in developing optimization algorithms for this problem. We have developed a 2-phase approach that in most cases can generate near-optimal stowage plans within a few hundred seconds for large deep-sea vessels. This paper describes the constraint-based local search algorithm used in the second phase of this approach, where individual containers are assigned to slots in each bay section. The algorithm can solve this problem in an average of 0.18 seconds per bay, corresponding to a 20 seconds runtime...

  7. Parallel Harmony Search Based Distributed Energy Resource Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ceylan, Oguzhan [ORNL; Liu, Guodong [ORNL; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on voltage profile during a day as photovoltaics (PVs) output or electrical vehicles (EVs) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.

  8. Publishing in the Next Few Years: A Commercial Publisher's Perspective

    Science.gov (United States)

    Blom, Harry J. J.

    Over the past 15 years, internet technology has changed the ways of publishing tremendously. It is truly revolutionary that both fresh and historic science publications are so much easier to search and find. This revolution has not been completed, and all parties involved in science publishing are continuously adjusting their activities to the new rules and opportunities. From a commercial publisher's perspective, I will extrapolate from what happens today to predict what will happen in the next few years to journal subscriptions, book publishing, marketing, production and other steps in the publishing process.

  9. The Critical Role of Journal Selection in Scholarly Publishing: A Search for Journal Options in Language-related Research Areas and Disciplines

    OpenAIRE

    2012-01-01

    Problem statement: With the globalization in academia, pressures on academics to publish internationally have been increasing all over the world. However, participating in global scientific communication through publishing in well-regarded international journals is a very challenging and daunting task particularly for nonnative speaker (NNS) scholars. Recent research has pointed out both linguistic and nonlinguistic factors behind the challenges facing NNS scholars in their attempts to publis...

  10. Performance Oriented Query Processing In GEO Based Location Search Engines

    OpenAIRE

    Umamaheswari, M.; S. Sivasubramanian

    2010-01-01

    Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable g...

  11. Affordances of students' using the World Wide Web as a publishing medium in project-based learning environments

    Science.gov (United States)

    Bos, Nathan Daniel

    This dissertation investigates the emerging affordance of the World Wide Web as a place for high school students to become authors and publishers of information. Two empirical studies lay groundwork for student publishing by examining learning issues related to audience adaptation in writing, motivation and engagement with hypermedia, design, problem-solving, and critical evaluation. Two models of student publishing on the World Wide Web were investigated over the course of two 11th-grade project-based science curricula. In the first curricular model, students worked in pairs to design informative hypermedia projects about infectious diseases that were published on the Web. Four case studies were written, drawing on both product- and process-related data sources. Four theoretically important findings are illustrated through these cases: (1) multimedia, especially graphics, seemed to catalyze some students' design processes by affecting the sequence of their design process and by providing a connection between the science content and their personal interest areas; (2) hypermedia design can demand high levels of analysis and synthesis of science content; (3) students can learn to think about science content representation through engagement with challenging design tasks; and (4) students' consideration of an outside audience can be facilitated by teacher-given design principles. The second Web-publishing model examines how students critically evaluate scientific resources on the Web, and how students can contribute to the Web's organization and usability by publishing critical reviews. Students critically evaluated Web resources using a four-part scheme: summarization of content, evaluation of credibility, evaluation of organizational structure, and evaluation of appearance. Content analyses comparing students' reviews and reviewed Web documents showed that students were proficient at summarizing the content of Web documents, identifying their publishing

  12. Eosinophilic pustular folliculitis: A published work-based comprehensive analysis of therapeutic responsiveness.

    Science.gov (United States)

    Nomura, Takashi; Katoh, Mayumi; Yamamoto, Yosuke; Miyachi, Yoshiki; Kabashima, Kenji

    2016-08-01

    Eosinophilic pustular folliculitis (EPF) is a non-infectious inflammatory dermatosis of unknown etiology that principally affects the hair follicles. There are three variants of EPF: (i) classic EPF; (ii) immunosuppression-associated EPF, which is subdivided into HIV-associated (IS/HIV) and non-HIV-associated (IS/non-HIV); and (iii) infancy-associated EPF. Oral indomethacin is efficacious, especially for classic EPF. No comprehensive information on the efficacies of other medical management regimens is currently available. In this study, we surveyed regimens for EPF that were described in articles published between 1965 and 2013. In total, there were 1171 regimens; 874, 137, 45 and 115 of which were applied to classic, IS/HIV, IS/non-HIV and infancy-associated EPF, respectively. Classic EPF was preferentially treated with oral indomethacin with efficacy of 84% whereas topical steroids were preferred for IS/HIV, IS/non-HIV and infancy-associated EPF with efficacy of 47%, 73% and 82%, respectively. Other regimens such as oral Sairei-to (a Chinese-Japanese herbal medicine), diaminodiphenyl sulfone, cyclosporin and topical tacrolimus were effective for indomethacin-resistant cases. Although the preclusion of direct comparison among cases was one limitation, this study provides a dataset that is applicable to the construction of therapeutic algorithms for EPF. PMID:26875627

  13. Explicit Context Matching in Content-Based Publish/Subscribe Systems

    Directory of Open Access Journals (Sweden)

    Miguel Jiménez

    2013-03-01

    Full Text Available Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on the area of context-aware matching in P/S systems and have explored its implications in domains with highly dynamic context like wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context agnostic or do not differentiate context from the other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on the minimization of network overhead in domains with recurrent context changes related, for example, to mobile ad hoc networks (MANETs). Our approach represents a solution that helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced and leveraging contextual scoping in the filtering process notably reduces (un)subscription cost per node, while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes.

  14. Explicit context matching in content-based publish/subscribe systems.

    Science.gov (United States)

    Vavassori, Sergio; Soriano, Javier; Lizcano, David; Jiménez, Miguel

    2013-03-01

    Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on the area of context-aware matching in P/S systems and have explored its implications in domains with highly dynamic context like wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context agnostic or do not differentiate context from the other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on the minimization of network overhead in domains with recurrent context changes related, for example, to mobile ad hoc networks (MANETs). Our approach represents a solution that helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced and leveraging contextual scoping in the filtering process notably reduces (un)subscription cost per node, while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes.

  15. On optimizing distance-based similarity search for biological databases.

    Science.gov (United States)

    Mao, Rui; Xu, Weijia; Ramakrishnan, Smriti; Nuckolls, Glen; Miranker, Daniel P

    2005-01-01

    Similarity search leveraging distance-based index structures is increasingly being used for both multimedia and biological database applications. We consider distance-based indexing for three important biological data types: protein k-mers with the metric PAM model, DNA k-mers with Hamming distance, and peptide fragmentation spectra with a pseudo-metric derived from cosine distance. To date, the primary driver of this research has been multimedia applications, where similarity functions are often Euclidean norms on high-dimensional feature vectors. We develop results showing that the character of these biological workloads is different from multimedia workloads. In particular, they are not intrinsically very high dimensional and deserve different optimization heuristics. Based on MVP-trees, we develop a pivot selection heuristic that seeks centers and show it outperforms the most widely used corner-seeking heuristic. Similarly, we develop a data partitioning approach sensitive to the actual data distribution in lieu of median splits. PMID:16447992
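    The triangle-inequality pruning that underlies distance-based indexes like these can be illustrated in a few lines. This is a minimal sketch, not the paper's MVP-tree: a single hypothetical pivot filters DNA k-mer candidates under Hamming distance, skipping most full distance computations.

```python
def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def pivot_filter(query, candidates, pivot, radius):
    """Triangle-inequality pruning: if |d(q, pivot) - d(c, pivot)| > radius,
    then d(q, c) > radius is guaranteed, so c can be skipped without
    computing d(q, c) at all."""
    dq = hamming(query, pivot)
    results = []
    for c in candidates:
        if abs(dq - hamming(c, pivot)) > radius:
            continue  # pruned by the pivot distance alone
        if hamming(query, c) <= radius:
            results.append(c)
    return results
```

In a real index the per-candidate pivot distances are precomputed and stored, so the pruning test costs one subtraction per candidate; pivot choice (centers vs. corners) determines how often the test fires.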

  16. Proceedings of the ECIR 2012 Workshop on Task-Based and Aggregated Search (TBAS2012)

    DEFF Research Database (Denmark)

    2012-01-01

    Task-based search aims to understand the user's current task and desired outcomes, and how this may provide useful context for the Information Retrieval (IR) process. An example of task-based search is situations where additional user information on e.g. the purpose of the search or what the user...

  17. An analysis of search-based user interaction on the Semantic Web

    NARCIS (Netherlands)

    Hildebrand, M.; Ossenbruggen, J.R. van; Hardman, L.

    2007-01-01

    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of

  18. Scanned Hardcopy Maps, legato data base; public works, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Scanned Hardcopy Maps dataset, was produced all or in part from Hardcopy Maps information as of 2006. It is described as 'legato data base; public works'. Data...

  19. Where is smoking research published?

    OpenAIRE

    A. Liguori (ISAAS, Trieste); Hughes, J. R.

    1996-01-01

    OBJECTIVE: To identify journals that have a focus on human nicotine/smoking research and to investigate the coverage of smoking in "high-impact" journals. DESIGN: The MEDLINE computer database was searched for English-language articles on human studies published in 1988-1992 using "nicotine", "smoking", "smoking cessation", "tobacco", or "tobacco use disorder" as focus descriptors. This search was supplemented with a similar search of the PSYCLIT computer database. Fifty-eight journals ...

  20. Efficient mining of association rules based on gravitational search algorithm

    Directory of Open Access Journals (Sweden)

    Fariba Khademolghorani

    2011-07-01

    Full Text Available Association rule mining is one of the most widely used tools to discover relationships among attributes in a database. Many algorithms have been introduced for discovering these rules. These algorithms have to mine association rules in two separate stages. Most of them mine occurrence rules which are easily predictable by the users. Therefore, this paper discusses the application of the gravitational search algorithm for discovering interesting association rules. This evolutionary algorithm is based on Newtonian gravity and the laws of motion. Furthermore, contrary to previous methods, the proposed method in this study is able to mine the best association rules without generating frequent itemsets and is independent of the minimum support and confidence values. The results of applying this method, in comparison with the method of mining association rules based upon particle swarm optimization, show that our method is successful.
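    For readers unfamiliar with the rule-quality measures such algorithms optimize, support and confidence can be computed directly. A minimal sketch (the gravitational search machinery itself is not shown, and the transaction data below are invented):

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent) over the transaction set:
    support of the combined itemset divided by support of the antecedent."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

# Invented example: the rule {bread} -> {milk}
txns = [{"bread", "milk"}, {"bread", "butter"}, {"milk"}, {"bread", "milk", "butter"}]
print(support({"bread"}, txns))                    # 3 of 4 transactions
print(confidence({"bread"}, {"milk"}, txns))       # 2 of the 3 bread transactions
```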

  1. A Detection Scheme for Cavity-based Dark Matter Searches

    CERN Document Server

    Bukhari, M H S

    2016-01-01

    We present here a proposal of a scheme and some useful ideas for resonant cavity-based detection of cold dark matter axions, in the hope of improving the existing endeavors. The scheme is based upon our idea of a detector which incorporates an integrated tunnel diode and a GaAs HEMT or HFET (High Electron Mobility Transistor or Heterogeneous FET) for resonance detection and amplification from a resonant cavity (in a strong transverse magnetic field from a cylindrical array of Halbach magnets). The idea of a TD-oscillator-amplifier combination could possibly serve as a more sensitive and viable resonance detection regime while maintaining excellent performance with low noise temperature, whereas the Halbach magnet array may offer a compact and permanent solution replacing the conventional electromagnet scheme. We believe that all these factors could possibly increase the sensitivity and accuracy of axion detection searches and reduce complications (and associated costs) in the experiments, in addition to help re...

  2. Memoryless cooperative graph search based on the simulated annealing algorithm

    Institute of Scientific and Technical Information of China (English)

    Hou Jian; Yan Gang-Feng; Fan Zhen

    2011-01-01

    We have studied the problem of reaching a globally optimal segment for a graph-like environment with a single or a group of autonomous mobile agents. Firstly, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. It is shown that under both proposed control strategies, the agent will eventually converge to a globally optimal segment with probability 1. Secondly, we use multi-agent searching to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms we have given for a single agent. By exploiting graph partition, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
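    The simulated-annealing acceptance rule such algorithms build on can be sketched generically. This is an illustrative minimum of the standard Metropolis criterion, not the authors' graph-specific algorithm; the `cost` and `neighbor` callables are hypothetical placeholders:

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis rule: always take an improvement; accept a worse move
    with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

def anneal(cost, neighbor, state, t0=1.0, cooling=0.95, steps=200):
    """Generic simulated-annealing loop minimizing cost(state)."""
    t = t0
    best = state
    for _ in range(steps):
        cand = neighbor(state)
        if sa_accept(cost(cand) - cost(state), t):
            state = cand
            if cost(state) < cost(best):
                best = state
        t *= cooling  # geometric cooling schedule
    return best
```

The decreasing temperature makes uphill moves progressively rarer, which is what lets the search escape local optima early while still converging later.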

  3. Development of a semantic-based search system for immunization knowledge.

    Science.gov (United States)

    Lee, Li-Hui; Chu, Hsing-Yi; Liou, Der-Ming

    2013-01-01

    This study developed and implemented a children's immunization management system with an English and Traditional Chinese immunization ontology for semantic-based search of immunization knowledge. Parents and guardians are able to search for vaccination-related information effectively. The Jena Java Application Programming Interface (API) was used to search for synonyms and associated classes in this domain and then use them for searching via the Google Search API. The search results not only contain suggested web links but also include a basic introduction to the vaccine and related preventable diseases. Compared with the Google keyword-based search, over half of the 31 trial users preferred the semantic-based search of this system. Although the search runtime of this system is not as fast as well-known search engines such as Google or Yahoo, it can accurately focus on searching for child vaccination information to provide search results that better conform to the needs of users. Furthermore, the system is also one of the few health knowledge platforms that support Traditional Chinese semantic-based search.
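    The synonym-expansion step behind this kind of semantic search can be sketched with a toy ontology. Everything below (the ontology entries, term lists, and ranking) is invented for illustration and is not the system's actual API:

```python
# Hypothetical miniature ontology: each term maps to synonyms/related concepts.
ONTOLOGY = {
    "mmr": {"measles", "mumps", "rubella"},
    "vaccination": {"immunization", "vaccine"},
}

def expand_query(terms):
    """Semantic-style expansion: add every synonym or associated concept
    the ontology knows before handing terms to a keyword engine."""
    expanded = set()
    for t in terms:
        expanded.add(t)
        expanded |= ONTOLOGY.get(t.lower(), set())
    return expanded

def search(documents, terms):
    """Rank documents by how many expanded query terms they mention;
    drop documents that match nothing."""
    q = expand_query(terms)
    scored = [(sum(t in doc.lower() for t in q), doc) for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]
```

A keyword query for "MMR" alone would miss a page that only says "measles"; the expanded query finds it, which is the behavior the record's trial users preferred.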

  4. There are Discipline-Based Differences in Authors’ Perceptions Towards Open Access Publishing. A Review of: Coonin, B., & Younce, L. M. (2010). Publishing in open access education journals: The authors’ perspectives. Behavioral & Social Sciences Librarian, 29, 118-132. doi:10.1080/01639261003742181

    Directory of Open Access Journals (Sweden)

    Lisa Shen

    2011-09-01

    searches for publishing opportunities (40.4%), and professional societies (29.3%) for raising their awareness of OA. Moreover, based on voluntary general comments left at the end of the survey, researchers observed that some authors viewed the terms open access and electronic “synonymously” and thought of OA publishing only as a “format change” (p. 125). Conclusion – The study revealed some discipline-based differences in authors’ attitudes toward scholarly publishing and the concept of OA. The majority of authors publishing in education viewed author fees, a common OA publishing practice in the life and medical sciences, as undesirable. On the other hand, citation impact, a major determinant for life and medical sciences publishing, was only a minor factor for authors in education. These findings provide useful insights for future research on discipline-based publication differences. The findings also indicated peer review is the primary determinant for authors publishing in education. Moreover, while the majority of authors surveyed considered both print and e-journal formats to be equally acceptable, almost one third viewed OA journals as less prestigious than subscription-based publications. Some authors also seemed to confuse the concepts of OA and electronic publishing. These findings could generate fresh discussion points between academic librarians and faculty members regarding OA publishing.

  5. Improving Image Search based on User Created Communities

    CERN Document Server

    Joshi, Amruta; Radev, Dragomir; Hassan, Ahmed

    2011-01-01

    Tag-based retrieval of multimedia content is a difficult problem, not only because of the shorter length of tags associated with images and videos, but also due to mismatch in the terminologies used by searcher and content creator. To alleviate this problem, we propose a simple concept-driven probabilistic model for improving text-based rich-media search. While our approach is similar to existing topic-based retrieval and cluster-based language modeling work, there are two important differences: (1) our proposed model considers not only the query-generation likelihood from cluster, but explicitly accounts for the overall "popularity" of the cluster or underlying concept, and (2) we explore the possibility of inferring the likely concept relevant to a rich-media content through the user-created communities that the content belongs to. We implement two methods of concept extraction: a traditional cluster based approach, and the proposed community based approach. We evaluate these two techniques for how effectiv...

  6. Semantic Web Search based on Ontology Modeling using Protege Reasoner

    OpenAIRE

    Shekhar, Monica; K, Saravanaguru RA.

    2013-01-01

    The Semantic Web works on the existing Web and presents the meaning of information as well-defined vocabularies understood by people. Semantic search, at the same time, works on improving the accuracy of a search by understanding the intent of the search and providing contextually relevant results. This paper describes a semantic approach toward web search through a PHP application. The goal was to parse through a user's browsing history and return semantically relevant web pages for th...

  7. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  8. An Efficient Annotation of Search Results Based on Feature

    Directory of Open Access Journals (Sweden)

    A. Jebha

    2015-10-01

    Full Text Available With the increasing number of web databases, structured databases make up a major part of the deep web. In several search engines, encoded data in the result pages returned from the web often comes from structured databases, which are referred to as Web databases (WDB). A result page returned from a WDB contains multiple search result records (SRRs). Data units obtained from these databases are encoded into the dynamic result pages for manual processing. In order to make these units machine-processable, relevant information is extracted and data labels are assigned meaningfully. In this paper, feature ranking is proposed to extract the relevant information of extracted features from the WDB. Feature ranking is a practical way to enhance understanding of the data and identify relevant features. This research explores the performance of the feature ranking process by using linear support vector machines with various features of the WDB database for annotation of relevant results. Experimental results of the proposed system are better when compared with earlier methods.

  9. Video Image Block-matching Motion Estimation Algorithm Based on Two-step Search

    Institute of Scientific and Technical Information of China (English)

    Wei-qi JIN; Yan CHEN; Ling-xue WANG; Bin LIU; Chong-liang LIU; Ya-zhong SHEN; Gui-qing ZHANG

    2010-01-01

    Aiming at the shortcoming that certain existing blocking-matching algorithms, such as full search, three-step search, and diamond search algorithms, usually can not keep a good balance between high accuracy and low computational complexity, a block-matching motion estimation algorithm based on two-step search is proposed in this paper. According to the fact that the gray values of adjacent pixels will not vary fast, the algorithm employs an interlaced search pattern in the search window to estimate the motion vector of the object-block. Simulation and actual experiments demonstrate that the proposed algorithm greatly outperforms the well-known three-step search and diamond search algorithms, no matter the motion vector is large or small. Compared with the full search algorithm, the proposed one achieves similar performance but requires much less computation, therefore, the algorithm is well qualified for real-time video image processing.
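    As a baseline for what these fast algorithms improve upon, full-search block matching can be sketched directly. This illustrative version (function and parameter names are our own, images are plain nested lists) minimizes the sum of absolute differences over every displacement in a search window:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(ref, cur, bx, by, bsize, radius):
    """Exhaustive block matching: try every displacement in
    [-radius, radius]^2 and return the motion vector with minimal SAD."""
    block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > len(ref) or x + bsize > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(block, cand)
            if best is None or cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv
```

Full search evaluates (2·radius+1)² candidates per block; three-step, diamond, and the proposed two-step patterns all aim to approach the same minimum while visiting far fewer positions.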

  10. A Content-Based Search Algorithm for Motion Estimation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The basic search algorithm to implement Motion Estimation (ME) in the H.263 encoder is a full search. It is simple but time-consuming. Traditional search algorithms are fast, but may cause a fall in image quality or an increase in bit-rate in low bit-rate applications. A fast search algorithm for ME with consideration of image content is proposed in this paper. Experiments show that the proposed algorithm can offer up to 70 percent savings in execution time with almost no sacrifice in PSNR and bit-rate, compared with the full search.

  11. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…
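    Once each result URL has been assigned coordinates, the re-ranking step reduces to sorting by great-circle distance. A sketch using the standard haversine formula (the result tuples and coordinates below are invented; the URL-to-location step the record describes is not reproduced):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rerank_by_distance(results, user_lat, user_lon):
    """Re-order search results so geographically closer sites come first.
    Each result is (url, lat, lon)."""
    return sorted(results, key=lambda r: haversine_km(user_lat, user_lon, r[1], r[2]))
```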

  12. Novel cued search strategy based on information gain for phased array radar

    Institute of Scientific and Technical Information of China (English)

    Lu Jianbin; Hu Weidong; Xiao Hui; Yu Wenxian

    2008-01-01

    A search strategy based on the maximal information gain principle is presented for the cued search of phased array radars. First, the method for the determination of the cued search region, the arrangement of beam positions, and the calculation of the prior probability distribution of each beam position is discussed. Then, two search algorithms based on information gain are proposed, using Shannon entropy and Kullback-Leibler entropy, respectively. With the proposed strategy, the information gain of each beam position is predicted before the radar detection, and the observation is made in the beam position with the maximal information gain. Compared with the conventional methods of sequential search and confirm search, simulation results show that the proposed search strategy can distinctly improve the search performance and save radar time resources with the same given detection probability.
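    The entropy-driven beam selection idea can be illustrated with a toy version: for an idealized noise-free detector, the information gained by observing a beam position equals the Shannon entropy of its target-presence probability, so a greedy strategy observes the most uncertain position first. A hedged sketch, not the authors' radar model:

```python
import math

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli(p) target-presence variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def pick_beam(priors):
    """Greedy cued search: observe the beam position whose presence
    probability is most uncertain. For an ideal detector the expected
    information gain of observing cell i is exactly binary_entropy(priors[i])."""
    return max(range(len(priors)), key=lambda i: binary_entropy(priors[i]))
```

A realistic detector with false alarms and misses would instead compare prior entropy against the expected posterior entropy over both observation outcomes, which is where the Kullback-Leibler formulation in the record comes in.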

  13. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    Full Text Available One of the techniques used for Multiple Criteria Decision Making (MCDM) is permutation. In the classical form of permutation, it is assumed that weights and decision matrix components are crisp. However, when group decision making is under consideration and decision makers cannot agree on crisp values for weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of permutation is its large computational time, so a Tabu Search (TS)-based algorithm has been proposed to reduce the computational time. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. The analysis of the results shows the proper performance of the proposed method.
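    A generic tabu search skeleton over permutations, with pairwise swaps as the neighbourhood, conveys the mechanism. This is an illustrative sketch, not the authors' fuzzy-MCDM variant; the cost function and tabu tenure are arbitrary choices:

```python
def swap_neighbors(perm):
    """All permutations reachable by swapping one pair of positions."""
    out = []
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            p = list(perm)
            p[i], p[j] = p[j], p[i]
            out.append(tuple(p))
    return out

def tabu_search(cost, start, neighbors, iters=50, tenure=5):
    """Move to the best non-tabu neighbour each step, keeping recently
    visited solutions on a short-term tabu list so the search does not
    cycle straight back. Note: even worsening moves are taken, which is
    what lets tabu search leave local optima."""
    current, best = start, start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        if cost(current) < cost(best):
            best = current
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)  # expire the oldest tabu entry
    return best
```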

  14. Accelerated Search for Gaussian Generator Based on Triple Prime Integers

    Directory of Open Access Journals (Sweden)

    Boris S. Verkhovsky

    2009-01-01

    Full Text Available Problem statement: Modern cryptographic algorithms are based on the complexity of two problems: integer factorization of real integers and the Discrete Logarithm Problem (DLP). Approach: The latter problem is even more complicated in the domain of complex integers, where Public Key Cryptosystems (PKC) have an advantage over analogous encryption-decryption protocols in arithmetic of real integers modulo p: the former PKC have quadratic cycles of order O(p^2) while the latter PKC have linear cycles of order O(p). Results: An accelerated non-deterministic search algorithm for a primitive root (generator) in a domain of complex integers modulo a triple prime p was provided in this study. It showed the properties of triple primes, the frequencies of their occurrence on a specified interval, and analyzed the efficiency of the proposed algorithm. Conclusion: Numerous computer experiments and their analysis indicated that three trials were sufficient on average to find a Gaussian generator.
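    The standard test for a primitive root modulo a prime, which any such search loop relies on, can be sketched for ordinary (real) integers; the Gaussian/complex-integer case in the record is not reproduced here:

```python
def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def is_generator(g, p):
    """g generates the multiplicative group mod prime p iff
    g^((p-1)/q) != 1 (mod p) for every prime q dividing p - 1."""
    return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

def find_generator(p):
    """Smallest primitive root modulo an odd prime p (one always exists)."""
    return next(g for g in range(2, p) if is_generator(g, p))
```

The randomized variant the record accelerates simply tests random candidates `g` instead of scanning upward; since the group has φ(p-1) generators, a few trials suffice on average.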

  15. Building high dimensional imaging database for content based image search

    Science.gov (United States)

    Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo

    2016-03-01

    In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image contents. CBIR uses visual contents, normally called image features, to search images from large-scale image databases according to users' requests in the form of a query image. However, most current CBIR systems require a distance computation over image feature vectors to perform a query, and these distance computations can be time-consuming when the number of image features grows large, which limits the usability of such systems. In this presentation, we propose a novel framework which uses a high-dimensional database to index the image features to improve the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.
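    The brute-force query that such high-dimensional indexes aim to replace is easy to state: rank every stored image by feature-vector distance to the query. A minimal linear-scan sketch over hypothetical feature vectors:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def query_cbir(index, query_vec, k=3):
    """Return the ids of the k images whose feature vectors are closest
    to the query image's vector. This O(N) scan per query is the baseline
    a dedicated high-dimensional index is meant to beat."""
    ranked = sorted(index.items(), key=lambda kv: euclidean(kv[1], query_vec))
    return [image_id for image_id, _ in ranked[:k]]
```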

  16. MISH publishes new framework for fear-based, abstinence-only education.

    Science.gov (United States)

    Mayer, R

    1997-01-01

    The US Medical Institute for Sexual Health (MISH) "National Guidelines for Sexuality and Character Education" is a fear-based, abstinence-only framework for sexuality education. This document is virtually identical in format, conceptual framework, and typeface to that produced by the Sexuality Information and Education Council of the United States (SIECUS) and adopts SIECUS language in many sections. SIECUS agrees with approximately 60% of the MISH messages and finds it noteworthy that the MISH guidelines provide a blueprint for sex education from elementary school through high school. However, MISH and SIECUS follow very different approaches to sex education. SIECUS seeks to help young people acquire the necessary information to safeguard their sexual health and make proper decisions, while the single goal of MISH is to promote abstinence until marriage (avoiding sexual intercourse and any activity involving genital contact or stimulation). MISH promotes this view with fear-based messages, uses only negative terms to describe adolescent sexual relations, and provides scant and misleading information about contraception (including the assertion that adolescent use of birth control is often ineffective). The MISH curriculum also promotes the anti-abortion viewpoint that life begins at conception. While acknowledging the changing composition of the US family, MISH promotes a view of the nuclear family as the "best" type of family in which to rear children. MISH skirts the issue of sexual orientation and avoids giving information about ways to seek treatment for sexually transmitted diseases or prenatal care. MISH guidelines make unsubstantiated statements about the value of abstinence, provide almost no information about how to adapt their framework to various communities, discuss contraceptives and condoms only in terms of failures, and suggest that all adolescent sexual relations have negative consequences. PMID:12319710

  17. A Semantic Query Transformation Approach Based on Ontology for Search Engine

    Directory of Open Access Journals (Sweden)

    SAJENDRA KUMAR

    2012-05-01

    Full Text Available These days, popular web search engines such as Google, Yahoo!, and Live Search are used for information retrieval in all areas to obtain initial helpful information. The information retrieved via a search engine may not be relevant to the search target in the user's mind, and when users do not find relevant information they have to shortlist the results themselves. These search engines use a traditional search service based on "static keywords", which requires users to type in the exact keywords. This approach clearly puts the users in the critical situation of guessing the exact keyword. The users may want to refine their search by using attributes of the search target, but the relevancy of the results in most cases may not be satisfactory, and the users may not be patient enough to browse through a complete list of pages to get a relevant result. The reason behind this is that such search engines perform search based on syntax, not on semantics; they seem less able to understand the relationships between keywords, which has an adverse effect on the results they produce. Semantic search engines are the only solution to this: they return concepts, not documents, according to user query matching. In this paper we propose a semantic query interface which creates a semantic query according to the user's input query, and present a study of current semantic search engine techniques for semantic search.

  18. A community-based event delivery protocol in publish/subscribe systems for delay tolerant sensor networks.

    Science.gov (United States)

    Liu, Nianbo; Liu, Ming; Zhu, Jinqi; Gong, Haigang

    2009-01-01

    The basic operation of a Delay Tolerant Sensor Network (DTSN) is to finish pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extension of Pub/Sub systems in DTSNs has become a promising research topic. However, due to the unique frequent partitioning characteristic of DTSNs, extension of a Pub/Sub system in a DTSN is a considerably difficult and challenging problem, and there are no good solutions to this problem in published works. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several unchanged communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events in a community are delivered to mobile subscribers once a subscriber comes into the community, for improving the data delivery ratio. The queue management employs both the event successful delivery time and the event survival time to decide whether an event should be delivered or dropped for minimizing the transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.
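    A drastically simplified, in-memory sketch can illustrate the drop-or-deliver queue decision. The class below is hypothetical and ignores communities, mobility, and network partitions, keeping only the idea that events past their survival time are discarded instead of delivered:

```python
import time

class PubSubBroker:
    """Minimal topic-based publish/subscribe broker with per-event TTL,
    loosely inspired by CED's drop-on-expiry rule (communities and
    delay-tolerant routing are deliberately omitted)."""

    def __init__(self):
        self.subscribers = {}   # topic -> list of callbacks
        self.queue = []         # (expiry_timestamp, topic, event)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, event, ttl=60.0, now=None):
        now = time.time() if now is None else now
        self.queue.append((now + ttl, topic, event))

    def deliver(self, now=None):
        """Deliver unexpired queued events to current subscribers;
        drop any event whose survival time has been exceeded."""
        now = time.time() if now is None else now
        pending, self.queue = self.queue, []
        for expiry, topic, event in pending:
            if now > expiry:
                continue  # survival time exceeded: drop instead of deliver
            for cb in self.subscribers.get(topic, []):
                cb(event)
```

The `now` parameters exist only so expiry can be exercised deterministically without waiting on the wall clock.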

  19. A Community-Based Event Delivery Protocol in Publish/Subscribe Systems for Delay Tolerant Sensor Networks

    Directory of Open Access Journals (Sweden)

    Haigang Gong

    2009-09-01

    Full Text Available The basic operation of a Delay Tolerant Sensor Network (DTSN) is to finish pervasive data gathering in networks with intermittent connectivity, while the publish/subscribe (Pub/Sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extension of Pub/Sub systems in DTSNs has become a promising research topic. However, due to the unique frequent partitioning characteristic of DTSNs, extension of a Pub/Sub system in a DTSN is a considerably difficult and challenging problem, and there are no good solutions to this problem in published works. To adapt Pub/Sub systems to DTSNs, we propose CED, a community-based event delivery protocol. In our design, event delivery is based on several unchanged communities, which are formed by sensor nodes in the network according to their connectivity. CED consists of two components: event delivery and queue management. In event delivery, events in a community are delivered to mobile subscribers once a subscriber comes into the community, for improving the data delivery ratio. The queue management employs both the event successful delivery time and the event survival time to decide whether an event should be delivered or dropped for minimizing the transmission overhead. The effectiveness of CED is demonstrated through comprehensive simulation studies.

  20. Multiple search methods for similarity-based virtual screening: analysis of search overlap and precision

    OpenAIRE

    Holliday John D; Kanoulas Evangelos; Malim Nurul; Willett Peter

    2011-01-01

    Abstract Background Data fusion methods are widely used in virtual screening, and make the implicit assumption that the more often a molecule is retrieved in multiple similarity searches, the more likely it is to be active. This paper tests the correctness of this assumption. Results Sets of 25 searches using either the same reference structure and 25 different similarity measures (similarity fusion) or 25 different reference structures and the same similarity measure (group fusion) show that...

  1. Concept Search

    OpenAIRE

    Giunchiglia, Fausto; Kharkevich, Uladzimir; Zaihrayeu, Ilya

    2008-01-01

    In this paper we present a novel approach, called Concept Search, which extends syntactic search, i.e., search based on the computation of string similarity between words, with semantic search, i.e., search based on the computation of semantic relations between concepts. The key idea of Concept Search is to operate on complex concepts and to maximally exploit the semantic information available, reducing to syntactic search only when necessary, i.e., when no semantic information is available. ...

  2. Cost Analysis of Screening for, Diagnosing, and Staging Prostate Cancer Based on a Systematic Review of Published Studies

    Directory of Open Access Journals (Sweden)

    Donatus U. Ekwueme, PhD

    2007-10-01

    Full Text Available Introduction The reported estimates of the economic costs associated with prostate cancer screening, diagnostic testing, and clinical staging are substantial. However, the resource costs (i.e., factors such as physician’s time, laboratory tests, patient’s time away from work) included in these estimates are unknown. We examined the resource costs for prostate cancer screening, diagnostic tests, and staging; examined how these costs differ in the United States from costs in other industrialized countries; and estimated the cost per man screened for prostate cancer, per man given a diagnostic test, and per man given a clinically staged diagnosis of this disease. Methods We searched the electronic databases MEDLINE, EMBASE, and CINAHL for articles and reports on prostate cancer published from January 1980 through December 2003. Studies were selected according to the following criteria: the article was published in English; the full text was available for review; the study reported the resource or input cost data used to estimate the cost of prostate cancer testing, diagnosing, or clinical staging; and the study was conducted in an established market economy. We used descriptive statistics, weighted means, and Monte Carlo simulation methods to pool and analyze the abstracted data. Results Of 262 studies examined, 28 met our selection criteria (15 from the United States and 13 from other industrialized countries). For studies conducted in the United States, the pooled baseline resource cost was $37.23 for screening with prostate-specific antigen (PSA) and $31.77 for screening with digital rectal examination (DRE). For studies conducted in other industrialized countries, the pooled baseline resource cost was $30.92 for screening with PSA and $33.54 for DRE. For diagnostic and staging methods, the variation in resource costs between the United States and other industrialized countries was mixed. Conclusion Because national health resources are limited

  3. Search Engines and Search Technologies for Web-based Text Data

    Institute of Scientific and Technical Information of China (English)

    李勇

    2001-01-01

    This paper describes the functions, characteristics and operating principles of search engines based on Web text, as well as the searching and data mining technologies for Web-based text information. Methods of computer-aided text clustering and abstracting are also given. Finally, it offers some guidelines for assessing search quality.

  4. Developing a Grid-based search and categorization tool

    CERN Document Server

    Haya, Glenn; Vigen, Jens

    2003-01-01

    Grid technology has the potential to improve the accessibility of digital libraries. The participants in Project GRACE (Grid Search And Categorization Engine) are in the process of developing a search engine that will allow users to search through heterogeneous resources stored in geographically distributed digital collections. What differentiates this project from current search tools is that GRACE will be run on the European Data Grid, a large distributed network, and will not have a single centralized index as current web search engines do. In some cases, the distributed approach offers advantages over the centralized approach since it is more scalable, can be used on otherwise inaccessible material, and can provide advanced search options customized for each data source.
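
The distributed approach described above can be sketched as a federated search: each collection is queried locally and the ranked lists are merged, with no central index. The code below is a hypothetical illustration (the GRACE abstract does not specify its scoring or merging algorithm; all names here are invented):

```python
# Hypothetical sketch of federated search over distributed collections:
# query every source independently, then merge the ranked lists.

def search_source(index, query):
    """Score documents in one local index by simple term overlap."""
    terms = set(query.lower().split())
    scored = []
    for doc_id, text in index.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap:
            scored.append((overlap, doc_id))
    return sorted(scored, reverse=True)

def federated_search(sources, query, k=5):
    """Merge per-source rankings by score; no centralized index exists."""
    merged = []
    for name, index in sources.items():
        for score, doc_id in search_source(index, query):
            merged.append((score, name, doc_id))
    merged.sort(reverse=True)
    return merged[:k]

sources = {
    "site-a": {"d1": "grid search engine", "d2": "weather report"},
    "site-b": {"d7": "distributed grid computing search"},
}
print(federated_search(sources, "grid search"))
```

A real deployment would replace `search_source` with a remote call to each Grid node, which is what makes the approach scale and reach otherwise inaccessible material.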

  5. Commercial Properties, parcel data base attribute, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Commercial Properties dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It is...

  6. Cellular Phone Towers, parcel data base attribute, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cellular Phone Towers dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It is...

  7. Search Method Based on Figurative Indexation of Folksonomic Features of Graphic Files

    Directory of Open Access Journals (Sweden)

    Oleg V. Bisikalo

    2013-11-01

    Full Text Available This paper describes a search method based on figurative indexation of the folksonomic characteristics of graphic files. The method takes extralinguistic information into account and is based on a model of human figurative thinking. The paper presents the construction of a method for searching image files based on their formal features, including folksonomic clues.

  8. Professional Microsoft search fast search, Sharepoint search, and search server

    CERN Document Server

    Bennett, Mark; Kehoe, Miles; Voskresenskaya, Natalya

    2010-01-01

    Use Microsoft's latest search-based technology, FAST search, to plan, customize, and deploy your search solution. FAST is Microsoft's latest intelligent search-based technology that boasts robustness and an ability to integrate business intelligence with Search. This in-depth guide provides you with advanced coverage of FAST search and shows you how to use it to plan, customize, and deploy your search solution, with an emphasis on SharePoint 2010 and Internet-based search solutions. With a particular appeal for anyone responsible for implementing and managing enterprise search, this book presents t

  9. Extracting Communities of Interests for Semantics-Based Graph Searches

    Science.gov (United States)

    Nakatsuji, Makoto; Tanaka, Akimichi; Uchiyama, Toshio; Fujimura, Ko

    Users recently find their interests by checking the contents published or mentioned by their immediate neighbors in social networking services. We propose semantics-based link navigation; links guide the active user to potential neighbors who may provide new interests. Our method first creates a graph that has users as nodes and shared interests as links. Then it divides the graph by link pruning to extract practical numbers, that the active user can navigate, of interest-sharing groups, i.e. communities of interests (COIs). It then attaches a different semantic tag to the link to each representative user, which best reflects the interests of COIs that they are included in, and to the link to each immediate neighbor of the active user. It finally calculates link attractiveness by analyzing the semantic tags on links. The active user can select the link to access by checking the semantic tags and link attractiveness. User interests extracted from large scale actual blog-entries are used to confirm the efficiency of our proposal. Results show that navigation based on link attractiveness and representative users allows the user to find new interests much more accurately than is otherwise possible.
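
The graph-construction and link-pruning step can be sketched in a few lines: users become nodes, shared interests become weighted links, weak links are pruned, and the remaining connected components are the communities of interest. This is an assumed illustration, not the paper's code:

```python
# Sketch of COI extraction: build a user graph weighted by shared
# interests, prune weak links, return connected components as communities.

from itertools import combinations

def build_links(user_interests):
    """Weight each user pair by the interests they share."""
    links = {}
    for u, v in combinations(sorted(user_interests), 2):
        shared = user_interests[u] & user_interests[v]
        if shared:
            links[(u, v)] = shared
    return links

def communities(user_interests, min_shared=2):
    """Prune links below min_shared, then return connected components."""
    links = {e: s for e, s in build_links(user_interests).items()
             if len(s) >= min_shared}
    parent = {u: u for u in user_interests}
    def find(x):                      # union-find root lookup
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v in links:
        parent[find(u)] = find(v)
    groups = {}
    for u in user_interests:
        groups.setdefault(find(u), set()).add(u)
    return sorted(groups.values(), key=len, reverse=True)

users = {
    "ann": {"jazz", "hiking", "go"},
    "bob": {"jazz", "hiking"},
    "eve": {"chess"},
}
print(communities(users))
```

The paper's semantic tagging and link-attractiveness scoring would then be computed per community; they are omitted here.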

  10. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Image search engines attempt to give fast and accurate access to the wide range of images available on the Internet. There have been a number of efforts to build search engines based on image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR allows automatically extracting target images according to objective visual contents of the image itself, for example its shapes, colors and textures, to provide more accurate ranking of the results. Recent approaches to CBIR differ in terms of which image features are extracted to be used as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features. This framework addresses efficient retrieval of images in large image collections. A comparative study of recent CBIR techniques is provided. According to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide more accurate retrieval than previous approaches. The first proposed image retrieval system uses color, texture and shape descriptors to form the global features vector. This approach integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor and the modified Edge Histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global features vector, which is used in the first approach, with the SURF salient point technique as a local feature. The nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank.
The second approach is
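
The feature-fusion idea can be sketched minimally: normalize each descriptor, concatenate them into one global vector, and rank database images by nearest-neighbour distance. The toy vectors below stand in for the thesis's actual descriptors (YCbCr histogram, modified Fourier and Edge Histogram descriptors), which are not reproduced here:

```python
# Minimal sketch of global-feature fusion for CBIR (illustrative only;
# real descriptors are replaced by tiny toy vectors).

import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(color, texture, shape):
    """Concatenate the normalized descriptors into one global vector."""
    return normalize(color) + normalize(texture) + normalize(shape)

def rank(query_vec, database):
    """Nearest-neighbour ranking by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sorted(database, key=lambda item: dist(query_vec, item[1]))

db = [
    ("sunset.jpg", fuse([9, 1], [2, 2], [1, 0])),
    ("forest.jpg", fuse([1, 9], [5, 0], [0, 1])),
]
query = fuse([8, 2], [2, 2], [1, 0])
print([name for name, _ in rank(query, db)])
```

Normalizing each descriptor before concatenation keeps one feature type from dominating the distance, which is the point of the fusion step.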

  11. A fast block-matching algorithm based on variable shape search

    Institute of Scientific and Technical Information of China (English)

    LIU Hao; ZHANG Wen-jun; CAI Jun

    2006-01-01

    Block-matching motion estimation plays an important role in video coding. The simple and efficient fast block-matching algorithm using Variable Shape Search (VSS) proposed in this paper is based on diamond search and hexagon search. The initial big diamond search is designed to fit the directional centre-biased characteristics of real-world video sequences, and the directional hexagon search is designed to identify a small region where the best motion vector is expected to be located. Finally, the small diamond search is used to select the best motion vector in the located small region. Experimental results showed that the proposed VSS algorithm can significantly reduce the computational complexity, and provides competitive computational speedup with similar distortion performance compared with the popular Diamond Search (DS) algorithm in the MPEG-4 Simple Profile.
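
The final small-diamond refinement step mentioned above can be sketched as follows (a toy illustration under stated assumptions: real encoders operate on pixel blocks of full frames, and VSS also runs big-diamond and hexagon stages first, which are omitted):

```python
# Toy sketch of the small-diamond refinement at the end of VSS-style
# block matching: repeatedly move to the lowest-SAD neighbour.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def get_block(frame, x, y, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def small_diamond_search(cur, ref, x, y, size):
    """Follow the small-diamond pattern until no neighbour improves SAD."""
    target = get_block(cur, x, y, size)
    mv = (0, 0)
    while True:
        candidates = [mv] + [(mv[0] + dx, mv[1] + dy)
                             for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        best = min(candidates,
                   key=lambda m: sad(target,
                                     get_block(ref, x + m[0], y + m[1], size)))
        if best == mv:
            return mv
        mv = best

ref = [[0] * 8 for _ in range(8)]
ref[2][3] = 255                      # bright pixel in the reference frame
cur = [[0] * 8 for _ in range(8)]
cur[2][2] = 255                      # same pixel, shifted by one column
print(small_diamond_search(cur, ref, 1, 1, 3))
```

The search terminates when the centre of the diamond is already the best candidate, which is what keeps the refinement cheap.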

  12. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    Science.gov (United States)

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  13. SEARCH PROFILES BASED ON USER TO CLUSTER SIMILARITY

    Directory of Open Access Journals (Sweden)

    Ilija Subasic

    2007-12-01

    Full Text Available Privacy of web users' query search logs has, since last year's AOL dataset release, been treated as one of the central issues concerning privacy on the Internet. Therefore, the question of privacy preservation has also attracted a lot of attention in the communities surrounding search engines. This paper examines the use of clustering methods to provide low-level contextual search while retaining high privacy/utility. By using only the user's cluster membership, the search query terms need no longer be retained, raising fewer privacy concerns for both users and companies. The paper presents a lightweight framework combining query words, user similarities and clustering in order to provide a meaningful way of mining user searches while protecting user privacy. This differs from previous privacy-preservation attempts, which anonymize the queries instead of the users.

  14. A self-adaptive step Cuckoo search algorithm based on dimension by dimension improvement

    OpenAIRE

    Ren, Lu; Li, Haiyang; He, Xingshi

    2015-01-01

    The choice of step length plays an important role in the convergence speed and precision of the Cuckoo search algorithm. In this paper, a self-adaptive step Cuckoo search algorithm based on dimension-by-dimension improvement is provided. First, since the step in the original self-adaptive step Cuckoo search algorithm is not updated when the current position of the nest is in the optimal position, a simple modification of the step is made for the update. Second, evaluation strategy based on dimension by dimension u...

  15. A generic agent-based framework for cooperative search using pattern matching and reinforcement learning

    OpenAIRE

    Martin, Simon; Ouelhadj, Djamila; Beullens, P.; Ozcan, E.

    2011-01-01

    Cooperative search provides a class of strategies to design more effective search methodologies through combining (meta-) heuristics for solving combinatorial optimisation problems. This area has been little explored in operational research. In this study, we propose a general agent-based distributed framework where each agent implements a (meta-) heuristic. An agent continuously adapts itself during the search process using a cooperation protocol based on reinforcement learning and pattern m...

  16. Generating MEDLINE search strategies using a librarian knowledge-based system.

    OpenAIRE

    P. Peng; Aguirre, A.; Johnson, S. B.; Cimino, J. J.

    1993-01-01

    We describe a librarian knowledge-based system that generates a search strategy from a query representation based on a user's information need. Together with the natural language parser AQUA, the system functions as a human/computer interface, which translates a user query from free text into a BRS Onsite search formulation, for searching the MEDLINE bibliographic database. In the system, conceptual graphs are used to represent the user's information need. The UMLS Metathesaurus and Semantic ...

  17. Multiobjective Optimization Method Based on Adaptive Parameter Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    P. Sabarinath

    2015-01-01

    Full Text Available The present trend in industries is to improve the techniques currently used in the design and manufacture of products in order to meet the challenges of the competitive market. The crucial task nowadays is to find the optimal design and machining parameters so as to minimize production costs. Design optimization involves many design variables with multiple and conflicting objectives, subject to complex nonlinear constraints. The complexity of optimal design of machine elements creates the requirement for increasingly effective algorithms. Solving a nonlinear multiobjective optimization problem requires significant computing effort. From the literature it is evident that metaheuristic algorithms perform better in dealing with multiobjective optimization. In this paper, we extend the recently developed parameter adaptive harmony search algorithm to solve multiobjective design optimization problems using the weighted sum approach. To determine the best set of weights for this analysis, a performance index based on least average error is computed for each weight set. The proposed approach is applied to solve a biobjective design optimization of a disc brake problem and a newly formulated biobjective design optimization of a helical spring problem. The results reveal that the proposed approach performs better than other algorithms.
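
The weighted sum approach named above scalarizes the conflicting objectives into a single cost, and different weight sets trace out different trade-off solutions. A minimal sketch (illustrative; the paper applies this inside a parameter-adaptive harmony search, which is omitted here):

```python
# Weighted-sum scalarization of a biobjective problem: different weight
# pairs select different trade-off points among the candidates.

def weighted_sum(objectives, weights, x):
    """Combine objective values into one scalar cost."""
    return sum(w * f(x) for w, f in zip(weights, objectives))

def best_of(candidates, objectives, weights):
    return min(candidates, key=lambda x: weighted_sum(objectives, weights, x))

# Two conflicting toy objectives: "mass" grows with x, "stress" shrinks.
mass = lambda x: x
stress = lambda x: 1.0 / x
candidates = [0.5, 1.0, 2.0, 4.0]

for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    print(w, best_of(candidates, (mass, stress), w))
```

Sweeping the weights is what produces a set of compromise designs; the performance index mentioned in the abstract then scores each weight set.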

  18. Evaluating Search Engine Relevance with Click-Based Metrics

    Science.gov (United States)

    Radlinski, Filip; Kurup, Madhu; Joachims, Thorsten

    Automatically judging the quality of retrieval functions based on observable user behavior holds promise for making retrieval evaluation faster, cheaper, and more user centered. However, the relationship between observable user behavior and retrieval quality is not yet fully understood. In this chapter, we expand upon, Radlinski et al. (How does clickthrough data reflect retrieval quality, In Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), 43-52, 2008), presenting a sequence of studies investigating this relationship for an operational search engine on the arXiv.org e-print archive. We find that none of the eight absolute usage metrics we explore (including the number of clicks observed, the frequency with which users reformulate their queries, and how often result sets are abandoned) reliably reflect retrieval quality for the sample sizes we consider. However, we find that paired experiment designs adapted from sensory analysis produce accurate and reliable statements about the relative quality of two retrieval functions. In particular, we investigate two paired comparison tests that analyze clickthrough data from an interleaved presentation of ranking pairs, and find that both give accurate and consistent results. We conclude that both paired comparison tests give substantially more accurate and sensitive evaluation results than the absolute usage metrics in our domain.

  19. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, flood plains, Published in 2008, 1:24000 (1in=2000ft) scale, Box Elder County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  20. Road and Street Centerlines, Originally based on TIGER then updated/improved from multiple sources, Published in 2007, Churchill County, NV.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — as of 2007. It is described as 'Originally based on TIGER then updated/improved from multiple sources'. Data by this publisher are often provided in State Plane...

  1. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, From FEMA, Published in 2007, 1:1200 (1in=100ft) scale, Town of Cary NC.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from LIDAR...

  2. Demeter, persephone, and the search for emergence in agent-based models.

    Energy Technology Data Exchange (ETDEWEB)

    North, M. J.; Howe, T. R.; Collier, N. T.; Vos, J. R.; Decision and Information Sciences; Univ. of Chicago; PantaRei Corp.; Univ. of Illinois

    2006-01-01

    In Greek mythology, the earth goddess Demeter was unable to find her daughter Persephone after Persephone was abducted by Hades, the god of the underworld. Demeter is said to have embarked on a long and frustrating, but ultimately successful, search to find her daughter. Unfortunately, long and frustrating searches are not confined to Greek mythology. In modern times, agent-based modelers often face similar troubles when searching for agents that are to be connected to one another and when seeking appropriate target agents while defining agent behaviors. The result is a 'search for emergence' in that many emergent or potentially emergent behaviors in agent-based models of complex adaptive systems either implicitly or explicitly require search functions. This paper considers a new nested querying approach to simplifying such agent-based modeling and multi-agent simulation search problems.

  3. A quick search method for audio signals based on a piecewise linear representation of feature trajectories

    CERN Document Server

    Kimura, Akisato; Kurozumi, Takayuki; Murase, Hiroshi

    2007-01-01

    This paper presents a new method for a quick similarity-based search through long unlabeled audio streams to detect and locate audio clips provided by users. The method involves feature-dimension reduction based on a piecewise linear representation of a sequential feature trajectory extracted from a long audio stream. Two techniques enable us to obtain a piecewise linear representation: the dynamic segmentation of feature trajectories and the segment-based Karhunen-Loève (KL) transform. The proposed search method guarantees, in principle, the same search results as the search method without the proposed feature-dimension reduction. Experimental results indicate significant improvements in search speed. For example, the proposed method reduced the total search time to approximately 1/12 that of previous methods and detected queries in approximately 0.3 seconds from a 200-hour audio database.

  4. A phenomenological relative biological effectiveness (RBE) model for proton therapy based on all published in vitro cell survival data

    International Nuclear Information System (INIS)

    Proton therapy treatments are currently planned and delivered using the assumption that the proton relative biological effectiveness (RBE) relative to photons is 1.1. This assumption ignores strong experimental evidence that suggests the RBE varies along the treatment field, i.e. with linear energy transfer (LET) and with tissue type. A recent review study collected over 70 experimental reports on proton RBE, providing a comprehensive dataset for predicting RBE for cell survival. Using this dataset we developed a model to predict proton RBE based on dose, dose-averaged LET (LETd) and the ratio of the linear-quadratic model parameters for the reference radiation, (α/β)x, as the tissue-specific parameter. The proposed RBE model is based on the linear-quadratic model and was derived from a nonlinear regression fit to 287 experimental data points. The proposed model predicts that the RBE increases with increasing LETd and decreases with increasing (α/β)x. This agrees with previous theoretical predictions on the relationship between RBE, LETd and (α/β)x. The model additionally predicts a decrease in RBE with increasing dose and shows a relationship of both α and β with LETd. Our proposed phenomenological RBE model is derived using the most comprehensive collection of proton RBE experimental data to date. Previously published phenomenological models, based on a limited data set, may have to be revised. (paper)
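
As background, the linear-quadratic quantities the abstract builds on can be written as follows (a generic sketch only; the paper's fitted functional forms for RBE versus LETd and (α/β)x are not reproduced here):

```latex
% LQ cell survival after a dose D
S(D) = \exp\!\left(-\alpha D - \beta D^2\right)

% RBE: ratio of photon dose D_x to proton dose D_p at equal survival,
% modeled as a function of dose, dose-averaged LET and tissue type
\mathrm{RBE} = \left.\frac{D_x}{D_p}\right|_{\text{iso-effect}},
\qquad
\mathrm{RBE} = \mathrm{RBE}\big(D_p,\ \mathrm{LET_d},\ (\alpha/\beta)_x\big)
```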

  5. Searches for physics beyond the Standard Model using jet-based resonances with the ATLAS Detector

    CERN Document Server

    Frate, Meghan; The ATLAS collaboration

    2016-01-01

    Run2 of the LHC, with its increased center-of-mass energy, is an unprecedented opportunity to discover physics beyond the Standard Model. One interesting possibility to conduct such searches is to use resonances based on jets. The latest search results from the ATLAS experiment, based on either inclusive or heavy-flavour jets, will be presented.

  6. A Trustability Metric for Code Search based on Developer Karma

    CERN Document Server

    Gysin, Florian S

    2010-01-01

    The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it, thus trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma value of all its developers a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric and we discuss preliminary results from an evaluation of the prototype.
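
The karma computation can be sketched in a few lines. The exact weighting used in JBender is not given in the abstract, so the vote/activity combination below is an assumption for illustration only:

```python
# Hedged sketch of a karma-style trustability score: per-developer karma
# from votes and cross-project activity, aggregated per project.
# The weights (1 per vote, 2 per active project) are invented.

def developer_karma(votes, projects_active):
    """Karma grows with user votes and with cross-project activity."""
    return votes + 2 * projects_active

def project_trustability(developers):
    """Rank a project by the mean karma of its developers."""
    karmas = [developer_karma(v, p) for v, p in developers]
    return sum(karmas) / len(karmas)

core_team = [(10, 3), (4, 1)]        # (votes, active projects) per developer
one_person = [(0, 1)]
print(project_trustability(core_team), project_trustability(one_person))
```

Projects whose developers are both well-voted and active across projects rank higher on the trustability scale, which is the intuition the metric encodes.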

  7. Risk-based scheduling of multiple search passes for UUVs

    Science.gov (United States)

    Baylog, John G.; Wettergren, Thomas A.

    2016-05-01

    This paper addresses selected computational aspects of collaborative search planning when multiple search agents seek to find hidden objects (i.e. mines) in operating environments where the detection process is prone to false alarms. A Receiver Operator Characteristic (ROC) analysis is applied to construct a Bayesian cost objective function that weighs and combines missed detection and false alarm probabilities. It is shown that for fixed ROC operating points and a validation criterion consisting of a prerequisite number of detection outcomes, an interval exists in the number of conducted search passes over which the risk objective function is supermodular. We show that this property is not retained beyond validation criterion boundaries. We investigate the use of greedy algorithms for distributing search effort and, in particular, examine the double greedy algorithm for its applicability under conditions of varying criteria. Numerical results are provided to demonstrate the effectiveness of the approach.
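
The greedy distribution of search effort can be sketched as follows (an illustrative simplification: the paper's double greedy algorithm and ROC-based cost weights are reduced to a single missed-detection risk term, and the validation criterion is omitted):

```python
# Greedy allocation of search passes: each pass goes to the cell whose
# missed-detection risk drops the most, given per-pass detection
# probability pd and a prior probability the object is in each cell.

def miss_risk(prior, pd, passes):
    """Probability the object is in the cell and is never detected."""
    return prior * (1 - pd) ** passes

def greedy_allocate(priors, pd, total_passes):
    """Repeatedly assign the next pass for the largest risk reduction."""
    alloc = [0] * len(priors)
    for _ in range(total_passes):
        gains = [miss_risk(p, pd, a) - miss_risk(p, pd, a + 1)
                 for p, a in zip(priors, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

print(greedy_allocate([0.6, 0.3, 0.1], pd=0.5, total_passes=4))
```

Because each extra pass in a cell yields a geometrically shrinking risk reduction, the marginal gains are diminishing, which is the property that makes greedy assignment effective over the supermodular interval discussed in the paper.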

  8. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    OpenAIRE

    Maryam Akbari; Mozafar Cheshme Sohrabi; Ebrahim Afshar Zanjani

    2012-01-01

    The present study investigated the adoption process of search engines and meta search engines by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potential and effective tools in search engine and meta search engine adoption among University of Isfahan users. The research method was a descriptive survey study. The cases of the study were all of the post...

  9. Bielefeld Academic Search Engine (BASE) An end-user oriented institutional repository search service

    OpenAIRE

    Pieper, Dirk; Summann, Friedrich

    2006-01-01

    In a SPARC position paper (http://www.arl.org/sparc/IR/ir.html) published in 2002 Raym Crow defined an institutional repository as a "digital collection capturing and preserving the intellectual output of a single or multi-university community". Repository servers can help institutions to increase their visibility and, in addition, they are beginning to change the system of scholarly communication. There are some multi-institutional driven repository servers but most of the repositories a...

  10. Lyman-Kutcher-Burman NTCP model parameters for radiation pneumonitis and xerostomia based on combined analysis of published clinical data

    International Nuclear Information System (INIS)

    Knowledge of accurate parameter estimates is essential for incorporating normal tissue complication probability (NTCP) models into biologically based treatment planning. The purpose of this work is to derive parameter estimates for the Lyman-Kutcher-Burman (LKB) NTCP model using a combined analysis of multi-institutional toxicity data for the lung (radiation pneumonitis) and parotid gland (xerostomia). A series of published clinical datasets describing dose response for radiation pneumonitis (RP) and xerostomia were identified for this analysis. The data support the notion of large volume effect for the lung and parotid gland with the estimates of the n parameter being close to unity. Assuming that n = 1, the m and TD50 parameters of the LKB model were estimated by the maximum likelihood method from plots of complication rate as a function of mean organ dose. Ninety five percent confidence intervals for parameter estimates were obtained by the profile likelihood method. If daily fractions other than 2 Gy had been used in a published report, mean organ doses were converted to 2 Gy/fraction-equivalent doses using the linear-quadratic (LQ) formula with α/β = 3 Gy. The following parameter estimates were obtained for the endpoint of symptomatic RP when the lung is considered a paired organ: m = 0.41 (95% CI 0.38, 0.45) and TD50 = 29.9 Gy (95% CI 28.2, 31.8). When RP incidence was evaluated as a function of dose to the ipsilateral lung rather than total lung, estimates were m = 0.35 (95% CI 0.29, 0.43) and TD50 = 37.6 Gy (95% CI 34.6, 41.4). For xerostomia expressed as reduction in stimulated salivary flow below 25% within six months after radiotherapy, the following values were obtained: m = 0.53 (95% CI 0.45, 0.65) and TD50 = 31.4 Gy (95% CI 29.1, 34.0). 
Although a large number of parameter estimates for different NTCP models and critical structures exist and continue to appear in the literature, it is hard to justify the use of any single parameter set obtained at a
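
The LKB model named in the abstract has a standard closed form; the following is a generic restatement (not quoted from the paper) showing where the fitted parameters n, m and TD50 enter:

```latex
\mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^2/2}\,dx,
\qquad
t = \frac{\mathrm{gEUD} - TD_{50}}{m \cdot TD_{50}},
\qquad
\mathrm{gEUD} = \left( \sum_i v_i\, D_i^{1/n} \right)^{\!n}
```

With n = 1, the generalized equivalent uniform dose gEUD reduces to the mean organ dose, which is why the abstract fits m and TD50 to complication rate as a function of mean dose.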

  11. Biomedical phantoms. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-10-01

    The bibliography contains citations concerning the design, development, construction, and evaluation of various anthropomorphic phantoms: mathematical or physical models or constructs simulating human tissue which are used in radiotherapy and diagnostic radiology. The radiation characteristics of phantom materials are addressed, simulating human body tissue, muscles, organs, bones, and skin. (Contains a minimum of 112 citations and includes a subject term index and title list.)

  12. World Search Engine IQ Test Based on the Internet IQ Evaluation Algorithms

    OpenAIRE

    Feng Liu; Yong Shi; Bo Wang

    2015-01-01

    With increasing concern about Internet intelligence, this paper proposes the concepts of Internet and Internet-subsystem IQs over search engines. Based on human IQ calculations, the paper first establishes a 2014 Internet Intelligence Scale and designs an intelligence test bank for search engines. Then, an intelligence test using this test bank is carried out on 50 typical search engines from 25 countries and regions across the world. Meanwhile, another intelligence test is also conduct...

  13. Semantic search using modular ontology learning and case-based reasoning

    OpenAIRE

    Ben Mustapha, Nesrine; Baazaoui, Hajer; Aufaure, Marie-Aude; Ben Ghezala, Henda

    2010-01-01

    In this paper, we present a semantic search approach based on case-based reasoning and modular ontology learning. A case is defined by a set of similar queries associated with its relevant results. The case base is used for ontology learning and for contextualizing the search process. Modular ontologies are designed to be used for case representation and indexing. Our work aims at improving ontology-based information retrieval by the integration of the traditional in...

  14. Ontological Approach for Effective Generation of Concept Based User Profiles to Personalize Search Results

    OpenAIRE

    R. S. D Wahidabanu; S. Prabaharan

    2012-01-01

    Problem statement: Ontological user profile generation is a semantic approach to deriving richer concept-based user profiles. It depends on the semantic relationships of concepts. This study focuses on ontology to derive concept-oriented user profiles based on user search queries and clicked documents. This study proposes a concept-based topic ontology which derives concept-based user profiles more independently, making it possible to improve search engine processes more efficiently. ...

  15. A Comparison of Publishing Groups Based on Their Publishing Business in 2012-2014: Taking China South Publishing and Media and Northern United Publishing and Media as Examples

    Institute of Scientific and Technical Information of China (English)

    曹红梅

    2015-01-01

    At present, China is in a period of maintaining and furthering the achievements of enterprise transformation and system reform. Comparing and analyzing the publishing groups after transformation, in order to identify their respective strengths and shortcomings, is of great value in promoting the further development of publishing groups in China. Therefore, this paper compares and analyzes the two publishing groups, Northern United Publishing and Media and China South Publishing and Media, in terms of the groups' general profiles and their operation and management, and puts forward several targeted suggestions for their operation and management.

  16. Searching for evidence-based information in eye care

    OpenAIRE

    Karen Blackhall

    2005-01-01

    A growth in health awareness has led to an increase in the volume and availability of health information. Health care professionals may feel under pressure to read this increasing volume of material. A search on the internet is often a quick and efficient way to find information, and this can be done by using one of the many search engines such as Google or Google Scholar, or one of the health care information portals such as Omni. A previous article by Sally Parsley in the Community Eye Healt...

  17. A quantum search algorithm based on partial adiabatic evolution

    Institute of Scientific and Technical Information of China (English)

    Zhang Ying-Yu; Hu He-Ping; Lu Song-Feng

    2011-01-01

    This paper presents and implements a specified partial adiabatic search algorithm on a quantum circuit. It studies the minimum energy gap between the first excited state and the ground state of the system Hamiltonian and it finds that, in the case of M=1, the algorithm has the same performance as the local adiabatic algorithm. However, the algorithm evolves globally only within a small interval, which implies that it keeps the advantages of global adiabatic algorithms without losing the speedup of the local adiabatic search algorithm.
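
For context, adiabatic search interpolates between an initial and a final Hamiltonian, and the textbook gap for unstructured search with M marked items out of N is as below (generic background, not the paper's partial-evolution schedule):

```latex
H(s) = (1-s)\,H_0 + s\,H_f, \qquad s \in [0,1]

% gap between ground and first excited state, N items, M marked
g(s) = \sqrt{1 - 4\,\frac{N-M}{N}\,s(1-s)}, \qquad
g_{\min} = g\!\left(\tfrac{1}{2}\right) = \sqrt{\frac{M}{N}}
```

A partial scheme evolves globally except within a small interval around the gap minimum, which is what preserves the local-adiabatic speedup.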

  18. A constrained optimization algorithm based on the simplex search method

    Science.gov (United States)

    Mehta, Vivek Kumar; Dasgupta, Bhaskar

    2012-05-01

    In this article, a robust method is presented for handling constraints with the Nelder and Mead simplex search method, which is a direct search algorithm for multidimensional unconstrained optimization. The proposed method is free from the limitations of previous attempts that demand the initial simplex to be feasible or a projection of infeasible points to the nonlinear constraint boundaries. The method is tested on several benchmark problems and the results are compared with various evolutionary algorithms available in the literature. The proposed method is found to be competitive with respect to the existing algorithms in terms of effectiveness and efficiency.
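
For contrast, the textbook way to force constraints into an unconstrained direct-search method like Nelder-Mead is a quadratic penalty; the article's contribution is precisely to avoid the limitations of such schemes. A minimal sketch of the penalty baseline (an assumed illustration, not the article's method):

```python
# Classical quadratic-penalty wrapper: infeasible points (g(x) > 0) are
# penalized so an unconstrained simplex search avoids them.

def penalized(f, inequality_constraints, rho=1e3):
    """Return f plus rho times the squared constraint violations."""
    def fp(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return f(x) + rho * violation
    return fp

# Minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
fp = penalized(f, [g])

print(fp(2.0), fp(1.0), fp(0.0))
```

The wrapped function `fp` can be handed to any unconstrained minimizer; the drawback, which motivates penalty-free approaches, is that the result depends on tuning `rho`.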

  19. A study of the disciplinary structure of mechanics based on the titles of published journal articles in mechanics

    Institute of Scientific and Technical Information of China (English)

    CHEN; Lixin; LIU; Zeyuan; LIANG; Liming

    2010-01-01

    Scientometrics is an emerging academic field that explores the structure of science through journal citation relations. This article, however, studies the contents of subject-relevant journals rather than the citations they contain, with the purpose of discovering the disciplinary structure of a given science, mechanics in our case. Based on the title wordings of 68,075 articles published in 66 mechanics journals, and using research tools such as word frequency analysis, multidimensional scaling analysis and factor analysis, this article analyzes the similarities and distinctions of those journals' contents within the subject field of mechanics. We first convert the complex internal relations of these mechanics journals into a small number of independent indicators. The group of selected mechanics journals is then classified by cluster analysis. This article demonstrates that the relations among the research contents of mechanics can be shown in an intuitively recognizable map, and analyzed from a perspective that takes into account how the major branches of mechanics, such as solid mechanics, fluid mechanics, rational mechanics (including mathematical methods in mechanics), sound and vibration mechanics, and computational mechanics, relate to the main thematic tenet of our study. It is hoped that such an approach, buttressed with this new perspective, will enrich our means of exploring the disciplinary structure of science and technology in general and of mechanics in particular.

  20. The NeuARt II system: a viewing tool for neuroanatomical data based on published neuroanatomical atlases

    Directory of Open Access Journals (Sweden)

    Cheng Wei-Cheng

    2006-12-01

    Full Text Available Abstract Background Anatomical studies of neural circuitry describing the basic wiring diagram of the brain produce intrinsically spatial, highly complex data of great value to the neuroscience community. Published neuroanatomical atlases provide a spatial framework for these studies. We have built an informatics framework based on these atlases for the representation of neuroanatomical knowledge. This framework not only captures current methods of anatomical data acquisition and analysis, it allows these studies to be collated, compared and synthesized within a single system. Results We have developed an atlas-viewing application ('NeuARt II') in the Java language with unique functional properties. These include the ability to use copyrighted atlases as templates within which users may view, save and retrieve data-maps and annotate them with volumetric delineations. NeuARt II also permits users to view multiple levels on multiple atlases at once. Each data-map in this system is simply a stack of vector images with one image per atlas level, so any set of accurate drawings made onto a supported atlas (in vector graphics format) could be uploaded into NeuARt II. Presently the database is populated with a corpus of high-quality neuroanatomical data from the laboratory of Dr Larry Swanson (consisting of 64 highly detailed maps of PHAL tract-tracing experiments, made up of 1039 separate drawings that were published in 27 primary research publications over 17 years). Herein we take selective examples from these data to demonstrate the features of NeuARt II. Our informatics tool permits users to browse, query and compare these maps. The NeuARt II tool operates within a bioinformatics knowledge management platform (called 'NeuroScholar') either as a standalone or a plug-in application. Conclusion Anatomical localization is fundamental to neuroscientific work and atlases provide an easily-understood framework that is widely used by neuroanatomists and non...

  1. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    Science.gov (United States)

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  2. Balancing thread based navigation for targeted video search

    NARCIS (Netherlands)

    O. de Rooij; C.G.M. Snoek; M. Worring

    2008-01-01

    Various query methods for video search exist. Because of the semantic gap each method has its limitations. We argue that for effective retrieval query methods need to be combined at retrieval time. However, switching query methods often involves a change in query and browsing interface, which puts a

  3. Capacitated Dynamic Facility Location Problem Based on Tabu Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    KUANG Yi-jun; ZHU Ke-jun

    2007-01-01

    Facility location problem is a kind of NP-hard combinatorial problem. Considering ever-changing demand sites, demand quantities and releasing costs, we formulate a model combining tabu search and FCM (fuzzy clustering method) to solve the capacitated dynamic facility location problem. Some results are achieved, and they show that the proposed method is effective.
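As a rough illustration of the tabu-search half of the hybrid (the FCM clustering step and capacity constraints are omitted), the sketch below runs a flip-one-site tabu search on a tiny, hypothetical uncapacitated instance:

```python
# toy instance (hypothetical data): opening cost per site, service cost[site][customer]
open_cost = [3.0, 3.0, 3.0]
serve = [[1, 9, 9, 1],
         [9, 1, 1, 9],
         [2, 2, 2, 2]]

def total_cost(open_sites):
    """Opening costs plus cheapest-assignment service costs."""
    if not open_sites:
        return float("inf")
    assign = sum(min(serve[s][c] for s in open_sites) for c in range(4))
    return sum(open_cost[s] for s in open_sites) + assign

def tabu_search(n_sites=3, iters=50, tenure=2):
    current = frozenset([0])
    best, best_cost = current, total_cost(current)
    tabu = {}                    # site -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for s in range(n_sites):
            neighbor = current ^ {s}                 # flip: open or close site s
            cost = total_cost(neighbor)
            # aspiration criterion: a tabu move is allowed if it beats the best
            if tabu.get(s, -1) < it or cost < best_cost:
                candidates.append((cost, s, neighbor))
        cost, s, current = min(candidates)
        tabu[s] = it + tenure
        if cost < best_cost:
            best, best_cost = current, cost
    return sorted(best), best_cost

print(tabu_search())  # → ([0, 1], 10.0)
```

The tabu list lets the search accept cost-worsening flips without immediately cycling back, which is what distinguishes it from plain greedy local search.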

  4. New Diamond Block Based Gradient Descent Search Algorithm for Motion Estimation in the MPEG-4 Encoder

    Institute of Scientific and Technical Information of China (English)

    王振洲; 李桂苓

    2003-01-01

    Motion estimation is an important part of the MPEG-4 encoder, due to its significant impact on the bit rate and the output quality of the encoded sequence. Unfortunately this feature takes a significant part of the encoding time, especially when the straightforward full search (FS) algorithm is used. In this paper, a new algorithm named diamond block based gradient descent search (DBBGDS), which is significantly faster than FS and gives similar output quality, is proposed. At the same time, other algorithms, such as three step search (TSS), improved three step search (ITSS), new three step search (NTSS), four step search (4SS), cellular search (CS), diamond search (DS) and block based gradient descent search (BBGDS), are adopted and compared with DBBGDS. As the experimental results show, DBBGDS has its own advantages. Although DS has been adopted by the MPEG-4 VM, its output quality is worse than that of the proposed algorithm while its complexity is similar. Compared with BBGDS, the proposed algorithm achieves better output quality.
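BBGDS-style descent, the building block that the proposed DBBGDS extends with a diamond pattern, can be sketched as follows; the synthetic frames, block size and block position are illustrative only:

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, B=8):
    """Sum of absolute differences between the current block and a displaced reference block."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > w or y + B > h:
        return np.inf                    # displacement leaves the frame
    return np.abs(cur[by:by+B, bx:bx+B] - ref[y:y+B, x:x+B]).sum()

def bbgds(cur, ref, bx, by, B=8):
    """Block-based gradient descent search: repeatedly move to the best 3x3 neighbour."""
    dx = dy = 0
    best = sad(cur, ref, bx, by, dx, dy, B)
    while True:
        step = None
        for ddx in (-1, 0, 1):
            for ddy in (-1, 0, 1):
                c = sad(cur, ref, bx, by, dx + ddx, dy + ddy, B)
                if c < best:
                    best, step = c, (dx + ddx, dy + ddy)
        if step is None:                 # centre is the local minimum: stop
            return (dx, dy), best
        dx, dy = step

# synthetic frames: a smooth intensity bowl shifted by (+3, +2) between frames
ys, xs = np.mgrid[0:64, 0:64]
ref = (xs - 32.0) ** 2 + (ys - 32.0) ** 2
cur = np.roll(np.roll(ref, -3, axis=1), -2, axis=0)

mv, err = bbgds(cur, ref, 24, 24)
print(mv, err)  # → (3, 2) 0.0
```

Like all gradient-descent-style matchers, this only works when the SAD surface is smooth; on noisy content it can stall in a local minimum, which is why the diamond variants enlarge the search pattern.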

  5. Shopping Malls, Updates of the shopping malls and centers based of land use, Published in unknown, Johnson County AIMS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Shopping Malls dataset, was produced all or in part from Published Reports/Deeds information as of unknown. It is described as 'Updates of the shopping malls...

  6. The Academic Publishing Industry

    DEFF Research Database (Denmark)

    Nell, Phillip Christopher; Wenzel, Tim Ole; Schmidt, Florian

    2014-01-01

    The case is intended to be used as a basis for class discussion rather than to illustrate effective handling of a managerial situation. It is based on published sources, interviews, and personal experience. The authors have disguised some names and other identifying information to protect confidentiality.

  7. Book on CPC Published

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    A book that answers 13 questions about how the Communist Party of China (CPC) works in China and why the Party has made great achievements in the past decades has been recently published by the Beijing-based New World Press.

  8. Defense Waste Processing Facility (DWPF): The vitrification of high-level nuclear waste. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-06-01

    The bibliography contains citations concerning a production-scale facility and the world's largest plant for the vitrification of high-level radioactive nuclear wastes (HLW) located in the United States. Initially based on the selection of borosilicate glass as the reference waste form, the citations present the history of the development including R&D projects and the actual construction of the production facility at the DOE Savannah River Plant (SRP). (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  9. A Greedy Search Algorithm for Maneuver-Based Motion Planning of Agile Vehicles

    OpenAIRE

    Neas, Charles Bennett

    2010-01-01

    This thesis presents a greedy search algorithm for maneuver-based motion planning of agile vehicles. In maneuver-based motion planning, vehicle maneuvers are solved offline and saved in a library to be used during motion planning. From this library, a tree of possible vehicle states can be generated through the search space. A depth-first, library-based algorithm called AD-Lib is developed and used to quickly provide feasible trajectories along the tree. AD-Lib combines greedy search tech...

  10. The Search for Extension: 7 Steps to Help People Find Research-Based Information on the Internet

    Science.gov (United States)

    Hill, Paul; Rader, Heidi B.; Hino, Jeff

    2012-01-01

    For Extension's unbiased, research-based content to be found by people searching the Internet, it needs to be organized in a way conducive to the ranking criteria of a search engine. With proper web design and search engine optimization techniques, Extension's content can be found, recognized, and properly indexed by search engines and…

  11. Constructing Virtual Documents for Keyword Based Concept Search in Web Ontology

    Directory of Open Access Journals (Sweden)

    Sapna Paliwal

    2013-04-01

    Full Text Available Web ontologies are structural frameworks for organizing information on the Semantic Web and provide shared concepts. An ontology formally represents knowledge about a particular entity as a set of concepts within a particular domain on the Semantic Web. Web ontologies help to describe concepts within a domain and also enable semantic interoperability between two different applications through Falcons concept search, which facilitates concept searching and ontology reuse. Constructing virtual documents enables keyword-based concept search in ontologies. The proposed method examines how a search engine can help users find ontologies in less time and thus satisfy their needs. It includes some supporting technologies together with a new technique: virtual documents of concepts are constructed for keyword-based search, concepts and ontologies are ranked based on a population scheme, and structured snippets are generated according to the query. We also report on user feedback and a usability evaluation.

  12. Optimal attack strategy of complex networks based on tabu search

    Science.gov (United States)

    Deng, Ye; Wu, Jun; Tan, Yue-jin

    2016-01-01

    The problem of network disintegration has broad applications and has recently received growing attention, for example in network confrontation and the disintegration of harmful networks. This paper presents an optimized attack strategy model for complex networks and introduces tabu search, a heuristic optimization algorithm rarely applied to the study of network robustness, into the network disintegration problem to identify the optimal attack strategy. The efficiency of the proposed solution was verified by comparing it with other attack strategies on various model networks and a real-world network. Numerical experiments suggest that our solution can improve the effect of network disintegration and that the "best" choice for node failure attacks can be identified through global searches. Our understanding of the optimal attack strategy may also shed light on a new property of the nodes within network disintegration and deserves additional study.

  13. Multilevel Threshold Based Gray Scale Image Segmentation using Cuckoo Search

    OpenAIRE

    Samantaa, Sourav; Dey, Nilanjan; Das, Poulami; Acharjee, Suvojit; Chaudhuri, Sheli Sinha

    2013-01-01

    Image segmentation is a technique for partitioning an image into distinct classes. Many possible solutions may be available for segmenting an image into a certain number of classes, each with a different quality of segmentation. In our proposed method, a multilevel thresholding technique is used for image segmentation, and a new approach based on Cuckoo Search (CS) is used to select the optimal threshold values. In other words, the algorithm is used to achieve the best solution ...
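A minimal stand-in for this pipeline is sketched below: an Otsu-style between-class-variance objective over two thresholds, with plain random sampling in place of the Lévy-flight moves of Cuckoo Search, on a hypothetical 16-level histogram:

```python
import random

# synthetic 16-level histogram with three modes (hypothetical data)
hist = [0, 5, 20, 5, 0, 0, 4, 18, 4, 0, 0, 3, 15, 3, 0, 0]
total = sum(hist)
L = len(hist)

def between_class_variance(t1, t2):
    """Otsu-style objective: thresholds (t1, t2) split levels into three classes."""
    mu_all = sum(i * hist[i] for i in range(L)) / total
    var = 0.0
    for lo, hi in ((0, t1), (t1, t2), (t2, L)):
        w = sum(hist[i] for i in range(lo, hi)) / total
        if w == 0:
            return -1.0                  # an empty class makes the split invalid
        mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
        var += w * (mu - mu_all) ** 2
    return var

def threshold_search(samples=4000, seed=1):
    """Stochastic threshold search; random sampling stands in for the CS moves."""
    rng = random.Random(seed)
    best_t, best_v = None, -1.0
    for _ in range(samples):
        t1 = rng.randrange(1, L - 1)
        t2 = rng.randrange(t1 + 1, L)
        v = between_class_variance(t1, t2)
        if v > best_v:
            best_t, best_v = (t1, t2), v
    return best_t, best_v

(t1, t2), v = threshold_search()
print(t1, t2)  # thresholds fall in the two valleys separating the three modes
```

The point of a metaheuristic here is that exhaustive search over threshold tuples grows combinatorially with the number of levels; the objective itself is the same either way.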

  14. Tree-Based Search for Stochastic Simulation Algorithm

    OpenAIRE

    Vo Hong, Thanh; Zunino, Roberto

    2011-01-01

    In systems biology, the cell behavior is governed by a series of biochemical reactions. The stochastic simulation algorithm (SSA), which was introduced by Gillespie, is a standard method to properly realize the dynamic and stochastic nature of such systems. In general, SSA follows a two-step approach: finding the next reaction firing, and updating the system accordingly. In this paper we apply the Huffman tree, an optimal tree for data compression, so as to improve the search for the next reacti...
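For context, Gillespie's direct method with the plain linear scan that the Huffman-tree structure is meant to speed up can be sketched as follows (the one-reaction toy system is an assumption for illustration):

```python
import random

def ssa_direct(x, reactions, t_end, seed=0):
    """Gillespie's direct method. The next-reaction lookup here is a linear scan;
    the cited work replaces this scan with a Huffman-tree search."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        props = [prop(x) for prop, _ in reactions]
        a0 = sum(props)
        if a0 == 0:
            return x                  # no reaction can fire any more
        t += rng.expovariate(a0)      # exponential waiting time to the next firing
        if t >= t_end:
            return x
        r, acc = rng.uniform(0, a0), 0.0
        for (_, update), a in zip(reactions, props):
            acc += a                  # linear search: find the reaction r falls into
            if r < acc:
                update(x)
                break

# toy system: irreversible isomerization A -> B with unit rate constant
def prop_iso(x):                      # propensity of A -> B: c * [A], c = 1
    return 1.0 * x["A"]

def fire_iso(x):                      # state update when A -> B fires
    x["A"] -= 1
    x["B"] += 1

state = ssa_direct({"A": 100, "B": 0}, [(prop_iso, fire_iso)], t_end=1000.0)
print(state)  # → {'A': 0, 'B': 100}
```

With only one reaction the scan is trivial; the tree-based lookup pays off when the reaction count is large, since it reduces the per-step search from linear to logarithmic.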

  15. Storage Ring Based EDM Search — Achievements and Goals

    Science.gov (United States)

    Lehrach, Andreas

    2016-02-01

    This paper summarizes the experimental achievements of the JEDI (Jülich Electric Dipole moment Investigations) Collaboration to exploit and demonstrate the feasibility of charged particle Electric Dipole Moment searches with storage rings at the Cooler Synchrotron COSY of the Forschungszentrum Jülich. Recent experimental results, design and optimization of critical accelerator elements, progress in beam and spin tracking, and future goals of the R & D program at COSY are presented.

  16. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ, where each of β and γ is either a terminal or a nonterminal symbol, to Extended Chomsky Normal Form, which also includes rules of the form A→B. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called ``bridging,'' which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that found by the minimum set search, and the system can find no appropriate grammar for some CFLs using the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
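The bottom-up parsing step that rule generation builds on is essentially CYK; a minimal recognizer for a grammar in plain (not Extended) Chomsky Normal Form, with an illustrative aⁿbⁿ grammar, is sketched below:

```python
def cyk(word, rules, start="S"):
    """CYK recognizer for a grammar in Chomsky Normal Form.
    rules: list of (head, body); body is a 1-tuple terminal or 2-tuple of nonterminals."""
    n = len(word)
    # table[i][j]: set of nonterminals deriving word[i : i + j + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = {h for h, body in rules if body == (ch,)}
    for span in range(2, n + 1):              # substring length
        for i in range(n - span + 1):         # start position
            for split in range(1, span):      # split point inside the substring
                left = table[i][split - 1]
                right = table[i + split][span - split - 1]
                for h, body in rules:
                    if len(body) == 2 and body[0] in left and body[1] in right:
                        table[i][span - 1].add(h)
    return (start in table[0][n - 1]) if n else False

# a^n b^n in CNF: S -> AB | AX,  X -> SB,  A -> a,  B -> b
rules = [("S", ("A", "B")), ("S", ("A", "X")), ("X", ("S", "B")),
         ("A", ("a",)), ("B", ("b",))]
print(cyk("aabb", rules), cyk("abab", rules))  # → True False
```

The "bridging" idea in the paper works on exactly this kind of table: when a positive sample fails to parse, the partially filled cells show which derivation-tree fragments are missing and thus which new rules to synthesize.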

  17. Ontology-Based Information Behaviour to Improve Web Search

    Directory of Open Access Journals (Sweden)

    Silvia Calegari

    2010-10-01

    Full Text Available Web search engines provide a huge number of answers in response to a user query, many of which are not relevant, whereas some of the most relevant ones may not be found. In the literature several approaches have been proposed to help a user find the information relevant to his/her real needs on the Web. To achieve this goal, the individual Information Behaviour can be analyzed to keep track of the user's interests. Keeping information is a type of Information Behaviour, and in several works researchers have referred to it as the study of what people do during a search on the Web. Generally, the user's actions (e.g., how the user moves from one Web page to another, or her/his download of a document, etc.) are recorded in Web logs. This paper reports on research activities which aim to exploit the information extracted from Web logs (or query logs) in personalized user ontologies, with the objective of supporting the user in the process of discovering Web information relevant to her/his information needs. Personalized ontologies are used to improve the quality of Web search by applying two main techniques: query reformulation and re-ranking of query evaluation results. In this paper we analyze various methodologies presented in the literature that aim at using personalized ontologies, defined on the basis of the observation of Information Behaviour, to help the user find relevant information.

  18. Semiconductor-based experiments for neutrinoless double beta decay search

    Science.gov (United States)

    Barnabé Heider, Marik; Gerda Collaboration

    2012-08-01

    Three experiments are employing semiconductor detectors in the search for neutrinoless double beta (0νββ) decay: COBRA, Majorana and GERDA. COBRA is studying the prospects of using CdZnTe detectors in terms of achievable energy resolution and background suppression. These detectors contain several ββ emitters, and the most promising for 0νββ-decay search is 116Cd. Majorana and GERDA will use isotopically enriched high-purity Ge detectors to search for 0νββ-decay of 76Ge. Their aim is to achieve a background ⩽10⁻³ counts/(kg·y·keV) at the Q-value, an improvement compared to the present state of the art. Majorana will operate Ge detectors in electroformed-Cu vacuum cryostats. A first cryostat housing a natural-Ge detector array is currently under preparation. In contrast, GERDA is operating bare Ge detectors submerged in liquid argon. The construction of the GERDA experiment is completed and a commissioning run started in June 2010. A string of natural-Ge detectors is operated to test the complete experimental setup and to determine the background before submerging the detectors enriched in 76Ge. An overview and a comparison of these three experiments are presented together with the latest results and developments.

  19. Bielefeld Academic Search Engine: a (Potential Information-)BASE for the Working Mathematician

    OpenAIRE

    Höppner, Michael; Becker, Hans; Stange, Kari; Wegner, Bernd

    2004-01-01

    A modern search engine based approach to scientific information retrieval is described as the consequent next step after building up digital libraries. Bielefeld University Library has just established two demonstrators for possible retrieval services, one of them from mathematics.

  20. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
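A heavily simplified sketch of this family of decoders (Chase-style test patterns over the least reliable positions) is shown below for a Hamming(7,4) code; a brute-force nearest-codeword lookup stands in for the algebraic decoder and the trellis search, and the received vector is hypothetical:

```python
import itertools
import numpy as np

# Hamming(7,4) generator matrix; all 16 codewords enumerated up front
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codebook = np.array([(np.array(msg) @ G) % 2
                     for msg in itertools.product([0, 1], repeat=4)])

def chase2(r, t=2):
    """Chase-style decoding: flip every combination of the t least reliable bits,
    decode each test pattern, keep the candidate closest to r in Euclidean distance."""
    hard = (r < 0).astype(int)               # BPSK mapping: +1 -> 0, -1 -> 1
    weak = np.argsort(np.abs(r))[:t]         # t least reliable positions
    best, best_d = None, np.inf
    for flips in itertools.product([0, 1], repeat=t):
        test = hard.copy()
        test[weak] ^= np.array(flips)
        # nearest codeword in Hamming distance stands in for the algebraic decoder
        cand = codebook[np.argmin((codebook != test).sum(axis=1))]
        d = np.sum((1 - 2 * cand - r) ** 2)  # soft distance of the candidate to r
        if d < best_d:
            best, best_d = cand, d
    return best

# all-zero codeword sent as +1s; two positions are received unreliably (hypothetical)
r = np.array([0.9, 0.2, 1.1, -0.1, 0.8, 1.0, 0.7])
print(chase2(r))  # → [0 0 0 0 0 0 0]
```

The scheme in the paper adds what this sketch lacks: an optimality test that lets decoding stop early, and a purged-trellis Viterbi search when the test fails.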

  1. A Feature-Weighted Instance-Based Learner for Deep Web Search Interface Identification

    OpenAIRE

    Hong Wang; Qingsong Xu; Youyang Chen; Jinsong Lan

    2013-01-01

    Determining whether a site has a search interface is a crucial priority for further research of deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experiment results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based lear...

  2. Ontological Approach for Effective Generation of Concept Based User Profiles to Personalize Search Results

    Directory of Open Access Journals (Sweden)

    R. S.D. Wahidabanu

    2012-01-01

    Full Text Available Problem statement: Ontological user profile generation is a semantic approach to deriving richer concept-based user profiles; it depends on the semantic relationships between concepts. This study focuses on ontology to derive concept-oriented user profiles based on user search queries and clicked documents. It proposes a topic ontology from which concept-based user profiles can be derived more independently, making it possible to run search engine processes more efficiently. Approach: The process considers individual users' interests, topical categories of those interests, and the relationships among concepts. The proposed approach is based on topic ontology for concept-based user profile generation from search engine logs. A spreading activation algorithm is used to optimize the relevance of search engine results. The topic ontology is constructed to identify user interests by assigning activation values and exploring the topical similarity of user preferences. Results: A spreading activation algorithm is proposed to update and maintain the interest scores. User interests may change over time, which is reflected in the user profiles. According to profile changes, the search engine is personalized by assigning interest scores and weights to the topics. Conclusion: Experiments illustrate the efficacy of the proposed approach; with the help of topic ontology, user preferences can be identified correctly. This improves the quality of search engine personalization by identifying the user's precise needs.
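The spreading-activation idea can be illustrated on a toy topic ontology; the topics, decay factor and pruning threshold below are all assumptions for illustration, not taken from the paper:

```python
# toy topic ontology as parent -> children edges (hypothetical topics)
edges = {
    "science": ["physics", "biology"],
    "physics": ["quantum", "optics"],
    "biology": [], "quantum": [], "optics": [],
}

def spread_activation(seeds, decay=0.5, threshold=0.05):
    """Propagate interest scores from seed topics down the (acyclic) ontology,
    attenuating by `decay` per hop and pruning activations below `threshold`."""
    scores = {}
    frontier = list(seeds.items())
    while frontier:
        topic, act = frontier.pop()
        if act < threshold:
            continue                      # pruned: too weak to matter
        scores[topic] = scores.get(topic, 0.0) + act
        for child in edges.get(topic, []):
            frontier.append((child, act * decay))
    return scores

profile = spread_activation({"science": 1.0})
print(profile["physics"], profile["quantum"])  # → 0.5 0.25
```

In a profile built this way, topics the user directly searched for get the seed scores, while related topics inherit attenuated scores, which is what lets re-ranking favour conceptually nearby results.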

  3. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-01

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundred bases to hundred thousand bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org.

  5. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine is based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services, acquiring dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to build a content-based database of HMSR information. A new semantic-based PageRank score, with related mathematical formulas, was also defined and implemented. As results, semantic web service descriptions are presented in OWL, WSDL and OWL-S formats. Operational scenarios, with related web-based interfaces for personal computers and mobile devices, are presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine shows the originality and robustness of our knowledge-based personalized search engine. It allows different users, such as orthopedic patients and experts, healthcare system managers, or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes.
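The semantic weighting that distinguishes the paper's PageRank variant is not reproduced here; the plain PageRank power iteration it builds on, over a hypothetical four-page link graph, looks like:

```python
# hypothetical four-page link graph: page -> pages it links to
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}

def pagerank(links, damping=0.85, iters=100):
    """Plain power-iteration PageRank (no dangling nodes in this toy graph)."""
    n = len(links)
    pr = {p: 1.0 / n for p in links}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in links}
        for p, outs in links.items():
            share = pr[p] / len(outs)      # each page splits its score evenly
            for q in outs:
                new[q] += damping * share
        pr = new
    return pr

pr = pagerank(links)
print(max(pr, key=pr.get))  # → c
```

A semantic variant of this iteration would weight each outgoing `share` by how semantically related the two pages are, rather than splitting it evenly.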

  6. Reranking and Classifying Search Results Exhaustively Based on Edit-and-Propagate Operations

    Science.gov (United States)

    Yamamoto, Takehiro; Nakamura, Satoshi; Tanaka, Katsumi

    Search engines return a huge number of Web search results, and the user usually checks merely the top 5 or 10. However, the user sometimes must collect information exhaustively, such as gathering all the publications a certain person has written, or collecting useful information to support a purchase decision. In this case, the user must repeatedly check search results that are clearly irrelevant. We believe that people would use a search system which provides reranking or classifying functions driven by the user's interaction. We have already proposed a reranking system based on the user's edit-and-propagate operations. In this paper, we introduce the drag-and-drop operation into our system to support the user's exhaustive search.

  7. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information services on the Web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search is implemented to find coarse services on the Web, and the ontology reasoning is designed to refine the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Some key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

  8. Rank-Based Similarity Search: Reducing the Dimensional Dependence.

    Science.gov (United States)

    Houle, Michael E; Nett, Michael

    2015-01-01

    This paper introduces a data structure for k-NN search, the Rank Cover Tree (RCT), whose pruning tests rely solely on the comparison of similarity values; other properties of the underlying space, such as the triangle inequality, are not employed. Objects are selected according to their ranks with respect to the query object, allowing much tighter control on the overall execution costs. A formal theoretical analysis shows that with very high probability, the RCT returns a correct query result in time that depends very competitively on a measure of the intrinsic dimensionality of the data set. The experimental results for the RCT show that non-metric pruning strategies for similarity search can be practical even when the representational dimension of the data is extremely high. They also show that the RCT is capable of meeting or exceeding the level of performance of state-of-the-art methods that make use of metric pruning or other selection tests involving numerical constraints on distance values. PMID:26353214

  9. Development of an item bank for food parenting practices based on published instruments and reports from Canadian and US parents.

    Science.gov (United States)

    O'Connor, Teresia M; Pham, Truc; Watts, Allison W; Tu, Andrew W; Hughes, Sheryl O; Beauchamp, Mark R; Baranowski, Tom; Mâsse, Louise C

    2016-08-01

    Research to understand how parents influence their children's dietary intake and eating behaviors has expanded in the past decades and a growing number of instruments are available to assess food parenting practices. Unfortunately, there is no consensus on how constructs should be defined or operationalized, making comparison of results across studies difficult. The aim of this study was to develop a food parenting practice item bank with items from published scales and supplement with parenting practices that parents report using. Items from published scales were identified from two published systematic reviews along with an additional systematic review conducted for this study. Parents (n = 135) with children 5-12 years old from the US and Canada, stratified to represent the demographic distribution of each country, were recruited to participate in an online semi-qualitative survey on food parenting. Published items and parent responses were coded using the same framework to reduce the number of items into representative concepts using a binning and winnowing process. The literature contributed 1392 items and parents contributed 1985 items, which were reduced to 262 different food parenting concepts (26% exclusive from literature, 12% exclusive from parents, and 62% represented in both). Food parenting practices related to 'Structure of Food Environment' and 'Behavioral and Educational' were emphasized more by parent responses, while practices related to 'Consistency of Feeding Environment' and 'Emotional Regulation' were more represented among published items. The resulting food parenting item bank should next be calibrated with item response modeling for scientists to use in the future. PMID:27131416

  10. A dichotomous search-based heuristic for the three-dimensional sphere packing problem

    Directory of Open Access Journals (Sweden)

    Mhand Hifi

    2015-12-01

    Full Text Available In this paper, the three-dimensional sphere packing problem is solved by using a dichotomous search-based heuristic. An instance of the problem is defined by a set of n unequal spheres and an object of fixed width and height and unlimited length. Each sphere is characterized by its radius, and the aim of the problem is to minimize the length of the object containing all spheres without overlapping. The proposed method is based upon beam search, in which three complementary phases are combined: (i) a greedy selection phase that determines a series of eligible search subspaces, (ii) a truncated tree search, using a width-beam search, that explores some promising paths, and (iii) a dichotomous search that diversifies the search. The performance of the proposed method is evaluated on benchmark instances taken from the literature, and its results are compared to those reached by some recent methods from the literature. The proposed method is competitive and yields promising results.
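    The dichotomous component can be sketched on its own: if a packing routine can report whether all spheres fit into a container of a given length, the minimal length can be bracketed by bisection. The `fits` predicate below is a hypothetical placeholder for the paper's beam-search packing phase; names and bounds are illustrative.

```python
def dichotomous_length(fits, lo, hi, eps=1e-3):
    """Bisect for the smallest container length L in [lo, hi] with
    fits(L) True, assuming fits is monotone (longer containers only help).
    In the paper, the role of `fits` is played by the beam-search packing
    routine; here it is a placeholder predicate supplied by the caller."""
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if fits(mid):
            hi = mid      # all spheres fit: try a shorter container
        else:
            lo = mid      # they do not fit: the container must be longer
    return hi

# Toy example: pretend any length >= 7.25 can host all spheres.
best = dichotomous_length(lambda L: L >= 7.25, 0.0, 100.0)
print(round(best, 2))  # -> 7.25
```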

  11. Keyword-based Ciphertext Search Algorithm under Cloud Storage

    Directory of Open Access Journals (Sweden)

    Ren Xunyi

    2016-01-01

    Full Text Available With the development of network storage services, cloud storage offers high scalability, low cost, access without location limits, and easy management. These advantages lead more and more small and medium-sized enterprises to outsource large quantities of data to third parties, freeing them from the costs of construction and maintenance, so the market prospects are broad. However, many cloud storage service providers cannot protect data security, which results in leakage of user data and forces many users back to traditional storage methods; this has become one of the important factors hindering the development of cloud storage. In this article, a keyword index is established by extracting keywords from the ciphertext data; the encrypted data and the encrypted index are then uploaded to the cloud server together. Users retrieve the relevant ciphertext by searching the encrypted index, which mitigates the data leakage problem.
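    A heavily simplified sketch of the keyword-index idea (not the paper's actual scheme, which the abstract does not specify): the client derives a deterministic trapdoor for each keyword with an HMAC, so the server can match search queries against the index without ever seeing plaintext terms. The key and document contents are illustrative placeholders.

```python
import hmac
import hashlib

# Simplified sketch of a keyword index over encrypted documents. Real
# searchable-encryption schemes are considerably more involved; here each
# keyword is replaced by an HMAC trapdoor before it reaches the server.

KEY = b"client-secret-key"  # illustrative; kept by the client only

def trapdoor(keyword: str) -> str:
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_index(docs: dict) -> dict:
    """docs maps doc_id -> list of keywords extracted before encryption."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            index.setdefault(trapdoor(w), set()).add(doc_id)
    return index

def search(index: dict, keyword: str) -> set:
    """The server matches the query trapdoor against the index blindly."""
    return index.get(trapdoor(keyword), set())

index = build_index({"doc1": ["cloud", "storage"], "doc2": ["cloud", "security"]})
print(sorted(search(index, "cloud")))     # -> ['doc1', 'doc2']
print(sorted(search(index, "security")))  # -> ['doc2']
```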

  12. Multi-leg Searching by Adopting Graph-based Knowledge Representation

    Directory of Open Access Journals (Sweden)

    Siti Zarinah Mohd Yusof

    2011-01-01

    Full Text Available This research explores the development of a multi-leg searching concept by adopting graph-based knowledge representation. The research aims to propose a searching concept capable of providing advanced information by retrieving not only directly but also continuously related information from a point. It applies the maximal join concept to merge multiple information networks to support the multi-leg searching process. Node and edge similarity concepts are also applied to determine transit nodes and alternative edges of the same route. A working prototype in the flight networks domain is developed to represent an overview of the research.

  13. Colorize magnetic nanoparticles using a search coil based testing method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kai; Wang, Yi; Feng, Yinglong; Yu, Lina; Wang, Jian-Ping, E-mail: jpwang@umn.edu

    2015-04-15

    Different magnetic nanoparticles (MNPs) possess unique spectral responses to an AC magnetic field, and this specific magnetic property of MNPs can be used as a “color” in detection. In this paper, a detection scheme for magnetic nanoparticle size distribution is demonstrated using an integrated detection system of MNPs and search coils. A low frequency (50 Hz) sinusoidal magnetic field is applied to drive the MNPs into the saturated region. Then a high frequency sinusoidal field sweeping from 5 kHz to 35 kHz is applied in order to generate mixing frequency signals, which are collected by a pair of balanced search coils. These harmonics are highly specific to the nonlinearity of the magnetization curve of the MNPs. Previous work focused on using the amplitude and phase of the 3rd harmonic or the amplitude ratio of the 5th harmonic over the 3rd. Here we demonstrate the use of the amplitude and phase information of both the 3rd and 5th harmonics as magnetic “colors” of MNPs. It is found that this method effectively reduces the magnetic colorization error. Highlights:
    • The amplitude and phase information of both the 3rd and 5th harmonics are used as magnetic “colors” of magnetic nanoparticles (MNPs).
    • An easier and simpler way to calibrate amounts of MNPs was developed.
    • At the same concentration, an MNP solution with a larger average particle size induces a higher amplitude, and its amplitude changes greatly with the sweeping high frequency.
    • At lower sweeping frequencies, the 5 samples have almost the same phase lag; as the sweeping frequency goes higher, the phase lag of large particles drops faster.

  14. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.

    Science.gov (United States)

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of a time series forecasting method, Autoregressive Integrated Moving Average (ARIMA), to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparison with conventional sales prediction techniques. The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words. PMID:27313605

  15. A self-adaptive step Cuckoo search algorithm based on dimension by dimension improvement

    Directory of Open Access Journals (Sweden)

    Lu REN

    2015-10-01

    Full Text Available The choice of step length plays an important role in the convergence speed and precision of the Cuckoo search algorithm. In this paper, a self-adaptive step Cuckoo search algorithm based on dimension-by-dimension improvement is provided. First, since the step in the original self-adaptive step Cuckoo search algorithm is not updated when the current position of the nest is the optimal position, a simple modification of the step update is made. Second, an evaluation strategy based on dimension-by-dimension updates is introduced into the modified self-adaptive step Cuckoo search algorithm. The experimental results show that the algorithm can balance the contradiction between global convergence ability and optimization precision. Moreover, the proposed algorithm has better convergence speed.
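    The dimension-by-dimension evaluation strategy can be sketched in isolation: each coordinate is perturbed in turn, and the change is kept only if the fitness of the full solution improves. The plain random step below is a stand-in for the paper's self-adaptive step; the objective and all parameters are illustrative.

```python
import random

def sphere(x):
    """Toy objective to minimize; its optimum is the origin."""
    return sum(v * v for v in x)

def dimensionwise_update(x, step, objective, rng):
    """Perturb one dimension at a time and keep each change only if the
    full-solution fitness improves -- the dimension-by-dimension evaluation
    strategy, grafted onto a plain random step for brevity."""
    best = list(x)
    for d in range(len(best)):
        trial = list(best)
        trial[d] += step * rng.uniform(-1, 1)
        if objective(trial) < objective(best):
            best = trial
    return best

rng = random.Random(42)
x = [3.0, -2.0, 1.5]
for _ in range(200):
    x = dimensionwise_update(x, step=0.5, objective=sphere, rng=rng)
print(sphere(x))  # fitness never increases, since only improvements are kept
```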

  16. The Effect of Problem Reduction in the Integer Programming-based Local Search

    Directory of Open Access Journals (Sweden)

    Junha Hwang

    2016-06-01

    Full Text Available Integer Programming-based Local Search (IPbLS) is a local search method based on first-choice hill-climbing that uses integer programming to generate neighbor solutions. IPbLS has been applied to solve various NP-hard combinatorial optimization problems such as the knapsack problem, the set covering problem, and the set partitioning problem. In this paper, we investigate the effect of problem reduction in IPbLS experimentally using the n-queens maximization problem. The characteristics of IPbLS are examined by comparing IPbLS using strong problem reduction with IPbLS using weak problem reduction, and IPbLS is also compared with other local search strategies such as simulated annealing. Experimental results show the importance of problem reduction in IPbLS.
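    The first-choice hill-climbing shell underlying IPbLS can be sketched on the n-queens maximization problem (maximize the number of non-attacking queen pairs). The integer-programming neighbor generation is replaced here by a random single-queen move for brevity, so this is only the local-search skeleton, not IPbLS itself; all parameters are illustrative.

```python
import random

def non_attacking_pairs(cols):
    """Count queen pairs that share no row or diagonal; one queen per
    column is implicit in the list-of-rows representation."""
    n, ok = len(cols), 0
    for i in range(n):
        for j in range(i + 1, n):
            if cols[i] != cols[j] and abs(cols[i] - cols[j]) != j - i:
                ok += 1
    return ok

def first_choice_hill_climb(n, iters, rng):
    """Accept the first randomly generated neighbor that improves the
    score; otherwise undo the move and try another."""
    state = [rng.randrange(n) for _ in range(n)]
    score = non_attacking_pairs(state)
    for _ in range(iters):
        col, row = rng.randrange(n), rng.randrange(n)
        old = state[col]
        state[col] = row
        new_score = non_attacking_pairs(state)
        if new_score > score:
            score = new_score        # first improving neighbor is accepted
        else:
            state[col] = old         # reject the move
    return state, score

state, score = first_choice_hill_climb(8, 2000, random.Random(0))
print(score)  # at most C(8,2) = 28 non-attacking pairs on an 8x8 board
```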

  17. An Integer Programming-based Local Search for Large-scale Maximal Covering Problems

    Directory of Open Access Journals (Sweden)

    Junha Hwang

    2011-02-01

    Full Text Available The maximal covering problem (MCP) is a linear integer optimization problem which can be effectively solved by integer programming techniques. However, as the problem size grows, integer programming requires excessive time to reach an optimal solution. This paper suggests a method for applying integer programming-based local search (IPbLS) to solve large-scale maximal covering problems. IPbLS, a hybrid technique combining integer programming and local search, is a local search that uses integer programming for neighbor generation. IPbLS itself is very effective for MCP. In addition, we improve the performance of IPbLS for MCP through problem reduction based on the current solution. Experimental results show that the proposed method considerably outperforms other local search techniques and standalone integer programming.

  18. Minimum Distortion Direction Prediction-based Fast Half-pixel Motion Vector Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Hai-yan; ZHANG Qi-shan

    2005-01-01

    A novel fast half-pixel motion vector search algorithm based on minimum distortion direction prediction is proposed, which can considerably reduce the computation load of the half-pixel search. Based on the single-valley characteristic of the half-pixel error matching function inside the search grid, the minimum distortion direction is predicted with the help of comparisons of the sum of absolute difference (SAD) values of the four integer-pixel points around the integer-pixel motion vector. The experimental results reveal that, for all kinds of video sequences, the proposed algorithm can obtain almost the same video quality as the half-pixel full search algorithm while decreasing computation cost by more than 66%.
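    The prediction step can be sketched as follows, assuming (as the abstract states) a single-valley error surface: the half-pixel offset is taken towards whichever integer-pixel neighbour has the smaller SAD in each direction. The candidate offsets and the tie-breaking rule below are illustrative choices, not necessarily the paper's.

```python
def predict_direction(sad_left, sad_right, sad_up, sad_down):
    """Predict the horizontal/vertical components of the half-pixel offset
    from the SADs of the four integer-pixel neighbours of the best
    integer-pixel motion vector. With a single-valley error surface the
    minimum is assumed to lie towards the smaller SAD on each axis."""
    dx = -0.5 if sad_left < sad_right else 0.5
    dy = -0.5 if sad_up < sad_down else 0.5
    return dx, dy

# Only the predicted half-pixel candidates need refinement, instead of
# evaluating all eight half-pixel positions around the integer-pixel MV:
print(predict_direction(120, 150, 140, 135))  # -> (-0.5, 0.5)
```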

  19. Critique of EPS/RIN/RCUK/DTI "Evidence-Based Analysis of Data Concerning Scholarly Journal Publishing"

    OpenAIRE

    Harnad, Stevan

    2006-01-01

    This Report on UK Scholarly Journals was commissioned by RIN, RCUK and DTI, and conducted by EPS, but its questions, answers and interpretations are clearly far more concerned with the interests of the publishing lobby than with those of the research community. The Report's two relevant overall findings are correct and stated very fairly in their summary form: [1] "Overall, [self-archiving] of articles in open access repositories seems to be associated with both a larger number of citations, ...

  20. Research of the test generation algorithm based on search state dominance for combinational circuit

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    On the basis of the EST (Equivalent STate hashing) algorithm, this paper studies a test generation algorithm based on search state dominance for combinational circuits. Using the dominance relation of the E-frontier (evaluation frontier), we show through examples that this algorithm can terminate unnecessary search steps of test pattern generation earlier than the EST algorithm, so it can reduce test generation time. The test patterns calculated can detect the given faults, as verified through simulation.

  1. A tensor-based selection hyper-heuristic for cross-domain heuristic search

    OpenAIRE

    Asta, Shahriar; Özcan, Ender

    2015-01-01

    Hyper-heuristics have emerged as automated high level search methodologies that manage a set of low level heuristics for solving computationally hard problems. A generic selection hyper-heuristic combines heuristic selection and move acceptance methods under an iterative single point-based search framework. At each step, the solution in hand is modified after applying a selected heuristic and a decision is made whether the new solution is accepted or not. In this study, we represent the trail...

  2. Development of an Ontology Based Forensic Search Mechanism: Proof of Concept

    Directory of Open Access Journals (Sweden)

    Jill Slay

    2006-03-01

    Full Text Available This paper examines the problems faced by Law Enforcement in searching large quantities of electronic evidence. It examines the use of ontologies as the basis for new forensic software filters and provides a proof of concept tool based on an ontological design. It demonstrates that efficient searching is produced through the use of such a design and points to further work that might be carried out to extend this concept.

  3. DISA at ImageCLEF 2014 Revised: Search-based Image Annotation with DeCAF Features

    OpenAIRE

    Budikova, Petra; Botorek, Jan; Batko, Michal; Zezula, Pavel

    2014-01-01

    This paper constitutes an extension to the report on DISA-MU team participation in the ImageCLEF 2014 Scalable Concept Image Annotation Task as published in [3]. Specifically, we introduce a new similarity search component that was implemented into the system, report on the results achieved by utilizing this component, and analyze the influence of different similarity search parameters on the annotation quality.

  4. INTELLIGENT SEARCH ENGINE-BASED UNIVERSAL DESCRIPTION, DISCOVERY AND INTEGRATION FOR WEB SERVICE DISCOVERY

    Directory of Open Access Journals (Sweden)

    Tamilarasi Karuppiah

    2014-01-01

    Full Text Available The Web Services standard has been broadly acknowledged by industry and academic research along with the progress of web technology and e-business. An increasing number of web applications have been bundled as web services that can be published, located and invoked across the web. The importance of the issues regarding their publication and discovery grows as web services multiply and become more advanced and mutually dependent. With the intention of discovering web services effectively within a minimum time period, this study proposes a UDDI with an intelligent search engine. To publish and discover web services, the web services are first published in the UDDI registry, and the published web services are then indexed. To improve the efficiency of web service discovery, the indexed web services are saved in an index database. A search query is compared with the index database to discover web services, and the discovered web services are returned to the service customer. The way web services are accessed is stored in a log file, which is then utilized to provide personalized web services to the user. Web service discovery is enhanced significantly by the efficient search capability of the proposed system, which is capable of providing the most appropriate web services.

  5. Proposing LT based Search in PDM Systems for Better Information Retrieval

    CERN Document Server

    Ahmed, Zeeshan

    2011-01-01

    PDM systems contain and manage large amounts of data, but the search mechanism of most systems is not intelligent enough to process a user's natural language queries to extract the desired information. Currently available search mechanisms in almost all PDM systems are not very efficient; they are based on the old way of searching for information by entering relevant values into the respective fields of search forms to find specific information in the attached repositories. Targeting this issue, thorough research was conducted in the fields of PDM systems and language technology. Concerning PDM systems, the research provides detailed information about PDM and PDM systems. Concerning language technology, it helps in implementing a search mechanism for PDM systems that finds the information a user needs by analyzing the user's natural language requests. The accomplished goal of this research was to support the field of PDM with a new proposition of a conceptual model for the imp...

  6. On the Definition and Connotation of Digital Publishing Base

    Institute of Scientific and Technical Information of China (English)

    杨伟晔

    2014-01-01

    As an important pattern proposed by the government to develop the digital publishing industry, the digital publishing base has aroused a great upsurge in construction around the country. However, many issues have emerged in the construction of digital publishing bases due to the lack of an authoritative definition by the government and of a general understanding among different cities. This essay studies the elements of the digital publishing industry on the basis of its definition, and further discusses the connotation of the digital publishing base. It argues that the digital publishing base is a pattern of manifestation and an advanced stage of industrial cluster development of digital publishing. A relatively complete definition of the digital publishing base is given in this essay.

  7. An Analysis of Literature Searching Anxiety in Evidence-Based Medicine Education

    Directory of Open Access Journals (Sweden)

    Hui-Chin Chang

    2014-01-01

    Full Text Available Introduction. Evidence-Based Medicine (EBM) is becoming a cornerstone of lifelong learning for healthcare personnel worldwide. This study aims to evaluate literature searching anxiety among graduate students practicing EBM. Method. The study participants were 48 graduate students who enrolled in the EBM course at a medical university in central Taiwan. Student's t-test, Pearson correlation, multivariate regression, and interviewing were used to evaluate the students' literature searching anxiety in the EBM course. The questionnaire was the Literature Searching Anxiety Rating Scale (LSARS). Results. The sources of anxiety disclosed were uncertainty in database selection, literature evaluation and selection, requests for technical assistance, use of computer programs, English, and EBM education programs. Class performance is negatively related to the LSARS score; however, the correlation is statistically insignificant after adjustment for gender, degree program, age category, and experience of publication. Conclusion. This study helps in understanding the causes and extent of anxiety in order to plan a better teaching program that improves users' searching skills and capability of utilizing information, while providing user-friendly facilities for evidence searching. In short, we need to upgrade learners' searching skills and reduce their anxiety. We also need to stress the auxiliary teaching program for those with prevalent and profound anxiety during literature searching.

  8. Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

    CERN Document Server

    Guez, Arthur; Dayan, Peter

    2012-01-01

    Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty. In this setting, a Bayes-optimal policy captures the ideal trade-off between exploration and exploitation. Unfortunately, finding Bayes-optimal policies is notoriously taxing due to the enormous search space in the augmented belief-state MDP. In this paper we exploit recent advances in sample-based planning, based on Monte-Carlo tree search, to introduce a tractable method for approximate Bayes-optimal planning. Unlike prior work in this area, we avoid expensive applications of Bayes rule within the search tree, by lazily sampling models from the current beliefs. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems.

  9. Publishing with XML structure, enter, publish

    CERN Document Server

    Prost, Bernard

    2015-01-01

    XML is now at the heart of book publishing techniques: it provides the industry with a robust, flexible format which is relatively easy to manipulate. Above all, it preserves the future: the XML text becomes a genuine tactical asset enabling publishers to respond quickly to market demands. When new publishing media appear, it will be possible to very quickly make your editorial content available at a lower cost. On the downside, XML can become a bottomless pit for publishers attracted by its possibilities. There is a strong temptation to switch to audiovisual production and to add video and a

  10. A New RFID Anti-collision Algorithm Based on the Q-Ary Search Scheme

    Institute of Scientific and Technical Information of China (English)

    SU Jian; WEN Guangjun; HONG Danfeng

    2015-01-01

    Deterministic tree-based algorithms are mostly used to guarantee that all the tags in the reader field are successfully identified, and to achieve the best performance. Through an analysis of the deficiencies of existing tree-based algorithms, a Q-ary search (QAS) algorithm is proposed. The QAS algorithm introduces a bit encoding mechanism for tag IDs by which multi-bit collision arbitration is implemented. Owing to this encoding mechanism, the number of collision cycles is reduced. Theoretical analysis and simulation results show that the proposed QAS algorithm overcomes the shortcomings of existing tree-based algorithms and exhibits good performance during identification.
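    For intuition, the binary (Q = 2) query-tree baseline that QAS generalizes can be sketched as follows: the reader broadcasts ID prefixes, resolves any slot with at most one respondent, and splits colliding prefixes into two. QAS extends this split to Q branches per query via its bit-encoding mechanism, which the sketch omits; tag IDs are assumed distinct.

```python
def query_tree(tags, prefix=""):
    """Identify all tag IDs (distinct bit strings) with a binary query
    tree. A query with zero or one matching tag is resolved immediately;
    a collision (more than one match) splits the prefix into two branches.
    Returns the identified IDs and the number of queries spent."""
    matches = [t for t in tags if t.startswith(prefix)]
    if not matches:
        return [], 1                     # empty slot, one query spent
    if len(matches) == 1:
        return matches, 1                # successful identification
    found, queries = [], 1               # collision: recurse on both branches
    for bit in "01":
        sub, q = query_tree(tags, prefix + bit)
        found += sub
        queries += q
    return found, queries

ids, queries = query_tree(["0010", "0111", "1100"])
print(sorted(ids), queries)  # -> ['0010', '0111', '1100'] 5
```

A Q-ary split resolves the same population in fewer, wider queries, which is the source of the reduced collision cycles claimed in the abstract.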

  11. Magnetic Flux Leakage Signal Inversion of Corrosive Flaws Based on Modified Genetic Local Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    HAN Wen-hua; FANG Ping; XIA Fei; XUE Fang

    2009-01-01

    In this paper, a modified genetic local search algorithm (MGLSA) is proposed. The proposed algorithm results from employing the simulated annealing technique to regulate the variance of the Gaussian mutation of the genetic local search algorithm (GLSA). An MGLSA-based inverse algorithm is then proposed for magnetic flux leakage (MFL) signal inversion of corrosive flaws, in which the MGLSA is used to solve the optimization problem in the MFL inverse problem. Experimental results demonstrate that the MGLSA-based inverse algorithm is more robust than the GLSA-based inverse algorithm in the presence of noise in the measured MFL signals.

  12. Comics, Copyright and Academic Publishing

    Directory of Open Access Journals (Sweden)

    Ronan Deazley

    2014-05-01

    Full Text Available This article considers the extent to which UK-based academics can rely upon the copyright regime to reproduce extracts and excerpts from published comics and graphic novels without having to ask the copyright owner of those works for permission. In doing so, it invites readers to engage with a broader debate about the nature, demands and process of academic publishing.

  13. A Semidefinite Programming Based Search Strategy for Feature Selection with Mutual Information Measure.

    Science.gov (United States)

    Naghibi, Tofigh; Hoffmann, Sarah; Pfister, Beat

    2015-08-01

    Feature subset selection, as a special case of the general subset selection problem, has been the topic of a considerable number of studies due to the growing importance of data-mining applications. In the feature subset selection problem there are two main issues that need to be addressed: (i) finding an appropriate measure function that can be fairly fast and robustly computed for high-dimensional data, and (ii) a search strategy to optimize the measure over the subset space in a reasonable amount of time. In this article, mutual information between features and class labels is considered to be the measure function. Two series expansions for mutual information are proposed, and it is shown that most heuristic criteria suggested in the literature are truncated approximations of these expansions. It is well known that searching the whole subset space is an NP-hard problem. Here, instead of the conventional sequential search algorithms, we suggest a parallel search strategy based on semidefinite programming (SDP) that can search through the subset space in polynomial time. By exploiting the similarities between the proposed algorithm and an instance of the maximum-cut problem in graph theory, the approximation ratio of this algorithm is derived and compared with the approximation ratio of the backward elimination method. The experiments show that it can be misleading to judge the quality of a measure solely based on the classification accuracy, without taking the effect of the non-optimum search strategy into account. PMID:26352993
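    The measure function itself is standard and cheap to estimate for discrete data. A minimal sketch of the plug-in estimate of mutual information (in bits) between one feature column and the class labels; the SDP search strategy of the paper is far beyond this snippet.

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between two discrete
    variables given as parallel lists of observations (plug-in estimate)."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A feature identical to a balanced binary label carries H(Y) = 1 bit:
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # -> 1.0
# An independent feature carries none:
print(round(mutual_information([0, 1, 0, 1], [0, 0, 1, 1]), 6))  # -> 0.0
```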

  14. Improved methods for scheduling flexible manufacturing systems based on Petri nets and heuristic search

    Institute of Scientific and Technical Information of China (English)

    Bo HUANG; Yamin SUN

    2005-01-01

    This paper proposes and evaluates two improved Petri net (PN)-based hybrid search strategies and their applications to flexible manufacturing system (FMS) scheduling. The algorithms proposed in some previous papers, which combine PN simulation capabilities with A* heuristic search within the PN reachability graph, may not find an optimum solution even with an admissible heuristic function. To remedy the defects, an improved heuristic search strategy is proposed, which adopts a different method for selecting the promising markings and preserves the admissibility of the algorithm. To speed up the search process, another algorithm is also proposed which invokes faster termination conditions and still guarantees that the solution found is optimum. The scheduling results are compared through a simple FMS between our algorithms and the previous methods. They are also applied and evaluated in a set of randomly-generated FMSs with such characteristics as multiple resources and alternative routes.

  15. Algorithm Based on Taboo Search and Shifting Bottleneck for Job Shop Scheduling

    Institute of Scientific and Technical Information of China (English)

    Wen-Qi Huang; Zhi Huang

    2004-01-01

    In this paper, a computational effective heuristic method for solving the minimum makespan problem of job shop scheduling is presented. It is based on taboo search procedure and on the shifting bottleneck procedure used to jump out of the trap of the taboo search procedure. A key point of the algorithm is that in the taboo search procedure two taboo lists are used to forbid two kinds of reversals of arcs, which is a new and effective way in taboo search methods for job shop scheduling. Computational experiments on a set of benchmark problem instances show that, in several cases, the approach, in reasonable time, yields better solutions than the other heuristic procedures discussed in the literature.

  16. POLYNOMIAL MODEL BASED FAST FRACTIONAL PIXEL SEARCH ALGORITHM FOR H.264/AVC

    Institute of Scientific and Technical Information of China (English)

    Xi Yinglai; Hao Chongyang; Lai Changcai

    2006-01-01

    This paper proposes a novel fast fractional-pixel search algorithm based on a polynomial model. From an analysis of the distribution characteristics of the motion compensation error surface inside the fractional-pixel search window, the matching error is fitted with a parabola along the horizontal and vertical directions respectively. The proposed search strategy needs to check only 6 points rather than the 16 or 24 points used in the Hierarchical Fractional Pel Search algorithm (HFPS) for 1/4-pel and 1/8-pel Motion Estimation (ME). The experimental results show that the proposed algorithm is very good at maintaining rate-distortion performance while reducing the computation load to a large extent compared with the HFPS algorithm.
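    The per-direction parabola fit reduces to a three-point vertex formula: given matching errors at offsets -1, 0 and +1 along one axis, the interpolating parabola attains its minimum at (e₋ − e₊) / (2(e₋ − 2e₀ + e₊)). A sketch follows; the offset convention and fallback are illustrative, and the paper's candidate points may differ.

```python
def parabola_minimum(e_minus, e0, e_plus):
    """Given matching errors sampled at offsets -1, 0, +1 along one
    direction, return the offset of the minimum of the interpolating
    parabola. Falls back to 0 for a degenerate (flat) fit."""
    denom = e_minus - 2 * e0 + e_plus
    if denom == 0:
        return 0.0
    return 0.5 * (e_minus - e_plus) / denom

# Symmetric errors put the minimum at the centre:
print(parabola_minimum(10, 4, 10))  # -> 0.0
# A smaller error on the right pulls the minimum right:
print(parabola_minimum(10, 4, 6))   # -> 0.25
```

Evaluating this closed form along each axis is what lets the strategy check only a handful of points instead of an exhaustive fractional-pixel grid.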

  17. Query sensitive comparative summarization of search results using concept based segmentation

    CERN Document Server

    Chitra, P; Sarukesi, K

    2012-01-01

    Query sensitive summarization aims at providing the users with the summary of the contents of single or multiple web pages based on the search query. This paper proposes a novel idea of generating a comparative summary from a set of URLs from the search result. User selects a set of web page links from the search result produced by search engine. Comparative summary of these selected web sites is generated. This method makes use of HTML DOM tree structure of these web pages. HTML documents are segmented into set of concept blocks. Sentence score of each concept block is computed with respect to the query and feature keywords. The important sentences from the concept blocks of different web pages are extracted to compose the comparative summary on the fly. This system reduces the time and effort required for the user to browse various web sites to compare the information. The comparative summary of the contents would help the users in quick decision making.

  18. Two-grade search mechanism based motion planning of a three-limbed robot

    Institute of Scientific and Technical Information of China (English)

    Pang Ming; Zang Xizhe; Yan Jihong; Zhao Jie

    2008-01-01

    A novel three-limbed robot was described and its motion planning method was discussed. After the introduction of the robot mechanical structure and the human-robot interface, a two-grade search mechanism based motion planning method was proposed. The first-grade search method using genetic algorithm tries to find an optimized target position and orientation of the three-limbed robot. The second-grade search method using virtual compliance tries to avoid the collision between the three-limbed robot and obstacles in a dynamic environment. Experiment shows the feasibility of the two-grade search mechanism and proves that the proposed motion planning method can be used to solve the motion planning problem of the redundant three-limbed robot without deficiencies of traditional genetic algorithm.

  19. Open Access Publishing in Astronomy

    Science.gov (United States)

    Grothkopf, U.; Meakins, S.

    2012-08-01

    Open Access (OA) in scholarly literature means the "immediate, free availability on the public internet, permitting any users to read, download, copy, distribute, print, search or link to the full text of these articles". The Open Access movement has been made possible by the widespread availability of internet access and has received increasing interest since the 1990s, mostly due to fast-rising journal subscription prices. This presentation will review the current situation of Open Access in astronomy. It will answer the question of why it makes sense to publish in an OA journal and will provide criteria for judging the quality of OA journals and publishers, along with suggestions on how to identify so-called predatory publishers.

  20. Searching the "Nuclear Science Abstracts" Data Base by Use of the Berkeley Mass Storage System

    Science.gov (United States)

    Herr, J. Joanne; Smith, Gloria L.

    1972-01-01

    Advantages of the Berkeley Mass Storage System (MSS) for information retrieval, beyond its size, are a high serial-read rate, archival data storage, and random-access capability. By use of this device, the search cost in an SDI system based on the "Nuclear Science Abstracts" data base was reduced by 20 percent. (6 references) (Author/NH)

  1. Sensitive Ground-based Search for Sulfuretted Species on Mars

    Science.gov (United States)

    Khayat, Alain; Villanueva, G. L.; Mumma, M. J.; Riesen, T. E.; Tokunaga, A. T.

    2012-10-01

    We searched for active release of gases on Mars during mid Northern Spring and early Northern Summer seasons, between Ls= 34° and Ls= 110°. The targeted volcanic areas, Tharsis and Syrtis Major, were observed during the interval 23 Nov. 2011 to 13 May 2012, using the high resolution infrared spectrometer (CSHELL) on NASA's Infrared Telescope Facility (NASA/IRTF) and the ultra-high resolution heterodyne receiver (Barney) at the Caltech Submillimeter Observatory (CSO). The two main reservoirs of atmospheric sulfur on Mars are expected to be SO2 and H2S. Because these two species have relatively short photochemical lifetimes, 160 and 9 days respectively (Wong et al. 2004), they stand as powerful indicators of recent activity. Carbonyl sulfide (OCS) is the expected end-product of the reactions between sulfuretted species and other molecules in the Martian atmosphere. Our multi-band survey targeted SO2, SO and H2S at their rotational transitions at 346.523 GHz, 304.078 GHz and 300.505 GHz respectively, and OCS in its combination band (ν1+ν3) at 3.42 µm and its fundamental band (ν3) centered at 4.85 µm. The radiative transfer model used to derive abundance ratios for these species was validated by performing line-inversion retrievals on the carbon monoxide (CO) strong rotational (3-2) line at sub-mm wavelengths (rest frequency 345.796 GHz). Preliminary results and abundance ratios for SO2, H2S, SO, OCS and CO will be presented. We gratefully acknowledge support from the NASA Planetary Astronomy Program (AK, ATT, MJM), NASA Astrobiology Institute (MJM), NASA Planetary Atmospheres Program (GLV), and NSF grant number AST-0838261 to support graduate students at the CSO (AK). References: Wong, A.S., Atreya, S. K., Formisano, V., Encrenaz, T., Ignatiev, N.I., "Atmospheric photochemistry above possible martian hot spots", Advances in Space Research, 33 (2004) 2236-2239.

  2. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    Directory of Open Access Journals (Sweden)

    Maryam Akbari

    2012-10-01

    Full Text Available The present study analyzed the search engine and meta search engine adoption process among University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potentials and effective tools in search engine and meta search engine adoption among University of Isfahan users. The research method was a descriptive survey. The population of the study was all postgraduate students of the University of Isfahan; 351 students were selected as the sample using stratified random sampling. A questionnaire was used for collecting data. The collected data were analyzed using SPSS 16 with both descriptive and analytic statistics: frequency, percentage, and mean for the descriptive part, and the t-test and the non-parametric Kruskal-Wallis test (H-test) for the analytic part. The findings of the t-test and Kruskal-Wallis test indicated that mean adoption of search engines and meta search engines did not differ significantly by gender, level of education, or faculty. The adoption process for specialized search engines differed by gender but not by level of education or faculty. Other results indicated that among general search engines, Google had the highest adoption rate; among specialized search engines, Google Scholar, and among meta search engines, Mamma had the highest adoption rate. Findings also showed that friends played an important role in how students adopted general search engines, while professors played an important role in how students adopted specialized search engines and meta search engines. Moreover, the place where students became most acquainted with search engines and meta search engines was the university. The findings showed that the adoption-rate curve was neither normal nor S-shaped.

  3. Web-based Image Search Engines%因特网上的图像搜索引擎

    Institute of Scientific and Technical Information of China (English)

    陈立娜

    2001-01-01

    The operating principle of Web-based image search engines is briefly described. A detailed evaluation of some of image search engines is made. Finally, the paper points out the deficiencies of the present image search engines and their development trend.

  4. Secondary eclipses in the CoRoT light curves: A homogeneous search based on Bayesian model selection

    CERN Document Server

    Parviainen, Hannu; Belmonte, Juan Antonio

    2012-01-01

    We aim to identify and characterize secondary eclipses in the original light curves of all published CoRoT planets using uniform detection and evaluation criteria. Our analysis is based on a Bayesian model selection between two competing models: one with and one without an eclipse signal. The search is carried out by mapping the Bayes factor in favor of the eclipse model as a function of the eclipse center time, after which the characterization of plausible eclipse candidates is done by estimating the posterior distributions of the eclipse model parameters using Markov Chain Monte Carlo. We discover statistically significant eclipse events for two planets, CoRoT-6b and CoRoT-11b, and for one brown dwarf, CoRoT-15b. We also find marginally significant eclipse events passing our plausibility criteria for CoRoT-3b, 13b, 18b, and 21b. The previously published CoRoT-1b and CoRoT-2b eclipses are also confirmed.
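The model-selection step can be illustrated with the Bayesian information criterion, a common large-sample approximation to the Bayes factor; note this BIC shortcut is an assumption for illustration, not the paper's actual computation, and the chi-square values, parameter counts, and data size below are invented.

```python
import math

def bic(chi2, k, n):
    """Bayesian information criterion for a fit with chi2, k parameters, n points."""
    return chi2 + k * math.log(n)

def log_bayes_factor_approx(chi2_eclipse, k_eclipse, chi2_flat, k_flat, n):
    """BIC approximation to ln(Bayes factor); positive values favour the eclipse model."""
    return 0.5 * (bic(chi2_flat, k_flat, n) - bic(chi2_eclipse, k_eclipse, n))

# Toy numbers: the eclipse model fits better (lower chi2) at the cost of
# 3 extra parameters, over n = 1000 photometric points.
lbf = log_bayes_factor_approx(980.0, 5, 1040.0, 2, 1000)
```

Mapping this quantity over a grid of trial eclipse center times yields the kind of detection scan the abstract describes.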

  5. Greedy-search based service location in P2P networks

    Institute of Scientific and Technical Information of China (English)

    Zhu Cheng; Liu Zhong; Zhang Weiming; Yang Dongsheng

    2005-01-01

    A model is built to analyze the performance of service location based on greedy search in P2P networks. Hops and the relative QoS index of the node found in a service location process are used to evaluate performance, along with the probability of locating the top 5% of nodes with the highest QoS level. Both model and simulation results show that the performance of greedy-search based service location improves significantly as the average degree of the network increases. It is found that, if changes in both the overlay topology and the QoS levels of nodes can be ignored during a location process, greedy-search based service location has a high probability of finding nodes with relatively high QoS in a small number of hops in a large overlay network. Model extension under an arbitrary network degree distribution is also studied.
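The greedy location process can be sketched as a hill climb over the overlay: at each hop, move to the neighbour with the highest QoS until no neighbour improves on the current node. The topology, QoS values, and node names below are illustrative, not from the paper's model.

```python
def greedy_locate(graph, qos, start):
    """Walk the overlay greedily by QoS; return (node found, hops taken)."""
    current, hops = start, 0
    while True:
        neighbours = graph.get(current, [])
        best = max(neighbours, key=lambda n: qos[n], default=None)
        if best is None or qos[best] <= qos[current]:
            return current, hops  # local optimum reached
        current, hops = best, hops + 1

# Adjacency list of the overlay and per-node QoS indices (toy values).
graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
qos = {"a": 0.2, "b": 0.5, "c": 0.3, "d": 0.9}
node, hops = greedy_locate(graph, qos, "a")
```

In a denser overlay, each node sees more neighbours per hop, which is why the abstract reports better performance as the average degree grows.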

  6. Design and Implementation of the Personalized Search Engine Based on the Improved Behavior of User Browsing

    Directory of Open Access Journals (Sweden)

    Wei-Chao Li

    2013-02-01

    Full Text Available An improved user profile based on user browsing behavior is proposed in this study. The user profile takes into account the user's web page browsing behaviors, the level of interest in keywords, and the user's short-term and long-term interests. The improved user profile is embedded in a personalized search engine system. The basic framework and functional modules of the system are described in detail in this study. A demonstration system, IUBPSES, was developed on the .NET platform. The results of simulation experiments indicate that the retrieval effectiveness of IUBPSES, based on the improved user profile, surpasses that of current mainstream search engines. Directions for improvement and further research are proposed at the end.

  7. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

    Full Text Available This paper presents the implementation of an optimal search strategy (OSS) for verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult part of the overall assembly, and a favourable influence of vibration and rotation movement on tolerance compensation was observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameters. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitudes and frequencies of the robot grip's vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified through simulation that the search strategy is suitable for situations with unexpected events due to uncertainties.

  8. A Feature-Weighted Instance-Based Learner for Deep Web Search Interface Identification

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2013-02-01

    Full Text Available Determining whether a site has a search interface is a crucial priority for further research of deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experiment results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based learner gives better results than classical algorithms such as C4.5, random forest and KNN.
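The feature-weighted instance-based learner can be sketched as a kNN classifier whose distance scales each feature dimension by a weight. The weights and toy feature vectors below are invented for illustration; the paper's hybrid features and learned weights are not reproduced here.

```python
from collections import Counter

def weighted_distance(x, y, weights):
    """Euclidean distance with per-feature weights."""
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, y)) ** 0.5

def knn_predict(train, labels, weights, query, k=3):
    """Majority vote among the k nearest weighted neighbours."""
    ranked = sorted(range(len(train)),
                    key=lambda i: weighted_distance(train[i], query, weights))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy page features (e.g. form count, input count) and labels.
train = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["no-form", "no-form", "search-form", "search-form"]
weights = (1.0, 0.5)  # first feature weighted twice as heavily
pred = knn_predict(train, labels, weights, (5, 4), k=3)
```

Because the weights rescale each dimension, informative features dominate the neighbourhood computation, which is the intuition behind the reported gains over plain KNN.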

  9. The optimal time-frequency atom search based on a modified ant colony algorithm

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-feng; LI Yan-jun; YU Rui-xing; ZHANG Ke

    2008-01-01

    In this paper, a new optimal time-frequency atom search method based on a modified ant colony algorithm is proposed to improve the precision of traditional methods. First, the discretization formula for a finite-length time-frequency atom is derived in detail. Second, a modified ant colony algorithm in continuous space is proposed. Finally, the optimal time-frequency atom search algorithm based on the modified ant colony algorithm is described in detail and a simulation experiment is carried out. The results indicate that the developed algorithm is valid and stable, and that its precision is higher than that of the traditional method.

  10. Structure-Based Search for New Inhibitors of Cholinesterases

    Directory of Open Access Journals (Sweden)

    Barbara Malawska

    2013-03-01

    Full Text Available Cholinesterases are important biological targets responsible for regulation of cholinergic transmission, and their inhibitors are used for the treatment of Alzheimer's disease. To design new cholinesterase inhibitors, several structure-based design strategies were followed, including the modification of compounds from a previously developed library and a fragment-based design approach. This led to the selection of heterodimeric structures as potential inhibitors. Synthesis and biological evaluation of selected candidates confirmed that the designed compounds were acetylcholinesterase inhibitors with IC50 values in the mid-nanomolar to low micromolar range, and some of them were also butyrylcholinesterase inhibitors.

  11. Nomogram-based search for subspaces of independent attributes

    OpenAIRE

    Moškon, Sašo

    2009-01-01

    In this thesis we introduce selective nomograms, an improvement of nomograms for visualizing the naive Bayesian classifier. Selective nomograms allow us to interactively explore the domain and discover conditional dependencies between the attributes. We also propose a classification algorithm based on the idea of selective nomograms. First, we introduce selective nomograms, define conditional dependencies, and describe the theoretical background for discovering conditional dependencies between the attributes.

  12. Code generation based on formal BURS theory and heuristic search

    NARCIS (Netherlands)

    Nymeyer, A.; Katoen, J.P.

    1997-01-01

    BURS theory provides a powerful mechanism to efficiently generate pattern matches in a given expression tree. BURS, which stands for bottom-up rewrite system, is based on term rewrite systems, to which costs are added. We formalise the underlying theory, and derive an algorithm that computes all pattern matches.

  13. Application of Search Algorithms for Model Based Regression Testing

    Directory of Open Access Journals (Sweden)

    Sidra Noureen

    2014-04-01

    Full Text Available UML models have gained significance, as reported in the literature. The use of a model to describe the behavior of a system is a proven and major advantage for testing. With the help of Model Based Testing (MBT), it is possible to automatically generate test cases. When MBT is applied to large industrial systems, sampling test cases from the entire test suite becomes a problem, because it is difficult to execute the huge number of test cases being generated. The motivation of this study is to design a multi-objective genetic-algorithm-based test case selection technique which can select the most appropriate subset of test cases. NSGA (Non-dominated Sorting Genetic Algorithm) is used as the optimization algorithm, and its fitness function is improved for selecting test cases from the dataset. It is concluded that there is room to improve the performance of the NSGA algorithm by tailoring its fitness function.

  14. Improving software security using search-based refactoring

    OpenAIRE

    Ghaith, Shadi; Ó Cinnéide, Mel

    2012-01-01

    Security metrics have been proposed to assess the security of software applications based on the principles of "reduce attack surface" and "grant least privilege." While these metrics can help inform the developer in choosing designs that provide better security, they cannot on their own show exactly how to make an application more secure. Even if they could, the onerous task of updating the software to improve its security is left to the developer. In this paper we ...

  15. A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.

    Science.gov (United States)

    Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R

    2008-04-01

    Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.

  16. A grammar based methodology for structural motif finding in ncRNA database search.

    Science.gov (United States)

    Quest, Daniel; Tapprich, William; Ali, Hesham

    2007-01-01

    In recent years, sequence database searching has been conducted through local alignment heuristics, pattern-matching, and comparison of short statistically significant patterns. While these approaches have unlocked many clues as to sequence relationships, they are limited in that they do not provide context-sensitive searching capabilities (e.g. considering pseudoknots, protein binding positions, and complementary base pairs). Stochastic grammars (hidden Markov models HMMs and stochastic context-free grammars SCFG) do allow for flexibility in terms of local context, but the context comes at the cost of increased computational complexity. In this paper we introduce a new grammar based method for searching for RNA motifs that exist within a conserved RNA structure. Our method constrains computational complexity by using a chain of topology elements. Through the use of a case study we present the algorithmic approach and benchmark our approach against traditional methods.

  17. Ontology-based Semantic Search Engine for Healthcare Services

    Directory of Open Access Journals (Sweden)

    Jotsna Molly Rajan

    2012-04-01

    Full Text Available With the development of Web Services, the retrieval of relevant services has become a challenge. The keyword-based discovery mechanism using UDDI and WSDL is insufficient due to the retrieval of a large amount of irrelevant information. Keywords are also insufficient for expressing semantic concepts, since a single concept can be referred to using syntactically different terms. Hence, service capabilities need to be analyzed manually, which has led to the development of the Semantic Web for automatic service discovery and retrieval of relevant services and resources. This work proposes incorporating a semantic matching methodology into the Semantic Web to improve the efficiency and accuracy of the discovery mechanism.

  18. Brain bases of the automaticity via visual search task

    OpenAIRE

    Bueichekú Bohabonay, Elisenda Práxedes

    2016-01-01

    The main objective of this thesis is to study the brain bases of visual search processes, the development of automaticity through training, and the relationship between the processes underlying visual search, theoretical models of attention, and neuroplasticity. The visual search task was used in three experiments, collecting behavioural data and data on brain activity and connectivity via fMRI in a healthy population. The results point to the attentional tasks...

  19. Exploring personalized searches using tag-based user profiles and resource profiles in folksonomy.

    Science.gov (United States)

    Cai, Yi; Li, Qing; Xie, Haoran; Min, Huaqin

    2014-10-01

    With the increase in resource-sharing websites such as YouTube and Flickr, many shared resources have arisen on the Web. Personalized searches have become more important and challenging since users demand higher retrieval quality. To achieve this goal, personalized searches need to take users' personalized profiles and information needs into consideration. Collaborative tagging (also known as folksonomy) systems allow users to annotate resources with their own tags, which provides a simple but powerful way for organizing, retrieving and sharing different types of social resources. In this article, we examine the limitations of previous tag-based personalized searches. To handle these limitations, we propose a new method to model user profiles and resource profiles in collaborative tagging systems. We use a normalized term frequency to indicate the preference degree of a user on a tag. A novel search method using such profiles of users and resources is proposed to facilitate the desired personalization in resource searches. In our framework, instead of the keyword matching or similarity measurement used in previous works, the relevance measurement between a resource and a user query (termed the query relevance) is treated as a fuzzy satisfaction problem of a user's query requirements. We implement a prototype system called the Folksonomy-based Multimedia Retrieval System (FMRS). Experiments using the FMRS data set and the MovieLens data set show that our proposed method outperforms baseline methods. PMID:24939833
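The profiling and matching ideas above can be sketched briefly. The code assumes, as the abstract states, that a user's preference for a tag is its normalised term frequency, and it treats query relevance as a minimum over per-tag satisfaction degrees as a simple fuzzy-style aggregation; the exact FMRS relevance formula is not reproduced, and the tag data are invented.

```python
def tag_profile(tag_counts):
    """Normalised term frequency: each tag's count over the total tag count."""
    total = sum(tag_counts.values())
    return {tag: c / total for tag, c in tag_counts.items()}

def query_relevance(resource_profile, query_tags):
    """Fuzzy-style relevance: the minimum satisfaction degree over query tags."""
    return min(resource_profile.get(t, 0.0) for t in query_tags)

# Toy tagging histories for one user and one resource.
user = tag_profile({"jazz": 6, "live": 3, "rock": 1})
resource = tag_profile({"jazz": 4, "live": 4, "vinyl": 2})
rel = query_relevance(resource, ["jazz", "live"])
```

Ranking resources by this relevance, possibly reweighted by the user's own profile, gives the personalized ordering the article aims for.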

  20. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information on the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find information on the Internet. Valuable information, however, is still likely to be submerged in the ocean of search results from those tools. By automatically clustering the results into different groups based on subject, a search engine with a clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate the snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results have shown that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.
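The first step of the pipeline extracts longest common substrings from snippets. The paper uses a generalised suffix tree, which does this in linear time; the quadratic dynamic-programming version below is a simpler stand-in that returns the same answer for a pair of snippets.

```python
def longest_common_substring(a, b):
    """Classic O(len(a)*len(b)) DP for the longest common substring."""
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return a[best_end - best:best_end]

lcs = longest_common_substring("semantic web search", "chinese web search results")
```

Substrings shared across many snippets become the candidate features that the later HowNet-based similarity and clustering stages operate on.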

  1. Efficient Multi-keyword Ranked Search over Outsourced Cloud Data based on Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Nie Mengxi

    2016-01-01

    Full Text Available With the development of cloud computing, more and more data owners are motivated to outsource their data to the cloud server for greater flexibility and lower expenditure. Because the security of outsourced data must be guaranteed, encryption methods must be used, which obsoletes traditional data utilization based on plaintext, e.g. keyword search. Some schemes have been proposed to enable search over encrypted data, e.g. top-k single- or multiple-keyword retrieval, but their efficiency is too low to be practical in cloud computing. In this paper, we propose a new scheme based on homomorphic encryption to solve this challenging problem of privacy-preserving, efficient multi-keyword ranked search over outsourced cloud data. In our scheme, the inner product is adopted to measure relevance scores, and the technique of relevance feedback is used to reflect the search preference of the data users. Security analysis shows that the proposed scheme can meet strict privacy requirements for such a secure cloud data utilization system. Performance evaluation demonstrates that the proposed scheme achieves low overhead on both computation and communication.
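The ranking core of such a scheme is an inner product between a query vector and each document's index vector; under homomorphic encryption the same inner product is evaluated over ciphertexts. The plaintext sketch below shows only the ranking logic the encrypted protocol emulates; the keyword dimensions and index vectors are invented.

```python
def inner_product(u, v):
    """Relevance score: dot product of query and document index vectors."""
    return sum(a * b for a, b in zip(u, v))

def ranked_search(index, query, k=2):
    """Return the top-k document ids by inner-product relevance score."""
    scores = {doc_id: inner_product(vec, query) for doc_id, vec in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Keyword dimensions (illustrative): [cloud, encryption, search]
index = {"d1": [1, 1, 0], "d2": [0, 1, 1], "d3": [1, 0, 0]}
top = ranked_search(index, [0, 1, 1], k=2)
```

Relevance feedback would then adjust the query vector's weights between rounds; that refinement is omitted here.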

  2. Report on TBAS 2012: workshop on task-based and aggregated search

    DEFF Research Database (Denmark)

    Larsen, Birger; Lioma, Christina; de Vries, Arjen

    2012-01-01

    The ECIR half-day workshop on Task-Based and Aggregated Search (TBAS) was held in Barcelona, Spain on 1 April 2012. The program included a keynote talk by Professor Järvelin, six full paper presentations, two poster presentations, and an interactive discussion among the approximately 25 participants.

  3. Exploring Gender Differences in SMS-Based Mobile Library Search System Adoption

    Science.gov (United States)

    Goh, Tiong-Thye

    2011-01-01

    This paper investigates differences in how male and female students perceived a short message service (SMS) library catalog search service when adopting it. Based on a sample of 90 students, the results suggest that there are significant differences in perceived usefulness and intention to use but no significant differences in self-efficacy and…

  4. Information Commitments: Evaluative Standards and Information Searching Strategies in Web-Based Learning Environments

    Science.gov (United States)

    Wu, Ying-Tien; Tsai, Chin-Chung

    2005-01-01

    "Information commitments" include both a set of evaluative standards that Web users utilize to assess the accuracy and usefulness of information in Web-based learning environments (implicit component), and the information searching strategies that Web users use on the Internet (explicit component). An "Information Commitment Survey" (ICS),…

  5. Eugene Garfield, Francis Narin, and PageRank: The Theoretical Bases of the Google Search Engine

    CERN Document Server

    Bensman, Stephen J

    2013-01-01

    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.

  6. EARS: An Online Bibliographic Search and Retrieval System Based on Ordered Explosion.

    Science.gov (United States)

    Ramesh, R.; Drury, Colin G.

    1987-01-01

    Provides overview of Ergonomics Abstracts Retrieval System (EARS), an online bibliographic search and retrieval system in the area of human factors engineering. Other online systems are described, the design of EARS based on inverted file organization is explained, and system expansions including a thesaurus are discussed. (Author/LRW)

  7. A novel approach towards skill-based search and services of Open Educational Resources

    NARCIS (Netherlands)

    Ha, Kyung-Hun; Niemann, Katja; Schwertel, Uta; Holtkamp, Philipp; Pirkkalainen, Henri; Börner, Dirk; Kalz, Marco; Pitsilis, Vassilis; Vidalis, Ares; Pappa, Dimitra; Bick, Markus; Pawlowski, Jan; Wolpers, Martin

    2011-01-01

    Ha, K.-H., Niemann, K., Schwertel, U., Holtkamp, P., Pirkkalainen, H., Börner, D. et al (2011). A novel approach towards skill-based search and services of Open Educational Resources. In E. Garcia-Barriocanal, A. Öztürk, & M. C. Okur (Eds.), Metadata and Semantics Research: 5th International Confere

  8. Aspiration Levels and R&D Search in Young Technology-Based Firms

    DEFF Research Database (Denmark)

    Candi, Marina; Saemundsson, Rognvaldur; Sigurjonsson, Olaf

    the same when performance surpasses aspirations. Both positive and negative outlooks reinforce the effects of performance feedback. The combined effect is that the more outcomes and expectations deviate from aspirations the more young technology-based firms invest in R&D search....

  9. How Users Search the Library from a Single Search Box

    Science.gov (United States)

    Lown, Cory; Sierra, Tito; Boyer, Josh

    2013-01-01

    Academic libraries are turning increasingly to unified search solutions to simplify search and discovery of library resources. Unfortunately, very little research has been published on library user search behavior in single search box environments. This study examines how users search a large public university library using a prominent, single…

  10. 基于JXTA的Super-peer搜索方法设计%Design JXTA Based Super-peer Search Method

    Institute of Scientific and Technical Information of China (English)

    李歆海; 李善平

    2003-01-01

    An efficient resource search method has a significant impact on the scalability and availability of a P2P network. Generally there are two search methods: the pure peer-to-peer method and the central index method. Recently, search methods based on the super-peer concept have appeared; they are a compromise between those two methods and have favorable scalability and availability. In this paper, we compare the advantages and deficiencies of these three kinds of search methods, and design a super-peer search method based on the JXTA platform.

  11. A New Genetic Algorithm Based on Niche Technique and Local Search Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The genetic algorithm has been widely used in many fields as an easy, robust global search and optimization method. In this paper, a new genetic algorithm based on a niche technique and a local search method is presented, in consideration of the inadequacies of the simple genetic algorithm. In order to prove the adaptability and validity of the improved genetic algorithm, optimization problems of multimodal functions with equal peaks, unequal peaks, and complicated peak distributions are discussed. The simulation results show that compared to other niching methods, this improved genetic algorithm has obvious potential in many respects, such as convergence speed, solution accuracy, and ability of global optimization.
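Fitness sharing is one common niche technique, though the abstract does not specify which niching method the authors use, so the sketch below is illustrative: an individual's raw fitness is divided by a niche count so that crowded peaks are penalised and the population spreads across multiple optima. The sharing radius and toy population are assumptions.

```python
def niche_counts(population, radius):
    """Niche count: sum of a triangular sharing kernel over the population."""
    counts = []
    for x in population:
        c = sum(max(0.0, 1.0 - abs(x - y) / radius) for y in population)
        counts.append(c)
    return counts

def shared_fitness(population, fitness, radius=0.5):
    """Divide each individual's fitness by its niche count."""
    counts = niche_counts(population, radius)
    return [fitness(x) / c for x, c in zip(population, counts)]

pop = [0.0, 0.05, 2.0]          # two individuals crowd the same niche
fit = lambda x: 1.0             # equal raw fitness for all
shared = shared_fitness(pop, fit, radius=0.5)
```

The isolated individual keeps its full fitness while the crowded pair split theirs, which maintains subpopulations on the equal-height peaks the abstract mentions.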

  12. Based on A* and Q-Learning Search and Rescue Robot Navigation

    OpenAIRE

    Ruiyuan Fan; Xiaogang Ruan; Tao Pang; Ershen Wang

    2012-01-01

    For search and rescue robot navigation in an unknown environment, a bionic self-learning algorithm based on A* and Q-Learning is put forward. The algorithm utilizes the Growing Self-organizing Map (GSOM) to build a topological cognitive map of the environment. The heuristic search algorithm A* is used for global path planning; when the local environment changes, Q-Learning is used for local path planning. Thereby the robot can acquire self-learning skills by studying and training like humans or animals.
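The global-planning stage can be sketched with a minimal A* on a 4-connected grid (0 = free, 1 = obstacle), using the Manhattan distance as the heuristic; the GSOM cognitive map and the Q-Learning local planner from the abstract are not reproduced here, and the grid is invented.

```python
import heapq

def astar(grid, start, goal):
    """Return the length of a shortest 4-connected path, or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]   # (f = g + h, g, node)
    best_g = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if g + 1 < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = g + 1
                    heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return -1

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
steps = astar(grid, (0, 0), (2, 0))
```

When an obstacle appears at run time, the hybrid scheme in the abstract would switch to the learned local policy rather than replanning globally.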

  13. A Beam Search-based Algorithm for Flexible Manufacturing System Scheduling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bing-hai; ZHOU Xiao-jun; CAI Jian-guo; FENG Kun

    2002-01-01

    A new algorithm is proposed for the flexible manufacturing system (FMS) scheduling problem in this paper. The proposed algorithm is a heuristic based on filtered beam search. It considers the machines and automated guided vehicles (AGVs) as the primary resources, and utilizes system constraints and related manufacturing and processing information to generate machine and AGV schedules. The generated schedules can cover an entire scheduling horizon as well as scheduling periods of various lengths. The proposed algorithm is also compared with other well-known dispatching-rules-based FMS scheduling approaches. The results indicate that the beam search algorithm is a simple, valid and promising algorithm that deserves further research in the FMS scheduling field.
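Filtered beam search can be sketched as follows: partial schedules are extended job by job, and only the best `beam_width` partial schedules survive each level. The evaluation below (total completion time on a single machine) is a toy stand-in for the paper's machine/AGV model, and the job durations are invented.

```python
def beam_schedule(durations, beam_width=2):
    """Beam search over job sequences, minimising total completion time."""
    jobs = set(range(len(durations)))
    beam = [((), 0, 0)]  # (sequence, elapsed time, total completion time)
    while beam and len(beam[0][0]) < len(durations):
        candidates = []
        for seq, t, cost in beam:
            for j in jobs - set(seq):        # extend by each unscheduled job
                t2 = t + durations[j]
                candidates.append((seq + (j,), t2, cost + t2))
        candidates.sort(key=lambda c: c[2])  # filter: keep the cheapest
        beam = candidates[:beam_width]
    return beam[0]

seq, _, total = beam_schedule([4, 1, 2], beam_width=2)
```

With a wider beam the search approaches exhaustive enumeration; the filtering step is what keeps the method tractable for larger FMS instances.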

  14. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, Published in unknown, Glynn County Board of Commissioners.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, was produced all or in part from Other information as of unknown. Data by this...

  15. Digital Elevation Model (DEM), Digital elevation model based of 2006 LIDAR data., Published in 2007, Johnson County AIMS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Digital Elevation Model (DEM) dataset, was produced all or in part from LIDAR information as of 2007. It is described as 'Digital elevation model based of 2006...

  16. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    Science.gov (United States)

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment for the predicate- versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. PMID:23892296
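The core of a predicate-based vector space model is that each subject-relation-object triple, rather than each word, becomes an indexing unit weighted by tf-idf. A minimal sketch of that idea (the paper's adjusted tf-idf and boost function are simplified to standard weighting; the triples and function names are illustrative assumptions):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Treat each (subject, relation, object) triple as an indexing unit,
    exactly as a keyword model treats a term, and weight it by tf-idf.
    `docs` is a list of documents, each a list of triples."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (1 + math.log(c)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

Two documents sharing a triple score a positive similarity even when their surface keywords overlap, while documents with disjoint triples score zero, which is what gives the representation its extra precision.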

  17. Genetic Algorithm-based Dynamic Vehicle Route Search using Car-to-Car Communication

    Directory of Open Access Journals (Sweden)

    KIM, J.

    2010-11-01

    Full Text Available Suggesting more efficient driving routes generates benefits not only for individuals, by saving commute time, but also for society as a whole, by reducing accident rates and the social costs of traffic congestion. In this paper, we suggest a new route search algorithm based on a genetic algorithm that is easily installable into mutually communicating car navigation systems, and validate its usefulness through experiments reflecting real-world situations. The proposed algorithm is capable of searching alternative routes dynamically in the event of unexpected system malfunctions or traffic slow-downs due to accidents. Experimental results demonstrate that our algorithm searches the best route more efficiently and evolves with universal adaptability.

  18. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    Science.gov (United States)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of the feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. The proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
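Harmony search adapted to feature selection treats each "harmony" as a 0/1 mask over features, improvised from a memory of good masks. A compact sketch of that binary variant (the paper's multi-fractal-dimension criterion is abstracted into an arbitrary `evaluate` callable; all parameter names and defaults are illustrative assumptions):

```python
import random

def harmony_search_features(n_features, evaluate, memory_size=10,
                            iters=300, hmcr=0.9, par=0.3, seed=0):
    """Binary harmony search: each harmony is a 0/1 mask over features.
    `evaluate(mask)` returns a score to maximize (in the paper's setting
    this would be the multi-fractal-dimension criterion)."""
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_features)]
              for _ in range(memory_size)]
    scores = [evaluate(h) for h in memory]
    for _ in range(iters):
        new = []
        for i in range(n_features):
            if rng.random() < hmcr:                 # memory consideration
                bit = memory[rng.randrange(memory_size)][i]
                if rng.random() < par:              # pitch adjustment: flip
                    bit = 1 - bit
            else:                                    # random consideration
                bit = rng.randint(0, 1)
            new.append(bit)
        s = evaluate(new)
        worst = min(range(memory_size), key=scores.__getitem__)
        if s > scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = max(range(memory_size), key=scores.__getitem__)
    return memory[best], scores[best]
```

Because the criterion is just a callable, the same skeleton works whether the score is a fractal-dimension estimate or a classifier accuracy.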

  19. On the importance of graph search algorithms for DRGEP-based mechanism reduction methods

    CERN Document Server

    Niemeyer, Kyle E

    2016-01-01

    The importance of graph search algorithm choice to the directed relation graph with error propagation (DRGEP) method is studied by comparing basic and modified depth-first search, basic and R-value-based breadth-first search (RBFS), and Dijkstra's algorithm. By using each algorithm with DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only Dijkstra's algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal mechanisms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and equivalence ratio. Dijkstra's algorithm combined with a coefficient scaling approach is demonstrated to produce the most compact skeletal mechanism with a similar performance compared to larger skeletal mechanisms resulting from the other algorithms. The computational efficiency of each algorithm is also compared by applying the DRG...
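The reason Dijkstra's algorithm yields species-order-independent DRGEP results can be seen directly: the overall interaction coefficient is the maximum over paths of the product of direct coefficients in (0, 1], so products only shrink along a path and the greedy pop order of Dijkstra stays valid. An illustrative sketch of that max-product variant (not the author's implementation; graph values are hypothetical):

```python
import heapq

def drgep_coefficients(graph, target):
    """Dijkstra-like search assigning every species the maximum over all
    paths from `target` of the product of direct-interaction coefficients,
    which is what DRGEP's path-dependent coefficient requires.
    `graph[u]` maps neighbour -> direct coefficient in (0, 1]."""
    best = {target: 1.0}
    heap = [(-1.0, target)]              # max-heap via negated values
    while heap:
        negp, u = heapq.heappop(heap)
        p = -negp
        if p < best.get(u, 0.0):         # stale heap entry, skip it
            continue
        for v, r in graph.get(u, {}).items():
            cand = p * r
            if cand > best.get(v, 0.0):
                best[v] = cand
                heapq.heappush(heap, (-cand, v))
    return best
```

Since each species' coefficient is a maximum over all paths, shuffling the species order changes only tie-breaking inside the heap, never the returned values, matching the independence result reported for Dijkstra's algorithm and RBFS.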

  20. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    Directory of Open Access Journals (Sweden)

    K.S. Kuppusamy,

    2011-03-01

    Full Text Available The results rendered by search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.

  1. Project GRACE A grid based search tool for the global digital library

    CERN Document Server

    Scholze, Frank; Vigen, Jens; Prazak, Petra; The Seventh International Conference on Electronic Theses and Dissertations

    2004-01-01

    The paper will report on the progress of an ongoing EU project called GRACE - Grid Search and Categorization Engine (http://www.grace-ist.org). The project participants are CERN, Sheffield Hallam University, Stockholm University, Stuttgart University, GL 2006 and Telecom Italia. The project started in 2002 and will finish in 2005, resulting in a Grid based search engine that will search across a variety of content sources including a number of electronic thesis and dissertation repositories. The Open Archives Initiative (OAI) is expanding and is clearly an interesting movement for a community advocating open access to ETD. However, the OAI approach alone may not be sufficiently scalable to achieve a truly global ETD Digital Library. Many universities simply offer their collections to the world via their local web services without being part of any federated system for archiving and even those dissertations that are provided with OAI compliant metadata will not necessarily be picked up by a centralized OAI Ser...

  2. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The results rendered by search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through the user profile during candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users quantify the improvements in terms of the amount of relevant information fetched.
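The profile-driven segment-selection step can be sketched in a few lines: score each candidate segment by its overlap with the user-profile terms and assemble the page from the top scorers. The data layout and scoring below are hypothetical simplifications of the paper's model:

```python
def assemble_page(results, profile, top_k=3):
    """Score each candidate segment by the overlap between its terms and
    the user-profile terms, then build the result page from the best
    segments (one-shot navigation instead of a linear snippet list)."""
    def score(segment):
        return len(set(segment["terms"]) & set(profile))
    ranked = sorted(results, key=score, reverse=True)
    return [seg["id"] for seg in ranked[:top_k]]
```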

  3. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    Science.gov (United States)

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals.

  4. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    Science.gov (United States)

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364

  5. KRBKSS: a keyword relationship based keyword-set search system for peer-to-peer networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Liang; ZOU Fu-tai; MA Fan-yuan

    2005-01-01

    Distributed inverted index technology is used in many peer-to-peer (P2P) systems to help rapidly find the documents in which a given word appears. A distributed inverted index keyed by single keywords may incur significant bandwidth when executing more complicated search queries such as multiple-attribute queries. In order to reduce query overhead, KSS (keyword-set search) by Gnawali partitions the index by sets of keywords. However, a KSS index is considerably larger than a standard inverted index, since there are more word sets than there are individual words, and the insert overhead and storage overhead are clearly unacceptable for full-text search on a collection of documents even if KSS uses the distance window technology. In this paper, we extract relationship information between query keywords from websites' query logs to improve the performance of the KSS system. Experimental results clearly demonstrate that the improved keyword-set search system based on keyword relationships (KRBKSS) is more efficient than the KSS index in insert overhead and storage overhead, and than a standard inverted index in terms of communication costs for queries.
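The trade-off the abstract describes, that keyword-set postings answer multi-keyword queries with a single lookup but inflate the index, is visible in a tiny sketch (illustrative only, not Gnawali's implementation; the pair-sized sets mirror KSS's basic case):

```python
from itertools import combinations
from collections import defaultdict

def build_kss_index(docs, set_size=2):
    """Keyword-set (KSS-style) index: instead of posting each word alone,
    post every `set_size`-word combination, so a multi-keyword query hits
    one posting list. `docs` maps doc id -> list of words."""
    index = defaultdict(set)
    for doc_id, words in docs.items():
        for combo in combinations(sorted(set(words)), set_size):
            index[combo].add(doc_id)
    return index

def query(index, keywords):
    """One lookup resolves the whole keyword set."""
    return index.get(tuple(sorted(keywords)), set())
```

Note that even this two-document example produces more pair postings than documents, which is exactly the storage blow-up that motivates pruning the sets by keyword relationships mined from query logs.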

  6. Scholarly electronic publishing bibliography

    OpenAIRE

    Bailey, Jr., Charles W.

    2005-01-01

    The Scholarly Electronic Publishing Bibliography (SEPB) presents selected English-language articles, books, and other printed and electronic sources that are useful in understanding scholarly electronic publishing efforts on the Internet. Most sources have been published between 1990 and the present; however, a limited number of key sources published prior to 1990 are also included. Where possible, links are provided to sources that are freely available on the Internet. SEPB includes "Scholar...

  7. Web Document Clustering Using Cuckoo Search Clustering Algorithm based on Levy Flight

    Directory of Open Access Journals (Sweden)

    Moe Moe Zaw

    2013-09-01

    Full Text Available The World Wide Web serves as a huge, widely distributed, global information service center. The amount of information on the web is growing day by day, so finding relevant information on the web is a major challenge in information retrieval. This leads to the need for new techniques to help users effectively navigate, summarize and organize the overwhelming information. One technique that can play an important role towards this objective is web document clustering. This paper aims to develop a clustering algorithm and apply it in the web document clustering area. The Cuckoo Search optimization algorithm is a recently developed optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with Levy flight. In this paper, a Cuckoo Search clustering algorithm based on Levy flight is proposed. This algorithm applies Cuckoo Search optimization to web document clustering to locate the optimal centroids of the clusters and to find a global solution of the clustering algorithm. To test the performance of the proposed method, this paper reports experimental results on a benchmark dataset. The results obtained show that the Cuckoo Search clustering algorithm based on Levy flight performs well in web document clustering.
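The Levy-flight move that distinguishes cuckoo search from a plain random walk is usually drawn with Mantegna's algorithm: a heavy-tailed step length that produces occasional long jumps. A sketch of the step generator and a single centroid move (parameter names and the coordinate-wise update are illustrative assumptions, not the paper's exact scheme):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Mantegna's algorithm: draw a heavy-tailed Levy-stable step length,
    the move cuckoo search uses to escape local optima."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def move_centroid(centroid, best, alpha=0.01, rng=random):
    """One cuckoo move: perturb each coordinate of a candidate cluster
    centroid relative to the best-known centroid, scaled by a
    Levy-distributed step length."""
    return [c + alpha * levy_step(rng=rng) * (c - b)
            for c, b in zip(centroid, best)]
```

A useful property to notice: the best nest moves relative to itself by zero, so the incumbent solution is never destroyed by the flight, while other nests take mostly small steps with rare long excursions.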

  8. TRUST BASED AUTOMATIC QUERY FORMULATION SEARCH ON EXPERT AND KNOWLEDGE USERS SYSTEMS

    Directory of Open Access Journals (Sweden)

    K. Sridharan

    2014-01-01

    Full Text Available Due to the increasing complexity of services, there is a need for dynamic interaction models. For a service-oriented system to work properly, we need a context-sensitive, trust-based search. Automatic information transfer is also deficient when an unexpected query is given. Search engines are vulnerable when answering intellectual queries and can show unreliable outcomes, and users cannot be satisfied with these results due to a lack of trust in blogs. Our modified trust algorithm performs exact skill matching and retrieval of information based on proper content rank. Our contribution in this system is a new modified trust algorithm with automatic formulation of meaningful query searches, retrieving the exact contents from the top-ranked documents based on expert rank and the verified content quality of their resources. Some semantic search engines cannot show significant performance in improving precision and lowering recall. The approach hence effectively reduces the complexity of combining HPS and software services.

  9. CHINA INTERNATIONAL PUBLISHING GROUP

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The China International Publishing Group (CIPG) specializes in international communications. Its operations encompass reporting, editing, translation, publishing, printing, distribution, and the Internet. It incorporates seven publishing companies, five magazines and 19 periodicals, published in over 20 languages. The China International Book Trading Corporation, another group facet, distributes all of these to over 180 countries and

  10. Genealogical Information Search by Using Parent Bidirectional Breadth Algorithm and Rule Based Relationship

    CERN Document Server

    Nuanmeesri, Sumitra; Meesad, Payung

    2010-01-01

    Genealogical information is among the best historical resources for the study of culture and cultural heritage. Genealogical research generally presents family information and depicts it as a tree diagram. This paper presents the Parent Bidirectional Breadth Algorithm (PBBA) to find the consanguine relationship between two persons. In addition, the paper utilizes a rule-based system to identify the consanguine relationship. The study reveals that PBBA solves the genealogical information search problem quickly, and the rule-based relationship provides further benefits in blood-relationship identification.
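The "parent bidirectional breadth" idea can be sketched as two upward breadth-first frontiers, one per person, expanded a generation at a time until they intersect at a common ancestor. Illustrative code under that reading, not the paper's PBBA (data layout and names are assumptions):

```python
from collections import deque

def bidirectional_parent_search(parents, a, b):
    """Breadth-first search upward from both persons simultaneously until
    the two frontiers meet at a common ancestor; `parents[x]` lists x's
    parents. Returns the meeting ancestor, or None if unrelated."""
    seen_a, seen_b = {a}, {b}
    qa, qb = deque([a]), deque([b])
    while qa or qb:
        for q, seen, other in ((qa, seen_a, seen_b), (qb, seen_b, seen_a)):
            for _ in range(len(q)):          # expand exactly one generation
                x = q.popleft()
                for p in parents.get(x, []):
                    if p in other:           # frontiers meet: common ancestor
                        return p
                    if p not in seen:
                        seen.add(p)
                        q.append(p)
    return None
```

Searching from both ends keeps each frontier shallow, which is the usual reason bidirectional breadth search beats a single BFS on ancestry graphs.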

  11. The Robustness of Content-Based Search in Hierarchical Peer to Peer Networks

    OpenAIRE

    Renda, Maria Elena; Callan, Jamie

    2004-01-01

    Hierarchical Peer to Peer (P2P) networks with multiple directory services have quickly become one of the dominant architectures for large-scale file sharing due to their effectiveness and efficiency. Recent research argues that such networks are also an effective method of providing large-scale content-based federated search of text-based digital libraries. In both cases the directory services are critical resources that are subject to attack or failure, but the latter architecture may be par...

  12. Predatory Search Strategy Based on Swarm Intelligence for Continuous Optimization Problems

    OpenAIRE

    Wang, J. W.; H. F. Wang; Ip, W. H.; Furuta, K; Kanno, T.; Zhang, W. J.

    2013-01-01

    We propose an approach to solve continuous variable optimization problems. The approach is based on the integration of predatory search strategy (PSS) and swarm intelligence technique. The integration is further based on two newly defined concepts proposed for the PSS, namely, “restriction” and “neighborhood,” and takes the particle swarm optimization (PSO) algorithm as the local optimizer. The PSS is for the switch of exploitation and exploration (in particular by the adjustment of neighborh...

  13. The Open Data Repositorys Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.

    2015-01-01

    Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.

  14. The Open Data Repository's Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Bristow, T.; Blake, D. F.; Fonda, M.; Pires, A.

    2015-12-01

    Data management and data publication are becoming increasingly important components of research workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software (http://www.opendatarepository.org) strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity. We gratefully acknowledge the support for this study by the Science-Enabling Research Activity (SERA), and NASA NNX11AP82A

  15. PUBLISHER'S ANNOUNCEMENT: Refereeing standards

    Science.gov (United States)

    Bender, C.; Scriven, N.

    2004-08-01

    submitting papers to J. Phys. A. In addition to the office staff, the journal has two assets of enormous value. First, there is the pool of referees. It is impossible to have an academic system based on publication of original ideas without peer review. I believe that when one submits papers for publication in journals, one assumes a moral responsibility to participate in the peer review system. A published author has an obligation to referee papers and thereby to keep the scientific quality of published work as high as possible. In general, referees' reports that are submitted to scientific journals vary in quality. Some referees reply quickly and write detailed, careful, and helpful reports; other referees write cursory reports that are not so useful. Over the years J. Phys. A has amassed an amazingly talented and sedulous group of referees. I thank the referees of the journal who have worked so hard and have contributed their time without any expectation of financial compensation. I emphasize that the office tries hard to avoid overburdening referees. Sending back a quick and detailed response does not increase the likelihood of the referee receiving another paper to evaluate. (A number of people have told me that they sit on and delay the refereeing of papers in hopes of reducing the number of papers per year that they receive to referee. The office at J. Phys. A works to make this sort of strategy unnecessary.) The second asset is the Board of Editors and the Advisory Panel. For some journals membership on the Board of Editors is a sinecure. However, the 37 members of the Board of Editors and the 50 members of the Advisory Panel of J. Phys. A have been chosen not only because they are distinguished mathematical physicists but also because of their demonstrated willingness to work hard. 
Six members of the Board of Editors are designated as Section Editors: H Nishimori, Tokyo Institute of Technology, Japan (Statistical Physics); P Grassberger, Bergische Universität GH

  16. Etiquette in scientific publishing.

    Science.gov (United States)

    Krishnan, Vinod

    2013-10-01

    Publishing a scientific article in a journal with a high impact factor and a good reputation is considered prestigious among one's peer group and an essential achievement for career progression. In the drive to get their work published, researchers can forget, either intentionally or unintentionally, the ethics that should be followed in scientific publishing. In an environment where "publish or perish" rules the day, some authors might be tempted to bend or break rules. This special article is intended to raise awareness among orthodontic journal editors, authors, and readers about the types of scientific misconduct in the current publishing scenario and to provide insight into the ways these misconducts are managed by the Committee of Publishing Ethics. Case studies are presented, and various plagiarism detection software programs used by publishing companies are briefly described.

  17. Search-matching algorithm for acoustics-based automatic sniper localization

    Science.gov (United States)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    Most modern automatic sniper localization systems are based on the acoustical emissions produced by gunfire events. In order to estimate the spatial coordinates of the sniper location, these systems measure the time delay of arrival of the acoustical shock wave fronts at a microphone array. In more advanced systems, model-based estimation of the nonlinear distortion parameters of the N-waves is used to estimate the projectile trajectory and calibre. In this work we address the sniper localization problem using a model-based search-matching approach. The automatic sniper localization algorithm works by searching for the acoustics model of ballistic shock waves which best matches the measured data. For this purpose, we implement a previously released acoustics model of ballistic shock waves, whose input variables are the sniper location, the projectile trajectory and calibre, and the muzzle velocity. A search algorithm is implemented to find the combination of input variables that minimizes a fitness function defined as the distance between measured and simulated data. In this way, the sniper location, the projectile trajectory and calibre, and the muzzle velocity can be found. In order to evaluate the performance of the algorithm, we conduct computer-based experiments using simulated gunfire event data calculated at the nodes of a virtual distributed sensor network. Preliminary simulation results are quite promising, showing fast convergence of the algorithm and good localization accuracy.

  18. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    OpenAIRE

    Jie-sheng Wang; Shu-xia Li; Jiang-di Song

    2015-01-01

    In order to improve convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving the function optimization problems, a new improved cuckoo search algorithm based on the repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird’s nests location. In order to select a reasonable repeat-...

  19. Budget-Impact Analyses: A Critical Review of Published Studies

    OpenAIRE

    Ewa Orlewska; Laszlo Gulcsi

    2009-01-01

    This article reviews budget-impact analyses (BIAs) published to date in peer-reviewed bio-medical journals with reference to current best practice, and discusses where future research needs to be directed. Published BIAs were identified by conducting a computerized search on PubMed using the search term 'budget impact analysis'. The years covered by the search included January 2000 through November 2008. Only studies (i) named by authors as BIAs and (ii) predicting financial consequences of a...

  20. Predatory Search Strategy Based on Swarm Intelligence for Continuous Optimization Problems

    Directory of Open Access Journals (Sweden)

    J. W. Wang

    2013-01-01

    Full Text Available We propose an approach to solve continuous variable optimization problems. The approach is based on the integration of predatory search strategy (PSS) and a swarm intelligence technique. The integration is further based on two newly defined concepts proposed for the PSS, namely, “restriction” and “neighborhood,” and takes the particle swarm optimization (PSO) algorithm as the local optimizer. The PSS is for the switch between exploitation and exploration (in particular by the adjustment of neighborhood), while the swarm intelligence technique is for searching the neighborhood. The proposed approach is thus named PSS-PSO. Five benchmarks are taken as test functions (including both unimodal and multimodal ones) to examine the effectiveness of the PSS-PSO against seven well-known algorithms. The result of the test shows that the proposed approach PSS-PSO is superior to all seven algorithms.
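The local optimizer that PSS-PSO plugs into the predatory strategy is standard particle swarm optimization: particles are pulled toward their personal best and the swarm best. A minimal 1-D sketch with the usual inertia/cognitive/social parameters (all values illustrative; the PSS restriction/neighborhood machinery is not reproduced here):

```python
import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer minimizing f over a 1-D interval."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                         # each particle's best position
    pval = [f(p) for p in x]
    g = min(range(n_particles), key=pval.__getitem__)
    gbest, gval = pbest[g], pval[g]      # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval
```

In the PSS-PSO scheme this routine would be confined to the current "neighborhood", with the predatory strategy deciding when to restart it elsewhere.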

  1. Local search methods based on variable focusing for random K -satisfiability

    Science.gov (United States)

    Lemoy, Rémi; Alava, Mikko; Aurell, Erik

    2015-01-01

    We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly and randomly or by introducing a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferably picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed.
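The variable-based focusing differs from clause-based FMS only in the pick step: choose among the variables occurring in unsatisfied clauses (here the uniform variant), then apply a Metropolis acceptance rule to the flip. An illustrative, deliberately compact sketch (it recomputes the unsatisfied set on every flip, which real solvers avoid):

```python
import random

def v_fms(clauses, n_vars, noise=0.3, max_flips=100_000, seed=0):
    """Variable-based focused Metropolis search for SAT: repeatedly pick a
    variable occurring in some unsatisfied clause (rather than picking a
    clause first) and flip it -- always if the flip does not increase the
    number of unsatisfied clauses, otherwise with probability
    noise**increase. Clauses are tuples of non-zero ints, negative = negated."""
    rng = random.Random(seed)
    assign = [False] + [rng.random() < 0.5 for _ in range(n_vars)]

    def sat(lit):
        return assign[lit] if lit > 0 else not assign[-lit]

    def unsat_clauses():
        return [c for c in clauses if not any(sat(l) for l in c)]

    for _ in range(max_flips):
        unsat = unsat_clauses()
        if not unsat:
            return assign[1:]                       # solution found
        focus = {abs(l) for c in unsat for l in c}  # vars in UNSAT clauses
        v = rng.choice(sorted(focus))
        before = len(unsat)
        assign[v] = not assign[v]
        delta = len(unsat_clauses()) - before
        if delta > 0 and rng.random() >= noise ** delta:
            assign[v] = not assign[v]               # reject the uphill move
    return None
```

Biasing the choice toward variables that appear in several unsatisfied clauses, the paper's other variant, would replace the uniform `rng.choice` with a frequency-weighted draw over `focus`.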

  3. A New Hardware Architecture for Parallel Shortest Path Searching Processor Based-on FPGA Technology

    Directory of Open Access Journals (Sweden)

    Jassim M. Abdul-Jabbar

    2012-09-01

    Full Text Available In this paper, a new FPGA-based parallel processor for shortest path searching in OSPF networks is designed and implemented. The processor design is based on a parallel searching algorithm that overcomes the long execution time of the conventional Dijkstra algorithm used originally in the OSPF network protocol. Multiple shortest links can be found simultaneously, and the execution iterations of the processing phase are reduced compared with those of the Dijkstra algorithm. Depending on the FPGA chip resources, the processor is expanded to process an OSPF area with 128 routers. High speedup factors over sequential Dijkstra execution times, in the range 76.77-103.45, are achieved.

  4. Effective Leveraging of Targeted Search Spaces for Improving Peptide Identification in Tandem Mass Spectrometry Based Proteomics.

    Science.gov (United States)

    Shanmugam, Avinash K; Nesvizhskii, Alexey I

    2015-12-01

    In shotgun proteomics, peptides are typically identified using database searching, which involves scoring acquired tandem mass spectra against peptides derived from standard protein sequence databases such as UniProt, RefSeq, or Ensembl. In this strategy, the sensitivity of peptide identification is known to be affected by the size of the search space. Therefore, creating a targeted sequence database containing only peptides likely to be present in the analyzed sample can be a useful technique for improving the sensitivity of peptide identification. In this study, we describe how targeted peptide databases can be created based on the frequency of identification in the Global Proteome Machine Database (GPMDB), the largest publicly available repository of peptide and protein identification data. We demonstrate that targeted peptide databases can be easily integrated into existing proteome analysis workflows and describe a computational strategy for minimizing any loss of peptide identifications arising from potential incompleteness of the targeted search spaces. We demonstrate the performance of our workflow using several data sets of varying size and sample complexity. PMID:26569054
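
At its core, building such a targeted database is a frequency threshold over previously observed peptides; the counts below are made up for illustration, not GPMDB data:

```python
def build_targeted_db(peptide_counts, min_obs):
    """Keep only peptides observed at least min_obs times in prior identifications
    (a sketch of frequency-based targeting; real workflows would pull counts
    from a repository such as GPMDB)."""
    return {pep for pep, n in peptide_counts.items() if n >= min_obs}

counts = {"PEPTIDEK": 120, "RAREPEPK": 2, "COMMONPEPR": 85}   # hypothetical
targeted = build_targeted_db(counts, min_obs=10)
```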

  5. Construction of Powerful Online Search Expert System Based on Semantic Web

    Directory of Open Access Journals (Sweden)

    Yasser A. Nada

    2013-01-01

    Full Text Available In this paper we intend to build an expert system based on the Semantic Web, using XML, for online search, to help users find the desired software and read about its features and specifications. The expert system saves the user's time and the effort of searching the web or buying software from available libraries. Building an online search expert system is ideal for capturing support knowledge to produce interactive online systems that provide searching details and situation-specific advice, exactly like a session with an expert. Any person can access this interactive system from a web browser and get answers to questions, in addition to the precise advice an expert would provide. The system can offer troubleshooting diagnoses, find the right products, and so on. The proposed system combines aspects of three research topics (Semantic Web, expert systems, and XML). The Semantic Web ontology is considered as a set of directed graphs where each node represents a term and each edge relates one term to another. Organizations can now leverage their most valuable expert knowledge through powerful interactive web-enabled knowledge-automation expert systems. Online sessions emulate a conversation with a human expert, asking focused questions and producing customized recommendations and advice. Hence, the main strength of the proposed expert system is that the skills of any domain expert become available to everyone.

  6. PSO-Based Support Vector Machine with Cuckoo Search Technique for Clinical Disease Diagnoses

    OpenAIRE

    Xiaoyong Liu; Hui Fu

    2014-01-01

    Disease diagnosis is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: firstly, a CS based approach for parameter optimization of SVM is developed to find the better initial parameters of kernel function, and then PSO is applied to continue SVM training and find the best parameters of SVM. Experimental results ...

  7. Characterization of single layer anti-reflective coatings for bolometer-based rare event searches

    CERN Document Server

    Hansen, E V

    2016-01-01

    A photon signal added to the existing phonon signal can powerfully reduce backgrounds for bolometer-based rare event searches. Anti-reflective coatings can significantly increase the performance of the secondary light sensing bolometer in these experiments. Coatings of SiO2, HfO2, and TiO2 on Ge and Si were fabricated and characterized at room temperature and all angles of incidence.

  8. Block-based disparity estimation by partial finite ridgelet distortion search (PFRDS)

    Science.gov (United States)

    Eslami, Mohammad; Torkamani-Azar, Farah

    2010-01-01

    In stereo vision applications, computing the disparity map is an important issue. The performance of different approaches depends largely on the employed similarity measure. In this paper, the finite ridgelet transform (FRIT) is used to define an edge-sensitive block-distortion similarity measure. Simulation results show that it outperforms conventional criteria and is less sensitive to noise, especially at the edges of images. To speed up computation, a new partial search algorithm based on the energy conservation property of the FRIT is proposed.

  9. Genealogical Information Search by Using Parent Bidirectional Breadth Algorithm and Rule Based Relationship

    OpenAIRE

    Sumitra Nuanmeesri; Chanasak Baitiang,; Phayung Meesad

    2009-01-01

    Genealogical information is a prime historical resource for the study of culture and cultural heritage. Genealogical research generally presents family information and depicts it as a tree diagram. This paper presents the Parent Bidirectional Breadth Algorithm (PBBA) to find the consanguine relationship between two persons. In addition, the paper utilizes a rule-based system to identify the consanguine relationship. The study reveals that PBBA is fast in solving the genealogical information search problem and...
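
The abstract truncates before the details, but a parent-directed bidirectional breadth-first search of the kind the title suggests might look like this (the family data and the shared-ancestor rule are hypothetical illustrations, not the paper's rule base):

```python
from collections import deque

def bidirectional_relationship(parents, a, b, max_depth=10):
    """Breadth-first search upward through parent links from both persons;
    a shared ancestor indicates a consanguine relationship.
    Returns (ancestor, depth from a, depth from b) or None."""
    def ancestors(start):
        seen = {start: 0}
        q = deque([start])
        while q:
            p = q.popleft()
            if seen[p] >= max_depth:
                continue
            for parent in parents.get(p, ()):
                if parent not in seen:
                    seen[parent] = seen[p] + 1
                    q.append(parent)
        return seen

    da, db = ancestors(a), ancestors(b)
    common = set(da) & set(db)
    if not common:
        return None
    anc = min(common, key=lambda p: da[p] + db[p])   # closest shared ancestor
    return anc, da[anc], db[anc]

# hypothetical family: Ann and Bob are siblings, Eve is Ann's child
parents = {"Ann": ["Carol", "Dave"], "Bob": ["Carol", "Dave"], "Eve": ["Ann"]}
result = bidirectional_relationship(parents, "Eve", "Bob")
```

A rule base would then map the two depths (here 2 and 1) to a relationship name such as "niece/uncle".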

  10. Architecture for Knowledge-Based and Federated Search of Online Clinical Evidence

    OpenAIRE

    Coiera, Enrico; Walther, Martin; Nguyen, Ken; Lovell, Nigel H.

    2005-01-01

    Background It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. Objectives The...

  11. Ontology-Aided vs. Keyword-Based Web Searches: A Comparative User Study

    OpenAIRE

    Kamel, Magdi; Lee, Ann; Powers, Ed

    2007-01-01

    Ontologies are formal, explicit descriptions of concepts in a domain of discourse, of the properties of these concepts, and of restrictions on these properties, specified by semantics that follow the "rules" of the domain of knowledge. As such, ontologies are extremely useful as knowledge bases for an application attempting to add context to a particular Web search term. This paper describes such an application and reports the results of a user study designed to compare the effec...

  12. Home-Explorer: Ontology-Based Physical Artifact Search and Hidden Object Detection System

    OpenAIRE

    Bin Guo; Satoru Satake; Michita Imai

    2008-01-01

    A new system named Home-Explorer that searches and finds physical artifacts in a smart indoor environment is proposed. The view on which it is based is artifact-centered and uses sensors attached to everyday artifacts (called smart objects) in the real world. This paper makes two main contributions: First, it addresses the robustness of the embedded sensors, which is seldom discussed in previous smart-artifact research. Because sensors may sometimes be broken or fail to work under certai...

  13. A Factorial Experiment on Scalability of Search-based Software Testing

    OpenAIRE

    Mehrmand, Arash

    2009-01-01

    Software testing is an expensive process that is vital in industry. Constructing the test data in software testing incurs the major cost, and knowing which method to use to generate the test data is very important. This paper discusses the performance of search-based algorithms (preferably genetic algorithms) versus random testing in software test-data generation. A factorial experiment is designed so that we have more than one factor for each experiment we make. Although ...

  14. A Product Feature-based User-Centric Ranking Model for E-Commerce Search

    OpenAIRE

    Ben Jabeur, Lamjed; Soulier, Laure; Tamine, Lynda; Mousset, Paul

    2016-01-01

    International audience During the online shopping process, users search for interesting products in order to quickly access those that fit their needs among a long tail of similar or closely related products. Our contribution addresses head queries that are frequently submitted on e-commerce Web sites. Head queries usually target featured products with several variations, accessories, and complementary products. We present in this paper a product feature-based user-centric model for ...

  15. Development of a Google-Based Search Engine for Data Mining Radiology Reports

    OpenAIRE

    Erinjeri, Joseph P.; Picus, Daniel; Prior, Fred W; Rubin, David A.; Koppel, Paul

    2008-01-01

    The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was ...

  16. Publishing studies: what else?

    Directory of Open Access Journals (Sweden)

    Bertrand Legendre

    2015-07-01

    Full Text Available This paper intends to reposition “publishing studies” in the long process that goes from the beginning of book history to current research on cultural industries. It raises questions about interdisciplinarity and the possibility of considering publishing independently of other sectors of the media and cultural offerings. Publishing is now included in a large range of industries and, at the same time, analyses tend to become more and more segmented according to production sectors and scientific fields. Beyond the problems this double movement creates from the professional point of view, it calls for a questioning of the concept of “publishing studies”.

  17. A Neotropical Miocene Pollen Database Employing Image-Based Search and Semantic Modeling

    Directory of Open Access Journals (Sweden)

    Jing Ginger Han

    2014-08-01

    Full Text Available Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.

  18. Extended-search, Bézier Curve-based Lane Detection and Reconstruction System for an Intelligent Vehicle

    Directory of Open Access Journals (Sweden)

    Xiaoyun Huang

    2015-09-01

    Full Text Available To improve the real-time performance and detection rate of a Lane Detection and Reconstruction (LDR system, an extended-search-based lane detection method and a Bézier curve-based lane reconstruction algorithm are proposed in this paper. The extended search-based lane detection method is designed to search boundary blocks from the initial position, in an upwards direction and along the lane, with small search areas including continuous search, discontinuous search and bending search in order to detect different lane boundaries. The Bézier curve-based lane reconstruction algorithm is employed to describe a wide range of lane boundary forms with comparatively simple expressions. In addition, two Bézier curves are adopted to reconstruct the lanes’ outer boundaries with large curvature variation. The lane detection and reconstruction algorithm — including initial-blocks’ determining, extended search, binarization processing and lane boundaries’ fitting in different scenarios — is verified in road tests. The results show that this algorithm is robust against different shadows and illumination variations; the average processing time per frame is 13 ms. Significantly, it presents an 88.6% high-detection rate on curved lanes with large or variable curvatures, where the accident rate is higher than that of straight lanes.
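
The reconstruction step relies on cubic Bézier curves; a minimal evaluator, with made-up control points standing in for fitted lane-boundary points:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bézier curve at parameter t in [0, 1]:
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# hypothetical control points roughly following a curved lane boundary
p0, p1, p2, p3 = (0.0, 0.0), (1.0, 2.0), (3.0, 2.5), (4.0, 0.5)
mid = cubic_bezier(p0, p1, p2, p3, 0.5)
```

Fitting two such curves, as the abstract describes, lets the outer boundary follow large curvature variation with a compact parameterization.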

  19. Trade Publishing: A Report from the Front.

    Science.gov (United States)

    Fister, Barbara

    2001-01-01

    Reports on the current condition of trade publishing and its future prospects based on interviews with editors, publishers, agents, and others. Discusses academic libraries and the future of trade publishing, including questions relating to electronic books, intellectual property, and social and economic benefits of sharing information…

  20. THE TYPES OF PUBLISHING SLOGANS

    Directory of Open Access Journals (Sweden)

    Ryzhov Konstantin Germanovich

    2015-03-01

    Full Text Available The author of the article focuses his attention on publishing slogans posted on the official websites of 100 present-day Russian publishing houses, which have not yet been studied in the specialist literature. The author has developed his own classification of publishing slogans based on the results of the analysis and on current scientific views on the classification of slogans. The examined items are classified as autonomous or text-dependent, according to their relationship with the advertising text; marketable, corporative, or mixed, according to the presentation subject; rational, emotional, or complex, depending on the method of influence upon the recipient; slogan-presentation, slogan-assurance, slogan-identifier, slogan-appraisal, or slogan-appeal, depending on the communicative strategy; slogans consisting of one sentence or of two or more sentences; and Russian or foreign ones. The analysis of all the kinds of slogans in the material allowed the author to determine the dominant features of the Russian publishing slogan: it is a sentence autonomous in relation to the advertising text that nevertheless presents the publishing output, influences the recipient emotionally, actualizes the communicative strategy of presenting the publishing house's distinguishing features, gives assurance to the target audience, and distinguishes the advertised subject from competitors.

  1. Television Transmitter Locations, parcel data base attribute; building type, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Television Transmitter Locations dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of...

  2. Parcels and Land Ownership, Parcel data based off Landnet and survey grade GPS, Published in unknown, 1:7200 (1in=600ft) scale, Bayfield County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Parcels and Land Ownership dataset, published at 1:7200 (1in=600ft) scale, was produced all or in part from Published Reports/Deeds information as of unknown....

  3. Radio Transmitters and Tower Locations, parcel data base attribute; building type, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Radio Transmitters and Tower Locations dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as...

  4. SVG-Based Web Publishing

    Science.gov (United States)

    Gao, Jerry Z.; Zhu, Eugene; Shim, Simon

    2003-01-01

    With the increasing application of the Web in e-commerce, advertising, and publication, new technologies are needed to overcome the limitations of current Web graphics. SVG (Scalable Vector Graphics) is a revolutionary solution to the existing problems in current web technology. It provides precise, high-resolution web graphics using plain-text commands. It sets a new standard for web graphic formats, allowing complicated graphics to be presented with rich text fonts and colors, high printing quality, and dynamic layout capabilities. This paper provides a tutorial overview of SVG technology and its essential features, capabilities, and advantages, and reports a comparative study between SVG and other web graphics technologies.

  5. An Improved Harmony Search Based on Teaching-Learning Strategy for Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    2013-01-01

    Full Text Available Harmony search (HS) is an emerging population-based metaheuristic algorithm inspired by the music improvisation process. The HS method has developed rapidly and been applied widely during the past decade. In this paper, an improved global harmony search algorithm, named harmony search based on teaching-learning (HSTL), is presented for high-dimension complex optimization problems. In the HSTL algorithm, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and random mutation) are employed to maintain a proper balance between convergence and population diversity, and a dynamic strategy is adopted to change the parameters. The proposed HSTL algorithm is investigated and compared with three other state-of-the-art HS optimization algorithms. Furthermore, to demonstrate robustness and convergence, the success rate and convergence behavior are also studied. Experimental results on 31 complex benchmark functions demonstrate that the HSTL method has strong convergence and robustness and a better balance between space exploration and local exploitation on high-dimension complex optimization problems.
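
For orientation, a sketch of the baseline HS that HSTL extends (the teaching-learning and dynamic-parameter strategies are omitted; parameter values are conventional defaults, not the paper's):

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=5000, seed=3):
    """Basic harmony search: improvise a new harmony coordinate by coordinate,
    then replace the worst member of the harmony memory if the new one is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # harmony memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        c = f(new)
        worst = max(range(hms), key=lambda i: costs[i])
        if c < costs[worst]:
            memory[worst], costs[worst] = new, c
    best = min(range(hms), key=lambda i: costs[i])
    return memory[best], costs[best]

sphere = lambda x: sum(v * v for v in x)
sol, val = harmony_search(sphere, dim=5, lo=-10, hi=10)
```

HSTL's additions act on exactly these steps: how memory is consulted, how pitches are adjusted, and how the control parameters evolve over the run.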

  6. Search Fatigue

    OpenAIRE

    Bruce Ian Carlin; Florian Ederer

    2012-01-01

    Consumer search is not only costly but also tiring. We characterize the intertemporal effects that search fatigue has on oligopoly prices, product proliferation, and the provision of consumer assistance (i.e., advice). These effects vary based on whether search is all-or-nothing or sequential in nature, whether learning takes place, and whether consumers exhibit brand loyalty. We perform welfare analysis and highlight the novel empirical implications that our analysis generates.

  7. Personal publishing and media literacy

    OpenAIRE

    2005-01-01

    Based on a discussion of the terms “digital competence” and “media competence” this paper presents challenges of designing virtual learning arenas based on principles known from weblogs and wikis. Both are personal publishing forms that seem promising in an educational context. The paper outlines a learning environment designed to make it possible for individual users to organize their own learning environments and enabling them to utilize web-based forms of personal communication...

  8. Case and Relation (CARE) Based Page Rank Algorithm for Semantic Web Search Engines

    Directory of Open Access Journals (Sweden)

    N. Preethi

    2012-05-01

    Full Text Available Web information retrieval deals with techniques for finding relevant web pages for a given query from a collection of documents. Search engines have become the most helpful tool for obtaining useful information from the Internet. The next-generation Web architecture, represented by the Semantic Web, provides a layered architecture that allows data to be reused across applications. The proposed architecture uses a hybrid methodology named the Case and Relation (CARE) based Page Rank algorithm, which draws on past problem-solving experience maintained in the case base to form best-matching relations, then uses them to generate graphs and spanning forests that assign a relevance score to pages.
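
The CARE case-base construction is the paper's contribution; the ranking core it feeds is standard PageRank power iteration, sketched here on a toy link graph:

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over a dict mapping page -> list of outlinks."""
    nodes = sorted({n for s, ts in links.items() for n in [s, *ts]})
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for s, ts in links.items():
            if ts:
                share = d * rank[s] / len(ts)    # split rank over outlinks
                for t in ts:
                    new[t] += share
            else:                                # dangling node: spread evenly
                for n in nodes:
                    new[n] += d * rank[s] / len(nodes)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
r = pagerank(links)
```

In the CARE variant, the graph being ranked would come from the relations retrieved from the case base rather than from raw hyperlinks.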

  9. MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2013-10-01

    Full Text Available Hyperlink Induced Topic Search (HITS) is one of the most authoritative and widely used personalized ranking algorithms on networks. The HITS algorithm ranks nodes according to power iteration and has high computational complexity. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computation of the HITS ranking greatly reduces computing resources while keeping high accuracy, and is significantly better than related work.
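
For reference, the deterministic power-iteration HITS that the Monte Carlo method approximates can be sketched as follows (the Monte Carlo estimator itself is not reproduced here):

```python
def hits(links, iters=50):
    """Power-iteration HITS over a dict mapping node -> list of outlinks.
    Returns (hub, authority) score dicts, each L2-normalized."""
    nodes = sorted({n for s, ts in links.items() for n in [s, *ts]})
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # authority: sum of hub scores of nodes linking in
        auth = {n: sum(hub[s] for s, ts in links.items() if n in ts) for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # hub: sum of authority scores of nodes linked to
        hub = {n: sum(auth[t] for t in links.get(n, ())) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth

links = {"p": ["x", "y"], "q": ["x"], "x": [], "y": []}
hub, auth = hits(links)
```

MCHITS replaces these exact matrix-style sweeps with random-walk sampling, which is where the resource savings come from.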

  10. Design and Implementation of the Personalized Search Engine Based on the Improved Behavior of User Browsing

    OpenAIRE

    Wei-Chao Li; Jin-Guang Liu

    2013-01-01

    An improved user profile based on user browsing behavior is proposed in this study. The profile takes overall account of the user's web-page browsing behavior, level of interest in keywords, and short-term and long-term interests. The improved user profile is embedded in a personalized search engine system. The basic framework and the basic functional modules of the system are described in detail in this study. A demon...

  11. Handling Conflicts in Depth-First Search for LTL Tableau to Debug Compliance Based Languages

    Directory of Open Access Journals (Sweden)

    Francois Hantry

    2011-09-01

    Full Text Available Providing adequate tools to tackle the problem of inconsistent compliance rules is a critical research topic. This problem is of paramount importance to achieve automatic support for early declarative design and to support evolution of rules in contract-based or service-based systems. In this paper we investigate the problem of extracting temporal unsatisfiable cores in order to detect the inconsistent part of a specification. We extend conflict-driven SAT-solver to provide a new conflict-driven depth-first-search solver for temporal logic. We use this solver to compute LTL unsatisfiable cores without re-exploring the history of the solver.

  12. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    Science.gov (United States)

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach to contrast enhancement. The generalized masking formulation uses a static scale value, which limits the gain of contrast. In this paper, we propose Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using BrainWeb and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results as compared with other reported literature. PMID:26945462
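
Classic unsharp masking with a static scale, the baseline the paper improves on, can be sketched as follows (the fixed scale and box blur are simplifications; OWBM instead selects the gain via enhanced cuckoo search in the wavelet domain):

```python
import numpy as np

def unsharp_mask(img, scale=1.5, kernel=3):
    """Classic unsharp masking: sharpened = img + scale * (img - blurred),
    using a simple box blur with edge-replicated borders."""
    pad = kernel // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(kernel):
        for dx in range(kernel):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= kernel * kernel
    return np.clip(img + scale * (img - blurred), 0, 255)

img = np.zeros((8, 8)); img[:, 4:] = 200.0   # vertical step edge
sharp = unsharp_mask(img)
```

On the step edge, the dark side is pushed down and the bright side up, which is exactly the contrast gain that the static `scale` caps and the paper's search optimizes per image.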

  13. Based on A* and Q-Learning Search and Rescue Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ruiyuan Fan

    2012-11-01

    Full Text Available For search and rescue robot navigation in unknown environments, a bionic self-learning algorithm based on A* and Q-learning is put forward. The algorithm utilizes the Growing Self-Organizing Map (GSOM) to build a topological cognitive map of the environment. The heuristic A* algorithm is used for global path planning, and when the local environment changes, Q-learning is used for local path planning. The robot thereby acquires self-learning skill through study and training, much as a human or animal does, and finds a free path from the initial state to the target state in an unknown environment. Theoretical analysis establishes the validity of the method, and simulation results show that the robot acquires the navigation capability.
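
The global-planning half of the method is plain A*; a grid-world sketch with a Manhattan heuristic (the GSOM cognitive map and the Q-learning local replanner are omitted):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = obstacle), Manhattan heuristic.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = set()
    while open_set:
        f, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

In the paper's scheme, a local change detected along this path would trigger the Q-learning replanner instead of a full A* re-run.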

  14. A Fast LSF Search Algorithm Based on Interframe Correlation in G.723.1

    Directory of Open Access Journals (Sweden)

    Kulkarni Jaydeep P

    2004-01-01

    Full Text Available We explain a time-complexity reduction algorithm that improves the line spectral frequencies (LSF) search procedure on the unit circle for low bit rate speech codecs. The algorithm is based on the strong interframe correlation exhibited by LSFs. The fixed-point C code of ITU-T Recommendation G.723.1, which uses the "real root algorithm", was modified, and the results were verified on an ARM7TDMI general-purpose RISC processor. The algorithm works for all test vectors provided by the International Telecommunication Union-Telecommunication (ITU-T sector as well as for real speech. The average time reduction in the search computation was found to be approximately 20%.

  15. Towards a Complexity Theory of Randomized Search Heuristics: Ranking-Based Black-Box Complexity

    CERN Document Server

    Doerr, Benjamin

    2011-01-01

    Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. A big step forward would be a useful complexity theory for such algorithms. We enrich the two existing black-box complexity notions due to Wegener and other authors by the restrictions that not actual objective values, but only the relative quality of the previously evaluated solutions may be taken into account by the algorithm. Many randomized search heuristics belong to this class of algorithms. We show that the new ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold.

  16. Local search-based heuristics for the multiobjective multidimensional knapsack problem

    Directory of Open Access Journals (Sweden)

    Dalessandro Soares Vianna

    2012-01-01

    Full Text Available In real optimization problems it is generally desirable to optimize more than one performance criterion (or objective) at the same time. The goal of multiobjective combinatorial optimization (MOCO) is to optimize r > 1 objectives simultaneously. As in the single-objective case, the use of heuristic/metaheuristic techniques seems the most promising approach to MOCO problems because of their efficiency, generality, and relative simplicity of implementation. In this work, we develop algorithms based on the Greedy Randomized Adaptive Search Procedure (GRASP) and Iterated Local Search (ILS) metaheuristics for the multiobjective knapsack problem. Computational experiments on benchmark instances show that the proposed algorithms are very robust and outperform other heuristics in terms of solution quality and running time.
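
A single-objective sketch of the GRASP construction phase on a tiny 0/1 knapsack instance (the multiobjective handling and the ILS phase are omitted; the instance numbers are invented):

```python
import random

def grasp_knapsack(values, weights, capacity, alpha=0.8, iters=50, seed=7):
    """GRASP construction for the 0/1 knapsack: repeatedly build a greedy
    randomized solution, picking each item from a restricted candidate list
    (RCL) of items whose value/weight ratio is within alpha of the best."""
    rng = random.Random(seed)
    best_set, best_val = set(), 0
    for _ in range(iters):
        chosen, load, val = set(), 0, 0
        cand = [i for i in range(len(values)) if weights[i] <= capacity]
        while cand:
            ratios = [values[i] / weights[i] for i in cand]
            lo, hi = min(ratios), max(ratios)
            rcl = [i for i, r in zip(cand, ratios) if r >= hi - alpha * (hi - lo)]
            pick = rng.choice(rcl)
            chosen.add(pick); load += weights[pick]; val += values[pick]
            cand = [i for i in cand if i != pick and load + weights[i] <= capacity]
        if val > best_val:
            best_set, best_val = chosen, val
    return best_set, best_val

values, weights = [6, 5, 9], [2, 2, 4]
sol, val = grasp_knapsack(values, weights, capacity=4)
```

The full method would score candidates against several objectives at once and hand each constructed solution to ILS for local improvement.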

  17. SearchLight: a freely available web-based quantitative spectral analysis tool (Conference Presentation)

    Science.gov (United States)

    Prabhat, Prashant; Peet, Michael; Erdogan, Turan

    2016-03-01

    In order to design a fluorescence experiment, typically the spectra of a fluorophore and of a filter set are overlaid on a single graph and the spectral overlap is evaluated intuitively. However, in a typical fluorescence imaging system the fluorophores and optical filters are not the only wavelength dependent variables - even the excitation light sources have been changing. For example, LED Light Engines may have a significantly different spectral response compared to the traditional metal-halide lamps. Therefore, for a more accurate assessment of fluorophore-to-filter-set compatibility, all sources of spectral variation should be taken into account simultaneously. Additionally, intuitive or qualitative evaluation of many spectra does not necessarily provide a realistic assessment of the system performance. "SearchLight" is a freely available web-based spectral plotting and analysis tool that can be used to address the need for accurate, quantitative spectral evaluation of fluorescence measurement systems. This tool is available at: http://searchlight.semrock.com/. Based on a detailed mathematical framework [1], SearchLight calculates signal, noise, and signal-to-noise ratio for multiple combinations of fluorophores, filter sets, light sources and detectors. SearchLight allows for qualitative and quantitative evaluation of the compatibility of filter sets with fluorophores, analysis of bleed-through, identification of optimized spectral edge locations for a set of filters under specific experimental conditions, and guidance regarding labeling protocols in multiplexing imaging assays. Entire SearchLight sessions can be shared with colleagues and collaborators and saved for future reference. [1] Anderson, N., Prabhat, P. and Erdogan, T., Spectral Modeling in Fluorescence Microscopy, http://www.semrock.com (2010).

  18. From protocol to published report

    DEFF Research Database (Denmark)

    Berendt, Louise; Callréus, Torbjörn; Petersen, Lene Grejs;

    2016-01-01

    %) of the sample, and 87% (95% CI: 80 to 94%) of the trials were hospital based. CONCLUSIONS: Overall consistency between protocols and their corresponding published reports was low. Motivators for the inconsistencies are unknown but do not seem restricted to economic incentives....

  19. Elearning and digital publishing

    CERN Document Server

    Ching, Hsianghoo Steve; Mc Naught, Carmel

    2006-01-01

"ELearning and Digital Publishing" will occupy a unique niche in the literature accessed by library and publishing specialists, and by university teachers and planners. It examines the interfaces between the work done by four groups of university staff who have in the past been quite separate from, or only marginally related to, each other - library staff, university teachers, university policy makers, and staff who work in university publishing presses. All four groups are directly and intimately connected with the main functions of universities - the creation, management and dissemination

  20. Data Sharing & Publishing at Nature Publishing Group

    Science.gov (United States)

    VanDecar, J. C.; Hrynaszkiewicz, I.; Hufton, A. L.

    2015-12-01

In recent years, the research community has come to recognize that upon-request data sharing has important limitations1,2. The Nature-titled journals feel that researchers have a duty to share data without undue qualifications, in a manner that allows others to replicate and build upon their published findings. Historically, the Nature journals have been strong supporters of data deposition in communities with existing data mandates, and have required data sharing upon request in all other cases. To help address some of the limitations of upon-request data sharing, the Nature titles have strengthened their existing data policies and forged a new partnership with Scientific Data, to promote wider data sharing in discoverable, citeable and reusable forms, and to ensure that scientists get appropriate credit for sharing3. Scientific Data is a new peer-reviewed journal for descriptions of research datasets, which works with a wide range of public data repositories4. Articles at Scientific Data may either expand on research publications at other journals or may be used to publish new datasets. The Nature Publishing Group has also signed the Joint Declaration of Data Citation Principles5, and Scientific Data is our first journal to include formal data citations. We are currently in the process of adding data citation support to our various journals. 1 Wicherts, J. M., Borsboom, D., Kats, J. & Molenaar, D. The poor availability of psychological research data for reanalysis. Am. Psychol. 61, 726-728, doi:10.1037/0003-066x.61.7.726 (2006). 2 Vines, T. H. et al. Mandated data archiving greatly improves access to research data. FASEB J. 27, 1304-1308, doi:10.1096/fj.12-218164 (2013). 3 Data-access practices strengthened. Nature 515, 312, doi:10.1038/515312a (2014). 4 More bang for your byte. Sci. Data 1, 140010, doi:10.1038/sdata.2014.10 (2014). 5 Data Citation Synthesis Group: Joint Declaration of Data Citation Principles. (FORCE11, San Diego, CA, 2014).

  1. About EBSCO Publishing

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

EBSCO Publishing, headquartered in Ipswich, Massachusetts, is an aggregator of premium full-text content. EBSCO Publishing's core business is providing online databases via EBSCOhost to libraries worldwide.

  2. Optimization of fuel cells for BWR based in Tabu modified search

    International Nuclear Information System (INIS)

Advances in the development of a computational system for the design and optimization of fuel assembly cells for Boiling Water Reactors (BWR) are presented. The optimization method is based on the Tabu Search (TS) technique, implemented in progressive stages designed to accelerate the search and reduce the time used in the optimization process. An algorithm was programmed to create the first solution. In addition, to diversify the generation of the random numbers required by TS, the Makoto Matsumoto function was used, with excellent results. The objective function has been coded in such a way that it can be adapted to optimize different parameters, such as the average enrichment or the radial power peaking factor. The neutronic evaluation of the cells is carried out in detail by means of the HELIOS simulator. The work describes the main characteristics of the system and presents an application example: the design of a cell of 10x10 fuel rods with 10 different enrichment compositions and gadolinium content. (Author)
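The Tabu Search kernel that such a system builds on can be sketched generically. The toy binary objective, tenure, and neighborhood below are illustrative stand-ins, not the HELIOS-evaluated cell objective from the abstract.

```python
import random

# Minimal Tabu Search sketch on a toy binary problem: the fuel-cell
# objective (enrichment, peaking factor) is replaced by a hypothetical
# quadratic cost; tenure and neighborhood are assumptions.
def tabu_search(n=12, iters=200, tenure=7, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    cost = lambda v: (sum(v) - n // 3) ** 2   # toy objective: pick n/3 ones
    best, best_cost = x[:], cost(x)
    tabu = {}                                  # index -> iteration it expires
    for it in range(iters):
        # Evaluate all single bit-flip neighbors not on the tabu list
        moves = [i for i in range(n) if tabu.get(i, -1) < it]
        i = min(moves, key=lambda i: cost(x[:i] + [1 - x[i]] + x[i + 1:]))
        x[i] = 1 - x[i]
        tabu[i] = it + tenure                  # forbid reversing this move
        if cost(x) < best_cost:
            best, best_cost = x[:], cost(x)
    return best, best_cost

best, c = tabu_search()
print(c)  # → 0: the optimum of the toy objective
```

The tabu list is what distinguishes this from plain hill climbing: recently flipped bits may not be flipped back for `tenure` iterations, forcing the search out of local minima.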

  3. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.
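PhenomeNET's similarity is computed semantically over phenotype ontologies; as a much-simplified stand-in, plain Jaccard similarity over sets of phenotype term IDs illustrates the ranking idea. The profiles and HP term IDs below are invented for illustration.

```python
# Simplified phenotype-profile similarity search: real semantic measures
# weight ontology terms by information content, but set-based Jaccard
# similarity shows the ranking mechanics. All profiles are hypothetical.
profiles = {
    "mouse_model_A": {"HP:0001250", "HP:0001263", "HP:0000252"},
    "mouse_model_B": {"HP:0001250", "HP:0002119"},
    "drug_effect_C": {"HP:0000252", "HP:0002119"},
}

def jaccard(a, b):
    # Fraction of shared terms over all terms in either profile
    return len(a & b) / len(a | b)

def rank_similar(query, repo):
    # Most phenotypically similar profiles first
    return sorted(repo, key=lambda k: jaccard(query, repo[k]), reverse=True)

patient = {"HP:0001250", "HP:0001263"}  # hypothetical patient phenotypes
print(rank_similar(patient, profiles))  # mouse_model_A ranks first
```

Scaling this to a repository of millions of profiles in real time, as PhenomeNET 2 does, requires precomputed indexes rather than this brute-force scan.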

  4. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, DFIRM's from NC Floodplain Mapping Program, Published in 2009, 1:12000 (1in=1000ft) scale, Iredell County GIS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from LIDAR...

  5. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA Flood Insurance Rate Maps, Published in 2005, 1:24000 (1in=2000ft) scale, Lafayette County Land Records.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  6. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, FEMA Floodway and Flood Boundary Maps, Published in 2005, 1:24000 (1in=2000ft) scale, Lafayette County Land Records.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  7. Geodetic Control Points, Provide a base of reference for latitude, longitude and height throughout the United States., Published in 2004, 1:24000 (1in=2000ft) scale, Louisiana State University.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Geodetic Control Points dataset, published at 1:24000 (1in=2000ft) scale as of 2004. It is described as 'Provide a base of reference for latitude, longitude...

  8. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, Federal Emergency Management Agency (FEMA) - Flood Insurance Rate Maps (FIRM), Published in 2011, 1:1200 (1in=100ft) scale, Polk County, Wisconsin.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Other...

  9. Cellular Phone Towers, Serve as base information for use in GIS systems for general planning, analytical, and research purposes., Published in 2007, 1:24000 (1in=2000ft) scale, Louisiana State University.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cellular Phone Towers dataset, published at 1:24000 (1in=2000ft) scale as of 2007. It is described as 'Serve as base information for use in GIS systems for...

  10. Contours, Elevation contour data are a fundamental base map layer for large scale mapping and GIS analysis., Published in 2001, 1:24000 (1in=2000ft) scale, Louisiana State University.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Contours dataset, published at 1:24000 (1in=2000ft) scale as of 2001. It is described as 'Elevation contour data are a fundamental base map layer for large...

  11. Energy transmission modes based on Tabu search and particle swarm hybrid optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    LI xiang; CUI Ji-feng; QI Jian-xun; YANG Shang-dong

    2007-01-01

In China, economic centers are far from energy storage bases, so selecting a proper energy transfer mode is significant for improving the efficiency of energy use. To solve this problem, an objective function for optimizing energy use efficiency was first established, and an optimal allocation model based on energy transfer modes was then proposed, from which a new Tabu search and particle swarm hybrid optimization of energy transmission was obtained. Based on the above discussion, some proposals were put forward for the optimal allocation of energy transfer modes in China. Compared with three traditional methods, based respectively on regional price differences, freight rates and annual cost, the proposed method can enhance the economic efficiency of energy transfer by 3.14%, 5.78% and 6.01%, respectively.

  12. Nephrogenic systemic fibrosis: risk factors suggested from Japanese published cases

    DEFF Research Database (Denmark)

    Tsushima, Y; Kanal, E; Thomsen, H S

    2010-01-01

    The aim of this article is to review the published cases of nephrogenic systemic fibrosis (NSF) in Japan. The Japanese medical literature database and MedLine were searched using the keywords NSF and nephrogenic fibrosing dermopathy (January 2000 to March 2009). Reports in peer-reviewed journals...... and meeting abstracts were included, and cases with biopsy confirmation were selected. 14 biopsy-verified NSF cases were found. In seven of eight patients reported after the association between gadolinium-based contrast agent (GBCA) and NSF was proposed, GBCA administration was documented: five received only...

  13. Category-based guidance of spatial attention during visual search for feature conjunctions.

    Science.gov (United States)

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

The question of whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record) PMID: 27213833

  14. Open-Access Publishing

    Directory of Open Access Journals (Sweden)

    Nedjeljko Frančula

    2013-06-01

Full Text Available Nature, one of the most prominent scientific journals, dedicated one of its issues to recent changes in scientific publishing (Vol. 495, Issue 7442, 27 March 2013). Its editors stressed that the words "technology" and "revolution" are closely related when it comes to scientific publishing. In addition, the transformation of research publishing is not so much a revolution as a war of attrition in which all sides are entrenched. The most important change they refer to is the open-access model, in which an author or an institution pays in advance for publishing a paper in a journal, and the paper is then available to users on the Internet free of charge. According to preliminary results of a survey of 23,000 scientists conducted by the publisher of Nature, 45% believe all papers should be published in open access, but at the same time 22% would not allow the use of papers for commercial purposes. Attitudes toward open access vary across scientific disciplines, leading the editors to conclude that the revolution still does not suit everyone.

  15. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  16. A simple heuristic for Internet-based evidence search in primary care: a randomized controlled trial

    Science.gov (United States)

    Eberbach, Andreas; Becker, Annette; Rochon, Justine; Finkemeler, Holger; Wagner, Achim; Donner-Banzhoff, Norbert

    2016-01-01

    Background General practitioners (GPs) are confronted with a wide variety of clinical questions, many of which remain unanswered. Methods In order to assist GPs in finding quick, evidence-based answers, we developed a learning program (LP) with a short interactive workshop based on a simple three-step-heuristic to improve their search and appraisal competence (SAC). We evaluated the LP effectiveness with a randomized controlled trial (RCT). Participants (intervention group [IG] n=20; control group [CG] n=31) rated acceptance and satisfaction and also answered 39 knowledge questions to assess their SAC. We controlled for previous knowledge in content areas covered by the test. Results Main outcome – SAC: within both groups, the pre–post test shows significant (P=0.00) improvements in correctness (IG 15% vs CG 11%) and confidence (32% vs 26%) to find evidence-based answers. However, the SAC difference was not significant in the RCT. Other measures Most workshop participants rated “learning atmosphere” (90%), “skills acquired” (90%), and “relevancy to my practice” (86%) as good or very good. The LP-recommendations were implemented by 67% of the IG, whereas 15% of the CG already conformed to LP recommendations spontaneously (odds ratio 9.6, P=0.00). After literature search, the IG showed a (not significantly) higher satisfaction regarding “time spent” (IG 80% vs CG 65%), “quality of information” (65% vs 54%), and “amount of information” (53% vs 47%). Conclusion Long-standing established GPs have a good SAC. Despite high acceptance, strong learning effects, positive search experience, and significant increase of SAC in the pre–post test, the RCT of our LP showed no significant difference in SAC between IG and CG. However, we suggest that our simple decision heuristic merits further investigation.

  17. Algorithm of axial fuel optimization based in progressive steps of turned search

    International Nuclear Information System (INIS)

The development of an algorithm for the axial optimization of fuel for boiling water reactors (BWR) is presented. The algorithm is based on a serial optimization process in which the best solution of each stage is the starting point of the following stage. The objective function of each stage is adapted to orient the search toward better values of one or two parameters, leaving the rest as restrictions. As the optimization stages advance, the fineness of the evaluation of the investigated designs is increased. The algorithm consists of three stages: the first uses Genetic Algorithms, and the two following ones use Tabu Search. The objective function of the first stage seeks to minimize the average enrichment of the assembly and to meet the energy generation specified for the operation cycle, while not violating any of the design-base limits. In the following stages, the objective function seeks to minimize the power peaking factor (PPF) and to maximize the shutdown margin (SDM), taking as restrictions the average enrichment obtained for the best design of the first stage and the remaining limits. The third stage, very similar to the previous one, begins with the design of the previous stage but carries out a search of the shutdown margin at different exposure steps with three-dimensional (3D) calculations. An application is presented to the design of the fresh assembly for the fourth fuel reload of Unit 1 of the Laguna Verde power plant (U1-CLV). The obtained results show an advance in the handling of optimization methods and in the construction of the objective functions that should be used for the different design stages of the fuel assemblies. (Author)

  18. Query Intent Disambiguation of Keyword-Based Semantic Entity Search in Dataspaces

    Institute of Scientific and Technical Information of China (English)

    Dan Yang; De-Rong Shen; Ge Yu; Yue Kou; Tie-Zheng Nie

    2013-01-01

Keyword query has attracted much research attention due to its simplicity and wide applications. The inherent ambiguity of keyword queries is prone to producing unsatisfactory query results. Moreover, some existing techniques for Web queries, keyword queries in relational databases and keyword queries in XML databases cannot be completely applied to keyword queries in dataspaces. We therefore propose KeymanticES, a novel keyword-based semantic entity search mechanism for dataspaces which combines both keyword query and semantic query features. We focus on the query intent disambiguation problem and propose a novel three-step approach to resolve it. Extensive experimental results show the effectiveness and correctness of our proposed approach.

  19. Color Octet Electron Search Potential of the FCC Based e-p Colliders

    CERN Document Server

    Acar, Y C; Oner, B B; Sultansoy, S

    2016-01-01

Resonant production of the color octet electron, e_{8}, at the FCC-based ep colliders has been analyzed. It is shown that the e-FCC will cover a much wider region of e_{8} masses compared to the LHC. Moreover, with the highest electron beam energy, the e_{8} search potential of the e-FCC exceeds that of the FCC pp collider. If e_{8} is discovered earlier by the FCC pp collider, the e-FCC will provide an opportunity to obtain very important additional information. For example, the compositeness scale can be probed up to the hundreds-of-TeV region.

  20. EMBANKS: Towards Disk Based Algorithms For Keyword-Search In Structured Databases

    OpenAIRE

    Gupta, Nitin

    2011-01-01

    In recent years, there has been a lot of interest in the field of keyword querying relational databases. A variety of systems such as DBXplorer [ACD02], Discover [HP02] and ObjectRank [BHP04] have been proposed. Another such system is BANKS, which enables data and schema browsing together with keyword-based search for relational databases. It models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. The size of the database graph that BANKS uses is ...

  1. A method of characterizing network topology based on the breadth-first search tree

    Science.gov (United States)

    Zhou, Bin; He, Zhe; Wang, Nianxin; Wang, Bing-Hong

    2016-05-01

A method based on the breadth-first search tree is proposed in this paper to characterize the hierarchical structure of a network. In this method, a similarity coefficient is defined to quantitatively distinguish networks and to quantitatively measure the topology stability of the network generated by a model. Applications of the method are discussed for the ER random network, the WS small-world network and the BA scale-free network. The method will be helpful for describing network topology in depth and provides a starting point for researching the topology similarity and isomorphism of networks.
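The breadth-first search tree underlying the method assigns every node a layer (its depth from the root); comparing layer-size profiles is one simple way such a similarity coefficient could be built. The paper's exact coefficient is not reproduced here; this sketch shows only the BFS-tree construction on a toy graph.

```python
from collections import deque

def bfs_layers(adj, root):
    """Return the number of nodes in each layer of the BFS tree from root."""
    depth = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in depth:          # first visit fixes the BFS-tree depth
                depth[v] = depth[u] + 1
                q.append(v)
    layers = {}
    for node, d in depth.items():
        layers[d] = layers.get(d, 0) + 1
    return [layers[d] for d in sorted(layers)]

# Toy graph: a 4-cycle 0-1-3-2-0
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_layers(adj, 0))  # → [1, 2, 1]: root, two at depth 1, one at depth 2
```

Two networks whose BFS trees have very different layer profiles from comparable roots are structurally dissimilar, which is the intuition a tree-based similarity coefficient formalizes.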

  2. A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling

    DEFF Research Database (Denmark)

    Clausen, Tommy

We consider a generalized version of the shift design problem where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm....
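The simulated-annealing core of such a local search framework can be sketched generically. The staffing toy problem, neighborhood move, and cooling schedule below are assumptions for illustration, not the paper's shift design model.

```python
import math
import random

# Generic simulated-annealing loop: accept improving moves always, and
# worsening moves with probability exp(-delta/T), where T cools over time.
def anneal(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=2000, seed=0):
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_c = x, cost(x)
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_c:
                best, best_c = x, cost(x)
        t *= cooling
    return best, best_c

# Toy stand-in for shift design: choose integer staffing levels for three
# periods to match a demand profile (real shift design covers a
# multiskilled demand curve under many constraints).
demand = [3, 5, 4]
cost = lambda s: sum(abs(a - b) for a, b in zip(s, demand))
def neighbor(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + [max(0, s[i] + rng.choice([-1, 1]))] + s[i + 1:]

best, c = anneal(cost, neighbor, [0, 0, 0])
print(best, c)  # best staffing found and its residual mismatch
```

The "multiple neighborhoods" idea of the paper would correspond to `neighbor` drawing from several move types (shift start, length, skill assignment) instead of the single ±1 move shown here.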

  3. Example-Based Sequence Diagrams to Colored Petri Nets Transformation Using Heuristic Search

    Science.gov (United States)

    Kessentini, Marouane; Bouchoucha, Arbi; Sahraoui, Houari; Boukadoum, Mounir

    Dynamic UML models like sequence diagrams (SD) lack sufficient formal semantics, making it difficult to build automated tools for their analysis, simulation and validation. A common approach to circumvent the problem is to map these models to more formal representations. In this context, many works propose a rule-based approach to automatically translate SD into colored Petri nets (CPN). However, finding the rules for such SD-to-CPN transformations may be difficult, as the transformation rules are sometimes difficult to define and the produced CPN may be subject to state explosion. We propose a solution that starts from the hypothesis that examples of good transformation traces of SD-to-CPN can be useful to generate the target model. To this end, we describe an automated SD-to-CPN transformation method which finds the combination of transformation fragments that best covers the SD model, using heuristic search in a base of examples. To achieve our goal, we combine two algorithms for global and local search, namely Particle Swarm Optimization (PSO) and Simulated Annealing (SA). Our empirical results show that the new approach allows deriving the sought CPNs with at least equal performance, in terms of size and correctness, to that obtained by a transformation rule-based tool.

  4. VIS-PROCUUS: A NOVEL PROFILING SYSTEM FOR INSTIGATING USER PROFILES FROM SEARCH ENGINE LOGS BASED ON QUERY SENSE

    Directory of Open Access Journals (Sweden)

    Dr.S.K.JAYANTHI,

    2011-06-01

Full Text Available Most commercial search engines return roughly the same results for the same query, regardless of the user's real interest. This paper focuses on a user profiling strategy so that browsers can obtain web search results based on their profiles in visual mode. Users with similar interests can be mined from the concept-based user profiles to perform mutual filtering, so that browsers sharing the same ideas and domain can share their knowledge. The interest and domain of the users can be obtained from the existing user profiles, and search engine personalization is the focus of this paper. Finally, the concept-based user profiles can be incorporated into the vis- (visual) ranking algorithm of a search engine so that search results can be ranked according to individual users' interests and displayed in visual mode.

  5. Ligand-Based Virtual Screening in a Search for Novel Anti-HIV-1 Chemotypes.

    Science.gov (United States)

    Kurczyk, Agata; Warszycki, Dawid; Musiol, Robert; Kafel, Rafał; Bojarski, Andrzej J; Polanski, Jaroslaw

    2015-10-26

    In a search for new anti-HIV-1 chemotypes, we developed a multistep ligand-based virtual screening (VS) protocol combining machine learning (ML) methods with the privileged structures (PS) concept. In its learning step, the VS protocol was based on HIV integrase (IN) inhibitors fetched from the ChEMBL database. The performances of various ML methods and PS weighting scheme were evaluated and applied as VS filtering criteria. Finally, a database of 1.5 million commercially available compounds was virtually screened using a multistep ligand-based cascade, and 13 selected unique structures were tested by measuring the inhibition of HIV replication in infected cells. This approach resulted in the discovery of two novel chemotypes with moderate antiretroviral activity, that, together with their topological diversity, make them good candidates as lead structures for future optimization.

  6. The ship-borne infrared searching and tracking system based on the inertial platform

    Science.gov (United States)

    Li, Yan; Zhang, Haibo

    2011-08-01

In modern electronic warfare, when the radar system is jammed or kept in a half-silent state, guidance precision drops badly, so equipment that depends on electronic guidance cannot strike incoming targets accurately. Electro-optical devices are needed to make up for this shortcoming. However, when interference occurs during radar guidance, the electro-optical equipment is affected by roll, pitch and yaw rotation, and the target may stay outside the field of view of the electro-optical devices for a long time, so the infrared electro-optical equipment cannot exert its advantages and the weapon-control system cannot "reverse bring" the missile against incoming targets. Conventional ship-borne infrared systems are therefore unable to track incoming targets quickly, and their electro-optical countermeasure capability declines heavily. Here we provide a brand-new control algorithm for semi-automatic searching and infrared tracking based on an inertial navigation platform, which is now applied successfully in our XX infrared electro-optical searching and tracking system. The algorithm is divided into two main steps. First, the manual mode turns into auto-searching when the guidance deviation exceeds the current field of view during radar guidance. Second, when the contrast of the target in the search scene satisfies the image pick-up threshold, the speed computed using the constant-acceleration (CA) model least-squares method is fed back to the speed loop, and the infrared information is then combined to accomplish closed-loop tracking control of the infrared electro-optical system. The algorithm is verified by experiment: using the algorithm, the target capturing distance is 22.3 kilometers under large guidance deviation, whereas without the algorithm the capturing distance declines by 12 kilometers.
The algorithm advances the capability of infrared electro-optical countermeasures and the target capturing...
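The CA-model least-squares step mentioned above can be illustrated as fitting recent measurements to a constant-acceleration (quadratic) model and differentiating to get the rate fed back to the speed loop. The sample times and measurements below are synthetic, not data from the system.

```python
import numpy as np

# Synthetic line-of-sight angle samples following a constant-acceleration
# (quadratic) motion model: angle(t) = a0 + a1*t + a2*t^2.
t = np.linspace(0.0, 1.0, 11)
angle = 0.5 + 2.0 * t + 0.3 * t**2       # true motion (deg), noise-free here

# Least-squares fit of the CA model over the sample window
A = np.vstack([np.ones_like(t), t, t**2]).T   # design matrix for a0, a1, a2
coef, *_ = np.linalg.lstsq(A, angle, rcond=None)

# Angular rate at the latest sample: d(angle)/dt = a1 + 2*a2*t
rate_now = coef[1] + 2.0 * coef[2] * t[-1]
print(round(rate_now, 3))  # → 2.6 deg/s at t = 1
```

With noisy measurements the same fit acts as a smoother, which is why a least-squares rate estimate is preferred over differencing raw samples before feeding the speed loop.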

  7. Tool Path Generation for Clean-up Machining of Impeller by Point-searching Based Method

    Institute of Scientific and Technical Information of China (English)

    TANG Ming; ZHANG Dinghua; LUO Ming; WU Baohai

    2012-01-01

Machining quality of the clean-up region has a strong influence on the performance of the impeller. In order to plan clean-up tool paths rapidly and obtain good finished surface quality, an efficient and robust tool path generation method is presented, which employs an approach based on point-searching. The clean-up machining discussed in this paper is pencil-cut and multilayer fillet-cut for a free-form model with a ball-end cutter. For pencil-cut, the cutter center position can be determined by judging whether it satisfies the distance requirement. After the searching direction and the tracing direction have been determined, by employing the point-searching algorithm with the idea of dichotomy, all the cutter contact (CC) points and cutter location (CL) points can be found and the clean-up boundaries can also be defined rapidly. Then the tool path is generated. Based on the main concept of pencil-cut, a multilayer fillet-cut method is proposed, which utilizes a ball-end cutter with its radius less than the design radius of the clean-up region. A sequence of intermediate virtual cutters is used to divide the clean-up region into several layers, and given a cusp-height tolerance for the final layer, the tool paths for all layers are calculated. Finally, a computer implementation is also presented, and the result shows that the proposed method is feasible.
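The dichotomy idea behind the point-searching step is ordinary bisection on a monotonic distance condition: along a chosen search direction, halve the interval until the cutter-to-surface distance matches the cutter radius to within tolerance. The distance function below is a hypothetical stand-in for the ball-end cutter test.

```python
def bisect_contact(distance, lo, hi, target, tol=1e-6):
    """Find t in [lo, hi] with distance(t) == target, assuming distance
    is monotonically increasing on the interval (the dichotomy step)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if distance(mid) < target:
            lo = mid            # still too close: move away from the surface
        else:
            hi = mid            # too far: move toward the surface
    return 0.5 * (lo + hi)

# Toy "cutter center to surface" distance along the search direction;
# a real implementation would query the free-form surface model here.
cutter_radius = 3.0
distance = lambda t: t * t      # increases as the cutter backs away
t_cc = bisect_contact(distance, 0.0, 5.0, cutter_radius)
print(round(t_cc, 4))  # → 1.7321, i.e. sqrt(3), where t^2 == 3
```

Each CC/CL point costs only O(log(range/tol)) distance evaluations, which is what makes the point-searching approach fast enough to trace whole clean-up boundaries.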

  8. Publishers and repositories

    CERN Document Server

    CERN. Geneva

    2007-01-01

    The impact of self-archiving on journals and publishers is an important topic for all those involved in scholarly communication. There is some evidence that the physics arXiv has had no impact on physics journals, while 'economic common sense' suggests that some impact is inevitable. I shall review recent studies of librarian attitudes towards repositories and journals, and place this in the context of IOP Publishing's experiences with arXiv. I shall offer some possible reasons for the mis-match between these perspectives and then discuss how IOP has linked with arXiv and experimented with OA publishing. As well as launching OA journals we have co-operated with Cornell and the arXiv on Eprintweb.org, a platform that offers new features to repository users. View Andrew Wray's biography

  9. Ethics in Scientific Publishing

    Science.gov (United States)

    Sage, Leslie J.

    2012-08-01

We all learn in elementary school not to turn in other people's writing as if it were our own (plagiarism), and in high school science labs not to fake our data. But there are many other practices in scientific publishing that are depressingly common and almost as unethical. At about the 20 percent level, authors are deliberately hiding recent work - by themselves as well as by others - so as to enhance the apparent novelty of their most recent paper. Some people lie about the dates the data were obtained, to cover up conflicts of interest or inappropriate use of privileged information. Others will publish the same conference proceeding in multiple volumes, or publish the same result in multiple journals with only trivial additions of data or analysis (self-plagiarism). These shady practices should be roundly condemned and stopped. I will discuss these and other unethical actions I have seen over the years, and the steps editors are taking to stop them.

  10. Turn-Based War Chess Model and Its Search Algorithm per Turn

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2016-01-01

Full Text Available War chess gaming has so far received insufficient attention but is a significant component of turn-based strategy games (TBS) and is studied in this paper. First, a common game model is proposed through various existing war chess types. Based on the model, we propose a theoretical frame involving combinatorial optimization on the one hand and game tree search on the other. We also discuss a key problem, namely, that the number of branching factors in each turn of the game tree is huge. Then, we propose two algorithms for searching within one turn to solve the problem: (1) enumeration by order; (2) enumeration by recursion. The main difference between the two is the permutation method used: the former uses the dictionary sequence method, while the latter uses the recursive permutation method. Finally, we prove that both algorithms are optimal, and we analyze the difference between their efficiencies. An important factor is the total number of expansions each unit makes over its reachable positions. The conclusion is stated in terms of this factor: enumeration by recursion is better than enumeration by order in all situations.
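The two enumeration orders come down to two classic ways of listing permutations of the movable units in a turn. A sketch with invented unit names follows; the game model and move-legality checks are omitted, and the recursive generator here is a generic illustration rather than the paper's algorithm.

```python
from itertools import permutations

# Recursive permutation enumeration: pick each remaining unit as the next
# to act, then recurse on the rest.
def perms_recursive(items):
    if len(items) <= 1:
        yield list(items)
        return
    for i in range(len(items)):
        rest = items[:i] + items[i + 1:]
        for tail in perms_recursive(rest):
            yield [items[i]] + tail

units = ["archer", "knight", "mage"]            # hypothetical movable units
rec = [tuple(p) for p in perms_recursive(units)]

# Enumeration "by order" (dictionary sequence) is exactly the order
# itertools.permutations emits for an already-sorted input.
assert rec == list(permutations(units))
print(len(rec))  # → 6 orderings of 3 units
```

With n movable units there are n! act orders per turn, which is the branching-factor explosion the paper's per-turn search algorithms are designed to tame.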

  11. EMBANKS: Towards Disk Based Algorithms For Keyword-Search In Structured Databases

    CERN Document Server

    Gupta, Nitin

    2011-01-01

    In recent years, there has been a lot of interest in the field of keyword querying of relational databases. A variety of systems such as DBXplorer [ACD02], Discover [HP02] and ObjectRank [BHP04] have been proposed. Another such system is BANKS, which enables data and schema browsing together with keyword-based search for relational databases. It models tuples as nodes in a graph, connected by links induced by foreign-key and other relationships. The size of the database graph that BANKS uses is proportional to the sum of the number of nodes and edges in the graph. Systems such as SPIN, which search over Personal Information Networks and use BANKS as the backend, maintain a lot of information about the users' data. Since these systems run on the user's workstation, which has other demands on memory, such heavy memory use is unreasonable and, if possible, should be avoided. In order to alleviate this problem, we introduce EMBANKS (acronym for External Memory BANKS), a framework for an optimized disk-based BANKS sy...
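
    The BANKS data-graph model described above (tuples as nodes, foreign-key links as edges) can be sketched as follows. This is an illustrative simplification, not the BANKS implementation: table and column names are hypothetical, and links are made traversable in both directions, a simplification of BANKS's weighted forward and back edges:

```python
from collections import defaultdict

def build_data_graph(tables, foreign_keys):
    """Build a BANKS-style data graph.
    `tables` maps table name -> {primary_key: row_dict};
    `foreign_keys` is a list of (src_table, fk_column, dst_table) triples."""
    graph = defaultdict(set)
    for src, col, dst in foreign_keys:
        for pk, row in tables[src].items():
            ref = row.get(col)
            if ref in tables[dst]:
                # Each tuple is a node keyed by (table, primary key).
                graph[(src, pk)].add((dst, ref))
                graph[(dst, ref)].add((src, pk))
    return dict(graph)
```

The graph's size grows with the number of tuples plus foreign-key links, which is exactly the memory pressure EMBANKS aims to move to disk.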

  12. Ovid MEDLINE Instruction can be Evaluated Using a Validated Search Assessment Tool. A Review of: Rana, G. K., Bradley, D. R., Hamstra, S. J., Ross, P. T., Schumacher, R. E., Frohna, J. G., & Lypson, M. L. (2011). A validated search assessment tool: Assessing practice-based learning and improvement in a residency program. Journal of the Medical Library Association, 99(1), 77-81. doi:10.3163/1536-5050.99.1.013

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-01-01

    Full Text Available Objective – To determine the construct validity of a search assessment instrument that is used to evaluate search strategies in Ovid MEDLINE. Design – Cross-sectional cohort study. Setting – The Academic Medical Center of the University of Michigan. Subjects – All 22 first-year residents in the Department of Pediatrics in 2004 (cohort 1); 10 senior pediatric residents in 2005 (cohort 2); and 9 faculty members who taught evidence-based medicine (EBM) and published on EBM topics. Methods – Two methods were employed to determine whether the University of Michigan MEDLINE Search Assessment Instrument (UMMSA) could show differences between searchers' construction of a MEDLINE search strategy. The first method tested the search skills of all 22 incoming pediatrics residents (cohort 1) after they received MEDLINE training in 2004, and again upon graduation in 2007. Only 15 of these residents were tested upon graduation; seven were either no longer in the residency program or had left the institution soon after graduation. The search test asked study participants to read a clinical scenario, identify the search question in the scenario, and perform an Ovid MEDLINE search. Two librarians scored the blinded search strategies. The second method compared the scores of the 22 residents with the scores of ten senior residents (cohort 2) and nine faculty volunteers. Unlike the first cohort, the ten senior residents had not received any MEDLINE training. The faculty members' search strategies were used as the gold-standard comparison for scoring the search skills of the two cohorts. Main Results – The search strategy scores of the 22 first-year residents, who received training, improved from 2004 to 2007 (mean improvement: 51.7 to 78.7; t(14)=5.43, P…). Conclusion – According to the authors, "the results of this study provide evidence for the validity of an instrument to evaluate MEDLINE search strategies" (p. 81), since the instrument under…

  13. Cuckoo search based optimal mask generation for noise suppression and enhancement of speech signal

    Directory of Open Access Journals (Sweden)

    Anil Garg

    2015-07-01

    Full Text Available In this paper, an effective noise-suppression technique for the enhancement of speech signals using an optimized mask is proposed. Initially, the noisy speech signal is broken down into various time–frequency (TF) units and features are extracted by computing the Amplitude Magnitude Spectrogram (AMS). The signals are then classified by quality ratio into different classes to generate the initial set of solutions. The optimal mask for each class is then generated using the Cuckoo search algorithm. In the waveform-synthesis stage, filtered waveforms are windowed, multiplied by the optimal mask value, and summed to obtain the enhanced target signal. The proposed technique was evaluated on various datasets, and its performance was compared with previous techniques using SNR. The results demonstrate the effectiveness of the proposed technique and its ability to suppress noise and enhance the speech signal.
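
    The optimization step above relies on cuckoo search. A minimal generic sketch of the algorithm (not the paper's mask-specific variant): random-walk candidate solutions replace worse "nests", and a fraction `pa` of the worst nests is abandoned each round. A Gaussian step stands in for the usual Lévy flight, an assumed simplification:

```python
import random

def cuckoo_search(objective, dim, bounds, n_nests=15, pa=0.25, iters=200, seed=1):
    """Minimize `objective` over [lo, hi]^dim with a bare-bones cuckoo search."""
    rng = random.Random(seed)
    lo, hi = bounds
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fitness = [objective(n) for n in nests]
    for _ in range(iters):
        i = rng.randrange(n_nests)
        # New "egg": Gaussian step from a random nest (stand-in for a Levy flight).
        cand = [min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo)))) for x in nests[i]]
        f = objective(cand)
        j = rng.randrange(n_nests)
        if f < fitness[j]:                      # replace a randomly chosen worse nest
            nests[j], fitness[j] = cand, f
        # Abandon the worst pa fraction of nests and rebuild them randomly.
        order = sorted(range(n_nests), key=lambda k: fitness[k])
        for k in order[int(n_nests * (1 - pa)):]:
            nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
            fitness[k] = objective(nests[k])
    best = min(range(n_nests), key=lambda k: fitness[k])
    return nests[best], fitness[best]
```

In the paper's setting, `objective` would score a candidate mask against the classified TF units; here it is left abstract.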

  14. Infodemiology of status epilepticus: A systematic validation of the Google Trends-based search queries.

    Science.gov (United States)

    Bragazzi, Nicola Luigi; Bacigaluppi, Susanna; Robba, Chiara; Nardone, Raffaele; Trinka, Eugen; Brigo, Francesco

    2016-02-01

    People increasingly use Google to look for health-related information. We previously demonstrated that in English-speaking countries most people use this search engine to obtain information on status epilepticus (SE) definition, types/subtypes, and treatment. Here, we aimed to provide a quantitative analysis of SE-related web queries. This analysis advances what was previously discussed in that the Google Trends (GT) algorithm has been further refined and correlational analyses have been carried out to validate the GT-based query volumes. The GT-based SE-related query volumes correlated well with information concerning causes and pharmacological and nonpharmacological treatments. Google Trends can provide both researchers and clinicians with data on realities and contexts that are generally overlooked and underexplored by classic epidemiology. In this way, GT can foster new epidemiological studies in the field and can complement traditional epidemiological tools.

  15. Population Scalability Analysis of Abstract Population-based Random Search: Spectral Radius

    CERN Document Server

    He, Jun

    2011-01-01

    Population-based Random Search (RS) algorithms, such as Evolutionary Algorithms (EAs), Ant Colony Optimization (ACO), Artificial Immune Systems (AIS) and Particle Swarm Optimization (PSO), have been widely applied to solving discrete optimization problems. A common belief in this area is that the performance of a population-based RS algorithm may improve as its population size increases. The term population scalability describes the relationship between the performance of RS algorithms and their population size. Although understanding population scalability is important for designing efficient RS algorithms, few theoretical results about it exist so far, and most of those are case studies, e.g. simple RS algorithms for simple problems. In contrast, this paper aims at a general study. A large family of RS algorithms, called ARS, is investigated. The main contribution of this paper is to introduce a novel appro...

  16. Creative Engineering Based Education with Autonomous Robots Considering Job Search Support

    Science.gov (United States)

    Takezawa, Satoshi; Nagamatsu, Masao; Takashima, Akihiko; Nakamura, Kaeko; Ohtake, Hideo; Yoshida, Kanou

    The Robotics Course in our Mechanical Systems Engineering Department offers “Robotics Exercise Lessons” as one of its Problem-Solution Based Specialized Subjects. This is intended to motivate students' learning, to help them acquire fundamental knowledge and skills in mechanical engineering, and to improve their understanding of Robotics Basic Theory. Our current curriculum was established to accomplish this objective based on two pieces of research in 2005: an evaluation questionnaire on the education of our Mechanical Systems Engineering Department for graduates, and a survey on the kind of human resources which companies are seeking and their expectations for our department. This paper reports the academic results and reflections on job search support in recent years as inherited and developed from the previous curriculum.

  17. Semantic snippet construction for search engine results based on segment evaluation

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The result listing from search engines includes a link and a snippet from the web page for each result item. The snippet in the result listing plays a vital role in helping the user decide whether to click on it. This paper proposes a novel approach to construct snippets based on a semantic evaluation of the segments in the page. The target segment(s) is/are identified by applying a model that evaluates the segments present in the page and selects those with top scores. The proposed model makes the user's judgment about clicking on a result item easier, since the snippet is constructed semantically after a critical evaluation based on multiple factors. A prototype implementation confirms the model empirically.
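
    A toy sketch of segment-based snippet selection, under the assumption that segments are scored by query-term overlap; the paper's model uses richer semantic factors that are not specified in the abstract:

```python
def best_snippet(segments, query, k=1):
    """Return the k highest-scoring page segments as the snippet.
    Score = fraction of a segment's words that are query terms."""
    terms = set(query.lower().split())

    def score(seg):
        words = seg.lower().split()
        return sum(w in terms for w in words) / (len(words) or 1)

    return sorted(segments, key=score, reverse=True)[:k]
```

The key design point is that scoring happens per segment rather than per page, so boilerplate segments (navigation, footers) lose to content segments even when both contain a query term.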

  18. English for Non-English Departments: In Search for an Essential Home Base

    Directory of Open Access Journals (Sweden)

    Indah Winarni

    2016-02-01

    Full Text Available Promoting the quality of English for students of non-English departments (henceforth, English for undergraduates), which has been characterized as lacking prestige and resources, requires a serious promotion of its status. This means providing a proper home base for the English instructors, where standards of profession and quality of service can be pursued through a solid structure that nurtures academic culture. This paper describes the various existing structures of English for undergraduates. An illustration of the perseverance of Brawijaya University English instructors in searching for the intended home base, through various efforts in staff development and serious research, is presented. What is meant by the intended home base is inspired by Swales's concept of discourse community.

  19. Development and Testing of a Literature Search Protocol for Evidence Based Nursing: An Applied Student Learning Experience

    OpenAIRE

    Andy Hickner; Friese, Christopher R.; Margaret Irwin

    2011-01-01

    Objective – The study aimed to develop a search protocol and evaluate reviewers' satisfaction with an evidence-based practice (EBP) review by embedding a library science student in the process.Methods – The student was embedded in one of four review teams overseen by a professional organization for oncology nurses (ONS). A literature search protocol was developed by the student following discussion and feedback from the review team. Organization staff provided process feedback. Reviewers from...

  20. A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems

    OpenAIRE

    Shouheng Tuo; Longquan Yong; Fang’an Deng

    2014-01-01

    To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimension of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and rand...
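
    For readers unfamiliar with the baseline the paper builds on, here is a minimal sketch of plain harmony search applied to a 0-1 knapsack, not the HSTL variant: binary harmonies, harmony-memory consideration with probability `hmcr`, and pitch adjustment (a bit flip) with probability `par`. Parameter values are illustrative:

```python
import random

def harmony_search_knapsack(values, weights, capacity,
                            hms=10, hmcr=0.9, par=0.3, iters=500, seed=7):
    """Maximize knapsack value with a basic binary harmony search."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(h):
        w = sum(wi for wi, b in zip(weights, h) if b)
        return sum(vi for vi, b in zip(values, h) if b) if w <= capacity else 0

    memory = [[rng.randint(0, 1) for _ in range(n)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(n):
            if rng.random() < hmcr:            # harmony memory consideration
                bit = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment: flip the bit
                    bit ^= 1
            else:                              # random selection
                bit = rng.randint(0, 1)
            new.append(bit)
        worst = min(range(hms), key=lambda i: fitness(memory[i]))
        if fitness(new) > fitness(memory[worst]):
            memory[worst] = new                # replace the worst harmony
    best = max(memory, key=fitness)
    return best, fitness(best)
```

HSTL's contribution is precisely in replacing the simple pitch-adjustment step here with teaching-learning strategies and dynamic dimension adjustment.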

  1. Hprints - Licence to publish

    DEFF Research Database (Denmark)

    Rabow, Ingegerd; Sikström, Marjatta; Drachen, Thea Marie;

    2010-01-01

    realised the potential advantages for them. The universities have a role here as well as the libraries that manage the archives and support scholars in various aspects of the publishing processes. Libraries are traditionally service providers with a mission to facilitate the knowledge production...

  2. A simple heuristic for Internet-based evidence search in primary care: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Eberbach A

    2016-08-01

    Full Text Available Andreas Eberbach,1 Annette Becker,1 Justine Rochon,2 Holger Finkemeler,1 Achim Wagner,3 Norbert Donner-Banzhoff1 1Department of Family and Community Medicine, Philipp University of Marburg, Marburg, Germany; 2Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany; 3Department of Sport Medicine, Justus-Liebig-University of Giessen, Giessen, Germany Background: General practitioners (GPs) are confronted with a wide variety of clinical questions, many of which remain unanswered. Methods: In order to assist GPs in finding quick, evidence-based answers, we developed a learning program (LP) with a short interactive workshop based on a simple three-step heuristic to improve their search and appraisal competence (SAC). We evaluated the LP's effectiveness with a randomized controlled trial (RCT). Participants (intervention group [IG], n=20; control group [CG], n=31) rated acceptance and satisfaction and also answered 39 knowledge questions to assess their SAC. We controlled for previous knowledge in content areas covered by the test. Results: Main outcome – SAC: within both groups, the pre–post test shows significant (P=0.00) improvements in correctness (IG 15% vs CG 11%) and confidence (32% vs 26%) in finding evidence-based answers. However, the SAC difference was not significant in the RCT. Other measures: most workshop participants rated “learning atmosphere” (90%), “skills acquired” (90%), and “relevancy to my practice” (86%) as good or very good. The LP recommendations were implemented by 67% of the IG, whereas 15% of the CG already conformed to LP recommendations spontaneously (odds ratio 9.6, P=0.00). After the literature search, the IG showed a (not significantly) higher satisfaction regarding “time spent” (IG 80% vs CG 65%), “quality of information” (65% vs 54%), and “amount of information” (53% vs 47%). Conclusion: Long-standing established GPs have a good SAC. Despite high acceptance, strong…

  3. Content-Based Search on a Database of Geometric Models: Identifying Objects of Similar Shape

    Energy Technology Data Exchange (ETDEWEB)

    XAVIER, PATRICK G.; HENRY, TYSON R.; LAFARGE, ROBERT A.; MEIRANS, LILITA; RAY, LAWRENCE P.

    2001-11-01

    The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.

  4. A Strategic Analysis of Search Engine Advertising in Web-based Commerce

    OpenAIRE

    Ela Kumar; Shruti Kohli

    2007-01-01

    The endeavor of this paper is to explore the role of search engines in the online business industry. It discusses search engine advertising programs and provides insight into the revenue generated online via search engines. It explores the growth of the online business industry in India and emphasizes the role of the search engine as the major advertising vehicle. A case study on the revolution of the Indian advertising industry has been conducted and its impact on...

  5. Search and Recommendation

    DEFF Research Database (Denmark)

    Bogers, Toine

    2014-01-01

    -scale application by companies like Amazon, Facebook, and Netflix. But are search and recommendation really two different fields of research that address different problems with different sets of algorithms in papers published at distinct conferences? In my talk, I want to argue that search and recommendation...

  6. A Statistical Ontology-Based Approach to Ranking for Multiword Search

    Science.gov (United States)

    Kim, Jinwoo

    2013-01-01

    Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…

  7. Balancing Efficiency and Effectiveness for Fusion-Based Search Engines in the "Big Data" Environment

    Science.gov (United States)

    Li, Jieyu; Huang, Chunlan; Wang, Xiuhong; Wu, Shengli

    2016-01-01

    Introduction: In the big data age, we have to deal with a tremendous amount of information, which can be collected from various types of sources. For information search systems such as Web search engines or online digital libraries, the collection of documents becomes larger and larger. For some queries, an information search system needs to…

  8. Neural-Based Cuckoo Search of Employee Health and Safety (HS)

    Directory of Open Access Journals (Sweden)

    Koffka Khan

    2013-01-01

    Full Text Available A study using the cuckoo search algorithm to evaluate the effects of using computer-aided workstations on employee health and safety (HS) is conducted. We collected data on HS risk for employees at their workplaces, analyzed the data, and proposed corrective measures applying our methodology. It includes a checklist with nine HS dimensions: work organization, displays, input devices, furniture, work space, environment, software, health hazards, and satisfaction. With the checklist, data on HS risk factors are collected. For the calculation of an HS risk index, IHS, a neural-swarm cuckoo search (NSCS) algorithm has been employed. Based on the HS risk index IHS, four groups of HS risk severity are determined: low, moderate, high and extreme HS risk. By this index, HS problems are located and corrective measures can be applied. This approach is illustrated and validated by a case study. An important advantage of the approach is its ease of use, and the HS index methodology quickly points out an individual employee's specific HS risk.

  9. Spectrum-based method to generate good decoy libraries for spectral library searching in peptide identifications.

    Science.gov (United States)

    Cheng, Chia-Ying; Tsai, Chia-Feng; Chen, Yu-Ju; Sung, Ting-Yi; Hsu, Wen-Lian

    2013-05-01

    As spectral library searching has received increasing attention for peptide identification, constructing good decoy spectra from the target spectra is the key to correctly estimating the false discovery rate when searching against the concatenated target-decoy spectral library. Several methods have been proposed to construct decoy spectral libraries. Most of them construct decoy peptide sequences and then generate theoretical spectra accordingly. In this paper, we propose a method, called precursor-swap, which constructs decoy spectral libraries directly at the "spectrum level", without generating decoy peptide sequences, by swapping the precursors of two spectra selected according to a very simple rule. Our spectrum-based method does not require additional effort to deal with ion types (e.g., a, b or c ions), fragmentation mechanism (e.g., CID or ETD), or unannotated peaks, but preserves many spectral properties. The precursor-swap method is evaluated on different spectral libraries, and the resulting decoy ratios show that it is comparable to other methods. Notably, it is efficient in time and memory usage for constructing decoy libraries. A software tool called Precursor-Swap-Decoy-Generation (PSDG) is publicly available for download at http://ms.iis.sinica.edu.tw/PSDG/.
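
    The precursor-swap idea can be sketched in a few lines. The pairing rule below (precursors at least `min_mz_gap` apart) is an assumed stand-in for the paper's "very simple rule", which the abstract does not specify; the spectrum representation is also illustrative:

```python
def precursor_swap(spectra, min_mz_gap=5.0):
    """Pair spectra whose precursor m/z differ by >= min_mz_gap and swap
    their precursors; fragment peaks stay untouched, preserving spectral
    properties while breaking the precursor/fragment association."""
    decoys = []
    pool = sorted(spectra, key=lambda s: s["precursor_mz"])
    used = set()
    for i in range(len(pool)):
        if i in used:
            continue
        for j in range(i + 1, len(pool)):
            if j in used:
                continue
            if abs(pool[j]["precursor_mz"] - pool[i]["precursor_mz"]) >= min_mz_gap:
                a, b = dict(pool[i]), dict(pool[j])
                a["precursor_mz"], b["precursor_mz"] = b["precursor_mz"], a["precursor_mz"]
                decoys += [a, b]
                used.update((i, j))
                break
    return decoys
```

Because only precursor values move, no decoy peptide sequences or theoretical spectra are generated, which is the method's main practical advantage.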

  10. Extraction of microcracks in rock images based on heuristic graph searching and application

    Science.gov (United States)

    Luo, Zhihua; Zhu, Zhende; Ruan, Huaining; Shi, Chong

    2015-12-01

    In this paper, we propose a new method, based on a graph-searching technique, for microcrack extraction from scanning electron microscope images of rocks. The method focuses on detecting and extracting the crack and then quantifying some of its basic geometrical features. The crack can be detected automatically given its two endpoints. The algorithm proceeds as follows: the A* graph-searching technique is first used to find a path through the crack region defined by the two initial endpoints; the pixels of the path are used as seeds for a region-growing method to restore the primary crack area; an automatic hole-filling operation then removes possible holes in the region-growing result; the medial axis and distance transform of the crack area are computed, and the final crack is rebuilt by painting disks along the medial axis without branches. The crack is thus extracted without user interaction. Finally, crack features such as length, width, angle and area are quantified; error analysis shows that the error percentage of the proposed approach drops to a low level as the actual width increases, and results for several example images are illustrated. The algorithm is efficient and can also be used for image detection of other linear structural objects.
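
    The first step above is A* search between the two crack endpoints. A minimal sketch on a 2-D cost grid (in the paper's setting, dark crack pixels would get low cost; the grid here is illustrative). The Manhattan heuristic is admissible as long as every cell cost is at least 1:

```python
import heapq
from itertools import count

def astar(cost, start, goal):
    """Cheapest 4-connected path on a 2-D cost grid from start to goal."""
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = count()                      # tiebreaker so the heap never compares cells
    openq = [(h(start), next(tie), 0, start, None)]
    came, best_g = {}, {start: 0}
    while openq:
        _, _, gc, cur, parent = heapq.heappop(openq)
        if cur in came:
            continue                   # already expanded via a cheaper route
        came[cur] = parent
        if cur == goal:
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in came:
                ng = gc + cost[nr][nc]
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(openq, (ng + h((nr, nc)), next(tie), ng, (nr, nc), cur))
    return None
```

The returned path plays the role of the seed pixels handed to the region-growing stage.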

  11. Semantic Search-Based Genetic Programming and the Effect of Intron Deletion.

    Science.gov (United States)

    Castelli, Mauro; Vanneschi, Leonardo; Silva, Sara

    2014-01-01

    The concept of semantics (in the sense of the input-output behavior of solutions on training data) has been the subject of noteworthy interest in the genetic programming (GP) research community over the past few years. In this paper, we present a new GP system that uses the concept of semantics to improve search effectiveness. It maintains a distribution of different semantic behaviors and biases the search toward solutions whose semantics are similar to those of the best solutions found so far. We present experimental evidence that the new semantics-based GP system outperforms standard GP and the well-known bacterial GP on a set of test functions, with particularly interesting results for noncontinuous (i.e., generally harder to optimize) test functions. We also observe that the solutions generated by the proposed GP system are often larger than those returned by standard GP and bacterial GP and contain an elevated number of introns, i.e., parts of code that have no effect on the semantics. Nevertheless, we show that deleting introns during evolution does not affect the performance of the proposed method.

  12. Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer.

    Science.gov (United States)

    Castelli, Mauro; Trujillo, Leonardo; Vanneschi, Leonardo

    2015-01-01

    Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting the energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce an accurate forecasting also on unseen data.

  13. A trust-based sensor allocation algorithm in cooperative space search problems

    Science.gov (United States)

    Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2011-06-01

    Sensor allocation is an important and challenging problem within the field of multi-agent systems. It involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the sensor resources used, and the POS is the target-tracking performance. POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search-and-tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. We then model the sensor-allocation optimization problem as a trust-in-loop negotiation game and solve it using a subgame perfect equilibrium. Numerical simulations demonstrate the trust-based sensor allocation algorithm on cooperative space situation awareness (SSA) search problems.

  14. A web-based search engine for triplex-forming oligonucleotide target sequences.

    Science.gov (United States)

    Gaddis, Sara S; Wu, Qi; Thames, Howard D; DiGiovanni, John; Walborg, Earl F; MacLeod, Michael C; Vasquez, Karen M

    2006-01-01

    Triplex technology offers a useful approach for site-specific modification of gene structure and function both in vitro and in vivo. Triplex-forming oligonucleotides (TFOs) bind to their target sites in duplex DNA, thereby forming triple-helical DNA structures via Hoogsteen hydrogen bonding. TFO binding has been demonstrated to site-specifically inhibit gene expression, enhance homologous recombination, induce mutation, inhibit protein binding, and direct DNA damage, thus providing a tool for gene-specific manipulation of DNA. We have developed a flexible web-based search engine to find and annotate TFO target sequences within the human and mouse genomes. Descriptive information about each site, including sequence context and gene region (intron, exon, or promoter), is provided. The engine assists the user in finding highly specific TFO target sequences by eliminating or flagging known repeat sequences and flagging overlapping genes. A convenient way to check for the uniqueness of a potential TFO binding site is provided via NCBI BLAST. The search engine may be accessed at spi.mdanderson.org/tfo. PMID:16764543
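
    The engine above finds TFO target sites in genomic sequence. A toy sketch of the core matching step, under the classic assumption that TFO targets are polypurine (A/G) tracts; the published engine applies further rules (repeat filtering, gene-region annotation) that are not reproduced here:

```python
import re

def find_tfo_targets(seq, min_len=15):
    """Return (offset, run) for every polypurine run of at least min_len
    bases -- candidate triplex-forming-oligonucleotide target sites."""
    seq = seq.upper()
    return [(m.start(), m.group())
            for m in re.finditer(r"[AG]{%d,}" % min_len, seq)]
```

A real pipeline would also scan the reverse complement (polypyrimidine tracts on this strand) and check site uniqueness, as the paper does via NCBI BLAST.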

  15. Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer

    Directory of Open Access Journals (Sweden)

    Mauro Castelli

    2015-01-01

    Full Text Available Energy consumption forecasting (ECF is an important policy issue in today’s economies. An accurate ECF has great benefits for electric utilities and both negative and positive errors lead to increased operating costs. The paper proposes a semantic based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-perfect solutions with high probability and that generates models able to produce near optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting the energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce an accurate forecasting also on unseen data.

  16. Olfaction and hearing based mobile robot navigation for odor/sound source search.

    Science.gov (United States)

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
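
    The TDE step mentioned above is commonly done by maximizing the cross-correlation between two microphone signals. A naive O(n²) sketch (the paper's exact estimator and microphone geometry are not given in the abstract, so this is a generic illustration):

```python
def tde_cross_correlation(x, y):
    """Estimate the delay of y relative to x, in samples, as the lag that
    maximizes the cross-correlation of the two microphone signals."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        s = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag
```

Given the estimated delay and the known microphone spacing, the bearing to the source follows from simple geometry; production systems would use an FFT-based correlation (e.g., GCC-PHAT) instead of this direct sum.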

  17. Search for pulsations in M dwarfs in the Kepler short-cadence data base

    Science.gov (United States)

    Rodríguez, E.; Rodríguez-López, C.; López-González, M. J.; Amado, P. J.; Ocando, S.; Berdiñas, Z. M.

    2016-04-01

    The results of a search for stellar pulsations in M dwarf stars in the Kepler short-cadence (SC) data base are presented. This investigation covers all the cool dwarf stars in the list of Dressing & Charbonneau that were also observed in SC mode by the Kepler satellite. The sample was enlarged via selection on stellar parameters (temperature, surface gravity and radius) with available Kepler Input Catalogue values, together with JHK and riz photometry. In total, 87 objects observed by the Kepler mission in SC mode were selected and analysed using Fourier techniques. The detection threshold is below 10 μmag for the brightest objects and below 20 μmag for about 40 per cent of the stars in the sample. However, no significant signal in the [~10,100] c/d frequency domain that can be reliably attributed to stellar pulsations has been detected. The periodograms have also been investigated for solar-like oscillations in the >100 c/d region, but also without success. Despite these inconclusive photometric results, M dwarf pulsation amplitudes may still be detectable in radial-velocity searches. State-of-the-art coming instruments, like the CARMENES near-infrared high-precision spectrograph, will play a key role in a possible detection.

  18. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

    Full Text Available Bionic technology provides new inspiration for mobile robot navigation, since it explores ways of imitating biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
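    The time delay estimation step the hearing robots rely on can be sketched as a cross-correlation over integer lags between two microphone signals. The sampling rate, true delay and synthetic signals below are hypothetical, and the record's subsequent geometric bearing computation from the microphone-array positions is not shown.

```python
import random

def estimate_delay(x, y, max_lag):
    """Integer lag maximizing sum(x[n] * y[n + lag]); a positive lag means
    y is a delayed copy of x (the sound reached microphone y later)."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        acc = 0.0
        for n in range(len(x)):
            m = n + lag
            if 0 <= m < len(y):
                acc += x[n] * y[m]
        if acc > best_val:
            best_val, best_lag = acc, lag
    return best_lag

fs = 8000                       # Hz, assumed sampling rate
rng = random.Random(1)
src = [rng.gauss(0.0, 1.0) for _ in range(1000)]   # broadband source signal
delay = 12                      # true delay in samples
mic1 = src
mic2 = [0.0] * delay + src[:-delay]                # delayed copy at mic 2
lag = estimate_delay(mic1, mic2, max_lag=50)
tau = lag / fs                  # delay in seconds, input to the geometry step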

  19. LED TERMINAL AND ADVERTISEMENT PUBLISHING PLATFORM BASED ON XML INTERACTION PROTOCOLS

    Institute of Scientific and Technical Information of China (English)

    梅良刚; 左保河; 李嘉炎

    2011-01-01

    This paper presents a design scheme for an LED terminal and advertisement publishing platform based on XML interaction protocols. A Java web platform is first used to upload media, review them, and place them in a media library; the LED terminals are then developed in embedded C and interact with the platform via XML protocols. When the number of advertisement terminals is large, or the terminals are far apart from each other and are managed and maintained manually, problems inevitably arise, including heavy workload, slow updates of advertising information, inability to monitor terminal status in real time, and limited flexibility and diversity of the terminals. An Internet-based B/S networked-broadcast pattern can solve these problems.

  20. SHOP: receptor-based scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Liljefors, Tommy; Sørensen, Morten D;

    2009-01-01

    A new field-derived 3D method for receptor-based scaffold hopping, implemented in the software SHOP, is presented. Information from a protein-ligand complex is utilized to substitute a fragment of the ligand with another fragment from a database of synthetically accessible scaffolds. A GRID-based...

  1. Missile Sites, Former missile field for Whiteman., Published in 2005, 1:12000 (1in=1000ft) scale, Whiteman Air Force Base.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Missile Sites dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from Field Survey/GPS information as of 2005. It is described as...

  2. Hotels and Motels, Hotel and motels locations with in Johnson County based off of land use, Published in unknown, Johnson County AIMS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Hotels and Motels dataset, was produced all or in part from Published Reports/Deeds information as of unknown. It is described as 'Hotel and motels locations...

  3. Parcels and Land Ownership, Tax Assessors Data Base, Published in 1998, 1:600 (1in=50ft) scale, Jones County Board of Commissioners.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Parcels and Land Ownership dataset, published at 1:600 (1in=50ft) scale, was produced all or in part from Uncorrected Imagery information as of 1998. It is...

  4. Biclustering of Gene Expression Data by Correlation-Based Scatter Search

    Directory of Open Access Journals (Sweden)

    Nepomuceno Juan A

    2011-01-01

    Full Text Available Abstract Background: The analysis of data generated by microarray technology is very useful for understanding how genetic information becomes functional gene products. Biclustering algorithms can determine a group of genes which are co-expressed under a set of experimental conditions. Recently, new biclustering methods based on metaheuristics have been proposed. Most of them use the Mean Squared Residue as merit function, but interesting and biologically relevant patterns, such as shifting and scaling patterns, may not be detected using this measure. It is nevertheless important to discover this type of pattern, since genes commonly present similar behaviour even though their expression levels vary over different ranges or magnitudes. Methods: Scatter Search is an evolutionary technique based on the evolution of a small set of solutions which are chosen according to quality and diversity criteria. This paper presents a Scatter Search with the aim of finding biclusters in gene expression data. In this algorithm the proposed fitness function is based on the linear correlation among genes, in order to detect shifting and scaling patterns, and an improvement method is included to select only positively correlated genes. Results: The proposed algorithm has been tested with three real data sets, the Yeast Cell Cycle, human B-cell lymphoma and Yeast Stress data sets, finding a remarkable number of biclusters with shifting and scaling patterns. In addition, the performance of the proposed method and fitness function is compared to that of CC, OPSM, ISA, BiMax, xMotifs and Samba using the Gene Ontology Database.
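    The correlation-based fitness idea can be sketched as the mean pairwise Pearson correlation over the gene rows of a candidate bicluster. The expression profiles below are hypothetical, and this is only the merit function, not the full Scatter Search.

```python
import math

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def bicluster_fitness(rows):
    """Mean pairwise Pearson correlation over the gene rows of a bicluster.

    Shifted (g + c) and scaled (c * g, c > 0) expression profiles score 1.0,
    which is exactly the kind of pattern an MSR-based merit function can miss."""
    total, pairs = 0.0, 0
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            total += pearson(rows[i], rows[j])
            pairs += 1
    return total / pairs

base = [1.0, 3.0, 2.0, 5.0, 4.0]       # hypothetical expression profile
shifted = [x + 10.0 for x in base]     # shifting pattern
scaled = [2.5 * x for x in base]       # scaling pattern
score = bicluster_fitness([base, shifted, scaled])
```

All three rows are perfectly linearly correlated, so the bicluster scores 1.0 even though the raw expression levels differ widely in range.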

  5. Reducing a Knowledge-Base Search Space When Data Are Missing

    Science.gov (United States)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that, without heuristics, generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, this produces 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to provide an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
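    The exponential count quoted above (65,537 scenarios for 16 rules) is quick to verify; `scenario_count` is a hypothetical helper name, not part of the described software.

```python
def scenario_count(n_rules: int) -> int:
    # Each of the n rules may or may not fire when its data are missing,
    # giving 2**n rule subsets, plus one baseline scenario: 1 + 2**n.
    return 1 + 2 ** n_rules

checks = (scenario_count(16), scenario_count(4))  # (65537, 17)
```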

  6. Support open access publishing

    DEFF Research Database (Denmark)

    Ekstrøm, Jeannette

    2013-01-01

    The Support Open Access Publishing project aims to update the Sherpa/Romeo database (www.sherpa.ac.uk/romeo) with subject-relevant Danish journals. The project will also investigate the possibility of developing a database in which researchers can quickly gain an overview across relevant journal information (subject discipline, BFI level, Impact Factor, Open Access) and thereby make a qualified decision about where and how to publish their research results.

  7. Prepare to publish.

    Science.gov (United States)

    Price, P M

    2000-01-01

    "I couldn't possibly write an article." "I don't have anything worthwhile to write about." "I am not qualified to write for publication." Do any of these statements sound familiar? This article is intended to dispel these beliefs. You can write an article. You care for the most complex patients in the health care system so you do have something worthwhile to write about. Beside correct spelling and grammar there are no special skills, certificates or diplomas required for publishing. You are qualified to write for publication. The purpose of this article is to take the mystique out of the publication process. Each step of publishing an article will be explained, from idea formation to framing your first article. Practical examples and recommendations will be presented. The essential components of the APA format necessary for Dynamics: The Official Journal of the Canadian Association of Critical Care Nurses will be outlined and resources to assist you will be provided.

  8. Superstring theories and models: Cosmological implications. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-09-01

    The bibliography contains citations concerning the use of superstrings in studies of such relativistic phenomena as space-time extension and supergravity. Primordial magnetic monopoles, local cosmic strings, and studies of preon models are among the topics discussed. Calabi-Yau manifolds and supersymmetric Kaluza-Klein theories are also considered. Citations relating specifically to particle studies are included in a separate bibliography. (Contains a minimum of 103 citations and includes a subject term index and title list.)

  9. Magnetic refrigeration: Materials, design, and applications. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-10-01

    The bibliography contains citations concerning cryogenics using magnetic refrigerants. Refrigerant properties, magnetic materials, and thermal characteristics are discussed. Magnetic refrigerators are used for helium liquefaction, cooling superconductors, and superfluid helium production. Carnot-cycle refrigerators, reciprocating refrigerators, parasitic refrigerators, Ericsson refrigerators, and Stirling cycle refrigerators are among the types of magnetic refrigerators evaluated. (Contains a minimum of 87 citations and includes a subject term index and title list.)

  10. Reclaiming Society Publishing

    Directory of Open Access Journals (Sweden)

    Philip E. Steinberg

    2015-07-01

    Full Text Available Learned societies have become aligned with commercial publishers, who have increasingly taken over the societies' function as independent providers of scholarly information. Using the example of geographical societies, the advantages and disadvantages of this trend are examined. It is argued that, in an era of digital publication, learned societies can offer leadership with a new model of open access that guarantees high-quality scholarly material whose publication costs are supported by society membership dues.

  11. Development and Testing of a Literature Search Protocol for Evidence Based Nursing: An Applied Student Learning Experience

    Directory of Open Access Journals (Sweden)

    Andy Hickner

    2011-09-01

    Full Text Available Objective – The study aimed to develop a search protocol and evaluate reviewers' satisfaction with an evidence-based practice (EBP) review by embedding a library science student in the process. Methods – The student was embedded in one of four review teams overseen by a professional organization for oncology nurses (ONS). A literature search protocol was developed by the student following discussion and feedback from the review team. Organization staff provided process feedback. Reviewers from both case and control groups completed a questionnaire to assess satisfaction with the literature search phases of the review process. Results – A protocol was developed and refined for use by future review teams. The collaboration and the resulting search protocol were beneficial for both the student and the review team members. The questionnaire results did not yield statistically significant differences in satisfaction with the search process between case and control groups. Conclusions – Evidence-based reviewers' satisfaction with the literature searching process depends on multiple factors, and it was not clear that embedding an LIS specialist in the review team improved satisfaction with the process. Future research with more respondents may elucidate specific factors that impact reviewers' assessment.

  12. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    Science.gov (United States)

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor, to make a more careful and thorough search near the birds' nest locations. In order to select a reasonable repeat-cycle disturbance number, a further study on the choice of the number of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164

  13. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2015-01-01

    Full Text Available In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor, to make a more careful and thorough search near the birds' nest locations. In order to select a reasonable repeat-cycle disturbance number, a further study on the choice of the number of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.
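    A minimal sketch of the baseline cuckoo search that both versions of this record build on (Lévy flights plus abandonment of a fraction pa of nests) is shown below on the sphere test function. This is the standard CS, not the RC-SSCS variant, and all parameter values are illustrative assumptions.

```python
import math
import random

def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, iters=300, seed=3):
    """Baseline cuckoo search with Mantegna Levy-flight steps (illustrative)."""
    rng = random.Random(seed)
    lo, hi = bounds
    beta = 1.5
    # Mantegna's sigma for Levy-distributed step lengths
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)

    def clamp(x):
        return [min(hi, max(lo, v)) for v in x]

    def levy():
        u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    best = min(range(n_nests), key=fit.__getitem__)

    for _ in range(iters):
        # Levy-flight phase: propose one cuckoo egg per nest, keep it if better
        for i in range(n_nests):
            s = 0.01 * levy()
            trial = clamp([nests[i][d] + s * (nests[i][d] - nests[best][d])
                           for d in range(dim)])
            ft = f(trial)
            if ft < fit[i]:
                nests[i], fit[i] = trial, ft
        # Abandonment phase: a fraction pa of the non-best nests is rebuilt
        for i in range(n_nests):
            if i != best and rng.random() < pa:
                nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fit[i] = f(nests[i])
        best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

sphere = lambda x: sum(v * v for v in x)  # minimum 0 at the origin
x_best, f_best = cuckoo_search(sphere, dim=5, bounds=(-5.0, 5.0))
```

The RC-SSCS variant of the record would add its repeat-cycled disturbance operation around the nest locations on top of this loop.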

  14. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior, based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  15. Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.

    Science.gov (United States)

    Bhandari, A K; Soni, V; Kumar, A; Singh, G K

    2014-07-01

    This paper presents a new contrast enhancement approach, based on the Cuckoo Search (CS) algorithm and DWT-SVD, for quality improvement of low-contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT); the CS algorithm is used to optimize each DWT subband; the singular value matrix of the thresholded low-low (LL) subband image is then obtained; and finally the enhanced image is reconstructed by applying the inverse DWT. The singular value matrix carries the intensity information of the image, and any modification of the singular values changes the intensity of the given image. The experimental results show the superiority of the proposed method in terms of PSNR, MSE, mean and standard deviation over conventional and state-of-the-art techniques. PMID:24893835

  16. PSO-based support vector machine with cuckoo search technique for clinical disease diagnoses.

    Science.gov (United States)

    Liu, Xiaoyong; Fu, Hui

    2014-01-01

    Disease diagnosis is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: firstly, a CS based approach for parameter optimization of SVM is developed to find the better initial parameters of kernel function, and then PSO is applied to continue SVM training and find the best parameters of SVM. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. Therefore, we can conclude that our proposed method is very efficient compared to the previously reported algorithms. PMID:24971382

  17. PSO-Based Support Vector Machine with Cuckoo Search Technique for Clinical Disease Diagnoses

    Directory of Open Access Journals (Sweden)

    Xiaoyong Liu

    2014-01-01

    Full Text Available Disease diagnosis is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes support vector machine (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: firstly, a CS based approach for parameter optimization of SVM is developed to find the better initial parameters of kernel function, and then PSO is applied to continue SVM training and find the best parameters of SVM. Experimental results indicate that the proposed CS-PSO-SVM model achieves better classification accuracy and F-measure than PSO-SVM and GA-SVM. Therefore, we can conclude that our proposed method is very efficient compared to the previously reported algorithms.

  18. WEB SEARCH ENGINE BASED SEMANTIC SIMILARITY MEASURE BETWEEN WORDS USING PATTERN RETRIEVAL ALGORITHM

    Directory of Open Access Journals (Sweden)

    Pushpa C N

    2013-02-01

    Full Text Available Semantic similarity measures play an important role in information retrieval, natural language processing and various tasks on the web such as relation extraction, community mining, document clustering, and automatic meta-data extraction. In this paper, we propose a Pattern Retrieval Algorithm (PRA) to compute the semantic similarity measure between words by combining both the page count method and the web snippets method. Four association measures are used to find semantic similarity between words in the page count method using web search engines. We use a Sequential Minimal Optimization (SMO) support vector machine (SVM) to find the optimal combination of page-count-based similarity scores and top-ranking patterns from the web snippets method. The SVM is trained to classify synonymous word pairs and non-synonymous word pairs. The proposed approach aims to improve the correlation values, precision, recall, and F-measures compared to the existing methods. The proposed algorithm achieves a correlation value of 89.8%, outperforming the existing methods.
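    Page-count association measures of the kind this record combines are commonly defined from three counts: H(P), H(Q), and the co-occurrence count H(P AND Q). A sketch of four standard measures follows; the page counts, the assumed index size N, and the noise threshold c are all hypothetical, and this is not the record's full PRA.

```python
import math

N = 10 ** 10   # assumed number of pages indexed by the search engine
c = 5          # threshold damping accidental co-occurrence counts

def web_jaccard(p, q, pq):
    return 0.0 if pq <= c else pq / (p + q - pq)

def web_overlap(p, q, pq):
    return 0.0 if pq <= c else pq / min(p, q)

def web_dice(p, q, pq):
    return 0.0 if pq <= c else 2.0 * pq / (p + q)

def web_pmi(p, q, pq):
    # Pointwise mutual information of the two query events, base-2 log
    return 0.0 if pq <= c else math.log2((pq / N) / ((p / N) * (q / N)))

# Hypothetical counts: H("car"), H("automobile"), H("car" AND "automobile")
p, q, pq = 2_000_000, 500_000, 300_000
scores = (web_jaccard(p, q, pq), web_overlap(p, q, pq),
          web_dice(p, q, pq), web_pmi(p, q, pq))
```

In the record's setup, scores like these are combined with snippet-derived lexical patterns and fed to the SVM.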

  19. An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering

    CERN Document Server

    Uwe, Aickelin; Jingpeng, Li

    2007-01-01

    This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is ...

  20. SEGMENTATION ALGORITHM BASED ON EDGE-SEARCHING FOR MULTI-LINEAR STRUCTURED LIGHT IMAGES

    Institute of Scientific and Technical Information of China (English)

    LIU Baohua; LI Bing; JIANG Zhuangde

    2006-01-01

    Aiming at the problem that disturbances on the edges of the light stripes make the segmentation of light-stripe images difficult, a new segmentation algorithm based on edge searching is presented. It first calculates the horizontal-coordinate gradient of every edge pixel to produce the corresponding gradient edge, then uses a length-variable 1D template to scan the gradient edges of the light stripes. The template is able to find disturbances of different widths by utilizing the distribution characteristics of the edge disturbances. The found disturbances are finally eliminated. The algorithm not only segments the light-stripe images smoothly, but also eliminates most disturbances on the light-stripe edges without damaging the 3D information of the images. A practical example of using the proposed algorithm is given at the end. Comparison proves that the efficiency of the algorithm is obviously improved.

  1. Handwritten Japanese Address Recognition Technique Based on Improved Phased Search of Candidate Rectangle Lattice

    Directory of Open Access Journals (Sweden)

    Hidehisa NAKAYAMA

    2004-08-01

    Full Text Available In the field of handwritten Japanese address recognition, it is common to recognize place-name strings from place-name images. In practice, however, it is necessary to recognize place-name strings from address images. Therefore, we have proposed a post-processing system which checks the list of place-name strings in two stages when recognizing place-name images. In this paper, we propose a new technique based on a phased search of a candidate rectangle lattice, and improve the technique with the detection of key characters for the final output. When our proposal is applied to the 1840 address-string images of the IPTP data, the experimental results clearly show the efficiency of our system in handwritten Japanese address recognition.

  2. Multiple-optima search method based on a metamodel and mathematical morphology

    Science.gov (United States)

    Li, Yulin; Liu, Li; Long, Teng; Chen, Xin

    2016-03-01

    This article investigates a non-population-based optimization method using mathematical morphology and radial basis function (RBF) metamodels for multimodal, computationally intensive functions. To obtain several feasible solutions, mathematical morphology is employed to locate promising regions. Sequential quadratic programming is then used to exploit the possible areas and determine the exact positions of the potential optima. To relieve the computational burden, metamodelling techniques are employed. The RBF metamodel varies considerably between iterations, so the positions of the potential optima move during optimization; a tolerance is therefore introduced to match corresponding potential optima between the two latest iterations. Furthermore, an optimality judgement criterion is introduced to ensure that all the output minima are global or local optima.

  3. Function Optimization and Parameter Performance Analysis Based on Gravitation Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-12-01

    Full Text Available The gravitational search algorithm (GSA) is a kind of swarm intelligence optimization algorithm based on the law of gravitation. The parameter initialization of all swarm intelligence optimization algorithms has an important influence on their global optimization ability. From the basic principle of GSA, the convergence rate of the algorithm is determined by the gravitational constant and the acceleration of the particles. The optimization performance is verified by simulation experiments on six typical test functions. The simulation results show that the convergence speed of the GSA algorithm is relatively sensitive to the setting of the algorithm parameters, and that the GSA parameters can be adjusted flexibly to improve the algorithm's convergence velocity and the accuracy of its solutions.
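    A minimal GSA sketch follows, showing the two quantities the record highlights: the decaying gravitational constant G(t) and the mass-weighted particle accelerations. All parameter values (G0, alpha, population size) are illustrative assumptions, not the authors' settings.

```python
import math
import random

def gsa(f, dim, bounds, n=20, iters=200, g0=10.0, alpha=20.0, seed=7):
    """Minimal gravitational search algorithm sketch (illustrative parameters)."""
    rng = random.Random(seed)
    lo, hi = bounds
    eps = 1e-12
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in pos]
        for i in range(n):
            if fit[i] < best_f:
                best_f, best_x = fit[i], pos[i][:]
        b, w = min(fit), max(fit)
        # Normalized masses: better (lower) fitness -> heavier agent
        m = [(w - fit[i]) / (w - b + eps) for i in range(n)]
        s = sum(m) + eps
        M = [mi / s for mi in m]
        G = g0 * math.exp(-alpha * t / iters)  # decaying gravitational constant
        for i in range(n):
            acc = [0.0] * dim
            for j in range(n):
                if j == i:
                    continue
                dist = math.dist(pos[i], pos[j]) + eps
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (pos[j][d] - pos[i][d]) / dist
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = gsa(sphere, dim=4, bounds=(-5.0, 5.0))
```

The sensitivity the record reports can be reproduced by varying g0 and alpha: a larger g0 causes early overshoot against the bounds, while a larger alpha shuts exploration down sooner.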

  4. Optimal fuzzy PID controller with adjustable factors based on flexible polyhedron search algorithm

    Institute of Scientific and Technical Information of China (English)

    谭冠政; 肖宏峰; 王越超

    2002-01-01

    A new kind of optimal fuzzy PID controller is proposed, which contains two parts. One is an on-line fuzzy inference system, and the other is a conventional PID controller. In the fuzzy inference system, three adjustable factors xp, xi, and xd are introduced. Their function is to further modify and optimize the result of the fuzzy inference so as to give the controller the optimal control effect on a given object. The optimal values of these adjustable factors are determined based on the ITAE criterion and Nelder and Mead's flexible polyhedron search algorithm. This optimal fuzzy PID controller has been used to control the executive motor of the intelligent artificial leg designed by the authors. The results of computer simulation indicate that this controller is very effective and can be widely used to control different kinds of objects and processes.

  5. A NEW SYSTEM DYNAMIC EXTREMUM SELF-SEARCHING METHOD BASED ON CORRELATION ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    李嘉; 刘文江; 胡军; 袁廷奇

    2003-01-01

    Objective To propose a new dynamic extremum self-searching method which can be used in extremum optimal control systems for industrial processes, overcoming the disadvantages of the traditional method. Methods The algorithm is based on correlation analysis. A pseudo-random binary m-sequence u(t) is added to the system input as a probe signal, the cross-correlation function between the system input and output is constructed, and the direction of the next hunting step is judged by the sign of its differential. Results Compared with traditional algorithms such as the step-forward hunting method, the iteration efficiency, hunting precision and anti-interference ability of the correlation analysis method are obviously better. The computer simulation experiments given illustrate these points. Conclusion The correlation analysis method can settle the optimum operating point of a device's operating process. It has the advantages of easy conditions and a simple calculation process.
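    The correlation-analysis idea can be sketched in discrete time: excite the plant around the current operating point with a pseudo-random binary probe, correlate the probe with the output, and step in the direction given by the sign of the correlation. The plant, probe amplitude and step size below are hypothetical.

```python
import random

def probe_direction(plant, x0, amp=0.05, n=200, seed=5):
    """Estimate the hunting direction at x0: excite with a pseudo-random
    binary probe u, correlate u with the plant output, and return the sign."""
    rng = random.Random(seed)
    u = [rng.choice((-1.0, 1.0)) for _ in range(n)]   # PRBS-like probe
    y = [plant(x0 + amp * ui) for ui in u]
    ybar = sum(y) / n
    corr = sum(ui * (yi - ybar) for ui, yi in zip(u, y)) / n
    # corr > 0: output rises with input, so step upward toward the maximum
    return 1.0 if corr > 0 else -1.0

def extremum_seek(plant, x0, step=0.1, rounds=40):
    x = x0
    for _ in range(rounds):
        x += step * probe_direction(plant, x)
    return x

plant = lambda x: -(x - 2.0) ** 2     # hypothetical plant, maximum at x = 2
x_opt = extremum_seek(plant, x0=0.0)
```

Once the operating point reaches the extremum, the correlation drops to zero and the iterate simply hunts within one step of the optimum.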

  6. A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems

    Directory of Open Access Journals (Sweden)

    Xuhao Zhang

    2014-01-01

    Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during successive optimization of current solutions and then taken as skeletons so as to improve the optimum-seeking ability, speed up the process of optimization, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. Through adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link splitting and the roulette approach are employed to choose FLs. Eighteen LSMDVRP instances in three groups are studied and new optimal solution values are obtained for nine of them, with higher computation speed and reliability.
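    A generic tabu search with 2-opt moves on a small single-depot TSP instance gives the flavor of the method; the framing-link mechanism and the multidepot structure of the record are not reproduced, and the instance, tenure and iteration count are hypothetical.

```python
import itertools
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def tabu_search_tsp(pts, iters=300, tenure=8, seed=11):
    rng = random.Random(seed)
    n = len(pts)
    tour = list(range(n))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, pts)
    tabu = {}  # move (i, j) -> iteration index until which it stays forbidden
    for it in range(iters):
        cand = None
        for i, j in itertools.combinations(range(n), 2):
            trial = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt move
            tl = tour_length(trial, pts)
            # Aspiration: a tabu move is still allowed if it beats the best
            if tabu.get((i, j), -1) >= it and tl >= best_len:
                continue
            if cand is None or tl < cand[0]:
                cand = (tl, trial, (i, j))
        tl, trial, move = cand
        tour = trial            # accept the best admissible neighbor
        tabu[move] = it + tenure
        if tl < best_len:
            best, best_len = trial[:], tl
    return best, best_len

# Hypothetical instance: 8 customers on a unit circle; the optimum follows it
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
best, best_len = tabu_search_tsp(pts)
```

The tabu list lets the search accept non-improving moves without cycling back immediately, which is what allows it to escape the local optima that plain 2-opt descent gets stuck in.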

  7. Application of 3D Zernike descriptors to shape-based ligand similarity searching

    Directory of Open Access Journals (Sweden)

    Venkatraman Vishwesh

    2009-12-01

    Full Text Available Abstract Background: The identification of promising drug leads from a large database of compounds is an important step in the preliminary stages of drug design. Although shape is known to play a key role in the molecular recognition process, its application to virtual screening poses significant hurdles both in terms of the encoding scheme and speed. Results: In this study, we have examined the efficacy of the alignment-independent three-dimensional Zernike descriptor (3DZD) for fast shape-based similarity searching. The performance of this approach was compared with several other methods, including the statistical-moments-based ultrafast shape recognition scheme (USR) and SIMCOMP, a graph matching algorithm that compares atom environments. Three benchmark datasets are used to thoroughly test the methods in terms of their ability for molecular classification, retrieval rate, and performance under conditions that simulate actual virtual screening tasks over a large pharmaceutical database. The 3DZD performed better than or comparably to the other methods examined, depending on the datasets and evaluation metrics used. Reasons for the success and the failure of the shape-based methods in specific cases are investigated. Based on the results for the three datasets, general conclusions are drawn with regard to their efficiency and applicability. Conclusion: The 3DZD has a unique ability for fast comparison of the three-dimensional shapes of compounds. The examples analyzed illustrate the advantages of, and the room for improvement in, the 3DZD.

  8. Concentrations, correlations and chemical species of PM2.5/PM10 based on published data in China: Potential implications for the revised particulate standard.

    Science.gov (United States)

    Zhou, Xuehua; Cao, Zhaoyu; Ma, Yujie; Wang, Linpeng; Wu, Ruidong; Wang, Wenxing

    2016-02-01

    Particulate matter (PM) has been of great concern in China due to the increasing haze pollution in recent years. In 2012, the Chinese national ambient air quality standard (NAAQS) was amended with a "more strict" regulation on PM concentrations, i.e., 35 and 70 µg/m³ for annual PM2.5 and PM10 averages, respectively (Grade-II, GB3095-2012). To evaluate the potential of China to attain the new NAAQS and to provide a more generalized chemical profile of PM in China, a comprehensive statistical analysis was carried out on published data of parallel PM2.5 and PM10 mass concentrations and chemical compositions. The results show that most of the measured concentrations far exceed the new NAAQS. PM2.5 and PM10 show a strong positive correlation (R² = 0.87) across China. Organic carbon (OC), sulfate and crustal species are the three major components of PM. The NO₃⁻/SO₄²⁻ ratios are 0.43 ± 0.26 in PM2.5 and 0.56 ± 0.29 in PM10, and the OC/EC ratios are 3.63 ± 1.73 in PM2.5 and 4.17 ± 2.09 in PM10, signifying that stationary emissions from coal combustion remain the main PM source. An evaluation of the current PM2.5 situation in China shows that it would take about 27 years to meet the limit value of 35 µg/m³ in the revised standard, implying a rigorous challenge for PM2.5 control in China in the future.

  9. Open Access Publishing: What Authors Want

    Science.gov (United States)

    Nariani, Rajiv; Fernandez, Leila

    2012-01-01

    Campus-based open access author funds are being considered by many academic libraries as a way to support authors publishing in open access journals. Article processing fees for open access have been introduced recently by publishers and have not yet been widely accepted by authors. Few studies have surveyed authors on their reasons for publishing…

  10. Content-Based Publish/Subscribe Mechanism and Algorithm Based on Predicate Covering

    Institute of Scientific and Technical Information of China (English)

    潘亦; 张凯隆; 潘金贵

    2011-01-01

    The content-based publish/subscribe system is well suited to large-scale distributed interactive applications owing to its asynchronous, many-to-many and loosely coupled communication properties. Efficient matching and routing algorithms and dynamic adaptability are the key issues in large-scale content-based publish/subscribe systems. Consequently, to enhance a publish/subscribe system's matching and routing efficiency, it is practical to reduce the subscription scale and routing table sizes at internal content-based routers and to optimize the structure of subscription expressions. After analyzing the related technologies of publish/subscribe systems, this paper proposes the concept of the predicate relation and a new structure called the predicate relation binary tree (PRBT). The PRBT describes the relations among predicates, and subscription maintenance, unsubscription and matching algorithms based on the PRBT are designed and implemented. By optimizing the structure of the predicate relation and the subscription selectivity transmitting strategy, the approach not only reduces the subscription sizes maintained at each internal router, but also enhances the publish/subscribe system's performance, such as event matching and routing efficiency. In addition, this paper examines several publish/subscribe cases and proves the validity of the properties of the PRBT-* algorithms. Theoretical analysis and extensive experiments reveal that the predicate covering method obtains better results in subscription maintenance overhead, algorithm efficiency and overall publish/subscribe system performance.

  11. RETRACTION: Publishers' Note

    Science.gov (United States)

    Graeme Watt (Executive Editor)

    2010-06-01

    Withdrawal of the paper "Was the fine-structure constant variable over cosmological time?" by L. D. Thong, N. M. Giao, N. T. Hung and T. V. Hung (EPL, 87 (2009) 69002) This paper has been formally withdrawn on ethical grounds because the article contains extensive and repeated instances of plagiarism. EPL treats all identified evidence of plagiarism in the published articles most seriously. Such unethical behaviour will not be tolerated under any circumstance. It is unfortunate that this misconduct was not detected before going to press. My thanks to Editor colleagues from other journals for bringing this fact to my attention.

  12. X-ENS: Semantic enrichment of web search results at real-time

    OpenAIRE

    Fafalios P.; Tzitzikas Y.

    2013-01-01

    While more and more semantic data are published on the Web, an important question is how typical web users can access and exploit this body of knowledge. Although existing interaction paradigms in semantic search hide the complexity behind an easy-to-use interface, they have not managed to cover common search needs. In this paper, we present X-ENS (eXplore ENtities in Search), a web search application that enhances the classical, keyword-based, web searching with semantic information, as a m...

  13. A constraint-based search algorithm for parameter identification of environmental models

    Science.gov (United States)

    Gharari, S.; Shafiei, M.; Hrachowitz, M.; Kumar, R.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.

    2014-12-01

    Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available (e.g., when making predictions in ungauged basins). In this study, we provide an alternative approach for parameter identification using constraints based on two types of restrictions derived from prior (or expert) knowledge. The first, called parameter constraints, restricts the solution space based on realistic relationships that must hold between the different model parameters, while the second, called process constraints, requires that additional realism relationships between the fluxes and state variables be satisfied. Specifically, we propose a search algorithm for finding parameter sets that simultaneously satisfy such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.
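The sampling idea can be reduced to its simplest form as rejection sampling: draw parameter sets uniformly from their prior ranges and keep only those satisfying the expert-knowledge constraints. This is a hedged sketch, not the authors' stepwise algorithm; the parameter names (`k_fast`, `k_slow`) and the single inequality constraint are invented for illustration.

```python
import random

def constrained_sample(bounds, constraints, n_keep=100, seed=1, max_tries=100_000):
    """Keep uniformly drawn parameter sets that satisfy all prior-knowledge constraints.

    bounds: {name: (low, high)}; constraints: predicates over the parameter dict.
    """
    rng = random.Random(seed)
    kept = []
    for _ in range(max_tries):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        if all(c(params) for c in constraints):
            kept.append(params)
            if len(kept) == n_keep:
                break
    return kept

# Invented toy case: two recession constants where the fast store must drain faster
bounds = {"k_fast": (0.0, 1.0), "k_slow": (0.0, 1.0)}
constraints = [lambda p: p["k_fast"] > p["k_slow"]]
sets = constrained_sample(bounds, constraints, n_keep=50)
```

Every retained set satisfies the constraints by construction, which is exactly the "consistent with the modeler's intuition" property the abstract describes; a stepwise sampler would merely reach such sets more efficiently when the feasible region is small.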

  14. Home-Explorer: Ontology-Based Physical Artifact Search and Hidden Object Detection System

    Directory of Open Access Journals (Sweden)

    Bin Guo

    2008-01-01

    Full Text Available A new system named Home-Explorer that searches and finds physical artifacts in a smart indoor environment is proposed. The view on which it is based is artifact-centered and uses sensors attached to the everyday artifacts (called smart objects) in the real world. This paper makes two main contributions. First, it addresses the robustness of the embedded sensors, which is seldom discussed in previous smart artifact research. Because sensors may sometimes be broken or fail to work under certain conditions, smart objects become hidden ones; however, current systems provide no mechanism to detect and manage objects when this problem occurs. Second, there is no common context infrastructure for building smart artifact systems, which makes it difficult for separately developed applications to interact with each other and hard for them to share and reuse knowledge. Unlike previous systems, Home-Explorer builds on an ontology-based knowledge infrastructure named Sixth-Sense, which makes it easy for the system to interact with other applications or agents also based on this ontology. The hidden object problem is also reflected in our ontology, which enables Home-Explorer to deal with both smart objects and hidden objects. A set of rules for deducing an object's status or location information and for locating hidden objects is described and evaluated.

  15. Pep-3D-Search: a method for B-cell epitope prediction based on mimotope analysis

    Directory of Open Access Journals (Sweden)

    Wang Yan

    2008-12-01

    Full Text Available Abstract Background The prediction of conformational B-cell epitopes is one of the most important goals in immunoinformatics. The solution to this problem, even if approximate, would help in designing experiments to precisely map the residues of interaction between an antigen and an antibody. Consequently, this area of research has received considerable attention from immunologists, structural biologists and computational biologists. Phage-displayed random peptide libraries are powerful tools used to obtain mimotopes that are selected by binding to a given monoclonal antibody (mAb) in a similar way to the native epitope. These mimotopes can be considered as functional epitope mimics. Mimotope analysis based methods can predict not only linear but also conformational epitopes, and this has been the focus of much research in recent years. Though some algorithms based on mimotope analysis have been proposed, the precise localization of the interaction site mimicked by the mimotopes is still a challenging task. Results In this study, we propose a method for B-cell epitope prediction based on mimotope analysis called Pep-3D-Search. Given the 3D structure of an antigen and a set of mimotopes (or a motif sequence derived from the set of mimotopes), Pep-3D-Search can be used in two modes: mimotope or motif. To evaluate the performance of Pep-3D-Search in predicting epitopes from a set of mimotopes, 10 epitopes defined by crystallography were compared with the predicted results from Pep-3D-Search: the average Matthews correlation coefficient (MCC), sensitivity and precision were 0.1758, 0.3642 and 0.6948. Compared with other available prediction algorithms, Pep-3D-Search showed comparable MCC, specificity and precision, and could provide novel, rational results. To verify the capability of Pep-3D-Search to align a motif sequence to a 3D structure for predicting epitopes, 6 test cases were used.
The predictive performance of Pep-3D-Search was demonstrated to be
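The evaluation metrics quoted above (MCC, sensitivity, precision) are standard confusion-matrix quantities over predicted versus crystallographically defined epitope residues. A minimal sketch, treating epitopes as sets of residue identifiers on an antigen of known size (the residue sets below are invented):

```python
import math

def epitope_metrics(predicted, actual, n_residues):
    """MCC, sensitivity and precision for residue sets on an antigen of n_residues."""
    tp = len(predicted & actual)           # residues correctly called epitope
    fp = len(predicted - actual)           # called epitope but are not
    fn = len(actual - predicted)           # missed epitope residues
    tn = n_residues - tp - fp - fn         # correctly called non-epitope
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return mcc, sensitivity, precision
```

MCC is the most informative of the three here because it also rewards correct rejection of the (usually much larger) non-epitope surface, which sensitivity and precision ignore.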

  16. On the trajectories and performance of Infotaxis, an information-based greedy search algorithm

    CERN Document Server

    Barbieri, Carlo; Monasson, Rémi

    2010-01-01

    We present a continuous-space version of Infotaxis, a search algorithm where a searcher greedily moves to maximize the gain in information about the position of the target to be found. Using a combination of analytical and numerical tools we estimate the probability that the search is successful and study the nature of the trajectories in two and three dimensions. We also discuss the analogy with confined polyelectrolytes and possible extensions to non-greedy searches.
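A minimal discrete sketch of the greedy information-gain step can be written for a grid world: the searcher keeps a posterior over target cells, models detection probability as decaying with distance (an assumed `exp(-d/λ)` model, not the paper's continuous-space encounter statistics), and moves to the neighbor that minimizes expected posterior entropy.

```python
import math

def entropy(p):
    # Shannon entropy (nats) of a {cell: probability} map
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def infotaxis_step(pos, posterior, lam=1.0):
    """Greedy move: choose the neighbor minimizing expected posterior entropy."""
    def p_detect(a, b):
        # assumed model: chance of spotting the target decays with distance
        return math.exp(-math.dist(a, b) / lam)

    best_move, best_h = pos, float("inf")
    for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)):
        m = (pos[0] + dx, pos[1] + dy)
        if m not in posterior:
            continue  # stay on the grid
        hit = sum(p * p_detect(m, t) for t, p in posterior.items())
        miss = {t: p * (1 - p_detect(m, t)) for t, p in posterior.items()}
        z = sum(miss.values())
        h_miss = entropy({t: p / z for t, p in miss.items()}) if z > 0 else 0.0
        expected_h = (1 - hit) * h_miss   # a detection collapses entropy to zero
        if expected_h < best_h:
            best_h, best_move = expected_h, m
    return best_move
```

Iterating this step while Bayes-updating the posterior after each observation gives the greedy searcher whose trajectories the paper analyzes; the greediness is visible in the fact that only one move ahead is ever evaluated.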

  17. G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases

    OpenAIRE

    Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H.

    2009-01-01

    Structured data including sets, sequences, trees and graphs, pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML docum...

  18. Semantic Based Efficient Retrieval of Relevant Resources and its Services using Search Engines

    Directory of Open Access Journals (Sweden)

    Pradeep Gurunathan

    2014-05-01

    Full Text Available The main objective of this paper is to propose an efficient mechanism for the retrieval of resources using a semantic approach and to exchange information using a Service Oriented Architecture. A framework has been developed to empower users to locate relevant resources and associated services through meaningful semantics. The resources are retrieved efficiently by a Modified Matchmaking Algorithm and dynamic ranking, which shows an improvement over existing search techniques. The retrieval performance of the proposed search mechanism is computed and compared with existing popular search engines such as Google and Yahoo, which shows a significant amount of improvement.

  19. Cluster based hierarchical resource searching model in P2P network

    Institute of Scientific and Technical Information of China (English)

    Yang Ruijuan; Liu Jian; Tian Jingwen

    2007-01-01

    To address the large network load generated by the Gnutella resource-searching model in Peer-to-Peer (P2P) networks, an improved model to decrease the network expense is proposed, which establishes a cluster in the P2P network, auto-organizes logical layers, and applies a hybrid mechanism of directional searching and flooding. The performance analysis and simulation results show that the proposed hierarchical searching model effectively reduces the generated message load and that its search-response time is nearly as good as that of the Gnutella model.

  20. Local search engine with global content based on domain specific knowledge

    OpenAIRE

    Pohorec, Sandi; Verlič, Mateja; Zorman, Milan

    2012-01-01

    In the growing need for information we have come to rely on search engines. The use of large-scale search engines, such as Google, is as common as surfing the World Wide Web. We are impressed with the capabilities of these search engines, but there is still a need for improvement. A common problem with searching is the ambiguity of words. Their meaning often depends on the context in which they are used or varies across specific domains. To resolve this we propose a domain specific search engine ...

  1. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    Science.gov (United States)

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method with the best objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and analyzed. CBIR in medical image retrieval applications is used to assist physicians in clinical decision support and research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching between the features of query and database images is carried out using the Euclidean distance measure, and similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall, and the systems developed using HSA-based and conventional Otsu MLT methods are compared. Precision and recall are found to be 96% and 58% for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening. PMID:25996728
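The objective that the HSA maximizes here is Otsu's between-class variance generalized to multiple thresholds. A sketch of that objective on a grey-level histogram, with a brute-force two-threshold search standing in for the Harmony Search step (the 16-level histogram in the usage below is invented; real images would use 256 levels, where brute force becomes the expensive part HSA avoids):

```python
def between_class_variance(hist, thresholds):
    """Otsu objective: weighted spread of class means around the global mean."""
    total = sum(hist)
    global_mean = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + list(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds, bounds[1:]):     # one class per [lo, hi) band
        w = sum(hist[lo:hi])
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / w
        var += (w / total) * (mu - global_mean) ** 2
    return var

def exhaustive_two_level(hist):
    # brute-force stand-in for the HSA search over threshold pairs (t1 < t2)
    pairs = ((t1, t2) for t1 in range(1, len(hist))
             for t2 in range(t1 + 1, len(hist)))
    return max(pairs, key=lambda ts: between_class_variance(hist, ts))
```

Harmony Search replaces the exhaustive loop with a population of candidate threshold vectors that are recombined and pitch-adjusted, evaluating the same objective far fewer times.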

  3. Reconsidering Visual Search.

    Science.gov (United States)

    Kristjánsson, Árni

    2015-12-01

    The visual search paradigm has had an enormous impact in many fields. A theme running through this literature has been the distinction between preattentive and attentive processing, which I refer to as the two-stage assumption. Under this assumption, slopes of set-size and response time are used to determine whether attention is needed for a given task or not. Even though a lot of findings question this two-stage assumption, it still has enormous influence, determining decisions on whether papers are published or research funded. The results described here show that the two-stage assumption leads to very different conclusions about the operation of attention for identical search tasks based only on changes in response (presence/absence versus Go/No-go responses). Slopes are therefore an ambiguous measure of attentional involvement. Overall, the results suggest that the two-stage model cannot explain all findings on visual search, and they highlight how slopes of response time and set-size should only be used with caution. PMID:27551357
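The search slope at issue is simply the least-squares slope of response time against set size. A minimal sketch (the response times in milliseconds are invented):

```python
def search_slope(set_sizes, rts):
    """Least-squares slope (ms per item) of response time against set size."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    return sxy / sxx
```

Under the two-stage assumption, a slope near zero is read as "preattentive" and a steep slope as "attentive"; the abstract's point is that the same display can yield different slopes depending only on the response mode, so the number alone should be interpreted with caution.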

  4. Política educacional, ensino superior público e pesquisa acadêmica: um jogo de xadrez encassinado / Educational policy, public higher education and academic research: a rigged game of chess

    Directory of Open Access Journals (Sweden)

    Maria de Lourdes Almeida,

    2007-01-01

    Full Text Available In this text we critically examine some dimensions of the configuration of educational policies in the neoliberal discourse, focusing our analysis on the research carried out at the PUBLIC UNIVERSITY at the end of the 20th century and the beginning of the 21st. We then present some general considerations on how neoliberal rhetoric is constructed in the educational field. Our objective was to question how the neoliberal state conceives and projects educational policy within the academic research developed by the public university. We conclude by highlighting some of the most evident consequences of the farce labelled by the World Bank, BIRD and IMF as the "Pedagogy of Inclusion" (an inclusion that excludes), promoted by educational discourses in favour of the construction of a granted citizenship. The educational chaos we are experiencing follows a logic determined by international and national public educational policy. What prevails in this era of irrationality is the sedimentation of the American research model. The de-construction of the intellectual's participation in social praxis contributes to the Academy losing its identity and further reproducing social inequality, affirming the Pedagogy of Exclusion.

  5. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm

    Science.gov (United States)

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to search the rotation angle exhaustively, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to suit such situations. The experimental results demonstrated that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910
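The optimization core can be sketched as a textbook PSO over a single bounded variable standing in for the rotation angle. The objective below is a hypothetical stand-in (a smooth function minimized at an assumed `secret` angle), not the paper's encryption-domain fitness, and the swarm constants are conventional defaults rather than the paper's tuned values:

```python
import math
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=0):
    """Textbook PSO over a single bounded variable (the unknown rotation angle)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pbest_f = xs[:], [f(x) for x in xs]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # keep inside the angle range
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i], fx
    return gbest, gbest_f

# Hypothetical stand-in objective, minimized exactly at the assumed secret angle
secret = 1.234
angle, err = pso_minimize(
    lambda a: abs(math.sin(a) - math.sin(secret)) + abs(math.cos(a) - math.cos(secret)),
    0.0, math.pi / 2)
```

In the attack setting, `f` would measure the mismatch between a trial decryption and known reference data, so a near-zero objective value recovers the key angle without exhaustive scanning.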

  6. Searching for first-degree familial relationships in California's offender DNA database: validation of a likelihood ratio-based approach.

    Science.gov (United States)

    Myers, Steven P; Timken, Mark D; Piucci, Matthew L; Sims, Gary A; Greenwald, Michael A; Weigand, James J; Konzak, Kenneth C; Buoncristiani, Martin R

    2011-11-01

    A validation study was performed to measure the effectiveness of using a likelihood ratio-based approach to search for possible first-degree familial relationships (full-sibling and parent-child) by comparing an evidence autosomal short tandem repeat (STR) profile to California's ∼1,000,000-profile State DNA Index System (SDIS) database. Test searches used autosomal STR and Y-STR profiles generated for 100 artificial test families. When the test sample and the first-degree relative in the database were characterized at the 15 Identifiler(®) (Applied Biosystems(®), Foster City, CA) STR loci, the search procedure included 96% of the fathers and 72% of the full-siblings. When the relative profile was limited to the 13 Combined DNA Index System (CODIS) core loci, the search procedure included 93% of the fathers and 61% of the full-siblings. These results, combined with those of functional tests using three real families, support the effectiveness of this tool. Based upon these results, the validated approach was implemented as a key, pragmatic and demonstrably practical component of the California Department of Justice's Familial Search Program. An investigative lead created through this process recently led to an arrest in the Los Angeles Grim Sleeper serial murders.

  7. Tales from the Field: Search Strategies Applied in Web Searching

    Directory of Open Access Journals (Sweden)

    Soohyung Joo

    2010-08-01

    Full Text Available In their web search processes users apply multiple types of search strategies, which consist of different search tactics. This paper identifies eight types of information search strategies with associated cases based on sequences of search tactics during the information search process. Thirty-one participants representing the general public were recruited for this study. Search logs and verbal protocols offered rich data for the identification of different types of search strategies. Based on the findings, the authors further discuss how to enhance web-based information retrieval (IR systems to support each type of search strategy.

  8. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    Science.gov (United States)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality is met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed in Java. The model is built on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results have turned out to be promising, and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.

  9. NESSiE: The Experimental Sterile Neutrino Search in Short-Base-Line at CERN

    CERN Document Server

    Kose, Umut

    2013-01-01

    Several different experimental results indicate the existence of anomalies in the neutrino sector. Models beyond the Standard Model have been developed to explain these results and involve one or more additional neutrinos that do not interact weakly. A new experimental program is therefore needed to study this potential new physics with a possible new Short-Base-Line neutrino beam at CERN. CERN is currently promoting the start-up of a New Neutrino Facility in the North Area site, which may host two complementary detectors, one based on LAr technology and one corresponding to a muon spectrometer, with the system doubled over two different sites. With regard to the latter option, NESSiE, the Neutrino Experiment with Spectrometers in Europe, has been proposed to search for sterile neutrinos by studying Charged Current (CC) muon neutrino and antineutrino interactions. The detector consists of two magnetic spectrometers to be located at two sites, "Near" and "Far" from the proton target of the CERN-SPS beam. Each sp...

  10. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    Science.gov (United States)

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
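The two ingredients being hybridized can be sketched independently: the QAP objective couples a flow matrix with a distance matrix under a facility-to-location assignment, and a tabu search explores pairwise swaps with a short-term memory and an aspiration rule. This plain tabu search sketches only the local-search component that replaces BBO's mutation, not the full hybrid; the 3×3 instance is invented.

```python
import itertools

def qap_cost(assign, flow, dist):
    # assign[i] = location of facility i; cost couples flows with distances
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def tabu_qap(flow, dist, iters=150, tenure=5):
    n = len(flow)
    current = list(range(n))
    best, best_cost = current[:], qap_cost(current, flow, dist)
    tabu = {}                     # swap -> iteration until which it stays forbidden
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            nb = current[:]
            nb[i], nb[j] = nb[j], nb[i]
            cost = qap_cost(nb, flow, dist)
            # aspiration: a tabu move is allowed if it beats the best known cost
            if tabu.get((i, j), -1) >= it and cost >= best_cost:
                continue
            candidates.append((cost, (i, j), nb))
        if not candidates:
            continue              # wait for tabu tenures to expire
        cost, move, nb = min(candidates)
        current, tabu[move] = nb, it + tenure
        if cost < best_cost:
            best, best_cost = nb[:], cost
    return best, best_cost
```

In the hybrid, each BBO habitat (a candidate assignment) would be refined by a short run of this procedure instead of being randomly mutated, which is what preserves solution quality.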

  11. Optimization of Nano-Process Deposition Parameters Based on Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Norlina Mohd Sabri

    2016-06-01

    Full Text Available This research focuses on the radio frequency (RF) magnetron sputtering process, a physical vapor deposition technique which is widely used in thin film production. This process requires an optimized combination of deposition parameters in order to obtain the desired thin film. The conventional method for optimizing the deposition parameters has been reported to be costly and time consuming due to its trial-and-error nature. Thus, the gravitational search algorithm (GSA) technique has been proposed to solve this nano-process parameter optimization problem. In this research, the optimized parameter combination was expected to produce the desired electrical and optical properties of the thin film. The performance of GSA in this research was compared with that of Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Immune System (AIS) and Ant Colony Optimization (ACO). Based on the overall results, the GSA-optimized parameter combination generated the best electrical and acceptable optical properties of the thin film compared to the others. This computational experiment is expected to overcome the problem of having to conduct repetitive laboratory experiments to obtain the most optimized parameter combination. Based on this initial experiment, the adaptation of GSA to this problem could offer a more efficient and productive way of depositing quality thin film in the fabrication process.
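A compact sketch of the GSA mechanics described above (masses derived from normalized fitness, a decaying gravitational constant, stochastic attraction between agents), demonstrated on a toy sphere function rather than the sputtering-parameter objective; all constants and the in-place position updates are illustrative simplifications.

```python
import math
import random

def gsa_minimize(f, bounds, n_agents=15, iters=100, g0=100.0, seed=3):
    """Gravitational Search Algorithm: fitter agents get heavier and attract the rest."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(x) for x in pos]
        worst, bst = max(fit), min(fit)
        if bst < best_f:
            best_f, best_x = bst, pos[fit.index(bst)][:]
        # masses normalized from fitness (best agent -> heaviest)
        raw = [(worst - fi) / (worst - bst) if worst > bst else 1.0 for fi in fit]
        mass = [m / sum(raw) for m in raw]
        g = g0 * math.exp(-20 * t / iters)   # gravitational "constant" decays over time
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                r = math.dist(pos[i], pos[j]) + 1e-12
                for d in range(dim):
                    acc[d] += rng.random() * g * mass[j] * (pos[j][d] - pos[i][d]) / r
            for d in range(dim):
                vel[i][d] = rng.random() * vel[i][d] + acc[d]
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
    return best_x, best_f
```

For the sputtering application, each dimension would correspond to one deposition parameter (RF power, deposition time, oxygen flow rate, substrate temperature) with its physical range as the bound, and `f` would score the predicted film quality.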

  12. Marine Planning and Service Platform: specific ontology based semantic search engine serving data management and sustainable development

    Science.gov (United States)

    Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea

    2016-04-01

    The MAPS (Marine Planning and Service Platform) project aims at building a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that gathers, classifies and structures marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost of data collection is very high, and the new paradigm is based on the concept of collecting once and re-using many times (for re-analysis, marine environment assessment, studies on trends, etc.). This concept requires access to quality-controlled data and to information provided in reports (grey literature) and/or in the relevant scientific literature. Hence, new technology needs to be created by integrating several disciplines such as data management, information systems and knowledge management. In one of the most important EC projects on data management, SeaDataNet (www.seadatanet.org), an initial example of knowledge management is provided through the Common Data Index, which provides links to data and (eventually) to papers. There are efforts to develop search engines that find authors' contributions to scientific literature or publications. This implies the use of persistent identifiers (such as DOIs), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited, used, or of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; to address this problem, the MAPS project suggests using different semantic technologies for retrieving text and data, thus obtaining more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text

  13. Why publish with AGU?

    Science.gov (United States)

    Graedel, T. E.

    The most visible activity of the American Geophysical Union is its publication of scientific journals. There are eight of these: Journal of Geophysical Research—Space Physics (JGR I), Journal of Geophysical Research—Solid Earth (JGR II), Journal of Geophysical Research—Oceans and Atmospheres (JGR III), Radio Science (RS), Water Resources Research (WRR), Geophysical Research Letters (GRL), Reviews of Geophysics and Space Physics (RGSP), and the newest, Tectonics. AGU's journals have established solid reputations for scientific excellence over the years. Reputation is not sufficient to sustain a high-quality journal, however, since other factors enter into an author's decision on where to publish his or her work. In this article the characteristics of AGU's journals are compared with those of its competitors, with the aim of furnishing guidance to prospective authors and a better understanding of the value of the products to purchasers.

  14. StarTracker: An Integrated, Web-based Clinical Search Engine

    OpenAIRE

    Gregg, William; Jirjis, Jim; Lorenzi, Nancy M.; Giuse, Dario

    2003-01-01

    This poster details the design and use of the StarTracker clinical search engine. This program is fully integrated within our electronic medical record system and allows users to enter simple rules that direct formatted searches of multiple legacy databases.

  15. Effective access to digital assets: An XML-based EAD search system

    NARCIS (Netherlands)

    J. Zhang; K.N. Fachry; J. Kamps

    2009-01-01

    This paper focuses on the question of effective access methods, by developing novel search tools that will be crucial on the massive scale of digital asset repositories. We illustrate concretely why XML matters in digital curation by describing an implementation of a baseline digital asset search sy
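    The record above argues that XML structure matters for digital-asset search. A minimal illustration (not the authors' system) of the underlying idea: with an XML-aware parser, a query can target a specific EAD element such as a title rather than matching anywhere in the full text. The tiny sample document and helper name here are hypothetical.

    ```python
    # Minimal sketch: structure-aware search over an EAD-style XML finding
    # aid, restricting the match to <unittitle> elements only. The sample
    # document and function name are illustrative assumptions.
    import xml.etree.ElementTree as ET

    EAD_SAMPLE = """
    <ead>
      <archdesc>
        <did><unittitle>Letters of Ada Lovelace</unittitle></did>
        <scopecontent><p>Correspondence on early computing.</p></scopecontent>
      </archdesc>
    </ead>
    """

    def search_titles(xml_text: str, term: str) -> list[str]:
        """Case-insensitive search restricted to <unittitle> elements."""
        root = ET.fromstring(xml_text)
        return [el.text for el in root.iter("unittitle")
                if term.lower() in (el.text or "").lower()]

    print(search_titles(EAD_SAMPLE, "lovelace"))
    # → ['Letters of Ada Lovelace']
    ```

    A full-text engine would also match the term inside `<scopecontent>`; the element-scoped query is what the structured (XML) approach buys.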

  16. Identifying the Impact of Domain Knowledge and Cognitive Style on Web-Based Information Search Behavior

    Science.gov (United States)

    Park, Young; Black, John B.

    2007-01-01

    Although information searching in hypermedia environments has become a new important problem solving capability, there is not much known about what types of individual characteristics constitute a successful information search behavior. This study mainly investigated which of the 2 factors, 1) natural characteristics (cognitive style), and 2)…

  17. Predicting relevance based on assessor disagreement: analysis and practical applications for search evaluation

    NARCIS (Netherlands)

    Demeester, Thomas; Aly, Robin; Hiemstra, Djoerd; Nguyen, Dong-Phuong; Develder, Chris

    2015-01-01

    Evaluation of search engines relies on assessments of search results for selected test queries, from which we would ideally like to draw conclusions in terms of relevance of the results for general (e.g., future, unknown) users. In practice however, most evaluation scenarios only allow us to conclus

  18. A Rule-Based System for Hybrid Search and Delivery of Learning Objects to Learners

    Science.gov (United States)

    Biletskiy, Yevgen; Baghi, Hamidreza; Steele, Jarrett; Vovk, Ruslan

    2012-01-01

    Purpose: Presently, searching the internet for learning material relevant to one's own interests continues to be a time-consuming task. Systems that can suggest learning material (learning objects) to a learner would reduce time spent searching for material, and enable the learner to spend more time on actual learning. The purpose of this paper is…

  19. A Strategic Analysis of Search Engine Advertising in Web-based Commerce

    Directory of Open Access Journals (Sweden)

    Ela Kumar

    2007-08-01

    Full Text Available This paper endeavors to explore the role of the Search Engine in the Online Business Industry. It discusses Search Engine advertising programs and provides insight into the revenue generated online via Search Engines. It explores the growth of the Online Business Industry in India and emphasizes the role of the Search Engine as the major advertising vehicle. A case study on the revolution of the Indian Advertising Industry has been conducted and its impact on online revenue evaluated. Search Engine advertising strategies have been discussed in detail and the impact of the Search Engine on the Indian Advertising Industry has been analyzed. The paper also provides an analytical and competitive study of online advertising strategies against traditional advertising tools, evaluating their efficiencies on important advertising parameters. The paper concludes with a brief discussion of the malpractices that adversely affect the efficiency of the Search Engine advertising model, and highlights the key hurdles the Search Engine Industry faces in the Indian business scenario.

  20. Scatter-search-based metaheuristic for robust optimization of the deployment of DWDM technology on optical networks with survivability

    Directory of Open Access Journals (Sweden)

    Moreno-Pérez José A.

    2005-01-01

    Full Text Available In this paper we discuss the application of a metaheuristic approach based on Scatter Search to the robust optimization of the planning problem in deploying Dense Wavelength Division Multiplexing (DWDM) technology on an existing optical fiber network, taking into account, in addition to the forecasted demands, the uncertainty in the survivability requirements.
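    The scatter-search metaheuristic named in this record follows a generic reference-set / combination / improvement loop. A toy sketch of that loop on a one-dimensional objective is shown below; the network-planning model, robustness handling, and survivability constraints of the paper are not represented, and all parameter choices are illustrative.

    ```python
    # Generic scatter-search skeleton on a toy objective, illustrating the
    # diversification -> reference set -> combination -> improvement loop.
    # This is a sketch of the metaheuristic family, not the paper's method.
    import random

    def improve(x, f, step=0.1):
        """Simple local improvement: a short random hill-climb."""
        for _ in range(20):
            cand = x + random.uniform(-step, step)
            if f(cand) < f(x):
                x = cand
        return x

    def scatter_search(f, lo, hi, ref_size=5, iters=30, seed=0):
        random.seed(seed)
        # Diversification: random initial solutions, locally improved.
        pool = [improve(random.uniform(lo, hi), f) for _ in range(20)]
        ref = sorted(pool, key=f)[:ref_size]              # reference set
        for _ in range(iters):
            # Combination: midpoints of reference-set pairs.
            trials = [(a + b) / 2 for a in ref for b in ref if a < b]
            trials = [improve(t, f) for t in trials]      # improvement
            ref = sorted(set(ref + trials), key=f)[:ref_size]  # update
        return ref[0]

    # Minimize (x - 2)^2; the search should land near the optimum x = 2.
    best = scatter_search(lambda x: (x - 2) ** 2, -10, 10)
    print(f"best ≈ {best:.2f}")
    ```

    A robust-optimization variant, as in the paper, would evaluate each candidate against a set of demand/survivability scenarios rather than a single objective.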