WorldWideScience

Sample records for base published search

  1. PUBLISHING WEB SERVICES WHICH ENABLE SYSTEMATIC SEARCH

    Directory of Open Access Journals (Sweden)

    Vitezslav Nezval

    2012-01-01

Web Services (WS) are used for the development of distributed applications, in which native code is assembled with references to remote Web Services. Thousands of Web Services are available on the Web, but the problem is how to find an appropriate WS by discovering its details, i.e., the description of the functionality of the object methods exposed for public use. Several models have been suggested, and some of them implemented, but so far none of them allows systematic, publicly available search. This paper suggests a model for publishing WS in a flexible way that allows an automated search for the desired Web Service by category and/or functionality without having to access any dedicated servers. The search results in a narrow set of Web Services suited to the problem solution according to the user's specification.

  2. Publishing studies: the search for an elusive academic object

    Directory of Open Access Journals (Sweden)

    Sophie Noël

    2015-07-01

This paper questions the validity of the so-called “publishing studies” as an academic discipline, while trying to situate them within the field of social sciences and to contextualize their success. It argues that a more appropriate frame could be adopted to describe what people studying the transformations of book publishing do – or should do – both at a theoretical and methodological level. The paper begins by providing an overview of the scholarly and academic context in France as far as book publishing is concerned, highlighting its genesis and current development. It goes on to underline the main pitfalls that such a sub-field as publishing studies is faced with, before making suggestions as to the bases for a stimulating analysis of publishing, making a case for an interdisciplinary approach nurtured by social sciences. The paper is based on a long-term field study on independent presses in France, together with a survey of literature on the subject.

  3. Search Based Software Engineering

    OpenAIRE

    Jaspreet Bedi; Kuljit Kaur

    2014-01-01

This paper reviews search based software engineering research and identifies the major milestones in this direction. The SBSE approach has been the topic of several surveys and reviews. Search Based Software Engineering (SBSE) consists of the application of search-based optimization to software engineering. Using SBSE, a software engineering task is formulated as a search problem by defining a suitable candidate solution representation and a fitness function to differentiate be...

  4. Web-Based Computing Resource Agent Publishing

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

Web-based computing resource publishing is an efficient way to provide additional computing capacity for users who need more computing resources than they themselves can afford, by making use of idle computing resources in the Web. Extensibility and reliability are crucial for agent publishing. The parent-child agent framework and the primary-slave agent framework were proposed and discussed in detail.

  5. Tabu Search Based Strategies for Conformational Search

    Science.gov (United States)

    Stepanenko, Svetlana; Engels, Bernd

    2009-09-01

This paper presents an application of the new nonlinear global optimization routine gradient only tabu search (GOTS) to conformational search problems. It is based on the tabu search strategy, which tries to determine the global minimum of a function by a steepest descent-modest ascent strategy. The refinement of the ranking procedure of the original GOTS method and the exploitation of simulated annealing elements are described, and the modifications of the GOTS algorithm necessary to adapt it to conformational searches are explained. The utility of GOTS for conformational search problems is tested using various examples.
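The steepest descent-modest ascent idea behind tabu search can be sketched in a few lines. The following is a generic, minimal illustration on a one-dimensional integer grid, not the GOTS implementation itself; the test function, neighborhood, and parameters are invented for the example.

```python
import math

def tabu_search(f, x0, neighbors, iters=200, tabu_len=10):
    """Minimal tabu search: always move to the best non-tabu neighbor,
    descending steeply when possible and accepting modest ascents when
    trapped, while a short memory forbids revisiting recent points."""
    x, best = x0, x0
    tabu = [x0]
    for _ in range(iters):
        cands = [n for n in neighbors(x) if n not in tabu]
        if not cands:
            break
        x = min(cands, key=f)           # best neighbor, even if uphill
        if f(x) < f(best):
            best = x
        tabu.append(x)
        if len(tabu) > tabu_len:
            tabu.pop(0)                 # forget the oldest move
    return best

# A rugged 1-D test function with several local minima.
f = lambda x: (x / 10.0) ** 2 + math.sin(x)
step = lambda x: [x - 1, x + 1]
print(tabu_search(f, 25, step))  # → -2, escaping the local minima on the way
```

The tabu list is what lets the walk climb out of the local minima between the start point and the global one; plain steepest descent from 25 would stall at the first basin.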

  6. A dynamic knowledge base based search engine

    Institute of Scientific and Technical Information of China (English)

    WANG Hui-jin; HU Hua; LI Qing

    2005-01-01

Search engines have greatly helped us to find the desired information on the Internet. Most search engines use a keyword matching technique. This paper discusses a Dynamic Knowledge Base based Search Engine (DKBSE), which can expand the user's query using the keywords' concepts or meanings. To do this, the DKBSE constructs and maintains its knowledge base dynamically from the system's search results and the user's feedback. The DKBSE expands the user's initial query using the knowledge base, and returns the retrieved information for the expanded query.
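The expand-then-search loop described above can be sketched as follows. The toy knowledge base, its contents, and the function names are illustrative assumptions, not the paper's actual data structures.

```python
# Illustrative knowledge base: each keyword maps to related concepts.
knowledge_base = {
    "car":  {"automobile", "vehicle"},
    "fast": {"quick", "rapid"},
}

def expand_query(terms, kb):
    """Expand each keyword with the concepts the knowledge base relates to it."""
    expanded = []
    for term in terms:
        expanded.append(term)
        expanded.extend(sorted(kb.get(term, ())))
    return expanded

def update_kb(kb, term, related, feedback_positive):
    """Grow the knowledge base dynamically from user feedback."""
    if feedback_positive:
        kb.setdefault(term, set()).add(related)

print(expand_query(["fast", "car"], knowledge_base))
# → ['fast', 'quick', 'rapid', 'car', 'automobile', 'vehicle']
```

The expanded term list would then be handed to an ordinary keyword-matching engine, so documents mentioning "automobile" can match a query for "car".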

  7. Tag Based Audio Search Engine

    Directory of Open Access Journals (Sweden)

    Parameswaran Vellachu

    2012-03-01

The volume of music databases is increasing day by day, and getting the required song matching the listener's choice is a big challenge, so it is hard to manage this huge quantity in terms of searching and filtering through the music database. It is surprising that the audio and music industry still relies on very simplistic metadata to describe music files. For searching audio resources, an efficient "Tag Based Audio Search Engine" is therefore necessary. The current research focuses on two aspects of musical databases: 1. semantic annotation generation using the tag-based approach; 2. an audio search engine with which the user can retrieve songs based on the user's choice. The proposed method can be used to annotate and retrieve songs based on the musical instruments used, the mood of the song, its theme, singer, music director, artist, film director, genre or style, and so on.

  8. Quantum searching application in search based software engineering

    Science.gov (United States)

    Wu, Nan; Song, FangMin; Li, Xiangdong

    2013-05-01

Search Based Software Engineering (SBSE) is widely used in software engineering for identifying optimal solutions. However, no polynomial-time algorithm is known for the traditional approaches to SBSE, which makes their cost very high. In this paper, we analyze and compare several quantum search algorithms that could be applied to SBSE: the quantum adiabatic evolution search algorithm, fixed-point quantum search (FPQS), quantum walks, and a rapid modified Grover quantum search method. Grover's algorithm is considered the best choice for large-scale unstructured data search, and in theory it is applicable to any search-space structure and any type of search problem.
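As a classical illustration of the quadratic speedup Grover's algorithm offers over unstructured scanning, the following simulates its amplitude dynamics on a plain vector (a sketch, not a quantum implementation; the problem size and marked index are arbitrary):

```python
import math

def grover_search(n_items, marked):
    """Classically simulate Grover's algorithm over n_items entries with one
    marked index, tracking the amplitude vector directly."""
    amp = [1.0 / math.sqrt(n_items)] * n_items      # uniform superposition
    iters = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iters):
        amp[marked] = -amp[marked]                  # oracle: flip marked sign
        mean = sum(amp) / n_items                   # diffusion: invert about mean
        amp = [2 * mean - a for a in amp]
    # Measuring would return the index with the largest probability.
    return max(range(n_items), key=lambda i: amp[i] ** 2), iters

index, iters = grover_search(1024, marked=421)
print(index, iters)  # finds 421 after ≈ (pi/4)·sqrt(1024) = 25 iterations
```

A classical scan needs on the order of 1024 probes in the worst case; the amplitude amplification above concentrates nearly all probability on the marked item after only about 25 oracle calls, which is the O(√N) behavior the abstract refers to.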

  9. Location-based Web Search

    Science.gov (United States)

    Ahlers, Dirk; Boll, Susanne

    In recent years, the relation of Web information to a physical location has gained much attention. However, Web content today often carries only an implicit relation to a location. In this chapter, we present a novel location-based search engine that automatically derives spatial context from unstructured Web resources and allows for location-based search: our focused crawler applies heuristics to crawl and analyze Web pages that have a high probability of carrying a spatial relation to a certain region or place; the location extractor identifies the actual location information from the pages; our indexer assigns a geo-context to the pages and makes them available for a later spatial Web search. We illustrate the usage of our spatial Web search for location-based applications that provide information not only right-in-time but also right-on-the-spot.

  10. A Quantitative Analysis of Published Skull Base Endoscopy Literature.

    Science.gov (United States)

    Hardesty, Douglas A; Ponce, Francisco A; Little, Andrew S; Nakaji, Peter

    2016-02-01

Objectives Skull base endoscopy allows for minimal access approaches to the sinonasal contents and cranial base. Advances in endoscopic technique and applications have been published rapidly in recent decades. Setting We utilized an Internet-based scholarly database (Web of Science, Thomson Reuters) to query broad-based phrases regarding the skull base endoscopy literature. Participants All skull base endoscopy publications. Main Outcome Measures Standard bibliometric outcomes. Results We identified 4,082 relevant skull base endoscopy English-language articles published between 1973 and 2014. The 50 top-cited publications (n = 51, due to articles with equal citation counts) ranged in citation count from 397 to 88. Most of the articles were clinical case series or technique descriptions. Most (96% [49/51]) were published in journals specific to either neurosurgery or otolaryngology. Conclusions A relatively small number of institutions and individuals have published a large share of the literature. Most of the publications consisted of case series and technical advances, with a lack of randomized trials. PMID:26949585

  11. Publishing Support for Small Print-Based Publishers: Options for ARL Libraries

    Science.gov (United States)

    Ivins, October; Luther, Judy

    2011-01-01

    This project was originally defined to explore the potential for ARL libraries to provide support to small, print-only publishers in order to ensure long-term digital access to their content. Research library publishing programs vary widely, from posting PDFs in an institutional repository to full-fledged publishing operations. During the life of…

  12. Search Based Software Project Management

    OpenAIRE

    Ren, J

    2013-01-01

    This thesis investigates the application of Search Based Software Engineering (SBSE) approach in the field of Software Project Management (SPM). With SBSE approaches, a pool of candidate solutions to an SPM problem is automatically generated and gradually evolved to be increasingly more desirable. The thesis is motivated by the observation from industrial practice that it is much more helpful to the project manager to provide insightful knowledge than exact solutions. We investigate whether S...

  13. A web based Publish-Subscribe framework for mobile computing

    Directory of Open Access Journals (Sweden)

    Cosmina Ivan

    2014-05-01

The growing popularity of mobile devices is permanently changing the Internet user's computing experience. Smartphones and tablets are beginning to replace the desktop as the primary means of interacting with information technology and web resources. While mobile devices facilitate consuming web resources in the form of web services, the growing demand for consuming services on mobile devices is introducing a complex ecosystem in the mobile environment. This research addresses the communication challenges involved in mobile distributed networks and proposes an event-driven communication approach for information dissemination. It investigates different communication techniques, such as polling, long-polling, and server-side push, as client-server interaction mechanisms, and the latest web standard, WebSocket, as the communication protocol within a publish/subscribe paradigm. Finally, this paper introduces and evaluates the proposed framework, a hybrid of WebSocket and event-based publish/subscribe for operating in mobile environments.
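The publish/subscribe pattern at the heart of such a framework can be sketched with a minimal in-memory broker. This is a generic illustration, not the paper's framework: names are invented, and a real mobile deployment would deliver the events over WebSocket connections rather than local callbacks.

```python
from collections import defaultdict

class Broker:
    """Minimal topic-based publish/subscribe broker: publishers and
    subscribers are decoupled and only share topic names."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber registered for the topic.
        for callback in self.subscribers[topic]:
            callback(event)

broker = Broker()
received = []
broker.subscribe("news/sports", received.append)
broker.publish("news/sports", {"headline": "match result"})
print(received)  # → [{'headline': 'match result'}]
```

The design advantage for mobile clients is that the server pushes events as they occur, instead of clients repeatedly polling and mostly receiving empty responses.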

  14. Publishing FAIR Data: An Exemplar Methodology Utilizing PHI-Base.

    Science.gov (United States)

    Rodríguez-Iglesias, Alejandro; Rodríguez-González, Alejandro; Irvine, Alistair G; Sesma, Ane; Urban, Martin; Hammond-Kosack, Kim E; Wilkinson, Mark D

    2016-01-01

    Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species vs. the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be "FAIR"-Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences-the Pathogen-Host Interaction Database (PHI-base)-to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings. PMID:27433158

  15. Publishing FAIR Data: an exemplar methodology utilizing PHI-base

    Directory of Open Access Journals (Sweden)

    Alejandro eRodríguez Iglesias

    2016-05-01

Pathogen-Host interaction data is core to our understanding of disease processes and their molecular/genetic bases. Facile access to such core data is particularly important for the plant sciences, where individual genetic and phenotypic observations have the added complexity of being dispersed over a wide diversity of plant species versus the relatively fewer host species of interest to biomedical researchers. Recently, an international initiative interested in scholarly data publishing proposed that all scientific data should be FAIR - Findable, Accessible, Interoperable, and Reusable. In this work, we describe the process of migrating a database of notable relevance to the plant sciences - the Pathogen-Host Interaction Database (PHI-base) - to a form that conforms to each of the FAIR Principles. We discuss the technical and architectural decisions, and the migration pathway, including observations of the difficulty and/or fidelity of each step. We examine how multiple FAIR principles can be addressed simultaneously through careful design decisions, including making data FAIR for both humans and machines with minimal duplication of effort. We note how FAIR data publishing involves more than data reformatting, requiring features beyond those exhibited by most life science Semantic Web or Linked Data resources. We explore the value-added by completing this FAIR data transformation, and then test the result through integrative questions that could not easily be asked over traditional Web-based data resources. Finally, we demonstrate the utility of providing explicit and reliable access to provenance information, which we argue enhances citation rates by encouraging and facilitating transparent scholarly reuse of these valuable data holdings.

  16. Developing a Comprehensive Search Strategy for Evidence Based Systematic Reviews

    Directory of Open Access Journals (Sweden)

    Sekhar Thadiparthi

    2008-03-01

Objective ‐ Within the health care field it becomes ever more critical to conduct systematic reviews of the research literature to guide programmatic activities, policy‐making decisions, and future research. Conducting systematic reviews requires a comprehensive search of behavioural, social, and policy research to identify relevant literature. As a result, the validity of the systematic review findings and recommendations is partly a function of the quality of the systematic search of the literature. Therefore, a carefully thought out and organized plan for developing and testing a comprehensive search strategy should be followed. This paper uses the HIV/AIDS prevention literature to provide a framework for developing, testing, and conducting a comprehensive search strategy looking beyond RCTs. Methods ‐ Comprehensive search strategies, including automated and manual search techniques, were developed, tested, and implemented to locate published and unpublished citations in order to build a database of HIV/AIDS and sexually transmitted diseases (STD) literature. The search incorporated various automated and manual search methods to decrease the chance of missing pertinent information. The automated search was implemented in MEDLINE, EMBASE, PsycINFO, Sociological Abstracts, and AIDSLINE. These searches utilized both index terms as well as keywords, including truncation, proximity, and phrases. The manual search methods included physically examining journals (hand searching), reference list checks, and researching key authors. Results ‐ Using automated and manual search components, the search strategy retrieved 17,493 articles about prevention of HIV/AIDS and STDs for the years 1988‐2005. The automated search found 91%, and the manual search contributed 9% of the articles reporting on HIV/AIDS or STD interventions with behavioural/biologic outcomes. Among the citations located with automated searches, 48% were found in only one database (20

  17. Distributed search engine architecture based on topic specific searches

    Science.gov (United States)

    Abudaqqa, Yousra; Patel, Ahmed

    2015-05-01

Search engines (SEs) abound, and the monumental growth in the number of users performing online searches on the Web is a contending issue today. For example, tens of billions of searches are performed every day, and they typically offer users many irrelevant results that are time consuming and costly to the user. Given this problem, it has become a herculean task for existing Web SEs to provide complete, relevant, and up-to-date responses to users' search queries. To overcome this problem, we developed the Distributed Search Engine Architecture (DSEA), a new means of smart information query and retrieval for the World Wide Web (WWW). In a DSEA, multiple autonomous search engines, owned by different organizations or individuals, cooperate and act as a single search engine. This paper reports the work in this research focusing on the development of a DSEA based on topic-specific specialized search engines. In a DSEA, the results for a specific query may be provided by any of the participating search engines, without the user being aware of which one. An important design goal of using topic-specific search engines in this research is to build systems that can be used effectively by large numbers of users simultaneously. Efficient and effective usage with good response times is important, because it involves leveraging the vast amount of searched data from the World Wide Web by categorizing it into condensed, focused, topic-specific results that meet the user's queries. The design model and the DSEA adopt a Service Directory (SD) to route queries towards topic-specific document-hosting SEs. It displays performance consistent with the requirements of the users, and the evaluation of the model returns a very high priority score associated with each keyword frequency.
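The Service Directory routing idea can be sketched as a lookup that sends each query to the topic-specific engine whose vocabulary best matches it. The engine names and topic keyword sets below are invented for illustration; a real SD would hold richer topic descriptions.

```python
# Hypothetical service directory: engine name → topic vocabulary.
directory = {
    "medline-se": {"disease", "drug", "gene", "therapy"},
    "law-se":     {"court", "statute", "contract", "tort"},
    "sports-se":  {"match", "league", "score", "player"},
}

def route(query, directory):
    """Return the engine whose topic vocabulary overlaps the query most."""
    words = set(query.lower().split())
    return max(directory, key=lambda se: len(directory[se] & words))

print(route("gene therapy for rare disease", directory))  # → medline-se
```

Routing at the directory level is what keeps each participating engine's index small and focused, which is how the architecture scales to many simultaneous users.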

  18. SCIBS (Subset Count Index Based Search) indexing algorithm to reduce the time complexity of search algorithms

    OpenAIRE

    Dr. R.Manicka chezian; Nishad Pm

    2012-01-01

There are several algorithms used for search, such as binary search, linear search, interpolation search, and ternary search. Search algorithms locate the position of an item in a sorted list, but the time taken for the search can be large. A search algorithm initially sets the first and last indices for the search, and this directly determines the time complexity. This paper proposes a new prefix-search indexing algorithm called Subset Count Index Based Search (SCIBS). This algorithm achieved th...
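Since the abstract is truncated, the baseline it builds on can still be illustrated: index-bounded binary search versus a linear scan, with comparison counts made explicit. This is generic illustrative code, not the SCIBS algorithm itself.

```python
def linear_search(arr, target):
    """Scan left to right; O(n) comparisons."""
    for i, v in enumerate(arr):
        if v == target:
            return i, i + 1              # (index, comparisons used)
    return -1, len(arr)

def binary_search(arr, target):
    """Set first/last indices and halve the interval each step;
    O(log n) comparisons on sorted data."""
    lo, hi, comps = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comps += 1
        if arr[mid] == target:
            return mid, comps
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comps

data = list(range(0, 2000, 2))           # 1000 sorted even numbers
print(linear_search(data, 1998))         # → (999, 1000)
print(binary_search(data, 1998))         # → (999, 10)
```

An indexing scheme like SCIBS aims to narrow the initial first/last bounds before such a search even starts, which is where the further time savings would come from.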

  19. Efficiency of tabu-search-based conformational search algorithms.

    Science.gov (United States)

    Grebner, Christoph; Becker, Johannes; Stepanenko, Svetlana; Engels, Bernd

    2011-07-30

    Efficient conformational search or sampling approaches play an integral role in molecular modeling, leading to a strong demand for even faster and more reliable conformer search algorithms. This article compares the efficiency of a molecular dynamics method, a simulated annealing method, and the basin hopping (BH) approach (which are widely used in this field) with a previously suggested tabu-search-based approach called gradient only tabu search (GOTS). The study emphasizes the success of the GOTS procedure and, more importantly, shows that an approach which combines BH and GOTS outperforms the single methods in efficiency and speed. We also show that ring structures built by a hydrogen bond are useful as starting points for conformational search investigations of peptides and organic ligands with biological activities, especially in structures that contain multiple rings. PMID:21541959

  20. Mathematical programming solver based on local search

    CERN Document Server

    Gardi, Frédéric; Darlay, Julien; Estellon, Bertrand; Megel, Romain

    2014-01-01

This book covers local search for combinatorial optimization and its extension to mixed-variable optimization. Although not yet understood from the theoretical point of view, local search is the paradigm of choice for tackling large-scale real-life optimization problems. Today's end-users demand interactivity with decision support systems. For optimization software, this means obtaining good-quality solutions quickly. Fast iterative improvement methods, like local search, are suited to satisfying such needs. Here the authors show local search in a new light, in particular presenting a new kind of mathematical programming solver, namely LocalSolver, based on neighborhood search. First, an iconoclastic methodology is presented to design and engineer local search algorithms. The authors' concern with industrializing local search approaches is of particular interest for practitioners. This methodology is applied to solve two industrial problems with high economic stakes. Software based on local search induces ex...

  1. Proposal of Tabu Search Algorithm Based on Cuckoo Search

    Directory of Open Access Journals (Sweden)

    Ahmed T. Sadiq Al-Obaidi

    2014-03-01

This paper presents a new version of Tabu Search (TS) based on Cuckoo Search (CS), called Tabu-Cuckoo Search (TCS), to reduce the effect of the problems of TS. The proposed algorithm provides more diversity among the candidate solutions of TS. Two case studies have been solved using the proposed algorithm: the 4-Color Map and the Traveling Salesman Problem. The proposed algorithm gives good results compared with the original: the iteration counts are lower, and local-minimum or non-optimal solutions are fewer.

  2. Proposal of Tabu Search Algorithm Based on Cuckoo Search

    OpenAIRE

    Ahmed T.Sadiq Al-Obaidi; Ahmed Badre Al-Deen Majeed

    2014-01-01

This paper presents a new version of Tabu Search (TS) based on Cuckoo Search (CS), called Tabu-Cuckoo Search (TCS), to reduce the effect of the problems of TS. The proposed algorithm provides more diversity among the candidate solutions of TS. Two case studies have been solved using the proposed algorithm: the 4-Color Map and the Traveling Salesman Problem. The proposed algorithm gives good results compared with the original: the iteration counts are lower, and local-minimum or non-optimal solutions are fewer.

  3. ArraySearch: A Web-Based Genomic Search Engine

    OpenAIRE

    Wilson, Tyler J; Ge, Steven X

    2012-01-01

    Recent advances in microarray technologies have resulted in a flood of genomics data. This large body of accumulated data could be used as a knowledge base to help researchers interpret new experimental data. ArraySearch finds statistical correlations between newly observed gene expression profiles and the huge source of well-characterized expression signatures deposited in the public domain. A search query of a list of genes will return experiments on which the genes are significantly up- or...

  4. Research libraries – new approaches for library-based publishing

    OpenAIRE

    Ayris, P.

    2014-01-01

    This paper describes the UCL model (University College London) for 21st-century university publishing. This is centred in the University Library, not just as the curator and indexer of Knowledge, but as the producer of Knowledge. The paper sets the establishment of UCL Press in the context of Open Access developments in the UK and examines the traditional role of the Library in the research process. It posits a role for the University Library as publisher in at least three areas: research mon...

  5. ArraySearch: A Web-Based Genomic Search Engine

    Directory of Open Access Journals (Sweden)

    Tyler J. Wilson

    2012-01-01

Recent advances in microarray technologies have resulted in a flood of genomics data. This large body of accumulated data could be used as a knowledge base to help researchers interpret new experimental data. ArraySearch finds statistical correlations between newly observed gene expression profiles and the huge source of well-characterized expression signatures deposited in the public domain. A search query of a list of genes will return experiments on which the genes are significantly up- or downregulated collectively. Searches can also be conducted using gene expression signatures from new experiments. This resource will empower biological researchers with a statistical method to explore expression data from their own research by comparing it with expression signatures from a large public archive.
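The core matching step (correlating a new expression profile against archived signatures) can be sketched with plain Pearson correlation. The toy profiles and the choice of Pearson are illustrative assumptions; the paper's actual statistics may differ.

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression profiles of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy archive: the query profile correlates strongly with experiment A
# and anti-correlates with experiment B.
query = [2.1, -1.3, 0.4, 3.0]
archive = {
    "experiment_A": [1.9, -1.1, 0.2, 2.8],
    "experiment_B": [-2.0, 1.5, 0.1, -2.9],
}
ranked = sorted(archive, key=lambda e: pearson(query, archive[e]), reverse=True)
print(ranked[0])  # → experiment_A
```

Ranking the whole public archive by such a similarity score is what turns a pile of deposited experiments into a searchable knowledge base.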

  6. Smart Agent Learning based Hotel Search System- Android Environment

    Directory of Open Access Journals (Sweden)

    Wayne Lawrence

    2012-08-01

The process of finding the finest hotel in a central location is time consuming and overwhelming, involves information overload, and in some cases poses a security risk to the client. Over time, with competition in the market among travel agents and hotels, the process of hotel search and booking has improved with advances in technology. Various web sites allow a user to select a destination from a pull-down list along with several categories to suit one's preferences. Some of the more advanced web sites allow for a search of the destination via a map, for example hotelguidge.com and jamaica.hotels.hu. Recently, a good amount of work has been carried out on the use of intelligent agents for hotel search on J2ME-based mobile handsets, which still has some weaknesses. The proposed system uses smart software agents that overcome the weaknesses of the previous system by collaborating among themselves and searching Google Maps based on criteria selected by the user, returning results to the client that are precise and best suit the user's requirements. In addition, the agents possess the capability of learning hotel searches based on past search experience. The booking of hotels involving cryptography has not been incorporated in this paper and has been published elsewhere. The system will be facilitated on an Android 2.2-enabled mobile phone using the JADE-LEAP agent development kit.

  7. Systematic search and evaluation of published scientific research:implications for schizophrenia research

    OpenAIRE

    Mäkinen, J.

    2010-01-01

    Abstract The aim of this doctoral thesis is to present methods of search, evaluation and analysis of a specific research domain (schizophrenia) from four perspectives: bibliometric analysis of 1) Finnish doctoral theses and 2) Finnish journal articles on schizophrenia, and meta-analysis to determine the prevalence of 3) alcohol use disorders and 4) cannabis use disorders in schizophrenia. Over the years, the number of Finnish articles on schizophrenia has increased, as well as the amou...

  8. Cost Based Satisficing Search Considered Harmful

    CERN Document Server

    Cushing, William; Kambhampati, Subbarao

    2011-01-01

Recently, several researchers have found that cost-based satisficing search with A* often runs into problems. Although some "work arounds" have been proposed to ameliorate the problem, there has not been any concerted effort to pinpoint its origin. In this paper, we argue that the origins can be traced back to the wide variance in action costs that is observed in most planning domains. We show that such cost variance misleads A* search, and that this is no trifling detail or accidental phenomenon, but a systemic weakness of the very concept of "cost-based evaluation functions + systematic search + combinatorial graphs". We show that satisficing search with size-based evaluation functions is largely immune to this problem.
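The cost-variance effect the abstract describes can be demonstrated on a toy graph: a long chain of cheap actions next to one expensive shortcut. With a cost-based evaluation function the search crawls the cheap chain; with a size-based one (every action counts as 1) it reaches the goal almost immediately. This sketch uses a zero heuristic for simplicity (so it is best-first on g alone, not full A*), and the graph is invented.

```python
import heapq
import itertools

def best_first(start, goal, succ, weight):
    """Best-first search ordered by an evaluation function built from
    `weight`; returns (goal value, number of nodes expanded)."""
    counter = itertools.count()       # tie-breaker so nodes never compare
    frontier, seen, expanded = [(0, next(counter), start)], set(), 0
    while frontier:
        g, _, node = heapq.heappop(frontier)
        if node in seen:
            continue
        seen.add(node)
        expanded += 1
        if node == goal:
            return g, expanded
        for nxt, cost in succ(node):
            if nxt not in seen:
                heapq.heappush(frontier, (g + weight(cost), next(counter), nxt))
    return None, expanded

# Chain of 100 cheap actions (cost 1 each) alongside one expensive
# shortcut (cost 100) that reaches the goal in a single step.
def succ(n):
    edges = {0: [(1, 1), ("goal", 100)]}
    edges.update({i: [(i + 1, 1)] for i in range(1, 99)})
    edges[99] = [("goal", 1)]
    return edges.get(n, [])

print(best_first(0, "goal", succ, weight=lambda c: c))  # cost-based → (100, 101)
print(best_first(0, "goal", succ, weight=lambda c: 1))  # size-based → (1, 3)
```

Both runs find an optimal answer under their own metric, but the cost-based run expands every node on the cheap chain first, which is the systemic slowdown the paper attributes to high cost variance.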

  9. Model-based Tomographic Reconstruction Literature Search

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Lehman, S K

    2005-11-30

    In the process of preparing a proposal for internal research funding, a literature search was conducted on the subject of model-based tomographic reconstruction (MBTR). The purpose of the search was to ensure that the proposed research would not replicate any previous work. We found that the overwhelming majority of work on MBTR which used parameterized models of the object was theoretical in nature. Only three researchers had applied the technique to actual data. In this note, we summarize the findings of the literature search.

  10. Water pollution analysis and detection. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-08-01

    The bibliography contains citations concerning water pollution analysis, detection, monitoring, and regulation. Citations review online systems, bioassay monitoring, laser-based detection, sensor and biosensor systems, metabolic analyzers, and microsystem techniques. References cover fiber-optic portable detection instruments and rapid detection of toxicants in drinking water. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  11. Search Result Diversification Based on Query Facets

    Institute of Scientific and Technical Information of China (English)

    胡莎; 窦志成; 王晓捷; 继荣

    2015-01-01

    In search engines, different users may search for different information by issuing the same query. To satisfy more users with limited search results, search result diversification re-ranks the results to cover as many user intents as possible. Most existing intent-aware diversification algorithms recognize user intents as subtopics, each of which is usually a word, a phrase, or a piece of description. In this paper, we leverage query facets to understand user intents in diversification, where each facet contains a group of words or phrases that explain an underlying intent of a query. We generate subtopics based on query facets and propose faceted diversification approaches. Experimental results on the public TREC 2009 dataset show that our faceted approaches outperform state-of-the-art diversification models.
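One simple way to realize facet-aware diversification is a greedy re-ranking that, at each step, picks the result covering the most facets not yet represented. The documents and facet sets below are toy data, and this greedy coverage rule is a generic sketch, not the paper's proposed models.

```python
def diversify(results, facets_of, k):
    """Greedy re-ranking that maximizes coverage of distinct query facets."""
    selected, covered = [], set()
    pool = list(results)
    while pool and len(selected) < k:
        # Pick the document adding the most not-yet-covered facets.
        doc = max(pool, key=lambda d: len(facets_of[d] - covered))
        selected.append(doc)
        covered |= facets_of[doc]
        pool.remove(doc)
    return selected

# Toy facets for four results of one query.
facets_of = {
    "d1": {"price", "review"},
    "d2": {"price"},
    "d3": {"spec", "review"},
    "d4": {"history"},
}
print(diversify(["d1", "d2", "d3", "d4"], facets_of, 3))
# → ['d1', 'd3', 'd4']
```

Note that "d2" is skipped even though it ranked second originally: its only facet is already covered by "d1", so showing it would not satisfy any additional user intent.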

  12. Modeling and Implementing Ontology-Based Publish/Subscribe Using Semantic Web Technologies

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk; Hansen, Klaus Marius

    2010-01-01

    issue in implementing publish/subscribe-based systems is for entities to agree on a common vocabulary. To support this, we present a conceptualization and a design of a publish/subscribe system that combine and generalize many of these paradigms by using Semantic Web technology. In doing so, we extend...... previous work on ontology-based publish/subscribe by describing the semantics of publish/subscribe using operations on ontologies and by using common Semantic Web technology. Furthermore, we present an implementation in the context of a middleware we are designing and show that this implementation has...

  13. Partial evolution based local adiabatic quantum search

    International Nuclear Information System (INIS)

    Recently, Zhang and Lu provided a quantum search algorithm based on partial adiabatic evolution, which beats the time bound of local adiabatic search when the number of marked items in the unsorted database is larger than one. Later, they found that the above two adiabatic search algorithms have the same time complexity when there is only one marked item in the database. In the present paper, following the idea of Roland and Cerf [Roland J and Cerf N J 2002 Phys. Rev. A 65 042308], we show that if a local adiabatic evolution is performed within the small symmetric evolution interval defined by Zhang et al., instead of the original "global" one, the resulting algorithm exhibits slightly better performance, although the two become progressively equivalent as M increases. In addition, a proof of the optimality of this partial-evolution-based local adiabatic search for M = 1 is presented. Two other special cases of the adiabatic algorithm, obtained by appropriately tuning the evolution interval of the partial-adiabatic-evolution-based quantum search and found to exhibit the same phenomenon, are also discussed. (general)
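For reference, the local adiabatic schedule of Roland and Cerf cited in the abstract can be summarized as follows. This is the standard textbook form for an N-item database with M marked items, not the paper's modified partial-evolution interval:

```latex
% Local adiabatic evolution (Roland & Cerf 2002), N items, M marked.
% H(s) interpolates between initial and final Hamiltonians; the schedule
% slows down where the instantaneous gap g(s) is small.
H(s) = (1-s)\,H_0 + s\,H_1, \qquad
g(s) = \sqrt{1 - 4\,\tfrac{N-M}{N}\,s(1-s)},
\frac{\mathrm{d}s}{\mathrm{d}t} = \varepsilon\, g^2(s)
\quad\Longrightarrow\quad
T = \int_0^1 \frac{\mathrm{d}s}{\varepsilon\, g^2(s)}
  = O\!\left(\sqrt{N/M}\right)
```

The gap is minimal at s = 1/2, where g = sqrt(M/N); spending most of the evolution time near this point is what recovers the Grover-like sqrt(N/M) scaling.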

  14. Space based microlensing planet searches

    CERN Document Server

    Beaulieu, J P; Batista, V

    2013-01-01

    The discovery of extra-solar planets is arguably the most exciting development in astrophysics during the past 15 years, rivalled only by the detection of dark energy. Two projects unite the communities of exoplanet scientists and cosmologists: the proposed ESA M class mission EUCLID and the large space mission WFIRST, top ranked by the Astronomy 2010 Decadal Survey report. The latter states that: "Space-based microlensing is the optimal approach to providing a true statistical census of planetary systems in the Galaxy, over a range of likely semi-major axes". They also add: "This census, combined with that made by the Kepler mission, will determine how common Earth-like planets are over a wide range of orbital parameters". We will present a status report of the results obtained by microlensing on exoplanets and the new objectives of the next generation of ground based wide field imager networks. We will finally discuss the fantastic prospects offered by space based microlensing on the horizon of 2020-2025.

  15. Space based microlensing planet searches

    Directory of Open Access Journals (Sweden)

    Tisserand Patrick

    2013-04-01

    The discovery of extra-solar planets is arguably the most exciting development in astrophysics during the past 15 years, rivalled only by the detection of dark energy. Two projects unite the communities of exoplanet scientists and cosmologists: the proposed ESA M class mission EUCLID and the large space mission WFIRST, top ranked by the Astronomy 2010 Decadal Survey report. The latter states that: “Space-based microlensing is the optimal approach to providing a true statistical census of planetary systems in the Galaxy, over a range of likely semi-major axes”. They also add: “This census, combined with that made by the Kepler mission, will determine how common Earth-like planets are over a wide range of orbital parameters”. We will present a status report of the results obtained by microlensing on exoplanets and the new objectives of the next generation of ground based wide field imager networks. We will finally discuss the fantastic prospects offered by space based microlensing on the horizon of 2020–2025.

  16. Dyniqx: a novel meta-search engine for metadata based cross search

    OpenAIRE

    Zhu, Jianhan; Song, Dawei; Eisenstadt, Marc; Barladeanu, Cristi; Rüger, Stefan

    2008-01-01

    The effect of metadata in collection fusion has not been sufficiently studied. In response to this, we present a novel meta-search engine called Dyniqx for metadata based cross search. Dyniqx exploits the availability of metadata in academic search services such as PubMed and Google Scholar etc for fusing search results from heterogeneous search engines. In addition, metadata from these search engines are used for generating dynamic query controls such as sliders and tick boxes etc which are ...

  17. Ontology-Based Search of Genomic Metadata.

    Science.gov (United States)

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, searching for relevant datasets for knowledge discovery is only weakly supported: the metadata describing ENCODE datasets are quite simple and incomplete, and are not grounded in a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata search approach that uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of the found datasets to the biologists' queries. PMID:26529777

  18. New similarity search based glioma grading

    International Nuclear Information System (INIS)

    MR-based differentiation between low- and high-grade gliomas is predominantly based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences such as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information from CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (i.e., a probability-based classifier) was defined and used for grading. Biopsy was used as the gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone. (orig.)

  19. New similarity search based glioma grading

    Energy Technology Data Exchange (ETDEWEB)

    Haegler, Katrin; Brueckmann, Hartmut; Linn, Jennifer [Ludwig-Maximilians-University of Munich, Department of Neuroradiology, Munich (Germany); Wiesmann, Martin; Freiherr, Jessica [RWTH Aachen University, Department of Neuroradiology, Aachen (Germany); Boehm, Christian [Ludwig-Maximilians-University of Munich, Department of Computer Science, Munich (Germany); Schnell, Oliver; Tonn, Joerg-Christian [Ludwig-Maximilians-University of Munich, Department of Neurosurgery, Munich (Germany)

    2012-08-15

    MR-based differentiation between low- and high-grade gliomas is predominantly based on contrast-enhanced T1-weighted images (CE-T1w). However, functional MR sequences such as perfusion- and diffusion-weighted sequences can provide additional information on tumor grade. Here, we tested the potential of a recently developed similarity search based method that integrates information from CE-T1w and perfusion maps for non-invasive MR-based glioma grading. We prospectively included 37 untreated glioma patients (23 grade I/II, 14 grade III gliomas), in whom 3T MRI with FLAIR, pre- and post-contrast T1-weighted, and perfusion sequences was performed. Cerebral blood volume, cerebral blood flow, and mean transit time maps as well as CE-T1w images were used as input for the similarity search. Data sets were preprocessed and converted to four-dimensional Gaussian Mixture Models that considered correlations between the different MR sequences. For each patient, a so-called tumor feature vector (i.e., a probability-based classifier) was defined and used for grading. Biopsy was used as the gold standard, and similarity based grading was compared to grading solely based on CE-T1w. Accuracy, sensitivity, and specificity of pure CE-T1w based glioma grading were 64.9%, 78.6%, and 56.5%, respectively. Similarity search based tumor grading allowed differentiation between low-grade (I or II) and high-grade (III) gliomas with an accuracy, sensitivity, and specificity of 83.8%, 78.6%, and 87.0%. Our findings indicate that integration of perfusion parameters and CE-T1w information in a semi-automatic similarity search based analysis improves the potential of MR-based glioma grading compared to CE-T1w data alone. (orig.)

  20. Chemical Information in Scirus and BASE (Bielefeld Academic Search Engine)

    Science.gov (United States)

    Bendig, Regina B.

    2009-01-01

    The author sought to determine to what extent the two search engines, Scirus and BASE (Bielefeld Academic Search Engines), would be useful to first-year university students as the first point of searching for chemical information. Five topics were searched and the first ten records of each search result were evaluated with regard to the type of…

  1. A Hybrid Metaheuristic for Biclustering Based on Scatter Search and Genetic Algorithms

    Science.gov (United States)

    Nepomuceno, Juan A.; Troncoso, Alicia; Aguilar–Ruiz, Jesús S.

    In this paper a hybrid metaheuristic for biclustering based on Scatter Search and Genetic Algorithms is presented. A general Scatter Search scheme is used to obtain high-quality biclusters, while the generation of the initial population and the combination method are based on Genetic Algorithms. Experimental results from the yeast cell cycle and human B-cell lymphoma datasets are reported. Finally, the performance of the proposed hybrid algorithm is compared with a recently published genetic algorithm.

  2. Ontology-based prior art search

    OpenAIRE

    Bondarenok, A.

    2003-01-01

    This article describes a method of prior art document search based on semantic similarities of a user query and indexed documents. The ontology-based technology of knowledge extraction and representation is used to build document and query images, which are compared using the semantic similarity technique. Documents are ranked according to their semantic similarities to the query, and the top results are shown to the user.

  3. Location-based Services using Image Search

    DEFF Research Database (Denmark)

    Vertongen, Pieter-Paulus; Hansen, Dan Witzner

    2008-01-01

    Recent developments in image search have made it sufficiently efficient to be used in real-time applications. GPS has become a popular navigation tool. While GPS information provides reasonably good accuracy, it is not always present in all handheld devices, nor is it accurate in all situations, for example in urban environments. We propose a system that provides location-based services using image searches without requiring GPS. The goal of this system is to assist tourists in cities with additional information, using their mobile phones and built-in cameras. Based upon the results of the image search engine and the image-location knowledge in the database, the location of the query image is determined and associated data can be presented to the user.

  4. Community Colleges, school data base attribute, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Community Colleges dataset was produced all or in part from Published Reports/Deeds information as of 2006. It is described as 'school data base attribute'....

  5. Building Permits, permits plus data base, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Building Permits dataset was produced all or in part from Published Reports/Deeds information as of 2006. It is described as 'permits plus data base'. Data by...

  6. Sequential search strategies based on kriging

    OpenAIRE

    Vazquez, Emmanuel

    2015-01-01

    This manuscript has been written to obtain the French Habilitation à Diriger des Recherches. It is not intended to provide new academic results nor should it be considered as a reference textbook. Instead, this manuscript is a brief (and incomplete) summary of my teaching and research activities. You will find in this manuscript a compilation of some articles in which I had a significant contribution, together with some introductory paragraphs about sequential search strategies based on krigi...

  7. Strategies for searching and managing evidence-based practice resources.

    Science.gov (United States)

    Robb, Meigan; Shellenbarger, Teresa

    2014-10-01

    Evidence-based nursing practice requires the use of effective search strategies to locate relevant resources to guide practice change. Continuing education and staff development professionals can assist nurses to conduct effective literature searches. This article provides suggestions for strategies to aid in identifying search terms. Strategies also are recommended for refining searches by using controlled vocabulary, truncation, Boolean operators, PICOT (Population/Patient Problem, Intervention, Comparison, Outcome, Time) searching, and search limits. Suggestions for methods of managing resources also are identified. Using these approaches will assist in more effective literature searches and may help evidence-based practice decisions. PMID:25221988
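The refinement strategies named above (truncation, Boolean operators, PICOT framing) can be illustrated by programmatically composing a search string. The syntax below is PubMed-style and the terms are examples, not a prescribed formula:

```python
# Compose a Boolean search string from PICOT components (illustrative;
# the terms and grouping are examples, not a validated search strategy).

def build_query(population, intervention, comparison, outcome):
    def group(terms):
        # OR within a concept, AND between concepts
        return "(" + " OR ".join(terms) + ")"
    return " AND ".join(group(t) for t in (population, intervention, comparison, outcome))

query = build_query(
    population=["nurs*", '"nursing staff"'],  # truncation: nurs* matches nurse, nurses, nursing
    intervention=['"hand hygiene"'],
    comparison=['"standard care"'],
    outcome=["infection rate*"],
)
# query == '(nurs* OR "nursing staff") AND ("hand hygiene") AND ("standard care") AND (infection rate*)'
```

Time (the "T" in PICOT) and database-specific limits (language, publication years) are usually applied as interface filters rather than query terms, so they are omitted here.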

  8. Iris Localization Based on Edge Searching Strategies

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Han Jiuqiang

    2005-01-01

    An iris localization scheme based on edge searching strategies is presented. First, the Laplacian-of-Gaussian (LoG) edge detection operator is applied to the original iris image to search for its inner boundary. Then, a circle detection operator, invariant to translation, rotation, and scale, is introduced to locate the outer boundary and its center. Finally, a curve-fitting method is developed for eyelid localization. The performance of the proposed method is tested on 756 iris images from 108 different classes in the CASIA Iris Database and compared with the conventional Hough transform method. The experimental results show that, without loss of localization accuracy, the proposed iris localization algorithm is markedly faster than the Hough transform.

  9. Skip List Data Structure Based New Searching Algorithm and Its Applications: Priority Search

    Directory of Open Access Journals (Sweden)

    Mustafa Aksu

    2016-02-01

    Our new algorithm, priority search, was created with the help of the skip list data structure and its algorithms. A skip list consists of linked lists formed in layers, linked in a pyramidal way; the time complexity of searching an N-element skip list is O(lg N). The newly developed searching algorithm is based on the hit count of each searched datum: if a datum has a greater hit count, it is promoted to an upper level of the skip list. That is, the most frequently searched data are located in the upper levels of the structure and rarely searched data in the lower levels. The pyramidal structure is thus constructed from the hit counts, in other words, the search frequency of each datum. As a result, the time complexity of searching approaches Θ(1) for an N-record data set. In this paper, searching algorithms such as linear search, binary search, and priority search were implemented and the obtained results compared. The results demonstrated that the priority search algorithm outperformed binary search.
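The frequency-promotion principle behind priority search can be sketched with a self-organizing list: every successful search increments a hit counter, and items bubble ahead of items with fewer hits, so hot keys are examined first. This is a simplified illustration only; the paper uses a multi-level skip list, not a flat list.

```python
# Simplified sketch of hit-count promotion (not the paper's skip list):
# frequently searched keys migrate toward the front of the structure.

class PrioritySearchList:
    def __init__(self, keys):
        self._items = [[k, 0] for k in keys]  # [key, hit_count]

    def search(self, key):
        for i, (k, hits) in enumerate(self._items):
            if k == key:
                self._items[i][1] += 1
                # bubble the item forward past entries with fewer hits
                while i > 0 and self._items[i - 1][1] < self._items[i][1]:
                    self._items[i - 1], self._items[i] = self._items[i], self._items[i - 1]
                    i -= 1
                return True
        return False

    def order(self):
        return [k for k, _ in self._items]

psl = PrioritySearchList(["a", "b", "c"])
psl.search("c"); psl.search("c"); psl.search("b")
# psl.order() == ["c", "b", "a"]: "c" (2 hits) is promoted ahead of "b" (1 hit)
```

In the real skip list version, promotion moves a node into higher express-lane layers instead of a flat reordering, which is what yields near-constant lookup for the hottest keys.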

  10. SEMANTIC BASED MULTIPLE WEB SEARCH ENGINE

    Directory of Open Access Journals (Sweden)

    MS.S.LATHA SHANMUGAVADIVU,

    2010-08-01

    With the tremendous growth of information available to end users through the Web, search engines come to play an ever more critical role. Nevertheless, because of their general-purpose approach, it is not uncommon for the result sets they return to contain many useless pages. The next-generation Web architecture, represented by the Semantic Web, provides a layered architecture that may allow this limitation to be overcome. Several search engines have been proposed that increase information retrieval accuracy by exploiting a key feature of Semantic Web resources, namely relations. To make the Semantic Web work, well-structured data and rules are necessary for agents to roam the Web [2]. XML and RDF are two important technologies: with XML we can create our own structures without indicating what they mean, while RDF uses sets of triples to express basic concepts [2]. DAML is an extension of XML and RDF. The aim of this project is to develop a search engine based on ontology matching within the Semantic Web. It uses data in Semantic Web form such as DAML or RDF. When the user inputs a query, the program accepts it and transfers it to a machine learning agent. The agent then measures the similarity between different ontologies and feeds the matched items back to the user.

  11. A Feedback-Based Web Search Engine

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei-feng; XU Bao-wen; ZHOU Xiao-yu

    2004-01-01

    Web search engines are very useful information service tools on the Internet. Current web search engines produce search results relating the search terms to the information they have collected. Since users' selections among the search results cannot affect future ones, the results may not cover most people's interests. In this paper, feedback information produced by users' access lists is represented by rough sets and can reconstruct the query string and influence the search results. The search engine can thus provide self-adaptability.

  12. Differential Search Algorithm Based Edge Detection

    Science.gov (United States)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for the extraction of edge information using the Differential Search Optimization Algorithm. The proposed method is based on a new heuristic image-thresholding method for edge detection. Its success has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving the mentioned problems.

  13. Math Search for the Masses: Multimodal Search Interfaces and Appearance-Based Retrieval

    OpenAIRE

    Zanibbi, Richard; Orakwue, Awelemdy

    2015-01-01

    We summarize math search engines and search interfaces produced by the Document and Pattern Recognition Lab in recent years, in particular the min math search interface and the Tangent search engine. Source code for both systems is publicly available. "The Masses" refers to our emphasis on creating systems for mathematical non-experts, who may be looking to define unfamiliar notation, or to browse documents based on the visual appearance of formulae rather than their mathematical semantics.

  14. Decomposition During Search for Propagation-Based Constraint Solvers

    OpenAIRE

    Mann, Martin; Tack, Guido; Will, Sebastian

    2007-01-01

    We describe decomposition during search (DDS), an integration of And/Or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have impl...

  15. Search-based software test data generation using evolutionary computation

    OpenAIRE

    Maragathavalli, P.

    2011-01-01

    Search-Based Software Engineering has been utilized for a number of software engineering activities. One area where it has seen much application is test data generation. Evolutionary testing designates the use of metaheuristic search methods for test case generation. The search space is the input domain of the test object, with each individual, or potential solution, being an encoded set of inputs to that test object. The fitness function is tailored to find...

  16. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    Science.gov (United States)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, json, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, sub-setting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.
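A keyword-, temporal-, and spatially-bounded search of the kind described above boils down to constructing a parameterized request URL. The base URL and parameter names below are illustrative placeholders, not the authoritative ECHO/OpenSearch parameter set; a real client should read the service's OpenSearch description document for the actual URL template:

```python
# Building a bounded search URL (illustrative; endpoint and parameter
# names are hypothetical stand-ins for the service's real template).
from urllib.parse import urlencode

BASE = "https://example.gov/opensearch/granules"  # hypothetical endpoint

params = {
    "keyword": "sea surface temperature",
    "startTime": "2012-01-01T00:00:00Z",         # temporal constraint
    "endTime": "2012-12-31T23:59:59Z",
    "boundingBox": "-180,-90,180,90",            # spatial: west,south,east,north
    "format": "json",
}
url = BASE + "?" + urlencode(params)
```

The same parameter dictionary can then drive paging (adding offset/count-style parameters) to walk the full result set in an automated workflow.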

  17. Mashup Based Content Search Engine for Mobile Devices

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-05-01

    A mashup-based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo!JAPAN Web Search API, the Yahoo!JAPAN Image Search API, the YouTube Data API, and the Amazon Product Advertising API. The retrieved results are merged and linked to each other, so that once an e-learning content is retrieved, the different types of content can be cross-referenced. The implemented search engine was evaluated with 20 students. The results show its usefulness and effectiveness for e-learning content searches across a variety of content types: images, documents, PDF files, and moving pictures.

  18. The OGC Publish/Subscribe specification in the context of sensor-based applications

    Science.gov (United States)

    Bigagli, Lorenzo

    2014-05-01

    The Open Geospatial Consortium Publish/Subscribe Standards Working Group (in short, OGC PubSub SWG) was chartered in 2010 to specify a mechanism to support publish/subscribe requirements across OGC service interfaces and data types (coverage, feature, etc.) The Publish/Subscribe Interface Standard 1.0 - Core (13-131) defines an abstract description of the basic mandatory functionality, along with several optional, extended capabilities. The Core is independent of the underlying binding, for which two extensions are currently considered in the PubSub SWG scope: a SOAP binding and RESTful binding. Two primary parties characterize the publish/subscribe model: a Publisher, which is publishing information, and a Subscriber, which expresses an interest in all or part of the published information. In many cases, the Subscriber and the entity to which data is to be delivered (the Receiver) are one and the same. However, they are distinguished in PubSub to allow for these roles to be segregated. This is useful, for example, in event-based systems, where system entities primarily react to incoming information and may emit new information to other interested entities. The Publish/Subscribe model is distinguished from the typical request/response model, where a client makes a request and the server responds with either the requested information or a failure. This provides relatively immediate feedback, but can be insufficient in cases where the client is waiting for a specific event (such as data arrival, server changes, or data updates). In fact, while waiting for an event, a client must repeatedly request the desired information (polling). This has undesirable side effects: if a client polls frequently this can increase server load and network traffic, and if a client polls infrequently it may not receive a message when it is needed. 
These issues are accentuated when event occurrences are unpredictable, or when the delay between event occurrence and client notification must
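The polling-versus-notification contrast described in the abstract can be sketched with a minimal publish/subscribe skeleton: a Subscriber registers interest once, and the Publisher pushes notifications to the registered Receiver when information is published, instead of the client repeatedly polling. This is a generic sketch, not the OGC PubSub interface itself.

```python
# Minimal publish/subscribe sketch (generic, not the OGC PubSub API):
# subscribers register a receiver callback per topic; publish() pushes
# the message to every registered receiver, so no polling is needed.

class Publisher:
    def __init__(self):
        self._subscribers = {}  # topic -> list of receiver callbacks

    def subscribe(self, topic, receiver):
        self._subscribers.setdefault(topic, []).append(receiver)

    def publish(self, topic, message):
        for receiver in self._subscribers.get(topic, []):
            receiver(message)

inbox = []
pub = Publisher()
pub.subscribe("new-data", inbox.append)   # Subscriber and Receiver coincide here
pub.publish("new-data", "granule-42 available")
pub.publish("other-topic", "ignored")
# inbox == ["granule-42 available"]
```

Note that the Subscriber/Receiver separation the abstract mentions falls out naturally: `subscribe` can register a callback belonging to a different entity than the one that made the subscription.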

  19. Search-Based Peer Firms: Aggregating Investor Perceptions Through Internet Co-Searches

    OpenAIRE

    Lee, Charles M.C.; Ma, Paul; Wang, Changyi Chang-Yi

    2015-01-01

    Applying a "co-search" algorithm to Internet traffic at the SEC's EDGAR website, we develop a novel method for identifying economically-related peer firms and for measuring their relative importance. Our results show that firms appearing in chronologically adjacent searches by the same individual (Search-Based Peers or SBPs) are fundamentally similar on multiple dimensions. In direct tests, SBPs dominate GICS6 industry peers in explaining cross-sectional variations in base firms' out-of-sampl...

  20. Study of a Quantum Framework for Search Based Software Engineering

    Science.gov (United States)

    Wu, Nan; Song, Fangmin; Li, Xiangdong

    2013-06-01

    The Search Based Software Engineering (SBSE) is widely used in the software engineering to identify optimal solutions. The traditional methods and algorithms used in SBSE are criticized due to their high costs. In this paper, we propose a rapid modified-Grover quantum searching method for SBSE, and theoretically this method can be applied to any search-space structure and any type of searching problems.
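For context, the standard (unmodified) Grover search underlying the method above admits a simple two-dimensional analytic model: after k iterations on N items with M marked, the success probability is sin²((2k+1)θ) with θ = arcsin(√(M/N)). This is the textbook result, not the paper's modified-Grover variant:

```python
# Analytic success probability of standard Grover search
# (textbook two-dimensional model, not the paper's modified variant).
import math

def grover_success(N, M, k):
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta) ** 2

N, M = 1024, 1
# optimal iteration count: nearest integer to pi/(4*theta) - 1/2
k_opt = round(math.pi / (4 * math.asin(math.sqrt(M / N))) - 0.5)
p = grover_success(N, M, k_opt)
# with N = 1024 and one marked item, k_opt = 25 and p > 0.999
```

The O(√(N/M)) iteration count is the quadratic speedup that motivates applying Grover-style search to large SBSE search spaces.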

  1. Computer-based literature search in medical institutions in India

    OpenAIRE

    Kalita Jayantee; Misra Usha; Kumar Gyanendra

    2007-01-01

    Aim: To study the use of computer-based literature search and its application in clinical training and patient care as a surrogate marker of evidence-based medicine. Materials and Methods: A questionnaire comprising questions on purpose (presentation, patient management, research), realm (site accessed, nature and frequency of search), effect, infrastructure, formal training in computer-based literature search, and suggestions for further improvement was sent to residents and faculty of...

  2. Searching a database based web site

    OpenAIRE

    Filipe Silva; Gabriel David

    2003-01-01

    Currently, information systems are usually supported by databases (DB) and accessed through a Web interface. Pages in such Web sites are not drawn from HTML files but are generated on the fly upon request. Indexing and searching such dynamic pages raises several extra difficulties not solved by most search engines, which were designed for static contents. In this paper we describe the development of a search engine that overcomes most of the problems for a specific Web site, how the limitatio...

  3. Search Relevance based on the Semantic Web

    OpenAIRE

    Bicer, Veli

    2012-01-01

    In this thesis, we explore the challenge of search relevance in the context of semantic search. Specifically, the notion of semantic relevance can be distinguished from the other types of relevance in Information Retrieval (IR) in terms of employing an underlying semantic model. We propose the emerging Semantic Web data on the Web which is represented in RDF graph structures as an important candidate to become such a semantic model in a search process.

  4. A web-based rapid assessment tool for production publishing solutions

    Science.gov (United States)

    Sun, Tong

    2010-02-01

    Solution assessment is a critical first step in understanding and measuring the business-process efficiency enabled by an integrated solution package. However, assessing the effectiveness of any solution is usually a very expensive and time-consuming task, which involves substantial domain knowledge, collecting and understanding the specific customer operational context, defining validation scenarios, and estimating the expected performance and operational cost. This paper presents an intelligent web-based tool that can rapidly assess any given solution package for production publishing workflows via a simulation engine and create a report of estimated performance metrics (e.g., throughput, turnaround time, resource utilization) and operational cost. By integrating a digital publishing workflow ontology and an activity-based costing model with a Petri-net based workflow simulation engine, this web-based tool allows users to quickly evaluate potential digital publishing solutions side by side within their desired operational contexts, and provides a low-cost and rapid assessment for organizations before committing to any purchase. The tool also benefits solution providers by shortening sales cycles, establishing trustworthy customer relationships, and supplementing professional assessment services with a proven quantitative simulation and estimation technology.
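The kind of throughput and turnaround estimate such a simulation engine produces can be illustrated with a back-of-envelope sequential pipeline model. This is a stand-in for the paper's Petri-net engine; the stage count, arrival times, and processing times are hypothetical:

```python
# Toy two-stage workflow simulation (illustrative stand-in for a
# Petri-net simulation engine; all times are hypothetical).

def simulate(jobs, stage_times):
    """Sequential pipeline where each stage processes one job at a time.
    jobs: arrival times; returns (turnaround times per job, makespan)."""
    ready = [0.0] * len(stage_times)   # when each stage is next free
    finish = []
    for arrival in jobs:
        t = arrival
        for i, d in enumerate(stage_times):
            start = max(t, ready[i])   # wait until the stage frees up
            t = start + d
            ready[i] = t
        finish.append(t)
    turnaround = [f - a for f, a in zip(finish, jobs)]
    return turnaround, max(finish)

turnaround, makespan = simulate(jobs=[0, 0, 0], stage_times=[2.0, 3.0])
# three jobs arriving together: turnarounds [5.0, 8.0, 11.0], makespan 11.0
```

Even this toy model exposes the queueing effect (later jobs wait on the slower stage) that a side-by-side solution comparison would quantify; attaching a cost per stage-hour would give the activity-based cost estimate.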

  5. A cluster-based simulation of facet-based search

    OpenAIRE

    Urruty, T.; Hopfgartner, F.; Villa, R.; Gildea, N.; Jose, J.M.

    2008-01-01

    The recent increase of online video has challenged research in the field of video information retrieval. Video search engines are becoming more and more interactive, helping the user to easily find what he or she is looking for. In this poster, we present a new approach that uses an iterative clustering algorithm on text and visual features to simulate users creating new facets in a facet-based interface. Our experimental results demonstrate the usefulness of this approach.
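
    The core idea, iteratively clustering items so that each cluster can be offered as a candidate facet, can be sketched with a plain k-means loop. The toy feature vectors, k and iteration count below are illustrative assumptions, not the poster's actual text/visual features.

```python
# Minimal k-means sketch of cluster-based facet generation: items are
# feature vectors; each resulting cluster becomes a candidate facet.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute each center as its cluster's mean
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

# two well-separated groups of toy (text, visual) feature pairs
points = [(0.1, 0.2), (0.0, 0.1), (0.9, 1.0), (1.0, 0.8)]
centers, clusters = kmeans(points, k=2)
print([len(c) for c in clusters])  # → [2, 2]
```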

  6. Text-based plagiarism in scientific publishing: issues, developments and education.

    Science.gov (United States)

    Li, Yongyan

    2013-09-01

    Text-based plagiarism, or copying language from sources, has recently become an issue of growing concern in scientific publishing. Use of CrossCheck (a computational text-matching tool) by journals has sometimes exposed an unexpected amount of textual similarity between submissions and databases of scholarly literature. In this paper I provide an overview of the relevant literature, to examine how journal gatekeepers perceive textual appropriation, and how automated plagiarism-screening tools have been developed to detect text matching, with the technique now available for self-checking of manuscripts before submission; I also discuss issues around English as an additional language (EAL) authors, and in particular EAL novices, who are the typical offenders in textual borrowing. The final section of the paper proposes a few educational directions to take in tackling text-based plagiarism, highlighting the roles of the publishing industry, senior authors and English for academic purposes professionals. PMID:22535578

  7. Text-Based Plagiarism in Scientific Publishing: Issues, Developments and Education

    OpenAIRE

    Li, Yongyan

    2012-01-01

    Text-based plagiarism, or copying language from sources, has recently become an issue of growing concern in scientific publishing. Use of CrossCheck (a computational text-matching tool) by journals has sometimes exposed an unexpected amount of textual similarity between submissions and databases of scholarly literature. In this paper I provide an overview of the relevant literature, to examine how journal gatekeepers perceive textual appropriation, and how automated plagiarism-screening tools...

  8. Empirical Evidences in Citation-Based Search Engines: Is Microsoft Academic Search dead?

    OpenAIRE

    Orduna-Malea, Enrique; Ayllon, Juan Manuel; Martin-Martin, Alberto; Lopez-Cozar, Emilio Delgado

    2014-01-01

    The goal of this working paper is to summarize the main empirical evidence provided by the scientific community as regards the comparison between the two main citation-based academic search engines: Google Scholar and Microsoft Academic Search, paying special attention to the following issues: coverage, correlations between journal rankings, and usage of these academic search engines. Additionally, self-elaborated data is offered, which is intended to provide current evidence about the popul...

  9. Mashup Based Content Search Engine for Mobile Devices

    OpenAIRE

    Kohei Arai

    2013-01-01

    A mashup-based content search engine for mobile devices is proposed. An example of the proposed search engine is implemented with the Yahoo! JAPAN Web Search API, the Yahoo! JAPAN Image Search API, the YouTube Data API, and the Amazon Product Advertising API. The retrieved results are also merged and linked to each other, so that the different types of content can be referenced once an e-learning content is retrieved. The implemented search engine is evaluated with 20 students. The results show usefulness and effectiv...

  10. AN OVERVIEW OF SEARCHING AND DISCOVERING WEB BASED INFORMATION RESOURCES

    Directory of Open Access Journals (Sweden)

    Cezar VASILESCU

    2010-01-01

    Full Text Available The Internet has become a daily instrument for most of us, for professional or personal reasons. We hardly remember the times when a computer and a broadband connection were luxury items. More and more people are relying on the complicated web network to find the information they need. This paper presents an overview of Internet search issues and search engines, and describes the parties and the basic mechanism embedded in a search for web-based information resources. It also presents ways to increase the efficiency of web searches, through a better understanding of what search engines ignore in website content.

  11. Multi-agent based cooperative search in combinatorial optimisation

    OpenAIRE

    Martin, Simon

    2013-01-01

    Cooperative search provides a class of strategies to design more effective search methodologies by combining (meta-) heuristics for solving combinatorial optimisation problems. This area has been little explored in operational research. This thesis proposes a general agent-based distributed framework where each agent implements a (meta-) heuristic. An agent continuously adapts itself during the search process using a cooperation protocol based on reinforcement learning and pattern matching. G...

  12. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    Directory of Open Access Journals (Sweden)

    Alireza Isfandiyari Moghadam

    2010-03-01

    Full Text Available The present investigation concerns the evaluation, comparison and analysis of search options existing within web-based meta-search engines. 64 meta-search engines were identified. 19 meta-search engines that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied supported the AND operator, phrase search, setting the number of results displayed, previous search query storage and help tutorials. Nevertheless, none of them offered search options for hypertext searching or displaying the size of the pages searched. 94.7% supported features such as truncation, keywords in title and URL search, and text summary display. The checklist used in the study could serve as a model for investigating search options in search engines, digital libraries and other internet search tools.
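
    Two of the features every surveyed engine supported, the AND operator and exact phrase search, behave differently in a way a small sketch makes concrete. The toy documents and matching logic below are illustrative, not drawn from the study.

```python
# AND search: every term must occur somewhere in the document.
# Phrase search: the terms must occur contiguously, in order.
docs = {
    1: "web based meta search engines",
    2: "digital libraries and search tools",
    3: "search of meta digital libraries",
}

def and_search(*terms):
    return {d for d, text in docs.items()
            if all(t in text.split() for t in terms)}

def phrase_search(phrase):
    return {d for d, text in docs.items() if phrase in text}

print(and_search("meta", "search"))  # → {1, 3}
print(phrase_search("meta search"))  # → {1}
```

Document 3 matches the AND query but not the phrase query, which is exactly the distinction the checklist items capture.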

  13. Semantic Web Based Efficient Search Using Ontology and Mathematical Model

    Directory of Open Access Journals (Sweden)

    K.Palaniammal

    2014-01-01

    Full Text Available The semantic web is the forthcoming technology in the world of search engines. Its main focus is search that is meaningful, rather than the syntactic search prevailing now. This proposed work concerns semantic search in the educational domain. In this paper, we propose semantic web based efficient search using ontology and a mathematical model that takes into account misleading or unmatched service information, lack of relevant domain knowledge and wrong service queries. To solve these issues, the framework is designed to make three major contributions: an ontology knowledge base, Natural Language Processing (NLP) techniques and a search model. The ontology knowledge base stores domain-specific service ontologies and service description entity (SDE) metadata. The search model, which includes the mathematical model, retrieves SDE metadata efficiently for users in the education domain. The NLP techniques support spell-checking and synonym-based search. The results are retrieved and stored in an ontology, which in turn prevents data redundancy. The results are more accurate to search, sensitive to spell-check and synonymous context. This approach reduces the user's time and complexity in searching for the correct results, and our model provides more accurate results. A series of experiments are conducted in order to evaluate, respectively, the mechanism and the employed mathematical model.
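
    The spell-check and synonym-expansion steps can be sketched as a small query-normalization pipeline; the vocabulary, synonym table and similarity cutoff below are assumptions for illustration, not the paper's actual NLP components.

```python
# Query normalization: snap a misspelled term to the closest known
# vocabulary word, then expand it with its synonyms before matching.
import difflib

SYNONYMS = {"teacher": {"teacher", "instructor", "tutor"}}
VOCAB = ["teacher", "course", "student"]

def normalize(term):
    # spell-check: pick the closest vocabulary word above the cutoff
    match = difflib.get_close_matches(term, VOCAB, n=1, cutoff=0.7)
    return match[0] if match else term

def expand(term):
    # synonym expansion: search with every synonym of the normalized term
    term = normalize(term)
    return SYNONYMS.get(term, {term})

print(expand("teachr"))  # misspelling corrected, then expanded to synonyms
```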

  14. Beacon-Based Service Publishing Framework in Multiservice Wi-Fi Hotspots

    Directory of Open Access Journals (Sweden)

    Di Sorte Dario

    2007-01-01

    Full Text Available In an expected future multiaccess and multiservice IEEE 802.11 environment, the problem of providing users with useful service-related information to support a correct, rapid network selection is expected to become a very important issue. A feasible short-term 802.11-tailored working solution, compliant with existing equipment, is to publish service information encoded in the SSID information element of beacon frames. This makes it possible for an operator to implement service publishing in 802.11 networks while waiting for a standardized mechanism. This straightforward approach has also allowed us to evaluate experimentally the performance of a beacon-based service publishing solution. In fact, the main focus of the paper is to present a quantitative comparison of service discovery times between the legacy scenario, where the user is forced to associate and authenticate with a network point of access to check its service offer, and the enhanced scenario, where the set of service-related information is broadcast within beacons. These discovery times are obtained by processing the results of a measurement campaign performed in a multiaccess/service 802.11 environment. This analysis confirms the effectiveness of the beacon-based approach. We also show that the cost of such a solution in terms of wireless bandwidth consumption is low.

  15. A Digital Ecosystem-based Framework for Math Search Systems

    Directory of Open Access Journals (Sweden)

    Mohammed Q. Shatnawi

    2012-03-01

    Full Text Available Text-based search engines fall short in retrieving structured information. When searching for x(y+z) using those search engines, for example Google, they retrieve documents that contain xyz, x+y=z, (x+y+z)=xyz or any other document that contains x, y, and/or z, but not x(y+z) as a standalone math expression. The reason behind this shortcoming is that text-based search engines ignore the structure of mathematical expressions. Several issues are associated with designing and implementing math-based search systems. Such systems must be able to differentiate between a user query that contains a mathematical expression and any other query that contains only text terms. A reliable indexing approach, along with a flexible and efficient representation technique, is highly required. Ultimately, math-based search systems must be able to process mathematical expressions that are well-structured and have properties that make them different from other forms of text. Here, we take advantage of the concept of digital ecosystems to refine the text search process so it becomes applicable to searching for a mathematical expression. In this research, a framework that contains the basic building blocks of a math-based search system is designed.
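
    One way to make a search system structure-aware in the sense described above is to index expressions by a canonical tree serialization instead of flat text, so x*(y+z) no longer matches documents that merely mention x, y and z. The sketch below uses Python's own expression parser as a stand-in for a real math parser; the prefix notation is an assumption.

```python
# Serialize an expression's parse tree to a canonical prefix form that
# can be indexed and matched structurally.
import ast

def canonical(expr):
    def walk(node):
        if isinstance(node, ast.BinOp):
            op = type(node.op).__name__        # Add, Mult, Sub, ...
            return f"({op} {walk(node.left)} {walk(node.right)})"
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Constant):
            return str(node.value)
        raise ValueError("unsupported node")
    return walk(ast.parse(expr, mode="eval").body)

print(canonical("x*(y+z)"))                         # → (Mult x (Add y z))
print(canonical("x*(y+z)") == canonical("x * (y + z)"))  # whitespace-insensitive
```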

  16. Grover quantum searching algorithm based on weighted targets

    Institute of Scientific and Technical Information of China (English)

    Li Panchi; Li Shiyong

    2008-01-01

    The current Grover quantum searching algorithm cannot identify differences in the importance of the search targets when it is applied to an unsorted quantum database, and the probability of finding each search target is equal. To solve this problem, a Grover searching algorithm based on weighted targets is proposed. First, each target is assigned a weight coefficient according to its importance. Applying these different weight coefficients, the targets are represented as quantum superposition states. Second, the novel Grover searching algorithm based on the quantum superposition of the weighted targets is constructed. Using this algorithm, the probability of obtaining each target can be approximated to the corresponding weight coefficient, which shows the flexibility of this algorithm. Finally, the validity of the algorithm is proved by a simple searching example.
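
    For reference, the standard (unweighted) Grover iteration that the proposal generalizes can be simulated classically with a state vector; in the weighted variant, the initial amplitudes would instead be proportional to the square roots of the weight coefficients. The database size and marked index below are arbitrary.

```python
# State-vector simulation of standard Grover search: the oracle flips
# the marked amplitude, the diffusion step inverts about the mean.
import math
import numpy as np

N = 16           # database size (4 qubits)
marked = 5       # index of the search target

state = np.full(N, 1 / math.sqrt(N))        # uniform superposition
iters = int(math.floor(math.pi / 4 * math.sqrt(N)))
for _ in range(iters):
    state[marked] *= -1                     # oracle: flip marked amplitude
    state = 2 * state.mean() - state        # diffusion: inversion about mean

print(round(float(state[marked] ** 2), 3))  # success probability ≈ 0.961
```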

  17. Attribute-based proxy re-encryption with keyword search.

    Directory of Open Access Journals (Sweden)

    Yanfeng Shi

    Full Text Available Keyword search on encrypted data allows one to issue a search token and conduct search operations on encrypted data while still preserving keyword privacy. In the present paper, we consider the keyword search problem further and introduce a novel notion called attribute-based proxy re-encryption with keyword search (ABRKS), which introduces a promising feature: in addition to supporting keyword search on encrypted data, it enables data owners to delegate the keyword search capability to other data users complying with the specific access control policy. To be specific, ABRKS allows (i) the data owner to outsource his encrypted data to the cloud and then ask the cloud to conduct keyword search on the outsourced encrypted data with a given search token, and (ii) the data owner to delegate keyword search capability to other data users in a fine-grained access control manner, by allowing the cloud to re-encrypt stored encrypted data with a re-encryption key (embedded with some form of access control policy). We formalize the syntax and security definitions for ABRKS, and propose two concrete constructions for ABRKS: key-policy ABRKS and ciphertext-policy ABRKS. In a nutshell, our constructions can be treated as the integration of technologies from the fields of attribute-based cryptography and proxy re-encryption cryptography.

  18. Personalized Web Search Using Trust Based Hubs And Authorities

    OpenAIRE

    Dr. Suruchi Chawla

    2014-01-01

    In this paper, a method is proposed to improve the precision of Personalized Web Search (PWS) using Trust-based Hubs and Authorities (HA), where hubs are high-quality resource pages and authorities are high-quality content pages on a specific topic, generated using Hyperlink-Induced Topic Search (HITS). Trust is used in HITS to increase its reliability in identifying good hubs and authorities for effective web search and to overcome the problem of to...
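
    The underlying HITS update that the trust mechanism builds on alternates hub and authority scores until they stabilize. A plain, untrusted version is sketched below on an invented four-page graph; the trust weighting itself is the paper's contribution and is omitted here.

```python
# Basic HITS power iteration: authority = sum of in-linking hubs,
# hub = sum of linked-to authorities, with L2 normalization each round.
import math

links = {"a": ["c", "d"], "b": ["c", "d"], "c": ["d"], "d": []}  # page -> out-links
pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(20):
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    for scores in (auth, hub):
        norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
        for p in scores:
            scores[p] /= norm

best_authority = max(auth, key=auth.get)
print(best_authority)  # "d": the most linked-to, well-endorsed page
```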

  19. A Survey on Keyword Based Search over Outsourced Encrypted Data

    Directory of Open Access Journals (Sweden)

    S. Evangeline Sharon

    2013-04-01

    Full Text Available To ensure security, encryption techniques play a major role when data are outsourced to the cloud. The problem of retrieving data from cloud servers is considered. Many searching techniques are used for retrieving the data. This study focused on a set of keyword-based search algorithms that provide secure data retrieval with high efficiency. It concludes that the Ranked Searchable Symmetric Encryption (RSSE) scheme is the best methodology for searching encrypted data.

  20. Ranking Web Pages Based On Searching, Keywords and Incoming Links

    OpenAIRE

    S. M. Khalid Jamal; Babar

    2012-01-01

    In this article, a new Page-Rank strategy is proposed based on searching keywords and incoming links. Page-Rank is an analysis algorithm that search engines use to determine page relevance and importance. To reach this goal, we extract the metadata from hypertext documents and then analyze it together with the searching keywords and the links that point to each hypertext document.
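
    The strategy of combining keyword matches with incoming links can be illustrated with a simple blended score; the 0.7/0.3 weighting and the toy page metadata below are assumptions, not the authors' formula.

```python
# Blend two signals: fraction of query keywords matched by a page's
# metadata, and the page's normalized incoming-link count.
pages = {
    "p1": {"keywords": {"search", "rank", "links"}, "in_links": 10},
    "p2": {"keywords": {"search"}, "in_links": 50},
    "p3": {"keywords": {"cooking"}, "in_links": 90},
}

def score(page, query, max_links):
    kw = len(query & page["keywords"]) / len(query)  # keyword overlap in [0, 1]
    ln = page["in_links"] / max_links                # normalized link count
    return 0.7 * kw + 0.3 * ln

query = {"search", "rank"}
max_links = max(p["in_links"] for p in pages.values())
ranking = sorted(pages, key=lambda n: score(pages[n], query, max_links), reverse=True)
print(ranking)  # → ['p1', 'p2', 'p3']
```

Note that p3 has the most incoming links but matches no keywords, so it ranks last: popularity alone does not outweigh relevance under this mix.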

  1. Mragyati : A System for Keyword-based Searching in Databases

    OpenAIRE

    Sarda, N. L.; Jain, Ankur

    2001-01-01

    The web, through many search engine sites, has popularized the keyword-based search paradigm, where a user can specify a string of keywords and expect to retrieve relevant documents, possibly ranked by their relevance to the query. Since a lot of information is stored in databases (and not as HTML documents), it is important to provide a similar search paradigm for databases, where users can query a database without knowing the database schema and database query languages such as SQL. In this...

  2. Search-Based Software Test Data Generation Using Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    P. Maragathavalli

    2011-02-01

    Full Text Available Search-based Software Engineering has been utilized for a number of software engineering activities. One area where Search-Based Software Engineering has seen much application is test data generation. Evolutionary testing designates the use of metaheuristic search methods for test case generation. The search space is the input domain of the test object, with each individual, or potential solution, being an encoded set of inputs to that test object. The fitness function is tailored to find test data for the type of test that is being undertaken. Evolutionary Testing (ET) uses optimizing search techniques such as evolutionary algorithms to generate test data. The effectiveness of a GA-based testing system is compared with a random testing system. For simple programs both testing systems work fine, but as the complexity of the program or the complexity of the input domain grows, the GA-based testing system significantly outperforms random testing.
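
    A minimal version of such a GA-based test data generator treats the distance to satisfying a target branch condition as the fitness to minimize. Everything below (the condition under test, the operators, the rates) is an illustrative sketch, not the paper's system.

```python
# Tiny GA searching for an input pair (x, y) that covers the branch
# "x * y == 1000"; fitness 0 means the branch is covered.
import random

rng = random.Random(42)
TARGET = 1000

def fitness(ind):
    x, y = ind
    return abs(x * y - TARGET)

def evolve(pop_size=50, gens=200):
    pop = [(rng.randint(0, 100), rng.randint(0, 100)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:
            break
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (x1, y1), (x2, y2) = rng.sample(parents, 2)
            child = (x1, y2)                    # single-point crossover
            if rng.random() < 0.3:              # small mutation on x
                child = (child[0] + rng.randint(-5, 5), child[1])
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```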

  3. Weblog Search Engine Based on Quality Criteria

    Directory of Open Access Journals (Sweden)

    F. Azimzadeh,

    2011-01-01

    Full Text Available Nowadays, an increasing amount of human knowledge is placed in computerized repositories such as the World Wide Web. This gives rise to the problem of how to locate specific pieces of information in these often quite unstructured repositories, and search engines are the best available solution. Some studies show that almost half of the traffic to blog servers comes from search engines. The more outgoing and informal social nature of the blogosphere opens the opportunity for exploiting more socially-oriented features. The nature of blogs, which are usually characterized by their personal and informal style, their dynamism and the new relational links they build, requires new quality measures for blog search engines. Link analysis algorithms that exploit the Web graph may not work well in the blogosphere in general: Gonçalves et al. (2010) indicated that most of the popular blogs in their dataset (70%) have a PageRank value equal to -1, making them almost invisible to the search engine. We expect that blogs would be retrieved more effectively by search engines that incorporate blog-specific quality criteria.

  4. Assessment and Comparison of Search capabilities of Web-based Meta-Search Engines: A Checklist Approach

    OpenAIRE

    Alireza Isfandiyari Moghadam; Zohreh Bahari Mova’fagh

    2010-01-01

      The present investigation concerns the evaluation, comparison and analysis of search options existing within web-based meta-search engines. 64 meta-search engines were identified. 19 meta-search engines that were free, accessible and compatible with the objectives of the present study were selected. An author-constructed checklist was used for data collection. Findings indicated that all meta-search engines studied used the AND operator, phrase search, number of results displayed setting, pr...

  5. A grammar checker based on web searching

    Directory of Open Access Journals (Sweden)

    Joaquim Moré

    2006-05-01

    Full Text Available This paper presents an English grammar and style checker for non-native English speakers. The main characteristic of this checker is its use of an Internet search engine. As the number of web pages written in English is immense, the system hypothesises that a piece of text not found on the Web is probably badly written. The system also hypothesises that the Web will provide examples of how the content of the text segment can be expressed in a grammatically correct and idiomatic way. Thus, when the checker warns the user about the odd nature of a text segment, the Internet engine searches for contexts that can help the user decide whether or not to correct the segment. By means of the search engine, the checker also suggests other expressions that appear on the Web more often than the one the user actually wrote.
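
    The checker's core hypothesis, "rare on the Web probably means badly written", can be sketched with a local frequency table standing in for the search engine's hit counts. The real system queries the Internet; the counts, threshold and phrases below are invented for illustration.

```python
# Flag a bigram as suspicious when its (stand-in) web hit count is low,
# and suggest the variant with the same head word that is far more common.
COUNTS = {
    "depends on": 500_000,
    "depends of": 300,     # rare on the web -> probably an error
}

def looks_suspicious(bigram, threshold=1000):
    return COUNTS.get(bigram, 0) < threshold

def suggest(bigram):
    head = bigram.split()[0]
    candidates = {k: v for k, v in COUNTS.items() if k.startswith(head + " ")}
    return max(candidates, key=candidates.get)

print(looks_suspicious("depends of"), suggest("depends of"))
```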

  6. Web People Search Using Ontology Based Decision Tree

    Directory of Open Access Journals (Sweden)

    Mrunal Patil

    2012-09-01

    Full Text Available Nowadays, searching for people on the web is one of the most common activities of web users. When we issue a query for a person search, it returns a set of web pages related to distinct persons with the given name, and the job of finding the web page of interest is left to the user. In this paper, we develop a technique for web people search which clusters web pages based on semantic information and maps them using an ontology-based decision tree, making it easier for the user to access the information. This technique uses the concept of ontology, thus reducing the number of inconsistencies. The results prove that the ontology-based decision tree and clustering help increase the efficiency of the overall search.

  7. Web People Search Using Ontology Based Decision Tree

    Directory of Open Access Journals (Sweden)

    Mrunal Patil

    2012-03-01

    Full Text Available Nowadays, searching for people on the web is one of the most common activities of web users. When we issue a query for a person search, it returns a set of web pages related to distinct persons with the given name, and the job of finding the web page of interest is left to the user. In this paper, we develop a technique for web people search which clusters web pages based on semantic information and maps them using an ontology-based decision tree, making it easier for the user to access the information. This technique uses the concept of ontology, thus reducing the number of inconsistencies. The results prove that the ontology-based decision tree and clustering help increase the efficiency of the overall search.

  8. A unified architecture for biomedical search engines based on semantic web technologies.

    Science.gov (United States)

    Jalali, Vahid; Matash Borujerdi, Mohammad Reza

    2011-04-01

    The volume of published biomedical research has grown enormously in recent years. Many medical search engines are designed and developed to address the ever-growing information needs of biomedical experts and curators. Significant progress has been made in utilizing the knowledge embedded in medical ontologies and controlled vocabularies to assist these engines. However, the lack of a common architecture for the ontologies used and for the overall retrieval process hampers the evaluation of different search engines and the interoperability between them under unified conditions. In this paper, a unified architecture for medical search engines is introduced. The proposed model contains standard schemas, declared in semantic web languages, for the ontologies and documents used by search engines. Unified models for the annotation and retrieval processes are other parts of the introduced architecture. A sample search engine is also designed and implemented based on the proposed architecture. The search engine is evaluated using two test collections and results are reported in terms of precision vs. recall and mean average precision for the different approaches used by this search engine. PMID:20703566

  9. Semantic Search among Heterogeneous Biological Databases Based on Gene Ontology

    Institute of Scientific and Technical Information of China (English)

    Shun-Liang CAO; Lei QIN; Wei-Zhong HE; Yang ZHONG; Yang-Yong ZHU; Yi-Xue LI

    2004-01-01

    Semantic search is a key issue in the integration of heterogeneous biological databases. In this paper, we present a methodology for implementing semantic search in BioDW, an integrated biological data warehouse. Two tables are presented: the DB2GO table, to correlate Gene Ontology (GO) annotated entries from BioDW data sources with GO, and the semantic similarity table, to record similarity scores derived from any pair of GO terms. Based on the two tables, multifarious ways for semantic search are provided, and the corresponding entries in heterogeneous biological databases can be expediently searched in semantic terms.
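
    A semantic similarity table like the one described can be filled by scoring every pair of GO terms from the ontology graph. The sketch below uses the Jaccard overlap of ancestor sets on a tiny invented is-a fragment; the paper does not specify this exact measure, so both the mini-ontology and the formula are illustrative.

```python
# Similarity of two terms as the Jaccard overlap of their ancestor
# sets (each set includes the term itself) in a toy is-a DAG.
PARENTS = {
    "binding": [],
    "protein binding": ["binding"],
    "dna binding": ["binding"],
    "enzyme binding": ["protein binding"],
}

def ancestors(term):
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS[t])
    return seen

def similarity(a, b):
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)

print(similarity("enzyme binding", "protein binding"))  # ≈ 0.667
print(similarity("enzyme binding", "dna binding"))      # 0.25
```

Precomputing this score for all term pairs is what turns the ontology into a lookup table usable at query time.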

  10. Routing Optimization Based on Taboo Search Algorithm for Logistic Distribution

    Directory of Open Access Journals (Sweden)

    Hongxue Yang

    2014-04-01

    Full Text Available Along with the widespread application of electronic commerce in modern business, logistic distribution has become increasingly important. More and more enterprises recognize that logistic distribution plays an important role in the process of production and sales. A good routing for logistic distribution can cut down transport costs and improve efficiency, and to this end a routing optimization based on taboo search for logistic distribution is proposed in this paper. Taboo search is a metaheuristic method that guides a local search procedure, here applied to logistic optimization. The taboo search is employed to accelerate convergence, and the aspiration criterion is combined with the heuristic algorithm to solve the routing optimization. Simulation results demonstrate that the optimal routing in logistic distribution can be quickly obtained by the taboo search algorithm.
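
    The two ingredients named above, a tabu list that blocks recently used moves to escape local optima and an aspiration criterion that overrides the block when a move beats the best tour found so far, fit in a short sketch on a toy four-stop routing instance. The coordinates, tenure and iteration budget are illustrative.

```python
# Tabu search over swap moves on a tiny TSP-style routing instance
# (four stops on a 4x3 rectangle; the optimal tour is its perimeter).
import itertools
import random

CITIES = {0: (0, 0), 1: (4, 0), 2: (4, 3), 3: (0, 3)}

def cost(tour):
    legs = zip(tour, tour[1:] + tour[:1])   # close the loop
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5 for a, b in legs)

def tabu_search(iters=40, tenure=3, seed=1):
    rng = random.Random(seed)
    current = list(CITIES)
    rng.shuffle(current)
    best = current[:]
    tabu = {}                               # move -> step until which it is forbidden
    for step in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            nxt = current[:]
            nxt[i], nxt[j] = nxt[j], nxt[i]  # swap two stops
            # aspiration: a tabu move is allowed if it beats the best tour
            if tabu.get((i, j), -1) < step or cost(nxt) < cost(best):
                candidates.append(((i, j), nxt))
        (i, j), current = min(candidates, key=lambda m: cost(m[1]))
        tabu[(i, j)] = step + tenure        # forbid re-swapping for a while
        if cost(current) < cost(best):
            best = current[:]
    return best, cost(best)

tour, c = tabu_search()
print(tour, c)  # a perimeter tour of cost 14.0
```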

  11. LAHS: A novel harmony search algorithm based on learning automata

    Science.gov (United States)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The performance of the harmony search (HS) algorithm strongly depends on the fine-tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
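
    The three parameters the abstract names (HMCR, PAR and bw) appear directly in the basic HS improvisation loop, sketched below on the sphere function; in LAHS they would be chosen adaptively by learning automata rather than fixed by hand. All values here are illustrative.

```python
# Bare-bones harmony search: improvise a new harmony per iteration and
# replace the worst member of harmony memory if the new one is better.
import random

rng = random.Random(0)
DIM, LOW, HIGH = 4, -5.0, 5.0
HMCR, PAR, BW = 0.9, 0.3, 0.2   # consideration rate, pitch adjustment, bandwidth

def sphere(x):
    return sum(v * v for v in x)

memory = [[rng.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(10)]
for _ in range(2000):
    new = []
    for d in range(DIM):
        if rng.random() < HMCR:             # pick value d from harmony memory
            v = rng.choice(memory)[d]
            if rng.random() < PAR:          # pitch adjustment within bandwidth
                v += rng.uniform(-BW, BW)
        else:                               # random improvisation
            v = rng.uniform(LOW, HIGH)
        new.append(min(HIGH, max(LOW, v)))
    worst = max(range(len(memory)), key=lambda i: sphere(memory[i]))
    if sphere(new) < sphere(memory[worst]):
        memory[worst] = new

best = min(memory, key=sphere)
print(round(sphere(best), 4))
```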

  12. The Critical Role of Journal Selection in Scholarly Publishing: A Search for Journal Options in Language-related Research Areas and Disciplines

    Directory of Open Access Journals (Sweden)

    Hacer Hande Uysal

    2012-04-01

    Full Text Available Problem statement: With globalization in academia, pressures on academics to publish internationally have been increasing all over the world. However, participating in global scientific communication through publishing in well-regarded international journals is a very challenging and daunting task, particularly for nonnative speaker (NNS) scholars. Recent research has pointed out both linguistic and nonlinguistic factors behind the challenges facing NNS scholars in their attempts to publish internationally. Journal selection is suggested to be one of the critical determinants on the way to publication. Purpose of the study: The aim of this article, therefore, is to offer some suggestions about the journal selection process and to provide potential international journal options, especially for newcomers to the field and for off-networked peripheral academics who may have limited access to journals. Method: First, a framework is offered as guidance on the major points to be considered before deciding on a journal for manuscript submission. Then, as a result of a search in major international databases, 17 tables are compiled listing international journal options according to their coverage by certain international indexes and according to their focus of interest in specific research areas in the disciplines of language education, applied linguistics, and linguistics. Conclusion: It is hoped that these suggestions and the compiled lists of available journals on specific topics will help newcomers to the field and off-networked peripheral academics with limited journal access in language education and related fields in their attempts to publish internationally.

  13. Web page publishing policy: Developing taxonomy for private higher education settings based on current practice

    OpenAIRE

    Veronica F. McGowan

    2011-01-01

    Web page publishing has expanded rapidly in higher educational settings as administrative, faculty, staff, and student users lobby for server space. Increasingly, web publishing policies are needed to help maintain an institutional brand and ensure that civil rights are not violated. Institutions that publish or host individual web pages must grapple with issues concerning web page ownership as well as style and content compliance. An analysis of the Web publishing policies of 59 Pennsylvania...

  14. Web page publishing policy: Developing taxonomy for private higher education settings based on current practice

    Directory of Open Access Journals (Sweden)

    Veronica F. McGowan

    2011-12-01

    Full Text Available Web page publishing has expanded rapidly in higher educational settings as administrative, faculty, staff, and student users lobby for server space. Increasingly, web publishing policies are needed to help maintain an institutional brand and ensure that civil rights are not violated. Institutions that publish or host individual web pages must grapple with issues concerning web page ownership as well as style and content compliance. An analysis of the Web publishing policies of 59 Pennsylvanian private colleges yielded results which are presented in this paper as a taxonomy of web site publishing policies for higher educational institutions.

  15. Stochastic Models for Budget Optimization in Search-Based Advertising

    OpenAIRE

    Muthukrishnan, S.; Pal, Martin; Svitkina, Zoya

    2006-01-01

    Internet search companies sell advertisement slots based on users' search queries via an auction. Advertisers have to determine how to place bids on the keywords of their interest in order to maximize their return for a given budget: this is the budget optimization problem. The solution depends on the distribution of future queries. In this paper, we formulate stochastic versions of the budget optimization problem based on natural probabilistic models of distribution over future queries, and ...

  16. Music-based training for pediatric CI recipients: A systematic analysis of published studies.

    Science.gov (United States)

    Gfeller, K

    2016-06-01

    In recent years, there has been growing interest in the use of music-based training to enhance speech and language development in children with normal hearing and some forms of communication disorders, including pediatric CI users. The use of music training for CI users may initially seem incongruous given that signal processing for CIs presents a degraded version of pitch and timbre, both key elements in music. Furthermore, empirical data of systematic studies of music training, particularly in relation to transfer to speech skills are limited. This study describes the rationale for music training of CI users, describes key features of published studies of music training with CI users, and highlights some developmental and logistical issues that should be taken into account when interpreting or planning studies of music training and speech outcomes with pediatric CI recipients. PMID:27246744

  18. Technology transfer in agriculture. (Latest citations from the Biobusiness data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-10-01

    The bibliography contains citations concerning technology transfer in agriculture. Topics include applications of technology transfer in aquaculture, forestry, soil maintenance, agricultural pollution, agricultural biotechnology, and control of disease and insect pests. Use of computer technology in agriculture and technology transfers to developing countries are discussed. (Contains a minimum of 178 citations and includes a subject term index and title list.)

  19. Human gene therapy: methods and materials. (latest citations from the biobusiness data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-06-01

    The bibliography contains citations concerning the evolution of technologies for genetic identification and treatment of diseases such as cancer, immune deficiencies, anemias, hemophilias, muscular dystrophy, and diabetes. Emphasis is placed upon development and application of genetic engineering techniques for the production of medicinal biological preparations. Other topics include the use of DNA (deoxyribonucleic acid) probes for gene isolation and disease marker identification, methods for replacing missing or defective genetic material, and mapping of the human genome. Governmental regulation, and moral and ethical implications are briefly reviewed. (Contains 250 citations and includes a subject term index and title list.)

  20. Carbon monoxide toxicity. (Latest citations from the Life Sciences Collection data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-01

    The bibliography contains citations concerning the mechanism and clinical manifestations of carbon monoxide (CO) exposure, including the effects on the liver, cardiovascular, and nervous systems. Topics include studies of the carbon monoxide binding affinity with hemoglobin, measurement of carboxyhemoglobin in humans and various animal species, carbon monoxide levels resulting from tobacco and marijuana smoke, occupational exposure and the NIOSH (National Institute for Occupational Safety and Health) biological exposure index, symptomology and percent of blood CO, and intrauterine exposure. Air pollution, tobacco smoking, and occupational exposure are discussed as primary sources of carbon monoxide exposure. The effects of cigarette smoking on fetal development and health are excluded and examined in a separate bibliography. (Contains a minimum of 172 citations and includes a subject term index and title list.)

  1. Network-Based Electronic Publishing of Scholarly Works: A Selective Bibliography

    OpenAIRE

    Bailey, Jr., Charles W.

    1995-01-01

    This bibliography presents selected articles, books, electronic documents, and other sources that are useful in understanding scholarly electronic publishing efforts on the Internet and other networks. Most sources have been published between 1990 and the present; however, a limited number of key sources published prior to 1990 are also included. Where possible, links are provided to sources that are available via the Internet. Version 26 is the final update of this bibliography. For more cu...

  2. Methodological quality of systematic reviews and clinical trials on women's health published in a Brazilian evidence-based health journal

    OpenAIRE

    Cristiane Rufino Macedo; Rachel Riera; Maria Regina Torloni

    2013-01-01

    OBJECTIVES: To assess the quality of systematic reviews and clinical trials on women's health recently published in a Brazilian evidence-based health journal. METHOD: All systematic reviews and clinical trials on women's health published in the last five years in the Brazilian Journal of Evidence-based Health were retrieved. Two independent reviewers critically assessed the methodological quality of reviews and trials using AMSTAR and the Cochrane Risk of Bias Table, respectively. RESULTS: Sy...

  3. METADATA EXPANDED SEMANTICALLY BASED RESOURCE SEARCH IN EDUCATION GRID

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    With the rapid increase of educational resources, how to search for a necessary educational resource quickly is one of the most important issues. Educational resources have the characteristics of distribution and heterogeneity, which are the same as the characteristics of Grid resources. Therefore, the technology of Grid resource search was adopted to implement educational resource search. Motivated by the insufficiency of current resource search methods based on metadata, a method of extracting semantic relations between the words constituting metadata is proposed. We mainly focus on acquiring synonymy, hyponymy, hypernymy, and parataxis relations. In our schema, we extract texts related to the metadata to be expanded from the text space through text extraction templates. Next, metadata are obtained through metadata extraction templates. Finally, we compute semantic similarity to eliminate false relations and construct a semantic expansion knowledge base. The proposed method has been applied on the education grid.

  4. Entropy-Based Search Algorithm for Experimental Design

    CERN Document Server

    Malakar, N K

    2010-01-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. ...

  5. Cloud engineering is Search Based Software Engineering too

    OpenAIRE

    Harman M.; Lakhotia K.; Singer J.; White D.R.; Yoo S.

    2013-01-01

    Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation ...

  6. A Visual Similarity-Based 3D Search Engine

    OpenAIRE

    Lmaati, Elmustapha Ait; Oirrak, Ahmed El; M.N. Kaddioui

    2009-01-01

    Retrieval systems for 3D objects are required because 3D databases used around the web are growing. In this paper, we propose a visual similarity-based search engine for 3D objects. The system is based on a new representation of 3D objects given by a 3D closed curve that captures all information about the surface of the 3D object. We propose a new 3D descriptor, which is a combination of three signatures of this new representation, and we implement it in our interactive web-based search engin...

  7. In Search of...Brain-Based Education.

    Science.gov (United States)

    Bruer, John T.

    1999-01-01

    Debunks two ideas appearing in brain-based education articles: the educational significance of brain laterality (right brain versus left brain) and claims for a sensitive period of brain development in young children. Brain-based education literature provides a popular but misleading mix of fact, misinterpretation, and fantasy. (Contains 47 references.) (MLH)

  8. A Domain Specific Ontology Based Semantic Web Search Engine

    CERN Document Server

    Mukhopadhyay, Debajyoti; Mukherjee, Sreemoyee; Bhattacharya, Jhilik; Kim, Young-Chon

    2011-01-01

    Since its emergence in the 1990s the World Wide Web (WWW) has rapidly evolved into a huge mine of global information, and it is growing in size every day. The presence of a huge amount of resources on the Web thus poses a serious problem of accurate search. This is mainly because today's Web is a human-readable Web where information cannot be easily processed by machines. The highly sophisticated, efficient keyword-based search engines that have evolved today have not been able to bridge this gap. Hence the concept of the Semantic Web, envisioned by Tim Berners-Lee as a Web of machine-interpretable information expressed in a machine-processable form. Based on Semantic Web technologies, we present in this paper the design methodology and development of a semantic Web search engine which provides exact search results for a domain-specific search. This search engine is developed for an agricultural Website which hosts agricultural information about the state of West Bengal.

  9. Entropy-Based Search Algorithm for Experimental Design

    Science.gov (United States)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
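    The selection rule this abstract describes (score each candidate experiment by the Shannon entropy of its predicted outcome distribution, then pick the maximizer) can be sketched as follows. The experiment names and probability values are invented for illustration; this is not the authors' nested-entropy-sampling code, only the entropy criterion it builds on.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def most_informative(experiments):
    """experiments: dict name -> predicted outcome probabilities.
    Returns the experiment whose predicted outcomes carry maximum entropy."""
    return max(experiments, key=lambda e: shannon_entropy(experiments[e]))

candidates = {                        # hypothetical predicted distributions
    "exp_A": [0.97, 0.02, 0.01],      # outcome nearly certain: little to learn
    "exp_B": [0.50, 0.30, 0.20],      # outcome uncertain: most informative
}
best = most_informative(candidates)   # -> "exp_B"
```

The sketch ranks experiments only; nested entropy sampling additionally maintains a set of experiment samples and a rising threshold to search a high-dimensional experiment space efficiently.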

  10. Personalized Search Based on Context-Centric Model

    Directory of Open Access Journals (Sweden)

    Mingyang Liu

    2013-07-01

    Full Text Available With the rapid development of the World Wide Web, a huge amount of data has been growing exponentially in our daily life. Users spend much more time than before searching for the information they really need. Even when they enter exactly the same search input, different users may have different goals. Moreover, users commonly annotate information resources or formulate search queries according to their own behavior. In practice, this process yields fuzzy results and is time-consuming. To address these problems, we propose a methodology that combines users' context, users' profiles, and users' folksonomies to optimize personalized search. At the end of this paper, we present an experiment to evaluate our methodology, from which we conclude that our approach performs better than the other samples.

  11. A Shape Based Image Search Technique

    Directory of Open Access Journals (Sweden)

    Aratrika Sarkar

    2014-08-01

    Full Text Available This paper describes an interactive application we have developed based on a shape-based image retrieval technique. The key concepts described in the project are: (i) matching of images based on contour matching; (ii) matching of images based on edge matching; (iii) matching of images based on pixel matching of colours. Further, the application facilitates matching of images invariant to transformations such as (i) translation; (ii) rotation; (iii) scaling. The key factor of the system is that it graphically shows the percentage unmatched of the uploaded image with respect to the images already existing in the database, whereas the integrity of the system lies in the unique matching techniques used for optimum results. This increases the accuracy of the system. For example, when a user uploads an image, say an image of a mango leaf, the application shows all mango leaves present in the database as well as other leaves matching the colour and shape of the uploaded mango leaf.

  12. IRPPS Editoria Elettronica: an electronic publishing web portal based on Open Journal Systems (OJS)

    OpenAIRE

    Nobile, Marianna; Pecoraro, Fabrizio; GreyNet, Grey Literature Network Service

    2013-01-01

    This paper presents IRPPS Editoria Elettronica, an e-publishing service developed by the Institute for Research on Population and Social Policies (IRPPS) of the Italian National Research Council (CNR). Its aim is to reorganize the Institute's scientific editorial activity, manage its in-house publications, and disseminate its scientific results. In particular this paper focuses on: the IRPPS editorial activities, the platform used to develop the service, the publishing process and the web portal develo...

  13. A Compound Object Authoring and Publishing Tool for Literary Scholars based on the IFLA-FRBR

    OpenAIRE

    Anna Gerber; Jane Hunter

    2009-01-01

    This paper presents LORE (Literature Object Re-use and Exchange), a light-weight tool which is designed to allow literature scholars and teachers to author, edit and publish compound information objects encapsulating related digital resources and bibliographic records. LORE enables users to easily create OAI-ORE-compliant compound objects, which build on the IFLA FRBR model, and also enables them to describe and publish them to an RDF repository as Named Graphs. Using the tool, literary schol...

  14. Complications rates of non-oncologic urologic procedures in population-based data: a comparison to published series

    Directory of Open Access Journals (Sweden)

    David S. Aaronson

    2010-10-01

    Full Text Available PURPOSE: Published single-institutional case series are often performed by one or more surgeons with considerable expertise in specific procedures. The reported incidence of complications in these series may not accurately reflect community-based practice. We sought to compare complication and mortality rates following urologic procedures derived from population-based data to those of published single-institutional case series. MATERIALS AND METHODS: In-hospital mortality and complications of common urologic procedures (percutaneous nephrostomy, ureteropelvic junction obstruction repair, ureteroneocystostomy, urethral repair, artificial urethral sphincter implantation, urethral suspension, transurethral resection of the prostate, and penile prosthesis implantation) reported in the U.S. National Inpatient Sample of the Healthcare Cost and Utilization Project were identified. Rates were then compared to those of published single-institution series using statistical analysis. RESULTS: For 7 of the 8 procedures examined, there was no significant difference in rates of complications or mortality between published studies and our population-based data. However, for percutaneous nephrostomy, two published single-center series had significantly lower mortality rates (p < 0.001). The overall rate of complications in the population-based data was higher than in published single- or select multi-institutional data for percutaneous nephrostomy performed for urinary obstruction (p < 0.001). CONCLUSIONS: If one assumes that administrative data do not suffer from under-reporting of complications, then for some common urological procedures, complication rates between population-based data and published case series seem comparable. Endorsement of mandatory collection of clinical outcomes is likely the best way to appropriately counsel patients about the risks of these common urologic procedures.

  15. Review of Online Evidence-based Practice Point-of-Care Information Summary Providers: Response by the Publisher of DynaMed

    OpenAIRE

    Alper, Brian S.

    2010-01-01

    In response to Banzi et al.'s review of online evidence-based practice point-of-care resources published in the Journal of Medical Internet Research, the publisher of DynaMed clarifies its evidence-based methodology.

  16. FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM

    OpenAIRE

    VIPINKUMAR TIWARI

    2012-01-01

    Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy, and redundant data, thus leading to more accurate recognition of faces from the database. The Cuckoo algorithm is one of the recent optimization algorithms in the league of nature-based algorithms. Its optimization results are better than those of the PSO and ACO optimization algorithms. The proposal of applying the Cuckoo algorithm for feature selection in the process of face ...

  17. Personalized Web Search Using Trust Based Hubs And Authorities

    Directory of Open Access Journals (Sweden)

    Dr. Suruchi Chawla

    2014-07-01

    Full Text Available In this paper a method has been proposed to improve the precision of Personalized Web Search (PWS) using Trust-based Hubs and Authorities (HA), where Hubs are high-quality resource pages and Authorities are high-quality content pages on a specific topic, generated using Hyperlink-Induced Topic Search (HITS). Trust is used in HITS to increase its reliability in identifying good hubs and authorities for effective web search and to overcome the topic-drift problem found in HITS. An experimental study was conducted on a data set of web query sessions to test the effectiveness of PWS with Trust-based HA in the domains Academics, Entertainment, and Sport. The experimental results were compared on the basis of improvement in average precision using PWS with HA (with/without Trust). The results, verified statistically, show a significant improvement in precision using PWS with HA (with Trust).
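    The underlying HITS iteration (without the trust weighting this paper adds on top) can be sketched as a simple power iteration over hub and authority scores. The toy graph below is hypothetical, chosen only to show that a page pointed to by many hubs acquires a high authority score.

```python
def hits(graph, iterations=50):
    """Minimal HITS power iteration.
    graph: dict node -> list of nodes it links to."""
    nodes = list(graph)
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # Authority score: sum of hub scores of the pages linking to it.
        auth = {n: sum(hub[m] for m in nodes if n in graph[m]) for n in nodes}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # Hub score: sum of authority scores of the pages it links to.
        hub = {n: sum(auth[m] for m in graph[n]) for n in nodes}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth

toy = {"a": ["c"], "b": ["c"], "c": []}   # hypothetical link graph
hub, auth = hits(toy)
# "c" is pointed to by both hubs, so it gets the top authority score.
```

The trust-based variant described in the abstract would weight these sums by per-page trust values to suppress topic drift; that weighting is not shown here.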

  18. Automated Search-Based Robustness Testing for Autonomous Vehicle Software

    Directory of Open Access Journals (Sweden)

    Kevin M. Betts

    2016-01-01

    Full Text Available Autonomous systems must successfully operate in complex time-varying spatial environments even when dealing with system faults that may occur during a mission. Consequently, evaluating the robustness, or ability to operate correctly under unexpected conditions, of autonomous vehicle control software is an increasingly important issue in software testing. New methods to automatically generate test cases for robustness testing of autonomous vehicle control software in closed-loop simulation are needed. Search-based testing techniques were used to automatically generate test cases, consisting of initial conditions and fault sequences, intended to challenge the control software more than test cases generated using current methods. Two different search-based testing methods, genetic algorithms and surrogate-based optimization, were used to generate test cases for a simulated unmanned aerial vehicle attempting to fly through an entryway. The effectiveness of the search-based methods in generating challenging test cases was compared to both a truth reference (full combinatorial testing and the method most commonly used today (Monte Carlo testing. The search-based testing techniques demonstrated better performance than Monte Carlo testing for both of the test case generation performance metrics: (1) finding the single most challenging test case and (2) finding the set of fifty test cases with the highest mean degree of challenge.

  19. Developing a distributed HTML5-based search engine for geospatial resource discovery

    Science.gov (United States)

    ZHOU, N.; XIA, J.; Nebert, D.; Yang, C.; Gui, Z.; Liu, K.

    2013-12-01

    With the explosive growth of data, Geospatial Cyberinfrastructure (GCI) components are developed to manage geospatial resources, such as data discovery and data publishing. However, the efficiency of geospatial resource discovery remains challenging in that: (1) existing GCIs are usually developed for users of specific domains, so users may have to visit a number of GCIs to find appropriate resources; (2) the complexity of the decentralized network environment usually results in slow response and poor user experience; (3) users who use different browsers and devices may have very different user experiences because of the diversity of front-end platforms (e.g. Silverlight, Flash or HTML). To address these issues, we developed a distributed and HTML5-based search engine. Specifically, (1) the search engine adopts a brokering approach to retrieve geospatial metadata from various and distributed GCIs; (2) the asynchronous record retrieval mode enhances search performance and user interactivity; (3) the search engine, based on HTML5, is able to provide unified access capabilities for users with different devices (e.g. tablet and smartphone).

  20. FACE RECOGNITION BASED ON CUCKOO SEARCH ALGORITHM

    Directory of Open Access Journals (Sweden)

    VIPINKUMAR TIWARI

    2012-07-01

    Full Text Available Feature selection is an optimization technique used in face recognition technology. Feature selection removes irrelevant, noisy, and redundant data, thus leading to more accurate recognition of faces from the database. The Cuckoo algorithm is one of the recent optimization algorithms in the league of nature-based algorithms. Its optimization results are better than those of the PSO and ACO optimization algorithms. The proposal of applying the Cuckoo algorithm for feature selection in the process of face recognition is presented in this paper.

  1. A Theoretical and Empirical Evaluation of Software Component Search Engines, Semantic Search Engines and Google Search Engine in the Context of COTS-Based Development

    CERN Document Server

    Yanes, Nacim; Ghezala, Henda Hajjami Ben

    2012-01-01

    COTS-based development is a component reuse approach promising to reduce costs and risks and ensure higher quality. The growing availability of COTS components on the Web has made achieving these objectives a concrete possibility. In this multitude, a recurrent problem is the identification of the COTS components that best satisfy the user requirements. Finding an adequate COTS component implies searching among heterogeneous descriptions of the components within a broad search space. Thus, the use of search engines is required to make COTS component identification more efficient. In this paper, we investigate, theoretically and empirically, the COTS component search performance of eight software component search engines, nine semantic search engines and a conventional search engine (Google). Our empirical evaluation is conducted with respect to precision and normalized recall. We defined ten queries for the assessed search engines. These queries were carefully selected to evaluate the capability of e...

  2. Optimal fractional order PID design via Tabu Search based algorithm.

    Science.gov (United States)

    Ateş, Abdullah; Yeroglu, Celaleddin

    2016-01-01

    This paper presents an optimization method based on the Tabu Search Algorithm (TSA) to design a Fractional-Order Proportional-Integral-Derivative (FOPID) controller. All parameter computations of the FOPID employ random initial conditions, using the proposed optimization method. Illustrative examples demonstrate the performance of the proposed FOPID controller design method. PMID:26652128
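    A generic tabu search loop of the kind this abstract builds on can be sketched as follows. The toy integer problem, neighborhood, and parameter names are illustrative assumptions; the paper tunes five FOPID controller parameters, which is not reproduced here.

```python
def tabu_search(cost, start, neighbors, iterations=100, tabu_size=5):
    """Minimal tabu search: greedy neighbor moves with a short-term
    memory (the tabu list) that forbids revisiting recent states."""
    best = current = start
    tabu = [start]                        # recently visited, temporarily banned
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible neighbor
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)                   # oldest state leaves the tabu list
        if cost(current) < cost(best):
            best = current
    return best

# Toy problem: minimize (x - 7)^2 over the integers, stepping by +/- 1.
best = tabu_search(lambda x: (x - 7) ** 2, start=0,
                   neighbors=lambda x: [x - 1, x + 1])   # -> 7
```

Note that the tabu list lets the search walk past the optimum without getting stuck oscillating, which is the mechanism's whole point; `best` records the best state ever visited.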

  3. Constraint-based local search for container stowage slot planning

    DEFF Research Database (Denmark)

    Pacino, Dario; Jensen, Rune Møller; Bebbington, Tom

    2012-01-01

    -sea vessels. This paper describes the constraint-based local search algorithm used in the second phase of this approach where individual containers are assigned to slots in each bay section. The algorithm can solve this problem in an average of 0.18 seconds per bay, corresponding to a 20-second runtime for...

  4. An analysis of search-based user interaction on the semantic web

    OpenAIRE

    Hildebrand, M.; van Ossenbruggen, Jacco; Hardman, Lynda

    2007-01-01

    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of semantic search features that are used during query construction, the core search process, the presentation of the search results and user feedback on query and results. For each of these, we cons...

  5. A World Wide Web Region-Based Image Search Engine

    DEFF Research Database (Denmark)

    Kompatsiaris, Ioannis; Triantafyllou, Evangelia; Strintzis, Michael G.

    2001-01-01

    In this paper the development of an intelligent image content-based search engine for the World Wide Web is presented. This system will offer a new form of media representation and access of content available in WWW. Information Web Crawlers continuously traverse the Internet and collect images....../region boundary information. These features along with additional information such as the URL location and the date of index procedure are stored in a database. The user can access and search this indexed content through the Web with an advanced and user friendly interface. The output of the system is a set of...

  6. Computer-Assisted Search Of Large Textual Data Bases

    Science.gov (United States)

    Driscoll, James R.

    1995-01-01

    "QA" denotes high-speed computer system for searching diverse collections of documents including (but not limited to) technical reference manuals, legal documents, medical documents, news releases, and patents. Incorporates previously available and emerging information-retrieval technology to help user intelligently and rapidly locate information found in large textual data bases. Technology includes provision for inquiries in natural language; statistical ranking of retrieved information; artificial-intelligence implementation of semantics, in which "surface level" knowledge found in text used to improve ranking of retrieved information; and relevance feedback, in which user's judgements of relevance of some retrieved documents used automatically to modify search for further information.

  7. An Efficient Annotation of Search Results Based on Feature

    OpenAIRE

    A. Jebha; R. Tamilselvi

    2015-01-01

    With the increased number of web databases, a major part of the deep Web now consists of structured databases. In several search engines, the encoded data in the result pages returned from the web often comes from structured databases, which are referred to as Web databases (WDB). A result page returned from a WDB has multiple search result records (SRR). Data units obtained from these databases are encoded into the dynamic result pages for manual processing. In order to make these units machine-processable...

  8. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Directory of Open Access Journals (Sweden)

    Diego Oliva

    2013-01-01

    Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whose quality is evaluated using the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
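    The improvise / pitch-adjust / replace-worst mechanism of HSA can be sketched on a toy one-dimensional objective. In the paper the objective would be Otsu's between-class variance or Kapur's entropy evaluated on the image histogram; the parameter names (hms, hmcr, par) follow common HSA conventions and are not taken from the paper.

```python
import random

def harmony_search(objective, lo, hi, hms=10, hmcr=0.9, par=0.3,
                   iterations=500, seed=1):
    """Minimal harmony search maximizing `objective` over [lo, hi]."""
    rng = random.Random(seed)
    memory = [rng.uniform(lo, hi) for _ in range(hms)]   # harmony memory
    for _ in range(iterations):
        if rng.random() < hmcr:                 # pick a stored harmony...
            new = rng.choice(memory)
            if rng.random() < par:              # ...and pitch-adjust it
                new = min(hi, max(lo, new + rng.uniform(-0.1, 0.1)))
        else:                                   # ...or improvise a fresh one
            new = rng.uniform(lo, hi)
        worst = min(memory, key=objective)
        if objective(new) > objective(worst):   # replace the worst harmony
            memory[memory.index(worst)] = new
    return max(memory, key=objective)

# Toy objective with a single peak at x = 2 (stands in for Otsu/Kapur).
best = harmony_search(lambda x: -(x - 2.0) ** 2, lo=0.0, hi=5.0)
```

For multilevel thresholding, each harmony would be a vector of threshold values over the histogram range rather than a single scalar.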

  9. A Vertical Search Engine – Based On Domain Classifier

    Directory of Open Access Journals (Sweden)

    Rajashree Shettar

    2008-11-01

    Full Text Available The World Wide Web is growing exponentially, and the dynamic, unstructured nature of the web makes it difficult to locate useful resources. Web search engines such as Google and AltaVista provide huge amounts of information, much of which might not be relevant to the user's query. In this paper, we build a vertical search engine which takes a seed URL and classifies the crawled URLs into the Medical or Finance domains. The filter component of the vertical search engine classifies the web pages downloaded by the crawler into the appropriate domain. The crawled web pages are checked for relevance based on the domain chosen and indexed. External users query the database with keywords to search; the domain classifiers classify the URLs into the relevant domain, and results are presented in descending order according to rank number. This paper focuses on two issues: page relevance to a particular domain and page contents for the search keywords, to improve the quality of the URLs listed, thereby avoiding irrelevant or low-quality ones.

  10. Dynamic Clinical Data Mining: Search Engine-Based Decision Support

    OpenAIRE

    Celi, Leo Anthony; Zimolzak, Andrew J; Stone, David J

    2014-01-01

    The research world is undergoing a transformation into one in which data, on massive levels, is freely shared. In the clinical world, the capture of data on a consistent basis has only recently begun. We propose an operational vision for a digitally based care system that incorporates data-based clinical decision making. The system would aggregate individual patient electronic medical data in the course of care; query a universal, de-identified clinical database using modified search engine t...

  11. Gradient-Based Cuckoo Search for Global Optimization

    OpenAIRE

    Fateen, Seif-Eddeen K.; Adrián Bonilla-Petriciolet

    2014-01-01

    One of the major advantages of stochastic global optimization methods is the lack of the need of the gradient of the objective function. However, in some cases, this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we proposed a gradient-based modification to the cuckoo search algorithm, which is a nature-inspired swarm-based stochastic global opt...

  12. Why Publish?

    Science.gov (United States)

    Kaye, Sharon

    2008-01-01

    In humanities, there does not seem to be any good reason to privilege the academic journal over other venues. If the goal of humanities publishing is to spread new ideas, then it seems that creating a popular Internet blog would be the better choice. However, the goal of humanities publishing is not just to spread new ideas, but to spread "good"…

  13. Gradient-Based Cuckoo Search for Global Optimization

    Directory of Open Access Journals (Sweden)

    Seif-Eddeen K. Fateen

    2014-01-01

    Full Text Available One of the major advantages of stochastic global optimization methods is that they do not need the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we proposed a gradient-based modification to the cuckoo search algorithm, which is a nature-inspired swarm-based stochastic global optimization method. We introduced the gradient-based cuckoo search (GBCS) and evaluated its performance vis-à-vis the original algorithm in solving twenty-four benchmark functions. The use of GBCS improved the reliability and effectiveness of the algorithm in all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
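    As a rough illustration of the idea behind GBCS, the toy sketch below mixes a random cuckoo-style jump with a small gradient-descent nudge and re-seeds the worst nests each round. The update rule, parameter values and `gbcs_minimize` interface are assumptions made for illustration; the paper's exact formulation is not given in the abstract.

```python
import random

def gbcs_minimize(f, grad, n_nests=15, iters=200, alpha=0.01, seed=0):
    """Toy 1-D gradient-assisted cuckoo search (illustrative only).

    Each new candidate combines a random jump (exploration) with a
    gradient-descent nudge (exploitation); the worst fraction of nests
    is abandoned and re-seeded, as in standard cuckoo search.
    """
    rng = random.Random(seed)
    nests = [rng.uniform(-10, 10) for _ in range(n_nests)]
    for _ in range(iters):
        for i, x in enumerate(nests):
            step = rng.gauss(0, 1)              # random exploration
            cand = x + step - alpha * grad(x)   # gradient-based correction
            if f(cand) < f(x):                  # greedy acceptance
                nests[i] = cand
        # abandon the worst quarter of nests (fraction pa in cuckoo search)
        nests.sort(key=f)
        for i in range(3 * n_nests // 4, n_nests):
            nests[i] = rng.uniform(-10, 10)
    return min(nests, key=f)
```

On a smooth function the gradient nudge mainly sharpens the final precision, which matches the improvement the authors report.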

  14. The Value of Interdisciplinarity: A Study Based on the Design of Internet Search Engines.

    Science.gov (United States)

    Herring, Susan Davis

    1999-01-01

    Examines whether search engine design shows a pattern of interdisciplinarity focusing on two disciplines: computer science and library/information science. A citation analysis measured levels of interdisciplinary research and publishing in search engine design and development. Results show a higher level of interdisciplinarity among library and…

  15. Context Disambiguation Based Semantic Web Search for Effective Information Retrieval

    Directory of Open Access Journals (Sweden)

    M. Barathi

    2011-01-01

    Full Text Available Problem statement: Search queries are short and ambiguous and are insufficient for specifying precise user needs. To overcome this problem, some search engines suggest terms that are semantically related to the submitted queries, so that users can choose from the suggestions based on their information needs. Approach: In this study, we introduce an effective approach that captures the user’s specific context by using the WordNet based semantic relatedness measure and the measures of joint keyword occurrences in the web page. Results: The context of the user query is identified and formulated. The user query is enriched to get more relevant web pages that the user needs. Conclusion: Experimental results show that our approach has better precision and recall than the existing methods.

  16. User-Based Information Search across Multiple Social Media

    OpenAIRE

    Gåre, Marte Lise

    2015-01-01

    Most of today's Internet users are registered with one or more social media applications. Because so many are registered with multiple applications, it has become difficult to locate friends, former colleagues, peers and acquaintances. Reasons for this include private profiles, name collisions, multiple usernames, and missing profile attributes and profile pictures. The system designed and implemented in this thesis enables automatic user-based information search across multiple social media without relyi...

  17. Approximation Error Based Suitable Domain Search for Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Vijayshri Chaurasia

    2010-02-01

    Full Text Available Fractal image compression is a very advantageous technique in the field of image compression. The coding phase of this technique is very time consuming because of the computational expense of the suitable-domain search. In this paper we propose an approximation-error-based speed-up technique that uses feature extraction. The proposed scheme reduces the number of range-domain comparisons by a significant amount and gives improved time performance.
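    The speed-up idea, pre-filtering domain blocks by a cheap feature before any expensive range-domain comparison, can be sketched as follows. The mean-intensity feature and the tolerance parameter are illustrative assumptions; the paper's actual features and error bound are not described in the abstract.

```python
def candidate_domains(range_block, domain_blocks, tol=10.0):
    """Feature-based pre-filter for fractal coding (illustrative sketch):
    keep only domain blocks whose mean intensity lies within `tol` of the
    range block's mean, so the expensive full range-domain comparison is
    run on far fewer candidates."""
    def mean(block):
        return sum(block) / len(block)
    m = mean(range_block)
    return [d for d in domain_blocks if abs(mean(d) - m) <= tol]
```

In a full encoder the surviving candidates would then be compared with the usual affine-transform error metric; the pre-filter only prunes, it never changes which match is best among the survivors.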

  18. Ground-Based Photometric Searches for Transiting Planets

    OpenAIRE

    Mazeh, Tsevi

    2009-01-01

    This paper reviews the basic technical characteristics of the ground-based photometric searches for transiting planets, and discusses a possible observational selection effect. I suggest that additional photometric observations of the already observed fields might discover new transiting planets with periods around 4-6 days. The set of known transiting planets support the intriguing correlation between the planetary mass and the orbital period suggested already in 2005.

  19. A new classification algorithm based on RGH-tree search

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, we put forward a new classification algorithm based on RGH-tree search and perform a classification analysis and comparison study. This algorithm saves computing resources and increases classification efficiency. The experiments show that this algorithm achieves better results in dealing with three-dimensional multi-kind data. We find that the algorithm has better generalization ability for small training sets and large test sets.

  20. The effects of mulching on soil erosion by water. A review based on published data

    Science.gov (United States)

    Prosdocimi, Massimo; Jordán, Antonio; Tarolli, Paolo; Cerdà, Artemi

    2016-04-01

    lands, post-fire affected areas and anthropic sites. Data published in literature have been collected. The results proved the beneficial effects of mulching on soil erosion by water in all the contexts considered, with reduction rates in average sediment concentration, soil loss and runoff volume that, in some cases, exceeded 90%. Furthermore, in most cases, mulching confirmed to be a relatively inexpensive soil conservation practice that allowed to reduce soil erodibility and surface immediately after its application. References Cerdà, A., 1994. The response of abandoned terraces to simulated rain, in: Rickson, R.J., (Ed.), Conserving Soil Resources: European Perspective, CAB International, Wallingford, pp. 44-55. Cerdà, A., Flanagan, D.C., Le Bissonnais, Y., Boardman, J., 2009. Soil erosion and agriculture. Soil & Tillage Research 106, 107-108. Cerdan, O., Govers, G., Le Bissonnais, Y., Van Oost, K., Poesen, J., Saby, N., Gobin, A., Vacca, A., Quinton, J., Auerwald, K., Klik, A., Kwaad, F.J.P.M., Raclot, D., Ionita, I., Rejman, J., Rousseva, S., Muxart, T., Roxo, M.J., Dostal, T., 2010. Rates and spatial variations of soil erosion in Europe: A study based on erosion plot data. Geomorphology 122, 167-177. García-Orenes, F., Roldán A., Mataix-Solera, J, Cerdà, A., Campoy M, Arcenegui, V., Caravaca F. 2009. Soil structural stability and erosion rates influenced by agricultural management practices in a semi-arid Mediterranean agro-ecosystem. Soil Use and Management 28: 571-579. Hayes, S.A., McLaughlin, R.A., Osmond, D.L., 2005. Polyacrylamide use for erosion and turbidity control on construction sites. Journal of soil and water conservation 60(4):193-199. Jordán, A., Zavala, L.M., Muñoz-Rojas, M., 2011. Mulching, effects on soil physical properties. In: Gliński, J., Horabik, J., Lipiec, J. (Eds.), Encyclopedia of Agrophysics. Springer, Dordrecht, pp. 492-496. Montgomery, D.R., 2007. Soil erosion and agricultural sustainability. PNAS 104, 13268-13272. Prats, S

  2. Self-Adapting Routing Overlay Network for Frequently Changing Application Traffic in Content-Based Publish/Subscribe System

    Directory of Open Access Journals (Sweden)

    Meng Chi

    2014-01-01

    Full Text Available In the large-scale distributed simulation area, the topology of the overlay network cannot always rapidly adapt to frequently changing application traffic to reduce the overall traffic cost. In this paper, we propose a self-adapting routing strategy for frequently changing application traffic in content-based publish/subscribe system. The strategy firstly trains the traffic information and then uses this training information to predict the application traffic in the future. Finally, the strategy reconfigures the topology of the overlay network based on this predicting information to reduce the overall traffic cost. A predicting path is also introduced in this paper to reduce the reconfiguration numbers in the process of the reconfigurations. Compared to other strategies, the experimental results show that the strategy proposed in this paper could reduce the overall traffic cost of the publish/subscribe system in less reconfigurations.

  3. Where Is It? How Deaf Adolescents Complete Fact-Based Internet Search Tasks

    Science.gov (United States)

    Smith, Chad E.

    2007-01-01

    An exploratory study was designed to describe Internet search behaviors of deaf adolescents who used Internet search engines to complete fact-based search tasks. The study examined search behaviors of deaf high school students such as query formation, query modification, Web site identification, and Web site selection. Consisting of two fact-based…

  4. Base input for large break LOCA analysis of commercial PWR with published version of THYDE-P1

    International Nuclear Information System (INIS)

    This report describes input data to be used with the THYDE-P1 interim version SV02L03, which was published through the NEA DATA BANK in April 1982, and its calculated results. The input data consist of three input data sets: one for the steady state and the following transients, and the other two for restarting; they are used successively for a through-calculation of a large break loss-of-coolant accident (LOCA) of a 1,100 MWe commercial pressurized water reactor (PWR) with ''best estimate'' (BE) options. The major purposes in setting up the input data are not only to provide users with sample data upon publication of THYDE-P1 but also to demonstrate the ability of the published version of THYDE-P1, without any modification, to perform a through-calculation of a large break LOCA. The results from the present calculation will also be widely utilized as a benchmark in performing sensitivity calculations and further code modifications. In this sense, the input can be called a base input for the published version of THYDE-P1. This report also contains the results of several sensitivity calculations, which show the high capability of this version of THYDE-P1 to analyse large break LOCAs. (author)

  5. SHOP: scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Linusson, Anna; Zamora, Ismael

    2007-01-01

    A new GRID-based method for scaffold hopping (SHOP) is presented. In a fully automatic manner, scaffolds were identified in a database based on three types of 3D-descriptors. SHOP's ability to recover scaffolds was assessed and validated by searching a database spiked with fragments of known...... scaffolds were in the 31 top-ranked scaffolds. SHOP also identified new scaffolds with substantially different chemotypes from the queries. Docking analysis indicated that the new scaffolds would have similar binding modes to those of the respective query scaffolds observed in X-ray structures. The...

  6. Complete Boolean Satisfiability Solving Algorithms Based on Local Search

    Institute of Scientific and Technical Information of China (English)

    Wen-Sheng Guo; Guo-Wu Yang; William N.N.Hung; Xiaoyu Song

    2013-01-01

    Boolean satisfiability (SAT) is a well-known problem in computer science, artificial intelligence, and operations research. This paper focuses on the satisfiability problem of the Model RB structure, which is similar to graph coloring problems and others. We propose a translation method and three effective complete SAT solving algorithms based on the characterization of the Model RB structure. We translate clauses into a graph with exclusive sets and relative sets. In order to reduce search depth, we determine search order using vertex weights and cliques in the graph. The results show that our algorithms are much more effective than the best SAT solvers on numerous Model RB benchmarks, especially on large benchmark instances.

  7. Self-Adapting Routing Overlay Network for Frequently Changing Application Traffic in Content-Based Publish/Subscribe System

    OpenAIRE

    Meng Chi; Shufen Liu; Changhong Hu

    2014-01-01

    In the large-scale distributed simulation area, the topology of the overlay network cannot always rapidly adapt to frequently changing application traffic to reduce the overall traffic cost. In this paper, we propose a self-adapting routing strategy for frequently changing application traffic in content-based publish/subscribe system. The strategy firstly trains the traffic information and then uses this training information to predict the application traffic in the future. Finally, the strat...

  8. Intelligent Agent based Flight Search and Booking System

    Directory of Open Access Journals (Sweden)

    Floyd Garvey

    2012-07-01

    Full Text Available The word globalization is widely used, and there are several definitions that may fit this one word. However, the reality remains that globalization has impacted, and is impacting, each individual on this planet. It is defined as the greater movement of people, goods, capital and ideas due to increased economic integration, which in turn is propelled by increased trade and investment. It is like moving towards living in a borderless world. With the reality of globalization, the travel industry has benefited significantly; it could equally be said that globalization is benefiting from the flight industry. Regardless of the way one looks at it, more people are traveling each day and exploring places that were once only distant points on a map. Equally, technology has been growing at an increasingly rapid pace and is being used by people all over the world. With the combination of globalization, the increase in technology and the frequency of travel, there is a need for an intelligent application capable of meeting the needs of travelers who use mobile phones. It is a solution that fits perfectly into a user's busy lifestyle, offering ease of use and enough intelligence to make the user's experience worthwhile. Having recognized this need, the Agent-based Mobile Airline Search and Booking System has been developed, built to work on Android and to perform airline search and booking using biometrics. The system also possesses agent learning capability, performing airline searches based on previous search patterns. The development was carried out using the JADE-LEAP agent development kit on Android.

  9. WISE: a content-based Web image search engine

    Science.gov (United States)

    Qiu, Guoping; Palmer, R. D.

    2000-12-01

    This paper describes the development of a prototype of a Web Image Search Engine (WISE), which allows users to search for images on the WWW by image examples, in a similar fashion to current search engines that allow users to find related Web pages using text matching on keywords. The system takes an image specified by the user and finds similar images available on the WWW by comparing the image contents using low-level image features. The current version of the WISE system consists of a graphical user interface (GUI), an autonomous Web agent, an image comparison program and a query processing program. The user specifies the URL of a target image and the URL of the starting Web page from which the program will 'crawl' the Web, finding images along the way and retrieving those satisfying certain constraints. The program then computes the visual features of the retrieved images and performs content-based comparison with the target image. The results of the comparison are then sorted according to a certain similarity measure and, along with thumbnails and information associated with the images, such as the URLs and image size, written to an HTML page. The resultant page is stored on a Web server and is displayed in the user's Web browser once the search process is complete. A unique feature of the current version of WISE is its image content comparison algorithm. It is based on the comparison of image palettes and is therefore very efficient in retrieving one of the two universally accepted image formats on the Web, 'gif.' In gif images, the color palette is contained in the header, and therefore it is only necessary to retrieve the header information rather than the whole image, thus making it very efficient.
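    The palette-based comparison can be illustrated with a simple overlap measure between two GIF color palettes. The Jaccard measure and the `palette_similarity` name below are assumptions made for illustration; the abstract does not specify how WISE actually scores palette similarity.

```python
def palette_similarity(p1, p2):
    """Hypothetical palette-overlap measure in the spirit of WISE:
    the Jaccard overlap of the sets of colors (RGB tuples) in two
    image palettes. Only the palettes, readable from a GIF header,
    are needed, never the full image data."""
    s1, s2 = set(p1), set(p2)
    if not s1 or not s2:
        return 0.0
    return len(s1 & s2) / len(s1 | s2)
```

Because a GIF palette holds at most 256 colors, this comparison is constant-time per image pair, which is what makes the header-only approach efficient.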

  10. Eosinophilic pustular folliculitis: A published work-based comprehensive analysis of therapeutic responsiveness.

    Science.gov (United States)

    Nomura, Takashi; Katoh, Mayumi; Yamamoto, Yosuke; Miyachi, Yoshiki; Kabashima, Kenji

    2016-08-01

    Eosinophilic pustular folliculitis (EPF) is a non-infectious inflammatory dermatosis of unknown etiology that principally affects the hair follicles. There are three variants of EPF: (i) classic EPF; (ii) immunosuppression-associated EPF, which is subdivided into HIV-associated (IS/HIV) and non-HIV-associated (IS/non-HIV); and (iii) infancy-associated EPF. Oral indomethacin is efficacious, especially for classic EPF. No comprehensive information on the efficacies of other medical management regimens is currently available. In this study, we surveyed regimens for EPF that were described in articles published between 1965 and 2013. In total, there were 1171 regimens: 874, 137, 45 and 115 of which were applied to classic, IS/HIV, IS/non-HIV and infancy-associated EPF, respectively. Classic EPF was preferentially treated with oral indomethacin, with an efficacy of 84%, whereas topical steroids were preferred for IS/HIV, IS/non-HIV and infancy-associated EPF, with efficacies of 47%, 73% and 82%, respectively. Other regimens, such as oral Sairei-to (a Chinese-Japanese herbal medicine), diaminodiphenyl sulfone, cyclosporin and topical tacrolimus, were effective for indomethacin-resistant cases. Although the preclusion of direct comparison among cases was one limitation, this study provides a dataset that is applicable to the construction of therapeutic algorithms for EPF. PMID:26875627

  11. StudySearch: a web-based application for posting and searching clinical research studies.

    Science.gov (United States)

    Gonsenhauser, Blair; Hallarn, Rose; Carpenter, Daniel; Para, Michael F; Reider, Carson R

    2016-03-01

    Participant accrual into research studies is critical to advancing clinical and translational research to clinical care. Without sufficient recruitment, the purpose of any research study cannot be realized; yet, low recruitment and enrollment of participants persist. StudySearch is a web-based application designed to provide an easily readable, publicly accessible, and searchable listing of IRB-approved protocols that are accruing study participants. The Regulatory, Recruitment and Biomedical Informatics Cores of the Center for Clinical and Translational Science (CCTS) at The Ohio State University developed this research study posting platform. Postings include basic descriptive information: study title, purpose of the study, eligibility criteria and study personnel contact information. Language concerning benefits and/or inducements is not included; therefore, while IRB approval for a study to be listed on StudySearch is required, IRB approval of the posted language is not. Studies are listed by one of two methods, one automated and one manual: (1) studies registered on ClinicalTrials.gov are automatically downloaded once a month; or (2) studies are submitted directly by researchers to the CCTS Regulatory Core staff. In either case, final language is the result of an iterative process between researchers and CCTS staff. Deployed in January 2011 at OSU, this application has grown to approximately 200 studies currently posted and 1500 unique visitors per month. Locally, StudySearch is part of the CCTS recruitment toolkit. Features continue to be modified to better accommodate user behaviors. Nationally, this open source application is available for use. PMID:26912012

  12. Web Search Result Clustering based on Cuckoo Search and Consensus Clustering

    OpenAIRE

    Alam, Mansaf; Sadaf, Kishwar

    2015-01-01

    Clustering of web search result documents has emerged as a promising tool for improving the retrieval performance of an Information Retrieval (IR) system. Search results are often plagued by problems like synonymy, polysemy, high volume, etc. Besides resolving these problems, clustering also makes it easy for the user to locate his/her desired information. In this paper, a method called WSRDC-CSCC is introduced to cluster web search results using the cuckoo search meta-heuristic method and Consensus...

  13. Yahoo!Search and Web API Utilized Mashup based e-Learning Content Search Engine for Mobile Learning

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2015-06-01

    Full Text Available A mashup-based content search engine for mobile devices is proposed. Mashup technology is defined as a search engine that combines plural different APIs. A mashup not only uses plural APIs, but also has the following specific features: (1) it enables classification of the contents in concern by using web 2.0, (2) it may use APIs from different sites, (3) it allows information retrieval from both the client and server sides, (4) it may search contents as an arbitrarily structured hybrid content, i.e., a mixed content formed from individual contents from different sites, and (5) it enables the use of REST, RSS, Atom, etc., which are formed from XML conversions. The mashup should be a flexible search engine for any content-retrieval purpose. The proposed search system allows 3D space display of search menus with these peculiarities on Android devices. The proposed search system featuring Yahoo!search BOSS and Web API is also applied to e-learning content retrieval. It is confirmed that the system can be used to search a variety of e-learning contents in concern efficiently.

  14. Visual tracking method based on cuckoo search algorithm

    Science.gov (United States)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.

  15. Similarity preserving snippet-based visualization of web search results.

    Science.gov (United States)

    Gomez-Nieto, Erick; San Roman, Frizzi; Pagliosa, Paulo; Casaca, Wallace; Helou, Elias S; de Oliveira, Maria Cristina F; Nonato, Luis Gustavo

    2014-03-01

    Internet users are very familiar with the results of a search query displayed as a ranked list of snippets. Each textual snippet shows a content summary of the referred document (or webpage) and a link to it. This display has many advantages, for example, it affords easy navigation and is straightforward to interpret. Nonetheless, any user of search engines could possibly report some experience of disappointment with this metaphor. Indeed, it has limitations in particular situations, as it fails to provide an overview of the document collection retrieved. Moreover, depending on the nature of the query--for example, it may be too general, or ambiguous, or ill expressed--the desired information may be poorly ranked, or results may contemplate varied topics. Several search tasks would be easier if users were shown an overview of the returned documents, organized so as to reflect how related they are, content wise. We propose a visualization technique to display the results of web queries aimed at overcoming such limitations. It combines the neighborhood preservation capability of multidimensional projections with the familiar snippet-based representation by employing a multidimensional projection to derive two-dimensional layouts of the query search results that preserve text similarity relations, or neighborhoods. Similarity is computed by applying the cosine similarity over a "bag-of-words" vector representation of the collection built from the snippets. If the snippets are displayed directly according to the derived layout, they will overlap considerably, producing a poor visualization. We overcome this problem by defining an energy functional that considers both the overlapping among snippets and the preservation of the neighborhood structure as given in the projected layout. Minimizing this energy functional provides a neighborhood preserving two-dimensional arrangement of the textual snippets with minimum overlap. The resulting visualization conveys both a global
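    The similarity computation the authors describe, cosine similarity over a bag-of-words representation of the snippets, can be sketched as follows; the whitespace tokenization is an assumption, as the abstract does not give the exact preprocessing.

```python
import math
from collections import Counter

def cosine_similarity(snippet_a: str, snippet_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two snippets.
    Tokenization is a simple lowercase whitespace split, a placeholder
    for whatever preprocessing the authors actually use."""
    a = Counter(snippet_a.lower().split())
    b = Counter(snippet_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

These pairwise similarities are what the multidimensional projection then tries to preserve as 2-D neighborhoods.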

  16. Scanned Hardcopy Maps, legato data base; public works, Published in 2006, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Scanned Hardcopy Maps dataset, was produced all or in part from Hardcopy Maps information as of 2006. It is described as 'legato data base; public works'. Data...

  17. Documenting Norwegian Scholarly Publishing

    OpenAIRE

    R.W. Vaagan

    2005-01-01

    From 2005-2006, scholarly publishing, including e-publishing, becomes one of several criteria used by The Ministry of Education and Science in financing research in Norwegian universities and colleges. Based on qualitative methodology and critical case sampling of recent Norwegian policy documents and reports, combined with typical case sampling of articles on e-publishing 2000-2005, especially from D-Lib magazine (Patton, 2002; Hawkins, 2001), the article discusses trends in Norwegian schola...

  18. Performance Oriented Query Processing In GEO Based Location Search Engines

    CERN Document Server

    Umamaheswari, M

    2010-01-01

    Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable geographic search engines. Query processing is a major bottleneck in standard web search engines, and the main reason for the thousands of machines used by the major engines. Geographic search engine query processing is different in that it requires a combination of text and spatial data processing techniques. We propose several algorithms for efficient query processing in geographic search engines, integrate them into an existing web search query processor, and evaluate them on large sets of real data and query traces.

  19. Web Image Retrieval Search Engine based on Semantically Shared Annotation

    Directory of Open Access Journals (Sweden)

    Alaa Riad

    2012-03-01

    Full Text Available This paper presents a new majority voting technique that combines the two basic modalities of Web images, textual and visual features, in a re-annotation and search based framework. The proposed framework considers each web page as a voter on the relatedness of a keyword to the web image. The proposed approach is not a pure combination of image low-level features and textual features; it also takes into consideration the semantic meaning of each keyword, which is expected to enhance the retrieval accuracy. The proposed approach not only enhances the retrieval accuracy of web images but is also able to annotate unlabeled images.
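    The core majority-voting step, each embedding web page voting for keywords that describe the image, might be sketched as below. The vote threshold and function name are illustrative; the paper's full framework also weighs visual features and keyword semantics, which this sketch omits.

```python
from collections import Counter

def vote_keywords(pages_keywords, min_votes=2):
    """Majority-voting sketch: every web page that embeds the image
    votes once for each keyword it associates with it; keywords that
    reach `min_votes` become (re-)annotations of the image."""
    votes = Counter(k for page in pages_keywords for k in set(page))
    return sorted(k for k, v in votes.items() if v >= min_votes)
```

Applied to an unlabeled image, the same voting over the pages that embed it yields candidate annotations, which is how such a scheme can label images with no text of their own.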

  20. Web Image Retrieval Search Engine based on Semantically Shared Annotation

    OpenAIRE

    Alaa Riad; Hamdy Kamal Elminir; Sameh Abd-Elghany

    2012-01-01

    This paper presents a new majority voting technique that combines the two basic modalities of Web images textual and visual features of image in a re-annotation and search based framework. The proposed framework considers each web page as a voter to vote the relatedness of keyword to the web image, the proposed approach is not only pure combination between image low level feature and textual feature but it take into consideration the semantic meaning of each keyword that expected to enhance t...

  1. Parallel Harmony Search Based Distributed Energy Resource Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ceylan, Oguzhan [ORNL; Liu, Guodong [ORNL; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout a day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
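    For readers unfamiliar with the heuristic, a minimal single-variable harmony search looks roughly as follows. This is the standard textbook algorithm with assumed parameter values, not the paper's parallel, three-phase power-system formulation.

```python
import random

def harmony_search(f, lower, upper, hms=10, iters=500,
                   hmcr=0.9, par=0.3, bw=0.1, seed=1):
    """Minimal 1-D harmony search sketch.
    hms: harmony memory size, hmcr: memory considering rate,
    par: pitch adjusting rate, bw: pitch bandwidth."""
    rng = random.Random(seed)
    memory = [rng.uniform(lower, upper) for _ in range(hms)]
    for _ in range(iters):
        if rng.random() < hmcr:
            x = rng.choice(memory)          # pick from harmony memory
            if rng.random() < par:
                x += rng.uniform(-bw, bw)   # pitch adjustment
        else:
            x = rng.uniform(lower, upper)   # random improvisation
        x = max(lower, min(upper, x))       # keep within bounds
        worst = max(memory, key=f)
        if f(x) < f(worst):
            memory[memory.index(worst)] = x  # replace worst harmony
    return min(memory, key=f)
```

In the paper's setting, `f` would be the voltage-deviation objective evaluated by a power-flow run, which is the expensive step that motivates parallelizing the evaluations.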

  2. The Critical Role of Journal Selection in Scholarly Publishing: A Search for Journal Options in Language-related Research Areas and Disciplines

    OpenAIRE

    2012-01-01

    Problem statement: With the globalization in academia, pressures on academics to publish internationally have been increasing all over the world. However, participating in global scientific communication through publishing in well-regarded international journals is a very challenging and daunting task particularly for nonnative speaker (NNS) scholars. Recent research has pointed out both linguistic and nonlinguistic factors behind the challenges facing NNS scholars in their attempts to publis...

  3. Performance Oriented Query Processing In GEO Based Location Search Engines

    OpenAIRE

    Umamaheswari, M.; S. Sivasubramanian

    2010-01-01

    Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable g...

  4. On optimizing distance-based similarity search for biological databases.

    Science.gov (United States)

    Mao, Rui; Xu, Weijia; Ramakrishnan, Smriti; Nuckolls, Glen; Miranker, Daniel P

    2005-01-01

    Similarity search leveraging distance-based index structures is increasingly being used for both multimedia and biological database applications. We consider distance-based indexing for three important biological data types: protein k-mers with the metric PAM model, DNA k-mers with Hamming distance, and peptide fragmentation spectra with a pseudo-metric derived from cosine distance. To date, the primary driver of this research has been multimedia applications, where similarity functions are often Euclidean norms on high-dimensional feature vectors. We develop results showing that the character of these biological workloads is different from multimedia workloads. In particular, they are not intrinsically very high dimensional and deserve different optimization heuristics. Based on MVP-trees, we develop a pivot selection heuristic that seeks centers and show that it outperforms the most widely used corner-seeking heuristic. Similarly, we develop a data partitioning approach sensitive to the actual data distribution in lieu of median splits. PMID:16447992
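
    The pruning that distance-based indexes such as MVP-trees derive from the triangle inequality can be illustrated with Hamming distance on DNA k-mers (a one-pivot toy sketch, not the paper's tree structure; in a real index the pivot distances would be computed once at build time):

    ```python
    def hamming(a, b):
        """Hamming distance between two equal-length k-mers."""
        return sum(c1 != c2 for c1, c2 in zip(a, b))

    def pivot_search(db, query, radius, pivot):
        """Range search with one pivot: by the triangle inequality, any x with
        |d(x, pivot) - d(query, pivot)| > radius cannot be within radius of query."""
        dq = hamming(query, pivot)
        pivot_dist = {x: hamming(x, pivot) for x in db}    # index-time precomputation
        hits = []
        for x in db:
            if abs(pivot_dist[x] - dq) > radius:           # pruned without a distance call
                continue
            if hamming(x, query) <= radius:
                hits.append(x)
        return hits

    db = ["ACGTACGT", "ACGTACGA", "TTTTACGT", "ACGGACGT"]
    print(pivot_search(db, "ACGTACGT", 1, "AAAAAAAA"))
    # ['ACGTACGT', 'ACGTACGA', 'ACGGACGT']
    ```

    Pivot (center) selection determines how often the pruning test fires, which is the heuristic the paper optimizes.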

  5. Dynamic clinical data mining: search engine-based decision support.

    Science.gov (United States)

    Celi, Leo Anthony; Zimolzak, Andrew J; Stone, David J

    2014-01-01

    The research world is undergoing a transformation into one in which data, on massive levels, is freely shared. In the clinical world, the capture of data on a consistent basis has only recently begun. We propose an operational vision for a digitally based care system that incorporates data-based clinical decision making. The system would aggregate individual patient electronic medical data in the course of care; query a universal, de-identified clinical database using modified search engine technology in real time; identify prior cases of sufficient similarity as to be instructive to the case at hand; and populate the individual patient's electronic medical record with pertinent decision support material such as suggested interventions and prognosis, based on prior outcomes. Every individual's course, including subsequent outcomes, would then further populate the population database to create a feedback loop to benefit the care of future patients. PMID:25600664

  6. Graph-based identification of cancer signaling pathways from published gene expression signatures using PubLiME.

    Science.gov (United States)

    Finocchiaro, Giacomo; Mancuso, Francesco Mattia; Cittaro, Davide; Muller, Heiko

    2007-01-01

    Gene expression technology has become a routine application in many laboratories and has provided large amounts of gene expression signatures that have been identified in a variety of cancer types. Interpretation of gene expression signatures would profit from the availability of a procedure capable of assigning differentially regulated genes or entire gene signatures to defined cancer signaling pathways. Here we describe a graph-based approach that identifies cancer signaling pathways from published gene expression signatures. Published gene expression signatures are collected in a database (PubLiME: Published Lists of Microarray Experiments) enabled for cross-platform gene annotation. Significant co-occurrence modules composed of up to 10 genes in different gene expression signatures are identified. Significantly co-occurring genes are linked by an edge in an undirected graph. Edge-betweenness and k-clique clustering combined with graph modularity as a quality measure are used to identify communities in the resulting graph. The identified communities consist of cell cycle, apoptosis, phosphorylation cascade, extracellular matrix, interferon and immune response regulators as well as communities of unknown function. The genes constituting different communities are characterized by common genomic features and strongly enriched cis-regulatory modules in their upstream regulatory regions that are consistent with pathway assignment of those genes. PMID:17389643
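
    The co-occurrence step can be sketched in a few lines: count how often gene pairs appear together across signatures, keep sufficiently frequent pairs as edges, and extract groups from the resulting graph. Here plain connected components stand in for the paper's edge-betweenness and k-clique clustering, and the gene lists are toy data:

    ```python
    from itertools import combinations
    from collections import Counter, defaultdict

    signatures = [  # toy published gene lists
        {"CCNB1", "CDK1", "PLK1"},
        {"CCNB1", "CDK1", "BAX"},
        {"BAX", "CASP3", "CASP9"},
        {"CASP3", "CASP9", "CDK1"},
    ]

    # count how often each gene pair co-occurs in a signature
    pairs = Counter(frozenset(p) for sig in signatures
                    for p in combinations(sorted(sig), 2))

    # link genes whose co-occurrence meets a threshold
    graph = defaultdict(set)
    for pair, n in pairs.items():
        if n >= 2:
            a, b = pair
            graph[a].add(b)
            graph[b].add(a)

    def components(g):
        """Connected components: a crude stand-in for community detection."""
        seen, comps = set(), []
        for node in g:
            if node in seen:
                continue
            stack, comp = [node], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(g[v] - comp)
            seen |= comp
            comps.append(comp)
        return comps

    print(components(graph))
    ```

    In this toy data the cell-cycle pair (CCNB1, CDK1) and the apoptosis pair (CASP3, CASP9) each co-occur twice and fall into separate components.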

  7. An analysis of search-based user interaction on the Semantic Web

    NARCIS (Netherlands)

    Hildebrand, M.; Ossenbruggen, J.R. van; Hardman, L.

    2007-01-01

    Many Semantic Web applications provide access to their resources through text-based search queries, using explicit semantics to improve the search results. This paper provides an analysis of the current state of the art in semantic search, based on 35 existing systems. We identify different types of

  8. Proceedings of the ECIR 2012 Workshop on Task-Based and Aggregated Search (TBAS2012)

    DEFF Research Database (Denmark)

    2012-01-01

    Task-based search aims to understand the user's current task and desired outcomes, and how this may provide useful context for the Information Retrieval (IR) process. An example of task-based search is situations where additional user information on e.g. the purpose of the search or what the user...

  9. Trends in authorship based on gender and nationality in published neuroscience literature

    OpenAIRE

    Divyanshu Dubey; Anshudha Sawhney; Aparna Atluru; Amod Amritphale; Archana Dubey; Jaya Trivedi

    2016-01-01

    Objective: To evaluate the disparity in authorship based on gender and nationality of institutional affiliation among journals from developed and developing countries. Materials and Methods: Original articles from two neuroscience journals, with a 5 year impact factor >15 (Neuron and Nature Neuroscience) and from two neurology journals from a developing country (Neurology India and Annals of Indian Academy of Neurology) were categorized by gender and institutional affiliation of first and...

  10. Where is smoking research published?

    OpenAIRE

    A. Liguori(ISAAS, Trieste); Hughes, J. R.

    1996-01-01

    OBJECTIVE: To identify journals that have a focus on human nicotine/smoking research and to investigate the coverage of smoking in "high-impact" journals. DESIGN: The MEDLINE computer database was searched for English-language articles on human studies published in 1988-1992 using "nicotine", "smoking", "smoking cessation", "tobacco", or "tobacco use disorder" as focus descriptors. This search was supplemented with a similar search of the PSYCLIT computer database. Fifty-eight journals ...

  11. There are Discipline-Based Differences in Authors’ Perceptions Towards Open Access Publishing. A Review of: Coonin, B., & Younce, L. M. (2010). Publishing in open access education journals: The authors’ perspectives. Behavioral & Social Sciences Librarian, 29, 118-132. doi:10.1080/01639261003742181

    Directory of Open Access Journals (Sweden)

    Lisa Shen

    2011-09-01

    searches for publishing opportunities (40.4%), and professional societies (29.3%) for raising their awareness of OA. Moreover, based on voluntary general comments left at the end of the survey, the researchers observed that some authors viewed the terms open access and electronic “synonymously” and thought of OA publishing only as a “format change” (p. 125). Conclusion – The study revealed some discipline-based differences in authors’ attitudes toward scholarly publishing and the concept of OA. The majority of authors publishing in education viewed author fees, a common OA publishing practice in the life and medical sciences, as undesirable. On the other hand, citation impact, a major determinant for life and medical sciences publishing, was only a minor factor for authors in education. These findings provide useful insights for future research on discipline-based publication differences. The findings also indicated peer review is the primary determinant for authors publishing in education. Moreover, while the majority of authors surveyed considered both print and e-journal formats to be equally acceptable, almost one third viewed OA journals as less prestigious than subscription-based publications. Some authors also seemed to confuse the concepts of OA and electronic publishing. These findings could generate fresh discussion points between academic librarians and faculty members regarding OA publishing.

  12. Location-Based Search Engines Tasks and Capabilities: A Comparative Study

    OpenAIRE

    Hossein Vakili Mofrad; Hamid R. Jamali; Xiaofang Zhou; Saeid Asadi

    2007-01-01

    Location-based web searching is one of the popular tasks expected from the search engines. A location-based query consists of a topic and a reference location. Unlike general web search, in location-based search it is expected to find and rank documents which are not only related to the query topic but also geographically related to the location which the query is associated with. There are several issues for developing effective geographic search engines and so far, no global location-based ...

  13. Publisher's Announcement

    Science.gov (United States)

    Scriven, Neil

    2003-12-01

    We are delighted to announce that the new Editor-in-Chief of Journal of Physics A: Mathematical and General for 2004 will be Professor Carl M Bender of Washington University, St. Louis. Carl will, with the help of his world class editorial board, maintain standards of scientific rigour whilst ensuring that research published is of the highest importance. Carl attained his first degree in physics at Cornell University before studying for his PhD at Harvard. He later worked at The Institute for Advanced Study in Princeton and at MIT before assuming his current position at Washington University, St Louis. He has been a visiting professor at Technion, Haifa, and Imperial College, London and a scientific consultant for Los Alamos National Laboratory. His main expertise is in using classical applied mathematics to solve a broad range of problems in high-energy theoretical physics and mathematical physics. Since the publication of his book Advanced Mathematical Methods for Scientists and Engineers, written with Steven Orszag, he has been regarded as an expert on the subject of asymptotic analysis and perturbative methods. `Carl publishes his own internationally-important research in the journal and has been an invaluable, energetic member of the Editorial Board for some time' said Professor Ed Corrigan, Carl's predecessor as Editor, `he will be an excellent Editor-in-Chief'. Our grateful thanks and best wishes go to Professor Corrigan who has done a magnificent job for the journal during his five-year tenure.

  14. Clever Search: A WordNet Based Wrapper for Internet Search Engines

    OpenAIRE

    Kruse, Peter M.; Naujoks, Andre; Roesner, Dietmar; Kunze, Manuela

    2005-01-01

    This paper presents an approach to enhance search engines with information about word senses available in WordNet. The approach exploits information about the conceptual relations within the lexical-semantic net. In the wrapper for search engines presented here, WordNet information is used to make the user's request more specific or to classify the results of a publicly available web search engine such as Google, Yahoo, etc.

  15. Efficient mining of association rules based on gravitational search algorithm

    Directory of Open Access Journals (Sweden)

    Fariba Khademolghorani

    2011-07-01

    Full Text Available Association rule mining is one of the most widely used tools to discover relationships among attributes in a database. Many algorithms have been introduced for discovering these rules, and they have to mine association rules in two separate stages. Most of them mine occurrence rules which are easily predictable by the users. Therefore, this paper discusses the application of a gravitational search algorithm for discovering interesting association rules. This evolutionary algorithm is based on Newtonian gravity and the laws of motion. Furthermore, contrary to previous methods, the proposed method is able to mine the best association rules without generating frequent itemsets and is independent of the minimum support and confidence values. The results of applying this method, in comparison with a method of mining association rules based upon particle swarm optimization, show that our method is successful.
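
    Whatever search strategy explores the rule space, candidate rules are scored by support and confidence; a minimal sketch of those two measures on toy market-basket data:

    ```python
    def support(itemset, transactions):
        """Fraction of transactions containing every item in itemset."""
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(antecedent, consequent, transactions):
        """Of the transactions containing the antecedent, the fraction
        that also contain the consequent."""
        return (support(antecedent | consequent, transactions)
                / support(antecedent, transactions))

    tx = [{"bread", "milk"}, {"bread", "butter"},
          {"bread", "milk", "butter"}, {"milk"}]
    print(support({"bread", "milk"}, tx))       # 0.5
    print(confidence({"bread"}, {"milk"}, tx))  # 2/3
    ```

    The gravitational search algorithm in the paper searches directly for high-scoring rules instead of first enumerating frequent itemsets.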

  16. Memoryless cooperative graph search based on the simulated annealing algorithm

    Institute of Scientific and Technical Information of China (English)

    Hou Jian; Yan Gang-Feng; Fan Zhen

    2011-01-01

    We study the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of agents. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies, the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent search to simultaneously reduce the computational complexity and accelerate convergence, based on the algorithms given for a single agent. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
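
    The simulated-annealing core referred to above can be sketched generically: accept downhill moves always and uphill moves with a temperature-dependent probability (a single-agent toy on a six-segment ring, not the paper's multi-agent scheme):

    ```python
    import math
    import random

    def simulated_annealing(cost, neighbors, start, t0=1.0, cooling=0.995,
                            steps=5000, seed=0):
        """Generic simulated annealing over a discrete state space."""
        rng = random.Random(seed)
        state, c, t = start, cost(start), t0
        best, best_c = state, c
        for _ in range(steps):
            cand = rng.choice(neighbors(state))
            cc = cost(cand)
            # downhill moves always accepted; uphill with Boltzmann probability
            if cc <= c or rng.random() < math.exp((c - cc) / max(t, 1e-12)):
                state, c = cand, cc
                if c < best_c:
                    best, best_c = state, c
            t *= cooling
        return best, best_c

    # toy: six ring segments with per-segment costs; the agent hops to neighbors
    costs = [5, 3, 8, 1, 6, 4]
    best, best_c = simulated_annealing(lambda i: costs[i],
                                       lambda i: [(i - 1) % 6, (i + 1) % 6], 0)
    print(best, best_c)  # segment 3, cost 1
    ```

    Early on the high temperature lets the agent escape local minima (such as segment 1); the cooling schedule then locks it into the global optimum.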

  17. A Detection Scheme for Cavity-based Dark Matter Searches

    CERN Document Server

    Bukhari, M H S

    2016-01-01

    We present here a proposal of a scheme and some useful ideas for resonant-cavity-based detection of cold dark matter axions, with the hope of improving the existing endeavors. The scheme is based upon our idea of a detector which incorporates an integrated tunnel diode and a GaAs HEMT or HFET (High Electron Mobility Transistor or Heterogeneous FET) for resonance detection and amplification from a resonant cavity (in a strong transverse magnetic field from a cylindrical array of Halbach magnets). The idea of a TD-oscillator-amplifier combination could possibly serve as a more sensitive and viable resonance detection regime while maintaining excellent performance with low noise temperature, whereas the Halbach magnet array may offer a compact and permanent solution replacing the conventional electromagnet scheme. We believe that all these factors could possibly increase the sensitivity and accuracy of axion detection searches and reduce complications (and associated costs) in the experiments, in addition to help re...

  18. A Factorial Experiment on Scalability of Search Based Software Testing

    CERN Document Server

    Mehrmand, Arash

    2011-01-01

    Software testing is an expensive process which is vital in the industry. Constructing the test data in software testing incurs the major cost, and deciding which method to use to generate the test data is important. This paper discusses the efficiency of search-based algorithms (preferably a genetic algorithm) versus random testing in software test-data generation. This study differs from all previous studies due to the sample programs (SUTs) which are used. Since we want to increase the complexity of SUTs gradually, and the program generation is automatic as well, Grammatical Evolution is used to guide the program generation. SUTs are generated according to the grammar we provide, with different levels of complexity. SUTs first undergo the genetic algorithm and then random testing. Based on the test results, this paper recommends one method to use for automation of software testing.
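
    The contrast between random testing and search-based test-data generation can be illustrated on a single hard-to-hit branch: random sampling almost never lands on the magic value, while a fitness-guided search homes in on it. Steepest descent on branch distance is used here as a simple stand-in for a genetic algorithm, and the target constant is arbitrary:

    ```python
    import random

    def branch_distance(x):
        """Fitness for covering the target branch `if x == 4321:`; zero means covered."""
        return abs(x - 4321)

    def random_testing(trials=10000, seed=0):
        """Blindly sample inputs; success probability per trial is 1e-6."""
        rng = random.Random(seed)
        return any(branch_distance(rng.randrange(10**6)) == 0 for _ in range(trials))

    def local_search(x0=987654, steps=20000):
        """Steepest descent on branch distance over a small move neighborhood."""
        x = x0
        for _ in range(steps):
            if branch_distance(x) == 0:
                return x
            x = min((x + d for d in (-10000, -1000, -100, -10, -1,
                                     1, 10, 100, 1000, 10000)),
                    key=branch_distance)
        return x

    print(local_search())  # 4321: the fitness-guided search covers the branch
    ```

    A genetic algorithm replaces the single candidate with a population and adds crossover and mutation, but the fitness function plays the same guiding role.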

  19. MISH publishes new framework for fear-based, abstinence-only education.

    Science.gov (United States)

    Mayer, R

    1997-01-01

    The US Medical Institute for Sexual Health (MISH) "National Guidelines for Sexuality and Character Education" is a fear-based, abstinence-only framework for sexuality education. This document is virtually identical in format, conceptual framework, and typeface to that produced by the Sexuality Information and Education Council of the United States (SIECUS) and adopts SIECUS language in many sections. SIECUS agrees with approximately 60% of the MISH messages and finds it noteworthy that the MISH guidelines provide a blueprint for sex education from elementary school through high school. However, MISH and SIECUS follow very different approaches to sex education. SIECUS seeks to help young people acquire the necessary information to safeguard their sexual health and make proper decisions, while the single goal of MISH is to promote abstinence until marriage (avoiding sexual intercourse and any activity involving genital contact or stimulation). MISH promotes this view with fear-based messages, uses only negative terms to describe adolescent sexual relations, and provides scant and misleading information about contraception (including the assertion that adolescent use of birth control is often ineffective). The MISH curriculum also promotes the anti-abortion viewpoint that life begins at conception. While acknowledging the changing composition of the US family, MISH promotes a view of the nuclear family as the "best" type of family in which to rear children. MISH skirts the issue of sexual orientation and avoids giving information about ways to seek treatment for sexually transmitted diseases or prenatal care. MISH guidelines make unsubstantiated statements about the value of abstinence, provide almost no information about how to adapt their framework to various communities, discuss contraceptives and condoms only in terms of failures, and suggest that all adolescent sexual relations have negative consequences. PMID:12319710

  20. Yahoo!Search and Web API Utilized Mashup based e-Leaning Content Search Engine for Mobile Learning

    OpenAIRE

    Kohei Arai

    2015-01-01

    Mashup based content search engine for mobile devices is proposed. Mashup technology is defined as a search engine built on plural different APIs. A mashup has not only plural APIs but also the following specific features: (1) it enables classification of the contents in concern by using Web 2.0, (2) it may use APIs from different sites, (3) it allows information retrieval from both the client and server sides, (4) it may search contents as an arbitrarily structured hybrid content which is mi...

  1. Improving Image Search based on User Created Communities

    CERN Document Server

    Joshi, Amruta; Radev, Dragomir; Hassan, Ahmed

    2011-01-01

    Tag-based retrieval of multimedia content is a difficult problem, not only because of the shorter length of tags associated with images and videos, but also due to mismatch in the terminologies used by searcher and content creator. To alleviate this problem, we propose a simple concept-driven probabilistic model for improving text-based rich-media search. While our approach is similar to existing topic-based retrieval and cluster-based language modeling work, there are two important differences: (1) our proposed model considers not only the query-generation likelihood from cluster, but explicitly accounts for the overall "popularity" of the cluster or underlying concept, and (2) we explore the possibility of inferring the likely concept relevant to a rich-media content through the user-created communities that the content belongs to. We implement two methods of concept extraction: a traditional cluster based approach, and the proposed community based approach. We evaluate these two techniques for how effectiv...

  2. Semantic Web Search based on Ontology Modeling using Protege Reasoner

    OpenAIRE

    Shekhar, Monica; K, Saravanaguru RA.

    2013-01-01

    The Semantic Web works on top of the existing Web, presenting the meaning of information as well-defined vocabularies understood by people. Semantic search, at the same time, works on improving the accuracy of a search by understanding the intent of the search and providing contextually relevant results. This paper describes a semantic approach toward web search through a PHP application. The goal was to parse through a user's browsing history and return semantically relevant web pages for th...

  3. Developing a Grid-Based Search and Categorization Tool

    OpenAIRE

    Haya, Glenn; Scholze, Frank; Vigen, Jens

    2003-01-01

    Grid technology has the potential to improve the accessibility of digital libraries. The participants in Project GRACE (Grid Search And Categorization Engine) are in the process of developing a search engine that will allow users to search through heterogeneous resources stored in geographically distributed digital collections. What differentiates this project from current search tools is that GRACE will be run on the European Data Grid, a large distributed network, and will not have a single...

  4. Biobotic insect swarm based sensor networks for search and rescue

    Science.gov (United States)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent for days to reach to surviving victims where robotic swarms or other distributed robotic systems might play a great role in achieving this faster. However, current technology falls short of offering centimeter scale mobile agents that can function effectively under such conditions. Insects, the inspiration of many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies to enable biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies, establishment of biobot tracking testbeds to evaluate locomotion control efficiency, investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths, establishment of a localization and communication infrastructure, modeling and controlling collective motion by learning deterministic and stochastic motion models, topological motion modeling based on these models, and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  5. An Efficient Annotation of Search Results Based on Feature

    Directory of Open Access Journals (Sweden)

    A. Jebha

    2015-10-01

    Full Text Available With the increased number of web databases, a major part of the deep web is database-backed content. In several search engines, encoded data in the returned result pages often comes from structured databases, which are referred to as Web databases (WDB). A result page returned from a WDB has multiple search result records (SRR). Data units obtained from these databases are encoded into the dynamic result pages for manual processing. In order to make these units machine-processable, relevant information is extracted and data units are assigned meaningful labels. In this paper, feature ranking is proposed to extract the relevant information from features of the WDB. Feature ranking is practical for enhancing understanding of the data and identifying relevant features. This research explores the performance of the feature ranking process by using linear support vector machines with various features of the WDB database for annotation of relevant results. Experimental results of the proposed system are better when compared with earlier methods.

  6. New Architectures for Presenting Search Results Based on Web Search Engines Users Experience

    Science.gov (United States)

    Martinez, F. J.; Pastor, J. A.; Rodriguez, J. V.; Lopez, Rosana; Rodriguez, J. V., Jr.

    2011-01-01

    Introduction: The Internet is a dynamic environment which is continuously being updated. Search engines have been, currently are and in all probability will continue to be the most popular systems in this information cosmos. Method: In this work, special attention has been paid to the series of changes made to search engines up to this point,…

  7. An Interactive Approach for Filtering out Junk Images from Keyword Based Google Search Results

    OpenAIRE

    Gao, Yuli; Peng, Jinye; Luo, Hangzai; Keim, Daniel A.; Fan, Jianping

    2009-01-01

    The keyword-based Google Images search engine is now very popular for online image search. Unfortunately, only the text terms that are explicitly or implicitly linked with the images are used for image indexing, and the associated text terms may not correspond exactly to the underlying image semantics; thus the keyword-based Google Images search engine may return large numbers of junk images which are irrelevant to the given keyword-based queries. Based on this observation, we ha...

  8. Cost Analysis of Screening for, Diagnosing, and Staging Prostate Cancer Based on a Systematic Review of Published Studies

    Directory of Open Access Journals (Sweden)

    Donatus U. Ekwueme, PhD

    2007-10-01

    Full Text Available Introduction: The reported estimates of the economic costs associated with prostate cancer screening, diagnostic testing, and clinical staging are substantial. However, the resource costs (i.e., factors such as physician’s time, laboratory tests, patient’s time away from work) included in these estimates are unknown. We examined the resource costs for prostate cancer screening, diagnostic tests, and staging; examined how these costs differ in the United States from costs in other industrialized countries; and estimated the cost per man screened for prostate cancer, per man given a diagnostic test, and per man given a clinically staged diagnosis of this disease. Methods: We searched the electronic databases MEDLINE, EMBASE, and CINAHL for articles and reports on prostate cancer published from January 1980 through December 2003. Studies were selected according to the following criteria: the article was published in English; the full text was available for review; the study reported the resource or input cost data used to estimate the cost of prostate cancer testing, diagnosing, or clinical staging; and the study was conducted in an established market economy. We used descriptive statistics, weighted mean, and Monte Carlo simulation methods to pool and analyze the abstracted data. Results: Of 262 studies examined, 28 met our selection criteria (15 from the United States and 13 from other industrialized countries). For studies conducted in the United States, the pooled baseline resource cost was $37.23 for screening with prostate-specific antigen (PSA) and $31.77 for screening with digital rectal examination (DRE). For studies conducted in other industrialized countries, the pooled baseline resource cost was $30.92 for screening with PSA and $33.54 for DRE. For diagnostic and staging methods, the variation in the resource costs between the United States and other industrialized countries was mixed. Conclusion: Because national health resources are limited

  9. A Content-Based Search Algorithm for Motion Estimation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The basic search algorithm to implement Motion Estimation (ME) in the H.263 encoder is a full search. It is simple but time-consuming. Traditional fast search algorithms may cause a fall in image quality or an increase in bit-rate in low bit-rate applications. A fast search algorithm for ME that takes image content into consideration is proposed in this paper. Experiments show that the proposed algorithm can offer up to 70 percent savings in execution time with almost no sacrifice in PSNR or bit-rate, compared with the full search.
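
    A full search for motion estimation exhaustively evaluates every candidate motion vector in a window, typically with a sum-of-absolute-differences (SAD) cost. A minimal sketch on toy 4x4 frames (2x2 blocks and a +/-1 window, far smaller than H.263's 16x16 macroblocks):

    ```python
    def sad(block_a, block_b):
        """Sum of absolute differences between two equally sized blocks."""
        return sum(abs(a - b)
                   for ra, rb in zip(block_a, block_b)
                   for a, b in zip(ra, rb))

    def block(frame, y, x, n):
        """Extract an n-by-n block with top-left corner (y, x)."""
        return [row[x:x + n] for row in frame[y:y + n]]

    def full_search(ref, cur, y, x, n=2, rng=1):
        """Exhaustively test every candidate motion vector in a +/-rng window."""
        target = block(cur, y, x, n)
        best = None
        for dy in range(-rng, rng + 1):
            for dx in range(-rng, rng + 1):
                yy, xx = y + dy, x + dx
                if 0 <= yy <= len(ref) - n and 0 <= xx <= len(ref[0]) - n:
                    cost = sad(block(ref, yy, xx, n), target)
                    if best is None or cost < best[0]:
                        best = (cost, dy, dx)
        return best

    ref = [[0, 0, 0, 0],
           [0, 9, 8, 0],
           [0, 7, 6, 0],
           [0, 0, 0, 0]]
    cur = [[9, 8, 0, 0],
           [7, 6, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
    print(full_search(ref, cur, 0, 0))  # (0, 1, 1): exact match one pixel down-right in ref
    ```

    Fast algorithms such as the one proposed in the paper prune this candidate set; the full search above is the quality baseline they are compared against.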

  10. Permutation based decision making under fuzzy environment using Tabu search

    Directory of Open Access Journals (Sweden)

    Mahdi Bashiri

    2012-04-01

    Full Text Available One of the techniques used for Multiple Criteria Decision Making (MCDM) is the permutation method. In the classical form of the permutation method, it is assumed that the weights and decision matrix components are crisp. However, when group decision making is under consideration and decision makers cannot agree on crisp values for the weights and decision matrix components, fuzzy numbers should be used. In this article, the fuzzy permutation technique for MCDM problems is explained. The main deficiency of the permutation method is its long computation time, so a Tabu Search (TS) based algorithm has been proposed to reduce it. A numerical example illustrates the proposed approach clearly. Then, some benchmark instances extracted from the literature are solved by the proposed TS. The analyses of the results show the proper performance of the proposed method.
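
    The exhaustive permutation method that Tabu search is meant to replace can be sketched directly: enumerate every ranking of the alternatives and keep the one with the best net concordance (crisp weights and scores for brevity, not the fuzzy variant; the data are toy values):

    ```python
    from itertools import permutations

    def permutation_rank(scores, weights):
        """Exhaustive permutation method: score every candidate ranking by net
        concordance and return the best. Cost grows as n!, which is why the
        paper replaces this enumeration with Tabu search."""
        def net(order):
            total = 0.0
            for i, a in enumerate(order):
                for b in order[i + 1:]:
                    # criteria agreeing with "a ranked above b" add their weight,
                    # disagreeing criteria subtract it
                    for w, sa, sb in zip(weights, scores[a], scores[b]):
                        total += w if sa >= sb else -w
            return total
        return max(permutations(scores), key=net)

    scores = {"A1": [7, 9], "A2": [8, 5], "A3": [3, 4]}  # alternatives x criteria
    print(permutation_rank(scores, [0.6, 0.4]))  # ('A2', 'A1', 'A3')
    ```

    A Tabu search explores this same permutation space with swap moves and a tabu list instead of enumerating all n! orderings.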

  11. Accelerated Search for Gaussian Generator Based on Triple Prime Integers

    Directory of Open Access Journals (Sweden)

    Boris S. Verkhovsky

    2009-01-01

    Full Text Available Problem statement: Modern cryptographic algorithms are based on the complexity of two problems: integer factorization of real integers and the Discrete Logarithm Problem (DLP). Approach: The latter problem is even more complicated in the domain of complex integers, where Public Key Cryptosystems (PKC) have an advantage over analogous encryption-decryption protocols in arithmetic of real integers modulo p: the former PKC have quadratic cycles of order O(p^2) while the latter PKC have linear cycles of order O(p). Results: An accelerated non-deterministic search algorithm for a primitive root (generator) in a domain of complex integers modulo a triple prime p was provided in this study. It showed the properties of triple primes and the frequencies of their occurrence on a specified interval, and analyzed the efficiency of the proposed algorithm. Conclusion: Numerous computer experiments and their analysis indicated that three trials were sufficient on average to find a Gaussian generator.
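
    The standard generator test behind such searches checks g^((p-1)/q) mod p for each prime factor q of p-1; a minimal sketch over ordinary integers modulo p (the paper's Gaussian-integer and triple-prime machinery is not reproduced here):

    ```python
    def prime_factors(n):
        """Distinct prime factors by trial division (fine for small n)."""
        fs, d = set(), 2
        while d * d <= n:
            while n % d == 0:
                fs.add(d)
                n //= d
            d += 1
        if n > 1:
            fs.add(n)
        return fs

    def is_generator(g, p):
        """g generates (Z/pZ)* iff g^((p-1)/q) != 1 mod p for every prime q | p-1."""
        return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

    def find_generator(p):
        """Search candidates in order; randomized trials work the same way."""
        return next(g for g in range(2, p) if is_generator(g, p))

    print(find_generator(23))  # 5
    ```

    The paper's acceleration comes from choosing moduli (triple primes) whose p-1 factorization keeps this test cheap and generators frequent.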

  12. Building high dimensional imaging database for content based image search

    Science.gov (United States)

    Sun, Qinpei; Sun, Jianyong; Ling, Tonghui; Wang, Mingqing; Yang, Yuanyuan; Zhang, Jianguo

    2016-03-01

    In medical imaging informatics, content-based image retrieval (CBIR) techniques are employed to aid radiologists in the retrieval of images with similar image contents. CBIR uses visual contents, normally called image features, to search images from large-scale image databases according to users' requests in the form of a query image. However, most current CBIR systems require a distance computation over image feature vectors to perform a query, and these distance computations can be time consuming as the number of image features grows large, which limits the usability of such systems. In this presentation, we propose a novel framework which uses a high dimensional database to index the image features to improve the accuracy and retrieval speed of CBIR in an integrated RIS/PACS.

  13. GeoSearcher: Location-Based Ranking of Search Engine Results.

    Science.gov (United States)

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  14. Occurrences Algorithm for String Searching Based on Brute-force Algorithm

    OpenAIRE

    Ababneh Mohammad; Oqeili Saleh; Rawan A. Abdeen

    2006-01-01

    This study proposes a string searching algorithm as an improvement of the brute-force searching algorithm. The algorithm is named the Occurrences algorithm. It is based on performing preprocessing of the pattern and of the text before beginning to search for the pattern in the text.
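
    For reference, the brute-force baseline that the Occurrences algorithm improves upon (the abstract does not specify the preprocessing, so only the naive scan is sketched here) checks every alignment of the pattern against the text:

```python
def brute_force_search(text, pattern):
    """Naive O(n*m) scan: try every alignment of pattern against text
    and return all starting indices where it matches."""
    n, m = len(text), len(pattern)
    return [i for i in range(n - m + 1) if text[i:i + m] == pattern]
```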

  15. Novel cued search strategy based on information gain for phased array radar

    Institute of Scientific and Technical Information of China (English)

    Lu Jianbin; Hu Weidong; Xiao Hui; Yu Wenxian

    2008-01-01

    A search strategy based on the maximal information gain principle is presented for the cued search of phased array radars. First, the method for determining the cued search region, arranging the beam positions, and calculating the prior probability distribution of each beam position is discussed. Then, two search algorithms based on information gain are proposed, using Shannon entropy and Kullback-Leibler entropy, respectively. With the proposed strategy, the information gain of each beam position is predicted before radar detection, and the observation is made in the beam position with the maximal information gain. Compared with the conventional method of sequential search and confirm search, simulation results show that the proposed search strategy can distinctly improve the search performance and save radar time resources at the same given detection probability.
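
    The Shannon-entropy variant can be sketched as follows; this is an illustrative simplification (binary "target present" state per beam position, assumed detection and false-alarm probabilities), not the paper's radar model. The predicted gain of one look is the prior entropy minus the expected posterior entropy, and the beam with the largest gain is observed next.

```python
from math import log2

def entropy(p):
    """Shannon entropy of a Bernoulli 'target present' variable."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def expected_gain(p, pd=0.9, pfa=0.05):
    """Expected entropy reduction from one look at a beam position with
    prior target probability p, detection probability pd and
    false-alarm probability pfa (illustrative values)."""
    p_det = p * pd + (1 - p) * pfa        # probability of a detection report
    post_det = p * pd / p_det             # posterior given a detection
    post_no = p * (1 - pd) / (1 - p_det)  # posterior given no detection
    return entropy(p) - (p_det * entropy(post_det) + (1 - p_det) * entropy(post_no))

def next_beam(priors, pd=0.9, pfa=0.05):
    """Pick the beam position with the maximal predicted information gain."""
    return max(range(len(priors)), key=lambda i: expected_gain(priors[i], pd, pfa))
```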

  16. Commercial Properties, parcel data base attribute, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Commercial Properties dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It is...

  17. Cellular Phone Towers, parcel data base attribute, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cellular Phone Towers dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of 2006. It is...

  18. Federal Military Facilities, Federal Lands, National Atlas, Published in 2005, Smaller than 1:100000 scale, Whiteman Air Force Base.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Federal Military Facilities dataset, published at Smaller than 1:100000 scale, was produced all or in part from Published Reports/Deeds information as of 2005....

  19. A Semantic Query Transformation Approach Based on Ontology for Search Engine

    Directory of Open Access Journals (Sweden)

    SAJENDRA KUMAR

    2012-05-01

    Full Text Available These days we use popular web search engines, such as Google, Yahoo!, and Live Search, for information retrieval in all areas, to obtain initial helpful information. The information we retrieve via a search engine may not be relevant to the search target in the user's mind, and when users do not find relevant information they have to sift through the results. These search engines use a traditional search service based on "static keywords", which requires users to type in the exact keywords. This approach clearly puts users in the critical situation of guessing the exact keyword, when they may instead want to define their search by attributes of the search target. The relevancy of results in most cases may not be satisfactory, and users may not be patient enough to browse through a complete list of pages to get a relevant result. The reason is that these search engines perform search based on syntax, not semantics, and so are less able to understand the relationships between keywords, which has an adverse effect on the results produced. Semantic search engines, which return concepts rather than documents matching the user's query, are the solution to this. In this paper we propose a semantic query interface which creates a semantic query from the user's input query, and present a study of current semantic search engine techniques for semantic search.

  20. Making the road by searching - A search engine based on Swarm Information Foraging

    CERN Document Server

    Gayo-Avello, Daniel

    2009-01-01

    Search engines are nowadays one of the most important entry points for Internet users and a central tool for solving most of their information needs. Still, a substantial number of users' searches obtain unsatisfactory results, and several lines of research aim to increase the relevancy of the results users retrieve. In this paper the authors frame this problem within the much broader (and older) one of information overload. They argue that users' dissatisfaction with search engines is a common manifestation of that problem, and propose a different angle from which to tackle it. As will be discussed, their approach shares goals with a currently hot research topic (namely, learning to rank for information retrieval) but, unlike the techniques commonly applied in that field, their technique cannot exactly be considered machine learning and, additionally, it can be used to change the search engine's response in real time, driven by user behavior. Their proposal ...

  1. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationships of genes within and across populations of different species. Nowadays, with next-generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology that is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability is a computationally very demanding task for phylogenetic inference methods. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently, where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable, in terms of topological accuracy and runtime, to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available at: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  2. Location-Based Search Engines Tasks and Capabilities: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Hossein Vakili Mofrad

    2007-12-01

    Full Text Available Location-based web searching is one of the popular tasks expected from the search engines. A location-based query consists of a topic and a reference location. Unlike general web search, in location-based search it is expected to find and rank documents which are not only related to the query topic but also geographically related to the location which the query is associated with. There are several issues for developing effective geographic search engines and so far, no global location-based search engine has been reported. Location ambiguity, lack of geographic information on web pages, language-based and country-dependent addressing styles, and multiple locations related to a single web resource are notable difficulties. Search engine companies have started to develop and offer location-based services. However, they are still geographically limited and have not become as successful and popular as general search engines. This paper reviews the architecture and tasks of location-based search engines and compares the capabilities, functionalities and coverage of the current geographic search engines with a user-oriented approach.

  3. Multiple search methods for similarity-based virtual screening: analysis of search overlap and precision

    OpenAIRE

    Holliday John D; Kanoulas Evangelos; Malim Nurul; Willett Peter

    2011-01-01

    Abstract Background Data fusion methods are widely used in virtual screening, and make the implicit assumption that the more often a molecule is retrieved in multiple similarity searches, the more likely it is to be active. This paper tests the correctness of this assumption. Results Sets of 25 searches using either the same reference structure and 25 different similarity measures (similarity fusion) or 25 different reference structures and the same similarity measure (group fusion) show that...

  4. Developing a Grid-based search and categorization tool

    CERN Document Server

    Haya, Glenn; Vigen, Jens

    2003-01-01

    Grid technology has the potential to improve the accessibility of digital libraries. The participants in Project GRACE (Grid Search And Categorization Engine) are in the process of developing a search engine that will allow users to search through heterogeneous resources stored in geographically distributed digital collections. What differentiates this project from current search tools is that GRACE will be run on the European Data Grid, a large distributed network, and will not have a single centralized index as current web search engines do. In some cases, the distributed approach offers advantages over the centralized approach since it is more scalable, can be used on otherwise inaccessible material, and can provide advanced search options customized for each data source.

  5. Survey on: Rule Based Phonetic Search for Slavic Surnames

    Directory of Open Access Journals (Sweden)

    Janki .B. Pardeshi

    2016-02-01

    Full Text Available There are different applications in NLP (Natural Language Processing) where searching of surnames plays an important role. This paper is a survey of searching algorithms for the databases of communications service providers, person registries, social networks and genealogy. It addresses the problem of phonetic search for Slovak and territorially neighbouring languages (Czech, Polish, Ukrainian, Russian, German, Hungarian) and for Jewish surnames. The solution provides high precision and recall for searching surnames in these languages.

  6. Search Engines and Search Technologies for Web-based Text Data%网络文本数据搜索引擎与搜索技术

    Institute of Scientific and Technical Information of China (English)

    李勇

    2001-01-01

    This paper describes the functions, characteristics and operating principles of search engines based on Web text, and the searching and data mining technologies for Web-based text information. Methods of computer-aided text clustering and abstracting are also given. Finally, it gives some guidelines for the assessment of searching quality.

  7. Search Method Based on Figurative Indexation of Folksonomic Features of Graphic Files

    Directory of Open Access Journals (Sweden)

    Oleg V. Bisikalo

    2013-11-01

    Full Text Available In this paper a search method based on figurative indexation of folksonomic characteristics of graphical files is described. The method takes extralinguistic information into account and is based on a model of human figurative thinking. The paper describes the creation of a method for searching image files based on their formal, including folksonomic, clues.

  8. Concept Search

    OpenAIRE

    Giunchiglia, Fausto; Kharkevich, Uladzimir; Zaihrayeu, Ilya

    2008-01-01

    In this paper we present a novel approach, called Concept Search, which extends syntactic search, i.e., search based on the computation of string similarity between words, with semantic search, i.e., search based on the computation of semantic relations between concepts. The key idea of Concept Search is to operate on complex concepts and to maximally exploit the semantic information available, reducing to syntactic search only when necessary, i.e., when no semantic information is available. ...

  9. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Image search engines attempt to give fast and accurate access to the huge number of images available on the Internet. There have been a number of efforts to build search engines based on image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR allows automatically extracting target images according to objective visual contents of the image itself, for example its shapes, colors and textures, to provide more accurate ranking of the results. Recent approaches to CBIR differ in terms of which image features are extracted as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features. The framework addresses efficient retrieval of images in large image collections, and a comparative study between recent CBIR techniques is provided. According to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide more accurate retrieval than previous ones. The first proposed image retrieval system uses color, texture and shape descriptors to form a global features vector. This approach integrates the YCbCr color histogram as a color descriptor, a modified Fourier descriptor as a shape descriptor and a modified Edge Histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates this global features vector with the SURF salient point technique as a local feature. A nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. 
The second approach is ...
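
    The global-feature stage of such a CBIR pipeline can be sketched as below; this is a deliberately simplified illustration (a plain RGB histogram with L1 distance), not the YCbCr/Fourier/SURF combination the thesis proposes.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Global colour descriptor: per-channel intensity histogram,
    concatenated and L1-normalised."""
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    v = np.concatenate(h).astype(float)
    return v / v.sum()

def rank_images(query, database):
    """Rank database images by L1 distance between global feature vectors
    (smaller distance = more similar, ranked first)."""
    q = color_histogram(query)
    dists = [float(np.abs(q - color_histogram(img)).sum()) for img in database]
    return np.argsort(dists)
```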

  10. A fast block-matching algorithm based on variable shape search

    Institute of Scientific and Technical Information of China (English)

    LIU Hao; ZHANG Wen-jun; CAI Jun

    2006-01-01

    Block-matching motion estimation plays an important role in video coding. The simple and efficient fast block-matching algorithm using Variable Shape Search (VSS) proposed in this paper is based on diamond search and hexagon search. The initial big diamond search is designed to fit the directional centre-biased characteristics of real-world video sequences, and the directional hexagon search is designed to identify a small region where the best motion vector is expected to be located. Finally, the small diamond search is used to select the best motion vector in that small region. Experimental results showed that the proposed VSS algorithm can significantly reduce computational complexity, and provides a competitive speedup with similar distortion performance compared with the popular Diamond Search (DS) algorithm in the MPEG-4 Simple Profile.
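
    The final small-diamond stage can be sketched as follows; this shows only that one component (greedy descent over the four axis neighbours under a SAD criterion), not the full VSS algorithm with its big-diamond and hexagon stages.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def small_diamond_search(cur, ref, by, bx, bsize=8, max_iter=32):
    """Greedy small-diamond block matching: from the current motion vector,
    probe the four axis neighbours and move to the best candidate until the
    centre wins (or the iteration budget runs out)."""
    block = cur[by:by + bsize, bx:bx + bsize]
    mv = (0, 0)
    for _ in range(max_iter):
        cands = [mv] + [(mv[0] + dy, mv[1] + dx)
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))]
        best = min(cands, key=lambda m: sad(
            block, ref[by + m[0]:by + m[0] + bsize, bx + m[1]:bx + m[1] + bsize]))
        if best == mv:
            break
        mv = best
    return mv
```

    On a synthetic frame pair where the content shifts by (2, 1) pixels, the search recovers the matching displacement.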

  11. A phenomenological relative biological effectiveness (RBE) model for proton therapy based on all published in vitro cell survival data

    International Nuclear Information System (INIS)

    Proton therapy treatments are currently planned and delivered using the assumption that the proton relative biological effectiveness (RBE) relative to photons is 1.1. This assumption ignores strong experimental evidence that the RBE varies along the treatment field, i.e. with linear energy transfer (LET) and with tissue type. A recent review study collected over 70 experimental reports on proton RBE, providing a comprehensive dataset for predicting RBE for cell survival. Using this dataset we developed a model to predict proton RBE based on dose, dose-averaged LET (LETd) and the ratio of the linear-quadratic model parameters for the reference radiation, (α/β)x, as the tissue-specific parameter. The proposed RBE model is based on the linear-quadratic model and was derived from a nonlinear regression fit to 287 experimental data points. It predicts that the RBE increases with increasing LETd and decreases with increasing (α/β)x, which agrees with previous theoretical predictions on the relationship between RBE, LETd and (α/β)x. The model additionally predicts a decrease in RBE with increasing dose and shows a relationship of both α and β with LETd. Our proposed phenomenological RBE model is derived using the most comprehensive collection of proton RBE experimental data to date. Previously published phenomenological models, based on more limited data sets, may have to be revised. (paper)
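
    The linear-quadratic definition of RBE underlying such models can be written down directly: the RBE at proton dose D_p is the photon dose producing the same effect E = αD + βD², divided by D_p. The sketch below uses hypothetical α and β values and does not include the paper's fitted LETd dependence.

```python
from math import sqrt

def rbe_from_lq(d_p, alpha_p, beta_p, alpha_x, beta_x):
    """RBE at proton dose d_p under the linear-quadratic model:
    solve alpha_x*D + beta_x*D^2 = alpha_p*d_p + beta_p*d_p^2 for the
    photon dose D, then return D / d_p."""
    effect = alpha_p * d_p + beta_p * d_p ** 2
    d_x = (-alpha_x + sqrt(alpha_x ** 2 + 4 * beta_x * effect)) / (2 * beta_x)
    return d_x / d_p
```

    With identical parameters for both radiations the RBE is exactly 1; a larger proton α yields an RBE above 1.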

  12. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, From FEMA, Published in 2007, 1:1200 (1in=100ft) scale, Town of Cary NC.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from LIDAR...

  13. Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE, flood plains, Published in 2008, 1:24000 (1in=2000ft) scale, Box Elder County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Flood Insurance Rate Maps and Base Flood Elevations, FIRM, DFIRM, BFE dataset, published at 1:24000 (1in=2000ft) scale, was produced all or in part from Other...

  14. SEARCH PROFILES BASED ON USER TO CLUSTER SIMILARITY

    Directory of Open Access Journals (Sweden)

    Ilija Subasic

    2007-12-01

    Full Text Available The privacy of web users' query search logs has, since the AOL dataset release last year, been treated as one of the central issues concerning privacy on the Internet. Therefore, the question of privacy preservation has also attracted a lot of attention in the different communities surrounding search engines. The use of clustering methods for providing low-level contextual search, while retaining high privacy/utility, is examined in this paper. By using only the user's cluster membership, the search query terms need no longer be retained, raising fewer privacy concerns for both users and companies. The paper presents a lightweight framework for combining query words, user similarities and clustering in order to provide a meaningful way of mining user searches while protecting their privacy. This differs from previous privacy-preservation attempts, which anonymize the queries instead of the users.

  15. A generic agent-based framework for cooperative search using pattern matching and reinforcement learning

    OpenAIRE

    Martin, Simon; Ouelhadj, Djamila; Beullens, P.; Ozcan, E.

    2011-01-01

    Cooperative search provides a class of strategies to design more effective search methodologies through combining (meta-) heuristics for solving combinatorial optimisation problems. This area has been little explored in operational research. In this study, we propose a general agent-based distributed framework where each agent implements a (meta-) heuristic. An agent continuously adapts itself during the search process using a cooperation protocol based on reinforcement learning and pattern m...

  16. Generating MEDLINE search strategies using a librarian knowledge-based system.

    OpenAIRE

    P. Peng; Aguirre, A.; Johnson, S. B.; Cimino, J. J.

    1993-01-01

    We describe a librarian knowledge-based system that generates a search strategy from a query representation based on a user's information need. Together with the natural language parser AQUA, the system functions as a human/computer interface, which translates a user query from free text into a BRS Onsite search formulation, for searching the MEDLINE bibliographic database. In the system, conceptual graphs are used to represent the user's information need. The UMLS Metathesaurus and Semantic ...

  17. A self-adaptive step Cuckoo search algorithm based on dimension by dimension improvement

    OpenAIRE

    Ren, Lu; Li, Haiyang; He, Xingshi

    2015-01-01

    The choice of step length plays an important role in convergence speed and precision of Cuckoo search algorithm. In the paper, a self-adaptive step Cuckoo search algorithm based on dimensional improvement is provided. First, since the step in the original self-adaptive step Cuckoo search algorithm is not updated when the current position of the nest is in the optimal position, simple modification of the step is made for the update. Second, evaluation strategy based on dimension by dimension u...

  18. Professional Microsoft search fast search, Sharepoint search, and search server

    CERN Document Server

    Bennett, Mark; Kehoe, Miles; Voskresenskaya, Natalya

    2010-01-01

    Use Microsoft's latest search-based technology, FAST search, to plan, customize, and deploy your search solution. FAST is Microsoft's latest intelligent search-based technology that boasts robustness and an ability to integrate business intelligence with Search. This in-depth guide provides you with advanced coverage of FAST search and shows you how to use it to plan, customize, and deploy your search solution, with an emphasis on SharePoint 2010 and Internet-based search solutions. With a particular appeal for anyone responsible for implementing and managing enterprise search, this book presents t

  19. Lyman-Kutcher-Burman NTCP model parameters for radiation pneumonitis and xerostomia based on combined analysis of published clinical data

    International Nuclear Information System (INIS)

    Knowledge of accurate parameter estimates is essential for incorporating normal tissue complication probability (NTCP) models into biologically based treatment planning. The purpose of this work is to derive parameter estimates for the Lyman-Kutcher-Burman (LKB) NTCP model using a combined analysis of multi-institutional toxicity data for the lung (radiation pneumonitis) and parotid gland (xerostomia). A series of published clinical datasets describing dose response for radiation pneumonitis (RP) and xerostomia was identified for this analysis. The data support the notion of a large volume effect for the lung and parotid gland, with estimates of the n parameter close to unity. Assuming that n = 1, the m and TD50 parameters of the LKB model were estimated by the maximum likelihood method from plots of complication rate as a function of mean organ dose. Ninety-five percent confidence intervals for the parameter estimates were obtained by the profile likelihood method. If daily fractions other than 2 Gy had been used in a published report, mean organ doses were converted to 2 Gy/fraction-equivalent doses using the linear-quadratic (LQ) formula with α/β = 3 Gy. The following parameter estimates were obtained for the endpoint of symptomatic RP when the lung is considered a paired organ: m = 0.41 (95% CI 0.38, 0.45) and TD50 = 29.9 Gy (95% CI 28.2, 31.8). When RP incidence was evaluated as a function of dose to the ipsilateral lung rather than the total lung, the estimates were m = 0.35 (95% CI 0.29, 0.43) and TD50 = 37.6 Gy (95% CI 34.6, 41.4). For xerostomia, expressed as reduction of stimulated salivary flow below 25% within six months after radiotherapy, the following values were obtained: m = 0.53 (95% CI 0.45, 0.65) and TD50 = 31.4 Gy (95% CI 29.1, 34.0). 
Although a large number of parameter estimates for different NTCP models and critical structures exist and continue to appear in the literature, it is hard to justify the use of any single parameter set obtained at a
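
    With n = 1 the LKB model reduces to a probit function of the mean organ dose, NTCP = Φ((D − TD50) / (m·TD50)), which is straightforward to evaluate; the example below plugs in the lung estimates reported in the abstract (m = 0.41, TD50 = 29.9 Gy) for illustration only.

```python
from math import erf, sqrt

def lkb_ntcp(mean_dose, td50, m):
    """LKB NTCP for n = 1 (mean-dose model):
    NTCP = Phi((D - TD50) / (m * TD50)), with Phi the standard
    normal CDF, computed here via erf."""
    t = (mean_dose - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))
```

    By construction, a mean dose equal to TD50 gives NTCP = 0.5, and NTCP rises monotonically with dose.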

  20. Evaluating Search Engine Relevance with Click-Based Metrics

    Science.gov (United States)

    Radlinski, Filip; Kurup, Madhu; Joachims, Thorsten

    Automatically judging the quality of retrieval functions based on observable user behavior holds promise for making retrieval evaluation faster, cheaper, and more user centered. However, the relationship between observable user behavior and retrieval quality is not yet fully understood. In this chapter, we expand upon, Radlinski et al. (How does clickthrough data reflect retrieval quality, In Proceedings of the ACM Conference on Information and Knowledge Management (CIKM), 43-52, 2008), presenting a sequence of studies investigating this relationship for an operational search engine on the arXiv.org e-print archive. We find that none of the eight absolute usage metrics we explore (including the number of clicks observed, the frequency with which users reformulate their queries, and how often result sets are abandoned) reliably reflect retrieval quality for the sample sizes we consider. However, we find that paired experiment designs adapted from sensory analysis produce accurate and reliable statements about the relative quality of two retrieval functions. In particular, we investigate two paired comparison tests that analyze clickthrough data from an interleaved presentation of ranking pairs, and find that both give accurate and consistent results. We conclude that both paired comparison tests give substantially more accurate and sensitive evaluation results than the absolute usage metrics in our domain.

  1. Proposal of an ontology based web search engine

    OpenAIRE

    Deco, Claudia; Bender, Cristina; Ponce, Adrián

    2008-01-01

    When users search for information on a web site, sometimes they do not get what they want. Assuming that the scope in which the search takes place works fine, some problems are caused by the way the user interacts with the system, others relate to characteristics of the language used, and others are caused by missing or nonexistent semantics in web documents. In this work, we propose a web search engine for a particular web site that uses ontologies and information retrieval techniques. A...

  2. A Vertical Search Engine – Based On Domain Classifier

    OpenAIRE

    Rajashree Shettar; Rahul Bhuptani

    2008-01-01

    The World Wide Web is growing exponentially, and the dynamic, unstructured nature of the web makes it difficult to locate useful resources. Web search engines such as Google and AltaVista provide a huge amount of information, much of which might not be relevant to the user's query. In this paper, we build a vertical search engine which takes a seed URL and classifies the URLs crawled as Medical or Finance domains. The filter component of the vertical search engine classifies the web pages downloa...

  3. Demeter, persephone, and the search for emergence in agent-based models.

    Energy Technology Data Exchange (ETDEWEB)

    North, M. J.; Howe, T. R.; Collier, N. T.; Vos, J. R.; Decision and Information Sciences; Univ. of Chicago; PantaRei Corp.; Univ. of Illinois

    2006-01-01

    In Greek mythology, the earth goddess Demeter was unable to find her daughter Persephone after Persephone was abducted by Hades, the god of the underworld. Demeter is said to have embarked on a long and frustrating, but ultimately successful, search to find her daughter. Unfortunately, long and frustrating searches are not confined to Greek mythology. In modern times, agent-based modelers often face similar troubles when searching for agents that are to be connected to one another and when seeking appropriate target agents while defining agent behaviors. The result is a 'search for emergence' in that many emergent or potentially emergent behaviors in agent-based models of complex adaptive systems either implicitly or explicitly require search functions. This paper considers a new nested querying approach to simplifying such agent-based modeling and multi-agent simulation search problems.

  4. A low-power network search engine based on statistical partitioning

    OpenAIRE

    Basci, F; Kocak, T

    2004-01-01

    Network search engines based on ternary CAMs (content addressable memories) are widely used in routers. However, due to the parallel search nature of TCAMs, power consumption becomes a critical issue. We propose an architecture that partitions the lookup table into multiple TCAM chips, based on the individual TCAM cell status, and achieves lower power figures

  5. Searches for physics beyond the Standard Model using jet-based resonances with the ATLAS Detector

    CERN Document Server

    Frate, Meghan; The ATLAS collaboration

    2016-01-01

    Run 2 of the LHC, with its increased center-of-mass energy, is an unprecedented opportunity to discover physics beyond the Standard Model. One interesting possibility for conducting such searches is to use resonances based on jets. The latest search results from the ATLAS experiment, based on either inclusive or heavy-flavour jets, will be presented.

  6. Teaching AI Search Algorithms in a Web-Based Educational System

    Science.gov (United States)

    Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis

    2013-01-01

    In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…

  7. Risk-based scheduling of multiple search passes for UUVs

    Science.gov (United States)

    Baylog, John G.; Wettergren, Thomas A.

    2016-05-01

    This paper addresses selected computational aspects of collaborative search planning when multiple search agents seek to find hidden objects (i.e. mines) in operating environments where the detection process is prone to false alarms. A Receiver Operator Characteristic (ROC) analysis is applied to construct a Bayesian cost objective function that weighs and combines missed detection and false alarm probabilities. It is shown that for fixed ROC operating points and a validation criterion consisting of a prerequisite number of detection outcomes, an interval exists in the number of conducted search passes over which the risk objective function is supermodular. We show that this property is not retained beyond validation criterion boundaries. We investigate the use of greedy algorithms for distributing search effort and, in particular, examine the double greedy algorithm for its applicability under conditions of varying criteria. Numerical results are provided to demonstrate the effectiveness of the approach.
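
    A toy version of the Bayesian cost trade-off described above (not the paper's supermodularity analysis) weighs the residual missed-detection risk, which shrinks geometrically with independent passes, against the false-alarm cost that accumulates per pass. All parameter values below are illustrative assumptions.

```python
def pass_risk(k, prior=0.3, pd=0.8, pfa=0.1, c_md=10.0, c_fa=1.0):
    """Bayesian risk after k independent search passes over a cell with
    prior object probability `prior`: weighted residual missed-detection
    probability plus accumulated expected false-alarm cost."""
    return c_md * prior * (1 - pd) ** k + c_fa * k * pfa

def best_passes(max_k=10, **kw):
    """Number of passes minimising the combined risk."""
    return min(range(max_k + 1), key=lambda k: pass_risk(k, **kw))
```

    With the default toy parameters, the risk first falls as passes suppress misses, then rises again as false-alarm costs dominate, so a finite number of passes is optimal.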

  8. Personalized Concept-Based Clustering Of Search Engine Queries

    Directory of Open Access Journals (Sweden)

    Rohit Chouhan

    2016-06-01

    Full Text Available Web search currently faces several problems: search queries are often very short and ambiguous, and they do not capture exactly what the user wants. To address this, some search engines suggest terms that are semantically related to the submitted query so that users can select, from the suggestions, the ones that reflect their information needs. In this paper, we introduce a hybrid approach that takes the user's conceptual preferences into account in order to provide personalized query recommendations. We achieve this goal with two new strategies. First, we develop online techniques that extract concepts from the web snippets of the search results returned for a query and use these concepts to identify queries related to it. Second, we propose a new two-phase personalized agglomerative clustering algorithm that is able to generate personalized query clusters. The proposed approach achieves better precision and recall than existing query clustering methods.
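
The clustering idea can be illustrated with a toy single-link agglomerative clusterer over hypothetical concept sets. All query strings, concept labels and the 0.3 threshold below are invented for illustration and are not taken from the paper:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

# Hypothetical concept sets mined from the web snippets of each query.
concepts = {
    "apple pie": {"recipe", "dessert", "baking"},
    "apple tart": {"recipe", "dessert", "pastry"},
    "apple stock": {"finance", "shares"},
    "apple shares": {"finance", "shares", "nasdaq"},
}

def agglomerate(items, threshold=0.3):
    # Single-link agglomerative clustering: repeatedly merge the most
    # similar pair of clusters until no pair clears the threshold.
    clusters = [{q} for q in items]

    def sim(c1, c2):
        return max(jaccard(items[a], items[b]) for a in c1 for b in c2)

    while len(clusters) > 1:
        score, i, j = max(
            ((sim(c1, c2), i, j)
             for i, c1 in enumerate(clusters)
             for j, c2 in enumerate(clusters) if i < j),
            key=lambda t: t[0])
        if score < threshold:
            break
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

result = agglomerate(concepts)
```

On this toy input the dessert-related and finance-related queries end up in two separate clusters, which is the kind of grouping a query-recommendation system can present to the user.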

  9. Visualization for Information Retrieval based on Fast Search Technology

    OpenAIRE

    Mamoon H. Mamoon; Hazem M. El-Bakry; Amany Salama

    2013-01-01

    Information retrieval is the core technique of a search engine. An information retrieval system returns many results, some more relevant than others and some not relevant at all. While the use of search engines to retrieve information has grown very substantially, problems remain with information retrieval systems: their interfaces do not help users perceive the precision of the results. It is therefore not surprising that graphical visualizations have been ...

  10. Designing a soft preference based search interface for the housing market

    OpenAIRE

    Oudshoorn, Kevin

    2011-01-01

    Websites aiming to assist users in finding a new house are becoming increasingly popular. Finding a potentially relevant house is based on the user's search criteria and the ability to define these criteria in an easy-to-use search user interface. Due to hard constraints, over-specification is one of the problems many users encounter when searching for a house online. In this paper, we propose a soft constraint-based search user interface for the housing domain. Our analysis of existing websites...

  11. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    OpenAIRE

    Maryam Akbari; Mozafar Cheshme Sohrabi; Ebrahim Afshar Zanjani

    2012-01-01

    The present study investigated the adoption process of search engines and meta search engines by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potential and effective tools in search engine and meta search engine adoption among University of Isfahan users. The research method was a descriptive survey. The cases of the study were all of the post...

  12. Comfort and human factors in office and residential settings. (Latest citations from the NTIS data base). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    1992-04-01

    The bibliography contains citations concerning human factors engineering, anthropometry, and ergonomics as they relate to human comfort in the office and home. Human requirements, including ventilation, temperature control, and lighting, are considered. Research regarding environmental architecture, and engineering, safety, and convenience aspects are discussed. (Contains a minimum of 142 citations and includes a subject term index and title list.)

  13. A study of the disciplinary structure of mechanics based on the titles of published journal articles in mechanics

    Institute of Scientific and Technical Information of China (English)

    CHEN; Lixin; LIU; Zeyuan; LIANG; Liming

    2010-01-01

    Scientometrics is an emerging academic field for the exploration of the structure of science through journal citation relations. However, this article studies the contents of subject-relevant journals rather than the citations contained therein, with the purpose of discovering the disciplinary structure of a science, mechanics in our case. Based on the title wordings of 68,075 articles published in 66 mechanics journals, and using research tools such as word frequency analysis, multidimensional scaling analysis and factor analysis, this article analyzes the similarities and distinctions of those journals' contents in the subject field of mechanics. We first convert the complex internal relations of these mechanics journals into a small number of independent indicators. The group of selected mechanics journals is then classified by a cluster analysis. This article demonstrates that the relations among the research contents of mechanics can be shown in an intuitively recognizable map, and it analyzes how the major branches of mechanics, such as solid mechanics, fluid mechanics, rational mechanics (including mathematical methods in mechanics), sound and vibration mechanics, and computational mechanics, are related to the main thematic tenet of our study. It is hoped that such an approach, buttressed with this new perspective, will enrich our means to explore the disciplinary structure of science and technology in general and mechanics in particular.

  14. The NeuARt II system: a viewing tool for neuroanatomical data based on published neuroanatomical atlases

    Directory of Open Access Journals (Sweden)

    Cheng Wei-Cheng

    2006-12-01

    Full Text Available Abstract Background Anatomical studies of neural circuitry describing the basic wiring diagram of the brain produce intrinsically spatial, highly complex data of great value to the neuroscience community. Published neuroanatomical atlases provide a spatial framework for these studies. We have built an informatics framework based on these atlases for the representation of neuroanatomical knowledge. This framework not only captures current methods of anatomical data acquisition and analysis, it allows these studies to be collated, compared and synthesized within a single system. Results We have developed an atlas-viewing application ('NeuARt II') in the Java language with unique functional properties. These include the ability to use copyrighted atlases as templates within which users may view, save and retrieve data-maps and annotate them with volumetric delineations. NeuARt II also permits users to view multiple levels on multiple atlases at once. Each data-map in this system is simply a stack of vector images with one image per atlas level, so any set of accurate drawings made onto a supported atlas (in vector graphics format) could be uploaded into NeuARt II. Presently the database is populated with a corpus of high-quality neuroanatomical data from the laboratory of Dr Larry Swanson (consisting of 64 highly detailed maps of PHAL tract-tracing experiments, made up of 1039 separate drawings that were published in 27 primary research publications over 17 years). Herein we take selective examples from these data to demonstrate the features of NeuARt II. Our informatics tool permits users to browse, query and compare these maps. The NeuARt II tool operates within a bioinformatics knowledge management platform (called 'NeuroScholar') either as a standalone or a plug-in application. Conclusion Anatomical localization is fundamental to neuroscientific work and atlases provide an easily-understood framework that is widely used by neuroanatomists and non

  15. Bielefeld Academic Search Engine (BASE) An end-user oriented institutional repository search service

    OpenAIRE

    Pieper, Dirk; Summann, Friedrich

    2006-01-01

    In a SPARC position paper (http://www.arl.org/sparc/IR/ir.html) published in 2002, Raym Crow defined an institutional repository as a "digital collection capturing and preserving the intellectual output of a single or multi-university community". Repository servers can help institutions to increase their visibility and, in addition, they are beginning to change the system of scholarly communication. There are some multi-institutionally driven repository servers, but most of the repositories a...

  16. World Search Engine IQ Test Based on the Internet IQ Evaluation Algorithms

    OpenAIRE

    Feng Liu; Yong Shi; Bo Wang

    2015-01-01

    With increasing concern about Internet intelligence, this paper proposes concepts of the Internet and Internet-subsystem IQs over search engines. Based on human IQ calculations, the paper first establishes a 2014 Internet Intelligence Scale and designs an intelligence test bank for search engines. Then, an intelligence test using this test bank is carried out on 50 typical search engines from 25 countries and regions across the world. Meanwhile, another intelligence test is also conduct...

  17. Pulsed laser deposition: Superconducting films. (Latest citations from the INSPEC: Information Services for the Physics and Engineering Communities database). Published Search

    International Nuclear Information System (INIS)

    The bibliography contains citations concerning technology and evaluation of pulsed laser deposition of superconducting films. Citations discuss the deposition of yttrium-barium based high-temperature superconducting thin films on a variety of substrates. Topics also examine laser ablation, film structures and quality, epitaxial growth, substrate temperature, doping materials, bismuth-strontium based superconducting films, pulsed excimer lasers, critical current density, and microwave surface resistance. (Contains a minimum of 190 citations and includes a subject term index and title list.)

  18. Semantic search using modular ontology learning and case-based reasoning

    OpenAIRE

    Ben Mustapha, Nesrine; Baazaoui, Hajer; Aufaure, Marie-Aude; Ben Ghezala, Henda

    2010-01-01

    International audience In this paper, we present a semantic search approach based on Case-based reasoning and modular Ontology learning. A case is defined by a set of similar queries associated with its relevant results. The case base is used for ontology learning and for contextualizing the search process. Modular ontologies are designed to be used for case representation and indexing. Our work aims at improving ontology-based information retrieval by the integration of the traditional in...

  19. Shopping Malls, Updates of the shopping malls and centers based of land use, Published in unknown, Johnson County AIMS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Shopping Malls dataset, was produced all or in part from Published Reports/Deeds information as of unknown. It is described as 'Updates of the shopping malls...

  20. Visualization for Information Retrieval based on Fast Search Technology

    Directory of Open Access Journals (Sweden)

    Mamoon H. Mamoon

    2013-03-01

    Full Text Available Information retrieval is the core technique of a search engine. An information retrieval system returns many results, some more relevant than others and some not relevant at all. While the use of search engines to retrieve information has grown very substantially, problems remain with information retrieval systems: their interfaces do not help users perceive the precision of the results. It is therefore not surprising that graphical visualizations have been employed in search engines to assist users. The main objective of Internet users is to find the required information with high efficiency and effectiveness. In this paper we briefly present information visualization's role in enhancing web information retrieval systems, covering techniques such as tree view, title view, map view, bubble view and cloud view, and tools such as highlighting and Colored Query Result.

  1. Ranking Search Engine Result Pages based on Trustworthiness of Websites

    OpenAIRE

    Srikantaiah K C; Srikanth P L; Tejaswi V; Shaila K; Venugopal K R; L M Patnaik

    2012-01-01

    The World Wide Web (WWW) is a repository of a large number of web pages which can be accessed via the Internet by multiple users at the same time, and it is therefore ubiquitous in nature. The search engine is a key application used to search the web pages in this huge repository, and it uses link analysis for ranking web pages without considering the facts provided by them. A new algorithm called the Probability of Correctness of Facts (PCF)-Engine is proposed to find the accuracy of the fact...

  2. A constrained optimization algorithm based on the simplex search method

    Science.gov (United States)

    Mehta, Vivek Kumar; Dasgupta, Bhaskar

    2012-05-01

    In this article, a robust method is presented for handling constraints with the Nelder and Mead simplex search method, which is a direct search algorithm for multidimensional unconstrained optimization. The proposed method is free from the limitations of previous attempts that demand a feasible initial simplex or a projection of infeasible points onto the nonlinear constraint boundaries. The method is tested on several benchmark problems and the results are compared with various evolutionary algorithms available in the literature. The proposed method is found to be competitive with respect to the existing algorithms in terms of effectiveness and efficiency.
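
A minimal sketch of the general idea, not the authors' specific method: a bare-bones Nelder-Mead simplex combined with an exterior quadratic penalty, so the initial simplex may start infeasible. The test problem, penalty weight and iteration budget are invented:

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, iters=600):
    # Bare-bones Nelder-Mead: reflection, expansion, contraction, shrink.
    n = len(x0)
    simplex = [np.array(x0, float)]
    for i in range(n):
        p = np.array(x0, float)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = sum(simplex[:-1]) / n
        refl = centroid + (centroid - worst)
        if f(refl) < f(best):
            expd = centroid + 2.0 * (centroid - worst)
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = centroid + 0.5 * (worst - centroid)
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink the whole simplex toward the best vertex
                simplex = [best] + [best + 0.5 * (p - best) for p in simplex[1:]]
    return min(simplex, key=f)

def penalized(p):
    # Exterior quadratic penalty for the toy constraint x + y >= 1, so the
    # search stays unconstrained; the true optimum is near (0.5, 0.5).
    x, y = p
    viol = max(0.0, 1.0 - (x + y))
    return x * x + y * y + 1e4 * viol * viol

opt = nelder_mead(penalized, [3.0, -1.0])
```

Because infeasible points are merely penalized rather than rejected, the starting simplex does not need to be feasible, which is the limitation the abstract says the authors remove.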

  3. A quantum search algorithm based on partial adiabatic evolution

    Institute of Scientific and Technical Information of China (English)

    Zhang Ying-Yu; Hu He-Ping; Lu Song-Feng

    2011-01-01

    This paper presents and implements a specified partial adiabatic search algorithm on a quantum circuit. It studies the minimum energy gap between the first excited state and the ground state of the system Hamiltonian and it finds that, in the case of M=1, the algorithm has the same performance as the local adiabatic algorithm. However, the algorithm evolves globally only within a small interval, which implies that it keeps the advantages of global adiabatic algorithms without losing the speedup of the local adiabatic search algorithm.
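
For the standard interpolating Hamiltonian used in adiabatic Grover search, H(s) = (1-s)(I - |psi><psi|) + s(I - |m><m|) with M = 1 marked item among N, the gap between the ground and first excited state is g(s) = sqrt(1 - 4(1 - 1/N)s(1 - s)), minimal at s = 1/2. This is a textbook formula, not taken from the paper itself; N = 1024 below is an arbitrary choice:

```python
import math

def grover_gap(s, N):
    # Gap of H(s) = (1 - s)(I - |psi><psi|) + s(I - |m><m|) for
    # unstructured search with one marked item among N (M = 1).
    return math.sqrt(1.0 - 4.0 * (1.0 - 1.0 / N) * s * (1.0 - s))

N = 1024
g_min = min(grover_gap(i / 1000.0, N) for i in range(1001))
# The minimum sits at s = 1/2 and equals 1/sqrt(N), which is why adiabatic
# (and partial adiabatic) schedules only need to slow down near s = 1/2.
```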

  4. Searching for evidence-based information in eye care

    OpenAIRE

    Karen Blackhall

    2005-01-01

    A growth in health awareness has led to an increase in the volume and availability of health information. Health care professionals may feel under pressure to read this increasing volume of material. A search on the internet is often a quick and efficient way to find information and this can be done by using one of the many search engines such as Google or Google Scholar2 or one of the health care information portals such as Omni. A previous article by Sally Parsley in the Community Eye Healt...

  5. A quantum search algorithm based on partial adiabatic evolution

    International Nuclear Information System (INIS)

    This paper presents and implements a specified partial adiabatic search algorithm on a quantum circuit. It studies the minimum energy gap between the first excited state and the ground state of the system Hamiltonian and it finds that, in the case of M = 1, the algorithm has the same performance as the local adiabatic algorithm. However, the algorithm evolves globally only within a small interval, which implies that it keeps the advantages of global adiabatic algorithms without losing the speedup of the local adiabatic search algorithm. (general)

  6. The Academic Publishing Industry

    DEFF Research Database (Denmark)

    Nell, Phillip Christopher; Wenzel, Tim Ole; Schmidt, Florian

    2014-01-01

    The case is intended to be used as a basis for class discussion rather than to illustrate effective handling of a managerial situation. It is based on published sources, interviews, and personal experience. The authors have disguised some names and other identifying information to protect...

  7. ExactSearch: a web-based plant motif search tool

    OpenAIRE

    Gunasekara, Chathura; Subramanian, Avinash; Avvari, Janaki Venkata Ram Kumar; Li, Bin; Chen, Su; Wei, Hairong

    2016-01-01

    Background Plant biologists frequently need to examine whether a sequence motif bound by a specific transcription or translation factor is present in the proximal promoters or 3′ untranslated regions (3′ UTRs) of a set of plant genes of interest. To achieve such a task, plant biologists not only have to identify an appropriate algorithm for motif searching, but also must manipulate large volumes of sequence data, making the task burdensome to carry out. Result In this study, we developed a web p...
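
The core exact-match step such a tool needs can be sketched in a few lines. The gene names and motif below are invented, and a real implementation would also handle reverse complements and degenerate IUPAC codes:

```python
def find_motif(seq, motif):
    # Return 0-based start positions of every exact occurrence of the
    # motif, including overlapping ones.
    hits, start = [], seq.find(motif)
    while start != -1:
        hits.append(start)
        start = seq.find(motif, start + 1)
    return hits

# Hypothetical promoter sequences keyed by gene id.
promoters = {"geneA": "TTGACGTCATTGACGTCA", "geneB": "AAACCCGGG"}
hits = {g: find_motif(s, "ACGTCA") for g, s in promoters.items()}
```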

  8. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    Science.gov (United States)

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  9. Effect of Reading Ability and Internet Experience on Keyword-Based Image Search

    Science.gov (United States)

    Lei, Pei-Lan; Lin, Sunny S. J.; Sun, Chuen-Tsai

    2013-01-01

    Image searches are now crucial for obtaining information, constructing knowledge, and building successful educational outcomes. We investigated how reading ability and Internet experience influence keyword-based image search behaviors and performance. We categorized 58 junior-high-school students into four groups of high/low reading ability and…

  10. New Diamond Block Based Gradient Descent Search Algorithm for Motion Estimation in the MPEG-4 Encoder

    Institute of Scientific and Technical Information of China (English)

    王振洲; 李桂苓

    2003-01-01

    Motion estimation is an important part of the MPEG-4 encoder, due to its significant impact on the bit rate and the output quality of the encoded sequence. Unfortunately this feature takes a significant part of the encoding time, especially when the straightforward full search (FS) algorithm is used. In this paper, a new algorithm named diamond block based gradient descent search (DBBGDS), which is significantly faster than FS and gives similar output quality, is proposed. At the same time, other algorithms, such as three step search (TSS), improved three step search (ITSS), new three step search (NTSS), four step search (4SS), cellular search (CS), diamond search (DS) and block based gradient descent search (BBGDS), are adopted and compared with DBBGDS. As the experimental results show, DBBGDS has its own advantages. Although DS has been adopted by the MPEG-4 VM, its output quality is worse than that of the proposed algorithm while its complexity is similar. Compared with BBGDS, the proposed algorithm can achieve better output quality.
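
The gradient-descent family of block-matching algorithms (BBGDS, DS, DBBGDS) shares one loop: evaluate the SAD of the centre and its small-diamond neighbours, and move while a neighbour improves. A simplified sketch with a 4-point diamond, a synthetic smooth 16x16 frame pair displaced by (2, 1), and invented block coordinates; this is not the paper's exact DBBGDS pattern:

```python
def sad(cur, ref, bx, by, dx, dy, B=4):
    # Sum of absolute differences between the BxB block of the current
    # frame at (bx, by) and the reference block displaced by (dx, dy).
    total = 0
    for y in range(B):
        for x in range(B):
            total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
    return total

def diamond_search(cur, ref, bx, by, B=4, rng=4):
    # Small-diamond gradient descent: keep moving to the best of the four
    # diamond neighbours until the centre is the local SAD minimum.
    mv = (0, 0)
    while True:
        cands = [mv] + [(mv[0] + ox, mv[1] + oy)
                        for ox, oy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cands = [(dx, dy) for dx, dy in cands
                 if abs(dx) <= rng and abs(dy) <= rng]
        best = min(cands, key=lambda d: sad(cur, ref, bx, by, d[0], d[1], B))
        if best == mv:
            return mv
        mv = best

W = 16
ref = [[(x - 8) ** 2 + (y - 8) ** 2 for x in range(W)] for y in range(W)]
# Current frame: the same smooth pattern shifted so the true motion
# vector of the block at (6, 6) is (dx, dy) = (2, 1).
cur = [[(x + 2 - 8) ** 2 + (y + 1 - 8) ** 2 for x in range(W)] for y in range(W)]
mv = diamond_search(cur, ref, 6, 6)
```

On a smooth error surface the descent reaches the true displacement after a handful of SAD evaluations, which is exactly the speed advantage these methods hold over full search.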

  11. Adaptive Search Protocol Based on Optimized Ant Colony Algorithm in Peer-to-Peer Network

    Directory of Open Access Journals (Sweden)

    Chun-Ying Liu

    2013-04-01

    Full Text Available In order to address the low search efficiency of peer-to-peer (P2P) networks, we introduce an ant colony algorithm with particle swarm optimization into the search procedure and present a new adaptive search protocol (SACASP) for P2P networks based on it. The approach simulates the process of ants searching for food and can direct query routing efficiently according to an adaptive strategy and the positive-feedback principle of the pheromone. Adding particle swarm optimization to the ant colony algorithm decreases the blindness of message transmission in the early search stage. We give the adaptive P2P search model based on the fused algorithm, and design the data structure and steps of the model. Simulation experiments show that PSACASP can effectively shorten search time and reduce the number of query packets compared with other search algorithms, achieving better search performance and lower network load.
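
The pheromone mechanism can be sketched with a toy simulation. The overlay topology, evaporation rate and deposit rule below are invented, and the particle swarm component is omitted:

```python
import random

random.seed(1)

# Toy 4-node unstructured overlay; the resource lives at node 3 and the
# only route from node 0 runs 0 -> 1 -> 3 (all values are invented).
neighbors = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
target, ttl, rho = 3, 6, 0.05
pher = {(u, v): 1.0 for u in neighbors for v in neighbors[u]}

def walk(start):
    # One "ant" forwards the query hop by hop, picking the next peer
    # with probability proportional to the edge pheromone.
    node, path = start, []
    for _ in range(ttl):
        nbrs = neighbors[node]
        nxt = random.choices(nbrs, weights=[pher[(node, v)] for v in nbrs])[0]
        path.append((node, nxt))
        node = nxt
        if node == target:
            return path
    return None

for _ in range(300):
    path = walk(0)
    for edge in pher:          # evaporation fades stale information
        pher[edge] *= 1.0 - rho
    if path:                   # positive feedback: reinforce a successful route
        for edge in path:
            pher[edge] += 1.0 / len(path)
```

After training, the pheromone on the productive edges dominates, so later queries are routed toward the resource instead of wandering blindly.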

  12. A Greedy Search Algorithm for Maneuver-Based Motion Planning of Agile Vehicles

    OpenAIRE

    Neas, Charles Bennett

    2010-01-01

    This thesis presents a greedy search algorithm for maneuver-based motion planning of agile vehicles. In maneuver-based motion planning, vehicle maneuvers are solved offline and saved in a library to be used during motion planning. From this library, a tree of possible vehicle states can be generated through the search space. A depth-first, library-based algorithm called AD-Lib is developed and used to quickly provide feasible trajectories along the tree. AD-Lib combines greedy search tech...

  13. The Search for Extension: 7 Steps to Help People Find Research-Based Information on the Internet

    Science.gov (United States)

    Hill, Paul; Rader, Heidi B.; Hino, Jeff

    2012-01-01

    For Extension's unbiased, research-based content to be found by people searching the Internet, it needs to be organized in a way conducive to the ranking criteria of a search engine. With proper web design and search engine optimization techniques, Extension's content can be found, recognized, and properly indexed by search engines and…

  14. Constructing Virtual Documents for Keyword Based Concept Search in Web Ontology

    Directory of Open Access Journals (Sweden)

    Sapna Paliwal

    2013-04-01

    Full Text Available Web ontologies are structural frameworks for organizing information on the Semantic Web and provide shared concepts. An ontology formally represents knowledge about a particular entity as a set of concepts within a particular domain on the Semantic Web. Web ontologies help to describe concepts within a domain and also enable semantic interoperability between two different applications, for example by using Falcons concept search, which facilitates concept searching and ontology reuse. Constructing virtual documents enables keyword-based concept search in ontologies. The proposed method examines how a search engine helps users find ontologies in less time and thus satisfy their needs. It combines supporting technologies with a new technique for constructing virtual documents of concepts for keyword-based search; based on a popularity scheme, we rank the concepts and ontologies and generate structured snippets according to the query. We also report on user feedback and a usability evaluation.

  15. Optimal attack strategy of complex networks based on tabu search

    Science.gov (United States)

    Deng, Ye; Wu, Jun; Tan, Yue-jin

    2016-01-01

    The problem of network disintegration has broad applications and has recently received growing attention, for example in network confrontation and the disintegration of harmful networks. This paper presents an optimized attack strategy model for complex networks and introduces tabu search, a heuristic optimization algorithm rarely applied to the study of network robustness, into the network disintegration problem to identify the optimal attack strategy. The efficiency of the proposed solution was verified by comparing it with other attack strategies on various model networks and a real-world network. Numerical experiments suggest that our solution can improve the effect of network disintegration and that the "best" choice for node failure attacks can be identified through global searches. Our understanding of the optimal attack strategy may also shed light on a new property of nodes within network disintegration and deserves additional study.
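
A minimal tabu search for the node-removal problem might look as follows; the toy graph, tabu tenure and swap neighbourhood are assumptions, not the paper's exact formulation. The objective minimized is the size of the largest connected component after removing k nodes:

```python
import random

def largest_component(adj, removed):
    # Size of the largest connected component after deleting `removed`.
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

def tabu_attack(adj, k, iters=50, tenure=3):
    random.seed(0)
    cur = set(random.sample(sorted(adj), k))
    best, best_val = set(cur), largest_component(adj, cur)
    tabu = {}
    for t in range(iters):
        moves = []
        for out in cur:
            for inc in adj:
                if inc in cur or tabu.get((out, inc), -1) >= t:
                    continue
                cand = (cur - {out}) | {inc}
                moves.append((largest_component(adj, cand), out, inc, cand))
        if not moves:
            break
        val, out, inc, cand = min(moves, key=lambda m: m[0])
        tabu[(inc, out)] = t + tenure   # forbid undoing the swap for a while
        cur = cand
        if val < best_val:
            best, best_val = set(cand), val
    return best, best_val

# Two 4-cliques joined through the cut vertex 4 (an invented example).
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
       4: [3, 5], 5: [4, 6, 7, 8], 6: [5, 7, 8], 7: [5, 6, 8], 8: [5, 6, 7]}
best, best_val = tabu_attack(adj, k=1)
```

On this graph the search quickly identifies the cut vertex as the single most disruptive removal, splitting the network into two components of size 4.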

  16. Towards ontology based search and knowledgesharing using domain ontologies

    DEFF Research Database (Denmark)

    Zambach, Sine

    This paper reports on work in progress. We present work on domain-specific verbs and their role as relations in domain ontologies. The domain ontology in focus for our research is modeled in cooperation with the Danish biotech company Novo Nordic. Two of the main purposes of domain ontologies for enterprises are as background for search and for knowledge sharing, used e.g. for multilingual product development. Our aim is to use linguistic methods and logic to construct consistent ontologies that can be used both from a search perspective and for knowledge sharing. This focuses on identifying verbs for relations in the ontology modeling. For this work we use frequency lists from a biomedical text corpus of different genres as well as a study of the relations used in other biomedical text mining tools. In addition, we discuss how these relations can be used in a broader perspective.

  17. Tree-Based Search for Stochastic Simulation Algorithm

    OpenAIRE

    Vo Hong, Thanh; Zunino, Roberto

    2011-01-01

    In systems biology, cell behavior is governed by a series of biochemical reactions. The stochastic simulation algorithm (SSA), introduced by Gillespie, is a standard method to properly realize the dynamic and stochastic nature of such systems. In general, SSA follows a two-step approach: finding the next reaction firing, and updating the system accordingly. In this paper we apply the Huffman tree, an optimal tree for data compression, to improve the search for the next reacti...
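
The search being accelerated is the selection step of Gillespie's direct method: given propensities a_j and a uniform random number r, find the first reaction whose cumulative propensity reaches r times the total. A sketch contrasting the linear scan with a logarithmic-time lookup on the cumulative table; a Huffman-shaped tree further shortens the expected path when propensities are skewed. The propensity values are invented:

```python
import bisect
from itertools import accumulate

# Propensities of four hypothetical reactions (invented values).
a = [0.5, 3.0, 1.0, 0.5]
total = sum(a)

def select_linear(r):
    # Direct-method selection: first j with cumulative propensity >= r * total.
    acc = 0.0
    for j, aj in enumerate(a):
        acc += aj
        if r * total <= acc:
            return j
    return len(a) - 1

cum = list(accumulate(a))

def select_tree(r):
    # The same query answered in O(log n) on the cumulative table; a
    # Huffman-shaped tree additionally gives frequent reactions short paths.
    return bisect.bisect_left(cum, r * total)
```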

  18. A Framework for Hierarchical Clustering Based Indexing in Search Engines

    OpenAIRE

    Parul Gupta; Sharma, A.K.

    2011-01-01

    Granting efficient and fast access to the index is a key issue for the performance of Web Search Engines. In order to enhance memory utilization and favor fast query resolution, WSEs use Inverted File (IF) indexes that consist of an array of posting lists, where each posting list is associated with a term and contains the term as well as the identifiers of the documents containing the term. Since the document identifiers are stored in sorted order, they can be stored as the difference between thes...
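
The delta-encoding trick the abstract describes, storing sorted document identifiers as gaps from their predecessors, can be sketched directly; the terms and ids below are invented:

```python
def encode_gaps(doc_ids):
    # Posting lists keep doc ids sorted, so each id can be stored as the
    # gap from its predecessor; small gaps compress well.
    gaps, prev = [], 0
    for d in doc_ids:
        gaps.append(d - prev)
        prev = d
    return gaps

def decode_gaps(gaps):
    ids, acc = [], 0
    for g in gaps:
        acc += g
        ids.append(acc)
    return ids

# Invented two-term inverted file.
index = {"search": [3, 7, 8, 42], "engine": [7, 42]}
compressed = {t: encode_gaps(p) for t, p in index.items()}
```

The gaps are small non-negative integers, which is what makes variable-byte or gamma codes effective on real posting lists.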

  19. Multilevel Threshold Based Gray Scale Image Segmentation using Cuckoo Search

    OpenAIRE

    Samantaa, Sourav; Dey, Nilanjan; Das, Poulami; Acharjee, Suvojit; Chaudhuri, Sheli Sinha

    2013-01-01

    Image segmentation is a technique of partitioning an original image into distinct classes. Many possible solutions may be available for segmenting an image into a certain number of classes, each with a different quality of segmentation. In our proposed method, a multilevel thresholding technique is used for image segmentation, and a new approach based on Cuckoo Search (CS) is used to select optimal threshold values. In other words, the algorithm is used to achieve the best solution ...
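
A toy version of the idea: maximize Otsu's between-class variance over candidate thresholds with a cuckoo-style random search. This handles a single threshold only, the heavy-tailed step is a crude stand-in for a true Levy flight, and the synthetic bimodal image is invented:

```python
import random

random.seed(7)

# Synthetic bimodal "image": two flat gray-level modes (invented values).
pixels = list(range(40, 61)) * 5 + list(range(190, 211)) * 5

def between_class_variance(t):
    # Otsu's criterion for a single threshold t.
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def cuckoo_search(n_nests=15, iters=100, pa=0.25):
    nests = [random.uniform(0, 255) for _ in range(n_nests)]
    for _ in range(iters):
        # New solution via a heavy-tailed random step (a crude stand-in
        # for the Levy flight used in the published algorithm).
        i = random.randrange(n_nests)
        step = random.gauss(0, 1) / max(abs(random.gauss(0, 1)), 1e-9)
        cand = min(255.0, max(0.0, nests[i] + 10.0 * step))
        j = random.randrange(n_nests)
        if between_class_variance(cand) > between_class_variance(nests[j]):
            nests[j] = cand
        # Abandon a fraction pa of the worst nests.
        nests.sort(key=between_class_variance)
        for k in range(int(pa * n_nests)):
            nests[k] = random.uniform(0, 255)
    return max(nests, key=between_class_variance)

t_best = cuckoo_search()
```

Any threshold between the two modes fully separates them and maximizes the criterion, so the search should settle somewhere in that valley.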

  20. Storage Ring Based EDM Search — Achievements and Goals

    Science.gov (United States)

    Lehrach, Andreas

    2016-02-01

    This paper summarizes the experimental achievements of the JEDI (Jülich Electric Dipole moment Investigations) Collaboration to exploit and demonstrate the feasibility of charged particle Electric Dipole Moment searches with storage rings at the Cooler Synchrotron COSY of the Forschungszentrum Jülich. Recent experimental results, design and optimization of critical accelerator elements, progress in beam and spin tracking, and future goals of the R & D program at COSY are presented.

  1. Semiconductor-based experiments for neutrinoless double beta decay search

    Science.gov (United States)

    Barnabé Heider, Marik; Gerda Collaboration

    2012-08-01

    Three experiments are employing semiconductor detectors in the search for neutrinoless double beta (0νββ) decay: COBRA, Majorana and GERDA. COBRA is studying the prospects of using CdZnTe detectors in terms of achievable energy resolution and background suppression. These detectors contain several ββ emitters, and the most promising for 0νββ-decay search is 116Cd. Majorana and GERDA will use isotopically enriched high-purity Ge detectors to search for 0νββ-decay of 76Ge. Their aim is to achieve a background of ⩽10⁻³ counts/(kg·y·keV) at the Q-value, an improvement compared to the present state of the art. Majorana will operate Ge detectors in electroformed-Cu vacuum cryostats. A first cryostat housing a natural-Ge detector array is currently under preparation. In contrast, GERDA is operating bare Ge detectors submerged in liquid argon. The construction of the GERDA experiment is completed and a commissioning run started in June 2010. A string of natural-Ge detectors is operated to test the complete experimental setup and to determine the background before submerging the detectors enriched in 76Ge. An overview and a comparison of these three experiments will be presented together with the latest results and developments.

  2. Development of an item bank for food parenting practices based on published instruments and reports from Canadian and US parents.

    Science.gov (United States)

    O'Connor, Teresia M; Pham, Truc; Watts, Allison W; Tu, Andrew W; Hughes, Sheryl O; Beauchamp, Mark R; Baranowski, Tom; Mâsse, Louise C

    2016-08-01

    Research to understand how parents influence their children's dietary intake and eating behaviors has expanded in the past decades and a growing number of instruments are available to assess food parenting practices. Unfortunately, there is no consensus on how constructs should be defined or operationalized, making comparison of results across studies difficult. The aim of this study was to develop a food parenting practice item bank with items from published scales and supplement with parenting practices that parents report using. Items from published scales were identified from two published systematic reviews along with an additional systematic review conducted for this study. Parents (n = 135) with children 5-12 years old from the US and Canada, stratified to represent the demographic distribution of each country, were recruited to participate in an online semi-qualitative survey on food parenting. Published items and parent responses were coded using the same framework to reduce the number of items into representative concepts using a binning and winnowing process. The literature contributed 1392 items and parents contributed 1985 items, which were reduced to 262 different food parenting concepts (26% exclusive from literature, 12% exclusive from parents, and 62% represented in both). Food parenting practices related to 'Structure of Food Environment' and 'Behavioral and Educational' were emphasized more by parent responses, while practices related to 'Consistency of Feeding Environment' and 'Emotional Regulation' were more represented among published items. The resulting food parenting item bank should next be calibrated with item response modeling for scientists to use in the future. PMID:27131416

  3. A geometry-based image search engine for advanced RADARSAT-1/2 GIS applications

    Science.gov (United States)

    Kotamraju, Vinay; Rabus, Bernhard; Busler, Jennifer

    2012-06-01

    Space-borne Synthetic Aperture Radar (SAR) sensors, such as RADARSAT-1 and -2, enable a multitude of defense and security applications owing to their unique capabilities of cloud penetration, day/night imaging and multi-polarization imaging. As a result, advanced SAR image time series exploitation techniques such as Interferometric SAR (InSAR) and Radargrammetry are now routinely used in applications such as underground tunnel monitoring, infrastructure monitoring and DEM generation. Imaging geometry, as determined by the satellite orbit and imaged terrain, plays a critical role in the success of such techniques. This paper describes the architecture and the current status of development of a geometry-based search engine that allows the search and visualization of archived and future RADARSAT-1 and -2 images appropriate for a variety of advanced SAR techniques and applications. Key features of the search engine's scalable architecture include (a) Interactive GIS-based visualization of the search results; (b) A client-server architecture for online access that produces up-to-date searches of the archive images and that can, in future, be extended to acquisition planning; (c) A technique-specific search mode, wherein an expert user explicitly sets search parameters to find appropriate images for advanced SAR techniques such as InSAR and Radargrammetry; (d) A future application-specific search mode, wherein all search parameters implicitly default to preset values according to the application of choice such as tunnel monitoring, DEM generation and deformation mapping; (f) Accurate baseline calculations for InSAR searches, and optimum beam configuration for Radargrammetric searches; (g) Simulated quick look images and technique-specific sensitivity maps in the future.

  4. Bielefeld Academic Search Engine: a (Potential Information-)BASE for the Working Mathematician

    OpenAIRE

    Höppner, Michael; Becker, Hans; Stange, Kari; Wegner, Bernd

    2004-01-01

    A modern search-engine-based approach to scientific information retrieval is described as the logical next step after building up digital libraries. Bielefeld University Library has just established two demonstrators for possible retrieval services, one of them from mathematics.

  5. Analyzing Effective Features based on User Intention for Enhanced Map Search

    Directory of Open Access Journals (Sweden)

    Junki Matsuo

    2012-06-01

    Full Text Available Maps are among the most critical information sources in daily real-world activities. Thanks to advances in the Web and digital map processing techniques, we can now easily find maps in a variety of presentations suited to diverse purposes, from a trivial search for a restaurant to consulting a route during a trip. However, the maps served by today's representative map search engines, such as Google Maps, cannot satisfy all users, whose map-reading abilities and search purposes differ widely. Map search engines therefore need to provide maps well represented for specific needs. Numerous map contents are available on the Web, well drawn and shared on various sites, yet finding an appropriate one is not an easy task for users. To support map search on the Web, we developed a map search system that retrieves map contents drawn from various viewpoints by interacting with users through relevance feedback. In particular, we analyze each map content according to two distinguishing feature types: geographical features and image features. Significantly, the proposed system can deal with visual map contents by considering how they are represented. In this paper, we analyze effective features based on user intention for map search.

  6. A Feature-Weighted Instance-Based Learner for Deep Web Search Interface Identification

    OpenAIRE

    Hong Wang; Qingsong Xu; Youyang Chen; Jinsong Lan

    2013-01-01

    Determining whether a site has a search interface is a crucial priority for further research of deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experiment results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based lear...

  7. GeNemo: a search engine for web-based functional genomic data.

    Science.gov (United States)

    Zhang, Yongqing; Cao, Xiaoyi; Zhong, Sheng

    2016-07-01

    A set of new data types emerged from functional genomic assays, including ChIP-seq, DNase-seq, FAIRE-seq and others. The results are typically stored as genome-wide intensities (WIG/bigWig files) or functional genomic regions (peak/BED files). These data types present new challenges to big data science. Here, we present GeNemo, a web-based search engine for functional genomic data. GeNemo searches user-input data against online functional genomic datasets, including the entire collection of ENCODE and mouse ENCODE datasets. Unlike text-based search engines, GeNemo's searches are based on pattern matching of functional genomic regions. This distinguishes GeNemo from text or DNA sequence searches. The user can input any complete or partial functional genomic dataset, for example, a binding intensity file (bigWig) or a peak file. GeNemo reports any genomic regions, ranging from hundreds of bases to hundreds of thousands of bases, from any of the online ENCODE datasets that share similar functional (binding, modification, accessibility) patterns. This is enabled by a Markov Chain Monte Carlo-based maximization process, executed on up to 24 parallel computing threads. By clicking on a search result, the user can visually compare her/his data with the found datasets and navigate the identified genomic regions. GeNemo is available at www.genemo.org. PMID:27098038
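The region pattern matching described above can be illustrated with a minimal sketch. This is not GeNemo's MCMC implementation; the scoring function and peak lists below are illustrative assumptions, scoring similarity as the total base-pair overlap between two sorted BED-style peak lists.

```python
# Hypothetical sketch of comparing functional genomic regions by pattern
# matching rather than text: score two sorted (start, end) peak lists by
# their total overlapping bases, using a linear two-pointer sweep.

def overlap_score(peaks_a, peaks_b):
    """Total overlapping bases between two sorted lists of (start, end) peaks."""
    total, i, j = 0, 0, 0
    while i < len(peaks_a) and j < len(peaks_b):
        start = max(peaks_a[i][0], peaks_b[j][0])
        end = min(peaks_a[i][1], peaks_b[j][1])
        if start < end:
            total += end - start
        # advance whichever peak ends first
        if peaks_a[i][1] < peaks_b[j][1]:
            i += 1
        else:
            j += 1
    return total

query = [(100, 200), (500, 650)]      # e.g. peaks from a user BED file
dataset = [(150, 220), (600, 700)]    # e.g. peaks from an archived dataset
print(overlap_score(query, dataset))  # → 100
```

A real engine would normalize such scores (e.g. by union length) and rank datasets by them; the sweep itself runs in linear time over the two lists.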

  8. Face sketch synthesis via sparse representation-based greedy search.

    Science.gov (United States)

    Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li; Mingjin Zhang

    2015-08-01

    Face sketch synthesis has wide applications in digital entertainment and law enforcement. Although there is much research on face sketch synthesis, most existing algorithms cannot handle some nonfacial factors, such as hair style, hairpins, and glasses, if these factors are excluded from the training set. In addition, previous methods only work under well controlled conditions and fail on images whose backgrounds and sizes differ from the training set. To this end, this paper presents a novel method that combines both the similarity between different image patches and prior knowledge to synthesize face sketches. Given training photo-sketch pairs, the proposed method learns a photo patch feature dictionary from the training photo patches and replaces the photo patches with their sparse coefficients during the search process. For a test photo patch, we first obtain its sparse coefficient via the learnt dictionary and then search for its nearest neighbors (candidate patches) among all training photo patches using the sparse coefficients. After purifying the nearest neighbors with prior knowledge, the final sketch corresponding to the test photo can be obtained by Bayesian inference. The contributions of this paper are as follows: 1) we relax the nearest neighbor search area from a local region to the whole image without excessive time cost, and 2) our method can produce nonfacial factors that are not contained in the training set, is robust against image backgrounds, and can even ignore the alignment and image size of test photos. Our experimental results show that the proposed method outperforms several state-of-the-art methods in terms of perceptual and objective metrics. PMID:25879946

  9. Sequential search based on kriging: convergence analysis of some algorithms

    CERN Document Server

    Vazquez, Emmanuel

    2011-01-01

    Let $\FF$ be a set of real-valued functions on a set $\XX$ and let $S:\FF \to \GG$ be an arbitrary mapping. We consider the problem of making inference about $S(f)$, with $f\in\FF$ unknown, from a finite set of pointwise evaluations of $f$. We are mainly interested in the problems of approximation and optimization. In this article, we make a brief review of results concerning average error bounds of Bayesian search methods that use a random process prior about $f$.

  10. Critique of EPS/RIN/RCUK/DTI "Evidence-Based Analysis of Data Concerning Scholarly Journal Publishing"

    OpenAIRE

    Harnad, Stevan

    2006-01-01

    This Report on UK Scholarly Journals was commissioned by RIN, RCUK and DTI, and conducted by ELS, but its questions, answers and interpretations are clearly far more concerned with the interests of the publishing lobby than with those of the research community. The Report's two relevant overall findings are correct and stated very fairly in their summary form: [1] "Overall, [self-archiving] of articles in open access repositories seems to be associated with both a larger number of citations, ...

  11. Global polar geospatial information service retrieval based on search engine and ontology reasoning

    Science.gov (United States)

    Chen, Nengcheng; E, Dongcheng; Di, Liping; Gong, Jianya; Chen, Zeqiang

    2007-01-01

    In order to improve the access precision of polar geospatial information service on web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed, the geospatial service search is implemented to find the coarse service from web, the ontology reasoning is designed to find the refined service from the coarse service. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Some key technologies addressed include service discovery based on search engine and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctica multi protocol OWS portal prototype based on the proposed methodology is introduced.

  12. Aspiration Levels and R&D Search in Young Technology-Based Firms

    DEFF Research Database (Denmark)

    Candi, Marina; Saemundsson, Rognvaldur; Sigurjonsson, Olaf

    Decisions about allocation of resources to research and development (R&D), referred to here as R&D search, are critically important for competitive advantage. Using panel data collected yearly over a period of nine years, this paper revisits existing theories of backward-looking and forward-looking decision models for R&D search in the important context of young technology-based firms. Some of the findings confirm existing models, but overall the findings contradict them. Not only do young technology-based firms increase search when aspirations are not met, but they do the same when performance surpasses aspirations. Both positive and negative outlooks reinforce the effects of performance feedback. The combined effect is that the more outcomes and expectations deviate from aspirations, the more young technology-based firms invest in R&D search.

  13. Rank-Based Similarity Search: Reducing the Dimensional Dependence.

    Science.gov (United States)

    Houle, Michael E; Nett, Michael

    2015-01-01

    This paper introduces a data structure for k-NN search, the Rank Cover Tree (RCT), whose pruning tests rely solely on the comparison of similarity values; other properties of the underlying space, such as the triangle inequality, are not employed. Objects are selected according to their ranks with respect to the query object, allowing much tighter control on the overall execution costs. A formal theoretical analysis shows that with very high probability, the RCT returns a correct query result in time that depends very competitively on a measure of the intrinsic dimensionality of the data set. The experimental results for the RCT show that non-metric pruning strategies for similarity search can be practical even when the representational dimension of the data is extremely high. They also show that the RCT is capable of meeting or exceeding the level of performance of state-of-the-art methods that make use of metric pruning or other selection tests involving numerical constraints on distance values. PMID:26353214

  14. A Framework for Hierarchical Clustering Based Indexing in Search Engines

    Directory of Open Access Journals (Sweden)

    Parul Gupta

    2011-01-01

    Full Text Available Granting efficient and fast access to the index is a key issue for the performance of Web Search Engines. In order to enhance memory utilization and favor fast query resolution, WSEs use Inverted File (IF) indexes that consist of an array of posting lists, where each posting list is associated with a term and contains the term as well as the identifiers of the documents containing it. Since the document identifiers are stored in sorted order, they can be stored as the differences between successive documents so as to reduce the size of the index. This paper describes a clustering algorithm that aims at partitioning the set of documents into ordered clusters so that the documents within the same cluster are similar and are assigned closer document identifiers. Thus the average value of the differences between successive documents is minimized, and storage space is saved. The paper further presents the extension of this clustering algorithm to hierarchical clustering, in which similar clusters are combined to form a mega cluster and similar mega clusters are then combined to form a super cluster. The paper thus describes the different levels of clustering, which optimize the search process by directing the search along a specific path from the higher levels of clustering to the lower levels, i.e. from super clusters to mega clusters, then to clusters and finally to the individual documents, so that the user gets the best possible matching results in the minimum possible time.
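The gap-encoding idea in this record can be sketched in a few lines. This is a generic illustration of d-gap compression for posting lists, not the paper's clustering algorithm: because identifiers are sorted, storing differences yields small integers, and clustering that assigns similar documents nearby identifiers makes those gaps smaller still.

```python
# Sketch of d-gap (delta) encoding of a sorted posting list: each document
# identifier is stored as its difference from the previous one.

def encode_gaps(doc_ids):
    """Convert a sorted posting list into d-gaps (first ID kept as-is)."""
    gaps, prev = [], 0
    for doc_id in doc_ids:
        gaps.append(doc_id - prev)
        prev = doc_id
    return gaps

def decode_gaps(gaps):
    """Recover the original document identifiers from d-gaps."""
    doc_ids, total = [], 0
    for g in gaps:
        total += g
        doc_ids.append(total)
    return doc_ids

postings = [4, 7, 8, 15, 16, 100]
gaps = encode_gaps(postings)
print(gaps)  # → [4, 3, 1, 7, 1, 84] — small gaps compress well
assert decode_gaps(gaps) == postings
```

A variable-byte or gamma coder would then spend fewer bits on the small gaps, which is why identifier assignment via clustering shrinks the index.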

  15. THE TYPES OF PUBLISHING SLOGANS

    OpenAIRE

    Ryzhov Konstantin Germanovich

    2015-01-01

    The author of the article focuses his attention on publishing slogans posted on the official websites of 100 present-day Russian publishing houses, which have not yet been studied in the specialist literature. The author has developed his own classification of publishing slogans based on the results of his analysis and the current scientific views on the classification of slogans. The examined items are classified into autonomous and text-dependent according to interrelationship with an...

  16. A dichotomous search-based heuristic for the three-dimensional sphere packing problem

    Directory of Open Access Journals (Sweden)

    Mhand Hifi

    2015-12-01

    Full Text Available In this paper, the three-dimensional sphere packing problem is solved by using a dichotomous search-based heuristic. An instance of the problem is defined by a set of $n$ unequal spheres and an object of fixed width and height and unlimited length. Each sphere is characterized by its radius, and the aim of the problem is to minimize the length of the object containing all spheres without overlapping. The proposed method is based upon beam search, in which three complementary phases are combined: (i) a greedy selection phase which determines a series of eligible search subspaces, (ii) a truncated tree search, using a width-beam search, that explores some promising paths, and (iii) a dichotomous search that diversifies the search. The performance of the proposed method is evaluated on benchmark instances taken from the literature, and its results are compared to those reached by some recent methods from the literature. The proposed method is competitive and yields promising results.

  17. Keyword-based Ciphertext Search Algorithm under Cloud Storage

    Directory of Open Access Journals (Sweden)

    Ren Xunyi

    2016-01-01

    Full Text Available With the development of network storage services, cloud storage offers high scalability, low cost, access without location limits, and ease of management. These advantages lead more and more small and medium enterprises to outsource large quantities of data to a third party, freeing them from the costs of construction and maintenance, so the market prospects are broad. However, many cloud storage service providers cannot guarantee data security, which results in leakage of user data, and many users therefore fall back on traditional storage methods. This has become one of the important factors hindering the development of cloud storage. In this article, a keyword index is established by extracting keywords from the ciphertext data. The encrypted data and the encrypted index are then uploaded to the cloud server together. Users retrieve the related ciphertext by searching the encrypted index, which mitigates the data leakage problem.

  18. A Publisher view on the future of scholarly publishing

    Science.gov (United States)

    Stoop, Jose

    2015-08-01

    The journal publishing landscape is changing rapidly. With the massive move from print to online taking place at the end of the last century, we are now seeing a shift from the traditional subscription-based publishing model to 'hybrid' models and full Open Access publishing. Other major changes are taking place at the article interface level (from a static PDF to the “Article of the Future”), in data and code repository linking, in publishing data and code and hence making them citable and discoverable, and in many other subject-area-specific online innovations that are being introduced. Elsevier is actively involved in both Open Access publishing and content innovation, discussing and taking the lead through many large and smaller scale initiatives. This presentation will outline Elsevier's perspective on the future of scientific publishing with regard to these developments.

  19. New Tabu Search based global optimization methods outline of algorithms and study of efficiency.

    Science.gov (United States)

    Stepanenko, Svetlana; Engels, Bernd

    2008-04-15

    The study presents two new nonlinear global optimization routines: the Gradient Only Tabu Search (GOTS) and the Tabu Search with Powell's Algorithm (TSPA). They are based on the Tabu-Search strategy, which tries to determine the global minimum of a function by the steepest descent-mildest ascent strategy. The new algorithms are explained and their efficiency is compared with other approaches by determining the global minima of various well-known test functions of varying dimensionality. These tests show that in most cases the GOTS converges much faster than global optimizers taken from the literature. The efficiency of the TSPA is comparable to that of genetic algorithms. PMID:17910004
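The steepest descent-mildest ascent strategy mentioned in this record can be sketched generically. This is not the GOTS or TSPA algorithms themselves (which use gradients and Powell's method), but a minimal tabu search on an assumed toy objective: always move to the best non-tabu neighbor, even uphill, so the search can climb out of a local minimum.

```python
# Minimal tabu search sketch: recent points are tabu, and the best
# remaining neighbor is taken even when it worsens the objective
# (mildest ascent), which lets the search escape local minima.
from collections import deque

def tabu_search(f, x0, neighbors, steps=100, tabu_len=5):
    x, best = x0, x0
    tabu = deque([x0], maxlen=tabu_len)   # short-term memory of visited points
    for _ in range(steps):
        cands = [n for n in neighbors(x) if n not in tabu]
        if not cands:
            break
        x = min(cands, key=f)             # best neighbor, uphill if necessary
        tabu.append(x)
        if f(x) < f(best):
            best = x
    return best

# Toy objective (our assumption): local minimum near x=0, global minimum at x=6.
f = lambda x: min((x - 6) ** 2, x * x + 2)
result = tabu_search(f, 0, lambda x: [x + 1, x - 1])
print(result, f(result))  # → 6 0
```

Plain steepest descent from `x = 0` would stall at the local minimum; the tabu memory forces the walk through the uphill region between the two basins.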

  20. Multi-leg Searching by Adopting Graph-based Knowledge Representation

    Directory of Open Access Journals (Sweden)

    Siti Zarinah Mohd Yusof

    2011-01-01

    Full Text Available This research explores the development of a multi-leg searching concept by adopting graph-based knowledge representation. The research aims to propose a searching concept capable of providing advanced information by retrieving not only directly but also continuously related information from a point. It applies the maximal join concept to merge multiple information networks in support of the multi-leg searching process. Node and edge similarity concepts are also applied to determine transit nodes and alternative edges of the same route. A working prototype in the flight networks domain is developed to present an overview of the research.

  1. Colorize magnetic nanoparticles using a search coil based testing method

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kai; Wang, Yi; Feng, Yinglong; Yu, Lina; Wang, Jian-Ping, E-mail: jpwang@umn.edu

    2015-04-15

    Different magnetic nanoparticles (MNPs) possess unique spectral responses to an AC magnetic field, and we can use this specific magnetic property of MNPs as “colors” in detection. In this paper, a detection scheme for magnetic nanoparticle size distribution is demonstrated using an integrated MNP and search-coil detection system. A low frequency (50 Hz) sinusoidal magnetic field is applied to drive the MNPs into the saturated region. Then a high frequency sinusoidal field sweeping from 5 kHz to 35 kHz is applied in order to generate mixing frequency signals, which are collected by a pair of balanced search coils. These harmonics are highly specific to the nonlinearity of the magnetization curve of the MNPs. Previous work focused on using the amplitude and phase of the 3rd harmonic or the amplitude ratio of the 5th harmonic over the 3rd harmonic. Here we demonstrate using the amplitude and phase information of both the 3rd and 5th harmonics as magnetic “colors” of MNPs. It is found that this method effectively reduces the magnetic colorization error. - Highlights: • We demonstrated the use of the amplitude and phase information of both the 3rd and 5th harmonics as magnetic “colors” of magnetic nanoparticles (MNPs). • An easier and simpler way to calibrate amounts of MNPs was developed. • At the same concentration, an MNP solution with a larger average particle size induces higher amplitude, and its amplitude changes greatly with the sweeping high frequency. • At lower sweeping frequencies, the 5 samples have almost the same phase lag. As the sweeping frequency goes higher, the phase lag of large particles drops faster.
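The physics behind the harmonic "colors" can be illustrated numerically. This is an assumed toy model, not the authors' instrument code: a saturating (tanh-shaped) magnetization curve driven by a sinusoidal field generates odd harmonics, whose amplitude and phase can be read out with a single-bin DFT.

```python
# Toy model (assumed): nonlinear magnetization M = tanh(3*H) driven at 50 Hz
# produces odd harmonics; even harmonics vanish by symmetry. The amplitude
# and phase of the 3rd and 5th harmonics act as the "colors" of a particle.
import math

def harmonic(signal, dt, f0, n):
    """Amplitude and phase of the n-th harmonic via a single-bin DFT."""
    re = im = 0.0
    for k, s in enumerate(signal):
        t = k * dt
        re += s * math.cos(2 * math.pi * n * f0 * t) * dt
        im += s * math.sin(2 * math.pi * n * f0 * t) * dt
    T = len(signal) * dt
    return 2 * math.hypot(re, im) / T, math.atan2(-im, re)

f0, dt = 50.0, 1e-5                                 # 50 Hz drive, 10 us sampling
ts = [k * dt for k in range(int(1 / (f0 * dt)))]    # exactly one drive period
field = [math.sin(2 * math.pi * f0 * t) for t in ts]
magnetization = [math.tanh(3 * h) for h in field]   # saturating M(H) curve

a3, _ = harmonic(magnetization, dt, f0, 3)
a5, _ = harmonic(magnetization, dt, f0, 5)
a2, _ = harmonic(magnetization, dt, f0, 2)
print(a3, a5, a2)  # odd harmonics dominate; the even one is ~0
```

Sharper saturation (a larger particle in the abstract's terms) pushes more energy into the higher odd harmonics, which is what makes the 3rd/5th-harmonic pair discriminative.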

  2. Colorize magnetic nanoparticles using a search coil based testing method

    International Nuclear Information System (INIS)

    Different magnetic nanoparticles (MNPs) possess unique spectral responses to an AC magnetic field, and we can use this specific magnetic property of MNPs as “colors” in detection. In this paper, a detection scheme for magnetic nanoparticle size distribution is demonstrated using an integrated MNP and search-coil detection system. A low frequency (50 Hz) sinusoidal magnetic field is applied to drive the MNPs into the saturated region. Then a high frequency sinusoidal field sweeping from 5 kHz to 35 kHz is applied in order to generate mixing frequency signals, which are collected by a pair of balanced search coils. These harmonics are highly specific to the nonlinearity of the magnetization curve of the MNPs. Previous work focused on using the amplitude and phase of the 3rd harmonic or the amplitude ratio of the 5th harmonic over the 3rd harmonic. Here we demonstrate using the amplitude and phase information of both the 3rd and 5th harmonics as magnetic “colors” of MNPs. It is found that this method effectively reduces the magnetic colorization error. - Highlights: • We demonstrated the use of the amplitude and phase information of both the 3rd and 5th harmonics as magnetic “colors” of magnetic nanoparticles (MNPs). • An easier and simpler way to calibrate amounts of MNPs was developed. • At the same concentration, an MNP solution with a larger average particle size induces higher amplitude, and its amplitude changes greatly with the sweeping high frequency. • At lower sweeping frequencies, the 5 samples have almost the same phase lag. As the sweeping frequency goes higher, the phase lag of large particles drops faster.

  3. Minimum Distortion Direction Prediction-based Fast Half-pixel Motion Vector Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    DONG Hai-yan; ZHANG Qi-shan

    2005-01-01

    A novel fast half-pixel motion vector search algorithm based on minimum distortion direction prediction is proposed, which can considerably reduce the computation load of the half-pixel search. Based on the single-valley characteristic of the half-pixel error matching function inside the search grid, the minimum distortion direction is predicted by comparing the sum of absolute difference (SAD) values of the four integer-pixel points around the integer-pixel motion vector. The experimental results reveal that, for all kinds of video sequences, the proposed algorithm can obtain almost the same video quality as the half-pixel full search algorithm while decreasing the computation cost by more than 66%.
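The four-point comparison can be sketched as follows. The quadrant rule and offsets are illustrative assumptions rather than the paper's exact procedure: comparing the SADs of the left/right and up/down integer neighbors predicts which half-pixel quadrant holds the minimum, so only 3 of the 8 half-pixel points need evaluating.

```python
# Sketch (assumed rule): predict the minimum-distortion half-pixel quadrant
# from the SADs of the four integer-pixel neighbours of the best integer
# motion vector, cutting the half-pel candidates from 8 down to 3.

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def predict_half_pel_candidates(sad_left, sad_right, sad_up, sad_down):
    """Return the 3 half-pel offsets (in half-pixel units around the
    integer-pel vector) lying in the predicted minimum-distortion quadrant."""
    dx = -1 if sad_left < sad_right else 1   # smaller SAD => valley lies that way
    dy = -1 if sad_up < sad_down else 1
    return [(dx, 0), (0, dy), (dx, dy)]

print(sad([[1, 2], [3, 4]], [[1, 1], [1, 1]]))        # → 6
print(predict_half_pel_candidates(10, 40, 35, 12))    # → [(-1, 0), (0, 1), (-1, 1)]
```

The single-valley property cited in the abstract is what justifies the prediction: if the error surface has one valley inside the grid, the side with the smaller SAD must contain it.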

  4. Fast Block-match Motion Estimation Based on Multilevel Adaptive Diamond Search

    Science.gov (United States)

    Li, Shan; Yi, Qing-Ming; Shi, Min

    In this paper, a novel fast block-match algorithm called MADS, based on multilevel adaptive diamond search, is proposed. The algorithm first adaptively estimates the frame-level motion complexity from the reference frame texture information and the macro-block residual value, and then estimates the block-level motion complexity according to the spatial-temporal correlation of the vector field. A threshold is applied to stop stationary blocks from searching. The initial search point and different diamond search modes are adaptively selected based on the motion type for non-stationary blocks. Experimental results show that the MADS algorithm outperforms other popular fast algorithms for a wide range of video sequences.

  5. The Effect of Problem Reduction in the Integer Programming-based Local Search

    Directory of Open Access Journals (Sweden)

    Junha Hwang

    2016-06-01

    Full Text Available Integer Programming-based Local Search (IPbLS) is a kind of local search based on first-choice hill-climbing that uses integer programming to generate a neighbor solution. IPbLS has been applied to solve various NP-hard combinatorial optimization problems such as the knapsack, set covering, and set partitioning problems. In this paper, we investigate the effect of problem reduction in IPbLS experimentally using the n-queens maximization problem. The characteristics of IPbLS are examined by comparing IPbLS using strong problem reduction with IPbLS using weak problem reduction, and IPbLS is also compared with other local search strategies such as simulated annealing. Experimental results show the importance of problem reduction in IPbLS.

  6. An Integer Programming-based Local Search for Large-scale Maximal Covering Problems

    Directory of Open Access Journals (Sweden)

    Junha Hwang

    2011-02-01

    Full Text Available The maximal covering problem (MCP) is classified as a linear integer optimization problem which can be effectively solved by integer programming. However, as the problem size grows, integer programming requires excessive time to reach an optimal solution. This paper suggests a method for applying integer programming-based local search (IPbLS) to solve large-scale maximal covering problems. IPbLS, a hybrid technique combining integer programming and local search, is a kind of local search that uses integer programming for neighbor generation. IPbLS itself is very effective for the MCP. In addition, we improve the performance of IPbLS for the MCP through problem reduction based on the current solution. Experimental results show that the proposed method considerably outperforms other local search techniques and integer programming alone.

  7. A self-adaptive step Cuckoo search algorithm based on dimension by dimension improvement

    Directory of Open Access Journals (Sweden)

    Lu REN

    2015-10-01

    Full Text Available The choice of step length plays an important role in the convergence speed and precision of the Cuckoo search algorithm. In this paper, a self-adaptive step Cuckoo search algorithm based on dimension-by-dimension improvement is provided. First, since the step in the original self-adaptive step Cuckoo search algorithm is not updated when the current nest position is already optimal, a simple modification of the step update is made. Second, an evaluation strategy based on dimension-by-dimension updates is introduced into the modified self-adaptive step Cuckoo search algorithm. The experimental results show that the algorithm can balance the contradiction between global convergence ability and optimization precision. Moreover, the proposed algorithm has better convergence speed.
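The dimension-by-dimension evaluation strategy can be illustrated with a minimal sketch. This is an assumed simplified form, not the paper's algorithm: Gaussian perturbations stand in for Lévy-flight cuckoo steps, and a candidate is merged into the current solution one coordinate at a time, keeping each component only if it improves the objective, instead of accepting or rejecting whole vectors.

```python
# Sketch of dimension-by-dimension evaluation: each coordinate of a proposed
# update is accepted independently, so one bad component cannot veto the
# good components of the same candidate.
import random

def dimwise_update(f, x, candidate):
    """Merge `candidate` into `x` coordinate by coordinate, greedily."""
    best = list(x)
    for i in range(len(x)):
        trial = list(best)
        trial[i] = candidate[i]
        if f(trial) < f(best):   # keep the i-th component only if it helps
            best = trial
    return best

sphere = lambda v: sum(c * c for c in v)   # toy objective, minimum at origin
random.seed(1)
x = [3.0, -2.0, 5.0]
for _ in range(200):                        # Gaussian stand-in for cuckoo steps
    step = [random.gauss(0, 1) for _ in x]
    x = dimwise_update(sphere, x, [xi + s for xi, s in zip(x, step)])
print(x, sphere(x))                         # converges toward the origin
```

Whole-vector acceptance would discard many proposals in which only one coordinate regressed; per-dimension acceptance extracts the useful components, which is the precision gain the abstract reports.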

  8. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.

    Science.gov (United States)

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words. PMID:27313605

  9. The Industrial Engineering publishing landscape

    OpenAIRE

    Claasen, Schalk

    2012-01-01

    Looking at the Industrial Engineering publishing landscape through the window of Google Search, an interesting panorama unfolds. The view that I took is actually just a peek and therefore my description of what I saw is not meant to be comprehensive. The African landscape is empty except for the South African Journal of Industrial Engineering (SAJIE). This is an extraordinary situation if compared to the South American continent where there are Industrial Engineering journals in at least ...

  10. Development of an Ontology Based Forensic Search Mechanism: Proof of Concept

    Directory of Open Access Journals (Sweden)

    Jill Slay

    2006-03-01

    Full Text Available This paper examines the problems faced by Law Enforcement in searching large quantities of electronic evidence. It examines the use of ontologies as the basis for new forensic software filters and provides a proof of concept tool based on an ontological design. It demonstrates that efficient searching is produced through the use of such a design and points to further work that might be carried out to extend this concept.

  11. A tensor-based selection hyper-heuristic for cross-domain heuristic search

    OpenAIRE

    Asta, Shahriar; Özcan, Ender

    2015-01-01

    Hyper-heuristics have emerged as automated high level search methodologies that manage a set of low level heuristics for solving computationally hard problems. A generic selection hyper-heuristic combines heuristic selection and move acceptance methods under an iterative single point-based search framework. At each step, the solution in hand is modified after applying a selected heuristic and a decision is made whether the new solution is accepted or not. In this study, we represent the trail...

  12. HARD: SUBJECT-BASED SEARCH ENGINE MENGGUNAKAN TF-IDF DAN JACCARD'S COEFFICIENT

    OpenAIRE

    Rolly Intan; Andrew Defeng

    2006-01-01

    This paper proposes a hybridized search engine concept based on the subject parameter of High Accuracy Retrieval from Documents (HARD). Tf-Idf and Jaccard's Coefficient are modified and extended to provide the concept. Several illustrative examples are given, including their calculation steps, in order to clearly explain the proposed concept and formulas. Abstract in Bahasa Indonesia (translated): This paper introduces a search engine algorithm based on the concept of HARD (High Accuracy Retrieval...
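The two measures the record combines can be stated concretely. The paper's modified HARD-based hybrid formula is not given here, so only the standard textbook definitions are shown, on a small made-up corpus:

```python
# Standard tf-idf term weighting and Jaccard's coefficient (set overlap),
# the two building blocks named in the abstract.
import math

def tf_idf(term, doc, corpus):
    """tf-idf of `term` in `doc` (a token list) relative to `corpus`."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)       # document frequency
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| for two token collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

corpus = [["search", "engine", "subject"],
          ["subject", "index"],
          ["engine", "ranking", "search"]]
print(tf_idf("subject", corpus[0], corpus))   # tf=1/3, idf=ln(3/2)
print(jaccard(corpus[0], corpus[2]))          # → 0.5
```

A hybrid scheme would typically score a document by weighting term matches with tf-idf and query/subject overlap with Jaccard, then combine the two scores.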

  13. Evaluating medical student searches of MEDLINE for evidence-based information: process and application of results.

    OpenAIRE

    Burrows, S C; Tylman, V

    1999-01-01

    OBJECTIVE: To evaluate the adequacy of the MEDLINE instruction routinely given to all entering medical students at the University of Miami School of Medicine and the ability of students to search effectively for and retrieve evidence-based information for clinical decision making by the end of their third year. METHODOLOGY: The authors developed and implemented a strategy for evaluating the search strategies and articles selected by third-year students, who participated in the Objective Struc...

  14. Towards Parallel Constraint-Based Local Search with the X10 Language

    OpenAIRE

    Munera, Danny; Diaz, Daniel; Abreu, Salvador

    2013-01-01

    In this study, we started to investigate how the Partitioned Global Address Space (PGAS) programming language X10 would suit the implementation of a Constraint-Based Local Search solver. We wanted to code in this language because we expect to gain from its ease of use and independence from specific parallel architectures. We present the implementation strategy, and search for different sources of parallelism. We discuss the algorithms, their implementations and present a performance evaluat...

  15. Publishing with XML structure, enter, publish

    CERN Document Server

    Prost, Bernard

    2015-01-01

    XML is now at the heart of book publishing techniques: it provides the industry with a robust, flexible format which is relatively easy to manipulate. Above all, it preserves the future: the XML text becomes a genuine tactical asset enabling publishers to respond quickly to market demands. When new publishing media appear, it will be possible to very quickly make your editorial content available at a lower cost. On the downside, XML can become a bottomless pit for publishers attracted by its possibilities. There is a strong temptation to switch to audiovisual production and to add video and a

  16. Proposing LT based Search in PDM Systems for Better Information Retrieval

    CERN Document Server

    Ahmed, Zeeshan

    2011-01-01

    PDM systems contain and manage large amounts of data, but the search mechanism of most systems is not intelligent enough to process users' natural language queries and extract the desired information. The search mechanisms currently available in almost all PDM systems are inefficient, relying on the old approach of entering the relevant information into the respective fields of search forms to find specific information in attached repositories. Targeting this issue, thorough research was conducted in the fields of PDM systems and Language Technology. Concerning PDM systems, the research provides detailed information about PDM and PDM systems. Concerning Language Technology, it helps in implementing a search mechanism for PDM systems that retrieves the information users need by analyzing their natural language requests. The accomplished goal of this research was to support the field of PDM with a new proposition of a conceptual model for the imp...

  17. DISA at ImageCLEF 2014 Revised: Search-based Image Annotation with DeCAF Features

    OpenAIRE

    Budikova, Petra; Botorek, Jan; Batko, Michal; Zezula, Pavel

    2014-01-01

    This paper constitutes an extension to the report on DISA-MU team participation in the ImageCLEF 2014 Scalable Concept Image Annotation Task as published in [3]. Specifically, we introduce a new similarity search component that was implemented into the system, report on the results achieved by utilizing this component, and analyze the influence of different similarity search parameters on the annotation quality.

  18. Comics, Copyright and Academic Publishing

    Directory of Open Access Journals (Sweden)

    Ronan Deazley

    2014-05-01

    Full Text Available This article considers the extent to which UK-based academics can rely upon the copyright regime to reproduce extracts and excerpts from published comics and graphic novels without having to ask the copyright owner of those works for permission. In doing so, it invites readers to engage with a broader debate about the nature, demands and process of academic publishing.

  19. INTELLIGENT SEARCH ENGINE-BASED UNIVERSAL DESCRIPTION, DISCOVERY AND INTEGRATION FOR WEB SERVICE DISCOVERY

    Directory of Open Access Journals (Sweden)

    Tamilarasi Karuppiah

    2014-01-01

    Full Text Available The Web Services standard has been broadly acknowledged by industry and academic research along with the progress of web technology and e-business. An increasing number of web applications have been bundled as web services that can be published, located and invoked across the web. As web services multiply and become more advanced and mutually dependent, the issues regarding their publication and discovery attain the greatest importance. With the intention of discovering web services effectively and within a minimum time period, this study proposes a UDDI with an intelligent search engine. For publishing and discovering web services, the web services are first published in the UDDI registry and then indexed. To improve the efficiency of web service discovery, the indexed web services are saved in an index database. The search query is compared against the index database to discover web services, and the discovered web services are returned to the service consumer. The way the web services are accessed is stored in a log file, which is then utilized to provide personalized web services to the user. Web service discovery is enhanced significantly by the efficient search capability of the proposed system, which is capable of providing the most appropriate web service. Universal Description, Discovery and Integration (UDDI).

  20. Optimal search-based gene subset selection for gene array cancer classification.

    Science.gov (United States)

    Li, Jiexun; Su, Hua; Chen, Hsinchun; Futscher, Bernard W

    2007-07-01

    High dimensionality has been a major problem for gene array-based cancer classification. It is critical to identify marker genes for cancer diagnosis. We developed a framework of gene selection methods based on previous studies. This paper focuses on optimal search-based subset selection methods because they evaluate the group performance of genes and help to pinpoint a globally optimal set of marker genes. Notably, this paper is the first to introduce tabu search (TS) to gene selection from high-dimensional gene array data. Our comparative study of gene selection methods demonstrated the effectiveness of optimal search-based gene subset selection for identifying cancer marker genes. TS was shown to be a promising tool for gene subset selection. PMID:17674622

  1. An Angle-Based Crossover Tabu Search for Vehicle Routing Problem

    Science.gov (United States)

    Yang, Ning; Li, Ping; Li, Mingsen

    An improved tabu search, crossover tabu search (CTS), is presented, which adopts the crossover operator of the genetic algorithm as the diversification strategy and selects elite solutions as the intensification strategy. To improve performance, the angle-based idea of the sweep heuristic is used to define the neighborhood, together with an objective function incorporating a penalty term. The angle-based CTS is applied to the vehicle routing problem. Simulation results comparing it with the traditional sweep heuristic and the standard tabu search show that the results obtained by the angle-based CTS are better than those of the other two heuristics. The experiments show that the angle-based CTS performs well on the vehicle routing problem.
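As a point of reference for the tabu search machinery described in the abstract, here is a minimal generic tabu search over bit vectors, with a tabu list of recently flipped bits and an aspiration criterion. It is not the paper's CTS (no crossover operator or sweep-heuristic neighborhood), and the toy objective is invented.

```python
import random

def tabu_search(cost, n_bits, iters=200, tenure=5, seed=0):
    # minimal tabu search over bit vectors: neighborhood = single-bit flips;
    # the tabu list forbids re-flipping recently changed bits
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = x[:], cost(x)
    tabu = {}  # bit index -> iteration until which the move is tabu
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            # aspiration: accept a tabu move if it improves the best known
            if tabu.get(i, -1) < it or c < best_cost:
                candidates.append((c, i, y))
        c, i, y = min(candidates)  # best admissible neighbor, even if worse
        x = y
        tabu[i] = it + tenure
        if c < best_cost:
            best, best_cost = y[:], c
    return best, best_cost

# toy objective: number of bits differing from a hidden target pattern
target = [1, 0, 1, 1, 0, 1, 0, 0]
cost = lambda x: sum(a != b for a, b in zip(x, target))
best, c = tabu_search(cost, len(target))
print(best, c)
```
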

  2. Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search

    CERN Document Server

    Guez, Arthur; Dayan, Peter

    2012-01-01

    Bayesian model-based reinforcement learning is a formally elegant approach to learning optimal behaviour under model uncertainty. In this setting, a Bayes-optimal policy captures the ideal trade-off between exploration and exploitation. Unfortunately, finding Bayes-optimal policies is notoriously taxing due to the enormous search space in the augmented belief-state MDP. In this paper we exploit recent advances in sample-based planning, based on Monte-Carlo tree search, to introduce a tractable method for approximate Bayes-optimal planning. Unlike prior work in this area, we avoid expensive applications of Bayes rule within the search tree, by lazily sampling models from the current beliefs. Our approach outperformed prior Bayesian model-based RL algorithms by a significant margin on several well-known benchmark problems.

  3. A New RFID Anti-collision Algorithm Based on the Q-Ary Search Scheme

    Institute of Scientific and Technical Information of China (English)

    SU Jian; WEN Guangjun; HONG Danfeng

    2015-01-01

    Deterministic tree-based algorithms are mostly used to guarantee that all the tags in the reader field are successfully identified, and to achieve the best performance. Through an analysis of the deficiencies of existing tree-based algorithms, a Q-ary search algorithm is proposed. The Q-ary search (QAS) algorithm introduces a bit-encoding mechanism for tag IDs by which multi-bit collision arbitration is implemented. According to the encoding mechanism, the collision cycle is reduced. Theoretical analysis and simulation results show that the proposed QAS algorithm overcomes the shortcomings of existing tree-based algorithms and exhibits good identification performance.
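A reader-side simulation can illustrate the general idea of Q-ary tree splitting, where each collided prefix is split into Q child prefixes so that several collision bits are arbitrated per query. This sketch uses Q = 4 (two bits per split); the tag IDs are invented and the paper's specific bit-encoding mechanism is not reproduced.

```python
def qary_identify(tag_ids, bits, q_bits=2):
    # simulate a Q-ary (Q = 2**q_bits) tree walk from the reader's side:
    # a prefix matched by one tag is a success, by several tags a collision
    identified, queries = [], 0
    stack = [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        matching = [t for t in tag_ids if t.startswith(prefix)]
        if len(matching) == 1:
            identified.append(matching[0])
        elif len(matching) > 1 and len(prefix) < bits:
            # collision: split into Q child prefixes
            for i in range(2 ** q_bits):
                stack.append(prefix + format(i, f"0{q_bits}b"))
    return sorted(identified), queries

tags = ["0010", "0111", "1001", "1100"]
ids, n = qary_identify(tags, bits=4)
print(ids, n)  # all four tags identified
```
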

  4. Magnetic Flux Leakage Signal Inversion of Corrosive Flaws Based on Modified Genetic Local Search Algorithm

    Institute of Scientific and Technical Information of China (English)

    HAN Wen-hua; FANG Ping; XIA Fei; XUE Fang

    2009-01-01

    In this paper, a modified genetic local search algorithm (MGLSA) is proposed. The proposed algorithm results from employing the simulated annealing technique to regulate the variance of the Gaussian mutation of the genetic local search algorithm (GLSA). An MGLSA-based inverse algorithm is then proposed for magnetic flux leakage (MFL) signal inversion of corrosive flaws, in which the MGLSA is used to solve the optimization problem in the MFL inverse problem. Experimental results demonstrate that the MGLSA-based inverse algorithm is more robust than the GLSA-based inverse algorithm in the presence of noise in the measured MFL signals.

  5. A Semidefinite Programming Based Search Strategy for Feature Selection with Mutual Information Measure.

    Science.gov (United States)

    Naghibi, Tofigh; Hoffmann, Sarah; Pfister, Beat

    2015-08-01

    Feature subset selection, as a special case of the general subset selection problem, has been the topic of a considerable number of studies due to the growing importance of data-mining applications. In the feature subset selection problem there are two main issues that need to be addressed: (i) finding an appropriate measure function that can be computed fairly quickly and robustly for high-dimensional data; (ii) a search strategy to optimize the measure over the subset space in a reasonable amount of time. In this article mutual information between features and class labels is considered to be the measure function. Two series expansions for mutual information are proposed, and it is shown that most heuristic criteria suggested in the literature are truncated approximations of these expansions. It is well known that searching the whole subset space is an NP-hard problem. Here, instead of the conventional sequential search algorithms, we suggest a parallel search strategy based on semidefinite programming (SDP) that can search through the subset space in polynomial time. By exploiting the similarities between the proposed algorithm and an instance of the maximum-cut problem in graph theory, the approximation ratio of this algorithm is derived and compared with the approximation ratio of the backward elimination method. The experiments show that it can be misleading to judge the quality of a measure solely based on the classification accuracy, without taking the effect of the non-optimum search strategy into account. PMID:26352993
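The mutual-information measure between a discrete feature and class labels can be computed directly from its definition. This sketch shows the plain (untruncated) quantity only, not the series expansions or the SDP-based search proposed in the paper; the data are invented.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum over (x,y) of p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    # with counts in place of probabilities and natural logarithm
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# a feature identical to the label carries maximal information (= H(Y))...
labels = [0, 0, 1, 1]
print(mutual_information(labels, labels))
# ...while a feature independent of the label carries none
indep = [0, 1, 0, 1]
print(mutual_information(indep, labels))
```
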

  6. Improved methods for scheduling flexible manufacturing systems based on Petri nets and heuristic search

    Institute of Scientific and Technical Information of China (English)

    Bo HUANG; Yamin SUN

    2005-01-01

    This paper proposes and evaluates two improved Petri net (PN)-based hybrid search strategies and their applications to flexible manufacturing system (FMS) scheduling. The algorithms proposed in some previous papers, which combine PN simulation capabilities with A* heuristic search within the PN reachability graph, may not find an optimum solution even with an admissible heuristic function. To remedy these defects, an improved heuristic search strategy is proposed, which adopts a different method for selecting the promising markings and preserves the admissibility of the algorithm. To speed up the search process, another algorithm is also proposed which invokes faster termination conditions and still guarantees that the solution found is optimum. The scheduling results of our algorithms and the previous methods are compared on a simple FMS. They are also applied and evaluated on a set of randomly generated FMSs with such characteristics as multiple resources and alternative routes.

  7. Algorithm Based on Taboo Search and Shifting Bottleneck for Job Shop Scheduling

    Institute of Scientific and Technical Information of China (English)

    Wen-Qi Huang; Zhi Huang

    2004-01-01

    In this paper, a computational effective heuristic method for solving the minimum makespan problem of job shop scheduling is presented. It is based on taboo search procedure and on the shifting bottleneck procedure used to jump out of the trap of the taboo search procedure. A key point of the algorithm is that in the taboo search procedure two taboo lists are used to forbid two kinds of reversals of arcs, which is a new and effective way in taboo search methods for job shop scheduling. Computational experiments on a set of benchmark problem instances show that, in several cases, the approach, in reasonable time, yields better solutions than the other heuristic procedures discussed in the literature.

  8. POLYNOMIAL MODEL BASED FAST FRACTIONAL PIXEL SEARCH ALGORITHM FOR H.264/AVC

    Institute of Scientific and Technical Information of China (English)

    Xi Yinglai; Hao Chongyang; Lai Changcai

    2006-01-01

    This paper proposes a novel fast fractional-pixel search algorithm based on a polynomial model. Based on an analysis of the distribution characteristics of the motion compensation error surface inside the fractional-pixel search window, the matching error is fitted with a parabola along the horizontal and vertical directions respectively. The proposed search strategy needs to check only 6 points rather than the 16 or 24 points used by the Hierarchical Fractional Pel Search (HFPS) algorithm for 1/4-pel and 1/8-pel Motion Estimation (ME). Experimental results show that the proposed algorithm maintains rate-distortion performance while reducing the computational load to a large extent compared with the HFPS algorithm.
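Fitting a parabola through the matching errors at three neighbouring integer positions gives a closed-form estimate of the fractional-pel offset; that one-dimensional fit can be sketched as below. The 6-point strategy itself is not reproduced, and the error values are invented.

```python
def parabolic_offset(e_left, e_center, e_right):
    # fit e(x) = a*x**2 + b*x + c through x = -1, 0, +1 and return the
    # vertex position -b/(2a), i.e. the estimated fractional-pel offset
    denom = 2 * (e_left - 2 * e_center + e_right)
    if denom == 0:
        return 0.0  # degenerate (flat or linear) error profile
    return (e_left - e_right) / denom

# symmetric errors -> minimum at the integer position itself
print(parabolic_offset(4.0, 1.0, 4.0))  # 0.0
# smaller error on the right -> minimum shifted toward +x
print(parabolic_offset(5.0, 1.0, 3.0))
```
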

  9. Query sensitive comparative summarization of search results using concept based segmentation

    CERN Document Server

    Chitra, P; Sarukesi, K

    2012-01-01

    Query-sensitive summarization aims at providing users with a summary of the contents of single or multiple web pages based on the search query. This paper proposes a novel approach for generating a comparative summary from a set of URLs in the search result. The user selects a set of web page links from the result produced by a search engine, and a comparative summary of the selected web sites is generated. The method makes use of the HTML DOM tree structure of the web pages. HTML documents are segmented into sets of concept blocks, and the sentence score of each concept block is computed with respect to the query and feature keywords. The important sentences from the concept blocks of different web pages are extracted to compose the comparative summary on the fly. This system reduces the time and effort required for the user to browse various web sites to compare their information. The comparative summary of the contents helps users in quick decision making.
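A minimal version of query-and-keyword sentence scoring might look like the following. The weights and example sentences are invented, and real concept-block segmentation of the HTML DOM tree is omitted; this only illustrates scoring and extraction.

```python
def sentence_score(sentence, query_terms, feature_terms, w_q=2.0, w_f=1.0):
    # score = weighted overlap with the query plus overlap with block keywords
    words = set(sentence.lower().split())
    q_hits = len(words & set(query_terms))
    f_hits = len(words & set(feature_terms))
    return w_q * q_hits + w_f * f_hits

def summarize(sentences, query_terms, feature_terms, k=2):
    # keep the k highest-scoring sentences, presented in original order
    ranked = sorted(sentences,
                    key=lambda s: sentence_score(s, query_terms, feature_terms),
                    reverse=True)
    keep = set(ranked[:k])
    return [s for s in sentences if s in keep]

sents = ["the camera has a 20x optical zoom",
         "shipping takes five days",
         "battery life of the camera is ten hours"]
summary = summarize(sents, query_terms=["camera", "battery"],
                    feature_terms=["zoom", "hours"])
print(summary)
```
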

  10. Structure-Based Local Search Heuristics for Circuit-Level Boolean Satisfiability

    CERN Document Server

    Belov, Anton

    2011-01-01

    This work focuses on improving the state of the art in stochastic local search (SLS) for solving Boolean satisfiability (SAT) instances arising from real-world industrial SAT application domains. The recently introduced SLS method CRSat has been shown to noticeably improve on previously suggested SLS techniques in solving such real-world instances by combining justification-based local search with limited Boolean constraint propagation on the non-clausal formula representation form of Boolean circuits. In this work, we study possibilities of further improving the performance of CRSat by exploiting circuit-level structural knowledge to develop new search heuristics for CRSat. To this end, we introduce and experimentally evaluate a variety of search heuristics, many of which are motivated by circuit-level heuristics originally developed in completely different contexts, e.g., for electronic design automation applications. To the best of our knowledge, most of the heuristics are novel in the context of SLS for S...

  11. Two-grade search mechanism based motion planning of a three-limbed robot

    Institute of Scientific and Technical Information of China (English)

    Pang Ming; Zang Xizhe; Yan Jihong; Zhao Jie

    2008-01-01

    A novel three-limbed robot is described and its motion planning method discussed. After an introduction to the robot's mechanical structure and the human-robot interface, a motion planning method based on a two-grade search mechanism is proposed. The first-grade search, using a genetic algorithm, tries to find an optimized target position and orientation for the three-limbed robot. The second-grade search, using virtual compliance, tries to avoid collision between the three-limbed robot and obstacles in a dynamic environment. Experiments show the feasibility of the two-grade search mechanism and prove that the proposed motion planning method can solve the motion planning problem of the redundant three-limbed robot without the deficiencies of the traditional genetic algorithm.

  12. Evaluation of the Exposure–Response Relationship of Lung Cancer Mortality and Occupational Exposure to Hexavalent Chromium Based on Published Epidemiological Data

    OpenAIRE

    van Wijngaarden, Edwin; Mundt, Kenneth A; Luippold, Rose S

    2004-01-01

    Some have suggested a threshold mechanism for the carcinogenicity of exposure to hexavalent chromium, Cr(VI). We evaluated the nature of the exposure–response relationship between occupational exposure to Cr(VI) and respiratory cancer based on results of two recently published epidemiological cohort studies. The combined cohort comprised a total of 2,849 workers employed at two U.S. chromate production plants between 1940 and 1974. Standardized mortality ratios (SMRs) for lung cancer in relat...

  13. Embracing Electronic Publishing.

    Science.gov (United States)

    Wills, Gordon

    1996-01-01

    Electronic publishing is the grandest revolution in the capture and dissemination of academic and professional knowledge since Caxton developed the printing press. This article examines electronic publishing, describes different electronic publishing scenarios (authors' cooperative, consolidator/retailer/agent oligopsony, publisher oligopoly), and…

  14. Web-based Image Search Engines (因特网上的图像搜索引擎)

    Institute of Scientific and Technical Information of China (English)

    陈立娜

    2001-01-01

    The operating principle of web-based image search engines is briefly described. A detailed evaluation of several image search engines is made. Finally, the paper points out the deficiencies of present image search engines and their development trends.

  15. An efficient similarity search based on indexing in large DNA databases.

    Science.gov (United States)

    Jeong, In-Seon; Park, Kyoung-Wook; Kang, Seung-Ho; Lim, Hyeong-Seok

    2010-04-01

    Index-based search algorithms are an important part of genomic search, and how to construct indices is the key question for an index-based search algorithm that computes similarities between two DNA sequences. In this paper, we propose an efficient query processing method that uses special transformations to construct an index. It uses little storage and rapidly finds the similarity between two sequences in a DNA sequence database. First, a sequence is partitioned into equal-length windows, and the likely subsequences are selected by computing the Hamming distance to the query sequence. The algorithm then transforms the subsequences in each window into a multidimensional vector space by indexing the frequencies of the characters, including the positional information of the characters in the subsequences. Our experimental results show that the algorithm has a faster run time than other heuristic algorithms based on index structures, while being just as accurate. PMID:20418167
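The frequency-vector idea can be sketched as a filter-and-refine step: windows whose nucleotide-count vectors are close to the query's vector survive as candidates for exact comparison. The positional information the paper also indexes is omitted here, and the sequences and threshold are invented.

```python
def freq_vector(seq):
    # map a DNA window to a 4-dimensional vector of nucleotide counts
    return tuple(seq.count(ch) for ch in "ACGT")

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def candidate_windows(db_seq, query, window, threshold):
    # index-style filter: keep windows whose frequency vectors lie within
    # `threshold` of the query's vector (candidates for exact comparison)
    qv = freq_vector(query)
    hits = []
    for i in range(len(db_seq) - window + 1):
        w = db_seq[i:i + window]
        if manhattan(freq_vector(w), qv) <= threshold:
            hits.append((i, w))
    return hits

hits = candidate_windows("ACGTACGGTTAC", "ACGT", window=4, threshold=0)
print(hits)  # only windows with exactly one of each nucleotide survive
```
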

  16. Design and Implementation of the Personalized Search Engine Based on the Improved Behavior of User Browsing

    Directory of Open Access Journals (Sweden)

    Wei-Chao Li

    2013-02-01

    Full Text Available An improved user profile based on user browsing behavior is proposed in this study. The user profile takes into account the user's web page browsing behavior, level of interest in keywords, and short-term and long-term interests. The improved user profile based on browsing behavior is embedded in a personalized search engine system. The basic framework and the basic functional modules of the system are described in detail in this study. A demonstration system, IUBPSES, was developed on the .NET platform. The results of the simulation experiments indicate that retrieval using IUBPSES with the improved user profile surpasses the current mainstream search engines. Directions for improvement and further research are proposed at the end.

  17. Optimal Search Strategy of Robotic Assembly Based on Neural Vibration Learning

    Directory of Open Access Journals (Sweden)

    Lejla Banjanovic-Mehmedovic

    2011-01-01

    Full Text Available This paper presents the implementation of an optimal search strategy (OSS) for verification of an assembly process based on neural vibration learning. The application problem is the complex robotic assembly of miniature parts, exemplified by mating the gears of a multistage planetary speed reducer. Assembly of the tube over the planetary gears was identified as the most difficult part of the overall assembly. A favourable influence of vibration and rotational movement on tolerance compensation was also observed. With the proposed neural-network-based learning algorithm, it is possible to find an extended scope of the vibration state parameters. Using an optimal search strategy based on the minimal-distance path between vibration parameter stage sets (amplitudes and frequencies of the robot grip's vibration) and a recovery parameter algorithm, we can improve the robot assembly behaviour, that is, allow the fastest possible way of mating. We have verified through simulation that the search strategy is suitable for situations with unexpected events due to uncertainties.

  18. Analysis of Search Engines and Meta Search Engines' Position by University of Isfahan Users Based on Rogers' Diffusion of Innovation Theory

    Directory of Open Access Journals (Sweden)

    Maryam Akbari

    2012-10-01

    Full Text Available The present study analyzed the adoption process of search engines and meta search engines by University of Isfahan users during 2009-2010, based on Rogers' diffusion of innovation theory. The main aim of the research was to study the rate of adoption and to recognize the potentials and effective tools in the adoption of search engines and meta search engines among University of Isfahan users. The research method was a descriptive survey study. The population comprised all postgraduate students of the University of Isfahan; 351 students were selected as the sample and categorized by a stratified random sampling method. A questionnaire was used for collecting data. The collected data were analyzed using SPSS 16 with both descriptive and analytic statistics: frequency, percentage and mean for the descriptive statistics, and the t-test and the non-parametric Kruskal-Wallis test (H-test) for the analytic statistics. The findings of the t-test and Kruskal-Wallis test indicated that the mean adoption of search engines and meta search engines did not show statistical differences by gender, level of education or faculty. The adoption process for special search engines differed by gender but not by level of education or faculty. Other results indicated that among general search engines, Google had the highest adoption rate; among special search engines, Google Scholar, and among meta search engines, Mamma had the highest adoption rates. Findings also showed that friends played an important role in how students adopted general search engines, while professors played an important role in how students adopted special search engines and meta search engines. Moreover, the place where students became most acquainted with search engines and meta search engines was the university. The findings showed that the adoption-rate curve was neither normal nor S-shaped. Moreover...

  19. Secondary eclipses in the CoRoT light curves: A homogeneous search based on Bayesian model selection

    CERN Document Server

    Parviainen, Hannu; Belmonte, Juan Antonio

    2012-01-01

    We aim to identify and characterize secondary eclipses in the original light curves of all published CoRoT planets using uniform detection and evaluation criteria. Our analysis is based on a Bayesian model selection between two competing models: one with and one without an eclipse signal. The search is carried out by mapping the Bayes factor in favor of the eclipse model as a function of the eclipse center time, after which plausible eclipse candidates are characterized by estimating the posterior distributions of the eclipse model parameters using Markov chain Monte Carlo. We discover statistically significant eclipse events for two planets, CoRoT-6b and CoRoT-11b, and for one brown dwarf, CoRoT-15b. We also find marginally significant eclipse events passing our plausibility criteria for CoRoT-3b, 13b, 18b, and 21b. The previously published CoRoT-1b and CoRoT-2b eclipses are also confirmed.

  20. Self-Published Books: An Empirical "Snapshot"

    Science.gov (United States)

    Bradley, Jana; Fulton, Bruce; Helm, Marlene

    2012-01-01

    The number of books published by authors using fee-based publication services, such as Lulu and AuthorHouse, is overtaking the number of books published by mainstream publishers, according to Bowker's 2009 annual data. Little empirical research exists on self-published books. This article presents the results of an investigation of a random sample…

  1. The optimal time-frequency atom search based on a modified ant colony algorithm

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-feng; LI Yan-jun; YU Rui-xing; ZHANG Ke

    2008-01-01

    In this paper, a new optimal time-frequency atom search method based on a modified ant colony algorithm is proposed to improve on the precision of the traditional methods. First, the discretization formula for the finite-length time-frequency atom is derived in detail. Second, a modified ant colony algorithm in continuous space is proposed. Finally, the optimal time-frequency atom search algorithm based on the modified ant colony algorithm is described in detail and a simulation experiment is carried out. The results indicate that the developed algorithm is valid and stable, and that its precision is higher than that of the traditional method.

  2. A Feature-Weighted Instance-Based Learner for Deep Web Search Interface Identification

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2013-02-01

    Full Text Available Determining whether a site has a search interface is a crucial priority for further research of deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experiment results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based learner gives better results than classical algorithms such as C4.5, random forest and KNN.
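A feature-weighted instance-based learner is, in its simplest form, a k-nearest-neighbour classifier with per-feature weights applied inside the distance. The sketch below illustrates only that idea; the weights, toy feature vectors, and labels are invented, not taken from the paper's deep-web interface data.

```python
def weighted_distance(u, v, weights):
    # Euclidean distance with a per-feature weight on each squared difference
    return sum(w * (a - b) ** 2 for a, b, w in zip(u, v, weights)) ** 0.5

def classify(x, examples, weights, k=3):
    # feature-weighted k-nearest-neighbour majority vote
    nearest = sorted(examples,
                     key=lambda e: weighted_distance(x, e[0], weights))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# toy data: feature 0 is informative, feature 1 is noise, so it gets weight 0
train = [((0.0, 9.0), "no-form"), ((0.1, 0.0), "no-form"),
         ((1.0, 8.5), "search-form"), ((0.9, 0.2), "search-form"),
         ((1.1, 4.0), "search-form")]
weights = (1.0, 0.0)
print(classify((0.95, 9.0), train, weights))
```

With equal weights the noisy second feature would dominate the distance; zeroing its weight lets the informative feature decide the vote.
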

  3. Structure-Based Search for New Inhibitors of Cholinesterases

    Directory of Open Access Journals (Sweden)

    Barbara Malawska

    2013-03-01

    Full Text Available Cholinesterases are important biological targets responsible for regulation of cholinergic transmission, and their inhibitors are used for the treatment of Alzheimer's disease. To design new cholinesterase inhibitors, a number of different structure-based design strategies were followed, including the modification of compounds from a previously developed library and a fragment-based design approach. This led to the selection of heterodimeric structures as potential inhibitors. Synthesis and biological evaluation of selected candidates confirmed that the designed compounds were acetylcholinesterase inhibitors with IC50 values in the mid-nanomolar to low micromolar range, and some of them were also butyrylcholinesterase inhibitors.

  4. Nomogram-based search for subspaces of independent attributes

    OpenAIRE

    Moškon, Sašo

    2009-01-01

    In this thesis we introduce selective nomograms, an improvement of nomograms for visualization of the naive Bayesian classifier. Selective nomograms allow us to interactively explore the domain and discover conditional dependencies between the attributes. We also propose a classification algorithm based on the idea of selective nomograms. First, we introduce selective nomograms, define conditional dependencies and describe the theoretical background for discovering conditional dependencies betw...

  5. Improving software security using search-based refactoring

    OpenAIRE

    Ghaith, Shadi; Ó Cinnéide, Mel

    2012-01-01

    Security metrics have been proposed to assess the security of software applications based on the principles of "reduce attack surface" and "grant least privilege." While these metrics can help inform the developer in choosing designs that provide better security, they cannot on their own show exactly how to make an application more secure. Even if they could, the onerous task of updating the software to improve its security is left to the developer. In this paper we ...

  6. Digital Elevation Model (DEM), Digital elevation model based of 2006 LIDAR data., Published in 2007, Johnson County AIMS.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Digital Elevation Model (DEM) dataset was produced all or in part from LIDAR information as of 2007. It is described as 'Digital elevation model based of 2006...

  7. Road and Street Centerlines, Centerlines based on newly platted subdivisions, Published in Not Provided, City of Aurora.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Road and Street Centerlines dataset was produced all or in part from Hardcopy Maps information as of Not Provided. It is described as 'Centerlines based on newly platted subdivisions'. Data by this...

  8. COORDINATE-BASED META-ANALYTIC SEARCH FOR THE SPM NEUROIMAGING PIPELINE

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej; Szewczyk, Marcin; Rasmussen, Peter Mondrup;

    2009-01-01

    … databases offer so-called coordinate-based searching to the users (e.g. Brede, BrainMap). For such a search, the publications which relate to the brain locations represented by the user coordinates are retrieved. In this paper we present BredeQuery – a plugin for the widely used SPM5 data analytic pipeline. BredeQuery offers a direct link from SPM5 to the Brede Database coordinate-based search engine. BredeQuery is able to 'grab' brain location coordinates from the SPM windows and enter them as a query for the Brede Database. Moreover, results of the query can be displayed in an SPM window and/or exported...

  9. A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.

    Science.gov (United States)

    Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R

    2008-04-01

    Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required. PMID:18351728

  10. Efficiency image data retrieval based on asynchronous capability aware spatial search service middleware

    Science.gov (United States)

    Chen, Nengcheng; Chen, Zeqiang; Gong, Jianya

    2007-11-01

    Recent advances in open geospatial web service, such as Web Coverage Service as well as corresponding web ready data processing service, have led to the generation of large amounts of OGC enabled links on Internet. Recently a few search engines that are specialised with respect to geographic space have appeared. However, users do not always get the effective OGC WCS link information they expect when searching the Web. How to quickly find the correct spatial aware web service in a heterogeneous distributed environment has become a "bottleneck" of geospatial web-based applications. In order to improve the retrieval efficiency of OGC Web Coverage Service (WCS) on WWW, a new methodology for retrieving WCS based on clustering capability aware spatial search service middleware is put forward in this paper.

  11. Ontology-based Semantic Search Engine for Healthcare Services

    Directory of Open Access Journals (Sweden)

    Jotsna Molly Rajan

    2012-04-01

    Full Text Available With the development of Web Services, the retrieval of relevant services has become a challenge. The keyword-based discovery mechanism using UDDI and WSDL is insufficient due to the retrieval of a large amount of irrelevant information. Also, keywords are insufficient in expressing semantic concepts, since a single concept can be referred to using syntactically different terms. Hence, service capabilities need to be manually analyzed, which led to the development of the Semantic Web for automatic service discovery and retrieval of relevant services and resources. This work proposes the incorporation of a semantic matching methodology in the Semantic Web for improving the efficiency and accuracy of the discovery mechanism.

  12. Brain bases of the automaticity via visual search task

    OpenAIRE

    Bueichekú Bohabonay, Elisenda Práxedes

    2016-01-01

    The main aim of this thesis is to study the brain bases of visual search processes, the development of automaticity through training, and the relationship between the processes underlying visual search, theoretical models of attention, and neuroplasticity. The visual search task was used in three experiments, collecting behavioural data together with fMRI measures of brain activity and connectivity in a healthy population. The results point to the at...

  13. Exploring personalized searches using tag-based user profiles and resource profiles in folksonomy.

    Science.gov (United States)

    Cai, Yi; Li, Qing; Xie, Haoran; Min, Huaqin

    2014-10-01

    With the increase in resource-sharing websites such as YouTube and Flickr, many shared resources have arisen on the Web. Personalized searches have become more important and challenging since users demand higher retrieval quality. To achieve this goal, personalized searches need to take users' personalized profiles and information needs into consideration. Collaborative tagging (also known as folksonomy) systems allow users to annotate resources with their own tags, which provides a simple but powerful way for organizing, retrieving and sharing different types of social resources. In this article, we examine the limitations of previous tag-based personalized searches. To handle these limitations, we propose a new method to model user profiles and resource profiles in collaborative tagging systems. We use a normalized term frequency to indicate the preference degree of a user on a tag. A novel search method using such profiles of users and resources is proposed to facilitate the desired personalization in resource searches. In our framework, instead of the keyword matching or similarity measurement used in previous works, the relevance measurement between a resource and a user query (termed the query relevance) is treated as a fuzzy satisfaction problem of a user's query requirements. We implement a prototype system called the Folksonomy-based Multimedia Retrieval System (FMRS). Experiments using the FMRS data set and the MovieLens data set show that our proposed method outperforms baseline methods. PMID:24939833
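The "normalized term frequency" used above to express a user's preference for a tag can be sketched as follows; this is a minimal illustration, and the max-based normalization and sample tags are assumptions rather than details taken from the article:

```python
from collections import Counter

def user_profile(tag_history):
    """Preference degree of a user on each tag: tag frequency normalized
    by the count of the user's most-used tag (values in (0, 1])."""
    counts = Counter(tag_history)
    top = max(counts.values())
    return {tag: n / top for tag, n in counts.items()}

profile = user_profile(["jazz", "jazz", "jazz", "live", "live", "hd"])
assert profile["jazz"] == 1.0          # most-used tag -> preference 1.0
assert profile["live"] == 2 / 3
```

A resource profile can be built the same way from all tags the community has assigned to that resource, which is what makes profile-to-profile matching possible.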

  14. Efficient Multi-keyword Ranked Search over Outsourced Cloud Data based on Homomorphic Encryption

    Directory of Open Access Journals (Sweden)

    Nie Mengxi

    2016-01-01

    Full Text Available With the development of cloud computing, more and more data owners are motivated to outsource their data to the cloud server for greater flexibility and lower cost. Because the security of outsourced data must be guaranteed, encryption methods have to be used, which obsoletes traditional data utilization based on plaintext, e.g. keyword search. To enable search over encrypted data, some schemes have been proposed, e.g. top-k single or multiple keyword retrieval. However, the efficiency of these schemes is too low for them to be practical in cloud computing. In this paper, we propose a new scheme based on homomorphic encryption to solve the challenging problem of privacy-preserving, efficient multi-keyword ranked search over outsourced cloud data. In our scheme, the inner product is adopted to measure relevance scores and the technique of relevance feedback is used to reflect the search preference of the data users. Security analysis shows that the proposed scheme can meet strict privacy requirements for such a secure cloud data utilization system. Performance evaluation demonstrates that the proposed scheme achieves low overhead on both computation and communication.
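The inner-product relevance scoring mentioned in the abstract can be illustrated in plaintext form. In the actual scheme the vectors would be protected with homomorphic encryption so the server can compute such products without seeing the data; this sketch deliberately omits that step, and the vector values are made up:

```python
def relevance(doc_vec, query_vec):
    # Inner product of a document's keyword-weight vector with the query vector.
    return sum(d * q for d, q in zip(doc_vec, query_vec))

docs = {"d1": [0.9, 0.0, 0.4], "d2": [0.2, 0.8, 0.1]}
query = [1, 0, 1]                      # keywords 1 and 3 requested
ranked = sorted(docs, key=lambda d: relevance(docs[d], query), reverse=True)
assert ranked == ["d1", "d2"]          # d1 scores 1.3, d2 scores 0.3
```

Ranking then simply returns the top-k documents by this score.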

  15. A semantics-based method for clustering of Chinese web search results

    Science.gov (United States)

    Zhang, Hui; Wang, Deqing; Wang, Li; Bi, Zhuming; Chen, Yong

    2014-01-01

    Information explosion is a critical challenge to the development of modern information systems. In particular, when the application of an information system is over the Internet, the amount of information over the web has been increasing exponentially and rapidly. Search engines, such as Google and Baidu, are essential tools for people to find the information from the Internet. Valuable information, however, is still likely submerged in the ocean of search results from those tools. By clustering the results into different groups based on subjects automatically, a search engine with the clustering feature allows users to select the most relevant results quickly. In this paper, we propose an online semantics-based method to cluster Chinese web search results. First, we employ the generalised suffix tree to extract the longest common substrings (LCSs) from search snippets. Second, we use the HowNet to calculate the similarities of the words derived from the LCSs, and extract the most representative features by constructing the vocabulary chain. Third, we construct a vector of text features and calculate snippets' semantic similarities. Finally, we improve the Chameleon algorithm to cluster snippets. Extensive experimental results have shown that the proposed algorithm outperforms the suffix tree clustering method and other traditional clustering methods.
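The first step of the method, extracting longest common substrings from search snippets, can be illustrated with a simple dynamic-programming version for a pair of strings; the paper itself uses a generalized suffix tree, which scales better across many snippets:

```python
def longest_common_substring(a, b):
    """Return the longest contiguous substring shared by a and b
    (DP over a rolling row; O(len(a) * len(b)) time)."""
    best, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best:
                    best, best_end = cur[j], i
        prev = cur
    return a[best_end - best:best_end]

assert longest_common_substring("semantic web search",
                                "chinese web search") == " web search"
```

The LCSs found this way become the candidate features that the HowNet similarity step then merges into a vocabulary chain.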

  16. A Component Based Heuristic Search Method with Evolutionary Eliminations

    CERN Document Server

    Li, Jingpeng; Burke, Edmund

    2009-01-01

    Nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. This paper presents a new component-based approach with evolutionary eliminations, for a nurse scheduling problem arising at a major UK hospital. The main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then to implement two evolutionary elimination strategies mimicking natural selection and natural mutation process on these components respectively to iteratively deliver better schedules. The worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. This demonstration employs an evaluation function which evaluates how well each component contributes towards the final objective. Two elimination steps are then applied: the first elimination eliminates a number of components that are deemed not worthy to stay in the current schedule; the second elimination may a...

  17. Copyright of Electronic Publishing.

    Science.gov (United States)

    Dong, Elaine; Wang, Bob

    2002-01-01

    Analyzes the importance of copyright, considers the main causes of copyright infringement in electronic publishing, discusses fair use of a copyrighted work, and suggests methods to safeguard copyrighted electronic publishing, including legislation, contracts, and technology. (Author/LRW)

  18. An Evolutionary Programming Based Tabu Search Method for Unit Commitment Problem with Cooling-Banking Constraints

    Science.gov (United States)

    Christober, C.; Rajan, Asir

    2011-01-01

    This paper presents a new approach to solve the short-term unit commitment problem using an evolutionary programming based tabu search method with cooling-banking constraints. Numerical results are shown comparing the cost solutions and computation times obtained by using the evolutionary programming method and other conventional methods like dynamic programming and Lagrangian relaxation.

  19. Exploring Gender Differences in SMS-Based Mobile Library Search System Adoption

    Science.gov (United States)

    Goh, Tiong-Thye

    2011-01-01

    This paper investigates differences in how male and female students perceived a short message service (SMS) library catalog search service when adopting it. Based on a sample of 90 students, the results suggest that there are significant differences in perceived usefulness and intention to use but no significant differences in self-efficacy and…

  20. Information Commitments: Evaluative Standards and Information Searching Strategies in Web-Based Learning Environments

    Science.gov (United States)

    Wu, Ying-Tien; Tsai, Chin-Chung

    2005-01-01

    "Information commitments" include both a set of evaluative standards that Web users utilize to assess the accuracy and usefulness of information in Web-based learning environments (implicit component), and the information searching strategies that Web users use on the Internet (explicit component). An "Information Commitment Survey" (ICS),…

  1. EARS: An Online Bibliographic Search and Retrieval System Based on Ordered Explosion.

    Science.gov (United States)

    Ramesh, R.; Drury, Colin G.

    1987-01-01

    Provides overview of Ergonomics Abstracts Retrieval System (EARS), an online bibliographic search and retrieval system in the area of human factors engineering. Other online systems are described, the design of EARS based on inverted file organization is explained, and system expansions including a thesaurus are discussed. (Author/LRW)

  2. A novel approach towards skill-based search and services of Open Educational Resources

    NARCIS (Netherlands)

    Ha, Kyung-Hun; Niemann, Katja; Schwertel, Uta; Holtkamp, Philipp; Pirkkalainen, Henri; Börner, Dirk; Kalz, Marco; Pitsilis, Vassilis; Vidalis, Ares; Pappa, Dimitra; Bick, Markus; Pawlowski, Jan; Wolpers, Martin

    2011-01-01

    Ha, K.-H., Niemann, K., Schwertel, U., Holtkamp, P., Pirkkalainen, H., Börner, D. et al (2011). A novel approach towards skill-based search and services of Open Educational Resources. In E. Garcia-Barriocanal, A. Öztürk, & M. C. Okur (Eds.), Metadata and Semantics Research: 5th International Confere

  3. Prospects for SUSY discovery based on inclusive searches with the ATLAS detector at the LHC

    International Nuclear Information System (INIS)

    We present searches for generic SUSY models with R-parity conservation in the ATLAS detector at the LHC, based on signatures including missing transverse momentum from undetected neutralinos, multiple jets and leptons or b and tau jets. We show the corresponding discovery reach for early ATLAS data, including the effect of systematic uncertainties on the background estimate. (author)

  4. Eugene Garfield, Francis Narin, and PageRank: The Theoretical Bases of the Google Search Engine

    CERN Document Server

    Bensman, Stephen J

    2013-01-01

    This paper presents a test of the validity of using Google Scholar to evaluate the publications of researchers by comparing the premises on which its search engine, PageRank, is based, to those of Garfield's theory of citation indexing. It finds that the premises are identical and that PageRank and Garfield's theory of citation indexing validate each other.

  5. Report on TBAS 2012: workshop on task-based and aggregated search

    DEFF Research Database (Denmark)

    Larsen, Birger; Lioma, Christina; de Vries, Arjen

    2012-01-01

    The ECIR half-day workshop on Task-Based and Aggregated Search (TBAS) was held in Barcelona, Spain on 1 April 2012. The program included a keynote talk by Professor Järvelin, six full paper presentations, two poster presentations, and an interactive discussion among the approximately 25 participa...

  6. Darwin and his publisher.

    Science.gov (United States)

    McClay, David

    2009-01-01

    Charles Darwin's publisher John Murray played an important, if often underrated, role in bringing his theories to the public. As their letters and publishing archives show, they had a friendly, businesslike and successful relationship, despite fundamental scientific and religious differences between the two men. In addition to publishing Darwin, Murray also published many of the critical and supportive works and reviews which Darwin's own works excited. PMID:19960865

  7. Digital self-publishing

    OpenAIRE

    KRANJC, ALJAŽ

    2016-01-01

    Digital publishing (also referred to as e-publishing or electronic publishing) includes the digital publication of e-books, digital magazines, and the development of digital libraries and catalogs. It has become common to distribute books, magazines, and newspapers to consumers in digital form. There are many different online tools for creating and formatting texts, depending on their type (plain text, scientific articles, web content). We have reviewed several self-publishing platfo...

  8. Scholarly electronic publishing bibliography

    OpenAIRE

    Bailey, Jr., Charles W.

    2005-01-01

    The Scholarly Electronic Publishing Bibliography (SEPB) presents selected English-language articles, books, and other printed and electronic sources that are useful in understanding scholarly electronic publishing efforts on the Internet. Most sources have been published between 1990 and the present; however, a limited number of key sources published prior to 1990 are also included. Where possible, links are provided to sources that are freely available on the Internet. SEPB includes "Scholar...

  9. Based on A* and Q-Learning Search and Rescue Robot Navigation

    OpenAIRE

    Ruiyuan Fan; Xiaogang Ruan; Tao Pang; Ershen Wang

    2012-01-01

    For search and rescue robot navigation in unknown environments, a bionic self-learning algorithm based on A* and Q-Learning is put forward. The algorithm utilizes the Growing Self-Organizing Map (GSOM) to build a topological cognitive map of the environment. The heuristic A* algorithm is used for global path planning. When the local environment changes, Q-Learning is used for local path planning. Thereby the robot can obtain self-learning skills by studying and training like a human or ani...
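The local-planning component relies on the standard tabular Q-learning update, which can be sketched as follows; the toy two-state table is illustrative and not taken from the paper:

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

Q = {"s0": {"left": 0.0, "right": 0.0}, "s1": {"left": 1.0, "right": 0.0}}
q_update(Q, "s0", "right", r=0.0, s_next="s1")
# target = 0 + 0.9 * 1.0 = 0.9; new value = 0 + 0.5 * 0.9 = 0.45
assert abs(Q["s0"]["right"] - 0.45) < 1e-9
```

Repeated updates of this form let the robot adapt its local route without replanning the whole global A* path.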

  10. A New Genetic Algorithm Based on Niche Technique and Local Search Method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The genetic algorithm has been widely used in many fields as an easy, robust global search and optimization method. In this paper, a new genetic algorithm based on the niche technique and a local search method is presented in consideration of the inadequacies of the simple genetic algorithm. In order to prove the adaptability and validity of the improved genetic algorithm, optimization problems of multimodal functions with equal peaks, unequal peaks and complicated peak distribution are discussed. The simulation results show that, compared to other niching methods, this improved genetic algorithm has obvious potential in many respects, such as convergence speed, solution accuracy, and ability of global optimization.
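One common realisation of the niche technique referred to above is fitness sharing, in which an individual's fitness is discounted by how crowded its neighbourhood is, so the population spreads over several peaks. Whether the paper uses exactly this variant is not stated; treat this as a generic sketch:

```python
def shared_fitness(pop, raw_fitness, sigma=1.0):
    """Fitness sharing: divide each individual's raw fitness by its niche
    count (sum of triangular similarity to all individuals within sigma),
    penalizing individuals in crowded regions of the search space."""
    shared = []
    for x in pop:
        niche = sum(max(0.0, 1 - abs(x - y) / sigma) for y in pop)
        shared.append(raw_fitness(x) / niche)
    return shared

pop = [0.0, 0.05, 3.0]        # two individuals crowd one peak, one stands alone
s = shared_fitness(pop, lambda x: 1.0)
assert s[2] == 1.0            # the isolated individual keeps its full fitness
assert s[0] < s[2]            # crowded individuals are discounted
```

Selection then acts on the shared values, which keeps equal and unequal peaks populated simultaneously.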

  11. A Beam Search-based Algorithm for Flexible Manufacturing System Scheduling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Bing-hai; ZHOU Xiao-jun; CAI Jian-guo; FENG Kun

    2002-01-01

    A new algorithm is proposed for the flexible manufacturing system (FMS) scheduling problem in this paper. The proposed algorithm is a heuristic based on filtered beam search. It considers the machines and automated guided vehicles (AGVs) as the primary resources, and utilizes system constraints and related manufacturing and processing information to generate machine and AGV schedules. The generated schedules can cover an entire scheduling horizon as well as various lengths of scheduling periods. The proposed algorithm is also compared with other well-known dispatching-rules-based FMS scheduling approaches. The results indicate that the beam search algorithm is a simple, valid and promising algorithm that deserves further research in the FMS scheduling field.

  12. Economic Load Dispatch by Hybrid Swarm Intelligence Based Gravitational Search Algorithm

    Directory of Open Access Journals (Sweden)

    Hari Mohan Dubey

    2013-07-01

    Full Text Available This paper presents a novel heuristic optimization method to solve complex economic load dispatch problem using a hybrid method based on particle swarm optimization (PSO and gravitational search algorithm (GSA. This algorithm named as hybrid PSOGSA combines the social thinking feature in PSO with the local search capability of GSA. To analyze the performance of the PSOGSA algorithm it has been tested on four different standard test cases of different dimensions and complexity levels arising due to practical operating constraints. The obtained results are compared with recently reported methods. The comparison confirms the robustness and efficiency of the algorithm over other existing techniques.

  13. The International Nuclear Information System Collection Search: New Features (Information report on the base of INIS Collection Search help file)

    International Nuclear Information System (INIS)

    Full text: The main purpose of this information report is to show the advantages of the new INIS Collection Search (ICS) web application over the old INIS Online Database (IODB) it replaced. This represents the most important outcome achieved in 2011 and can be considered one of the major improvements to the services provided to users since the establishment of the INIS Secretariat. The report explains the following features of the INIS Collection Search: integration of the Joint INIS/ETDE Thesaurus with the INIS Collection Search; INIS authorities integration with support for Subject Category, Journal and Report; multi-lingual user interface; Google Search Appliance; dynamic navigation; RSS 2.0 feed; My Workspace. The INIS Collection Search has been well received by our users and a number of suggestions for further improvement have been identified. Our aim is to continue to develop and implement new features. (authors)

  14. Democratic community-based search with XML full-text queries

    OpenAIRE

    Curtmola, Emiran

    2009-01-01

    As the web evolves, it is becoming easier to form online communities based on shared interests, and to create and publish data on a wide variety of topics. With this democratization of information creation, it is natural to query, in an ad-hoc and expressive fashion, the global collection that is the union of all local data collections of others within the community. In order to publish and locate documents of interest while fully delivering on the promise of free data exchange, any community...

  15. Development and evaluation of a biomedical search engine using a predicate-based vector space model.

    Science.gov (United States)

    Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey

    2013-10-01

    Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment, for the predicate- versus keyword-based approach respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. PMID:23892296
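A predicate-based vector space can be approximated by treating each (subject, relation, object) triple as a single index term and applying ordinary tf-idf with cosine similarity. The paper's adjusted tf-idf and boost function are not reproduced here, and the sample triples are invented for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of lists of predicate triples (subject, relation, object).
    Each distinct triple is one index term; weights are tf * log(N / df)."""
    N = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (n / len(doc)) * math.log(N / df[t])
                     for t, n in tf.items()})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

d1 = [("p53", "inhibits", "tumor"), ("p53", "binds", "dna")]
d2 = [("p53", "inhibits", "tumor")]
d3 = [("egfr", "activates", "ras")]
v = tfidf_vectors([d1, d2, d3])
assert cosine(v[0], v[1]) > cosine(v[0], v[2])   # shared triple beats no overlap
```

Because a whole triple must match, this scores structured agreement rather than incidental keyword overlap, which is the intuition behind the reported precision gain.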

  16. On the importance of graph search algorithms for DRGEP-based mechanism reduction methods

    CERN Document Server

    Niemeyer, Kyle E

    2016-01-01

    The importance of graph search algorithm choice to the directed relation graph with error propagation (DRGEP) method is studied by comparing basic and modified depth-first search, basic and R-value-based breadth-first search (RBFS), and Dijkstra's algorithm. By using each algorithm with DRGEP to produce skeletal mechanisms from a detailed mechanism for n-heptane with randomly-shuffled species order, it is demonstrated that only Dijkstra's algorithm and RBFS produce results independent of species order. In addition, each algorithm is used with DRGEP to generate skeletal mechanisms for n-heptane covering a comprehensive range of autoignition conditions for pressure, temperature, and equivalence ratio. Dijkstra's algorithm combined with a coefficient scaling approach is demonstrated to produce the most compact skeletal mechanism with a similar performance compared to larger skeletal mechanisms resulting from the other algorithms. The computational efficiency of each algorithm is also compared by applying the DRG...
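The role of Dijkstra's algorithm in DRGEP can be sketched as a maximum-product path search over interaction coefficients: because every coefficient lies in (0, 1], a product along a path can only shrink, so a greedy best-first expansion is exact. The graph below is a made-up toy, not a real chemistry mechanism:

```python
import heapq

def r_values(graph, source):
    """DRGEP-style overall interaction coefficients: R[target] is the maximum
    over all paths from source of the product of edge coefficients."""
    R = {source: 1.0}
    heap = [(-1.0, source)]            # max-heap via negated values
    while heap:
        negr, u = heapq.heappop(heap)
        r = -negr
        if r < R.get(u, 0.0):          # stale entry
            continue
        for v, c in graph.get(u, {}).items():
            cand = r * c
            if cand > R.get(v, 0.0):
                R[v] = cand
                heapq.heappush(heap, (-cand, v))
    return R

graph = {"fuel": {"A": 0.9, "B": 0.2}, "A": {"B": 0.5}}
R = r_values(graph, "fuel")
assert abs(R["B"] - 0.45) < 1e-12   # path fuel->A->B (0.9*0.5) beats fuel->B (0.2)
```

Species whose R-value falls below a threshold are candidates for removal, so the search algorithm's exactness directly affects which skeletal mechanism is produced.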

  17. Genetic Algorithm-based Dynamic Vehicle Route Search using Car-to-Car Communication

    Directory of Open Access Journals (Sweden)

    KIM, J.

    2010-11-01

    Full Text Available Suggesting more efficient driving routes generate benefits not only for individuals by saving commute time, but also for society as a whole by reducing accident rates and social costs by lessening traffic congestion. In this paper, we suggest a new route search algorithm based on a genetic algorithm which is more easily installable into mutually communicating car navigation systems, and validate its usefulness through experiments reflecting real-world situations. The proposed algorithm is capable of searching alternative routes dynamically in unexpected events of system malfunctioning or traffic slow-downs due to accidents. Experimental results demonstrate that our algorithm searches the best route more efficiently and evolves with universal adaptability.

  18. Project GRACE A grid based search tool for the global digital library

    CERN Document Server

    Scholze, Frank; Vigen, Jens; Prazak, Petra; The Seventh International Conference on Electronic Theses and Dissertations

    2004-01-01

    The paper will report on the progress of an ongoing EU project called GRACE - Grid Search and Categorization Engine (http://www.grace-ist.org). The project participants are CERN, Sheffield Hallam University, Stockholm University, Stuttgart University, GL 2006 and Telecom Italia. The project started in 2002 and will finish in 2005, resulting in a Grid based search engine that will search across a variety of content sources including a number of electronic thesis and dissertation repositories. The Open Archives Initiative (OAI) is expanding and is clearly an interesting movement for a community advocating open access to ETD. However, the OAI approach alone may not be sufficiently scalable to achieve a truly global ETD Digital Library. Many universities simply offer their collections to the world via their local web services without being part of any federated system for archiving and even those dissertations that are provided with OAI compliant metadata will not necessarily be picked up by a centralized OAI Ser...

  19. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    Directory of Open Access Journals (Sweden)

    K.S. Kuppusamy,

    2011-03-01

    Full Text Available The results rendered by the search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope-up with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through user profile during the candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users, quantifies the improvements in terms of amount of relevant information fetched.

  20. Differential Evolution Based Intelligent System State Search Method for Composite Power System Reliability Evaluation

    Science.gov (United States)

    Bakkiyaraj, Ashok; Kumarappan, N.

    2015-09-01

    This paper presents a new approach for evaluating the reliability indices of a composite power system that adopts the binary differential evolution (BDE) algorithm in the search mechanism to select the system states. These states, also called dominant states, have large state probabilities and the higher loss-of-load curtailment necessary to maintain the real power balance. A chromosome of the BDE algorithm represents a system state. BDE is not applied in its traditional role of optimizing a non-linear objective function, but is used as a tool for exploring more dominant states by producing new chromosomes, mutant vectors and trial vectors based on the fitness function. The searched system states are used to evaluate annualized system and load point reliability indices. The proposed search methodology is applied to the RBTS and IEEE-RTS test systems and the results are compared with other approaches. This approach evaluates indices similar to existing methods while analyzing fewer system states.

  1. HARD: SUBJECT-BASED SEARCH ENGINE MENGGUNAKAN TF-IDF DAN JACCARD'S COEFFICIENT

    Directory of Open Access Journals (Sweden)

    Rolly Intan

    2006-01-01

    Full Text Available This paper proposes a hybridized concept of a search engine based on the subject parameter of High Accuracy Retrieval from Documents (HARD). Tf-Idf and Jaccard's Coefficient are modified and extended to provide the concept. Several illustrative examples are given, including their steps of calculation, in order to clearly understand the proposed concept and formulas. Abstract in Bahasa Indonesia: This paper introduces a search engine algorithm based on the HARD (High Accuracy Retrieval from Documents) concept, combining the TF-IDF (Term Frequency Inverse Document Frequency) method and Jaccard's Coefficient. Both methods are modified and extended by introducing several new formulas. To make the introduced algorithm and formulas easier to understand, several example calculations are given. Keywords: HARD, Tf-Idf, Jaccard's coefficient, search engine, fuzzy sets.
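For reference, the unmodified Jaccard's Coefficient that the paper starts from measures the overlap of two term sets; the paper's HARD-specific modifications and new formulas are not reproduced here:

```python
def jaccard(a, b):
    """Jaccard's Coefficient: |A intersect B| / |A union B| for two term sets."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

assert jaccard({"search", "engine"}, {"search", "index"}) == 1 / 3
```

In a HARD-style pipeline this set overlap would be combined with Tf-Idf term weights to score a document against the query's subject parameter.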

  2. Segmentation Based Approach to Dynamic Page Construction from Search Engine Results

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The results rendered by the search engines are mostly a linear snippet list. With the prolific increase in the dynamism of web pages there is a need for enhanced result lists from search engines in order to cope-up with the expectations of the users. This paper proposes a model for dynamic construction of a resultant page from various results fetched by the search engine, based on the web page segmentation approach. With the incorporation of personalization through user profile during the candidate segment selection, the enriched resultant page is constructed. The benefits of this approach include instant, one-shot navigation to relevant portions from various result items, in contrast to a linear page-by-page visit approach. The experiments conducted on the prototype model with various levels of users, quantifies the improvements in terms of amount of relevant information fetched.

  3. Cross-correlation measurement techniques for cavity-based axion and weakly interacting slim particle searches

    CERN Document Server

    Parker, Stephen R; Ivanov, Eugene N; Tobar, Michael E

    2015-01-01

    Weakly Interacting Slim Particles (WISPs), such as axions, are highly motivated dark matter candidates. The most sensitive experimental searches for these particles exploit WISP-to-photon conversion mechanisms and use resonant cavity structures to enhance the resulting power signal. For WISPs to constitute Cold Dark Matter their required masses correspond to photons in the microwave spectrum. As such, searches for these types of WISPs are primarily limited by the thermal cavity noise and the broadband first-stage amplifier noise. In this work we propose and then verify two cross-correlation measurement techniques for cavity-based WISP searches. These are two channel measurement schemes where the cross-spectrum is computed, rejecting uncorrelated noise sources while still retaining correlated signals such as those generated by WISPs. The first technique allows for the cavity thermal spectrum to be observed with an enhanced resolution. The second technique cross-correlates two individual cavity/amplifier system...

  4. Hierarchical content-based image retrieval by dynamic indexing and guided search

    Science.gov (United States)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  5. KRBKSS: a keyword relationship based keyword-set search system for peer-to-peer networks

    Institute of Scientific and Technical Information of China (English)

    ZHANG Liang; ZOU Fu-tai; MA Fan-yuan

    2005-01-01

    Distributed inverted index technology is used in many peer-to-peer (P2P) systems to help rapidly find the documents in which a given word appears. A distributed inverted index partitioned by keywords may incur significant bandwidth when executing more complicated search queries such as multiple-attribute queries. In order to reduce query overhead, KSS (keyword-set search) by Gnawali partitions the index by a set of keywords. However, a KSS index is considerably larger than a standard inverted index, since there are more word sets than there are individual words, and the insert overhead and storage overhead are clearly unacceptable for full-text search on a collection of documents even if KSS uses the distance window technology. In this paper, we extract the relationship information between query keywords from websites' query logs to improve the performance of the KSS system. Experimental results clearly demonstrate that the improved keyword-set search system based on keyword relationships (KRBKSS) is more efficient than the KSS index in insert overhead and storage overhead, and than a standard inverted index in terms of communication costs for query.
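    As a generic illustration of the trade-off this abstract describes (not the authors' KRBKSS code), the sketch below builds a standard inverted index and a KSS-style keyword-pair index over a toy corpus: the pair index answers a two-keyword query with a single lookup, but holds more keys.

```python
from itertools import combinations

def inverted_index(docs):
    """Standard inverted index: word -> set of doc ids."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.split()):
            index.setdefault(word, set()).add(doc_id)
    return index

def kss_index(docs, set_size=2):
    """KSS-style index: each key is a sorted keyword pair, so a
    multi-keyword query hits a single index entry."""
    index = {}
    for doc_id, text in docs.items():
        words = sorted(set(text.split()))
        for pair in combinations(words, set_size):
            index.setdefault(pair, set()).add(doc_id)
    return index

docs = {1: "peer to peer search", 2: "keyword search index"}
inv = inverted_index(docs)
kss = kss_index(docs)
# A two-keyword query is one lookup in KSS, but an intersection in the inverted index:
hits_inv = inv["peer"] & inv["search"]
hits_kss = kss[("peer", "search")]
```

    Even on this tiny corpus the KSS index has more entries than the inverted index, which is the storage overhead the paper targets.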

  6. A novel artificial bee colony algorithm based on modified search equation and orthogonal learning.

    Science.gov (United States)

    Gao, Wei-feng; Liu, San-yang; Huang, Ling-ling

    2013-06-01

    The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive with other population-based algorithms. However, ABC has an insufficiency in its solution search equation, which is good at exploration but poor at exploitation. To address this issue, we first propose an improved ABC method called CABC, where a modified search equation is applied to generate a candidate solution to improve the search ability of ABC. Furthermore, we use orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for ABC variants to discover more useful information from the search experiences. Owing to OED's good character of sampling a small number of well-representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. The comparisons with some other ABCs and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions. PMID:23086528
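    For context, here is a minimal sketch of the standard ABC solution search equation v_ij = x_ij + phi * (x_ij - x_kj) that the paper's CABC variant modifies; this is the textbook form, not the authors' modified equation, and the bounds are arbitrary illustration values.

```python
import random

def abc_candidate(food_sources, i, lower=-5.0, upper=5.0):
    """Standard ABC search equation (textbook sketch):
    v_ij = x_ij + phi * (x_ij - x_kj), phi ~ U(-1, 1),
    applied to one randomly chosen dimension j of solution i."""
    dim = len(food_sources[i])
    k = random.choice([s for s in range(len(food_sources)) if s != i])
    j = random.randrange(dim)
    phi = random.uniform(-1.0, 1.0)
    v = list(food_sources[i])
    v[j] = v[j] + phi * (food_sources[i][j] - food_sources[k][j])
    v[j] = min(max(v[j], lower), upper)  # clamp to the search bounds
    return v
```

    In the full algorithm a greedy selection keeps the candidate only if its fitness improves on the parent; the exploitation weakness the paper addresses stems from `k` and `phi` being chosen blindly.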

  7. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    Science.gov (United States)

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics; it is very meaningful to discuss how to quickly and efficiently detect abrupt change from large-scale bioelectric signals. Currently, most of the existing methods, like Kolmogorov-Smirnov (KS) statistic and so forth, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed as BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to leaf nodes of two BSTs. The studies on both the synthetic time series samples and the real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than KS, t-statistic (t), and Singular-Spectrum Analyses (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy out of four methods. This study suggests that the proposed BSTKS is very helpful for useful information inspection on all kinds of bioelectric time series signals. PMID:27413364
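    To make the baseline concrete, here is a brute-force change-point scan using the two-sample KS statistic, the O(n^2) approach whose cost motivates the BSTKS speedup; this is an illustrative sketch, not the authors' BSTKS implementation.

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum gap between
    the empirical CDFs of the two samples."""
    sa, sb = sorted(a), sorted(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        fa = bisect.bisect_right(sa, x) / len(sa)
        fb = bisect.bisect_right(sb, x) / len(sb)
        d = max(d, abs(fa - fb))
    return d

def brute_force_cp(series, min_len=5):
    """Scan every split point; the split maximizing the KS statistic is
    the estimated change point (quadratic cost that BSTKS avoids)."""
    best_t, best_d = None, -1.0
    for t in range(min_len, len(series) - min_len):
        d = ks_statistic(series[:t], series[t:])
        if d > best_d:
            best_t, best_d = t, d
    return best_t, best_d
```

    On a noise-free step signal the scan recovers the abrupt change exactly; BSTKS reaches a comparable answer by descending two Haar-wavelet search trees instead of testing every split.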

  8. TRUST BASED AUTOMATIC QUERY FORMULATION SEARCH ON EXPERT AND KNOWLEDGE USERS SYSTEMS

    Directory of Open Access Journals (Sweden)

    K. Sridharan

    2014-01-01

    Full Text Available Due to the increasing complexity of services, there is a need for dynamic interaction models. For a service-oriented system to work properly, we need a context-sensitive, trust-based search. Automatic information transfer is also deficient when an unexpected query is given. Search engines are vulnerable when answering intellectual queries and show unreliable outcomes, and users cannot be satisfied with these results due to a lack of trust in blogs. We propose a modified trust algorithm that performs exact skill matching and retrieves information based on proper content rank. Our contribution is a new modified trust algorithm with automatic formulation of meaningful search queries to retrieve the exact contents from the top-ranked documents, based on expert rank and the verified content quality of the resources provided. Some semantic search engines cannot show significant performance in improving precision and lowering recall. Our approach hence effectively reduces the complexity of combining HPS and software services.

  9. Web Document Clustering Using Cuckoo Search Clustering Algorithm based on Levy Flight

    Directory of Open Access Journals (Sweden)

    Moe Moe Zaw

    2013-09-01

    Full Text Available The World Wide Web serves as a huge, widely distributed global information service center. The amount of information on the web is increasing day by day, so finding relevant information on the web is a major challenge in Information Retrieval. This leads to the need for new techniques to help users effectively navigate, summarize, and organize the overwhelming information. One technique that can play an important role towards this objective is web document clustering. This paper aims to develop a clustering algorithm and apply it in the web document clustering area. The Cuckoo Search Optimization algorithm is a recently developed optimization algorithm based on the obligate brood parasitism of some cuckoo species, combined with Lévy flights. In this paper, a Cuckoo Search Clustering Algorithm based on Lévy flight is proposed. This algorithm applies Cuckoo Search Optimization to the web document clustering area to locate the optimal centroids of the clusters and to find a global solution of the clustering algorithm. To test the performance of the proposed method, this paper presents experimental results on a benchmark dataset. The results obtained show that the Cuckoo Search Clustering algorithm based on Lévy flight performs well in web document clustering.
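    A hedged sketch of the Lévy-flight step that drives such cuckoo-search updates, using Mantegna's algorithm; `new_nest` and its `alpha` parameter are illustrative assumptions, not the paper's exact centroid-update rule.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step length via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2), v ~ N(0, 1)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def new_nest(centroid, best, alpha=0.01):
    """Move a candidate centroid by a Lévy step scaled by its distance
    from the current best nest (hypothetical update for illustration)."""
    return [c + alpha * levy_step() * (c - b) for c, b in zip(centroid, best)]
```

    The heavy tail of the Lévy distribution produces occasional long jumps, which is what lets the search escape local optima while mostly refining near the current best.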

  10. The Open Data Repository's Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Bristow, T.; Blake, D. F.; Fonda, M.; Pires, A.

    2015-12-01

    Data management and data publication are becoming increasingly important components of research workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software (http://www.opendatarepository.org) strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity. We gratefully acknowledge the support for this study by the Science-Enabling Research Activity (SERA), and NASA NNX11AP82A

  11. The Open Data Repositorys Data Publisher

    Science.gov (United States)

    Stone, N.; Lafuente, B.; Downs, R. T.; Blake, D.; Bristow, T.; Fonda, M.; Pires, A.

    2015-01-01

    Data management and data publication are becoming increasingly important components of researchers' workflows. The complexity of managing data, publishing data online, and archiving data has not decreased significantly even as computing access and power has greatly increased. The Open Data Repository's Data Publisher software strives to make data archiving, management, and publication a standard part of a researcher's workflow using simple, web-based tools and commodity server hardware. The publication engine allows for uploading, searching, and display of data with graphing capabilities and downloadable files. Access is controlled through a robust permissions system that can control publication at the field level and can be granted to the general public or protected so that only registered users at various permission levels receive access. Data Publisher also allows researchers to subscribe to meta-data standards through a plugin system, embargo data publication at their discretion, and collaborate with other researchers through various levels of data sharing. As the software matures, semantic data standards will be implemented to facilitate machine reading of data and each database will provide a REST application programming interface for programmatic access. Additionally, a citation system will allow snapshots of any data set to be archived and cited for publication while the data itself can remain living and continuously evolve beyond the snapshot date. The software runs on a traditional LAMP (Linux, Apache, MySQL, PHP) server and is available on GitHub (http://github.com/opendatarepository) under a GPLv2 open source license. The goal of the Open Data Repository is to lower the cost and training barrier to entry so that any researcher can easily publish their data and ensure it is archived for posterity.

  12. PUBLISHER'S ANNOUNCEMENT: Refereeing standards

    Science.gov (United States)

    Bender, C.; Scriven, N.

    2004-08-01

    submitting papers to J. Phys. A. In addition to the office staff, the journal has two assets of enormous value. First, there is the pool of referees. It is impossible to have an academic system based on publication of original ideas without peer review. I believe that when one submits papers for publication in journals, one assumes a moral responsibility to participate in the peer review system. A published author has an obligation to referee papers and thereby to keep the scientific quality of published work as high as possible. In general, referees' reports that are submitted to scientific journals vary in quality. Some referees reply quickly and write detailed, careful, and helpful reports; other referees write cursory reports that are not so useful. Over the years J. Phys. A has amassed an amazingly talented and sedulous group of referees. I thank the referees of the journal who have worked so hard and have contributed their time without any expectation of financial compensation. I emphasize that the office tries hard to avoid overburdening referees. Sending back a quick and detailed response does not increase the likelihood of the referee receiving another paper to evaluate. (A number of people have told me that they sit on and delay the refereeing of papers in hopes of reducing the number of papers per year that they receive to referee. The office at J. Phys. A works to make this sort of strategy unnecessary.) The second asset is the Board of Editors and the Advisory Panel. For some journals membership on the Board of Editors is a sinecure. However, the 37 members of the Board of Editors and the 50 members of the Advisory Panel of J. Phys. A have been chosen not only because they are distinguished mathematical physicists but also because of their demonstrated willingness to work hard. 
Six members of the Board of Editors are designated as Section Editors: H Nishimori, Tokyo Institute of Technology, Japan (Statistical Physics); P Grassberger, Bergische Universität GH

  13. Genealogical Information Search by Using Parent Bidirectional Breadth Algorithm and Rule Based Relationship

    CERN Document Server

    Nuanmeesri, Sumitra; Meesad, Payung

    2010-01-01

    Genealogical information is among the best historical resources for cultural study and cultural heritage. Genealogical research generally presents family information and depicts it as a tree diagram. This paper presents the Parent Bidirectional Breadth Algorithm (PBBA) to find the consanguine relationship between two persons. In addition, the paper utilizes a rule-based system in order to identify consanguine relationships. The study reveals that PBBA is fast at solving the genealogical information search problem and that the Rule Based Relationship provides more benefits in blood relationship identification.

  14. The Robustness of Content-Based Search in Hierarchical Peer to Peer Networks

    OpenAIRE

    Renda, Maria Elena; Callan, Jamie

    2004-01-01

    Hierarchical Peer to Peer (P2P) networks with multiple directory services have quickly become one of the dominant architectures for large-scale file sharing due to their effectiveness and efficiency. Recent research argues that such networks are also an effective method of providing large-scale content-based federated search of text-based digital libraries. In both cases the directory services are critical resources that are subject to attack or failure, but the latter architecture may be par...

  15. Predatory Search Strategy Based on Swarm Intelligence for Continuous Optimization Problems

    OpenAIRE

    Wang, J. W.; H. F. Wang; Ip, W. H.; Furuta, K; Kanno, T.; Zhang, W. J.

    2013-01-01

    We propose an approach to solve continuous variable optimization problems. The approach is based on the integration of predatory search strategy (PSS) and swarm intelligence technique. The integration is further based on two newly defined concepts proposed for the PSS, namely, “restriction” and “neighborhood,” and takes the particle swarm optimization (PSO) algorithm as the local optimizer. The PSS is for the switch of exploitation and exploration (in particular by the adjustment of neighborh...

  16. How Users Search the Library from a Single Search Box

    Science.gov (United States)

    Lown, Cory; Sierra, Tito; Boyer, Josh

    2013-01-01

    Academic libraries are turning increasingly to unified search solutions to simplify search and discovery of library resources. Unfortunately, very little research has been published on library user search behavior in single search box environments. This study examines how users search a large public university library using a prominent, single…

  17. Is Internet search better than structured instruction for web-based health education?

    Science.gov (United States)

    Finkelstein, Joseph; Bedra, McKenzie

    2013-01-01

    The Internet provides access to vast amounts of comprehensive information on any health-related subject. Patients increasingly use this information for health education, using a search engine to identify educational materials. An alternative approach to health education via the Internet is based on using a verified web site which provides structured interactive education guided by adult learning theories. A comparison of these two approaches in older patients had not been performed systematically. The aim of this study was to compare the efficacy of a web-based computer-assisted education (CO-ED) system versus searching the Internet for learning about hypertension. Sixty hypertensive older adults (age 45+) were randomized into control or intervention groups. The control patients spent 30 to 40 minutes searching the Internet using a search engine for information about hypertension. The intervention patients spent 30 to 40 minutes using the CO-ED system, which provided computer-assisted instruction about major hypertension topics. Analysis of pre- and post-test knowledge scores indicated a significant improvement among CO-ED users (14.6%) as opposed to Internet users (2%). Additionally, patients using the CO-ED program rated their learning experience more positively than those using the Internet. PMID:23823377

  18. Electronic publishing and bibliometrics

    OpenAIRE

    Moed, H.F.

    2009-01-01

    This lecture deals with the relationships between electronic publishing and bibliometrics, the quantitative study of scientific-scholarly texts. It gives an overview of the effects of recent trends in electronic publishing upon the availability of bibliographic or bibliometric databases, indexing of publications, the construction of new bibliometric indicators, and upon the research topics in bibliometrics and quantitative studies of science. In fact, electronic publishing constitutes an impo...

  19. Publishing studies: what else?

    Directory of Open Access Journals (Sweden)

    Bertrand Legendre

    2015-07-01

    Full Text Available This paper intends to reposition “publishing studies” in the long process that runs from the beginnings of book history to current research on cultural industries. It raises questions about interdisciplinarity and the possibility of considering publishing independently of other sectors of media and cultural offerings. Publishing is now included in a large range of industries, and at the same time, analyses tend to become more and more segmented according to production sectors and scientific fields. In addition to the problems this double movement creates from a professional point of view, it requires a questioning of the concept of “publishing studies”.

  20. Budget-Impact Analyses: A Critical Review of Published Studies

    OpenAIRE

    Ewa Orlewska; Laszlo Gulcsi

    2009-01-01

    This article reviews budget-impact analyses (BIAs) published to date in peer-reviewed bio-medical journals with reference to current best practice, and discusses where future research needs to be directed. Published BIAs were identified by conducting a computerized search on PubMed using the search term 'budget impact analysis'. The years covered by the search included January 2000 through November 2008. Only studies (i) named by authors as BIAs and (ii) predicting financial consequences of a...

  1. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    OpenAIRE

    Jie-sheng Wang; Shu-xia Li; Jiang-di Song

    2015-01-01

    In order to improve convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving the function optimization problems, a new improved cuckoo search algorithm based on the repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird’s nests location. In order to select a reasonable repeat-...

  2. Parcels and Land Ownership, Parcel data based off Landnet and survey grade GPS, Published in unknown, 1:7200 (1in=600ft) scale, Bayfield County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Parcels and Land Ownership dataset, published at 1:7200 (1in=600ft) scale, was produced all or in part from Published Reports/Deeds information as of unknown....

  3. Television Transmitter Locations, parcel data base attribute; building type, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Television Transmitter Locations dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as of...

  4. Radio Transmitters and Tower Locations, parcel data base attribute; building type, Published in 2006, 1:1200 (1in=100ft) scale, Washoe County.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Radio Transmitters and Tower Locations dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Published Reports/Deeds information as...

  5. Predatory Search Strategy Based on Swarm Intelligence for Continuous Optimization Problems

    Directory of Open Access Journals (Sweden)

    J. W. Wang

    2013-01-01

    Full Text Available We propose an approach to solve continuous variable optimization problems. The approach is based on the integration of predatory search strategy (PSS) and a swarm intelligence technique. The integration is further based on two newly defined concepts proposed for the PSS, namely, “restriction” and “neighborhood,” and takes the particle swarm optimization (PSO) algorithm as the local optimizer. The PSS handles the switch between exploitation and exploration (in particular by the adjustment of the neighborhood), while the swarm intelligence technique searches the neighborhood. The proposed approach is thus named PSS-PSO. Five benchmarks are taken as test functions (including both unimodal and multimodal ones) to examine the effectiveness of PSS-PSO against seven well-known algorithms. The test results show that the proposed PSS-PSO is superior to all seven algorithms.

  6. Utilization of Tabu search heuristic rules in sampling-based motion planning

    Science.gov (United States)

    Khaksar, Weria; Hong, Tang Sai; Sahari, Khairul Salleh Mohamed; Khaksar, Mansoor

    2015-05-01

    Path planning in unknown environments is one of the most challenging research areas in robotics. In this class of path planning, the robot acquires the information from its sensory system. Sampling-based path planning is one of the famous approaches with low memory and computational requirements that has been studied by many researchers during the past few decades. We propose a sampling-based algorithm for path planning in unknown environments using Tabu search. The Tabu search component of the proposed method guides the sampling to find the samples in the most promising areas and makes the sampling procedure more intelligent. The simulation results show the efficient performance of the proposed approach in different types of environments. We also compare the performance of the algorithm with some of the well-known path planning approaches, including Bug1, Bug2, PRM, RRT and the Visibility Graph. The comparison results support the claim of superiority of the proposed algorithm.

  7. Local search methods based on variable focusing for random K-satisfiability.

    Science.gov (United States)

    Lemoy, Rémi; Alava, Mikko; Aurell, Erik

    2015-01-01

    We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly at random or with a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed. PMID:25679737
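    A minimal sketch of uniform focusing of the kind the abstract describes (pick a random unsatisfied clause, flip a random variable in it), without the Metropolis noise parameter or the biased variable-selection variants.

```python
import random

def unsat_clauses(clauses, assign):
    """Return the clauses not satisfied by the assignment.
    Literals are signed ints: 3 means x3, -3 means NOT x3."""
    return [c for c in clauses
            if not any((lit > 0) == assign[abs(lit)] for lit in c)]

def focused_search(clauses, n_vars, max_flips=10000, seed=0):
    """Focused local search sketch: repeatedly pick a random
    unsatisfied clause and flip a uniformly random variable in it."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assign)
        if not unsat:
            return assign  # all clauses satisfied
        clause = rng.choice(unsat)
        var = abs(rng.choice(clause))
        assign[var] = not assign[var]
    return None  # give up after the flip budget
```

    The biased variants in the paper would replace the uniform `rng.choice(clause)` with a preference for variables that appear in several unsatisfied clauses.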

  8. A New Hardware Architecture for Parallel Shortest Path Searching Processor Based-on FPGA Technology

    Directory of Open Access Journals (Sweden)

    Jassim M. Abdul-Jabbar

    2012-09-01

    Full Text Available In this paper, a new FPGA-based parallel processor for shortest-path searching in OSPF networks is designed and implemented. The processor design is based on a parallel searching algorithm that overcomes the long execution time of the conventional Dijkstra algorithm, which is used originally in the OSPF network protocol. Multiple shortest links can be found simultaneously, and the number of execution iterations of the processing phase is reduced relative to the Dijkstra algorithm. Depending on the FPGA chip resources, the processor is expanded to be able to process an OSPF area with 128 routers. High speed-up factors of the proposed processor against sequential Dijkstra execution times, in the range 76.77-103.45, are achieved.
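    For reference, the sequential Dijkstra baseline that such a parallel processor is designed to outperform; this is a standard textbook implementation, not the FPGA design.

```python
import heapq

def dijkstra(adj, src):
    """Sequential Dijkstra shortest-path search.
    adj maps node -> list of (neighbor, link cost) pairs."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
# dist["C"] is 3 via A->B->C
```

    The settle-one-node-per-iteration structure of this loop is exactly what the proposed hardware parallelizes by settling multiple shortest links simultaneously.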

  9. SHOP: receptor-based scaffold hopping by GRID-based similarity searches

    DEFF Research Database (Denmark)

    Bergmann, Rikke; Liljefors, Tommy; Sørensen, Morten D;

    2009-01-01

    find known active CDK2 scaffolds in a database. Additionally, SHOP was used for suggesting new inhibitors of p38 MAP kinase. Four p38 complexes were used to perform six scaffold searches. Several new scaffolds were suggested, and the resulting compounds were successfully docked into the query proteins....

  10. The Design of a Semantic Search Engine based on “Visual Content”

    OpenAIRE

    Ambreen Anjum, Muhammad Nabeel Aslam, Rehana Sharif

    2013-01-01

    Digital contents like images are becoming abundant on the World Wide Web (WWW). Searching for images poses numerous challenges; among the few approaches that have given encouraging results is automatic image annotation (AIA) via digital content processing. There is a growing awareness of content-based image retrieval (CB-IR). In CB-IR, a system first filters images depending upon their content; it can then provide superior indexing and give more accurate and efficient output. In...

  11. A Content-based search engine on medical images for telemedicine

    OpenAIRE

    Lee, CH; Ng, V; Cheung, DWL

    1997-01-01

    Retrieving images by content and forming visual queries are important functionality of an image database system. Using textual descriptions to specify queries on image content is another important component of content-based search. The authors describe a medical image database system MIQS which supports visual queries such as query by example and query by sketch. In addition, it supports textual queries on spatial relationships between the objects of an image. MIQS is designed as a client-ser...

  12. Direct Search Based Strategy for Obstacle Avoidance of a Redundant Manipulator

    OpenAIRE

    Cornel Secară; Dan Dumitriu

    2010-01-01

    This paper presents an iterative direct search based strategy for redundancy resolution. The end-effector of the redundant manipulator achieves the imposed task of following the contour of a curve, while fulfilling two other performance criteria: obstacle avoidance and minimization of the sum of joint displacements. The objective function to minimize is the sum of joint displacements, while the obstacle avoidance and end-effector task are expressed as non-linear constraints. The proposed direct...

  13. Ontology-Aided vs. Keyword-Based Web Searches: A Comparative User Study

    OpenAIRE

    Kamel, Magdi; Lee, Ann; Powers, Ed

    2007-01-01

    Ontologies are formal explicit description of concepts in a domain of discourse, properties of these concepts, and restrictions on these properties that are specified by semantics that follow the “rules” of the domain of knowledge. As such, ontologies would be extremely useful as knowledge bases for an application attempting to add context to a particular Web search term. This paper describes such an application and reports the results of a user study designed to compare the effec...

  14. Home-Explorer: Ontology-Based Physical Artifact Search and Hidden Object Detection System

    OpenAIRE

    Bin Guo; Satoru Satake; Michita Imai

    2008-01-01

    A new system named Home-Explorer that searches and finds physical artifacts in a smart indoor environment is proposed. The view on which it is based is artifact-centered and uses sensors attached to the everyday artifacts (called smart objects) in the real world. This paper makes two main contributions: First, it addresses the robustness of the embedded sensors, which is seldom discussed in previous smart artifact research. Because sensors may sometimes be broken or fail to work under certai...

  15. A Rule-Based Local Search Algorithm for General Shift Design Problems in Airport Ground Handling

    DEFF Research Database (Denmark)

    Clausen, Tommy

    We consider a generalized version of the shift design problem where shifts are created to cover a multiskilled demand and fit the parameters of the workforce. We present a collection of constraints and objectives for the generalized shift design problem. A local search solution framework with multiple neighborhoods and a loosely coupled rule engine based on simulated annealing is presented. Computational experiments on real-life data from various airport ground handling organizations show the performance and flexibility of the proposed algorithm.

  16. A Factorial Experiment on Scalability of Search-based Software Testing

    OpenAIRE

    Mehrmand, Arash

    2009-01-01

Software testing is an expensive process which is vital in industry. Constructing the test data in software testing requires the major cost, and knowing which method to use to generate the test data is very important. This paper discusses the performance of search-based algorithms (preferably genetic algorithms) versus random testing in software test-data generation. A factorial experiment is designed so that we have more than one factor for each experiment we make. Although ...

  17. Block-based disparity estimation by partial finite ridgelet distortion search (PFRDS)

    Science.gov (United States)

    Eslami, Mohammad; Torkamani-Azar, Farah

    2010-01-01

In stereo vision applications, computing the disparity map is an important issue. The performance of different approaches depends entirely on the similarity measure employed. In this paper the finite ridgelet transform is used to define an edge-sensitive block distortion similarity measure. Simulation results show that it outperforms the conventional criteria and is less sensitive to noise, especially at the edge sets of images. To speed up computation, a new partial search algorithm based on the energy conservation property of the FRIT is proposed.

  18. Genealogical Information Search by Using Parent Bidirectional Breadth Algorithm and Rule Based Relationship

    OpenAIRE

    Sumitra Nuanmeesri; Chanasak Baitiang,; Phayung Meesad

    2009-01-01

Genealogical information is among the best historical resources for the study of culture and cultural heritage. Genealogical research generally presents family information and depicts it as a tree diagram. This paper presents the Parent Bidirectional Breadth Algorithm (PBBA) to find the consanguine relationship between two persons. In addition, the paper utilizes a rule-based system in order to identify the consanguine relationship. The study reveals that PBBA is fast in solving the genealogical information search problem and...
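The parent-link traversal at the heart of an algorithm like PBBA can be illustrated with upward breadth-first searches from both persons, whose ancestor sets are then intersected. This is a simplified sketch, not the paper's actual PBBA; the `parents` map and all names are invented:

```python
from collections import deque

def common_ancestors(parents, a, b):
    """Collect each person's ancestors by BFS over parent links, then
    intersect the two sets (a simplified stand-in for a bidirectional
    parent search); 'parents' maps person -> list of parents."""
    def ancestors(start):
        seen = {start}
        queue = deque([start])
        while queue:
            person = queue.popleft()
            for p in parents.get(person, []):
                if p not in seen:
                    seen.add(p)
                    queue.append(p)
        return seen
    return ancestors(a) & ancestors(b)

# Hypothetical family: alice and bob share the parent carol.
family = {"alice": ["carol", "dave"], "bob": ["carol", "erin"]}
print(common_ancestors(family, "alice", "bob"))  # {'carol'}
```

A true bidirectional variant would expand the two frontiers level by level and stop at the first intersection, bounding the work by the distance to the nearest common ancestor.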

  19. Architecture for Knowledge-Based and Federated Search of Online Clinical Evidence

    OpenAIRE

    Coiera, Enrico; Walther, Martin; Nguyen, Ken; Lovell, Nigel H.

    2005-01-01

    Background It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. Objectives The...

  20. Knowledge search for new product development: a multi-agent based methodology

    OpenAIRE

    Jian, Guo

    2011-01-01

Manufacturers are the leaders in developing new products to drive productivity. Higher productivity means more products based on the same materials, energy, labour, and capital. New product development plays a critical role in the success of manufacturing firms. Activities in the product development process depend on the knowledge of new product development team members. Increasingly, many enterprises consider effective knowledge search to be a source of competitive advantage. Th...

  1. An Improved ZMP-Based CPG Model of Bipedal Robot Walking Searched by SaDE

    OpenAIRE

    Yu, H. F.; Fung, E. H. K.; Jing, X. J.

    2014-01-01

This paper proposes a method to improve the walking behavior of a bipedal robot with adjustable step length. The objectives of this paper are threefold. (1) The Genetic Algorithm Optimized Fourier Series Formulation (GAOFSF) is modified to improve its performance. (2) The Self-adaptive Differential Evolutionary Algorithm (SaDE) is applied to search for a feasible walking gait. (3) An efficient method is proposed for adjusting step length based on the modified central pattern generator (CPG) model. The GAOFSF is ...

  2. A Product Feature-based User-Centric Ranking Model for E-Commerce Search

    OpenAIRE

    Ben Jabeur, Lamjed; Soulier, Laure; Tamine, Lynda; Mousset, Paul

    2016-01-01

During the online shopping process, users search for interesting products in order to quickly access those that fit their needs among a long tail of similar or closely related products. Our contribution addresses head queries that are frequently submitted on e-commerce Web sites. Head queries usually target featured products with several variations, accessories, and complementary products. We present in this paper a product feature-based user-centric model for ...

  3. Characterization of single layer anti-reflective coatings for bolometer-based rare event searches

    CERN Document Server

    Hansen, E V

    2016-01-01

    A photon signal added to the existing phonon signal can powerfully reduce backgrounds for bolometer-based rare event searches. Anti-reflective coatings can significantly increase the performance of the secondary light sensing bolometer in these experiments. Coatings of SiO2, HfO2, and TiO2 on Ge and Si were fabricated and characterized at room temperature and all angles of incidence.

  4. PSO-Based Support Vector Machine with Cuckoo Search Technique for Clinical Disease Diagnoses

    OpenAIRE

    Xiaoyong Liu; Hui Fu

    2014-01-01

Disease diagnosis is conducted with a machine learning method. We have proposed a novel machine learning method that hybridizes support vector machines (SVM), particle swarm optimization (PSO), and cuckoo search (CS). The new method consists of two stages: first, a CS-based approach for SVM parameter optimization is developed to find better initial parameters of the kernel function, and then PSO is applied to continue SVM training and find the best SVM parameters. Experimental results ...
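The PSO stage can be sketched in isolation. The following minimal one-dimensional PSO shows the velocity/position update the method relies on; the toy objective, bounds, and parameter values are invented, and none of the SVM or cuckoo-search machinery is included:

```python
import random

def pso(cost, lo, hi, n=20, iters=100, w=0.5, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimisation: each particle is pulled
    toward its personal best and the swarm's global best position."""
    random.seed(seed)
    xs = [random.uniform(lo, hi) for _ in range(n)]   # positions
    vs = [0.0] * n                                    # velocities
    pbest = xs[:]                                     # personal bests
    gbest = min(xs, key=cost)                         # global best
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to bounds
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=cost)
    return gbest

# Toy objective: (x - 3)^2 has its minimum at x = 3.
best = pso(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(round(best, 3))
```

In the paper's setting the "position" would be a vector of SVM kernel parameters and the cost a cross-validation error, rather than this toy quadratic.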

  5. THE TYPES OF PUBLISHING SLOGANS

    Directory of Open Access Journals (Sweden)

    Ryzhov Konstantin Germanovich

    2015-03-01

Full Text Available The author of the article focuses his attention on publishing slogans which are posted on the official websites of 100 present-day Russian publishing houses and have not yet been studied in the special literature. The author has developed his own classification of publishing slogans based on the results of analysis and considering current scientific views on the classification of slogans. The examined items are classified as autonomous or text-dependent according to their interrelationship with the advertising text; marketable, corporative or mixed according to the presentation subject; rational, emotional or complex depending on the method of influence upon the recipient; slogan-presentation, slogan-assurance, slogan-identifier, slogan-appraisal or slogan-appeal depending on the communicative strategy; slogans consisting of one sentence or of two or more sentences; and Russian or foreign ones. The analysis of all the kinds of slogans present in the material allowed the author to determine the dominant features of the Russian publishing slogan: it is a sentence autonomous in relation to the advertising text; it presents the publisher's output, influences the recipient emotionally, actualizes the communicative strategy of presenting the publishing house's distinguishing features, gives assurance to the target audience and distinguishes the advertised subject from its competitors.

  6. A neotropical Miocene pollen database employing image-based search and semantic modeling1

    Science.gov (United States)

    Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren

    2014-01-01

    • Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648

  7. Trade Publishing: A Report from the Front.

    Science.gov (United States)

    Fister, Barbara

    2001-01-01

    Reports on the current condition of trade publishing and its future prospects based on interviews with editors, publishers, agents, and others. Discusses academic libraries and the future of trade publishing, including questions relating to electronic books, intellectual property, and social and economic benefits of sharing information…

  8. Personal publishing and media literacy

    OpenAIRE

    2005-01-01

Based on a discussion of the terms “digital competence” and “media competence”, this paper presents challenges in designing virtual learning arenas based on principles known from weblogs and wikis. Both are personal publishing forms that seem promising in an educational context. The paper outlines a learning environment designed to make it possible for individual users to organize their own learning environments and to utilize web-based forms of personal communication...

  9. Extended-search, Bézier Curve-based Lane Detection and Reconstruction System for an Intelligent Vehicle

    Directory of Open Access Journals (Sweden)

    Xiaoyun Huang

    2015-09-01

Full Text Available To improve the real-time performance and detection rate of a Lane Detection and Reconstruction (LDR) system, an extended-search-based lane detection method and a Bézier curve-based lane reconstruction algorithm are proposed in this paper. The extended-search-based lane detection method is designed to search boundary blocks from the initial position, in an upwards direction and along the lane, with small search areas including continuous search, discontinuous search and bending search in order to detect different lane boundaries. The Bézier curve-based lane reconstruction algorithm is employed to describe a wide range of lane boundary forms with comparatively simple expressions. In addition, two Bézier curves are adopted to reconstruct the lanes’ outer boundaries with large curvature variation. The lane detection and reconstruction algorithm — including initial-blocks’ determining, extended search, binarization processing and lane boundaries’ fitting in different scenarios — is verified in road tests. The results show that this algorithm is robust against different shadows and illumination variations; the average processing time per frame is 13 ms. Significantly, it presents an 88.6% high detection rate on curved lanes with large or variable curvatures, where the accident rate is higher than that of straight lanes.
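A Bézier curve of the kind used here for boundary description is evaluated by repeated linear interpolation between consecutive control points (de Casteljau's algorithm). A minimal sketch with invented control points standing in for fitted lane-boundary data:

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] with de
    Casteljau's algorithm: repeatedly interpolate between consecutive
    control points until a single point remains."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Invented cubic "lane boundary": the curve interpolates its endpoints.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(bezier_point(ctrl, 0.5))  # (2.0, 1.5)
```

Joining two such cubics end to end, as the paper does for outer boundaries, lets a fit follow large curvature changes that a single low-degree curve cannot.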

  10. Web-Based Undergraduate Chemistry Problem-Solving: The Interplay of Task Performance, Domain Knowledge and Web-Searching Strategies

    Science.gov (United States)

    She, Hsiao-Ching; Cheng, Meng-Tzu; Li, Ta-Wei; Wang, Chia-Yu; Chiu, Hsin-Tien; Lee, Pei-Zon; Chou, Wen-Chi; Chuang, Ming-Hua

    2012-01-01

    This study investigates the effect of Web-based Chemistry Problem-Solving, with the attributes of Web-searching and problem-solving scaffolds, on undergraduate students' problem-solving task performance. In addition, the nature and extent of Web-searching strategies students used and its correlation with task performance and domain knowledge also…

  11. A Comparison of Multi-Parametric Programming, Mixed-Integer Programming, Gradient Descent Based, and the Embedding Approach on Four Published Hybrid Optimal Control Examples

    CERN Document Server

    Meyer, Richard; DeCarlo, Raymond A

    2012-01-01

    This paper compares the embedding approach for solving hybrid optimal control problems to multi-parameter programming, mixed-integer programming, and gradient-descent based methods in the context of four published examples. The four examples include a spring-mass system, moving-target tracking for a mobile robot, two-tank filling, and a DC-DC boost converter. Numerical advantages of the embedding approach are set forth and validated for each example: significantly faster solution time, no ad hoc assumptions (such as predetermined mode sequences) or control models, lower performance index costs, and algorithm convergence when other methods fail. Specific (theoretical) advantages of the embedding approach over the other methods are also described: guaranteed existence of a solution under mild conditions, convexity of the embedded optimization problem solvable with traditional techniques such as sequential quadratic programming with no need for any mixed-integer programming, applicability to nonlinear systems, e...

  12. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    Science.gov (United States)

    Bagis, Aytekin

    2008-01-01

This paper presents an approach to fuzzy rule base design using a tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve the structure and the parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for the determination of fuzzy rule base parameters, leads to a significant improvement in the performance of the model. To demonstrate the effectiveness of the presented method, several numerical examples given in the literature are examined. The results obtained by means of the identified fuzzy rule bases are compared with those belonging to other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides an effective modeling procedure for fuzzy rule base design in the modeling of nonlinear or complex systems. PMID:17945233
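The core tabu search loop (keep a short-term memory of visited solutions, always move to the best non-tabu neighbor, remember the best solution seen) can be sketched generically. The toy integer minimization below is illustrative only and contains nothing fuzzy-rule-specific:

```python
def tabu_search(cost, start, neighbors, iters=100, tenure=5):
    """Generic tabu search skeleton: move to the cheapest neighbor that
    is not in the short-term tabu memory, and track the best solution."""
    current = best = start
    tabu = [start]
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)  # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy problem: minimise (x - 7)^2 over the integers, neighbors are x +/- 1.
print(tabu_search(lambda x: (x - 7) ** 2, 0, lambda x: [x - 1, x + 1]))  # 7
```

The tabu list is what lets the search climb out of local optima: even after reaching the minimum it keeps moving, but the best-so-far record preserves the answer.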

  13. Data Sharing & Publishing at Nature Publishing Group

    Science.gov (United States)

    VanDecar, J. C.; Hrynaszkiewicz, I.; Hufton, A. L.

    2015-12-01

In recent years, the research community has come to recognize that upon-request data sharing has important limitations1,2. The Nature-titled journals feel that researchers have a duty to share data without undue qualifications, in a manner that allows others to replicate and build upon their published findings. Historically, the Nature journals have been strong supporters of data deposition in communities with existing data mandates, and have required data sharing upon request in all other cases. To help address some of the limitations of upon-request data sharing, the Nature titles have strengthened their existing data policies and forged a new partnership with Scientific Data, to promote wider data sharing in discoverable, citeable and reusable forms, and to ensure that scientists get appropriate credit for sharing3. Scientific Data is a new peer-reviewed journal for descriptions of research datasets, which works with a wide range of public data repositories4. Articles at Scientific Data may either expand on research publications at other journals or may be used to publish new datasets. The Nature Publishing Group has also signed the Joint Declaration of Data Citation Principles5, and Scientific Data is our first journal to include formal data citations. We are currently in the process of adding data citation support to our various journals. 1 Wicherts, J. M., Borsboom, D., Kats, J. & Molenaar, D. The poor availability of psychological research data for reanalysis. Am. Psychol. 61, 726-728, doi:10.1037/0003-066x.61.7.726 (2006). 2 Vines, T. H. et al. Mandated data archiving greatly improves access to research data. FASEB J. 27, 1304-1308, doi:10.1096/fj.12-218164 (2013). 3 Data-access practices strengthened. Nature 515, 312, doi:10.1038/515312a (2014). 4 More bang for your byte. Sci. Data 1, 140010, doi:10.1038/sdata.2014.10 (2014). 5 Data Citation Synthesis Group: Joint Declaration of Data Citation Principles. (FORCE11, San Diego, CA, 2014).

  14. Internet-based search of randomised trials relevant to mental health originating in the Arab world

    Directory of Open Access Journals (Sweden)

    Adams Clive E

    2005-07-01

Full Text Available Background The internet is becoming a widely used source of accessing medical research through various on-line databases. This instant access to information is of benefit to busy clinicians and service users around the world. The population of the Arab world is comparable to that of the United States, yet it is widely believed to have a greatly contrasting output of randomised controlled trials related to mental health. This study was designed to investigate the existence of such research in the Arab world and also to investigate the availability of this research on-line. Methods Survey of findings from three internet-based potential sources of randomised trials originating from the Arab world and relevant to mental health care. Results A manual search of an Arabic online current contents service identified 3 studies; MEDLINE, EMBASE, and PsycINFO searches identified only 1 study; and a manual search of a specifically indexed, study-based mental health database, PsiTri, revealed 27 trials. Conclusion There genuinely seem to be few trials from the Arab world and accessing these on-line was problematic. Replication of some studies that guide psychiatric/psychological practice in the Arab world would seem prudent.

  15. Image search engine with selective filtering and feature-element-based classification

    Science.gov (United States)

    Li, Qing; Zhang, Yujin; Dai, Shengyang

    2001-12-01

With the growth of the Internet and of storage capability in recent years, images have become a widespread information format on the World Wide Web. However, it has become increasingly hard to search for images of interest, and an effective image search engine for the WWW needs to be developed. We propose in this paper a selective filtering process and a novel approach to image classification based on feature elements in the image search engine we developed for the WWW. First, a selective filtering process is embedded in a general web crawler to filter out meaningless images in GIF format. Two parameters that can be obtained easily are used in the filtering process. Our classification approach first extracts feature elements from images instead of feature vectors. Compared with feature vectors, feature elements can better capture the visual meaning of the image according to the subjective perception of human beings. Unlike traditional image classification methods, our approach based on feature elements does not calculate the distance between two vectors in the feature space, but instead tries to find associations between feature elements and the class attribute of the image. Experiments are presented to show the efficiency of the proposed approach.

  16. Elearning and digital publishing

    CERN Document Server

    Ching, Hsianghoo Steve; Mc Naught, Carmel

    2006-01-01

“eLearning and Digital Publishing” will occupy a unique niche in the literature accessed by library and publishing specialists, and by university teachers and planners. It examines the interfaces between the work done by four groups of university staff who have in the past been quite separate from, or only marginally related to, each other - library staff, university teachers, university policy makers, and staff who work in university publishing presses. All four groups are directly and intimately connected with the main functions of universities - the creation, management and dissemination

  17. Handling Conflicts in Depth-First Search for LTL Tableau to Debug Compliance Based Languages

    Directory of Open Access Journals (Sweden)

    Francois Hantry

    2011-09-01

    Full Text Available Providing adequate tools to tackle the problem of inconsistent compliance rules is a critical research topic. This problem is of paramount importance to achieve automatic support for early declarative design and to support evolution of rules in contract-based or service-based systems. In this paper we investigate the problem of extracting temporal unsatisfiable cores in order to detect the inconsistent part of a specification. We extend conflict-driven SAT-solver to provide a new conflict-driven depth-first-search solver for temporal logic. We use this solver to compute LTL unsatisfiable cores without re-exploring the history of the solver.

  18. Handling Conflicts in Depth-First Search for LTL Tableau to Debug Compliance Based Languages

    CERN Document Server

    Hantry, Francois; 10.4204/EPTCS.68.5

    2011-01-01

    Providing adequate tools to tackle the problem of inconsistent compliance rules is a critical research topic. This problem is of paramount importance to achieve automatic support for early declarative design and to support evolution of rules in contract-based or service-based systems. In this paper we investigate the problem of extracting temporal unsatisfiable cores in order to detect the inconsistent part of a specification. We extend conflict-driven SAT-solver to provide a new conflict-driven depth-first-search solver for temporal logic. We use this solver to compute LTL unsatisfiable cores without re-exploring the history of the solver.

19. Case and Relation (CARE) based Page Rank Algorithm for Semantic Web Search Engines

    Directory of Open Access Journals (Sweden)

    N. Preethi

    2012-05-01

Full Text Available Web information retrieval deals with techniques for finding relevant web pages for a given query from a collection of documents. Search engines have become the most helpful tool for obtaining useful information from the Internet. The next-generation Web architecture, represented by the Semantic Web, provides a layered architecture that allows data to be reused across applications. The proposed architecture uses a hybrid methodology named the Case and Relation (CARE) based Page Rank algorithm, which uses past problem-solving experience maintained in the case base to form best-matching relations and then uses them for generating graphs and spanning forests to assign a relevance score to the pages.

  20. MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2013-10-01

Full Text Available Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks according to power iteration, and has high computational complexity. This paper models the HITS algorithm with the Monte Carlo method, and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that the Monte Carlo based approximate computation of the HITS ranking greatly reduces computing resources while maintaining high accuracy, and is significantly better than related works.
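The power iteration that the Monte Carlo method approximates can be stated compactly. A plain HITS sketch on a tiny invented graph (the edge list and iteration count are arbitrary, and no Monte Carlo sampling is shown):

```python
def hits(edges, nodes, iters=50):
    """Plain power-iteration HITS: a node is a good authority if pointed
    to by good hubs, and a good hub if it points to good authorities.
    Scores are L2-normalised after every update."""
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return hub, auth

# Tiny invented graph: two pages link to "c", which links on to "d".
edges = [("a", "c"), ("b", "c"), ("c", "d")]
hub, auth = hits(edges, ["a", "b", "c", "d"])
print(max(auth, key=auth.get))  # c
```

Each iteration touches every edge twice, which is the cost a Monte Carlo estimate of the same scores tries to avoid on large networks.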

  1. Neural Based Tabu Search method for solving unit commitment problem with cooling-banking constraints

    Directory of Open Access Journals (Sweden)

    Rajan Asir Christober Gnanakkan Charles

    2009-01-01

Full Text Available This paper presents a new approach to solving the short-term unit commitment problem (UCP) using Neural Based Tabu Search (NBTS) with cooling and banking constraints. The objective of this paper is to find a generation schedule such that the total operating cost is minimized when subjected to a variety of constraints. This also means that it is desirable to find the optimal unit commitment in the power system for the next H hours. A 7-unit utility power system in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different IEEE test systems consisting of 10, 26 and 34 units. Numerical results are shown comparing the cost solutions obtained using the Tabu Search (TS), Dynamic Programming (DP) and Lagrangian Relaxation (LR) methods in reaching a proper unit commitment.

  2. Based on A* and Q-Learning Search and Rescue Robot Navigation

    Directory of Open Access Journals (Sweden)

    Ruiyuan Fan

    2012-11-01

Full Text Available For search and rescue robot navigation in an unknown environment, a bionic self-learning algorithm based on A* and Q-Learning is put forward. The algorithm utilizes a Growing Self-organizing Map (GSOM) to build a topological cognitive map of the environment. The heuristic A* algorithm is used for global path planning. When the local environment changes, Q-Learning is used for local path planning. The robot thereby acquires self-learning skills through study and training, as a human or animal does, and finds a free path from the initial state to the target state in the unknown environment. Theoretical analysis proves the validity of the method. The simulation results show that the robot obtains the navigation capability.
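The Q-Learning component rests on the standard tabular update Q(s,a) ← Q(s,a) + α(r + γ·max Q(s',·) − Q(s,a)). The corridor world below is an invented stand-in for the paper's GSOM-based map, with made-up state count and learning parameters:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning on a toy 1-D corridor: states 0..n-1, actions
    0 (left) / 1 (right), reward 1 on reaching the last state.
    Update rule: Q(s,a) += alpha * (r + gamma * max(Q(s')) - Q(s,a))."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps:          # epsilon-greedy exploration
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# After training, the greedy action in every interior state is "right".
print(all(q[s][1] > q[s][0] for s in range(4)))
```

In the paper's setting the states would be GSOM map nodes and the reward would reflect reaching the rescue target, but the update rule is the same.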

  3. Transmission network expansion planning based on hybridization model of neural networks and harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Ameli

    2012-01-01

Full Text Available Transmission Network Expansion Planning (TNEP) is a basic part of power network planning that determines where, when and how many new transmission lines should be added to the network. The TNEP is thus an optimization problem in which the expansion objectives are optimized. Artificial Intelligence (AI) tools such as Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS) and Artificial Neural Networks (ANNs) are methods used for solving the TNEP problem. Today, by using hybridization models of AI tools, we can solve the TNEP problem for large-scale systems, which shows the effectiveness of such models. In this paper, a new approach based on the hybridization of Probabilistic Neural Networks (PNNs) and the Harmony Search Algorithm (HSA) was used to solve the TNEP problem. Finally, by considering the uncertain role of the load based on a scenario technique, this proposed model was tested on Garver’s 6-bus network.

  4. Towards a Complexity Theory of Randomized Search Heuristics: Ranking-Based Black-Box Complexity

    CERN Document Server

    Doerr, Benjamin

    2011-01-01

Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. A big step forward would be a useful complexity theory for such algorithms. We enrich the two existing black-box complexity notions due to Wegener and other authors by the restriction that not the actual objective values, but only the relative quality of the previously evaluated solutions, may be taken into account by the algorithm. Many randomized search heuristics belong to this class of algorithms. We show that the new ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold.

  5. Target detection method based on supervised saliency map and efficient subwindow search

    Science.gov (United States)

    Liu, Songtao; Jiang, Ning; Liu, Zhenxing

    2015-10-01

In order to realize fast target detection in complex image scenes, a novel method is proposed based on a supervised saliency map and efficient subwindow search. Supervised saliency map generation mainly includes: (1) the original image is segmented with different parameters to obtain multiple segmentation results; (2) regional features are mapped to salient values by a random forest regressor; (3) the saliency map is obtained by fusing the multi-level segmentation results. The efficient subwindow search method is implemented by formulating salient target detection as a maximum saliency density problem and using a branch and bound algorithm to localize the maximum saliency density at the global optimum. The experimental results show that the new method can not only detect the salient region, but also recognize this region to some extent.

  6. A Fast LSF Search Algorithm Based on Interframe Correlation in G.723.1

    Directory of Open Access Journals (Sweden)

    Kulkarni Jaydeep P

    2004-01-01

Full Text Available We explain a time-complexity reduction algorithm that improves the line spectral frequencies (LSF) search procedure on the unit circle for low bit rate speech codecs. The algorithm is based on the strong interframe correlation exhibited by LSFs. The fixed-point C code of ITU-T Recommendation G.723.1, which uses the “real root algorithm”, was modified and the results were verified on an ARM-7TDMI general-purpose RISC processor. The algorithm works for all test vectors provided by the International Telecommunications Union-Telecommunication (ITU-T) as well as for real speech. The average time reduction in the search computation was found to be approximately 20%.

  7. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    Science.gov (United States)

    Daniel, Ebenezer; Anitha, J

    2016-04-01

Unsharp masking techniques are a prominent approach to contrast enhancement. The generalized masking formulation uses a static scale value, which limits the contrast gain. In this paper, we propose Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding, using genetic operators such as adaptive crossover and mutation. First, the proposed contrast enhancement approach is validated quantitatively using BrainWeb and MIAS database images. Later, the conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN). Experimental results are analyzed using various performance metrics, and our OWBM shows improved results as compared with other reported literature. PMID:26945462
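The unsharp masking baseline that OWBM builds on is simply sharpened = original + gain · (original − blurred). A one-dimensional sketch with a fixed (static) gain, the quantity the paper's ECSA is said to tune; the signal, blur kernel, and gain value are all invented for illustration:

```python
def unsharp_mask(signal, gain=1.0):
    """Static-gain unsharp masking on a 1-D signal:
    sharpened = original + gain * (original - blurred),
    using a 3-point moving average (edges clamped) as the blur."""
    n = len(signal)
    blurred = [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]
    return [s + gain * (s - b) for s, b in zip(signal, blurred)]

# A step edge (0 -> 10) is exaggerated: undershoot before, overshoot after.
r = unsharp_mask([0, 0, 10, 10])
print(r)
```

The same formula applies per pixel in 2-D; replacing the fixed `gain` with an optimized, wavelet-domain weighting is what distinguishes approaches like OWBM from this static baseline.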

  8. SearchLight: a freely available web-based quantitative spectral analysis tool (Conference Presentation)

    Science.gov (United States)

    Prabhat, Prashant; Peet, Michael; Erdogan, Turan

    2016-03-01

    In order to design a fluorescence experiment, typically the spectra of a fluorophore and of a filter set are overlaid on a single graph and the spectral overlap is evaluated intuitively. However, in a typical fluorescence imaging system the fluorophores and optical filters are not the only wavelength dependent variables - even the excitation light sources have been changing. For example, LED Light Engines may have a significantly different spectral response compared to the traditional metal-halide lamps. Therefore, for a more accurate assessment of fluorophore-to-filter-set compatibility, all sources of spectral variation should be taken into account simultaneously. Additionally, intuitive or qualitative evaluation of many spectra does not necessarily provide a realistic assessment of the system performance. "SearchLight" is a freely available web-based spectral plotting and analysis tool that can be used to address the need for accurate, quantitative spectral evaluation of fluorescence measurement systems. This tool is available at: http://searchlight.semrock.com/. Based on a detailed mathematical framework [1], SearchLight calculates signal, noise, and signal-to-noise ratio for multiple combinations of fluorophores, filter sets, light sources and detectors. SearchLight allows for qualitative and quantitative evaluation of the compatibility of filter sets with fluorophores, analysis of bleed-through, identification of optimized spectral edge locations for a set of filters under specific experimental conditions, and guidance regarding labeling protocols in multiplexing imaging assays. Entire SearchLight sessions can be shared with colleagues and collaborators and saved for future reference. [1] Anderson, N., Prabhat, P. and Erdogan, T., Spectral Modeling in Fluorescence Microscopy, http://www.semrock.com (2010).

  9. About EBSCO Publishing

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

EBSCO Publishing, headquartered in Ipswich, Massachusetts[1], is an aggregator of premium full-text content. EBSCO Publishing's core business is providing online databases via EBSCOhost to libraries worldwide.

  10. Cellular Phone Towers, Serve as base information for use in GIS systems for general planning, analytical, and research purposes., Published in 2007, 1:24000 (1in=2000ft) scale, Louisiana State University.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cellular Phone Towers dataset, published at 1:24000 (1in=2000ft) scale as of 2007. It is described as 'Serve as base information for use in GIS systems for...

  11. The Book Publishing Industry

    OpenAIRE

    Jean-Paul Simon; Giuditta De Prato

    2012-01-01

    This report offers an in-depth analysis of the major economic developments in the book publishing industry. The analysis integrates data from a statistical report published earlier as part of this project. The report is divided into 4 main parts. Chapter 1, the introduction, puts the sector into an historical perspective. Chapter 2 introduces the markets at a global and regional level; describes some of the major EU markets (France, Germany, Italy, Spain and the United Kingdom). Chapter 3 ana...

  12. PLAGIARISM IN SCIENTIFIC PUBLISHING

    OpenAIRE

    Masic, Izet

    2012-01-01

    Scientific publishing is the ultimate product of scientist work. Number of publications and their quoting are measures of scientist success while unpublished researches are invisible to the scientific community, and as such nonexistent. Researchers in their work rely on their predecessors, while the extent of use of one scientist work, as a source for the work of other authors is the verification of its contributions to the growth of human knowledge. If the author has published an article in ...

  13. Open Access Publishing

    OpenAIRE

    Morrison, Heather

    2007-01-01

    An overview of open access publishing, for college faculty. Presents a definition of open access, the two roads to open access (OA publishing and self-archiving), overview of business models for open access, and examples of open access journals, with a focus on journals developed in British Columbia, including one (Topics in Scholarly Communication) developed by graduate students as a class assignment, and another developed by high school students (The Pink Voice). Includes a handout of res...

  14. Open-Access Publishing

    Directory of Open Access Journals (Sweden)

    Nedjeljko Frančula

    2013-06-01

Full Text Available Nature, one of the most prominent scientific journals, dedicated one of its issues to recent changes in scientific publishing (Vol. 495, Issue 7442, 27 March 2013). Its editors stressed that the words "technology" and "revolution" are closely related when it comes to scientific publishing, and that the transformation of research publishing is not so much a revolution as a war of attrition in which all sides are entrenched. The most important change they refer to is the open-access model, in which an author or an institution pays in advance for publishing a paper in a journal, and the paper is then available to users on the Internet free of charge. According to preliminary results of a survey conducted among 23,000 scientists by the publisher of Nature, 45% believe all papers should be published in open access, but at the same time 22% would not allow the use of papers for commercial purposes. Attitudes toward open access vary across scientific disciplines, leading the editors to conclude that the revolution still does not suit everyone.

  15. Nephrogenic systemic fibrosis: risk factors suggested from Japanese published cases

    DEFF Research Database (Denmark)

    Tsushima, Y; Kanal, E; Thomsen, H S

    2010-01-01

    The aim of this article is to review the published cases of nephrogenic systemic fibrosis (NSF) in Japan. The Japanese medical literature database and MedLine were searched using the keywords NSF and nephrogenic fibrosing dermopathy (January 2000 to March 2009). Reports in peer-reviewed journals...... and meeting abstracts were included, and cases with biopsy confirmation were selected. 14 biopsy-verified NSF cases were found. In seven of eight patients reported after the association between gadolinium-based contrast agent (GBCA) and NSF was proposed, GBCA administration was documented: five received only...

  16. Optimization of fuel cells for BWR based in Tabu modified search

    International Nuclear Information System (INIS)

Advances in the development of a computational system for the design and optimization of fuel cells for Boiling Water Reactor (BWR) fuel assemblies are presented. The optimization method is based on the Tabu Search (TS) technique, implemented in progressive stages designed to accelerate the search and reduce the time used in the optimization process. An algorithm was programmed to create the first solution. To diversify the generation of random numbers required by the TS technique, the Makoto Matsumoto function was used, with excellent results. The objective function has been coded in such a way that it can be adapted to optimize different parameters, such as the average enrichment or the radial power peaking factor. The neutronic evaluation of the cells is carried out in detail by means of the HELIOS simulator. The main characteristics of the system are described, and an application example is presented: the design of a cell of 10x10 fuel rods with 10 different enrichment compositions and gadolinium content. (Author)
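The short-term-memory loop at the heart of tabu search can be sketched generically (an illustration only, not the authors' HELIOS-coupled system; a toy bit-matching objective stands in for the neutronic evaluation, and Python's `random` module, itself a Mersenne Twister, stands in for the Makoto Matsumoto generator):

```python
import random

def tabu_search(initial, neighbors, score, iterations=100, tenure=5):
    """Minimise `score` starting from `initial`, forbidding recently
    visited solutions for `tenure` iterations (short-term memory)."""
    best = current = initial
    tabu = []  # FIFO of recently visited solutions
    for _ in range(iterations):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=score)  # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if score(current) < score(best):
            best = current
    return best

# Toy objective: Hamming distance to a target bit pattern
# (a stand-in for the fine cell evaluation described above).
target = (1, 0, 1, 1, 0, 1)

def hamming(sol):
    return sum(a != b for a, b in zip(sol, target))

def flip_neighbors(sol):
    for i in range(len(sol)):  # flip one bit at a time
        yield sol[:i] + (1 - sol[i],) + sol[i + 1:]

random.seed(0)
start = tuple(random.randint(0, 1) for _ in range(6))
best = tabu_search(start, flip_neighbors, hamming)
```

The staged scheme in the abstract would run several such loops in sequence, each with its own objective and restrictions.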

  17. Similarity-based search of model organism, disease and drug effect phenotypes

    KAUST Repository

    Hoehndorf, Robert

    2015-02-19

    Background: Semantic similarity measures over phenotype ontologies have been demonstrated to provide a powerful approach for the analysis of model organism phenotypes, the discovery of animal models of human disease, novel pathways, gene functions, druggable therapeutic targets, and determination of pathogenicity. Results: We have developed PhenomeNET 2, a system that enables similarity-based searches over a large repository of phenotypes in real-time. It can be used to identify strains of model organisms that are phenotypically similar to human patients, diseases that are phenotypically similar to model organism phenotypes, or drug effect profiles that are similar to the phenotypes observed in a patient or model organism. PhenomeNET 2 is available at http://aber-owl.net/phenomenet. Conclusions: Phenotype-similarity searches can provide a powerful tool for the discovery and investigation of molecular mechanisms underlying an observed phenotypic manifestation. PhenomeNET 2 facilitates user-defined similarity searches and allows researchers to analyze their data within a large repository of human, mouse and rat phenotypes.
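The shape of such a similarity search can be illustrated with a deliberately simplified set-overlap measure (PhenomeNET itself uses ontology-aware semantic similarity over phenotype classes, not plain Jaccard; the profile contents below are made up):

```python
def jaccard(a, b):
    """Set-overlap similarity between two phenotype profiles."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical phenotype profiles (HPO-style identifiers).
patient = {"HP:0001250", "HP:0001263", "HP:0000252"}
models = {
    "mouse_A": {"HP:0001250", "HP:0001263"},
    "mouse_B": {"HP:0011968"},
}

# Rank model organisms by similarity to the patient profile.
ranked = sorted(models, key=lambda m: jaccard(patient, models[m]),
                reverse=True)
```

Real-time search over a large repository then reduces to computing such scores against every stored profile and returning the top hits.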

  18. A mass accuracy sensitive probability based scoring algorithm for database searching of tandem mass spectrometry data

    Directory of Open Access Journals (Sweden)

    Freitas Michael A

    2007-04-01

    Full Text Available Abstract Background Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS has become one of the most used tools in mass spectrometry based proteomics. Various algorithms have since been developed to automate the process for modern high-throughput LC-MS/MS experiments. Results A probability based statistical scoring model for assessing peptide and protein matches in tandem MS database search was derived. The statistical scores in the model represent the probability that a peptide match is a random occurrence based on the number or the total abundance of matched product ions in the experimental spectrum. The model also calculates probability based scores to assess protein matches. Thus the protein scores in the model reflect the significance of protein matches and can be used to differentiate true from random protein matches. Conclusion The model is sensitive to high mass accuracy and implicitly takes mass accuracy into account during scoring. High mass accuracy will not only reduce false positives, but also improves the scores of true positive matches. The algorithm is incorporated in an automated database search program MassMatrix.
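The idea of scoring a peptide match by the probability that it occurs at random can be sketched with a simple binomial tail (a generic illustration, not the actual MassMatrix scoring function; the per-ion match probability `p_single` is where mass accuracy enters, since a tighter tolerance lowers it):

```python
from math import comb, log10

def random_match_pvalue(n_ions, n_matched, p_single):
    """Probability of matching at least `n_matched` of `n_ions`
    theoretical product ions by chance, if each ion independently
    matches a spectral peak with probability `p_single`."""
    return sum(comb(n_ions, k) * p_single**k * (1 - p_single)**(n_ions - k)
               for k in range(n_matched, n_ions + 1))

def match_score(n_ions, n_matched, p_single):
    """Higher score = less likely to be a random match."""
    return -log10(random_match_pvalue(n_ions, n_matched, p_single))
```

At higher mass accuracy (smaller `p_single`) the same number of matched ions yields a higher score, mirroring the abstract's point that accuracy both reduces false positives and improves the scores of true matches.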

  19. Energy transmission modes based on Tabu search and particle swarm hybrid optimization algorithm

    Institute of Scientific and Technical Information of China (English)

    LI xiang; CUI Ji-feng; QI Jian-xun; YANG Shang-dong

    2007-01-01

In China, economic centers are far from energy storage bases, so it is important to select a proper energy transfer mode to improve the efficiency of energy use. To solve this problem, an optimal allocation model based on energy transfer modes was proposed after an objective function for optimizing energy-use efficiency was established; a new hybrid algorithm combining Tabu search and particle swarm optimization for power transmission was then obtained. Based on the above discussion, some proposals were put forward for the optimal allocation of energy transfer modes in China. Comparing three traditional methods, based respectively on regional price differences, freight rates and annual cost, with the proposed method, the results indicate that the economic efficiency of energy transfer can be enhanced by 3.14%, 5.78% and 6.01%, respectively.

  20. A simple heuristic for Internet-based evidence search in primary care: a randomized controlled trial

    Science.gov (United States)

    Eberbach, Andreas; Becker, Annette; Rochon, Justine; Finkemeler, Holger; Wagner, Achim; Donner-Banzhoff, Norbert

    2016-01-01

    Background General practitioners (GPs) are confronted with a wide variety of clinical questions, many of which remain unanswered. Methods In order to assist GPs in finding quick, evidence-based answers, we developed a learning program (LP) with a short interactive workshop based on a simple three-step-heuristic to improve their search and appraisal competence (SAC). We evaluated the LP effectiveness with a randomized controlled trial (RCT). Participants (intervention group [IG] n=20; control group [CG] n=31) rated acceptance and satisfaction and also answered 39 knowledge questions to assess their SAC. We controlled for previous knowledge in content areas covered by the test. Results Main outcome – SAC: within both groups, the pre–post test shows significant (P=0.00) improvements in correctness (IG 15% vs CG 11%) and confidence (32% vs 26%) to find evidence-based answers. However, the SAC difference was not significant in the RCT. Other measures Most workshop participants rated “learning atmosphere” (90%), “skills acquired” (90%), and “relevancy to my practice” (86%) as good or very good. The LP-recommendations were implemented by 67% of the IG, whereas 15% of the CG already conformed to LP recommendations spontaneously (odds ratio 9.6, P=0.00). After literature search, the IG showed a (not significantly) higher satisfaction regarding “time spent” (IG 80% vs CG 65%), “quality of information” (65% vs 54%), and “amount of information” (53% vs 47%). Conclusion Long-standing established GPs have a good SAC. Despite high acceptance, strong learning effects, positive search experience, and significant increase of SAC in the pre–post test, the RCT of our LP showed no significant difference in SAC between IG and CG. However, we suggest that our simple decision heuristic merits further investigation.

  1. Algorithm of axial fuel optimization based in progressive steps of turned search

    International Nuclear Information System (INIS)

    The development of an algorithm for the axial optimization of fuel of boiling water reactors (BWR) is presented. The algorithm is based in a serial optimizations process in the one that the best solution in each stage is the starting point of the following stage. The objective function of each stage adapts to orient the search toward better values of one or two parameters leaving the rest like restrictions. Conform to it advances in those optimization stages, it is increased the fineness of the evaluation of the investigated designs. The algorithm is based on three stages, in the first one are used Genetic algorithms and in the two following Tabu Search. The objective function of the first stage it looks for to minimize the average enrichment of the one it assembles and to fulfill with the generation of specified energy for the operation cycle besides not violating none of the limits of the design base. In the following stages the objective function looks for to minimize the power factor peak (PPF) and to maximize the margin of shutdown (SDM), having as restrictions the one average enrichment obtained for the best design in the first stage and those other restrictions. The third stage, very similar to the previous one, it begins with the design of the previous stage but it carries out a search of the margin of shutdown to different exhibition steps with calculations in three dimensions (3D). An application to the case of the design of the fresh assemble for the fourth fuel reload of the Unit 1 reactor of the Laguna Verde power plant (U1-CLV) is presented. The obtained results show an advance in the handling of optimization methods and in the construction of the objective functions that should be used for the different design stages of the fuel assemblies. (Author)

  2. Query Intent Disambiguation of Keyword-Based Semantic Entity Search in Dataspaces

    Institute of Scientific and Technical Information of China (English)

    Dan Yang; De-Rong Shen; Ge Yu; Yue Kou; Tie-Zheng Nie

    2013-01-01

Keyword query has attracted much research attention due to its simplicity and wide applications. However, the inherent ambiguity of keyword queries often leads to unsatisfactory query results. Moreover, some existing techniques for Web queries and for keyword queries in relational and XML databases cannot be directly applied to keyword queries in dataspaces. We therefore propose KeymanticES, a novel keyword-based semantic entity search mechanism for dataspaces which combines both keyword-query and semantic-query features. We focus on the query intent disambiguation problem and propose a novel three-step approach to resolve it. Extensive experimental results show the effectiveness and correctness of our proposed approach.

  3. Color Octet Electron Search Potential of the FCC Based e-p Colliders

    CERN Document Server

    Acar, Y C; Oner, B B; Sultansoy, S

    2016-01-01

Resonant production of the color octet electron, e_{8}, at the FCC based ep colliders has been analyzed. It is shown that the e-FCC will cover a much wider region of e_{8} masses compared to the LHC. Moreover, with the highest electron beam energy, the e_{8} search potential of the e-FCC exceeds that of the FCC pp collider. If e_{8} is discovered earlier by the FCC pp collider, the e-FCC will give the opportunity to obtain very important additional information; for example, the compositeness scale can be probed up to the hundreds-of-TeV region.

  4. Development of a magnetometer-based search strategy for stopped monopoles at the Large Hadron Collider

    CERN Document Server

    De Roeck, A.; Hirt, A M; Joergensen, M-D; Katre, A; Mermod, P; Milstead, D; Sloan, T

    2012-01-01

    If produced in high energy particle collisions at the LHC, magnetic monopoles could stop in material surrounding the interaction points. Obsolete parts of the beam pipe near the CMS interaction region, which were exposed to the products of pp and heavy ion collisions, were analysed using a SQUID-based magnetometer. The purpose of this work is to quantify the performance of the magnetometer in the context of a monopole search using a small set of samples of accelerator material ahead of the 2013 shutdown.

  5. EMBANKS: Towards Disk Based Algorithms For Keyword-Search In Structured Databases

    OpenAIRE

    Gupta, Nitin

    2011-01-01

    In recent years, there has been a lot of interest in the field of keyword querying relational databases. A variety of systems such as DBXplorer [ACD02], Discover [HP02] and ObjectRank [BHP04] have been proposed. Another such system is BANKS, which enables data and schema browsing together with keyword-based search for relational databases. It models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. The size of the database graph that BANKS uses is ...

  6. A method of characterizing network topology based on the breadth-first search tree

    Science.gov (United States)

    Zhou, Bin; He, Zhe; Wang, Nianxin; Wang, Bing-Hong

    2016-05-01

A method based on the breadth-first search tree is proposed in this paper to characterize the hierarchical structure of a network. In this method, a similarity coefficient is defined to quantitatively distinguish networks and to quantitatively measure the topological stability of networks generated by a model. Applications of the method are discussed for the ER random network, the WS small-world network and the BA scale-free network. The method will be helpful for describing network topology in depth and provides a starting point for researching the topological similarity and isomorphism of networks.
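One concrete way to realize such a characterization (a hypothetical reading, not necessarily the paper's exact coefficient) is to profile how many nodes each level of the breadth-first search tree contains:

```python
def bfs_levels(adj, root):
    """Return the number of nodes at each depth of the BFS tree
    rooted at `root` (adj: node -> list of neighbours)."""
    seen = {root}
    level, sizes = [root], []
    while level:
        sizes.append(len(level))
        nxt = []
        for u in level:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        level = nxt
    return sizes

# Path graph 0-1-2-3: BFS from node 0 finds one new node per level.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# Star graph: hub 0 reaches all leaves at depth 1.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}

path_profile = bfs_levels(path, 0)  # [1, 1, 1, 1]
star_profile = bfs_levels(star, 0)  # [1, 3]
```

Comparing such level profiles between two networks gives one simple similarity coefficient of the kind the abstract describes.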

  7. Publishers and repositories

    CERN Document Server

    CERN. Geneva

    2007-01-01

The impact of self-archiving on journals and publishers is an important topic for all those involved in scholarly communication. There is some evidence that the physics arXiv has had no impact on physics journals, while 'economic common sense' suggests that some impact is inevitable. I shall review recent studies of librarian attitudes towards repositories and journals, and place this in the context of IOP Publishing's experiences with arXiv. I shall offer some possible reasons for the mismatch between these perspectives and then discuss how IOP has linked with arXiv and experimented with OA publishing. As well as launching OA journals we have co-operated with Cornell and the arXiv on Eprintweb.org, a platform that offers new features to repository users.

  8. Ethics in Scientific Publishing

    Science.gov (United States)

    Sage, Leslie J.

    2012-08-01

We all learn in elementary school not to turn in other people's writing as if it were our own (plagiarism), and in high school science labs not to fake our data. But there are many other practices in scientific publishing that are depressingly common and almost as unethical. At about the 20 percent level, authors are deliberately hiding recent work -- by themselves as well as by others -- so as to enhance the apparent novelty of their most recent paper. Some people lie about the dates the data were obtained, to cover up conflicts of interest or inappropriate use of privileged information. Others will publish the same conference proceeding in multiple volumes, or publish the same result in multiple journals with only trivial additions of data or analysis (self-plagiarism). These shady practices should be roundly condemned and stopped. I will discuss these and other unethical actions I have seen over the years, and steps editors are taking to stop them.

  9. Exploring Multidisciplinary Data Sets through Database Driven Search Capabilities and Map-Based Web Services

    Science.gov (United States)

    O'Hara, S.; Ferrini, V.; Arko, R.; Carbotte, S. M.; Leung, A.; Bonczkowski, J.; Goodwillie, A.; Ryan, W. B.; Melkonian, A. K.

    2008-12-01

    Relational databases containing geospatially referenced data enable the construction of robust data access pathways that can be customized to suit the needs of a diverse user community. Web-based search capabilities driven by radio buttons and pull-down menus can be generated on-the-fly leveraging the power of the relational database and providing specialists a means of discovering specific data and data sets. While these data access pathways are sufficient for many scientists, map-based data exploration can also be an effective means of data discovery and integration by allowing users to rapidly assess the spatial co- registration of several data types. We present a summary of data access tools currently provided by the Marine Geoscience Data System (www.marine-geo.org) that are intended to serve a diverse community of users and promote data integration. Basic search capabilities allow users to discover data based on data type, device type, geographic region, research program, expedition parameters, personnel and references. In addition, web services are used to create database driven map interfaces that provide live access to metadata and data files.

  10. VIS-PROCUUS: A NOVEL PROFILING SYSTEM FOR INSTIGATING USER PROFILES FROM SEARCH ENGINE LOGS BASED ON QUERY SENSE

    Directory of Open Access Journals (Sweden)

    Dr.S.K.JAYANTHI,

    2011-06-01

Full Text Available Most commercial search engines return roughly the same results for the same query, regardless of the user's real interest. This paper focuses on a user-profiling strategy so that browsers can obtain web search results based on their profiles in a visual mode. Users can be mined from the concept-based user profiles to perform mutual filtering, and browsers with the same interests and domain can share their knowledge. From the existing user profiles the interests and domains of the users can be obtained, and search engine personalization is the focus of this paper. Finally, the concept-based user profiles can be incorporated into the vis (visual) ranking algorithm of a search engine so that search results can be ranked according to individual users' interests and displayed in visual mode.

  11. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released in June...

  12. Tool Path Generation for Clean-up Machining of Impeller by Point-searching Based Method

    Institute of Scientific and Technical Information of China (English)

    TANG Ming; ZHANG Dinghua; LUO Ming; WU Baohai

    2012-01-01

The machining quality of the clean-up region has a strong influence on the performance of the impeller. In order to plan clean-up tool paths rapidly and obtain good surface finish quality, an efficient and robust tool path generation method is presented, which employs an approach based on point-searching. The clean-up machining discussed in this paper is pencil-cut and multilayer fillet-cut of a free-form model with a ball-end cutter. For pencil-cut, the cutter center position can be determined by judging whether it satisfies the distance requirement. After the searching direction and the tracing direction have been determined, by employing the point-searching algorithm with the idea of dichotomy, all the cutter contact (CC) points and cutter location (CL) points can be found, and the clean-up boundaries can also be defined rapidly. Then the tool path is generated. Based on the main concept of pencil-cut, a multilayer fillet-cut method is proposed, which utilizes a ball-end cutter whose radius is smaller than the design radius of the clean-up region. A sequence of intermediate virtual cutters is used to divide the clean-up region into several layers, and given a cusp-height tolerance for the final layer, the tool paths for all layers are calculated. Finally, a computer implementation is also presented, and the result shows that the proposed method is feasible.
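The dichotomy (bisection) step at the heart of such point-searching can be sketched as follows (a generic illustration with a made-up clearance function, not the paper's surface model):

```python
def bisect_contact(f, lo, hi, tol=1e-9):
    """Find t with f(t) = 0 by dichotomy, assuming f(lo) and f(hi)
    have opposite signs. Here f is a clearance function: the distance
    from the cutter center to the surface minus the cutter radius."""
    assert f(lo) * f(hi) <= 0, "root must be bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # sign change in the lower half
        else:
            lo = mid
    return (lo + hi) / 2

# Toy example: a ball-end cutter of radius 3 moving along a search
# direction parameterized by t; contact is where the hypothetical
# clearance (t^2 + 1)^0.5 - 3 crosses zero, i.e. t = sqrt(8).
radius = 3.0
clearance = lambda t: (t * t + 1.0) ** 0.5 - radius
t_contact = bisect_contact(clearance, 0.0, 10.0)
```

Repeating this search over a grid of candidate positions yields the CC/CL points and clean-up boundaries described above.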

  13. Hprints - Licence to publish

    DEFF Research Database (Denmark)

    Rabow, Ingegerd; Sikström, Marjatta; Drachen, Thea Marie;

    2010-01-01

    realised the potential advantages for them. The universities have a role here as well as the libraries that manage the archives and support scholars in various aspects of the publishing processes. Libraries are traditionally service providers with a mission to facilitate the knowledge production...

  14. Scholars | Digital Representation | Publishing

    Science.gov (United States)

    Hodgson, Justin

    2014-01-01

Understanding the current state of digital publishing means that writers can now do more and say more in more ways than ever before in human history. As modes, methods, media and mechanisms of expression mutate into newer and newer digital forms, writers find themselves at a moment when they can create, critique, collaborate, and comment according…

  15. Turn-Based War Chess Model and Its Search Algorithm per Turn

    Directory of Open Access Journals (Sweden)

    Hai Nan

    2016-01-01

Full Text Available War chess gaming has so far received insufficient attention but is a significant component of turn-based strategy (TBS) games, and it is studied in this paper. First, a common game model is proposed covering various existing war chess types. Based on the model, we propose a theoretical frame involving combinatorial optimization on the one hand and game tree search on the other. We also discuss a key problem, namely, that the number of branching factors of each turn in the game tree is huge. Then, we propose two algorithms for searching within one turn to solve the problem: (1) enumeration by order; (2) enumeration by recursion. The main difference between these two is the permutation method used: the former uses the dictionary-sequence method, while the latter uses the recursive permutation method. Finally, we prove that both of these algorithms are optimal, and we analyze the difference between their efficiencies. An important factor is the total time taken for a unit to expand until it reaches its reachable positions; this factor, the total number of expansions that each unit makes over its reachable positions, is fixed. The conclusion is stated in terms of this factor: enumeration by recursion is better than enumeration by order in all situations.
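The difference between the two enumeration schemes can be shown in miniature (a toy with three units; the paper's units also interact with the board, which is omitted here):

```python
from itertools import permutations

def enum_by_recursion(units):
    """Enumerate all move orders for one turn by recursive permutation:
    pick each remaining unit in turn, then recurse on the rest."""
    if not units:
        yield []
        return
    for i, u in enumerate(units):
        for rest in enum_by_recursion(units[:i] + units[i + 1:]):
            yield [u] + rest

units = ["archer", "knight", "mage"]
recursive = [tuple(p) for p in enum_by_recursion(units)]
# Dictionary-sequence (lexicographic) enumeration, as in "by order".
ordered = list(permutations(units))
```

Both schemes visit the same 3! = 6 orders; the paper's efficiency comparison concerns how much re-expansion work each scheme triggers per unit, which this toy does not model.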

  16. EMBANKS: Towards Disk Based Algorithms For Keyword-Search In Structured Databases

    CERN Document Server

    Gupta, Nitin

    2011-01-01

    In recent years, there has been a lot of interest in the field of keyword querying relational databases. A variety of systems such as DBXplorer [ACD02], Discover [HP02] and ObjectRank [BHP04] have been proposed. Another such system is BANKS, which enables data and schema browsing together with keyword-based search for relational databases. It models tuples as nodes in a graph, connected by links induced by foreign key and other relationships. The size of the database graph that BANKS uses is proportional to the sum of the number of nodes and edges in the graph. Systems such as SPIN, which search on Personal Information Networks and use BANKS as the backend, maintain a lot of information about the users' data. Since these systems run on the user workstation which have other demands of memory, such a heavy use of memory is unreasonable and if possible, should be avoided. In order to alleviate this problem, we introduce EMBANKS (acronym for External Memory BANKS), a framework for an optimized disk-based BANKS sy...

  17. LED TERMINAL AND ADVERTISEMENT PUBLISHING PLATFORM BASED ON XML INTERACTION PROTOCOLS

    Institute of Scientific and Technical Information of China (English)

    梅良刚; 左保河; 李嘉炎

    2011-01-01

This paper illustrates a design scheme for an LED terminal and advertisement publishing platform based on XML interaction protocols. First, a Java web platform is used to upload media, censor them, and put them into the media library. Then the LED terminals are developed in embedded C and interact via XML protocols. When the number of advertisement terminals is large, or the terminals are far apart and are managed and maintained manually, problems inevitably arise, including a heavy workload, slow updates of advertising information, the inability to monitor terminal states in real time, and limits on the flexibility and diversity of the terminals. An Internet-based B/S broadcast pattern over the network can solve these problems.

  18. Parcels and Land Ownership, Coweta County, Georgia Parcel Base Shapefile, Published in 2006, 1:12000 (1in=1000ft) scale, Chattahoochee-Flint Regional Development.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Parcels and Land Ownership dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from Field Survey/GPS information as of 2006. It is...

  19. Parcels and Land Ownership, Tax Assessors Data Base, Published in 1998, 1:600 (1in=50ft) scale, Jones County Board of Commissioners.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Parcels and Land Ownership dataset, published at 1:600 (1in=50ft) scale, was produced all or in part from Uncorrected Imagery information as of 1998. It is...

  20. Missile Sites, Former missile field for Whiteman., Published in 2005, 1:12000 (1in=1000ft) scale, Whiteman Air Force Base.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Missile Sites dataset, published at 1:12000 (1in=1000ft) scale, was produced all or in part from Field Survey/GPS information as of 2005. It is described as...

  1. Population Scalability Analysis of Abstract Population-based Random Search: Spectral Radius

    CERN Document Server

    He, Jun

    2011-01-01

Population-based Random Search (RS) algorithms, such as Evolutionary Algorithms (EAs), Ant Colony Optimization (ACO), Artificial Immune Systems (AIS) and Particle Swarm Optimization (PSO), have been widely applied to solving discrete optimization problems. A common belief in this area is that the performance of a population-based RS algorithm may improve as its population size increases. The term population scalability is used to describe the relationship between the performance of RS algorithms and their population size. Although understanding population scalability is important for designing efficient RS algorithms, few theoretical results about population scalability exist so far. Among those limited results, most are case studies, e.g. simple RS algorithms for simple problems. Different from them, this paper aims at providing a general study. A large family of RS algorithms, called ARS, has been investigated in the paper. The main contribution of this paper is to introduce a novel appro...

  2. Creative Engineering Based Education with Autonomous Robots Considering Job Search Support

    Science.gov (United States)

    Takezawa, Satoshi; Nagamatsu, Masao; Takashima, Akihiko; Nakamura, Kaeko; Ohtake, Hideo; Yoshida, Kanou

The Robotics Course in our Mechanical Systems Engineering Department offers “Robotics Exercise Lessons” as one of its Problem-Solution Based Specialized Subjects. This is intended to motivate students' learning and to help them acquire fundamental knowledge and skills in mechanical engineering and improve their understanding of basic robotics theory. Our current curriculum was established to accomplish this objective based on two pieces of research in 2005: an evaluation questionnaire on the education of our Mechanical Systems Engineering Department for graduates, and a survey on the kind of human resources which companies are seeking and their expectations for our department. This paper reports the academic results and reflections on job search support in recent years, as inherited and developed from the previous curriculum.

  3. Semantic snippet construction for search engine results based on segment evaluation

    CERN Document Server

    Kuppusamy, K S

    2012-01-01

    The result listing from search engines includes a link and a snippet from the web page for each result item. The snippet plays a vital role in helping the user decide whether to click on a result. This paper proposes a novel approach to constructing snippets based on a semantic evaluation of the segments in the page. The target segment(s) are identified by applying a model that evaluates the segments present in the page and selects those with top scores. The proposed model makes the user's judgment about clicking on a result item easier, since the snippet is constructed semantically after a critical evaluation based on multiple factors. A prototype implementation of the proposed model provides empirical validation.
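The abstract does not give the segment-scoring model itself; as a toy illustration of the general idea (score every page segment against the query, build the snippet from the top-scoring segment), one might write something like the following, where the scoring function is purely an assumption:

```python
def score_segment(segment: str, query_terms: set) -> float:
    """Toy relevance score: query-term coverage plus term density.
    The paper's model uses multiple semantic factors; this is a stand-in."""
    words = segment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in query_terms)
    coverage = len({w for w in words if w in query_terms}) / len(query_terms)
    return coverage + hits / len(words)

def build_snippet(segments, query, max_len=160):
    terms = set(query.lower().split())
    best = max(segments, key=lambda s: score_segment(s, terms))
    return best[:max_len]

segments = [
    "Contact us for more information about our products.",
    "Cuckoo search is a nature-inspired optimization algorithm.",
    "Copyright 2012 all rights reserved.",
]
snippet = build_snippet(segments, "cuckoo search algorithm")
print(snippet)
```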

  4. English for Non-English Departments: In Search for an Essential Home Base

    Directory of Open Access Journals (Sweden)

    Indah Winarni

    2016-02-01

    Full Text Available Promoting the quality of English for the students of non-English departments (henceforth English for undergraduates), which has been characterized as lacking prestige and resources, requires a serious promotion of its status. This means providing a proper home base for the English instructors, where standards of profession and quality of service can be pursued through a solid structure which could nurture academic culture. This paper describes the various types of existing structures of English for undergraduates. The perseverance of Brawijaya University English instructors in searching for the intended home base, through various efforts in staff development and serious research, is illustrated. What is meant by the intended home base is inspired by Swales's concept of discourse community.

  5. Cuckoo search based optimal mask generation for noise suppression and enhancement of speech signal

    Directory of Open Access Journals (Sweden)

    Anil Garg

    2015-07-01

    Full Text Available In this paper, an effective noise suppression technique for enhancement of speech signals using an optimized mask is proposed. Initially, the noisy speech signal is broken down into various time–frequency (TF) units and features are extracted by computing the Amplitude Magnitude Spectrogram (AMS). The signals are then classified into different classes based on quality ratio to generate the initial set of solutions. Subsequently, the optimal mask for each class is generated using the Cuckoo search algorithm. Then, in the waveform synthesis stage, filtered waveforms are windowed, multiplied by the optimal mask value and summed up to obtain the enhanced target signal. The proposed technique was evaluated on various datasets and its performance compared with previous techniques using SNR. The results obtained prove the effectiveness of the proposed technique and its ability to suppress noise and enhance the speech signal.

  6. Ovid MEDLINE Instruction can be Evaluated Using a Validated Search Assessment Tool. A Review of: Rana, G. K., Bradley, D. R., Hamstra, S. J., Ross, P. T., Schumacher, R. E., Frohna, J. G., & Lypson, M. L. (2011). A validated search assessment tool: Assessing practice-based learning and improvement in a residency program. Journal of the Medical Library Association, 99(1), 77-81. doi:10.3163/1536-5050.99.1.013

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-01-01

    Full Text Available Objective – To determine the construct validity of a search assessment instrument that is used to evaluate search strategies in Ovid MEDLINE. Design – Cross-sectional cohort study. Setting – The Academic Medical Center of the University of Michigan. Subjects – All 22 first-year residents in the Department of Pediatrics in 2004 (cohort 1); 10 senior pediatric residents in 2005 (cohort 2); and 9 faculty members who taught evidence based medicine (EBM) and published on EBM topics. Methods – Two methods were employed to determine whether the University of Michigan MEDLINE Search Assessment Instrument (UMMSA) could show differences between searchers' construction of a MEDLINE search strategy. The first method tested the search skills of all 22 incoming pediatrics residents (cohort 1) after they received MEDLINE training in 2004, and again upon graduation in 2007. Only 15 of these residents were tested upon graduation; seven were either no longer in the residency program, or had quickly left the institution after graduation. The search test asked study participants to read a clinical scenario, identify the search question in the scenario, and perform an Ovid MEDLINE search. Two librarians scored the blinded search strategies. The second method compared the scores of the 22 residents with the scores of ten senior residents (cohort 2) and nine faculty volunteers. Unlike the first cohort, the ten senior residents had not received any MEDLINE training. The faculty members' search strategies were used as the gold-standard comparison for scoring the search skills of the two cohorts. Main Results – The search strategy scores of the 22 first-year residents, who received training, improved from 2004 to 2007 (mean improvement: 51.7 to 78.7; t(14) = 5.43, P …). Conclusion – According to the authors, “the results of this study provide evidence for the validity of an instrument to evaluate MEDLINE search strategies” (p. 81), since the instrument under

  7. Development and Testing of a Literature Search Protocol for Evidence Based Nursing: An Applied Student Learning Experience

    OpenAIRE

    Andy Hickner; Friese, Christopher R.; Margaret Irwin

    2011-01-01

    Objective – The study aimed to develop a search protocol and evaluate reviewers' satisfaction with an evidence-based practice (EBP) review by embedding a library science student in the process. Methods – The student was embedded in one of four review teams overseen by a professional organization for oncology nurses (ONS). A literature search protocol was developed by the student following discussion and feedback from the review team. Organization staff provided process feedback. Reviewers from...

  8. A Novel Harmony Search Algorithm Based on Teaching-Learning Strategies for 0-1 Knapsack Problems

    OpenAIRE

    Shouheng Tuo; Longquan Yong; Fang’an Deng

    2014-01-01

    To enhance the performance of the harmony search (HS) algorithm on discrete optimization problems, this paper proposes a novel harmony search algorithm based on teaching-learning (HSTL) strategies to solve 0-1 knapsack problems. In the HSTL algorithm, a method is first presented to dynamically adjust the dimension of the selected harmony vector during the optimization procedure. In addition, four strategies (harmony memory consideration, teaching-learning strategy, local pitch adjusting, and rand...
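For readers unfamiliar with the base algorithm, a bare-bones binary harmony search for the 0-1 knapsack problem (without the paper's teaching-learning strategies) can be sketched as follows; parameter values are arbitrary:

```python
import random

def harmony_search_knapsack(values, weights, capacity, hms=10, hmcr=0.9,
                            par=0.3, iters=2000, seed=1):
    """Minimal binary harmony search for 0-1 knapsack
    (no teaching-learning operators, unlike the paper's HSTL)."""
    random.seed(seed)
    n = len(values)

    def fitness(x):
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0

    # harmony memory: hms random solutions
    hm = [[random.randint(0, 1) for _ in range(n)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(n):
            if random.random() < hmcr:          # harmony memory consideration
                bit = random.choice(hm)[j]
                if random.random() < par:       # pitch adjustment: flip the bit
                    bit = 1 - bit
            else:                               # random selection
                bit = random.randint(0, 1)
            new.append(bit)
        worst = min(range(hms), key=lambda i: fitness(hm[i]))
        if fitness(new) > fitness(hm[worst]):   # replace the worst harmony
            hm[worst] = new
    return max(hm, key=fitness), max(fitness(x) for x in hm)

values, weights = [60, 100, 120], [10, 20, 30]
best, val = harmony_search_knapsack(values, weights, capacity=50)
print(best, val)  # optimum for this classic instance is 220 (items 2 and 3)
```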

  9. Monte Carlo-based searching as a tool to study carbohydrate structure.

    Science.gov (United States)

    Dowd, Michael K; Kiely, Donald E; Zhang, Jinsong

    2011-07-01

    A torsion angle-based Monte Carlo searching routine was developed and applied to several carbohydrate modeling problems. The routine was developed as a Unix shell script that calls several programs, which allows it to be interfaced with multiple potential functions and various utilities for evaluating conformers. In its current form, the program operates with several versions of the MM3 and MM4 molecular mechanics programs and has a module to calculate hydrogen-hydrogen coupling constants. The routine was used to study the low-energy exo-cyclic substituents of β-D-glucopyranose and the conformers of D-glucaramide, both of which had been previously studied with MM3 by full conformational searches. For these molecules, the program found all previously reported low-energy structures. The routine was also used to find favorable conformers of 2,3,4,5-tetra-O-acetyl-N,N'-dimethyl-D-glucaramide and D-glucitol, the latter of which is believed to have many low-energy forms. Finally, the technique was used to study the inter-ring conformations of β-gentiobiose, a β-(1→6)-linked disaccharide of D-glucopyranose. The program easily found conformers in the 10 previously identified low-energy regions for this disaccharide. In 6 of the 10 local regions, the same previously identified low-energy structures were found. In the remaining four regions, the search identified structures with slightly lower energies than those previously reported. The approach should be useful for extending modeling studies on acyclic monosaccharides and possibly oligosaccharides. PMID:21536262
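The essence of a torsion-angle Monte Carlo search can be sketched with a Metropolis walk on a toy one-dimensional torsion potential (a stand-in for the MM3/MM4 force fields used in the paper; the 3-fold energy function below is hypothetical):

```python
import math, random

def torsion_energy(phi):
    """Toy 3-fold torsion potential: minima at 60, 180 and 300 degrees."""
    return 1.5 * (1 + math.cos(math.radians(3 * phi)))

def monte_carlo_search(steps=5000, step_size=30.0, kT=0.6, seed=42):
    """Random torsion moves with Metropolis acceptance, tracking the best."""
    random.seed(seed)
    phi = 0.0
    e = torsion_energy(phi)
    best_phi, best_e = phi, e
    for _ in range(steps):
        trial = (phi + random.uniform(-step_size, step_size)) % 360.0
        e_trial = torsion_energy(trial)
        # Metropolis criterion: always accept downhill, sometimes uphill
        if e_trial <= e or random.random() < math.exp((e - e_trial) / kT):
            phi, e = trial, e_trial
            if e < best_e:
                best_phi, best_e = phi, e
    return best_phi, best_e

phi, e = monte_carlo_search()
print(round(phi, 1), round(e, 3))  # near-zero energy at a staggered minimum
```

A real conformational search perturbs many torsions at once and hands each conformer to the force field, but the accept/track loop is the same shape.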

  10. Tales from the Field: Search Strategies Applied in Web Searching

    OpenAIRE

    Soohyung Joo; Iris Xie

    2010-01-01

    In their web search processes users apply multiple types of search strategies, which consist of different search tactics. This paper identifies eight types of information search strategies with associated cases based on sequences of search tactics during the information search process. Thirty-one participants representing the general public were recruited for this study. Search logs and verbal protocols offered rich data for the identification of different types of search strategies. Based on...

  11. Open Access Publishing: What Authors Want

    OpenAIRE

    Nariani, R.; Fernandez, L.

    2012-01-01

    Campus-based open access author funds are being considered by many academic libraries as a way to support authors publishing in open access journals. Article processing fees for open access have been introduced recently by publishers and have not yet been widely accepted by authors. Few studies have surveyed authors on their reasons for publishing open access and their perceptions of open access journals. The present study was designed to gauge the uptake of library support for author funding...

  12. Support open access publishing

    DEFF Research Database (Denmark)

    Ekstrøm, Jeannette

    2013-01-01

    The Support Open Access Publishing project aims to update the Sherpa/Romeo database (www.sherpa.ac.uk/romeo) with professionally relevant Danish journals. The project will furthermore investigate the possibilities of developing a database in which researchers, across relevant journal informati...

  13. Reclaiming society publishing.

    OpenAIRE

    Steinberg, Philip E.

    2015-01-01

    Learned societies have become aligned with commercial publishers, who have increasingly taken over the latter’s function as independent providers of scholarly information. Using the example of geographical societies, the advantages and disadvantages of this trend are examined. It is argued that in an era of digital publication, learned societies can offer leadership with a new model of open access that can guarantee high quality scholarly material whose publication costs are supported by soci...

  14. Reclaiming Society Publishing

    Directory of Open Access Journals (Sweden)

    Philip E. Steinberg

    2015-07-01

    Full Text Available Learned societies have become aligned with commercial publishers, who have increasingly taken over the latter’s function as independent providers of scholarly information. Using the example of geographical societies, the advantages and disadvantages of this trend are examined. It is argued that in an era of digital publication, learned societies can offer leadership with a new model of open access that can guarantee high quality scholarly material whose publication costs are supported by society membership dues.

  15. UNDEMOCRATIC ASPECTS OF PUBLISHING

    OpenAIRE

    Meadows, J

    1999-01-01

    There seems to be a general belief among Internet users that it is a particularly democratic kind of activity. How true is this, more especially in terms of electronic publishing? The problem in seeking an answer is that 'democracy' means different things to different people. Its meaning not only varies from country to country, but even within a single country it can have different flavours. For example, a summary of the definition given in the Oxford English Dictionary might be: 'that form o...

  16. A Strategic Analysis of Search Engine Advertising in Web based-commerce

    OpenAIRE

    Ela Kumar; Shruti Kohli

    2007-01-01

    Endeavor of this paper is to explore the role played by search engines in the online business industry. This paper discusses search engine advertising programs and provides an insight into the revenue generated online via search engines. It explores the growth of the online business industry in India and emphasizes the role of the search engine as the major advertising vehicle. A case study on the revolution of the Indian advertising industry has been conducted and its impact on on...

  17. Query Recommendation employing Query Logs in Search Optimization

    Directory of Open Access Journals (Sweden)

    Neha Singh

    2013-11-01

    Full Text Available In this paper we suggest a method that, given a query submitted to a search engine, proposes a list of related queries. The related queries are based on previously submitted queries, and can be issued by the user to the search engine to tune or redirect the search process. The proposed method is based on a query clustering process in which groups of semantically similar queries are identified. The clustering process uses the content of historical preferences of users registered in the query log of the search engine. The method not only discovers the related queries, but also ranks them according to a relevance criterion. Finally, we show with experiments over the query log of a search engine the effectiveness of the method.
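The abstract only outlines the clustering procedure; a toy version of the underlying idea — treat queries as related when users' recorded clicks overlap, and rank candidates by that overlap — might look like this (the Jaccard similarity is an illustrative choice, not the paper's measure):

```python
def recommend(query, click_log, top_n=3):
    """click_log: dict mapping query -> set of clicked URLs.
    Rank other queries by Jaccard similarity of clicked-URL sets,
    a stand-in for the paper's semantic clustering over query logs."""
    target = click_log[query]
    scores = []
    for q, urls in click_log.items():
        if q == query:
            continue
        sim = len(target & urls) / len(target | urls)
        if sim > 0:
            scores.append((sim, q))
    scores.sort(reverse=True)                 # highest similarity first
    return [q for _, q in scores[:top_n]]

log = {
    "cheap flights": {"a.com", "b.com"},
    "airline tickets": {"a.com", "b.com", "c.com"},
    "flight deals": {"b.com"},
    "python tutorial": {"d.com"},
}
recs = recommend("cheap flights", log)
print(recs)
```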

  18. A Statistical Ontology-Based Approach to Ranking for Multiword Search

    Science.gov (United States)

    Kim, Jinwoo

    2013-01-01

    Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…

  19. Extraction of microcracks in rock images based on heuristic graph searching and application

    Science.gov (United States)

    Luo, Zhihua; Zhu, Zhende; Ruan, Huaining; Shi, Chong

    2015-12-01

    In this paper, we propose a new method, based on a graph-searching technique, for microcrack extraction from scanning electron microscope images of rocks. The method focuses on detecting the crack, extracting it, and then quantifying its basic geometrical features. The crack can be detected automatically given two endpoints of the crack. The algorithm involves the following process: the A* graph-searching technique is first used to find a path throughout the crack region defined by the two initial endpoints; the pixels of the path are used as seeds for a region-growing method that restores the primary crack area; an automatic hole-filling operation then removes possible holes in the region-growing result; the medial axis and distance transformation of the crack area are computed, and the final crack is rebuilt by painting disks along a medial axis without branches. The crack is thus extracted without further interaction. Finally, crack features such as length, width, angle and area are quantified; error analysis shows that the error percentage of the proposed approach falls to a low level as the actual crack width increases, and results for several example images are illustrated. The algorithm is efficient and can also be used for image detection of other linear structural objects.
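The first stage, finding a path through the crack region with A*, can be sketched on a small cost grid (in the paper the costs would come from pixel intensities; here they are made up):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid: grid[r][c] is the cost of stepping onto that cell.
    The Manhattan heuristic is admissible because every step costs >= 1."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen and seen[node] <= g:     # already expanded cheaper
            continue
        seen[node] = g
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + grid[nr][nc]
                heapq.heappush(open_set,
                               (ng + h((nr, nc)), ng, (nr, nc), path + [(nr, nc)]))
    return None

# low-cost cells (1) trace the "crack"; high-cost cells (9) are background
grid = [
    [1, 9, 9],
    [1, 1, 9],
    [9, 1, 1],
]
path = a_star(grid, (0, 0), (2, 2))
print(path)
```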

  20. Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer

    Directory of Open Access Journals (Sweden)

    Mauro Castelli

    2015-01-01

    Full Text Available Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions also on unseen data. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method for predicting energy consumption. In particular, the system produces a lower error with respect to existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce an accurate forecast also on unseen data.

  1. Neural-Based Cuckoo Search of Employee Health and Safety (HS

    Directory of Open Access Journals (Sweden)

    Koffka Khan

    2013-01-01

    Full Text Available A study using the cuckoo search algorithm to evaluate the effects of using computer-aided workstations on employee health and safety (HS) is conducted. We collected data on HS risk for employees at their workplaces, analyzed the data and proposed corrective measures applying our methodology. It includes a checklist with nine HS dimensions: work organization, displays, input devices, furniture, work space, environment, software, health hazards and satisfaction. With the checklist, data on HS risk factors are collected. For the calculation of an HS risk index, IHS, a neural-swarm cuckoo search (NSCS) algorithm is employed. Based on the index, four groups of HS risk severity are determined: low, moderate, high and extreme HS risk. Using this index, HS problems are located and corrective measures can be applied. The approach is illustrated and validated by a case study. An important advantage of the approach is its ease of use, with the HS index methodology quickly pointing out employee-specific HS risks.

  2. A web-based search engine for triplex-forming oligonucleotide target sequences.

    Science.gov (United States)

    Gaddis, Sara S; Wu, Qi; Thames, Howard D; DiGiovanni, John; Walborg, Earl F; MacLeod, Michael C; Vasquez, Karen M

    2006-01-01

    Triplex technology offers a useful approach for site-specific modification of gene structure and function both in vitro and in vivo. Triplex-forming oligonucleotides (TFOs) bind to their target sites in duplex DNA, thereby forming triple-helical DNA structures via Hoogsteen hydrogen bonding. TFO binding has been demonstrated to site-specifically inhibit gene expression, enhance homologous recombination, induce mutation, inhibit protein binding, and direct DNA damage, thus providing a tool for gene-specific manipulation of DNA. We have developed a flexible web-based search engine to find and annotate TFO target sequences within the human and mouse genomes. Descriptive information about each site, including sequence context and gene region (intron, exon, or promoter), is provided. The engine assists the user in finding highly specific TFO target sequences by eliminating or flagging known repeat sequences and flagging overlapping genes. A convenient way to check for the uniqueness of a potential TFO binding site is provided via NCBI BLAST. The search engine may be accessed at spi.mdanderson.org/tfo. PMID:16764543
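TFO target sites are classically long homopurine runs; a minimal scanner in that spirit might look like the following (the A/G-run criterion and the length threshold are simplifications for illustration, not the engine's actual rules, which also handle repeats and gene-region annotation):

```python
import re

def find_tfo_targets(seq, min_len=15):
    """Return (start, site) pairs for homopurine (A/G) runs of at least
    min_len bases, a common criterion for triplex-forming-oligonucleotide
    (TFO) target sites in duplex DNA."""
    pattern = r"[AG]{%d,}" % min_len
    return [(m.start(), m.group()) for m in re.finditer(pattern, seq.upper())]

seq = "ttccAGGAAGGAGGAGAAGGtacgtCGCGCG"
sites = find_tfo_targets(seq)
print(sites)  # one 16-base polypurine run starting at index 4
```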

  3. A novel approach to dark matter search based on nanometric emulsions

    International Nuclear Information System (INIS)

    The most convincing candidate as the main constituent of the dark matter in the Universe consists of weakly interacting massive particles (WIMPs). WIMPs must be electrically neutral and interact with a very low cross-section (σ < 10⁻⁴⁰ cm²), which makes them detectable in direct searches only through the observation of nuclear recoils induced by rare WIMP scatterings. In the experiments carried out so far, recoiled nuclei are searched for as a signal over a background produced by Compton electrons and neutron scatterings. Signals found by some experiments have not been confirmed by other techniques. None of these experiments is able to detect the track, typically less than one micron long, of the recoiled nucleus, and therefore none is able to directly detect the incoming direction of WIMPs. We propose an R&D program for a new experimental method able to observe the track of the scattered nucleus, based on new developments in the nuclear emulsion technique: films with nanometric silver grains, expansion of emulsions, and very fast, fully automated scanning systems. Nuclear emulsions would act both as the WIMP target and as the tracking detector able to reconstruct the direction of the recoiled nucleus. This unique characteristic would provide a new and unambiguous signature of the presence of dark matter in our galaxy

  4. Improving the Ranking Capability of the Hyperlink Based Search Engines Using Heuristic Approach

    Directory of Open Access Journals (Sweden)

    Haider A. Ramadhan

    2006-01-01

    Full Text Available To evaluate the informative content of a Web page, the Web structure has to be carefully analyzed. Hyperlink analysis, which is capable of measuring the potential information contained in a Web page with respect to the Web space, is gaining more attention. The links to and from Web pages are an important resource that has largely gone unused in existing search engines. Web pages differ from general text in that they possess external and internal structure. The Web links between documents can provide useful information in finding pages for a given set of topics. Making use of the Web link information would allow the construction of more powerful tools for answering user queries. Google has been among the first search engines to utilize hyperlinks in page ranking. Still, two main flaws in Google need to be tackled. First, all the backlinks to a page are assigned equal weights. Second, less content-rich pages, such as intermediate and transient pages, are not differentiated from more content-rich pages. To overcome these pitfalls, this paper proposes a heuristic-based solution that differentiates the significance of various backlinks by assigning a different weight factor to them depending on their location in the directory tree of the Web space.
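A sketch of the proposed idea — weighting each backlink by the linking page's position in the directory tree instead of counting all backlinks equally — could look like this (the 1/(1+depth) weighting is illustrative, not the paper's formula):

```python
from urllib.parse import urlparse

def link_weight(url):
    """Weight a backlink by its depth in the directory tree: links from pages
    high in the hierarchy (e.g., a site root) count more than links from
    deeply nested, likely transient pages. 1/(1+depth) is an assumption."""
    depth = len([p for p in urlparse(url).path.split("/") if p])
    return 1.0 / (1 + depth)

def weighted_backlink_score(backlinks):
    """Sum of weighted backlinks, instead of a plain backlink count."""
    return sum(link_weight(u) for u in backlinks)

links = ["http://example.com/", "http://example.com/docs/guide/page.html"]
score = weighted_backlink_score(links)
print(score)  # 1.0 for the root link + 0.25 for the depth-3 link
```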

  5. Search for pulsations in M dwarfs in the Kepler short-cadence data base

    Science.gov (United States)

    Rodríguez, E.; Rodríguez-López, C.; López-González, M. J.; Amado, P. J.; Ocando, S.; Berdiñas, Z. M.

    2016-04-01

    The results of a search for stellar pulsations in M dwarf stars in the Kepler short-cadence (SC) data base are presented. This investigation covers all the cool dwarf stars in the list of Dressing & Charbonneau that were also observed in SC mode by the Kepler satellite. The sample was enlarged via selection on stellar parameters (temperature, surface gravity and radius) with available Kepler Input Catalogue values together with JHK and riz photometry. In total, 87 objects observed by the Kepler mission in SC mode were selected and analysed using Fourier techniques. The detection threshold is below 10 μmag for the brightest objects and below 20 μmag for about 40 per cent of the stars in the sample. However, no significant signal in the [~10, 100] c/d (cycles per day) frequency domain that can be reliably attributed to stellar pulsations has been detected. The periodograms have also been investigated for solar-like oscillations in the >100 c/d region, but with unsuccessful results too. Despite these inconclusive photometric results, M dwarf pulsations may still be detected in radial-velocity searches. State-of-the-art instruments, like the CARMENES near-infrared high-precision spectrograph, will play a key role in a possible detection.
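A pulsation search of this kind boils down to computing an amplitude spectrum of a light curve and looking for peaks above a noise threshold; a pure-Python discrete-Fourier sketch on a synthetic light curve (all parameters invented for illustration) follows:

```python
import math

def amplitude_spectrum(times, flux, freqs):
    """Discrete Fourier amplitude at each trial frequency (cycles per day),
    the quantity inspected when hunting for pulsation peaks."""
    n = len(times)
    amps = []
    for f in freqs:
        re = sum(y * math.cos(2 * math.pi * f * t) for t, y in zip(times, flux))
        im = sum(y * math.sin(2 * math.pi * f * t) for t, y in zip(times, flux))
        amps.append(2 * math.sqrt(re * re + im * im) / n)
    return amps

# synthetic light curve: 50e-6 relative-flux sinusoid at 25 c/d, 1-min cadence
times = [i / 1440.0 for i in range(2000)]                     # days
flux = [50e-6 * math.sin(2 * math.pi * 25 * t) for t in times]
freqs = [10, 25, 40]
amps = amplitude_spectrum(times, flux, freqs)
print([round(a * 1e6, 1) for a in amps])  # amplitude peaks at 25 c/d
```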

  6. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2011-02-01

    Full Text Available Bionic technology provides new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound-tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability.
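The hearing robots' time delay estimation can be sketched as a cross-correlation peak search between two microphone signals (synthetic data; a real system would also convert the lag into a bearing using the microphone-array geometry):

```python
import random

def estimate_delay(sig_a, sig_b, max_lag):
    """Time-delay estimation (TDE): return the lag, in samples, by which
    sig_a trails sig_b, found as the cross-correlation maximum."""
    best_lag, best_corr = 0, float("-inf")
    n = len(sig_a)
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(sig_a[i] * sig_b[i - lag]
                   for i in range(max(0, lag), min(n, n + lag)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

random.seed(0)
src = [random.uniform(-1, 1) for _ in range(200)]  # broadband source signal
mic_a = src[:]                  # reference microphone
mic_b = [0.0] * 5 + src[:-5]    # same signal arriving 5 samples later
delay = estimate_delay(mic_b, mic_a, max_lag=20)
print(delay)  # 5: mic_b lags mic_a by 5 samples
```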

  7. COStar: a D-star Lite-based dynamic search algorithm for codon optimization.

    Science.gov (United States)

    Liu, Xiaowu; Deng, Riqiang; Wang, Jinwen; Wang, Xunzhang

    2014-03-01

    Codon-optimized genes have two major advantages: they simplify de novo gene synthesis and increase the expression level in target hosts. They typically achieve this by altering codon usage in a given gene. Codon optimization is complex because it usually needs to satisfy multiple opposing goals. In practice, finding an optimal sequence among the massive number of possible combinations of synonymous codons that can code for the same amino acid sequence is a challenging task. In this article, we introduce COStar, a D-star Lite-based dynamic search algorithm for codon optimization. The algorithm first maps the codon optimization problem onto a weighted directed acyclic graph using a sliding window approach. Then, the D-star Lite algorithm is used to compute the shortest path from the start site to the target site in the resulting graph. Optimizing a gene is thus converted into a real-time search for a shortest path in the generated graph. Using in silico experiments, the performance of the algorithm was demonstrated by optimizing different genes, including genes from the human genome. The results suggest that COStar is a promising codon optimization tool for de novo gene synthesis and heterologous gene expression. PMID:24316385
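D* Lite itself handles dynamic replanning; for a static illustration of the underlying reduction — synonymous codons as layers of a DAG, edge weights as penalties on codon junctions — plain dynamic programming suffices. The junction penalty below is invented for illustration and is not COStar's objective, which combines codon usage, GC content and other goals:

```python
def shortest_codon_path(layers, junction_cost):
    """Each layer holds the synonymous codons for one amino acid.
    DP over the layered DAG finds the codon sequence with minimal
    total junction cost."""
    cost = {c: 0.0 for c in layers[0]}   # best cost ending at each codon
    back = [{}]                          # backpointers per layer
    for layer in layers[1:]:
        new_cost, links = {}, {}
        for c in layer:
            prev, w = min(((p, cost[p] + junction_cost(p, c)) for p in cost),
                          key=lambda t: t[1])
            new_cost[c], links[c] = w, prev
        cost, back = new_cost, back + [links]
    end = min(cost, key=cost.get)        # cheapest final codon
    path = [end]
    for links in reversed(back[1:]):     # walk the backpointers
        path.append(links[path[-1]])
    return list(reversed(path)), cost[end]

# toy penalty: avoid a C|G codon junction (e.g., to dodge CpG motifs)
penalty = lambda a, b: 1.0 if a[-1] == "C" and b[0] == "G" else 0.0
layers = [["GCC", "GCA"], ["GGA", "GGC"], ["AAA"]]
path, total = shortest_codon_path(layers, penalty)
print(path, total)
```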

  8. Biclustering of Gene Expression Data by Correlation-Based Scatter Search

    Directory of Open Access Journals (Sweden)

    Nepomuceno Juan A

    2011-01-01

    Full Text Available Abstract Background The analysis of data generated by microarray technology is very useful for understanding how genetic information becomes functional gene products. Biclustering algorithms can determine a group of genes which are co-expressed under a set of experimental conditions. Recently, new biclustering methods based on metaheuristics have been proposed. Most of them use the Mean Squared Residue as merit function, but interesting and biologically relevant patterns, such as shifting and scaling patterns, may not be detected using this measure. However, it is important to discover this type of pattern, since genes can often exhibit similar behavior even though their expression levels vary over different ranges or magnitudes. Methods Scatter Search is an evolutionary technique based on the evolution of a small set of solutions which are chosen according to quality and diversity criteria. This paper presents a Scatter Search with the aim of finding biclusters in gene expression data. In this algorithm the proposed fitness function is based on the linear correlation among genes, to detect shifting and scaling patterns, and an improvement method is included in order to select only positively correlated genes. Results The proposed algorithm has been tested with three real data sets, the Yeast Cell Cycle dataset, the human B-cells lymphoma dataset and the Yeast Stress dataset, finding a remarkable number of biclusters with shifting and scaling patterns. In addition, the performance of the proposed method and fitness function is compared to that of CC, OPSM, ISA, BiMax, xMotifs and Samba using the Gene Ontology Database.
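The correlation-based fitness can be illustrated directly: if bicluster quality is scored as the mean pairwise Pearson correlation of the gene rows (a simplified reading of the paper's fitness, not its exact formula), shifted and scaled expression patterns score perfectly, which a Mean Squared Residue criterion would penalize:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bicluster_score(rows):
    """Mean pairwise Pearson correlation of gene rows in a candidate
    bicluster. Shifting (+c) or scaling (*c, c>0) a row keeps r = 1."""
    pairs = [(i, j) for i in range(len(rows)) for j in range(i + 1, len(rows))]
    return sum(pearson(rows[i], rows[j]) for i, j in pairs) / len(pairs)

g1 = [1.0, 2.0, 3.0, 4.0]
g2 = [11.0, 12.0, 13.0, 14.0]   # g1 shifted by 10
g3 = [2.0, 4.0, 6.0, 8.0]       # g1 scaled by 2
score = bicluster_score([g1, g2, g3])
print(score)  # ~1.0: shifting and scaling patterns are both detected
```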

  9. Reducing a Knowledge-Base Search Space When Data Are Missing

    Science.gov (United States)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
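The 1 + 2^n count and the effect of a pruning heuristic can be made concrete with a toy scenario generator (the independence-based joint probability and the threshold are illustrative, not the software's actual heuristics):

```python
from itertools import product

def scenario_space(n_rules):
    """Exhaustive scenario count with missing data: every subset of the
    n rules may or may not fire, plus the nominal scenario."""
    return 1 + 2 ** n_rules

def plausible_scenarios(rule_probs, threshold=0.05):
    """Heuristic pruning: keep only scenarios whose joint probability
    (under a naive independence assumption) exceeds the threshold."""
    kept = []
    for outcome in product([True, False], repeat=len(rule_probs)):
        p = 1.0
        for fires, prob in zip(outcome, rule_probs):
            p *= prob if fires else (1 - prob)
        if p > threshold:
            kept.append((outcome, round(p, 4)))
    return kept

print(scenario_space(16))                        # 65537, as cited above
probs = [0.9, 0.1, 0.2]                          # per-rule firing probabilities
print(len(plausible_scenarios(probs)))           # far fewer than 1 + 2**3
```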

  10. Structure and navigation for electronic publishing

    Science.gov (United States)

    Tillinghast, John; Beretta, Giordano B.

    1998-01-01

    The sudden explosion of the World Wide Web as a new publication medium has given a dramatic boost to the electronic publishing industry, which previously was a limited market centered around CD-ROMs and on-line databases. While the phenomenon has parallels to the advent of the tabloid press in the middle of the last century, the electronic nature of the medium brings with it the typical characteristics of 4th-wave media, namely acceleration in propagation speed and in the volume of information. Consequently, e-publications are even flatter than print media; Shakespeare's Romeo and Juliet shares the same computer screen with a home-made plagiarized copy of Deep Throat. The most touted tool for locating useful information on the World Wide Web is the search engine. However, due to the medium's flatness, sought information is drowned in a sea of useless information. A better solution is to build tools that allow authors to structure information so that it can easily be navigated. We experimented with the use of ontologies as a tool to formulate structures for information about a specific topic, so that related concepts are placed in adjacent locations and can easily be navigated using simple and ergonomic user models. We describe our effort in building a World Wide Web-based photo album that is shared among a small network of people.

  11. Development and Testing of a Literature Search Protocol for Evidence Based Nursing: An Applied Student Learning Experience

    Directory of Open Access Journals (Sweden)

    Andy Hickner

    2011-09-01

    Full Text Available Objective – The study aimed to develop a search protocol and evaluate reviewers' satisfaction with an evidence-based practice (EBP) review by embedding a library science student in the process. Methods – The student was embedded in one of four review teams overseen by a professional organization for oncology nurses (ONS). A literature search protocol was developed by the student following discussion and feedback from the review team. Organization staff provided process feedback. Reviewers from both case and control groups completed a questionnaire to assess satisfaction with the literature search phases of the review process. Results – A protocol was developed and refined for use by future review teams. The collaboration and the resulting search protocol were beneficial for both the student and the review team members. The questionnaire results did not yield statistically significant differences regarding satisfaction with the search process between case and control groups. Conclusions – Evidence-based reviewers' satisfaction with the literature searching process depends on multiple factors, and it was not clear that embedding an LIS specialist in the review team improved satisfaction with the process. Future research with more respondents may elucidate specific factors that may impact reviewers' assessment.

  12. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    Science.gov (United States)

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird's nest locations. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164
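
    As a rough illustration of the idea (not the authors' RC-SSCS implementation), the sketch below runs a standard cuckoo search with Lévy flights and adds a repeat-cycled disturbance step that perturbs the best nest with a shrinking radius; all parameter values are placeholders.

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n_nests=15, iters=300, pa=0.25, disturb_cycles=3):
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        # Levy-flight move of a random nest relative to the current best
        i = random.randrange(n_nests)
        step = 0.01 * levy_step()
        cand = [x + step * (x - b) for x, b in zip(nests[i], best)]
        if f(cand) < f(nests[i]):
            nests[i] = cand
        # Abandon a fraction pa of the worst nests
        nests.sort(key=f)
        for j in range(int(pa * n_nests)):
            nests[-(j + 1)] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=f)
        # Repeat-cycled disturbance near the best nest: a simple shrinking
        # random perturbation standing in for the paper's disturbance factor
        for c in range(disturb_cycles):
            radius = 0.1 / (c + 1)
            trial = [x + random.uniform(-radius, radius) for x in best]
            if f(trial) < f(best):
                best = trial
    return best

random.seed(1)
best = cuckoo_search(sphere)
print(sphere(best))
```

    The disturbance loop is what distinguishes this sketch from plain CS: it repeatedly refines the neighbourhood of the best nest, mirroring the "more careful and thorough search near the nest locations" described in the abstract.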

  13. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    Directory of Open Access Journals (Sweden)

    Jie-sheng Wang

    2015-01-01

    Full Text Available In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added into the algorithm by constructing a disturbance factor to make a more careful and thorough search near the bird's nest locations. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms: the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.

  14. Plagiarism in scientific publishing.

    Science.gov (United States)

    Masic, Izet

    2012-12-01

    Scientific publishing is the ultimate product of a scientist's work. The number of publications and their citations are measures of a scientist's success, while unpublished research is invisible to the scientific community and, as such, nonexistent. Researchers build on the work of their predecessors, and the extent to which one scientist's work is used as a source by other authors is the verification of its contribution to the growth of human knowledge. If an author has published an article in a scientific journal, they cannot publish the same article in any other journal with only a few minor adjustments, or without quoting the parts of the first article that are reused in the second. Copyright infringement occurs when the author of a new article, with or without mentioning the source, uses substantial portions of previously published articles, including tables and figures. Scientific institutions and universities should, in accordance with the principles of Good Scientific Practice (GSP) and Good Laboratory Practice (GLP), have a center for monitoring, securing, promoting and developing the quality of research. Establishing rules of good scientific practice, and ensuring compliance with them, are the obligations of every research institution, university and individual researcher, regardless of the area of science investigated. In this way, internal quality control ensures that a research institution such as a university assumes responsibility for creating an environment that promotes standards of excellence, intellectual honesty and legality. Although truth should be the aim of scientific research, it is not the guiding principle of all scientists. The best way to reach the truth in a study and to avoid methodological and ethical mistakes is to consistently apply scientific methods and ethical standards in research. Although variously defined, plagiarism is basically an attempt to deceive the reader about one's own scientific contribution. There is no general regulation of control of

  15. Open Access Publishing: What Authors Want

    Science.gov (United States)

    Nariani, Rajiv; Fernandez, Leila

    2012-01-01

    Campus-based open access author funds are being considered by many academic libraries as a way to support authors publishing in open access journals. Article processing fees for open access have been introduced recently by publishers and have not yet been widely accepted by authors. Few studies have surveyed authors on their reasons for publishing…

  16. Search-Tree Based Uplink Channel Aware Packet Scheduling for UTRAN LTE

    DEFF Research Database (Denmark)

    Calabrese, Francesco Davide; Michaelsen, Per-Henrik; Rosa, Claudio;

    2008-01-01

    UTRAN Long Term Evolution is currently under standardization within 3GPP with the aim of providing a spectral efficiency 2 to 4 times higher than its predecessor HSUPA/HSDPA. Single Carrier FDMA has been selected as the multiple access scheme for the uplink. This technology requires the subcarriers allocated to a single user to be adjacent. The consequence is a reduced allocation flexibility, which makes it challenging to design effective packet scheduling algorithms. This paper provides a search-tree based channel aware packet scheduling algorithm and evaluates its performance against throughput and noise rise distributions. It is shown that, despite measurement errors and high inter-cell interference variability, the proposed algorithm can increase the uplink capacity by more than 26%.
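
    The combinatorial difficulty comes from the adjacency constraint: each user must receive one contiguous block of subcarriers. A brute-force version of the search tree (illustrative only; the paper's algorithm prunes the tree and uses real channel-quality measurements) can be sketched as a depth-first search over block boundaries.

```python
def best_allocation(metric):
    """Exhaustive search tree over contiguous subcarrier blocks: each branch
    assigns the next block of adjacent subcarriers to one remaining user.
    metric[u][s] is a hypothetical channel-quality value for user u on
    subcarrier s."""
    n_users, n_sc = len(metric), len(metric[0])
    best = [None, float("-inf")]

    def dfs(start, remaining, alloc, score):
        if not remaining or start == n_sc:
            if score > best[1]:
                best[0], best[1] = dict(alloc), score
            return
        for u in remaining:
            for end in range(start + 1, n_sc + 1):
                alloc[u] = (start, end)
                dfs(end, remaining - {u}, alloc, score + sum(metric[u][start:end]))
                del alloc[u]

    dfs(0, frozenset(range(n_users)), {}, 0.0)
    return best

# Two users, four subcarriers: adjacency forces contiguous blocks
metric = [[3, 1, 0, 0],
          [0, 0, 2, 4]]
alloc, score = best_allocation(metric)
print(alloc, score)  # {0: (0, 2), 1: (2, 4)} 10.0
```

    Even this tiny instance shows why heuristics matter: the number of branches grows with both the user count and the subcarrier count, so a practical scheduler must prune the tree rather than enumerate it.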

  17. SEGMENTATION ALGORITHM BASED ON EDGE-SEARCHING FOR MULTI-LINEAR STRUCTURED LIGHT IMAGES

    Institute of Scientific and Technical Information of China (English)

    LIU Baohua; LI Bing; JIANG Zhuangde

    2006-01-01

    Aiming at the problem that disturbances on the edges of a light-stripe make the segmentation of light-stripe images difficult, a new segmentation algorithm based on edge-searching is presented. It first calculates every edge pixel's horizontal coordinate gradient to produce the corresponding grads-edge, then uses a designed length-variable 1D template to scan the light-stripes' grads-edges. The template is able to find disturbances of different widths by exploiting the distribution character of the edge disturbances. The found disturbances are finally eliminated. The algorithm not only segments the light-stripe images smoothly, but also eliminates most disturbances on the light-stripes' edges without damaging the light-stripe images' 3D information. A practical example of using the proposed algorithm is given in the end. Comparison shows that the algorithm clearly improves segmentation efficiency.
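
    The template-scanning idea can be sketched on a toy 1D edge profile (hypothetical thresholds; the paper operates on real stripe images): a gradient jump larger than a threshold, followed within the window width by a compensating jump of opposite sign, marks a disturbance that is interpolated away.

```python
def remove_disturbances(edge_x, max_width=4, grad_thresh=3):
    """Scan the horizontal-gradient 'grads-edge' of a stripe with a
    length-variable window: a jump larger than grad_thresh followed, within
    max_width positions, by a compensating jump of opposite sign marks a
    disturbance, which is replaced by linear interpolation."""
    grads = [edge_x[i + 1] - edge_x[i] for i in range(len(edge_x) - 1)]
    cleaned = list(edge_x)
    i = 0
    while i < len(grads):
        if abs(grads[i]) > grad_thresh:
            for w in range(1, max_width + 1):   # grow the window
                j = i + w
                if (j < len(grads) and abs(grads[j]) > grad_thresh
                        and grads[j] * grads[i] < 0):
                    # points i+1 .. j deviate from the stripe: interpolate
                    x0, x1 = cleaned[i], edge_x[j + 1]
                    for k in range(1, j - i + 1):
                        cleaned[i + k] = x0 + (x1 - x0) * k / (j - i + 1)
                    i = j
                    break
        i += 1
    return cleaned

# A stripe edge at x = 10 with a two-pixel disturbance jumping out to 25-26;
# the disturbance is smoothed back onto the stripe edge
print(remove_disturbances([10, 10, 10, 25, 26, 10, 10, 10]))
```

    Because only bounded gradient excursions are rewritten, a genuine stripe bend (a sustained shift with no compensating jump inside the window) passes through untouched, which is how the original 3D information is preserved.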

  18. A New Tabu-Search-Based Algorithm for Solvation of Proteins.

    Science.gov (United States)

    Grebner, Christoph; Kästner, Johannes; Thiel, Walter; Engels, Bernd

    2013-01-01

    The proper description of explicit water shells is of enormous importance for all-atom calculations. We propose a new approach for the setup of water shells around proteins based on Tabu-Search global optimization and compare its efficiency with standard molecular dynamics protocols using the chignolin protein as a test case. Both algorithms generate reasonable water shells, but the new approach provides solvated systems with an increased water-enzyme interaction and offers further advantages. It enables a stepwise buildup of the solvent shell, so that the more important inner part can be prepared more carefully. It also allows the generation of solute structures which can be biased either toward the (experimental) starting structure or the underlying theoretical model, i.e., the employed force field. PMID:26589073
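
    Setting the solvation-specific details aside, the Tabu-Search core the paper builds on can be sketched generically (an illustrative skeleton, not the authors' code): keep a short-term memory of visited states, always take the best non-tabu neighbor, and allow a tabu move only when it beats the best solution found so far (aspiration).

```python
from collections import deque

def tabu_search(start, neighbors, cost, iters=100, tabu_len=10):
    """Generic tabu-search skeleton: move to the best non-tabu neighbor,
    remember recently visited states in a short-term memory, and accept a
    tabu move only if it beats the best solution so far (aspiration)."""
    current = best = start
    tabu = deque([start], maxlen=tabu_len)
    for _ in range(iters):
        for c, n in sorted((cost(n), n) for n in neighbors(current)):
            if n not in tabu or c < cost(best):
                current = n
                break
        tabu.append(current)
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage: minimize (x - 7)^2 over the integers with unit moves
best = tabu_search(0, lambda x: [x - 1, x + 1], lambda x: (x - 7) ** 2, iters=50)
print(best)  # 7
```

    In the solvation setting the "state" would be a water-shell configuration and the neighborhood a set of water placements or displacements; the tabu memory is what lets the search escape local minima that trap plain greedy optimization.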

  19. Function Optimization and Parameter Performance Analysis Based on Gravitation Search Algorithm

    Directory of Open Access Journals (Sweden)

    Jie-Sheng Wang

    2015-12-01

    Full Text Available The gravitational search algorithm (GSA) is a kind of swarm intelligence optimization algorithm based on the law of gravitation. The parameter initialization of all swarm intelligence optimization algorithms has an important influence on their global optimization ability. From the basic principle of GSA, the convergence rate of the algorithm is determined by the gravitational constant and the acceleration of the particles. The optimization performance on six typical test functions is verified by simulation experiments. The simulation results show that the convergence speed of the GSA algorithm is relatively sensitive to the setting of the algorithm parameters, and that the GSA parameters can be adjusted flexibly to improve the algorithm's convergence velocity and the accuracy of the solutions.
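
    A minimal GSA sketch (illustrative parameter values g0 and alpha, which are exactly the kind of settings the paper studies) shows how the decaying gravitational constant G and the fitness-derived masses drive the particle accelerations.

```python
import math
import random

def gsa(f, dim=2, n=10, iters=200, g0=100.0, alpha=20.0):
    """Gravitational search algorithm sketch: agents attract one another with
    forces proportional to fitness-derived masses; the gravitational constant
    G decays over time, shifting the search from exploration to exploitation."""
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    for t in range(iters):
        G = g0 * math.exp(-alpha * t / iters)   # decaying gravitational constant
        fits = [f(x) for x in X]
        best, worst = min(fits), max(fits)
        m = [(fi - worst) / (best - worst + 1e-12) for fi in fits]
        total = sum(m) + 1e-12
        M = [mi / total for mi in m]            # normalized masses
        for i in range(n):
            acc = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                dist = math.dist(X[i], X[j]) + 1e-12
                for d in range(dim):
                    acc[d] += random.random() * G * M[j] * (X[j][d] - X[i][d]) / dist
            for d in range(dim):
                V[i][d] = random.random() * V[i][d] + acc[d]
                X[i][d] += V[i][d]
    return min(X, key=f)

random.seed(0)
sol = gsa(lambda x: sum(v * v for v in x))
print(sum(v * v for v in sol))
```

    The sensitivity the paper reports is visible here: g0 scales the initial exploration forces, while alpha controls how quickly G, and with it the particle accelerations, decays toward pure exploitation.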

  20. An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering

    CERN Document Server

    Aickelin, Uwe; Li, Jingpeng

    2007-01-01

    This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is ...
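
    The probabilistic-model idea can be sketched in miniature (illustrative only: a toy reward and uniform initialization stand in for the paper's nurse-rule pairs and ant-miner local search): sample schedules from a per-nurse rule distribution, then shift probability mass toward the rules used by the elite schedules.

```python
import random

def eda_rostering(reward, n_nurses=4, n_rules=3, pop=20, gens=30, lr=0.3):
    """Estimation-of-distribution sketch: keep a probability table over
    nurse-rule pairs, sample schedules from it, and shift probability mass
    toward the rules used by the elite schedules of each generation."""
    # P[n][r]: probability that nurse n is assigned heuristic rule r
    P = [[1.0 / n_rules] * n_rules for _ in range(n_nurses)]
    best, best_reward = None, float("-inf")
    for _ in range(gens):
        population = [
            [random.choices(range(n_rules), weights=P[n])[0] for n in range(n_nurses)]
            for _ in range(pop)
        ]
        population.sort(key=reward, reverse=True)
        elite = population[: pop // 4]
        for n in range(n_nurses):
            counts = [0] * n_rules
            for schedule in elite:
                counts[schedule[n]] += 1
            for r in range(n_rules):
                P[n][r] = (1 - lr) * P[n][r] + lr * counts[r] / len(elite)
        if reward(population[0]) > best_reward:
            best, best_reward = population[0], reward(population[0])
    return best

# Toy reward: rule 2 happens to be the best choice for every nurse
random.seed(0)
best = eda_rostering(lambda s: sum(1 for r in s if r == 2))
print(best)
```

    In the paper's memetic version, an ant-miner local search would additionally reinforce high-reward nurse-rule pairs between generations; here the distribution update alone is enough to make the learning explicit, since the table P directly exposes the building blocks being learned.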