WorldWideScience

Sample records for web accessible predictions

  1. Predicting Web Page Accesses, using Users’ Profile and Markov Models

    OpenAIRE

    Zeynab Fazelipour

    2016-01-01

    Nowadays the Web is an important source for information retrieval; the sources on the WWW are constantly increasing, and the users accessing the Web come from different backgrounds. Consequently, finding information that satisfies a user's personal needs is not easy. Exploration of users' behavior on the Web, as a method for extracting the knowledge lying behind how users interact with the Web, is considered an important tool in the field of web mining. By identifying user's beha...

  2. DERIVING USER ACCESS PATTERNS AND MINING WEB COMMUNITY WITH WEB-LOG DATA FOR PREDICTING USER SESSIONS WITH PAJEK

    Directory of Open Access Journals (Sweden)

    S. Balaji

    2012-10-01

    Web logs are a young and dynamic media type. Due to the intrinsic relationships among Web objects and the lack of a uniform schema for web documents, Web community mining has become a significant area of Web data management and analysis. Research on Web communities spans a number of research domains. In this paper an ontological model is presented, along with some recent studies on this topic, covering the discovery of relevant Web pages based on linkage information and of user access patterns through analysis of Web log files. A simulation has been created with data crawled from an academic website. The simulation is implemented in a JAVA and ORACLE environment. Results show that prediction of user sessions can provide plenty of vital information for Business Intelligence. Search Engine Optimization could also use these potential results, which are discussed in the paper in detail.
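
    The sessionization step that underlies this kind of web-log mining can be sketched briefly. The snippet below is a minimal illustration, not the paper's JAVA/ORACLE implementation; the 30-minute inactivity timeout and the (visitor, timestamp, url) record layout are conventional assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # conventional inactivity threshold

def sessionize(hits):
    """Split (visitor, timestamp, url) hits into per-visitor sessions."""
    by_visitor = defaultdict(list)
    for visitor, ts, url in sorted(hits, key=lambda h: (h[0], h[1])):
        by_visitor[visitor].append((ts, url))

    sessions = []
    for visitor, entries in by_visitor.items():
        current = [entries[0][1]]
        for (prev_ts, _), (ts, url) in zip(entries, entries[1:]):
            if ts - prev_ts > SESSION_TIMEOUT:  # gap too long: new session
                sessions.append((visitor, current))
                current = []
            current.append(url)
        sessions.append((visitor, current))
    return sessions

if __name__ == "__main__":
    t0 = datetime(2012, 10, 1, 9, 0)
    demo = [
        ("10.0.0.1", t0, "/index.html"),
        ("10.0.0.1", t0 + timedelta(minutes=5), "/courses.html"),
        ("10.0.0.1", t0 + timedelta(hours=2), "/index.html"),  # starts a new session
    ]
    for visitor, pages in sessionize(demo):
        print(visitor, pages)
```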

  3. Web Accessibility and Guidelines

    Science.gov (United States)

    Harper, Simon; Yesilada, Yeliz

    Access to, and movement around, complex online environments, of which the World Wide Web (Web) is the most popular example, has long been considered an important and major issue in the Web design and usability field. The commonly used slang phrase ‘surfing the Web’ implies rapid and free access, pointing to its importance among designers and users alike. It has also been long established that this potentially complex and difficult access is further complicated, and becomes neither rapid nor free, if the user is disabled. There are millions of people who have disabilities that affect their use of the Web. Web accessibility aims to help these people to perceive, understand, navigate, and interact with, as well as contribute to, the Web, and thereby the society in general. This accessibility is, in part, facilitated by the Web Content Accessibility Guidelines (WCAG) currently moving from version one to two. These guidelines are intended to encourage designers to make sure their sites conform to specifications, and in that conformance enable the assistive technologies of disabled users to better interact with the page content. In this way, it was hoped that accessibility could be supported. While this is in part true, guidelines do not solve all problems and the new WCAG version two guidelines are surrounded by controversy and intrigue. This chapter aims to establish the published literature related to Web accessibility and Web accessibility guidelines, and discuss limitations of the current guidelines and future directions.

  4. Pred-hERG: A Novel web-Accessible Computational Tool for Predicting Cardiac Toxicity.

    Science.gov (United States)

    Braga, Rodolpho C; Alves, Vinicius M; Silva, Meryck F B; Muratov, Eugene; Fourches, Denis; Lião, Luciano M; Tropsha, Alexander; Andrade, Carolina H

    2015-10-01

    The blockage of hERG K(+) channels is closely associated with lethal cardiac arrhythmia. The notorious ligand promiscuity of this channel earmarked hERG as one of the most important antitargets to be considered in the early stages of the drug development process. Herein we report on the development of an innovative and freely accessible web server for early identification of putative hERG blockers and non-blockers in chemical libraries. We have collected the largest publicly available curated hERG dataset of 5,984 compounds. We succeeded in developing robust and externally predictive binary (CCR≈0.8) and multiclass models (accuracy≈0.7). These models are available as a web service freely available to the public at http://labmol.farmacia.ufg.br/predherg/. The following three outcomes are available to users: prediction by the binary model, prediction by the multi-class model, and probability maps of atomic contributions. Pred-hERG will be continuously updated and upgraded as new information becomes available. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. A Dynamic Web Page Prediction Model Based on Access Patterns to Offer Better User Latency

    CERN Document Server

    Mukhopadhyay, Debajyoti; Saha, Dwaipayan; Kim, Young-Chon

    2011-01-01

    The growth of the World Wide Web has emphasized the need for improvement in user latency. One technique used for improving user latency is caching; another is Web prefetching. Approaches that rely solely on caching offer limited performance improvement because it is difficult for caching to handle the large number of increasingly diverse files. Studies have been conducted on prefetching models based on decision trees, Markov chains, and path analysis. However, the increased use of dynamic pages and frequent changes in site structure and user access patterns have limited the efficacy of these static techniques. In this paper, we propose a methodology to cluster related pages into different categories based on access patterns. Additionally, we use page ranking to build up our prediction model at the initial stages, when users haven't yet started sending requests. This way we have tried to overcome the problems of maintaining huge databases which is needed in case of log based techn...

  6. World Wide Access: Accessible Web Design.

    Science.gov (United States)

    Washington Univ., Seattle.

    This brief paper considers the application of "universal design" principles to Web page design in order to increase accessibility for people with disabilities. Suggestions are based on the World Wide Web Consortium's accessibility initiative, which has proposed guidelines for all Web authors and federal government standards. Seven guidelines for…

  7. Providing access to risk prediction tools via the HL7 XML-formatted risk web service.

    Science.gov (United States)

    Chipman, Jonathan; Drohan, Brian; Blackford, Amanda; Parmigiani, Giovanni; Hughes, Kevin; Bosinoff, Phil

    2013-07-01

    Cancer risk prediction tools provide valuable information to clinicians but remain computationally challenging. Many clinics find that CaGene or HughesRiskApps fit their needs for easy- and ready-to-use software to obtain cancer risks; however, these resources may not fit all clinics' needs. The HughesRiskApps Group and BayesMendel Lab therefore developed a web service, called "Risk Service", which may be integrated into any client software to quickly obtain standardized and up-to-date risk predictions for BayesMendel tools (BRCAPRO, MMRpro, PancPRO, and MelaPRO), the Tyrer-Cuzick IBIS Breast Cancer Risk Evaluation Tool, and the Colorectal Cancer Risk Assessment Tool. Software clients that can convert their local structured data into the HL7 XML-formatted family and clinical patient history (Pedigree model) may integrate with the Risk Service. The Risk Service uses Apache Tomcat and Apache Axis2 technologies to provide an all Java web service. The software client sends HL7 XML information containing anonymized family and clinical history to a Dana-Farber Cancer Institute (DFCI) server, where it is parsed, interpreted, and processed by multiple risk tools. The Risk Service then formats the results into an HL7 style message and returns the risk predictions to the originating software client. Upon consent, users may allow DFCI to maintain the data for future research. The Risk Service implementation is exemplified through HughesRiskApps. The Risk Service broadens the availability of valuable, up-to-date cancer risk tools and allows clinics and researchers to integrate risk prediction tools into their own software interface designed for their needs. Each software package can collect risk data using its own interface, and display the results using its own interface, while using a central, up-to-date risk calculator. This allows users to choose from multiple interfaces while always getting the latest risk calculations. Consenting users contribute their data for future
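
    The round trip described above — serialize local patient data to HL7 XML, POST it to the service, and read back an HL7-style response — might look roughly like the following sketch. The endpoint URL and payload handling here are illustrative placeholders, not the actual Risk Service address or Pedigree schema.

```python
import urllib.request

# Placeholder endpoint; the real Risk Service expects an HL7 XML Pedigree
# message describing anonymized family and clinical history.
RISK_SERVICE_URL = "https://example.org/risk-service"

def request_risks(pedigree_xml: str) -> str:
    """POST an HL7 XML pedigree and return the HL7-style risk response."""
    req = urllib.request.Request(
        RISK_SERVICE_URL,
        data=pedigree_xml.encode("utf-8"),
        headers={"Content-Type": "application/xml"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8")
```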

  8. From Web accessibility to Web adaptability.

    Science.gov (United States)

    Kelly, Brian; Nevile, Liddy; Sloan, David; Fanou, Sotiris; Ellison, Ruth; Herrod, Lisa

    2009-07-01

    This article asserts that current approaches to enhance the accessibility of Web resources fail to provide a solid foundation for the development of a robust and future-proofed framework. In particular, they fail to take advantage of new technologies and technological practices. The article introduces a framework for Web adaptability, which encourages the development of Web-based services that can be resilient to the diversity of uses of such services, the target audience, available resources, technical innovations, organisational policies and relevant definitions of 'accessibility'. The article refers to a series of author-focussed approaches to accessibility through which the authors and others have struggled to find ways to promote accessibility for people with disabilities. These approaches depend upon the resource author's determination of the anticipated users' needs and their provision. Through approaches labelled as 1.0, 2.0 and 3.0, the authors have widened their focus to account for contexts and individual differences in target audiences. Now, the authors want to recognise the role of users in determining their engagement with resources (including services). To distinguish this new approach, the term 'adaptability' has been used to replace 'accessibility'; new definitions of accessibility have been adopted, and the authors have reviewed their previous work to clarify how it is relevant to the new approach. Accessibility 1.0 is here characterised as a technical approach in which authors are told how to construct resources for a broadly defined audience. This is known as universal design. Accessibility 2.0 was introduced to point to the need to account for the context in which resources would be used, to help overcome inadequacies identified in the purely technical approach. Accessibility 3.0 moved the focus on users from a homogenised universal definition to recognition of the idiosyncratic needs and preferences of individuals and to cater for them. All of

  9. Web Accessibility in Romania: The Conformance of Municipal Web Sites to Web Content Accessibility Guidelines

    Directory of Open Access Journals (Sweden)

    Costin PRIBEANU

    2012-01-01

    The accessibility of public administration web sites is a key quality attribute for the successful implementation of the Information Society. The purpose of this paper is to present a second review of municipal web sites in Romania, based on automated accessibility checking. A sample of 60 web sites was evaluated against WCAG 2.0 recommendations. The analysis of results reveals a relatively low web accessibility of municipal web sites and highlights several aspects. Firstly, slight progress in web accessibility was noticed relative to the sample evaluated in 2010. Secondly, the number of specific accessibility errors varies across the web sites, and accessibility is not preserved over time. Thirdly, these variations suggest that an accessibility check before launching a new release of a web page is not a common practice.

  10. Web Accessibility, Libraries, and the Law

    Directory of Open Access Journals (Sweden)

    Camilla Fulton

    2011-03-01

    With an abundance of library resources being served on the web, researchers are finding that disabled people often do not have the same level of access to materials as their nondisabled peers. This paper discusses web accessibility in the context of the United States' federal laws most referenced in web accessibility lawsuits. Additionally, it reveals which states have statutes that mirror federal web accessibility guidelines and to what extent. Interestingly, fewer than half of the states have adopted statutes addressing web accessibility, and fewer than half of these reference Section 508 of the Rehabilitation Act or the Web Content Accessibility Guidelines (WCAG 1.0). Regardless of the sparse legislation surrounding web accessibility, librarians should consult the appropriate web accessibility resources to ensure that their specialized content reaches all.

  11. 2B-Alert Web: An Open-Access Tool for Predicting the Effects of Sleep/Wake Schedules and Caffeine Consumption on Neurobehavioral Performance.

    Science.gov (United States)

    Reifman, Jaques; Kumar, Kamal; Wesensten, Nancy J; Tountas, Nikolaos A; Balkin, Thomas J; Ramakrishnan, Sridhar

    2016-12-01

    Computational tools that predict the effects of daily sleep/wake amounts on neurobehavioral performance are critical components of fatigue management systems, allowing for the identification of periods during which individuals are at increased risk for performance errors. However, none of the existing computational tools is publicly available, and the commercially available tools do not account for the beneficial effects of caffeine on performance, limiting their practical utility. Here, we introduce 2B-Alert Web, an open-access tool for predicting neurobehavioral performance, which accounts for the effects of sleep/wake schedules, time of day, and caffeine consumption, while incorporating the latest scientific findings in sleep restriction, sleep extension, and recovery sleep. We combined our validated Unified Model of Performance and our validated caffeine model to form a single, integrated modeling framework instantiated as a Web-enabled tool. 2B-Alert Web allows users to input daily sleep/wake schedules and caffeine consumption (dosage and time) to obtain group-average predictions of neurobehavioral performance based on psychomotor vigilance tasks. 2B-Alert Web is accessible at: https://2b-alert-web.bhsai.org. The 2B-Alert Web tool allows users to obtain predictions for mean response time, mean reciprocal response time, and number of lapses. The graphing tool allows for simultaneous display of up to seven different sleep/wake and caffeine schedules. The schedules and corresponding predicted outputs can be saved as a Microsoft Excel file; the corresponding plots can be saved as an image file. The schedules and predictions are erased when the user logs off, thereby maintaining privacy and confidentiality. The publicly accessible 2B-Alert Web tool is available for operators, schedulers, and neurobehavioral scientists as well as the general public to determine the impact of any given sleep/wake schedule, caffeine consumption, and time of day on performance of a

  12. Web Accessibility - A timely recognized challenge

    CERN Document Server

    Qadri, Jameel A

    2011-01-01

    Web Accessibility for disabled people has posed a challenge to civilized societies that claim to uphold the principles of equal opportunity and nondiscrimination. Certain concrete measures have been taken to narrow the digital divide between normal and disabled users of Internet technology. The efforts have resulted in the enactment of legislation and laws and in mass awareness about the discriminatory nature of the accessibility issue, and they have also led to the development of commensurate technological tools to develop and test Web accessibility. The World Wide Web Consortium's (W3C) Web Accessibility Initiative (WAI) has framed a comprehensive document comprising a set of guidelines to make Web sites accessible to users with disabilities. This paper is about the issues and aspects surrounding Web Accessibility. The details and scope are kept limited to comply with the aim of the paper, which is to create awareness and to provide a basis for in-depth investigation.

  13. Web accessibility standards and disability: developing critical perspectives on accessibility.

    Science.gov (United States)

    Lewthwaite, Sarah

    2014-01-01

    Currently, dominant web accessibility standards do not respect disability as a complex and culturally contingent interaction, recognizing that disability is a variable, contrary and political power relation rather than a biological limit. Against this background there is clear scope to broaden the ways in which accessibility standards are understood, developed and applied. Commentary: The values that shape and are shaped by legislation promote universal, statistical and automated approaches to web accessibility. This results in web accessibility standards conveying powerful norms fixing the relationship between technology and disability, irrespective of geographical, social, technological or cultural diversity. Web accessibility standards are designed to enact universal principles; however, they express partial and biopolitical understandings of the relation between disability and technology. These values can be limiting, and potentially counter-productive, for example, for the majority of disabled people in the "Global South", where different contexts constitute different disabilities and different experiences of web access. To create more robust, accessible outcomes for disabled people, research and standards practice should diversify to embrace more interactional accounts of disability in different settings. Implications for Rehabilitation: Creating accessible experiences is an essential aspect of rehabilitation. Web standards promote universal accessibility as a property of an online resource or service. This undervalues the importance of the user's intentions, expertise, their context, and the complex social and cultural nature of disability. Standardized, universal approaches to web accessibility may lead to counterproductive outcomes for disabled people whose impairments and circumstances do not meet Western disability and accessibility norms. Accessible experiences for rehabilitation can be enhanced through an additional focus on holistic approaches to

  14. Web accessibility of public universities in Andalusia

    Directory of Open Access Journals (Sweden)

    Luis Alejandro Casasola Balsells

    2017-06-01

    This paper describes an analysis conducted in 2015 to evaluate the accessibility of content on Andalusian public university websites. In order to determine whether these websites are accessible, an assessment was carried out to check conformance with the latest Web Content Accessibility Guidelines (WCAG 2.0) established by the World Wide Web Consortium (W3C). For this purpose, we designed a methodology for analysis that combines the use of three automatic tools (eXaminator, the MINHAP web accessibility tool, and TAW) with a manual analysis to provide greater reliability and validity of the results. Although the results are acceptable overall, a detailed analysis shows that more work is still needed to achieve full accessibility for the entire university community. In this respect, we suggest several corrections to common accessibility errors to facilitate the design of university web portals.
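
    Automated checkers of the kind used in this study flag machine-detectable WCAG 2.0 failures. As a minimal sketch of the idea — not a reimplementation of eXaminator, the MINHAP tool, or TAW — the following flags two classic violations: images without alternative text and form inputs without an associated label.

```python
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

def basic_wcag_checks(html: str) -> list[str]:
    """Flag two machine-detectable WCAG 2.0 failures (far from a full audit)."""
    soup = BeautifulSoup(html, "html.parser")
    problems = []

    # WCAG 1.1.1: non-text content needs a text alternative.
    for img in soup.find_all("img"):
        if not img.get("alt"):
            problems.append(f"img without alt text: {img.get('src', '?')}")

    # WCAG 1.3.1 / 3.3.2: inputs should be programmatically labelled.
    label_targets = {lab.get("for") for lab in soup.find_all("label")}
    for inp in soup.find_all("input"):
        if inp.get("type") in ("hidden", "submit", "button"):
            continue
        if inp.get("id") not in label_targets and not inp.get("aria-label"):
            problems.append(f"unlabelled input: {inp.get('name', '?')}")

    return problems

print(basic_wcag_checks('<img src="logo.png"><input type="text" name="q">'))
```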

  15. Understanding and Supporting Web Developers: Design and Evaluation of a Web Accessibility Information Resource (WebAIR).

    Science.gov (United States)

    Swallow, David; Petrie, Helen; Power, Christopher

    2016-01-01

    This paper describes the design and evaluation of a Web Accessibility Information Resource (WebAIR) for supporting web developers to create and evaluate accessible websites. WebAIR was designed with web developers in mind, recognising their current working practices and acknowledging their existing understanding of web accessibility. We conducted an evaluation with 32 professional web developers in which they used either WebAIR or an existing accessibility information resource, the Web Content Accessibility Guidelines, to identify accessibility problems. The findings indicate that several design decisions made in relation to the language, organisation, and volume of WebAIR were effective in supporting web developers to undertake web accessibility evaluations.

  16. Web accessibility and open source software.

    Science.gov (United States)

    Obrenović, Zeljko

    2009-07-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long-term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities, for which existing browsers tend to provide only partial solutions. Although functionality for Web accessibility can be found as open source and free software components, their reuse and integration is complex because they were developed in diverse implementation environments, following standards and conventions incompatible with the Web. To address these problems, we have started several activities that aim at exploiting the potential of open-source software for Web accessibility. The first of these activities is the development of Adaptable Multi-Interface COmmunicator (AMICO):WEB, an infrastructure that facilitates efficient reuse and integration of open source software components into the Web environment. The main contribution of AMICO:WEB is in enabling the syntactic and semantic interoperability between Web extension mechanisms and a variety of integration mechanisms used by open source and free software components. Its design is based on our experiences in solving practical problems where we have used open source components to improve accessibility of rich media Web applications. The second of our activities involves improving education, where we have used our platform to teach students how to build advanced accessibility solutions from diverse open-source software. We are also partially involved in the recently started Eclipse projects called Accessibility Tools Framework (ACTF), the aim of which is development of extensible infrastructure, upon which developers can build a variety of utilities that help to evaluate and enhance the accessibility of applications and content for people with disabilities. In this article we briefly report on these activities.

  17. Web browser accessibility using open source software

    NARCIS (Netherlands)

    Obrenovic, Z.; Ossenbruggen, J.R. van

    2007-01-01

    A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities,

  18. Evaluating Web accessibility at different processing phases

    Science.gov (United States)

    Fernandes, N.; Lopes, R.; Carriço, L.

    2012-09-01

    Modern Web sites use several techniques (e.g. DOM manipulation) that allow for the injection of new content into their Web pages (e.g. AJAX), as well as manipulation of the HTML DOM tree. As a consequence, the Web pages presented to users (i.e. after browser processing) differ from the original structure and content transmitted through HTTP communication (i.e. before browser processing). This poses a series of challenges for Web accessibility evaluation, especially for automated evaluation software. This article details an experimental study designed to understand the differences posed by accessibility evaluation after Web browser processing. We implemented a Javascript-based evaluator, QualWeb, that can perform WCAG 2.0 based accessibility evaluations in the two phases of browser processing. Our study shows that, in fact, there are considerable differences between the HTML DOM trees in the two phases, which results in distinct evaluation results. We discuss the impact of these results in light of the potential problems that these differences can pose to designers and developers who use accessibility evaluators that function before browser processing.
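
    The two snapshots the study contrasts can be reproduced in miniature: fetch the markup as delivered over HTTP, then let a real browser execute scripts and serialize the resulting DOM. The sketch below uses requests and Selenium as assumed stand-ins; QualWeb itself is JavaScript-based and is not shown here.

```python
import requests
from selenium import webdriver  # assumes a WebDriver-compatible browser is installed

url = "https://example.org/"

# Phase 1: markup as transmitted over HTTP (before browser processing).
raw_html = requests.get(url, timeout=30).text

# Phase 2: DOM after the browser has executed scripts (after browser processing).
driver = webdriver.Chrome()
driver.get(url)
rendered_html = driver.page_source
driver.quit()

# Injected content (AJAX, DOM manipulation) makes the two trees diverge,
# which is why evaluating only the raw HTML can miss accessibility problems.
print(len(raw_html), len(rendered_html))
```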

  19. Web services interface to EPICS channel access

    Institute of Scientific and Technical Information of China (English)

    DUAN Lei; SHEN Liren

    2008-01-01

    Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which helps us complete channel data acquisition, modification and monitoring functions. This software interface layer, being cross-platform and cross-language, has good interoperability and reusability.
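
    The pattern the paper describes — putting a language-neutral web layer in front of Channel Access — can be sketched in miniature. This toy HTTP handler is only an illustration (the paper's layer is a SOAP Web service, not this); it assumes the pyepics package and a reachable IOC.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

from epics import caget  # assumes pyepics is installed and an IOC is reachable

class PVHandler(BaseHTTPRequestHandler):
    """GET /?pv=SOME:PV:NAME returns the current channel value as text."""

    def do_GET(self):
        pv = parse_qs(urlparse(self.path).query).get("pv", [None])[0]
        value = caget(pv) if pv else None  # Channel Access read
        self.send_response(200 if value is not None else 404)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(str(value).encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PVHandler).serve_forever()
```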

  20. Village Green Project: Web-accessible Database

    Science.gov (United States)

    The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...

  1. Web Design for Accessibility: Policies and Practice.

    Science.gov (United States)

    Foley, Alan; Regan, Bob

    2002-01-01

    Discusses Web design for people with disabilities and outlines a process-based approach to accessibility policy implementation. Topics include legal mandates; determining which standards apply to a given organization; validation, or evaluation of the site; site architecture; navigation; and organizational needs. (Author/LRW)

  2. Web-accessible Chemical Compound Information

    OpenAIRE

    Roth, Dana L

    2008-01-01

    Web-accessible chemical compound information resources are widely available. In addition to fee-based resources, such as SciFinder Scholar and Beilstein, there is a wide variety of freely accessible resources such as ChemSpider and PubChem. The author provides a general description of various fee-based and free chemical compound resources. The free resources generally offer an acceptable alternative to fee-based resources for quick retrieval. It is assumed that readers will be familiar with ...

  4. Access to Space Interactive Design Web Site

    Science.gov (United States)

    Leon, John; Cutlip, William; Hametz, Mark

    2000-01-01

    The Access To Space (ATS) Group at NASA's Goddard Space Flight Center (GSFC) supports the science and technology community at GSFC by facilitating frequent and affordable opportunities for access to space. Through partnerships established with access mode suppliers, the ATS Group has developed an interactive Mission Design web site. The ATS web site provides both the information and the tools necessary to assist mission planners in selecting and planning their ride to space. This includes the evaluation of single payloads vs. ride-sharing opportunities to reduce the cost of access to space. Features of this site include the following: (1) Mission Database. Our mission database contains a listing of missions ranging from proposed to manifested. Missions can be entered by our user community through data input tools. Data is then accessed by users through various search engines: orbit parameters, ride-share opportunities, spacecraft parameters, other mission notes, launch vehicle, and contact information. (2) Launch Vehicle Toolboxes. The launch vehicle toolboxes provide the user with a full range of information on vehicle classes and individual configurations. Topics include: general information, environments, performance, payload interface, available volume, and launch sites.

  5. Web Accessibility Theory and Practice: An Introduction for University Faculty

    Science.gov (United States)

    Bradbard, David A.; Peters, Cara

    2010-01-01

    Web accessibility is the practice of making Web sites accessible to all, particularly those with disabilities. As the Internet becomes a central part of post-secondary instruction, it is imperative that instructional Web sites be designed for accessibility to meet the needs of disabled students. The purpose of this article is to introduce Web…

  6. Investigating the appropriateness and relevance of mobile web accessibility guidelines

    OpenAIRE

    Clegg-Vinell, R; Bailey, C.; Gkatzidou, V

    2014-01-01

    The Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C) develops and maintains guidelines for making the web more accessible to people with disabilities. WCAG 2.0 and MWBP 1.0 are internationally regarded as the industry-standard guidelines for web accessibility. Mobile testing sessions conducted by AbilityNet document issues raised by users in a report format, relating issues to guidelines wherever possible. This paper presents the results of a preliminary investigati...

  7. "Fine Tuning" image accessibility for museum Web sites

    OpenAIRE

    Leporini, Barbara; Norscia, Ivan

    2008-01-01

    Accessibility and usability guidelines are available to design web sites accessible to blind users. However, the actual usability of accessible web pages varies depending on the type of information the user is dealing with. Museum web sites, including specimens and hall descriptions, need specific requirements to allow vision-impaired users, who navigate using a screen-reader, to access pieces of information that are mainly based on visual perception. Here we address a methodology to be appli...

  8. Epitopemap: a web application for integrated whole proteome epitope prediction.

    Science.gov (United States)

    Farrell, Damien; Gordon, Stephen V

    2015-07-14

    Predictions of MHC binding affinity are commonly used in immunoinformatics for T cell epitope prediction. There are multiple available methods, some of which provide web access. However, there is currently no convenient way to access the results from multiple methods at the same time or to execute predictions for an entire proteome at once. We designed a web application that allows integration of multiple epitope prediction methods for any number of proteins in a genome. The tool is a front-end for various freely available methods. Features include visualisation of results from multiple predictors within proteins in one plot, genome-wide analysis and estimates of epitope conservation. We present a self-contained web application, Epitopemap, for calculating and viewing epitope predictions with multiple methods. The tool is easy to use and will assist in computational screening of viral or bacterial genomes.
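
    Whole-proteome epitope prediction ultimately reduces to scoring every candidate peptide window in every protein; the predictors themselves are external tools the application wraps. A minimal sketch of that enumeration step, with a toy two-protein "proteome":

```python
def peptide_windows(sequence: str, length: int = 9):
    """Yield every contiguous peptide of the given length (typical MHC-I: 8-11)."""
    for i in range(len(sequence) - length + 1):
        yield i, sequence[i:i + length]

# Each window would then be scored by one or more external binding
# predictors, with results keyed by (protein, position).
proteome = {"protA": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "protB": "MSLLTEVETPIRNEWGCR"}
for name, seq in proteome.items():
    for pos, pep in peptide_windows(seq):
        pass  # submit `pep` to a predictor, collect scores for (name, pos)
```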

  9. Global Web Accessibility Analysis of National Government Portals and Ministry Web Sites

    DEFF Research Database (Denmark)

    Goodwin, Morten; Susar, Deniz; Nietzio, Annika

    2011-01-01

    Equal access to public information and services for all is an essential part of the United Nations (UN) Declaration of Human Rights. Today, the Web plays an important role in providing information and services to citizens. Unfortunately, many government Web sites are poorly designed and have accessibility barriers that prevent people with disabilities from using them. This article combines current Web accessibility benchmarking methodologies with a sound strategy for comparing Web accessibility among countries and continents. Furthermore, the article presents the first global analysis of the Web accessibility of 192 United Nations Member States made publicly available. The article also identifies common properties of Member States that have accessible and inaccessible Web sites and shows that implementing antidisability discrimination laws is highly beneficial for the accessibility of Web sites, while signing the UN Rights and Dignity of Persons with Disabilities has had no such effect yet. The article demonstrates that, despite the commonly held assumption to the contrary, mature, high-quality Web sites are more accessible than lower quality ones. Moreover, Web accessibility conformance claims by Web...

  10. User Experience (UX) and the Web Accessibility Standards

    Directory of Open Access Journals (Sweden)

    Osama Sohaib

    2011-05-01

    The success of web-based applications depends on how well they are perceived by end-users. Various web accessibility guidelines have been promoted to help improve access to, and understanding of, the content of web pages. Designing for the total User Experience (UX) is an evolving discipline of the World Wide Web mainstream that focuses on how end users will work to achieve their target goals. To satisfy end-users, web-based applications must fulfill some common needs like clarity, accessibility and availability. The aim of this study is to evaluate how the User Experience characteristics of web-based applications relate to the web accessibility guidelines WCAG 2.0, ISO 9241-151 and Section 508.

  12. A Framework for Transparently Accessing Deep Web Sources

    Science.gov (United States)

    Dragut, Eduard Constantin

    2010-01-01

    An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…

  14. Current state of web accessibility of Malaysian ministries websites

    Science.gov (United States)

    Ahmi, Aidi; Mohamad, Rosli

    2016-08-01

    Despite the fact that Malaysian public institutions have progressed considerably in website and portal usage, web accessibility has been reported as one of the issues deserving special attention. Consistent with the government's moves to promote effective use of the web and portals, it is essential for government institutions to ensure compliance with established standards and guidelines on web accessibility. This paper evaluates the accessibility of 25 Malaysian ministry websites using automated tools, i.e. WAVE and AChecker. Both tools are designed to objectively evaluate web accessibility in conformance with the Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the United States Rehabilitation Act 1973 (Section 508). The findings reported somewhat low compliance with web accessibility standards amongst the ministries. Further enhancement is needed for input elements such as labels and checkboxes to be associated with text, as well as for image-related elements. These findings could be used as a mechanism for webmasters to locate and rectify errors pertaining to web accessibility and to ensure equal access to web information and services for all citizens.

  15. AN AUTOMATIC AND METHODOLOGICAL APPROACH FOR ACCESSIBLE WEB APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Lourdes Moreno

    2007-06-01

    Semantic Web approaches try to achieve interoperability and communication among technologies and organizations. Nevertheless, it is sometimes forgotten that the Web must be useful for every user, so it is necessary to include tools and techniques to make the Semantic Web accessible. Accessibility and usability are two concepts widely used, and usually joined, in web application development; however, their meanings are different. Usability refers to ease of use, whereas accessibility refers to the possibility of access. For the former, there are many well-proven approaches in real cases. However, the accessibility field requires deeper research to make access feasible for disabled people, and also for novice non-disabled people, given the cost of automating and maintaining accessible applications. In this paper, we propose an architecture to achieve accessibility in web environments that deals with the WAI accessibility standard and the Universal Design paradigm. This architecture aims to control accessibility throughout the web application development life-cycle, following a methodology that starts from a semantic conceptual model and relies on description languages and controlled vocabularies.

  16. MISA-web: a web server for microsatellite prediction.

    Science.gov (United States)

    Beier, Sebastian; Thiel, Thomas; Münch, Thomas; Scholz, Uwe; Mascher, Martin

    2017-08-15

    Microsatellites are a widely-used marker system in plant genetics and forensics. The development of reliable microsatellite markers from resequencing data is challenging. We extended MISA, a computational tool assisting the development of microsatellite markers, and reimplemented it as a web-based application. We improved compound microsatellite detection and added the possibility to display and export MISA results in GFF3 format for downstream analysis. MISA-web can be accessed at http://misaweb.ipk-gatersleben.de/. The website provides tutorials and usage notes, as well as download links to the source code. scholz@ipk-gatersleben.de.
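
    Microsatellite (SSR) detection of the kind MISA performs amounts to scanning for short tandem repeats above a minimum repeat count per motif length. A rough regex-based sketch of the idea — the thresholds below are common SSR conventions assumed here, not MISA-web's actual defaults:

```python
import re

# Minimum repeat counts per motif length, a common SSR convention (assumed).
MIN_REPEATS = {1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}

def find_ssrs(seq: str):
    """Return (start, motif, repeat_count) for simple tandem repeats."""
    hits = []
    for motif_len, min_rep in MIN_REPEATS.items():
        pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (motif_len, min_rep - 1))
        for m in pattern.finditer(seq.upper()):
            motif = m.group(2)
            if motif_len > 1 and len(set(motif)) == 1:
                continue  # e.g. (AA)x is really a mononucleotide repeat
            hits.append((m.start(), motif, len(m.group(1)) // motif_len))
    return hits

print(find_ssrs("ttgcGAGAGAGAGAGAcctAAAAAAAAAAAg"))
```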

  17. Mass predicts web asymmetry in Nephila spiders

    Science.gov (United States)

    Kuntner, Matjaž; Gregorič, Matjaž; Li, Daiqin

    2010-12-01

    The architecture of vertical aerial orb webs may be affected by spider size and gravity or by the available web space, in addition to phylogenetic and/or developmental factors. Vertical orb web asymmetry measured by hub displacement has been shown to increase in bigger and heavier spiders; however, previous studies have mostly focused on adult and subadult spiders or on several size classes with measured size parameters but no mass. Both estimations are suboptimal because (1) adult orb web spiders may not invest heavily in optimal web construction, whereas juveniles do; (2) size class/developmental stage is difficult to estimate in the field and is thus subjective, and (3) mass scales differently to size and is therefore more important in predicting aerial foraging success due to gravity. We studied vertical web asymmetry in a giant orb web spider, Nephila pilipes, across a wide range of size classes/developmental stages and tested the hypothesis that vertical web asymmetry (measured as hub displacement) is affected by gravity. On a sample of 100 webs, we found that hubs were more displaced in heavier and larger juveniles and that spider mass explained vertical web asymmetry better than other measures of spider size (carapace and leg lengths, developmental stage). Quantifying web shape via the ladder index suggested that, unlike in other nephilid taxa, growing Nephila orbs do not become vertically elongated. We conclude that the ontogenetic pattern of progressive vertical web asymmetry in Nephila can be explained by optimal foraging due to gravity, to which the opposing selective force may be high web-building costs in the lower orb. Recent literature finds little support for alternative explanations of ontogenetic orb web allometry such as the size limitation hypothesis and the biogenetic law.

  18. Web Accessibility Knowledge and Skills for Non-Web Library Staff

    Science.gov (United States)

    McHale, Nina

    2012-01-01

    Why do librarians and library staff other than Web librarians and developers need to know about accessibility? Web services staff do not--or should not--operate in isolation from the rest of the library staff. It is important to consider what areas of online accessibility are applicable to other areas of library work and to colleagues' regular job…

  19. Predicting web site audience demographics for web advertising targeting using multi-web site clickstream data

    OpenAIRE

    Bock, K W; D. VAN DEN POEL; Manigart, S.

    2009-01-01

    Several recent studies have explored the virtues of behavioral targeting and personalization for online advertising. In this paper, we add to this literature by proposing a cost-effective methodology for the prediction of demographic web site visitor profiles that can be used for web advertising targeting purposes. The methodology involves the transformation of web site visitors’ clickstream patterns to a set of features and the training of Random Forest classifiers that generate predictions ...

  20. Web accessibility practical advice for the library and information professional

    CERN Document Server

    Craven, Jenny

    2008-01-01

    Offers an introduction to web accessibility and usability for information professionals, offering advice on the concerns relevant to library and information organizations. This book can be used as a resource for developing staff training and awareness activities. It will also be of value to website managers involved in web design and development.

  1. The Next Page Access Prediction Using Markov Model

    Directory of Open Access Journals (Sweden)

    Deepti Razdan

    2011-09-01

    Predicting the next page to be accessed by Web users has attracted a large amount of research. In this paper, a new web usage mining approach is proposed to predict next page access. It is proposed to identify similar access patterns from web logs using K-means clustering, after which a Markov model is used for prediction of next page accesses. The tightness of clusters is improved by setting a similarity threshold while forming clusters. In traditional recommendation models, clustering over non-sequential data decreases recommendation accuracy. This paper involves incorporating clustering with a low-order Markov model, which can improve the prediction accuracy. The main area of research in this paper is preprocessing and identification of useful patterns from web data using mining techniques with the help of open source software.
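
    The combination the paper proposes — cluster sessions first, then fit a low-order Markov model per cluster — can be illustrated in a few lines. The sketch below fits only first-order transition counts and assumes cluster assignments (e.g. from K-means) have been made upstream; it is a simplification, not the paper's method.

```python
from collections import Counter, defaultdict

class FirstOrderMarkov:
    """Next-page predictor built from observed page-to-page transitions."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def fit(self, sessions):
        for pages in sessions:
            for cur, nxt in zip(pages, pages[1:]):
                self.transitions[cur][nxt] += 1

    def predict(self, current_page):
        nxt = self.transitions.get(current_page)
        return nxt.most_common(1)[0][0] if nxt else None

# One model per session cluster (cluster assignment assumed upstream).
clusters = {
    0: [["/home", "/news", "/sports"], ["/home", "/news", "/weather"]],
    1: [["/home", "/shop", "/cart"]],
}
models = {}
for cid, sessions in clusters.items():
    model = FirstOrderMarkov()
    model.fit(sessions)
    models[cid] = model

print(models[0].predict("/news"))  # "/sports" (ties broken by first-seen order)
```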

  2. Accessible Web Design - The Power of the Personal Message.

    Science.gov (United States)

    Whitney, Gill

    2015-01-01

    The aim of this paper is to describe ongoing research being carried out to enable people with visual impairments to communicate directly with the designers and specifiers of hobby and community web sites, in order to maximise the accessibility of those sites. The research started with an investigation of the accessibility of community and hobby web sites as perceived by a group of visually impaired end users. It is continuing with an investigation into how best to communicate with web designers who are not experts in web accessibility. The research is making use of communication theory to investigate how terminology describing personal experience can be used in the most effective and powerful way. By working with the users in a Delphi study, the research has ensured that the views of the visually impaired end users are successfully transmitted.

  3. A Web-Based Remote Access Laboratory Using SCADA

    Science.gov (United States)

    Aydogmus, Z.; Aydogmus, O.

    2009-01-01

    The Internet provides an opportunity for students to access laboratories from outside the campus. This paper presents a Web-based remote access real-time laboratory using SCADA (supervisory control and data acquisition) control. The control of an induction motor is used as an example to demonstrate the effectiveness of this remote laboratory,…

  4. Binary Coded Web Access Pattern Tree in Education Domain

    Science.gov (United States)

    Gomathi, C.; Moorthi, M.; Duraiswamy, K.

    2008-01-01

    Web Access Pattern (WAP), which is the sequence of accesses pursued by users frequently, is a kind of interesting and useful knowledge in practice. Sequential Pattern mining is the process of applying data mining techniques to a sequential database for the purposes of discovering the correlation relationships that exist among an ordered list of…

  5. Improving access to space weather data via workflows and web services

    Science.gov (United States)

    Sundaravel, Anu Swapna

    The Space Physics Interactive Data Resource (SPIDR) is a web-based interactive tool developed by NOAA's National Geophysical Data Center to provide access to historical space physics datasets. These data sets are widely used by physicists for space weather modeling and predictions. Built on a distributed network of databases and application servers, SPIDR offers services in two ways: via a web page interface and via a web service interface. SPIDR exposes several SOAP-based web services that client applications implement to connect to a number of data sources for data download and processing. At present, using the web services has been difficult, adding unnecessary complexity to client applications and inconvenience to the scientists who want to use these datasets. This study focuses on improving SPIDR's web interface to better support data access, integration and display. This is accomplished in two ways: (1) examining the needs of scientists to better understand what web services they require to better access and process these datasets, and (2) developing a client application to support SPIDR's SOAP-based services using the Kepler scientific workflow system. To this end, we identified, designed and developed several web services for filtering the existing datasets and created several Kepler workflows to automate routine tasks associated with these datasets. These workflows are part of the custom NGDC build of the Kepler tool. Scientists are already familiar with Kepler due to its extensive use in this domain. As a result, this approach provides them with tools that are less daunting than raw web services and ultimately more useful and customizable. We evaluated our work by interviewing various scientists who make use of SPIDR and having them use the developed Kepler workflows while recording their feedback and suggestions. Our work has improved SPIDR such that new web services are now available and scientists have access to a desktop
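
    A SOAP-based service like SPIDR's is usually consumed from Python via a WSDL-driven client. The sketch below uses the zeep library; the WSDL URL, operation name, and parameters are placeholders assumed for illustration, since the abstract does not give the actual service names.

```python
from zeep import Client  # assumes the zeep SOAP library is installed

# Placeholder WSDL; a real SPIDR deployment publishes its own descriptors.
client = Client("https://example.org/spidr/services/DataService?wsdl")

# Operation and parameter names are illustrative only; zeep generates the
# callable methods from whatever operations the WSDL actually declares.
result = client.service.getData(station="BOU", dateFrom="2010-01-01", dateTo="2010-01-31")
print(result)
```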

  6. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    Science.gov (United States)

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large set of quantitative peptide data, using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC): peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only the MHC class I prediction server is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC(50) values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available for download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set. The server is free to use and available at: http://www.cbs.dtu.dk/services/NetMHC.
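
    Since the neural-network output is an IC(50) affinity, downstream filtering is typically a threshold on that value. The 50 nM (strong binder) and 500 nM (weak binder) cutoffs below are a widely used community convention, not something stated in this abstract, and the demo affinities are invented for illustration.

```python
def classify_binder(ic50_nm: float) -> str:
    """Classify a predicted MHC binding affinity using conventional cutoffs."""
    if ic50_nm <= 50:
        return "strong binder"
    if ic50_nm <= 500:
        return "weak binder"
    return "non-binder"

# Toy predictions (peptide -> predicted IC50 in nM, illustrative values only).
predictions = {"SIINFEKL": 32.0, "GILGFVFTL": 210.0, "AAAAAAAAA": 12000.0}
for peptide, ic50 in predictions.items():
    print(peptide, classify_binder(ic50))
```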

  7. SIDECACHE: Information access, management and dissemination framework for web services

    Directory of Open Access Journals (Sweden)

    Robbins Kay A

    2011-06-01

    Background: Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. Findings: SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. Conclusions: We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
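
    The two core behaviors named in the Findings — local caching of upstream responses and rate control on outgoing requests — can be sketched generically. This is an illustration of the pattern only, not SideCache's actual interface.

```python
import time
import urllib.request

class CachingRateLimitedFetcher:
    """Cache upstream responses and enforce a minimum delay between misses."""

    def __init__(self, min_interval_s: float = 1.0):
        self.cache = {}
        self.min_interval_s = min_interval_s
        self._last_fetch = 0.0

    def get(self, url: str) -> bytes:
        if url in self.cache:
            return self.cache[url]  # served locally; no upstream load
        wait = self.min_interval_s - (time.monotonic() - self._last_fetch)
        if wait > 0:
            time.sleep(wait)  # rate control toward the upstream source
        with urllib.request.urlopen(url, timeout=30) as resp:
            body = resp.read()
        self._last_fetch = time.monotonic()
        self.cache[url] = body
        return body
```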

  8. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lamberth, K; Harndahl, M

    2008-01-01

    The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogen viral proteomes including SARS, Influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data, non-redundant to the training set.

  9. EnviroAtlas - Accessibility Characteristics in the Conterminous U.S. Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service includes maps that illustrate factors affecting transit accessibility, and indicators of accessibility. Accessibility measures how...

  10. UK Government Web Continuity: Persisting Access through Aligning Infrastructures

    Directory of Open Access Journals (Sweden)

    Amanda Spencer

    2009-06-01

    Government's use of the Web in the UK is prolific, and a wide range of services are now available through this channel. The government set out to address the problem that links from Hansard (the transcripts of Parliamentary debates) were not maintained over time, and that there was therefore a need for long-term storage and stewardship of information, including maintaining access. Further investigation revealed that linking was key, not only to maintaining access to information, but also to the discovery of information. This resulted in a project that affects the entire government Web estate, with a solution leveraging the basic building blocks of the Internet (DNS) and the Web (HTTP and URIs) in a pragmatic way, to ensure that an infrastructure is in place to provide access to important information both now and in the future.
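
    The infrastructure described relies on HTTP redirects resolving old URIs to current locations. A quick way to audit that behaviour for a list of links is to follow redirects and record where each one lands; this is a generic sketch, not the project's tooling.

```python
import requests  # assumes the requests package is installed

def audit_links(urls):
    """Report final status and landing URL for each link, following redirects."""
    for url in urls:
        try:
            resp = requests.get(url, timeout=30, allow_redirects=True)
            hops = len(resp.history)  # each 3xx hop is recorded here
            print(f"{url} -> {resp.url} ({resp.status_code}, {hops} redirect(s))")
        except requests.RequestException as exc:
            print(f"{url} -> ERROR: {exc}")

audit_links(["https://www.gov.uk/"])
```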

  11. Learning Task Knowledge from Dialog and Web Access

    Directory of Open Access Journals (Sweden)

    Vittorio Perera

    2015-06-01

    We present KnoWDiaL, an approach for Learning and using task-relevant Knowledge from human-robot Dialog and access to the Web. KnoWDiaL assumes that there is an autonomous agent that performs tasks as requested by humans through speech. The agent needs to "understand" the request, i.e., to fully ground the task until it can proceed to plan for and execute it. KnoWDiaL contributes such understanding by using and updating a Knowledge Base, by dialoguing with the user, and by accessing the web. We believe that KnoWDiaL, as we present it, can be applied to general autonomous agents. However, we focus on our work with our autonomous collaborative robot, CoBot, which executes service tasks in a building, moving around and transporting objects between locations. Hence, the knowledge acquired and accessed consists of groundings of language to robot actions, and to building locations, persons, and objects. KnoWDiaL handles the interpretation of voice commands, is robust regarding speech recognition errors, and is able to learn commands involving referring expressions in an open domain, i.e., without requiring a lexicon. We present in detail the multiple components of KnoWDiaL, namely a frame-semantic parser, a probabilistic grounding model, a web-based predicate evaluator, a dialog manager, and the weighted predicate-based Knowledge Base. We illustrate the knowledge access and updates from dialog and Web access through detailed and complete examples. We further evaluate the correctness of the predicate instances learned into the Knowledge Base, and show the increase in dialog efficiency as a function of the number of interactions. We have extensively and successfully used KnoWDiaL in CoBot dialoguing and accessing the Web, and we extract a few corresponding example sequences from captured videos.

  12. Access Control of Web- and Java-Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.

    2013-01-01

    Cybersecurity has become a great concern as threats of service interruption, unauthorized access, stealing and altering of information, and spreading of viruses have become more prevalent and serious. Application-layer access control is a critical component in the overall security solution, which also includes encryption, firewalls, virtual private networks, antivirus software, and intrusion detection. An access control solution, based on an open-source access manager augmented with custom software components, was developed to provide protection to both Web-based and Java-based client and server applications. The DISA Security Service (DISA-SS) provides common access control capabilities for AMMOS software applications through a set of application programming interfaces (APIs) and network-accessible security services for authentication, single sign-on, authorization checking, and authorization policy management. The OpenAM access management technology designed for Web applications can be extended to meet the needs of Java thick clients and standalone servers that are commonly used in the JPL AMMOS environment. The DISA-SS reusable components have greatly reduced the effort for each AMMOS subsystem to develop its own access control strategy. The novelty of this work is that it leverages an open-source access management product that was designed for Web-based applications to provide access control for Java thick clients and Java standalone servers. Thick clients and standalone servers are still commonly used in businesses and government, especially for applications that require rich graphical user interfaces and high-performance visualization that cannot be met by thin clients running on Web browsers.

  13. 3PAC: Enforcing Access Policies for Web Services

    NARCIS (Netherlands)

    van Bemmel, J.; Wegdam, M.; Lagerberg, K.

    Web Services fail to deliver on the promise of ubiquitous deployment and seamless interoperability due to the lack of a uniform, standards-based approach to all aspects of security. In particular, the enforcement of access policies in a Service Oriented Architecture is not addressed adequately. We

  14. 3PAC: Enforcing Access Policies for Web Services

    NARCIS (Netherlands)

    van Bemmel, J.; Wegdam, M.; Lagerberg, K.

    2005-01-01

    Web Services fail to deliver on the promise of ubiquitous deployment and seamless interoperability due to the lack of a uniform, standards-based approach to all aspects of security. In particular, the enforcement of access policies in a Service Oriented Architecture is not addressed adequately. We p

  15. Data Vault: providing simple web access to NRAO data archives

    Science.gov (United States)

    DuPlain, Ron; Benson, John; Sessoms, Eric

    2008-08-01

    In late 2007, the National Radio Astronomy Observatory (NRAO) launched Data Vault, a feature-rich web application for simplified access to NRAO data archives. This application allows users to submit a Google-like free-text search, and browse, download, and view further information on matching telescope data. Data Vault uses the model-view-controller design pattern with web.py, a minimalist open-source web framework built with the Python Programming Language. Data Vault implements an Ajax client built on the Google Web Toolkit (GWT), which creates structured JavaScript applications. This application supports plug-ins for linking data to additional web tools and services, including Google Sky. NRAO sought the inspiration of Google's remarkably elegant user interface and notable performance to create a modern search tool for the NRAO science data archive, taking advantage of the rapid development frameworks of web.py and GWT to create a web application on a short timeline, while providing modular, easily maintainable code. Data Vault provides users with a NRAO-focused data archive while linking to and providing more information wherever possible. Free-text search capabilities are possible (and even simple) with an innovative query parser. NRAO develops all software under an open-source license; Data Vault is available to developers and users alike.
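    The abstract names web.py as Data Vault's framework. The following is a minimal sketch of a free-text search endpoint in that style; the /search route and the toy in-memory index are assumptions for illustration, not NRAO's actual code.

```python
# A minimal free-text search endpoint in web.py, the framework Data Vault is
# built on; the route and the toy in-memory "archive" are illustrative only.
import json
import web

urls = ('/search', 'Search')

# toy stand-in for the archive index
ARCHIVE = [
    {"telescope": "VLA", "target": "M87"},
    {"telescope": "GBT", "target": "Orion KL"},
]

class Search:
    def GET(self):
        q = web.input(q="").q.lower()
        hits = [rec for rec in ARCHIVE
                if any(q in str(v).lower() for v in rec.values())]
        web.header('Content-Type', 'application/json')
        return json.dumps(hits)

if __name__ == "__main__":
    web.application(urls, globals()).run()
```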

  16. Unlocking the Gates to the Kingdom: Designing Web Pages for Accessibility.

    Science.gov (United States)

    Mills, Steven C.

    As the use of the Web is perceived to be an effective tool for dissemination of research findings for the provision of asynchronous instruction, the issue of accessibility of Web page information will become more and more relevant. The World Wide Web consortium (W3C) has recognized a disparity in accessibility to the Web between persons with and…

  17. Geodetic Data Via Web Services: Standardizing Access, Expanding Accessibility, and Promoting Discovery

    Science.gov (United States)

    Zietlow, D. W.; Molnar, C.; Meertens, C. M.; Phillips, D. A.; Bartel, B. A.; Ertz, D. J.

    2016-12-01

    UNAVCO, a university-governed consortium that enables geodetic research and education, is developing and implementing new web services to standardize and enhance access to geodetic data for an ever-growing community of users. This simple, easy-to-use tool gives both experienced and novice users access to all data and products archived at UNAVCO through a uniform interface, regardless of data type and structure. UNAVCO data product types include GPS station position time series, velocity estimates and metadata, as well as meteorological time series, and borehole geophysical data and metadata including strain, seismic, and tilt time series. Users access data through a request URL or through the Swagger user interface (UI). The Swagger UI allows users to easily learn about the web services and provides users the ability to test the web services in their web browser. Swagger UI also documents the web services' URLs and query parameters, so that users can see the valid query parameters for each web service. Output from the web services is currently in a standard comma-separated (CSV) format that can then be used in other processing and/or visualization programs. The web services are being standardized so that all CSV formats will follow the GeoCSV specification. Other formats, such as GeoJSON and time series XML, will also be available, since not all data are well represented by the CSV format. The UNAVCO web services are written in Python using the Flask microframework. This allows quick development and the ability to easily implement new services. Future services are being planned to allow users to access metadata from any station with data available directly or indirectly from the UNAVCO website. In collaboration with the UNAVCO Student Internship Program (USIP), we developed a short video demonstrating how to use the web services tool to assist new users and the broader
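    As described above, data is requested through a URL and returned as GeoCSV-style comma-separated text. A hedged sketch of such a client follows; the host, path, and query parameter names are placeholders, not UNAVCO's documented API.

```python
# Hedged sketch of fetching a GeoCSV-style time series via a request URL;
# the base URL and parameter names below are placeholders.
import csv
import io
import requests

def fetch_timeseries(base_url, station, start, end):
    params = {"station": station, "starttime": start, "endtime": end}
    resp = requests.get(base_url, params=params, timeout=30)
    resp.raise_for_status()
    # GeoCSV prefixes the table with '#' comment headers; skip them
    lines = [l for l in resp.text.splitlines() if not l.startswith("#")]
    return list(csv.DictReader(io.StringIO("\n".join(lines))))

rows = fetch_timeseries("https://example.org/gps/position", "P123",
                        "2016-01-01", "2016-12-31")
print(len(rows), "records")
```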

  18. Dynamic Tracking of Web Activity Accessed by Users Using Cookies

    Directory of Open Access Journals (Sweden)

    K.V.S. Jaharsh Samayan

    2015-07-01

    Full Text Available The motive of this study is to suggest a protocol which can be implemented to observe the activities of any node within a network whose contribution to the organization needs to be measured. Many associates working in an organization misuse the resources allocated to them and waste their working time on unproductive work which is of no use to the organization. To tackle this problem, the dynamic approach of monitoring web pages accessed by users via cookies gives a very efficient way of tracking all the activities of an individual: cookies are generated based on recent web activity, and statistical information on how each IP address in the network spent its web time over the period is displayed. In an ever-challenging, dynamic world, monitoring the productivity of an organization's associates plays a vital role.
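    A minimal sketch of the cookie mechanism this protocol relies on is shown below: each response appends the requested path to a bounded cookie trail. All names and limits are illustrative assumptions; a real deployment would also aggregate per-IP statistics server-side.

```python
# Minimal WSGI sketch of per-user activity tracking with cookies; the cookie
# name, separator, and size bound are illustrative, not the paper's protocol.
from http.cookies import SimpleCookie
from wsgiref.simple_server import make_server

def app(environ, start_response):
    cookie = SimpleCookie(environ.get("HTTP_COOKIE", ""))
    history = cookie["history"].value if "history" in cookie else ""
    history = (history + "|" + environ["PATH_INFO"])[-512:]  # bounded trail
    headers = [("Content-Type", "text/plain"),
               ("Set-Cookie", f"history={history}; Max-Age=86400")]
    start_response("200 OK", headers)
    return [f"pages visited this session: {history}".encode()]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```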

  19. Secure Communication and Access Control for Mobile Web Service Provisioning

    CERN Document Server

    Srirama, Satish Narayana

    2010-01-01

    It is now feasible to host basic web services on a smart phone due to the advances in wireless devices and mobile communication technologies. While the applications are quite welcoming, the ability to provide secure and reliable communication in the vulnerable and volatile mobile ad-hoc topologies is vastly becoming necessary. The paper mainly addresses the details and issues in providing secured communication and access control for the mobile web service provisioning domain. While the basic message-level security can be provided, providing proper access control mechanisms for the Mobile Host still poses a great challenge. This paper discusses details of secure communication and proposes the distributed semantics-based authorization mechanism.

  20. CCTOP: a Consensus Constrained TOPology prediction web server.

    Science.gov (United States)

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of hidden Markov models. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmatic access to the CCTOP server is also available, and an example of a client-side script is provided.
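    The abstract mentions that programmatic access is available and that a client-side example script is provided on the site. The snippet below is only a hedged illustration of what such a submission could look like; the endpoint path and field names are assumptions, so consult CCTOP's own example script for the real interface.

```python
# Hedged sketch of a programmatic topology-prediction submission; the
# /api/submit route and form fields are hypothetical placeholders.
import requests

FASTA = ">example\nMKTLLLTLVVVTIVCLDLGYT"

resp = requests.post("http://cctop.enzim.ttk.mta.hu/api/submit",  # hypothetical
                     data={"sequence": FASTA, "format": "xml"}, timeout=60)
resp.raise_for_status()
print(resp.text[:200])  # results are downloadable in XML, per the abstract
```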

  1. Sann: solvent accessibility prediction of proteins by nearest neighbor method.

    Science.gov (United States)

    Joo, Keehyoung; Lee, Sung Jong; Lee, Jooyoung

    2012-07-01

    We present a method to predict the solvent accessibility of proteins which is based on a nearest neighbor method applied to the sequence profiles. Using the method, continuous real-value prediction as well as two-state and three-state discrete predictions can be obtained. The method utilizes the z-score value of the distance measure in the feature vector space to estimate the relative contribution among the k-nearest neighbors for prediction of the discrete and continuous solvent accessibility. The solvent accessibility database is constructed from 5717 proteins extracted from the PISCES culling server with a cutoff of 25% sequence identity. Using optimal parameters, prediction accuracies (for discrete predictions) of 78.38% (two-state prediction with the threshold of 25%) and 65.1% (three-state prediction with thresholds of 9 and 36%), and a Pearson correlation coefficient (between the predicted and true RSAs for continuous prediction) of 0.676, are achieved. An independent benchmark test was performed with the CASP8 targets, where we find that the proposed method outperforms existing methods. The prediction accuracies are 80.89% (for two-state prediction with the threshold of 25%) and 67.58% (three-state prediction), with a Pearson correlation coefficient of 0.727 (for continuous prediction) and mean absolute error of 0.148. We have also investigated the effect of increasing database size on the prediction accuracy, where additional improvement in the accuracy is observed as the database size increases. The SANN web server is available at http://lee.kias.re.kr/~newton/sann/.
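    The core idea above, weighting the k nearest sequence-profile neighbors by a z-score of their distances, can be sketched in a few lines. The exponential weighting below is an illustrative guess, not SANN's published formula, and the data is synthetic.

```python
# Sketch of z-score-weighted k-nearest-neighbor prediction of relative
# solvent accessibility (RSA); the weighting function is an assumption.
import numpy as np

def knn_rsa(query, profiles, rsa_values, k=10):
    d = np.linalg.norm(profiles - query, axis=1)   # distances in feature space
    idx = np.argsort(d)[:k]                        # k nearest neighbors
    z = (d[idx] - d[idx].mean()) / (d[idx].std() + 1e-9)
    w = np.exp(-z)                                 # closer neighbors weigh more
    return float(np.dot(w, rsa_values[idx]) / w.sum())

rng = np.random.default_rng(0)
profiles = rng.normal(size=(1000, 20))             # toy sequence-profile vectors
rsa = rng.uniform(0, 1, size=1000)
pred = knn_rsa(rng.normal(size=20), profiles, rsa)
state = "buried" if pred < 0.25 else "exposed"     # two-state call at 25%
print(f"predicted RSA: {pred:.3f} ({state})")
```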

  2. Ensemble Learned Vaccination Uptake Prediction using Web Search Queries

    OpenAIRE

    Hansen, Niels Dalum; Lioma, Christina; Mølbak, Kåre

    2016-01-01

    We present a method that uses ensemble learning to combine clinical and web-mined time-series data in order to predict future vaccination uptake. The clinical data is official vaccination registries, and the web data is query frequencies collected from Google Trends. Experiments with official vaccine records show that our method predicts vaccination uptake effectively (4.7 Root Mean Squared Error). Whereas performance is best when combining clinical and web data, using solely web data yields...

  3. Architecture for large-scale automatic web accessibility evaluation based on the UWEM methodology

    DEFF Research Database (Denmark)

    Ulltveit-Moe, Nils; Olsen, Morten Goodwin; Pillai, Anand B.

    2008-01-01

    The European Internet Accessibility project (EIAO) has developed an Observatory for performing large-scale automatic web accessibility evaluations of public sector web sites in Europe. The architecture includes a distributed web crawler that crawls web sites for links until either a given budget of web pages have been identified or the web site has been crawled exhaustively. Subsequently, a uniform random subset of the crawled web pages is sampled and sent for accessibility evaluation and the evaluation results are stored in a Resource Description Format (RDF) database that is later loaded

  4. Predicting consumer behavior with Web search.

    Science.gov (United States)

    Goel, Sharad; Hofman, Jake M; Lahaie, Sébastien; Pennock, David M; Watts, Duncan J

    2010-10-12

    Recent work has demonstrated that Web search volume can "predict the present," meaning that it can be used to accurately track outcomes such as unemployment levels, auto and home sales, and disease prevalence in near real time. Here we show that what consumers are searching for online can also predict their collective future behavior days or even weeks in advance. Specifically we use search query volume to forecast the opening weekend box-office revenue for feature films, first-month sales of video games, and the rank of songs on the Billboard Hot 100 chart, finding in all cases that search counts are highly predictive of future outcomes. We also find that search counts generally boost the performance of baseline models fit on other publicly available data, where the boost varies from modest to dramatic, depending on the application in question. Finally, we reexamine previous work on tracking flu trends and show that, perhaps surprisingly, the utility of search data relative to a simple autoregressive model is modest. We conclude that in the absence of other data sources, or where small improvements in predictive performance are material, search queries provide a useful guide to the near future.
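    The paper's central comparison, a simple autoregressive baseline versus the same model augmented with search counts, can be reproduced in miniature on synthetic data, as in the sketch below.

```python
# Minimal sketch of the comparison described above: an AR(1) baseline vs. the
# same regression augmented with search-query counts. All data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 120
searches = rng.poisson(100, n).astype(float)       # weekly query volume
outcome = 0.5 * searches + rng.normal(0, 5, n)     # outcome tracks searches

X_ar = outcome[:-1].reshape(-1, 1)                 # AR(1) baseline feature
X_both = np.column_stack([outcome[:-1], searches[1:]])
y = outcome[1:]

for name, X in [("AR(1) baseline", X_ar), ("AR(1) + search", X_both)]:
    model = LinearRegression().fit(X[:80], y[:80])
    rmse = np.sqrt(np.mean((model.predict(X[80:]) - y[80:]) ** 2))
    print(f"{name}: test RMSE = {rmse:.2f}")
```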

  5. Access Control of Web and Java Based Applications

    Science.gov (United States)

    Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan

    2011-01-01

    Cyber security has gained national and international attention as a result of near-continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption and spreading of viruses become ever more prevalent and serious. Controlling access to application layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.

  6. Model for Predicting End User Web Page Response Time

    CERN Document Server

    Nagarajan, Sathya Narayanan

    2012-01-01

    Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely from country to country, accurately modeling and predicting the perceived responsiveness of a web page from the end user's perspective has traditionally proven very difficult. We propose a model for predicting end user web page response time based on web page, network, browser download and browser rendering characteristics. We start by understanding the key parameters that affect perceived response time. We then model each of these parameters individually using experimental tests and statistical techniques. Finally, we d...

  7. Model for Predicting End User Web Page Response Time

    OpenAIRE

    Nagarajan, Sathya Narayanan; Ravikumar, Srijith

    2012-01-01

    Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely...

  8. Accessing multimedia content from mobile applications using semantic web technologies

    Science.gov (United States)

    Kreutel, Jörn; Gerlach, Andrea; Klekamp, Stefanie; Schulz, Kristin

    2014-02-01

    We describe the ideas and results of an applied research project that aims at leveraging the expressive power of semantic web technologies as a server-side backend for mobile applications that provide access to location and multimedia data and allow for a rich user experience in mobile scenarios, ranging from city and museum guides to multimedia enhancements of any kind of narrative content, including e-book applications. In particular, we will outline a reusable software architecture for both server-side functionality and native mobile platforms that is aimed at significantly decreasing the effort required for developing particular applications of that kind.

  9. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    Science.gov (United States)

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
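    The WSDL files quoted above make the services callable from languages other than the Perl examples; a hedged Python sketch using the zeep SOAP client follows. Only the Document/literal WSDL URL is taken from the text; the actual operation names should be read from the dump rather than assumed.

```python
# Hedged sketch of inspecting the KBWS SOAP services from Python with zeep;
# the WSDL URL comes from the abstract, the rest is generic introspection.
from zeep import Client

client = Client("http://soap.g-language.org/kbws_dl.wsdl")
client.wsdl.dump()  # prints the services and operations the WSDL exposes
```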

  10. KBWS: an EMBOSS associated package for accessing bioinformatics web services

    Directory of Open Access Journals (Sweden)

    Tomita Masaru

    2011-04-01

    Full Text Available Abstract The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  11. Ensemble learned vaccination uptake prediction using web search queries

    DEFF Research Database (Denmark)

    Hansen, Niels Dalum; Lioma, Christina; Mølbak, Kåre

    2016-01-01

    We present a method that uses ensemble learning to combine clinical and web-mined time-series data in order to predict future vaccination uptake. The clinical data is official vaccination registries, and the web data is query frequencies collected from Google Trends. Experiments with official vaccine records show that our method predicts vaccination uptake effectively (4.7 Root Mean Squared Error). Whereas performance is best when combining clinical and web data, using solely web data yields comparative performance. To our knowledge, this is the first study to predict vaccination uptake...

  12. Size-based predictions of food web patterns

    DEFF Research Database (Denmark)

    Zhang, Lai; Hartvig, Martin; Knudsen, Kim

    2014-01-01

    resistance. Our results show that the predicted size-spectrum exponent is borne out in the simulated food webs even with few species, albeit with a systematic bias. The predicted maximum trophic level turns out to be an upper limit since simulated food webs may have a lower number of trophic levels...

  13. File access prediction using neural networks.

    Science.gov (United States)

    Patra, Prashanta Kumar; Sahu, Muktikanta; Mohapatra, Subasish; Samantray, Ronak Kumar

    2010-06-01

    One of the most vexing issues in the design of a high-speed computer is the wide gap of access times between the memory and the disk. To solve this problem, static file access predictors have been used. In this paper, we propose dynamic file access predictors using neural networks to significantly improve upon the accuracy, success-per-reference, and effective-success-rate-per-reference, given proper tuning. In particular, we verified that incorrect prediction was reduced from 53.11% to 43.63% for the proposed neural network prediction method with a standard configuration, compared with the recent popularity (RP) method. With manual tuning for each trace, we are able to improve upon the misprediction rate and effective-success-rate-per-reference using a standard configuration. Simulations on distributed file system (DFS) traces reveal that an exact-fit radial basis function (RBF) gives better prediction in high-end systems, whereas a multilayer perceptron (MLP) trained with Levenberg-Marquardt (LM) backpropagation outperforms it in systems having good computational capability. Probabilistic and competitive predictors are the most suitable for workstations having limited resources, and the former predictor is more efficient than the latter for servers handling the maximum number of system calls. Finally, we conclude that MLP with the LM backpropagation algorithm has a better success rate of file prediction than simple perceptron, last successor, stable successor, and best-k-out-of-m predictors.
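    The following is an illustrative reconstruction of neural-network file access prediction: encode the last few accessed file IDs and train an MLP to predict the next one. The window size, network shape, and synthetic trace are assumptions, not the paper's configuration.

```python
# Sketch of next-file prediction with an MLP on a synthetic, partly
# repetitive access trace; all hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
base = np.tile(np.arange(20), 250)                 # repetitive access pattern
noise = rng.choice(20, size=base.size)             # 10% random accesses
trace = np.where(rng.random(base.size) < 0.1, noise, base)

window = 4                                         # predict from last 4 files
X = np.array([trace[i:i + window] for i in range(len(trace) - window)])
y = trace[window:]

split = 4000
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X[:split], y[:split])
print(f"next-file prediction accuracy: {clf.score(X[split:], y[split:]):.2%}")
```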

  14. WebAUGUSTUS--a web service for training AUGUSTUS and predicting genes in eukaryotes.

    Science.gov (United States)

    Hoff, Katharina J; Stanke, Mario

    2013-07-01

    The prediction of protein coding genes is an important step in the annotation of newly sequenced and assembled genomes. AUGUSTUS is one of the most accurate tools for eukaryotic gene prediction. Here, we present WebAUGUSTUS, a web interface for training AUGUSTUS and predicting genes with AUGUSTUS. Depending on the needs of the user, WebAUGUSTUS generates training gene structures automatically. Besides a genome file, either a file with expressed sequence tags or a file with protein sequences is required for this step. Alternatively, it is possible to submit an externally generated training gene structure file and a genome file. The web service optimizes AUGUSTUS parameters and predicts genes with those parameters. WebAUGUSTUS is available at http://bioinf.uni-greifswald.de/webaugustus.

  15. Web Video Mining: Metadata Predictive Analysis using Classification Techniques

    Directory of Open Access Journals (Sweden)

    Siddu P. Algur

    2016-02-01

    Full Text Available Nowadays, data engineering is becoming an emerging trend for discovering knowledge from web audiovisual data such as YouTube videos, Yahoo Screen, Facebook videos, etc. Different categories of web video are being shared on such social websites and are being used by billions of users all over the world. Uploaded web videos carry different kinds of metadata as attribute information of the video data. The metadata attributes conceptually define the contents and features/characteristics of the web videos. Hence, accomplishing web video mining by extracting features of web videos in terms of metadata is a challenging task. In this work, effective attempts are made to classify and predict the metadata features of web videos, such as the length of the web videos, the number of comments, ratings information, and view counts, using data mining algorithms such as the J48 decision tree and naive Bayesian algorithms as a part of web video mining. The results of the J48 decision tree and naive Bayesian classification models are analyzed and compared as a step in the process of knowledge discovery from web videos.
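    The comparison described above can be sketched with scikit-learn stand-ins: a C4.5-style decision tree in place of Weka's J48, and Gaussian naive Bayes. The synthetic metadata features (length, comments, rating, views) mirror the attributes named in the abstract; the label is invented.

```python
# Sketch comparing a J48 analogue (C4.5-style tree) and naive Bayes on
# synthetic web-video metadata; features and label are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.exponential(300, n),      # video length (s)
    rng.poisson(20, n),           # number of comments
    rng.uniform(1, 5, n),         # rating
    rng.lognormal(8, 1, n),       # view count
])
y = (X[:, 3] > np.median(X[:, 3])).astype(int)   # toy label: popular or not

for name, clf in [("decision tree (J48 analogue)", DecisionTreeClassifier(max_depth=5)),
                  ("naive Bayes", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%}")
```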

  16. Accessing NASA Technology with the World Wide Web

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1995-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.

  17. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies — policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data.

  18. Open access web technology for mathematics learning in higher education

    Directory of Open Access Journals (Sweden)

    Mari Carmen González-Videgaray

    2016-05-01

    Full Text Available Problems with mathematics learning, “math anxiety” or “statistics anxiety” among university students can be avoided by using teaching strategies and technological tools. Besides personal suffering, low achievement in mathematics reduces terminal efficiency and decreases enrollment in careers related to science, technology and mathematics. This paper has two main goals: 1) to offer an organized inventory of open access web resources for math learning in higher education, and 2) to explore to what extent these resources are currently known and used by students and teachers. The first goal was accomplished by running a search in Google and then classifying resources. For the second, we conducted a survey among a sample of students (n=487) and teachers (n=60) from mathematics and engineering within the largest public university in Mexico. We categorized 15 high-quality web resources. Most of them are interactive simulations and computer algebra systems.

  19. Flexible Web service infrastructure for the development and deployment of predictive models.

    Science.gov (United States)

    Guha, Rajarshi

    2008-02-01

    The development of predictive statistical models is a common task in the field of drug design. The process of developing such models involves two main steps: building the model and then deploying the model. Traditionally such models have been deployed using Web page interfaces. This approach restricts the user to using the specified Web page, and using the model in other ways can be cumbersome. In this paper we present a flexible and generalizable approach to the deployment of predictive models, based on a Web service infrastructure using R. The infrastructure described allows one to access the functionality of these models using a variety of approaches ranging from Web pages to workflow tools. We highlight the advantages of this infrastructure by developing and subsequently deploying random forest models for two data sets.

  20. Archiving Web Sites for Preservation and Access: MODS, METS and MINERVA

    Science.gov (United States)

    Guenther, Rebecca; Myrick, Leslie

    2006-01-01

    Born-digital material such as archived Web sites provides unique challenges in ensuring access and preservation. This article examines some of the technical challenges involved in harvesting and managing Web archives as well as metadata strategies to provide descriptive, technical, and preservation related information about archived Web sites,…

  1. Content accessibility of Web documents: Overview of concepts and needed standards

    DEFF Research Database (Denmark)

    Alapetite, A.

    2006-01-01

    to broaden the scope to any type of user and any type of use case. The document provides an introduction to some required concepts and technical standards for designing accessible Web sites. A brief review of the legal requirements in a few countries for Web accessibility complements the recommendations...

  2. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester

  3. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Keulen, van Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester wh

  4. Designing A General Deep Web Access Approach Based On A Newly Introduced Factor; Harvestability Factor (HF)

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; van Keulen, Maurice; Hiemstra, Djoerd

    2014-01-01

    The growing need to access more and more information draws attention to the huge amount of data hidden behind web forms, known as the deep web. To make this data accessible, harvesters have a crucial role. Targeting different domains and websites enhances the need to have a general-purpose harvester wh

  5. Tools for the Evaluation of Web Accessibility

    National Research Council Canada - National Science Library

    Esmeralda Serrano Mascaraque

    2009-01-01

    ...: Web accessibility, evaluation tools, accessibility programs, browsers. ABSTRACT There are different systems to check whether a website is accessible or not; among them we can point out the automated tools that help evaluate, through verification of de facto (the average) standards, the global accessibility that the contents of a website present,...

  6. DIANA-microT web server: elucidating microRNA functions through target prediction.

    Science.gov (United States)

    Maragkakis, M; Reczko, M; Simossis, V A; Alexiou, P; Papadopoulos, G L; Dalamagas, T; Giannopoulos, G; Goumas, G; Koukis, E; Kourtis, K; Vergoulis, T; Koziris, N; Sellis, T; Tsanakas, P; Hatzigeorgiou, A G

    2009-07-01

    Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions with a user-friendly interface, providing extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as the gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that help in the evaluation of the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets (66%). The DIANA-microT web server is freely available at www.microrna.gr/microT.

  7. Optimal foraging, not biogenetic law, predicts spider orb web allometry

    Science.gov (United States)

    Gregorič, Matjaž; Kiesbüy, Heine C.; Quiñones Lebrón, Shakira G.; Rozman, Alenka; Agnarsson, Ingi; Kuntner, Matjaž

    2013-03-01

    The biogenetic law posits that the ontogeny of an organism recapitulates the pattern of evolutionary changes. Morphological evidence has offered some support for the hypothesis, but also considerable evidence against it. However, the biogenetic law in behavior remains underexplored. As a physical manifestation of behavior, spider webs offer an interesting model for the study of ontogenetic behavioral changes. In orb-weaving spiders, web symmetry often gets distorted through ontogeny, and these changes have been interpreted to reflect the biogenetic law. Here, we test the biogenetic law hypothesis against the alternative, the optimal foraging hypothesis, by studying the allometry of Leucauge venusta orb webs. These webs range in inclination from vertical through tilted to horizontal; the biogenetic law predicts that allometry relates to ontogenetic stage, whereas optimal foraging predicts that allometry relates to gravity. Specifically, pronounced asymmetry should only be seen in vertical webs under optimal foraging theory. We show that, through ontogeny, vertical webs in L. venusta become more asymmetrical, in contrast to tilted and horizontal webs. The biogenetic law thus cannot explain L. venusta web allometry; our results instead support optimization of foraging area in response to spider size.

  8. An Empirical Evaluation of Web System Access for Smartphone Clients

    Directory of Open Access Journals (Sweden)

    Scott Fowler

    2012-11-01

    Full Text Available As smartphone clients are restricted in computational power and bandwidth, it is important to minimise the overhead of transmitted messages. This paper identifies and studies methods that reduce the amount of data being transferred via wireless links between a web service client and a web service. Measurements were performed in a real environment based on a web service prototype providing public transport information for the city of Hamburg in Germany, using actual wireless links with a mobile smartphone device. REST-based web services using the data exchange formats JSON, XML and Fast Infoset were evaluated against the existing SOAP-based web service.
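    A toy payload-size comparison in the spirit of this evaluation appears below: the same transport record serialized as JSON and as XML (Fast Infoset, a binary XML encoding, is omitted here). The record fields are invented; the point is only that format choice changes the bytes sent over a constrained wireless link.

```python
# Compare serialized sizes of the same record in JSON and XML; the transit
# record below is an illustrative stand-in for the prototype's data.
import json
import xml.etree.ElementTree as ET

record = {"line": "U3", "stop": "Hauptbahnhof", "departure": "12:34", "delay_min": 2}

json_bytes = json.dumps(record).encode()

root = ET.Element("departure")
for k, v in record.items():
    ET.SubElement(root, k).text = str(v)
xml_bytes = ET.tostring(root)

print(f"JSON: {len(json_bytes)} bytes, XML: {len(xml_bytes)} bytes")
```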

  9. Advantages of combined transmembrane topology and signal peptide prediction--the Phobius web server

    DEFF Research Database (Denmark)

    Käll, Lukas; Krogh, Anders; Sonnhammer, Erik L L

    2007-01-01

    predicted transmembrane topologies overlap. This impairs predictions of 5-10% of the proteome, hence this is an important issue in protein annotation. To address this problem, we previously designed a hidden Markov model, Phobius, that combines transmembrane topology and signal peptide predictions. The method makes an optimal choice between transmembrane segments and signal peptides, and also allows constrained and homology-enriched predictions. We here present a web interface (http://phobius.cgb.ki.se and http://phobius.binf.ku.dk) to access Phobius. Publication date: 2007-Jul

  10. Development of Web Accessibility: Policies, Theories and Approaches

    Institute of Scientific and Technical Information of China (English)

    Xiaoming Zeng

    2006-01-01

    The article is intended to introduce the readers to the concept and background of Web accessibility in the United States. I will first discuss different definitions of Web accessibility. The beneficiaries of accessible Web or the sufferers from inaccessible Web will be discussed based on the type of disability. The importance of Web accessibility will be introduced from the perspectives of ethical, demographic, legal, and financial importance. Web accessibility related standards and legislations will be discussed in great detail. Previous research on evaluating Web accessibility will be presented. Lastly, a system for automated Web accessibility transformation will be introduced as an alternative approach for enhancing Web accessibility.

  11. Snippet-based relevance predictions for federated web search

    OpenAIRE

    Demeester, Thomas; Nguyen, Dong-Phuong; Trieschnigg, Rudolf Berend; Develder, Chris; Hiemstra, Djoerd

    2013-01-01

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and pages from a diverse set of actual Web search engines. A linear classifier is trained to predict the snippet-based user estimate of page relevance, but also, to predict the actual page relevance, ...

  12. GAPforAPE: an augmented browsing system to improve Web 2.0 accessibility

    Science.gov (United States)

    Mirri, Silvia; Salomoni, Paola; Prandi, Catia; Muratori, Ludovico Antonio

    2012-09-01

    The Web 2.0 evolution has spread more interactive technologies which affect accessibility for users who navigate the Web by using assistive technologies. In particular, the partial download of new data, continuous refreshing, and the massive use of scripting can represent significant barriers, especially for people with visual impairments, who enjoy the Web by means of screen readers. On the other hand, such technologies can be an opportunity, because they can provide a new means of transcoding Web content, making the Web more accessible. In this article we present GAPforAPE, an augmented browsing system (based on Web browser extensions) which offers a user-profiling system and transcodes Web content according to constraints declared by users: the same Web page is provided to every user, but GAPforAPE computes adequate customizations by exploiting the same scripting technologies which usually affect Web page accessibility. GAPforAPE imitates the behavior of screen readers: it applies a specific set of transcoding scripts devoted to a given Web site when available, and a default set of transcoding operations otherwise. The continuous and quick evolution of the Web has shown that a crowdsourcing system is a desirable solution, letting the transcoding scripts evolve in the same way.
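    A server-side analogue of the kind of transcoding GAPforAPE performs in the browser can be sketched with BeautifulSoup: rewrite a page so dynamically refreshed regions are announced by screen readers and images never lack alternative text. The two rules below are illustrative defaults, not GAPforAPE's actual scripts.

```python
# Hedged sketch of accessibility transcoding rules applied to a page; the
# sample HTML and both rewrite rules are illustrative assumptions.
from bs4 import BeautifulSoup

HTML = '<div id="ticker">score: 1-0</div><img src="chart.png">'

soup = BeautifulSoup(HTML, "html.parser")
for div in soup.find_all("div"):
    div["aria-live"] = "polite"            # announce partial page updates
for img in soup.find_all("img"):
    if not img.get("alt"):
        img["alt"] = img.get("src", "image")  # ensure alt text exists

print(soup)
```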

  13. Assessment the web accessibility of e-shops of selected Polish e-commerce companies

    Directory of Open Access Journals (Sweden)

    Anna Michalczyk

    2015-11-01

    Full Text Available The article attempts to answer the question: how do the e-shop websites operated by selected Polish e-commerce companies present themselves in terms of web accessibility? It discusses the essence of web accessibility in the context of the WCAG 2.0 standard, the business benefits companies gain from owning an accessible website that fulfils the WCAG 2.0 recommendations, and an assessment of the level of web accessibility of the e-shops of selected Polish e-commerce companies.

  14. Improved query difficulty prediction for the web

    NARCIS (Netherlands)

    Hauff, C.; Murdock, V.; Baeza-Yates, R.

    2008-01-01

    Query performance prediction aims to predict whether a query will have a high average precision given retrieval from a particular collection, or low average precision. An accurate estimator of the quality of search engine results can allow the search engine to decide to which queries to apply query

  15. Hand Society and Matching Program Web Sites Provide Poor Access to Information Regarding Hand Surgery Fellowship.

    Science.gov (United States)

    Hinds, Richard M; Klifto, Christopher S; Naik, Amish A; Sapienza, Anthony; Capo, John T

    2016-08-01

    The Internet is a common resource for applicants to hand surgery fellowships; however, the quality and accessibility of fellowship information online is unknown. The objectives of this study were to evaluate the accessibility of hand surgery fellowship Web sites and to assess the quality of information provided via program Web sites. Hand fellowship Web site accessibility was evaluated by reviewing the American Society for Surgery of the Hand (ASSH) directory on November 16, 2014 and the National Resident Matching Program (NRMP) fellowship directory on February 12, 2015, and by performing an independent Google search on November 25, 2014. Accessible Web sites were then assessed for the quality of the presented information. A total of 81 programs were identified, with the ASSH directory featuring direct links to 32% of program Web sites and the NRMP directory directly linking to 0%. A Google search yielded direct links to 86% of program Web sites. The quality of presented information varied greatly among the 72 accessible Web sites. Program description (100%), fellowship application requirements (97%), program contact email address (85%), and research requirements (75%) were the most commonly presented components of fellowship information. Hand fellowship program Web sites can be accessed from the ASSH directory and, to a lesser extent, the NRMP directory. However, a Google search is the most reliable method to access online fellowship information. Of the assessable programs, all featured a program description, though the quality of the remaining information was variable. Hand surgery fellowship applicants may face some difficulties when attempting to gather program information online. Future efforts should focus on improving the accessibility and content quality of hand surgery fellowship program Web sites.

  16. Design and Implementation of Open-Access Web-Based Education ...

    African Journals Online (AJOL)

    Design and Implementation of Open-Access Web-Based Education Useful ... using an open source platform which will be more flexible, and cost effective due to free licensing. ... It was observed to have service requirements of online activities.

  17. Systems and Services for Real-Time Web Access to NPP Data Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Global Science & Technology, Inc. (GST) proposes to investigate information processing and delivery technologies to provide near-real-time Web-based access to...

  18. Facilitating access to the web of data a guide for librarians

    CERN Document Server

    Stuart, David

    2011-01-01

    Offers an introduction to the web of data and the semantic web, exploring technologies including APIs, microformats and linked data. This title includes topical commentary and practical examples that explore how information professionals can harness the power of this phenomenon to inform strategy and become facilitators of access to data.

  19. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    OpenAIRE

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc.

  20. WebGeocalc and Cosmographia: Modern Tools to Access SPICE Archives

    Science.gov (United States)

    Semenov, B. V.; Acton, C. H.; Bachman, N. J.; Ferguson, E. W.; Rose, M. E.; Wright, E. D.

    2017-06-01

    The WebGeocalc (WGC) web client-server tool and the SPICE-enhanced Cosmographia visualization program are two new ways for accessing space mission geometry data provided in the PDS SPICE kernel archives and by mission operational SPICE kernel sets.

  1. Snippet-based relevance predictions for federated web search

    NARCIS (Netherlands)

    Demeester, Thomas; Nguyen, Dong; Trieschnigg, Dolf; Develder, Chris; Hiemstra, Djoerd

    2013-01-01

    How well can the relevance of a page be predicted, purely based on snippets? This would be highly useful in a Federated Web Search setting where caching large amounts of result snippets is more feasible than caching entire pages. The experiments reported in this paper make use of result snippets and

  2. SIFT web server: predicting effects of amino acid substitutions on proteins.

    Science.gov (United States)

    Sim, Ngak-Leng; Kumar, Prateek; Hu, Jing; Henikoff, Steven; Schneider, Georg; Ng, Pauline C

    2012-07-01

    The Sorting Intolerant from Tolerant (SIFT) algorithm predicts the effect of coding variants on protein function. It was first introduced in 2001, with a corresponding website that provides users with predictions on their variants. Since its release, SIFT has become one of the standard tools for characterizing missense variation. We have updated SIFT's genome-wide prediction tool since our last publication in 2009, and added new features to the insertion/deletion (indel) tool. We also show accuracy metrics on independent data sets. The original developers have hosted the SIFT web server at FHCRC and JCVI; the web server is currently located at BII. The URL is http://sift-dna.org (24 May 2012, date last accessed).

  3. iDrug: a web-accessible and interactive drug discovery and design platform.

    Science.gov (United States)

    Wang, Xia; Chen, Haipeng; Yang, Feng; Gong, Jiayu; Li, Shiliang; Pei, Jianfeng; Liu, Xiaofeng; Jiang, Hualiang; Lai, Luhua; Li, Honglin

    2014-01-01

    The progress in computer-aided drug design (CADD) approaches over the past decades has accelerated early-stage pharmaceutical research. Many powerful standalone tools for CADD have been developed in academia. As programs are developed by various research groups, a consistent, user-friendly online graphical working environment, combining computational techniques such as pharmacophore mapping, similarity calculation, scoring, and target identification, is needed. We present a versatile, user-friendly, and efficient online tool for computer-aided drug design based on pharmacophore and 3D molecular similarity searching. The web interface enables binding site detection, virtual screening hit identification, and drug target prediction in an interactive manner through a seamless interface to all adapted packages (e.g., Cavity, PocketV.2, PharmMapper, SHAFTS). Several commercially available compound databases for hit identification and a well-annotated pharmacophore database for drug target prediction were integrated into iDrug as well. The web interface provides tools for real-time molecular building/editing, converting, displaying, and analyzing. All the customized configurations of the functional modules can be accessed through the featured session files provided, which can be saved to the local disk and uploaded to resume or update previous work. iDrug is easy to use, and provides a novel, fast and reliable tool for conducting drug design experiments. By using iDrug, various molecular design processing tasks can be submitted and visualized simply in one browser without locally installing any standalone modeling software. iDrug is accessible free of charge at http://lilab.ecust.edu.cn/idrug.

  4. Accessing the SEED Genome Databases via Web Services API: Tools for Programmers

    Directory of Open Access Journals (Sweden)

    Vonstein Veronika

    2010-06-01

    Full Text Available Abstract Background The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
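    The abstract notes that example code in Perl, Python, and Java accompanies the services. The Python fragment below is a hedged sketch only; the route and parameter names are placeholders, and the real method names come from the project's published API documentation.

```python
# Hedged sketch of programmatic access in the spirit of the SEED services;
# the base URL, route, and parameters are hypothetical placeholders.
import requests

def get_annotations(base_url, genome_id):
    resp = requests.get(f"{base_url}/genome_annotations",   # hypothetical route
                        params={"genome": genome_id}, timeout=60)
    resp.raise_for_status()
    return resp.json()

# Example usage against a real deployment would look like:
# anns = get_annotations("https://example.org/seed_api", "83333.1")
```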

  5. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-02-01

    Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources-file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources-the ability to read and write contacts list, local files, etc.-to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content

  6. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign

  7. Developing Guidelines for Evaluating the Adaptation of Accessible Web-Based Learning Materials

    Science.gov (United States)

    Radovan, Marko; Perdih, Mojca

    2016-01-01

    E-learning is a rapidly developing form of education. One of the key characteristics of e-learning is flexibility, which enables easier access to knowledge for everyone. Information and communications technology (ICT), which is e-learning's main component, enables alternative means of accessing the web-based learning materials that comprise the…

  8. A Distributed Intranet/Web Solution to Integrated Management of Access Networks

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In this article, we describe the present situation of access network management, enumerate a few problems encountered during the development of network management systems, then put forward a distributed Intranet/Web solution named iMAN for the integrated management of access networks, present its architecture and protocol stack, and describe its application in practice.

  9. Examining How Web Designers' Activity Systems Address Accessibility: Activity Theory as a Guide

    Science.gov (United States)

    Russell, Kyle

    2014-01-01

    While accessibility of information technologies is often acknowledged as important, it is frequently not well addressed in practice. The purpose of this study was to examine the work of web developers and content managers to explore why and how accessibility is or is not addressed as an objective as websites are planned, built and maintained.…

  10. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric; Geest, van der Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This…

  11. Enhancing Independent Internet Access for Individuals with Mental Retardation through Use of a Specialized Web Browser: A Pilot Study.

    Science.gov (United States)

    Davies, Daniel K.; Stock, Steven E.; Wehmeyer, Michael L.

    2001-01-01

    In this study, a prototype web browser, called Web Trek, that utilizes multimedia to provide access for individuals with cognitive disabilities was developed and pilot-tested with 12 adults with mental retardation. The Web Trek browser provided greater independence in accessing the Internet compared to Internet Explorer. (Contains references.)…

  12. Accesibilidad vs usabilidad web: evaluación y correlación [Accessibility vs. Web Usability: Evaluation and Correlation]

    Directory of Open Access Journals (Sweden)

    Esmeralda Serrano Mascaraque

    2009-08-01

    Full Text Available Government agencies must provide information resources and deliver services through various media in order to uphold every citizen's right to information. The Web is currently one of the most widespread of these resources, so it is essential to evaluate the degree of accessibility of the content published on it. To this end, the necessary tools and software are applied and the accessibility level of a representative group of websites is evaluated. The study also attempts to determine whether there is any relationship between accessibility and usability, since both are desirable aspects of proper Web design (and accessibility is even legally required).

  13. Security Guidelines for the Development of Accessible Web Applications through the implementation of intelligent systems

    Directory of Open Access Journals (Sweden)

    Luis Joyanes Aguilar

    2009-12-01

    Full Text Available The significant increase in threats, attacks and vulnerabilities affecting the Web in recent years has driven the development and implementation of tools and methods to ensure privacy, confidentiality and data integrity for users and businesses. Even so, these tools do not always guarantee that information flows in a secure manner. Many of these security tools and methods cannot be used by people who have disabilities or who rely on assistive technologies to access the Web efficiently. Among these inaccessible security tools are the virtual keyboard, the CAPTCHA and other technologies that help to ensure safety on the Internet and are used to combat the malicious code and attacks that have increased in recent times on the Web. Intelligent systems can detect and retrieve information about the characteristics and properties of the tools, hardware devices and software with which the user is accessing a web application; by analyzing and interpreting this information, such systems can infer and automatically adjust the characteristics these tools need in order to be accessible to anyone, regardless of disability or navigation context. This paper defines a set of guidelines and specific features that security tools and methods should have to ensure Web accessibility through the implementation of intelligent systems.

  14. Kids Not Getting the Web Access They Want

    Science.gov (United States)

    Minkel, Walter

    2004-01-01

    A new study shows that students aged 6 to 17 who have access to the Internet at home are growing more and more dissatisfied with the access to the Net available to them at school. Grunwald Associates, a California market research firm, released the results of their survey, "Children, Families and the Internet," on December 4. Seventy-six percent…

  15. EST-PAC a web package for EST annotation and protein sequence prediction

    Directory of Open Access Journals (Sweden)

    Strahm Yvan

    2006-10-01

    Full Text Available Abstract With the decreasing cost of DNA sequencing technology and the vast diversity of biological resources, researchers increasingly face the basic challenge of annotating a larger number of expressed sequence tags (ESTs) from a variety of species. This typically consists of a series of repetitive tasks, which should be automated and easy to use. The results of these annotation tasks need to be stored and organized in a consistent way. All these operations should be self-installing, platform independent, easy to customize and amenable to using distributed bioinformatics resources available on the Internet. In order to address these issues, we present EST-PAC, a web-oriented multi-platform software package for expressed sequence tag (EST) annotation. EST-PAC provides a solution for the administration of EST and protein sequence annotations accessible through a web interface. Three aspects of EST annotation are automated: (1) searching local or remote biological databases for sequence similarities using BLAST services, (2) predicting protein coding sequence from EST data and (3) annotating predicted protein sequences with functional domain predictions. In practice, EST-PAC integrates the BLASTALL suite, EST-Scan2 and HMMER in a relational database system accessible through a simple web interface. EST-PAC also takes advantage of the relational database to allow consistent storage, powerful queries of results and management of the annotation process. The system allows users to customize annotation strategies and provides an open-source data-management environment for research and education in bioinformatics.

  16. JavaScript Access to DICOM Network and Objects in Web Browser.

    Science.gov (United States)

    Drnasin, Ivan; Grgić, Mislav; Gogić, Goran

    2017-01-30

    The digital imaging and communications in medicine (DICOM) 3.0 standard provides the baseline for picture archiving and communication systems (PACS). The development of the Internet and various communication media initiated demand for non-DICOM access to PACS systems. The ever-increasing use of web browsers, laptops and handheld devices, as opposed to desktop applications and static organizational computers, led to the development of different web technologies, which the DICOM standard officials subsequently accepted as tools of alternative access. This paper provides an overview of the current state of development of web access technology to DICOM repositories. It presents a different approach of using HTML5 features of web browsers through the JavaScript language and the WebSocket protocol, enabling real-time communication with DICOM repositories. A JavaScript DICOM network library, a DICOM-to-WebSocket proxy and a proof-of-concept web application that qualifies as a DICOM 3.0 device were developed.

  17. Study on Access Control for Web Services Based on ABAC

    Institute of Scientific and Technical Information of China (English)

    夏春涛; 杨艳丽; 曹利峰

    2012-01-01

    To deal with access control for web services, the shortcomings of traditional access control models in web services applications are analysed. The definition of a web-services-oriented attribute-based access control (ABAC) model is then presented, and the ABAC architecture is designed. Furthermore, a fine-grained access control system for web services is implemented with the eXtensible Access Control Markup Language (XACML); the application of the system has effectively protected the resources of web services.

  18. Enhancing Access to Scientific Models through Standard Web Services Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to investigate the feasibility and value of the "Software as a Service" paradigm in facilitating access to Earth Science numerical models. We...

  19. The WebACS - An Accessible Graphical Editor.

    Science.gov (United States)

    Parker, Stefan; Nussbaum, Gerhard; Pölzer, Stephan

    2017-01-01

    This paper is about the solution to accessibility problems encountered when implementing a graphical editor, a major challenge being the comprehension of the relationships between graphical components, which needs to be guaranteed for blind and vision-impaired users. In this concrete case, the HTML5 canvas and JavaScript were used. Accessibility was achieved by implementing a list view of the elements, which also enhances the usability of the editor.

  20. Accessibility of dynamic web applications with emphasis on visually impaired users

    Directory of Open Access Journals (Sweden)

    Kingsley Okoye

    2014-09-01

    Full Text Available As the internet is fast migrating from static web pages to dynamic web pages, the users with visual impairment find it confusing and challenging when accessing the contents on the web. There is evidence that dynamic web applications pose accessibility challenges for the visually impaired users. This study shows that a difference can be made through the basic understanding of the technical requirement of users with visual impairment and addresses a number of issues pertinent to the accessibility needs for such users. We propose that only by designing a framework that is structurally flexible, by removing unnecessary extras and thereby making every bit useful (fit-for-purpose), will visually impaired users be given an increased capacity to intuitively access e-contents. This theory is implemented in a dynamic website for the visually impaired designed in this study. Designers should be aware of how the screen reading software works to enable them to make reasonable adjustments or provide alternative content that still corresponds to the objective content, to increase the possibility of offering faultless service to such users. The result of our research reveals that materials can be added to a content repository or re-used from existing ones by identifying the content types and then transforming them into a flexible and accessible one that fits the requirements of the visually impaired through our method (no-frill + agile methodology) rather than computing in advance or designing according to a given specification.

  1. Predicting subcontractor performance using web-based Evolutionary Fuzzy Neural Networks.

    Science.gov (United States)

    Ko, Chien-Ho

    2013-01-01

    Subcontractor performance directly affects project success. The use of inappropriate subcontractors may result in individual work delays, cost overruns, and quality defects throughout the project. This study develops web-based Evolutionary Fuzzy Neural Networks (EFNNs) to predict subcontractor performance. EFNNs are a fusion of Genetic Algorithms (GAs), Fuzzy Logic (FL), and Neural Networks (NNs). FL is primarily used to mimic high-level decision-making processes and deal with uncertainty in the construction industry. NNs are used to identify the association between previous performance and future status when predicting subcontractor performance. GAs optimize the parameters required by FL and NNs. The EFNNs encode FL and NNs using floating-point numbers to shorten the string length. A multi-cut-point crossover operator is used to explore the parameter space and retain solution legality. Finally, the applicability of the proposed EFNNs is validated using real subcontractors. The EFNNs are evolved using 22 historical patterns and tested using 12 unseen cases. Application results show that the proposed EFNNs surpass FL and NNs in predicting subcontractor performance. The proposed approach improves prediction accuracy and reduces the effort required to predict subcontractor performance, providing field operators with web-based remote access to a reliable, scientific prediction mechanism.
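
    As an illustration of one ingredient, a multi-cut-point crossover over float-encoded chromosomes might look like the following sketch (a generic GA operator under assumed parameter encodings, not the authors' exact implementation):

    ```python
    import random

    def multi_cut_crossover(parent_a, parent_b, n_cuts=3):
        """Exchange alternating segments of two float-encoded chromosomes."""
        assert len(parent_a) == len(parent_b) and n_cuts < len(parent_a)
        cuts = sorted(random.sample(range(1, len(parent_a)), n_cuts))
        child_a, child_b = list(parent_a), list(parent_b)
        swap, prev = False, 0
        for cut in cuts + [len(parent_a)]:
            if swap:  # swap every other segment between the two children
                child_a[prev:cut], child_b[prev:cut] = child_b[prev:cut], child_a[prev:cut]
            swap, prev = not swap, cut
        return child_a, child_b

    a = [0.11, 0.52, 0.93, 0.24, 0.65, 0.36]   # e.g. fuzzy membership parameters
    b = [0.81, 0.42, 0.13, 0.74, 0.25, 0.96]   # and NN weights, encoded as floats
    print(multi_cut_crossover(a, b))
    ```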

  2. An Efficient Hybrid Algorithm for Mining Web Frequent Access Patterns

    Institute of Scientific and Technical Information of China (English)

    ZHAN Li-qiang; LIU Da-xin

    2004-01-01

    We propose an efficient hybrid algorithm, WDHP, for mining frequent access patterns. WDHP adopts the techniques of DHP to optimize its performance, using a hash table to filter the candidate set and trimming the database. Whenever the database is trimmed to a size less than a specified threshold, the algorithm loads the database into main memory by constructing a tree, and finds frequent patterns on the tree. Experiments show that WDHP outperforms the DHP algorithm and the main-memory-based algorithm WAP in execution efficiency.
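
    The DHP-style hash filtering can be sketched as follows (a generic reconstruction of the technique, not the WDHP code):

    ```python
    from collections import Counter
    from itertools import combinations

    def dhp_candidate_pairs(transactions, min_support, n_buckets=101):
        """DHP-style pruning: while counting 1-itemsets, hash every 2-itemset
        into a bucket; a candidate pair survives only if both its items are
        frequent AND its bucket count could still reach min_support."""
        item_counts, buckets = Counter(), Counter()
        for t in transactions:
            items = sorted(set(t))
            item_counts.update(items)
            for pair in combinations(items, 2):
                buckets[hash(pair) % n_buckets] += 1
        frequent = {i for i, c in item_counts.items() if c >= min_support}
        return {pair for pair in combinations(sorted(frequent), 2)
                if buckets[hash(pair) % n_buckets] >= min_support}

    sessions = [["home", "news", "sports"], ["home", "news"], ["news", "sports"]]
    print(dhp_candidate_pairs(sessions, min_support=2))
    ```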

  3. Cloud-based Web Services for Near-Real-Time Web access to NPP Satellite Imagery and other Data

    Science.gov (United States)

    Evans, J. D.; Valente, E. G.

    2010-12-01

    We are building a scalable, cloud computing-based infrastructure for Web access to near-real-time data products synthesized from the U.S. National Polar-Orbiting Environmental Satellite System (NPOESS) Preparatory Project (NPP) and other geospatial and meteorological data. Given recent and ongoing changes in the the NPP and NPOESS programs (now Joint Polar Satellite System), the need for timely delivery of NPP data is urgent. We propose an alternative to a traditional, centralized ground segment, using distributed Direct Broadcast facilities linked to industry-standard Web services by a streamlined processing chain running in a scalable cloud computing environment. Our processing chain, currently implemented on Amazon.com's Elastic Compute Cloud (EC2), retrieves raw data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) and synthesizes data products such as Sea-Surface Temperature, Vegetation Indices, etc. The cloud computing approach lets us grow and shrink computing resources to meet large and rapid fluctuations (twice daily) in both end-user demand and data availability from polar-orbiting sensors. Early prototypes have delivered various data products to end-users with latencies between 6 and 32 minutes. We have begun to replicate machine instances in the cloud, so as to reduce latency and maintain near-real time data access regardless of increased data input rates or user demand -- all at quite moderate monthly costs. Our service-based approach (in which users invoke software processes on a Web-accessible server) facilitates access into datasets of arbitrary size and resolution, and allows users to request and receive tailored and composite (e.g., false-color multiband) products on demand. To facilitate broad impact and adoption of our technology, we have emphasized open, industry-standard software interfaces and open source software. Through our work, we envision the widespread establishment of similar, derived, or interoperable systems for

  4. Predicting biomedical document access as a function of past use.

    Science.gov (United States)

    Goodwin, J Caleb; Johnson, Todd R; Cohen, Trevor; Herskovic, Jorge R; Bernstam, Elmer V

    2012-01-01

    To determine whether past access to biomedical documents can predict future document access. The authors used 394 days of query log (August 1, 2009 to August 29, 2010) from PubMed users in the Texas Medical Center, which is the largest medical center in the world. The authors evaluated two document access models based on the work of Anderson and Schooler. The first is based on how frequently a document was accessed. The second is based on both frequency and recency. The model based only on frequency of past access was highly correlated with the empirical data (R²=0.932), whereas the model based on frequency and recency had a much lower correlation (R²=0.668). The frequency-only model accurately predicted whether a document will be accessed based on past use. Modeling accesses as a function of frequency requires storing only the number of accesses and the creation date for the document. This model requires low storage overheads and is computationally efficient, making it scalable to large corpora such as MEDLINE. It is feasible to accurately model the probability of a document being accessed in the future based on past accesses.
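
    As a rough illustration of the frequency-only model (a toy sketch, not the authors' exact formulation), a document's probability of being accessed next can be estimated from its share of past accesses:

    ```python
    from collections import Counter

    def access_share(log):
        """Frequency-only estimate: a document's probability of being the
        next access is its share of all past accesses (recency ignored)."""
        counts = Counter(log)
        total = sum(counts.values())
        return {doc: n / total for doc, n in counts.most_common()}

    log = ["pmid:101", "pmid:202", "pmid:101", "pmid:303", "pmid:101"]
    print(access_share(log))   # pmid:101 -> 0.6, the most likely next access
    ```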

  5. Pilot Evaluation of a Web-Based Intervention Targeting Sexual Health Service Access

    Science.gov (United States)

    Brown, K. E.; Newby, K.; Caley, M.; Danahay, A.; Kehal, I.

    2016-01-01

    Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among…

  6. Towards automated processing of the right of access in inter-organizational Web Service compositions

    DEFF Research Database (Denmark)

    Herkenhöner, Ralph; De Meer, Hermann; Jensen, Meiko

    2010-01-01

    with trade secret protection. In this paper, we present an automated architecture to enable exercising the right of access in the domain of inter-organizational business processes based on Web Services technology. Deriving its requirements from the legal, economical, and technical obligations, we show...

  8. Mirage of us: A reflection on the role of the Web in widening access ...

    African Journals Online (AJOL)

    Mirage of us: A reflection on the role of the Web in widening access to references on Southern African arts, culture and heritage. ... a wiki encyclopaedia facilitates networked social collaboration uniquely suited to the co-operative principles of ...

  9. Programmatic access to data and information at the IRIS DMC via web services

    Science.gov (United States)

    Weertman, B. R.; Trabant, C.; Karstens, R.; Suleiman, Y. Y.; Ahern, T. K.; Casey, R.; Benson, R. B.

    2011-12-01

    The IRIS Data Management Center (DMC) has developed a suite of web services that provide access to the DMC's time series holdings, their related metadata and earthquake catalogs. In addition, services are available to perform simple, on-demand time series processing at the DMC before the data are shipped to the user. The primary goal is to provide programmatic access to data and processing services in a manner usable by and useful to the research community. The web services are relatively simple to understand and use, and will form the foundation on which future DMC access tools will be built. Based on standard Web technologies, they can be accessed programmatically with a wide range of programming languages (e.g. Perl, Python, Java), command-line utilities such as wget and curl, or with any web browser. We anticipate these services being used for everything from simple command-line access and shell scripts to integration within complex data-processing software. In addition to improving access to our data by the seismological community, the web services will also make our data more accessible to other disciplines. The web services available from the DMC include ws-bulkdataselect for the retrieval of large volumes of miniSEED data, ws-timeseries for the retrieval of individual segments of time series data in a variety of formats (miniSEED, SAC, ASCII, audio WAVE, and PNG plots) with optional signal processing, ws-station for station metadata in StationXML format, ws-resp for the retrieval of instrument response in RESP format, ws-sacpz for the retrieval of sensor response in the SAC poles and zeros convention and ws-event for the retrieval of earthquake catalogs. To make the services even easier to use, the DMC is developing a library that allows Java programmers to seamlessly retrieve and integrate DMC information into their own programs. The library will handle all aspects of dealing with the services and will parse the returned…
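
    For illustration, a minimal client call in the spirit of these services; the endpoint shown follows IRIS's public timeseries service URL pattern, but the exact path and parameter names should be verified against the DMC documentation:

    ```python
    import requests

    # Hypothetical query modeled on the ws-timeseries service described above.
    params = {
        "net": "IU", "sta": "ANMO", "loc": "00", "cha": "BHZ",
        "starttime": "2010-02-27T06:30:00", "endtime": "2010-02-27T06:35:00",
        "output": "ascii",
    }
    resp = requests.get("http://service.iris.edu/irisws/timeseries/1/query",
                        params=params, timeout=60)
    resp.raise_for_status()
    print(resp.text[:200])   # first lines of the returned ASCII time series
    ```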

  10. Meta4: a web application for sharing and annotating metagenomic gene predictions using web services.

    Science.gov (United States)

    Richardson, Emily J; Escalettes, Franck; Fotheringham, Ian; Wallace, Robert J; Watson, Mick

    2013-01-01

    Whole-genome shotgun metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website, code is available on Github, a cloud image is available, and an example implementation can be seen at.

  11. New data access with HTTP/WebDAV in the ATLAS experiment

    CERN Document Server

    Elmsheuser, Johannes; The ATLAS collaboration; Serfon, Cedric; Garonne, Vincent; Blunier, Sylvain; Lavorini, Vincenzo; Nilsson, Paul

    2015-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in the years 2010-2012, distributed computing has become the established way to analyze collider data. The ATLAS experiment Grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centres to smaller university clusters. So far the storage technologies and access protocols to the clusters that host this tremendous amount of data vary from site to site. HTTP/WebDAV offers the possibility to use a unified industry standard to access the storage. We present the deployment and testing of HTTP/WebDAV for local and remote data access in the ATLAS experiment for the new data management system Rucio and the PanDA workload management system. Deployment and large scale tests have been performed using the Grid testing system HammerCloud and the ROOT HTTP plugin Davix.

  13. Developing Access Control Model of Web OLAP over Trusted and Collaborative Data Warehouses

    Science.gov (United States)

    Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon

    This paper proposes the design and development of a Role-based Access Control (RBAC) model for the Single Sign-On (SSO) Web-OLAP query spanning over multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on dimension privilege specified in attribute certificate (AC) and user identification. Particularly, the way of attribute mapping between DW user authentication and privilege of dimensional access is illustrated. In our approach, we apply the multi-agent system to automate flexible and effective management of user authentication, role delegation as well as system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs) that incorporates the OLAP features and authentication and authorization enforcement in the multi-user and multi-data warehouse environment.
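
    A toy sketch (hypothetical roles and dimensions, not the A-COLD implementation) of the dimension-privilege check at the heart of such a model:

    ```python
    # A role carries a set of dimension privileges, as an attribute
    # certificate would; an OLAP query is authorized only if every
    # dimension it touches is covered by the role.
    ROLE_DIMS = {
        "regional_analyst": {"time", "region"},
        "finance_manager":  {"time", "region", "account"},
    }

    def authorize(role, query_dims):
        return set(query_dims) <= ROLE_DIMS.get(role, set())

    print(authorize("regional_analyst", {"time", "region"}))  # True
    print(authorize("regional_analyst", {"account"}))         # False
    ```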

  14. Berkeley PHOG: PhyloFacts orthology group prediction web server.

    Science.gov (United States)

    Datta, Ruchira S; Meacham, Christopher; Samad, Bushra; Neyer, Christoph; Sjölander, Kimmen

    2009-07-01

    Ortholog detection is essential in functional annotation of genomes, with applications to phylogenetic tree construction, prediction of protein-protein interaction and other bioinformatics tasks. We present here the PHOG web server employing a novel algorithm to identify orthologs based on phylogenetic analysis. Results on a benchmark dataset from the TreeFam-A manually curated orthology database show that PHOG provides a combination of high recall and precision competitive with both InParanoid and OrthoMCL, and allows users to target different taxonomic distances and precision levels through the use of tree-distance thresholds. For instance, OrthoMCL-DB achieved 76% recall and 66% precision on this dataset; at a slightly higher precision (68%) PHOG achieves 10% higher recall (86%). InParanoid achieved 87% recall at 24% precision on this dataset, while a PHOG variant designed for high recall achieves 88% recall at 61% precision, increasing precision by 37% over InParanoid. PHOG is based on pre-computed trees in the PhyloFacts resource, and contains over 366 K orthology groups with a minimum of three species. Predicted orthologs are linked to GO annotations, pathway information and biological literature. The PHOG web server is available at http://phylofacts.berkeley.edu/orthologs/.

  15. PredPlantPTS1: A Web Server for the Prediction of Plant Peroxisomal Proteins.

    Science.gov (United States)

    Reumann, Sigrun; Buchwald, Daniela; Lingner, Thomas

    2012-01-01

    Prediction of subcellular protein localization is essential to correctly assign unknown proteins to cell organelle-specific protein networks and to ultimately determine protein function. For metazoa, several computational approaches have been developed in the past decade to predict peroxisomal proteins carrying the peroxisome targeting signal type 1 (PTS1). However, plant-specific PTS1 protein prediction methods have been lacking up to now, and pre-existing methods generally were incapable of correctly predicting low-abundance plant proteins possessing non-canonical PTS1 patterns. Recently, we presented a machine learning approach that is able to predict PTS1 proteins for higher plants (spermatophytes) with high accuracy and which can correctly identify unknown targeting patterns, i.e., novel PTS1 tripeptides and tripeptide residues. Here we describe the first plant-specific web server PredPlantPTS1 for the prediction of plant PTS1 proteins using the above-mentioned underlying models. The server allows the submission of protein sequences from diverse spermatophytes and also performs well for mosses and algae. The easy-to-use web interface provides detailed output in terms of (i) the peroxisomal targeting probability of the given sequence, (ii) information whether a particular non-canonical PTS1 tripeptide has already been experimentally verified, and (iii) the prediction scores for the single C-terminal 14 amino acid residues. The latter allows identification of predicted residues that inhibit peroxisome targeting and which can be optimized using site-directed mutagenesis to raise the peroxisome targeting efficiency. The prediction server will be instrumental in identifying low-abundance and stress-inducible peroxisomal proteins and defining the entire peroxisomal proteome of Arabidopsis and agronomically important crop plants. PredPlantPTS1 is freely accessible at ppp.gobics.de.

  16. PredPlantPTS1: a web server for the prediction of plant peroxisomal proteins

    Directory of Open Access Journals (Sweden)

    Sigrun Reumann

    2012-08-01

    Full Text Available Prediction of subcellular protein localization is essential to correctly assign unknown proteins to cell organelle-specific protein networks and to ultimately determine protein function. For metazoa, several computational approaches have been developed in the past decade to predict peroxisomal proteins carrying the peroxisome targeting signal type 1 (PTS1). However, plant-specific PTS1 protein prediction methods have been lacking up to now, and pre-existing methods generally were incapable of correctly predicting low-abundance plant proteins possessing non-canonical PTS1 patterns. Recently, we presented a machine learning approach that is able to predict PTS1 proteins for higher plants (spermatophytes) with high accuracy and which can correctly identify unknown targeting patterns, i.e. novel PTS1 tripeptides and tripeptide residues. Here we describe the first plant-specific web server PredPlantPTS1 for the prediction of plant PTS1 proteins using the above-mentioned underlying models. The server allows the submission of protein sequences from diverse spermatophytes and also performs well for mosses and algae. The easy-to-use web interface provides detailed output in terms of (i) the peroxisomal targeting probability of the given sequence, (ii) information whether a particular non-canonical PTS1 tripeptide has already been experimentally verified, and (iii) the prediction scores for the single C-terminal 14 amino acid residues. The latter allows identification of predicted residues that inhibit peroxisome targeting and which can be optimized using site-directed mutagenesis to raise the peroxisome targeting efficiency. The prediction server will be instrumental in identifying low-abundance and stress-inducible peroxisomal proteins and defining the entire peroxisomal proteome of Arabidopsis and agronomically important crop plants. PredPlantPTS1 is freely accessible at ppp.gobics.de.

  17. The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction.

    Science.gov (United States)

    Roche, Daniel B; Buenavista, Maria T; Tetchner, Stuart J; McGuffin, Liam J

    2011-07-01

    The IntFOLD server is a novel independent server that integrates several cutting edge methods for the prediction of structure and function from sequence. Our guiding principles behind the server development were as follows: (i) to provide a simple unified resource that makes our prediction software accessible to all and (ii) to produce integrated output for predictions that can be easily interpreted. The output for predictions is presented as a simple table that summarizes all results graphically via plots and annotated 3D models. The raw machine readable data files for each set of predictions are also provided for developers, which comply with the Critical Assessment of Methods for Protein Structure Prediction (CASP) data standards. The server comprises an integrated suite of five novel methods: nFOLD4, for tertiary structure prediction; ModFOLD 3.0, for model quality assessment; DISOclust 2.0, for disorder prediction; DomFOLD 2.0 for domain prediction; and FunFOLD 1.0, for ligand binding site prediction. Predictions from the IntFOLD server were found to be competitive in several categories in the recent CASP9 experiment. The IntFOLD server is available at the following web site: http://www.reading.ac.uk/bioinf/IntFOLD/.

  18. A University Web Portal redesign applying accessibility patterns. Breaking Down Barriers for Visually Impaired Users

    Directory of Open Access Journals (Sweden)

    Hernán Sosa

    2015-08-01

    Full Text Available The WWW and ICTs have definitely become the preferred media for the interaction between society and its citizens, and public and private organizations today have the possibility of deploying their activities through the Web. In particular, university education is a domain where the benefits of these technological resources can strongly contribute to caring for students. However, most university Web portals are inaccessible to their user community (students, professors, and non-teaching staff, among others), since these portals do not take into account the needs of people with different capabilities. In this work, we propose an accessibility-pattern-driven process for the redesign of university Web portals, aiming to break down barriers for visually impaired users. The approach is applied to a real case study: the Web portal of Universidad Nacional de la Patagonia Austral (UNPA). The results come from applying accessibility recommendations and evaluation tools (automatic and manual) from internationally recognized organizations to both versions of the Web portal: the original and the redesigned one.

  19. Search, Read and Write: An Inquiry into Web Accessibility for People with Dyslexia.

    Science.gov (United States)

    Berget, Gerd; Herstad, Jo; Sandnes, Frode Eika

    2016-01-01

    Universal design in context of digitalisation has become an integrated part of international conventions and national legislations. A goal is to make the Web accessible for people of different genders, ages, backgrounds, cultures and physical, sensory and cognitive abilities. Political demands for universally designed solutions have raised questions about how it is achieved in practice. Developers, designers and legislators have looked towards the Web Content Accessibility Guidelines (WCAG) for answers. WCAG 2.0 has become the de facto standard for universal design on the Web. Some of the guidelines are directed at the general population, while others are targeted at more specific user groups, such as the visually impaired or hearing impaired. Issues related to cognitive impairments such as dyslexia receive less attention, although dyslexia is prevalent in at least 5-10% of the population. Navigation and search are two common ways of using the Web. However, while navigation has received a fair amount of attention, search systems are not explicitly included, although search has become an important part of people's daily routines. This paper discusses WCAG in the context of dyslexia for the Web in general and search user interfaces specifically. Although certain guidelines address topics that affect dyslexia, WCAG does not seem to fully accommodate users with dyslexia.

  20. Pilot evaluation of a web-based intervention targeting sexual health service access.

    Science.gov (United States)

    Brown, K E; Newby, K; Caley, M; Danahay, A; Kehal, I

    2016-04-01

    Sexual health service access is fundamental to good sexual health, yet interventions designed to address this have rarely been implemented or evaluated. In this article, pilot evaluation findings for a targeted public health behavior change intervention, delivered via a website and web-app, aiming to increase uptake of sexual health services among 13-19-year olds are reported. A pre-post questionnaire-based design was used. Matched baseline and follow-up data were identified from 148 respondents aged 13-18 years. Outcome measures were self-reported service access, self-reported intention to access services and beliefs about services and service access identified through needs analysis. Objective service access data provided by local sexual health services were also analyzed. Analysis suggests the intervention had a significant positive effect on psychological barriers to and antecedents of service access among females. Males, who reported greater confidence in service access compared with females, significantly increased service access by time 2 follow-up. Available objective service access data support the assertion that the intervention may have led to increases in service access. There is real promise for this novel digital intervention. Further evaluation is planned as the model is licensed to and rolled out by other local authorities in the United Kingdom.

  1. EntrezAJAX: direct web browser access to the Entrez Programming Utilities

    Directory of Open Access Journals (Sweden)

    Pallen Mark J

    2010-06-01

    Full Text Available Abstract Web applications for biology and medicine often need to integrate data from Entrez services provided by the National Center for Biotechnology Information. However, direct access to Entrez from a web browser is not possible due to 'same-origin' security restrictions. The use of "Asynchronous JavaScript and XML" (AJAX) to create rich, interactive web applications is now commonplace. The ability to access Entrez via AJAX would be advantageous in the creation of integrated biomedical web resources. We describe EntrezAJAX, which provides access to Entrez eUtils and is able to circumvent same-origin browser restrictions. EntrezAJAX is easily implemented by JavaScript developers and provides the same functionality as Entrez eUtils, as well as enhanced functionality to ease development. We provide easy-to-understand developer examples written in JavaScript to illustrate potential uses of this service. For the purposes of speed, reliability and scalability, EntrezAJAX has been deployed on Google App Engine, a freely available cloud service. The EntrezAJAX webpage is located at http://entrezajax.appspot.com/
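
    Outside the browser sandbox the same-origin restriction does not apply, so the eUtils endpoint that EntrezAJAX proxies can be called directly. A minimal server-side sketch using NCBI's public esearch service (response fields as documented by NCBI; verify against current docs):

    ```python
    import requests

    # Direct (non-browser) call to the eUtils esearch endpoint.
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": "web accessibility", "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["esearchresult"]["count"])   # matching PubMed records
    ```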

  2. ORION: a web server for protein fold recognition and structure prediction using evolutionary hybrid profiles.

    Science.gov (United States)

    Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G; Gelly, Jean-Christophe

    2016-06-20

    Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated webserver based on a new strategy that performs this task. The identification by ORION of suitable templates is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation (with Protein Blocks), which give an accurate description of the local protein structure. ORION has recently been improved, increasing by 5% the quality of its results. The ORION web server accepts a single protein sequence as input and searches for homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequence and the predicted structure of 4 examples from the CAMEO server and a recent CASP11 target from the 'Hard' category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/.

  3. Prediction of users webpage access behaviour using association rule mining

    Indian Academy of Sciences (India)

    R Geetharamani; P Revathy; Shomona G Jacob

    2015-12-01

    Web usage mining is a technique used to identify user needs from the web log. Discovering hidden patterns from the logs is an upcoming research area. Association rules play an important role in many web mining applications to detect interesting patterns. However, they generate enormous numbers of rules, which cause researchers to spend ample time and expertise to discover the really interesting ones. This paper works on the server logs from the MSNBC dataset for the month of September 1999. This research aims at predicting the probable subsequent page in the usage of the web pages listed in this data, based on users' navigating behaviour, by using the Apriori prefix tree (PT) algorithm. The generated rules were ranked based on the support, confidence and lift evaluation measures. The final predictions revealed that the interestingness of pages mainly depended on the support and lift measures, whereas confidence assumed a uniform value among all the pages. It proved that the system guaranteed 100% confidence with a support of 1.3E−05. It revealed that pages such as Front page, On-air, News, Sports and BBS attracted more interested subsequent users compared to Travel, MSN-News and MSN-Sports, which were of less interest.
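
    For reference, the three evaluation measures can be computed directly from session data. A small self-contained sketch (toy sessions, not the MSNBC data):

    ```python
    def rule_metrics(sessions, antecedent, consequent):
        """Support, confidence and lift for the rule antecedent -> consequent."""
        n = len(sessions)
        has_a = sum(antecedent <= s for s in sessions)
        has_c = sum(consequent <= s for s in sessions)
        has_both = sum((antecedent | consequent) <= s for s in sessions)
        support = has_both / n
        confidence = has_both / has_a if has_a else 0.0
        lift = confidence / (has_c / n) if has_c else 0.0
        return support, confidence, lift

    sessions = [{"frontpage", "news"}, {"frontpage", "sports"},
                {"news", "bbs"}, {"frontpage", "news", "sports"}]
    print(rule_metrics(sessions, {"frontpage"}, {"news"}))  # (0.5, 0.667, 0.889)
    ```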

  4. A Web-Accessible Protein Structure Prediction Pipeline

    Science.gov (United States)

    2009-06-01

    procedure described previously. Finally, the top-scoring models from this round are structurally compared against the SCOP [18] using the combinatorial… confidence rank, SCOP template ID, % identity to template, SCOP fold family ID, and SCOP fold description. Finally, the ab initio component (not shown) lists Z-score rank, SCOP template ID, model score, SCOP fold family ID, and SCOP fold description. Scaling: Different components of the…

  5. Web-accessible digital brain atlas of the common marmoset (Callithrix jacchus).

    Science.gov (United States)

    Tokuno, Hironobu; Tanaka, Ikuko; Umitsu, Yoshitomo; Akazawa, Toshikazu; Nakamura, Yasuhisa

    2009-05-01

    Here we describe a web-accessible digital brain atlas of the common marmoset (Callithrix jacchus) at http://marmoset-brain.org:2008. We prepared the histological sections of the marmoset brain using various staining techniques. For virtual microscopy, high-resolution digital images of sections were obtained with Aperio Scanscope. The digital images were then converted to Zoomify files (zoomable multiresolution image files). Thereby, we could provide the multiresolution images of the marmoset brains for fast interactive viewing on the web via the Internet. In addition, we describe an automated method to obtain drawings of Nissl-stained sections.

  6. Design and Implementation of File Access and Control System Based on Dynamic Web

    Institute of Scientific and Technical Information of China (English)

    GAO Fuxiang; YAO Lan; BAO Shengfei; YU Ge

    2006-01-01

    A dynamic Web application, which can help the departments of an enterprise collaborate with each other conveniently, is proposed. Several popular design solutions are introduced first. Then, a dynamic Web system is chosen for developing the file access and control system. Finally, the paper gives the detailed process of the design and implementation of the system, which includes some key problems such as solutions for document management and system security. Additionally, the limitations of the system as well as suggestions for further improvement are also explained.

  7. An Alternative Solution to Https for Secure Access to Web Services

    Directory of Open Access Journals (Sweden)

    Cristina Livia Iancu

    2012-06-01

    Full Text Available This paper presents a solution for accessing web services in a lightly secured way. Because the payload of the messages is not particularly sensitive, care is taken only to protect the user name and the password used for authentication and authorization in the web services system. The advantage of this solution compared to the commonly used SSL is that it avoids the overhead related to the handshake and encryption, providing a faster response to the clients. The solution is intended for Windows machines and is developed using the latest stable Microsoft technologies.

  8. A web product data management system based on Simple Object Access Protocol

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new web product data management architecture is presented. The three-tier web architecture and the Simple Object Access Protocol (SOAP) are combined to build the web-based product data management (PDM) system, which includes three tiers: the user services tier, the business services tier, and the data services tier. The client service component uses server-side technology, and an Extensible Markup Language (XML) web service which uses SOAP as the communication protocol is chosen as the business service component. To illustrate how to build a web-based PDM system using the proposed architecture, a case PDM system which included three logical tiers was built. To use the security and central management features of the database, a stored procedure was recommended in the data services tier. The business object was implemented as an XML web service so that clients could use standard internet protocols to communicate with the business object from any platform. In order to satisfy users using all sorts of browsers, server-side technology and Microsoft ASP.NET were used to create the dynamic user interface.

  9. ngLOC: software and web server for predicting protein subcellular localization in prokaryotes and eukaryotes

    Directory of Open Access Journals (Sweden)

    King Brian R

    2012-07-01

    Full Text Available Abstract Background Understanding protein subcellular localization is a necessary component toward understanding the overall function of a protein. Numerous computational methods have been published over the past decade, with varying degrees of success. Despite the large number of published methods in this area, only a small fraction of them are available for researchers to use in their own studies. Of those that are available, many are limited by predicting only a small number of organelles in the cell. Additionally, the majority of methods predict only a single location for a sequence, even though it is known that a large fraction of the proteins in eukaryotic species shuttle between locations to carry out their function. Findings We present a software package and a web server for predicting the subcellular localization of protein sequences based on the ngLOC method. ngLOC is an n-gram-based Bayesian classifier that predicts subcellular localization of proteins both in prokaryotes and eukaryotes. The overall prediction accuracy varies from 89.8% to 91.4% across species. This program can predict 11 distinct locations each in plant and animal species. ngLOC also predicts 4 and 5 distinct locations on gram-positive and gram-negative bacterial datasets, respectively. Conclusions ngLOC is a generic method that can be trained by data from a variety of species or classes for predicting protein subcellular localization. The standalone software is freely available for academic use under GNU GPL, and the ngLOC web server is also accessible at http://ngloc.unmc.edu.
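
    A toy n-gram Bayesian classifier in the spirit of ngLOC (a sketch only; the real method's features, smoothing and scoring differ):

    ```python
    from collections import Counter, defaultdict
    import math

    class NGramNB:
        """Toy n-gram naive Bayes over protein sequences."""
        def __init__(self, n=3):
            self.n, self.counts, self.totals = n, defaultdict(Counter), Counter()

        def _grams(self, seq):
            return [seq[i:i + self.n] for i in range(len(seq) - self.n + 1)]

        def fit(self, sequences, locations):
            for seq, loc in zip(sequences, locations):
                self.counts[loc].update(self._grams(seq))
                self.totals[loc] += 1

        def predict(self, seq):
            def log_posterior(loc):
                c, total = self.counts[loc], sum(self.counts[loc].values())
                prior = math.log(self.totals[loc] / sum(self.totals.values()))
                # add-one smoothing over this class's observed n-gram vocabulary
                return prior + sum(math.log((c[g] + 1) / (total + len(c) + 1))
                                   for g in self._grams(seq))
            return max(self.counts, key=log_posterior)

    clf = NGramNB(n=2)
    clf.fit(["MKKLLPT", "MKRISTT", "GAVLIPF", "GAVPPFL"],
            ["nucleus", "nucleus", "cytoplasm", "cytoplasm"])
    print(clf.predict("MKKIST"))   # -> "nucleus"
    ```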

  10. Uniform Access to Astronomical Web Services and its Implementation in SkyMouse

    Science.gov (United States)

    Sun, Hua-Ping; Cui, Chen-Zhou; Zhao, Yong-Heng

    2008-06-01

    With the progress of information technologies and astronomical observation technologies, the Virtual Observatory (VO), an example of cyber-infrastructure based science, was initiated and has spread quickly. More and more online accessible database systems and different kinds of services are available. Although astronomers have been aware of the importance of interoperability, integrated access to online information is still difficult. SkyMouse is a smart system developed by the Chinese Virtual Observatory project to let us access different online resource systems more easily than ever. Unlike some VO efforts on unified access systems, for example NVO DataScope, SkyMouse tries to show a comprehensive overview of a specific object, not to fetch as much data as possible. Triggered by a simple "mouse over" on an object name of interest, various VO-compliant and traditional databases, e.g. SIMBAD, NED, VizieR, DSS and ADS, are queried by SkyMouse. An overview for the given object, including basic information, images, observations and references, is displayed in the user's default web browser. In this article, the authors introduce the framework of SkyMouse. During the development of SkyMouse, various Web services are called. In order to invoke these Web services, two problems must be solved: interoperability and performance. A detailed description of these problems and the authors' solutions is given in the paper.

  11. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    Science.gov (United States)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.

  12. Semantically Enriched Web Usage Mining for Predicting User Future Movements

    Directory of Open Access Journals (Sweden)

    Suresh Shirgave

    2013-10-01

    Full Text Available The explosive and quick growth of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and sophisticated tools to help the Web user find the desired information. Finding desired information on the Web has become a critical ingredient of everyday personal, educational, and business life. Thus, there is a demand for more sophisticated tools to help the user navigate a Web site and find the desired information. Users must be provided with information and services specific to their needs, rather than an undifferentiated mass of information. Many Web usage mining techniques have been applied to discover interesting and frequent navigation patterns from Web server logs. The recommendation accuracy of solely usage-based techniques can be improved by integrating Web site content and site structure in the personalization process. Herein, we propose the Semantically enriched Web Usage Mining method (SWUM), which combines the fields of Web Usage Mining and the Semantic Web. In the proposed method, the undirected graph derived from usage data is enriched with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUM generates accurate recommendations through the integration of usage data, semantic data and Web site structure. The results show that the proposed method is able to achieve 10-20% better accuracy than the solely usage-based model, and 5-8% better than an ontology-based model.

  13. RaptorX-Property: a web server for protein structure property prediction.

    Science.gov (United States)

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-07-08

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction.
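
    Q3 is simply per-residue accuracy over the three secondary-structure states; a minimal sketch:

    ```python
    def q3(predicted, observed):
        """Fraction of residues whose 3-state label, H (helix), E (strand)
        or C (coil), is predicted correctly."""
        assert len(predicted) == len(observed)
        return sum(p == o for p, o in zip(predicted, observed)) / len(observed)

    print(q3("HHHHCCEEC", "HHHCCCEEC"))   # 8/9 ~ 0.889
    ```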

  14. ChEMBL web services: streamlining access to drug discovery data and utilities.

    Science.gov (United States)

    Davies, Mark; Nowotka, Michał; Papadatos, George; Dedman, Nathan; Gaulton, Anna; Atkinson, Francis; Bellis, Louisa; Overington, John P

    2015-07-01

    ChEMBL is now a well-established resource in the fields of drug discovery and medicinal chemistry research. The ChEMBL database curates and stores standardized bioactivity, molecule, target and drug data extracted from multiple sources, including the primary medicinal chemistry literature. Programmatic access to ChEMBL data has been improved by a recent update to the ChEMBL web services (version 2.0.x, https://www.ebi.ac.uk/chembl/api/data/docs), which exposes significantly more data from the underlying database and introduces new functionality. To complement the data-focused services, a utility service (version 1.0.x, https://www.ebi.ac.uk/chembl/api/utils/docs), which provides RESTful access to commonly used cheminformatics methods, has also been concurrently developed. The ChEMBL web services can be used together or independently to build applications and data processing workflows relevant to drug discovery and chemical biology.
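
    A minimal client sketch against the documented data service; CHEMBL25 (aspirin) is used as an example molecule, and the JSON field names should be verified against the current web-services documentation:

    ```python
    import requests

    # Appending .json to the molecule resource selects the JSON serialization.
    resp = requests.get(
        "https://www.ebi.ac.uk/chembl/api/data/molecule/CHEMBL25.json",
        timeout=30)
    resp.raise_for_status()
    mol = resp.json()
    print(mol["pref_name"], mol["molecule_properties"]["full_mwt"])
    ```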

  15. A Web Service for File-Level Access to Disk Images

    Directory of Open Access Journals (Sweden)

    Sunitha Misra

    2014-07-01

    Full Text Available Digital forensics tools have many potential applications in the curation of digital materials in libraries, archives and museums (LAMs). Open source digital forensics tools can help LAM professionals to extract digital contents from born-digital media and make more informed preservation decisions. Many of these tools have ways to display the metadata of the digital media, but few provide file-level access without having to mount the device or use complex command-line utilities. This paper describes a project to develop software that supports access to the contents of digital media without having to mount the device or download the entire image. The work examines two approaches to creating this tool: first, a graphical user interface running on a local machine; second, a web-based application running in a web browser. The project incorporates existing open source forensics tools and libraries, including The Sleuth Kit and libewf, along with the Flask web application framework and custom Python scripts to generate web pages supporting disk image browsing.
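
    A minimal sketch in the spirit of the web-based approach (not the project's code; the image path and route are hypothetical, and only raw images are handled, whereas the project also uses libewf for EWF images):

        import pytsk3
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        IMAGE_PATH = "/data/evidence.dd"  # hypothetical disk image

        @app.route("/ls")
        def list_directory():
            """List one directory inside the image without mounting it."""
            path = request.args.get("path", "/")
            img = pytsk3.Img_Info(IMAGE_PATH)  # The Sleuth Kit bindings
            fs = pytsk3.FS_Info(img)
            names = []
            for entry in fs.open_dir(path):
                name = entry.info.name.name.decode("utf-8", "replace")
                if name not in (".", ".."):
                    names.append(name)
            return jsonify(sorted(names))

        if __name__ == "__main__":
            app.run(debug=True)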

  16. E-serials cataloging access to continuing and integrating resources via the catalog and the web

    CERN Document Server

    Cole, Jim

    2014-01-01

    This comprehensive guide examines the state of electronic serials cataloging with special attention paid to online capacities. E-Serials Cataloging: Access to Continuing and Integrating Resources via the Catalog and the Web presents a review of the e-serials cataloging methods of the 1990s and discusses the international standards (ISSN, ISBD[ER], AACR2) that are applicable. It puts the concept of online accessibility into historical perspective and offers a look at current applications to consider. Practicing librarians, catalogers and administrators of technical services, cataloging and serv

  17. World Wide Webs: Crossing the Digital Divide through Promotion of Public Access

    Science.gov (United States)

    Coetzee, Liezl

    “As Bill Gates and Steve Case proclaim the global omnipresence of the Internet, the majority of non-Western nations and 97 per cent of the world's population remain unconnected to the net for lack of money, access, or knowledge. This exclusion of so vast a share of the global population from the Internet sharply contradicts the claims of those who posit the World Wide Web as a ‘universal' medium of egalitarian communication.” (Trend 2001:2)

  18. On the best learning algorithm for web services response time prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Popentiu-Vladicescu, Florin

    2013-01-01

    In this article we will examine the effect of different learning algorithms when training the MLP (Multilayer Perceptron) with the intention of predicting web service response times. Web services do not necessitate a user interface. This may seem contradictory to most people's concept of what an application is; a Web service is better imagined as an application "segment," or better, as a program enabler. Performance is an important quality aspect of Web services because of their distributed nature, and predicting the response of web services during their operation is very important.
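
    A small sketch of the prediction task (synthetic response times and scikit-learn's MLPRegressor standing in for the study's MLP trainers):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # synthetic response times (ms) with a slow drift plus noise
        rt = 100 + 20 * np.sin(np.arange(500) / 10) + rng.normal(0, 5, 500)

        window = 8  # predict the next response time from the last 8
        X = np.array([rt[i:i + window] for i in range(len(rt) - window)])
        y = rt[window:]

        split = 400
        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=0)
        model.fit(X[:split], y[:split])
        print("test MAE (ms):",
              np.abs(model.predict(X[split:]) - y[split:]).mean())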

  19. Web Accessibility of the Higher Education Institute Websites Based on the World Wide Web Consortium and Section 508 of the Rehabilitation Act

    Science.gov (United States)

    Alam, Najma H.

    2014-01-01

    The problem observed in this study is the low level of compliance of higher education website accessibility with Section 508 of the Rehabilitation Act of 1973. The literature supports the non-compliance of websites with the federal policy in general. Studies were performed to analyze the accessibility of fifty-four sample web pages using automated…

  20. Web Based Access to Real-Time Meteorological Products Optimized for PDA- Smartphones

    Science.gov (United States)

    Dengel, R. C.; Bellon, W.; Robaidek, J.

    2006-05-01

    Recent advances in wireless broadband services and coverage have made access to the internet possible in remote locations. Users can now access the web via an ever-increasing number of small, handheld devices specifically designed to allow voice and data exchange over this expanding service. So-called PDA phones or smartphones blend the features of traditional PDA devices with telecommunications capabilities. The University of Wisconsin-Madison Space Science and Engineering Center (SSEC) has produced a web site holding a variety of meteorological image and text displays optimized for this new technology. The site features animations of real-time radar and satellite clouds with value-added graphical overlays of severe weather watches and warnings. Products focus on remotely sensed information supplemented with conventional ground observations. The PDA Animated Weather (PAW) website has rapidly been adopted by numerous institutions and individuals desiring access to real-time meteorological information independent of their location. Of particular note are users that can be classified as first responders, including foreign- and domestic-based police and fire departments. This paper offers an overview of the PAW project, including product design, automated production and web presentation. Numerous examples of user applications will be presented, and planned future products and functionality will be discussed.

  1. SalanderMaps: A rapid overview about felt earthquakes through data mining of web-accesses

    Science.gov (United States)

    Kradolfer, Urs

    2013-04-01

    While seismological observatories detect and locate earthquakes based on measurements of the ground motion, they neither know a priori whether an earthquake has been felt by the public nor where it has been felt. Such information is usually gathered by evaluating feedback reported by the public through on-line forms on the web. However, after a felt earthquake in Switzerland, many people visit the webpages of the Swiss Seismological Service (SED) at the ETH Zurich, and each such visit leaves traces in the logfiles on our web-servers. Data mining techniques applied to these logfiles, combined with mining of publicly available databases on the internet, open possibilities to obtain previously unknown information about our virtual visitors. In order to provide precise information to authorities and the media, it would be desirable to rapidly know from which locations these web-accesses originate. The method 'Salander' (Seismic Activity Linked to Area codes - Nimble Detection of Earthquake Rumbles) will be introduced, and it will be explained how the IP-addresses (each computer or router directly connected to the internet has a unique IP-address; an example would be 129.132.53.5) of a sufficient number of our virtual visitors were linked to their geographical area. This allows us to know, unprecedentedly quickly, whether and where an earthquake was felt in Switzerland. It will also be explained why the Salander method is superior to commercial so-called geolocation products. The corresponding products of the Salander method, animated SalanderMaps, which are routinely generated after each earthquake with a magnitude of M>2 in Switzerland (http://www.seismo.ethz.ch/prod/salandermaps/, available after March 2013), demonstrate how the wavefield of earthquakes propagates through Switzerland and where it was felt. Often, such information is available within less than 60 seconds after origin time, and we always get a clear picture within five minutes of origin time.
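
    Conceptually, the first step is to bin web hits by geographic area and time; a toy sketch (the prefix-to-region table and hits are invented, whereas the real method links IP addresses to areas by mining public databases):

        from collections import Counter

        # hypothetical IP-prefix-to-region lookup
        ip_prefix_to_region = {"129.132": "Zurich", "130.92": "Bern"}

        def region_of(ip):
            prefix = ".".join(ip.split(".")[:2])
            return ip_prefix_to_region.get(prefix, "unknown")

        # (ip, minutes after origin time) pairs parsed from the access log
        hits = [("129.132.53.5", 1), ("129.132.7.9", 1), ("130.92.1.2", 3)]

        counts = Counter((region_of(ip), minute) for ip, minute in hits)
        for (region, minute), n in sorted(counts.items()):
            print(f"{region}: {n} hits at minute {minute}")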

  2. SABinder: A Web Service for Predicting Streptavidin-Binding Peptides.

    Science.gov (United States)

    He, Bifang; Kang, Juanjuan; Ru, Beibei; Ding, Hui; Zhou, Peng; Huang, Jian

    2016-01-01

    Streptavidin is sometimes used as the intended target to screen phage-displayed combinatorial peptide libraries for streptavidin-binding peptides (SBPs). More often in the biopanning system, however, streptavidin is just a commonly used anchoring molecule that can efficiently capture the biotinylated target. In this case, SBPs creeping into the biopanning results are not desired binders but target-unrelated peptides (TUPs). Taking them as intended binders may mislead subsequent studies. Therefore, it is important to determine whether a peptide is likely to be an SBP, whether streptavidin is the intended target or just the anchoring molecule. In this paper, we describe an SVM-based ensemble predictor called SABinder, the first predictor for SBPs. The model was built with the feature of optimized dipeptide composition. It was observed that 89.20% of peptides (MCC = 0.78; AUC = 0.93; permutation test) were correctly classified. As a web server, SABinder is freely accessible. The tool provides a highly efficient way to exclude potential SBPs when they are TUPs, or to facilitate the identification of possibly new SBPs when they are the desired binders. In either case, it will be helpful and can benefit the related scientific community.
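
    The feature named in the abstract is easy to illustrate: a sketch computing the 400-dimensional dipeptide composition of a peptide (the published model then feeds an optimized subset of these features to the SVM ensemble):

        from itertools import product

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]

        def dipeptide_composition(seq):
            """Frequency of each of the 400 dipeptides in the sequence."""
            seq = seq.upper()
            total = max(len(seq) - 1, 1)
            counts = {dp: 0 for dp in DIPEPTIDES}
            for i in range(len(seq) - 1):
                counts[seq[i:i + 2]] += 1
            return [counts[dp] / total for dp in DIPEPTIDES]

        # toy peptide containing the HPQ motif often seen in SBPs
        features = dipeptide_composition("HPQFWGSSG")
        print(len(features), sum(features))  # 400 features, summing to 1.0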

  3. Increasing access to terrestrial ecology and remote sensing (MODIS) data through Web services and visualization tools

    Science.gov (United States)

    Santhana Vannan, S.; Cook, R. B.; Wei, Y.

    2012-12-01

    In recent years, user access to data and information has increasingly been handled through tools, services, and applications, a development facilitated by standards-based services. This service-based access has boosted the use of data, and in increasingly complex ways. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) has taken the approach of service-based access to data and visualization for the distribution and visualization of its terrestrial ecology data, including MODIS (Moderate Resolution Imaging Spectroradiometer) remote sensing data products. The MODIS data products are highly useful for field research. The spectral, spatial and temporal characteristics of MODIS products have made them an important data source for analyzing key science questions relating to Earth system processes at multiple spatial and temporal scales. However, MODIS data volume and the complexity of the data format make the products less usable in some cases. To solve this usability issue, the ORNL DAAC has developed a system that prepares and distributes subsets of selected MODIS land products in a scale and format useful for field researchers. Web and Web service tools provide MODIS subsets in comma-delimited text format and in GIS-compatible GeoTIFF format. Users can download and visualize MODIS subsets for a set of pre-defined locations, order MODIS subsets for any land location, or automate the process of subset extraction using a SOAP-based Web service. The MODIS tools and services can be extended to support the large volume of data that would be produced by the various decadal survey missions (http://daac.ornl.gov/MODIS). The ORNL DAAC has also created a Web-based Spatial Data Access Tool (SDAT) that enables users to browse, visualize, and download a wide variety of geospatial data in various user-selected spatial/temporal extents, formats, and projections. SDAT is based on Open Geospatial Consortium (OGC) Web service standards that allow users to

  4. Using the STOQS Web Application for Access to in situ Oceanographic Data

    Science.gov (United States)

    McCann, M. P.

    2012-12-01

    With the increasing measurement and sampling capabilities of autonomous oceanographic platforms (e.g. gliders, autonomous underwater vehicles, Wavegliders), the need to efficiently access and visualize the data they collect is growing. The Monterey Bay Aquarium Research Institute has designed and built the Spatial Temporal Oceanographic Query System (STOQS) specifically to address this issue. The need for STOQS arises from inefficiencies discovered in using CF-NetCDF point observation conventions for these data. The problem is that access efficiency decreases with decreasing dimension of CF-NetCDF data. For example, the Trajectory Common Data Model feature type has only one coordinate dimension, usually Time; positions of the trajectory (Depth, Latitude, Longitude) are stored as non-indexed record variables within the NetCDF file. If client software needs to access data between two depth values or from a bounded geographic area, then the whole data set must be read and the selection made within the client software. This is very inefficient. What is needed is a way to easily select data of interest from an archive given any number of spatial, temporal, or other constraints. Geospatial relational database technology provides this capability. The full STOQS application consists of a Postgres/PostGIS database, Mapserver, and Python-Django running on a server, and Web 2.0 technology (jQuery, OpenLayers, Twitter Bootstrap) running in a modern web browser. The web application provides faceted search capabilities allowing a user to quickly drill into the data of interest. Data selection can be constrained by spatial, temporal, and depth selections as well as by parameter value and platform name. The web application layer also provides a REST (Representational State Transfer) Application Programming Interface allowing tools such as the Matlab stoqstoolbox to retrieve data
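
    A hedged sketch of querying such a REST interface (the host and query parameters below are hypothetical placeholders, not documented STOQS routes):

        import json
        import urllib.parse
        import urllib.request

        # hypothetical STOQS-like endpoint returning JSON measurements
        base = "https://stoqs.example.org/campaign/measuredparameter.json"
        query = urllib.parse.urlencode({
            "parameter__name": "temperature",  # constrain by parameter
            "depth__gte": 0,                   # and by depth range (m)
            "depth__lte": 50,
        })
        with urllib.request.urlopen(f"{base}?{query}") as resp:
            rows = json.load(resp)
        print(len(rows), "measurements")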

  5. Working without a Crystal Ball: Predicting Web Trends for Web Services Librarians

    Science.gov (United States)

    Ovadia, Steven

    2008-01-01

    User-centered design is a principle stating that electronic resources, like library Web sites, should be built around the needs of the users. This article interviews Web developers of library and non-library-related Web sites, determining how they assess user needs and how they decide to adapt certain technologies for users. According to the…

  6. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems.

  7. SensorWeb Hub infrastructure for open access to scientific research data

    Science.gov (United States)

    de Filippis, Tiziana; Rocchi, Leandro; Rapisardi, Elena

    2015-04-01

    The sharing of research data is a new challenge for the scientific community, which may benefit from a large amount of information to solve environmental issues and support sustainability in agriculture and urban contexts. A prerequisite for this challenge is the development of an infrastructure that ensures access, management and preservation of data, with technical support for a coordinated and harmonious management of data that, in the framework of Open Data policies, encourages reuse and collaboration. The neogeography and citizens-as-sensors approaches highlight that new data sources need a new set of tools and practices to collect, validate, categorize and access these "crowdsourced" data, which integrate the data sets produced in the scientific field, thus "feeding" the overall data available for analysis and research. When the scientific community embraces collaboration and sharing, access and re-use, in order to adopt the open innovation approach, it should redesign and reshape its data management processes: the challenges of technological and cultural innovation, enabled by web 2.0 technologies, lead to a scenario where the sharing of structured and interoperable data will constitute the unavoidable building block of a new paradigm of scientific research. In this perspective the Institute of Biometeorology, CNR, whose aim is contributing to the sharing and development of research data, has developed the "SensorWebHub" (SWH) infrastructure to support the scientific activities carried out in several research projects at national and international level. It is designed to manage both mobile and fixed open source meteorological and environmental sensors, in order to integrate the existing agro-meteorological and urban monitoring networks. The proposed architecture uses open source tools to ensure sustainability in the development and deployment of web applications with geographic features and custom analysis, as requested

  8. QoS prediction for Web service in Mobile Internet environment

    Science.gov (United States)

    Sun, Qibo; Wang, Lubao; Wang, Shangguang; Ma, You; Hsu, Ching-Hsien

    2016-07-01

    Quality of Service (QoS) prediction plays an important role in Web service recommendation. Many existing Web service QoS prediction approaches are highly accurate and useful in Internet environments. However, the QoS data of Web services in the Mobile Internet are notably more volatile, which makes these approaches fail to deliver accurate QoS predictions. In this paper, by weakening the volatility of QoS data, we propose an accurate Web service QoS prediction approach based on the collaborative filtering algorithm. This approach comprises three processes: QoS preprocessing, user similarity computation, and QoS prediction. We have implemented our proposed approach in an experiment based on real-world and synthetic datasets. The results demonstrate that our approach outperforms other approaches in the Mobile Internet.
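
    The collaborative-filtering core is straightforward to sketch (toy QoS matrix; the paper's contribution adds a preprocessing step that damps Mobile Internet volatility before this stage):

        import numpy as np

        # rows = users, columns = services; np.nan marks unobserved
        # response times (ms)
        qos = np.array([
            [120.0, 300.0, np.nan],
            [110.0, 280.0, 95.0],
            [400.0, 650.0, 310.0],
        ])

        def pearson(u, v):
            mask = ~np.isnan(u) & ~np.isnan(v)
            if mask.sum() < 2:
                return 0.0
            a = u[mask] - u[mask].mean()
            b = v[mask] - v[mask].mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float(a @ b / denom) if denom else 0.0

        def predict(user, service):
            """Weighted average of similar users' observed QoS values."""
            num = den = 0.0
            for other in range(qos.shape[0]):
                if other == user or np.isnan(qos[other, service]):
                    continue
                w = pearson(qos[user], qos[other])
                if w > 0:
                    num += w * qos[other, service]
                    den += w
            return num / den if den else np.nan

        print(predict(0, 2))  # user 0's estimated response time, service 2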

  9. The new ALICE DQM client: a web access to ROOT-based objects

    Science.gov (United States)

    von Haller, B.; Carena, F.; Carena, W.; Chapeland, S.; Chibante Barroso, V.; Costa, F.; Delort, C.; Dénes, E.; Diviá, R.; Fuchs, U.; Niedziela, J.; Simonetti, G.; Soós, C.; Telesca, A.; Vande Vyvre, P.; Wegrzynek, A.

    2015-12-01

    A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) plays an essential role in the experiment operation by providing shifters with immediate feedback on the data being recorded in order to quickly identify and overcome problems. An immediate access to the DQM results is needed not only by shifters in the control room but also by detector experts worldwide. As a consequence, a new web application has been developed to dynamically display and manipulate the ROOT-based objects produced by the DQM system in a flexible and user friendly interface. The architecture and design of the tool, its main features and the technologies that were used, both on the server and the client side, are described. In particular, we detail how we took advantage of the most recent ROOT JavaScript I/O and web server library to give interactive access to ROOT objects stored in a database. We describe as well the use of modern web techniques and packages such as AJAX, DHTMLX and jQuery, which has been instrumental in the successful implementation of a reactive and efficient application. We finally present the resulting application and how code quality was ensured. We conclude with a roadmap for future technical and functional developments.

  10. Mining Sequential Access Pattern with Low Support From Large Pre-Processed Web Logs

    Directory of Open Access Journals (Sweden)

    S. Vijayalakshmi

    2010-01-01

    Full Text Available Problem statement: To find frequently occurring sequential patterns from a web log file on the basis of a given minimum support. Web usage mining is the application of sequential pattern mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications; we introduce an efficient strategy for discovering such patterns. Approach: The approach adopts a divide-and-conquer pattern-growth principle. Our proposed method combines the tree projection and prefix-growth features from the pattern-growth category with the position-coded feature from the early-pruning category; all of these features are key characteristics of their respective categories, so we consider our proposed method a pattern-growth, early-pruning hybrid algorithm. Results: Our proposed hybrid algorithm eliminates the need to store numerous intermediate WAP-trees during mining. Since only the original tree is stored, it drastically cuts huge memory access costs, which may include disk I/O cost in a virtual-memory environment, especially when mining very long sequences with millions of records. Conclusion: An attempt has been made to improve efficiency with our approach. Our proposed method totally eliminates reconstruction of intermediate WAP-trees during mining and considerably reduces execution time.
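
    For contrast with the WAP-tree hybrid described above, a plain prefix-growth (PrefixSpan-style) sketch on page sequences shows the divide-and-conquer principle itself (toy log; not the paper's position-coded algorithm):

        def prefix_span(sequences, min_support, prefix=None):
            """Recursively grow patterns on projected databases."""
            prefix = prefix or []
            counts = {}
            for seq in sequences:
                for item in set(seq):
                    counts[item] = counts.get(item, 0) + 1
            patterns = []
            for item, count in counts.items():
                if count < min_support:
                    continue
                new_prefix = prefix + [item]
                patterns.append((new_prefix, count))
                # project: keep suffixes after the first occurrence
                projected = [seq[seq.index(item) + 1:]
                             for seq in sequences if item in seq]
                projected = [s for s in projected if s]
                patterns += prefix_span(projected, min_support, new_prefix)
            return patterns

        logs = [["home", "search", "item"],
                ["home", "item"],
                ["search", "item"]]
        for pattern, support in prefix_span(logs, min_support=2):
            print(pattern, support)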

  11. U-Access: a web-based system for routing pedestrians of differing abilities

    Science.gov (United States)

    Sobek, Adam D.; Miller, Harvey J.

    2006-09-01

    For most people, traveling through urban and built environments is straightforward. However, for people with physical disabilities, even a short trip can be difficult and perhaps impossible. This paper provides the design and implementation of a web-based system for the routing and prescriptive analysis of pedestrians with different physical abilities within built environments. U-Access, as a routing tool, provides pedestrians with the shortest feasible route with respect to one of three differing ability levels, namely, peripatetic (unaided mobility), aided mobility (mobility with the help of a cane, walker or crutches) and wheelchair users. U-Access is also an analytical tool that can help identify obstacles in built environments that create routing discrepancies among pedestrians with different physical abilities. This paper discusses the system design, including database, algorithm and interface specifications, and technologies for efficiently delivering results through the World Wide Web (WWW). This paper also provides an illustrative example of a routing problem and an analytical evaluation of the existing infrastructure which identifies the obstacles that pose the greatest discrepancies between physical ability levels. U-Access was evaluated by wheelchair users and route experts from the Center for Disability Services at The University of Utah, USA.
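
    The routing idea reduces to filtering edges by ability before a shortest-path search; a toy sketch (hypothetical three-node network, with networkx standing in for the system's routing engine):

        import networkx as nx

        G = nx.Graph()
        # (from, to, length in metres, minimum access level of the edge)
        edges = [("A", "B", 50, "peripatetic"),   # stairs
                 ("A", "C", 80, "wheelchair"),    # ramp
                 ("C", "B", 40, "wheelchair")]
        for u, v, length, access in edges:
            G.add_edge(u, v, length=length, access=access)

        # which edge labels each user class can traverse
        ALLOWED = {"peripatetic": {"peripatetic", "aided", "wheelchair"},
                   "aided": {"aided", "wheelchair"},
                   "wheelchair": {"wheelchair"}}

        def route(G, src, dst, ability):
            usable = [(u, v) for u, v, d in G.edges(data=True)
                      if d["access"] in ALLOWED[ability]]
            return nx.shortest_path(G.edge_subgraph(usable), src, dst,
                                    weight="length")

        print(route(G, "A", "B", "peripatetic"))  # ['A', 'B'] via stairs
        print(route(G, "A", "B", "wheelchair"))   # ['A', 'C', 'B'] via ramp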

  12. The impacts of problem gambling on concerned significant others accessing web-based counselling.

    Science.gov (United States)

    Dowling, Nicki A; Rodda, Simone N; Lubman, Dan I; Jackson, Alun C

    2014-08-01

    The 'concerned significant others' (CSOs) of people with problem gambling frequently seek professional support. However, there is surprisingly little research investigating the characteristics or help-seeking behaviour of these CSOs, particularly for web-based counselling. The aims of this study were to describe the characteristics of CSOs accessing the web-based counselling service (real time chat) offered by the Australian national gambling web-based counselling site, explore the most commonly reported CSO impacts using a new brief scale (the Problem Gambling Significant Other Impact Scale: PG-SOIS), and identify the factors associated with different types of CSO impact. The sample comprised all 366 CSOs accessing the service over a 21 month period. The findings revealed that the CSOs were most often the intimate partners of problem gamblers and that they were most often females aged under 30 years. All CSOs displayed a similar profile of impact, with emotional distress (97.5%) and impacts on the relationship (95.9%) reported to be the most commonly endorsed impacts, followed by impacts on social life (92.1%) and finances (91.3%). Impacts on employment (83.6%) and physical health (77.3%) were the least commonly endorsed. There were few significant differences in impacts between family members (children, partners, parents, and siblings), but friends consistently reported the lowest impact scores. Only prior counselling experience and Asian cultural background were consistently associated with higher CSO impacts. The findings can serve to inform the development of web-based interventions specifically designed for the CSOs of problem gamblers.

  13. PromBase: a web resource for various genomic features and predicted promoters in prokaryotic genomes

    Directory of Open Access Journals (Sweden)

    Bansal Manju

    2011-07-01

    Full Text Available Abstract Background As more and more genomes are being sequenced, an overview of their genomic features and annotation of their functional elements, which control the expression of each gene or transcription unit of the genome, is a fundamental challenge in genomics and bioinformatics. Findings Relative stability of DNA sequence has been used to predict promoter regions in 913 microbial genomic sequences with GC-content ranging from 16.6% to 74.9%. Irrespective of genome GC-content, the relative-stability-based promoter prediction method has already been proven to be robust in terms of recall and precision. The predicted promoter regions for the 913 microbial genomes have been accumulated in a database called PromBase. Promoter search can be carried out in PromBase either by specifying the gene name or the genomic position. Each predicted promoter region has been assigned to a reliability class (low, medium, high, very high and highest) based on the difference between its average free energy and that of the downstream region. The recall and precision values for each class are shown graphically in PromBase. In addition, PromBase provides detailed information about base composition, CDS and CG/TA skews for each genome, and various DNA-sequence-dependent structural properties (average free energy, curvature and bendability) in the vicinity of all annotated translation start sites (TLS). Conclusion PromBase is a database which contains predicted promoter regions and a detailed analysis of various genomic features for 913 microbial genomes. PromBase can serve as a valuable resource for comparative genomics studies and help the experimentalist to rapidly access detailed information on various genomic features and putative promoter regions in any given genome. This database is freely accessible for academic and non-academic users via the worldwide web http://nucleix.mbu.iisc.ernet.in/prombase/.
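
    The relative-stability scoring is easy to sketch (the dinucleotide free-energy values below are illustrative placeholders, not the published parameter set): upstream windows that are less stable (higher average free energy) than the downstream region flag putative promoters.

        # toy nearest-neighbour free energies (kcal/mol), placeholders only
        ENERGY = {"AA": -1.0, "AT": -0.9, "TA": -0.6, "TT": -1.0,
                  "GC": -2.3, "CG": -2.1, "GG": -1.8, "CC": -1.8,
                  "AG": -1.3, "GA": -1.3, "CT": -1.3, "TC": -1.3,
                  "AC": -1.4, "CA": -1.4, "GT": -1.4, "TG": -1.4}

        def avg_energy(seq):
            pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
            return sum(ENERGY[p] for p in pairs) / len(pairs)

        def promoter_score(seq, tls, window=100):
            """Positive score: upstream less stable than downstream."""
            upstream = seq[max(tls - window, 0):tls]
            downstream = seq[tls:tls + window]
            return avg_energy(upstream) - avg_energy(downstream)

        # toy genome: AT-rich (unstable) upstream, GC-rich downstream
        genome = "AT" * 60 + "GC" * 60
        print(round(promoter_score(genome, tls=120), 2))  # > 0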

  14. Remote Internet access to advanced analytical facilities: a new approach with Web-based services.

    Science.gov (United States)

    Sherry, N; Qin, J; Fuller, M Suominen; Xie, Y; Mola, O; Bauer, M; McIntyre, N S; Maxwell, D; Liu, D; Matias, E; Armstrong, C

    2012-09-04

    Over the past decade, the increasing availability of the World Wide Web has held out the possibility that the efficiency of scientific measurements could be enhanced in cases where experiments were being conducted at distant facilities. Examples of early successes have included X-ray diffraction (XRD) experimental measurements of protein crystal structures at synchrotrons and access to scanning electron microscopy (SEM) and NMR facilities by users from institutions that do not possess such advanced capabilities. Experimental control, visual contact, and receipt of results have used some form of X forwarding and/or VNC (virtual network computing) software that transfers the screen image of a server at the experimental site to that of the users' home site. A more recent development is a web services platform called Science Studio that provides teams of scientists with secure links to experiments at one or more advanced research facilities. The software provides a widely distributed team with a set of controls and screens to operate, observe, and record essential parts of the experiment. As well, Science Studio provides high speed network access to computing resources to process the large data sets that are often involved in complex experiments. The simple web browser and the rapid transfer of experimental data to a processing site allow efficient use of the facility and assist decision making during the acquisition of the experimental results. The software provides users with a comprehensive overview and record of all parts of the experimental process. A prototype network is described involving X-ray beamlines at two different synchrotrons and an SEM facility. An online parallel processing facility has been developed that analyzes the data in near-real time using stream processing. Science Studio can be expanded to include many other analytical applications, providing teams of users with rapid access to processed results along with the means for detailed

  15. Web 2.0 Sites for Collaborative Self-Access: The Learning Advisor vs. Google®

    Directory of Open Access Journals (Sweden)

    Craig D. Howard

    2011-09-01

    Full Text Available While Web 2.0 technologies provide motivated, self-access learners with unprecedented opportunities for language learning, Web 2.0 designs are not of universally equal value for learning. This article reports on research carried out at Indiana University Bloomington using an empirical method to select websites for self-access language learning. Two questions related to Web 2.0 recommendations were asked: (1) How do recommended Web 2.0 sites rank in terms of interactivity features? (2) How likely is a learner to find highly interactive sites on their own? A list of 20 sites used for supplemental and self-access activities in language programs at five universities was compiled and provided the initial data set. Purposive sampling criteria revealed 10 sites truly represented Web 2.0 design. To address the first question, a feature analysis was applied (Herring, The international handbook of internet research. Berlin: Springer, 2008). An interactivity framework was developed from previous research to identify Web 2.0 design features, and sites were ranked according to feature quantity. The method used to address the second question was an interconnectivity analysis that measured direct and indirect interconnectivity within Google results. Highly interactive Web 2.0 sites were not prominent in Google search results, nor were they often linked via third-party sites. It was determined that, using typical keywords or searching via blogs and recommendation sites, self-access learners were highly unlikely to find the most promising Web 2.0 sites for language learning. A discussion of the role of the learning advisor in guiding Web 2.0 collaborative self-access, as well as some strategic short cuts to quick analysis, concludes the article.

  16. Access and completion of a Web-based treatment in a population-based sample of tornado-affected adolescents.

    Science.gov (United States)

    Price, Matthew; Yuen, Erica K; Davidson, Tatiana M; Hubel, Grace; Ruggiero, Kenneth J

    2015-08-01

    Although Web-based treatments have significant potential to assess and treat difficult-to-reach populations, such as trauma-exposed adolescents, the extent to which such treatments are accessed and used is unclear. The present study evaluated the proportion of adolescents who accessed and completed a Web-based treatment for postdisaster mental health symptoms. Correlates of access and completion were examined. A sample of 2,000 adolescents living in tornado-affected communities was assessed via structured telephone interview and invited to a Web-based treatment. The modular treatment addressed symptoms of posttraumatic stress disorder, depression, and alcohol and tobacco use. Participants were randomized to experimental or control conditions after accessing the site. Overall access for the intervention was 35.8%. Module completion for those who accessed ranged from 52.8% to 85.6%. Adolescents with parents who used the Internet to obtain health-related information were more likely to access the treatment. Adolescent males were less likely to access the treatment. Future work is needed to identify strategies to further increase the reach of Web-based treatments to provide clinical services in a postdisaster context.

  17. Accessibility and Use of Web-Based Electronic Resources by Physicians in a Psychiatric Institution in Nigeria

    Science.gov (United States)

    Oduwole, Adebambo Adewale; Oyewumi, Olatundun

    2010-01-01

    Purpose: This study aims to examine the accessibility and use of web-based electronic databases on the Health InterNetwork Access to Research Initiative (HINARI) portal by physicians in the Neuropsychiatric Hospital, Aro--a psychiatry health institution in Nigeria. Design/methodology/approach: Collection of data was through the use of a three-part…

  18. Making It Work for Everyone: HTML5 and CSS Level 3 for Responsive, Accessible Design on Your Library's Web Site

    Science.gov (United States)

    Baker, Stewart C.

    2014-01-01

    This article argues that accessibility and universality are essential to good Web design. A brief review of library science literature sets the issue of Web accessibility in context. The bulk of the article explains the design philosophies of progressive enhancement and responsive Web design, and summarizes recent updates to WCAG 2.0, HTML5, CSS…

  20. Access and privacy rights using web security standards to increase patient empowerment.

    Science.gov (United States)

    Falcão-Reis, Filipa; Costa-Pereira, Altamiro; Correia, Manuel E

    2008-01-01

    Electronic Health Record (EHR) systems are becoming more and more sophisticated and nowadays include numerous applications, which are accessed not only by medical professionals, but also by accounting and administrative personnel. This could represent a problem concerning basic rights such as privacy and confidentiality. The principles, guidelines and recommendations compiled by the OECD for the protection of privacy and trans-border flows of personal data are described and considered within health information system development. Granting access to an EHR should be dependent upon the owner of the record, the patient: he must be entitled to define who is allowed to access his EHRs, beyond the access control scheme each health organization may have implemented. In this way, it is not only up to health professionals to decide who has access to what, but the patient himself. Implementing such a policy is a step towards patient empowerment, which society should encourage and governments should promote. The paper then introduces a technical solution based on web security standards. This would give patients the ability to monitor and control which entities have access to their personal EHRs, thus empowering them with the knowledge of how much of their medical history is known and by whom. It is necessary to create standard data access protocols, mechanisms and policies to protect privacy rights and, furthermore, to enable patients to automatically track the movement (flow) of their personal data and information in the context of health information systems. This solution must be functional and, above all, user-friendly, and the interface should take into consideration some heuristics of usability in order to provide the user with the best tools. The current official standards on confidentiality and privacy in health care, currently being developed within the EU, are explained, in order to achieve a consensual idea of the guidelines that all member states should follow to transfer

  1. Web Maps and Services at NOAA for Bathymetric Data Discovery, Visualization, and Access

    Science.gov (United States)

    Varner, J. D.; Cartwright, J.

    2016-12-01

    NOAA's National Centers for Environmental Information (NCEI) ensures the security and widespread availability of marine geophysical data through long-term stewardship. NCEI stewards bathymetric data and products from numerous sources, including near-shore hydrographic survey data from NOAA's National Ocean Service, deep-water multibeam and single-beam echosounder data collected by U.S. and non-U.S. institutions, as well as digital elevation models (DEMs) that integrate ocean bathymetry and land topography. These data can be discovered, visualized, and accessed via a suite of ArcGIS web services and by using a web map which integrates these component services: the Bathymetric Data Viewer. The services provide data coverage (e.g. survey tracklines, DEM footprints), color shaded relief visualizations of bathymetry, and seamless mosaics of elevation data. These services are usable in web applications (both within and outside NOAA), and in desktop GIS software. Users can utilize the Bathymetric Data Viewer to narrow down data of interest, identify datasets, then submit an order to NCEI's extract system for data retrieval.

  2. FLOSYS--a web-accessible workflow system for protocol-driven biomolecular sequence analysis.

    Science.gov (United States)

    Badidi, E; Lang, B F; Burger, G

    2004-11-01

    FLOSYS is an interactive web-accessible bioinformatics workflow system designed to assist biologists in multi-step data analyses. FLOSYS allows the user to create complex analysis pathways (protocols) graphically, much like drawing a flowchart: icons representing particular bioinformatics tools are dragged and dropped onto a canvas and lines connecting those icons are drawn to specify the relationships between the tools. In addition, FLOSYS permits the user to select input data, execute the protocol and store the results in a personal workspace. The three-tier architecture of FLOSYS has been implemented in Java and uses a relational database system together with new technologies for distributed and web computing such as CORBA, RMI, JSP and JDBC. The prototype of FLOSYS, which is part of the bioinformatics workbench AnaBench, is accessible on-line at http://malawimonas.bcm.umontreal.ca:8091/anabench. The entire package is available on request to academic groups who wish to have a customized local analysis environment for research or teaching.

  3. COMPARE: a web accessible tool for investigating mechanisms of cell growth inhibition.

    Science.gov (United States)

    Zaharevitz, Daniel W; Holbeck, Susan L; Bowerman, Christopher; Svetlik, Penny A

    2002-01-01

    For more than 10 years the National Cancer Institute (NCI) has tested compounds for their ability to inhibit the growth of human tumor cell lines in culture (the NCI screen). The work of Ken Paull [J. Natl. Cancer Inst. 81 (1989) 1088] demonstrated that compounds with a similar mechanism of cell growth inhibition show similar patterns of activity in the NCI screen. This observation was developed into an algorithm called COMPARE, which has been successfully used to predict mechanisms for a wide variety of compounds. More recently, this method has been extended to associate patterns of cell growth inhibition by compounds with measurements of molecular entities (such as gene expression) in the cell lines of the NCI screen. The COMPARE method and associated data are freely available on the Developmental Therapeutics Program (DTP) web site (http://dtp.nci.nih.gov/). Examples of the use of COMPARE on these web pages are explained and demonstrated.
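
    The core of the idea fits in a few lines; a conceptual sketch with invented numbers (Pearson correlation of activity patterns across a cell-line panel, ranked against a seed compound):

        import numpy as np

        # -log(GI50) patterns across a toy panel of five cell lines
        database = {
            "tubulin_binder": np.array([6.1, 7.2, 5.9, 8.0, 6.5]),
            "antimetabolite": np.array([5.0, 5.1, 4.9, 5.2, 5.0]),
            "topo_inhibitor": np.array([7.9, 6.2, 7.5, 6.0, 7.8]),
        }
        seed = np.array([6.0, 7.0, 6.0, 7.9, 6.4])  # query compound

        ranks = sorted(((float(np.corrcoef(seed, v)[0, 1]), name)
                        for name, v in database.items()), reverse=True)
        for r, name in ranks:  # high correlation suggests shared mechanism
            print(f"{name}: r = {r:.2f}")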

  4. SPSmart: adapting population based SNP genotype databases for fast and comprehensive web access

    Directory of Open Access Journals (Sweden)

    Carracedo Ángel

    2008-10-01

    Full Text Available Abstract Background In the last five years large online resources of human variability have appeared, notably HapMap, Perlegen and the CEPH foundation. These databases of genotypes with population information act as catalogues of human diversity, and are widely used as reference sources for population genetics studies. Although many useful conclusions may be extracted by querying databases individually, the lack of flexibility for combining data from within and between each database does not allow the calculation of key population variability statistics. Results We have developed a novel tool for accessing and combining large-scale genomic databases of single nucleotide polymorphisms (SNPs) in widespread use in human population genetics: SPSmart (SNPs for Population Studies). A fast pipeline creates and maintains a data mart from the most commonly accessed databases of genotypes containing population information: data is mined, summarized into the standard statistical reference indices, and stored into a relational database that currently handles as many as 4 × 10⁹ genotypes and that can be easily extended to new database initiatives. We have also built a web interface to the data mart that allows the browsing of underlying data indexed by population and the combining of populations, allowing intuitive and straightforward comparison of population groups. All the information served is optimized for web display, and most of the computations are already pre-processed in the data mart to speed up the data browsing and any computational treatment requested. Conclusion In practice, SPSmart allows populations to be combined into user-defined groups, while multiple databases can be accessed and compared in a few simple steps from a single query. It performs the queries rapidly and gives straightforward graphical summaries of SNP population variability through visual inspection of allele frequencies outlined in standard pie-chart format. In addition, full

  5. Release Early, Release Often: Predicting Change in Versioned Knowledge Organization Systems on the Web

    OpenAIRE

    Meroño-Peñuela, Albert; Guéret, Christophe; Schlobach, Stefan

    2015-01-01

    The Semantic Web is built on top of Knowledge Organization Systems (KOS) (vocabularies, ontologies, concept schemes) that provide structured, interoperable and distributed access to Linked Data on the Web. The maintenance of these KOS over time has produced a number of KOS version chains: subsequent unique version identifiers to unique states of a KOS. However, the release of new KOS versions poses challenges to both KOS publishers and users. For publishers, updating a KOS is a knowledge int...

  6. A Comparison of Prediction Algorithms for Prefetching in the Current Web

    OpenAIRE

    Josep Domenech; Sahuquillo Borrás, Julio; Gil Salinas, José Antonio; Pont Sanjuan, Ana

    2012-01-01

    This paper reviews a representative subset of the prediction algorithms used for Web prefetching, classifying them according to the information gathered. Then, the DDG algorithm is described. The main novelty of this algorithm lies in the fact that, unlike previous algorithms, it creates a prediction model according to the structure of the current web. To this end, the algorithm distinguishes between container objects and embedded objects. Its performance is compared against important existing...

  7. Integrated Web-based Oracle and GIS Access to Natural Hazards Data

    Science.gov (United States)

    Dunbar, P. K.; Cartwright, J. C.; Kowal, D.; Gaines, T.

    2002-12-01

    The National Geophysical Data Center (NGDC) catalogs information on tsunamis, significant earthquakes, and volcanoes, including effects such as fatalities and damage. NGDC also maintains a large collection of geologic hazards photos. All of these databases are now stored in an Oracle relational database management system (RDBMS) and accessible over the Web as tables, reports and interactive maps. Storing the data in an RDBMS facilitates the search for earthquake, tsunami and volcano data related to a specific event. For example, a user might be interested in all of the earthquakes greater than magnitude 8.0 that have occurred in Alaska. If the earthquake triggered a tsunami, the user could then directly access related information from the tsunami tables without having to run a separate search of the tsunami database. Users could also first access the tsunami database and then obtain related significant earthquake and volcano data. The ArcIMS-based interactive maps provide integrated Web-based GIS access to these hazards databases as well as additional auxiliary geospatial data. The first interactive map provides access to individual GIS layers of significant earthquakes, tsunami sources, tsunami effects, volcano locations, and various spatial reference layers including topography, population density, and political boundaries. The map service also provides ftp links and hyperlinks to additional hazards information such as NGDC's extensive collection of geologic hazards photos. For example, a user could display all of the significant earthquakes that have occurred in California and then, by using a hyperlink tool, display images showing damage from a specific earthquake such as the 1989 Loma Prieta event. The second interactive map allows users to display related natural hazards GIS layers. For example, a user might first display tsunami source locations and select tsunami effects as the related feature. Using a tool developed at NGDC, the user can then select a specific

  8. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database

    Science.gov (United States)

    2015-02-01

    Modeling and planning missions require metocean data (e.g., winds, waves, tides, water levels). WaveNet is a web-based graphical-user-interface (GUI) for metocean data access, processing and analysis, providing data for project planning, design, and evaluation studies, including how to generate input files for numerical wave models. WaveNet employs a Google Maps interface…

  9. Distill: a suite of web servers for the prediction of one-, two- and three-dimensional structural features of proteins

    Directory of Open Access Journals (Sweden)

    Walsh Ian

    2006-09-01

    Full Text Available Abstract Background We describe Distill, a suite of servers for the prediction of protein structural features: secondary structure; relative solvent accessibility; contact density; backbone structural motifs; residue contact maps at 6, 8 and 12 Angstrom; and coarse protein topology. The servers are based on large-scale ensembles of recursive neural networks and trained on large, up-to-date, non-redundant subsets of the Protein Data Bank. Together with structural feature predictions, Distill includes a server for the prediction of Cα traces for short proteins (up to 200 amino acids). Results The servers are state-of-the-art, with secondary structure predicted correctly for nearly 80% of residues (currently the top performance on EVA), 2-class solvent accessibility nearly 80% correct, and contact maps exceeding 50% precision on the top non-diagonal contacts. A preliminary implementation of the predictor of protein Cα traces featured among the top 20 Novel Fold predictors at the last CASP6 experiment as group Distill (ID 0348). The majority of the servers, including the Cα trace predictor, now take into account homology information from the PDB, when available, resulting in greatly improved reliability. Conclusion All predictions are freely available through a simple joint web interface and the results are returned by email. In a single submission the user can send protein sequences for a total of up to 32k residues to all or a selection of the servers. Distill is accessible at the address: http://distill.ucd.ie/distill/.

  10. Web access and dissemination of Andalusian coastal erosion rates: viewers and standard/filtered map services.

    Science.gov (United States)

    Álvarez Francoso, Jose; Prieto Campos, Antonio; Ojeda Zujar, Jose; Guisado-Pintado, Emilia; Pérez Alcántara, Juan Pedro

    2017-04-01

    Access to environmental information via web viewers using map services (OGC or proprietary) has become more common, since new information sources (orthophotos, LIDAR, GPS) are highly detailed and thus generate volumes of data that can hardly be disseminated in analogue (paper maps) or static digital (PDF) formats. Moreover, governments and public institutions are concerned about the need to facilitate access to research results and to improve the communication of natural hazards to citizens and stakeholders. This information, if adequately disseminated, is crucial in decision-making processes and risk management approaches, and could help to increase social awareness of environmental issues (particularly climate change impacts). To address this issue, two strategies for the wide dissemination and communication of the results achieved in the calculation of beach erosion for the 640 km of the Andalusian coast (southern Spain) using web viewer technology are presented. Each is oriented to different end users and thus based on different methodologies. Erosion rates have been calculated at 50 m intervals for different periods (1956-1977-2001-2011) as part of a National Research Project based on the spatialisation and web access of coastal vulnerability indicators for the Andalusian region. The first proposal generates WMS services (following OGC standards) that are made available through Geoserver, with a geoviewer client developed using Leaflet. This viewer is designed to be used by the general public (citizens, politicians, etc.) and combines a set of tools that give access to related documents (PDFs) and visualisation aids (Panoramio pictures, geolocalisation with GPS), displayed within a user-friendly interface. Further, the use of WMS services (implemented on Geoserver) provides detailed semiology (arrows and proportional symbols, using alongshore coastline buffers to represent data) which not only
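
    Consuming such a WMS service programmatically is straightforward; a hedged sketch with OWSLib (the service URL and layer name are hypothetical placeholders, not the project's published endpoints):

        from owslib.wms import WebMapService

        wms = WebMapService("https://ows.example.org/geoserver/wms",
                            version="1.1.1")
        img = wms.getmap(layers=["erosion_rates_1956_2011"],  # hypothetical
                         srs="EPSG:4326",
                         bbox=(-7.6, 36.0, -1.6, 37.6),  # roughly the Andalusian coast
                         size=(800, 400),
                         format="image/png")
        with open("erosion.png", "wb") as f:
            f.write(img.read())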

  11. Comparison of RF spectrum prediction methods for dynamic spectrum access

    Science.gov (United States)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
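
    A sketch of the simulation setup described (exponential on/off dwell times as the alternating renewal process, and scikit-learn's MLP as one of the compared predictors; all parameters are invented):

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)

        def occupancy(n, mean_busy=5, mean_idle=8):
            """Alternating renewal process: 1 = busy slot, 0 = idle slot."""
            state, out = 0, []
            while len(out) < n:
                dwell = rng.exponential(mean_busy if state else mean_idle)
                out += [state] * max(int(dwell), 1)
                state ^= 1
            return np.array(out[:n])

        occ = occupancy(5000)
        k = 10  # predict the next slot from the last k slots
        X = np.array([occ[i:i + k] for i in range(len(occ) - k)])
        y = occ[k:]

        split = 4000
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                            random_state=0)
        clf.fit(X[:split], y[:split])
        print("channel-state prediction accuracy:",
              clf.score(X[split:], y[split:]))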

  12. Accessibility and reliability of cutaneous laser surgery information on the World Wide Web.

    Science.gov (United States)

    Bykowski, J L; Alora, M B; Dover, J S; Arndt, K A

    2000-05-01

    The World Wide Web has provided the public with easy and affordable access to a vast range of information. However, claims may be unsubstantiated and misleading. The purpose of this study was to use cutaneous laser surgery as a model to assess the availability and reliability of Web sites and to evaluate this resource for the quality of patient and provider education. Three commercial methods of searching the Internet were used, identifying nearly 500,000 possible sites. The first 100 sites listed by each search engine (a total of 300 sites) were compared. Of these, 126 were listed repeatedly within a given retrieval method, whereas only 3 sites were identified by all 3 search engines. After elimination of duplicates, 40 sites were evaluated for content and currency of information. The most common features included postoperative care suggestions, options for pain management or anesthesia, a description of the way in which lasers work, and the types of lasers used for different procedures. Potential contraindications to laser procedures were described on fewer than 30% of the sites reviewed. None of the sites contained substantiation of claims or referrals to peer-reviewed publications or research. Because of duplication and the prioritization systems of search engines, the ease of finding sites did not correlate with the quality of the site's content. Our findings show that advertisements for services exceed useful information.

  13. Keeping Libraries Relevant in the Semantic Web with RDA: Resource Description and Access

    Directory of Open Access Journals (Sweden)

    Barbara Tillett

    2011-12-01

    Full Text Available Cataloguing does not simply mean building a catalogue. It means enabling users to access, in a timely way, the information relevant to their needs. The work of identifying the resources collected by libraries, archives and museums produces rich metadata that can be reused for many purposes ("user tasks"). This involves describing the resources and showing their relationships to persons, families, corporate bodies and other resources, thus allowing users to navigate through surrogates of the resources to obtain the information they need more quickly. Metadata built throughout the life cycle of a resource are particularly valuable to many kinds of users: resource creators, publishers, agencies, booksellers, resource aggregators, system vendors, libraries, other cultural institutions and end users. The new international cataloguing code, RDA: Resource Description and Access, is designed to support these basic user tasks by producing well-formed, interconnected metadata for the digital environment, enabling libraries to remain relevant in the semantic web. Acknowledgement: The English text of this essay is published in "Serials", November 2011, 24, 3, under the title Keeping Libraries Relevant in the Semantic Web with RDA: Resource Description and Access, DOI: http://dx.doi.org/10.1629/24266. Italian translation by Maria Chiara Iorio and Tiziana Possemato, who thank Carlo Bianchini and Mauro Guerrini for reviewing the translation.

  14. Urban Poor community Access to reproductive Health care in Surabaya. An Equity Analysis Using Spider Web

    Directory of Open Access Journals (Sweden)

    Ernawaty Ernawaty

    2015-01-01

    Full Text Available Background: Poverty in urban areas triggers new problems related to access to health care services. The heavy burden of urban life means that health, and especially reproductive health, never becomes a first priority for the urban poor community. This study aimed to analyze the equity of the urban poor community in accessing reproductive health care. Methods: This is a descriptive study with a cross-sectional design. There were 78 women residents of Penjaringansari II Flats in Surabaya selected by simple random sampling (α=10%) as respondents. Penjaringansari II Flats was chosen because it is a slum area occupied by unskilled labourers and casual workers. The fulfillment of respondents' needs in reproductive health care was analyzed by spider web analysis. Results: Most of the respondents (55.1%) were first married before 20 years of age. Of these, 60.5% also gave birth before 20 years of age, so they belong to the group of high-risk pregnant women. The spider web area for health care of those married under age was smaller than that of those married at the ideal age, which means that under-age-married urban poor women experienced inequity in ANC health services. Approximately 10.3% of respondents had never used contraceptives because of fear of side effects and their husbands' objections. Conclusions: Better equity was shown in the prevention of cervical cancer: being perceived as poor people in need of assistance, Penjaringansari II residents were often offered free Pap smears. The poor conditions experienced by a group can thus promote not only health care inequity but also health care equity. Recommendation: Health care equity for the urban poor can be pursued through aid schemes provided either by the community or by the government.

  15. Maintaining the Role of Libraries in the Semantic Web through RDA: Resource Description and Access

    Directory of Open Access Journals (Sweden)

    Barbara Tillett

    2011-10-01

    Full Text Available Cataloguing does not mean simply building a catalogue. It means enabling users to gain timely access to the information relevant to their needs. The work of identifying the resources collected by libraries, archives, and museums yields rich metadata that can be reused for many purposes ("user tasks"). This involves describing the resources and showing their relationships to persons, families, corporate bodies, and other resources, thus allowing users to navigate through surrogates of the resources to reach the information they need more quickly. Metadata built throughout the entire life cycle of a resource are especially valuable to many kinds of users: resource creators, publishers, agencies, booksellers, resource aggregators, system vendors, libraries, other cultural institutions, and end users. The new international cataloguing code, RDA: Resource Description and Access, is designed to support these basic user tasks by producing well-formed, interconnected metadata for the digital environment, enabling libraries to remain relevant in the Semantic Web. Acknowledgement: The English text of this essay was published in "Serials", November 2011, 24, 3, under the title Keeping Libraries Relevant in the Semantic Web with RDA: Resource Description and Access, DOI: http://dx.doi.org/10.1629/24266. Italian translation by Maria Chiara Iorio and Tiziana Possemato, who thank Carlo Bianchini and Mauro Guerrini for reviewing the translation.

  16. Development of a Web-Accessible Population Pharmacokinetic Service—Hemophilia (WAPPS-Hemo): Study Protocol

    Science.gov (United States)

    Foster, Gary; Navarro-Ruan, Tamara; McEneny-King, Alanna; Edginton, Andrea N; Thabane, Lehana

    2016-01-01

    Background Individual pharmacokinetic assessment is a critical component of tailored prophylaxis for hemophilia patients. Population pharmacokinetics makes it possible to work from sparse individual data, thus simplifying individual pharmacokinetic studies. Implementing population pharmacokinetic capacity for the hemophilia community is beyond individual reach and requires a system-wide effort. Objective The Web-Accessible Population Pharmacokinetic Service—Hemophilia (WAPPS-Hemo) project aims to assemble a database of patient pharmacokinetic data for all existing factor concentrates, develop and validate population pharmacokinetics models, and integrate these models within a Web-based calculator for individualized pharmacokinetic estimation in patients at participating treatment centers. Methods Individual pharmacokinetic studies on factor VIII and IX concentrates will be sourced from pharmaceutical companies and independent investigators. All factor concentrate manufacturers, hemophilia treatment centers (HTCs), and independent investigators (identified via a systematic review of the literature) having pharmacokinetic data on file and willing to contribute full or sparse pharmacokinetic data will be eligible for participation. Multicompartmental modeling will be performed using a mixed-model approach for derivation and Bayesian forecasting for estimation from individual sparse data. NONMEM (ICON Development Solutions) will be used as the modeling software. Results The WAPPS-Hemo research network has been launched and is currently joined by 30 HTCs from across the world. We have gathered dense individual pharmacokinetic data on 878 subjects, including several replicates, on 21 different molecules from 17 different sources. We have collected sparse individual pharmacokinetic data on 289 subjects from the participating centers through the testing phase of the WAPPS-Hemo Web interface. We have developed prototypal population pharmacokinetics models for 11 molecules. The WAPPS-Hemo website
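
    The record describes mixed-effects modeling in NONMEM with Bayesian forecasting from sparse samples. As a rough illustration of the Bayesian step only, the sketch below computes a maximum a posteriori estimate of an individual's clearance and volume for a one-compartment IV bolus model under log-normal population priors. The model and every parameter value are invented for illustration; none of them are WAPPS-Hemo's.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # One-compartment IV bolus model: C(t) = dose / V * exp(-(CL / V) * t)
    def concentration(t, dose, cl, v):
        return dose / v * np.exp(-(cl / v) * t)

    def map_estimate(times, observations, dose, pop_cl=2.0, pop_v=40.0,
                     omega_cl=0.3, omega_v=0.2, sigma=0.1):
        """MAP estimate of individual CL and V from sparse observations.

        pop_* are hypothetical population typical values, omega_* the
        between-subject SDs on the log scale, sigma a proportional
        residual error; these are not real factor-concentrate values.
        """
        def neg_log_posterior(eta):
            cl = pop_cl * np.exp(eta[0])
            v = pop_v * np.exp(eta[1])
            pred = concentration(times, dose, cl, v)
            residual = np.sum(((observations - pred) / (sigma * pred)) ** 2)
            prior = (eta[0] / omega_cl) ** 2 + (eta[1] / omega_v) ** 2
            return 0.5 * (residual + prior)

        fit = minimize(neg_log_posterior, x0=[0.0, 0.0], method="Nelder-Mead")
        return pop_cl * np.exp(fit.x[0]), pop_v * np.exp(fit.x[1])

    # Two sparse samples after a hypothetical 2000-unit dose
    times = np.array([4.0, 24.0])
    obs = np.array([35.0, 12.0])
    cl_hat, v_hat = map_estimate(times, obs, dose=2000.0)
    print(f"individual CL = {cl_hat:.2f}, V = {v_hat:.1f}")
    ```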

  17. The Live Access Server - A Web-Services Framework for Earth Science Data

    Science.gov (United States)

    Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.

    2005-12-01

    The Live Access Server (LAS) is a general purpose Web-server for delivering services related to geo-science data sets. Data providers can use the LAS architecture to build custom Web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's on-line data holdings, make plots of data, create sub-sets in a variety of formats, compare data sets and perform analysis on the data. The Live Access Server software has continued to evolve by expanding the types of data it can serve (in-situ observations and curvilinear grids) and by taking advantage of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the Web community (Java Servlet and the Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete Web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components that are responsible for the user interaction (the User Interface Server) from the components that are responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity backed by a database store for holding the user interface widget components. The User Interface Server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is

  18. The Accessibility, Usability, and Reliability of Chinese Web-Based Information on HIV/AIDS

    Science.gov (United States)

    Niu, Lu; Luo, Dan; Liu, Ying; Xiao, Shuiyuan

    2016-01-01

    Objective: The present study was designed to assess the quality of Chinese-language Internet-based information on HIV/AIDS. Methods: We entered the following search terms, in Chinese, into Baidu and Sogou: “HIV/AIDS”, “symptoms”, and “treatment”, and evaluated the first 50 hits of each query using the Minervation validation instrument (LIDA tool) and DISCERN instrument. Results: Of the 900 hits identified, 85 websites were included in this study. The overall score of the LIDA tool was 63.7%; the mean score of accessibility, usability, and reliability was 82.2%, 71.5%, and 27.3%, respectively. Of the top 15 sites according to the LIDA score, the mean DISCERN score was calculated at 43.1 (95% confidence intervals (CI) = 37.7–49.5). Noncommercial websites showed higher DISCERN scores than commercial websites; whereas commercial websites were more likely to be found in the first 20 links obtained from each search engine than the noncommercial websites. Conclusions: In general, the HIV/AIDS related Chinese-language websites have poor reliability, although their accessibility and usability are fair. In addition, the treatment information presented on Chinese-language websites is far from sufficient. There is an imperative need for professionals and specialized institutes to improve the comprehensiveness of web-based information related to HIV/AIDS. PMID:27556475

  19. A Data Capsule Framework For Web Services: Providing Flexible Data Access Control To Users

    CERN Document Server

    Kannan, Jayanthkumar; Chun, Byung-Gon

    2010-01-01

    This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data to web services in the form of such capsules rather than as raw data. Capsules can be deployed in a variety of ways, either on a trusted third party, on the user's own computer, or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module: the only requirement is that the deployment mechanism must ensure that the user's data is only accessed via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, what parties are allowed to access the interface, and with what parameter...

  20. A New Network Covert Channel Based on Web Access Model

    Institute of Scientific and Technical Information of China (English)

    廖晓锋; 邱桂华

    2013-01-01

    Network covert channels are communication methods that hide stolen confidential information inside normal network transmission protocols. Because a network covert timing channel does not modify the content of network packets, it is more difficult to detect and restrict, and therefore poses a greater threat. This paper presents a new network covert timing channel based on a Web access model, in which a malicious user transfers confidential information by accessing a Web server in regular patterns. We implement a prototype of this covert channel and present an analysis of its performance.
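
    The abstract does not give the paper's exact encoding, so the sketch below shows only the generic idea of a Web-access timing channel: the sender issues ordinary-looking HTTP requests, and the message lives entirely in the gaps between them. The delays, the decoding threshold and the URL are placeholder choices.

    ```python
    import time
    import urllib.request

    # Hypothetical encoding: a short gap between requests carries a 0 bit,
    # a long gap carries a 1 bit. The paper's actual scheme is not given
    # in the abstract; delays, threshold and URL are placeholders.
    SHORT, LONG = 1.0, 3.0  # seconds

    def send_bits(bits, url="http://example.com/"):
        """Sender side: ordinary-looking GETs whose spacing is the message."""
        for bit in bits:
            urllib.request.urlopen(url).read()
            time.sleep(LONG if bit else SHORT)

    def decode_gaps(arrival_times, threshold=2.0):
        """Receiver side: recover bits from request arrival times in a log."""
        gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
        return [1 if gap > threshold else 0 for gap in gaps]

    print(decode_gaps([0.0, 1.1, 4.2, 5.3, 8.5]))  # -> [0, 1, 0, 1]
    ```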

  1. J-TEXT WebScope: An efficient data access and visualization system for long pulse fusion experiment

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei, E-mail: zhenghaku@gmail.com [State Key Laboratory of Advanced Electromagnetic Engineering and Technology in Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering in Huazhong University of Science and Technology, Wuhan 430074 (China); Wan, Kuanhong; Chen, Zhi; Hu, Feiran; Liu, Qiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology in Huazhong University of Science and Technology, Wuhan 430074 (China); School of Electrical and Electronic Engineering in Huazhong University of Science and Technology, Wuhan 430074 (China)

    2016-11-15

    Highlights: • No matter how large the data is, the response time is always less than 500 milliseconds. • It is intelligent and gives you just the data you want. • It can be accessed directly over the Internet from any standard browser, without installing special client software. • Scale and segment technology is adopted to organize data. • Supporting a new database in WebScope is quite easy. • With the configuration stored in the user's profile, you have your own portable WebScope. - Abstract: Fusion research is an international collaborative effort. To enable researchers across the world to visualize and analyze the experiment data, a web-based data access and visualization tool is quite important [1]. Now, a new WebScope based on RIA (Rich Internet Application) is designed and implemented to meet these requirements. On the browser side, a fluent and intuitive interface is provided for researchers at the J-TEXT laboratory and collaborators from all over the world to view experiment data and related metadata. Fusion experiments will feature long pulses and high sampling rates in the future. The data access and visualization system in this work has adopted the segment and scale concept: large data samples are re-sampled at different scales and then split into segments for instant response. It allows users to view extremely large data on the web browser efficiently, without worrying about limitations on the size of the data. The HTML5 and JavaScript based web front-end provides an intuitive and fluent user experience. On the server side, a RESTful (Representational State Transfer) web API, which is based on ASP.NET MVC (Model View Controller), allows users to access the data and its metadata through HTTP (HyperText Transfer Protocol). An interface to the database has been designed to decouple the data access and visualization system from the data storage. It can be applied upon any data storage system like MDSplus or JTEXTDB, and this system is very easy to
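
    A minimal sketch of the segment-and-scale idea described above: a long signal is pre-decimated into coarser levels that keep per-bin minima and maxima, and a query serves the finest level whose point count fits the response budget. The decimation factor and the envelope rule are assumptions for illustration, not J-TEXT WebScope's actual implementation.

    ```python
    import numpy as np

    def build_scales(signal, factor=10, min_len=1000):
        """Pre-compute coarser versions of a long 1-D signal.

        Each level stores per-bin (min, max) pairs so spikes survive
        decimation; 'factor' and 'min_len' are illustrative choices.
        """
        scales = {1: signal}
        step = factor
        while len(signal) // step >= min_len:
            n = (len(signal) // step) * step
            bins = signal[:n].reshape(-1, step)
            # interleave each bin's min and max to keep the envelope
            scales[step] = np.ravel(np.column_stack((bins.min(1), bins.max(1))))
            step *= factor
        return scales

    def query(scales, start, stop, max_points=2000):
        """Serve the sample range [start, stop) with a bounded point count."""
        for step in sorted(scales):                  # finest scale first
            lo, hi = start // step, -(-stop // step)
            points = (hi - lo) * (2 if step > 1 else 1)
            if points <= max_points:
                return scales[step][lo:hi] if step == 1 else scales[step][2 * lo:2 * hi]
        coarsest = max(scales)
        return scales[coarsest]

    signal = np.sin(np.linspace(0, 200 * np.pi, 2_000_000))
    scales = build_scales(signal)
    view = query(scales, 0, 2_000_000, max_points=5000)
    print(len(view))  # the whole 2M-sample shot served as a 4000-point envelope
    ```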

  2. PONGO: a web server for multiple predictions of all-alpha transmembrane proteins

    DEFF Research Database (Denmark)

    Amico, M.; Finelli, M.; Rossi, I.;

    2006-01-01

    The PONGO web server (http://pongo.biocomp.unibo.it/) provides predictive annotation of the all-alpha membrane proteins in the human genome, not only through DAS queries but also directly through a simple web interface. In order to produce a more comprehensive analysis of the sequence at hand, this annotation is carried out with four selected, high-scoring predictors: TMHMM2.0, MEMSAT, PRODIV and ENSEMBLE1.0. The stored, pre-computed predictions for the human proteins can be searched and displayed in a graphical view. The web service, however, allows the prediction of the topology of any kind of putative membrane protein, regardless of the organism, and, more importantly, with the same sequence profile for a given sequence when required. Here we present a new web server that incorporates state-of-the-art topology predictors in a single framework, so that users can interactively compare and evaluate four predictions simultaneously.

  3. A web accessible scientific workflow system for vadose zone performance monitoring: design and implementation examples

    Science.gov (United States)

    Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.

    2005-12-01

    Long-term performance monitoring has been identified by DOE, DOD and EPA as one of the most challenging and costly elements of contaminated site remedial efforts. Such monitoring should provide timely and actionable information relevant to a multitude of stakeholder needs. This information should be obtained in a manner which is auditable, cost effective and transparent. Over the last several years INL staff have designed and implemented a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition from diverse sensors (geophysical, geochemical and hydrological) with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This system has been implemented and is operational for several sites, including the Ruby Gulch Waste Rock Repository (a capped mine waste rock dump on the Gilt Edge Mine Superfund Site), the INL Vadose Zone Research Park and an alternative cover landfill. Implementations for other vadose zone sites are currently in progress. These systems allow for autonomous performance monitoring through automated data analysis and report generation. This performance monitoring has allowed users to obtain insights into system dynamics, regulatory compliance and residence times of water. Our system uses modular components for data selection and graphing and WSDL-compliant web services for external functions such as statistical analyses and model invocations. Thus, implementing this system for novel sites and extending functionality (e.g. adding novel models) is relatively straightforward. As system access requires a standard web browser

  4. A Promising Practicum Pilot--Exploring Associate Teachers' Access and Interactions with a Web-Based Learning Tool

    Science.gov (United States)

    Petrarca, Diana

    2013-01-01

    This paper explores how a small group of associate teachers (i.e., the classroom teachers who host, supervise, and mentor teacher candidates during practicum placements) accessed and interacted with the Associate Teacher Learning Tool (ATLT), a web-based learning tool created specifically for this new group of users. The ATLT is grounded in…

  5. Research on Application of Metacognitive Strategy in English Listening in the Web-based Self-access Learning Environment

    Institute of Scientific and Technical Information of China (English)

    罗雅清

    2012-01-01

    Metacognitive strategies are regarded as the most advanced among the learning strategies. This study focuses on the application of metacognitive strategies in English listening in the web-based self-access learning environment (WSLE) and tries to provide some references for students and teachers in vocational colleges.

  6. Development of Remote Monitoring and a Control System Based on PLC and WebAccess for Learning Mechatronics

    Directory of Open Access Journals (Sweden)

    Wen-Jye Shyr

    2013-02-01

    Full Text Available This study develops a novel method for learning mechatronics using remote monitoring and control, based on a programmable logic controller (PLC) and WebAccess. A mechatronics module, a Web-CAM and a PLC were integrated with WebAccess software to organize a remote laboratory. The proposed system enables users to access the Internet for remote monitoring and control of the mechatronics module via a web browser, thereby enhancing work flexibility by enabling personnel to control mechatronics equipment from a remote location. Mechatronics control and long-distance monitoring were realized by establishing communication between the PLC and WebAccess. Analytical results indicate that the proposed system is feasible. The suitability of this system is demonstrated in the Department of Industrial Education and Technology at National Changhua University of Education, Taiwan. Preliminary evaluation of the system was encouraging and has shown that it has achieved success in helping students understand concepts and master remote monitoring and control techniques.

  7. Factors explaining adoption and implementation processes for web accessibility standards within eGovernment systems and organizations

    NARCIS (Netherlands)

    Velleman, Eric M.; Nahuis, Inge; Geest, van der Thea

    2015-01-01

    Local government organizations such as municipalities often seem unable to fully adopt or implement web accessibility standards even if they are actively pursuing it. Based on existing adoption models, this study identifies factors in five categories that influence the adoption and implementation of

  8. 78 FR 67881 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2013-11-12

    ... section 508 standards for Web content, forms and applications. See 75 FR 43460-43467 (July 26... information and communication technology (ICT) procurements that specifically proposes WCAG 2.0 Level AA as... Factors (HF); Accessibility Requirements for Public Procurement of ICT products and services in...

  9. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-05-04

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  10. New Web Services for Broader Access to National Deep Submergence Facility Data Resources Through the Interdisciplinary Earth Data Alliance

    Science.gov (United States)

    Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.

    2016-12-01

    The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image includes time and vehicle position, data from environmental sensors, and scientist-generated annotations, and data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API and allows users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface was deployed to provide consolidated access to basic information about each NDSF dive (e.g. vehicle name, dive ID, location, etc), which is important for linking distributed data resources curated in different data systems.

  11. Global Location-Based Access to Web Applications Using Atom-Based Automatic Update

    Science.gov (United States)

    Singh, Kulwinder; Park, Dong-Won

    We propose an architecture which enables people to enquire about information available in directory services by voice using regular phones. We implement a Virtual User Agent (VUA) which mediates between the human user and a business directory service. The system enables the user to search for the nearest clinic, gas station by price, motel by price, food/coffee, banks/ATMs etc. and fix an appointment, or automatically establish a call between the user and the business party if the user prefers. The user also has an option to receive appointment confirmation by phone, SMS, or e-mail. The VUA is accessible by a toll-free DID (Direct Inward Dialing) number from any phone, anywhere, at any time. We use the Euclidean formula for distance measurement, since shorter geodesic distances (on the Earth's surface) correspond to shorter Euclidean distances (measured by a straight line through the Earth). Our proposed architecture uses the Atom XML syndication format protocol for data integration, VoiceXML for creating the voice user interface (VUI) and CCXML for controlling the call components. We also provide an efficient algorithm for parsing the Atom feeds which supply data to the system. Moreover, we describe a cost-effective way of providing global access to the VUA based on Asterisk (an open source IP-PBX). We also provide some information on how our system can be integrated with GPS for locating the user's coordinates and thereby efficiently and spontaneously enhancing the system response. Additionally, the system has a mechanism for validating the phone numbers in its database, and it updates the numbers and other information, such as the daily price of gas or motels, automatically using an Atom-based feed. Currently, the commercial directory services (for example, 411) do not have facilities to update their listings automatically, which is why callers often get out-of-date phone numbers or other information. Our system can be integrated very easily
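
    The claim that through-the-Earth Euclidean distances order points the same way as surface geodesics can be checked directly, since the chord 2R·sin(d/2R) grows monotonically with the arc length d. A small sketch follows; the coordinates and business entries are made up.

    ```python
    import math

    R = 6371.0  # mean Earth radius in km

    def to_xyz(lat, lon):
        """Unit-sphere Cartesian coordinates for latitude/longitude in degrees."""
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo), math.cos(la) * math.sin(lo), math.sin(la))

    def chord_km(p, q):
        """Euclidean straight-line distance through the Earth."""
        return R * math.dist(to_xyz(*p), to_xyz(*q))

    def geodesic_km(p, q):
        """Great-circle distance on the surface (haversine)."""
        la1, lo1, la2, lo2 = map(math.radians, (*p, *q))
        h = math.sin((la2 - la1) / 2) ** 2 + \
            math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2
        return 2 * R * math.asin(math.sqrt(h))

    # Both measures rank the candidates identically, so the cheaper
    # Euclidean formula is enough for nearest-business lookups.
    user = (40.7128, -74.0060)                      # hypothetical caller position
    shops = {"A": (40.73, -74.00), "B": (40.65, -73.95)}
    print(sorted(shops, key=lambda k: chord_km(user, shops[k])))
    print(sorted(shops, key=lambda k: geodesic_km(user, shops[k])))
    ```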

  12. A revaluation of the cultural dimension of disability policy in the European Union: the impact of digitization and web accessibility.

    Science.gov (United States)

    Ferri, Delia; Giannoumis, G Anthony

    2014-01-01

    Reflecting the commitments undertaken by the EU through the conclusion of the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), the European Disability Strategy 2010–2020 not only gives a prominent position to accessibility, broadly interpreted, but also suggests an examination of the obligations for access to cultural goods and services. The European Disability Strategy 2010–2020 expressly acknowledges that EU action will support national activities to make sports, leisure, cultural and recreational organizations and activities accessible, and use the possibilities for copyright exceptions in the Directive 2001/29/EC (Infosoc Directive). This article discusses to what extent the EU has realized the principle of accessibility and the right to access cultural goods and services envisaged in the UNCRPD. Previous research has yet to explore how web accessibility and digitization interact with the cultural dimension of disability policy in the European Union. This examination attempts to fill this gap by discussing to what extent the European Union has put this cultural dimension into effect and how web accessibility policies and the digitization of cultural materials influence these efforts.

  13. RaptorX-Property: a web server for protein structure property prediction

    OpenAIRE

    Wang, Sheng; Li, Wei; Liu, Shiwang; Xu, Jinbo

    2016-01-01

    RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent acce...

  14. PBOND: web server for the prediction of proline and non-proline cis/trans isomerization.

    Science.gov (United States)

    Exarchos, Konstantinos P; Exarchos, Themis P; Papaloukas, Costas; Troganis, Anastassios N; Fotiadis, Dimitrios I

    2009-09-01

    PBOND is a web server that predicts the conformation of the peptide bond between any two amino acids. PBOND classifies the peptide bonds into one out of four classes, namely cis imide (cis-Pro), cis amide (cis-nonPro), trans imide (trans-Pro) and trans amide (trans-nonPro). Moreover, for every prediction a reliability index is computed. The underlying structure of the server consists of three stages: (1) feature extraction, (2) feature selection and (3) peptide bond classification. PBOND can handle both single sequences as well as multiple sequences for batch processing. The predictions can either be directly downloaded from the web site or returned via e-mail. The PBOND web server is freely available at http://195.251.198.21/pbond.html.
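
    The three-stage structure described above (feature extraction, feature selection, classification with a reliability index) can be sketched generically with scikit-learn. The window features, the selector and the classifier below are placeholder choices for illustration, not PBOND's actual methods.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import Pipeline

    AMINO = "ACDEFGHIKLMNPQRSTVWY"

    def window_features(sequence, i, w=3):
        """Stage (1), placeholder: one-hot encode residues around bond i|i+1."""
        vec = []
        for j in range(i - w + 1, i + w + 1):
            one_hot = [0] * len(AMINO)
            if 0 <= j < len(sequence) and sequence[j] in AMINO:
                one_hot[AMINO.index(sequence[j])] = 1
            vec.extend(one_hot)
        return vec

    # Stages (2) and (3): univariate selection feeding a classifier.
    model = Pipeline([
        ("select", SelectKBest(f_classif, k=50)),
        ("classify", RandomForestClassifier(n_estimators=200, random_state=0)),
    ])

    def reliability(fitted, X):
        """Reliability index as the top predicted class probability."""
        return fitted.predict_proba(X).max(axis=1)

    # Toy demonstration with random labels (illustration only).
    rng = np.random.default_rng(0)
    seq = AMINO * 3
    X = np.array([window_features(seq, i) for i in range(5, 45)])
    y = rng.choice(["cis-Pro", "cis-nonPro", "trans-Pro", "trans-nonPro"], size=len(X))
    model.fit(X, y)
    print(model.predict(X[:2]), reliability(model, X[:2]))
    ```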

  15. PBOND: Web Server for the Prediction of Proline and Non-Proline cis / trans Isomerization

    Institute of Scientific and Technical Information of China (English)

    Konstantinos P. Exarchos; Themis P. Exarchos; Costas Papaloukas; Anastassios N. Troganis; Dimitrios I. Fotiadis

    2009-01-01

    PBOND is a web server that predicts the conformation of the peptide bond between any two amino acids. PBOND classifies the peptide bonds into one out of four classes, namely cis imide (cis-Pro), cis amide (cis-nonPro), trans imide (trans-Pro) and trans amide (trans-nonPro). Moreover, for every prediction a reliability index is computed. The underlying structure of the server consists of three stages: (1) feature extraction, (2) feature selection and (3) peptide bond classification. PBOND can handle both single sequences as well as multiple sequences for batch processing. The predictions can either be directly downloaded from the web site or returned via e-mail. The PBOND web server is freely available at http://195.251.198.21/pbond.html.

  16. Web search queries can predict stock market volumes.

    Science.gov (United States)

    Bordino, Ilaria; Battiston, Stefano; Caldarelli, Guido; Cristelli, Matthieu; Ukkonen, Antti; Weber, Ingmar

    2012-01-01

    We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. Few recent works applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to investigate also the user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.
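
    The core measurement behind "query volumes anticipate peaks of trading" is a lagged correlation between two daily time series. A sketch with synthetic data standing in for the proprietary query logs; the one-day lead is deliberately built into the toy data so the test has something to find.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)

    # Synthetic stand-ins: query volume leads trading volume by one day.
    days = pd.date_range("2011-01-03", periods=250, freq="B")
    queries = pd.Series(rng.lognormal(0.0, 0.3, len(days)), index=days)
    trading = 0.7 * queries.shift(1) + 0.3 * pd.Series(
        rng.lognormal(0.0, 0.3, len(days)), index=days)

    def lagged_correlation(x, y, max_lag=5):
        """Pearson correlation of x shifted by `lag` days against y."""
        return {lag: x.shift(lag).corr(y) for lag in range(-max_lag, max_lag + 1)}

    corr = lagged_correlation(queries, trading)
    best = max(corr, key=corr.get)
    # A positive best lag means past query volume lines up with today's
    # trading volume, i.e. queries "anticipate" trading.
    print(f"strongest correlation at lag {best:+d} day(s): {corr[best]:.2f}")
    ```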

  17. Web search queries can predict stock market volumes.

    Directory of Open Access Journals (Sweden)

    Ilaria Bordino

    Full Text Available We live in a computerized and networked society where many of our actions leave a digital trace and affect other people's actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. Few recent works applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to investigate also the user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.

  18. Web Search Queries Can Predict Stock Market Volumes

    Science.gov (United States)

    Bordino, Ilaria; Battiston, Stefano; Caldarelli, Guido; Cristelli, Matthieu; Ukkonen, Antti; Weber, Ingmar

    2012-01-01

    We live in a computerized and networked society where many of our actions leave a digital trace and affect other people’s actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. Few recent works applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to investigate also the user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www. PMID:22829871

  19. Meta4: a web-application for sharing and annotating metagenomic gene predictions using web-services

    Directory of Open Access Journals (Sweden)

    Emily J Richardson

    2013-09-01

    Full Text Available Whole-genome-shotgun (WGS) metagenomics experiments produce DNA sequence data from entire ecosystems, and provide a huge amount of novel information. Gene discovery projects require up-to-date information about sequence homology and domain structure for millions of predicted proteins to be presented in a simple, easy-to-use system. There is a lack of simple, open, flexible tools that allow the rapid sharing of metagenomics datasets with collaborators in a format they can easily interrogate. We present Meta4, a flexible and extensible web application that can be used to share and annotate metagenomic gene predictions. Proteins and predicted domains are stored in a simple relational database, with a dynamic front-end which displays the results in an internet browser. Web services are used to provide up-to-date information about the proteins from homology searches against public databases. Information about Meta4 can be found on the project website (http://www.ark-genomics.org/bioinformatics/meta4), code is available on Github (https://github.com/mw55309/meta4), a cloud image is available, and an example implementation can be seen at http://www.ark-genomics.org/tools/meta4

  20. Online Access to Weather Satellite Imagery Through the World Wide Web

    Science.gov (United States)

    Emery, W.; Baldwin, D.

    1998-01-01

    Both global area coverage (GAC) and high-resolution picture transmission (HRPT) data from the Advanced Very High Resolution Radiometer (AVHRR) are made available to Internet users through an online data access system. Older GOES-7 data are also available. Created as a "testbed" data system for NASA's future Earth Observing System Data and Information System (EOSDIS), this testbed provides an opportunity to test both the technical requirements of an online data system and the different ways in which the general user community would employ such a system. Initiated in December 1991, the basic data system experienced five major evolutionary changes in response to user requests and requirements. Features added with these changes were the addition of online browse, user subsetting, dynamic image processing/navigation, a stand-alone data storage system, and movement from an X-windows graphical user interface (GUI) to a World Wide Web (WWW) interface. Over its lifetime, the system has had as many as 2500 registered users. The system on the WWW has had over 2500 hits since October 1995. Many of these hits are by casual users who only take the GIF images directly from the interface screens and do not specifically order digital data. Still, there is a consistent stream of users ordering the navigated image data and related products (maps and so forth). We have recently added a real-time, seven-day, northwestern United States normalized difference vegetation index (NDVI) composite that has generated considerable interest. Index Terms: Data system, earth science, online access, satellite data.
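
    The NDVI mentioned at the end of the record is the simple band ratio (NIR − Red) / (NIR + Red). A sketch of the index and of a seven-day maximum-value composite follows; whether the AVHRR testbed used exactly this compositing rule is an assumption, and the reflectance grids are random stand-ins.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)

    def weekly_composite(daily_ndvi):
        """Per-pixel maximum over a week of daily NDVI grids.

        Maximum-value compositing suppresses clouds, which depress NDVI;
        this rule is a common convention, assumed here for illustration.
        """
        return np.nanmax(np.stack(list(daily_ndvi)), axis=0)

    # Toy reflectance grids standing in for calibrated AVHRR channels 1 and 2.
    rng = np.random.default_rng(0)
    days = [(rng.random((64, 64)), rng.random((64, 64))) for _ in range(7)]
    composite = weekly_composite(ndvi(nir, red) for nir, red in days)
    print(composite.shape, float(composite.mean()))
    ```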

  1. MATISSE a web-based tool to access, visualize and analyze high resolution minor bodies observation

    Science.gov (United States)

    Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo

    2016-07-01

    In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often have very irregular shapes (as in the case of the comet 67P Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation onto the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI format) and very high-resolution 3D files to be viewed by means of the free software Paraview. So far, the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing the comet 67P. The success of this task, well represented by the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE will be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and standard protocols will also be used to access data stored in external repositories, such as NASA ODE and Planetary VO.

  2. Declarative Access Control for WebDSL: Combining Language Integration and Separation of Concerns

    NARCIS (Netherlands)

    Groenewegen, D.; Visser, E.

    2008-01-01

    Preprint of paper published in: ICWE 2008 - 8th International Conference on Web Engineering, 14-18 July 2008; doi:10.1109/ICWE.2008.15 In this paper, we present the extension of WebDSL, a domain-specific language for web application development, with abstractions for declarative definition of acces

  3. Study of HTML Meta-Tags Utilization in Web-based Open-Access Journals

    Directory of Open Access Journals (Sweden)

    Pegah Pishva

    2007-04-01

    Full Text Available The present study investigates the extent of utilization of two meta tags – "keywords" and "descriptors" – in Web-based Open-Access Journals. A sample composed of 707 journals taken from DOAJ was used. These were analyzed for their utilization of the said meta tags. Findings demonstrated that these journals utilized the "keywords" and "descriptors" meta tags at rates of 33.1% and 29.9%, respectively. It was further demonstrated that among the various subject classifications, "General Journals" had the highest and "Mathematics and Statistics Journals" the lowest utilization of the "keywords" meta tag. Moreover, "General Journals" and "Chemistry Journals", with 55.6% and 15.4% utilization respectively, had the highest and the lowest "descriptors" meta tag usage rates. Based on our findings, and when compared against other similar research findings, there has been no significant growth in the utilization of these meta tags.

  4. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    Science.gov (United States)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Disciplined specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycles studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  5. Human membrane transporter database: a Web-accessible relational database for drug transport studies and pharmacogenomics.

    Science.gov (United States)

    Yan, Q; Sadée, W

    2000-01-01

    The human genome contains numerous genes that encode membrane transporters and related proteins. For drug discovery, development, and targeting, one needs to know which transporters play a role in drug disposition and effects. Moreover, genetic polymorphisms in human membrane transporters may contribute to interindividual differences in the response to drugs. Pharmacogenetics, and, on a genome-wide basis, pharmacogenomics, address the effect of genetic variants on an individual's response to drugs and xenobiotics. However, our knowledge of the relevant transporters is limited at present. To facilitate the study of drug transporters on a broad scale, including the use of microarray technology, we have constructed a human membrane transporter database (HMTD). Even though it is still largely incomplete, the database contains information on more than 250 human membrane transporters, such as sequence, gene family, structure, function, substrate, tissue distribution, and genetic disorders associated with transporter polymorphisms. Readers are invited to submit additional data. Implemented as a relational database, HMTD supports complex biological queries. Accessible through a Web browser user interface via Common Gateway Interface (CGI) and Java Database Connectivity (JDBC), HMTD also provides useful links and references, allowing interactive searching and downloading of data. Taking advantage of the features of an electronic journal, this paper serves as an interactive tutorial for using the database, which we expect to develop into a research tool.

  6. Access to a syllabus of human hemoglobin variants (1996) via the World Wide Web.

    Science.gov (United States)

    Hardison, R C; Chui, D H; Riemer, C R; Miller, W; Carver, M F; Molchanova, T P; Efremov, G D; Huisman, T H

    1998-03-01

    Information on mutations in human hemoglobin is important in many efforts, including understanding the pathophysiology of hemoglobin diseases, developing therapies, elucidating the dynamics of sequence alterations in human populations, and dissecting the details of protein structure/function relationships. Currently, information is available on a large number of mutations and variants, but is distributed among thousands of papers. In an effort to organize this voluminous data set, two Syllabi have been prepared compiling succinct information on human hemoglobin abnormalities. In both of these, each entry provides amino acid and/or DNA sequence alterations, hematological and clinical data, methodology used for characterization, ethnic distribution, and functional properties and stability of the hemoglobin, together with appropriate literature references. A Syllabus of Human Hemoglobin Variants (1996) describes 693 abnormal hemoglobins resulting from alterations in the alpha-, beta-, gamma-, and delta-globin chains, including special abnormalities such as double mutations, hybrid chains, elongated chains, deletions, and insertions. We have converted this resource to an electronic form that is accessible via the World Wide Web at the Globin Gene Server (http://globin.cse.psu.edu). Hyperlinks are provided from each entry in the tables of variants to the corresponding full description. In addition, a simple query interface allows the user to find all entries containing a designated word or phrase. We are in the process of converting A Syllabus of Thalassemia Mutations (1997) to a similar electronic format.

  7. A web-accessible content-based cervicographic image retrieval system

    Science.gov (United States)

    Xue, Zhiyun; Long, L. Rodney; Antani, Sameer; Jeronimo, Jose; Thoma, George R.

    2008-03-01

    Content-based image retrieval (CBIR) is the process of retrieving images by directly using image visual characteristics. In this paper, we present a prototype system implemented for CBIR for a uterine cervix image (cervigram) database. This cervigram database is a part of data collected in a multi-year longitudinal effort by the National Cancer Institute (NCI), and archived by the National Library of Medicine (NLM), for the study of the origins of, and factors related to, cervical precancer/cancer. Users may access the system with any Web browser. The system is built with a distributed architecture which is modular and expandable; the user interface is decoupled from the core indexing and retrieving algorithms, and uses open communication standards and open source software. The system tries to bridge the gap between a user's semantic understanding and image feature representation, by incorporating the user's knowledge. Given a user-specified query region, the system returns the most similar regions from the database, with respect to attributes of color, texture, and size. Experimental evaluation of the retrieval performance of the system on "groundtruth" test data illustrates its feasibility to serve as a possible research tool to aid the study of the visual characteristics of cervical neoplasia.
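
    A minimal sketch of the attribute-similarity retrieval described above, using only a color histogram with histogram-intersection similarity; the real system also weighs texture and size, and its actual features are not reproduced here. The toy database is random image patches.

    ```python
    import numpy as np

    def color_histogram(region, bins=8):
        """Normalized joint RGB histogram of an H x W x 3 uint8 image region."""
        hist, _ = np.histogramdd(region.reshape(-1, 3),
                                 bins=(bins, bins, bins), range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def retrieve(query_region, database, k=5):
        """Return the ids of the k database regions most similar to the query.

        Similarity is histogram intersection on color only, a simplification
        of the color/texture/size attributes the system actually combines.
        """
        q = color_histogram(query_region)
        scores = {rid: np.minimum(q, color_histogram(r)).sum()
                  for rid, r in database.items()}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    rng = np.random.default_rng(1)
    db = {i: rng.integers(0, 256, (32, 32, 3), dtype=np.uint8) for i in range(20)}
    print(retrieve(db[3], db, k=3))  # region 3 should rank itself first
    ```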

  8. Design and Implement a Novel File Access Prediction Model in Linux

    Institute of Scientific and Technical Information of China (English)

    LIU Xie; LIU Xin-song; YANG Feng; BAI Ying-jie

    2004-01-01

    So far, file access prediction models have mainly been based on either the file access frequency or the historical record of the latest access. In this paper, a new file access prediction model called frequency- and recency-based successor (FRS) is presented, which combines the advantages of file frequency with the historical record. The FRS model has the capability of rapid response to workload changes and can predict future events with greater accuracy than most other prediction models. To evaluate the performance of the FRS model, the Linux kernel is modified to predict and prefetch upcoming accesses. The experiment shows that FRS can accurately predict approximately 80% of all file access events, while maintaining a per-file immediate successor queue (ISQ) which only requires regular dynamic updates.
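
    A sketch of a frequency- and recency-based successor predictor with a per-file immediate successor queue (ISQ), following the outline in the record; the exact blend of the two signals below is an illustrative choice, not the paper's.

    ```python
    from collections import Counter, defaultdict, deque

    class FRSPredictor:
        """Frequency- and Recency-based Successor prediction (sketch).

        For each file we keep a short queue of its most recent successors
        (recency) and a count of all successors ever seen (frequency).
        """

        def __init__(self, isq_len=4):
            self.frequency = defaultdict(Counter)   # file -> successor counts
            self.recent = defaultdict(lambda: deque(maxlen=isq_len))  # the ISQ
            self.last = None

        def record(self, accessed):
            """Feed one observed file access into the model."""
            if self.last is not None:
                self.frequency[self.last][accessed] += 1
                self.recent[self.last].append(accessed)
            self.last = accessed

        def predict(self):
            """Predict the next file after the most recent access."""
            if self.last is None:
                return None
            votes = Counter(self.recent[self.last])          # recency votes
            for name, count in self.frequency[self.last].items():
                votes[name] += count / 10.0                  # damped frequency
            return max(votes, key=votes.get) if votes else None

    p = FRSPredictor()
    for f in ["a", "b", "a", "b", "a", "c", "a"]:
        p.record(f)
    print(p.predict())  # "b": the most likely successor of "a"
    ```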

  9. QoS-Driven Self-Healing Web Service Composition Based on Performance Prediction

    Institute of Scientific and Technical Information of China (English)

    Yu Dai; Lei Yang; Bin Zhang

    2009-01-01

    Web services run in a highly dynamic environment; as a result, their QoS changes relatively frequently. In order to make a composite service adapt to this dynamic property of Web services, we propose a self-healing approach for Web service composition. The approach integrates backup selection at composition time with reselection at execution time. In order to make the composite service heal itself as quickly as possible and to minimize the number of reselections, a method of performance prediction is proposed in this paper. On this basis, the self-healing approach is presented, including the framework, the triggering algorithm for reselection and the reliability model of a service. Experiments show that the proposed solutions perform well in supporting self-healing Web service composition.
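
    A sketch of the prediction-triggered reselection idea: forecast a service's response time (here with simple exponential smoothing, standing in for the paper's prediction model) and switch to the backup chosen at composition time when the forecast violates the constraint. Service names, the smoothing factor and the threshold are all hypothetical.

    ```python
    class ServiceMonitor:
        """Predict a service's response time and trigger reselection (sketch)."""

        def __init__(self, name, backup, max_latency_ms, alpha=0.3):
            self.name, self.backup = name, backup
            self.max_latency_ms = max_latency_ms
            self.alpha = alpha          # exponential smoothing factor
            self.forecast = None

        def observe(self, latency_ms):
            """Update the latency forecast with one new measurement."""
            if self.forecast is None:
                self.forecast = latency_ms
            else:
                self.forecast = (self.alpha * latency_ms
                                 + (1 - self.alpha) * self.forecast)
            return self.forecast

        def pick(self):
            """Reselect the backup before the constraint is actually broken."""
            if self.forecast is not None and self.forecast > self.max_latency_ms:
                return self.backup
            return self.name

    m = ServiceMonitor("weather-v1", backup="weather-v2", max_latency_ms=200)
    for latency in [120, 150, 210, 260, 300]:
        m.observe(latency)
    print(m.pick())  # forecast now exceeds 200 ms, so the backup is chosen
    ```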

  10. Compact Web browsing profiles for click-through rate prediction

    DEFF Research Database (Denmark)

    Fruergaard, Bjarne Ørum; Hansen, Lars Kai

    2014-01-01

    In real-time advertising we are interested in finding features that improve click-through rate prediction. One source of available information is the bipartite graph of websites previously engaged by identifiable users. In this work, we investigate three different decompositions of such a graph, with varying degrees of sparsity in the representations. The decompositions that we consider are SVD, NMF, and IRM. To quantify their utility, we measure the performance of these representations when used as features in a sparse logistic regression model for click-through rate prediction. We recommend the IRM
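
    A sketch of the pipeline the record describes, using truncated SVD (one of the three decompositions compared, alongside NMF and IRM) to turn a sparse user-website graph into profile features for an L1-penalized logistic regression. The data are synthetic and the component count is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic user x website engagement graph (binary and very sparse).
    graph = sparse_random(1000, 500, density=0.01, format="csr", random_state=0)
    graph.data[:] = 1.0

    # Compact browsing profiles from a low-rank decomposition of the graph.
    svd = TruncatedSVD(n_components=32, random_state=0)
    profiles = svd.fit_transform(graph)

    # Synthetic click labels, correlated with the first latent dimension.
    clicks = (profiles[:, 0] + 0.1 * rng.standard_normal(len(profiles))
              > np.median(profiles[:, 0])).astype(int)

    # Sparse (L1-penalized) logistic regression for click-through prediction.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(profiles, clicks)
    print("training accuracy:", round(model.score(profiles, clicks), 3))
    ```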

  11. ProtSA: a web application for calculating sequence specific protein solvent accessibilities in the unfolded ensemble

    Directory of Open Access Journals (Sweden)

    Blackledge Martin

    2009-04-01

    Full Text Available Abstract Background The stability of proteins is governed by the heat capacity, enthalpy and entropy changes of folding, which are strongly correlated to the change in solvent accessible surface area experienced by the polypeptide. While the surface exposed in the folded state can be easily determined, accessibilities for the unfolded state at the atomic level cannot be obtained experimentally and are typically estimated using simplistic models of the unfolded ensemble. A web application providing realistic accessibilities of the unfolded ensemble of a given protein at the atomic level will prove useful. Results ProtSA, a web application that calculates sequence-specific solvent accessibilities of the unfolded state ensembles of proteins has been developed and made freely available to the scientific community. The input is the amino acid sequence of the protein of interest. ProtSA follows a previously published calculation protocol which uses the Flexible-Meccano algorithm to generate unfolded conformations representative of the unfolded ensemble of the protein, and uses the exact analytical software ALPHASURF to calculate atom solvent accessibilities, which are averaged on the ensemble. Conclusion ProtSA is a novel tool for the researcher investigating protein folding energetics. The sequence specific atom accessibilities provided by ProtSA will allow obtaining better estimates of the contribution of the hydrophobic effect to the free energy of folding, will help to refine existing parameterizations of protein folding energetics, and will be useful to understand the influence of point mutations on protein stability.
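
    The server's central step is averaging per-atom solvent accessibilities over an ensemble of unfolded conformers. The sketch below shows only that averaging; in ProtSA the per-conformer values come from ALPHASURF run on Flexible-Meccano conformations, which are stubbed out here with random numbers.

    ```python
    import numpy as np

    def ensemble_average_sasa(conformer_sasas):
        """Average per-atom solvent-accessible surface areas over an ensemble.

        conformer_sasas: iterable of 1-D arrays, one per unfolded conformer,
        each holding the SASA (in square angstroms) of every atom. Real values
        would come from an exact surface calculation on each conformer.
        """
        stack = np.vstack(list(conformer_sasas))
        return stack.mean(axis=0), stack.std(axis=0)

    # Toy ensemble: 2000 conformers x 150 atoms of fake SASA values.
    rng = np.random.default_rng(7)
    ensemble = rng.gamma(shape=2.0, scale=10.0, size=(2000, 150))
    mean_sasa, sd_sasa = ensemble_average_sasa(ensemble)
    print(mean_sasa[:5])
    ```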

  12. ESB-Based Sensor Web Integration for the Prediction of Electric Power Supply System Vulnerability

    Science.gov (United States)

    Stoimenov, Leonid; Bogdanovic, Milos; Bogdanovic-Dinic, Sanja

    2013-01-01

    Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combine Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users through means of a specialized Web GIS application. PMID:23955435

  13. ESB-Based Sensor Web Integration for the Prediction of Electric Power Supply System Vulnerability

    Directory of Open Access Journals (Sweden)

    Milos Bogdanovic

    2013-08-01

    Full Text Available Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combine Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users by means of a specialized Web GIS application.

  14. ESB-based Sensor Web integration for the prediction of electric power supply system vulnerability.

    Science.gov (United States)

    Stoimenov, Leonid; Bogdanovic, Milos; Bogdanovic-Dinic, Sanja

    2013-08-15

    Electric power supply companies increasingly rely on enterprise IT systems to provide them with a comprehensive view of the state of the distribution network. Within a utility-wide network, enterprise IT systems collect data from various metering devices. Such data can be effectively used for the prediction of power supply network vulnerability. The purpose of this paper is to present the Enterprise Service Bus (ESB)-based Sensor Web integration solution that we have developed with the purpose of enabling prediction of power supply network vulnerability, in terms of a prediction of defect probability for a particular network element. We will give an example of its usage and demonstrate our vulnerability prediction model on data collected from two different power supply companies. The proposed solution is an extension of the GinisSense Sensor Web-based architecture for collecting, processing, analyzing, decision making and alerting based on the data received from heterogeneous data sources. In this case, GinisSense has been upgraded to be capable of operating in an ESB environment and combine Sensor Web and GIS technologies to enable prediction of electric power supply system vulnerability. Aside from electrical values, the proposed solution gathers ambient values from additional sensors installed in the existing power supply network infrastructure. GinisSense aggregates gathered data according to an adapted Omnibus data fusion model and applies decision-making logic on the aggregated data. Detected vulnerabilities are visualized to end-users through means of a specialized Web GIS application.
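
    The abstract names an adapted Omnibus fusion model but does not give its concrete decision rule, so the following Python sketch only illustrates the general idea: per-element electrical and ambient readings are fused into a defect probability and elements above a threshold are flagged. All field names, weights and thresholds are invented for illustration and do not come from the paper.

        import math
        from dataclasses import dataclass

        @dataclass
        class Reading:
            element_id: str
            load_ratio: float     # electrical load as a fraction of rated capacity
            temperature_c: float  # ambient temperature from an add-on sensor
            humidity: float       # relative humidity, 0..1

        def defect_probability(r: Reading) -> float:
            """Toy fusion rule: weighted excess-risk terms squashed into (0, 1).
            The weights are illustrative placeholders, not values from the paper."""
            score = (2.5 * max(0.0, r.load_ratio - 0.8)
                     + 0.04 * max(0.0, r.temperature_c - 35.0)
                     + 1.0 * max(0.0, r.humidity - 0.9))
            return 1.0 - math.exp(-score)

        def alerts(readings, threshold=0.2):
            """Yield (element_id, probability) for elements above the threshold."""
            for r in readings:
                p = defect_probability(r)
                if p >= threshold:
                    yield r.element_id, p

        data = [Reading("transformer-17", 0.95, 41.0, 0.93),
                Reading("feeder-03", 0.55, 22.0, 0.40)]
        for element, p in alerts(data):
            print(f"vulnerable: {element} (p={p:.2f})")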

  15. Are specialized web servers better at predicting protein structures ...

    African Journals Online (AJOL)

    RABAIL HAFEEZ (0973106)

    2012-07-03

    Jul 3, 2012 ... This research study answers the question of which technology is best for predicting protein structures. Stand-alone .... server 3D Jury to show how it produces high quality, accurate ..... Nucleic Acids Res., 32: 14-16. Ginalski K ...

  16. PDTD: a web-accessible protein database for drug target identification

    Directory of Open Access Journals (Sweden)

    Gao Zhenting

    2008-02-01

    Full Text Available Abstract Background Target identification is important for modern drug discovery. With the advances in the development of molecular docking, potential binding proteins may be discovered by docking a small molecule to a repository of proteins with three-dimensional (3D) structures. To complete this task, a reverse docking program and a drug target database with 3D structures are necessary. To this end, we have developed a web server tool, TarFisDock (Target Fishing Docking; http://www.dddc.ac.cn/tarfisdock), which has been used widely by others. Recently, we have constructed a protein target database, Potential Drug Target Database (PDTD), and have integrated PDTD with TarFisDock. This combination aims to assist target identification and validation. Description PDTD is a web-accessible protein database for in silico target identification. It currently contains >1100 protein entries with 3D structures presented in the Protein Data Bank. The data are extracted from the literature and several online databases such as TTD, DrugBank and Thomson Pharma. The database covers diverse information on >830 known or potential drug targets, including protein and active site structures in both PDB and mol2 formats, related diseases, biological functions as well as associated regulating (signaling) pathways. Each target is categorized by both nosology and biochemical function. PDTD supports keyword search functions, such as PDB ID, target name, and disease name. Data sets generated by PDTD can be viewed with molecular visualization plug-ins and can also be downloaded freely. Remarkably, PDTD is specially designed for target identification. In conjunction with TarFisDock, PDTD can be used to identify binding proteins for small molecules. The results can be downloaded in the form of a mol2 file with the binding pose of the probe compound and a list of potential binding targets according to their ranking scores. Conclusion PDTD serves as a comprehensive and

  17. Predicting the effect of home Wi-Fi quality on Web QoE

    OpenAIRE

    Da Hora, Diego; Neves da Hora, Diego; Teixeira, Renata; Van Doorselaer, Karel; Van Oost, Koen

    2016-01-01

    Wi-Fi is the preferred way of accessing the Internet for many devices at home, but it is vulnerable to performance problems. The analysis of Wi-Fi quality metrics such as RSSI or PHY rate may indicate a number of problems, but users may not notice many of these problems if they don't degrade the performance of the applications they are using. In this work, we study the effects of the home Wi-Fi quality on Web browsing experience. We instrument a commodity access point ...

  18. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    Science.gov (United States)

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable a personalized psychiatric emergency care service, a web of objects-based framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:27608023

  19. Using Forecasting to Predict Long-Term Resource Utilization for Web Services

    Science.gov (United States)

    Yoas, Daniel W.

    2013-01-01

    Researchers have spent years understanding resource utilization to improve scheduling, load balancing, and system management through short-term prediction of resource utilization. Early research focused primarily on single operating systems; later, interest shifted to distributed systems and, finally, into web services. In each case researchers…

  20. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction.

    Science.gov (United States)

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-09-06

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients' psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable a personalized psychiatric emergency care service, a web of objects-based framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller's mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study.

  1. Looking back, looking forward: 10 years of development to collect, preserve and access the Danish Web

    DEFF Research Database (Denmark)

    Laursen, Ditte; Møldrup-Dalum, Per

    Digital heritage archiving is an ongoing activity that requires commitment, involvement and cooperation between heritage institutions and policy makers as well as producers and users of information. In this presentation, we will address how a web archive is created over time as well as what or who...... we see the development of the web archive in the near future. Findings are relevant for curators and researchers interested in the web archive as a historical source....

  2. Multimedia comprehension skill predicts differential outcomes of web-based and lecture courses.

    Science.gov (United States)

    Maki, William S; Maki, Ruth H

    2002-06-01

    College students (134 women and 55 men) participated in introductory psychology courses that were offered largely online (on the World Wide Web) or in a lecture format. Student comprehension skills were inferred from their scores on a multimedia comprehension battery. The learning of content knowledge was affected interactively by comprehension skill level and course format. Differences between formats increased with comprehension skill, such that the Web-based course advantage became greater as comprehension skill increased. This same pattern was not seen when self-reports of comprehension ability were used as the predictor. Furthermore, comprehension skill did not predict course satisfaction. Generally, students of all skill levels preferred the lecture courses.

  3. AthMethPre: a web server for the prediction and query of mRNA m(6)A sites in Arabidopsis thaliana.

    Science.gov (United States)

    Xiang, Shunian; Yan, Zhangming; Liu, Ke; Zhang, Yaou; Sun, Zhirong

    2016-10-18

    N(6)-Methyladenosine (m(6)A) is the most prevalent and abundant modification in mRNA that has been linked to many key biological processes. High-throughput experiments have generated m(6)A-peaks across the transcriptome of A. thaliana, but the specific methylated sites were not assigned, which impedes the understanding of m(6)A functions in plants. Therefore, computational prediction of mRNA m(6)A sites becomes urgently important. Here, we present a method to predict the m(6)A sites for A. thaliana mRNA sequence(s). To predict the m(6)A sites of an mRNA sequence, we employed the support vector machine to build a classifier using the features of the positional flanking nucleotide sequence and the position-independent k-mer nucleotide spectrum. Our method achieved good performance and was applied to a web server to provide service for the prediction of A. thaliana m(6)A sites. The server also provides a comprehensive database of predicted transcriptome-wide m(6)A sites and curated m(6)A-seq peaks from the literature for query and visualization. The AthMethPre web server is the first web server that provides a user-friendly tool for the prediction and query of A. thaliana mRNA m(6)A sites, which is freely accessible for public use at .
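
    As a rough illustration of the feature scheme described above (a positional encoding of the flanking sequence plus a position-independent k-mer spectrum, fed to a support vector machine), here is a minimal Python sketch using scikit-learn. The window length, value of k, kernel choice and training windows are all invented; the server's actual encoding details are not given in the abstract.

        from itertools import product
        import numpy as np
        from sklearn.svm import SVC

        BASES = "ACGU"

        def kmer_spectrum(seq, k=3):
            """Position-independent k-mer frequency vector (4**k dimensions)."""
            index = {"".join(p): i for i, p in enumerate(product(BASES, repeat=k))}
            v = np.zeros(len(index))
            for i in range(len(seq) - k + 1):
                if seq[i:i + k] in index:
                    v[index[seq[i:i + k]]] += 1
            return v / v.sum() if v.sum() else v

        def one_hot(seq):
            """Positional encoding of the flanking sequence around the site."""
            return np.array([[float(b == base) for base in BASES] for b in seq]).ravel()

        def encode(seq):
            return np.concatenate([one_hot(seq), kmer_spectrum(seq)])

        # Toy 11-nt windows centred on a candidate adenosine; 1 = methylated site.
        windows = ["GGACUGGACUG", "UUUAAACCGGU", "GGACAAGGACA", "CCCGUUUACGG"]
        labels = [1, 0, 1, 0]
        X = np.array([encode(w) for w in windows])
        clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
        print(clf.predict(X[:1]))   # predicted class for the first window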

  4. pcrEfficiency: a Web tool for PCR amplification efficiency prediction

    Directory of Open Access Journals (Sweden)

    Mallona Izaskun

    2011-10-01

    Full Text Available Abstract Background Relative calculation of differential gene expression in quantitative PCR reactions requires comparison between amplification experiments that include reference genes and genes under study. Ignoring the differences between their efficiencies may lead to miscalculation of gene expression even with the same starting amount of template. Although there are several tools performing PCR primer design, there is no tool available that predicts PCR efficiency for a given amplicon and primer pair. Results We have used a statistical approach based on 90 primer pair combinations amplifying templates from bacteria, yeast, plants and humans, ranging in size between 74 and 907 bp, to identify the parameters that affect PCR efficiency. We developed a generalized additive model to fit the data and constructed an open-source Web interface that provides oligonucleotides optimized for PCR, with predicted amplification efficiencies, starting from a given sequence. Conclusions pcrEfficiency provides an easy-to-use web interface allowing the prediction of PCR efficiencies prior to wet-lab experiments, thus easing quantitative real-time PCR set-up. A web-based service as well as the source code are provided freely at http://srvgen.upct.es/efficiency.html under the GPL v2 license.
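
    The abstract specifies a generalized additive model but not its terms. The sketch below is a hedged stand-in that approximates a GAM with per-feature spline expansions followed by a linear model in scikit-learn, using two plausible predictors (amplicon length and GC fraction) and synthetic efficiency values; neither the data nor the exact feature choice comes from the paper.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import SplineTransformer
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        # Synthetic training set: amplicon length (bp) and GC fraction as inputs,
        # measured efficiency (2.0 = perfect doubling per cycle) as the response.
        X = np.column_stack([rng.integers(74, 907, 90),    # lengths, paper's range
                             rng.uniform(0.3, 0.7, 90)])   # GC content
        y = 1.9 - 0.0005 * X[:, 0] - 0.4 * (X[:, 1] - 0.5) ** 2 \
            + rng.normal(0, 0.02, 90)

        # Per-feature smooth terms via splines: a rough stand-in for a GAM.
        model = make_pipeline(SplineTransformer(degree=3, n_knots=5), Ridge(alpha=1.0))
        model.fit(X, y)
        print(model.predict([[250, 0.55]]))  # predicted efficiency for a new amplicon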

  5. MetalPredator: a web server to predict iron-sulfur cluster binding proteomes.

    Science.gov (United States)

    Valasatava, Yana; Rosato, Antonio; Banci, Lucia; Andreini, Claudia

    2016-09-15

    The prediction of the iron-sulfur proteome is highly desirable for biomedical and biological research but a freely available tool to predict iron-sulfur proteins has not been developed yet. We developed a web server to predict iron-sulfur proteins from protein sequence(s). This tool, called MetalPredator, is able to process complete proteomes rapidly with high recall and precision. The web server is freely available at: http://metalweb.cerm.unifi.it/tools/metalpredator/ andreini@cerm.unifi.it Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Inequalities versus Utilization: Factors Predicting Access to Healthcare in Ghana

    Directory of Open Access Journals (Sweden)

    Dominic Buer Boyetey

    2016-08-01

    Full Text Available Universal access to health care remains a significant source of inequality, especially among vulnerable groups. Challenges such as lack of insurance coverage, the absence of certain types of care, and high individual financial care costs can be blamed for the growing inequality in the healthcare sector. The concern is especially worrying when people are denied care. It is in this light that the study set out to determine which factors are likely to affect the chances of access to health care, based on the 2014 Ghana Demographic and Health Survey data, and particularly to examine differences in access to healthcare across income groups, educational levels and residential locations. The study relied on logistic regression analysis to establish that people with some level of education have greater chances of accessing health care compared with those without education. The chances of access to health care in the sample were also high for people in the lower and upper quartiles of the household wealth index, with a local minimum for those in the middle class. It also became evident that an increased number of people with NHIS or PHIS, or a combination of cash with NHIS or PHIS, corresponds to an increase in the probability of gaining access to health care.

  7. Usage of data-encoded web maps with client side color rendering for combined data access, visualization, and modeling purposes

    Science.gov (United States)

    Pliutau, Denis; Prasad, Narasimha S.

    2013-05-01

    Current approaches to satellite observation data storage and distribution implement separate visualization and data access methodologies, which often leads to time-consuming data ordering and coding for applications requiring both visual representation and data handling and modeling capabilities. We describe an approach we implemented for a data-encoded web map service based on storing numerical data within server map tiles and subsequent client-side data manipulation and map color rendering. The approach relies on storing data using the lossless-compression Portable Network Graphics (PNG) image format, which is natively supported by web browsers, allowing on-the-fly browser rendering and modification of the map tiles. The method is easy to implement using existing software libraries and has the advantage of easy client-side map color modification, as well as spatial subsetting with physical parameter range filtering. This method is demonstrated for the ASTER-GDEM elevation model and selected MODIS data products and represents an alternative to the currently used storage and data access methods. One additional benefit is that multiple levels of averaging are provided, due to the need to generate map tiles at varying resolutions for various map magnification levels. We suggest that such a merged data and mapping approach may be a viable alternative to existing static storage and data access methods for a wide array of combined simulation, data access and visualization purposes.
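
    To make the core trick concrete, here is a minimal sketch (not the authors' code) of packing a numeric field losslessly into PNG channels on the server and recovering it on the client, using numpy and Pillow. The 16-bit packing into the red and green channels and the value range are assumptions chosen for illustration.

        import numpy as np
        from PIL import Image

        def encode_tile(data, vmin, vmax):
            """Quantize a 2-D field to 16 bits and pack it into the R/G channels."""
            scaled = np.round((data - vmin) / (vmax - vmin) * 65535).astype(np.uint32)
            rgb = np.zeros((*data.shape, 3), dtype=np.uint8)
            rgb[..., 0] = (scaled >> 8) & 0xFF  # high byte -> red
            rgb[..., 1] = scaled & 0xFF         # low byte  -> green
            return Image.fromarray(rgb)         # PNG keeps the bytes losslessly

        def decode_tile(img, vmin, vmax):
            rgb = np.asarray(img).astype(np.uint32)
            scaled = (rgb[..., 0] << 8) | rgb[..., 1]
            return vmin + scaled / 65535.0 * (vmax - vmin)

        elevation = np.random.uniform(-100, 4000, size=(256, 256))  # GDEM stand-in
        encode_tile(elevation, -500, 9000).save("tile.png")
        recovered = decode_tile(Image.open("tile.png"), -500, 9000)
        print(np.abs(recovered - elevation).max())  # bounded by the quantization step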

  8. The wisdom of crowds in action: Forecasting epidemic diseases with a web-based prediction market system.

    Science.gov (United States)

    Li, Eldon Y; Tung, Chen-Yuan; Chang, Shu-Hsun

    2016-08-01

    The quest for an effective system capable of monitoring and predicting the trends of epidemic diseases is a critical issue for communities worldwide. With the prevalence of Internet access, more and more researchers today are using data from both search engines and social media to improve the prediction accuracy. In particular, a prediction market system (PMS) exploits the wisdom of crowds on the Internet to effectively accomplish relatively high accuracy. This study presents the architecture of a PMS and demonstrates the matching mechanism of logarithmic market scoring rules. The system was implemented to predict infectious diseases in Taiwan with the wisdom of crowds in order to improve the accuracy of epidemic forecasting. The PMS architecture contains three design components: database clusters, market engine, and Web applications. The system accumulated knowledge from 126 health professionals for 31 weeks to predict five disease indicators: the confirmed cases of dengue fever, the confirmed cases of severe and complicated influenza, the rate of enterovirus infections, the rate of influenza-like illnesses, and the confirmed cases of severe and complicated enterovirus infection. Based on the winning ratio, the PMS predicts the trends of three out of five disease indicators more accurately than does the existing system that uses the five-year average values of historical data for the same weeks. In addition, the PMS with the matching mechanism of logarithmic market scoring rules is easy to understand for health professionals and applicable to predict all the five disease indicators. The PMS architecture of this study allows organizations and individuals to implement it for various purposes in our society. The system can continuously update the data and improve prediction accuracy in monitoring and forecasting the trends of epidemic diseases. Future researchers could replicate and apply the PMS demonstrated in this study to more infectious diseases and wider
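
    The matching mechanism named above, the logarithmic market scoring rule (LMSR), has a standard closed form: with liquidity parameter b and outstanding share vector q, the market maker's cost function is C(q) = b*ln(sum_i exp(q_i/b)), a trade moving the market from q to q' costs C(q') - C(q), and the instantaneous price of outcome i is exp(q_i/b)/sum_j exp(q_j/b). The sketch below implements this textbook rule; the server's actual liquidity setting and outcome definitions are not given in the abstract.

        import numpy as np

        class LMSRMarket:
            """Hanson's logarithmic market scoring rule for a set of outcomes."""

            def __init__(self, n_outcomes, b=100.0):
                self.b = b                     # liquidity parameter
                self.q = np.zeros(n_outcomes)  # shares sold per outcome

            def cost(self, q):
                # C(q) = b * log(sum_i exp(q_i / b)), computed stably
                z = q / self.b
                m = z.max()
                return self.b * (m + np.log(np.exp(z - m).sum()))

            def prices(self):
                # p_i = exp(q_i/b) / sum_j exp(q_j/b): current probability estimates
                z = self.q / self.b
                e = np.exp(z - z.max())
                return e / e.sum()

            def buy(self, outcome, shares):
                """Charge the trader the cost difference C(q') - C(q)."""
                new_q = self.q.copy()
                new_q[outcome] += shares
                fee = self.cost(new_q) - self.cost(self.q)
                self.q = new_q
                return fee

        market = LMSRMarket(n_outcomes=2)  # e.g. "dengue cases rise" vs "fall"
        print(market.prices())             # starts at [0.5, 0.5]
        print(market.buy(0, 50.0))         # cost of 50 shares on outcome 0
        print(market.prices())             # price of outcome 0 moves up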

  9. Improving Performance on WWW using Intelligent Predictive Caching for Web Proxy Servers

    Directory of Open Access Journals (Sweden)

    J. B. Patil

    2011-01-01

    Full Text Available Web proxy caching is used to improve the performance of the Web infrastructure. It aims to reduce network traffic, server load, and user-perceived retrieval delays. The heart of a caching system is its page replacement policy, which needs to make good replacement decisions when its cache is full and a new document needs to be stored. The latest and most popular replacement policies like GDSF and GDSF# use the file size, access frequency, and age in the decision process. The effectiveness of any replacement policy can be evaluated using two metrics: hit ratio (HR) and byte hit ratio (BHR). There is always a trade-off between HR and BHR. In this paper, using three different Web proxy server logs, we use trace-driven analysis to evaluate the effects of different replacement policies on the performance of a Web proxy server. We propose a modification of the GDSF# policy, IPGDSF#. Our simulation results show that our proposed replacement policy IPGDSF# performs better than several policies proposed in the literature in terms of hit rate as well as byte hit rate.
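
    For reference, the Greedy-Dual-Size-Frequency priority on which the GDSF family (including the proposed IPGDSF#) is built assigns each cached object p the key K(p) = L + F(p)*C(p)/S(p), where L is an aging (inflation) factor, F(p) the access frequency, C(p) the fetch cost and S(p) the size; the object with the smallest key is evicted and L rises to that key. The sketch below implements plain GDSF with unit cost; the specific IPGDSF# modification is not described in the abstract, so it is not reproduced here.

        import heapq

        class GDSFCache:
            """Greedy-Dual-Size-Frequency replacement for a web proxy cache."""

            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.used = 0
                self.inflation = 0.0  # L: rises to the priority of each victim
                self.meta = {}        # url -> (priority, freq, size)
                self.heap = []        # (priority, url) candidates, may be stale

            def access(self, url, size, cost=1.0):
                freq = self.meta[url][1] + 1 if url in self.meta else 1
                if url not in self.meta:
                    while self.used + size > self.capacity and self.heap:
                        prio, victim = heapq.heappop(self.heap)
                        # Skip stale heap entries whose priority was re-computed.
                        if victim in self.meta and self.meta[victim][0] == prio:
                            self.inflation = prio  # age the cache
                            self.used -= self.meta.pop(victim)[2]
                    self.used += size
                priority = self.inflation + freq * cost / size  # K(p) = L + F*C/S
                self.meta[url] = (priority, freq, size)
                heapq.heappush(self.heap, (priority, url))

        cache = GDSFCache(capacity_bytes=10_000)
        for url, size in [("/a.html", 2000), ("/b.jpg", 6000),
                          ("/a.html", 2000), ("/c.css", 5000)]:
            cache.access(url, size)
        print(sorted(cache.meta))  # small, frequently accessed objects survive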

  10. Web standards facilitating accessibility in a digitally inclusive South Africa – Perspectives from developing the South African National Accessibility Portal

    CSIR Research Space (South Africa)

    Coetzee, L

    2008-11-01

    Full Text Available Many factors impact on the ability to create a digitally inclusive society in a developing world context. These include lack of access to information and communication technology (ICT), infrastructure, low literacy levels as well as low ICT related...

  11. Potential impacts of ocean acidification on the Puget Sound food web (NCEI Accession 0134852)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The ecosystem impacts of ocean acidification (OA) were explored by imposing scenarios designed to mimic OA on a food web model of Puget Sound, a large estuary in the...

  12. BioIMAX: A Web 2.0 approach for easy exploratory and collaborative access to multivariate bioimage data

    Directory of Open Access Journals (Sweden)

    Khan Michael

    2011-07-01

    Full Text Available Abstract Background Innovations in biological and biomedical imaging produce complex high-content and multivariate image data. For decision-making and generation of hypotheses, scientists need novel information technology tools that enable them to visually explore and analyze the data and to discuss and communicate results or findings with collaborating experts from various places. Results In this paper, we present a novel Web 2.0 approach, BioIMAX, for the collaborative exploration and analysis of multivariate image data by combining the web's collaboration and distribution architecture with the interface interactivity and computation power of desktop applications, an approach recently termed rich internet applications. Conclusions BioIMAX allows scientists to discuss and share data or results with collaborating experts and to visualize, annotate, and explore multivariate image data within one web-based platform from any location via a standard web browser, requiring only a username and a password. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "test" and the password "test1" for testing purposes.

  13. Fuzzy-logic based learning style prediction in e-learning using web interface information

    Indian Academy of Sciences (India)

    L Jegatha Deborah; R Sathiyaseelan; S Audithan; P Vijayakumar

    2015-04-01

    The e-learners' performance can be improved by recommending suitable e-contents available on e-learning servers, based on an investigation of their learning styles. The learning styles have to be predicted carefully, because the learners' psychological balance is variable in nature and e-learners are diversified in their learning patterns, environment, time and mood. Moreover, the knowledge about the learners used for learning style prediction is uncertain in nature. This paper identifies the Felder–Silverman learning style model as a suitable model for learning style prediction, especially in web environments, and proposes the use of fuzzy rules to handle the uncertainty in learning style predictions. The evaluation used Gaussian-membership-function-based fuzzy logic for 120 students learning the C programming language, and it was observed that the proposed model improved prediction accuracy significantly.
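
    The Gaussian membership function mentioned above maps a crisp feature value x to a membership degree exp(-(x - c)^2 / (2*sigma^2)) in a fuzzy set centred at c. The sketch below fires one toy fuzzy rule over invented web-interface usage features; the paper's actual features, fuzzy sets and rule base are not given in the abstract.

        import math

        def gaussian_mf(x, mean, sigma):
            """Gaussian membership degree in [0, 1]."""
            return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

        # Illustrative fuzzy sets over web-interface usage features (0..1 scales).
        MF = {
            "forum_posts":  {"low": (0.1, 0.15), "high": (0.9, 0.15)},
            "example_hits": {"low": (0.1, 0.15), "high": (0.9, 0.15)},
        }

        def membership(feature, label, x):
            mean, sigma = MF[feature][label]
            return gaussian_mf(x, mean, sigma)

        def active_degree(forum_posts, example_hits):
            """Toy rule: IF forum_posts is high AND example_hits is low THEN the
            learner tends to the 'active' pole of Felder-Silverman (min = AND)."""
            return min(membership("forum_posts", "high", forum_posts),
                       membership("example_hits", "low", example_hits))

        print(active_degree(forum_posts=0.8, example_hits=0.2))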

  14. Epitopia: a web-server for predicting B-cell epitopes

    Directory of Open Access Journals (Sweden)

    Martz Eric

    2009-09-01

    Full Text Available Abstract Background Detecting candidate B-cell epitopes in a protein is a basic and fundamental step in many immunological applications. Due to the impracticality of experimental approaches to systematically scan the entire protein, a computational tool that predicts the most probable epitope regions is desirable. Results The Epitopia server is a web-based tool that aims to predict immunogenic regions in either a protein three-dimensional structure or a linear sequence. Epitopia implements a machine-learning algorithm that was trained to discern antigenic features within a given protein. The Epitopia algorithm has been compared to other available epitope prediction tools and was found to have higher predictive power. A special emphasis was put on the development of a user-friendly graphical interface for displaying the results. Conclusion Epitopia is a user-friendly web-server that predicts immunogenic regions for both a protein structure and a protein sequence. Its accuracy and functionality make it a highly useful tool. Epitopia is available at http://epitopia.tau.ac.il and includes extensive explanations and example predictions.

  15. MINEs: open access databases of computationally predicted enzyme promiscuity products for untargeted metabolomics.

    Science.gov (United States)

    Jeffryes, James G; Colastani, Ricardo L; Elbadawi-Sidhu, Mona; Kind, Tobias; Niehaus, Thomas D; Broadbelt, Linda J; Hanson, Andrew D; Fiehn, Oliver; Tyo, Keith E J; Henry, Christopher S

    2015-01-01

    In spite of its great promise, metabolomics has proven difficult to execute in an untargeted and generalizable manner. Liquid chromatography-mass spectrometry (LC-MS) has made it possible to gather data on thousands of cellular metabolites. However, matching metabolites to their spectral features continues to be a bottleneck, meaning that much of the collected information remains uninterpreted and that new metabolites are seldom discovered in untargeted studies. These challenges require new approaches that consider compounds beyond those available in curated biochemistry databases. Here we present Metabolic In silico Network Expansions (MINEs), an extension of known metabolite databases to include molecules that have not been observed, but are likely to occur based on known metabolites and common biochemical reactions. We utilize an algorithm called the Biochemical Network Integrated Computational Explorer (BNICE) and expert-curated reaction rules based on the Enzyme Commission classification system to propose the novel chemical structures and reactions that comprise MINE databases. Starting from the Kyoto Encyclopedia of Genes and Genomes (KEGG) COMPOUND database, the MINE contains over 571,000 compounds, of which 93% are not present in the PubChem database. However, these MINE compounds have on average higher structural similarity to natural products than compounds from KEGG or PubChem. MINE databases were able to propose annotations for 98.6% of a set of 667 MassBank spectra, 14% more than KEGG alone and equivalent to PubChem while returning far fewer candidates per spectrum than PubChem (46 vs. 1715 median candidates). Application of MINEs to LC-MS accurate mass data enabled the identity of an unknown peak to be confidently predicted. MINE databases are freely accessible for non-commercial use via user-friendly web-tools at http://minedatabase.mcs.anl.gov and developer-friendly APIs. MINEs improve metabolomics peak identification as compared to general chemical

  16. Specification and realization of access control of Web services

    Institute of Scientific and Technical Information of China (English)

    张赛男

    2011-01-01

    This paper proposes an access control model for Web services. Integrating the security model into Web services makes it possible to change security access control rights dynamically, improving on the static access control currently in use. The new model provides a view policy language (VPL) to describe the access control policies of Web services. The paper concludes by describing an infrastructure for integrating the security model into Web services in order to enforce their access control policies.

  17. Conference "Internet, Web, What's next?" on 26 June 1998 at CERN: Mark Bernstein, Vice President of CNN Interactive, describes the impact of the Web on world media and predicts what we can expect as the next developments

    CERN Multimedia

    1998-01-01

    Conference "Internet, Web, What's next?" on 26 June 1998 at CERN: Mark Bernstein, Vice President of CNN Interactive, describes the impact of the Web on world media and predicts what we can expect as the next developments

  18. On indexing in the Web of Science and predicting journal impact factor.

    Science.gov (United States)

    Wu, Xiu-Fang; Fu, Qiang; Rousseau, Ronald

    2008-07-01

    We discuss what document types account for the calculation of the journal impact factor (JIF) as published in the Journal Citation Reports (JCR). Based on a brief review of articles discussing how to predict JIFs and taking data differences between the Web of Science (WoS) and the JCR into account, we make our own predictions. Using data by cited-reference searching for Thomson Scientific's WoS, we predict 2007 impact factors (IFs) for several journals, such as Nature, Science, Learned Publishing and some Library and Information Sciences journals. Based on our colleagues' experiences we expect our predictions to be lower bounds for the official journal impact factors. We explain why it is useful to derive one's own journal impact factor.
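
    The quantity being predicted has a simple definition: the impact factor of a journal for year Y is the number of citations received in Y by items published in Y-1 and Y-2, divided by the number of citable items published in those two years. The sketch below simply encodes that ratio, in the spirit of the editorial's suggestion to derive one's own JIF; the counts are hypothetical stand-ins for what cited-reference searching in the WoS would return.

        def journal_impact_factor(cites_to_prev_two_years, citable_items_prev_two_years):
            """JIF(Y) = citations in year Y to items from Y-1 and Y-2,
            divided by citable items published in Y-1 and Y-2."""
            return cites_to_prev_two_years / citable_items_prev_two_years

        # Hypothetical counts gathered by cited-reference searching in the WoS:
        cites_2007_to_2005_2006 = 1832   # citations received during 2007
        items_2005_2006 = 421            # articles and reviews published 2005-2006
        print(round(journal_impact_factor(cites_2007_to_2005_2006, items_2005_2006), 3))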

  19. Editorial: On indexing in the Web of Science and predicting journal impact factor

    Institute of Scientific and Technical Information of China (English)

    Xiu-fang WU; Qiang FU; Ronald ROUSSEAU

    2008-01-01

    We discuss what document types account for the calculation of the journal impact factor (JIF) as published in the Journal Citation Reports (JCR). Based on a brief review of articles discussing how to predict JIFs and taking data differences between the Web of Science (WoS) and the JCR into account, we make our own predictions. Using data by cited-reference searching for Thomson Scientific's WoS, we predict 2007 impact factors (IFs) for several journals, such as Nature, Science, Learned Publishing and some Library and Information Sciences journals. Based on our colleagues' experiences we expect our predictions to be lower bounds for the official journal impact factors. We explain why it is useful to derive one's own journal impact factor.

  20. Using NASA's Giovanni Web Portal to Access and Visualize Satellite-based Earth Science Data in the Classroom

    Science.gov (United States)

    Lloyd, Steven; Acker, James G.; Prados, Ana I.; Leptoukh, Gregory G.

    2008-01-01

    One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite-based remote sensing data sets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly sub-setted and manageable data set to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface.

  1. Web Visibility and Accessibility of Theses in Library Science and Documentation in Argentina

    Directory of Open Access Journals (Sweden)

    Sandra Gisela Martín

    2013-06-01

    Full Text Available This study describes the web visibility and accessibility of research in library science and documentation in Argentina, based on the thesis projects submitted for the bachelor's degree. It is an exploratory, descriptive study with a quantitative approach that investigates the visibility of theses in catalogs and institutional repositories, the number of theses visible on the web and the number accessible in full text, the metadata of records in catalogs and in repositories, the production of theses per year by web visibility, the number of authors per thesis by web visibility, and the thematic distribution of thesis content. It concludes that the scientific production of library science theses in Argentina is widely dispersed on the web and that their visibility and accessibility are very limited.

  2. Comparative accessibility assessment of Web spaces in Spanish and American university libraries

    Directory of Open Access Journals (Sweden)

    Laura Caballero-Cortés

    2009-04-01

    Full Text Available The main objective of this research is to analyze and compare the degree of compliance with certain web accessibility guidelines in two groups of web spaces that belong to the same conceptual typology, "University Libraries", but form part of two different geographic, social and economic realities: Spain and the United States. Interpretation of the results shows that it is possible to use webmetric techniques based on web accessibility characteristics to contrast two closed sets of web spaces.

  3. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a relevant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective, i.e., to make RSS feeds timely and informative, and to be scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.
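
    The abstract does not spell out Dynamo's predictive polling policy, so the following is only a generic sketch of the family of policies it borrows from the operating-systems literature: an adaptive poller that shortens its interval after observing a change and backs off otherwise. The URL, interval bounds and growth factors are illustrative.

        import hashlib
        import time
        import urllib.request

        def poll_adaptively(url, t_min=60.0, t_max=3600.0, rounds=10):
            """Multiplicative-increase/decrease of the polling interval: halve it
            when the page changed, otherwise grow it by 25% up to a ceiling."""
            interval, last_digest = t_min, None
            for _ in range(rounds):
                body = urllib.request.urlopen(url, timeout=10).read()
                digest = hashlib.sha256(body).hexdigest()
                if digest != last_digest:
                    interval = max(t_min, interval / 2)     # hot source: poll faster
                else:
                    interval = min(t_max, interval * 1.25)  # quiet source: back off
                last_digest = digest
                time.sleep(interval)

        # poll_adaptively("https://example.org/news.html")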

  4. Addressing Challenges in Web Accessibility for the Blind and Visually Impaired

    Science.gov (United States)

    Guercio, Angela; Stirbens, Kathleen A.; Williams, Joseph; Haiber, Charles

    2011-01-01

    Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast and non sequential way, blind persons rely on tools that process the information in a sequential way. Learning is…

  5. Accessing VacciMonitor by the Web of Science

    Directory of Open Access Journals (Sweden)

    Daniel Francisco Arencibia-Arrebola

    2015-01-01

    Full Text Available VacciMonitor has gradually increased its visibility through access to different databases. It has been included in SciELO, EBSCO, HINARI, Redalyc, SCOPUS, DOAJ, the SICC databases and SeCiMed, among almost thirty well-known index sites, including the virtual libraries of the main universities of the United States of America and other countries. Through a SciELO-Web of Science (WoS) agreement it will be possible to include the journals indexed in SciELO in the WoS; this collaboration is already showing results, and the content of SciELO can be accessed through WoS at the link: http://wokinfo.com/products_tools/multidisciplinary/scielo/ WoS was designed by the Institute for Scientific Information (ISI) and is one of the products of the ISI Web of Knowledge suite, currently the property of Thomson Reuters (1). WoS is a citation index and database service and the worldwide on-line leader in multidisciplinary information, covering the sciences in general, the social sciences, and the arts and humanities, with more than 46 million bibliographic references and hundreds of other citations, making it possible to navigate the broad web of journal articles, lecture materials and other records included in its collection (1). The logic of the functioning of WoS is based on quantitative criteria, since a larger output is reflected in a greater number of papers registered in the most recognized journals and in the extent to which these papers are cited by those journals (2). The information obtained from the WoS databases is very useful for directing scientific research efforts at the personal, institutional or national level. Scientists publishing in WoS journals not only produce more scientific literature, but this literature is also more consulted and used (3). However, it should be considered that the statistics of this site for bibliometric analysis only take into account those journals in this web, but contains three

  6. Comparing Accessibility Auditing Methods for Ebooks: Crowdsourced, Functionality-Led Versus Web Content Methodologies.

    Science.gov (United States)

    James, Abi; Draffan, E A; Wald, Mike

    2017-01-01

    This paper presents a gap analysis between crowdsourced functional accessibility evaluations of ebooks conducted by non-experts and the technical accessibility standards employed by developers. It also illustrates how combining these approaches can provide more appropriate information for a wider group of users with print impairments.

  7. Robust Query Processing for Personalized Information Access on the Semantic Web

    DEFF Research Database (Denmark)

    Dolog, Peter; Stuckenschmidt, Heiner; Wache, Holger

    and user preferences. We describe a framework for information access that combines query refinement and relaxation in order to provide robust, personalized access to heterogeneous RDF data as well as an implementation in terms of rewriting rules and explain its application in the context of e...

  8. The Personal Sequence Database: a suite of tools to create and maintain web-accessible sequence databases

    Directory of Open Access Journals (Sweden)

    Sullivan Christopher M

    2007-12-01

    Full Text Available Abstract Background Large molecular sequence databases are fundamental resources for modern bioscientists. Whether for project-specific purposes or sharing data with colleagues, it is often advantageous to maintain smaller sequence databases. However, this is usually not an easy task for the average bench scientist. Results We present the Personal Sequence Database (PSD), a suite of tools to create and maintain small- to medium-sized web-accessible sequence databases. All interactions with PSD tools occur via the internet with a web browser. Users may define sequence groups within their database that can be maintained privately or published to the web for public use. A sequence group can be downloaded, browsed, searched by keyword or searched for sequence similarities using BLAST. Publishing a sequence group extends these capabilities to colleagues and collaborators. In addition to being able to manage their own sequence databases, users can enroll sequences in BLASTAgent, a BLAST hit tracking system, to monitor NCBI databases for new entries displaying a specified level of nucleotide or amino acid similarity. Conclusion The PSD offers a valuable set of resources unavailable elsewhere. In addition to managing sequence data and BLAST search results, it facilitates data sharing with colleagues, collaborators and public users. The PSD is hosted by the authors and is available at http://bioinfo.cgrb.oregonstate.edu/psd/.

  9. SVMDLF: A novel R-based Web application for prediction of dipeptidyl peptidase 4 inhibitors.

    Science.gov (United States)

    Chandra, Sharat; Pandey, Jyotsana; Tamrakar, Akhilesh K; Siddiqi, Mohammad Imran

    2017-06-06

    Dipeptidyl peptidase 4 (DPP4) is a well-known target for antidiabetic drugs. However, currently available DPP4 inhibitor screening assays are costly and labor-intensive. It is important to create a robust in silico method to predict DPP4 inhibitor activity for new lead finding. Here, we introduce an R-based Web application, SVMDLF (SVM-based DPP4 Lead Finder), to predict inhibitors of DPP4 based on a support vector machine (SVM) model, predictions of which are confirmed by in vitro biological evaluation. The best model, generated with the MACCS structure fingerprint, gave a Matthews correlation coefficient of 0.87 for the test set and 0.883 for the external test set. We screened the Maybridge database, consisting of approximately 53,000 compounds. For further bioactivity assay, six compounds were shortlisted, and of the six hits, three compounds showed significant DPP4 inhibitory activities with IC50 values ranging from 8.01 to 10.73 μM. This application is an OpenCPU server app, a novel single-page R-based Web application for DPP4 inhibitor prediction. SVMDLF is freely available and open to all users at http://svmdlf.net/ocpu/library/dlfsvm/www/ and http://www.cdri.res.in/svmdlf/.
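
    A minimal sketch of the pipeline the abstract describes (MACCS structural keys as features, an SVM as classifier), using RDKit and scikit-learn. The SMILES strings, labels and hyperparameters are toy values, not the paper's training data.

        import numpy as np
        from rdkit import Chem
        from rdkit.Chem import MACCSkeys
        from sklearn.svm import SVC
        from sklearn.metrics import matthews_corrcoef

        def maccs_fp(smiles):
            """167-bit MACCS structural-key fingerprint as a numpy vector."""
            mol = Chem.MolFromSmiles(smiles)
            return np.array(list(MACCSkeys.GenMACCSKeys(mol)))

        # Toy data: SMILES with invented activity labels (1 = DPP4 inhibitor).
        smiles = ["CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC", "O=C(O)c1ccccc1"]
        labels = [1, 0, 1, 0]

        X = np.array([maccs_fp(s) for s in smiles])
        clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
        print("training MCC:", matthews_corrcoef(labels, clf.predict(X)))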

  10. LDAP: a web server for lncRNA-disease association prediction.

    Science.gov (United States)

    Lan, Wei; Li, Min; Zhao, Kaijie; Liu, Jin; Wu, Fang-Xiang; Pan, Yi; Wang, Jianxin

    2017-02-01

    Increasing evidence has demonstrated that long noncoding RNAs (lncRNAs) play important roles in many human diseases. Therefore, predicting novel lncRNA-disease associations would help dissect the complex mechanisms of disease pathogenesis. Some computational methods have been developed to infer lncRNA-disease associations. However, most of these methods infer lncRNA-disease associations based only on a single data resource. In this paper, we propose a new computational method to predict lncRNA-disease associations by integrating multiple biological data resources. We implement this method as a web server for lncRNA-disease association prediction (LDAP). The input of the LDAP server is the lncRNA sequence. LDAP predicts potential lncRNA-disease associations by using a bagging SVM classifier based on lncRNA similarity and disease similarity. The web server is available at http://bioinformatics.csu.edu.cn/ldap. Contact: jxwang@mail.csu.edu.cn. Supplementary data are available at Bioinformatics online.
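
    A hedged stand-in for the "bagging SVM classifier" mentioned above, using scikit-learn's BaggingClassifier around an SVC. The real server derives its features from lncRNA similarity and disease similarity profiles; here a random matrix stands in for those features.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)

        # Each row pairs one lncRNA with one disease; in LDAP the features come
        # from lncRNA similarity and disease similarity, here they are random.
        X = rng.random((200, 40))
        y = (X[:, 0] + X[:, 20] > 1.0).astype(int)  # synthetic association labels

        model = BaggingClassifier(SVC(kernel="rbf", probability=True),
                                  n_estimators=15, max_samples=0.8, random_state=0)
        model.fit(X, y)
        print(model.predict_proba(X[:3])[:, 1])  # association scores for 3 pairs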

  11. TBI server: a web server for predicting ion effects in RNA folding.

    Directory of Open Access Journals (Sweden)

    Yuhong Zhu

    Full Text Available Metal ions play a critical role in the stabilization of RNA structures. Therefore, accurate prediction of the ion effects in RNA folding can have a far-reaching impact on our understanding of RNA structure and function. Multivalent ions, especially Mg²⁺, are essential for RNA tertiary structure formation. These ions can possibly become strongly correlated in the close vicinity of the RNA surface. Most of the currently available software packages, which have widespread success in predicting ion effects in biomolecular systems, however, do not explicitly account for the ion correlation effect. Therefore, it is important to develop a software package/web server for the prediction of ion electrostatics in RNA folding that includes ion correlation effects. The TBI web server http://rna.physics.missouri.edu/tbi_index.html provides predictions for the total electrostatic free energy, the different free energy components, and the mean number and the most probable distributions of the bound ions. A novel feature of the TBI server is its ability to account for ion correlation and ion distribution fluctuation effects. By accounting for the ion correlation and fluctuation effects, the TBI server is a unique online tool for computing ion-mediated electrostatic properties for given RNA structures. The results can provide important data for in-depth analysis of ion effects in RNA folding, including the ion dependence of folding stability, ion uptake in the folding process, and the interplay between the different energetic components.

  12. CT-Finder: A Web Service for CRISPR Optimal Target Prediction and Visualization.

    Science.gov (United States)

    Zhu, Houxiang; Misel, Lauren; Graham, Mitchell; Robinson, Michael L; Liang, Chun

    2016-05-23

    The CRISPR system holds much promise for successful genome engineering, but therapeutic, industrial, and research applications will place high demand on improving the specificity and efficiency of this tool. CT-Finder (http://bioinfolab.miamioh.edu/ct-finder) is a web service to help users design guide RNAs (gRNAs) optimized for specificity. CT-Finder accommodates the original single-gRNA Cas9 system and two specificity-enhancing paired-gRNA systems: Cas9 D10A nickases (Cas9n) and dimeric RNA-guided FokI nucleases (RFNs). Optimal target candidates can be chosen based on the minimization of predicted off-target effects. Graphical visualization of on-target and off-target sites in the genome is provided for target validation. Major model organisms are covered by this web service.
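
    As background on what such a designer automates in its first step: for SpCas9, candidate targets are 20-nt protospacers immediately followed by an NGG PAM, searched on both strands. The regex sketch below enumerates candidate sites; the scoring and genome-wide off-target search that CT-Finder also performs are omitted, and the example sequence is arbitrary.

        import re

        COMP = str.maketrans("ACGT", "TGCA")

        def revcomp(seq):
            return seq.translate(COMP)[::-1]

        def find_cas9_targets(seq):
            """Yield (strand, start, protospacer, PAM) for 20-nt sites next to NGG.
            A lookahead group is used so overlapping sites are also reported."""
            for strand, s in (("+", seq), ("-", revcomp(seq))):
                for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", s):
                    yield strand, m.start(), m.group(1), m.group(2)

        dna = "ATGGCGTTTAGACCAGGTCCAGTTACGGATCAGGCTAAGGCTACCGGTTTACAGG"
        for strand, pos, spacer, pam in find_cas9_targets(dna):
            print(strand, pos, spacer, pam)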

  13. Web of Science: showing a bug today that can mislead scientific research output's prediction

    CERN Document Server

    Batista, Pablo Diniz; Fauth, Leduc Hermeto de Almeida; Brandão, Marcia de Oliveira Reis

    2016-01-01

    As has happened in all domains of human activity, economic pressures and the growing number of people working in scientific research have altered the way scientific production is evaluated, as well as the objectives of performing the evaluation. Introduced in 2005 by J. E. Hirsch as an indicator able to measure individual scientific output not only in terms of quantity but also in terms of quality, the h-index has spread throughout the world. In 2007, Hirsch also proposed its adoption as the best predictor of future scientific achievement and, consequently, as a useful guide for investments in research and for institutions when hiring members for their scientific staff. Since then, several authors have also been using the Thomson ISI Web of Science database to develop their proposals for evaluating research output. Here we show that a subtle flaw in Web of Science can inflate the collected results, thereby compromising the exactness and, consequently, the effectiveness of Hirsch's proposal and its variations.

  14. Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS

    Science.gov (United States)

    Mitchell, Christine M.; Thurman, David A.

    1999-01-01

    AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive, help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. It will conclude with the year 2 proposal to more fully develop the

  15. RS-WebPredictor

    DEFF Research Database (Denmark)

    Zaretzki, J.; Bergeron, C.; Huang, T.-W.;

    2013-01-01

    RS-WebPredictor is the first freely accessible server that predicts the regioselectivity of the last six isozymes. Server execution time is fast, taking on average 2s to encode a submitted molecule and 1s to apply a given model, allowing for high-throughput use in lead optimization projects...

  16. Web-oriented interface for remotely access the Kiev Internet-telescope

    Science.gov (United States)

    Kleshchonok, V.; Luk'yanyk, I.

    2017-06-01

    A partial revision of the Kiev Internet-telescope is described in the article, covering the structure of the telescope, its software, and the features of its operation. Methods for working with the telescope through remote access are also examined.

  17. BindUP: a web server for non-homology-based prediction of DNA and RNA binding proteins.

    Science.gov (United States)

    Paz, Inbal; Kligun, Efrat; Bengad, Barak; Mandel-Gutfreund, Yael

    2016-07-08

    Gene expression is a multi-step process involving many layers of regulation. The main regulators of the pathway are DNA and RNA binding proteins. While over the years a large number of DNA and RNA binding proteins have been identified and extensively studied, it is still expected that many other proteins, some with other known functions, are awaiting discovery. Here we present a new web server, BindUP, freely accessible through the website http://bindup.technion.ac.il/, for predicting DNA and RNA binding proteins using a non-homology-based approach. Our method is based on the electrostatic features of the protein surface and other general properties of the protein. BindUP predicts nucleic acid binding function given the protein's three-dimensional structure or a structural model. Additionally, BindUP provides information on the largest electrostatic surface patches, visualized on the server. The server was tested on several datasets of DNA and RNA binding proteins, including proteins which do not possess DNA or RNA binding domains and have no similarity to known nucleic acid binding proteins, achieving very high accuracy. BindUP is applicable in either single or batch mode and can be used for testing hundreds of proteins simultaneously in a highly efficient manner.

  18. Automating testbed documentation and database access using World Wide Web (WWW) tools

    Science.gov (United States)

    Ames, Charles; Auernheimer, Brent; Lee, Young H.

    1994-01-01

    A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.

  19. The information-seeking behaviour of paediatricians accessing web-based resources.

    LENUS (Irish Health Repository)

    Prendiville, T W

    2012-02-01

    OBJECTIVES: To establish the information-seeking behaviours of paediatricians in answering every-day clinical queries. DESIGN: A questionnaire was distributed to every hospital-based paediatrician (paediatric registrar and consultant) working in Ireland. RESULTS: The study received 156 completed questionnaires, a 66.1% response. 67% of paediatricians utilised the internet as their first "port of call" when looking to answer a medical question. 85% believe that web-based resources have improved medical practice, with 88% reporting web-based resources are essential for medical practice today. 93.5% of paediatricians believe attempting to answer clinical questions as they arise is an important component in practising evidence-based medicine. 54% of all paediatricians have recommended websites to parents or patients. 75.5% of paediatricians report finding it difficult to keep up-to-date with new information relevant to their practice. CONCLUSIONS: Web-based paediatric resources are of increasing significance in day-to-day clinical practice. Many paediatricians now believe that the quality of patient care depends on it. Information technology resources play a key role in helping physicians to deliver, in a time-efficient manner, solutions to clinical queries at the point of care.

  20. Controlling and accessing vehicle functions by mobile from remote place by sending GPS Co-ordinates to the Web server

    Directory of Open Access Journals (Sweden)

    Dr. Khanna SamratVivekanand Omprakash

    2012-01-01

    Full Text Available This paper describes how coordinates taken from Google Maps are stored in a database on a central web server. These coordinates are then transferred to a client program that searches for the location of a particular electronic device; the client can access the data over the internet and use it in a program through an API. Software was developed for a device installed in the vehicle: the built-in circuit holds a SIM card and transfers its signal over the mobile network, supplying a single text message with the location's Google Maps coordinates as latitude and longitude. This information, a comma-separated string, is extracted and stored in the web server's database, and different mobile numbers and locations for different clients can be stored in the database simultaneously, following a 3-tier client/server architecture. The SIM card accesses GPRS through the card's network provider, and the device is configured to send and receive messages. Different operations can be performed on the device, as it can be attached to other electronic circuits of the vehicle. A Windows Mobile application was developed for the client side, so the user can make decisions about the vehicle remotely by sending an SMS to the device; the device receives the operation and passes it to the vehicle's electronic circuit. From a remote place, a user can obtain information about the vehicle and control it, with a password providing authorization and authentication for the electronic circuit. The system thus supports vehicle security and location identification, and vehicle functions such as speed, brakes and lights can be accessed and controlled through the software interface to the vehicle's electronics.
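
    The core data path described above reduces to parsing a comma-separated coordinate string and inserting it into the server database. A minimal sketch with sqlite3; the message format 'vehicle_id,lat,lon' and the table layout are assumptions.

        import sqlite3

        def store_position(db, message):
            """Parse 'vehicle_id,lat,lon' (the comma-separated string the
            abstract describes) and insert it into the positions table."""
            vehicle_id, lat, lon = message.strip().split(",")
            db.execute("INSERT INTO positions (vehicle, lat, lon) VALUES (?, ?, ?)",
                       (vehicle_id, float(lat), float(lon)))
            db.commit()

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE positions (vehicle TEXT, lat REAL, lon REAL)")
        store_position(db, "KA-3141,50.4501,30.5234")
        print(db.execute("SELECT * FROM positions").fetchall())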

  1. Prediction of protein solvent accessibility using fuzzy k-nearest neighbor method.

    Science.gov (United States)

    Sim, Jaehyun; Kim, Seung-Yeon; Lee, Julian

    2005-06-15

    The solvent accessibility of amino acid residues plays an important role in tertiary structure prediction, especially in the absence of significant sequence similarity of a query protein to those with known structures. The prediction of solvent accessibility is less accurate than secondary structure prediction, despite improvements in recent research. The k-nearest neighbor method, a simple but powerful classification algorithm, has never been applied to the prediction of solvent accessibility, although it has been used frequently for the classification of biological and medical data. We applied the fuzzy k-nearest neighbor method to solvent accessibility prediction, using PSI-BLAST profiles as feature vectors, and achieved high prediction accuracies. With leave-one-out cross-validation on the ASTRAL SCOP reference dataset constructed by sequence clustering, our method achieved 64.1% accuracy for a 3-state (buried/intermediate/exposed) prediction (thresholds of 9% for buried/intermediate and 36% for intermediate/exposed) and 86.7, 82.0, 79.0 and 78.5% accuracies for 2-state (buried/exposed) predictions (thresholds of 0, 5, 16 and 25% for buried/exposed, respectively). Our method also showed accuracies about 2-5% higher than other methods on the RS126 dataset and a benchmarking dataset with 229 proteins. Program and datasets are available at http://biocom1.ssu.ac.kr/FKNNacc/ (contact: jul@ssu.ac.kr).
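
    As a rough illustration of the classifier the paper applies, the fuzzy k-nearest neighbor rule weights each neighbor's class contribution by inverse distance. The sketch below assumes feature vectors already derived from PSI-BLAST profile windows; the function name and synthetic data are illustrative only, not the authors' implementation:

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=10, m=2.0):
    """Fuzzy k-NN: soft class memberships weighted by inverse distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]                                   # k nearest neighbors
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))  # fuzzy weights
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[idx] == c].sum() for c in classes])
    memberships /= memberships.sum()
    return classes[np.argmax(memberships)], memberships

# Toy usage with stand-in profile-window features (0 = buried, 1 = exposed).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))
y = (X[:, 0] > 0).astype(int)
label, mem = fuzzy_knn_predict(X, y, X[0], k=10)
print(label, mem)
```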

  2. Using Stochastic Models to Describe and Predict Social Dynamics of Web Users

    CERN Document Server

    Lerman, Kristina

    2010-01-01

    Popularity of content in social media is unequally distributed, with some items receiving a disproportionate share of attention from users. Predicting which newly-submitted items will become popular is critically important for both hosts of social media content and its consumers. Accurate and timely prediction would enable hosts to maximize revenue through differential pricing for access to content or ad placement. Prediction would also give consumers an important tool for filtering the ever-growing amount of content. Predicting popularity of content in social media, however, is challenging due to the complex interactions between content quality and how the social media site chooses to highlight content. Moreover, most social media sites also selectively present content that has been highly rated by similar users, whose similarity is indicated implicitly by their behavior or explicitly by links in a social network. While these factors make it difficult to predict popularity a priori, we show that stoch...

  3. The RNAsnp web server: predicting SNP effects on local RNA secondary structure.

    Science.gov (United States)

    Sabarinathan, Radhakrishnan; Tafer, Hakim; Seemann, Stefan E; Hofacker, Ivo L; Stadler, Peter F; Gorodkin, Jan

    2013-07-01

    The function of many non-coding RNA genes and cis-regulatory elements of messenger RNA largely depends on the structure, which is in turn determined by their sequence. Single nucleotide polymorphisms (SNPs) and other mutations may disrupt the RNA structure, interfere with the molecular function and hence cause a phenotypic effect. RNAsnp is an efficient method to predict the effect of SNPs on local RNA secondary structure based on the RNA folding algorithms implemented in the Vienna RNA package. The SNP effects are quantified in terms of empirical P-values, which, for computational efficiency, are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/.
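
    The abstract's empirical P-values can be illustrated with a simple lookup: given a pre-computed null distribution of substitution effects for sequences of comparable length and GC content, the P-value of an observed effect is the fraction of null scores at least as extreme. A minimal sketch; the function and data are hypothetical stand-ins for RNAsnp's pre-computed tables:

```python
import bisect

def empirical_p_value(observed: float, null_scores: list) -> float:
    """Fraction of pre-computed null effect scores >= the observed score."""
    null_sorted = sorted(null_scores)
    n_ge = len(null_sorted) - bisect.bisect_left(null_sorted, observed)
    return (n_ge + 1) / (len(null_sorted) + 1)   # add-one correction avoids p = 0

print(empirical_p_value(0.9, [0.1, 0.4, 0.5, 0.8, 0.95]))  # ~0.33
```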

  4. Web Access to Digitised Content of the Exhibition Novo Mesto 1848-1918 at the Dolenjska Museum, Novo Mesto

    Directory of Open Access Journals (Sweden)

    Majda Pungerčar

    2013-09-01

    Full Text Available EXTENDED ABSTRACT: For the first time, the Dolenjska museum Novo mesto provided access to digitised museum resources when it decided to enrich the exhibition Novo mesto 1848-1918 with digital content. The following goals were identified: the digital content was created at the time of exhibition planning and design, it met the needs of different age groups of visitors, and during the exhibition the content was accessible via touch screen. As such, it also served educational purposes (content-oriented lectures or problem-solving team work). For the duration of the exhibition, the digital content was also accessible on the museum website http://www.novomesto1848-1918.si. The digital content was divided into the following sections: the web photo gallery, the quiz and the game. The photo gallery was designed in the same way as the exhibition and the print catalogue, extended with photos of contemporary Novo mesto and accompanied by music from the orchestrion machine. The following themes were outlined: the Austrian Empire, the Krka and Novo mesto, the town and its symbols, images of the town and people, administration and economy, social life and Novo mesto today, followed by digitised archive materials and sources from that period such as the Commemorative Book of the Uniformed Town Guard, the National Reading Room guest book, the Kazina guest book, the album of postcards and the Diploma of Honoured Citizen Josip Gerdešič. The web application was also a tool for simple online selection of digitised material and the creation of new digital content, which proved much more convenient for lecturing than PowerPoint presentations. The quiz consisted of 40 questions relating to the exhibition theme and the catalogue; each question offered a set of three answers, only one of them correct, illustrated with a photograph. The application automatically selected ten questions and scored the answers immediately. The quiz could be accessed

  5. A Web-based computer system supporting information access, exchange and management during building processes

    DEFF Research Database (Denmark)

    Sørensen, Lars Schiøtt

    1998-01-01

    During the last two decades, a number of research efforts have been made in the field of computing systems related to the building construction industry. Most of the projects have focused on a part of the entire design process and have typically been limited to a specific domain. This paper presents a newly developed computer system based on the World Wide Web. The focus is on the simplicity of the system's structure and on an intuitive and user-friendly interface...

  6. Making Statistical Data More Easily Accessible on the Web Results of the StatSearch Case Study

    CERN Document Server

    Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M

    2005-01-01

    In this paper we present the results of the StatSearch case study, which aimed at providing enhanced access to statistical data available on the Web. In the scope of this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques exploiting the hierarchical structuring of the available data. This tool enables better control of information retrieval, improving the quality and ease of access to statistical information. The central part of the presented StatSearch tool is an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query-related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation or, alternatively, serve as the support for the automated navigation process. Several appro...
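
    A toy version of the navigation step, aggregating per-document relevance scores up a topic tree and proposing the highest-scoring clusters, might look like the following. The tree representation and function names are assumptions for illustration, not the StatSearch algorithm itself:

```python
def best_clusters(tree, scores, top_n=2):
    """Sum leaf relevance scores up a topic tree; return the top internal clusters.

    tree: dict node -> list of children; scores: dict leaf -> relevance score.
    """
    totals = {}

    def total(node):
        if node not in totals:
            kids = tree.get(node, [])
            totals[node] = scores.get(node, 0.0) + sum(total(c) for c in kids)
        return totals[node]

    roots = set(tree) - {c for kids in tree.values() for c in kids}
    for r in roots:
        total(r)
    internal = [n for n in totals if tree.get(n)]
    return sorted(internal, key=totals.get, reverse=True)[:top_n]

tree = {"root": ["econ", "health"], "econ": ["gdp", "cpi"], "health": ["beds"]}
scores = {"gdp": 0.9, "cpi": 0.4, "beds": 0.2}
print(best_clusters(tree, scores))   # ['root', 'econ']
```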

  7. EarthServer2 : The Marine Data Service - Web based and Programmatic Access to Ocean Colour Open Data

    Science.gov (United States)

    Clements, Oliver; Walker, Peter

    2017-04-01

    The ESA Ocean Colour - Climate Change Initiative (ESA OC-CCI) has produced a long-term, high-quality global dataset with associated per-pixel uncertainty data. This dataset has now grown to several hundred terabytes (uncompressed) and is freely available to download. However, the sheer size of the dataset can act as a barrier to many users; large network bandwidth, local storage and processing requirements can prevent researchers without the backing of a large organisation from taking advantage of this raw data. The EC H2020 project, EarthServer2, aims to create a federated data service providing access to more than 1 petabyte of earth science data. Within this federation the Marine Data Service already provides an innovative on-line toolkit for filtering, analysing and visualising OC-CCI data. Data are made available, filtered and processed at source through a standards-based interface, the Open Geospatial Consortium Web Coverage Service and Web Coverage Processing Service. This work was initiated in the EC FP7 EarthServer project, where it was found that the unfamiliarity and complexity of these interfaces themselves created a barrier to wider uptake. The continuation project, EarthServer2, addresses these issues by providing higher-level tools for working with these data. We will present some examples of these tools. Many researchers wish to extract time series data from discrete points of interest. We will present a web-based interface, based on NASA/ESA WebWorldWind, for selecting points of interest and plotting time series from a chosen dataset. In addition, a CSV file of locations and times, such as a ship's track, can be uploaded and these points extracted and returned in a CSV file, allowing researchers to work with the extract locally, such as in a spreadsheet. We will also present a set of Python and JavaScript APIs that have been created to complement and extend the web-based GUI. These APIs allow the selection of single points and areas for extraction. The
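
    For readers unfamiliar with the OGC Web Coverage Service mentioned here, a GetCoverage request is just an HTTP query with standard key-value parameters. The sketch below builds such a request; the endpoint URL, coverage id and subset values are placeholders, not the actual Marine Data Service addresses:

```python
import urllib.parse

# Placeholder endpoint and coverage id -- real values come from the data provider.
ENDPOINT = "https://example.org/rasdaman/ows"
params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "chlor_a",                          # assumed coverage name
    "subset": ["Lat(50.0,51.0)", "Long(-5.0,-4.0)"],  # spatial trim at source
    "format": "application/json",
}
url = ENDPOINT + "?" + urllib.parse.urlencode(params, doseq=True)
print(url)   # fetch with any HTTP client; only the trimmed subset is transferred
```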

  8. BioPortal: enhanced functionality via new Web services from the National Center for Biomedical Ontology to access and use ontologies in software applications.

    Science.gov (United States)

    Whetzel, Patricia L; Noy, Natalya F; Shah, Nigam H; Alexander, Paul R; Nyulas, Csongor; Tudorache, Tania; Musen, Mark A

    2011-07-01

    The National Center for Biomedical Ontology (NCBO) is one of the National Centers for Biomedical Computing funded under the NIH Roadmap Initiative. Contributing to the national computing infrastructure, NCBO has developed BioPortal, a web portal that provides access to a library of biomedical ontologies and terminologies (http://bioportal.bioontology.org) via the NCBO Web services. BioPortal enables community participation in the evaluation and evolution of ontology content by providing features to add mappings between terms, to add comments linked to specific ontology terms and to provide ontology reviews. The NCBO Web services (http://www.bioontology.org/wiki/index.php/NCBO_REST_services) enable this functionality and provide a uniform mechanism to access ontologies from a variety of knowledge representation formats, such as Web Ontology Language (OWL) and Open Biological and Biomedical Ontologies (OBO) format. The Web services provide multi-layered access to the ontology content, from getting all terms in an ontology to retrieving metadata about a term. Users can easily incorporate the NCBO Web services into software applications to generate semantically aware applications and to facilitate structured data collection.
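
    As an illustration of how software can consume the NCBO Web services, the snippet below lists ontologies via BioPortal's REST API. The data.bioontology.org host, the apikey parameter and the acronym/name response fields reflect the publicly documented API as I recall it, but should be treated as assumptions to verify against current documentation; a free API key from a BioPortal account is required:

```python
import json
import urllib.request

API_KEY = "YOUR_NCBO_API_KEY"   # obtain from a BioPortal account (assumption)
url = f"https://data.bioontology.org/ontologies?apikey={API_KEY}"

with urllib.request.urlopen(url) as resp:
    ontologies = json.load(resp)

for ont in ontologies[:5]:                 # first few entries of the library
    print(ont["acronym"], "-", ont["name"])
```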

  9. Research on Web Page Retrieval and Improvement for a Web Chat Room Detection System

    Institute of Scientific and Technical Information of China (English)

    孙群; 漆正东

    2012-01-01

    Online chat provides network users with low-cost, high-efficiency real-time communication, and has therefore become one of the most widely used services on the Internet. Taking the detection of web chat rooms as a case study, this paper examines the technical problems of web page retrieval and preprocessing. It discusses the principles and workflow of web crawlers and introduces parallel multi-threading into the crawler. It then discusses the technical features and implementation of WebLech and presents improvements made to WebLech.
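
    The parallel multi-threaded fetching the paper adds to its crawler can be sketched with a thread pool. WebLech itself is a Java crawler; this Python snippet is an illustrative stand-in with placeholder seed URLs, not the paper's code:

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str):
    """Download one page; return (url, size). Errors are reported as size -1."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, len(resp.read())
    except OSError:
        return url, -1

seeds = ["https://example.org/", "https://example.org/chat"]  # placeholder URLs
with ThreadPoolExecutor(max_workers=8) as pool:   # parallel workers, as in the paper
    for url, size in pool.map(fetch, seeds):
        print(url, size)
```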

  10. Impact of web accessibility barriers on users with a hearing impairment

    Directory of Open Access Journals (Sweden)

    Afra Pascual

    2015-01-01

    Full Text Available User tests were carried out with hearing-impaired participants, evaluating the impact that different accessibility barriers have on this type of user. The aim of collecting this information was to give people who edit web content a more empathetic understanding of the accessibility problems that most affect this group, people with hearing impairment, and thus to avoid the accessibility barriers they could potentially be creating. The results show that the barriers with the greatest impact on hearing-impaired users are "complex text" and "multimedia content" without alternatives. In both cases, content editors should take care to monitor the readability of web content and to accompany multimedia content with subtitles and sign language.

  11. Web-based access to teaching files in a filmless radiology environment

    Science.gov (United States)

    Rubin, Richard K.; Henri, Christopher J.; Cox, Robert D.; Bret, Patrice M.

    1998-07-01

    This paper describes the incorporation of radiology teaching files within our existing filmless radiology Picture Archiving and Communications System (PACS). The creation of teaching files employs an intuitive World Wide Web (WWW) application that relieves the creator of the technical details involving the underlying PACS and obviates the need for knowledge of Internet publishing. Currently, our PACS supports filmless operation of CT, MRI, and ultrasound modalities, conforming to the Digital Imaging and Communications in Medicine (DICOM) and Health Level 7 (HL7) standards. Web-based teaching files are one module in a suite of WWW tools, developed in-house, for platform independent management of radiology data. The WWW browser tools act as liaison between inexpensive desktop PCs and the DICOM PACS. The creation of a teaching file is made as efficient as possible by allowing the creator to select the images and prepare the text within a single application, while finding and reviewing existing teaching files is simplified with a flexible, multi-criteria searching tool. This efficient and easy-to-use interface is largely responsible for the development of a database, currently containing over 400 teaching files, that has been generated in a short period of time.

  12. Translating access into utilization: lessons from the design and evaluation of a health insurance Web site to promote reproductive health care for young women in Massachusetts.

    Science.gov (United States)

    Janiak, Elizabeth; Rhodes, Elizabeth; Foster, Angel M

    2013-12-01

    Following state-level health care reform in Massachusetts, young women reported confusion over coverage of contraception and other sexual and reproductive health services under newly available health insurance products. To address this gap, a plain-language Web site titled "My Little Black Book for Sexual Health" was developed by a statewide network of reproductive health stakeholders. The purpose of this evaluation was to assess the health literacy demands and usability of the site among its target audience, women ages 18-26 years. We performed an evaluation of the literacy demands of the Web site's written content and tested the Web site's usability in a health communications laboratory. Participants found the Web site visually appealing and its overall design concept accessible. However, the Web site's literacy demands were high, and all participants encountered problems navigating through the Web site. Following this evaluation, the Web site was modified to be more usable and more comprehensible to women of all health literacy levels. To avail themselves of sexual and reproductive health services newly available under expanded health insurance coverage, young women require customized educational resources that are rigorously evaluated to ensure accessibility. To maximize utilization of reproductive health services under expanded health insurance coverage, US women require customized educational resources commensurate with their literacy skills. The application of established research methods from the field of health communications will enable advocates to evaluate and adapt these resources to best serve their targeted audiences.

  13. Towards a tangible web: using physical objects to access and manipulate the Internet of Things

    CSIR Research Space (South Africa)

    Smith, Andrew C

    2013-09-01

    Full Text Available This additional step has resulted in the phenomenon commonly referred to as the Internet of Things (IoT). In order to realise the full potential of the IoT, individuals need a mechanism to access and manipulate it. A potential mechanism for achieving...

  14. SPEER-SERVER: a web server for prediction of protein specificity determining sites

    Science.gov (United States)

    Chakraborty, Abhijit; Mandloi, Sapan; Lanczycki, Christopher J.; Panchenko, Anna R.; Chakrabarti, Saikat

    2012-01-01

    Sites that show specific conservation patterns within subsets of proteins in a protein family are likely to be involved in the development of functional specificity. These sites, generally termed specificity determining sites (SDS), might play a crucial role in binding to a specific substrate or proteins. Identification of SDS through experimental techniques is a slow, difficult and tedious job. Hence, it is very important to develop efficient computational methods that can more expediently identify SDS. Herein, we present Specificity prediction using amino acids’ Properties, Entropy and Evolution Rate (SPEER)-SERVER, a web server that predicts SDS by analyzing quantitative measures of the conservation patterns of protein sites based on their physico-chemical properties and the heterogeneity of evolutionary changes between and within the protein subfamilies. This web server provides an improved representation of results, adds useful input and output options and integrates a wide range of analysis and data visualization tools when compared with the original standalone version of the SPEER algorithm. Extensive benchmarking finds that SPEER-SERVER exhibits sensitivity and precision performance that, on average, meets or exceeds that of other currently available methods. SPEER-SERVER is available at http://www.hpppi.iicb.res.in/ss/. PMID:22689646
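
    SPEER's core idea, sites conserved within subfamilies but divergent between them, can be caricatured with column entropies over a multiple alignment. The function below is a simplified proxy for that signal, not SPEER's actual scoring scheme, which also weighs physico-chemical properties and evolutionary rates:

```python
import math
from collections import Counter

def column_entropy(column: str) -> float:
    """Shannon entropy of one alignment column (gaps ignored)."""
    residues = [a for a in column if a != "-"]
    counts = Counter(residues)
    n = len(residues)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def specificity_signal(col_sub1: str, col_sub2: str) -> float:
    """High when each subfamily is conserved but the pooled column is diverse."""
    within = 0.5 * (column_entropy(col_sub1) + column_entropy(col_sub2))
    return column_entropy(col_sub1 + col_sub2) - within

print(specificity_signal("KKKK", "DDDD"))  # strong SDS-like signal (1.0)
print(specificity_signal("KDKD", "KDDK"))  # weak signal (~0.0)
```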

  15. Hanford Borehole Geologic Information System (HBGIS) Updated User’s Guide for Web-based Data Access and Export

    Energy Technology Data Exchange (ETDEWEB)

    Mackley, Rob D.; Last, George V.; Allwardt, Craig H.

    2008-09-24

    The Hanford Borehole Geologic Information System (HBGIS) is a prototype web-based graphical user interface (GUI) for viewing and downloading borehole geologic data. The HBGIS is being developed as part of the Remediation Decision Support function of the Soil and Groundwater Remediation Project, managed by Fluor Hanford, Inc., Richland, Washington. Recent efforts have focused on improving the functionality of the HBGIS website in order to allow more efficient access and exportation of available data in HBGIS. Users will benefit from enhancements such as a dynamic browsing, user-driven forms, and multi-select options for selecting borehole geologic data for export. The need for translating borehole geologic data into electronic form within the HBGIS continues to increase, and efforts to populate the database continue at an increasing rate. These new web-based tools should help the end user quickly visualize what data are available in HBGIS, select from among these data, and download the borehole geologic data into a consistent and reproducible tabular form. This revised user’s guide supersedes the previous user’s guide (PNNL-15362) for viewing and downloading data from HBGIS. It contains an updated data dictionary for tables and fields containing borehole geologic data as well as instructions for viewing and downloading borehole geologic data.

  16. Kinome Render: a stand-alone and web-accessible tool to annotate the human protein kinome tree.

    Science.gov (United States)

    Chartier, Matthieu; Chénard, Thierry; Barker, Jonathan; Najmanovich, Rafael

    2013-01-01

    Human protein kinases play fundamental roles mediating the majority of signal transduction pathways in eukaryotic cells as well as a multitude of other processes involved in metabolism, cell-cycle regulation, cellular shape, motility, differentiation and apoptosis. The human protein kinome contains 518 members. Most studies that focus on the human kinome require, at some point, the visualization of large amounts of data. The visualization of such data within the framework of a phylogenetic tree may help identify key relationships between different protein kinases in view of their evolutionary distance and the information used to annotate the kinome tree. For example, studies that focus on the promiscuity of kinase inhibitors can benefit from the annotations to depict binding affinities across kinase groups. Images involving the mapping of information into the kinome tree are common. However, producing such figures manually can be a long arduous process prone to errors. To circumvent this issue, we have developed a web-based tool called Kinome Render (KR) that produces customized annotations on the human kinome tree. KR allows the creation and automatic overlay of customizable text or shape-based annotations of different sizes and colors on the human kinome tree. The web interface can be accessed at: http://bcb.med.usherbrooke.ca/kinomerender. A stand-alone version is also available and can be run locally.

  17. Using NASA's Giovanni Web Portal to Access and Visualize Satellite-Based Earth Science Data in the Classroom

    Science.gov (United States)

    Lloyd, S. A.; Acker, J. G.; Prados, A. I.; Leptoukh, G. G.

    2008-12-01

    One of the biggest obstacles for the average Earth science student today is locating and obtaining satellite- based remote sensing datasets in a format that is accessible and optimal for their data analysis needs. At the Goddard Earth Sciences Data and Information Services Center (GES-DISC) alone, on the order of hundreds of Terabytes of data are available for distribution to scientists, students and the general public. The single biggest and time-consuming hurdle for most students when they begin their study of the various datasets is how to slog through this mountain of data to arrive at a properly sub-setted and manageable dataset to answer their science question(s). The GES DISC provides a number of tools for data access and visualization, including the Google-like Mirador search engine and the powerful GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni) web interface. Giovanni provides a simple way to visualize, analyze and access vast amounts of satellite-based Earth science data. Giovanni's features and practical examples of its use will be demonstrated, with an emphasis on how satellite remote sensing can help students understand recent events in the atmosphere and biosphere. Giovanni is actually a series of sixteen similar web-based data interfaces, each of which covers a single satellite dataset (such as TRMM, TOMS, OMI, AIRS, MLS, HALOE, etc.) or a group of related datasets (such as MODIS and MISR for aerosols, SeaWIFS and MODIS for ocean color, and the suite of A-Train observations co-located along the CloudSat orbital path). Recently, ground-based datasets have been included in Giovanni, including the Northern Eurasian Earth Science Partnership Initiative (NEESPI), and EPA fine particulate matter (PM2.5) for air quality. Model data such as the Goddard GOCART model and MERRA meteorological reanalyses (in process) are being increasingly incorporated into Giovanni to facilitate model- data intercomparison. A full suite of data

  18. AN EFFICIENT APPROACH FOR KEYWORD SELECTION; IMPROVING ACCESSIBILITY OF WEB CONTENTS BY GENERAL SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    H. H. Kian

    2011-11-01

    Full Text Available General search engines often return imprecise results, even for detailed queries, so there is a vital need to elicit useful information, such as keywords, that helps search engines produce acceptable results for users' queries. Although many methods have been proposed for extracting keywords automatically, all attempt to improve recall, precision and other criteria that describe how well the method performs the job of an author. This paper presents a new automatic keyword extraction method that improves the accessibility of web content to search engines. The proposed method defines coefficients determining the efficiency of features and optimizes them using a genetic algorithm. Furthermore, it evaluates candidate keywords with a function that utilizes search engine results. Experiments demonstrate that, compared with other methods, the proposed method achieves a higher score from search engines without noticeable loss of recall or precision.
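
    The optimization loop described here, tuning feature-efficiency coefficients with a genetic algorithm, follows the standard select/crossover/mutate pattern. A compact generic sketch; the fitness function is a stand-in for the paper's search-engine-based evaluation of candidate keywords:

```python
import random

def ga_optimize(fitness, n_coef=5, pop=30, gens=50, seed=0):
    """Tiny real-valued genetic algorithm: elitist selection, one-point
    crossover, Gaussian mutation."""
    rng = random.Random(seed)
    popn = [[rng.uniform(0, 1) for _ in range(n_coef)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 2]                 # keep the fitter half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_coef)
            child = a[:cut] + b[cut:]              # one-point crossover
            i = rng.randrange(n_coef)
            child[i] += rng.gauss(0, 0.1)          # mutate one coefficient
            children.append(child)
        popn = parents + children
    return max(popn, key=fitness)

# Stand-in fitness: in the paper this would score extracted keywords
# against search-engine results for the page.
best = ga_optimize(lambda c: -sum((x - 0.5) ** 2 for x in c))
print([round(x, 2) for x in best])
```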

  19. Web based dosimetry system for reading and monitoring dose through internet access

    Energy Technology Data Exchange (ETDEWEB)

    Perle, S.C.; Bennett, K.; Kahilainen, J.; Vuotila, M. [Mirion Technologies (United States); Mirion Technologies (Finland)

    2010-07-01

    The Instadose™ dosemeter from Mirion Technologies is a small, rugged device based on patented direct ion storage technology and is accredited by the National Voluntary Laboratory Accreditation Program (NVLAP) through NIST, bringing radiation monitoring into the digital age. Smaller than a flash drive, this dosemeter provides an instant read-out when connected to any computer with internet access and a USB connection. Instadose devices provide radiation workers with more flexibility than today's dosemeters. A non-volatile analog memory cell is surrounded by a gas-filled ion chamber, and dose changes the amount of electric charge in the DIS analog memory. The total charge storage capacity of the memory determines the available dose range, and the state of the analog memory is determined by measuring the voltage across the memory cell. AMP (Account Management Program) provides secure real-time access to account details, device assignments, reports and all pertinent account information; access can be restricted based on the role assigned to an individual. A variety of reports are available for download and customizing. The advantages of the Instadose dosemeter are: unlimited reading capability; concerns about a possible exposure can be addressed immediately; and re-readability without loss of exposure data, with cumulative exposure maintained. (authors)

  20. RSARF: Prediction of residue solvent accessibility from protein sequence using random forest method

    KAUST Repository

    Ganesan, Pugalenthi

    2012-01-01

    Prediction of protein structure from its amino acid sequence is still a challenging problem. A complete physicochemical understanding of protein folding is essential for accurate structure prediction. Knowledge of residue solvent accessibility gives useful insights into protein structure prediction and function prediction. In this work, we propose a random forest method, RSARF, to predict residue accessible surface area from protein sequence information. The training and testing were performed using 120 proteins containing 22006 residues. For each residue, buried and exposed states were computed using five thresholds (0%, 5%, 10%, 25%, and 50%). The prediction accuracies for the 0%, 5%, 10%, 25%, and 50% thresholds are 72.9%, 78.25%, 78.12%, 77.57% and 72.07% respectively. Further, comparison of RSARF with other methods using a benchmark dataset containing 20 proteins shows that our approach is useful for prediction of residue solvent accessibility from protein sequence without using structural information. The RSARF program, datasets and supplementary data are available at http://caps.ncbs.res.in/download/pugal/RSARF/.
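
    The two-state formulation used by RSARF, labelling residues buried or exposed at a chosen relative-accessibility threshold and training a random forest, can be sketched with scikit-learn on stand-in data; real inputs would be sequence-derived features per residue:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))                       # stand-in residue features
rsa = np.clip(50 + 20 * X[:, 0] + rng.normal(0, 10, 2000), 0, 100)

threshold = 25.0                                      # one of the paper's cut-offs
y = (rsa > threshold).astype(int)                     # 1 = exposed, 0 = buried

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```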

  1. Drug-target interaction prediction: databases, web servers and computational models.

    Science.gov (United States)

    Chen, Xing; Yan, Chenggang Clarence; Zhang, Xiaotian; Zhang, Xu; Dai, Feng; Yin, Jian; Zhang, Yongdong

    2016-07-01

    Identification of drug-target interactions is an important process in drug discovery. Although high-throughput screening and other biological assays are becoming available, experimental methods for drug-target interaction identification remain extremely costly, time-consuming and challenging even nowadays. Therefore, various computational models have been developed to predict potential drug-target associations on a large scale. In this review, databases and web servers involved in drug-target identification and drug discovery are summarized. In addition, we introduce some state-of-the-art computational models for drug-target interaction prediction, including network-based and machine learning-based methods. For the machine learning-based methods in particular, much attention is paid to supervised and semi-supervised models, which differ essentially in their use of negative samples. Although significant improvements in drug-target interaction prediction have been obtained by many effective computational models, both network-based and machine learning-based methods have their respective disadvantages. Furthermore, we discuss the future directions of network-based drug discovery and of network approaches for personalized drug discovery based on personalized medicine, genome sequencing, tumor clone-based networks and cancer hallmark-based networks. Finally, we discuss a new evaluation validation framework and the formulation of the drug-target interaction prediction problem as a more realistic regression task based on quantitative bioactivity data.

  2. Interactive access to LP DAAC satellite data archives through a combination of open-source and custom middleware web services

    Science.gov (United States)

    Davis, Brian N.; Werpy, Jason; Friesz, Aaron M.; Impecoven, Kevin; Quenzer, Robert; Maiersperger, Tom; Meyer, David J.

    2015-01-01

    Current methods of searching for and retrieving data from satellite land remote sensing archives do not allow for interactive information extraction. Instead, Earth science data users are required to download files over low-bandwidth networks to local workstations and process data before science questions can be addressed. New methods of extracting information from data archives need to become more interactive to meet user demands for deriving increasingly complex information from rapidly expanding archives. Moving the tools required for processing data to computer systems of data providers, and away from systems of the data consumer, can improve turnaround times for data processing workflows. The implementation of middleware services was used to provide interactive access to archive data. The goal of this middleware services development is to enable Earth science data users to access remote sensing archives for immediate answers to science questions instead of links to large volumes of data to download and process. Exposing data and metadata to web-based services enables machine-driven queries and data interaction. Also, product quality information can be integrated to enable additional filtering and sub-setting. Only the reduced content required to complete an analysis is then transferred to the user.

  3. Research and Application of a Role-Based Access Control Model in Web Application Systems

    Institute of Scientific and Technical Information of China (English)

    黄秀文

    2015-01-01

    Access control is the main strategy for security and protection in Web systems, and traditional access control can no longer meet growing security needs. Using the role-based access control (RBAC) model and introducing the concept of roles into the web system, each user is mapped to a role within an organization and access permissions are granted to the corresponding role; authorization and access control then follow from the user's role in the organization. This improves the flexibility and security of permission assignment and access control in web systems.
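
    The role mapping the abstract describes reduces to two lookups: user to role, and role to permission set. A minimal sketch; the role and permission names are invented for illustration:

```python
ROLE_PERMS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": "admin", "bob": "viewer"}   # user -> role in the organization

def check_access(user: str, action: str) -> bool:
    """Grant access if the user's role carries the requested permission."""
    role = USER_ROLES.get(user)
    return role is not None and action in ROLE_PERMS.get(role, set())

assert check_access("alice", "delete")
assert not check_access("bob", "write")
```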

  4. AGRIS: providing access to agricultural research data exploiting open data on the web.

    Science.gov (United States)

    Celli, Fabrizio; Malapela, Thembani; Wegner, Karna; Subirats, Imma; Kokoliou, Elena; Keizer, Johannes

    2015-01-01

    AGRIS is the International System for Agricultural Science and Technology. It is supported by a large community of data providers, partners and users. AGRIS is a database that aggregates bibliographic data and, through this core data, retrieves related content across online information systems by taking advantage of Semantic Web capabilities. AGRIS is a global public good, and its vision is to be responsive to its users' needs by facilitating contributions and feedback regarding the AGRIS core knowledgebase, AGRIS's future and its continuous development. Periodic AGRIS e-consultations, partner meetings and user feedback feed into the development of the AGRIS application and its content coverage. This paper outlines the current AGRIS technical set-up and its network of partners, data providers and users, as well as how AGRIS's responsiveness to clients' needs drives the continuous technical development of the application. The paper concludes with a use case of how AGRIS stakeholder input and the subsequent AGRIS e-consultation results influence the development of the AGRIS application, knowledgebase and service delivery.

  5. AVCpred: an integrated web server for prediction and design of antiviral compounds.

    Science.gov (United States)

    Qureshi, Abid; Kaur, Gazaldeep; Kumar, Manoj

    2017-01-01

    Viral infections constantly jeopardize global public health due to the lack of effective antiviral therapeutics. Therefore, there is an imperative need to speed up the drug discovery process to identify novel and efficient drug candidates. In this study, we have developed quantitative structure-activity relationship (QSAR)-based models for predicting antiviral compounds (AVCs) against deadly viruses like human immunodeficiency virus (HIV), hepatitis C virus (HCV), hepatitis B virus (HBV), human herpesvirus (HHV) and 26 others, using publicly available experimental data from the ChEMBL bioactivity database. Support vector machine (SVM) models achieved maximum Pearson correlation coefficients of 0.72, 0.74, 0.66, 0.68, and 0.71 in regression mode and maximum Matthews correlation coefficients of 0.91, 0.93, 0.70, 0.89, and 0.71, respectively, in classification mode during 10-fold cross-validation. Furthermore, similar performance was observed on the independent validation sets. We have integrated these models in the AVCpred web server, freely available at http://crdd.osdd.net/servers/avcpred. In addition, the datasets are provided in a searchable format. We hope this web server will assist researchers in the identification of potential antiviral agents. It would also save time and cost by prioritizing new drugs against viruses before their synthesis and experimental testing.
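
    A bare-bones version of the regression-mode QSAR workflow, fingerprint-style features in, activity values out, evaluated by Pearson correlation, might look like this. The synthetic data stands in for ChEMBL-derived descriptors; this is not the AVCpred pipeline itself:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(500, 166)).astype(float)   # stand-in binary fingerprints
y = X[:, :10].sum(axis=1) + rng.normal(0, 1, 500)       # synthetic activity values

model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
r = np.corrcoef(y[400:], pred)[0, 1]                    # Pearson correlation
print(f"Pearson r on the held-out set: {r:.2f}")
```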

  6. Struct2Net: a web service to predict protein-protein interactions using a structure-based approach.

    Science.gov (United States)

    Singh, Rohit; Park, Daniel; Xu, Jinbo; Hosur, Raghavendra; Berger, Bonnie

    2010-07-01

    Struct2Net is a web server for predicting interactions between arbitrary protein pairs using a structure-based approach. Prediction of protein-protein interactions (PPIs) is a central area of interest, and successful prediction would provide leads for experiments and drug design; however, the experimental coverage of the PPI interactome remains inadequate. We believe that Struct2Net is the first community-wide resource to provide structure-based PPI predictions that go beyond homology modeling. Most web resources for predicting PPIs currently rely on functional genomic data (e.g. GO annotation, gene expression, cellular localization, etc.); our structure-based approach is independent of such methods and only requires the sequence information of the proteins being queried. The web service allows multiple querying options, aimed at maximizing flexibility. For the most commonly studied organisms (fly, human and yeast), predictions have been pre-computed and can be retrieved almost instantaneously. For proteins from other species, users have the option of getting a quick-but-approximate result (using orthology over pre-computed results) or having a full-blown computation performed. The web service is freely available at http://struct2net.csail.mit.edu.

  7. LigoDV-web: Providing easy, secure and universal access to a large distributed scientific data store for the LIGO Scientific Collaboration

    CERN Document Server

    Areeda, Joseph S; Lundgren, Andrew P; Maros, Edward; Macleod, Duncan M; Zweizig, John

    2016-01-01

    Gravitational-wave observatories around the world, including the Laser Interferometer Gravitational-wave Observatory (LIGO), record a large volume of gravitational-wave output data and auxiliary data about the instruments and their environments. These data are stored at the observatory sites and distributed to computing clusters for data analysis. LigoDV-web is a web-based data viewer that provides access to data recorded at the LIGO Hanford, LIGO Livingston and GEO600 observatories, and the 40m prototype interferometer at Caltech. The challenge addressed by this project is to provide meaningful visualizations of small data sets to anyone in the collaboration in a fast, secure and reliable manner with minimal software, hardware and training required of the end users. LigoDV-web is implemented as a Java Enterprise Application, with Shibboleth Single Sign On for authentication and authorization and a proprietary network protocol used for data access on the back end. Collaboration members with proper credentials...

  8. FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?

    Science.gov (United States)

    Koehler, Wallace; Mincey, Danielle

    1996-01-01

    Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)

  9. Factors Influencing Webmasters and the Level of Web Accessibility and Section 508 Compliance at SACS Accredited Postsecondary Institutions: A Study Using the Theory of Planned Behavior

    Science.gov (United States)

    Freeman, Misty Danielle

    2013-01-01

    The purpose of this research was to explore Webmasters' behaviors and factors that influence Web accessibility at postsecondary institutions. Postsecondary institutions that were accredited by the Southern Association of Colleges and Schools were used as the population. The study was based on the theory of planned behavior, and Webmasters'…

  10. A Grounded Theory Study of the Process of Accessing Information on the World Wide Web by People with Mild Traumatic Brain Injury

    Science.gov (United States)

    Blodgett, Cynthia S.

    2008-01-01

    The purpose of this grounded theory study was to examine the process by which people with Mild Traumatic Brain Injury (MTBI) access information on the web. Recent estimates include amateur sports and recreation injuries, non-hospital clinics and treatment facilities, private and public emergency department visits and admissions, providing…

  13. Optimal use of conservation and accessibility filters in microRNA target prediction.

    Directory of Open Access Journals (Sweden)

    Ray M Marín

    Full Text Available It is generally accepted that filtering microRNA (miRNA) target predictions by conservation or by accessibility can reduce the false discovery rate. However, these two strategies are usually not exploited in a combined and flexible manner. Here, we introduce PACCMIT, a flexible method that filters miRNA binding sites by their conservation, accessibility, or both. The improvement in performance obtained with each of these three filters is demonstrated on the prediction of targets for both (i) highly and (ii) weakly conserved miRNAs, i.e., in two scenarios in which the miRNA-target interactions are subjected to different evolutionary pressures. We show that in the first scenario conservation is a better filter than accessibility (as both sensitivity and precision are higher among the top predictions) and that the combined filter improves performance of PACCMIT even further. In the second scenario, on the other hand, the accessibility filter performs better than both the conservation and combined filters, suggesting that site conservation is not equally effective in rejecting false positive predictions for all miRNAs. Regarding the quality of the ranking criterion proposed by Robins and Press and used in PACCMIT, it is shown that top-ranking interactions correspond to more downregulated proteins than do the lower-ranking interactions. Comparison with several other target prediction algorithms shows that the ranking of predictions provided by PACCMIT is at least as good as the ranking generated by other conservation-based methods and considerably better than the energy-based ranking used in other accessibility-based methods.
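
    The combined filtering idea is simple to express in code: keep only candidate sites that pass both a conservation cut-off and an accessibility cut-off, and relax either threshold to recover the single-filter variants. A minimal sketch with invented field names and thresholds, not PACCMIT's actual scoring:

```python
def combined_filter(sites, min_cons=0.8, max_open_energy=5.0):
    """Keep sites passing both filters; relax either threshold to use one alone.

    sites: iterable of dicts with 'conservation' (0..1) and 'opening_energy'
    (energy needed to make the site accessible) keys.
    """
    return [s for s in sites
            if s["conservation"] >= min_cons
            and s["opening_energy"] <= max_open_energy]

candidates = [
    {"id": "site1", "conservation": 0.95, "opening_energy": 3.2},
    {"id": "site2", "conservation": 0.60, "opening_energy": 2.1},
]
print(combined_filter(candidates))   # only site1 survives both filters
```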

  14. Internet accessibility and usage among urban adolescents in Southern California: implications for web-based health research.

    Science.gov (United States)

    Sun, Ping; Unger, Jennifer B; Palmer, Paula H; Gallaher, Peggy; Chou, Chih-Ping; Baezconde-Garbanati, Lourdes; Sussman, Steve; Johnson, C Anderson

    2005-10-01

    The World Wide Web (WWW) offers a distinct capability to deliver interventions tailored to the individual's characteristics. To fine-tune the tailoring process, studies are needed to explore how Internet accessibility and usage are related to demographic, psychosocial, behavioral, and other health related characteristics. This study was based on a cross-sectional survey conducted on 2373 7th grade students of various ethnic groups in Southern California. Measures of Internet use included Internet use at school or at home, Email use, chat-room use, and Internet favoring. Logistic regressions were conducted to assess the associations between Internet use and selected demographic, psychosocial, behavioral variables and self-reported health statuses. The proportion of students who could access the Internet at school or home was 90% and 40%, respectively. Nearly all (99%) of the respondents could access the Internet either at school or at home. Higher SES and Asian ethnicity were associated with higher internet use. Among those who could access the Internet and after adjusting for the selected demographic and psychosocial variables, depression was positively related with chat-room use and using the Internet longer than 1 hour per day at home, and hostility was positively related with Internet favoring (All ORs = 1.2 for +1 STD, p Internet use (ORs for +1 STD ranged from 1.2 to 2.0, all p Internet use. Substance use was positively related to email use, chat-room use, and at home Internet use (OR for "used" vs. "not used" ranged from 1.2 to 4.0, p Internet use at home but lower levels of Internet use at school. More physical activity was related to more email use (OR = 1.3 for +1 STD), chat room use (OR = 1.2 for +1 STD), and at school ever Internet use (OR = 1.2 for +1 STD, all p Internet use-related measures. In this ethnically diverse sample of Southern California 7th grade students, 99% could access the Internet at school and/or at home. This suggests that the Internet

  15. nuMap: A Web Platform for Accurate Prediction of Nucleosome Positioning

    Institute of Scientific and Technical Information of China (English)

    Bader A Alharbi; Thamir H Alshammari; Nathan L Felton; Victor B Zhurkin; Feng Cui

    2014-01-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, the YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. The application allows users to specify a number of options, such as schemes and parameters for threading calculation, and provides multiple layout formats. nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site.

  16. Predicting IVF Outcome: A Proposed Web-based System Using Artificial Intelligence.

    Science.gov (United States)

    Siristatidis, Charalampos; Vogiatzi, Paraskevi; Pouliakis, Abraham; Trivella, Marialenna; Papantoniou, Nikolaos; Bettocchi, Stefano

    2016-01-01

    To propose a functional in vitro fertilization (IVF) prediction model to assist clinicians in tailoring personalized treatment of subfertile couples and to improve assisted reproduction outcomes. We describe the construction and evaluation of an enhanced web-based system with a novel Artificial Neural Network (ANN) architecture and input and output parameters conforming to clinical and bibliographic standards, driven by a complete data set and "trained" by a network expert in an IVF setting. The system can act as a routine information technology platform for an IVF unit and is capable of recalling and evaluating a vast amount of information in a rapid and automated manner to provide an objective indication of the outcome of an assisted reproductive cycle. ANNs are an exceptional candidate for providing the fertility specialist with numerical estimates to promote personalization of healthcare and adaptation of the course of treatment according to the indications.
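
    As a toy illustration of the kind of ANN the authors propose, the following trains a small multilayer perceptron on synthetic cycle features to output a binary outcome estimate. The features, architecture and data are all stand-ins, not the authors' model:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 12))                          # stand-in cycle features
y = (X[:, 0] + rng.normal(0, 1, 600) > 0).astype(int)   # synthetic outcome labels

clf = make_pipeline(
    StandardScaler(),                                   # scale inputs for the MLP
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0),
)
clf.fit(X[:450], y[:450])
print("held-out accuracy:", clf.score(X[450:], y[450:]))
```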

  17. Data Quality Parameters and Web Services Facilitate User Access to Research-Ready Seismic Data

    Science.gov (United States)

    Trabant, C. M.; Templeton, M. E.; Van Fossen, M.; Weertman, B.; Ahern, T. K.; Casey, R. E.; Keyson, L.; Sharer, G.

    2016-12-01

    IRIS Data Services has the mission of providing efficient access to a wide variety of seismic and related geoscience data to the user community. With our vast archive of freely available data, we recognize that there is a constant challenge to provide data to scientists and students that are of a consistently useful level of quality. To address this issue, we began by undertaking a comprehensive survey of the data and generating metrics measurements that provide estimates of data quality. These measurements can inform the scientist of the level of suitability of a given set of data for their scientific investigation. They also serve as a quality assurance check for network operators, who can act on this information to improve their current recording or mitigate issues with already recorded data and metadata. Following this effort, IRIS Data Services is moving forward to focus on providing tools for the scientist that make it easier to access data of a quality and characteristic that suits their investigation. Data that fulfill this criterion are termed "research-ready". In addition to filtering data by type, geographic location, proximity to events, and specific time ranges, we will offer the ability to filter data based on specific quality assessments. These include signal-to-noise ratio measurements, data continuity, timing quality, absence of channel cross-talk, and potentially many other factors. Our goal is to ensure that the user receives only the data that meets their specifications and will not require extensive review and culling after delivery. We will present the latest developments of the MUSTANG automated data quality system and introduce the Research-Ready Data Sets (RRDS) service. Together these two technologies serve as a data quality assurance ecosystem that will provide benefit to the scientific community by aiding efforts to readily find appropriate and suitable data for use in any number of objectives.

  18. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures where reliabilities are obtained by post-processing the output. CONCLUSION...

  19. GRIP: A web-based system for constructing Gold Standard datasets for protein-protein interaction prediction.

    Science.gov (United States)

    Browne, Fiona; Wang, Haiying; Zheng, Huiru; Azuaje, Francisco

    2009-01-26

    Information about protein interaction networks is fundamental to understanding protein function and cellular processes. Interaction patterns among proteins can suggest new drug targets and aid in the design of new therapeutic interventions. Efforts have been made to map interactions on a proteome-wide scale using both experimental and computational techniques. Reference datasets that contain known interacting proteins (positive cases) and non-interacting proteins (negative cases) are essential to support computational prediction and validation of protein-protein interactions. Information on known interacting and non-interacting proteins is usually stored within databases, and extraction of these data can be both complex and time-consuming. Although the automatic construction of reference datasets for classification would be a useful resource for researchers, no public resource currently exists to perform this task. GRIP (Gold Reference dataset constructor from Information on Protein complexes) is a web-based system that provides researchers with the functionality to create reference datasets for protein-protein interaction prediction in Saccharomyces cerevisiae. Both positive and negative cases for a reference dataset can be extracted, organised and downloaded by the user. GRIP also provides an upload facility whereby users can submit proteins to determine protein complex membership. A search facility is provided where a user can search for protein complex information in Saccharomyces cerevisiae. GRIP is developed to retrieve information on protein complexes, cellular localisation, and physical and genetic interactions in Saccharomyces cerevisiae. Manual construction of reference datasets can be a time-consuming process requiring programming knowledge; GRIP simplifies and speeds up this process by allowing users to automatically construct reference datasets. GRIP is free to access at http://rosalind.infj.ulst.ac.uk/GRIP/.

  20. GRIP: A web-based system for constructing Gold Standard datasets for protein-protein interaction prediction

    Directory of Open Access Journals (Sweden)

    Zheng Huiru

    2009-01-01

    Full Text Available Abstract Background Information about protein interaction networks is fundamental to understanding protein function and cellular processes. Interaction patterns among proteins can suggest new drug targets and aid in the design of new therapeutic interventions. Efforts have been made to map interactions on a proteome-wide scale using both experimental and computational techniques. Reference datasets that contain known interacting proteins (positive cases) and non-interacting proteins (negative cases) are essential to support computational prediction and validation of protein-protein interactions. Information on known interacting and non-interacting proteins is usually stored within databases. Extraction of these data can be both complex and time-consuming. Although the automatic construction of reference datasets for classification would be a useful resource for researchers, no public resource currently exists to perform this task. Results GRIP (Gold Reference dataset constructor from Information on Protein complexes) is a web-based system that provides researchers with the functionality to create reference datasets for protein-protein interaction prediction in Saccharomyces cerevisiae. Both positive and negative cases for a reference dataset can be extracted, organised and downloaded by the user. GRIP also provides an upload facility whereby users can submit proteins to determine protein complex membership. A search facility is provided where a user can search for protein complex information in Saccharomyces cerevisiae. Conclusion GRIP is developed to retrieve information on protein complexes, cellular localisation, and physical and genetic interactions in Saccharomyces cerevisiae. Manual construction of reference datasets can be a time-consuming process requiring programming knowledge. GRIP simplifies and speeds up this process by allowing users to automatically construct reference datasets. GRIP is free to access at http://rosalind.infj.ulst.ac.uk/GRIP/.

  1. MimoPro: a more efficient Web-based tool for epitope prediction using phage display libraries

    Directory of Open Access Journals (Sweden)

    Guo William W

    2011-05-01

    Full Text Available Abstract Background A B-cell epitope is a group of residues on the surface of an antigen which stimulates humoral responses. Locating these epitopes on antigens is important for the purpose of effective vaccine design. In recent years, mapping affinity-selected peptides screened from a random phage display library to the native epitope has become popular in epitope prediction. These peptides, also known as mimotopes, share similar structure and function with the corresponding native epitopes. Great effort has been made in using this similarity between such mimotopes and native epitopes in prediction, which has resulted in better outcomes than statistics-based methods can achieve. However, it cannot maintain a high degree of satisfaction in various circumstances. Results In this study, we propose a new method that maps a group of mimotopes back to a source antigen so as to locate the interacting epitope on the antigen. The core of this method is a searching algorithm that incorporates both dynamic programming (DP) and branch and bound (BB) optimization and operates on a series of overlapping patches on the surface of a protein. These patches are then transformed to a number of graphs using an adaptable distance threshold (ADT) regulated by an appropriate compactness factor (CF), a novel parameter proposed in this study. Compared with both Pep-3D-Search and PepSurf, two leading graph-based search tools, on average from the results of 18 test cases, MimoPro, the Web-based implementation of our proposed method, performed better in sensitivity, precision, and Matthews correlation coefficient (MCC) than both did in epitope prediction. In addition, MimoPro is significantly faster than both Pep-3D-Search and PepSurf in processing. Conclusions Our search algorithm designed for processing well constructed graphs using an ADT regulated by CF is more sensitive and significantly faster than other graph-based approaches in epitope prediction. MimoPro is a

  2. Coupling News Sentiment with Web Browsing Data Improves Prediction of Intra-Day Price Dynamics.

    Directory of Open Access Journals (Sweden)

    Gabriele Ranco

    Full Text Available The new digital revolution of big data is deeply changing our capability of understanding society and forecasting the outcome of many social and economic systems. Unfortunately, information can be very heterogeneous in the importance, relevance, and surprise it conveys, severely affecting the predictive power of semantic and statistical methods. Here we show that the aggregation of web users' behavior can be elicited to overcome this problem in a hard-to-predict complex system, namely the financial market. Specifically, our in-sample analysis shows that the combined use of sentiment analysis of news and browsing activity of users of Yahoo! Finance greatly helps in forecasting intra-day and daily price changes of a set of 100 highly capitalized US stocks traded in the period 2012-2013. Sentiment analysis or browsing activity taken alone have very small or no predictive power. Conversely, when considering a news signal where in a given time interval we compute the average sentiment of the clicked news, weighted by the number of clicks, we show that for nearly 50% of the companies such a signal Granger-causes hourly price returns. Our result indicates a "wisdom-of-the-crowd" effect that makes it possible to exploit users' activity to identify and properly weight the relevant and surprising news, considerably enhancing the forecasting power of the news sentiment.
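
    The click-weighted news signal described here is simple to compute. A minimal sketch follows, with illustrative column names ('interval', 'sentiment', 'clicks') rather than the paper's actual data layout.

      import pandas as pd

      def click_weighted_sentiment(df: pd.DataFrame) -> pd.Series:
          """Per-interval average sentiment of clicked news, weighted by clicks."""
          tmp = df.assign(weighted=df["sentiment"] * df["clicks"])
          grouped = tmp.groupby("interval")[["weighted", "clicks"]].sum()
          return grouped["weighted"] / grouped["clicks"]

      news = pd.DataFrame({"interval": ["09:00", "09:00", "10:00"],
                           "sentiment": [0.8, -0.2, 0.1],   # per-article score
                           "clicks": [120, 30, 60]})
      print(click_weighted_sentiment(news))  # 09:00 -> 0.60, 10:00 -> 0.10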

  3. A novel web server predicts amino acid residue protection against hydrogen-deuterium exchange.

    Science.gov (United States)

    Lobanov, Mikhail Yu; Suvorina, Masha Yu; Dovidchenko, Nikita V; Sokolovskiy, Igor V; Surin, Alexey K; Galzitskaya, Oxana V

    2013-06-01

    To clarify the relationship between structural elements and polypeptide chain mobility, a set of statistical analyses of structures is necessary. Because proteins with determined spatial structures are at present much less numerous than those with known amino acid sequences, it is important to be able to predict the extent of proton protection from hydrogen-deuterium (HD) exchange based solely on the protein primary structure. Here we present a novel web server aimed at predicting the degree of amino acid residue protection against HD exchange solely from the primary structure of the protein chain under study. On the basis of the amino acid sequence, the server offers the following three predictors for the user's choice. The first is prediction of the number of contacts occurring in the protein, which is shown to be helpful in estimating the number of protons protected against HD exchange (sensitivity 0.71). The second is the probability of H-bonding in the protein, which is useful for finding the number of unprotected protons (specificity 0.71). The last is the use of an artificial predictor. Also, we report on mass spectrometry analysis of HD exchange applied for the first time to free amino acids. Its results showed good agreement with theoretical data (number of protons) for 10 globular proteins (correlation coefficient 0.73). We were also the first to compile two datasets of experimental HD exchange data for 35 proteins. The H-Protection server is available for users at http://bioinfo.protres.ru/ogp/ Supplementary data are available at Bioinformatics online.

  4. The Phyre2 web portal for protein modelling, prediction and analysis

    Science.gov (United States)

    Kelley, Lawrence A; Mezulis, Stefans; Yates, Christopher M; Wass, Mark N; Sternberg, Michael JE

    2017-01-01

    Summary Phyre2 is a suite of tools available on the web to predict and analyse protein structure, function and mutations. The focus of Phyre2 is to provide biologists with a simple and intuitive interface to state-of-the-art protein bioinformatics tools. Phyre2 replaces Phyre, the original version of the server, for which we previously published a protocol. In this updated protocol, we describe Phyre2, which uses advanced remote homology detection methods to build 3D models, predict ligand binding sites, and analyse the effect of amino-acid variants (e.g. nsSNPs) for a user's protein sequence. Users are guided through the results by a simple interface at a level of detail determined by them. This protocol will guide a user from submitting a protein sequence to interpreting the secondary and tertiary structure of their models, their domain composition and model quality. A range of additional tools is described for finding a protein structure in a genome, submitting a large number of sequences at once, and automatically running weekly searches for proteins that are difficult to model. The server is available at http://www.sbg.bio.ic.ac.uk/phyre2. A typical structure prediction will be returned between 30 minutes and 2 hours after submission. PMID:25950237

  5. Coupling News Sentiment with Web Browsing Data Improves Prediction of Intra-Day Price Dynamics.

    Science.gov (United States)

    Ranco, Gabriele; Bordino, Ilaria; Bormetti, Giacomo; Caldarelli, Guido; Lillo, Fabrizio; Treccani, Michele

    2016-01-01

    The new digital revolution of big data is deeply changing our capability of understanding society and forecasting the outcome of many social and economic systems. Unfortunately, information can be very heterogeneous in the importance, relevance, and surprise it conveys, severely affecting the predictive power of semantic and statistical methods. Here we show that the aggregation of web users' behavior can be elicited to overcome this problem in a hard-to-predict complex system, namely the financial market. Specifically, our in-sample analysis shows that the combined use of sentiment analysis of news and browsing activity of users of Yahoo! Finance greatly helps in forecasting intra-day and daily price changes of a set of 100 highly capitalized US stocks traded in the period 2012-2013. Sentiment analysis or browsing activity taken alone have very small or no predictive power. Conversely, when considering a news signal where in a given time interval we compute the average sentiment of the clicked news, weighted by the number of clicks, we show that for nearly 50% of the companies such a signal Granger-causes hourly price returns. Our result indicates a "wisdom-of-the-crowd" effect that makes it possible to exploit users' activity to identify and properly weight the relevant and surprising news, considerably enhancing the forecasting power of the news sentiment.

  6. Intro and Recent Advances: Remote Data Access via OPeNDAP Web Services

    Science.gov (United States)

    Fulker, David

    2016-01-01

    During the upcoming Summer 2016 meeting of the ESIP Federation (July 19-22), OPeNDAP will hold a Developers and Users Workshop. While a broad set of topics will be covered, a key focus is capitalizing on recent EOSDIS-sponsored advances in Hyrax, OPeNDAP's own software for server-side realization of the DAP2 and DAP4 protocols. These Hyrax advances are as important to data users as to data providers, and the workshop will include hands-on experiences of value to both. Specifically, a balanced set of presentations and hands-on tutorials will address advances in (1) server installation, (2) server configuration, (3) Hyrax aggregation capabilities, (4) support for data access from clients that are HTTP-based, JSON-based or OGC-compliant (especially WCS and WMS), (5) support for DAP4, (6) use and extension of server-side computational capabilities, and (7) several performance-affecting matters. Topics 2 through 7 will be relevant to data consumers, to data providers and, notably, given the open-source nature of all OPeNDAP software, to developers wishing to extend Hyrax, to build compatible clients and servers, and/or to employ Hyrax as middleware that enables interoperability across a variety of end-user and source-data contexts. A session for contributed talks will elaborate on the topics listed above and embrace additional ones.
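
    For readers unfamiliar with DAP-style access, a minimal client-side sketch follows, using the pydap library against a public OPeNDAP test dataset; the URL and variable name are examples, and only the requested hyperslab travels over the network.

      from pydap.client import open_url

      # Opening a remote dataset transfers metadata only, not the data itself.
      dataset = open_url("http://test.opendap.org/dap/data/nc/coads_climatology.nc")
      sst = dataset["SST"]            # lazy handle to a remote variable
      print(sst.shape)                # dimensions come from the dataset descriptor
      subset = sst[0, 40:42, 60:62]   # only this slice is fetched over HTTP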

  7. Predicting Postfire Hillslope Erosion with a Web-based Probabilistic Model

    Science.gov (United States)

    Robichaud, P. R.; Elliot, W. J.; Pierson, F. B.; Hall, D. E.; Moffet, C. A.

    2005-12-01

    Modeling erosion after major disturbances, such as wildfire, has major challenges that need to be overcome. Fire-induced changes include increased erosion due to loss of the protective litter and duff, loss of soil water storage, and in some cases, creation of water-repellent soil conditions. These conditions increase the potential for flooding and sedimentation, which are of special concern to people who live in, and manage resources in, areas adjacent to burned areas. A web-based Erosion Risk Management Tool (ERMiT) has been developed to predict surface erosion from postfire hillslopes and to evaluate the potential effectiveness of various erosion mitigation practices. The model uses a probabilistic approach that incorporates variability in weather, soil properties, and burn severity for forest, rangeland, and chaparral hillslopes. The Water Erosion Prediction Project (WEPP) is the erosion prediction engine, used in a Monte Carlo simulation mode to provide event-based erosion rate probabilities. The one-page custom interface is targeted at hydrologists and soil scientists. The interface allows users to select climate, soil texture, burn severity, and hillslope topography. For a given hillslope, the model uses a single 100-year run to obtain weather variability and then twenty 5- to 10-year runs to incorporate soil property, cover, and spatial burn severity variability. The output, in both tabular and graphical form, relates the probability of soil erosion exceeding a given amount in each of the first five years following the fire. Event statistics are provided to show the magnitude and rainfall intensity of the storms used to predict erosion rates. ERMiT also allows users to compare the effects of various mitigation treatments (mulches, seeding, and barrier treatments such as contour-felled logs or straw wattles) on the erosion rate probability. Data from rainfall simulation and concentrated flow (rill) techniques were used to parameterize ERMiT for these varied
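
    The probabilistic output ERMiT produces can be pictured with a toy Monte Carlo exceedance calculation. The lognormal draw below is merely a placeholder for WEPP runs over variable weather, soils and burn severity; all numbers are assumptions, not ERMiT parameters.

      import numpy as np

      rng = np.random.default_rng(42)
      # Simulated first-year event erosion rates (Mg/ha), placeholder distribution.
      erosion = rng.lognormal(mean=0.5, sigma=1.2, size=20_000)

      for threshold in (1, 5, 10, 20):
          p_exceed = (erosion > threshold).mean()
          print(f"P(erosion > {threshold:>2} Mg/ha) = {p_exceed:.2f}")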

  8. ProBiS-CHARMMing: Web Interface for Prediction and Optimization of Ligands in Protein Binding Sites.

    Science.gov (United States)

    Konc, Janez; Miller, Benjamin T; Štular, Tanja; Lešnik, Samo; Woodcock, H Lee; Brooks, Bernard R; Janežič, Dušanka

    2015-11-23

    Proteins often exist only as apo structures (unligated) in the Protein Data Bank, with their corresponding holo structures (with ligands) unavailable. However, apoproteins may not represent the amino-acid residue arrangement upon ligand binding well, which is especially problematic for molecular docking. We developed the ProBiS-CHARMMing web interface by connecting the ProBiS (http://probis.cmm.ki.si) and CHARMMing (http://www.charmming.org) web servers into one functional unit that enables prediction of protein-ligand complexes and allows for their geometry optimization and interaction energy calculation. The ProBiS web server predicts ligands (small compounds, proteins, nucleic acids, and single-atom ligands) that may bind to a query protein. This is achieved by comparing its surface structure against a nonredundant database of protein structures and finding those that have binding sites similar to that of the query protein. Existing ligands found in the similar binding sites are then transposed to the query according to predictions from ProBiS. The CHARMMing web server enables, among other things, minimization and potential energy calculation for a wide variety of biomolecular systems, and it is used here to optimize the geometry of the predicted protein-ligand complex structures using the CHARMM force field and to calculate their interaction energies with the corresponding query proteins. We show how ProBiS-CHARMMing can be used to predict ligands and their poses for a particular binding site, and minimize the predicted protein-ligand complexes to obtain representations of holoproteins. The ProBiS-CHARMMing web interface is freely available for academic users at http://probis.nih.gov.

  9. Design and Realization of an Embedded Web Access Control System

    Institute of Scientific and Technical Information of China (English)

    谯倩; 毛燕琴; 沈苏彬

    2011-01-01

    For the security of the embedded Web system itself, and in view of the characteristics of embedded Web systems, a role-based access control (RBAC) model is simplified by removing the complex pattern of role inheritance, and on this basis a "user-role-privilege set (business-page-operation)" access control scheme suited to embedded Web systems is proposed. The access control function of a specific embedded Web application system is implemented with CGI technology, restricting legitimate users' access to embedded Web system resources and preventing intrusion by unauthorized users as well as damage caused by careless operation by legitimate users. The implemented Web application system was tested, and the test results show that the model functions well.
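
    A minimal sketch of the simplified model (not the paper's CGI code) may clarify it: with role inheritance removed, each role maps directly to a set of (business, page, operation) triples, and a request is allowed only if the user's role holds the requested triple. All names below are illustrative.

      # Roles map straight to privilege sets; no inheritance between roles.
      ROLE_PRIVILEGES = {
          "operator": {("monitoring", "status.cgi", "read")},
          "admin":    {("monitoring", "status.cgi", "read"),
                       ("config",     "network.cgi", "write")},
      }
      USER_ROLES = {"alice": "admin", "bob": "operator"}

      def is_allowed(user, business, page, operation):
          role = USER_ROLES.get(user)
          return role is not None and (business, page, operation) in ROLE_PRIVILEGES[role]

      assert is_allowed("alice", "config", "network.cgi", "write")
      assert not is_allowed("bob", "config", "network.cgi", "write")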

  10. StaRProtein, A Web Server for Prediction of the Stability of Repeat Proteins

    Science.gov (United States)

    Xu, Yongtao; Zhou, Xu; Huang, Meilan

    2015-01-01

    Repeat proteins have become increasingly important due to their capability to bind to almost any protein and their potential as an alternative therapy to monoclonal antibodies. In the past decade repeat proteins have been designed to mediate specific protein-protein interactions. The tetratricopeptide and ankyrin repeat proteins are two classes of helical repeat proteins that form different binding pockets to accommodate various partners. It is important to understand the factors that define folding and stability of repeat proteins in order to prioritize the most stable designed repeat proteins and further explore their potential binding affinities. Here we developed distance-dependent statistical potentials using two classes of alpha-helical repeat proteins, tetratricopeptide and ankyrin repeat proteins respectively, and evaluated their efficiency in predicting the stability of repeat proteins. We demonstrated that the repeat-specific statistical potentials based on these two classes of repeat proteins showed markedly better accuracy than non-specific statistical potentials in 1) discriminating correct vs. incorrect models and 2) ranking the stability of designed repeat proteins. In particular, the statistical scores correlate closely with the equilibrium unfolding free energies of repeat proteins and therefore serve as a novel tool for quickly prioritizing designed repeat proteins with high stability. The StaRProtein web server was developed for predicting the stability of repeat proteins. PMID:25807112
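
    The kind of distance-dependent statistical potential the server is built on can be sketched in a few lines: pair energies are the inverse Boltzmann of observed versus expected contact frequencies per distance bin. The counts below are toy placeholders for statistics gathered from repeat-protein structures.

      import numpy as np

      def statistical_potential(observed, expected, pseudocount=1e-6):
          """E(pair, bin) = -ln(P_obs / P_exp) per residue pair and distance bin."""
          p_obs = (observed + pseudocount) / observed.sum()
          p_exp = (expected + pseudocount) / expected.sum()
          return -np.log(p_obs / p_exp)

      observed = np.array([[12., 3.], [3., 40.]])   # toy pair-by-bin counts
      expected = np.array([[10., 8.], [8., 30.]])
      print(statistical_potential(observed, expected).round(2))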

  11. Exploring Factors that Predict Preservice Teachers' Intentions to Use Web 2.0 Technologies Using Decomposed Theory of Planned Behavior

    Science.gov (United States)

    Sadaf, Ayesha; Newby, Timothy J.; Ertmer, Peggy A.

    2013-01-01

    This study investigated factors that predict preservice teachers' intentions to use Web 2.0 technologies in their future classrooms. The researchers used a mixed-methods research design and collected qualitative interview data (n = 7) to triangulate quantitative survey data (n = 286). Results indicate that positive attitudes and perceptions of…

  12. OGIS Access System

    Data.gov (United States)

    National Archives and Records Administration — The OGIS Access System (OAS) provides case management, stakeholder collaboration, and public communications activities including a web presence via a web portal.

  13. Study on Access Control for Web Services Based on a SOAP Gateway

    Institute of Scientific and Technical Information of China (English)

    夏春涛; 陈性元; 张斌; 王婷

    2007-01-01

    This paper discusses the requirement for a SOAP gateway in Web Services communication and proposes a SOAP-gateway-based access control architecture for Web Services. It analyzes the participants in the architecture and their responsibilities, and presents two implementation methods for the SOAP gateway together with an XACML-based implementation mechanism for the authorization service.

  14. Children and young people's views on access to a web-based application to support personal management of long-term conditions: a qualitative study.

    Science.gov (United States)

    Huby, K; Swallow, V; Smith, T; Carolan, I

    2017-01-01

    An exploration of children and young people's views on a proposed web-based application to support personal management of chronic kidney disease at home is important for developing resources that meet their needs and preferences. As part of a wider study to develop and evaluate a web-based information and support application for parents managing their child's chronic kidney disease, qualitative interviews were conducted with 26 children and young people aged 5-17 years. Interviews explored their views on content of a proposed child and young person-appropriate application to support personal management of their condition. Data were analysed by using framework technique and self-efficacy theory. One overarching theme of Access and three subthemes (information, accessibility and normalization) were identified. Information needed to be clear and accurate, age appropriate and secure. Access to Wi-Fi was essential to utilize information and retain contact with peers. For some, it was important to feel 'normal' and so they would choose not to access any care information when outside of the hospital as this reduced their ability to feel normal. Developing a web-based application that meets children and young peoples' information and support needs will maximize its utility and enhance the effectiveness of home-based clinical caregiving, therefore contributing to improved outcomes for patients. © 2016 John Wiley & Sons Ltd.

  15. SVM-Prot 2016: A Web-Server for Machine Learning Prediction of Protein Functional Families from Sequence Irrespective of Similarity.

    Science.gov (United States)

    Li, Ying Hong; Xu, Jing Yu; Tao, Lin; Li, Xiao Feng; Li, Shuang; Zeng, Xian; Chen, Shang Ying; Zhang, Peng; Qin, Chu; Zhang, Cheng; Chen, Zhe; Zhu, Feng; Chen, Yu Zong

    2016-01-01

    Knowledge of protein function is important for biological, medical and therapeutic studies, but many proteins are still unknown in function. There is a need for improved functional prediction methods. Our SVM-Prot web-server employs a machine learning method for predicting protein functional families from protein sequences irrespective of similarity, which complements similarity-based and other methods in predicting diverse classes of proteins, including distantly-related proteins and homologous proteins of different functions. Since its publication in 2003, we have made major improvements to SVM-Prot with (1) expanded coverage from 54 to 192 functional families, (2) a more diverse set of protein descriptors for protein representation, (3) improved predictive performance due to the use of more enriched training datasets and a greater variety of protein descriptors, (4) a newly integrated BLAST analysis option for assessing proteins in the SVM-Prot predicted functional families that are similar in sequence to a query protein, and (5) a newly added batch submission option supporting the classification of multiple proteins. Moreover, two more machine learning approaches, K nearest neighbor and probabilistic neural networks, were added to facilitate collective assessment of protein functions by multiple methods. SVM-Prot can be accessed at http://bidd2.nus.edu.sg/cgi-bin/svmprot/svmprot.cgi.
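
    As a rough illustration of sequence-only family prediction in the SVM-Prot spirit, the sketch below represents each protein by its amino acid composition (one simple descriptor class among the many the server uses) and trains a support vector machine; the sequences and labels are made up.

      import numpy as np
      from sklearn.svm import SVC

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def composition(seq):
          """20-dimensional amino acid composition descriptor."""
          return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

      # Toy training set: two dummy sequences per 'functional family'.
      seqs = ["MKKLLPTAAAGLLLLAAQPAMA", "MKKTAIAIAVALAGFATVAQA",
              "MDDDIAALVVDNGSGMCKAGFA", "MCDEDETTALVCDNGSGLVKAG"]
      labels = [0, 0, 1, 1]

      X = np.vstack([composition(s) for s in seqs])
      clf = SVC(kernel="rbf").fit(X, labels)
      print(clf.predict([composition("MKKALLALPLVVAFAAQAAHA")]))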

  16. DT-Web: a web-based application for drug-target interaction and drug combination prediction through domain-tuned network-based inference.

    Science.gov (United States)

    Alaimo, Salvatore; Bonnici, Vincenzo; Cancemi, Damiano; Ferro, Alfredo; Giugno, Rosalba; Pulvirenti, Alfredo

    2015-01-01

    The identification of drug-target interactions (DTI) is a costly and time-consuming step in drug discovery and design. Computational methods capable of predicting reliable DTI play an important role in the field. Algorithms may aim to design new therapies based on a single approved drug or a combination of them. Recently, recommendation methods relying on network-based inference in connection with knowledge coming from the specific domain have been proposed. Here we propose a web-based interface to the DT-Hybrid algorithm, which applies a recommendation technique based on bipartite network projection implementing resource transfer within the network. This technique, combined with domain-specific knowledge expressing drug and target similarity, is used to compute recommendations for each drug. Our web interface allows users: (i) to browse all the predictions inferred by the algorithm; (ii) to upload their custom data on which they wish to obtain a prediction through a DT-Hybrid based pipeline; (iii) to help in the early stages of drug combination, repositioning, substitution, or resistance studies by finding drugs that can act simultaneously on multiple targets in a multi-pathway environment. Our system is periodically synchronized with DrugBank and updated accordingly. The website is free, open to all users, and available at http://alpha.dmi.unict.it/dtweb/. Our web interface allows users to search and visualize information on drugs and targets, and to provide their own data to compute a list of predictions. The user can visualize information about the characteristics of each drug, a list of predicted and validated targets, and associated enzymes and transporters. A table containing key information and GO classification allows users to perform their own analysis on our data. A special interface for data submission allows the execution of a pipeline, based on DT-Hybrid, predicting new targets with the corresponding p-values expressing the reliability of
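
    The bipartite network-projection step at the core of DT-Hybrid can be sketched without its domain-specific similarity weighting: resources spread from targets to drugs and back, yielding a score for every drug-target pair. The adjacency matrix below is a toy example.

      import numpy as np

      A = np.array([[1, 0, 1],
                    [1, 1, 0],
                    [0, 0, 1]], dtype=float)     # 3 drugs x 3 targets (toy)

      k_drug = A.sum(axis=1, keepdims=True)      # drug degrees
      k_target = A.sum(axis=0, keepdims=True)    # target degrees

      # Two-step resource transfer: target -> drug -> target.
      W = (A / k_target) @ (A / k_drug).T        # drug-to-drug transfer weights
      scores = W @ A                             # recommendation score per pair
      print(scores.round(2))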

  17. Web accessible online public access catalogs in the Mercosur

    Directory of Open Access Journals (Sweden)

    Elsa Barber

    2008-06-01

    Full Text Available The user interfaces of web-based online public access catalogs (OPACs) of academic, special, public and national libraries in the Mercosur member countries (Argentina, Brazil, Paraguay, Uruguay) are analyzed in order to provide a diagnosis of the current situation regarding bibliographic description, subject analysis, user help messages, and bibliographic display. A quali-quantitative methodology is adopted, with the checklist of system functions provided by Hildreth (1982) used as the data collection instrument. The updated checklist yields a form of 38 closed questions that records the frequency of appearance of the basic functions of four areas: Area I, operations control; Area II, search formulation control and access points; Area III, output control; and Area IV, user assistance (information and instruction). Data corresponding to 297 units are analyzed, with strata defined by software type, library type and country. Chi-square, odds ratio and multinomial logistic regression tests are applied to the results. The analysis corroborates the existence of significant differences in each of the strata and verifies that most of the OPACs surveyed offer only minimal functionality.

  18. A unified access method for Web services in IoT based on CoAP

    Institute of Scientific and Technical Information of China (English)

    黄忠; 葛连升

    2014-01-01

    A CoAP-based unified Web access architecture is proposed, through which several different RFID networks can seamlessly join the Internet. Moreover, a new Web access approach is presented, which binds SOAP to the Constrained Application Protocol (CoAP). Experimental results show that the SOAP/CoAP binding is an effective unified Web access approach for RFID networks, with much lower network overhead than the traditional SOAP/HTTP binding approach.
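
    The binding idea can be pictured as carrying a SOAP envelope in the payload of a CoAP POST. The sketch below uses the aiocoap client library; the endpoint URI and envelope are illustrative, not taken from the paper.

      import asyncio
      from aiocoap import Context, Message, POST

      SOAP_ENVELOPE = b"""<?xml version="1.0"?>
      <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
        <soap:Body><readTag xmlns="urn:example:rfid"/></soap:Body>
      </soap:Envelope>"""

      async def main():
          ctx = await Context.create_client_context()
          request = Message(code=POST, uri="coap://gateway.example/rfid/soap",
                            payload=SOAP_ENVELOPE)
          response = await ctx.request(request).response
          print(response.code, response.payload.decode())

      asyncio.run(main())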

  19. MO-E-18C-01: Open Access Web-Based Peer-To-Peer Training and Education in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Pawlicki, T [UC San Diego Medical Center, La Jolla, CA (United States); Brown, D; Dunscombe, P [Tom Baker Cancer Centre, Calgary, AB (Canada); Mutic, S [Washington University School of Medicine, Saint Louis, MO (United States)

    2014-06-15

    Purpose: Current training and education delivery models have limitations which result in gaps in clinical proficiency with equipment, procedures, and techniques. Educational and training opportunities offered by vendors and professional societies are by their nature not available at point of need or for the life of clinical systems. The objective of this work is to leverage modern communications technology to provide peer-to-peer training and education for radiotherapy professionals, in the clinic and on demand, as they undertake their clinical duties. Methods: We have developed a free of charge web site (https://i.treatsafely.org) using the Google App Engine and datastore (NDB, GQL), Python with AJAX-RPC, and Javascript. The site is a radiotherapy-specific hosting service to which user-created videos illustrating clinical or physics processes and other relevant educational material can be uploaded. Efficient navigation to the material of interest is provided through several RT-specific search tools, and videos can be scored by users, thus providing comprehensive peer review of the site content. The site also supports multilingual narration/translation of videos, a quiz function for competence assessment and a library function allowing groups or institutions to define their standard operating procedures based on the video content. Results: The website went live in August 2013 and currently has over 680 registered users from 55 countries; 27.2% from the United States, 9.8% from India, 8.3% from the United Kingdom, 7.3% from Brazil, and 47.5% from other countries. The users include physicists (57.4%), Oncologists (12.5%), therapists (8.2%) and dosimetrists (4.8%). There are 75 videos to date including English, Portuguese, Mandarin, and Thai. Conclusion: Based on the initial acceptance of the site, we conclude that this open access web-based peer-to-peer tool is fulfilling an important need in radiotherapy training and education. Site functionality should expand in

  20. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

    Full Text Available Abstract Background Estimation of the reliability of specific real value predictions is nontrivial, and its efficacy is often questionable. It is important to know if you can trust a given prediction, and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability for each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0

  1. Improving Flexibility and Accessibility of Higher Education with Web 2.0 Technologies: Needs Analysis of Public Health Education Programs in Bulgaria

    Directory of Open Access Journals (Sweden)

    I. Sarieva

    2011-12-01

    Full Text Available The case study presented in this paper aims to address issues related to the use of Web 2.0 technology in public health education at a particular college in Bulgaria, with a view to providing flexible and accessible education consistent with current trends in public health practice. The outcomes of the case study suggest that systematic steps are needed in order to ensure the effective inclusion of technology in the learning process; these steps include systematic studies of the attrition rate and the reasons for student drop-out, training of administrators and faculty members in the effective incorporation of Web 2.0 technologies, the introduction and promotion of Medicine 2.0 practices, and planning the design and development of Web 2.0 learning applications and environments in Bulgarian, which is the language of instruction.

  2. Comparison of trial participants and open access users of a web-based physical activity intervention regarding adherence, attrition, and repeated participation.

    Science.gov (United States)

    Wanner, Miriam; Martin-Diener, Eva; Bauer, Georg; Braun-Fahrländer, Charlotte; Martin, Brian W

    2010-02-10

    Web-based interventions are popular for promoting healthy lifestyles such as physical activity. However, little is known about user characteristics, adherence, attrition, and predictors of repeated participation on open access physical activity websites. The focus of this study was Active-online, a Web-based individually tailored physical activity intervention. The aims were (1) to assess and compare user characteristics and adherence to the website (a) in the open access context over time from 2003 to 2009, and (b) between trial participants and open access users; and (2) to analyze attrition and predictors of repeated use among participants in a randomized controlled trial compared with registered open access users. Data routinely recorded in the Active-online user database were used. Adherence was defined as: the number of pages viewed, the proportion of visits during which a tailored module was begun, the proportion of visits during which tailored feedback was received, and the time spent in the tailored modules. Adherence was analyzed according to six one-year periods (2003-2009) and according to the context (trial or open access) based on first visits and longest visits. Attrition and predictors of repeated participation were compared between trial participants and open access users. The number of recorded visits per year on Active-online decreased from 42,626 in 2003-2004 to 8343 in 2008-2009 (each of six one-year time periods ran from April 23 to April 22 of the following year). The mean age of users was between 38.4 and 43.1 years in all time periods and both contexts. The proportion of women increased from 49.5% in 2003-2004 to 61.3% in 2008-2009 (Popen access users. For open access users, adherence was similar during the first and the longest visits; for trial participants, adherence was lower during the first visits and higher during the longest visits. Of registered open access users and trial participants, 25.8% and 67.3% respectively visited Active

  3. Web Access Control in a Petrochemical Information Service System

    Institute of Scientific and Technical Information of China (English)

    贾红阳; 郭力; 李晓霞; 杨章远; 姜林; 陈晓青

    2001-01-01

    Web access control is analyzed and applied to an information service system. First, the necessity of access control is discussed. Second, several implementation approaches are introduced: Web servers provide access control functions of their own; authorization-checking code can be embedded in ASP/PHP pages; and CGI/ISAPI applications may use either or both of these methods. Finally, a graphical permission management system was designed and implemented on the Apache server and has been applied in the Internet Petrochemical Information Service System. The software provides functions such as adding, deleting and editing users and groups, granting or revoking access rights for users and groups, and allowing or denying access to the system from specific IP addresses; it can also be conveniently ported to other, similar information systems.

  4. LigoDV-web: Providing easy, secure and universal access to a large distributed scientific data store for the LIGO scientific collaboration

    Science.gov (United States)

    Areeda, J. S.; Smith, J. R.; Lundgren, A. P.; Maros, E.; Macleod, D. M.; Zweizig, J.

    2017-01-01

    Gravitational-wave observatories around the world, including the Laser Interferometer Gravitational-Wave Observatory (LIGO), record a large volume of gravitational-wave output data and auxiliary data about the instruments and their environments. These data are stored at the observatory sites and distributed to computing clusters for data analysis. LigoDV-web is a web-based data viewer that provides access to data recorded at the LIGO Hanford, LIGO Livingston and GEO600 observatories, and the 40 m prototype interferometer at Caltech. The challenge addressed by this project is to provide meaningful visualizations of small data sets to anyone in the collaboration in a fast, secure and reliable manner with minimal software, hardware and training required of the end users. LigoDV-web is implemented as a Java Enterprise Application, with Shibboleth Single Sign On for authentication and authorization, and a proprietary network protocol used for data access on the back end. Collaboration members with proper credentials can request data be displayed in any of several general formats from any Internet appliance that supports a modern browser with Javascript and minimal HTML5 support, including personal computers, smartphones, and tablets. Since its inception in 2012, 634 unique users have visited the LigoDV-web website in a total of 33,861 sessions and generated a total of 139,875 plots. This infrastructure has been helpful in many analyses within the collaboration including follow-up of the data surrounding the first gravitational-wave events observed by LIGO in 2015.

  5. Evaluation of a web portal for improving public access to evidence-based health information and health literacy skills: a pragmatic trial.

    Directory of Open Access Journals (Sweden)

    Astrid Austvoll-Dahlgren

    Full Text Available BACKGROUND: Using the conceptual framework of shared decision-making and evidence-based practice, a web portal was developed to serve as a generic (non-disease-specific) tailored intervention to improve the lay public's health literacy skills. OBJECTIVE: To evaluate the effects of the web portal compared to no intervention in a real-life setting. METHODS: A pragmatic randomised controlled parallel trial using simple randomisation of 96 parents who had children aged <4 years. Parents were allocated to receive either access to the portal or no intervention, and assigned three tasks to perform over a three-week period. These included a searching task, a critical appraisal task, and reporting on perceptions about participation. Data were collected from March through June 2011. RESULTS: Use of the web portal was found to improve attitudes towards searching for health information. This variable was identified as the most important predictor of intention to search in both samples. Participants considered the web portal to have good usability, usefulness, and credibility. The intervention group showed slight increases in the use of evidence-based information, critical appraisal skills, and participation compared to the group receiving no intervention, but these differences were not statistically significant. CONCLUSION: Despite the fact that the study was underpowered, we found that the web portal may have a positive effect on attitudes towards searching for health information. Furthermore, participants considered the web portal to be a relevant tool. It is important to continue experimenting with web-based resources in order to increase user participation in health care decision-making. TRIAL REGISTRATION: ClinicalTrials.gov NCT01266798.

  6. Analysis and prediction of agricultural pest dynamics with Tiko'n, a generic tool to develop agroecological food web models

    Science.gov (United States)

    Malard, J. J.; Rojas, M.; Adamowski, J. F.; Anandaraja, N.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    While several well-validated crop growth models are currently widely used, very few crop pest models of the same caliber have been developed or applied, and pest models that take trophic interactions into account are even rarer. This may be due to several factors, including 1) the difficulty of representing complex agroecological food webs in a quantifiable model, and 2) the general belief that pesticides effectively remove insect pests from immediate concern. However, pests currently claim a substantial amount of harvests every year (and account for additional control costs), and the impact of insects and of their trophic interactions on agricultural crops cannot be ignored, especially in the context of changing climates and increasing pressures on crops across the globe. Unfortunately, most integrated pest management frameworks rely on very simple models (if at all), and most examples of successful agroecological management remain more anecdotal than scientifically replicable. In light of this, there is a need for validated and robust agroecological food web models that allow users to predict the response of these webs to changes in management, crops or climate, both in order to predict future pest problems under a changing climate as well as to develop effective integrated management plans. Here we present Tiko'n, a Python-based software whose API allows users to rapidly build and validate trophic web agroecological models that predict pest dynamics in the field. The programme uses a Bayesian inference approach to calibrate the models according to field data, allowing for the reuse of literature data from various sources and reducing the need for extensive field data collection. We apply the model to the coconut black-headed caterpillar (Opisina arenosella) and associated parasitoid data from Sri Lanka, showing how the modeling framework can be used to rapidly develop, calibrate and validate models that elucidate how the internal structures of food webs

  7. 76 FR 59307 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2011-09-26

    ... Room Web site, http://www.regulationroom.org , to learn about the rule and the rulemaking process, to... carriers enter into written agreements spelling out the respective responsibilities of the parties for... W3C adopted WCAG 2.0, incorporating developments in Web technology and lessons learned since WCAG...

  8. Web Service QoS Prediction Method Based on Time Series Analysis

    Institute of Scientific and Technical Information of China (English)

    华哲邦; 李萌; 赵俊峰; 谢冰

    2013-01-01

    The QoS (quality of service) of a Web service fluctuates with variations in the Internet environment and server load. A key question in the service computing area is therefore how to help users select Web services that will meet their quality-of-service needs over the coming period. This paper presents a Web service QoS prediction method based on time series analysis to address this question, and implements a corresponding tool that predicts Web service QoS automatically. Given a service's historical QoS data, the tool effectively predicts its QoS for the near future. Experiments based on the historical data of 17,832 services verify the effectiveness of the method.
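
    Short-horizon forecasting from a QoS history can be sketched with an off-the-shelf time series model; the ARIMA order and the synthetic response-time series below are assumptions, not the paper's setup.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      # Synthetic response-time history (ms): slow drift plus noise.
      history = 200 + np.cumsum(rng.normal(0, 1, 200)) + rng.normal(0, 5, 200)

      model = ARIMA(history, order=(2, 1, 1)).fit()
      forecast = model.forecast(steps=10)   # next 10 observation windows
      print(forecast.round(1))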

  9. DEPTH: a web server to compute depth and predict small-molecule binding cavities in proteins.

    Science.gov (United States)

    Tan, Kuan Pern; Varadarajan, Raghavan; Madhusudhan, M S

    2011-07-01

    Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood to form part of a small molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. The users have the option of tuning the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analysis based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small molecule docking exercises, in the context of protein functional annotation and drug discovery.

  10. explICU: A web-based visualization and predictive modeling toolkit for mortality in intensive care patients.

    Science.gov (United States)

    Chen, Robert; Kumar, Vikas; Fitch, Natalie; Jagadish, Jitesh; Lifan Zhang; Dunn, William; Duen Horng Chau

    2015-01-01

    Preventing mortality in intensive care units (ICUs) has been a top priority in American hospitals. Predictive modeling has been shown to be effective in prediction of mortality based upon data from patients' past medical histories from electronic health records (EHRs). Furthermore, visualization of timeline events is imperative in the ICU setting in order to quickly identify trends in patient histories that may lead to mortality. With the increasing adoption of EHRs, a wealth of medical data is becoming increasingly available for secondary uses such as data exploration and predictive modeling. While data exploration and predictive modeling are useful for finding risk factors in ICU patients, the process is time consuming and requires a high level of computer programming ability. We propose explICU, a web service that hosts EHR data, displays timelines of patient events based upon user-specified preferences, performs predictive modeling in the back end, and displays results to the user via intuitive, interactive visualizations.
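
    The predictive-modeling step such a toolkit wraps can be illustrated with a logistic regression on EHR-derived features; the feature matrix below is random stand-in data, not ICU records.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)
      X = rng.normal(size=(500, 12))   # 12 dummy features (labs, vitals, ...)
      y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 500) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))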

  11. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of the lack of sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put in a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and to select (i) the algorithms to compare against from among the 15 existing algorithms, (ii) the performance indices from among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Allowing users to select eight existing performance indices and 15

  12. Research on Communication of a Lubrication Station Control System Based on WebAccess

    Institute of Scientific and Technical Information of China (English)

    巴鹏; 张雨; 焦圳

    2015-01-01

    By establishing communication between the site equipment and the WebAccess configuration software on an industrial PC, monitoring of oil filling, operation control and data processing are achieved for the lubrication station control system. This article uses VB to build the communication link serving as the data exchange program and, combined with the WebAccess configuration software, constructs a monitoring and management system for automated oil filling at the lubrication station. This effectively solves problems such as the configuration software's lack of drivers, untimely data transmission in the monitoring system, and inaccurate data records. Experimental results show that the system is easy to operate, transmits data accurately, runs stably and is easy to maintain; it represents the future development trend for lubrication stations.

  13. Brokering access to massive climate and landscape data via web services: observations and lessons learned after five years of the Geo Data Portal project.

    Science.gov (United States)

    Blodgett, D. L.; Walker, J. I.; Read, J. S.

    2015-12-01

    The USGS Geo Data Portal (GDP) project started in 2010 with the goal of providing climate and landscape model output data to hydrology and ecology modelers in model-ready form. The system takes a user-specified collection of polygons and a gridded time series dataset and returns a time series of spatial statistics for each polygon. The GDP is designed for scalability and is generalized such that any data, hosted anywhere on the Internet adhering to the NetCDF-CF conventions, can be processed. Five years into the project, over 600 unique users from more than 200 organizations have used the system's web user interface and some datasets have been accessed thousands of times. In addition to the web interface, python and R client libraries have seen steady usage growth and several third-party web applications have been developed to use the GDP for easy data access. Here, we will present lessons learned and improvements made after five years of operation of the system's user interfaces, processing server, and data holdings. A vision for the future availability and processing of massive climate and landscape data will be outlined.

  14. Dynamic Science Data Services for Display, Analysis and Interaction in Widely-Accessible, Web-Based Geospatial Platforms Project

    Data.gov (United States)

    National Aeronautics and Space Administration — TerraMetrics, Inc., proposes an SBIR Phase I R/R&D program to investigate and develop a key web services architecture that provides data processing, storage and...

  15. The ATS Web Page Provides "Tool Boxes" for: Access Opportunities, Performance, Interfaces, Volume, Environments, "Wish List" Entry and Educational Outreach

    Science.gov (United States)

    1999-01-01

    This viewgraph presentation gives an overview of the Access to Space website, including information on the 'tool boxes' available on the website for access opportunities, performance, interfaces, volume, environments, 'wish list' entry, and educational outreach.

  16. The EarthScope Array Network Facility: application-driven low-latency web-based tools for accessing high-resolution multi-channel waveform data

    Science.gov (United States)

    Newman, R. L.; Lindquist, K. G.; Clemesha, A.; Vernon, F. L.

    2008-12-01

    Since April 2004 the EarthScope USArray seismic network has grown to over 400 broadband stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. Providing secure, yet open, access to real-time and archived data for a broad range of audiences is best served by a series of platform-agnostic low-latency web-based applications. We present a framework of tools that interface between the world wide web and the Boulder Real Time Technologies Antelope Environmental Monitoring System data acquisition and archival software. These tools provide audiences ranging from network operators and geoscience researchers, to funding agencies and the general public, with comprehensive information about the experiment. This ranges from network-wide to station-specific metadata, state-of-health metrics, event detection rates, archival data and dynamic report generation over a station's two-year life span. Leveraging open-source website development frameworks for both the server side (Perl, Python and PHP) and the client side (Flickr, Google Maps/Earth and jQuery) facilitates the development of a robust, extensible architecture that can be tailored on a per-user basis, with rapid prototyping and development that adheres to web standards.

  17. Interoperative fundus image and report sharing in compliance with integrating the healthcare enterprise conformance and web access to digital imaging and communication in medicine persistent object protocol

    Directory of Open Access Journals (Sweden)

    Hui-Qun Wu

    2013-12-01

    Full Text Available AIM: To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communication in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. METHODS: Firstly, a telemedicine-based eye care workflow was established based on the integrating the healthcare enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent object (WADO) protocol, containing three tiers. RESULTS: From any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report saved as a multipurpose internet mail extensions (MIME) type of pdf/html, with a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Functions provided by the open-source Oviyam viewer could be used to query, zoom, move, measure and view DICOM fundus images. CONCLUSION: Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
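
    Retrieval through the WADO-URI syntax the system conforms to reduces to an HTTP GET with the query parameters defined by the standard (requestType, studyUID, seriesUID, objectUID, contentType). The sketch below uses a placeholder host and placeholder UIDs.

      import requests

      params = {
          "requestType": "WADO",
          "studyUID":  "1.2.840.113619.2.1.1",   # placeholder UIDs
          "seriesUID": "1.2.840.113619.2.1.2",
          "objectUID": "1.2.840.113619.2.1.3",
          "contentType": "application/dicom",
      }
      resp = requests.get("http://pacs.example.org/wado", params=params, timeout=30)
      resp.raise_for_status()
      with open("fundus.dcm", "wb") as f:
          f.write(resp.content)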

  18. Communication of uncertainty in hydrological predictions: a user-driven example web service for Europe

    Science.gov (United States)

    Fry, Matt; Smith, Katie; Sheffield, Justin; Watts, Glenn; Wood, Eric; Cooper, Jon; Prudhomme, Christel; Rees, Gwyn

    2017-04-01

    Water is fundamental to society as it impacts on all facets of life, the economy and the environment. But whilst it creates opportunities for growth and life, it can also cause serious damage to society and infrastructure through extreme hydro-meteorological events such as floods or droughts. Anticipation of future water availability and extreme event risks would both help optimise growth and limit damage through better preparedness and planning, hence providing huge societal benefits. Recent scientific research advances now make it possible to provide hydrological outlooks at monthly to seasonal lead times, and future projections up to the end of the century accounting for climatic changes. However, high uncertainty remains in the predictions, which varies depending on location, time of the year, prediction range and hydrological variable. It is essential that this uncertainty is fully understood by decision makers so they can account for it in their planning. Hence, the challenge is to find ways to communicate such uncertainty to a range of stakeholders with different technical backgrounds and environmental science knowledge. The project EDgE (End-to-end Demonstrator for improved decision making in the water sector for Europe), funded by the Copernicus programme (C3S), is a proof-of-concept project that develops a unique service to support decision making for the water sector at monthly to seasonal and multi-decadal lead times. It is a mutual effort of co-production between hydrologists and environmental modellers, computer scientists and stakeholders representative of key decision makers in Europe for the water sector. This talk will present the iterative co-production process of a web service that serves the needs of the user community. Through a series of Focus Group meetings in Spain, Norway and the UK, options for visualising the hydrological predictions and associated uncertainties are presented and discussed first as mock-up dashboards, off-line tools

  19. Age-related differences in the accuracy of web query-based predictions of influenza-like illness.

    Directory of Open Access Journals (Sweden)

    Alexander Domnich

    Full Text Available Web queries are now widely used for modeling, nowcasting and forecasting influenza-like illness (ILI). However, given that ILI attack rates vary significantly across ages, in terms of both magnitude and timing, little is known about whether the association between ILI morbidity and ILI-related queries is comparable across different age-groups. The present study aimed to investigate features of the association between ILI morbidity and ILI-related query volume from the perspective of age. Since Google Flu Trends is unavailable in Italy, Google Trends was used to identify entry terms that correlated highly with official ILI surveillance data. All-age and age-class-specific modeling was performed by means of linear models with generalized least-squares estimation. Hold-out validation was used to quantify prediction accuracy. For purposes of comparison, predictions generated by exponential smoothing were computed. Five search terms showed high correlation coefficients of > .6. In comparison with exponential smoothing, the all-age query-based model correctly predicted the peak time and yielded a higher correlation coefficient with observed ILI morbidity (.978 vs. .929). However, query-based prediction of ILI morbidity was associated with a greater error. Age-class-specific query-based models varied significantly in terms of prediction accuracy. In the 0-4 and 25-44-year age-groups, these did well and outperformed exponential smoothing predictions; in the 15-24 and ≥ 65-year age-classes, however, the query-based models were inaccurate and highly overestimated peak height. In all but one age-class, peak timing predicted by the query-based models coincided with observed timing. The accuracy of web query-based models in predicting ILI morbidity rates can differ among ages, and greater age-specific detail may be useful in flu query-based studies in order to account for age-specific features of the epidemiology of ILI.
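
    The age-class modeling described (linear models with generalized least-squares estimation) can be sketched as a regression of ILI incidence on query volume with autocorrelated errors; both series below are synthetic, and the AR(1) error structure is an assumption.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      queries = rng.gamma(2.0, 10.0, 52)               # weekly query volume (synthetic)
      ili = 3 + 0.8 * queries + rng.normal(0, 5, 52)   # synthetic ILI incidence

      X = sm.add_constant(queries)
      model = sm.GLSAR(ili, X, rho=1).iterative_fit()  # GLS with AR(1) errors
      print(model.params.round(2), round(model.rsquared, 3))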

  20. When should we expect early bursts of trait evolution in comparative data? Predictions from an evolutionary food web model.

    Science.gov (United States)

    Ingram, T; Harmon, L J; Shurin, J B

    2012-09-01

    Conceptual models of adaptive radiation predict that competitive interactions among species will result in an early burst of speciation and trait evolution followed by a slowdown in diversification rates. Empirical studies often show early accumulation of lineages in phylogenetic trees, but usually fail to detect early bursts of phenotypic evolution. We use an evolutionary simulation model to assemble food webs through adaptive radiation, and examine patterns in the resulting phylogenetic trees and species' traits (body size and trophic position). We find that when foraging trade-offs result in food webs where all species occupy integer trophic levels, lineage diversity and trait disparity are concentrated early in the tree, consistent with the early burst model. In contrast, in food webs in which many omnivorous species feed at multiple trophic levels, high levels of turnover of species' identities and traits tend to eliminate the early burst signal. These results suggest testable predictions about how the niche structure of ecological communities may be reflected by macroevolutionary patterns.

  1. Prediction of the behavior of reinforced concrete deep beams with web openings using the finite element method

    Directory of Open Access Journals (Sweden)

    Ashraf Ragab Mohamed

    2014-06-01

    Full Text Available The exact analysis of reinforced concrete deep beams is a complex problem, and the presence of web openings aggravates the situation. However, no code provision exists for the analysis of deep beams with web openings. The code-implemented strut-and-tie models are debatable, and no unique solution using these models is available. In this study, the finite element method is utilized to study the behavior of reinforced concrete deep beams with and without web openings. Furthermore, the effect of the reinforcement distribution on the beam's overall capacity has been studied and compared to the Egyptian code guidelines. The damaged plasticity model has been used for the analysis. Models of simply supported deep beams under 3- and 4-point bending and continuous deep beams with and without web openings have been analyzed. Model verification has shown good agreement with experimental work from the literature. Results of the parametric analysis have shown that web openings crossing the expected compression struts should be avoided, and that the depth of the opening should not exceed 20% of the beam's overall depth. The reinforcement distribution should be in the range of 0.1–0.2 of the beam depth for simply supported deep beams.

  2. PDD CRAWLER: A FOCUSED WEB CRAWLER USING LINK AND CONTENT ANALYSIS FOR RELEVANCE PREDICTION

    Directory of Open Access Journals (Sweden)

    Prashant Dahiwale

    2014-11-01

    Full Text Available The majority of computer and mobile phone users rely on the web for their searching activity. Web search engines perform the searching; the results they return are gathered for them by a software module known as a web crawler. The size of the web is increasing round the clock, and the principal problem is searching this huge database for specific information. Determining whether a web page is relevant to a search topic is a dilemma. This paper proposes a crawler called "PDD Crawler" which follows both a link-based and a content-based approach. This crawler follows a completely new crawling strategy to compute the relevance of a page. It analyses the content of the page based on the information contained in various tags within the HTML source code and then computes the total weight of the page. The page with the highest weight thus has the maximum content and the highest relevance.
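
    As a rough illustration of the tag-weighted relevance idea described above, the sketch below scores a page by counting query-term hits inside different HTML tags and multiplying by per-tag weights. The specific weights and the use of BeautifulSoup are assumptions for illustration; the paper's actual weighting scheme is not reproduced here.

    ```python
    # Illustrative tag-weighted page relevance score (the weights are
    # assumptions, not the weights used by the PDD Crawler paper).
    from bs4 import BeautifulSoup

    TAG_WEIGHTS = {"title": 5.0, "h1": 3.0, "h2": 2.0, "a": 1.5, "p": 1.0}

    def page_weight(html: str, query_terms: list[str]) -> float:
        soup = BeautifulSoup(html, "html.parser")
        total = 0.0
        for tag, weight in TAG_WEIGHTS.items():
            for element in soup.find_all(tag):
                text = element.get_text(" ", strip=True).lower()
                hits = sum(text.count(term.lower()) for term in query_terms)
                total += weight * hits
        return total

    html = ("<html><head><title>Web crawler design</title></head>"
            "<body><p>A focused crawler predicts relevance.</p></body></html>")
    print(page_weight(html, ["crawler", "relevance"]))
    ```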

  3. Medical high-resolution image sharing and electronic whiteboard system: A pure-web-based system for accessing and discussing lossless original images in telemedicine.

    Science.gov (United States)

    Qiao, Liang; Li, Ying; Chen, Xin; Yang, Sheng; Gao, Peng; Liu, Hongjun; Feng, Zhengquan; Nian, Yongjian; Qiu, Mingguo

    2015-09-01

    There are various medical image sharing and electronic whiteboard systems available for diagnosis and discussion purposes. However, most of these systems require clients to install special software tools or web plug-ins to support whiteboard discussion, special medical image formats, and customized decoding algorithms for the transmission of HRIs (high-resolution images). This limits the accessibility of the software on different devices and operating systems. In this paper, we propose a solution based on pure web pages for lossless sharing and e-whiteboard discussion of medical HRIs, and have set up a medical HRI sharing and e-whiteboard system with a four-layered design: (1) HRI access layer: we improved a tile-pyramid model, named unbalanced ratio pyramid structure (URPS), to rapidly share lossless HRIs and to adapt to the reading habits of users; (2) format conversion layer: we designed a format conversion engine (FCE) on the server side to convert and cache, in real time, the DICOM tiles that clients request with window-level parameters, maintaining browser compatibility and server-client response efficiency; (3) business logic layer: we built an XML behavior-relationship storage structure to store and share users' behavior, keeping co-browsing and discussion between clients in real time; (4) web-user-interface layer: AJAX technology and the Raphael toolkit were used to combine HTML and JavaScript to build a client RIA (rich Internet application), giving clients desktop-like interaction on any pure web page. This system can be used to quickly browse lossless HRIs and supports smooth discussion and co-browsing on any web browser in a diversified network environment. The proposed methods provide a way to share HRIs safely, and may be used in the fields of regional health, telemedicine and remote education at low cost.

  4. Evaluación de la accesibilidad de páginas web de universidades españolas y extranjeras incluidas en rankings universitarios internacionales/Accessibility assessment of web pages of Spanish and foreign universities included in international rankings

    National Research Council Canada - National Science Library

    José R Hilera; Luis Fernández; Esther Suárez; Elena T Vilar

    2013-01-01

      This article describes a study conducted to evaluate the accessibility of the contents of the Web sites of some of the most important universities in Spain and the rest of the world, according with...

  5. Data Mining in Web Usage Pattern Research (Web Access Pattern Data-Mining)

    Institute of Scientific and Technical Information of China (English)

    张娥; 冯秋红; 宣慧玉; 田增瑞

    2001-01-01

    Web usage pattern mining is an advanced means of exploiting web usage data: a deep analysis that can uncover valid, novel, potentially useful and ultimately understandable knowledge to support management decisions. Companies are interested in how users use their Web sites and what they care about most from day to day, as this is fundamental to shaping company strategy. This paper surveys the content, current state and future research directions of data-mining techniques for web usage patterns.

  6. Enabling the dynamic coupling between sensor web and Earth system models - The Self-Adaptive Earth Predictive Systems (SEPS) framework

    Science.gov (United States)

    di, L.; Yu, G.; Chen, N.

    2007-12-01

    The self-adaptation concept is the central piece of control theory, widely and successfully used in engineering and military systems. Such a system contains a predictor and a measurer. The predictor takes an initial condition and makes an initial prediction, and the measurer then measures the state of a real-world phenomenon. A built-in feedback mechanism automatically feeds the measurement back to the predictor. The predictor compares the measurement against the prediction to calculate the prediction error and adjusts its internal state based on the error. Thus, the predictor learns from the error and makes a more accurate prediction in the next step. By adopting the self-adaptation concept, we propose the Self-adaptive Earth Predictive System (SEPS) concept for enabling the dynamic coupling between the sensor web and Earth system models. The concept treats Earth System Models (ESM) and Earth Observations (EO) as integral components of the SEPS, coupled by the SEPS framework. EO measures the Earth system state while ESM predicts the evolution of the state. A feedback mechanism processes EO measurements and feeds them into the ESM during model runs or as initial conditions. A feed-forward mechanism analyzes the ESM predictions against science goals for scheduling optimized/targeted observations. The SEPS framework automates the feedback and feed-forward mechanisms (the FF-loop). Based on open consensus-based standards, a general SEPS framework can be developed to support the dynamic, interoperable coupling between ESMs and EO. Such a framework can support the plug-and-play capability of both ESMs and diverse sensors and data systems as long as they support the standard interfaces. This presentation discusses the SEPS concept, the service-oriented architecture (SOA) of the SEPS framework, the choice of standards for the framework, and the implementation. The presentation also presents examples of SEPS to demonstrate dynamic, interoperable, and live coupling of
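
    The feedback mechanism described above is essentially a predict-measure-correct loop. The sketch below is a deliberately simplified, generic illustration of that control-loop idea (a scalar state with a learning-rate correction); it is not the SEPS framework's actual interface, which the abstract describes only at the architecture level, and all names are illustrative.

    ```python
    # Generic predict-measure-correct loop illustrating the self-adaptation
    # concept (not the actual SEPS API; all names here are illustrative).
    class SelfAdaptivePredictor:
        def __init__(self, initial_state: float, learning_rate: float = 0.5):
            self.state = initial_state
            self.learning_rate = learning_rate

        def predict(self) -> float:
            return self.state

        def feedback(self, measurement: float) -> float:
            """Compare measurement with prediction and adjust internal state."""
            error = measurement - self.predict()
            self.state += self.learning_rate * error
            return error

    predictor = SelfAdaptivePredictor(initial_state=0.0)
    observations = [1.0, 1.2, 1.1, 1.3]  # stand-in for Earth-observation data
    for obs in observations:
        error = predictor.feedback(obs)
        print(f"prediction error: {error:+.3f}, new state: {predictor.state:.3f}")
    ```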

  7. Accurate single-sequence prediction of solvent accessible surface area using local and global features.

    Science.gov (United States)

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-11-01

    We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the new approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead we use solely sequential window information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org.
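
    ASAquick's inputs, as described above, are single-sequence window features plus global single-residue and two-residue compositions. The sketch below builds such a feature vector for one residue position; the window size and the one-hot encoding are illustrative assumptions rather than the published network's exact configuration.

    ```python
    # Sketch of single-sequence features in the spirit of ASAquick's inputs:
    # a local one-hot window plus global one- and two-residue compositions.
    # Window size and encoding are illustrative assumptions.
    import numpy as np

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

    def residue_features(seq: str, pos: int, half_window: int = 5) -> np.ndarray:
        n = len(AMINO_ACIDS)
        # Global single-residue composition (20 values).
        mono = np.zeros(n)
        for aa in seq:
            mono[AA_INDEX[aa]] += 1
        mono /= len(seq)
        # Global two-residue (dipeptide) composition (400 values).
        di = np.zeros(n * n)
        for a, b in zip(seq, seq[1:]):
            di[AA_INDEX[a] * n + AA_INDEX[b]] += 1
        di /= max(len(seq) - 1, 1)
        # Local one-hot window around pos (out-of-range positions stay zero).
        window = np.zeros((2 * half_window + 1, n))
        for k, i in enumerate(range(pos - half_window, pos + half_window + 1)):
            if 0 <= i < len(seq):
                window[k, AA_INDEX[seq[i]]] = 1.0
        return np.concatenate([mono, di, window.ravel()])

    features = residue_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", pos=10)
    print(features.shape)  # (20 + 400 + 11*20,) = (640,)
    ```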

  9. Design of a Remote Logistics Control System Based on WebAccess

    Institute of Scientific and Technical Information of China (English)

    朱光灿; 郑萍; 邵子惠; 彭昱; 温百东

    2012-01-01

    To meet the demand for remote monitoring, this paper presents a design that uses WebAccess, a web configuration software package based entirely on the IE browser, to remotely monitor and control a laboratory logistics control system. The design builds a logistics control system with on-site controlled objects and three network layers: a control layer, a network layer, and a management and monitoring layer based on the Siemens configuration software WinCC. By making full use of WebAccess's convenient web capabilities and exchanging data via OPC with the WinCC server in the management and monitoring layer, the system achieves remote control, remote configuration, and unlimited expansion of the number of remote-access monitoring clients. Actual operation proves that the system is economical, has a clearly structured network, and provides a good platform for modern comprehensive design experiments that can stimulate students' innovative abilities.

  10. Integrated web-based viewing and secure remote access to a clinical data repository and diverse clinical systems.

    Science.gov (United States)

    Duncan, R G; Saperia, D; Dulbandzhyan, R; Shabot, M M; Polaschek, J X; Jones, D T

    2001-01-01

    The advent of World Wide Web protocols and client-server technology has made it easy to build low-cost, user-friendly, platform-independent graphical user interfaces to health information systems and to integrate the presentation of data from multiple systems. The authors describe a Web interface for a clinical data repository (CDR) that was moved from concept to production status in less than six months using a rapid prototyping approach, a multi-disciplinary development team, and off-the-shelf hardware and software. The system has since been expanded to provide an integrated display of clinical data from nearly 20 disparate information systems.

  11. Ajax and Web Services

    CERN Document Server

    Pruett, Mark

    2006-01-01

    Ajax and web services are a perfect match for developing web applications. Ajax has built-in abilities to access and manipulate XML data, the native format for almost all REST and SOAP web services. Using numerous examples, this document explores how to fit the pieces together. Examples demonstrate how to use Ajax to access publicly available web services from Yahoo! and Google. You'll also learn how to use web proxies to access data on remote servers and how to transform XML data using XSLT.

  12. Fast and Accurate Accessible Surface Area Prediction Without a Sequence Profile.

    Science.gov (United States)

    Faraggi, Eshel; Kouza, Maksim; Zhou, Yaoqi; Kloczkowski, Andrzej

    2017-01-01

    A fast accessible surface area (ASA) predictor is presented. In this new approach no residue mutation profiles generated by multiple sequence alignments are used as inputs. Instead, we use only single-sequence information and global features such as single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence alignment-based predictors and of comparable accuracy to them. Introduction of the global inputs significantly helps achieve this comparable accuracy. The predictor, termed ASAquick, is found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for ASAquick are available from Research and Information Systems at http://mamiris.com and from the Battelle Center for Mathematical Medicine at http://mathmed.org.

  13. An Adaptive Medium Access Parameter Prediction Scheme for IEEE 802.11 Real-Time Applications

    Directory of Open Access Journals (Sweden)

    Estefanía Coronado

    2017-01-01

    Full Text Available Multimedia communications have experienced unprecedented growth due mainly to the increase in content quality and the emergence of smart devices. The demand for this content is shifting towards wireless technologies. However, these transmissions are quite sensitive to network delays, so ensuring an optimal QoS level becomes of great importance. The IEEE 802.11e amendment was released to address the lack of QoS capabilities in the original IEEE 802.11 standard. Accordingly, the Enhanced Distributed Channel Access (EDCA) function was introduced, allowing traffic streams to be differentiated through a group of Medium Access Control (MAC) parameters. Although EDCA recommends a default configuration for these parameters, it has been shown not to be optimal in many scenarios. In this work a dynamic prediction scheme for these parameters is presented. This approach ensures appropriate traffic differentiation while maintaining compatibility with stations without QoS support. As the APs are the only devices that use this algorithm, no changes are required to current network cards. The results show that, compared with EDCA, the proposal improves both voice and video transmissions as well as the QoS level of the network.

  14. Integrating in Silico and in Vitro Approaches To Predict Drug Accessibility to the Central Nervous System.

    Science.gov (United States)

    Zhang, Yan-Yan; Liu, Houfu; Summerfield, Scott G; Luscombe, Christopher N; Sahi, Jasminder

    2016-05-01

    Estimation of uptake across the blood-brain barrier (BBB) is key to designing central nervous system (CNS) therapeutics. In silico approaches ranging from physicochemical rules to quantitative structure-activity relationship (QSAR) models are utilized to predict the potential for CNS penetration of new chemical entities. However, there are still gaps in our knowledge of (1) the relationship between the CNS-accessible chemical space derived from marketed human drugs and preclinical neuropharmacokinetic (neuroPK) data, (2) the interpretability of the selected physicochemical descriptors, and (3) the correlation between the in vitro human P-glycoprotein (P-gp) efflux ratio (ER) and the in vivo rodent unbound brain-to-blood ratio (Kp,uu), as these are assays routinely used to predict clinical CNS exposure during drug discovery. To close these gaps, we explored the CNS drug-like property boundaries of 920 marketed oral drugs (315 CNS and 605 non-CNS) and 846 compounds (54 CNS drugs and 792 proprietary GlaxoSmithKline compounds) with available rat Kp,uu data. The exact permeability coefficient (Pexact) and P-gp ER were determined for 176 compounds from the rat Kp,uu data set. Receiver operating characteristic curves were used to evaluate the predictive power of human P-gp ER for rat Kp,uu. Our data demonstrate that simple physicochemical rules (such as a most acidic pKa ≥ 9.5 together with a TPSA limit) are informative, and machine-learning methods were investigated using multiple sets of in silico molecular descriptors. We present a random forest model with excellent predictive power (∼0.75 overall accuracy) using the rat neuroPK data set. We also observed good concordance between the structural interpretation results and the physicochemical descriptor importance from the Kp,uu classification QSAR model. In summary, we propose a novel, hybrid in silico/in vitro approach and an in silico screening model for the effective development of chemical series with the potential to achieve optimal CNS exposure.
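
    The abstract reports a random forest classifier trained on in silico descriptors to predict rat Kp,uu classes. A generic sketch of that modeling setup with scikit-learn is shown below; the descriptor choices, the synthetic labels and all thresholds are illustrative assumptions, not values from the paper.

    ```python
    # Sketch: random-forest classification of CNS accessibility from in silico
    # descriptors. Descriptors, labels and thresholds are all assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, roc_auc_score

    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 12, n),    # hypothetical descriptor: most acidic pKa
        rng.uniform(0, 200, n),   # hypothetical descriptor: TPSA
        rng.uniform(-2, 6, n),    # hypothetical descriptor: logP
    ])
    # Synthetic label standing in for a binarized rat Kp,uu class.
    y = ((X[:, 1] < 90) & (X[:, 2] > 1)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    print("ROC AUC :", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    print("descriptor importances:", clf.feature_importances_)
    ```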

  15. Eduquito: ferramentas de autoria e de colaboração acessíveis na perspectiva da web 2.0 Eduquito: accessible authorship and collaboration tools from the perspective of web 2.0

    Directory of Open Access Journals (Sweden)

    Lucila Maria Costi Santarosa

    2012-09-01

    Full Text Available Eduquito, a digital/virtual learning environment developed by the team of researchers at NIEE/UFRGS, seeks to support processes of socio-digital inclusion, having been designed in line with the accessibility and universal design principles standardized by the WAI/W3C. The development of this accessible digital/virtual platform and the results of its use by people with disabilities are discussed, revealing an ongoing process of verification and validation of the features and functionality of the Eduquito environment across human diversity. We present and discuss two tools for individual and collective authorship - the Multimedia Workshop and Bloguito, an accessible blog - new features of the Eduquito environment that emerge from the applicability of the concept of pervasiveness, seeking to establish literacy spaces and foster practices of technological mediation for socio-digital inclusion in the Web 2.0 context.

  16. Accessibility, usability, and usefulness of a Web-based clinical decision support tool to enhance provider-patient communication around Self-management TO Prevent (STOP) Stroke.

    Science.gov (United States)

    Anderson, Jane A; Godwin, Kyler M; Saleem, Jason J; Russell, Scott; Robinson, Joshua J; Kimmel, Barbara

    2014-12-01

    This article reports redesign strategies identified to create a Web-based user-interface for the Self-management TO Prevent (STOP) Stroke Tool. Members of a Stroke Quality Improvement Network (N = 12) viewed a visualization video of a proposed prototype and provided feedback on implementation barriers/facilitators. Stroke-care providers (N = 10) tested the Web-based prototype in think-aloud sessions of simulated clinic visits. Participants' dialogues were coded into themes. Access to comprehensive information and the automated features/systematized processes were the primary accessibility and usability facilitator themes. The need for training, time to complete the tool, and computer-centric care were identified as possible usability barriers. Patient accountability, reminders for best practice, goal-focused care, and communication/counseling themes indicate that the STOP Stroke Tool supports the paradigm of patient-centered care. The STOP Stroke Tool was found to prompt clinicians on secondary stroke-prevention clinical-practice guidelines, facilitate comprehensive documentation of evidence-based care, and support clinicians in providing patient-centered care through the shared decision-making process that occurred while using the action-planning/goal-setting feature of the tool.

  17. Web-Based Predictive Analytics to Improve Patient Flow in the Emergency Department

    Science.gov (United States)

    Buckler, David L.

    2012-01-01

    The Emergency Department (ED) simulation project was established to demonstrate how requirements-driven analysis and process simulation can help improve the quality of patient care for the Veterans Health Administration's (VHA) Veterans Affairs Medical Centers (VAMC). This project developed a web-based simulation prototype of patient flow in EDs, validated the performance of the simulation against operational data, and documented IT requirements for the ED simulation.

  18. ON WEB SERVICES ACCESS CONTROL BASED ON QUANTIFIED-ROLE

    Institute of Scientific and Technical Information of China (English)

    吴春雷; 崔学荣

    2012-01-01

    The concepts of permission value and quantified-role are introduced to build a fine-grained Web services access control model. By defining Web service resources, service attributes and access-mode sets, the definition of the permission set is expanded. The definition and distribution of permission values are studied, and the validation and representation of quantified-roles are analysed. The concept of the 'behaviour value' of a Web services user is proposed, and a correlation between the behaviour value and a user's role quantity is established. Based on user behaviour and context, behaviour values are calculated dynamically and user permissions are adjusted accordingly.
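
    As a rough illustration of the permission-value idea in this model, the sketch below grants an operation only when a user's quantified role carries an effective permission value at least as large as the value the operation demands, and lowers the user's behaviour value (and hence the effective permission) after suspicious actions. All class and field names are invented for illustration; the published model is considerably richer.

    ```python
    # Toy illustration of quantified roles: authorization succeeds only when
    # the user's effective permission value meets the required value.
    # All names are illustrative; the published model is more elaborate.
    from dataclasses import dataclass

    @dataclass
    class QuantifiedRole:
        name: str
        permission_value: float  # fraction of the role's permission granted

    @dataclass
    class User:
        name: str
        role: QuantifiedRole
        behaviour_value: float = 1.0  # lowered when behaviour looks suspicious

        def effective_value(self) -> float:
            return self.role.permission_value * self.behaviour_value

    def authorize(user: User, required_value: float) -> bool:
        return user.effective_value() >= required_value

    alice = User("alice", QuantifiedRole("invoice-reader", permission_value=0.8))
    print(authorize(alice, required_value=0.5))   # True: 0.8 >= 0.5
    alice.behaviour_value = 0.4                   # context flags risky behaviour
    print(authorize(alice, required_value=0.5))   # False: 0.8 * 0.4 = 0.32
    ```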

  19. Enabling Web-Based GIS Tools for Internet and Mobile Devices To Improve and Expand NASA Data Accessibility and Analysis Functionality for the Renewable Energy and Agricultural Applications

    Science.gov (United States)

    Ross, A.; Stackhouse, P. W.; Tisdale, B.; Tisdale, M.; Chandler, W.; Hoell, J. M., Jr.; Kusterer, J.

    2014-12-01

    The NASA Langley Research Center Science Directorate and Atmospheric Science Data Center have initiated a pilot program to utilize Geographic Information System (GIS) tools that enable, generate and store climatological averages, using spatial queries and calculations in a spatial database, resulting in greater accessibility of data for government agencies, industry and private-sector individuals. The major objectives of this effort include (1) processing and reformulating current data to be consistent with ESRI and OpenGIS tools; (2) developing functions to improve capability and analysis that produce "on-the-fly" data products, extending these past the single location to regional and global scales; (3) updating the current web sites to enable both web-based and mobile application displays, optimized for mobile platforms; (4) interacting with user communities in government and industry to test formats and usage; and (5) developing a series of metrics that allow monitoring of progressive performance. Significant project results will include the development of Open Geospatial Consortium (OGC) compliant web services (WMS, WCS, WFS, WPS) that serve renewable energy and agricultural application products to users of GIS software and tools. Each data product and OGC service will be registered within ECHO, the Common Metadata Repository, the Geospatial Platform, and Data.gov to ensure the data are easily discoverable and to provide data users with enhanced access to SSE data, parameters, services, and applications. This effort supports cross-agency, cross-organization interoperability of SSE data products and services by collaborating with DOI, NRCan, NREL, NCAR, and HOMER for requirements vetting and test-bed users before making the services available to the wider public.

  20. Multisite Web-based training in using the Braden Scale to predict pressure sore risk.

    Science.gov (United States)

    Magnan, Morris A; Maklebust, JoAnn

    2008-03-01

    To evaluate the effect of a Web-based Braden Scale training module on nurses' knowledge of pressure-ulcer risk assessment and prevention. Pre-experimental, posttest-only design. Web-based learning environment. Registered nurses (N=1391) working at 3 medical centers in the Midwest. Primary outcomes of interest were reliability and competence associated with using the Braden Scale for pressure-ulcer risk assessment. Secondary outcomes of interest focused on program evaluation, specifically nurses' perceptions of program adequacy and ease of use. After training, nurses correctly rated Braden Scale level of risk 82.6% of the time. Numeric ratings for Braden subscales were generally more reliable when case-study data indicated extreme risk levels (generally not at-risk level, high-risk level, and very high level) than when data indicated midlevels of risk (mild-risk level and moderate-risk level). Nurses' knowledge of appropriate risk-based preventive interventions was high, but correlated poorly with the ability to correctly assign numeric ratings to Braden subscales. Web-based training alone may not ensure reliable, competent estimates of pressure-ulcer risk for patients at all risk levels. Other strategies, such as clinical practice with expert supervision, should be considered. Further research is needed to clarify the links between scoring Braden subscales correctly and selecting appropriate risk-based preventive interventions.

  1. Lotic cyprinid communities can be structured as nest webs and predicted by the stress-gradient hypothesis.

    Science.gov (United States)

    Peoples, Brandon K; Blanc, Lori A; Frimpong, Emmanuel A

    2015-11-01

    Little is known about how positive biotic interactions structure animal communities. Nest association is a common reproductive facilitation in which associate species spawn in nests constructed by host species. Nest-associative behaviour is nearly obligate for some species, but facultative for others; this can complicate interaction network topology. Nest web diagrams can be used to depict interactions in nesting-structured communities and generate predictions about those interactions, but have thus far only been applied to cavity-nesting vertebrate communities. Likewise, the stress-gradient hypothesis (SGH) predicts that prevalent biotic interactions shift from competition to facilitation as abiotic and biotic stress increase; this model has hardly been applied to animal communities. Here, both of these models were applied to nest-associative fish communities and extended in novel ways to broaden their applicability. A nest web was constructed using spawning observations over 3 years in several streams in south-western Virginia, USA. Structural equation modelling (SEM) was then implemented through an information-theoretic framework to identify the most plausible nest web topology in stream fish communities at 45 sites in the New River basin of the central Appalachian Mountains, USA. To test the SGH, the per-nest reproductive success of 'strong' (nearly obligate) nest associates was used to represent interaction importance. Eigenvectors were extracted from a principal coordinate analysis (PCoA) of proportional species abundances to represent community structure. Both of these metrics were regressed on physical stress, a combination of catchment-scale agricultural land use and stream size (representing spatiotemporal habitat variability). Seventy-one per cent of SEM model evidence supported a parsimonious interaction topology in which strong associates rely on a single host (Nocomis), but not other species. PCoA identified a gradient of community structure dominated

  2. Organization-based Access Control Model for Web Service

    Institute of Scientific and Technical Information of China (English)

    李怀明; 王慧佳; 符林

    2014-01-01

    Current access control strategies have difficulty guaranteeing flexible authorization in complex, Web service-oriented e-government systems. Building on research into the organization-based four-level access control model (OB4LAC), this paper proposes an organization-based access control model for Web services. The model takes the organization as its core and studies access control and authorization management from a management perspective. By introducing position agents and authorization units, authorization can be adjusted as environment context information changes, achieving dynamic authorization, while the state transitions of authorization units provide support for workflow patterns. Furthermore, the model divides permissions into service permissions and service-attribute permissions, achieving fine-grained resource protection. Application examples show that the model fits the complex organizational structures of e-government systems well, and makes authorization more efficient and flexible while protecting Web service resources.

  4. iGPCR-drug: a web server for predicting interaction between GPCRs and drugs in cellular networking.

    Directory of Open Access Journals (Sweden)

    Xuan Xiao

    Full Text Available Involved in many diseases such as cancer, diabetes, and neurodegenerative, inflammatory and respiratory disorders, G-protein-coupled receptors (GPCRs) are among the most frequent targets of therapeutic drugs. It is time-consuming and expensive to determine whether a drug and a GPCR interact with each other in a cellular network purely by means of experimental techniques. Although some computational methods were developed in this regard based on knowledge of the 3D (three-dimensional) structure of proteins, their usage is unfortunately quite limited because the 3D structures of most GPCRs are still unknown. To overcome this situation, a sequence-based classifier, called "iGPCR-drug", was developed to predict the interactions between GPCRs and drugs in cellular networking. In the predictor, the drug compound is formulated as a 2D (two-dimensional) fingerprint via a 256D vector, the GPCR by the PseAAC (pseudo amino acid composition) generated with grey model theory, and the prediction engine is operated by the fuzzy K-nearest neighbour algorithm. Moreover, a user-friendly web server for iGPCR-drug was established at http://www.jci-bioinfo.cn/iGPCR-Drug/. For the convenience of most experimental scientists, a step-by-step guide is provided on how to use the web server to get the desired results without the need to follow the complicated math equations presented in this paper for its integrity. The overall success rate achieved by iGPCR-drug via the jackknife test was 85.5%, remarkably higher than that of the existing peer method developed in 2010, for which no web server was ever established. It is anticipated that iGPCR-Drug may become a useful high-throughput tool for both basic research and drug development, and that the approach presented here can also be extended to study other drug-target interaction networks.
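
    The prediction engine named above is a fuzzy K-nearest-neighbour classifier over concatenated drug-fingerprint and protein-feature vectors. Below is a compact, generic fuzzy KNN sketch using the standard inverse-distance membership weighting with fuzzifier m; the random feature vectors and labels are placeholders, not the published descriptors.

    ```python
    # Generic fuzzy K-nearest-neighbour classifier sketch (crisp training
    # labels, inverse-distance membership weighting). Random data stand in
    # for concatenated drug/GPCR feature vectors.
    import numpy as np

    def fuzzy_knn_predict(X_train, y_train, x, k=5, m=2.0):
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]
        # Inverse-distance weights; epsilon guards against exact matches.
        w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
        classes = np.unique(y_train)
        memberships = {c: w[y_train[nn] == c].sum() / w.sum() for c in classes}
        return max(memberships, key=memberships.get), memberships

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 16))            # placeholder feature vectors
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder interact/not labels
    label, scores = fuzzy_knn_predict(X, y, x=rng.normal(size=16))
    print(label, scores)
    ```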

  5. Web services-based access to local clinical trial databases: a standards initiative of the Association of American Cancer Institutes.

    Science.gov (United States)

    Stahl, Douglas C; Evans, Richard M; Afrin, Lawrence B; DeTeresa, Richard M; Ko, Dave; Mitchell, Kevin

    2003-01-01

    Electronic discovery of the clinical trials being performed at a specific research center is a challenging task, which presently requires manual review of the center's locally maintained databases or web pages of protocol listings. Near real-time automated discovery of available trials would increase the efficiency and effectiveness of clinical trial searching, and would facilitate the development of new services for information providers and consumers. Automated discovery efforts to date have been hindered by issues such as disparate database schemas, vocabularies, and insufficient standards for easy intersystem exchange of high-level data, but adequate infrastructure now exists that makes possible the development of applications for near real-time automated discovery of trials. This paper describes the current state (design and implementation) of the Web Services Specification for Publication and Discovery of Clinical Trials as developed by the Technology Task Force of the Association of American Cancer Institutes. The paper then briefly discusses a prototype web service-based application that implements the specification. Directions for the evolution of this specification are also discussed.

  6. Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS): A web-based tool for addressing the challenges of cross-species extrapolation of chemical toxicity

    Science.gov (United States)

    Conservation of a molecular target across species can be used as a line-of-evidence to predict the likelihood of chemical susceptibility. The web-based Sequence Alignment to Predict Across Species Susceptibility (SeqAPASS) tool was developed to simplify, streamline, and quantitat...

  7. Network of Research Infrastructures for European Seismology (NERIES)-Web Portal Developments for Interactive Access to Earthquake Data on a European Scale

    Science.gov (United States)

    Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.

    2009-04-01

    The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario therefore suggested a design approach based on the concept of an internet service-oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that have been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example, authorities, location providers, location methods, adopted software, and so on, described by use of a data model constructed with the Resource Description Framework (RDF) and accessible as a service. The European-Mediterranean Seismological Centre (EMSC) has implemented a unique event identifier (UNID) that will create the seismic-event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that will wrap the middleware used by the

  8. Ship accessibility predictions for the Arctic Ocean based on IPCC CO2 emission scenarios

    Science.gov (United States)

    Oh, Jai-Ho; Woo, Sumin; Yang, Sin-Il

    2017-02-01

    Changes in the extent of Arctic sea ice, which have resulted from climate change, offer new opportunities to use the Northern Sea Route (NSR) and Northwest Passage (NWP) for shipping. However, choosing to navigate the Arctic Ocean remains challenging due to the limited accessibility of ships and the balance between economic gain and potential risk. As a result, more precise and detailed information on both weather and sea-ice change in the Arctic is required. In this study, a high-resolution global AGCM was used to provide detailed information on the extent and thickness of Arctic sea ice. We performed an AMIP-type simulation of the present-day climate over the 31 years from 1979 to 2009, using observed SST and sea-ice concentration. For the future projection, we simulated the historical climate during 1979-2005 and subsequently the future climate during 2010-2099, using the mean of four CMIP5 models under two Representative Concentration Pathway scenarios (RCP 8.5 and RCP 4.5). First, the AMIP-type simulation was evaluated by comparison with observations from the Hadley Centre Sea Ice and Sea Surface Temperature (HadISST) dataset. The model reproduces the maximum (in March) and minimum (in September) sea-ice extent and the annual cycle. Based on this validation, the projected future sea-ice extents show a decreasing trend for both the maximum and minimum seasons, and RCP 8.5 shows a more sharply decreasing pattern of sea ice than RCP 4.5. Under both scenarios, ships classified as Polar Class (PC) 3 and Open-Water (OW) were predicted to have the largest and smallest number of ship-accessible days (in any given year) for the NSR and NWP, respectively. Based on the RCP 8.5 scenario, the projections suggest that after 2070, PC3 and PC6 vessels will have year-round access to the Arctic Ocean. In contrast, OW vessels will continue to have a seasonal handicap, inhibiting their ability to pass through the NSR and NWP.

  9. A SMART groundwater portal: An OGC web services orchestration framework for hydrology to improve data access and visualisation in New Zealand

    Science.gov (United States)

    Klug, Hermann; Kmoch, Alexander

    2014-08-01

    Transboundary and cross-catchment access to hydrological data is the key to designing successful environmental policies and activities. Electronic maps based on distributed databases are fundamental for planning and decision making in all regions and for all spatial and temporal scales. Freshwater is an essential asset in New Zealand (and globally), and the availability as well as accessibility of hydrological information held by or for public authorities and businesses is becoming a crucial management factor. Access to, and visual representation of, environmental information for the public is essential for raising awareness of water quality and quantity matters. Detailed interdisciplinary knowledge about the environment is required to ensure that the environmental policy-making community of New Zealand considers regional and local differences in hydrological status while assessing the overall national situation. However, cross-regional and inter-agency sharing of environmental spatial data is complex and challenging. In this article, we first provide an overview of state-of-the-art, standards-compliant techniques and methodologies for the practical implementation of simple, measurable, achievable, repeatable, and time-based (SMART) hydrological data management principles. Secondly, we contrast international state-of-the-art data management developments with the present status of groundwater information in New Zealand. Finally, for the topics of (i) data access and harmonisation, (ii) sensor web enablement and (iii) metadata, we summarise our findings, provide recommendations on future developments and highlight the specific advantages resulting from a seamless view, discovery, access, and analysis of interoperable hydrological information and metadata for decision making.

  10. Mining Web-based Educational Systems to Predict Student Learning Achievements

    Directory of Open Access Journals (Sweden)

    José del Campo-Ávila

    2015-03-01

    Full Text Available Educational Data Mining (EDM) is gaining great importance as a new interdisciplinary research field related to several other areas. It is directly connected with Web-based Educational Systems (WBES) and Data Mining (DM), a fundamental part of Knowledge Discovery in Databases. The former defines the context: WBES store and manage huge amounts of data. Such data are increasingly growing and contain hidden knowledge that could be very useful to users (both teachers and students). It is desirable to identify such knowledge in the form of models, patterns or any other representation schema that allows better exploitation of the system. The latter reveals itself as the tool to achieve such discovery. Data mining must cope with very complex and varied situations to reach quality solutions, so it is a research field where many advances are being made to accommodate and solve emerging problems. For this purpose, many techniques are usually considered. In this paper we study how data mining can be used to induce student models from the data acquired by a specific Web-based tool for adaptive testing, called SIETTE. Concretely, we have used top-down decision tree induction algorithms to extract the patterns, because these models, decision trees, are easily understandable. In addition, the validation processes conducted have assured high-quality models.
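
    Since the paper's student models are decision trees induced from testing data, the snippet below shows the generic shape of such a pipeline with scikit-learn: fit a tree on per-student features and print the human-readable rules. The feature names and data are invented stand-ins for SIETTE logs, not the paper's actual attributes.

    ```python
    # Generic decision-tree induction over (invented) student-testing features,
    # chosen because trees give the human-readable models the paper values.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(7)
    n = 200
    # Hypothetical per-student features: items answered, fraction correct,
    # mean response time in seconds.
    X = np.column_stack([
        rng.integers(5, 50, n),
        rng.uniform(0, 1, n),
        rng.uniform(5, 120, n),
    ])
    y = (X[:, 1] > 0.6).astype(int)  # placeholder pass/fail label

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["items", "frac_correct", "mean_rt"]))
    ```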

  11. Web-GIS platform for forest fire danger prediction in Ukraine: prospects of RS technologies

    Science.gov (United States)

    Baranovskiy, N. V.; Zharikova, M. V.

    2016-10-01

    Many different statistical and empirical methods for forest fire danger estimation are in use at present, but these systems lack a physical basis. Over the last decade, a deterministic-probabilistic method has been rapidly developed at Tomsk Polytechnic University. Classification of forest sites is one way to estimate forest fire danger, and this method is used in the present work. The forest fire danger estimate depends on forest vegetation condition, the forest fire record, precipitation and air temperature; in effect, we use a modified Nesterov criterion. Lightning activity is considered as a high-temperature ignition source in the present work. We use a Web-GIS platform for the program realization of this method. The program realization of the fire danger assessment system is a web-oriented geoinformation system developed on the Django platform in the Python programming language. The GeoDjango framework was used to realize the cartographic functions. We suggest using Terra/Aqua MODIS products for hot-spot monitoring. The test territory for forest fire danger estimation is the Proletarskoe forestry of the Kherson region (Ukraine).
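
    The danger rating mentioned above builds on the Nesterov criterion. A commonly cited form of the basic Nesterov ignition index accumulates T·(T − Td) over consecutive days and resets after significant rain; the sketch below implements that basic form with an assumed 3 mm reset threshold, since the paper's specific modification is not given here.

    ```python
    # Basic Nesterov ignition index: accumulate T * (T - Tdew) day by day and
    # reset after significant precipitation. The 3 mm reset threshold is a
    # commonly used value, assumed here; the paper applies its own modification.
    def nesterov_index(daily):
        """daily: iterable of (temp_C, dewpoint_C, precip_mm) tuples."""
        index = 0.0
        series = []
        for temp, dewpoint, precip in daily:
            if precip > 3.0:
                index = 0.0  # rain resets the accumulated fire-danger index
            else:
                index += temp * (temp - dewpoint)
            series.append(index)
        return series

    weather = [(25.0, 10.0, 0.0), (28.0, 9.0, 0.0),
               (22.0, 12.0, 5.0), (26.0, 8.0, 0.0)]
    print(nesterov_index(weather))  # [375.0, 907.0, 0.0, 468.0]
    ```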

  12. 76 FR 71914 - Nondiscrimination on the Basis of Disability in Air Travel: Accessibility of Web Sites and...

    Science.gov (United States)

    2011-11-21

    ... industry. We are aware that there are pros and cons to our proposal to require carriers to work with their... the pros and cons from their perspectives of this approach. 6. Ongoing Costs To Maintain an Accessible... compatible with screen-reader technology and is activated by a single click on the homepage of the...

  13. Impact of Predicting Health Care Utilization Via Web Search Behavior: A Data-Driven Analysis.

    Science.gov (United States)

    Agarwal, Vibhu; Zhang, Liangliang; Zhu, Josh; Fang, Shiyuan; Cheng, Tim; Hong, Chloe; Shah, Nigam H

    2016-09-21

    By recent estimates, the steady rise in health care costs has deprived more than 45 million Americans of health care services and has encouraged health care providers to better understand the key drivers of health care utilization from a population health management perspective. Prior studies suggest the feasibility of mining population-level patterns of health care resource utilization from observational analysis of Internet search logs; however, the utility of the endeavor to the various stakeholders in a health ecosystem remains unclear. The aim was to carry out a closed-loop evaluation of the utility of health care use predictions, using as a surrogate the conversion rates of advertisements that were displayed to the predicted future utilizers. The statistical models predicting the probability of a user's future visit to a medical facility were built using effective predictors of health care resource utilization, extracted from a deidentified dataset of geotagged mobile Internet search logs representing searches made by users of the Baidu search engine between March 2015 and May 2015. We inferred presence within the geofence of a medical facility from location and duration information in users' search logs and putatively assigned medical-facility visit labels to qualifying search logs. We constructed a matrix of general, semantic, and location-based features from the search logs of users that had 42 or more search days preceding a medical-facility visit, as well as from the search logs of users that had no medical visits, and trained statistical learners for predicting future medical visits. We then carried out a closed-loop evaluation of the utility of the health care use predictions using the show conversion rates of advertisements displayed to the predicted future utilizers. In the context of behaviorally targeted advertising, wherein health care providers are interested in minimizing their cost per conversion, the association between show conversion rate and predicted

  14. Chess games study & prediction through the use of Web-Scraping and regression analysis

    OpenAIRE

    Vence Pillarella, Elvin

    2015-01-01

    The prediction of chess game results using different types of regression analysis: logistic regression on the difference between the Elo ratings of the players involved, and distance-based regression making use of non-quantitative explanatory variables. Once the results were obtained, a comparison and analysis between them was carried out. To obtain the data, Web Scraping techniques and Regular Expressions functions were used (detection...
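
    The first model mentioned is a logistic regression on the Elo rating difference. The sketch below fits that one-feature model with scikit-learn on simulated games; as a sanity check, the classical Elo expected score E = 1/(1 + 10^(−ΔR/400)) is printed alongside. The data are simulated, not scraped.

    ```python
    # Logistic regression of game outcome on Elo rating difference, with the
    # classical Elo expectation shown for comparison. Data are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def elo_expected(diff):
        return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

    rng = np.random.default_rng(3)
    diff = rng.normal(0, 200, size=2000)            # white minus black Elo
    wins = (rng.uniform(size=diff.size) < elo_expected(diff)).astype(int)

    model = LogisticRegression().fit(diff.reshape(-1, 1), wins)
    for d in (-200, 0, 200):
        p_fit = model.predict_proba([[d]])[0, 1]
        print(f"diff {d:+4d}: fitted {p_fit:.3f} vs Elo formula {elo_expected(d):.3f}")
    ```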

  15. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, accessing the Web can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  16. Gentrepid V2.0: A web server for candidate disease gene prediction

    NARCIS (Netherlands)

    Ballouz, S.; Liu, J.Y.; George, R.A.; Bains, N.; Liu, A.; Oti, M.O.; Gaeta, B.; Fatkin, D.; Wouters, M.A.

    2013-01-01

    BACKGROUND: Candidate disease gene prediction is a rapidly developing area of bioinformatics research with the potential to deliver great benefits to human health. As experimental studies detecting associations between genetic intervals and disease proliferate, better bioinformatic techniques that c

  17. TargetNet: a web service for predicting potential drug-target interaction profiling via multi-target SAR models.

    Science.gov (United States)

    Yao, Zhi-Jiang; Dong, Jie; Che, Yu-Jing; Zhu, Min-Feng; Wen, Ming; Wang, Ning-Ning; Wang, Shan; Lu, Ai-Ping; Cao, Dong-Sheng

    2016-05-01

    Drug-target interactions (DTIs) are central to current drug discovery processes and public health fields. Analyzing the DTI profiling of drugs helps to infer drug indications, adverse drug reactions, drug-drug interactions, and drug modes of action. Therefore, it is of high importance to predict the DTI profiling of drugs reliably and quickly on a genome-scale level. Here, we develop the TargetNet server, which can make real-time DTI predictions based only on molecular structures, following the spirit of multi-target SAR methodology. Naïve Bayes models together with various molecular fingerprints were employed to construct the prediction models. Ensemble learning from these fingerprints was also provided to improve the prediction ability. When the user submits a molecule, the server predicts the activity of the molecule across 623 human proteins using the established high-quality SAR models, thus generating a DTI profiling that can be used as a feature vector of chemicals for wide applications. The 623 SAR models related to 623 human proteins were strictly evaluated and validated by several model validation strategies, resulting in AUC scores of 75-100%. We applied the generated DTI profiling to successfully predict potential targets, toxicity classification, drug-drug interactions, and drug mode of action, which sufficiently demonstrates the wide application value of the potential DTI profiling. The TargetNet web server is designed based on the Django framework in Python, and is freely accessible at http://targetnet.scbdd.com.
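
    TargetNet's per-target models are Naïve Bayes classifiers over molecular fingerprints. The sketch below shows that modeling shape with scikit-learn's BernoulliNB on binary fingerprint bit-vectors; the random bits stand in for real fingerprints (which could, for example, be computed with RDKit), and the single target model stands in for the server's 623.

    ```python
    # Per-target Naive Bayes activity model over binary molecular fingerprints,
    # mirroring the multi-target SAR setup at a toy scale. Random bit-vectors
    # stand in for real fingerprints; one model stands in for 623.
    import numpy as np
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)
    n_mols, n_bits = 400, 256
    X = rng.integers(0, 2, size=(n_mols, n_bits))    # fingerprint bits
    y = (X[:, :8].sum(axis=1) >= 4).astype(int)      # placeholder activity label

    clf = BernoulliNB()
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print("cross-validated AUC:", auc.round(3))

    # A fitted model yields an activity probability for a new molecule;
    # stacking such outputs across many targets gives a DTI profile vector.
    clf.fit(X, y)
    print(clf.predict_proba(rng.integers(0, 2, size=(1, n_bits)))[0, 1])
    ```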

  19. WEB GIS: IMPLEMENTATION ISSUES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    With the rapid expansion and development of the Internet and WWW (World Wide Web, or Web), Web GIS (Web Geographical Information System) is becoming ever more popular, and as a result numerous sites have added GIS capability to their Web sites. In this paper, the reasons behind developing a Web GIS instead of a "traditional" GIS are first outlined. Then the current status of Web GIS is reviewed, and implementation methodologies are explored. The underlying technologies for developing Web GIS, such as the Web server, Web browser, CGI (Common Gateway Interface), Java, and ActiveX, are discussed, and some typical implementation tools from both the commercial and public domains are presented. Finally, the future direction of Web GIS development is predicted.

  20. MemType-2L: a web server for predicting membrane proteins and their types by incorporating evolution information through Pse-PSSM.

    Science.gov (United States)

    Chou, Kuo-Chen; Shen, Hong-Bin

    2007-08-24

    Given an uncharacterized protein sequence, how can we identify whether it is a membrane protein or not? And if it is, which membrane protein type does it belong to? These questions are important because they are closely relevant to the biological function of the query protein and to its interaction processes with other molecules in a biological system. Particularly, with the avalanche of protein sequences generated in the Post-Genomic Age and the much slower progress of biochemical experiments in determining their functions, it is highly desirable to develop an automated method that can help address these questions. In this study, a 2-layer predictor, called MemType-2L, has been developed: the 1st-layer prediction engine identifies a query protein as membrane or non-membrane; if it is a membrane protein, the process automatically continues with the 2nd-layer prediction engine to further identify its type among the following eight categories: (1) type I, (2) type II, (3) type III, (4) type IV, (5) multipass, (6) lipid-chain-anchored, (7) GPI-anchored, and (8) peripheral. MemType-2L is featured by incorporating evolution information through representing the protein samples with Pse-PSSM (Pseudo Position-Specific Score Matrix) vectors, and by containing an ensemble classifier formed by fusing many powerful individual OET-KNN (Optimized Evidence-Theoretic K-Nearest Neighbor) classifiers. The success rates obtained by MemType-2L on a newly constructed stringent dataset, by both the jackknife test and the independent dataset test, are quite high, indicating that MemType-2L may become a very useful high-throughput tool. As a Web server, MemType-2L is freely accessible to the public at http://chou.med.harvard.edu/bioinf/MemType.
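
    The Pse-PSSM representation mentioned above can be sketched in a few lines: the 20 column averages of a normalized L x 20 PSSM capture residue composition, and lag-difference terms capture sequence-order information. This is a hedged reconstruction of the general recipe; the normalization and lag settings are illustrative assumptions rather than the exact parameters used by MemType-2L.

```python
# A minimal sketch of Pse-PSSM feature construction: 20 column averages of a
# normalized L x 20 PSSM, plus, for each lag xi, 20 averaged squared
# differences that encode sequence-order information.
import numpy as np

def pse_pssm(pssm, max_lag=1):
    """pssm: (L, 20) array of position-specific scores for one protein."""
    # Standardize each column so scores from different proteins are comparable.
    p = (pssm - pssm.mean(axis=0)) / (pssm.std(axis=0) + 1e-9)
    features = [p.mean(axis=0)]                 # 20 average scores
    for xi in range(1, max_lag + 1):            # sequence-order terms
        diff = (p[:-xi] - p[xi:]) ** 2          # shape (L - xi, 20)
        features.append(diff.mean(axis=0))      # 20 values per lag
    return np.concatenate(features)             # 20 * (1 + max_lag) dims

rng = np.random.default_rng(0)
toy_pssm = rng.normal(size=(120, 20))           # placeholder PSSM
print(pse_pssm(toy_pssm).shape)                 # (40,) with max_lag=1
```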

  1. Web 2.0

    CERN Document Server

    Han, Sam

    2012-01-01

    Web 2.0 is a highly accessible introductory text examining all the crucial discussions and issues which surround the changing nature of the World Wide Web. It not only contextualises the Web 2.0 within the history of the Web, but also goes on to explore its position within the broader dispositif of emerging media technologies. The book uncovers the connections between diverse media technologies including mobile smart phones, hand-held multimedia players, "netbooks" and electronic book readers such as the Amazon Kindle, all of which are made possible only by the Web 2.0. In addition, Web 2.0 m

  2. Improving prediction of secondary structure, local backbone angles, and solvent accessible surface area of proteins by iterative deep learning.

    Science.gov (United States)

    Heffernan, Rhys; Paliwal, Kuldip; Lyons, James; Dehzangi, Abdollah; Sharma, Alok; Wang, Jihua; Sattar, Abdul; Yang, Yuedong; Zhou, Yaoqi

    2015-01-01

    Direct prediction of protein structure from sequence is a challenging problem. An effective approach is to break it up into independent sub-problems, such as the prediction of protein secondary structure, which can then be solved independently. In a previous study, we found that iterative use of predicted secondary structure and backbone torsion angles can further improve secondary structure and torsion angle prediction. In this study, we expand the iterative features to include solvent accessible surface area and backbone angles and dihedrals based on Cα atoms. By using a deep learning neural network in three iterations, we achieved 82% accuracy for secondary structure prediction, 0.76 for the correlation coefficient between predicted and actual solvent accessible surface area, 19° and 30° for the mean absolute errors of the backbone φ and ψ angles, respectively, and 8° and 32° for the mean absolute errors of the Cα-based θ and τ angles, respectively, for an independent test dataset of 1199 proteins. The accuracy of the method is slightly lower for 72 CASP 11 targets, but much higher than that of model structures from current state-of-the-art techniques. This suggests the potentially beneficial use of these predicted properties for model assessment and ranking.
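
    The iterative scheme itself is simple to illustrate: predict a property, append the prediction to the input features, and predict again. The toy sketch below uses a small scikit-learn network and synthetic data in place of the paper's deep networks and real structural features.

```python
# Toy sketch of iterative refinement: feed each iteration's prediction back
# as an extra input feature. Model, features, and target are synthetic
# stand-ins for the paper's deep networks and structural properties.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))            # e.g. PSSM-derived window features
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500)   # target property

pred = np.zeros((500, 1))                 # no prediction before iteration 1
for iteration in range(3):                # three iterations, as in the paper
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=0).fit(np.hstack([X, pred]), y)
    pred = model.predict(np.hstack([X, pred])).reshape(-1, 1)
    corr = float(np.corrcoef(pred.ravel(), y)[0, 1])
    print(f"iteration {iteration}: corr = {corr:.3f}")  # toy, in-sample only
```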

  3. Social Web mining and exploitation for serious applications: Technosocial Predictive Analytics and related technologies for public health, environmental and national security surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Kamel Boulos, Maged; Sanfilippo, Antonio P.; Corley, Courtney D.; Wheeler, Steve

    2010-03-17

    This paper explores techno-social predictive analytics (TPA) and related methods for Web “data mining”, where users’ posts and queries are garnered from Social Web (“Web 2.0”) tools such as blogs, microblogging and social networking sites to form coherent representations of real-time health events. The paper includes a brief introduction to commonly used Social Web tools such as mashups and aggregators, and maps their exponential growth as an open architecture of participation for the masses and an emerging way to gain insight into the collective health status of whole populations. Several health-related tool examples are described and demonstrated as practical means through which health professionals might create clear location-specific pictures of epidemiological data such as flu outbreaks.

  4. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly. In Focused Web Harvesting [?], which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access a complete set of web data related to their topics of interest. Whether you are a fan

  5. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    Web content changes rapidly [18]. In Focused Web Harvesting [17], whose aim is to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access the set of all web data relevant to their topics of interest. Whether you are a fan

  6. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammad; Hiemstra, Djoerd; Keulen, van Maurice

    2016-01-01

    Web content changes rapidly. In Focused Web Harvesting [?], which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access a complete set of web data related to their topics of interest. Whether you are a fan foll

  7. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammad; Hiemstra, Djoerd; Keulen, van Maurice

    2016-01-01

    Web content changes rapidly [18]. In Focused Web Harvesting [17], whose aim is to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access the set of all web data relevant to their topics of interest. Whether you are a fan followi

  8. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly. In Focused Web Harvesting [?], which aims at achieving a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access a complete set of web data related to their topics of interest. Whether you are a fan foll

  9. Efficient Web Harvesting Strategies for Monitoring Deep Web Content

    NARCIS (Netherlands)

    Khelghati, Mohammadreza; Hiemstra, Djoerd; van Keulen, Maurice

    2016-01-01

    Web content changes rapidly [18]. In Focused Web Harvesting [17], whose aim is to achieve a complete harvest for a given topic, this dynamic nature of the web creates problems for users who need to access the set of all web data relevant to their topics of interest. Whether you are a fan followi

  10. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    Science.gov (United States)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard, using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This metadata system, with its advanced but easy-to-handle search tool, supports users, developers and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system; plugged-in tools therefore gain transparency and reproducibility automatically. Furthermore, when configurations match while starting an evaluation tool, the system suggests to use results already produced
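
    The configuration-matching behavior described at the end can be pictured as a content-addressed cache: hash the normalized tool configuration and offer any stored result under the same hash for reuse. The names and the in-memory store below are illustrative; the actual system records analysis histories in a MySQL database.

```python
# Sketch of configuration-matched result reuse: hash a tool's configuration,
# store results under the hash, and reuse them when the same configuration
# is submitted again. Names are illustrative, not the system's actual API.
import hashlib, json

_history = {}   # config hash -> stored result (stand-in for a MySQL table)

def config_key(tool, config):
    payload = json.dumps({"tool": tool, "config": config}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_tool(tool, config, compute):
    key = config_key(tool, config)
    if key in _history:
        print(f"{tool}: identical configuration found, reusing stored result")
        return _history[key]
    result = compute(config)
    _history[key] = result   # every analysis is recorded for reproducibility
    return result

run_tool("bias_check", {"model": "MPI-ESM", "years": [1960, 2000]},
         lambda c: "computed")
run_tool("bias_check", {"years": [1960, 2000], "model": "MPI-ESM"},
         lambda c: "computed")   # key order does not matter: cache hit
```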

  11. Abnormal Behavior Detection Based on Web Access Log

    Institute of Scientific and Technical Information of China (English)

    刘志宏; 孙长国

    2015-01-01

    With the rapid development of the Internet, all kinds of attack techniques against websites emerge in an endless stream. This paper introduces a log-analysis workflow for processing massive volumes of web access logs, and mines attack behavior using methods such as signature string matching and statistical analysis of access frequency. Through a practical application scenario, it describes how to find the source of an attack after it has actually occurred, thereby improving the ability to detect security threats.
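
    Both detection ideas, signature string matching and access-frequency statistics, fit in a short sketch. The log format (Apache combined style), the signature patterns, and the flood threshold below are assumptions chosen for illustration.

```python
# Sketch of the two detection ideas: signature matching on requested URLs and
# per-source request-frequency statistics over an access log.
import re
from collections import Counter
from urllib.parse import unquote

SIGNATURES = [
    re.compile(r"union\s+select", re.I),   # SQL injection
    re.compile(r"<script", re.I),          # reflected XSS
    re.compile(r"\.\./"),                  # path traversal
]
LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def scan(log_lines, flood_threshold=100):
    signature_hits, per_ip = [], Counter()
    for line in log_lines:
        m = LINE.match(line)
        if not m:
            continue
        ip, url = m.group(1), unquote(m.group(2))   # decode %xx escapes
        per_ip[ip] += 1
        if any(sig.search(url) for sig in SIGNATURES):
            signature_hits.append((ip, url))
    flooders = [ip for ip, n in per_ip.items() if n > flood_threshold]
    return signature_hits, flooders

log = [
    '1.2.3.4 - - [01/Jan/2015:00:00:01 +0800] "GET /item?id=1 HTTP/1.1" 200 512',
    '1.2.3.4 - - [01/Jan/2015:00:00:02 +0800] "GET /item?id=1%20union%20select HTTP/1.1" 200 512',
]
print(scan(log))   # flags the injection attempt; no flooders in this tiny log
```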

  12. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    Science.gov (United States)

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server, RBscore, and a database, NBench. RBscore predicts RNA-/DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid biases from the dataset, the binding site definition and the assessment metric, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. A comprehensive comparison led us to develop a benchmark database named NBench. The web server is available at: http://ahsoka.u-strasbg.fr/rbscorenbench/.

  13. iLIR: A web resource for prediction of Atg8-family interacting proteins.

    Science.gov (United States)

    Kalvari, Ioanna; Tsompanis, Stelios; Mulakkal, Nitha C; Osgood, Richard; Johansen, Terje; Nezis, Ioannis P; Promponas, Vasilis J

    2014-05-01

    Macroautophagy was initially considered to be a nonselective process for bulk breakdown of cytosolic material. However, recent evidence points toward a selective mode of autophagy mediated by the so-called selective autophagy receptors (SARs). SARs act by recognizing and sorting diverse cargo substrates (e.g., proteins, organelles, pathogens) to the autophagic machinery. Known SARs are characterized by a short linear sequence motif (LIR-, LRS-, or AIM-motif) responsible for the interaction between SARs and proteins of the Atg8 family. Interestingly, many LIR-containing proteins (LIRCPs) are also involved in autophagosome formation and maturation and a few of them in regulating signaling pathways. Despite recent research efforts to experimentally identify LIRCPs, only a few dozen of this class of often unrelated proteins have been characterized so far using tedious cell biological, biochemical, and crystallographic approaches. The availability of an ever-increasing number of complete eukaryotic genomes provides a grand challenge for characterizing novel LIRCPs throughout the eukaryotes. Along these lines, we developed iLIR, a freely available web resource, which provides in silico tools for assisting the identification of novel LIRCPs. Given an amino acid sequence as input, iLIR searches for instances of short sequences compliant with a refined sensitive regular expression pattern of the extended LIR motif (xLIR-motif) and retrieves characterized protein domains from the SMART database for the query. Additionally, iLIR scores xLIRs against a custom position-specific scoring matrix (PSSM) and identifies potentially disordered subsequences with protein interaction potential overlapping with detected xLIR-motifs. Here we demonstrate that proteins satisfying these criteria make good LIRCP candidates for further experimental verification. Domain architecture is displayed in an informative graphic, and detailed results are also available in tabular form. We anticipate
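
    The motif-scanning step lends itself to a schematic example: scan the sequence with a regular expression and score each hit against a position-specific scoring matrix. The pattern below is the canonical [WFY]xx[LIV] LIR core rather than iLIR's refined xLIR pattern, and the scoring matrix is a random stand-in.

```python
# Schematic LIR-motif scan: regex search for the canonical [WFY]xx[LIV] core,
# each hit scored against a (toy) position-specific scoring matrix.
import re
import numpy as np

CORE = re.compile(r"[WFY]..[LIV]")
AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(2)
PSSM = rng.normal(size=(4, 20))       # placeholder 4-position scoring matrix

def score(motif):
    """Sum of per-position scores for a 4-residue motif."""
    return sum(PSSM[i, AA.index(aa)] for i, aa in enumerate(motif))

def find_lir_candidates(sequence, threshold=0.0):
    hits = []
    for m in CORE.finditer(sequence):
        s = score(m.group())
        if s >= threshold:
            hits.append((m.start() + 1, m.group(), round(float(s), 2)))
    return hits   # (1-based position, motif, score) tuples

print(find_lir_candidates("MSDNFEKLWEELQDDDVEQFVIVKK"))
```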

  14. ProstaWeb: an online tool for predicting prostatic pathologies from Primary Care consultations.

    Directory of Open Access Journals (Sweden)

    Francisco J. Pérez-Gil

    2017-04-01

    Full Text Available Objective: To develop a tool to support and aid the diagnosis of prostate pathologies. Method: A database provided by a previous research project, in which anthropometric, clinical and analytical variables are used to develop predictive statistical algorithms that provide the probability of having one prostate pathology or another. Results: A diagnostic tool that estimates, with some precision, the probability of having one prostatic pathology or another depending on the variables entered for the patient. Conclusions: ProstaWeb is a useful and practical prostate diagnosis tool for the primary care physician. Although the hit rate is considerably high, it would still be necessary to build models with more refined variables, and perhaps with a larger number of patients, to increase the precision of the predictions.

  15. Antibody modeling using the prediction of immunoglobulin structure (PIGS) web server [corrected].

    Science.gov (United States)

    Marcatili, Paolo; Olimpieri, Pier Paolo; Chailyan, Anna; Tramontano, Anna

    2014-12-01

    Antibodies (or immunoglobulins) are crucial for defending organisms from pathogens, but they are also key players in many medical, diagnostic and biotechnological applications. The ability to predict their structure and the specific residues involved in antigen recognition has several useful applications in all of these areas. Over the years, we have developed or collaborated in developing a strategy that enables researchers to predict the 3D structure of antibodies with a very satisfactory accuracy. The strategy is completely automated and extremely fast, requiring only a few minutes (∼10 min on average) to build a structural model of an antibody. It is based on the concept of canonical structures of antibody loops and on our understanding of the way light and heavy chains pack together.

  16. PlantLoc: an accurate web server for predicting plant protein subcellular localization by substantiality motif

    OpenAIRE

    Tang, Shengnan; Li, Tonghua; Cong, Peisheng; Xiong, Wenwei; Wang, Zhiheng; Sun, Jiangming

    2013-01-01

    Knowledge of the subcellular localizations (SCLs) of plant proteins relates to their functions and aids in understanding the regulation of biological processes at the cellular level. We present PlantLoc, a highly accurate and fast webserver for predicting the multi-label SCLs of plant proteins. The PlantLoc server has two innovative characteristics: building localization motif libraries by a recursive method without alignment or Gene Ontology information; and establishing a simple architecture for rapi...

  17. Features predicting weight loss in overweight or obese participants in a web-based intervention: randomized trial.

    Science.gov (United States)

    Brindal, Emily; Freyne, Jill; Saunders, Ian; Berkovsky, Shlomo; Smith, Greg; Noakes, Manny

    2012-12-12

    at week 12 (P = .01). The average number of days that each site was used varied significantly (P = .02) and was higher for the supportive site, at 5.96 (SD 11.36), and the personalized-supportive site, at 5.50 (SD 10.35), relative to the information-based site, at 3.43 (SD 4.28). In total, 435 participants provided a valid final weight at the 12-week follow-up. Intention-to-treat analyses (using multiple imputations) revealed that there were no statistically significant differences in weight loss between sites (P = .42). On average, participants lost 2.76% (SE 0.32%) of their initial body weight, with 23.7% (SE 3.7%) losing 5% or more of their initial weight. Within the supportive conditions, the level of use of the online weight tracker was predictive of weight loss (model estimate = 0.34). The inclusion of social networking features and personalized meal planning recommendations in a web-based weight loss program did not demonstrate additive effects for user weight loss or retention. These features did, however, increase the average number of days that a user engaged with the system. For users of the supportive websites, greater use of the weight tracker tool was associated with greater weight loss.

  18. Spinning a laser web: predicting spider distributions using LiDAR.

    Science.gov (United States)

    Vierling, K T; Bässler, C; Brandl, R; Vierling, L A; Weiss, I; Müller, J

    2011-03-01

    LiDAR remote sensing has been used to examine relationships between vertebrate diversity and environmental characteristics, but its application to invertebrates has been limited. Our objectives were to determine whether LiDAR-derived variables could be used to accurately describe single-species distributions and community characteristics of spiders in remote forested and mountainous terrain. We collected over 5300 spiders across multiple transects in the Bavarian National Park (Germany) using pitfall traps. We examined spider community characteristics (species richness, the Shannon index, the Simpson index, community composition, mean body size, and abundance) and single-species distribution and abundance with LiDAR variables and ground-based measurements. We used the R2 and partial R2 provided by variance partitioning to evaluate the predictive power of LiDAR-derived variables compared to ground measurements for each of the community characteristics. The total adjusted R2 for species richness, the Shannon index, community species composition, and body size had a range of 25-57%. LiDAR variables and ground measurements both contributed >80% to the total predictive power. For species composition, the explained variance was approximately 32%, which was significantly greater than expected by chance. The predictive power of LiDAR-derived variables was comparable or superior to that of the ground-based variables for examinations of single-species distributions, and it explained up to 55% of the variance. The predictability of species distributions was higher for species that had strong associations with shade in open-forest habitats, and this niche position has been well documented across the European continent for spider species. The similar statistical performance between LiDAR and ground-based measures at our field sites indicated that deriving spider community and species distribution information using LiDAR data can provide not only high predictive power at

  19. Cy-preds: An algorithm and a web service for the analysis and prediction of cysteine reactivity.

    Science.gov (United States)

    Soylu, İnanç; Marino, Stefano M

    2016-02-01

    Cysteine (Cys) is a critically important amino acid, serving a variety of functions within proteins including structural roles, catalysis, and regulation of function through post-translational modifications. Predicting which Cys residues are likely to be reactive is a much sought-after capability. Few methods are currently available for the task, based either on the evaluation of physicochemical features (e.g., pKa and exposure) or on similarity with known instances. In this study, we developed an algorithm (named HAL-Cy) which blends previous work with novel implementations to distinguish reactive Cys from nonreactive. HAL-Cy has two major components: (i) an energy-based part, rooted in the evaluation of H-bond network contributions, and (ii) a knowledge-based part, composed of different profiling approaches (including a newly developed weighting matrix for sequence profiling). In our evaluations, HAL-Cy provided significantly improved performance, as tested in comparisons with existing approaches. We implemented our algorithm in a web service (Cy-preds), the ultimate product of our work, and provided it with a variety of additional features, tools, and options: Cy-preds is capable of performing fully automated calculations for a thorough analysis of Cys reactivity in proteins, ranging from reactivity predictions (e.g., with HAL-Cy) to functional characterization. We believe it represents an original, effective, and very useful addition to the current array of tools available to scientists involved in redox biology, Cys biochemistry, and structural bioinformatics.

  20. Web 2.0 Articles: Content Analysis and a Statistical Model to Predict Recognition of the Need for New Instructional Design Strategies

    Science.gov (United States)

    Liu, Leping; Maddux, Cleborne D.

    2008-01-01

    This article presents a study of Web 2.0 articles intended to (a) analyze the content of what is written and (b) develop a statistical model to predict whether authors write about the need for new instructional design strategies and models. Eighty-eight technology articles were subjected to lexical analysis and a logistic regression model was…
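
    A model of the kind the study describes can be sketched with standard tools: lexical features feeding a logistic regression that predicts whether an article recognizes the need for new instructional design strategies. The article snippets and labels below are invented for illustration.

```python
# Sketch of lexical analysis + logistic regression for predicting whether an
# article calls for new instructional design strategies. Data are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Web 2.0 demands new instructional design strategies and models",
    "A history of wikis and blogs in undergraduate courses",
    "Existing design models no longer fit participatory web tools",
    "Survey of podcast adoption rates across campuses",
]
labels = [1, 0, 1, 0]   # 1 = recognizes the need for new ID strategies

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(articles, labels)
print(model.predict(["new design strategies for social software"]))
```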

  1. The Study and Implementation of Accessing Web Services for the Peer in a P2P Network

    Institute of Scientific and Technical Information of China (English)

    翟丽芳; 张卫

    2004-01-01

    This paper first briefly introduces the technical background of P2P networks and Web Services, then proposes the idea of integrating P2P networks with Web Services. To this end, it introduces the concept of a Web Service Broker, enabling peers in a P2P network to access Web Services transparently.

  2. Toward Predictive Theories of Nuclear Reactions Across the Isotopic Chart: Web Report

    Energy Technology Data Exchange (ETDEWEB)

    Escher, J. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Blackmon, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Elster, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Launey, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lee, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Scielzo, N. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-12

    Recent years have seen exciting new developments and progress in nuclear structure theory, reaction theory, and experimental techniques, that allow us to move towards a description of exotic systems and environments, setting the stage for new discoveries. The purpose of the 5-week program was to bring together physicists from the low-energy nuclear structure and reaction communities to identify avenues for achieving reliable and predictive descriptions of reactions involving nuclei across the isotopic chart. The 4-day embedded workshop focused on connecting theory developments to experimental advances and data needs for astrophysics and other applications. Nuclear theory must address phenomena from laboratory experiments to stellar environments, from stable nuclei to weakly-bound and exotic isotopes. Expanding the reach of theory to these regimes requires a comprehensive understanding of the reaction mechanisms involved as well as detailed knowledge of nuclear structure. A recurring theme throughout the program was the desire to produce reliable predictions rooted in either ab initio or microscopic approaches. At the same time it was recognized that some applications involving heavy nuclei away from stability, e.g. those involving fission fragments, may need to rely on simple parameterizations of incomplete data for the foreseeable future. The goal here, however, is to subsequently improve and refine the descriptions, moving to phenomenological, then microscopic approaches. There was overarching consensus that future work should also focus on reliable estimates of errors in theoretical descriptions.

  3. An Intelligent Web Pre-fetching Based on Hidden Markov Model

    Institute of Scientific and Technical Information of China (English)

    许欢庆; 金鑫

    2004-01-01

    Web pre-fetching is one of the most popular strategies proposed for reducing perceived access delay and improving the service quality of web servers. In this paper, we present a pre-fetching model based on the hidden Markov model, which mines the latent information-requirement concepts contained in the user's access path and makes semantic-based pre-fetching decisions. Experimental results show that our scheme achieves better predictive pre-fetching precision.
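
    The prediction step can be illustrated with a plain first-order Markov model over pages; the hidden-state layer and semantic concepts of the paper's model are omitted, so this is a simplified baseline rather than the proposed HMM.

```python
# First-order Markov sketch of predictive pre-fetching: estimate transition
# probabilities between pages from observed access paths, then pre-fetch the
# most likely next page.
from collections import Counter, defaultdict

transitions = defaultdict(Counter)

def train(paths):
    for path in paths:
        for cur, nxt in zip(path, path[1:]):
            transitions[cur][nxt] += 1

def predict_next(page, top_k=1):
    counts = transitions[page]
    total = sum(counts.values())
    return [(nxt, n / total) for nxt, n in counts.most_common(top_k)]

train([["/home", "/news", "/sports"],
       ["/home", "/news", "/weather"],
       ["/home", "/shop"]])
print(predict_next("/home"))   # [('/news', 0.666...)] -> pre-fetch /news
```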

  4. Ontology-oriented retrieval of putative microRNAs in Vitis vinifera via GrapeMiRNA: a web database of de novo predicted grape microRNAs

    Directory of Open Access Journals (Sweden)

    Fontana Paolo

    2009-06-01

    Full Text Available Abstract Background Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. Description The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNAs dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database and a web interface was prepared to query the database and retrieve predictions of interest. Conclusion The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added in the web interface also allow users to inspect predictions according to gene ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.

  5. Web Mining: An Overview

    Directory of Open Access Journals (Sweden)

    P. V. G. S. Mudiraj; B. Jabber; K. David Raju

    2011-12-01

    Full Text Available Web usage mining is a main research area in Web mining, focused on learning about Web users and their interactions with Web sites. The motive of mining is to find users’ access models automatically and quickly from vast Web log data, such as frequent access paths, frequent access page groups and user clusters. Through web usage mining, the server log, registration information and other related information left by users provide a foundation for organizational decision making. This article provides a survey and analysis of current Web usage mining systems and technologies. There are generally three tasks in Web usage mining: preprocessing, pattern analysis and knowledge discovery. Preprocessing cleans the server log file by removing entries such as errors or failures and repeated requests for the same URL from the same host. The main task of pattern analysis is to filter out uninteresting information and to visualize and interpret the interesting patterns for users. The statistics collected from the log file can help to discover knowledge, and this knowledge can be used to classify users and web pages as excellent, medium or weak based on the hit counts of the pages in the web site. The design of the website can then be restructured based on user behavior and hit counts, which provides quick responses to web users, saves server memory, and thus reduces HTTP requests and bandwidth utilization. This paper addresses challenges in the three phases of Web usage mining, along with Web structure mining. It also discusses an application of WUM, an online recommender system that dynamically generates links to pages that have not yet been visited by a user and might be of potential interest. Unlike the recommender systems proposed so far, ONLINE MINER does not make use of any off-line component and is able to manage Web sites made up of dynamically generated pages.
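
    The preprocessing task described above reduces to a small filter, sketched here under an assumed (host, url, status) log layout: drop error entries and repeated requests, then count hits per page as the basis for ranking pages and users.

```python
# Sketch of the preprocessing step: drop error/failure entries and repeated
# requests for the same URL from the same host, then count hits per page.
def preprocess(entries):
    """entries: (host, url, status) tuples from a parsed server log."""
    seen, cleaned = set(), []
    for host, url, status in entries:
        if status >= 400:            # error or failure entry
            continue
        if (host, url) in seen:      # repeated request from the same host
            continue
        seen.add((host, url))
        cleaned.append((host, url))
    return cleaned

hits = {}
for _, url in preprocess([("h1", "/a", 200), ("h1", "/a", 200),
                          ("h1", "/b", 404), ("h2", "/a", 200)]):
    hits[url] = hits.get(url, 0) + 1
print(hits)   # {'/a': 2} -> basis for excellent/medium/weak page ranking
```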

  6. Access Control Method for Web Application System Based on Role-function

    Institute of Scientific and Technical Information of China (English)

    庞希愚; 王成; 仝春玲

    2014-01-01

    The access control requirements of Web application systems and the shortcomings of the Role-Based Access Control (RBAC) model in Web application systems are analyzed; a fundamental idea of access control based on a role-function model is proposed and its implementation details are discussed. Based on the Web page organization structure that forms naturally from the business function requirements of the system, and on the access control requirements of users, the business functions of the pages under each bottom-level menu are partitioned to form the basic unit of permission configuration. By configuring the relations between users, roles, pages, menus and functions, the method controls user access to Web system resources such as pages, the HTML elements they contain, and the operations on them. Practical application in the scientific research management system of Shandong Jiaotong University shows that implementing access control on the business functions realized by menus and pages can satisfy enterprise requirements for user access control in Web systems well. The method has the advantages of simple operation and strong generality, and it effectively reduces the workload of Web system development.
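
    The role-function model reduces to a few relations, sketched below with invented names: business functions are the unit of permission, roles bundle functions, users hold roles, and a page is accessible when one of the user's roles grants the function the page implements.

```python
# Sketch of role-function access control: permission is granted per business
# function, roles bundle functions, and users hold roles. Names are invented.
role_functions = {
    "researcher": {"project:view", "paper:submit"},
    "admin":      {"project:view", "project:edit", "user:manage"},
}
user_roles = {"alice": {"researcher"}, "bob": {"admin"}}
page_functions = {"/projects/edit": "project:edit"}  # page -> required function

def can_access(user, page):
    required = page_functions[page]
    granted = set().union(*(role_functions[r] for r in user_roles[user]))
    return required in granted

print(can_access("alice", "/projects/edit"))  # False
print(can_access("bob", "/projects/edit"))    # True
```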

  7. Data Analysis Protocol for the Development and Evaluation of Population Pharmacokinetic Models for Incorporation Into the Web-Accessible Population Pharmacokinetic Service - Hemophilia (WAPPS-Hemo)

    Science.gov (United States)

    McEneny-King, Alanna; Foster, Gary; Edginton, Andrea N

    2016-01-01

    Background Hemophilia is an inherited bleeding disorder caused by a deficiency in a specific clotting factor. This results in spontaneous bleeding episodes and eventual arthropathy. The mainstay of hemophilia treatment is prophylactic replacement of the missing factor, but an optimal regimen remains to be determined. Rather, individualized prophylaxis has been suggested to improve both patient safety and resource utilization. However, uptake of this approach has been hampered by the demanding sampling schedules and complex calculations required to obtain individual estimates of pharmacokinetic (PK) parameters. The use of population pharmacokinetics (PopPK) can alleviate this burden by reducing the number of plasma samples required for accurate estimation, but few tools incorporating this approach are readily available to clinicians. Objective The Web-accessible Population Pharmacokinetic Service - Hemophilia (WAPPS-Hemo) project aims to bridge this gap by providing a Web-accessible service for the reliable estimation of individual PK parameters from only a few patient samples. This service is predicated on the development of validated brand-specific PopPK models. Methods We describe the data analysis plan for the development and evaluation of each PopPK model to be incorporated into the WAPPS-Hemo platform. The data sources and structure of the dataset are discussed first, followed by the procedures for handling both data below limit of quantification (BLQ) and absence of such BLQ data. Next, we outline the strategies for building the appropriate structural and covariate models, including the possible need for a process algorithm when PK behavior varies between subjects or significant covariates are not provided. Prior to use in a prospective manner, the models will undergo extensive evaluation using a variety of techniques such as diagnostic plots, bootstrap analysis and cross-validation. Finally, we describe the incorporation of a validated PopPK model into the

  8. Applying WebMining on KM system

    Science.gov (United States)

    Shimazu, Keiko; Ozaki, Tomonobu; Furukawa, Koichi

    KM (Knowledge Management) systems have recently been adopted within the realm of enterprise management. On the other hand, data mining technology is widely acknowledged within information systems R&D divisions. In particular, acquisition of meaningful information from Web usage data has become one of the most exciting areas. In this paper, we employ a Web-based KM system and propose a framework for applying Web usage mining technology to KM data. As it turns out, task duration varies according to different user operations, such as referencing a table-of-contents page, downloading a target file, and writing to a bulletin board. This in turn makes it possible to easily predict the purpose of the user's task. Taking these observations into account, we segmented access log data manually and compared the results with those obtained by applying the constant-interval method. Next, we obtained a segmentation rule for Web access logs by applying a machine-learning algorithm to the manually segmented access logs as training data. The newly obtained segmentation rule was then compared with other known methods, including the time-interval method, by evaluating their segmentation results in terms of recall and precision rates, and it was shown that our rule attained the best results on both measures. Furthermore, the segmented data were fed to an association rule miner, and the obtained association rules were utilized to modify the Web structure.
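
    The contrast between the constant-interval method and an operation-aware segmentation rule can be sketched as follows; the timeout, operation names, and cutting condition are assumptions standing in for the rule the paper learns from training data.

```python
# Sketch of operation-aware session segmentation: unlike a constant timeout
# alone, certain operations (e.g. a download or a bulletin-board write) end a
# task even within the timeout. Thresholds and operation names are assumed.
def segment(events, timeout=300):
    """events: (timestamp_seconds, operation) in chronological order."""
    sessions, current, last_t = [], [], None
    for t, op in events:
        if current and (t - last_t > timeout or
                        current[-1][1] in {"download", "write_bbs"}):
            sessions.append(current)   # operation-aware or timeout cut
            current = []
        current.append((t, op))
        last_t = t
    if current:
        sessions.append(current)
    return sessions

log = [(0, "view_toc"), (40, "download"), (70, "view_toc"), (800, "write_bbs")]
print(len(segment(log)))   # 3 sessions under the operation-aware rule
```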

  9. An Agent-Based Fuzzy Collaborative Intelligence Approach for Predicting the Price of a Dynamic Random Access Memory (DRAM) Product

    Directory of Open Access Journals (Sweden)

    Toly Chen

    2012-05-01

    Full Text Available Predicting the price of a dynamic random access memory (DRAM) product is a critical task for the manufacturer. However, it is not easy to contend with the uncertainty of the price. In order to predict the price of a DRAM product effectively, an agent-based fuzzy collaborative intelligence approach is proposed in this study. In this approach, each agent uses a fuzzy neural network to predict the DRAM price based on its own view. The agent then communicates its view and forecasting results to other agents with the aid of an automatic collaboration mechanism. According to the experimental results, the overall performance was improved through the agents’ collaboration.

  10. An Efficient Cluster Based Web Object Filters From Web Pre-Fetching And Web Caching On Web User Navigation

    Directory of Open Access Journals (Sweden)

    A. K. Santra

    2012-05-01

    Full Text Available The World Wide Web is a distributed internet system that provides dynamic and interactive services, including online tutoring, video/audio conferencing and e-commerce, which generate heavy demand on network resources and web servers. This demand has grown very rapidly over the past few years, increasing the amount of traffic over the internet; as a result, network performance has become very slow. Web pre-fetching and caching is one of the effective solutions for reducing web access latency and improving quality of service. An existing model presented a cluster-based pre-fetching scheme that identified clusters of correlated Web pages based on users' access patterns. Web pre-fetching and caching yield significant improvements in the performance of Web infrastructure. In this paper, we present an efficient cluster-based Web object filter, built from Web pre-fetching and Web caching, to evaluate web users' navigation patterns and user preferences in product search. Clusters of web page objects are obtained from pre-fetched and cached web contents. User navigation is evaluated from the web cluster objects by similarity retrieval in subsequent user sessions. Web object filters are built from the interpretation of the cluster web pages related to unique users, discarding redundant pages. Ranking is done on users' web page product preferences across multiple sessions for each individual user. Performance is measured in terms of the objective function, the number of clusters and cluster accuracy.

  11. Cognitive-behavioral therapy for obsessive–compulsive disorder: access to treatment, prediction of long-term outcome with neuroimaging

    Directory of Open Access Journals (Sweden)

    O’Neill J

    2015-07-01

    Full Text Available Joseph O'Neill,1 Jamie D Feusner,2 1Division of Child Psychiatry, 2Division of Adult Psychiatry, UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA, USA Abstract: This article reviews issues related to a major challenge to the field for obsessive–compulsive disorder (OCD): improving access to cognitive-behavioral therapy (CBT). Patient-related barriers to access include the stigma of OCD and reluctance to take on the demands of CBT. Patient-external factors include the shortage of trained CBT therapists and the high costs of CBT. The second half of the review focuses on one partial, yet plausible, aid to improving access: prediction of long-term response to CBT, particularly using neuroimaging methods. Recent pilot data are presented revealing the potential of pretreatment resting-state functional magnetic resonance imaging and magnetic resonance spectroscopy of the brain to forecast OCD symptom severity up to 1 year after completing CBT. Keywords: follow-up, access to treatment, relapse, resting-state fMRI, magnetic resonance spectroscopy

  12. The benefit of non contrast-enhanced magnetic resonance angiography for predicting vascular access surgery outcome: a computer model perspective.

    Directory of Open Access Journals (Sweden)

    Maarten A G Merkx

    Full Text Available INTRODUCTION: Vascular access (VA) surgery, a prerequisite for hemodialysis treatment of end-stage renal disease (ESRD) patients, is hampered by complication rates, which are frequently related to flow enhancement. To assist in VA surgery planning, a patient-specific computer model for postoperative flow enhancement was developed. The purpose of this study is to assess the benefit of non-contrast-enhanced magnetic resonance angiography (NCE-MRA) data as patient-specific geometrical input for the model-based prediction of surgery outcome. METHODS: 25 ESRD patients were included in this study. All patients received an NCE-MRA examination of the upper extremity blood vessels in addition to routine ultrasound (US). Local arterial radii were assessed from NCE-MRA and converted to model input using a linear fit per artery. Venous radii were determined with US. The effect of radius measurement uncertainty on model predictions was accounted for by performing Monte-Carlo simulations. The resulting flow prediction interval of the computer model was compared with the postoperative flow obtained from US. Patients with no overlap between model-based prediction and postoperative measurement were further analyzed to determine whether an increase in geometrical detail improved computer model prediction. RESULTS: Overlap between postoperative flows and model-based predictions was obtained for 71% of patients. Detailed inspection of non-overlapping cases revealed that the geometrical details that could be assessed from NCE-MRA explained most of the differences, and moreover, upon addition of these details to the computer model the flow predictions improved. CONCLUSIONS: The results demonstrate clearly that NCE-MRA does provide valuable geometrical information for VA surgery planning. Therefore, it is recommended to use this modality, at least for patients at risk of local or global narrowing of the blood vessels as well as for patients for whom a US-based model

  13. Semantic Web Technologies for the Adaptive Web

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Ontologies and reasoning are the key terms brought into focus by the semantic web community. Formal representation of ontologies in a common data model on the web can be taken as a foundation for adaptive web technologies as well. This chapter describes how ontologies shared on the semantic web provide conceptualization for the links which are a main vehicle to access information on the web. The subject domain ontologies serve as constraints for generating only those links which are relevant for the domain a user is currently interested in. Furthermore, user model ontologies provide additional means for deciding which links to show, annotate, hide, generate, and reorder. The semantic web technologies provide means to formalize the domain ontologies and metadata created from them. The formalization enables reasoning for personalization decisions. This chapter describes which components

  14. Hong Kong CIE sky classification and prediction by accessible weather data and trees-based methods

    Science.gov (United States)

    Lou, S.; Li, D. H. W.; Lam, J. C.

    2016-08-01

    Solar irradiance and daylight illuminance are important for solar energy and daylighting designs. Recently, the International Commission on Illumination (CIE) adopted a range of sky conditions to represent the possible sky distributions, which are crucial to the estimation of solar irradiance and daylight illuminance on vertical building facades. The important issue is whether the sky conditions can be correctly identified from accessible variables. Previously, a number of climatic parameters, including sky luminance distributions, vertical solar irradiance and sky illuminance, were proposed for CIE sky classification. However, such data are not always available. This paper proposes an approach based on readily accessible data that have been systematically recorded by the local meteorological station for many years. The performance was evaluated using measured vertical solar irradiance and illuminance. The results show that the proposed approach is reliable for sky classification.
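
    A trees-based classifier of the kind the title refers to can be sketched with a random forest over routine weather-station variables; the feature set, sky-type labels, and data below are synthetic placeholders rather than the paper's actual inputs.

```python
# Sketch of trees-based CIE sky classification: routine weather variables
# (e.g. cloud amount, temperature, humidity, sunshine duration, visibility
# are plausible candidates) feeding a random forest over the 15 sky types.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))          # stand-in weather-station variables
y = rng.integers(1, 16, size=1000)      # CIE standard sky types 1..15

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))   # near chance on noise
```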

  15. Predicting Social Networking Site Use and Online Communication Practices among Adolescents: The Role of Access and Device Ownership

    Directory of Open Access Journals (Sweden)

    Drew P. Cingel

    2014-06-01

    Full Text Available Given adolescents' heavy social media use, this study examined a number of predictors of adolescent social media use, as well as predictors of online communication practices. Using data collected from a national sample of 467 adolescents between the ages of 13 and 17, results indicate that demographics, technology access, and technology ownership are related to social media use and communication practices. Specifically, females log onto and use more constructive communication practices on Facebook compared to males. Additionally, adolescents who own smartphones engage in more constructive online communication practices than those who share regular cell phones or those who do not have access to a cell phone. Overall, results imply that ownership of mobile technologies, such as smartphones and iPads, may be more predictive of social networking site use and online communication practices than general ownership of technology.

  16. Predicting Social Networking Site Use and Online Communication Practices among Adolescents: The Role of Access and Device Ownership

    Directory of Open Access Journals (Sweden)

    Drew P. Cingel

    2014-01-01

    Full Text Available Given adolescents' heavy social media use, this study examined a number of predictors of adolescent social media use, as well as predictors of online communication practices. Using data collected from a national sample of 467 adolescents between the ages of 13 and 17, results indicate that demographics, technology access, and technology ownership are related to social media use and communication practices. Specifically, females log onto and use more constructive communication practices on Facebook compared to males. Additionally, adolescents who own smartphones engage in more constructive online communication practices than those who share regular cell phones or those who do not have access to a cell phone. Overall, results imply that ownership of mobile technologies, such as smartphones and iPads, may be more predictive of social networking site use and online communication practices than general ownership of technology.

  17. Female gender predicts lower access and adherence to antiretroviral therapy in a setting of free healthcare

    Directory of Open Access Journals (Sweden)

    Hogg Robert S

    2011-04-01

    Full Text Available Abstract Background Barriers to HIV treatment among injection drug users (IDU) are a major public health concern. However, there remain few long-term studies investigating key demographic and behavioral factors - and gender differences in particular - that may pose barriers to antiretroviral therapy (ART), especially in settings with universal healthcare. We evaluated access and adherence to ART in a long-term cohort of HIV-positive IDU in a setting where medical care and antiretroviral therapy are provided free of charge through a universal healthcare system. Methods We evaluated baseline antiretroviral use and subsequent adherence to ART among a Canadian cohort of HIV-positive IDU. We used generalized estimating equation logistic regression to evaluate factors associated with 95% adherence to antiretroviral therapy, estimated based on prescription refill compliance. Results Between May 1996 and April 2008, 545 IDU participants were followed for a median of 23.8 months (interquartile range: 8.5-91.6), among whom 341 (63%) were male and 204 (37%) were female. Within the six-month period prior to the baseline interview, 133 (39%) men and 62 (30%) women were on ART (p = 0.042). After adjusting for clinical characteristics as well as drug use patterns measured longitudinally throughout follow-up, female gender was independently associated with a lower likelihood of being 95% adherent to ART (Odds Ratio [OR] = 0.70; 95% Confidence Interval: 0.53-0.93). Conclusions Despite universal access to free HIV treatment and medical care, female IDU were less likely to access and adhere to antiretroviral therapy, a finding that was independent of drug use and clinical characteristics. These data suggest that interventions to improve access to HIV treatment among IDU must be tailored to address the unique barriers to antiretroviral therapy faced by female IDU.

  18. FireMap: A Web Tool for Dynamic Data-Driven Predictive Wildfire Modeling Powered by the WIFIRE Cyberinfrastructure

    Science.gov (United States)

    Block, J.; Crawl, D.; Artes, T.; Cowart, C.; de Callafon, R.; DeFanti, T.; Graham, J.; Smarr, L.; Srivas, T.; Altintas, I.

    2016-12-01

    The NSF-funded WIFIRE project has designed a web-based wildfire modeling simulation and visualization tool called FireMap. The tool executes FARSITE to model fire propagation using dynamic weather and fire data, configuration settings provided by the user, and static topography and fuel datasets already built-in. Using GIS capabilities combined with scalable big data integration and processing, FireMap enables simple execution of the model with options for running ensembles by taking the information uncertainty into account. The results are easily viewable, sharable, repeatable, and can be animated as a time series. From these capabilities, users can model real-time fire behavior, analyze what-if scenarios, and keep a history of model runs over time for sharing with collaborators. Firemap runs FARSITE with national and local sensor networks for real-time weather data ingestion and High-Resolution Rapid Refresh (HRRR) weather for forecasted weather. The HRRR is a NOAA/NCEP operational weather prediction system comprised of a numerical forecast model and an analysis/assimilation system to initialize the model. It is run with a horizontal resolution of 3 km, has 50 vertical levels, and has a temporal resolution of 15 minutes. The HRRR requires an Environmental Data Exchange (EDEX) server to receive the feed and generate secondary products out of it for the modeling. UCSD's EDEX server, funded by NSF, makes high-resolution weather data available to researchers worldwide and enables visualization of weather systems and weather events lasting months or even years. The high-speed server aggregates weather data from the University Consortium for Atmospheric Research by way of a subscription service from the Consortium called the Internet Data Distribution system. These features are part of WIFIRE's long term goals to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. Although Firemap is a

  19. RNA FRABASE 2.0: an advanced web-accessible database with the capacity to search the three-dimensional fragments within RNA structures

    Directory of Open Access Journals (Sweden)

    Wasik Szymon

    2010-05-01

    Full Text Available Abstract Background Recent discoveries concerning novel functions of RNA, such as RNA interference, have contributed towards the growing importance of the field. In this respect, a deeper knowledge of complex three-dimensional RNA structures is essential to understand their new biological functions. A number of bioinformatic tools have been proposed to explore two major structural databases (PDB, NDB) in order to analyze various aspects of RNA tertiary structures. One of these tools is RNA FRABASE 1.0, the first web-accessible database with an engine for automatic search of 3D fragments within PDB-derived RNA structures. This search is based upon the user-defined RNA secondary structure pattern. In this paper, we present and discuss RNA FRABASE 2.0. This second version of the system represents a major extension of this tool in terms of providing new data and a wide spectrum of novel functionalities. An intuitively operated web server platform enables very fast user-tailored search of three-dimensional RNA fragments, their multi-parameter conformational analysis and visualization. Description RNA FRABASE 2.0 has stored information on 1565 PDB-deposited RNA structures, including all NMR models. The RNA FRABASE 2.0 search engine algorithms operate on the database of the RNA sequences and the new library of RNA secondary structures, coded in the dot-bracket format extended to hold multi-stranded structures and to cover residues whose coordinates are missing in the PDB files. The library of RNA secondary structures (and their graphics) is made available. A high level of efficiency of the 3D search has been achieved by introducing novel tools to formulate advanced searching patterns and to screen highly populated tertiary structure elements. RNA FRABASE 2.0 also stores data and conformational parameters in order to provide "on the spot" structural filters to explore the three-dimensional RNA structures. An instant visualization of the 3D RNA

  20. Pro web project management

    CERN Document Server

    Emond, Justin

    2012-01-01

    Caters to an under-served niche market of small and medium-sized web consulting projects; eases people's project management pain; uses a clear, simple, and accessible style that eschews theory and hates walls of text

  1. The Adversarial Route Analysis Tool: A Web Application

    Energy Technology Data Exchange (ETDEWEB)

    Casson, William H. Jr. [Los Alamos National Laboratory

    2012-08-02

    The Adversarial Route Analysis Tool is a type of Google Maps for adversaries. It's a web-based geospatial application similar to Google Maps. It helps the U.S. government plan operations that predict where an adversary might be. It's easily accessible and maintainable, and it's simple to use without much training.

  2. Web Design Matters

    Science.gov (United States)

    Mathews, Brian

    2009-01-01

    The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…

  3. Web Design Matters

    Science.gov (United States)

    Mathews, Brian

    2009-01-01

    The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…

  4. Design and Research on the Access Control Management System of Student Apartments Based on Web Services

    Institute of Scientific and Technical Information of China (English)

    汤新昌

    2013-01-01

    With the further development of network technology, Web Services technology is gradually being applied in various types of management systems. Web Services have the excellent properties of being component-model independent, platform independent and programming-language independent, which makes them suitable for system integration. This paper presents an access control management system for student apartments based on Web Services, explaining the system design in terms of the system architecture, the system design patterns and the key Web Services technologies. The data of a student apartment access control management system built on Web Services can be called directly by other application systems, supporting the integrated construction of university information systems.

  5. A Beta Version of the GIS-Enabled NASA Surface meteorology and Solar Energy (SSE) Web Site With Expanded Data Accessibility and Analysis Functionality for Renewable Energy and Other Applications

    Science.gov (United States)

    Stackhouse, P. W.; Barnett, A. J.; Tisdale, M.; Tisdale, B.; Chandler, W.; Hoell, J. M., Jr.; Westberg, D. J.; Quam, B.

    2015-12-01

    The NASA LaRC Atmospheric Science Data Center has deployed its beta version of an existing geophysical parameter website employing off-the-shelf Geographic Information System (GIS) tools. The revitalized web portal is entitled "Surface meteorology and Solar Energy" (SSE - https://eosweb.larc.nasa.gov/sse/) and has been supporting an estimated 175,000 users with baseline solar and meteorological parameters, as well as calculated parameters that enable feasibility studies for a wide range of renewable energy systems, particularly those featuring solar energy technologies. The GIS tools generate and store climatological averages using spatial queries and calculations (by parameter for the globe) in a spatial database, resulting in greater accessibility for government agencies, industry and individuals. The data parameters are produced from NASA science projects and reformulated specifically for the renewable energy industry and other applications. This first version includes: 1) a processed and reformulated set of baseline data parameters consistent with Esri and open GIS tools, 2) a limited set of Python-based functions to compute additional parameters "on the fly" from the baseline data products, 3) updates to the current web sites to enable web-based display of these parameters for plotting and analysis, and 4) output of data parameters in geoTiff, ASCII and netCDF data formats. The beta version is being actively reviewed through interaction with a group of collaborators from government and industry in order to test web site usability, display tools and features, and output data formats. This presentation provides an overview of this project and the current version of the new SSE-GIS web capabilities through to end usage. This project supports cross-agency and cross-organization interoperability and access to NASA SSE data products and OGC-compliant web services, and also aims to provide mobile platform…

  6. VisPort: Web-Based Access to Community-Specific Visualization Functionality [Shedding New Light on Exploding Stars: Visualization for TeraScale Simulation of Neutrino-Driven Supernovae (Final Technical Report)]

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M Pauline

    2007-06-30

    The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML, and introduces the notion of visualization informatics: the formalization and specialization of information related to the process and products of visualization.

  7. An Implementation of Semantic Web System for Information retrieval using J2EE Technologies.

    OpenAIRE

    B. Hemanth Kumar; Prof. M. Surendra Prasad Babu

    2011-01-01

    Accessing web resources (information) is an essential facility provided by web applications to everybody. The Semantic web is one of the systems that provide a facility to access resources through web service applications. The Semantic web and web services are new, emerging web-based technologies. An automatic information processing system can be developed by using the semantic web and web services, each having its own contribution within the context of developing web-based information systems and ap...

  8. Freiburg RNA Tools: a web server integrating IntaRNA, ExpaRNA and LocARNA

    OpenAIRE

    Smith, Cameron; Heyne, Steffen; Richter, Andreas S.; Will, Sebastian; Backofen, Rolf

    2010-01-01

    The Freiburg RNA tools web server integrates three tools for the advanced analysis of RNA in a common web-based user interface. The tools IntaRNA, ExpaRNA and LocARNA support the prediction of RNA–RNA interaction, exact RNA matching and alignment of RNA, respectively. The Freiburg RNA tools web server and the software packages of the stand-alone tools are freely accessible at http://rna.informatik.uni-freiburg.de.

  9. Prediction of highly expressed genes in microbes based on chromatin accessibility

    DEFF Research Database (Denmark)

    Willenbrock, Hanni; Ussery, David

    2007-01-01

    BACKGROUND: It is well known that gene expression is dependent on chromatin structure in eukaryotes and it is likely that chromatin can play a role in bacterial gene expression as well. Here, we use a nucleosomal position preference measure of anisotropic DNA flexibility to predict highly expressed...... and ribosomal RNA are encoded by DNA having significantly lower position preference values than other genes in fast-replicating microbes. CONCLUSION: This insight into DNA structure-dependent gene expression in microbes may be exploited for predicting the expression of non-translated genes such as non...

  10. The Salmonella In Silico Typing Resource (SISTR): An Open Web-Accessible Tool for Rapidly Typing and Subtyping Draft Salmonella Genome Assemblies.

    Science.gov (United States)

    Yoshida, Catherine E; Kruczkiewicz, Peter; Laing, Chad R; Lingohr, Erika J; Gannon, Victor P J; Nash, John H E; Taboada, Eduardo N

    2016-01-01

    For nearly 100 years serotyping has been the gold standard for the identification of Salmonella serovars. Despite the increasing adoption of DNA-based subtyping approaches, serotype information remains a cornerstone in food safety and public health activities aimed at reducing the burden of salmonellosis. At the same time, recent advances in whole-genome sequencing (WGS) promise to revolutionize our ability to perform advanced pathogen characterization in support of improved source attribution and outbreak analysis. We present the Salmonella In Silico Typing Resource (SISTR), a bioinformatics platform for rapidly performing simultaneous in silico analyses for several leading subtyping methods on draft Salmonella genome assemblies. In addition to performing serovar prediction by genoserotyping, this resource integrates sequence-based typing analyses for: Multi-Locus Sequence Typing (MLST), ribosomal MLST (rMLST), and core genome MLST (cgMLST). We show how phylogenetic context from cgMLST analysis can supplement the genoserotyping analysis and increase the accuracy of in silico serovar prediction to over 94.6% on a dataset comprised of 4,188 finished genomes and WGS draft assemblies. In addition to allowing analysis of user-uploaded whole-genome assemblies, the SISTR platform incorporates a database comprising over 4,000 publicly available genomes, allowing users to place their isolates in a broader phylogenetic and epidemiological context. The resource incorporates several metadata driven visualizations to examine the phylogenetic, geospatial and temporal distribution of genome-sequenced isolates. As sequencing of Salmonella isolates at public health laboratories around the world becomes increasingly common, rapid in silico analysis of minimally processed draft genome assemblies provides a powerful approach for molecular epidemiology in support of public health investigations. Moreover, this type of integrated analysis using multiple sequence-based methods of sub

  11. Web Pre-fetching Model Based on Concept Association Network

    Institute of Scientific and Technical Information of China (English)

    XU Huanqing; WANG Yongcheng

    2004-01-01

    With the enormous growth of information on the web, the Internet has become one of the most important information sources. However, limited by network bandwidth, users often suffer long waiting times. Web pre-fetching is one of the most popular strategies proposed for reducing perceived access delay and improving the service quality of web servers. This paper presents a pre-fetching model based on a concept association network, which mines the concept association relationships implied in user access patterns and employs online learning and offline mining algorithms to construct a user-oriented concept association network. Using the concept association network, the pre-fetching model makes semantics-based pre-fetching decisions on the client side. This model implements concept-based analysis of user access patterns and improves prediction accuracy. Experimental results show that the proposed pre-fetching model has better overall performance.
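
    As a rough illustration of the idea (not the paper's exact algorithm), the sketch below mines pairwise successor counts from access sessions into an association network and pre-fetches the most likely next concept; the session data is invented.

        from collections import defaultdict

        def build_network(sessions):
            net = defaultdict(lambda: defaultdict(int))
            for s in sessions:
                for a, b in zip(s, s[1:]):
                    net[a][b] += 1  # count: concept b observed right after concept a
            return net

        def prefetch(net, current, k=1):
            succ = net[current]
            return sorted(succ, key=succ.get, reverse=True)[:k]

        sessions = [["news", "sports", "scores"],
                    ["news", "sports", "video"],
                    ["news", "weather"]]
        net = build_network(sessions)
        print(prefetch(net, "news"))  # -> ['sports']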

  12. Ontology Based Access Control

    Directory of Open Access Journals (Sweden)

    Özgü CAN

    2010-02-01

    Full Text Available As computer technologies become pervasive, the need for access control mechanisms grows. The purpose of access control is to limit the operations that a computer system user can perform; access control thus prevents activity that could lead to a security breach. Access control mechanisms are also needed for the success of the Semantic Web, which allows machines to share and reuse information by using formal semantics so that machines can communicate with other machines. An access control mechanism specifies constraints that must be satisfied before a user performs an operation, in order to provide a secure Semantic Web. In this work, unlike traditional access control mechanisms, an "Ontology Based Access Control" mechanism has been developed using Semantic Web based policies. In this mechanism, ontologies are used to model the access control knowledge, and domain knowledge is used to create policy ontologies.
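
    A library-free sketch of the core idea, with invented class names and policies: permissions attach to ontology classes, and a request is granted if the resource's class, or any of its ancestor classes in the ontology, allows the requested action for the user's role.

        # Toy ontology: subclass axioms and class-level permissions (illustrative only).
        SUBCLASS_OF = {"ExamRecord": "StudentRecord", "StudentRecord": "Document"}
        POLICY = {("Document", "read"): {"staff", "student"},
                  ("StudentRecord", "write"): {"staff"}}

        def ancestors(cls):
            while cls:
                yield cls
                cls = SUBCLASS_OF.get(cls)

        def allowed(role, action, resource_class):
            # Subsumption reasoning: a permission granted on a superclass is inherited.
            return any(role in POLICY.get((c, action), set())
                       for c in ancestors(resource_class))

        print(allowed("student", "read", "ExamRecord"))   # True, inherited from Document
        print(allowed("student", "write", "ExamRecord"))  # False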

  13. Web services for distributed and interoperable hydro-information systems

    Science.gov (United States)

    Horak, J.; Orlik, A.; Stromsky, J.

    2008-03-01

    Web services support the integration and interoperability of Web-based applications and enable machine-to-machine interaction. The concepts of web services and open distributed architecture were applied to the development of T-DSS, the prototype customised for web based hydro-information systems. T-DSS provides mapping services, database related services and access to remote components, with special emphasis placed on the output flexibility (e.g. multilingualism), where SOAP web services are mainly used for communication. The remote components are represented above all by remote data and mapping services (e.g. meteorological predictions), modelling and analytical systems (currently HEC-HMS, MODFLOW and additional utilities), which support decision making in water management.
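
    Since the abstract states that SOAP web services carry most of the communication, a client interaction might look like the sketch below; the WSDL location and operation name are hypothetical, as T-DSS's actual interface is not documented here.

        from zeep import Client  # zeep: a widely used Python SOAP client

        client = Client("http://example.org/tdss/mapping?wsdl")  # hypothetical WSDL endpoint
        # Hypothetical operation returning a runoff forecast for a gauging station:
        # forecast = client.service.GetForecast(stationId="CZ-0001", lang="en")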

  14. Web services for distributed and interoperable hydro-information systems

    Directory of Open Access Journals (Sweden)

    J. Horak

    2008-03-01

    Full Text Available Web services support the integration and interoperability of Web-based applications and enable machine-to-machine interaction. The concepts of web services and open distributed architecture were applied to the development of T-DSS, the prototype customised for web based hydro-information systems. T-DSS provides mapping services, database related services and access to remote components, with special emphasis placed on the output flexibility (e.g. multilingualism), where SOAP web services are mainly used for communication. The remote components are represented above all by remote data and mapping services (e.g. meteorological predictions), modelling and analytical systems (currently HEC-HMS, MODFLOW and additional utilities), which support decision making in water management.

  15. Web services for distributed and interoperable hydro-information systems

    Directory of Open Access Journals (Sweden)

    J. Horak

    2007-06-01

    Full Text Available Web services support the integration and interoperability of Web-based applications and enable machine-to-machine interaction. The concepts of web services and open distributed architecture were applied to the development of T-DSS, the prototype customised for web based hydro-information systems. T-DSS provides mapping services, database related services and access to remote components, with special emphasis placed on output flexibility (e.g. multilingualism), where SOAP web services are mainly used for communication. The remote components are represented above all by remote data and mapping services (e.g. meteorological predictions), modelling and analytical systems (currently HEC-HMS, MODFLOW and additional utilities), which support decision making in water management.

  16. Distributing flight dynamics products via the World Wide Web

    Science.gov (United States)

    Woodard, Mark; Matusow, David

    1996-01-01

    The NASA Flight Dynamics Products Center (FDPC), which makes available selected operations products via the World Wide Web, is reported on. The FDPC can be accessed from any host machine connected to the Internet. It is a multi-mission service which provides Internet users with unrestricted access to the following standard products: antenna contact predictions; ground tracks; orbit ephemerides; mean and osculating orbital elements; earth sensor sun and moon interference predictions; space flight tracking data network summaries; and Shuttle transport system predictions. Several scientific databases are available through the service.

  17. Swarm Intelligence Based Topic Identification for Sessions in Web Access Log

    Institute of Scientific and Technical Information of China (English)

    方奇; 刘奕群; 张敏; 茹立云; 马少平

    2011-01-01

    A session in a Web access log denotes a continuous time-ordered sequence of a particular user's browsing behavior within a certain time window. A topic of a session represents a hidden browsing intent of the Web user. Identifying topic-based processing units that reflect user intent within a session is a fundamental step in analyzing user access behavior. Existing work mainly focuses on boundary detection and cannot handle the common situation in which user intents overlap within one session. To solve this problem, this paper first formally re-defines the concepts of session and topic, then proposes the task of largest segmentation and designs a session topic identification algorithm based on the crowd wisdom of Web users. Experiments on large-scale real-world Web access logs validate the effectiveness of the algorithm.

  18. Target prediction for an open access set of compounds active against Mycobacterium tuberculosis.

    Directory of Open Access Journals (Sweden)

    Francisco Martínez-Jiménez

    Full Text Available Mycobacterium tuberculosis, the causative agent of tuberculosis (TB), infects an estimated two billion people worldwide and is the leading cause of mortality due to infectious disease. The development of new anti-TB therapeutics is required, because of the emergence of multi-drug resistant strains as well as co-infection with other pathogens, especially HIV. Recently, the pharmaceutical company GlaxoSmithKline published the results of a high-throughput screen (HTS) of their two million compound library for anti-mycobacterial phenotypes. The screen revealed 776 compounds with significant activity against the M. tuberculosis H37Rv strain, including a subset of 177 prioritized compounds with high potency and low in vitro cytotoxicity. The next major challenge is the identification of the target proteins. Here, we use a computational approach that integrates historical bioassay data, chemical properties and structural comparisons of selected compounds to propose their potential targets in M. tuberculosis. We predicted 139 target–compound links, providing a necessary basis for further studies to characterize the mode of action of these compounds. The results from our analysis, including the predicted structural models, are available to the wider scientific community in the open source mode, to encourage further development of novel TB therapeutics.

  19. OceanNOMADS: Real-time and retrospective access to operational U.S. ocean prediction products

    Science.gov (United States)

    Harding, J. M.; Cross, S. L.; Bub, F.; Ji, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration (NOAA) National Operational Model Archive and Distribution System (NOMADS) provides both real-time and archived atmospheric model output from servers at the National Centers for Environmental Prediction (NCEP) and National Climatic Data Center (NCDC), respectively (http://nomads.ncep.noaa.gov/txt_descriptions/marRutledge-1.pdf). The NOAA National Ocean Data Center (NODC), with NCEP, is developing a complementary capability called OceanNOMADS for operational ocean prediction models. An NCEP ftp server currently provides real-time ocean forecast output (http://www.opc.ncep.noaa.gov/newNCOM/NCOM_currents.shtml), with retrospective access through NODC. A joint effort between the Northern Gulf Institute (NGI; a NOAA Cooperative Institute) and the NOAA National Coastal Data Development Center (NCDDC; a division of NODC) created the developmental version of the retrospective OceanNOMADS capability (http://www.northerngulfinstitute.org/edac/ocean_nomads.php) under the NGI Ecosystem Data Assembly Center (EDAC) project (http://www.northerngulfinstitute.org/edac/). Complementary funding support for the developmental OceanNOMADS from the U.S. Integrated Ocean Observing System (IOOS), through the Southeastern University Research Association (SURA) Model Testbed (http://testbed.sura.org/), this past year provided NODC the analogue that facilitated the creation of an NCDDC production version of OceanNOMADS (http://www.ncddc.noaa.gov/ocean-nomads/). Access tool development and storage of initial archival data sets occur on the NGI/NCDDC developmental servers, with transition to NODC/NCDDC production servers as the model archives mature and operational space and distribution capability grow. Navy operational global ocean forecast subsets for U.S. waters comprise the initial ocean prediction fields resident on the NCDDC production server. The NGI/NCDDC developmental server currently includes the Naval Research Laboratory Inter-America Seas…

  20. Analysis and prediction of pest dynamics in an agroforestry context using Tiko'n, a generic tool to develop food web models

    Science.gov (United States)

    Rojas, Marcela; Malard, Julien; Adamowski, Jan; Carrera, Jaime Luis; Maas, Raúl

    2017-04-01

    While it is known that climate change will impact future plant-pest population dynamics, potentially affecting crop damage, agroforestry, with its enhanced biodiversity, is said to reduce outbreaks of pest insects by providing natural enemies for the control of pest populations. This premise is known in the literature as the natural enemy hypothesis and has been widely studied qualitatively. However, disagreement still exists on whether biodiversity enhancement reduces pest outbreaks, showing the need to quantitatively understand the mechanisms behind the interactions between pests and natural enemies, also known as trophic interactions. Crop pest models that study insect population dynamics in agroforestry contexts are very rare, and pest models that take trophic interactions into account are even rarer. This may be due to the difficulty of representing complex food webs in a quantifiable model. There is therefore a need for validated food web models that allow users to predict the response of these webs to changes in climate in agroforestry systems. In this study we present Tiko'n, a Python-based software package whose API allows users to rapidly build and validate trophic web models; the program uses a Bayesian inference approach to calibrate the models against field data, allowing the reuse of literature data from various sources and reducing the need for extensive field data collection. Tiko'n was run using coffee leaf miner (Leucoptera coffeella) and associated parasitoid data from a shaded coffee plantation, showing the mechanisms of insect population dynamics within a tri-trophic food web in an agroforestry system.
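
    Tiko'n's actual API is not shown in the abstract; the sketch below is only a bare-bones, discrete-time illustration of the tri-trophic idea being modelled (a leaf miner limited by foliage and suppressed by a parasitoid), with all parameter values invented.

        def step(pest, para, foliage=1.0, r=0.4, K=500.0, a=0.01, c=0.5, m=0.2):
            attacked = a * pest * para                 # mass-action trophic link
            pest_next = pest + r * pest * foliage * (1 - pest / K) - attacked
            para_next = para + c * attacked - m * para
            return max(pest_next, 0.0), max(para_next, 0.0)

        pest, para = 50.0, 5.0                         # initial densities (invented)
        for week in range(12):
            pest, para = step(pest, para)
            print(f"week {week + 1}: pest={pest:6.1f}  parasitoid={para:5.1f}")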

  1. Context dependent reference states of solvent accessibility derived from native protein structures and assessed by predictability analysis

    Directory of Open Access Journals (Sweden)

    Ahmad Shandar

    2009-04-01

    Full Text Available Abstract Background Solvent accessibility (ASA) of amino acid residues is often transformed from absolute values of exposed surface area to normalized relative values. This normalization is typically attained by assuming a highest-exposure conformation based on the extended state of a residue when it is surrounded by Ala or Gly on both sides, i.e. the Ala-X-Ala or Gly-X-Gly solvent-exposed area. The exact sequence context, the folding state of the residues, and the actual environment of a folded protein, which do impose additional constraints on the highest possible (or highest observed) values of ASA, are currently ignored. Here, we analyze the statistics of these constraints and examine how the normalization of absolute ASA values using the context-dependent Highest Observed ASA (HOA) instead of the context-free extended state ASA (ESA) of residues can influence the performance of sequence-based prediction of solvent accessibility. Characterization of buried and exposed states of residues based on this normalization has also been shown to provide better enrichment of DNA-binding sites in exposed residues. Results We compiled the statistics of the highest observed ASA (HOA) of residues in their different contexts and analyzed their distribution in all 400 possible combinations for each residue type. We observe that many tripeptides are more exposed than ESA and that HOA residues are often found in turn, coil and bend conformations. On the other hand, several residues are never observed in an exposure state close to ESA values. A neural network trained with HOA-normalized data outperforms the one trained with ESA-normalized values. However, the improvements are subtle in some residues, while they are more significant in others. Conclusion HOA-based normalization of solvent accessibility from native structures is proposed and shows improvement in sequence-based predictability, as well as enrichment in interface residues on the surface. There may still be some…
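
    A worked sketch of the normalization being compared: relative ASA is the absolute exposed area divided by a reference maximum, either the context-free extended-state (Gly-X-Gly) value or a context-dependent highest-observed value. The reference numbers below are rough, invented stand-ins, not the paper's tables.

        ESA = {"ALA": 129.0, "LYS": 236.0}        # illustrative Gly-X-Gly maxima, in square Angstroms
        HOA = {("GLY", "ALA", "GLY"): 120.0,      # hypothetical context-dependent maxima
               ("TRP", "LYS", "TRP"): 205.0}

        def relative_asa(abs_asa, res, left, right):
            return abs_asa / ESA[res], abs_asa / HOA[(left, res, right)]

        print(relative_asa(98.0, "ALA", "GLY", "GLY"))   # HOA-normalized value is larger
        print(relative_asa(150.0, "LYS", "TRP", "TRP"))  # bulky neighbours lower the maximum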

  2. Accurate microRNA target prediction using detailed binding site accessibility and machine learning on proteomics data

    Directory of Open Access Journals (Sweden)

    Martin Reczko

    2012-01-01

    Full Text Available MicroRNAs (miRNAs) are a class of small regulatory genes regulating gene expression by targeting messenger RNA. Though computational methods for miRNA target prediction are the prevailing means to analyze their function, they still miss a large fraction of the targeted genes and additionally predict a large number of false positives. Here we introduce a novel algorithm called DIANA-microT-ANN which combines multiple novel target site features through an artificial neural network (ANN) and is trained using recently published high-throughput data measuring the change of protein levels after miRNA overexpression, providing positive and negative targeting examples. The features characterizing each miRNA recognition element include binding structure, conservation level and a specific profile of structural accessibility. The ANN is trained to integrate the features of each recognition element along the 3' untranslated region into a targeting score, reproducing the relative repression fold change of the protein. Tested on two different sets, the algorithm outperforms other widely used algorithms and also predicts a significant number of unique and reliable targets not predicted by the other methods. For 542 human miRNAs DIANA-microT-ANN predicts 120,000 targets not provided by TargetScan 5.0. The algorithm is freely available at http://microrna.gr/microT-ANN.

  3. A web service based tool to plan atmospheric research flights

    Directory of Open Access Journals (Sweden)

    M. Rautenhaus

    2011-09-01

    Full Text Available We present a web service based tool for the planning of atmospheric research flights. The tool provides online access to horizontal maps and vertical cross-sections of numerical weather prediction data and in particular allows the interactive design of a flight route in direct relation to the predictions. It thereby fills a crucial gap in the set of currently available tools for using data from numerical atmospheric models for research flight planning. A distinct feature of the tool is its lightweight, web service based architecture, requiring only commodity hardware and a basic Internet connection for deployment. Access to visualisations of prediction data is achieved by using an extended version of the Open Geospatial Consortium Web Map Service (WMS) standard, a technology that has gained increased attention in meteorology in recent years. With the WMS approach, we avoid the transfer of large forecast model output datasets while enabling on-demand generated visualisations of the predictions at campaign sites with limited Internet bandwidth. Usage of the Web Map Service standard also enables access to third-party sources of georeferenced data. We have implemented the software using the open-source programming language Python. In the present article, we describe the architecture of the tool. As an example application, we discuss a case study research flight planned for the scenario of the 2010 Eyjafjalla volcano eruption. Usage and implementation details are provided as Supplement.
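
    Because the tool's map access goes through the OGC Web Map Service interface, a client request is an ordinary HTTP GET with standard WMS 1.1.1 parameters, roughly as below; the server URL and layer name are placeholders, not the tool's actual endpoint.

        import requests

        params = {
            "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
            "LAYERS": "forecast_cloud_cover",            # hypothetical layer name
            "SRS": "EPSG:4326", "BBOX": "-30,30,40,80",  # lon/lat extent (minx,miny,maxx,maxy)
            "WIDTH": "800", "HEIGHT": "600", "FORMAT": "image/png",
        }
        r = requests.get("https://example.org/wms", params=params, timeout=30)
        with open("forecast_layer.png", "wb") as f:
            f.write(r.content)  # rendered forecast map, ready for display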

  4. The Salmonella In Silico Typing Resource (SISTR: An Open Web-Accessible Tool for Rapidly Typing and Subtyping Draft Salmonella Genome Assemblies.

    Directory of Open Access Journals (Sweden)

    Catherine E Yoshida

    Full Text Available For nearly 100 years serotyping has been the gold standard for the identification of Salmonella serovars. Despite the increasing adoption of DNA-based subtyping approaches, serotype information remains a cornerstone in food safety and public health activities aimed at reducing the burden of salmonellosis. At the same time, recent advances in whole-genome sequencing (WGS) promise to revolutionize our ability to perform advanced pathogen characterization in support of improved source attribution and outbreak analysis. We present the Salmonella In Silico Typing Resource (SISTR), a bioinformatics platform for rapidly performing simultaneous in silico analyses for several leading subtyping methods on draft Salmonella genome assemblies. In addition to performing serovar prediction by genoserotyping, this resource integrates sequence-based typing analyses for: Multi-Locus Sequence Typing (MLST), ribosomal MLST (rMLST), and core genome MLST (cgMLST). We show how phylogenetic context from cgMLST analysis can supplement the genoserotyping analysis and increase the accuracy of in silico serovar prediction to over 94.6% on a dataset comprised of 4,188 finished genomes and WGS draft assemblies. In addition to allowing analysis of user-uploaded whole-genome assemblies, the SISTR platform incorporates a database comprising over 4,000 publicly available genomes, allowing users to place their isolates in a broader phylogenetic and epidemiological context. The resource incorporates several metadata driven visualizations to examine the phylogenetic, geospatial and temporal distribution of genome-sequenced isolates. As sequencing of Salmonella isolates at public health laboratories around the world becomes increasingly common, rapid in silico analysis of minimally processed draft genome assemblies provides a powerful approach for molecular epidemiology in support of public health investigations. Moreover, this type of integrated analysis using multiple sequence…

  5. Parameter selection for and implementation of a web-based decision-support tool to predict extubation outcome in premature infants

    Directory of Open Access Journals (Sweden)

    Hulsey Thomas C

    2006-03-01

    Full Text Available Abstract Background Approximately 30% of intubated preterm infants with respiratory distress syndrome (RDS) will fail attempted extubation, requiring reintubation and mechanical ventilation. Although ventilator technology and monitoring of premature infants have improved over time, optimal extubation remains challenging. Furthermore, extubation decisions for premature infants require complex informational processing, techniques implicitly learned through clinical practice. Computer-aided decision-support tools would benefit inexperienced clinicians, especially during peak neonatal intensive care unit (NICU) census. Methods A five-step procedure was developed to identify predictive variables. Clinical expert (CE) thought processes comprised one model. Variables from that model were used to develop two mathematical models for the decision-support tool: an artificial neural network (ANN) and a multivariate logistic regression model (MLR). The ranking of the variables in the three models was compared using the Wilcoxon Signed Rank Test. The best performing model was used in a web-based decision-support tool with a user interface implemented in Hypertext Markup Language (HTML) and the mathematical model employing the ANN. Results CEs identified 51 potentially predictive variables for extubation decisions for an infant on mechanical ventilation. Comparisons of the three models showed a significant difference between the ANN and the CE (p = 0.0006). Of the original 51 potentially predictive variables, the 13 most predictive variables were used to develop an ANN as a web-based decision tool. The ANN processes user-provided data and returns the prediction 0–1 score and a novelty index. The user then selects the most appropriate threshold for categorizing the prediction as a success or failure. Furthermore, the novelty index, indicating the similarity of the test case to the training cases, allows the user to assess the confidence level of the prediction with…

  6. Medium-Range Predictability of Contrail-Cirrus Demonstrated during Experiments ML-CIRRUS and ACCESS-II

    Science.gov (United States)

    Schumann, U.

    2015-12-01

    The Contrail Cirrus Prediction model CoCiP (doi:10.5194/gmd-5-543-2012) has been applied quasi-operationally to predict contrails for flight planning of ML-CIRRUS (C. Voigt, DLR, et al.) in Europe and for ACCESS II in California (B. Anderson, NASA, et al.) in March-May 2014. The model uses NWP data from ECMWF and past air traffic data (actual traffic data are used for analysis). The forecasts provided a sequence of hourly forecast maps of contrail cirrus optical depth for 3.5 days, every 12 h. CoCiP has been compared to observations before, e.g. within a global climate-aerosol-contrail model (Schumann, Penner et al., ACPD, 2015, doi:10.5194/acpd-15-19553-2015). Good predictions would allow for climate-optimal routing (see, e.g., US patent by Mannstein and Schumann, US 2012/0173147 A1). The predictions are tested by: 1) local eyewitness reports and photos, 2) satellite-observed cloudiness, 3) autocorrelation analysis of predictions for various forecast periods, and 4) comparisons of computed with observed optical depth from COCS (doi:10.5194/amt-7-3233-2014, 2014) using IR METEOSAT-SEVIRI observations over Europe. The results demonstrate medium-range predictability of contrail cirrus to a useful degree for given traffic, soot emissions, and high-quality NWP data. A growing set of satellite, Lidar, and in-situ data from ML-CIRRUS and ACCENT is becoming available and will be used to further test the forecast quality. The autocorrelation of optical depth predictions is near 70% for 3-day forecasts for Europe (outside times with high Sahara dust loads), and only slightly smaller for the continental USA. Contrail cirrus is abundant over Europe and the USA. More than 1/3 of all cirrus measured with the research aircraft HALO during ML-CIRRUS was impacted by contrails. The radiative forcing (RF) is strongly daytime- and ambience-dependent. The net annual mean RF, based on our global studies, may reach up to 0.08 W/m2 globally, and may well exceed 1 W/m2 regionally, with maximum over Europe…

  7. Barriers to access to treatment for mothers with postpartum depression in primary health care centers: a predictive model

    Directory of Open Access Journals (Sweden)

    Pablo Martínez

    2016-01-01

    Full Text Available Objective: to develop a predictive model to evaluate the factors that modify access to treatment for Postpartum Depression (PPD). Methods: prospective study with mothers who participated in the monitoring of child health in primary care centers. For the initial assessment and during 3 months, the following were considered: sociodemographic data, gyneco-obstetric data, data on the services provided, depressive symptoms according to the Edinburgh Postnatal Depression Scale (EPDS), and quality of life according to the Short Form-36 Health Status Questionnaire (SF-36). The diagnosis of depression was made based on MINI. Mothers diagnosed with PPD in the initial evaluation were followed up. Results: a statistical model was constructed to determine the factors that prevented access to treatment, which consisted of: item 2 of the EPDS (OR 0.43, 95% CI: 0.20-0.93), item 5 (OR 0.48, 95% CI: 0.21-1.09), and previous history of depression treatment (OR 0.26, 95% CI: 0.61-1.06). Area under the ROC curve for the model = 0.79; p-value for the Hosmer-Lemeshow test = 0.73. Conclusion: a simple, well-standardized and accurate profile was elaborated, which advises that nurses should pay attention to those mothers diagnosed with PPD who present low/no anhedonia (item 2 of EPDS), scarce/no panic/fear (item 5 of EPDS), and no history of depression treatment, as it is likely that these women will not initiate treatment.
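
    To see how such a logistic model turns the reported odds ratios into a predicted probability of not accessing treatment, consider the sketch below; the intercept is hypothetical, since the abstract does not report it, and the coefficients are simply the natural logarithms of the reported ORs.

        import math

        OR = {"epds_item2": 0.43, "epds_item5": 0.48, "prior_treatment": 0.26}
        INTERCEPT = 0.8  # hypothetical; not reported in the abstract

        def p_no_access(x):  # x: dict of 0/1 indicators per predictor
            logit = INTERCEPT + sum(math.log(OR[k]) * v for k, v in x.items())
            return 1.0 / (1.0 + math.exp(-logit))

        print(p_no_access({"epds_item2": 1, "epds_item5": 1, "prior_treatment": 0}))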

  8. Predicting Middle School Students' Use of Web 2.0 Technologies out of School Using Home and School Technological Variables

    Science.gov (United States)

    Hughes, Joan E.; Read, Michelle F.; Jones, Sara; Mahometa, Michael

    2015-01-01

    This study used multiple regression to identify predictors of middle school students' Web 2.0 activities out of school, a construct composed of 15 technology activities. Three middle schools participated, where sixth- and seventh-grade students completed a questionnaire. Independent predictor variables included three demographic and five computer…

  9. Predicting Student Performance in Web-Based Distance Education Courses Based on Survey Instruments Measuring Personality Traits and Technical Skills

    Science.gov (United States)

    Hall, Michael

    2008-01-01

    Two common web-based surveys, "Is Online Learning Right for Me?' and "What Technical Skills Do I Need?", were combined into a single survey instrument and given to 228 on-campus and 83 distance education students. The students were enrolled in four different classes (business, computer information services, criminal justice, and…

  10. Survey of Techniques for Deep Web Source Selection and Surfacing the Hidden Web Content

    OpenAIRE

    Khushboo Khurana; M.B. Chandak

    2016-01-01

    Large and continuously growing dynamic web content has created new opportunities for large-scale data analysis in recent years. There is a huge amount of information that traditional web crawlers cannot access, since they use a link analysis technique by which only the surface web can be accessed. Traditional search engine crawlers require the web pages to be linked to other pages via hyperlinks, causing a large amount of web data to be hidden from the crawlers. Enormous data is available in...

  12. Association Rule Mining for Web Recommendation

    Directory of Open Access Journals (Sweden)

    R. Suguna

    2012-10-01

    Full Text Available Web usage mining is the application of web mining to discover useful patterns from the web in order to understand and analyze the behavior of web users and web-based applications. It is an emerging research trend for today's researchers. It deals entirely with web log files, which contain the user website access information. It is interesting to analyze and understand user behavior regarding web access. Web usage mining normally has three categories: 1. Preprocessing, 2. Pattern Discovery and 3. Pattern Analysis. This paper proposes association rule mining algorithms for better Web Recommendation and Web Personalization. Web recommendation systems play an important role in understanding customers' behavior and interests, improving customer convenience, increasing service provider profits and anticipating future needs.
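
    As a minimal sketch of the technique (with invented session data), the code below counts pairwise co-occurrences across access sessions and emits rules A -> B whose confidence passes a threshold, the basis for recommending page B to a visitor of page A.

        from collections import Counter
        from itertools import permutations

        sessions = [{"home", "catalog", "cart"}, {"home", "catalog"},
                    {"home", "blog"}, {"catalog", "cart"}]
        item = Counter(p for s in sessions for p in s)
        pair = Counter(p for s in sessions for p in permutations(s, 2))

        MIN_CONF = 0.6
        for (a, b), n in pair.items():
            conf = n / item[a]  # confidence of the rule a -> b
            if conf >= MIN_CONF:
                print(f"{a} -> {b}  support={n / len(sessions):.2f}  confidence={conf:.2f}")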

  13. Development and Validation of a Preprocedural Risk Score to Predict Access Site Complications After Peripheral Vascular Interventions Based on the Vascular Quality Initiative Database

    Directory of Open Access Journals (Sweden)

    Daniel Ortiz

    2016-01-01

    Full Text Available Purpose: Access site complications following peripheral vascular intervention (PVI) are associated with prolonged hospitalization and increased mortality. Prediction of access site complication risk may optimize PVI care; however, there is no tool designed for this. We aimed to create a clinical scoring tool to stratify patients according to their risk of developing access site complications after PVI. Methods: The Society for Vascular Surgery's Vascular Quality Initiative database yielded 27,997 patients who had undergone PVI at 131 North American centers. Clinically and statistically significant preprocedural risk factors associated with in-hospital, post-PVI access site complications were included in a multivariate logistic regression model, with access site complications as the outcome variable. A predictive model was developed with a random sample of 19,683 (70%) PVI procedures and validated in 8,314 (30%). Results: Access site complications occurred in 939 (3.4%) patients. The risk tool predictors are female gender, age > 70 years, white race, bedridden ambulatory status, insulin-treated diabetes mellitus, prior minor amputation, procedural indication of claudication, and nonfemoral arterial access site (model c-statistic = 0.638). Of these predictors, insulin-treated diabetes mellitus and prior minor amputation were protective of access site complications. The discriminatory power of the risk model was confirmed by the validation dataset (c-statistic = 0.6139). Higher risk scores correlated with increased frequency of access site complications: 1.9% for low risk, 3.4% for moderate risk and 5.1% for high risk. Conclusions: The proposed clinical risk score based on eight preprocedural characteristics is a tool to stratify patients at risk for post-PVI access site complications. The risk score may assist physicians in identifying patients at risk for access site complications and selection of patients who may benefit from bleeding avoidance…
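
    The published point weights and band cutoffs are not given in the abstract, so the sketch below applies invented values only to illustrate how a preprocedural score of this kind stratifies a patient, including the two protective predictors carrying negative weight.

        POINTS = {"female": 1, "age_over_70": 1, "white": 1, "bedridden": 2,
                  "claudication": 1, "nonfemoral_access": 2,
                  "insulin_dm": -1, "prior_minor_amputation": -1}  # protective factors

        def risk_band(patient_factors):
            score = sum(POINTS[f] for f in patient_factors)
            band = "low" if score <= 1 else "moderate" if score <= 3 else "high"
            return score, band

        print(risk_band({"female", "age_over_70", "nonfemoral_access"}))  # (4, 'high')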

  14. Determinants and development of a web-based child mortality prediction model in resource-limited settings: A data mining approach.

    Science.gov (United States)

    Tesfaye, Brook; Atique, Suleman; Elias, Noah; Dibaba, Legesse; Shabbir, Syed-Abdul; Kebede, Mihiretu

    2017-03-01

    Improving child health and reducing the child mortality rate are key health priorities in developing countries. This study aimed to identify determinants and develop a web-based child mortality prediction model in an Ethiopian local language using classification data mining algorithms. Decision tree (using the J48 algorithm) and rule induction (using the PART algorithm) techniques were applied to 11,654 records of Ethiopian demographic and health survey data. Waikato Environment for Knowledge Analysis (WEKA) for Windows version 3.6.8 was used to develop optimal models. 8,157 (70%) records were randomly allocated to the training group for model building, while the remaining 3,496 (30%) records were allocated as the test group for model validation. The validation of the model was assessed using accuracy, sensitivity, specificity and area under the Receiver Operating Characteristic (ROC) curve. Using Statistical Package for Social Sciences (SPSS) version 20.0, logistic regression and Odds Ratios (OR) with 95% Confidence Intervals (CI) were used to identify determinants of child mortality. The child mortality rate was 72 deaths per 1000 live births. Breast-feeding (AOR = 1.46, 95% CI [1.22, 1.75]), maternal education (AOR = 1.40, 95% CI [1.11, 1.81]), family planning (AOR = 1.21, [1.08, 1.43]), preceding birth interval (AOR = 4.90, [2.94, 8.15]), presence of diarrhea (AOR = 1.54, 95% CI [1.32, 1.66]), father's education (AOR = 1.4, 95% CI [1.04, 1.78]), low birth weight (AOR = 1.2, 95% CI [0.98, 1.51]) and age of the mother at first birth (AOR = 1.42, [1.01-1.89]) were found to be determinants of child mortality. The J48 model had better performance: accuracy (94.3%), sensitivity (93.8%), specificity (94.3%), Positive Predictive Value (PPV) (92.2%), Negative Predictive Value (NPV) (94.5%) and area under the ROC curve (94.8%). Subsequent to developing an optimal prediction model, we relied on this model to develop a web-based application system for child mortality prediction. In this study…
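
    The study used WEKA's J48 (a C4.5 implementation); as a rough stand-in, this sketch trains scikit-learn's CART decision tree on invented records with a few of the reported determinants, just to show the shape of the modelling step.

        from sklearn.tree import DecisionTreeClassifier

        # Columns: breastfeeding, maternal_education, short_birth_interval, diarrhea
        X = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 1]]
        y = [0, 1, 0, 1, 0]  # 1 = child death (invented labels)

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(clf.predict([[0, 0, 1, 1]]))  # -> [1]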

  15. Web-based collaborative decision support services for river runoff and flood risk prediction in the Oak Ridge Moraine Area, Canada

    Science.gov (United States)

    Wang, Lei; Cheng, Qiuming

    2006-10-01

    River runoff is highly related to precipitation events and land use characteristics. It is an important component of the hydrologic cycle because of its relationship to issues such as flooding and water quantity. The Oak Ridge Moraine (ORM) area, Southern Ontario, has always faced the impacts of extreme hydrological events. Floods not only affect the ORM's economic and social well-being, and particularly public safety, but also exacerbate major environmental problems. Flood prediction is complex, involving variable factors including climate conditions, basin attributes, land use/cover types and groundwater discharge. Applying a flood prediction model requires the efficient management of large spatial and temporal datasets, which involves data acquisition, storage and processing, as well as manipulation, reporting and display of results. The complexity of flood prediction makes it difficult for an individual organization to deal effectively with decision-making. Difficulty in linking data, analysis tools and models across organizations is one of the barriers to be overcome in developing an integrated river runoff and flood risk prediction system. It is therefore necessary to develop a standardized framework for Web-based Collaborative Decision Support Services (WCDSS), supporting information exchange and knowledge and model sharing from different organizations on the web. Such a WCDSS supplies metadata services, geo-data services and geo-processing services to help collaborative decision-making, supporting not only distributed data sharing and services, but also distributed model sharing and services. This paper develops a WCDSS that provides a comprehensive environment for online river runoff and flood risk prediction, integrating information retrieval, analysis and model analysis for information sharing and decision-making support. Such a system will improve understanding of the environmental, planning and management…

  16. Research on the Security Access Control Model of the Web-Based Database of Nonferrous Metal Physical & Chemical Properties

    Institute of Scientific and Technical Information of China (English)

    李尚勇; 谢刚; 俞小花; 周明

    2009-01-01

    According to the access characteristics of the Web-based database of nonferrous metal physical and chemical properties, the security issues present in the database's multi-tier application architecture, such as illegal intrusion, unauthorized access and information replay attacks, are analyzed. A security access control model adapted to the requirements of this software architecture is proposed, and its access performance and security are tested. The test results show that the access model offers good security and stable performance.

  17. Bioprocess-Engineering Education with Web Technology

    NARCIS (Netherlands)

    Sessink, O.

    2006-01-01

    Development of learning material that is distributed through and accessible via the World Wide Web. Various options from web technology are exploited to improve the quality and efficiency of learning material.

  18. Access/AML -

    Data.gov (United States)

    Department of Transportation — The AccessAML is a web-based internet single application designed to reduce the vulnerability associated with several accounts assigned to a single user. This is a...

  19. WEB CRAWLER APPLICATION FOR WEB CONTENT ON MOBILE PHONES

    Directory of Open Access Journals (Sweden)

    Sarwosri Sarwosri

    2009-01-01

    Full Text Available Crawling is the process behind a search engine: it traverses the World Wide Web in a structured manner and according to certain ethics. An application that runs the crawling process is called a Web Crawler, also called a web spider or web robot. The growth of mobile search service providers has been followed by the growth of web crawlers that can browse web pages of mobile content type. Only web pages of Mobile Content type, which can be accessed by mobile devices, are explored by this Web Crawler. The Web Crawler's duty is to collect Mobile Content. A mobile application functions as a search application that uses the results from the Web Crawler. The Web Crawler server consists of a Servlet, a Mobile Content Filter and a datastore. The Servlet is the gateway connection between the client and the server. The datastore is the storage medium for crawling results. The Mobile Content Filter selects web pages; only pages appropriate for mobile devices, i.e. those with mobile content, are forwarded.
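
    A minimal sketch of the filtering step, with a placeholder URL: fetch a page and keep it only if it looks like mobile content (WML or XHTML Mobile Profile), as signalled by the Content-Type header or the document type declaration.

        import requests

        MOBILE_TYPES = ("text/vnd.wap.wml", "application/vnd.wap.xhtml+xml")

        def is_mobile_content(url):
            r = requests.get(url, timeout=10)
            ctype = r.headers.get("Content-Type", "").lower()
            return (any(t in ctype for t in MOBILE_TYPES)
                    or "xhtml-mobile" in r.text[:500].lower())  # XHTML MP doctype marker

        print(is_mobile_content("http://example.org/m/index"))  # placeholder URL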

  20. Characteristics of scientific web publications

    DEFF Research Database (Denmark)

    Thorlund Jepsen, Erik; Seiden, Piet; Ingwersen, Peter Emil Rerup

    2004-01-01

    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the… AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.

  1. Dynamic Web Pages: Performance Impact on Web Servers.

    Science.gov (United States)

    Kothari, Bhupesh; Claypool, Mark

    2001-01-01

    Discussion of Web servers and requests for dynamic pages focuses on experimentally measuring and analyzing the performance of the three dynamic Web page generation technologies: CGI, FastCGI, and Servlets. Develops a multivariate linear regression model and predicts Web server performance under some typical dynamic requests. (Author/LRW)
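
    The paper's actual regression variables are not listed in this abstract, so the sketch below fits a multivariate linear model on invented request features, only to illustrate the modelling step of predicting server performance from request characteristics.

        from sklearn.linear_model import LinearRegression

        # Columns: payload_kb, db_queries, concurrent_clients (invented measurements)
        X = [[2, 0, 10], [5, 1, 10], [5, 3, 50], [10, 3, 100], [20, 5, 100]]
        y = [12, 25, 60, 110, 180]  # response time in ms (invented)

        model = LinearRegression().fit(X, y)
        print(model.predict([[8, 2, 50]]))  # predicted ms for a typical dynamic request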

  2. Concordance of HIV type 1 tropism phenotype to predictions using web-based analysis of V3 sequences: composite algorithms may be needed to properly assess viral tropism.

    Science.gov (United States)

    Cabral, Gabriela Bastos; Ferreira, João Leandro de Paula; Coelho, Luana Portes Osório; Fonsi, Mylva; Estevam, Denise Lotufo; Cavalcanti, Jaqueline Souza; Brígido, Luis Fernando de Macedo

    2012-07-01

    Genotypic prediction of HIV-1 tropism has been considered a practical surrogate for phenotypic tests, and recently a European Consensus has set up recommendations for its use in clinical practice. Twenty-five antiretroviral-experienced patients, all heavily treated cases with a median of 16 years of antiretroviral therapy, had viral tropism determined by the Trofile assay and predicted by HIV-1 sequencing of partial env, followed by interpretation using web-based tools. Trofile determined 17/24 (71%) as X4-tropic or dual/mixed viruses, with one nonreportable result. The use of the European consensus recommendations for single sequences (geno2pheno false-positive rate, 20% cutoff) would lead to 4/24 (16.7%) misclassifications, whereas a composite algorithm misclassified 1/24 (4%). The use of the geno2pheno clinical option, using CD4 T cell counts at collection, was useful in resolving some discrepancies. Applying the European recommendations followed by additional web-based tools for cases around the recommended cutoff would resolve most misclassifications.

  3. AcconPred: Predicting Solvent Accessibility and Contact Number Simultaneously by a Multitask Learning Framework under the Conditional Neural Fields Model

    Directory of Open Access Journals (Sweden)

    Jianzhu Ma

    2015-01-01

    Full Text Available Motivation. The solvent accessibility of protein residues is one of the driving forces of protein folding, while the contact number of protein residues limits the possibilities of protein conformations. The de novo prediction of these properties from protein sequence is important for the study of protein structure and function. Although these two properties are certainly related with each other, it is challenging to exploit this dependency for the prediction. Method. We present a method AcconPred for predicting solvent accessibility and contact number simultaneously, which is based on a shared-weight multitask learning framework under the CNF (conditional neural fields) model. The multitask learning framework on a collection of related tasks provides more accurate prediction than the framework trained only on a single task. The CNF method not only models the complex relationship between the input features and the predicted labels, but also exploits the interdependency among adjacent labels. Results. Trained on 5729 monomeric soluble globular protein datasets, AcconPred could reach 0.68 three-state accuracy for solvent accessibility and 0.75 correlation for contact number. Tested on the 105 CASP11 domain datasets for solvent accessibility, AcconPred could reach 0.64 accuracy, which outperforms existing methods.

  4. LECTINPred: web Server that Uses Complex Networks of Protein Structure for Prediction of Lectins with Potential Use as Cancer Biomarkers or in Parasite Vaccine Design.

    Science.gov (United States)

    Munteanu, Cristian R; Pedreira, Nieves; Dorado, Julián; Pazos, Alejandro; Pérez-Montoto, Lázaro G; Ubeira, Florencio M; González-Díaz, Humberto

    2014-04-01

    Lectins (Ls) play an important role in many diseases such as different types of cancer, parasitic infections and other diseases. Interestingly, the Protein Data Bank (PDB) contains more than 3000 protein 3D structures with unknown function. Thus, we can in principle discover new Ls by mining non-annotated structures from the PDB or other sources. However, there are no general models to predict new biologically relevant Ls based on 3D chemical structures. We used the MARCH-INSIDE software to calculate the Markov-Shannon 3D electrostatic entropy parameters for the complex networks of protein structure of 2200 different protein 3D structures, including 1200 Ls. We performed a Linear Discriminant Analysis (LDA) using these parameters as inputs in order to seek a new Quantitative Structure-Activity Relationship (QSAR) model able to discriminate the 3D structures of Ls from those of other proteins. We implemented this predictor in the web server named LECTINPred, freely available at http://bio-aims.udc.es/LECTINPred.php. This web server showed the following goodness-of-fit statistics: Sensitivity = 96.7% (for Ls), Specificity = 87.6% (non-active proteins), and Accuracy = 92.5% (for all proteins), considering both the training and external prediction series together. In mode 2, users can carry out an automatic retrieval of protein structures from the PDB. We illustrated the use of this server, in operation mode 1, by performing data mining of the PDB. We predicted L scores for more than 2000 proteins with unknown function and selected the top-scored ones as possible lectins. In operation mode 2, LECTINPred can also upload 3D structural models generated with structure-prediction tools like LOMETS or PHYRE2. The new Ls are expected to be of relevance as cancer biomarkers or useful in parasite vaccine design.

  5. A novel design of hidden web crawler using ontology

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    The Deep Web is content hidden behind HTML forms. Since it represents a large portion of the structured, unstructured and dynamic data on the Web, accessing Deep-Web content has been a long-standing challenge for the database community. This paper describes a crawler for accessing Deep-Web content using ontologies. Performance evaluation of the proposed work showed that this new approach has promising results.

  6. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile, and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used…

  7. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and at facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme-specific cooperation, of the Seventh Framework Programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent searches. It is an extension of the existing Web whose aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people in integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have a great effect on everyday life, since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level, since it redefines the cognitive universe of users and enables the sharing not only of information but of meaning (collective and connected intelligence).

  8. Chemistry WebBook

    Science.gov (United States)

    SRD 69 NIST Chemistry WebBook (Web, free access)   The NIST Chemistry WebBook contains: Thermochemical data for over 7000 organic and small inorganic compounds; thermochemistry data for over 8000 reactions; IR spectra for over 16,000 compounds; mass spectra for over 33,000 compounds; UV/Vis spectra for over 1600 compounds; electronic and vibrational spectra for over 5000 compounds; constants of diatomic molecules (spectroscopic data) for over 600 compounds; ion energetics data for over 16,000 compounds; thermophysical property data for 74 fluids.

  9. Head First Web Design

    CERN Document Server

    Watrall, Ethan

    2008-01-01

    Want to know how to make your pages look beautiful, communicate your message effectively, guide visitors through your website with ease, and get everything approved by the accessibility and usability police at the same time? Head First Web Design is your ticket to mastering all of these complex topics, and understanding what's really going on in the world of web design. Whether you're building a personal blog or a corporate website, there's a lot more to web design than divs and CSS selectors, but what do you really need to know? With this book, you'll learn the secrets of designing effective...

  10. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    Science.gov (United States)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and client-side interface have been extensively rewritten. The Python Twisted server-side code base has been fundamentally modified to present waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband @ 40 Hz to strong motion @ 200 Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to incorporate a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyberinfrastructure challenges we have faced while developing this application, the use cases currently in existence, and the limitations of web-based application development.
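
    A minimal sketch of the kind of JSON-over-HTTP query such an interface relies on (the endpoint path, parameter names, and response fields are hypothetical; the actual Waveform Server API is not documented in this record):

    ```python
    # Hypothetical sketch: fetch a waveform segment as JSON and apply the
    # calibration factor client-side to display SI-unit data, as described above.
    import requests

    BASE = "http://waveforms.example.edu/api"  # hypothetical endpoint

    params = {
        "sta": "ANMO",    # station code (illustrative)
        "chan": "BHZ",    # channel code (illustrative)
        "start": "2010-06-01T00:00:00Z",
        "end": "2010-06-01T00:10:00Z",
    }
    resp = requests.get(f"{BASE}/waveform", params=params, timeout=30)
    resp.raise_for_status()
    payload = resp.json()

    # Convert raw counts to SI units (e.g. m/s) using the returned calibration.
    calib = payload["calib"]
    samples_si = [count * calib for count in payload["samples"]]
    print(f"{len(samples_si)} samples at {payload['samprate']} Hz")
    ```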

  11. A new algorithm to create a profile for users of web site benefiting from web usage mining

    Directory of Open Access Journals (Sweden)

    masomeh khabazfazli

    2015-11-01

    Full Text Available As the Internet and its applications grow and the number of web pages increases, finding relevant information through search engines becomes difficult. Web page recommendation systems are used to address this problem. In this paper, the recommender engine is improved using web usage mining methods. In the recommendation system, clustering is used to classify users' behaviour: a usage-mining operation is applied to the data of each user to build that user's movement pattern. Web pages are then recommended using a neural network and a Markov model. The performance of the recommendation engine is thus improved by combining users' movement patterns with clustering, a neural network, and a Markov model, and it obtains better results than other methods. Two factors, accuracy and coverage, are used to evaluate the quality of data retrieval on the web.
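
    A minimal sketch of the Markov-model half of the recommender described above, assuming user sessions have already been extracted from the access log by the usage-mining step (the session data is illustrative; the clustering and neural-network components are not reproduced here):

    ```python
    # Hypothetical sketch: first-order Markov model for next-page prediction,
    # built from mined user sessions (sessions shown are illustrative).
    from collections import Counter, defaultdict

    sessions = [
        ["/home", "/products", "/cart"],
        ["/home", "/blog", "/products"],
        ["/home", "/products", "/products", "/cart"],
    ]

    # Count page-to-page transitions across all sessions.
    transitions = defaultdict(Counter)
    for session in sessions:
        for cur_page, next_page in zip(session, session[1:]):
            transitions[cur_page][next_page] += 1

    def recommend(page, k=2):
        """Return the k most likely next pages after `page`."""
        counts = transitions[page]
        total = sum(counts.values())
        return [(p, c / total) for p, c in counts.most_common(k)]

    print(recommend("/home"))      # e.g. [('/products', 0.67), ('/blog', 0.33)]
    print(recommend("/products"))  # e.g. [('/cart', 0.67), ('/products', 0.33)]
    ```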

  12. Comprendre le Web caché

    OpenAIRE

    Senellart, Pierre

    2007-01-01

    The hidden Web (also known as deep or invisible Web), that is, the part of the Web not directly accessible through hyperlinks, but through HTML forms or Web services, is of great value, but difficult to exploit. We discuss a process for the fully automatic discovery, syntactic and semantic analysis, and querying of hidden-Web services. We propose first a general architecture that relies on a semi-structured warehouse of imprecise (probabilistic) content. We provide a detailed complexity analy...

  13. Quantum Isostere Database: a web-based tool using quantum chemical topology to predict bioisosteric replacements for drug design.

    Science.gov (United States)

    Devereux, Mike; Popelier, Paul L A; McLay, Iain M

    2009-06-01

    This paper introduces the 'Quantum Isostere Database' (QID), a Web-based tool designed to find bioisosteric fragment replacements for lead optimization using stored ab initio data. A wide range of original geometric, electronic, and calculated physical properties are stored for each fragment. Physical descriptors with clear meaning are chosen, such as distribution of electrostatic potential energy values across a fragment surface and geometric parameters to describe fragment conformation and shape from ab initio structures. Further fundamental physical properties are linked to broader chemical characteristics relevant to biological activity, such as H-bond donor and acceptor strengths. Additional properties with less easily interpretable links to biological activity are also stored to allow future development of QSAR/QSPR models for quantities such as pKa and solubility. Conformational dependence of the ab initio descriptors is explicitly dealt with by storing properties for a variety of low-energy conformers of each fragment. Capping groups are used in ab initio calculations to represent different chemical environments, based on background research into transferability of electronic descriptors [J. Comput. Chem. 2009, 30, 1300-1318]. The resulting database has a Web interface that allows medicinal chemists to enter a query fragment, select important chemical features, and retrieve a list of suggested replacements with similar chemical characteristics. Examples of known bioisosteric replacements correctly identified by the QID tool are given.
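
    A minimal sketch of the retrieval step such a tool implies: ranking stored fragments by the similarity of their descriptor vectors to a query fragment (the fragment names, descriptors, and values below are hypothetical; the QID's actual schema and similarity measure are not given in this record):

    ```python
    # Hypothetical sketch: rank stored fragments by descriptor similarity to a
    # query fragment, mimicking the "suggest replacements" step described above.
    import numpy as np

    # Assumed store: fragment name -> descriptor vector (e.g. surface potential
    # statistics, H-bond donor/acceptor strengths); values are illustrative.
    fragments = {
        "amide":       np.array([0.82, 0.35, 1.90]),
        "ester":       np.array([0.75, 0.10, 1.70]),
        "sulfonamide": np.array([0.88, 0.40, 2.10]),
    }

    def suggest_replacements(query, k=2):
        """Return the k fragments whose descriptors are closest to the query."""
        dists = {name: np.linalg.norm(vec - query)
                 for name, vec in fragments.items()}
        return sorted(dists, key=dists.get)[:k]

    query_vec = np.array([0.80, 0.30, 1.85])  # descriptors of the query fragment
    print(suggest_replacements(query_vec))    # e.g. ['amide', 'ester']
    ```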

  14. Quality of service prediction of multi-agent web service integration system based on gray neural network%基于灰色神经网络的多Agent服务集成系统服务质量预测

    Institute of Scientific and Technical Information of China (English)

    张淼淼; 李决龙; 邢建春; 杨启亮

    2013-01-01

    With the emergence of a large number of web services on the network, quality of service (QoS) has become more and more significant in web service selection. QoS prediction is not only an important means of web service selection but also significant for the entire service-composition process. Existing work divides QoS prediction into static and dynamic prediction, and existing methods into recommendation-based, reasoning-based, and artificial-intelligence-based prediction algorithms. However, most methods address only the static QoS prediction problem of web service selection: they lack support for QoS management, lack a dynamic prediction model, and leave prediction accuracy to be improved. A software agent is an independent computational entity in distributed and cooperative systems, characterized by autonomy, interactivity, reactivity, and initiative. To perform dynamic QoS prediction and make up for the shortcomings of current web service integration models, we propose a QoS-oriented multi-agent web service integration model (QOMAWSIM), which adds a service agent and a quality agent to the traditional web service layer. QOMAWSIM comprises an application layer, a web service layer, an intelligent agent layer, and a user layer. The model provides QoS management operations such as web service QoS validation, QoS negotiation, and QoS monitoring, which makes dynamic QoS prediction and dynamic service selection easier to realize. Based on QOMAWSIM, the paper then proposes a new method for web service QoS prediction using a grey neural network, which combines the accumulated-generation advantage of grey forecasting with the intelligent-processing characteristics of neural networks. Finally, MATLAB is used to build the model of Qo...
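
    A minimal sketch of the grey-forecasting half of such a hybrid: a standard GM(1,1) model fitted to a short QoS series (the response-time values are illustrative, and the neural-network correction the paper couples to the grey model is not reproduced here):

    ```python
    # Hypothetical sketch: GM(1,1) grey forecasting of a web service QoS series
    # (e.g. response times in ms); the data values are illustrative.
    import numpy as np

    x0 = np.array([220.0, 235.0, 228.0, 241.0, 250.0])  # observed QoS series

    # 1-AGO: the accumulated generating operation smooths the raw series.
    x1 = np.cumsum(x0)

    # Background values: means of consecutive accumulated points.
    z1 = 0.5 * (x1[1:] + x1[:-1])

    # Least-squares estimate of the development coefficient a and grey input b
    # from the whitened equation x0(k) = -a * z1(k) + b, k = 2..n.
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def predict(k):
        """Predicted original-series value at 1-based time step k >= 2."""
        x1_hat = lambda t: (x0[0] - b / a) * np.exp(-a * (t - 1)) + b / a
        return x1_hat(k) - x1_hat(k - 1)

    print(f"next value forecast: {predict(len(x0) + 1):.1f}")
    ```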

  15. Hidden Page WebCrawler Model for Secure Web Pages

    Directory of Open Access Journals (Sweden)

    K. F. Bharati

    2013-03-01

    Full Text Available The traditional search engines available over the Internet are dynamic in searching relevant content over the web. A search engine has constraints, such as retrieving the requested data from varied sources where data relevancy is exceptional. Traditional web crawlers are designed to move only along specific paths of the web and are restricted from moving towards other paths, because those are secured or otherwise restricted out of apprehension of threats. It is possible to design a web crawler capable of penetrating the paths of the web not reachable by traditional web crawlers, in order to obtain a better solution in terms of data, time, and relevancy for a given search query. The paper makes use of a newer parser and indexer to arrive at a novel web crawler and a framework to support it. The proposed web crawler is designed to access Hypertext Transfer Protocol Secure (HTTPS) based websites and web pages that need authentication to view and index. The user fills in a search form, and his/her credentials are used by the web crawler to authenticate against the secure web server. Once indexed, the secure web server lies inside the web crawler's accessible zone.
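
    A minimal sketch of the authenticated-fetch step such a crawler needs: a session logs in with the user-supplied credentials, then requests and parses pages visible only to authenticated users (the login URL and form field names are hypothetical):

    ```python
    # Hypothetical sketch: authenticate against an HTTPS site, then fetch and
    # parse pages that are only visible to logged-in users.
    import requests
    from html.parser import HTMLParser

    session = requests.Session()

    # Log in once; the session cookie authorizes subsequent requests.
    login = session.post(
        "https://secure.example.com/login",          # hypothetical endpoint
        data={"username": "alice", "password": "…"}, # supplied by the user
        timeout=30,
    )
    login.raise_for_status()

    class LinkExtractor(HTMLParser):
        """Collect href targets so the crawl frontier can be extended."""
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links += [v for k, v in attrs if k == "href" and v]

    page = session.get("https://secure.example.com/members/index.html",
                       timeout=30)
    extractor = LinkExtractor()
    extractor.feed(page.text)
    print(extractor.links)  # candidate URLs for the next crawl round
    ```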

  16. Construction of Community Web Directories based on Web usage Data

    CERN Document Server

    Sandhyarani, Ramancha; Gyani, Jayadev; 10.5121/acij.2012.3205

    2012-01-01

    This paper supports the concept of a community Web directory: a Web directory constructed according to the needs and interests of particular user communities. Furthermore, it presents a complete method for the construction of such directories using web usage data. User community models take the form of thematic hierarchies and are constructed by employing a clustering approach. We applied our methodology to the ODP directory and also to an artificial Web directory generated by clustering Web pages that appear in the access log of an Internet Service Provider. For the discovery of the community models, we introduce a new criterion that combines the a priori thematic informativeness of the Web directory categories with the level of interest observed in the usage data. In this context, we introduce and evaluate a new clustering method. We tested the methodology using access log files collected from the proxy servers of an Internet Service Provider and provide results that ind...

  17. The design and implementation of web mining in web sites security

    Institute of Scientific and Technical Information of China (English)

    ZHANG Guo-yin; GU Guo-chang; LI Jian-li

    2003-01-01

    Backdoors and information leaks in Web servers can be detected by applying Web mining techniques to abnormal Web log and Web application log data. The security of Web servers can thereby be enhanced and the damage of illegal access avoided. Firstly, a system was proposed for discovering the patterns of information leakage in CGI scripts from Web log data. Secondly, those patterns were provided to system administrators so they can modify their code and enhance Web site security. The following aspects are described. One is to combine the web application log with the web log to extract more information, so that web data mining can discover in the web log information that a firewall and an intrusion detection system cannot find. Another is to propose an operation module for the web site to enhance its security. In the cluster server session, a density-based clustering technique is used to reduce resource cost and obtain better efficiency.
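
    A minimal sketch of the density-based clustering step mentioned above, applied to simple per-session features extracted from a web log (DBSCAN stands in for the unnamed density-based technique; the features and values are illustrative):

    ```python
    # Hypothetical sketch: cluster per-session web-log features with DBSCAN;
    # points labelled -1 (noise) are candidates for abnormal-access review.
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Assumed features per session: [requests per minute, error-response ratio];
    # the values are illustrative, not taken from the paper.
    X = np.array([
        [12, 0.01], [10, 0.02], [11, 0.00],  # normal browsing
        [13, 0.03], [ 9, 0.01],
        [95, 0.40],                          # burst of failing requests
    ])

    labels = DBSCAN(eps=3.0, min_samples=2).fit_predict(X)
    for features, label in zip(X, labels):
        tag = "NOISE/suspect" if label == -1 else f"cluster {label}"
        print(features, "->", tag)
    ```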

  18. Persistent Web References – Best Practices and New Suggestions

    DEFF Research Database (Denmark)

    Zierau, Eld; Nyvang, Caroline; Kromann, Thomas Hvid

    In this paper, we suggest adjustments to best practices for persistent web referencing: adjustments that aim at preservation and long-term accessibility of web-referenced resources in general, but with a focus on web references in web archives. Web referencing is highly relevant and crucial...

  19. Design and Implementation of Domain based Semantic Hidden Web Crawler

    OpenAIRE

    Manvi; Bhatia, Komal Kumar; Dixit, Ashutosh

    2015-01-01

    The Web mainly consists of the surface web and the hidden web. One can easily access the surface web using traditional web crawlers, but they are not able to crawl the hidden portion of the web. These traditional crawlers retrieve contents from web pages linked by hyperlinks, ignoring the information hidden behind form pages, which cannot be extracted using the simple hyperlink structure alone. Thus, they ignore the large amount of data hidden behind search forms. This paper emphasizes o...

  20. Professional Access 2013 programming

    CERN Document Server

    Hennig, Teresa; Hepworth, George; Yudovich, Dagi (Doug)

    2013-01-01

    Authoritative and comprehensive coverage for building Access 2013 Solutions Access, the most popular database system in the world, just opened a new frontier in the Cloud. Access 2013 provides significant new features for building robust line-of-business solutions for web, client and integrated environments.  This book was written by a team of Microsoft Access MVPs, with consulting and editing by Access experts, MVPs and members of the Microsoft Access team. It gives you the information and examples to expand your areas of expertise and immediately start to develop and upgrade projects. Exp